Codebase list python-werkzeug / e60525f
New upstream version 0.15.4+dfsg1 (Ondřej Nový)
313 changed file(s) with 33655 addition(s) and 30576 deletion(s).
00 environment:
11 global:
2 TOXENV: "py"
2 TOXENV: py,codecov
33
44 matrix:
5 - PYTHON: "C:\\Python27"
6 - PYTHON: "C:\\Python36"
5 - PYTHON: C:\Python37-x64
6 - PYTHON: C:\Python27-x64
7
8 init:
9 - SET PATH=%PYTHON%;%PATH%
710
811 install:
9 - "%PYTHON%\\python.exe -m pip install -U pip setuptools wheel tox"
12 - python -m pip install -U tox
1013
1114 build: false
1215
1316 test_script:
14 - "%PYTHON%\\python.exe -m tox"
17 - python -m tox --skip-missing-interpreters false
18
19 branches:
20 only:
21 - master
22 - /^\d+(\.\d+)*(\.x)?$/
23
24 cache:
25 - '%LOCALAPPDATA%\pip\Cache'
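The `branches: only:` entries above use a regex to select release branches such as `0.15.x` (with `master` listed separately). The pattern can be sanity-checked with Python's `re` module; the regex is copied from the config, while the script itself is only illustrative:

```python
import re

# Same pattern as the `branches: only:` entry, without the
# surrounding /.../ delimiters used by AppVeyor and Travis.
VERSION_BRANCH = re.compile(r"^\d+(\.\d+)*(\.x)?$")

for name in ["0.15.x", "0.15.4", "1.0", "master", "feature/foo"]:
    print(name, bool(VERSION_BRANCH.match(name)))
```

Only dotted version names (optionally ending in `.x`) match; `master` is covered by its own literal entry in the config.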
+0
-11
.coveragerc
0 [run]
1 branch = True
2 source =
3 werkzeug
4 tests
5
6 [paths]
7 source =
8 werkzeug
9 .tox/*/lib/python*/site-packages/werkzeug
10 .tox/pypy/site-packages/werkzeug
0 root = true
1
2 [*]
3 indent_style = space
4 indent_size = 4
5 insert_final_newline = true
6 trim_trailing_whitespace = true
7 end_of_line = lf
8 charset = utf-8
9 max_line_length = 88
10
11 [*.{yml,yaml,json,js,css,html}]
12 indent_size = 2
0 tests/res/chunked.txt binary
1
0 tests/**/*.http binary
00 MANIFEST
11 build
22 dist
3 *.egg-info
3 /src/Werkzeug.egg-info
44 *.pyc
55 *.pyo
66 env
1818 .hypothesis
1919 test_uwsgi_failed
2020 .idea
21 .pytest_cache/
0 repos:
1 - repo: https://github.com/asottile/reorder_python_imports
2 rev: v1.4.0
3 hooks:
4 - id: reorder-python-imports
5 name: Reorder Python imports (src, tests)
6 files: "^(?!examples/)"
7 args: ["--application-directories", ".:src"]
8 - id: reorder-python-imports
9 name: Reorder Python imports (examples)
10 files: "^examples/"
11 args: ["--application-directories", "examples"]
12 - repo: https://github.com/ambv/black
13 rev: 18.9b0
14 hooks:
15 - id: black
16 - repo: https://gitlab.com/pycqa/flake8
17 rev: 3.7.7
18 hooks:
19 - id: flake8
20 additional_dependencies: [flake8-bugbear]
21 - repo: https://github.com/pre-commit/pre-commit-hooks
22 rev: v2.1.0
23 hooks:
24 - id: check-byte-order-marker
25 - id: trailing-whitespace
26 - id: end-of-file-fixer
27 exclude: "^tests/.*.http$"
00 os: linux
1 sudo: false
1 dist: xenial
22 language: python
3 python:
4 - "3.7"
5 - "3.6"
6 - "3.5"
7 - "3.4"
8 - "2.7"
9 - "nightly"
10 - "pypy3.5-6.0"
11 env: TOXENV=py,codecov
312
413 matrix:
514 include:
6 - python: 3.6
7 env: TOXENV=hypothesis-uwsgi,codecov,stylecheck,docs-html
8 - python: 3.5
9 env: TOXENV=py,codecov
10 - python: 3.4
11 env: TOXENV=py,codecov
12 - python: 2.7
13 env: TOXENV=py,codecov
14 - python: pypy
15 env: TOXENV=py,codecov
16 - python: nightly
17 env: TOXENV=py
15 - env: TOXENV=stylecheck,docs-html
1816 - os: osx
1917 language: generic
20 env: TOXENV=py
18 env: TOXENV=py3,codecov
19 cache:
20 directories:
21 - $HOME/Library/Caches/Homebrew
22 - $HOME/Library/Caches/pip
2123 allow_failures:
24 - python: nightly
25 - python: pypy3.5-6.0
2226 - os: osx
23 language: generic
24 env: TOXENV=py
2527 fast_finish: true
2628
27 before_install:
28 - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
29 brew update;
30 brew install python3 redis memcached;
31 virtualenv -p python3 ~/py-env;
32 . ~/py-env/bin/activate;
33 fi
34 # Travis uses an outdated PyPy, this installs a more recent one.
35 - if [[ "$TRAVIS_PYTHON_VERSION" == "pypy" ]]; then
36 git clone https://github.com/pyenv/pyenv.git ~/.pyenv;
37 PYENV_ROOT="$HOME/.pyenv";
38 PATH="$PYENV_ROOT/bin:$PATH";
39 eval "$(pyenv init -)";
40 pyenv install pypy2.7-5.8.0;
41 pyenv global pypy2.7-5.8.0;
42 fi
43 - if [[ "$TRAVIS_PYTHON_VERSION" == "pypy3" ]]; then
44 git clone https://github.com/pyenv/pyenv.git ~/.pyenv;
45 PYENV_ROOT="$HOME/.pyenv";
46 PATH="$PYENV_ROOT/bin:$PATH";
47 eval "$(pyenv init -)";
48 pyenv install pypy3.5-5.8.0;
49 pyenv global pypy3.5-5.8.0;
50 fi
51
5229 install:
53 - pip install tox
30 - pip install -U tox
5431
5532 script:
56 - tox
33 - tox --skip-missing-interpreters false
5734
5835 cache:
59 - pip
36 directories:
37 - $HOME/.cache/pip
38 - $HOME/.cache/pre-commit
6039
6140 branches:
6241 only:
6342 - master
64 - /^.*-maintenance$/
43 - /^\d+(\.\d+)*(\.x)?$/
6544
6645 notifications:
6746 email: false
+0
-65
AUTHORS
0 Werkzeug is developed and maintained by the Pallets team and community
1 contributors. It was created by Armin Ronacher. The core maintainers
2 are:
3
4 - Armin Ronacher (mitsuhiko)
5 - Markus Unterwaditzer (untitaker)
6 - Adrian Mönnich (ThiefMaster)
7 - David Lord (davidism)
8
9 A full list of contributors is available from the git history; they include:
10
11 - Georg Brandl
12 - Leif K-Brooks <eurleif@gmail.com>
13 - Thomas Johansson
14 - Marian Sigler
15 - Ronny Pfannschmidt
16 - Noah Slater <nslater@tumbolia.org>
17 - Alec Thomas
18 - Shannon Behrens
19 - Christoph Rauch
20 - Clemens Hermann
21 - Jason Kirtland
22 - Ali Afshar
23 - Christopher Grebs <cg@webshox.org>
24 - Sean Cazzell <seancazzell@gmail.com>
25 - Florent Xicluna
26 - Kyle Dawkins
27 - Pedro Algarvio
28 - Zahari Petkov
29 - Ludvig Ericson
30 - Kenneth Reitz
31 - Daniel Neuhäuser
32 - Markus Unterwaditzer
33 - Joe Esposito <joe@joeyespo.com>
34 - Abhinav Upadhyay <er.abhinav.upadhyay@gmail.com>
35 - immerrr <immerrr@gmail.com>
36 - Cédric Krier
37 - Phil Jones
38 - Michael Hunsinger
39 - Lars Holm Nielsen
40 - Joël Charles
41 - Benjamin Dopplinger
42 - Nils Steinger
43 - Mark Szymanski
44 - Andrew Bednar
45 - Craig Blaszczyk
46 - Felix König
47
48 The SSL parts of the Werkzeug development server are partially taken
49 from Paste. The same is true for the range support which comes from
50 WebOb, a Paste project. The original code is MIT licensed and largely
51 compatible with the BSD 3-clause license. The following copyrights
52 apply:
53
54 - (c) 2005 Ian Bicking and contributors
55 - (c) 2005 Clark C. Evans
56
57 The rename() function from the posixemulation was taken almost
58 unmodified from the Trac project's utility module. The original code is
59 BSD licensed with the following copyrights from that module:
60
61 - (c) 2003-2009 Edgewall Software
62 - (c) 2003-2006 Jonas Borgström <jonas@edgewall.com>
63 - (c) 2006 Matthew Good <trac@matt-good.net>
64 - (c) 2005-2006 Christian Boos <cboos@neuf.fr>
0 Werkzeug Changelog
1 ==================
0 .. currentmodule:: werkzeug
1
2 Version 0.15.4
3 --------------
4
5 Released 2019-05-14
6
7 - Fix a ``SyntaxError`` on Python 2.7.5. (:issue:`1544`)
8
9
10 Version 0.15.3
11 --------------
12
13 Released 2019-05-14
14
15 - Properly handle multi-line header folding in development server in
16 Python 2.7. (:issue:`1080`)
17 - Restore the ``response`` argument to :exc:`~exceptions.Unauthorized`.
18 (:pr:`1527`)
19 - :exc:`~exceptions.Unauthorized` doesn't add the ``WWW-Authenticate``
20 header if ``www_authenticate`` is not given. (:issue:`1516`)
21 - The default URL converter correctly encodes bytes to string rather
22 than representing them with ``b''``. (:issue:`1502`)
23 - Fix the filename format string in
24 :class:`~middleware.profiler.ProfilerMiddleware` to correctly handle
25 float values. (:issue:`1511`)
26 - Update :class:`~middleware.lint.LintMiddleware` to work on Python 3.
27 (:issue:`1510`)
28 - The debugger detects cycles in chained exceptions and does not time
29 out in that case. (:issue:`1536`)
30 - When running the development server in Docker, the debugger security
31 pin is now unique per container.
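The ``www_authenticate`` fix above means a 401 response carries a ``WWW-Authenticate`` header only when a challenge is actually supplied. A small hypothetical helper (not Werkzeug's implementation) illustrates that conditional behavior:

```python
def unauthorized_headers(www_authenticate=None):
    """Build headers for a 401 response. Illustrative sketch of the
    behavior described above, not Werkzeug's actual code."""
    headers = [("Content-Type", "text/html")]
    if www_authenticate is not None:
        # Only add the challenge header when one was given.
        headers.append(("WWW-Authenticate", www_authenticate))
    return headers

print(unauthorized_headers())
print(unauthorized_headers('Basic realm="login"'))
```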
32
33
34 Version 0.15.2
35 --------------
36
37 Released 2019-04-02
38
39 - ``Rule`` code generation uses a filename that coverage will ignore.
40 The previous value, "generated", was causing coverage to fail.
41 (:issue:`1487`)
42 - The test client removes the cookie header if there are no persisted
43 cookies. This fixes an issue introduced in 0.15.0 where the cookies
44 from the original request were used for redirects, causing functions
45 such as logout to fail. (:issue:`1491`)
46 - The test client copies the environ before passing it to the app, to
47 prevent in-place modifications from affecting redirect requests.
48 (:issue:`1498`)
49 - The ``"werkzeug"`` logger only adds a handler if there is no handler
50 configured for its level in the logging chain. This avoids double
51 logging if other code configures logging first. (:issue:`1492`)
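The logger guard described above can be pictured with the standard library's ``logging`` module. This is an assumed simplification using ``Logger.hasHandlers()`` (the real Werkzeug check inspects handlers for the logger's effective level), shown only to convey the idea of not double-configuring:

```python
import logging

def add_handler_if_unconfigured(logger_name="werkzeug"):
    """Attach a default handler only when nothing in the logging
    chain would already handle records. Sketch of the guard
    described above; not the actual Werkzeug source."""
    logger = logging.getLogger(logger_name)
    if not logger.hasHandlers():
        logger.addHandler(logging.StreamHandler())
    return logger
```

Calling this twice never stacks a second handler, which is what prevents the double-logging issue.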
52
53
54 Version 0.15.1
55 --------------
56
57 Released 2019-03-21
58
59 - :exc:`~exceptions.Unauthorized` takes ``description`` as the first
60 argument, restoring previous behavior. The new ``www_authenticate``
61 argument is listed second. (:issue:`1483`)
62
63
64 Version 0.15.0
65 --------------
66
67 Released 2019-03-19
68
69 - Building URLs is ~7x faster. Each :class:`~routing.Rule` compiles
70 an optimized function for building itself. (:pr:`1281`)
71 - :meth:`MapAdapter.build() <routing.MapAdapter.build>` can be passed
72 a :class:`~datastructures.MultiDict` to represent multiple values
73 for a key. It already did this when passing a dict with a list
74 value. (:pr:`724`)
75 - ``path_info`` defaults to ``'/'`` for
76 :meth:`Map.bind() <routing.Map.bind>`. (:issue:`740`, :pr:`768`,
77 :pr:`1316`)
78 - Change ``RequestRedirect`` code from 301 to 308, preserving the verb
79 and request body (form data) during redirect. (:pr:`1342`)
80 - ``int`` and ``float`` converters in URL rules will handle negative
81 values if passed the ``signed=True`` parameter. For example,
82 ``/jump/<int(signed=True):count>``. (:pr:`1355`)
83 - ``Location`` autocorrection in :func:`Response.get_wsgi_headers()
84 <wrappers.BaseResponse.get_wsgi_headers>` is relative to the current
85 path rather than the root path. (:issue:`693`, :pr:`718`,
86 :pr:`1315`)
87 - 412 responses once again include entity headers and an error message
88 in the body. They were originally omitted when implementing
89 ``If-Match`` (:pr:`1233`), but the spec doesn't seem to disallow it.
90 (:issue:`1231`, :pr:`1255`)
91 - The Content-Length header is removed for 1xx and 204 responses. This
92 fixes a previous change where no body would be sent, but the header
93 would still be present. The new behavior matches RFC 7230.
94 (:pr:`1294`)
95 - :class:`~exceptions.Unauthorized` takes a ``www_authenticate``
96 parameter to set the ``WWW-Authenticate`` header for the response,
97 which is technically required for a valid 401 response.
98 (:issue:`772`, :pr:`795`)
99 - Add support for status code 424 :exc:`~exceptions.FailedDependency`.
100 (:pr:`1358`)
101 - :func:`http.parse_cookie` ignores empty segments rather than
102 producing a cookie with no key or value. (:issue:`1245`, :pr:`1301`)
103 - :func:`~http.parse_authorization_header` (and
104 :class:`~datastructures.Authorization`,
105 :attr:`~wrappers.Request.authorization`) treats the authorization
106 header as UTF-8. On Python 2, basic auth username and password are
107 ``unicode``. (:pr:`1325`)
108 - :func:`~http.parse_options_header` understands :rfc:`2231` parameter
109 continuations. (:pr:`1417`)
110 - :func:`~urls.uri_to_iri` does not unquote ASCII characters in the
111 unreserved class, such as space, and leaves invalid bytes quoted
112 when decoding. :func:`~urls.iri_to_uri` does not quote reserved
113 characters. See :rfc:`3987` for these character classes.
114 (:pr:`1433`)
115 - ``get_content_type`` appends a charset for any mimetype that ends
116 with ``+xml``, not just those that start with ``application/``.
117 Known text types such as ``application/javascript`` are also given
118 charsets. (:pr:`1439`)
119 - Clean up ``werkzeug.security`` module, remove outdated hashlib
120 support. (:pr:`1282`)
121 - In :func:`~security.generate_password_hash`, PBKDF2 uses 150000
122 iterations by default, increased from 50000. (:pr:`1377`)
123 - :class:`~wsgi.ClosingIterator` calls ``close`` on the wrapped
124 *iterable*, not the internal iterator. This doesn't affect objects
125 where ``__iter__`` returned ``self``. For other objects, the method
126 was not called before. (:issue:`1259`, :pr:`1260`)
127 - Bytes may be used as keys in :class:`~datastructures.Headers`, they
128 will be decoded as Latin-1 like values are. (:pr:`1346`)
129 - :class:`~datastructures.Range` validates that list of range tuples
130 passed to it would produce a valid ``Range`` header. (:pr:`1412`)
131 - :class:`~datastructures.FileStorage` looks up attributes on
132 ``stream._file`` if they don't exist on ``stream``, working around
133 an issue where :func:`tempfile.SpooledTemporaryFile` didn't
134 implement all of :class:`io.IOBase`. See
135 https://github.com/python/cpython/pull/3249. (:pr:`1409`)
136 - :class:`CombinedMultiDict.copy() <datastructures.CombinedMultiDict>`
137 returns a shallow mutable copy as a
138 :class:`~datastructures.MultiDict`. The copy no longer reflects
139 changes to the combined dicts, but is more generally useful.
140 (:pr:`1420`)
141 - The version of jQuery used by the debugger is updated to 3.3.1.
142 (:pr:`1390`)
143 - The debugger correctly renders long ``markupsafe.Markup`` instances.
144 (:pr:`1393`)
145 - The debugger can serve resources when Werkzeug is installed as a
146 zip file. ``DebuggedApplication.get_resource`` uses
147 ``pkgutil.get_data``. (:pr:`1401`)
148 - The debugger and server log support Python 3's chained exceptions.
149 (:pr:`1396`)
150 - The interactive debugger highlights frames that come from user code
151 to make them easy to pick out in a long stack trace. Note that if an
152 env was created with virtualenv instead of venv, the debugger may
153 incorrectly classify some frames. (:pr:`1421`)
154 - Clicking the error message at the top of the interactive debugger
155 will jump down to the bottom of the traceback. (:pr:`1422`)
156 - When generating a PIN, the debugger will ignore a ``KeyError``
157 raised when the current UID doesn't have an associated username,
158 which can happen in Docker. (:issue:`1471`)
159 - :class:`~exceptions.BadRequestKeyError` adds the ``KeyError``
160 message to the description, making it clearer what caused the 400
161 error. Frameworks like Flask can omit this information in production
162 by setting ``e.args = ()``. (:pr:`1395`)
163 - If a nested ``ImportError`` occurs from :func:`~utils.import_string`
164 the traceback mentions the nested import. Removes an untested code
165 path for handling "modules not yet set up by the parent."
166 (:pr:`735`)
167 - Triggering a reload while using a tool such as PDB no longer hides
168 input. (:pr:`1318`)
169 - The reloader will not prepend the Python executable to the command
170 line if the Python file is marked executable. This allows the
171 reloader to work on NixOS. (:pr:`1242`)
172 - Fix an issue where ``sys.path`` would change between reloads when
173 running with ``python -m app``. The reloader can detect that a
174 module was run with "-m" and reconstructs that instead of the file
175 path in ``sys.argv`` when reloading. (:pr:`1416`)
176 - The dev server can bind to a Unix socket by passing a hostname like
177 ``unix://app.socket``. (:pr:`209`, :pr:`1019`)
178 - Server uses ``IPPROTO_TCP`` constant instead of ``SOL_TCP`` for
179 Jython compatibility. (:pr:`1375`)
180 - When using an adhoc SSL cert with :func:`~serving.run_simple`, the
181 cert is shown as self-signed rather than signed by an invalid
182 authority. (:pr:`1430`)
183 - The development server logs the unquoted IRI rather than the raw
184 request line, to make it easier to work with Unicode in request
185 paths during development. (:issue:`1115`)
186 - The development server recognizes ``ConnectionError`` on Python 3 to
187 silence client disconnects, and does not silence other ``OSErrors``
188 that may have been raised inside the application. (:pr:`1418`)
189 - The environ keys ``REQUEST_URI`` and ``RAW_URI`` contain the raw
190 path before it was percent-decoded. This is non-standard, but many
191 WSGI servers add them. Middleware could replace ``PATH_INFO`` with
192 this to route based on the raw value. (:pr:`1419`)
193 - :class:`~test.EnvironBuilder` doesn't set ``CONTENT_TYPE`` or
194 ``CONTENT_LENGTH`` in the environ if they aren't set. Previously
195 these used default values if they weren't set. Now it's possible to
196 distinguish between empty and unset values. (:pr:`1308`)
197 - The test client raises a ``ValueError`` if a query string argument
198 would overwrite a query string in the path. (:pr:`1338`)
199 - :class:`test.EnvironBuilder` and :class:`test.Client` take a
200 ``json`` argument instead of manually passing ``data`` and
201 ``content_type``. This is serialized using the
202 :meth:`test.EnvironBuilder.json_dumps` method. (:pr:`1404`)
203 - :class:`test.Client` redirect handling is rewritten. (:pr:`1402`)
204
205 - The redirect environ is copied from the initial request environ.
206 - Script root and path are correctly distinguished when
207 redirecting to a path under the root.
208 - The HEAD method is not changed to GET.
209 - 307 and 308 codes preserve the method and body. All others
210 ignore the body and related headers.
211 - Headers are passed to the new request for all codes, following
212 what browsers do.
213 - :class:`test.EnvironBuilder` sets the content type and length
214 headers in addition to the WSGI keys when detecting them from
215 the data.
216 - Intermediate response bodies are iterated over even when
217 ``buffered=False`` to ensure iterator middleware can run cleanup
218 code safely. Only the last response is not buffered. (:pr:`988`)
219
220 - :class:`~test.EnvironBuilder`, :class:`~datastructures.FileStorage`,
221 and :func:`wsgi.get_input_stream` no longer share a global
222 ``_empty_stream`` instance. This improves test isolation by
223 preventing cases where closing the stream in one request would
224 affect other usages. (:pr:`1340`)
225 - The default :attr:`SecureCookie.serialization_method
226 <contrib.securecookie.SecureCookie.serialization_method>` will
227 change from :mod:`pickle` to :mod:`json` in 1.0. To upgrade existing
228 tokens, override :meth:`~contrib.securecookie.SecureCookie.unquote`
229 to try ``pickle`` if ``json`` fails. (:pr:`1413`)
230 - ``CGIRootFix`` no longer modifies ``PATH_INFO`` for very old
231 versions of Lighttpd. ``LighttpdCGIRootFix`` was renamed to
232 ``CGIRootFix`` in 0.9. Both are deprecated and will be removed in
233 version 1.0. (:pr:`1141`)
234 - :class:`werkzeug.wrappers.json.JSONMixin` has been replaced with
235 Flask's implementation. Check the docs for the full API.
236 (:pr:`1445`)
237 - The :doc:`contrib modules </contrib/index>` are deprecated and will
238 either be moved into ``werkzeug`` core or removed completely in
239 version 1.0. Some modules that already issued deprecation warnings
240 have been removed. Be sure to run or test your code with
241 ``python -W default::DeprecationWarning`` to catch any deprecated
242 code you're using. (:issue:`4`)
243
244 - ``LintMiddleware`` has moved to :mod:`werkzeug.middleware.lint`.
245 - ``ProfilerMiddleware`` has moved to
246 :mod:`werkzeug.middleware.profiler`.
247 - ``ProxyFix`` has moved to :mod:`werkzeug.middleware.proxy_fix`.
248 - ``JSONRequestMixin`` has moved to :mod:`werkzeug.wrappers.json`.
249 - ``cache`` has been extracted into a separate project,
250 `cachelib <https://github.com/pallets/cachelib>`_. The version
251 in Werkzeug is deprecated.
252 - ``securecookie`` and ``sessions`` have been extracted into a
253 separate project,
254 `secure-cookie <https://github.com/pallets/secure-cookie>`_. The
255 version in Werkzeug is deprecated.
256 - Everything in ``fixers``, except ``ProxyFix``, is deprecated.
257 - Everything in ``wrappers``, except ``JSONMixin``, is deprecated.
258 - ``atom`` is deprecated. This did not fit in with the rest of
259 Werkzeug, and is better served by a dedicated library in the
260 community.
261 - ``jsrouting`` is removed. Set URLs when rendering templates
262 or JSON responses instead.
263 - ``limiter`` is removed. Its specific use is handled by Werkzeug
264 directly, but stream limiting is better handled by the WSGI
265 server in general.
266 - ``testtools`` is removed. It did not offer significant benefit
267 over the default test client.
268 - ``iterio`` is deprecated.
269
270 - :func:`wsgi.get_host` no longer looks at ``X-Forwarded-For``. Use
271 :class:`~middleware.proxy_fix.ProxyFix` to handle that.
272 (:issue:`609`, :pr:`1303`)
273 - :class:`~middleware.proxy_fix.ProxyFix` is refactored to support
274 more headers, multiple values, and more secure configuration.
275
276 - Each header supports multiple values. The trusted number of
277 proxies is configured separately for each header. The
278 ``num_proxies`` argument is deprecated. (:pr:`1314`)
279 - Sets ``SERVER_NAME`` and ``SERVER_PORT`` based on
280 ``X-Forwarded-Host``. (:pr:`1314`)
281 - Sets ``SERVER_PORT`` and modifies ``HTTP_HOST`` based on
282 ``X-Forwarded-Port``. (:issue:`1023`, :pr:`1304`)
283 - Sets ``SCRIPT_NAME`` based on ``X-Forwarded-Prefix``.
284 (:issue:`1237`)
285 - The original WSGI environment values are stored in the
286 ``werkzeug.proxy_fix.orig`` key, a dict. The individual keys
287 ``werkzeug.proxy_fix.orig_remote_addr``,
288 ``werkzeug.proxy_fix.orig_wsgi_url_scheme``, and
289 ``werkzeug.proxy_fix.orig_http_host`` are deprecated.
290
291 - Middleware from ``werkzeug.wsgi`` has moved to separate modules
292 under ``werkzeug.middleware``, along with the middleware moved from
293 ``werkzeug.contrib``. The old ``werkzeug.wsgi`` imports are
294 deprecated and will be removed in version 1.0. (:pr:`1452`)
295
296 - ``werkzeug.wsgi.DispatcherMiddleware`` has moved to
297 :class:`werkzeug.middleware.dispatcher.DispatcherMiddleware`.
298 - ``werkzeug.wsgi.ProxyMiddleware`` has moved to
299 :class:`werkzeug.middleware.http_proxy.ProxyMiddleware`.
300 - ``werkzeug.wsgi.SharedDataMiddleware`` has moved to
301 :class:`werkzeug.middleware.shared_data.SharedDataMiddleware`.
302
303 - :class:`~middleware.http_proxy.ProxyMiddleware` proxies the query
304 string. (:pr:`1252`)
305 - The filenames generated by
306 :class:`~middleware.profiler.ProfilerMiddleware` can be customized.
307 (:issue:`1283`)
308 - The ``werkzeug.wrappers`` module has been converted to a package,
309 and its various classes have been organized into separate modules.
310 Any previously documented classes, understood to be the existing
311 public API, are still importable from ``werkzeug.wrappers``, or may
312 be imported from their specific modules. (:pr:`1456`)
313
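Among the 0.15.0 changes above, ``generate_password_hash`` raised the default PBKDF2 iteration count to 150000. The helper below is an illustrative stdlib sketch of the same scheme (salted PBKDF2-SHA256, serialized as ``pbkdf2:sha256:<iterations>$<salt>$<hash>``), not Werkzeug's own code:

```python
import hashlib
import os

ITERATIONS = 150000  # the new default noted in the changelog

def pbkdf2_hash(password, salt=None):
    """Derive a salted PBKDF2-SHA256 digest. Illustrative only;
    use werkzeug.security.generate_password_hash in practice."""
    salt = salt or os.urandom(8).hex()
    dk = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), ITERATIONS
    )
    return "pbkdf2:sha256:%d$%s$%s" % (ITERATIONS, salt, dk.hex())

print(pbkdf2_hash("secret"))
```

With a fixed salt the derivation is deterministic, which is how stored hashes can be verified later.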
2314
3315 Version 0.14.1
4316 --------------
58370
59371 - **Deprecate support for Python 2.6 and 3.3.** CI tests will not run
60372 for these versions, and support will be dropped completely in the next
61 version. (`pallets/meta#24`_)
62 - Raise ``TypeError`` when port is not an integer. (`#1088`_)
63 - Fully deprecate ``werkzeug.script``. Use `Click`_ instead. (`#1090`_)
373 version. (:issue:`pallets/meta#24`)
374 - Raise ``TypeError`` when port is not an integer. (:pr:`1088`)
375 - Fully deprecate ``werkzeug.script``. Use `Click`_ instead.
376 (:pr:`1090`)
64377 - ``response.age`` is parsed as a ``timedelta``. Previously, it was
65378 incorrectly treated as a ``datetime``. The header value is an integer
66 number of seconds, not a date string. (`#414`_)
379 number of seconds, not a date string. (:pr:`414`)
67380 - Fix a bug in ``TypeConversionDict`` where errors are not propagated
68 when using the converter. (`#1102`_)
381 when using the converter. (:issue:`1102`)
69382 - ``Authorization.qop`` is a string instead of a set, to comply with
70 RFC 2617. (`#984`_)
383 RFC 2617. (:pr:`984`)
71384 - An exception is raised when an encoded cookie is larger than, by
72385 default, 4093 bytes. Browsers may silently ignore cookies larger than
73386 this. ``BaseResponse`` has a new attribute ``max_cookie_size`` and
74387 ``dump_cookie`` has a new argument ``max_size`` to configure this.
75 (`#780`_, `#1109`_)
388 (:pr:`780`, :pr:`1109`)
76389 - Fix a TypeError in ``werkzeug.contrib.lint.GuardedIterator.close``.
77 (`#1116`_)
390 (:pr:`1116`)
78391 - ``BaseResponse.calculate_content_length`` now correctly works for
79392 Unicode responses on Python 3. It first encodes using
80 ``iter_encoded``. (`#705`_)
393 ``iter_encoded``. (:issue:`705`)
81394 - Secure cookie contrib works with string secret key on Python 3.
82 (`#1205`_)
395 (:pr:`1205`)
83396 - Shared data middleware accepts a list instead of a dict of static
84 locations to preserve lookup order. (`#1197`_)
397 locations to preserve lookup order. (:pr:`1197`)
85398 - HTTP header values without encoding can contain single quotes.
86 (`#1208`_)
399 (:pr:`1208`)
87400 - The built-in dev server supports receiving requests with chunked
88 transfer encoding. (`#1198`_)
89
90 .. _Click: https://www.palletsprojects.com/p/click/
91 .. _pallets/meta#24: https://github.com/pallets/meta/issues/24
92 .. _#414: https://github.com/pallets/werkzeug/pull/414
93 .. _#705: https://github.com/pallets/werkzeug/pull/705
94 .. _#780: https://github.com/pallets/werkzeug/pull/780
95 .. _#984: https://github.com/pallets/werkzeug/pull/984
96 .. _#1088: https://github.com/pallets/werkzeug/pull/1088
97 .. _#1090: https://github.com/pallets/werkzeug/pull/1090
98 .. _#1102: https://github.com/pallets/werkzeug/pull/1102
99 .. _#1109: https://github.com/pallets/werkzeug/pull/1109
100 .. _#1116: https://github.com/pallets/werkzeug/pull/1116
101 .. _#1197: https://github.com/pallets/werkzeug/pull/1197
102 .. _#1198: https://github.com/pallets/werkzeug/pull/1198
103 .. _#1205: https://github.com/pallets/werkzeug/pull/1205
104 .. _#1208: https://github.com/pallets/werkzeug/pull/1208
401 transfer encoding. (:pr:`1198`)
402
403 .. _Click: https://palletsprojects.com/p/click/
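The ``response.age`` fix in 0.14 above hinges on the ``Age`` header being an integer number of seconds, which maps naturally to a ``timedelta``. In plain Python terms (an illustrative sketch, not the Werkzeug parser):

```python
from datetime import timedelta

def parse_age(value):
    """Parse an Age header value (integer seconds) into a
    timedelta, mirroring the changelog entry above. Sketch only."""
    seconds = int(value)
    if seconds < 0:
        raise ValueError("Age must be non-negative")
    return timedelta(seconds=seconds)

print(parse_age("3600"))  # 1:00:00
```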
404
105405
106406 Version 0.12.2
107407 --------------
11891489 (bugfix release, released on June 24th 2008)
11901490
11911491 - fixed a security problem with `werkzeug.contrib.SecureCookie`.
1192 More details available in the `release announcement`_.
1193
1194 .. _release announcement: http://lucumr.pocoo.org/cogitations/2008/06/24/werkzeug-031-released/
1492
11951493
11961494 Version 0.3
11971495 -----------
130130 .. _email: https://help.github.com/articles/setting-your-email-in-git/
131131 .. _Fork: https://github.com/pallets/werkzeug/pull/2305#fork-destination-box
132132 .. _Clone: https://help.github.com/articles/fork-a-repo/#step-2-create-a-local-clone-of-your-fork
133 .. _committing as you go: http://dont-be-afraid-to-commit.readthedocs.io/en/latest/git/commandlinegit.html#commit-your-changes
133 .. _committing as you go: https://dont-be-afraid-to-commit.readthedocs.io/en/latest/git/commandlinegit.html#commit-your-changes
134134 .. _PEP8: https://pep8.org/
135135 .. _create a pull request: https://help.github.com/articles/creating-a-pull-request/
136136 .. _coverage: https://coverage.readthedocs.io
+0
-31
LICENSE
0 Copyright © 2007 by the Pallets team.
1
2 Some rights reserved.
3
4 Redistribution and use in source and binary forms, with or without
5 modification, are permitted provided that the following conditions are
6 met:
7
8 * Redistributions of source code must retain the above copyright notice,
9 this list of conditions and the following disclaimer.
10
11 * Redistributions in binary form must reproduce the above copyright
12 notice, this list of conditions and the following disclaimer in the
13 documentation and/or other materials provided with the distribution.
14
15 * Neither the name of the copyright holder nor the names of its
16 contributors may be used to endorse or promote products derived from
17 this software without specific prior written permission.
18
19 THIS SOFTWARE AND DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND
20 CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
21 BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
22 FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
23 COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
24 INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
25 NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
26 USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
27 ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
28 (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
29 THIS SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF
30 SUCH DAMAGE.
0 Copyright 2007 Pallets
1
2 Redistribution and use in source and binary forms, with or without
3 modification, are permitted provided that the following conditions are
4 met:
5
6 1. Redistributions of source code must retain the above copyright
7 notice, this list of conditions and the following disclaimer.
8
9 2. Redistributions in binary form must reproduce the above copyright
10 notice, this list of conditions and the following disclaimer in the
11 documentation and/or other materials provided with the distribution.
12
13 3. Neither the name of the copyright holder nor the names of its
14 contributors may be used to endorse or promote products derived from
15 this software without specific prior written permission.
16
17 THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
18 "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
19 LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
20 PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
21 HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
22 SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
23 TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
24 PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
25 LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
26 NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
27 SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
0 include CHANGES.rst LICENSE AUTHORS tox.ini
1 graft werkzeug/debug/shared
0 include CHANGES.rst
1 include LICENSE.rst
2 include tox.ini
3 graft artwork
4 graft docs
5 prune docs/_build
6 graft examples
27 graft tests
3 graft docs
4 graft artwork
5 graft examples
6 prune docs/_build
7 prune docs/_themes
8 global-exclude *.py[cdo] __pycache__ *.so
8 graft src/werkzeug/debug/shared
9 global-exclude *.py[co]
+0
-25
Makefile
0 documentation:
1 @(cd docs; make html)
2
3 release:
4 python scripts/make-release.py
5
6 test:
7 pytest
8
9 tox-test:
10 tox
11
12 coverage:
13 @(coverage run --module pytest $(TEST_OPTIONS) $(TESTS))
14
15 doctest:
16 @(cd docs; sphinx-build -b doctest . _build/doctest)
17
18 upload-docs:
19 $(MAKE) -C docs html dirhtml latex
20 $(MAKE) -C docs/_build/latex all-pdf
21 cd docs/_build/; mv html werkzeug-docs; zip -r werkzeug-docs.zip werkzeug-docs; mv werkzeug-docs html
22 rsync -a docs/_build/dirhtml/ flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/
23 rsync -a docs/_build/latex/Werkzeug.pdf flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/
24 rsync -a docs/_build/werkzeug-docs.zip flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/werkzeug-docs.zip
00 Werkzeug
11 ========
2
3 *werkzeug* German noun: "tool". Etymology: *werk* ("work"), *zeug* ("stuff")
24
35 Werkzeug is a comprehensive `WSGI`_ web application library. It began as
46 a simple collection of various utilities for WSGI applications and has
68
79 It includes:
810
9 * An interactive debugger that allows inspecting stack traces and source
10 code in the browser with an interactive interpreter for any frame in
11 the stack.
12 * A full-featured request object with objects to interact with headers,
13 query args, form data, files, and cookies.
14 * A response object that can wrap other WSGI applications and handle
15 streaming data.
16 * A routing system for matching URLs to endpoints and generating URLs
17 for endpoints, with an extensible system for capturing variables from
18 URLs.
19 * HTTP utilities to handle entity tags, cache control, dates, user
20 agents, cookies, files, and more.
21 * A threaded WSGI server for use while developing applications locally.
22 * A test client for simulating HTTP requests during testing without
23 requiring running a server.
11 - An interactive debugger that allows inspecting stack traces and
12 source code in the browser with an interactive interpreter for any
13 frame in the stack.
14 - A full-featured request object with objects to interact with
15 headers, query args, form data, files, and cookies.
16 - A response object that can wrap other WSGI applications and handle
17 streaming data.
18 - A routing system for matching URLs to endpoints and generating URLs
19 for endpoints, with an extensible system for capturing variables
20 from URLs.
21 - HTTP utilities to handle entity tags, cache control, dates, user
22 agents, cookies, files, and more.
23 - A threaded WSGI server for use while developing applications
24 locally.
25 - A test client for simulating HTTP requests during testing without
26 requiring running a server.
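The last item above, simulating HTTP requests without a running server, comes down to building a WSGI environ dict and invoking the application callable directly. A minimal stdlib-only sketch of the idea; `simulate` and `hello_app` are illustrative names, not Werkzeug's actual test-client API:

```python
from io import BytesIO

def hello_app(environ, start_response):
    # A plain WSGI app; Werkzeug's Request/Response objects wrap this protocol.
    name = environ.get("QUERY_STRING") or "World"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, " + name.encode("utf-8") + b"!"]

def simulate(app, path="/", query=""):
    """Call a WSGI app directly, the way a test client does, with no server."""
    environ = {
        "REQUEST_METHOD": "GET",
        "PATH_INFO": path,
        "QUERY_STRING": query,
        "SERVER_NAME": "localhost",
        "SERVER_PORT": "80",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "http",
        "wsgi.input": BytesIO(b""),
    }
    captured = {}

    def start_response(status, headers):
        # Record what the app reported instead of sending it to a socket.
        captured["status"] = status
        captured["headers"] = headers

    body = b"".join(app(environ, start_response))
    return captured["status"], body
```

Werkzeug's real test client builds a far more complete environ (cookies, content type, redirect handling), but the call pattern is the same.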
2427
2528 Werkzeug is Unicode aware and doesn't enforce any dependencies. It is up
2629 to the developer to choose a template engine, database adapter, and even
6164 Links
6265 -----
6366
64 * Website: https://www.palletsprojects.com/p/werkzeug/
65 * Releases: https://pypi.org/project/Werkzeug/
66 * Code: https://github.com/pallets/werkzeug
67 * Issue tracker: https://github.com/pallets/werkzeug/issues
68 * Test status:
67 - Website: https://www.palletsprojects.com/p/werkzeug/
68 - Documentation: https://werkzeug.palletsprojects.com/
69 - Releases: https://pypi.org/project/Werkzeug/
70 - Code: https://github.com/pallets/werkzeug
71 - Issue tracker: https://github.com/pallets/werkzeug/issues
72 - Test status:
6973
70 * Linux, Mac: https://travis-ci.org/pallets/werkzeug
71 * Windows: https://ci.appveyor.com/project/davidism/werkzeug
74 - Linux, Mac: https://travis-ci.org/pallets/werkzeug
75 - Windows: https://ci.appveyor.com/project/pallets/werkzeug
7276
73 * Test coverage: https://codecov.io/gh/pallets/werkzeug
77 - Test coverage: https://codecov.io/gh/pallets/werkzeug
78 - Official chat: https://discord.gg/t6rrQZH
7479
7580 .. _WSGI: https://wsgi.readthedocs.io/en/latest/
7681 .. _Flask: https://www.palletsprojects.com/p/flask/
77 hg bisect to find out how the Werkzeug performance of some internal
88 core parts changes over time.
99
10 :copyright: 2014 by the Werkzeug Team, see AUTHORS for more details.
11 :license: BSD, see LICENSE for more details.
10 :copyright: 2007 Pallets
11 :license: BSD-3-Clause
1212 """
1313 from __future__ import division
14 from __future__ import print_function
15
16 import gc
1417 import os
15 import gc
18 import subprocess
1619 import sys
17 import subprocess
18 from cStringIO import StringIO
1920 from timeit import default_timer as timer
2021 from types import FunctionType
2122
23 try:
24 from cStringIO import StringIO
25 except ImportError:
26 from io import StringIO
27
2228 PY2 = sys.version_info[0] == 2
2329
2430 if not PY2:
2632
2733
2834 # create a new module where we later store all the werkzeug attributes.
29 wz = type(sys)('werkzeug_nonlazy')
30 sys.path.insert(0, '<DUMMY>')
31 null_out = open(os.devnull, 'w')
32
35 wz = type(sys)("werkzeug_nonlazy")
36 sys.path.insert(0, "<DUMMY>")
37 null_out = open(os.devnull, "w")
3338
3439 # ±4% are ignored
3540 TOLERANCE = 0.04
4348 """Returns the current node or tag for the given path."""
4449 tags = {}
4550 try:
46 client = subprocess.Popen(['hg', 'cat', '-r', 'tip', '.hgtags'],
47 stdout=subprocess.PIPE, cwd=path)
51 client = subprocess.Popen(
52 ["hg", "cat", "-r", "tip", ".hgtags"], stdout=subprocess.PIPE, cwd=path
53 )
4854 for line in client.communicate()[0].splitlines():
4955 line = line.strip()
5056 if not line:
5460 except OSError:
5561 return
5662
57 client = subprocess.Popen(['hg', 'parent', '--template', '#node#'],
58 stdout=subprocess.PIPE, cwd=path)
63 client = subprocess.Popen(
64 ["hg", "parent", "--template", "#node#"], stdout=subprocess.PIPE, cwd=path
65 )
5966
6067 tip = client.communicate()[0].strip()
6168 tag = tags.get(tip)
7178 # get rid of already imported stuff
7279 wz.__dict__.clear()
7380 for key in sys.modules.keys():
74 if key.startswith('werkzeug.') or key == 'werkzeug':
81 if key.startswith("werkzeug.") or key == "werkzeug":
7582 sys.modules.pop(key, None)
7683
7784 # import werkzeug again.
7885 import werkzeug
86
7987 for key in werkzeug.__all__:
8088 setattr(wz, key, getattr(werkzeug, key))
8189
8492
8593 # get the real version from the setup file
8694 try:
87 f = open(os.path.join(path, 'setup.py'))
95 f = open(os.path.join(path, "setup.py"))
8896 except IOError:
8997 pass
9098 else:
9199 try:
92100 for line in f:
93101 line = line.strip()
94 if line.startswith('version='):
95 return line[8:].strip(' \t,')[1:-1], hg_tag
102 if line.startswith("version="):
103 return line[8:].strip(" \t,")[1:-1], hg_tag
96104 finally:
97105 f.close()
98 print >> sys.stderr, 'Unknown werkzeug version loaded'
106 print("Unknown werkzeug version loaded", file=sys.stderr)
99107 sys.exit(2)
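`load_werkzeug` falls back to scanning `setup.py` for a line starting with `version=` and slicing the quotes off. The same slicing, isolated into a standalone helper (the function name is illustrative, not part of the script):

```python
def parse_setup_version(lines):
    """Return the value of a version="..." line, as load_werkzeug does."""
    for line in lines:
        line = line.strip()
        if line.startswith("version="):
            # Drop the prefix and trailing comma/whitespace, then the quotes.
            return line[len("version="):].strip(" \t,")[1:-1]
    return None
```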
100108
101109
111119 name = func.__name__
112120 else:
113121 name = func
114 if name.startswith('time_'):
122 if name.startswith("time_"):
115123 name = name[5:]
116 return name.replace('_', ' ').title()
124 return name.replace("_", " ").title()
117125
118126
119127 def bench(func):
120128 """Times a single function."""
121 sys.stdout.write('%44s ' % format_func(func))
129 sys.stdout.write("%44s " % format_func(func))
122130 sys.stdout.flush()
123131
124132 # figure out how many times we have to run the function to
126134 for i in xrange(3, 10):
127135 rounds = 1 << i
128136 t = timer()
129 for x in xrange(rounds):
137 for _ in xrange(rounds):
130138 func()
131139 if timer() - t >= 0.2:
132140 break
138146 gc.disable()
139147 try:
140148 t = timer()
141 for x in xrange(rounds):
149 for _ in xrange(rounds):
142150 func()
143151 return (timer() - t) / rounds * 1000
144152 finally:
145153 gc.enable()
146154
147155 delta = median(_run() for x in xrange(TEST_RUNS))
148 sys.stdout.write('%.4f\n' % delta)
156 sys.stdout.write("%.4f\n" % delta)
149157 sys.stdout.flush()
150158
151159 return delta
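The measurement strategy in `bench()` (grow the round count until one batch takes at least 0.2 s, disable GC while timing, report the median per-call time in milliseconds) works the same on Python 3 alone. A sketch mirroring the script's logic, using `statistics.median` instead of its own helper:

```python
import gc
from statistics import median
from timeit import default_timer as timer

def bench(func, runs=5):
    """Median per-call time of func in milliseconds."""
    # Calibrate: double the rounds (8..512) until a batch takes >= 0.2 s.
    for i in range(3, 10):
        rounds = 1 << i
        start = timer()
        for _ in range(rounds):
            func()
        if timer() - start >= 0.2:
            break

    def one_run():
        gc.disable()  # keep collector pauses out of the measurement
        try:
            start = timer()
            for _ in range(rounds):
                func()
            return (timer() - start) / rounds * 1000
        finally:
            gc.enable()

    return median(one_run() for _ in range(runs))
```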
154162 def main():
155163 """The main entrypoint."""
156164 from optparse import OptionParser
157 parser = OptionParser(usage='%prog [options]')
158 parser.add_option('--werkzeug-path', '-p', dest='path', default='..',
159 help='the path to the werkzeug package. defaults to cwd')
160 parser.add_option('--compare', '-c', dest='compare', nargs=2,
161 default=False, help='compare two hg nodes of Werkzeug')
162 parser.add_option('--init-compare', dest='init_compare',
163 action='store_true', default=False,
164 help='Initializes the comparison feature')
165
166 parser = OptionParser(usage="%prog [options]")
167 parser.add_option(
168 "--werkzeug-path",
169 "-p",
170 dest="path",
171 default="..",
172 help="the path to the werkzeug package. defaults to cwd",
173 )
174 parser.add_option(
175 "--compare",
176 "-c",
177 dest="compare",
178 nargs=2,
179 default=False,
180 help="compare two hg nodes of Werkzeug",
181 )
182 parser.add_option(
183 "--init-compare",
184 dest="init_compare",
185 action="store_true",
186 default=False,
187 help="Initializes the comparison feature",
188 )
165189 options, args = parser.parse_args()
166190 if args:
167 parser.error('Script takes no arguments')
191 parser.error("Script takes no arguments")
168192 if options.compare:
169193 compare(*options.compare)
170194 elif options.init_compare:
175199
176200 def init_compare():
177201 """Initializes the comparison feature."""
178 print('Initializing comparison feature')
179 subprocess.Popen(['hg', 'clone', '..', 'a']).wait()
180 subprocess.Popen(['hg', 'clone', '..', 'b']).wait()
202 print("Initializing comparison feature")
203 subprocess.Popen(["hg", "clone", "..", "a"]).wait()
204 subprocess.Popen(["hg", "clone", "..", "b"]).wait()
181205
182206
183207 def compare(node1, node2):
184208 """Compares two Werkzeug hg versions."""
185 if not os.path.isdir('a'):
186 print >> sys.stderr, 'error: comparison feature not initialized'
209 if not os.path.isdir("a"):
210 print("error: comparison feature not initialized", file=sys.stderr)
187211 sys.exit(4)
188212
189 print('=' * 80)
190 print('WERKZEUG INTERNAL BENCHMARK -- COMPARE MODE'.center(80))
191 print('-' * 80)
192
193 def _error(msg):
194 print >> sys.stderr, 'error:', msg
195 sys.exit(1)
213 print("=" * 80)
214 print("WERKZEUG INTERNAL BENCHMARK -- COMPARE MODE".center(80))
215 print("-" * 80)
196216
197217 def _hg_update(repo, node):
198 hg = lambda *x: subprocess.call(['hg'] + list(x), cwd=repo,
199 stdout=null_out, stderr=null_out)
200 hg('revert', '-a', '--no-backup')
201 client = subprocess.Popen(['hg', 'status', '--unknown', '-n', '-0'],
202 stdout=subprocess.PIPE, cwd=repo)
218 def hg(*x):
219 return subprocess.call(
220 ["hg"] + list(x), cwd=repo, stdout=null_out, stderr=null_out
221 )
222
223 hg("revert", "-a", "--no-backup")
224 client = subprocess.Popen(
225 ["hg", "status", "--unknown", "-n", "-0"], stdout=subprocess.PIPE, cwd=repo
226 )
203227 unknown = client.communicate()[0]
204228 if unknown:
205 client = subprocess.Popen(['xargs', '-0', 'rm', '-f'], cwd=repo,
206 stdout=null_out, stdin=subprocess.PIPE)
229 client = subprocess.Popen(
230 ["xargs", "-0", "rm", "-f"],
231 cwd=repo,
232 stdout=null_out,
233 stdin=subprocess.PIPE,
234 )
207235 client.communicate(unknown)
208 hg('pull', '../..')
209 hg('update', node)
210 if node == 'tip':
211 diff = subprocess.Popen(['hg', 'diff'], cwd='..',
212 stdout=subprocess.PIPE).communicate()[0]
236 hg("pull", "../..")
237 hg("update", node)
238 if node == "tip":
239 diff = subprocess.Popen(
240 ["hg", "diff"], cwd="..", stdout=subprocess.PIPE
241 ).communicate()[0]
213242 if diff:
214 client = subprocess.Popen(['hg', 'import', '--no-commit', '-'],
215 cwd=repo, stdout=null_out,
216 stdin=subprocess.PIPE)
243 client = subprocess.Popen(
244 ["hg", "import", "--no-commit", "-"],
245 cwd=repo,
246 stdout=null_out,
247 stdin=subprocess.PIPE,
248 )
217249 client.communicate(diff)
218250
219 _hg_update('a', node1)
220 _hg_update('b', node2)
221 d1 = run('a', no_header=True)
222 d2 = run('b', no_header=True)
223
224 print('DIRECT COMPARISON'.center(80))
225 print('-' * 80)
251 _hg_update("a", node1)
252 _hg_update("b", node2)
253 d1 = run("a", no_header=True)
254 d2 = run("b", no_header=True)
255
256 print("DIRECT COMPARISON".center(80))
257 print("-" * 80)
226258 for key in sorted(d1):
227259 delta = d1[key] - d2[key]
228 if abs(1 - d1[key] / d2[key]) < TOLERANCE or \
229 abs(delta) < MIN_RESOLUTION:
230 delta = '=='
260 if abs(1 - d1[key] / d2[key]) < TOLERANCE or abs(delta) < MIN_RESOLUTION:
261 delta = "=="
231262 else:
232 delta = '%+.4f (%+d%%)' % \
233 (delta, round(d2[key] / d1[key] * 100 - 100))
234 print('%36s %.4f %.4f %s' %
235 (format_func(key), d1[key], d2[key], delta))
236 print('-' * 80)
263 delta = "%+.4f (%+d%%)" % (delta, round(d2[key] / d1[key] * 100 - 100))
264 print("%36s %.4f %.4f %s" % (format_func(key), d1[key], d2[key], delta))
265 print("-" * 80)
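The `==` rule above declares two timings equal when they are within `TOLERANCE` (±4%) relatively or within `MIN_RESOLUTION` absolutely; otherwise it prints the absolute delta plus a rounded percentage. Pulled out into a standalone helper (hypothetical name, and the default `min_resolution` is only a placeholder for the script's constant):

```python
def describe_delta(d1, d2, tolerance=0.04, min_resolution=1e-4):
    """Format the difference between two timings like the compare table."""
    delta = d1 - d2
    if abs(1 - d1 / d2) < tolerance or abs(delta) < min_resolution:
        return "=="
    # A positive percentage means the second measurement was slower.
    return "%+.4f (%+d%%)" % (delta, round(d2 / d1 * 100 - 100))
```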
237266
238267
239268 def run(path, no_header=False):
241270 wz_version, hg_tag = load_werkzeug(path)
242271 result = {}
243272 if not no_header:
244 print('=' * 80)
245 print('WERKZEUG INTERNAL BENCHMARK'.center(80))
246 print('-' * 80)
247 print('Path: %s' % path)
248 print('Version: %s' % wz_version)
273 print("=" * 80)
274 print("WERKZEUG INTERNAL BENCHMARK".center(80))
275 print("-" * 80)
276 print("Path: %s" % path)
277 print("Version: %s" % wz_version)
249278 if hg_tag is not None:
250 print('HG Tag: %s' % hg_tag)
251 print('-' * 80)
279 print("HG Tag: %s" % hg_tag)
280 print("-" * 80)
252281 for key, value in sorted(globals().items()):
253 if key.startswith('time_'):
254 before = globals().get('before_' + key[5:])
282 if key.startswith("time_"):
283 before = globals().get("before_" + key[5:])
255284 if before:
256285 before()
257286 result[key] = bench(value)
258 after = globals().get('after_' + key[5:])
287 after = globals().get("after_" + key[5:])
259288 if after:
260289 after()
261 print('-' * 80)
290 print("-" * 80)
262291 return result
263292
264293
265294 URL_DECODED_DATA = dict((str(x), str(x)) for x in xrange(100))
266 URL_ENCODED_DATA = '&'.join('%s=%s' % x for x in URL_DECODED_DATA.items())
267 MULTIPART_ENCODED_DATA = '\n'.join((
268 '--foo',
269 'Content-Disposition: form-data; name=foo',
270 '',
271 'this is just bar',
272 '--foo',
273 'Content-Disposition: form-data; name=bar',
274 '',
275 'blafasel',
276 '--foo',
277 'Content-Disposition: form-data; name=foo; filename=wzbench.py',
278 'Content-Type: text/plain',
279 '',
280 open(__file__.rstrip('c')).read(),
281 '--foo--'
282 ))
295 URL_ENCODED_DATA = "&".join("%s=%s" % x for x in URL_DECODED_DATA.items())
296 MULTIPART_ENCODED_DATA = "\n".join(
297 (
298 "--foo",
299 "Content-Disposition: form-data; name=foo",
300 "",
301 "this is just bar",
302 "--foo",
303 "Content-Disposition: form-data; name=bar",
304 "",
305 "blafasel",
306 "--foo",
307 "Content-Disposition: form-data; name=foo; filename=wzbench.py",
308 "Content-Type: text/plain",
309 "",
310 open(__file__.rstrip("c")).read(),
311 "--foo--",
312 )
313 )
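`MULTIPART_ENCODED_DATA` feeds the form-parsing benchmarks. For reference, such a body can be decoded with the stdlib `email` parser by prepending a matching `Content-Type` header; this is a rough stand-in for what `wz.Request.form` triggers, not Werkzeug's actual parser:

```python
from email.parser import BytesParser
from email.policy import default

def parse_form_data(body, boundary):
    """Decode a multipart/form-data body into a {name: content} dict."""
    head = b"Content-Type: multipart/form-data; boundary=" + boundary + b"\r\n\r\n"
    msg = BytesParser(policy=default).parsebytes(head + body)
    fields = {}
    for part in msg.iter_parts():
        # The field name lives in the Content-Disposition header's parameters.
        name = part.get_param("name", header="content-disposition")
        fields[name] = part.get_content()
    return fields
```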
283314 MULTIDICT = None
284315 REQUEST = None
285316 TEST_ENV = None
300331 # from_values which is known to be slowish in 0.5.1 and higher.
301332 # we don't want to bench two things at once.
302333 environ = {
303 'REQUEST_METHOD': 'POST',
304 'CONTENT_TYPE': 'multipart/form-data; boundary=foo',
305 'wsgi.input': StringIO(MULTIPART_ENCODED_DATA),
306 'CONTENT_LENGTH': str(len(MULTIPART_ENCODED_DATA))
334 "REQUEST_METHOD": "POST",
335 "CONTENT_TYPE": "multipart/form-data; boundary=foo",
336 "wsgi.input": StringIO(MULTIPART_ENCODED_DATA),
337 "CONTENT_LENGTH": str(len(MULTIPART_ENCODED_DATA)),
307338 }
308339 request = wz.Request(environ)
309340 request.form
311342
312343 def before_multidict_lookup_hit():
313344 global MULTIDICT
314 MULTIDICT = wz.MultiDict({'foo': 'bar'})
345 MULTIDICT = wz.MultiDict({"foo": "bar"})
315346
316347
317348 def time_multidict_lookup_hit():
318 MULTIDICT['foo']
349 MULTIDICT["foo"]
319350
320351
321352 def after_multidict_lookup_hit():
330361
331362 def time_multidict_lookup_miss():
332363 try:
333 MULTIDICT['foo']
364 MULTIDICT["foo"]
334365 except KeyError:
335366 pass
336367
347378 return 42
348379
349380 f = Foo()
350 for x in xrange(60):
381 for _ in xrange(60):
351382 f.x
352383
353384
354385 def before_request_form_access():
355386 global REQUEST
356 data = 'foo=bar&blah=blub'
357 REQUEST = wz.Request({
358 'CONTENT_LENGTH': str(len(data)),
359 'wsgi.input': StringIO(data),
360 'REQUEST_METHOD': 'POST',
361 'wsgi.version': (1, 0),
362 'QUERY_STRING': data,
363 'CONTENT_TYPE': 'application/x-www-form-urlencoded',
364 'PATH_INFO': '/',
365 'SCRIPT_NAME': ''
366 })
387 data = "foo=bar&blah=blub"
388 REQUEST = wz.Request(
389 {
390 "CONTENT_LENGTH": str(len(data)),
391 "wsgi.input": StringIO(data),
392 "REQUEST_METHOD": "POST",
393 "wsgi.version": (1, 0),
394 "QUERY_STRING": data,
395 "CONTENT_TYPE": "application/x-www-form-urlencoded",
396 "PATH_INFO": "/",
397 "SCRIPT_NAME": "",
398 }
399 )
367400
368401
369402 def time_request_form_access():
370 for x in xrange(30):
403 for _ in xrange(30):
371404 REQUEST.path
372405 REQUEST.script_root
373 REQUEST.args['foo']
374 REQUEST.form['foo']
406 REQUEST.args["foo"]
407 REQUEST.form["foo"]
375408
376409
377410 def after_request_form_access():
380413
381414
382415 def time_request_from_values():
383 wz.Request.from_values(base_url='http://www.google.com/',
384 query_string='foo=bar&blah=blaz',
385 input_stream=StringIO(MULTIPART_ENCODED_DATA),
386 content_length=len(MULTIPART_ENCODED_DATA),
387 content_type='multipart/form-data; '
388 'boundary=foo', method='POST')
416 wz.Request.from_values(
417 base_url="http://www.google.com/",
418 query_string="foo=bar&blah=blaz",
419 input_stream=StringIO(MULTIPART_ENCODED_DATA),
420 content_length=len(MULTIPART_ENCODED_DATA),
421 content_type="multipart/form-data; boundary=foo",
422 method="POST",
423 )
389424
390425
391426 def before_request_shallow_init():
403438
404439
405440 def time_response_iter_performance():
406 resp = wz.Response(u'Hällo Wörld ' * 1000,
407 mimetype='text/html')
408 for item in resp({'REQUEST_METHOD': 'GET'}, lambda *s: None):
441 resp = wz.Response(u"Hällo Wörld " * 1000, mimetype="text/html")
442 for _ in resp({"REQUEST_METHOD": "GET"}, lambda *s: None):
409443 pass
410444
411445
412446 def time_response_iter_head_performance():
413 resp = wz.Response(u'Hällo Wörld ' * 1000,
414 mimetype='text/html')
415 for item in resp({'REQUEST_METHOD': 'HEAD'}, lambda *s: None):
447 resp = wz.Response(u"Hällo Wörld " * 1000, mimetype="text/html")
448 for _ in resp({"REQUEST_METHOD": "HEAD"}, lambda *s: None):
416449 pass
417450
418451
423456
424457
425458 def time_local_manager_dispatch():
426 for x in xrange(10):
459 for _ in xrange(10):
427460 LOCAL.x = 42
428 for x in xrange(10):
461 for _ in xrange(10):
429462 LOCAL.x
430463
431464
436469
437470 def before_html_builder():
438471 global TABLE
439 TABLE = [['col 1', 'col 2', 'col 3', '4', '5', '6'] for x in range(10)]
472 TABLE = [["col 1", "col 2", "col 3", "4", "5", "6"] for x in range(10)]
440473
441474
442475 def time_html_builder():
443476 html_rows = []
444477 for row in TABLE: # noqa
445 html_cols = [wz.html.td(col, class_='col') for col in row]
446 html_rows.append(wz.html.tr(class_='row', *html_cols))
478 html_cols = [wz.html.td(col, class_="col") for col in row]
479 html_rows.append(wz.html.tr(class_="row", *html_cols))
447480 wz.html.table(*html_rows)
448481
449482
452485 TABLE = None
453486
454487
455 if __name__ == '__main__':
488 if __name__ == "__main__":
456489 os.chdir(os.path.dirname(__file__) or os.path.curdir)
457490 try:
458491 main()
459492 except KeyboardInterrupt:
460 print >> sys.stderr, 'interrupted!'
493 print("\nInterrupted!", file=sys.stderr)
0 # Makefile for Sphinx documentation
0 # Minimal makefile for Sphinx documentation
11 #
22
33 # You can set these variables from the command line.
44 SPHINXOPTS =
55 SPHINXBUILD = sphinx-build
6 PAPER =
6 SOURCEDIR = .
77 BUILDDIR = _build
88
9 # Internal variables.
10 PAPEROPT_a4 = -D latex_paper_size=a4
11 PAPEROPT_letter = -D latex_paper_size=letter
12 ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
9 # Put it first so that "make" without argument is like "make help".
10 help:
11 @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
1312
14 .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp epub latex changes linkcheck doctest
13 .PHONY: help Makefile
1514
16 help:
17 @echo "Please use \`make <target>' where <target> is one of"
18 @echo " html to make standalone HTML files"
19 @echo " dirhtml to make HTML files named index.html in directories"
20 @echo " singlehtml to make a single large HTML file"
21 @echo " pickle to make pickle files"
22 @echo " json to make JSON files"
23 @echo " htmlhelp to make HTML files and a HTML help project"
24 @echo " qthelp to make HTML files and a qthelp project"
25 @echo " devhelp to make HTML files and a Devhelp project"
26 @echo " epub to make an epub"
27 @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
28 @echo " latexpdf to make LaTeX files and run them through pdflatex"
29 @echo " changes to make an overview of all changed/added/deprecated items"
30 @echo " linkcheck to check all external links for integrity"
31 @echo " doctest to run all doctests embedded in the documentation (if enabled)"
32
33 clean:
34 -rm -rf $(BUILDDIR)/*
35
36 html:
37 $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
38 @echo
39 @echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
40
41 dirhtml:
42 $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
43 @echo
44 @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
45
46 singlehtml:
47 $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
48 @echo
49 @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
50
51 pickle:
52 $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
53 @echo
54 @echo "Build finished; now you can process the pickle files."
55
56 json:
57 $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
58 @echo
59 @echo "Build finished; now you can process the JSON files."
60
61 htmlhelp:
62 $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
63 @echo
64 @echo "Build finished; now you can run HTML Help Workshop with the" \
65 ".hhp project file in $(BUILDDIR)/htmlhelp."
66
67 qthelp:
68 $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
69 @echo
70 @echo "Build finished; now you can run "qcollectiongenerator" with the" \
71 ".qhcp project file in $(BUILDDIR)/qthelp, like this:"
72 @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Flask.qhcp"
73 @echo "To view the help file:"
74 @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Flask.qhc"
75
76 devhelp:
77 $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) _build/devhelp
78 @echo
79 @echo "Build finished."
80 @echo "To view the help file:"
81 @echo "# mkdir -p $$HOME/.local/share/devhelp/Flask"
82 @echo "# ln -s _build/devhelp $$HOME/.local/share/devhelp/Flask"
83 @echo "# devhelp"
84
85 epub:
86 $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
87 @echo
88 @echo "Build finished. The epub file is in $(BUILDDIR)/epub."
89
90 latex:
91 $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
92 @echo
93 @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
94 @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
95 "run these through (pdf)latex."
96
97 latexpdf: latex
98 $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex
99 @echo "Running LaTeX files through pdflatex..."
100 make -C _build/latex all-pdf
101 @echo "pdflatex finished; the PDF files are in _build/latex."
102
103 changes:
104 $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
105 @echo
106 @echo "The overview file is in $(BUILDDIR)/changes."
107
108 linkcheck:
109 $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
110 @echo
111 @echo "Link check complete; look for any errors in the above output " \
112 "or in $(BUILDDIR)/linkcheck/output.txt."
113
114 doctest:
115 $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
116 @echo "Testing of doctests in the sources finished, look at the " \
117 "results in $(BUILDDIR)/doctest/output.txt."
15 # Catch-all target: route all unknown targets to Sphinx using the new
16 # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
17 %: Makefile
18 @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
docs/_static/background.png
Binary diff not shown
docs/_static/codebackground.png
Binary diff not shown
docs/_static/contents.png
Binary diff not shown
docs/_static/header.png
Binary diff not shown
docs/_static/navigation.png
Binary diff not shown
docs/_static/navigation_active.png
Binary diff not shown
docs/_static/shorty-screenshot.png
Binary diff not shown
+0 -423 docs/_static/style.css
0 body {
1 font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif;
2 font-size: 14px;
3 letter-spacing: -0.01em;
4 line-height: 150%;
5 text-align: center;
6 background: #AFC1C4 url(background.png);
7 color: black;
8 margin: 0;
9 padding: 0;
10 }
11
12 a {
13 color: #CA7900;
14 text-decoration: none;
15 }
16
17 a:hover {
18 color: #2491CF;
19 }
20
21 pre {
22 font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
23 font-size: 0.85em;
24 letter-spacing: 0.015em;
25 padding: 0.3em 0.7em;
26 border: 1px solid #aaa;
27 border-right-color: #ddd;
28 border-bottom-color: #ddd;
29 background: #f8f8f8 url(codebackground.png);
30 }
31
32 cite, code, tt {
33 font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
34 font-size: 0.95em;
35 letter-spacing: 0.01em;
36 font-style: normal;
37 }
38
39 tt {
40 background-color: #f2f2f2;
41 border-bottom: 1px solid #ddd;
42 color: #333;
43 }
44
45 tt.func-signature {
46 font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif;
47 font-size: 0.85em;
48 background-color: transparent;
49 border-bottom: none;
50 color: #555;
51 }
52
53 dt {
54 margin-top: 0.8em;
55 }
56
57 dd p.first {
58 margin-top: 0;
59 }
60
61 dd p.last {
62 margin-bottom: 0;
63 }
64
65 pre {
66 line-height: 150%;
67 }
68
69 pre a {
70 color: inherit;
71 text-decoration: underline;
72 }
73
74 div.syntax {
75 background-color: transparent;
76 }
77
78 div.page {
79 background: white url(contents.png) 0 130px;
80 border: 1px solid #aaa;
81 width: 740px;
82 margin: 20px auto 20px auto;
83 text-align: left;
84 }
85
86 div.header {
87 background-image: url(header.png);
88 height: 100px;
89 border-bottom: 1px solid #aaa;
90 }
91
92 div.header h1 {
93 float: right;
94 position: absolute;
95 margin: -43px 0 0 585px;
96 height: 180px;
97 width: 180px;
98 }
99
100 div.header h1 a {
101 display: block;
102 background-image: url(werkzeug.png);
103 background-repeat: no-repeat;
104 height: 180px;
105 width: 180px;
106 text-decoration: none;
107 color: white!important;
108 }
109
110 div.header span {
111 display: none;
112 }
113
114 div.header p {
115 background-image: url(header_invert.png);
116 margin: 0;
117 padding: 10px;
118 height: 80px;
119 color: white;
120 display: none;
121 }
122
123 ul.navigation {
124 background-image: url(navigation.png);
125 height: 2em;
126 list-style: none;
127 border-top: 1px solid #ddd;
128 border-bottom: 1px solid #ddd;
129 margin: 0;
130 padding: 0;
131 }
132
133 ul.navigation li {
134 margin: 0;
135 padding: 0;
136 height: 2em;
137 line-height: 1.75em;
138 float: left;
139 }
140
141 ul.navigation li a {
142 margin: 0;
143 padding: 0 10px 0 10px;
144 color: #EE9816;
145 }
146
147 ul.navigation li a:hover {
148 color: #3CA8E7;
149 }
150
151 ul.navigation li.active {
152 background-image: url(navigation_active.png);
153 }
154
155 ul.navigation li.active a {
156 color: black;
157 }
158
159 ul.navigation li.indexlink a {
160 font-size: 0.9em;
161 font-weight: bold;
162 color: #11557C;
163 }
164
165 div.body {
166 margin: 0 20px 0 20px;
167 padding: 0.5em 0 20px 0;
168 }
169
170 p {
171 margin: 0.8em 0 0.5em 0;
172 }
173
174 h1 {
175 margin: 0;
176 padding: 0.7em 0 0.3em 0;
177 font-size: 1.5em;
178 color: #11557C;
179 }
180
181 h2 {
182 margin: 1.3em 0 0.2em 0;
183 font-size: 1.35em;
184 padding: 0;
185 }
186
187 h3 {
188 margin: 1em 0 -0.3em 0;
189 }
190
191 h2 a, h3 a, h4 a, h5 a, h6 a {
192 color: black!important;
193 }
194
195 a.headerlink {
196 color: #B4B4B4!important;
197 font-size: 0.8em;
198 padding: 0 4px 0 4px;
199 text-decoration: none!important;
200 visibility: hidden;
201 }
202
203 h1:hover > a.headerlink,
204 h2:hover > a.headerlink,
205 h3:hover > a.headerlink,
206 h4:hover > a.headerlink,
207 h5:hover > a.headerlink,
208 h6:hover > a.headerlink,
209 dt:hover > a.headerlink {
210 visibility: visible;
211 }
212
213 a.headerlink:hover {
214 background-color: #B4B4B4;
215 color: #F0F0F0!important;
216 }
217
218 table {
219 border-collapse: collapse;
220 margin: 0 -0.5em 0 -0.5em;
221 }
222
223 table td, table th {
224 padding: 0.2em 0.5em 0.2em 0.5em;
225 }
226
227 div.footer {
228 background-color: #E3EFF1;
229 color: #86989B;
230 padding: 3px 8px 3px 0;
231 clear: both;
232 font-size: 0.8em;
233 text-align: right;
234 }
235
236 div.footer a {
237 color: #86989B;
238 text-decoration: underline;
239 }
240
241 div.toc {
242 float: right;
243 background-color: white;
244 border: 1px solid #86989B;
245 padding: 0;
246 margin: 0 0 1em 1em;
247 width: 10em;
248 }
249
250 div.toc h4 {
251 margin: 0;
252 font-size: 0.9em;
253 padding: 0.1em 0 0.1em 0.6em;
254 margin: 0;
255 color: white;
256 border-bottom: 1px solid #86989B;
257 background-color: #AFC1C4;
258 }
259
260 div.toc ul {
261 margin: 1em 0 1em 0;
262 padding: 0 0 0 1em;
263 list-style: none;
264 }
265
266 div.toc ul li {
267 margin: 0.5em 0 0.5em 0;
268 font-size: 0.9em;
269 line-height: 130%;
270 }
271
272 div.toc ul li p {
273 margin: 0;
274 padding: 0;
275 }
276
277 div.toc ul ul {
278 margin: 0.2em 0 0.2em 0;
279 padding: 0 0 0 1.8em;
280 }
281
282 div.toc ul ul li {
283 padding: 0;
284 }
285
286 div.admonition, div.warning, div#toc {
287 font-size: 0.9em;
288 margin: 1em 0 0 0;
289 border: 1px solid #86989B;
290 background-color: #f7f7f7;
291 }
292
293 div.admonition p, div.warning p, div#toc p {
294 margin: 0.5em 1em 0.5em 1em;
295 padding: 0;
296 }
297
298 div.admonition pre, div.warning pre, div#toc pre {
299 margin: 0.4em 1em 0.4em 1em;
300 }
301
302 div.admonition p.admonition-title,
303 div.warning p.admonition-title,
304 div#toc h3 {
305 margin: 0;
306 padding: 0.1em 0 0.1em 0.5em;
307 color: white;
308 border-bottom: 1px solid #86989B;
309 font-weight: bold;
310 background-color: #AFC1C4;
311 }
312
313 div.warning {
314 border: 1px solid #940000;
315 }
316
317 div.warning p.admonition-title {
318 background-color: #CF0000;
319 border-bottom-color: #940000;
320 }
321
322 div.admonition ul, div.admonition ol,
323 div.warning ul, div.warning ol,
324 div#toc ul, div#toc ol {
325 margin: 0.1em 0.5em 0.5em 3em;
326 padding: 0;
327 }
328
329 div#toc div.inner {
330 border-top: 1px solid #86989B;
331 padding: 10px;
332 }
333
334 div#toc h3 {
335 border-bottom: none;
336 cursor: pointer;
337 font-size: 13px;
338 }
339
340 div#toc h3:hover {
341 background-color: #86989B;
342 }
343
344 div#toc ul {
345 margin: 2px 0 2px 20px;
346 padding: 0;
347 }
348
349 div#toc ul li {
350 line-height: 125%;
351 }
352
353 dl.function dt,
354 dl.class dt,
355 dl.exception dt,
356 dl.method dt,
357 dl.attribute dt {
358 font-weight: normal;
359 }
360
361 dt .descname {
362 font-weight: bold;
363 margin-right: 4px;
364 }
365
366 dt .descname, dt .descclassname {
367 padding: 0;
368 background: transparent;
369 border-bottom: 1px solid #111;
370 }
371
372 dt .descclassname {
373 margin-left: 2px;
374 }
375
376 dl dt big {
377 font-size: 100%;
378 }
379
380 dl p {
381 margin: 0;
382 }
383
384 dl p + p {
385 margin-top: 10px;
386 }
387
388 span.versionmodified {
389 color: #4B4A49;
390 font-weight: bold;
391 }
392
393 span.versionadded {
394 color: #30691A;
395 font-weight: bold;
396 }
397
398 table.field-list td.field-body ul.simple {
399 margin: 0;
400 padding: 0!important;
401 list-style: none;
402 }
403
404 table.indextable td {
405 width: 50%;
406 vertical-align: top;
407 }
408
409 table.indextable dt {
410 margin: 0;
411 }
412
413 table.indextable dd dt a {
414 color: black!important;
415 font-size: 0.8em;
416 }
417
418 div.jumpbox {
419 padding: 1em 0 0.4em 0;
420 border-bottom: 1px solid #ddd;
421 color: #aaa;
422 }
+0 -10 docs/_static/werkzeug.js
0 (function() {
1 Werkzeug = {};
2
3 $(function() {
4 $('#toc h3').click(function() {
5 $(this).next().slideToggle();
6 $(this).parent().toggleClass('toc-collapsed');
7 }).next().hide().parent().addClass('toc-collapsed');
8 });
9 })();
+0 -19 docs/_templates/sidebarintro.html
0 <h3>About Werkzeug</h3>
1 <p>
2 Werkzeug is a WSGI utility library. It can serve as the basis for a
3 custom framework.
4 </p>
5 <h3>Other Formats</h3>
6 <p>
7 You can download the documentation in other formats as well:
8 </p>
9 <ul>
10 <li><a href="http://werkzeug.pocoo.org/docs/werkzeug-docs.pdf">as PDF</a>
11 <li><a href="http://werkzeug.pocoo.org/docs/werkzeug-docs.zip">as zipped HTML</a>
12 </ul>
13 <h3>Useful Links</h3>
14 <ul>
15 <li><a href="http://werkzeug.pocoo.org/">The Werkzeug Website</a></li>
16 <li><a href="http://pypi.python.org/pypi/Werkzeug">Werkzeug @ PyPI</a></li>
17 <li><a href="http://github.com/pallets/werkzeug">Werkzeug @ github</a></li>
18 </ul>
+0 -3 docs/_templates/sidebarlogo.html
0 <p class="logo"><a href="{{ pathto(master_doc) }}">
1 <img class="logo" src="{{ pathto('_static/werkzeug.png', 1) }}" alt="Logo"/>
2 </a></p>
+0
-37
docs/_themes/LICENSE
0 Copyright (c) 2011 by Armin Ronacher.
1
2 Some rights reserved.
3
4 Redistribution and use in source and binary forms of the theme, with or
5 without modification, are permitted provided that the following conditions
6 are met:
7
8 * Redistributions of source code must retain the above copyright
9 notice, this list of conditions and the following disclaimer.
10
11 * Redistributions in binary form must reproduce the above
12 copyright notice, this list of conditions and the following
13 disclaimer in the documentation and/or other materials provided
14 with the distribution.
15
16 * The names of the contributors may not be used to endorse or
17 promote products derived from this software without specific
18 prior written permission.
19
20 We kindly ask you to only use these themes in an unmodified manner just
21 for Flask and Flask-related products, not for unrelated projects. If you
22 like the visual style and want to use it for your own projects, please
23 consider making some larger changes to the themes (such as changing
24 font faces, sizes, colors or margins).
25
26 THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
27 AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
28 IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
29 ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
30 LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
31 CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
32 SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
33 INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
34 CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
35 ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE
36 POSSIBILITY OF SUCH DAMAGE.
+0
-31
docs/_themes/README
0 Flask Sphinx Styles
1 ===================
2
3 This repository contains sphinx styles for Flask and Flask related
4 projects. To use this style in your Sphinx documentation, follow
5 this guide:
6
7 1. put this folder as _themes into your docs folder. Alternatively
8 you can also use git submodules to check out the contents there.
9 2. add this to your conf.py:
10
11 sys.path.append(os.path.abspath('_themes'))
12 html_theme_path = ['_themes']
13 html_theme = 'flask'
14
15 The following themes exist:
16
17 - 'flask' - the standard flask documentation theme for large
18 projects
19 - 'flask_small' - small one-page theme. Intended to be used by
20 very small addon libraries for flask.
21
22 The following options exist for the flask_small theme:
23
24 [options]
25 index_logo = '' filename of a picture in _static
26 to be used as replacement for the
27 h1 in the index.rst file.
28 index_logo_height = 120px height of the index logo
29 github_fork = '' repository name on github for the
30 "fork me" badge
+0
-8
docs/_themes/werkzeug/layout.html
0 {%- extends "basic/layout.html" %}
1 {%- block relbar2 %}{% endblock %}
2 {%- block footer %}
3 <div class="footer">
4 &copy; Copyright {{ copyright }}.
5 Created using <a href="http://sphinx.pocoo.org/">Sphinx</a>.
6 </div>
7 {%- endblock %}
+0
-19
docs/_themes/werkzeug/relations.html
0 <h3>Related Topics</h3>
1 <ul>
2 <li><a href="{{ pathto(master_doc) }}">Documentation overview</a><ul>
3 {%- for parent in parents %}
4 <li><a href="{{ parent.link|e }}">{{ parent.title }}</a><ul>
5 {%- endfor %}
6 {%- if prev %}
7 <li>Previous: <a href="{{ prev.link|e }}" title="{{ _('previous chapter')
8 }}">{{ prev.title }}</a></li>
9 {%- endif %}
10 {%- if next %}
11 <li>Next: <a href="{{ next.link|e }}" title="{{ _('next chapter')
12 }}">{{ next.title }}</a></li>
13 {%- endif %}
14 {%- for parent in parents %}
15 </ul></li>
16 {%- endfor %}
17 </ul></li>
18 </ul>
+0
-395
docs/_themes/werkzeug/static/werkzeug.css_t
0 /*
1 * werkzeug.css_t
2 * ~~~~~~~~~~~~~~
3 *
4 * :copyright: Copyright 2011 by Armin Ronacher.
5 * :license: Flask Design License, see LICENSE for details.
6 */
7
8 {% set page_width = '940px' %}
9 {% set sidebar_width = '220px' %}
10 {% set font_family = "'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif" %}
11 {% set header_font_family = "'Ubuntu', " ~ font_family %}
12
13 @import url("basic.css");
14 @import url(http://fonts.googleapis.com/css?family=Ubuntu);
15
16 /* -- page layout ----------------------------------------------------------- */
17
18 body {
19 font-family: {{ font_family }};
20 font-size: 15px;
21 background-color: white;
22 color: #000;
23 margin: 0;
24 padding: 0;
25 }
26
27 div.document {
28 width: {{ page_width }};
29 margin: 30px auto 0 auto;
30 }
31
32 div.documentwrapper {
33 float: left;
34 width: 100%;
35 }
36
37 div.bodywrapper {
38 margin: 0 0 0 {{ sidebar_width }};
39 }
40
41 div.sphinxsidebar {
42 width: {{ sidebar_width }};
43 }
44
45 hr {
46 border: 1px solid #B1B4B6;
47 }
48
49 div.body {
50 background-color: #ffffff;
51 color: #3E4349;
52 padding: 0 30px 0 30px;
53 }
54
55 img.floatingflask {
56 padding: 0 0 10px 10px;
57 float: right;
58 }
59
60 div.footer {
61 width: {{ page_width }};
62 margin: 20px auto 30px auto;
63 font-size: 14px;
64 color: #888;
65 text-align: right;
66 }
67
68 div.footer a {
69 color: #888;
70 }
71
72 div.related {
73 display: none;
74 }
75
76 div.sphinxsidebar a {
77 color: #444;
78 text-decoration: none;
79 border-bottom: 1px dotted #999;
80 }
81
82 div.sphinxsidebar a:hover {
83 border-bottom: 1px solid #999;
84 }
85
86 div.sphinxsidebar {
87 font-size: 13px;
88 line-height: 1.5;
89 }
90
91 div.sphinxsidebarwrapper {
92 padding: 18px 10px;
93 }
94
95 div.sphinxsidebarwrapper p.logo {
96 padding: 0 0 20px 0;
97 margin: 0;
98 text-align: center;
99 }
100
101 div.sphinxsidebar h3,
102 div.sphinxsidebar h4 {
103 font-family: {{ font_family }};
104 color: #444;
105 font-size: 24px;
106 font-weight: normal;
107 margin: 0 0 5px 0;
108 padding: 0;
109 }
110
111 div.sphinxsidebar h4 {
112 font-size: 20px;
113 }
114
115 div.sphinxsidebar h3 a {
116 color: #444;
117 }
118
119 div.sphinxsidebar p.logo a,
120 div.sphinxsidebar h3 a,
121 div.sphinxsidebar p.logo a:hover,
122 div.sphinxsidebar h3 a:hover {
123 border: none;
124 }
125
126 div.sphinxsidebar p {
127 color: #555;
128 margin: 10px 0;
129 }
130
131 div.sphinxsidebar ul {
132 margin: 10px 0;
133 padding: 0;
134 color: #000;
135 }
136
137 div.sphinxsidebar input {
138 border: 1px solid #ccc;
139 font-family: {{ font_family }};
140 font-size: 14px;
141 }
142
143 div.sphinxsidebar form.search input[name="q"] {
144 width: 130px;
145 }
146
147 /* -- body styles ----------------------------------------------------------- */
148
149 a {
150 color: #185F6D;
151 text-decoration: underline;
152 }
153
154 a:hover {
155 color: #2794AA;
156 text-decoration: underline;
157 }
158
159 div.body h1,
160 div.body h2,
161 div.body h3,
162 div.body h4,
163 div.body h5,
164 div.body h6 {
165 font-family: {{ header_font_family }};
166 font-weight: normal;
167 margin: 30px 0px 10px 0px;
168 padding: 0;
169 color: black;
170 }
171
172 div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; }
173 div.body h2 { font-size: 180%; }
174 div.body h3 { font-size: 150%; }
175 div.body h4 { font-size: 130%; }
176 div.body h5 { font-size: 100%; }
177 div.body h6 { font-size: 100%; }
178
179 a.headerlink {
180 color: #ddd;
181 padding: 0 4px;
182 text-decoration: none;
183 }
184
185 a.headerlink:hover {
186 color: #444;
187 background: #eaeaea;
188 }
189
190 div.body p, div.body dd, div.body li {
191 line-height: 1.4em;
192 }
193
194 div.admonition {
195 background: #fafafa;
196 margin: 20px -30px;
197 padding: 10px 30px;
198 border-top: 1px solid #ccc;
199 border-bottom: 1px solid #ccc;
200 }
201
202 div.admonition tt.xref, div.admonition a tt {
203 border-bottom: 1px solid #fafafa;
204 }
205
206 dd div.admonition {
207 margin-left: -60px;
208 padding-left: 60px;
209 }
210
211 div.admonition p.admonition-title {
212 font-family: {{ font_family }};
213 font-weight: normal;
214 font-size: 24px;
215 margin: 0 0 10px 0;
216 padding: 0;
217 line-height: 1;
218 }
219
220 div.admonition p.last {
221 margin-bottom: 0;
222 }
223
224 div.highlight {
225 background-color: white;
226 }
227
228 dt:target, .highlight {
229 background: #FAF3E8;
230 }
231
232 div.note {
233 background-color: #eee;
234 border: 1px solid #ccc;
235 }
236
237 div.seealso {
238 background-color: #ffc;
239 border: 1px solid #ff6;
240 }
241
242 div.topic {
243 background-color: #eee;
244 }
245
246 p.admonition-title {
247 display: inline;
248 }
249
250 p.admonition-title:after {
251 content: ":";
252 }
253
254 pre, tt {
255 font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace;
256 font-size: 0.9em;
257 }
258
259 img.screenshot {
260 }
261
262 tt.descname, tt.descclassname {
263 font-size: 0.95em;
264 }
265
266 tt.descname {
267 padding-right: 0.08em;
268 }
269
270 img.screenshot {
271 -moz-box-shadow: 2px 2px 4px #eee;
272 -webkit-box-shadow: 2px 2px 4px #eee;
273 box-shadow: 2px 2px 4px #eee;
274 }
275
276 table.docutils {
277 border: 1px solid #888;
278 -moz-box-shadow: 2px 2px 4px #eee;
279 -webkit-box-shadow: 2px 2px 4px #eee;
280 box-shadow: 2px 2px 4px #eee;
281 }
282
283 table.docutils td, table.docutils th {
284 border: 1px solid #888;
285 padding: 0.25em 0.7em;
286 }
287
288 table.field-list, table.footnote {
289 border: none;
290 -moz-box-shadow: none;
291 -webkit-box-shadow: none;
292 box-shadow: none;
293 }
294
295 table.footnote {
296 margin: 15px 0;
297 width: 100%;
298 border: 1px solid #eee;
299 background: #fdfdfd;
300 font-size: 0.9em;
301 }
302
303 table.footnote + table.footnote {
304 margin-top: -15px;
305 border-top: none;
306 }
307
308 table.field-list th {
309 padding: 0 0.8em 0 0;
310 }
311
312 table.field-list td {
313 padding: 0;
314 }
315
316 table.footnote td.label {
317 width: 0px;
318 padding: 0.3em 0 0.3em 0.5em;
319 }
320
321 table.footnote td {
322 padding: 0.3em 0.5em;
323 }
324
325 dl {
326 margin: 0;
327 padding: 0;
328 }
329
330 dl dd {
331 margin-left: 30px;
332 }
333
334 blockquote {
335 margin: 0 0 0 30px;
336 padding: 0;
337 }
338
339 ul, ol {
340 margin: 10px 0 10px 30px;
341 padding: 0;
342 }
343
344 pre {
345 background: #E8EFF0;
346 padding: 7px 30px;
347 margin: 15px -30px;
348 line-height: 1.3em;
349 }
350
351 dl pre, blockquote pre, li pre {
352 margin-left: -60px;
353 padding-left: 60px;
354 }
355
356 dl dl pre {
357 margin-left: -90px;
358 padding-left: 90px;
359 }
360
361 tt {
362 background-color: #E8EFF0;
363 color: #222;
364 /* padding: 1px 2px; */
365 }
366
367 tt.xref, a tt {
368 background-color: #E8EFF0;
369 border-bottom: 1px solid white;
370 }
371
372 a.reference {
373 text-decoration: none;
374 border-bottom: 1px dotted #2BABC4;
375 }
376
377 a.reference:hover {
378 border-bottom: 1px solid #2794AA;
379 }
380
381 a.footnote-reference {
382 text-decoration: none;
383 font-size: 0.7em;
384 vertical-align: top;
385 border-bottom: 1px dotted #004B6B;
386 }
387
388 a.footnote-reference:hover {
389 border-bottom: 1px solid #6D4100;
390 }
391
392 a:hover tt {
393 background: #EEE;
394 }
+0
-4
docs/_themes/werkzeug/theme.conf
0 [theme]
1 inherit = basic
2 stylesheet = werkzeug.css
3 pygments_style = werkzeug_theme_support.WerkzeugStyle
+0
-85
docs/_themes/werkzeug_theme_support.py
0 from pygments.style import Style
1 from pygments.token import Keyword, Name, Comment, String, Error, \
2 Number, Operator, Generic, Whitespace, Punctuation, Other, Literal
3
4
5 class WerkzeugStyle(Style):
6 background_color = "#f8f8f8"
7 default_style = ""
8
9 styles = {
10 # No corresponding class for the following:
11 #Text: "", # class: ''
12 Whitespace: "underline #f8f8f8", # class: 'w'
13 Error: "#a40000 border:#ef2929", # class: 'err'
14 Other: "#000000", # class 'x'
15
16 Comment: "italic #8f5902", # class: 'c'
17 Comment.Preproc: "noitalic", # class: 'cp'
18
19 Keyword: "bold #004461", # class: 'k'
20 Keyword.Constant: "bold #004461", # class: 'kc'
21 Keyword.Declaration: "bold #004461", # class: 'kd'
22 Keyword.Namespace: "bold #004461", # class: 'kn'
23 Keyword.Pseudo: "bold #004461", # class: 'kp'
24 Keyword.Reserved: "bold #004461", # class: 'kr'
25 Keyword.Type: "bold #004461", # class: 'kt'
26
27 Operator: "#582800", # class: 'o'
28 Operator.Word: "bold #004461", # class: 'ow' - like keywords
29
30 Punctuation: "bold #000000", # class: 'p'
31
32 # because special names such as Name.Class, Name.Function, etc.
33 # are not recognized as such later in the parsing, we choose them
34 # to look the same as ordinary variables.
35 Name: "#000000", # class: 'n'
36 Name.Attribute: "#c4a000", # class: 'na' - to be revised
37 Name.Builtin: "#004461", # class: 'nb'
38 Name.Builtin.Pseudo: "#3465a4", # class: 'bp'
39 Name.Class: "#000000", # class: 'nc' - to be revised
40 Name.Constant: "#000000", # class: 'no' - to be revised
41 Name.Decorator: "#1B5C66", # class: 'nd' - to be revised
42 Name.Entity: "#ce5c00", # class: 'ni'
43 Name.Exception: "bold #cc0000", # class: 'ne'
44 Name.Function: "#000000", # class: 'nf'
45 Name.Property: "#000000", # class: 'py'
46 Name.Label: "#f57900", # class: 'nl'
47 Name.Namespace: "#000000", # class: 'nn' - to be revised
48 Name.Other: "#000000", # class: 'nx'
49 Name.Tag: "bold #004461", # class: 'nt' - like a keyword
50 Name.Variable: "#000000", # class: 'nv' - to be revised
51 Name.Variable.Class: "#000000", # class: 'vc' - to be revised
52 Name.Variable.Global: "#000000", # class: 'vg' - to be revised
53 Name.Variable.Instance: "#000000", # class: 'vi' - to be revised
54
55 Number: "#990000", # class: 'm'
56
57 Literal: "#000000", # class: 'l'
58 Literal.Date: "#000000", # class: 'ld'
59
60 String: "#4e9a06", # class: 's'
61 String.Backtick: "#4e9a06", # class: 'sb'
62 String.Char: "#4e9a06", # class: 'sc'
63 String.Doc: "italic #8f5902", # class: 'sd' - like a comment
64 String.Double: "#4e9a06", # class: 's2'
65 String.Escape: "#4e9a06", # class: 'se'
66 String.Heredoc: "#4e9a06", # class: 'sh'
67 String.Interpol: "#4e9a06", # class: 'si'
68 String.Other: "#4e9a06", # class: 'sx'
69 String.Regex: "#4e9a06", # class: 'sr'
70 String.Single: "#4e9a06", # class: 's1'
71 String.Symbol: "#4e9a06", # class: 'ss'
72
73 Generic: "#000000", # class: 'g'
74 Generic.Deleted: "#a40000", # class: 'gd'
75 Generic.Emph: "italic #000000", # class: 'ge'
76 Generic.Error: "#ef2929", # class: 'gr'
77 Generic.Heading: "bold #000080", # class: 'gh'
78 Generic.Inserted: "#00A000", # class: 'gi'
79 Generic.Output: "#888", # class: 'go'
80 Generic.Prompt: "#745334", # class: 'gp'
81 Generic.Strong: "bold #000000", # class: 'gs'
82 Generic.Subheading: "bold #800080", # class: 'gu'
83 Generic.Traceback: "bold #a40000", # class: 'gt'
84 }
0 ==================
1 Werkzeug Changelog
2 ==================
3
4 .. module:: werkzeug
5
6 This file lists all major changes in Werkzeug over the versions.
7 For API-breaking changes, have a look at :ref:`api-changes`, where
8 they are listed in detail.
0 Changelog
1 =========
92
103 .. include:: ../CHANGES.rst
11
12 .. _api-changes:
13
14 API Changes
15 ===========
16
17 `0.9`
18 - Soft-deprecated the :attr:`BaseRequest.data` and
19 :attr:`BaseResponse.data` attributes and introduced new methods
20 to interact with entity data. This will allow better APIs
21 for dealing with request and response entity
22 bodies. So far there is no deprecation warning but users are
23 strongly encouraged to update.
24 - The :class:`Headers` and :class:`EnvironHeaders` datastructures
25 are now designed to operate on unicode data. This is a backwards
26 incompatible change and was necessary for the Python 3 support.
27 - The :class:`Headers` object no longer supports in-place operations
28 through the old ``linked`` method. This has been removed without
29 replacement due to changes to the encoding model.
30
31 `0.6.2`
32 - renamed the `implicit_seqence_conversion` attribute of
33 the request object to `implicit_sequence_conversion`. Because
34 this is a feature that is typically unused and was only in there
35 for the 0.6 series we consider this a bug that does not require
36 backwards compatibility support which would be impossible to
37 properly implement.
38
39 `0.6`
40 - Old deprecations were removed.
41 - `cached_property.writeable` was deprecated.
42 - :meth:`BaseResponse.get_wsgi_headers` replaces the older
43 `BaseResponse.fix_headers` method. The older method stays
44 around for backwards compatibility reasons until 0.7.
45 - `BaseResponse.header_list` was deprecated. You should not
46 need this function, `get_wsgi_headers` and the `to_list`
47 method on the regular headers should serve as a replacement.
48 - Deprecated `BaseResponse.iter_encoded`'s charset parameter.
49 - :class:`LimitedStream` non-silent usage was deprecated.
50 - the `__repr__` of HTTP exceptions changed. This might break
51 doctests.
52
53 `0.5`
54 - Werkzeug switched away from wsgiref as the library for the
55 builtin web server.
56 - The `encoding` parameter for :class:`Template`\s is now called
57 `charset`. The older one will work for another two versions
58 but warn with a :exc:`DeprecationWarning`.
59 - The :class:`Client` has cookie support now which is enabled
60 by default.
61 - :meth:`BaseResponse._get_file_stream` is now passed more parameters
62 to make the function more useful. In 0.6 the old way to invoke
63 the method will no longer work. To support both newer and older
64 Werkzeug versions you can add all arguments to the signature and
65 provide default values for each of them.
66 - :func:`url_decode` no longer supports both `&` and `;` as
67 separators. The separator must now be specified explicitly.
68 - The request object is now enforced to be read-only for all
69 attributes. If your code relies on modifying some values,
70 make sure to create copies using the mutable counterparts!
71 - Some data structures that were only used on request objects are
72 now immutable as well. (:class:`Authorization` / :class:`Accept`
73 and subclasses)
74 - `CacheControl` was split up into :class:`RequestCacheControl`
75 and :class:`ResponseCacheControl`, the former being immutable.
76 The old class will go away in 0.6.
77 - undocumented `werkzeug.test.File` was replaced by
78 :class:`FileWrapper`.
79 - it's no longer possible to pass dicts inside the `data` dict
80 in :class:`Client`. Use tuples instead.
81 - It's safe to modify the return value of :meth:`MultiDict.getlist`
82 and methods that return lists in the :class:`MultiDict` now. The
83 class creates copies instead of revealing the internal lists.
84 However, :class:`MultiDict.setlistdefault` still (and intentionally)
85 returns the internal list for modifications.
86
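The 0.5 copy-on-`getlist` behavior described above can be sketched with a toy multi-dict. This is an illustrative stand-in, not Werkzeug's actual :class:`MultiDict` implementation:

```python
class MultiDict:
    """Toy multi-dict illustrating the 0.5 behavior: getlist hands
    out a copy of the internal list, while setlistdefault
    intentionally exposes the internal list for modification."""

    def __init__(self):
        self._lists = {}

    def add(self, key, value):
        self._lists.setdefault(key, []).append(value)

    def getlist(self, key):
        # Return a copy so callers cannot mutate internal state.
        return list(self._lists.get(key, []))

    def setlistdefault(self, key, default=None):
        # Intentionally returns the internal list.
        return self._lists.setdefault(key, list(default or []))


d = MultiDict()
d.add("a", 1)
d.getlist("a").append(2)         # mutates only the copy
d.setlistdefault("a").append(3)  # mutates internal storage
assert d.getlist("a") == [1, 3]
```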
87 `0.3`
88 - Werkzeug 0.3 will be the last release with Python 2.3 compatibility.
89 - The `environ_property` is now read-only by default. This decision was
90 made because the request in general should be considered read-only.
91
92 `0.2`
93 - The `BaseReporterStream` is now part of the contrib module, the
94 new module is `werkzeug.contrib.reporterstream`. Starting with
95 `0.3`, the old import will not work any longer.
96 - `RequestRedirect` now uses a 301 status code. Previously a 302
97 status code was used incorrectly. If you want to continue using
98 this 302 code, use ``response = redirect(e.new_url, 302)``.
99 - `lazy_property` is now called `cached_property`. The alias for
100 the old name will disappear in Werkzeug 0.3.
101 - `match` can now raise `MethodNotAllowed` if configured for
102 methods and there was no method for that request.
103 - The `response_body` attribute on the response object is now called
104 `data`. With Werkzeug 0.3 the old name will not work any longer.
105 - The file-like methods on the response object are deprecated. If
106 you want to use the response object as file like object use the
107 `Response` class or a subclass of `BaseResponse` and mix the new
108 `ResponseStreamMixin` class and use `response.stream`.
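The `cached_property` mentioned in the 0.2 notes (the renamed `lazy_property`) boils down to a non-data descriptor that memoizes per instance. A minimal sketch, not Werkzeug's actual implementation:

```python
class cached_property:
    """Run the wrapped function once per instance, then store the
    result in the instance __dict__ so later lookups bypass the
    descriptor entirely."""

    def __init__(self, func):
        self.func = func
        self.__doc__ = func.__doc__

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        value = self.func(obj)
        # Storing under the same name shadows this non-data
        # descriptor, so the function is not called again.
        obj.__dict__[self.name] = value
        return value


class Request:
    calls = 0

    @cached_property
    def data(self):
        Request.calls += 1
        return b"body"


r = Request()
assert r.data == b"body"
assert r.data == b"body"
assert Request.calls == 1  # computed only once
```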
0 # -*- coding: utf-8 -*-
1 #
2 # Werkzeug documentation build configuration file, created by
3 # sphinx-quickstart on Fri Jan 16 23:10:43 2009.
4 #
5 # This file is execfile()d with the current directory set to its containing dir.
6 #
7 # The contents of this file are pickled, so don't put values in the namespace
8 # that aren't pickleable (module imports are okay, they're removed automatically).
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
0 from pallets_sphinx_themes import get_version
1 from pallets_sphinx_themes import ProjectLink
152
16 import sys, os
3 # Project --------------------------------------------------------------
174
18 # If your extensions are in another directory, add it here. If the directory
19 # is relative to the documentation root, use os.path.abspath to make it
20 # absolute, like shown here.
21 sys.path.append(os.path.abspath('.'))
22 sys.path.append(os.path.abspath('_themes'))
5 project = "Werkzeug"
6 copyright = "2007 Pallets"
7 author = "Pallets"
8 release, version = get_version("Werkzeug")
239
24 # General configuration
25 # ---------------------
10 # General --------------------------------------------------------------
2611
27 # Add any Sphinx extension module names here, as strings. They can be extensions
28 # coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
29 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx',
30 'sphinx.ext.doctest', 'werkzeugext']
12 master_doc = "index"
13 extensions = [
14 "sphinx.ext.autodoc",
15 "sphinx.ext.intersphinx",
16 "pallets_sphinx_themes",
17 "sphinx_issues",
18 ]
19 intersphinx_mapping = {"python": ("https://docs.python.org/3/", None)}
20 issues_github_path = "pallets/werkzeug"
3121
32 # Add any paths that contain templates here, relative to this directory.
33 templates_path = ['_templates']
22 # HTML -----------------------------------------------------------------
3423
35 # The suffix of source filenames.
36 source_suffix = '.rst'
24 html_theme = "werkzeug"
25 html_context = {
26 "project_links": [
27 ProjectLink("Donate to Pallets", "https://www.palletsprojects.com/donate"),
28 ProjectLink("Werkzeug Website", "https://palletsprojects.com/p/werkzeug/"),
29 ProjectLink("PyPI releases", "https://pypi.org/project/Werkzeug/"),
30 ProjectLink("Source Code", "https://github.com/pallets/werkzeug/"),
31 ProjectLink("Issue Tracker", "https://github.com/pallets/werkzeug/issues/"),
32 ]
33 }
34 html_sidebars = {
35 "index": ["project.html", "localtoc.html", "searchbox.html"],
36 "**": ["localtoc.html", "relations.html", "searchbox.html"],
37 }
38 singlehtml_sidebars = {"index": ["project.html", "localtoc.html"]}
39 html_static_path = ["_static"]
40 html_favicon = "_static/favicon.ico"
41 html_logo = "_static/werkzeug.png"
42 html_title = "Werkzeug Documentation ({})".format(version)
43 html_show_sourcelink = False
3744
38 # The encoding of source files.
39 #source_encoding = 'utf-8'
45 # LaTeX ----------------------------------------------------------------
4046
41 # The master toctree document.
42 master_doc = 'index'
43
44 # General information about the project.
45 project = u'Werkzeug'
46 copyright = u'2011, The Werkzeug Team'
47
48 # The version info for the project you're documenting, acts as replacement for
49 # |version| and |release|, also used in various other places throughout the
50 # built documents.
51
52 import re
53 try:
54 import werkzeug
55 except ImportError:
56 sys.path.append(os.path.abspath('../'))
57 from werkzeug import __version__ as release
58 if 'dev' in release:
59 release = release[:release.find('dev') + 3]
60 if release == 'unknown':
61 version = release
62 else:
63 version = re.match(r'\d+\.\d+(?:\.\d+)?', release).group()
64
65 # The language for content autogenerated by Sphinx. Refer to documentation
66 # for a list of supported languages.
67 #language = None
68
69 # There are two options for replacing |today|: either, you set today to some
70 # non-false value, then it is used:
71 #today = ''
72 # Else, today_fmt is used as the format for a strftime call.
73 #today_fmt = '%B %d, %Y'
74
75 # List of documents that shouldn't be included in the build.
76 #unused_docs = []
77
78 # List of directories, relative to source directory, that shouldn't be searched
79 # for source files.
80 exclude_trees = ['_build']
81
82 # The reST default role (used for this markup: `text`) to use for all documents.
83 #default_role = None
84
85 # If true, '()' will be appended to :func: etc. cross-reference text.
86 add_function_parentheses = True
87
88 # If true, the current module name will be prepended to all description
89 # unit titles (such as .. function::).
90 #add_module_names = True
91
92 # If true, sectionauthor and moduleauthor directives will be shown in the
93 # output. They are ignored by default.
94 #show_authors = False
95
96 # The name of the Pygments (syntax highlighting) style to use.
97 pygments_style = 'werkzeug_theme_support.WerkzeugStyle'
98
99 # doctest setup code
100 doctest_global_setup = '''\
101 from werkzeug import *
102 '''
103
104
105 # Options for HTML output
106 # -----------------------
107
108 html_theme = 'werkzeug'
109 html_theme_path = ['_themes']
110
111 # The name for this set of Sphinx documents. If None, it defaults to
112 # "<project> v<release> documentation".
113 #html_title = None
114
115 # A shorter title for the navigation bar. Default is the same as html_title.
116 #html_short_title = None
117
118 # The name of an image file (relative to this directory) to place at the top
119 # of the sidebar.
120 #html_logo = None
121
122 # The name of an image file (within the static path) to use as favicon of the
123 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
124 # pixels large.
125 #html_favicon = None
126
127 # Add any paths that contain custom static files (such as style sheets) here,
128 # relative to this directory. They are copied after the builtin static files,
129 # so a file named "default.css" will overwrite the builtin "default.css".
130 html_static_path = ['_static']
131
132 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
133 # using the given strftime format.
134 #html_last_updated_fmt = '%b %d, %Y'
135
136 # If true, SmartyPants will be used to convert quotes and dashes to
137 # typographically correct entities.
138 #html_use_smartypants = True
139
140 # Custom sidebar templates, maps document names to template names.
141 html_sidebars = {
142 'index': ['sidebarlogo.html', 'sidebarintro.html', 'sourcelink.html',
143 'searchbox.html'],
144 '**': ['sidebarlogo.html', 'localtoc.html', 'relations.html',
145 'sourcelink.html', 'searchbox.html']
146 }
147
148 # Additional templates that should be rendered to pages, maps page names to
149 # template names.
150 #html_additional_pages = {}
151
152 # If false, no module index is generated.
153 #html_use_modindex = True
154
155 # If false, no index is generated.
156 #html_use_index = True
157
158 # If true, the index is split into individual pages for each letter.
159 #html_split_index = False
160
161 # If true, links to the reST sources are added to the pages.
162 #html_show_sourcelink = True
163
164 # If true, an OpenSearch description file will be output, and all pages will
165 # contain a <link> tag referring to it. The value of this option must be the
166 # base URL from which the finished HTML is served.
167 #html_use_opensearch = ''
168
169 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
170 #html_file_suffix = ''
171
172 # Output file base name for HTML help builder.
173 htmlhelp_basename = 'Werkzeugdoc'
174
175
176 # Options for LaTeX output
177 # ------------------------
178
179 # The paper size ('letter' or 'a4').
180 latex_paper_size = 'a4'
181
182 # Grouping the document tree into LaTeX files. List of tuples
183 # (source start file, target name, title, author, document class [howto/manual]).
18447 latex_documents = [
185 ('latexindex', 'Werkzeug.tex', ur'Werkzeug Documentation',
186 ur'The Werkzeug Team', 'manual'),
48 (master_doc, "Werkzeug-{}.tex".format(version), html_title, author, "manual")
18749 ]
188
189 # Additional stuff for LaTeX
190 latex_elements = {
191 'fontpkg': r'\usepackage{mathpazo}',
192 'papersize': 'a4paper',
193 'pointsize': '12pt',
194 'preamble': r'''
195 \usepackage{werkzeugstyle}
196
197 % i hate you latex, here too
198 \DeclareUnicodeCharacter{2603}{\\N\{SNOWMAN\}}
199 '''
200 }
201
202 latex_use_parts = True
203
204 latex_additional_files = ['werkzeugstyle.sty', 'logo.pdf']
205
206 latex_use_modindex = False
207
208
209 # Example configuration for intersphinx: refer to the Python standard library.
210 intersphinx_mapping = {
211 'http://docs.python.org/dev': None,
212 'http://docs.sqlalchemy.org/en/latest/': None
213 }
+0
-85
docs/contents.rst.inc
0 Getting Started
1 ---------------
2
3 If you are new to Werkzeug or WSGI development in general you
4 should start here.
5
6 .. toctree::
7 :maxdepth: 2
8
9 installation
10 transition
11 tutorial
12 levels
13 quickstart
14 python3
15
16 Serving and Testing
17 -------------------
18
19 The development server and testing support and management script
20 utilities are covered here:
21
22 .. toctree::
23 :maxdepth: 2
24
25 serving
26 test
27 debug
28
29 Reference
30 ---------
31
32 .. toctree::
33 :maxdepth: 2
34
35 wrappers
36 routing
37 wsgi
38 filesystem
39 http
40 datastructures
41 utils
42 urls
43 local
44 middlewares
45 exceptions
46
47 Deployment
48 ----------
49
50 This section covers running your application in production on a web
51 server such as Apache or lighttpd.
52
53 .. toctree::
54 :maxdepth: 3
55
56 deployment/index
57
58 Contributed Modules
59 -------------------
60
61 A lot of useful code contributed by the community is shipped with Werkzeug
62 as part of the `contrib` module:
63
64 .. toctree::
65 :maxdepth: 3
66
67 contrib/index
68
69 Additional Information
70 ----------------------
71
72 .. toctree::
73 :maxdepth: 2
74
75 terms
76 unicode
77 request_data
78 changes
79
80 If you can’t find the information you’re looking for, have a look at the
81 index or try to find it using the search function:
82
83 * :ref:`genindex`
84 * :ref:`search`
00 ================
11 Atom Syndication
22 ================
3
4 .. warning::
5 .. deprecated:: 0.15
6 This will be removed in version 1.0. Use a dedicated feed
7 library instead.
38
49 .. automodule:: werkzeug.contrib.atom
510
00 =====
11 Cache
22 =====
3
4 .. warning::
5 .. deprecated:: 0.15
6 This will be removed in version 1.0. It has been extracted to
7 `cachelib <https://github.com/pallets/cachelib>`_.
38
49 .. automodule:: werkzeug.contrib.cache
510
0 ======
1 Fixers
2 ======
3
40 .. automodule:: werkzeug.contrib.fixers
5
6 .. autoclass:: CGIRootFix
7
8 .. autoclass:: PathInfoFromRequestUriFix
9
10 .. autoclass:: ProxyFix
11 :members:
12
13 .. autoclass:: HeaderRewriterFix
14
15 .. autoclass:: InternetExplorerFix
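The idea behind a ``ProxyFix``-style fixer can be sketched as a plain WSGI middleware that trusts a single proxy hop and rewrites the environ from the ``X-Forwarded-*`` headers. This is an illustrative stand-in, not Werkzeug's actual ``ProxyFix``:

```python
def proxy_fix(app):
    """Rewrite the WSGI environ from X-Forwarded-* headers set by
    one trusted reverse proxy, then call the wrapped app."""

    def middleware(environ, start_response):
        forwarded_for = environ.get("HTTP_X_FORWARDED_FOR", "")
        if forwarded_for:
            # With one trusted proxy, the last entry is the client.
            environ["REMOTE_ADDR"] = forwarded_for.split(",")[-1].strip()
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if proto:
            environ["wsgi.url_scheme"] = proto
        return app(environ, start_response)

    return middleware


def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["REMOTE_ADDR"].encode()]


wrapped = proxy_fix(app)
environ = {
    "REMOTE_ADDR": "10.0.0.1",  # the proxy's address
    "HTTP_X_FORWARDED_FOR": "203.0.113.7",
    "wsgi.url_scheme": "http",
}
body = wrapped(environ, lambda status, headers: None)
assert body == [b"203.0.113.7"]
```

Only enable such a fixer when the app actually sits behind a proxy; otherwise clients can spoof their address through these headers.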
11 Contributed Modules
22 ===================
33
4 A lot of useful code contributed by the community is shipped with Werkzeug
5 as part of the `contrib` module:
4 Some useful code contributed by the community is shipped with Werkzeug
5 as part of the ``contrib`` module.
6
7 .. warning::
8 The code in this module is being deprecated, to be moved or removed
9 in version 1.0. Be sure to pay attention to any deprecation warnings
10 and update your code appropriately.
611
712 .. toctree::
813 :maxdepth: 2
0 =======
10 Iter IO
21 =======
2
3 .. warning::
4 .. deprecated:: 0.15
5 This will be removed in version 1.0.
36
47 .. automodule:: werkzeug.contrib.iterio
58
0 ==========================
10 Lint Validation Middleware
21 ==========================
32
4 .. currentmodule:: werkzeug.contrib.lint
5
6 .. automodule:: werkzeug.contrib.lint
7
8 .. autoclass:: LintMiddleware
3 .. warning::
4 ``werkzeug.contrib.lint`` has moved to
5 :mod:`werkzeug.middleware.lint`. The old import is deprecated as of
6 version 0.15 and will be removed in version 1.0.
0 =========================
10 WSGI Application Profiler
21 =========================
32
4 .. automodule:: werkzeug.contrib.profiler
3 .. warning::
4 ``werkzeug.contrib.profiler`` has moved to
5 :mod:`werkzeug.middleware.profiler`. The old import is deprecated as
6 of version 0.15 and will be removed in version 1.0.
57
6 .. autoclass:: MergeStream
7
8 .. autoclass:: ProfilerMiddleware
9
10 .. autofunction:: make_action
8 .. autoclass:: werkzeug.contrib.profiler.MergeStream
00 =============
11 Secure Cookie
22 =============
3
4 .. warning::
5 .. deprecated:: 0.15
6 This will be removed in version 1.0. It has moved to
7 https://github.com/pallets/secure-cookie.
38
49 .. automodule:: werkzeug.contrib.securecookie
510
11 Sessions
22 ========
33
4 .. warning::
5 .. deprecated:: 0.15
6 This will be removed in version 1.0. It has moved to
7 https://github.com/pallets/secure-cookie.
8
49 .. automodule:: werkzeug.contrib.sessions
510
6 .. testsetup::
7
8 from werkzeug.contrib.sessions import *
911
1012 Reference
1113 =========
1315 .. autoclass:: Session
1416
1517 .. attribute:: sid
16
18
1719 The session ID as string.
1820
1921 .. attribute:: new
11 Extra Wrappers
22 ==============
33
4 .. warning::
5 .. deprecated:: 0.15
6 All classes in this module have been moved or deprecated and
7 will be removed in version 1.0. Check the docs for the status
8 of each class.
9
410 .. automodule:: werkzeug.contrib.wrappers
511
612 .. autoclass:: JSONRequestMixin
7 :members:
813
914 .. autoclass:: ProtobufRequestMixin
1015 :members:
0 ======================
10 Debugging Applications
21 ======================
32
43 .. module:: werkzeug.debug
54
6 Depending on the WSGI gateway/server, exceptions are handled differently.
7 But most of the time, exceptions go to stderr or the error log.
5 Depending on the WSGI gateway/server, exceptions are handled
6 differently. Most of the time, exceptions go to stderr or the error log,
7 and a generic "500 Internal Server Error" message is displayed.
88
99 Since this is not the best debugging environment, Werkzeug provides a
10 WSGI middleware that renders nice debugging tracebacks, optionally with an
11 AJAX based debugger (which allows to execute code in the context of the
12 traceback's frames).
10 WSGI middleware that renders nice tracebacks, optionally with an
11 interactive debug console to execute code in any frame.
1312
14 The interactive debugger however does not work in forking environments
15 which makes it nearly impossible to use on production servers. Also the
16 debugger allows the execution of arbitrary code which makes it a major
17 security risk and **must never be used on production machines** because of
18 that. **We cannot stress this enough. Do not enable this in
19 production.**
13 .. danger::
14
15 The debugger allows the execution of arbitrary code which makes it a
16 major security risk. **The debugger must never be used on production
17 machines. We cannot stress this enough. Do not enable the debugger
18 in production.**
19
20 .. note::
21
22 The interactive debugger does not work in forking environments, such
23 as a server that starts multiple processes. Most such environments
24 are production servers, where the debugger should not be enabled
25 anyway.
26
2027
2128 Enabling the Debugger
22 =====================
29 ---------------------
2330
24 You can enable the debugger by wrapping the application in a
25 :class:`DebuggedApplication` middleware. Additionally there are
26 parameters to the :func:`run_simple` function to enable it because this
27 is a common task during development.
31 Enable the debugger by wrapping the application with the
32 :class:`DebuggedApplication` middleware. Alternatively, you can pass
33 ``use_debugger=True`` to :func:`run_simple` and it will do that for you.
2834
2935 .. autoclass:: DebuggedApplication
3036
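
A minimal sketch of both approaches (the failing view is contrived purely to trigger the traceback page):

```python
from werkzeug.debug import DebuggedApplication

def application(environ, start_response):
    # A deliberately broken view so the traceback page appears.
    raise RuntimeError("example error")

# Wrap the app; evalex=True additionally enables the interactive console.
application = DebuggedApplication(application, evalex=True)

# Shortcut with the development server (does the wrapping for you):
# run_simple("localhost", 5000, application, use_debugger=True, use_evalex=True)
```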
37
3138 Using the Debugger
32 ==================
39 ------------------
3340
34 Once enabled and an error happens during a request you will see a detailed
35 traceback instead of a general "internal server error". If you have the
36 `evalex` feature enabled you can also get a traceback for every frame in
37 the traceback by clicking on the console icon.
41 Once enabled, when an error happens during a request you will see a
42 detailed traceback instead of a generic "internal server error". The
43 traceback is still output to the terminal as well.
3844
39 Once clicked a console opens where you can execute Python code in:
45 The error message is displayed at the top. Clicking it jumps to the
46 bottom of the traceback. Frames that represent user code, as opposed to
47 built-ins or installed packages, are highlighted blue. Clicking a
48 frame will show more lines for context, clicking again will hide them.
49
50 If you have the ``evalex`` feature enabled you can get a console for
51 every frame in the traceback by hovering over a frame and clicking the
52 console icon that appears at the right. Once clicked, a console opens
53 where you can execute Python code:
4054
4155 .. image:: _static/debug-screenshot.png
4256 :alt: a screenshot of the interactive debugger
4458
4559 Inside the interactive consoles you can execute any kind of Python code.
4660 Unlike regular Python consoles the output of the object reprs is colored
47 and stripped to a reasonable size by default. If the output is longer
61 and stripped to a reasonable size by default. If the output is longer
4862 than what the console decides to display, a small plus sign is added to
4963 the repr, and clicking it expands the output.
5064
5165 To display all variables that are defined in the current frame you can
52 use the `dump()` function. You can call it without arguments to get a
66 use the ``dump()`` function. You can call it without arguments to get a
5367 detailed list of all variables and their values, or with an object as
5468 argument to get a detailed list of all the attributes it has.
5569
70
5671 Debugger PIN
57 ============
72 ------------
5873
59 Starting with Werkzeug 0.11 the debugger is additionally protected by a
60 PIN. This is a security helper to make it less likely for the debugger to
61 be exploited in production as it has happened to people to keep the
62 debugger active. The PIN based authentication is enabled by default.
74 Starting with Werkzeug 0.11 the debug console is protected by a PIN.
75 This is a security helper to make it less likely for the debugger to be
76 exploited if you forget to disable it when deploying to production. The
77 PIN based authentication is enabled by default.
6378
64 When the debugger comes up, on first usage it will prompt for a PIN that
65 is printed to the command line. The PIN is generated in a stable way that
66 is specific to the project. In some situations it might be not possible
67 to generate a stable PIN between restarts in which case an explicit PIN
68 can be provided through the environment variable ``WERKZEUG_DEBUG_PIN``.
69 This can be set to a number and will become the PIN. This variable can
70 also be set to the value ``off`` to disable the PIN check entirely.
79 The first time a console is opened, a dialog will prompt for a PIN that
80 is printed to the command line. The PIN is generated in a stable way
81 that is specific to the project. An explicit PIN can be provided through
82 the environment variable ``WERKZEUG_DEBUG_PIN``. This can be set to a
83 number and will become the PIN. This variable can also be set to the
84 value ``off`` to disable the PIN check entirely.
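
For example, the variable can be set in the environment that starts your server (the values shown are placeholders):

```shell
# Use a fixed PIN for the debug console
export WERKZEUG_DEBUG_PIN=1234

# Or disable the PIN check entirely (never do this in production)
export WERKZEUG_DEBUG_PIN=off
echo "$WERKZEUG_DEBUG_PIN"
```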
7185
72 If the PIN is entered too many times incorrectly the server needs to be
86 If an incorrect PIN is entered too many times the server needs to be
7387 restarted.
7488
75 **This feature is not supposed to entirely secure the debugger. It's
76 intended to make it harder for an attacker to exploit the debugger. Never
77 enable the debugger in production.**
89 **This feature is not meant to entirely secure the debugger. It is
90 intended to make it harder for an attacker to exploit the debugger.
91 Never enable the debugger in production.**
92
7893
7994 Pasting Errors
80 ==============
95 --------------
8196
82 If you click on the `Traceback` title, the traceback switches over to a text
83 based one. The text based one can be pasted to `gist.github.com <https://gist.github.com>`_ with one
84 click.
85
86
87 .. _paste.pocoo.org: https://gist.github.com
88
97 If you click on the "Traceback (most recent call last)" header, the
98 view switches to a traditional text-based traceback. The text can be
99 copied, or automatically pasted to `gist.github.com
100 <https://gist.github.com>`_ with one click.
99 `AppEngine`_; there, however, execution does happen in a CGI-like
1010 environment anyway, so the application's performance is unaffected.
1111
12 .. _AppEngine: http://code.google.com/appengine/
12 .. _AppEngine: https://cloud.google.com/appengine/
1313
1414 Creating a `.cgi` file
1515 ======================
3131 ============
3232
3333 Usually there are two ways to configure the server. Either just copy the
34 `.cgi` into a `cgi-bin` (and use `mod_rerwite` or something similar to
34 `.cgi` into a `cgi-bin` (and use `mod_rewrite` or something similar to
3535 rewrite the URL) or let the server point to the file directly.
3636
37 In Apache for example you can put a like like this into the config:
37 In Apache for example you can put something like this into the config:
3838
3939 .. sourcecode:: apache
4040
1717 #!/usr/bin/python
1818 from flup.server.fcgi import WSGIServer
1919 from yourapplication import make_app
20
20
2121 if __name__ == '__main__':
2222 application = make_app()
2323 WSGIServer(application).run()
6464 "^(/.*)$" => "/yourapplication.fcgi$1"
6565
6666 Remember to enable the FastCGI, alias and rewrite modules. This configuration
67 binds the application to `/yourapplication`. If you want the application to
68 work in the URL root you have to work around a lighttpd bug with the
69 :class:`~werkzeug.contrib.fixers.LighttpdCGIRootFix` middleware.
67 binds the application to `/yourapplication`.
7068
71 Make sure to apply it only if you are mounting the application the URL
72 root. Also, see the Lighty docs for more information on `FastCGI and Python
73 <http://redmine.lighttpd.net/wiki/lighttpd/Docs:ModFastCGI>`_ (note that
74 explicitly passing a socket to run() is no longer necessary).
69 See the Lighty docs for more information on `FastCGI and Python
70 <https://redmine.lighttpd.net/projects/lighttpd/wiki/Docs_ModFastCGI>`_.
7571
7672 Configuring nginx
7773 =================
136132 web server.
137133 - different python interpreters being used.
138134
139 .. _lighttpd: http://www.lighttpd.net/
140 .. _nginx: http://nginx.net/
141 .. _flup: http://trac.saddi.com/flup
135 .. _lighttpd: https://www.lighttpd.net/
136 .. _nginx: https://nginx.org/
137 .. _flup: https://pypi.org/project/flup/
33
44 If you are using the `Apache`_ web server you should consider using `mod_wsgi`_.
55
6 .. _Apache: http://httpd.apache.org/
6 .. _Apache: https://httpd.apache.org/
77
88 Installing `mod_wsgi`
99 =====================
7373 </Directory>
7474 </VirtualHost>
7575
76 For more information consult the `mod_wsgi wiki`_.
77
78 .. _mod_wsgi: http://code.google.com/p/modwsgi/
79 .. _installation instructions: http://code.google.com/p/modwsgi/wiki/QuickInstallationGuide
80 .. _virtual python: http://pypi.python.org/pypi/virtualenv
81 .. _mod_wsgi wiki: http://code.google.com/p/modwsgi/wiki/
76 .. _mod_wsgi: https://modwsgi.readthedocs.io/en/develop/
77 .. _installation instructions: https://modwsgi.readthedocs.io/en/develop/installation.html
78 .. _virtual python: https://pypi.org/project/virtualenv/
2828 If you now start the file the server will listen on `localhost:8080`. Keep
2929 in mind that WSGI applications behave slightly different for proxied setups.
3030 If you have not developed your application for proxying in mind, you can
31 apply the :class:`~werkzeug.contrib.fixers.ProxyFix` middleware.
31 apply the :class:`~werkzeug.middleware.proxy_fix.ProxyFix` middleware.
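
A minimal sketch of applying it (the application shown is a stand-in for your own; it trusts one proxy for the ``X-Forwarded-For`` and ``X-Forwarded-Proto`` headers):

```python
from werkzeug.middleware.proxy_fix import ProxyFix

def application(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello from behind a proxy!\n"]

# Trust X-Forwarded-For and X-Forwarded-Proto from one proxy in front.
application = ProxyFix(application, x_for=1, x_proto=1)
```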
3232
3333
3434 Configuring nginx
4242 proxy_set_header Host $host;
4343 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
4444 proxy_pass http://127.0.0.1:8080;
45 proxy_redirect default;
45 proxy_redirect default;
4646 }
4747
4848 Since Nginx doesn't start your server for you, you have to do it yourself. You
4242 .. autoexception:: ExpectationFailed
4343
4444 .. autoexception:: ImATeapot
45
46 .. autoexception:: FailedDependency
4547
4648 .. autoexception:: PreconditionRequired
4749
99101 raised which behaves like a :exc:`KeyError` but also a :exc:`BadRequest`
100102 exception.
101103
104 .. autoexception:: BadRequestKeyError
105
102106
103107 Simple Aborting
104108 ===============
139143 exceptions module.
140144
141145 You can override the default description in the constructor with the
142 `description` parameter (it's the first argument for all exceptions
143 except of the :exc:`MethodNotAllowed` which accepts a list of allowed methods
144 as first argument)::
146 ``description`` parameter::
145147
146 raise BadRequest('Request failed because X was not present')
148 raise BadRequest(description='Request failed because X was not present')
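
A subclass can also bake in such a description as its default (the class name here is invented for illustration):

```python
from werkzeug.exceptions import BadRequest

class XNotPresent(BadRequest):
    # Overrides the default BadRequest description; the status code stays 400.
    description = "Request failed because X was not present"

error = XNotPresent()
print(error.code, error.description)  # 400 Request failed because X was not present
```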
0 ======================
1 Documentation Overview
2 ======================
0 Werkzeug
1 ========
32
4 Welcome to the Werkzeug |version| documentation.
3 *werkzeug* German noun: "tool".
4 Etymology: *werk* ("work"), *zeug* ("stuff")
55
6 .. include:: contents.rst.inc
6 Werkzeug is a comprehensive `WSGI`_ web application library. It began as
7 a simple collection of various utilities for WSGI applications and has
8 become one of the most advanced WSGI utility libraries.
9
10 Werkzeug is Unicode aware and doesn't enforce any dependencies. It is up
11 to the developer to choose a template engine, database adapter, and even
12 how to handle requests.
13
14 .. _WSGI: https://wsgi.readthedocs.io/en/latest/
15
16
17 Getting Started
18 ---------------
19
20 .. toctree::
21 :maxdepth: 2
22
23 installation
24 transition
25 tutorial
26 levels
27 quickstart
28
29
30 Serving and Testing
31 -------------------
32
33 .. toctree::
34 :maxdepth: 2
35
36 serving
37 test
38 debug
39
40
41 Reference
42 ---------
43
44 .. toctree::
45 :maxdepth: 2
46
47 wrappers
48 routing
49 wsgi
50 filesystem
51 http
52 datastructures
53 utils
54 urls
55 local
56 middleware/index
57 exceptions
58
59
60 Deployment
61 ----------
62
63 .. toctree::
64 :maxdepth: 3
65
66 deployment/index
67
68
69 Contributed Modules
70 -------------------
71
72 .. toctree::
73 :maxdepth: 3
74
75 contrib/index
76
77
78 Additional Information
79 ----------------------
80
81 .. toctree::
82 :maxdepth: 2
83
84 terms
85 unicode
86 request_data
87 changes
0 ============
0 .. _installation:
1
12 Installation
23 ============
34
4 Werkzeug requires at least Python 2.6 to work correctly. If you do need
5 to support an older version you can download an older version of Werkzeug
6 though we strongly recommend against that. Werkzeug currently has
7 experimental support for Python 3. For more information about the
8 Python 3 support see :ref:`python3`.
5
6 Python Version
7 --------------
8
9 We recommend using the latest version of Python 3. Werkzeug supports
10 Python 3.4 and newer and Python 2.7.
911
1012
11 Installing a released version
12 =============================
13 Dependencies
14 ------------
1315
14 As a Python egg (via easy_install or pip)
15 -----------------------------------------
16 Werkzeug does not have any direct dependencies.
1617
17 You can install the most recent Werkzeug version using `easy_install`_::
1818
19 easy_install Werkzeug
19 Optional dependencies
20 ~~~~~~~~~~~~~~~~~~~~~
2021
21 Alternatively you can also use pip::
22 These distributions will not be installed automatically. Werkzeug will
23 detect and use them if you install them.
24
25 * `SimpleJSON`_ is a fast JSON implementation that is compatible with
26 Python's ``json`` module. It is preferred for JSON operations if it is
27 installed.
28 * `termcolor`_ provides request log highlighting when using the
29 development server.
30 * `Watchdog`_ provides a faster, more efficient reloader for the
31 development server.
32
33 .. _SimpleJSON: https://simplejson.readthedocs.io/en/latest/
34 .. _termcolor: https://pypi.org/project/termcolor/
35 .. _Watchdog: https://pypi.org/project/watchdog/
36
37
38 Virtual environments
39 --------------------
40
41 Use a virtual environment to manage the dependencies for your project,
42 both in development and in production.
43
44 What problem does a virtual environment solve? The more Python
45 projects you have, the more likely it is that you need to work with
46 different versions of Python libraries, or even Python itself. Newer
47 versions of libraries for one project can break compatibility in
48 another project.
49
50 Virtual environments are independent groups of Python libraries, one for
51 each project. Packages installed for one project will not affect other
52 projects or the operating system's packages.
53
54 Python 3 comes bundled with the :mod:`venv` module to create virtual
55 environments. If you're using a modern version of Python, you can
56 continue on to the next section.
57
58 If you're using Python 2, see :ref:`install-install-virtualenv` first.
59
60 .. _install-create-env:
61
62 Create an environment
63 ~~~~~~~~~~~~~~~~~~~~~
64
65 Create a project folder and a :file:`venv` folder within:
66
67 .. code-block:: sh
68
69 mkdir myproject
70 cd myproject
71 python3 -m venv venv
72
73 On Windows:
74
75 .. code-block:: bat
76
77 py -3 -m venv venv
78
79 If you needed to install virtualenv because you are on an older version
80 of Python, use the following command instead:
81
82 .. code-block:: sh
83
84 virtualenv venv
85
86 On Windows:
87
88 .. code-block:: bat
89
90 \Python27\Scripts\virtualenv.exe venv
91
92
93 Activate the environment
94 ~~~~~~~~~~~~~~~~~~~~~~~~
95
96 Before you work on your project, activate the corresponding environment:
97
98 .. code-block:: sh
99
100 . venv/bin/activate
101
102 On Windows:
103
104 .. code-block:: bat
105
106 venv\Scripts\activate
107
108 Your shell prompt will change to show the name of the activated
109 environment.
110
111
112 Install Werkzeug
113 ----------------
114
115 Within the activated environment, use the following command to install
116 Werkzeug:
117
118 .. code-block:: sh
22119
23120 pip install Werkzeug
24121
25 Either way we strongly recommend using these tools in combination with
26 :ref:`virtualenv`.
27122
28 This will install a Werkzeug egg in your Python installation's `site-packages`
29 directory.
123 Living on the edge
124 ~~~~~~~~~~~~~~~~~~
30125
31 From the tarball release
32 -------------------------
126 If you want to work with the latest Werkzeug code before it's released,
127 install or update the code from the master branch:
33128
34 1. Download the most recent tarball from the `download page`_.
35 2. Unpack the tarball.
36 3. ``python setup.py install``
129 .. code-block:: sh
37130
38 Note that the last command will automatically download and install
39 `setuptools`_ if you don't already have it installed. This requires a working
40 Internet connection.
41
42 This will install Werkzeug into your Python installation's `site-packages`
43 directory.
131 pip install -U https://github.com/pallets/werkzeug/archive/master.tar.gz
44132
45133
46 Installing the development version
47 ==================================
134 .. _install-install-virtualenv:
48135
49 1. Install `Git`_
50 2. ``git clone git://github.com/pallets/werkzeug.git``
51 3. ``cd werkzeug``
52 4. ``pip install --editable .``
136 Install virtualenv
137 ------------------
53138
54 .. _virtualenv:
139 If you are using Python 2, the venv module is not available. Instead,
140 install `virtualenv`_.
55141
56 virtualenv
57 ==========
142 On Linux, virtualenv is provided by your package manager:
58143
59 Virtualenv is probably what you want to use during development, and in
60 production too if you have shell access there.
144 .. code-block:: sh
61145
62 What problem does virtualenv solve? If you like Python as I do,
63 chances are you want to use it for other projects besides Werkzeug-based
64 web applications. But the more projects you have, the more likely it is
65 that you will be working with different versions of Python itself, or at
66 least different versions of Python libraries. Let's face it; quite often
67 libraries break backwards compatibility, and it's unlikely that any serious
68 application will have zero dependencies. So what do you do if two or more
69 of your projects have conflicting dependencies?
146 # Debian, Ubuntu
147 sudo apt-get install python-virtualenv
70148
71 Virtualenv to the rescue! It basically enables multiple side-by-side
72 installations of Python, one for each project. It doesn't actually
73 install separate copies of Python, but it does provide a clever way
74 to keep different project environments isolated.
149 # CentOS, Fedora
150 sudo yum install python-virtualenv
75151
76 So let's see how virtualenv works!
152 # Arch
153 sudo pacman -S python-virtualenv
77154
78 If you are on Mac OS X or Linux, chances are that one of the following two
79 commands will work for you::
155 If you are on Mac OS X or Windows, download `get-pip.py`_, then:
80156
81 $ sudo easy_install virtualenv
157 .. code-block:: sh
82158
83 or even better::
159 sudo python2 Downloads/get-pip.py
160 sudo python2 -m pip install virtualenv
84161
85 $ sudo pip install virtualenv
162 On Windows, as an administrator:
86163
87 One of these will probably install virtualenv on your system. Maybe it's
88 even in your package manager. If you use Ubuntu, try::
164 .. code-block:: bat
89165
90 $ sudo apt-get install python-virtualenv
166 \Python27\python.exe Downloads\get-pip.py
167 \Python27\python.exe -m pip install virtualenv
91168
92 If you are on Windows and don't have the `easy_install` command, you must
93 install it first. Once you have it installed, run the same commands as
94 above, but without the `sudo` prefix.
169 Now you can continue to :ref:`install-create-env`.
95170
96 Once you have virtualenv installed, just fire up a shell and create
97 your own environment. I usually create a project folder and an `env`
98 folder within::
99
100 $ mkdir myproject
101 $ cd myproject
102 $ virtualenv env
103 New python executable in env/bin/python
104 Installing setuptools............done.
105
106 Now, whenever you want to work on a project, you only have to activate
107 the corresponding environment. On OS X and Linux, do the following::
108
109 $ . env/bin/activate
110
111 (Note the space between the dot and the script name. The dot means that
112 this script should run in the context of the current shell. If this command
113 does not work in your shell, try replacing the dot with ``source``)
114
115 If you are a Windows user, the following command is for you::
116
117 $ env\scripts\activate
118
119 Either way, you should now be using your virtualenv (see how the prompt of
120 your shell has changed to show the virtualenv).
121
122 Now you can just enter the following command to get Werkzeug activated in
123 your virtualenv::
124
125 $ pip install Werkzeug
126
127 A few seconds later you are good to go.
128
129 .. _download page: https://pypi.python.org/pypi/Werkzeug
130 .. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools
131 .. _easy_install: http://peak.telecommunity.com/DevCenter/EasyInstall
132 .. _Git: http://git-scm.org/
171 .. _virtualenv: https://virtualenv.pypa.io/en/latest/
172 .. _get-pip.py: https://bootstrap.pypa.io/get-pip.py
+0
-6
docs/latexindex.rst less more
0 :orphan:
1
2 Werkzeug Documentation
3 ======================
4
5 .. include:: contents.rst.inc
5050 </form>
5151 ''')
5252 start_response('200 OK', [('Content-Type', 'text/html; charset=utf-8')])
53 return [''.join(result)]
53 return [''.join(result).encode('utf-8')]
5454
5555 High or Low?
5656 ============
docs/logo.pdf less more
Binary diff not shown
00 @ECHO OFF
1
2 pushd %~dp0
13
24 REM Command file for Sphinx documentation
35
4 set SPHINXBUILD=sphinx-build
5 set ALLSPHINXOPTS=-d _build/doctrees %SPHINXOPTS% .
6 if NOT "%PAPER%" == "" (
7 set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
6 if "%SPHINXBUILD%" == "" (
7 set SPHINXBUILD=sphinx-build
88 )
9 set SOURCEDIR=.
10 set BUILDDIR=_build
911
1012 if "%1" == "" goto help
1113
12 if "%1" == "help" (
13 :help
14 echo.Please use `make ^<target^>` where ^<target^> is one of
15 echo. html to make standalone HTML files
16 echo. pickle to make pickle files
17 echo. json to make JSON files
18 echo. htmlhelp to make HTML files and a HTML help project
19 echo. qthelp to make HTML files and a qthelp project
20 echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter
21 echo. changes to make an overview over all changed/added/deprecated items
22 echo. linkcheck to check all external links for integrity
23 goto end
14 %SPHINXBUILD% >NUL 2>NUL
15 if errorlevel 9009 (
16 echo.
17 echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
18 echo.installed, then set the SPHINXBUILD environment variable to point
19 echo.to the full path of the 'sphinx-build' executable. Alternatively you
20 echo.may add the Sphinx directory to PATH.
21 echo.
22 echo.If you don't have Sphinx installed, grab it from
23 echo.http://sphinx-doc.org/
24 exit /b 1
2425 )
2526
26 if "%1" == "clean" (
27 for /d %%i in (_build\*) do rmdir /q /s %%i
28 del /q /s _build\*
29 goto end
30 )
27 %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
28 goto end
3129
32 if "%1" == "html" (
33 %SPHINXBUILD% -b html %ALLSPHINXOPTS% _build/html
34 echo.
35 echo.Build finished. The HTML pages are in _build/html.
36 goto end
37 )
38
39 if "%1" == "pickle" (
40 %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% _build/pickle
41 echo.
42 echo.Build finished; now you can process the pickle files.
43 goto end
44 )
45
46 if "%1" == "json" (
47 %SPHINXBUILD% -b json %ALLSPHINXOPTS% _build/json
48 echo.
49 echo.Build finished; now you can process the JSON files.
50 goto end
51 )
52
53 if "%1" == "htmlhelp" (
54 %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% _build/htmlhelp
55 echo.
56 echo.Build finished; now you can run HTML Help Workshop with the ^
57 .hhp project file in _build/htmlhelp.
58 goto end
59 )
60
61 if "%1" == "qthelp" (
62 %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% _build/qthelp
63 echo.
64 echo.Build finished; now you can run "qcollectiongenerator" with the ^
65 .qhcp project file in _build/qthelp, like this:
66 echo.^> qcollectiongenerator _build\qthelp\Werkzeug.qhcp
67 echo.To view the help file:
68 echo.^> assistant -collectionFile _build\qthelp\Werkzeug.ghc
69 goto end
70 )
71
72 if "%1" == "latex" (
73 %SPHINXBUILD% -b latex %ALLSPHINXOPTS% _build/latex
74 echo.
75 echo.Build finished; the LaTeX files are in _build/latex.
76 goto end
77 )
78
79 if "%1" == "changes" (
80 %SPHINXBUILD% -b changes %ALLSPHINXOPTS% _build/changes
81 echo.
82 echo.The overview file is in _build/changes.
83 goto end
84 )
85
86 if "%1" == "linkcheck" (
87 %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% _build/linkcheck
88 echo.
89 echo.Link check complete; look for any errors in the above output ^
90 or in _build/linkcheck/output.txt.
91 goto end
92 )
30 :help
31 %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
9332
9433 :end
34 popd
+0
-7
docs/makearchive.py less more
0 import os
1 import conf
2 name = "werkzeug-docs-" + conf.version
3 os.chdir("_build")
4 os.rename("html", name)
5 os.system("tar czf %s.tar.gz %s" % (name, name))
6 os.rename(name, "html")
0 .. automodule:: werkzeug.middleware.dispatcher
0 .. automodule:: werkzeug.middleware.http_proxy
0 .. automodule:: werkzeug.middleware
0 .. automodule:: werkzeug.middleware.lint
0 .. automodule:: werkzeug.middleware.profiler
0 .. automodule:: werkzeug.middleware.proxy_fix
0 .. automodule:: werkzeug.middleware.shared_data
0 ===========
1 Middlewares
2 ===========
0 :orphan:
31
4 .. module:: werkzeug.wsgi
2 Middleware
3 ==========
54
6 Middlewares wrap applications to dispatch between them or provide
7 additional request handling. Additionally to the middlewares documented
8 here, there is also the :class:`DebuggedApplication` class that is
9 implemented as a WSGI middleware.
10
11 .. autoclass:: SharedDataMiddleware
12 :members: is_allowed
13
14 .. autoclass:: ProxyMiddleware
15
16 .. autoclass:: DispatcherMiddleware
17
18 Also there's the …
19
20 .. autofunction:: werkzeug._internal._easteregg
5 Moved to :doc:`/middleware/index`.
+0
-73
docs/python3.rst less more
0 .. _python3:
1
2 ==============
3 Python 3 Notes
4 ==============
5
6 Since version 0.9, Werkzeug supports Python 3.3+ in addition to versions 2.6
7 and 2.7. Older Python 3 versions such as 3.2 or 3.1 are not supported.
8
9 This part of the documentation outlines special information required to
10 use Werkzeug and WSGI on Python 3.
11
12 .. warning::
13
14 Python 3 support in Werkzeug is currently highly experimental. Please
15 give feedback on it and help us improve it.
16
17
18 WSGI Environment
19 ================
20
21 The WSGI environment on Python 3 works slightly different than it does on
22 Python 2. For the most part Werkzeug hides the differences from you if
23 you work on the higher level APIs. The main difference between Python 2
24 and Python 3 is that on Python 2 the WSGI environment contains bytes
25 whereas the environment on Python 3 contains a range of differently
26 encoded strings.
27
28 There are two different kinds of strings in the WSGI environ on Python 3:
29
30 - unicode strings restricted to latin1 values. These are used for
31 HTTP headers and a few other things.
32 - unicode strings carrying binary payload, roundtripped through latin1
33 values. This is usually referred as “WSGI encoding dance” throughout
34 Werkzeug.
35
36 Werkzeug provides you with functionality to deal with these automatically
37 so that you don't need to be aware of the inner workings. The following
38 functions and classes should be used to read information out of the
39 WSGI environment:
40
41 - :func:`~werkzeug.wsgi.get_current_url`
42 - :func:`~werkzeug.wsgi.get_host`
43 - :func:`~werkzeug.wsgi.get_script_name`
44 - :func:`~werkzeug.wsgi.get_path_info`
45 - :func:`~werkzeug.wsgi.get_query_string`
46 - :func:`~werkzeug.datastructures.EnvironHeaders`
47
48 Applications are strongly discouraged to create and modify a WSGI
49 environment themselves on Python 3 unless they take care of the proper
50 decoding step. All high level interfaces in Werkzeug will apply the
51 correct encoding and decoding steps as necessary.
52
53 URLs
54 ====
55
56 URLs in Werkzeug attempt to represent themselves as unicode strings on
57 Python 3. All the parsing functions generally also provide functionality
58 that allow operations on bytes. In some cases functions that deal with
59 URLs allow passing in `None` as charset to change the return value to byte
60 objects. Internally Werkzeug will now unify URIs and IRIs as much as
61 possible.
62
63 Request Cleanup
64 ===============
65
66 Request objects on Python 3 and PyPy require explicit closing when file
67 uploads are involved. This is required to properly close temporary file
68 objects created by the multipart parser. For that purpose the ``close()``
69 method was introduced.
70
71 In addition to that request objects now also act as context managers that
72 automatically close.
9797 ----------------------
9898
9999 Modern web applications transmit a lot more than multipart form data or
100 url encoded data. Extending the parsing capabilities by subclassing
101 the :class:`BaseRequest` is simple. The following example implements
102 parsing for incoming JSON data::
100 url encoded data. To extend the capabilities, subclass :class:`BaseRequest`
101 or :class:`Request` and add or extend methods.
102
103 There is already a mixin that provides JSON parsing::
104
105 from werkzeug.wrappers import Request
106 from werkzeug.wrappers.json import JSONMixin
107
108 class JSONRequest(JSONMixin, Request):
109 pass
110
111 The basic implementation of that looks like::
103112
104113 from werkzeug.utils import cached_property
105114 from werkzeug.wrappers import Request
106 from simplejson import loads
115 import simplejson as json
107116
108117 class JSONRequest(Request):
109 # accept up to 4MB of transmitted data.
110 max_content_length = 1024 * 1024 * 4
111
112118 @cached_property
113119 def json(self):
114 if self.headers.get('content-type') == 'application/json':
115 return loads(self.data)
120 if self.mimetype == "application/json":
121 return json.loads(self.data)
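
A usage sketch of that subclass, substituting the standard library ``json``
module for simplejson and building a request with Werkzeug's ``from_values``
test helper (assumes Werkzeug is installed; the payload is made up):

```python
import json

from werkzeug.utils import cached_property
from werkzeug.wrappers import Request


class JSONRequest(Request):
    @cached_property
    def json(self):
        # Only parse the body when the client declared a JSON mimetype.
        if self.mimetype == "application/json":
            return json.loads(self.data)


# Build a request carrying a JSON body, then read it back as a dict.
request = JSONRequest.from_values(
    data=json.dumps({"name": "example"}),
    content_type="application/json",
)
# request.json == {"name": "example"}
```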
0 Sphinx~=1.8.3
1 Pallets-Sphinx-Themes~=1.1.2
2 sphinx-issues~=1.2.0
44 ===========
55
66 .. module:: werkzeug.routing
7
8 .. testsetup::
9
10 from werkzeug.routing import *
117
128 When it comes to combining multiple controller or view functions (however
139 you want to call them), you need a dispatcher. A simple way would be
1814 objects mentioned on this page must be imported from :mod:`werkzeug.routing`, not
1915 from :mod:`werkzeug`!
2016
21 .. _Routes: http://routes.groovie.org/
17 .. _Routes: https://routes.readthedocs.io/en/latest/
2218
2319
2420 Quickstart
148144 Custom Converters
149145 =================
150146
151 You can easily add custom converters. The only thing you have to do is to
152 subclass :class:`BaseConverter` and pass that new converter to the url_map.
153 A converter has to provide two public methods: `to_python` and `to_url`,
154 as well as a member that represents a regular expression. Here is a small
155 example::
147 You can add custom converters that add behaviors not provided by the
148 built-in converters. To make a custom converter, subclass
149 :class:`BaseConverter` then pass the new class to the :class:`Map`
150 ``converters`` parameter, or add it to
151 :attr:`url_map.converters <Map.converters>`.
152
153 The converter should have a ``regex`` attribute with a regular
154 expression to match with. If the converter can take arguments in a URL
155 rule, it should accept them in its ``__init__`` method.
156
157 It can implement a ``to_python`` method to convert the matched string to
158 some other object. This can also do extra validation that wasn't
159 possible with the ``regex`` attribute, and should raise a
160 :exc:`werkzeug.routing.ValidationError` in that case. Raising any other
161 error will cause a 500 error.
162
163 It can implement a ``to_url`` method to convert a Python object to a
164 string when building a URL. Any error raised here will be converted to a
165 :exc:`werkzeug.routing.BuildError` and eventually cause a 500 error.
166
167 This example implements a ``BooleanConverter`` that will match the
168 strings ``"yes"``, ``"no"``, and ``"maybe"``, returning a random value
169 for ``"maybe"``. ::
156170
157171 from random import randrange
158 from werkzeug.routing import Rule, Map, BaseConverter, ValidationError
172 from werkzeug.routing import BaseConverter, ValidationError
159173
160174 class BooleanConverter(BaseConverter):
161
162 def __init__(self, url_map, randomify=False):
175 regex = r"(?:yes|no|maybe)"
176
177 def __init__(self, url_map, maybe=False):
163178 super(BooleanConverter, self).__init__(url_map)
164 self.randomify = randomify
165 self.regex = '(?:yes|no|maybe)'
179 self.maybe = maybe
166180
167181 def to_python(self, value):
168 if value == 'maybe':
169 if self.randomify:
182 if value == "maybe":
183 if self.maybe:
170184 return not randrange(2)
171 raise ValidationError()
185 raise ValidationError
172186 return value == 'yes'
173187
174188 def to_url(self, value):
175 return value and 'yes' or 'no'
176
177 url_map = Map([
178 Rule('/vote/<bool:werkzeug_rocks>', endpoint='vote'),
179 Rule('/vote/<bool(randomify=True):foo>', endpoint='foo')
189 return "yes" if value else "no"
190
191 from werkzeug.routing import Map, Rule
192
193 url_map = Map([
194 Rule("/vote/<bool:werkzeug_rocks>", endpoint="vote"),
195 Rule("/guess/<bool(maybe=True):foo>", endpoint="guess")
180196 ], converters={'bool': BooleanConverter})
181197
182 If you want that converter to be the default converter, name it ``'default'``.
198 If you want to change the default converter, assign a different
199 converter to the ``"default"`` key.
200
183201
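
The matching and building behavior can be sketched with a small illustrative
converter (assumes Werkzeug is installed; the converter, rule, and endpoint
names are made up):

```python
from werkzeug.routing import BaseConverter, Map, Rule


class UpperConverter(BaseConverter):
    """Illustrative converter: matches lowercase words, upper-cases them."""

    regex = r"[a-z]+"

    def to_python(self, value):
        # Called when matching an incoming path.
        return value.upper()

    def to_url(self, value):
        # Called when building a URL from an endpoint and arguments.
        return value.lower()


url_map = Map(
    [Rule("/greet/<upper:name>", endpoint="greet")],
    converters={"upper": UpperConverter},
)
adapter = url_map.bind("example.com")

endpoint, arguments = adapter.match("/greet/alice")
# endpoint == "greet", arguments == {"name": "ALICE"}
url = adapter.build("greet", {"name": "bob"})
# url == "/greet/bob"
```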
184202 Host Matching
185203 =============
5454 drain a laptop's battery.
5555
5656 - The ``watchdog`` backend uses filesystem events, and is much faster than
57 ``stat``. It requires the `watchdog <https://pypi.python.org/pypi/watchdog>`_
57 ``stat``. It requires the `watchdog <https://pypi.org/project/watchdog/>`_
5858 module to be installed. The recommended way to achieve this is to add
5959 ``Werkzeug[watchdog]`` to your requirements file.
6060
7474 Colored Logging
7575 ---------------
7676 Werkzeug is able to color the output of request logs when run from a terminal; just install the `termcolor
77 <https://pypi.python.org/pypi/termcolor>`_ package. Windows users need to install `colorama
78 <https://pypi.python.org/pypi/colorama>`_ in addition to termcolor for this to work.
77 <https://pypi.org/project/termcolor/>`_ package. Windows users need to install `colorama
78 <https://pypi.org/project/colorama/>`_ in addition to termcolor for this to work.
7979
8080 Virtual Hosts
8181 -------------
150150 situation, you can either remove the localhost entry for ``::1`` or
151151 explicitly bind the hostname to an IPv4 address (`127.0.0.1`).
152152
153 .. _hosts file: http://en.wikipedia.org/wiki/Hosts_file
153 .. _hosts file: https://en.wikipedia.org/wiki/Hosts_file
154154
155155 SSL
156156 ---
224224 security reasons.
225225
226226 This feature requires the pyOpenSSL library to be installed.
227
228
229 Unix Sockets
230 ------------
231
232 The dev server can bind to a Unix socket instead of a TCP socket.
233 :func:`run_simple` will bind to a Unix socket if the ``hostname``
234 parameter starts with ``'unix://'``. ::
235
236 from werkzeug.serving import run_simple
237 run_simple('unix://example.sock', 0, app)
138138 A dict with values that are used to override the generated environ.
139139
140140 .. attribute:: input_stream
141
141
142142 The optional input stream. This and :attr:`form` / :attr:`files`
143143 is mutually exclusive. Also do not provide this stream if the
144144 request method is not `POST` / `PUT` or something comparable.
2525 With Werkzeug 0.7, the recommended way to import this function is
2626 directly from the utils module (and with 1.0 this will become mandatory).
2727 To automatically rewrite all imports one can use the
28 `werkzeug-import-rewrite <http://bit.ly/import-rewrite>`_ script.
28 `werkzeug-import-rewrite <https://bit.ly/import-rewrite>`_ script.
2929
3030 You can use it by executing it with Python and with a list of folders with
3131 Werkzeug based code. It will then spit out a hg/git compatible patch
5353
5454 patch -p1 < new-imports.udiff
5555
56 Stop Using Deprecated Things
57 ----------------------------
5856
59 A few things in Werkzeug will stop being supported and for others, we're
60 suggesting alternatives even if they will stick around for a longer time.
57 Deprecated and Removed Code
58 ---------------------------
6159
62 Do not use:
60 Some things that were relevant to Werkzeug's core (working with WSGI and
61 HTTP) have been removed. These were not commonly used, or are better
62 served by dedicated libraries.
6363
64 - `werkzeug.script`, replace it with custom scripts written with
65 `argparse`, `click` or something similar.
66 - `werkzeug.template`, replace with a proper template engine.
67 - `werkzeug.contrib.jsrouting`, stop using URL generation for
68 JavaScript, it does not scale well with many public routing.
69 - `werkzeug.contrib.kickstart`, replace with hand written code, the
70 Werkzeug API became better in general that this is no longer
71 necessary.
72 - `werkzeug.contrib.testtools`, not useful really.
64 - ``werkzeug.script``, replace with `Click`_ or another dedicated
65 command line library.
66 - ``werkzeug.template``, replace with `Jinja`_ or another dedicated
67 template library.
68 - ``werkzeug.contrib.jsrouting``, this type of URL generation in
69 JavaScript did not scale well. Instead, generate URLs when
70 rendering templates, or add a URL field to a JSON response.
71 - ``werkzeug.contrib.kickstart``, replace with custom code if needed,
72 the Werkzeug API has improved in general. `Flask`_ is a higher-level
73 version of this.
74 - ``werkzeug.contrib.testtools``, was not significantly useful over
75 the default behavior.
76 - ``werkzeug.contrib.cache``, has been extracted to `cachelib`_.
77 - ``werkzeug.contrib.atom``, was outside the core focus of Werkzeug,
78 replace with a dedicated feed generation library.
79 - ``werkzeug.contrib.limiter``, stream limiting is better handled by
80 the WSGI server library instead of middleware.
81
82 .. _Click: https://click.palletsprojects.com/
83 .. _Jinja: http://jinja.pocoo.org/docs/
84 .. _Flask: http://flask.pocoo.org/docs/
85 .. _cachelib: https://github.com/pallets/cachelib
4444 .. image:: _static/shortly.png
4545 :alt: a screenshot of shortly
4646
47 .. _TinyURL: http://tinyurl.com/
47 .. _TinyURL: https://tinyurl.com/
4848 .. _Jinja: http://jinja.pocoo.org/
49 .. _redis: http://redis.io/
49 .. _redis: https://redis.io/
5050
5151 Step 0: A Basic WSGI Introduction
5252 ---------------------------------
473473 repository to see a version of this tutorial with some small refinements
474474 such as a custom 404 page.
475475
476 - `shortly in the example folder <https://github.com/pallets/werkzeug/blob/master/examples/shortly>`_
476 - `shortly in the example folder <https://github.com/pallets/werkzeug/tree/master/examples/shortly>`_
9494 Unlike regular Python decoding, Werkzeug does not raise a
9595 :exc:`UnicodeDecodeError` if decoding fails, but an
9696 :exc:`~exceptions.HTTPUnicodeError` which
97 is a direct subclass of `UnicodeError` and the `BadRequest` HTTP exception.
97 is a direct subclass of `UnicodeError` and the `BadRequest` HTTP exception.
9898 The reason is that if this exception is not caught by the application but
9999 a catch-all for HTTP exceptions exists, a default `400 BAD REQUEST` error
100100 page is displayed.
2525 .. autoclass:: environ_property
2626
2727 .. autoclass:: header_property
28
29 .. autofunction:: parse_cookie
30
31 .. autofunction:: dump_cookie
3228
3329 .. autofunction:: redirect
3430
+0
-15
docs/werkzeugext.py less more
0 # -*- coding: utf-8 -*-
1 """
2 Werkzeug Sphinx Extensions
3 ~~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 Provides some more helpers for the werkzeug docs.
6
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 from sphinx.ext.autodoc import cut_lines
11
12
13 def setup(app):
14 app.connect('autodoc-process-docstring', cut_lines(3, 3, what=['module']))
+0
-119
docs/werkzeugstyle.sty less more
0 \definecolor{TitleColor}{rgb}{0,0,0}
1 \definecolor{InnerLinkColor}{rgb}{0,0,0}
2 \definecolor{OuterLinkColor}{rgb}{1.0,0.5,0.0}
3
4 \renewcommand{\maketitle}{%
5 \begin{titlepage}%
6 \let\footnotesize\small
7 \let\footnoterule\relax
8 \ifsphinxpdfoutput
9 \begingroup
10 % This \def is required to deal with multi-line authors; it
11 % changes \\ to ', ' (comma-space), making it pass muster for
12 % generating document info in the PDF file.
13 \def\\{, }
14 \pdfinfo{
15 /Author (\@author)
16 /Title (\@title)
17 }
18 \endgroup
19 \fi
20 \begin{flushright}%
21 %\sphinxlogo%
22 {\center
23 \vspace*{3cm}
24 \includegraphics{logo.pdf}
25 \vspace{3cm}
26 \par
27 {\rm\Huge \@title \par}%
28 {\em\LARGE \py@release\releaseinfo \par}
29 {\large
30 \@date \par
31 \py@authoraddress \par
32 }}%
33 \end{flushright}%\par
34 \@thanks
35 \end{titlepage}%
36 \cleardoublepage%
37 \setcounter{footnote}{0}%
38 \let\thanks\relax\let\maketitle\relax
39 %\gdef\@thanks{}\gdef\@author{}\gdef\@title{}
40 }
41
42 \fancypagestyle{normal}{
43 \fancyhf{}
44 \fancyfoot[LE,RO]{{\thepage}}
45 \fancyfoot[LO]{{\nouppercase{\rightmark}}}
46 \fancyfoot[RE]{{\nouppercase{\leftmark}}}
47 \fancyhead[LE,RO]{{ \@title, \py@release}}
48 \renewcommand{\headrulewidth}{0.4pt}
49 \renewcommand{\footrulewidth}{0.4pt}
50 }
51
52 \fancypagestyle{plain}{
53 \fancyhf{}
54 \fancyfoot[LE,RO]{{\thepage}}
55 \renewcommand{\headrulewidth}{0pt}
56 \renewcommand{\footrulewidth}{0.4pt}
57 }
58
59 \titleformat{\section}{\Large}%
60 {\py@TitleColor\thesection}{0.5em}{\py@TitleColor}{\py@NormalColor}
61 \titleformat{\subsection}{\large}%
62 {\py@TitleColor\thesubsection}{0.5em}{\py@TitleColor}{\py@NormalColor}
63 \titleformat{\subsubsection}{}%
64 {\py@TitleColor\thesubsubsection}{0.5em}{\py@TitleColor}{\py@NormalColor}
65 \titleformat{\paragraph}{\large}%
66 {\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor}
67
68 \ChNameVar{\raggedleft\normalsize}
69 \ChNumVar{\raggedleft \bfseries\Large}
70 \ChTitleVar{\raggedleft \rm\Huge}
71
72 \renewcommand\thepart{\@Roman\c@part}
73 \renewcommand\part{%
74 \pagestyle{plain}
75 \if@noskipsec \leavevmode \fi
76 \cleardoublepage
77 \vspace*{6cm}%
78 \@afterindentfalse
79 \secdef\@part\@spart}
80
81 \def\@part[#1]#2{%
82 \ifnum \c@secnumdepth >\m@ne
83 \refstepcounter{part}%
84 \addcontentsline{toc}{part}{\thepart\hspace{1em}#1}%
85 \else
86 \addcontentsline{toc}{part}{#1}%
87 \fi
88 {\parindent \z@ %\center
89 \interlinepenalty \@M
90 \normalfont
91 \ifnum \c@secnumdepth >\m@ne
92 \rm\Large \partname~\thepart
93 \par\nobreak
94 \fi
95 \MakeUppercase{\rm\Huge #2}%
96 \markboth{}{}\par}%
97 \nobreak
98 \vskip 8ex
99 \@afterheading}
100 \def\@spart#1{%
101 {\parindent \z@ %\center
102 \interlinepenalty \@M
103 \normalfont
104 \huge \bfseries #1\par}%
105 \nobreak
106 \vskip 3ex
107 \@afterheading}
108
109 % use inconsolata font
110 \usepackage{inconsolata}
111
112 % fix single quotes, for inconsolata. (does not work)
113 %%\usepackage{textcomp}
114 %%\begingroup
115 %% \catcode`'=\active
116 %% \g@addto@macro\@noligs{\let'\textsinglequote}
117 %% \endgroup
118 %%\endinput
175175
176176 .. autoclass:: UserAgentMixin
177177 :members:
178
179
180 Extra Mixin Classes
181 ===================
182
183 These mixins are not included in the default :class:`Request` and
184 :class:`Response` classes. They provide extra behavior that needs to be
185 opted into by creating your own subclasses::
186
187 class Response(JSONMixin, BaseResponse):
188 pass
189
190
191 .. module:: werkzeug.wrappers.json
192
193 JSON
194 ----
195
196 .. autoclass:: JSONMixin
197 :members:
0 ============
10 WSGI Helpers
21 ============
32
43 .. module:: werkzeug.wsgi
54
65 The following classes and functions are designed to make working with
7 the WSGI specification easier or operate on the WSGI layer. All the
6 the WSGI specification easier or operate on the WSGI layer. All the
87 functionality from this module is available on the high-level
9 :ref:`Request/Response classes <wrappers>`.
8 :ref:`Request / Response classes <wrappers>`.
109
1110
1211 Iterator / Stream Helpers
13 =========================
12 -------------------------
1413
1514 These classes and functions simplify working with the WSGI application
1615 iterator and the input stream.
2019 .. autoclass:: FileWrapper
2120
2221 .. autoclass:: LimitedStream
23 :members:
22 :members:
2423
2524 .. autofunction:: make_line_iter
2625
3029
3130
3231 Environ Helpers
33 ===============
32 ---------------
3433
3534 These functions operate on the WSGI environment. They extract useful
3635 information or perform common manipulations:
5756
5857 .. autofunction:: host_is_trusted
5958
59
6060 Convenience Helpers
61 ===================
61 -------------------
6262
6363 .. autofunction:: responder
6464
6565 .. autofunction:: werkzeug.testapp.test_app
66
67
68 Bytes, Strings, and Encodings
69 -----------------------------
70
71 The WSGI environment on Python 3 works slightly different than it does
72 on Python 2. Werkzeug hides the differences from you if you use the
73 higher level APIs.
74
75 The WSGI specification (`PEP 3333`_) decided to always use the native
76 ``str`` type. On Python 2 this means the raw bytes are passed through
77 and can be decoded directly. On Python 3, however, the raw bytes are
78 always decoded using the ISO-8859-1 charset to produce a Unicode string.
79
80 Python 3 Unicode strings in the WSGI environment are restricted to
81 ISO-8859-1 code points. If a string read from the environment might
82 contain characters outside that charset, it must first be decoded to
83 bytes as ISO-8859-1, then encoded to a Unicode string using the proper
84 charset (typically UTF-8). The reverse is done when writing to the
85 environ. This is known as the "WSGI encoding dance".
86
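The dance itself is just a pair of latin-1 round trips; a minimal sketch
with a made-up header value:

```python
# A value containing UTF-8 data arrives in the Python 3 WSGI environ as
# a str that the server decoded with ISO-8859-1 (latin-1).
raw_bytes = u"gr\u00fc\u00dfe".encode("utf-8")
wsgi_value = raw_bytes.decode("latin1")  # what ends up in the environ

# Recover the real text: encode back to bytes as latin-1, then decode
# with the actual charset (UTF-8 here).
text = wsgi_value.encode("latin1").decode("utf-8")
assert text == u"gr\u00fc\u00dfe"

# Writing to the environ reverses the steps.
environ_value = text.encode("utf-8").decode("latin1")
assert environ_value == wsgi_value
```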
87 Werkzeug provides functions to deal with this automatically so that you
88 don't need to be aware of the inner workings. Use the functions on this
89 page as well as :func:`~werkzeug.datastructures.EnvironHeaders` to read
90 data out of the WSGI environment.
91
92 Applications should avoid manually creating or modifying a WSGI
93 environment unless they take care of the proper encoding or decoding
94 step. All high level interfaces in Werkzeug will apply the encoding and
95 decoding as necessary.
96
97 .. _PEP 3333: https://www.python.org/dev/peps/pep-3333/#unicode-issues
98
99
100 Raw Request URI and Path Encoding
101 ---------------------------------
102
103 The ``PATH_INFO`` in the environ is the path value after
104 percent-decoding. For example, the raw path ``/hello%2fworld`` would
105 show up from the WSGI server to Werkzeug as ``/hello/world``. This loses
106 the information that the slash was a raw character as opposed to a path
107 separator.
108
109 The WSGI specification (`PEP 3333`_) does not provide a way to get the
110 original value, so it is impossible to route some types of data in the
111 path. The most compatible way to work around this is to send problematic
112 data in the query string instead of the path.
113
114 However, many WSGI servers add a non-standard environ key with the raw
115 path. To match this behavior, Werkzeug's test client and development
116 server will add the raw value to both the ``REQUEST_URI`` and
117 ``RAW_URI`` keys. If you want to route based on this value, you can use
118 middleware to replace ``PATH_INFO`` in the environ before it reaches the
119 application. However, keep in mind that these keys are non-standard and
120 not guaranteed to be present.
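
A minimal sketch of such a middleware (the class name is made up; it assumes
the server provides one of the non-standard keys, and the application must
then handle the still-percent-encoded path itself):

```python
class RawPathMiddleware(object):
    """Replace PATH_INFO with the raw, still percent-encoded path."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        # REQUEST_URI / RAW_URI are non-standard and may be absent.
        raw = environ.get("REQUEST_URI") or environ.get("RAW_URI")

        if raw:
            # Drop the query string; keep percent-encoding so an encoded
            # slash (%2F) stays distinct from a path separator.
            environ["PATH_INFO"] = raw.partition("?")[0]

        return self.app(environ, start_response)


def demo_app(environ, start_response):
    # Echo the path the application actually sees.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [environ["PATH_INFO"].encode("latin1")]


wrapped = RawPathMiddleware(demo_app)
environ = {"PATH_INFO": "/hello/world", "REQUEST_URI": "/hello%2Fworld?q=1"}
body = wrapped(environ, lambda status, headers: None)
# body == [b"/hello%2Fworld"]
```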
+0
-104
examples/README less more
0 =================
1 Werkzeug Examples
2 =================
3
4 This directory contains various example applications and example code of
5 Werkzeug powered applications.
6
7 Besides the proof of concept applications and code snippets in the partial
8 folder, they all have external dependencies for template engines or database
9 adapters (SQLAlchemy only so far). Also, every application has click as
10 external dependency, used to create the command line interface.
11
12
13 Full Example Applications
14 =========================
15
16 The following example applications are application types you would actually
17 find in real life :-)
18
19
20 `simplewiki`
21 A simple Wiki implementation.
22
23 Requirements:
24 - SQLAlchemy
25 - Creoleparser >= 0.7
26 - genshi
27
28 You can obtain all packages in the Cheeseshop via easy_install. You have
29 to have at least version 0.7 of Creoleparser.
30
31 Usage::
32
33 ./manage-simplewiki.py initdb
34 ./manage-simplewiki.py runserver
35
36 Or of course you can just use the application object
37 (`simplewiki.SimpleWiki`) and hook that into your favourite WSGI gateway.
38 The constructor of the application object takes a single argument which is
39 the SQLAlchemy URI for the database.
40
41 The management script for the devserver looks up an environment variable
42 called `SIMPLEWIKI_DATABASE_URI` and uses that for the database URI. If
43 no such variable is provided "sqlite:////tmp/simplewiki.db" is assumed.
44
45 `plnt`
46 A planet called plnt, pronounce plant.
47
48 Requirements:
49 - SQLAlchemy
50 - Jinja
51 - feedparser
52
53 You can obtain all packages in the Cheeseshop via easy_install.
54
55 Usage::
56
57 ./manage-plnt.py initdb
58 ./manage-plnt.py sync
59 ./manage-plnt.py runserver
60
61 The WSGI application is called `plnt.Plnt` which, like the simple wiki,
62 accepts a database URI as first argument. The environment variable for
63 the database key is called `PLNT_DATABASE_URI` and the default is
64 "sqlite:////tmp/plnt.db".
65
66 Per default a few python related blogs are added to the database, you
67 can add more in a python shell by playing with the `Blog` model.
68
69 `shorty`
70 A tinyurl clone for the Werkzeug tutorial.
71
72 Requirements:
73 - SQLAlchemy
74 - Jinja2
75
76 You can obtain all packages in the Cheeseshop via easy_install.
77
78 Usage::
79
80 ./manage-shorty.py initdb
81 ./manage-shorty.py runserver
82
83 The WSGI application is called `shorty.application.Shorty` which, like the
84 simple wiki, accepts a database URI as first argument.
85
86 The source code of the application is explained in detail in the Werkzeug
87 tutorial.
88
89 `couchy`
90 Like shorty, but implemented using CouchDB.
91
92 Requirements :
93 - werkzeug : http://werkzeug.pocoo.org
94 - jinja : http://jinja.pocoo.org
95 - couchdb 0.72 & above : http://www.couchdb.org
96
97 `cupoftee`
98 A `Teeworlds <http://www.teeworlds.com/>`_ server browser. This application
99 works best in a non forking environment and won't work for CGI.
100
101 Usage::
102
103 ./manage-cupoftee.py runserver
0 =================
1 Werkzeug Examples
2 =================
3
4 This directory contains various example applications and example code for
5 Werkzeug-powered applications.
6
7 Besides the proof of concept applications and code snippets in the partial
8 folder, they all have external dependencies for template engines or database
9 adapters (SQLAlchemy only so far). Also, every application has click as
10 external dependency, used to create the command line interface.
11
12
13 Full Example Applications
14 =========================
15
16 The following example applications are application types you would actually
17 find in real life :-)
18
19
20 `simplewiki`
21
22 A simple Wiki implementation.
23
24 Requirements:
25
26 - SQLAlchemy
27 - Creoleparser >= 0.7
28 - genshi
29
30 You can obtain all packages in the Cheeseshop via easy_install. You have
31 to have at least version 0.7 of Creoleparser.
32
33 Usage::
34
35 ./manage-simplewiki.py initdb
36 ./manage-simplewiki.py runserver
37
38 Or of course you can just use the application object
39 (`simplewiki.SimpleWiki`) and hook that into your favourite WSGI gateway.
40 The constructor of the application object takes a single argument which is
41 the SQLAlchemy URI for the database.
42
43 The management script for the devserver looks up an environment variable
44 called `SIMPLEWIKI_DATABASE_URI` and uses that for the database URI. If
45 no such variable is provided "sqlite:////tmp/simplewiki.db" is assumed.
46
47 `plnt`
48
49 A planet called plnt, pronounce plant.
50
51 Requirements:
52
53 - SQLAlchemy
54 - Jinja2
55 - feedparser
56
57 You can obtain all packages in the Cheeseshop via easy_install.
58
59 Usage::
60
61 ./manage-plnt.py initdb
62 ./manage-plnt.py sync
63 ./manage-plnt.py runserver
64
65 The WSGI application is called `plnt.Plnt` which, like the simple wiki,
66 accepts a database URI as first argument. The environment variable for
67 the database key is called `PLNT_DATABASE_URI` and the default is
68 "sqlite:////tmp/plnt.db".
69
70 By default a few Python-related blogs are added to the database; you
71 can add more in a Python shell by playing with the `Blog` model.
72
73 `shorty`
74
75 A tinyurl clone for the Werkzeug tutorial.
76
77 Requirements:
78
79 - SQLAlchemy
80 - Jinja2
81
82 You can obtain all packages in the Cheeseshop via easy_install.
83
84 Usage::
85
86 ./manage-shorty.py initdb
87 ./manage-shorty.py runserver
88
89 The WSGI application is called `shorty.application.Shorty` which, like the
90 simple wiki, accepts a database URI as first argument.
91
92 The source code of the application is explained in detail in the Werkzeug
93 tutorial.
94
95 `couchy`
96
97 Like shorty, but implemented using CouchDB.
98
99 Requirements :
100
101 - werkzeug : http://werkzeug.pocoo.org
102 - jinja : http://jinja.pocoo.org
103 - couchdb 0.72 & above : https://couchdb.apache.org/
104
105 `cupoftee`
106
107 A `Teeworlds <https://www.teeworlds.com/>`_ server browser. This application
108 works best in a non-forking environment and won't work for CGI.
109
110 Usage::
111
112 ./manage-cupoftee.py runserver
44
55 Stores session on the client.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 from time import asctime
11
12 from werkzeug.contrib.securecookie import SecureCookie
1113 from werkzeug.serving import run_simple
12 from werkzeug.wrappers import BaseRequest, BaseResponse
13 from werkzeug.contrib.securecookie import SecureCookie
14 from werkzeug.wrappers import BaseRequest
15 from werkzeug.wrappers import BaseResponse
1416
15 SECRET_KEY = 'V\x8a$m\xda\xe9\xc3\x0f|f\x88\xbccj>\x8bI^3+'
17 SECRET_KEY = "V\x8a$m\xda\xe9\xc3\x0f|f\x88\xbccj>\x8bI^3+"
1618
1719
1820 class Request(BaseRequest):
19
2021 def __init__(self, environ):
2122 BaseRequest.__init__(self, environ)
2223 self.session = SecureCookie.load_cookie(self, secret_key=SECRET_KEY)
2728
2829
2930 def get_time(request):
30 return 'Time: %s' % request.session.get('time', 'not set')
31 return "Time: %s" % request.session.get("time", "not set")
3132
3233
3334 def set_time(request):
34 request.session['time'] = time = asctime()
35 return 'Time set to %s' % time
35 request.session["time"] = time = asctime()
36 return "Time set to %s" % time
3637
3738
3839 def application(environ, start_response):
3940 request = Request(environ)
40 response = BaseResponse({
41 'get': get_time,
42 'set': set_time
43 }.get(request.path.strip('/'), index)(request), mimetype='text/html')
41 response = BaseResponse(
42 {"get": get_time, "set": set_time}.get(request.path.strip("/"), index)(request),
43 mimetype="text/html",
44 )
4445 request.session.save_cookie(response)
4546 return response(environ, start_response)
4647
4748
48 if __name__ == '__main__':
49 run_simple('localhost', 5000, application)
49 if __name__ == "__main__":
50 run_simple("localhost", 5000, application)
00 #!/usr/bin/env python
11 # -*- coding: utf-8 -*-
2 from werkzeug.contrib.sessions import SessionMiddleware
3 from werkzeug.contrib.sessions import SessionStore
24 from werkzeug.serving import run_simple
3 from werkzeug.contrib.sessions import SessionStore, SessionMiddleware
45
56
67 class MemorySessionStore(SessionStore):
7
88 def __init__(self, session_class=None):
99 SessionStore.__init__(self, session_class=None)
1010 self.sessions = {}
2222
2323
2424 def application(environ, start_response):
25 session = environ['werkzeug.session']
26 session['visit_count'] = session.get('visit_count', 0) + 1
25 session = environ["werkzeug.session"]
26 session["visit_count"] = session.get("visit_count", 0) + 1
2727
28 start_response('200 OK', [('Content-Type', 'text/html')])
29 return ['''
30 <!doctype html>
28 start_response("200 OK", [("Content-Type", "text/html")])
29 return [
30 """<!doctype html>
3131 <title>Session Example</title>
3232 <h1>Session Example</h1>
33 <p>You visited this page %d times.</p>
34 ''' % session['visit_count']]
33 <p>You visited this page %d times.</p>"""
34 % session["visit_count"]
35 ]
3536
3637
3738 def make_app():
3839 return SessionMiddleware(application, MemorySessionStore())
3940
4041
41 if __name__ == '__main__':
42 run_simple('localhost', 5000, make_app())
42 if __name__ == "__main__":
43 run_simple("localhost", 5000, make_app())
66 This is a very simple application that uses a secure cookie to do the
77 user authentication.
88
9 :copyright: Copyright 2009 by the Werkzeug Team, see AUTHORS for more details.
10 :license: BSD, see LICENSE for more details.
9 :copyright: 2007 Pallets
10 :license: BSD-3-Clause
1111 """
12 from werkzeug.contrib.securecookie import SecureCookie
1213 from werkzeug.serving import run_simple
13 from werkzeug.utils import cached_property, escape, redirect
14 from werkzeug.wrappers import Request, Response
15 from werkzeug.contrib.securecookie import SecureCookie
14 from werkzeug.utils import cached_property
15 from werkzeug.utils import escape
16 from werkzeug.utils import redirect
17 from werkzeug.wrappers import Request
18 from werkzeug.wrappers import Response
1619
1720
1821 # don't use this key but a different one; you could just use
1922 # os.urandom(20) to get something random. Changing this key
2023 # invalidates all sessions at once.
21 SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea'
24 SECRET_KEY = "\xfa\xdd\xb8z\xae\xe0}4\x8b\xea"
2225
2326 # the cookie name for the session
24 COOKIE_NAME = 'session'
27 COOKIE_NAME = "session"
2528
2629 # the users that may access
27 USERS = {
28 'admin': 'default',
29 'user1': 'default'
30 }
30 USERS = {"admin": "default", "user1": "default"}
3131
3232
3333 class AppRequest(Request):
3535
3636 def logout(self):
3737 """Log the user out."""
38 self.session.pop('username', None)
38 self.session.pop("username", None)
3939
4040 def login(self, username):
4141 """Log the user in."""
42 self.session['username'] = username
42 self.session["username"] = username
4343
4444 @property
4545 def logged_in(self):
4949 @property
5050 def user(self):
5151 """The user that is logged in."""
52 return self.session.get('username')
52 return self.session.get("username")
5353
5454 @cached_property
5555 def session(self):
6060
6161
6262 def login_form(request):
63 error = ''
64 if request.method == 'POST':
65 username = request.form.get('username')
66 password = request.form.get('password')
63 error = ""
64 if request.method == "POST":
65 username = request.form.get("username")
66 password = request.form.get("password")
6767 if password and USERS.get(username) == password:
6868 request.login(username)
69 return redirect('')
70 error = '<p>Invalid credentials'
71 return Response('''
72 <title>Login</title><h1>Login</h1>
69 return redirect("")
70 error = "<p>Invalid credentials"
71 return Response(
72 """<title>Login</title><h1>Login</h1>
7373 <p>Not logged in.
7474 %s
7575 <form action="" method="post">
7878 <input type="text" name="username" size=20>
7979 <input type="password" name="password", size=20>
8080 <input type="submit" value="Login">
81 </form>''' % error, mimetype='text/html')
81 </form>"""
82 % error,
83 mimetype="text/html",
84 )
8285
8386
8487 def index(request):
85 return Response('''
86 <title>Logged in</title>
88 return Response(
89 """<title>Logged in</title>
8790 <h1>Logged in</h1>
8891 <p>Logged in as %s
89 <p><a href="/?do=logout">Logout</a>
90 ''' % escape(request.user), mimetype='text/html')
92 <p><a href="/?do=logout">Logout</a>"""
93 % escape(request.user),
94 mimetype="text/html",
95 )
9196
9297
9398 @AppRequest.application
9499 def application(request):
95 if request.args.get('do') == 'logout':
100 if request.args.get("do") == "logout":
96101 request.logout()
97 response = redirect('.')
102 response = redirect(".")
98103 elif request.logged_in:
99104 response = index(request)
100105 else:
103108 return response
104109
105110
106 if __name__ == '__main__':
107 run_simple('localhost', 4000, application)
111 if __name__ == "__main__":
112 run_simple("localhost", 4000, application)
44
55 Package description goes here.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from coolmagic.application import make_app
10 from .application import make_app
88 that automatically wraps the application within the required
99 middlewares. By default only the `SharedDataMiddleware` is applied.
1010
11 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
12 :license: BSD, see LICENSE for more details.
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
1313 """
14 from os import path, listdir
15 from coolmagic.utils import Request, local_manager, redirect
16 from werkzeug.routing import Map, Rule, RequestRedirect
17 from werkzeug.exceptions import HTTPException, NotFound
14 from os import listdir
15 from os import path
16
17 from werkzeug.exceptions import HTTPException
18 from werkzeug.exceptions import NotFound
19 from werkzeug.middleware.shared_data import SharedDataMiddleware
20 from werkzeug.routing import Map
21 from werkzeug.routing import RequestRedirect
22 from werkzeug.routing import Rule
23
24 from .utils import local_manager
25 from .utils import Request
1826
1927
2028 class CoolMagicApplication(object):
2533 def __init__(self, config):
2634 self.config = config
2735
28 for fn in listdir(path.join(path.dirname(__file__), 'views')):
29 if fn.endswith('.py') and fn != '__init__.py':
30 __import__('coolmagic.views.' + fn[:-3])
36 for fn in listdir(path.join(path.dirname(__file__), "views")):
37 if fn.endswith(".py") and fn != "__init__.py":
38 __import__("coolmagic.views." + fn[:-3])
3139
3240 from coolmagic.utils import exported_views
41
3342 rules = [
3443 # url for shared data. this will always be unmatched
3544 # because either the middleware or the webserver
3645 # handles that request first.
37 Rule('/public/<path:file>',
38 endpoint='shared_data')
46 Rule("/public/<path:file>", endpoint="shared_data")
3947 ]
4048 self.views = {}
41 for endpoint, (func, rule, extra) in exported_views.iteritems():
49 for endpoint, (func, rule, extra) in exported_views.items():
4250 if rule is not None:
4351 rules.append(Rule(rule, endpoint=endpoint, **extra))
4452 self.views[endpoint] = func
5058 try:
5159 endpoint, args = urls.match(req.path)
5260 resp = self.views[endpoint](**args)
53 except NotFound, e:
54 resp = self.views['static.not_found']()
55 except (HTTPException, RequestRedirect), e:
61 except NotFound:
62 resp = self.views["static.not_found"]()
63 except (HTTPException, RequestRedirect) as e:
5664 resp = e
5765 return resp(environ, start_response)
5866
6674 app = CoolMagicApplication(config)
6775
6876 # static stuff
69 from werkzeug.wsgi import SharedDataMiddleware
70 app = SharedDataMiddleware(app, {
71 '/public': path.join(path.dirname(__file__), 'public')
72 })
77 app = SharedDataMiddleware(
78 app, {"/public": path.join(path.dirname(__file__), "public")}
79 )
7380
7481 # clean up locals
7582 app = local_manager.make_middleware(app)
44
55 The star-import module for all views.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from coolmagic.utils import Response, TemplateResponse, ThreadedRequest, \
11 export, url_for, redirect
12 from werkzeug.utils import escape
10 from .utils import ThreadedRequest
1311
1412
1513 #: a thread local proxy request object
77 and implement some additional functionality like the ability to link
88 to view functions.
99
10 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
11 :license: BSD, see LICENSE for more details.
10 :copyright: 2007 Pallets
11 :license: BSD-3-Clause
1212 """
13 from os.path import dirname, join
14 from jinja import Environment, FileSystemLoader
15 from werkzeug.local import Local, LocalManager
16 from werkzeug.utils import redirect
17 from werkzeug.wrappers import BaseRequest, BaseResponse
13 from os.path import dirname
14 from os.path import join
15
16 from jinja2 import Environment
17 from jinja2 import FileSystemLoader
18 from werkzeug.local import Local
19 from werkzeug.local import LocalManager
20 from werkzeug.wrappers import BaseRequest
21 from werkzeug.wrappers import BaseResponse
1822
1923
2024 local = Local()
2125 local_manager = LocalManager([local])
2226 template_env = Environment(
23 loader=FileSystemLoader(join(dirname(__file__), 'templates'),
24 use_memcache=False)
27 loader=FileSystemLoader(join(dirname(__file__), "templates"), use_memcache=False)
2528 )
2629 exported_views = {}
2730
3134 Decorator for registering view functions and adding
3235 templates to them.
3336 """
37
3438 def wrapped(f):
35 endpoint = (f.__module__ + '.' + f.__name__)[16:]
39 endpoint = (f.__module__ + "." + f.__name__)[16:]
3640 if template is not None:
3741 old_f = f
42
3843 def f(**kwargs):
3944 rv = old_f(**kwargs)
4045 if not isinstance(rv, Response):
4146 rv = TemplateResponse(template, **(rv or {}))
4247 return rv
48
4349 f.__name__ = old_f.__name__
4450 f.__doc__ = old_f.__doc__
4551 exported_views[endpoint] = (f, string, extra)
4652 return f
53
4754 return wrapped
4855
4956
5966 The concrete request object used in the WSGI application.
6067 It has some helper functions that can be used to build URLs.
6168 """
62 charset = 'utf-8'
69
70 charset = "utf-8"
6371
6472 def __init__(self, environ, url_adapter):
6573 BaseRequest.__init__(self, environ)
7482 """
7583
7684 def __getattr__(self, name):
77 if name == '__members__':
78 return [x for x in dir(local.request) if not
79 x.startswith('_')]
85 if name == "__members__":
86 return [x for x in dir(local.request) if not x.startswith("_")]
8087 return getattr(local.request, name)
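The `ThreadedRequest` proxy above forwards attribute access to whatever request object is bound to the current thread. A stdlib-only sketch of the same pattern, using `threading.local` in place of Werkzeug's `Local` and a hypothetical `DummyRequest`:

```python
import threading

local = threading.local()


class ThreadedRequest(object):
    # Forward attribute lookups to the request object bound to the
    # current thread, mirroring the proxy in the example above.
    def __getattr__(self, name):
        if name == "__members__":
            return [x for x in dir(local.request) if not x.startswith("_")]
        return getattr(local.request, name)


class DummyRequest(object):  # hypothetical stand-in for a real request
    method = "GET"
    path = "/"


local.request = DummyRequest()
request = ThreadedRequest()
```

Because binding happens per thread, concurrent requests each see their own request object through the single module-level `request` proxy.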
8188
8289 def __setattr__(self, name, value):
8794 """
8895 The concrete response object for the WSGI application.
8996 """
90 charset = 'utf-8'
91 default_mimetype = 'text/html'
97
98 charset = "utf-8"
99 default_mimetype = "text/html"
92100
93101
94102 class TemplateResponse(Response):
98106
99107 def __init__(self, template_name, **values):
100108 from coolmagic import helpers
101 values.update(
102 request=local.request,
103 h=helpers
104 )
109
110 values.update(request=local.request, h=helpers)
105111 template = template_env.get_template(template_name)
106112 Response.__init__(self, template.render(values))
44
55 This module collects and assembles the URLs.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
44
55 Some static views.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from coolmagic.helpers import *
10 from coolmagic.utils import export
1111
1212
13 @export('/', template='static/index.html')
13 @export("/", template="static/index.html")
1414 def index():
1515 pass
1616
1717
18 @export('/about', template='static/about.html')
18 @export("/about", template="static/about.html")
1919 def about():
2020 pass
2121
2222
23 @export('/broken')
23 @export("/broken")
2424 def broken():
25 foo = request.args.get('foo', 42)
26 raise RuntimeError('that\'s really broken')
25 raise RuntimeError("that's really broken")
2726
2827
29 @export(None, template='static/not_found.html')
28 @export(None, template="static/not_found.html")
3029 def not_found():
3130 """
3231 This function is always executed if a URL does not
22 Requirements :
33 - werkzeug : http://werkzeug.pocoo.org
44 - jinja : http://jinja.pocoo.org
5 - couchdb 0.72 & above : http://www.couchdb.org
6 - couchdb-python 0.3 & above : http://code.google.com/p/couchdb-python
7
5 - couchdb 0.72 & above : https://couchdb.apache.org/
6 - couchdb-python 0.3 & above : https://github.com/djc/couchdb-python
00 from couchdb.client import Server
1 from couchy.utils import STATIC_PATH, local, local_manager, \
2 url_map
1 from werkzeug.exceptions import HTTPException
2 from werkzeug.exceptions import NotFound
3 from werkzeug.middleware.shared_data import SharedDataMiddleware
34 from werkzeug.wrappers import Request
4 from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware
5 from werkzeug.exceptions import HTTPException, NotFound
6 from couchy import views
7 from couchy.models import URL
8 import couchy.models
5 from werkzeug.wsgi import ClosingIterator
6
7 from . import views
8 from .models import URL
9 from .utils import local
10 from .utils import local_manager
11 from .utils import STATIC_PATH
12 from .utils import url_map
913
1014
1115 class Couchy(object):
12
1316 def __init__(self, db_uri):
1417 local.application = self
1518
1619 server = Server(db_uri)
1720 try:
18 db = server.create('urls')
19 except:
20 db = server['urls']
21 self.dispatch = SharedDataMiddleware(self.dispatch, {
22 '/static': STATIC_PATH
23 })
21 db = server.create("urls")
22 except Exception:
23 db = server["urls"]
24 self.dispatch = SharedDataMiddleware(self.dispatch, {"/static": STATIC_PATH})
2425
2526 URL.db = db
2627
3233 endpoint, values = adapter.match()
3334 handler = getattr(views, endpoint)
3435 response = handler(request, **values)
35 except NotFound, e:
36 except NotFound:
3637 response = views.not_found(request)
3738 response.status_code = 404
38 except HTTPException, e:
39 except HTTPException as e:
3940 response = e
40 return ClosingIterator(response(environ, start_response),
41 [local_manager.cleanup])
41 return ClosingIterator(
42 response(environ, start_response), [local_manager.cleanup]
43 )
4244
4345 def __call__(self, environ, start_response):
4446 return self.dispatch(environ, start_response)
00 from datetime import datetime
1 from couchdb.mapping import Document, TextField, BooleanField, DateTimeField
2 from couchy.utils import url_for, get_random_uid
1
2 from couchdb.mapping import BooleanField
3 from couchdb.mapping import DateTimeField
4 from couchdb.mapping import Document
5 from couchdb.mapping import TextField
6
7 from .utils import get_random_uid
8 from .utils import url_for
39
410
511 class URL(Document):
1016 db = None
1117
1218 @classmethod
13 def load(self, id):
14 return super(URL, self).load(URL.db, id)
19 def load(cls, id):
20 return super(URL, cls).load(URL.db, id)
1521
1622 @classmethod
17 def query(self, code):
23 def query(cls, code):
1824 return URL.db.query(code)
1925
2026 def store(self):
21 if getattr(self._data, 'id', None) is None:
27 if getattr(self._data, "id", None) is None:
2228 new_id = self.shorty_id if self.shorty_id else None
2329 while 1:
2430 id = new_id if new_id else get_random_uid()
25 docid = None
2631 try:
27 docid = URL.db.resource.put(content=self._data, path='/%s/' % str(id))['id']
28 except:
32 docid = URL.db.resource.put(
33 content=self._data, path="/%s/" % str(id)
34 )["id"]
35 except Exception:
2936 continue
3037 if docid:
3138 break
3643
3744 @property
3845 def short_url(self):
39 return url_for('link', uid=self.id, _external=True)
46 return url_for("link", uid=self.id, _external=True)
4047
4148 def __repr__(self):
42 return '<URL %r>' % self.id
49 return "<URL %r>" % self.id
00 from os import path
1 from urlparse import urlparse
2 from random import sample, randrange
3 from jinja import Environment, FileSystemLoader
4 from werkzeug.local import Local, LocalManager
1 from random import randrange
2 from random import sample
3
4 from jinja2 import Environment
5 from jinja2 import FileSystemLoader
6 from werkzeug.local import Local
7 from werkzeug.local import LocalManager
8 from werkzeug.routing import Map
9 from werkzeug.routing import Rule
10 from werkzeug.urls import url_parse
511 from werkzeug.utils import cached_property
612 from werkzeug.wrappers import Response
7 from werkzeug.routing import Map, Rule
813
9 TEMPLATE_PATH = path.join(path.dirname(__file__), 'templates')
10 STATIC_PATH = path.join(path.dirname(__file__), 'static')
11 ALLOWED_SCHEMES = frozenset(['http', 'https', 'ftp', 'ftps'])
12 URL_CHARS = 'abcdefghijkmpqrstuvwxyzABCDEFGHIJKLMNPQRST23456789'
14 TEMPLATE_PATH = path.join(path.dirname(__file__), "templates")
15 STATIC_PATH = path.join(path.dirname(__file__), "static")
16 ALLOWED_SCHEMES = frozenset(["http", "https", "ftp", "ftps"])
17 URL_CHARS = "abcdefghijkmpqrstuvwxyzABCDEFGHIJKLMNPQRST23456789"
1318
1419 local = Local()
1520 local_manager = LocalManager([local])
16 application = local('application')
21 application = local("application")
1722
18 url_map = Map([Rule('/static/<file>', endpoint='static', build_only=True)])
23 url_map = Map([Rule("/static/<file>", endpoint="static", build_only=True)])
1924
2025 jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_PATH))
2126
2227
2328 def expose(rule, **kw):
2429 def decorate(f):
25 kw['endpoint'] = f.__name__
30 kw["endpoint"] = f.__name__
2631 url_map.add(Rule(rule, **kw))
2732 return f
33
2834 return decorate
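The `expose` decorator above registers each view under its function name as the routing endpoint. A stdlib-only sketch of that registry pattern, with a small regex matcher standing in for Werkzeug's `Map`/`Rule` (the `match` helper here is an illustration, not part of the example app):

```python
import re

url_map = {}


def expose(rule):
    # Register the view under its function name, like the decorator
    # above, converting <name> placeholders into named regex groups.
    def decorate(f):
        pattern = re.sub(r"<(\w+)>", r"(?P<\1>[^/]+)", rule)
        url_map[f.__name__] = re.compile("^%s$" % pattern)
        return f

    return decorate


def match(path):
    # Return (endpoint, captured values) for the first matching rule.
    for endpoint, pattern in url_map.items():
        m = pattern.match(path)
        if m:
            return endpoint, m.groupdict()
    raise LookupError(path)


@expose("/display/<uid>")
def display(uid):
    return uid
```

Werkzeug's real `Map.bind(...).match(...)` does the same endpoint lookup, plus converters, redirects and URL building.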
35
2936
3037 def url_for(endpoint, _external=False, **values):
3138 return local.url_adapter.build(endpoint, values, force_external=_external)
32 jinja_env.globals['url_for'] = url_for
39
40
41 jinja_env.globals["url_for"] = url_for
42
3343
3444 def render_template(template, **context):
35 return Response(jinja_env.get_template(template).render(**context),
36 mimetype='text/html')
45 return Response(
46 jinja_env.get_template(template).render(**context), mimetype="text/html"
47 )
48
3749
3850 def validate_url(url):
39 return urlparse(url)[0] in ALLOWED_SCHEMES
51 return url_parse(url)[0] in ALLOWED_SCHEMES
52
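`validate_url` above whitelists URL schemes so only http/https/ftp/ftps targets can be shortened, which blocks `javascript:` or `data:` URLs. The same check with the standard library's `urlparse`, which is equivalent here to `werkzeug.urls.url_parse`:

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = frozenset(["http", "https", "ftp", "ftps"])


def validate_url(url):
    # urlparse(...).scheme plays the role of url_parse(url)[0] above.
    return urlparse(url).scheme in ALLOWED_SCHEMES
```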
4053
4154 def get_random_uid():
42 return ''.join(sample(URL_CHARS, randrange(3, 9)))
55 return "".join(sample(URL_CHARS, randrange(3, 9)))
56
4357
4458 class Pagination(object):
45
4659 def __init__(self, results, per_page, page, endpoint):
4760 self.results = results
4861 self.per_page = per_page
5568
5669 @cached_property
5770 def entries(self):
58 return self.results[((self.page - 1) * self.per_page):(((self.page - 1) * self.per_page)+self.per_page)]
71 return self.results[
72 ((self.page - 1) * self.per_page) : (
73 ((self.page - 1) * self.per_page) + self.per_page
74 )
75 ]
5976
60 has_previous = property(lambda x: x.page > 1)
61 has_next = property(lambda x: x.page < x.pages)
62 previous = property(lambda x: url_for(x.endpoint, page=x.page - 1))
63 next = property(lambda x: url_for(x.endpoint, page=x.page + 1))
64 pages = property(lambda x: max(0, x.count - 1) // x.per_page + 1)
77 has_previous = property(lambda self: self.page > 1)
78 has_next = property(lambda self: self.page < self.pages)
79 previous = property(lambda self: url_for(self.endpoint, page=self.page - 1))
80 next = property(lambda self: url_for(self.endpoint, page=self.page + 1))
81 pages = property(lambda self: max(0, self.count - 1) // self.per_page + 1)
0 from werkzeug.exceptions import NotFound
01 from werkzeug.utils import redirect
1 from werkzeug.exceptions import NotFound
2 from couchy.utils import render_template, expose, \
3 validate_url, url_for, Pagination
4 from couchy.models import URL
2
3 from .models import URL
4 from .utils import expose
5 from .utils import Pagination
6 from .utils import render_template
7 from .utils import url_for
8 from .utils import validate_url
59
610
7 @expose('/')
11 @expose("/")
812 def new(request):
9 error = url = ''
10 if request.method == 'POST':
11 url = request.form.get('url')
12 alias = request.form.get('alias')
13 error = url = ""
14 if request.method == "POST":
15 url = request.form.get("url")
16 alias = request.form.get("alias")
1317 if not validate_url(url):
1418 error = "I'm sorry but you cannot shorten this URL."
1519 elif alias:
1620 if len(alias) > 140:
17 error = 'Your alias is too long'
18 elif '/' in alias:
19 error = 'Your alias might not include a slash'
21 error = "Your alias is too long"
22 elif "/" in alias:
23 error = "Your alias must not include a slash"
2024 elif URL.load(alias):
21 error = 'The alias you have requested exists already'
25 error = "The alias you have requested exists already"
2226 if not error:
23 url = URL(target=url, public='private' not in request.form, shorty_id=alias if alias else None)
27 url = URL(
28 target=url,
29 public="private" not in request.form,
30 shorty_id=alias if alias else None,
31 )
2432 url.store()
2533 uid = url.id
26 return redirect(url_for('display', uid=uid))
27 return render_template('new.html', error=error, url=url)
34 return redirect(url_for("display", uid=uid))
35 return render_template("new.html", error=error, url=url)
2836
29 @expose('/display/<uid>')
37
38 @expose("/display/<uid>")
3039 def display(request, uid):
3140 url = URL.load(uid)
3241 if not url:
3342 raise NotFound()
34 return render_template('display.html', url=url)
43 return render_template("display.html", url=url)
3544
36 @expose('/u/<uid>')
45
46 @expose("/u/<uid>")
3747 def link(request, uid):
3848 url = URL.load(uid)
3949 if not url:
4050 raise NotFound()
4151 return redirect(url.target, 301)
4252
43 @expose('/list/', defaults={'page': 1})
44 @expose('/list/<int:page>')
53
54 @expose("/list/", defaults={"page": 1})
55 @expose("/list/<int:page>")
4556 def list(request, page):
4657 def wrap(doc):
4758 data = doc.value
48 data['_id'] = doc.id
59 data["_id"] = doc.id
4960 return URL.wrap(data)
5061
51 code = '''function(doc) { if (doc.public){ map([doc._id], doc); }}'''
62 code = """function(doc) { if (doc.public){ map([doc._id], doc); }}"""
5263 docResults = URL.query(code)
5364 results = [wrap(doc) for doc in docResults]
54 pagination = Pagination(results, 1, page, 'list')
65 pagination = Pagination(results, 1, page, "list")
5566 if pagination.page > 1 and not pagination.entries:
5667 raise NotFound()
57 return render_template('list.html', pagination=pagination)
68 return render_template("list.html", pagination=pagination)
69
5870
5971 def not_found(request):
60 return render_template('not_found.html')
72 return render_template("not_found.html")
44
55 Werkzeug-powered Teeworlds Server Browser.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from cupoftee.application import make_app
10 from .application import make_app
44
55 The WSGI application for the cup of tee browser.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import time
1111 from os import path
1212 from threading import Thread
13 from cupoftee.db import Database
14 from cupoftee.network import ServerBrowser
15 from werkzeug.templates import Template
16 from werkzeug.wrappers import Request, Response
17 from werkzeug.wsgi import SharedDataMiddleware
18 from werkzeug.exceptions import HTTPException, NotFound
19 from werkzeug.routing import Map, Rule
13
14 from jinja2 import Environment
15 from jinja2 import PackageLoader
16 from werkzeug.exceptions import HTTPException
17 from werkzeug.exceptions import NotFound
18 from werkzeug.middleware.shared_data import SharedDataMiddleware
19 from werkzeug.routing import Map
20 from werkzeug.routing import Rule
21 from werkzeug.wrappers import Request
22 from werkzeug.wrappers import Response
23
24 from .db import Database
25 from .network import ServerBrowser
2026
2127
22 templates = path.join(path.dirname(__file__), 'templates')
28 templates = path.join(path.dirname(__file__), "templates")
2329 pages = {}
24 url_map = Map([Rule('/shared/<file>', endpoint='shared')])
30 url_map = Map([Rule("/shared/<file>", endpoint="shared")])
2531
2632
27 def make_app(database, interval=60):
28 return SharedDataMiddleware(Cup(database), {
29 '/shared': path.join(path.dirname(__file__), 'shared')
30 })
33 def make_app(database, interval=120):
34 return SharedDataMiddleware(
35 Cup(database, interval),
36 {"/shared": path.join(path.dirname(__file__), "shared")},
37 )
3138
3239
3340 class PageMeta(type):
34
3541 def __init__(cls, name, bases, d):
3642 type.__init__(cls, name, bases, d)
37 if d.get('url_rule') is not None:
43 if d.get("url_rule") is not None:
3844 pages[cls.identifier] = cls
39 url_map.add(Rule(cls.url_rule, endpoint=cls.identifier,
40 **cls.url_arguments))
45 url_map.add(
46 Rule(cls.url_rule, endpoint=cls.identifier, **cls.url_arguments)
47 )
4148
42 identifier = property(lambda x: x.__name__.lower())
49 identifier = property(lambda self: self.__name__.lower())
4350
4451
45 class Page(object):
46 __metaclass__ = PageMeta
52 def _with_metaclass(meta, *bases):
53 """Create a base class with a metaclass."""
54
55 class metaclass(type):
56 def __new__(metacls, name, this_bases, d):
57 return meta(name, bases, d)
58
59 return type.__new__(metaclass, "temporary_class", (), {})
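`_with_metaclass` is the usual Python 2/3 compatibility shim (as in `six`): it builds a temporary class whose metaclass constructs the real class, so `PageMeta` can register every `Page` subclass on both interpreters without the py2-only `__metaclass__` attribute or py3-only `metaclass=` syntax. A self-contained sketch with a hypothetical registering metaclass:

```python
class RegisteringMeta(type):
    # Collect every class created with this metaclass into a registry,
    # like PageMeta does for pages in the example above.
    registry = {}

    def __init__(cls, name, bases, d):
        type.__init__(cls, name, bases, d)
        RegisteringMeta.registry[name.lower()] = cls


def _with_metaclass(meta, *bases):
    # Create a base class with a metaclass: the temporary class's own
    # metaclass intercepts subclass creation and hands it to `meta`.
    class metaclass(type):
        def __new__(metacls, name, this_bases, d):
            return meta(name, bases, d)

    return type.__new__(metaclass, "temporary_class", (), {})


class Page(_with_metaclass(RegisteringMeta, object)):
    pass


class ServerList(Page):
    pass
```

Subclassing `Page` is enough to land in the registry; no explicit registration call is needed.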
60
61
62 class Page(_with_metaclass(PageMeta, object)):
4763 url_arguments = {}
4864
4965 def __init__(self, cup, request, url_adapter):
5975
6076 def render_template(self, template=None):
6177 if template is None:
62 template = self.__class__.identifier + '.html'
78 template = self.__class__.identifier + ".html"
6379 context = dict(self.__dict__)
6480 context.update(url_for=self.url_for, self=self)
65 body_tmpl = Template.from_file(path.join(templates, template))
66 layout_tmpl = Template.from_file(path.join(templates, 'layout.html'))
67 context['body'] = body_tmpl.render(context)
68 return layout_tmpl.render(context)
81 return self.cup.render_template(template, context)
6982
7083 def get_response(self):
71 return Response(self.render_template(), mimetype='text/html')
84 return Response(self.render_template(), mimetype="text/html")
7285
7386
7487 class Cup(object):
75
7688 def __init__(self, database, interval=120):
89 self.jinja_env = Environment(loader=PackageLoader("cupoftee"), autoescape=True)
7790 self.interval = interval
7891 self.db = Database(database)
7992 self.master = ServerBrowser(self)
8295 self.updater.start()
8396
8497 def update_master(self):
85 wait = self.interval
8698 while 1:
8799 if self.master.sync():
88100 wait = self.interval
96108 endpoint, values = url_adapter.match()
97109 page = pages[endpoint](self, request, url_adapter)
98110 response = page.process(**values)
99 except NotFound, e:
111 except NotFound:
100112 page = MissingPage(self, request, url_adapter)
101113 response = page.process()
102 except HTTPException, e:
114 except HTTPException as e:
103115 return e
104116 return response or page.get_response()
105117
107119 request = Request(environ)
108120 return self.dispatch_request(request)(environ, start_response)
109121
122 def render_template(self, name, **context):
123 template = self.jinja_env.get_template(name)
124 return template.render(context)
125
110126
111127 from cupoftee.pages import MissingPage
55 A simple object database. As long as the server is not running in
66 multiprocess mode, that's good enough.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
11 from __future__ import with_statement
12 import gdbm
11 from pickle import dumps
12 from pickle import loads
1313 from threading import Lock
14 from pickle import dumps, loads
14
15 try:
16 import dbm
17 except ImportError:
18 import anydbm as dbm
1519
1620
1721 class Database(object):
18
1922 def __init__(self, filename):
2023 self.filename = filename
21 self._fs = gdbm.open(filename, 'cf')
24 self._fs = dbm.open(filename, "cf")
2225 self._local = {}
2326 self._lock = Lock()
2427
3942 def __delitem__(self, key, value):
4043 with self._lock:
4144 self._local.pop(key, None)
42 if self._fs.has_key(key):
45 if key in self._fs:
4346 del self._fs[key]
4447
4548 def __del__(self):
6366
6467 def sync(self):
6568 with self._lock:
66 for key, value in self._local.iteritems():
69 for key, value in self._local.items():
6770 self._fs[key] = dumps(value, 2)
6871 self._fs.sync()
6972
7174 try:
7275 self.sync()
7376 self._fs.close()
74 except:
77 except Exception:
7578 pass
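The `Database` class above persists pickled values through `dbm`. Note that the `"cf"` flag combination (create + fast mode) is specific to `dbm.gnu`; the portable flags are just `"r"`, `"w"`, `"c"` and `"n"`. A minimal round-trip through the stdlib `dbm` module, written to a temporary file:

```python
import dbm
import os
import tempfile
from pickle import dumps, loads

# Open (or create) a database file; whichever dbm backend is available
# is used. Stored values must be bytes, so objects are pickled on the
# way in and unpickled on the way out, as Database.sync() does above.
path = os.path.join(tempfile.mkdtemp(), "cupoftee.db")
db = dbm.open(path, "c")
db["servers"] = dumps({"master1:8300": 3}, 2)
restored = loads(db["servers"])
db.close()
```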
44
55 Query the servers for information.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 import time
1110 import socket
11 from datetime import datetime
1212 from math import log
13 from datetime import datetime
14 from cupoftee.utils import unicodecmp
13
14 from .utils import unicodecmp
1515
1616
1717 class ServerError(Exception):
3131
3232
3333 class ServerBrowser(Syncable):
34
3534 def __init__(self, cup):
3635 self.cup = cup
37 self.servers = cup.db.setdefault('servers', dict)
36 self.servers = cup.db.setdefault("servers", dict)
3837
3938 def _sync(self):
4039 to_delete = set(self.servers)
41 for x in xrange(1, 17):
42 addr = ('master%d.teeworlds.com' % x, 8300)
43 print addr
40 for x in range(1, 17):
41 addr = ("master%d.teeworlds.com" % x, 8300)
42 print(addr)
4443 try:
4544 self._sync_master(addr, to_delete)
46 except (socket.error, socket.timeout, IOError), e:
45 except (socket.error, socket.timeout, IOError):
4746 continue
4847 for server_id in to_delete:
4948 self.servers.pop(server_id, None)
5049 if not self.servers:
51 raise IOError('no servers found')
50 raise IOError("no servers found")
5251 self.cup.db.sync()
5352
5453 def _sync_master(self, addr, to_delete):
5554 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
5655 s.settimeout(5)
57 s.sendto('\x20\x00\x00\x00\x00\x48\xff\xff\xff\xffreqt', addr)
56 s.sendto(b"\x20\x00\x00\x00\x00\x48\xff\xff\xff\xffreqt", addr)
5857 data = s.recvfrom(1024)[0][14:]
5958 s.close()
6059
61 for n in xrange(0, len(data) / 6):
62 addr = ('.'.join(map(str, map(ord, data[n * 6:n * 6 + 4]))),
63 ord(data[n * 6 + 5]) * 256 + ord(data[n * 6 + 4]))
64 server_id = '%s:%d' % addr
60 for n in range(0, len(data) // 6):
61 raw = bytearray(data[n * 6 : n * 6 + 6])
62 addr = (".".join(map(str, raw[:4])), raw[5] * 256 + raw[4])
63 server_id = "%s:%d" % addr
6566 if server_id in self.servers:
6667 if not self.servers[server_id].sync():
6768 continue
7475
7576
7677 class Server(Syncable):
77
7878 def __init__(self, addr, server_id):
7979 self.addr = addr
8080 self.id = server_id
8181 self.players = []
8282 if not self.sync():
83 raise ServerError('server not responding in time')
83 raise ServerError("server not responding in time")
8484
8585 def _sync(self):
8686 s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
8787 s.settimeout(1)
88 s.sendto('\xff\xff\xff\xff\xff\xff\xff\xff\xff\xffgief', self.addr)
89 bits = s.recvfrom(1024)[0][14:].split('\x00')
88 s.sendto(b"\xff\xff\xff\xff\xff\xff\xff\xff\xff\xffgief", self.addr)
89 bits = s.recvfrom(1024)[0][14:].split(b"\x00")
9090 s.close()
9191 self.version, server_name, map_name = bits[:3]
92 self.name = server_name.decode('latin1')
93 self.map = map_name.decode('latin1')
92 self.name = server_name.decode("latin1")
93 self.map = map_name.decode("latin1")
9494 self.gametype = bits[3]
95 self.flags, self.progression, player_count, \
96 self.max_players = map(int, bits[4:8])
95 self.flags, self.progression, player_count, self.max_players = map(
96 int, bits[4:8]
97 )
9798
9899 # sync the player stats
99100 players = dict((p.name, p) for p in self.players)
100 for i in xrange(player_count):
101 name = bits[8 + i * 2].decode('latin1')
101 for i in range(player_count):
102 name = bits[8 + i * 2].decode("latin1")
102103 score = int(bits[9 + i * 2])
103104
104105 # update existing player
109110 else:
110111 self.players.append(Player(self, name, score))
111112 # delete players that left
112 for player in players.itervalues():
113 for player in players.values():
113114 try:
114115 self.players.remove(player)
115 except:
116 except Exception:
116117 pass
117118
118119 # sort the player list and count them
124125
125126
126127 class Player(object):
127
128128 def __init__(self, server, name, score):
129129 self.server = server
130130 self.name = name
44
55 The pages.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from functools import reduce
1011
12 from werkzeug.exceptions import NotFound
1113 from werkzeug.utils import redirect
12 from werkzeug.exceptions import NotFound
13 from cupoftee.application import Page
14 from cupoftee.utils import unicodecmp
14
15 from .application import Page
16 from .utils import unicodecmp
1517
1618
1719 class ServerList(Page):
18 url_rule = '/'
20 url_rule = "/"
1921
2022 def order_link(self, name, title):
21 cls = ''
22 link = '?order_by=' + name
23 cls = ""
24 link = "?order_by=" + name
2325 desc = False
2426 if name == self.order_by:
2527 desc = not self.order_desc
26 cls = ' class="%s"' % (desc and 'down' or 'up')
28 cls = ' class="%s"' % ("down" if desc else "up")
2729 if desc:
28 link += '&amp;dir=desc'
30 link += "&amp;dir=desc"
2931 return '<a href="%s"%s>%s</a>' % (link, cls, title)
3032
3133 def process(self):
32 self.order_by = self.request.args.get('order_by') or 'name'
34 self.order_by = self.request.args.get("order_by") or "name"
3335 sort_func = {
34 'name': lambda x: x,
35 'map': lambda x: x.map,
36 'gametype': lambda x: x.gametype,
37 'players': lambda x: x.player_count,
38 'progression': lambda x: x.progression,
36 "name": lambda x: x,
37 "map": lambda x: x.map,
38 "gametype": lambda x: x.gametype,
39 "players": lambda x: x.player_count,
40 "progression": lambda x: x.progression,
3941 }.get(self.order_by)
4042 if sort_func is None:
41 return redirect(self.url_for('serverlist'))
43 return redirect(self.url_for("serverlist"))
4244
4345 self.servers = list(self.cup.master.servers.values())
4446 self.servers.sort(key=sort_func)
45 if self.request.args.get('dir') == 'desc':
47 if self.request.args.get("dir") == "desc":
4648 self.servers.reverse()
4749 self.order_desc = True
4850 else:
4951 self.order_desc = False
5052
5153 self.players = reduce(lambda a, b: a + b.players, self.servers, [])
52 self.players.sort(lambda a, b: unicodecmp(a.name, b.name))
54 from functools import cmp_to_key
55 self.players.sort(key=cmp_to_key(lambda a, b: unicodecmp(a.name, b.name)))
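`list.sort(key=...)` calls the key function with a single element, so a two-argument comparator such as `unicodecmp` has to go through `functools.cmp_to_key` when porting away from Python 2's `cmp` argument. A sketch with a hypothetical case-insensitive comparator standing in for `unicodecmp`:

```python
from functools import cmp_to_key


def unicodecmp(a, b):
    # Hypothetical stand-in for cupoftee's comparator: order names
    # case-insensitively, returning negative/zero/positive like cmp().
    a, b = a.lower(), b.lower()
    return (a > b) - (a < b)


names = ["beta", "Alpha", "gamma"]
names.sort(key=cmp_to_key(unicodecmp))
```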
5355
5456
5557 class Server(Page):
56 url_rule = '/server/<id>'
58 url_rule = "/server/<id>"
5759
5860 def process(self, id):
5961 try:
6365
6466
6567 class Search(Page):
66 url_rule = '/search'
68 url_rule = "/search"
6769
6870 def process(self):
69 self.user = self.request.args.get('user')
71 self.user = self.request.args.get("user")
7072 if self.user:
7173 self.results = []
72 for server in self.cup.master.servers.itervalues():
74 for server in self.cup.master.servers.values():
7375 for player in server.players:
7476 if player.name == self.user:
7577 self.results.append(server)
7678
7779
7880 class MissingPage(Page):
79
8081 def get_response(self):
8182 response = super(MissingPage, self).get_response()
8283 response.status_code = 404
33 <head>
44 <title>Teeworlds Server Browser</title>
55 <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
6 <link rel="stylesheet" type="text/css" href="${url_for('shared', file='style.css')}">
6 <link rel="stylesheet" type="text/css" href="{{ url_for('shared', file='style.css') }}">
77 </head>
88 <body>
9 <h1><a href="http://www.teeworlds.com/">Teeworlds</a> Server Browser</h1>
9 <h1><a href="https://www.teeworlds.com/">Teeworlds</a> Server Browser</h1>
1010 <div class="contents">
11 ${body}
11 {% block body %}{% endblock %}
1212 </div>
1313 <div class="footer">
14 <a href="http://www.teeworls.com/">Teeworlds</a> Server Browser by
14 <a href="https://www.teeworlds.com/">Teeworlds</a> Server Browser by
1515 <a href="http://lucumr.pocoo.org/">Armin Ronacher</a>, powered by
1616 <a href="http://werkzeug.pocoo.org/">Werkzeug</a>, licensed under
1717 the modified BSD license.
0 {% extends "layout.html" %}
1 {% block body %}
02 <h2>Page Not Found</h2>
13 <p>
24 The requested page does not exist on this server. If you expect something
35 here (for example a server) it probably went away after the last update.
46 </p>
57 <p>
6 go back to <a href="$url_for('serverlist')">the server list</a>.
8 go back to <a href="{{ url_for('serverlist') }}">the server list</a>.
79 </p>
10 {% endblock %}
0 {% extends "layout.html" %}
1 {% block body %}
02 <h2>Nick Search</h2>
1 <% if not user %>
2 <form action="$url_for('search')" method="get">
3 {% if not user %}
4 <form action="{{ url_for('search') }}" method="get">
35 <p>
46 You have to enter a nickname.
57 </p>
810 <input type="submit" value="Search">
911 </p>
1012 <p>
11 Take me <a href="$url_for('serverlist')">back to the server list</a>.
13 Take me <a href="{{ url_for('serverlist') }}">back to the server list</a>.
1214 </p>
1315 </form>
14 <% else %>
15 <% if results %>
16 {% else %}
17 {% if results %}
1618 <p>
17 The nickname "$escape(user)" is currently playing on the
18 following ${len(results) == 1 and 'server' or 'servers'}:
19 The nickname "{{ user }}" is currently playing on the
20 following {{ 'server' if results|length == 1 else 'servers' }}:
1921 </p>
2022 <ul>
21 <% for server in results %>
22 <li><a href="$url_for('server', id=server.id)">$escape(server.name)</a></li>
23 <% endfor %>
23 {% for server in results %}
24 <li><a href="{{ url_for('server', id=server.id) }}">{{ server.name }}</a></li>
25 {% endfor %}
2426 </ul>
25 <% else %>
27 {% else %}
2628 <p>
27 The nickname "$escape(user)" is currently not playing.
29 The nickname "{{ user }}" is currently not playing.
2830 </p>
29 <% endif %>
31 {% endif %}
3032 <p>
31 You can <a href="$url_for('search', user=self.user)">bookmark this link</a>
32 to search for "$escape(user)" quickly or <a href="$url_for('serverlist')">return
33 You can <a href="{{ url_for('search', user=user) }}">bookmark this link</a>
34 to search for "{{ user }}" quickly or <a href="{{ url_for('serverlist') }}">return
3335 to the server list</a>.
3436 </p>
35 <% endif %>
37 {% endif %}
38 {% endblock %}
0 <h2>$escape(server.name)</h2>
0 {% extends "layout.html" %}
1 {% block body %}
2 <h2>{{ server.name }}</h2>
13 <p>
2 Take me back to <a href="$url_for('serverlist')">the server list</a>.
4 Take me back to <a href="{{ url_for('serverlist') }}">the server list</a>.
35 </p>
46 <dl class="serverinfo">
57 <dt>Map</dt>
6 <dd>$escape(server.map)</dd>
8 <dd>{{ server.map }}</dd>
79 <dt>Gametype</dt>
8 <dd>$server.gametype</dd>
10 <dd>{{ server.gametype }}</dd>
911 <dt>Number of players</dt>
10 <dd>$server.player_count</dd>
12 <dd>{{ server.player_count }}</dd>
1113 <dt>Server version</dt>
12 <dd>$server.version</dd>
14 <dd>{{ server.version }}</dd>
1315 <dt>Maximum number of players</dt>
14 <dd>$server.max_players</dd>
15 <% if server.progression >= 0 %>
16 <dd>{{ server.max_players }}</dd>
17 {% if server.progression >= 0 %}
1618 <dt>Game progression</dt>
17 <dd>$server.progression%</dd>
18 <% endif %>
19 <dd>{{ server.progression }}%</dd>
20 {% endif %}
1921 </dl>
20 <% if server.players %>
22 {% if server.players %}
2123 <h3>Players</h3>
2224 <ul class="players">
23 <% for player in server.players %>
24 <li><a href="$url_for('search', user=player.name)">$escape(player.name)</a>
25 <small>($player.score)</small></li>
26 <% endfor %>
25 {% for player in server.players %}
26 <li><a href="{{ url_for('search', user=player.name) }}">{{ player.name }}</a>
27 <small>({{ player.score }})</small></li>
28 {% endfor %}
2729 </ul>
28 <% endif %>
30 {% endif %}
31 {% endblock %}
0 {% extends "layout.html" %}
1 {% block body %}
02 <h2>Server List</h2>
13 <p>
2 Currently <strong>$len(players)</strong> players are playing on
3 <strong>$len(servers)</strong> servers.
4 <% if cup.master.last_sync %>
4 Currently <strong>{{ players|length }}</strong> players are playing on
5 <strong>{{ servers|length }}</strong> servers.
6 {% if cup.master.last_sync %}
57 This list was last synced on
6 <strong>$cup.master.last_sync.strftime('%d %B %Y at %H:%M UTC')</strong>.
7 <% else %>
8 <strong>{{ cup.master.last_sync.strftime('%d %B %Y at %H:%M UTC') }}</strong>.
9 {% else %}
810 Synchronization with master server in progress. Reload the page in a minute
911 or two to see the server list.
10 <% endif %>
12 {% endif %}
1113 </p>
1214 <table class="servers">
1315 <thead>
1416 <tr>
15 <th>$self.order_link('name', 'Name')</th>
16 <th>$self.order_link('map', 'Map')</th>
17 <th>$self.order_link('gametype', 'Gametype')</th>
18 <th>$self.order_link('players', 'Players')</th>
19 <th>$self.order_link('progression', 'Progression')</th>
17 <th>{{ self.order_link('name', 'Name') }}</th>
18 <th>{{ self.order_link('map', 'Map') }}</th>
19 <th>{{ self.order_link('gametype', 'Gametype') }}</th>
20 <th>{{ self.order_link('players', 'Players') }}</th>
21 <th>{{ self.order_link('progression', 'Progression') }}</th>
2022 </tr>
2123 </thead>
2224 <tbody>
23 <% for server in servers %>
25 {% for server in servers %}
2426 <tr>
25 <th><a href="$url_for('server', id=server.id)">$escape(server.name)</a></th>
26 <td>$escape(server.map)</td>
27 <td>$server.gametype</td>
28 <td>$server.player_count / $server.max_players</td>
29 <td>${server.progression >= 0 and '%d%%' % server.progression or '?'}</td>
27 <th><a href="{{ url_for('server', id=server.id) }}">{{ server.name }}</a></th>
28 <td>{{ server.map }}</td>
29 <td>{{ server.gametype }}</td>
30 <td>{{ server.player_count }} / {{ server.max_players }}</td>
31 <td>{{ '%d%%' % server.progression if server.progression >= 0 else '?' }}</td>
3032 </tr>
31 <% endfor %>
33 {% endfor %}
3234 </tbody>
3335 </table>
3436 <h3>Players online</h3>
3840 the detail page of the server for some more information.
3941 </p>
4042 <div class="players">
41 <% for player in players %>
42 <a href="$url_for('server', id=player.server.id)" title="score: $player.score"
43 style="font-size: $player.size%">$escape(player.name)</a>
44 <% endfor %>
43 {% for player in players %}
44 <a href="{{ url_for('server', id=player.server.id) }}" title="score: {{ player.score }}"
45 style="font-size: {{ player.size }}%">{{ player.name }}</a>
46 {% endfor %}
4547 </div>
4648 <h3>Find User</h3>
4749 <p>
5052 users can appear on multiple servers for too generic usernames (like the
5153 default "nameless tee" user).
5254 </p>
53 <form action="$url_for('search')" method="get">
55 <form action="{{ url_for('search') }}" method="get">
5456 <p>
5557 <input type="text" name="user" value="" size="30">
5658 <input type="submit" value="Search">
5759 </p>
5860 </form>
61 {% endblock %}
44
55 Various utilities.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import re
1111
1212
13 _sort_re = re.compile(r'\w+', re.UNICODE)
13 _sort_re = re.compile(r"\w+", re.UNICODE)
1414
1515
1616 def unicodecmp(a, b):
1717 x, y = map(_sort_re.search, [a, b])
18 return cmp((x and x.group() or a).lower(), (y and y.group() or b).lower())
18 x = (x.group() if x else a).lower()
19 y = (y.group() if y else b).lower()
20 return (x > y) - (x < y)
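The rewritten `unicodecmp` above emulates Python 2's removed `cmp()` with the `(x > y) - (x < y)` idiom, which yields -1, 0, or 1. A comparator like this can no longer be handed to `sort()` directly; on Python 3 it has to be wrapped with `functools.cmp_to_key`. A minimal self-contained sketch (the function is restated verbatim so the example runs on its own):

```python
import re
from functools import cmp_to_key

_sort_re = re.compile(r"\w+", re.UNICODE)


def unicodecmp(a, b):
    # Compare by the first word-like run, falling back to the whole string,
    # so decorated nicknames like '"Zoe"' sort next to plain ones.
    x, y = map(_sort_re.search, [a, b])
    x = (x.group() if x else a).lower()
    y = (y.group() if y else b).lower()
    # (x > y) - (x < y) is the standard Python 3 replacement for cmp(x, y).
    return (x > y) - (x < y)


names = ['"Zoe"', "alice", "Bob"]
# cmp_to_key adapts the two-argument comparator into a one-argument key.
print(sorted(names, key=cmp_to_key(unicodecmp)))  # ['alice', 'Bob', '"Zoe"']
```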
55 Shows how you can implement HTTP basic auth support without an
66 additional component.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
1111 from werkzeug.serving import run_simple
12 from werkzeug.wrappers import Request, Response
12 from werkzeug.wrappers import Request
13 from werkzeug.wrappers import Response
1314
1415
1516 class Application(object):
16
17 def __init__(self, users, realm='login required'):
17 def __init__(self, users, realm="login required"):
1818 self.users = users
1919 self.realm = realm
2020
2222 return username in self.users and self.users[username] == password
2323
2424 def auth_required(self, request):
25 return Response('Could not verify your access level for that URL.\n'
26 'You have to login with proper credentials', 401,
27 {'WWW-Authenticate': 'Basic realm="%s"' % self.realm})
25 return Response(
26 "Could not verify your access level for that URL.\n"
26 "You have to log in with proper credentials",
28 401,
29 {"WWW-Authenticate": 'Basic realm="%s"' % self.realm},
30 )
2831
2932 def dispatch_request(self, request):
30 return Response('Logged in as %s' % request.authorization.username)
33 return Response("Logged in as %s" % request.authorization.username)
3134
3235 def __call__(self, environ, start_response):
3336 request = Request(environ)
3942 return response(environ, start_response)
4043
4144
42 if __name__ == '__main__':
43 application = Application({'user1': 'password', 'user2': 'password'})
44 run_simple('localhost', 5000, application)
45 if __name__ == "__main__":
46 application = Application({"user1": "password", "user2": "password"})
47 run_simple("localhost", 5000, application)
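The basic-auth example reads credentials from `request.authorization`, which Werkzeug populates by parsing the `Authorization` header for you. The wire format itself (RFC 7617) is just base64 over `username:password`. A stdlib-only sketch of both directions — the helper names here are illustrative, not part of Werkzeug's API:

```python
import base64


def make_basic_header(username, password):
    # RFC 7617: the credentials are "username:password", base64-encoded.
    token = base64.b64encode(("%s:%s" % (username, password)).encode("utf-8"))
    return "Basic " + token.decode("ascii")


def parse_basic_header(value):
    # Split the scheme from the token; reject anything that isn't Basic.
    scheme, _, token = value.partition(" ")
    if scheme.lower() != "basic":
        return None
    decoded = base64.b64decode(token).decode("utf-8")
    # Only the first ":" separates the fields; passwords may contain ":".
    username, _, password = decoded.partition(":")
    return username, password


header = make_basic_header("user1", "password")
print(header)                      # Basic dXNlcjE6cGFzc3dvcmQ=
print(parse_basic_header(header))  # ('user1', 'password')
```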
0 from i18nurls.application import Application as make_app
0 from .application import Application as make_app
00 from os import path
1 from werkzeug.templates import Template
2 from werkzeug.wrappers import BaseRequest, BaseResponse
3 from werkzeug.routing import NotFound, RequestRedirect
4 from werkzeug.exceptions import HTTPException, NotFound
5 from i18nurls.urls import map
61
7 TEMPLATES = path.join(path.dirname(__file__), 'templates')
2 from jinja2 import Environment
3 from jinja2 import PackageLoader
4 from werkzeug.exceptions import HTTPException
5 from werkzeug.exceptions import NotFound
6 from werkzeug.routing import RequestRedirect
7 from werkzeug.wrappers import BaseResponse
8 from werkzeug.wrappers import Request as _Request
9
10 from .urls import map
11
12 TEMPLATES = path.join(path.dirname(__file__), "templates")
813 views = {}
914
1015
1116 def expose(name):
1217 """Register the function as view."""
18
1319 def wrapped(f):
1420 views[name] = f
1521 return f
22
1623 return wrapped
1724
1825
19 class Request(BaseRequest):
20
26 class Request(_Request):
2127 def __init__(self, environ, urls):
2228 super(Request, self).__init__(environ)
2329 self.urls = urls
2430 self.matched_url = None
2531
2632 def url_for(self, endpoint, **args):
27 if not 'lang_code' in args:
28 args['lang_code'] = self.language
29 if endpoint == 'this':
33 if "lang_code" not in args:
34 args["lang_code"] = self.language
35 if endpoint == "this":
3036 endpoint = self.matched_url[0]
3137 tmp = self.matched_url[1].copy()
3238 tmp.update(args)
3945
4046
4147 class TemplateResponse(Response):
48 jinja_env = Environment(loader=PackageLoader("i18nurls"), autoescape=True)
4249
4350 def __init__(self, template_name, **values):
4451 self.template_name = template_name
4552 self.template_values = values
46 Response.__init__(self, mimetype='text/html')
53 Response.__init__(self, mimetype="text/html")
4754
4855 def __call__(self, environ, start_response):
49 req = environ['werkzeug.request']
56 req = environ["werkzeug.request"]
5057 values = self.template_values.copy()
51 values['req'] = req
52 values['body'] = self.render_template(self.template_name, values)
53 self.write(self.render_template('layout.html', values))
54 return Response.__call__(self, environ, start_response)
58 values["req"] = req
59 self.data = self.render_template(self.template_name, values)
60 return super(TemplateResponse, self).__call__(environ, start_response)
5561
5662 def render_template(self, name, values):
57 return Template.from_file(path.join(TEMPLATES, name)).render(values)
63 template = self.jinja_env.get_template(name)
64 return template.render(values)
5865
5966
6067 class Application(object):
61
6268 def __init__(self):
6369 from i18nurls import views
70
6471 self.not_found = views.page_not_found
6572
6673 def __call__(self, environ, start_response):
6976 try:
7077 endpoint, args = urls.match(req.path)
7178 req.matched_url = (endpoint, args)
72 if endpoint == '#language_select':
79 if endpoint == "#language_select":
7380 lng = req.accept_languages.best
74 lng = lng and lng.split('-')[0].lower() or 'en'
75 index_url = urls.build('index', {'lang_code': lng})
76 resp = Response('Moved to %s' % index_url, status=302)
77 resp.headers['Location'] = index_url
81 lng = lng and lng.split("-")[0].lower() or "en"
82 index_url = urls.build("index", {"lang_code": lng})
83 resp = Response("Moved to %s" % index_url, status=302)
84 resp.headers["Location"] = index_url
7885 else:
79 req.language = args.pop('lang_code', None)
86 req.language = args.pop("lang_code", None)
8087 resp = views[endpoint](req, **args)
8188 except NotFound:
8289 resp = self.not_found(req)
83 except (RequestRedirect, HTTPException), e:
90 except (RequestRedirect, HTTPException) as e:
8491 resp = e
8592 return resp(environ, start_response)
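The `#language_select` branch above reduces the client's best `Accept-Language` match to its primary subtag with an `and`/`or` chain before building the redirect URL. The same step written as a plain function — `pick_language` is a hypothetical name, and `best` stands in for `request.accept_languages.best`, which may be `None` when no usable header was sent:

```python
def pick_language(best, default="en"):
    # "de-AT" -> "de"; None or "" (no usable Accept-Language) -> default.
    return best.split("-")[0].lower() if best else default


print(pick_language("de-AT"))  # de
print(pick_language(None))     # en
```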
0 {% extends "layout.html" %}
1 {% block body %}
02 <p>
13 This is just another page. Maybe you want to head over to the
2 <a href="$escape(req.url_for('blog/index'))">blog</a>.
4 <a href="{{ req.url_for('blog/index') }}">blog</a>.
35 </p>
6 {% endblock %}
0 <p>Blog <% if mode == 'index' %>Index<% else %>Post $post_id<% endif %></p>
0 {% extends "layout.html" %}
1 {% block body %}
2 <p>Blog {% if mode == 'index' %}Index{% else %}Post {{ post_id }}{% endif %}</p>
13 <p>
2 How about going to <a href="$escape(req.url_for('index'))">the index</a>.
4 How about going to <a href="{{ req.url_for('index') }}">the index</a>.
35 </p>
6 {% endblock %}
0 {% extends "layout.html" %}
1 {% block body %}
02 <p>Hello in the i18n URL example application.</p>
13 <p>Because I'm too lazy to translate here is just english content.</p>
24 <ul>
3 <li><a href="$escape(req.url_for('about'))">about this page</a></li>
5 <li><a href="{{ req.url_for('about') }}">about this page</a></li>
46 </ul>
7 {% endblock %}
11 "http://www.w3.org/TR/html4/loose.dtd">
22 <html>
33 <head>
4 <title>$title | Example Application</title>
4 <title>{{ title }} | Example Application</title>
55 </head>
66 <body>
77 <h1>Example Application</h1>
88 <p>
9 Request Language: <strong>$req.language</strong>
9 Request Language: <strong>{{ req.language }}</strong>
1010 </p>
11 $body
11 {% block body %}{% endblock %}
1212 <blockquote>
1313 <p>This page in other languages:
1414 <ul>
15 <% for lng in ['en', 'de', 'fr'] %>
16 <li><a href="$escape(req.url_for('this', lang_code=lng))">$lng</a></li>
17 <% endfor %>
15 {% for lng in ['en', 'de', 'fr'] %}
16 <li><a href="{{ req.url_for('this', lang_code=lng) }}">{{ lng }}</a></li>
17 {% endfor %}
1818 </ul>
1919 </blockquote>
2020 </body>
0 from werkzeug.routing import Map, Rule, Submount
0 from werkzeug.routing import Map
1 from werkzeug.routing import Rule
2 from werkzeug.routing import Submount
13
2 map = Map([
3 Rule('/', endpoint='#language_select'),
4 Submount('/<string(length=2):lang_code>', [
5 Rule('/', endpoint='index'),
6 Rule('/about', endpoint='about'),
7 Rule('/blog/', endpoint='blog/index'),
8 Rule('/blog/<int:post_id>', endpoint='blog/show')
9 ])
10 ])
4 map = Map(
5 [
6 Rule("/", endpoint="#language_select"),
7 Submount(
8 "/<string(length=2):lang_code>",
9 [
10 Rule("/", endpoint="index"),
11 Rule("/about", endpoint="about"),
12 Rule("/blog/", endpoint="blog/index"),
13 Rule("/blog/<int:post_id>", endpoint="blog/show"),
14 ],
15 ),
16 ]
17 )
0 from i18nurls.application import TemplateResponse, Response, expose
0 from .application import expose
1 from .application import Response
2 from .application import TemplateResponse
13
24
3 @expose('index')
5 @expose("index")
46 def index(req):
5 return TemplateResponse('index.html', title='Index')
7 return TemplateResponse("index.html", title="Index")
68
7 @expose('about')
9
10 @expose("about")
811 def about(req):
9 return TemplateResponse('about.html', title='About')
12 return TemplateResponse("about.html", title="About")
1013
11 @expose('blog/index')
14
15 @expose("blog/index")
1216 def blog_index(req):
13 return TemplateResponse('blog.html', title='Blog Index', mode='index')
17 return TemplateResponse("blog.html", title="Blog Index", mode="index")
1418
15 @expose('blog/show')
19
20 @expose("blog/show")
1621 def blog_show(req, post_id):
17 return TemplateResponse('blog.html', title='Blog Post #%d' % post_id,
18 post_id=post_id, mode='show')
22 return TemplateResponse(
23 "blog.html", title="Blog Post #%d" % post_id, post_id=post_id, mode="show"
24 )
25
1926
2027 def page_not_found(req):
21 return Response('<h1>Page Not Found</h1>', mimetype='text/html')
28 return Response("<h1>Page Not Found</h1>", mimetype="text/html")
55
66 Manage the coolmagic example application.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
1111 import click
12 from werkzeug.serving import run_simple
13
1214 from coolmagic import make_app
13 from werkzeug.serving import run_simple
1415
1516
1617 @click.group()
1920
2021
2122 @cli.command()
22 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
23 @click.option('-p', '--port', type=int, default=5000, help="5000")
24 @click.option('--no-reloader', is_flag=True, default=False)
25 @click.option('--debugger', is_flag=True)
26 @click.option('--no-evalex', is_flag=True, default=False)
27 @click.option('--threaded', is_flag=True)
28 @click.option('--processes', type=int, default=1, help="1")
23 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
24 @click.option("-p", "--port", type=int, default=5000, help="5000")
25 @click.option("--no-reloader", is_flag=True, default=False)
26 @click.option("--debugger", is_flag=True)
27 @click.option("--no-evalex", is_flag=True, default=False)
28 @click.option("--threaded", is_flag=True)
29 @click.option("--processes", type=int, default=1, help="1")
2930 def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes):
3031 """Start a new development server."""
3132 app = make_app()
3233 reloader = not no_reloader
3334 evalex = not no_evalex
34 run_simple(hostname, port, app,
35 use_reloader=reloader, use_debugger=debugger,
36 use_evalex=evalex, threaded=threaded, processes=processes)
35 run_simple(
36 hostname,
37 port,
38 app,
39 use_reloader=reloader,
40 use_debugger=debugger,
41 use_evalex=evalex,
42 threaded=threaded,
43 processes=processes,
44 )
3745
3846
3947 @cli.command()
40 @click.option('--no-ipython', is_flag=True, default=False)
48 @click.option("--no-ipython", is_flag=True, default=False)
4149 def shell(no_ipython):
4250 """Start a new interactive python session."""
43 banner = 'Interactive Werkzeug Shell'
51 banner = "Interactive Werkzeug Shell"
4452 namespace = dict()
4553 if not no_ipython:
4654 try:
4755 try:
4856 from IPython.frontend.terminal.embed import InteractiveShellEmbed
57
4958 sh = InteractiveShellEmbed.instance(banner1=banner)
5059 except ImportError:
5160 from IPython.Shell import IPShellEmbed
61
5262 sh = IPShellEmbed(banner=banner)
5363 except ImportError:
5464 pass
5666 sh(local_ns=namespace)
5767 return
5868 from code import interact
69
5970 interact(banner, local=namespace)
6071
61 if __name__ == '__main__':
72
73 if __name__ == "__main__":
6274 cli()
44
55 def make_app():
66 from couchy.application import Couchy
7 return Couchy('http://localhost:5984')
7
8 return Couchy("http://localhost:5984")
89
910
1011 def make_shell():
1112 from couchy import models, utils
13
1214 application = make_app()
13 return locals()
15 return {"application": application, "models": models, "utils": utils}
1416
1517
1618 @click.group()
2123 @cli.command()
2224 def initdb():
2325 from couchy.application import Couchy
24 Couchy('http://localhost:5984').init_database()
26
27 Couchy("http://localhost:5984").init_database()
2528
2629
2730 @cli.command()
28 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
29 @click.option('-p', '--port', type=int, default=5000, help="5000")
30 @click.option('--no-reloader', is_flag=True, default=False)
31 @click.option('--debugger', is_flag=True)
32 @click.option('--no-evalex', is_flag=True, default=False)
33 @click.option('--threaded', is_flag=True)
34 @click.option('--processes', type=int, default=1, help="1")
31 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
32 @click.option("-p", "--port", type=int, default=5000, help="5000")
33 @click.option("--no-reloader", is_flag=True, default=False)
34 @click.option("--debugger", is_flag=True)
35 @click.option("--no-evalex", is_flag=True, default=False)
36 @click.option("--threaded", is_flag=True)
37 @click.option("--processes", type=int, default=1, help="1")
3538 def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes):
3639 """Start a new development server."""
3740 app = make_app()
3841 reloader = not no_reloader
3942 evalex = not no_evalex
40 run_simple(hostname, port, app,
41 use_reloader=reloader, use_debugger=debugger,
42 use_evalex=evalex, threaded=threaded, processes=processes)
43 run_simple(
44 hostname,
45 port,
46 app,
47 use_reloader=reloader,
48 use_debugger=debugger,
49 use_evalex=evalex,
50 threaded=threaded,
51 processes=processes,
52 )
4353
4454
4555 @cli.command()
46 @click.option('--no-ipython', is_flag=True, default=False)
56 @click.option("--no-ipython", is_flag=True, default=False)
4757 def shell(no_ipython):
4858 """Start a new interactive python session."""
49 banner = 'Interactive Werkzeug Shell'
59 banner = "Interactive Werkzeug Shell"
5060 namespace = make_shell()
5161 if not no_ipython:
5262 try:
5363 try:
5464 from IPython.frontend.terminal.embed import InteractiveShellEmbed
65
5566 sh = InteractiveShellEmbed.instance(banner1=banner)
5667 except ImportError:
5768 from IPython.Shell import IPShellEmbed
69
5870 sh = IPShellEmbed(banner=banner)
5971 except ImportError:
6072 pass
6274 sh(local_ns=namespace)
6375 return
6476 from code import interact
77
6578 interact(banner, local=namespace)
6679
67 if __name__ == '__main__':
80
81 if __name__ == "__main__":
6882 cli()
44
55 Manage the cup of tee application.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import click
1111 from werkzeug.serving import run_simple
1313
1414 def make_app():
1515 from cupoftee import make_app
16 return make_app('/tmp/cupoftee.db')
16
17 return make_app("/tmp/cupoftee.db")
1718
1819
1920 @click.group()
2223
2324
2425 @cli.command()
25 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
26 @click.option('-p', '--port', type=int, default=5000, help="5000")
27 @click.option('--reloader', is_flag=True, default=False)
28 @click.option('--debugger', is_flag=True)
29 @click.option('--evalex', is_flag=True, default=False)
30 @click.option('--threaded', is_flag=True)
31 @click.option('--processes', type=int, default=1, help="1")
26 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
27 @click.option("-p", "--port", type=int, default=5000, help="5000")
28 @click.option("--reloader", is_flag=True, default=False)
29 @click.option("--debugger", is_flag=True)
30 @click.option("--evalex", is_flag=True, default=False)
31 @click.option("--threaded", is_flag=True)
32 @click.option("--processes", type=int, default=1, help="1")
3233 def runserver(hostname, port, reloader, debugger, evalex, threaded, processes):
3334 """Start a new development server."""
3435 app = make_app()
35 run_simple(hostname, port, app,
36 use_reloader=reloader, use_debugger=debugger,
37 use_evalex=evalex, threaded=threaded, processes=processes)
36 run_simple(
37 hostname,
38 port,
39 app,
40 use_reloader=reloader,
41 use_debugger=debugger,
42 use_evalex=evalex,
43 threaded=threaded,
44 processes=processes,
45 )
3846
3947
40 if __name__ == '__main__':
48 if __name__ == "__main__":
4149 cli()
55
66 Manage the i18n url example application.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
1111 import click
12 from werkzeug.serving import run_simple
13
1214 from i18nurls import make_app
13 from werkzeug.serving import run_simple
1415
1516
1617 @click.group()
1920
2021
2122 @cli.command()
22 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
23 @click.option('-p', '--port', type=int, default=5000, help="5000")
24 @click.option('--no-reloader', is_flag=True, default=False)
25 @click.option('--debugger', is_flag=True)
26 @click.option('--no-evalex', is_flag=True, default=False)
27 @click.option('--threaded', is_flag=True)
28 @click.option('--processes', type=int, default=1, help="1")
23 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
24 @click.option("-p", "--port", type=int, default=5000, help="5000")
25 @click.option("--no-reloader", is_flag=True, default=False)
26 @click.option("--debugger", is_flag=True)
27 @click.option("--no-evalex", is_flag=True, default=False)
28 @click.option("--threaded", is_flag=True)
29 @click.option("--processes", type=int, default=1, help="1")
2930 def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes):
3031 """Start a new development server."""
3132 app = make_app()
3233 reloader = not no_reloader
3334 evalex = not no_evalex
34 run_simple(hostname, port, app,
35 use_reloader=reloader, use_debugger=debugger,
36 use_evalex=evalex, threaded=threaded, processes=processes)
35 run_simple(
36 hostname,
37 port,
38 app,
39 use_reloader=reloader,
40 use_debugger=debugger,
41 use_evalex=evalex,
42 threaded=threaded,
43 processes=processes,
44 )
3745
3846
3947 @cli.command()
40 @click.option('--no-ipython', is_flag=True, default=False)
48 @click.option("--no-ipython", is_flag=True, default=False)
4149 def shell(no_ipython):
4250 """Start a new interactive python session."""
43 banner = 'Interactive Werkzeug Shell'
51 banner = "Interactive Werkzeug Shell"
4452 namespace = dict()
4553 if not no_ipython:
4654 try:
4755 try:
4856 from IPython.frontend.terminal.embed import InteractiveShellEmbed
57
4958 sh = InteractiveShellEmbed.instance(banner1=banner)
5059 except ImportError:
5160 from IPython.Shell import IPShellEmbed
61
5262 sh = IPShellEmbed(banner=banner)
5363 except ImportError:
5464 pass
5666 sh(local_ns=namespace)
5767 return
5868 from code import interact
69
5970 interact(banner, local=namespace)
6071
61 if __name__ == '__main__':
72
73 if __name__ == "__main__":
6274 cli()
55
66 This script manages the plnt application.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
11 import os
12
1113 import click
12 import os
1314 from werkzeug.serving import run_simple
1415
1516
1617 def make_app():
1718 """Helper function that creates a plnt app."""
1819 from plnt import Plnt
19 database_uri = os.environ.get('PLNT_DATABASE_URI')
20 app = Plnt(database_uri or 'sqlite:////tmp/plnt.db')
20
21 database_uri = os.environ.get("PLNT_DATABASE_URI")
22 app = Plnt(database_uri or "sqlite:////tmp/plnt.db")
2123 app.bind_to_context()
2224 return app
2325
3133 def initdb():
3234 """Initialize the database"""
3335 from plnt.database import Blog, session
36
3437 make_app().init_database()
3538 # and now fill in some python blogs everybody should read (shamelessly
3639 # added my own blog too)
3740 blogs = [
38 Blog('Armin Ronacher', 'http://lucumr.pocoo.org/',
39 'http://lucumr.pocoo.org/cogitations/feed/'),
40 Blog('Georg Brandl', 'http://pyside.blogspot.com/',
41 'http://pyside.blogspot.com/feeds/posts/default'),
42 Blog('Ian Bicking', 'http://blog.ianbicking.org/',
43 'http://blog.ianbicking.org/feed/'),
44 Blog('Amir Salihefendic', 'http://amix.dk/',
45 'http://feeds.feedburner.com/amixdk'),
46 Blog('Christopher Lenz', 'http://www.cmlenz.net/blog/',
47 'http://www.cmlenz.net/blog/atom.xml'),
48 Blog('Frederick Lundh', 'http://online.effbot.org/',
49 'http://online.effbot.org/rss.xml')
41 Blog(
42 "Armin Ronacher",
43 "http://lucumr.pocoo.org/",
44 "http://lucumr.pocoo.org/cogitations/feed/",
45 ),
46 Blog(
47 "Georg Brandl",
48 "http://pyside.blogspot.com/",
49 "http://pyside.blogspot.com/feeds/posts/default",
50 ),
51 Blog(
52 "Ian Bicking",
53 "http://blog.ianbicking.org/",
54 "http://blog.ianbicking.org/feed/",
55 ),
56 Blog(
57 "Amir Salihefendic", "http://amix.dk/", "http://feeds.feedburner.com/amixdk"
58 ),
59 Blog(
60 "Christopher Lenz",
61 "http://www.cmlenz.net/blog/",
62 "http://www.cmlenz.net/blog/atom.xml",
63 ),
64 Blog(
65 "Frederick Lundh",
66 "http://online.effbot.org/",
67 "http://online.effbot.org/rss.xml",
68 ),
5069 ]
5170 # okay. got tired here. if someone feels that he is missing, drop me
5271 # a line ;-)
5372 for blog in blogs:
5473 session.add(blog)
5574 session.commit()
56 click.echo('Initialized database, now run manage-plnt.py sync to get the posts')
75 click.echo("Initialized database, now run manage-plnt.py sync to get the posts")
5776
5877
5978 @cli.command()
60 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
61 @click.option('-p', '--port', type=int, default=5000, help="5000")
62 @click.option('--no-reloader', is_flag=True, default=False)
63 @click.option('--debugger', is_flag=True)
64 @click.option('--no-evalex', is_flag=True, default=False)
65 @click.option('--threaded', is_flag=True)
66 @click.option('--processes', type=int, default=1, help="1")
79 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
80 @click.option("-p", "--port", type=int, default=5000, help="5000")
81 @click.option("--no-reloader", is_flag=True, default=False)
82 @click.option("--debugger", is_flag=True)
83 @click.option("--no-evalex", is_flag=True, default=False)
84 @click.option("--threaded", is_flag=True)
85 @click.option("--processes", type=int, default=1, help="1")
6786 def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes):
6887 """Start a new development server."""
6988 app = make_app()
7089 reloader = not no_reloader
7190 evalex = not no_evalex
72 run_simple(hostname, port, app,
73 use_reloader=reloader, use_debugger=debugger,
74 use_evalex=evalex, threaded=threaded, processes=processes)
91 run_simple(
92 hostname,
93 port,
94 app,
95 use_reloader=reloader,
96 use_debugger=debugger,
97 use_evalex=evalex,
98 threaded=threaded,
99 processes=processes,
100 )
75101
76102
77103 @cli.command()
78 @click.option('--no-ipython', is_flag=True, default=False)
104 @click.option("--no-ipython", is_flag=True, default=False)
79105 def shell(no_ipython):
80106 """Start a new interactive python session."""
81 banner = 'Interactive Werkzeug Shell'
82 namespace = {'app': make_app()}
107 banner = "Interactive Werkzeug Shell"
108 namespace = {"app": make_app()}
83109 if not no_ipython:
84110 try:
85111 try:
86112 from IPython.frontend.terminal.embed import InteractiveShellEmbed
113
87114 sh = InteractiveShellEmbed.instance(banner1=banner)
88115 except ImportError:
89116 from IPython.Shell import IPShellEmbed
117
90118 sh = IPShellEmbed(banner=banner)
91119 except ImportError:
92120 pass
94122 sh(local_ns=namespace)
95123 return
96124 from code import interact
125
97126 interact(banner, local=namespace)
98127
99128
101130 def sync():
102131 """Sync the blogs in the planet. Call this from a cronjob."""
103132 from plnt.sync import sync
133
104134 make_app().bind_to_context()
105135 sync()
106136
107 if __name__ == '__main__':
137
138 if __name__ == "__main__":
108139 cli()
00 #!/usr/bin/env python
1 import click
21 import os
32 import tempfile
3
4 import click
45 from werkzeug.serving import run_simple
56
67
78 def make_app():
89 from shorty.application import Shorty
10
911 filename = os.path.join(tempfile.gettempdir(), "shorty.db")
10 return Shorty('sqlite:///{0}'.format(filename))
12 return Shorty("sqlite:///{0}".format(filename))
1113
1214
1315 def make_shell():
1416 from shorty import models, utils
17
1518 application = make_app()
16 return locals()
19 return {"application": application, "models": models, "utils": utils}
1720
1821
1922 @click.group()
2730
2831
2932 @cli.command()
30 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
31 @click.option('-p', '--port', type=int, default=5000, help="5000")
32 @click.option('--no-reloader', is_flag=True, default=False)
33 @click.option('--debugger', is_flag=True)
34 @click.option('--no-evalex', is_flag=True, default=False)
35 @click.option('--threaded', is_flag=True)
36 @click.option('--processes', type=int, default=1, help="1")
33 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
34 @click.option("-p", "--port", type=int, default=5000, help="5000")
35 @click.option("--no-reloader", is_flag=True, default=False)
36 @click.option("--debugger", is_flag=True)
37 @click.option("--no-evalex", is_flag=True, default=False)
38 @click.option("--threaded", is_flag=True)
39 @click.option("--processes", type=int, default=1, help="1")
3740 def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes):
3841 """Start a new development server."""
3942 app = make_app()
4043 reloader = not no_reloader
4144 evalex = not no_evalex
42 run_simple(hostname, port, app,
43 use_reloader=reloader, use_debugger=debugger,
44 use_evalex=evalex, threaded=threaded, processes=processes)
45 run_simple(
46 hostname,
47 port,
48 app,
49 use_reloader=reloader,
50 use_debugger=debugger,
51 use_evalex=evalex,
52 threaded=threaded,
53 processes=processes,
54 )
4555
4656
4757 @cli.command()
48 @click.option('--no-ipython', is_flag=True, default=False)
58 @click.option("--no-ipython", is_flag=True, default=False)
4959 def shell(no_ipython):
5060 """Start a new interactive python session."""
51 banner = 'Interactive Werkzeug Shell'
61 banner = "Interactive Werkzeug Shell"
5262 namespace = make_shell()
5363 if not no_ipython:
5464 try:
5565 try:
5666 from IPython.frontend.terminal.embed import InteractiveShellEmbed
67
5768 sh = InteractiveShellEmbed.instance(banner1=banner)
5869 except ImportError:
5970 from IPython.Shell import IPShellEmbed
71
6072 sh = IPShellEmbed(banner=banner)
6173 except ImportError:
6274 pass
6476 sh(local_ns=namespace)
6577 return
6678 from code import interact
79
6780 interact(banner, local=namespace)
6881
69 if __name__ == '__main__':
82
83 if __name__ == "__main__":
7084 cli()
55
66 This script provides some basic commands to debug and test SimpleWiki.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
11 import os
12
1113 import click
12 import os
13 import tempfile
1414 from werkzeug.serving import run_simple
1515
1616
1717 def make_wiki():
1818 """Helper function that creates a new wiki instance."""
1919 from simplewiki import SimpleWiki
20 database_uri = os.environ.get('SIMPLEWIKI_DATABASE_URI')
21 return SimpleWiki(database_uri or 'sqlite:////tmp/simplewiki.db')
20
21 database_uri = os.environ.get("SIMPLEWIKI_DATABASE_URI")
22 return SimpleWiki(database_uri or "sqlite:////tmp/simplewiki.db")
2223
2324
2425 def make_shell():
2526 from simplewiki import database
27
2628 wiki = make_wiki()
2729 wiki.bind_to_context()
28 return {
29 'wiki': wiki,
30 'db': database
31 }
30 return {"wiki": wiki, "db": database}
3231
3332
3433 @click.group()
4241
4342
4443 @cli.command()
45 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
46 @click.option('-p', '--port', type=int, default=5000, help="5000")
47 @click.option('--no-reloader', is_flag=True, default=False)
48 @click.option('--debugger', is_flag=True)
49 @click.option('--no-evalex', is_flag=True, default=False)
50 @click.option('--threaded', is_flag=True)
51 @click.option('--processes', type=int, default=1, help="1")
44 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
45 @click.option("-p", "--port", type=int, default=5000, help="5000")
46 @click.option("--no-reloader", is_flag=True, default=False)
47 @click.option("--debugger", is_flag=True)
48 @click.option("--no-evalex", is_flag=True, default=False)
49 @click.option("--threaded", is_flag=True)
50 @click.option("--processes", type=int, default=1, help="1")
5251 def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes):
5352 """Start a new development server."""
5453 app = make_wiki()
5554 reloader = not no_reloader
5655 evalex = not no_evalex
57 run_simple(hostname, port, app,
58 use_reloader=reloader, use_debugger=debugger,
59 use_evalex=evalex, threaded=threaded, processes=processes)
56 run_simple(
57 hostname,
58 port,
59 app,
60 use_reloader=reloader,
61 use_debugger=debugger,
62 use_evalex=evalex,
63 threaded=threaded,
64 processes=processes,
65 )
6066
6167
6268 @cli.command()
63 @click.option('--no-ipython', is_flag=True, default=False)
69 @click.option("--no-ipython", is_flag=True, default=False)
6470 def shell(no_ipython):
6571 """Start a new interactive python session."""
66 banner = 'Interactive Werkzeug Shell'
72 banner = "Interactive Werkzeug Shell"
6773 namespace = make_shell()
6874 if not no_ipython:
6975 try:
7076 try:
7177 from IPython.frontend.terminal.embed import InteractiveShellEmbed
78
7279 sh = InteractiveShellEmbed.instance(banner1=banner)
7380 except ImportError:
7481 from IPython.Shell import IPShellEmbed
82
7583 sh = IPShellEmbed(banner=banner)
7684 except ImportError:
7785 pass
7987 sh(local_ns=namespace)
8088 return
8189 from code import interact
90
8291 interact(banner, local=namespace)
8392
84 if __name__ == '__main__':
93
94 if __name__ == "__main__":
8595 cli()
99
1010 __ http://webpy.org/tutorial2.en
1111
12 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
13 :license: BSD, see LICENSE for more details.
12 :copyright: 2007 Pallets
13 :license: BSD-3-Clause
1414 """
15 import click
1615 import os
1716 import sys
18 sys.path.append(os.path.join(os.path.dirname(__file__), 'webpylike'))
19 from example import app
17
18 import click
2019 from werkzeug.serving import run_simple
20
21 from webpylike.example import app
22
23 sys.path.append(os.path.join(os.path.dirname(__file__), "webpylike"))
2124
2225
2326 @click.group()
2629
2730
2831 @cli.command()
29 @click.option('-h', '--hostname', type=str, default='localhost', help="localhost")
30 @click.option('-p', '--port', type=int, default=5000, help="5000")
31 @click.option('--no-reloader', is_flag=True, default=False)
32 @click.option('--debugger', is_flag=True)
33 @click.option('--no-evalex', is_flag=True, default=False)
34 @click.option('--threaded', is_flag=True)
35 @click.option('--processes', type=int, default=1, help="1")
32 @click.option("-h", "--hostname", type=str, default="localhost", help="localhost")
33 @click.option("-p", "--port", type=int, default=5000, help="5000")
34 @click.option("--no-reloader", is_flag=True, default=False)
35 @click.option("--debugger", is_flag=True)
36 @click.option("--no-evalex", is_flag=True, default=False)
37 @click.option("--threaded", is_flag=True)
38 @click.option("--processes", type=int, default=1, help="1")
3639 def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes):
3740 """Start a new development server."""
3841 reloader = not no_reloader
3942 evalex = not no_evalex
40 run_simple(hostname, port, app,
41 use_reloader=reloader, use_debugger=debugger,
42 use_evalex=evalex, threaded=threaded, processes=processes)
43 run_simple(
44 hostname,
45 port,
46 app,
47 use_reloader=reloader,
48 use_debugger=debugger,
49 use_evalex=evalex,
50 threaded=threaded,
51 processes=processes,
52 )
4353
4454
4555 @cli.command()
46 @click.option('--no-ipython', is_flag=True, default=False)
56 @click.option("--no-ipython", is_flag=True, default=False)
4757 def shell(no_ipython):
4858 """Start a new interactive python session."""
49 banner = 'Interactive Werkzeug Shell'
59 banner = "Interactive Werkzeug Shell"
5060 namespace = dict()
5161 if not no_ipython:
5262 try:
5363 try:
5464 from IPython.frontend.terminal.embed import InteractiveShellEmbed
65
5566 sh = InteractiveShellEmbed.instance(banner1=banner)
5667 except ImportError:
5768 from IPython.Shell import IPShellEmbed
69
5870 sh = IPShellEmbed(banner=banner)
5971 except ImportError:
6072 pass
6274 sh(local_ns=namespace)
6375 return
6476 from code import interact
77
6578 interact(banner, local=namespace)
6679
67 if __name__ == '__main__':
80
81 if __name__ == "__main__":
6882 cli()
0 from werkzeug.routing import Map, Rule, Subdomain, Submount, EndpointPrefix
0 from werkzeug.routing import EndpointPrefix
1 from werkzeug.routing import Map
2 from werkzeug.routing import Rule
3 from werkzeug.routing import Subdomain
4 from werkzeug.routing import Submount
15
2 m = Map([
3 # Static URLs
4 EndpointPrefix('static/', [
5 Rule('/', endpoint='index'),
6 Rule('/about', endpoint='about'),
7 Rule('/help', endpoint='help'),
8 ]),
9 # Knowledge Base
10 Subdomain('kb', [EndpointPrefix('kb/', [
11 Rule('/', endpoint='index'),
12 Submount('/browse', [
13 Rule('/', endpoint='browse'),
14 Rule('/<int:id>/', defaults={'page': 1}, endpoint='browse'),
15 Rule('/<int:id>/<int:page>', endpoint='browse')
16 ])
17 ])])
18 ])
6 m = Map(
7 [
8 # Static URLs
9 EndpointPrefix(
10 "static/",
11 [
12 Rule("/", endpoint="index"),
13 Rule("/about", endpoint="about"),
14 Rule("/help", endpoint="help"),
15 ],
16 ),
17 # Knowledge Base
18 Subdomain(
19 "kb",
20 [
21 EndpointPrefix(
22 "kb/",
23 [
24 Rule("/", endpoint="index"),
25 Submount(
26 "/browse",
27 [
28 Rule("/", endpoint="browse"),
29 Rule(
30 "/<int:id>/",
31 defaults={"page": 1},
32 endpoint="browse",
33 ),
34 Rule("/<int:id>/<int:page>", endpoint="browse"),
35 ],
36 ),
37 ],
38 )
39 ],
40 ),
41 ]
42 )
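The reformatted `Map` above nests `EndpointPrefix`, `Subdomain`, and `Submount` rule factories. A minimal sketch of how such a map is bound and matched (assuming the host name `example.com`; requires Werkzeug to be installed):

```python
from werkzeug.routing import EndpointPrefix, Map, Rule, Subdomain, Submount

m = Map(
    [
        EndpointPrefix("static/", [Rule("/about", endpoint="about")]),
        Subdomain(
            "kb",
            [
                EndpointPrefix(
                    "kb/",
                    [
                        Submount(
                            "/browse",
                            [Rule("/<int:id>/", defaults={"page": 1}, endpoint="browse")],
                        )
                    ],
                )
            ],
        ),
    ]
)

# Rules without an explicit subdomain match the map's default subdomain.
adapter = m.bind("example.com")
print(adapter.match("/about"))  # ("static/about", {})

# The knowledge-base rules only match when bound to the "kb" subdomain;
# the rule's defaults are merged into the returned arguments.
kb = m.bind("example.com", subdomain="kb")
print(kb.match("/browse/42/"))  # ("kb/browse", {"id": 42, "page": 1})
```

The factories only rewrite the rules at map-construction time; matching works exactly as if each `Rule` had been written out with the full prefix, subdomain, and mount point.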
44
55 Noun. plnt (plant) -- a planet application that sounds like a herb.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from plnt.webapp import Plnt
10 from .webapp import Plnt
44
55 The database definitions for the planet.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from sqlalchemy import MetaData, Table, Column, ForeignKey, Boolean, \
11 Integer, String, DateTime
12 from sqlalchemy.orm import dynamic_loader, scoped_session, create_session, \
13 mapper
14 from plnt.utils import application, local_manager
10 from sqlalchemy import Column
11 from sqlalchemy import DateTime
12 from sqlalchemy import ForeignKey
13 from sqlalchemy import Integer
14 from sqlalchemy import MetaData
15 from sqlalchemy import String
16 from sqlalchemy import Table
17 from sqlalchemy.orm import create_session
18 from sqlalchemy.orm import dynamic_loader
19 from sqlalchemy.orm import mapper
20 from sqlalchemy.orm import scoped_session
21
22 from .utils import application
23 from .utils import local_manager
1524
1625
1726 def new_db_session():
18 return create_session(application.database_engine, autoflush=True,
19 autocommit=False)
27 return create_session(application.database_engine, autoflush=True, autocommit=False)
28
2029
2130 metadata = MetaData()
2231 session = scoped_session(new_db_session, local_manager.get_ident)
2332
2433
25 blog_table = Table('blogs', metadata,
26 Column('id', Integer, primary_key=True),
27 Column('name', String(120)),
28 Column('description', String),
29 Column('url', String(200)),
30 Column('feed_url', String(250))
34 blog_table = Table(
35 "blogs",
36 metadata,
37 Column("id", Integer, primary_key=True),
38 Column("name", String(120)),
39 Column("description", String),
40 Column("url", String(200)),
41 Column("feed_url", String(250)),
3142 )
3243
33 entry_table = Table('entries', metadata,
34 Column('id', Integer, primary_key=True),
35 Column('blog_id', Integer, ForeignKey('blogs.id')),
36 Column('guid', String(200), unique=True),
37 Column('title', String(140)),
38 Column('url', String(200)),
39 Column('text', String),
40 Column('pub_date', DateTime),
41 Column('last_update', DateTime)
44 entry_table = Table(
45 "entries",
46 metadata,
47 Column("id", Integer, primary_key=True),
48 Column("blog_id", Integer, ForeignKey("blogs.id")),
49 Column("guid", String(200), unique=True),
50 Column("title", String(140)),
51 Column("url", String(200)),
52 Column("text", String),
53 Column("pub_date", DateTime),
54 Column("last_update", DateTime),
4255 )
4356
4457
4558 class Blog(object):
4659 query = session.query_property()
4760
48 def __init__(self, name, url, feed_url, description=u''):
61 def __init__(self, name, url, feed_url, description=u""):
4962 self.name = name
5063 self.url = url
5164 self.feed_url = feed_url
5265 self.description = description
5366
5467 def __repr__(self):
55 return '<%s %r>' % (self.__class__.__name__, self.url)
68 return "<%s %r>" % (self.__class__.__name__, self.url)
5669
5770
5871 class Entry(object):
5972 query = session.query_property()
6073
6174 def __repr__(self):
62 return '<%s %r>' % (self.__class__.__name__, self.guid)
75 return "<%s %r>" % (self.__class__.__name__, self.guid)
6376
6477
6578 mapper(Entry, entry_table)
66 mapper(Blog, blog_table, properties=dict(
67 entries=dynamic_loader(Entry, backref='blog')
68 ))
79 mapper(Blog, blog_table, properties=dict(entries=dynamic_loader(Entry, backref="blog")))
44
55 Does the synchronization. Called by "manage-plnt.py sync"
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 import sys
10 from datetime import datetime
11
1112 import feedparser
12 from time import time
13 from datetime import datetime
1413 from werkzeug.utils import escape
15 from plnt.database import Blog, Entry, session
16 from plnt.utils import strip_tags, nl2p
14
15 from .database import Blog
16 from .database import Entry
17 from .database import session
18 from .utils import nl2p
19 from .utils import strip_tags
1720
1821
19 HTML_MIMETYPES = set(['text/html', 'application/xhtml+xml'])
22 HTML_MIMETYPES = {"text/html", "application/xhtml+xml"}
2023
2124
2225 def sync():
2831 # parse the feed. feedparser.parse will never raise an exception
2932 # but the bozo bit might be defined.
3033 feed = feedparser.parse(blog.feed_url)
31 blog_author = feed.get('author') or blog.name
32 blog_author_detail = feed.get('author_detail')
3334
3435 for entry in feed.entries:
3536 # get the guid. either the id if specified, otherwise the link.
3637 # if none is available we skip the entry.
37 guid = entry.get('id') or entry.get('link')
38 guid = entry.get("id") or entry.get("link")
3839 if not guid:
3940 continue
4041
4445
4546 # get title, url and text. skip if no title or no text is
4647 # given. if the link is missing we use the blog link.
47 if 'title_detail' in entry:
48 title = entry.title_detail.get('value') or ''
49 if entry.title_detail.get('type') in HTML_MIMETYPES:
48 if "title_detail" in entry:
49 title = entry.title_detail.get("value") or ""
50 if entry.title_detail.get("type") in HTML_MIMETYPES:
5051 title = strip_tags(title)
5152 else:
5253 title = escape(title)
5354 else:
54 title = entry.get('title')
55 url = entry.get('link') or blog.blog_url
56 text = 'content' in entry and entry.content[0] or \
57 entry.get('summary_detail')
55 title = entry.get("title")
56 url = entry.get("link") or blog.blog_url
57 text = (
58 "content" in entry and entry.content[0] or entry.get("summary_detail")
59 )
5860
5961 if not title or not text:
6062 continue
6264 # if we have an html text we use that, otherwise we HTML
6365 # escape the text and use that one. We also handle XHTML
6466 # with our tag soup parser for the moment.
65 if text.get('type') not in HTML_MIMETYPES:
66 text = escape(nl2p(text.get('value') or ''))
67 if text.get("type") not in HTML_MIMETYPES:
68 text = escape(nl2p(text.get("value") or ""))
6769 else:
68 text = text.get('value') or ''
70 text = text.get("value") or ""
6971
7072 # no text? continue
7173 if not text.strip():
7375
7476 # get the pub date and updated date. This is rather complex
7577 # because different feeds do different stuff
76 pub_date = entry.get('published_parsed') or \
77 entry.get('created_parsed') or \
78 entry.get('date_parsed')
79 updated = entry.get('updated_parsed') or pub_date
78 pub_date = (
79 entry.get("published_parsed")
80 or entry.get("created_parsed")
81 or entry.get("date_parsed")
82 )
83 updated = entry.get("updated_parsed") or pub_date
8084 pub_date = pub_date or updated
8185
8286 # if we don't have a pub_date we skip.
55 Plnt is a small example application written using the
66 <a href="http://werkzeug.pocoo.org/">Werkzeug</a> WSGI toolkit,
77 the <a href="http://jinja.pocoo.org/">Jinja</a> template language,
8 the <a href="http://sqlalchemy.org/">SQLAlchemy</a> database abstraction
8 the <a href="https://www.sqlalchemy.org/">SQLAlchemy</a> database abstraction
99 layer and ORM and last but not least the awesome
1010 <a href="http://feedparser.org/">feedparser</a> library.
1111 </p>
44
55 The planet utilities.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import re
1111 from os import path
12 from jinja2 import Environment, FileSystemLoader
13 from werkzeug.local import Local, LocalManager
14 from werkzeug.urls import url_encode, url_quote
12
13 from jinja2 import Environment
14 from jinja2 import FileSystemLoader
15 from werkzeug._compat import unichr
16 from werkzeug.local import Local
17 from werkzeug.local import LocalManager
18 from werkzeug.routing import Map
19 from werkzeug.routing import Rule
1520 from werkzeug.utils import cached_property
1621 from werkzeug.wrappers import Response
17 from werkzeug.routing import Map, Rule
1822
1923
2024 # context locals. these two objects are used by the application to
2731
2832
2933 # proxy objects
30 request = local('request')
31 application = local('application')
32 url_adapter = local('url_adapter')
34 request = local("request")
35 application = local("application")
36 url_adapter = local("url_adapter")
3337
3438
3539 # let's use jinja for templates this time
36 template_path = path.join(path.dirname(__file__), 'templates')
40 template_path = path.join(path.dirname(__file__), "templates")
3741 jinja_env = Environment(loader=FileSystemLoader(template_path))
3842
3943
4044 # the collected url patterns
41 url_map = Map([Rule('/shared/<path:file>', endpoint='shared')])
45 url_map = Map([Rule("/shared/<path:file>", endpoint="shared")])
4246 endpoints = {}
4347
4448
45 _par_re = re.compile(r'\n{2,}')
46 _entity_re = re.compile(r'&([^;]+);')
47 _striptags_re = re.compile(r'(<!--.*-->|<[^>]*>)')
49 _par_re = re.compile(r"\n{2,}")
50 _entity_re = re.compile(r"&([^;]+);")
51 _striptags_re = re.compile(r"(<!--.*-->|<[^>]*>)")
4852
49 from htmlentitydefs import name2codepoint
53 try:
54 from html.entities import name2codepoint
55 except ImportError:
56 from htmlentitydefs import name2codepoint
57
5058 html_entities = name2codepoint.copy()
51 html_entities['apos'] = 39
59 html_entities["apos"] = 39
5260 del name2codepoint
5361
5462
5563 def expose(url_rule, endpoint=None, **kwargs):
5664 """Expose this function to the web layer."""
65
5766 def decorate(f):
5867 e = endpoint or f.__name__
5968 endpoints[e] = f
6069 url_map.add(Rule(url_rule, endpoint=e, **kwargs))
6170 return f
71
6272 return decorate
6373
6474
6575 def render_template(template_name, **context):
6676 """Render a template into a response."""
6777 tmpl = jinja_env.get_template(template_name)
68 context['url_for'] = url_for
69 return Response(tmpl.render(context), mimetype='text/html')
78 context["url_for"] = url_for
79 return Response(tmpl.render(context), mimetype="text/html")
7080
7181
7282 def nl2p(s):
7383 """Add paragraphs to a text."""
74 return u'\n'.join(u'<p>%s</p>' % p for p in _par_re.split(s))
84 return u"\n".join(u"<p>%s</p>" % p for p in _par_re.split(s))
7585
7686
7787 def url_for(endpoint, **kw):
8191
8292 def strip_tags(s):
8393 """Resolve HTML entities and remove tags from a string."""
94
8495 def handle_match(m):
8596 name = m.group(1)
8697 if name in html_entities:
8798 return unichr(html_entities[name])
88 if name[:2] in ('#x', '#X'):
99 if name[:2] in ("#x", "#X"):
89100 try:
90101 return unichr(int(name[2:], 16))
91102 except ValueError:
92 return u''
93 elif name.startswith('#'):
103 return u""
104 elif name.startswith("#"):
94105 try:
95106 return unichr(int(name[1:]))
96107 except ValueError:
97 return u''
98 return u''
99 return _entity_re.sub(handle_match, _striptags_re.sub('', s))
108 return u""
109 return u""
110
111 return _entity_re.sub(handle_match, _striptags_re.sub("", s))
100112
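The `strip_tags` helper above leans on the `werkzeug._compat` `unichr` shim to run on both Python 2 and 3. On Python 3 alone the same logic collapses to the standard library; this is a sketch equivalent to the code above, using `chr` and `html.entities`:

```python
import re
from html.entities import name2codepoint

_entity_re = re.compile(r"&([^;]+);")
_striptags_re = re.compile(r"(<!--.*-->|<[^>]*>)")

html_entities = name2codepoint.copy()
html_entities["apos"] = 39  # name2codepoint historically lacks &apos;


def strip_tags(s):
    """Remove tags first, then resolve named and numeric entities."""

    def handle_match(m):
        name = m.group(1)
        if name in html_entities:
            return chr(html_entities[name])
        if name[:2] in ("#x", "#X"):  # hexadecimal character reference
            try:
                return chr(int(name[2:], 16))
            except ValueError:
                return ""
        elif name.startswith("#"):  # decimal character reference
            try:
                return chr(int(name[1:]))
            except ValueError:
                return ""
        return ""  # unknown entity: drop it

    return _entity_re.sub(handle_match, _striptags_re.sub("", s))


print(strip_tags("<p>ham &amp; eggs</p>"))  # "ham & eggs"
print(strip_tags("A &#65; and &#x41;"))     # "A A and A"
```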
101113
102114 class Pagination(object):
112124
113125 @cached_property
114126 def entries(self):
115 return self.query.offset((self.page - 1) * self.per_page) \
116 .limit(self.per_page).all()
127 return (
128 self.query.offset((self.page - 1) * self.per_page)
129 .limit(self.per_page)
130 .all()
131 )
117132
118133 @cached_property
119134 def count(self):
120135 return self.query.count()
121136
122 has_previous = property(lambda x: x.page > 1)
123 has_next = property(lambda x: x.page < x.pages)
124 previous = property(lambda x: url_for(x.endpoint, page=x.page - 1))
125 next = property(lambda x: url_for(x.endpoint, page=x.page + 1))
126 pages = property(lambda x: max(0, x.count - 1) // x.per_page + 1)
137 has_previous = property(lambda self: self.page > 1)
138 has_next = property(lambda self: self.page < self.pages)
139 previous = property(lambda self: url_for(self.endpoint, page=self.page - 1))
140 next = property(lambda self: url_for(self.endpoint, page=self.page + 1))
141 pages = property(lambda self: max(0, self.count - 1) // self.per_page + 1)
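The `Pagination` properties above are one-liners over `count` and `per_page`. Stripped of the SQLAlchemy query and the URL helpers, the page arithmetic alone looks like this (a self-contained sketch, not the class from the example app):

```python
class Pagination(object):
    def __init__(self, count, per_page, page):
        self.count = count
        self.per_page = per_page
        self.page = page

    has_previous = property(lambda self: self.page > 1)
    has_next = property(lambda self: self.page < self.pages)
    # max(0, count - 1) // per_page + 1 yields at least one page,
    # even for an empty result set.
    pages = property(lambda self: max(0, self.count - 1) // self.per_page + 1)


p = Pagination(count=61, per_page=30, page=2)
print(p.pages)         # 3  (items 1-30, 31-60, 61)
print(p.has_previous)  # True
print(p.has_next)      # True

empty = Pagination(count=0, per_page=30, page=1)
print(empty.pages)     # 1
```

Note that the lambdas take the instance as their argument, which is why the diff renames `x` to `self`: behavior is unchanged, the name just reads as what it is.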
44
55 Display the aggregated feeds.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from datetime import datetime, date
11 from plnt.database import Blog, Entry
12 from plnt.utils import Pagination, expose, render_template
10 from datetime import date
11
12 from .database import Entry
13 from .utils import expose
14 from .utils import Pagination
15 from .utils import render_template
1316
1417
1518 #: number of items per page
1619 PER_PAGE = 30
1720
1821
19 @expose('/', defaults={'page': 1})
20 @expose('/page/<int:page>')
22 @expose("/", defaults={"page": 1})
23 @expose("/page/<int:page>")
2124 def index(request, page):
2225 """Show the index page or any an offset of it."""
2326 days = []
2427 days_found = set()
2528 query = Entry.query.order_by(Entry.pub_date.desc())
26 pagination = Pagination(query, PER_PAGE, page, 'index')
29 pagination = Pagination(query, PER_PAGE, page, "index")
2730 for entry in pagination.entries:
2831 day = date(*entry.pub_date.timetuple()[:3])
2932 if day not in days_found:
3033 days_found.add(day)
31 days.append({'date': day, 'entries': []})
32 days[-1]['entries'].append(entry)
33 return render_template('index.html', days=days, pagination=pagination)
34 days.append({"date": day, "entries": []})
35 days[-1]["entries"].append(entry)
36 return render_template("index.html", days=days, pagination=pagination)
3437
3538
36 @expose('/about')
39 @expose("/about")
3740 def about(request):
3841 """Show the about page, so that we have another view func ;-)"""
39 return render_template('about.html')
42 return render_template("about.html")
44
55 The web part of the planet.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 from os import path
11
1112 from sqlalchemy import create_engine
13 from werkzeug.exceptions import HTTPException
14 from werkzeug.middleware.shared_data import SharedDataMiddleware
1215 from werkzeug.wrappers import Request
13 from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware
14 from werkzeug.exceptions import HTTPException, NotFound
15 from plnt.utils import local, local_manager, url_map, endpoints
16 from plnt.database import session, metadata
16 from werkzeug.wsgi import ClosingIterator
1717
18 # import the views module because it contains setup code
19 import plnt.views
18 from . import views # noqa: F401
19 from .database import metadata
20 from .database import session
21 from .utils import endpoints
22 from .utils import local
23 from .utils import local_manager
24 from .utils import url_map
2025
2126 #: path to shared data
22 SHARED_DATA = path.join(path.dirname(__file__), 'shared')
27 SHARED_DATA = path.join(path.dirname(__file__), "shared")
2328
2429
2530 class Plnt(object):
26
2731 def __init__(self, database_uri):
2832 self.database_engine = create_engine(database_uri)
2933
3034 self._dispatch = local_manager.middleware(self.dispatch_request)
31 self._dispatch = SharedDataMiddleware(self._dispatch, {
32 '/shared': SHARED_DATA
33 })
35 self._dispatch = SharedDataMiddleware(self._dispatch, {"/shared": SHARED_DATA})
3436
3537 def init_database(self):
3638 metadata.create_all(self.database_engine)
4547 try:
4648 endpoint, values = adapter.match(request.path)
4749 response = endpoints[endpoint](request, **values)
48 except HTTPException, e:
50 except HTTPException as e:
4951 response = e
50 return ClosingIterator(response(environ, start_response),
51 session.remove)
52 return ClosingIterator(response(environ, start_response), session.remove)
5253
5354 def __call__(self, environ, start_response):
5455 return self._dispatch(environ, start_response)
44
55 A simple URL shortener using Werkzeug and redis.
66
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import os
11
1112 import redis
12 import urlparse
13 from werkzeug.wrappers import Request, Response
14 from werkzeug.routing import Map, Rule
15 from werkzeug.exceptions import HTTPException, NotFound
16 from werkzeug.wsgi import SharedDataMiddleware
13 from jinja2 import Environment
14 from jinja2 import FileSystemLoader
15 from werkzeug.exceptions import HTTPException
16 from werkzeug.exceptions import NotFound
17 from werkzeug.middleware.shared_data import SharedDataMiddleware
18 from werkzeug.routing import Map
19 from werkzeug.routing import Rule
20 from werkzeug.urls import url_parse
1721 from werkzeug.utils import redirect
18
19 from jinja2 import Environment, FileSystemLoader
22 from werkzeug.wrappers import Request
23 from werkzeug.wrappers import Response
2024
2125
2226 def base36_encode(number):
23 assert number >= 0, 'positive integer required'
27 assert number >= 0, "positive integer required"
2428 if number == 0:
25 return '0'
29 return "0"
2630 base36 = []
2731 while number != 0:
2832 number, i = divmod(number, 36)
29 base36.append('0123456789abcdefghijklmnopqrstuvwxyz'[i])
30 return ''.join(reversed(base36))
33 base36.append("0123456789abcdefghijklmnopqrstuvwxyz"[i])
34 return "".join(reversed(base36))
3135
3236
3337 def is_valid_url(url):
34 parts = urlparse.urlparse(url)
35 return parts.scheme in ('http', 'https')
38 parts = url_parse(url)
39 return parts.scheme in ("http", "https")
3640
3741
3842 def get_hostname(url):
39 return urlparse.urlparse(url).netloc
43 return url_parse(url).netloc
4044
4145
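The port from `urlparse` to `werkzeug.urls.url_parse` above keeps the same semantics: only absolute `http`/`https` URLs are accepted, and the hostname is the parsed `netloc`. The same checks sketched with only the standard library's `urllib.parse` (an assumption for illustration; the example itself uses Werkzeug's parser):

```python
from urllib.parse import urlparse


def is_valid_url(url):
    # Anything without an explicit http(s) scheme is rejected,
    # including relative paths and other schemes like ftp:.
    return urlparse(url).scheme in ("http", "https")


def get_hostname(url):
    return urlparse(url).netloc


print(is_valid_url("https://example.com/x"))  # True
print(is_valid_url("ftp://example.com/x"))    # False
print(is_valid_url("not a url"))              # False
print(get_hostname("https://example.com/x"))  # "example.com"
```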
4246 class Shortly(object):
47 def __init__(self, config):
48 self.redis = redis.Redis(config["redis_host"], config["redis_port"])
49 template_path = os.path.join(os.path.dirname(__file__), "templates")
50 self.jinja_env = Environment(
51 loader=FileSystemLoader(template_path), autoescape=True
52 )
53 self.jinja_env.filters["hostname"] = get_hostname
4354
44 def __init__(self, config):
45 self.redis = redis.Redis(config['redis_host'], config['redis_port'])
46 template_path = os.path.join(os.path.dirname(__file__), 'templates')
47 self.jinja_env = Environment(loader=FileSystemLoader(template_path),
48 autoescape=True)
49 self.jinja_env.filters['hostname'] = get_hostname
50
51 self.url_map = Map([
52 Rule('/', endpoint='new_url'),
53 Rule('/<short_id>', endpoint='follow_short_link'),
54 Rule('/<short_id>+', endpoint='short_link_details')
55 ])
55 self.url_map = Map(
56 [
57 Rule("/", endpoint="new_url"),
58 Rule("/<short_id>", endpoint="follow_short_link"),
59 Rule("/<short_id>+", endpoint="short_link_details"),
60 ]
61 )
5662
5763 def on_new_url(self, request):
5864 error = None
59 url = ''
60 if request.method == 'POST':
61 url = request.form['url']
65 url = ""
66 if request.method == "POST":
67 url = request.form["url"]
6268 if not is_valid_url(url):
63 error = 'Please enter a valid URL'
69 error = "Please enter a valid URL"
6470 else:
6571 short_id = self.insert_url(url)
66 return redirect('/%s+' % short_id)
67 return self.render_template('new_url.html', error=error, url=url)
72 return redirect("/%s+" % short_id)
73 return self.render_template("new_url.html", error=error, url=url)
6874
6975 def on_follow_short_link(self, request, short_id):
70 link_target = self.redis.get('url-target:' + short_id)
76 link_target = self.redis.get("url-target:" + short_id)
7177 if link_target is None:
7278 raise NotFound()
73 self.redis.incr('click-count:' + short_id)
79 self.redis.incr("click-count:" + short_id)
7480 return redirect(link_target)
7581
7682 def on_short_link_details(self, request, short_id):
77 link_target = self.redis.get('url-target:' + short_id)
83 link_target = self.redis.get("url-target:" + short_id)
7884 if link_target is None:
7985 raise NotFound()
80 click_count = int(self.redis.get('click-count:' + short_id) or 0)
81 return self.render_template('short_link_details.html',
86 click_count = int(self.redis.get("click-count:" + short_id) or 0)
87 return self.render_template(
88 "short_link_details.html",
8289 link_target=link_target,
8390 short_id=short_id,
84 click_count=click_count
91 click_count=click_count,
8592 )
8693
8794 def error_404(self):
88 response = self.render_template('404.html')
95 response = self.render_template("404.html")
8996 response.status_code = 404
9097 return response
9198
9299 def insert_url(self, url):
93 short_id = self.redis.get('reverse-url:' + url)
100 short_id = self.redis.get("reverse-url:" + url)
94101 if short_id is not None:
95102 return short_id
96 url_num = self.redis.incr('last-url-id')
103 url_num = self.redis.incr("last-url-id")
97104 short_id = base36_encode(url_num)
98 self.redis.set('url-target:' + short_id, url)
99 self.redis.set('reverse-url:' + url, short_id)
105 self.redis.set("url-target:" + short_id, url)
106 self.redis.set("reverse-url:" + url, short_id)
100107 return short_id
101108
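`insert_url` above allocates short IDs from a Redis counter (`INCR last-url-id`) and keeps a `reverse-url:` index so the same URL is never stored twice. The same bookkeeping sketched against a plain dict; `FakeRedis` is a stand-in for illustration, not part of the example app:

```python
def base36_encode(number):
    if number == 0:
        return "0"
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    out = []
    while number:
        number, i = divmod(number, 36)
        out.append(digits[i])
    return "".join(reversed(out))


class FakeRedis(object):
    """Just enough of the redis API for insert_url: get, set, incr."""

    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value

    def incr(self, key):
        # Redis INCR: atomically increment, treating a missing key as 0.
        self.data[key] = int(self.data.get(key, 0)) + 1
        return self.data[key]


def insert_url(r, url):
    short_id = r.get("reverse-url:" + url)
    if short_id is not None:  # URL already shortened: reuse its id
        return short_id
    url_num = r.incr("last-url-id")
    short_id = base36_encode(url_num)
    r.set("url-target:" + short_id, url)   # short id -> target URL
    r.set("reverse-url:" + url, short_id)  # target URL -> short id
    return short_id


r = FakeRedis()
print(insert_url(r, "https://example.com/a"))  # "1"
print(insert_url(r, "https://example.com/b"))  # "2"
# inserting the same URL again returns the existing id:
print(insert_url(r, "https://example.com/a"))  # "1"
```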
102109 def render_template(self, template_name, **context):
103110 t = self.jinja_env.get_template(template_name)
104 return Response(t.render(context), mimetype='text/html')
111 return Response(t.render(context), mimetype="text/html")
105112
106113 def dispatch_request(self, request):
107114 adapter = self.url_map.bind_to_environ(request.environ)
108115 try:
109116 endpoint, values = adapter.match()
110 return getattr(self, 'on_' + endpoint)(request, **values)
111 except NotFound, e:
117 return getattr(self, "on_" + endpoint)(request, **values)
118 except NotFound:
112119 return self.error_404()
113 except HTTPException, e:
120 except HTTPException as e:
114121 return e
115122
116123 def wsgi_app(self, environ, start_response):
122129 return self.wsgi_app(environ, start_response)
123130
124131
125 def create_app(redis_host='localhost', redis_port=6379, with_static=True):
126 app = Shortly({
127 'redis_host': redis_host,
128 'redis_port': redis_port
129 })
132 def create_app(redis_host="localhost", redis_port=6379, with_static=True):
133 app = Shortly({"redis_host": redis_host, "redis_port": redis_port})
130134 if with_static:
131 app.wsgi_app = SharedDataMiddleware(app.wsgi_app, {
132 '/static': os.path.join(os.path.dirname(__file__), 'static')
133 })
135 app.wsgi_app = SharedDataMiddleware(
136 app.wsgi_app, {"/static": os.path.join(os.path.dirname(__file__), "static")}
137 )
134138 return app
135139
136140
137 if __name__ == '__main__':
141 if __name__ == "__main__":
138142 from werkzeug.serving import run_simple
143
139144 app = create_app()
140 run_simple('127.0.0.1', 5000, app, use_debugger=True, use_reloader=True)
145 run_simple("127.0.0.1", 5000, app, use_debugger=True, use_reloader=True)
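Shortly's `insert_url` above calls a `base36_encode` helper that is defined earlier in the tutorial but not shown in this chunk. A minimal stdlib-only sketch of such an encoder (short IDs use digits and lowercase letters):

```python
def base36_encode(number):
    """Encode a non-negative integer using digits 0-9 and a-z."""
    assert number >= 0, "positive integer required"
    if number == 0:
        return "0"
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    encoded = []
    while number:
        # Peel off the least significant base-36 digit each round.
        number, remainder = divmod(number, 36)
        encoded.append(digits[remainder])
    return "".join(reversed(encoded))
```

Each call to `redis.incr("last-url-id")` yields a fresh integer, so the encoded result is a short, collision-free ID.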
00 from sqlalchemy import create_engine
1 from werkzeug.exceptions import HTTPException
2 from werkzeug.exceptions import NotFound
3 from werkzeug.middleware.shared_data import SharedDataMiddleware
14 from werkzeug.wrappers import Request
2 from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware
3 from werkzeug.exceptions import HTTPException, NotFound
4 from shorty.utils import STATIC_PATH, session, local, local_manager, \
5 metadata, url_map
5 from werkzeug.wsgi import ClosingIterator
66
7 import shorty.models
8 from shorty import views
7 from . import views
8 from .utils import local
9 from .utils import local_manager
10 from .utils import metadata
11 from .utils import session
12 from .utils import STATIC_PATH
13 from .utils import url_map
914
1015
1116 class Shorty(object):
12
1317 def __init__(self, db_uri):
1418 local.application = self
1519 self.database_engine = create_engine(db_uri, convert_unicode=True)
1620
17 self.dispatch = SharedDataMiddleware(self.dispatch, {
18 '/static': STATIC_PATH
19 })
21 self.dispatch = SharedDataMiddleware(self.dispatch, {"/static": STATIC_PATH})
2022
2123 def init_database(self):
2224 metadata.create_all(self.database_engine)
2931 endpoint, values = adapter.match()
3032 handler = getattr(views, endpoint)
3133 response = handler(request, **values)
32 except NotFound, e:
34 except NotFound:
3335 response = views.not_found(request)
3436 response.status_code = 404
35 except HTTPException, e:
37 except HTTPException as e:
3638 response = e
37 return ClosingIterator(response(environ, start_response),
38 [session.remove, local_manager.cleanup])
39 return ClosingIterator(
40 response(environ, start_response), [session.remove, local_manager.cleanup]
41 )
3942
4043 def __call__(self, environ, start_response):
4144 return self.dispatch(environ, start_response)
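The `ClosingIterator` used in `Shorty.dispatch` wraps the WSGI response iterable so that cleanup callbacks (`session.remove` and `local_manager.cleanup`) run when the server closes the response. A simplified sketch of that behaviour (the real `werkzeug.wsgi.ClosingIterator` additionally chains the wrapped iterable's own `close`):

```python
class ClosingIterator:
    """Simplified sketch: iterate a response, run callbacks on close()."""

    def __init__(self, iterable, callbacks=None):
        self._iterator = iter(iterable)
        if callbacks is None:
            callbacks = []
        elif callable(callbacks):
            callbacks = [callbacks]
        self._callbacks = callbacks

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._iterator)

    def close(self):
        # WSGI servers call close() after the response is exhausted;
        # this is where per-request cleanup happens.
        for callback in self._callbacks:
            callback()
```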
00 from datetime import datetime
1 from sqlalchemy import Table, Column, String, Boolean, DateTime
1
2 from sqlalchemy import Boolean
3 from sqlalchemy import Column
4 from sqlalchemy import DateTime
5 from sqlalchemy import String
6 from sqlalchemy import Table
27 from sqlalchemy.orm import mapper
3 from shorty.utils import session, metadata, url_for, get_random_uid
48
5 url_table = Table('urls', metadata,
6 Column('uid', String(140), primary_key=True),
7 Column('target', String(500)),
8 Column('added', DateTime),
9 Column('public', Boolean)
9 from .utils import get_random_uid
10 from .utils import metadata
11 from .utils import session
12 from .utils import url_for
13
14 url_table = Table(
15 "urls",
16 metadata,
17 Column("uid", String(140), primary_key=True),
18 Column("target", String(500)),
19 Column("added", DateTime),
20 Column("public", Boolean),
1021 )
22
1123
1224 class URL(object):
1325 query = session.query_property()
2638
2739 @property
2840 def short_url(self):
29 return url_for('link', uid=self.uid, _external=True)
41 return url_for("link", uid=self.uid, _external=True)
3042
3143 def __repr__(self):
32 return '<URL %r>' % self.uid
44 return "<URL %r>" % self.uid
45
3346
3447 mapper(URL, url_table)
00 from os import path
1 from urlparse import urlparse
2 from random import sample, randrange
3 from jinja2 import Environment, FileSystemLoader
4 from werkzeug.local import Local, LocalManager
1 from random import randrange
2 from random import sample
3
4 from jinja2 import Environment
5 from jinja2 import FileSystemLoader
6 from sqlalchemy import MetaData
7 from sqlalchemy.orm import create_session
8 from sqlalchemy.orm import scoped_session
9 from werkzeug.local import Local
10 from werkzeug.local import LocalManager
11 from werkzeug.routing import Map
12 from werkzeug.routing import Rule
13 from werkzeug.urls import url_parse
514 from werkzeug.utils import cached_property
615 from werkzeug.wrappers import Response
7 from werkzeug.routing import Map, Rule
8 from sqlalchemy import MetaData
9 from sqlalchemy.orm import create_session, scoped_session
1016
1117
12 TEMPLATE_PATH = path.join(path.dirname(__file__), 'templates')
13 STATIC_PATH = path.join(path.dirname(__file__), 'static')
14 ALLOWED_SCHEMES = frozenset(['http', 'https', 'ftp', 'ftps'])
15 URL_CHARS = 'abcdefghijkmpqrstuvwxyzABCDEFGHIJKLMNPQRST23456789'
18 TEMPLATE_PATH = path.join(path.dirname(__file__), "templates")
19 STATIC_PATH = path.join(path.dirname(__file__), "static")
20 ALLOWED_SCHEMES = frozenset(["http", "https", "ftp", "ftps"])
21 URL_CHARS = "abcdefghijkmpqrstuvwxyzABCDEFGHIJKLMNPQRST23456789"
1622
1723 local = Local()
1824 local_manager = LocalManager([local])
19 application = local('application')
25 application = local("application")
2026
2127 metadata = MetaData()
22 url_map = Map([Rule('/static/<file>', endpoint='static', build_only=True)])
28 url_map = Map([Rule("/static/<file>", endpoint="static", build_only=True)])
2329
24 session = scoped_session(lambda: create_session(application.database_engine,
25 autocommit=False,
26 autoflush=False))
30 session = scoped_session(
31 lambda: create_session(
32 application.database_engine, autocommit=False, autoflush=False
33 )
34 )
2735 jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_PATH))
2836
2937
3038 def expose(rule, **kw):
3139 def decorate(f):
32 kw['endpoint'] = f.__name__
40 kw["endpoint"] = f.__name__
3341 url_map.add(Rule(rule, **kw))
3442 return f
43
3544 return decorate
45
3646
3747 def url_for(endpoint, _external=False, **values):
3848 return local.url_adapter.build(endpoint, values, force_external=_external)
39 jinja_env.globals['url_for'] = url_for
49
50
51 jinja_env.globals["url_for"] = url_for
52
4053
4154 def render_template(template, **context):
42 return Response(jinja_env.get_template(template).render(**context),
43 mimetype='text/html')
55 return Response(
56 jinja_env.get_template(template).render(**context), mimetype="text/html"
57 )
58
4459
4560 def validate_url(url):
46 return urlparse(url)[0] in ALLOWED_SCHEMES
61 return url_parse(url)[0] in ALLOWED_SCHEMES
62
4763
4864 def get_random_uid():
49 return ''.join(sample(URL_CHARS, randrange(3, 9)))
65 return "".join(sample(URL_CHARS, randrange(3, 9)))
5066
5167
5268 class Pagination(object):
53
5469 def __init__(self, query, per_page, page, endpoint):
5570 self.query = query
5671 self.per_page = per_page
6378
6479 @cached_property
6580 def entries(self):
66 return self.query.offset((self.page - 1) * self.per_page) \
67 .limit(self.per_page).all()
81 return (
82 self.query.offset((self.page - 1) * self.per_page)
83 .limit(self.per_page)
84 .all()
85 )
6886
69 has_previous = property(lambda x: x.page > 1)
70 has_next = property(lambda x: x.page < x.pages)
71 previous = property(lambda x: url_for(x.endpoint, page=x.page - 1))
72 next = property(lambda x: url_for(x.endpoint, page=x.page + 1))
73 pages = property(lambda x: max(0, x.count - 1) // x.per_page + 1)
87 has_previous = property(lambda self: self.page > 1)
88 has_next = property(lambda self: self.page < self.pages)
89 previous = property(lambda self: url_for(self.endpoint, page=self.page - 1))
90 next = property(lambda self: url_for(self.endpoint, page=self.page + 1))
91 pages = property(lambda self: max(0, self.count - 1) // self.per_page + 1)
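The `pages` property uses an integer trick for ceiling division that also reports one page for an empty result set. Spelled out as a plain function:

```python
def page_count(count, per_page):
    # max(0, count - 1) // per_page + 1 rounds up, and maps
    # count == 0 to a single (empty) page rather than zero pages.
    return max(0, count - 1) // per_page + 1
```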
0 from werkzeug.exceptions import NotFound
01 from werkzeug.utils import redirect
1 from werkzeug.exceptions import NotFound
2 from shorty.utils import session, Pagination, render_template, expose, \
3 validate_url, url_for
4 from shorty.models import URL
52
6 @expose('/')
3 from .models import URL
4 from .utils import expose
5 from .utils import Pagination
6 from .utils import render_template
7 from .utils import session
8 from .utils import url_for
9 from .utils import validate_url
10
11
12 @expose("/")
713 def new(request):
8 error = url = ''
9 if request.method == 'POST':
10 url = request.form.get('url')
11 alias = request.form.get('alias')
14 error = url = ""
15 if request.method == "POST":
16 url = request.form.get("url")
17 alias = request.form.get("alias")
1218 if not validate_url(url):
1319 error = "I'm sorry but you cannot shorten this URL."
1420 elif alias:
1521 if len(alias) > 140:
16 error = 'Your alias is too long'
17 elif '/' in alias:
18 error = 'Your alias might not include a slash'
22 error = "Your alias is too long"
23 elif "/" in alias:
24 error = "Your alias might not include a slash"
1925 elif URL.query.get(alias):
20 error = 'The alias you have requested exists already'
26 error = "The alias you have requested exists already"
2127 if not error:
22 uid = URL(url, 'private' not in request.form, alias).uid
28 uid = URL(url, "private" not in request.form, alias).uid
2329 session.commit()
24 return redirect(url_for('display', uid=uid))
25 return render_template('new.html', error=error, url=url)
30 return redirect(url_for("display", uid=uid))
31 return render_template("new.html", error=error, url=url)
2632
27 @expose('/display/<uid>')
33
34 @expose("/display/<uid>")
2835 def display(request, uid):
2936 url = URL.query.get(uid)
3037 if not url:
3138 raise NotFound()
32 return render_template('display.html', url=url)
39 return render_template("display.html", url=url)
3340
34 @expose('/u/<uid>')
41
42 @expose("/u/<uid>")
3543 def link(request, uid):
3644 url = URL.query.get(uid)
3745 if not url:
3846 raise NotFound()
3947 return redirect(url.target, 301)
4048
41 @expose('/list/', defaults={'page': 1})
42 @expose('/list/<int:page>')
49
50 @expose("/list/", defaults={"page": 1})
51 @expose("/list/<int:page>")
4352 def list(request, page):
4453 query = URL.query.filter_by(public=True)
45 pagination = Pagination(query, 30, page, 'list')
54 pagination = Pagination(query, 30, page, "list")
4655 if pagination.page > 1 and not pagination.entries:
4756 raise NotFound()
48 return render_template('list.html', pagination=pagination)
57 return render_template("list.html", pagination=pagination)
58
4959
5060 def not_found(request):
51 return render_template('not_found.html')
61 return render_template("not_found.html")
55 Very simple wiki application based on Genshi, Werkzeug and SQLAlchemy.
66 Additionally, the creoleparser is used for the wiki markup.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
11 from simplewiki.application import SimpleWiki
11 from .application import SimpleWiki
88 careful not to name any other objects in the module with the same
99 prefix unless you want them to act as actions.
1010
11 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
12 :license: BSD.
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
1313 """
1414 from difflib import unified_diff
15 from simplewiki.utils import Response, generate_template, parse_creole, \
16 href, redirect, format_datetime
17 from simplewiki.database import RevisionedPage, Page, Revision, session
15
16 from werkzeug.utils import redirect
17
18 from .database import Page
19 from .database import Revision
20 from .database import RevisionedPage
21 from .database import session
22 from .utils import format_datetime
23 from .utils import generate_template
24 from .utils import href
25 from .utils import Response
1826
1927
2028 def on_show(request, page_name):
2129 """Displays the page the user requests."""
22 revision_id = request.args.get('rev', type=int)
30 revision_id = request.args.get("rev", type=int)
2331 query = RevisionedPage.query.filter_by(name=page_name)
2432 if revision_id:
2533 query = query.filter_by(revision_id=revision_id)
3038 page = query.first()
3139 if page is None:
3240 return page_missing(request, page_name, revision_requested)
33 return Response(generate_template('action_show.html',
34 page=page
35 ))
41 return Response(generate_template("action_show.html", page=page))
3642
3743
3844 def on_edit(request, page_name):
3945 """Edit the current revision of a page."""
40 change_note = error = ''
41 revision = Revision.query.filter(
42 (Page.name == page_name) &
43 (Page.page_id == Revision.page_id)
44 ).order_by(Revision.revision_id.desc()).first()
46 change_note = error = ""
47 revision = (
48 Revision.query.filter(
49 (Page.name == page_name) & (Page.page_id == Revision.page_id)
50 )
51 .order_by(Revision.revision_id.desc())
52 .first()
53 )
4554 if revision is None:
4655 page = None
4756 else:
4857 page = revision.page
4958
50 if request.method == 'POST':
51 text = request.form.get('text')
52 if request.form.get('cancel') or \
53 revision and revision.text == text:
59 if request.method == "POST":
60 text = request.form.get("text")
61 if request.form.get("cancel") or revision and revision.text == text:
5462 return redirect(href(page.name))
5563 elif not text:
56 error = 'You cannot save empty revisions.'
64 error = "You cannot save empty revisions."
5765 else:
58 change_note = request.form.get('change_note', '')
66 change_note = request.form.get("change_note", "")
5967 if page is None:
6068 page = Page(page_name)
6169 session.add(page)
6371 session.commit()
6472 return redirect(href(page.name))
6573
66 return Response(generate_template('action_edit.html',
67 revision=revision,
68 page=page,
69 new=page is None,
70 page_name=page_name,
71 change_note=change_note,
72 error=error
73 ))
74 return Response(
75 generate_template(
76 "action_edit.html",
77 revision=revision,
78 page=page,
79 new=page is None,
80 page_name=page_name,
81 change_note=change_note,
82 error=error,
83 )
84 )
7485
7586
7687 def on_log(request, page_name):
7889 page = Page.query.filter_by(name=page_name).first()
7990 if page is None:
8091 return page_missing(request, page_name, False)
81 return Response(generate_template('action_log.html',
82 page=page
83 ))
92 return Response(generate_template("action_log.html", page=page))
8493
8594
8695 def on_diff(request, page_name):
8796 """Show the diff between two revisions."""
88 old = request.args.get('old', type=int)
89 new = request.args.get('new', type=int)
90 error = ''
97 old = request.args.get("old", type=int)
98 new = request.args.get("new", type=int)
99 error = ""
91100 diff = page = old_rev = new_rev = None
92101
93102 if not (old and new):
94 error = 'No revisions specified.'
103 error = "No revisions specified."
95104 else:
96 revisions = dict((x.revision_id, x) for x in Revision.query.filter(
97 (Revision.revision_id.in_((old, new))) &
98 (Revision.page_id == Page.page_id) &
99 (Page.name == page_name)
100 ))
105 revisions = dict(
106 (x.revision_id, x)
107 for x in Revision.query.filter(
108 (Revision.revision_id.in_((old, new)))
109 & (Revision.page_id == Page.page_id)
110 & (Page.name == page_name)
111 )
112 )
101113 if len(revisions) != 2:
102 error = 'At least one of the revisions requested ' \
103 'does not exist.'
114 error = "At least one of the revisions requested does not exist."
104115 else:
105116 new_rev = revisions[new]
106117 old_rev = revisions[old]
107118 page = old_rev.page
108119 diff = unified_diff(
109 (old_rev.text + '\n').splitlines(True),
110 (new_rev.text + '\n').splitlines(True),
111 page.name, page.name,
120 (old_rev.text + "\n").splitlines(True),
121 (new_rev.text + "\n").splitlines(True),
122 page.name,
123 page.name,
112124 format_datetime(old_rev.timestamp),
113125 format_datetime(new_rev.timestamp),
114 3
126 3,
115127 )
116128
117 return Response(generate_template('action_diff.html',
118 error=error,
119 old_revision=old_rev,
120 new_revision=new_rev,
121 page=page,
122 diff=diff
123 ))
129 return Response(
130 generate_template(
131 "action_diff.html",
132 error=error,
133 old_revision=old_rev,
134 new_revision=new_rev,
135 page=page,
136 diff=diff,
137 )
138 )
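`difflib.unified_diff`, imported at the top of this module, produces the diff lines that `action_diff.html` renders. A small standalone example of the same call pattern (page name and timestamps here are made up):

```python
from difflib import unified_diff

old_text = "Hello world\n"
new_text = "Hello wiki\n"

# Same argument order as in on_diff: old lines, new lines, the two
# file names, the two timestamps, and three lines of context.
diff = list(
    unified_diff(
        old_text.splitlines(True),
        new_text.splitlines(True),
        "Main_Page",
        "Main_Page",
        "2007-01-01 10:00",
        "2007-01-02 10:00",
        3,
    )
)
```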
124139
125140
126141 def on_revert(request, page_name):
127142 """Revert an old revision."""
128 rev_id = request.args.get('rev', type=int)
143 rev_id = request.args.get("rev", type=int)
129144
130145 old_revision = page = None
131 error = 'No such revision'
132
133 if request.method == 'POST' and request.form.get('cancel'):
146 error = "No such revision"
147
148 if request.method == "POST" and request.form.get("cancel"):
134149 return redirect(href(page_name))
135150
136151 if rev_id:
137152 old_revision = Revision.query.filter(
138 (Revision.revision_id == rev_id) &
139 (Revision.page_id == Page.page_id) &
140 (Page.name == page_name)
153 (Revision.revision_id == rev_id)
154 & (Revision.page_id == Page.page_id)
155 & (Page.name == page_name)
141156 ).first()
142157 if old_revision:
143 new_revision = Revision.query.filter(
144 (Revision.page_id == Page.page_id) &
145 (Page.name == page_name)
146 ).order_by(Revision.revision_id.desc()).first()
158 new_revision = (
159 Revision.query.filter(
160 (Revision.page_id == Page.page_id) & (Page.name == page_name)
161 )
162 .order_by(Revision.revision_id.desc())
163 .first()
164 )
147165 if old_revision == new_revision:
148 error = 'You tried to revert the current active ' \
149 'revision.'
166 error = "You tried to revert the current active revision."
150167 elif old_revision.text == new_revision.text:
151 error = 'There are no changes between the current ' \
152 'revision and the revision you want to ' \
153 'restore.'
168 error = (
169 "There are no changes between the current "
170 "revision and the revision you want to "
171 "restore."
172 )
154173 else:
155 error = ''
174 error = ""
156175 page = old_revision.page
157 if request.method == 'POST':
158 change_note = request.form.get('change_note', '')
159 change_note = 'revert' + (change_note and ': ' +
160 change_note or '')
161 session.add(Revision(page, old_revision.text,
162 change_note))
176 if request.method == "POST":
177 change_note = request.form.get("change_note", "")
178 change_note = "revert" + (change_note and ": " + change_note or "")
179 session.add(Revision(page, old_revision.text, change_note))
163180 session.commit()
164181 return redirect(href(page_name))
165182
166 return Response(generate_template('action_revert.html',
167 error=error,
168 old_revision=old_revision,
169 page=page
170 ))
183 return Response(
184 generate_template(
185 "action_revert.html", error=error, old_revision=old_revision, page=page
186 )
187 )
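The change-note line in `on_revert` still uses the old `and ... or ...` idiom. The same logic written as a conditional expression, extracted into a helper (the helper name is hypothetical, for illustration only):

```python
def revert_change_note(change_note=""):
    # "revert" on its own, or "revert: <note>" when a note was given;
    # equivalent to: "revert" + (change_note and ": " + change_note or "")
    return "revert" + (": " + change_note if change_note else "")
```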
171188
172189
173190 def page_missing(request, page_name, revision_requested, protected=False):
174191 """Displayed if page or revision does not exist."""
175 return Response(generate_template('page_missing.html',
176 page_name=page_name,
177 revision_requested=revision_requested,
178 protected=protected
179 ), status=404)
192 return Response(
193 generate_template(
194 "page_missing.html",
195 page_name=page_name,
196 revision_requested=revision_requested,
197 protected=protected,
198 ),
199 status=404,
200 )
180201
181202
182203 def missing_action(request, action):
183204 """Displayed if a user tried to access a action that does not exist."""
184 return Response(generate_template('missing_action.html',
185 action=action
186 ), status=404)
205 return Response(generate_template("missing_action.html", action=action), status=404)
66 requests to specific wiki pages and actions.
77
88
9 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
10 :license: BSD.
9 :copyright: 2007 Pallets
10 :license: BSD-3-Clause
1111 """
1212 from os import path
13
1314 from sqlalchemy import create_engine
15 from werkzeug.middleware.shared_data import SharedDataMiddleware
1416 from werkzeug.utils import redirect
15 from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware
16 from simplewiki.utils import Request, Response, local, local_manager, href
17 from simplewiki.database import session, metadata
18 from simplewiki import actions
19 from simplewiki.specialpages import pages, page_not_found
17 from werkzeug.wsgi import ClosingIterator
18
19 from . import actions
20 from .database import metadata
21 from .database import session
22 from .specialpages import page_not_found
23 from .specialpages import pages
24 from .utils import href
25 from .utils import local
26 from .utils import local_manager
27 from .utils import Request
2028
2129
2230 #: path to shared data
23 SHARED_DATA = path.join(path.dirname(__file__), 'shared')
31 SHARED_DATA = path.join(path.dirname(__file__), "shared")
2432
2533
2634 class SimpleWiki(object):
3442 # apply our middlewares. we apply the middlewares *inside* the
3543 # application and not outside of it so that we never lose the
3644 # reference to the `SimpleWiki` object.
37 self._dispatch = SharedDataMiddleware(self.dispatch_request, {
38 '/_shared': SHARED_DATA
39 })
45 self._dispatch = SharedDataMiddleware(
46 self.dispatch_request, {"/_shared": SHARED_DATA}
47 )
4048
4149 # free the context locals at the end of the request
4250 self._dispatch = local_manager.make_middleware(self._dispatch)
6371
6472 # get the current action from the url and normalize the page name
6573 # which is just the request path
66 action_name = request.args.get('action') or 'show'
67 page_name = u'_'.join([x for x in request.path.strip('/')
68 .split() if x])
74 action_name = request.args.get("action") or "show"
75 page_name = u"_".join([x for x in request.path.strip("/").split() if x])
6976
7077 # redirect to the Main_Page if the user requested the index
7178 if not page_name:
72 response = redirect(href('Main_Page'))
79 response = redirect(href("Main_Page"))
7380
7481 # check special pages
75 elif page_name.startswith('Special:'):
82 elif page_name.startswith("Special:"):
7683 if page_name[8:] not in pages:
7784 response = page_not_found(request, page_name)
7885 else:
8289 # action module. It's "on_" + the action name. If it doesn't
8390 # exist, call the missing_action method from the same module.
8491 else:
85 action = getattr(actions, 'on_' + action_name, None)
92 action = getattr(actions, "on_" + action_name, None)
8693 if action is None:
8794 response = actions.missing_action(request, action_name)
8895 else:
8996 response = action(request, page_name)
9097
9198 # make sure the session is removed properly
92 return ClosingIterator(response(environ, start_response),
93 session.remove)
99 return ClosingIterator(response(environ, start_response), session.remove)
94100
95101 def __call__(self, environ, start_response):
96102 """Just forward a WSGI call to the first internal middleware."""
44
55 The database.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 from datetime import datetime
11 from sqlalchemy import Table, Column, Integer, String, DateTime, \
12 ForeignKey, MetaData, join
13 from sqlalchemy.orm import relation, create_session, scoped_session, \
14 mapper
15 from simplewiki.utils import application, local_manager, parse_creole
11
12 from sqlalchemy import Column
13 from sqlalchemy import DateTime
14 from sqlalchemy import ForeignKey
15 from sqlalchemy import Integer
16 from sqlalchemy import join
17 from sqlalchemy import MetaData
18 from sqlalchemy import String
19 from sqlalchemy import Table
20 from sqlalchemy.orm import create_session
21 from sqlalchemy.orm import mapper
22 from sqlalchemy.orm import relation
23 from sqlalchemy.orm import scoped_session
24
25 from .utils import application
26 from .utils import local_manager
27 from .utils import parse_creole
1628
1729
1830 # create a global metadata
2739 application. If there is no application bound to the context it
2840 raises an exception.
2941 """
30 return create_session(application.database_engine, autoflush=True,
31 autocommit=False)
42 return create_session(application.database_engine, autoflush=True, autocommit=False)
3243
3344
3445 # and create a new global session factory. Calling this object gives
3748
3849
3950 # our database tables.
40 page_table = Table('pages', metadata,
41 Column('page_id', Integer, primary_key=True),
42 Column('name', String(60), unique=True)
51 page_table = Table(
52 "pages",
53 metadata,
54 Column("page_id", Integer, primary_key=True),
55 Column("name", String(60), unique=True),
4356 )
4457
45 revision_table = Table('revisions', metadata,
46 Column('revision_id', Integer, primary_key=True),
47 Column('page_id', Integer, ForeignKey('pages.page_id')),
48 Column('timestamp', DateTime),
49 Column('text', String),
50 Column('change_note', String(200))
58 revision_table = Table(
59 "revisions",
60 metadata,
61 Column("revision_id", Integer, primary_key=True),
62 Column("page_id", Integer, ForeignKey("pages.page_id")),
63 Column("timestamp", DateTime),
64 Column("text", String),
65 Column("change_note", String(200)),
5166 )
5267
5368
5873 new revisions. It's also used for the diff system and the revision
5974 log.
6075 """
76
6177 query = session.query_property()
6278
63 def __init__(self, page, text, change_note='', timestamp=None):
64 if isinstance(page, (int, long)):
79 def __init__(self, page, text, change_note="", timestamp=None):
80 if isinstance(page, int):
6581 self.page_id = page
6682 else:
6783 self.page = page
7490 return parse_creole(self.text)
7591
7692 def __repr__(self):
77 return '<%s %r:%r>' % (
78 self.__class__.__name__,
79 self.page_id,
80 self.revision_id
81 )
93 return "<%s %r:%r>" % (self.__class__.__name__, self.page_id, self.revision_id)
8294
8395
8496 class Page(object):
8698 Represents a simple page without any revisions. This is for example
8799 used in the page index where the page contents are not relevant.
88100 """
101
89102 query = session.query_property()
90103
91104 def __init__(self, name):
93106
94107 @property
95108 def title(self):
96 return self.name.replace('_', ' ')
109 return self.name.replace("_", " ")
97110
98111 def __repr__(self):
99 return '<%s %r>' % (self.__class__.__name__, self.name)
112 return "<%s %r>" % (self.__class__.__name__, self.name)
100113
101114
102115 class RevisionedPage(Page, Revision):
105118 and the ability of SQLAlchemy to map to joins we can combine `Page` and
106119 `Revision` into one class here.
107120 """
121
108122 query = session.query_property()
109123
110124 def __init__(self):
111 raise TypeError('cannot create WikiPage instances, use the Page and '
112 'Revision classes for data manipulation.')
125 raise TypeError(
126 "cannot create WikiPage instances, use the Page and "
127 "Revision classes for data manipulation."
128 )
113129
114130 def __repr__(self):
115 return '<%s %r:%r>' % (
116 self.__class__.__name__,
117 self.name,
118 self.revision_id
119 )
131 return "<%s %r:%r>" % (self.__class__.__name__, self.name, self.revision_id)
120132
121133
122134 # setup mappers
123135 mapper(Revision, revision_table)
124 mapper(Page, page_table, properties=dict(
125 revisions=relation(Revision, backref='page',
126 order_by=Revision.revision_id.desc())
127 ))
128 mapper(RevisionedPage, join(page_table, revision_table), properties=dict(
129 page_id=[page_table.c.page_id, revision_table.c.page_id],
130 ))
136 mapper(
137 Page,
138 page_table,
139 properties=dict(
140 revisions=relation(
141 Revision, backref="page", order_by=Revision.revision_id.desc()
142 )
143 ),
144 )
145 mapper(
146 RevisionedPage,
147 join(page_table, revision_table),
148 properties=dict(page_id=[page_table.c.page_id, revision_table.c.page_id]),
149 )
55 This module contains special pages such as the recent changes page.
66
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
11 from simplewiki.utils import Response, Pagination, generate_template, href
12 from simplewiki.database import RevisionedPage, Page
13 from simplewiki.actions import page_missing
14
11 from .actions import page_missing
12 from .database import Page
13 from .database import RevisionedPage
14 from .utils import generate_template
15 from .utils import Pagination
16 from .utils import Response
1517
1618
1719 def page_index(request):
1921 letters = {}
2022 for page in Page.query.order_by(Page.name):
2123 letters.setdefault(page.name.capitalize()[0], []).append(page)
22 return Response(generate_template('page_index.html',
23 letters=sorted(letters.items())
24 ))
24 return Response(
25 generate_template("page_index.html", letters=sorted(letters.items()))
26 )
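`page_index` builds its letter groups with `dict.setdefault`, which creates the list for a letter on first use. The same pattern on plain strings instead of `Page` objects:

```python
names = ["apple", "avocado", "banana", "Cherry"]
letters = {}
for name in names:
    # Group each name under its capitalized first letter, exactly as
    # page_index groups Page objects by page.name.
    letters.setdefault(name.capitalize()[0], []).append(name)

# sorted(letters.items()) yields (letter, names) pairs in order,
# ready for the template to iterate over.
index = sorted(letters.items())
```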
2527
2628
2729 def recent_changes(request):
2830 """Display the recent changes."""
29 page = max(1, request.args.get('page', type=int))
30 query = RevisionedPage.query \
31 .order_by(RevisionedPage.revision_id.desc())
32 return Response(generate_template('recent_changes.html',
33 pagination=Pagination(query, 20, page, 'Special:Recent_Changes')
34 ))
31 page = max(1, request.args.get("page", type=int))
32 query = RevisionedPage.query.order_by(RevisionedPage.revision_id.desc())
33 return Response(
34 generate_template(
35 "recent_changes.html",
36 pagination=Pagination(query, 20, page, "Special:Recent_Changes"),
37 )
38 )
3539
3640
3741 def page_not_found(request, page_name):
4246 return page_missing(request, page_name, True)
4347
4448
45 pages = {
46 'Index': page_index,
47 'Recent_Changes': recent_changes
48 }
49 pages = {"Index": page_index, "Recent_Changes": recent_changes}
44 <html xmlns="http://www.w3.org/1999/xhtml" xmlns:xi="http://www.w3.org/2001/XInclude"
55 xmlns:py="http://genshi.edgewall.org/"><xi:include href="layout.html" />
66 <head>
7 <title>${new and 'Create' or 'Edit'} Page</title>
7 <title>${'Create' if new else 'Edit'} Page</title>
88 </head>
99 <body>
10 <h1>${new and 'Create' or 'Edit'} “${page.title or page_name}”</h1>
10 <h1>${'Create' if new else 'Edit'} “${page.title or page_name}”</h1>
1111 <p>
12 You can now ${new and 'create' or 'modify'} the page contents. To
12 You can now ${'create' if new else 'modify'} the page contents. To
1313 format your text you can use <a href="http://www.wikicreole.org/">creole markup</a>.
1414 </p>
1515 <p class="error" py:if="error">${error}</p>
2020 <th class="actions">Actions</th>
2121 </tr>
2222 <tr py:for="idx, revision in enumerate(page.revisions)"
23 class="${idx % 2 == 1 and 'even' or 'odd'}">
23 class="${'even' if idx % 2 == 1 else 'odd'}">
2424 <td class="timestamp">${format_datetime(revision.timestamp)}</td>
2525 <td class="change_note">${revision.change_note}</td>
2626 <td class="diff">
2727 <input type="radio" name="old" value="${revision.revision_id}"
28 checked="${idx == 1 and 'checked' or None}" />
28 checked="${'checked' if idx == 1 else None}" />
2929 <input type="radio" name="new" value="${revision.revision_id}"
30 checked="${idx == 0 and 'checked' or None}" />
30 checked="${'checked' if idx == 0 else None}" />
3131 </td>
3232 <td class="actions">
3333 <a href="${href(page.name, rev=revision.revision_id)}">show</a>
2626 ('edit', href(page.name, action='edit'), 'edit'),
2727 ('log', href(page.name, action='log'), 'log')
2828 )">
29 <a href="${href}" class="${id == page_action and 'active' or
29 <a href="${href}" class="${'active' if id == page_action else
3030 None}">${title}</a> |
3131 </py:for>
3232 </py:if>
00 <div xmlns="http://www.w3.org/1999/xhtml" xmlns:py="http://genshi.edgewall.org/"
11 py:strip="">
2
2
33 <py:def function="render_pagination(pagination)">
44 <div class="pagination" py:if="pagination.pages > 1">
55 <py:choose test="pagination.has_previous">
1414 <th class="change_note">Change Note</th>
1515 </tr>
1616 <tr py:for="idx, entry in enumerate(pagination.entries)"
17 class="${idx % 2 == 1 and 'even' or 'odd'}">
17 class="${'even' if idx % 2 == 1 else 'odd'}">
1818 <td class="timestamp">${format_datetime(entry.timestamp)}</td>
1919 <td class="page"><a href="${href(entry.name)}">${entry.title}</a></td>
2020 <td class="change_note">${entry.change_note}</td>
55 This module implements various utility functions and classes used all
66 over the application.
77
8 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
1010 """
11 import difflib
11 from os import path
12
1213 import creoleparser
13 from os import path
1414 from genshi import Stream
1515 from genshi.template import TemplateLoader
16 from werkzeug.local import Local, LocalManager
17 from werkzeug.urls import url_encode, url_quote
18 from werkzeug.utils import cached_property, redirect
19 from werkzeug.wrappers import BaseRequest, BaseResponse
16 from werkzeug.local import Local
17 from werkzeug.local import LocalManager
18 from werkzeug.urls import url_encode
19 from werkzeug.urls import url_quote
20 from werkzeug.utils import cached_property
21 from werkzeug.wrappers import BaseRequest
22 from werkzeug.wrappers import BaseResponse
2023
2124
2225 # calculate the path to the templates and create the template loader
23 TEMPLATE_PATH = path.join(path.dirname(__file__), 'templates')
24 template_loader = TemplateLoader(TEMPLATE_PATH, auto_reload=True,
25 variable_lookup='lenient')
26 TEMPLATE_PATH = path.join(path.dirname(__file__), "templates")
27 template_loader = TemplateLoader(
28 TEMPLATE_PATH, auto_reload=True, variable_lookup="lenient"
29 )
2630
2731
2832 # context locals. these two objects are used by the application to
3034 # current thread and the current greenlet if there is greenlet support.
3135 local = Local()
3236 local_manager = LocalManager([local])
33 request = local('request')
34 application = local('application')
37 request = local("request")
38 application = local("application")
3539
3640 # create a new creole parser
3741 creole_parser = creoleparser.Parser(
38 dialect=creoleparser.create_dialect(creoleparser.creole10_base,
39 wiki_links_base_url='',
42 dialect=creoleparser.create_dialect(
43 creoleparser.creole10_base,
44 wiki_links_base_url="",
4045 wiki_links_path_func=lambda page_name: href(page_name),
41 wiki_links_space_char='_',
42 no_wiki_monospace=True
46 wiki_links_space_char="_",
47 no_wiki_monospace=True,
4348 ),
44 method='html'
49 method="html",
4550 )
4651
4752
4853 def generate_template(template_name, **context):
4954 """Load and generate a template."""
50 context.update(
51 href=href,
52 format_datetime=format_datetime
53 )
55 context.update(href=href, format_datetime=format_datetime)
5456 return template_loader.load(template_name).generate(**context)
5557
5658
6466 Simple function for URL generation. Positional arguments are used for the
6567 URL path and keyword arguments are used for the URL parameters.
6668 """
67 result = [(request and request.script_root or '') + '/']
69 result = [(request.script_root if request else "") + "/"]
6870 for idx, arg in enumerate(args):
69 result.append((idx and '/' or '') + url_quote(arg))
71 result.append(("/" if idx else "") + url_quote(arg))
7072 if kw:
71 result.append('?' + url_encode(kw))
72 return ''.join(result)
73 result.append("?" + url_encode(kw))
74 return "".join(result)
7375
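The `href()` helper above joins quoted path segments and appends the keyword arguments as an encoded query string. The same logic with only the standard library (a sketch; the `script_root` prefix handling is omitted):

```python
try:
    from urllib.parse import quote, urlencode  # Python 3
except ImportError:
    from urllib import quote, urlencode  # Python 2

def href(*args, **kw):
    # Quote each positional segment into the path, then append
    # keyword arguments as the query string.
    result = ["/"]
    for idx, arg in enumerate(args):
        result.append(("/" if idx else "") + quote(str(arg)))
    if kw:
        # sorted() only to make the output deterministic for the demo
        result.append("?" + urlencode(sorted(kw.items())))
    return "".join(result)

print(href("Main_Page", action="edit"))  # /Main_Page?action=edit
```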
7476
7577 def format_datetime(obj):
7678 """Format a datetime object."""
77 return obj.strftime('%Y-%m-%d %H:%M')
79 return obj.strftime("%Y-%m-%d %H:%M")
7880
7981
8082 class Request(BaseRequest):
9496 to html. This makes it possible to switch to xhtml or html5 easily.
9597 """
9698
97 default_mimetype = 'text/html'
99 default_mimetype = "text/html"
98100
99 def __init__(self, response=None, status=200, headers=None, mimetype=None,
100 content_type=None):
101 def __init__(
102 self, response=None, status=200, headers=None, mimetype=None, content_type=None
103 ):
101104 if isinstance(response, Stream):
102 response = response.render('html', encoding=None, doctype='html')
103 BaseResponse.__init__(self, response, status, headers, mimetype,
104 content_type)
105 response = response.render("html", encoding=None, doctype="html")
106 BaseResponse.__init__(self, response, status, headers, mimetype, content_type)
105107
106108
107109 class Pagination(object):
118120
119121 @cached_property
120122 def entries(self):
121 return self.query.offset((self.page - 1) * self.per_page) \
122 .limit(self.per_page).all()
123 return (
124 self.query.offset((self.page - 1) * self.per_page)
125 .limit(self.per_page)
126 .all()
127 )
123128
124129 @property
125130 def has_previous(self):
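The `entries` property above windows a SQLAlchemy query with `offset()`/`limit()`. The same arithmetic over a plain list shows the paging math in isolation (`ListPagination` is a hypothetical minimal stand-in, not the class above):

```python
class ListPagination(object):
    """Offset/limit paging over an in-memory list (illustrative only)."""

    def __init__(self, items, page, per_page):
        self.items = items
        self.page = page
        self.per_page = per_page

    @property
    def entries(self):
        # Same math as query.offset((page - 1) * per_page).limit(per_page)
        start = (self.page - 1) * self.per_page
        return self.items[start:start + self.per_page]

    @property
    def has_previous(self):
        return self.page > 1


p = ListPagination(list(range(25)), page=2, per_page=10)
print(p.entries)  # items 10..19
```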
44
55 All uploaded files are directly sent back to the client.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 from werkzeug.serving import run_simple
11 from werkzeug.wrappers import BaseRequest, BaseResponse
11 from werkzeug.wrappers import BaseRequest
12 from werkzeug.wrappers import BaseResponse
1213 from werkzeug.wsgi import wrap_file
1314
1415
1516 def view_file(req):
16 if not 'uploaded_file' in req.files:
17 return BaseResponse('no file uploaded')
18 f = req.files['uploaded_file']
19 return BaseResponse(wrap_file(req.environ, f), mimetype=f.content_type,
20 direct_passthrough=True)
17 if "uploaded_file" not in req.files:
18 return BaseResponse("no file uploaded")
19 f = req.files["uploaded_file"]
20 return BaseResponse(
21 wrap_file(req.environ, f), mimetype=f.content_type, direct_passthrough=True
22 )
2123
2224
2325 def upload_file(req):
24 return BaseResponse('''
25 <h1>Upload File</h1>
26 <form action="" method="post" enctype="multipart/form-data">
27 <input type="file" name="uploaded_file">
28 <input type="submit" value="Upload">
29 </form>
30 ''', mimetype='text/html')
26 return BaseResponse(
27 """<h1>Upload File</h1>
28 <form action="" method="post" enctype="multipart/form-data">
29 <input type="file" name="uploaded_file">
30 <input type="submit" value="Upload">
31 </form>""",
32 mimetype="text/html",
33 )
3134
3235
3336 def application(environ, start_response):
3437 req = BaseRequest(environ)
35 if req.method == 'POST':
38 if req.method == "POST":
3639 resp = view_file(req)
3740 else:
3841 resp = upload_file(req)
3942 return resp(environ, start_response)
4043
4144
42 if __name__ == '__main__':
43 run_simple('localhost', 5000, application, use_debugger=True)
45 if __name__ == "__main__":
46 run_simple("localhost", 5000, application, use_debugger=True)
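`wrap_file()` with `direct_passthrough=True` hands the open file to the server's `wsgi.file_wrapper` when one is available; otherwise it falls back to iterating the file in fixed-size chunks. A sketch of that fallback iterator (the `ChunkedFile` name is invented here; it is not Werkzeug's implementation):

```python
import io


class ChunkedFile(object):
    """Yield a file-like object in fixed-size chunks, similar to the
    fallback path of werkzeug.wsgi.wrap_file (illustrative sketch)."""

    def __init__(self, fileobj, buffer_size=8192):
        self.fileobj = fileobj
        self.buffer_size = buffer_size

    def __iter__(self):
        while True:
            chunk = self.fileobj.read(self.buffer_size)
            if not chunk:
                break
            yield chunk


body = b"x" * 10000
chunks = list(ChunkedFile(io.BytesIO(body), buffer_size=4096))
print([len(c) for c in chunks])  # [4096, 4096, 1808]
```

Streaming in chunks keeps memory flat for large uploads instead of reading the whole file into the response body.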
44
55 The application from the web.py tutorial.
66
7 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from webpylike import WebPyApp, View, Response
10 from .webpylike import Response
11 from .webpylike import View
12 from .webpylike import WebPyApp
1113
1214
13 urls = (
14 '/', 'index',
15 '/about', 'about'
16 )
15 urls = ("/", "index", "/about", "about")
1716
1817
1918 class index(View):
2019 def GET(self):
21 return Response('Hello World')
20 return Response("Hello World")
2221
2322
2423 class about(View):
2524 def GET(self):
26 return Response('This is the about page')
25 return Response("This is the about page")
2726
2827
2928 app = WebPyApp(urls, globals())
66 not implement is a stream system that hooks into sys.stdout like web.py
77 provides. I consider this bad design.
88
9 :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details.
10 :license: BSD.
9 :copyright: 2007 Pallets
10 :license: BSD-3-Clause
1111 """
1212 import re
13 from werkzeug.wrappers import BaseRequest, BaseResponse
14 from werkzeug.exceptions import HTTPException, MethodNotAllowed, \
15 NotImplemented, NotFound
13
14 from werkzeug.exceptions import HTTPException
15 from werkzeug.exceptions import MethodNotAllowed
16 from werkzeug.exceptions import NotFound
17 from werkzeug.exceptions import NotImplemented
18 from werkzeug.wrappers import BaseRequest
19 from werkzeug.wrappers import BaseResponse
1620
1721
1822 class Request(BaseRequest):
3236
3337 def GET(self):
3438 raise MethodNotAllowed()
39
3540 POST = DELETE = PUT = GET
3641
3742 def HEAD(self):
4550 """
4651
4752 def __init__(self, urls, views):
48 self.urls = [(re.compile('^%s$' % urls[i]), urls[i + 1])
49 for i in xrange(0, len(urls), 2)]
53 self.urls = [
54 (re.compile("^%s$" % urls[i]), urls[i + 1]) for i in range(0, len(urls), 2)
55 ]
5056 self.views = views
5157
5258 def __call__(self, environ, start_response):
5662 match = regex.match(req.path)
5763 if match is not None:
5864 view = self.views[view](self, req)
59 if req.method not in ('GET', 'HEAD', 'POST',
60 'DELETE', 'PUT'):
61 raise NotImplemented()
65 if req.method not in ("GET", "HEAD", "POST", "DELETE", "PUT"):
66 raise NotImplemented() # noqa: F901
6267 resp = getattr(view, req.method)(*match.groups())
6368 break
6469 else:
6570 raise NotFound()
66 except HTTPException, e:
71 except HTTPException as e:
6772 resp = e
6873 return resp(environ, start_response)
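The dispatcher above compiles `(pattern, view-name)` pairs from the flat `urls` tuple, finds the first regex matching the request path, and calls the view method named after the HTTP verb. The same dispatch loop stripped of the WSGI plumbing (a sketch; the `user` view and `dispatch` helper are invented for the demo):

```python
import re


class View(object):
    pass


class index(View):
    def GET(self):
        return "Hello World"


class user(View):
    def GET(self, name):
        return "Hello %s" % name


# (pattern, view-name) pairs, flattened like the `urls` tuple above
urls = ("/", "index", "/user/(\\w+)", "user")
compiled = [
    (re.compile("^%s$" % urls[i]), urls[i + 1]) for i in range(0, len(urls), 2)
]


def dispatch(path, method, views):
    for regex, name in compiled:
        match = regex.match(path)
        if match is not None:
            view = views[name]()
            # Call the method named after the HTTP verb, passing the
            # regex groups as positional arguments.
            return getattr(view, method)(*match.groups())
    raise LookupError("404: %s" % path)


print(dispatch("/user/jane", "GET", globals()))  # Hello jane
```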
+0
-165
scripts/make-release.py
0 #!/usr/bin/env python
1 # -*- coding: utf-8 -*-
2 from __future__ import print_function
3
4 import os
5 import re
6 import sys
7 from datetime import date, datetime
8 from subprocess import PIPE, Popen
9
10 _date_strip_re = re.compile(r'(?<=\d)(st|nd|rd|th)')
11
12
13 def parse_changelog():
14 with open('CHANGES.rst') as f:
15 lineiter = iter(f)
16 for line in lineiter:
17 match = re.search('^Version\s+(.*)', line.strip())
18
19 if match is None:
20 continue
21
22 version = match.group(1).strip()
23
24 if next(lineiter).count('-') != len(match.group(0)):
25 continue
26
27 while 1:
28 change_info = next(lineiter).strip()
29
30 if change_info:
31 break
32
33 match = re.search(
34 r'released on (\w+\s+\d+\w+\s+\d+)(?:, codename (.*))?',
35 change_info,
36 flags=re.IGNORECASE
37 )
38
39 if match is None:
40 continue
41
42 datestr, codename = match.groups()
43 return version, parse_date(datestr), codename
44
45
46 def bump_version(version):
47 try:
48 parts = [int(i) for i in version.split('.')]
49 except ValueError:
50 fail('Current version is not numeric')
51
52 parts[-1] += 1
53 return '.'.join(map(str, parts))
54
55
56 def parse_date(string):
57 string = _date_strip_re.sub('', string)
58 return datetime.strptime(string, '%B %d %Y')
59
60
61 def set_filename_version(filename, version_number, pattern):
62 changed = []
63
64 def inject_version(match):
65 before, old, after = match.groups()
66 changed.append(True)
67 return before + version_number + after
68
69 with open(filename) as f:
70 contents = re.sub(
71 r"^(\s*%s\s*=\s*')(.+?)(')" % pattern,
72 inject_version, f.read(),
73 flags=re.DOTALL | re.MULTILINE
74 )
75
76 if not changed:
77 fail('Could not find %s in %s', pattern, filename)
78
79 with open(filename, 'w') as f:
80 f.write(contents)
81
82
83 def set_init_version(version):
84 info('Setting __init__.py version to %s', version)
85 set_filename_version('werkzeug/__init__.py', version, '__version__')
86
87
88 def build():
89 cmd = [sys.executable, 'setup.py', 'sdist', 'bdist_wheel']
90 Popen(cmd).wait()
91
92
93 def fail(message, *args):
94 print('Error:', message % args, file=sys.stderr)
95 sys.exit(1)
96
97
98 def info(message, *args):
99 print(message % args, file=sys.stderr)
100
101
102 def get_git_tags():
103 return set(
104 Popen(['git', 'tag'], stdout=PIPE).communicate()[0].splitlines()
105 )
106
107
108 def git_is_clean():
109 return Popen(['git', 'diff', '--quiet']).wait() == 0
110
111
112 def make_git_commit(message, *args):
113 message = message % args
114 Popen(['git', 'commit', '-am', message]).wait()
115
116
117 def make_git_tag(tag):
118 info('Tagging "%s"', tag)
119 Popen(['git', 'tag', tag]).wait()
120
121
122 def main():
123 os.chdir(os.path.join(os.path.dirname(__file__), '..'))
124
125 rv = parse_changelog()
126
127 if rv is None:
128 fail('Could not parse changelog')
129
130 version, release_date, codename = rv
131 dev_version = bump_version(version) + '.dev'
132
133 info(
134 'Releasing %s (codename %s, release date %s)',
135 version, codename, release_date.strftime('%d/%m/%Y')
136 )
137 tags = get_git_tags()
138
139 if version in tags:
140 fail('Version "%s" is already tagged', version)
141
142 if release_date.date() != date.today():
143 fail(
144 'Release date is not today (%s != %s)',
145 release_date.date(), date.today()
146 )
147
148 if not git_is_clean():
149 fail('You have uncommitted changes in git')
150
151 try:
152 import wheel # noqa: F401
153 except ImportError:
154 fail('You need to install the wheel package.')
155
156 set_init_version(version)
157 make_git_commit('Bump version number to %s', version)
158 make_git_tag(version)
159 build()
160 set_init_version(dev_version)
161
162
163 if __name__ == '__main__':
164 main()
0 [metadata]
1 license_file = LICENSE.rst
2
3 [bdist_wheel]
4 universal = true
5
06 [tool:pytest]
1 minversion = 3.0
27 testpaths = tests
38 norecursedirs = tests/hypothesis
4 filterwarnings = ignore::requests.packages.urllib3.exceptions.InsecureRequestWarning
9 filterwarnings =
10 ignore::requests.packages.urllib3.exceptions.InsecureRequestWarning
11 ; warning about collections.abc fixed in watchdog master
12 ignore::DeprecationWarning:watchdog.utils.bricks:175
13 ; DeprecationWarnings from Werkzeug
14 ignore:.* version 1\.0:DeprecationWarning
15 ignore:The default 'Secure:UserWarning
516
6 [bdist_wheel]
7 universal = 1
17 [coverage:run]
18 branch = True
19 source =
20 werkzeug
21 tests
822
9 [metadata]
10 license_file = LICENSE
23 [coverage:paths]
24 source =
25 src/werkzeug
26 .tox/*/lib/python*/site-packages/werkzeug
27 .tox/*/site-packages/werkzeug
1128
1229 [flake8]
13 ignore = E126,E241,E272,E305,E402,E731,W503
14 exclude=.tox,examples,docs
15 max-line-length=100
30 # B = bugbear
31 # E = pycodestyle errors
32 # F = flake8 pyflakes
33 # W = pycodestyle warnings
34 # B9 = bugbear opinions
35 select = B, E, F, W, B9
36 ignore =
37 # slice notation whitespace, invalid
38 E203
39 # import at top, too many circular import fixes
40 E402
41 # line length, handled by bugbear B950
42 E501
43 # bare except, handled by bugbear B001
44 E722
45 # bin op line break, invalid
46 W503
47 # up to 88 allowed by bugbear B950
48 max-line-length = 80
49 per-file-ignores =
50 # __init__ modules export names
51 **/__init__.py: F401
52 # LocalProxy assigns lambdas
53 src/werkzeug/local.py: E731
0 #!/usr/bin/env python
10 import io
21 import re
32
4 from setuptools import find_packages, setup
3 from setuptools import find_packages
4 from setuptools import setup
55
6 with io.open('README.rst', 'rt', encoding='utf8') as f:
6 with io.open("README.rst", "rt", encoding="utf8") as f:
77 readme = f.read()
88
9 with io.open('werkzeug/__init__.py', 'rt', encoding='utf8') as f:
10 version = re.search(
11 r'__version__ = \'(.*?)\'', f.read(), re.M).group(1)
9 with io.open("src/werkzeug/__init__.py", "rt", encoding="utf8") as f:
10 version = re.search(r'__version__ = "(.*?)"', f.read(), re.M).group(1)
1211
1312 setup(
14 name='Werkzeug',
13 name="Werkzeug",
1514 version=version,
16 url='https://www.palletsprojects.org/p/werkzeug/',
17 license='BSD',
18 author='Armin Ronacher',
19 author_email='armin.ronacher@active-4.com',
20 description='The comprehensive WSGI web application library.',
15 url="https://palletsprojects.com/p/werkzeug/",
16 project_urls={
17 "Documentation": "https://werkzeug.palletsprojects.com/",
18 "Code": "https://github.com/pallets/werkzeug",
19 "Issue tracker": "https://github.com/pallets/werkzeug/issues",
20 },
21 license="BSD-3-Clause",
22 author="Armin Ronacher",
23 author_email="armin.ronacher@active-4.com",
24 maintainer="The Pallets Team",
25 maintainer_email="contact@palletsprojects.com",
26 description="The comprehensive WSGI web application library.",
2127 long_description=readme,
2228 classifiers=[
23 'Development Status :: 5 - Production/Stable',
24 'Environment :: Web Environment',
25 'Intended Audience :: Developers',
26 'License :: OSI Approved :: BSD License',
27 'Operating System :: OS Independent',
28 'Programming Language :: Python',
29 'Programming Language :: Python :: 2',
30 'Programming Language :: Python :: 2.6',
31 'Programming Language :: Python :: 2.7',
32 'Programming Language :: Python :: 3',
33 'Programming Language :: Python :: 3.3',
34 'Programming Language :: Python :: 3.4',
35 'Programming Language :: Python :: 3.5',
36 'Programming Language :: Python :: 3.6',
37 'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
38 'Topic :: Software Development :: Libraries :: Python Modules',
29 "Development Status :: 5 - Production/Stable",
30 "Environment :: Web Environment",
31 "Intended Audience :: Developers",
32 "License :: OSI Approved :: BSD License",
33 "Operating System :: OS Independent",
34 "Programming Language :: Python",
35 "Programming Language :: Python :: 2",
36 "Programming Language :: Python :: 2.7",
37 "Programming Language :: Python :: 3",
38 "Programming Language :: Python :: 3.4",
39 "Programming Language :: Python :: 3.5",
40 "Programming Language :: Python :: 3.6",
41 "Programming Language :: Python :: 3.7",
42 "Programming Language :: Python :: Implementation :: CPython",
43 "Programming Language :: Python :: Implementation :: PyPy",
44 "Topic :: Internet :: WWW/HTTP :: Dynamic Content",
45 "Topic :: Internet :: WWW/HTTP :: WSGI",
46 "Topic :: Internet :: WWW/HTTP :: WSGI :: Application",
47 "Topic :: Internet :: WWW/HTTP :: WSGI :: Middleware",
48 "Topic :: Software Development :: Libraries :: Application Frameworks",
49 "Topic :: Software Development :: Libraries :: Python Modules",
3950 ],
40 packages=find_packages(exclude=('tests*',)),
51 packages=find_packages("src"),
52 package_dir={"": "src"},
53 include_package_data=True,
54 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*",
4155 extras_require={
42 'watchdog': ['watchdog'],
43 'termcolor': ['termcolor'],
44 'dev': [
45 'pytest',
46 'coverage',
47 'tox',
48 'sphinx',
56 "watchdog": ["watchdog"],
57 "termcolor": ["termcolor"],
58 "dev": [
59 "pytest",
60 "coverage",
61 "tox",
62 "sphinx",
63 "pallets-sphinx-themes",
64 "sphinx-issues",
4965 ],
5066 },
51 include_package_data=True,
52 zip_safe=False,
53 platforms='any'
5467 )
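The `setup.py` above extracts `__version__` from the package source with a regex instead of importing the package, so the build does not need Werkzeug's dependencies installed. The extraction step in isolation (the sample string stands in for the file contents):

```python
import re

# The same pattern setup.py runs over src/werkzeug/__init__.py
source = '__version__ = "0.15.4"\n'
version = re.search(r'__version__ = "(.*?)"', source, re.M).group(1)
print(version)  # 0.15.4
```

Note the pattern was updated from single to double quotes in this release because Black reformats the source to double-quoted strings.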
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug
3 ~~~~~~~~
4
5 Werkzeug is the Swiss Army knife of Python web development.
6
7 It provides useful classes and functions for any WSGI application to make
8 the life of a Python web developer much easier. All of the provided
9 classes are independent of each other so you can mix them with any other
10 library.
11
12
13 :copyright: 2007 Pallets
14 :license: BSD-3-Clause
15 """
16 import sys
17 from types import ModuleType
18
19 __version__ = "0.15.4"
20
21 # This import magic raises concerns quite often which is why the implementation
22 # and motivation is explained here in detail now.
23 #
24 # The majority of the functions and classes provided by Werkzeug work on the
25 # HTTP and WSGI layer. There is no useful grouping for those which is why
26 # they are all importable from "werkzeug" instead of the modules where they are
27 # implemented. The downside of that is, that now everything would be loaded at
28 # once, even if unused.
29 #
30 # The implementation of a lazy-loading module in this file replaces the
31 # werkzeug package when imported from within. Attribute access to the werkzeug
32 # module will then lazily import from the modules that implement the objects.
33
34 # import mapping to objects in other modules
35 all_by_module = {
36 "werkzeug.debug": ["DebuggedApplication"],
37 "werkzeug.local": [
38 "Local",
39 "LocalManager",
40 "LocalProxy",
41 "LocalStack",
42 "release_local",
43 ],
44 "werkzeug.serving": ["run_simple"],
45 "werkzeug.test": ["Client", "EnvironBuilder", "create_environ", "run_wsgi_app"],
46 "werkzeug.testapp": ["test_app"],
47 "werkzeug.exceptions": ["abort", "Aborter"],
48 "werkzeug.urls": [
49 "url_decode",
50 "url_encode",
51 "url_quote",
52 "url_quote_plus",
53 "url_unquote",
54 "url_unquote_plus",
55 "url_fix",
56 "Href",
57 "iri_to_uri",
58 "uri_to_iri",
59 ],
60 "werkzeug.formparser": ["parse_form_data"],
61 "werkzeug.utils": [
62 "escape",
63 "environ_property",
64 "append_slash_redirect",
65 "redirect",
66 "cached_property",
67 "import_string",
68 "dump_cookie",
69 "parse_cookie",
70 "unescape",
71 "format_string",
72 "find_modules",
73 "header_property",
74 "html",
75 "xhtml",
76 "HTMLBuilder",
77 "validate_arguments",
78 "ArgumentValidationError",
79 "bind_arguments",
80 "secure_filename",
81 ],
82 "werkzeug.wsgi": [
83 "get_current_url",
84 "get_host",
85 "pop_path_info",
86 "peek_path_info",
87 "ClosingIterator",
88 "FileWrapper",
89 "make_line_iter",
90 "LimitedStream",
91 "responder",
92 "wrap_file",
93 "extract_path_info",
94 ],
95 "werkzeug.datastructures": [
96 "MultiDict",
97 "CombinedMultiDict",
98 "Headers",
99 "EnvironHeaders",
100 "ImmutableList",
101 "ImmutableDict",
102 "ImmutableMultiDict",
103 "TypeConversionDict",
104 "ImmutableTypeConversionDict",
105 "Accept",
106 "MIMEAccept",
107 "CharsetAccept",
108 "LanguageAccept",
109 "RequestCacheControl",
110 "ResponseCacheControl",
111 "ETags",
112 "HeaderSet",
113 "WWWAuthenticate",
114 "Authorization",
115 "FileMultiDict",
116 "CallbackDict",
117 "FileStorage",
118 "OrderedMultiDict",
119 "ImmutableOrderedMultiDict",
120 ],
121 "werkzeug.useragents": ["UserAgent"],
122 "werkzeug.http": [
123 "parse_etags",
124 "parse_date",
125 "http_date",
126 "cookie_date",
127 "parse_cache_control_header",
128 "is_resource_modified",
129 "parse_accept_header",
130 "parse_set_header",
131 "quote_etag",
132 "unquote_etag",
133 "generate_etag",
134 "dump_header",
135 "parse_list_header",
136 "parse_dict_header",
137 "parse_authorization_header",
138 "parse_www_authenticate_header",
139 "remove_entity_headers",
140 "is_entity_header",
141 "remove_hop_by_hop_headers",
142 "parse_options_header",
143 "dump_options_header",
144 "is_hop_by_hop_header",
145 "unquote_header_value",
146 "quote_header_value",
147 "HTTP_STATUS_CODES",
148 ],
149 "werkzeug.wrappers": [
150 "BaseResponse",
151 "BaseRequest",
152 "Request",
153 "Response",
154 "AcceptMixin",
155 "ETagRequestMixin",
156 "ETagResponseMixin",
157 "ResponseStreamMixin",
158 "CommonResponseDescriptorsMixin",
159 "UserAgentMixin",
160 "AuthorizationMixin",
161 "WWWAuthenticateMixin",
162 "CommonRequestDescriptorsMixin",
163 ],
164 "werkzeug.middleware.dispatcher": ["DispatcherMiddleware"],
165 "werkzeug.middleware.shared_data": ["SharedDataMiddleware"],
166 "werkzeug.security": ["generate_password_hash", "check_password_hash"],
167 # the undocumented easteregg ;-)
168 "werkzeug._internal": ["_easteregg"],
169 }
170
171 # modules that should be imported when accessed as attributes of werkzeug
172 attribute_modules = frozenset(["exceptions", "routing"])
173
174 object_origins = {}
175 for module, items in all_by_module.items():
176 for item in items:
177 object_origins[item] = module
178
179
180 class module(ModuleType):
181 """Automatically import objects from the modules."""
182
183 def __getattr__(self, name):
184 if name in object_origins:
185 module = __import__(object_origins[name], None, None, [name])
186 for extra_name in all_by_module[module.__name__]:
187 setattr(self, extra_name, getattr(module, extra_name))
188 return getattr(module, name)
189 elif name in attribute_modules:
190 __import__("werkzeug." + name)
191 return ModuleType.__getattribute__(self, name)
192
193 def __dir__(self):
194 """Just show what we want to show."""
195 result = list(new_module.__all__)
196 result.extend(
197 (
198 "__file__",
199 "__doc__",
200 "__all__",
201 "__docformat__",
202 "__name__",
203 "__path__",
204 "__package__",
205 "__version__",
206 )
207 )
208 return result
209
210
211 # keep a reference to this module so that it's not garbage collected
212 old_module = sys.modules["werkzeug"]
213
214
215 # setup the new module and patch it into the dict of loaded modules
216 new_module = sys.modules["werkzeug"] = module("werkzeug")
217 new_module.__dict__.update(
218 {
219 "__file__": __file__,
220 "__package__": "werkzeug",
221 "__path__": __path__,
222 "__doc__": __doc__,
223 "__version__": __version__,
224 "__all__": tuple(object_origins) + tuple(attribute_modules),
225 "__docformat__": "restructuredtext en",
226 }
227 )
228
229
230 # Due to bootstrapping issues we need to import exceptions here.
231 # Don't ask :-(
232 __import__("werkzeug.exceptions")
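The `module` subclass above gives `werkzeug` a flat namespace by importing the defining module only on first attribute access. The same mechanics demonstrated against the standard library (the `object_origins` mapping here is invented for the demo; it mirrors the shape of the real one above):

```python
import sys
from types import ModuleType

# Which real module each exported name lives in (stdlib, for the demo)
object_origins = {"sqrt": "math", "dumps": "json"}


class LazyModule(ModuleType):
    """Import the defining module only when a name is first accessed."""

    def __getattr__(self, name):
        if name in object_origins:
            module = __import__(object_origins[name], None, None, [name])
            # Cache the attribute so later lookups bypass __getattr__
            setattr(self, name, getattr(module, name))
            return getattr(module, name)
        raise AttributeError(name)


demo = LazyModule("demo")
sys.modules["demo"] = demo
print(demo.sqrt(16.0))  # 4.0
```

Nothing is imported until `demo.sqrt` is touched, which is the point of the pattern: `import werkzeug` stays cheap even though dozens of names are reachable from the top level.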
0 # flake8: noqa
1 # This whole file is full of lint errors
2 import functools
3 import operator
4 import sys
5
6 try:
7 import builtins
8 except ImportError:
9 import __builtin__ as builtins
10
11
12 PY2 = sys.version_info[0] == 2
13 WIN = sys.platform.startswith("win")
14
15 _identity = lambda x: x
16
17 if PY2:
18 unichr = unichr
19 text_type = unicode
20 string_types = (str, unicode)
21 integer_types = (int, long)
22
23 iterkeys = lambda d, *args, **kwargs: d.iterkeys(*args, **kwargs)
24 itervalues = lambda d, *args, **kwargs: d.itervalues(*args, **kwargs)
25 iteritems = lambda d, *args, **kwargs: d.iteritems(*args, **kwargs)
26
27 iterlists = lambda d, *args, **kwargs: d.iterlists(*args, **kwargs)
28 iterlistvalues = lambda d, *args, **kwargs: d.iterlistvalues(*args, **kwargs)
29
30 int_to_byte = chr
31 iter_bytes = iter
32
33 import collections as collections_abc
34
35 exec("def reraise(tp, value, tb=None):\n raise tp, value, tb")
36
37 def fix_tuple_repr(obj):
38 def __repr__(self):
39 cls = self.__class__
40 return "%s(%s)" % (
41 cls.__name__,
42 ", ".join(
43 "%s=%r" % (field, self[index])
44 for index, field in enumerate(cls._fields)
45 ),
46 )
47
48 obj.__repr__ = __repr__
49 return obj
50
51 def implements_iterator(cls):
52 cls.next = cls.__next__
53 del cls.__next__
54 return cls
55
56 def implements_to_string(cls):
57 cls.__unicode__ = cls.__str__
58 cls.__str__ = lambda x: x.__unicode__().encode("utf-8")
59 return cls
60
61 def native_string_result(func):
62 def wrapper(*args, **kwargs):
63 return func(*args, **kwargs).encode("utf-8")
64
65 return functools.update_wrapper(wrapper, func)
66
67 def implements_bool(cls):
68 cls.__nonzero__ = cls.__bool__
69 del cls.__bool__
70 return cls
71
72 from itertools import imap, izip, ifilter
73
74 range_type = xrange
75
76 from StringIO import StringIO
77 from cStringIO import StringIO as BytesIO
78
79 NativeStringIO = BytesIO
80
81 def make_literal_wrapper(reference):
82 return _identity
83
84 def normalize_string_tuple(tup):
85 """Normalizes a string tuple to a common type. Following Python 2
86 rules, upgrades to unicode are implicit.
87 """
88 if any(isinstance(x, text_type) for x in tup):
89 return tuple(to_unicode(x) for x in tup)
90 return tup
91
92 def try_coerce_native(s):
93 """Try to coerce a unicode string to native if possible. Otherwise,
94 leave it as unicode.
95 """
96 try:
97 return to_native(s)
98 except UnicodeError:
99 return s
100
101 wsgi_get_bytes = _identity
102
103 def wsgi_decoding_dance(s, charset="utf-8", errors="replace"):
104 return s.decode(charset, errors)
105
106 def wsgi_encoding_dance(s, charset="utf-8", errors="replace"):
107 if isinstance(s, bytes):
108 return s
109 return s.encode(charset, errors)
110
111 def to_bytes(x, charset=sys.getdefaultencoding(), errors="strict"):
112 if x is None:
113 return None
114 if isinstance(x, (bytes, bytearray, buffer)):
115 return bytes(x)
116 if isinstance(x, unicode):
117 return x.encode(charset, errors)
118 raise TypeError("Expected bytes")
119
120 def to_native(x, charset=sys.getdefaultencoding(), errors="strict"):
121 if x is None or isinstance(x, str):
122 return x
123 return x.encode(charset, errors)
124
125
126 else:
127 unichr = chr
128 text_type = str
129 string_types = (str,)
130 integer_types = (int,)
131
132 iterkeys = lambda d, *args, **kwargs: iter(d.keys(*args, **kwargs))
133 itervalues = lambda d, *args, **kwargs: iter(d.values(*args, **kwargs))
134 iteritems = lambda d, *args, **kwargs: iter(d.items(*args, **kwargs))
135
136 iterlists = lambda d, *args, **kwargs: iter(d.lists(*args, **kwargs))
137 iterlistvalues = lambda d, *args, **kwargs: iter(d.listvalues(*args, **kwargs))
138
139 int_to_byte = operator.methodcaller("to_bytes", 1, "big")
140 iter_bytes = functools.partial(map, int_to_byte)
141
142 import collections.abc as collections_abc
143
144 def reraise(tp, value, tb=None):
145 if value.__traceback__ is not tb:
146 raise value.with_traceback(tb)
147 raise value
148
149 fix_tuple_repr = _identity
150 implements_iterator = _identity
151 implements_to_string = _identity
152 implements_bool = _identity
153 native_string_result = _identity
154 imap = map
155 izip = zip
156 ifilter = filter
157 range_type = range
158
159 from io import StringIO, BytesIO
160
161 NativeStringIO = StringIO
162
163 _latin1_encode = operator.methodcaller("encode", "latin1")
164
165 def make_literal_wrapper(reference):
166 if isinstance(reference, text_type):
167 return _identity
168 return _latin1_encode
169
170 def normalize_string_tuple(tup):
171 """Ensures that all types in the tuple are either strings
172 or bytes.
173 """
174 tupiter = iter(tup)
175 is_text = isinstance(next(tupiter, None), text_type)
176 for arg in tupiter:
177 if isinstance(arg, text_type) != is_text:
178 raise TypeError(
179 "Cannot mix str and bytes arguments (got %s)" % repr(tup)
180 )
181 return tup
182
183 try_coerce_native = _identity
184 wsgi_get_bytes = _latin1_encode
185
186 def wsgi_decoding_dance(s, charset="utf-8", errors="replace"):
187 return s.encode("latin1").decode(charset, errors)
188
189 def wsgi_encoding_dance(s, charset="utf-8", errors="replace"):
190 if isinstance(s, text_type):
191 s = s.encode(charset)
192 return s.decode("latin1", errors)
193
194 def to_bytes(x, charset=sys.getdefaultencoding(), errors="strict"):
195 if x is None:
196 return None
197 if isinstance(x, (bytes, bytearray, memoryview)): # noqa
198 return bytes(x)
199 if isinstance(x, str):
200 return x.encode(charset, errors)
201 raise TypeError("Expected bytes")
202
203 def to_native(x, charset=sys.getdefaultencoding(), errors="strict"):
204 if x is None or isinstance(x, str):
205 return x
206 return x.decode(charset, errors)
207
208
209 def to_unicode(
210 x, charset=sys.getdefaultencoding(), errors="strict", allow_none_charset=False
211 ):
212 if x is None:
213 return None
214 if not isinstance(x, bytes):
215 return text_type(x)
216 if charset is None and allow_none_charset:
217 return x
218 return x.decode(charset, errors)
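Under PEP 3333, WSGI environ strings on Python 3 are `str` objects holding latin-1-decoded bytes, which is why `wsgi_decoding_dance` above re-encodes to latin-1 before decoding with the application charset. The round trip in isolation (a sketch of the Python 3 branch):

```python
# A UTF-8 request path as the server saw it on the wire
raw = u"/caf\u00e9".encode("utf-8")

# PEP 3333: the server exposes the raw bytes latin-1-decoded into str
environ_value = raw.decode("latin1")

# wsgi_decoding_dance: undo the latin-1 step, then decode with the
# application's real charset
decoded = environ_value.encode("latin1").decode("utf-8")
print(decoded)
```

Latin-1 is used for the intermediate step because it maps every byte value 0-255 to a code point losslessly, so no information is destroyed in either direction.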
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug._internal
3 ~~~~~~~~~~~~~~~~~~
4
5 This module provides internally used helpers and constants.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import inspect
11 import logging
12 import re
13 import string
14 from datetime import date
15 from datetime import datetime
16 from itertools import chain
17 from weakref import WeakKeyDictionary
18
19 from ._compat import int_to_byte
20 from ._compat import integer_types
21 from ._compat import iter_bytes
22 from ._compat import range_type
23 from ._compat import text_type
24
25
26 _logger = None
27 _signature_cache = WeakKeyDictionary()
28 _epoch_ord = date(1970, 1, 1).toordinal()
29 _cookie_params = {
30 b"expires",
31 b"path",
32 b"comment",
33 b"max-age",
34 b"secure",
35 b"httponly",
36 b"version",
37 }
38 _legal_cookie_chars = (
39 string.ascii_letters + string.digits + u"/=!#$%&'*+-.^_`|~:"
40 ).encode("ascii")
41
42 _cookie_quoting_map = {b",": b"\\054", b";": b"\\073", b'"': b'\\"', b"\\": b"\\\\"}
43 for _i in chain(range_type(32), range_type(127, 256)):
44 _cookie_quoting_map[int_to_byte(_i)] = ("\\%03o" % _i).encode("latin1")
45
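The loop above maps every byte outside the printable ASCII range to a three-digit octal escape for cookie values, alongside the explicit entries for `,`, `;`, `"`, and `\`. One iteration of that escape in isolation:

```python
# The same escape the loop builds: byte 10 ("\n") -> b"\\012"
i = 10
escaped = ("\\%03o" % i).encode("latin1")
print(escaped)  # b'\\012'
```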
46 _octal_re = re.compile(br"\\[0-3][0-7][0-7]")
47 _quote_re = re.compile(br"[\\].")
48 _legal_cookie_chars_re = br"[\w\d!#%&\'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\=]"
49 _cookie_re = re.compile(
50 br"""
51 (?P<key>[^=;]+)
52 (?:\s*=\s*
53 (?P<val>
54 "(?:[^\\"]|\\.)*" |
55 (?:.*?)
56 )
57 )?
58 \s*;
59 """,
60 flags=re.VERBOSE,
61 )
62
63
64 class _Missing(object):
65 def __repr__(self):
66 return "no value"
67
68 def __reduce__(self):
69 return "_missing"
70
71
72 _missing = _Missing()
73
74
75 def _get_environ(obj):
76 env = getattr(obj, "environ", obj)
77 assert isinstance(env, dict), (
78 "%r is not a WSGI environment (has to be a dict)" % type(obj).__name__
79 )
80 return env
81
82
83 def _has_level_handler(logger):
84 """Check if there is a handler in the logging chain that will handle
85 the given logger's effective level.
86 """
87 level = logger.getEffectiveLevel()
88 current = logger
89
90 while current:
91 if any(handler.level <= level for handler in current.handlers):
92 return True
93
94 if not current.propagate:
95 break
96
97 current = current.parent
98
99 return False
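The handler walk above can be exercised standalone against the stdlib `logging` hierarchy. This is a minimal sketch; `has_level_handler` is an illustrative name mirroring the private helper:

```python
import logging

def has_level_handler(logger):
    """Return True if a handler anywhere up the propagation chain
    would emit records at the logger's effective level."""
    level = logger.getEffectiveLevel()
    current = logger
    while current:
        # A handler with level NOTSET (0) accepts everything.
        if any(handler.level <= level for handler in current.handlers):
            return True
        if not current.propagate:
            break
        current = current.parent
    return False
```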
100
101
102 def _log(type, message, *args, **kwargs):
103 """Log a message to the 'werkzeug' logger.
104
105 The logger is created the first time it is needed. If there is no
106 level set, it is set to :data:`logging.INFO`. If there is no handler
107 for the logger's effective level, a :class:`logging.StreamHandler`
108 is added.
109 """
110 global _logger
111
112 if _logger is None:
113 _logger = logging.getLogger("werkzeug")
114
115 if _logger.level == logging.NOTSET:
116 _logger.setLevel(logging.INFO)
117
118 if not _has_level_handler(_logger):
119 _logger.addHandler(logging.StreamHandler())
120
121 getattr(_logger, type)(message.rstrip(), *args, **kwargs)
122
123
124 def _parse_signature(func):
125 """Return a signature object for the function."""
126 if hasattr(func, "im_func"):
127 func = func.im_func
128
129 # if we have a cached validator for this function, return it
130 parse = _signature_cache.get(func)
131 if parse is not None:
132 return parse
133
134 # inspect the function signature and collect all the information
135 if hasattr(inspect, "getfullargspec"):
136 tup = inspect.getfullargspec(func)
137 else:
138 tup = inspect.getargspec(func)
139 positional, vararg_var, kwarg_var, defaults = tup[:4]
140 defaults = defaults or ()
141 arg_count = len(positional)
142 arguments = []
143 for idx, name in enumerate(positional):
144 if isinstance(name, list):
145 raise TypeError(
146 "cannot parse functions that unpack tuples in the function signature"
147 )
148 try:
149 default = defaults[idx - arg_count]
150 except IndexError:
151 param = (name, False, None)
152 else:
153 param = (name, True, default)
154 arguments.append(param)
155 arguments = tuple(arguments)
156
157 def parse(args, kwargs):
158 new_args = []
159 missing = []
160 extra = {}
161
162 # consume as many arguments as positional as possible
163 for idx, (name, has_default, default) in enumerate(arguments):
164 try:
165 new_args.append(args[idx])
166 except IndexError:
167 try:
168 new_args.append(kwargs.pop(name))
169 except KeyError:
170 if has_default:
171 new_args.append(default)
172 else:
173 missing.append(name)
174 else:
175 if name in kwargs:
176 extra[name] = kwargs.pop(name)
177
178 # handle extra arguments
179 extra_positional = args[arg_count:]
180 if vararg_var is not None:
181 new_args.extend(extra_positional)
182 extra_positional = ()
183 if kwargs and kwarg_var is None:
184 extra.update(kwargs)
185 kwargs = {}
186
187 return (
188 new_args,
189 kwargs,
190 missing,
191 extra,
192 extra_positional,
193 arguments,
194 vararg_var,
195 kwarg_var,
196 )
197
198 _signature_cache[func] = parse
199 return parse
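On Python 3 the same bookkeeping can be approximated with `inspect.signature`, which binds a call against a function's parameters. `split_call` below is a hypothetical helper sketching the idea, not part of Werkzeug:

```python
import inspect

def split_call(func, args, kwargs):
    """Bind a call against func's signature, returning the filled-in
    arguments, or an error message if the call would not succeed."""
    sig = inspect.signature(func)
    try:
        bound = sig.bind(*args, **kwargs)
    except TypeError as exc:
        return None, str(exc)
    # Fill in defaults for parameters the caller did not supply.
    bound.apply_defaults()
    return dict(bound.arguments), None
```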
200
201
202 def _date_to_unix(arg):
203 """Converts a timetuple, integer or datetime object into the seconds from
204 epoch in utc.
205 """
206 if isinstance(arg, datetime):
207 arg = arg.utctimetuple()
208 elif isinstance(arg, integer_types + (float,)):
209 return int(arg)
210 year, month, day, hour, minute, second = arg[:6]
211 days = date(year, month, 1).toordinal() - _epoch_ord + day - 1
212 hours = days * 24 + hour
213 minutes = hours * 60 + minute
214 seconds = minutes * 60 + second
215 return seconds
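The arithmetic can be checked against `calendar.timegm`; this standalone sketch mirrors the helper above:

```python
from datetime import date, datetime

_EPOCH_ORD = date(1970, 1, 1).toordinal()

def date_to_unix(arg):
    """Convert a datetime, timetuple, or number into whole seconds
    since the Unix epoch, treating naive input as UTC."""
    if isinstance(arg, datetime):
        arg = arg.utctimetuple()
    elif isinstance(arg, (int, float)):
        return int(arg)
    year, month, day, hour, minute, second = arg[:6]
    # Days since the epoch for the first of the month, plus day offset.
    days = date(year, month, 1).toordinal() - _EPOCH_ORD + day - 1
    return ((days * 24 + hour) * 60 + minute) * 60 + second
```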
216
217
218 class _DictAccessorProperty(object):
219 """Baseclass for `environ_property` and `header_property`."""
220
221 read_only = False
222
223 def __init__(
224 self,
225 name,
226 default=None,
227 load_func=None,
228 dump_func=None,
229 read_only=None,
230 doc=None,
231 ):
232 self.name = name
233 self.default = default
234 self.load_func = load_func
235 self.dump_func = dump_func
236 if read_only is not None:
237 self.read_only = read_only
238 self.__doc__ = doc
239
240 def __get__(self, obj, type=None):
241 if obj is None:
242 return self
243 storage = self.lookup(obj)
244 if self.name not in storage:
245 return self.default
246 rv = storage[self.name]
247 if self.load_func is not None:
248 try:
249 rv = self.load_func(rv)
250 except (ValueError, TypeError):
251 rv = self.default
252 return rv
253
254 def __set__(self, obj, value):
255 if self.read_only:
256 raise AttributeError("read only property")
257 if self.dump_func is not None:
258 value = self.dump_func(value)
259 self.lookup(obj)[self.name] = value
260
261 def __delete__(self, obj):
262 if self.read_only:
263 raise AttributeError("read only property")
264 self.lookup(obj).pop(self.name, None)
265
266 def __repr__(self):
267 return "<%s %s>" % (self.__class__.__name__, self.name)
268
269
270 def _cookie_quote(b):
271 buf = bytearray()
272 all_legal = True
273 _lookup = _cookie_quoting_map.get
274 _push = buf.extend
275
276 for char in iter_bytes(b):
277 if char not in _legal_cookie_chars:
278 all_legal = False
279 char = _lookup(char, char)
280 _push(char)
281
282 if all_legal:
283 return bytes(buf)
284 return bytes(b'"' + buf + b'"')
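The quoting rule can be sketched compactly: legal bytes pass through, everything else is octal-escaped, and if anything was escaped the whole value is wrapped in double quotes (`cookie_quote` is an illustrative name):

```python
import string

_LEGAL = (string.ascii_letters + string.digits + "/=!#$%&'*+-.^_`|~:").encode("ascii")
_QUOTE_MAP = {b",": b"\\054", b";": b"\\073", b'"': b'\\"', b"\\": b"\\\\"}

def cookie_quote(value):
    """Escape bytes that are not legal in a cookie value; if anything
    was escaped, wrap the result in double quotes."""
    buf = bytearray()
    all_legal = True
    for i in range(len(value)):
        ch = value[i:i + 1]
        if ch[0] not in _LEGAL:
            all_legal = False
            # Known separators use fixed escapes, the rest octal.
            ch = _QUOTE_MAP.get(ch, b"\\%03o" % ch[0])
        buf.extend(ch)
    return bytes(buf) if all_legal else b'"' + bytes(buf) + b'"'
```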
285
286
287 def _cookie_unquote(b):
288 if len(b) < 2:
289 return b
290 if b[:1] != b'"' or b[-1:] != b'"':
291 return b
292
293 b = b[1:-1]
294
295 i = 0
296 n = len(b)
297 rv = bytearray()
298 _push = rv.extend
299
300 while 0 <= i < n:
301 o_match = _octal_re.search(b, i)
302 q_match = _quote_re.search(b, i)
303 if not o_match and not q_match:
304 rv.extend(b[i:])
305 break
306 j = k = -1
307 if o_match:
308 j = o_match.start(0)
309 if q_match:
310 k = q_match.start(0)
311 if q_match and (not o_match or k < j):
312 _push(b[i:k])
313 _push(b[k + 1 : k + 2])
314 i = k + 2
315 else:
316 _push(b[i:j])
317 rv.append(int(b[j + 1 : j + 4], 8))
318 i = j + 4
319
320 return bytes(rv)
321
322
323 def _cookie_parse_impl(b):
324 """Lowlevel cookie parsing facility that operates on bytes."""
325 i = 0
326 n = len(b)
327
328 while i < n:
329 match = _cookie_re.search(b + b";", i)
330 if not match:
331 break
332
333 key = match.group("key").strip()
334 value = match.group("val") or b""
335 i = match.end(0)
336
337 # Ignore parameters. We have no interest in them.
338 if key.lower() not in _cookie_params:
339 yield _cookie_unquote(key), _cookie_unquote(value)
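The parser's behaviour can be sketched with a simplified regex that repeatedly matches `key=value;` pairs and skips reserved attribute names. This is a standalone sketch with a trimmed parameter set, not the exact Werkzeug regex:

```python
import re

_PARAMS = {b"path", b"expires", b"max-age", b"secure", b"httponly"}
_PAIR_RE = re.compile(rb'(?P<key>[^=;]+)(?:\s*=\s*(?P<val>"(?:[^\\"]|\\.)*"|.*?))?\s*;')

def parse_cookie_pairs(b):
    """Yield (key, value) byte pairs, ignoring cookie parameters."""
    i = 0
    while i < len(b):
        # A trailing ";" is appended so the last pair also matches.
        match = _PAIR_RE.search(b + b";", i)
        if not match:
            break
        key = match.group("key").strip()
        value = match.group("val") or b""
        i = match.end(0)
        if key.lower() not in _PARAMS:
            yield key, value
```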
340
341
342 def _encode_idna(domain):
343 # If we're given bytes, make sure they fit into ASCII
344 if not isinstance(domain, text_type):
345 domain.decode("ascii")
346 return domain
347
348 # Otherwise check if it's already ascii, then return
349 try:
350 return domain.encode("ascii")
351 except UnicodeError:
352 pass
353
354 # Otherwise encode each part separately
355 parts = domain.split(".")
356 for idx, part in enumerate(parts):
357 parts[idx] = part.encode("idna")
358 return b".".join(parts)
359
360
361 def _decode_idna(domain):
362 # If the input is a string, try to encode it to ASCII to
363 # do the IDNA decoding. If that fails because of a
364 # Unicode error, we already have a decoded IDNA domain.
365 if isinstance(domain, text_type):
366 try:
367 domain = domain.encode("ascii")
368 except UnicodeError:
369 return domain
370
371 # Decode each part separately. If a part fails, try to
372 # decode it with ASCII and silently ignore errors. This makes
373 # the most sense because the IDNA codec does not support
374 # error handling.
374 parts = domain.split(b".")
375 for idx, part in enumerate(parts):
376 try:
377 parts[idx] = part.decode("idna")
378 except UnicodeError:
379 parts[idx] = part.decode("ascii", "ignore")
380
381 return ".".join(parts)
382
383
384 def _make_cookie_domain(domain):
385 if domain is None:
386 return None
387 domain = _encode_idna(domain)
388 if b":" in domain:
389 domain = domain.split(b":", 1)[0]
390 if b"." in domain:
391 return domain
392 raise ValueError(
393 "Setting 'domain' for a cookie on a server running locally (ex: "
394 "localhost) is not supported by complying browsers. You should "
395 "have something like: '127.0.0.1 localhost dev.localhost' on "
396 "your hosts file and then point your server to run on "
397 "'dev.localhost' and also set 'domain' for 'dev.localhost'"
398 )
399
400
401 def _easteregg(app=None):
402 """Like the name says. But who knows how it works?"""
403
404 def bzzzzzzz(gyver):
405 import base64
406 import zlib
407
408 return zlib.decompress(base64.b64decode(gyver)).decode("ascii")
409
410 gyver = u"\n".join(
411 [
412 x + (77 - len(x)) * u" "
413 for x in bzzzzzzz(
414 b"""
415 eJyFlzuOJDkMRP06xRjymKgDJCDQStBYT8BCgK4gTwfQ2fcFs2a2FzvZk+hvlcRvRJD148efHt9m
416 9Xz94dRY5hGt1nrYcXx7us9qlcP9HHNh28rz8dZj+q4rynVFFPdlY4zH873NKCexrDM6zxxRymzz
417 4QIxzK4bth1PV7+uHn6WXZ5C4ka/+prFzx3zWLMHAVZb8RRUxtFXI5DTQ2n3Hi2sNI+HK43AOWSY
418 jmEzE4naFp58PdzhPMdslLVWHTGUVpSxImw+pS/D+JhzLfdS1j7PzUMxij+mc2U0I9zcbZ/HcZxc
419 q1QjvvcThMYFnp93agEx392ZdLJWXbi/Ca4Oivl4h/Y1ErEqP+lrg7Xa4qnUKu5UE9UUA4xeqLJ5
420 jWlPKJvR2yhRI7xFPdzPuc6adXu6ovwXwRPXXnZHxlPtkSkqWHilsOrGrvcVWXgGP3daXomCj317
421 8P2UOw/NnA0OOikZyFf3zZ76eN9QXNwYdD8f8/LdBRFg0BO3bB+Pe/+G8er8tDJv83XTkj7WeMBJ
422 v/rnAfdO51d6sFglfi8U7zbnr0u9tyJHhFZNXYfH8Iafv2Oa+DT6l8u9UYlajV/hcEgk1x8E8L/r
423 XJXl2SK+GJCxtnyhVKv6GFCEB1OO3f9YWAIEbwcRWv/6RPpsEzOkXURMN37J0PoCSYeBnJQd9Giu
424 LxYQJNlYPSo/iTQwgaihbART7Fcyem2tTSCcwNCs85MOOpJtXhXDe0E7zgZJkcxWTar/zEjdIVCk
425 iXy87FW6j5aGZhttDBoAZ3vnmlkx4q4mMmCdLtnHkBXFMCReqthSGkQ+MDXLLCpXwBs0t+sIhsDI
426 tjBB8MwqYQpLygZ56rRHHpw+OAVyGgaGRHWy2QfXez+ZQQTTBkmRXdV/A9LwH6XGZpEAZU8rs4pE
427 1R4FQ3Uwt8RKEtRc0/CrANUoes3EzM6WYcFyskGZ6UTHJWenBDS7h163Eo2bpzqxNE9aVgEM2CqI
428 GAJe9Yra4P5qKmta27VjzYdR04Vc7KHeY4vs61C0nbywFmcSXYjzBHdiEjraS7PGG2jHHTpJUMxN
429 Jlxr3pUuFvlBWLJGE3GcA1/1xxLcHmlO+LAXbhrXah1tD6Ze+uqFGdZa5FM+3eHcKNaEarutAQ0A
430 QMAZHV+ve6LxAwWnXbbSXEG2DmCX5ijeLCKj5lhVFBrMm+ryOttCAeFpUdZyQLAQkA06RLs56rzG
431 8MID55vqr/g64Qr/wqwlE0TVxgoiZhHrbY2h1iuuyUVg1nlkpDrQ7Vm1xIkI5XRKLedN9EjzVchu
432 jQhXcVkjVdgP2O99QShpdvXWoSwkp5uMwyjt3jiWCqWGSiaaPAzohjPanXVLbM3x0dNskJsaCEyz
433 DTKIs+7WKJD4ZcJGfMhLFBf6hlbnNkLEePF8Cx2o2kwmYF4+MzAxa6i+6xIQkswOqGO+3x9NaZX8
434 MrZRaFZpLeVTYI9F/djY6DDVVs340nZGmwrDqTCiiqD5luj3OzwpmQCiQhdRYowUYEA3i1WWGwL4
435 GCtSoO4XbIPFeKGU13XPkDf5IdimLpAvi2kVDVQbzOOa4KAXMFlpi/hV8F6IDe0Y2reg3PuNKT3i
436 RYhZqtkQZqSB2Qm0SGtjAw7RDwaM1roESC8HWiPxkoOy0lLTRFG39kvbLZbU9gFKFRvixDZBJmpi
437 Xyq3RE5lW00EJjaqwp/v3EByMSpVZYsEIJ4APaHmVtpGSieV5CALOtNUAzTBiw81GLgC0quyzf6c
438 NlWknzJeCsJ5fup2R4d8CYGN77mu5vnO1UqbfElZ9E6cR6zbHjgsr9ly18fXjZoPeDjPuzlWbFwS
439 pdvPkhntFvkc13qb9094LL5NrA3NIq3r9eNnop9DizWOqCEbyRBFJTHn6Tt3CG1o8a4HevYh0XiJ
440 sR0AVVHuGuMOIfbuQ/OKBkGRC6NJ4u7sbPX8bG/n5sNIOQ6/Y/BX3IwRlTSabtZpYLB85lYtkkgm
441 p1qXK3Du2mnr5INXmT/78KI12n11EFBkJHHp0wJyLe9MvPNUGYsf+170maayRoy2lURGHAIapSpQ
442 krEDuNoJCHNlZYhKpvw4mspVWxqo415n8cD62N9+EfHrAvqQnINStetek7RY2Urv8nxsnGaZfRr/
443 nhXbJ6m/yl1LzYqscDZA9QHLNbdaSTTr+kFg3bC0iYbX/eQy0Bv3h4B50/SGYzKAXkCeOLI3bcAt
444 mj2Z/FM1vQWgDynsRwNvrWnJHlespkrp8+vO1jNaibm+PhqXPPv30YwDZ6jApe3wUjFQobghvW9p
445 7f2zLkGNv8b191cD/3vs9Q833z8t"""
446 ).splitlines()
447 ]
448 )
449
450 def easteregged(environ, start_response):
451 def injecting_start_response(status, headers, exc_info=None):
452 headers.append(("X-Powered-By", "Werkzeug"))
453 return start_response(status, headers, exc_info)
454
455 if app is not None and environ.get("QUERY_STRING") != "macgybarchakku":
456 return app(environ, injecting_start_response)
457 injecting_start_response("200 OK", [("Content-Type", "text/html")])
458 return [
459 (
460 u"""
461 <!DOCTYPE html>
462 <html>
463 <head>
464 <title>About Werkzeug</title>
465 <style type="text/css">
466 body { font: 15px Georgia, serif; text-align: center; }
467 a { color: #333; text-decoration: none; }
468 h1 { font-size: 30px; margin: 20px 0 10px 0; }
469 p { margin: 0 0 30px 0; }
470 pre { font: 11px 'Consolas', 'Monaco', monospace; line-height: 0.95; }
471 </style>
472 </head>
473 <body>
474 <h1><a href="http://werkzeug.pocoo.org/">Werkzeug</a></h1>
475 <p>the Swiss Army knife of Python web development.</p>
476 <pre>%s\n\n\n</pre>
477 </body>
478 </html>"""
479 % gyver
480 ).encode("latin1")
481 ]
482
483 return easteregged
0 import os
1 import subprocess
2 import sys
3 import threading
4 import time
5 from itertools import chain
6
7 from ._compat import iteritems
8 from ._compat import PY2
9 from ._compat import text_type
10 from ._internal import _log
11
12
13 def _iter_module_files():
14 """This iterates over all relevant Python files. It goes through all
15 loaded files from modules, all files in folders of already loaded modules,
16 as well as all files reachable through a package.
17 """
18 # The list call is necessary on Python 3 in case the module
19 # dictionary is modified during iteration.
20 for module in list(sys.modules.values()):
21 if module is None:
22 continue
23 filename = getattr(module, "__file__", None)
24 if filename:
25 if os.path.isdir(filename) and os.path.exists(
26 os.path.join(filename, "__init__.py")
27 ):
28 filename = os.path.join(filename, "__init__.py")
29
30 old = None
31 while not os.path.isfile(filename):
32 old = filename
33 filename = os.path.dirname(filename)
34 if filename == old:
35 break
36 else:
37 if filename[-4:] in (".pyc", ".pyo"):
38 filename = filename[:-1]
39 yield filename
40
41
42 def _find_observable_paths(extra_files=None):
43 """Finds all paths that should be observed."""
44 rv = set(
45 os.path.dirname(os.path.abspath(x)) if os.path.isfile(x) else os.path.abspath(x)
46 for x in sys.path
47 )
48
49 for filename in extra_files or ():
50 rv.add(os.path.dirname(os.path.abspath(filename)))
51
52 for module in list(sys.modules.values()):
53 fn = getattr(module, "__file__", None)
54 if fn is None:
55 continue
56 fn = os.path.abspath(fn)
57 rv.add(os.path.dirname(fn))
58
59 return _find_common_roots(rv)
60
61
62 def _get_args_for_reloading():
63 """Returns the executable. This contains a workaround for windows
64 if the executable is incorrectly reported to not have the .exe
65 extension which can cause bugs on reloading. This also contains
66 a workaround for linux where the file is executable (possibly with
67 a program other than python)
68 """
69 rv = [sys.executable]
70 py_script = os.path.abspath(sys.argv[0])
71 args = sys.argv[1:]
72 # Need to look at main module to determine how it was executed.
73 __main__ = sys.modules["__main__"]
74
75 if __main__.__package__ is None:
76 # Executed a file, like "python app.py".
77 if os.name == "nt":
78 # Windows entry points have ".exe" extension and should be
79 # called directly.
80 if not os.path.exists(py_script) and os.path.exists(py_script + ".exe"):
81 py_script += ".exe"
82
83 if (
84 os.path.splitext(rv[0])[1] == ".exe"
85 and os.path.splitext(py_script)[1] == ".exe"
86 ):
87 rv.pop(0)
88
89 elif os.path.isfile(py_script) and os.access(py_script, os.X_OK):
90 # The file is marked as executable. Nix adds a wrapper that
91 # shouldn't be called with the Python executable.
92 rv.pop(0)
93
94 rv.append(py_script)
95 else:
96 # Executed a module, like "python -m werkzeug.serving".
97 if sys.argv[0] == "-m":
98 # Flask works around previous behavior by putting
99 # "-m flask" in sys.argv.
100 # TODO remove this once Flask no longer misbehaves
101 args = sys.argv
102 else:
103 py_module = __main__.__package__
104 name = os.path.splitext(os.path.basename(py_script))[0]
105
106 if name != "__main__":
107 py_module += "." + name
108
109 rv.extend(("-m", py_module.lstrip(".")))
110
111 rv.extend(args)
112 return rv
113
114
115 def _find_common_roots(paths):
116 """Out of some paths it finds the common roots that need monitoring."""
117 paths = [x.split(os.path.sep) for x in paths]
118 root = {}
119 for chunks in sorted(paths, key=len, reverse=True):
120 node = root
121 for chunk in chunks:
122 node = node.setdefault(chunk, {})
123 node.clear()
124
125 rv = set()
126
127 def _walk(node, path):
128 for prefix, child in iteritems(node):
129 _walk(child, path + (prefix,))
130 if not node:
131 rv.add("/".join(path))
132
133 _walk(root, ())
134 return rv
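The trie pruning above can be exercised standalone (splitting on `'/'` for brevity where the real helper uses `os.path.sep`):

```python
def find_common_roots(paths):
    """Collapse a set of paths to the minimal directories covering them."""
    split = [p.split("/") for p in paths]
    root = {}
    # Insert longer paths first; clearing each path's terminal node
    # prunes anything deeper, so shorter paths subsume longer ones.
    for chunks in sorted(split, key=len, reverse=True):
        node = root
        for chunk in chunks:
            node = node.setdefault(chunk, {})
        node.clear()
    rv = set()

    def _walk(node, path):
        for prefix, child in node.items():
            _walk(child, path + (prefix,))
        if not node:
            rv.add("/".join(path))

    _walk(root, ())
    return rv
```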
135
136
137 class ReloaderLoop(object):
138 name = None
139
140 # Monkeypatched by the test suite. Wrapping with `staticmethod` is
141 # required in case time.sleep has been replaced by a non-C function
142 # (e.g. by `eventlet.monkey_patch`) before we get here.
143 _sleep = staticmethod(time.sleep)
144
145 def __init__(self, extra_files=None, interval=1):
146 self.extra_files = set(os.path.abspath(x) for x in extra_files or ())
147 self.interval = interval
148
149 def run(self):
150 pass
151
152 def restart_with_reloader(self):
153 """Spawn a new Python interpreter with the same arguments as this one,
154 but running the reloader thread.
155 """
156 while 1:
157 _log("info", " * Restarting with %s" % self.name)
158 args = _get_args_for_reloading()
159
160 # A weird bug on Windows: sometimes unicode strings end up in
161 # the environment and subprocess.call does not like this. Encode
162 # them to latin1 and continue.
163 if os.name == "nt" and PY2:
164 new_environ = {}
165 for key, value in iteritems(os.environ):
166 if isinstance(key, text_type):
167 key = key.encode("iso-8859-1")
168 if isinstance(value, text_type):
169 value = value.encode("iso-8859-1")
170 new_environ[key] = value
171 else:
172 new_environ = os.environ.copy()
173
174 new_environ["WERKZEUG_RUN_MAIN"] = "true"
175 exit_code = subprocess.call(args, env=new_environ, close_fds=False)
176 if exit_code != 3:
177 return exit_code
178
179 def trigger_reload(self, filename):
180 self.log_reload(filename)
181 sys.exit(3)
182
183 def log_reload(self, filename):
184 filename = os.path.abspath(filename)
185 _log("info", " * Detected change in %r, reloading" % filename)
186
187
188 class StatReloaderLoop(ReloaderLoop):
189 name = "stat"
190
191 def run(self):
192 mtimes = {}
193 while 1:
194 for filename in chain(_iter_module_files(), self.extra_files):
195 try:
196 mtime = os.stat(filename).st_mtime
197 except OSError:
198 continue
199
200 old_time = mtimes.get(filename)
201 if old_time is None:
202 mtimes[filename] = mtime
203 continue
204 elif mtime > old_time:
205 self.trigger_reload(filename)
206 self._sleep(self.interval)
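One polling pass of the stat strategy can be isolated like this (`changed_files` is an illustrative name): record each file's mtime on first sight, then report files whose mtime has moved forward since the previous pass:

```python
import os

def changed_files(filenames, mtimes):
    """One reloader pass: update the mtimes dict in place and return
    the files that changed since the previous pass."""
    changed = []
    for filename in filenames:
        try:
            mtime = os.stat(filename).st_mtime
        except OSError:
            # Vanished files are simply skipped, as in the reloader.
            continue
        old_time = mtimes.get(filename)
        if old_time is None:
            mtimes[filename] = mtime
        elif mtime > old_time:
            mtimes[filename] = mtime
            changed.append(filename)
    return changed
```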
207
208
209 class WatchdogReloaderLoop(ReloaderLoop):
210 def __init__(self, *args, **kwargs):
211 ReloaderLoop.__init__(self, *args, **kwargs)
212 from watchdog.observers import Observer
213 from watchdog.events import FileSystemEventHandler
214
215 self.observable_paths = set()
216
217 def _check_modification(filename):
218 if filename in self.extra_files:
219 self.trigger_reload(filename)
220 dirname = os.path.dirname(filename)
221 if dirname.startswith(tuple(self.observable_paths)):
222 if filename.endswith((".pyc", ".pyo", ".py")):
223 self.trigger_reload(filename)
224
225 class _CustomHandler(FileSystemEventHandler):
226 def on_created(self, event):
227 _check_modification(event.src_path)
228
229 def on_modified(self, event):
230 _check_modification(event.src_path)
231
232 def on_moved(self, event):
233 _check_modification(event.src_path)
234 _check_modification(event.dest_path)
235
236 def on_deleted(self, event):
237 _check_modification(event.src_path)
238
239 reloader_name = Observer.__name__.lower()
240 if reloader_name.endswith("observer"):
241 reloader_name = reloader_name[:-8]
242 reloader_name += " reloader"
243
244 self.name = reloader_name
245
246 self.observer_class = Observer
247 self.event_handler = _CustomHandler()
248 self.should_reload = False
249
250 def trigger_reload(self, filename):
251 # This is called inside an event handler, which means throwing
252 # SystemExit has no effect.
253 # https://github.com/gorakhargosh/watchdog/issues/294
254 self.should_reload = True
255 self.log_reload(filename)
256
257 def run(self):
258 watches = {}
259 observer = self.observer_class()
260 observer.start()
261
262 try:
263 while not self.should_reload:
264 to_delete = set(watches)
265 paths = _find_observable_paths(self.extra_files)
266 for path in paths:
267 if path not in watches:
268 try:
269 watches[path] = observer.schedule(
270 self.event_handler, path, recursive=True
271 )
272 except OSError:
273 # Clear this path from the list of watches. We don't
274 # want the same error message showing again in the next
275 # iteration.
276 watches[path] = None
277 to_delete.discard(path)
278 for path in to_delete:
279 watch = watches.pop(path, None)
280 if watch is not None:
281 observer.unschedule(watch)
282 self.observable_paths = paths
283 self._sleep(self.interval)
284 finally:
285 observer.stop()
286 observer.join()
287
288 sys.exit(3)
289
290
291 reloader_loops = {"stat": StatReloaderLoop, "watchdog": WatchdogReloaderLoop}
292
293 try:
294 __import__("watchdog.observers")
295 except ImportError:
296 reloader_loops["auto"] = reloader_loops["stat"]
297 else:
298 reloader_loops["auto"] = reloader_loops["watchdog"]
299
300
301 def ensure_echo_on():
302 """Ensure that echo mode is enabled. Some tools such as PDB disable
303 it which causes usability issues after reload."""
304 # tcgetattr will fail if stdin isn't a tty
305 if not sys.stdin.isatty():
306 return
307 try:
308 import termios
309 except ImportError:
310 return
311 attributes = termios.tcgetattr(sys.stdin)
312 if not attributes[3] & termios.ECHO:
313 attributes[3] |= termios.ECHO
314 termios.tcsetattr(sys.stdin, termios.TCSANOW, attributes)
315
316
317 def run_with_reloader(main_func, extra_files=None, interval=1, reloader_type="auto"):
318 """Run the given function in an independent python interpreter."""
319 import signal
320
321 reloader = reloader_loops[reloader_type](extra_files, interval)
322 signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
323 try:
324 if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
325 ensure_echo_on()
326 t = threading.Thread(target=main_func, args=())
327 t.setDaemon(True)
328 t.start()
329 reloader.run()
330 else:
331 sys.exit(reloader.restart_with_reloader())
332 except KeyboardInterrupt:
333 pass
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib
3 ~~~~~~~~~~~~~~~~
4
5 Contains user-submitted code that other users may find useful, but which
6 is not part of the Werkzeug core. Anyone can write code for inclusion in
7 the `contrib` package. All modules in this package are distributed as an
8 add-on library and thus are not part of Werkzeug itself.
9
10 This file itself is mostly for informational purposes and to tell the
11 Python interpreter that `contrib` is a package.
12
13 :copyright: 2007 Pallets
14 :license: BSD-3-Clause
15 """
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.atom
3 ~~~~~~~~~~~~~~~~~~~~~
4
5 This module provides a class called :class:`AtomFeed` which can be
6 used to generate feeds in the Atom syndication format (see :rfc:`4287`).
7
8 Example::
9
10 def atom_feed(request):
11 feed = AtomFeed("My Blog", feed_url=request.url,
12 url=request.host_url,
13 subtitle="My example blog for a feed test.")
14 for post in Post.query.limit(10).all():
15 feed.add(post.title, post.body, content_type='html',
16 author=post.author, url=post.url, id=post.uid,
17 updated=post.last_update, published=post.pub_date)
18 return feed.get_response()
19
20 :copyright: 2007 Pallets
21 :license: BSD-3-Clause
22 """
23 import warnings
24 from datetime import datetime
25
26 from .._compat import implements_to_string
27 from .._compat import string_types
28 from ..utils import escape
29 from ..wrappers import BaseResponse
30
31 warnings.warn(
32 "'werkzeug.contrib.atom' is deprecated as of version 0.15 and will"
33 " be removed in version 1.0.",
34 DeprecationWarning,
35 stacklevel=2,
36 )
37
38 XHTML_NAMESPACE = "http://www.w3.org/1999/xhtml"
39
40
41 def _make_text_block(name, content, content_type=None):
42 """Helper function for the builder that creates an XML text block."""
43 if content_type == "xhtml":
44 return u'<%s type="xhtml"><div xmlns="%s">%s</div></%s>\n' % (
45 name,
46 XHTML_NAMESPACE,
47 content,
48 name,
49 )
50 if not content_type:
51 return u"<%s>%s</%s>\n" % (name, escape(content), name)
52 return u'<%s type="%s">%s</%s>\n' % (name, content_type, escape(content), name)
53
54
55 def format_iso8601(obj):
56 """Format a datetime object for iso8601"""
57 iso8601 = obj.isoformat()
58 if obj.tzinfo:
59 return iso8601
60 return iso8601 + "Z"
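The timezone handling is easy to verify with the stdlib: naive datetimes get an explicit `"Z"` suffix, while aware ones keep their own offset from `isoformat()`. `format_naive_utc` is a hypothetical stand-in for the helper above:

```python
from datetime import datetime, timezone

def format_naive_utc(obj):
    """Append "Z" only when the datetime carries no tzinfo of its own;
    aware datetimes already include their UTC offset."""
    iso8601 = obj.isoformat()
    return iso8601 if obj.tzinfo else iso8601 + "Z"
```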
61
62
63 @implements_to_string
64 class AtomFeed(object):
65
66 """A helper class that creates Atom feeds.
67
68 :param title: the title of the feed. Required.
69 :param title_type: the type attribute for the title element. One of
70 ``'html'``, ``'text'`` or ``'xhtml'``.
71 :param url: the url for the feed (not the url *of* the feed)
72 :param id: a globally unique id for the feed. Must be a URI. If
73 not present, the `feed_url` is used, but one of the
74 two is required.
75 :param updated: the time the feed was last modified. Must be a
76 :class:`datetime.datetime` object. If not present,
77 the latest entry's `updated` is used. Treated as
78 UTC if the datetime is naive.
79 :param feed_url: the URL to the feed. Should be the URL that was
80 requested.
81 :param author: the author of the feed. Must be either a string (the
82 name) or a dict with name (required) and uri or
83 email (both optional). Can also be a list of
84 (possibly mixed) strings and dicts if there are
85 multiple authors. Required if not every entry has an
86 author element.
87 :param icon: an icon for the feed.
88 :param logo: a logo for the feed.
89 :param rights: copyright information for the feed.
90 :param rights_type: the type attribute for the rights element. One of
91 ``'html'``, ``'text'`` or ``'xhtml'``. Default is
92 ``'text'``.
93 :param subtitle: a short description of the feed.
94 :param subtitle_type: the type attribute for the subtitle element.
95 One of ``'text'``, ``'html'``
96 or ``'xhtml'``. Default is ``'text'``.
97 :param links: additional links. Must be a list of dictionaries with
98 href (required) and rel, type, hreflang, title, length
99 (all optional)
100 :param generator: the software that generated this feed. This must be
101 a tuple in the form ``(name, url, version)``. If
102 you don't want to specify one of them, set the item
103 to `None`.
104 :param entries: a list with the entries for the feed. Entries can also
105 be added later with :meth:`add`.
106
107 For more information on the elements see
108 http://www.atomenabled.org/developers/syndication/
109
110 Wherever a list is expected, any iterable can be used.
111 """
112
113 default_generator = ("Werkzeug", None, None)
114
115 def __init__(self, title=None, entries=None, **kwargs):
116 self.title = title
117 self.title_type = kwargs.get("title_type", "text")
118 self.url = kwargs.get("url")
119 self.feed_url = kwargs.get("feed_url", self.url)
120 self.id = kwargs.get("id", self.feed_url)
121 self.updated = kwargs.get("updated")
122 self.author = kwargs.get("author", ())
123 self.icon = kwargs.get("icon")
124 self.logo = kwargs.get("logo")
125 self.rights = kwargs.get("rights")
126 self.rights_type = kwargs.get("rights_type")
127 self.subtitle = kwargs.get("subtitle")
128 self.subtitle_type = kwargs.get("subtitle_type", "text")
129 self.generator = kwargs.get("generator")
130 if self.generator is None:
131 self.generator = self.default_generator
132 self.links = kwargs.get("links", [])
133 self.entries = list(entries) if entries else []
134
135 if not hasattr(self.author, "__iter__") or isinstance(
136 self.author, string_types + (dict,)
137 ):
138 self.author = [self.author]
139 for i, author in enumerate(self.author):
140 if not isinstance(author, dict):
141 self.author[i] = {"name": author}
142
143 if not self.title:
144 raise ValueError("title is required")
145 if not self.id:
146 raise ValueError("id is required")
147 for author in self.author:
148 if "name" not in author:
149 raise TypeError("author must contain at least a name")
150
151 def add(self, *args, **kwargs):
152 """Add a new entry to the feed. This function can either be called
153 with a :class:`FeedEntry` or some keyword and positional arguments
154 that are forwarded to the :class:`FeedEntry` constructor.
155 """
156 if len(args) == 1 and not kwargs and isinstance(args[0], FeedEntry):
157 self.entries.append(args[0])
158 else:
159 kwargs["feed_url"] = self.feed_url
160 self.entries.append(FeedEntry(*args, **kwargs))
161
162 def __repr__(self):
163 return "<%s %r (%d entries)>" % (
164 self.__class__.__name__,
165 self.title,
166 len(self.entries),
167 )
168
169 def generate(self):
170 """Return a generator that yields pieces of XML."""
171 # atom demands either an author element in every entry or a global one
172 if not self.author:
173 if any(not e.author for e in self.entries):
174 self.author = ({"name": "Unknown author"},)
175
176 if not self.updated:
177 dates = sorted([entry.updated for entry in self.entries])
178 self.updated = dates[-1] if dates else datetime.utcnow()
179
180 yield u'<?xml version="1.0" encoding="utf-8"?>\n'
181 yield u'<feed xmlns="http://www.w3.org/2005/Atom">\n'
182 yield " " + _make_text_block("title", self.title, self.title_type)
183 yield u" <id>%s</id>\n" % escape(self.id)
184 yield u" <updated>%s</updated>\n" % format_iso8601(self.updated)
185 if self.url:
186 yield u' <link href="%s" />\n' % escape(self.url)
187 if self.feed_url:
188 yield u' <link href="%s" rel="self" />\n' % escape(self.feed_url)
189 for link in self.links:
190 yield u" <link %s/>\n" % "".join(
191 '%s="%s" ' % (k, escape(link[k])) for k in link
192 )
193 for author in self.author:
194 yield u" <author>\n"
195 yield u" <name>%s</name>\n" % escape(author["name"])
196 if "uri" in author:
197 yield u" <uri>%s</uri>\n" % escape(author["uri"])
198 if "email" in author:
199 yield " <email>%s</email>\n" % escape(author["email"])
200 yield " </author>\n"
201 if self.subtitle:
202 yield " " + _make_text_block("subtitle", self.subtitle, self.subtitle_type)
203 if self.icon:
204 yield u" <icon>%s</icon>\n" % escape(self.icon)
205 if self.logo:
206 yield u" <logo>%s</logo>\n" % escape(self.logo)
207 if self.rights:
208 yield " " + _make_text_block("rights", self.rights, self.rights_type)
209 generator_name, generator_url, generator_version = self.generator
210 if generator_name or generator_url or generator_version:
211 tmp = [u" <generator"]
212 if generator_url:
213 tmp.append(u' uri="%s"' % escape(generator_url))
214 if generator_version:
215 tmp.append(u' version="%s"' % escape(generator_version))
216 tmp.append(u">%s</generator>\n" % escape(generator_name))
217 yield u"".join(tmp)
218 for entry in self.entries:
219 for line in entry.generate():
220 yield u" " + line
221 yield u"</feed>\n"
222
223 def to_string(self):
224 """Convert the feed into a string."""
225 return u"".join(self.generate())
226
227 def get_response(self):
228 """Return a response object for the feed."""
229 return BaseResponse(self.to_string(), mimetype="application/atom+xml")
230
231 def __call__(self, environ, start_response):
232 """Use the class as WSGI response object."""
233 return self.get_response()(environ, start_response)
234
235 def __str__(self):
236 return self.to_string()
237
238
239 @implements_to_string
240 class FeedEntry(object):
241
242 """Represents a single entry in a feed.
243
244 :param title: the title of the entry. Required.
245 :param title_type: the type attribute for the title element. One of
246 ``'html'``, ``'text'`` or ``'xhtml'``.
247 :param content: the content of the entry.
248 :param content_type: the type attribute for the content element. One
249 of ``'html'``, ``'text'`` or ``'xhtml'``.
250 :param summary: a summary of the entry's content.
251 :param summary_type: the type attribute for the summary element. One
252 of ``'html'``, ``'text'`` or ``'xhtml'``.
253 :param url: the url for the entry.
254 :param id: a globally unique id for the entry. Must be a URI. If not
255 present, the URL is used, but one of the two is required.
256 :param updated: the time the entry was last modified. Must be a
257 :class:`datetime.datetime` object. Treated as UTC
258 if the datetime is naive. Required.
259 :param author: the author of the entry. Must be either a string (the
260 name) or a dict with name (required) and uri or
261 email (both optional). Can also be a list of
262 (possibly mixed) strings and dicts if there are
263 multiple authors. Required if the feed does not have an
264 author element.
265 :param published: the time the entry was initially published. Must
266 be a :class:`datetime.datetime` object. Treated as
267 UTC if naive datetime.
268 :param rights: copyright information for the entry.
269 :param rights_type: the type attribute for the rights element. One of
270 ``'html'``, ``'text'`` or ``'xhtml'``. Default is
271 ``'text'``.
272 :param links: additional links. Must be a list of dictionaries with
273 href (required) and rel, type, hreflang, title, length
274 (all optional).
275 :param categories: categories for the entry. Must be a list of dictionaries
276 with term (required), scheme and label (all optional).
277 :param xml_base: The xml base (url) for this feed item. If not provided
278 it will default to the item url.
279
280 For more information on the elements see
281 http://www.atomenabled.org/developers/syndication/
282
283 Everywhere where a list is demanded, any iterable can be used.
284 """
285
286 def __init__(self, title=None, content=None, feed_url=None, **kwargs):
287 self.title = title
288 self.title_type = kwargs.get("title_type", "text")
289 self.content = content
290 self.content_type = kwargs.get("content_type", "html")
291 self.url = kwargs.get("url")
292 self.id = kwargs.get("id", self.url)
293 self.updated = kwargs.get("updated")
294 self.summary = kwargs.get("summary")
295 self.summary_type = kwargs.get("summary_type", "html")
296 self.author = kwargs.get("author", ())
297 self.published = kwargs.get("published")
298 self.rights = kwargs.get("rights")
299 self.links = kwargs.get("links", [])
300 self.categories = kwargs.get("categories", [])
301 self.xml_base = kwargs.get("xml_base", feed_url)
302
303 if not hasattr(self.author, "__iter__") or isinstance(
304 self.author, string_types + (dict,)
305 ):
306 self.author = [self.author]
307 for i, author in enumerate(self.author):
308 if not isinstance(author, dict):
309 self.author[i] = {"name": author}
310
311 if not self.title:
312 raise ValueError("title is required")
313 if not self.id:
314 raise ValueError("id is required")
315 if not self.updated:
316 raise ValueError("updated is required")
317
318 def __repr__(self):
319 return "<%s %r>" % (self.__class__.__name__, self.title)
320
321 def generate(self):
322 """Yields pieces of ATOM XML."""
323 base = ""
324 if self.xml_base:
325 base = ' xml:base="%s"' % escape(self.xml_base)
326 yield u"<entry%s>\n" % base
327 yield u" " + _make_text_block("title", self.title, self.title_type)
328 yield u" <id>%s</id>\n" % escape(self.id)
329 yield u" <updated>%s</updated>\n" % format_iso8601(self.updated)
330 if self.published:
331 yield u" <published>%s</published>\n" % format_iso8601(self.published)
332 if self.url:
333 yield u' <link href="%s" />\n' % escape(self.url)
334 for author in self.author:
335 yield u" <author>\n"
336 yield u" <name>%s</name>\n" % escape(author["name"])
337 if "uri" in author:
338 yield u" <uri>%s</uri>\n" % escape(author["uri"])
339 if "email" in author:
340 yield u" <email>%s</email>\n" % escape(author["email"])
341 yield u" </author>\n"
342 for link in self.links:
343 yield u" <link %s/>\n" % "".join(
344 '%s="%s" ' % (k, escape(link[k])) for k in link
345 )
346 for category in self.categories:
347 yield u" <category %s/>\n" % "".join(
348 '%s="%s" ' % (k, escape(category[k])) for k in category
349 )
350 if self.summary:
351 yield u" " + _make_text_block("summary", self.summary, self.summary_type)
352 if self.content:
353 yield u" " + _make_text_block("content", self.content, self.content_type)
354 yield u"</entry>\n"
355
356 def to_string(self):
357 """Convert the feed item into a unicode object."""
358 return u"".join(self.generate())
359
360 def __str__(self):
361 return self.to_string()
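The generate()/to_string() pair above follows a simple streaming pattern: yield small, already-escaped XML fragments and only join them when a whole string is needed, so large feeds can be served incrementally over WSGI. A stdlib-only sketch of the same idea (the helper names here are illustrative, not part of Werkzeug):

```python
from xml.sax.saxutils import escape


def generate_entry(title, entry_id, updated):
    # Yield the entry piece by piece, mirroring FeedEntry.generate();
    # a WSGI server can stream these chunks without buffering the feed.
    yield u"<entry>\n"
    yield u"  <title>%s</title>\n" % escape(title)
    yield u"  <id>%s</id>\n" % escape(entry_id)
    yield u"  <updated>%s</updated>\n" % escape(updated)
    yield u"</entry>\n"


def entry_to_string(title, entry_id, updated):
    # Equivalent of to_string(): join the generated pieces.
    return u"".join(generate_entry(title, entry_id, updated))
```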
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.cache
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 The main problem with dynamic Web sites is, well, they're dynamic. Each
6 time a user requests a page, the webserver executes a lot of code, queries
7 the database and renders templates until the visitor gets the final page.
8
9 This is a lot more expensive than just loading a file from the file system
10 and sending it to the visitor.
11
12 For most Web applications this overhead isn't a big deal, but once it
13 becomes one, you will be glad to have a cache system in place.
14
15 How Caching Works
16 =================
17
18 Caching is pretty simple. Basically you have a cache object lurking around
19 somewhere that is connected to a remote cache or the file system or
20 something else. When a request comes in you check if the current page
21 is already in the cache and, if so, you return it from the cache.
22 Otherwise you generate the page and put it into the cache. (Or a fragment
23 of the page; you don't have to cache the full thing.)
24
25 Here is a simple example of how to cache a sidebar for 5 minutes::
26
27 def get_sidebar(user):
28 identifier = 'sidebar_for/user%d' % user.id
29 value = cache.get(identifier)
30 if value is not None:
31 return value
32 value = generate_sidebar_for(user=user)
33 cache.set(identifier, value, timeout=60 * 5)
34 return value
35
36 Creating a Cache Object
37 =======================
38
39 To create a cache object you just import the cache system of your choice
40 from the cache module and instantiate it. Then you can start working
41 with that object:
42
43 >>> from werkzeug.contrib.cache import SimpleCache
44 >>> c = SimpleCache()
45 >>> c.set("foo", "value")
46 >>> c.get("foo")
47 'value'
48 >>> c.get("missing") is None
49 True
50
51 Please keep in mind that you have to create the cache and put it somewhere
52 you can access it (either as a module global you can import or as an
53 attribute of your WSGI application).
54
55 :copyright: 2007 Pallets
56 :license: BSD-3-Clause
57 """
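The sidebar example in the docstring can be made concrete with a stdlib-only sketch of the same cache-aside pattern; the dict-backed helpers below are illustrative stand-ins for a real cache object, not the SimpleCache implementation:

```python
import time

_store = {}  # key -> (expires_at, value); an expires_at of 0 means "never"


def cache_get(key):
    item = _store.get(key)
    if item is None:
        return None
    expires, value = item
    if expires != 0 and expires <= time.time():
        del _store[key]  # lazily drop expired entries
        return None
    return value


def cache_set(key, value, timeout=300):
    expires = 0 if timeout == 0 else time.time() + timeout
    _store[key] = (expires, value)


def get_sidebar(user_id, generate):
    # Cache-aside: try the cache first, fall back to generating.
    identifier = "sidebar_for/user%d" % user_id
    value = cache_get(identifier)
    if value is None:
        value = generate(user_id)
        cache_set(identifier, value, timeout=60 * 5)
    return value
```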
58 import errno
59 import os
60 import platform
61 import re
62 import tempfile
63 import warnings
64 from hashlib import md5
65 from time import time
66
67 from .._compat import integer_types
68 from .._compat import iteritems
69 from .._compat import string_types
70 from .._compat import text_type
71 from .._compat import to_native
72 from ..posixemulation import rename
73
74 try:
75 import cPickle as pickle
76 except ImportError: # pragma: no cover
77 import pickle
78
79 warnings.warn(
80 "'werkzeug.contrib.cache' is deprecated as of version 0.15 and will"
81 " be removed in version 1.0. It has moved to https://github.com"
82 "/pallets/cachelib.",
83 DeprecationWarning,
84 stacklevel=2,
85 )
86
87
88 def _items(mappingorseq):
89 """Wrapper for efficient iteration over mappings represented by dicts
90 or sequences::
91
92 >>> for k, v in _items((i, i*i) for i in xrange(5)):
93 ... assert k*k == v
94
95 >>> for k, v in _items(dict((i, i*i) for i in xrange(5))):
96 ... assert k*k == v
97
98 """
99 if hasattr(mappingorseq, "items"):
100 return iteritems(mappingorseq)
101 return mappingorseq
102
103
104 class BaseCache(object):
105 """Baseclass for the cache systems. All the cache systems implement this
106 API or a superset of it.
107
108 :param default_timeout: the default timeout (in seconds) that is used if
109 no timeout is specified on :meth:`set`. A timeout
110 of 0 indicates that the cache never expires.
111 """
112
113 def __init__(self, default_timeout=300):
114 self.default_timeout = default_timeout
115
116 def _normalize_timeout(self, timeout):
117 if timeout is None:
118 timeout = self.default_timeout
119 return timeout
120
121 def get(self, key):
122 """Look up key in the cache and return the value for it.
123
124 :param key: the key to be looked up.
125 :returns: The value if it exists and is readable, else ``None``.
126 """
127 return None
128
129 def delete(self, key):
130 """Delete `key` from the cache.
131
132 :param key: the key to delete.
133 :returns: Whether the key existed and has been deleted.
134 :rtype: boolean
135 """
136 return True
137
138 def get_many(self, *keys):
139 """Returns a list of values for the given keys.
140 For each key an item in the list is created::
141
142 foo, bar = cache.get_many("foo", "bar")
143
144 Has the same error handling as :meth:`get`.
145
146 :param keys: The function accepts multiple keys as positional
147 arguments.
148 """
149 return [self.get(k) for k in keys]
150
151 def get_dict(self, *keys):
152 """Like :meth:`get_many` but return a dict::
153
154 d = cache.get_dict("foo", "bar")
155 foo = d["foo"]
156 bar = d["bar"]
157
158 :param keys: The function accepts multiple keys as positional
159 arguments.
160 """
161 return dict(zip(keys, self.get_many(*keys)))
162
163 def set(self, key, value, timeout=None):
164 """Add a new key/value to the cache (overwrites the value if the key
165 already exists in the cache).
166
167 :param key: the key to set
168 :param value: the value for the key
169 :param timeout: the cache timeout for the key in seconds (if not
170 specified, it uses the default timeout). A timeout of
171 0 indicates that the cache never expires.
172 :returns: ``True`` if key has been updated, ``False`` for backend
173 errors. Pickling errors, however, will raise a subclass of
174 ``pickle.PickleError``.
175 :rtype: boolean
176 """
177 return True
178
179 def add(self, key, value, timeout=None):
180 """Works like :meth:`set` but does not overwrite the values of already
181 existing keys.
182
183 :param key: the key to set
184 :param value: the value for the key
185 :param timeout: the cache timeout for the key in seconds (if not
186 specified, it uses the default timeout). A timeout of
187 0 indicates that the cache never expires.
188 :returns: Same as :meth:`set`, but also ``False`` for already
189 existing keys.
190 :rtype: boolean
191 """
192 return True
193
194 def set_many(self, mapping, timeout=None):
195 """Sets multiple keys and values from a mapping.
196
197 :param mapping: a mapping with the keys/values to set.
198 :param timeout: the cache timeout for the key in seconds (if not
199 specified, it uses the default timeout). A timeout of
200 0 indicates that the cache never expires.
201 :returns: Whether all given keys have been set.
202 :rtype: boolean
203 """
204 rv = True
205 for key, value in _items(mapping):
206 if not self.set(key, value, timeout):
207 rv = False
208 return rv
209
210 def delete_many(self, *keys):
211 """Deletes multiple keys at once.
212
213 :param keys: The function accepts multiple keys as positional
214 arguments.
215 :returns: Whether all given keys have been deleted.
216 :rtype: boolean
217 """
218 return all(self.delete(key) for key in keys)
219
220 def has(self, key):
221 """Checks if a key exists in the cache without returning it. This is a
222 cheap operation that bypasses loading the actual data on the backend.
223
224 This method is optional and may not be implemented on all caches.
225
226 :param key: the key to check
227 """
228 raise NotImplementedError(
229 "%s doesn't have an efficient implementation of `has`. That "
230 "means it is impossible to check whether a key exists without "
231 "fully loading the key's data. Consider using `self.get` "
232 "explicitly if you don't care about performance."
233 )
234
235 def clear(self):
236 """Clears the cache. Keep in mind that not all caches support
237 completely clearing the cache.
238
239 :returns: Whether the cache has been cleared.
240 :rtype: boolean
241 """
242 return True
243
244 def inc(self, key, delta=1):
245 """Increments the value of a key by `delta`. If the key does
246 not yet exist it is initialized with `delta`.
247
248 For supporting caches this is an atomic operation.
249
250 :param key: the key to increment.
251 :param delta: the delta to add.
252 :returns: The new value or ``None`` for backend errors.
253 """
254 value = (self.get(key) or 0) + delta
255 return value if self.set(key, value) else None
256
257 def dec(self, key, delta=1):
258 """Decrements the value of a key by `delta`. If the key does
259 not yet exist it is initialized with `-delta`.
260
261 For supporting caches this is an atomic operation.
262
263 :param key: the key to decrement.
264 :param delta: the delta to subtract.
265 :returns: The new value or `None` for backend errors.
266 """
267 value = (self.get(key) or 0) - delta
268 return value if self.set(key, value) else None
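The inc/dec defaults above are plain read-modify-write helpers; only backends that override them (such as the memcached and Redis caches below) are actually atomic. A minimal sketch of the same default behaviour on a dict-backed cache (DictCache is illustrative, not a Werkzeug class):

```python
class DictCache(object):
    # Just enough get/set to exercise the BaseCache-style inc/dec.
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value
        return True

    def inc(self, key, delta=1):
        # Read-modify-write: NOT atomic, two concurrent callers may
        # both read the old value and one increment can be lost.
        value = (self.get(key) or 0) + delta
        return value if self.set(key, value) else None

    def dec(self, key, delta=1):
        value = (self.get(key) or 0) - delta
        return value if self.set(key, value) else None
```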
269
270
271 class NullCache(BaseCache):
272 """A cache that doesn't cache. This can be useful for unit testing.
273
274 :param default_timeout: a dummy parameter that is ignored but exists
275 for API compatibility with other caches.
276 """
277
278 def has(self, key):
279 return False
280
281
282 class SimpleCache(BaseCache):
283 """Simple memory cache for single process environments. This class exists
284 mainly for the development server and is not 100% thread safe. It tries
285 to use as many atomic operations as possible and no locks for simplicity,
286 but under heavy load it could happen that keys are added multiple times.
287
288 :param threshold: the maximum number of items the cache stores before
289 it starts deleting some.
290 :param default_timeout: the default timeout that is used if no timeout is
291 specified on :meth:`~BaseCache.set`. A timeout of
292 0 indicates that the cache never expires.
293 """
294
295 def __init__(self, threshold=500, default_timeout=300):
296 BaseCache.__init__(self, default_timeout)
297 self._cache = {}
298 self.clear = self._cache.clear
299 self._threshold = threshold
300
301 def _prune(self):
302 if len(self._cache) > self._threshold:
303 now = time()
304 toremove = []
305 for idx, (key, (expires, _)) in enumerate(self._cache.items()):
306 if (expires != 0 and expires <= now) or idx % 3 == 0:
307 toremove.append(key)
308 for key in toremove:
309 self._cache.pop(key, None)
310
311 def _normalize_timeout(self, timeout):
312 timeout = BaseCache._normalize_timeout(self, timeout)
313 if timeout > 0:
314 timeout = time() + timeout
315 return timeout
316
317 def get(self, key):
318 try:
319 expires, value = self._cache[key]
320 if expires == 0 or expires > time():
321 return pickle.loads(value)
322 except (KeyError, pickle.PickleError):
323 return None
324
325 def set(self, key, value, timeout=None):
326 expires = self._normalize_timeout(timeout)
327 self._prune()
328 self._cache[key] = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL))
329 return True
330
331 def add(self, key, value, timeout=None):
332 expires = self._normalize_timeout(timeout)
333 self._prune()
334 item = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL))
335 if key in self._cache:
336 return False
337 self._cache.setdefault(key, item)
338 return True
339
340 def delete(self, key):
341 return self._cache.pop(key, None) is not None
342
343 def has(self, key):
344 try:
345 expires, value = self._cache[key]
346 return expires == 0 or expires > time()
347 except KeyError:
348 return False
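SimpleCache._prune() above evicts with a deliberately cheap heuristic: once the threshold is exceeded, expired entries are dropped along with every third entry regardless of age. A standalone sketch of the same loop over a plain dict of (expires, value) tuples (prune here is illustrative):

```python
import time


def prune(cache, threshold):
    # Mirror SimpleCache._prune(): only act once over the threshold,
    # then drop expired entries plus every third entry as a crude
    # way to make room without tracking access order.
    if len(cache) <= threshold:
        return
    now = time.time()
    toremove = []
    for idx, (key, (expires, _)) in enumerate(list(cache.items())):
        if (expires != 0 and expires <= now) or idx % 3 == 0:
            toremove.append(key)
    for key in toremove:
        cache.pop(key, None)
```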
349
350
351 _test_memcached_key = re.compile(r"[^\x00-\x21\xff]{1,250}$").match
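The _test_memcached_key pattern above encodes memcached's key rules: 1 to 250 characters, none of them in the control/space range (0x00-0x21) or 0xff. A small illustrative wrapper demonstrating the same regex:

```python
import re

# Same pattern as above: reject control characters, spaces and 0xff,
# and cap the length at memcached's 250-character limit.
_valid_key = re.compile(r"[^\x00-\x21\xff]{1,250}$").match


def is_valid_memcached_key(key):
    return _valid_key(key) is not None
```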
352
353
354 class MemcachedCache(BaseCache):
355 """A cache that uses memcached as backend.
356
357 The first argument can either be an object that resembles the API of a
358 :class:`memcache.Client` or a tuple/list of server addresses. In the
359 event that a tuple/list is passed, Werkzeug tries to import the best
360 available memcache library.
361
362 This cache looks into the following packages/modules to find bindings for
363 memcached:
364
365 - ``pylibmc``
366 - ``google.appengine.api.memcache``
367 - ``memcache``
368 - ``libmc``
369
370 Implementation notes: This cache backend works around some limitations in
371 memcached to simplify the interface. For example unicode keys are encoded
372 to utf-8 on the fly. Methods such as :meth:`~BaseCache.get_dict` return
373 the keys in the same format as passed. Furthermore all get methods
374 silently ignore key errors to not cause problems when untrusted user data
375 is passed to the get methods, which is often the case in web applications.
376
377 :param servers: a list or tuple of server addresses or alternatively
378 a :class:`memcache.Client` or a compatible client.
379 :param default_timeout: the default timeout that is used if no timeout is
380 specified on :meth:`~BaseCache.set`. A timeout of
381 0 indicates that the cache never expires.
382 :param key_prefix: a prefix that is added before all keys. This makes it
383 possible to use the same memcached server for different
384 applications. Keep in mind that
385 :meth:`~BaseCache.clear` will also clear keys with a
386 different prefix.
387 """
388
389 def __init__(self, servers=None, default_timeout=300, key_prefix=None):
390 BaseCache.__init__(self, default_timeout)
391 if servers is None or isinstance(servers, (list, tuple)):
392 if servers is None:
393 servers = ["127.0.0.1:11211"]
394 self._client = self.import_preferred_memcache_lib(servers)
395 if self._client is None:
396 raise RuntimeError("no memcache module found")
397 else:
398 # NOTE: servers is actually an already initialized memcache
399 # client.
400 self._client = servers
401
402 self.key_prefix = to_native(key_prefix)
403
404 def _normalize_key(self, key):
405 key = to_native(key, "utf-8")
406 if self.key_prefix:
407 key = self.key_prefix + key
408 return key
409
410 def _normalize_timeout(self, timeout):
411 timeout = BaseCache._normalize_timeout(self, timeout)
412 if timeout > 0:
413 timeout = int(time()) + timeout
414 return timeout
415
416 def get(self, key):
417 key = self._normalize_key(key)
418 # memcached doesn't support keys longer than 250 characters. Such
419 # over-long keys often come from user submitted data, so we fail
420 # silently for getting.
421 if _test_memcached_key(key):
422 return self._client.get(key)
423
424 def get_dict(self, *keys):
425 key_mapping = {}
426 have_encoded_keys = False
427 for key in keys:
428 encoded_key = self._normalize_key(key)
429 if not isinstance(key, str):
430 have_encoded_keys = True
431 if _test_memcached_key(key):
432 key_mapping[encoded_key] = key
433 _keys = list(key_mapping)
434 d = rv = self._client.get_multi(_keys)
435 if have_encoded_keys or self.key_prefix:
436 rv = {}
437 for key, value in iteritems(d):
438 rv[key_mapping[key]] = value
439 if len(rv) < len(keys):
440 for key in keys:
441 if key not in rv:
442 rv[key] = None
443 return rv
444
445 def add(self, key, value, timeout=None):
446 key = self._normalize_key(key)
447 timeout = self._normalize_timeout(timeout)
448 return self._client.add(key, value, timeout)
449
450 def set(self, key, value, timeout=None):
451 key = self._normalize_key(key)
452 timeout = self._normalize_timeout(timeout)
453 return self._client.set(key, value, timeout)
454
455 def get_many(self, *keys):
456 d = self.get_dict(*keys)
457 return [d[key] for key in keys]
458
459 def set_many(self, mapping, timeout=None):
460 new_mapping = {}
461 for key, value in _items(mapping):
462 key = self._normalize_key(key)
463 new_mapping[key] = value
464
465 timeout = self._normalize_timeout(timeout)
466 failed_keys = self._client.set_multi(new_mapping, timeout)
467 return not failed_keys
468
469 def delete(self, key):
470 key = self._normalize_key(key)
471 if _test_memcached_key(key):
472 return self._client.delete(key)
473
474 def delete_many(self, *keys):
475 new_keys = []
476 for key in keys:
477 key = self._normalize_key(key)
478 if _test_memcached_key(key):
479 new_keys.append(key)
480 return self._client.delete_multi(new_keys)
481
482 def has(self, key):
483 key = self._normalize_key(key)
484 if _test_memcached_key(key):
485 return self._client.append(key, "")
486 return False
487
488 def clear(self):
489 return self._client.flush_all()
490
491 def inc(self, key, delta=1):
492 key = self._normalize_key(key)
493 return self._client.incr(key, delta)
494
495 def dec(self, key, delta=1):
496 key = self._normalize_key(key)
497 return self._client.decr(key, delta)
498
499 def import_preferred_memcache_lib(self, servers):
500 """Returns an initialized memcache client. Used by the constructor."""
501 try:
502 import pylibmc
503 except ImportError:
504 pass
505 else:
506 return pylibmc.Client(servers)
507
508 try:
509 from google.appengine.api import memcache
510 except ImportError:
511 pass
512 else:
513 return memcache.Client()
514
515 try:
516 import memcache
517 except ImportError:
518 pass
519 else:
520 return memcache.Client(servers)
521
522 try:
523 import libmc
524 except ImportError:
525 pass
526 else:
527 return libmc.Client(servers)
528
529
530 # backwards compatibility
531 GAEMemcachedCache = MemcachedCache
532
533
534 class RedisCache(BaseCache):
535 """Uses the Redis key-value store as a cache backend.
536
537 The first argument can be either a string denoting the address of the
538 Redis server or an object resembling an instance of a redis.Redis class.
539
540 Note: the Python Redis API already takes care of encoding unicode strings
541 on the fly.
542
543 .. versionadded:: 0.7
544
545 .. versionadded:: 0.8
546 `key_prefix` was added.
547
548 .. versionchanged:: 0.8
549 This cache backend now properly serializes objects.
550
551 .. versionchanged:: 0.8.3
552 This cache backend now supports password authentication.
553
554 .. versionchanged:: 0.10
555 ``**kwargs`` is now passed to the redis object.
556
557 :param host: address of the Redis server or an object whose API is
558 compatible with the official Python Redis client (redis-py).
559 :param port: port number on which Redis server listens for connections.
560 :param password: password authentication for the Redis server.
561 :param db: db (zero-based numeric index) on the Redis server to connect to.
562 :param default_timeout: the default timeout that is used if no timeout is
563 specified on :meth:`~BaseCache.set`. A timeout of
564 0 indicates that the cache never expires.
565 :param key_prefix: A prefix that should be added to all keys.
566
567 Any additional keyword arguments will be passed to ``redis.Redis``.
568 """
569
570 def __init__(
571 self,
572 host="localhost",
573 port=6379,
574 password=None,
575 db=0,
576 default_timeout=300,
577 key_prefix=None,
578 **kwargs
579 ):
580 BaseCache.__init__(self, default_timeout)
581 if host is None:
582 raise ValueError("RedisCache host parameter may not be None")
583 if isinstance(host, string_types):
584 try:
585 import redis
586 except ImportError:
587 raise RuntimeError("no redis module found")
588 if kwargs.get("decode_responses", None):
589 raise ValueError("decode_responses is not supported by RedisCache.")
590 self._client = redis.Redis(
591 host=host, port=port, password=password, db=db, **kwargs
592 )
593 else:
594 self._client = host
595 self.key_prefix = key_prefix or ""
596
597 def _normalize_timeout(self, timeout):
598 timeout = BaseCache._normalize_timeout(self, timeout)
599 if timeout == 0:
600 timeout = -1
601 return timeout
602
603 def dump_object(self, value):
604 """Dumps an object into a string for redis. By default it serializes
605 integers as regular strings and pickle-dumps everything else.
606 """
607 t = type(value)
608 if t in integer_types:
609 return str(value).encode("ascii")
610 return b"!" + pickle.dumps(value)
611
612 def load_object(self, value):
613 """The reversal of :meth:`dump_object`. This might be called with
614 None.
615 """
616 if value is None:
617 return None
618 if value.startswith(b"!"):
619 try:
620 return pickle.loads(value[1:])
621 except pickle.PickleError:
622 return None
623 try:
624 return int(value)
625 except ValueError:
626 # before 0.8 we did not have serialization. Still support that.
627 return value
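The dump_object/load_object pair above defines RedisCache's storage format: integers become plain ASCII digit strings (so Redis-side INCR/DECR keep working), everything else is pickled behind a leading b"!" marker, and unmarked non-integer values pass through for pre-0.8 compatibility. The same scheme as standalone functions (a sketch; the real methods check the type against integer_types to cover Python 2's long):

```python
import pickle


def dump_object(value):
    # Integers are stored as plain ASCII digits; everything else is
    # pickled and tagged with a leading b"!" marker.
    if isinstance(value, int) and not isinstance(value, bool):
        return str(value).encode("ascii")
    return b"!" + pickle.dumps(value)


def load_object(value):
    # Reverse of dump_object(); tolerates None (missing key) and
    # untagged legacy values stored without serialization.
    if value is None:
        return None
    if value.startswith(b"!"):
        try:
            return pickle.loads(value[1:])
        except pickle.PickleError:
            return None
    try:
        return int(value)
    except ValueError:
        return value
```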
628
629 def get(self, key):
630 return self.load_object(self._client.get(self.key_prefix + key))
631
632 def get_many(self, *keys):
633 if self.key_prefix:
634 keys = [self.key_prefix + key for key in keys]
635 return [self.load_object(x) for x in self._client.mget(keys)]
636
637 def set(self, key, value, timeout=None):
638 timeout = self._normalize_timeout(timeout)
639 dump = self.dump_object(value)
640 if timeout == -1:
641 result = self._client.set(name=self.key_prefix + key, value=dump)
642 else:
643 result = self._client.setex(
644 name=self.key_prefix + key, value=dump, time=timeout
645 )
646 return result
647
648 def add(self, key, value, timeout=None):
649 timeout = self._normalize_timeout(timeout)
650 dump = self.dump_object(value)
651 return self._client.setnx(
652 name=self.key_prefix + key, value=dump
653 ) and self._client.expire(name=self.key_prefix + key, time=timeout)
654
655 def set_many(self, mapping, timeout=None):
656 timeout = self._normalize_timeout(timeout)
657 # Use transaction=False to batch without calling redis MULTI
658 # which is not supported by twemproxy
659 pipe = self._client.pipeline(transaction=False)
660
661 for key, value in _items(mapping):
662 dump = self.dump_object(value)
663 if timeout == -1:
664 pipe.set(name=self.key_prefix + key, value=dump)
665 else:
666 pipe.setex(name=self.key_prefix + key, value=dump, time=timeout)
667 return pipe.execute()
668
669 def delete(self, key):
670 return self._client.delete(self.key_prefix + key)
671
672 def delete_many(self, *keys):
673 if not keys:
674 return
675 if self.key_prefix:
676 keys = [self.key_prefix + key for key in keys]
677 return self._client.delete(*keys)
678
679 def has(self, key):
680 return self._client.exists(self.key_prefix + key)
681
682 def clear(self):
683 status = False
684 if self.key_prefix:
685 keys = self._client.keys(self.key_prefix + "*")
686 if keys:
687 status = self._client.delete(*keys)
688 else:
689 status = self._client.flushdb()
690 return status
691
692 def inc(self, key, delta=1):
693 return self._client.incr(name=self.key_prefix + key, amount=delta)
694
695 def dec(self, key, delta=1):
696 return self._client.decr(name=self.key_prefix + key, amount=delta)
697
698
699 class FileSystemCache(BaseCache):
700 """A cache that stores the items on the file system. This cache depends
701 on being the only user of the `cache_dir`. Make absolutely sure that
702 nobody but this cache stores files there, otherwise the cache will
703 randomly delete files therein.
704
705 :param cache_dir: the directory where cache files are stored.
706 :param threshold: the maximum number of items the cache stores before
707 it starts deleting some. A threshold value of 0
708 indicates no threshold.
709 :param default_timeout: the default timeout that is used if no timeout is
710 specified on :meth:`~BaseCache.set`. A timeout of
711 0 indicates that the cache never expires.
712 :param mode: the file mode wanted for the cache files, default 0600
713 """
714
715 #: used for temporary files by the FileSystemCache
716 _fs_transaction_suffix = ".__wz_cache"
717 #: keep amount of files in a cache element
718 _fs_count_file = "__wz_cache_count"
719
720 def __init__(self, cache_dir, threshold=500, default_timeout=300, mode=0o600):
721 BaseCache.__init__(self, default_timeout)
722 self._path = cache_dir
723 self._threshold = threshold
724 self._mode = mode
725
726 try:
727 os.makedirs(self._path)
728 except OSError as ex:
729 if ex.errno != errno.EEXIST:
730 raise
731
732 self._update_count(value=len(self._list_dir()))
733
734 @property
735 def _file_count(self):
736 return self.get(self._fs_count_file) or 0
737
738 def _update_count(self, delta=None, value=None):
739 # If we have no threshold, don't count files
740 if self._threshold == 0:
741 return
742
743 if delta:
744 new_count = self._file_count + delta
745 else:
746 new_count = value or 0
747 self.set(self._fs_count_file, new_count, mgmt_element=True)
748
749 def _normalize_timeout(self, timeout):
750 timeout = BaseCache._normalize_timeout(self, timeout)
751 if timeout != 0:
752 timeout = time() + timeout
753 return int(timeout)
754
755 def _list_dir(self):
756 """Return a list of (fully qualified) cache filenames."""
758 mgmt_files = [
759 self._get_filename(name).split("/")[-1] for name in (self._fs_count_file,)
760 ]
761 return [
762 os.path.join(self._path, fn)
763 for fn in os.listdir(self._path)
764 if not fn.endswith(self._fs_transaction_suffix) and fn not in mgmt_files
765 ]
766
767 def _prune(self):
768 if self._threshold == 0 or not self._file_count > self._threshold:
769 return
770
771 entries = self._list_dir()
772 now = time()
773 for idx, fname in enumerate(entries):
774 try:
775 remove = False
776 with open(fname, "rb") as f:
777 expires = pickle.load(f)
778 remove = (expires != 0 and expires <= now) or idx % 3 == 0
779
780 if remove:
781 os.remove(fname)
782 except (IOError, OSError):
783 pass
784 self._update_count(value=len(self._list_dir()))
785
786 def clear(self):
787 for fname in self._list_dir():
788 try:
789 os.remove(fname)
790 except (IOError, OSError):
791 self._update_count(value=len(self._list_dir()))
792 return False
793 self._update_count(value=0)
794 return True
795
796 def _get_filename(self, key):
797 if isinstance(key, text_type):
798 key = key.encode("utf-8") # XXX unicode review
799 hash = md5(key).hexdigest()
800 return os.path.join(self._path, hash)
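_get_filename() above maps arbitrary (possibly unicode) keys to safe, fixed-length file names by hashing the UTF-8 encoded key with MD5. The same mapping as a standalone sketch (key_to_filename is illustrative):

```python
import os
from hashlib import md5


def key_to_filename(cache_dir, key):
    # Hash the UTF-8 encoded key: any key, however long or oddly
    # named, becomes a 32-character hex file name inside cache_dir.
    if isinstance(key, str):
        key = key.encode("utf-8")
    return os.path.join(cache_dir, md5(key).hexdigest())
```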
801
802 def get(self, key):
803 filename = self._get_filename(key)
804 try:
805 with open(filename, "rb") as f:
806 pickle_time = pickle.load(f)
807 if pickle_time == 0 or pickle_time >= time():
808 return pickle.load(f)
809 else:
810 os.remove(filename)
811 return None
812 except (IOError, OSError, pickle.PickleError):
813 return None
814
815 def add(self, key, value, timeout=None):
816 filename = self._get_filename(key)
817 if not os.path.exists(filename):
818 return self.set(key, value, timeout)
819 return False
820
821 def set(self, key, value, timeout=None, mgmt_element=False):
822 # Management elements have no timeout
823 if mgmt_element:
824 timeout = 0
825
826 # Don't prune on management element update, to avoid loop
827 else:
828 self._prune()
829
830 timeout = self._normalize_timeout(timeout)
831 filename = self._get_filename(key)
832 try:
833 fd, tmp = tempfile.mkstemp(
834 suffix=self._fs_transaction_suffix, dir=self._path
835 )
836 with os.fdopen(fd, "wb") as f:
837 pickle.dump(timeout, f, 1)
838 pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
839 rename(tmp, filename)
840 os.chmod(filename, self._mode)
841 except (IOError, OSError):
842 return False
843 else:
844 # Management elements should not count towards threshold
845 if not mgmt_element:
846 self._update_count(delta=1)
847 return True
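The set() method above uses a classic atomic-write recipe: write to a temp file in the same directory, then rename over the final name, so a concurrent get() never sees a half-written pickle. Werkzeug routes the rename through its posixemulation module for Windows support; the sketch below assumes Python 3's os.replace instead:

```python
import os
import tempfile


def atomic_write(path, data, mode=0o600):
    # Write into a temp file beside the target, then atomically swap
    # it into place; readers see either the old or the new contents.
    fd, tmp = tempfile.mkstemp(suffix=".__wz_cache", dir=os.path.dirname(path))
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
        os.replace(tmp, path)  # atomic rename, also overwrites on Windows
        os.chmod(path, mode)
    except OSError:
        if os.path.exists(tmp):
            os.remove(tmp)
        return False
    return True
```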
848
849 def delete(self, key, mgmt_element=False):
850 try:
851 os.remove(self._get_filename(key))
852 except (IOError, OSError):
853 return False
854 else:
855 # Management elements should not count towards threshold
856 if not mgmt_element:
857 self._update_count(delta=-1)
858 return True
859
860 def has(self, key):
861 filename = self._get_filename(key)
862 try:
863 with open(filename, "rb") as f:
864 pickle_time = pickle.load(f)
865 if pickle_time == 0 or pickle_time >= time():
866 return True
867 else:
868 os.remove(filename)
869 return False
870 except (IOError, OSError, pickle.PickleError):
871 return False
872
873
874 class UWSGICache(BaseCache):
875 """Implements the cache using uWSGI's caching framework.
876
877 .. note::
878 This class cannot be used when running under PyPy, because the uWSGI
879 API implementation for PyPy is lacking the needed functionality.
880
881 :param default_timeout: The default timeout in seconds.
882 :param cache: The name of the caching instance to connect to, for
883 example: ``mycache@localhost:3031``. Defaults to an empty string,
884 which means uWSGI will cache in the local instance. If the cache is
885 in the same instance as the werkzeug app, you only have to provide
886 the name of the cache.
887 """
888
889 def __init__(self, default_timeout=300, cache=""):
890 BaseCache.__init__(self, default_timeout)
891
892 if platform.python_implementation() == "PyPy":
893 raise RuntimeError(
894 "uWSGI caching does not work under PyPy, see "
895 "the docs for more details."
896 )
897
898 try:
899 import uwsgi
900
901 self._uwsgi = uwsgi
902 except ImportError:
903 raise RuntimeError(
904 "uWSGI could not be imported, are you running under uWSGI?"
905 )
906
907 self.cache = cache
908
909 def get(self, key):
910 rv = self._uwsgi.cache_get(key, self.cache)
911 if rv is None:
912 return
913 return pickle.loads(rv)
914
915 def delete(self, key):
916 return self._uwsgi.cache_del(key, self.cache)
917
918 def set(self, key, value, timeout=None):
919 return self._uwsgi.cache_update(
920 key, pickle.dumps(value), self._normalize_timeout(timeout), self.cache
921 )
922
923 def add(self, key, value, timeout=None):
924 return self._uwsgi.cache_set(
925 key, pickle.dumps(value), self._normalize_timeout(timeout), self.cache
926 )
927
928 def clear(self):
929 return self._uwsgi.cache_clear(self.cache)
930
931 def has(self, key):
932 return self._uwsgi.cache_exists(key, self.cache) is not None
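The cache backends above all implement the same small contract: ``set()`` overwrites, ``add()`` only stores if the key is missing, and a timeout of ``0`` means "never expire". A minimal in-memory sketch of that contract (hypothetical ``SimpleCacheSketch``, not part of Werkzeug) can make the semantics concrete; the zero-timeout convention mirrors the ``pickle_time == 0`` check in ``FileSystemCache.has`` above:

```python
import time


class SimpleCacheSketch:
    """In-memory sketch of the cache contract: set/add/get/has with
    per-key timeouts, where a timeout of 0 means no expiry."""

    def __init__(self, default_timeout=300):
        self.default_timeout = default_timeout
        self._store = {}

    def _normalize_timeout(self, timeout):
        return self.default_timeout if timeout is None else timeout

    def _expires(self, timeout):
        # 0 means "never expire", like the pickle_time == 0 check above
        return 0 if timeout == 0 else time.time() + timeout

    def set(self, key, value, timeout=None):
        self._store[key] = (self._expires(self._normalize_timeout(timeout)), value)
        return True

    def add(self, key, value, timeout=None):
        # only store if the key is not already present and unexpired
        if self.has(key):
            return False
        return self.set(key, value, timeout)

    def has(self, key):
        if key not in self._store:
            return False
        expires, _ = self._store[key]
        if expires != 0 and expires < time.time():
            del self._store[key]  # lazily evict expired entries
            return False
        return True

    def get(self, key):
        return self._store[key][1] if self.has(key) else None
```

The real backends additionally pickle values (filesystem) or delegate expiry to the server (uWSGI), but the get/set/add/has semantics are the same.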
0 """
1 Fixers
2 ======
3
4 .. warning::
5 .. deprecated:: 0.15
6 ``ProxyFix`` has moved to :mod:`werkzeug.middleware.proxy_fix`.
7 All other code in this module is deprecated and will be removed
8 in version 1.0.
9
10 .. versionadded:: 0.5
11
12 This module includes various helpers that fix web server behavior.
13
14 .. autoclass:: ProxyFix
15 :members:
16
17 .. autoclass:: CGIRootFix
18
19 .. autoclass:: PathInfoFromRequestUriFix
20
21 .. autoclass:: HeaderRewriterFix
22
23 .. autoclass:: InternetExplorerFix
24
25 :copyright: 2007 Pallets
26 :license: BSD-3-Clause
27 """
28 import warnings
29
30 from ..datastructures import Headers
31 from ..datastructures import ResponseCacheControl
32 from ..http import parse_cache_control_header
33 from ..http import parse_options_header
34 from ..http import parse_set_header
35 from ..middleware.proxy_fix import ProxyFix as _ProxyFix
36 from ..useragents import UserAgent
37
38 try:
39 from urllib.parse import unquote
40 except ImportError:
41 from urllib import unquote
42
43
44 class CGIRootFix(object):
45 """Wrap the application in this middleware if you are using FastCGI
46 or CGI and you have problems with your app root being set to the CGI
47 script's path instead of the path users are going to visit.
48
49 :param app: the WSGI application
50 :param app_root: Defaulting to ``'/'``, you can set this to
51 something else if your app is mounted somewhere else.
52
53 .. deprecated:: 0.15
54 This middleware will be removed in version 1.0.
55
56 .. versionchanged:: 0.9
57 Added `app_root` parameter and renamed from
58 ``LighttpdCGIRootFix``.
59 """
60
61 def __init__(self, app, app_root="/"):
62 warnings.warn(
63 "'CGIRootFix' is deprecated as of version 0.15 and will be"
64 " removed in version 1.0.",
65 DeprecationWarning,
66 stacklevel=2,
67 )
68 self.app = app
69 self.app_root = app_root.strip("/")
70
71 def __call__(self, environ, start_response):
72 environ["SCRIPT_NAME"] = self.app_root
73 return self.app(environ, start_response)
74
75
76 class LighttpdCGIRootFix(CGIRootFix):
77 def __init__(self, *args, **kwargs):
78 warnings.warn(
79 "'LighttpdCGIRootFix' is renamed 'CGIRootFix'. Both will be"
80 " removed in version 1.0.",
81 DeprecationWarning,
82 stacklevel=2,
83 )
84 super(LighttpdCGIRootFix, self).__init__(*args, **kwargs)
85
86
87 class PathInfoFromRequestUriFix(object):
88 On Windows, environment variables are limited to the system charset,
89 which on some systems makes it impossible to store the `PATH_INFO`
90 variable in the environment without loss of information.
91
92 This is, for example, a problem for CGI scripts under Apache on Windows.
93
94 This fixer works by recreating `PATH_INFO` from `REQUEST_URI`,
95 `REQUEST_URL`, or `UNENCODED_URL` (whichever is available). The fix
96 can therefore only be applied if the web server supports one of these
97 variables.
98
99 :param app: the WSGI application
100
101 .. deprecated:: 0.15
102 This middleware will be removed in version 1.0.
103 """
104
105 def __init__(self, app):
106 warnings.warn(
107 "'PathInfoFromRequestUriFix' is deprecated as of version"
108 " 0.15 and will be removed in version 1.0.",
109 DeprecationWarning,
110 stacklevel=2,
111 )
112 self.app = app
113
114 def __call__(self, environ, start_response):
115 for key in "REQUEST_URL", "REQUEST_URI", "UNENCODED_URL":
116 if key not in environ:
117 continue
118 request_uri = unquote(environ[key])
119 script_name = unquote(environ.get("SCRIPT_NAME", ""))
120 if request_uri.startswith(script_name):
121 environ["PATH_INFO"] = request_uri[len(script_name) :].split("?", 1)[0]
122 break
123 return self.app(environ, start_response)
124
125
126 class ProxyFix(_ProxyFix):
127 """
128 .. deprecated:: 0.15
129 ``werkzeug.contrib.fixers.ProxyFix`` has moved to
130 :mod:`werkzeug.middleware.proxy_fix`. This import will be
131 removed in 1.0.
132 """
133
134 def __init__(self, *args, **kwargs):
135 warnings.warn(
136 "'werkzeug.contrib.fixers.ProxyFix' has moved to 'werkzeug"
137 ".middleware.proxy_fix.ProxyFix'. This import is deprecated"
138 " as of version 0.15 and will be removed in 1.0.",
139 DeprecationWarning,
140 stacklevel=2,
141 )
142 super(ProxyFix, self).__init__(*args, **kwargs)
143
144
145 class HeaderRewriterFix(object):
146 """This middleware can remove response headers and add others. This
147 is for example useful to remove the `Date` header from responses if you
148 are using a server that adds that header, no matter if it's present or
149 not or to add `X-Powered-By` headers::
150
151 app = HeaderRewriterFix(app, remove_headers=['Date'],
152 add_headers=[('X-Powered-By', 'WSGI')])
153
154 :param app: the WSGI application
155 :param remove_headers: a sequence of header keys that should be
156 removed.
157 :param add_headers: a sequence of ``(key, value)`` tuples that should
158 be added.
159
160 .. deprecated:: 0.15
161 This middleware will be removed in 1.0.
162 """
163
164 def __init__(self, app, remove_headers=None, add_headers=None):
165 warnings.warn(
166 "'HeaderRewriterFix' is deprecated as of version 0.15 and"
167 " will be removed in version 1.0.",
168 DeprecationWarning,
169 stacklevel=2,
170 )
171 self.app = app
172 self.remove_headers = set(x.lower() for x in (remove_headers or ()))
173 self.add_headers = list(add_headers or ())
174
175 def __call__(self, environ, start_response):
176 def rewriting_start_response(status, headers, exc_info=None):
177 new_headers = []
178 for key, value in headers:
179 if key.lower() not in self.remove_headers:
180 new_headers.append((key, value))
181 new_headers += self.add_headers
182 return start_response(status, new_headers, exc_info)
183
184 return self.app(environ, rewriting_start_response)
185
186
187 class InternetExplorerFix(object):
188 """This middleware fixes a couple of bugs with Microsoft Internet
189 Explorer. Currently the following fixes are applied:
190
191 - removes `Vary` headers for unsupported mimetypes, which
192 cause trouble with caching. Can be disabled by passing
193 ``fix_vary=False`` to the constructor.
194 See: https://support.microsoft.com/en-us/help/824847
195
196 - removes offending headers to work around caching bugs in
197 Internet Explorer if `Content-Disposition` is set. Can be
198 disabled by passing ``fix_attach=False`` to the constructor.
199
200 If no affected Internet Explorer version is detected, the request and
201 response are left untouched.
202
203 .. deprecated:: 0.15
204 This middleware will be removed in 1.0.
205 """
206
207 # This code was inspired by Django fixers for the same bugs. The
208 # fix_vary and fix_attach fixers were originally implemented in Django
209 # by Michael Axiak and are available as part of the Django project:
210 # https://code.djangoproject.com/ticket/4148
211
212 def __init__(self, app, fix_vary=True, fix_attach=True):
213 warnings.warn(
214 "'InternetExplorerFix' is deprecated as of version 0.15 and"
215 " will be removed in version 1.0.",
216 DeprecationWarning,
217 stacklevel=2,
218 )
219 self.app = app
220 self.fix_vary = fix_vary
221 self.fix_attach = fix_attach
222
223 def fix_headers(self, environ, headers, status=None):
224 if self.fix_vary:
225 header = headers.get("content-type", "")
226 mimetype, options = parse_options_header(header)
227 if mimetype not in ("text/html", "text/plain", "text/sgml"):
228 headers.pop("vary", None)
229
230 if self.fix_attach and "content-disposition" in headers:
231 pragma = parse_set_header(headers.get("pragma", ""))
232 pragma.discard("no-cache")
233 header = pragma.to_header()
234 if not header:
235 headers.pop("pragma", "")
236 else:
237 headers["Pragma"] = header
238 header = headers.get("cache-control", "")
239 if header:
240 cc = parse_cache_control_header(header, cls=ResponseCacheControl)
241 cc.no_cache = None
242 cc.no_store = False
243 header = cc.to_header()
244 if not header:
245 headers.pop("cache-control", "")
246 else:
247 headers["Cache-Control"] = header
248
249 def run_fixed(self, environ, start_response):
250 def fixing_start_response(status, headers, exc_info=None):
251 headers = Headers(headers)
252 self.fix_headers(environ, headers, status)
253 return start_response(status, headers.to_wsgi_list(), exc_info)
254
255 return self.app(environ, fixing_start_response)
256
257 def __call__(self, environ, start_response):
258 ua = UserAgent(environ)
259 if ua.browser != "msie":
260 return self.app(environ, start_response)
261 return self.run_fixed(environ, start_response)
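The pattern shared by ``HeaderRewriterFix`` and ``InternetExplorerFix`` above is wrapping ``start_response`` so headers can be edited before they reach the server. A standalone sketch of that pattern (hypothetical names, assuming a trivial demo app rather than a real deployment) shows how the wrapper threads ``status`` and ``exc_info`` through unchanged:

```python
def demo_app(environ, start_response):
    # a minimal WSGI app used only for demonstration
    start_response("200 OK", [("Content-Type", "text/plain"), ("Date", "now")])
    return [b"hello"]


class HeaderRewriter:
    """Sketch of the HeaderRewriterFix pattern: drop some response
    headers, append others, pass everything else through."""

    def __init__(self, app, remove_headers=(), add_headers=()):
        self.app = app
        self.remove = {h.lower() for h in remove_headers}
        self.add = list(add_headers)

    def __call__(self, environ, start_response):
        def rewriting_start_response(status, headers, exc_info=None):
            new_headers = [
                (k, v) for k, v in headers if k.lower() not in self.remove
            ]
            new_headers += self.add
            return start_response(status, new_headers, exc_info)

        return self.app(environ, rewriting_start_response)


# drive the middleware with a fake start_response that records its input
captured = {}


def fake_start_response(status, headers, exc_info=None):
    captured["status"] = status
    captured["headers"] = headers


app = HeaderRewriter(
    demo_app, remove_headers=["Date"], add_headers=[("X-Powered-By", "WSGI")]
)
body = app({}, fake_start_response)
```

Because only ``start_response`` is wrapped, the response body iterable is untouched, which is why these fixers compose cleanly with other middleware.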
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.contrib.iterio
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module implements an :class:`IterIO` that converts an iterator into
6 a stream object and the other way round. Converting streams into
7 iterators requires the `greenlet`_ module.
8
9 To convert an iterator into a stream, all you have to do is pass it
10 directly to the :class:`IterIO` constructor. In this example we pass it
11 a newly created generator::
12
13 def foo():
14 yield "something\n"
15 yield "otherthings"
16 stream = IterIO(foo())
17 print stream.read() # read the whole iterator
18
19 The other way round works a bit differently because we have to ensure
20 that the code isn't executed yet. Calling :class:`IterIO` with a
21 callable as first argument does two things. The function itself is
22 passed an :class:`IterIO` stream it can feed. The object returned by
23 the :class:`IterIO` constructor, on the other hand, is not a stream
24 object but an iterator::
25
26 def foo(stream):
27 stream.write("some")
28 stream.write("thing")
29 stream.flush()
30 stream.write("otherthing")
31 iterator = IterIO(foo)
32 print iterator.next() # prints something
33 print iterator.next() # prints otherthing
34 iterator.next() # raises StopIteration
35
36 .. _greenlet: https://github.com/python-greenlet/greenlet
37
38 :copyright: 2007 Pallets
39 :license: BSD-3-Clause
40 """
41 import warnings
42
43 from .._compat import implements_iterator
44
45 try:
46 import greenlet
47 except ImportError:
48 greenlet = None
49
50 warnings.warn(
51 "'werkzeug.contrib.iterio' is deprecated as of version 0.15 and"
52 " will be removed in version 1.0.",
53 DeprecationWarning,
54 stacklevel=2,
55 )
56
57
58 def _mixed_join(iterable, sentinel):
59 """concatenate any string type in an intelligent way."""
60 iterator = iter(iterable)
61 first_item = next(iterator, sentinel)
62 if isinstance(first_item, bytes):
63 return first_item + b"".join(iterator)
64 return first_item + u"".join(iterator)
65
66
67 def _newline(reference_string):
68 if isinstance(reference_string, bytes):
69 return b"\n"
70 return u"\n"
71
72
73 @implements_iterator
74 class IterIO(object):
75 """Instances of this object implement an interface compatible with the
76 standard Python :class:`file` object. Streams are either read-only or
77 write-only depending on how the object is created.
78
79 If the first argument is an iterable, a file-like object is returned
80 that yields the contents of the iterable. If the iterable is empty,
81 read operations will return the sentinel value.
82
83 If the first argument is a callable, the stream object will be
84 created and passed to that function. The caller itself, however, will
85 not receive a stream but an iterable. The function will be executed
86 step by step as something iterates over the returned iterable. Each
87 call to :meth:`flush` will create an item for the iterable. If
88 :meth:`flush` is called without any writes in-between the sentinel
89 value will be yielded.
90
91 Note for Python 3: due to the incompatible interface of bytes and
92 streams you should set the sentinel value explicitly to an empty
93 bytestring (``b''``) if you are expecting to deal with bytes as
94 otherwise the end of the stream is marked with the wrong sentinel
95 value.
96
97 .. versionadded:: 0.9
98 `sentinel` parameter was added.
99 """
100
101 def __new__(cls, obj, sentinel=""):
102 try:
103 iterator = iter(obj)
104 except TypeError:
105 return IterI(obj, sentinel)
106 return IterO(iterator, sentinel)
107
108 def __iter__(self):
109 return self
110
111 def tell(self):
112 if self.closed:
113 raise ValueError("I/O operation on closed file")
114 return self.pos
115
116 def isatty(self):
117 if self.closed:
118 raise ValueError("I/O operation on closed file")
119 return False
120
121 def seek(self, pos, mode=0):
122 if self.closed:
123 raise ValueError("I/O operation on closed file")
124 raise IOError(9, "Bad file descriptor")
125
126 def truncate(self, size=None):
127 if self.closed:
128 raise ValueError("I/O operation on closed file")
129 raise IOError(9, "Bad file descriptor")
130
131 def write(self, s):
132 if self.closed:
133 raise ValueError("I/O operation on closed file")
134 raise IOError(9, "Bad file descriptor")
135
136 def writelines(self, list):
137 if self.closed:
138 raise ValueError("I/O operation on closed file")
139 raise IOError(9, "Bad file descriptor")
140
141 def read(self, n=-1):
142 if self.closed:
143 raise ValueError("I/O operation on closed file")
144 raise IOError(9, "Bad file descriptor")
145
146 def readlines(self, sizehint=0):
147 if self.closed:
148 raise ValueError("I/O operation on closed file")
149 raise IOError(9, "Bad file descriptor")
150
151 def readline(self, length=None):
152 if self.closed:
153 raise ValueError("I/O operation on closed file")
154 raise IOError(9, "Bad file descriptor")
155
156 def flush(self):
157 if self.closed:
158 raise ValueError("I/O operation on closed file")
159 raise IOError(9, "Bad file descriptor")
160
161 def __next__(self):
162 if self.closed:
163 raise StopIteration()
164 line = self.readline()
165 if not line:
166 raise StopIteration()
167 return line
168
169
170 class IterI(IterIO):
171 """Convert an stream into an iterator."""
172
173 def __new__(cls, func, sentinel=""):
174 if greenlet is None:
175 raise RuntimeError("IterI requires greenlet support")
176 stream = object.__new__(cls)
177 stream._parent = greenlet.getcurrent()
178 stream._buffer = []
179 stream.closed = False
180 stream.sentinel = sentinel
181 stream.pos = 0
182
183 def run():
184 func(stream)
185 stream.close()
186
187 g = greenlet.greenlet(run, stream._parent)
188 while 1:
189 rv = g.switch()
190 if not rv:
191 return
192 yield rv[0]
193
194 def close(self):
195 if not self.closed:
196 self.closed = True
197 self._flush_impl()
198
199 def write(self, s):
200 if self.closed:
201 raise ValueError("I/O operation on closed file")
202 if s:
203 self.pos += len(s)
204 self._buffer.append(s)
205
206 def writelines(self, list):
207 for item in list:
208 self.write(item)
209
210 def flush(self):
211 if self.closed:
212 raise ValueError("I/O operation on closed file")
213 self._flush_impl()
214
215 def _flush_impl(self):
216 data = _mixed_join(self._buffer, self.sentinel)
217 self._buffer = []
218 if not data and self.closed:
219 self._parent.switch()
220 else:
221 self._parent.switch((data,))
222
223
224 class IterO(IterIO):
225 """Iter output. Wrap an iterator and give it a stream like interface."""
226
227 def __new__(cls, gen, sentinel=""):
228 self = object.__new__(cls)
229 self._gen = gen
230 self._buf = None
231 self.sentinel = sentinel
232 self.closed = False
233 self.pos = 0
234 return self
235
236 def __iter__(self):
237 return self
238
239 def _buf_append(self, string):
240 """Replace string directly without appending to an empty string,
241 avoiding type issues."""
242 if not self._buf:
243 self._buf = string
244 else:
245 self._buf += string
246
247 def close(self):
248 if not self.closed:
249 self.closed = True
250 if hasattr(self._gen, "close"):
251 self._gen.close()
252
253 def seek(self, pos, mode=0):
254 if self.closed:
255 raise ValueError("I/O operation on closed file")
256 if mode == 1:
257 pos += self.pos
258 elif mode == 2:
259 self.read()
260 self.pos = min(self.pos, self.pos + pos)
261 return
262 elif mode != 0:
263 raise IOError("Invalid argument")
264 buf = []
265 try:
266 tmp_end_pos = len(self._buf or "")
267 while pos > tmp_end_pos:
268 item = next(self._gen)
269 tmp_end_pos += len(item)
270 buf.append(item)
271 except StopIteration:
272 pass
273 if buf:
274 self._buf_append(_mixed_join(buf, self.sentinel))
275 self.pos = max(0, pos)
276
277 def read(self, n=-1):
278 if self.closed:
279 raise ValueError("I/O operation on closed file")
280 if n < 0:
281 self._buf_append(_mixed_join(self._gen, self.sentinel))
282 result = self._buf[self.pos :]
283 self.pos += len(result)
284 return result
285 new_pos = self.pos + n
286 buf = []
287 try:
288 tmp_end_pos = 0 if self._buf is None else len(self._buf)
289 while new_pos > tmp_end_pos or (self._buf is None and not buf):
290 item = next(self._gen)
291 tmp_end_pos += len(item)
292 buf.append(item)
293 except StopIteration:
294 pass
295 if buf:
296 self._buf_append(_mixed_join(buf, self.sentinel))
297
298 if self._buf is None:
299 return self.sentinel
300
301 new_pos = max(0, new_pos)
302 try:
303 return self._buf[self.pos : new_pos]
304 finally:
305 self.pos = min(new_pos, len(self._buf))
306
307 def readline(self, length=None):
308 if self.closed:
309 raise ValueError("I/O operation on closed file")
310
311 nl_pos = -1
312 if self._buf:
313 nl_pos = self._buf.find(_newline(self._buf), self.pos)
314 buf = []
315 try:
316 if self._buf is None:
317 pos = self.pos
318 else:
319 pos = len(self._buf)
320 while nl_pos < 0:
321 item = next(self._gen)
322 local_pos = item.find(_newline(item))
323 buf.append(item)
324 if local_pos >= 0:
325 nl_pos = pos + local_pos
326 break
327 pos += len(item)
328 except StopIteration:
329 pass
330 if buf:
331 self._buf_append(_mixed_join(buf, self.sentinel))
332
333 if self._buf is None:
334 return self.sentinel
335
336 if nl_pos < 0:
337 new_pos = len(self._buf)
338 else:
339 new_pos = nl_pos + 1
340 if length is not None and self.pos + length < new_pos:
341 new_pos = self.pos + length
342 try:
343 return self._buf[self.pos : new_pos]
344 finally:
345 self.pos = min(new_pos, len(self._buf))
346
347 def readlines(self, sizehint=0):
348 total = 0
349 lines = []
350 line = self.readline()
351 while line:
352 lines.append(line)
353 total += len(line)
354 if 0 < sizehint <= total:
355 break
356 line = self.readline()
357 return lines
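The core idea behind ``IterO`` above is lazy buffering: items are pulled from the wrapped generator only when a ``read()`` needs more data than the buffer holds. A much smaller sketch (hypothetical ``IterToStream``, text-only, without the sentinel and seek machinery of the real class) captures just that mechanism:

```python
class IterToStream:
    """Sketch of the IterO idea: pull items from a generator on demand
    and buffer them so read(n) works like on a file object."""

    def __init__(self, gen):
        self._gen = gen
        self._buf = ""
        self.pos = 0

    def read(self, n=-1):
        if n < 0:
            # exhaust the generator for an unbounded read
            self._buf += "".join(self._gen)
        else:
            # pull items only until the buffer can satisfy the request
            while len(self._buf) < self.pos + n:
                try:
                    self._buf += next(self._gen)
                except StopIteration:
                    break
        end = len(self._buf) if n < 0 else min(self.pos + n, len(self._buf))
        result = self._buf[self.pos:end]
        self.pos = end
        return result


def parts():
    yield "some"
    yield "thing\n"
    yield "other"


stream = IterToStream(parts())
```

The real ``IterO`` additionally handles bytes vs. text via the sentinel, supports ``seek``/``readline``, and avoids type issues with ``_buf_append``, but the pull-on-demand buffering is the same.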
0 import warnings
1
2 from ..middleware.lint import * # noqa: F401, F403
3
4 warnings.warn(
5 "'werkzeug.contrib.lint' has moved to 'werkzeug.middleware.lint'."
6 " This import is deprecated as of version 0.15 and will be removed"
7 " in version 1.0.",
8 DeprecationWarning,
9 stacklevel=2,
10 )
0 import warnings
1
2 from ..middleware.profiler import * # noqa: F401, F403
3
4 warnings.warn(
5 "'werkzeug.contrib.profiler' has moved to"
6 " 'werkzeug.middleware.profiler'. This import is deprecated as of"
7 " version 0.15 and will be removed in version 1.0.",
8 DeprecationWarning,
9 stacklevel=2,
10 )
11
12
13 class MergeStream(object):
14 """An object that redirects ``write`` calls to multiple streams.
15 Use this to log to both ``sys.stdout`` and a file::
16
17 f = open('profiler.log', 'w')
18 stream = MergeStream(sys.stdout, f)
19 profiler = ProfilerMiddleware(app, stream)
20
21 .. deprecated:: 0.15
22 Use the ``tee`` command in your terminal instead. This class
23 will be removed in 1.0.
24 """
25
26 def __init__(self, *streams):
27 warnings.warn(
28 "'MergeStream' is deprecated as of version 0.15 and will be removed in"
29 " version 1.0. Use your terminal's 'tee' command instead.",
30 DeprecationWarning,
31 stacklevel=2,
32 )
33
34 if not streams:
35 raise TypeError("At least one stream must be given.")
36
37 self.streams = streams
38
39 def write(self, data):
40 for stream in self.streams:
41 stream.write(data)
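``MergeStream`` above is just a fan-out over ``write()``. A self-contained sketch of the same idea (hypothetical ``Tee``, exercised against ``io.StringIO`` rather than ``sys.stdout`` and a real file) shows the whole mechanism:

```python
import io


class Tee:
    """Fan out write() calls to several streams, like MergeStream."""

    def __init__(self, *streams):
        if not streams:
            raise TypeError("At least one stream must be given.")
        self.streams = streams

    def write(self, data):
        # every stream receives every chunk, in order
        for stream in self.streams:
            stream.write(data)


a, b = io.StringIO(), io.StringIO()
tee = Tee(a, b)
tee.write("profile line\n")
```

As the deprecation note says, the shell's ``tee`` command achieves the same effect without any Python code.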
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.contrib.securecookie
3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module implements a cookie that cannot be altered by the client
6 because it adds a checksum that the server validates. You can use it
7 as a session replacement if all you have is a user id or something
8 similar to mark a logged-in user.
9
10 Keep in mind that the data is still readable by the client, just as a
11 normal cookie is. However, you don't have to store and flush sessions
12 on the server.
13
14 Example usage:
15
16 >>> from werkzeug.contrib.securecookie import SecureCookie
17 >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
18
19 Dumping into a string so that one can store it in a cookie:
20
21 >>> value = x.serialize()
22
23 Loading from that string again:
24
25 >>> x = SecureCookie.unserialize(value, "deadbeef")
26 >>> x["baz"]
27 (1, 2, 3)
28
29 If someone modifies the cookie and the checksum is wrong, the unserialize
30 method will fail silently and return a new empty `SecureCookie` object.
31
32 Keep in mind that the values will be visible in the cookie so do not
33 store data in a cookie you don't want the user to see.
34
35 Application Integration
36 =======================
37
38 If you are using the werkzeug request objects you could integrate the
39 secure cookie into your application like this::
40
41 from werkzeug.utils import cached_property
42 from werkzeug.wrappers import BaseRequest
43 from werkzeug.contrib.securecookie import SecureCookie
44
45 # don't use this key but a different one; you could just use
46 # os.urandom(20) to get something random
47 SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea'
48
49 class Request(BaseRequest):
50
51 @cached_property
52 def client_session(self):
53 data = self.cookies.get('session_data')
54 if not data:
55 return SecureCookie(secret_key=SECRET_KEY)
56 return SecureCookie.unserialize(data, SECRET_KEY)
57
58 def application(environ, start_response):
59 request = Request(environ)
60
61 # get a response object here
62 response = ...
63
64 if request.client_session.should_save:
65 session_data = request.client_session.serialize()
66 response.set_cookie('session_data', session_data,
67 httponly=True)
68 return response(environ, start_response)
69
70 A less verbose integration can be achieved by using shorthand methods::
71
72 class Request(BaseRequest):
73
74 @cached_property
75 def client_session(self):
76 return SecureCookie.load_cookie(self, secret_key=COOKIE_SECRET)
77
78 def application(environ, start_response):
79 request = Request(environ)
80
81 # get a response object here
82 response = ...
83
84 request.client_session.save_cookie(response)
85 return response(environ, start_response)
86
87 :copyright: 2007 Pallets
88 :license: BSD-3-Clause
89 """
90 import base64
91 import pickle
92 import warnings
93 from hashlib import sha1 as _default_hash
94 from hmac import new as hmac
95 from time import time
96
97 from .._compat import iteritems
98 from .._compat import text_type
99 from .._compat import to_bytes
100 from .._compat import to_native
101 from .._internal import _date_to_unix
102 from ..contrib.sessions import ModificationTrackingDict
103 from ..security import safe_str_cmp
104 from ..urls import url_quote_plus
105 from ..urls import url_unquote_plus
106
107 warnings.warn(
108 "'werkzeug.contrib.securecookie' is deprecated as of version 0.15"
109 " and will be removed in version 1.0. It has moved to"
110 " https://github.com/pallets/secure-cookie.",
111 DeprecationWarning,
112 stacklevel=2,
113 )
114
115
116 class UnquoteError(Exception):
117 """Internal exception used to signal failures on quoting."""
118
119
120 class SecureCookie(ModificationTrackingDict):
121 """Represents a secure cookie. You can subclass this class and provide
122 an alternative mac method. The important thing is that the mac method
123 is a function with an interface similar to hashlib's. Required
124 methods are update() and digest().
125
126 Example usage:
127
128 >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
129 >>> x["foo"]
130 42
131 >>> x["baz"]
132 (1, 2, 3)
133 >>> x["blafasel"] = 23
134 >>> x.should_save
135 True
136
137 :param data: the initial data. Either a dict, list of tuples or `None`.
138 :param secret_key: the secret key. If set to `None` or not
139 specified, it has to be set before :meth:`serialize` is called.
140 :param new: The initial value of the `new` flag.
141 """
142
143 #: The hash method to use. This has to be a module with a new()
144 #: function, or a function that creates a hashlib object, such as
145 #: `hashlib.md5`. Subclasses can override this attribute. The default
146 #: hash is sha1. Make sure to wrap this in staticmethod() if you
147 #: store an arbitrary function there, such as hashlib.sha1, which
148 #: might be implemented as a plain function.
149 hash_method = staticmethod(_default_hash)
150
151 #: The module used for serialization. Should have a ``dumps`` and a
152 #: ``loads`` method that takes bytes. The default is :mod:`pickle`.
153 #:
154 #: .. versionchanged:: 0.15
155 #: The default of ``pickle`` will change to :mod:`json` in 1.0.
156 serialization_method = pickle
157
158 #: Whether the contents should be base64 quoted. This can be disabled
159 #: if the serialization process returns cookie-safe strings only.
160 quote_base64 = True
161
162 def __init__(self, data=None, secret_key=None, new=True):
163 ModificationTrackingDict.__init__(self, data or ())
164 # explicitly convert it into a bytestring because python 2.6
165 # no longer performs an implicit string conversion on hmac
166 if secret_key is not None:
167 secret_key = to_bytes(secret_key, "utf-8")
168 self.secret_key = secret_key
169 self.new = new
170
171 if self.serialization_method is pickle:
172 warnings.warn(
173 "The default 'SecureCookie.serialization_method' will"
174 " change from pickle to json in version 1.0. To upgrade"
175 " existing tokens, override 'unquote' to try pickle if"
176 " json fails.",
177 stacklevel=2,
178 )
179
180 def __repr__(self):
181 return "<%s %s%s>" % (
182 self.__class__.__name__,
183 dict.__repr__(self),
184 "*" if self.should_save else "",
185 )
186
187 @property
188 def should_save(self):
189 """True if the session should be saved. By default this is only true
190 for :attr:`modified` cookies, not :attr:`new`.
191 """
192 return self.modified
193
194 @classmethod
195 def quote(cls, value):
196 """Quote the value for the cookie. This can be any object supported
197 by :attr:`serialization_method`.
198
199 :param value: the value to quote.
200 """
201 if cls.serialization_method is not None:
202 value = cls.serialization_method.dumps(value)
203 if cls.quote_base64:
204 value = b"".join(
205 base64.b64encode(to_bytes(value, "utf8")).splitlines()
206 ).strip()
207 return value
208
209 @classmethod
210 def unquote(cls, value):
211 """Unquote the value for the cookie. If unquoting does not work a
212 :exc:`UnquoteError` is raised.
213
214 :param value: the value to unquote.
215 """
216 try:
217 if cls.quote_base64:
218 value = base64.b64decode(value)
219 if cls.serialization_method is not None:
220 value = cls.serialization_method.loads(value)
221 return value
222 except Exception:
223 # unfortunately pickle and other serialization modules can
224 # cause pretty much any error here. if we get one we catch it
225 # and convert it into an UnquoteError
226 raise UnquoteError()
227
228 def serialize(self, expires=None):
229 """Serialize the secure cookie into a string.
230
231 If expires is provided, the session will be automatically invalidated
232 after expiration when you unserialize it. This provides better
233 protection against session cookie theft.
234
235 :param expires: an optional expiration date for the cookie (a
236 :class:`datetime.datetime` object)
237 """
238 if self.secret_key is None:
239 raise RuntimeError("no secret key defined")
240 if expires:
241 self["_expires"] = _date_to_unix(expires)
242 result = []
243 mac = hmac(self.secret_key, None, self.hash_method)
244 for key, value in sorted(self.items()):
245 result.append(
246 (
247 "%s=%s" % (url_quote_plus(key), self.quote(value).decode("ascii"))
248 ).encode("ascii")
249 )
250 mac.update(b"|" + result[-1])
251 return b"?".join([base64.b64encode(mac.digest()).strip(), b"&".join(result)])
252
253 @classmethod
254 def unserialize(cls, string, secret_key):
255 """Load the secure cookie from a serialized string.
256
257 :param string: the cookie value to unserialize.
258 :param secret_key: the secret key used to serialize the cookie.
259 :return: a new :class:`SecureCookie`.
260 """
261 if isinstance(string, text_type):
262 string = string.encode("utf-8", "replace")
263 if isinstance(secret_key, text_type):
264 secret_key = secret_key.encode("utf-8", "replace")
265 try:
266 base64_hash, data = string.split(b"?", 1)
267 except (ValueError, IndexError):
268 items = ()
269 else:
270 items = {}
271 mac = hmac(secret_key, None, cls.hash_method)
272 for item in data.split(b"&"):
273 mac.update(b"|" + item)
274 if b"=" not in item:
275 items = None
276 break
277 key, value = item.split(b"=", 1)
278 # try to make the key a string
279 key = url_unquote_plus(key.decode("ascii"))
280 try:
281 key = to_native(key)
282 except UnicodeError:
283 pass
284 items[key] = value
285
286 # no parsing error and the mac looks okay, we can now
287 # securely unpickle our cookie.
288 try:
289 client_hash = base64.b64decode(base64_hash)
290 except TypeError:
291 items = client_hash = None
292 if items is not None and safe_str_cmp(client_hash, mac.digest()):
293 try:
294 for key, value in iteritems(items):
295 items[key] = cls.unquote(value)
296 except UnquoteError:
297 items = ()
298 else:
299 if "_expires" in items:
300 if time() > items["_expires"]:
301 items = ()
302 else:
303 del items["_expires"]
304 else:
305 items = ()
306 return cls(items, secret_key, False)
307
308 @classmethod
309 def load_cookie(cls, request, key="session", secret_key=None):
310 """Loads a :class:`SecureCookie` from a cookie in request. If the
311 cookie is not set, a new :class:`SecureCookie` instance is
312 returned.
313
314 :param request: a request object that has a `cookies` attribute
315 which is a dict of all cookie values.
316 :param key: the name of the cookie.
317 :param secret_key: the secret key used to unquote the cookie.
318 Always provide the value even though it has
319 no default!
320 """
321 data = request.cookies.get(key)
322 if not data:
323 return cls(secret_key=secret_key)
324 return cls.unserialize(data, secret_key)
325
326 def save_cookie(
327 self,
328 response,
329 key="session",
330 expires=None,
331 session_expires=None,
332 max_age=None,
333 path="/",
334 domain=None,
335 secure=None,
336 httponly=False,
337 force=False,
338 ):
339 """Saves the SecureCookie in a cookie on response object. All
340 parameters that are not described here are forwarded directly
341 to :meth:`~BaseResponse.set_cookie`.
342
343 :param response: a response object that has a
344 :meth:`~BaseResponse.set_cookie` method.
345 :param key: the name of the cookie.
346 :param session_expires: the expiration date of the secure cookie
347 stored information. If this is not provided
348 the cookie `expires` date is used instead.
349 """
350 if force or self.should_save:
351 data = self.serialize(session_expires or expires)
352 response.set_cookie(
353 key,
354 data,
355 expires=expires,
356 max_age=max_age,
357 path=path,
358 domain=domain,
359 secure=secure,
360 httponly=httponly,
361 )
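The signing scheme in ``serialize``/``unserialize`` above is a base64 HMAC digest joined to the payload with ``?``, verified with a constant-time comparison before the payload is trusted. A minimal stdlib sketch of that scheme (hypothetical ``sign_cookie``/``unsign_cookie`` helpers, using json rather than pickle as the class docstring itself recommends for 1.0, and a flat payload rather than per-key quoting) looks like this:

```python
import base64
import hashlib
import hmac
import json


def sign_cookie(data, secret_key):
    """Serialize data and append an HMAC-SHA1 digest, joined by '?'."""
    payload = base64.b64encode(json.dumps(data, sort_keys=True).encode())
    digest = hmac.new(secret_key, payload, hashlib.sha1).digest()
    return base64.b64encode(digest) + b"?" + payload


def unsign_cookie(value, secret_key):
    """Verify the digest before trusting the payload; on a mismatch,
    fail silently with an empty session, like unserialize above."""
    b64_digest, payload = value.split(b"?", 1)
    expected = hmac.new(secret_key, payload, hashlib.sha1).digest()
    if not hmac.compare_digest(base64.b64decode(b64_digest), expected):
        return {}
    return json.loads(base64.b64decode(payload))


token = sign_cookie({"foo": 42}, b"deadbeef")
```

Note that, exactly as the module docstring warns, nothing here is encrypted: the payload is only base64, so clients can read it, they just cannot modify it without invalidating the digest.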
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.contrib.sessions
3 ~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module contains some helper classes that make it easier to add
6 session support to a Python WSGI application. For full client-side
7 session storage see :mod:`~werkzeug.contrib.securecookie`, which
8 implements secure, client-side session storage.
9
10
11 Application Integration
12 =======================
13
14 ::
15
16 from werkzeug.contrib.sessions import SessionMiddleware, \
17 FilesystemSessionStore
18
19 app = SessionMiddleware(app, FilesystemSessionStore())
20
21 The current session will then appear in the WSGI environment as
22 `werkzeug.session`. However, it's recommended to use the stores
23 directly in the application rather than the middleware. For very
24 simple scripts a session middleware can be sufficient.
25
26 This module does not implement methods or ways to check if a session is
27 expired. That should be done by a cron job and is storage specific. For
28 example to prune unused filesystem sessions one could check the modified
29 time of the files. If sessions are stored in the database the new()
30 method should add an expiration timestamp for the session.
31
32 For better flexibility it's recommended to not use the middleware but the
33 store and session object directly in the application dispatching::
34
35 session_store = FilesystemSessionStore()
36
37 def application(environ, start_response):
38 request = Request(environ)
39 sid = request.cookies.get('cookie_name')
40 if sid is None:
41 request.session = session_store.new()
42 else:
43 request.session = session_store.get(sid)
44 response = get_the_response_object(request)
45 if request.session.should_save:
46 session_store.save(request.session)
47 response.set_cookie('cookie_name', request.session.sid)
48 return response(environ, start_response)
49
50 :copyright: 2007 Pallets
51 :license: BSD-3-Clause
52 """
53 import os
54 import re
55 import tempfile
56 import warnings
57 from hashlib import sha1
58 from os import path
59 from pickle import dump
60 from pickle import HIGHEST_PROTOCOL
61 from pickle import load
62 from random import random
63 from time import time
64
65 from .._compat import PY2
66 from .._compat import text_type
67 from ..datastructures import CallbackDict
68 from ..filesystem import get_filesystem_encoding
69 from ..posixemulation import rename
70 from ..utils import dump_cookie
71 from ..utils import parse_cookie
72 from ..wsgi import ClosingIterator
73
74 warnings.warn(
75 "'werkzeug.contrib.sessions' is deprecated as of version 0.15 and"
76 " will be removed in version 1.0. It has moved to"
77 " https://github.com/pallets/secure-cookie.",
78 DeprecationWarning,
79 stacklevel=2,
80 )
81
82 _sha1_re = re.compile(r"^[a-f0-9]{40}$")
83
84
85 def _urandom():
86 if hasattr(os, "urandom"):
87 return os.urandom(30)
88 return text_type(random()).encode("ascii")
89
90
91 def generate_key(salt=None):
92 if salt is None:
93 salt = repr(salt).encode("ascii")
94 return sha1(b"".join([salt, str(time()).encode("ascii"), _urandom()])).hexdigest()
95
96
97 class ModificationTrackingDict(CallbackDict):
98 __slots__ = ("modified",)
99
100 def __init__(self, *args, **kwargs):
101 def on_update(self):
102 self.modified = True
103
104 self.modified = False
105 CallbackDict.__init__(self, on_update=on_update)
106 dict.update(self, *args, **kwargs)
107
108 def copy(self):
109 """Create a flat copy of the dict."""
110 missing = object()
111 result = object.__new__(self.__class__)
112 for name in self.__slots__:
113 val = getattr(self, name, missing)
114 if val is not missing:
115 setattr(result, name, val)
116 return result
117
118 def __copy__(self):
119 return self.copy()
120
121
122 class Session(ModificationTrackingDict):
123 """Subclass of a dict that keeps track of direct object changes. Changes
124 in mutable structures are not tracked; for those you have to set
125 `modified` to `True` by hand.
126 """
127
128 __slots__ = ModificationTrackingDict.__slots__ + ("sid", "new")
129
130 def __init__(self, data, sid, new=False):
131 ModificationTrackingDict.__init__(self, data)
132 self.sid = sid
133 self.new = new
134
135 def __repr__(self):
136 return "<%s %s%s>" % (
137 self.__class__.__name__,
138 dict.__repr__(self),
139 "*" if self.should_save else "",
140 )
141
142 @property
143 def should_save(self):
144 """True if the session should be saved.
145
146 .. versionchanged:: 0.6
147 By default the session is now only saved if the session is
148 modified, not if it is new like it was before.
149 """
150 return self.modified
151
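The modification tracking that drives ``should_save`` can be sketched standalone: only writes that go through the dict interface flip the flag, which is why mutating a stored list in place goes unnoticed. The `TrackingDict` class below is a simplified illustration, not the actual `ModificationTrackingDict`:

```python
class TrackingDict(dict):
    """Minimal sketch of modification tracking: flag direct writes only."""

    def __init__(self, *args, **kwargs):
        super(TrackingDict, self).__init__(*args, **kwargs)
        self.modified = False

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        self.modified = True

    def __delitem__(self, key):
        dict.__delitem__(self, key)
        self.modified = True


session = TrackingDict({"user": "admin"})
print(session.modified)   # False: constructing does not count as a change
session["theme"] = "dark"
print(session.modified)   # True: a direct write marks the session dirty
```

This matches the `Session` docstring's caveat: appending to `session["items"]` would not trip the flag, so in that case you set `modified = True` by hand.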
152
153 class SessionStore(object):
154 """Baseclass for all session stores. The Werkzeug contrib module does not
155 implement any useful stores besides the filesystem store; application
156 developers are encouraged to create their own stores.
157
158 :param session_class: The session class to use. Defaults to
159 :class:`Session`.
160 """
161
162 def __init__(self, session_class=None):
163 if session_class is None:
164 session_class = Session
165 self.session_class = session_class
166
167 def is_valid_key(self, key):
168 """Check if a key has the correct format."""
169 return _sha1_re.match(key) is not None
170
171 def generate_key(self, salt=None):
172 """Simple function that generates a new session key."""
173 return generate_key(salt)
174
175 def new(self):
176 """Generate a new session."""
177 return self.session_class({}, self.generate_key(), True)
178
179 def save(self, session):
180 """Save a session."""
181
182 def save_if_modified(self, session):
183 """Save if a session class wants an update."""
184 if session.should_save:
185 self.save(session)
186
187 def delete(self, session):
188 """Delete a session."""
189
190 def get(self, sid):
191 """Get a session for this sid or a new session object. This method
192 has to check if the session key is valid and create a new session if
193 that wasn't the case.
194 """
195 return self.session_class({}, sid, True)
196
197
198 #: used for temporary files by the filesystem session store
199 _fs_transaction_suffix = ".__wz_sess"
200
201
202 class FilesystemSessionStore(SessionStore):
203 """Simple example session store that saves sessions on the filesystem.
204 This store works best on POSIX systems and Windows Vista / Windows
205 Server 2008 and newer.
206
207 .. versionchanged:: 0.6
208 `renew_missing` was added. Previously this was considered `True`,
209 now the default changed to `False` and it can be explicitly
210 deactivated.
211
212 :param path: the path to the folder used for storing the sessions.
213 If not provided the default temporary directory is used.
214 :param filename_template: a string template used to give the session
215 a filename. ``%s`` is replaced with the
216 session id.
217 :param session_class: The session class to use. Defaults to
218 :class:`Session`.
219 :param renew_missing: set to `True` if you want the store to
220 give the user a new sid if the session was
221 not yet saved.
222 """
223
224 def __init__(
225 self,
226 path=None,
227 filename_template="werkzeug_%s.sess",
228 session_class=None,
229 renew_missing=False,
230 mode=0o644,
231 ):
232 SessionStore.__init__(self, session_class)
233 if path is None:
234 path = tempfile.gettempdir()
235 self.path = path
236 if isinstance(filename_template, text_type) and PY2:
237 filename_template = filename_template.encode(get_filesystem_encoding())
238 assert not filename_template.endswith(_fs_transaction_suffix), (
239 "filename templates may not end with %s" % _fs_transaction_suffix
240 )
241 self.filename_template = filename_template
242 self.renew_missing = renew_missing
243 self.mode = mode
244
245 def get_session_filename(self, sid):
246 # out of the box, this should be a strict ASCII subset but
247 # you might reconfigure the session object to have a more
248 # arbitrary string.
249 if isinstance(sid, text_type) and PY2:
250 sid = sid.encode(get_filesystem_encoding())
251 return path.join(self.path, self.filename_template % sid)
252
253 def save(self, session):
254 fn = self.get_session_filename(session.sid)
255 fd, tmp = tempfile.mkstemp(suffix=_fs_transaction_suffix, dir=self.path)
256 f = os.fdopen(fd, "wb")
257 try:
258 dump(dict(session), f, HIGHEST_PROTOCOL)
259 finally:
260 f.close()
261 try:
262 rename(tmp, fn)
263 os.chmod(fn, self.mode)
264 except (IOError, OSError):
265 pass
266
267 def delete(self, session):
268 fn = self.get_session_filename(session.sid)
269 try:
270 os.unlink(fn)
271 except OSError:
272 pass
273
274 def get(self, sid):
275 if not self.is_valid_key(sid):
276 return self.new()
277 try:
278 f = open(self.get_session_filename(sid), "rb")
279 except IOError:
280 if self.renew_missing:
281 return self.new()
282 data = {}
283 else:
284 try:
285 try:
286 data = load(f)
287 except Exception:
288 data = {}
289 finally:
290 f.close()
291 return self.session_class(data, sid, False)
292
293 def list(self):
294 """Lists all sessions in the store.
295
296 .. versionadded:: 0.6
297 """
298 before, after = self.filename_template.split("%s", 1)
299 filename_re = re.compile(
300 r"%s(.{5,})%s$" % (re.escape(before), re.escape(after))
301 )
302 result = []
303 for filename in os.listdir(self.path):
304 #: this is a session that is still being saved.
305 if filename.endswith(_fs_transaction_suffix):
306 continue
307 match = filename_re.match(filename)
308 if match is not None:
309 result.append(match.group(1))
310 return result
311
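`FilesystemSessionStore.save` writes through a temporary file and renames it over the target. That technique can be shown in isolation (the `atomic_save` helper is illustrative; the real store uses `werkzeug.posixemulation.rename` so the overwrite also works on Windows):

```python
import os
import pickle
import tempfile


def atomic_save(data, final_path):
    # Write to a temp file in the same directory, then rename it over the
    # target. On POSIX, rename() is atomic, so a concurrent reader sees
    # either the old session file or the new one, never a partial write.
    dirname = os.path.dirname(final_path) or "."
    fd, tmp = tempfile.mkstemp(suffix=".__wz_sess", dir=dirname)
    with os.fdopen(fd, "wb") as f:
        pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)
    os.rename(tmp, final_path)


target = os.path.join(tempfile.gettempdir(), "werkzeug_demo.sess")
atomic_save({"user": "admin"}, target)
with open(target, "rb") as f:
    print(pickle.load(f))  # {'user': 'admin'}
os.unlink(target)
```

The temp file carries the `.__wz_sess` suffix precisely so that `list()` can skip sessions that are still being written.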
312
313 class SessionMiddleware(object):
314 """A simple middleware that puts the session object of a store provided
315 into the WSGI environ. It automatically sets cookies and restores
316 sessions.
317
318 However a middleware is not the preferred solution because it won't be as
319 fast as sessions managed by the application itself and it puts a key into
320 the WSGI environment that is only relevant for the application, which is
321 against the concept of WSGI.
322
323 The cookie parameters are the same as for the :func:`~dump_cookie`
324 function just prefixed with ``cookie_``. Additionally `max_age` is
325 called `cookie_age` and not `cookie_max_age` because of backwards
326 compatibility.
327 """
328
329 def __init__(
330 self,
331 app,
332 store,
333 cookie_name="session_id",
334 cookie_age=None,
335 cookie_expires=None,
336 cookie_path="/",
337 cookie_domain=None,
338 cookie_secure=None,
339 cookie_httponly=False,
340 cookie_samesite="Lax",
341 environ_key="werkzeug.session",
342 ):
343 self.app = app
344 self.store = store
345 self.cookie_name = cookie_name
346 self.cookie_age = cookie_age
347 self.cookie_expires = cookie_expires
348 self.cookie_path = cookie_path
349 self.cookie_domain = cookie_domain
350 self.cookie_secure = cookie_secure
351 self.cookie_httponly = cookie_httponly
352 self.cookie_samesite = cookie_samesite
353 self.environ_key = environ_key
354
355 def __call__(self, environ, start_response):
356 cookie = parse_cookie(environ.get("HTTP_COOKIE", ""))
357 sid = cookie.get(self.cookie_name, None)
358 if sid is None:
359 session = self.store.new()
360 else:
361 session = self.store.get(sid)
362 environ[self.environ_key] = session
363
364 def injecting_start_response(status, headers, exc_info=None):
365 if session.should_save:
366 self.store.save(session)
367 headers.append(
368 (
369 "Set-Cookie",
370 dump_cookie(
371 self.cookie_name,
372 session.sid,
373 self.cookie_age,
374 self.cookie_expires,
375 self.cookie_path,
376 self.cookie_domain,
377 self.cookie_secure,
378 self.cookie_httponly,
379 samesite=self.cookie_samesite,
380 ),
381 )
382 )
383 return start_response(status, headers, exc_info)
384
385 return ClosingIterator(
386 self.app(environ, injecting_start_response),
387 lambda: self.store.save_if_modified(session),
388 )
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.wrappers
3 ~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 Extra wrappers or mixins contributed by the community. These wrappers
6 can be mixed into request objects to add extra functionality.
7
8 Example::
9
10 from werkzeug.wrappers import Request as RequestBase
11 from werkzeug.contrib.wrappers import JSONRequestMixin
12
13 class Request(RequestBase, JSONRequestMixin):
14 pass
15
16 Afterwards this request object provides the extra functionality of the
17 :class:`JSONRequestMixin`.
18
19 :copyright: 2007 Pallets
20 :license: BSD-3-Clause
21 """
22 import codecs
23 import warnings
24
25 from .._compat import wsgi_decoding_dance
26 from ..exceptions import BadRequest
27 from ..http import dump_options_header
28 from ..http import parse_options_header
29 from ..utils import cached_property
30 from ..wrappers.json import JSONMixin as _JSONMixin
31
32
33 def is_known_charset(charset):
34 """Checks if the given charset is known to Python."""
35 try:
36 codecs.lookup(charset)
37 except LookupError:
38 return False
39 return True
40
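The check above can be exercised directly; `codecs.lookup` also resolves codec aliases, so spellings like ``latin1`` pass as well:

```python
import codecs


def is_known_charset(charset):
    """Return True if Python's codecs registry knows the charset."""
    try:
        codecs.lookup(charset)
    except LookupError:
        return False
    return True


print(is_known_charset("utf-8"))    # True
print(is_known_charset("latin1"))   # True (alias for iso-8859-1)
print(is_known_charset("no-such"))  # False
```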
41
42 class JSONRequestMixin(_JSONMixin):
43 """
44 .. deprecated:: 0.15
45 Moved to :class:`werkzeug.wrappers.json.JSONMixin`. This old
46 import will be removed in version 1.0.
47 """
48
49 @property
50 def json(self):
51 warnings.warn(
52 "'werkzeug.contrib.wrappers.JSONRequestMixin' has moved to"
53 " 'werkzeug.wrappers.json.JSONMixin'. This old import will"
54 " be removed in version 1.0.",
55 DeprecationWarning,
56 stacklevel=2,
57 )
58 return super(JSONRequestMixin, self).json
59
60
61 class ProtobufRequestMixin(object):
62
63 """Add protobuf parsing method to a request object. This will parse the
64 input data through `protobuf`_ if possible.
65
66 :exc:`~werkzeug.exceptions.BadRequest` will be raised if the content-type
67 is not protobuf or if the data itself cannot be parsed properly.
68
69 .. _protobuf: https://github.com/protocolbuffers/protobuf
70
71 .. deprecated:: 0.15
72 This mixin will be removed in version 1.0.
73 """
74
75 #: by default the :class:`ProtobufRequestMixin` will raise a
76 #: :exc:`~werkzeug.exceptions.BadRequest` if the object is not
77 #: initialized. You can bypass that check by setting this
78 #: attribute to `False`.
79 protobuf_check_initialization = True
80
81 def parse_protobuf(self, proto_type):
82 """Parse the data into an instance of proto_type."""
83 warnings.warn(
84 "'werkzeug.contrib.wrappers.ProtobufRequestMixin' is"
85 " deprecated as of version 0.15 and will be removed in"
86 " version 1.0.",
87 DeprecationWarning,
88 stacklevel=2,
89 )
90 if "protobuf" not in self.environ.get("CONTENT_TYPE", ""):
91 raise BadRequest("Not a Protobuf request")
92
93 obj = proto_type()
94 try:
95 obj.ParseFromString(self.data)
96 except Exception:
97 raise BadRequest("Unable to parse Protobuf request")
98
99 # Fail if not all required fields are set
100 if self.protobuf_check_initialization and not obj.IsInitialized():
101 raise BadRequest("Partial Protobuf request")
102
103 return obj
104
105
106 class RoutingArgsRequestMixin(object):
107
108 """This request mixin adds support for the wsgiorg routing args
109 `specification`_.
110
111 .. _specification: https://wsgi.readthedocs.io/en/latest/
112 specifications/routing_args.html
113
114 .. deprecated:: 0.15
115 This mixin will be removed in version 1.0.
116 """
117
118 def _get_routing_args(self):
119 warnings.warn(
120 "'werkzeug.contrib.wrappers.RoutingArgsRequestMixin' is"
121 " deprecated as of version 0.15 and will be removed in"
122 " version 1.0.",
123 DeprecationWarning,
124 stacklevel=2,
125 )
126 return self.environ.get("wsgiorg.routing_args", (()))[0]
127
128 def _set_routing_args(self, value):
129 warnings.warn(
130 "'werkzeug.contrib.wrappers.RoutingArgsRequestMixin' is"
131 " deprecated as of version 0.15 and will be removed in"
132 " version 1.0.",
133 DeprecationWarning,
134 stacklevel=2,
135 )
136 if self.shallow:
137 raise RuntimeError(
138 "A shallow request tried to modify the WSGI "
139 "environment. If you really want to do that, "
140 "set `shallow` to False."
141 )
142 self.environ["wsgiorg.routing_args"] = (value, self.routing_vars)
143
144 routing_args = property(
145 _get_routing_args,
146 _set_routing_args,
147 doc="""
148 The positional URL arguments as `tuple`.""",
149 )
150 del _get_routing_args, _set_routing_args
151
152 def _get_routing_vars(self):
153 warnings.warn(
154 "'werkzeug.contrib.wrappers.RoutingArgsRequestMixin' is"
155 " deprecated as of version 0.15 and will be removed in"
156 " version 1.0.",
157 DeprecationWarning,
158 stacklevel=2,
159 )
160 rv = self.environ.get("wsgiorg.routing_args")
161 if rv is not None:
162 return rv[1]
163 rv = {}
164 if not self.shallow:
165 self.routing_vars = rv
166 return rv
167
168 def _set_routing_vars(self, value):
169 warnings.warn(
170 "'werkzeug.contrib.wrappers.RoutingArgsRequestMixin' is"
171 " deprecated as of version 0.15 and will be removed in"
172 " version 1.0.",
173 DeprecationWarning,
174 stacklevel=2,
175 )
176 if self.shallow:
177 raise RuntimeError(
178 "A shallow request tried to modify the WSGI "
179 "environment. If you really want to do that, "
180 "set `shallow` to False."
181 )
182 self.environ["wsgiorg.routing_args"] = (self.routing_args, value)
183
184 routing_vars = property(
185 _get_routing_vars,
186 _set_routing_vars,
187 doc="""
188 The keyword URL arguments as `dict`.""",
189 )
190 del _get_routing_vars, _set_routing_vars
191
192
193 class ReverseSlashBehaviorRequestMixin(object):
194
195 """This mixin reverses the trailing slash behavior of :attr:`script_root`
196 and :attr:`path`. This makes it possible to use :func:`~urlparse.urljoin`
197 directly on the paths.
198
199 Because it changes the behavior of :class:`Request` this class has to be
200 mixed in *before* the actual request class::
201
202 class MyRequest(ReverseSlashBehaviorRequestMixin, Request):
203 pass
204
205 This example shows the differences (for an application mounted on
206 `/application` and the request going to `/application/foo/bar`):
207
208 +---------------+-------------------+---------------------+
209 | | normal behavior | reverse behavior |
210 +===============+===================+=====================+
211 | `script_root` | ``/application`` | ``/application/`` |
212 +---------------+-------------------+---------------------+
213 | `path` | ``/foo/bar`` | ``foo/bar`` |
214 +---------------+-------------------+---------------------+
215
216 .. deprecated:: 0.15
217 This mixin will be removed in version 1.0.
218 """
219
220 @cached_property
221 def path(self):
222 """Requested path as unicode. This works a bit like the regular path
223 info in the WSGI environment but will not include a leading slash.
224 """
225 warnings.warn(
226 "'werkzeug.contrib.wrappers.ReverseSlashBehaviorRequestMixin'"
227 " is deprecated as of version 0.15 and will be removed in"
228 " version 1.0.",
229 DeprecationWarning,
230 stacklevel=2,
231 )
232 path = wsgi_decoding_dance(
233 self.environ.get("PATH_INFO") or "", self.charset, self.encoding_errors
234 )
235 return path.lstrip("/")
236
237 @cached_property
238 def script_root(self):
239 """The root path of the script includling a trailing slash."""
240 warnings.warn(
241 "'werkzeug.contrib.wrappers.ReverseSlashBehaviorRequestMixin'"
242 " is deprecated as of version 0.15 and will be removed in"
243 " version 1.0.",
244 DeprecationWarning,
245 stacklevel=2,
246 )
247 path = wsgi_decoding_dance(
248 self.environ.get("SCRIPT_NAME") or "", self.charset, self.encoding_errors
249 )
250 return path.rstrip("/") + "/"
251
252
253 class DynamicCharsetRequestMixin(object):
254
255 """"If this mixin is mixed into a request class it will provide
256 a dynamic `charset` attribute. This means that if the charset is
257 transmitted in the content type headers it's used from there.
258
259 Because it changes the behavior of :class:`Request` this class has
260 to be mixed in *before* the actual request class::
261
262 class MyRequest(DynamicCharsetRequestMixin, Request):
263 pass
264
265 By default the request object assumes that the URL charset is the
266 same as the data charset. If the charset varies on each request
267 based on the transmitted data it's not a good idea to let the URLs
268 change based on that. Most browsers assume either utf-8 or latin1
269 for the URLs if they have trouble figuring it out. It's strongly
270 recommended to set the URL charset to utf-8::
271
272 class MyRequest(DynamicCharsetRequestMixin, Request):
273 url_charset = 'utf-8'
274
275 .. deprecated:: 0.15
276 This mixin will be removed in version 1.0.
277
278 .. versionadded:: 0.6
279 """
280
281 #: the default charset that is assumed if the content type header
282 #: is missing or does not contain a charset parameter. The default
283 #: is latin1 which is what HTTP specifies as default charset.
284 #: You may however want to set this to utf-8 to better support
285 #: browsers that do not transmit a charset for incoming data.
286 default_charset = "latin1"
287
288 def unknown_charset(self, charset):
289 """Called if a charset was provided but is not supported by
290 the Python codecs module. By default latin1 is then assumed
291 so as not to lose any information; you may override this method to
292 change the behavior.
293
294 :param charset: the charset that was not found.
295 :return: the replacement charset.
296 """
297 return "latin1"
298
299 @cached_property
300 def charset(self):
301 """The charset from the content type."""
302 warnings.warn(
303 "'werkzeug.contrib.wrappers.DynamicCharsetRequestMixin'"
304 " is deprecated as of version 0.15 and will be removed in"
305 " version 1.0.",
306 DeprecationWarning,
307 stacklevel=2,
308 )
309 header = self.environ.get("CONTENT_TYPE")
310 if header:
311 ct, options = parse_options_header(header)
312 charset = options.get("charset")
313 if charset:
314 if is_known_charset(charset):
315 return charset
316 return self.unknown_charset(charset)
317 return self.default_charset
318
319
320 class DynamicCharsetResponseMixin(object):
321
322 """If this mixin is mixed into a response class it will provide
323 a dynamic `charset` attribute. This means that the charset is
324 looked up and stored in the `Content-Type` header and updates
325 itself automatically. This also means a small performance hit but
326 can be useful if you're working with different charsets on
327 responses.
328
329 Because the charset attribute is now a property at class level, the
330 default value is stored in `default_charset`.
331
332 Because it changes the behavior of :class:`Response` this class has
333 to be mixed in *before* the actual response class::
334
335 class MyResponse(DynamicCharsetResponseMixin, Response):
336 pass
337
338 .. deprecated:: 0.15
339 This mixin will be removed in version 1.0.
340
341 .. versionadded:: 0.6
342 """
343
344 #: the default charset.
345 default_charset = "utf-8"
346
347 def _get_charset(self):
348 warnings.warn(
349 "'werkzeug.contrib.wrappers.DynamicCharsetResponseMixin'"
350 " is deprecated as of version 0.15 and will be removed in"
351 " version 1.0.",
352 DeprecationWarning,
353 stacklevel=2,
354 )
355 header = self.headers.get("content-type")
356 if header:
357 charset = parse_options_header(header)[1].get("charset")
358 if charset:
359 return charset
360 return self.default_charset
361
362 def _set_charset(self, charset):
363 warnings.warn(
364 "'werkzeug.contrib.wrappers.DynamicCharsetResponseMixin'"
365 " is deprecated as of version 0.15 and will be removed in"
366 " version 1.0.",
367 DeprecationWarning,
368 stacklevel=2,
369 )
370 header = self.headers.get("content-type")
371 ct, options = parse_options_header(header)
372 if not ct:
373 raise TypeError("Cannot set charset if Content-Type header is missing.")
374 options["charset"] = charset
375 self.headers["Content-Type"] = dump_options_header(ct, options)
376
377 charset = property(
378 _get_charset,
379 _set_charset,
380 doc="""
381 The charset for the response. It's stored inside the
382 Content-Type header as a parameter.""",
383 )
384 del _get_charset, _set_charset
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.datastructures
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module provides mixins and classes with an immutable interface.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import codecs
11 import mimetypes
12 import re
13 from copy import deepcopy
14 from itertools import repeat
15
16 from ._compat import BytesIO
17 from ._compat import collections_abc
18 from ._compat import integer_types
19 from ._compat import iteritems
20 from ._compat import iterkeys
21 from ._compat import iterlists
22 from ._compat import itervalues
23 from ._compat import make_literal_wrapper
24 from ._compat import PY2
25 from ._compat import string_types
26 from ._compat import text_type
27 from ._compat import to_native
28 from ._internal import _missing
29 from .filesystem import get_filesystem_encoding
30
31 _locale_delim_re = re.compile(r"[_-]")
32
33
34 def is_immutable(self):
35 raise TypeError("%r objects are immutable" % self.__class__.__name__)
36
37
38 def iter_multi_items(mapping):
39 """Iterates over the items of a mapping yielding keys and values
40 without dropping any from more complex structures.
41 """
42 if isinstance(mapping, MultiDict):
43 for item in iteritems(mapping, multi=True):
44 yield item
45 elif isinstance(mapping, dict):
46 for key, value in iteritems(mapping):
47 if isinstance(value, (tuple, list)):
48 for value in value:
49 yield key, value
50 else:
51 yield key, value
52 else:
53 for item in mapping:
54 yield item
55
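The plain-dict branch of `iter_multi_items` — list and tuple values expand into one ``(key, value)`` pair per element — can be sketched without the `MultiDict` case (the `iter_items_flat` name is illustrative):

```python
def iter_items_flat(mapping):
    # Sketch of the plain-dict branch of iter_multi_items: list/tuple
    # values expand into one (key, value) pair per element, so no value
    # from a multi-valued key is dropped.
    for key, value in mapping.items():
        if isinstance(value, (tuple, list)):
            for v in value:
                yield key, v
        else:
            yield key, value


pairs = sorted(iter_items_flat({"a": ["b", "c"], "d": "e"}))
print(pairs)  # [('a', 'b'), ('a', 'c'), ('d', 'e')]
```

This flattening is what lets `MultiDict` be constructed from an ordinary dict whose values are lists.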
56
57 def native_itermethods(names):
58 if not PY2:
59 return lambda x: x
60
61 def setviewmethod(cls, name):
62 viewmethod_name = "view%s" % name
63 repr_name = "view_%s" % name
64
65 def viewmethod(self, *a, **kw):
66 return ViewItems(self, name, repr_name, *a, **kw)
67
68 viewmethod.__name__ = viewmethod_name
69 viewmethod.__doc__ = "`%s()` object providing a view on %s" % (
70 viewmethod_name,
71 name,
72 )
73 setattr(cls, viewmethod_name, viewmethod)
74
75 def setitermethod(cls, name):
76 itermethod = getattr(cls, name)
77 setattr(cls, "iter%s" % name, itermethod)
78
79 def listmethod(self, *a, **kw):
80 return list(itermethod(self, *a, **kw))
81
82 listmethod.__name__ = name
83 listmethod.__doc__ = "Like :py:meth:`iter%s`, but returns a list." % name
84 setattr(cls, name, listmethod)
85
86 def wrap(cls):
87 for name in names:
88 setitermethod(cls, name)
89 setviewmethod(cls, name)
90 return cls
91
92 return wrap
93
94
95 class ImmutableListMixin(object):
96 """Makes a :class:`list` immutable.
97
98 .. versionadded:: 0.5
99
100 :private:
101 """
102
103 _hash_cache = None
104
105 def __hash__(self):
106 if self._hash_cache is not None:
107 return self._hash_cache
108 rv = self._hash_cache = hash(tuple(self))
109 return rv
110
111 def __reduce_ex__(self, protocol):
112 return type(self), (list(self),)
113
114 def __delitem__(self, key):
115 is_immutable(self)
116
117 def __iadd__(self, other):
118 is_immutable(self)
119
120 __imul__ = __iadd__
121
122 def __setitem__(self, key, value):
123 is_immutable(self)
124
125 def append(self, item):
126 is_immutable(self)
127
128 remove = append
129
130 def extend(self, iterable):
131 is_immutable(self)
132
133 def insert(self, pos, value):
134 is_immutable(self)
135
136 def pop(self, index=-1):
137 is_immutable(self)
138
139 def reverse(self):
140 is_immutable(self)
141
142 def sort(self, cmp=None, key=None, reverse=None):
143 is_immutable(self)
144
145
146 class ImmutableList(ImmutableListMixin, list):
147 """An immutable :class:`list`.
148
149 .. versionadded:: 0.5
150
151 :private:
152 """
153
154 def __repr__(self):
155 return "%s(%s)" % (self.__class__.__name__, list.__repr__(self))
156
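The mixin above makes a list immutable purely by overriding every mutating method to raise, which in turn makes hashing safe. A condensed standalone sketch (the `FrozenList` class is illustrative, covering only two mutators rather than the full set):

```python
def _is_immutable(obj):
    raise TypeError("%r objects are immutable" % obj.__class__.__name__)


class FrozenList(list):
    """Sketch of the ImmutableListMixin idea: override mutating methods
    of list to raise TypeError instead of modifying the contents."""

    def __setitem__(self, key, value):
        _is_immutable(self)

    def append(self, item):
        _is_immutable(self)

    def __hash__(self):
        # Immutability makes hashing safe; hash the tuple of contents.
        return hash(tuple(self))


items = FrozenList([1, 2, 3])
try:
    items.append(4)
except TypeError as exc:
    print(exc)  # 'FrozenList' objects are immutable
print(hash(items) == hash((1, 2, 3)))  # True
```

The real mixin also caches the hash in `_hash_cache`, which is valid only because the contents can never change.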
157
158 class ImmutableDictMixin(object):
159 """Makes a :class:`dict` immutable.
160
161 .. versionadded:: 0.5
162
163 :private:
164 """
165
166 _hash_cache = None
167
168 @classmethod
169 def fromkeys(cls, keys, value=None):
170 instance = super(cls, cls).__new__(cls)
171 instance.__init__(zip(keys, repeat(value)))
172 return instance
173
174 def __reduce_ex__(self, protocol):
175 return type(self), (dict(self),)
176
177 def _iter_hashitems(self):
178 return iteritems(self)
179
180 def __hash__(self):
181 if self._hash_cache is not None:
182 return self._hash_cache
183 rv = self._hash_cache = hash(frozenset(self._iter_hashitems()))
184 return rv
185
186 def setdefault(self, key, default=None):
187 is_immutable(self)
188
189 def update(self, *args, **kwargs):
190 is_immutable(self)
191
192 def pop(self, key, default=None):
193 is_immutable(self)
194
195 def popitem(self):
196 is_immutable(self)
197
198 def __setitem__(self, key, value):
199 is_immutable(self)
200
201 def __delitem__(self, key):
202 is_immutable(self)
203
204 def clear(self):
205 is_immutable(self)
206
207
208 class ImmutableMultiDictMixin(ImmutableDictMixin):
209 """Makes a :class:`MultiDict` immutable.
210
211 .. versionadded:: 0.5
212
213 :private:
214 """
215
216 def __reduce_ex__(self, protocol):
217 return type(self), (list(iteritems(self, multi=True)),)
218
219 def _iter_hashitems(self):
220 return iteritems(self, multi=True)
221
222 def add(self, key, value):
223 is_immutable(self)
224
225 def popitemlist(self):
226 is_immutable(self)
227
228 def poplist(self, key):
229 is_immutable(self)
230
231 def setlist(self, key, new_list):
232 is_immutable(self)
233
234 def setlistdefault(self, key, default_list=None):
235 is_immutable(self)
236
237
238 class UpdateDictMixin(object):
239 """Makes dicts call `self.on_update` on modifications.
240
241 .. versionadded:: 0.5
242
243 :private:
244 """
245
246 on_update = None
247
248 def calls_update(name): # noqa: B902
249 def oncall(self, *args, **kw):
250 rv = getattr(super(UpdateDictMixin, self), name)(*args, **kw)
251 if self.on_update is not None:
252 self.on_update(self)
253 return rv
254
255 oncall.__name__ = name
256 return oncall
257
258 def setdefault(self, key, default=None):
259 modified = key not in self
260 rv = super(UpdateDictMixin, self).setdefault(key, default)
261 if modified and self.on_update is not None:
262 self.on_update(self)
263 return rv
264
265 def pop(self, key, default=_missing):
266 modified = key in self
267 if default is _missing:
268 rv = super(UpdateDictMixin, self).pop(key)
269 else:
270 rv = super(UpdateDictMixin, self).pop(key, default)
271 if modified and self.on_update is not None:
272 self.on_update(self)
273 return rv
274
275 __setitem__ = calls_update("__setitem__")
276 __delitem__ = calls_update("__delitem__")
277 clear = calls_update("clear")
278 popitem = calls_update("popitem")
279 update = calls_update("update")
280 del calls_update
281
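The `on_update` hook above fires after each successful mutation, with `setdefault` notifying only when it actually inserts a new key. A minimal standalone sketch of that behavior (the `NotifyingDict` class is illustrative, not the actual mixin):

```python
class NotifyingDict(dict):
    """Minimal sketch of UpdateDictMixin: call on_update after mutations."""

    on_update = None

    def __setitem__(self, key, value):
        dict.__setitem__(self, key, value)
        if self.on_update is not None:
            self.on_update(self)

    def setdefault(self, key, default=None):
        # Only inserting a *new* key counts as a modification,
        # mirroring the mixin's setdefault override.
        modified = key not in self
        rv = dict.setdefault(self, key, default)
        if modified and self.on_update is not None:
            self.on_update(self)
        return rv


calls = []
d = NotifyingDict()
d.on_update = calls.append
d["a"] = 1            # triggers on_update
d.setdefault("a", 2)  # existing key: no notification
print(len(calls))     # 1
```

This is the mechanism `CallbackDict` builds on, and hence how `ModificationTrackingDict` learns to set its `modified` flag.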
282
283 class TypeConversionDict(dict):
284 """Works like a regular dict but the :meth:`get` method can perform
285 type conversions. :class:`MultiDict` and :class:`CombinedMultiDict`
286 are subclasses of this class and provide the same feature.
287
288 .. versionadded:: 0.5
289 """
290
291 def get(self, key, default=None, type=None):
292 """Return the default value if the requested data doesn't exist.
293 If `type` is provided and is a callable it should convert the value,
294 return it or raise a :exc:`ValueError` if that is not possible. In
295 this case the function will return the default as if the value was not
296 found:
297
298 >>> d = TypeConversionDict(foo='42', bar='blub')
299 >>> d.get('foo', type=int)
300 42
301 >>> d.get('bar', -1, type=int)
302 -1
303
304 :param key: The key to be looked up.
305 :param default: The default value to be returned if the key can't
306 be looked up. If not further specified `None` is
307 returned.
308 :param type: A callable that is used to cast the value in the
309 :class:`MultiDict`. If a :exc:`ValueError` is raised
310 by this callable the default value is returned.
311 """
312 try:
313 rv = self[key]
314 except KeyError:
315 return default
316 if type is not None:
317 try:
318 rv = type(rv)
319 except ValueError:
320 rv = default
321 return rv
322
323
324 class ImmutableTypeConversionDict(ImmutableDictMixin, TypeConversionDict):
325 """Works like a :class:`TypeConversionDict` but does not support
326 modifications.
327
328 .. versionadded:: 0.5
329 """
330
331 def copy(self):
332 """Return a shallow mutable copy of this object. Keep in mind that
333 the standard library's :func:`copy` function is a no-op for this class,
334 as it is for any other Python immutable type (e.g. :class:`tuple`).
335 """
336 return TypeConversionDict(self)
337
338 def __copy__(self):
339 return self
340
341
342 class ViewItems(object):
343 def __init__(self, multi_dict, method, repr_name, *a, **kw):
344 self.__multi_dict = multi_dict
345 self.__method = method
346 self.__repr_name = repr_name
347 self.__a = a
348 self.__kw = kw
349
350 def __get_items(self):
351 return getattr(self.__multi_dict, self.__method)(*self.__a, **self.__kw)
352
353 def __repr__(self):
354 return "%s(%r)" % (self.__repr_name, list(self.__get_items()))
355
356 def __iter__(self):
357 return iter(self.__get_items())
358
359
360 @native_itermethods(["keys", "values", "items", "lists", "listvalues"])
361 class MultiDict(TypeConversionDict):
362 """A :class:`MultiDict` is a dictionary subclass customized to deal with
363 multiple values for the same key which is for example used by the parsing
364 functions in the wrappers. This is necessary because some HTML form
365 elements pass multiple values for the same key.
366
367 :class:`MultiDict` implements all standard dictionary methods.
368 Internally, it saves all values for a key as a list, but the standard dict
369 access methods will only return the first value for a key. If you want to
370 gain access to the other values, too, you have to use the `list` methods as
371 explained below.
372
373 Basic Usage:
374
375 >>> d = MultiDict([('a', 'b'), ('a', 'c')])
376 >>> d
377 MultiDict([('a', 'b'), ('a', 'c')])
378 >>> d['a']
379 'b'
380 >>> d.getlist('a')
381 ['b', 'c']
382 >>> 'a' in d
383 True
384
385 It behaves like a normal dict; thus, all dict functions will only return
386 the first value when multiple values for one key are found.
387
388 From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a
389 subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will
390 render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP
391 exceptions.
392
393 A :class:`MultiDict` can be constructed from an iterable of
394 ``(key, value)`` tuples, a dict, a :class:`MultiDict` or, from Werkzeug
395 0.2 onwards, some keyword parameters.
396
397 :param mapping: the initial value for the :class:`MultiDict`. Either a
398 regular dict, an iterable of ``(key, value)`` tuples
399 or `None`.
400 """
401
402 def __init__(self, mapping=None):
403 if isinstance(mapping, MultiDict):
404 dict.__init__(self, ((k, l[:]) for k, l in iterlists(mapping)))
405 elif isinstance(mapping, dict):
406 tmp = {}
407 for key, value in iteritems(mapping):
408 if isinstance(value, (tuple, list)):
409 if len(value) == 0:
410 continue
411 value = list(value)
412 else:
413 value = [value]
414 tmp[key] = value
415 dict.__init__(self, tmp)
416 else:
417 tmp = {}
418 for key, value in mapping or ():
419 tmp.setdefault(key, []).append(value)
420 dict.__init__(self, tmp)
421
422 def __getstate__(self):
423 return dict(self.lists())
424
425 def __setstate__(self, value):
426 dict.clear(self)
427 dict.update(self, value)
428
429 def __getitem__(self, key):
430 """Return the first data value for this key;
431 raises KeyError if not found.
432
433 :param key: The key to be looked up.
434 :raise KeyError: if the key does not exist.
435 """
436
437 if key in self:
438 lst = dict.__getitem__(self, key)
439 if len(lst) > 0:
440 return lst[0]
441 raise exceptions.BadRequestKeyError(key)
442
443 def __setitem__(self, key, value):
444 """Like :meth:`add` but removes an existing key first.
445
446 :param key: the key for the value.
447 :param value: the value to set.
448 """
449 dict.__setitem__(self, key, [value])
450
451 def add(self, key, value):
452 """Adds a new value for the key.
453
454 .. versionadded:: 0.6
455
456 :param key: the key for the value.
457 :param value: the value to add.
458 """
459 dict.setdefault(self, key, []).append(value)
460
461 def getlist(self, key, type=None):
462 """Return the list of items for a given key. If that key is not in the
463 `MultiDict`, the return value will be an empty list. Just as `get`,
464 `getlist` accepts a `type` parameter. All items will be converted
465 with the callable defined there.
466
467 :param key: The key to be looked up.
468 :param type: A callable that is used to cast the value in the
469 :class:`MultiDict`. If a :exc:`ValueError` is raised
470 by this callable the value will be removed from the list.
471 :return: a :class:`list` of all the values for the key.
472 """
473 try:
474 rv = dict.__getitem__(self, key)
475 except KeyError:
476 return []
477 if type is None:
478 return list(rv)
479 result = []
480 for item in rv:
481 try:
482 result.append(type(item))
483 except ValueError:
484 pass
485 return result
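Note how the conversion behavior differs from ``get``: items that fail conversion are silently dropped rather than replaced with a default. A minimal standalone sketch of that loop (``filtered_list`` is a hypothetical name, not the Werkzeug API):

```python
def filtered_list(values, type=None):
    """Sketch of the getlist conversion loop: values that raise
    ValueError under ``type`` are removed from the result."""
    if type is None:
        return list(values)
    result = []
    for item in values:
        try:
            result.append(type(item))
        except ValueError:
            pass  # drop unconvertible items instead of substituting a default
    return result

print(filtered_list(["1", "x", "3"], type=int))  # [1, 3]
```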
486
487 def setlist(self, key, new_list):
488 """Remove the old values for a key and add new ones. Note that the list
489 you pass the values in will be shallow-copied before it is inserted in
490 the dictionary.
491
492 >>> d = MultiDict()
493 >>> d.setlist('foo', ['1', '2'])
494 >>> d['foo']
495 '1'
496 >>> d.getlist('foo')
497 ['1', '2']
498
499 :param key: The key for which the values are set.
500 :param new_list: An iterable with the new values for the key. Old values
501 are removed first.
502 """
503 dict.__setitem__(self, key, list(new_list))
504
505 def setdefault(self, key, default=None):
506 """Returns the value for the key if it is in the dict, otherwise it
507 returns `default` and sets that value for `key`.
508
509 :param key: The key to be looked up.
510 :param default: The default value to be returned if the key is not
511 in the dict. If not further specified it's `None`.
512 """
513 if key not in self:
514 self[key] = default
515 else:
516 default = self[key]
517 return default
518
519 def setlistdefault(self, key, default_list=None):
520 """Like `setdefault` but sets multiple values. The list returned
521 is not a copy, but the list that is actually used internally. This
522 means that you can put new values into the dict by appending items
523 to the list:
524
525 >>> d = MultiDict({"foo": 1})
526 >>> d.setlistdefault("foo").extend([2, 3])
527 >>> d.getlist("foo")
528 [1, 2, 3]
529
530 :param key: The key to be looked up.
531 :param default_list: An iterable of default values. It is either copied
532 (in case it was a list) or converted into a list
533 before being returned.
534 :return: a :class:`list`
535 """
536 if key not in self:
537 default_list = list(default_list or ())
538 dict.__setitem__(self, key, default_list)
539 else:
540 default_list = dict.__getitem__(self, key)
541 return default_list
542
543 def items(self, multi=False):
544 """Return an iterator of ``(key, value)`` pairs.
545
546 :param multi: If set to `True` the iterator returned will have a pair
547 for each value of each key. Otherwise it will only
548 contain pairs for the first value of each key.
549 """
550
551 for key, values in iteritems(dict, self):
552 if multi:
553 for value in values:
554 yield key, value
555 else:
556 yield key, values[0]
557
558 def lists(self):
559 """Return a iterator of ``(key, values)`` pairs, where values is the list
560 of all values associated with the key."""
561
562 for key, values in iteritems(dict, self):
563 yield key, list(values)
564
565 def keys(self):
566 return iterkeys(dict, self)
567
568 __iter__ = keys
569
570 def values(self):
571 """Returns an iterator of the first value on every key's value list."""
572 for values in itervalues(dict, self):
573 yield values[0]
574
575 def listvalues(self):
576 """Return an iterator of all values associated with a key. Zipping
577 :meth:`keys` and this is the same as calling :meth:`lists`:
578
579 >>> d = MultiDict({"foo": [1, 2, 3]})
580 >>> zip(d.keys(), d.listvalues()) == d.lists()
581 True
582 """
583
584 return itervalues(dict, self)
585
586 def copy(self):
587 """Return a shallow copy of this object."""
588 return self.__class__(self)
589
590 def deepcopy(self, memo=None):
591 """Return a deep copy of this object."""
592 return self.__class__(deepcopy(self.to_dict(flat=False), memo))
593
594 def to_dict(self, flat=True):
595 """Return the contents as regular dict. If `flat` is `True` the
596 returned dict will only have the first item present, if `flat` is
597 `False` all values will be returned as lists.
598
599 :param flat: If set to `False` the dict returned will have lists
600 with all the values in it. Otherwise it will only
601 contain the first value for each key.
602 :return: a :class:`dict`
603 """
604 if flat:
605 return dict(iteritems(self))
606 return dict(self.lists())
607
608 def update(self, other_dict):
609 """update() extends rather than replaces existing key lists:
610
611 >>> a = MultiDict({'x': 1})
612 >>> b = MultiDict({'x': 2, 'y': 3})
613 >>> a.update(b)
614 >>> a
615 MultiDict([('y', 3), ('x', 1), ('x', 2)])
616
617 If the value list for a key in ``other_dict`` is empty, no new values
618 will be added to the dict and the key will not be created:
619
620 >>> x = {'empty_list': []}
621 >>> y = MultiDict()
622 >>> y.update(x)
623 >>> y
624 MultiDict([])
625 """
626 for key, value in iter_multi_items(other_dict):
627 MultiDict.add(self, key, value)
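The extend-rather-than-replace semantics of ``update`` can be sketched with a plain dict of lists (``multi_update`` is a hypothetical helper, not the Werkzeug API):

```python
def multi_update(store, pairs):
    """Sketch of MultiDict.update: append every (key, value) pair to the
    key's value list instead of replacing it. Keys with no pairs are
    simply never created."""
    for key, value in pairs:
        store.setdefault(key, []).append(value)

d = {"x": [1]}
multi_update(d, [("x", 2), ("y", 3)])
print(d)  # {'x': [1, 2], 'y': [3]}
```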
628
629 def pop(self, key, default=_missing):
630 """Pop the first item for a list on the dict. Afterwards the
631 key is removed from the dict, so additional values are discarded:
632
633 >>> d = MultiDict({"foo": [1, 2, 3]})
634 >>> d.pop("foo")
635 1
636 >>> "foo" in d
637 False
638
639 :param key: the key to pop.
640 :param default: if provided the value to return if the key was
641 not in the dictionary.
642 """
643 try:
644 lst = dict.pop(self, key)
645
646 if len(lst) == 0:
647 raise exceptions.BadRequestKeyError(key)
648
649 return lst[0]
650 except KeyError:
651 if default is not _missing:
652 return default
653 raise exceptions.BadRequestKeyError(key)
654
655 def popitem(self):
656 """Pop an item from the dict."""
657 try:
658 item = dict.popitem(self)
659
660 if len(item[1]) == 0:
661 raise exceptions.BadRequestKeyError(item)
662
663 return (item[0], item[1][0])
664 except KeyError as e:
665 raise exceptions.BadRequestKeyError(e.args[0])
666
667 def poplist(self, key):
668 """Pop the list for a key from the dict. If the key is not in the dict
669 an empty list is returned.
670
671 .. versionchanged:: 0.5
672 If the key no longer exists, a list is returned instead of
673 raising an error.
674 """
675 return dict.pop(self, key, [])
676
677 def popitemlist(self):
678 """Pop a ``(key, list)`` tuple from the dict."""
679 try:
680 return dict.popitem(self)
681 except KeyError as e:
682 raise exceptions.BadRequestKeyError(e.args[0])
683
684 def __copy__(self):
685 return self.copy()
686
687 def __deepcopy__(self, memo):
688 return self.deepcopy(memo=memo)
689
690 def __repr__(self):
691 return "%s(%r)" % (self.__class__.__name__, list(iteritems(self, multi=True)))
692
693
694 class _omd_bucket(object):
695 """Wraps values in the :class:`OrderedMultiDict`. This makes it
696 possible to keep an order over multiple different keys. It requires
697 a lot of extra memory and slows down access a lot, but makes it
698 possible to access elements in O(1) and iterate in O(n).
699 """
700
701 __slots__ = ("prev", "key", "value", "next")
702
703 def __init__(self, omd, key, value):
704 self.prev = omd._last_bucket
705 self.key = key
706 self.value = value
707 self.next = None
708
709 if omd._first_bucket is None:
710 omd._first_bucket = self
711 if omd._last_bucket is not None:
712 omd._last_bucket.next = self
713 omd._last_bucket = self
714
715 def unlink(self, omd):
716 if self.prev:
717 self.prev.next = self.next
718 if self.next:
719 self.next.prev = self.prev
720 if omd._first_bucket is self:
721 omd._first_bucket = self.next
722 if omd._last_bucket is self:
723 omd._last_bucket = self.prev
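The link/unlink logic above is the standard doubly-linked-list splice; a minimal standalone sketch of the same pattern (``Node`` is a hypothetical class used only for illustration):

```python
class Node:
    """Minimal doubly-linked node mirroring _omd_bucket's prev/next fields."""

    def __init__(self, value, prev=None):
        self.value, self.prev, self.next = value, prev, None
        if prev is not None:
            prev.next = self  # append after ``prev``, as the bucket does

    def unlink(self):
        # Splice this node out in O(1) by rewiring its neighbors.
        if self.prev:
            self.prev.next = self.next
        if self.next:
            self.next.prev = self.prev

a = Node(1)
b = Node(2, a)
c = Node(3, b)
b.unlink()
print(a.next.value)  # 3
print(c.prev.value)  # 1
```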
724
725
726 @native_itermethods(["keys", "values", "items", "lists", "listvalues"])
727 class OrderedMultiDict(MultiDict):
728 """Works like a regular :class:`MultiDict` but preserves the
729 order of the fields. To convert the ordered multi dict into a
730 list you can use the :meth:`items` method and pass it ``multi=True``.
731
732 In general an :class:`OrderedMultiDict` is an order of magnitude
733 slower than a :class:`MultiDict`.
734
735 .. admonition:: note
736
737 Due to a limitation in Python you cannot convert an ordered
738 multi dict into a regular dict by using ``dict(multidict)``.
739 Instead you have to use the :meth:`to_dict` method, otherwise
740 the internal bucket objects are exposed.
741 """
742
743 def __init__(self, mapping=None):
744 dict.__init__(self)
745 self._first_bucket = self._last_bucket = None
746 if mapping is not None:
747 OrderedMultiDict.update(self, mapping)
748
749 def __eq__(self, other):
750 if not isinstance(other, MultiDict):
751 return NotImplemented
752 if isinstance(other, OrderedMultiDict):
753 iter1 = iteritems(self, multi=True)
754 iter2 = iteritems(other, multi=True)
755 try:
756 for k1, v1 in iter1:
757 k2, v2 = next(iter2)
758 if k1 != k2 or v1 != v2:
759 return False
760 except StopIteration:
761 return False
762 try:
763 next(iter2)
764 except StopIteration:
765 return True
766 return False
767 if len(self) != len(other):
768 return False
769 for key, values in iterlists(self):
770 if other.getlist(key) != values:
771 return False
772 return True
773
774 __hash__ = None
775
776 def __ne__(self, other):
777 return not self.__eq__(other)
778
779 def __reduce_ex__(self, protocol):
780 return type(self), (list(iteritems(self, multi=True)),)
781
782 def __getstate__(self):
783 return list(iteritems(self, multi=True))
784
785 def __setstate__(self, values):
786 dict.clear(self)
787 for key, value in values:
788 self.add(key, value)
789
790 def __getitem__(self, key):
791 if key in self:
792 return dict.__getitem__(self, key)[0].value
793 raise exceptions.BadRequestKeyError(key)
794
795 def __setitem__(self, key, value):
796 self.poplist(key)
797 self.add(key, value)
798
799 def __delitem__(self, key):
800 self.pop(key)
801
802 def keys(self):
803 return (key for key, value in iteritems(self))
804
805 __iter__ = keys
806
807 def values(self):
808 return (value for key, value in iteritems(self))
809
810 def items(self, multi=False):
811 ptr = self._first_bucket
812 if multi:
813 while ptr is not None:
814 yield ptr.key, ptr.value
815 ptr = ptr.next
816 else:
817 returned_keys = set()
818 while ptr is not None:
819 if ptr.key not in returned_keys:
820 returned_keys.add(ptr.key)
821 yield ptr.key, ptr.value
822 ptr = ptr.next
823
824 def lists(self):
825 returned_keys = set()
826 ptr = self._first_bucket
827 while ptr is not None:
828 if ptr.key not in returned_keys:
829 yield ptr.key, self.getlist(ptr.key)
830 returned_keys.add(ptr.key)
831 ptr = ptr.next
832
833 def listvalues(self):
834 for _key, values in iterlists(self):
835 yield values
836
837 def add(self, key, value):
838 dict.setdefault(self, key, []).append(_omd_bucket(self, key, value))
839
840 def getlist(self, key, type=None):
841 try:
842 rv = dict.__getitem__(self, key)
843 except KeyError:
844 return []
845 if type is None:
846 return [x.value for x in rv]
847 result = []
848 for item in rv:
849 try:
850 result.append(type(item.value))
851 except ValueError:
852 pass
853 return result
854
855 def setlist(self, key, new_list):
856 self.poplist(key)
857 for value in new_list:
858 self.add(key, value)
859
860 def setlistdefault(self, key, default_list=None):
861 raise TypeError("setlistdefault is unsupported for ordered multi dicts")
862
863 def update(self, mapping):
864 for key, value in iter_multi_items(mapping):
865 OrderedMultiDict.add(self, key, value)
866
867 def poplist(self, key):
868 buckets = dict.pop(self, key, ())
869 for bucket in buckets:
870 bucket.unlink(self)
871 return [x.value for x in buckets]
872
873 def pop(self, key, default=_missing):
874 try:
875 buckets = dict.pop(self, key)
876 except KeyError:
877 if default is not _missing:
878 return default
879 raise exceptions.BadRequestKeyError(key)
880 for bucket in buckets:
881 bucket.unlink(self)
882 return buckets[0].value
883
884 def popitem(self):
885 try:
886 key, buckets = dict.popitem(self)
887 except KeyError as e:
888 raise exceptions.BadRequestKeyError(e.args[0])
889 for bucket in buckets:
890 bucket.unlink(self)
891 return key, buckets[0].value
892
893 def popitemlist(self):
894 try:
895 key, buckets = dict.popitem(self)
896 except KeyError as e:
897 raise exceptions.BadRequestKeyError(e.args[0])
898 for bucket in buckets:
899 bucket.unlink(self)
900 return key, [x.value for x in buckets]
901
902
903 def _options_header_vkw(value, kw):
904 return dump_options_header(
905 value, dict((k.replace("_", "-"), v) for k, v in kw.items())
906 )
907
908
909 def _unicodify_header_value(value):
910 if isinstance(value, bytes):
911 value = value.decode("latin-1")
912 if not isinstance(value, text_type):
913 value = text_type(value)
914 return value
915
916
917 @native_itermethods(["keys", "values", "items"])
918 class Headers(object):
919 """An object that stores some headers. It has a dict-like interface
920 but is ordered and can store the same keys multiple times.
921
922 This data structure is useful if you want a nicer way to handle WSGI
923 headers, which are stored as tuples in a list.
924
925 From Werkzeug 0.3 onwards, the :exc:`KeyError` raised by this class is
926 also a subclass of the :class:`~exceptions.BadRequest` HTTP exception
927 and will render a page for a ``400 BAD REQUEST`` if caught in a
928 catch-all for HTTP exceptions.
929
930 Headers is mostly compatible with the Python :class:`wsgiref.headers.Headers`
931 class, with the exception of `__getitem__`. :mod:`wsgiref` will return
932 `None` for ``headers['missing']``, whereas :class:`Headers` will raise
933 a :class:`KeyError`.
934
935 To create a new :class:`Headers` object pass it a list or dict of headers
936 which are used as default values. This does not reuse the list passed
937 to the constructor for internal usage.
938
939 :param defaults: The list of default values for the :class:`Headers`.
940
941 .. versionchanged:: 0.9
942 This data structure now stores unicode values similar to how the
943 multi dicts do it. The main difference is that bytes can be set as
944 well which will automatically be latin1 decoded.
945
946 .. versionchanged:: 0.9
947 The :meth:`linked` function was removed without replacement as it
948 was an API that does not support the changes to the encoding model.
949 """
950
951 def __init__(self, defaults=None):
952 self._list = []
953 if defaults is not None:
954 if isinstance(defaults, (list, Headers)):
955 self._list.extend(defaults)
956 else:
957 self.extend(defaults)
958
959 def __getitem__(self, key, _get_mode=False):
960 if not _get_mode:
961 if isinstance(key, integer_types):
962 return self._list[key]
963 elif isinstance(key, slice):
964 return self.__class__(self._list[key])
965 if not isinstance(key, string_types):
966 raise exceptions.BadRequestKeyError(key)
967 ikey = key.lower()
968 for k, v in self._list:
969 if k.lower() == ikey:
970 return v
971 # micro optimization: if we are in get mode we will catch that
972 # exception one stack level down so we can raise a standard
973 # key error instead of our special one.
974 if _get_mode:
975 raise KeyError()
976 raise exceptions.BadRequestKeyError(key)
977
978 def __eq__(self, other):
979 return other.__class__ is self.__class__ and set(other._list) == set(self._list)
980
981 __hash__ = None
982
983 def __ne__(self, other):
984 return not self.__eq__(other)
985
986 def get(self, key, default=None, type=None, as_bytes=False):
987 """Return the default value if the requested data doesn't exist.
988 If `type` is provided, it should be a callable that converts the value
989 and returns it, or raises a :exc:`ValueError` if that is not possible. In
990 this case the function will return the default as if the value was not
991 found:
992
993 >>> d = Headers([('Content-Length', '42')])
994 >>> d.get('Content-Length', type=int)
995 42
996
997 If a headers object is bound you must not add unicode strings
998 because no encoding takes place.
999
1000 .. versionadded:: 0.9
1001 Added support for `as_bytes`.
1002
1003 :param key: The key to be looked up.
1004 :param default: The default value to be returned if the key can't
1005 be looked up. If not further specified `None` is
1006 returned.
1007 :param type: A callable that is used to cast the value in the
1008 :class:`Headers`. If a :exc:`ValueError` is raised
1009 by this callable the default value is returned.
1010 :param as_bytes: return bytes instead of unicode strings.
1011 """
1012 try:
1013 rv = self.__getitem__(key, _get_mode=True)
1014 except KeyError:
1015 return default
1016 if as_bytes:
1017 rv = rv.encode("latin1")
1018 if type is None:
1019 return rv
1020 try:
1021 return type(rv)
1022 except ValueError:
1023 return default
1024
1025 def getlist(self, key, type=None, as_bytes=False):
1026 """Return the list of items for a given key. If that key is not in the
1027 :class:`Headers`, the return value will be an empty list. Just as
1028 :meth:`get`, :meth:`getlist` accepts a `type` parameter. All items will
1029 be converted with the callable defined there.
1030
1031 .. versionadded:: 0.9
1032 Added support for `as_bytes`.
1033
1034 :param key: The key to be looked up.
1035 :param type: A callable that is used to cast the value in the
1036 :class:`Headers`. If a :exc:`ValueError` is raised
1037 by this callable the value will be removed from the list.
1038 :return: a :class:`list` of all the values for the key.
1039 :param as_bytes: return bytes instead of unicode strings.
1040 """
1041 ikey = key.lower()
1042 result = []
1043 for k, v in self:
1044 if k.lower() == ikey:
1045 if as_bytes:
1046 v = v.encode("latin1")
1047 if type is not None:
1048 try:
1049 v = type(v)
1050 except ValueError:
1051 continue
1052 result.append(v)
1053 return result
1054
1055 def get_all(self, name):
1056 """Return a list of all the values for the named field.
1057
1058 This method is compatible with the :mod:`wsgiref`
1059 :meth:`~wsgiref.headers.Headers.get_all` method.
1060 """
1061 return self.getlist(name)
1062
1063 def items(self, lower=False):
1064 for key, value in self:
1065 if lower:
1066 key = key.lower()
1067 yield key, value
1068
1069 def keys(self, lower=False):
1070 for key, _ in iteritems(self, lower):
1071 yield key
1072
1073 def values(self):
1074 for _, value in iteritems(self):
1075 yield value
1076
1077 def extend(self, iterable):
1078 """Extend the headers with a dict or an iterable yielding keys and
1079 values.
1080 """
1081 if isinstance(iterable, dict):
1082 for key, value in iteritems(iterable):
1083 if isinstance(value, (tuple, list)):
1084 for v in value:
1085 self.add(key, v)
1086 else:
1087 self.add(key, value)
1088 else:
1089 for key, value in iterable:
1090 self.add(key, value)
1091
1092 def __delitem__(self, key, _index_operation=True):
1093 if _index_operation and isinstance(key, (integer_types, slice)):
1094 del self._list[key]
1095 return
1096 key = key.lower()
1097 new = []
1098 for k, v in self._list:
1099 if k.lower() != key:
1100 new.append((k, v))
1101 self._list[:] = new
1102
1103 def remove(self, key):
1104 """Remove a key.
1105
1106 :param key: The key to be removed.
1107 """
1108 return self.__delitem__(key, _index_operation=False)
1109
1110 def pop(self, key=None, default=_missing):
1111 """Removes and returns a key or index.
1112
1113 :param key: The key to be popped. If this is an integer the item at
1114 that position is removed, if it's a string the value for
1115 that key is. If the key is omitted or `None` the last
1116 item is removed.
1117 :return: an item.
1118 """
1119 if key is None:
1120 return self._list.pop()
1121 if isinstance(key, integer_types):
1122 return self._list.pop(key)
1123 try:
1124 rv = self[key]
1125 self.remove(key)
1126 except KeyError:
1127 if default is not _missing:
1128 return default
1129 raise
1130 return rv
1131
1132 def popitem(self):
1133 """Removes a key or index and returns a (key, value) item."""
1134 return self.pop()
1135
1136 def __contains__(self, key):
1137 """Check if a key is present."""
1138 try:
1139 self.__getitem__(key, _get_mode=True)
1140 except KeyError:
1141 return False
1142 return True
1143
1144 has_key = __contains__
1145
1146 def __iter__(self):
1147 """Yield ``(key, value)`` tuples."""
1148 return iter(self._list)
1149
1150 def __len__(self):
1151 return len(self._list)
1152
1153 def add(self, _key, _value, **kw):
1154 """Add a new header tuple to the list.
1155
1156 Keyword arguments can specify additional parameters for the header
1157 value, with underscores converted to dashes::
1158
1159 >>> d = Headers()
1160 >>> d.add('Content-Type', 'text/plain')
1161 >>> d.add('Content-Disposition', 'attachment', filename='foo.png')
1162
1163 The keyword argument dumping uses :func:`dump_options_header`
1164 behind the scenes.
1165
1166 .. versionadded:: 0.4.1
1167 keyword arguments were added for :mod:`wsgiref` compatibility.
1168 """
1169 if kw:
1170 _value = _options_header_vkw(_value, kw)
1171 _key = _unicodify_header_value(_key)
1172 _value = _unicodify_header_value(_value)
1173 self._validate_value(_value)
1174 self._list.append((_key, _value))
1175
1176 def _validate_value(self, value):
1177 if not isinstance(value, text_type):
1178 raise TypeError("Value should be unicode.")
1179 if u"\n" in value or u"\r" in value:
1180 raise ValueError(
1181 "Detected newline in header value. This is "
1182 "a potential security problem"
1183 )
1184
1185 def add_header(self, _key, _value, **_kw):
1186 """Add a new header tuple to the list.
1187
1188 An alias for :meth:`add` for compatibility with the :mod:`wsgiref`
1189 :meth:`~wsgiref.headers.Headers.add_header` method.
1190 """
1191 self.add(_key, _value, **_kw)
1192
1193 def clear(self):
1194 """Clears all headers."""
1195 del self._list[:]
1196
1197 def set(self, _key, _value, **kw):
1198 """Remove all header tuples for `key` and add a new one. The newly
1199 added key either appears at the end of the list if there was no
1200 entry or replaces the first one.
1201
1202 Keyword arguments can specify additional parameters for the header
1203 value, with underscores converted to dashes. See :meth:`add` for
1204 more information.
1205
1206 .. versionchanged:: 0.6.1
1207 :meth:`set` now accepts the same arguments as :meth:`add`.
1208
1209 :param key: The key to be inserted.
1210 :param value: The value to be inserted.
1211 """
1212 if kw:
1213 _value = _options_header_vkw(_value, kw)
1214 _key = _unicodify_header_value(_key)
1215 _value = _unicodify_header_value(_value)
1216 self._validate_value(_value)
1217 if not self._list:
1218 self._list.append((_key, _value))
1219 return
1220 listiter = iter(self._list)
1221 ikey = _key.lower()
1222 for idx, (old_key, _old_value) in enumerate(listiter):
1223 if old_key.lower() == ikey:
1224 # replace the first occurrence
1225 self._list[idx] = (_key, _value)
1226 break
1227 else:
1228 self._list.append((_key, _value))
1229 return
1230 self._list[idx + 1 :] = [t for t in listiter if t[0].lower() != ikey]
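The replace-first, drop-later-duplicates strategy of ``set`` can be sketched over a plain list of tuples (``set_header`` is a hypothetical helper; the real method also rewrites only the tail slice in place):

```python
def set_header(header_list, key, value):
    """Sketch of the Headers.set strategy: replace the first tuple whose
    key matches case-insensitively, drop any later occurrences, and
    append if there was no match at all."""
    ikey = key.lower()
    out, replaced = [], False
    for k, v in header_list:
        if k.lower() == ikey:
            if not replaced:
                out.append((key, value))
                replaced = True
            # later occurrences of the same key are dropped
        else:
            out.append((k, v))
    if not replaced:
        out.append((key, value))
    header_list[:] = out

h = [("X-A", "1"), ("X-B", "2"), ("x-a", "3")]
set_header(h, "X-A", "9")
print(h)  # [('X-A', '9'), ('X-B', '2')]
```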
1231
1232 def setdefault(self, key, default):
1233 """Returns the value for the key if it is in the dict, otherwise it
1234 returns `default` and sets that value for `key`.
1235
1236 :param key: The key to be looked up.
1237 :param default: The default value to be set and returned if the
1238 key is not in the dict.
1239 """
1240 if key in self:
1241 return self[key]
1242 self.set(key, default)
1243 return default
1244
1245 def __setitem__(self, key, value):
1246 """Like :meth:`set` but also supports index/slice based setting."""
1247 if isinstance(key, (slice, integer_types)):
1248 if isinstance(key, integer_types):
1249 value = [value]
1250 value = [
1251 (_unicodify_header_value(k), _unicodify_header_value(v))
1252 for (k, v) in value
1253 ]
1254 [self._validate_value(v) for (k, v) in value]
1255 if isinstance(key, integer_types):
1256 self._list[key] = value[0]
1257 else:
1258 self._list[key] = value
1259 else:
1260 self.set(key, value)
1261
1262 def to_list(self, charset="iso-8859-1"):
1263 """Convert the headers into a list suitable for WSGI.
1264
1265 .. deprecated:: 0.9
1266 """
1267 from warnings import warn
1268
1269 warn(
1270 "'to_list' deprecated as of version 0.9 and will be removed"
1271 " in version 1.0. Use 'to_wsgi_list' instead.",
1272 DeprecationWarning,
1273 stacklevel=2,
1274 )
1275 return self.to_wsgi_list()
1276
1277 def to_wsgi_list(self):
1278 """Convert the headers into a list suitable for WSGI.
1279
1280 The values are latin1-encoded byte strings in Python 2, and unicode
1281 strings in Python 3 for the WSGI server to encode.
1282
1283 :return: list
1284 """
1285 if PY2:
1286 return [(to_native(k), v.encode("latin1")) for k, v in self]
1287 return list(self)
1288
1289 def copy(self):
1290 return self.__class__(self._list)
1291
1292 def __copy__(self):
1293 return self.copy()
1294
1295 def __str__(self):
1296 """Returns formatted headers suitable for HTTP transmission."""
1297 strs = []
1298 for key, value in self.to_wsgi_list():
1299 strs.append("%s: %s" % (key, value))
1300 strs.append("\r\n")
1301 return "\r\n".join(strs)
1302
1303 def __repr__(self):
1304 return "%s(%r)" % (self.__class__.__name__, list(self))
1305
1306
1307 class ImmutableHeadersMixin(object):
1308 """Makes a :class:`Headers` immutable. We do not mark them as
1309 hashable, though, since the only use case for this data structure
1310 in Werkzeug is a view on a mutable structure.
1311
1312 .. versionadded:: 0.5
1313
1314 :private:
1315 """
1316
1317 def __delitem__(self, key, **kwargs):
1318 is_immutable(self)
1319
1320 def __setitem__(self, key, value):
1321 is_immutable(self)
1322
1323 set = __setitem__
1324
1325 def add(self, item):
1326 is_immutable(self)
1327
1328 remove = add_header = add
1329
1330 def extend(self, iterable):
1331 is_immutable(self)
1332
1333 def insert(self, pos, value):
1334 is_immutable(self)
1335
1336 def pop(self, index=-1):
1337 is_immutable(self)
1338
1339 def popitem(self):
1340 is_immutable(self)
1341
1342 def setdefault(self, key, default):
1343 is_immutable(self)
1344
1345
1346 class EnvironHeaders(ImmutableHeadersMixin, Headers):
1347 """Read only version of the headers from a WSGI environment. This
1348 provides the same interface as `Headers` and is constructed from
1349 a WSGI environment.
1350
1351 From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a
1352 subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will
1353 render a page for a ``400 BAD REQUEST`` if caught in a catch-all for
1354 HTTP exceptions.
1355 """
1356
1357 def __init__(self, environ):
1358 self.environ = environ
1359
1360 def __eq__(self, other):
1361 return self.environ is other.environ
1362
1363 __hash__ = None
1364
1365 def __getitem__(self, key, _get_mode=False):
1366 # _get_mode is a no-op for this class as there is no index but
1367 # used because get() calls it.
1368 if not isinstance(key, string_types):
1369 raise KeyError(key)
1370 key = key.upper().replace("-", "_")
1371 if key in ("CONTENT_TYPE", "CONTENT_LENGTH"):
1372 return _unicodify_header_value(self.environ[key])
1373 return _unicodify_header_value(self.environ["HTTP_" + key])
1374
1375 def __len__(self):
1376 # the iter is necessary because otherwise list calls our
1377 # len which would call list again and so forth.
1378 return len(list(iter(self)))
1379
1380 def __iter__(self):
1381 for key, value in iteritems(self.environ):
1382 if key.startswith("HTTP_") and key not in (
1383 "HTTP_CONTENT_TYPE",
1384 "HTTP_CONTENT_LENGTH",
1385 ):
1386 yield (
1387 key[5:].replace("_", "-").title(),
1388 _unicodify_header_value(value),
1389 )
1390 elif key in ("CONTENT_TYPE", "CONTENT_LENGTH") and value:
1391 yield (key.replace("_", "-").title(), _unicodify_header_value(value))
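The mapping from WSGI environ keys back to dashed, title-cased header names can be sketched as a standalone generator (``environ_headers`` is a hypothetical name; value decoding is omitted here):

```python
def environ_headers(environ):
    """Sketch of EnvironHeaders iteration: HTTP_* keys (minus the two
    special-cased CONTENT_* variants) plus non-empty CONTENT_TYPE and
    CONTENT_LENGTH are rewritten to conventional header names."""
    for key, value in environ.items():
        if key.startswith("HTTP_") and key not in (
            "HTTP_CONTENT_TYPE",
            "HTTP_CONTENT_LENGTH",
        ):
            yield key[5:].replace("_", "-").title(), value
        elif key in ("CONTENT_TYPE", "CONTENT_LENGTH") and value:
            yield key.replace("_", "-").title(), value

env = {"HTTP_USER_AGENT": "curl", "CONTENT_TYPE": "text/plain", "wsgi.input": None}
print(dict(environ_headers(env)))  # {'User-Agent': 'curl', 'Content-Type': 'text/plain'}
```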
1392
1393 def copy(self):
1394 raise TypeError("cannot create %r copies" % self.__class__.__name__)
1395
1396
1397 @native_itermethods(["keys", "values", "items", "lists", "listvalues"])
1398 class CombinedMultiDict(ImmutableMultiDictMixin, MultiDict):
1399 """A read only :class:`MultiDict` that is created from a sequence of
1400 :class:`MultiDict` instances and combines the return values of all
1401 wrapped dicts:
1402
1403 >>> from werkzeug.datastructures import CombinedMultiDict, MultiDict
1404 >>> post = MultiDict([('foo', 'bar')])
1405 >>> get = MultiDict([('blub', 'blah')])
1406 >>> combined = CombinedMultiDict([get, post])
1407 >>> combined['foo']
1408 'bar'
1409 >>> combined['blub']
1410 'blah'
1411
1412 This works for all read operations and raises a `TypeError` for
1413 methods that would modify data, since modification is not possible.
1414
1415 From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a
1416 subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will
1417 render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP
1418 exceptions.
1419 """
1420
1421 def __reduce_ex__(self, protocol):
1422 return type(self), (self.dicts,)
1423
1424 def __init__(self, dicts=None):
1425 self.dicts = dicts or []
1426
1427 @classmethod
1428 def fromkeys(cls):
1429 raise TypeError("cannot create %r instances by fromkeys" % cls.__name__)
1430
1431 def __getitem__(self, key):
1432 for d in self.dicts:
1433 if key in d:
1434 return d[key]
1435 raise exceptions.BadRequestKeyError(key)
1436
1437 def get(self, key, default=None, type=None):
1438 for d in self.dicts:
1439 if key in d:
1440 if type is not None:
1441 try:
1442 return type(d[key])
1443 except ValueError:
1444 continue
1445 return d[key]
1446 return default
1447
1448 def getlist(self, key, type=None):
1449 rv = []
1450 for d in self.dicts:
1451 rv.extend(d.getlist(key, type))
1452 return rv
1453
1454 def _keys_impl(self):
1455 """This function exists so __len__ can be implemented more efficiently,
1456 saving one list creation from an iterator.
1457
1458 Using this for Python 2's ``dict.keys`` behavior would be useless since
1459 `dict.keys` in Python 2 returns a list, while we have a set here.
1460 """
1461 rv = set()
1462 for d in self.dicts:
1463 rv.update(iterkeys(d))
1464 return rv
1465
1466 def keys(self):
1467 return iter(self._keys_impl())
1468
1469 __iter__ = keys
1470
1471 def items(self, multi=False):
1472 found = set()
1473 for d in self.dicts:
1474 for key, value in iteritems(d, multi):
1475 if multi:
1476 yield key, value
1477 elif key not in found:
1478 found.add(key)
1479 yield key, value
1480
1481 def values(self):
1482 for _key, value in iteritems(self):
1483 yield value
1484
1485 def lists(self):
1486 rv = {}
1487 for d in self.dicts:
1488 for key, values in iterlists(d):
1489 rv.setdefault(key, []).extend(values)
1490 return iteritems(rv)
1491
1492 def listvalues(self):
1493 return (x[1] for x in self.lists())
1494
1495 def copy(self):
1496 """Return a shallow mutable copy of this object.
1497
1498 This returns a :class:`MultiDict` representing the data at the
1499 time of copying. The copy will no longer reflect changes to the
1500 wrapped dicts.
1501
1502 .. versionchanged:: 0.15
1503 Return a mutable :class:`MultiDict`.
1504 """
1505 return MultiDict(self)
1506
1507 def to_dict(self, flat=True):
1508 """Return the contents as a regular dict. If `flat` is `True` the
1509 returned dict will only have the first item present, if `flat` is
1510 `False` all values will be returned as lists.
1511
1512 :param flat: If set to `False` the dict returned will have lists
1513 with all the values in it. Otherwise it will only
1514 contain the first item for each key.
1515 :return: a :class:`dict`
1516 """
1517 rv = {}
1518 for d in reversed(self.dicts):
1519 rv.update(d.to_dict(flat))
1520 return rv
1521
1522 def __len__(self):
1523 return len(self._keys_impl())
1524
1525 def __contains__(self, key):
1526 for d in self.dicts:
1527 if key in d:
1528 return True
1529 return False
1530
1531 has_key = __contains__
1532
1533 def __repr__(self):
1534 return "%s(%r)" % (self.__class__.__name__, self.dicts)
1535
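Since the wrapped dicts are searched in order, the first one that contains a key wins, while `getlist` collects values from all of them. A small sketch (the key names are arbitrary):

```python
from werkzeug.datastructures import CombinedMultiDict, MultiDict

get = MultiDict([("a", "from-get")])
post = MultiDict([("a", "from-post"), ("b", "2")])
combined = CombinedMultiDict([get, post])

print(combined["a"])                # from-get (first wrapped dict wins)
print(combined.getlist("a"))        # ['from-get', 'from-post']
print(combined.get("b", type=int))  # 2

# copy() returns a plain, mutable MultiDict snapshot (0.15 behavior).
mutable = combined.copy()
mutable["c"] = "3"
```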
1536
1537 class FileMultiDict(MultiDict):
1538 """A special :class:`MultiDict` that has convenience methods to add
1539 files to it. This is used for :class:`EnvironBuilder` and generally
1540 useful for unit testing.
1541
1542 .. versionadded:: 0.5
1543 """
1544
1545 def add_file(self, name, file, filename=None, content_type=None):
1546 """Adds a new file to the dict. `file` can be a file name or
1547 a :class:`file`-like or a :class:`FileStorage` object.
1548
1549 :param name: the name of the field.
1550 :param file: a filename or :class:`file`-like object
1551 :param filename: an optional filename
1552 :param content_type: an optional content type
1553 """
1554 if isinstance(file, FileStorage):
1555 value = file
1556 else:
1557 if isinstance(file, string_types):
1558 if filename is None:
1559 filename = file
1560 file = open(file, "rb")
1561 if filename and content_type is None:
1562 content_type = (
1563 mimetypes.guess_type(filename)[0] or "application/octet-stream"
1564 )
1565 value = FileStorage(file, filename, name, content_type)
1566
1567 self.add(name, value)
1568
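A short sketch of `add_file` with an in-memory stream; the field and file names below are made up, and the content type is guessed from the filename via `mimetypes`:

```python
import io

from werkzeug.datastructures import FileMultiDict

files = FileMultiDict()
files.add_file("avatar", io.BytesIO(b"fake image bytes"), filename="avatar.png")

storage = files["avatar"]    # a FileStorage object
print(storage.filename)      # avatar.png
print(storage.content_type)  # image/png (guessed from the filename)
```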
1569
1570 class ImmutableDict(ImmutableDictMixin, dict):
1571 """An immutable :class:`dict`.
1572
1573 .. versionadded:: 0.5
1574 """
1575
1576 def __repr__(self):
1577 return "%s(%s)" % (self.__class__.__name__, dict.__repr__(self))
1578
1579 def copy(self):
1580 """Return a shallow mutable copy of this object. Keep in mind that
1581 the standard library's :func:`copy` function is a no-op for this class
1582 like for any other python immutable type (eg: :class:`tuple`).
1583 """
1584 return dict(self)
1585
1586 def __copy__(self):
1587 return self
1588
1589
1590 class ImmutableMultiDict(ImmutableMultiDictMixin, MultiDict):
1591 """An immutable :class:`MultiDict`.
1592
1593 .. versionadded:: 0.5
1594 """
1595
1596 def copy(self):
1597 """Return a shallow mutable copy of this object. Keep in mind that
1598 the standard library's :func:`copy` function is a no-op for this class
1599 like for any other python immutable type (eg: :class:`tuple`).
1600 """
1601 return MultiDict(self)
1602
1603 def __copy__(self):
1604 return self
1605
1606
1607 class ImmutableOrderedMultiDict(ImmutableMultiDictMixin, OrderedMultiDict):
1608 """An immutable :class:`OrderedMultiDict`.
1609
1610 .. versionadded:: 0.6
1611 """
1612
1613 def _iter_hashitems(self):
1614 return enumerate(iteritems(self, multi=True))
1615
1616 def copy(self):
1617 """Return a shallow mutable copy of this object. Keep in mind that
1618 the standard library's :func:`copy` function is a no-op for this class
1619 like for any other python immutable type (eg: :class:`tuple`).
1620 """
1621 return OrderedMultiDict(self)
1622
1623 def __copy__(self):
1624 return self
1625
1626
1627 @native_itermethods(["values"])
1628 class Accept(ImmutableList):
1629 """An :class:`Accept` object is just a list subclass for lists of
1630 ``(value, quality)`` tuples. It is automatically sorted by specificity
1631 and quality.
1632
1633 All :class:`Accept` objects work similarly to a list but provide extra
1634 functionality for working with the data. Containment checks are
1635 normalized to the rules of that header:
1636
1637 >>> a = CharsetAccept([('ISO-8859-1', 1), ('utf-8', 0.7)])
1638 >>> a.best
1639 'ISO-8859-1'
1640 >>> 'iso-8859-1' in a
1641 True
1642 >>> 'UTF8' in a
1643 True
1644 >>> 'utf7' in a
1645 False
1646
1647 To get the quality for an item you can use normal item lookup:
1648
1649 >>> print(a['utf-8'])
1650 0.7
1651 >>> a['utf7']
1652 0
1653
1654 .. versionchanged:: 0.5
1655 :class:`Accept` objects are forced immutable now.
1656 """
1657
1658 def __init__(self, values=()):
1659 if values is None:
1660 list.__init__(self)
1661 self.provided = False
1662 elif isinstance(values, Accept):
1663 self.provided = values.provided
1664 list.__init__(self, values)
1665 else:
1666 self.provided = True
1667 values = sorted(
1668 values,
1669 key=lambda x: (self._specificity(x[0]), x[1], x[0]),
1670 reverse=True,
1671 )
1672 list.__init__(self, values)
1673
1674 def _specificity(self, value):
1675 """Returns a tuple describing the value's specificity."""
1676 return (value != "*",)
1677
1678 def _value_matches(self, value, item):
1679 """Check if a value matches a given accept item."""
1680 return item == "*" or item.lower() == value.lower()
1681
1682 def __getitem__(self, key):
1683 """Besides index lookup (getting item n) you can also pass it a string
1684 to get the quality for the item. If the item is not in the list, the
1685 returned quality is ``0``.
1686 """
1687 if isinstance(key, string_types):
1688 return self.quality(key)
1689 return list.__getitem__(self, key)
1690
1691 def quality(self, key):
1692 """Returns the quality of the key.
1693
1694 .. versionadded:: 0.6
1695 In previous versions you had to use the item-lookup syntax
1696 (eg: ``obj[key]`` instead of ``obj.quality(key)``)
1697 """
1698 for item, quality in self:
1699 if self._value_matches(key, item):
1700 return quality
1701 return 0
1702
1703 def __contains__(self, value):
1704 for item, _quality in self:
1705 if self._value_matches(value, item):
1706 return True
1707 return False
1708
1709 def __repr__(self):
1710 return "%s([%s])" % (
1711 self.__class__.__name__,
1712 ", ".join("(%r, %s)" % (x, y) for x, y in self),
1713 )
1714
1715 def index(self, key):
1716 """Get the position of an entry or raise :exc:`ValueError`.
1717
1718 :param key: The key to be looked up.
1719
1720 .. versionchanged:: 0.5
1721 This used to raise :exc:`IndexError`, which was inconsistent
1722 with the list API.
1723 """
1724 if isinstance(key, string_types):
1725 for idx, (item, _quality) in enumerate(self):
1726 if self._value_matches(key, item):
1727 return idx
1728 raise ValueError(key)
1729 return list.index(self, key)
1730
1731 def find(self, key):
1732 """Get the position of an entry or return -1.
1733
1734 :param key: The key to be looked up.
1735 """
1736 try:
1737 return self.index(key)
1738 except ValueError:
1739 return -1
1740
1741 def values(self):
1742 """Iterate over all values."""
1743 for item in self:
1744 yield item[0]
1745
1746 def to_header(self):
1747 """Convert the header set into an HTTP header string."""
1748 result = []
1749 for value, quality in self:
1750 if quality != 1:
1751 value = "%s;q=%s" % (value, quality)
1752 result.append(value)
1753 return ",".join(result)
1754
1755 def __str__(self):
1756 return self.to_header()
1757
1758 def _best_single_match(self, match):
1759 for client_item, quality in self:
1760 if self._value_matches(match, client_item):
1761 # self is sorted by specificity descending, we can exit
1762 return client_item, quality
1763
1764 def best_match(self, matches, default=None):
1765 """Returns the best match from a list of possible matches based
1766 on the specificity and quality of the client. If two items have the
1767 same quality and specificity, the one that comes first is returned.
1768
1769 :param matches: a list of matches to check for
1770 :param default: the value that is returned if none match
1771 """
1772 result = default
1773 best_quality = -1
1774 best_specificity = (-1,)
1775 for server_item in matches:
1776 match = self._best_single_match(server_item)
1777 if not match:
1778 continue
1779 client_item, quality = match
1780 specificity = self._specificity(client_item)
1781 if quality <= 0 or quality < best_quality:
1782 continue
1783 # better quality or same quality but more specific => better match
1784 if quality > best_quality or specificity > best_specificity:
1785 result = server_item
1786 best_quality = quality
1787 best_specificity = specificity
1788 return result
1789
1790 @property
1791 def best(self):
1792 """The best match as value."""
1793 if self:
1794 return self[0][0]
1795
1796
1797 class MIMEAccept(Accept):
1798 """Like :class:`Accept` but with special methods and behavior for
1799 mimetypes.
1800 """
1801
1802 def _specificity(self, value):
1803 return tuple(x != "*" for x in value.split("/", 1))
1804
1805 def _value_matches(self, value, item):
1806 def _normalize(x):
1807 x = x.lower()
1808 return ("*", "*") if x == "*" else x.split("/", 1)
1809
1810 # this value comes from the application and is trusted; we still
1811 # validate it to catch developer mistakes early
1812 if "/" not in value:
1813 raise ValueError("invalid mimetype %r" % value)
1814 value_type, value_subtype = _normalize(value)
1815 if value_type == "*" and value_subtype != "*":
1816 raise ValueError("invalid mimetype %r" % value)
1817
1818 if "/" not in item:
1819 return False
1820 item_type, item_subtype = _normalize(item)
1821 if item_type == "*" and item_subtype != "*":
1822 return False
1823 return (
1824 item_type == item_subtype == "*" or value_type == value_subtype == "*"
1825 ) or (
1826 item_type == value_type
1827 and (
1828 item_subtype == "*"
1829 or value_subtype == "*"
1830 or item_subtype == value_subtype
1831 )
1832 )
1833
1834 @property
1835 def accept_html(self):
1836 """True if this object accepts HTML."""
1837 return (
1838 "text/html" in self or "application/xhtml+xml" in self or self.accept_xhtml
1839 )
1840
1841 @property
1842 def accept_xhtml(self):
1843 """True if this object accepts XHTML."""
1844 return "application/xhtml+xml" in self or "application/xml" in self
1845
1846 @property
1847 def accept_json(self):
1848 """True if this object accepts JSON."""
1849 return "application/json" in self
1850
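Putting `best_match` and quality lookup together: the values below correspond to a header like ``Accept: text/html,application/json;q=0.8,*/*;q=0.1``, although in practice the object is produced by the request parser rather than constructed by hand.

```python
from werkzeug.datastructures import MIMEAccept

accept = MIMEAccept([("text/html", 1), ("application/json", 0.8), ("*/*", 0.1)])

print(accept.best_match(["text/html", "application/json"]))  # text/html
print(accept["application/json"])                            # 0.8
print(accept.quality("image/png"))                           # 0.1 (matched by */*)
print(accept.accept_json)                                    # True
```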
1851
1852 class LanguageAccept(Accept):
1853 """Like :class:`Accept` but with normalization for languages."""
1854
1855 def _value_matches(self, value, item):
1856 def _normalize(language):
1857 return _locale_delim_re.split(language.lower())
1858
1859 return item == "*" or _normalize(value) == _normalize(item)
1860
1861
1862 class CharsetAccept(Accept):
1863 """Like :class:`Accept` but with normalization for charsets."""
1864
1865 def _value_matches(self, value, item):
1866 def _normalize(name):
1867 try:
1868 return codecs.lookup(name).name
1869 except LookupError:
1870 return name.lower()
1871
1872 return item == "*" or _normalize(value) == _normalize(item)
1873
1874
1875 def cache_property(key, empty, type):
1876 """Return a new property object for a cache header. Useful if you
1877 want to add support for a cache extension in a subclass."""
1878 return property(
1879 lambda x: x._get_cache_value(key, empty, type),
1880 lambda x, v: x._set_cache_value(key, v, type),
1881 lambda x: x._del_cache_value(key),
1882 "accessor for %r" % key,
1883 )
1884
1885
1886 class _CacheControl(UpdateDictMixin, dict):
1887 """Subclass of a dict that stores values for a Cache-Control header. It
1888 has accessors for all the cache-control directives specified in RFC 2616.
1889 The class does not differentiate between request and response directives.
1890
1891 Because the cache-control directives in the HTTP header use dashes,
1892 the Python descriptors use underscores instead.
1893
1894 To get a header of the :class:`CacheControl` object again you can convert
1895 the object into a string or call the :meth:`to_header` method. If you plan
1896 to subclass it and add your own items, have a look at the source code for
1897 that class.
1898
1899 .. versionchanged:: 0.4
1900
1901 Setting `no_cache` or `private` to boolean `True` will set the implicit
1902 none-value which is ``*``:
1903
1904 >>> cc = ResponseCacheControl()
1905 >>> cc.no_cache = True
1906 >>> cc
1907 <ResponseCacheControl 'no-cache'>
1908 >>> cc.no_cache
1909 '*'
1910 >>> cc.no_cache = None
1911 >>> cc
1912 <ResponseCacheControl ''>
1913
1914 In versions before 0.5 the behavior documented here affected the now
1915 no longer existing `CacheControl` class.
1916 """
1917
1918 no_cache = cache_property("no-cache", "*", None)
1919 no_store = cache_property("no-store", None, bool)
1920 max_age = cache_property("max-age", -1, int)
1921 no_transform = cache_property("no-transform", None, None)
1922
1923 def __init__(self, values=(), on_update=None):
1924 dict.__init__(self, values or ())
1925 self.on_update = on_update
1926 self.provided = values is not None
1927
1928 def _get_cache_value(self, key, empty, type):
1929 """Used internally by the accessor properties."""
1930 if type is bool:
1931 return key in self
1932 if key in self:
1933 value = self[key]
1934 if value is None:
1935 return empty
1936 elif type is not None:
1937 try:
1938 value = type(value)
1939 except ValueError:
1940 pass
1941 return value
1942
1943 def _set_cache_value(self, key, value, type):
1944 """Used internally by the accessor properties."""
1945 if type is bool:
1946 if value:
1947 self[key] = None
1948 else:
1949 self.pop(key, None)
1950 else:
1951 if value is None:
1952 self.pop(key)
1953 elif value is True:
1954 self[key] = None
1955 else:
1956 self[key] = value
1957
1958 def _del_cache_value(self, key):
1959 """Used internally by the accessor properties."""
1960 if key in self:
1961 del self[key]
1962
1963 def to_header(self):
1964 """Convert the stored values into a cache control header."""
1965 return dump_header(self)
1966
1967 def __str__(self):
1968 return self.to_header()
1969
1970 def __repr__(self):
1971 return "<%s %s>" % (
1972 self.__class__.__name__,
1973 " ".join("%s=%r" % (k, v) for k, v in sorted(self.items())),
1974 )
1975
1976
1977 class RequestCacheControl(ImmutableDictMixin, _CacheControl):
1978 """A cache control for requests. This is immutable and gives access
1979 to all the request-relevant cache control headers.
1980
1981 To get a header of the :class:`RequestCacheControl` object again you can
1982 convert the object into a string or call the :meth:`to_header` method. If
1983 you plan to subclass it and add your own items, have a look at the source code
1984 for that class.
1985
1986 .. versionadded:: 0.5
1987 In previous versions a `CacheControl` class existed that was used
1988 both for request and response.
1989 """
1990
1991 max_stale = cache_property("max-stale", "*", int)
1992 min_fresh = cache_property("min-fresh", "*", int)
1993 no_transform = cache_property("no-transform", None, None)
1994 only_if_cached = cache_property("only-if-cached", None, bool)
1995
1996
1997 class ResponseCacheControl(_CacheControl):
1998 """A cache control for responses. Unlike :class:`RequestCacheControl`
1999 this is mutable and gives access to response-relevant cache control
2000 headers.
2001
2002 To get a header of the :class:`ResponseCacheControl` object again you can
2003 convert the object into a string or call the :meth:`to_header` method. If
2004 you plan to subclass it and add your own items, have a look at the source code
2005 for that class.
2006
2007 .. versionadded:: 0.5
2008 In previous versions a `CacheControl` class existed that was used
2009 both for request and response.
2010 """
2011
2012 public = cache_property("public", None, bool)
2013 private = cache_property("private", "*", None)
2014 must_revalidate = cache_property("must-revalidate", None, bool)
2015 proxy_revalidate = cache_property("proxy-revalidate", None, bool)
2016 s_maxage = cache_property("s-maxage", None, None)
2017
2018
2019 # attach cache_property to the _CacheControl as staticmethod
2020 # so that others can reuse it.
2021 _CacheControl.cache_property = staticmethod(cache_property)
2022
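The accessor properties above map attribute access to header directives; a brief sketch with the mutable response variant:

```python
from werkzeug.datastructures import ResponseCacheControl

cc = ResponseCacheControl()
cc.max_age = 3600  # stored as max-age=3600
cc.public = True   # boolean directives render as a bare token
cc.no_cache = True # stored with the implicit "none" value, read back as '*'

print(cc.to_header())  # e.g. max-age=3600, public, no-cache
```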
2023
2024 class CallbackDict(UpdateDictMixin, dict):
2025 """A dict that calls a function passed every time something is changed.
2026 The function is passed the dict instance.
2027 """
2028
2029 def __init__(self, initial=None, on_update=None):
2030 dict.__init__(self, initial or ())
2031 self.on_update = on_update
2032
2033 def __repr__(self):
2034 return "<%s %s>" % (self.__class__.__name__, dict.__repr__(self))
2035
2036
2037 class HeaderSet(collections_abc.MutableSet):
2038 """Similar to the :class:`ETags` class this implements a set-like structure.
2039 Unlike :class:`ETags` this is case insensitive and used for vary, allow, and
2040 content-language headers.
2041
2042 If not constructed using the :func:`parse_set_header` function the
2043 instantiation works like this:
2044
2045 >>> hs = HeaderSet(['foo', 'bar', 'baz'])
2046 >>> hs
2047 HeaderSet(['foo', 'bar', 'baz'])
2048 """
2049
2050 def __init__(self, headers=None, on_update=None):
2051 self._headers = list(headers or ())
2052 self._set = set([x.lower() for x in self._headers])
2053 self.on_update = on_update
2054
2055 def add(self, header):
2056 """Add a new header to the set."""
2057 self.update((header,))
2058
2059 def remove(self, header):
2060 """Remove a header from the set. This raises a :exc:`KeyError` if the
2061 header is not in the set.
2062
2063 .. versionchanged:: 0.5
2064 In older versions an :exc:`IndexError` was raised instead of a
2065 :exc:`KeyError` if the object was missing.
2066
2067 :param header: the header to be removed.
2068 """
2069 key = header.lower()
2070 if key not in self._set:
2071 raise KeyError(header)
2072 self._set.remove(key)
2073 for idx, item in enumerate(self._headers):
2074 if item.lower() == key:
2075 del self._headers[idx]
2076 break
2077 if self.on_update is not None:
2078 self.on_update(self)
2079
2080 def update(self, iterable):
2081 """Add all the headers from the iterable to the set.
2082
2083 :param iterable: updates the set with the items from the iterable.
2084 """
2085 inserted_any = False
2086 for header in iterable:
2087 key = header.lower()
2088 if key not in self._set:
2089 self._headers.append(header)
2090 self._set.add(key)
2091 inserted_any = True
2092 if inserted_any and self.on_update is not None:
2093 self.on_update(self)
2094
2095 def discard(self, header):
2096 """Like :meth:`remove` but ignores errors.
2097
2098 :param header: the header to be discarded.
2099 """
2100 try:
2101 return self.remove(header)
2102 except KeyError:
2103 pass
2104
2105 def find(self, header):
2106 """Return the index of the header in the set or return -1 if not found.
2107
2108 :param header: the header to be looked up.
2109 """
2110 header = header.lower()
2111 for idx, item in enumerate(self._headers):
2112 if item.lower() == header:
2113 return idx
2114 return -1
2115
2116 def index(self, header):
2117 """Return the index of the header in the set or raise an
2118 :exc:`IndexError`.
2119
2120 :param header: the header to be looked up.
2121 """
2122 rv = self.find(header)
2123 if rv < 0:
2124 raise IndexError(header)
2125 return rv
2126
2127 def clear(self):
2128 """Clear the set."""
2129 self._set.clear()
2130 del self._headers[:]
2131 if self.on_update is not None:
2132 self.on_update(self)
2133
2134 def as_set(self, preserve_casing=False):
2135 """Return the set as real python set type. When calling this, all
2136 the items are converted to lowercase and the ordering is lost.
2137
2138 :param preserve_casing: if set to `True` the items in the set returned
2139 will have the original case like in the
2140 :class:`HeaderSet`, otherwise they will
2141 be lowercase.
2142 """
2143 if preserve_casing:
2144 return set(self._headers)
2145 return set(self._set)
2146
2147 def to_header(self):
2148 """Convert the header set into an HTTP header string."""
2149 return ", ".join(map(quote_header_value, self._headers))
2150
2151 def __getitem__(self, idx):
2152 return self._headers[idx]
2153
2154 def __delitem__(self, idx):
2155 rv = self._headers.pop(idx)
2156 self._set.remove(rv.lower())
2157 if self.on_update is not None:
2158 self.on_update(self)
2159
2160 def __setitem__(self, idx, value):
2161 old = self._headers[idx]
2162 self._set.remove(old.lower())
2163 self._headers[idx] = value
2164 self._set.add(value.lower())
2165 if self.on_update is not None:
2166 self.on_update(self)
2167
2168 def __contains__(self, header):
2169 return header.lower() in self._set
2170
2171 def __len__(self):
2172 return len(self._set)
2173
2174 def __iter__(self):
2175 return iter(self._headers)
2176
2177 def __nonzero__(self):
2178 return bool(self._set)
2179
2180 def __str__(self):
2181 return self.to_header()
2182
2183 def __repr__(self):
2184 return "%s(%r)" % (self.__class__.__name__, self._headers)
2185
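Membership and deduplication are case insensitive, while the original casing is kept for serialization. A small sketch building a ``Vary`` header:

```python
from werkzeug.datastructures import HeaderSet

vary = HeaderSet(["Accept-Encoding"])
vary.add("Cookie")
vary.add("accept-encoding")  # case-insensitive duplicate, ignored

print(vary.to_header())  # Accept-Encoding, Cookie
print("COOKIE" in vary)  # True
```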
2186
2187 class ETags(collections_abc.Container, collections_abc.Iterable):
2188 """A set that can be used to check if one etag is present in a collection
2189 of etags.
2190 """
2191
2192 def __init__(self, strong_etags=None, weak_etags=None, star_tag=False):
2193 self._strong = frozenset(not star_tag and strong_etags or ())
2194 self._weak = frozenset(weak_etags or ())
2195 self.star_tag = star_tag
2196
2197 def as_set(self, include_weak=False):
2198 """Convert the `ETags` object into a Python set. By default the
2199 weak etags are not part of this set."""
2200 rv = set(self._strong)
2201 if include_weak:
2202 rv.update(self._weak)
2203 return rv
2204
2205 def is_weak(self, etag):
2206 """Check if an etag is weak."""
2207 return etag in self._weak
2208
2209 def is_strong(self, etag):
2210 """Check if an etag is strong."""
2211 return etag in self._strong
2212
2213 def contains_weak(self, etag):
2214 """Check if an etag is part of the set including weak and strong tags."""
2215 return self.is_weak(etag) or self.contains(etag)
2216
2217 def contains(self, etag):
2218 """Check if an etag is part of the set ignoring weak tags.
2219 It is also possible to use the ``in`` operator.
2220 """
2221 if self.star_tag:
2222 return True
2223 return self.is_strong(etag)
2224
2225 def contains_raw(self, etag):
2226 """When passed a quoted tag it will check if this tag is part of the
2227 set. If the tag is weak it is checked against weak and strong tags,
2228 otherwise strong only."""
2229 etag, weak = unquote_etag(etag)
2230 if weak:
2231 return self.contains_weak(etag)
2232 return self.contains(etag)
2233
2234 def to_header(self):
2235 """Convert the etags set into an HTTP header string."""
2236 if self.star_tag:
2237 return "*"
2238 return ", ".join(
2239 ['"%s"' % x for x in self._strong] + ['W/"%s"' % x for x in self._weak]
2240 )
2241
2242 def __call__(self, etag=None, data=None, include_weak=False):
2243 if [etag, data].count(None) != 1:
2244 raise TypeError("exactly one of etag or data must be provided")
2245 if etag is None:
2246 etag = generate_etag(data)
2247 if include_weak:
2248 if etag in self._weak:
2249 return True
2250 return etag in self._strong
2251
2252 def __bool__(self):
2253 return bool(self.star_tag or self._strong or self._weak)
2254
2255 __nonzero__ = __bool__
2256
2257 def __str__(self):
2258 return self.to_header()
2259
2260 def __iter__(self):
2261 return iter(self._strong)
2262
2263 def __contains__(self, etag):
2264 return self.contains(etag)
2265
2266 def __repr__(self):
2267 return "<%s %r>" % (self.__class__.__name__, str(self))
2268
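`ETags` objects are normally produced by :func:`~werkzeug.http.parse_etags`; a short sketch of the strong/weak matching rules:

```python
from werkzeug.http import parse_etags

etags = parse_etags('"abc", W/"def"')

print(etags.contains("abc"))          # True  (strong tag)
print(etags.contains("def"))          # False (weak tags are ignored)
print(etags.contains_raw('W/"def"'))  # True  (quoted weak tag)
print(etags.is_weak("def"))           # True
```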
2269
2270 class IfRange(object):
2271 """Very simple object that represents the `If-Range` header in parsed
2272 form. It will have either an etag or a date, or neither, but
2273 never both.
2274
2275 .. versionadded:: 0.7
2276 """
2277
2278 def __init__(self, etag=None, date=None):
2279 #: The etag parsed and unquoted. Ranges always operate on strong
2280 #: etags so the weakness information is not necessary.
2281 self.etag = etag
2282 #: The date in parsed format or `None`.
2283 self.date = date
2284
2285 def to_header(self):
2286 """Converts the object back into an HTTP header."""
2287 if self.date is not None:
2288 return http_date(self.date)
2289 if self.etag is not None:
2290 return quote_etag(self.etag)
2291 return ""
2292
2293 def __str__(self):
2294 return self.to_header()
2295
2296 def __repr__(self):
2297 return "<%s %r>" % (self.__class__.__name__, str(self))
2298
2299
2300 class Range(object):
2301 """Represents a ``Range`` header. All methods support only
2302 bytes as the unit. Stores a list of ranges if given, but the methods
2303 only work if exactly one range is provided.
2304
2305 :raise ValueError: If the ranges provided are invalid.
2306
2307 .. versionchanged:: 0.15
2308 The ranges passed in are validated.
2309
2310 .. versionadded:: 0.7
2311 """
2312
2313 def __init__(self, units, ranges):
2314 #: The units of this range. Usually "bytes".
2315 self.units = units
2316 #: A list of ``(begin, end)`` tuples for the range header provided.
2317 #: The ranges are non-inclusive.
2318 self.ranges = ranges
2319
2320 for start, end in ranges:
2321 if start is None or (end is not None and (start < 0 or start >= end)):
2322 raise ValueError("{} is not a valid range.".format((start, end)))
2323
2324 def range_for_length(self, length):
2325 """If the range is for bytes, the length is not None, and there
2326 is exactly one satisfiable range, this returns a ``(start, stop)``
2327 tuple; otherwise `None`.
2328 """
2329 if self.units != "bytes" or length is None or len(self.ranges) != 1:
2330 return None
2331 start, end = self.ranges[0]
2332 if end is None:
2333 end = length
2334 if start < 0:
2335 start += length
2336 if is_byte_range_valid(start, end, length):
2337 return start, min(end, length)
2338
2339 def make_content_range(self, length):
2340 """Creates a :class:`~werkzeug.datastructures.ContentRange` object
2341 from the current range and given content length.
2342 """
2343 rng = self.range_for_length(length)
2344 if rng is not None:
2345 return ContentRange(self.units, rng[0], rng[1], length)
2346
2347 def to_header(self):
2348 """Converts the object back into an HTTP header."""
2349 ranges = []
2350 for begin, end in self.ranges:
2351 if end is None:
2352 ranges.append("%s-" % begin if begin >= 0 else str(begin))
2353 else:
2354 ranges.append("%s-%s" % (begin, end - 1))
2355 return "%s=%s" % (self.units, ",".join(ranges))
2356
2357 def to_content_range_header(self, length):
2358 """Converts the object into a `Content-Range` HTTP header,
2359 based on the given length.
2360 """
2361 range_for_length = self.range_for_length(length)
2362 if range_for_length is not None:
2363 return "%s %d-%d/%d" % (
2364 self.units,
2365 range_for_length[0],
2366 range_for_length[1] - 1,
2367 length,
2368 )
2369 return None
2370
2371 def __str__(self):
2372 return self.to_header()
2373
2374 def __repr__(self):
2375 return "<%s %r>" % (self.__class__.__name__, str(self))
2376
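The stored ranges are non-inclusive ``(begin, end)`` tuples, while the wire format uses inclusive end positions; `range_for_length` also resolves open-ended and negative (suffix) ranges. A sketch:

```python
from werkzeug.datastructures import Range

rng = Range("bytes", [(0, 500)])
print(rng.to_header())                    # bytes=0-499 (inclusive on the wire)
print(rng.range_for_length(1000))         # (0, 500)
print(rng.to_content_range_header(1000))  # bytes 0-499/1000

tail = Range("bytes", [(-500, None)])     # suffix range: the last 500 bytes
print(tail.range_for_length(1000))        # (500, 1000)
```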
2377
2378 class ContentRange(object):
2379 """Represents the content range header.
2380
2381 .. versionadded:: 0.7
2382 """
2383
2384 def __init__(self, units, start, stop, length=None, on_update=None):
2385 assert is_byte_range_valid(start, stop, length), "Bad range provided"
2386 self.on_update = on_update
2387 self.set(start, stop, length, units)
2388
2389 def _callback_property(name): # noqa: B902
2390 def fget(self):
2391 return getattr(self, name)
2392
2393 def fset(self, value):
2394 setattr(self, name, value)
2395 if self.on_update is not None:
2396 self.on_update(self)
2397
2398 return property(fget, fset)
2399
2400 #: The units to use, usually "bytes"
2401 units = _callback_property("_units")
2402 #: The start point of the range or `None`.
2403 start = _callback_property("_start")
2404 #: The stop point of the range (non-inclusive) or `None`. Can only be
2405 #: `None` if start is also `None`.
2406 stop = _callback_property("_stop")
2407 #: The length of the range or `None`.
2408 length = _callback_property("_length")
2409 del _callback_property
2410
2411 def set(self, start, stop, length=None, units="bytes"):
2412 """Simple method to update the ranges."""
2413 assert is_byte_range_valid(start, stop, length), "Bad range provided"
2414 self._units = units
2415 self._start = start
2416 self._stop = stop
2417 self._length = length
2418 if self.on_update is not None:
2419 self.on_update(self)
2420
2421 def unset(self):
2422 """Sets the units to `None` which indicates that the header should
2423 no longer be used.
2424 """
2425 self.set(None, None, units=None)
2426
2427 def to_header(self):
2428 if self.units is None:
2429 return ""
2430 if self.length is None:
2431 length = "*"
2432 else:
2433 length = self.length
2434 if self.start is None:
2435 return "%s */%s" % (self.units, length)
2436 return "%s %s-%s/%s" % (self.units, self.start, self.stop - 1, length)
2437
2438 def __nonzero__(self):
2439 return self.units is not None
2440
2441 __bool__ = __nonzero__
2442
2443 def __str__(self):
2444 return self.to_header()
2445
2446 def __repr__(self):
2447 return "<%s %r>" % (self.__class__.__name__, str(self))
2448
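The `to_header` serialization above can be sketched standalone (a hypothetical `format_content_range` helper, not part of Werkzeug): the stored `stop` is exclusive, while the wire format uses an inclusive end position, hence the `stop - 1`.

```python
# Hypothetical standalone sketch of ContentRange.to_header above;
# stop is stored exclusively, the header's end position is inclusive.
def format_content_range(units, start, stop, length=None):
    length = "*" if length is None else length
    if start is None:
        # Unsatisfied range, e.g. "bytes */1234"
        return "%s */%s" % (units, length)
    return "%s %s-%s/%s" % (units, start, stop - 1, length)

print(format_content_range("bytes", 0, 500, 1234))  # bytes 0-499/1234
```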
2449
2450 class Authorization(ImmutableDictMixin, dict):
2451 """Represents an `Authorization` header sent by the client. You should
2452 not create this kind of object yourself but use it when it's returned by
2453 the `parse_authorization_header` function.
2454
2455 This object is a dict subclass and can be altered by setting dict items
2456 but it should be considered immutable as it's returned by the client and
2457 not meant for modifications.
2458
2459 .. versionchanged:: 0.5
2460 This object became immutable.
2461 """
2462
2463 def __init__(self, auth_type, data=None):
2464 dict.__init__(self, data or {})
2465 self.type = auth_type
2466
2467 username = property(
2468 lambda self: self.get("username"),
2469 doc="""
2470 The username transmitted. This is always set for both basic
2471 and digest auth.""",
2472 )
2473 password = property(
2474 lambda self: self.get("password"),
2475 doc="""
2476 When the authentication type is basic this is the password
2477 transmitted by the client, else `None`.""",
2478 )
2479 realm = property(
2480 lambda self: self.get("realm"),
2481 doc="""
2482 This is the server realm sent back for HTTP digest auth.""",
2483 )
2484 nonce = property(
2485 lambda self: self.get("nonce"),
2486 doc="""
2487 The nonce the server sent for digest auth, sent back by the client.
2488 A nonce should be unique for every 401 response for HTTP digest
2489 auth.""",
2490 )
2491 uri = property(
2492 lambda self: self.get("uri"),
2493 doc="""
2494 The URI from Request-URI of the Request-Line; duplicated because
2495 proxies are allowed to change the Request-Line in transit. HTTP
2496 digest auth only.""",
2497 )
2498 nc = property(
2499 lambda self: self.get("nc"),
2500 doc="""
2501 The nonce count value transmitted by clients if a qop-header is
2502 also transmitted. HTTP digest auth only.""",
2503 )
2504 cnonce = property(
2505 lambda self: self.get("cnonce"),
2506 doc="""
2507 If the server sent a qop-header in the ``WWW-Authenticate``
2508 header, the client has to provide this value for HTTP digest auth.
2509 See the RFC for more details.""",
2510 )
2511 response = property(
2512 lambda self: self.get("response"),
2513 doc="""
2514 A string of 32 hex digits computed as defined in RFC 2617, which
2515 proves that the user knows a password. Digest auth only.""",
2516 )
2517 opaque = property(
2518 lambda self: self.get("opaque"),
2519 doc="""
2520 The opaque header from the server returned unchanged by the client.
2521 It is recommended that this string be base64 or hexadecimal data.
2522 Digest auth only.""",
2523 )
2524 qop = property(
2525 lambda self: self.get("qop"),
2526 doc="""
2527 Indicates what "quality of protection" the client has applied to
2528 the message for HTTP digest auth. Note that this is a single token,
2529 not a quoted list of alternatives as in WWW-Authenticate.""",
2530 )
2531
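A quick sketch of how a basic ``Authorization`` header yields the ``username`` and ``password`` fields above. This is a hypothetical `parse_basic_auth` helper for illustration only; Werkzeug's own `parse_authorization_header` additionally handles digest auth and malformed input.

```python
# Hypothetical sketch: decode a basic Authorization header value into
# the username/password items the Authorization dict exposes above.
import base64

def parse_basic_auth(value):
    auth_type, _, data = value.partition(" ")
    if auth_type.lower() != "basic":
        return None
    decoded = base64.b64decode(data).decode("utf-8")
    username, _, password = decoded.partition(":")
    return {"username": username, "password": password}

header = "Basic " + base64.b64encode(b"me:secret").decode("ascii")
print(parse_basic_auth(header))  # {'username': 'me', 'password': 'secret'}
```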
2532
2533 class WWWAuthenticate(UpdateDictMixin, dict):
2534 """Provides simple access to `WWW-Authenticate` headers."""
2535
2536 #: list of keys that require quoting in the generated header
2537 _require_quoting = frozenset(["domain", "nonce", "opaque", "realm", "qop"])
2538
2539 def __init__(self, auth_type=None, values=None, on_update=None):
2540 dict.__init__(self, values or ())
2541 if auth_type:
2542 self["__auth_type__"] = auth_type
2543 self.on_update = on_update
2544
2545 def set_basic(self, realm="authentication required"):
2546 """Clear the auth info and enable basic auth."""
2547 dict.clear(self)
2548 dict.update(self, {"__auth_type__": "basic", "realm": realm})
2549 if self.on_update:
2550 self.on_update(self)
2551
2552 def set_digest(
2553 self, realm, nonce, qop=("auth",), opaque=None, algorithm=None, stale=False
2554 ):
2555 """Clear the auth info and enable digest auth."""
2556 d = {
2557 "__auth_type__": "digest",
2558 "realm": realm,
2559 "nonce": nonce,
2560 "qop": dump_header(qop),
2561 }
2562 if stale:
2563 d["stale"] = "TRUE"
2564 if opaque is not None:
2565 d["opaque"] = opaque
2566 if algorithm is not None:
2567 d["algorithm"] = algorithm
2568 dict.clear(self)
2569 dict.update(self, d)
2570 if self.on_update:
2571 self.on_update(self)
2572
2573 def to_header(self):
2574 """Convert the stored values into a WWW-Authenticate header."""
2575 d = dict(self)
2576 auth_type = d.pop("__auth_type__", None) or "basic"
2577 return "%s %s" % (
2578 auth_type.title(),
2579 ", ".join(
2580 [
2581 "%s=%s"
2582 % (
2583 key,
2584 quote_header_value(
2585 value, allow_token=key not in self._require_quoting
2586 ),
2587 )
2588 for key, value in iteritems(d)
2589 ]
2590 ),
2591 )
2592
2593 def __str__(self):
2594 return self.to_header()
2595
2596 def __repr__(self):
2597 return "<%s %r>" % (self.__class__.__name__, self.to_header())
2598
2599 def auth_property(name, doc=None): # noqa: B902
2600 """A static helper function for subclasses to add extra authentication
2601 system properties onto a class::
2602
2603 class FooAuthenticate(WWWAuthenticate):
2604 special_realm = auth_property('special_realm')
2605
2606 For more information have a look at the source code to see how the
2607 regular properties (:attr:`realm` etc.) are implemented.
2608 """
2609
2610 def _set_value(self, value):
2611 if value is None:
2612 self.pop(name, None)
2613 else:
2614 self[name] = str(value)
2615
2616 return property(lambda x: x.get(name), _set_value, doc=doc)
2617
2618 def _set_property(name, doc=None): # noqa: B902
2619 def fget(self):
2620 def on_update(header_set):
2621 if not header_set and name in self:
2622 del self[name]
2623 elif header_set:
2624 self[name] = header_set.to_header()
2625
2626 return parse_set_header(self.get(name), on_update)
2627
2628 return property(fget, doc=doc)
2629
2630 type = auth_property(
2631 "__auth_type__",
2632 doc="""The type of the auth mechanism. HTTP currently specifies
2633 ``Basic`` and ``Digest``.""",
2634 )
2635 realm = auth_property(
2636 "realm",
2637 doc="""A string to be displayed to users so they know which
2638 username and password to use. This string should contain at
2639 least the name of the host performing the authentication and
2640 might additionally indicate the collection of users who might
2641 have access.""",
2642 )
2643 domain = _set_property(
2644 "domain",
2645 doc="""A list of URIs that define the protection space. If a URI
2646 is an absolute path, it is relative to the canonical root URL of
2647 the server being accessed.""",
2648 )
2649 nonce = auth_property(
2650 "nonce",
2651 doc="""
2652 A server-specified data string which should be uniquely generated
2653 each time a 401 response is made. It is recommended that this
2654 string be base64 or hexadecimal data.""",
2655 )
2656 opaque = auth_property(
2657 "opaque",
2658 doc="""A string of data, specified by the server, which should
2659 be returned by the client unchanged in the Authorization header
2660 of subsequent requests with URIs in the same protection space.
2661 It is recommended that this string be base64 or hexadecimal
2662 data.""",
2663 )
2664 algorithm = auth_property(
2665 "algorithm",
2666 doc="""A string indicating a pair of algorithms used to produce
2667 the digest and a checksum. If this is not present it is assumed
2668 to be "MD5". If the algorithm is not understood, the challenge
2669 should be ignored (and a different one used, if there is more
2670 than one).""",
2671 )
2672 qop = _set_property(
2673 "qop",
2674 doc="""A set of quality-of-privacy directives such as auth and
2675 auth-int.""",
2676 )
2677
2678 @property
2679 def stale(self):
2680 """A flag, indicating that the previous request from the client
2681 was rejected because the nonce value was stale.
2682 """
2683 val = self.get("stale")
2684 if val is not None:
2685 return val.lower() == "true"
2686
2687 @stale.setter
2688 def stale(self, value):
2689 if value is None:
2690 self.pop("stale", None)
2691 else:
2692 self["stale"] = "TRUE" if value else "FALSE"
2693
2694 auth_property = staticmethod(auth_property)
2695 del _set_property
2696
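The quoting rule in `to_header` above can be sketched standalone: values for keys in the `_require_quoting` set are emitted quoted, everything else as a bare token. This is a simplified, hypothetical version; Werkzeug's `quote_header_value` also escapes embedded quotes and backslashes.

```python
# Simplified sketch of WWWAuthenticate.to_header's quoting decision.
REQUIRE_QUOTING = frozenset(["domain", "nonce", "opaque", "realm", "qop"])

def www_authenticate_header(auth_type, values):
    parts = [
        '%s="%s"' % (k, v) if k in REQUIRE_QUOTING else "%s=%s" % (k, v)
        for k, v in values.items()
    ]
    return "%s %s" % (auth_type.title(), ", ".join(parts))

print(www_authenticate_header("basic", {"realm": "login required"}))
# Basic realm="login required"
```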
2697
2698 class FileStorage(object):
2699 """The :class:`FileStorage` class is a thin wrapper over incoming files.
2700 It is used by the request object to represent uploaded files. All the
2701 attributes of the wrapper stream are proxied by the file storage so
2702 it's possible to do ``storage.read()`` instead of the long form
2703 ``storage.stream.read()``.
2704 """
2705
2706 def __init__(
2707 self,
2708 stream=None,
2709 filename=None,
2710 name=None,
2711 content_type=None,
2712 content_length=None,
2713 headers=None,
2714 ):
2715 self.name = name
2716 self.stream = stream or BytesIO()
2717
2718 # if no filename is provided we can attempt to get the filename
2719 # from the stream object passed. There we have to be careful to
2720 # skip things like <fdopen>, <stderr> etc. Python marks these
2721 # special filenames with angle brackets.
2722 if filename is None:
2723 filename = getattr(stream, "name", None)
2724 s = make_literal_wrapper(filename)
2725 if filename and filename[0] == s("<") and filename[-1] == s(">"):
2726 filename = None
2727
2728 # On Python 3 we want to make sure the filename is always unicode.
2729 # This might not be if the name attribute is bytes due to the
2730 # file being opened from the bytes API.
2731 if not PY2 and isinstance(filename, bytes):
2732 filename = filename.decode(get_filesystem_encoding(), "replace")
2733
2734 self.filename = filename
2735 if headers is None:
2736 headers = Headers()
2737 self.headers = headers
2738 if content_type is not None:
2739 headers["Content-Type"] = content_type
2740 if content_length is not None:
2741 headers["Content-Length"] = str(content_length)
2742
2743 def _parse_content_type(self):
2744 if not hasattr(self, "_parsed_content_type"):
2745 self._parsed_content_type = parse_options_header(self.content_type)
2746
2747 @property
2748 def content_type(self):
2749 """The content-type sent in the header. Usually not available"""
2750 return self.headers.get("content-type")
2751
2752 @property
2753 def content_length(self):
2754 """The content-length sent in the header. Usually not available"""
2755 return int(self.headers.get("content-length") or 0)
2756
2757 @property
2758 def mimetype(self):
2759 """Like :attr:`content_type`, but without parameters (eg, without
2760 charset, type etc.) and always lowercase. For example if the content
2761 type is ``text/HTML; charset=utf-8`` the mimetype would be
2762 ``'text/html'``.
2763
2764 .. versionadded:: 0.7
2765 """
2766 self._parse_content_type()
2767 return self._parsed_content_type[0].lower()
2768
2769 @property
2770 def mimetype_params(self):
2771 """The mimetype parameters as dict. For example if the content
2772 type is ``text/html; charset=utf-8`` the params would be
2773 ``{'charset': 'utf-8'}``.
2774
2775 .. versionadded:: 0.7
2776 """
2777 self._parse_content_type()
2778 return self._parsed_content_type[1]
2779
2780 def save(self, dst, buffer_size=16384):
2781 """Save the file to a destination path or file object. If the
2782 destination is a file object you have to close it yourself after the
2783 call. The buffer size is the number of bytes held in memory during
2784 the copy process. It defaults to 16KB.
2785
2786 For secure file saving also have a look at :func:`secure_filename`.
2787
2788 :param dst: a filename or open file object the uploaded file
2789 is saved to.
2790 :param buffer_size: the size of the buffer. This works the same as
2791 the `length` parameter of
2792 :func:`shutil.copyfileobj`.
2793 """
2794 from shutil import copyfileobj
2795
2796 close_dst = False
2797 if isinstance(dst, string_types):
2798 dst = open(dst, "wb")
2799 close_dst = True
2800 try:
2801 copyfileobj(self.stream, dst, buffer_size)
2802 finally:
2803 if close_dst:
2804 dst.close()
2805
2806 def close(self):
2807 """Close the underlying file if possible."""
2808 try:
2809 self.stream.close()
2810 except Exception:
2811 pass
2812
2813 def __nonzero__(self):
2814 return bool(self.filename)
2815
2816 __bool__ = __nonzero__
2817
2818 def __getattr__(self, name):
2819 try:
2820 return getattr(self.stream, name)
2821 except AttributeError:
2822 # SpooledTemporaryFile doesn't implement IOBase, get the
2823 # attribute from its backing file instead.
2824 # https://github.com/python/cpython/pull/3249
2825 if hasattr(self.stream, "_file"):
2826 return getattr(self.stream._file, name)
2827 raise
2828
2829 def __iter__(self):
2830 return iter(self.stream)
2831
2832 def __repr__(self):
2833 return "<%s: %r (%r)>" % (
2834 self.__class__.__name__,
2835 self.filename,
2836 self.content_type,
2837 )
2838
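The buffered copy that `FileStorage.save` performs can be demonstrated with in-memory streams; `shutil.copyfileobj` reads `buffer_size` bytes at a time, exactly as the method above does.

```python
# The copy step FileStorage.save performs, sketched with BytesIO
# standing in for the upload stream and the destination file.
import io
from shutil import copyfileobj

src = io.BytesIO(b"uploaded data")
dst = io.BytesIO()
copyfileobj(src, dst, 16384)  # 16KB buffer, the default above
print(dst.getvalue())  # b'uploaded data'
```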
2839
2840 # circular dependencies
2841 from . import exceptions
2842 from .http import dump_header
2843 from .http import dump_options_header
2844 from .http import generate_etag
2845 from .http import http_date
2846 from .http import is_byte_range_valid
2847 from .http import parse_options_header
2848 from .http import parse_set_header
2849 from .http import quote_etag
2850 from .http import quote_header_value
2851 from .http import unquote_etag
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug
3 ~~~~~~~~~~~~~~
4
5 WSGI application traceback debugger.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import getpass
11 import hashlib
12 import json
13 import mimetypes
14 import os
15 import pkgutil
16 import re
17 import sys
18 import time
19 import uuid
20 from itertools import chain
21 from os.path import basename
22 from os.path import join
23
24 from .._compat import text_type
25 from .._internal import _log
26 from ..http import parse_cookie
27 from ..security import gen_salt
28 from ..wrappers import BaseRequest as Request
29 from ..wrappers import BaseResponse as Response
30 from .console import Console
31 from .repr import debug_repr as _debug_repr
32 from .tbtools import get_current_traceback
33 from .tbtools import render_console_html
34
35
36 def debug_repr(*args, **kwargs):
37 import warnings
38
39 warnings.warn(
40 "'debug_repr' has moved to 'werkzeug.debug.repr.debug_repr'"
41 " as of version 0.7. This old import will be removed in version"
42 " 1.0.",
43 DeprecationWarning,
44 stacklevel=2,
45 )
46 return _debug_repr(*args, **kwargs)
47
48
49 # A week
50 PIN_TIME = 60 * 60 * 24 * 7
51
52
53 def hash_pin(pin):
54 if isinstance(pin, text_type):
55 pin = pin.encode("utf-8", "replace")
56 return hashlib.md5(pin + b"shittysalt").hexdigest()[:12]
57
58
59 _machine_id = None
60
61
62 def get_machine_id():
63 global _machine_id
64 rv = _machine_id
65 if rv is not None:
66 return rv
67
68 def _generate():
69 # docker containers share the same machine id, get the
70 # container id instead
71 try:
72 with open("/proc/self/cgroup") as f:
73 value = f.readline()
74 except IOError:
75 pass
76 else:
77 value = value.strip().partition("/docker/")[2]
78
79 if value:
80 return value
81
82 # Potential sources of secret information on Linux. The machine-id
83 # is stable across boots; the boot id is not.
84 for filename in "/etc/machine-id", "/proc/sys/kernel/random/boot_id":
85 try:
86 with open(filename, "rb") as f:
87 return f.readline().strip()
88 except IOError:
89 continue
90
91 # On OS X we can use the computer's serial number assuming that
92 # ioreg exists and can spit out that information.
93 try:
94 # Also catch import errors: subprocess may not be available, e.g.
95 # Google App Engine
96 # See https://github.com/pallets/werkzeug/issues/925
97 from subprocess import Popen, PIPE
98
99 dump = Popen(
100 ["ioreg", "-c", "IOPlatformExpertDevice", "-d", "2"], stdout=PIPE
101 ).communicate()[0]
102 match = re.search(b'"serial-number" = <([^>]+)', dump)
103 if match is not None:
104 return match.group(1)
105 except (OSError, ImportError):
106 pass
107
108 # On Windows we can use winreg to get the machine guid
109 wr = None
110 try:
111 import winreg as wr
112 except ImportError:
113 try:
114 import _winreg as wr
115 except ImportError:
116 pass
117 if wr is not None:
118 try:
119 with wr.OpenKey(
120 wr.HKEY_LOCAL_MACHINE,
121 "SOFTWARE\\Microsoft\\Cryptography",
122 0,
123 wr.KEY_READ | wr.KEY_WOW64_64KEY,
124 ) as rk:
125 machineGuid, wrType = wr.QueryValueEx(rk, "MachineGuid")
126 if wrType == wr.REG_SZ:
127 return machineGuid.encode("utf-8")
128 else:
129 return machineGuid
130 except WindowsError:
131 pass
132
133 _machine_id = rv = _generate()
134 return rv
135
136
137 class _ConsoleFrame(object):
138 """Helper class so that we can reuse the frame console code for the
139 standalone console.
140 """
141
142 def __init__(self, namespace):
143 self.console = Console(namespace)
144 self.id = 0
145
146
147 def get_pin_and_cookie_name(app):
148 """Given an application object this returns a semi-stable 9 digit pin
149 code and a random key. The hope is that this is stable between
150 restarts to not make debugging particularly frustrating. If the pin
151 was forcefully disabled this returns `None`.
152
153 Second item in the resulting tuple is the cookie name for remembering.
154 """
155 pin = os.environ.get("WERKZEUG_DEBUG_PIN")
156 rv = None
157 num = None
158
159 # Pin was explicitly disabled
160 if pin == "off":
161 return None, None
162
163 # Pin was provided explicitly
164 if pin is not None and pin.replace("-", "").isdigit():
165 # If there are separators in the pin, return it directly
166 if "-" in pin:
167 rv = pin
168 else:
169 num = pin
170
171 modname = getattr(app, "__module__", app.__class__.__module__)
172
173 try:
174 # getuser imports the pwd module, which does not exist in Google
175 # App Engine. It may also raise a KeyError if the UID does not
176 # have a username, such as in Docker.
177 username = getpass.getuser()
178 except (ImportError, KeyError):
179 username = None
180
181 mod = sys.modules.get(modname)
182
183 # This information only exists to make the cookie unique on the
184 # computer, not as a security feature.
185 probably_public_bits = [
186 username,
187 modname,
188 getattr(app, "__name__", app.__class__.__name__),
189 getattr(mod, "__file__", None),
190 ]
191
192 # This information is here to make it harder for an attacker to
193 # guess the cookie name. They are unlikely to be contained anywhere
194 # within the unauthenticated debug page.
195 private_bits = [str(uuid.getnode()), get_machine_id()]
196
197 h = hashlib.md5()
198 for bit in chain(probably_public_bits, private_bits):
199 if not bit:
200 continue
201 if isinstance(bit, text_type):
202 bit = bit.encode("utf-8")
203 h.update(bit)
204 h.update(b"cookiesalt")
205
206 cookie_name = "__wzd" + h.hexdigest()[:20]
207
208 # If we need to generate a pin we salt it a bit more so that we don't
209 # end up with the same value, and generate 9 digits from the hash.
210 if num is None:
211 h.update(b"pinsalt")
212 num = ("%09d" % int(h.hexdigest(), 16))[:9]
213
214 # Format the pincode in groups of digits for easier remembering if
215 # we don't have a result yet.
216 if rv is None:
217 for group_size in 5, 4, 3:
218 if len(num) % group_size == 0:
219 rv = "-".join(
220 num[x : x + group_size].rjust(group_size, "0")
221 for x in range(0, len(num), group_size)
222 )
223 break
224 else:
225 rv = num
226
227 return rv, cookie_name
228
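The grouping step at the end of `get_pin_and_cookie_name` above can be isolated into a small sketch: the 9-digit number is split into equal-size groups (group sizes 5, 4, then 3 are tried in order) joined with dashes.

```python
# Standalone sketch of the pin-grouping loop above.
def group_pin(num):
    for group_size in (5, 4, 3):
        if len(num) % group_size == 0:
            return "-".join(
                num[x : x + group_size].rjust(group_size, "0")
                for x in range(0, len(num), group_size)
            )
    return num

# 9 digits are divisible by 3, so groups of three are used.
print(group_pin("123456789"))  # 123-456-789
```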
229
230 class DebuggedApplication(object):
231 """Enables debugging support for a given application::
232
233 from werkzeug.debug import DebuggedApplication
234 from myapp import app
235 app = DebuggedApplication(app, evalex=True)
236
237 The `evalex` keyword argument allows evaluating expressions in a
238 traceback's frame context.
239
240 .. versionadded:: 0.9
241 The `lodgeit_url` parameter was deprecated.
242
243 :param app: the WSGI application to run debugged.
244 :param evalex: enable exception evaluation feature (interactive
245 debugging). This requires a non-forking server.
246 :param request_key: The key that points to the request object in the
247 environment. This parameter is ignored in current
248 versions.
249 :param console_path: the URL for a general purpose console.
250 :param console_init_func: the function that is executed before starting
251 the general purpose console. The return value
252 is used as initial namespace.
253 :param show_hidden_frames: by default hidden traceback frames are skipped.
254 You can show them by setting this parameter
255 to `True`.
256 :param pin_security: can be used to disable the pin based security system.
257 :param pin_logging: enables the logging of the pin system.
258 """
259
260 def __init__(
261 self,
262 app,
263 evalex=False,
264 request_key="werkzeug.request",
265 console_path="/console",
266 console_init_func=None,
267 show_hidden_frames=False,
268 lodgeit_url=None,
269 pin_security=True,
270 pin_logging=True,
271 ):
272 if lodgeit_url is not None:
273 from warnings import warn
274
275 warn(
276 "'lodgeit_url' is no longer used as of version 0.9 and"
277 " will be removed in version 1.0. Werkzeug uses"
278 " https://gist.github.com/ instead.",
279 DeprecationWarning,
280 stacklevel=2,
281 )
282 if not console_init_func:
283 console_init_func = None
284 self.app = app
285 self.evalex = evalex
286 self.frames = {}
287 self.tracebacks = {}
288 self.request_key = request_key
289 self.console_path = console_path
290 self.console_init_func = console_init_func
291 self.show_hidden_frames = show_hidden_frames
292 self.secret = gen_salt(20)
293 self._failed_pin_auth = 0
294
295 self.pin_logging = pin_logging
296 if pin_security:
297 # Print out the pin for the debugger on standard out.
298 if os.environ.get("WERKZEUG_RUN_MAIN") == "true" and pin_logging:
299 _log("warning", " * Debugger is active!")
300 if self.pin is None:
301 _log("warning", " * Debugger PIN disabled. DEBUGGER UNSECURED!")
302 else:
303 _log("info", " * Debugger PIN: %s" % self.pin)
304 else:
305 self.pin = None
306
307 def _get_pin(self):
308 if not hasattr(self, "_pin"):
309 self._pin, self._pin_cookie = get_pin_and_cookie_name(self.app)
310 return self._pin
311
312 def _set_pin(self, value):
313 self._pin = value
314
315 pin = property(_get_pin, _set_pin)
316 del _get_pin, _set_pin
317
318 @property
319 def pin_cookie_name(self):
320 """The name of the pin cookie."""
321 if not hasattr(self, "_pin_cookie"):
322 self._pin, self._pin_cookie = get_pin_and_cookie_name(self.app)
323 return self._pin_cookie
324
325 def debug_application(self, environ, start_response):
326 """Run the application and conserve the traceback frames."""
327 app_iter = None
328 try:
329 app_iter = self.app(environ, start_response)
330 for item in app_iter:
331 yield item
332 if hasattr(app_iter, "close"):
333 app_iter.close()
334 except Exception:
335 if hasattr(app_iter, "close"):
336 app_iter.close()
337 traceback = get_current_traceback(
338 skip=1,
339 show_hidden_frames=self.show_hidden_frames,
340 ignore_system_exceptions=True,
341 )
342 for frame in traceback.frames:
343 self.frames[frame.id] = frame
344 self.tracebacks[traceback.id] = traceback
345
346 try:
347 start_response(
348 "500 INTERNAL SERVER ERROR",
349 [
350 ("Content-Type", "text/html; charset=utf-8"),
351 # Disable Chrome's XSS protection, the debug
352 # output can cause false-positives.
353 ("X-XSS-Protection", "0"),
354 ],
355 )
356 except Exception:
357 # if we end up here there has been output but an error
358 # occurred. in that situation we can do nothing fancy any
359 # more, better log something into the error log and fall
360 # back gracefully.
361 environ["wsgi.errors"].write(
362 "Debugging middleware caught exception in streamed "
363 "response at a point where response headers were already "
364 "sent.\n"
365 )
366 else:
367 is_trusted = bool(self.check_pin_trust(environ))
368 yield traceback.render_full(
369 evalex=self.evalex, evalex_trusted=is_trusted, secret=self.secret
370 ).encode("utf-8", "replace")
371
372 traceback.log(environ["wsgi.errors"])
373
374 def execute_command(self, request, command, frame):
375 """Execute a command in a console."""
376 return Response(frame.console.eval(command), mimetype="text/html")
377
378 def display_console(self, request):
379 """Display a standalone shell."""
380 if 0 not in self.frames:
381 if self.console_init_func is None:
382 ns = {}
383 else:
384 ns = dict(self.console_init_func())
385 ns.setdefault("app", self.app)
386 self.frames[0] = _ConsoleFrame(ns)
387 is_trusted = bool(self.check_pin_trust(request.environ))
388 return Response(
389 render_console_html(secret=self.secret, evalex_trusted=is_trusted),
390 mimetype="text/html",
391 )
392
393 def paste_traceback(self, request, traceback):
394 """Paste the traceback and return a JSON response."""
395 rv = traceback.paste()
396 return Response(json.dumps(rv), mimetype="application/json")
397
398 def get_resource(self, request, filename):
399 """Return a static resource from the shared folder."""
400 filename = join("shared", basename(filename))
401 try:
402 data = pkgutil.get_data(__package__, filename)
403 except OSError:
404 data = None
405 if data is not None:
406 mimetype = mimetypes.guess_type(filename)[0] or "application/octet-stream"
407 return Response(data, mimetype=mimetype)
408 return Response("Not Found", status=404)
409
410 def check_pin_trust(self, environ):
411 """Checks if the request passed the pin test. This returns `True` if the
412 request is trusted on a pin/cookie basis and returns `False` if not.
413 Additionally if the cookie's stored pin hash is wrong it will return
414 `None` so that appropriate action can be taken.
415 """
416 if self.pin is None:
417 return True
418 val = parse_cookie(environ).get(self.pin_cookie_name)
419 if not val or "|" not in val:
420 return False
421 ts, pin_hash = val.split("|", 1)
422 if not ts.isdigit():
423 return False
424 if pin_hash != hash_pin(self.pin):
425 return None
426 return (time.time() - PIN_TIME) < int(ts)
427
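The timestamp check in `check_pin_trust` above, sketched on its own: the cookie value is `"<timestamp>|<pin hash>"` and remains trusted for `PIN_TIME` seconds after it was set. `cookie_fresh` is a hypothetical helper for illustration.

```python
# Sketch of the freshness test at the end of check_pin_trust above.
PIN_TIME = 60 * 60 * 24 * 7  # one week, as defined in this module

def cookie_fresh(cookie_value, now):
    ts, _, pin_hash = cookie_value.partition("|")
    if not ts.isdigit() or not pin_hash:
        return False
    return (now - PIN_TIME) < int(ts)

print(cookie_fresh("1000|deadbeef", now=1000 + PIN_TIME - 1))  # True
print(cookie_fresh("1000|deadbeef", now=1000 + PIN_TIME + 1))  # False
```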
428 def _fail_pin_auth(self):
429 time.sleep(5.0 if self._failed_pin_auth > 5 else 0.5)
430 self._failed_pin_auth += 1
431
432 def pin_auth(self, request):
433 """Authenticates with the pin."""
434 exhausted = False
435 auth = False
436 trust = self.check_pin_trust(request.environ)
437
438 # If the trust return value is `None` it means that the cookie is
439 # set but the stored pin hash value is bad. This means that the
440 # pin was changed. In this case we count a bad auth and unset the
441 # cookie. This way it becomes harder to guess the cookie name
442 # instead of the pin as we still count up failures.
443 bad_cookie = False
444 if trust is None:
445 self._fail_pin_auth()
446 bad_cookie = True
447
448 # If we're trusted, we're authenticated.
449 elif trust:
450 auth = True
451
452 # If we failed too many times, then we're locked out.
453 elif self._failed_pin_auth > 10:
454 exhausted = True
455
456 # Otherwise go through pin based authentication
457 else:
458 entered_pin = request.args.get("pin", "")
459 if entered_pin.strip().replace("-", "") == self.pin.replace("-", ""):
460 self._failed_pin_auth = 0
461 auth = True
462 else:
463 self._fail_pin_auth()
464
465 rv = Response(
466 json.dumps({"auth": auth, "exhausted": exhausted}),
467 mimetype="application/json",
468 )
469 if auth:
470 rv.set_cookie(
471 self.pin_cookie_name,
472 "%s|%s" % (int(time.time()), hash_pin(self.pin)),
473 httponly=True,
474 )
475 elif bad_cookie:
476 rv.delete_cookie(self.pin_cookie_name)
477 return rv
478
479 def log_pin_request(self):
480 """Log the pin if needed."""
481 if self.pin_logging and self.pin is not None:
482 _log(
483 "info", " * To enable the debugger you need to enter the security pin:"
484 )
485 _log("info", " * Debugger pin code: %s" % self.pin)
486 return Response("")
487
488 def __call__(self, environ, start_response):
489 """Dispatch the requests."""
490 # important: don't ever access a function here that reads the incoming
491 # form data! Otherwise the application won't have access to that data
492 # any more!
493 request = Request(environ)
494 response = self.debug_application
495 if request.args.get("__debugger__") == "yes":
496 cmd = request.args.get("cmd")
497 arg = request.args.get("f")
498 secret = request.args.get("s")
499 traceback = self.tracebacks.get(request.args.get("tb", type=int))
500 frame = self.frames.get(request.args.get("frm", type=int))
501 if cmd == "resource" and arg:
502 response = self.get_resource(request, arg)
503 elif cmd == "paste" and traceback is not None and secret == self.secret:
504 response = self.paste_traceback(request, traceback)
505 elif cmd == "pinauth" and secret == self.secret:
506 response = self.pin_auth(request)
507 elif cmd == "printpin" and secret == self.secret:
508 response = self.log_pin_request()
509 elif (
510 self.evalex
511 and cmd is not None
512 and frame is not None
513 and self.secret == secret
514 and self.check_pin_trust(environ)
515 ):
516 response = self.execute_command(request, cmd, frame)
517 elif (
518 self.evalex
519 and self.console_path is not None
520 and request.path == self.console_path
521 ):
522 response = self.display_console(request)
523 return response(environ, start_response)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug.console
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 Interactive console support.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import code
11 import sys
12 from types import CodeType
13
14 from ..local import Local
15 from ..utils import escape
16 from .repr import debug_repr
17 from .repr import dump
18 from .repr import helper
19
20
21 _local = Local()
22
23
24 class HTMLStringO(object):
25 """A StringO version that HTML escapes on write."""
26
27 def __init__(self):
28 self._buffer = []
29
30 def isatty(self):
31 return False
32
33 def close(self):
34 pass
35
36 def flush(self):
37 pass
38
39 def seek(self, n, mode=0):
40 pass
41
42 def readline(self):
43 if len(self._buffer) == 0:
44 return ""
45 ret = self._buffer[0]
46 del self._buffer[0]
47 return ret
48
49 def reset(self):
50 val = "".join(self._buffer)
51 del self._buffer[:]
52 return val
53
54 def _write(self, x):
55 if isinstance(x, bytes):
56 x = x.decode("utf-8", "replace")
57 self._buffer.append(x)
58
59 def write(self, x):
60 self._write(escape(x))
61
62 def writelines(self, x):
63 self._write(escape("".join(x)))
64
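A hypothetical minimal version of the escape-on-write buffer above, using a hand-rolled HTML escape in place of `werkzeug.utils.escape`:

```python
# Minimal sketch of HTMLStringO: everything written is HTML-escaped
# into an internal buffer that reset() drains.
class EscapingBuffer(object):
    def __init__(self):
        self._buffer = []

    def write(self, x):
        self._buffer.append(
            x.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")
        )

    def reset(self):
        val = "".join(self._buffer)
        del self._buffer[:]
        return val

buf = EscapingBuffer()
buf.write("<b>1 & 2</b>")
print(buf.reset())  # &lt;b&gt;1 &amp; 2&lt;/b&gt;
```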
65
66 class ThreadedStream(object):
67 """Thread-local wrapper for sys.stdout for the interactive console."""
68
69 @staticmethod
70 def push():
71 if not isinstance(sys.stdout, ThreadedStream):
72 sys.stdout = ThreadedStream()
73 _local.stream = HTMLStringO()
74
75 @staticmethod
76 def fetch():
77 try:
78 stream = _local.stream
79 except AttributeError:
80 return ""
81 return stream.reset()
82
83 @staticmethod
84 def displayhook(obj):
85 try:
86 stream = _local.stream
87 except AttributeError:
88 return _displayhook(obj)
89 # stream._write bypasses escaping as debug_repr is
90 # already generating HTML for us.
91 if obj is not None:
92 _local._current_ipy.locals["_"] = obj
93 stream._write(debug_repr(obj))
94
95 def __setattr__(self, name, value):
96 raise AttributeError("read only attribute %s" % name)
97
98 def __dir__(self):
99 return dir(sys.__stdout__)
100
101 def __getattribute__(self, name):
102 if name == "__members__":
103 return dir(sys.__stdout__)
104 try:
105 stream = _local.stream
106 except AttributeError:
107 stream = sys.__stdout__
108 return getattr(stream, name)
109
110 def __repr__(self):
111 return repr(sys.__stdout__)
112
113
114 # add the threaded stream as display hook
115 _displayhook = sys.displayhook
116 sys.displayhook = ThreadedStream.displayhook
117
118
119 class _ConsoleLoader(object):
120 def __init__(self):
121 self._storage = {}
122
123 def register(self, code, source):
124 self._storage[id(code)] = source
125 # register code objects of wrapped functions too.
126 for var in code.co_consts:
127 if isinstance(var, CodeType):
128 self._storage[id(var)] = source
129
130 def get_source_by_code(self, code):
131 try:
132 return self._storage[id(code)]
133 except KeyError:
134 pass
135
136
137 def _wrap_compiler(console):
138 compile = console.compile
139
140 def func(source, filename, symbol):
141 code = compile(source, filename, symbol)
142 console.loader.register(code, source)
143 return code
144
145 console.compile = func
146
147
148 class _InteractiveConsole(code.InteractiveInterpreter):
149 def __init__(self, globals, locals):
150 code.InteractiveInterpreter.__init__(self, locals)
151 self.globals = dict(globals)
152 self.globals["dump"] = dump
153 self.globals["help"] = helper
154 self.globals["__loader__"] = self.loader = _ConsoleLoader()
155 self.more = False
156 self.buffer = []
157 _wrap_compiler(self)
158
159 def runsource(self, source):
160 source = source.rstrip() + "\n"
161 ThreadedStream.push()
162 prompt = "... " if self.more else ">>> "
163 try:
164 source_to_eval = "".join(self.buffer + [source])
165 if code.InteractiveInterpreter.runsource(
166 self, source_to_eval, "<debugger>", "single"
167 ):
168 self.more = True
169 self.buffer.append(source)
170 else:
171 self.more = False
172 del self.buffer[:]
173 finally:
174 output = ThreadedStream.fetch()
175 return prompt + escape(source) + output
176
177 def runcode(self, code):
178 try:
179 eval(code, self.globals, self.locals)
180 except Exception:
181 self.showtraceback()
182
183 def showtraceback(self):
184 from .tbtools import get_current_traceback
185
186 tb = get_current_traceback(skip=1)
187 sys.stdout._write(tb.render_summary())
188
189 def showsyntaxerror(self, filename=None):
190 from .tbtools import get_current_traceback
191
192 tb = get_current_traceback(skip=4)
193 sys.stdout._write(tb.render_summary())
194
195 def write(self, data):
196 sys.stdout.write(data)
197
198
199 class Console(object):
200 """An interactive console."""
201
202 def __init__(self, globals=None, locals=None):
203 if locals is None:
204 locals = {}
205 if globals is None:
206 globals = {}
207 self._ipy = _InteractiveConsole(globals, locals)
208
209 def eval(self, code):
210 _local._current_ipy = self._ipy
211 old_sys_stdout = sys.stdout
212 try:
213 return self._ipy.runsource(code)
214 finally:
215 sys.stdout = old_sys_stdout
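The `Console`/`_InteractiveConsole` pair above wraps `code.InteractiveInterpreter`, swapping `sys.stdout` for a per-thread buffer around each `runsource()` call and returning the prompt plus captured output. A simplified standalone sketch of that capture pattern (class and method names here are illustrative, not part of the Werkzeug API):

```python
import code
import io
import sys


class CapturingConsole(code.InteractiveInterpreter):
    """Run one line at a time, capturing anything written to stdout."""

    def __init__(self):
        code.InteractiveInterpreter.__init__(self, {})
        self.buffer = []  # pending lines of an incomplete statement
        self.more = False

    def eval(self, source):
        prompt = "... " if self.more else ">>> "
        old_stdout = sys.stdout
        sys.stdout = captured = io.StringIO()
        try:
            source = source.rstrip() + "\n"
            to_run = "".join(self.buffer + [source])
            if self.runsource(to_run, "<console>", "single"):
                # Incomplete statement: remember it and wait for more input.
                self.more = True
                self.buffer.append(source)
            else:
                self.more = False
                del self.buffer[:]
        finally:
            sys.stdout = old_stdout
        return prompt + source + captured.getvalue()
```

As in `_InteractiveConsole.runsource()`, the stdout swap happens in a `try`/`finally` so a raising statement cannot leave the real stdout replaced.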
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug.repr
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module implements object representations for debugging purposes.
6 Unlike the default repr, these reprs expose a lot more information and
7 produce HTML instead of ASCII.
8
9 Together with the CSS and JavaScript files of the debugger, this gives
10 colorful and more compact output.
11
12 :copyright: 2007 Pallets
13 :license: BSD-3-Clause
14 """
15 import codecs
16 import re
17 import sys
18 from collections import deque
19 from traceback import format_exception_only
20
21 from .._compat import integer_types
22 from .._compat import iteritems
23 from .._compat import PY2
24 from .._compat import string_types
25 from .._compat import text_type
26 from ..utils import escape
27
28
29 missing = object()
30 _paragraph_re = re.compile(r"(?:\r\n|\r|\n){2,}")
31 RegexType = type(_paragraph_re)
32
33
34 HELP_HTML = """\
35 <div class=box>
36 <h3>%(title)s</h3>
37 <pre class=help>%(text)s</pre>
38 </div>\
39 """
40 OBJECT_DUMP_HTML = """\
41 <div class=box>
42 <h3>%(title)s</h3>
43 %(repr)s
44 <table>%(items)s</table>
45 </div>\
46 """
47
48
49 def debug_repr(obj):
50 """Creates a debug repr of an object as HTML unicode string."""
51 return DebugReprGenerator().repr(obj)
52
53
54 def dump(obj=missing):
55 """Print the object details to stdout._write (for the interactive
56 console of the web debugger).
57 """
58 gen = DebugReprGenerator()
59 if obj is missing:
60 rv = gen.dump_locals(sys._getframe(1).f_locals)
61 else:
62 rv = gen.dump_object(obj)
63 sys.stdout._write(rv)
64
65
66 class _Helper(object):
67 """Displays an HTML version of the normal help, for the interactive
68 debugger only, because it requires a patched sys.stdout.
69 """
70
71 def __repr__(self):
72 return "Type help(object) for help about object."
73
74 def __call__(self, topic=None):
75 if topic is None:
76 sys.stdout._write("<span class=help>%s</span>" % repr(self))
77 return
78 import pydoc
79
80 pydoc.help(topic)
81 rv = sys.stdout.reset()
82 if isinstance(rv, bytes):
83 rv = rv.decode("utf-8", "ignore")
84 paragraphs = _paragraph_re.split(rv)
85 if len(paragraphs) > 1:
86 title = paragraphs[0]
87 text = "\n\n".join(paragraphs[1:])
88 else: # pragma: no cover
89 title = "Help"
90 text = paragraphs[0]
91 sys.stdout._write(HELP_HTML % {"title": title, "text": text})
92
93
94 helper = _Helper()
95
96
97 def _add_subclass_info(inner, obj, base):
98 if isinstance(base, tuple):
99 for base in base:
100 if type(obj) is base:
101 return inner
102 elif type(obj) is base:
103 return inner
104 module = ""
105 if obj.__class__.__module__ not in ("__builtin__", "exceptions"):
106 module = '<span class="module">%s.</span>' % obj.__class__.__module__
107 return "%s%s(%s)" % (module, obj.__class__.__name__, inner)
108
109
110 class DebugReprGenerator(object):
111 def __init__(self):
112 self._stack = []
113
114 def _sequence_repr_maker(left, right, base=object(), limit=8): # noqa: B008, B902
115 def proxy(self, obj, recursive):
116 if recursive:
117 return _add_subclass_info(left + "..." + right, obj, base)
118 buf = [left]
119 have_extended_section = False
120 for idx, item in enumerate(obj):
121 if idx:
122 buf.append(", ")
123 if idx == limit:
124 buf.append('<span class="extended">')
125 have_extended_section = True
126 buf.append(self.repr(item))
127 if have_extended_section:
128 buf.append("</span>")
129 buf.append(right)
130 return _add_subclass_info(u"".join(buf), obj, base)
131
132 return proxy
133
134 list_repr = _sequence_repr_maker("[", "]", list)
135 tuple_repr = _sequence_repr_maker("(", ")", tuple)
136 set_repr = _sequence_repr_maker("set([", "])", set)
137 frozenset_repr = _sequence_repr_maker("frozenset([", "])", frozenset)
138 deque_repr = _sequence_repr_maker(
139 '<span class="module">collections.</span>deque([', "])", deque
140 )
141 del _sequence_repr_maker
142
143 def regex_repr(self, obj):
144 pattern = repr(obj.pattern)
145 if PY2:
146 pattern = pattern.decode("string-escape", "ignore")
147 else:
148 pattern = codecs.decode(pattern, "unicode-escape", "ignore")
149 if pattern[:1] == "u":
150 pattern = "ur" + pattern[1:]
151 else:
152 pattern = "r" + pattern
153 return u're.compile(<span class="string regex">%s</span>)' % pattern
154
155 def string_repr(self, obj, limit=70):
156 buf = ['<span class="string">']
157 r = repr(obj)
158
159 # shorten the repr when the hidden part would be at least 3 chars
160 if len(r) - limit > 2:
161 buf.extend(
162 (
163 escape(r[:limit]),
164 '<span class="extended">',
165 escape(r[limit:]),
166 "</span>",
167 )
168 )
169 else:
170 buf.append(escape(r))
171
172 buf.append("</span>")
173 out = u"".join(buf)
174
175 # if the repr looks like a standard string, add subclass info if needed
176 if r[0] in "'\"" or (r[0] in "ub" and r[1] in "'\""):
177 return _add_subclass_info(out, obj, (bytes, text_type))
178
179 # otherwise, assume the repr distinguishes the subclass already
180 return out
181
182 def dict_repr(self, d, recursive, limit=5):
183 if recursive:
184 return _add_subclass_info(u"{...}", d, dict)
185 buf = ["{"]
186 have_extended_section = False
187 for idx, (key, value) in enumerate(iteritems(d)):
188 if idx:
189 buf.append(", ")
190 if idx == limit - 1:
191 buf.append('<span class="extended">')
192 have_extended_section = True
193 buf.append(
194 '<span class="pair"><span class="key">%s</span>: '
195 '<span class="value">%s</span></span>'
196 % (self.repr(key), self.repr(value))
197 )
198 if have_extended_section:
199 buf.append("</span>")
200 buf.append("}")
201 return _add_subclass_info(u"".join(buf), d, dict)
202
203 def object_repr(self, obj):
204 r = repr(obj)
205 if PY2:
206 r = r.decode("utf-8", "replace")
207 return u'<span class="object">%s</span>' % escape(r)
208
209 def dispatch_repr(self, obj, recursive):
210 if obj is helper:
211 return u'<span class="help">%r</span>' % helper
212 if isinstance(obj, (integer_types, float, complex)):
213 return u'<span class="number">%r</span>' % obj
214 if isinstance(obj, string_types) or isinstance(obj, bytes):
215 return self.string_repr(obj)
216 if isinstance(obj, RegexType):
217 return self.regex_repr(obj)
218 if isinstance(obj, list):
219 return self.list_repr(obj, recursive)
220 if isinstance(obj, tuple):
221 return self.tuple_repr(obj, recursive)
222 if isinstance(obj, set):
223 return self.set_repr(obj, recursive)
224 if isinstance(obj, frozenset):
225 return self.frozenset_repr(obj, recursive)
226 if isinstance(obj, dict):
227 return self.dict_repr(obj, recursive)
228 if deque is not None and isinstance(obj, deque):
229 return self.deque_repr(obj, recursive)
230 return self.object_repr(obj)
231
232 def fallback_repr(self):
233 try:
234 info = "".join(format_exception_only(*sys.exc_info()[:2]))
235 except Exception: # pragma: no cover
236 info = "?"
237 if PY2:
238 info = info.decode("utf-8", "ignore")
239 return u'<span class="brokenrepr">&lt;broken repr (%s)&gt;' u"</span>" % escape(
240 info.strip()
241 )
242
243 def repr(self, obj):
244 recursive = False
245 for item in self._stack:
246 if item is obj:
247 recursive = True
248 break
249 self._stack.append(obj)
250 try:
251 try:
252 return self.dispatch_repr(obj, recursive)
253 except Exception:
254 return self.fallback_repr()
255 finally:
256 self._stack.pop()
257
258 def dump_object(self, obj):
259 repr = items = None
260 if isinstance(obj, dict):
261 title = "Contents of"
262 items = []
263 for key, value in iteritems(obj):
264 if not isinstance(key, string_types):
265 items = None
266 break
267 items.append((key, self.repr(value)))
268 if items is None:
269 items = []
270 repr = self.repr(obj)
271 for key in dir(obj):
272 try:
273 items.append((key, self.repr(getattr(obj, key))))
274 except Exception:
275 pass
276 title = "Details for"
277 title += " " + object.__repr__(obj)[1:-1]
278 return self.render_object_dump(items, title, repr)
279
280 def dump_locals(self, d):
281 items = [(key, self.repr(value)) for key, value in d.items()]
282 return self.render_object_dump(items, "Local variables in frame")
283
284 def render_object_dump(self, items, title, repr=None):
285 html_items = []
286 for key, value in items:
287 html_items.append(
288 "<tr><th>%s<td><pre class=repr>%s</pre>" % (escape(key), value)
289 )
290 if not html_items:
291 html_items.append("<tr><td><em>Nothing</em>")
292 return OBJECT_DUMP_HTML % {
293 "title": escape(title),
294 "repr": "<pre class=repr>%s</pre>" % repr if repr else "",
295 "items": "\n".join(html_items),
296 }
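`DebugReprGenerator.repr()` above dispatches on type and keeps a stack of in-progress objects so self-referential containers render as `...` instead of recursing forever. A minimal standalone sketch of the same idea, using the stdlib `html.escape` in place of `werkzeug.utils.escape` (the markup and class names are illustrative only):

```python
from html import escape


class MiniReprGenerator:
    """Tiny HTML repr generator with a recursion guard."""

    def __init__(self):
        self._stack = []

    def repr(self, obj):
        # An object already on the stack is being rendered recursively.
        recursive = any(item is obj for item in self._stack)
        self._stack.append(obj)
        try:
            if isinstance(obj, list):
                if recursive:
                    return "[...]"
                return "[%s]" % ", ".join(self.repr(x) for x in obj)
            if isinstance(obj, str):
                return '<span class="string">%s</span>' % escape(repr(obj))
            if isinstance(obj, (int, float)):
                return '<span class="number">%r</span>' % obj
            return '<span class="object">%s</span>' % escape(repr(obj))
        finally:
            self._stack.pop()
```

The identity check (`is`, not `==`) mirrors the loop in `repr()` above: equality on a self-referential container would itself recurse.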
0 $(function() {
1 if (!EVALEX_TRUSTED) {
2 initPinBox();
3 }
4
5 /**
6 * if we are in console mode, show the console.
7 */
8 if (CONSOLE_MODE && EVALEX) {
9 openShell(null, $('div.console div.inner').empty(), 0);
10 }
11
12 $("div.detail").click(function() {
13 $("div.traceback").get(0).scrollIntoView(false);
14 });
15
16 $('div.traceback div.frame').each(function() {
17 var
18 target = $('pre', this),
19 consoleNode = null,
20 frameID = this.id.substring(6);
21
22 target.click(function() {
23 $(this).parent().toggleClass('expanded');
24 });
25
26 /**
27 * Add an interactive console to the frames
28 */
29 if (EVALEX && target.is('.current')) {
30 $('<img src="?__debugger__=yes&cmd=resource&f=console.png">')
31 .attr('title', 'Open an interactive python shell in this frame')
32 .click(function() {
33 consoleNode = openShell(consoleNode, target, frameID);
34 return false;
35 })
36 .prependTo(target);
37 }
38 });
39
40 /**
41 * toggle traceback types on click.
42 */
43 $('h2.traceback').click(function() {
44 $(this).next().slideToggle('fast');
45 $('div.plain').slideToggle('fast');
46 }).css('cursor', 'pointer');
47 $('div.plain').hide();
48
49 /**
50 * Add extra info (this is here so that only users with JavaScript
51 * enabled see it).
52 */
53 $('span.nojavascript')
54 .removeClass('nojavascript')
55 .html('<p>To switch between the interactive traceback and the plaintext ' +
56 'one, you can click on the "Traceback" headline. From the text ' +
57 'traceback you can also create a paste of it. ' + (!EVALEX ? '' :
58 'For code execution mouse-over the frame you want to debug and ' +
59 'click on the console icon on the right side.' +
60 '<p>You can execute arbitrary Python code in the stack frames and ' +
61 'there are some extra helpers available for introspection:' +
62 '<ul><li><code>dump()</code> shows all variables in the frame' +
63 '<li><code>dump(obj)</code> dumps all that\'s known about the object</ul>'));
64
65 /**
66 * Add the pastebin feature
67 */
68 $('div.plain form')
69 .submit(function() {
70 var label = $('input[type="submit"]', this);
71 var old_val = label.val();
72 label.val('submitting...');
73 $.ajax({
74 dataType: 'json',
75 url: document.location.pathname,
76 data: {__debugger__: 'yes', tb: TRACEBACK, cmd: 'paste',
77 s: SECRET},
78 success: function(data) {
79 $('div.plain span.pastemessage')
80 .removeClass('pastemessage')
81 .text('Paste created: ')
82 .append($('<a>#' + data.id + '</a>').attr('href', data.url));
83 },
84 error: function() {
85 alert('Error: Could not submit paste. No network connection?');
86 label.val(old_val);
87 }
88 });
89 return false;
90 });
91
92 // If we have JavaScript we submit via Ajax anyway, so there is no
93 // need for the non-scaling textarea.
94 var plainTraceback = $('div.plain textarea');
95 plainTraceback.replaceWith($('<pre>').text(plainTraceback.text()));
96 });
97
98 function initPinBox() {
99 $('.pin-prompt form').submit(function(evt) {
100 evt.preventDefault();
101 var pin = this.pin.value;
102 var btn = this.btn;
103 btn.disabled = true;
104 $.ajax({
105 dataType: 'json',
106 url: document.location.pathname,
107 data: {__debugger__: 'yes', cmd: 'pinauth', pin: pin,
108 s: SECRET},
109 success: function(data) {
110 btn.disabled = false;
111 if (data.auth) {
112 EVALEX_TRUSTED = true;
113 $('.pin-prompt').fadeOut();
114 } else {
115 if (data.exhausted) {
116 alert('Error: too many attempts. Restart server to retry.');
117 } else {
118 alert('Error: incorrect pin');
119 }
120 }
121 console.log(data);
122 },
123 error: function() {
124 btn.disabled = false;
125 alert('Error: Could not verify PIN. Network error?');
126 }
127 });
128 });
129 }
130
131 function promptForPin() {
132 if (!EVALEX_TRUSTED) {
133 $.ajax({
134 url: document.location.pathname,
135 data: {__debugger__: 'yes', cmd: 'printpin', s: SECRET}
136 });
137 $('.pin-prompt').fadeIn(function() {
138 $('.pin-prompt input[name="pin"]').focus();
139 });
140 }
141 }
142
143
144 /**
145 * Helper function for shell initialization
146 */
147 function openShell(consoleNode, target, frameID) {
148 promptForPin();
149 if (consoleNode)
150 return consoleNode.slideToggle('fast');
151 consoleNode = $('<pre class="console">')
152 .appendTo(target.parent())
153 .hide();
154 var historyPos = 0, history = [''];
155 var output = $('<div class="output">[console ready]</div>')
156 .appendTo(consoleNode);
157 var form = $('<form>&gt;&gt;&gt; </form>')
158 .submit(function() {
159 var cmd = command.val();
160 $.get('', {
161 __debugger__: 'yes', cmd: cmd, frm: frameID, s: SECRET}, function(data) {
162 var tmp = $('<div>').html(data);
163 $('span.extended', tmp).each(function() {
164 var hidden = $(this).wrap('<span>').hide();
165 hidden
166 .parent()
167 .append($('<a href="#" class="toggle">&nbsp;&nbsp;</a>')
168 .click(function() {
169 hidden.toggle();
170 $(this).toggleClass('open');
171 return false;
172 }));
173 });
174 output.append(tmp);
175 command.focus();
176 consoleNode.scrollTop(consoleNode.get(0).scrollHeight);
177 var old = history.pop();
178 history.push(cmd);
179 if (typeof old != 'undefined')
180 history.push(old);
181 historyPos = history.length - 1;
182 });
183 command.val('');
184 return false;
185 }).
186 appendTo(consoleNode);
187
188 var command = $('<input type="text" autocomplete="off" autocorrect="off" autocapitalize="off" spellcheck="false">')
189 .appendTo(form)
190 .keydown(function(e) {
191 if (e.key == 'l' && e.ctrlKey) {
192 output.text('--- screen cleared ---');
193 return false;
194 }
195 else if (e.charCode == 0 && (e.keyCode == 38 || e.keyCode == 40)) {
196 // handle up arrow and down arrow
197 if (e.keyCode == 38 && historyPos > 0)
198 historyPos--;
199 else if (e.keyCode == 40 && historyPos < history.length - 1)
200 historyPos++;
201 command.val(history[historyPos]);
202 return false;
203 }
204 });
205
206 return consoleNode.slideDown('fast', function() {
207 command.focus();
208 });
209 }
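The keydown handler in `openShell()` above keeps submitted commands in a history array with a trailing empty slot for the line being edited, and moves a cursor through it on the up/down arrows. The same bookkeeping can be sketched standalone (this Python class is illustrative only; it clamps the cursor at the trailing slot):

```python
class CommandHistory:
    """Command history with a trailing empty "editing" slot."""

    def __init__(self):
        self.entries = [""]  # last slot is the line being edited
        self.pos = 0

    def submit(self, cmd):
        # Mirror the JS: pop the editing slot, append the command,
        # re-append the editing slot, and point the cursor at it.
        old = self.entries.pop()
        self.entries.append(cmd)
        self.entries.append(old)
        self.pos = len(self.entries) - 1

    def up(self):
        if self.pos > 0:
            self.pos -= 1
        return self.entries[self.pos]

    def down(self):
        if self.pos < len(self.entries) - 1:
            self.pos += 1
        return self.entries[self.pos]
```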
0 @font-face {
1 font-family: 'Ubuntu';
2 font-style: normal;
3 font-weight: normal;
4 src: local('Ubuntu'), local('Ubuntu-Regular'),
5 url('?__debugger__=yes&cmd=resource&f=ubuntu.ttf') format('truetype');
6 }
7
8 body, input { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva',
9 'Verdana', sans-serif; color: #000; text-align: center;
10 margin: 1em; padding: 0; font-size: 15px; }
11 h1, h2, h3 { font-family: 'Ubuntu', 'Lucida Grande', 'Lucida Sans Unicode',
12 'Geneva', 'Verdana', sans-serif; font-weight: normal; }
13
14 input { background-color: #fff; margin: 0; text-align: left;
15 outline: none !important; }
16 input[type="submit"] { padding: 3px 6px; }
17 a { color: #11557C; }
18 a:hover { color: #177199; }
19 pre, code,
20 textarea { font-family: 'Consolas', 'Monaco', 'Bitstream Vera Sans Mono',
21 monospace; font-size: 14px; }
22
23 div.debugger { text-align: left; padding: 12px; margin: auto;
24 background-color: white; }
25 h1 { font-size: 36px; margin: 0 0 0.3em 0; }
26 div.detail { cursor: pointer; }
27 div.detail p { margin: 0 0 8px 13px; font-size: 14px; white-space: pre-wrap;
28 font-family: monospace; }
29 div.explanation { margin: 20px 13px; font-size: 15px; color: #555; }
30 div.footer { font-size: 13px; text-align: right; margin: 30px 0;
31 color: #86989B; }
32
33 h2 { font-size: 16px; margin: 1.3em 0 0.0 0; padding: 9px;
34 background-color: #11557C; color: white; }
35 h2 em, h3 em { font-style: normal; color: #A5D6D9; font-weight: normal; }
36
37 div.traceback, div.plain { border: 1px solid #ddd; margin: 0 0 1em 0; padding: 10px; }
38 div.plain p { margin: 0; }
39 div.plain textarea,
40 div.plain pre { margin: 10px 0 0 0; padding: 4px;
41 background-color: #E8EFF0; border: 1px solid #D3E7E9; }
42 div.plain textarea { width: 99%; height: 300px; }
43 div.traceback h3 { font-size: 1em; margin: 0 0 0.8em 0; }
44 div.traceback ul { list-style: none; margin: 0; padding: 0 0 0 1em; }
45 div.traceback h4 { font-size: 13px; font-weight: normal; margin: 0.7em 0 0.1em 0; }
46 div.traceback pre { margin: 0; padding: 5px 0 3px 15px;
47 background-color: #E8EFF0; border: 1px solid #D3E7E9; }
48 div.traceback .library .current { background: white; color: #555; }
49 div.traceback .expanded .current { background: #E8EFF0; color: black; }
50 div.traceback pre:hover { background-color: #DDECEE; color: black; cursor: pointer; }
51 div.traceback div.source.expanded pre + pre { border-top: none; }
52
53 div.traceback span.ws { display: none; }
54 div.traceback pre.before, div.traceback pre.after { display: none; background: white; }
55 div.traceback div.source.expanded pre.before,
56 div.traceback div.source.expanded pre.after {
57 display: block;
58 }
59
60 div.traceback div.source.expanded span.ws {
61 display: inline;
62 }
63
64 div.traceback blockquote { margin: 1em 0 0 0; padding: 0; }
65 div.traceback img { float: right; padding: 2px; margin: -3px 2px 0 0; display: none; }
66 div.traceback img:hover { background-color: #ddd; cursor: pointer;
67 border-color: #BFDDE0; }
68 div.traceback pre:hover img { display: block; }
69 div.traceback cite.filename { font-style: normal; color: #3B666B; }
70
71 pre.console { border: 1px solid #ccc; background: white!important;
72 color: black; padding: 5px!important;
73 margin: 3px 0 0 0!important; cursor: default!important;
74 max-height: 400px; overflow: auto; }
75 pre.console form { color: #555; }
76 pre.console input { background-color: transparent; color: #555;
77 width: 90%; font-family: 'Consolas', 'Deja Vu Sans Mono',
78 'Bitstream Vera Sans Mono', monospace; font-size: 14px;
79 border: none!important; }
80
81 span.string { color: #30799B; }
82 span.number { color: #9C1A1C; }
83 span.help { color: #3A7734; }
84 span.object { color: #485F6E; }
85 span.extended { opacity: 0.5; }
86 span.extended:hover { opacity: 1; }
87 a.toggle { text-decoration: none; background-repeat: no-repeat;
88 background-position: center center;
89 background-image: url(?__debugger__=yes&cmd=resource&f=more.png); }
90 a.toggle:hover { background-color: #444; }
91 a.open { background-image: url(?__debugger__=yes&cmd=resource&f=less.png); }
92
93 pre.console div.traceback,
94 pre.console div.box { margin: 5px 10px; white-space: normal;
95 border: 1px solid #11557C; padding: 10px;
96 font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva',
97 'Verdana', sans-serif; }
98 pre.console div.box h3,
99 pre.console div.traceback h3 { margin: -10px -10px 10px -10px; padding: 5px;
100 background: #11557C; color: white; }
101
102 pre.console div.traceback pre:hover { cursor: default; background: #E8EFF0; }
103 pre.console div.traceback pre.syntaxerror { background: inherit; border: none;
104 margin: 20px -10px -10px -10px;
105 padding: 10px; border-top: 1px solid #BFDDE0;
106 background: #E8EFF0; }
107 pre.console div.noframe-traceback pre.syntaxerror { margin-top: -10px; border: none; }
108
109 pre.console div.box pre.repr { padding: 0; margin: 0; background-color: white; border: none; }
110 pre.console div.box table { margin-top: 6px; }
111 pre.console div.box pre { border: none; }
112 pre.console div.box pre.help { background-color: white; }
113 pre.console div.box pre.help:hover { cursor: default; }
114 pre.console table tr { vertical-align: top; }
115 div.console { border: 1px solid #ccc; padding: 4px; background-color: #fafafa; }
116
117 div.traceback pre, div.console pre {
118 white-space: pre-wrap; /* css-3 should we be so lucky... */
119 white-space: -moz-pre-wrap; /* Mozilla, since 1999 */
120 white-space: -pre-wrap; /* Opera 4-6 ?? */
121 white-space: -o-pre-wrap; /* Opera 7 ?? */
122 word-wrap: break-word; /* Internet Explorer 5.5+ */
123 _white-space: pre; /* IE only hack to re-specify in
124 addition to word-wrap */
125 }
126
127
128 div.pin-prompt {
129 position: absolute;
130 display: none;
131 top: 0;
132 bottom: 0;
133 left: 0;
134 right: 0;
135 background: rgba(255, 255, 255, 0.8);
136 }
137
138 div.pin-prompt .inner {
139 background: #eee;
140 padding: 10px 50px;
141 width: 350px;
142 margin: 10% auto 0 auto;
143 border: 1px solid #ccc;
144 border-radius: 2px;
145 }
146
147 div.exc-divider {
148 margin: 0.7em 0 0 -1em;
149 padding: 0.5em;
150 background: #11557C;
151 color: #ddd;
152 border: 1px solid #ddd;
153 }
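`Traceback.__init__` in tbtools.py below walks the chained exceptions via `__cause__`/`__context__` with a memo set to guard against cycles, then reverses the result so the oldest exception comes first. A standalone sketch of that walk (the function name is illustrative):

```python
def exception_chain(exc):
    """Return the chain of exceptions leading to exc, oldest first."""
    chain = []
    memo = set()  # ids already seen, to break reference cycles
    while exc is not None and id(exc) not in memo:
        chain.append(exc)
        memo.add(id(exc))
        # An explicit cause (raise ... from ...) wins over the
        # implicit context set during exception handling.
        exc = exc.__cause__ or exc.__context__
    chain.reverse()
    return chain
```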
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug.tbtools
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 This module provides various traceback-related utility functions.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import codecs
11 import inspect
12 import json
13 import os
14 import re
15 import sys
16 import sysconfig
17 import traceback
18 from tokenize import TokenError
19
20 from .._compat import PY2
21 from .._compat import range_type
22 from .._compat import reraise
23 from .._compat import string_types
24 from .._compat import text_type
25 from .._compat import to_native
26 from .._compat import to_unicode
27 from ..filesystem import get_filesystem_encoding
28 from ..utils import cached_property
29 from ..utils import escape
30 from .console import Console
31
32
33 _coding_re = re.compile(br"coding[:=]\s*([-\w.]+)")
34 _line_re = re.compile(br"^(.*?)$", re.MULTILINE)
35 _funcdef_re = re.compile(r"^(\s*def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)")
36 UTF8_COOKIE = b"\xef\xbb\xbf"
37
38 system_exceptions = (SystemExit, KeyboardInterrupt)
39 try:
40 system_exceptions += (GeneratorExit,)
41 except NameError:
42 pass
43
44
45 HEADER = u"""\
46 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
47 "http://www.w3.org/TR/html4/loose.dtd">
48 <html>
49 <head>
50 <title>%(title)s // Werkzeug Debugger</title>
51 <link rel="stylesheet" href="?__debugger__=yes&amp;cmd=resource&amp;f=style.css"
52 type="text/css">
53 <!-- We need to make sure this has a favicon so that the debugger does
54 not accidentally trigger a request to /favicon.ico, which might
55 change the application state. -->
56 <link rel="shortcut icon"
57 href="?__debugger__=yes&amp;cmd=resource&amp;f=console.png">
58 <script src="?__debugger__=yes&amp;cmd=resource&amp;f=jquery.js"></script>
59 <script src="?__debugger__=yes&amp;cmd=resource&amp;f=debugger.js"></script>
60 <script type="text/javascript">
61 var TRACEBACK = %(traceback_id)d,
62 CONSOLE_MODE = %(console)s,
63 EVALEX = %(evalex)s,
64 EVALEX_TRUSTED = %(evalex_trusted)s,
65 SECRET = "%(secret)s";
66 </script>
67 </head>
68 <body style="background-color: #fff">
69 <div class="debugger">
70 """
71 FOOTER = u"""\
72 <div class="footer">
73 Brought to you by <strong class="arthur">DON'T PANIC</strong>, your
74 friendly Werkzeug powered traceback interpreter.
75 </div>
76 </div>
77
78 <div class="pin-prompt">
79 <div class="inner">
80 <h3>Console Locked</h3>
81 <p>
82 The console is locked and needs to be unlocked by entering the PIN.
83 You can find the PIN printed to the standard output of the
84 shell that runs the server.
85 <form>
86 <p>PIN:
87 <input type=text name=pin size=14>
88 <input type=submit name=btn value="Confirm Pin">
89 </form>
90 </div>
91 </div>
92 </body>
93 </html>
94 """
95
96 PAGE_HTML = (
97 HEADER
98 + u"""\
99 <h1>%(exception_type)s</h1>
100 <div class="detail">
101 <p class="errormsg">%(exception)s</p>
102 </div>
103 <h2 class="traceback">Traceback <em>(most recent call last)</em></h2>
104 %(summary)s
105 <div class="plain">
106 <form action="/?__debugger__=yes&amp;cmd=paste" method="post">
107 <p>
108 <input type="hidden" name="language" value="pytb">
109 This is the Copy/Paste friendly version of the traceback. <span
110 class="pastemessage">You can also paste this traceback into
111 a <a href="https://gist.github.com/">gist</a>:
112 <input type="submit" value="create paste"></span>
113 </p>
114 <textarea cols="50" rows="10" name="code" readonly>%(plaintext)s</textarea>
115 </form>
116 </div>
117 <div class="explanation">
118 The debugger caught an exception in your WSGI application. You can now
119 look at the traceback which led to the error. <span class="nojavascript">
120 If you enable JavaScript you can also use additional features such as code
121 execution (if the evalex feature is enabled), automatic pasting of the
122 exceptions and much more.</span>
123 </div>
124 """
125 + FOOTER
126 + """
127 <!--
128
129 %(plaintext_cs)s
130
131 -->
132 """
133 )
134
135 CONSOLE_HTML = (
136 HEADER
137 + u"""\
138 <h1>Interactive Console</h1>
139 <div class="explanation">
140 In this console you can execute Python expressions in the context of the
141 application. The initial namespace was created by the debugger automatically.
142 </div>
143 <div class="console"><div class="inner">The Console requires JavaScript.</div></div>
144 """
145 + FOOTER
146 )
147
148 SUMMARY_HTML = u"""\
149 <div class="%(classes)s">
150 %(title)s
151 <ul>%(frames)s</ul>
152 %(description)s
153 </div>
154 """
155
156 FRAME_HTML = u"""\
157 <div class="frame" id="frame-%(id)d">
158 <h4>File <cite class="filename">"%(filename)s"</cite>,
159 line <em class="line">%(lineno)s</em>,
160 in <code class="function">%(function_name)s</code></h4>
161 <div class="source %(library)s">%(lines)s</div>
162 </div>
163 """
164
165 SOURCE_LINE_HTML = u"""\
166 <tr class="%(classes)s">
167 <td class=lineno>%(lineno)s</td>
168 <td>%(code)s</td>
169 </tr>
170 """
171
172
173 def render_console_html(secret, evalex_trusted=True):
174 return CONSOLE_HTML % {
175 "evalex": "true",
176 "evalex_trusted": "true" if evalex_trusted else "false",
177 "console": "true",
178 "title": "Console",
179 "secret": secret,
180 "traceback_id": -1,
181 }
182
183
184 def get_current_traceback(
185 ignore_system_exceptions=False, show_hidden_frames=False, skip=0
186 ):
187 """Get the current exception info as `Traceback` object. If
188 `ignore_system_exceptions` is true, system exceptions such as generator
189 exit, system exit, or keyboard interrupt are reraised instead of being
190 wrapped. `skip` drops that many frames from the top of the traceback.
191 """
192 exc_type, exc_value, tb = sys.exc_info()
193 if ignore_system_exceptions and exc_type in system_exceptions:
194 reraise(exc_type, exc_value, tb)
195 for _ in range_type(skip):
196 if tb.tb_next is None:
197 break
198 tb = tb.tb_next
199 tb = Traceback(exc_type, exc_value, tb)
200 if not show_hidden_frames:
201 tb.filter_hidden_frames()
202 return tb
203
204
205 class Line(object):
206 """Helper for the source renderer."""
207
208 __slots__ = ("lineno", "code", "in_frame", "current")
209
210 def __init__(self, lineno, code):
211 self.lineno = lineno
212 self.code = code
213 self.in_frame = False
214 self.current = False
215
216 @property
217 def classes(self):
218 rv = ["line"]
219 if self.in_frame:
220 rv.append("in-frame")
221 if self.current:
222 rv.append("current")
223 return rv
224
225 def render(self):
226 return SOURCE_LINE_HTML % {
227 "classes": u" ".join(self.classes),
228 "lineno": self.lineno,
229 "code": escape(self.code),
230 }
231
232
233 class Traceback(object):
234 """Wraps a traceback."""
235
236 def __init__(self, exc_type, exc_value, tb):
237 self.exc_type = exc_type
238 self.exc_value = exc_value
239 self.tb = tb
240
241 exception_type = exc_type.__name__
242 if exc_type.__module__ not in {"builtins", "__builtin__", "exceptions"}:
243 exception_type = exc_type.__module__ + "." + exception_type
244 self.exception_type = exception_type
245
246 self.groups = []
247 memo = set()
248 while True:
249 self.groups.append(Group(exc_type, exc_value, tb))
250 memo.add(id(exc_value))
251 if PY2:
252 break
253 exc_value = exc_value.__cause__ or exc_value.__context__
254 if exc_value is None or id(exc_value) in memo:
255 break
256 exc_type = type(exc_value)
257 tb = exc_value.__traceback__
258 self.groups.reverse()
259 self.frames = [frame for group in self.groups for frame in group.frames]
260
261 def filter_hidden_frames(self):
262 """Remove the frames according to the paste spec."""
263 for group in self.groups:
264 group.filter_hidden_frames()
265
266 self.frames[:] = [frame for group in self.groups for frame in group.frames]
267
268 @property
269 def is_syntax_error(self):
270 """Is it a syntax error?"""
271 return isinstance(self.exc_value, SyntaxError)
272
273 @property
274 def exception(self):
275 """String representation of the final exception."""
276 return self.groups[-1].exception
277
278 def log(self, logfile=None):
279 """Log the ASCII traceback into a file object."""
280 if logfile is None:
281 logfile = sys.stderr
282 tb = self.plaintext.rstrip() + u"\n"
283 logfile.write(to_native(tb, "utf-8", "replace"))
284
285 def paste(self):
286 """Create a paste and return the paste id."""
287 data = json.dumps(
288 {
289 "description": "Werkzeug Internal Server Error",
290 "public": False,
291 "files": {"traceback.txt": {"content": self.plaintext}},
292 }
293 ).encode("utf-8")
294 try:
295 from urllib2 import urlopen
296 except ImportError:
297 from urllib.request import urlopen
298 rv = urlopen("https://api.github.com/gists", data=data)
299 resp = json.loads(rv.read().decode("utf-8"))
300 rv.close()
301 return {"url": resp["html_url"], "id": resp["id"]}
302
303 def render_summary(self, include_title=True):
304 """Render the traceback for the interactive console."""
305 title = ""
306 classes = ["traceback"]
307 if not self.frames:
308 classes.append("noframe-traceback")
309 frames = []
310 else:
311 library_frames = sum(frame.is_library for frame in self.frames)
312 mark_lib = 0 < library_frames < len(self.frames)
313 frames = [group.render(mark_lib=mark_lib) for group in self.groups]
314
315 if include_title:
316 if self.is_syntax_error:
317 title = u"Syntax Error"
318 else:
319 title = u"Traceback <em>(most recent call last)</em>:"
320
321 if self.is_syntax_error:
322 description_wrapper = u"<pre class=syntaxerror>%s</pre>"
323 else:
324 description_wrapper = u"<blockquote>%s</blockquote>"
325
326 return SUMMARY_HTML % {
327 "classes": u" ".join(classes),
328 "title": u"<h3>%s</h3>" % title if title else u"",
329 "frames": u"\n".join(frames),
330 "description": description_wrapper % escape(self.exception),
331 }
332
333 def render_full(self, evalex=False, secret=None, evalex_trusted=True):
334 """Render the Full HTML page with the traceback info."""
335 exc = escape(self.exception)
336 return PAGE_HTML % {
337 "evalex": "true" if evalex else "false",
338 "evalex_trusted": "true" if evalex_trusted else "false",
339 "console": "false",
340 "title": exc,
341 "exception": exc,
342 "exception_type": escape(self.exception_type),
343 "summary": self.render_summary(include_title=False),
344 "plaintext": escape(self.plaintext),
345 "plaintext_cs": re.sub("-{2,}", "-", self.plaintext),
346 "traceback_id": self.id,
347 "secret": secret,
348 }
349
350 @cached_property
351 def plaintext(self):
352 return u"\n".join([group.render_text() for group in self.groups])
353
354 @property
355 def id(self):
356 return id(self)
357
358
359 class Group(object):
360 """A group of frames for an exception in a traceback. On Python 3,
361 if the exception has a ``__cause__`` or ``__context__``, there are
362 multiple exception groups.
363 """
364
365 def __init__(self, exc_type, exc_value, tb):
366 self.exc_type = exc_type
367 self.exc_value = exc_value
368 self.info = None
369 if not PY2:
370 if exc_value.__cause__ is not None:
371 self.info = (
372 u"The above exception was the direct cause of the"
373 u" following exception"
374 )
375 elif exc_value.__context__ is not None:
376 self.info = (
377 u"During handling of the above exception, another"
378 u" exception occurred"
379 )
380
381 self.frames = []
382 while tb is not None:
383 self.frames.append(Frame(exc_type, exc_value, tb))
384 tb = tb.tb_next
385
386 def filter_hidden_frames(self):
387 new_frames = []
388 hidden = False
389
390 for frame in self.frames:
391 hide = frame.hide
392 if hide in ("before", "before_and_this"):
393 new_frames = []
394 hidden = False
395 if hide == "before_and_this":
396 continue
397 elif hide in ("reset", "reset_and_this"):
398 hidden = False
399 if hide == "reset_and_this":
400 continue
401 elif hide in ("after", "after_and_this"):
402 hidden = True
403 if hide == "after_and_this":
404 continue
405 elif hide or hidden:
406 continue
407 new_frames.append(frame)
408
409         # if we only have one frame and that frame is from the codeop
410         # module, remove it.
411         if len(new_frames) == 1 and new_frames[0].module == "codeop":
412             del self.frames[:]
413
414         # if the last frame is missing, something went terribly wrong :(
415 elif self.frames[-1] in new_frames:
416 self.frames[:] = new_frames
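The hide-marker handling above can be sketched standalone. This is a plain-Python illustration independent of Werkzeug's ``Frame`` objects; ``filter_frames`` and the ``(name, hide)`` tuples are hypothetical names, not the real API:

```python
# Standalone sketch of the paste-style ``__traceback_hide__`` filtering
# above. Each frame carries a hide marker: "before" drops every frame
# collected so far, "after" hides all following frames until a "reset",
# and the "_and_this" variants also drop the marking frame itself.
def filter_frames(frames):
    new_frames = []
    hidden = False
    for name, hide in frames:
        if hide in ("before", "before_and_this"):
            new_frames = []
            hidden = False
            if hide == "before_and_this":
                continue
        elif hide in ("reset", "reset_and_this"):
            hidden = False
            if hide == "reset_and_this":
                continue
        elif hide in ("after", "after_and_this"):
            hidden = True
            if hide == "after_and_this":
                continue
        elif hide or hidden:
            continue
        new_frames.append(name)
    return new_frames

# "before" keeps the marking frame but drops everything earlier
print(filter_frames([("a", False), ("b", "before"), ("c", False)]))  # ['b', 'c']
# "after_and_this" drops itself and hides everything that follows
print(filter_frames([("a", False), ("b", "after_and_this"), ("c", False)]))  # ['a']
```

A "reset" marker re-enables collection after an "after" region, which is why middleware can hide only its own frames while letting application frames through.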
417
418 @property
419 def exception(self):
420 """String representation of the exception."""
421 buf = traceback.format_exception_only(self.exc_type, self.exc_value)
422 rv = "".join(buf).strip()
423 return to_unicode(rv, "utf-8", "replace")
424
425 def render(self, mark_lib=True):
426 out = []
427 if self.info is not None:
428 out.append(u'<li><div class="exc-divider">%s:</div>' % self.info)
429 for frame in self.frames:
430 out.append(
431 u"<li%s>%s"
432 % (
433 u' title="%s"' % escape(frame.info) if frame.info else u"",
434 frame.render(mark_lib=mark_lib),
435 )
436 )
437 return u"\n".join(out)
438
439 def render_text(self):
440 out = []
441 if self.info is not None:
442 out.append(u"\n%s:\n" % self.info)
443 out.append(u"Traceback (most recent call last):")
444 for frame in self.frames:
445 out.append(frame.render_text())
446 out.append(self.exception)
447 return u"\n".join(out)
448
449
450 class Frame(object):
451 """A single frame in a traceback."""
452
453 def __init__(self, exc_type, exc_value, tb):
454 self.lineno = tb.tb_lineno
455 self.function_name = tb.tb_frame.f_code.co_name
456 self.locals = tb.tb_frame.f_locals
457 self.globals = tb.tb_frame.f_globals
458
459 fn = inspect.getsourcefile(tb) or inspect.getfile(tb)
460 if fn[-4:] in (".pyo", ".pyc"):
461 fn = fn[:-1]
462 # if it's a file on the file system resolve the real filename.
463 if os.path.isfile(fn):
464 fn = os.path.realpath(fn)
465 self.filename = to_unicode(fn, get_filesystem_encoding())
466 self.module = self.globals.get("__name__")
467 self.loader = self.globals.get("__loader__")
468 self.code = tb.tb_frame.f_code
469
470 # support for paste's traceback extensions
471 self.hide = self.locals.get("__traceback_hide__", False)
472 info = self.locals.get("__traceback_info__")
473 if info is not None:
474 info = to_unicode(info, "utf-8", "replace")
475 self.info = info
476
477 def render(self, mark_lib=True):
478 """Render a single frame in a traceback."""
479 return FRAME_HTML % {
480 "id": self.id,
481 "filename": escape(self.filename),
482 "lineno": self.lineno,
483 "function_name": escape(self.function_name),
484 "lines": self.render_line_context(),
485 "library": "library" if mark_lib and self.is_library else "",
486 }
487
488 @cached_property
489 def is_library(self):
490 return any(
491 self.filename.startswith(path) for path in sysconfig.get_paths().values()
492 )
493
494 def render_text(self):
495 return u' File "%s", line %s, in %s\n %s' % (
496 self.filename,
497 self.lineno,
498 self.function_name,
499 self.current_line.strip(),
500 )
501
502 def render_line_context(self):
503 before, current, after = self.get_context_lines()
504 rv = []
505
506 def render_line(line, cls):
507 line = line.expandtabs().rstrip()
508 stripped_line = line.strip()
509 prefix = len(line) - len(stripped_line)
510 rv.append(
511 '<pre class="line %s"><span class="ws">%s</span>%s</pre>'
512 % (cls, " " * prefix, escape(stripped_line) or " ")
513 )
514
515 for line in before:
516 render_line(line, "before")
517 render_line(current, "current")
518 for line in after:
519 render_line(line, "after")
520
521 return "\n".join(rv)
522
523 def get_annotated_lines(self):
524 """Helper function that returns lines with extra information."""
525 lines = [Line(idx + 1, x) for idx, x in enumerate(self.sourcelines)]
526
527 # find function definition and mark lines
528 if hasattr(self.code, "co_firstlineno"):
529 lineno = self.code.co_firstlineno - 1
530 while lineno > 0:
531 if _funcdef_re.match(lines[lineno].code):
532 break
533 lineno -= 1
534 try:
535 offset = len(inspect.getblock([x.code + "\n" for x in lines[lineno:]]))
536 except TokenError:
537 offset = 0
538 for line in lines[lineno : lineno + offset]:
539 line.in_frame = True
540
541 # mark current line
542 try:
543 lines[self.lineno - 1].current = True
544 except IndexError:
545 pass
546
547 return lines
548
549 def eval(self, code, mode="single"):
550 """Evaluate code in the context of the frame."""
551 if isinstance(code, string_types):
552 if PY2 and isinstance(code, text_type): # noqa
553 code = UTF8_COOKIE + code.encode("utf-8")
554 code = compile(code, "<interactive>", mode)
555 return eval(code, self.globals, self.locals)
556
557 @cached_property
558 def sourcelines(self):
559 """The sourcecode of the file as list of unicode strings."""
560 # get sourcecode from loader or file
561 source = None
562 if self.loader is not None:
563 try:
564 if hasattr(self.loader, "get_source"):
565 source = self.loader.get_source(self.module)
566 elif hasattr(self.loader, "get_source_by_code"):
567 source = self.loader.get_source_by_code(self.code)
568 except Exception:
569 # we munch the exception so that we don't cause troubles
570 # if the loader is broken.
571 pass
572
573 if source is None:
574 try:
575 f = open(to_native(self.filename, get_filesystem_encoding()), mode="rb")
576 except IOError:
577 return []
578 try:
579 source = f.read()
580 finally:
581 f.close()
582
583 # already unicode? return right away
584 if isinstance(source, text_type):
585 return source.splitlines()
586
587         # the source is bytes. It should be ASCII, but we don't want to
588         # reject too many characters in the debugger if something breaks
589 charset = "utf-8"
590 if source.startswith(UTF8_COOKIE):
591 source = source[3:]
592 else:
593 for idx, match in enumerate(_line_re.finditer(source)):
594 match = _coding_re.search(match.group())
595 if match is not None:
596 charset = match.group(1)
597 break
598 if idx > 1:
599 break
600
601 # on broken cookies we fall back to utf-8 too
602 charset = to_native(charset)
603 try:
604 codecs.lookup(charset)
605 except LookupError:
606 charset = "utf-8"
607
608 return source.decode(charset, "replace").splitlines()
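The decoding fallback above follows the PEP 263 coding-cookie convention. A minimal standalone sketch of the same detection (``detect_charset`` is a hypothetical helper for illustration, not part of Werkzeug):

```python
import codecs
import re

# Minimal PEP 263-style coding-cookie detection, mirroring the fallback
# logic above: honor a UTF-8 BOM, otherwise scan the first two lines for
# a "coding: <name>" cookie, and fall back to UTF-8 for unknown names.
_coding_re = re.compile(br"coding[:=]\s*([-\w.]+)")

def detect_charset(source):
    if source.startswith(codecs.BOM_UTF8):
        return "utf-8"
    charset = "utf-8"
    for line in source.splitlines()[:2]:
        match = _coding_re.search(line)
        if match is not None:
            charset = match.group(1).decode("ascii")
            break
    try:
        codecs.lookup(charset)
    except LookupError:
        charset = "utf-8"  # broken cookie, same fallback as above
    return charset

print(detect_charset(b"# -*- coding: latin-1 -*-\nx = 1\n"))  # latin-1
print(detect_charset(b"x = 1\n"))                             # utf-8
```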
609
610 def get_context_lines(self, context=5):
611 before = self.sourcelines[self.lineno - context - 1 : self.lineno - 1]
612 past = self.sourcelines[self.lineno : self.lineno + context]
613 return (before, self.current_line, past)
614
615 @property
616 def current_line(self):
617 try:
618 return self.sourcelines[self.lineno - 1]
619 except IndexError:
620 return u""
621
622 @cached_property
623 def console(self):
624 return Console(self.globals, self.locals)
625
626 @property
627 def id(self):
628 return id(self)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.exceptions
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module implements a number of Python exceptions you can raise from
6 within your views to trigger a standard non-200 response.
7
8
9 Usage Example
10 -------------
11
12 ::
13
14 from werkzeug.wrappers import BaseRequest
15 from werkzeug.wsgi import responder
16 from werkzeug.exceptions import HTTPException, NotFound
17
18 def view(request):
19 raise NotFound()
20
21 @responder
22 def application(environ, start_response):
23 request = BaseRequest(environ)
24 try:
25 return view(request)
26 except HTTPException as e:
27 return e
28
29
30     As you can see from this example, these exceptions are callable WSGI
31     applications.  Because of Python 2.4 compatibility they do not extend
32     from the response objects, but only from the Python exception class.
33
34 As a matter of fact they are not Werkzeug response objects. However you
35     can get a response object by calling ``get_response()`` on an HTTP
36 exception.
37
38 Keep in mind that you have to pass an environment to ``get_response()``
39 because some errors fetch additional information from the WSGI
40 environment.
41
42     If you want to hook in a different exception page for, say, a 404 status
43     code, you can add a second except clause for that specific subclass::
44
45 @responder
46 def application(environ, start_response):
47 request = BaseRequest(environ)
48 try:
49 return view(request)
50             except NotFound:
51                 return not_found(request)
52             except HTTPException as e:
53                 return e
54
55
56 :copyright: 2007 Pallets
57 :license: BSD-3-Clause
58 """
59 import sys
60
61 import werkzeug
62
63 # Because of bootstrapping reasons we need to manually patch ourselves
64 # onto our parent module.
65 werkzeug.exceptions = sys.modules[__name__]
66
67 from ._compat import implements_to_string
68 from ._compat import integer_types
69 from ._compat import iteritems
70 from ._compat import text_type
71 from ._internal import _get_environ
72 from .wrappers import Response
73
74
75 @implements_to_string
76 class HTTPException(Exception):
77 """Baseclass for all HTTP exceptions. This exception can be called as WSGI
78 application to render a default error page or you can catch the subclasses
79 of it independently and render nicer error messages.
80 """
81
82 code = None
83 description = None
84
85 def __init__(self, description=None, response=None):
86 super(Exception, self).__init__()
87 if description is not None:
88 self.description = description
89 self.response = response
90
91 @classmethod
92 def wrap(cls, exception, name=None):
93 """Create an exception that is a subclass of the calling HTTP
94 exception and the ``exception`` argument.
95
96 The first argument to the class will be passed to the
97 wrapped ``exception``, the rest to the HTTP exception. If
98 ``self.args`` is not empty, the wrapped exception message is
99 added to the HTTP exception description.
100
101 .. versionchanged:: 0.15
102 The description includes the wrapped exception message.
103 """
104
105 class newcls(cls, exception):
106 def __init__(self, arg=None, *args, **kwargs):
107 super(cls, self).__init__(*args, **kwargs)
108
109 if arg is None:
110 exception.__init__(self)
111 else:
112 exception.__init__(self, arg)
113
114 def get_description(self, environ=None):
115 out = super(cls, self).get_description(environ=environ)
116
117 if self.args:
118 out += "<p><pre><code>{}: {}</code></pre></p>".format(
119 exception.__name__, escape(exception.__str__(self))
120 )
121
122 return out
123
124 newcls.__module__ = sys._getframe(1).f_globals.get("__name__")
125 newcls.__name__ = name or cls.__name__ + exception.__name__
126 return newcls
127
128 @property
129 def name(self):
130 """The status name."""
131 return HTTP_STATUS_CODES.get(self.code, "Unknown Error")
132
133 def get_description(self, environ=None):
134 """Get the description."""
135 return u"<p>%s</p>" % escape(self.description)
136
137 def get_body(self, environ=None):
138 """Get the HTML body."""
139 return text_type(
140 (
141 u'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n'
142 u"<title>%(code)s %(name)s</title>\n"
143 u"<h1>%(name)s</h1>\n"
144 u"%(description)s\n"
145 )
146 % {
147 "code": self.code,
148 "name": escape(self.name),
149 "description": self.get_description(environ),
150 }
151 )
152
153 def get_headers(self, environ=None):
154 """Get a list of headers."""
155 return [("Content-Type", "text/html")]
156
157 def get_response(self, environ=None):
158 """Get a response object. If one was passed to the exception
159 it's returned directly.
160
161 :param environ: the optional environ for the request. This
162 can be used to modify the response depending
163 on how the request looked like.
164 :return: a :class:`Response` object or a subclass thereof.
165 """
166 if self.response is not None:
167 return self.response
168 if environ is not None:
169 environ = _get_environ(environ)
170 headers = self.get_headers(environ)
171 return Response(self.get_body(environ), self.code, headers)
172
173 def __call__(self, environ, start_response):
174 """Call the exception as WSGI application.
175
176 :param environ: the WSGI environment.
177 :param start_response: the response callable provided by the WSGI
178 server.
179 """
180 response = self.get_response(environ)
181 return response(environ, start_response)
182
183 def __str__(self):
184 code = self.code if self.code is not None else "???"
185 return "%s %s: %s" % (code, self.name, self.description)
186
187 def __repr__(self):
188 code = self.code if self.code is not None else "???"
189 return "<%s '%s: %s'>" % (self.__class__.__name__, code, self.name)
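The exception-as-WSGI-application behavior above can be sketched without Werkzeug. ``HTTPError`` and ``demo`` below are illustrative stand-ins under the assumption of a PEP 3333-style ``start_response``, not the real classes:

```python
# Minimal sketch of an exception that doubles as a WSGI application:
# calling it with (environ, start_response) renders an error response,
# mirroring HTTPException.__call__ above.
class HTTPError(Exception):
    code = 404
    name = "Not Found"

    def __call__(self, environ, start_response):
        status = "%d %s" % (self.code, self.name)
        start_response(status, [("Content-Type", "text/plain")])
        return [status.encode("utf-8")]

def demo():
    statuses = []
    app = HTTPError()  # the exception instance is the WSGI app
    body = app({}, lambda status, headers: statuses.append(status))
    return statuses[0], b"".join(body)

print(demo())  # ('404 Not Found', b'404 Not Found')
```

Because the object is both an exception and an application, a framework can catch it in a dispatcher and return it directly, as the module docstring's example does.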
190
191
192 class BadRequest(HTTPException):
193 """*400* `Bad Request`
194
195     Raise if the browser sends something to the application that the
196     application or server cannot handle.
197 """
198
199 code = 400
200 description = (
201 "The browser (or proxy) sent a request that this server could "
202 "not understand."
203 )
204
205
206 class ClientDisconnected(BadRequest):
207 """Internal exception that is raised if Werkzeug detects a disconnected
208 client. Since the client is already gone at that point attempting to
209 send the error message to the client might not work and might ultimately
210 result in another exception in the server. Mainly this is here so that
211 it is silenced by default as far as Werkzeug is concerned.
212
213 Since disconnections cannot be reliably detected and are unspecified
214 by WSGI to a large extent this might or might not be raised if a client
215 is gone.
216
217 .. versionadded:: 0.8
218 """
219
220
221 class SecurityError(BadRequest):
222 """Raised if something triggers a security error. This is otherwise
223 exactly like a bad request error.
224
225 .. versionadded:: 0.9
226 """
227
228
229 class BadHost(BadRequest):
230 """Raised if the submitted host is badly formatted.
231
232 .. versionadded:: 0.11.2
233 """
234
235
236 class Unauthorized(HTTPException):
237 """*401* ``Unauthorized``
238
239 Raise if the user is not authorized to access a resource.
240
241 The ``www_authenticate`` argument should be used to set the
242 ``WWW-Authenticate`` header. This is used for HTTP basic auth and
243 other schemes. Use :class:`~werkzeug.datastructures.WWWAuthenticate`
244 to create correctly formatted values. Strictly speaking a 401
245 response is invalid if it doesn't provide at least one value for
246 this header, although real clients typically don't care.
247
248 :param description: Override the default message used for the body
249 of the response.
250     :param www_authenticate: A single value, or list of values, for the
251         ``WWW-Authenticate`` header.
252
253 .. versionchanged:: 0.15.3
254 If the ``www_authenticate`` argument is not set, the
255 ``WWW-Authenticate`` header is not set.
256
257 .. versionchanged:: 0.15.3
258 The ``response`` argument was restored.
259
260 .. versionchanged:: 0.15.1
261 ``description`` was moved back as the first argument, restoring
262 its previous position.
263
264 .. versionchanged:: 0.15.0
265 ``www_authenticate`` was added as the first argument, ahead of
266 ``description``.
267 """
268
269 code = 401
270 description = (
271 "The server could not verify that you are authorized to access"
272 " the URL requested. You either supplied the wrong credentials"
273 " (e.g. a bad password), or your browser doesn't understand"
274 " how to supply the credentials required."
275 )
276
277 def __init__(self, description=None, response=None, www_authenticate=None):
278 HTTPException.__init__(self, description, response)
279
280 if www_authenticate is not None:
281 if not isinstance(www_authenticate, (tuple, list)):
282 www_authenticate = (www_authenticate,)
283
284 self.www_authenticate = www_authenticate
285
286 def get_headers(self, environ=None):
287 headers = HTTPException.get_headers(self, environ)
288 if self.www_authenticate:
289 headers.append(
290 ("WWW-Authenticate", ", ".join([str(x) for x in self.www_authenticate]))
291 )
292 return headers
293
294
295 class Forbidden(HTTPException):
296 """*403* `Forbidden`
297
298 Raise if the user doesn't have the permission for the requested resource
299 but was authenticated.
300 """
301
302 code = 403
303 description = (
304 "You don't have the permission to access the requested"
305 " resource. It is either read-protected or not readable by the"
306 " server."
307 )
308
309
310 class NotFound(HTTPException):
311 """*404* `Not Found`
312
313 Raise if a resource does not exist and never existed.
314 """
315
316 code = 404
317 description = (
318 "The requested URL was not found on the server. If you entered"
319 " the URL manually please check your spelling and try again."
320 )
321
322
323 class MethodNotAllowed(HTTPException):
324 """*405* `Method Not Allowed`
325
326 Raise if the server used a method the resource does not handle. For
327 example `POST` if the resource is view only. Especially useful for REST.
328
329 The first argument for this exception should be a list of allowed methods.
330     Strictly speaking the response would be invalid if the ``Allow`` header
331     does not list valid methods, which you can provide with that list.
332 """
333
334 code = 405
335 description = "The method is not allowed for the requested URL."
336
337 def __init__(self, valid_methods=None, description=None):
338         """Takes an optional list of valid HTTP methods, which is used
339         to populate the ``Allow`` header of the response."""
340 HTTPException.__init__(self, description)
341 self.valid_methods = valid_methods
342
343 def get_headers(self, environ=None):
344 headers = HTTPException.get_headers(self, environ)
345 if self.valid_methods:
346 headers.append(("Allow", ", ".join(self.valid_methods)))
347 return headers
348
349
350 class NotAcceptable(HTTPException):
351 """*406* `Not Acceptable`
352
353 Raise if the server can't return any content conforming to the
354 `Accept` headers of the client.
355 """
356
357 code = 406
358
359 description = (
360 "The resource identified by the request is only capable of"
361 " generating response entities which have content"
362 " characteristics not acceptable according to the accept"
363 " headers sent in the request."
364 )
365
366
367 class RequestTimeout(HTTPException):
368 """*408* `Request Timeout`
369
370     Raise to signal a timeout.
371 """
372
373 code = 408
374 description = (
375 "The server closed the network connection because the browser"
376 " didn't finish the request within the specified time."
377 )
378
379
380 class Conflict(HTTPException):
381 """*409* `Conflict`
382
383 Raise to signal that a request cannot be completed because it conflicts
384 with the current state on the server.
385
386 .. versionadded:: 0.7
387 """
388
389 code = 409
390 description = (
391 "A conflict happened while processing the request. The"
392 " resource might have been modified while the request was being"
393 " processed."
394 )
395
396
397 class Gone(HTTPException):
398 """*410* `Gone`
399
400 Raise if a resource existed previously and went away without new location.
401 """
402
403 code = 410
404 description = (
405 "The requested URL is no longer available on this server and"
406 " there is no forwarding address. If you followed a link from a"
407 " foreign page, please contact the author of this page."
408 )
409
410
411 class LengthRequired(HTTPException):
412 """*411* `Length Required`
413
414 Raise if the browser submitted data but no ``Content-Length`` header which
415 is required for the kind of processing the server does.
416 """
417
418 code = 411
419 description = (
420 "A request with this method requires a valid <code>Content-"
421 "Length</code> header."
422 )
423
424
425 class PreconditionFailed(HTTPException):
426 """*412* `Precondition Failed`
427
428 Status code used in combination with ``If-Match``, ``If-None-Match``, or
429 ``If-Unmodified-Since``.
430 """
431
432 code = 412
433 description = (
434 "The precondition on the request for the URL failed positive evaluation."
435 )
436
437
438 class RequestEntityTooLarge(HTTPException):
439 """*413* `Request Entity Too Large`
440
441 The status code one should return if the data submitted exceeded a given
442 limit.
443 """
444
445 code = 413
446 description = "The data value transmitted exceeds the capacity limit."
447
448
449 class RequestURITooLarge(HTTPException):
450 """*414* `Request URI Too Large`
451
452 Like *413* but for too long URLs.
453 """
454
455 code = 414
456 description = (
457 "The length of the requested URL exceeds the capacity limit for"
458 " this server. The request cannot be processed."
459 )
460
461
462 class UnsupportedMediaType(HTTPException):
463 """*415* `Unsupported Media Type`
464
465 The status code returned if the server is unable to handle the media type
466 the client transmitted.
467 """
468
469 code = 415
470 description = (
471 "The server does not support the media type transmitted in the request."
472 )
473
474
475 class RequestedRangeNotSatisfiable(HTTPException):
476 """*416* `Requested Range Not Satisfiable`
477
478 The client asked for an invalid part of the file.
479
480 .. versionadded:: 0.7
481 """
482
483 code = 416
484 description = "The server cannot provide the requested range."
485
486 def __init__(self, length=None, units="bytes", description=None):
487 """Takes an optional `Content-Range` header value based on ``length``
488 parameter.
489 """
490 HTTPException.__init__(self, description)
491 self.length = length
492 self.units = units
493
494 def get_headers(self, environ=None):
495 headers = HTTPException.get_headers(self, environ)
496 if self.length is not None:
497 headers.append(("Content-Range", "%s */%d" % (self.units, self.length)))
498 return headers
499
500
501 class ExpectationFailed(HTTPException):
502 """*417* `Expectation Failed`
503
504 The server cannot meet the requirements of the Expect request-header.
505
506 .. versionadded:: 0.7
507 """
508
509 code = 417
510     description = "The server could not meet the requirements of the Expect header."
511
512
513 class ImATeapot(HTTPException):
514 """*418* `I'm a teapot`
515
516 The server should return this if it is a teapot and someone attempted
517 to brew coffee with it.
518
519 .. versionadded:: 0.7
520 """
521
522 code = 418
523     description = "This server is a teapot, not a coffee machine."
524
525
526 class UnprocessableEntity(HTTPException):
527 """*422* `Unprocessable Entity`
528
529 Used if the request is well formed, but the instructions are otherwise
530 incorrect.
531 """
532
533 code = 422
534 description = (
535 "The request was well-formed but was unable to be followed due"
536 " to semantic errors."
537 )
538
539
540 class Locked(HTTPException):
541 """*423* `Locked`
542
543 Used if the resource that is being accessed is locked.
544 """
545
546 code = 423
547 description = "The resource that is being accessed is locked."
548
549
550 class FailedDependency(HTTPException):
551 """*424* `Failed Dependency`
552
553 Used if the method could not be performed on the resource
554 because the requested action depended on another action and that action failed.
555 """
556
557 code = 424
558 description = (
559 "The method could not be performed on the resource because the"
560 " requested action depended on another action and that action"
561 " failed."
562 )
563
564
565 class PreconditionRequired(HTTPException):
566 """*428* `Precondition Required`
567
568 The server requires this request to be conditional, typically to prevent
569 the lost update problem, which is a race condition between two or more
570 clients attempting to update a resource through PUT or DELETE. By requiring
571     each client to include a conditional header ("If-Match" or "If-Unmodified-Since")
572     with the proper value retained from a recent GET request, the
573 server ensures that each client has at least seen the previous revision of
574 the resource.
575 """
576
577 code = 428
578 description = (
579 "This request is required to be conditional; try using"
580 ' "If-Match" or "If-Unmodified-Since".'
581 )
582
583
584 class TooManyRequests(HTTPException):
585 """*429* `Too Many Requests`
586
587 The server is limiting the rate at which this user receives responses, and
588 this request exceeds that rate. (The server may use any convenient method
589 to identify users and their request rates). The server may include a
590 "Retry-After" header to indicate how long the user should wait before
591 retrying.
592 """
593
594 code = 429
595 description = "This user has exceeded an allotted request count. Try again later."
596
597
598 class RequestHeaderFieldsTooLarge(HTTPException):
599 """*431* `Request Header Fields Too Large`
600
601 The server refuses to process the request because the header fields are too
602 large. One or more individual fields may be too large, or the set of all
603 headers is too large.
604 """
605
606 code = 431
607     description = "One or more header fields exceed the maximum size."
608
609
610 class UnavailableForLegalReasons(HTTPException):
611 """*451* `Unavailable For Legal Reasons`
612
613 This status code indicates that the server is denying access to the
614 resource as a consequence of a legal demand.
615 """
616
617 code = 451
618 description = "Unavailable for legal reasons."
619
620
621 class InternalServerError(HTTPException):
622 """*500* `Internal Server Error`
623
624 Raise if an internal server error occurred. This is a good fallback if an
625 unknown error occurred in the dispatcher.
626 """
627
628 code = 500
629 description = (
630 "The server encountered an internal error and was unable to"
631 " complete your request. Either the server is overloaded or"
632 " there is an error in the application."
633 )
634
635
636 class NotImplemented(HTTPException):
637 """*501* `Not Implemented`
638
639 Raise if the application does not support the action requested by the
640 browser.
641 """
642
643 code = 501
644 description = "The server does not support the action requested by the browser."
645
646
647 class BadGateway(HTTPException):
648 """*502* `Bad Gateway`
649
650 If you do proxying in your application you should return this status code
651 if you received an invalid response from the upstream server it accessed
652 in attempting to fulfill the request.
653 """
654
655 code = 502
656 description = (
657 "The proxy server received an invalid response from an upstream server."
658 )
659
660
661 class ServiceUnavailable(HTTPException):
662 """*503* `Service Unavailable`
663
664 Status code you should return if a service is temporarily unavailable.
665 """
666
667 code = 503
668 description = (
669 "The server is temporarily unable to service your request due"
670 " to maintenance downtime or capacity problems. Please try"
671 " again later."
672 )
673
674
675 class GatewayTimeout(HTTPException):
676 """*504* `Gateway Timeout`
677
678 Status code you should return if a connection to an upstream server
679 times out.
680 """
681
682 code = 504
683 description = "The connection to an upstream server timed out."
684
685
686 class HTTPVersionNotSupported(HTTPException):
687 """*505* `HTTP Version Not Supported`
688
689 The server does not support the HTTP protocol version used in the request.
690 """
691
692 code = 505
693 description = (
694 "The server does not support the HTTP protocol version used in the request."
695 )
696
697
698 default_exceptions = {}
699 __all__ = ["HTTPException"]
700
701
702 def _find_exceptions():
703 for _name, obj in iteritems(globals()):
704 try:
705 is_http_exception = issubclass(obj, HTTPException)
706 except TypeError:
707 is_http_exception = False
708 if not is_http_exception or obj.code is None:
709 continue
710 __all__.append(obj.__name__)
711 old_obj = default_exceptions.get(obj.code, None)
712 if old_obj is not None and issubclass(obj, old_obj):
713 continue
714 default_exceptions[obj.code] = obj
715
716
717 _find_exceptions()
718 del _find_exceptions
719
720
721 class Aborter(object):
722     """When passed a dict of code -> exception items it can be used as
723     a callable that raises exceptions.  If the first argument to the
724     callable is an integer it will be looked up in the mapping; if it's
725     a WSGI application it will be raised in a proxy exception.
726
727 The rest of the arguments are forwarded to the exception constructor.
728 """
729
730 def __init__(self, mapping=None, extra=None):
731 if mapping is None:
732 mapping = default_exceptions
733 self.mapping = dict(mapping)
734 if extra is not None:
735 self.mapping.update(extra)
736
737 def __call__(self, code, *args, **kwargs):
738 if not args and not kwargs and not isinstance(code, integer_types):
739 raise HTTPException(response=code)
740 if code not in self.mapping:
741 raise LookupError("no exception for %r" % code)
742 raise self.mapping[code](*args, **kwargs)
743
744
745 def abort(status, *args, **kwargs):
746 """Raises an :py:exc:`HTTPException` for the given status code or WSGI
747 application::
748
749 abort(404) # 404 Not Found
750 abort(Response('Hello World'))
751
752     Can be passed a WSGI application or a status code.  If a status code
753     is given, it's looked up in the list of exceptions and that exception
754     is raised; if passed a WSGI application, it is wrapped in a proxy WSGI
755     exception and raised::
756
757 abort(404)
758 abort(Response('Hello World'))
759
760 """
761 return _aborter(status, *args, **kwargs)
762
763
764 _aborter = Aborter()
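The dispatch pattern behind ``Aborter`` can be sketched standalone. The names below (``HTTPError``, ``NotFoundError``, ``MAPPING``) are hypothetical illustrations, not Werkzeug's real classes:

```python
# Minimal sketch of the Aborter dispatch above: an integer status is
# looked up in a code -> exception mapping and the mapped exception is
# raised with any extra arguments; unknown codes raise LookupError.
class HTTPError(Exception):
    code = None

class NotFoundError(HTTPError):
    code = 404

class GoneError(HTTPError):
    code = 410

MAPPING = {404: NotFoundError, 410: GoneError}

def abort(status, *args, **kwargs):
    if status not in MAPPING:
        raise LookupError("no exception for %r" % status)
    raise MAPPING[status](*args, **kwargs)

try:
    abort(404)
except HTTPError as e:
    print(e.code)  # 404
```

Keeping the mapping as an instance attribute, as ``Aborter`` does, is what lets callers extend it with custom status codes via the ``extra`` argument.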
765
766
767 #: an exception that is used internally to signal both a key error and a
768 #: bad request. Used by a lot of the datastructures.
769 BadRequestKeyError = BadRequest.wrap(KeyError)
770
771 # imported here because of circular dependencies of werkzeug.utils
772 from .http import HTTP_STATUS_CODES
773 from .utils import escape
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.filesystem
3 ~~~~~~~~~~~~~~~~~~~
4
5 Various utilities for the local filesystem.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import codecs
11 import sys
12 import warnings
13
14 # We do not trust traditional unixes.
15 has_likely_buggy_unicode_filesystem = (
16 sys.platform.startswith("linux") or "bsd" in sys.platform
17 )
18
19
20 def _is_ascii_encoding(encoding):
21 """Given an encoding this figures out if the encoding is actually ASCII (which
22 is something we don't actually want in most cases). This is necessary
23 because ASCII comes under many names such as ANSI_X3.4-1968.
24 """
25 if encoding is None:
26 return False
27 try:
28 return codecs.lookup(encoding).name == "ascii"
29 except LookupError:
30 return False
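The reason this goes through the codec registry rather than comparing strings is that ASCII hides behind many aliases. A self-contained copy of the check, using only the stdlib:

```python
import codecs

def is_ascii_encoding(encoding):
    # Normalize the name through the codec registry; "ANSI_X3.4-1968" is
    # the POSIX name glibc reports for the C locale's charset, and
    # codecs.lookup() resolves it back to "ascii".
    if encoding is None:
        return False
    try:
        return codecs.lookup(encoding).name == "ascii"
    except LookupError:
        return False
```

So `is_ascii_encoding("ANSI_X3.4-1968")` is true while `is_ascii_encoding("utf-8")` is false, and unknown codec names simply return false.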
31
32
33 class BrokenFilesystemWarning(RuntimeWarning, UnicodeWarning):
34 """The warning used by Werkzeug to signal a broken filesystem. Will only be
35 used once per runtime."""
36
37
38 _warned_about_filesystem_encoding = False
39
40
41 def get_filesystem_encoding():
42 """Returns the filesystem encoding that should be used. Note that this is
43 different from the Python understanding of the filesystem encoding which
44 might be deeply flawed. Do not use this value against Python's unicode APIs
45 because it might be different. See :ref:`filesystem-encoding` for the exact
46 behavior.
47
48 The concept of a filesystem encoding is generally not something you
49 should rely on. As such, if you ever need to use this function for
50 anything other than writing wrapper code, reconsider.
51 """
52 global _warned_about_filesystem_encoding
53 rv = sys.getfilesystemencoding()
54 if has_likely_buggy_unicode_filesystem and (not rv or _is_ascii_encoding(rv)):
55 if not _warned_about_filesystem_encoding:
56 warnings.warn(
57 "Detected a misconfigured UNIX filesystem: Will use"
58 " UTF-8 as filesystem encoding instead of {0!r}".format(rv),
59 BrokenFilesystemWarning,
60 )
61 _warned_about_filesystem_encoding = True
62 return "utf-8"
63 return rv
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.formparser
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module implements form parsing. It supports url-encoded forms
6 as well as non-nested multipart uploads.
7
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
10 """
11 import codecs
12 import re
13 from functools import update_wrapper
14 from itertools import chain
15 from itertools import repeat
16 from itertools import tee
17
18 from ._compat import BytesIO
19 from ._compat import text_type
20 from ._compat import to_native
21 from .datastructures import FileStorage
22 from .datastructures import Headers
23 from .datastructures import MultiDict
24 from .http import parse_options_header
25 from .urls import url_decode_stream
26 from .wsgi import get_content_length
27 from .wsgi import get_input_stream
28 from .wsgi import make_line_iter
29
30 # there are some platforms where SpooledTemporaryFile is not available.
31 # In that case we need to provide a fallback.
32 try:
33 from tempfile import SpooledTemporaryFile
34 except ImportError:
35 from tempfile import TemporaryFile
36
37 SpooledTemporaryFile = None
38
39
40 #: an iterator that yields empty strings
41 _empty_string_iter = repeat("")
42
43 #: a regular expression for multipart boundaries
44 _multipart_boundary_re = re.compile("^[ -~]{0,200}[!-~]$")
45
46 #: supported http encodings for multipart messages that are also
47 #: available in python.
48 _supported_multipart_encodings = frozenset(["base64", "quoted-printable"])
49
50
51 def default_stream_factory(
52 total_content_length, filename, content_type, content_length=None
53 ):
54 """The stream factory that is used per default."""
55 max_size = 1024 * 500
56 if SpooledTemporaryFile is not None:
57 return SpooledTemporaryFile(max_size=max_size, mode="wb+")
58 if total_content_length is None or total_content_length > max_size:
59 return TemporaryFile("wb+")
60 return BytesIO()
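The preferred branch above relies on `SpooledTemporaryFile`, which keeps small uploads in memory and only rolls over to a real temporary file once the data exceeds `max_size`. A minimal sketch of that behavior, using the same 500 KiB threshold as the factory:

```python
from tempfile import SpooledTemporaryFile

# Small writes stay in an in-memory buffer; the file only hits disk
# once max_size is exceeded. The data here is made up for the example.
max_size = 1024 * 500
stream = SpooledTemporaryFile(max_size=max_size, mode="wb+")
stream.write(b"small upload")
stream.seek(0)
data = stream.read()
stream.close()
```

Because the object exposes the usual file API either way, callers never need to know whether rollover happened.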
61
62
63 def parse_form_data(
64 environ,
65 stream_factory=None,
66 charset="utf-8",
67 errors="replace",
68 max_form_memory_size=None,
69 max_content_length=None,
70 cls=None,
71 silent=True,
72 ):
73 """Parse the form data in the environ and return it as tuple in the form
74 ``(stream, form, files)``. You should only call this method if the
75 transport method is `POST`, `PUT`, or `PATCH`.
76
77 If the mimetype of the data transmitted is `multipart/form-data` the
78 files multidict will be filled with `FileStorage` objects. If the
79 mimetype is unknown the input stream is wrapped and returned as the
80 first argument, else the stream is empty.
81
82 This is a shortcut for the common usage of :class:`FormDataParser`.
83
84 Have a look at :ref:`dealing-with-request-data` for more details.
85
86 .. versionadded:: 0.5
87 The `max_form_memory_size`, `max_content_length` and
88 `cls` parameters were added.
89
90 .. versionadded:: 0.5.1
91 The optional `silent` flag was added.
92
93 :param environ: the WSGI environment to be used for parsing.
94 :param stream_factory: An optional callable that returns a new read and
95 writeable file descriptor. This callable works
96 the same as :meth:`~BaseResponse._get_file_stream`.
97 :param charset: The character set for URL and url encoded form data.
98 :param errors: The encoding error behavior.
99 :param max_form_memory_size: the maximum number of bytes to be accepted for
100 in-memory stored form data. If the data
101 exceeds the value specified an
102 :exc:`~exceptions.RequestEntityTooLarge`
103 exception is raised.
104 :param max_content_length: If this is provided and the transmitted data
105 is longer than this value an
106 :exc:`~exceptions.RequestEntityTooLarge`
107 exception is raised.
108 :param cls: an optional dict class to use. If this is not specified
109 or `None` the default :class:`MultiDict` is used.
110 :param silent: If set to False parsing errors will not be caught.
111 :return: A tuple in the form ``(stream, form, files)``.
112 """
113 return FormDataParser(
114 stream_factory,
115 charset,
116 errors,
117 max_form_memory_size,
118 max_content_length,
119 cls,
120 silent,
121 ).parse_from_environ(environ)
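For the `application/x-www-form-urlencoded` case, the body that `parse_form_data` consumes is simply a query string. The stdlib equivalent of Werkzeug's `url_decode_stream` for an in-memory body is `urllib.parse.parse_qsl`; the body below is made up for the example:

```python
from urllib.parse import parse_qsl

# A url-encoded form body is a query string: repeated keys are allowed
# and order is preserved, which is why Werkzeug stores the result in a
# MultiDict rather than a plain dict.
body = "name=alice&tags=a&tags=b"
pairs = parse_qsl(body)
```

`pairs` is `[("name", "alice"), ("tags", "a"), ("tags", "b")]`; a plain dict would silently drop one of the `tags` values.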
122
123
124 def exhaust_stream(f):
125 """Helper decorator for methods that exhausts the stream on return."""
126
127 def wrapper(self, stream, *args, **kwargs):
128 try:
129 return f(self, stream, *args, **kwargs)
130 finally:
131 exhaust = getattr(stream, "exhaust", None)
132 if exhaust is not None:
133 exhaust()
134 else:
135 while 1:
136 chunk = stream.read(1024 * 64)
137 if not chunk:
138 break
139
140 return update_wrapper(wrapper, f)
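The decorator guarantees that the wrapped method always drains its stream, even when the method only reads part of it. A self-contained copy applied to a toy parser (the `ToyParser` class is invented for the example):

```python
from functools import update_wrapper
from io import BytesIO

def exhaust_stream(f):
    # Same logic as above: run the method, then drain whatever is left
    # in the stream, preferring a stream-provided exhaust() hook.
    def wrapper(self, stream, *args, **kwargs):
        try:
            return f(self, stream, *args, **kwargs)
        finally:
            exhaust = getattr(stream, "exhaust", None)
            if exhaust is not None:
                exhaust()
            else:
                while 1:
                    chunk = stream.read(1024 * 64)
                    if not chunk:
                        break
    return update_wrapper(wrapper, f)

class ToyParser(object):
    @exhaust_stream
    def parse(self, stream):
        return stream.read(4)  # only consume the first four bytes

stream = BytesIO(b"abcdefgh")
first = ToyParser().parse(stream)
leftover = stream.read()  # the decorator already drained the rest
```

Draining matters for WSGI: leaving unread request body on the wire can confuse keep-alive connections.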
141
142
143 class FormDataParser(object):
144 """This class implements parsing of form data for Werkzeug. By itself
145 it can parse multipart and url encoded form data. It can be subclassed
146 and extended but for most mimetypes it is a better idea to use the
147 untouched stream and expose it as separate attributes on a request
148 object.
149
150 .. versionadded:: 0.8
151
152 :param stream_factory: An optional callable that returns a new read and
153 writeable file descriptor. This callable works
154 the same as :meth:`~BaseResponse._get_file_stream`.
155 :param charset: The character set for URL and url encoded form data.
156 :param errors: The encoding error behavior.
157 :param max_form_memory_size: the maximum number of bytes to be accepted for
158 in-memory stored form data. If the data
159 exceeds the value specified an
160 :exc:`~exceptions.RequestEntityTooLarge`
161 exception is raised.
162 :param max_content_length: If this is provided and the transmitted data
163 is longer than this value an
164 :exc:`~exceptions.RequestEntityTooLarge`
165 exception is raised.
166 :param cls: an optional dict class to use. If this is not specified
167 or `None` the default :class:`MultiDict` is used.
168 :param silent: If set to False parsing errors will not be caught.
169 """
170
171 def __init__(
172 self,
173 stream_factory=None,
174 charset="utf-8",
175 errors="replace",
176 max_form_memory_size=None,
177 max_content_length=None,
178 cls=None,
179 silent=True,
180 ):
181 if stream_factory is None:
182 stream_factory = default_stream_factory
183 self.stream_factory = stream_factory
184 self.charset = charset
185 self.errors = errors
186 self.max_form_memory_size = max_form_memory_size
187 self.max_content_length = max_content_length
188 if cls is None:
189 cls = MultiDict
190 self.cls = cls
191 self.silent = silent
192
193 def get_parse_func(self, mimetype, options):
194 return self.parse_functions.get(mimetype)
195
196 def parse_from_environ(self, environ):
197 """Parses the information from the environment as form data.
198
199 :param environ: the WSGI environment to be used for parsing.
200 :return: A tuple in the form ``(stream, form, files)``.
201 """
202 content_type = environ.get("CONTENT_TYPE", "")
203 content_length = get_content_length(environ)
204 mimetype, options = parse_options_header(content_type)
205 return self.parse(get_input_stream(environ), mimetype, content_length, options)
206
207 def parse(self, stream, mimetype, content_length, options=None):
208 """Parses the information from the given stream, mimetype,
209 content length and mimetype parameters.
210
211 :param stream: an input stream
212 :param mimetype: the mimetype of the data
213 :param content_length: the content length of the incoming data
214 :param options: optional mimetype parameters (used for
215 the multipart boundary for instance)
216 :return: A tuple in the form ``(stream, form, files)``.
217 """
218 if (
219 self.max_content_length is not None
220 and content_length is not None
221 and content_length > self.max_content_length
222 ):
223 raise exceptions.RequestEntityTooLarge()
224 if options is None:
225 options = {}
226
227 parse_func = self.get_parse_func(mimetype, options)
228 if parse_func is not None:
229 try:
230 return parse_func(self, stream, mimetype, content_length, options)
231 except ValueError:
232 if not self.silent:
233 raise
234
235 return stream, self.cls(), self.cls()
236
237 @exhaust_stream
238 def _parse_multipart(self, stream, mimetype, content_length, options):
239 parser = MultiPartParser(
240 self.stream_factory,
241 self.charset,
242 self.errors,
243 max_form_memory_size=self.max_form_memory_size,
244 cls=self.cls,
245 )
246 boundary = options.get("boundary")
247 if boundary is None:
248 raise ValueError("Missing boundary")
249 if isinstance(boundary, text_type):
250 boundary = boundary.encode("ascii")
251 form, files = parser.parse(stream, boundary, content_length)
252 return stream, form, files
253
254 @exhaust_stream
255 def _parse_urlencoded(self, stream, mimetype, content_length, options):
256 if (
257 self.max_form_memory_size is not None
258 and content_length is not None
259 and content_length > self.max_form_memory_size
260 ):
261 raise exceptions.RequestEntityTooLarge()
262 form = url_decode_stream(stream, self.charset, errors=self.errors, cls=self.cls)
263 return stream, form, self.cls()
264
265 #: mapping of mimetypes to parsing functions
266 parse_functions = {
267 "multipart/form-data": _parse_multipart,
268 "application/x-www-form-urlencoded": _parse_urlencoded,
269 "application/x-url-encoded": _parse_urlencoded,
270 }
271
272
273 def is_valid_multipart_boundary(boundary):
274 """Checks if the string given is a valid multipart boundary."""
275 return _multipart_boundary_re.match(boundary) is not None
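The boundary rule compiles down to the regex defined near the top of this module: up to 200 printable ASCII characters, with the final character not a space. A self-contained copy with a few illustrative inputs (the boundary strings are made up):

```python
import re

# Same pattern as _multipart_boundary_re above: printable ASCII only,
# bounded length, and the last character must not be whitespace.
boundary_re = re.compile("^[ -~]{0,200}[!-~]$")

def is_valid(boundary):
    return boundary_re.match(boundary) is not None
```

Browser-generated boundaries like `----WebKitFormBoundaryQwe123` pass, while an empty string or a boundary ending in a space does not.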
276
277
278 def _line_parse(line):
279 """Removes line ending characters and returns a tuple (`stripped_line`,
280 `is_terminated`).
281 """
282 if line[-2:] in ["\r\n", b"\r\n"]:
283 return line[:-2], True
284 elif line[-1:] in ["\r", "\n", b"\r", b"\n"]:
285 return line[:-1], True
286 return line, False
287
288
289 def parse_multipart_headers(iterable):
290 """Parses multipart headers from an iterable that yields lines (including
291 the trailing newline symbol). The iterable has to be newline terminated.
292
293 The iterable will stop at the line where the headers ended so it can be
294 further consumed.
295
296 :param iterable: iterable of strings that are newline terminated
297 """
298 result = []
299 for line in iterable:
300 line = to_native(line)
301 line, line_terminated = _line_parse(line)
302 if not line_terminated:
303 raise ValueError("unexpected end of line in multipart header")
304 if not line:
305 break
306 elif line[0] in " \t" and result:
307 key, value = result[-1]
308 result[-1] = (key, value + "\n " + line[1:])
309 else:
310 parts = line.split(":", 1)
311 if len(parts) == 2:
312 result.append((parts[0].strip(), parts[1].strip()))
313
314 # we link the list to the headers; no need to create a copy, the
315 # list was not shared anyway.
316 return Headers(result)
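The loop above folds continuation lines (lines starting with whitespace) into the previous header, per the classic MIME header syntax. A self-contained rerun of that logic on a small header block, using a plain list of tuples in place of Werkzeug's `Headers` class (the header lines are invented for the example):

```python
def parse_headers(lines):
    # Same folding rule as parse_multipart_headers: a line beginning
    # with space/tab continues the previous header's value.
    result = []
    for line in lines:
        line = line.rstrip("\r\n")
        if not line:
            break
        if line[0] in " \t" and result:
            key, value = result[-1]
            result[-1] = (key, value + "\n " + line[1:])
        else:
            parts = line.split(":", 1)
            if len(parts) == 2:
                result.append((parts[0].strip(), parts[1].strip()))
    return result

lines = [
    'Content-Disposition: form-data;\r\n',
    ' name="field"\r\n',
    "\r\n",
]
headers = parse_headers(lines)
```

The folded value keeps a `"\n "` marker between the original lines, matching what the real parser stores.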
317
318
319 _begin_form = "begin_form"
320 _begin_file = "begin_file"
321 _cont = "cont"
322 _end = "end"
323
324
325 class MultiPartParser(object):
326 def __init__(
327 self,
328 stream_factory=None,
329 charset="utf-8",
330 errors="replace",
331 max_form_memory_size=None,
332 cls=None,
333 buffer_size=64 * 1024,
334 ):
335 self.charset = charset
336 self.errors = errors
337 self.max_form_memory_size = max_form_memory_size
338 self.stream_factory = (
339 default_stream_factory if stream_factory is None else stream_factory
340 )
341 self.cls = MultiDict if cls is None else cls
342
343 # make sure the buffer size is divisible by four so that we can base64
344 # decode chunk by chunk
345 assert buffer_size % 4 == 0, "buffer size has to be divisible by 4"
346 # also the buffer size has to be at least 1024 bytes long or long headers
347 # will freak out the system
348 assert buffer_size >= 1024, "buffer size has to be at least 1KB"
349
350 self.buffer_size = buffer_size
351
352 def _fix_ie_filename(self, filename):
353 """Internet Explorer 6 transmits the full file name if a file is
354 uploaded. This function strips the full path if it thinks the
355 filename is Windows-like absolute.
356 """
357 if filename[1:3] == ":\\" or filename[:2] == "\\\\":
358 return filename.split("\\")[-1]
359 return filename
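The two prefixes being checked are a drive letter (`C:\`) and a UNC share (`\\host\`). A self-contained copy of the check, with an invented path for illustration:

```python
def fix_ie_filename(filename):
    # Strip a Windows-style absolute path (drive letter or UNC prefix)
    # down to the bare filename; anything else passes through untouched.
    if filename[1:3] == ":\\" or filename[:2] == "\\\\":
        return filename.split("\\")[-1]
    return filename
```

So `fix_ie_filename("C:\\temp\\report.txt")` yields `"report.txt"`, while a plain `"report.txt"` is returned unchanged.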
360
361 def _find_terminator(self, iterator):
362 """The terminator might have some additional newlines before it.
363 There is at least one application that sends additional newlines
364 before headers (the python setuptools package).
365 """
366 for line in iterator:
367 if not line:
368 break
369 line = line.strip()
370 if line:
371 return line
372 return b""
373
374 def fail(self, message):
375 raise ValueError(message)
376
377 def get_part_encoding(self, headers):
378 transfer_encoding = headers.get("content-transfer-encoding")
379 if (
380 transfer_encoding is not None
381 and transfer_encoding in _supported_multipart_encodings
382 ):
383 return transfer_encoding
384
385 def get_part_charset(self, headers):
386 # Figure out input charset for current part
387 content_type = headers.get("content-type")
388 if content_type:
389 mimetype, ct_params = parse_options_header(content_type)
390 return ct_params.get("charset", self.charset)
391 return self.charset
392
393 def start_file_streaming(self, filename, headers, total_content_length):
394 if isinstance(filename, bytes):
395 filename = filename.decode(self.charset, self.errors)
396 filename = self._fix_ie_filename(filename)
397 content_type = headers.get("content-type")
398 try:
399 content_length = int(headers["content-length"])
400 except (KeyError, ValueError):
401 content_length = 0
402 container = self.stream_factory(
403 total_content_length=total_content_length,
404 filename=filename,
405 content_type=content_type,
406 content_length=content_length,
407 )
408 return filename, container
409
410 def in_memory_threshold_reached(self, bytes):
411 raise exceptions.RequestEntityTooLarge()
412
413 def validate_boundary(self, boundary):
414 if not boundary:
415 self.fail("Missing boundary")
416 if not is_valid_multipart_boundary(boundary):
417 self.fail("Invalid boundary: %s" % boundary)
418 if len(boundary) > self.buffer_size: # pragma: no cover
419 # this should never happen because we check for a minimum size
420 # of 1024 and boundaries may not be longer than 200. The only
421 # situation when this happens is for non debug builds where
422 # the assert is skipped.
423 self.fail("Boundary longer than buffer size")
424
425 def parse_lines(self, file, boundary, content_length, cap_at_buffer=True):
426 """Generate parts of
427 ``('begin_form', (headers, name))``
428 ``('begin_file', (headers, name, filename))``
429 ``('cont', bytestring)``
430 ``('end', None)``
431
432 Always obeys the grammar
433 parts = ( begin_form cont* end |
434 begin_file cont* end )*
435 """
436 next_part = b"--" + boundary
437 last_part = next_part + b"--"
438
439 iterator = chain(
440 make_line_iter(
441 file,
442 limit=content_length,
443 buffer_size=self.buffer_size,
444 cap_at_buffer=cap_at_buffer,
445 ),
446 _empty_string_iter,
447 )
448
449 terminator = self._find_terminator(iterator)
450
451 if terminator == last_part:
452 return
453 elif terminator != next_part:
454 self.fail("Expected boundary at start of multipart data")
455
456 while terminator != last_part:
457 headers = parse_multipart_headers(iterator)
458
459 disposition = headers.get("content-disposition")
460 if disposition is None:
461 self.fail("Missing Content-Disposition header")
462 disposition, extra = parse_options_header(disposition)
463 transfer_encoding = self.get_part_encoding(headers)
464 name = extra.get("name")
465 filename = extra.get("filename")
466
467 # if no filename is given we accumulate the value in memory. A
468 # list is used as a temporary container.
469 if filename is None:
470 yield _begin_form, (headers, name)
471
472 # otherwise we parse the rest of the headers and ask the stream
473 # factory for something we can write in.
474 else:
475 yield _begin_file, (headers, name, filename)
476
477 buf = b""
478 for line in iterator:
479 if not line:
480 self.fail("unexpected end of stream")
481
482 if line[:2] == b"--":
483 terminator = line.rstrip()
484 if terminator in (next_part, last_part):
485 break
486
487 if transfer_encoding is not None:
488 if transfer_encoding == "base64":
489 transfer_encoding = "base64_codec"
490 try:
491 line = codecs.decode(line, transfer_encoding)
492 except Exception:
493 self.fail("could not decode transfer encoded chunk")
494
495 # we have something in the buffer from the last iteration.
496 # this is usually a newline delimiter.
497 if buf:
498 yield _cont, buf
499 buf = b""
500
501 # If the line ends with windows CRLF we write everything except
502 # the last two bytes. In all other cases however we write
503 # everything except the last byte. If it was a newline, that's
504 # fine, otherwise it does not matter because we will write it
505 # the next iteration. This ensures we do not write the
506 # final newline into the stream. That way we do not have to
507 # truncate the stream. However we do have to make sure that
508 # if something other than a newline is in there we write it
509 # out.
510 if line[-2:] == b"\r\n":
511 buf = b"\r\n"
512 cutoff = -2
513 else:
514 buf = line[-1:]
515 cutoff = -1
516 yield _cont, line[:cutoff]
517
518 else: # pragma: no cover
519 raise ValueError("unexpected end of part")
520
521 # if we have a leftover in the buffer that is not a newline
522 # character we have to flush it, otherwise we will chop off
523 # certain values.
524 if buf not in (b"", b"\r", b"\n", b"\r\n"):
525 yield _cont, buf
526
527 yield _end, None
528
529 def parse_parts(self, file, boundary, content_length):
530 """Generate ``('file', (name, val))`` and
531 ``('form', (name, val))`` parts.
532 """
533 in_memory = 0
534
535 for ellt, ell in self.parse_lines(file, boundary, content_length):
536 if ellt == _begin_file:
537 headers, name, filename = ell
538 is_file = True
539 guard_memory = False
540 filename, container = self.start_file_streaming(
541 filename, headers, content_length
542 )
543 _write = container.write
544
545 elif ellt == _begin_form:
546 headers, name = ell
547 is_file = False
548 container = []
549 _write = container.append
550 guard_memory = self.max_form_memory_size is not None
551
552 elif ellt == _cont:
553 _write(ell)
554 # if we write into memory and there is a memory size limit we
555 # count the number of bytes in memory and raise an exception if
556 # there is too much data in memory.
557 if guard_memory:
558 in_memory += len(ell)
559 if in_memory > self.max_form_memory_size:
560 self.in_memory_threshold_reached(in_memory)
561
562 elif ellt == _end:
563 if is_file:
564 container.seek(0)
565 yield (
566 "file",
567 (name, FileStorage(container, filename, name, headers=headers)),
568 )
569 else:
570 part_charset = self.get_part_charset(headers)
571 yield (
572 "form",
573 (name, b"".join(container).decode(part_charset, self.errors)),
574 )
575
576 def parse(self, file, boundary, content_length):
577 formstream, filestream = tee(
578 self.parse_parts(file, boundary, content_length), 2
579 )
580 form = (p[1] for p in formstream if p[0] == "form")
581 files = (p[1] for p in filestream if p[0] == "file")
582 return self.cls(form), self.cls(files)
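The wire format that `parse` consumes can be illustrated with the stdlib `email` parser rather than Werkzeug's streaming `MultiPartParser`; the boundary and field below are made up for the example:

```python
from email.parser import BytesParser
from email.policy import default

# A one-field multipart/form-data body, exactly as a browser would
# frame it: boundary line, part headers, blank line, part body,
# closing boundary with the trailing "--".
raw = (
    b"Content-Type: multipart/form-data; boundary=XbOuNdArY\r\n"
    b"\r\n"
    b"--XbOuNdArY\r\n"
    b'Content-Disposition: form-data; name="field"\r\n'
    b"\r\n"
    b"value\r\n"
    b"--XbOuNdArY--\r\n"
)
msg = BytesParser(policy=default).parsebytes(raw)
parts = list(msg.iter_parts())
payload = parts[0].get_payload(decode=True)
```

Werkzeug's parser does the same job incrementally over a WSGI input stream, which is why it deals in `cont` chunks instead of whole payloads.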
583
584
585 from . import exceptions
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.http
3 ~~~~~~~~~~~~~
4
5 Werkzeug comes with a bunch of utilities that help Werkzeug to deal with
6 HTTP data. Most of the classes and functions provided by this module are
7 used by the wrappers, but they are useful on their own, too, especially if
8 the response and request objects are not used.
9
10 This covers some of the more HTTP centric features of WSGI, some other
11 utilities such as cookie handling are documented in the `werkzeug.utils`
12 module.
13
14
15 :copyright: 2007 Pallets
16 :license: BSD-3-Clause
17 """
18 import base64
19 import re
20 import warnings
21 from datetime import datetime
22 from datetime import timedelta
23 from hashlib import md5
24 from time import gmtime
25 from time import time
26
27 from ._compat import integer_types
28 from ._compat import iteritems
29 from ._compat import PY2
30 from ._compat import string_types
31 from ._compat import text_type
32 from ._compat import to_bytes
33 from ._compat import to_unicode
34 from ._compat import try_coerce_native
35 from ._internal import _cookie_parse_impl
36 from ._internal import _cookie_quote
37 from ._internal import _make_cookie_domain
38
39 try:
40 from email.utils import parsedate_tz
41 except ImportError:
42 from email.Utils import parsedate_tz
43
44 try:
45 from urllib.request import parse_http_list as _parse_list_header
46 from urllib.parse import unquote_to_bytes as _unquote
47 except ImportError:
48 from urllib2 import parse_http_list as _parse_list_header
49 from urllib2 import unquote as _unquote
50
51 _cookie_charset = "latin1"
52 _basic_auth_charset = "utf-8"
53 # for explanation of "media-range", etc. see Sections 5.3.{1,2} of RFC 7231
54 _accept_re = re.compile(
55 r"""
56 ( # media-range capturing-parenthesis
57 [^\s;,]+ # type/subtype
58 (?:[ \t]*;[ \t]* # ";"
59 (?: # parameter non-capturing-parenthesis
60 [^\s;,q][^\s;,]* # token that doesn't start with "q"
61 | # or
62 q[^\s;,=][^\s;,]* # token that is more than just "q"
63 )
64 )* # zero or more parameters
65 ) # end of media-range
66 (?:[ \t]*;[ \t]*q= # weight is a "q" parameter
67 (\d*(?:\.\d+)?) # qvalue capturing-parentheses
68 [^,]* # "extension" accept params: who cares?
69 )? # accept params are optional
70 """,
71 re.VERBOSE,
72 )
73 _token_chars = frozenset(
74 "!#$%&'*+-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ^_`abcdefghijklmnopqrstuvwxyz|~"
75 )
76 _etag_re = re.compile(r'([Ww]/)?(?:"(.*?)"|(.*?))(?:\s*,\s*|$)')
77 _unsafe_header_chars = set('()<>@,;:"/[]?={} \t')
78 _option_header_piece_re = re.compile(
79 r"""
80 ;\s*,?\s* # newlines were replaced with commas
81 (?P<key>
82 "[^"\\]*(?:\\.[^"\\]*)*" # quoted string
83 |
84 [^\s;,=*]+ # token
85 )
86 (?:\*(?P<count>\d+))? # *1, optional continuation index
87 \s*
88 (?: # optionally followed by =value
89 (?: # equals sign, possibly with encoding
90 \*\s*=\s* # * indicates extended notation
91 (?: # optional encoding
92 (?P<encoding>[^\s]+?)
93 '(?P<language>[^\s]*?)'
94 )?
95 |
96 =\s* # basic notation
97 )
98 (?P<value>
99 "[^"\\]*(?:\\.[^"\\]*)*" # quoted string
100 |
101 [^;,]+ # token
102 )?
103 )?
104 \s*
105 """,
106 flags=re.VERBOSE,
107 )
108 _option_header_start_mime_type = re.compile(r",\s*([^;,\s]+)([;,]\s*.+)?")
109
110 _entity_headers = frozenset(
111 [
112 "allow",
113 "content-encoding",
114 "content-language",
115 "content-length",
116 "content-location",
117 "content-md5",
118 "content-range",
119 "content-type",
120 "expires",
121 "last-modified",
122 ]
123 )
124 _hop_by_hop_headers = frozenset(
125 [
126 "connection",
127 "keep-alive",
128 "proxy-authenticate",
129 "proxy-authorization",
130 "te",
131 "trailer",
132 "transfer-encoding",
133 "upgrade",
134 ]
135 )
136
137
138 HTTP_STATUS_CODES = {
139 100: "Continue",
140 101: "Switching Protocols",
141 102: "Processing",
142 200: "OK",
143 201: "Created",
144 202: "Accepted",
145 203: "Non Authoritative Information",
146 204: "No Content",
147 205: "Reset Content",
148 206: "Partial Content",
149 207: "Multi Status",
150 226: "IM Used", # see RFC 3229
151 300: "Multiple Choices",
152 301: "Moved Permanently",
153 302: "Found",
154 303: "See Other",
155 304: "Not Modified",
156 305: "Use Proxy",
157 307: "Temporary Redirect",
158 308: "Permanent Redirect",
159 400: "Bad Request",
160 401: "Unauthorized",
161 402: "Payment Required", # unused
162 403: "Forbidden",
163 404: "Not Found",
164 405: "Method Not Allowed",
165 406: "Not Acceptable",
166 407: "Proxy Authentication Required",
167 408: "Request Timeout",
168 409: "Conflict",
169 410: "Gone",
170 411: "Length Required",
171 412: "Precondition Failed",
172 413: "Request Entity Too Large",
173 414: "Request URI Too Long",
174 415: "Unsupported Media Type",
175 416: "Requested Range Not Satisfiable",
176 417: "Expectation Failed",
177 418: "I'm a teapot", # see RFC 2324
178 421: "Misdirected Request", # see RFC 7540
179 422: "Unprocessable Entity",
180 423: "Locked",
181 424: "Failed Dependency",
182 426: "Upgrade Required",
183 428: "Precondition Required", # see RFC 6585
184 429: "Too Many Requests",
185 431: "Request Header Fields Too Large",
186 449: "Retry With", # proprietary MS extension
187 451: "Unavailable For Legal Reasons",
188 500: "Internal Server Error",
189 501: "Not Implemented",
190 502: "Bad Gateway",
191 503: "Service Unavailable",
192 504: "Gateway Timeout",
193 505: "HTTP Version Not Supported",
194 507: "Insufficient Storage",
195 510: "Not Extended",
196 }
197
198
199 def wsgi_to_bytes(data):
200 """coerce wsgi unicode represented bytes to real ones"""
201 if isinstance(data, bytes):
202 return data
203 return data.encode("latin1") # XXX: utf8 fallback?
204
205
206 def bytes_to_wsgi(data):
207 assert isinstance(data, bytes), "data must be bytes"
208 if isinstance(data, str):
209 return data
210 else:
211 return data.decode("latin1")
212
213
214 def quote_header_value(value, extra_chars="", allow_token=True):
215 """Quote a header value if necessary.
216
217 .. versionadded:: 0.5
218
219 :param value: the value to quote.
220 :param extra_chars: a list of extra characters to skip quoting.
221 :param allow_token: if this is enabled token values are returned
222 unchanged.
223 """
224 if isinstance(value, bytes):
225 value = bytes_to_wsgi(value)
226 value = str(value)
227 if allow_token:
228 token_chars = _token_chars | set(extra_chars)
229 if set(value).issubset(token_chars):
230 return value
231 return '"%s"' % value.replace("\\", "\\\\").replace('"', '\\"')
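The quoting rule above has two branches: values made only of token characters pass through unchanged, everything else is wrapped in double quotes with backslash escaping. A self-contained copy using the same token character set defined earlier in this module:

```python
# Token-safe values pass through; anything else is quoted, escaping
# backslashes first and then embedded double quotes.
token_chars = frozenset(
    "!#$%&'*+-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "^_`abcdefghijklmnopqrstuvwxyz|~"
)

def quote_value(value):
    if set(value).issubset(token_chars):
        return value
    return '"%s"' % value.replace("\\", "\\\\").replace('"', '\\"')
```

So `quote_value("attachment")` stays bare, while `quote_value("bar baz")` becomes `'"bar baz"'` because the space is not a token character.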
232
233
234 def unquote_header_value(value, is_filename=False):
235 r"""Unquotes a header value. (Reversal of :func:`quote_header_value`).
236 This does not use the real unquoting but what browsers actually
237 use for quoting.
238
239 .. versionadded:: 0.5
240
241 :param value: the header value to unquote.
242 """
243 if value and value[0] == value[-1] == '"':
244 # this is not the real unquoting, but fixing this so that the
245 # RFC is met will result in bugs with internet explorer and
246 # probably some other browsers as well. IE for example is
247 # uploading files with "C:\foo\bar.txt" as filename
248 value = value[1:-1]
249
250 # if this is a filename and the starting characters look like
251 # a UNC path, then just return the value without quotes. Using the
252 # replace sequence below on a UNC path has the effect of turning
253 # the leading double slash into a single slash and then
254 # _fix_ie_filename() doesn't work correctly. See #458.
255 if not is_filename or value[:2] != "\\\\":
256 return value.replace("\\\\", "\\").replace('\\"', '"')
257 return value
258
259
260 def dump_options_header(header, options):
261 """The reverse function to :func:`parse_options_header`.
262
263 :param header: the header to dump
264 :param options: a dict of options to append.
265 """
266 segments = []
267 if header is not None:
268 segments.append(header)
269 for key, value in iteritems(options):
270 if value is None:
271 segments.append(key)
272 else:
273 segments.append("%s=%s" % (key, quote_header_value(value)))
274 return "; ".join(segments)
275
276
277 def dump_header(iterable, allow_token=True):
278 """Dump an HTTP header again. This is the reversal of
279 :func:`parse_list_header`, :func:`parse_set_header` and
280 :func:`parse_dict_header`. This also quotes strings that include an
281 equals sign unless you pass it as dict of key, value pairs.
282
283 >>> dump_header({'foo': 'bar baz'})
284 'foo="bar baz"'
285 >>> dump_header(('foo', 'bar baz'))
286 'foo, "bar baz"'
287
288 :param iterable: the iterable or dict of values to quote.
289 :param allow_token: if set to `False` tokens as values are disallowed.
290 See :func:`quote_header_value` for more details.
291 """
292 if isinstance(iterable, dict):
293 items = []
294 for key, value in iteritems(iterable):
295 if value is None:
296 items.append(key)
297 else:
298 items.append(
299 "%s=%s" % (key, quote_header_value(value, allow_token=allow_token))
300 )
301 else:
302 items = [quote_header_value(x, allow_token=allow_token) for x in iterable]
303 return ", ".join(items)
304
305
306 def parse_list_header(value):
307 """Parse lists as described by RFC 2068 Section 2.
308
309 In particular, parse comma-separated lists where the elements of
310 the list may include quoted-strings. A quoted-string could
311 contain a comma. A non-quoted string could have quotes in the
312 middle. Quotes are removed automatically after parsing.
313
314 It basically works like :func:`parse_set_header` just that items
315 may appear multiple times and case sensitivity is preserved.
316
317 The return value is a standard :class:`list`:
318
319 >>> parse_list_header('token, "quoted value"')
320 ['token', 'quoted value']
321
322 To create a header from the :class:`list` again, use the
323 :func:`dump_header` function.
324
325 :param value: a string with a list header.
326 :return: :class:`list`
327 """
328 result = []
329 for item in _parse_list_header(value):
330 if item[:1] == item[-1:] == '"':
331 item = unquote_header_value(item[1:-1])
332 result.append(item)
333 return result
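The `_parse_list_header` helper imported at the top of this module is the stdlib's `urllib.request.parse_http_list`; it splits on commas while respecting quoted-strings but leaves the quotes in place for the caller to strip, which is exactly what the loop above does. The header value below is made up for the example:

```python
from urllib.request import parse_http_list

# Commas inside quoted-strings do not split; quotes are preserved.
items = parse_http_list('token, "quoted value", "with, comma"')
```

`items` is `['token', '"quoted value"', '"with, comma"']`, so `parse_list_header` still has to unquote the quoted entries afterwards.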
334
335
336 def parse_dict_header(value, cls=dict):
337 """Parse lists of key, value pairs as described by RFC 2068 Section 2 and
338 convert them into a python dict (or any other mapping object created from
339 the type with a dict like interface provided by the `cls` argument):
340
341 >>> d = parse_dict_header('foo="is a fish", bar="as well"')
342 >>> type(d) is dict
343 True
344 >>> sorted(d.items())
345 [('bar', 'as well'), ('foo', 'is a fish')]
346
347 If there is no value for a key it will be `None`:
348
349 >>> parse_dict_header('key_without_value')
350 {'key_without_value': None}
351
352 To create a header from the :class:`dict` again, use the
353 :func:`dump_header` function.
354
355 .. versionchanged:: 0.9
356 Added support for `cls` argument.
357
358 :param value: a string with a dict header.
359 :param cls: callable to use for storage of parsed results.
360 :return: an instance of `cls`
361 """
362 result = cls()
363 if not isinstance(value, text_type):
364 # XXX: validate
365 value = bytes_to_wsgi(value)
366 for item in _parse_list_header(value):
367 if "=" not in item:
368 result[item] = None
369 continue
370 name, value = item.split("=", 1)
371 if value[:1] == value[-1:] == '"':
372 value = unquote_header_value(value[1:-1])
373 result[name] = value
374 return result
375
376
377 def parse_options_header(value, multiple=False):
378 """Parse a ``Content-Type`` like header into a tuple with the content
379 type and the options:
380
381 >>> parse_options_header('text/html; charset=utf8')
382 ('text/html', {'charset': 'utf8'})
383
384 This should not be used to parse ``Cache-Control`` like headers that use
385 a slightly different format. For these headers use the
386 :func:`parse_dict_header` function.
387
388 .. versionchanged:: 0.15
389 :rfc:`2231` parameter continuations are handled.
390
391 .. versionadded:: 0.5
392
393 :param value: the header to parse.
394 :param multiple: whether to try to parse and return multiple MIME types
395 :return: (mimetype, options) or (mimetype, options, mimetype, options, …)
396 if multiple=True
397 """
398 if not value:
399 return "", {}
400
401 result = []
402
403 value = "," + value.replace("\n", ",")
404 while value:
405 match = _option_header_start_mime_type.match(value)
406 if not match:
407 break
408 result.append(match.group(1)) # mimetype
409 options = {}
410 # Parse options
411 rest = match.group(2)
412 continued_encoding = None
413 while rest:
414 optmatch = _option_header_piece_re.match(rest)
415 if not optmatch:
416 break
417 option, count, encoding, language, option_value = optmatch.groups()
418 # Continuations don't have to supply the encoding after the
419 # first line. If we're in a continuation, track the current
420 # encoding to use for subsequent lines. Reset it when the
421 # continuation ends.
422 if not count:
423 continued_encoding = None
424 else:
425 if not encoding:
426 encoding = continued_encoding
427 continued_encoding = encoding
428 option = unquote_header_value(option)
429 if option_value is not None:
430 option_value = unquote_header_value(option_value, option == "filename")
431 if encoding is not None:
432 option_value = _unquote(option_value).decode(encoding)
433 if count:
434 # Continuations append to the existing value. For
435 # simplicity, this ignores the possibility of
436 # out-of-order indices, which shouldn't happen anyway.
437 options[option] = options.get(option, "") + option_value
438 else:
439 options[option] = option_value
440 rest = rest[optmatch.end() :]
441 result.append(options)
442 if multiple is False:
443 return tuple(result)
444 value = rest
445
446 return tuple(result) if result else ("", {})
447
448
449 def parse_accept_header(value, cls=None):
450 """Parses an HTTP Accept-* header. This does not implement a complete
451 valid algorithm but one that supports at least value and quality
452 extraction.
453
454 Returns a new :class:`Accept` object (basically a list of ``(value, quality)``
455 tuples sorted by the quality with some additional accessor methods).
456
457 The second parameter can be a subclass of :class:`Accept` that is created
458 with the parsed values and returned.
459
460 :param value: the accept header string to be parsed.
461 :param cls: the wrapper class for the return value (can be
462 :class:`Accept` or a subclass thereof)
463 :return: an instance of `cls`.
464 """
465 if cls is None:
466 cls = Accept
467
468 if not value:
469 return cls(None)
470
471 result = []
472 for match in _accept_re.finditer(value):
473 quality = match.group(2)
474 if not quality:
475 quality = 1
476 else:
477 quality = max(min(float(quality), 1), 0)
478 result.append((match.group(1), quality))
479 return cls(result)
480
481
482 def parse_cache_control_header(value, on_update=None, cls=None):
483 """Parse a cache control header. The RFC differs between response and
484 request cache control, this method does not. It's your responsibility
485 to not use the wrong control statements.
486
487 .. versionadded:: 0.5
488 The `cls` was added. If not specified an immutable
489 :class:`~werkzeug.datastructures.RequestCacheControl` is returned.
490
491 :param value: a cache control header to be parsed.
492 :param on_update: an optional callable that is called every time a value
493 on the :class:`~werkzeug.datastructures.CacheControl`
494 object is changed.
495 :param cls: the class for the returned object. By default
496 :class:`~werkzeug.datastructures.RequestCacheControl` is used.
497 :return: a `cls` object.
498 """
499 if cls is None:
500 cls = RequestCacheControl
501 if not value:
502 return cls(None, on_update)
503 return cls(parse_dict_header(value), on_update)
504
505
506 def parse_set_header(value, on_update=None):
507 """Parse a set-like header and return a
508 :class:`~werkzeug.datastructures.HeaderSet` object:
509
510 >>> hs = parse_set_header('token, "quoted value"')
511
512 The return value is an object that treats the items case-insensitively
513 and keeps the order of the items:
514
515 >>> 'TOKEN' in hs
516 True
517 >>> hs.index('quoted value')
518 1
519 >>> hs
520 HeaderSet(['token', 'quoted value'])
521
522 To create a header from the :class:`HeaderSet` again, use the
523 :func:`dump_header` function.
524
525 :param value: a set header to be parsed.
526 :param on_update: an optional callable that is called every time a
527 value on the :class:`~werkzeug.datastructures.HeaderSet`
528 object is changed.
529 :return: a :class:`~werkzeug.datastructures.HeaderSet`
530 """
531 if not value:
532 return HeaderSet(None, on_update)
533 return HeaderSet(parse_list_header(value), on_update)
534
535
536 def parse_authorization_header(value):
537 """Parse an HTTP basic/digest authorization header transmitted by the web
538 browser. The return value is either `None` if the header was invalid or
539 not given, otherwise an :class:`~werkzeug.datastructures.Authorization`
540 object.
541
542 :param value: the authorization header to parse.
543 :return: a :class:`~werkzeug.datastructures.Authorization` object or `None`.
544 """
545 if not value:
546 return
547 value = wsgi_to_bytes(value)
548 try:
549 auth_type, auth_info = value.split(None, 1)
550 auth_type = auth_type.lower()
551 except ValueError:
552 return
553 if auth_type == b"basic":
554 try:
555 username, password = base64.b64decode(auth_info).split(b":", 1)
556 except Exception:
557 return
558 return Authorization(
559 "basic",
560 {
561 "username": to_unicode(username, _basic_auth_charset),
562 "password": to_unicode(password, _basic_auth_charset),
563 },
564 )
565 elif auth_type == b"digest":
566 auth_map = parse_dict_header(auth_info)
567 for key in "username", "realm", "nonce", "uri", "response":
568 if key not in auth_map:
569 return
570 if "qop" in auth_map:
571 if not auth_map.get("nc") or not auth_map.get("cnonce"):
572 return
573 return Authorization("digest", auth_map)
574
575
576 def parse_www_authenticate_header(value, on_update=None):
577 """Parse an HTTP WWW-Authenticate header into a
578 :class:`~werkzeug.datastructures.WWWAuthenticate` object.
579
580 :param value: a WWW-Authenticate header to parse.
581 :param on_update: an optional callable that is called every time a value
582 on the :class:`~werkzeug.datastructures.WWWAuthenticate`
583 object is changed.
584 :return: a :class:`~werkzeug.datastructures.WWWAuthenticate` object.
585 """
586 if not value:
587 return WWWAuthenticate(on_update=on_update)
588 try:
589 auth_type, auth_info = value.split(None, 1)
590 auth_type = auth_type.lower()
591 except (ValueError, AttributeError):
592 return WWWAuthenticate(value.strip().lower(), on_update=on_update)
593 return WWWAuthenticate(auth_type, parse_dict_header(auth_info), on_update)
594
595
596 def parse_if_range_header(value):
597 """Parses an if-range header which can be an etag or a date. Returns
598 a :class:`~werkzeug.datastructures.IfRange` object.
599
600 .. versionadded:: 0.7
601 """
602 if not value:
603 return IfRange()
604 date = parse_date(value)
605 if date is not None:
606 return IfRange(date=date)
607 # drop weakness information
608 return IfRange(unquote_etag(value)[0])
609
610
611 def parse_range_header(value, make_inclusive=True):
612 """Parses a range header into a :class:`~werkzeug.datastructures.Range`
613 object. If the header is missing or malformed `None` is returned.
614 `ranges` is a list of ``(start, stop)`` tuples where `stop` is
615 exclusive.
616
617 .. versionadded:: 0.7
618 """
619 if not value or "=" not in value:
620 return None
621
622 ranges = []
623 last_end = 0
624 units, rng = value.split("=", 1)
625 units = units.strip().lower()
626
627 for item in rng.split(","):
628 item = item.strip()
629 if "-" not in item:
630 return None
631 if item.startswith("-"):
632 if last_end < 0:
633 return None
634 try:
635 begin = int(item)
636 except ValueError:
637 return None
638 end = None
639 last_end = -1
640 elif "-" in item:
641 begin, end = item.split("-", 1)
642 begin = begin.strip()
643 end = end.strip()
644 if not begin.isdigit():
645 return None
646 begin = int(begin)
647 if begin < last_end or last_end < 0:
648 return None
649 if end:
650 if not end.isdigit():
651 return None
652 end = int(end) + 1
653 if begin >= end:
654 return None
655 else:
656 end = None
657 last_end = end
658 ranges.append((begin, end))
659
660 return Range(units, ranges)
661
662
663 def parse_content_range_header(value, on_update=None):
664 """Parses a range header into a
665 :class:`~werkzeug.datastructures.ContentRange` object or `None` if
666 parsing is not possible.
667
668 .. versionadded:: 0.7
669
670 :param value: a content range header to be parsed.
671 :param on_update: an optional callable that is called every time a value
672 on the :class:`~werkzeug.datastructures.ContentRange`
673 object is changed.
674 """
675 if value is None:
676 return None
677 try:
678 units, rangedef = (value or "").strip().split(None, 1)
679 except ValueError:
680 return None
681
682 if "/" not in rangedef:
683 return None
684 rng, length = rangedef.split("/", 1)
685 if length == "*":
686 length = None
687 elif length.isdigit():
688 length = int(length)
689 else:
690 return None
691
692 if rng == "*":
693 return ContentRange(units, None, None, length, on_update=on_update)
694 elif "-" not in rng:
695 return None
696
697 start, stop = rng.split("-", 1)
698 try:
699 start = int(start)
700 stop = int(stop) + 1
701 except ValueError:
702 return None
703
704 if is_byte_range_valid(start, stop, length):
705 return ContentRange(units, start, stop, length, on_update=on_update)
706
707
708 def quote_etag(etag, weak=False):
709 """Quote an etag.
710
711 :param etag: the etag to quote.
712 :param weak: set to `True` to tag it "weak".
713 """
714 if '"' in etag:
715 raise ValueError("invalid etag")
716 etag = '"%s"' % etag
717 if weak:
718 etag = "W/" + etag
719 return etag
720
721
722 def unquote_etag(etag):
723 """Unquote a single etag:
724
725 >>> unquote_etag('W/"bar"')
726 ('bar', True)
727 >>> unquote_etag('"bar"')
728 ('bar', False)
729
730 :param etag: the etag identifier to unquote.
731 :return: a ``(etag, weak)`` tuple.
732 """
733 if not etag:
734 return None, None
735 etag = etag.strip()
736 weak = False
737 if etag.startswith(("W/", "w/")):
738 weak = True
739 etag = etag[2:]
740 if etag[:1] == etag[-1:] == '"':
741 etag = etag[1:-1]
742 return etag, weak
743
744
745 def parse_etags(value):
746 """Parse an etag header.
747
748 :param value: the tag header to parse
749 :return: an :class:`~werkzeug.datastructures.ETags` object.
750 """
751 if not value:
752 return ETags()
753 strong = []
754 weak = []
755 end = len(value)
756 pos = 0
757 while pos < end:
758 match = _etag_re.match(value, pos)
759 if match is None:
760 break
761 is_weak, quoted, raw = match.groups()
762 if raw == "*":
763 return ETags(star_tag=True)
764 elif quoted:
765 raw = quoted
766 if is_weak:
767 weak.append(raw)
768 else:
769 strong.append(raw)
770 pos = match.end()
771 return ETags(strong, weak)
772
773
774 def generate_etag(data):
775 """Generate an etag for some data."""
776 return md5(data).hexdigest()
777
778
779 def parse_date(value):
780 """Parse one of the following date formats into a datetime object:
781
782 .. sourcecode:: text
783
784 Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
785 Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
786 Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
787
788 If parsing fails the return value is `None`.
789
790 :param value: a string with a supported date format.
791 :return: a :class:`datetime.datetime` object.
792 """
793 if value:
794 t = parsedate_tz(value.strip())
795 if t is not None:
796 try:
797 year = t[0]
798 # unfortunately that function does not tell us if two digit
799 # years were part of the string, or if they were prefixed
800 # with two zeroes. So what we do is assume that 69-99 refer
801 # to 1900-1999, and everything below to 2000-2068
802 if year >= 0 and year <= 68:
803 year += 2000
804 elif year >= 69 and year <= 99:
805 year += 1900
806 return datetime(*((year,) + t[1:7])) - timedelta(seconds=t[-1] or 0)
807 except (ValueError, OverflowError):
808 return None
809
810
811 def _dump_date(d, delim):
812 """Used for `http_date` and `cookie_date`."""
813 if d is None:
814 d = gmtime()
815 elif isinstance(d, datetime):
816 d = d.utctimetuple()
817 elif isinstance(d, (integer_types, float)):
818 d = gmtime(d)
819 return "%s, %02d%s%s%s%s %02d:%02d:%02d GMT" % (
820 ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun")[d.tm_wday],
821 d.tm_mday,
822 delim,
823 (
824 "Jan",
825 "Feb",
826 "Mar",
827 "Apr",
828 "May",
829 "Jun",
830 "Jul",
831 "Aug",
832 "Sep",
833 "Oct",
834 "Nov",
835 "Dec",
836 )[d.tm_mon - 1],
837 delim,
838 str(d.tm_year),
839 d.tm_hour,
840 d.tm_min,
841 d.tm_sec,
842 )
843
844
845 def cookie_date(expires=None):
846 """Formats the time to ensure compatibility with Netscape's cookie
847 standard.
848
849 Accepts a floating point number expressed in seconds since the epoch,
850 a datetime object or a timetuple. All times in UTC. The :func:`parse_date`
851 function can be used to parse such a date.
852
853 Outputs a string in the format ``Wdy, DD-Mon-YYYY HH:MM:SS GMT``.
854
855 :param expires: If provided that date is used, otherwise the current.
856 """
857 return _dump_date(expires, "-")
858
859
860 def http_date(timestamp=None):
861 """Formats the time to match the RFC1123 date format.
862
863 Accepts a floating point number expressed in seconds since the epoch,
864 a datetime object or a timetuple. All times in UTC. The :func:`parse_date`
865 function can be used to parse such a date.
866
867 Outputs a string in the format ``Wdy, DD Mon YYYY HH:MM:SS GMT``.
868
869 :param timestamp: If provided that date is used, otherwise the current.
870 """
871 return _dump_date(timestamp, " ")
872
873
874 def parse_age(value=None):
875 """Parses a base-10 integer count of seconds into a timedelta.
876
877 If parsing fails, the return value is `None`.
878
879 :param value: a string consisting of an integer represented in base-10
880 :return: a :class:`datetime.timedelta` object or `None`.
881 """
882 if not value:
883 return None
884 try:
885 seconds = int(value)
886 except ValueError:
887 return None
888 if seconds < 0:
889 return None
890 try:
891 return timedelta(seconds=seconds)
892 except OverflowError:
893 return None
894
895
896 def dump_age(age=None):
897 """Formats the duration as a base-10 integer.
898
899 :param age: should be an integer number of seconds,
900 a :class:`datetime.timedelta` object, or,
901 if the age is unknown, `None` (default).
902 """
903 if age is None:
904 return
905 if isinstance(age, timedelta):
906 # do the equivalent of Python 2.7's timedelta.total_seconds(),
907 # but disregarding fractional seconds
908 age = age.seconds + (age.days * 24 * 3600)
909
910 age = int(age)
911 if age < 0:
912 raise ValueError("age cannot be negative")
913
914 return str(age)
915
916
917 def is_resource_modified(
918 environ, etag=None, data=None, last_modified=None, ignore_if_range=True
919 ):
920 """Convenience method for conditional requests.
921
922 :param environ: the WSGI environment of the request to be checked.
923 :param etag: the etag for the response for comparison.
924 :param data: or alternatively the data of the response to automatically
925 generate an etag using :func:`generate_etag`.
926 :param last_modified: an optional date of the last modification.
927 :param ignore_if_range: If `False`, `If-Range` header will be taken into
928 account.
929 :return: `True` if the resource was modified, otherwise `False`.
930 """
931 if etag is None and data is not None:
932 etag = generate_etag(data)
933 elif data is not None:
934 raise TypeError("both data and etag given")
935 if environ["REQUEST_METHOD"] not in ("GET", "HEAD"):
936 return False
937
938 unmodified = False
939 if isinstance(last_modified, string_types):
940 last_modified = parse_date(last_modified)
941
942 # ensure that microsecond is zero because the HTTP spec does not transmit
943 # that either and we might have some false positives. See issue #39
944 if last_modified is not None:
945 last_modified = last_modified.replace(microsecond=0)
946
947 if_range = None
948 if not ignore_if_range and "HTTP_RANGE" in environ:
949 # https://tools.ietf.org/html/rfc7233#section-3.2
950 # A server MUST ignore an If-Range header field received in a request
951 # that does not contain a Range header field.
952 if_range = parse_if_range_header(environ.get("HTTP_IF_RANGE"))
953
954 if if_range is not None and if_range.date is not None:
955 modified_since = if_range.date
956 else:
957 modified_since = parse_date(environ.get("HTTP_IF_MODIFIED_SINCE"))
958
959 if modified_since and last_modified and last_modified <= modified_since:
960 unmodified = True
961
962 if etag:
963 etag, _ = unquote_etag(etag)
964 if if_range is not None and if_range.etag is not None:
965 unmodified = parse_etags(if_range.etag).contains(etag)
966 else:
967 if_none_match = parse_etags(environ.get("HTTP_IF_NONE_MATCH"))
968 if if_none_match:
969 # https://tools.ietf.org/html/rfc7232#section-3.2
970 # "A recipient MUST use the weak comparison function when comparing
971 # entity-tags for If-None-Match"
972 unmodified = if_none_match.contains_weak(etag)
973
974 # https://tools.ietf.org/html/rfc7232#section-3.1
975 # "Origin server MUST use the strong comparison function when
976 # comparing entity-tags for If-Match"
977 if_match = parse_etags(environ.get("HTTP_IF_MATCH"))
978 if if_match:
979 unmodified = not if_match.is_strong(etag)
980
981 return not unmodified
982
983
984 def remove_entity_headers(headers, allowed=("expires", "content-location")):
985 """Remove all entity headers from a list or :class:`Headers` object. This
986 operation works in-place. `Expires` and `Content-Location` headers are
987 by default not removed. The reason for this is :rfc:`2616` section
988 10.3.5 which specifies some entity headers that should be sent.
989
990 .. versionchanged:: 0.5
991 added `allowed` parameter.
992
993 :param headers: a list or :class:`Headers` object.
994 :param allowed: a list of headers that should still be allowed even though
995 they are entity headers.
996 """
997 allowed = set(x.lower() for x in allowed)
998 headers[:] = [
999 (key, value)
1000 for key, value in headers
1001 if not is_entity_header(key) or key.lower() in allowed
1002 ]
1003
1004
1005 def remove_hop_by_hop_headers(headers):
1006 """Remove all HTTP/1.1 "Hop-by-Hop" headers from a list or
1007 :class:`Headers` object. This operation works in-place.
1008
1009 .. versionadded:: 0.5
1010
1011 :param headers: a list or :class:`Headers` object.
1012 """
1013 headers[:] = [
1014 (key, value) for key, value in headers if not is_hop_by_hop_header(key)
1015 ]
1016
1017
1018 def is_entity_header(header):
1019 """Check if a header is an entity header.
1020
1021 .. versionadded:: 0.5
1022
1023 :param header: the header to test.
1024 :return: `True` if it's an entity header, `False` otherwise.
1025 """
1026 return header.lower() in _entity_headers
1027
1028
1029 def is_hop_by_hop_header(header):
1030 """Check if a header is an HTTP/1.1 "Hop-by-Hop" header.
1031
1032 .. versionadded:: 0.5
1033
1034 :param header: the header to test.
1035 :return: `True` if it's an HTTP/1.1 "Hop-by-Hop" header, `False` otherwise.
1036 """
1037 return header.lower() in _hop_by_hop_headers
1038
1039
1040 def parse_cookie(header, charset="utf-8", errors="replace", cls=None):
1041 """Parse a cookie. Either from a string or WSGI environ.
1042
1043 By default encoding errors are replaced. If you want a different
1044 behavior you can set `errors` to ``'ignore'`` or ``'strict'``. In
1045 strict mode a :exc:`UnicodeDecodeError` is raised.
1046
1047 .. versionchanged:: 0.5
1048 This function now returns a :class:`TypeConversionDict` instead of a
1049 regular dict. The `cls` parameter was added.
1050
1051 :param header: the header to be used to parse the cookie. Alternatively
1052 this can be a WSGI environment.
1053 :param charset: the charset for the cookie values.
1054 :param errors: the error behavior for the charset decoding.
1055 :param cls: an optional dict class to use. If this is not specified
1056 or `None` the default :class:`TypeConversionDict` is
1057 used.
1058 """
1059 if isinstance(header, dict):
1060 header = header.get("HTTP_COOKIE", "")
1061 elif header is None:
1062 header = ""
1063
1064 # If the value is a unicode string it's mangled through latin1. This is
1065 # done because under PEP 3333 on Python 3 all headers are assumed latin1,
1066 # which is incorrect for cookies, which are sent in the page encoding. As
1067 # a result we encode back to latin1 bytes and decode with `charset` below.
1068 if isinstance(header, text_type):
1069 header = header.encode("latin1", "replace")
1070
1071 if cls is None:
1072 cls = TypeConversionDict
1073
1074 def _parse_pairs():
1075 for key, val in _cookie_parse_impl(header):
1076 key = to_unicode(key, charset, errors, allow_none_charset=True)
1077 if not key:
1078 continue
1079 val = to_unicode(val, charset, errors, allow_none_charset=True)
1080 yield try_coerce_native(key), val
1081
1082 return cls(_parse_pairs())
1083
1084
1085 def dump_cookie(
1086 key,
1087 value="",
1088 max_age=None,
1089 expires=None,
1090 path="/",
1091 domain=None,
1092 secure=False,
1093 httponly=False,
1094 charset="utf-8",
1095 sync_expires=True,
1096 max_size=4093,
1097 samesite=None,
1098 ):
1099 """Creates a new Set-Cookie header without the ``Set-Cookie`` prefix
1100 The parameters are the same as in the cookie Morsel object in the
1101 Python standard library but it accepts unicode data, too.
1102
1103 On Python 3 the return value of this function will be a unicode
1104 string, on Python 2 it will be a native string. In both cases the
1105 return value is usually restricted to ascii as the vast majority of
1106 values are properly escaped, but that is no guarantee. If a unicode
1107 string is returned it's tunneled through latin1 as required by
1108 PEP 3333.
1109
1110 The return value is not ASCII safe if the key contains unicode
1111 characters. This is technically against the specification but
1112 happens in the wild. It's strongly recommended not to use
1113 non-ASCII values for the keys.
1114
1115 :param max_age: should be a number of seconds, or `None` (default) if
1116 the cookie should last only as long as the client's
1117 browser session. Additionally `timedelta` objects
1118 are accepted, too.
1119 :param expires: should be a `datetime` object or unix timestamp.
1120 :param path: limits the cookie to a given path, per default it will
1121 span the whole domain.
1122 :param domain: Use this if you want to set a cross-domain cookie. For
1123 example, ``domain=".example.com"`` will set a cookie
1124 that is readable by the domain ``www.example.com``,
1125 ``foo.example.com`` etc. Otherwise, a cookie will only
1126 be readable by the domain that set it.
1127 :param secure: The cookie will only be available via HTTPS
1128 :param httponly: disallow JavaScript to access the cookie. This is an
1129 extension to the cookie standard and probably not
1130 supported by all browsers.
1131 :param charset: the encoding for unicode values.
1132 :param sync_expires: automatically set expires if max_age is defined
1133 but expires is not.
1134 :param max_size: Warn if the final header value exceeds this size. The
1135 default, 4093, should be safely `supported by most browsers
1136 <cookie_>`_. Set to 0 to disable this check.
1137 :param samesite: Limits the scope of the cookie such that it will only
1138 be attached to requests if those requests are "same-site".
1139
1140 .. _`cookie`: http://browsercookielimits.squawky.net/
1141 """
1142 key = to_bytes(key, charset)
1143 value = to_bytes(value, charset)
1144
1145 if path is not None:
1146 path = iri_to_uri(path, charset)
1147 domain = _make_cookie_domain(domain)
1148 if isinstance(max_age, timedelta):
1149 max_age = (max_age.days * 60 * 60 * 24) + max_age.seconds
1150 if expires is not None:
1151 if not isinstance(expires, string_types):
1152 expires = cookie_date(expires)
1153 elif max_age is not None and sync_expires:
1154 expires = to_bytes(cookie_date(time() + max_age))
1155
1156 samesite = samesite.title() if samesite else None
1157 if samesite not in ("Strict", "Lax", None):
1158 raise ValueError("invalid SameSite value; must be 'Strict', 'Lax' or None")
1159
1160 buf = [key + b"=" + _cookie_quote(value)]
1161
1162 # XXX: In theory all of these parameters that are not marked with `None`
1163 # should be quoted. Because stdlib did not quote it before I did not
1164 # want to introduce quoting there now.
1165 for k, v, q in (
1166 (b"Domain", domain, True),
1167 (b"Expires", expires, False),
1168 (b"Max-Age", max_age, False),
1169 (b"Secure", secure, None),
1170 (b"HttpOnly", httponly, None),
1171 (b"Path", path, False),
1172 (b"SameSite", samesite, False),
1173 ):
1174 if q is None:
1175 if v:
1176 buf.append(k)
1177 continue
1178
1179 if v is None:
1180 continue
1181
1182 tmp = bytearray(k)
1183 if not isinstance(v, (bytes, bytearray)):
1184 v = to_bytes(text_type(v), charset)
1185 if q:
1186 v = _cookie_quote(v)
1187 tmp += b"=" + v
1188 buf.append(bytes(tmp))
1189
1190 # The return value will be an incorrectly encoded latin1 header on
1191 # Python 3 for consistency with the headers object and a bytestring
1192 # on Python 2 because that's how the API makes more sense.
1193 rv = b"; ".join(buf)
1194 if not PY2:
1195 rv = rv.decode("latin1")
1196
1197 # Warn if the final size of the cookie exceeds the limit. If the
1198 # cookie is too large, then it may be silently ignored by the browser,
1199 # which can be quite hard to debug.
1200 cookie_size = len(rv)
1201
1202 if max_size and cookie_size > max_size:
1203 value_size = len(value)
1204 warnings.warn(
1205 'The "{key}" cookie is too large: the value was {value_size} bytes'
1206 " but the header required {extra_size} extra bytes. The final size"
1207 " was {cookie_size} bytes but the limit is {max_size} bytes."
1208 " Browsers may silently ignore cookies larger than this.".format(
1209 key=key,
1210 value_size=value_size,
1211 extra_size=cookie_size - value_size,
1212 cookie_size=cookie_size,
1213 max_size=max_size,
1214 ),
1215 stacklevel=2,
1216 )
1217
1218 return rv
1219
1220
1221 def is_byte_range_valid(start, stop, length):
1222 """Checks if a given byte content range is valid for the given length.
1223
1224 .. versionadded:: 0.7
1225 """
1226 if (start is None) != (stop is None):
1227 return False
1228 elif start is None:
1229 return length is None or length >= 0
1230 elif length is None:
1231 return 0 <= start < stop
1232 elif start >= stop:
1233 return False
1234 return 0 <= start < length
1235
1236
1237 # circular dependency fun
1238 from .datastructures import Accept
1239 from .datastructures import Authorization
1240 from .datastructures import ContentRange
1241 from .datastructures import ETags
1242 from .datastructures import HeaderSet
1243 from .datastructures import IfRange
1244 from .datastructures import Range
1245 from .datastructures import RequestCacheControl
1246 from .datastructures import TypeConversionDict
1247 from .datastructures import WWWAuthenticate
1248 from .urls import iri_to_uri
1249
1250 # DEPRECATED
1251 from .datastructures import CharsetAccept as _CharsetAccept
1252 from .datastructures import Headers as _Headers
1253 from .datastructures import LanguageAccept as _LanguageAccept
1254 from .datastructures import MIMEAccept as _MIMEAccept
1255
1256
1257 class MIMEAccept(_MIMEAccept):
1258 def __init__(self, *args, **kwargs):
1259 warnings.warn(
1260 "'werkzeug.http.MIMEAccept' has moved to 'werkzeug"
1261 ".datastructures.MIMEAccept' as of version 0.5. This old"
1262 " import will be removed in version 1.0.",
1263 DeprecationWarning,
1264 stacklevel=2,
1265 )
1266 super(MIMEAccept, self).__init__(*args, **kwargs)
1267
1268
1269 class CharsetAccept(_CharsetAccept):
1270 def __init__(self, *args, **kwargs):
1271 warnings.warn(
1272 "'werkzeug.http.CharsetAccept' has moved to 'werkzeug"
1273 ".datastructures.CharsetAccept' as of version 0.5. This old"
1274 " import will be removed in version 1.0.",
1275 DeprecationWarning,
1276 stacklevel=2,
1277 )
1278 super(CharsetAccept, self).__init__(*args, **kwargs)
1279
1280
1281 class LanguageAccept(_LanguageAccept):
1282 def __init__(self, *args, **kwargs):
1283 warnings.warn(
1284 "'werkzeug.http.LanguageAccept' has moved to 'werkzeug"
1285 ".datastructures.LanguageAccept' as of version 0.5. This"
1286 " old import will be removed in version 1.0.",
1287 DeprecationWarning,
1288 stacklevel=2,
1289 )
1290 super(LanguageAccept, self).__init__(*args, **kwargs)
1291
1292
1293 class Headers(_Headers):
1294 def __init__(self, *args, **kwargs):
1295 warnings.warn(
1296 "'werkzeug.http.Headers' has moved to 'werkzeug"
1297 ".datastructures.Headers' as of version 0.5. This old"
1298 " import will be removed in version 1.0.",
1299 DeprecationWarning,
1300 stacklevel=2,
1301 )
1302 super(Headers, self).__init__(*args, **kwargs)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.local
3 ~~~~~~~~~~~~~~
4
5 This module implements context-local objects.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import copy
11 from functools import update_wrapper
12
13 from ._compat import implements_bool
14 from ._compat import PY2
15 from .wsgi import ClosingIterator
16
17 # Since each thread has its own greenlet we can use those as
18 # identifiers for the context. If greenlets are not available we
19 # fall back to the current thread ident.
20 try:
21 from greenlet import getcurrent as get_ident
22 except ImportError:
23 try:
24 from thread import get_ident
25 except ImportError:
26 from _thread import get_ident
27
28
29 def release_local(local):
30 """Releases the contents of the local for the current context.
31 This makes it possible to use locals without a manager.
32
33 Example::
34
35 >>> loc = Local()
36 >>> loc.foo = 42
37 >>> release_local(loc)
38 >>> hasattr(loc, 'foo')
39 False
40
41 With this function one can release :class:`Local` objects as well
42 as :class:`LocalStack` objects. However it is not possible to
43 release data held by proxies that way, one always has to retain
44 a reference to the underlying local object in order to be able
45 to release it.
46
47 .. versionadded:: 0.6.1
48 """
49 local.__release_local__()
50
51
52 class Local(object):
53 __slots__ = ("__storage__", "__ident_func__")
54
55 def __init__(self):
56 object.__setattr__(self, "__storage__", {})
57 object.__setattr__(self, "__ident_func__", get_ident)
58
59 def __iter__(self):
60 return iter(self.__storage__.items())
61
62 def __call__(self, proxy):
63 """Create a proxy for a name."""
64 return LocalProxy(self, proxy)
65
66 def __release_local__(self):
67 self.__storage__.pop(self.__ident_func__(), None)
68
69 def __getattr__(self, name):
70 try:
71 return self.__storage__[self.__ident_func__()][name]
72 except KeyError:
73 raise AttributeError(name)
74
75 def __setattr__(self, name, value):
76 ident = self.__ident_func__()
77 storage = self.__storage__
78 try:
79 storage[ident][name] = value
80 except KeyError:
81 storage[ident] = {name: value}
82
83 def __delattr__(self, name):
84 try:
85 del self.__storage__[self.__ident_func__()][name]
86 except KeyError:
87 raise AttributeError(name)
88
89
90 class LocalStack(object):
    """This class works similarly to a :class:`Local` but keeps a stack
    of objects instead. This is best explained with an example::

        >>> ls = LocalStack()
        >>> ls.push(42)
        >>> ls.top
        42
        >>> ls.push(23)
        >>> ls.top
        23
        >>> ls.pop()
        23
        >>> ls.top
        42

    They can be force-released by using a :class:`LocalManager` or with
    the :func:`release_local` function, but the correct way is to pop the
    item from the stack after use. When the stack is empty it will
    no longer be bound to the current context (and as such released).

    By calling the stack without arguments it returns a proxy that resolves to
    the topmost item on the stack.

    .. versionadded:: 0.6.1
    """

    def __init__(self):
        self._local = Local()

    def __release_local__(self):
        self._local.__release_local__()

    def _get__ident_func__(self):
        return self._local.__ident_func__

    def _set__ident_func__(self, value):
        object.__setattr__(self._local, "__ident_func__", value)

    __ident_func__ = property(_get__ident_func__, _set__ident_func__)
    del _get__ident_func__, _set__ident_func__

    def __call__(self):
        def _lookup():
            rv = self.top
            if rv is None:
                raise RuntimeError("object unbound")
            return rv

        return LocalProxy(_lookup)

    def push(self, obj):
        """Pushes a new item to the stack."""
        rv = getattr(self._local, "stack", None)
        if rv is None:
            self._local.stack = rv = []
        rv.append(obj)
        return rv

    def pop(self):
        """Removes the topmost item from the stack, will return the
        old value or `None` if the stack was already empty.
        """
        stack = getattr(self._local, "stack", None)
        if stack is None:
            return None
        elif len(stack) == 1:
            release_local(self._local)
            return stack[-1]
        else:
            return stack.pop()

    @property
    def top(self):
        """The topmost item on the stack. If the stack is empty,
        `None` is returned.
        """
        try:
            return self._local.stack[-1]
        except (AttributeError, IndexError):
            return None


class LocalManager(object):
    """Local objects cannot manage themselves. For that you need a local
    manager. You can pass a local manager multiple locals or add them later
    by appending them to `manager.locals`. Every time the manager cleans up,
    it will clean up all the data left in the locals for this context.

    The `ident_func` parameter can be added to override the default ident
    function for the wrapped locals.

    .. versionchanged:: 0.6.1
        Instead of a manager the :func:`release_local` function can be used
        as well.

    .. versionchanged:: 0.7
        `ident_func` was added.
    """

    def __init__(self, locals=None, ident_func=None):
        if locals is None:
            self.locals = []
        elif isinstance(locals, Local):
            self.locals = [locals]
        else:
            self.locals = list(locals)
        if ident_func is not None:
            self.ident_func = ident_func
            for local in self.locals:
                object.__setattr__(local, "__ident_func__", ident_func)
        else:
            self.ident_func = get_ident

    def get_ident(self):
        """Return the context identifier the local objects use internally for
        this context. You cannot override this method to change the behavior
        but use it to link other context local objects (such as SQLAlchemy's
        scoped sessions) to the Werkzeug locals.

        .. versionchanged:: 0.7
            You can pass a different ident function to the local manager that
            will then be propagated to all the locals passed to the
            constructor.
        """
        return self.ident_func()

    def cleanup(self):
        """Manually clean up the data in the locals for this context. Call
        this at the end of the request or use `make_middleware()`.
        """
        for local in self.locals:
            release_local(local)

    def make_middleware(self, app):
        """Wrap a WSGI application so that cleaning up happens after
        request end.
        """

        def application(environ, start_response):
            return ClosingIterator(app(environ, start_response), self.cleanup)

        return application

    def middleware(self, func):
        """Like `make_middleware` but for decorating functions.

        Example usage::

            @manager.middleware
            def application(environ, start_response):
                ...

        The difference from `make_middleware` is that the function passed
        will have all the arguments copied from the inner application
        (name, docstring, module).
        """
        return update_wrapper(self.make_middleware(func), func)

    def __repr__(self):
        return "<%s storages: %d>" % (self.__class__.__name__, len(self.locals))


@implements_bool
class LocalProxy(object):
    """Acts as a proxy for a werkzeug local. Forwards all operations to
    a proxied object. The only operations not supported for forwarding
    are right handed operands and any kind of assignment.

    Example usage::

        from werkzeug.local import Local
        l = Local()

        # these are proxies
        request = l('request')
        user = l('user')


        from werkzeug.local import LocalStack
        _response_local = LocalStack()

        # this is a proxy
        response = _response_local()

    Whenever something is bound to l.user / l.request the proxy objects
    will forward all operations. If no object is bound a :exc:`RuntimeError`
    will be raised.

    To create proxies to :class:`Local` or :class:`LocalStack` objects,
    call the object as shown above. If you want to have a proxy to an
    object looked up by a function, you can (as of Werkzeug 0.6.1) pass
    a function to the :class:`LocalProxy` constructor::

        session = LocalProxy(lambda: get_current_request().session)

    .. versionchanged:: 0.6.1
        The class can be instantiated with a callable as well now.
    """

    __slots__ = ("__local", "__dict__", "__name__", "__wrapped__")

    def __init__(self, local, name=None):
        object.__setattr__(self, "_LocalProxy__local", local)
        object.__setattr__(self, "__name__", name)
        if callable(local) and not hasattr(local, "__release_local__"):
            # "local" is a callable that is not an instance of Local or
            # LocalStack: mark it as a wrapped function.
            object.__setattr__(self, "__wrapped__", local)

    def _get_current_object(self):
        """Return the current object. This is useful if you want the real
        object behind the proxy for performance reasons, or because you
        want to pass the object into a different context.
        """
        if not hasattr(self.__local, "__release_local__"):
            return self.__local()
        try:
            return getattr(self.__local, self.__name__)
        except AttributeError:
            raise RuntimeError("no object bound to %s" % self.__name__)

    @property
    def __dict__(self):
        try:
            return self._get_current_object().__dict__
        except RuntimeError:
            raise AttributeError("__dict__")

    def __repr__(self):
        try:
            obj = self._get_current_object()
        except RuntimeError:
            return "<%s unbound>" % self.__class__.__name__
        return repr(obj)

    def __bool__(self):
        try:
            return bool(self._get_current_object())
        except RuntimeError:
            return False

    def __unicode__(self):
        try:
            return unicode(self._get_current_object())  # noqa
        except RuntimeError:
            return repr(self)

    def __dir__(self):
        try:
            return dir(self._get_current_object())
        except RuntimeError:
            return []

    def __getattr__(self, name):
        if name == "__members__":
            return dir(self._get_current_object())
        return getattr(self._get_current_object(), name)

    def __setitem__(self, key, value):
        self._get_current_object()[key] = value

    def __delitem__(self, key):
        del self._get_current_object()[key]

    if PY2:
        __getslice__ = lambda x, i, j: x._get_current_object()[i:j]

        def __setslice__(self, i, j, seq):
            self._get_current_object()[i:j] = seq

        def __delslice__(self, i, j):
            del self._get_current_object()[i:j]

    __setattr__ = lambda x, n, v: setattr(x._get_current_object(), n, v)
    __delattr__ = lambda x, n: delattr(x._get_current_object(), n)
    __str__ = lambda x: str(x._get_current_object())
    __lt__ = lambda x, o: x._get_current_object() < o
    __le__ = lambda x, o: x._get_current_object() <= o
    __eq__ = lambda x, o: x._get_current_object() == o
    __ne__ = lambda x, o: x._get_current_object() != o
    __gt__ = lambda x, o: x._get_current_object() > o
    __ge__ = lambda x, o: x._get_current_object() >= o
    __cmp__ = lambda x, o: cmp(x._get_current_object(), o)  # noqa
    __hash__ = lambda x: hash(x._get_current_object())
    __call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
    __len__ = lambda x: len(x._get_current_object())
    __getitem__ = lambda x, i: x._get_current_object()[i]
    __iter__ = lambda x: iter(x._get_current_object())
    __contains__ = lambda x, i: i in x._get_current_object()
    __add__ = lambda x, o: x._get_current_object() + o
    __sub__ = lambda x, o: x._get_current_object() - o
    __mul__ = lambda x, o: x._get_current_object() * o
    __floordiv__ = lambda x, o: x._get_current_object() // o
    __mod__ = lambda x, o: x._get_current_object() % o
    __divmod__ = lambda x, o: x._get_current_object().__divmod__(o)
    __pow__ = lambda x, o: x._get_current_object() ** o
    __lshift__ = lambda x, o: x._get_current_object() << o
    __rshift__ = lambda x, o: x._get_current_object() >> o
    __and__ = lambda x, o: x._get_current_object() & o
    __xor__ = lambda x, o: x._get_current_object() ^ o
    __or__ = lambda x, o: x._get_current_object() | o
    __div__ = lambda x, o: x._get_current_object().__div__(o)
    __truediv__ = lambda x, o: x._get_current_object().__truediv__(o)
    __neg__ = lambda x: -(x._get_current_object())
    __pos__ = lambda x: +(x._get_current_object())
    __abs__ = lambda x: abs(x._get_current_object())
    __invert__ = lambda x: ~(x._get_current_object())
    __complex__ = lambda x: complex(x._get_current_object())
    __int__ = lambda x: int(x._get_current_object())
    __long__ = lambda x: long(x._get_current_object())  # noqa
    __float__ = lambda x: float(x._get_current_object())
    __oct__ = lambda x: oct(x._get_current_object())
    __hex__ = lambda x: hex(x._get_current_object())
    __index__ = lambda x: x._get_current_object().__index__()
    __coerce__ = lambda x, o: x._get_current_object().__coerce__(x, o)
    __enter__ = lambda x: x._get_current_object().__enter__()
    __exit__ = lambda x, *a, **kw: x._get_current_object().__exit__(*a, **kw)
    __radd__ = lambda x, o: o + x._get_current_object()
    __rsub__ = lambda x, o: o - x._get_current_object()
    __rmul__ = lambda x, o: o * x._get_current_object()
    __rdiv__ = lambda x, o: o / x._get_current_object()
    if PY2:
        __rtruediv__ = lambda x, o: x._get_current_object().__rtruediv__(o)
    else:
        __rtruediv__ = __rdiv__
    __rfloordiv__ = lambda x, o: o // x._get_current_object()
    __rmod__ = lambda x, o: o % x._get_current_object()
    __rdivmod__ = lambda x, o: x._get_current_object().__rdivmod__(o)
    __copy__ = lambda x: copy.copy(x._get_current_object())
    __deepcopy__ = lambda x, memo: copy.deepcopy(x._get_current_object(), memo)
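The per-context storage pattern used by ``Local`` above (a dict keyed by the current thread or greenlet ident) can be exercised standalone. This is a minimal, illustrative stand-in using only the stdlib, not Werkzeug's API; the ``MiniLocal`` name and the test harness are inventions for demonstration:

```python
import threading
from _thread import get_ident


class MiniLocal(object):
    """Minimal sketch of werkzeug.local.Local: attribute storage
    partitioned per thread ident."""

    def __init__(self):
        object.__setattr__(self, "_storage", {})

    def __getattr__(self, name):
        # Only called when normal lookup fails, i.e. for user attributes.
        try:
            return self._storage[get_ident()][name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):
        self._storage.setdefault(get_ident(), {})[name] = value


loc = MiniLocal()
loc.user = "alice"
seen = {}


def worker():
    # A different thread sees an empty namespace and gets its own value.
    seen["has_user"] = hasattr(loc, "user")
    loc.user = "bob"
    seen["worker_user"] = loc.user


t = threading.Thread(target=worker)
t.start()
t.join()

assert loc.user == "alice"          # main thread's value is untouched
assert seen["has_user"] is False    # worker started with no 'user'
assert seen["worker_user"] == "bob"
```

The real ``Local`` adds a pluggable ``__ident_func__`` (so greenlets can be used as contexts) and ``__release_local__`` for cleanup, but the isolation mechanism is the same.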
"""
Middleware
==========

A WSGI middleware is a WSGI application that wraps another application
in order to observe or change its behavior. Werkzeug provides some
middleware for common use cases.

.. toctree::
    :maxdepth: 1

    proxy_fix
    shared_data
    dispatcher
    http_proxy
    lint
    profiler

The :doc:`interactive debugger </debug>` is also a middleware that can
be applied manually, although it is typically used automatically with
the :doc:`development server </serving>`.

:copyright: 2007 Pallets
:license: BSD-3-Clause
"""
"""
Application Dispatcher
======================

This middleware creates a single WSGI application that dispatches to
multiple other WSGI applications mounted at different URL paths.

A common example is writing a Single Page Application, where you have a
backend API and a frontend written in JavaScript that does the routing
in the browser rather than requesting different pages from the server.
The frontend is a single HTML and JS file that should be served for any
path besides "/api".

This example dispatches to an API app under "/api", an admin app
under "/admin", and an app that serves frontend files for all other
requests::

    app = DispatcherMiddleware(serve_frontend, {
        '/api': api_app,
        '/admin': admin_app,
    })

In production, you might instead handle this at the HTTP server level,
serving files or proxying to application servers based on location. The
API and admin apps would each be deployed with a separate WSGI server,
and the static files would be served directly by the HTTP server.

.. autoclass:: DispatcherMiddleware

:copyright: 2007 Pallets
:license: BSD-3-Clause
"""


class DispatcherMiddleware(object):
    """Combine multiple applications as a single WSGI application.
    Requests are dispatched to an application based on the path it is
    mounted under.

    :param app: The WSGI application to dispatch to if the request
        doesn't match a mounted path.
    :param mounts: Maps path prefixes to applications for dispatching.
    """

    def __init__(self, app, mounts=None):
        self.app = app
        self.mounts = mounts or {}

    def __call__(self, environ, start_response):
        script = environ.get("PATH_INFO", "")
        path_info = ""

        while "/" in script:
            if script in self.mounts:
                app = self.mounts[script]
                break

            script, last_item = script.rsplit("/", 1)
            path_info = "/%s%s" % (last_item, path_info)
        else:
            app = self.mounts.get(script, self.app)

        original_script_name = environ.get("SCRIPT_NAME", "")
        environ["SCRIPT_NAME"] = original_script_name + script
        environ["PATH_INFO"] = path_info
        return app(environ, start_response)
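The prefix-matching loop in ``__call__`` above walks the path from the longest candidate prefix down to ``""``, moving one segment at a time from ``SCRIPT_NAME`` to ``PATH_INFO``. A standalone sketch of just that loop (the ``split_mount`` helper and the string "apps" in the mount table are illustrative, not part of Werkzeug):

```python
def split_mount(path, mounts):
    """Return (script, path_info, app) for the longest matching prefix,
    mirroring DispatcherMiddleware's dispatch loop."""
    script, path_info = path, ""
    while "/" in script:
        if script in mounts:
            return script, path_info, mounts[script]
        # Shift the last path segment from the script onto path_info.
        script, last = script.rsplit("/", 1)
        path_info = "/%s%s" % (last, path_info)
    return script, path_info, mounts.get(script, "default")


mounts = {"/api": "api_app", "/api/v2": "v2_app"}
assert split_mount("/api/users/3", mounts) == ("/api", "/users/3", "api_app")
assert split_mount("/api/v2/users", mounts) == ("/api/v2", "/users", "v2_app")
assert split_mount("/other/page", mounts) == ("", "/other/page", "default")
```

Note that longer prefixes win automatically because the loop starts from the full path and shortens it, so ``/api/v2`` is checked before ``/api``.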
"""
Basic HTTP Proxy
================

.. autoclass:: ProxyMiddleware

:copyright: 2007 Pallets
:license: BSD-3-Clause
"""
import socket

from ..datastructures import EnvironHeaders
from ..http import is_hop_by_hop_header
from ..urls import url_parse
from ..urls import url_quote
from ..wsgi import get_input_stream

try:
    from http import client
except ImportError:
    import httplib as client


class ProxyMiddleware(object):
    """Proxy requests under a path to an external server, routing other
    requests to the app.

    This middleware can only proxy HTTP requests, as that is the only
    protocol handled by the WSGI server. Other protocols, such as
    WebSocket requests, cannot be proxied at this layer. This should
    only be used for development; in production, a real proxy server
    should be used.

    The middleware takes a dict that maps a path prefix to a dict
    describing the host to be proxied to::

        app = ProxyMiddleware(app, {
            "/static/": {
                "target": "http://127.0.0.1:5001/",
            }
        })

    Each host has the following options:

    ``target``:
        The target URL to dispatch to. This is required.
    ``remove_prefix``:
        Whether to remove the prefix from the URL before dispatching it
        to the target. The default is ``False``.
    ``host``:
        ``"<auto>"`` (default):
            The host header is automatically rewritten to the URL of the
            target.
        ``None``:
            The host header is unmodified from the client request.
        Any other value:
            The host header is overwritten with the value.
    ``headers``:
        A dictionary of headers to be sent with the request to the
        target. The default is ``{}``.
    ``ssl_context``:
        A :class:`ssl.SSLContext` defining how to verify requests if the
        target is HTTPS. The default is ``None``.

    In the example above, everything under ``"/static/"`` is proxied to
    the server on port 5001. The host header is rewritten to the target,
    and the ``"/static/"`` prefix is removed from the URLs.

    :param app: The WSGI application to wrap.
    :param targets: Proxy target configurations. See description above.
    :param chunk_size: Size of chunks to read from input stream and
        write to target.
    :param timeout: Seconds before an operation to a target fails.

    .. versionadded:: 0.14
    """

    def __init__(self, app, targets, chunk_size=2 << 13, timeout=10):
        def _set_defaults(opts):
            opts.setdefault("remove_prefix", False)
            opts.setdefault("host", "<auto>")
            opts.setdefault("headers", {})
            opts.setdefault("ssl_context", None)
            return opts

        self.app = app
        self.targets = dict(
            ("/%s/" % k.strip("/"), _set_defaults(v)) for k, v in targets.items()
        )
        self.chunk_size = chunk_size
        self.timeout = timeout

    def proxy_to(self, opts, path, prefix):
        target = url_parse(opts["target"])

        def application(environ, start_response):
            headers = list(EnvironHeaders(environ).items())
            headers[:] = [
                (k, v)
                for k, v in headers
                if not is_hop_by_hop_header(k)
                and k.lower() not in ("content-length", "host")
            ]
            headers.append(("Connection", "close"))

            if opts["host"] == "<auto>":
                headers.append(("Host", target.ascii_host))
            elif opts["host"] is None:
                headers.append(("Host", environ["HTTP_HOST"]))
            else:
                headers.append(("Host", opts["host"]))

            headers.extend(opts["headers"].items())
            remote_path = path

            if opts["remove_prefix"]:
                remote_path = "%s/%s" % (
                    target.path.rstrip("/"),
                    remote_path[len(prefix) :].lstrip("/"),
                )

            content_length = environ.get("CONTENT_LENGTH")
            chunked = False

            if content_length not in ("", None):
                headers.append(("Content-Length", content_length))
            elif content_length is not None:
                headers.append(("Transfer-Encoding", "chunked"))
                chunked = True

            try:
                if target.scheme == "http":
                    con = client.HTTPConnection(
                        target.ascii_host, target.port or 80, timeout=self.timeout
                    )
                elif target.scheme == "https":
                    con = client.HTTPSConnection(
                        target.ascii_host,
                        target.port or 443,
                        timeout=self.timeout,
                        context=opts["ssl_context"],
                    )
                else:
                    raise RuntimeError(
                        "Target scheme must be 'http' or 'https', got '{}'.".format(
                            target.scheme
                        )
                    )

                con.connect()
                remote_url = url_quote(remote_path)
                querystring = environ["QUERY_STRING"]

                if querystring:
                    remote_url = remote_url + "?" + querystring

                con.putrequest(environ["REQUEST_METHOD"], remote_url, skip_host=True)

                for k, v in headers:
                    if k.lower() == "connection":
                        v = "close"

                    con.putheader(k, v)

                con.endheaders()
                stream = get_input_stream(environ)

                while 1:
                    data = stream.read(self.chunk_size)

                    if not data:
                        break

                    if chunked:
                        con.send(b"%x\r\n%s\r\n" % (len(data), data))
                    else:
                        con.send(data)

                resp = con.getresponse()
            except socket.error:
                from ..exceptions import BadGateway

                return BadGateway()(environ, start_response)

            start_response(
                "%d %s" % (resp.status, resp.reason),
                [
                    (k.title(), v)
                    for k, v in resp.getheaders()
                    if not is_hop_by_hop_header(k)
                ],
            )

            def read():
                while 1:
                    try:
                        data = resp.read(self.chunk_size)
                    except socket.error:
                        break

                    if not data:
                        break

                    yield data

            return read()

        return application

    def __call__(self, environ, start_response):
        path = environ["PATH_INFO"]
        app = self.app

        for prefix, opts in self.targets.items():
            if path.startswith(prefix):
                app = self.proxy_to(opts, path, prefix)
                break

        return app(environ, start_response)
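When no ``Content-Length`` is available, ``proxy_to`` above forwards the body with chunked transfer encoding, framing each chunk as ``<hex length>\r\n<data>\r\n`` via ``b"%x\r\n%s\r\n"``. A standalone sketch of that framing (the ``frame_chunks`` helper is illustrative; per RFC 7230 a chunked body is terminated by a zero-length chunk, which the sketch appends):

```python
def frame_chunks(chunks):
    """Frame an iterable of byte strings using HTTP/1.1 chunked
    transfer encoding, including the terminating zero-length chunk."""
    out = b""
    for data in chunks:
        # Hex length, CRLF, the data itself, CRLF.
        out += b"%x\r\n%s\r\n" % (len(data), data)
    return out + b"0\r\n\r\n"


framed = frame_chunks([b"hello ", b"world"])
assert framed == b"6\r\nhello \r\n5\r\nworld\r\n0\r\n\r\n"
```

This lets the proxy stream a request body of unknown length without buffering it to compute a ``Content-Length`` first.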
"""
WSGI Protocol Linter
====================

This module provides a middleware that performs sanity checks on the
behavior of the WSGI server and application. It checks that the
:pep:`3333` WSGI spec is properly implemented. It also warns on some
common HTTP errors such as non-empty responses for 304 status codes.

.. autoclass:: LintMiddleware

:copyright: 2007 Pallets
:license: BSD-3-Clause
"""
from warnings import warn

from .._compat import implements_iterator
from .._compat import PY2
from .._compat import string_types
from ..datastructures import Headers
from ..http import is_entity_header
from ..wsgi import FileWrapper

try:
    from urllib.parse import urlparse
except ImportError:
    from urlparse import urlparse


class WSGIWarning(Warning):
    """Warning class for WSGI warnings."""


class HTTPWarning(Warning):
    """Warning class for HTTP warnings."""


def check_string(context, obj, stacklevel=3):
    if type(obj) is not str:
        warn(
            "'%s' requires strings, got '%s'" % (context, type(obj).__name__),
            WSGIWarning,
        )


class InputStream(object):
    def __init__(self, stream):
        self._stream = stream

    def read(self, *args):
        if len(args) == 0:
            warn(
                "WSGI does not guarantee an EOF marker on the input stream, thus making"
                " calls to 'wsgi.input.read()' unsafe. Conforming servers may never"
                " return from this call.",
                WSGIWarning,
                stacklevel=2,
            )
        elif len(args) != 1:
            warn(
                "Too many parameters passed to 'wsgi.input.read()'.",
                WSGIWarning,
                stacklevel=2,
            )
        return self._stream.read(*args)

    def readline(self, *args):
        if len(args) == 0:
            warn(
                "Calls to 'wsgi.input.readline()' without arguments are unsafe. Use"
                " 'wsgi.input.read()' instead.",
                WSGIWarning,
                stacklevel=2,
            )
        elif len(args) == 1:
            warn(
                "'wsgi.input.readline()' was called with a size hint. WSGI does not"
                " support this, although it's available on all major servers.",
                WSGIWarning,
                stacklevel=2,
            )
        else:
            raise TypeError("Too many arguments passed to 'wsgi.input.readline()'.")
        return self._stream.readline(*args)

    def __iter__(self):
        try:
            return iter(self._stream)
        except TypeError:
            warn("'wsgi.input' is not iterable.", WSGIWarning, stacklevel=2)
            return iter(())

    def close(self):
        warn("The application closed the input stream!", WSGIWarning, stacklevel=2)
        self._stream.close()


class ErrorStream(object):
    def __init__(self, stream):
        self._stream = stream

    def write(self, s):
        check_string("wsgi.error.write()", s)
        self._stream.write(s)

    def flush(self):
        self._stream.flush()

    def writelines(self, seq):
        for line in seq:
            self.write(line)

    def close(self):
        warn("The application closed the error stream!", WSGIWarning, stacklevel=2)
        self._stream.close()


class GuardedWrite(object):
    def __init__(self, write, chunks):
        self._write = write
        self._chunks = chunks

    def __call__(self, s):
        check_string("write()", s)
        self._write(s)
        self._chunks.append(len(s))


@implements_iterator
class GuardedIterator(object):
    def __init__(self, iterator, headers_set, chunks):
        self._iterator = iterator
        if PY2:
            self._next = iter(iterator).next
        else:
            self._next = iter(iterator).__next__
        self.closed = False
        self.headers_set = headers_set
        self.chunks = chunks

    def __iter__(self):
        return self

    def __next__(self):
        if self.closed:
            warn("Iterated over closed 'app_iter'.", WSGIWarning, stacklevel=2)

        rv = self._next()

        if not self.headers_set:
            warn(
                "The application returned before it started the response.",
                WSGIWarning,
                stacklevel=2,
            )

        check_string("application iterator items", rv)
        self.chunks.append(len(rv))
        return rv

    def close(self):
        self.closed = True

        if hasattr(self._iterator, "close"):
            self._iterator.close()

        if self.headers_set:
            status_code, headers = self.headers_set
            bytes_sent = sum(self.chunks)
            content_length = headers.get("content-length", type=int)

            if status_code == 304:
                for key, _value in headers:
                    key = key.lower()
                    if key not in ("expires", "content-location") and is_entity_header(
                        key
                    ):
                        warn(
                            "Entity header %r found in 304 response." % key, HTTPWarning
                        )
                if bytes_sent:
                    warn("304 responses must not have a body.", HTTPWarning)
            elif 100 <= status_code < 200 or status_code == 204:
                if content_length != 0:
                    warn(
                        "%r responses must have an empty content length." % status_code,
                        HTTPWarning,
                    )
                if bytes_sent:
                    warn(
                        "%r responses must not have a body." % status_code, HTTPWarning
                    )
            elif content_length is not None and content_length != bytes_sent:
                warn(
                    "Content-Length and the number of bytes sent to the client do not"
                    " match.",
                    WSGIWarning,
                )

    def __del__(self):
        if not self.closed:
            try:
                warn(
                    "Iterator was garbage collected before it was closed.", WSGIWarning
                )
            except Exception:
                pass


class LintMiddleware(object):
    """Warns about common errors in the WSGI and HTTP behavior of the
    server and wrapped application. Some of the issues it checks for are:

    - invalid status codes
    - non-bytestrings sent to the WSGI server
    - strings returned from the WSGI application
    - non-empty conditional responses
    - unquoted etags
    - relative URLs in the Location header
    - unsafe calls to wsgi.input
    - unclosed iterators

    Error information is emitted using the :mod:`warnings` module.

    :param app: The WSGI application to wrap.

    .. code-block:: python

        from werkzeug.middleware.lint import LintMiddleware
        app = LintMiddleware(app)
    """

    def __init__(self, app):
        self.app = app

    def check_environ(self, environ):
        if type(environ) is not dict:
            warn(
                "WSGI environment is not a standard Python dict.",
                WSGIWarning,
                stacklevel=4,
            )
        for key in (
            "REQUEST_METHOD",
            "SERVER_NAME",
            "SERVER_PORT",
            "wsgi.version",
            "wsgi.input",
            "wsgi.errors",
            "wsgi.multithread",
            "wsgi.multiprocess",
            "wsgi.run_once",
        ):
            if key not in environ:
                warn(
                    "Required environment key %r not found" % key,
                    WSGIWarning,
                    stacklevel=3,
                )
        if environ["wsgi.version"] != (1, 0):
            warn("Environ is not a WSGI 1.0 environ.", WSGIWarning, stacklevel=3)

        script_name = environ.get("SCRIPT_NAME", "")
        path_info = environ.get("PATH_INFO", "")

        if script_name and script_name[0] != "/":
            warn(
                "'SCRIPT_NAME' does not start with a slash: %r" % script_name,
                WSGIWarning,
                stacklevel=3,
            )

        if path_info and path_info[0] != "/":
            warn(
                "'PATH_INFO' does not start with a slash: %r" % path_info,
                WSGIWarning,
                stacklevel=3,
            )

    def check_start_response(self, status, headers, exc_info):
        check_string("status", status)
        status_code = status.split(None, 1)[0]

        if len(status_code) != 3 or not status_code.isdigit():
            warn(WSGIWarning("Status code must be three digits"), stacklevel=3)

        if len(status) < 4 or status[3] != " ":
            warn(
                WSGIWarning(
                    "Invalid value for status %r. Valid status strings are"
                    " three digits, a space and a status explanation." % status
                ),
                stacklevel=3,
            )

        status_code = int(status_code)

        if status_code < 100:
            warn(WSGIWarning("status code < 100 detected"), stacklevel=3)

        if type(headers) is not list:
            warn(WSGIWarning("header list is not a list"), stacklevel=3)

        for item in headers:
            if type(item) is not tuple or len(item) != 2:
                warn(WSGIWarning("Headers must be 2-item tuples"), stacklevel=3)
            name, value = item
            if type(name) is not str or type(value) is not str:
                warn(WSGIWarning("header items must be strings"), stacklevel=3)
            if name.lower() == "status":
                warn(
                    WSGIWarning(
                        "The status header is not supported due to "
                        "conflicts with the CGI spec."
                    ),
                    stacklevel=3,
                )

        if exc_info is not None and not isinstance(exc_info, tuple):
            warn(WSGIWarning("invalid value for exc_info"), stacklevel=3)

        headers = Headers(headers)
        self.check_headers(headers)

        return status_code, headers

    def check_headers(self, headers):
        etag = headers.get("etag")

        if etag is not None:
            if etag.startswith(("W/", "w/")):
                if etag.startswith("w/"):
                    warn(
                        HTTPWarning("weak etag indicator should be uppercase."),
                        stacklevel=4,
                    )

                etag = etag[2:]

            if not (etag[:1] == etag[-1:] == '"'):
                warn(HTTPWarning("unquoted etag emitted."), stacklevel=4)

        location = headers.get("location")

        if location is not None:
            if not urlparse(location).netloc:
                warn(
                    HTTPWarning("absolute URLs required for location header"),
                    stacklevel=4,
                )

    def check_iterator(self, app_iter):
        if isinstance(app_iter, string_types):
            warn(
                "The application returned a string. The response will send one"
                " character at a time to the client, which will kill performance."
                " Return a list or iterable instead.",
                WSGIWarning,
                stacklevel=3,
            )

    def __call__(self, *args, **kwargs):
        if len(args) != 2:
            warn("A WSGI app takes two arguments.", WSGIWarning, stacklevel=2)

        if kwargs:
            warn(
                "A WSGI app does not take keyword arguments.", WSGIWarning, stacklevel=2
            )

        environ, start_response = args

        self.check_environ(environ)
        environ["wsgi.input"] = InputStream(environ["wsgi.input"])
        environ["wsgi.errors"] = ErrorStream(environ["wsgi.errors"])

        # Hook our own file wrapper in so that applications will always
        # iterate to the end and we can check the content length.
        environ["wsgi.file_wrapper"] = FileWrapper

        headers_set = []
        chunks = []

        def checking_start_response(*args, **kwargs):
            if len(args) not in (2, 3):
                warn(
                    "Invalid number of arguments: %s, expected 2 or 3." % len(args),
                    WSGIWarning,
                    stacklevel=2,
                )

            if kwargs:
                warn("'start_response' does not take keyword arguments.", WSGIWarning)

            status, headers = args[:2]

            if len(args) == 3:
                exc_info = args[2]
            else:
                exc_info = None

            headers_set[:] = self.check_start_response(status, headers, exc_info)
            return GuardedWrite(start_response(status, headers, exc_info), chunks)

        app_iter = self.app(environ, checking_start_response)
        self.check_iterator(app_iter)
        return GuardedIterator(app_iter, headers_set, chunks)
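Because the linter reports every problem through the stdlib ``warnings`` module, its output can be captured and asserted on in tests. This sketch mirrors the ``check_string`` helper above with a local ``WSGIWarning`` class so it runs standalone (names duplicated here for illustration only):

```python
import warnings


class WSGIWarning(Warning):
    """Local stand-in for the linter's warning category."""


def check_string(context, obj):
    # Same shape as the linter's check_string: warn on non-str values.
    if type(obj) is not str:
        warnings.warn(
            "'%s' requires strings, got '%s'" % (context, type(obj).__name__),
            WSGIWarning,
        )


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_string("status", b"200 OK")  # bytes, not str: triggers a warning

assert len(caught) == 1
assert issubclass(caught[0].category, WSGIWarning)
assert "requires strings" in str(caught[0].message)
```

The same ``catch_warnings(record=True)`` pattern works against the full ``LintMiddleware``: wrap an app, drive a request through it, and assert on the recorded ``WSGIWarning``/``HTTPWarning`` entries.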
0 """
1 Application Profiler
2 ====================
3
4 This module provides a middleware that profiles each request with the
5 :mod:`cProfile` module. This can help identify bottlenecks in your code
6 that may be slowing down your application.
7
8 .. autoclass:: ProfilerMiddleware
9
10 :copyright: 2007 Pallets
11 :license: BSD-3-Clause
12 """
13 from __future__ import print_function
14
15 import os.path
16 import sys
17 import time
18 from pstats import Stats
19
20 try:
21 from cProfile import Profile
22 except ImportError:
23 from profile import Profile
24
25
26 class ProfilerMiddleware(object):
27 """Wrap a WSGI application and profile the execution of each
28 request. Responses are buffered so that timings are more exact.
29
30 If ``stream`` is given, :class:`pstats.Stats` are written to it
31 after each request. If ``profile_dir`` is given, :mod:`cProfile`
32 data files are saved to that directory, one file per request.
33
34 The filename can be customized by passing ``filename_format``. If
35 it is a string, it will be formatted using :meth:`str.format` with
36 the following fields available:
37
38 - ``{method}`` - The request method; GET, POST, etc.
39 - ``{path}`` - The request path, or ``'root'`` if the path is empty.
40 - ``{elapsed}`` - The elapsed time of the request in milliseconds.
41 - ``{time}`` - The time of the request.
42
43 If it is a callable, it will be called with the WSGI ``environ``
44 dict and should return a filename.
45
46 :param app: The WSGI application to wrap.
47 :param stream: Write stats to this stream. Disable with ``None``.
48 :param sort_by: A tuple of columns to sort stats by. See
49 :meth:`pstats.Stats.sort_stats`.
50 :param restrictions: A tuple of restrictions to filter stats by. See
51 :meth:`pstats.Stats.print_stats`.
52 :param profile_dir: Save profile data files to this directory.
53 :param filename_format: Format string for profile data file names,
54 or a callable returning a name. See explanation above.
55
56 .. code-block:: python
57
58 from werkzeug.middleware.profiler import ProfilerMiddleware
59 app = ProfilerMiddleware(app)
60
61 .. versionchanged:: 0.15
62 Stats are written even if ``profile_dir`` is given, and can be
63 disabled by passing ``stream=None``.
64
65 .. versionadded:: 0.15
66 Added ``filename_format``.
67
68 .. versionadded:: 0.9
69 Added ``restrictions`` and ``profile_dir``.
70 """
71
72 def __init__(
73 self,
74 app,
75 stream=sys.stdout,
76 sort_by=("time", "calls"),
77 restrictions=(),
78 profile_dir=None,
79 filename_format="{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof",
80 ):
81 self._app = app
82 self._stream = stream
83 self._sort_by = sort_by
84 self._restrictions = restrictions
85 self._profile_dir = profile_dir
86 self._filename_format = filename_format
87
88 def __call__(self, environ, start_response):
89 response_body = []
90
91 def catching_start_response(status, headers, exc_info=None):
92 start_response(status, headers, exc_info)
93 return response_body.append
94
95 def runapp():
96 app_iter = self._app(environ, catching_start_response)
97 response_body.extend(app_iter)
98
99 if hasattr(app_iter, "close"):
100 app_iter.close()
101
102 profile = Profile()
103 start = time.time()
104 profile.runcall(runapp)
105 body = b"".join(response_body)
106 elapsed = time.time() - start
107
108 if self._profile_dir is not None:
109 if callable(self._filename_format):
110 filename = self._filename_format(environ)
111 else:
112 filename = self._filename_format.format(
113 method=environ["REQUEST_METHOD"],
114 path=(
115 environ.get("PATH_INFO").strip("/").replace("/", ".") or "root"
116 ),
117 elapsed=elapsed * 1000.0,
118 time=time.time(),
119 )
120 filename = os.path.join(self._profile_dir, filename)
121 profile.dump_stats(filename)
122
123 if self._stream is not None:
124 stats = Stats(profile, stream=self._stream)
125 stats.sort_stats(*self._sort_by)
126 print("-" * 80, file=self._stream)
127 print("PATH: {!r}".format(environ.get("PATH_INFO", "")), file=self._stream)
128 stats.print_stats(*self._restrictions)
129 print("-" * 80 + "\n", file=self._stream)
130
131 return [body]
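When ``filename_format`` is a string, the middleware expands it with :meth:`str.format` as shown at the end of ``__call__``. A minimal sketch of that expansion using the default format (the ``environ`` values and timings below are made up for the demo):

```python
# Hypothetical sketch: expanding ProfilerMiddleware's default
# filename_format string the same way the middleware does.
filename_format = "{method}.{path}.{elapsed:.0f}ms.{time:.0f}.prof"
environ = {"REQUEST_METHOD": "GET", "PATH_INFO": "/api/users/"}

filename = filename_format.format(
    method=environ["REQUEST_METHOD"],
    path=environ.get("PATH_INFO").strip("/").replace("/", ".") or "root",
    elapsed=12.0,       # elapsed milliseconds (made up for the demo)
    time=1500000000.0,  # request timestamp (made up for the demo)
)
# filename == "GET.api.users.12ms.1500000000.prof"
```

Slashes in the path become dots so the result is a valid filename in the profile directory.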
0 """
1 X-Forwarded-For Proxy Fix
2 =========================
3
4 This module provides a middleware that adjusts the WSGI environ based on
5 ``X-Forwarded-`` headers that proxies in front of an application may
6 set.
7
8 When an application is running behind a proxy server, WSGI may see the
9 request as coming from that server rather than the real client. Proxies
10 set various headers to track where the request actually came from.
11
12 This middleware should only be applied if the application is actually
13 behind such a proxy, and should be configured with the number of proxies
14 that are chained in front of it. Not all proxies set all the headers.
15 Since incoming headers can be faked, you must set how many proxies are
16 setting each header so the middleware knows what to trust.
17
18 .. autoclass:: ProxyFix
19
20 :copyright: 2007 Pallets
21 :license: BSD-3-Clause
22 """
23 import warnings
24
25
26 class ProxyFix(object):
27 """Adjust the WSGI environ based on ``X-Forwarded-`` that proxies in
28 front of the application may set.
29
30 - ``X-Forwarded-For`` sets ``REMOTE_ADDR``.
31 - ``X-Forwarded-Proto`` sets ``wsgi.url_scheme``.
32 - ``X-Forwarded-Host`` sets ``HTTP_HOST``, ``SERVER_NAME``, and
33 ``SERVER_PORT``.
34 - ``X-Forwarded-Port`` sets ``HTTP_HOST`` and ``SERVER_PORT``.
35 - ``X-Forwarded-Prefix`` sets ``SCRIPT_NAME``.
36
37 You must tell the middleware how many proxies set each header so it
38 knows what values to trust. It is a security issue to trust values
39 that came from the client rather than a proxy.
40
41 The original values of the headers are stored in the WSGI
42 environ as ``werkzeug.proxy_fix.orig``, a dict.
43
44 :param app: The WSGI application to wrap.
45 :param x_for: Number of values to trust for ``X-Forwarded-For``.
46 :param x_proto: Number of values to trust for ``X-Forwarded-Proto``.
47 :param x_host: Number of values to trust for ``X-Forwarded-Host``.
48 :param x_port: Number of values to trust for ``X-Forwarded-Port``.
49 :param x_prefix: Number of values to trust for
50 ``X-Forwarded-Prefix``.
51 :param num_proxies: Deprecated, use ``x_for`` instead.
52
53 .. code-block:: python
54
55 from werkzeug.middleware.proxy_fix import ProxyFix
56 # App is behind one proxy that sets the -For and -Host headers.
57 app = ProxyFix(app, x_for=1, x_host=1)
58
59 .. versionchanged:: 0.15
60 All headers support multiple values. The ``num_proxies``
61 argument is deprecated. Each header is configured with a
62 separate number of trusted proxies.
63
64 .. versionchanged:: 0.15
65 Original WSGI environ values are stored in the
66 ``werkzeug.proxy_fix.orig`` dict. ``orig_remote_addr``,
67 ``orig_wsgi_url_scheme``, and ``orig_http_host`` are deprecated
68 and will be removed in 1.0.
69
70 .. versionchanged:: 0.15
71 Support ``X-Forwarded-Port`` and ``X-Forwarded-Prefix``.
72
73 .. versionchanged:: 0.15
74 ``X-Forwarded-Host`` and ``X-Forwarded-Port`` modify
75 ``SERVER_NAME`` and ``SERVER_PORT``.
76 """
77
78 def __init__(
79 self, app, num_proxies=None, x_for=1, x_proto=0, x_host=0, x_port=0, x_prefix=0
80 ):
81 self.app = app
82 self.x_for = x_for
83 self.x_proto = x_proto
84 self.x_host = x_host
85 self.x_port = x_port
86 self.x_prefix = x_prefix
87 self.num_proxies = num_proxies
88
89 @property
90 def num_proxies(self):
91 """The number of proxies setting ``X-Forwarded-For`` in front
92 of the application.
93
94 .. deprecated:: 0.15
95 A separate number of trusted proxies is configured for each
96 header. ``num_proxies`` maps to ``x_for``. This method will
97 be removed in 1.0.
98
99 :internal:
100 """
101 warnings.warn(
102 "'num_proxies' is deprecated as of version 0.15 and will be"
103 " removed in version 1.0. Use 'x_for' instead.",
104 DeprecationWarning,
105 stacklevel=2,
106 )
107 return self.x_for
108
109 @num_proxies.setter
110 def num_proxies(self, value):
111 if value is not None:
112 warnings.warn(
113 "'num_proxies' is deprecated as of version 0.15 and"
114 " will be removed in version 1.0. Use 'x_for' instead.",
115 DeprecationWarning,
116 stacklevel=2,
117 )
118 self.x_for = value
119
120 def get_remote_addr(self, forwarded_for):
121 """Get the real ``remote_addr`` by looking backwards ``x_for``
122 number of values in the ``X-Forwarded-For`` header.
123
124 :param forwarded_for: List of values parsed from the
125 ``X-Forwarded-For`` header.
126 :return: The real ``remote_addr``, or ``None`` if there were not
127 at least ``x_for`` values.
128
129 .. deprecated:: 0.15
130 This is handled internally for each header. This method will
131 be removed in 1.0.
132
133 .. versionchanged:: 0.9
134 Use ``num_proxies`` instead of always picking the first
135 value.
136
137 .. versionadded:: 0.8
138 """
139 warnings.warn(
140 "'get_remote_addr' is deprecated as of version 0.15 and"
141 " will be removed in version 1.0. It is now handled"
142 " internally for each header.",
143 DeprecationWarning,
144 )
145 return self._get_trusted_comma(self.x_for, ",".join(forwarded_for))
146
147 def _get_trusted_comma(self, trusted, value):
148 """Get the real value from a comma-separated header based on the
149 configured number of trusted proxies.
150
151 :param trusted: Number of values to trust in the header.
152 :param value: Header value to parse.
153 :return: The real value, or ``None`` if there are fewer values
154 than the number of trusted proxies.
155
156 .. versionadded:: 0.15
157 """
158 if not (trusted and value):
159 return
160 values = [x.strip() for x in value.split(",")]
161 if len(values) >= trusted:
162 return values[-trusted]
163
164 def __call__(self, environ, start_response):
165 """Modify the WSGI environ based on the various ``Forwarded``
166 headers before calling the wrapped application. Store the
167 original environ values in ``werkzeug.proxy_fix.orig_{key}``.
168 """
169 environ_get = environ.get
170 orig_remote_addr = environ_get("REMOTE_ADDR")
171 orig_wsgi_url_scheme = environ_get("wsgi.url_scheme")
172 orig_http_host = environ_get("HTTP_HOST")
173 environ.update(
174 {
175 "werkzeug.proxy_fix.orig": {
176 "REMOTE_ADDR": orig_remote_addr,
177 "wsgi.url_scheme": orig_wsgi_url_scheme,
178 "HTTP_HOST": orig_http_host,
179 "SERVER_NAME": environ_get("SERVER_NAME"),
180 "SERVER_PORT": environ_get("SERVER_PORT"),
181 "SCRIPT_NAME": environ_get("SCRIPT_NAME"),
182 },
183 # todo: remove deprecated keys
184 "werkzeug.proxy_fix.orig_remote_addr": orig_remote_addr,
185 "werkzeug.proxy_fix.orig_wsgi_url_scheme": orig_wsgi_url_scheme,
186 "werkzeug.proxy_fix.orig_http_host": orig_http_host,
187 }
188 )
189
190 x_for = self._get_trusted_comma(self.x_for, environ_get("HTTP_X_FORWARDED_FOR"))
191 if x_for:
192 environ["REMOTE_ADDR"] = x_for
193
194 x_proto = self._get_trusted_comma(
195 self.x_proto, environ_get("HTTP_X_FORWARDED_PROTO")
196 )
197 if x_proto:
198 environ["wsgi.url_scheme"] = x_proto
199
200 x_host = self._get_trusted_comma(
201 self.x_host, environ_get("HTTP_X_FORWARDED_HOST")
202 )
203 if x_host:
204 environ["HTTP_HOST"] = x_host
205 parts = x_host.split(":", 1)
206 environ["SERVER_NAME"] = parts[0]
207 if len(parts) == 2:
208 environ["SERVER_PORT"] = parts[1]
209
210 x_port = self._get_trusted_comma(
211 self.x_port, environ_get("HTTP_X_FORWARDED_PORT")
212 )
213 if x_port:
214 host = environ.get("HTTP_HOST")
215 if host:
216 parts = host.split(":", 1)
217 host = parts[0] if len(parts) == 2 else host
218 environ["HTTP_HOST"] = "%s:%s" % (host, x_port)
219 environ["SERVER_PORT"] = x_port
220
221 x_prefix = self._get_trusted_comma(
222 self.x_prefix, environ_get("HTTP_X_FORWARDED_PREFIX")
223 )
224 if x_prefix:
225 environ["SCRIPT_NAME"] = x_prefix
226
227 return self.app(environ, start_response)
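The trusted-value selection that ``_get_trusted_comma`` performs above can be sketched on its own. ``get_trusted_comma`` below is a hypothetical standalone copy of that logic, not the Werkzeug API:

```python
# Hypothetical sketch of ProxyFix._get_trusted_comma: from a
# comma-separated header, trust only the last `trusted` values, since
# those were appended by proxies rather than sent by the client.
def get_trusted_comma(trusted, value):
    if not (trusted and value):
        return None
    values = [x.strip() for x in value.split(",")]
    if len(values) >= trusted:
        return values[-trusted]
    return None

# With one trusted proxy, the rightmost value wins; anything the
# client faked further left is ignored.
addr = get_trusted_comma(1, "spoofed-client, 203.0.113.7")
# addr == "203.0.113.7"
```

With ``trusted=2`` the second value from the right would be returned instead, and fewer values than ``trusted`` yields ``None``, leaving the environ untouched.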
0 """
1 Serve Shared Static Files
2 =========================
3
4 .. autoclass:: SharedDataMiddleware
5 :members: is_allowed
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import mimetypes
11 import os
12 import posixpath
13 from datetime import datetime
14 from io import BytesIO
15 from time import mktime
16 from time import time
17 from zlib import adler32
18
19 from .._compat import PY2
20 from .._compat import string_types
21 from ..filesystem import get_filesystem_encoding
22 from ..http import http_date
23 from ..http import is_resource_modified
24 from ..wsgi import get_path_info
25 from ..wsgi import wrap_file
26
27
28 class SharedDataMiddleware(object):
29
30 """A WSGI middleware that provides static content for development
31 environments or simple server setups. Usage is quite simple::
32
33 import os
34 from werkzeug.middleware.shared_data import SharedDataMiddleware
35
36 app = SharedDataMiddleware(app, {
37 '/static': os.path.join(os.path.dirname(__file__), 'static')
38 })
39
40 The contents of the folder ``./static`` will now be available on
41 ``http://example.com/static/``. This is pretty useful during development
42 because a standalone media server is not required. One can also mount
43 files on the root folder and still continue to use the application because
44 the shared data middleware forwards all unhandled requests to the
45 application, even if the requests are below one of the shared folders.
46
47 If `pkg_resources` is available you can also tell the middleware to serve
48 files from package data::
49
50 app = SharedDataMiddleware(app, {
51 '/static': ('myapplication', 'static')
52 })
53
54 This will then serve the ``static`` folder in the `myapplication`
55 Python package.
56
57 The optional `disallow` parameter can be a list of :func:`~fnmatch.fnmatch`
58 rules for files that are not accessible from the web. If `cache` is set to
59 `False` no caching headers are sent.
60
61 Currently the middleware does not support non-ASCII filenames. If the
62 encoding on the file system happens to be the encoding of the URI it may
63 work, but this could also be by accident. We strongly suggest using
64 ASCII-only filenames for static files.
65
66 The middleware will guess the mimetype using the Python `mimetypes`
67 module. If it's unable to guess one it will fall back
68 to `fallback_mimetype`.
69
70 .. versionchanged:: 0.5
71 The cache timeout is configurable now.
72
73 .. versionadded:: 0.6
74 The `fallback_mimetype` parameter was added.
75
76 :param app: the application to wrap. If you don't want to wrap an
77 application you can pass it :exc:`NotFound`.
78 :param exports: a list or dict of exported files and folders.
79 :param disallow: a list of :func:`~fnmatch.fnmatch` rules.
80 :param fallback_mimetype: the fallback mimetype for unknown files.
81 :param cache: enable or disable caching headers.
82 :param cache_timeout: the cache timeout in seconds for the headers.
83 """
84
85 def __init__(
86 self,
87 app,
88 exports,
89 disallow=None,
90 cache=True,
91 cache_timeout=60 * 60 * 12,
92 fallback_mimetype="text/plain",
93 ):
94 self.app = app
95 self.exports = []
96 self.cache = cache
97 self.cache_timeout = cache_timeout
98
99 if hasattr(exports, "items"):
100 exports = exports.items()
101
102 for key, value in exports:
103 if isinstance(value, tuple):
104 loader = self.get_package_loader(*value)
105 elif isinstance(value, string_types):
106 if os.path.isfile(value):
107 loader = self.get_file_loader(value)
108 else:
109 loader = self.get_directory_loader(value)
110 else:
111 raise TypeError("unknown def %r" % value)
112
113 self.exports.append((key, loader))
114
115 if disallow is not None:
116 from fnmatch import fnmatch
117
118 self.is_allowed = lambda x: not fnmatch(x, disallow)
119
120 self.fallback_mimetype = fallback_mimetype
121
122 def is_allowed(self, filename):
123 """Subclasses can override this method to disallow access to
124 certain files. However, if `disallow` is given to the constructor,
125 this method is overridden.
126 """
127 return True
128
129 def _opener(self, filename):
130 return lambda: (
131 open(filename, "rb"),
132 datetime.utcfromtimestamp(os.path.getmtime(filename)),
133 int(os.path.getsize(filename)),
134 )
135
136 def get_file_loader(self, filename):
137 return lambda x: (os.path.basename(filename), self._opener(filename))
138
139 def get_package_loader(self, package, package_path):
140 from pkg_resources import DefaultProvider, ResourceManager, get_provider
141
142 loadtime = datetime.utcnow()
143 provider = get_provider(package)
144 manager = ResourceManager()
145 filesystem_bound = isinstance(provider, DefaultProvider)
146
147 def loader(path):
148 if path is None:
149 return None, None
150
151 path = posixpath.join(package_path, path)
152
153 if not provider.has_resource(path):
154 return None, None
155
156 basename = posixpath.basename(path)
157
158 if filesystem_bound:
159 return (
160 basename,
161 self._opener(provider.get_resource_filename(manager, path)),
162 )
163
164 s = provider.get_resource_string(manager, path)
165 return basename, lambda: (BytesIO(s), loadtime, len(s))
166
167 return loader
168
169 def get_directory_loader(self, directory):
170 def loader(path):
171 if path is not None:
172 path = os.path.join(directory, path)
173 else:
174 path = directory
175
176 if os.path.isfile(path):
177 return os.path.basename(path), self._opener(path)
178
179 return None, None
180
181 return loader
182
183 def generate_etag(self, mtime, file_size, real_filename):
184 if not isinstance(real_filename, bytes):
185 real_filename = real_filename.encode(get_filesystem_encoding())
186
187 return "wzsdm-%d-%s-%s" % (
188 mktime(mtime.timetuple()),
189 file_size,
190 adler32(real_filename) & 0xFFFFFFFF,
191 )
192
193 def __call__(self, environ, start_response):
194 cleaned_path = get_path_info(environ)
195
196 if PY2:
197 cleaned_path = cleaned_path.encode(get_filesystem_encoding())
198
199 # sanitize the path for non unix systems
200 cleaned_path = cleaned_path.strip("/")
201
202 for sep in os.sep, os.altsep:
203 if sep and sep != "/":
204 cleaned_path = cleaned_path.replace(sep, "/")
205
206 path = "/" + "/".join(x for x in cleaned_path.split("/") if x and x != "..")
207 file_loader = None
208
209 for search_path, loader in self.exports:
210 if search_path == path:
211 real_filename, file_loader = loader(None)
212
213 if file_loader is not None:
214 break
215
216 if not search_path.endswith("/"):
217 search_path += "/"
218
219 if path.startswith(search_path):
220 real_filename, file_loader = loader(path[len(search_path) :])
221
222 if file_loader is not None:
223 break
224
225 if file_loader is None or not self.is_allowed(real_filename):
226 return self.app(environ, start_response)
227
228 guessed_type = mimetypes.guess_type(real_filename)
229 mime_type = guessed_type[0] or self.fallback_mimetype
230 f, mtime, file_size = file_loader()
231
232 headers = [("Date", http_date())]
233
234 if self.cache:
235 timeout = self.cache_timeout
236 etag = self.generate_etag(mtime, file_size, real_filename)
237 headers += [
238 ("Etag", '"%s"' % etag),
239 ("Cache-Control", "max-age=%d, public" % timeout),
240 ]
241
242 if not is_resource_modified(environ, etag, last_modified=mtime):
243 f.close()
244 start_response("304 Not Modified", headers)
245 return []
246
247 headers.append(("Expires", http_date(time() + timeout)))
248 else:
249 headers.append(("Cache-Control", "public"))
250
251 headers.extend(
252 (
253 ("Content-Type", mime_type),
254 ("Content-Length", str(file_size)),
255 ("Last-Modified", http_date(mtime)),
256 )
257 )
258 start_response("200 OK", headers)
259 return wrap_file(environ, f)
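The path sanitization at the start of ``__call__`` above (strip slashes, normalize OS separators, drop ``..`` segments) is what keeps requests from escaping the exported directories. A hypothetical standalone sketch of the same steps (``sanitize_path`` is a made-up name):

```python
# Hypothetical sketch of the path cleaning in
# SharedDataMiddleware.__call__: normalize separators and drop empty
# and ".." segments so a request cannot traverse out of an export.
import os

def sanitize_path(path):
    cleaned = path.strip("/")
    # On non-Unix systems, convert native separators to "/".
    for sep in os.sep, os.altsep:
        if sep and sep != "/":
            cleaned = cleaned.replace(sep, "/")
    return "/" + "/".join(x for x in cleaned.split("/") if x and x != "..")

p = sanitize_path("/static/../../etc/passwd")
# p == "/static/etc/passwd" -- the ".." segments are removed outright
```

Dropping ``..`` segments (rather than resolving them) means a traversal attempt simply maps to a harmless path inside the export, or falls through to the wrapped application.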
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.posixemulation
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 Provides a POSIX emulation for some features that are relevant to
6 web applications. The main purpose is to simplify support for
7 systems such as Windows NT that are not 100% POSIX compatible.
8
9 Currently this only implements a :func:`rename` function that
10 follows POSIX semantics, e.g. if the target file already exists it
11 will be replaced without asking.
12
13 This module was introduced in 0.6.1 and is not a public interface.
14 It might become one in later versions of Werkzeug.
15
16 :copyright: 2007 Pallets
17 :license: BSD-3-Clause
18 """
19 import errno
20 import os
21 import random
22 import sys
23 import time
24
25 from ._compat import to_unicode
26 from .filesystem import get_filesystem_encoding
27
28 can_rename_open_file = False
29
30 if os.name == "nt":
31 try:
32 import ctypes
33
34 _MOVEFILE_REPLACE_EXISTING = 0x1
35 _MOVEFILE_WRITE_THROUGH = 0x8
36 _MoveFileEx = ctypes.windll.kernel32.MoveFileExW
37
38 def _rename(src, dst):
39 src = to_unicode(src, get_filesystem_encoding())
40 dst = to_unicode(dst, get_filesystem_encoding())
41 if _rename_atomic(src, dst):
42 return True
43 retry = 0
44 rv = False
45 while not rv and retry < 100:
46 rv = _MoveFileEx(
47 src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH
48 )
49 if not rv:
50 time.sleep(0.001)
51 retry += 1
52 return rv
53
54 # new in Vista and Windows Server 2008
55 _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction
56 _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction
57 _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW
58 _CloseHandle = ctypes.windll.kernel32.CloseHandle
59 can_rename_open_file = True
60
61 def _rename_atomic(src, dst):
62 ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, "Werkzeug rename")
63 if ta == -1:
64 return False
65 try:
66 retry = 0
67 rv = False
68 while not rv and retry < 100:
69 rv = _MoveFileTransacted(
70 src,
71 dst,
72 None,
73 None,
74 _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH,
75 ta,
76 )
77 if rv:
78 rv = _CommitTransaction(ta)
79 break
80 else:
81 time.sleep(0.001)
82 retry += 1
83 return rv
84 finally:
85 _CloseHandle(ta)
86
87 except Exception:
88
89 def _rename(src, dst):
90 return False
91
92 def _rename_atomic(src, dst):
93 return False
94
95 def rename(src, dst):
96 # Try atomic or pseudo-atomic rename
97 if _rename(src, dst):
98 return
99 # Fall back to "move away and replace"
100 try:
101 os.rename(src, dst)
102 except OSError as e:
103 if e.errno != errno.EEXIST:
104 raise
105 old = "%s-%08x" % (dst, random.randint(0, sys.maxsize))
106 os.rename(dst, old)
107 os.rename(src, dst)
108 try:
109 os.unlink(old)
110 except Exception:
111 pass
112
113
114 else:
115 rename = os.rename
116 can_rename_open_file = True
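On POSIX systems the module can simply alias ``rename = os.rename`` because the platform already gives the semantics the Windows branch emulates: renaming over an existing destination replaces it atomically, without asking. A small sketch of that behavior (POSIX only; on Windows ``os.rename`` would raise here, which is exactly what the ``MoveFileEx`` path above works around):

```python
# Sketch of the POSIX rename semantics werkzeug.posixemulation
# provides portably: os.rename silently replaces an existing target.
import os
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "new")
dst = os.path.join(d, "old")
with open(src, "w") as f:
    f.write("new contents")
with open(dst, "w") as f:
    f.write("old contents")

os.rename(src, dst)  # on POSIX this atomically replaces dst
with open(dst) as f:
    result = f.read()
# result == "new contents"; src no longer exists
```

This is the property the Windows fallback in ``rename`` approximates with the "move away and replace" dance when ``_rename`` fails.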
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.routing
3 ~~~~~~~~~~~~~~~~
4
5 When it comes to combining multiple controller or view functions (however
6 you want to call them) you need a dispatcher. A simple way would be
7 applying regular expression tests on the ``PATH_INFO`` and calling
8 registered callback functions that then return the value.
9
10 This module implements a much more powerful system than simple regular
11 expression matching because it can also convert values in the URLs and
12 build URLs.
13
14 Here is a simple example that creates a URL map for an application with
15 two subdomains (www and kb) and some URL rules:
16
17 >>> m = Map([
18 ... # Static URLs
19 ... Rule('/', endpoint='static/index'),
20 ... Rule('/about', endpoint='static/about'),
21 ... Rule('/help', endpoint='static/help'),
22 ... # Knowledge Base
23 ... Subdomain('kb', [
24 ... Rule('/', endpoint='kb/index'),
25 ... Rule('/browse/', endpoint='kb/browse'),
26 ... Rule('/browse/<int:id>/', endpoint='kb/browse'),
27 ... Rule('/browse/<int:id>/<int:page>', endpoint='kb/browse')
28 ... ])
29 ... ], default_subdomain='www')
30
31 If the application doesn't use subdomains it's perfectly fine to not set
32 the default subdomain and not use the `Subdomain` rule factory. The endpoint
33 in the rules can be anything, for example import paths or unique
34 identifiers. The WSGI application can use those endpoints to get the
35 handler for that URL. It doesn't have to be a string at all but it's
36 recommended.
37
38 Now it's possible to create a URL adapter for one of the subdomains and
39 build URLs:
40
41 >>> c = m.bind('example.com')
42 >>> c.build("kb/browse", dict(id=42))
43 'http://kb.example.com/browse/42/'
44 >>> c.build("kb/browse", dict())
45 'http://kb.example.com/browse/'
46 >>> c.build("kb/browse", dict(id=42, page=3))
47 'http://kb.example.com/browse/42/3'
48 >>> c.build("static/about")
49 '/about'
50 >>> c.build("static/index", force_external=True)
51 'http://www.example.com/'
52
53 >>> c = m.bind('example.com', subdomain='kb')
54 >>> c.build("static/about")
55 'http://www.example.com/about'
56
57 The first argument to bind is the server name *without* the subdomain.
58 By default it will assume that the script is mounted on the root, but
59 often that's not the case so you can provide the real mount point as
60 second argument:
61
62 >>> c = m.bind('example.com', '/applications/example')
63
64 The third argument can be the subdomain; if not given, the default
65 subdomain is used. For more details about binding have a look at the
66 documentation of the `MapAdapter`.
67
68 And here is how you can match URLs:
69
70 >>> c = m.bind('example.com')
71 >>> c.match("/")
72 ('static/index', {})
73 >>> c.match("/about")
74 ('static/about', {})
75 >>> c = m.bind('example.com', '/', 'kb')
76 >>> c.match("/")
77 ('kb/index', {})
78 >>> c.match("/browse/42/23")
79 ('kb/browse', {'id': 42, 'page': 23})
80
81 If matching fails you get a `NotFound` exception. If the rule thinks
82 it's a good idea to redirect (for example because the URL was defined
83 to have a slash at the end but the request was missing that slash) it
84 will raise a `RequestRedirect` exception. Both are subclasses of
85 `HTTPException`, so you can use those errors as responses in the
86 application.
87
88 If matching succeeded but the URL rule was incompatible with the given
89 method (for example there were only rules for `GET` and `HEAD` and the
90 routing system tried to match a `POST` request) a `MethodNotAllowed`
91 exception is raised.
92
93
94 :copyright: 2007 Pallets
95 :license: BSD-3-Clause
96 """
97 import ast
98 import difflib
99 import posixpath
100 import re
101 import uuid
102 from pprint import pformat
103 from threading import Lock
104
105 from ._compat import implements_to_string
106 from ._compat import iteritems
107 from ._compat import itervalues
108 from ._compat import native_string_result
109 from ._compat import string_types
110 from ._compat import text_type
111 from ._compat import to_bytes
112 from ._compat import to_unicode
113 from ._compat import wsgi_decoding_dance
114 from ._internal import _encode_idna
115 from ._internal import _get_environ
116 from .datastructures import ImmutableDict
117 from .datastructures import MultiDict
118 from .exceptions import BadHost
119 from .exceptions import HTTPException
120 from .exceptions import MethodNotAllowed
121 from .exceptions import NotFound
122 from .urls import _fast_url_quote
123 from .urls import url_encode
124 from .urls import url_join
125 from .urls import url_quote
126 from .utils import cached_property
127 from .utils import format_string
128 from .utils import redirect
129 from .wsgi import get_host
130
131 _rule_re = re.compile(
132 r"""
133 (?P<static>[^<]*) # static rule data
134 <
135 (?:
136 (?P<converter>[a-zA-Z_][a-zA-Z0-9_]*) # converter name
137 (?:\((?P<args>.*?)\))? # converter arguments
138 \: # variable delimiter
139 )?
140 (?P<variable>[a-zA-Z_][a-zA-Z0-9_]*) # variable name
141 >
142 """,
143 re.VERBOSE,
144 )
145 _simple_rule_re = re.compile(r"<([^>]+)>")
146 _converter_args_re = re.compile(
147 r"""
148 ((?P<name>\w+)\s*=\s*)?
149 (?P<value>
150 True|False|
151 \d+\.\d+|
152 \d+\.|
153 \d+|
154 [\w\d_.]+|
155 [urUR]?(?P<stringval>"[^"]*?"|'[^']*')
156 )\s*,
157 """,
158 re.VERBOSE | re.UNICODE,
159 )
160
161
162 _PYTHON_CONSTANTS = {"None": None, "True": True, "False": False}
163
164
165 def _pythonize(value):
166 if value in _PYTHON_CONSTANTS:
167 return _PYTHON_CONSTANTS[value]
168 for convert in int, float:
169 try:
170 return convert(value)
171 except ValueError:
172 pass
173 if value[:1] == value[-1:] and value[0] in "\"'":
174 value = value[1:-1]
175 return text_type(value)
176
177
178 def parse_converter_args(argstr):
179 argstr += ","
180 args = []
181 kwargs = {}
182
183 for item in _converter_args_re.finditer(argstr):
184 value = item.group("stringval")
185 if value is None:
186 value = item.group("value")
187 value = _pythonize(value)
188 if not item.group("name"):
189 args.append(value)
190 else:
191 name = item.group("name")
192 kwargs[name] = value
193
194 return tuple(args), kwargs
195
196
197 def parse_rule(rule):
198 """Parse a rule and return it as a generator. Each iteration yields tuples
199 in the form ``(converter, arguments, variable)``. If the converter is
200 `None` it's a static url part, otherwise it's a dynamic one.
201
202 :internal:
203 """
204 pos = 0
205 end = len(rule)
206 do_match = _rule_re.match
207 used_names = set()
208 while pos < end:
209 m = do_match(rule, pos)
210 if m is None:
211 break
212 data = m.groupdict()
213 if data["static"]:
214 yield None, None, data["static"]
215 variable = data["variable"]
216 converter = data["converter"] or "default"
217 if variable in used_names:
218 raise ValueError("variable name %r used twice." % variable)
219 used_names.add(variable)
220 yield converter, data["args"] or None, variable
221 pos = m.end()
222 if pos < end:
223 remaining = rule[pos:]
224 if ">" in remaining or "<" in remaining:
225 raise ValueError("malformed url rule: %r" % rule)
226 yield None, None, remaining
227
228
229 class RoutingException(Exception):
230 """Special exceptions that require the application to redirect, notifying
231 about missing urls, etc.
232
233 :internal:
234 """
235
236
237 class RequestRedirect(HTTPException, RoutingException):
238 """Raise if the map requests a redirect. This is for example the case if
239 `strict_slashes` is enabled and a URL was requested without its required trailing slash.
240
241 The attribute `new_url` contains the absolute destination url.
242 """
243
244 code = 308
245
246 def __init__(self, new_url):
247 RoutingException.__init__(self, new_url)
248 self.new_url = new_url
249
250 def get_response(self, environ):
251 return redirect(self.new_url, self.code)
252
253
254 class RequestSlash(RoutingException):
255 """Internal exception."""
256
257
258 class RequestAliasRedirect(RoutingException): # noqa: B903
259 """This rule is an alias and wants to redirect to the canonical URL."""
260
261 def __init__(self, matched_values):
262 self.matched_values = matched_values
263
264
265 @implements_to_string
266 class BuildError(RoutingException, LookupError):
267 """Raised if the build system cannot find a URL for an endpoint with the
268 values provided.
269 """
270
271 def __init__(self, endpoint, values, method, adapter=None):
272 LookupError.__init__(self, endpoint, values, method)
273 self.endpoint = endpoint
274 self.values = values
275 self.method = method
276 self.adapter = adapter
277
278 @cached_property
279 def suggested(self):
280 return self.closest_rule(self.adapter)
281
282 def closest_rule(self, adapter):
283 def _score_rule(rule):
284 return sum(
285 [
286 0.98
287 * difflib.SequenceMatcher(
288 None, rule.endpoint, self.endpoint
289 ).ratio(),
290 0.01 * bool(set(self.values or ()).issubset(rule.arguments)),
291 0.01 * bool(rule.methods and self.method in rule.methods),
292 ]
293 )
294
295 if adapter and adapter.map._rules:
296 return max(adapter.map._rules, key=_score_rule)
297
298 def __str__(self):
299 message = []
300 message.append("Could not build url for endpoint %r" % self.endpoint)
301 if self.method:
302 message.append(" (%r)" % self.method)
303 if self.values:
304 message.append(" with values %r" % sorted(self.values.keys()))
305 message.append(".")
306 if self.suggested:
307 if self.endpoint == self.suggested.endpoint:
308 if self.method and self.method not in self.suggested.methods:
309 message.append(
310 " Did you mean to use methods %r?"
311 % sorted(self.suggested.methods)
312 )
313 missing_values = self.suggested.arguments.union(
314 set(self.suggested.defaults or ())
315 ) - set(self.values.keys())
316 if missing_values:
317 message.append(
318 " Did you forget to specify values %r?" % sorted(missing_values)
319 )
320 else:
321 message.append(" Did you mean %r instead?" % self.suggested.endpoint)
322 return u"".join(message)
323
324
325 class ValidationError(ValueError):
326 """Validation error. If a rule converter raises this exception the rule
327 does not match the current URL and the next URL is tried.
328 """
329
330
331 class RuleFactory(object):
332 """As soon as you have more complex URL setups it's a good idea to use rule
333 factories to avoid repetitive tasks. Some of them are built in, others can
334 be added by subclassing `RuleFactory` and overriding `get_rules`.
335 """
336
337 def get_rules(self, map):
338 """Subclasses of `RuleFactory` have to override this method and return
339 an iterable of rules."""
340 raise NotImplementedError()
341
342
343 class Subdomain(RuleFactory):
344 """All URLs provided by this factory have the subdomain set to a
345 specific domain. For example if you want to use the subdomain for
346 the current language this can be a good setup::
347
348 url_map = Map([
349 Rule('/', endpoint='#select_language'),
350 Subdomain('<string(length=2):lang_code>', [
351 Rule('/', endpoint='index'),
352 Rule('/about', endpoint='about'),
353 Rule('/help', endpoint='help')
354 ])
355 ])
356
357 All the rules except for the ``'#select_language'`` endpoint will now
358 listen on a two-letter subdomain that holds the language code
359 for the current request.
360 """
361
362 def __init__(self, subdomain, rules):
363 self.subdomain = subdomain
364 self.rules = rules
365
366 def get_rules(self, map):
367 for rulefactory in self.rules:
368 for rule in rulefactory.get_rules(map):
369 rule = rule.empty()
370 rule.subdomain = self.subdomain
371 yield rule
372
373
374 class Submount(RuleFactory):
375 """Like `Subdomain` but prefixes the URL rule with a given string::
376
377 url_map = Map([
378 Rule('/', endpoint='index'),
379 Submount('/blog', [
380 Rule('/', endpoint='blog/index'),
381 Rule('/entry/<entry_slug>', endpoint='blog/show')
382 ])
383 ])
384
385 Now the rule ``'blog/show'`` matches ``/blog/entry/<entry_slug>``.
386 """
387
388 def __init__(self, path, rules):
389 self.path = path.rstrip("/")
390 self.rules = rules
391
392 def get_rules(self, map):
393 for rulefactory in self.rules:
394 for rule in rulefactory.get_rules(map):
395 rule = rule.empty()
396 rule.rule = self.path + rule.rule
397 yield rule
398
399
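The prefixing done by `Submount` is plain string concatenation after stripping the trailing slash; a minimal sketch of that step:

```python
# Submount("/blog/", ...) strips the trailing slash first, so "/blog"
# and "/blog/" produce the same mounted rule string
path = "/blog/".rstrip("/")
mounted = path + "/entry/<entry_slug>"  # what rule.rule becomes
```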
400 class EndpointPrefix(RuleFactory):
401 """Prefixes all endpoints (which must be strings for this factory) with
402 another string. This can be useful for sub applications::
403
404 url_map = Map([
405 Rule('/', endpoint='index'),
406 EndpointPrefix('blog/', [Submount('/blog', [
407 Rule('/', endpoint='index'),
408 Rule('/entry/<entry_slug>', endpoint='show')
409 ])])
410 ])
411 """
412
413 def __init__(self, prefix, rules):
414 self.prefix = prefix
415 self.rules = rules
416
417 def get_rules(self, map):
418 for rulefactory in self.rules:
419 for rule in rulefactory.get_rules(map):
420 rule = rule.empty()
421 rule.endpoint = self.prefix + rule.endpoint
422 yield rule
423
424
425 class RuleTemplate(object):
426 """When called, returns copies of the wrapped rules with string templates
427 expanded in the endpoint, rule, defaults or subdomain sections.
428
429 Here is a small example of such a rule template::
430
431 from werkzeug.routing import Map, Rule, RuleTemplate
432
433 resource = RuleTemplate([
434 Rule('/$name/', endpoint='$name.list'),
435 Rule('/$name/<int:id>', endpoint='$name.show')
436 ])
437
438 url_map = Map([resource(name='user'), resource(name='page')])
439
440 When a rule template is called the keyword arguments are used to
441 replace the placeholders in all the string parameters.
442 """
443
444 def __init__(self, rules):
445 self.rules = list(rules)
446
447 def __call__(self, *args, **kwargs):
448 return RuleTemplateFactory(self.rules, dict(*args, **kwargs))
449
450
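The `format_string` helper used by the factory performs `$`-placeholder substitution; for the common cases it behaves like the stdlib's `string.Template`, which is enough to see what the example above expands to:

```python
from string import Template

context = {"name": "user"}
# the template rules from the example above, expanded by hand
rule_str = Template("/$name/").substitute(context)
endpoint = Template("$name.list").substitute(context)
```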
451 class RuleTemplateFactory(RuleFactory):
452 """A factory that fills in template variables into rules. Used by
453 `RuleTemplate` internally.
454
455 :internal:
456 """
457
458 def __init__(self, rules, context):
459 self.rules = rules
460 self.context = context
461
462 def get_rules(self, map):
463 for rulefactory in self.rules:
464 for rule in rulefactory.get_rules(map):
465 new_defaults = subdomain = None
466 if rule.defaults:
467 new_defaults = {}
468 for key, value in iteritems(rule.defaults):
469 if isinstance(value, string_types):
470 value = format_string(value, self.context)
471 new_defaults[key] = value
472 if rule.subdomain is not None:
473 subdomain = format_string(rule.subdomain, self.context)
474 new_endpoint = rule.endpoint
475 if isinstance(new_endpoint, string_types):
476 new_endpoint = format_string(new_endpoint, self.context)
477 yield Rule(
478 format_string(rule.rule, self.context),
479 new_defaults,
480 subdomain,
481 rule.methods,
482 rule.build_only,
483 new_endpoint,
484 rule.strict_slashes,
485 )
486
487
488 def _prefix_names(src):
489 """ast parse and prefix names with `.` to avoid collision with user vars"""
490 tree = ast.parse(src).body[0]
491 if isinstance(tree, ast.Expr):
492 tree = tree.value
493 for node in ast.walk(tree):
494 if isinstance(node, ast.Name):
495 node.id = "." + node.id
496 return tree
497
498
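The `.`-prefix trick works because the `ast` machinery accepts identifiers that would be rejected in source form, so prefixed names can never collide with user-supplied URL arguments. A small stdlib demonstration of the renaming pass:

```python
import ast

# rewrite every Name node in "q + params", as _prefix_names does
tree = ast.parse("q + params").body[0].value
for node in ast.walk(tree):
    if isinstance(node, ast.Name):
        node.id = "." + node.id

names = sorted(n.id for n in ast.walk(tree) if isinstance(n, ast.Name))
```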
499 _CALL_CONVERTER_CODE_FMT = "self._converters[{elem!r}].to_url()"
500 _IF_KWARGS_URL_ENCODE_CODE = """\
501 if kwargs:
502 q = '?'
503 params = self._encode_query_vars(kwargs)
504 else:
505 q = params = ''
506 """
507 _IF_KWARGS_URL_ENCODE_AST = _prefix_names(_IF_KWARGS_URL_ENCODE_CODE)
508 _URL_ENCODE_AST_NAMES = (_prefix_names("q"), _prefix_names("params"))
509
510
511 @implements_to_string
512 class Rule(RuleFactory):
513 """A Rule represents one URL pattern. There are some options for `Rule`
514 that change the way it behaves and are passed to the `Rule` constructor.
515 Note that besides the rule-string all arguments *must* be keyword arguments
516 in order to not break the application on Werkzeug upgrades.
517
518 `string`
519 Rule strings basically are just normal URL paths with placeholders in
520 the format ``<converter(arguments):name>`` where the converter and the
521 arguments are optional. If no converter is defined the `default`
522 converter is used which means `string` in the normal configuration.
523
524 URL rules that end with a slash are branch URLs, others are leaves.
525 If you have `strict_slashes` enabled (which is the default), all
526 branch URLs that are matched without a trailing slash will trigger a
527 redirect to the same URL with the missing slash appended.
528
529 The converters are defined on the `Map`.
530
531 `endpoint`
532 The endpoint for this rule. This can be anything. A reference to a
533 function, a string, a number etc. The preferred way is using a string
534 because the endpoint is used for URL generation.
535
536 `defaults`
537 An optional dict with defaults for other rules with the same endpoint.
538 This is a bit tricky but useful if you want to have unique URLs::
539
540 url_map = Map([
541 Rule('/all/', defaults={'page': 1}, endpoint='all_entries'),
542 Rule('/all/page/<int:page>', endpoint='all_entries')
543 ])
544
545 If a user now visits ``http://example.com/all/page/1`` they will be
546 redirected to ``http://example.com/all/``. If `redirect_defaults` is
547 disabled on the `Map` instance this will only affect the URL
548 generation.
549
550 `subdomain`
551 The subdomain rule string for this rule. If not specified the rule
552 only matches for the `default_subdomain` of the map. If the map is
553 not bound to a subdomain this feature is disabled.
554
555 Can be useful if you want to have user profiles on different subdomains
556 and all subdomains are forwarded to your application::
557
558 url_map = Map([
559 Rule('/', subdomain='<username>', endpoint='user/homepage'),
560 Rule('/stats', subdomain='<username>', endpoint='user/stats')
561 ])
562
563 `methods`
564 A sequence of HTTP methods this rule applies to. If not specified, all
565 methods are allowed. For example this can be useful if you want different
566 endpoints for `POST` and `GET`. If methods are defined and the path
567 matches but the method matched against is not in this list or in the
568 list of another rule for that path the error raised is of the type
569 `MethodNotAllowed` rather than `NotFound`. If `GET` is present in the
570 list of methods and `HEAD` is not, `HEAD` is added automatically.
571
572 .. versionchanged:: 0.6.1
573 `HEAD` is now automatically added to the methods if `GET` is
574 present. The reason for this is that existing code often did not
575 work properly in servers not rewriting `HEAD` to `GET`
576 automatically and it was not documented how `HEAD` should be
577 treated. This was considered a bug in Werkzeug because of that.
578
579 `strict_slashes`
580 Override the `Map` setting for `strict_slashes` only for this rule. If
581 not specified the `Map` setting is used.
582
583 `build_only`
584 Set this to True and the rule will never match but will create a URL
585 that can be built. This is useful if you have resources on a subdomain
586 or folder that are not handled by the WSGI application (like static data).
587
588 `redirect_to`
589 If given this must be either a string or a callable. In case of a
590 callable it's called with the url adapter that triggered the match and
591 the values of the URL as keyword arguments and has to return the target
592 for the redirect, otherwise it has to be a string with placeholders in
593 rule syntax::
594
595 def foo_with_slug(adapter, id):
596 # ask the database for the slug for the old id. this of
597 # course has nothing to do with werkzeug.
598 return 'foo/' + Foo.get_slug_for_id(id)
599
600 url_map = Map([
601 Rule('/foo/<slug>', endpoint='foo'),
602 Rule('/some/old/url/<slug>', redirect_to='foo/<slug>'),
603 Rule('/other/old/url/<int:id>', redirect_to=foo_with_slug)
604 ])
605
606 When the rule is matched the routing system will raise a
607 `RequestRedirect` exception with the target for the redirect.
608
609 Keep in mind that the URL will be joined against the URL root of the
610 script so don't use a leading slash on the target URL unless you
611 really mean root of that domain.
612
613 `alias`
614 If enabled this rule serves as an alias for another rule with the same
615 endpoint and arguments.
616
617 `host`
618 If provided and the URL map has host matching enabled this can be
619 used to provide a match rule for the whole host. This also means
620 that the subdomain feature is disabled.
621
622 .. versionadded:: 0.7
623 The `alias` and `host` parameters were added.
624 """
625
626 def __init__(
627 self,
628 string,
629 defaults=None,
630 subdomain=None,
631 methods=None,
632 build_only=False,
633 endpoint=None,
634 strict_slashes=None,
635 redirect_to=None,
636 alias=False,
637 host=None,
638 ):
639 if not string.startswith("/"):
640 raise ValueError("urls must start with a leading slash")
641 self.rule = string
642 self.is_leaf = not string.endswith("/")
643
644 self.map = None
645 self.strict_slashes = strict_slashes
646 self.subdomain = subdomain
647 self.host = host
648 self.defaults = defaults
649 self.build_only = build_only
650 self.alias = alias
651 if methods is None:
652 self.methods = None
653 else:
654 if isinstance(methods, str):
655 raise TypeError("param `methods` should be `Iterable[str]`, not `str`")
656 self.methods = set([x.upper() for x in methods])
657 if "HEAD" not in self.methods and "GET" in self.methods:
658 self.methods.add("HEAD")
659 self.endpoint = endpoint
660 self.redirect_to = redirect_to
661
662 if defaults:
663 self.arguments = set(map(str, defaults))
664 else:
665 self.arguments = set()
666 self._trace = self._converters = self._regex = self._argument_weights = None
667
668 def empty(self):
669 """
670 Return an unbound copy of this rule.
671
672 This can be useful if you want to reuse an already bound URL for another
673 map. See ``get_empty_kwargs`` to override what keyword arguments are
674 provided to the new copy.
675 """
676 return type(self)(self.rule, **self.get_empty_kwargs())
677
678 def get_empty_kwargs(self):
679 """
680 Provides kwargs for instantiating an empty copy with ``empty()``.
681
682 Use this method to provide custom keyword arguments to the subclass of
683 ``Rule`` when calling ``some_rule.empty()``. Helpful when the subclass
684 has custom keyword arguments that are needed at instantiation.
685
686 Must return a ``dict`` that will be provided as kwargs to the new
687 instance of ``Rule``, following the initial ``self.rule`` value which
688 is always provided as the first, required positional argument.
689 """
690 defaults = None
691 if self.defaults:
692 defaults = dict(self.defaults)
693 return dict(
694 defaults=defaults,
695 subdomain=self.subdomain,
696 methods=self.methods,
697 build_only=self.build_only,
698 endpoint=self.endpoint,
699 strict_slashes=self.strict_slashes,
700 redirect_to=self.redirect_to,
701 alias=self.alias,
702 host=self.host,
703 )
704
705 def get_rules(self, map):
706 yield self
707
708 def refresh(self):
709 """Rebinds and refreshes the URL. Call this if you modified the
710 rule in place.
711
712 :internal:
713 """
714 self.bind(self.map, rebind=True)
715
716 def bind(self, map, rebind=False):
717 """Bind the url to a map and create a regular expression based on
718 the information from the rule itself and the defaults from the map.
719
720 :internal:
721 """
722 if self.map is not None and not rebind:
723 raise RuntimeError("url rule %r already bound to map %r" % (self, self.map))
724 self.map = map
725 if self.strict_slashes is None:
726 self.strict_slashes = map.strict_slashes
727 if self.subdomain is None:
728 self.subdomain = map.default_subdomain
729 self.compile()
730
731 def get_converter(self, variable_name, converter_name, args, kwargs):
732 """Looks up the converter for the given parameter.
733
734 .. versionadded:: 0.9
735 """
736 if converter_name not in self.map.converters:
737 raise LookupError("the converter %r does not exist" % converter_name)
738 return self.map.converters[converter_name](self.map, *args, **kwargs)
739
740 def _encode_query_vars(self, query_vars):
741 return url_encode(
742 query_vars,
743 charset=self.map.charset,
744 sort=self.map.sort_parameters,
745 key=self.map.sort_key,
746 )
747
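`url_encode` here is Werkzeug's own helper; for leftover build kwargs its effect is comparable to the stdlib's `urlencode` over sorted items (a rough analogue, not the exact implementation):

```python
from urllib.parse import urlencode

# rough stdlib analogue of encoding leftover build kwargs into a query string
qs = urlencode(sorted({"page": 2, "q": "cats"}.items()))
```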
748 def compile(self):
749 """Compiles the regular expression and stores it."""
750 assert self.map is not None, "rule not bound"
751
752 if self.map.host_matching:
753 domain_rule = self.host or ""
754 else:
755 domain_rule = self.subdomain or ""
756
757 self._trace = []
758 self._converters = {}
759 self._static_weights = []
760 self._argument_weights = []
761 regex_parts = []
762
763 def _build_regex(rule):
764 index = 0
765 for converter, arguments, variable in parse_rule(rule):
766 if converter is None:
767 regex_parts.append(re.escape(variable))
768 self._trace.append((False, variable))
769 for part in variable.split("/"):
770 if part:
771 self._static_weights.append((index, -len(part)))
772 else:
773 if arguments:
774 c_args, c_kwargs = parse_converter_args(arguments)
775 else:
776 c_args = ()
777 c_kwargs = {}
778 convobj = self.get_converter(variable, converter, c_args, c_kwargs)
779 regex_parts.append("(?P<%s>%s)" % (variable, convobj.regex))
780 self._converters[variable] = convobj
781 self._trace.append((True, variable))
782 self._argument_weights.append(convobj.weight)
783 self.arguments.add(str(variable))
784 index = index + 1
785
786 _build_regex(domain_rule)
787 regex_parts.append("\\|")
788 self._trace.append((False, "|"))
789 _build_regex(self.rule if self.is_leaf else self.rule.rstrip("/"))
790 if not self.is_leaf:
791 self._trace.append((False, "/"))
792
793 self._build = self._compile_builder(False).__get__(self, None)
794 self._build_unknown = self._compile_builder(True).__get__(self, None)
795
796 if self.build_only:
797 return
798 regex = r"^%s%s$" % (
799 u"".join(regex_parts),
800 (not self.is_leaf or not self.strict_slashes)
801 and "(?<!/)(?P<__suffix__>/?)"
802 or "",
803 )
804 self._regex = re.compile(regex, re.UNICODE)
805
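For a bound rule like `Rule("/page/<int:page>")` on the default (empty) subdomain, `compile` produces a pattern along these lines; a hand-written approximation using only `re`:

```python
import re

# approximation of the compiled pattern: a "subdomain|/path" composite,
# with the int converter contributing r"\d+" as its named group regex
pattern = re.compile(r"^\|/page/(?P<page>\d+)$", re.UNICODE)
match = pattern.search("|/page/42")
```

The actual pattern may additionally carry the `__suffix__` group for non-leaf or non-strict rules, as built above.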
806 def match(self, path, method=None):
807 """Check if the rule matches a given path. Path is a string in the
808 form ``"subdomain|/path"`` and is assembled by the map. If
809 the map is doing host matching the subdomain part will be the host
810 instead.
811
812 If the rule matches a dict with the converted values is returned,
813 otherwise the return value is `None`.
814
815 :internal:
816 """
817 if not self.build_only:
818 m = self._regex.search(path)
819 if m is not None:
820 groups = m.groupdict()
821 # we have a folder like part of the url without a trailing
822 # slash and strict slashes enabled. raise an exception that
823 # tells the map to redirect to the same url but with a
824 # trailing slash
825 if (
826 self.strict_slashes
827 and not self.is_leaf
828 and not groups.pop("__suffix__")
829 and (
830 method is None or self.methods is None or method in self.methods
831 )
832 ):
833 raise RequestSlash()
834 # if we are not in strict slashes mode we have to remove
835 # a __suffix__
836 elif not self.strict_slashes:
837 del groups["__suffix__"]
838
839 result = {}
840 for name, value in iteritems(groups):
841 try:
842 value = self._converters[name].to_python(value)
843 except ValidationError:
844 return
845 result[str(name)] = value
846 if self.defaults:
847 result.update(self.defaults)
848
849 if self.alias and self.map.redirect_defaults:
850 raise RequestAliasRedirect(result)
851
852 return result
853
854 @staticmethod
855 def _get_func_code(code, name):
856 globs, locs = {}, {}
857 exec(code, globs, locs)
858 return locs[name]
859
860 def _compile_builder(self, append_unknown=True):
861 defaults = self.defaults or {}
862 dom_ops = []
863 url_ops = []
864
865 opl = dom_ops
866 for is_dynamic, data in self._trace:
867 if data == "|" and opl is dom_ops:
868 opl = url_ops
869 continue
870 # this seems like a silly case to ever come up but:
871 # if a default is given for a value that appears in the rule,
872 # resolve it to a constant ahead of time
873 if is_dynamic and data in defaults:
874 data = self._converters[data].to_url(defaults[data])
875 opl.append((False, data))
876 elif not is_dynamic:
877 opl.append(
878 (False, url_quote(to_bytes(data, self.map.charset), safe="/:|+"))
879 )
880 else:
881 opl.append((True, data))
882
883 def _convert(elem):
884 ret = _prefix_names(_CALL_CONVERTER_CODE_FMT.format(elem=elem))
885 ret.args = [ast.Name(str(elem), ast.Load())] # str for py2
886 return ret
887
888 def _parts(ops):
889 parts = [
890 _convert(elem) if is_dynamic else ast.Str(s=elem)
891 for is_dynamic, elem in ops
892 ]
893 parts = parts or [ast.Str("")]
894 # constant fold
895 ret = [parts[0]]
896 for p in parts[1:]:
897 if isinstance(p, ast.Str) and isinstance(ret[-1], ast.Str):
898 ret[-1] = ast.Str(ret[-1].s + p.s)
899 else:
900 ret.append(p)
901 return ret
902
903 dom_parts = _parts(dom_ops)
904 url_parts = _parts(url_ops)
905 if not append_unknown:
906 body = []
907 else:
908 body = [_IF_KWARGS_URL_ENCODE_AST]
909 url_parts.extend(_URL_ENCODE_AST_NAMES)
910
911 def _join(parts):
912 if len(parts) == 1: # shortcut
913 return parts[0]
914 elif hasattr(ast, "JoinedStr"): # py36+
915 return ast.JoinedStr(parts)
916 else:
917 call = _prefix_names('"".join()')
918 call.args = [ast.Tuple(parts, ast.Load())]
919 return call
920
921 body.append(
922 ast.Return(ast.Tuple([_join(dom_parts), _join(url_parts)], ast.Load()))
923 )
924
925 # str is necessary for python2
926 pargs = [
927 str(elem)
928 for is_dynamic, elem in dom_ops + url_ops
929 if is_dynamic and elem not in defaults
930 ]
931 kargs = [str(k) for k in defaults]
932
933 func_ast = _prefix_names("def _(): pass")
934 func_ast.name = "<builder:{!r}>".format(self.rule)
935 if hasattr(ast, "arg"): # py3
936 func_ast.args.args.append(ast.arg(".self", None))
937 for arg in pargs + kargs:
938 func_ast.args.args.append(ast.arg(arg, None))
939 func_ast.args.kwarg = ast.arg(".kwargs", None)
940 else:
941 func_ast.args.args.append(ast.Name(".self", ast.Load()))
942 for arg in pargs + kargs:
943 func_ast.args.args.append(ast.Name(arg, ast.Load()))
944 func_ast.args.kwarg = ".kwargs"
945 for _ in kargs:
946 func_ast.args.defaults.append(ast.Str(""))
947 func_ast.body = body
948
949 module = ast.fix_missing_locations(ast.Module([func_ast]))
950 code = compile(module, "<werkzeug routing>", "exec")
951 return self._get_func_code(code, func_ast.name)
952
953 def build(self, values, append_unknown=True):
954 """Assembles the relative url for that rule and the subdomain.
955 If building doesn't work for some reason `None` is returned.
956
957 :internal:
958 """
959 try:
960 if append_unknown:
961 return self._build_unknown(**values)
962 else:
963 return self._build(**values)
964 except ValidationError:
965 return None
966
967 def provides_defaults_for(self, rule):
968 """Check if this rule has defaults for a given rule.
969
970 :internal:
971 """
972 return (
973 not self.build_only
974 and self.defaults
975 and self.endpoint == rule.endpoint
976 and self != rule
977 and self.arguments == rule.arguments
978 )
979
980 def suitable_for(self, values, method=None):
981 """Check if the dict of values has enough data for url generation.
982
983 :internal:
984 """
985 # if a method was given explicitly and that method is not supported
986 # by this rule, this rule is not suitable.
987 if (
988 method is not None
989 and self.methods is not None
990 and method not in self.methods
991 ):
992 return False
993
994 defaults = self.defaults or ()
995
996 # all arguments required must be either in the defaults dict or
997 # the value dictionary otherwise it's not suitable
998 for key in self.arguments:
999 if key not in defaults and key not in values:
1000 return False
1001
1002 # in case defaults are given we ensure that either the value was
1003 # skipped or the value is the same as the default value.
1004 if defaults:
1005 for key, value in iteritems(defaults):
1006 if key in values and value != values[key]:
1007 return False
1008
1009 return True
1010
1011 def match_compare_key(self):
1012 """The match compare key for sorting.
1013
1014 Current implementation:
1015
1016 1. rules without any arguments come first for performance
1017 reasons only as we expect them to match faster and some
1018 common ones usually don't have any arguments (index pages etc.)
1019 2. rules with more static parts come first so the second argument
1020 is the negative length of the static weights list.
1021 3. we order by static weights, which is a combination of index
1022 and length
1023 4. The more complex rules come first so the next argument is the
1024 negative length of the argument weights list.
1025 5. lastly we order by the actual argument weights.
1026
1027 :internal:
1028 """
1029 return (
1030 bool(self.arguments),
1031 -len(self._static_weights),
1032 self._static_weights,
1033 -len(self._argument_weights),
1034 self._argument_weights,
1035 )
1036
1037 def build_compare_key(self):
1038 """The build compare key for sorting.
1039
1040 :internal:
1041 """
1042 return 1 if self.alias else 0, -len(self.arguments), -len(self.defaults or ())
1043
1044 def __eq__(self, other):
1045 return self.__class__ is other.__class__ and self._trace == other._trace
1046
1047 __hash__ = None
1048
1049 def __ne__(self, other):
1050 return not self.__eq__(other)
1051
1052 def __str__(self):
1053 return self.rule
1054
1055 @native_string_result
1056 def __repr__(self):
1057 if self.map is None:
1058 return u"<%s (unbound)>" % self.__class__.__name__
1059 tmp = []
1060 for is_dynamic, data in self._trace:
1061 if is_dynamic:
1062 tmp.append(u"<%s>" % data)
1063 else:
1064 tmp.append(data)
1065 return u"<%s %s%s -> %s>" % (
1066 self.__class__.__name__,
1067 repr((u"".join(tmp)).lstrip(u"|")).lstrip(u"u"),
1068 self.methods is not None and u" (%s)" % u", ".join(self.methods) or u"",
1069 self.endpoint,
1070 )
1071
1072
1073 class BaseConverter(object):
1074 """Base class for all converters."""
1075
1076 regex = "[^/]+"
1077 weight = 100
1078
1079 def __init__(self, map):
1080 self.map = map
1081
1082 def to_python(self, value):
1083 return value
1084
1085 def to_url(self, value):
1086 if isinstance(value, (bytes, bytearray)):
1087 return _fast_url_quote(value)
1088 return _fast_url_quote(text_type(value).encode(self.map.charset))
1089
1090
1091 class UnicodeConverter(BaseConverter):
1092 """This converter is the default converter and accepts any string but
1093 only one path segment. Thus the string cannot include a slash.
1094
1095 This is the default validator.
1096
1097 Example::
1098
1099 Rule('/pages/<page>'),
1100 Rule('/<string(length=2):lang_code>')
1101
1102 :param map: the :class:`Map`.
1103 :param minlength: the minimum length of the string. Must be greater
1104 than or equal to 1.
1105 :param maxlength: the maximum length of the string.
1106 :param length: the exact length of the string.
1107 """
1108
1109 def __init__(self, map, minlength=1, maxlength=None, length=None):
1110 BaseConverter.__init__(self, map)
1111 if length is not None:
1112 length = "{%d}" % int(length)
1113 else:
1114 if maxlength is None:
1115 maxlength = ""
1116 else:
1117 maxlength = int(maxlength)
1118 length = "{%s,%s}" % (int(minlength), maxlength)
1119 self.regex = "[^/]" + length
1120
1121
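For instance, `string(length=2)` yields the regex `[^/]{2}`: exactly two characters, neither of which may be a slash. Checking that with plain `re`:

```python
import re

# regex generated by UnicodeConverter with length=2
segment = re.fullmatch("[^/]{2}", "en")
rejected = re.fullmatch("[^/]{2}", "e/n")  # slash or wrong length fails
```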
1122 class AnyConverter(BaseConverter):
1123 """Matches one of the items provided. Items can either be Python
1124 identifiers or strings::
1125
1126 Rule('/<any(about, help, imprint, class, "foo,bar"):page_name>')
1127
1128 :param map: the :class:`Map`.
1129 :param items: this function accepts the possible items as positional
1130 arguments.
1131 """
1132
1133 def __init__(self, map, *items):
1134 BaseConverter.__init__(self, map)
1135 self.regex = "(?:%s)" % "|".join([re.escape(x) for x in items])
1136
1137
1138 class PathConverter(BaseConverter):
1139 """Like the default :class:`UnicodeConverter`, but it also matches
1140 slashes. This is useful for wikis and similar applications::
1141
1142 Rule('/<path:wikipage>')
1143 Rule('/<path:wikipage>/edit')
1144
1145 :param map: the :class:`Map`.
1146 """
1147
1148 regex = "[^/].*?"
1149 weight = 200
1150
1151
1152 class NumberConverter(BaseConverter):
1153 """Baseclass for `IntegerConverter` and `FloatConverter`.
1154
1155 :internal:
1156 """
1157
1158 weight = 50
1159
1160 def __init__(self, map, fixed_digits=0, min=None, max=None, signed=False):
1161 if signed:
1162 self.regex = self.signed_regex
1163 BaseConverter.__init__(self, map)
1164 self.fixed_digits = fixed_digits
1165 self.min = min
1166 self.max = max
1167 self.signed = signed
1168
1169 def to_python(self, value):
1170 if self.fixed_digits and len(value) != self.fixed_digits:
1171 raise ValidationError()
1172 value = self.num_convert(value)
1173 if (self.min is not None and value < self.min) or (
1174 self.max is not None and value > self.max
1175 ):
1176 raise ValidationError()
1177 return value
1178
1179 def to_url(self, value):
1180 value = self.num_convert(value)
1181 if self.fixed_digits:
1182 value = ("%%0%sd" % self.fixed_digits) % value
1183 return str(value)
1184
1185 @property
1186 def signed_regex(self):
1187 return r"-?" + self.regex
1188
1189
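Two small behaviors of `NumberConverter` worth noting: `fixed_digits` zero-pads built values via a dynamically constructed format string, and `signed=True` simply prepends `-?` to the base pattern:

```python
import re

# ("%%0%sd" % fixed_digits) builds e.g. "%04d", which pads 7 to "0007"
padded = ("%%0%sd" % 4) % 7
# signed_regex for the int converter is "-?" + r"\d+"
negative = re.fullmatch(r"-?\d+", "-42")
```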
1190 class IntegerConverter(NumberConverter):
1191 """This converter only accepts integer values::
1192
1193 Rule("/page/<int:page>")
1194
1195 By default it only accepts unsigned, positive values. The ``signed``
1196 parameter will enable signed, negative values. ::
1197
1198 Rule("/page/<int(signed=True):page>")
1199
1200 :param map: The :class:`Map`.
1201 :param fixed_digits: The number of fixed digits in the URL. If you
1202 set this to ``4`` for example, the rule will only match if the
1203 URL looks like ``/0001/``. The default is variable length.
1204 :param min: The minimal value.
1205 :param max: The maximal value.
1206 :param signed: Allow signed (negative) values.
1207
1208 .. versionadded:: 0.15
1209 The ``signed`` parameter.
1210 """
1211
1212 regex = r"\d+"
1213 num_convert = int
1214
1215
1216 class FloatConverter(NumberConverter):
1217 """This converter only accepts floating point values::
1218
1219 Rule("/probability/<float:probability>")
1220
1221 By default it only accepts unsigned, positive values. The ``signed``
1222 parameter will enable signed, negative values. ::
1223
1224 Rule("/offset/<float(signed=True):offset>")
1225
1226 :param map: The :class:`Map`.
1227 :param min: The minimal value.
1228 :param max: The maximal value.
1229 :param signed: Allow signed (negative) values.
1230
1231 .. versionadded:: 0.15
1232 The ``signed`` parameter.
1233 """
1234
1235 regex = r"\d+\.\d+"
1236 num_convert = float
1237
1238 def __init__(self, map, min=None, max=None, signed=False):
1239 NumberConverter.__init__(self, map, min=min, max=max, signed=signed)
1240
1241
1242 class UUIDConverter(BaseConverter):
1243 """This converter only accepts UUID strings::
1244
1245 Rule('/object/<uuid:identifier>')
1246
1247 .. versionadded:: 0.10
1248
1249 :param map: the :class:`Map`.
1250 """
1251
1252 regex = (
1253 r"[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-"
1254 r"[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}"
1255 )
1256
1257 def to_python(self, value):
1258 return uuid.UUID(value)
1259
1260 def to_url(self, value):
1261 return str(value)
1262
1263
1264 #: the default converter mapping for the map.
1265 DEFAULT_CONVERTERS = {
1266 "default": UnicodeConverter,
1267 "string": UnicodeConverter,
1268 "any": AnyConverter,
1269 "path": PathConverter,
1270 "int": IntegerConverter,
1271 "float": FloatConverter,
1272 "uuid": UUIDConverter,
1273 }
1274
1275
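The uuid converter's pattern and round-trip behavior can be checked with the stdlib alone:

```python
import re
import uuid

# the regex UUIDConverter matches against
UUID_RE = (
    r"[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-"
    r"[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}"
)
value = "a8098c1a-f86e-11da-bd1a-00112444be1e"
matched = re.fullmatch(UUID_RE, value)  # what the router matches
obj = uuid.UUID(value)                  # what to_python returns
```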
1276 class Map(object):
1277 """The map class stores all the URL rules and some configuration
1278 parameters. Some of the configuration values are only stored on the
1279 `Map` instance since those affect all rules, others are just defaults
1280 and can be overridden for each rule. Note that you have to specify all
1281 arguments besides the `rules` as keyword arguments!
1282
1283 :param rules: sequence of url rules for this map.
1284 :param default_subdomain: The default subdomain for rules without a
1285 subdomain defined.
1286 :param charset: charset of the url. defaults to ``"utf-8"``
1287 :param strict_slashes: Take care of trailing slashes.
1288 :param redirect_defaults: This will redirect to the default rule if it
1289 wasn't visited that way. This helps create
1290 unique URLs.
1291 :param converters: A dict of converters that adds additional converters
1292 to the list of converters. If you redefine one
1293 converter this will override the original one.
1294 :param sort_parameters: If set to `True` the url parameters are sorted.
1295 See `url_encode` for more details.
1296 :param sort_key: The sort key function for `url_encode`.
1297 :param encoding_errors: the error method to use for decoding
1298 :param host_matching: if set to `True` it enables the host matching
1299 feature and disables the subdomain one. If
1300 enabled the `host` parameter to rules is used
1301 instead of the `subdomain` one.
1302
1303 .. versionadded:: 0.5
1304 `sort_parameters` and `sort_key` were added.
1305
1306 .. versionadded:: 0.7
1307 `encoding_errors` and `host_matching` were added.
1308 """
1309
1310 #: A dict of default converters to be used.
1311 default_converters = ImmutableDict(DEFAULT_CONVERTERS)
1312
1313 def __init__(
1314 self,
1315 rules=None,
1316 default_subdomain="",
1317 charset="utf-8",
1318 strict_slashes=True,
1319 redirect_defaults=True,
1320 converters=None,
1321 sort_parameters=False,
1322 sort_key=None,
1323 encoding_errors="replace",
1324 host_matching=False,
1325 ):
1326 self._rules = []
1327 self._rules_by_endpoint = {}
1328 self._remap = True
1329 self._remap_lock = Lock()
1330
1331 self.default_subdomain = default_subdomain
1332 self.charset = charset
1333 self.encoding_errors = encoding_errors
1334 self.strict_slashes = strict_slashes
1335 self.redirect_defaults = redirect_defaults
1336 self.host_matching = host_matching
1337
1338 self.converters = self.default_converters.copy()
1339 if converters:
1340 self.converters.update(converters)
1341
1342 self.sort_parameters = sort_parameters
1343 self.sort_key = sort_key
1344
1345 for rulefactory in rules or ():
1346 self.add(rulefactory)
1347
1348 def is_endpoint_expecting(self, endpoint, *arguments):
1349 """Iterate over all rules and check if the endpoint expects
1350 the arguments provided. This is useful, for example, if you have
1351 some URLs that expect a language code and others that do not, and
1352 you want to wrap the builder so that the current language code is
1353 added automatically when an endpoint expects it but it was not
1354 provided.
1355
1356 :param endpoint: the endpoint to check.
1357 :param arguments: this function accepts one or more arguments
1358 as positional arguments. Each one of them is
1359 checked.
1360 """
1361 self.update()
1362 arguments = set(arguments)
1363 for rule in self._rules_by_endpoint[endpoint]:
1364 if arguments.issubset(rule.arguments):
1365 return True
1366 return False
1367
1368 def iter_rules(self, endpoint=None):
1369 """Iterate over all rules or the rules of an endpoint.
1370
1371 :param endpoint: if provided only the rules for that endpoint
1372 are returned.
1373 :return: an iterator
1374 """
1375 self.update()
1376 if endpoint is not None:
1377 return iter(self._rules_by_endpoint[endpoint])
1378 return iter(self._rules)
1379
1380 def add(self, rulefactory):
1381 """Add a new rule or factory to the map and bind it. Requires that the
1382 rule is not bound to another map.
1383
1384 :param rulefactory: a :class:`Rule` or :class:`RuleFactory`
1385 """
1386 for rule in rulefactory.get_rules(self):
1387 rule.bind(self)
1388 self._rules.append(rule)
1389 self._rules_by_endpoint.setdefault(rule.endpoint, []).append(rule)
1390 self._remap = True
1391
1392 def bind(
1393 self,
1394 server_name,
1395 script_name=None,
1396 subdomain=None,
1397 url_scheme="http",
1398 default_method="GET",
1399 path_info=None,
1400 query_args=None,
1401 ):
1402 """Return a new :class:`MapAdapter` with the details specified to the
1403 call. Note that `script_name` will default to ``'/'`` if not further
1404 specified or `None`. At the very least `server_name` is required
1405 because the HTTP RFC requires absolute URLs for redirects and so all
1406 redirect exceptions raised by Werkzeug will contain the full canonical
1407 URL.
1408
1409 If no path_info is passed to :meth:`match` it will use the default path
1410 info passed to bind. While this doesn't really make sense for
1411 manual bind calls, it's useful if you bind a map to a WSGI
1412 environment which already contains the path info.
1413
1414 `subdomain` will default to the `default_subdomain` for this map if
1415 not defined. If there is no `default_subdomain` you cannot use the
1416 subdomain feature.
1417
1418 .. versionadded:: 0.7
1419 `query_args` added
1420
1421 .. versionadded:: 0.8
1422 `query_args` can now also be a string.
1423
1424 .. versionchanged:: 0.15
1425 ``path_info`` defaults to ``'/'`` if ``None``.
1426 """
1427 server_name = server_name.lower()
1428 if self.host_matching:
1429 if subdomain is not None:
1430 raise RuntimeError("host matching enabled and a subdomain was provided")
1431 elif subdomain is None:
1432 subdomain = self.default_subdomain
1433 if script_name is None:
1434 script_name = "/"
1435 if path_info is None:
1436 path_info = "/"
1437 try:
1438 server_name = _encode_idna(server_name)
1439 except UnicodeError:
1440 raise BadHost()
1441 return MapAdapter(
1442 self,
1443 server_name,
1444 script_name,
1445 subdomain,
1446 url_scheme,
1447 path_info,
1448 default_method,
1449 query_args,
1450 )
1451
1452 def bind_to_environ(self, environ, server_name=None, subdomain=None):
1453 """Like :meth:`bind` but you can pass it a WSGI environment and it
1454 will fetch the information from that dictionary. Note that because of
1455 limitations in the protocol there is no way to get the current
1456 subdomain and real `server_name` from the environment. If you don't
1457 provide it, Werkzeug will use `SERVER_NAME` and `SERVER_PORT` (or
1458 `HTTP_HOST` if provided) as the `server_name`, with the subdomain
1459 feature disabled.
1460
1461 If `subdomain` is `None` but an environment and a server name is
1462 provided it will calculate the current subdomain automatically.
1463 Example: `server_name` is ``'example.com'`` and the `SERVER_NAME`
1464 in the wsgi `environ` is ``'staging.dev.example.com'`` the calculated
1465 subdomain will be ``'staging.dev'``.
1466
1467 If the object passed as environ has an environ attribute, the value of
1468 this attribute is used instead. This allows you to pass request
1469 objects. Additionally `PATH_INFO` is added as a default of the
1470 :class:`MapAdapter` so that you don't have to pass the path info to
1471 the match method.
1472
1473 .. versionchanged:: 0.5
1474 previously this method accepted a bogus `calculate_subdomain`
1475 parameter that did not have any effect. It was removed because
1476 of that.
1477
1478 .. versionchanged:: 0.8
1479 This will no longer raise a ValueError when an unexpected server
1480 name was passed.
1481
1482 :param environ: a WSGI environment.
1483 :param server_name: an optional server name hint (see above).
1484 :param subdomain: optionally the current subdomain (see above).
1485 """
1486 environ = _get_environ(environ)
1487
1488 wsgi_server_name = get_host(environ).lower()
1489
1490 if server_name is None:
1491 server_name = wsgi_server_name
1492 else:
1493 server_name = server_name.lower()
1494
1495 if subdomain is None and not self.host_matching:
1496 cur_server_name = wsgi_server_name.split(".")
1497 real_server_name = server_name.split(".")
1498 offset = -len(real_server_name)
1499 if cur_server_name[offset:] != real_server_name:
1500 # This can happen even with valid configs if the server was
1501 # accessed directly by IP address in some situations.
1502 # Instead of raising an exception like in Werkzeug 0.7 or
1503 # earlier we go by an invalid subdomain which will result
1504 # in a 404 error on matching.
1505 subdomain = "<invalid>"
1506 else:
1507 subdomain = ".".join(filter(None, cur_server_name[:offset]))
1508
1509 def _get_wsgi_string(name):
1510 val = environ.get(name)
1511 if val is not None:
1512 return wsgi_decoding_dance(val, self.charset)
1513
1514 script_name = _get_wsgi_string("SCRIPT_NAME")
1515 path_info = _get_wsgi_string("PATH_INFO")
1516 query_args = _get_wsgi_string("QUERY_STRING")
1517 return Map.bind(
1518 self,
1519 server_name,
1520 script_name,
1521 subdomain,
1522 environ["wsgi.url_scheme"],
1523 environ["REQUEST_METHOD"],
1524 path_info,
1525 query_args=query_args,
1526 )
1527
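The subdomain computed in ``bind_to_environ`` is just the leading labels of the observed WSGI host once the configured ``server_name`` suffix has been stripped. A minimal stdlib sketch of that comparison (the function name is illustrative, not part of Werkzeug's API):

```python
def guess_subdomain(wsgi_host, server_name):
    # Mirror of the offset logic above: compare the trailing labels of
    # the observed host against the configured server name.
    cur = wsgi_host.lower().split(".")
    real = server_name.lower().split(".")
    offset = -len(real)
    if cur[offset:] != real:
        # e.g. the server was accessed directly by IP address; the
        # invalid subdomain then results in a 404 on matching.
        return "<invalid>"
    return ".".join(filter(None, cur[:offset]))
```

For ``staging.dev.example.com`` against ``example.com`` this yields ``'staging.dev'``, matching the docstring's example.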
1528 def update(self):
1529 """Called before matching and building to keep the compiled rules
1530 in the correct order after things changed.
1531 """
1532 if not self._remap:
1533 return
1534
1535 with self._remap_lock:
1536 if not self._remap:
1537 return
1538
1539 self._rules.sort(key=lambda x: x.match_compare_key())
1540 for rules in itervalues(self._rules_by_endpoint):
1541 rules.sort(key=lambda x: x.build_compare_key())
1542 self._remap = False
1543
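``update`` above is an instance of the classic double-checked locking pattern: test the dirty flag, take the lock, test again, then do the work exactly once. A generic sketch of the same idea (the class name and counter are illustrative):

```python
import threading

class LazySorter(object):
    """Re-sorts its items at most once per invalidation, even when
    update() is called from several threads."""

    def __init__(self, items):
        self._items = items
        self._remap = True
        self._remap_lock = threading.Lock()
        self.sort_count = 0  # for demonstration only

    def update(self):
        if not self._remap:
            return  # fast path: no lock taken once sorted
        with self._remap_lock:
            if not self._remap:  # another thread finished first
                return
            self._items.sort()
            self.sort_count += 1
            self._remap = False
```

The second check inside the lock is what prevents two threads that both saw a dirty flag from sorting twice.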
1544 def __repr__(self):
1545 rules = self.iter_rules()
1546 return "%s(%s)" % (self.__class__.__name__, pformat(list(rules)))
1547
1548
1549 class MapAdapter(object):
1550
1551 """Returned by :meth:`Map.bind` or :meth:`Map.bind_to_environ` and does
1552 the URL matching and building based on runtime information.
1553 """
1554
1555 def __init__(
1556 self,
1557 map,
1558 server_name,
1559 script_name,
1560 subdomain,
1561 url_scheme,
1562 path_info,
1563 default_method,
1564 query_args=None,
1565 ):
1566 self.map = map
1567 self.server_name = to_unicode(server_name)
1568 script_name = to_unicode(script_name)
1569 if not script_name.endswith(u"/"):
1570 script_name += u"/"
1571 self.script_name = script_name
1572 self.subdomain = to_unicode(subdomain)
1573 self.url_scheme = to_unicode(url_scheme)
1574 self.path_info = to_unicode(path_info)
1575 self.default_method = to_unicode(default_method)
1576 self.query_args = query_args
1577
1578 def dispatch(
1579 self, view_func, path_info=None, method=None, catch_http_exceptions=False
1580 ):
1581 """Does the complete dispatching process. `view_func` is called with
1582 the endpoint and a dict with the values for the view. It should
1583 look up the view function, call it, and return a response object
1584 or WSGI application. HTTP exceptions are not caught by default
1585 so that applications can display nicer error messages by just
1586 catching them by hand. If you want to stick with the default
1587 error messages you can pass it ``catch_http_exceptions=True`` and
1588 it will catch the HTTP exceptions.
1589
1590 Here is a small example of dispatch usage::
1591
1592 from werkzeug.wrappers import Request, Response
1593 from werkzeug.wsgi import responder
1594 from werkzeug.routing import Map, Rule
1595
1596 def on_index(request):
1597 return Response('Hello from the index')
1598
1599 url_map = Map([Rule('/', endpoint='index')])
1600 views = {'index': on_index}
1601
1602 @responder
1603 def application(environ, start_response):
1604 request = Request(environ)
1605 urls = url_map.bind_to_environ(environ)
1606 return urls.dispatch(lambda e, v: views[e](request, **v),
1607 catch_http_exceptions=True)
1608
1609 Keep in mind that this method might return exception objects, too, so
1610 use :class:`Response.force_type` to get a response object.
1611
1612 :param view_func: a function that is called with the endpoint as
1613 first argument and the value dict as second. Has
1614 to dispatch to the actual view function with this
1615 information. (see above)
1616 :param path_info: the path info to use for matching. Overrides the
1617 path info specified on binding.
1618 :param method: the HTTP method used for matching. Overrides the
1619 method specified on binding.
1620 :param catch_http_exceptions: set to `True` to catch any of the
1621 werkzeug :class:`HTTPException`\\s.
1622 """
1623 try:
1624 try:
1625 endpoint, args = self.match(path_info, method)
1626 except RequestRedirect as e:
1627 return e
1628 return view_func(endpoint, args)
1629 except HTTPException as e:
1630 if catch_http_exceptions:
1631 return e
1632 raise
1633
1634 def match(self, path_info=None, method=None, return_rule=False, query_args=None):
1635 """The usage is simple: you just pass the match method the current
1636 path info as well as the method (which defaults to `GET`). The
1637 following things can then happen:
1638
1639 - you receive a `NotFound` exception that indicates that no URL is
1640 matching. A `NotFound` exception is also a WSGI application you
1641 can call to get a default "page not found" page (happens to be the
1642 same object as `werkzeug.exceptions.NotFound`)
1643
1644 - you receive a `MethodNotAllowed` exception that indicates that there
1645 is a match for this URL but not for the current request method.
1646 This is useful for RESTful applications.
1647
1648 - you receive a `RequestRedirect` exception with a `new_url`
1649 attribute. This exception is used to notify you about a redirect
1650 Werkzeug requests from your WSGI application. This is for example the
1651 case if you request ``/foo`` although the correct URL is ``/foo/``.
1652 You can use the `RequestRedirect` instance as response-like object
1653 similar to all other subclasses of `HTTPException`.
1654
1655 - you get a tuple in the form ``(endpoint, arguments)`` if there is
1656 a match (unless `return_rule` is True, in which case you get a tuple
1657 in the form ``(rule, arguments)``)
1658
1659 If the path info is not passed to the match method the default path
1660 info of the map is used (defaults to the root URL if not defined
1661 explicitly).
1662
1663 All of the exceptions raised are subclasses of `HTTPException` so they
1664 can be used as WSGI responses. They will all render generic error or
1665 redirect pages.
1666
1667 Here is a small example for matching:
1668
1669 >>> m = Map([
1670 ... Rule('/', endpoint='index'),
1671 ... Rule('/downloads/', endpoint='downloads/index'),
1672 ... Rule('/downloads/<int:id>', endpoint='downloads/show')
1673 ... ])
1674 >>> urls = m.bind("example.com", "/")
1675 >>> urls.match("/", "GET")
1676 ('index', {})
1677 >>> urls.match("/downloads/42")
1678 ('downloads/show', {'id': 42})
1679
1680 And here is what happens on redirect and missing URLs:
1681
1682 >>> urls.match("/downloads")
1683 Traceback (most recent call last):
1684 ...
1685 RequestRedirect: http://example.com/downloads/
1686 >>> urls.match("/missing")
1687 Traceback (most recent call last):
1688 ...
1689 NotFound: 404 Not Found
1690
1691 :param path_info: the path info to use for matching. Overrides the
1692 path info specified on binding.
1693 :param method: the HTTP method used for matching. Overrides the
1694 method specified on binding.
1695 :param return_rule: return the rule that matched instead of just the
1696 endpoint (defaults to `False`).
1697 :param query_args: optional query arguments that are used for
1698 automatic redirects as string or dictionary. It's
1699 currently not possible to use the query arguments
1700 for URL matching.
1701
1702 .. versionadded:: 0.6
1703 `return_rule` was added.
1704
1705 .. versionadded:: 0.7
1706 `query_args` was added.
1707
1708 .. versionchanged:: 0.8
1709 `query_args` can now also be a string.
1710 """
1711 self.map.update()
1712 if path_info is None:
1713 path_info = self.path_info
1714 else:
1715 path_info = to_unicode(path_info, self.map.charset)
1716 if query_args is None:
1717 query_args = self.query_args
1718 method = (method or self.default_method).upper()
1719
1720 path = u"%s|%s" % (
1721 self.map.host_matching and self.server_name or self.subdomain,
1722 path_info and "/%s" % path_info.lstrip("/"),
1723 )
1724
1725 have_match_for = set()
1726 for rule in self.map._rules:
1727 try:
1728 rv = rule.match(path, method)
1729 except RequestSlash:
1730 raise RequestRedirect(
1731 self.make_redirect_url(
1732 url_quote(path_info, self.map.charset, safe="/:|+") + "/",
1733 query_args,
1734 )
1735 )
1736 except RequestAliasRedirect as e:
1737 raise RequestRedirect(
1738 self.make_alias_redirect_url(
1739 path, rule.endpoint, e.matched_values, method, query_args
1740 )
1741 )
1742 if rv is None:
1743 continue
1744 if rule.methods is not None and method not in rule.methods:
1745 have_match_for.update(rule.methods)
1746 continue
1747
1748 if self.map.redirect_defaults:
1749 redirect_url = self.get_default_redirect(rule, method, rv, query_args)
1750 if redirect_url is not None:
1751 raise RequestRedirect(redirect_url)
1752
1753 if rule.redirect_to is not None:
1754 if isinstance(rule.redirect_to, string_types):
1755
1756 def _handle_match(match):
1757 value = rv[match.group(1)]
1758 return rule._converters[match.group(1)].to_url(value)
1759
1760 redirect_url = _simple_rule_re.sub(_handle_match, rule.redirect_to)
1761 else:
1762 redirect_url = rule.redirect_to(self, **rv)
1763 raise RequestRedirect(
1764 str(
1765 url_join(
1766 "%s://%s%s%s"
1767 % (
1768 self.url_scheme or "http",
1769 self.subdomain + "." if self.subdomain else "",
1770 self.server_name,
1771 self.script_name,
1772 ),
1773 redirect_url,
1774 )
1775 )
1776 )
1777
1778 if return_rule:
1779 return rule, rv
1780 else:
1781 return rule.endpoint, rv
1782
1783 if have_match_for:
1784 raise MethodNotAllowed(valid_methods=list(have_match_for))
1785 raise NotFound()
1786
1787 def test(self, path_info=None, method=None):
1788 """Test if a rule would match. Works like `match` but returns `True`
1789 if the URL matches and `False` if it does not.
1790
1791 :param path_info: the path info to use for matching. Overrides the
1792 path info specified on binding.
1793 :param method: the HTTP method used for matching. Overrides the
1794 method specified on binding.
1795 """
1796 try:
1797 self.match(path_info, method)
1798 except RequestRedirect:
1799 pass
1800 except HTTPException:
1801 return False
1802 return True
1803
1804 def allowed_methods(self, path_info=None):
1805 """Returns the valid methods that match for a given path.
1806
1807 .. versionadded:: 0.7
1808 """
1809 try:
1810 self.match(path_info, method="--")
1811 except MethodNotAllowed as e:
1812 return e.valid_methods
1813 except HTTPException:
1814 pass
1815 return []
1816
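``allowed_methods`` works by probing ``match`` with a deliberately invalid method (``"--"``) and harvesting the resulting ``MethodNotAllowed``. A small usage sketch against the public API shown above (the rule path and endpoint name are made up for illustration):

```python
from werkzeug.routing import Map, Rule

url_map = Map([Rule("/items", endpoint="items", methods=["GET", "POST"])])
adapter = url_map.bind("example.com")

# A matching path reports its rule's methods; an unknown path reports none.
methods = adapter.allowed_methods("/items")
missing = adapter.allowed_methods("/missing")
```

Note that the returned list may also include automatically added methods such as ``HEAD`` and ``OPTIONS``, since ``Rule`` adds those alongside ``GET``.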
1817 def get_host(self, domain_part):
1818 """Figures out the full host name for the given domain part. The
1819 domain part is a subdomain in case host matching is disabled or
1820 a full host name.
1821 """
1822 if self.map.host_matching:
1823 if domain_part is None:
1824 return self.server_name
1825 return to_unicode(domain_part, "ascii")
1826 subdomain = domain_part
1827 if subdomain is None:
1828 subdomain = self.subdomain
1829 else:
1830 subdomain = to_unicode(subdomain, "ascii")
1831 return (subdomain + u"." if subdomain else u"") + self.server_name
1832
1833 def get_default_redirect(self, rule, method, values, query_args):
1834 """A helper that returns the URL to redirect to if it finds one.
1835 This is used for default redirecting only.
1836
1837 :internal:
1838 """
1839 assert self.map.redirect_defaults
1840 for r in self.map._rules_by_endpoint[rule.endpoint]:
1841 # every rule that comes after this one, including ourself
1842 # has a lower priority for the defaults. We order the ones
1843 # with the highest priority up for building.
1844 if r is rule:
1845 break
1846 if r.provides_defaults_for(rule) and r.suitable_for(values, method):
1847 values.update(r.defaults)
1848 domain_part, path = r.build(values)
1849 return self.make_redirect_url(path, query_args, domain_part=domain_part)
1850
1851 def encode_query_args(self, query_args):
1852 if not isinstance(query_args, string_types):
1853 query_args = url_encode(query_args, self.map.charset)
1854 return query_args
1855
1856 def make_redirect_url(self, path_info, query_args=None, domain_part=None):
1857 """Creates a redirect URL.
1858
1859 :internal:
1860 """
1861 suffix = ""
1862 if query_args:
1863 suffix = "?" + self.encode_query_args(query_args)
1864 return str(
1865 "%s://%s/%s%s"
1866 % (
1867 self.url_scheme or "http",
1868 self.get_host(domain_part),
1869 posixpath.join(
1870 self.script_name[:-1].lstrip("/"), path_info.lstrip("/")
1871 ),
1872 suffix,
1873 )
1874 )
1875
1876 def make_alias_redirect_url(self, path, endpoint, values, method, query_args):
1877 """Internally called to make an alias redirect URL."""
1878 url = self.build(
1879 endpoint, values, method, append_unknown=False, force_external=True
1880 )
1881 if query_args:
1882 url += "?" + self.encode_query_args(query_args)
1883 assert url != path, "detected invalid alias setting. No canonical URL found"
1884 return url
1885
1886 def _partial_build(self, endpoint, values, method, append_unknown):
1887 """Helper for :meth:`build`. Returns subdomain and path for the
1888 rule that accepts this endpoint, values and method.
1889
1890 :internal:
1891 """
1892 # in case the method is none, try with the default method first
1893 if method is None:
1894 rv = self._partial_build(
1895 endpoint, values, self.default_method, append_unknown
1896 )
1897 if rv is not None:
1898 return rv
1899
1900 # default method did not match or a specific method is passed,
1901 # check all and go with first result.
1902 for rule in self.map._rules_by_endpoint.get(endpoint, ()):
1903 if rule.suitable_for(values, method):
1904 rv = rule.build(values, append_unknown)
1905 if rv is not None:
1906 return rv
1907
1908 def build(
1909 self,
1910 endpoint,
1911 values=None,
1912 method=None,
1913 force_external=False,
1914 append_unknown=True,
1915 ):
1916 """Building URLs works pretty much the other way round. Instead of
1917 `match` you call `build` and pass it the endpoint and a dict of
1918 arguments for the placeholders.
1919
1920 The `build` function also accepts an argument called `force_external`
1921 which, if set to `True`, will force external URLs. By default
1922 external URLs (including the server name) are only used if the
1923 target URL is on a different subdomain.
1924
1925 >>> m = Map([
1926 ... Rule('/', endpoint='index'),
1927 ... Rule('/downloads/', endpoint='downloads/index'),
1928 ... Rule('/downloads/<int:id>', endpoint='downloads/show')
1929 ... ])
1930 >>> urls = m.bind("example.com", "/")
1931 >>> urls.build("index", {})
1932 '/'
1933 >>> urls.build("downloads/show", {'id': 42})
1934 '/downloads/42'
1935 >>> urls.build("downloads/show", {'id': 42}, force_external=True)
1936 'http://example.com/downloads/42'
1937
1938 Because URLs cannot contain non-ASCII data you will always get
1939 bytestrings back. Non-ASCII characters are URL-encoded with the
1940 charset defined on the map instance.
1941
1942 Additional values are converted to unicode and appended to the URL as
1943 URL querystring parameters:
1944
1945 >>> urls.build("index", {'q': 'My Searchstring'})
1946 '/?q=My+Searchstring'
1947
1948 When processing those additional values, lists are furthermore
1949 interpreted as multiple values (as per
1950 :py:class:`werkzeug.datastructures.MultiDict`):
1951
1952 >>> urls.build("index", {'q': ['a', 'b', 'c']})
1953 '/?q=a&q=b&q=c'
1954
1955 Passing a ``MultiDict`` will also add multiple values:
1956
1957 >>> urls.build("index", MultiDict((('p', 'z'), ('q', 'a'), ('q', 'b'))))
1958 '/?p=z&q=a&q=b'
1959
1960 If a rule does not exist when building a `BuildError` exception is
1961 raised.
1962
1963 The build method accepts an argument called `method` which allows you
1964 to specify the method you want to have a URL built for if you have
1965 different methods for the same endpoint specified.
1966
1967 .. versionadded:: 0.6
1968 the `append_unknown` parameter was added.
1969
1970 :param endpoint: the endpoint of the URL to build.
1971 :param values: the values for the URL to build. Unhandled values are
1972 appended to the URL as query parameters.
1973 :param method: the HTTP method for the rule if there are different
1974 URLs for different methods on the same endpoint.
1975 :param force_external: enforce full canonical external URLs. If the URL
1976 scheme is not provided, this will generate
1977 a protocol-relative URL.
1978 :param append_unknown: unknown parameters are appended to the generated
1979 URL as query string argument. Disable this
1980 if you want the builder to ignore those.
1981 """
1982 self.map.update()
1983
1984 if values:
1985 if isinstance(values, MultiDict):
1986 temp_values = {}
1987 # iteritems(dict, values) is like `values.lists()`
1988 # without the method call or the `list()` coercion overhead.
1989 for key, value in iteritems(dict, values):
1990 if not value:
1991 continue
1992 if len(value) == 1: # flatten single item lists
1993 value = value[0]
1994 if value is None: # drop None
1995 continue
1996 temp_values[key] = value
1997 values = temp_values
1998 else:
1999 # drop None
2000 values = dict(i for i in iteritems(values) if i[1] is not None)
2001 else:
2002 values = {}
2003
2004 rv = self._partial_build(endpoint, values, method, append_unknown)
2005 if rv is None:
2006 raise BuildError(endpoint, values, method, self)
2007 domain_part, path = rv
2008
2009 host = self.get_host(domain_part)
2010
2011 # shortcut this.
2012 if not force_external and (
2013 (self.map.host_matching and host == self.server_name)
2014 or (not self.map.host_matching and domain_part == self.subdomain)
2015 ):
2016 return "%s/%s" % (self.script_name.rstrip("/"), path.lstrip("/"))
2017 return str(
2018 "%s//%s%s/%s"
2019 % (
2020 self.url_scheme + ":" if self.url_scheme else "",
2021 host,
2022 self.script_name[:-1],
2023 path.lstrip("/"),
2024 )
2025 )
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.security
3 ~~~~~~~~~~~~~~~~~
4
5 Security related helpers such as secure password hashing tools.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import codecs
11 import hashlib
12 import hmac
13 import os
14 import posixpath
15 from random import SystemRandom
16 from struct import Struct
17
18 from ._compat import izip
19 from ._compat import PY2
20 from ._compat import range_type
21 from ._compat import text_type
22 from ._compat import to_bytes
23 from ._compat import to_native
24
25 SALT_CHARS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
26 DEFAULT_PBKDF2_ITERATIONS = 150000
27
28 _pack_int = Struct(">I").pack
29 _builtin_safe_str_cmp = getattr(hmac, "compare_digest", None)
30 _sys_rng = SystemRandom()
31 _os_alt_seps = list(
32 sep for sep in [os.path.sep, os.path.altsep] if sep not in (None, "/")
33 )
34
35
36 def pbkdf2_hex(
37 data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS, keylen=None, hashfunc=None
38 ):
39 """Like :func:`pbkdf2_bin`, but returns a hex-encoded string.
40
41 .. versionadded:: 0.9
42
43 :param data: the data to derive.
44 :param salt: the salt for the derivation.
45 :param iterations: the number of iterations.
46 :param keylen: the length of the resulting key. If not provided,
47 the digest size will be used.
48 :param hashfunc: the hash function to use. This can either be the
49 string name of a known hash function, or a function
50 from the hashlib module. Defaults to sha256.
51 """
52 rv = pbkdf2_bin(data, salt, iterations, keylen, hashfunc)
53 return to_native(codecs.encode(rv, "hex_codec"))
54
55
56 def pbkdf2_bin(
57 data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS, keylen=None, hashfunc=None
58 ):
59 """Returns a binary digest for the PBKDF2 hash algorithm of `data`
60 with the given `salt`. It iterates `iterations` times and produces a
61 key of `keylen` bytes. By default, SHA-256 is used as hash function;
62 a different hashlib `hashfunc` can be provided.
63
64 .. versionadded:: 0.9
65
66 :param data: the data to derive.
67 :param salt: the salt for the derivation.
68 :param iterations: the number of iterations.
69 :param keylen: the length of the resulting key. If not provided
70 the digest size will be used.
71 :param hashfunc: the hash function to use. This can either be the
72 string name of a known hash function or a function
73 from the hashlib module. Defaults to sha256.
74 """
75 if not hashfunc:
76 hashfunc = "sha256"
77
78 data = to_bytes(data)
79 salt = to_bytes(salt)
80
81 if callable(hashfunc):
82 _test_hash = hashfunc()
83 hash_name = getattr(_test_hash, "name", None)
84 else:
85 hash_name = hashfunc
86 return hashlib.pbkdf2_hmac(hash_name, data, salt, iterations, keylen)
87
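Both helpers ultimately delegate to the standard library: ``pbkdf2_bin`` calls ``hashlib.pbkdf2_hmac`` and ``pbkdf2_hex`` hex-encodes the result. An equivalent stdlib-only derivation (the inputs are illustrative, and the low iteration count is for demonstration only, not production use):

```python
import codecs
import hashlib

# keylen=None in the helper above means the digest size is used,
# i.e. 32 bytes for SHA-256.
raw = hashlib.pbkdf2_hmac("sha256", b"secret", b"salt", 1000)
hex_key = codecs.encode(raw, "hex_codec").decode("ascii")
```

The same salt, iteration count, and hash always derive the same key, which is what makes the scheme verifiable later.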
88
89 def safe_str_cmp(a, b):
90 """This function compares strings in somewhat constant time. This
91 requires that the length of at least one string is known in advance.
92
93 Returns `True` if the two strings are equal, or `False` if they are not.
94
95 .. versionadded:: 0.7
96 """
97 if isinstance(a, text_type):
98 a = a.encode("utf-8")
99 if isinstance(b, text_type):
100 b = b.encode("utf-8")
101
102 if _builtin_safe_str_cmp is not None:
103 return _builtin_safe_str_cmp(a, b)
104
105 if len(a) != len(b):
106 return False
107
108 rv = 0
109 if PY2:
110 for x, y in izip(a, b):
111 rv |= ord(x) ^ ord(y)
112 else:
113 for x, y in izip(a, b):
114 rv |= x ^ y
115
116 return rv == 0
117
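On any modern Python the builtin branch above is taken: ``hmac.compare_digest`` performs the constant-time comparison, and the manual XOR loop is only a fallback for very old interpreters. A minimal stdlib demonstration:

```python
import hmac

# Compare secrets without an early exit on the first differing byte.
# Both arguments should be bytes (or both ASCII str) of the same type.
ok = hmac.compare_digest(b"expected-token", b"expected-token")
bad = hmac.compare_digest(b"expected-token", b"attacker-guess")
```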
118
119 def gen_salt(length):
120 """Generate a random string of SALT_CHARS with specified ``length``."""
121 if length <= 0:
122 raise ValueError("Salt length must be positive")
123 return "".join(_sys_rng.choice(SALT_CHARS) for _ in range_type(length))
124
125
126 def _hash_internal(method, salt, password):
127 """Internal password hash helper. Supports plaintext without salt,
128 unsalted and salted passwords. When salted passwords are used,
129 HMAC is used.
130 """
131 if method == "plain":
132 return password, method
133
134 if isinstance(password, text_type):
135 password = password.encode("utf-8")
136
137 if method.startswith("pbkdf2:"):
138 args = method[7:].split(":")
139 if len(args) not in (1, 2):
140 raise ValueError("Invalid number of arguments for PBKDF2")
141 method = args.pop(0)
142 iterations = args and int(args[0] or 0) or DEFAULT_PBKDF2_ITERATIONS
143 is_pbkdf2 = True
144 actual_method = "pbkdf2:%s:%d" % (method, iterations)
145 else:
146 is_pbkdf2 = False
147 actual_method = method
148
149 if is_pbkdf2:
150 if not salt:
151 raise ValueError("Salt is required for PBKDF2")
152 rv = pbkdf2_hex(password, salt, iterations, hashfunc=method)
153 elif salt:
154 if isinstance(salt, text_type):
155 salt = salt.encode("utf-8")
156 mac = _create_mac(salt, password, method)
157 rv = mac.hexdigest()
158 else:
159 rv = hashlib.new(method, password).hexdigest()
160 return rv, actual_method
161
162
163 def _create_mac(key, msg, method):
164 if callable(method):
165 return hmac.HMAC(key, msg, method)
166
167 def hashfunc(d=b""):
168 return hashlib.new(method, d)
169
170 # Python 2.7 used ``hasattr(digestmod, '__call__')``
171 # to detect if hashfunc is callable
172 hashfunc.__call__ = hashfunc
173 return hmac.HMAC(key, msg, hashfunc)
174
175
176 def generate_password_hash(password, method="pbkdf2:sha256", salt_length=8):
177 """Hash a password with the given method and salt with a string of
178 the given length. The format of the string returned includes the method
179 that was used so that :func:`check_password_hash` can check the hash.
180
181 The format for the hashed string looks like this::
182
183 method$salt$hash
184
185 This method can **not** generate unsalted passwords but it is possible
186 to set ``method='plain'`` in order to enforce plaintext passwords.
187 If a salt is used, hmac is used internally to salt the password.
188
189 If PBKDF2 is wanted it can be enabled by setting the method to
190 ``pbkdf2:method:iterations`` where iterations is optional::
191
192 pbkdf2:sha256:80000$salt$hash
193 pbkdf2:sha256$salt$hash
194
195 :param password: the password to hash.
196 :param method: the hash method to use (one that hashlib supports). Can
197 optionally be in the format ``pbkdf2:<method>[:iterations]``
198 to enable PBKDF2.
199 :param salt_length: the length of the salt in letters.
200 """
201 salt = gen_salt(salt_length) if method != "plain" else ""
202 h, actual_method = _hash_internal(method, salt, password)
203 return "%s$%s$%s" % (actual_method, salt, h)
204
205
206 def check_password_hash(pwhash, password):
207 """Check a password against a given salted and hashed password value.
208 In order to support unsalted legacy passwords this method supports
209 plain text passwords, md5 and sha1 hashes (both salted and unsalted).
210
211 Returns `True` if the password matched, `False` otherwise.
212
213 :param pwhash: a hashed string like returned by
214 :func:`generate_password_hash`.
215 :param password: the plaintext password to compare against the hash.
216 """
217 if pwhash.count("$") < 2:
218 return False
219 method, salt, hashval = pwhash.split("$", 2)
220 return safe_str_cmp(_hash_internal(method, salt, password)[0], hashval)
221
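The round trip between the two helpers can be sketched with the stdlib alone, assuming the ``pbkdf2:sha256`` method and the ``method$salt$hash`` format described above (the function names and the low iteration count are illustrative, not Werkzeug API):

```python
import hashlib

def sketch_generate(password, salt, iterations=1000):
    # Derive the key and embed method, iterations, and salt in the result.
    dk = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt.encode("utf-8"), iterations
    )
    return "pbkdf2:sha256:%d$%s$%s" % (iterations, salt, dk.hex())

def sketch_check(pwhash, password):
    # Recover the parameters from the hash string and re-derive.
    method, salt, hashval = pwhash.split("$", 2)
    iterations = int(method.split(":")[2])
    dk = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt.encode("utf-8"), iterations
    )
    return dk.hex() == hashval  # use a constant-time compare in real code
```

Because everything needed for verification is stored in the string itself, no separate record of the method or salt is required.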
222
223 def safe_join(directory, *pathnames):
224 """Safely join `directory` and one or more untrusted `pathnames`. If this
225 cannot be done, this function returns ``None``.
226
227 :param directory: the base directory.
228 :param pathnames: the untrusted pathnames relative to that directory.
229 """
230 parts = [directory]
231 for filename in pathnames:
232 if filename != "":
233 filename = posixpath.normpath(filename)
234 for sep in _os_alt_seps:
235 if sep in filename:
236 return None
237 if os.path.isabs(filename) or filename == ".." or filename.startswith("../"):
238 return None
239 parts.append(filename)
240 return posixpath.join(*parts)
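The traversal check above can be exercised with a stdlib-only sketch that mirrors the same rules on POSIX-style paths (``posixpath.isabs`` stands in for ``os.path.isabs`` here, so Windows drive paths are not covered; the function name is illustrative):

```python
import posixpath

def sketch_safe_join(directory, *pathnames):
    parts = [directory]
    for filename in pathnames:
        if filename != "":
            filename = posixpath.normpath(filename)
        # Reject absolute components and anything escaping the base.
        if (
            posixpath.isabs(filename)
            or filename == ".."
            or filename.startswith("../")
        ):
            return None
        parts.append(filename)
    return posixpath.join(*parts)
```

Normalizing first is what catches indirect escapes such as ``a/../../etc``, which collapses to ``../etc`` before the prefix check runs.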
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.serving
3 ~~~~~~~~~~~~~~~~
4
5 There are many ways to serve a WSGI application. While you're developing
6 it you usually don't want a full blown webserver like Apache but a simple
7 standalone one. From Python 2.5 onwards there is the `wsgiref`_ server in
8 the standard library. If you're using older versions of Python you can
9 download the package from the cheeseshop.
10
11 However there are some caveats. Source code won't reload itself when
12 changed and each time you kill the server using ``^C`` you get a
13 `KeyboardInterrupt` error. While the latter is easy to solve the first
14 one can be a pain in the ass in some situations.
15
16 The easiest way is creating a small ``start-myproject.py`` that runs the
17 application::
18
19 #!/usr/bin/env python
20 # -*- coding: utf-8 -*-
21 from myproject import make_app
22 from werkzeug.serving import run_simple
23
24 app = make_app(...)
25 run_simple('localhost', 8080, app, use_reloader=True)
26
27 You can also pass it an `extra_files` keyword argument with a list of
28 additional files (like configuration files) you want to observe.
29
30 For bigger applications you should consider using `click`
31 (http://click.pocoo.org) instead of a simple start file.
32
33
34 :copyright: 2007 Pallets
35 :license: BSD-3-Clause
36 """
37 import io
38 import os
39 import signal
40 import socket
41 import sys
42
43 import werkzeug
44 from ._compat import PY2
45 from ._compat import reraise
46 from ._compat import WIN
47 from ._compat import wsgi_encoding_dance
48 from ._internal import _log
49 from .exceptions import InternalServerError
50 from .urls import uri_to_iri
51 from .urls import url_parse
52 from .urls import url_unquote
53
54 try:
55 import socketserver
56 from http.server import BaseHTTPRequestHandler
57 from http.server import HTTPServer
58 except ImportError:
59 import SocketServer as socketserver
60 from BaseHTTPServer import HTTPServer
61 from BaseHTTPServer import BaseHTTPRequestHandler
62
63 try:
64 import ssl
65 except ImportError:
66
67 class _SslDummy(object):
68 def __getattr__(self, name):
69 raise RuntimeError("SSL support unavailable")
70
71 ssl = _SslDummy()
72
73 try:
74 import termcolor
75 except ImportError:
76 termcolor = None
77
78
79 def _get_openssl_crypto_module():
80 try:
81 from OpenSSL import crypto
82 except ImportError:
83 raise TypeError("Using ad-hoc certificates requires the pyOpenSSL library.")
84 else:
85 return crypto
86
87
88 ThreadingMixIn = socketserver.ThreadingMixIn
89 can_fork = hasattr(os, "fork")
90
91 if can_fork:
92 ForkingMixIn = socketserver.ForkingMixIn
93 else:
94
95 class ForkingMixIn(object):
96 pass
97
98
99 try:
100 af_unix = socket.AF_UNIX
101 except AttributeError:
102 af_unix = None
103
104
105 LISTEN_QUEUE = 128
106 can_open_by_fd = not WIN and hasattr(socket, "fromfd")
107
108 # On Python 3, ConnectionError represents the same errnos as
109 # socket.error from Python 2, while socket.error is an alias for the
110 # more generic OSError.
111 if PY2:
112 _ConnectionError = socket.error
113 else:
114 _ConnectionError = ConnectionError
115
116
117 class DechunkedInput(io.RawIOBase):
118 """An input stream that handles Transfer-Encoding 'chunked'"""
119
120 def __init__(self, rfile):
121 self._rfile = rfile
122 self._done = False
123 self._len = 0
124
125 def readable(self):
126 return True
127
128 def read_chunk_len(self):
129 try:
130 line = self._rfile.readline().decode("latin1")
131 _len = int(line.strip(), 16)
132 except ValueError:
133 raise IOError("Invalid chunk header")
134 if _len < 0:
135 raise IOError("Negative chunk length not allowed")
136 return _len
137
138 def readinto(self, buf):
139 read = 0
140 while not self._done and read < len(buf):
141 if self._len == 0:
142 # This is the first chunk or we fully consumed the previous
143 # one. Read the length of the next chunk
144 self._len = self.read_chunk_len()
145
146 if self._len == 0:
147 # Found the final chunk of size 0. The stream is now exhausted,
148 # but there is still a final newline that should be consumed
149 self._done = True
150
151 if self._len > 0:
152 # There is data (left) in this chunk, so append it to the
153 # buffer. If this operation fully consumes the chunk, this will
154 # reset self._len to 0.
155 n = min(len(buf), self._len)
156 buf[read : read + n] = self._rfile.read(n)
157 self._len -= n
158 read += n
159
160 if self._len == 0:
161 # Skip the terminating newline of a chunk that has been fully
162 # consumed. This also applies to the 0-sized final chunk
163 terminator = self._rfile.readline()
164 if terminator not in (b"\n", b"\r\n", b"\r"):
165 raise IOError("Missing chunk terminating newline")
166
167 return read
168
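The chunk framing handled by `DechunkedInput` above can be illustrated with a tiny standalone decoder. This is a simplified sketch of the same wire format (hex length line, payload, CRLF terminator, ending with a 0-length chunk), not the class itself:

```python
import io

def dechunk(rfile):
    # Simplified sketch of HTTP/1.1 chunked decoding: read a hex
    # length line, then that many payload bytes, then the chunk's
    # terminating newline; a 0-length chunk ends the stream.
    out = bytearray()
    while True:
        length = int(rfile.readline().strip(), 16)
        data = rfile.read(length)
        if rfile.readline() not in (b"\r\n", b"\n", b"\r"):
            raise IOError("Missing chunk terminating newline")
        if length == 0:
            return bytes(out)
        out += data

body = io.BytesIO(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")
print(dechunk(body))  # b'Wikipedia'
```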
169
170 class WSGIRequestHandler(BaseHTTPRequestHandler, object):
171
172 """A request handler that implements WSGI dispatching."""
173
174 @property
175 def server_version(self):
176 return "Werkzeug/" + werkzeug.__version__
177
178 def make_environ(self):
179 request_url = url_parse(self.path)
180
181 def shutdown_server():
182 self.server.shutdown_signal = True
183
184 url_scheme = "http" if self.server.ssl_context is None else "https"
185 if not self.client_address:
186 self.client_address = "<local>"
187 if isinstance(self.client_address, str):
188 self.client_address = (self.client_address, 0)
191 path_info = url_unquote(request_url.path)
192
193 environ = {
194 "wsgi.version": (1, 0),
195 "wsgi.url_scheme": url_scheme,
196 "wsgi.input": self.rfile,
197 "wsgi.errors": sys.stderr,
198 "wsgi.multithread": self.server.multithread,
199 "wsgi.multiprocess": self.server.multiprocess,
200 "wsgi.run_once": False,
201 "werkzeug.server.shutdown": shutdown_server,
202 "SERVER_SOFTWARE": self.server_version,
203 "REQUEST_METHOD": self.command,
204 "SCRIPT_NAME": "",
205 "PATH_INFO": wsgi_encoding_dance(path_info),
206 "QUERY_STRING": wsgi_encoding_dance(request_url.query),
207 # Non-standard, added by mod_wsgi, uWSGI
208 "REQUEST_URI": wsgi_encoding_dance(self.path),
209 # Non-standard, added by gunicorn
210 "RAW_URI": wsgi_encoding_dance(self.path),
211 "REMOTE_ADDR": self.address_string(),
212 "REMOTE_PORT": self.port_integer(),
213 "SERVER_NAME": self.server.server_address[0],
214 "SERVER_PORT": str(self.server.server_address[1]),
215 "SERVER_PROTOCOL": self.request_version,
216 }
217
218 for key, value in self.get_header_items():
219 key = key.upper().replace("-", "_")
220 value = value.replace("\r\n", "")
221 if key not in ("CONTENT_TYPE", "CONTENT_LENGTH"):
222 key = "HTTP_" + key
223 if key in environ:
224 value = "{},{}".format(environ[key], value)
225 environ[key] = value
226
227 if environ.get("HTTP_TRANSFER_ENCODING", "").strip().lower() == "chunked":
228 environ["wsgi.input_terminated"] = True
229 environ["wsgi.input"] = DechunkedInput(environ["wsgi.input"])
230
231 if request_url.scheme and request_url.netloc:
232 environ["HTTP_HOST"] = request_url.netloc
233
234 return environ
235
236 def run_wsgi(self):
237 if self.headers.get("Expect", "").lower().strip() == "100-continue":
238 self.wfile.write(b"HTTP/1.1 100 Continue\r\n\r\n")
239
240 self.environ = environ = self.make_environ()
241 headers_set = []
242 headers_sent = []
243
244 def write(data):
245 assert headers_set, "write() before start_response"
246 if not headers_sent:
247 status, response_headers = headers_sent[:] = headers_set
248 try:
249 code, msg = status.split(None, 1)
250 except ValueError:
251 code, msg = status, ""
252 code = int(code)
253 self.send_response(code, msg)
254 header_keys = set()
255 for key, value in response_headers:
256 self.send_header(key, value)
257 key = key.lower()
258 header_keys.add(key)
259 if not (
260 "content-length" in header_keys
261 or environ["REQUEST_METHOD"] == "HEAD"
262 or code < 200
263 or code in (204, 304)
264 ):
265 self.close_connection = True
266 self.send_header("Connection", "close")
267 if "server" not in header_keys:
268 self.send_header("Server", self.version_string())
269 if "date" not in header_keys:
270 self.send_header("Date", self.date_time_string())
271 self.end_headers()
272
273 assert isinstance(data, bytes), "applications must write bytes"
274 self.wfile.write(data)
275 self.wfile.flush()
276
277 def start_response(status, response_headers, exc_info=None):
278 if exc_info:
279 try:
280 if headers_sent:
281 reraise(*exc_info)
282 finally:
283 exc_info = None
284 elif headers_set:
285 raise AssertionError("Headers already set")
286 headers_set[:] = [status, response_headers]
287 return write
288
289 def execute(app):
290 application_iter = app(environ, start_response)
291 try:
292 for data in application_iter:
293 write(data)
294 if not headers_sent:
295 write(b"")
296 finally:
297 if hasattr(application_iter, "close"):
298 application_iter.close()
299 application_iter = None
300
301 try:
302 execute(self.server.app)
303 except (_ConnectionError, socket.timeout) as e:
304 self.connection_dropped(e, environ)
305 except Exception:
306 if self.server.passthrough_errors:
307 raise
308 from .debug.tbtools import get_current_traceback
309
310 traceback = get_current_traceback(ignore_system_exceptions=True)
311 try:
312 # if we haven't yet sent the headers but they are set
313 # we roll back to be able to set them again.
314 if not headers_sent:
315 del headers_set[:]
316 execute(InternalServerError())
317 except Exception:
318 pass
319 self.server.log("error", "Error on request:\n%s", traceback.plaintext)
320
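The deferred-header dance in `run_wsgi` above can be sketched in isolation. All names here are illustrative: `start_response` only records the status and headers, and they are committed when the first body bytes are written, which is why the error path can still roll them back:

```python
sent = []          # stand-in for the socket's write buffer
headers_set = []
headers_sent = []

def write(data):
    # Headers are serialized lazily, on the first body write.
    assert headers_set, "write() before start_response"
    if not headers_sent:
        headers_sent[:] = headers_set
        sent.append(("status", headers_sent[0]))
    sent.append(("body", data))

def start_response(status, response_headers, exc_info=None):
    # Per PEP 3333, this only records the headers and returns write.
    headers_set[:] = [status, response_headers]
    return write

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello, World!"]

for chunk in app({}, start_response):
    write(chunk)

print(sent)  # [('status', '200 OK'), ('body', b'Hello, World!')]
```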
321 def handle(self):
322 """Handles a request ignoring dropped connections."""
323 rv = None
324 try:
325 rv = BaseHTTPRequestHandler.handle(self)
326 except (_ConnectionError, socket.timeout) as e:
327 self.connection_dropped(e)
328 except Exception as e:
329 if self.server.ssl_context is None or not is_ssl_error(e):
330 raise
331 if self.server.shutdown_signal:
332 self.initiate_shutdown()
333 return rv
334
335 def initiate_shutdown(self):
336 """A horrible, horrible way to kill the server for Python 2.6 and
337 later. It's the best we can do.
338 """
339 # Windows does not provide SIGKILL, go with SIGTERM then.
340 sig = getattr(signal, "SIGKILL", signal.SIGTERM)
341 # reloader active
342 if is_running_from_reloader():
343 os.kill(os.getpid(), sig)
344 # python 2.7
345 self.server._BaseServer__shutdown_request = True
346 # python 2.6
347 self.server._BaseServer__serving = False
348
349 def connection_dropped(self, error, environ=None):
350 """Called if the connection was closed by the client. By default
351 nothing happens.
352 """
353
354 def handle_one_request(self):
355 """Handle a single HTTP request."""
356 self.raw_requestline = self.rfile.readline()
357 if not self.raw_requestline:
358 self.close_connection = 1
359 elif self.parse_request():
360 return self.run_wsgi()
361
362 def send_response(self, code, message=None):
363 """Send the response header and log the response code."""
364 self.log_request(code)
365 if message is None:
366 message = code in self.responses and self.responses[code][0] or ""
367 if self.request_version != "HTTP/0.9":
368 hdr = "%s %d %s\r\n" % (self.protocol_version, code, message)
369 self.wfile.write(hdr.encode("ascii"))
370
371 def version_string(self):
372 return BaseHTTPRequestHandler.version_string(self).strip()
373
374 def address_string(self):
375 if getattr(self, "environ", None):
376 return self.environ["REMOTE_ADDR"]
377 elif not self.client_address:
378 return "<local>"
379 elif isinstance(self.client_address, str):
380 return self.client_address
381 else:
382 return self.client_address[0]
383
384 def port_integer(self):
385 return self.client_address[1]
386
387 def log_request(self, code="-", size="-"):
388 try:
389 path = uri_to_iri(self.path)
390 msg = "%s %s %s" % (self.command, path, self.request_version)
391 except AttributeError:
392 # path isn't set if the requestline was bad
393 msg = self.requestline
394
395 code = str(code)
396
397 if termcolor:
398 color = termcolor.colored
399
400 if code[0] == "1": # 1xx - Informational
401 msg = color(msg, attrs=["bold"])
402 elif code[0] == "2": # 2xx - Success
403 msg = color(msg, color="white")
404 elif code == "304": # 304 - Resource Not Modified
405 msg = color(msg, color="cyan")
406 elif code[0] == "3": # 3xx - Redirection
407 msg = color(msg, color="green")
408 elif code == "404": # 404 - Resource Not Found
409 msg = color(msg, color="yellow")
410 elif code[0] == "4": # 4xx - Client Error
411 msg = color(msg, color="red", attrs=["bold"])
412 else: # 5xx, or any other response
413 msg = color(msg, color="magenta", attrs=["bold"])
414
415 self.log("info", '"%s" %s %s', msg, code, size)
416
417 def log_error(self, *args):
418 self.log("error", *args)
419
420 def log_message(self, format, *args):
421 self.log("info", format, *args)
422
423 def log(self, type, message, *args):
424 _log(
425 type,
426 "%s - - [%s] %s\n"
427 % (self.address_string(), self.log_date_time_string(), message % args),
428 )
429
430 def get_header_items(self):
431 """
432 Get an iterable list of key/value pairs representing headers.
433
434 This function provides Python 2/3 compatibility as related to the
435 parsing of request headers. Python 2.7 is not compliant with
436 RFC 3875 Section 4.1.18, which requires multiple values for a
437 header to be provided, or with RFC 2616, which allows folding of
438 multi-line headers. This function returns a matching list regardless
439 of Python version. It can be removed once Python 2.7 support
440 is dropped.
441
442 :return: List of tuples containing header key/value pairs
443 """
444 if PY2:
445 # For Python 2, process the headers manually according to
446 # W3C RFC 2616 Section 4.2.
447 items = []
448 for header in self.headers.headers:
449 # Remove "\r\n" from the header and split on ":" to get
450 # the field name and value.
451 try:
452 key, value = header[0:-2].split(":", 1)
453 except ValueError:
454 # If the header could not be split on ":" but starts with
455 # whitespace and follows an existing header, it's a folded
456 # header.
457 if header[0] in ("\t", " ") and items:
458 # Pop off the last header
459 key, value = items.pop()
460 # Append the current header to the value of the last
461 # header which will be placed back on the end of the
462 # list
463 value = value + header
464 # Otherwise it's just a bad header and should error
465 else:
466 # Re-raise the value error
467 raise
468
469 # Add the key and the value once stripped of leading
470 # white space. The specification allows for stripping
471 # trailing white space but the Python 3 code does not
472 # strip trailing white space. Therefore, trailing space
473 # will be left as is to match the Python 3 behavior.
474 items.append((key, value.lstrip()))
475 else:
476 items = self.headers.items()
477
478 return items
479
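The RFC 2616 header-folding case handled above can be shown with a small self-contained sketch (a simplified, hypothetical parser, not the method itself): a line beginning with whitespace continues the previous header's value.

```python
def parse_headers(raw_lines):
    # Simplified sketch of the folded-header handling above: a line
    # that cannot be split on ":" but starts with whitespace is a
    # continuation of the previous header's value.
    items = []
    for header in raw_lines:
        try:
            key, value = header.split(":", 1)
        except ValueError:
            if header[0] in ("\t", " ") and items:
                key, value = items.pop()
                value = value + " " + header.strip()
            else:
                raise  # a genuinely malformed header line
        items.append((key, value.lstrip()))
    return items

print(parse_headers(["X-Note: first part", "\tsecond part"]))
# [('X-Note', 'first part second part')]
```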
480
481 #: backwards compatible name if someone is subclassing it
482 BaseRequestHandler = WSGIRequestHandler
483
484
485 def generate_adhoc_ssl_pair(cn=None):
486 from random import random
487
488 crypto = _get_openssl_crypto_module()
489
490 # pretty damn sure that this is not actually accepted by anyone
491 if cn is None:
492 cn = "*"
493
494 cert = crypto.X509()
495 cert.set_serial_number(int(random() * sys.maxsize))
496 cert.gmtime_adj_notBefore(0)
497 cert.gmtime_adj_notAfter(60 * 60 * 24 * 365)
498
499 subject = cert.get_subject()
500 subject.CN = cn
501 subject.O = "Dummy Certificate" # noqa: E741
502
503 issuer = cert.get_issuer()
504 issuer.CN = subject.CN
505 issuer.O = subject.O # noqa: E741
506
507 pkey = crypto.PKey()
508 pkey.generate_key(crypto.TYPE_RSA, 2048)
509 cert.set_pubkey(pkey)
510 cert.sign(pkey, "sha256")
511
512 return cert, pkey
513
514
515 def make_ssl_devcert(base_path, host=None, cn=None):
516 """Creates an SSL certificate and key for development. This should be used instead of
517 the ``'adhoc'`` key which generates a new cert on each server start.
518 It accepts a path for where it should store the key and cert and
519 either a host or CN. If a host is given it will use the CN
520 ``*.host/CN=host``.
521
522 For more information see :func:`run_simple`.
523
524 .. versionadded:: 0.9
525
526 :param base_path: the path to the certificate and key. The extension
527 ``.crt`` is added for the certificate, ``.key`` is
528 added for the key.
529 :param host: the name of the host. This can be used as an alternative
530 for the `cn`.
531 :param cn: the `CN` to use.
532 """
533 from OpenSSL import crypto
534
535 if host is not None:
536 cn = "*.%s/CN=%s" % (host, host)
537 cert, pkey = generate_adhoc_ssl_pair(cn=cn)
538
539 cert_file = base_path + ".crt"
540 pkey_file = base_path + ".key"
541
542 with open(cert_file, "wb") as f:
543 f.write(crypto.dump_certificate(crypto.FILETYPE_PEM, cert))
544 with open(pkey_file, "wb") as f:
545 f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey))
546
547 return cert_file, pkey_file
548
549
550 def generate_adhoc_ssl_context():
551 """Generates an adhoc SSL context for the development server."""
552 crypto = _get_openssl_crypto_module()
553 import tempfile
554 import atexit
555
556 cert, pkey = generate_adhoc_ssl_pair()
557 cert_handle, cert_file = tempfile.mkstemp()
558 pkey_handle, pkey_file = tempfile.mkstemp()
559 atexit.register(os.remove, pkey_file)
560 atexit.register(os.remove, cert_file)
561
562 os.write(cert_handle, crypto.dump_certificate(crypto.FILETYPE_PEM, cert))
563 os.write(pkey_handle, crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey))
564 os.close(cert_handle)
565 os.close(pkey_handle)
566 ctx = load_ssl_context(cert_file, pkey_file)
567 return ctx
568
569
570 def load_ssl_context(cert_file, pkey_file=None, protocol=None):
571 """Loads SSL context from cert/private key files and optional protocol.
572 Many parameters are directly taken from the API of
573 :py:class:`ssl.SSLContext`.
574
575 :param cert_file: Path of the certificate to use.
576 :param pkey_file: Path of the private key to use. If not given, the key
577 will be obtained from the certificate file.
578 :param protocol: One of the ``PROTOCOL_*`` constants in the stdlib ``ssl``
579 module. Defaults to ``PROTOCOL_SSLv23``.
580 """
581 if protocol is None:
582 protocol = ssl.PROTOCOL_SSLv23
583 ctx = _SSLContext(protocol)
584 ctx.load_cert_chain(cert_file, pkey_file)
585 return ctx
586
587
588 class _SSLContext(object):
589
590 """A dummy class with a small subset of Python3's ``ssl.SSLContext``, only
591 intended to be used with and by Werkzeug."""
592
593 def __init__(self, protocol):
594 self._protocol = protocol
595 self._certfile = None
596 self._keyfile = None
597 self._password = None
598
599 def load_cert_chain(self, certfile, keyfile=None, password=None):
600 self._certfile = certfile
601 self._keyfile = keyfile or certfile
602 self._password = password
603
604 def wrap_socket(self, sock, **kwargs):
605 return ssl.wrap_socket(
606 sock,
607 keyfile=self._keyfile,
608 certfile=self._certfile,
609 ssl_version=self._protocol,
610 **kwargs
611 )
612
613
614 def is_ssl_error(error=None):
615 """Checks if the given error (or the current one) is an SSL error."""
616 exc_types = (ssl.SSLError,)
617 try:
618 from OpenSSL.SSL import Error
619
620 exc_types += (Error,)
621 except ImportError:
622 pass
623
624 if error is None:
625 error = sys.exc_info()[1]
626 return isinstance(error, exc_types)
627
628
629 def select_address_family(host, port):
630 """Return ``AF_INET``, ``AF_INET6``, or ``AF_UNIX`` depending on
631 the host and port."""
632 # disabled due to problems with current ipv6 implementations
633 # and various operating systems. Probably this code also is
634 # not supposed to work, but I can't come up with any other
635 # ways to implement this.
636 # try:
637 # info = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
638 # socket.SOCK_STREAM, 0,
639 # socket.AI_PASSIVE)
640 # if info:
641 # return info[0][0]
642 # except socket.gaierror:
643 # pass
644 if host.startswith("unix://"):
645 return socket.AF_UNIX
646 elif ":" in host and hasattr(socket, "AF_INET6"):
647 return socket.AF_INET6
648 return socket.AF_INET
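The selection logic above is simple enough to mirror in a standalone sketch (`pick_family` is an illustrative re-implementation): a ``unix://`` prefix selects ``AF_UNIX``, a host containing ``:`` is treated as IPv6, everything else as IPv4.

```python
import socket

def pick_family(host):
    # Sketch of select_address_family above. AF_UNIX does not exist
    # on Windows, hence the getattr fallback.
    if host.startswith("unix://"):
        return getattr(socket, "AF_UNIX", None)
    if ":" in host and hasattr(socket, "AF_INET6"):
        return socket.AF_INET6
    return socket.AF_INET

print(pick_family("127.0.0.1") == socket.AF_INET)  # True
print(pick_family("::1") == socket.AF_INET6)       # True
```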
649
650
651 def get_sockaddr(host, port, family):
652 """Return a fully qualified socket address that can be passed to
653 :func:`socket.bind`."""
654 if family == af_unix:
655 return host.split("://", 1)[1]
656 try:
657 res = socket.getaddrinfo(
658 host, port, family, socket.SOCK_STREAM, socket.IPPROTO_TCP
659 )
660 except socket.gaierror:
661 return host, port
662 return res[0][4]
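The resolution step above can be sketched directly with the stdlib (`sockaddr` here is an illustrative simplification that skips the Unix-socket and error-fallback branches). Numeric hosts resolve without any network access:

```python
import socket

def sockaddr(host, port, family):
    # Sketch of the getaddrinfo call in get_sockaddr above: resolve
    # (host, port) to the first usable TCP bind address for `family`.
    res = socket.getaddrinfo(
        host, port, family, socket.SOCK_STREAM, socket.IPPROTO_TCP
    )
    return res[0][4]

print(sockaddr("127.0.0.1", 8080, socket.AF_INET))  # ('127.0.0.1', 8080)
```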
663
664
665 class BaseWSGIServer(HTTPServer, object):
666
667 """Simple single-threaded, single-process WSGI server."""
668
669 multithread = False
670 multiprocess = False
671 request_queue_size = LISTEN_QUEUE
672
673 def __init__(
674 self,
675 host,
676 port,
677 app,
678 handler=None,
679 passthrough_errors=False,
680 ssl_context=None,
681 fd=None,
682 ):
683 if handler is None:
684 handler = WSGIRequestHandler
685
686 self.address_family = select_address_family(host, port)
687
688 if fd is not None:
689 real_sock = socket.fromfd(fd, self.address_family, socket.SOCK_STREAM)
690 port = 0
691
692 server_address = get_sockaddr(host, int(port), self.address_family)
693
694 # remove socket file if it already exists
695 if self.address_family == af_unix and os.path.exists(server_address):
696 os.unlink(server_address)
697 HTTPServer.__init__(self, server_address, handler)
698
699 self.app = app
700 self.passthrough_errors = passthrough_errors
701 self.shutdown_signal = False
702 self.host = host
703 self.port = self.socket.getsockname()[1]
704
705 # Patch in the original socket.
706 if fd is not None:
707 self.socket.close()
708 self.socket = real_sock
709 self.server_address = self.socket.getsockname()
710
711 if ssl_context is not None:
712 if isinstance(ssl_context, tuple):
713 ssl_context = load_ssl_context(*ssl_context)
714 if ssl_context == "adhoc":
715 ssl_context = generate_adhoc_ssl_context()
716 # If we are on Python 2 the return value from socket.fromfd
717 # is an internal socket object but what we need for ssl wrap
718 # is the wrapper around it :(
719 sock = self.socket
720 if PY2 and not isinstance(sock, socket.socket):
721 sock = socket.socket(sock.family, sock.type, sock.proto, sock)
722 self.socket = ssl_context.wrap_socket(sock, server_side=True)
723 self.ssl_context = ssl_context
724 else:
725 self.ssl_context = None
726
727 def log(self, type, message, *args):
728 _log(type, message, *args)
729
730 def serve_forever(self):
731 self.shutdown_signal = False
732 try:
733 HTTPServer.serve_forever(self)
734 except KeyboardInterrupt:
735 pass
736 finally:
737 self.server_close()
738
739 def handle_error(self, request, client_address):
740 if self.passthrough_errors:
741 raise
742 # Python 2 still causes a socket.error after the earlier
743 # handling, so silence it here.
744 if isinstance(sys.exc_info()[1], _ConnectionError):
745 return
746 return HTTPServer.handle_error(self, request, client_address)
747
748 def get_request(self):
749 con, info = self.socket.accept()
750 return con, info
751
752
753 class ThreadedWSGIServer(ThreadingMixIn, BaseWSGIServer):
754
755 """A WSGI server that does threading."""
756
757 multithread = True
758 daemon_threads = True
759
760
761 class ForkingWSGIServer(ForkingMixIn, BaseWSGIServer):
762
763 """A WSGI server that does forking."""
764
765 multiprocess = True
766
767 def __init__(
768 self,
769 host,
770 port,
771 app,
772 processes=40,
773 handler=None,
774 passthrough_errors=False,
775 ssl_context=None,
776 fd=None,
777 ):
778 if not can_fork:
779 raise ValueError("Your platform does not support forking.")
780 BaseWSGIServer.__init__(
781 self, host, port, app, handler, passthrough_errors, ssl_context, fd
782 )
783 self.max_children = processes
784
785
786 def make_server(
787 host=None,
788 port=None,
789 app=None,
790 threaded=False,
791 processes=1,
792 request_handler=None,
793 passthrough_errors=False,
794 ssl_context=None,
795 fd=None,
796 ):
797 """Create a new server instance that is either threaded, forks,
798 or just processes one request after another.
799 """
800 if threaded and processes > 1:
801 raise ValueError("cannot have a multi-threaded and multi-process server.")
802 elif threaded:
803 return ThreadedWSGIServer(
804 host, port, app, request_handler, passthrough_errors, ssl_context, fd=fd
805 )
806 elif processes > 1:
807 return ForkingWSGIServer(
808 host,
809 port,
810 app,
811 processes,
812 request_handler,
813 passthrough_errors,
814 ssl_context,
815 fd=fd,
816 )
817 else:
818 return BaseWSGIServer(
819 host, port, app, request_handler, passthrough_errors, ssl_context, fd=fd
820 )
821
822
823 def is_running_from_reloader():
824 """Checks if the application is running from within the Werkzeug
825 reloader subprocess.
826
827 .. versionadded:: 0.10
828 """
829 return os.environ.get("WERKZEUG_RUN_MAIN") == "true"
830
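The check above relies on the reloader marking its child process through an environment variable. A minimal sketch of the same idea (`in_reloader` takes the environment as a parameter only so it can be demonstrated without touching ``os.environ``):

```python
import os

def in_reloader(environ=os.environ):
    # Sketch of is_running_from_reloader above: the reloader sets
    # WERKZEUG_RUN_MAIN=true in the subprocess it spawns.
    return environ.get("WERKZEUG_RUN_MAIN") == "true"

print(in_reloader({}))                              # False
print(in_reloader({"WERKZEUG_RUN_MAIN": "true"}))   # True
```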
831
832 def run_simple(
833 hostname,
834 port,
835 application,
836 use_reloader=False,
837 use_debugger=False,
838 use_evalex=True,
839 extra_files=None,
840 reloader_interval=1,
841 reloader_type="auto",
842 threaded=False,
843 processes=1,
844 request_handler=None,
845 static_files=None,
846 passthrough_errors=False,
847 ssl_context=None,
848 ):
849 """Start a WSGI application. Optional features include a reloader,
850 multithreading and fork support.
851
852 This function has a command-line interface too::
853
854 python -m werkzeug.serving --help
855
856 .. versionadded:: 0.5
857 `static_files` was added to simplify serving of static files as well
858 as `passthrough_errors`.
859
860 .. versionadded:: 0.6
861 support for SSL was added.
862
863 .. versionadded:: 0.8
864 Added support for automatically loading a SSL context from certificate
865 file and private key.
866
867 .. versionadded:: 0.9
868 Added command-line interface.
869
870 .. versionadded:: 0.10
871 Improved the reloader and added support for changing the backend
872 through the `reloader_type` parameter. See :ref:`reloader`
873 for more information.
874
875 .. versionchanged:: 0.15
876 Bind to a Unix socket by passing a path that starts with
877 ``unix://`` as the ``hostname``.
878
879 :param hostname: The host to bind to, for example ``'localhost'``.
880 If the value is a path that starts with ``unix://`` it will bind
881 to a Unix socket instead of a TCP socket.
882 :param port: The port for the server, e.g. ``8080``.
883 :param application: the WSGI application to execute
884 :param use_reloader: should the server automatically restart the python
885 process if modules were changed?
886 :param use_debugger: should the werkzeug debugging system be used?
887 :param use_evalex: should the exception evaluation feature be enabled?
888 :param extra_files: a list of files the reloader should watch
889 additionally to the modules. For example configuration
890 files.
891 :param reloader_interval: the interval for the reloader in seconds.
892 :param reloader_type: the type of reloader to use. The default is
893 auto detection. Valid values are ``'stat'`` and
894 ``'watchdog'``. See :ref:`reloader` for more
895 information.
896 :param threaded: should the process handle each request in a separate
897 thread?
898 :param processes: if greater than 1 then handle each request in a new process
899 up to this maximum number of concurrent processes.
900 :param request_handler: optional parameter that can be used to replace
901 the default one. You can use this to replace it
902 with a different
903 :class:`~BaseHTTPServer.BaseHTTPRequestHandler`
904 subclass.
905 :param static_files: a list or dict of paths for static files. This works
906 exactly like :class:`SharedDataMiddleware`, it's actually
907 just wrapping the application in that middleware before
908 serving.
909 :param passthrough_errors: set this to `True` to disable error catching.
910 This means the server will die on errors, but
911 it can be useful for hooking in debuggers (pdb etc.)
912 :param ssl_context: an SSL context for the connection. Either an
913 :class:`ssl.SSLContext`, a tuple in the form
914 ``(cert_file, pkey_file)``, the string ``'adhoc'`` if
915 the server should automatically create one, or ``None``
916 to disable SSL (which is the default).
917 """
918 if not isinstance(port, int):
919 raise TypeError("port must be an integer")
920 if use_debugger:
921 from .debug import DebuggedApplication
922
923 application = DebuggedApplication(application, use_evalex)
924 if static_files:
925 from .middleware.shared_data import SharedDataMiddleware
926
927 application = SharedDataMiddleware(application, static_files)
928
929 def log_startup(sock):
930 display_hostname = hostname if hostname not in ("", "*") else "localhost"
931 quit_msg = "(Press CTRL+C to quit)"
932 if sock.family == af_unix:
933 _log("info", " * Running on %s %s", display_hostname, quit_msg)
934 else:
935 if ":" in display_hostname:
936 display_hostname = "[%s]" % display_hostname
937 port = sock.getsockname()[1]
938 _log(
939 "info",
940 " * Running on %s://%s:%d/ %s",
941 "http" if ssl_context is None else "https",
942 display_hostname,
943 port,
944 quit_msg,
945 )
946
947 def inner():
948 try:
949 fd = int(os.environ["WERKZEUG_SERVER_FD"])
950 except (LookupError, ValueError):
951 fd = None
952 srv = make_server(
953 hostname,
954 port,
955 application,
956 threaded,
957 processes,
958 request_handler,
959 passthrough_errors,
960 ssl_context,
961 fd=fd,
962 )
963 if fd is None:
964 log_startup(srv.socket)
965 srv.serve_forever()
966
967 if use_reloader:
968 # If we're not running already in the subprocess that is the
969 # reloader we want to open up a socket early to make sure the
970 # port is actually available.
971 if not is_running_from_reloader():
972 if port == 0 and not can_open_by_fd:
973 raise ValueError(
974 "Cannot bind to a random port with enabled "
975 "reloader if the Python interpreter does "
976 "not support socket opening by fd."
977 )
978
979 # Create and destroy a socket so that any exceptions are
980 # raised before we spawn a separate Python interpreter and
981 # lose this ability.
982 address_family = select_address_family(hostname, port)
983 server_address = get_sockaddr(hostname, port, address_family)
984 s = socket.socket(address_family, socket.SOCK_STREAM)
985 s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
986 s.bind(server_address)
987 if hasattr(s, "set_inheritable"):
988 s.set_inheritable(True)
989
990 # If we can open the socket by file descriptor, then we can just
991 # reuse this one and our socket will survive the restarts.
992 if can_open_by_fd:
993 os.environ["WERKZEUG_SERVER_FD"] = str(s.fileno())
994 s.listen(LISTEN_QUEUE)
995 log_startup(s)
996 else:
997 s.close()
998 if address_family == af_unix:
999 _log("info", "Unlinking %s" % server_address)
1000 os.unlink(server_address)
1001
1002 # Do not use relative imports, otherwise "python -m werkzeug.serving"
1003 # breaks.
1004 from ._reloader import run_with_reloader
1005
1006 run_with_reloader(inner, extra_files, reloader_interval, reloader_type)
1007 else:
1008 inner()
1009
1010
1011 def run_with_reloader(*args, **kwargs):
1012 # People keep using undocumented APIs. Do not use this function
1013 # please, we do not guarantee that it continues working.
1014 from ._reloader import run_with_reloader
1015
1016 return run_with_reloader(*args, **kwargs)
1017
1018
1019 def main():
1020 """A simple command-line interface for :py:func:`run_simple`."""
1021
1022 # in contrast to argparse, this works at least under Python < 2.7
1023 import optparse
1024 from .utils import import_string
1025
1026 parser = optparse.OptionParser(usage="Usage: %prog [options] app_module:app_object")
1027 parser.add_option(
1028 "-b",
1029 "--bind",
1030 dest="address",
1031 help="The hostname:port the app should listen on.",
1032 )
1033 parser.add_option(
1034 "-d",
1035 "--debug",
1036 dest="use_debugger",
1037 action="store_true",
1038 default=False,
1039 help="Use Werkzeug's debugger.",
1040 )
1041 parser.add_option(
1042 "-r",
1043 "--reload",
1044 dest="use_reloader",
1045 action="store_true",
1046 default=False,
1047 help="Reload Python process if modules change.",
1048 )
1049 options, args = parser.parse_args()
1050
1051 hostname, port = None, None
1052 if options.address:
1053 address = options.address.split(":")
1054 hostname = address[0]
1055 if len(address) > 1:
1056 port = address[1]
1057
1058 if len(args) != 1:
1059 sys.stdout.write("No application supplied, or too many arguments. See --help\n")
1060 sys.exit(1)
1061 app = import_string(args[0])
1062
1063 run_simple(
1064 hostname=(hostname or "127.0.0.1"),
1065 port=int(port or 5000),
1066 application=app,
1067 use_reloader=options.use_reloader,
1068 use_debugger=options.use_debugger,
1069 )
1070
1071
1072 if __name__ == "__main__":
1073 main()
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.test
3 ~~~~~~~~~~~~~
4
5 This module implements a client to WSGI applications for testing.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import mimetypes
11 import sys
12 from io import BytesIO
13 from itertools import chain
14 from random import random
15 from tempfile import TemporaryFile
16 from time import time
17
18 from ._compat import iteritems
19 from ._compat import iterlists
20 from ._compat import itervalues
21 from ._compat import make_literal_wrapper
22 from ._compat import reraise
23 from ._compat import string_types
24 from ._compat import text_type
25 from ._compat import to_bytes
26 from ._compat import wsgi_encoding_dance
27 from ._internal import _get_environ
28 from .datastructures import CallbackDict
29 from .datastructures import CombinedMultiDict
30 from .datastructures import EnvironHeaders
31 from .datastructures import FileMultiDict
32 from .datastructures import FileStorage
33 from .datastructures import Headers
34 from .datastructures import MultiDict
35 from .http import dump_cookie
36 from .http import dump_options_header
37 from .http import parse_options_header
38 from .urls import iri_to_uri
39 from .urls import url_encode
40 from .urls import url_fix
41 from .urls import url_parse
42 from .urls import url_unparse
43 from .urls import url_unquote
44 from .utils import get_content_type
45 from .wrappers import BaseRequest
46 from .wsgi import ClosingIterator
47 from .wsgi import get_current_url
48
49 try:
50 from urllib.request import Request as U2Request
51 except ImportError:
52 from urllib2 import Request as U2Request
53
54 try:
55 from http.cookiejar import CookieJar
56 except ImportError:
57 from cookielib import CookieJar
58
59
60 def stream_encode_multipart(
61 values, use_tempfile=True, threshold=1024 * 500, boundary=None, charset="utf-8"
62 ):
63 """Encode a dict of values (either strings or file descriptors or
64 :class:`FileStorage` objects.) into a multipart encoded string stored
65 in a file descriptor.
66 """
67 if boundary is None:
68 boundary = "---------------WerkzeugFormPart_%s%s" % (time(), random())
69 _closure = [BytesIO(), 0, False]
70
71 if use_tempfile:
72
73 def write_binary(string):
74 stream, total_length, on_disk = _closure
75 if on_disk:
76 stream.write(string)
77 else:
78 length = len(string)
79 if length + _closure[1] <= threshold:
80 stream.write(string)
81 else:
82 new_stream = TemporaryFile("wb+")
83 new_stream.write(stream.getvalue())
84 new_stream.write(string)
85 _closure[0] = new_stream
86 _closure[2] = True
87 _closure[1] = total_length + length
88
89 else:
90 write_binary = _closure[0].write
91
92 def write(string):
93 write_binary(string.encode(charset))
94
95 if not isinstance(values, MultiDict):
96 values = MultiDict(values)
97
98 for key, values in iterlists(values):
99 for value in values:
100 write('--%s\r\nContent-Disposition: form-data; name="%s"' % (boundary, key))
101 reader = getattr(value, "read", None)
102 if reader is not None:
103 filename = getattr(value, "filename", getattr(value, "name", None))
104 content_type = getattr(value, "content_type", None)
105 if content_type is None:
106 content_type = (
107 filename
108 and mimetypes.guess_type(filename)[0]
109 or "application/octet-stream"
110 )
111 if filename is not None:
112 write('; filename="%s"\r\n' % filename)
113 else:
114 write("\r\n")
115 write("Content-Type: %s\r\n\r\n" % content_type)
116 while 1:
117 chunk = reader(16384)
118 if not chunk:
119 break
120 write_binary(chunk)
121 else:
122 if not isinstance(value, string_types):
123 value = str(value)
124
125 value = to_bytes(value, charset)
126 write("\r\n\r\n")
127 write_binary(value)
128 write("\r\n")
129 write("--%s--\r\n" % boundary)
130
131 length = int(_closure[0].tell())
132 _closure[0].seek(0)
133 return _closure[0], length, boundary
134
135
136 def encode_multipart(values, boundary=None, charset="utf-8"):
137 """Like `stream_encode_multipart` but returns a tuple in the form
138 (``boundary``, ``data``) where data is a bytestring.
139 """
140 stream, length, boundary = stream_encode_multipart(
141 values, use_tempfile=False, boundary=boundary, charset=charset
142 )
143 return boundary, stream.read()
144
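The part layout that `stream_encode_multipart` writes for a plain text field can be sketched with simple string formatting (a simplified illustration, not the werkzeug helper itself): each part opens with the boundary and a `Content-Disposition` header, and the body is terminated by the boundary followed by two dashes.

```python
def encode_form_field(boundary, name, value):
    # One text form-data part, laid out as stream_encode_multipart
    # writes it: boundary line, Content-Disposition header, blank
    # line, the value, trailing CRLF.
    return (
        '--%s\r\nContent-Disposition: form-data; name="%s"\r\n\r\n%s\r\n'
        % (boundary, name, value)
    )

boundary = "ExampleBoundary"
body = encode_form_field(boundary, "field", "value")
body += "--%s--\r\n" % boundary
```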
145
146 def File(fd, filename=None, mimetype=None):
147 """Backwards compat.
148
149 .. deprecated:: 0.5
150 """
151 from warnings import warn
152
153 warn(
154 "'werkzeug.test.File' is deprecated as of version 0.5 and will"
155 " be removed in version 1.0. Use 'EnvironBuilder' or"
156 " 'FileStorage' instead.",
157 DeprecationWarning,
158 stacklevel=2,
159 )
160 return FileStorage(fd, filename=filename, content_type=mimetype)
161
162
163 class _TestCookieHeaders(object):
164
165 """A headers adapter for cookielib
166 """
167
168 def __init__(self, headers):
169 self.headers = headers
170
171 def getheaders(self, name):
172 headers = []
173 name = name.lower()
174 for k, v in self.headers:
175 if k.lower() == name:
176 headers.append(v)
177 return headers
178
179 def get_all(self, name, default=None):
180 rv = []
181 for k, v in self.headers:
182 if k.lower() == name.lower():
183 rv.append(v)
184 return rv or default or []
185
186
187 class _TestCookieResponse(object):
188
189 """Something that looks like a httplib.HTTPResponse, but is actually just an
190 adapter for our test responses to make them available for cookielib.
191 """
192
193 def __init__(self, headers):
194 self.headers = _TestCookieHeaders(headers)
195
196 def info(self):
197 return self.headers
198
199
200 class _TestCookieJar(CookieJar):
201
202 """A cookielib.CookieJar modified to inject and read cookie headers from
203 and to wsgi environments, and wsgi application responses.
204 """
205
206 def inject_wsgi(self, environ):
207 """Inject the cookies as client headers into the server's wsgi
208 environment.
209 """
210 cvals = ["%s=%s" % (c.name, c.value) for c in self]
211
212 if cvals:
213 environ["HTTP_COOKIE"] = "; ".join(cvals)
214 else:
215 environ.pop("HTTP_COOKIE", None)
216
217 def extract_wsgi(self, environ, headers):
218 """Extract the server's set-cookie headers as cookies into the
219 cookie jar.
220 """
221 self.extract_cookies(
222 _TestCookieResponse(headers), U2Request(get_current_url(environ))
223 )
224
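The header injection performed by `inject_wsgi` can be exercised with a stdlib-only copy (cookies are passed as plain `(name, value)` pairs here rather than cookiejar objects):

```python
def inject_cookies(environ, cookies):
    # Mirrors _TestCookieJar.inject_wsgi: serialize (name, value)
    # pairs into HTTP_COOKIE, or remove the header when there are
    # no cookies to send.
    cvals = ["%s=%s" % (name, value) for name, value in cookies]
    if cvals:
        environ["HTTP_COOKIE"] = "; ".join(cvals)
    else:
        environ.pop("HTTP_COOKIE", None)
    return environ
```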
225
226 def _iter_data(data):
227 """Iterates over a `dict` or :class:`MultiDict` yielding all keys and
228 values.
229 This is used to iterate over the data passed to the
230 :class:`EnvironBuilder`.
231 """
232 if isinstance(data, MultiDict):
233 for key, values in iterlists(data):
234 for value in values:
235 yield key, value
236 else:
237 for key, values in iteritems(data):
238 if isinstance(values, list):
239 for value in values:
240 yield key, value
241 else:
242 yield key, values
243
244
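The plain-dict branch of `_iter_data` can be exercised with a stdlib-only copy (the `MultiDict` branch is omitted here): list values produce one `(key, value)` pair per item, everything else yields a single pair.

```python
def iter_data(data):
    # A stdlib-only copy of the plain-dict branch of _iter_data:
    # list values yield one (key, value) pair per list item.
    for key, values in data.items():
        if isinstance(values, list):
            for value in values:
                yield key, value
        else:
            yield key, values

pairs = sorted(iter_data({"a": "1", "b": ["2", "3"]}))
```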
245 class EnvironBuilder(object):
246 """This class can be used to conveniently create a WSGI environment
247 for testing purposes. It can be used to quickly create WSGI environments
248 or request objects from arbitrary data.
249
250 The signature of this class is also used in some other places as of
251 Werkzeug 0.5 (:func:`create_environ`, :meth:`BaseResponse.from_values`,
252 :meth:`Client.open`). Because of this most of the functionality is
253 available through the constructor alone.
254
255 Files and regular form data can be manipulated independently of each
256 other with the :attr:`form` and :attr:`files` attributes, but are
257 passed with the same argument to the constructor: `data`.
258
259 `data` can be any of these values:
260
261 - a `str` or `bytes` object: The object is converted into an
262 :attr:`input_stream`, the :attr:`content_length` is set and you have to
263 provide a :attr:`content_type`.
264 - a `dict` or :class:`MultiDict`: The keys have to be strings. The values
265 have to be either any of the following objects, or a list of any of the
266 following objects:
267
268 - a :class:`file`-like object: These are converted into
269 :class:`FileStorage` objects automatically.
270 - a `tuple`: The :meth:`~FileMultiDict.add_file` method is called
271 with the key and the unpacked `tuple` items as positional
272 arguments.
273 - a `str`: The string is set as form data for the associated key.
274 - a file-like object: The object content is loaded in memory and then
275 handled like a regular `str` or a `bytes`.
276
277 :param path: the path of the request. In the WSGI environment this will
278 end up as `PATH_INFO`. If the `query_string` is not defined
279 and there is a question mark in the `path` everything after
280 it is used as query string.
281 :param base_url: the base URL is a URL that is used to extract the WSGI
282 URL scheme, host (server name + server port) and the
283 script root (`SCRIPT_NAME`).
284 :param query_string: an optional string or dict with URL parameters.
285 :param method: the HTTP method to use, defaults to `GET`.
286 :param input_stream: an optional input stream. Do not specify this and
287 `data`. As soon as an input stream is set you can't
288 modify :attr:`args` and :attr:`files` unless you
289 set the :attr:`input_stream` to `None` again.
290 :param content_type: The content type for the request. As of 0.5 you
291 don't have to provide this when specifying files
292 and form data via `data`.
293 :param content_length: The content length for the request. You don't
294 have to specify this when providing data via
295 `data`.
296 :param errors_stream: an optional error stream that is used for
297 `wsgi.errors`. Defaults to :data:`stderr`.
298 :param multithread: controls `wsgi.multithread`. Defaults to `False`.
299 :param multiprocess: controls `wsgi.multiprocess`. Defaults to `False`.
300 :param run_once: controls `wsgi.run_once`. Defaults to `False`.
301 :param headers: an optional list or :class:`Headers` object of headers.
302 :param data: a string or dict of form data or a file-object.
303 See explanation above.
304 :param json: An object to be serialized and assigned to ``data``.
305 Defaults the content type to ``"application/json"``.
306 Serialized with the function assigned to :attr:`json_dumps`.
307 :param environ_base: an optional dict of environment defaults.
308 :param environ_overrides: an optional dict of environment overrides.
309 :param charset: the charset used to encode unicode data.
310
311 .. versionadded:: 0.15
312 The ``json`` param and :meth:`json_dumps` method.
313
314 .. versionadded:: 0.15
315 The environ has keys ``REQUEST_URI`` and ``RAW_URI`` containing
316 the path before percent-decoding. This is not part of the WSGI
317 PEP, but many WSGI servers include it.
318
319 .. versionchanged:: 0.6
320 ``path`` and ``base_url`` can now be unicode strings that are
321 encoded with :func:`iri_to_uri`.
322 """
323
324 #: the server protocol to use. defaults to HTTP/1.1
325 server_protocol = "HTTP/1.1"
326
327 #: the wsgi version to use. defaults to (1, 0)
328 wsgi_version = (1, 0)
329
330 #: the default request class for :meth:`get_request`
331 request_class = BaseRequest
332
333 import json
334
335 #: The serialization function used when ``json`` is passed.
336 json_dumps = staticmethod(json.dumps)
337 del json
338
339 def __init__(
340 self,
341 path="/",
342 base_url=None,
343 query_string=None,
344 method="GET",
345 input_stream=None,
346 content_type=None,
347 content_length=None,
348 errors_stream=None,
349 multithread=False,
350 multiprocess=False,
351 run_once=False,
352 headers=None,
353 data=None,
354 environ_base=None,
355 environ_overrides=None,
356 charset="utf-8",
357 mimetype=None,
358 json=None,
359 ):
360 path_s = make_literal_wrapper(path)
361 if query_string is not None and path_s("?") in path:
362 raise ValueError("Query string is defined in the path and as an argument")
363 if query_string is None and path_s("?") in path:
364 path, query_string = path.split(path_s("?"), 1)
365 self.charset = charset
366 self.path = iri_to_uri(path)
367 if base_url is not None:
368 base_url = url_fix(iri_to_uri(base_url, charset), charset)
369 self.base_url = base_url
370 if isinstance(query_string, (bytes, text_type)):
371 self.query_string = query_string
372 else:
373 if query_string is None:
374 query_string = MultiDict()
375 elif not isinstance(query_string, MultiDict):
376 query_string = MultiDict(query_string)
377 self.args = query_string
378 self.method = method
379 if headers is None:
380 headers = Headers()
381 elif not isinstance(headers, Headers):
382 headers = Headers(headers)
383 self.headers = headers
384 if content_type is not None:
385 self.content_type = content_type
386 if errors_stream is None:
387 errors_stream = sys.stderr
388 self.errors_stream = errors_stream
389 self.multithread = multithread
390 self.multiprocess = multiprocess
391 self.run_once = run_once
392 self.environ_base = environ_base
393 self.environ_overrides = environ_overrides
394 self.input_stream = input_stream
395 self.content_length = content_length
396 self.closed = False
397
398 if json is not None:
399 if data is not None:
400 raise TypeError("can't provide both json and data")
401
402 data = self.json_dumps(json)
403
404 if self.content_type is None:
405 self.content_type = "application/json"
406
407 if data:
408 if input_stream is not None:
409 raise TypeError("can't provide input stream and data")
410 if hasattr(data, "read"):
411 data = data.read()
412 if isinstance(data, text_type):
413 data = data.encode(self.charset)
414 if isinstance(data, bytes):
415 self.input_stream = BytesIO(data)
416 if self.content_length is None:
417 self.content_length = len(data)
418 else:
419 for key, value in _iter_data(data):
420 if isinstance(value, (tuple, dict)) or hasattr(value, "read"):
421 self._add_file_from_data(key, value)
422 else:
423 self.form.setlistdefault(key).append(value)
424
425 if mimetype is not None:
426 self.mimetype = mimetype
427
428 @classmethod
429 def from_environ(cls, environ, **kwargs):
430 """Turn an environ dict back into a builder. Any extra kwargs
431 override the args extracted from the environ.
432
433 .. versionadded:: 0.15
434 """
435 headers = Headers(EnvironHeaders(environ))
436 out = {
437 "path": environ["PATH_INFO"],
438 "base_url": cls._make_base_url(
439 environ["wsgi.url_scheme"], headers.pop("Host"), environ["SCRIPT_NAME"]
440 ),
441 "query_string": environ["QUERY_STRING"],
442 "method": environ["REQUEST_METHOD"],
443 "input_stream": environ["wsgi.input"],
444 "content_type": headers.pop("Content-Type", None),
445 "content_length": headers.pop("Content-Length", None),
446 "errors_stream": environ["wsgi.errors"],
447 "multithread": environ["wsgi.multithread"],
448 "multiprocess": environ["wsgi.multiprocess"],
449 "run_once": environ["wsgi.run_once"],
450 "headers": headers,
451 }
452 out.update(kwargs)
453 return cls(**out)
454
455 def _add_file_from_data(self, key, value):
456 """Called in the EnvironBuilder to add files from the data dict."""
457 if isinstance(value, tuple):
458 self.files.add_file(key, *value)
459 elif isinstance(value, dict):
460 from warnings import warn
461
462 warn(
463 "Passing a dict as file data is deprecated as of"
464 " version 0.5 and will be removed in version 1.0. Use"
465 " a tuple or 'FileStorage' object instead.",
466 DeprecationWarning,
467 stacklevel=2,
468 )
469 value = dict(value)
470 mimetype = value.pop("mimetype", None)
471 if mimetype is not None:
472 value["content_type"] = mimetype
473 self.files.add_file(key, **value)
474 else:
475 self.files.add_file(key, value)
476
477 @staticmethod
478 def _make_base_url(scheme, host, script_root):
479 return url_unparse((scheme, host, script_root, "", "")).rstrip("/") + "/"
480
481 @property
482 def base_url(self):
483 """The base URL is used to extract the URL scheme, host name,
484 port, and root path.
485 """
486 return self._make_base_url(self.url_scheme, self.host, self.script_root)
487
488 @base_url.setter
489 def base_url(self, value):
490 if value is None:
491 scheme = "http"
492 netloc = "localhost"
493 script_root = ""
494 else:
495 scheme, netloc, script_root, qs, anchor = url_parse(value)
496 if qs or anchor:
497 raise ValueError("base url must not contain a query string or fragment")
498 self.script_root = script_root.rstrip("/")
499 self.host = netloc
500 self.url_scheme = scheme
501
502 def _get_content_type(self):
503 ct = self.headers.get("Content-Type")
504 if ct is None and not self._input_stream:
505 if self._files:
506 return "multipart/form-data"
507 elif self._form:
508 return "application/x-www-form-urlencoded"
509 return None
510 return ct
511
512 def _set_content_type(self, value):
513 if value is None:
514 self.headers.pop("Content-Type", None)
515 else:
516 self.headers["Content-Type"] = value
517
518 content_type = property(
519 _get_content_type,
520 _set_content_type,
521 doc="""The content type for the request. Reflected from and to
522 the :attr:`headers`. Do not set if you set :attr:`files` or
523 :attr:`form` for auto detection.""",
524 )
525 del _get_content_type, _set_content_type
526
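The `content_type` autodetection above can be sketched as a standalone function (hypothetical name; the real getter reads the builder's attributes instead of taking flags): an explicit header always wins, and detection only kicks in when no input stream is set.

```python
def detect_content_type(header, has_input_stream, has_files, has_form):
    # Mirrors the content_type getter above: an explicit header
    # wins; without one (and without an input stream), files imply
    # multipart and form fields imply urlencoded.
    if header is not None or has_input_stream:
        return header
    if has_files:
        return "multipart/form-data"
    if has_form:
        return "application/x-www-form-urlencoded"
    return None
```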
527 def _get_content_length(self):
528 return self.headers.get("Content-Length", type=int)
529
530 def _get_mimetype(self):
531 ct = self.content_type
532 if ct:
533 return ct.split(";")[0].strip()
534
535 def _set_mimetype(self, value):
536 self.content_type = get_content_type(value, self.charset)
537
538 def _get_mimetype_params(self):
539 def on_update(d):
540 self.headers["Content-Type"] = dump_options_header(self.mimetype, d)
541
542 d = parse_options_header(self.headers.get("content-type", ""))[1]
543 return CallbackDict(d, on_update)
544
545 mimetype = property(
546 _get_mimetype,
547 _set_mimetype,
548 doc="""The mimetype (content type without charset etc.)
549
550 .. versionadded:: 0.14
551 """,
552 )
553 mimetype_params = property(
554 _get_mimetype_params,
555 doc=""" The mimetype parameters as dict. For example if the
556 content type is ``text/html; charset=utf-8`` the params would be
557 ``{'charset': 'utf-8'}``.
558
559 .. versionadded:: 0.14
560 """,
561 )
562 del _get_mimetype, _set_mimetype, _get_mimetype_params
563
564 def _set_content_length(self, value):
565 if value is None:
566 self.headers.pop("Content-Length", None)
567 else:
568 self.headers["Content-Length"] = str(value)
569
570 content_length = property(
571 _get_content_length,
572 _set_content_length,
573 doc="""The content length as integer. Reflected from and to the
574 :attr:`headers`. Do not set if you set :attr:`files` or
575 :attr:`form` for auto detection.""",
576 )
577 del _get_content_length, _set_content_length
578
579 def form_property(name, storage, doc): # noqa: B902
580 key = "_" + name
581
582 def getter(self):
583 if self._input_stream is not None:
584 raise AttributeError("an input stream is defined")
585 rv = getattr(self, key)
586 if rv is None:
587 rv = storage()
588 setattr(self, key, rv)
589
590 return rv
591
592 def setter(self, value):
593 self._input_stream = None
594 setattr(self, key, value)
595
596 return property(getter, setter, doc=doc)
597
598 form = form_property("form", MultiDict, doc="A :class:`MultiDict` of form values.")
599 files = form_property(
600 "files",
601 FileMultiDict,
602 doc="""A :class:`FileMultiDict` of uploaded files. You can use
603 the :meth:`~FileMultiDict.add_file` method to add new files to
604 the dict.""",
605 )
606 del form_property
607
608 def _get_input_stream(self):
609 return self._input_stream
610
611 def _set_input_stream(self, value):
612 self._input_stream = value
613 self._form = self._files = None
614
615 input_stream = property(
616 _get_input_stream,
617 _set_input_stream,
618 doc="""An optional input stream. If you set this it will clear
619 :attr:`form` and :attr:`files`.""",
620 )
621 del _get_input_stream, _set_input_stream
622
623 def _get_query_string(self):
624 if self._query_string is None:
625 if self._args is not None:
626 return url_encode(self._args, charset=self.charset)
627 return ""
628 return self._query_string
629
630 def _set_query_string(self, value):
631 self._query_string = value
632 self._args = None
633
634 query_string = property(
635 _get_query_string,
636 _set_query_string,
637 doc="""The query string. If you set this to a string
638 :attr:`args` will no longer be available.""",
639 )
640 del _get_query_string, _set_query_string
641
642 def _get_args(self):
643 if self._query_string is not None:
644 raise AttributeError("a query string is defined")
645 if self._args is None:
646 self._args = MultiDict()
647 return self._args
648
649 def _set_args(self, value):
650 self._query_string = None
651 self._args = value
652
653 args = property(
654 _get_args, _set_args, doc="The URL arguments as :class:`MultiDict`."
655 )
656 del _get_args, _set_args
657
658 @property
659 def server_name(self):
660 """The server name (read-only, use :attr:`host` to set)"""
661 return self.host.split(":", 1)[0]
662
663 @property
664 def server_port(self):
665 """The server port as integer (read-only, use :attr:`host` to set)"""
666 pieces = self.host.split(":", 1)
667 if len(pieces) == 2 and pieces[1].isdigit():
668 return int(pieces[1])
669 elif self.url_scheme == "https":
670 return 443
671 return 80
672
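The `server_port` derivation can be sketched as a pure function over the host string and scheme (a standalone illustration of the property above):

```python
def server_port(host, url_scheme):
    # Mirrors the server_port property: an explicit numeric port in
    # the host wins, then the scheme default (443 for https, 80
    # otherwise).
    pieces = host.split(":", 1)
    if len(pieces) == 2 and pieces[1].isdigit():
        return int(pieces[1])
    if url_scheme == "https":
        return 443
    return 80
```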
673 def __del__(self):
674 try:
675 self.close()
676 except Exception:
677 pass
678
679 def close(self):
680 """Closes all files. If you put real :class:`file` objects into the
681 :attr:`files` dict you can call this method to automatically close
682 them all in one go.
683 """
684 if self.closed:
685 return
686 try:
687 files = itervalues(self.files)
688 except AttributeError:
689 files = ()
690 for f in files:
691 try:
692 f.close()
693 except Exception:
694 pass
695 self.closed = True
696
697 def get_environ(self):
698 """Return the built environ.
699
700 .. versionchanged:: 0.15
701 The content type and length headers are set based on
702 input stream detection. Previously this only set the WSGI
703 keys.
704 """
705 input_stream = self.input_stream
706 content_length = self.content_length
707
708 mimetype = self.mimetype
709 content_type = self.content_type
710
711 if input_stream is not None:
712 start_pos = input_stream.tell()
713 input_stream.seek(0, 2)
714 end_pos = input_stream.tell()
715 input_stream.seek(start_pos)
716 content_length = end_pos - start_pos
717 elif mimetype == "multipart/form-data":
718 values = CombinedMultiDict([self.form, self.files])
719 input_stream, content_length, boundary = stream_encode_multipart(
720 values, charset=self.charset
721 )
722 content_type = mimetype + '; boundary="%s"' % boundary
723 elif mimetype == "application/x-www-form-urlencoded":
724 # XXX: py2v3 review
725 values = url_encode(self.form, charset=self.charset)
726 values = values.encode("ascii")
727 content_length = len(values)
728 input_stream = BytesIO(values)
729 else:
730 input_stream = BytesIO()
731
732 result = {}
733 if self.environ_base:
734 result.update(self.environ_base)
735
736 def _path_encode(x):
737 return wsgi_encoding_dance(url_unquote(x, self.charset), self.charset)
738
739 qs = wsgi_encoding_dance(self.query_string)
740
741 result.update(
742 {
743 "REQUEST_METHOD": self.method,
744 "SCRIPT_NAME": _path_encode(self.script_root),
745 "PATH_INFO": _path_encode(self.path),
746 "QUERY_STRING": qs,
747 # Non-standard, added by mod_wsgi, uWSGI
748 "REQUEST_URI": wsgi_encoding_dance(self.path),
749 # Non-standard, added by gunicorn
750 "RAW_URI": wsgi_encoding_dance(self.path),
751 "SERVER_NAME": self.server_name,
752 "SERVER_PORT": str(self.server_port),
753 "HTTP_HOST": self.host,
754 "SERVER_PROTOCOL": self.server_protocol,
755 "wsgi.version": self.wsgi_version,
756 "wsgi.url_scheme": self.url_scheme,
757 "wsgi.input": input_stream,
758 "wsgi.errors": self.errors_stream,
759 "wsgi.multithread": self.multithread,
760 "wsgi.multiprocess": self.multiprocess,
761 "wsgi.run_once": self.run_once,
762 }
763 )
764
765 headers = self.headers.copy()
766
767 if content_type is not None:
768 result["CONTENT_TYPE"] = content_type
769 headers.set("Content-Type", content_type)
770
771 if content_length is not None:
772 result["CONTENT_LENGTH"] = str(content_length)
773 headers.set("Content-Length", content_length)
774
775 for key, value in headers.to_wsgi_list():
776 result["HTTP_%s" % key.upper().replace("-", "_")] = value
777
778 if self.environ_overrides:
779 result.update(self.environ_overrides)
780
781 return result
782
783 def get_request(self, cls=None):
784 """Returns a request with the data. If the request class is not
785 specified :attr:`request_class` is used.
786
787 :param cls: The request wrapper to use.
788 """
789 if cls is None:
790 cls = self.request_class
791 return cls(self.get_environ())
792
793
794 class ClientRedirectError(Exception):
795 """If a redirect loop is detected when using follow_redirects=True with
796 the :cls:`Client`, then this exception is raised.
797 """
798
799
800 class Client(object):
801 """This class allows you to send requests to a wrapped application.
802
803 The response wrapper can be a class or factory function that takes
804 three arguments: app_iter, status and headers. The default response
805 wrapper just returns a tuple.
806
807 Example::
808
809 class ClientResponse(BaseResponse):
810 ...
811
812 client = Client(MyApplication(), response_wrapper=ClientResponse)
813
814 The use_cookies parameter indicates whether cookies should be stored and
815 sent for subsequent requests. This is True by default, but passing False
816 will disable this behaviour.
817
818 If you want to request some subdomain of your application you may set
819 `allow_subdomain_redirects` to `True`; otherwise no external redirects
820 are allowed.
821
822 .. versionadded:: 0.5
823 `use_cookies` is new in this version. Older versions did not provide
824 builtin cookie support.
825
826 .. versionadded:: 0.14
827 The `mimetype` parameter was added.
828
829 .. versionadded:: 0.15
830 The ``json`` parameter.
831 """
832
833 def __init__(
834 self,
835 application,
836 response_wrapper=None,
837 use_cookies=True,
838 allow_subdomain_redirects=False,
839 ):
840 self.application = application
841 self.response_wrapper = response_wrapper
842 if use_cookies:
843 self.cookie_jar = _TestCookieJar()
844 else:
845 self.cookie_jar = None
846 self.allow_subdomain_redirects = allow_subdomain_redirects
847
848 def set_cookie(
849 self,
850 server_name,
851 key,
852 value="",
853 max_age=None,
854 expires=None,
855 path="/",
856 domain=None,
857 secure=None,
858 httponly=False,
859 charset="utf-8",
860 ):
861 """Sets a cookie in the client's cookie jar. The server name
862 is required and has to match the one that is also passed to
863 the open call.
864 """
865 assert self.cookie_jar is not None, "cookies disabled"
866 header = dump_cookie(
867 key, value, max_age, expires, path, domain, secure, httponly, charset
868 )
869 environ = create_environ(path, base_url="http://" + server_name)
870 headers = [("Set-Cookie", header)]
871 self.cookie_jar.extract_wsgi(environ, headers)
872
873 def delete_cookie(self, server_name, key, path="/", domain=None):
874 """Deletes a cookie in the test client."""
875 self.set_cookie(
876 server_name, key, expires=0, max_age=0, path=path, domain=domain
877 )
878
879 def run_wsgi_app(self, environ, buffered=False):
880 """Runs the wrapped WSGI app with the given environment."""
881 if self.cookie_jar is not None:
882 self.cookie_jar.inject_wsgi(environ)
883 rv = run_wsgi_app(self.application, environ, buffered=buffered)
884 if self.cookie_jar is not None:
885 self.cookie_jar.extract_wsgi(environ, rv[2])
886 return rv
887
888 def resolve_redirect(self, response, new_location, environ, buffered=False):
889 """Perform a new request to the location given by the redirect
890 response to the previous request.
891 """
892 scheme, netloc, path, qs, anchor = url_parse(new_location)
893 builder = EnvironBuilder.from_environ(environ, query_string=qs)
894
895 to_name_parts = netloc.split(":", 1)[0].split(".")
896 from_name_parts = builder.server_name.split(".")
897
898 if to_name_parts != [""]:
899 # The new location has a host, use it for the base URL.
900 builder.url_scheme = scheme
901 builder.host = netloc
902 else:
903 # A local redirect with autocorrect_location_header=False
904 # doesn't have a host, so use the request's host.
905 to_name_parts = from_name_parts
906
907 # Explain why a redirect to a different server name won't be followed.
908 if to_name_parts != from_name_parts:
909 if to_name_parts[-len(from_name_parts) :] == from_name_parts:
910 if not self.allow_subdomain_redirects:
911 raise RuntimeError("Following subdomain redirects is not enabled.")
912 else:
913 raise RuntimeError("Following external redirects is not supported.")
914
915 path_parts = path.split("/")
916 root_parts = builder.script_root.split("/")
917
918 if path_parts[: len(root_parts)] == root_parts:
919 # Strip the script root from the path.
920 builder.path = path[len(builder.script_root) :]
921 else:
922 # The new location is not under the script root, so use the
923 # whole path and clear the previous root.
924 builder.path = path
925 builder.script_root = ""
926
927 status_code = int(response[1].split(None, 1)[0])
928
929 # Only 307 and 308 preserve all of the original request.
930 if status_code not in {307, 308}:
931 # HEAD is preserved, everything else becomes GET.
932 if builder.method != "HEAD":
933 builder.method = "GET"
934
935 # Clear the body and the headers that describe it.
936 builder.input_stream = None
937 builder.content_type = None
938 builder.content_length = None
939 builder.headers.pop("Transfer-Encoding", None)
940
941 # Disable the response wrapper while handling redirects. Not
942 # thread safe, but the client should not be shared anyway.
943 old_response_wrapper = self.response_wrapper
944 self.response_wrapper = None
945
946 try:
947 return self.open(builder, as_tuple=True, buffered=buffered)
948 finally:
949 self.response_wrapper = old_response_wrapper
950
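The method handling in `resolve_redirect` can be summarized as a small helper (a standalone sketch, not werkzeug API): only 307 and 308 preserve the original request method, HEAD is always kept, and everything else is retried as GET with the body dropped.

```python
def redirect_method(status_code, method):
    # Mirrors resolve_redirect: 307 and 308 preserve the original
    # method, HEAD stays HEAD, and anything else becomes GET (with
    # the body and body-describing headers cleared).
    if status_code in {307, 308} or method == "HEAD":
        return method
    return "GET"
```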
951 def open(self, *args, **kwargs):
952 """Takes the same arguments as the :class:`EnvironBuilder` class with
953 some additions: You can provide a :class:`EnvironBuilder` or a WSGI
954 environment as only argument instead of the :class:`EnvironBuilder`
955 arguments and two optional keyword arguments (`as_tuple`, `buffered`)
956 that change the type of the return value or the way the application is
957 executed.
958
959 .. versionchanged:: 0.5
960 If a dict is provided as file in the dict for the `data` parameter
961 the content type has to be called `content_type` now instead of
962 `mimetype`. This change was made for consistency with
963 :class:`werkzeug.FileWrapper`.
964
965 The `follow_redirects` parameter was added to :func:`open`.
966
967 Additional parameters:
968
969 :param as_tuple: Returns a tuple in the form ``(environ, result)``
970 :param buffered: Set this to True to buffer the application run.
971 This will automatically close the application for
972 you as well.
973 :param follow_redirects: Set this to True if the `Client` should
974 follow HTTP redirects.
975 """
976 as_tuple = kwargs.pop("as_tuple", False)
977 buffered = kwargs.pop("buffered", False)
978 follow_redirects = kwargs.pop("follow_redirects", False)
979 environ = None
980 if not kwargs and len(args) == 1:
981 if isinstance(args[0], EnvironBuilder):
982 environ = args[0].get_environ()
983 elif isinstance(args[0], dict):
984 environ = args[0]
985 if environ is None:
986 builder = EnvironBuilder(*args, **kwargs)
987 try:
988 environ = builder.get_environ()
989 finally:
990 builder.close()
991
992 response = self.run_wsgi_app(environ.copy(), buffered=buffered)
993
994 # handle redirects
995 redirect_chain = []
996 while 1:
997 status_code = int(response[1].split(None, 1)[0])
998 if (
999 status_code not in {301, 302, 303, 305, 307, 308}
1000 or not follow_redirects
1001 ):
1002 break
1003
1004 # Exhaust intermediate response bodies to ensure middleware
1005 # that returns an iterator runs any cleanup code.
1006 if not buffered:
1007 for _ in response[0]:
1008 pass
1009
1010 new_location = response[2]["location"]
1011 new_redirect_entry = (new_location, status_code)
1012 if new_redirect_entry in redirect_chain:
1013 raise ClientRedirectError("loop detected")
1014 redirect_chain.append(new_redirect_entry)
1015 environ, response = self.resolve_redirect(
1016 response, new_location, environ, buffered=buffered
1017 )
1018
1019 if self.response_wrapper is not None:
1020 response = self.response_wrapper(*response)
1021 if as_tuple:
1022 return environ, response
1023 return response
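A minimal sketch of driving ``open`` (through the ``get`` shortcut below) with ``follow_redirects`` against a tiny redirecting WSGI app; the app and its ``/target`` route are illustrative, not part of Werkzeug:

```python
from werkzeug.test import Client
from werkzeug.utils import redirect
from werkzeug.wrappers import BaseResponse


def app(environ, start_response):
    # Redirect the root URL once; everything else answers directly.
    if environ["PATH_INFO"] == "/":
        return redirect("/target")(environ, start_response)
    return BaseResponse(u"arrived")(environ, start_response)


client = Client(app, BaseResponse)
response = client.get("/", follow_redirects=True)
```

With ``follow_redirects=True`` the client resolves the 302 internally, so the response it returns is the one for ``/target``.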
1024
1025 def get(self, *args, **kw):
1026 """Like open but method is enforced to GET."""
1027 kw["method"] = "GET"
1028 return self.open(*args, **kw)
1029
1030 def patch(self, *args, **kw):
1031 """Like open but method is enforced to PATCH."""
1032 kw["method"] = "PATCH"
1033 return self.open(*args, **kw)
1034
1035 def post(self, *args, **kw):
1036 """Like open but method is enforced to POST."""
1037 kw["method"] = "POST"
1038 return self.open(*args, **kw)
1039
1040 def head(self, *args, **kw):
1041 """Like open but method is enforced to HEAD."""
1042 kw["method"] = "HEAD"
1043 return self.open(*args, **kw)
1044
1045 def put(self, *args, **kw):
1046 """Like open but method is enforced to PUT."""
1047 kw["method"] = "PUT"
1048 return self.open(*args, **kw)
1049
1050 def delete(self, *args, **kw):
1051 """Like open but method is enforced to DELETE."""
1052 kw["method"] = "DELETE"
1053 return self.open(*args, **kw)
1054
1055 def options(self, *args, **kw):
1056 """Like open but method is enforced to OPTIONS."""
1057 kw["method"] = "OPTIONS"
1058 return self.open(*args, **kw)
1059
1060 def trace(self, *args, **kw):
1061 """Like open but method is enforced to TRACE."""
1062 kw["method"] = "TRACE"
1063 return self.open(*args, **kw)
1064
1065 def __repr__(self):
1066 return "<%s %r>" % (self.__class__.__name__, self.application)
1067
1068
1069 def create_environ(*args, **kwargs):
1070 """Create a new WSGI environ dict based on the values passed. The first
1071 parameter should be the path of the request, which defaults to '/'. The
1072 second one can either be an absolute path (in which case the host is
1073 localhost:80) or a full URL to the request with scheme, netloc, port and
1074 the path to the script.
1075
1076 This accepts the same arguments as the :class:`EnvironBuilder`
1077 constructor.
1078
1079 .. versionchanged:: 0.5
1080 This function is now a thin wrapper over :class:`EnvironBuilder` which
1081 was added in 0.5. The `headers`, `environ_base`, `environ_overrides`
1082 and `charset` parameters were added.
1083 """
1084 builder = EnvironBuilder(*args, **kwargs)
1085 try:
1086 return builder.get_environ()
1087 finally:
1088 builder.close()
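As a quick sketch, ``create_environ`` builds a complete environ without touching ``EnvironBuilder`` directly; the path and base URL here are arbitrary examples:

```python
from werkzeug.test import create_environ

# GET http://localhost/foo?a=1 -- the query string is split off the path.
environ = create_environ("/foo?a=1", "http://localhost/")
```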
1089
1090
1091 def run_wsgi_app(app, environ, buffered=False):
1092 """Return a tuple in the form (app_iter, status, headers) of the
1093 application output. This works best if you pass it an application that
1094 returns an iterator all the time.
1095
1096 Sometimes applications may use the `write()` callable returned
1097 by the `start_response` function. This tries to resolve such edge
1098 cases automatically. But if you don't get the expected output you
1099 should set `buffered` to `True` which enforces buffering.
1100
1101 If passed an invalid WSGI application the behavior of this function is
1102 undefined. Never pass non-conforming WSGI applications to this function.
1103
1104 :param app: the application to execute.
:param environ: the WSGI environment to execute the application with.
1105 :param buffered: set to `True` to enforce buffering.
1106 :return: tuple in the form ``(app_iter, status, headers)``
1107 """
1108 environ = _get_environ(environ)
1109 response = []
1110 buffer = []
1111
1112 def start_response(status, headers, exc_info=None):
1113 if exc_info is not None:
1114 reraise(*exc_info)
1115 response[:] = [status, headers]
1116 return buffer.append
1117
1118 app_rv = app(environ, start_response)
1119 close_func = getattr(app_rv, "close", None)
1120 app_iter = iter(app_rv)
1121
1122 # when buffering we emit the close call early and convert the
1123 # application iterator into a regular list
1124 if buffered:
1125 try:
1126 app_iter = list(app_iter)
1127 finally:
1128 if close_func is not None:
1129 close_func()
1130
1131 # otherwise we iterate the application iter until we have a response, chain
1132 # the already received data with the already collected data and wrap it in
1133 # a new `ClosingIterator` if we need to restore a `close` callable from the
1134 # original return value.
1135 else:
1136 for item in app_iter:
1137 buffer.append(item)
1138 if response:
1139 break
1140 if buffer:
1141 app_iter = chain(buffer, app_iter)
1142 if close_func is not None and app_iter is not app_rv:
1143 app_iter = ClosingIterator(app_iter, close_func)
1144
1145 return app_iter, response[0], Headers(response[1])
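A minimal sketch of calling ``run_wsgi_app`` with a trivial conforming application:

```python
from werkzeug.test import create_environ, run_wsgi_app


def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


app_iter, status, headers = run_wsgi_app(app, create_environ("/"))
```

The status comes back as the raw status string and the headers as a ``Headers`` object, so individual header values can be looked up by name.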
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.testapp
3 ~~~~~~~~~~~~~~~~
4
5 Provide a small test application that can be used to test a WSGI server
6 and check it for WSGI compliance.
7
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
10 """
11 import base64
12 import os
13 import sys
14 from textwrap import wrap
15
16 import werkzeug
17 from .utils import escape
18 from .wrappers import BaseRequest as Request
19 from .wrappers import BaseResponse as Response
20
21 logo = Response(
22 base64.b64decode(
23 """
24 R0lGODlhoACgAOMIAAEDACwpAEpCAGdgAJaKAM28AOnVAP3rAP/////////
25 //////////////////////yH5BAEKAAgALAAAAACgAKAAAAT+EMlJq704680R+F0ojmRpnuj0rWnrv
26 nB8rbRs33gu0bzu/0AObxgsGn3D5HHJbCUFyqZ0ukkSDlAidctNFg7gbI9LZlrBaHGtzAae0eloe25
27 7w9EDOX2fst/xenyCIn5/gFqDiVVDV4aGeYiKkhSFjnCQY5OTlZaXgZp8nJ2ekaB0SQOjqphrpnOiq
28 ncEn65UsLGytLVmQ6m4sQazpbtLqL/HwpnER8bHyLrLOc3Oz8PRONPU1crXN9na263dMt/g4SzjMeX
29 m5yDpLqgG7OzJ4u8lT/P69ej3JPn69kHzN2OIAHkB9RUYSFCFQYQJFTIkCDBiwoXWGnowaLEjRm7+G
30 p9A7Hhx4rUkAUaSLJlxHMqVMD/aSycSZkyTplCqtGnRAM5NQ1Ly5OmzZc6gO4d6DGAUKA+hSocWYAo
31 SlM6oUWX2O/o0KdaVU5vuSQLAa0ADwQgMEMB2AIECZhVSnTno6spgbtXmHcBUrQACcc2FrTrWS8wAf
32 78cMFBgwIBgbN+qvTt3ayikRBk7BoyGAGABAdYyfdzRQGV3l4coxrqQ84GpUBmrdR3xNIDUPAKDBSA
33 ADIGDhhqTZIWaDcrVX8EsbNzbkvCOxG8bN5w8ly9H8jyTJHC6DFndQydbguh2e/ctZJFXRxMAqqPVA
34 tQH5E64SPr1f0zz7sQYjAHg0In+JQ11+N2B0XXBeeYZgBZFx4tqBToiTCPv0YBgQv8JqA6BEf6RhXx
35 w1ENhRBnWV8ctEX4Ul2zc3aVGcQNC2KElyTDYyYUWvShdjDyMOGMuFjqnII45aogPhz/CodUHFwaDx
36 lTgsaOjNyhGWJQd+lFoAGk8ObghI0kawg+EV5blH3dr+digkYuAGSaQZFHFz2P/cTaLmhF52QeSb45
37 Jwxd+uSVGHlqOZpOeJpCFZ5J+rkAkFjQ0N1tah7JJSZUFNsrkeJUJMIBi8jyaEKIhKPomnC91Uo+NB
38 yyaJ5umnnpInIFh4t6ZSpGaAVmizqjpByDegYl8tPE0phCYrhcMWSv+uAqHfgH88ak5UXZmlKLVJhd
39 dj78s1Fxnzo6yUCrV6rrDOkluG+QzCAUTbCwf9SrmMLzK6p+OPHx7DF+bsfMRq7Ec61Av9i6GLw23r
40 idnZ+/OO0a99pbIrJkproCQMA17OPG6suq3cca5ruDfXCCDoS7BEdvmJn5otdqscn+uogRHHXs8cbh
41 EIfYaDY1AkrC0cqwcZpnM6ludx72x0p7Fo/hZAcpJDjax0UdHavMKAbiKltMWCF3xxh9k25N/Viud8
42 ba78iCvUkt+V6BpwMlErmcgc502x+u1nSxJSJP9Mi52awD1V4yB/QHONsnU3L+A/zR4VL/indx/y64
43 gqcj+qgTeweM86f0Qy1QVbvmWH1D9h+alqg254QD8HJXHvjQaGOqEqC22M54PcftZVKVSQG9jhkv7C
44 JyTyDoAJfPdu8v7DRZAxsP/ky9MJ3OL36DJfCFPASC3/aXlfLOOON9vGZZHydGf8LnxYJuuVIbl83y
45 Az5n/RPz07E+9+zw2A2ahz4HxHo9Kt79HTMx1Q7ma7zAzHgHqYH0SoZWyTuOLMiHwSfZDAQTn0ajk9
46 YQqodnUYjByQZhZak9Wu4gYQsMyEpIOAOQKze8CmEF45KuAHTvIDOfHJNipwoHMuGHBnJElUoDmAyX
47 c2Qm/R8Ah/iILCCJOEokGowdhDYc/yoL+vpRGwyVSCWFYZNljkhEirGXsalWcAgOdeAdoXcktF2udb
48 qbUhjWyMQxYO01o6KYKOr6iK3fE4MaS+DsvBsGOBaMb0Y6IxADaJhFICaOLmiWTlDAnY1KzDG4ambL
49 cWBA8mUzjJsN2KjSaSXGqMCVXYpYkj33mcIApyhQf6YqgeNAmNvuC0t4CsDbSshZJkCS1eNisKqlyG
50 cF8G2JeiDX6tO6Mv0SmjCa3MFb0bJaGPMU0X7c8XcpvMaOQmCajwSeY9G0WqbBmKv34DsMIEztU6Y2
51 KiDlFdt6jnCSqx7Dmt6XnqSKaFFHNO5+FmODxMCWBEaco77lNDGXBM0ECYB/+s7nKFdwSF5hgXumQe
52 EZ7amRg39RHy3zIjyRCykQh8Zo2iviRKyTDn/zx6EefptJj2Cw+Ep2FSc01U5ry4KLPYsTyWnVGnvb
53 UpyGlhjBUljyjHhWpf8OFaXwhp9O4T1gU9UeyPPa8A2l0p1kNqPXEVRm1AOs1oAGZU596t6SOR2mcB
54 Oco1srWtkaVrMUzIErrKri85keKqRQYX9VX0/eAUK1hrSu6HMEX3Qh2sCh0q0D2CtnUqS4hj62sE/z
55 aDs2Sg7MBS6xnQeooc2R2tC9YrKpEi9pLXfYXp20tDCpSP8rKlrD4axprb9u1Df5hSbz9QU0cRpfgn
56 kiIzwKucd0wsEHlLpe5yHXuc6FrNelOl7pY2+11kTWx7VpRu97dXA3DO1vbkhcb4zyvERYajQgAADs
57 ="""
58 ),
59 mimetype="image/png",
60 )
61
62
63 TEMPLATE = u"""\
64 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
65 "http://www.w3.org/TR/html4/loose.dtd">
66 <title>WSGI Information</title>
67 <style type="text/css">
68 @import url(https://fonts.googleapis.com/css?family=Ubuntu);
69
70 body { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva',
71 'Verdana', sans-serif; background-color: white; color: #000;
72 font-size: 15px; text-align: center; }
73 #logo { float: right; padding: 0 0 10px 10px; }
74 div.box { text-align: left; width: 45em; margin: auto; padding: 50px 0;
75 background-color: white; }
76 h1, h2 { font-family: 'Ubuntu', 'Lucida Grande', 'Lucida Sans Unicode',
77 'Geneva', 'Verdana', sans-serif; font-weight: normal; }
78 h1 { margin: 0 0 30px 0; }
79 h2 { font-size: 1.4em; margin: 1em 0 0.5em 0; }
80 table { width: 100%%; border-collapse: collapse; border: 1px solid #AFC5C9 }
81 table th { background-color: #AFC1C4; color: white; font-size: 0.72em;
82 font-weight: normal; width: 18em; vertical-align: top;
83 padding: 0.5em 0 0.1em 0.5em; }
84 table td { border: 1px solid #AFC5C9; padding: 0.1em 0 0.1em 0.5em; }
85 code { font-family: 'Consolas', 'Monaco', 'Bitstream Vera Sans Mono',
86 monospace; font-size: 0.7em; }
87 ul li { line-height: 1.5em; }
88 ul.path { font-size: 0.7em; margin: 0 -30px; padding: 8px 30px;
89 list-style: none; background: #E8EFF0; }
90 ul.path li { line-height: 1.6em; }
91 li.virtual { color: #999; text-decoration: underline; }
92 li.exp { background: white; }
93 </style>
94 <div class="box">
95 <img src="?resource=logo" id="logo" alt="[The Werkzeug Logo]" />
96 <h1>WSGI Information</h1>
97 <p>
98 This page displays all available information about the WSGI server and
99 the underlying Python interpreter.
100 <h2 id="python-interpreter">Python Interpreter</h2>
101 <table>
102 <tr>
103 <th>Python Version
104 <td>%(python_version)s
105 <tr>
106 <th>Platform
107 <td>%(platform)s [%(os)s]
108 <tr>
109 <th>API Version
110 <td>%(api_version)s
111 <tr>
112 <th>Byteorder
113 <td>%(byteorder)s
114 <tr>
115 <th>Werkzeug Version
116 <td>%(werkzeug_version)s
117 </table>
118 <h2 id="wsgi-environment">WSGI Environment</h2>
119 <table>%(wsgi_env)s</table>
120 <h2 id="installed-eggs">Installed Eggs</h2>
121 <p>
122 The following Python packages are installed on the system as
123 Python eggs:
124 <ul>%(python_eggs)s</ul>
125 <h2 id="sys-path">System Path</h2>
126 <p>
127 The following paths are the current contents of the load path and are
128 searched for Python packages. Note that not all items in this path
129 are folders. Gray and underlined items are
130 entries pointing to invalid resources or used by custom import hooks
131 such as the zip importer.
132 <p>
133 Items with a bright background were expanded for display from a relative
134 path. If you encounter such paths in the output you might want to check
135 your setup as relative paths are usually problematic in multithreaded
136 environments.
137 <ul class="path">%(sys_path)s</ul>
138 </div>
139 """
140
141
142 def iter_sys_path():
143 if os.name == "posix":
144
145 def strip(x):
146 prefix = os.path.expanduser("~")
147 if x.startswith(prefix):
148 x = "~" + x[len(prefix) :]
149 return x
150
151 else:
152
153 def strip(x):
154 return x
155
156 cwd = os.path.abspath(os.getcwd())
157 for item in sys.path:
158 path = os.path.join(cwd, item or os.path.curdir)
159 yield strip(os.path.normpath(path)), not os.path.isdir(path), path != item
160
161
162 def render_testapp(req):
163 try:
164 import pkg_resources
165 except ImportError:
166 eggs = ()
167 else:
168 eggs = sorted(pkg_resources.working_set, key=lambda x: x.project_name.lower())
169 python_eggs = []
170 for egg in eggs:
171 try:
172 version = egg.version
173 except (ValueError, AttributeError):
174 version = "unknown"
175 python_eggs.append(
176 "<li>%s <small>[%s]</small>" % (escape(egg.project_name), escape(version))
177 )
178
179 wsgi_env = []
180 sorted_environ = sorted(req.environ.items(), key=lambda x: repr(x[0]).lower())
181 for key, value in sorted_environ:
182 wsgi_env.append(
183 "<tr><th>%s<td><code>%s</code>"
184 % (escape(str(key)), " ".join(wrap(escape(repr(value)))))
185 )
186
187 sys_path = []
188 for item, virtual, expanded in iter_sys_path():
189 class_ = []
190 if virtual:
191 class_.append("virtual")
192 if expanded:
193 class_.append("exp")
194 sys_path.append(
195 "<li%s>%s"
196 % (' class="%s"' % " ".join(class_) if class_ else "", escape(item))
197 )
198
199 return (
200 TEMPLATE
201 % {
202 "python_version": "<br>".join(escape(sys.version).splitlines()),
203 "platform": escape(sys.platform),
204 "os": escape(os.name),
205 "api_version": sys.api_version,
206 "byteorder": sys.byteorder,
207 "werkzeug_version": werkzeug.__version__,
208 "python_eggs": "\n".join(python_eggs),
209 "wsgi_env": "\n".join(wsgi_env),
210 "sys_path": "\n".join(sys_path),
211 }
212 ).encode("utf-8")
213
214
215 def test_app(environ, start_response):
216 """Simple test application that dumps the environment. You can use
217 it to check if Werkzeug is working properly:
218
219 .. sourcecode:: pycon
220
221 >>> from werkzeug.serving import run_simple
222 >>> from werkzeug.testapp import test_app
223 >>> run_simple('localhost', 3000, test_app)
224 * Running on http://localhost:3000/
225
226 The application displays important information from the WSGI environment,
227 the Python interpreter and the installed libraries.
228 """
229 req = Request(environ, populate_request=False)
230 if req.args.get("resource") == "logo":
231 response = logo
232 else:
233 response = Response(render_testapp(req), mimetype="text/html")
234 return response(environ, start_response)
235
236
237 if __name__ == "__main__":
238 from .serving import run_simple
239
240 run_simple("localhost", 5000, test_app, use_reloader=True)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.urls
3 ~~~~~~~~~~~~~
4
5 ``werkzeug.urls`` used to provide several wrapper functions for Python 2
6 urlparse, whose main purpose was to work around the behavior of the Py2
7 stdlib and its lack of unicode support. While this was already a somewhat
8 inconvenient situation, it got even more complicated because Python 3's
9 ``urllib.parse`` actually does handle unicode properly. In other words,
10 this module would wrap two libraries with completely different behavior. So
11 now this module contains a 2-and-3-compatible backport of Python 3's
12 ``urllib.parse``, which is mostly API-compatible.
13
14 :copyright: 2007 Pallets
15 :license: BSD-3-Clause
16 """
17 import codecs
18 import os
19 import re
20 from collections import namedtuple
21
22 from ._compat import fix_tuple_repr
23 from ._compat import implements_to_string
24 from ._compat import make_literal_wrapper
25 from ._compat import normalize_string_tuple
26 from ._compat import PY2
27 from ._compat import text_type
28 from ._compat import to_native
29 from ._compat import to_unicode
30 from ._compat import try_coerce_native
31 from ._internal import _decode_idna
32 from ._internal import _encode_idna
33 from .datastructures import iter_multi_items
34 from .datastructures import MultiDict
35
36 # A regular expression for what a valid scheme looks like
37 _scheme_re = re.compile(r"^[a-zA-Z0-9+\-.]+$")
38
39 # Characters that are safe in any part of a URL.
40 _always_safe = frozenset(
41 bytearray(
42 b"abcdefghijklmnopqrstuvwxyz"
43 b"ABCDEFGHIJKLMNOPQRSTUVWXYZ"
44 b"0123456789"
45 b"-._~"
46 )
47 )
48
49 _hexdigits = "0123456789ABCDEFabcdef"
50 _hextobyte = dict(
51 ((a + b).encode(), int(a + b, 16)) for a in _hexdigits for b in _hexdigits
52 )
53 _bytetohex = [("%%%02X" % char).encode("ascii") for char in range(256)]
54
55
56 _URLTuple = fix_tuple_repr(
57 namedtuple("_URLTuple", ["scheme", "netloc", "path", "query", "fragment"])
58 )
59
60
61 class BaseURL(_URLTuple):
62 """Superclass of :py:class:`URL` and :py:class:`BytesURL`."""
63
64 __slots__ = ()
65
66 def replace(self, **kwargs):
67 """Return an URL with the same values, except for those parameters
68 given new values by whichever keyword arguments are specified."""
69 return self._replace(**kwargs)
70
71 @property
72 def host(self):
73 """The host part of the URL if available, otherwise `None`. The
74 host is either the hostname or the IP address mentioned in the
75 URL. It will not contain the port.
76 """
77 return self._split_host()[0]
78
79 @property
80 def ascii_host(self):
81 """Works exactly like :attr:`host` but will return a result that
82 is restricted to ASCII. If it finds a netloc that is not ASCII
83 it will attempt to IDNA-encode it. This is useful for socket
84 operations when the URL might include internationalized characters.
85 """
86 rv = self.host
87 if rv is not None and isinstance(rv, text_type):
88 try:
89 rv = _encode_idna(rv)
90 except UnicodeError:
91 rv = rv.encode("ascii", "ignore")
92 return to_native(rv, "ascii", "ignore")
93
94 @property
95 def port(self):
96 """The port in the URL as an integer if it was present, `None`
97 otherwise. This does not fill in default ports.
98 """
99 try:
100 rv = int(to_native(self._split_host()[1]))
101 if 0 <= rv <= 65535:
102 return rv
103 except (ValueError, TypeError):
104 pass
105
106 @property
107 def auth(self):
108 """The authentication part in the URL if available, `None`
109 otherwise.
110 """
111 return self._split_netloc()[0]
112
113 @property
114 def username(self):
115 """The username if it was part of the URL, `None` otherwise.
116 This undergoes URL decoding and will always be a unicode string.
117 """
118 rv = self._split_auth()[0]
119 if rv is not None:
120 return _url_unquote_legacy(rv)
121
122 @property
123 def raw_username(self):
124 """The username if it was part of the URL, `None` otherwise.
125 Unlike :attr:`username` this one is not being decoded.
126 """
127 return self._split_auth()[0]
128
129 @property
130 def password(self):
131 """The password if it was part of the URL, `None` otherwise.
132 This undergoes URL decoding and will always be a unicode string.
133 """
134 rv = self._split_auth()[1]
135 if rv is not None:
136 return _url_unquote_legacy(rv)
137
138 @property
139 def raw_password(self):
140 """The password if it was part of the URL, `None` otherwise.
141 Unlike :attr:`password` this one is not being decoded.
142 """
143 return self._split_auth()[1]
144
145 def decode_query(self, *args, **kwargs):
146 """Decodes the query part of the URL. Ths is a shortcut for
147 calling :func:`url_decode` on the query argument. The arguments and
148 keyword arguments are forwarded to :func:`url_decode` unchanged.
149 """
150 return url_decode(self.query, *args, **kwargs)
151
152 def join(self, *args, **kwargs):
153 """Joins this URL with another one. This is just a convenience
154 function for calling into :func:`url_join` and then parsing the
155 return value again.
156 """
157 return url_parse(url_join(self, *args, **kwargs))
158
159 def to_url(self):
160 """Returns a URL string or bytes depending on the type of the
161 information stored. This is just a convenience function
162 for calling :func:`url_unparse` for this URL.
163 """
164 return url_unparse(self)
165
166 def decode_netloc(self):
167 """Decodes the netloc part into a string."""
168 rv = _decode_idna(self.host or "")
169
170 if ":" in rv:
171 rv = "[%s]" % rv
172 port = self.port
173 if port is not None:
174 rv = "%s:%d" % (rv, port)
175 auth = ":".join(
176 filter(
177 None,
178 [
179 _url_unquote_legacy(self.raw_username or "", "/:%@"),
180 _url_unquote_legacy(self.raw_password or "", "/:%@"),
181 ],
182 )
183 )
184 if auth:
185 rv = "%s@%s" % (auth, rv)
186 return rv
187
188 def to_uri_tuple(self):
189 """Returns a :class:`BytesURL` tuple that holds a URI. This will
190 encode all the information in the URL properly to ASCII using the
191 rules a web browser would follow.
192
193 It's usually more interesting to directly call :meth:`iri_to_uri` which
194 will return a string.
195 """
196 return url_parse(iri_to_uri(self).encode("ascii"))
197
198 def to_iri_tuple(self):
199 """Returns a :class:`URL` tuple that holds a IRI. This will try
200 to decode as much information as possible in the URL without
201 losing information similar to how a web browser does it for the
202 URL bar.
203
204 It's usually more interesting to directly call :meth:`uri_to_iri` which
205 will return a string.
206 """
207 return url_parse(uri_to_iri(self))
208
209 def get_file_location(self, pathformat=None):
210 """Returns a tuple with the location of the file in the form
211 ``(server, location)``. If the netloc is empty in the URL or
212 points to localhost, it's represented as ``None``.
213
214 The `pathformat` by default is autodetected but needs to be set
215 when working with URLs of a specific system. The supported values
216 are ``'windows'`` when working with Windows or DOS paths and
217 ``'posix'`` when working with posix paths.
218
219 If the URL does not point to a local file, the server and location
220 are both represented as ``None``.
221
222 :param pathformat: The expected format of the path component.
223 Currently ``'windows'`` and ``'posix'`` are
224 supported. Defaults to ``None`` which is
225 autodetect.
226 """
227 if self.scheme != "file":
228 return None, None
229
230 path = url_unquote(self.path)
231 host = self.netloc or None
232
233 if pathformat is None:
234 if os.name == "nt":
235 pathformat = "windows"
236 else:
237 pathformat = "posix"
238
239 if pathformat == "windows":
240 if path[:1] == "/" and path[1:2].isalpha() and path[2:3] in "|:":
241 path = path[1:2] + ":" + path[3:]
242 windows_share = path[:3] in ("\\" * 3, "/" * 3)
243 import ntpath
244
245 path = ntpath.normpath(path)
246 # Windows shared drives are represented as ``\\host\\directory``.
247 # That results in a URL like ``file://///host/directory``, and a
248 # path like ``///host/directory``. We need to special-case this
249 # because the path contains the hostname.
250 if windows_share and host is None:
251 parts = path.lstrip("\\").split("\\", 1)
252 if len(parts) == 2:
253 host, path = parts
254 else:
255 host = parts[0]
256 path = ""
257 elif pathformat == "posix":
258 import posixpath
259
260 path = posixpath.normpath(path)
261 else:
262 raise TypeError("Invalid path format %s" % repr(pathformat))
263
264 if host in ("127.0.0.1", "::1", "localhost"):
265 host = None
266
267 return host, path
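A short sketch, explicitly requesting the posix path format so the result does not depend on the host OS:

```python
from werkzeug.urls import url_parse

# file:/// URLs have an empty netloc, so the server part comes back as None.
server, path = url_parse(u"file:///etc/hosts").get_file_location(pathformat="posix")
```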
268
269 def _split_netloc(self):
270 if self._at in self.netloc:
271 return self.netloc.split(self._at, 1)
272 return None, self.netloc
273
274 def _split_auth(self):
275 auth = self._split_netloc()[0]
276 if not auth:
277 return None, None
278 if self._colon not in auth:
279 return auth, None
280 return auth.split(self._colon, 1)
281
282 def _split_host(self):
283 rv = self._split_netloc()[1]
284 if not rv:
285 return None, None
286
287 if not rv.startswith(self._lbracket):
288 if self._colon in rv:
289 return rv.split(self._colon, 1)
290 return rv, None
291
292 idx = rv.find(self._rbracket)
293 if idx < 0:
294 return rv, None
295
296 host = rv[1:idx]
297 rest = rv[idx + 1 :]
298 if rest.startswith(self._colon):
299 return host, rest[1:]
300 return host, None
301
302
303 @implements_to_string
304 class URL(BaseURL):
305 """Represents a parsed URL. This behaves like a regular tuple but
306 also has some extra attributes that give further insight into the
307 URL.
308 """
309
310 __slots__ = ()
311 _at = "@"
312 _colon = ":"
313 _lbracket = "["
314 _rbracket = "]"
315
316 def __str__(self):
317 return self.to_url()
318
319 def encode_netloc(self):
320 """Encodes the netloc part to an ASCII safe URL as bytes."""
321 rv = self.ascii_host or ""
322 if ":" in rv:
323 rv = "[%s]" % rv
324 port = self.port
325 if port is not None:
326 rv = "%s:%d" % (rv, port)
327 auth = ":".join(
328 filter(
329 None,
330 [
331 url_quote(self.raw_username or "", "utf-8", "strict", "/:%"),
332 url_quote(self.raw_password or "", "utf-8", "strict", "/:%"),
333 ],
334 )
335 )
336 if auth:
337 rv = "%s@%s" % (auth, rv)
338 return to_native(rv)
339
340 def encode(self, charset="utf-8", errors="replace"):
341 """Encodes the URL to a tuple made out of bytes. The charset is
342 only being used for the path, query and fragment.
343 """
344 return BytesURL(
345 self.scheme.encode("ascii"),
346 self.encode_netloc(),
347 self.path.encode(charset, errors),
348 self.query.encode(charset, errors),
349 self.fragment.encode(charset, errors),
350 )
351
352
353 class BytesURL(BaseURL):
354 """Represents a parsed URL in bytes."""
355
356 __slots__ = ()
357 _at = b"@"
358 _colon = b":"
359 _lbracket = b"["
360 _rbracket = b"]"
361
362 def __str__(self):
363 return self.to_url().decode("utf-8", "replace")
364
365 def encode_netloc(self):
366 """Returns the netloc unchanged as bytes."""
367 return self.netloc
368
369 def decode(self, charset="utf-8", errors="replace"):
370 """Decodes the URL to a tuple made out of strings. The charset is
371 only being used for the path, query and fragment.
372 """
373 return URL(
374 self.scheme.decode("ascii"),
375 self.decode_netloc(),
376 self.path.decode(charset, errors),
377 self.query.decode(charset, errors),
378 self.fragment.decode(charset, errors),
379 )
380
381
382 _unquote_maps = {frozenset(): _hextobyte}
383
384
385 def _unquote_to_bytes(string, unsafe=""):
386 if isinstance(string, text_type):
387 string = string.encode("utf-8")
388
389 if isinstance(unsafe, text_type):
390 unsafe = unsafe.encode("utf-8")
391
392 unsafe = frozenset(bytearray(unsafe))
393 groups = iter(string.split(b"%"))
394 result = bytearray(next(groups, b""))
395
396 try:
397 hex_to_byte = _unquote_maps[unsafe]
398 except KeyError:
399 hex_to_byte = _unquote_maps[unsafe] = {
400 h: b for h, b in _hextobyte.items() if b not in unsafe
401 }
402
403 for group in groups:
404 code = group[:2]
405
406 if code in hex_to_byte:
407 result.append(hex_to_byte[code])
408 result.extend(group[2:])
409 else:
410 result.append(37) # %
411 result.extend(group)
412
413 return bytes(result)
414
415
416 def _url_encode_impl(obj, charset, encode_keys, sort, key):
417 iterable = iter_multi_items(obj)
418 if sort:
419 iterable = sorted(iterable, key=key)
420 for key, value in iterable:
421 if value is None:
422 continue
423 if not isinstance(key, bytes):
424 key = text_type(key).encode(charset)
425 if not isinstance(value, bytes):
426 value = text_type(value).encode(charset)
427 yield _fast_url_quote_plus(key) + "=" + _fast_url_quote_plus(value)
428
429
430 def _url_unquote_legacy(value, unsafe=""):
431 try:
432 return url_unquote(value, charset="utf-8", errors="strict", unsafe=unsafe)
433 except UnicodeError:
434 return url_unquote(value, charset="latin1", unsafe=unsafe)
435
436
437 def url_parse(url, scheme=None, allow_fragments=True):
438 """Parses a URL from a string into a :class:`URL` tuple. If the URL
439 is lacking a scheme it can be provided as second argument. Otherwise,
440 it is ignored. Optionally fragments can be stripped from the URL
441 by setting `allow_fragments` to `False`.
442
443 The inverse of this function is :func:`url_unparse`.
444
445 :param url: the URL to parse.
446 :param scheme: the default scheme to use if the URL is schemeless.
447 :param allow_fragments: if set to `False` a fragment will be removed
448 from the URL.
449 """
450 s = make_literal_wrapper(url)
451 is_text_based = isinstance(url, text_type)
452
453 if scheme is None:
454 scheme = s("")
455 netloc = query = fragment = s("")
456 i = url.find(s(":"))
457 if i > 0 and _scheme_re.match(to_native(url[:i], errors="replace")):
458 # make sure "url" is not actually a port number (in which case
459 # "scheme" is really part of the path)
460 rest = url[i + 1 :]
461 if not rest or any(c not in s("0123456789") for c in rest):
462 # not a port number
463 scheme, url = url[:i].lower(), rest
464
465 if url[:2] == s("//"):
466 delim = len(url)
467 for c in s("/?#"):
468 wdelim = url.find(c, 2)
469 if wdelim >= 0:
470 delim = min(delim, wdelim)
471 netloc, url = url[2:delim], url[delim:]
472 if (s("[") in netloc and s("]") not in netloc) or (
473 s("]") in netloc and s("[") not in netloc
474 ):
475 raise ValueError("Invalid IPv6 URL")
476
477 if allow_fragments and s("#") in url:
478 url, fragment = url.split(s("#"), 1)
479 if s("?") in url:
480 url, query = url.split(s("?"), 1)
481
482 result_type = URL if is_text_based else BytesURL
483 return result_type(scheme, netloc, url, query, fragment)
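A sketch of the tuple fields and the extra attributes a parsed ``URL`` exposes; the credentials and host are made up for illustration:

```python
from werkzeug.urls import url_parse

url = url_parse(u"http://user:pass@example.com:8080/path?q=1#frag")
```

The five tuple fields hold scheme, netloc, path, query and fragment, while properties such as ``host``, ``port`` and ``username`` break the netloc down further.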
484
485
486 def _make_fast_url_quote(charset="utf-8", errors="strict", safe="/:", unsafe=""):
487 """Precompile the translation table for a URL encoding function.
488
489 Unlike :func:`url_quote`, the generated function only takes the
490 string to quote.
491
492 :param charset: The charset to encode the result with.
493 :param errors: How to handle encoding errors.
494 :param safe: An optional sequence of safe characters to never encode.
495 :param unsafe: An optional sequence of unsafe characters to always encode.
496 """
497 if isinstance(safe, text_type):
498 safe = safe.encode(charset, errors)
499
500 if isinstance(unsafe, text_type):
501 unsafe = unsafe.encode(charset, errors)
502
503 safe = (frozenset(bytearray(safe)) | _always_safe) - frozenset(bytearray(unsafe))
504 table = [chr(c) if c in safe else "%%%02X" % c for c in range(256)]
505
506 if not PY2:
507
508 def quote(string):
509 return "".join([table[c] for c in string])
510
511 else:
512
513 def quote(string):
514 return "".join([table[c] for c in bytearray(string)])
515
516 return quote
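The precompiled quoter operates on bytes (on Python 3 iterating bytes yields the integer byte values the table is indexed by); a sketch exercising this private helper directly:

```python
from werkzeug.urls import _make_fast_url_quote

# "/" is declared safe here, so it stays literal while the space is encoded.
quote = _make_fast_url_quote(safe="/")
quoted = quote(b"/a b")
```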
517
518
519 _fast_url_quote = _make_fast_url_quote()
520 _fast_quote_plus = _make_fast_url_quote(safe=" ", unsafe="+")
521
522
523 def _fast_url_quote_plus(string):
524 return _fast_quote_plus(string).replace(" ", "+")
525
526
527 def url_quote(string, charset="utf-8", errors="strict", safe="/:", unsafe=""):
528 """URL encode a single string with a given encoding.
529
530 :param string: the string to quote.
531 :param charset: the charset to be used.
532 :param safe: an optional sequence of safe characters.
533 :param unsafe: an optional sequence of unsafe characters.
534
535 .. versionadded:: 0.9.2
536 The `unsafe` parameter was added.
537 """
538 if not isinstance(string, (text_type, bytes, bytearray)):
539 string = text_type(string)
540 if isinstance(string, text_type):
541 string = string.encode(charset, errors)
542 if isinstance(safe, text_type):
543 safe = safe.encode(charset, errors)
544 if isinstance(unsafe, text_type):
545 unsafe = unsafe.encode(charset, errors)
546 safe = (frozenset(bytearray(safe)) | _always_safe) - frozenset(bytearray(unsafe))
547 rv = bytearray()
548 for char in bytearray(string):
549 if char in safe:
550 rv.append(char)
551 else:
552 rv.extend(_bytetohex[char])
553 return to_native(bytes(rv))
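A sketched call; with the default ``safe="/:"`` the slash survives while the space and the non-ASCII character are percent-encoded as UTF-8 bytes:

```python
from werkzeug.urls import url_quote

quoted = url_quote(u"/b\u00fccher verzeichnis")
```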
554
555
556 def url_quote_plus(string, charset="utf-8", errors="strict", safe=""):
557 """URL encode a single string with the given encoding and convert
558 whitespace to "+".
559
560 :param string: The string to quote.
561 :param charset: The charset to be used.
562 :param safe: An optional sequence of safe characters.
563 """
564 return url_quote(string, charset, errors, safe + " ", "+").replace(" ", "+")
565
566
567 def url_unparse(components):
568 """The reverse operation to :meth:`url_parse`. This accepts arbitrary
569 as well as :class:`URL` tuples and returns a URL as a string.
570
571 :param components: the parsed URL as tuple which should be converted
572 into a URL string.
573 """
574 scheme, netloc, path, query, fragment = normalize_string_tuple(components)
575 s = make_literal_wrapper(scheme)
576 url = s("")
577
578 # We generally treat file:///x and file:/x the same which is also
579 # what browsers seem to do. This also allows us to avoid a scheme
580 # registry for netloc handling or having to differentiate between an
581 # empty and a missing netloc.
582 if netloc or (scheme and path.startswith(s("/"))):
583 if path and path[:1] != s("/"):
584 path = s("/") + path
585 url = s("//") + (netloc or s("")) + path
586 elif path:
587 url += path
588 if scheme:
589 url = scheme + s(":") + url
590 if query:
591 url = url + s("?") + query
592 if fragment:
593 url = url + s("#") + fragment
594 return url
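A sketch of a round trip through ``url_parse``, plus passing a plain five-item tuple; the example URLs are arbitrary:

```python
from werkzeug.urls import url_parse, url_unparse

original = u"http://example.com/path?q=1#frag"
roundtripped = url_unparse(url_parse(original))

# Any (scheme, netloc, path, query, fragment) tuple is accepted as well.
plain = url_unparse((u"https", u"example.com", u"/x", u"", u""))
```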
595
596
597 def url_unquote(string, charset="utf-8", errors="replace", unsafe=""):
598 """URL decode a single string with a given encoding. If the charset
599 is set to `None` no unicode decoding is performed and raw bytes
600 are returned.
601
602 :param string: the string to unquote.
603 :param charset: the charset of the query string. If set to `None`
604 no unicode decoding will take place.
605 :param errors: the error handling for the charset decoding.
606 """
607 rv = _unquote_to_bytes(string, unsafe)
608 if charset is not None:
609 rv = rv.decode(charset, errors)
610 return rv
611
612
613 def url_unquote_plus(s, charset="utf-8", errors="replace"):
614 """URL decode a single string with the given `charset` and decode "+" to
615 whitespace.
616
617     In the default mode, invalid bytes are replaced. If you want a
618     different behavior you can set `errors` to ``'ignore'`` or
619     ``'strict'``; in strict mode a :exc:`UnicodeDecodeError` is raised.
620
621 :param s: The string to unquote.
622 :param charset: the charset of the query string. If set to `None`
623 no unicode decoding will take place.
624 :param errors: The error handling for the `charset` decoding.
625 """
626 if isinstance(s, text_type):
627 s = s.replace(u"+", u" ")
628 else:
629 s = s.replace(b"+", b" ")
630 return url_unquote(s, charset, errors)
631
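The stdlib's `urllib.parse.unquote_plus` does the equivalent plus-to-space decoding:

```python
from urllib.parse import unquote_plus

# "+" decodes to a space, percent escapes are decoded as UTF-8
print(unquote_plus("hello+world%21"))  # hello world!
```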
632
633 def url_fix(s, charset="utf-8"):
634 r"""Sometimes you get an URL by a user that just isn't a real URL because
635 it contains unsafe characters like ' ' and so on. This function can fix
636 some of the problems in a similar way browsers handle data entered by the
637 user:
638
639 >>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)')
640 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)'
641
642 :param s: the string with the URL to fix.
643     :param charset: The target charset for the URL if the URL was given
644         as a unicode string.
645 """
646 # First step is to switch to unicode processing and to convert
647 # backslashes (which are invalid in URLs anyways) to slashes. This is
648 # consistent with what Chrome does.
649 s = to_unicode(s, charset, "replace").replace("\\", "/")
650
651     # For the specific case that the URL looks like a malformed Windows
652     # URL we want to fix this up manually:
653 if s.startswith("file://") and s[7:8].isalpha() and s[8:10] in (":/", "|/"):
654 s = "file:///" + s[7:]
655
656 url = url_parse(s)
657 path = url_quote(url.path, charset, safe="/%+$!*'(),")
658 qs = url_quote_plus(url.query, charset, safe=":&%=+$!*'(),")
659 anchor = url_quote_plus(url.fragment, charset, safe=":&%=+$!*'(),")
660 return to_native(url_unparse((url.scheme, url.encode_netloc(), path, qs, anchor)))
661
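A rough stdlib-only sketch of the component-wise quoting performed above (`fix_url` is a hypothetical name; the backslash normalization and the safe sets mirror the code, while the netloc/Punycode handling is omitted):

```python
from urllib.parse import quote, quote_plus, urlsplit, urlunsplit

def fix_url(s):
    # backslashes are invalid in URLs; normalize them to slashes
    s = s.replace("\\", "/")
    parts = urlsplit(s)
    # quote each component, keeping already-quoted "%" sequences intact
    path = quote(parts.path, safe="/%+$!*'(),")
    qs = quote_plus(parts.query, safe=":&%=+$!*'(),")
    anchor = quote_plus(parts.fragment, safe=":&%=+$!*'(),")
    return urlunsplit((parts.scheme, parts.netloc, path, qs, anchor))

print(fix_url(u"http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)"))
# http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)
```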
662
663 # not-unreserved characters remain quoted when unquoting to IRI
664 _to_iri_unsafe = "".join([chr(c) for c in range(128) if c not in _always_safe])
665
666
667 def _codec_error_url_quote(e):
668 """Used in :func:`uri_to_iri` after unquoting to re-quote any
669 invalid bytes.
670 """
671 out = _fast_url_quote(e.object[e.start : e.end])
672
673 if PY2:
674 out = out.decode("utf-8")
675
676 return out, e.end
677
678
679 codecs.register_error("werkzeug.url_quote", _codec_error_url_quote)
680
681
682 def uri_to_iri(uri, charset="utf-8", errors="werkzeug.url_quote"):
683 """Convert a URI to an IRI. All valid UTF-8 characters are unquoted,
684 leaving all reserved and invalid characters quoted. If the URL has
685 a domain, it is decoded from Punycode.
686
687 >>> uri_to_iri("http://xn--n3h.net/p%C3%A5th?q=%C3%A8ry%DF")
688 'http://\\u2603.net/p\\xe5th?q=\\xe8ry%DF'
689
690 :param uri: The URI to convert.
691 :param charset: The encoding to encode unquoted bytes with.
692 :param errors: Error handler to use during ``bytes.encode``. By
693 default, invalid bytes are left quoted.
694
695 .. versionchanged:: 0.15
696 All reserved and invalid characters remain quoted. Previously,
697 only some reserved characters were preserved, and invalid bytes
698 were replaced instead of left quoted.
699
700 .. versionadded:: 0.6
701 """
702 if isinstance(uri, tuple):
703 uri = url_unparse(uri)
704
705 uri = url_parse(to_unicode(uri, charset))
706 path = url_unquote(uri.path, charset, errors, _to_iri_unsafe)
707 query = url_unquote(uri.query, charset, errors, _to_iri_unsafe)
708 fragment = url_unquote(uri.fragment, charset, errors, _to_iri_unsafe)
709 return url_unparse((uri.scheme, uri.decode_netloc(), path, query, fragment))
710
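The core unquoting step resembles the stdlib's `urllib.parse.unquote`, though unlike `uri_to_iri` the stdlib replaces invalid byte sequences rather than leaving them quoted:

```python
from urllib.parse import unquote

# valid UTF-8 percent escapes decode to the corresponding characters
print(unquote("p%C3%A5th"))  # påth
```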
711
712 # reserved characters remain unquoted when quoting to URI
713 _to_uri_safe = ":/?#[]@!$&'()*+,;=%"
714
715
716 def iri_to_uri(iri, charset="utf-8", errors="strict", safe_conversion=False):
717 """Convert an IRI to a URI. All non-ASCII and unsafe characters are
718 quoted. If the URL has a domain, it is encoded to Punycode.
719
720 >>> iri_to_uri('http://\\u2603.net/p\\xe5th?q=\\xe8ry%DF')
721 'http://xn--n3h.net/p%C3%A5th?q=%C3%A8ry%DF'
722
723 :param iri: The IRI to convert.
724 :param charset: The encoding of the IRI.
725 :param errors: Error handler to use during ``bytes.encode``.
726 :param safe_conversion: Return the URL unchanged if it only contains
727 ASCII characters and no whitespace. See the explanation below.
728
729 There is a general problem with IRI conversion with some protocols
730 that are in violation of the URI specification. Consider the
731 following two IRIs::
732
733 magnet:?xt=uri:whatever
734 itms-services://?action=download-manifest
735
736 After parsing, we don't know if the scheme requires the ``//``,
737 which is dropped if empty, but conveys different meanings in the
738 final URL if it's present or not. In this case, you can use
739 ``safe_conversion``, which will return the URL unchanged if it only
740 contains ASCII characters and no whitespace. This can result in a
741 URI with unquoted characters if it was not already quoted correctly,
742 but preserves the URL's semantics. Werkzeug uses this for the
743 ``Location`` header for redirects.
744
745 .. versionchanged:: 0.15
746 All reserved characters remain unquoted. Previously, only some
747 reserved characters were left unquoted.
748
749 .. versionchanged:: 0.9.6
750 The ``safe_conversion`` parameter was added.
751
752 .. versionadded:: 0.6
753 """
754 if isinstance(iri, tuple):
755 iri = url_unparse(iri)
756
757 if safe_conversion:
758 # If we're not sure if it's safe to convert the URL, and it only
759 # contains ASCII characters, return it unconverted.
760 try:
761 native_iri = to_native(iri)
762 ascii_iri = native_iri.encode("ascii")
763
764             # Only return if it doesn't contain whitespace, which would make it an invalid URL anyway.
765 if len(ascii_iri.split()) == 1:
766 return native_iri
767 except UnicodeError:
768 pass
769
770 iri = url_parse(to_unicode(iri, charset, errors))
771 path = url_quote(iri.path, charset, errors, _to_uri_safe)
772 query = url_quote(iri.query, charset, errors, _to_uri_safe)
773 fragment = url_quote(iri.fragment, charset, errors, _to_uri_safe)
774 return to_native(
775 url_unparse((iri.scheme, iri.encode_netloc(), path, query, fragment))
776 )
777
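A stdlib-only sketch of the two main steps, Punycode for the domain and percent-encoding for the path (userinfo/port handling in the netloc is omitted):

```python
from urllib.parse import quote, urlsplit, urlunsplit

iri = u"http://\u2603.net/p\xe5th"
parts = urlsplit(iri)
# Punycode-encode the domain via the stdlib "idna" codec
netloc = parts.netloc.encode("idna").decode("ascii")
# percent-encode non-ASCII path bytes, leaving reserved characters alone
path = quote(parts.path, safe=":/?#[]@!$&'()*+,;=%")
print(urlunsplit((parts.scheme, netloc, path, "", "")))
# http://xn--n3h.net/p%C3%A5th
```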
778
779 def url_decode(
780 s,
781 charset="utf-8",
782 decode_keys=False,
783 include_empty=True,
784 errors="replace",
785 separator="&",
786 cls=None,
787 ):
788 """
789 Parse a querystring and return it as :class:`MultiDict`. There is a
790 difference in key decoding on different Python versions. On Python 3
791 keys will always be fully decoded whereas on Python 2, keys will
792 remain bytestrings if they fit into ASCII. On 2.x keys can be forced
793 to be unicode by setting `decode_keys` to `True`.
794
795 If the charset is set to `None` no unicode decoding will happen and
796 raw bytes will be returned.
797
798     By default a missing value for a key will default to an empty string.
799     If you don't want that behavior you can set `include_empty` to `False`.
800
801     In the default mode, decoding errors are replaced. If you want a
802     different behavior you can set `errors` to ``'ignore'`` or ``'strict'``.
803     In strict mode a :exc:`UnicodeDecodeError` is raised.
804
805 .. versionchanged:: 0.5
806 In previous versions ";" and "&" could be used for url decoding.
807 This changed in 0.5 where only "&" is supported. If you want to
808 use ";" instead a different `separator` can be provided.
809
810 The `cls` parameter was added.
811
812 :param s: a string with the query string to decode.
813 :param charset: the charset of the query string. If set to `None`
814 no unicode decoding will take place.
815 :param decode_keys: Used on Python 2.x to control whether keys should
816 be forced to be unicode objects. If set to `True`
817 then keys will be unicode in all cases. Otherwise,
818 they remain `str` if they fit into ASCII.
819 :param include_empty: Set to `False` if you don't want empty values to
820 appear in the dict.
821 :param errors: the decoding error behavior.
822 :param separator: the pair separator to be used, defaults to ``&``
823 :param cls: an optional dict class to use. If this is not specified
824 or `None` the default :class:`MultiDict` is used.
825 """
826 if cls is None:
827 cls = MultiDict
828 if isinstance(s, text_type) and not isinstance(separator, text_type):
829 separator = separator.decode(charset or "ascii")
830 elif isinstance(s, bytes) and not isinstance(separator, bytes):
831 separator = separator.encode(charset or "ascii")
832 return cls(
833 _url_decode_impl(
834 s.split(separator), charset, decode_keys, include_empty, errors
835 )
836 )
837
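The stdlib analogue is `urllib.parse.parse_qsl`, which also yields the decoded pairs; pass `keep_blank_values=True` for behavior comparable to `include_empty`:

```python
from urllib.parse import parse_qsl

# keys without "=" and empty values are kept when keep_blank_values=True
print(parse_qsl("a=1&a=2&b=&c", keep_blank_values=True))
# [('a', '1'), ('a', '2'), ('b', ''), ('c', '')]
```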
838
839 def url_decode_stream(
840 stream,
841 charset="utf-8",
842 decode_keys=False,
843 include_empty=True,
844 errors="replace",
845 separator="&",
846 cls=None,
847 limit=None,
848 return_iterator=False,
849 ):
850 """Works like :func:`url_decode` but decodes a stream. The behavior
851 of stream and limit follows functions like
852 :func:`~werkzeug.wsgi.make_line_iter`. The generator of pairs is
853 directly fed to the `cls` so you can consume the data while it's
854 parsed.
855
856 .. versionadded:: 0.8
857
858 :param stream: a stream with the encoded querystring
859 :param charset: the charset of the query string. If set to `None`
860 no unicode decoding will take place.
861 :param decode_keys: Used on Python 2.x to control whether keys should
862 be forced to be unicode objects. If set to `True`,
863 keys will be unicode in all cases. Otherwise, they
864 remain `str` if they fit into ASCII.
865 :param include_empty: Set to `False` if you don't want empty values to
866 appear in the dict.
867 :param errors: the decoding error behavior.
868 :param separator: the pair separator to be used, defaults to ``&``
869 :param cls: an optional dict class to use. If this is not specified
870 or `None` the default :class:`MultiDict` is used.
871 :param limit: the content length of the URL data. Not necessary if
872 a limited stream is provided.
873 :param return_iterator: if set to `True` the `cls` argument is ignored
874 and an iterator over all decoded pairs is
875 returned
876 """
877 from .wsgi import make_chunk_iter
878
879 pair_iter = make_chunk_iter(stream, separator, limit)
880 decoder = _url_decode_impl(pair_iter, charset, decode_keys, include_empty, errors)
881
882 if return_iterator:
883 return decoder
884
885 if cls is None:
886 cls = MultiDict
887
888 return cls(decoder)
889
890
891 def _url_decode_impl(pair_iter, charset, decode_keys, include_empty, errors):
892 for pair in pair_iter:
893 if not pair:
894 continue
895 s = make_literal_wrapper(pair)
896 equal = s("=")
897 if equal in pair:
898 key, value = pair.split(equal, 1)
899 else:
900 if not include_empty:
901 continue
902 key = pair
903 value = s("")
904 key = url_unquote_plus(key, charset, errors)
905 if charset is not None and PY2 and not decode_keys:
906 key = try_coerce_native(key)
907 yield key, url_unquote_plus(value, charset, errors)
908
909
910 def url_encode(
911 obj, charset="utf-8", encode_keys=False, sort=False, key=None, separator=b"&"
912 ):
913 """URL encode a dict/`MultiDict`. If a value is `None` it will not appear
914     in the result string. By default only values are encoded into the
915     target charset. If `encode_keys` is set to ``True`` unicode keys are
916 supported too.
917
918 If `sort` is set to `True` the items are sorted by `key` or the default
919 sorting algorithm.
920
921 .. versionadded:: 0.5
922 `sort`, `key`, and `separator` were added.
923
924 :param obj: the object to encode into a query string.
925 :param charset: the charset of the query string.
926 :param encode_keys: set to `True` if you have unicode keys. (Ignored on
927 Python 3.x)
928 :param sort: set to `True` if you want parameters to be sorted by `key`.
929 :param separator: the separator to be used for the pairs.
930 :param key: an optional function to be used for sorting. For more details
931 check out the :func:`sorted` documentation.
932 """
933 separator = to_native(separator, "ascii")
934 return separator.join(_url_encode_impl(obj, charset, encode_keys, sort, key))
935
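The stdlib's `urllib.parse.urlencode` is the closest analogue, though it does not skip `None` values the way `url_encode` does:

```python
from urllib.parse import urlencode

# pairs are joined with "&"; values are quote_plus-encoded
print(urlencode([("a", 1), ("b", "v v")]))  # a=1&b=v+v
```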
936
937 def url_encode_stream(
938 obj,
939 stream=None,
940 charset="utf-8",
941 encode_keys=False,
942 sort=False,
943 key=None,
944 separator=b"&",
945 ):
946 """Like :meth:`url_encode` but writes the results to a stream
947 object. If the stream is `None` a generator over all encoded
948 pairs is returned.
949
950 .. versionadded:: 0.8
951
952 :param obj: the object to encode into a query string.
953 :param stream: a stream to write the encoded object into or `None` if
954 an iterator over the encoded pairs should be returned. In
955 that case the separator argument is ignored.
956 :param charset: the charset of the query string.
957 :param encode_keys: set to `True` if you have unicode keys. (Ignored on
958 Python 3.x)
959 :param sort: set to `True` if you want parameters to be sorted by `key`.
960 :param separator: the separator to be used for the pairs.
961 :param key: an optional function to be used for sorting. For more details
962 check out the :func:`sorted` documentation.
963 """
964 separator = to_native(separator, "ascii")
965 gen = _url_encode_impl(obj, charset, encode_keys, sort, key)
966 if stream is None:
967 return gen
968 for idx, chunk in enumerate(gen):
969 if idx:
970 stream.write(separator)
971 stream.write(chunk)
972
973
974 def url_join(base, url, allow_fragments=True):
975 """Join a base URL and a possibly relative URL to form an absolute
976 interpretation of the latter.
977
978 :param base: the base URL for the join operation.
979 :param url: the URL to join.
980 :param allow_fragments: indicates whether fragments should be allowed.
981 """
982 if isinstance(base, tuple):
983 base = url_unparse(base)
984 if isinstance(url, tuple):
985 url = url_unparse(url)
986
987 base, url = normalize_string_tuple((base, url))
988 s = make_literal_wrapper(base)
989
990 if not base:
991 return url
992 if not url:
993 return base
994
995 bscheme, bnetloc, bpath, bquery, bfragment = url_parse(
996 base, allow_fragments=allow_fragments
997 )
998 scheme, netloc, path, query, fragment = url_parse(url, bscheme, allow_fragments)
999 if scheme != bscheme:
1000 return url
1001 if netloc:
1002 return url_unparse((scheme, netloc, path, query, fragment))
1003 netloc = bnetloc
1004
1005 if path[:1] == s("/"):
1006 segments = path.split(s("/"))
1007 elif not path:
1008 segments = bpath.split(s("/"))
1009 if not query:
1010 query = bquery
1011 else:
1012 segments = bpath.split(s("/"))[:-1] + path.split(s("/"))
1013
1014 # If the rightmost part is "./" we want to keep the slash but
1015 # remove the dot.
1016 if segments[-1] == s("."):
1017 segments[-1] = s("")
1018
1019 # Resolve ".." and "."
1020 segments = [segment for segment in segments if segment != s(".")]
1021 while 1:
1022 i = 1
1023 n = len(segments) - 1
1024 while i < n:
1025 if segments[i] == s("..") and segments[i - 1] not in (s(""), s("..")):
1026 del segments[i - 1 : i + 1]
1027 break
1028 i += 1
1029 else:
1030 break
1031
1032 # Remove trailing ".." if the URL is absolute
1033 unwanted_marker = [s(""), s("..")]
1034 while segments[:2] == unwanted_marker:
1035 del segments[1]
1036
1037 path = s("/").join(segments)
1038 return url_unparse((scheme, netloc, path, query, fragment))
1039
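The segment resolution above follows the same "." and ".." rules as the stdlib's `urllib.parse.urljoin`:

```python
from urllib.parse import urljoin

print(urljoin("http://example.com/a/b", "../c"))  # http://example.com/c
print(urljoin("http://example.com/a/b", "/d"))    # http://example.com/d
```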
1040
1041 class Href(object):
1042 """Implements a callable that constructs URLs with the given base. The
1043 function can be called with any number of positional and keyword
1044     arguments which then are used to assemble the URL. Works with URLs
1045 and posix paths.
1046
1047 Positional arguments are appended as individual segments to
1048 the path of the URL:
1049
1050 >>> href = Href('/foo')
1051 >>> href('bar', 23)
1052 '/foo/bar/23'
1053 >>> href('foo', bar=23)
1054 '/foo/foo?bar=23'
1055
1056 If any of the arguments (positional or keyword) evaluates to `None` it
1057 will be skipped. If no keyword arguments are given the last argument
1058 can be a :class:`dict` or :class:`MultiDict` (or any other dict subclass),
1059 otherwise the keyword arguments are used for the query parameters, cutting
1060 off the first trailing underscore of the parameter name:
1061
1062 >>> href(is_=42)
1063 '/foo?is=42'
1064 >>> href({'foo': 'bar'})
1065 '/foo?foo=bar'
1066
1067     Combining both methods is not allowed:
1068
1069 >>> href({'foo': 'bar'}, bar=42)
1070 Traceback (most recent call last):
1071 ...
1072 TypeError: keyword arguments and query-dicts can't be combined
1073
1074 Accessing attributes on the href object creates a new href object with
1075 the attribute name as prefix:
1076
1077 >>> bar_href = href.bar
1078 >>> bar_href("blub")
1079 '/foo/bar/blub'
1080
1081 If `sort` is set to `True` the items are sorted by `key` or the default
1082 sorting algorithm:
1083
1084 >>> href = Href("/", sort=True)
1085 >>> href(a=1, b=2, c=3)
1086 '/?a=1&b=2&c=3'
1087
1088 .. versionadded:: 0.5
1089 `sort` and `key` were added.
1090 """
1091
1092 def __init__(self, base="./", charset="utf-8", sort=False, key=None):
1093 if not base:
1094 base = "./"
1095 self.base = base
1096 self.charset = charset
1097 self.sort = sort
1098 self.key = key
1099
1100 def __getattr__(self, name):
1101 if name[:2] == "__":
1102 raise AttributeError(name)
1103 base = self.base
1104 if base[-1:] != "/":
1105 base += "/"
1106 return Href(url_join(base, name), self.charset, self.sort, self.key)
1107
1108 def __call__(self, *path, **query):
1109 if path and isinstance(path[-1], dict):
1110 if query:
1111 raise TypeError("keyword arguments and query-dicts can't be combined")
1112 query, path = path[-1], path[:-1]
1113 elif query:
1114 query = dict(
1115 [(k.endswith("_") and k[:-1] or k, v) for k, v in query.items()]
1116 )
1117 path = "/".join(
1118 [
1119 to_unicode(url_quote(x, self.charset), "ascii")
1120 for x in path
1121 if x is not None
1122 ]
1123 ).lstrip("/")
1124 rv = self.base
1125 if path:
1126 if not rv.endswith("/"):
1127 rv += "/"
1128 rv = url_join(rv, "./" + path)
1129 if query:
1130 rv += "?" + to_unicode(
1131 url_encode(query, self.charset, sort=self.sort, key=self.key), "ascii"
1132 )
1133 return to_native(rv)
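A minimal standalone sketch of the `__call__` logic above (`href` here is a hypothetical helper; attribute access, sorting, and the `MultiDict` case are omitted):

```python
from urllib.parse import quote, urlencode

def href(base, *path, **query):
    # positional arguments become path segments; None values are skipped
    segs = "/".join(quote(str(p)) for p in path if p is not None)
    url = base.rstrip("/") + ("/" + segs if segs else "")
    if query:
        # cut off one trailing underscore, as in keyword arguments like is_=42
        url += "?" + urlencode(
            {(k[:-1] if k.endswith("_") else k): v for k, v in query.items()}
        )
    return url

print(href("/foo", "bar", 23))  # /foo/bar/23
print(href("/foo", is_=42))     # /foo?is=42
```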
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.useragents
3 ~~~~~~~~~~~~~~~~~~~
4
5     This module provides a helper to inspect user agent strings. It
6 is far from complete but should work for most of the currently available
7 browsers.
8
9
10 :copyright: 2007 Pallets
11 :license: BSD-3-Clause
12 """
13 import re
14 import warnings
15
16
17 class UserAgentParser(object):
18 """A simple user agent parser. Used by the `UserAgent`."""
19
20 platforms = (
21 ("cros", "chromeos"),
22 ("iphone|ios", "iphone"),
23 ("ipad", "ipad"),
24 (r"darwin|mac|os\s*x", "macos"),
25 ("win", "windows"),
26 (r"android", "android"),
27 ("netbsd", "netbsd"),
28 ("openbsd", "openbsd"),
29 ("freebsd", "freebsd"),
30 ("dragonfly", "dragonflybsd"),
31 ("(sun|i86)os", "solaris"),
32 (r"x11|lin(\b|ux)?", "linux"),
33 (r"nintendo\s+wii", "wii"),
34 ("irix", "irix"),
35 ("hp-?ux", "hpux"),
36 ("aix", "aix"),
37 ("sco|unix_sv", "sco"),
38 ("bsd", "bsd"),
39 ("amiga", "amiga"),
40 ("blackberry|playbook", "blackberry"),
41 ("symbian", "symbian"),
42 )
43 browsers = (
44 ("googlebot", "google"),
45 ("msnbot", "msn"),
46 ("yahoo", "yahoo"),
47 ("ask jeeves", "ask"),
48 (r"aol|america\s+online\s+browser", "aol"),
49 ("opera", "opera"),
50 ("edge", "edge"),
51 ("chrome|crios", "chrome"),
52 ("seamonkey", "seamonkey"),
53 ("firefox|firebird|phoenix|iceweasel", "firefox"),
54 ("galeon", "galeon"),
55 ("safari|version", "safari"),
56 ("webkit", "webkit"),
57 ("camino", "camino"),
58 ("konqueror", "konqueror"),
59 ("k-meleon", "kmeleon"),
60 ("netscape", "netscape"),
61 (r"msie|microsoft\s+internet\s+explorer|trident/.+? rv:", "msie"),
62 ("lynx", "lynx"),
63 ("links", "links"),
64 ("Baiduspider", "baidu"),
65 ("bingbot", "bing"),
66 ("mozilla", "mozilla"),
67 )
68
69 _browser_version_re = r"(?:%s)[/\sa-z(]*(\d+[.\da-z]+)?"
70 _language_re = re.compile(
71 r"(?:;\s*|\s+)(\b\w{2}\b(?:-\b\w{2}\b)?)\s*;|"
72 r"(?:\(|\[|;)\s*(\b\w{2}\b(?:-\b\w{2}\b)?)\s*(?:\]|\)|;)"
73 )
74
75 def __init__(self):
76 self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platforms]
77 self.browsers = [
78 (b, re.compile(self._browser_version_re % a, re.I))
79 for a, b in self.browsers
80 ]
81
82 def __call__(self, user_agent):
83 for platform, regex in self.platforms: # noqa: B007
84 match = regex.search(user_agent)
85 if match is not None:
86 break
87 else:
88 platform = None
89 for browser, regex in self.browsers: # noqa: B007
90 match = regex.search(user_agent)
91 if match is not None:
92 version = match.group(1)
93 break
94 else:
95 browser = version = None
96 match = self._language_re.search(user_agent)
97 if match is not None:
98 language = match.group(1) or match.group(2)
99 else:
100 language = None
101 return platform, browser, version, language
102
103
104 class UserAgent(object):
105 """Represents a user agent. Pass it a WSGI environment or a user agent
106 string and you can inspect some of the details from the user agent
107 string via the attributes. The following attributes exist:
108
109 .. attribute:: string
110
111 the raw user agent string
112
113 .. attribute:: platform
114
115 the browser platform. The following platforms are currently
116 recognized:
117
118 - `aix`
119 - `amiga`
120 - `android`
121 - `blackberry`
122 - `bsd`
123 - `chromeos`
124 - `dragonflybsd`
125 - `freebsd`
126 - `hpux`
127 - `ipad`
128 - `iphone`
129 - `irix`
130 - `linux`
131 - `macos`
132 - `netbsd`
133 - `openbsd`
134 - `sco`
135 - `solaris`
136 - `symbian`
137 - `wii`
138 - `windows`
139
140 .. attribute:: browser
141
142 the name of the browser. The following browsers are currently
143 recognized:
144
145 - `aol` *
146 - `ask` *
147 - `baidu` *
148 - `bing` *
149 - `camino`
150 - `chrome`
151 - `edge`
152 - `firefox`
153 - `galeon`
154 - `google` *
155 - `kmeleon`
156 - `konqueror`
157 - `links`
158 - `lynx`
159 - `mozilla`
160 - `msie`
161 - `msn`
162 - `netscape`
163 - `opera`
164 - `safari`
165 - `seamonkey`
166 - `webkit`
167 - `yahoo` *
168
169 (Browsers marked with a star (``*``) are crawlers.)
170
171 .. attribute:: version
172
173 the version of the browser
174
175 .. attribute:: language
176
177 the language of the browser
178 """
179
180 _parser = UserAgentParser()
181
182 def __init__(self, environ_or_string):
183 if isinstance(environ_or_string, dict):
184 environ_or_string = environ_or_string.get("HTTP_USER_AGENT", "")
185 self.string = environ_or_string
186 self.platform, self.browser, self.version, self.language = self._parser(
187 environ_or_string
188 )
189
190 def to_header(self):
191 return self.string
192
193 def __str__(self):
194 return self.string
195
196 def __nonzero__(self):
197 return bool(self.browser)
198
199 __bool__ = __nonzero__
200
201 def __repr__(self):
202 return "<%s %r/%s>" % (self.__class__.__name__, self.browser, self.version)
203
204
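To illustrate the version-capture pattern that `_browser_version_re` builds, here is a self-contained sketch with the template filled in for a single browser entry (the user agent string is a made-up example):

```python
import re

# same template as _browser_version_re above, instantiated for "chrome"
pattern = re.compile(r"(?:chrome)[/\sa-z(]*(\d+[.\da-z]+)?", re.I)
ua = "Mozilla/5.0 (X11; Linux x86_64) Chrome/74.0.3729.169 Safari/537.36"
match = pattern.search(ua)
print(match.group(1))  # 74.0.3729.169
```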
205 # DEPRECATED
206 from .wrappers import UserAgentMixin as _UserAgentMixin
207
208
209 class UserAgentMixin(_UserAgentMixin):
210 @property
211 def user_agent(self, *args, **kwargs):
212 warnings.warn(
213 "'werkzeug.useragents.UserAgentMixin' should be imported"
214 " from 'werkzeug.wrappers.UserAgentMixin'. This old import"
215 " will be removed in version 1.0.",
216 DeprecationWarning,
217 stacklevel=2,
218 )
219 return super(_UserAgentMixin, self).user_agent
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.utils
3 ~~~~~~~~~~~~~~
4
5 This module implements various utilities for WSGI applications. Most of
6     them are used by the request and response wrappers, but they are also
7     useful on their own, especially for middleware development.
8
9 :copyright: 2007 Pallets
10 :license: BSD-3-Clause
11 """
12 import codecs
13 import os
14 import pkgutil
15 import re
16 import sys
17 import warnings
18
19 from ._compat import iteritems
20 from ._compat import PY2
21 from ._compat import reraise
22 from ._compat import string_types
23 from ._compat import text_type
24 from ._compat import unichr
25 from ._internal import _DictAccessorProperty
26 from ._internal import _missing
27 from ._internal import _parse_signature
28
29 try:
30 from html.entities import name2codepoint
31 except ImportError:
32 from htmlentitydefs import name2codepoint
33
34
35 _format_re = re.compile(r"\$(?:(%s)|\{(%s)\})" % (("[a-zA-Z_][a-zA-Z0-9_]*",) * 2))
36 _entity_re = re.compile(r"&([^;]+);")
37 _filename_ascii_strip_re = re.compile(r"[^A-Za-z0-9_.-]")
38 _windows_device_files = (
39 "CON",
40 "AUX",
41 "COM1",
42 "COM2",
43 "COM3",
44 "COM4",
45 "LPT1",
46 "LPT2",
47 "LPT3",
48 "PRN",
49 "NUL",
50 )
51
52
53 class cached_property(property):
54 """A decorator that converts a function into a lazy property. The
55 function wrapped is called the first time to retrieve the result
56 and then that calculated result is used the next time you access
57 the value::
58
59 class Foo(object):
60
61 @cached_property
62 def foo(self):
63 # calculate something important here
64 return 42
65
66 The class has to have a `__dict__` in order for this property to
67 work.
68 """
69
70 # implementation detail: A subclass of python's builtin property
71 # decorator, we override __get__ to check for a cached value. If one
72 # chooses to invoke __get__ by hand the property will still work as
73 # expected because the lookup logic is replicated in __get__ for
74 # manual invocation.
75
76 def __init__(self, func, name=None, doc=None):
77 self.__name__ = name or func.__name__
78 self.__module__ = func.__module__
79 self.__doc__ = doc or func.__doc__
80 self.func = func
81
82 def __set__(self, obj, value):
83 obj.__dict__[self.__name__] = value
84
85 def __get__(self, obj, type=None):
86 if obj is None:
87 return self
88 value = obj.__dict__.get(self.__name__, _missing)
89 if value is _missing:
90 value = self.func(obj)
91 obj.__dict__[self.__name__] = value
92 return value
93
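The same caching idea can also be shown with a plain non-data descriptor (unlike the class above, which subclasses `property` and therefore replicates the cache lookup in `__get__`); `cached_value` is a hypothetical name for this sketch:

```python
class cached_value(object):
    """Non-data descriptor: after the first access, the value stored in
    the instance __dict__ shadows the descriptor entirely."""

    def __init__(self, func):
        self.func = func
        self.__name__ = func.__name__

    def __get__(self, obj, owner=None):
        if obj is None:
            return self
        # compute once and store on the instance
        value = obj.__dict__[self.__name__] = self.func(obj)
        return value


class Foo(object):
    calls = 0

    @cached_value
    def foo(self):
        Foo.calls += 1
        return 42


f = Foo()
print(f.foo, f.foo, Foo.calls)  # 42 42 1
```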
94
95 class environ_property(_DictAccessorProperty):
96 """Maps request attributes to environment variables. This works not only
97     for the Werkzeug request object, but also any other class with an
98 environ attribute:
99
100 >>> class Test(object):
101 ... environ = {'key': 'value'}
102 ... test = environ_property('key')
103 >>> var = Test()
104 >>> var.test
105 'value'
106
107 If you pass it a second value it's used as default if the key does not
108 exist, the third one can be a converter that takes a value and converts
109 it. If it raises :exc:`ValueError` or :exc:`TypeError` the default value
110 is used. If no default value is provided `None` is used.
111
112 Per default the property is read only. You have to explicitly enable it
113 by passing ``read_only=False`` to the constructor.
114 """
115
116 read_only = True
117
118 def lookup(self, obj):
119 return obj.environ
120
121
122 class header_property(_DictAccessorProperty):
123 """Like `environ_property` but for headers."""
124
125 def lookup(self, obj):
126 return obj.headers
127
128
129 class HTMLBuilder(object):
130 """Helper object for HTML generation.
131
132     By default there are two instances of this class: the `html` one and
133     the `xhtml` one for those two dialects. The class uses keyword parameters
134 and positional parameters to generate small snippets of HTML.
135
136 Keyword parameters are converted to XML/SGML attributes, positional
137 arguments are used as children. Because Python accepts positional
138 arguments before keyword arguments it's a good idea to use a list with the
139 star-syntax for some children:
140
141 >>> html.p(class_='foo', *[html.a('foo', href='foo.html'), ' ',
142 ... html.a('bar', href='bar.html')])
143 u'<p class="foo"><a href="foo.html">foo</a> <a href="bar.html">bar</a></p>'
144
145     This class works around some browser limitations and cannot be used for
146 arbitrary SGML/XML generation. For that purpose lxml and similar
147 libraries exist.
148
149 Calling the builder escapes the string passed:
150
151 >>> html.p(html("<foo>"))
152 u'<p>&lt;foo&gt;</p>'
153 """
154
155 _entity_re = re.compile(r"&([^;]+);")
156 _entities = name2codepoint.copy()
157 _entities["apos"] = 39
158 _empty_elements = {
159 "area",
160 "base",
161 "basefont",
162 "br",
163 "col",
164 "command",
165 "embed",
166 "frame",
167 "hr",
168 "img",
169 "input",
170 "keygen",
171 "isindex",
172 "link",
173 "meta",
174 "param",
175 "source",
176 "wbr",
177 }
178 _boolean_attributes = {
179 "selected",
180 "checked",
181 "compact",
182 "declare",
183 "defer",
184 "disabled",
185 "ismap",
186 "multiple",
187 "nohref",
188 "noresize",
189 "noshade",
190 "nowrap",
191 }
192 _plaintext_elements = {"textarea"}
193 _c_like_cdata = {"script", "style"}
194
195 def __init__(self, dialect):
196 self._dialect = dialect
197
198 def __call__(self, s):
199 return escape(s)
200
201 def __getattr__(self, tag):
202 if tag[:2] == "__":
203 raise AttributeError(tag)
204
205 def proxy(*children, **arguments):
206 buffer = "<" + tag
207 for key, value in iteritems(arguments):
208 if value is None:
209 continue
210 if key[-1] == "_":
211 key = key[:-1]
212 if key in self._boolean_attributes:
213 if not value:
214 continue
215 if self._dialect == "xhtml":
216 value = '="' + key + '"'
217 else:
218 value = ""
219 else:
220 value = '="' + escape(value) + '"'
221 buffer += " " + key + value
222 if not children and tag in self._empty_elements:
223 if self._dialect == "xhtml":
224 buffer += " />"
225 else:
226 buffer += ">"
227 return buffer
228 buffer += ">"
229
230 children_as_string = "".join(
231 [text_type(x) for x in children if x is not None]
232 )
233
234 if children_as_string:
235 if tag in self._plaintext_elements:
236 children_as_string = escape(children_as_string)
237 elif tag in self._c_like_cdata and self._dialect == "xhtml":
238 children_as_string = (
239 "/*<![CDATA[*/" + children_as_string + "/*]]>*/"
240 )
241 buffer += children_as_string + "</" + tag + ">"
242 return buffer
243
244 return proxy
245
246 def __repr__(self):
247 return "<%s for %r>" % (self.__class__.__name__, self._dialect)
248
249
250 html = HTMLBuilder("html")
251 xhtml = HTMLBuilder("xhtml")
252
253 # https://cgit.freedesktop.org/xdg/shared-mime-info/tree/freedesktop.org.xml.in
254 # https://www.iana.org/assignments/media-types/media-types.xhtml
255 # Types listed in the XDG mime info that have a charset in the IANA registration.
256 _charset_mimetypes = {
257 "application/ecmascript",
258 "application/javascript",
259 "application/sql",
260 "application/xml",
261 "application/xml-dtd",
262 "application/xml-external-parsed-entity",
263 }
264
265
266 def get_content_type(mimetype, charset):
267 """Returns the full content type string with charset for a mimetype.
268
269 If the mimetype represents text, the charset parameter will be
270 appended, otherwise the mimetype is returned unchanged.
271
272 :param mimetype: The mimetype to be used as content type.
273 :param charset: The charset to be appended for text mimetypes.
274 :return: The content type.
275
276     .. versionchanged:: 0.15
277 Any type that ends with ``+xml`` gets a charset, not just those
278 that start with ``application/``. Known text types such as
279 ``application/javascript`` are also given charsets.
280 """
281 if (
282 mimetype.startswith("text/")
283 or mimetype in _charset_mimetypes
284 or mimetype.endswith("+xml")
285 ):
286 mimetype += "; charset=" + charset
287
288 return mimetype
289
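For example, re-stating the logic above as a standalone function (with a trimmed-down `_charset_mimetypes` set) so it can be run in isolation:

```python
_charset_mimetypes = {"application/javascript", "application/xml"}

def get_content_type(mimetype, charset):
    # text types, known charset-bearing types, and +xml types get a charset
    if (
        mimetype.startswith("text/")
        or mimetype in _charset_mimetypes
        or mimetype.endswith("+xml")
    ):
        return mimetype + "; charset=" + charset
    return mimetype

print(get_content_type("text/html", "utf-8"))      # text/html; charset=utf-8
print(get_content_type("image/svg+xml", "utf-8"))  # image/svg+xml; charset=utf-8
print(get_content_type("image/png", "utf-8"))      # image/png (unchanged)
```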
290
291 def detect_utf_encoding(data):
292 """Detect which UTF encoding was used to encode the given bytes.
293
294 The latest JSON standard (:rfc:`8259`) suggests that only UTF-8 is
295 accepted. Older documents allowed 8, 16, or 32. 16 and 32 can be big
296 or little endian. Some editors or libraries may prepend a BOM.
297
298 :internal:
299
300 :param data: Bytes in unknown UTF encoding.
301 :return: UTF encoding name
302
303 .. versionadded:: 0.15
304 """
305 head = data[:4]
306
307 if head[:3] == codecs.BOM_UTF8:
308 return "utf-8-sig"
309
310 if b"\x00" not in head:
311 return "utf-8"
312
313 if head in (codecs.BOM_UTF32_BE, codecs.BOM_UTF32_LE):
314 return "utf-32"
315
316 if head[:2] in (codecs.BOM_UTF16_BE, codecs.BOM_UTF16_LE):
317 return "utf-16"
318
319 if len(head) == 4:
320 if head[:3] == b"\x00\x00\x00":
321 return "utf-32-be"
322
323 if head[::2] == b"\x00\x00":
324 return "utf-16-be"
325
326 if head[1:] == b"\x00\x00\x00":
327 return "utf-32-le"
328
329 if head[1::2] == b"\x00\x00":
330 return "utf-16-le"
331
332 if len(head) == 2:
333 return "utf-16-be" if head.startswith(b"\x00") else "utf-16-le"
334
335 return "utf-8"
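A quick demonstration of why the NUL-byte heuristic above works: the first characters of a JSON document are ASCII, so the encoding alone determines where the zero bytes land within the first four bytes.

```python
# Print the first four bytes of the same JSON document in each encoding;
# the position of the b"\x00" bytes identifies the encoding.
for enc in ("utf-8", "utf-16-le", "utf-16-be", "utf-32-le", "utf-32-be"):
    head = '{"a": 1}'.encode(enc)[:4]
    print("%-9s %r" % (enc, head))
```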
336
337
338 def format_string(string, context):
339 """String-template format a string:
340
341 >>> format_string('$foo and ${foo}s', dict(foo=42))
342 '42 and 42s'
343
344     This does not do any attribute lookup. For more advanced string
345     formatting have a look at the `werkzeug.template` module.
346
347 :param string: the format string.
348 :param context: a dict with the variables to insert.
349 """
350
351 def lookup_arg(match):
352 x = context[match.group(1) or match.group(2)]
353 if not isinstance(x, string_types):
354 x = type(string)(x)
355 return x
356
357 return _format_re.sub(lookup_arg, string)
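The stdlib's `string.Template` performs the same `$foo` / `${foo}` substitution shown in the doctest above, and is the usual choice when werkzeug is not involved:

```python
from string import Template

# string.Template supports both $name and ${name} placeholders.
print(Template("$foo and ${foo}s").substitute({"foo": 42}))  # 42 and 42s
```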
358
359
360 def secure_filename(filename):
361 r"""Pass it a filename and it will return a secure version of it. This
362 filename can then safely be stored on a regular file system and passed
363 to :func:`os.path.join`. The filename returned is an ASCII only string
364 for maximum portability.
365
366     On Windows systems the function also makes sure that the file is not
367 named after one of the special device files.
368
369 >>> secure_filename("My cool movie.mov")
370 'My_cool_movie.mov'
371 >>> secure_filename("../../../etc/passwd")
372 'etc_passwd'
373 >>> secure_filename(u'i contain cool \xfcml\xe4uts.txt')
374 'i_contain_cool_umlauts.txt'
375
376 The function might return an empty filename. It's your responsibility
377 to ensure that the filename is unique and that you abort or
378 generate a random filename if the function returned an empty one.
379
380 .. versionadded:: 0.5
381
382 :param filename: the filename to secure
383 """
384 if isinstance(filename, text_type):
385 from unicodedata import normalize
386
387 filename = normalize("NFKD", filename).encode("ascii", "ignore")
388 if not PY2:
389 filename = filename.decode("ascii")
390 for sep in os.path.sep, os.path.altsep:
391 if sep:
392 filename = filename.replace(sep, " ")
393 filename = str(_filename_ascii_strip_re.sub("", "_".join(filename.split()))).strip(
394 "._"
395 )
396
397 # on nt a couple of special files are present in each folder. We
398 # have to ensure that the target file is not such a filename. In
399 # this case we prepend an underline
400 if (
401 os.name == "nt"
402 and filename
403 and filename.split(".")[0].upper() in _windows_device_files
404 ):
405 filename = "_" + filename
406
407 return filename
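A condensed standalone sketch of the steps above: NFKD-normalize to ASCII, replace path separators, then strip unsafe characters. The Windows device-name guard from the full function is omitted here for brevity.

```python
import os
import re
import unicodedata

# Same pattern as werkzeug's _filename_ascii_strip_re.
_ascii_strip_re = re.compile(r"[^A-Za-z0-9_.-]")


def secure_filename(filename):
    # Fold accented characters to their ASCII base characters.
    filename = (
        unicodedata.normalize("NFKD", filename)
        .encode("ascii", "ignore")
        .decode("ascii")
    )
    # Neutralize path separators so "../" tricks collapse to plain words.
    for sep in os.path.sep, os.path.altsep:
        if sep:
            filename = filename.replace(sep, " ")
    return _ascii_strip_re.sub("", "_".join(filename.split())).strip("._")


print(secure_filename("../../../etc/passwd"))  # etc_passwd
print(secure_filename("My cool movie.mov"))    # My_cool_movie.mov
```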
408
409
410 def escape(s, quote=None):
411     """Replace the special characters "&", "<", ">" and '"' with HTML-safe sequences.
412
413     There is special handling for `None`, which escapes to an empty string.
414
415 .. versionchanged:: 0.9
416 `quote` is now implicitly on.
417
418 :param s: the string to escape.
419 :param quote: ignored.
420 """
421 if s is None:
422 return ""
423 elif hasattr(s, "__html__"):
424 return text_type(s.__html__())
425 elif not isinstance(s, string_types):
426 s = text_type(s)
427 if quote is not None:
428 from warnings import warn
429
430 warn(
431 "The 'quote' parameter is no longer used as of version 0.9"
432 " and will be removed in version 1.0.",
433 DeprecationWarning,
434 stacklevel=2,
435 )
436 s = (
437 s.replace("&", "&amp;")
438 .replace("<", "&lt;")
439 .replace(">", "&gt;")
440 .replace('"', "&quot;")
441 )
442 return s
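The replacement chain above, isolated as a minimal sketch. Ordering matters: `&` must be escaped first so the other substitutions are not double-escaped.

```python
def escape(s):
    # None escapes to the empty string, matching the behavior above.
    if s is None:
        return ""
    return (
        str(s)
        .replace("&", "&amp;")   # must come first
        .replace("<", "&lt;")
        .replace(">", "&gt;")
        .replace('"', "&quot;")
    )


print(escape('<a href="x">1 & 2</a>'))
# &lt;a href=&quot;x&quot;&gt;1 &amp; 2&lt;/a&gt;
```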
443
444
445 def unescape(s):
446 """The reverse function of `escape`. This unescapes all the HTML
447 entities, not only the XML entities inserted by `escape`.
448
449 :param s: the string to unescape.
450 """
451
452 def handle_match(m):
453 name = m.group(1)
454 if name in HTMLBuilder._entities:
455 return unichr(HTMLBuilder._entities[name])
456 try:
457 if name[:2] in ("#x", "#X"):
458 return unichr(int(name[2:], 16))
459 elif name.startswith("#"):
460 return unichr(int(name[1:]))
461 except ValueError:
462 pass
463 return u""
464
465 return _entity_re.sub(handle_match, s)
466
467
468 def redirect(location, code=302, Response=None):
469 """Returns a response object (a WSGI application) that, if called,
470 redirects the client to the target location. Supported codes are
471 301, 302, 303, 305, 307, and 308. 300 is not supported because
472 it's not a real redirect and 304 because it's the answer for a
473     request with defined If-Modified-Since headers.
474
475 .. versionadded:: 0.6
476 The location can now be a unicode string that is encoded using
477 the :func:`iri_to_uri` function.
478
479 .. versionadded:: 0.10
480 The class used for the Response object can now be passed in.
481
482 :param location: the location the response should redirect to.
483 :param code: the redirect status code. defaults to 302.
484 :param class Response: a Response class to use when instantiating a
485 response. The default is :class:`werkzeug.wrappers.Response` if
486 unspecified.
487 """
488 if Response is None:
489 from .wrappers import Response
490
491 display_location = escape(location)
492 if isinstance(location, text_type):
493 # Safe conversion is necessary here as we might redirect
494 # to a broken URI scheme (for instance itms-services).
495 from .urls import iri_to_uri
496
497 location = iri_to_uri(location, safe_conversion=True)
498 response = Response(
499 '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n'
500 "<title>Redirecting...</title>\n"
501 "<h1>Redirecting...</h1>\n"
502 "<p>You should be redirected automatically to target URL: "
503 '<a href="%s">%s</a>. If not click the link.'
504 % (escape(location), display_location),
505 code,
506 mimetype="text/html",
507 )
508 response.headers["Location"] = location
509 return response
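On the wire, the response built above is just a status line, a `Location` header, and a fallback HTML body for clients that ignore the header. A minimal hand-rolled WSGI equivalent (`redirect_app` is a local stand-in for illustration, not werkzeug's `redirect`):

```python
def redirect_app(location):
    """A stripped-down WSGI app mimicking the response redirect() builds."""

    def app(environ, start_response):
        html = '<p>Redirecting to <a href="%s">%s</a>.</p>' % (location, location)
        start_response(
            "302 FOUND",
            [("Location", location), ("Content-Type", "text/html; charset=utf-8")],
        )
        return [html.encode("utf-8")]

    return app


# Drive it with a dummy start_response to inspect the redirect headers.
captured = {}


def fake_start_response(status, headers):
    captured["status"] = status
    captured["headers"] = dict(headers)


body = redirect_app("https://example.com/")({}, fake_start_response)
print(captured["status"], captured["headers"]["Location"])
```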
510
511
512 def append_slash_redirect(environ, code=301):
513 """Redirects to the same URL but with a slash appended. The behavior
514 of this function is undefined if the path ends with a slash already.
515
516 :param environ: the WSGI environment for the request that triggers
517 the redirect.
518 :param code: the status code for the redirect.
519 """
520 new_path = environ["PATH_INFO"].strip("/") + "/"
521 query_string = environ.get("QUERY_STRING")
522 if query_string:
523 new_path += "?" + query_string
524 return redirect(new_path, code)
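The target computation above, isolated. Note the result is a relative URL (the leading slash is stripped as well), with the query string re-attached:

```python
def append_slash_target(environ):
    # Strip all surrounding slashes, then re-append exactly one.
    new_path = environ["PATH_INFO"].strip("/") + "/"
    query_string = environ.get("QUERY_STRING")
    if query_string:
        new_path += "?" + query_string
    return new_path


print(append_slash_target({"PATH_INFO": "/docs", "QUERY_STRING": "page=2"}))
# docs/?page=2
```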
525
526
527 def import_string(import_name, silent=False):
528 """Imports an object based on a string. This is useful if you want to
529 use import paths as endpoints or something similar. An import path can
530 be specified either in dotted notation (``xml.sax.saxutils.escape``)
531 or with a colon as object delimiter (``xml.sax.saxutils:escape``).
532
533 If `silent` is True the return value will be `None` if the import fails.
534
535 :param import_name: the dotted name for the object to import.
536 :param silent: if set to `True` import errors are ignored and
537 `None` is returned instead.
538 :return: imported object
539 """
540 # force the import name to automatically convert to strings
541 # __import__ is not able to handle unicode strings in the fromlist
542 # if the module is a package
543 import_name = str(import_name).replace(":", ".")
544 try:
545 try:
546 __import__(import_name)
547 except ImportError:
548 if "." not in import_name:
549 raise
550 else:
551 return sys.modules[import_name]
552
553 module_name, obj_name = import_name.rsplit(".", 1)
554 module = __import__(module_name, globals(), locals(), [obj_name])
555 try:
556 return getattr(module, obj_name)
557 except AttributeError as e:
558 raise ImportError(e)
559
560 except ImportError as e:
561 if not silent:
562 reraise(
563 ImportStringError, ImportStringError(import_name, e), sys.exc_info()[2]
564 )
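The same lookup can be sketched with `importlib`. This version assumes the name contains at least one dot or colon; the full function above additionally handles bare module names, silent mode, and detailed error reporting via `ImportStringError`.

```python
import importlib


def import_string(import_name):
    # Normalize "pkg.mod:obj" to "pkg.mod.obj", then split off the
    # attribute name and import the containing module.
    module_name, _, obj_name = import_name.replace(":", ".").rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, obj_name)


import os.path

print(import_string("os.path:join") is os.path.join)  # True
```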
565
566
567 def find_modules(import_path, include_packages=False, recursive=False):
568 """Finds all the modules below a package. This can be useful to
569 automatically import all views / controllers so that their metaclasses /
570 function decorators have a chance to register themselves on the
571 application.
572
573 Packages are not returned unless `include_packages` is `True`. This can
574 also recursively list modules but in that case it will import all the
575 packages to get the correct load path of that module.
576
577 :param import_path: the dotted name for the package to find child modules.
578 :param include_packages: set to `True` if packages should be returned, too.
579 :param recursive: set to `True` if recursion should happen.
580 :return: generator
581 """
582 module = import_string(import_path)
583 path = getattr(module, "__path__", None)
584 if path is None:
585 raise ValueError("%r is not a package" % import_path)
586 basename = module.__name__ + "."
587 for _importer, modname, ispkg in pkgutil.iter_modules(path):
588 modname = basename + modname
589 if ispkg:
590 if include_packages:
591 yield modname
592 if recursive:
593 for item in find_modules(modname, include_packages, True):
594 yield item
595 else:
596 yield modname
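`find_modules` is built on `pkgutil.iter_modules`; listing the stdlib `json` package directly shows the underlying call and the data it yields:

```python
import json
import pkgutil

# iter_modules yields (importer, name, ispkg) for each child of the package.
names = sorted(name for _, name, _ in pkgutil.iter_modules(json.__path__))
print(names)  # includes 'decoder', 'encoder', 'tool'
```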
597
598
599 def validate_arguments(func, args, kwargs, drop_extra=True):
600 """Checks if the function accepts the arguments and keyword arguments.
601 Returns a new ``(args, kwargs)`` tuple that can safely be passed to
602 the function without causing a `TypeError` because the function signature
603 is incompatible. If `drop_extra` is set to `True` (which is the default)
604 any extra positional or keyword arguments are dropped automatically.
605
606 The exception raised provides three attributes:
607
608     `missing`
609         A set of argument names that the function expected but were
610         missing.
611 
612     `extra`
613         A dict of keyword arguments that the function cannot handle but
614         were provided.
615 
616     `extra_positional`
617         A list of values that were given as positional arguments but the
618         function cannot accept.
619
620 This can be useful for decorators that forward user submitted data to
621 a view function::
622
623 from werkzeug.utils import ArgumentValidationError, validate_arguments
624
625 def sanitize(f):
626 def proxy(request):
627 data = request.values.to_dict()
628 try:
629 args, kwargs = validate_arguments(f, (request,), data)
630 except ArgumentValidationError:
631 raise BadRequest('The browser failed to transmit all '
632 'the data expected.')
633 return f(*args, **kwargs)
634 return proxy
635
636 :param func: the function the validation is performed against.
637 :param args: a tuple of positional arguments.
638 :param kwargs: a dict of keyword arguments.
639 :param drop_extra: set to `False` if you don't want extra arguments
640 to be silently dropped.
641 :return: tuple in the form ``(args, kwargs)``.
642 """
643 parser = _parse_signature(func)
644 args, kwargs, missing, extra, extra_positional = parser(args, kwargs)[:5]
645 if missing:
646 raise ArgumentValidationError(tuple(missing))
647 elif (extra or extra_positional) and not drop_extra:
648 raise ArgumentValidationError(None, extra, extra_positional)
649 return tuple(args), kwargs
650
651
652 def bind_arguments(func, args, kwargs):
653 """Bind the arguments provided into a dict. When passed a function,
654 a tuple of arguments and a dict of keyword arguments `bind_arguments`
655 returns a dict of names as the function would see it. This can be useful
656 to implement a cache decorator that uses the function arguments to build
657 the cache key based on the values of the arguments.
658
659 :param func: the function the arguments should be bound for.
660 :param args: tuple of positional arguments.
661 :param kwargs: a dict of keyword arguments.
662 :return: a :class:`dict` of bound keyword arguments.
663 """
664 (
665 args,
666 kwargs,
667 missing,
668 extra,
669 extra_positional,
670 arg_spec,
671 vararg_var,
672 kwarg_var,
673 ) = _parse_signature(func)(args, kwargs)
674 values = {}
675 for (name, _has_default, _default), value in zip(arg_spec, args):
676 values[name] = value
677 if vararg_var is not None:
678 values[vararg_var] = tuple(extra_positional)
679 elif extra_positional:
680 raise TypeError("too many positional arguments")
681 if kwarg_var is not None:
682 multikw = set(extra) & set([x[0] for x in arg_spec])
683 if multikw:
684 raise TypeError(
685 "got multiple values for keyword argument " + repr(next(iter(multikw)))
686 )
687 values[kwarg_var] = extra
688 elif extra:
689 raise TypeError("got unexpected keyword argument " + repr(next(iter(extra))))
690 return values
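The standard library's `inspect.signature` provides equivalent binding (this module predates it and uses its own internal signature parser):

```python
import inspect


def f(a, b, c=3, *args, **kwargs):
    pass


# bind() maps positional and keyword arguments onto parameter names the
# same way a real call would, raising TypeError on a mismatch.
bound = inspect.signature(f).bind(1, 2, 4, 5, x=6)
bound.apply_defaults()
print(dict(bound.arguments))
# {'a': 1, 'b': 2, 'c': 4, 'args': (5,), 'kwargs': {'x': 6}}
```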
691
692
693 class ArgumentValidationError(ValueError):
694
695 """Raised if :func:`validate_arguments` fails to validate"""
696
697 def __init__(self, missing=None, extra=None, extra_positional=None):
698 self.missing = set(missing or ())
699 self.extra = extra or {}
700 self.extra_positional = extra_positional or []
701 ValueError.__init__(
702 self,
703 "function arguments invalid. (%d missing, %d additional)"
704 % (len(self.missing), len(self.extra) + len(self.extra_positional)),
705 )
706
707
708 class ImportStringError(ImportError):
709 """Provides information about a failed :func:`import_string` attempt."""
710
711 #: String in dotted notation that failed to be imported.
712 import_name = None
713 #: Wrapped exception.
714 exception = None
715
716 def __init__(self, import_name, exception):
717 self.import_name = import_name
718 self.exception = exception
719
720 msg = (
721 "import_string() failed for %r. Possible reasons are:\n\n"
722 "- missing __init__.py in a package;\n"
723 "- package or module path not included in sys.path;\n"
724 "- duplicated package or module name taking precedence in "
725 "sys.path;\n"
726 "- missing module, class, function or variable;\n\n"
727 "Debugged import:\n\n%s\n\n"
728 "Original exception:\n\n%s: %s"
729 )
730
731 name = ""
732 tracked = []
733 for part in import_name.replace(":", ".").split("."):
734 name += (name and ".") + part
735 imported = import_string(name, silent=True)
736 if imported:
737 tracked.append((name, getattr(imported, "__file__", None)))
738 else:
739 track = ["- %r found in %r." % (n, i) for n, i in tracked]
740 track.append("- %r not found." % name)
741 msg = msg % (
742 import_name,
743 "\n".join(track),
744 exception.__class__.__name__,
745 str(exception),
746 )
747 break
748
749 ImportError.__init__(self, msg)
750
751 def __repr__(self):
752 return "<%s(%r, %r)>" % (
753 self.__class__.__name__,
754 self.import_name,
755 self.exception,
756 )
757
758
759 # DEPRECATED
760 from .datastructures import CombinedMultiDict as _CombinedMultiDict
761 from .datastructures import EnvironHeaders as _EnvironHeaders
762 from .datastructures import Headers as _Headers
763 from .datastructures import MultiDict as _MultiDict
764 from .http import dump_cookie as _dump_cookie
765 from .http import parse_cookie as _parse_cookie
766
767
768 class MultiDict(_MultiDict):
769 def __init__(self, *args, **kwargs):
770 warnings.warn(
771 "'werkzeug.utils.MultiDict' has moved to 'werkzeug"
772 ".datastructures.MultiDict' as of version 0.5. This old"
773 " import will be removed in version 1.0.",
774 DeprecationWarning,
775 stacklevel=2,
776 )
777 super(MultiDict, self).__init__(*args, **kwargs)
778
779
780 class CombinedMultiDict(_CombinedMultiDict):
781 def __init__(self, *args, **kwargs):
782 warnings.warn(
783 "'werkzeug.utils.CombinedMultiDict' has moved to 'werkzeug"
784 ".datastructures.CombinedMultiDict' as of version 0.5. This"
785 " old import will be removed in version 1.0.",
786 DeprecationWarning,
787 stacklevel=2,
788 )
789 super(CombinedMultiDict, self).__init__(*args, **kwargs)
790
791
792 class Headers(_Headers):
793 def __init__(self, *args, **kwargs):
794 warnings.warn(
795 "'werkzeug.utils.Headers' has moved to 'werkzeug"
796 ".datastructures.Headers' as of version 0.5. This old"
797 " import will be removed in version 1.0.",
798 DeprecationWarning,
799 stacklevel=2,
800 )
801 super(Headers, self).__init__(*args, **kwargs)
802
803
804 class EnvironHeaders(_EnvironHeaders):
805 def __init__(self, *args, **kwargs):
806 warnings.warn(
807 "'werkzeug.utils.EnvironHeaders' has moved to 'werkzeug"
808 ".datastructures.EnvironHeaders' as of version 0.5. This"
809 " old import will be removed in version 1.0.",
810 DeprecationWarning,
811 stacklevel=2,
812 )
813 super(EnvironHeaders, self).__init__(*args, **kwargs)
814
815
816 def parse_cookie(*args, **kwargs):
817 warnings.warn(
818         "'werkzeug.utils.parse_cookie' has moved to 'werkzeug.http"
819 ".parse_cookie' as of version 0.5. This old import will be"
820 " removed in version 1.0.",
821 DeprecationWarning,
822 stacklevel=2,
823 )
824 return _parse_cookie(*args, **kwargs)
825
826
827 def dump_cookie(*args, **kwargs):
828 warnings.warn(
829         "'werkzeug.utils.dump_cookie' has moved to 'werkzeug.http"
830 ".dump_cookie' as of version 0.5. This old import will be"
831 " removed in version 1.0.",
832 DeprecationWarning,
833 stacklevel=2,
834 )
835 return _dump_cookie(*args, **kwargs)
0 """
1 werkzeug.wrappers
2 ~~~~~~~~~~~~~~~~~
3
4 The wrappers are simple request and response objects which you can
5 subclass to do whatever you want them to do. The request object contains
6 the information transmitted by the client (web browser) and the response
7 object contains all the information sent back to the browser.
8
9 An important detail is that the request object is created with the WSGI
10 environ and will act as high-level proxy whereas the response object is an
11 actual WSGI application.
12
13 Like everything else in Werkzeug these objects will work correctly with
14 unicode data. Incoming form data parsed by the response object will be
15 unicode data. Incoming form data parsed by the request object will be
16 decoded into a unicode object if possible and if it makes sense.
17 :copyright: 2007 Pallets
18 :license: BSD-3-Clause
19 """
20 from .accept import AcceptMixin
21 from .auth import AuthorizationMixin
22 from .auth import WWWAuthenticateMixin
23 from .base_request import BaseRequest
24 from .base_response import BaseResponse
25 from .common_descriptors import CommonRequestDescriptorsMixin
26 from .common_descriptors import CommonResponseDescriptorsMixin
27 from .etag import ETagRequestMixin
28 from .etag import ETagResponseMixin
29 from .request import PlainRequest
30 from .request import Request
31 from .request import StreamOnlyMixin
32 from .response import Response
33 from .response import ResponseStream
34 from .response import ResponseStreamMixin
35 from .user_agent import UserAgentMixin
0 from ..datastructures import CharsetAccept
1 from ..datastructures import LanguageAccept
2 from ..datastructures import MIMEAccept
3 from ..http import parse_accept_header
4 from ..utils import cached_property
5
6
7 class AcceptMixin(object):
8 """A mixin for classes with an :attr:`~BaseResponse.environ` attribute
9 to get all the HTTP accept headers as
10 :class:`~werkzeug.datastructures.Accept` objects (or subclasses
11 thereof).
12 """
13
14 @cached_property
15 def accept_mimetypes(self):
16 """List of mimetypes this client supports as
17 :class:`~werkzeug.datastructures.MIMEAccept` object.
18 """
19 return parse_accept_header(self.environ.get("HTTP_ACCEPT"), MIMEAccept)
20
21 @cached_property
22 def accept_charsets(self):
23 """List of charsets this client supports as
24 :class:`~werkzeug.datastructures.CharsetAccept` object.
25 """
26 return parse_accept_header(
27 self.environ.get("HTTP_ACCEPT_CHARSET"), CharsetAccept
28 )
29
30 @cached_property
31 def accept_encodings(self):
32         """List of encodings this client accepts. Encodings in HTTP terms
33         are compression encodings such as gzip. For charsets have a look at
34         :attr:`accept_charsets`.
35 """
36 return parse_accept_header(self.environ.get("HTTP_ACCEPT_ENCODING"))
37
38 @cached_property
39 def accept_languages(self):
40 """List of languages this client accepts as
41 :class:`~werkzeug.datastructures.LanguageAccept` object.
42
43         .. versionchanged:: 0.5
44 In previous versions this was a regular
45 :class:`~werkzeug.datastructures.Accept` object.
46 """
47 return parse_accept_header(
48 self.environ.get("HTTP_ACCEPT_LANGUAGE"), LanguageAccept
49 )
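A minimal sketch of what `parse_accept_header` does with the raw environ value: split the comma-separated items, read their `q` parameters, and sort by quality. werkzeug's real parser additionally validates tokens and wraps the result in the requested `Accept` subclass.

```python
def parse_accept(value):
    items = []
    for item in value.split(","):
        # Each item is "type;param=value;..."; only the q parameter
        # matters for ordering.
        parts = item.strip().split(";")
        quality = 1.0
        for param in parts[1:]:
            key, _, val = param.strip().partition("=")
            if key == "q":
                quality = float(val)
        items.append((parts[0].strip(), quality))
    # Highest quality first; sorted() is stable for equal qualities.
    return sorted(items, key=lambda pair: -pair[1])


print(parse_accept("application/json;q=0.9, text/html"))
# [('text/html', 1.0), ('application/json', 0.9)]
```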
0 from ..http import parse_authorization_header
1 from ..http import parse_www_authenticate_header
2 from ..utils import cached_property
3
4
5 class AuthorizationMixin(object):
6 """Adds an :attr:`authorization` property that represents the parsed
7 value of the `Authorization` header as
8 :class:`~werkzeug.datastructures.Authorization` object.
9 """
10
11 @cached_property
12 def authorization(self):
13 """The `Authorization` object in parsed form."""
14 header = self.environ.get("HTTP_AUTHORIZATION")
15 return parse_authorization_header(header)
16
17
18 class WWWAuthenticateMixin(object):
19 """Adds a :attr:`www_authenticate` property to a response object."""
20
21 @property
22 def www_authenticate(self):
23 """The `WWW-Authenticate` header in a parsed form."""
24
25 def on_update(www_auth):
26 if not www_auth and "www-authenticate" in self.headers:
27 del self.headers["www-authenticate"]
28 elif www_auth:
29 self.headers["WWW-Authenticate"] = www_auth.to_header()
30
31 header = self.headers.get("www-authenticate")
32 return parse_www_authenticate_header(header, on_update)
0 import warnings
1 from functools import update_wrapper
2 from io import BytesIO
3
4 from .._compat import to_native
5 from .._compat import to_unicode
6 from .._compat import wsgi_decoding_dance
7 from .._compat import wsgi_get_bytes
8 from ..datastructures import CombinedMultiDict
9 from ..datastructures import EnvironHeaders
10 from ..datastructures import ImmutableList
11 from ..datastructures import ImmutableMultiDict
12 from ..datastructures import ImmutableTypeConversionDict
13 from ..datastructures import iter_multi_items
14 from ..datastructures import MultiDict
15 from ..formparser import default_stream_factory
16 from ..formparser import FormDataParser
17 from ..http import parse_cookie
18 from ..http import parse_options_header
19 from ..urls import url_decode
20 from ..utils import cached_property
21 from ..utils import environ_property
22 from ..wsgi import get_content_length
23 from ..wsgi import get_current_url
24 from ..wsgi import get_host
25 from ..wsgi import get_input_stream
26
27
28 class BaseRequest(object):
29 """Very basic request object. This does not implement advanced stuff like
30 entity tag parsing or cache controls. The request object is created with
31 the WSGI environment as first argument and will add itself to the WSGI
32 environment as ``'werkzeug.request'`` unless it's created with
33 `populate_request` set to False.
34
35 There are a couple of mixins available that add additional functionality
36 to the request object, there is also a class called `Request` which
37 subclasses `BaseRequest` and all the important mixins.
38
39 It's a good idea to create a custom subclass of the :class:`BaseRequest`
40 and add missing functionality either via mixins or direct implementation.
41 Here an example for such subclasses::
42
43 from werkzeug.wrappers import BaseRequest, ETagRequestMixin
44
45 class Request(BaseRequest, ETagRequestMixin):
46 pass
47
48 Request objects are **read only**. As of 0.5 modifications are not
49 allowed in any place. Unlike the lower level parsing functions the
50 request object will use immutable objects everywhere possible.
51
52     By default the request object will assume all the text data is `utf-8`
53 encoded. Please refer to :doc:`the unicode chapter </unicode>` for more
54 details about customizing the behavior.
55
56     By default the request object will be added to the WSGI
57 environment as `werkzeug.request` to support the debugging system.
58 If you don't want that, set `populate_request` to `False`.
59
60     If `shallow` is `True` the environment is initialized as a shallow
61 object around the environ. Every operation that would modify the
62 environ in any way (such as consuming form data) raises an exception
63 unless the `shallow` attribute is explicitly set to `False`. This
64 is useful for middlewares where you don't want to consume the form
65 data by accident. A shallow request is not populated to the WSGI
66 environment.
67
68 .. versionchanged:: 0.5
69         read-only mode was enforced by using immutable classes for all
70 data.
71 """
72
73 #: the charset for the request, defaults to utf-8
74 charset = "utf-8"
75
76 #: the error handling procedure for errors, defaults to 'replace'
77 encoding_errors = "replace"
78
79 #: the maximum content length. This is forwarded to the form data
80 #: parsing function (:func:`parse_form_data`). When set and the
81 #: :attr:`form` or :attr:`files` attribute is accessed and the
82 #: parsing fails because more than the specified value is transmitted
83 #: a :exc:`~werkzeug.exceptions.RequestEntityTooLarge` exception is raised.
84 #:
85 #: Have a look at :ref:`dealing-with-request-data` for more details.
86 #:
87 #: .. versionadded:: 0.5
88 max_content_length = None
89
90 #: the maximum form field size. This is forwarded to the form data
91 #: parsing function (:func:`parse_form_data`). When set and the
92 #: :attr:`form` or :attr:`files` attribute is accessed and the
93 #: data in memory for post data is longer than the specified value a
94 #: :exc:`~werkzeug.exceptions.RequestEntityTooLarge` exception is raised.
95 #:
96 #: Have a look at :ref:`dealing-with-request-data` for more details.
97 #:
98 #: .. versionadded:: 0.5
99 max_form_memory_size = None
100
101 #: the class to use for `args` and `form`. The default is an
102 #: :class:`~werkzeug.datastructures.ImmutableMultiDict` which supports
103 #: multiple values per key. alternatively it makes sense to use an
104 #: :class:`~werkzeug.datastructures.ImmutableOrderedMultiDict` which
105 #: preserves order or a :class:`~werkzeug.datastructures.ImmutableDict`
106 #: which is the fastest but only remembers the last key. It is also
107 #: possible to use mutable structures, but this is not recommended.
108 #:
109 #: .. versionadded:: 0.6
110 parameter_storage_class = ImmutableMultiDict
111
112 #: the type to be used for list values from the incoming WSGI environment.
113 #: By default an :class:`~werkzeug.datastructures.ImmutableList` is used
114 #: (for example for :attr:`access_list`).
115 #:
116 #: .. versionadded:: 0.6
117 list_storage_class = ImmutableList
118
119 #: the type to be used for dict values from the incoming WSGI environment.
120 #: By default an
121 #: :class:`~werkzeug.datastructures.ImmutableTypeConversionDict` is used
122 #: (for example for :attr:`cookies`).
123 #:
124 #: .. versionadded:: 0.6
125 dict_storage_class = ImmutableTypeConversionDict
126
127     #: The form data parser that should be used. Can be replaced to customize
128     #: the form data parsing.
129 form_data_parser_class = FormDataParser
130
131 #: Optionally a list of hosts that is trusted by this request. By default
132     #: all hosts are trusted, which means that whatever host the client
133     #: sends will be accepted.
134 #:
135 #: Because `Host` and `X-Forwarded-Host` headers can be set to any value by
136 #: a malicious client, it is recommended to either set this property or
137 #: implement similar validation in the proxy (if application is being run
138 #: behind one).
139 #:
140 #: .. versionadded:: 0.9
141 trusted_hosts = None
142
143 #: Indicates whether the data descriptor should be allowed to read and
144 #: buffer up the input stream. By default it's enabled.
145 #:
146 #: .. versionadded:: 0.9
147 disable_data_descriptor = False
148
149 def __init__(self, environ, populate_request=True, shallow=False):
150 self.environ = environ
151 if populate_request and not shallow:
152 self.environ["werkzeug.request"] = self
153 self.shallow = shallow
154
155 def __repr__(self):
156 # make sure the __repr__ even works if the request was created
157 # from an invalid WSGI environment. If we display the request
158 # in a debug session we don't want the repr to blow up.
159 args = []
160 try:
161 args.append("'%s'" % to_native(self.url, self.url_charset))
162 args.append("[%s]" % self.method)
163 except Exception:
164 args.append("(invalid WSGI environ)")
165
166 return "<%s %s>" % (self.__class__.__name__, " ".join(args))
167
168 @property
169 def url_charset(self):
170 """The charset that is assumed for URLs. Defaults to the value
171 of :attr:`charset`.
172
173 .. versionadded:: 0.6
174 """
175 return self.charset
176
177 @classmethod
178 def from_values(cls, *args, **kwargs):
179 """Create a new request object based on the values provided. If
180 environ is given missing values are filled from there. This method is
181         useful for small scripts when you need to simulate a request from a URL.
182         Do not use this method for unit testing, there is a full-featured client
183         object (:class:`Client`) that allows the creation of multipart requests
184         and supports cookies, among other things.
185
186 This accepts the same options as the
187 :class:`~werkzeug.test.EnvironBuilder`.
188
189 .. versionchanged:: 0.5
190 This method now accepts the same arguments as
191 :class:`~werkzeug.test.EnvironBuilder`. Because of this the
192 `environ` parameter is now called `environ_overrides`.
193
194 :return: request object
195 """
196 from ..test import EnvironBuilder
197
198 charset = kwargs.pop("charset", cls.charset)
199 kwargs["charset"] = charset
200 builder = EnvironBuilder(*args, **kwargs)
201 try:
202 return builder.get_request(cls)
203 finally:
204 builder.close()
205
206 @classmethod
207 def application(cls, f):
208 """Decorate a function as responder that accepts the request as first
209 argument. This works like the :func:`responder` decorator but the
210 function is passed the request object as first argument and the
211 request object will be closed automatically::
212
213 @Request.application
214 def my_wsgi_app(request):
215 return Response('Hello World!')
216
217 As of Werkzeug 0.14 HTTP exceptions are automatically caught and
218 converted to responses instead of failing.
219
220 :param f: the WSGI callable to decorate
221 :return: a new WSGI callable
222 """
223 #: return a callable that wraps the -2nd argument with the request
224 #: and calls the function with all the arguments up to that one and
225 #: the request. The return value is then called with the latest
226 #: two arguments. This makes it possible to use this decorator for
227 #: both methods and standalone WSGI functions.
228 from ..exceptions import HTTPException
229
230 def application(*args):
231 request = cls(args[-2])
232 with request:
233 try:
234 resp = f(*args[:-2] + (request,))
235 except HTTPException as e:
236 resp = e.get_response(args[-2])
237 return resp(*args[-2:])
238
239 return update_wrapper(application, f)
240
241 def _get_file_stream(
242 self, total_content_length, content_type, filename=None, content_length=None
243 ):
244 """Called to get a stream for the file upload.
245
246         This must return a file-like object with `read()`, `readline()`
247         and `seek()` methods that is both writeable and readable.
248
249 The default implementation returns a temporary file if the total
250 content length is higher than 500KB. Because many browsers do not
251 provide a content length for the files only the total content
252 length matters.
253
254 :param total_content_length: the total content length of all the
255 data in the request combined. This value
256 is guaranteed to be there.
257 :param content_type: the mimetype of the uploaded file.
258 :param filename: the filename of the uploaded file. May be `None`.
259 :param content_length: the length of this file. This value is usually
260 not provided because web browsers do not provide
261 this value.
262 """
263 return default_stream_factory(
264 total_content_length=total_content_length,
265 filename=filename,
266 content_type=content_type,
267 content_length=content_length,
268 )
269
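The threshold behavior described above can be illustrated with a small standalone sketch. This re-implements the 500KB rule in stdlib-only code and is not the actual `default_stream_factory`:

```python
from io import BytesIO
from tempfile import TemporaryFile

def stream_factory_sketch(total_content_length):
    # Spill to a real temporary file for large (or unknown-size) uploads,
    # keep small ones in memory -- mirrors the 500KB rule above.
    if total_content_length is None or total_content_length > 500 * 1024:
        return TemporaryFile("rb+")
    return BytesIO()

small = stream_factory_sketch(10 * 1024)        # in-memory buffer
large = stream_factory_sketch(2 * 1024 * 1024)  # real temp file
print(type(small).__name__)  # BytesIO
```

Both return values satisfy the read/readline/seek contract required by `_get_file_stream`.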
270 @property
271 def want_form_data_parsed(self):
272 """Returns True if the request method carries content. As of
273 Werkzeug 0.9 this will be the case if a content type is transmitted.
274
275 .. versionadded:: 0.8
276 """
277 return bool(self.environ.get("CONTENT_TYPE"))
278
279 def make_form_data_parser(self):
280 """Creates the form data parser. Instantiates the
281 :attr:`form_data_parser_class` with some parameters.
282
283 .. versionadded:: 0.8
284 """
285 return self.form_data_parser_class(
286 self._get_file_stream,
287 self.charset,
288 self.encoding_errors,
289 self.max_form_memory_size,
290 self.max_content_length,
291 self.parameter_storage_class,
292 )
293
294 def _load_form_data(self):
295 """Method used internally to retrieve submitted data. After calling
296 this sets `form` and `files` on the request object to multi dicts
297 filled with the incoming form data. As a matter of fact the input
298 stream will be empty afterwards. You can also call this method to
299 force the parsing of the form data.
300
301 .. versionadded:: 0.8
302 """
303 # abort early if we have already consumed the stream
304 if "form" in self.__dict__:
305 return
306
307 _assert_not_shallow(self)
308
309 if self.want_form_data_parsed:
310 content_type = self.environ.get("CONTENT_TYPE", "")
311 content_length = get_content_length(self.environ)
312 mimetype, options = parse_options_header(content_type)
313 parser = self.make_form_data_parser()
314 data = parser.parse(
315 self._get_stream_for_parsing(), mimetype, content_length, options
316 )
317 else:
318 data = (
319 self.stream,
320 self.parameter_storage_class(),
321 self.parameter_storage_class(),
322 )
323
324 # inject the values into the instance dict so that we bypass
325 # our cached_property non-data descriptor.
326 d = self.__dict__
327 d["stream"], d["form"], d["files"] = data
328
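The `__dict__` injection above works because `cached_property` is a non-data descriptor (it defines only `__get__`), so an instance attribute of the same name shadows it on lookup. A minimal sketch with a simplified `cached_property` (not the Werkzeug implementation):

```python
class cached_property(object):
    """Simplified non-data descriptor: defines only __get__."""
    def __init__(self, func):
        self.func = func
        self.__name__ = func.__name__
    def __get__(self, obj, type=None):
        if obj is None:
            return self
        # store the computed value on the instance; later lookups hit
        # the instance dict and never reach the descriptor again
        value = obj.__dict__[self.__name__] = self.func(obj)
        return value

class Demo(object):
    calls = 0
    @cached_property
    def stream(self):
        Demo.calls += 1
        return "computed"

d = Demo()
d.__dict__["stream"] = "injected"  # bypasses the descriptor entirely
print(d.stream, Demo.calls)        # injected 0
```

This is exactly why writing `d["stream"], d["form"], d["files"] = data` into the instance dict pre-empts the cached properties below.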
329 def _get_stream_for_parsing(self):
330 """This is the same as accessing :attr:`stream` with the difference
331 that if it finds cached data from calling :meth:`get_data` first it
332 will create a new stream out of the cached data.
333
334 .. versionadded:: 0.9.3
335 """
336 cached_data = getattr(self, "_cached_data", None)
337 if cached_data is not None:
338 return BytesIO(cached_data)
339 return self.stream
340
341 def close(self):
342 """Closes associated resources of this request object. This
343 closes all file handles explicitly. You can also use the request
344 object in a with statement which will automatically close it.
345
346 .. versionadded:: 0.9
347 """
348 files = self.__dict__.get("files")
349 for _key, value in iter_multi_items(files or ()):
350 value.close()
351
352 def __enter__(self):
353 return self
354
355 def __exit__(self, exc_type, exc_value, tb):
356 self.close()
357
358 @cached_property
359 def stream(self):
360 """
361 If the incoming form data was not encoded with a known mimetype
362 the data is stored unmodified in this stream for consumption. Most
363 of the time it is a better idea to use :attr:`data` which will give
364 you that data as a string. The stream only returns the data once.
365
366 Unlike :attr:`input_stream` this stream is properly guarded so that
367 you can't accidentally read past the length of the input. Werkzeug will
368 internally always refer to this stream to read data which makes it
369 possible to wrap this object with a stream that does filtering.
370
371 .. versionchanged:: 0.9
372 This stream is now always available but might be consumed by the
373 form parser later on. Previously the stream was only set if no
374 parsing happened.
375 """
376 _assert_not_shallow(self)
377 return get_input_stream(self.environ)
378
379 input_stream = environ_property(
380 "wsgi.input",
381 """The WSGI input stream.
382
383 In general it's a bad idea to use this one because you can
384 easily read past the boundary. Use :attr:`stream`
385 instead.""",
386 )
387
388 @cached_property
389 def args(self):
390 """The parsed URL parameters (the part in the URL after the question
391 mark).
392
393 By default an
394 :class:`~werkzeug.datastructures.ImmutableMultiDict`
395 is returned from this function. This can be changed by setting
396 :attr:`parameter_storage_class` to a different type. This might
397 be necessary if the order of the form data is important.
398 """
399 return url_decode(
400 wsgi_get_bytes(self.environ.get("QUERY_STRING", "")),
401 self.url_charset,
402 errors=self.encoding_errors,
403 cls=self.parameter_storage_class,
404 )
405
406 @cached_property
407 def data(self):
408 """
409 Contains the incoming request data as a string in case it came with
410 a mimetype Werkzeug does not handle.
411 """
412
413 if self.disable_data_descriptor:
414 raise AttributeError("data descriptor is disabled")
415 # XXX: this should eventually be deprecated.
416
417 # We trigger form data parsing first which means that the descriptor
418 # will not cache the data that would otherwise be .form or .files
419 # data. This restores the behavior that was there in Werkzeug
420 # before 0.9. New code should use :meth:`get_data` explicitly as
421 # this will make behavior explicit.
422 return self.get_data(parse_form_data=True)
423
424 def get_data(self, cache=True, as_text=False, parse_form_data=False):
425 """This reads the buffered incoming data from the client into one
426 bytestring. By default this is cached but that behavior can be
427 changed by setting `cache` to `False`.
428
429 Usually it's a bad idea to call this method without checking the
430 content length first as a client could send dozens of megabytes or more
431 to cause memory problems on the server.
432
433 Note that if the form data was already parsed this method will not
434 return anything as form data parsing does not cache the data like
435 this method does. To implicitly invoke the form data parsing
436 function, set `parse_form_data` to `True`. When this is done the
437 return value of this method will be an empty string if the form
438 parser handles the data. This generally is not necessary as if the
439 whole data is cached (which is the default) the form parser will
440 use the cached data to parse the form data. In any case, check the
441 content length before calling this method to avoid exhausting
442 server memory.
443
444 If `as_text` is set to `True` the return value will be a decoded
445 unicode string.
446
447 .. versionadded:: 0.9
448 """
449 rv = getattr(self, "_cached_data", None)
450 if rv is None:
451 if parse_form_data:
452 self._load_form_data()
453 rv = self.stream.read()
454 if cache:
455 self._cached_data = rv
456 if as_text:
457 rv = rv.decode(self.charset, self.encoding_errors)
458 return rv
459
460 @cached_property
461 def form(self):
462 """The form parameters. By default an
463 :class:`~werkzeug.datastructures.ImmutableMultiDict`
464 is returned from this function. This can be changed by setting
465 :attr:`parameter_storage_class` to a different type. This might
466 be necessary if the order of the form data is important.
467
468 Please keep in mind that file uploads will not end up here, but instead
469 in the :attr:`files` attribute.
470
471 .. versionchanged:: 0.9
472
473 Previous to Werkzeug 0.9 this would only contain form data for POST
474 and PUT requests.
475 """
476 self._load_form_data()
477 return self.form
478
479 @cached_property
480 def values(self):
481 """A :class:`werkzeug.datastructures.CombinedMultiDict` that combines
482 :attr:`args` and :attr:`form`."""
483 args = []
484 for d in self.args, self.form:
485 if not isinstance(d, MultiDict):
486 d = MultiDict(d)
487 args.append(d)
488 return CombinedMultiDict(args)
489
490 @cached_property
491 def files(self):
492 """:class:`~werkzeug.datastructures.MultiDict` object containing
493 all uploaded files. Each key in :attr:`files` is the name from the
494 ``<input type="file" name="">``. Each value in :attr:`files` is a
495 Werkzeug :class:`~werkzeug.datastructures.FileStorage` object.
496
497 It basically behaves like a standard file object you know from Python,
498 with the difference that it also has a
499 :meth:`~werkzeug.datastructures.FileStorage.save` function that can
500 store the file on the filesystem.
501
502 Note that :attr:`files` will only contain data if the request method was
503 POST, PUT or PATCH and the ``<form>`` that posted to the request had
504 ``enctype="multipart/form-data"``. It will be empty otherwise.
505
506 See the :class:`~werkzeug.datastructures.MultiDict` /
507 :class:`~werkzeug.datastructures.FileStorage` documentation for
508 more details about the used data structure.
509 """
510 self._load_form_data()
511 return self.files
512
513 @cached_property
514 def cookies(self):
515 """A :class:`dict` with the contents of all cookies transmitted with
516 the request."""
517 return parse_cookie(
518 self.environ,
519 self.charset,
520 self.encoding_errors,
521 cls=self.dict_storage_class,
522 )
523
524 @cached_property
525 def headers(self):
526 """The headers from the WSGI environ as immutable
527 :class:`~werkzeug.datastructures.EnvironHeaders`.
528 """
529 return EnvironHeaders(self.environ)
530
531 @cached_property
532 def path(self):
533 """Requested path as unicode. This works a bit like the regular path
534 info in the WSGI environment but will always include a leading slash,
535 even if the URL root is accessed.
536 """
537 raw_path = wsgi_decoding_dance(
538 self.environ.get("PATH_INFO") or "", self.charset, self.encoding_errors
539 )
540 return "/" + raw_path.lstrip("/")
541
542 @cached_property
543 def full_path(self):
544 """Requested path as unicode, including the query string."""
545 return self.path + u"?" + to_unicode(self.query_string, self.url_charset)
546
547 @cached_property
548 def script_root(self):
549 """The root path of the script without the trailing slash."""
550 raw_path = wsgi_decoding_dance(
551 self.environ.get("SCRIPT_NAME") or "", self.charset, self.encoding_errors
552 )
553 return raw_path.rstrip("/")
554
555 @cached_property
556 def url(self):
557 """The reconstructed current URL as IRI.
558 See also: :attr:`trusted_hosts`.
559 """
560 return get_current_url(self.environ, trusted_hosts=self.trusted_hosts)
561
562 @cached_property
563 def base_url(self):
564 """Like :attr:`url` but without the querystring
565 See also: :attr:`trusted_hosts`.
566 """
567 return get_current_url(
568 self.environ, strip_querystring=True, trusted_hosts=self.trusted_hosts
569 )
570
571 @cached_property
572 def url_root(self):
573 """The full URL root (with hostname), this is the application
574 root as IRI.
575 See also: :attr:`trusted_hosts`.
576 """
577 return get_current_url(self.environ, True, trusted_hosts=self.trusted_hosts)
578
579 @cached_property
580 def host_url(self):
581 """Just the host with scheme as IRI.
582 See also: :attr:`trusted_hosts`.
583 """
584 return get_current_url(
585 self.environ, host_only=True, trusted_hosts=self.trusted_hosts
586 )
587
588 @cached_property
589 def host(self):
590 """Just the host including the port if available.
591 See also: :attr:`trusted_hosts`.
592 """
593 return get_host(self.environ, trusted_hosts=self.trusted_hosts)
594
595 query_string = environ_property(
596 "QUERY_STRING",
597 "",
598 read_only=True,
599 load_func=wsgi_get_bytes,
600 doc="The URL parameters as raw bytestring.",
601 )
602 method = environ_property(
603 "REQUEST_METHOD",
604 "GET",
605 read_only=True,
606 load_func=lambda x: x.upper(),
607 doc="The request method. (For example ``'GET'`` or ``'POST'``).",
608 )
609
610 @cached_property
611 def access_route(self):
612 """If a forwarded header exists this is a list of all ip addresses
613 from the client ip to the last proxy server.
614 """
615 if "HTTP_X_FORWARDED_FOR" in self.environ:
616 addr = self.environ["HTTP_X_FORWARDED_FOR"].split(",")
617 return self.list_storage_class([x.strip() for x in addr])
618 elif "REMOTE_ADDR" in self.environ:
619 return self.list_storage_class([self.environ["REMOTE_ADDR"]])
620 return self.list_storage_class()
621
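The `X-Forwarded-For` handling above boils down to splitting the header on commas and stripping whitespace; a standalone sketch:

```python
# Each proxy appends the address it received the request from, so the
# resulting list runs from the original client to the last proxy.
environ = {"HTTP_X_FORWARDED_FOR": "203.0.113.5, 10.0.0.2 ,10.0.0.1"}
route = [x.strip() for x in environ["HTTP_X_FORWARDED_FOR"].split(",")]
print(route)  # ['203.0.113.5', '10.0.0.2', '10.0.0.1']
```

Note that the header is client-controlled, which is why code that trusts it should run behind a proxy fixer that strips untrusted entries.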
622 @property
623 def remote_addr(self):
624 """The remote address of the client."""
625 return self.environ.get("REMOTE_ADDR")
626
627 remote_user = environ_property(
628 "REMOTE_USER",
629 doc="""If the server supports user authentication, and the
630 script is protected, this attribute contains the username the
631 user has authenticated as.""",
632 )
633
634 scheme = environ_property(
635 "wsgi.url_scheme",
636 doc="""
637 URL scheme (http or https).
638
639 .. versionadded:: 0.7""",
640 )
641
642 @property
643 def is_xhr(self):
644 """True if the request was triggered via a JavaScript XMLHttpRequest.
645 This only works with libraries that support the ``X-Requested-With``
646 header and set it to "XMLHttpRequest". Libraries that do this
647 include Prototype, jQuery and MochiKit, among others.
648
649 .. deprecated:: 0.13
650 ``X-Requested-With`` is not standard and is unreliable. You
651 may be able to use :attr:`AcceptMixin.accept_mimetypes`
652 instead.
653 """
654 warnings.warn(
655 "'Request.is_xhr' is deprecated as of version 0.13 and will"
656 " be removed in version 1.0. The 'X-Requested-With' header"
657 " is not standard and is unreliable. You may be able to use"
658 " 'accept_mimetypes' instead.",
659 DeprecationWarning,
660 stacklevel=2,
661 )
662 return self.environ.get("HTTP_X_REQUESTED_WITH", "").lower() == "xmlhttprequest"
663
664 is_secure = property(
665 lambda self: self.environ["wsgi.url_scheme"] == "https",
666 doc="`True` if the request is secure.",
667 )
668 is_multithread = environ_property(
669 "wsgi.multithread",
670 doc="""boolean that is `True` if the application is served by a
671 multithreaded WSGI server.""",
672 )
673 is_multiprocess = environ_property(
674 "wsgi.multiprocess",
675 doc="""boolean that is `True` if the application is served by a
676 WSGI server that spawns multiple processes.""",
677 )
678 is_run_once = environ_property(
679 "wsgi.run_once",
680 doc="""boolean that is `True` if the application will be
681 executed only once in a process lifetime. This is the case for
682 CGI for example, but it's not guaranteed that the execution only
683 happens one time.""",
684 )
685
686
687 def _assert_not_shallow(request):
688 if request.shallow:
689 raise RuntimeError(
690 "A shallow request tried to consume form data. If you really"
691 " want to do that, set `shallow` to False."
692 )
0 import warnings
1
2 from .._compat import integer_types
3 from .._compat import string_types
4 from .._compat import text_type
5 from .._compat import to_bytes
6 from .._compat import to_native
7 from ..datastructures import Headers
8 from ..http import dump_cookie
9 from ..http import HTTP_STATUS_CODES
10 from ..http import remove_entity_headers
11 from ..urls import iri_to_uri
12 from ..urls import url_join
13 from ..utils import get_content_type
14 from ..wsgi import ClosingIterator
15 from ..wsgi import get_current_url
16
17
18 def _run_wsgi_app(*args):
19 """This function replaces itself to ensure that the test module is not
20 imported unless required. DO NOT USE!
21 """
22 global _run_wsgi_app
23 from ..test import run_wsgi_app as _run_wsgi_app
24
25 return _run_wsgi_app(*args)
26
27
28 def _warn_if_string(iterable):
29 """Helper for the response objects to check if the iterable returned
30 to the WSGI server is not a string.
31 """
32 if isinstance(iterable, string_types):
33 warnings.warn(
34 "Response iterable was set to a string. This will appear to"
35 " work but means that the server will send the data to the"
36 " client one character at a time. This is almost never"
37 " intended behavior, use 'response.data' to assign strings"
38 " to the response object.",
39 stacklevel=2,
40 )
41
42
43 def _iter_encoded(iterable, charset):
44 for item in iterable:
45 if isinstance(item, text_type):
46 yield item.encode(charset)
47 else:
48 yield item
49
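`_iter_encoded` passes bytes through untouched and encodes text items on the fly; under Python 3, where `text_type` is `str`, it behaves like this sketch:

```python
def iter_encoded(iterable, charset):
    # encode text items lazily, pass bytes through unchanged
    for item in iterable:
        if isinstance(item, str):  # text_type under Python 3
            yield item.encode(charset)
        else:
            yield item

out = list(iter_encoded([u"h\xe9llo", b"raw"], "utf-8"))
print(out)  # [b'h\xc3\xa9llo', b'raw']
```

Because it is a generator, a streamed response is encoded chunk by chunk rather than buffered.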
50
51 def _clean_accept_ranges(accept_ranges):
52 if accept_ranges is True:
53 return "bytes"
54 elif accept_ranges is False:
55 return "none"
56 elif isinstance(accept_ranges, text_type):
57 return to_native(accept_ranges)
58 raise ValueError("Invalid accept_ranges value")
59
60
61 class BaseResponse(object):
62 """Base response class. The most important fact about a response object
63 is that it's a regular WSGI application. It's initialized with a couple
64 of response parameters (headers, body, status code etc.) and will start a
65 valid WSGI response when called with the environ and start response
66 callable.
67
68 Because it's a WSGI application itself processing usually ends before the
69 actual response is sent to the server. This helps debugging systems
70 because they can catch all the exceptions before responses are started.
71
72 Here a small example WSGI application that takes advantage of the
73 response objects::
74
75 from werkzeug.wrappers import BaseResponse as Response
76
77 def index():
78 return Response('Index page')
79
80 def application(environ, start_response):
81 path = environ.get('PATH_INFO') or '/'
82 if path == '/':
83 response = index()
84 else:
85 response = Response('Not Found', status=404)
86 return response(environ, start_response)
87
88 Like :class:`BaseRequest`, this object is missing a lot of functionality
89 that is implemented in mixins. This gives you better control over the
90 actual API of your response objects, so you can create subclasses and add
91 custom functionality. A full featured response object is available as
92 :class:`Response` which implements a couple of useful mixins.
93
94 To enforce a new type of already existing responses you can use the
95 :meth:`force_type` method. This is useful if you're working with different
96 subclasses of response objects and you want to post process them with a
97 known interface.
98
99 By default the response object will assume all the text data is `utf-8`
100 encoded. Please refer to :doc:`the unicode chapter </unicode>` for more
101 details about customizing the behavior.
102
103 Response can be any kind of iterable or string. If it's a string it's
104 considered to be an iterable with one item, the passed string.
105 Headers can be a list of tuples or a
106 :class:`~werkzeug.datastructures.Headers` object.
107
108 Special note for `mimetype` and `content_type`: For most mime types
109 `mimetype` and `content_type` work the same, the difference affects
110 only 'text' mimetypes. If the mimetype passed with `mimetype` is a
111 mimetype starting with `text/`, the charset parameter of the response
112 object is appended to it. In contrast the `content_type` parameter is
113 always added as header unmodified.
114
115 .. versionchanged:: 0.5
116 the `direct_passthrough` parameter was added.
117
118 :param response: a string or response iterable.
119 :param status: a string with a status or an integer with the status code.
120 :param headers: a list of headers or a
121 :class:`~werkzeug.datastructures.Headers` object.
122 :param mimetype: the mimetype for the response. See notice above.
123 :param content_type: the content type for the response. See notice above.
124 :param direct_passthrough: if set to `True` :meth:`iter_encoded` is not
125 called before iteration which makes it
126 possible to pass special iterators through
127 unchanged (see :func:`wrap_file` for more
128 details.)
129 """
130
131 #: the charset of the response.
132 charset = "utf-8"
133
134 #: the default status if none is provided.
135 default_status = 200
136
137 #: the default mimetype if none is provided.
138 default_mimetype = "text/plain"
139
140 #: if set to `False` accessing properties on the response object will
141 #: not try to consume the response iterator and convert it into a list.
142 #:
143 #: .. versionadded:: 0.6.2
144 #:
145 #: That attribute was previously called `implicit_seqence_conversion`.
146 #: (Notice the typo). If you did use this feature, you have to adapt
147 #: your code to the name change.
148 implicit_sequence_conversion = True
149
150 #: Should this response object correct the location header to be RFC
151 #: conformant? This is true by default.
152 #:
153 #: .. versionadded:: 0.8
154 autocorrect_location_header = True
155
156 #: Should this response object automatically set the content-length
157 #: header if possible? This is true by default.
158 #:
159 #: .. versionadded:: 0.8
160 automatically_set_content_length = True
161
162 #: Warn if a cookie header exceeds this size. The default, 4093, should be
163 #: safely `supported by most browsers <cookie_>`_. A cookie larger than
164 #: this size will still be sent, but it may be ignored or handled
165 #: incorrectly by some browsers. Set to 0 to disable this check.
166 #:
167 #: .. versionadded:: 0.13
168 #:
169 #: .. _`cookie`: http://browsercookielimits.squawky.net/
170 max_cookie_size = 4093
171
172 def __init__(
173 self,
174 response=None,
175 status=None,
176 headers=None,
177 mimetype=None,
178 content_type=None,
179 direct_passthrough=False,
180 ):
181 if isinstance(headers, Headers):
182 self.headers = headers
183 elif not headers:
184 self.headers = Headers()
185 else:
186 self.headers = Headers(headers)
187
188 if content_type is None:
189 if mimetype is None and "content-type" not in self.headers:
190 mimetype = self.default_mimetype
191 if mimetype is not None:
192 mimetype = get_content_type(mimetype, self.charset)
193 content_type = mimetype
194 if content_type is not None:
195 self.headers["Content-Type"] = content_type
196 if status is None:
197 status = self.default_status
198 if isinstance(status, integer_types):
199 self.status_code = status
200 else:
201 self.status = status
202
203 self.direct_passthrough = direct_passthrough
204 self._on_close = []
205
206 # we set the response after the headers so that if a class changes
207 # the charset attribute, the data is set in the correct charset.
208 if response is None:
209 self.response = []
210 elif isinstance(response, (text_type, bytes, bytearray)):
211 self.set_data(response)
212 else:
213 self.response = response
214
215 def call_on_close(self, func):
216 """Adds a function to the internal list of functions that should
217 be called as part of closing down the response. Since 0.7 this
218 function also returns the function that was passed so that this
219 can be used as a decorator.
220
221 .. versionadded:: 0.6
222 """
223 self._on_close.append(func)
224 return func
225
226 def __repr__(self):
227 if self.is_sequence:
228 body_info = "%d bytes" % sum(map(len, self.iter_encoded()))
229 else:
230 body_info = "streamed" if self.is_streamed else "likely-streamed"
231 return "<%s %s [%s]>" % (self.__class__.__name__, body_info, self.status)
232
233 @classmethod
234 def force_type(cls, response, environ=None):
235 """Enforce that the WSGI response is a response object of the current
236 type. Werkzeug will use the :class:`BaseResponse` internally in many
237 situations like the exceptions. If you call :meth:`get_response` on an
238 exception you will get back a regular :class:`BaseResponse` object, even
239 if you are using a custom subclass.
240
241 This method can enforce a given response type, and it will also
242 convert arbitrary WSGI callables into response objects if an environ
243 is provided::
244
245 # convert a Werkzeug response object into an instance of the
246 # MyResponseClass subclass.
247 response = MyResponseClass.force_type(response)
248
249 # convert any WSGI application into a response object
250 response = MyResponseClass.force_type(response, environ)
251
252 This is especially useful if you want to post-process responses in
253 the main dispatcher and use functionality provided by your subclass.
254
255 Keep in mind that this will modify response objects in place if
256 possible!
257
258 :param response: a response object or wsgi application.
259 :param environ: a WSGI environment object.
260 :return: a response object.
261 """
262 if not isinstance(response, BaseResponse):
263 if environ is None:
264 raise TypeError(
265 "cannot convert WSGI application into response"
266 " objects without an environ"
267 )
268 response = BaseResponse(*_run_wsgi_app(response, environ))
269 response.__class__ = cls
270 return response
271
272 @classmethod
273 def from_app(cls, app, environ, buffered=False):
274 """Create a new response object from an application output. This
275 works best if you pass it an application that returns a generator all
276 the time. Sometimes applications may use the `write()` callable
277 returned by the `start_response` function. This tries to resolve such
278 edge cases automatically. But if you don't get the expected output
279 you should set `buffered` to `True` which enforces buffering.
280
281 :param app: the WSGI application to execute.
282 :param environ: the WSGI environment to execute against.
283 :param buffered: set to `True` to enforce buffering.
284 :return: a response object.
285 """
286 return cls(*_run_wsgi_app(app, environ, buffered))
287
288 def _get_status_code(self):
289 return self._status_code
290
291 def _set_status_code(self, code):
292 self._status_code = code
293 try:
294 self._status = "%d %s" % (code, HTTP_STATUS_CODES[code].upper())
295 except KeyError:
296 self._status = "%d UNKNOWN" % code
297
298 status_code = property(
299 _get_status_code, _set_status_code, doc="The HTTP Status code as number"
300 )
301 del _get_status_code, _set_status_code
302
303 def _get_status(self):
304 return self._status
305
306 def _set_status(self, value):
307 try:
308 self._status = to_native(value)
309 except AttributeError:
310 raise TypeError("Invalid status argument")
311
312 try:
313 self._status_code = int(self._status.split(None, 1)[0])
314 except ValueError:
315 self._status_code = 0
316 self._status = "0 %s" % self._status
317 except IndexError:
318 raise ValueError("Empty status argument")
319
320 status = property(_get_status, _set_status, doc="The HTTP Status code")
321 del _get_status, _set_status
322
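The coupling between `status` and `status_code` above can be sketched as one standalone function (a simplified stand-in using a tiny status table, not the real property pair):

```python
HTTP_STATUS_CODES = {200: "OK", 404: "Not Found"}  # tiny subset for the demo

def normalize_status(status):
    """Return (status_code, status_string) the way the properties above do."""
    if isinstance(status, int):
        reason = HTTP_STATUS_CODES.get(status)
        text = "%d %s" % (status, reason.upper()) if reason else "%d UNKNOWN" % status
        return status, text
    status = str(status)
    try:
        return int(status.split(None, 1)[0]), status
    except ValueError:
        # non-numeric first token: the code falls back to 0
        return 0, "0 %s" % status

print(normalize_status(404))                # (404, '404 NOT FOUND')
print(normalize_status("418 I'M A TEAPOT"))
print(normalize_status("teapot"))           # (0, '0 teapot')
```

Setting either side keeps the other consistent: an integer is expanded to a reason phrase, and a string has its leading token parsed back into the numeric code.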
323 def get_data(self, as_text=False):
324 """The string representation of the request body. Whenever you call
325 this property the request iterable is encoded and flattened. This
326 can lead to unwanted behavior if you stream big data.
327
328 This behavior can be disabled by setting
329 :attr:`implicit_sequence_conversion` to `False`.
330
331 If `as_text` is set to `True` the return value will be a decoded
332 unicode string.
333
334 .. versionadded:: 0.9
335 """
336 self._ensure_sequence()
337 rv = b"".join(self.iter_encoded())
338 if as_text:
339 rv = rv.decode(self.charset)
340 return rv
341
342 def set_data(self, value):
343 """Sets a new string as response. The value set must either by a
344 unicode or bytestring. If a unicode string is set it's encoded
345 automatically to the charset of the response (utf-8 by default).
346
347 .. versionadded:: 0.9
348 """
349 # if a unicode string is set, it's encoded directly so that we
350 # can set the content length
351 if isinstance(value, text_type):
352 value = value.encode(self.charset)
353 else:
354 value = bytes(value)
355 self.response = [value]
356 if self.automatically_set_content_length:
357 self.headers["Content-Length"] = str(len(value))
358
359 data = property(
360 get_data,
361 set_data,
362 doc="A descriptor that calls :meth:`get_data` and :meth:`set_data`.",
363 )
364
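`set_data` encodes text first so that the `Content-Length` header reflects the byte length, not the character count; a standalone sketch of that logic (the function name is hypothetical):

```python
def set_data_sketch(value, charset="utf-8"):
    # text is encoded up front so the byte length is known
    if isinstance(value, str):
        value = value.encode(charset)
    else:
        value = bytes(value)
    return [value], {"Content-Length": str(len(value))}

body, headers = set_data_sketch(u"gr\xfc\xdfe")  # 5 characters, 7 UTF-8 bytes
print(headers["Content-Length"])  # 7
```

This is why assigning `response.data` a non-ASCII string produces a `Content-Length` larger than `len()` of the original text.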
365 def calculate_content_length(self):
366 """Returns the content length if available or `None` otherwise."""
367 try:
368 self._ensure_sequence()
369 except RuntimeError:
370 return None
371 return sum(len(x) for x in self.iter_encoded())
372
373 def _ensure_sequence(self, mutable=False):
374 """This method can be called by methods that need a sequence. If
375 `mutable` is true, it will also ensure that the response sequence
376 is a standard Python list.
377
378 .. versionadded:: 0.6
379 """
380 if self.is_sequence:
381 # if we need a mutable object, we ensure it's a list.
382 if mutable and not isinstance(self.response, list):
383 self.response = list(self.response)
384 return
385 if self.direct_passthrough:
386 raise RuntimeError(
387 "Attempted implicit sequence conversion but the"
388 " response object is in direct passthrough mode."
389 )
390 if not self.implicit_sequence_conversion:
391 raise RuntimeError(
392 "The response object required the iterable to be a"
393 " sequence, but the implicit conversion was disabled."
394 " Call make_sequence() yourself."
395 )
396 self.make_sequence()
397
398 def make_sequence(self):
399 """Converts the response iterator in a list. By default this happens
400 automatically if required. If `implicit_sequence_conversion` is
401 disabled, this method is not automatically called and some properties
402 might raise exceptions. This also encodes all the items.
403
404 .. versionadded:: 0.6
405 """
406 if not self.is_sequence:
407 # if we consume an iterable we have to ensure that the close
408 # method of the iterable is called if available when we tear
409 # down the response
410 close = getattr(self.response, "close", None)
411 self.response = list(self.iter_encoded())
412 if close is not None:
413 self.call_on_close(close)
414
415 def iter_encoded(self):
416 """Iter the response encoded with the encoding of the response.
417 If the response object is invoked as WSGI application the return
418 value of this method is used as application iterator unless
419 :attr:`direct_passthrough` was activated.
420 """
421 if __debug__:
422 _warn_if_string(self.response)
423 # Encode in a separate function so that self.response is fetched
424 # early. This allows us to wrap the response with the return
425 # value from get_app_iter or iter_encoded.
426 return _iter_encoded(self.response, self.charset)
427
428 def set_cookie(
429 self,
430 key,
431 value="",
432 max_age=None,
433 expires=None,
434 path="/",
435 domain=None,
436 secure=False,
437 httponly=False,
438 samesite=None,
439 ):
440 """Sets a cookie. The parameters are the same as in the cookie `Morsel`
441 object in the Python standard library but it accepts unicode data, too.
442
443 A warning is raised if the size of the cookie header exceeds
444 :attr:`max_cookie_size`, but the header will still be set.
445
446 :param key: the key (name) of the cookie to be set.
447 :param value: the value of the cookie.
448 :param max_age: should be a number of seconds, or `None` (default) if
449 the cookie should last only as long as the client's
450 browser session.
451 :param expires: should be a `datetime` object or UNIX timestamp.
452 :param path: limits the cookie to a given path, per default it will
453 span the whole domain.
454 :param domain: if you want to set a cross-domain cookie. For example,
455 ``domain=".example.com"`` will set a cookie that is
456 readable by the domain ``www.example.com``,
457 ``foo.example.com`` etc. Otherwise, a cookie will only
458 be readable by the domain that set it.
459 :param secure: If `True`, the cookie will only be available via HTTPS
460 :param httponly: disallow JavaScript access to the cookie. This is an
461 extension to the cookie standard and probably not
462 supported by all browsers.
463 :param samesite: Limits the scope of the cookie such that it will only
464 be attached to requests if those requests are
465 "same-site".
466 """
467 self.headers.add(
468 "Set-Cookie",
469 dump_cookie(
470 key,
471 value=value,
472 max_age=max_age,
473 expires=expires,
474 path=path,
475 domain=domain,
476 secure=secure,
477 httponly=httponly,
478 charset=self.charset,
479 max_size=self.max_cookie_size,
480 samesite=samesite,
481 ),
482 )
483
484 def delete_cookie(self, key, path="/", domain=None):
485 """Delete a cookie. Fails silently if key doesn't exist.
486
487 :param key: the key (name) of the cookie to be deleted.
488 :param path: if the cookie that should be deleted was limited to a
489 path, the path has to be defined here.
490 :param domain: if the cookie that should be deleted was limited to a
491 domain, that domain has to be defined here.
492 """
493 self.set_cookie(key, expires=0, max_age=0, path=path, domain=domain)
494
495 @property
496 def is_streamed(self):
497 """If the response is streamed (the response is not an iterable with
498 length information) this property is `True`. In this case streamed
499 means that there is no information about the number of iterations.
500 This is usually `True` if a generator is passed to the response object.
501
502 This is useful for checking before applying some sort of
503 post-filtering that should not take place for streamed responses.
504 """
505 try:
506 len(self.response)
507 except (TypeError, AttributeError):
508 return True
509 return False
510
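The `len()` probe used by `is_streamed` can be demonstrated standalone; this is a minimal sketch of the same check, not Werkzeug's API:

```python
def is_streamed(response):
    # A response is "streamed" if its iterable carries no length
    # information, which is typically the case for generators.
    try:
        len(response)
    except (TypeError, AttributeError):
        return True
    return False
```

Lists and tuples have a length and are therefore buffered; generators and plain iterators do not, so they count as streamed.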
511 @property
512 def is_sequence(self):
513 """If the iterator is buffered, this property will be `True`. A
514 response object will consider an iterator to be buffered if the
515 response attribute is a list or tuple.
516
517 .. versionadded:: 0.6
518 """
519 return isinstance(self.response, (tuple, list))
520
521 def close(self):
522 """Close the wrapped response if possible. You can also use the object
523 in a with statement which will automatically close it.
524
525 .. versionadded:: 0.9
526 Can now be used in a with statement.
527 """
528 if hasattr(self.response, "close"):
529 self.response.close()
530 for func in self._on_close:
531 func()
532
533 def __enter__(self):
534 return self
535
536 def __exit__(self, exc_type, exc_value, tb):
537 self.close()
538
539 def freeze(self):
540 """Call this method if you want to make your response object ready for
541 pickling. This buffers the generator if there is one. It will
542 also set the `Content-Length` header to the length of the body.
543
544 .. versionchanged:: 0.6
545 The `Content-Length` header is now set.
546 """
547 # explicitly convert the response to a list of the *encoded*
548 # chunks, even if the implicit sequence conversion is disabled.
549 self.response = list(self.iter_encoded())
550 self.headers["Content-Length"] = str(sum(map(len, self.response)))
551
552 def get_wsgi_headers(self, environ):
553 """This is automatically called right before the response is started
554 and returns headers modified for the given environment. It returns a
555 copy of the headers from the response with some modifications applied
556 if necessary.
557
558 For example the location header (if present) is joined with the root
559 URL of the environment. Also the content length is automatically set
560 to zero here for certain status codes.
561
562 .. versionchanged:: 0.6
563 Previously that function was called `fix_headers` and modified
564 the response object in place. Also since 0.6, IRIs in location
565 and content-location headers are handled properly.
566
567 Also starting with 0.6, Werkzeug will attempt to set the content
568 length if it is able to figure it out on its own. This is the
569 case if all the strings in the response iterable are already
570 encoded and the iterable is buffered.
571
572 :param environ: the WSGI environment of the request.
573 :return: returns a new :class:`~werkzeug.datastructures.Headers`
574 object.
575 """
576 headers = Headers(self.headers)
577 location = None
578 content_location = None
579 content_length = None
580 status = self.status_code
581
582 # iterate over the headers to find all values in one go. Because
583 # get_wsgi_headers is called for each response, this gives us a
584 # tiny speedup.
585 for key, value in headers:
586 ikey = key.lower()
587 if ikey == u"location":
588 location = value
589 elif ikey == u"content-location":
590 content_location = value
591 elif ikey == u"content-length":
592 content_length = value
593
594 # make sure the location header is an absolute URL
595 if location is not None:
596 old_location = location
597 if isinstance(location, text_type):
598 # Safe conversion is necessary here as we might redirect
599 # to a broken URI scheme (for instance itms-services).
600 location = iri_to_uri(location, safe_conversion=True)
601
602 if self.autocorrect_location_header:
603 current_url = get_current_url(environ, strip_querystring=True)
604 if isinstance(current_url, text_type):
605 current_url = iri_to_uri(current_url)
606 location = url_join(current_url, location)
607 if location != old_location:
608 headers["Location"] = location
609
610 # make sure the content location is a URL
611 if content_location is not None and isinstance(content_location, text_type):
612 headers["Content-Location"] = iri_to_uri(content_location)
613
614 if 100 <= status < 200 or status == 204:
615 # Per section 3.3.2 of RFC 7230, "a server MUST NOT send a
616 # Content-Length header field in any response with a status
617 # code of 1xx (Informational) or 204 (No Content)."
618 headers.remove("Content-Length")
619 elif status == 304:
620 remove_entity_headers(headers)
621
622 # if we can determine the content length automatically, we
623 # should try to do that. But only if this does not involve
624 # flattening the iterator or encoding of unicode strings in
625 # the response. However, we should not do that if we have a 304
626 # response.
627 if (
628 self.automatically_set_content_length
629 and self.is_sequence
630 and content_length is None
631 and status not in (204, 304)
632 and not (100 <= status < 200)
633 ):
634 try:
635 content_length = sum(len(to_bytes(x, "ascii")) for x in self.response)
636 except UnicodeError:
637 # aha, something non-bytestringy in there, too bad, we
638 # can't safely figure out the length of the response.
639 pass
640 else:
641 headers["Content-Length"] = str(content_length)
642
643 return headers
644
645 def get_app_iter(self, environ):
646 """Returns the application iterator for the given environ. Depending
647 on the request method and the current status code the return value
648 might be an empty iterable rather than the response body.
649
650 If the request method is `HEAD` or the status code is in a range
651 where the HTTP specification requires an empty response, an empty
652 iterable is returned.
653
654 .. versionadded:: 0.6
655
656 :param environ: the WSGI environment of the request.
657 :return: a response iterable.
658 """
659 status = self.status_code
660 if (
661 environ["REQUEST_METHOD"] == "HEAD"
662 or 100 <= status < 200
663 or status in (204, 304)
664 ):
665 iterable = ()
666 elif self.direct_passthrough:
667 if __debug__:
668 _warn_if_string(self.response)
669 return self.response
670 else:
671 iterable = self.iter_encoded()
672 return ClosingIterator(iterable, self.close)
673
674 def get_wsgi_response(self, environ):
675 """Returns the final WSGI response as tuple. The first item in
676 the tuple is the application iterator, the second the status and
677 the third the list of headers. The response returned is created
678 specially for the given environment. For example if the request
679 method in the WSGI environment is ``'HEAD'`` the response will
680 be empty and only the headers and status code will be present.
681
682 .. versionadded:: 0.6
683
684 :param environ: the WSGI environment of the request.
685 :return: an ``(app_iter, status, headers)`` tuple.
686 """
687 headers = self.get_wsgi_headers(environ)
688 app_iter = self.get_app_iter(environ)
689 return app_iter, self.status, headers.to_wsgi_list()
690
691 def __call__(self, environ, start_response):
692 """Process this response as WSGI application.
693
694 :param environ: the WSGI environment.
695 :param start_response: the response callable provided by the WSGI
696 server.
697 :return: an application iterator
698 """
699 app_iter, status, headers = self.get_wsgi_response(environ)
700 start_response(status, headers)
701 return app_iter
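The WSGI calling convention used in `__call__` can be exercised with a stub `start_response`. This is a hypothetical, minimal response object following the same contract, not Werkzeug's class:

```python
class TinyResponse:
    """A hypothetical minimal object following the WSGI callable contract."""

    def __init__(self, body, status="200 OK"):
        self.body = body
        self.status = status
        self.headers = [("Content-Length", str(len(body)))]

    def __call__(self, environ, start_response):
        # Hand status and headers to the server, return the body iterable.
        start_response(self.status, self.headers)
        return [self.body]

captured = {}

def start_response(status, headers):
    captured["status"] = status
    captured["headers"] = headers

app_iter = TinyResponse(b"hello")({"REQUEST_METHOD": "GET"}, start_response)
```

A real server would iterate `app_iter` and write each chunk to the socket after sending the status line and headers.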
0 from datetime import datetime
1 from datetime import timedelta
2
3 from .._compat import string_types
4 from ..datastructures import CallbackDict
5 from ..http import dump_age
6 from ..http import dump_header
7 from ..http import dump_options_header
8 from ..http import http_date
9 from ..http import parse_age
10 from ..http import parse_date
11 from ..http import parse_options_header
12 from ..http import parse_set_header
13 from ..utils import cached_property
14 from ..utils import environ_property
15 from ..utils import get_content_type
16 from ..utils import header_property
17 from ..wsgi import get_content_length
18
19
20 class CommonRequestDescriptorsMixin(object):
21 """A mixin for :class:`BaseRequest` subclasses. Request objects that
22 mix this class in will automatically get descriptors for a couple of
23 HTTP headers with automatic type conversion.
24
25 .. versionadded:: 0.5
26 """
27
28 content_type = environ_property(
29 "CONTENT_TYPE",
30 doc="""The Content-Type entity-header field indicates the media
31 type of the entity-body sent to the recipient or, in the case of
32 the HEAD method, the media type that would have been sent had
33 the request been a GET.""",
34 )
35
36 @cached_property
37 def content_length(self):
38 """The Content-Length entity-header field indicates the size of the
39 entity-body in bytes or, in the case of the HEAD method, the size of
40 the entity-body that would have been sent had the request been a
41 GET.
42 """
43 return get_content_length(self.environ)
44
45 content_encoding = environ_property(
46 "HTTP_CONTENT_ENCODING",
47 doc="""The Content-Encoding entity-header field is used as a
48 modifier to the media-type. When present, its value indicates
49 what additional content codings have been applied to the
50 entity-body, and thus what decoding mechanisms must be applied
51 in order to obtain the media-type referenced by the Content-Type
52 header field.
53
54 .. versionadded:: 0.9""",
55 )
56 content_md5 = environ_property(
57 "HTTP_CONTENT_MD5",
58 doc="""The Content-MD5 entity-header field, as defined in
59 RFC 1864, is an MD5 digest of the entity-body for the purpose of
60 providing an end-to-end message integrity check (MIC) of the
61 entity-body. (Note: a MIC is good for detecting accidental
62 modification of the entity-body in transit, but is not proof
63 against malicious attacks.)
64
65 .. versionadded:: 0.9""",
66 )
67 referrer = environ_property(
68 "HTTP_REFERER",
69 doc="""The Referer[sic] request-header field allows the client
70 to specify, for the server's benefit, the address (URI) of the
71 resource from which the Request-URI was obtained (the
72 "referrer", although the header field is misspelled).""",
73 )
74 date = environ_property(
75 "HTTP_DATE",
76 None,
77 parse_date,
78 doc="""The Date general-header field represents the date and
79 time at which the message was originated, having the same
80 semantics as orig-date in RFC 822.""",
81 )
82 max_forwards = environ_property(
83 "HTTP_MAX_FORWARDS",
84 None,
85 int,
86 doc="""The Max-Forwards request-header field provides a
87 mechanism with the TRACE and OPTIONS methods to limit the number
88 of proxies or gateways that can forward the request to the next
89 inbound server.""",
90 )
91
92 def _parse_content_type(self):
93 if not hasattr(self, "_parsed_content_type"):
94 self._parsed_content_type = parse_options_header(
95 self.environ.get("CONTENT_TYPE", "")
96 )
97
98 @property
99 def mimetype(self):
100 """Like :attr:`content_type`, but without parameters (e.g. without
101 charset, type etc.) and always lowercase. For example if the content
102 type is ``text/HTML; charset=utf-8`` the mimetype would be
103 ``'text/html'``.
104 """
105 self._parse_content_type()
106 return self._parsed_content_type[0].lower()
107
108 @property
109 def mimetype_params(self):
110 """The mimetype parameters as dict. For example if the content
111 type is ``text/html; charset=utf-8`` the params would be
112 ``{'charset': 'utf-8'}``.
113 """
114 self._parse_content_type()
115 return self._parsed_content_type[1]
116
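The content-type split behind `mimetype` and `mimetype_params` can be sketched with plain string handling. This is a simplified stand-in for `parse_options_header`, not its full grammar (no quoted-string escapes or encoded parameters):

```python
def split_content_type(value):
    # "text/HTML; charset=utf-8" -> ("text/html", {"charset": "utf-8"})
    parts = value.split(";")
    mimetype = parts[0].strip().lower()
    params = {}
    for part in parts[1:]:
        if "=" in part:
            key, _, val = part.partition("=")
            params[key.strip()] = val.strip().strip('"')
    return mimetype, params
```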
117 @cached_property
118 def pragma(self):
119 """The Pragma general-header field is used to include
120 implementation-specific directives that might apply to any recipient
121 along the request/response chain. All pragma directives specify
122 optional behavior from the viewpoint of the protocol; however, some
123 systems MAY require that behavior be consistent with the directives.
124 """
125 return parse_set_header(self.environ.get("HTTP_PRAGMA", ""))
126
127
128 class CommonResponseDescriptorsMixin(object):
129 """A mixin for :class:`BaseResponse` subclasses. Response objects that
130 mix this class in will automatically get descriptors for a couple of
131 HTTP headers with automatic type conversion.
132 """
133
134 @property
135 def mimetype(self):
136 """The mimetype (content type without charset etc.)"""
137 ct = self.headers.get("content-type")
138 if ct:
139 return ct.split(";")[0].strip()
140
141 @mimetype.setter
142 def mimetype(self, value):
143 self.headers["Content-Type"] = get_content_type(value, self.charset)
144
145 @property
146 def mimetype_params(self):
147 """The mimetype parameters as dict. For example if the
148 content type is ``text/html; charset=utf-8`` the params would be
149 ``{'charset': 'utf-8'}``.
150
151 .. versionadded:: 0.5
152 """
153
154 def on_update(d):
155 self.headers["Content-Type"] = dump_options_header(self.mimetype, d)
156
157 d = parse_options_header(self.headers.get("content-type", ""))[1]
158 return CallbackDict(d, on_update)
159
160 location = header_property(
161 "Location",
162 doc="""The Location response-header field is used to redirect
163 the recipient to a location other than the Request-URI for
164 completion of the request or identification of a new
165 resource.""",
166 )
167 age = header_property(
168 "Age",
169 None,
170 parse_age,
171 dump_age,
172 doc="""The Age response-header field conveys the sender's
173 estimate of the amount of time since the response (or its
174 revalidation) was generated at the origin server.
175
176 Age values are non-negative decimal integers, representing time
177 in seconds.""",
178 )
179 content_type = header_property(
180 "Content-Type",
181 doc="""The Content-Type entity-header field indicates the media
182 type of the entity-body sent to the recipient or, in the case of
183 the HEAD method, the media type that would have been sent had
184 the request been a GET.""",
185 )
186 content_length = header_property(
187 "Content-Length",
188 None,
189 int,
190 str,
191 doc="""The Content-Length entity-header field indicates the size
192 of the entity-body, in decimal number of OCTETs, sent to the
193 recipient or, in the case of the HEAD method, the size of the
194 entity-body that would have been sent had the request been a
195 GET.""",
196 )
197 content_location = header_property(
198 "Content-Location",
199 doc="""The Content-Location entity-header field MAY be used to
200 supply the resource location for the entity enclosed in the
201 message when that entity is accessible from a location separate
202 from the requested resource's URI.""",
203 )
204 content_encoding = header_property(
205 "Content-Encoding",
206 doc="""The Content-Encoding entity-header field is used as a
207 modifier to the media-type. When present, its value indicates
208 what additional content codings have been applied to the
209 entity-body, and thus what decoding mechanisms must be applied
210 in order to obtain the media-type referenced by the Content-Type
211 header field.""",
212 )
213 content_md5 = header_property(
214 "Content-MD5",
215 doc="""The Content-MD5 entity-header field, as defined in
216 RFC 1864, is an MD5 digest of the entity-body for the purpose of
217 providing an end-to-end message integrity check (MIC) of the
218 entity-body. (Note: a MIC is good for detecting accidental
219 modification of the entity-body in transit, but is not proof
220 against malicious attacks.)""",
221 )
222 date = header_property(
223 "Date",
224 None,
225 parse_date,
226 http_date,
227 doc="""The Date general-header field represents the date and
228 time at which the message was originated, having the same
229 semantics as orig-date in RFC 822.""",
230 )
231 expires = header_property(
232 "Expires",
233 None,
234 parse_date,
235 http_date,
236 doc="""The Expires entity-header field gives the date/time after
237 which the response is considered stale. A stale cache entry may
238 not normally be returned by a cache.""",
239 )
240 last_modified = header_property(
241 "Last-Modified",
242 None,
243 parse_date,
244 http_date,
245 doc="""The Last-Modified entity-header field indicates the date
246 and time at which the origin server believes the variant was
247 last modified.""",
248 )
249
250 @property
251 def retry_after(self):
252 """The Retry-After response-header field can be used with a
253 503 (Service Unavailable) response to indicate how long the
254 service is expected to be unavailable to the requesting client.
255
256 The header value is either delta-seconds or an HTTP date; this
257 property always returns a datetime.
257 """
258 value = self.headers.get("retry-after")
259 if value is None:
260 return
261 elif value.isdigit():
262 return datetime.utcnow() + timedelta(seconds=int(value))
263 return parse_date(value)
264
265 @retry_after.setter
266 def retry_after(self, value):
267 if value is None:
268 if "retry-after" in self.headers:
269 del self.headers["retry-after"]
270 return
271 elif isinstance(value, datetime):
272 value = http_date(value)
273 else:
274 value = str(value)
275 self.headers["Retry-After"] = value
276
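`Retry-After` accepts either delta-seconds or an HTTP date; the getter's branch above can be sketched as follows, using `email.utils.parsedate_to_datetime` in place of werkzeug's `parse_date` (the helper name and `now` parameter are assumptions for testability):

```python
from datetime import datetime, timedelta
from email.utils import parsedate_to_datetime

def parse_retry_after(value, now=None):
    # Numeric values are seconds from now; otherwise parse an HTTP date.
    if value is None:
        return None
    if now is None:
        now = datetime.utcnow()
    if value.isdigit():
        return now + timedelta(seconds=int(value))
    return parsedate_to_datetime(value)

base = datetime(2019, 1, 1)
```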
277 def _set_property(name, doc=None): # noqa: B902
278 def fget(self):
279 def on_update(header_set):
280 if not header_set and name in self.headers:
281 del self.headers[name]
282 elif header_set:
283 self.headers[name] = header_set.to_header()
284
285 return parse_set_header(self.headers.get(name), on_update)
286
287 def fset(self, value):
288 if not value:
289 del self.headers[name]
290 elif isinstance(value, string_types):
291 self.headers[name] = value
292 else:
293 self.headers[name] = dump_header(value)
294
295 return property(fget, fset, doc=doc)
296
297 vary = _set_property(
298 "Vary",
299 doc="""The Vary field value indicates the set of request-header
300 fields that fully determines, while the response is fresh,
301 whether a cache is permitted to use the response to reply to a
302 subsequent request without revalidation.""",
303 )
304 content_language = _set_property(
305 "Content-Language",
306 doc="""The Content-Language entity-header field describes the
307 natural language(s) of the intended audience for the enclosed
308 entity. Note that this might not be equivalent to all the
309 languages used within the entity-body.""",
310 )
311 allow = _set_property(
312 "Allow",
313 doc="""The Allow entity-header field lists the set of methods
314 supported by the resource identified by the Request-URI. The
315 purpose of this field is strictly to inform the recipient of
316 valid methods associated with the resource. An Allow header
317 field MUST be present in a 405 (Method Not Allowed)
318 response.""",
319 )
320
321 del _set_property
0 from .._compat import string_types
1 from .._internal import _get_environ
2 from ..datastructures import ContentRange
3 from ..datastructures import RequestCacheControl
4 from ..datastructures import ResponseCacheControl
5 from ..http import generate_etag
6 from ..http import http_date
7 from ..http import is_resource_modified
8 from ..http import parse_cache_control_header
9 from ..http import parse_content_range_header
10 from ..http import parse_date
11 from ..http import parse_etags
12 from ..http import parse_if_range_header
13 from ..http import parse_range_header
14 from ..http import quote_etag
15 from ..http import unquote_etag
16 from ..utils import cached_property
17 from ..utils import header_property
18 from ..wrappers.base_response import _clean_accept_ranges
19 from ..wsgi import _RangeWrapper
20
21
22 class ETagRequestMixin(object):
23 """Add entity tag and cache descriptors to a request object or object with
24 a WSGI environment available as :attr:`~BaseRequest.environ`. This not
25 only provides access to etags but also to the cache control header.
26 """
27
28 @cached_property
29 def cache_control(self):
30 """A :class:`~werkzeug.datastructures.RequestCacheControl` object
31 for the incoming cache control headers.
32 """
33 cache_control = self.environ.get("HTTP_CACHE_CONTROL")
34 return parse_cache_control_header(cache_control, None, RequestCacheControl)
35
36 @cached_property
37 def if_match(self):
38 """An object containing all the etags in the `If-Match` header.
39
40 :rtype: :class:`~werkzeug.datastructures.ETags`
41 """
42 return parse_etags(self.environ.get("HTTP_IF_MATCH"))
43
44 @cached_property
45 def if_none_match(self):
46 """An object containing all the etags in the `If-None-Match` header.
47
48 :rtype: :class:`~werkzeug.datastructures.ETags`
49 """
50 return parse_etags(self.environ.get("HTTP_IF_NONE_MATCH"))
51
52 @cached_property
53 def if_modified_since(self):
54 """The parsed `If-Modified-Since` header as datetime object."""
55 return parse_date(self.environ.get("HTTP_IF_MODIFIED_SINCE"))
56
57 @cached_property
58 def if_unmodified_since(self):
59 """The parsed `If-Unmodified-Since` header as datetime object."""
60 return parse_date(self.environ.get("HTTP_IF_UNMODIFIED_SINCE"))
61
62 @cached_property
63 def if_range(self):
64 """The parsed `If-Range` header.
65
66 .. versionadded:: 0.7
67
68 :rtype: :class:`~werkzeug.datastructures.IfRange`
69 """
70 return parse_if_range_header(self.environ.get("HTTP_IF_RANGE"))
71
72 @cached_property
73 def range(self):
74 """The parsed `Range` header.
75
76 .. versionadded:: 0.7
77
78 :rtype: :class:`~werkzeug.datastructures.Range`
79 """
80 return parse_range_header(self.environ.get("HTTP_RANGE"))
81
82
83 class ETagResponseMixin(object):
84 """Adds extra functionality to a response object for etag and cache
85 handling. This mixin requires an object with at least a `headers`
86 object that implements a dict like interface similar to
87 :class:`~werkzeug.datastructures.Headers`.
88
89 If you want the :meth:`freeze` method to automatically add an etag, you
90 have to mixin this method before the response base class. The default
91 response class does not do that.
92 """
93
94 @property
95 def cache_control(self):
96 """The Cache-Control general-header field is used to specify
97 directives that MUST be obeyed by all caching mechanisms along the
98 request/response chain.
99 """
100
101 def on_update(cache_control):
102 if not cache_control and "cache-control" in self.headers:
103 del self.headers["cache-control"]
104 elif cache_control:
105 self.headers["Cache-Control"] = cache_control.to_header()
106
107 return parse_cache_control_header(
108 self.headers.get("cache-control"), on_update, ResponseCacheControl
109 )
110
111 def _wrap_response(self, start, length):
112 """Wrap existing Response in case of Range Request context."""
113 if self.status_code == 206:
114 self.response = _RangeWrapper(self.response, start, length)
115
116 def _is_range_request_processable(self, environ):
117 """Return ``True`` if `Range` header is present and if underlying
118 resource is considered unchanged when compared with `If-Range` header.
119 """
120 return (
121 "HTTP_IF_RANGE" not in environ
122 or not is_resource_modified(
123 environ,
124 self.headers.get("etag"),
125 None,
126 self.headers.get("last-modified"),
127 ignore_if_range=False,
128 )
129 ) and "HTTP_RANGE" in environ
130
131 def _process_range_request(self, environ, complete_length=None, accept_ranges=None):
132 """Handle Range Request related headers (RFC7233). If `Accept-Ranges`
133 header is valid, and Range Request is processable, we set the headers
134 as described by the RFC, and wrap the underlying response in a
135 RangeWrapper.
136
137 Returns ``True`` if Range Request can be fulfilled, ``False`` otherwise.
138
139 :raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
140 if `Range` header could not be parsed or satisfied.
141 """
142 from ..exceptions import RequestedRangeNotSatisfiable
143
144 if accept_ranges is None:
145 return False
146 self.headers["Accept-Ranges"] = accept_ranges
147 if not self._is_range_request_processable(environ) or complete_length is None:
148 return False
149 parsed_range = parse_range_header(environ.get("HTTP_RANGE"))
150 if parsed_range is None:
151 raise RequestedRangeNotSatisfiable(complete_length)
152 range_tuple = parsed_range.range_for_length(complete_length)
153 content_range_header = parsed_range.to_content_range_header(complete_length)
154 if range_tuple is None or content_range_header is None:
155 raise RequestedRangeNotSatisfiable(complete_length)
156 content_length = range_tuple[1] - range_tuple[0]
157 # Be sure not to send a 206 response
158 # if the requested range is the full content.
159 if content_length != complete_length:
160 self.headers["Content-Length"] = content_length
161 self.content_range = content_range_header
162 self.status_code = 206
163 self._wrap_response(range_tuple[0], content_length)
164 return True
165 return False
166
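The `Content-Range` and `Content-Length` values set above follow directly from the parsed range tuple. A sketch of the arithmetic for a simple `bytes=start-end` request (a hypothetical helper mirroring the shape of `Range.range_for_length`, not werkzeug's parser):

```python
def range_for_length(start, end, complete_length):
    """Clamp an inclusive byte range against the resource size.

    Returns a half-open (start, stop) tuple, or None if unsatisfiable.
    """
    if start >= complete_length:
        return None
    stop = min(end + 1, complete_length)  # HTTP byte ranges are inclusive
    return (start, stop)

rng = range_for_length(0, 499, 1200)
content_length = rng[1] - rng[0]
content_range = "bytes %d-%d/%d" % (rng[0], rng[1] - 1, 1200)
```

An unsatisfiable range (start beyond the resource) maps to `None`, which in the method above triggers `RequestedRangeNotSatisfiable`.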
167 def make_conditional(
168 self, request_or_environ, accept_ranges=False, complete_length=None
169 ):
170 """Make the response conditional to the request. This method works
171 best if an etag was defined for the response already. The `add_etag`
172 method can be used to do that. If called without etag just the date
173 header is set.
174
175 This does nothing if the request method in the request or environ is
176 anything but GET or HEAD.
177
178 For optimal performance when handling range requests, it's recommended
179 that your response data object implements `seekable`, `seek` and `tell`
180 methods as described by :py:class:`io.IOBase`. Objects returned by
181 :meth:`~werkzeug.wsgi.wrap_file` automatically implement those methods.
182
183 It does not remove the body of the response because that's something
184 the :meth:`__call__` function does for us automatically.
185
186 Returns self so that you can do ``return resp.make_conditional(req)``
187 but modifies the object in-place.
188
189 :param request_or_environ: a request object or WSGI environment to be
190 used to make the response conditional
191 against.
192 :param accept_ranges: This parameter dictates the value of
193 `Accept-Ranges` header. If ``False`` (default),
194 the header is not set. If ``True``, it will be set
195 to ``"bytes"``. If ``None``, it will be set to
196 ``"none"``. If it's a string, it will use this
197 value.
198 :param complete_length: Will be used only in valid Range Requests.
199 It will set `Content-Range` complete length
200 value and compute `Content-Length` real value.
201 This parameter is mandatory for successful
202 Range Requests completion.
203 :raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
204 if `Range` header could not be parsed or satisfied.
205 """
206 environ = _get_environ(request_or_environ)
207 if environ["REQUEST_METHOD"] in ("GET", "HEAD"):
208 # if the date is not in the headers, add it now. We however
209 # will not override an already existing header. Unfortunately
210 # this header will be overridden by many WSGI servers including
211 # wsgiref.
212 if "date" not in self.headers:
213 self.headers["Date"] = http_date()
214 accept_ranges = _clean_accept_ranges(accept_ranges)
215 is206 = self._process_range_request(environ, complete_length, accept_ranges)
216 if not is206 and not is_resource_modified(
217 environ,
218 self.headers.get("etag"),
219 None,
220 self.headers.get("last-modified"),
221 ):
222 if parse_etags(environ.get("HTTP_IF_MATCH")):
223 self.status_code = 412
224 else:
225 self.status_code = 304
226 if (
227 self.automatically_set_content_length
228 and "content-length" not in self.headers
229 ):
230 length = self.calculate_content_length()
231 if length is not None:
232 self.headers["Content-Length"] = length
233 return self
234
235 def add_etag(self, overwrite=False, weak=False):
236 """Add an etag for the current response if there is none yet."""
237 if overwrite or "etag" not in self.headers:
238 self.set_etag(generate_etag(self.get_data()), weak)
239
240 def set_etag(self, etag, weak=False):
241 """Set the etag, and override the old one if there was one."""
242 self.headers["ETag"] = quote_etag(etag, weak)
243
244 def get_etag(self):
245 """Return a tuple in the form ``(etag, is_weak)``. If there is no
246 ETag the return value is ``(None, None)``.
247 """
248 return unquote_etag(self.headers.get("ETag"))
249
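ETag values are quoted on the wire, with a `W/` prefix for weak validators. The quote/unquote round-trip used by `set_etag` and `get_etag` can be sketched as (a simplified version of werkzeug's `quote_etag`/`unquote_etag`):

```python
def quote_etag(etag, weak=False):
    # Wrap the raw tag in double quotes; weak validators get a W/ prefix.
    rv = '"%s"' % etag
    if weak:
        rv = "W/" + rv
    return rv

def unquote_etag(value):
    # Return (etag, is_weak), mirroring get_etag's return shape.
    if value is None:
        return None, None
    weak = value.startswith(("W/", "w/"))
    if weak:
        value = value[2:]
    return value.strip('"'), weak
```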
250 def freeze(self, no_etag=False):
251 """Call this method if you want to make your response object ready for
252 pickling. This buffers the generator if there is one. This also
253 sets the etag unless `no_etag` is set to `True`.
254 """
255 if not no_etag:
256 self.add_etag()
257 super(ETagResponseMixin, self).freeze()
258
259 accept_ranges = header_property(
260 "Accept-Ranges",
261 doc="""The `Accept-Ranges` header. Even though the name would
262 indicate that multiple values are supported, it must be one
263 string token only.
264
265 The values ``'bytes'`` and ``'none'`` are common.
266
267 .. versionadded:: 0.7""",
268 )
269
270 def _get_content_range(self):
271 def on_update(rng):
272 if not rng:
273 del self.headers["content-range"]
274 else:
275 self.headers["Content-Range"] = rng.to_header()
276
277 rv = parse_content_range_header(self.headers.get("content-range"), on_update)
278 # always provide a content range object to make the descriptor
279 # more user friendly. It provides an unset() method that can be
280 # used to remove the header quickly.
281 if rv is None:
282 rv = ContentRange(None, None, None, on_update=on_update)
283 return rv
284
285 def _set_content_range(self, value):
286 if not value:
287 del self.headers["content-range"]
288 elif isinstance(value, string_types):
289 self.headers["Content-Range"] = value
290 else:
291 self.headers["Content-Range"] = value.to_header()
292
293 content_range = property(
294 _get_content_range,
295 _set_content_range,
296 doc="""The ``Content-Range`` header as
297 :class:`~werkzeug.datastructures.ContentRange` object. Even if
298 the header is not set it will provide such an object for easier
299 manipulation.
300
301 .. versionadded:: 0.7""",
302 )
303 del _get_content_range, _set_content_range
0 from __future__ import absolute_import
1
2 import datetime
3 import uuid
4
5 from .._compat import text_type
6 from ..exceptions import BadRequest
7 from ..utils import detect_utf_encoding
8
9 try:
10 import simplejson as _json
11 except ImportError:
12 import json as _json
13
14
15 class _JSONModule(object):
16 @staticmethod
17 def _default(o):
18 if isinstance(o, datetime.date):
19 return o.isoformat()
20
21 if isinstance(o, uuid.UUID):
22 return str(o)
23
24 if hasattr(o, "__html__"):
25 return text_type(o.__html__())
26
27 raise TypeError()
28
29 @classmethod
30 def dumps(cls, obj, **kw):
31 kw.setdefault("separators", (",", ":"))
32 kw.setdefault("default", cls._default)
33 kw.setdefault("sort_keys", True)
34 return _json.dumps(obj, **kw)
35
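The `separators` default set above produces compact JSON without the spaces the standard library inserts by default. With the built-in `json` module:

```python
import json

def compact_dumps(obj):
    # Match the defaults set in _JSONModule.dumps: compact separators
    # and sorted keys for deterministic output.
    return json.dumps(obj, separators=(",", ":"), sort_keys=True)
```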
36 @staticmethod
37 def loads(s, **kw):
38 if isinstance(s, bytes):
39 # Needed for Python < 3.6
40 encoding = detect_utf_encoding(s)
41 s = s.decode(encoding)
42
43 return _json.loads(s, **kw)
44
45
46 class JSONMixin(object):
47 """Mixin to parse :attr:`data` as JSON. Can be mixed in for both
48 :class:`~werkzeug.wrappers.Request` and
49 :class:`~werkzeug.wrappers.Response` classes.
50
51 If `simplejson`_ is installed it is preferred over Python's built-in
52 :mod:`json` module.
53
54 .. _simplejson: https://simplejson.readthedocs.io/en/latest/
55 """
56
57 #: A module or other object that has ``dumps`` and ``loads``
58 #: functions that match the API of the built-in :mod:`json` module.
59 json_module = _JSONModule
60
61 @property
62 def json(self):
63 """The parsed JSON data if :attr:`mimetype` indicates JSON
64 (:mimetype:`application/json`, see :meth:`is_json`).
65
66 Calls :meth:`get_json` with default arguments.
67 """
68 return self.get_json()
69
70 @property
71 def is_json(self):
72 """Check if the mimetype indicates JSON data, either
73 :mimetype:`application/json` or :mimetype:`application/*+json`.
74 """
75 mt = self.mimetype
76 return (
77 mt == "application/json"
78 or mt.startswith("application/")
79 and mt.endswith("+json")
80 )
81
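The mimetype check above relies on `and` binding tighter than `or`; a standalone sketch (the function name here is illustrative) makes the precedence explicit:

```python
def is_json_mimetype(mimetype):
    # reads as: exact match, OR (application/ prefix AND +json suffix),
    # which also accepts structured-syntax suffixes like
    # application/vnd.api+json
    return (
        mimetype == "application/json"
        or mimetype.startswith("application/")
        and mimetype.endswith("+json")
    )


print(is_json_mimetype("application/vnd.api+json"))
# → True
```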
82 def _get_data_for_json(self, cache):
83 try:
84 return self.get_data(cache=cache)
85 except TypeError:
86 # Response doesn't have cache param.
87 return self.get_data()
88
89 # Cached values for ``(silent=False, silent=True)``. Initialized
90 # with sentinel values.
91 _cached_json = (Ellipsis, Ellipsis)
92
93 def get_json(self, force=False, silent=False, cache=True):
94 """Parse :attr:`data` as JSON.
95
96 If the mimetype does not indicate JSON
97 (:mimetype:`application/json`, see :meth:`is_json`), this
98 returns ``None``.
99
100 If parsing fails, :meth:`on_json_loading_failed` is called and
101 its return value is used as the return value.
102
103 :param force: Ignore the mimetype and always try to parse JSON.
104 :param silent: Silence parsing errors and return ``None``
105 instead.
106 :param cache: Store the parsed JSON to return for subsequent
107 calls.
108 """
109 if cache and self._cached_json[silent] is not Ellipsis:
110 return self._cached_json[silent]
111
112 if not (force or self.is_json):
113 return None
114
115 data = self._get_data_for_json(cache=cache)
116
117 try:
118 rv = self.json_module.loads(data)
119 except ValueError as e:
120 if silent:
121 rv = None
122
123 if cache:
124 normal_rv, _ = self._cached_json
125 self._cached_json = (normal_rv, rv)
126 else:
127 rv = self.on_json_loading_failed(e)
128
129 if cache:
130 _, silent_rv = self._cached_json
131 self._cached_json = (rv, silent_rv)
132 else:
133 if cache:
134 self._cached_json = (rv, rv)
135
136 return rv
137
138 def on_json_loading_failed(self, e):
139 """Called if :meth:`get_json` parsing fails and isn't silenced.
140 If this method returns a value, it is used as the return value
141 for :meth:`get_json`. The default implementation raises
142 :exc:`~werkzeug.exceptions.BadRequest`.
143 """
144 raise BadRequest("Failed to decode JSON object: {0}".format(e))
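The two-slot cache in `get_json` above indexes a tuple with `bool(silent)` and uses `Ellipsis` as the "not parsed yet" sentinel. A minimal standalone sketch of that mechanism (the `JSONCache` class is illustrative, not Werkzeug API):

```python
import json


class JSONCache:
    """Sketch of get_json's two-slot result cache."""

    def __init__(self):
        # (silent=False slot, silent=True slot); Ellipsis means "unset"
        self._cached = (Ellipsis, Ellipsis)

    def get_json(self, data, silent=False):
        # bool indexes the tuple: False -> 0, True -> 1
        if self._cached[silent] is not Ellipsis:
            return self._cached[silent]
        try:
            rv = json.loads(data)
        except ValueError:
            if not silent:
                raise
            rv = None
            normal, _ = self._cached
            self._cached = (normal, rv)
        else:
            # a successful parse satisfies both silent and non-silent calls
            self._cached = (rv, rv)
        return rv


cache = JSONCache()
print(cache.get_json('{"a": 1}'))
# → {'a': 1}
```

Keeping separate slots means a silenced failure (`None`) is never returned to a later non-silent call, which should raise instead.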
0 from .accept import AcceptMixin
1 from .auth import AuthorizationMixin
2 from .base_request import BaseRequest
3 from .common_descriptors import CommonRequestDescriptorsMixin
4 from .etag import ETagRequestMixin
5 from .user_agent import UserAgentMixin
6
7
8 class Request(
9 BaseRequest,
10 AcceptMixin,
11 ETagRequestMixin,
12 UserAgentMixin,
13 AuthorizationMixin,
14 CommonRequestDescriptorsMixin,
15 ):
16 """Full featured request object implementing the following mixins:
17
18 - :class:`AcceptMixin` for accept header parsing
19 - :class:`ETagRequestMixin` for etag and cache control handling
20 - :class:`UserAgentMixin` for user agent introspection
21 - :class:`AuthorizationMixin` for http auth handling
22 - :class:`CommonRequestDescriptorsMixin` for common headers
23 """
24
25
26 class StreamOnlyMixin(object):
27     """If mixed in before the request object, this will change its
28     behavior to disable form parsing. The :attr:`files` and
29     :attr:`form` attributes are disabled; only the :attr:`stream`
30     attribute, which is always available, is provided.
31
32 .. versionadded:: 0.9
33 """
34
35 disable_data_descriptor = True
36 want_form_data_parsed = False
37
38
39 class PlainRequest(StreamOnlyMixin, Request):
40 """A request object without special form parsing capabilities.
41
42 .. versionadded:: 0.9
43 """
0 from ..utils import cached_property
1 from .auth import WWWAuthenticateMixin
2 from .base_response import BaseResponse
3 from .common_descriptors import CommonResponseDescriptorsMixin
4 from .etag import ETagResponseMixin
5
6
7 class ResponseStream(object):
8     """A file-descriptor-like object used by the :class:`ResponseStreamMixin` to
9 represent the body of the stream. It directly pushes into the response
10 iterable of the response object.
11 """
12
13 mode = "wb+"
14
15 def __init__(self, response):
16 self.response = response
17 self.closed = False
18
19 def write(self, value):
20 if self.closed:
21 raise ValueError("I/O operation on closed file")
22 self.response._ensure_sequence(mutable=True)
23 self.response.response.append(value)
24 self.response.headers.pop("Content-Length", None)
25 return len(value)
26
27 def writelines(self, seq):
28 for item in seq:
29 self.write(item)
30
31 def close(self):
32 self.closed = True
33
34 def flush(self):
35 if self.closed:
36 raise ValueError("I/O operation on closed file")
37
38 def isatty(self):
39 if self.closed:
40 raise ValueError("I/O operation on closed file")
41 return False
42
43 def tell(self):
44 self.response._ensure_sequence()
45 return sum(map(len, self.response.response))
46
47 @property
48 def encoding(self):
49 return self.response.charset
50
51
52 class ResponseStreamMixin(object):
53     """Mixin for :class:`BaseResponse` subclasses. Classes that inherit from
54 this mixin will automatically get a :attr:`stream` property that provides
55 a write-only interface to the response iterable.
56 """
57
58 @cached_property
59 def stream(self):
60 """The response iterable as write-only stream."""
61 return ResponseStream(self)
62
63
64 class Response(
65 BaseResponse,
66 ETagResponseMixin,
67 ResponseStreamMixin,
68 CommonResponseDescriptorsMixin,
69 WWWAuthenticateMixin,
70 ):
71 """Full featured response object implementing the following mixins:
72
73 - :class:`ETagResponseMixin` for etag and cache control handling
74 - :class:`ResponseStreamMixin` to add support for the `stream` property
75 - :class:`CommonResponseDescriptorsMixin` for various HTTP descriptors
76 - :class:`WWWAuthenticateMixin` for HTTP authentication support
77 """
0 from ..utils import cached_property
1
2
3 class UserAgentMixin(object):
4 """Adds a `user_agent` attribute to the request object which
5 contains the parsed user agent of the browser that triggered the
6 request as a :class:`~werkzeug.useragents.UserAgent` object.
7 """
8
9 @cached_property
10 def user_agent(self):
11 """The current user agent."""
12 from ..useragents import UserAgent
13
14 return UserAgent(self.environ)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.wsgi
3 ~~~~~~~~~~~~~
4
5 This module implements WSGI related helpers.
6
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
9 """
10 import io
11 import re
12 import warnings
13 from functools import partial
14 from functools import update_wrapper
15 from itertools import chain
16
17 from ._compat import BytesIO
18 from ._compat import implements_iterator
19 from ._compat import make_literal_wrapper
20 from ._compat import string_types
21 from ._compat import text_type
22 from ._compat import to_bytes
23 from ._compat import to_unicode
24 from ._compat import try_coerce_native
25 from ._compat import wsgi_get_bytes
26 from ._internal import _encode_idna
27 from .urls import uri_to_iri
28 from .urls import url_join
29 from .urls import url_parse
30 from .urls import url_quote
31
32
33 def responder(f):
34     """Marks a function as a responder. Decorate a function with it and it
35     will automatically call the return value as a WSGI application.
36
37 Example::
38
39 @responder
40 def application(environ, start_response):
41 return Response('Hello World!')
42 """
43 return update_wrapper(lambda *a: f(*a)(*a[-2:]), f)
44
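The one-liner in `responder` is dense: it calls the decorated function with all arguments, then calls the returned value as a WSGI application using the last two arguments (`environ`, `start_response`). A self-contained sketch, using a plain list body instead of a real `Response` object:

```python
from functools import update_wrapper


def responder(f):
    # f(*a) builds the WSGI app; (*a[-2:]) immediately invokes it with
    # the trailing (environ, start_response) pair
    return update_wrapper(lambda *a: f(*a)(*a[-2:]), f)


@responder
def application(environ, start_response):
    def wsgi_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello World!"]

    return wsgi_app


body = application({}, lambda status, headers: None)
print(body)
# → [b'Hello World!']
```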
45
46 def get_current_url(
47 environ,
48 root_only=False,
49 strip_querystring=False,
50 host_only=False,
51 trusted_hosts=None,
52 ):
53 """A handy helper function that recreates the full URL as IRI for the
54 current request or parts of it. Here's an example:
55
56 >>> from werkzeug.test import create_environ
57 >>> env = create_environ("/?param=foo", "http://localhost/script")
58 >>> get_current_url(env)
59 'http://localhost/script/?param=foo'
60 >>> get_current_url(env, root_only=True)
61 'http://localhost/script/'
62 >>> get_current_url(env, host_only=True)
63 'http://localhost/'
64 >>> get_current_url(env, strip_querystring=True)
65 'http://localhost/script/'
66
67     Optionally, it verifies that the host is in a list of trusted hosts.
68     If the host is not in the list, a
69     :exc:`~werkzeug.exceptions.SecurityError` is raised.
70
71 Note that the string returned might contain unicode characters as the
72     representation is an IRI, not a URI. If you need an ASCII-only
73 representation you can use the :func:`~werkzeug.urls.iri_to_uri`
74 function:
75
76 >>> from werkzeug.urls import iri_to_uri
77 >>> iri_to_uri(get_current_url(env))
78 'http://localhost/script/?param=foo'
79
80 :param environ: the WSGI environment to get the current URL from.
81 :param root_only: set `True` if you only want the root URL.
82 :param strip_querystring: set to `True` if you don't want the querystring.
83 :param host_only: set to `True` if the host URL should be returned.
84 :param trusted_hosts: a list of trusted hosts, see :func:`host_is_trusted`
85 for more information.
86 """
87 tmp = [environ["wsgi.url_scheme"], "://", get_host(environ, trusted_hosts)]
88 cat = tmp.append
89 if host_only:
90 return uri_to_iri("".join(tmp) + "/")
91 cat(url_quote(wsgi_get_bytes(environ.get("SCRIPT_NAME", ""))).rstrip("/"))
92 cat("/")
93 if not root_only:
94 cat(url_quote(wsgi_get_bytes(environ.get("PATH_INFO", "")).lstrip(b"/")))
95 if not strip_querystring:
96 qs = get_query_string(environ)
97 if qs:
98 cat("?" + qs)
99 return uri_to_iri("".join(tmp))
100
101
102 def host_is_trusted(hostname, trusted_list):
103 """Checks if a host is trusted against a list. This also takes care
104 of port normalization.
105
106 .. versionadded:: 0.9
107
108 :param hostname: the hostname to check
109 :param trusted_list: a list of hostnames to check against. If a
110 hostname starts with a dot it will match against
111 all subdomains as well.
112 """
113 if not hostname:
114 return False
115
116 if isinstance(trusted_list, string_types):
117 trusted_list = [trusted_list]
118
119 def _normalize(hostname):
120 if ":" in hostname:
121 hostname = hostname.rsplit(":", 1)[0]
122 return _encode_idna(hostname)
123
124 try:
125 hostname = _normalize(hostname)
126 except UnicodeError:
127 return False
128 for ref in trusted_list:
129 if ref.startswith("."):
130 ref = ref[1:]
131 suffix_match = True
132 else:
133 suffix_match = False
134 try:
135 ref = _normalize(ref)
136 except UnicodeError:
137 return False
138 if ref == hostname:
139 return True
140 if suffix_match and hostname.endswith(b"." + ref):
141 return True
142 return False
143
144
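The matching rules of `host_is_trusted` can be sketched without the IDNA-encoding and bytes details (this simplified version strips the port and handles the leading-dot subdomain convention only; it is not the full implementation above):

```python
def host_is_trusted(hostname, trusted_list):
    # Simplified sketch: no IDNA normalization, port stripped before
    # comparing; a leading dot in a reference matches the bare domain
    # and all of its subdomains.
    if not hostname:
        return False
    hostname = hostname.rsplit(":", 1)[0]
    for ref in trusted_list:
        if ref.startswith("."):
            if hostname == ref[1:] or hostname.endswith(ref):
                return True
        elif ref == hostname:
            return True
    return False


print(host_is_trusted("api.example.com:8000", [".example.com"]))
# → True
```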
145 def get_host(environ, trusted_hosts=None):
146 """Return the host for the given WSGI environment. This first checks
147 the ``Host`` header. If it's not present, then ``SERVER_NAME`` and
148 ``SERVER_PORT`` are used. The host will only contain the port if it
149 is different than the standard port for the protocol.
150
151 Optionally, verify that the host is trusted using
152 :func:`host_is_trusted` and raise a
153 :exc:`~werkzeug.exceptions.SecurityError` if it is not.
154
155 :param environ: The WSGI environment to get the host from.
156 :param trusted_hosts: A list of trusted hosts.
157 :return: Host, with port if necessary.
158 :raise ~werkzeug.exceptions.SecurityError: If the host is not
159 trusted.
160 """
161 if "HTTP_HOST" in environ:
162 rv = environ["HTTP_HOST"]
163 if environ["wsgi.url_scheme"] == "http" and rv.endswith(":80"):
164 rv = rv[:-3]
165 elif environ["wsgi.url_scheme"] == "https" and rv.endswith(":443"):
166 rv = rv[:-4]
167 else:
168 rv = environ["SERVER_NAME"]
169 if (environ["wsgi.url_scheme"], environ["SERVER_PORT"]) not in (
170 ("https", "443"),
171 ("http", "80"),
172 ):
173 rv += ":" + environ["SERVER_PORT"]
174 if trusted_hosts is not None:
175 if not host_is_trusted(rv, trusted_hosts):
176 from .exceptions import SecurityError
177
178 raise SecurityError('Host "%s" is not trusted' % rv)
179 return rv
180
181
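The header-vs-fallback logic of `get_host` can be exercised in isolation; this sketch reproduces the default-port stripping without the trusted-host check:

```python
def get_host(environ):
    # Sketch of the resolution order: prefer the Host header, fall back
    # to SERVER_NAME/SERVER_PORT; keep the port only when it differs
    # from the scheme's default.
    if "HTTP_HOST" in environ:
        rv = environ["HTTP_HOST"]
        scheme = environ["wsgi.url_scheme"]
        if scheme == "http" and rv.endswith(":80"):
            rv = rv[:-3]
        elif scheme == "https" and rv.endswith(":443"):
            rv = rv[:-4]
        return rv
    rv = environ["SERVER_NAME"]
    if (environ["wsgi.url_scheme"], environ["SERVER_PORT"]) not in (
        ("https", "443"),
        ("http", "80"),
    ):
        rv += ":" + environ["SERVER_PORT"]
    return rv


print(get_host({"wsgi.url_scheme": "http", "HTTP_HOST": "example.com:80"}))
# → example.com
```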
182 def get_content_length(environ):
183 """Returns the content length from the WSGI environment as
184 integer. If it's not available or chunked transfer encoding is used,
185 ``None`` is returned.
186
187 .. versionadded:: 0.9
188
189 :param environ: the WSGI environ to fetch the content length from.
190 """
191 if environ.get("HTTP_TRANSFER_ENCODING", "") == "chunked":
192 return None
193
194 content_length = environ.get("CONTENT_LENGTH")
195 if content_length is not None:
196 try:
197 return max(0, int(content_length))
198 except (ValueError, TypeError):
199 pass
200
201
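Since `get_content_length` is self-contained apart from the environ dict, its edge cases are easy to demonstrate directly (chunked encoding, garbage values, and negative lengths):

```python
def get_content_length(environ):
    # chunked transfer encoding has no reliable length
    if environ.get("HTTP_TRANSFER_ENCODING", "") == "chunked":
        return None

    content_length = environ.get("CONTENT_LENGTH")
    if content_length is not None:
        try:
            # negative values are clamped to 0; garbage falls through
            return max(0, int(content_length))
        except (ValueError, TypeError):
            pass
    return None


print(get_content_length({"CONTENT_LENGTH": "42"}))
# → 42
```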
202 def get_input_stream(environ, safe_fallback=True):
203 """Returns the input stream from the WSGI environment and wraps it
204 in the most sensible way possible. The stream returned is not the
205 raw WSGI stream in most cases but one that is safe to read from
206 without taking into account the content length.
207
208 If content length is not set, the stream will be empty for safety reasons.
209 If the WSGI server supports chunked or infinite streams, it should set
210 the ``wsgi.input_terminated`` value in the WSGI environ to indicate that.
211
212 .. versionadded:: 0.9
213
214 :param environ: the WSGI environ to fetch the stream from.
215 :param safe_fallback: use an empty stream as a safe fallback when the
216 content length is not set. Disabling this allows infinite streams,
217 which can be a denial-of-service risk.
218 """
219 stream = environ["wsgi.input"]
220 content_length = get_content_length(environ)
221
222 # A wsgi extension that tells us if the input is terminated. In
223 # that case we return the stream unchanged as we know we can safely
224 # read it until the end.
225 if environ.get("wsgi.input_terminated"):
226 return stream
227
228 # If the request doesn't specify a content length, returning the stream is
229 # potentially dangerous because it could be infinite, malicious or not. If
230 # safe_fallback is true, return an empty stream instead for safety.
231 if content_length is None:
232 return BytesIO() if safe_fallback else stream
233
234 # Otherwise limit the stream to the content length
235 return LimitedStream(stream, content_length)
236
237
238 def get_query_string(environ):
239     """Returns the `QUERY_STRING` from the WSGI environment. This also takes
240     care of the WSGI decoding dance on Python 3 environments, returning a
241     native string. The string returned will be restricted to ASCII
242     characters.
243
244 .. versionadded:: 0.9
245
246 :param environ: the WSGI environment object to get the query string from.
247 """
248 qs = wsgi_get_bytes(environ.get("QUERY_STRING", ""))
249 # QUERY_STRING really should be ascii safe but some browsers
250 # will send us some unicode stuff (I am looking at you IE).
251 # In that case we want to urllib quote it badly.
252 return try_coerce_native(url_quote(qs, safe=":&%=+$!*'(),"))
253
254
255 def get_path_info(environ, charset="utf-8", errors="replace"):
256 """Returns the `PATH_INFO` from the WSGI environment and properly
257     decodes it. This also takes care of the WSGI decoding dance
258     on Python 3 environments. If the `charset` is set to `None`, a
259     bytestring is returned.
260
261 .. versionadded:: 0.9
262
263 :param environ: the WSGI environment object to get the path from.
264 :param charset: the charset for the path info, or `None` if no
265 decoding should be performed.
266 :param errors: the decoding error handling.
267 """
268 path = wsgi_get_bytes(environ.get("PATH_INFO", ""))
269 return to_unicode(path, charset, errors, allow_none_charset=True)
270
271
272 def get_script_name(environ, charset="utf-8", errors="replace"):
273 """Returns the `SCRIPT_NAME` from the WSGI environment and properly
274     decodes it. This also takes care of the WSGI decoding dance
275     on Python 3 environments. If the `charset` is set to `None`, a
276     bytestring is returned.
277
278 .. versionadded:: 0.9
279
280 :param environ: the WSGI environment object to get the path from.
281 :param charset: the charset for the path, or `None` if no
282 decoding should be performed.
283 :param errors: the decoding error handling.
284 """
285 path = wsgi_get_bytes(environ.get("SCRIPT_NAME", ""))
286 return to_unicode(path, charset, errors, allow_none_charset=True)
287
288
289 def pop_path_info(environ, charset="utf-8", errors="replace"):
290 """Removes and returns the next segment of `PATH_INFO`, pushing it onto
291 `SCRIPT_NAME`. Returns `None` if there is nothing left on `PATH_INFO`.
292
293 If the `charset` is set to `None` a bytestring is returned.
294
295     If there are empty segments (``'/foo//bar'``) these are ignored but
296 properly pushed to the `SCRIPT_NAME`:
297
298 >>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'}
299 >>> pop_path_info(env)
300 'a'
301 >>> env['SCRIPT_NAME']
302 '/foo/a'
303 >>> pop_path_info(env)
304 'b'
305 >>> env['SCRIPT_NAME']
306 '/foo/a/b'
307
308 .. versionadded:: 0.5
309
310 .. versionchanged:: 0.9
311 The path is now decoded and a charset and encoding
312 parameter can be provided.
313
314 :param environ: the WSGI environment that is modified.
315 """
316 path = environ.get("PATH_INFO")
317 if not path:
318 return None
319
320 script_name = environ.get("SCRIPT_NAME", "")
321
322 # shift multiple leading slashes over
323 old_path = path
324 path = path.lstrip("/")
325 if path != old_path:
326 script_name += "/" * (len(old_path) - len(path))
327
328 if "/" not in path:
329 environ["PATH_INFO"] = ""
330 environ["SCRIPT_NAME"] = script_name + path
331 rv = wsgi_get_bytes(path)
332 else:
333 segment, path = path.split("/", 1)
334 environ["PATH_INFO"] = "/" + path
335 environ["SCRIPT_NAME"] = script_name + segment
336 rv = wsgi_get_bytes(segment)
337
338 return to_unicode(rv, charset, errors, allow_none_charset=True)
339
340
341 def peek_path_info(environ, charset="utf-8", errors="replace"):
342     """Returns the next segment of the `PATH_INFO` or `None` if there
343 is none. Works like :func:`pop_path_info` without modifying the
344 environment:
345
346 >>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'}
347 >>> peek_path_info(env)
348 'a'
349 >>> peek_path_info(env)
350 'a'
351
352 If the `charset` is set to `None` a bytestring is returned.
353
354 .. versionadded:: 0.5
355
356 .. versionchanged:: 0.9
357 The path is now decoded and a charset and encoding
358 parameter can be provided.
359
360 :param environ: the WSGI environment that is checked.
361 """
362 segments = environ.get("PATH_INFO", "").lstrip("/").split("/", 1)
363 if segments:
364 return to_unicode(
365 wsgi_get_bytes(segments[0]), charset, errors, allow_none_charset=True
366 )
367
368
369 def extract_path_info(
370 environ_or_baseurl,
371 path_or_url,
372 charset="utf-8",
373 errors="werkzeug.url_quote",
374 collapse_http_schemes=True,
375 ):
376 """Extracts the path info from the given URL (or WSGI environment) and
377 path. The path info returned is a unicode string, not a bytestring
378 suitable for a WSGI environment. The URLs might also be IRIs.
379
380 If the path info could not be determined, `None` is returned.
381
382 Some examples:
383
384 >>> extract_path_info('http://example.com/app', '/app/hello')
385 u'/hello'
386 >>> extract_path_info('http://example.com/app',
387 ... 'https://example.com/app/hello')
388 u'/hello'
389 >>> extract_path_info('http://example.com/app',
390 ... 'https://example.com/app/hello',
391 ... collapse_http_schemes=False) is None
392 True
393
394 Instead of providing a base URL you can also pass a WSGI environment.
395
396 :param environ_or_baseurl: a WSGI environment dict, a base URL or
397 base IRI. This is the root of the
398 application.
399 :param path_or_url: an absolute path from the server root, a
400 relative path (in which case it's the path info)
401 or a full URL. Also accepts IRIs and unicode
402 parameters.
403 :param charset: the charset for byte data in URLs
404 :param errors: the error handling on decode
405 :param collapse_http_schemes: if set to `False` the algorithm does
406 not assume that http and https on the
407 same server point to the same
408 resource.
409
410 .. versionchanged:: 0.15
411 The ``errors`` parameter defaults to leaving invalid bytes
412 quoted instead of replacing them.
413
414 .. versionadded:: 0.6
415 """
416
417 def _normalize_netloc(scheme, netloc):
418 parts = netloc.split(u"@", 1)[-1].split(u":", 1)
419 if len(parts) == 2:
420 netloc, port = parts
421 if (scheme == u"http" and port == u"80") or (
422 scheme == u"https" and port == u"443"
423 ):
424 port = None
425 else:
426 netloc = parts[0]
427 port = None
428 if port is not None:
429 netloc += u":" + port
430 return netloc
431
432     # make sure whatever we are working on is an IRI and parse it
433 path = uri_to_iri(path_or_url, charset, errors)
434 if isinstance(environ_or_baseurl, dict):
435 environ_or_baseurl = get_current_url(environ_or_baseurl, root_only=True)
436 base_iri = uri_to_iri(environ_or_baseurl, charset, errors)
437 base_scheme, base_netloc, base_path = url_parse(base_iri)[:3]
438 cur_scheme, cur_netloc, cur_path, = url_parse(url_join(base_iri, path))[:3]
439
440 # normalize the network location
441 base_netloc = _normalize_netloc(base_scheme, base_netloc)
442 cur_netloc = _normalize_netloc(cur_scheme, cur_netloc)
443
444 # is that IRI even on a known HTTP scheme?
445 if collapse_http_schemes:
446 for scheme in base_scheme, cur_scheme:
447 if scheme not in (u"http", u"https"):
448 return None
449 else:
450 if not (base_scheme in (u"http", u"https") and base_scheme == cur_scheme):
451 return None
452
453 # are the netlocs compatible?
454 if base_netloc != cur_netloc:
455 return None
456
457 # are we below the application path?
458 base_path = base_path.rstrip(u"/")
459 if not cur_path.startswith(base_path):
460 return None
461
462 return u"/" + cur_path[len(base_path) :].lstrip(u"/")
463
464
465 @implements_iterator
466 class ClosingIterator(object):
467 """The WSGI specification requires that all middlewares and gateways
468 respect the `close` callback of the iterable returned by the application.
469     Because it is often useful to add another close action to a returned
470     iterable, and writing a custom iterable for that is tedious, this class
471     can be used instead::
472
473 return ClosingIterator(app(environ, start_response), [cleanup_session,
474 cleanup_locals])
475
476 If there is just one close function it can be passed instead of the list.
477
478     A closing iterator is not needed if the application uses response objects
479     and finishes processing once the response is started::
480
481 try:
482 return response(environ, start_response)
483 finally:
484 cleanup_session()
485 cleanup_locals()
486 """
487
488 def __init__(self, iterable, callbacks=None):
489 iterator = iter(iterable)
490 self._next = partial(next, iterator)
491 if callbacks is None:
492 callbacks = []
493 elif callable(callbacks):
494 callbacks = [callbacks]
495 else:
496 callbacks = list(callbacks)
497 iterable_close = getattr(iterable, "close", None)
498 if iterable_close:
499 callbacks.insert(0, iterable_close)
500 self._callbacks = callbacks
501
502 def __iter__(self):
503 return self
504
505 def __next__(self):
506 return self._next()
507
508 def close(self):
509 for callback in self._callbacks:
510 callback()
511
512
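A condensed sketch of the class above shows the ordering guarantee: the wrapped iterable's own ``close`` is inserted first, so it always runs before the extra callbacks:

```python
class ClosingIterator:
    """Sketch: chain extra close callbacks onto a WSGI iterable."""

    def __init__(self, iterable, callbacks=None):
        self._iterator = iter(iterable)
        if callbacks is None:
            callbacks = []
        elif callable(callbacks):
            callbacks = [callbacks]
        else:
            callbacks = list(callbacks)
        close = getattr(iterable, "close", None)
        if close is not None:
            # the original close must run before any added cleanup
            callbacks.insert(0, close)
        self._callbacks = callbacks

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._iterator)

    def close(self):
        for callback in self._callbacks:
            callback()


calls = []
it = ClosingIterator([b"body"], lambda: calls.append("cleanup"))
print(list(it))
# → [b'body']
it.close()
print(calls)
# → ['cleanup']
```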
513 def wrap_file(environ, file, buffer_size=8192):
514 """Wraps a file. This uses the WSGI server's file wrapper if available
515 or otherwise the generic :class:`FileWrapper`.
516
517 .. versionadded:: 0.5
518
519 If the file wrapper from the WSGI server is used it's important to not
520 iterate over it from inside the application but to pass it through
521 unchanged. If you want to pass out a file wrapper inside a response
522 object you have to set :attr:`~BaseResponse.direct_passthrough` to `True`.
523
524     More information about file wrappers is available in :pep:`333`.
525
526 :param file: a :class:`file`-like object with a :meth:`~file.read` method.
527 :param buffer_size: number of bytes for one iteration.
528 """
529 return environ.get("wsgi.file_wrapper", FileWrapper)(file, buffer_size)
530
531
532 @implements_iterator
533 class FileWrapper(object):
534 """This class can be used to convert a :class:`file`-like object into
535 an iterable. It yields `buffer_size` blocks until the file is fully
536 read.
537
538 You should not use this class directly but rather use the
539 :func:`wrap_file` function that uses the WSGI server's file wrapper
540 support if it's available.
541
542 .. versionadded:: 0.5
543
544 If you're using this object together with a :class:`BaseResponse` you have
545 to use the `direct_passthrough` mode.
546
547 :param file: a :class:`file`-like object with a :meth:`~file.read` method.
548 :param buffer_size: number of bytes for one iteration.
549 """
550
551 def __init__(self, file, buffer_size=8192):
552 self.file = file
553 self.buffer_size = buffer_size
554
555 def close(self):
556 if hasattr(self.file, "close"):
557 self.file.close()
558
559 def seekable(self):
560 if hasattr(self.file, "seekable"):
561 return self.file.seekable()
562 if hasattr(self.file, "seek"):
563 return True
564 return False
565
566 def seek(self, *args):
567 if hasattr(self.file, "seek"):
568 self.file.seek(*args)
569
570 def tell(self):
571 if hasattr(self.file, "tell"):
572 return self.file.tell()
573 return None
574
575 def __iter__(self):
576 return self
577
578 def __next__(self):
579 data = self.file.read(self.buffer_size)
580 if data:
581 return data
582 raise StopIteration()
583
584
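Stripped of the seek/tell passthroughs, the iteration core of `FileWrapper` is just "read a block, stop on empty"; a runnable sketch against an in-memory file:

```python
import io


class FileWrapper:
    """Minimal sketch: turn a file-like object into a block iterator."""

    def __init__(self, file, buffer_size=8192):
        self.file = file
        self.buffer_size = buffer_size

    def __iter__(self):
        return self

    def __next__(self):
        data = self.file.read(self.buffer_size)
        if data:
            return data
        # an empty read means the file is exhausted
        raise StopIteration


chunks = list(FileWrapper(io.BytesIO(b"abcdefgh"), buffer_size=3))
print(chunks)
# → [b'abc', b'def', b'gh']
```

Note the final chunk may be shorter than `buffer_size`, which is why responses built on this must not assume fixed-size blocks.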
585 @implements_iterator
586 class _RangeWrapper(object):
587     # private for now, but may be made public in the future
588
589 """This class can be used to convert an iterable object into
590 an iterable that will only yield a piece of the underlying content.
591 It yields blocks until the underlying stream range is fully read.
592     The yielded blocks will have a size that can't exceed the block
593     size defined by the original iterator, but that can be smaller.
594
595 If you're using this object together with a :class:`BaseResponse` you have
596 to use the `direct_passthrough` mode.
597
598 :param iterable: an iterable object with a :meth:`__next__` method.
599 :param start_byte: byte from which read will start.
600 :param byte_range: how many bytes to read.
601 """
602
603 def __init__(self, iterable, start_byte=0, byte_range=None):
604 self.iterable = iter(iterable)
605 self.byte_range = byte_range
606 self.start_byte = start_byte
607 self.end_byte = None
608 if byte_range is not None:
609 self.end_byte = self.start_byte + self.byte_range
610 self.read_length = 0
611 self.seekable = hasattr(iterable, "seekable") and iterable.seekable()
612 self.end_reached = False
613
614 def __iter__(self):
615 return self
616
617 def _next_chunk(self):
618 try:
619 chunk = next(self.iterable)
620 self.read_length += len(chunk)
621 return chunk
622 except StopIteration:
623 self.end_reached = True
624 raise
625
626 def _first_iteration(self):
627 chunk = None
628 if self.seekable:
629 self.iterable.seek(self.start_byte)
630 self.read_length = self.iterable.tell()
631 contextual_read_length = self.read_length
632 else:
633 while self.read_length <= self.start_byte:
634 chunk = self._next_chunk()
635 if chunk is not None:
636 chunk = chunk[self.start_byte - self.read_length :]
637 contextual_read_length = self.start_byte
638 return chunk, contextual_read_length
639
640 def _next(self):
641 if self.end_reached:
642 raise StopIteration()
643 chunk = None
644 contextual_read_length = self.read_length
645 if self.read_length == 0:
646 chunk, contextual_read_length = self._first_iteration()
647 if chunk is None:
648 chunk = self._next_chunk()
649 if self.end_byte is not None and self.read_length >= self.end_byte:
650 self.end_reached = True
651 return chunk[: self.end_byte - contextual_read_length]
652 return chunk
653
654 def __next__(self):
655 chunk = self._next()
656 if chunk:
657 return chunk
658 self.end_reached = True
659 raise StopIteration()
660
661 def close(self):
662 if hasattr(self.iterable, "close"):
663 self.iterable.close()
664
665
666 def _make_chunk_iter(stream, limit, buffer_size):
667 """Helper for the line and chunk iter functions."""
668 if isinstance(stream, (bytes, bytearray, text_type)):
669 raise TypeError(
670 "Passed a string or byte object instead of true iterator or stream."
671 )
672 if not hasattr(stream, "read"):
673 for item in stream:
674 if item:
675 yield item
676 return
677 if not isinstance(stream, LimitedStream) and limit is not None:
678 stream = LimitedStream(stream, limit)
679 _read = stream.read
680 while 1:
681 item = _read(buffer_size)
682 if not item:
683 break
684 yield item
685
686
687 def make_line_iter(stream, limit=None, buffer_size=10 * 1024, cap_at_buffer=False):
688 """Safely iterates line-based over an input stream. If the input stream
689 is not a :class:`LimitedStream` the `limit` parameter is mandatory.
690
691     This uses the stream's :meth:`~file.read` method internally as opposed
692     to the :meth:`~file.readline` method, which is unsafe and can only be used
693 in violation of the WSGI specification. The same problem applies to the
694 `__iter__` function of the input stream which calls :meth:`~file.readline`
695 without arguments.
696
697 If you need line-by-line processing it's strongly recommended to iterate
698 over the input stream using this helper function.
699
700 .. versionchanged:: 0.8
701 This function now ensures that the limit was reached.
702
703 .. versionadded:: 0.9
704 added support for iterators as input stream.
705
706 .. versionadded:: 0.11.10
707 added support for the `cap_at_buffer` parameter.
708
709     :param stream: the stream or iterable to iterate over.
710     :param limit: the limit in bytes for the stream. (Usually the
711                   content length. Not necessary if the `stream`
712                   is a :class:`LimitedStream`.)
713 :param buffer_size: The optional buffer size.
714     :param cap_at_buffer: if this is set, chunks are split if they are longer
715                           than the buffer size. Note, however, that internally
716                           the buffer may be exhausted by up to a factor of
717                           two.
718 """
719 _iter = _make_chunk_iter(stream, limit, buffer_size)
720
721 first_item = next(_iter, "")
722 if not first_item:
723 return
724
725 s = make_literal_wrapper(first_item)
726 empty = s("")
727 cr = s("\r")
728 lf = s("\n")
729 crlf = s("\r\n")
730
731 _iter = chain((first_item,), _iter)
732
733 def _iter_basic_lines():
734 _join = empty.join
735 buffer = []
736 while 1:
737 new_data = next(_iter, "")
738 if not new_data:
739 break
740 new_buf = []
741 buf_size = 0
742 for item in chain(buffer, new_data.splitlines(True)):
743 new_buf.append(item)
744 buf_size += len(item)
745 if item and item[-1:] in crlf:
746 yield _join(new_buf)
747 new_buf = []
748 elif cap_at_buffer and buf_size >= buffer_size:
749 rv = _join(new_buf)
750 while len(rv) >= buffer_size:
751 yield rv[:buffer_size]
752 rv = rv[buffer_size:]
753 new_buf = [rv]
754 buffer = new_buf
755 if buffer:
756 yield _join(buffer)
757
758 # This hackery is necessary to merge 'foo\r' and '\n' into one item
759 # of 'foo\r\n' if we were unlucky and we hit a chunk boundary.
760 previous = empty
761 for item in _iter_basic_lines():
762 if item == lf and previous[-1:] == cr:
763 previous += item
764 item = empty
765 if previous:
766 yield previous
767 previous = item
768 if previous:
769 yield previous
770
771
772 def make_chunk_iter(
773 stream, separator, limit=None, buffer_size=10 * 1024, cap_at_buffer=False
774 ):
775 """Works like :func:`make_line_iter` but accepts a separator
776 which divides chunks. If you want newline based processing
777 you should use :func:`make_line_iter` instead as it
778 supports arbitrary newline markers.
779
780 .. versionadded:: 0.8
781
782 .. versionadded:: 0.9
783 added support for iterators as input stream.
784
785 .. versionadded:: 0.11.10
786 added support for the `cap_at_buffer` parameter.
787
788     :param stream: the stream or iterable to iterate over.
789 :param separator: the separator that divides chunks.
790 :param limit: the limit in bytes for the stream. (Usually
791 content length. Not necessary if the `stream`
792 is otherwise already limited).
793 :param buffer_size: The optional buffer size.
794 :param cap_at_buffer: if this is set, chunks are split if they are longer
795 than the buffer size. Internally this is implemented
796 in a way that the internal buffer may grow to at
797 most twice the buffer size.
798 """
799 _iter = _make_chunk_iter(stream, limit, buffer_size)
800
801 first_item = next(_iter, "")
802 if not first_item:
803 return
804
805 _iter = chain((first_item,), _iter)
806 if isinstance(first_item, text_type):
807 separator = to_unicode(separator)
808 _split = re.compile(r"(%s)" % re.escape(separator)).split
809 _join = u"".join
810 else:
811 separator = to_bytes(separator)
812 _split = re.compile(b"(" + re.escape(separator) + b")").split
813 _join = b"".join
814
815 buffer = []
816 while 1:
817 new_data = next(_iter, "")
818 if not new_data:
819 break
820 chunks = _split(new_data)
821 new_buf = []
822 buf_size = 0
823 for item in chain(buffer, chunks):
824 if item == separator:
825 yield _join(new_buf)
826 new_buf = []
827 buf_size = 0
828 else:
829 buf_size += len(item)
830 new_buf.append(item)
831
832 if cap_at_buffer and buf_size >= buffer_size:
833 rv = _join(new_buf)
834 while len(rv) >= buffer_size:
835 yield rv[:buffer_size]
836 rv = rv[buffer_size:]
837 new_buf = [rv]
838 buf_size = len(rv)
839
840 buffer = new_buf
841 if buffer:
842 yield _join(buffer)
843
844
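The buffering loop above can be reduced to a self-contained sketch. `iter_chunks` below is a hypothetical simplification (not the Werkzeug API): it splits a sequence of string pieces on a separator while carrying partial chunks across piece boundaries; the sketch assumes the separator itself never straddles two input pieces:

```python
import re


def iter_chunks(pieces, separator):
    # Split a stream of string pieces on `separator`, carrying partial
    # chunks across piece boundaries in a buffer (assumes the separator
    # itself never straddles two pieces).
    split = re.compile("(%s)" % re.escape(separator)).split
    buffer = []
    for data in pieces:
        new_buf = []
        for item in buffer + split(data):
            if item == separator:
                yield "".join(new_buf)
                new_buf = []
            else:
                new_buf.append(item)
        buffer = new_buf
    if buffer:  # trailing chunk without a final separator
        yield "".join(buffer)
```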
845 @implements_iterator
846 class LimitedStream(io.IOBase):
847 """Wraps a stream so that it doesn't read more than n bytes. If the
848 stream is exhausted and the caller tries to get more bytes from it
849 :meth:`on_exhausted` is called which by default returns an empty
850 string. The return value of that function is forwarded
851 to the reader function, so if it returns an empty string
852 :meth:`read` will return an empty string as well.
853
854 The limit however must never be higher than what the stream can
855 output. Otherwise :meth:`readlines` will try to read past the
856 limit.
857
858 .. admonition:: Note on WSGI compliance
859
860 calls to :meth:`readline` and :meth:`readlines` are not
861 WSGI compliant because they pass a size argument to the
862 readline methods. Unfortunately the WSGI PEP is not safely
863 implementable without a size argument to :meth:`readline`
864 because there is no EOF marker in the stream. As a result
865 of that the use of :meth:`readline` is discouraged.
866
867 For the same reason iterating over the :class:`LimitedStream`
868 is not portable. It internally calls :meth:`readline`.
869
870 We strongly suggest using :meth:`read` only, or using
871 :func:`make_line_iter`, which safely iterates over a WSGI
872 input stream line by line.
873
874 :param stream: the stream to wrap.
875 :param limit: the limit for the stream, must not be longer than
876 what the stream can provide if the stream does not
877 end with `EOF` (like `wsgi.input`)
878 """
879
880 def __init__(self, stream, limit):
881 self._read = stream.read
882 self._readline = stream.readline
883 self._pos = 0
884 self.limit = limit
885
886 def __iter__(self):
887 return self
888
889 @property
890 def is_exhausted(self):
891 """If the stream is exhausted this attribute is `True`."""
892 return self._pos >= self.limit
893
894 def on_exhausted(self):
895 """This is called when the caller tries to read past the limit.
896 The return value of this function is returned from the reading
897 function.
898 """
899 # Read null bytes from the stream so that we get the
900 # correct end of stream marker.
901 return self._read(0)
902
903 def on_disconnect(self):
904 """What should happen if a disconnect is detected? The return
905 value of this function is returned from read functions in case
906 the client went away. By default a
907 :exc:`~werkzeug.exceptions.ClientDisconnected` exception is raised.
908 """
909 from .exceptions import ClientDisconnected
910
911 raise ClientDisconnected()
912
913 def exhaust(self, chunk_size=1024 * 64):
914 """Exhaust the stream. This consumes all the data left until the
915 limit is reached.
916
917 :param chunk_size: the size for a chunk. It will read in chunks
918 of this size until the stream is exhausted and
919 discard the results.
920 """
921 to_read = self.limit - self._pos
922 chunk = chunk_size
923 while to_read > 0:
924 chunk = min(to_read, chunk)
925 self.read(chunk)
926 to_read -= chunk
927
928 def read(self, size=None):
929 """Read `size` bytes, or if `size` is not provided, read everything.
930
931 :param size: the number of bytes to read.
932 """
933 if self._pos >= self.limit:
934 return self.on_exhausted()
935 if size is None or size == -1: # -1 is for consistency with file objects
936 size = self.limit
937 to_read = min(self.limit - self._pos, size)
938 try:
939 read = self._read(to_read)
940 except (IOError, ValueError):
941 return self.on_disconnect()
942 if to_read and len(read) != to_read:
943 return self.on_disconnect()
944 self._pos += len(read)
945 return read
946
947 def readline(self, size=None):
948 """Reads one line from the stream."""
949 if self._pos >= self.limit:
950 return self.on_exhausted()
951 if size is None:
952 size = self.limit - self._pos
953 else:
954 size = min(size, self.limit - self._pos)
955 try:
956 line = self._readline(size)
957 except (ValueError, IOError):
958 return self.on_disconnect()
959 if size and not line:
960 return self.on_disconnect()
961 self._pos += len(line)
962 return line
963
964 def readlines(self, size=None):
965 """Reads a file into a list of strings. It calls :meth:`readline`
966 until the file is read to the end. It supports the optional
967 `size` argument if the underlying stream supports it for
968 `readline`.
969 """
970 last_pos = self._pos
971 result = []
972 if size is not None:
973 end = min(self.limit, last_pos + size)
974 else:
975 end = self.limit
976 while 1:
977 if size is not None:
978 size -= last_pos - self._pos
979 if self._pos >= end:
980 break
981 result.append(self.readline(size))
982 if size is not None:
983 last_pos = self._pos
984 return result
985
986 def tell(self):
987 """Returns the position of the stream.
988
989 .. versionadded:: 0.9
990 """
991 return self._pos
992
993 def __next__(self):
994 line = self.readline()
995 if not line:
996 raise StopIteration()
997 return line
998
999 def readable(self):
1000 return True
1001
1002
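The core limiting behaviour of `read` can be shown with a stripped-down stand-in. `LimitedReader` below is a hypothetical sketch (happy path only, no disconnect detection) wrapping an ordinary `io.BytesIO`:

```python
import io


class LimitedReader(object):
    # Minimal sketch of the LimitedStream idea: never hand out more than
    # `limit` bytes of the wrapped stream, then behave like EOF.
    def __init__(self, stream, limit):
        self._stream = stream
        self._pos = 0
        self.limit = limit

    def read(self, size=None):
        if self._pos >= self.limit:
            return b""  # exhausted: act like end of stream
        if size is None or size == -1:
            size = self.limit
        to_read = min(self.limit - self._pos, size)
        data = self._stream.read(to_read)
        self._pos += len(data)
        return data


reader = LimitedReader(io.BytesIO(b"hello world"), limit=5)
first = reader.read(3)  # b"hel"
rest = reader.read()    # b"lo" -- capped at the limit
after = reader.read()   # b"" -- exhausted
```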
1003 # DEPRECATED
1004 from .middleware.dispatcher import DispatcherMiddleware as _DispatcherMiddleware
1005 from .middleware.http_proxy import ProxyMiddleware as _ProxyMiddleware
1006 from .middleware.shared_data import SharedDataMiddleware as _SharedDataMiddleware
1007
1008
1009 class ProxyMiddleware(_ProxyMiddleware):
1010 """
1011 .. deprecated:: 0.15
1012 ``werkzeug.wsgi.ProxyMiddleware`` has moved to
1013 :mod:`werkzeug.middleware.http_proxy`. This import will be
1014 removed in 1.0.
1015 """
1016
1017 def __init__(self, *args, **kwargs):
1018 warnings.warn(
1019 "'werkzeug.wsgi.ProxyMiddleware' has moved to 'werkzeug"
1020 ".middleware.http_proxy.ProxyMiddleware'. This import is"
1021 " deprecated as of version 0.15 and will be removed in"
1022 " version 1.0.",
1023 DeprecationWarning,
1024 stacklevel=2,
1025 )
1026 super(ProxyMiddleware, self).__init__(*args, **kwargs)
1027
1028
1029 class SharedDataMiddleware(_SharedDataMiddleware):
1030 """
1031 .. deprecated:: 0.15
1032 ``werkzeug.wsgi.SharedDataMiddleware`` has moved to
1033 :mod:`werkzeug.middleware.shared_data`. This import will be
1034 removed in 1.0.
1035 """
1036
1037 def __init__(self, *args, **kwargs):
1038 warnings.warn(
1039 "'werkzeug.wsgi.SharedDataMiddleware' has moved to"
1040 " 'werkzeug.middleware.shared_data.SharedDataMiddleware'."
1041 " This import is deprecated as of version 0.15 and will be"
1042 " removed in version 1.0.",
1043 DeprecationWarning,
1044 stacklevel=2,
1045 )
1046 super(SharedDataMiddleware, self).__init__(*args, **kwargs)
1047
1048
1049 class DispatcherMiddleware(_DispatcherMiddleware):
1050 """
1051 .. deprecated:: 0.15
1052 ``werkzeug.wsgi.DispatcherMiddleware`` has moved to
1053 :mod:`werkzeug.middleware.dispatcher`. This import will be
1054 removed in 1.0.
1055 """
1056
1057 def __init__(self, *args, **kwargs):
1058 warnings.warn(
1059 "'werkzeug.wsgi.DispatcherMiddleware' has moved to"
1060 " 'werkzeug.middleware.dispatcher.DispatcherMiddleware'."
1061 " This import is deprecated as of version 0.15 and will be"
1062 " removed in version 1.0.",
1063 DeprecationWarning,
1064 stacklevel=2,
1065 )
1066 super(DispatcherMiddleware, self).__init__(*args, **kwargs)
22 tests.conftest
33 ~~~~~~~~~~~~~~
44
5 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
6 :license: BSD, see LICENSE for more details.
5 :copyright: 2007 Pallets
6 :license: BSD-3-Clause
77 """
8 from __future__ import with_statement, print_function
9
8 from __future__ import print_function
9
10 import logging
1011 import os
12 import platform
1113 import signal
14 import subprocess
1215 import sys
1316 import textwrap
1417 import time
15
16 import requests
18 from itertools import count
19
1720 import pytest
1821
1922 from werkzeug import serving
23 from werkzeug._compat import to_bytes
24 from werkzeug.urls import url_quote
2025 from werkzeug.utils import cached_property
21 from werkzeug._compat import to_bytes
22 from itertools import count
23
2426
2527 try:
26 __import__('pytest_xprocess')
28 __import__("pytest_xprocess")
2729 except ImportError:
28 @pytest.fixture(scope='session')
29 def subprocess():
30 pytest.skip('pytest-xprocess not installed.')
31 else:
32 @pytest.fixture(scope='session')
33 def subprocess(xprocess):
34 return xprocess
30
31 @pytest.fixture(scope="session")
32 def xprocess():
33 pytest.skip("pytest-xprocess not installed.")
3534
3635
3736 port_generator = count(13220)
3938
4039 def _patch_reloader_loop():
4140 def f(x):
42 print('reloader loop finished')
41 print("reloader loop finished")
4342 # Need to flush for some reason even though xprocess opens the
4443 # subprocess' stdout in unbuffered mode.
4544 # flush=True makes the test fail on py2, so flush manually
4746 return time.sleep(x)
4847
4948 import werkzeug._reloader
49
5050 werkzeug._reloader.ReloaderLoop._sleep = staticmethod(f)
51
52
53 pid_logger = logging.getLogger("get_pid_middleware")
54 pid_logger.setLevel(logging.INFO)
55 pid_handler = logging.StreamHandler(sys.stdout)
56 pid_logger.addHandler(pid_handler)
5157
5258
5359 def _get_pid_middleware(f):
5460 def inner(environ, start_response):
55 if environ['PATH_INFO'] == '/_getpid':
56 start_response('200 OK', [('Content-Type', 'text/plain')])
61 if environ["PATH_INFO"] == "/_getpid":
62 start_response("200 OK", [("Content-Type", "text/plain")])
63 pid_logger.info("pid=%s", os.getpid())
5764 return [to_bytes(str(os.getpid()))]
5865 return f(environ, start_response)
66
5967 return inner
6068
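The helper above is a plain WSGI middleware: answer one special path directly and delegate everything else. A self-contained sketch of the same pattern (hypothetical names, bytes-only for simplicity):

```python
import os


def get_pid_middleware(app):
    # Intercept requests for "/_getpid" and answer with this process's
    # pid; all other requests go to the wrapped WSGI application.
    def inner(environ, start_response):
        if environ["PATH_INFO"] == "/_getpid":
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [str(os.getpid()).encode()]
        return app(environ, start_response)

    return inner
```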
6169
6371 _patch_reloader_loop()
6472 sys.path.insert(0, sys.argv[1])
6573 import testsuite_app
74
6675 app = _get_pid_middleware(testsuite_app.app)
67 serving.run_simple(hostname='localhost', application=app,
68 **testsuite_app.kwargs)
76 serving.run_simple(application=app, **testsuite_app.kwargs)
6977
7078
7179 class _ServerInfo(object):
8391
8492 @cached_property
8593 def logfile(self):
86 return self.xprocess.getinfo('dev_server').logpath.open()
94 return self.xprocess.getinfo("dev_server").logpath.open()
8795
8896 def request_pid(self):
89 for i in range(20):
97 if self.url.startswith("http+unix://"):
98 from requests_unixsocket import get as rget
99 else:
100 from requests import get as rget
101
102 for i in range(10):
90103 time.sleep(0.1 * i)
91104 try:
92 self.last_pid = int(requests.get(self.url + '/_getpid',
93 verify=False).text)
105 response = rget(self.url + "/_getpid", verify=False)
106 self.last_pid = int(response.text)
94107 return self.last_pid
95108 except Exception as e: # urllib also raises socket errors
96109 print(self.url)
102115 time.sleep(0.1 * i)
103116 new_pid = self.request_pid()
104117 if not new_pid:
105 raise RuntimeError('Server is down.')
106 if self.request_pid() != old_pid:
118 raise RuntimeError("Server is down.")
119 if new_pid != old_pid:
107120 return
108 raise RuntimeError('Server did not reload.')
121 raise RuntimeError("Server did not reload.")
109122
110123 def wait_for_reloader_loop(self):
111124 for i in range(20):
112125 time.sleep(0.1 * i)
113126 line = self.logfile.readline()
114 if 'reloader loop finished' in line:
127 if "reloader loop finished" in line:
115128 return
116129
117130
118131 @pytest.fixture
119 def dev_server(tmpdir, subprocess, request, monkeypatch):
120 '''Run werkzeug.serving.run_simple in its own process.
132 def dev_server(tmpdir, xprocess, request, monkeypatch):
133 """Run werkzeug.serving.run_simple in its own process.
121134
122135 :param application: String for the module that will be created. The module
123136 must have a global ``app`` object, a ``kwargs`` dict is also available
124137 whose values will be passed to ``run_simple``.
125 '''
138 """
139
126140 def run_dev_server(application):
127 app_pkg = tmpdir.mkdir('testsuite_app')
128 appfile = app_pkg.join('__init__.py')
141 app_pkg = tmpdir.mkdir("testsuite_app")
142 appfile = app_pkg.join("__init__.py")
129143 port = next(port_generator)
130 appfile.write('\n\n'.join((
131 'kwargs = dict(port=%d)' % port,
132 textwrap.dedent(application)
133 )))
134
135 monkeypatch.delitem(sys.modules, 'testsuite_app', raising=False)
144 appfile.write(
145 "\n\n".join(
146 (
147 "kwargs = {{'hostname': 'localhost', 'port': {port:d}}}".format(
148 port=port
149 ),
150 textwrap.dedent(application),
151 )
152 )
153 )
154
155 monkeypatch.delitem(sys.modules, "testsuite_app", raising=False)
136156 monkeypatch.syspath_prepend(str(tmpdir))
137157 import testsuite_app
138 port = testsuite_app.kwargs['port']
139
140 if testsuite_app.kwargs.get('ssl_context', None):
141 url_base = 'https://localhost:{0}'.format(port)
158
159 hostname = testsuite_app.kwargs["hostname"]
160 port = testsuite_app.kwargs["port"]
161 addr = "{}:{}".format(hostname, port)
162
163 if hostname.startswith("unix://"):
164 addr = hostname.split("unix://", 1)[1]
165 requests_url = "http+unix://" + url_quote(addr, safe="")
166 elif testsuite_app.kwargs.get("ssl_context", None):
167 requests_url = "https://localhost:{0}".format(port)
142168 else:
143 url_base = 'http://localhost:{0}'.format(port)
144
145 info = _ServerInfo(
146 subprocess,
147 'localhost:{0}'.format(port),
148 url_base,
149 port
150 )
151
152 def preparefunc(cwd):
169 requests_url = "http://localhost:{0}".format(port)
170
171 info = _ServerInfo(xprocess, addr, requests_url, port)
172
173 from xprocess import ProcessStarter
174
175 class Starter(ProcessStarter):
153176 args = [sys.executable, __file__, str(tmpdir)]
154 return lambda: 'pid=%s' % info.request_pid(), args
155
156 subprocess.ensure('dev_server', preparefunc, restart=True)
157
177
178 @property
179 def pattern(self):
180 return "pid=%s" % info.request_pid()
181
182 xprocess.ensure("dev_server", Starter, restart=True)
183
184 @request.addfinalizer
158185 def teardown():
159186 # Killing the process group that runs the server, not just the
160187 # parent process attached. xprocess is confused about Werkzeug's
161188 # reloader and won't help here.
162189 pid = info.request_pid()
163 if pid:
190 if not pid:
191 return
192 if platform.system() == "Windows":
193 subprocess.call(["taskkill", "/F", "/T", "/PID", str(pid)])
194 else:
164195 os.killpg(os.getpgid(pid), signal.SIGTERM)
165 request.addfinalizer(teardown)
166196
167197 return info
168198
169199 return run_dev_server
170200
171201
172 if __name__ == '__main__':
202 if __name__ == "__main__":
173203 _dev_server()
33
44 # build the path to the uwsgi marker file
55 # when running in tox, this will be relative to the tox env
6 filename = os.path.join(
7 os.environ.get('TOX_ENVTMPDIR', ''),
8 'test_uwsgi_failed'
9 )
6 filename = os.path.join(os.environ.get("TOX_ENVTMPDIR", ""), "test_uwsgi_failed")
107
118
129 @pytest.hookimpl(tryfirst=True, hookwrapper=True)
1916 outcome = yield
2017 report = outcome.get_result()
2118
22 if item.cls.__name__ != 'TestUWSGICache':
19 if item.cls.__name__ != "TestUWSGICache":
2320 return
2421
2522 if report.failed:
26 with open(filename, 'a') as f:
27 f.write(item.name + '\n')
23 with open(filename, "a") as f:
24 f.write(item.name + "\n")
44
55 Tests the cache system
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import errno
11
1112 import pytest
1213
1314 from werkzeug._compat import text_type
2930 except ImportError:
3031 memcache = None
3132
32
33 class CacheTests(object):
33 pytestmark = pytest.mark.skip("werkzeug.contrib.cache moved to cachelib")
34
35
36 class CacheTestsBase(object):
3437 _can_use_fast_sleep = True
35 _guaranteed_deletes = False
38 _guaranteed_deletes = True
3639
3740 @pytest.fixture
3841 def fast_sleep(self, monkeypatch):
3942 if self._can_use_fast_sleep:
43
4044 def sleep(delta):
4145 orig_time = cache.time
42 monkeypatch.setattr(cache, 'time', lambda: orig_time() + delta)
46 monkeypatch.setattr(cache, "time", lambda: orig_time() + delta)
4347
4448 return sleep
4549 else:
4650 import time
51
4752 return time.sleep
4853
4954 @pytest.fixture
5661 """Return a cache instance."""
5762 return make_cache()
5863
64
65 class GenericCacheTests(CacheTestsBase):
5966 def test_generic_get_dict(self, c):
60 assert c.set('a', 'a')
61 assert c.set('b', 'b')
62 d = c.get_dict('a', 'b')
63 assert 'a' in d
64 assert 'a' == d['a']
65 assert 'b' in d
66 assert 'b' == d['b']
67 assert c.set("a", "a")
68 assert c.set("b", "b")
69 d = c.get_dict("a", "b")
70 assert "a" in d
71 assert "a" == d["a"]
72 assert "b" in d
73 assert "b" == d["b"]
6774
6875 def test_generic_set_get(self, c):
6976 for i in range(3):
7481 assert result == i * i, result
7582
7683 def test_generic_get_set(self, c):
77 assert c.set('foo', ['bar'])
78 assert c.get('foo') == ['bar']
84 assert c.set("foo", ["bar"])
85 assert c.get("foo") == ["bar"]
7986
8087 def test_generic_get_many(self, c):
81 assert c.set('foo', ['bar'])
82 assert c.set('spam', 'eggs')
83 assert c.get_many('foo', 'spam') == [['bar'], 'eggs']
88 assert c.set("foo", ["bar"])
89 assert c.set("spam", "eggs")
90 assert c.get_many("foo", "spam") == [["bar"], "eggs"]
8491
8592 def test_generic_set_many(self, c):
86 assert c.set_many({'foo': 'bar', 'spam': ['eggs']})
87 assert c.get('foo') == 'bar'
88 assert c.get('spam') == ['eggs']
93 assert c.set_many({"foo": "bar", "spam": ["eggs"]})
94 assert c.get("foo") == "bar"
95 assert c.get("spam") == ["eggs"]
8996
9097 def test_generic_add(self, c):
9198 # sanity check that add() works like set()
92 assert c.add('foo', 'bar')
93 assert c.get('foo') == 'bar'
94 assert not c.add('foo', 'qux')
95 assert c.get('foo') == 'bar'
99 assert c.add("foo", "bar")
100 assert c.get("foo") == "bar"
101 assert not c.add("foo", "qux")
102 assert c.get("foo") == "bar"
96103
97104 def test_generic_delete(self, c):
98 assert c.add('foo', 'bar')
99 assert c.get('foo') == 'bar'
100 assert c.delete('foo')
101 assert c.get('foo') is None
105 assert c.add("foo", "bar")
106 assert c.get("foo") == "bar"
107 assert c.delete("foo")
108 assert c.get("foo") is None
102109
103110 def test_generic_delete_many(self, c):
104 assert c.add('foo', 'bar')
105 assert c.add('spam', 'eggs')
106 assert c.delete_many('foo', 'spam')
107 assert c.get('foo') is None
108 assert c.get('spam') is None
111 assert c.add("foo", "bar")
112 assert c.add("spam", "eggs")
113 assert c.delete_many("foo", "spam")
114 assert c.get("foo") is None
115 assert c.get("spam") is None
109116
110117 def test_generic_inc_dec(self, c):
111 assert c.set('foo', 1)
112 assert c.inc('foo') == c.get('foo') == 2
113 assert c.dec('foo') == c.get('foo') == 1
114 assert c.delete('foo')
118 assert c.set("foo", 1)
119 assert c.inc("foo") == c.get("foo") == 2
120 assert c.dec("foo") == c.get("foo") == 1
121 assert c.delete("foo")
115122
116123 def test_generic_true_false(self, c):
117 assert c.set('foo', True)
118 assert c.get('foo') in (True, 1)
119 assert c.set('bar', False)
120 assert c.get('bar') in (False, 0)
124 assert c.set("foo", True)
125 assert c.get("foo") in (True, 1)
126 assert c.set("bar", False)
127 assert c.get("bar") in (False, 0)
121128
122129 def test_generic_timeout(self, c, fast_sleep):
123 c.set('foo', 'bar', 0)
124 assert c.get('foo') == 'bar'
125 c.set('baz', 'qux', 1)
126 assert c.get('baz') == 'qux'
130 c.set("foo", "bar", 0)
131 assert c.get("foo") == "bar"
132 c.set("baz", "qux", 1)
133 assert c.get("baz") == "qux"
127134 fast_sleep(3)
128135 # timeout of zero means no timeout
129 assert c.get('foo') == 'bar'
136 assert c.get("foo") == "bar"
130137 if self._guaranteed_deletes:
131 assert c.get('baz') is None
138 assert c.get("baz") is None
132139
133140 def test_generic_has(self, c):
134 assert c.has('foo') in (False, 0)
135 assert c.has('spam') in (False, 0)
136 assert c.set('foo', 'bar')
137 assert c.has('foo') in (True, 1)
138 assert c.has('spam') in (False, 0)
139 c.delete('foo')
140 assert c.has('foo') in (False, 0)
141 assert c.has('spam') in (False, 0)
142
143
144 class TestSimpleCache(CacheTests):
145 _guaranteed_deletes = True
146
141 assert c.has("foo") in (False, 0)
142 assert c.has("spam") in (False, 0)
143 assert c.set("foo", "bar")
144 assert c.has("foo") in (True, 1)
145 assert c.has("spam") in (False, 0)
146 c.delete("foo")
147 assert c.has("foo") in (False, 0)
148 assert c.has("spam") in (False, 0)
149
150
151 class TestSimpleCache(GenericCacheTests):
147152 @pytest.fixture
148153 def make_cache(self):
149154 return cache.SimpleCache
150155
151156 def test_purge(self):
152157 c = cache.SimpleCache(threshold=2)
153 c.set('a', 'a')
154 c.set('b', 'b')
155 c.set('c', 'c')
156 c.set('d', 'd')
158 c.set("a", "a")
159 c.set("b", "b")
160 c.set("c", "c")
161 c.set("d", "d")
157162 # Cache purges old items *before* it sets new ones.
158163 assert len(c._cache) == 3
159164
160165
161 class TestFileSystemCache(CacheTests):
162 _guaranteed_deletes = True
163
166 class TestFileSystemCache(GenericCacheTests):
164167 @pytest.fixture
165168 def make_cache(self, tmpdir):
166169 return lambda **kw: cache.FileSystemCache(cache_dir=str(tmpdir), **kw)
176179 assert nof_cache_files <= THRESHOLD
177180
178181 def test_filesystemcache_clear(self, c):
179 assert c.set('foo', 'bar')
182 assert c.set("foo", "bar")
180183 nof_cache_files = c.get(c._fs_count_file)
181184 assert nof_cache_files == 1
182185 assert c.clear()
200203 assert nof_cache_files is None
201204
202205 def test_count_file_accuracy(self, c):
203 assert c.set('foo', 'bar')
204 assert c.set('moo', 'car')
205 c.add('moo', 'tar')
206 assert c.set("foo", "bar")
207 assert c.set("moo", "car")
208 c.add("moo", "tar")
206209 assert c.get(c._fs_count_file) == 2
207 assert c.add('too', 'far')
210 assert c.add("too", "far")
208211 assert c.get(c._fs_count_file) == 3
209 assert c.delete('moo')
212 assert c.delete("moo")
210213 assert c.get(c._fs_count_file) == 2
211214 assert c.clear()
212215 assert c.get(c._fs_count_file) == 0
215218 # don't use pytest.mark.skipif on subclasses
216219 # https://bitbucket.org/hpk42/pytest/issue/568
217220 # skip happens in requirements fixture instead
218 class TestRedisCache(CacheTests):
221 class TestRedisCache(GenericCacheTests):
219222 _can_use_fast_sleep = False
220 _guaranteed_deletes = True
221
222 @pytest.fixture(scope='class', autouse=True)
223 def requirements(self, subprocess):
223
224 @pytest.fixture(scope="class", autouse=True)
225 def requirements(self, xprocess):
224226 if redis is None:
225227 pytest.skip('Python package "redis" is not installed.')
226228
227229 def prepare(cwd):
228 return '[Rr]eady to accept connections', ['redis-server']
230 return "[Rr]eady to accept connections", ["redis-server"]
229231
230232 try:
231 subprocess.ensure('redis_server', prepare)
233 xprocess.ensure("redis_server", prepare)
232234 except IOError as e:
233235 # xprocess raises FileNotFoundError
234236 if e.errno == errno.ENOENT:
235 pytest.skip('Redis is not installed.')
237 pytest.skip("Redis is not installed.")
236238 else:
237239 raise
238240
239241 yield
240 subprocess.getinfo('redis_server').terminate()
242 xprocess.getinfo("redis_server").terminate()
241243
242244 @pytest.fixture(params=(None, False, True))
243245 def make_cache(self, request):
244246 if request.param is None:
245 host = 'localhost'
247 host = "localhost"
246248 elif request.param:
247249 host = redis.StrictRedis()
248250 else:
249251 host = redis.Redis()
250252
251 c = cache.RedisCache(
252 host=host,
253 key_prefix='werkzeug-test-case:',
254 )
253 c = cache.RedisCache(host=host, key_prefix="werkzeug-test-case:")
255254 yield lambda: c
256255 c.clear()
257256
258257 def test_compat(self, c):
259 assert c._client.set(c.key_prefix + 'foo', 'Awesome')
260 assert c.get('foo') == b'Awesome'
261 assert c._client.set(c.key_prefix + 'foo', '42')
262 assert c.get('foo') == 42
258 assert c._client.set(c.key_prefix + "foo", "Awesome")
259 assert c.get("foo") == b"Awesome"
260 assert c._client.set(c.key_prefix + "foo", "42")
261 assert c.get("foo") == 42
263262
264263 def test_empty_host(self):
265264 with pytest.raises(ValueError) as exc_info:
266265 cache.RedisCache(host=None)
267 assert text_type(exc_info.value) == 'RedisCache host parameter may not be None'
268
269
270 class TestMemcachedCache(CacheTests):
266 assert text_type(exc_info.value) == "RedisCache host parameter may not be None"
267
268
269 class TestMemcachedCache(GenericCacheTests):
271270 _can_use_fast_sleep = False
272
273 @pytest.fixture(scope='class', autouse=True)
274 def requirements(self, subprocess):
271 _guaranteed_deletes = False
272
273 @pytest.fixture(scope="class", autouse=True)
274 def requirements(self, xprocess):
275275 if memcache is None:
276276 pytest.skip(
277 'Python package for memcache is not installed. Need one of '
277 "Python package for memcache is not installed. Need one of "
278278 '"pylibmc", "google.appengine", or "memcache".'
279279 )
280280
281281 def prepare(cwd):
282 return '', ['memcached']
282 return "", ["memcached"]
283283
284284 try:
285 subprocess.ensure('memcached', prepare)
285 xprocess.ensure("memcached", prepare)
286286 except IOError as e:
287287 # xprocess raises FileNotFoundError
288288 if e.errno == errno.ENOENT:
289 pytest.skip('Memcached is not installed.')
289 pytest.skip("Memcached is not installed.")
290290 else:
291291 raise
292292
293293 yield
294 subprocess.getinfo('memcached').terminate()
295
296 @pytest.fixture
297 def make_cache(self):
298 c = cache.MemcachedCache(key_prefix='werkzeug-test-case:')
294 xprocess.getinfo("memcached").terminate()
295
296 @pytest.fixture
297 def make_cache(self):
298 c = cache.MemcachedCache(key_prefix="werkzeug-test-case:")
299299 yield lambda: c
300300 c.clear()
301301
302302 def test_compat(self, c):
303 assert c._client.set(c.key_prefix + 'foo', 'bar')
304 assert c.get('foo') == 'bar'
303 assert c._client.set(c.key_prefix + "foo", "bar")
304 assert c.get("foo") == "bar"
305305
306306 def test_huge_timeouts(self, c):
307307 # Timeouts greater than epoch are interpreted as POSIX timestamps
308308 # (i.e. not relative to now, but relative to epoch)
309309 epoch = 2592000
310 c.set('foo', 'bar', epoch + 100)
311 assert c.get('foo') == 'bar'
312
313
314 class TestUWSGICache(CacheTests):
310 c.set("foo", "bar", epoch + 100)
311 assert c.get("foo") == "bar"
312
313
314 class TestUWSGICache(GenericCacheTests):
315315 _can_use_fast_sleep = False
316
317 @pytest.fixture(scope='class', autouse=True)
316 _guaranteed_deletes = False
317
318 @pytest.fixture(scope="class", autouse=True)
318319 def requirements(self):
319320 try:
320321 import uwsgi # NOQA
321322 except ImportError:
322323 pytest.skip(
323 'Python "uwsgi" package is only available when running '
324 'inside uWSGI.'
325 "inside uWSGI."
325326 )
326327
327328 @pytest.fixture
328329 def make_cache(self):
329 c = cache.UWSGICache(cache='werkzeugtest')
330 c = cache.UWSGICache(cache="werkzeugtest")
330331 yield lambda: c
331332 c.clear()
333
334
335 class TestNullCache(CacheTestsBase):
336 @pytest.fixture(scope="class", autouse=True)
337 def make_cache(self):
338 return cache.NullCache
339
340 def test_has(self, c):
341 assert not c.has("foo")
44
55 Tests the cache system
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import datetime
11
1112 import pytest
1213
13 from werkzeug.contrib.atom import format_iso8601, AtomFeed, FeedEntry
14 from werkzeug.contrib.atom import AtomFeed
15 from werkzeug.contrib.atom import FeedEntry
16 from werkzeug.contrib.atom import format_iso8601
1417
1518
1619 class TestAtomFeed(object):
2427
2528 def test_atom_title_no_id(self):
2629 with pytest.raises(ValueError):
27 AtomFeed(title='test_title')
30 AtomFeed(title="test_title")
2831
2932 def test_atom_add_one(self):
30 a = AtomFeed(title='test_title', id=1)
31 f = FeedEntry(
32 title='test_title', id=1, updated=datetime.datetime.now())
33 a = AtomFeed(title="test_title", id=1)
34 f = FeedEntry(title="test_title", id=1, updated=datetime.datetime.now())
3335 assert len(a.entries) == 0
3436 a.add(f)
3537 assert len(a.entries) == 1
3638
3739 def test_atom_add_one_kwargs(self):
38 a = AtomFeed(title='test_title', id=1)
40 a = AtomFeed(title="test_title", id=1)
3941 assert len(a.entries) == 0
40 a.add(title='test_title', id=1, updated=datetime.datetime.now())
42 a.add(title="test_title", id=1, updated=datetime.datetime.now())
4143 assert len(a.entries) == 1
4244 assert isinstance(a.entries[0], FeedEntry)
4345
4446 def test_atom_to_str(self):
4547 updated_time = datetime.datetime.now()
46 expected_repr = '''
48 expected_repr = """
4749 <?xml version="1.0" encoding="utf-8"?>
4850 <feed xmlns="http://www.w3.org/2005/Atom">
4951 <title type="text">test_title</title>
5153 <updated>%s</updated>
5254 <generator>Werkzeug</generator>
5355 </feed>
54 ''' % format_iso8601(updated_time)
55 a = AtomFeed(title='test_title', id=1, updated=updated_time)
56 assert str(a).strip().replace(' ', '') == \
57 expected_repr.strip().replace(' ', '')
56 """ % format_iso8601(
57 updated_time
58 )
59 a = AtomFeed(title="test_title", id=1, updated=updated_time)
60 assert str(a).strip().replace(" ", "") == expected_repr.strip().replace(" ", "")
5861
5962
6063 class TestFeedEntry(object):
6871
6972 def test_feed_entry_no_id(self):
7073 with pytest.raises(ValueError):
71 FeedEntry(title='test_title')
74 FeedEntry(title="test_title")
7275
7376 def test_feed_entry_no_updated(self):
7477 with pytest.raises(ValueError):
75 FeedEntry(title='test_title', id=1)
78 FeedEntry(title="test_title", id=1)
7679
7780 def test_feed_entry_to_str(self):
7881 updated_time = datetime.datetime.now()
79 expected_feed_entry_str = '''
82 expected_feed_entry_str = """
8083 <entry>
8184 <title type="text">test_title</title>
8285 <id>1</id>
8386 <updated>%s</updated>
8487 </entry>
85 ''' % format_iso8601(updated_time)
88 """ % format_iso8601(
89 updated_time
90 )
8691
87 f = FeedEntry(title='test_title', id=1, updated=updated_time)
88 assert str(f).strip().replace(' ', '') == \
89 expected_feed_entry_str.strip().replace(' ', '')
92 f = FeedEntry(title="test_title", id=1, updated=updated_time)
93 assert str(f).strip().replace(
94 " ", ""
95 ) == expected_feed_entry_str.strip().replace(" ", "")
9096
9197
9298 def test_format_iso8601():
9399 # naive datetime should be treated as utc
94100 dt = datetime.datetime(2014, 8, 31, 2, 5, 6)
95 assert format_iso8601(dt) == '2014-08-31T02:05:06Z'
101 assert format_iso8601(dt) == "2014-08-31T02:05:06Z"
96102
97103 # tz-aware datetime
98104 dt = datetime.datetime(2014, 8, 31, 11, 5, 6, tzinfo=KST())
99 assert format_iso8601(dt) == '2014-08-31T11:05:06+09:00'
105 assert format_iso8601(dt) == "2014-08-31T11:05:06+09:00"
100106
101107
102108 class KST(datetime.tzinfo):
107113 return datetime.timedelta(hours=9)
108114
109115 def tzname(self, dt):
110 return 'KST'
116 return "KST"
111117
112118 def dst(self, dt):
113119 return datetime.timedelta(0)
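The atom tests above hinge on `format_iso8601` treating a naive datetime as UTC and rendering an aware one with its numeric offset. A minimal sketch of that behavior (an illustrative stand-in, not Werkzeug's actual implementation):

```python
import datetime


def format_iso8601(dt):
    # Sketch of the behavior the tests above exercise: naive datetimes
    # are assumed to be UTC and get a trailing "Z"; aware datetimes
    # render their numeric UTC offset as +HH:MM / -HH:MM.
    base = dt.strftime("%Y-%m-%dT%H:%M:%S")
    if dt.tzinfo is None:
        return base + "Z"
    total = int(dt.utcoffset().total_seconds())
    sign = "+" if total >= 0 else "-"
    total = abs(total)
    return base + "%s%02d:%02d" % (sign, total // 3600, (total % 3600) // 60)
```

This matches the two assertions in `test_format_iso8601`: `2014-08-31T02:05:06Z` for the naive case and `+09:00` for a KST-aware datetime.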
+0
-288
tests/contrib/test_cache.py
0 # -*- coding: utf-8 -*-
1 """
2 tests.cache
3 ~~~~~~~~~~~
4
5 Tests the cache system
6
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
9 """
10 import pytest
11 import os
12 import random
13
14 from werkzeug.contrib import cache
15
16 try:
17 import redis
18 except ImportError:
19 redis = None
20
21 try:
22 import pylibmc as memcache
23 except ImportError:
24 try:
25 from google.appengine.api import memcache
26 except ImportError:
27 try:
28 import memcache
29 except ImportError:
30 memcache = None
31
32
33 class CacheTests(object):
34 _can_use_fast_sleep = True
35
36 @pytest.fixture
37 def make_cache(self):
38 '''Return a cache class or factory.'''
39 raise NotImplementedError()
40
41 @pytest.fixture
42 def fast_sleep(self, monkeypatch):
43 if self._can_use_fast_sleep:
44 def sleep(delta):
45 orig_time = cache.time
46 monkeypatch.setattr(cache, 'time', lambda: orig_time() + delta)
47
48 return sleep
49 else:
50 import time
51 return time.sleep
52
53 @pytest.fixture
54 def c(self, make_cache):
55 '''Return a cache instance.'''
56 return make_cache()
57
58 def test_generic_get_dict(self, c):
59 assert c.set('a', 'a')
60 assert c.set('b', 'b')
61 d = c.get_dict('a', 'b')
62 assert 'a' in d
63 assert 'a' == d['a']
64 assert 'b' in d
65 assert 'b' == d['b']
66
67 def test_generic_set_get(self, c):
68 for i in range(3):
69 assert c.set(str(i), i * i)
70 for i in range(3):
71 result = c.get(str(i))
72 assert result == i * i, result
73
74 def test_generic_get_set(self, c):
75 assert c.set('foo', ['bar'])
76 assert c.get('foo') == ['bar']
77
78 def test_generic_get_many(self, c):
79 assert c.set('foo', ['bar'])
80 assert c.set('spam', 'eggs')
81 assert list(c.get_many('foo', 'spam')) == [['bar'], 'eggs']
82
83 def test_generic_set_many(self, c):
84 assert c.set_many({'foo': 'bar', 'spam': ['eggs']})
85 assert c.get('foo') == 'bar'
86 assert c.get('spam') == ['eggs']
87
88 def test_generic_expire(self, c, fast_sleep):
89 assert c.set('foo', 'bar', 1)
90 fast_sleep(5)
91 assert c.get('foo') is None
92
93 def test_generic_add(self, c):
94 # sanity check that add() works like set()
95 assert c.add('foo', 'bar')
96 assert c.get('foo') == 'bar'
97 assert not c.add('foo', 'qux')
98 assert c.get('foo') == 'bar'
99
100 def test_generic_delete(self, c):
101 assert c.add('foo', 'bar')
102 assert c.get('foo') == 'bar'
103 assert c.delete('foo')
104 assert c.get('foo') is None
105
106 def test_generic_delete_many(self, c):
107 assert c.add('foo', 'bar')
108 assert c.add('spam', 'eggs')
109 assert c.delete_many('foo', 'spam')
110 assert c.get('foo') is None
111 assert c.get('spam') is None
112
113 def test_generic_inc_dec(self, c):
114 assert c.set('foo', 1)
115 assert c.inc('foo') == c.get('foo') == 2
116 assert c.dec('foo') == c.get('foo') == 1
117 assert c.delete('foo')
118
119 def test_generic_true_false(self, c):
120 assert c.set('foo', True)
121 assert c.get('foo') in (True, 1)
122 assert c.set('bar', False)
123 assert c.get('bar') in (False, 0)
124
125 def test_generic_no_timeout(self, c, fast_sleep):
126 # Timeouts of zero should cause the cache to never expire
127 c.set('foo', 'bar', 0)
128 fast_sleep(random.randint(1, 5))
129 assert c.get('foo') == 'bar'
130
131 def test_generic_timeout(self, c, fast_sleep):
132 # Check that cache expires when the timeout is reached
133 timeout = random.randint(1, 5)
134 c.set('foo', 'bar', timeout)
135 assert c.get('foo') == 'bar'
136 # sleep a bit longer than timeout to ensure there are no
137 # race conditions
138 fast_sleep(timeout + 5)
139 assert c.get('foo') is None
140
141 def test_generic_has(self, c):
142 assert c.has('foo') in (False, 0)
143 assert c.has('spam') in (False, 0)
144 assert c.set('foo', 'bar')
145 assert c.has('foo') in (True, 1)
146 assert c.has('spam') in (False, 0)
147 c.delete('foo')
148 assert c.has('foo') in (False, 0)
149 assert c.has('spam') in (False, 0)
150
151
152 class TestSimpleCache(CacheTests):
153
154 @pytest.fixture
155 def make_cache(self):
156 return cache.SimpleCache
157
158 def test_purge(self):
159 c = cache.SimpleCache(threshold=2)
160 c.set('a', 'a')
161 c.set('b', 'b')
162 c.set('c', 'c')
163 c.set('d', 'd')
164 # Cache purges old items *before* it sets new ones.
165 assert len(c._cache) == 3
166
167
168 class TestFileSystemCache(CacheTests):
169
170 @pytest.fixture
171 def make_cache(self, tmpdir):
172 return lambda **kw: cache.FileSystemCache(cache_dir=str(tmpdir), **kw)
173
174 def test_filesystemcache_prune(self, make_cache):
175 THRESHOLD = 13
176 c = make_cache(threshold=THRESHOLD)
177 for i in range(2 * THRESHOLD):
178 assert c.set(str(i), i)
179 cache_files = os.listdir(c._path)
180 assert len(cache_files) <= THRESHOLD
181
182 def test_filesystemcache_clear(self, c):
183 assert c.set('foo', 'bar')
184 cache_files = os.listdir(c._path)
185 # count = 2 because of the count file
186 assert len(cache_files) == 2
187 assert c.clear()
188
189 # The only file remaining is the count file
190 cache_files = os.listdir(c._path)
191 assert os.listdir(c._path) == [
192 os.path.basename(c._get_filename(c._fs_count_file))]
193
194
195 # Don't use pytest marker
196 # https://bitbucket.org/hpk42/pytest/issue/568
197 if redis is not None:
198 class TestRedisCache(CacheTests):
199 _can_use_fast_sleep = False
200
201 @pytest.fixture(params=[
202 ([], dict()),
203 ([redis.Redis()], dict()),
204 ([redis.StrictRedis()], dict())
205 ])
206 def make_cache(self, xprocess, request):
207 def preparefunc(cwd):
208 return 'Ready to accept connections', ['redis-server']
209
210 xprocess.ensure('redis_server', preparefunc)
211 args, kwargs = request.param
212 c = cache.RedisCache(*args, key_prefix='werkzeug-test-case:',
213 **kwargs)
214 request.addfinalizer(c.clear)
215 return lambda: c
216
217 def test_compat(self, c):
218 assert c._client.set(c.key_prefix + 'foo', 'Awesome')
219 assert c.get('foo') == b'Awesome'
220 assert c._client.set(c.key_prefix + 'foo', '42')
221 assert c.get('foo') == 42
222
223
224 # Don't use pytest marker
225 # https://bitbucket.org/hpk42/pytest/issue/568
226 if memcache is not None:
227 class TestMemcachedCache(CacheTests):
228 _can_use_fast_sleep = False
229
230 @pytest.fixture
231 def make_cache(self, xprocess, request):
232 def preparefunc(cwd):
233 return '', ['memcached']
234
235 xprocess.ensure('memcached', preparefunc)
236 c = cache.MemcachedCache(key_prefix='werkzeug-test-case:')
237 request.addfinalizer(c.clear)
238 return lambda: c
239
240 def test_compat(self, c):
241 assert c._client.set(c.key_prefix + 'foo', 'bar')
242 assert c.get('foo') == 'bar'
243
244 def test_huge_timeouts(self, c):
245 # Timeouts greater than epoch are interpreted as POSIX timestamps
246 # (i.e. not relative to now, but relative to epoch)
247 import random
248 epoch = 2592000
249 timeout = epoch + random.random() * 100
250 c.set('foo', 'bar', timeout)
251 assert c.get('foo') == 'bar'
252
253
254 def _running_in_uwsgi():
255 try:
256 import uwsgi # NOQA
257 except ImportError:
258 return False
259 else:
260 return True
261
262
263 @pytest.mark.skipif(not _running_in_uwsgi(),
264 reason="uWSGI module can't be imported outside of uWSGI")
265 class TestUWSGICache(CacheTests):
266 _can_use_fast_sleep = False
267
268 @pytest.fixture
269 def make_cache(self, xprocess, request):
270 c = cache.UWSGICache(cache='werkzeugtest')
271 request.addfinalizer(c.clear)
272 return lambda: c
273
274
275 class TestNullCache(object):
276
277 @pytest.fixture
278 def make_cache(self):
279 return cache.NullCache
280
281 @pytest.fixture
282 def c(self, make_cache):
283 return make_cache()
284
285 def test_nullcache_has(self, c):
286 assert c.has('foo') in (False, 0)
287 assert c.has('spam') in (False, 0)
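The deleted `test_cache.py` above exercised one uniform key/value interface across all backends: `set`/`get`/`add`/`delete`/`inc`/`dec` with optional timeouts, where a timeout of zero means "never expire". A toy in-memory version of that interface, for illustration only (the real backends were deprecated in 0.15 in favor of the external `cachelib` package):

```python
import time


class SimpleCacheSketch:
    """Toy in-memory cache with the surface the removed tests exercise.
    Illustrative sketch only, not the contrib implementation."""

    def __init__(self, default_timeout=300):
        self.default_timeout = default_timeout
        self._store = {}

    def _expires(self, timeout):
        if timeout is None:
            timeout = self.default_timeout
        # A timeout of 0 means the entry never expires.
        return None if timeout == 0 else time.time() + timeout

    def set(self, key, value, timeout=None):
        self._store[key] = (self._expires(timeout), value)
        return True

    def add(self, key, value, timeout=None):
        # add() only writes when the key is absent, unlike set().
        if self.has(key):
            return False
        return self.set(key, value, timeout)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        expires, value = item
        if expires is not None and expires < time.time():
            del self._store[key]
            return None
        return value

    def has(self, key):
        return self.get(key) is not None

    def delete(self, key):
        return self._store.pop(key, None) is not None

    def inc(self, key, delta=1):
        value = (self.get(key) or 0) + delta
        self.set(key, value)
        return value

    def dec(self, key, delta=1):
        return self.inc(key, -delta)
```

The generic tests above (`test_generic_add`, `test_generic_inc_dec`, etc.) pass against this sketch as well, which is the point of the shared `CacheTests` base class.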
tests/contrib/test_fixers.py
0 # -*- coding: utf-8 -*-
1 """
2 tests.fixers
3 ~~~~~~~~~~~~
4
5 Server / Browser fixers.
6
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
9 """
10 from tests import strict_eq
0 from werkzeug.contrib import fixers
111 from werkzeug.datastructures import ResponseCacheControl
122 from werkzeug.http import parse_cache_control_header
13
14 from werkzeug.test import create_environ, Client
15 from werkzeug.wrappers import Request, Response
16 from werkzeug.contrib import fixers
17 from werkzeug.utils import redirect
3 from werkzeug.test import Client
4 from werkzeug.test import create_environ
5 from werkzeug.wrappers import Request
6 from werkzeug.wrappers import Response
187
198
209 @Request.application
2110 def path_check_app(request):
22 return Response('PATH_INFO: %s\nSCRIPT_NAME: %s' % (
23 request.environ.get('PATH_INFO', ''),
24 request.environ.get('SCRIPT_NAME', '')
25 ))
11 return Response(
12 "PATH_INFO: %s\nSCRIPT_NAME: %s"
13 % (request.environ.get("PATH_INFO", ""), request.environ.get("SCRIPT_NAME", ""))
14 )
2615
2716
2817 class TestServerFixer(object):
29
3018 def test_cgi_root_fix(self):
3119 app = fixers.CGIRootFix(path_check_app)
3220 response = Response.from_app(
33 app,
34 dict(create_environ(),
35 SCRIPT_NAME='/foo',
36 PATH_INFO='/bar',
37 SERVER_SOFTWARE='lighttpd/1.4.27'))
38 assert response.get_data() == b'PATH_INFO: /foo/bar\nSCRIPT_NAME: '
21 app, dict(create_environ(), SCRIPT_NAME="/foo", PATH_INFO="/bar")
22 )
23 assert response.get_data() == b"PATH_INFO: /bar\nSCRIPT_NAME: "
3924
4025 def test_cgi_root_fix_custom_app_root(self):
41 app = fixers.CGIRootFix(path_check_app, app_root='/baz/poop/')
26 app = fixers.CGIRootFix(path_check_app, app_root="/baz/")
4227 response = Response.from_app(
43 app,
44 dict(create_environ(),
45 SCRIPT_NAME='/foo',
46 PATH_INFO='/bar'))
47 assert response.get_data() == b'PATH_INFO: /foo/bar\nSCRIPT_NAME: baz/poop'
28 app, dict(create_environ(), SCRIPT_NAME="/foo", PATH_INFO="/bar")
29 )
30 assert response.get_data() == b"PATH_INFO: /bar\nSCRIPT_NAME: baz"
4831
4932 def test_path_info_from_request_uri_fix(self):
5033 app = fixers.PathInfoFromRequestUriFix(path_check_app)
51 for key in 'REQUEST_URI', 'REQUEST_URL', 'UNENCODED_URL':
52 env = dict(create_environ(), SCRIPT_NAME='/test', PATH_INFO='/?????')
53 env[key] = '/test/foo%25bar?drop=this'
34 for key in "REQUEST_URI", "REQUEST_URL", "UNENCODED_URL":
35 env = dict(create_environ(), SCRIPT_NAME="/test", PATH_INFO="/?????")
36 env[key] = "/test/foo%25bar?drop=this"
5437 response = Response.from_app(app, env)
55 assert response.get_data() == b'PATH_INFO: /foo%bar\nSCRIPT_NAME: /test'
56
57 def test_proxy_fix(self):
58 @Request.application
59 def app(request):
60 return Response('%s|%s' % (
61 request.remote_addr,
62 # do not use request.host as this fixes too :)
63 request.environ['HTTP_HOST']
64 ))
65 app = fixers.ProxyFix(app, num_proxies=2)
66 environ = dict(
67 create_environ(),
68 HTTP_X_FORWARDED_PROTO="https",
69 HTTP_X_FORWARDED_HOST='example.com',
70 HTTP_X_FORWARDED_FOR='1.2.3.4, 5.6.7.8',
71 REMOTE_ADDR='127.0.0.1',
72 HTTP_HOST='fake'
73 )
74
75 response = Response.from_app(app, environ)
76
77 assert response.get_data() == b'1.2.3.4|example.com'
78
79 # And we must check that if it is a redirection it is
80 # correctly done:
81
82 redirect_app = redirect('/foo/bar.hml')
83 response = Response.from_app(redirect_app, environ)
84
85 wsgi_headers = response.get_wsgi_headers(environ)
86 assert wsgi_headers['Location'] == 'https://example.com/foo/bar.hml'
87
88 def test_proxy_fix_weird_enum(self):
89 @fixers.ProxyFix
90 @Request.application
91 def app(request):
92 return Response(request.remote_addr)
93 environ = dict(
94 create_environ(),
95 HTTP_X_FORWARDED_FOR=',',
96 REMOTE_ADDR='127.0.0.1',
97 )
98
99 response = Response.from_app(app, environ)
100 strict_eq(response.get_data(), b'127.0.0.1')
38 assert response.get_data() == b"PATH_INFO: /foo%bar\nSCRIPT_NAME: /test"
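The removed `test_proxy_fix` hunk above reflects ProxyFix moving out of `werkzeug.contrib` in 0.15. The core idea it tested, rewriting `REMOTE_ADDR` from the `X-Forwarded-For` chain while trusting a fixed number of hops, can be sketched as plain WSGI middleware (hypothetical class name, not the real middleware):

```python
class OneHopProxyFixSketch:
    """Toy version of the X-Forwarded-For handling the removed test
    exercised: trust exactly one proxy hop and rewrite REMOTE_ADDR from
    the forwarded chain. Illustrative only."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
        # Drop empty entries so a degenerate header like "," (covered by
        # test_proxy_fix_weird_enum above) leaves REMOTE_ADDR untouched.
        parts = [p.strip() for p in forwarded.split(",") if p.strip()]
        if parts:
            # The last entry was appended by the proxy we trust.
            environ["REMOTE_ADDR"] = parts[-1]
        return self.app(environ, start_response)
```

The 0.15 replacement additionally validates proto, host, and port headers; this sketch covers only the address rewriting the deleted assertions checked.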
10139
10240 def test_header_rewriter_fix(self):
10341 @Request.application
10442 def application(request):
105 return Response("", headers=[
106 ('X-Foo', 'bar')
107 ])
108 application = fixers.HeaderRewriterFix(application, ('X-Foo',), (('X-Bar', '42'),))
43 return Response("", headers=[("X-Foo", "bar")])
44
45 application = fixers.HeaderRewriterFix(
46 application, ("X-Foo",), (("X-Bar", "42"),)
47 )
10948 response = Response.from_app(application, create_environ())
110 assert response.headers['Content-Type'] == 'text/plain; charset=utf-8'
111 assert 'X-Foo' not in response.headers
112 assert response.headers['X-Bar'] == '42'
49 assert response.headers["Content-Type"] == "text/plain; charset=utf-8"
50 assert "X-Foo" not in response.headers
51 assert response.headers["X-Bar"] == "42"
11352
11453
11554 class TestBrowserFixer(object):
116
11755 def test_ie_fixes(self):
11856 @fixers.InternetExplorerFix
11957 @Request.application
12058 def application(request):
121 response = Response('binary data here', mimetype='application/vnd.ms-excel')
122 response.headers['Vary'] = 'Cookie'
123 response.headers['Content-Disposition'] = 'attachment; filename=foo.xls'
59 response = Response("binary data here", mimetype="application/vnd.ms-excel")
60 response.headers["Vary"] = "Cookie"
61 response.headers["Content-Disposition"] = "attachment; filename=foo.xls"
12462 return response
12563
12664 c = Client(application, Response)
127 response = c.get('/', headers=[
128 ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)')
129 ])
65 response = c.get(
66 "/",
67 headers=[
68 ("User-Agent", "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)")
69 ],
70 )
13071
13172 # IE gets no vary
132 assert response.get_data() == b'binary data here'
133 assert 'vary' not in response.headers
134 assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
135 assert response.headers['content-type'] == 'application/vnd.ms-excel'
73 assert response.get_data() == b"binary data here"
74 assert "vary" not in response.headers
75 assert response.headers["content-disposition"] == "attachment; filename=foo.xls"
76 assert response.headers["content-type"] == "application/vnd.ms-excel"
13677
13778 # other browsers do
13879 c = Client(application, Response)
139 response = c.get('/')
140 assert response.get_data() == b'binary data here'
141 assert 'vary' in response.headers
80 response = c.get("/")
81 assert response.get_data() == b"binary data here"
82 assert "vary" in response.headers
14283
14384 cc = ResponseCacheControl()
14485 cc.no_cache = True
14687 @fixers.InternetExplorerFix
14788 @Request.application
14889 def application(request):
149 response = Response('binary data here', mimetype='application/vnd.ms-excel')
150 response.headers['Pragma'] = ', '.join(pragma)
151 response.headers['Cache-Control'] = cc.to_header()
152 response.headers['Content-Disposition'] = 'attachment; filename=foo.xls'
90 response = Response("binary data here", mimetype="application/vnd.ms-excel")
91 response.headers["Pragma"] = ", ".join(pragma)
92 response.headers["Cache-Control"] = cc.to_header()
93 response.headers["Content-Disposition"] = "attachment; filename=foo.xls"
15394 return response
15495
15596 # IE has no pragma or cache control
156 pragma = ('no-cache',)
97 pragma = ("no-cache",)
15798 c = Client(application, Response)
158 response = c.get('/', headers=[
159 ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)')
160 ])
161 assert response.get_data() == b'binary data here'
162 assert 'pragma' not in response.headers
163 assert 'cache-control' not in response.headers
164 assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
99 response = c.get(
100 "/",
101 headers=[
102 ("User-Agent", "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)")
103 ],
104 )
105 assert response.get_data() == b"binary data here"
106 assert "pragma" not in response.headers
107 assert "cache-control" not in response.headers
108 assert response.headers["content-disposition"] == "attachment; filename=foo.xls"
165109
166110 # IE has simplified pragma
167 pragma = ('no-cache', 'x-foo')
111 pragma = ("no-cache", "x-foo")
168112 cc.proxy_revalidate = True
169 response = c.get('/', headers=[
170 ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)')
171 ])
172 assert response.get_data() == b'binary data here'
173 assert response.headers['pragma'] == 'x-foo'
174 assert response.headers['cache-control'] == 'proxy-revalidate'
175 assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
113 response = c.get(
114 "/",
115 headers=[
116 ("User-Agent", "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)")
117 ],
118 )
119 assert response.get_data() == b"binary data here"
120 assert response.headers["pragma"] == "x-foo"
121 assert response.headers["cache-control"] == "proxy-revalidate"
122 assert response.headers["content-disposition"] == "attachment; filename=foo.xls"
176123
177124 # regular browsers get everything
178 response = c.get('/')
179 assert response.get_data() == b'binary data here'
180 assert response.headers['pragma'] == 'no-cache, x-foo'
181 cc = parse_cache_control_header(response.headers['cache-control'],
182 cls=ResponseCacheControl)
125 response = c.get("/")
126 assert response.get_data() == b"binary data here"
127 assert response.headers["pragma"] == "no-cache, x-foo"
128 cc = parse_cache_control_header(
129 response.headers["cache-control"], cls=ResponseCacheControl
130 )
183131 assert cc.no_cache
184132 assert cc.proxy_revalidate
185 assert response.headers['content-disposition'] == 'attachment; filename=foo.xls'
133 assert response.headers["content-disposition"] == "attachment; filename=foo.xls"
tests/contrib/test_iterio.py
44
55 Tests the iterio object.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import pytest
1111
12 from tests import strict_eq
13 from werkzeug.contrib.iterio import IterIO, greenlet
12 from .. import strict_eq
13 from werkzeug.contrib.iterio import greenlet
14 from werkzeug.contrib.iterio import IterIO
1415
1516
1617 class TestIterO(object):
17
1818 def test_basic_native(self):
1919 io = IterIO(["Hello", "World", "1", "2", "3"])
20 io.seek(0)
2021 assert io.tell() == 0
2122 assert io.read(2) == "He"
2223 assert io.tell() == 2
3233 assert io.closed
3334
3435 io = IterIO(["Hello\n", "World!"])
35 assert io.readline() == 'Hello\n'
36 assert io._buf == 'Hello\n'
37 assert io.read() == 'World!'
38 assert io._buf == 'Hello\nWorld!'
36 assert io.readline() == "Hello\n"
37 assert io._buf == "Hello\n"
38 assert io.read() == "World!"
39 assert io._buf == "Hello\nWorld!"
3940 assert io.tell() == 12
4041 io.seek(0)
41 assert io.readlines() == ['Hello\n', 'World!']
42 assert io.readlines() == ["Hello\n", "World!"]
4243
43 io = IterIO(['Line one\nLine ', 'two\nLine three'])
44 assert list(io) == ['Line one\n', 'Line two\n', 'Line three']
45 io = IterIO(iter('Line one\nLine two\nLine three'))
46 assert list(io) == ['Line one\n', 'Line two\n', 'Line three']
47 io = IterIO(['Line one\nL', 'ine', ' two', '\nLine three'])
48 assert list(io) == ['Line one\n', 'Line two\n', 'Line three']
44 io = IterIO(["Line one\nLine ", "two\nLine three"])
45 assert list(io) == ["Line one\n", "Line two\n", "Line three"]
46 io = IterIO(iter("Line one\nLine two\nLine three"))
47 assert list(io) == ["Line one\n", "Line two\n", "Line three"]
48 io = IterIO(["Line one\nL", "ine", " two", "\nLine three"])
49 assert list(io) == ["Line one\n", "Line two\n", "Line three"]
4950
5051 io = IterIO(["foo\n", "bar"])
5152 io.seek(-4, 2)
52 assert io.read(4) == '\nbar'
53 assert io.read(4) == "\nbar"
5354
5455 pytest.raises(IOError, io.seek, 2, 100)
5556 io.close()
7273 assert io.closed
7374
7475 io = IterIO([b"Hello\n", b"World!"])
75 assert io.readline() == b'Hello\n'
76 assert io._buf == b'Hello\n'
77 assert io.read() == b'World!'
78 assert io._buf == b'Hello\nWorld!'
76 assert io.readline() == b"Hello\n"
77 assert io._buf == b"Hello\n"
78 assert io.read() == b"World!"
79 assert io._buf == b"Hello\nWorld!"
7980 assert io.tell() == 12
8081 io.seek(0)
81 assert io.readlines() == [b'Hello\n', b'World!']
82 assert io.readlines() == [b"Hello\n", b"World!"]
8283
8384 io = IterIO([b"foo\n", b"bar"])
8485 io.seek(-4, 2)
85 assert io.read(4) == b'\nbar'
86 assert io.read(4) == b"\nbar"
8687
8788 pytest.raises(IOError, io.seek, 2, 100)
8889 io.close()
105106 assert io.closed
106107
107108 io = IterIO([u"Hello\n", u"World!"])
108 assert io.readline() == u'Hello\n'
109 assert io._buf == u'Hello\n'
110 assert io.read() == u'World!'
111 assert io._buf == u'Hello\nWorld!'
109 assert io.readline() == u"Hello\n"
110 assert io._buf == u"Hello\n"
111 assert io.read() == u"World!"
112 assert io._buf == u"Hello\nWorld!"
112113 assert io.tell() == 12
113114 io.seek(0)
114 assert io.readlines() == [u'Hello\n', u'World!']
115 assert io.readlines() == [u"Hello\n", u"World!"]
115116
116117 io = IterIO([u"foo\n", u"bar"])
117118 io.seek(-4, 2)
118 assert io.read(4) == u'\nbar'
119 assert io.read(4) == u"\nbar"
119120
120121 pytest.raises(IOError, io.seek, 2, 100)
121122 io.close()
123124
124125 def test_sentinel_cases(self):
125126 io = IterIO([])
126 strict_eq(io.read(), '')
127 io = IterIO([], b'')
128 strict_eq(io.read(), b'')
129 io = IterIO([], u'')
130 strict_eq(io.read(), u'')
127 strict_eq(io.read(), "")
128 io = IterIO([], b"")
129 strict_eq(io.read(), b"")
130 io = IterIO([], u"")
131 strict_eq(io.read(), u"")
131132
132133 io = IterIO([])
133 strict_eq(io.read(), '')
134 io = IterIO([b''])
135 strict_eq(io.read(), b'')
136 io = IterIO([u''])
137 strict_eq(io.read(), u'')
134 strict_eq(io.read(), "")
135 io = IterIO([b""])
136 strict_eq(io.read(), b"")
137 io = IterIO([u""])
138 strict_eq(io.read(), u"")
138139
139140 io = IterIO([])
140 strict_eq(io.readline(), '')
141 io = IterIO([], b'')
142 strict_eq(io.readline(), b'')
143 io = IterIO([], u'')
144 strict_eq(io.readline(), u'')
141 strict_eq(io.readline(), "")
142 io = IterIO([], b"")
143 strict_eq(io.readline(), b"")
144 io = IterIO([], u"")
145 strict_eq(io.readline(), u"")
145146
146147 io = IterIO([])
147 strict_eq(io.readline(), '')
148 io = IterIO([b''])
149 strict_eq(io.readline(), b'')
150 io = IterIO([u''])
151 strict_eq(io.readline(), u'')
148 strict_eq(io.readline(), "")
149 io = IterIO([b""])
150 strict_eq(io.readline(), b"")
151 io = IterIO([u""])
152 strict_eq(io.readline(), u"")
152153
153154
154 @pytest.mark.skipif(greenlet is None, reason='Greenlet is not installed.')
155 @pytest.mark.skipif(greenlet is None, reason="Greenlet is not installed.")
155156 class TestIterI(object):
156
157157 def test_basic(self):
158158 def producer(out):
159 out.write('1\n')
160 out.write('2\n')
159 out.write("1\n")
160 out.write("2\n")
161161 out.flush()
162 out.write('3\n')
162 out.write("3\n")
163
163164 iterable = IterIO(producer)
164 assert next(iterable) == '1\n2\n'
165 assert next(iterable) == '3\n'
165 assert next(iterable) == "1\n2\n"
166 assert next(iterable) == "3\n"
166167 pytest.raises(StopIteration, next, iterable)
167168
168169 def test_sentinel_cases(self):
169170 def producer_dummy_flush(out):
170171 out.flush()
172
171173 iterable = IterIO(producer_dummy_flush)
172 strict_eq(next(iterable), '')
174 strict_eq(next(iterable), "")
173175
174176 def producer_empty(out):
175177 pass
178
176179 iterable = IterIO(producer_empty)
177180 pytest.raises(StopIteration, next, iterable)
178181
179 iterable = IterIO(producer_dummy_flush, b'')
180 strict_eq(next(iterable), b'')
181 iterable = IterIO(producer_dummy_flush, u'')
182 strict_eq(next(iterable), u'')
182 iterable = IterIO(producer_dummy_flush, b"")
183 strict_eq(next(iterable), b"")
184 iterable = IterIO(producer_dummy_flush, u"")
185 strict_eq(next(iterable), u"")
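The read-mode behavior the IterIO tests above exercise, wrapping an iterable of chunks behind a file-like `read()`/`readline()`, can be sketched in pure Python. This is an illustrative toy under assumed names, not the contrib implementation (it omits `seek`, `tell`, and the writer/greenlet side):

```python
class IterReaderSketch:
    """Toy read-only IterIO analogue: pull chunks lazily from an
    iterable into an internal buffer as reads demand them."""

    def __init__(self, iterable, sentinel=""):
        self._iter = iter(iterable)
        self._buf = sentinel  # sentinel fixes str vs bytes, as tested above
        self._pos = 0

    def _pull(self):
        # Append one more chunk to the buffer; returns False at EOF.
        try:
            self._buf += next(self._iter)
            return True
        except StopIteration:
            return False

    def read(self, n=-1):
        if n < 0:
            while self._pull():
                pass
            data = self._buf[self._pos:]
        else:
            while len(self._buf) - self._pos < n and self._pull():
                pass
            data = self._buf[self._pos:self._pos + n]
        self._pos += len(data)
        return data

    def readline(self):
        while "\n" not in self._buf[self._pos:] and self._pull():
            pass
        idx = self._buf.find("\n", self._pos)
        end = len(self._buf) if idx < 0 else idx + 1
        data = self._buf[self._pos:end]
        self._pos = end
        return data
```

Note how newlines may fall anywhere relative to chunk boundaries; the `["Line one\nL", "ine", " two", ...]` cases above test exactly that.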
tests/contrib/test_securecookie.py
44
55 Tests the secure cookie.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 import json
1011
12 import pytest
13
14 from werkzeug._compat import to_native
15 from werkzeug.contrib.securecookie import SecureCookie
1116 from werkzeug.utils import parse_cookie
12 from werkzeug.wrappers import Request, Response
13 from werkzeug.contrib.securecookie import SecureCookie
17 from werkzeug.wrappers import Request
18 from werkzeug.wrappers import Response
1419
1520
1621 def test_basic_support():
17 c = SecureCookie(secret_key=b'foo')
22 c = SecureCookie(secret_key=b"foo")
1823 assert c.new
1924 assert not c.modified
2025 assert not c.should_save
21 c['x'] = 42
26 c["x"] = 42
2227 assert c.modified
2328 assert c.should_save
2429 s = c.serialize()
2530
26 c2 = SecureCookie.unserialize(s, b'foo')
31 c2 = SecureCookie.unserialize(s, b"foo")
2732 assert c is not c2
2833 assert not c2.new
2934 assert not c2.modified
3035 assert not c2.should_save
3136 assert c2 == c
3237
33 c3 = SecureCookie.unserialize(s, b'wrong foo')
38 c3 = SecureCookie.unserialize(s, b"wrong foo")
3439 assert not c3.modified
3540 assert not c3.new
3641 assert c3 == {}
3742
38 c4 = SecureCookie({'x': 42}, 'foo')
43 c4 = SecureCookie({"x": 42}, "foo")
3944 c4_serialized = c4.serialize()
40 assert SecureCookie.unserialize(c4_serialized, 'foo') == c4
45 assert SecureCookie.unserialize(c4_serialized, "foo") == c4
4146
4247
4348 def test_wrapper_support():
4449 req = Request.from_values()
4550 resp = Response()
46 c = SecureCookie.load_cookie(req, secret_key=b'foo')
51 c = SecureCookie.load_cookie(req, secret_key=b"foo")
4752 assert c.new
48 c['foo'] = 42
49 assert c.secret_key == b'foo'
53 c["foo"] = 42
54 assert c.secret_key == b"foo"
5055 c.save_cookie(resp)
5156
52 req = Request.from_values(headers={
53 'Cookie': 'session="%s"' % parse_cookie(resp.headers['set-cookie'])['session']
54 })
55 c2 = SecureCookie.load_cookie(req, secret_key=b'foo')
57 req = Request.from_values(
58 headers={
59 "Cookie": 'session="%s"'
60 % parse_cookie(resp.headers["set-cookie"])["session"]
61 }
62 )
63 c2 = SecureCookie.load_cookie(req, secret_key=b"foo")
5664 assert not c2.new
5765 assert c2 == c
66
67
68 def test_pickle_deprecated():
69 with pytest.warns(UserWarning):
70 SecureCookie({"foo": "bar"}, "secret").serialize()
71
72
73 def test_json():
74 class JSONCompat(object):
75 dumps = staticmethod(json.dumps)
76
77 @staticmethod
78 def loads(s):
79 # json on Python < 3.6 fails on bytes
80 return json.loads(to_native(s, "utf8"))
81
82 class JSONSecureCookie(SecureCookie):
83 serialization_method = JSONCompat
84
85 secure = JSONSecureCookie({"foo": "bar"}, "secret").serialize()
86 data = JSONSecureCookie.unserialize(secure, "secret")
87 assert data == {"foo": "bar"}
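The serialize/unserialize round-trip tested above rests on appending a MAC so that tampering, or the wrong secret, yields an empty cookie instead of data. A minimal sketch of that scheme using JSON and HMAC (an assumed shape for illustration, not SecureCookie's actual wire format):

```python
import base64
import hashlib
import hmac
import json


def sign_cookie(data, secret):
    # Serialize, then append an HMAC over the payload.
    payload = base64.urlsafe_b64encode(json.dumps(data).encode("utf-8"))
    mac = hmac.new(secret, payload, hashlib.sha1).hexdigest().encode("ascii")
    return payload + b"." + mac


def unsign_cookie(value, secret):
    payload, _, mac = value.rpartition(b".")
    expected = hmac.new(secret, payload, hashlib.sha1).hexdigest().encode("ascii")
    if not hmac.compare_digest(mac, expected):
        # Bad signature: behave like the tests above expect of
        # SecureCookie and come back empty rather than raising.
        return {}
    return json.loads(base64.urlsafe_b64decode(payload).decode("utf-8"))
```

The `test_json` case above makes the same point at the library level: swapping pickle for JSON avoids executing attacker-controlled data if the secret ever leaks.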
tests/contrib/test_sessions.py
44
55 Added tests for the sessions.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import os
1111 from tempfile import gettempdir
2323 x = store.new()
2424 assert x.new
2525 assert not x.modified
26 x['foo'] = [1, 2, 3]
26 x["foo"] = [1, 2, 3]
2727 assert x.modified
2828 store.save(x)
2929
3232 assert not x2.modified
3333 assert x2 is not x
3434 assert x2 == x
35 x2['test'] = 3
35 x2["test"] = 3
3636 assert x2.modified
3737 assert not x2.new
3838 store.save(x2)
6666 def test_fs_session_lising(tmpdir):
6767 store = FilesystemSessionStore(str(tmpdir), renew_missing=True)
6868 sessions = set()
69 for x in range(10):
69 for _ in range(10):
7070 sess = store.new()
7171 store.save(sess)
7272 sessions.add(sess.sid)
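The filesystem session tests above boil down to one file per session id: new sessions get a fresh sid, `save` persists the dict, and listing enumerates saved sids. A toy JSON-backed analogue (hypothetical class, not the contrib store):

```python
import json
import os
import tempfile
import uuid


class FSSessionStoreSketch:
    """Toy analogue of FilesystemSessionStore: one JSON file per
    session id in a directory. Illustrative only."""

    def __init__(self, path):
        self.path = path

    def new(self):
        # A random sid plus an empty session dict.
        return uuid.uuid4().hex, {}

    def save(self, sid, data):
        with open(os.path.join(self.path, sid + ".sess"), "w") as f:
            json.dump(data, f)

    def get(self, sid):
        try:
            with open(os.path.join(self.path, sid + ".sess")) as f:
                return json.load(f)
        except IOError:
            # Missing sid: hand back a fresh, empty session.
            return {}

    def list(self):
        return [fn[:-5] for fn in os.listdir(self.path) if fn.endswith(".sess")]
```

The listing test above does the same dance: save ten sessions, then check `store.list()` returns exactly those sids.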
tests/contrib/test_wrappers.py
44
55 Added tests for the sessions.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10
11 from __future__ import with_statement
12
10 from werkzeug import routing
1311 from werkzeug.contrib import wrappers
14 from werkzeug import routing
15 from werkzeug.wrappers import Request, Response
16
17
18 def test_json_request_mixin():
19 class MyRequest(wrappers.JSONRequestMixin, Request):
20 pass
21 req = MyRequest.from_values(
22 data=u'{"foä": "bar"}'.encode('utf-8'),
23 content_type='text/json'
24 )
25 assert req.json == {u'foä': 'bar'}
12 from werkzeug.wrappers import Request
13 from werkzeug.wrappers import Response
2614
2715
2816 def test_reverse_slash_behavior():
2917 class MyRequest(wrappers.ReverseSlashBehaviorRequestMixin, Request):
3018 pass
31 req = MyRequest.from_values('/foo/bar', 'http://example.com/test')
32 assert req.url == 'http://example.com/test/foo/bar'
33 assert req.path == 'foo/bar'
34 assert req.script_root == '/test/'
19
20 req = MyRequest.from_values("/foo/bar", "http://example.com/test")
21 assert req.url == "http://example.com/test/foo/bar"
22 assert req.path == "foo/bar"
23 assert req.script_root == "/test/"
3524
3625 # make sure the routing system works with the slashes in
3726 # reverse order as well.
38 map = routing.Map([routing.Rule('/foo/bar', endpoint='foo')])
27 map = routing.Map([routing.Rule("/foo/bar", endpoint="foo")])
3928 adapter = map.bind_to_environ(req.environ)
40 assert adapter.match() == ('foo', {})
29 assert adapter.match() == ("foo", {})
4130 adapter = map.bind(req.host, req.script_root)
42 assert adapter.match(req.path) == ('foo', {})
31 assert adapter.match(req.path) == ("foo", {})
4332
4433
4534 def test_dynamic_charset_request_mixin():
4635 class MyRequest(wrappers.DynamicCharsetRequestMixin, Request):
4736 pass
48 env = {'CONTENT_TYPE': 'text/html'}
37
38 env = {"CONTENT_TYPE": "text/html"}
4939 req = MyRequest(env)
50 assert req.charset == 'latin1'
40 assert req.charset == "latin1"
5141
52 env = {'CONTENT_TYPE': 'text/html; charset=utf-8'}
42 env = {"CONTENT_TYPE": "text/html; charset=utf-8"}
5343 req = MyRequest(env)
54 assert req.charset == 'utf-8'
44 assert req.charset == "utf-8"
5545
56 env = {'CONTENT_TYPE': 'application/octet-stream'}
46 env = {"CONTENT_TYPE": "application/octet-stream"}
5747 req = MyRequest(env)
58 assert req.charset == 'latin1'
59 assert req.url_charset == 'latin1'
48 assert req.charset == "latin1"
49 assert req.url_charset == "latin1"
6050
61 MyRequest.url_charset = 'utf-8'
62 env = {'CONTENT_TYPE': 'application/octet-stream'}
51 MyRequest.url_charset = "utf-8"
52 env = {"CONTENT_TYPE": "application/octet-stream"}
6353 req = MyRequest(env)
64 assert req.charset == 'latin1'
65 assert req.url_charset == 'utf-8'
54 assert req.charset == "latin1"
55 assert req.url_charset == "utf-8"
6656
6757 def return_ascii(x):
6858 return "ascii"
69 env = {'CONTENT_TYPE': 'text/plain; charset=x-weird-charset'}
59
60 env = {"CONTENT_TYPE": "text/plain; charset=x-weird-charset"}
7061 req = MyRequest(env)
7162 req.unknown_charset = return_ascii
72 assert req.charset == 'ascii'
73 assert req.url_charset == 'utf-8'
63 assert req.charset == "ascii"
64 assert req.url_charset == "utf-8"
7465
7566
7667 def test_dynamic_charset_response_mixin():
7768 class MyResponse(wrappers.DynamicCharsetResponseMixin, Response):
78 default_charset = 'utf-7'
79 resp = MyResponse(mimetype='text/html')
80 assert resp.charset == 'utf-7'
81 resp.charset = 'utf-8'
82 assert resp.charset == 'utf-8'
83 assert resp.mimetype == 'text/html'
84 assert resp.mimetype_params == {'charset': 'utf-8'}
85 resp.mimetype_params['charset'] = 'iso-8859-15'
86 assert resp.charset == 'iso-8859-15'
87 resp.set_data(u'Hällo Wörld')
88 assert b''.join(resp.iter_encoded()) == \
89 u'Hällo Wörld'.encode('iso-8859-15')
90 del resp.headers['content-type']
69 default_charset = "utf-7"
70
71 resp = MyResponse(mimetype="text/html")
72 assert resp.charset == "utf-7"
73 resp.charset = "utf-8"
74 assert resp.charset == "utf-8"
75 assert resp.mimetype == "text/html"
76 assert resp.mimetype_params == {"charset": "utf-8"}
77 resp.mimetype_params["charset"] = "iso-8859-15"
78 assert resp.charset == "iso-8859-15"
79 resp.set_data(u"Hällo Wörld")
80 assert b"".join(resp.iter_encoded()) == u"Hällo Wörld".encode("iso-8859-15")
81 del resp.headers["content-type"]
9182 try:
92 resp.charset = 'utf-8'
83 resp.charset = "utf-8"
9384 except TypeError:
9485 pass
9586 else:
96 assert False, 'expected type error on charset setting without ct'
87 raise AssertionError("expected type error on charset setting without ct")
00 import hypothesis
1 from hypothesis.strategies import text, dictionaries, lists, integers
1 from hypothesis.strategies import dictionaries
2 from hypothesis.strategies import integers
3 from hypothesis.strategies import lists
4 from hypothesis.strategies import text
25
36 from werkzeug import urls
47 from werkzeug.datastructures import OrderedMultiDict
2124
2225 @hypothesis.given(dictionaries(text(), integers()))
2326 def test_url_encoding_dict_str_int(d):
24 assert OrderedMultiDict({k: str(v) for k, v in d.items()}) == \
25 urls.url_decode(urls.url_encode(d))
27 assert OrderedMultiDict({k: str(v) for k, v in d.items()}) == urls.url_decode(
28 urls.url_encode(d)
29 )
2630
2731
2832 @hypothesis.given(text(), text())
0 from werkzeug._compat import to_bytes
1 from werkzeug.middleware.dispatcher import DispatcherMiddleware
2 from werkzeug.test import create_environ
3 from werkzeug.test import run_wsgi_app
4
5
6 def test_dispatcher():
7 def null_application(environ, start_response):
8 start_response("404 NOT FOUND", [("Content-Type", "text/plain")])
9 yield b"NOT FOUND"
10
11 def dummy_application(environ, start_response):
12 start_response("200 OK", [("Content-Type", "text/plain")])
13 yield to_bytes(environ["SCRIPT_NAME"])
14
15 app = DispatcherMiddleware(
16 null_application,
17 {"/test1": dummy_application, "/test2/very": dummy_application},
18 )
19 tests = {
20 "/test1": ("/test1", "/test1/asfd", "/test1/very"),
21 "/test2/very": ("/test2/very", "/test2/very/long/path/after/script/name"),
22 }
23
24 for name, urls in tests.items():
25 for p in urls:
26 environ = create_environ(p)
27 app_iter, status, headers = run_wsgi_app(app, environ)
28 assert status == "200 OK"
29 assert b"".join(app_iter).strip() == to_bytes(name)
30
31 app_iter, status, headers = run_wsgi_app(app, create_environ("/missing"))
32 assert status == "404 NOT FOUND"
33 assert b"".join(app_iter).strip() == b"NOT FOUND"
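The prefix dispatching exercised by `test_dispatcher` above can be sketched as a plain WSGI wrapper. `SimpleDispatcher` is an illustrative stand-in under stated assumptions, not Werkzeug's actual `DispatcherMiddleware`:

```python
class SimpleDispatcher:
    """Route requests by longest matching script prefix.

    Falls back to a default app when no mount matches. A minimal
    sketch of prefix dispatching, not Werkzeug's implementation.
    """

    def __init__(self, app, mounts):
        self.app = app
        self.mounts = mounts

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        # Try the longest prefixes first so "/test2/very" wins over "/test2".
        for prefix in sorted(self.mounts, key=len, reverse=True):
            if path == prefix or path.startswith(prefix + "/"):
                app = self.mounts[prefix]
                # Shift the matched prefix from PATH_INFO into SCRIPT_NAME,
                # as the dispatcher test asserts via the echoed SCRIPT_NAME.
                environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + prefix
                environ["PATH_INFO"] = path[len(prefix):]
                return app(environ, start_response)
        return self.app(environ, start_response)
```

The test's expectation that `/test1/asfd` echoes `/test1` corresponds to this SCRIPT_NAME/PATH_INFO split.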
0 from werkzeug.middleware.http_proxy import ProxyMiddleware
1 from werkzeug.test import Client
2 from werkzeug.urls import url_parse
3 from werkzeug.wrappers import BaseResponse
4
5
6 def test_http_proxy(dev_server):
7 server = dev_server(
8 r"""
9 from werkzeug.wrappers import Request, Response
10
11 @Request.application
12 def app(request):
13 return Response(u'%s|%s|%s' % (
14 request.headers.get('X-Special'),
15 request.environ['HTTP_HOST'],
16 request.full_path,
17 ))
18 """
19 )
20
21 app = ProxyMiddleware(
22 BaseResponse("ROOT"),
23 {
24 "/foo": {
25 "target": server.url,
26 "host": "faked.invalid",
27 "headers": {"X-Special": "foo"},
28 },
29 "/bar": {
30 "target": server.url,
31 "host": None,
32 "remove_prefix": True,
33 "headers": {"X-Special": "bar"},
34 },
35 "/autohost": {"target": server.url},
36 },
37 )
38
39 client = Client(app, response_wrapper=BaseResponse)
40
41 rv = client.get("/")
42 assert rv.data == b"ROOT"
43
44 rv = client.get("/foo/bar")
45 assert rv.data.decode("ascii") == "foo|faked.invalid|/foo/bar?"
46
47 rv = client.get("/bar/baz")
48 assert rv.data.decode("ascii") == "bar|localhost|/baz?"
49
50 rv = client.get("/autohost/aha")
51 expected = "None|%s|/autohost/aha?" % url_parse(server.url).ascii_host
52 assert rv.data.decode("ascii") == expected
53
54 # test query string
55 rv = client.get("/bar/baz?a=a&b=b")
56 assert rv.data.decode("ascii") == "bar|localhost|/baz?a=a&b=b"
0 # -*- coding: utf-8 -*-
1 import pytest
2
3 from werkzeug.middleware.lint import HTTPWarning
4 from werkzeug.middleware.lint import LintMiddleware
5 from werkzeug.middleware.lint import WSGIWarning
6 from werkzeug.test import create_environ
7 from werkzeug.test import run_wsgi_app
8
9
10 def dummy_application(environ, start_response):
11 start_response("200 OK", [("Content-Type", "text/plain")])
12 return ["Foo"]
13
14
15 def test_lint_middleware():
16 """Test lint middleware runs for a dummy application without warnings."""
17 app = LintMiddleware(dummy_application)
18
19 environ = create_environ("/test")
20 app_iter, status, headers = run_wsgi_app(app, environ, buffered=True)
21 assert status == "200 OK"
22
23
24 @pytest.mark.parametrize(
25 "key, value, message",
26 [
27 ("wsgi.version", (0, 7), "Environ is not a WSGI 1.0 environ."),
28 ("SCRIPT_NAME", "test", "'SCRIPT_NAME' does not start with a slash:"),
29 ("PATH_INFO", "test", "'PATH_INFO' does not start with a slash:"),
30 ],
31 )
32 def test_lint_middleware_check_environ(key, value, message):
33 app = LintMiddleware(dummy_application)
34
35 environ = create_environ("/test")
36 environ[key] = value
37 with pytest.warns(WSGIWarning, match=message):
38 app_iter, status, headers = run_wsgi_app(app, environ, buffered=True)
39 assert status == "200 OK"
40
41
42 def test_lint_middleware_invalid_status():
43 def my_dummy_application(environ, start_response):
44 start_response("20 OK", [("Content-Type", "text/plain")])
45 return ["Foo"]
46
47 app = LintMiddleware(my_dummy_application)
48
49 environ = create_environ("/test")
50 with pytest.warns(WSGIWarning) as record:
51 run_wsgi_app(app, environ, buffered=True)
52
53 # Returning status 20 should raise three different warnings
54 assert len(record) == 3
55
56
57 @pytest.mark.parametrize(
58 "headers, message",
59 [
60 (tuple([("Content-Type", "text/plain")]), "header list is not a list"),
61 (["fo"], "Headers must tuple 2-item tuples"),
62 ([("status", "foo")], "The status header is not supported"),
63 ],
64 )
65 def test_lint_middleware_http_headers(headers, message):
66 def my_dummy_application(environ, start_response):
67 start_response("200 OK", headers)
68 return ["Foo"]
69
70 app = LintMiddleware(my_dummy_application)
71
72 environ = create_environ("/test")
73 with pytest.warns(WSGIWarning, match=message):
74 run_wsgi_app(app, environ, buffered=True)
75
76
77 def test_lint_middleware_invalid_location():
78 def my_dummy_application(environ, start_response):
79 start_response("200 OK", [("location", "foo")])
80 return ["Foo"]
81
82 app = LintMiddleware(my_dummy_application)
83
84 environ = create_environ("/test")
85 with pytest.warns(HTTPWarning, match="absolute URLs required for location header"):
86 run_wsgi_app(app, environ, buffered=True)
0 import pytest
1
2 from werkzeug.middleware.proxy_fix import ProxyFix
3 from werkzeug.routing import Map
4 from werkzeug.routing import Rule
5 from werkzeug.test import create_environ
6 from werkzeug.utils import redirect
7 from werkzeug.wrappers import Request
8 from werkzeug.wrappers import Response
9
10
11 @pytest.mark.parametrize(
12 ("kwargs", "base", "url_root"),
13 (
14 pytest.param(
15 {},
16 {
17 "REMOTE_ADDR": "192.168.0.2",
18 "HTTP_HOST": "spam",
19 "HTTP_X_FORWARDED_FOR": "192.168.0.1",
20 },
21 "http://spam/",
22 id="for",
23 ),
24 pytest.param(
25 {"x_proto": 1},
26 {"HTTP_HOST": "spam", "HTTP_X_FORWARDED_PROTO": "https"},
27 "https://spam/",
28 id="proto",
29 ),
30 pytest.param(
31 {"x_host": 1},
32 {"HTTP_HOST": "spam", "HTTP_X_FORWARDED_HOST": "eggs"},
33 "http://eggs/",
34 id="host",
35 ),
36 pytest.param(
37 {"x_port": 1},
38 {"HTTP_HOST": "spam", "HTTP_X_FORWARDED_PORT": "8080"},
39 "http://spam:8080/",
40 id="port, host without port",
41 ),
42 pytest.param(
43 {"x_port": 1},
44 {"HTTP_HOST": "spam:9000", "HTTP_X_FORWARDED_PORT": "8080"},
45 "http://spam:8080/",
46 id="port, host with port",
47 ),
48 pytest.param(
49 {"x_port": 1},
50 {
51 "SERVER_NAME": "spam",
52 "SERVER_PORT": "9000",
53 "HTTP_X_FORWARDED_PORT": "8080",
54 },
55 "http://spam:8080/",
56 id="port, name",
57 ),
58 pytest.param(
59 {"x_prefix": 1},
60 {"HTTP_HOST": "spam", "HTTP_X_FORWARDED_PREFIX": "/eggs"},
61 "http://spam/eggs/",
62 id="prefix",
63 ),
64 pytest.param(
65 {"x_for": 1, "x_proto": 1, "x_host": 1, "x_port": 1, "x_prefix": 1},
66 {
67 "REMOTE_ADDR": "192.168.0.2",
68 "HTTP_HOST": "spam:9000",
69 "HTTP_X_FORWARDED_FOR": "192.168.0.1",
70 "HTTP_X_FORWARDED_PROTO": "https",
71 "HTTP_X_FORWARDED_HOST": "eggs",
72 "HTTP_X_FORWARDED_PORT": "443",
73 "HTTP_X_FORWARDED_PREFIX": "/ham",
74 },
75 "https://eggs/ham/",
76 id="all",
77 ),
78 pytest.param(
79 {"x_for": 2},
80 {
81 "REMOTE_ADDR": "192.168.0.3",
82 "HTTP_HOST": "spam",
83 "HTTP_X_FORWARDED_FOR": "192.168.0.1, 192.168.0.2",
84 },
85 "http://spam/",
86 id="multiple for",
87 ),
88 pytest.param(
89 {"x_for": 0},
90 {
91 "REMOTE_ADDR": "192.168.0.1",
92 "HTTP_HOST": "spam",
93 "HTTP_X_FORWARDED_FOR": "192.168.0.2",
94 },
95 "http://spam/",
96 id="ignore 0",
97 ),
98 pytest.param(
99 {"x_for": 3},
100 {
101 "REMOTE_ADDR": "192.168.0.1",
102 "HTTP_HOST": "spam",
103 "HTTP_X_FORWARDED_FOR": "192.168.0.3, 192.168.0.2",
104 },
105 "http://spam/",
106 id="ignore len < trusted",
107 ),
108 pytest.param(
109 {},
110 {
111 "REMOTE_ADDR": "192.168.0.2",
112 "HTTP_HOST": "spam",
113 "HTTP_X_FORWARDED_FOR": "192.168.0.3, 192.168.0.1",
114 },
115 "http://spam/",
116 id="ignore untrusted",
117 ),
118 pytest.param(
119 {"x_for": 2},
120 {
121 "REMOTE_ADDR": "192.168.0.1",
122 "HTTP_HOST": "spam",
123 "HTTP_X_FORWARDED_FOR": ", 192.168.0.3",
124 },
125 "http://spam/",
126 id="ignore empty",
127 ),
128 pytest.param(
129 {"x_for": 2, "x_prefix": 1},
130 {
131 "REMOTE_ADDR": "192.168.0.2",
132 "HTTP_HOST": "spam",
133 "HTTP_X_FORWARDED_FOR": "192.168.0.1, 192.168.0.3",
134 "HTTP_X_FORWARDED_PREFIX": "/ham, /eggs",
135 },
136 "http://spam/eggs/",
137 id="prefix < for",
138 ),
139 ),
140 )
141 def test_proxy_fix(kwargs, base, url_root):
142 @Request.application
143 def app(request):
144 # for header
145 assert request.remote_addr == "192.168.0.1"
146 # proto, host, port, prefix headers
147 assert request.url_root == url_root
148
149 urls = url_map.bind_to_environ(request.environ)
150 # build includes prefix
151 assert urls.build("parrot") == "/".join((request.script_root, "parrot"))
152 # match doesn't include prefix
153 assert urls.match("/parrot")[0] == "parrot"
154
155 return Response("success")
156
157 url_map = Map([Rule("/parrot", endpoint="parrot")])
158 app = ProxyFix(app, **kwargs)
159
160 base.setdefault("REMOTE_ADDR", "192.168.0.1")
161 environ = create_environ(environ_overrides=base)
162 # host is always added, remove it if the test doesn't set it
163 if "HTTP_HOST" not in base:
164 del environ["HTTP_HOST"]
165
166 # ensure app request has correct headers
167 response = Response.from_app(app, environ)
168 assert response.get_data() == b"success"
169
170 # ensure redirect location is correct
171 redirect_app = redirect(url_map.bind_to_environ(environ).build("parrot"))
172 response = Response.from_app(redirect_app, environ)
173 location = response.headers["Location"]
174 assert location == url_root + "parrot"
175
176
177 def test_proxy_fix_deprecations():
178 app = pytest.deprecated_call(ProxyFix, None, 2)
179 assert app.x_for == 2
180
181 with pytest.deprecated_call():
182 assert app.num_proxies == 2
183
184 with pytest.deprecated_call():
185 assert app.get_remote_addr(["spam", "eggs"]) == "spam"
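The `x_for` cases parametrized above ("multiple for", "ignore 0", "ignore len < trusted", "ignore empty") all reduce to one rule: split the header, drop empty entries, and only trust the value appended by the Nth proxy from the right. A minimal sketch of that rule (`pick_forwarded_value` is a name made up here, not ProxyFix's API):

```python
def pick_forwarded_value(header, trusted):
    """Pick the client value from an X-Forwarded-* style header.

    ``trusted`` is the number of proxies whose appended entries we
    trust, counted from the right. Returns None when trusting is
    disabled or when there are fewer non-empty entries than trusted
    proxies, in which case the original environ value should be kept.
    A sketch of the idea behind ProxyFix, not its actual code.
    """
    if trusted <= 0:
        return None
    # Each proxy appends one comma-separated entry; ignore empty ones.
    values = [v.strip() for v in header.split(",") if v.strip()]
    if len(values) < trusted:
        return None
    # The outermost trusted proxy recorded the real client value.
    return values[-trusted]
```

With `trusted=2` and `"192.168.0.1, 192.168.0.2"` this yields `192.168.0.1`, matching the "multiple for" case above.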
0 # -*- coding: utf-8 -*-
1 import os
2 from contextlib import closing
3
4 from werkzeug._compat import to_native
5 from werkzeug.middleware.shared_data import SharedDataMiddleware
6 from werkzeug.test import create_environ
7 from werkzeug.test import run_wsgi_app
8
9
10 def test_get_file_loader():
11 app = SharedDataMiddleware(None, {})
12 assert callable(app.get_file_loader("foo"))
13
14
15 def test_shared_data_middleware(tmpdir):
16 def null_application(environ, start_response):
17 start_response("404 NOT FOUND", [("Content-Type", "text/plain")])
18 yield b"NOT FOUND"
19
20 test_dir = str(tmpdir)
21
22 with open(os.path.join(test_dir, to_native(u"äöü", "utf-8")), "w") as test_file:
23 test_file.write(u"FOUND")
24
25 for t in [list, dict]:
26 app = SharedDataMiddleware(
27 null_application,
28 t(
29 [
30 ("/", os.path.join(os.path.dirname(__file__), "..", "res")),
31 ("/sources", os.path.join(os.path.dirname(__file__), "..", "res")),
32 ("/pkg", ("werkzeug.debug", "shared")),
33 ("/foo", test_dir),
34 ]
35 ),
36 )
37
38 for p in "/test.txt", "/sources/test.txt", "/foo/äöü":
39 app_iter, status, headers = run_wsgi_app(app, create_environ(p))
40 assert status == "200 OK"
41
42 with closing(app_iter) as app_iter:
43 data = b"".join(app_iter).strip()
44
45 assert data == b"FOUND"
46
47 app_iter, status, headers = run_wsgi_app(
48 app, create_environ("/pkg/debugger.js")
49 )
50
51 with closing(app_iter) as app_iter:
52 contents = b"".join(app_iter)
53
54 assert b"$(function() {" in contents
55
56 app_iter, status, headers = run_wsgi_app(app, create_environ("/missing"))
57 assert status == "404 NOT FOUND"
58 assert b"".join(app_iter).strip() == b"NOT FOUND"
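Serving files from mounted directories as `SharedDataMiddleware` does hinges on refusing path traversal out of the mount. A minimal guard might look like this; `safe_join` here is a sketch under that assumption, not Werkzeug's implementation:

```python
import os


def safe_join(directory, path):
    """Join ``path`` onto ``directory``, refusing path traversal.

    Returns the absolute target path, or None if the normalized
    result would escape ``directory``. Illustrative sketch only.
    """
    directory = os.path.abspath(directory)
    # Strip leading slashes so absolute request paths stay inside the mount.
    candidate = os.path.abspath(os.path.join(directory, path.lstrip("/\\")))
    if candidate == directory or candidate.startswith(directory + os.sep):
        return candidate
    return None
```

A request for `../secret` normalizes to a path outside the mount and is rejected, while nested paths resolve under it.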
tests/multipart/firefox3-2png1txt/request.txt (binary diff not shown)
0 example text
0 example text
tests/multipart/firefox3-2pnglongtext/request.txt (binary diff not shown)
00 --long text
11 --with boundary
2 --lookalikes--
2 --lookalikes--
tests/multipart/ie6-2png1txt/request.txt (binary diff not shown)
0 ie6 sucks :-/
0 ie6 sucks :-/
tests/multipart/ie7_full_path_request.txt (binary diff not shown)
tests/multipart/opera8-2png1txt/request.txt (binary diff not shown)
0 blafasel öäü
0 blafasel öäü
22 Hacky helper application to collect form data.
33 """
44 from werkzeug.serving import run_simple
5 from werkzeug.wrappers import Request, Response
5 from werkzeug.wrappers import Request
6 from werkzeug.wrappers import Response
67
78
89 def copy_stream(request):
910 from os import mkdir
1011 from time import time
11 folder = 'request-%d' % time()
12
13 folder = "request-%d" % time()
1214 mkdir(folder)
1315 environ = request.environ
14 f = open(folder + '/request.txt', 'wb+')
15 f.write(environ['wsgi.input'].read(int(environ['CONTENT_LENGTH'])))
16 f = open(folder + "/request.http", "wb+")
17 f.write(environ["wsgi.input"].read(int(environ["CONTENT_LENGTH"])))
1618 f.flush()
1719 f.seek(0)
18 environ['wsgi.input'] = f
20 environ["wsgi.input"] = f
1921 request.stat_folder = folder
2022
2123
2224 def stats(request):
2325 copy_stream(request)
24 f1 = request.files['file1']
25 f2 = request.files['file2']
26 text = request.form['text']
27 f1.save(request.stat_folder + '/file1.bin')
28 f2.save(request.stat_folder + '/file2.bin')
29 with open(request.stat_folder + '/text.txt', 'w') as f:
30 f.write(text.encode('utf-8'))
31 return Response('Done.')
26 f1 = request.files["file1"]
27 f2 = request.files["file2"]
28 text = request.form["text"]
29 f1.save(request.stat_folder + "/file1.bin")
30 f2.save(request.stat_folder + "/file2.bin")
31 with open(request.stat_folder + "/text.txt", "w") as f:
32 f.write(text.encode("utf-8"))
33 return Response("Done.")
3234
3335
3436 def upload_file(request):
35 return Response('''
36 <h1>Upload File</h1>
37 <form action="" method="post" enctype="multipart/form-data">
38 <input type="file" name="file1"><br>
39 <input type="file" name="file2"><br>
40 <textarea name="text"></textarea><br>
41 <input type="submit" value="Send">
42 </form>
43 ''', mimetype='text/html')
37 return Response(
38 """<h1>Upload File</h1>
39 <form action="" method="post" enctype="multipart/form-data">
40 <input type="file" name="file1"><br>
41 <input type="file" name="file2"><br>
42 <textarea name="text"></textarea><br>
43 <input type="submit" value="Send">
44 </form>""",
45 mimetype="text/html",
46 )
4447
4548
4649 def application(environ, start_response):
4750     request = Request(environ)
48     if request.method == 'POST':
51     if request.method == "POST":
4952         response = stats(request)
5053     else:
5154         response = upload_file(request)
5255     return response(environ, start_response)
5356
5457
55 if __name__ == '__main__':
56 run_simple('localhost', 5000, application, use_debugger=True)
58 if __name__ == "__main__":
59 run_simple("localhost", 5000, application, use_debugger=True)
tests/multipart/webkit3-2png1txt/request.txt (binary diff not shown)
0 this is another text with ümläüts
0 this is another text with ümläüts
0 94
1 ----------------------------898239224156930639461866
2 Content-Disposition: form-data; name="file"; filename="test.txt"
3 Content-Type: text/plain
4
5
6 f
7 This is a test
8
9 2
10
11
12 65
13 ----------------------------898239224156930639461866
14 Content-Disposition: form-data; name="type"
15
16
17 a
18 text/plain
19 3a
20
21 ----------------------------898239224156930639461866--
22
23 0
24
tests/res/chunked.txt (0 additions, 25 deletions)
0 94
1 ----------------------------898239224156930639461866
2 Content-Disposition: form-data; name="file"; filename="test.txt"
3 Content-Type: text/plain
4
5
6 f
7 This is a test
8
9 2
10
11
12 65
13 ----------------------------898239224156930639461866
14 Content-Disposition: form-data; name="type"
15
16
17 a
18 text/plain
19 3a
20
21 ----------------------------898239224156930639461866--
22
23 0
24
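The fixture above is an HTTP/1.1 chunked transfer-encoded request body: each chunk is a hexadecimal size line, that many bytes of payload, then CRLF, ending with a zero-size chunk. Decoding it can be sketched as follows (a minimal decoder, not Werkzeug's parser):

```python
def dechunk(data):
    """Decode an HTTP/1.1 chunked transfer-encoded body.

    ``data`` is the raw bytes: a hex chunk-size line, the chunk
    payload, a trailing CRLF, repeated until a zero-size chunk.
    Chunk extensions are stripped; trailers are ignored.
    """
    out = bytearray()
    pos = 0
    while True:
        eol = data.index(b"\r\n", pos)
        # The size is hex and may carry ";ext=value" extensions.
        size = int(data[pos:eol].split(b";")[0], 16)
        pos = eol + 2
        if size == 0:
            break
        out += data[pos:pos + size]
        pos += size + 2  # skip the CRLF terminating each chunk
    return bytes(out)
```

Applied to the fixture, the decoded result is the plain multipart body bounded by `----------------------------898239224156930639461866`.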
00 # -*- coding: utf-8 -*-
1 # flake8: noqa
12 """
23 tests.compat
34 ~~~~~~~~~~~~
45
56 Ensure that old stuff does not break on update.
67
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
8 :copyright: 2007 Pallets
9 :license: BSD-3-Clause
910 """
10
11 # This file shouldn't be linted:
12 # flake8: noqa
13
14 import warnings
15
11 from werkzeug.test import create_environ
1612 from werkzeug.wrappers import Response
17 from werkzeug.test import create_environ
1813
1914
2015 def test_old_imports():
21 from werkzeug.utils import Headers, MultiDict, CombinedMultiDict, \
22 Headers, EnvironHeaders
23 from werkzeug.http import Accept, MIMEAccept, CharsetAccept, \
24 LanguageAccept, ETags, HeaderSet, WWWAuthenticate, \
25 Authorization
16 from werkzeug.utils import (
17 Headers,
18 MultiDict,
19 CombinedMultiDict,
20 Headers,
21 EnvironHeaders,
22 )
23 from werkzeug.http import (
24 Accept,
25 MIMEAccept,
26 CharsetAccept,
27 LanguageAccept,
28 ETags,
29 HeaderSet,
30 WWWAuthenticate,
31 Authorization,
32 )
2633
2734
2835 def test_exposed_werkzeug_mod():
2936 import werkzeug
37
3038 for key in werkzeug.__all__:
31 # deprecated, skip it
32 if key in ('templates', 'Template'):
33 continue
3439 getattr(werkzeug, key)
1414 - Immutable types undertested
1515 - Split up dict tests
1616
17 :copyright: (c) 2014 by Armin Ronacher.
18 :license: BSD, see LICENSE for more details.
17 :copyright: 2007 Pallets
18 :license: BSD-3-Clause
1919 """
20
21 from __future__ import with_statement
20 import io
21 import pickle
22 import tempfile
23 from contextlib import contextmanager
24 from copy import copy
25 from copy import deepcopy
2226
2327 import pytest
24 from tests import strict_eq
25
26
27 import pickle
28 from contextlib import contextmanager
29 from copy import copy, deepcopy
30
28
29 from . import strict_eq
3130 from werkzeug import datastructures
32 from werkzeug._compat import iterkeys, itervalues, iteritems, iterlists, \
33 iterlistvalues, text_type, PY2
31 from werkzeug import http
32 from werkzeug._compat import iteritems
33 from werkzeug._compat import iterkeys
34 from werkzeug._compat import iterlists
35 from werkzeug._compat import iterlistvalues
36 from werkzeug._compat import itervalues
37 from werkzeug._compat import PY2
38 from werkzeug._compat import text_type
39 from werkzeug.datastructures import Range
3440 from werkzeug.exceptions import BadRequestKeyError
3541
3642
3743 class TestNativeItermethods(object):
38
3944 def test_basic(self):
40 @datastructures.native_itermethods(['keys', 'values', 'items'])
45 @datastructures.native_itermethods(["keys", "values", "items"])
4146 class StupidDict(object):
42
4347 def keys(self, multi=1):
44 return iter(['a', 'b', 'c'] * multi)
48 return iter(["a", "b", "c"] * multi)
4549
4650 def values(self, multi=1):
4751 return iter([1, 2, 3] * multi)
4852
4953 def items(self, multi=1):
50 return iter(zip(iterkeys(self, multi=multi),
51 itervalues(self, multi=multi)))
54 return iter(
55 zip(iterkeys(self, multi=multi), itervalues(self, multi=multi))
56 )
5257
5358 d = StupidDict()
54 expected_keys = ['a', 'b', 'c']
59 expected_keys = ["a", "b", "c"]
5560 expected_values = [1, 2, 3]
5661 expected_items = list(zip(expected_keys, expected_values))
5762
7883 cls.__module__ = module
7984 d = cls()
8085 cls.__module__ = old
81 d.setlist(b'foo', [1, 2, 3, 4])
82 d.setlist(b'bar', b'foo bar baz'.split())
86 d.setlist(b"foo", [1, 2, 3, 4])
87 d.setlist(b"bar", b"foo bar baz".split())
8388 return d
8489
8590 for protocol in range(pickle.HIGHEST_PROTOCOL + 1):
8893 ud = pickle.loads(s)
8994 assert type(ud) == type(d)
9095 assert ud == d
91 alternative = pickle.dumps(create_instance('werkzeug'), protocol)
96 alternative = pickle.dumps(create_instance("werkzeug"), protocol)
9297 assert pickle.loads(alternative) == d
93 ud[b'newkey'] = b'bla'
98 ud[b"newkey"] = b"bla"
9499 assert ud != d
95100
96101 def test_basic_interface(self):
97102 md = self.storage_class()
98103 assert isinstance(md, dict)
99104
100 mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3),
101 ('a', 1), ('a', 3), ('d', 4), ('c', 3)]
105 mapping = [
106 ("a", 1),
107 ("b", 2),
108 ("a", 2),
109 ("d", 3),
110 ("a", 1),
111 ("a", 3),
112 ("d", 4),
113 ("c", 3),
114 ]
102115 md = self.storage_class(mapping)
103116
104117 # simple getitem gives the first value
105 assert md['a'] == 1
106 assert md['c'] == 3
118 assert md["a"] == 1
119 assert md["c"] == 3
107120 with pytest.raises(KeyError):
108 md['e']
109 assert md.get('a') == 1
121 md["e"]
122 assert md.get("a") == 1
110123
111124 # list getitem
112 assert md.getlist('a') == [1, 2, 1, 3]
113 assert md.getlist('d') == [3, 4]
125 assert md.getlist("a") == [1, 2, 1, 3]
126 assert md.getlist("d") == [3, 4]
114127 # do not raise if key not found
115 assert md.getlist('x') == []
128 assert md.getlist("x") == []
116129
117130 # simple setitem overwrites all values
118 md['a'] = 42
119 assert md.getlist('a') == [42]
131 md["a"] = 42
132 assert md.getlist("a") == [42]
120133
121134 # list setitem
122 md.setlist('a', [1, 2, 3])
123 assert md['a'] == 1
124 assert md.getlist('a') == [1, 2, 3]
135 md.setlist("a", [1, 2, 3])
136 assert md["a"] == 1
137 assert md.getlist("a") == [1, 2, 3]
125138
126139 # verify that it does not change original lists
127140 l1 = [1, 2, 3]
128 md.setlist('a', l1)
141 md.setlist("a", l1)
129142 del l1[:]
130 assert md['a'] == 1
143 assert md["a"] == 1
131144
132145 # setdefault, setlistdefault
133 assert md.setdefault('u', 23) == 23
134 assert md.getlist('u') == [23]
135 del md['u']
136
137 md.setlist('u', [-1, -2])
146 assert md.setdefault("u", 23) == 23
147 assert md.getlist("u") == [23]
148 del md["u"]
149
150 md.setlist("u", [-1, -2])
138151
139152 # delitem
140 del md['u']
153 del md["u"]
141154 with pytest.raises(KeyError):
142 md['u']
143 del md['d']
144 assert md.getlist('d') == []
155 md["u"]
156 del md["d"]
157 assert md.getlist("d") == []
145158
146159 # keys, values, items, lists
147 assert list(sorted(md.keys())) == ['a', 'b', 'c']
148 assert list(sorted(iterkeys(md))) == ['a', 'b', 'c']
160 assert list(sorted(md.keys())) == ["a", "b", "c"]
161 assert list(sorted(iterkeys(md))) == ["a", "b", "c"]
149162
150163 assert list(sorted(itervalues(md))) == [1, 2, 3]
151164 assert list(sorted(itervalues(md))) == [1, 2, 3]
152165
153 assert list(sorted(md.items())) == [('a', 1), ('b', 2), ('c', 3)]
154 assert list(sorted(md.items(multi=True))) == \
155 [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)]
156 assert list(sorted(iteritems(md))) == [('a', 1), ('b', 2), ('c', 3)]
157 assert list(sorted(iteritems(md, multi=True))) == \
158 [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)]
159
160 assert list(sorted(md.lists())) == \
161 [('a', [1, 2, 3]), ('b', [2]), ('c', [3])]
162 assert list(sorted(iterlists(md))) == \
163 [('a', [1, 2, 3]), ('b', [2]), ('c', [3])]
166 assert list(sorted(md.items())) == [("a", 1), ("b", 2), ("c", 3)]
167 assert list(sorted(md.items(multi=True))) == [
168 ("a", 1),
169 ("a", 2),
170 ("a", 3),
171 ("b", 2),
172 ("c", 3),
173 ]
174 assert list(sorted(iteritems(md))) == [("a", 1), ("b", 2), ("c", 3)]
175 assert list(sorted(iteritems(md, multi=True))) == [
176 ("a", 1),
177 ("a", 2),
178 ("a", 3),
179 ("b", 2),
180 ("c", 3),
181 ]
182
183 assert list(sorted(md.lists())) == [("a", [1, 2, 3]), ("b", [2]), ("c", [3])]
184 assert list(sorted(iterlists(md))) == [("a", [1, 2, 3]), ("b", [2]), ("c", [3])]
164185
165186 # copy method
166187 c = md.copy()
167 assert c['a'] == 1
168 assert c.getlist('a') == [1, 2, 3]
188 assert c["a"] == 1
189 assert c.getlist("a") == [1, 2, 3]
169190
170191 # copy method 2
171192 c = copy(md)
172 assert c['a'] == 1
173 assert c.getlist('a') == [1, 2, 3]
193 assert c["a"] == 1
194 assert c.getlist("a") == [1, 2, 3]
174195
175196 # deepcopy method
176197 c = md.deepcopy()
177 assert c['a'] == 1
178 assert c.getlist('a') == [1, 2, 3]
198 assert c["a"] == 1
199 assert c.getlist("a") == [1, 2, 3]
179200
180201 # deepcopy method 2
181202 c = deepcopy(md)
182 assert c['a'] == 1
183 assert c.getlist('a') == [1, 2, 3]
203 assert c["a"] == 1
204 assert c.getlist("a") == [1, 2, 3]
184205
185206 # update with a multidict
186 od = self.storage_class([('a', 4), ('a', 5), ('y', 0)])
207 od = self.storage_class([("a", 4), ("a", 5), ("y", 0)])
187208 md.update(od)
188 assert md.getlist('a') == [1, 2, 3, 4, 5]
189 assert md.getlist('y') == [0]
209 assert md.getlist("a") == [1, 2, 3, 4, 5]
210 assert md.getlist("y") == [0]
190211
191212 # update with a regular dict
192213 md = c
193 od = {'a': 4, 'y': 0}
214 od = {"a": 4, "y": 0}
194215 md.update(od)
195 assert md.getlist('a') == [1, 2, 3, 4]
196 assert md.getlist('y') == [0]
216 assert md.getlist("a") == [1, 2, 3, 4]
217 assert md.getlist("y") == [0]
197218
198219 # pop, poplist, popitem, popitemlist
199 assert md.pop('y') == 0
200 assert 'y' not in md
201 assert md.poplist('a') == [1, 2, 3, 4]
202 assert 'a' not in md
203 assert md.poplist('missing') == []
220 assert md.pop("y") == 0
221 assert "y" not in md
222 assert md.poplist("a") == [1, 2, 3, 4]
223 assert "a" not in md
224 assert md.poplist("missing") == []
204225
205226 # remaining: b=2, c=3
206227 popped = md.popitem()
207 assert popped in [('b', 2), ('c', 3)]
228 assert popped in [("b", 2), ("c", 3)]
208229 popped = md.popitemlist()
209 assert popped in [('b', [2]), ('c', [3])]
230 assert popped in [("b", [2]), ("c", [3])]
210231
211232 # type conversion
212 md = self.storage_class({'a': '4', 'b': ['2', '3']})
213 assert md.get('a', type=int) == 4
214 assert md.getlist('b', type=int) == [2, 3]
233 md = self.storage_class({"a": "4", "b": ["2", "3"]})
234 assert md.get("a", type=int) == 4
235 assert md.getlist("b", type=int) == [2, 3]
215236
216237 # repr
217 md = self.storage_class([('a', 1), ('a', 2), ('b', 3)])
238 md = self.storage_class([("a", 1), ("a", 2), ("b", 3)])
218239 assert "('a', 1)" in repr(md)
219240 assert "('a', 2)" in repr(md)
220241 assert "('b', 3)" in repr(md)
221242
222243 # add and getlist
223 md.add('c', '42')
224 md.add('c', '23')
225 assert md.getlist('c') == ['42', '23']
226 md.add('c', 'blah')
227 assert md.getlist('c', type=int) == [42, 23]
244 md.add("c", "42")
245 md.add("c", "23")
246 assert md.getlist("c") == ["42", "23"]
247 md.add("c", "blah")
248 assert md.getlist("c", type=int) == [42, 23]
228249
229250 # setdefault
230251 md = self.storage_class()
231 md.setdefault('x', []).append(42)
232 md.setdefault('x', []).append(23)
233 assert md['x'] == [42, 23]
252 md.setdefault("x", []).append(42)
253 md.setdefault("x", []).append(23)
254 assert md["x"] == [42, 23]
234255
235256 # to dict
236257 md = self.storage_class()
237 md['foo'] = 42
238 md.add('bar', 1)
239 md.add('bar', 2)
240 assert md.to_dict() == {'foo': 42, 'bar': 1}
241 assert md.to_dict(flat=False) == {'foo': [42], 'bar': [1, 2]}
258 md["foo"] = 42
259 md.add("bar", 1)
260 md.add("bar", 2)
261 assert md.to_dict() == {"foo": 42, "bar": 1}
262 assert md.to_dict(flat=False) == {"foo": [42], "bar": [1, 2]}
242263
243264 # popitem from empty dict
244265 with pytest.raises(KeyError):
253274
254275 # setlist works
255276 md = self.storage_class()
256 md['foo'] = 42
257 md.setlist('foo', [1, 2])
258 assert md.getlist('foo') == [1, 2]
277 md["foo"] = 42
278 md.setlist("foo", [1, 2])
279 assert md.getlist("foo") == [1, 2]
259280
260281
261282 class _ImmutableDictTests(object):
264285 def test_follows_dict_interface(self):
265286 cls = self.storage_class
266287
267 data = {'foo': 1, 'bar': 2, 'baz': 3}
288 data = {"foo": 1, "bar": 2, "baz": 3}
268289 d = cls(data)
269290
270 assert d['foo'] == 1
271 assert d['bar'] == 2
272 assert d['baz'] == 3
273 assert sorted(d.keys()) == ['bar', 'baz', 'foo']
274 assert 'foo' in d
275 assert 'foox' not in d
291 assert d["foo"] == 1
292 assert d["bar"] == 2
293 assert d["baz"] == 3
294 assert sorted(d.keys()) == ["bar", "baz", "foo"]
295 assert "foo" in d
296 assert "foox" not in d
276297 assert len(d) == 3
277298
278299 def test_copies_are_mutable(self):
279300 cls = self.storage_class
280 immutable = cls({'a': 1})
301 immutable = cls({"a": 1})
281302 with pytest.raises(TypeError):
282 immutable.pop('a')
303 immutable.pop("a")
283304
284305 mutable = immutable.copy()
285 mutable.pop('a')
286 assert 'a' in immutable
306 mutable.pop("a")
307 assert "a" in immutable
287308 assert mutable is not immutable
288309 assert copy(immutable) is immutable
289310
290311 def test_dict_is_hashable(self):
291312 cls = self.storage_class
292 immutable = cls({'a': 1, 'b': 2})
293 immutable2 = cls({'a': 2, 'b': 2})
313 immutable = cls({"a": 1, "b": 2})
314 immutable2 = cls({"a": 2, "b": 2})
294315 x = set([immutable])
295316 assert immutable in x
296317 assert immutable2 not in x
314335
315336 def test_multidict_is_hashable(self):
316337 cls = self.storage_class
317 immutable = cls({'a': [1, 2], 'b': 2})
318 immutable2 = cls({'a': [1], 'b': 2})
338 immutable = cls({"a": [1, 2], "b": 2})
339 immutable2 = cls({"a": [1], "b": 2})
319340 x = set([immutable])
320341 assert immutable in x
321342 assert immutable2 not in x
338359 storage_class = datastructures.ImmutableOrderedMultiDict
339360
340361 def test_ordered_multidict_is_hashable(self):
341 a = self.storage_class([('a', 1), ('b', 1), ('a', 2)])
342 b = self.storage_class([('a', 1), ('a', 2), ('b', 1)])
362 a = self.storage_class([("a", 1), ("b", 1), ("a", 2)])
363 b = self.storage_class([("a", 1), ("a", 2), ("b", 1)])
343364 assert hash(a) != hash(b)
344365
345366
347368 storage_class = datastructures.MultiDict
348369
349370 def test_multidict_pop(self):
350 make_d = lambda: self.storage_class({'foo': [1, 2, 3, 4]})
371 def make_d():
372 return self.storage_class({"foo": [1, 2, 3, 4]})
373
351374 d = make_d()
352 assert d.pop('foo') == 1
375 assert d.pop("foo") == 1
353376 assert not d
354377 d = make_d()
355 assert d.pop('foo', 32) == 1
378 assert d.pop("foo", 32) == 1
356379 assert not d
357380 d = make_d()
358 assert d.pop('foos', 32) == 32
381 assert d.pop("foos", 32) == 32
359382 assert d
360383
361384 with pytest.raises(KeyError):
362 d.pop('foos')
385 d.pop("foos")
363386
364387 def test_multidict_pop_raise_badrequestkeyerror_for_empty_list_value(self):
365 mapping = [('a', 'b'), ('a', 'c')]
388 mapping = [("a", "b"), ("a", "c")]
366389 md = self.storage_class(mapping)
367390
368 md.setlistdefault('empty', [])
391 md.setlistdefault("empty", [])
369392
370393 with pytest.raises(KeyError):
371 md.pop('empty')
394 md.pop("empty")
372395
373396 def test_multidict_popitem_raise_badrequestkeyerror_for_empty_list_value(self):
374397 mapping = []
375398 md = self.storage_class(mapping)
376399
377 md.setlistdefault('empty', [])
378
379 with pytest.raises(KeyError):
400 md.setlistdefault("empty", [])
401
402 with pytest.raises(BadRequestKeyError):
380403 md.popitem()
381404
382405 def test_setlistdefault(self):
383406 md = self.storage_class()
384 assert md.setlistdefault('u', [-1, -2]) == [-1, -2]
385 assert md.getlist('u') == [-1, -2]
386 assert md['u'] == -1
407 assert md.setlistdefault("u", [-1, -2]) == [-1, -2]
408 assert md.getlist("u") == [-1, -2]
409 assert md["u"] == -1
387410
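`test_setlistdefault` pins down that the default list is stored only for a missing key, while scalar indexing still yields the first element; a short sketch:

```python
from werkzeug.datastructures import MultiDict

md = MultiDict()
# a missing key stores and returns the default list
assert md.setlistdefault("u", [-1, -2]) == [-1, -2]
# an existing key keeps its list; the default is ignored
assert md.setlistdefault("u", [99]) == [-1, -2]
# scalar access still returns the first value
assert md["u"] == -1
```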
388411 def test_iter_interfaces(self):
389 mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3),
390 ('a', 1), ('a', 3), ('d', 4), ('c', 3)]
412 mapping = [
413 ("a", 1),
414 ("b", 2),
415 ("a", 2),
416 ("d", 3),
417 ("a", 1),
418 ("a", 3),
419 ("d", 4),
420 ("c", 3),
421 ]
391422 md = self.storage_class(mapping)
392423 assert list(zip(md.keys(), md.listvalues())) == list(md.lists())
393424 assert list(zip(md, iterlistvalues(md))) == list(iterlists(md))
394 assert list(zip(iterkeys(md), iterlistvalues(md))) == \
395 list(iterlists(md))
396
397 @pytest.mark.skipif(not PY2, reason='viewmethods work only for the 2-nd version.')
425 assert list(zip(iterkeys(md), iterlistvalues(md))) == list(iterlists(md))
426
427 @pytest.mark.skipif(not PY2, reason="view methods only exist on Python 2.")
398428 def test_view_methods(self):
399 mapping = [('a', 'b'), ('a', 'c')]
429 mapping = [("a", "b"), ("a", "c")]
400430 md = self.storage_class(mapping)
401431
402 vi = md.viewitems()
403 vk = md.viewkeys()
404 vv = md.viewvalues()
432 vi = md.viewitems() # noqa: B302
433 vk = md.viewkeys() # noqa: B302
434 vv = md.viewvalues() # noqa: B302
405435
406436 assert list(vi) == list(md.items())
407437 assert list(vk) == list(md.keys())
408438 assert list(vv) == list(md.values())
409439
410 md['k'] = 'n'
440 md["k"] = "n"
411441
412442 assert list(vi) == list(md.items())
413443 assert list(vk) == list(md.keys())
414444 assert list(vv) == list(md.values())
415445
416 @pytest.mark.skipif(not PY2, reason='viewmethods work only for the 2-nd version.')
446 @pytest.mark.skipif(not PY2, reason="view methods only exist on Python 2.")
417447 def test_viewitems_with_multi(self):
418 mapping = [('a', 'b'), ('a', 'c')]
448 mapping = [("a", "b"), ("a", "c")]
419449 md = self.storage_class(mapping)
420450
421 vi = md.viewitems(multi=True)
451 vi = md.viewitems(multi=True) # noqa: B302
422452
423453 assert list(vi) == list(md.items(multi=True))
424454
425 md['k'] = 'n'
455 md["k"] = "n"
426456
427457 assert list(vi) == list(md.items(multi=True))
428458
429459 def test_getitem_raise_badrequestkeyerror_for_empty_list_value(self):
430 mapping = [('a', 'b'), ('a', 'c')]
460 mapping = [("a", "b"), ("a", "c")]
431461 md = self.storage_class(mapping)
432462
433 md.setlistdefault('empty', [])
463 md.setlistdefault("empty", [])
434464
435465 with pytest.raises(KeyError):
436 md['empty']
466 md["empty"]
437467
438468
439469 class TestOrderedMultiDict(_MutableMultiDictTests):
444474
445475 d = cls()
446476 assert not d
447 d.add('foo', 'bar')
477 d.add("foo", "bar")
448478 assert len(d) == 1
449 d.add('foo', 'baz')
479 d.add("foo", "baz")
450480 assert len(d) == 1
451 assert list(iteritems(d)) == [('foo', 'bar')]
452 assert list(d) == ['foo']
453 assert list(iteritems(d, multi=True)) == \
454 [('foo', 'bar'), ('foo', 'baz')]
455 del d['foo']
481 assert list(iteritems(d)) == [("foo", "bar")]
482 assert list(d) == ["foo"]
483 assert list(iteritems(d, multi=True)) == [("foo", "bar"), ("foo", "baz")]
484 del d["foo"]
456485 assert not d
457486 assert len(d) == 0
458487 assert list(d) == []
459488
460 d.update([('foo', 1), ('foo', 2), ('bar', 42)])
461 d.add('foo', 3)
462 assert d.getlist('foo') == [1, 2, 3]
463 assert d.getlist('bar') == [42]
464 assert list(iteritems(d)) == [('foo', 1), ('bar', 42)]
465
466 expected = ['foo', 'bar']
489 d.update([("foo", 1), ("foo", 2), ("bar", 42)])
490 d.add("foo", 3)
491 assert d.getlist("foo") == [1, 2, 3]
492 assert d.getlist("bar") == [42]
493 assert list(iteritems(d)) == [("foo", 1), ("bar", 42)]
494
495 expected = ["foo", "bar"]
467496
468497 assert list(d.keys()) == expected
469498 assert list(d) == expected
470499 assert list(iterkeys(d)) == expected
471500
472 assert list(iteritems(d, multi=True)) == \
473 [('foo', 1), ('foo', 2), ('bar', 42), ('foo', 3)]
501 assert list(iteritems(d, multi=True)) == [
502 ("foo", 1),
503 ("foo", 2),
504 ("bar", 42),
505 ("foo", 3),
506 ]
474507 assert len(d) == 2
475508
476 assert d.pop('foo') == 1
477 assert d.pop('blafasel', None) is None
478 assert d.pop('blafasel', 42) == 42
509 assert d.pop("foo") == 1
510 assert d.pop("blafasel", None) is None
511 assert d.pop("blafasel", 42) == 42
479512 assert len(d) == 1
480 assert d.poplist('bar') == [42]
513 assert d.poplist("bar") == [42]
481514 assert not d
482515
483 d.get('missingkey') is None
484
485 d.add('foo', 42)
486 d.add('foo', 23)
487 d.add('bar', 2)
488 d.add('foo', 42)
516 assert d.get("missingkey") is None
517
518 d.add("foo", 42)
519 d.add("foo", 23)
520 d.add("bar", 2)
521 d.add("foo", 42)
489522 assert d == datastructures.MultiDict(d)
490523 id = self.storage_class(d)
491524 assert d == id
492 d.add('foo', 2)
525 d.add("foo", 2)
493526 assert d != id
494527
495 d.update({'blah': [1, 2, 3]})
496 assert d['blah'] == 1
497 assert d.getlist('blah') == [1, 2, 3]
528 d.update({"blah": [1, 2, 3]})
529 assert d["blah"] == 1
530 assert d.getlist("blah") == [1, 2, 3]
498531
499532 # setlist works
500533 d = self.storage_class()
501 d['foo'] = 42
502 d.setlist('foo', [1, 2])
503 assert d.getlist('foo') == [1, 2]
504
534 d["foo"] = 42
535 d.setlist("foo", [1, 2])
536 assert d.getlist("foo") == [1, 2]
505537 with pytest.raises(BadRequestKeyError):
506 d.pop('missing')
538 d.pop("missing")
539
507540 with pytest.raises(BadRequestKeyError):
508 d['missing']
541 d["missing"]
509542
510543 # popping
511544 d = self.storage_class()
512 d.add('foo', 23)
513 d.add('foo', 42)
514 d.add('foo', 1)
515 assert d.popitem() == ('foo', 23)
545 d.add("foo", 23)
546 d.add("foo", 42)
547 d.add("foo", 1)
548 assert d.popitem() == ("foo", 23)
516549 with pytest.raises(BadRequestKeyError):
517550 d.popitem()
518551 assert not d
519552
520 d.add('foo', 23)
521 d.add('foo', 42)
522 d.add('foo', 1)
523 assert d.popitemlist() == ('foo', [23, 42, 1])
553 d.add("foo", 23)
554 d.add("foo", 42)
555 d.add("foo", 1)
556 assert d.popitemlist() == ("foo", [23, 42, 1])
524557
525558 with pytest.raises(BadRequestKeyError):
526559 d.popitemlist()
527560
528561 # Unhashable
529562 d = self.storage_class()
530 d.add('foo', 23)
563 d.add("foo", 23)
531564 pytest.raises(TypeError, hash, d)
532565
533566 def test_iterables(self):
535568 b = datastructures.MultiDict((("key_b", "value_b"),))
536569 ab = datastructures.CombinedMultiDict((a, b))
537570
538 assert sorted(ab.lists()) == [('key_a', ['value_a']), ('key_b', ['value_b'])]
539 assert sorted(ab.listvalues()) == [['value_a'], ['value_b']]
571 assert sorted(ab.lists()) == [("key_a", ["value_a"]), ("key_b", ["value_b"])]
572 assert sorted(ab.listvalues()) == [["value_a"], ["value_b"]]
540573 assert sorted(ab.keys()) == ["key_a", "key_b"]
541574
542 assert sorted(iterlists(ab)) == [('key_a', ['value_a']), ('key_b', ['value_b'])]
543 assert sorted(iterlistvalues(ab)) == [['value_a'], ['value_b']]
575 assert sorted(iterlists(ab)) == [("key_a", ["value_a"]), ("key_b", ["value_b"])]
576 assert sorted(iterlistvalues(ab)) == [["value_a"], ["value_b"]]
544577 assert sorted(iterkeys(ab)) == ["key_a", "key_b"]
578
579 def test_get_description(self):
580 data = datastructures.OrderedMultiDict()
581
582 with pytest.raises(BadRequestKeyError) as exc_info:
583 data["baz"]
584
585 assert "baz" in exc_info.value.get_description()
586
587 with pytest.raises(BadRequestKeyError) as exc_info:
588 data.pop("baz")
589
590 assert "baz" in exc_info.value.get_description()
591 exc_info.value.args = ()
592 assert "baz" not in exc_info.value.get_description()
545593
546594
547595 class TestTypeConversionDict(object):
548596 storage_class = datastructures.TypeConversionDict
549597
550598 def test_value_conversion(self):
551 d = self.storage_class(foo='1')
552 assert d.get('foo', type=int) == 1
599 d = self.storage_class(foo="1")
600 assert d.get("foo", type=int) == 1
553601
554602 def test_return_default_when_conversion_is_not_possible(self):
555 d = self.storage_class(foo='bar')
556 assert d.get('foo', default=-1, type=int) == -1
603 d = self.storage_class(foo="bar")
604 assert d.get("foo", default=-1, type=int) == -1
557605
558606 def test_propagate_exceptions_in_conversion(self):
559 d = self.storage_class(foo='bar')
560 switch = {'a': 1}
607 d = self.storage_class(foo="bar")
608 switch = {"a": 1}
561609 with pytest.raises(KeyError):
562 d.get('foo', type=lambda x: switch[x])
610 d.get("foo", type=lambda x: switch[x])
563611
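The three `TypeConversionDict` tests above cover conversion, fallback, and propagation: `get(type=...)` swallows only `ValueError` from the converter, so other exceptions reach the caller. A minimal sketch:

```python
from werkzeug.datastructures import TypeConversionDict

d = TypeConversionDict(foo="42", bar="baz")
assert d.get("foo", type=int) == 42               # successful conversion
assert d.get("bar", default=-1, type=int) == -1   # int("baz") raises ValueError -> default

propagated = False
try:
    d.get("bar", type=lambda v: {"a": 1}[v])      # a KeyError is not swallowed
except KeyError:
    propagated = True
assert propagated
```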
564612
565613 class TestCombinedMultiDict(object):
566614 storage_class = datastructures.CombinedMultiDict
567615
568616 def test_basic_interface(self):
569 d1 = datastructures.MultiDict([('foo', '1')])
570 d2 = datastructures.MultiDict([('bar', '2'), ('bar', '3')])
617 d1 = datastructures.MultiDict([("foo", "1")])
618 d2 = datastructures.MultiDict([("bar", "2"), ("bar", "3")])
571619 d = self.storage_class([d1, d2])
572620
573621 # lookup
574 assert d['foo'] == '1'
575 assert d['bar'] == '2'
576 assert d.getlist('bar') == ['2', '3']
577
578 assert sorted(d.items()) == [('bar', '2'), ('foo', '1')]
579 assert sorted(d.items(multi=True)) == \
580 [('bar', '2'), ('bar', '3'), ('foo', '1')]
581 assert 'missingkey' not in d
582 assert 'foo' in d
622 assert d["foo"] == "1"
623 assert d["bar"] == "2"
624 assert d.getlist("bar") == ["2", "3"]
625
626 assert sorted(d.items()) == [("bar", "2"), ("foo", "1")]
627 assert sorted(d.items(multi=True)) == [("bar", "2"), ("bar", "3"), ("foo", "1")]
628 assert "missingkey" not in d
629 assert "foo" in d
583630
584631 # type lookup
585 assert d.get('foo', type=int) == 1
586 assert d.getlist('bar', type=int) == [2, 3]
632 assert d.get("foo", type=int) == 1
633 assert d.getlist("bar", type=int) == [2, 3]
587634
588635 # get key errors for missing stuff
589636 with pytest.raises(KeyError):
590 d['missing']
637 d["missing"]
591638
592639 # make sure that they are immutable
593640 with pytest.raises(TypeError):
594 d['foo'] = 'blub'
595
596 # copies are immutable
641 d["foo"] = "blub"
642
643 # copies are mutable
597644 d = d.copy()
598 with pytest.raises(TypeError):
599 d['foo'] = 'blub'
645 d["foo"] = "blub"
600646
601647 # make sure lists merges
602648 md1 = datastructures.MultiDict((("foo", "bar"),))
603649 md2 = datastructures.MultiDict((("foo", "blafasel"),))
604650 x = self.storage_class((md1, md2))
605 assert list(iterlists(x)) == [('foo', ['bar', 'blafasel'])]
651 assert list(iterlists(x)) == [("foo", ["bar", "blafasel"])]
606652
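As the updated comment above notes, `CombinedMultiDict.copy()` now returns a mutable dict (behavior as of 0.15): the combined view itself stays read-only, with earlier dicts taking precedence. A sketch:

```python
from werkzeug.datastructures import CombinedMultiDict, MultiDict

d1 = MultiDict([("foo", "1")])
d2 = MultiDict([("foo", "2"), ("bar", "3")])
d = CombinedMultiDict([d1, d2])

assert d["foo"] == "1"                  # the first dict wins for scalar lookup
assert d.getlist("foo") == ["1", "2"]   # lists from all dicts are merged in order
try:
    d["x"] = "y"                        # the combined view itself is read-only
except TypeError:
    pass

copied = d.copy()                       # a flattened, mutable MultiDict
copied["x"] = "y"
assert copied["x"] == "y"
```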
607653 def test_length(self):
608 d1 = datastructures.MultiDict([('foo', '1')])
609 d2 = datastructures.MultiDict([('bar', '2')])
654 d1 = datastructures.MultiDict([("foo", "1")])
655 d2 = datastructures.MultiDict([("bar", "2")])
610656 assert len(d1) == len(d2) == 1
611657 d = self.storage_class([d1, d2])
612658 assert len(d) == 2
620666
621667 def test_basic_interface(self):
622668 headers = self.storage_class()
623 headers.add('Content-Type', 'text/plain')
624 headers.add('X-Foo', 'bar')
625 assert 'x-Foo' in headers
626 assert 'Content-type' in headers
627
628 headers['Content-Type'] = 'foo/bar'
629 assert headers['Content-Type'] == 'foo/bar'
630 assert len(headers.getlist('Content-Type')) == 1
669 headers.add("Content-Type", "text/plain")
670 headers.add("X-Foo", "bar")
671 assert "x-Foo" in headers
672 assert "Content-type" in headers
673
674 headers["Content-Type"] = "foo/bar"
675 assert headers["Content-Type"] == "foo/bar"
676 assert len(headers.getlist("Content-Type")) == 1
631677
632678 # list conversion
633 assert headers.to_wsgi_list() == [
634 ('Content-Type', 'foo/bar'),
635 ('X-Foo', 'bar')
636 ]
637 assert str(headers) == (
638 "Content-Type: foo/bar\r\n"
639 "X-Foo: bar\r\n"
640 "\r\n"
641 )
679 assert headers.to_wsgi_list() == [("Content-Type", "foo/bar"), ("X-Foo", "bar")]
680 assert str(headers) == "Content-Type: foo/bar\r\nX-Foo: bar\r\n\r\n"
642681 assert str(self.storage_class()) == "\r\n"
643682
644683 # extended add
645 headers.add('Content-Disposition', 'attachment', filename='foo')
646 assert headers['Content-Disposition'] == 'attachment; filename=foo'
647
648 headers.add('x', 'y', z='"')
649 assert headers['x'] == r'y; z="\""'
684 headers.add("Content-Disposition", "attachment", filename="foo")
685 assert headers["Content-Disposition"] == "attachment; filename=foo"
686
687 headers.add("x", "y", z='"')
688 assert headers["x"] == r'y; z="\""'
650689
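The "extended add" assertions above rely on `Headers.add`/`set` serializing keyword arguments as header options and on case-insensitive lookup; sketched with the public API:

```python
from werkzeug.datastructures import Headers

h = Headers()
# keyword arguments are serialized as header options
h.add("Content-Disposition", "attachment", filename="foo")
assert h["Content-Disposition"] == "attachment; filename=foo"

h.set("Content-Type", "text/plain")
h.set("Content-Type", "text/html")      # set() replaces, add() appends
assert h.getlist("Content-Type") == ["text/html"]

# lookups are case-insensitive
assert "content-type" in h
```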
651690 def test_defaults_and_conversion(self):
652691 # defaults
653 headers = self.storage_class([
654 ('Content-Type', 'text/plain'),
655 ('X-Foo', 'bar'),
656 ('X-Bar', '1'),
657 ('X-Bar', '2')
658 ])
659 assert headers.getlist('x-bar') == ['1', '2']
660 assert headers.get('x-Bar') == '1'
661 assert headers.get('Content-Type') == 'text/plain'
662
663 assert headers.setdefault('X-Foo', 'nope') == 'bar'
664 assert headers.setdefault('X-Bar', 'nope') == '1'
665 assert headers.setdefault('X-Baz', 'quux') == 'quux'
666 assert headers.setdefault('X-Baz', 'nope') == 'quux'
667 headers.pop('X-Baz')
692 headers = self.storage_class(
693 [
694 ("Content-Type", "text/plain"),
695 ("X-Foo", "bar"),
696 ("X-Bar", "1"),
697 ("X-Bar", "2"),
698 ]
699 )
700 assert headers.getlist("x-bar") == ["1", "2"]
701 assert headers.get("x-Bar") == "1"
702 assert headers.get("Content-Type") == "text/plain"
703
704 assert headers.setdefault("X-Foo", "nope") == "bar"
705 assert headers.setdefault("X-Bar", "nope") == "1"
706 assert headers.setdefault("X-Baz", "quux") == "quux"
707 assert headers.setdefault("X-Baz", "nope") == "quux"
708 headers.pop("X-Baz")
668709
669710 # type conversion
670 assert headers.get('x-bar', type=int) == 1
671 assert headers.getlist('x-bar', type=int) == [1, 2]
711 assert headers.get("x-bar", type=int) == 1
712 assert headers.getlist("x-bar", type=int) == [1, 2]
672713
673714 # list like operations
674 assert headers[0] == ('Content-Type', 'text/plain')
675 assert headers[:1] == self.storage_class([('Content-Type', 'text/plain')])
715 assert headers[0] == ("Content-Type", "text/plain")
716 assert headers[:1] == self.storage_class([("Content-Type", "text/plain")])
676717 del headers[:2]
677718 del headers[-1]
678 assert headers == self.storage_class([('X-Bar', '1')])
719 assert headers == self.storage_class([("X-Bar", "1")])
679720
680721 def test_copying(self):
681 a = self.storage_class([('foo', 'bar')])
722 a = self.storage_class([("foo", "bar")])
682723 b = a.copy()
683 a.add('foo', 'baz')
684 assert a.getlist('foo') == ['bar', 'baz']
685 assert b.getlist('foo') == ['bar']
724 a.add("foo", "baz")
725 assert a.getlist("foo") == ["bar", "baz"]
726 assert b.getlist("foo") == ["bar"]
686727
687728 def test_popping(self):
688 headers = self.storage_class([('a', 1)])
689 assert headers.pop('a') == 1
690 assert headers.pop('b', 2) == 2
729 headers = self.storage_class([("a", 1)])
730 assert headers.pop("a") == 1
731 assert headers.pop("b", 2) == 2
691732
692733 with pytest.raises(KeyError):
693 headers.pop('c')
734 headers.pop("c")
694735
695736 def test_set_arguments(self):
696737 a = self.storage_class()
697 a.set('Content-Disposition', 'useless')
698 a.set('Content-Disposition', 'attachment', filename='foo')
699 assert a['Content-Disposition'] == 'attachment; filename=foo'
738 a.set("Content-Disposition", "useless")
739 a.set("Content-Disposition", "attachment", filename="foo")
740 assert a["Content-Disposition"] == "attachment; filename=foo"
700741
701742 def test_reject_newlines(self):
702743 h = self.storage_class()
703744
704 for variation in 'foo\nbar', 'foo\r\nbar', 'foo\rbar':
745 for variation in "foo\nbar", "foo\r\nbar", "foo\rbar":
705746 with pytest.raises(ValueError):
706 h['foo'] = variation
747 h["foo"] = variation
707748 with pytest.raises(ValueError):
708 h.add('foo', variation)
749 h.add("foo", variation)
709750 with pytest.raises(ValueError):
710 h.add('foo', 'test', option=variation)
751 h.add("foo", "test", option=variation)
711752 with pytest.raises(ValueError):
712 h.set('foo', variation)
753 h.set("foo", variation)
713754 with pytest.raises(ValueError):
714 h.set('foo', 'test', option=variation)
755 h.set("foo", "test", option=variation)
715756
716757 def test_slicing(self):
717758 # there's nothing wrong with these being native strings
718759 # Headers doesn't care about the data types
719760 h = self.storage_class()
720 h.set('X-Foo-Poo', 'bleh')
721 h.set('Content-Type', 'application/whocares')
722 h.set('X-Forwarded-For', '192.168.0.123')
723 h[:] = [(k, v) for k, v in h if k.startswith(u'X-')]
724 assert list(h) == [
725 ('X-Foo-Poo', 'bleh'),
726 ('X-Forwarded-For', '192.168.0.123')
727 ]
761 h.set("X-Foo-Poo", "bleh")
762 h.set("Content-Type", "application/whocares")
763 h.set("X-Forwarded-For", "192.168.0.123")
764 h[:] = [(k, v) for k, v in h if k.startswith(u"X-")]
765 assert list(h) == [("X-Foo-Poo", "bleh"), ("X-Forwarded-For", "192.168.0.123")]
728766
729767 def test_bytes_operations(self):
730768 h = self.storage_class()
731 h.set('X-Foo-Poo', 'bleh')
732 h.set('X-Whoops', b'\xff')
733
734 assert h.get('x-foo-poo', as_bytes=True) == b'bleh'
735 assert h.get('x-whoops', as_bytes=True) == b'\xff'
769 h.set("X-Foo-Poo", "bleh")
770 h.set("X-Whoops", b"\xff")
771 h.set(b"X-Bytes", b"something")
772
773 assert h.get("x-foo-poo", as_bytes=True) == b"bleh"
774 assert h.get("x-whoops", as_bytes=True) == b"\xff"
775 assert h.get("x-bytes") == "something"
736776
737777 def test_to_wsgi_list(self):
738778 h = self.storage_class()
739 h.set(u'Key', u'Value')
779 h.set(u"Key", u"Value")
740780 for key, value in h.to_wsgi_list():
741781 if PY2:
742 strict_eq(key, b'Key')
743 strict_eq(value, b'Value')
782 strict_eq(key, b"Key")
783 strict_eq(value, b"Value")
744784 else:
745 strict_eq(key, u'Key')
746 strict_eq(value, u'Value')
785 strict_eq(key, u"Key")
786 strict_eq(value, u"Value")
787
788 def test_to_wsgi_list_bytes(self):
789 h = self.storage_class()
790 h.set(b"Key", b"Value")
791 for key, value in h.to_wsgi_list():
792 if PY2:
793 strict_eq(key, b"Key")
794 strict_eq(value, b"Value")
795 else:
796 strict_eq(key, u"Key")
797 strict_eq(value, u"Value")
747798
748799
749800 class TestEnvironHeaders(object):
753804 # this happens in multiple WSGI servers because they
754805 # use a very naive way to convert the headers;
755806 broken_env = {
756 'HTTP_CONTENT_TYPE': 'text/html',
757 'CONTENT_TYPE': 'text/html',
758 'HTTP_CONTENT_LENGTH': '0',
759 'CONTENT_LENGTH': '0',
760 'HTTP_ACCEPT': '*',
761 'wsgi.version': (1, 0)
807 "HTTP_CONTENT_TYPE": "text/html",
808 "CONTENT_TYPE": "text/html",
809 "HTTP_CONTENT_LENGTH": "0",
810 "CONTENT_LENGTH": "0",
811 "HTTP_ACCEPT": "*",
812 "wsgi.version": (1, 0),
762813 }
763814 headers = self.storage_class(broken_env)
764815 assert headers
765816 assert len(headers) == 3
766817 assert sorted(headers) == [
767 ('Accept', '*'),
768 ('Content-Length', '0'),
769 ('Content-Type', 'text/html')
818 ("Accept", "*"),
819 ("Content-Length", "0"),
820 ("Content-Type", "text/html"),
770821 ]
771 assert not self.storage_class({'wsgi.version': (1, 0)})
772 assert len(self.storage_class({'wsgi.version': (1, 0)})) == 0
822 assert not self.storage_class({"wsgi.version": (1, 0)})
823 assert len(self.storage_class({"wsgi.version": (1, 0)})) == 0
773824 assert 42 not in headers
774825
775826 def test_skip_empty_special_vars(self):
776 env = {
777 'HTTP_X_FOO': '42',
778 'CONTENT_TYPE': '',
779 'CONTENT_LENGTH': '',
780 }
827 env = {"HTTP_X_FOO": "42", "CONTENT_TYPE": "", "CONTENT_LENGTH": ""}
781828 headers = self.storage_class(env)
782 assert dict(headers) == {'X-Foo': '42'}
783
784 env = {
785 'HTTP_X_FOO': '42',
786 'CONTENT_TYPE': '',
787 'CONTENT_LENGTH': '0',
788 }
829 assert dict(headers) == {"X-Foo": "42"}
830
831 env = {"HTTP_X_FOO": "42", "CONTENT_TYPE": "", "CONTENT_LENGTH": "0"}
789832 headers = self.storage_class(env)
790 assert dict(headers) == {'X-Foo': '42', 'Content-Length': '0'}
833 assert dict(headers) == {"X-Foo": "42", "Content-Length": "0"}
791834
792835 def test_return_type_is_unicode(self):
793836 # environ contains native strings; we return unicode
794 headers = self.storage_class({
795 'HTTP_FOO': '\xe2\x9c\x93',
796 'CONTENT_TYPE': 'text/plain',
797 })
798 assert headers['Foo'] == u"\xe2\x9c\x93"
799 assert isinstance(headers['Foo'], text_type)
800 assert isinstance(headers['Content-Type'], text_type)
837 headers = self.storage_class(
838 {"HTTP_FOO": "\xe2\x9c\x93", "CONTENT_TYPE": "text/plain"}
839 )
840 assert headers["Foo"] == u"\xe2\x9c\x93"
841 assert isinstance(headers["Foo"], text_type)
842 assert isinstance(headers["Content-Type"], text_type)
801843 iter_output = dict(iter(headers))
802 assert iter_output['Foo'] == u"\xe2\x9c\x93"
803 assert isinstance(iter_output['Foo'], text_type)
804 assert isinstance(iter_output['Content-Type'], text_type)
844 assert iter_output["Foo"] == u"\xe2\x9c\x93"
845 assert isinstance(iter_output["Foo"], text_type)
846 assert isinstance(iter_output["Content-Type"], text_type)
805847
806848 def test_bytes_operations(self):
807 foo_val = '\xff'
808 h = self.storage_class({
809 'HTTP_X_FOO': foo_val
810 })
811
812 assert h.get('x-foo', as_bytes=True) == b'\xff'
813 assert h.get('x-foo') == u'\xff'
849 foo_val = "\xff"
850 h = self.storage_class({"HTTP_X_FOO": foo_val})
851
852 assert h.get("x-foo", as_bytes=True) == b"\xff"
853 assert h.get("x-foo") == u"\xff"
814854
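The `EnvironHeaders` tests above depend on how a WSGI environ is mapped to headers: `HTTP_*` keys become header names, `CONTENT_TYPE`/`CONTENT_LENGTH` are special-cased, and everything else is ignored. A sketch:

```python
from werkzeug.datastructures import EnvironHeaders

environ = {
    "HTTP_X_FOO": "42",            # HTTP_* keys become headers
    "CONTENT_TYPE": "text/plain",  # special-cased despite lacking the HTTP_ prefix
    "wsgi.version": (1, 0),        # non-header keys are ignored
}
headers = EnvironHeaders(environ)
assert headers["X-Foo"] == "42"
assert headers["Content-Type"] == "text/plain"
assert len(headers) == 2
```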
815855
816856 class TestHeaderSet(object):
818858
819859 def test_basic_interface(self):
820860 hs = self.storage_class()
821 hs.add('foo')
822 hs.add('bar')
823 assert 'Bar' in hs
824 assert hs.find('foo') == 0
825 assert hs.find('BAR') == 1
826 assert hs.find('baz') < 0
827 hs.discard('missing')
828 hs.discard('foo')
829 assert hs.find('foo') < 0
830 assert hs.find('bar') == 0
861 hs.add("foo")
862 hs.add("bar")
863 assert "Bar" in hs
864 assert hs.find("foo") == 0
865 assert hs.find("BAR") == 1
866 assert hs.find("baz") < 0
867 hs.discard("missing")
868 hs.discard("foo")
869 assert hs.find("foo") < 0
870 assert hs.find("bar") == 0
831871
832872 with pytest.raises(IndexError):
833 hs.index('missing')
834
835 assert hs.index('bar') == 0
873 hs.index("missing")
874
875 assert hs.index("bar") == 0
836876 assert hs
837877 hs.clear()
838878 assert not hs
855895
856896 >>> assert_calls, func = make_call_asserter()
857897 >>> with assert_calls(2):
858 func()
859 func()
898 ... func()
899 ... func()
860900 """
861901
862902 calls = [0]
880920
881921 def test_callback_dict_reads(self):
882922 assert_calls, func = make_call_asserter()
883 initial = {'a': 'foo', 'b': 'bar'}
923 initial = {"a": "foo", "b": "bar"}
884924 dct = self.storage_class(initial=initial, on_update=func)
885 with assert_calls(0, 'callback triggered by read-only method'):
925 with assert_calls(0, "callback triggered by read-only method"):
886926 # read-only methods
887 dct['a']
888 dct.get('a')
889 pytest.raises(KeyError, lambda: dct['x'])
890 'a' in dct
927 dct["a"]
928 dct.get("a")
929 pytest.raises(KeyError, lambda: dct["x"])
930 "a" in dct
891931 list(iter(dct))
892932 dct.copy()
893 with assert_calls(0, 'callback triggered without modification'):
933 with assert_calls(0, "callback triggered without modification"):
894934 # methods that may write but don't
895 dct.pop('z', None)
896 dct.setdefault('a')
935 dct.pop("z", None)
936 dct.setdefault("a")
897937
898938 def test_callback_dict_writes(self):
899939 assert_calls, func = make_call_asserter()
900 initial = {'a': 'foo', 'b': 'bar'}
940 initial = {"a": "foo", "b": "bar"}
901941 dct = self.storage_class(initial=initial, on_update=func)
902 with assert_calls(8, 'callback not triggered by write method'):
942 with assert_calls(8, "callback not triggered by write method"):
903943 # always-write methods
904 dct['z'] = 123
905 dct['z'] = 123 # must trigger again
906 del dct['z']
907 dct.pop('b', None)
908 dct.setdefault('x')
944 dct["z"] = 123
945 dct["z"] = 123 # must trigger again
946 del dct["z"]
947 dct.pop("b", None)
948 dct.setdefault("x")
909949 dct.popitem()
910950 dct.update([])
911951 dct.clear()
912 with assert_calls(0, 'callback triggered by failed del'):
913 pytest.raises(KeyError, lambda: dct.__delitem__('x'))
914 with assert_calls(0, 'callback triggered by failed pop'):
915 pytest.raises(KeyError, lambda: dct.pop('x'))
952 with assert_calls(0, "callback triggered by failed del"):
953 pytest.raises(KeyError, lambda: dct.__delitem__("x"))
954 with assert_calls(0, "callback triggered by failed pop"):
955 pytest.raises(KeyError, lambda: dct.pop("x"))
916956
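The two `CallbackDict` tests above count `on_update` invocations; only mutations that actually succeed fire the callback, never reads or failed operations. Sketched briefly:

```python
from werkzeug.datastructures import CallbackDict

calls = []
d = CallbackDict({"a": 1}, on_update=lambda dct: calls.append(len(dct)))

d["b"] = 2            # a successful write fires the callback
d.get("a")            # reads never fire it
try:
    d.pop("missing")  # a failed pop does not fire it either
except KeyError:
    pass

assert calls == [2]
```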
917957
918958 class TestCacheControl(object):
919
920959 def test_repr(self):
921 cc = datastructures.RequestCacheControl(
922 [("max-age", "0"), ("private", "True")],
923 )
960 cc = datastructures.RequestCacheControl([("max-age", "0"), ("private", "True")])
924961 assert repr(cc) == "<RequestCacheControl max-age='0' private='True'>"
925962
926963
928965 storage_class = datastructures.Accept
929966
930967 def test_accept_basic(self):
931 accept = self.storage_class([('tinker', 0), ('tailor', 0.333),
932 ('soldier', 0.667), ('sailor', 1)])
968 accept = self.storage_class(
969 [("tinker", 0), ("tailor", 0.333), ("soldier", 0.667), ("sailor", 1)]
970 )
933971 # check __getitem__ on indices
934 assert accept[3] == ('tinker', 0)
935 assert accept[2] == ('tailor', 0.333)
936 assert accept[1] == ('soldier', 0.667)
937 assert accept[0], ('sailor', 1)
972 assert accept[3] == ("tinker", 0)
973 assert accept[2] == ("tailor", 0.333)
974 assert accept[1] == ("soldier", 0.667)
975 assert accept[0] == ("sailor", 1)
938976 # check __getitem__ on string
939 assert accept['tinker'] == 0
940 assert accept['tailor'] == 0.333
941 assert accept['soldier'] == 0.667
942 assert accept['sailor'] == 1
943 assert accept['spy'] == 0
977 assert accept["tinker"] == 0
978 assert accept["tailor"] == 0.333
979 assert accept["soldier"] == 0.667
980 assert accept["sailor"] == 1
981 assert accept["spy"] == 0
944982 # check quality method
945 assert accept.quality('tinker') == 0
946 assert accept.quality('tailor') == 0.333
947 assert accept.quality('soldier') == 0.667
948 assert accept.quality('sailor') == 1
949 assert accept.quality('spy') == 0
983 assert accept.quality("tinker") == 0
984 assert accept.quality("tailor") == 0.333
985 assert accept.quality("soldier") == 0.667
986 assert accept.quality("sailor") == 1
987 assert accept.quality("spy") == 0
950988 # check __contains__
951 assert 'sailor' in accept
952 assert 'spy' not in accept
989 assert "sailor" in accept
990 assert "spy" not in accept
953991 # check index method
954 assert accept.index('tinker') == 3
955 assert accept.index('tailor') == 2
956 assert accept.index('soldier') == 1
957 assert accept.index('sailor') == 0
992 assert accept.index("tinker") == 3
993 assert accept.index("tailor") == 2
994 assert accept.index("soldier") == 1
995 assert accept.index("sailor") == 0
958996 with pytest.raises(ValueError):
959 accept.index('spy')
997 accept.index("spy")
960998 # check find method
961 assert accept.find('tinker') == 3
962 assert accept.find('tailor') == 2
963 assert accept.find('soldier') == 1
964 assert accept.find('sailor') == 0
965 assert accept.find('spy') == -1
999 assert accept.find("tinker") == 3
1000 assert accept.find("tailor") == 2
1001 assert accept.find("soldier") == 1
1002 assert accept.find("sailor") == 0
1003 assert accept.find("spy") == -1
9661004 # check to_header method
967 assert accept.to_header() == \
968 'sailor,soldier;q=0.667,tailor;q=0.333,tinker;q=0'
1005 assert accept.to_header() == "sailor,soldier;q=0.667,tailor;q=0.333,tinker;q=0"
9691006 # check best_match method
970 assert accept.best_match(['tinker', 'tailor', 'soldier', 'sailor'],
971 default=None) == 'sailor'
972 assert accept.best_match(['tinker', 'tailor', 'soldier'],
973 default=None) == 'soldier'
974 assert accept.best_match(['tinker', 'tailor'], default=None) == \
975 'tailor'
976 assert accept.best_match(['tinker'], default=None) is None
977 assert accept.best_match(['tinker'], default='x') == 'x'
1007 assert (
1008 accept.best_match(["tinker", "tailor", "soldier", "sailor"], default=None)
1009 == "sailor"
1010 )
1011 assert (
1012 accept.best_match(["tinker", "tailor", "soldier"], default=None)
1013 == "soldier"
1014 )
1015 assert accept.best_match(["tinker", "tailor"], default=None) == "tailor"
1016 assert accept.best_match(["tinker"], default=None) is None
1017 assert accept.best_match(["tinker"], default="x") == "x"
9781018
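The `best_match` assertions above follow from `Accept` sorting entries by quality, with a quality of 0 meaning "not acceptable"; a condensed sketch of the same data:

```python
from werkzeug.datastructures import Accept

accept = Accept(
    [("tinker", 0), ("tailor", 0.333), ("soldier", 0.667), ("sailor", 1)]
)
assert accept.best_match(["tailor", "soldier"]) == "soldier"  # highest quality wins
assert accept.quality("sailor") == 1
assert accept.quality("spy") == 0                             # unlisted values score 0
assert accept.best_match(["tinker"], default=None) is None    # q=0 never matches
```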
9791019 def test_accept_wildcard(self):
980 accept = self.storage_class([('*', 0), ('asterisk', 1)])
981 assert '*' in accept
982 assert accept.best_match(['asterisk', 'star'], default=None) == \
983 'asterisk'
984 assert accept.best_match(['star'], default=None) is None
1020 accept = self.storage_class([("*", 0), ("asterisk", 1)])
1021 assert "*" in accept
1022 assert accept.best_match(["asterisk", "star"], default=None) == "asterisk"
1023 assert accept.best_match(["star"], default=None) is None
9851024
9861025 def test_accept_keep_order(self):
987 accept = self.storage_class([('*', 1)])
1026 accept = self.storage_class([("*", 1)])
9881027 assert accept.best_match(["alice", "bob"]) == "alice"
9891028 assert accept.best_match(["bob", "alice"]) == "bob"
990 accept = self.storage_class([('alice', 1), ('bob', 1)])
1029 accept = self.storage_class([("alice", 1), ("bob", 1)])
9911030 assert accept.best_match(["alice", "bob"]) == "alice"
9921031 assert accept.best_match(["bob", "alice"]) == "bob"
9931032
9941033 def test_accept_wildcard_specificity(self):
995 accept = self.storage_class([('asterisk', 0), ('star', 0.5), ('*', 1)])
996 assert accept.best_match(['star', 'asterisk'], default=None) == 'star'
997 assert accept.best_match(['asterisk', 'star'], default=None) == 'star'
998 assert accept.best_match(['asterisk', 'times'], default=None) == \
999 'times'
1000 assert accept.best_match(['asterisk'], default=None) is None
1034 accept = self.storage_class([("asterisk", 0), ("star", 0.5), ("*", 1)])
1035 assert accept.best_match(["star", "asterisk"], default=None) == "star"
1036 assert accept.best_match(["asterisk", "star"], default=None) == "star"
1037 assert accept.best_match(["asterisk", "times"], default=None) == "times"
1038 assert accept.best_match(["asterisk"], default=None) is None
10011039
10021040
10031041 class TestMIMEAccept(object):
10041042 storage_class = datastructures.MIMEAccept
10051043
10061044 def test_accept_wildcard_subtype(self):
1007 accept = self.storage_class([('text/*', 1)])
1008 assert accept.best_match(['text/html'], default=None) == 'text/html'
1009 assert accept.best_match(['image/png', 'text/plain']) == 'text/plain'
1010 assert accept.best_match(['image/png'], default=None) is None
1045 accept = self.storage_class([("text/*", 1)])
1046 assert accept.best_match(["text/html"], default=None) == "text/html"
1047 assert accept.best_match(["image/png", "text/plain"]) == "text/plain"
1048 assert accept.best_match(["image/png"], default=None) is None
10111049
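`test_accept_wildcard_subtype` and the specificity test below rest on `MIMEAccept` treating `text/*` as covering any text subtype, while an exact entry beats a wildcard of equal quality. A sketch:

```python
from werkzeug.datastructures import MIMEAccept

accept = MIMEAccept([("text/*", 1)])
assert accept.best_match(["text/html"]) == "text/html"        # wildcard subtype matches
assert accept.best_match(["image/png"], default=None) is None

accept = MIMEAccept([("*/*", 1), ("text/html", 1)])
# at equal quality, the exact entry is more specific than */*
assert accept.best_match(["image/png", "text/html"]) == "text/html"
```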
10121050 def test_accept_wildcard_specificity(self):
1013 accept = self.storage_class([('*/*', 1), ('text/html', 1)])
1014 assert accept.best_match(['image/png', 'text/html']) == 'text/html'
1015 assert accept.best_match(['image/png', 'text/plain']) == 'image/png'
1016 accept = self.storage_class([('*/*', 1), ('text/html', 1),
1017 ('image/*', 1)])
1018 assert accept.best_match(['image/png', 'text/html']) == 'text/html'
1019 assert accept.best_match(['text/plain', 'image/png']) == 'image/png'
1051 accept = self.storage_class([("*/*", 1), ("text/html", 1)])
1052 assert accept.best_match(["image/png", "text/html"]) == "text/html"
1053 assert accept.best_match(["image/png", "text/plain"]) == "image/png"
1054 accept = self.storage_class([("*/*", 1), ("text/html", 1), ("image/*", 1)])
1055 assert accept.best_match(["image/png", "text/html"]) == "text/html"
1056 assert accept.best_match(["text/plain", "image/png"]) == "image/png"
10201057
10211058
10221059 class TestFileStorage(object):
10231060 storage_class = datastructures.FileStorage
10241061
10251062 def test_mimetype_always_lowercase(self):
1026 file_storage = self.storage_class(content_type='APPLICATION/JSON')
1027 assert file_storage.mimetype == 'application/json'
1063 file_storage = self.storage_class(content_type="APPLICATION/JSON")
1064 assert file_storage.mimetype == "application/json"
10281065
10291066 def test_bytes_proper_sentinel(self):
10301067 # ensure we iterate over new lines and don't enter into an infinite loop
10311068 import io
1069
10321070 unicode_storage = self.storage_class(io.StringIO(u"one\ntwo"))
1033 for idx, line in enumerate(unicode_storage):
1071 for idx, _line in enumerate(unicode_storage):
10341072 assert idx < 2
10351073 assert idx == 1
10361074 binary_storage = self.storage_class(io.BytesIO(b"one\ntwo"))
1037 for idx, line in enumerate(binary_storage):
1075 for idx, _line in enumerate(binary_storage):
10381076 assert idx < 2
10391077 assert idx == 1
1078
1079 @pytest.mark.skipif(PY2, reason="io.IOBase is only needed in PY3.")
1080 @pytest.mark.parametrize("stream", (tempfile.SpooledTemporaryFile, io.BytesIO))
1081 def test_proxy_can_access_stream_attrs(self, stream):
1082 """``SpooledTemporaryFile`` doesn't implement some of
1083 ``IOBase``. Ensure that ``FileStorage`` can still access the
1084 attributes from the backing file object.
1085
1086 https://github.com/pallets/werkzeug/issues/1344
1087 https://github.com/python/cpython/pull/3249
1088 """
1089 file_storage = self.storage_class(stream=stream())
1090
1091 for name in ("fileno", "writable", "readable", "seekable"):
1092 assert hasattr(file_storage, name)
1093
1094
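The attribute forwarding exercised by `test_proxy_can_access_stream_attrs` can be sketched with a minimal stand-in (a hypothetical `StreamProxy`, not Werkzeug's `FileStorage`): attributes the proxy does not define are delegated to the wrapped stream via `__getattr__`.

```python
import io


class StreamProxy:
    """Forward any attribute not found on the proxy itself to the
    wrapped stream, so stream methods such as seekable() work even
    though the proxy never defines them."""

    def __init__(self, stream):
        self.stream = stream

    def __getattr__(self, name):
        # Only called when normal attribute lookup fails.
        return getattr(self.stream, name)


proxy = StreamProxy(io.BytesIO(b"data"))
```

Because `__getattr__` is a fallback, anything the proxy does implement still takes precedence over the stream's version.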
1095 @pytest.mark.parametrize("ranges", ([(0, 1), (-5, None)], [(5, None)]))
1096 def test_range_to_header(ranges):
1097 header = Range("bytes", ranges).to_header()
1098 r = http.parse_range_header(header)
1099 assert r.ranges == ranges
1100
1101
1102 @pytest.mark.parametrize(
1103 "ranges", ([(0, 0)], [(None, 1)], [(1, 0)], [(0, 1), (-5, 10)])
1104 )
1105 def test_range_validates_ranges(ranges):
1106 with pytest.raises(ValueError):
1107 datastructures.Range("bytes", ranges)
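The `Range` round-trip above boils down to HTTP `Range` header syntax; a minimal serialisation sketch (simplified, not Werkzeug's implementation) shows how `(begin, end)` pairs with an exclusive end map to the inclusive on-the-wire form, with `end=None` meaning "to the end" and a negative begin meaning a suffix range.

```python
def ranges_to_header(units, ranges):
    """Serialise (begin, end) pairs to an HTTP Range header value.
    end is exclusive internally but inclusive on the wire."""
    parts = []
    for begin, end in ranges:
        if begin < 0:
            parts.append(str(begin))  # suffix range: "-5" = last 5 bytes
        elif end is None:
            parts.append("%d-" % begin)  # open-ended: "5-"
        else:
            parts.append("%d-%d" % (begin, end - 1))  # "0-0" for (0, 1)
    return "%s=%s" % (units, ",".join(parts))
```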
44
55 Tests some debug utilities.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 import io
11 import re
1012 import sys
11 import re
12 import io
1313
1414 import pytest
1515 import requests
1616
17 from werkzeug._compat import PY2
18 from werkzeug.debug import DebuggedApplication
1719 from werkzeug.debug import get_machine_id
18 from werkzeug.debug.repr import debug_repr, DebugReprGenerator, \
19 dump, helper
2020 from werkzeug.debug.console import HTMLStringO
21 from werkzeug.debug.repr import debug_repr
22 from werkzeug.debug.repr import DebugReprGenerator
23 from werkzeug.debug.repr import dump
24 from werkzeug.debug.repr import helper
2125 from werkzeug.debug.tbtools import Traceback
22 from werkzeug._compat import PY2
26 from werkzeug.test import Client
27 from werkzeug.wrappers import Request
28 from werkzeug.wrappers import Response
2329
2430
2531 class TestDebugRepr(object):
26
2732 def test_basic_repr(self):
28 assert debug_repr([]) == u'[]'
29 assert debug_repr([1, 2]) == \
30 u'[<span class="number">1</span>, <span class="number">2</span>]'
31 assert debug_repr([1, 'test']) == \
32 u'[<span class="number">1</span>, <span class="string">\'test\'</span>]'
33 assert debug_repr([None]) == \
34 u'[<span class="object">None</span>]'
33 assert debug_repr([]) == u"[]"
34 assert (
35 debug_repr([1, 2])
36 == u'[<span class="number">1</span>, <span class="number">2</span>]'
37 )
38 assert (
39 debug_repr([1, "test"])
40 == u'[<span class="number">1</span>, <span class="string">\'test\'</span>]'
41 )
42 assert debug_repr([None]) == u'[<span class="object">None</span>]'
3543
3644 def test_string_repr(self):
37 assert debug_repr('') == u'<span class="string">\'\'</span>'
38 assert debug_repr('foo') == u'<span class="string">\'foo\'</span>'
39 assert debug_repr('s' * 80) == u'<span class="string">\''\
40 + 's' * 70 + '<span class="extended">'\
41 + 's' * 10 + '\'</span></span>'
42 assert debug_repr('<' * 80) == u'<span class="string">\''\
43 + '&lt;' * 70 + '<span class="extended">'\
44 + '&lt;' * 10 + '\'</span></span>'
45 assert debug_repr("") == u"<span class=\"string\">''</span>"
46 assert debug_repr("foo") == u"<span class=\"string\">'foo'</span>"
47 assert (
48 debug_repr("s" * 80)
49 == u'<span class="string">\''
50 + "s" * 69
51 + '<span class="extended">'
52 + "s" * 11
53 + "'</span></span>"
54 )
55 assert (
56 debug_repr("<" * 80)
57 == u'<span class="string">\''
58 + "&lt;" * 69
59 + '<span class="extended">'
60 + "&lt;" * 11
61 + "'</span></span>"
62 )
63
64 def test_string_subclass_repr(self):
65 class Test(str):
66 pass
67
68 assert debug_repr(Test("foo")) == (
69 u'<span class="module">tests.test_debug.</span>'
70 u"Test(<span class=\"string\">'foo'</span>)"
71 )
72
73 @pytest.mark.skipif(not PY2, reason="u prefix on py2 only")
74 def test_unicode_repr(self):
75 assert debug_repr(u"foo") == u"<span class=\"string\">u'foo'</span>"
76
77 @pytest.mark.skipif(PY2, reason="b prefix on py3 only")
78 def test_bytes_repr(self):
79 assert debug_repr(b"foo") == u"<span class=\"string\">b'foo'</span>"
4580
4681 def test_sequence_repr(self):
4782 assert debug_repr(list(range(20))) == (
5994 )
6095
6196 def test_mapping_repr(self):
62 assert debug_repr({}) == u'{}'
63 assert debug_repr({'foo': 42}) == (
97 assert debug_repr({}) == u"{}"
98 assert debug_repr({"foo": 42}) == (
6499 u'{<span class="pair"><span class="key"><span class="string">\'foo\''
65100 u'</span></span>: <span class="value"><span class="number">42'
66 u'</span></span></span>}'
101 u"</span></span></span>}"
67102 )
68103 assert debug_repr(dict(zip(range(10), [None] * 10))) == (
69 u'{<span class="pair"><span class="key"><span class="number">0</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">1</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">2</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">3</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="extended"><span class="pair"><span class="key"><span class="number">4</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">5</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">6</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">7</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">8</span></span>: <span class="value"><span class="object">None</span></span></span>, <span class="pair"><span class="key"><span class="number">9</span></span>: <span class="value"><span class="object">None</span></span></span></span>}' # noqa
70 )
71 assert debug_repr((1, 'zwei', u'drei')) == (
104 u'{<span class="pair"><span class="key"><span class="number">0'
105 u'</span></span>: <span class="value"><span class="object">None'
106 u"</span></span></span>, "
107 u'<span class="pair"><span class="key"><span class="number">1'
108 u'</span></span>: <span class="value"><span class="object">None'
109 u"</span></span></span>, "
110 u'<span class="pair"><span class="key"><span class="number">2'
111 u'</span></span>: <span class="value"><span class="object">None'
112 u"</span></span></span>, "
113 u'<span class="pair"><span class="key"><span class="number">3'
114 u'</span></span>: <span class="value"><span class="object">None'
115 u"</span></span></span>, "
116 u'<span class="extended">'
117 u'<span class="pair"><span class="key"><span class="number">4'
118 u'</span></span>: <span class="value"><span class="object">None'
119 u"</span></span></span>, "
120 u'<span class="pair"><span class="key"><span class="number">5'
121 u'</span></span>: <span class="value"><span class="object">None'
122 u"</span></span></span>, "
123 u'<span class="pair"><span class="key"><span class="number">6'
124 u'</span></span>: <span class="value"><span class="object">None'
125 u"</span></span></span>, "
126 u'<span class="pair"><span class="key"><span class="number">7'
127 u'</span></span>: <span class="value"><span class="object">None'
128 u"</span></span></span>, "
129 u'<span class="pair"><span class="key"><span class="number">8'
130 u'</span></span>: <span class="value"><span class="object">None'
131 u"</span></span></span>, "
132 u'<span class="pair"><span class="key"><span class="number">9'
133 u'</span></span>: <span class="value"><span class="object">None'
134 u"</span></span></span></span>}"
135 )
136 assert debug_repr((1, "zwei", u"drei")) == (
72137 u'(<span class="number">1</span>, <span class="string">\''
73 u'zwei\'</span>, <span class="string">%s\'drei\'</span>)'
74 ) % ('u' if PY2 else '')
138 u"zwei'</span>, <span class=\"string\">%s'drei'</span>)"
139 ) % ("u" if PY2 else "")
75140
76141 def test_custom_repr(self):
77142 class Foo(object):
78
79143 def __repr__(self):
80 return '<Foo 42>'
81 assert debug_repr(Foo()) == \
82 '<span class="object">&lt;Foo 42&gt;</span>'
144 return "<Foo 42>"
145
146 assert debug_repr(Foo()) == '<span class="object">&lt;Foo 42&gt;</span>'
83147
84148 def test_list_subclass_repr(self):
85149 class MyList(list):
86150 pass
151
87152 assert debug_repr(MyList([1, 2])) == (
88153 u'<span class="module">tests.test_debug.</span>MyList(['
89154 u'<span class="number">1</span>, <span class="number">2</span>])'
90155 )
91156
92157 def test_regex_repr(self):
93 assert debug_repr(re.compile(r'foo\d')) == \
94 u're.compile(<span class="string regex">r\'foo\\d\'</span>)'
158 assert (
159 debug_repr(re.compile(r"foo\d"))
160 == u"re.compile(<span class=\"string regex\">r'foo\\d'</span>)"
161 )
95162 # No ur'' in Py3
96 # http://bugs.python.org/issue15096
97 assert debug_repr(re.compile(u'foo\\d')) == (
98 u're.compile(<span class="string regex">%sr\'foo\\d\'</span>)' %
99 ('u' if PY2 else '')
163 # https://bugs.python.org/issue15096
164 assert debug_repr(re.compile(u"foo\\d")) == (
165 u"re.compile(<span class=\"string regex\">%sr'foo\\d'</span>)"
166 % ("u" if PY2 else "")
100167 )
101168
102169 def test_set_repr(self):
103 assert debug_repr(frozenset('x')) == \
104 u'frozenset([<span class="string">\'x\'</span>])'
105 assert debug_repr(set('x')) == \
106 u'set([<span class="string">\'x\'</span>])'
170 assert (
171 debug_repr(frozenset("x"))
172 == u"frozenset([<span class=\"string\">'x'</span>])"
173 )
174 assert debug_repr(set("x")) == u"set([<span class=\"string\">'x'</span>])"
107175
108176 def test_recursive_repr(self):
109177 a = [1]
112180
113181 def test_broken_repr(self):
114182 class Foo(object):
115
116183 def __repr__(self):
117 raise Exception('broken!')
184 raise Exception("broken!")
118185
119186 assert debug_repr(Foo()) == (
120187 u'<span class="brokenrepr">&lt;broken repr (Exception: '
121 u'broken!)&gt;</span>'
188 u"broken!)&gt;</span>"
122189 )
123190
124191
131198
132199
133200 class TestDebugHelpers(object):
134
135201 def test_object_dumping(self):
136202 drg = DebugReprGenerator()
137203 out = drg.dump_object(Foo())
138 assert re.search('Details for tests.test_debug.Foo object at', out)
204 assert re.search("Details for tests.test_debug.Foo object at", out)
139205 assert re.search('<th>x.*<span class="number">42</span>', out, flags=re.DOTALL)
140206 assert re.search('<th>y.*<span class="number">23</span>', out, flags=re.DOTALL)
141207 assert re.search('<th>z.*<span class="number">15</span>', out, flags=re.DOTALL)
142208
143 out = drg.dump_object({'x': 42, 'y': 23})
144 assert re.search('Contents of', out)
209 out = drg.dump_object({"x": 42, "y": 23})
210 assert re.search("Contents of", out)
145211 assert re.search('<th>x.*<span class="number">42</span>', out, flags=re.DOTALL)
146212 assert re.search('<th>y.*<span class="number">23</span>', out, flags=re.DOTALL)
147213
148 out = drg.dump_object({'x': 42, 'y': 23, 23: 11})
149 assert not re.search('Contents of', out)
150
151 out = drg.dump_locals({'x': 42, 'y': 23})
152 assert re.search('Local variables in frame', out)
214 out = drg.dump_object({"x": 42, "y": 23, 23: 11})
215 assert not re.search("Contents of", out)
216
217 out = drg.dump_locals({"x": 42, "y": 23})
218 assert re.search("Local variables in frame", out)
153219 assert re.search('<th>x.*<span class="number">42</span>', out, flags=re.DOTALL)
154220 assert re.search('<th>y.*<span class="number">23</span>', out, flags=re.DOTALL)
155221
164230 finally:
165231 sys.stdout = old
166232
167 assert 'Details for list object at' in x
233 assert "Details for list object at" in x
168234 assert '<span class="number">1</span>' in x
169 assert 'Local variables in frame' in y
170 assert '<th>x' in y
171 assert '<th>old' in y
235 assert "Local variables in frame" in y
236 assert "<th>x" in y
237 assert "<th>old" in y
172238
173239 def test_debug_help(self):
174240 old = sys.stdout
179245 finally:
180246 sys.stdout = old
181247
182 assert 'Help on list object' in x
183 assert '__delitem__' in x
248 assert "Help on list object" in x
249 assert "__delitem__" in x
250
251 @pytest.mark.skipif(PY2, reason="Python 2 doesn't have chained exceptions.")
252 def test_exc_divider_found_on_chained_exception(self):
253 @Request.application
254 def app(request):
255 def do_something():
256 raise ValueError("inner")
257
258 try:
259 do_something()
260 except ValueError:
261 raise KeyError("outer")
262
263 debugged = DebuggedApplication(app)
264 client = Client(debugged, Response)
265 response = client.get("/")
266 data = response.get_data(as_text=True)
267 assert u'raise ValueError("inner")' in data
268 assert u'<div class="exc-divider">' in data
269 assert u'raise KeyError("outer")' in data
184270
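The divider test above relies on Python 3's implicit exception chaining: raising inside an `except` block stores the original error on the new exception's `__context__`. A standalone illustration:

```python
def handler():
    try:
        raise ValueError("inner")
    except ValueError:
        # Raising while handling another exception chains them:
        # the ValueError is stored on the KeyError's __context__.
        raise KeyError("outer")


try:
    handler()
except KeyError as exc:
    context = exc.__context__
```

This is what lets the debugger render both tracebacks with a divider between them.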
185271
186272 class TestTraceback(object):
187
188273 def test_log(self):
189274 try:
190275 1 / 0
196281 assert buffer_.getvalue().strip() == traceback.plaintext.strip()
197282
198283 def test_sourcelines_encoding(self):
199 source = (u'# -*- coding: latin1 -*-\n\n'
200 u'def foo():\n'
201 u' """höhö"""\n'
202 u' 1 / 0\n'
203 u'foo()').encode('latin1')
204 code = compile(source, filename='lol.py', mode='exec')
284 source = (
285 u"# -*- coding: latin1 -*-\n\n"
286 u"def foo():\n"
287 u' """höhö"""\n'
288 u" 1 / 0\n"
289 u"foo()"
290 ).encode("latin1")
291 code = compile(source, filename="lol.py", mode="exec")
205292 try:
206293 eval(code)
207294 except ZeroDivisionError:
209296
210297 frames = traceback.frames
211298 assert len(frames) == 3
212 assert frames[1].filename == 'lol.py'
213 assert frames[2].filename == 'lol.py'
299 assert frames[1].filename == "lol.py"
300 assert frames[2].filename == "lol.py"
214301
215302 class Loader(object):
216
217303 def get_source(self, module):
218304 return source
219305
220306 frames[1].loader = frames[2].loader = Loader()
221307 assert frames[1].sourcelines == frames[2].sourcelines
222 assert [line.code for line in frames[1].get_annotated_lines()] == \
223 [line.code for line in frames[2].get_annotated_lines()]
224 assert u'höhö' in frames[1].sourcelines[3]
308 assert [line.code for line in frames[1].get_annotated_lines()] == [
309 line.code for line in frames[2].get_annotated_lines()
310 ]
311 assert u"höhö" in frames[1].sourcelines[3]
225312
226313 def test_filename_encoding(self, tmpdir, monkeypatch):
227 moduledir = tmpdir.mkdir('föö')
228 moduledir.join('bar.py').write('def foo():\n 1/0\n')
314 moduledir = tmpdir.mkdir("föö")
315 moduledir.join("bar.py").write("def foo():\n 1/0\n")
229316 monkeypatch.syspath_prepend(str(moduledir))
230317
231318 import bar
235322 except ZeroDivisionError:
236323 traceback = Traceback(*sys.exc_info())
237324
238 assert u'föö' in u'\n'.join(frame.render() for frame in traceback.frames)
325 assert u"föö" in u"\n".join(frame.render() for frame in traceback.frames)
239326
240327
241328 def test_get_machine_id():
243330 assert isinstance(rv, bytes)
244331
245332
246 @pytest.mark.parametrize('crash', (True, False))
333 @pytest.mark.parametrize("crash", (True, False))
247334 def test_basic(dev_server, crash):
248 server = dev_server('''
335 server = dev_server(
336 """
249337 from werkzeug.debug import DebuggedApplication
250338
251339 @DebuggedApplication
254342 1 / 0
255343 start_response('200 OK', [('Content-Type', 'text/html')])
256344 return [b'hello']
257 '''.format(crash=crash))
345 """.format(
346 crash=crash
347 )
348 )
258349
259350 r = requests.get(server.url)
260351 assert r.status_code == 500 if crash else 200
261352 if crash:
262 assert 'The debugger caught an exception in your WSGI application' \
263 in r.text
353 assert "The debugger caught an exception in your WSGI application" in r.text
264354 else:
265 assert r.text == 'hello'
355 assert r.text == "hello"
356
357
358 @pytest.mark.skipif(PY2, reason="Python 2 doesn't have chained exceptions.")
359 @pytest.mark.timeout(2)
360 def test_chained_exception_cycle():
361 try:
362 try:
363 raise ValueError()
364 except ValueError:
365 raise TypeError()
366 except TypeError as e:
367 # create a cycle and make it available outside the except block
368 e.__context__.__context__ = error = e
369
370 # if cycles aren't broken, this will time out
371 tb = Traceback(TypeError, error, error.__traceback__)
372 assert len(tb.groups) == 2
373
374
375 def test_non_hashable_exception():
376 class MutableException(ValueError):
377 __hash__ = None
378
379 try:
380 raise MutableException()
381 except MutableException:
382 # previously crashed: `TypeError: unhashable type 'MutableException'`
383 Traceback(*sys.exc_info())
88
99 - This is undertested. HTML is never checked
1010
11 :copyright: (c) 2014 by Armin Ronacher.
12 :license: BSD, see LICENSE for more details.
11 :copyright: 2007 Pallets
12 :license: BSD-3-Clause
1313 """
1414 import pytest
1515
1616 from werkzeug import exceptions
17 from werkzeug._compat import text_type
18 from werkzeug.datastructures import WWWAuthenticate
1719 from werkzeug.wrappers import Response
18 from werkzeug._compat import text_type
1920
2021
2122 def test_proxy_exception():
22 orig_resp = Response('Hello World')
23 orig_resp = Response("Hello World")
2324 with pytest.raises(exceptions.HTTPException) as excinfo:
2425 exceptions.abort(orig_resp)
2526 resp = excinfo.value.get_response({})
2627 assert resp is orig_resp
27 assert resp.get_data() == b'Hello World'
28 assert resp.get_data() == b"Hello World"
2829
2930
30 @pytest.mark.parametrize('test', [
31 (exceptions.BadRequest, 400),
32 (exceptions.Unauthorized, 401),
33 (exceptions.Forbidden, 403),
34 (exceptions.NotFound, 404),
35 (exceptions.MethodNotAllowed, 405, ['GET', 'HEAD']),
36 (exceptions.NotAcceptable, 406),
37 (exceptions.RequestTimeout, 408),
38 (exceptions.Gone, 410),
39 (exceptions.LengthRequired, 411),
40 (exceptions.PreconditionFailed, 412),
41 (exceptions.RequestEntityTooLarge, 413),
42 (exceptions.RequestURITooLarge, 414),
43 (exceptions.UnsupportedMediaType, 415),
44 (exceptions.UnprocessableEntity, 422),
45 (exceptions.Locked, 423),
46 (exceptions.InternalServerError, 500),
47 (exceptions.NotImplemented, 501),
48 (exceptions.BadGateway, 502),
49 (exceptions.ServiceUnavailable, 503)
50 ])
31 @pytest.mark.parametrize(
32 "test",
33 [
34 (exceptions.BadRequest, 400),
35 (exceptions.Unauthorized, 401, 'Basic "test realm"'),
36 (exceptions.Forbidden, 403),
37 (exceptions.NotFound, 404),
38 (exceptions.MethodNotAllowed, 405, ["GET", "HEAD"]),
39 (exceptions.NotAcceptable, 406),
40 (exceptions.RequestTimeout, 408),
41 (exceptions.Gone, 410),
42 (exceptions.LengthRequired, 411),
43 (exceptions.PreconditionFailed, 412),
44 (exceptions.RequestEntityTooLarge, 413),
45 (exceptions.RequestURITooLarge, 414),
46 (exceptions.UnsupportedMediaType, 415),
47 (exceptions.UnprocessableEntity, 422),
48 (exceptions.Locked, 423),
49 (exceptions.InternalServerError, 500),
50 (exceptions.NotImplemented, 501),
51 (exceptions.BadGateway, 502),
52 (exceptions.ServiceUnavailable, 503),
53 ],
54 )
5155 def test_aborter_general(test):
5256 exc_type = test[0]
5357 args = test[1:]
7074 def test_exception_repr():
7175 exc = exceptions.NotFound()
7276 assert text_type(exc) == (
73 '404 Not Found: The requested URL was not found '
74 'on the server. If you entered the URL manually please check your '
75 'spelling and try again.')
77 "404 Not Found: The requested URL was not found on the server."
78 " If you entered the URL manually please check your spelling"
79 " and try again."
80 )
7681 assert repr(exc) == "<NotFound '404: Not Found'>"
7782
78 exc = exceptions.NotFound('Not There')
79 assert text_type(exc) == '404 Not Found: Not There'
83 exc = exceptions.NotFound("Not There")
84 assert text_type(exc) == "404 Not Found: Not There"
8085 assert repr(exc) == "<NotFound '404: Not Found'>"
8186
82 exc = exceptions.HTTPException('An error message')
83 assert text_type(exc) == '??? Unknown Error: An error message'
87 exc = exceptions.HTTPException("An error message")
88 assert text_type(exc) == "??? Unknown Error: An error message"
8489 assert repr(exc) == "<HTTPException '???: Unknown Error'>"
8590
8691
87 def test_special_exceptions():
88 exc = exceptions.MethodNotAllowed(['GET', 'HEAD', 'POST'])
92 def test_method_not_allowed_methods():
93 exc = exceptions.MethodNotAllowed(["GET", "HEAD", "POST"])
8994 h = dict(exc.get_headers({}))
90 assert h['Allow'] == 'GET, HEAD, POST'
91 assert 'The method is not allowed' in exc.get_description()
95 assert h["Allow"] == "GET, HEAD, POST"
96 assert "The method is not allowed" in exc.get_description()
97
98
99 def test_unauthorized_www_authenticate():
100 basic = WWWAuthenticate()
101 basic.set_basic("test")
102 digest = WWWAuthenticate()
103 digest.set_digest("test", "test")
104
105 exc = exceptions.Unauthorized(www_authenticate=basic)
106 h = dict(exc.get_headers({}))
107 assert h["WWW-Authenticate"] == str(basic)
108
109 exc = exceptions.Unauthorized(www_authenticate=[digest, basic])
110 h = dict(exc.get_headers({}))
111 assert h["WWW-Authenticate"] == ", ".join((str(digest), str(basic)))
112
113 exc = exceptions.Unauthorized()
114 h = dict(exc.get_headers({}))
115 assert "WWW-Authenticate" not in h
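The header values checked in `test_unauthorized_www_authenticate` follow the HTTP rule that multiple challenges may be comma-joined into one `WWW-Authenticate` value. A simplified serialiser (hypothetical helpers, not Werkzeug's `WWWAuthenticate` class):

```python
def make_challenge(scheme, **params):
    """Serialise an auth challenge like 'Basic realm="test"'.
    Parameters are sorted for deterministic output; real
    implementations also handle token quoting rules."""
    if not params:
        return scheme
    args = ", ".join('%s="%s"' % (k, v) for k, v in sorted(params.items()))
    return "%s %s" % (scheme, args)


def www_authenticate(challenges):
    """Join multiple challenges into one header value."""
    return ", ".join(challenges)
```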
44
55 Tests the form parsing facilities.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from __future__ import with_statement
10 import csv
11 import io
12 from os.path import dirname
13 from os.path import join
1114
1215 import pytest
1316
14 from os.path import join, dirname
15
16 from tests import strict_eq
17
17 from . import strict_eq
1818 from werkzeug import formparser
19 from werkzeug.test import create_environ, Client
20 from werkzeug.wrappers import Request, Response
19 from werkzeug._compat import BytesIO
20 from werkzeug._compat import PY2
21 from werkzeug.datastructures import MultiDict
2122 from werkzeug.exceptions import RequestEntityTooLarge
22 from werkzeug.datastructures import MultiDict
23 from werkzeug.formparser import parse_form_data, FormDataParser
24 from werkzeug._compat import BytesIO
23 from werkzeug.formparser import FormDataParser
24 from werkzeug.formparser import parse_form_data
25 from werkzeug.test import Client
26 from werkzeug.test import create_environ
27 from werkzeug.wrappers import Request
28 from werkzeug.wrappers import Response
2529
2630
2731 @Request.application
2832 def form_data_consumer(request):
29 result_object = request.args['object']
30 if result_object == 'text':
31 return Response(repr(request.form['text']))
33 result_object = request.args["object"]
34 if result_object == "text":
35 return Response(repr(request.form["text"]))
3236 f = request.files[result_object]
33 return Response(b'\n'.join((
34 repr(f.filename).encode('ascii'),
35 repr(f.name).encode('ascii'),
36 repr(f.content_type).encode('ascii'),
37 f.stream.read()
38 )))
37 return Response(
38 b"\n".join(
39 (
40 repr(f.filename).encode("ascii"),
41 repr(f.name).encode("ascii"),
42 repr(f.content_type).encode("ascii"),
43 f.stream.read(),
44 )
45 )
46 )
3947
4048
4149 def get_contents(filename):
42 with open(filename, 'rb') as f:
50 with open(filename, "rb") as f:
4351 return f.read()
4452
4553
4654 class TestFormParser(object):
47
4855 def test_limiting(self):
49 data = b'foo=Hello+World&bar=baz'
50 req = Request.from_values(input_stream=BytesIO(data),
51 content_length=len(data),
52 content_type='application/x-www-form-urlencoded',
53 method='POST')
56 data = b"foo=Hello+World&bar=baz"
57 req = Request.from_values(
58 input_stream=BytesIO(data),
59 content_length=len(data),
60 content_type="application/x-www-form-urlencoded",
61 method="POST",
62 )
5463 req.max_content_length = 400
55 strict_eq(req.form['foo'], u'Hello World')
56
57 req = Request.from_values(input_stream=BytesIO(data),
58 content_length=len(data),
59 content_type='application/x-www-form-urlencoded',
60 method='POST')
64 strict_eq(req.form["foo"], u"Hello World")
65
66 req = Request.from_values(
67 input_stream=BytesIO(data),
68 content_length=len(data),
69 content_type="application/x-www-form-urlencoded",
70 method="POST",
71 )
6172 req.max_form_memory_size = 7
62 pytest.raises(RequestEntityTooLarge, lambda: req.form['foo'])
63
64 req = Request.from_values(input_stream=BytesIO(data),
65 content_length=len(data),
66 content_type='application/x-www-form-urlencoded',
67 method='POST')
73 pytest.raises(RequestEntityTooLarge, lambda: req.form["foo"])
74
75 req = Request.from_values(
76 input_stream=BytesIO(data),
77 content_length=len(data),
78 content_type="application/x-www-form-urlencoded",
79 method="POST",
80 )
6881 req.max_form_memory_size = 400
69 strict_eq(req.form['foo'], u'Hello World')
70
71 data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n'
72 b'Hello World\r\n'
73 b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n'
74 b'bar=baz\r\n--foo--')
75 req = Request.from_values(input_stream=BytesIO(data),
76 content_length=len(data),
77 content_type='multipart/form-data; boundary=foo',
78 method='POST')
82 strict_eq(req.form["foo"], u"Hello World")
83
84 data = (
85 b"--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n"
86 b"Hello World\r\n"
87 b"--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n"
88 b"bar=baz\r\n--foo--"
89 )
90 req = Request.from_values(
91 input_stream=BytesIO(data),
92 content_length=len(data),
93 content_type="multipart/form-data; boundary=foo",
94 method="POST",
95 )
7996 req.max_content_length = 4
80 pytest.raises(RequestEntityTooLarge, lambda: req.form['foo'])
81
82 req = Request.from_values(input_stream=BytesIO(data),
83 content_length=len(data),
84 content_type='multipart/form-data; boundary=foo',
85 method='POST')
97 pytest.raises(RequestEntityTooLarge, lambda: req.form["foo"])
98
99 req = Request.from_values(
100 input_stream=BytesIO(data),
101 content_length=len(data),
102 content_type="multipart/form-data; boundary=foo",
103 method="POST",
104 )
86105 req.max_content_length = 400
87 strict_eq(req.form['foo'], u'Hello World')
88
89 req = Request.from_values(input_stream=BytesIO(data),
90 content_length=len(data),
91 content_type='multipart/form-data; boundary=foo',
92 method='POST')
106 strict_eq(req.form["foo"], u"Hello World")
107
108 req = Request.from_values(
109 input_stream=BytesIO(data),
110 content_length=len(data),
111 content_type="multipart/form-data; boundary=foo",
112 method="POST",
113 )
93114 req.max_form_memory_size = 7
94 pytest.raises(RequestEntityTooLarge, lambda: req.form['foo'])
95
96 req = Request.from_values(input_stream=BytesIO(data),
97 content_length=len(data),
98 content_type='multipart/form-data; boundary=foo',
99 method='POST')
115 pytest.raises(RequestEntityTooLarge, lambda: req.form["foo"])
116
117 req = Request.from_values(
118 input_stream=BytesIO(data),
119 content_length=len(data),
120 content_type="multipart/form-data; boundary=foo",
121 method="POST",
122 )
100123 req.max_form_memory_size = 400
101 strict_eq(req.form['foo'], u'Hello World')
124 strict_eq(req.form["foo"], u"Hello World")
102125
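The limiting behaviour `test_limiting` verifies can be sketched as a wrapper that fails once more than the configured number of bytes has been read, the same idea as setting `max_content_length` on a request (`RequestTooLarge` and `LimitedReader` are hypothetical names for this sketch, not Werkzeug APIs):

```python
import io


class RequestTooLarge(Exception):
    """Raised when more data arrives than the configured limit."""


class LimitedReader:
    """Read from a wrapped stream but raise once the total number
    of bytes read exceeds the limit."""

    def __init__(self, stream, limit):
        self.stream = stream
        self.limit = limit
        self.total = 0

    def read(self, size=-1):
        data = self.stream.read(size)
        self.total += len(data)
        if self.total > self.limit:
            raise RequestTooLarge()
        return data


# Under the limit: data passes through untouched.
small = LimitedReader(io.BytesIO(b"foo=Hello+World&bar=baz"), limit=400).read()

# Over the limit: reading raises.
reader = LimitedReader(io.BytesIO(b"x" * 1000), limit=4)
try:
    reader.read()
    exceeded = False
except RequestTooLarge:
    exceeded = True
```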
103126 def test_missing_multipart_boundary(self):
104 data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n'
105 b'Hello World\r\n'
106 b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n'
107 b'bar=baz\r\n--foo--')
108 req = Request.from_values(input_stream=BytesIO(data),
109 content_length=len(data),
110 content_type='multipart/form-data',
111 method='POST')
127 data = (
128 b"--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n"
129 b"Hello World\r\n"
130 b"--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n"
131 b"bar=baz\r\n--foo--"
132 )
133 req = Request.from_values(
134 input_stream=BytesIO(data),
135 content_length=len(data),
136 content_type="multipart/form-data",
137 method="POST",
138 )
112139 assert req.form == {}
113140
114141 def test_parse_form_data_put_without_content(self):
118145 # containing an entity-body SHOULD include a Content-Type header field
119146 # defining the media type of that body." In the case where either
120147 # headers are omitted, parse_form_data should still work.
121 env = create_environ('/foo', 'http://example.org/', method='PUT')
122 del env['CONTENT_TYPE']
123 del env['CONTENT_LENGTH']
148 env = create_environ("/foo", "http://example.org/", method="PUT")
124149
125150 stream, form, files = formparser.parse_form_data(env)
126 strict_eq(stream.read(), b'')
151 strict_eq(stream.read(), b"")
127152 strict_eq(len(form), 0)
128153 strict_eq(len(files), 0)
129154
130155 def test_parse_form_data_get_without_content(self):
131 env = create_environ('/foo', 'http://example.org/', method='GET')
132 del env['CONTENT_TYPE']
133 del env['CONTENT_LENGTH']
156 env = create_environ("/foo", "http://example.org/", method="GET")
134157
135158 stream, form, files = formparser.parse_form_data(env)
136 strict_eq(stream.read(), b'')
159 strict_eq(stream.read(), b"")
137160 strict_eq(len(form), 0)
138161 strict_eq(len(files), 0)
139162
140 def test_large_file(self):
141 data = b'x' * (1024 * 600)
142 req = Request.from_values(data={'foo': (BytesIO(data), 'test.txt')},
143 method='POST')
144 # make sure we have a real file here, because we expect to be
145 # on the disk. > 1024 * 500
146 assert hasattr(req.files['foo'].stream, u'fileno')
147 # close file to prevent fds from leaking
148 req.files['foo'].close()
163 @pytest.mark.parametrize(
164 ("no_spooled", "size"), ((False, 100), (False, 3000), (True, 100), (True, 3000))
165 )
166 def test_default_stream_factory(self, no_spooled, size, monkeypatch):
167 if no_spooled:
168 monkeypatch.setattr("werkzeug.formparser.SpooledTemporaryFile", None)
169
170 data = b"a,b,c\n" * size
171 req = Request.from_values(
172 data={"foo": (BytesIO(data), "test.txt")}, method="POST"
173 )
174 file_storage = req.files["foo"]
175
176 try:
177 if PY2:
178 reader = csv.reader(file_storage)
179 else:
180 reader = csv.reader(io.TextIOWrapper(file_storage))
181 # This fails if file_storage doesn't implement IOBase.
182 # https://github.com/pallets/werkzeug/issues/1344
183 # https://github.com/python/cpython/pull/3249
184 assert sum(1 for _ in reader) == size
185 finally:
186 file_storage.close()
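The `SpooledTemporaryFile` this test toggles is a stdlib class whose key property is spilling from an in-memory buffer to a real file once `max_size` is exceeded. A stdlib illustration (note `_rolled` is a CPython-private flag, read here purely to make the rollover visible):

```python
import tempfile

# Data stays in an in-memory buffer until max_size is exceeded,
# at which point it rolls over to a real temporary file on disk.
spool = tempfile.SpooledTemporaryFile(max_size=16)
spool.write(b"a" * 32)  # 32 > 16, so the spool rolls over
rolled = spool._rolled  # CPython private flag, for illustration only
spool.seek(0)
data = spool.read()
spool.close()
```

The rollover is transparent to readers, which is why the parametrised test expects identical results whether or not spooling is available.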
149187
150188 def test_streaming_parse(self):
151 data = b'x' * (1024 * 600)
189 data = b"x" * (1024 * 600)
152190
153191 class StreamMPP(formparser.MultiPartParser):
154
155192 def parse(self, file, boundary, content_length):
156 i = iter(self.parse_lines(file, boundary, content_length,
157 cap_at_buffer=False))
193 i = iter(
194 self.parse_lines(
195 file, boundary, content_length, cap_at_buffer=False
196 )
197 )
158198 one = next(i)
159199 two = next(i)
160 return self.cls(()), {'one': one, 'two': two}
200 return self.cls(()), {"one": one, "two": two}
161201
162202 class StreamFDP(formparser.FormDataParser):
163
164 def _sf_parse_multipart(self, stream, mimetype,
165 content_length, options):
203 def _sf_parse_multipart(self, stream, mimetype, content_length, options):
166204 form, files = StreamMPP(
167 self.stream_factory, self.charset, self.errors,
205 self.stream_factory,
206 self.charset,
207 self.errors,
168208 max_form_memory_size=self.max_form_memory_size,
169 cls=self.cls).parse(stream, options.get('boundary').encode('ascii'),
170 content_length)
209 cls=self.cls,
210 ).parse(stream, options.get("boundary").encode("ascii"), content_length)
171211 return stream, form, files
212
172213 parse_functions = {}
173214 parse_functions.update(formparser.FormDataParser.parse_functions)
174 parse_functions['multipart/form-data'] = _sf_parse_multipart
215 parse_functions["multipart/form-data"] = _sf_parse_multipart
175216
176217 class StreamReq(Request):
177218 form_data_parser_class = StreamFDP
178 req = StreamReq.from_values(data={'foo': (BytesIO(data), 'test.txt')},
179 method='POST')
180 strict_eq('begin_file', req.files['one'][0])
181 strict_eq(('foo', 'test.txt'), req.files['one'][1][1:])
182 strict_eq('cont', req.files['two'][0])
183 strict_eq(data, req.files['two'][1])
219
220 req = StreamReq.from_values(
221 data={"foo": (BytesIO(data), "test.txt")}, method="POST"
222 )
223 strict_eq("begin_file", req.files["one"][0])
224 strict_eq(("foo", "test.txt"), req.files["one"][1][1:])
225 strict_eq("cont", req.files["two"][0])
226 strict_eq(data, req.files["two"][1])
184227
185228 def test_parse_bad_content_type(self):
186229 parser = FormDataParser()
187 assert parser.parse('', 'bad-mime-type', 0) == \
188 ('', MultiDict([]), MultiDict([]))
230 assert parser.parse("", "bad-mime-type", 0) == (
231 "",
232 MultiDict([]),
233 MultiDict([]),
234 )
189235
190236 def test_parse_from_environ(self):
191237 parser = FormDataParser()
192 stream, _, _ = parser.parse_from_environ({'wsgi.input': ''})
238 stream, _, _ = parser.parse_from_environ({"wsgi.input": ""})
193239 assert stream is not None
194240
195241
196242 class TestMultiPart(object):
197
198243 def test_basic(self):
199 resources = join(dirname(__file__), 'multipart')
244 resources = join(dirname(__file__), "multipart")
200245 client = Client(form_data_consumer, Response)
201246
202247 repository = [
203 ('firefox3-2png1txt', '---------------------------186454651713519341951581030105', [
204 (u'anchor.png', 'file1', 'image/png', 'file1.png'),
205 (u'application_edit.png', 'file2', 'image/png', 'file2.png')
206 ], u'example text'),
207 ('firefox3-2pnglongtext', '---------------------------14904044739787191031754711748', [
208 (u'accept.png', 'file1', 'image/png', 'file1.png'),
209 (u'add.png', 'file2', 'image/png', 'file2.png')
210 ], u'--long text\r\n--with boundary\r\n--lookalikes--'),
211 ('opera8-2png1txt', '----------zEO9jQKmLc2Cq88c23Dx19', [
212 (u'arrow_branch.png', 'file1', 'image/png', 'file1.png'),
213 (u'award_star_bronze_1.png', 'file2', 'image/png', 'file2.png')
214 ], u'blafasel öäü'),
215 ('webkit3-2png1txt', '----WebKitFormBoundaryjdSFhcARk8fyGNy6', [
216 (u'gtk-apply.png', 'file1', 'image/png', 'file1.png'),
217 (u'gtk-no.png', 'file2', 'image/png', 'file2.png')
218 ], u'this is another text with ümläüts'),
219 ('ie6-2png1txt', '---------------------------7d91b03a20128', [
220 (u'file1.png', 'file1', 'image/x-png', 'file1.png'),
221 (u'file2.png', 'file2', 'image/x-png', 'file2.png')
222 ], u'ie6 sucks :-/')
248 (
249 "firefox3-2png1txt",
250 "---------------------------186454651713519341951581030105",
251 [
252 (u"anchor.png", "file1", "image/png", "file1.png"),
253 (u"application_edit.png", "file2", "image/png", "file2.png"),
254 ],
255 u"example text",
256 ),
257 (
258 "firefox3-2pnglongtext",
259 "---------------------------14904044739787191031754711748",
260 [
261 (u"accept.png", "file1", "image/png", "file1.png"),
262 (u"add.png", "file2", "image/png", "file2.png"),
263 ],
264 u"--long text\r\n--with boundary\r\n--lookalikes--",
265 ),
266 (
267 "opera8-2png1txt",
268 "----------zEO9jQKmLc2Cq88c23Dx19",
269 [
270 (u"arrow_branch.png", "file1", "image/png", "file1.png"),
271 (u"award_star_bronze_1.png", "file2", "image/png", "file2.png"),
272 ],
273 u"blafasel öäü",
274 ),
275 (
276 "webkit3-2png1txt",
277 "----WebKitFormBoundaryjdSFhcARk8fyGNy6",
278 [
279 (u"gtk-apply.png", "file1", "image/png", "file1.png"),
280 (u"gtk-no.png", "file2", "image/png", "file2.png"),
281 ],
282 u"this is another text with ümläüts",
283 ),
284 (
285 "ie6-2png1txt",
286 "---------------------------7d91b03a20128",
287 [
288 (u"file1.png", "file1", "image/x-png", "file1.png"),
289 (u"file2.png", "file2", "image/x-png", "file2.png"),
290 ],
291 u"ie6 sucks :-/",
292 ),
223293 ]
224294
225295 for name, boundary, files, text in repository:
226296 folder = join(resources, name)
227 data = get_contents(join(folder, 'request.txt'))
297 data = get_contents(join(folder, "request.http"))
228298 for filename, field, content_type, fsname in files:
229299 response = client.post(
230 '/?object=' + field,
300 "/?object=" + field,
231301 data=data,
232302 content_type='multipart/form-data; boundary="%s"' % boundary,
233 content_length=len(data))
234 lines = response.get_data().split(b'\n', 3)
235 strict_eq(lines[0], repr(filename).encode('ascii'))
236 strict_eq(lines[1], repr(field).encode('ascii'))
237 strict_eq(lines[2], repr(content_type).encode('ascii'))
303 content_length=len(data),
304 )
305 lines = response.get_data().split(b"\n", 3)
306 strict_eq(lines[0], repr(filename).encode("ascii"))
307 strict_eq(lines[1], repr(field).encode("ascii"))
308 strict_eq(lines[2], repr(content_type).encode("ascii"))
238309 strict_eq(lines[3], get_contents(join(folder, fsname)))
239310 response = client.post(
240 '/?object=text',
311 "/?object=text",
241312 data=data,
242313 content_type='multipart/form-data; boundary="%s"' % boundary,
243 content_length=len(data))
244 strict_eq(response.get_data(), repr(text).encode('utf-8'))
314 content_length=len(data),
315 )
316 strict_eq(response.get_data(), repr(text).encode("utf-8"))
245317
246318 def test_ie7_unc_path(self):
247319 client = Client(form_data_consumer, Response)
248 data_file = join(dirname(__file__), 'multipart', 'ie7_full_path_request.txt')
320 data_file = join(dirname(__file__), "multipart", "ie7_full_path_request.http")
249321 data = get_contents(data_file)
250 boundary = '---------------------------7da36d1b4a0164'
322 boundary = "---------------------------7da36d1b4a0164"
251323 response = client.post(
252 '/?object=cb_file_upload_multiple',
324 "/?object=cb_file_upload_multiple",
253325 data=data,
254326 content_type='multipart/form-data; boundary="%s"' % boundary,
255 content_length=len(data))
256 lines = response.get_data().split(b'\n', 3)
257 strict_eq(lines[0],
258 repr(u'Sellersburg Town Council Meeting 02-22-2010doc.doc').encode('ascii'))
327 content_length=len(data),
328 )
329 lines = response.get_data().split(b"\n", 3)
330 strict_eq(
331 lines[0],
332 repr(u"Sellersburg Town Council Meeting 02-22-2010doc.doc").encode("ascii"),
333 )
259334
260335 def test_end_of_file(self):
261336 # This test looks innocent but it was actually timing out in
261336 # This test looks innocent but it was actually timing out in
262337 # the Werkzeug 0.5 release version (#394)
263338 data = (
264 b'--foo\r\n'
339 b"--foo\r\n"
265340 b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n'
266 b'Content-Type: text/plain\r\n\r\n'
267 b'file contents and no end'
268 )
269 data = Request.from_values(input_stream=BytesIO(data),
270 content_length=len(data),
271 content_type='multipart/form-data; boundary=foo',
272 method='POST')
341 b"Content-Type: text/plain\r\n\r\n"
342 b"file contents and no end"
343 )
344 data = Request.from_values(
345 input_stream=BytesIO(data),
346 content_length=len(data),
347 content_type="multipart/form-data; boundary=foo",
348 method="POST",
349 )
273350 assert not data.files
274351 assert not data.form
275352
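The multipart/form-data wire format these tests hand-assemble follows a fixed shape: each part opens with `--` + boundary, then headers, a blank line, and the body; the closing boundary carries a trailing `--`, and the CRLF before each boundary belongs to the delimiter, not the body. A stdlib round-trip with `email.parser` (an illustration of the format, not werkzeug's parser):

```python
from email.parser import BytesParser

# A complete multipart message: top-level Content-Type declares the
# boundary, each part has its own headers, the final boundary ends in "--".
raw = (
    b"Content-Type: multipart/form-data; boundary=foo\r\n\r\n"
    b"--foo\r\n"
    b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n'
    b"Content-Type: text/plain\r\n\r\n"
    b"file contents\r\n"
    b"--foo--\r\n"
)

msg = BytesParser().parsebytes(raw)
assert msg.is_multipart()

part = msg.get_payload()[0]
assert part.get_filename() == "test.txt"
assert part.get_payload(decode=True).rstrip(b"\r\n") == b"file contents"
```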
276353 def test_broken(self):
277354 data = (
278 '--foo\r\n'
355 "--foo\r\n"
279356 'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n'
280 'Content-Transfer-Encoding: base64\r\n'
281 'Content-Type: text/plain\r\n\r\n'
282 'broken base 64'
283 '--foo--'
284 )
285 _, form, files = formparser.parse_form_data(create_environ(
286 data=data, method='POST', content_type='multipart/form-data; boundary=foo'
287 ))
357 "Content-Transfer-Encoding: base64\r\n"
358 "Content-Type: text/plain\r\n\r\n"
359 "broken base 64"
360 "--foo--"
361 )
362 _, form, files = formparser.parse_form_data(
363 create_environ(
364 data=data,
365 method="POST",
366 content_type="multipart/form-data; boundary=foo",
367 )
368 )
288369 assert not files
289370 assert not form
290371
291 pytest.raises(ValueError, formparser.parse_form_data,
292 create_environ(data=data, method='POST',
293 content_type='multipart/form-data; boundary=foo'),
294 silent=False)
372 pytest.raises(
373 ValueError,
374 formparser.parse_form_data,
375 create_environ(
376 data=data,
377 method="POST",
378 content_type="multipart/form-data; boundary=foo",
379 ),
380 silent=False,
381 )
295382
296383 def test_file_no_content_type(self):
297384 data = (
298 b'--foo\r\n'
385 b"--foo\r\n"
299386 b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n\r\n'
300 b'file contents\r\n--foo--'
301 )
302 data = Request.from_values(input_stream=BytesIO(data),
303 content_length=len(data),
304 content_type='multipart/form-data; boundary=foo',
305 method='POST')
306 assert data.files['test'].filename == 'test.txt'
307 strict_eq(data.files['test'].read(), b'file contents')
387 b"file contents\r\n--foo--"
388 )
389 data = Request.from_values(
390 input_stream=BytesIO(data),
391 content_length=len(data),
392 content_type="multipart/form-data; boundary=foo",
393 method="POST",
394 )
395 assert data.files["test"].filename == "test.txt"
396 strict_eq(data.files["test"].read(), b"file contents")
308397
309398 def test_extra_newline(self):
310399 # this test looks innocent but it was actually timeing out in
311400 # the Werkzeug 0.5 release version (#394)
312401 data = (
313 b'\r\n\r\n--foo\r\n'
402 b"\r\n\r\n--foo\r\n"
314403 b'Content-Disposition: form-data; name="foo"\r\n\r\n'
315 b'a string\r\n'
316 b'--foo--'
317 )
318 data = Request.from_values(input_stream=BytesIO(data),
319 content_length=len(data),
320 content_type='multipart/form-data; boundary=foo',
321 method='POST')
404 b"a string\r\n"
405 b"--foo--"
406 )
407 data = Request.from_values(
408 input_stream=BytesIO(data),
409 content_length=len(data),
410 content_type="multipart/form-data; boundary=foo",
411 method="POST",
412 )
322413 assert not data.files
323 strict_eq(data.form['foo'], u'a string')
414 strict_eq(data.form["foo"], u"a string")
324415
325416 def test_headers(self):
326 data = (b'--foo\r\n'
327 b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
328 b'X-Custom-Header: blah\r\n'
329 b'Content-Type: text/plain; charset=utf-8\r\n\r\n'
330 b'file contents, just the contents\r\n'
331 b'--foo--')
332 req = Request.from_values(input_stream=BytesIO(data),
333 content_length=len(data),
334 content_type='multipart/form-data; boundary=foo',
335 method='POST')
336 foo = req.files['foo']
337 strict_eq(foo.mimetype, 'text/plain')
338 strict_eq(foo.mimetype_params, {'charset': 'utf-8'})
339 strict_eq(foo.headers['content-type'], foo.content_type)
340 strict_eq(foo.content_type, 'text/plain; charset=utf-8')
341 strict_eq(foo.headers['x-custom-header'], 'blah')
417 data = (
418 b"--foo\r\n"
419 b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
420 b"X-Custom-Header: blah\r\n"
421 b"Content-Type: text/plain; charset=utf-8\r\n\r\n"
422 b"file contents, just the contents\r\n"
423 b"--foo--"
424 )
425 req = Request.from_values(
426 input_stream=BytesIO(data),
427 content_length=len(data),
428 content_type="multipart/form-data; boundary=foo",
429 method="POST",
430 )
431 foo = req.files["foo"]
432 strict_eq(foo.mimetype, "text/plain")
433 strict_eq(foo.mimetype_params, {"charset": "utf-8"})
434 strict_eq(foo.headers["content-type"], foo.content_type)
435 strict_eq(foo.content_type, "text/plain; charset=utf-8")
436 strict_eq(foo.headers["x-custom-header"], "blah")
342437
343438 def test_nonstandard_line_endings(self):
344 for nl in b'\n', b'\r', b'\r\n':
345 data = nl.join((
346 b'--foo',
347 b'Content-Disposition: form-data; name=foo',
348 b'',
349 b'this is just bar',
350 b'--foo',
351 b'Content-Disposition: form-data; name=bar',
352 b'',
353 b'blafasel',
354 b'--foo--'
355 ))
356 req = Request.from_values(input_stream=BytesIO(data),
357 content_length=len(data),
358 content_type='multipart/form-data; '
359 'boundary=foo', method='POST')
360 strict_eq(req.form['foo'], u'this is just bar')
361 strict_eq(req.form['bar'], u'blafasel')
439 for nl in b"\n", b"\r", b"\r\n":
440 data = nl.join(
441 (
442 b"--foo",
443 b"Content-Disposition: form-data; name=foo",
444 b"",
445 b"this is just bar",
446 b"--foo",
447 b"Content-Disposition: form-data; name=bar",
448 b"",
449 b"blafasel",
450 b"--foo--",
451 )
452 )
453 req = Request.from_values(
454 input_stream=BytesIO(data),
455 content_length=len(data),
456 content_type="multipart/form-data; boundary=foo",
457 method="POST",
458 )
459 strict_eq(req.form["foo"], u"this is just bar")
460 strict_eq(req.form["bar"], u"blafasel")
362461
363462 def test_failures(self):
364463 def parse_multipart(stream, boundary, content_length):
365464 parser = formparser.MultiPartParser(content_length)
366465 return parser.parse(stream, boundary, content_length)
367 pytest.raises(ValueError, parse_multipart, BytesIO(), b'broken ', 0)
368
369 data = b'--foo\r\n\r\nHello World\r\n--foo--'
370 pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data))
371
372 data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n' \
373 b'Content-Transfer-Encoding: base64\r\n\r\nHello World\r\n--foo--'
374 pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data))
375
376 data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\nHello World\r\n'
377 pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data))
378
379 x = formparser.parse_multipart_headers(['foo: bar\r\n', ' x test\r\n'])
380 strict_eq(x['foo'], 'bar\n x test')
381 pytest.raises(ValueError, formparser.parse_multipart_headers,
382 ['foo: bar\r\n', ' x test'])
466
467 pytest.raises(ValueError, parse_multipart, BytesIO(), b"broken ", 0)
468
469 data = b"--foo\r\n\r\nHello World\r\n--foo--"
470 pytest.raises(ValueError, parse_multipart, BytesIO(data), b"foo", len(data))
471
472 data = (
473 b"--foo\r\nContent-Disposition: form-field; name=foo\r\n"
474 b"Content-Transfer-Encoding: base64\r\n\r\nHello World\r\n--foo--"
475 )
476 pytest.raises(ValueError, parse_multipart, BytesIO(data), b"foo", len(data))
477
478 data = (
479 b"--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\nHello World\r\n"
480 )
481 pytest.raises(ValueError, parse_multipart, BytesIO(data), b"foo", len(data))
482
483 x = formparser.parse_multipart_headers(["foo: bar\r\n", " x test\r\n"])
484 strict_eq(x["foo"], "bar\n x test")
485 pytest.raises(
486 ValueError, formparser.parse_multipart_headers, ["foo: bar\r\n", " x test"]
487 )
383488
384489 def test_bad_newline_bad_newline_assumption(self):
385490 class ISORequest(Request):
386 charset = 'latin1'
387 contents = b'U2vlbmUgbORu'
388 data = b'--foo\r\nContent-Disposition: form-data; name="test"\r\n' \
389 b'Content-Transfer-Encoding: base64\r\n\r\n' + \
390 contents + b'\r\n--foo--'
391 req = ISORequest.from_values(input_stream=BytesIO(data),
392 content_length=len(data),
393 content_type='multipart/form-data; boundary=foo',
394 method='POST')
395 strict_eq(req.form['test'], u'Sk\xe5ne l\xe4n')
491 charset = "latin1"
492
493 contents = b"U2vlbmUgbORu"
494 data = (
495 b'--foo\r\nContent-Disposition: form-data; name="test"\r\n'
496 b"Content-Transfer-Encoding: base64\r\n\r\n" + contents + b"\r\n--foo--"
497 )
498 req = ISORequest.from_values(
499 input_stream=BytesIO(data),
500 content_length=len(data),
501 content_type="multipart/form-data; boundary=foo",
502 method="POST",
503 )
504 strict_eq(req.form["test"], u"Sk\xe5ne l\xe4n")
396505
397506 def test_empty_multipart(self):
398507 environ = {}
399 data = b'--boundary--'
400 environ['REQUEST_METHOD'] = 'POST'
401 environ['CONTENT_TYPE'] = 'multipart/form-data; boundary=boundary'
402 environ['CONTENT_LENGTH'] = str(len(data))
403 environ['wsgi.input'] = BytesIO(data)
508 data = b"--boundary--"
509 environ["REQUEST_METHOD"] = "POST"
510 environ["CONTENT_TYPE"] = "multipart/form-data; boundary=boundary"
511 environ["CONTENT_LENGTH"] = str(len(data))
512 environ["wsgi.input"] = BytesIO(data)
404513 stream, form, files = parse_form_data(environ, silent=False)
405514 rv = stream.read()
406 assert rv == b''
515 assert rv == b""
407516 assert form == MultiDict()
408517 assert files == MultiDict()
409518
410519
411520 class TestMultiPartParser(object):
412
413521 def test_constructor_not_pass_stream_factory_and_cls(self):
414522 parser = formparser.MultiPartParser()
415523
425533 assert parser.stream_factory is stream_factory
426534 assert parser.cls is dict
427535
536 def test_file_rfc2231_filename_continuations(self):
537 data = (
538 b"--foo\r\n"
539 b"Content-Type: text/plain; charset=utf-8\r\n"
540 b"Content-Disposition: form-data; name=rfc2231;\r\n"
541 b" filename*0*=ascii''a%20b%20;\r\n"
542 b" filename*1*=c%20d%20;\r\n"
543 b' filename*2="e f.txt"\r\n\r\n'
544 b"file contents\r\n--foo--"
545 )
546 request = Request.from_values(
547 input_stream=BytesIO(data),
548 content_length=len(data),
549 content_type="multipart/form-data; boundary=foo",
550 method="POST",
551 )
552 assert request.files["rfc2231"].filename == "a b c d e f.txt"
553 assert request.files["rfc2231"].read() == b"file contents"
554
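The RFC 2231 continuations in the new test above split one long parameter into numbered segments, where a trailing `*` on the segment name marks percent-encoding and the first encoded segment carries a `charset'lang'` prefix. A minimal stdlib decoding sketch of the exact segments from the test (the segment list is transcribed by hand here, not produced by werkzeug):

```python
from urllib.parse import unquote

# Segments as they appear in the Content-Disposition header above:
#   filename*0*=ascii''a%20b%20   -> charset'lang' prefix, percent-encoded
#   filename*1*=c%20d%20          -> percent-encoded continuation
#   filename*2="e f.txt"          -> plain quoted continuation
segments = [
    ("filename*0*", "ascii''a%20b%20"),
    ("filename*1*", "c%20d%20"),
    ("filename*2", "e f.txt"),
]

parts = []
for name, value in segments:
    if name.endswith("*"):  # trailing "*" marks an encoded segment
        if "''" in value:  # only the first segment carries charset'lang'
            charset, _, value = value.split("'", 2)
        else:
            charset = "ascii"
        parts.append(unquote(value, encoding=charset))
    else:
        parts.append(value)

filename = "".join(parts)
assert filename == "a b c d e f.txt"
```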
428555
429556 class TestInternalFunctions(object):
430
431557 def test_line_parser(self):
432 assert formparser._line_parse('foo') == ('foo', False)
433 assert formparser._line_parse('foo\r\n') == ('foo', True)
434 assert formparser._line_parse('foo\r') == ('foo', True)
435 assert formparser._line_parse('foo\n') == ('foo', True)
558 assert formparser._line_parse("foo") == ("foo", False)
559 assert formparser._line_parse("foo\r\n") == ("foo", True)
560 assert formparser._line_parse("foo\r") == ("foo", True)
561 assert formparser._line_parse("foo\n") == ("foo", True)
436562
437563 def test_find_terminator(self):
438 lineiter = iter(b'\n\n\nfoo\nbar\nbaz'.splitlines(True))
564 lineiter = iter(b"\n\n\nfoo\nbar\nbaz".splitlines(True))
439565 find_terminator = formparser.MultiPartParser()._find_terminator
440566 line = find_terminator(lineiter)
441 assert line == b'foo'
442 assert list(lineiter) == [b'bar\n', b'baz']
443 assert find_terminator([]) == b''
444 assert find_terminator([b'']) == b''
567 assert line == b"foo"
568 assert list(lineiter) == [b"bar\n", b"baz"]
569 assert find_terminator([]) == b""
570 assert find_terminator([b""]) == b""
44
55 HTTP parsing utilities.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from datetime import datetime
11
1012 import pytest
1113
12 from datetime import datetime
13
14 from tests import strict_eq
15 from werkzeug._compat import itervalues, wsgi_encoding_dance
16
17 from werkzeug import http, datastructures
14 from . import strict_eq
15 from werkzeug import datastructures
16 from werkzeug import http
17 from werkzeug._compat import itervalues
18 from werkzeug._compat import wsgi_encoding_dance
1819 from werkzeug.test import create_environ
1920
2021
2122 class TestHTTPUtility(object):
22
2323 def test_accept(self):
24 a = http.parse_accept_header('en-us,ru;q=0.5')
25 assert list(itervalues(a)) == ['en-us', 'ru']
26 assert a.best == 'en-us'
27 assert a.find('ru') == 1
28 pytest.raises(ValueError, a.index, 'de')
29 assert a.to_header() == 'en-us,ru;q=0.5'
24 a = http.parse_accept_header("en-us,ru;q=0.5")
25 assert list(itervalues(a)) == ["en-us", "ru"]
26 assert a.best == "en-us"
27 assert a.find("ru") == 1
28 pytest.raises(ValueError, a.index, "de")
29 assert a.to_header() == "en-us,ru;q=0.5"
3030
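Accept headers like the one parsed above are comma-separated values with optional `;q=` quality weights defaulting to 1.0, sorted best-first. A simplified quality-sorted parse, assuming no quoted parameters (`parse_accept` is a hypothetical helper, not werkzeug's API):

```python
def parse_accept(value):
    # Split on commas, peel off an optional ";q=" weight, default q to 1.0.
    items = []
    for part in value.split(","):
        piece, _, q = part.strip().partition(";q=")
        items.append((piece.strip(), float(q) if q else 1.0))
    # Stable sort by descending quality keeps header order for equal weights.
    items.sort(key=lambda item: item[1], reverse=True)
    return items

assert parse_accept("en-us,ru;q=0.5") == [("en-us", 1.0), ("ru", 0.5)]
assert parse_accept("de-AT,de;q=0.8,en;q=0.5")[0] == ("de-AT", 1.0)
```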
3131 def test_mime_accept(self):
32 a = http.parse_accept_header('text/xml,application/xml,'
33 'application/xhtml+xml,'
34 'application/foo;quiet=no; bar=baz;q=0.6,'
35 'text/html;q=0.9,text/plain;q=0.8,'
36 'image/png,*/*;q=0.5',
37 datastructures.MIMEAccept)
38 pytest.raises(ValueError, lambda: a['missing'])
39 assert a['image/png'] == 1
40 assert a['text/plain'] == 0.8
41 assert a['foo/bar'] == 0.5
42 assert a['application/foo;quiet=no; bar=baz'] == 0.6
43 assert a[a.find('foo/bar')] == ('*/*', 0.5)
32 a = http.parse_accept_header(
33 "text/xml,application/xml,"
34 "application/xhtml+xml,"
35 "application/foo;quiet=no; bar=baz;q=0.6,"
36 "text/html;q=0.9,text/plain;q=0.8,"
37 "image/png,*/*;q=0.5",
38 datastructures.MIMEAccept,
39 )
40 pytest.raises(ValueError, lambda: a["missing"])
41 assert a["image/png"] == 1
42 assert a["text/plain"] == 0.8
43 assert a["foo/bar"] == 0.5
44 assert a["application/foo;quiet=no; bar=baz"] == 0.6
45 assert a[a.find("foo/bar")] == ("*/*", 0.5)
4446
4547 def test_accept_matches(self):
46 a = http.parse_accept_header('text/xml,application/xml,application/xhtml+xml,'
47 'text/html;q=0.9,text/plain;q=0.8,'
48 'image/png', datastructures.MIMEAccept)
49 assert a.best_match(['text/html', 'application/xhtml+xml']) == \
50 'application/xhtml+xml'
51 assert a.best_match(['text/html']) == 'text/html'
52 assert a.best_match(['foo/bar']) is None
53 assert a.best_match(['foo/bar', 'bar/foo'], default='foo/bar') == 'foo/bar'
54 assert a.best_match(['application/xml', 'text/xml']) == 'application/xml'
48 a = http.parse_accept_header(
49 "text/xml,application/xml,application/xhtml+xml,"
50 "text/html;q=0.9,text/plain;q=0.8,"
51 "image/png",
52 datastructures.MIMEAccept,
53 )
54 assert (
55 a.best_match(["text/html", "application/xhtml+xml"])
56 == "application/xhtml+xml"
57 )
58 assert a.best_match(["text/html"]) == "text/html"
59 assert a.best_match(["foo/bar"]) is None
60 assert a.best_match(["foo/bar", "bar/foo"], default="foo/bar") == "foo/bar"
61 assert a.best_match(["application/xml", "text/xml"]) == "application/xml"
5562
5663 def test_charset_accept(self):
57 a = http.parse_accept_header('ISO-8859-1,utf-8;q=0.7,*;q=0.7',
58 datastructures.CharsetAccept)
59 assert a['iso-8859-1'] == a['iso8859-1']
60 assert a['iso-8859-1'] == 1
61 assert a['UTF8'] == 0.7
62 assert a['ebcdic'] == 0.7
64 a = http.parse_accept_header(
65 "ISO-8859-1,utf-8;q=0.7,*;q=0.7", datastructures.CharsetAccept
66 )
67 assert a["iso-8859-1"] == a["iso8859-1"]
68 assert a["iso-8859-1"] == 1
69 assert a["UTF8"] == 0.7
70 assert a["ebcdic"] == 0.7
6371
6472 def test_language_accept(self):
65 a = http.parse_accept_header('de-AT,de;q=0.8,en;q=0.5',
66 datastructures.LanguageAccept)
67 assert a.best == 'de-AT'
68 assert 'de_AT' in a
69 assert 'en' in a
70 assert a['de-at'] == 1
71 assert a['en'] == 0.5
73 a = http.parse_accept_header(
74 "de-AT,de;q=0.8,en;q=0.5", datastructures.LanguageAccept
75 )
76 assert a.best == "de-AT"
77 assert "de_AT" in a
78 assert "en" in a
79 assert a["de-at"] == 1
80 assert a["en"] == 0.5
7281
7382 def test_set_header(self):
7483 hs = http.parse_set_header('foo, Bar, "Blah baz", Hehe')
75 assert 'blah baz' in hs
76 assert 'foobar' not in hs
77 assert 'foo' in hs
78 assert list(hs) == ['foo', 'Bar', 'Blah baz', 'Hehe']
79 hs.add('Foo')
84 assert "blah baz" in hs
85 assert "foobar" not in hs
86 assert "foo" in hs
87 assert list(hs) == ["foo", "Bar", "Blah baz", "Hehe"]
88 hs.add("Foo")
8089 assert hs.to_header() == 'foo, Bar, "Blah baz", Hehe'
8190
8291 def test_list_header(self):
83 hl = http.parse_list_header('foo baz, blah')
84 assert hl == ['foo baz', 'blah']
92 hl = http.parse_list_header("foo baz, blah")
93 assert hl == ["foo baz", "blah"]
8594
8695 def test_dict_header(self):
8796 d = http.parse_dict_header('foo="bar baz", blah=42')
88 assert d == {'foo': 'bar baz', 'blah': '42'}
97 assert d == {"foo": "bar baz", "blah": "42"}
8998
9099 def test_cache_control_header(self):
91 cc = http.parse_cache_control_header('max-age=0, no-cache')
100 cc = http.parse_cache_control_header("max-age=0, no-cache")
92101 assert cc.max_age == 0
93102 assert cc.no_cache
94 cc = http.parse_cache_control_header('private, community="UCI"', None,
95 datastructures.ResponseCacheControl)
103 cc = http.parse_cache_control_header(
104 'private, community="UCI"', None, datastructures.ResponseCacheControl
105 )
96106 assert cc.private
97 assert cc['community'] == 'UCI'
107 assert cc["community"] == "UCI"
98108
99109 c = datastructures.ResponseCacheControl()
100110 assert c.no_cache is None
101111 assert c.private is None
102112 c.no_cache = True
103 assert c.no_cache == '*'
113 assert c.no_cache == "*"
104114 c.private = True
105 assert c.private == '*'
115 assert c.private == "*"
106116 del c.private
107117 assert c.private is None
108 assert c.to_header() == 'no-cache'
118 assert c.to_header() == "no-cache"
109119
110120 def test_authorization_header(self):
111 a = http.parse_authorization_header('Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==')
112 assert a.type == 'basic'
113 assert a.username == 'Aladdin'
114 assert a.password == 'open sesame'
115
116 a = http.parse_authorization_header('''Digest username="Mufasa",
117 realm="testrealm@host.invalid",
118 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
119 uri="/dir/index.html",
120 qop=auth,
121 nc=00000001,
122 cnonce="0a4f113b",
123 response="6629fae49393a05397450978507c4ef1",
124 opaque="5ccc069c403ebaf9f0171e9517f40e41"''')
125 assert a.type == 'digest'
126 assert a.username == 'Mufasa'
127 assert a.realm == 'testrealm@host.invalid'
128 assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093'
129 assert a.uri == '/dir/index.html'
130 assert a.qop == 'auth'
131 assert a.nc == '00000001'
132 assert a.cnonce == '0a4f113b'
133 assert a.response == '6629fae49393a05397450978507c4ef1'
134 assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41'
135
136 a = http.parse_authorization_header('''Digest username="Mufasa",
137 realm="testrealm@host.invalid",
138 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
139 uri="/dir/index.html",
140 response="e257afa1414a3340d93d30955171dd0e",
141 opaque="5ccc069c403ebaf9f0171e9517f40e41"''')
142 assert a.type == 'digest'
143 assert a.username == 'Mufasa'
144 assert a.realm == 'testrealm@host.invalid'
145 assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093'
146 assert a.uri == '/dir/index.html'
147 assert a.response == 'e257afa1414a3340d93d30955171dd0e'
148 assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41'
149
150 assert http.parse_authorization_header('') is None
121 a = http.parse_authorization_header("Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==")
122 assert a.type == "basic"
123 assert a.username == u"Aladdin"
124 assert a.password == u"open sesame"
125
126 a = http.parse_authorization_header(
127 "Basic 0YDRg9GB0YHQutC40IE60JHRg9C60LLRiw=="
128 )
129 assert a.type == "basic"
130 assert a.username == u"русскиЁ"
131 assert a.password == u"Буквы"
132
133 a = http.parse_authorization_header("Basic 5pmu6YCa6K+dOuS4reaWhw==")
134 assert a.type == "basic"
135 assert a.username == u"普通话"
136 assert a.password == u"中文"
137
138 a = http.parse_authorization_header(
139 '''Digest username="Mufasa",
140 realm="testrealm@host.invalid",
141 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
142 uri="/dir/index.html",
143 qop=auth,
144 nc=00000001,
145 cnonce="0a4f113b",
146 response="6629fae49393a05397450978507c4ef1",
147 opaque="5ccc069c403ebaf9f0171e9517f40e41"'''
148 )
149 assert a.type == "digest"
150 assert a.username == "Mufasa"
151 assert a.realm == "testrealm@host.invalid"
152 assert a.nonce == "dcd98b7102dd2f0e8b11d0f600bfb0c093"
153 assert a.uri == "/dir/index.html"
154 assert a.qop == "auth"
155 assert a.nc == "00000001"
156 assert a.cnonce == "0a4f113b"
157 assert a.response == "6629fae49393a05397450978507c4ef1"
158 assert a.opaque == "5ccc069c403ebaf9f0171e9517f40e41"
159
160 a = http.parse_authorization_header(
161 '''Digest username="Mufasa",
162 realm="testrealm@host.invalid",
163 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
164 uri="/dir/index.html",
165 response="e257afa1414a3340d93d30955171dd0e",
166 opaque="5ccc069c403ebaf9f0171e9517f40e41"'''
167 )
168 assert a.type == "digest"
169 assert a.username == "Mufasa"
170 assert a.realm == "testrealm@host.invalid"
171 assert a.nonce == "dcd98b7102dd2f0e8b11d0f600bfb0c093"
172 assert a.uri == "/dir/index.html"
173 assert a.response == "e257afa1414a3340d93d30955171dd0e"
174 assert a.opaque == "5ccc069c403ebaf9f0171e9517f40e41"
175
176 assert http.parse_authorization_header("") is None
151177 assert http.parse_authorization_header(None) is None
152 assert http.parse_authorization_header('foo') is None
178 assert http.parse_authorization_header("foo") is None
153179
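The new non-ASCII Basic auth cases above work because the credentials are just `base64("username:password")` decoded as UTF-8. A stdlib sketch of that decoding (`parse_basic` is a hypothetical helper mirroring the behavior the test asserts, not werkzeug's function):

```python
import base64

def parse_basic(value):
    # "Basic " + base64("username:password"); decode credentials as UTF-8.
    scheme, _, blob = value.partition(" ")
    if scheme.lower() != "basic":
        return None
    decoded = base64.b64decode(blob).decode("utf-8")
    username, _, password = decoded.partition(":")
    return username, password

assert parse_basic("Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==") == ("Aladdin", "open sesame")
assert parse_basic("Basic 5pmu6YCa6K+dOuS4reaWhw==") == (u"普通话", u"中文")
assert parse_basic("foo") is None
```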
154180 def test_www_authenticate_header(self):
155181 wa = http.parse_www_authenticate_header('Basic realm="WallyWorld"')
156 assert wa.type == 'basic'
157 assert wa.realm == 'WallyWorld'
158 wa.realm = 'Foo Bar'
182 assert wa.type == "basic"
183 assert wa.realm == "WallyWorld"
184 wa.realm = "Foo Bar"
159185 assert wa.to_header() == 'Basic realm="Foo Bar"'
160186
161 wa = http.parse_www_authenticate_header('''Digest
162 realm="testrealm@host.com",
163 qop="auth,auth-int",
164 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
165 opaque="5ccc069c403ebaf9f0171e9517f40e41"''')
166 assert wa.type == 'digest'
167 assert wa.realm == 'testrealm@host.com'
168 assert 'auth' in wa.qop
169 assert 'auth-int' in wa.qop
170 assert wa.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093'
171 assert wa.opaque == '5ccc069c403ebaf9f0171e9517f40e41'
172
173 wa = http.parse_www_authenticate_header('broken')
174 assert wa.type == 'broken'
175
176 assert not http.parse_www_authenticate_header('').type
177 assert not http.parse_www_authenticate_header('')
187 wa = http.parse_www_authenticate_header(
188 '''Digest
189 realm="testrealm@host.com",
190 qop="auth,auth-int",
191 nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093",
192 opaque="5ccc069c403ebaf9f0171e9517f40e41"'''
193 )
194 assert wa.type == "digest"
195 assert wa.realm == "testrealm@host.com"
196 assert "auth" in wa.qop
197 assert "auth-int" in wa.qop
198 assert wa.nonce == "dcd98b7102dd2f0e8b11d0f600bfb0c093"
199 assert wa.opaque == "5ccc069c403ebaf9f0171e9517f40e41"
200
201 wa = http.parse_www_authenticate_header("broken")
202 assert wa.type == "broken"
203
204 assert not http.parse_www_authenticate_header("").type
205 assert not http.parse_www_authenticate_header("")
178206
179207 def test_etags(self):
180 assert http.quote_etag('foo') == '"foo"'
181 assert http.quote_etag('foo', True) == 'W/"foo"'
182 assert http.unquote_etag('"foo"') == ('foo', False)
183 assert http.unquote_etag('W/"foo"') == ('foo', True)
208 assert http.quote_etag("foo") == '"foo"'
209 assert http.quote_etag("foo", True) == 'W/"foo"'
210 assert http.unquote_etag('"foo"') == ("foo", False)
211 assert http.unquote_etag('W/"foo"') == ("foo", True)
184212 es = http.parse_etags('"foo", "bar", W/"baz", blar')
185 assert sorted(es) == ['bar', 'blar', 'foo']
186 assert 'foo' in es
187 assert 'baz' not in es
188 assert es.contains_weak('baz')
189 assert 'blar' in es
213 assert sorted(es) == ["bar", "blar", "foo"]
214 assert "foo" in es
215 assert "baz" not in es
216 assert es.contains_weak("baz")
217 assert "blar" in es
190218 assert es.contains_raw('W/"baz"')
191219 assert es.contains_raw('"foo"')
192 assert sorted(es.to_header().split(', ')) == ['"bar"', '"blar"', '"foo"', 'W/"baz"']
220 assert sorted(es.to_header().split(", ")) == [
221 '"bar"',
222 '"blar"',
223 '"foo"',
224 'W/"baz"',
225 ]
193226
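The quoting semantics the ETag test relies on are simple: a strong tag is the value wrapped in double quotes, and a weak tag prefixes that with `W/`. A simplified sketch of the round-trip (ignoring escaped quotes inside tags, which werkzeug's real implementation also has to consider):

```python
def quote_etag(tag, weak=False):
    # Strong: "tag"  /  Weak: W/"tag"
    etag = '"%s"' % tag
    return "W/" + etag if weak else etag

def unquote_etag(etag):
    weak = etag.startswith(("W/", "w/"))
    if weak:
        etag = etag[2:]
    return etag.strip('"'), weak

assert quote_etag("foo") == '"foo"'
assert quote_etag("foo", weak=True) == 'W/"foo"'
assert unquote_etag('"foo"') == ("foo", False)
assert unquote_etag('W/"foo"') == ("foo", True)
```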
194227 def test_etags_nonzero(self):
195228 etags = http.parse_etags('W/"foo"')
197230 assert etags.contains_raw('W/"foo"')
198231
199232 def test_parse_date(self):
200 assert http.parse_date('Sun, 06 Nov 1994 08:49:37 GMT ') == datetime(
201 1994, 11, 6, 8, 49, 37)
202 assert http.parse_date('Sunday, 06-Nov-94 08:49:37 GMT') == datetime(1994, 11, 6, 8, 49, 37)
203 assert http.parse_date(' Sun Nov 6 08:49:37 1994') == datetime(1994, 11, 6, 8, 49, 37)
204 assert http.parse_date('foo') is None
233 assert http.parse_date("Sun, 06 Nov 1994 08:49:37 GMT ") == datetime(
234 1994, 11, 6, 8, 49, 37
235 )
236 assert http.parse_date("Sunday, 06-Nov-94 08:49:37 GMT") == datetime(
237 1994, 11, 6, 8, 49, 37
238 )
239 assert http.parse_date(" Sun Nov 6 08:49:37 1994") == datetime(
240 1994, 11, 6, 8, 49, 37
241 )
242 assert http.parse_date("foo") is None
205243
206244 def test_parse_date_overflows(self):
207 assert http.parse_date(' Sun 02 Feb 1343 08:49:37 GMT') == datetime(1343, 2, 2, 8, 49, 37)
208 assert http.parse_date('Thu, 01 Jan 1970 00:00:00 GMT') == datetime(1970, 1, 1, 0, 0)
209 assert http.parse_date('Thu, 33 Jan 1970 00:00:00 GMT') is None
245 assert http.parse_date(" Sun 02 Feb 1343 08:49:37 GMT") == datetime(
246 1343, 2, 2, 8, 49, 37
247 )
248 assert http.parse_date("Thu, 01 Jan 1970 00:00:00 GMT") == datetime(
249 1970, 1, 1, 0, 0
250 )
251 assert http.parse_date("Thu, 33 Jan 1970 00:00:00 GMT") is None
210252
211253 def test_remove_entity_headers(self):
212254 now = http.http_date()
213 headers1 = [('Date', now), ('Content-Type', 'text/html'), ('Content-Length', '0')]
255 headers1 = [
256 ("Date", now),
257 ("Content-Type", "text/html"),
258 ("Content-Length", "0"),
259 ]
214260 headers2 = datastructures.Headers(headers1)
215261
216262 http.remove_entity_headers(headers1)
217 assert headers1 == [('Date', now)]
263 assert headers1 == [("Date", now)]
218264
219265 http.remove_entity_headers(headers2)
220 assert headers2 == datastructures.Headers([(u'Date', now)])
266 assert headers2 == datastructures.Headers([(u"Date", now)])
221267
222268 def test_remove_hop_by_hop_headers(self):
223 headers1 = [('Connection', 'closed'), ('Foo', 'bar'),
224 ('Keep-Alive', 'wtf')]
269 headers1 = [("Connection", "closed"), ("Foo", "bar"), ("Keep-Alive", "wtf")]
225270 headers2 = datastructures.Headers(headers1)
226271
227272 http.remove_hop_by_hop_headers(headers1)
228 assert headers1 == [('Foo', 'bar')]
273 assert headers1 == [("Foo", "bar")]
229274
230275 http.remove_hop_by_hop_headers(headers2)
231 assert headers2 == datastructures.Headers([('Foo', 'bar')])
276 assert headers2 == datastructures.Headers([("Foo", "bar")])
232277
233278 def test_parse_options_header(self):
234 assert http.parse_options_header(None) == \
235 ('', {})
236 assert http.parse_options_header("") == \
237 ('', {})
238 assert http.parse_options_header(r'something; foo="other\"thing"') == \
239 ('something', {'foo': 'other"thing'})
240 assert http.parse_options_header(r'something; foo="other\"thing"; meh=42') == \
241 ('something', {'foo': 'other"thing', 'meh': '42'})
242 assert http.parse_options_header(r'something; foo="other\"thing"; meh=42; bleh') == \
243 ('something', {'foo': 'other"thing', 'meh': '42', 'bleh': None})
244 assert http.parse_options_header('something; foo="other;thing"; meh=42; bleh') == \
245 ('something', {'foo': 'other;thing', 'meh': '42', 'bleh': None})
246 assert http.parse_options_header('something; foo="otherthing"; meh=; bleh') == \
247 ('something', {'foo': 'otherthing', 'meh': None, 'bleh': None})
279 assert http.parse_options_header(None) == ("", {})
280 assert http.parse_options_header("") == ("", {})
281 assert http.parse_options_header(r'something; foo="other\"thing"') == (
282 "something",
283 {"foo": 'other"thing'},
284 )
285 assert http.parse_options_header(r'something; foo="other\"thing"; meh=42') == (
286 "something",
287 {"foo": 'other"thing', "meh": "42"},
288 )
289 assert http.parse_options_header(
290 r'something; foo="other\"thing"; meh=42; bleh'
291 ) == ("something", {"foo": 'other"thing', "meh": "42", "bleh": None})
292 assert http.parse_options_header(
293 'something; foo="other;thing"; meh=42; bleh'
294 ) == ("something", {"foo": "other;thing", "meh": "42", "bleh": None})
295 assert http.parse_options_header('something; foo="otherthing"; meh=; bleh') == (
296 "something",
297 {"foo": "otherthing", "meh": None, "bleh": None},
298 )
248299 # Issue #404
249 assert http.parse_options_header('multipart/form-data; name="foo bar"; '
250 'filename="bar foo"') == \
251 ('multipart/form-data', {'name': 'foo bar', 'filename': 'bar foo'})
300 assert http.parse_options_header(
301 'multipart/form-data; name="foo bar"; ' 'filename="bar foo"'
302 ) == ("multipart/form-data", {"name": "foo bar", "filename": "bar foo"})
252303 # Examples from RFC
253 assert http.parse_options_header('audio/*; q=0.2, audio/basic') == \
254 ('audio/*', {'q': '0.2'})
255 assert http.parse_options_header('audio/*; q=0.2, audio/basic', multiple=True) == \
256 ('audio/*', {'q': '0.2'}, "audio/basic", {})
257 assert http.parse_options_header(
258 'text/plain; q=0.5, text/html\n '
259 'text/x-dvi; q=0.8, text/x-c',
260 multiple=True) == \
261 ('text/plain', {'q': '0.5'}, "text/html", {},
262 "text/x-dvi", {'q': '0.8'}, "text/x-c", {})
263 assert http.parse_options_header('text/plain; q=0.5, text/html\n'
264 ' '
265 'text/x-dvi; q=0.8, text/x-c') == \
266 ('text/plain', {'q': '0.5'})
304 assert http.parse_options_header("audio/*; q=0.2, audio/basic") == (
305 "audio/*",
306 {"q": "0.2"},
307 )
308 assert http.parse_options_header(
309 "audio/*; q=0.2, audio/basic", multiple=True
310 ) == ("audio/*", {"q": "0.2"}, "audio/basic", {})
311 assert http.parse_options_header(
312 "text/plain; q=0.5, text/html\n text/x-dvi; q=0.8, text/x-c",
313 multiple=True,
314 ) == (
315 "text/plain",
316 {"q": "0.5"},
317 "text/html",
318 {},
319 "text/x-dvi",
320 {"q": "0.8"},
321 "text/x-c",
322 {},
323 )
324 assert http.parse_options_header(
325 "text/plain; q=0.5, text/html\n text/x-dvi; q=0.8, text/x-c"
326 ) == ("text/plain", {"q": "0.5"})
267327 # Issue #932
268328 assert http.parse_options_header(
269 'form-data; '
270 'name="a_file"; '
271 'filename*=UTF-8\'\''
272 '"%c2%a3%20and%20%e2%82%ac%20rates"') == \
273 ('form-data', {'name': 'a_file',
274 'filename': u'\xa3 and \u20ac rates'})
275 assert http.parse_options_header(
276 'form-data; '
277 'name*=UTF-8\'\'"%C5%AAn%C4%ADc%C5%8Dde%CC%BD"; '
278 'filename="some_file.txt"') == \
279 ('form-data', {'name': u'\u016an\u012dc\u014dde\u033d',
280 'filename': 'some_file.txt'})
329 "form-data; name=\"a_file\"; filename*=UTF-8''"
330 '"%c2%a3%20and%20%e2%82%ac%20rates"'
331 ) == ("form-data", {"name": "a_file", "filename": u"\xa3 and \u20ac rates"})
332 assert http.parse_options_header(
333 "form-data; name*=UTF-8''\"%C5%AAn%C4%ADc%C5%8Dde%CC%BD\"; "
334 'filename="some_file.txt"'
335 ) == (
336 "form-data",
337 {"name": u"\u016an\u012dc\u014dde\u033d", "filename": "some_file.txt"},
338 )
281339
282340 def test_parse_options_header_value_with_quotes(self):
283341 assert http.parse_options_header(
284342 'form-data; name="file"; filename="t\'es\'t.txt"'
285 ) == ('form-data', {'name': 'file', 'filename': "t'es't.txt"})
286 assert http.parse_options_header(
287 'form-data; name="file"; filename*=UTF-8\'\'"\'🐍\'.txt"'
288 ) == ('form-data', {'name': 'file', 'filename': u"'🐍'.txt"})
343 ) == ("form-data", {"name": "file", "filename": "t'es't.txt"})
344 assert http.parse_options_header(
345 "form-data; name=\"file\"; filename*=UTF-8''\"'🐍'.txt\""
346 ) == ("form-data", {"name": "file", "filename": u"'🐍'.txt"})
289347
290348 def test_parse_options_header_broken_values(self):
291349 # Issue #995
292 assert http.parse_options_header(' ') == ('', {})
293 assert http.parse_options_header(' , ') == ('', {})
294 assert http.parse_options_header(' ; ') == ('', {})
295 assert http.parse_options_header(' ,; ') == ('', {})
296 assert http.parse_options_header(' , a ') == ('', {})
297 assert http.parse_options_header(' ; a ') == ('', {})
350 assert http.parse_options_header(" ") == ("", {})
351 assert http.parse_options_header(" , ") == ("", {})
352 assert http.parse_options_header(" ; ") == ("", {})
353 assert http.parse_options_header(" ,; ") == ("", {})
354 assert http.parse_options_header(" , a ") == ("", {})
355 assert http.parse_options_header(" ; a ") == ("", {})
298356
299357 def test_dump_options_header(self):
300 assert http.dump_options_header('foo', {'bar': 42}) == \
301 'foo; bar=42'
302 assert http.dump_options_header('foo', {'bar': 42, 'fizz': None}) in \
303 ('foo; bar=42; fizz', 'foo; fizz; bar=42')
358 assert http.dump_options_header("foo", {"bar": 42}) == "foo; bar=42"
359 assert http.dump_options_header("foo", {"bar": 42, "fizz": None}) in (
360 "foo; bar=42; fizz",
361 "foo; fizz; bar=42",
362 )
304363
305364 def test_dump_header(self):
306 assert http.dump_header([1, 2, 3]) == '1, 2, 3'
365 assert http.dump_header([1, 2, 3]) == "1, 2, 3"
307366 assert http.dump_header([1, 2, 3], allow_token=False) == '"1", "2", "3"'
308 assert http.dump_header({'foo': 'bar'}, allow_token=False) == 'foo="bar"'
309 assert http.dump_header({'foo': 'bar'}) == 'foo=bar'
367 assert http.dump_header({"foo": "bar"}, allow_token=False) == 'foo="bar"'
368 assert http.dump_header({"foo": "bar"}) == "foo=bar"
310369
311370 def test_is_resource_modified(self):
312371 env = create_environ()
313372
314373 # ignore POST
315 env['REQUEST_METHOD'] = 'POST'
316 assert not http.is_resource_modified(env, etag='testing')
317 env['REQUEST_METHOD'] = 'GET'
374 env["REQUEST_METHOD"] = "POST"
375 assert not http.is_resource_modified(env, etag="testing")
376 env["REQUEST_METHOD"] = "GET"
318377
319378 # etagify from data
320 pytest.raises(TypeError, http.is_resource_modified, env,
321 data='42', etag='23')
322 env['HTTP_IF_NONE_MATCH'] = http.generate_etag(b'awesome')
323 assert not http.is_resource_modified(env, data=b'awesome')
324
325 env['HTTP_IF_MODIFIED_SINCE'] = http.http_date(datetime(2008, 1, 1, 12, 30))
326 assert not http.is_resource_modified(env,
327 last_modified=datetime(2008, 1, 1, 12, 00))
328 assert http.is_resource_modified(env,
329 last_modified=datetime(2008, 1, 1, 13, 00))
379 pytest.raises(TypeError, http.is_resource_modified, env, data="42", etag="23")
380 env["HTTP_IF_NONE_MATCH"] = http.generate_etag(b"awesome")
381 assert not http.is_resource_modified(env, data=b"awesome")
382
383 env["HTTP_IF_MODIFIED_SINCE"] = http.http_date(datetime(2008, 1, 1, 12, 30))
384 assert not http.is_resource_modified(
385 env, last_modified=datetime(2008, 1, 1, 12, 00)
386 )
387 assert http.is_resource_modified(
388 env, last_modified=datetime(2008, 1, 1, 13, 00)
389 )
330390
331391 def test_is_resource_modified_for_range_requests(self):
332392 env = create_environ()
333393
334 env['HTTP_IF_MODIFIED_SINCE'] = http.http_date(datetime(2008, 1, 1, 12, 30))
335 env['HTTP_IF_RANGE'] = http.generate_etag(b'awesome_if_range')
394 env["HTTP_IF_MODIFIED_SINCE"] = http.http_date(datetime(2008, 1, 1, 12, 30))
395 env["HTTP_IF_RANGE"] = http.generate_etag(b"awesome_if_range")
336396 # Range header not present, so If-Range should be ignored
337 assert not http.is_resource_modified(env, data=b'not_the_same',
338 ignore_if_range=False,
339 last_modified=datetime(2008, 1, 1, 12, 30))
340
341 env['HTTP_RANGE'] = ''
342 assert not http.is_resource_modified(env, data=b'awesome_if_range',
343 ignore_if_range=False)
344 assert http.is_resource_modified(env, data=b'not_the_same',
345 ignore_if_range=False)
346
347 env['HTTP_IF_RANGE'] = http.http_date(datetime(2008, 1, 1, 13, 30))
348 assert http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 14, 00),
349 ignore_if_range=False)
350 assert not http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 13, 30),
351 ignore_if_range=False)
352 assert http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 13, 30),
353 ignore_if_range=True)
397 assert not http.is_resource_modified(
398 env,
399 data=b"not_the_same",
400 ignore_if_range=False,
401 last_modified=datetime(2008, 1, 1, 12, 30),
402 )
403
404 env["HTTP_RANGE"] = ""
405 assert not http.is_resource_modified(
406 env, data=b"awesome_if_range", ignore_if_range=False
407 )
408 assert http.is_resource_modified(
409 env, data=b"not_the_same", ignore_if_range=False
410 )
411
412 env["HTTP_IF_RANGE"] = http.http_date(datetime(2008, 1, 1, 13, 30))
413 assert http.is_resource_modified(
414 env, last_modified=datetime(2008, 1, 1, 14, 00), ignore_if_range=False
415 )
416 assert not http.is_resource_modified(
417 env, last_modified=datetime(2008, 1, 1, 13, 30), ignore_if_range=False
418 )
419 assert http.is_resource_modified(
420 env, last_modified=datetime(2008, 1, 1, 13, 30), ignore_if_range=True
421 )
354422
355423 def test_date_formatting(self):
356 assert http.cookie_date(0) == 'Thu, 01-Jan-1970 00:00:00 GMT'
357 assert http.cookie_date(datetime(1970, 1, 1)) == 'Thu, 01-Jan-1970 00:00:00 GMT'
358 assert http.http_date(0) == 'Thu, 01 Jan 1970 00:00:00 GMT'
359 assert http.http_date(datetime(1970, 1, 1)) == 'Thu, 01 Jan 1970 00:00:00 GMT'
424 assert http.cookie_date(0) == "Thu, 01-Jan-1970 00:00:00 GMT"
425 assert http.cookie_date(datetime(1970, 1, 1)) == "Thu, 01-Jan-1970 00:00:00 GMT"
426 assert http.http_date(0) == "Thu, 01 Jan 1970 00:00:00 GMT"
427 assert http.http_date(datetime(1970, 1, 1)) == "Thu, 01 Jan 1970 00:00:00 GMT"
360428
361429 def test_cookies(self):
362430 strict_eq(
363 dict(http.parse_cookie('dismiss-top=6; CP=null*; PHPSESSID=0a539d42abc001cd'
364 'c762809248d4beed; a=42; b="\\\";"')),
431 dict(
432 http.parse_cookie(
433 "dismiss-top=6; CP=null*; PHPSESSID=0a539d42abc001cd"
434 'c762809248d4beed; a=42; b="\\";"'
435 )
436 ),
365437 {
366 'CP': u'null*',
367 'PHPSESSID': u'0a539d42abc001cdc762809248d4beed',
368 'a': u'42',
369 'dismiss-top': u'6',
370 'b': u'\";'
371 }
372 )
373 rv = http.dump_cookie('foo', 'bar baz blub', 360, httponly=True,
374 sync_expires=False)
438 "CP": u"null*",
439 "PHPSESSID": u"0a539d42abc001cdc762809248d4beed",
440 "a": u"42",
441 "dismiss-top": u"6",
442 "b": u'";',
443 },
444 )
445 rv = http.dump_cookie(
446 "foo", "bar baz blub", 360, httponly=True, sync_expires=False
447 )
375448 assert type(rv) is str
376 assert set(rv.split('; ')) == set(['HttpOnly', 'Max-Age=360',
377 'Path=/', 'foo="bar baz blub"'])
378
379 strict_eq(dict(http.parse_cookie('fo234{=bar; blub=Blah')),
380 {'fo234{': u'bar', 'blub': u'Blah'})
381
382 strict_eq(http.dump_cookie('key', 'xxx/'), 'key=xxx/; Path=/')
383 strict_eq(http.dump_cookie('key', 'xxx='), 'key=xxx=; Path=/')
449 assert set(rv.split("; ")) == {
450 "HttpOnly",
451 "Max-Age=360",
452 "Path=/",
453 'foo="bar baz blub"',
454 }
455
456 strict_eq(
457 dict(http.parse_cookie("fo234{=bar; blub=Blah")),
458 {"fo234{": u"bar", "blub": u"Blah"},
459 )
460
461 strict_eq(http.dump_cookie("key", "xxx/"), "key=xxx/; Path=/")
462 strict_eq(http.dump_cookie("key", "xxx="), "key=xxx=; Path=/")
384463
385464 def test_bad_cookies(self):
386465 strict_eq(
387 dict(http.parse_cookie('first=IamTheFirst ; a=1; oops ; a=2 ;'
388 'second = andMeTwo;')),
389 {
390 'first': u'IamTheFirst',
391 'a': u'1',
392 'a': u'2',
393 'oops': u'',
394 'second': u'andMeTwo',
395 }
466 dict(
467 http.parse_cookie(
468 "first=IamTheFirst ; a=1; oops ; a=2 ;second = andMeTwo;"
469 )
470 ),
471 {"first": u"IamTheFirst", "a": u"2", "oops": u"", "second": u"andMeTwo"},
472 )
473
474 def test_empty_keys_are_ignored(self):
475 strict_eq(
476 dict(
477 http.parse_cookie("first=IamTheFirst ; a=1; a=2 ;second=andMeTwo; ; ")
478 ),
479 {"first": u"IamTheFirst", "a": u"2", "second": u"andMeTwo"},
396480 )
397481
398482 def test_cookie_quoting(self):
399483 val = http.dump_cookie("foo", "?foo")
400484 strict_eq(val, 'foo="?foo"; Path=/')
401 strict_eq(dict(http.parse_cookie(val)), {'foo': u'?foo'})
402
403 strict_eq(dict(http.parse_cookie(r'foo="foo\054bar"')),
404 {'foo': u'foo,bar'})
485 strict_eq(dict(http.parse_cookie(val)), {"foo": u"?foo"})
486
487 strict_eq(dict(http.parse_cookie(r'foo="foo\054bar"')), {"foo": u"foo,bar"})
405488
406489 def test_cookie_domain_resolving(self):
407 val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com')
408 strict_eq(val, 'foo=bar; Domain=xn--n3h.com; Path=/')
490 val = http.dump_cookie("foo", "bar", domain=u"\N{SNOWMAN}.com")
491 strict_eq(val, "foo=bar; Domain=xn--n3h.com; Path=/")
409492
410493 def test_cookie_unicode_dumping(self):
411 val = http.dump_cookie('foo', u'\N{SNOWMAN}')
494 val = http.dump_cookie("foo", u"\N{SNOWMAN}")
412495 h = datastructures.Headers()
413 h.add('Set-Cookie', val)
414 assert h['Set-Cookie'] == 'foo="\\342\\230\\203"; Path=/'
415
416 cookies = http.parse_cookie(h['Set-Cookie'])
417 assert cookies['foo'] == u'\N{SNOWMAN}'
496 h.add("Set-Cookie", val)
497 assert h["Set-Cookie"] == 'foo="\\342\\230\\203"; Path=/'
498
499 cookies = http.parse_cookie(h["Set-Cookie"])
500 assert cookies["foo"] == u"\N{SNOWMAN}"
418501
419502 def test_cookie_unicode_keys(self):
420503 # Yes, this is technically against the spec but happens
421 val = http.dump_cookie(u'fö', u'fö')
422 assert val == wsgi_encoding_dance(u'fö="f\\303\\266"; Path=/', 'utf-8')
504 val = http.dump_cookie(u"fö", u"fö")
505 assert val == wsgi_encoding_dance(u'fö="f\\303\\266"; Path=/', "utf-8")
423506 cookies = http.parse_cookie(val)
424 assert cookies[u'fö'] == u'fö'
507 assert cookies[u"fö"] == u"fö"
425508
426509 def test_cookie_unicode_parsing(self):
427510 # This is actually a correct test. This is what is being submitted
428511 # by firefox if you set an unicode cookie and we get the cookie sent
429512 # in on Python 3 under PEP 3333.
430 cookies = http.parse_cookie(u'fö=fö')
431 assert cookies[u'fö'] == u'fö'
513 cookies = http.parse_cookie(u"fö=fö")
514 assert cookies[u"fö"] == u"fö"
432515
433516 def test_cookie_domain_encoding(self):
434 val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com')
435 strict_eq(val, 'foo=bar; Domain=xn--n3h.com; Path=/')
436
437 val = http.dump_cookie('foo', 'bar', domain=u'.\N{SNOWMAN}.com')
438 strict_eq(val, 'foo=bar; Domain=.xn--n3h.com; Path=/')
439
440 val = http.dump_cookie('foo', 'bar', domain=u'.foo.com')
441 strict_eq(val, 'foo=bar; Domain=.foo.com; Path=/')
517 val = http.dump_cookie("foo", "bar", domain=u"\N{SNOWMAN}.com")
518 strict_eq(val, "foo=bar; Domain=xn--n3h.com; Path=/")
519
520 val = http.dump_cookie("foo", "bar", domain=u".\N{SNOWMAN}.com")
521 strict_eq(val, "foo=bar; Domain=.xn--n3h.com; Path=/")
522
523 val = http.dump_cookie("foo", "bar", domain=u".foo.com")
524 strict_eq(val, "foo=bar; Domain=.foo.com; Path=/")
442525
443526 def test_cookie_maxsize(self, recwarn):
444 val = http.dump_cookie('foo', 'bar' * 1360 + 'b')
527 val = http.dump_cookie("foo", "bar" * 1360 + "b")
445528 assert len(recwarn) == 0
446529 assert len(val) == 4093
447530
448 http.dump_cookie('foo', 'bar' * 1360 + 'ba')
531 http.dump_cookie("foo", "bar" * 1360 + "ba")
449532 assert len(recwarn) == 1
450533 w = recwarn.pop()
451 assert 'cookie is too large' in str(w.message)
452
453 http.dump_cookie('foo', b'w' * 502, max_size=512)
534 assert "cookie is too large" in str(w.message)
535
536 http.dump_cookie("foo", b"w" * 502, max_size=512)
454537 assert len(recwarn) == 1
455538 w = recwarn.pop()
456 assert 'the limit is 512 bytes' in str(w.message)
457
458 @pytest.mark.parametrize('input, expected', [
459 ('strict', 'foo=bar; Path=/; SameSite=Strict'),
460 ('lax', 'foo=bar; Path=/; SameSite=Lax'),
461 (None, 'foo=bar; Path=/'),
462 ])
539 assert "the limit is 512 bytes" in str(w.message)
540
541 @pytest.mark.parametrize(
542 "input, expected",
543 [
544 ("strict", "foo=bar; Path=/; SameSite=Strict"),
545 ("lax", "foo=bar; Path=/; SameSite=Lax"),
546 (None, "foo=bar; Path=/"),
547 ],
548 )
463549 def test_cookie_samesite_attribute(self, input, expected):
464 val = http.dump_cookie('foo', 'bar', samesite=input)
550 val = http.dump_cookie("foo", "bar", samesite=input)
465551 strict_eq(val, expected)
466552
467553
468554 class TestRange(object):
469
470555 def test_if_range_parsing(self):
471556 rv = http.parse_if_range_header('"Test"')
472 assert rv.etag == 'Test'
557 assert rv.etag == "Test"
473558 assert rv.date is None
474559 assert rv.to_header() == '"Test"'
475560
476561 # weak information is dropped
477562 rv = http.parse_if_range_header('W/"Test"')
478 assert rv.etag == 'Test'
563 assert rv.etag == "Test"
479564 assert rv.date is None
480565 assert rv.to_header() == '"Test"'
481566
482567 # broken etags are supported too
483 rv = http.parse_if_range_header('bullshit')
484 assert rv.etag == 'bullshit'
568 rv = http.parse_if_range_header("bullshit")
569 assert rv.etag == "bullshit"
485570 assert rv.date is None
486571 assert rv.to_header() == '"bullshit"'
487572
488 rv = http.parse_if_range_header('Thu, 01 Jan 1970 00:00:00 GMT')
573 rv = http.parse_if_range_header("Thu, 01 Jan 1970 00:00:00 GMT")
489574 assert rv.etag is None
490575 assert rv.date == datetime(1970, 1, 1)
491 assert rv.to_header() == 'Thu, 01 Jan 1970 00:00:00 GMT'
492
493 for x in '', None:
576 assert rv.to_header() == "Thu, 01 Jan 1970 00:00:00 GMT"
577
578 for x in "", None:
494579 rv = http.parse_if_range_header(x)
495580 assert rv.etag is None
496581 assert rv.date is None
497 assert rv.to_header() == ''
582 assert rv.to_header() == ""
498583
499584 def test_range_parsing(self):
500 rv = http.parse_range_header('bytes=52')
585 rv = http.parse_range_header("bytes=52")
501586 assert rv is None
502587
503 rv = http.parse_range_header('bytes=52-')
504 assert rv.units == 'bytes'
588 rv = http.parse_range_header("bytes=52-")
589 assert rv.units == "bytes"
505590 assert rv.ranges == [(52, None)]
506 assert rv.to_header() == 'bytes=52-'
507
508 rv = http.parse_range_header('bytes=52-99')
509 assert rv.units == 'bytes'
591 assert rv.to_header() == "bytes=52-"
592
593 rv = http.parse_range_header("bytes=52-99")
594 assert rv.units == "bytes"
510595 assert rv.ranges == [(52, 100)]
511 assert rv.to_header() == 'bytes=52-99'
512
513 rv = http.parse_range_header('bytes=52-99,-1000')
514 assert rv.units == 'bytes'
596 assert rv.to_header() == "bytes=52-99"
597
598 rv = http.parse_range_header("bytes=52-99,-1000")
599 assert rv.units == "bytes"
515600 assert rv.ranges == [(52, 100), (-1000, None)]
516 assert rv.to_header() == 'bytes=52-99,-1000'
517
518 rv = http.parse_range_header('bytes = 1 - 100')
519 assert rv.units == 'bytes'
601 assert rv.to_header() == "bytes=52-99,-1000"
602
603 rv = http.parse_range_header("bytes = 1 - 100")
604 assert rv.units == "bytes"
520605 assert rv.ranges == [(1, 101)]
521 assert rv.to_header() == 'bytes=1-100'
522
523 rv = http.parse_range_header('AWesomes=0-999')
524 assert rv.units == 'awesomes'
606 assert rv.to_header() == "bytes=1-100"
607
608 rv = http.parse_range_header("AWesomes=0-999")
609 assert rv.units == "awesomes"
525610 assert rv.ranges == [(0, 1000)]
526 assert rv.to_header() == 'awesomes=0-999'
527
528 rv = http.parse_range_header('bytes=-')
611 assert rv.to_header() == "awesomes=0-999"
612
613 rv = http.parse_range_header("bytes=-")
529614 assert rv is None
530615
531 rv = http.parse_range_header('bytes=bullshit')
616 rv = http.parse_range_header("bytes=bad")
532617 assert rv is None
533618
534 rv = http.parse_range_header('bytes=bullshit-1')
619 rv = http.parse_range_header("bytes=bad-1")
535620 assert rv is None
536621
537 rv = http.parse_range_header('bytes=-bullshit')
622 rv = http.parse_range_header("bytes=-bad")
538623 assert rv is None
539624
540 rv = http.parse_range_header('bytes=52-99, bullshit')
625 rv = http.parse_range_header("bytes=52-99, bad")
541626 assert rv is None
542627
543628 def test_content_range_parsing(self):
544 rv = http.parse_content_range_header('bytes 0-98/*')
545 assert rv.units == 'bytes'
629 rv = http.parse_content_range_header("bytes 0-98/*")
630 assert rv.units == "bytes"
546631 assert rv.start == 0
547632 assert rv.stop == 99
548633 assert rv.length is None
549 assert rv.to_header() == 'bytes 0-98/*'
550
551 rv = http.parse_content_range_header('bytes 0-98/*asdfsa')
634 assert rv.to_header() == "bytes 0-98/*"
635
636 rv = http.parse_content_range_header("bytes 0-98/*asdfsa")
552637 assert rv is None
553638
554 rv = http.parse_content_range_header('bytes 0-99/100')
555 assert rv.to_header() == 'bytes 0-99/100'
639 rv = http.parse_content_range_header("bytes 0-99/100")
640 assert rv.to_header() == "bytes 0-99/100"
556641 rv.start = None
557642 rv.stop = None
558 assert rv.units == 'bytes'
559 assert rv.to_header() == 'bytes */100'
560
561 rv = http.parse_content_range_header('bytes */100')
643 assert rv.units == "bytes"
644 assert rv.to_header() == "bytes */100"
645
646 rv = http.parse_content_range_header("bytes */100")
562647 assert rv.start is None
563648 assert rv.stop is None
564649 assert rv.length == 100
565 assert rv.units == 'bytes'
650 assert rv.units == "bytes"
566651
567652
568653 class TestRegression(object):
569
570654 def test_best_match_works(self):
571655 # was a bug in 0.6
572 rv = http.parse_accept_header('foo=,application/xml,application/xhtml+xml,'
573 'text/html;q=0.9,text/plain;q=0.8,'
574 'image/png,*/*;q=0.5',
575 datastructures.MIMEAccept).best_match(['foo/bar'])
576 assert rv == 'foo/bar'
656 rv = http.parse_accept_header(
657 "foo=,application/xml,application/xhtml+xml,"
658 "text/html;q=0.9,text/plain;q=0.8,"
659 "image/png,*/*;q=0.5",
660 datastructures.MIMEAccept,
661 ).best_match(["foo/bar"])
662 assert rv == "foo/bar"
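The `test_http.py` hunks above end here. The ETag assertions they carry (`quote_etag`, `unquote_etag`) pin down a small, self-contained behavior; as a minimal sketch reproducing exactly what the tests assert (not Werkzeug's actual implementation), the quoting rules are:

```python
def quote_etag(etag, weak=False):
    """Wrap an ETag in double quotes; a weak tag gets a W/ prefix."""
    prefix = "W/" if weak else ""
    return '%s"%s"' % (prefix, etag)


def unquote_etag(etag):
    """Return (tag, is_weak) for a quoted ETag string."""
    weak = etag.startswith(("W/", "w/"))
    if weak:
        etag = etag[2:]
    return etag.strip('"'), weak


# The same values the diffed tests assert against werkzeug.http:
assert quote_etag("foo") == '"foo"'
assert quote_etag("foo", True) == 'W/"foo"'
assert unquote_etag('"foo"') == ("foo", False)
assert unquote_etag('W/"foo"') == ("foo", True)
```

The weak/strong distinction matters elsewhere in the same hunks: `parse_etags` keeps `W/"baz"` out of plain containment (`"baz" not in es`) but reachable via `contains_weak("baz")`.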
44
55 Internal tests.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from datetime import datetime
11 from warnings import filterwarnings
12 from warnings import resetwarnings
13
1014 import pytest
11
12 from datetime import datetime
13 from warnings import filterwarnings, resetwarnings
14
15 from werkzeug.wrappers import Request, Response
1615
1716 from werkzeug import _internal as internal
1817 from werkzeug.test import create_environ
18 from werkzeug.wrappers import Request
19 from werkzeug.wrappers import Response
1920
2021
2122 def test_date_to_unix():
2728
2829
2930 def test_easteregg():
30 req = Request.from_values('/?macgybarchakku')
31 req = Request.from_values("/?macgybarchakku")
3132 resp = Response.force_type(internal._easteregg(None), req)
32 assert b'About Werkzeug' in resp.get_data()
33 assert b'the Swiss Army knife of Python web development' in resp.get_data()
33 assert b"About Werkzeug" in resp.get_data()
34 assert b"the Swiss Army knife of Python web development" in resp.get_data()
3435
3536
3637 def test_wrapper_internals():
37 req = Request.from_values(data={'foo': 'bar'}, method='POST')
38 req = Request.from_values(data={"foo": "bar"}, method="POST")
3839 req._load_form_data()
39 assert req.form.to_dict() == {'foo': 'bar'}
40 assert req.form.to_dict() == {"foo": "bar"}
4041
4142 # second call does not break
4243 req._load_form_data()
43 assert req.form.to_dict() == {'foo': 'bar'}
44 assert req.form.to_dict() == {"foo": "bar"}
4445
4546 # check reprs
4647 assert repr(req) == "<Request 'http://localhost/' [POST]>"
4748 resp = Response()
48 assert repr(resp) == '<Response 0 bytes [200 OK]>'
49 resp.set_data('Hello World!')
50 assert repr(resp) == '<Response 12 bytes [200 OK]>'
51 resp.response = iter(['Test'])
52 assert repr(resp) == '<Response streamed [200 OK]>'
49 assert repr(resp) == "<Response 0 bytes [200 OK]>"
50 resp.set_data("Hello World!")
51 assert repr(resp) == "<Response 12 bytes [200 OK]>"
52 resp.response = iter(["Test"])
53 assert repr(resp) == "<Response streamed [200 OK]>"
5354
5455 # unicode data does not set content length
55 response = Response([u'Hällo Wörld'])
56 response = Response([u"Hällo Wörld"])
5657 headers = response.get_wsgi_headers(create_environ())
57 assert u'Content-Length' not in headers
58 assert u"Content-Length" not in headers
5859
59 response = Response([u'Hällo Wörld'.encode('utf-8')])
60 response = Response([u"Hällo Wörld".encode("utf-8")])
6061 headers = response.get_wsgi_headers(create_environ())
61 assert u'Content-Length' in headers
62 assert u"Content-Length" in headers
6263
6364 # check for internal warnings
64 filterwarnings('error', category=Warning)
65 filterwarnings("error", category=Warning)
6566 response = Response()
6667 environ = create_environ()
67 response.response = 'What the...?'
68 response.response = "What the...?"
6869 pytest.raises(Warning, lambda: list(response.iter_encoded()))
6970 pytest.raises(Warning, lambda: list(response.get_app_iter(environ)))
7071 response.direct_passthrough = True
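The `test_internal.py` hunk above asserts specific `Response` reprs: a sized body reports its byte count, while an arbitrary iterator reports `streamed`. A toy stand-in (names here are illustrative, not Werkzeug's internals) shows why the distinction exists — a list of byte chunks can be summed, an iterator cannot be sized without consuming it:

```python
class MiniResponse:
    """Toy model of the reprs asserted in test_wrapper_internals."""

    def __init__(self):
        self.response = [b""]
        self.status = "200 OK"

    def set_data(self, value):
        # Text is encoded so the body has a definite byte length.
        if isinstance(value, str):
            value = value.encode("utf-8")
        self.response = [value]

    def __repr__(self):
        if isinstance(self.response, list):
            size = sum(len(chunk) for chunk in self.response)
            body = "%d bytes" % size
        else:
            # An arbitrary iterable can't be sized without consuming it.
            body = "streamed"
        return "<Response %s [%s]>" % (body, self.status)


resp = MiniResponse()
assert repr(resp) == "<Response 0 bytes [200 OK]>"
resp.set_data("Hello World!")
assert repr(resp) == "<Response 12 bytes [200 OK]>"
resp.response = iter(["Test"])
assert repr(resp) == "<Response streamed [200 OK]>"
```

The same sizing logic explains the neighboring assertions: a unicode body leaves `Content-Length` unset (its byte length depends on the encoding), while a pre-encoded bytes body sets it.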
44
55 Local and local proxy tests.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 import pytest
11
10 import copy
1211 import time
13 import copy
1412 from functools import partial
1513 from threading import Thread
14
15 import pytest
1616
1717 from werkzeug import local
1818
2727 ns.foo = idx
2828 time.sleep(0.02)
2929 values.append(ns.foo)
30 threads = [Thread(target=value_setter, args=(x,))
31 for x in [1, 2, 3]]
30
31 threads = [Thread(target=value_setter, args=(x,)) for x in [1, 2, 3]]
3232 for thread in threads:
3333 thread.start()
3434 time.sleep(0.2)
3636
3737 def delfoo():
3838 del ns.foo
39
3940 delfoo()
4041 pytest.raises(AttributeError, lambda: ns.foo)
4142 pytest.raises(AttributeError, delfoo)
4748 ns = local.Local()
4849 ns.foo = 42
4950 local.release_local(ns)
50 assert not hasattr(ns, 'foo')
51 assert not hasattr(ns, "foo")
5152
5253 ls = local.LocalStack()
5354 ls.push(42)
121122 assert proxy == (1, 2)
122123 ls.pop()
123124 ls.pop()
124 assert repr(proxy) == '<LocalProxy unbound>'
125 assert repr(proxy) == "<LocalProxy unbound>"
125126
126127 assert ident not in ls._local.__storage__
127128
143144 local.LocalManager([ns, stack], ident_func=lambda: ident)
144145
145146 ns.foo = 42
146 stack.push({'foo': 42})
147 stack.push({"foo": 42})
147148 ident = 1
148149 ns.foo = 23
149 stack.push({'foo': 23})
150 stack.push({"foo": 23})
150151 ident = 0
151152 assert ns.foo == 42
152 assert stack.top['foo'] == 42
153 assert stack.top["foo"] == 42
153154 stack.pop()
154155 assert stack.top is None
155156 ident = 1
156157 assert ns.foo == 23
157 assert stack.top['foo'] == 23
158 assert stack.top["foo"] == 23
158159 stack.pop()
159160 assert stack.top is None
160161
168169
169170 def __deepcopy__(self, memo):
170171 return self
172
171173 f = Foo()
172174 p = local.LocalProxy(lambda: f)
173175 assert p.attr == 42
185187
186188 def test_local_proxy_wrapped_attribute():
187189 class SomeClassWithWrapped(object):
188 __wrapped__ = 'wrapped'
190 __wrapped__ = "wrapped"
189191
190192 def lookup_func():
191193 return 42
202204 ns.foo = SomeClassWithWrapped()
203205 ns.bar = 42
204206
205 assert ns('foo').__wrapped__ == 'wrapped'
206 pytest.raises(AttributeError, lambda: ns('bar').__wrapped__)
207 assert ns("foo").__wrapped__ == "wrapped"
208 pytest.raises(AttributeError, lambda: ns("bar").__wrapped__)
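The `test_local.py` hunks above revolve around `LocalProxy`, which forwards attribute access through a lookup callable evaluated on every access. A minimal sketch of that forwarding pattern (a toy, not the real `werkzeug.local.LocalProxy`, which also forwards operators, reprs, and `__wrapped__`):

```python
class MiniProxy:
    """Minimal attribute-forwarding proxy in the spirit of LocalProxy."""

    def __init__(self, lookup):
        # Bypass our own __setattr__ semantics to store the lookup callable.
        object.__setattr__(self, "_lookup", lookup)

    def __getattr__(self, name):
        # Resolve the target object fresh on every attribute access.
        target = object.__getattribute__(self, "_lookup")()
        return getattr(target, name)


class Foo:
    attr = 42


f = Foo()
p = MiniProxy(lambda: f)
assert p.attr == 42

# Re-evaluating the lookup is what lets a proxy track a mutable source.
f.attr = 23
assert p.attr == 23
```

Because the lookup runs per access, the same proxy object can point at different targets over time — which is exactly what the `LocalStack` tests above rely on when `proxy` becomes `<LocalProxy unbound>` after the stack is popped.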
44
55 Routing tests.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 import gc
11 import uuid
12
1013 import pytest
1114
12 import uuid
13
14 from tests import strict_eq
15
15 from . import strict_eq
1616 from werkzeug import routing as r
17 from werkzeug.datastructures import ImmutableDict
18 from werkzeug.datastructures import MultiDict
19 from werkzeug.test import create_environ
1720 from werkzeug.wrappers import Response
18 from werkzeug.datastructures import ImmutableDict, MultiDict
19 from werkzeug.test import create_environ
2021
2122
2223 def test_basic_routing():
23 map = r.Map([
24 r.Rule('/', endpoint='index'),
25 r.Rule('/foo', endpoint='foo'),
26 r.Rule('/bar/', endpoint='bar')
27 ])
28 adapter = map.bind('example.org', '/')
29 assert adapter.match('/') == ('index', {})
30 assert adapter.match('/foo') == ('foo', {})
31 assert adapter.match('/bar/') == ('bar', {})
32 pytest.raises(r.RequestRedirect, lambda: adapter.match('/bar'))
33 pytest.raises(r.NotFound, lambda: adapter.match('/blub'))
34
35 adapter = map.bind('example.org', '/test')
36 with pytest.raises(r.RequestRedirect) as excinfo:
37 adapter.match('/bar')
38 assert excinfo.value.new_url == 'http://example.org/test/bar/'
39
40 adapter = map.bind('example.org', '/')
41 with pytest.raises(r.RequestRedirect) as excinfo:
42 adapter.match('/bar')
43 assert excinfo.value.new_url == 'http://example.org/bar/'
44
45 adapter = map.bind('example.org', '/')
46 with pytest.raises(r.RequestRedirect) as excinfo:
47 adapter.match('/bar', query_args={'aha': 'muhaha'})
48 assert excinfo.value.new_url == 'http://example.org/bar/?aha=muhaha'
49
50 adapter = map.bind('example.org', '/')
51 with pytest.raises(r.RequestRedirect) as excinfo:
52 adapter.match('/bar', query_args='aha=muhaha')
53 assert excinfo.value.new_url == 'http://example.org/bar/?aha=muhaha'
54
55 adapter = map.bind_to_environ(create_environ('/bar?foo=bar',
56 'http://example.org/'))
24 map = r.Map(
25 [
26 r.Rule("/", endpoint="index"),
27 r.Rule("/foo", endpoint="foo"),
28 r.Rule("/bar/", endpoint="bar"),
29 ]
30 )
31 adapter = map.bind("example.org", "/")
32 assert adapter.match("/") == ("index", {})
33 assert adapter.match("/foo") == ("foo", {})
34 assert adapter.match("/bar/") == ("bar", {})
35 pytest.raises(r.RequestRedirect, lambda: adapter.match("/bar"))
36 pytest.raises(r.NotFound, lambda: adapter.match("/blub"))
37
38 adapter = map.bind("example.org", "/test")
39 with pytest.raises(r.RequestRedirect) as excinfo:
40 adapter.match("/bar")
41 assert excinfo.value.new_url == "http://example.org/test/bar/"
42
43 adapter = map.bind("example.org", "/")
44 with pytest.raises(r.RequestRedirect) as excinfo:
45 adapter.match("/bar")
46 assert excinfo.value.new_url == "http://example.org/bar/"
47
48 adapter = map.bind("example.org", "/")
49 with pytest.raises(r.RequestRedirect) as excinfo:
50 adapter.match("/bar", query_args={"aha": "muhaha"})
51 assert excinfo.value.new_url == "http://example.org/bar/?aha=muhaha"
52
53 adapter = map.bind("example.org", "/")
54 with pytest.raises(r.RequestRedirect) as excinfo:
55 adapter.match("/bar", query_args="aha=muhaha")
56 assert excinfo.value.new_url == "http://example.org/bar/?aha=muhaha"
57
58 adapter = map.bind_to_environ(create_environ("/bar?foo=bar", "http://example.org/"))
5759 with pytest.raises(r.RequestRedirect) as excinfo:
5860 adapter.match()
59 assert excinfo.value.new_url == 'http://example.org/bar/?foo=bar'
61 assert excinfo.value.new_url == "http://example.org/bar/?foo=bar"
6062
6163
6264 def test_strict_slashes_redirect():
63 map = r.Map([
64 r.Rule('/bar/', endpoint='get', methods=["GET"]),
65 r.Rule('/bar', endpoint='post', methods=["POST"]),
66 ])
67 adapter = map.bind('example.org', '/')
65 map = r.Map(
66 [
67 r.Rule("/bar/", endpoint="get", methods=["GET"]),
68 r.Rule("/bar", endpoint="post", methods=["POST"]),
69 r.Rule("/foo/", endpoint="foo", methods=["POST"]),
70 ]
71 )
72 adapter = map.bind("example.org", "/")
6873
6974 # Check if the actual routes work
70 assert adapter.match('/bar/', method='GET') == ('get', {})
71 assert adapter.match('/bar', method='POST') == ('post', {})
75 assert adapter.match("/bar/", method="GET") == ("get", {})
76 assert adapter.match("/bar", method="POST") == ("post", {})
7277
7378 # Check if exceptions are correct
74 pytest.raises(r.RequestRedirect, adapter.match, '/bar', method='GET')
75 pytest.raises(r.MethodNotAllowed, adapter.match, '/bar/', method='POST')
79 pytest.raises(r.RequestRedirect, adapter.match, "/bar", method="GET")
80 pytest.raises(r.MethodNotAllowed, adapter.match, "/bar/", method="POST")
81 with pytest.raises(r.RequestRedirect) as error_info:
82 adapter.match("/foo", method="POST")
83 assert error_info.value.code == 308
7684
7785 # Check differently defined order
78 map = r.Map([
79 r.Rule('/bar', endpoint='post', methods=["POST"]),
80 r.Rule('/bar/', endpoint='get', methods=["GET"]),
81 ])
82 adapter = map.bind('example.org', '/')
86 map = r.Map(
87 [
88 r.Rule("/bar", endpoint="post", methods=["POST"]),
89 r.Rule("/bar/", endpoint="get", methods=["GET"]),
90 ]
91 )
92 adapter = map.bind("example.org", "/")
8393
8494 # Check if the actual routes work
85 assert adapter.match('/bar/', method='GET') == ('get', {})
86 assert adapter.match('/bar', method='POST') == ('post', {})
95 assert adapter.match("/bar/", method="GET") == ("get", {})
96 assert adapter.match("/bar", method="POST") == ("post", {})
8797
8898 # Check if exceptions are correct
89 pytest.raises(r.RequestRedirect, adapter.match, '/bar', method='GET')
90 pytest.raises(r.MethodNotAllowed, adapter.match, '/bar/', method='POST')
99 pytest.raises(r.RequestRedirect, adapter.match, "/bar", method="GET")
100 pytest.raises(r.MethodNotAllowed, adapter.match, "/bar/", method="POST")
91101
92102 # Check what happens when only the slash route is defined
93 map = r.Map([
94 r.Rule('/bar/', endpoint='get', methods=["GET"]),
95 ])
96 adapter = map.bind('example.org', '/')
103 map = r.Map([r.Rule("/bar/", endpoint="get", methods=["GET"])])
104 adapter = map.bind("example.org", "/")
97105
98106 # Check if the actual routes work
99 assert adapter.match('/bar/', method='GET') == ('get', {})
107 assert adapter.match("/bar/", method="GET") == ("get", {})
100108
101109 # Check if exceptions are correct
102 pytest.raises(r.RequestRedirect, adapter.match, '/bar', method='GET')
103 pytest.raises(r.MethodNotAllowed, adapter.match, '/bar/', method='POST')
104 pytest.raises(r.MethodNotAllowed, adapter.match, '/bar', method='POST')
110 pytest.raises(r.RequestRedirect, adapter.match, "/bar", method="GET")
111 pytest.raises(r.MethodNotAllowed, adapter.match, "/bar/", method="POST")
112 pytest.raises(r.MethodNotAllowed, adapter.match, "/bar", method="POST")
105113
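The strict-slashes tests above encode werkzeug's behavior: a request for a branch URL without its trailing slash is answered with a permanent redirect (code 308, as the `error_info.value.code == 308` assertion checks), and the redirect is preferred over a 405 when the trailing-slash rule would accept the method. A minimal stdlib-only sketch of that decision order, not werkzeug's actual implementation (the `rules` mapping and `match` helper are hypothetical):

```python
class RequestRedirect(Exception):
    """Signal that the client should retry at the canonical URL."""

    code = 308

    def __init__(self, new_url):
        super(RequestRedirect, self).__init__(new_url)
        self.new_url = new_url


def match(path, rules, method="GET"):
    # rules: {canonical_path: {method: endpoint}}; branch paths end in "/".
    methods = rules.get(path, {})
    if method in methods:
        return methods[method]
    if method in rules.get(path + "/", {}):
        # Strict slashes: the trailing-slash rule accepts this method,
        # so redirect before considering a 405.
        raise RequestRedirect(path + "/")
    if methods:
        raise LookupError("405 Method Not Allowed")
    raise LookupError("404 Not Found")
```

With `rules = {"/bar/": {"GET": "get"}, "/bar": {"POST": "post"}}`, a `GET /bar` raises `RequestRedirect("/bar/")` while `POST /bar/` raises the 405, mirroring the assertions in the test.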
106114
107115 def test_environ_defaults():
108116 environ = create_environ("/foo")
109 strict_eq(environ["PATH_INFO"], '/foo')
117 strict_eq(environ["PATH_INFO"], "/foo")
110118 m = r.Map([r.Rule("/foo", endpoint="foo"), r.Rule("/bar", endpoint="bar")])
111119 a = m.bind_to_environ(environ)
112 strict_eq(a.match("/foo"), ('foo', {}))
113 strict_eq(a.match(), ('foo', {}))
114 strict_eq(a.match("/bar"), ('bar', {}))
120 strict_eq(a.match("/foo"), ("foo", {}))
121 strict_eq(a.match(), ("foo", {}))
122 strict_eq(a.match("/bar"), ("bar", {}))
115123 pytest.raises(r.NotFound, a.match, "/bars")
116124
117125
118126 def test_environ_nonascii_pathinfo():
119 environ = create_environ(u'/лошадь')
120 m = r.Map([
121 r.Rule(u'/', endpoint='index'),
122 r.Rule(u'/лошадь', endpoint='horse')
123 ])
127 environ = create_environ(u"/лошадь")
128 m = r.Map([r.Rule(u"/", endpoint="index"), r.Rule(u"/лошадь", endpoint="horse")])
124129 a = m.bind_to_environ(environ)
125 strict_eq(a.match(u'/'), ('index', {}))
126 strict_eq(a.match(u'/лошадь'), ('horse', {}))
127 pytest.raises(r.NotFound, a.match, u'/барсук')
130 strict_eq(a.match(u"/"), ("index", {}))
131 strict_eq(a.match(u"/лошадь"), ("horse", {}))
132 pytest.raises(r.NotFound, a.match, u"/барсук")
128133
129134
130135 def test_basic_building():
131 map = r.Map([
132 r.Rule('/', endpoint='index'),
133 r.Rule('/foo', endpoint='foo'),
134 r.Rule('/bar/<baz>', endpoint='bar'),
135 r.Rule('/bar/<int:bazi>', endpoint='bari'),
136 r.Rule('/bar/<float:bazf>', endpoint='barf'),
137 r.Rule('/bar/<path:bazp>', endpoint='barp'),
138 r.Rule('/hehe', endpoint='blah', subdomain='blah')
139 ])
140 adapter = map.bind('example.org', '/', subdomain='blah')
141
142 assert adapter.build('index', {}) == 'http://example.org/'
143 assert adapter.build('foo', {}) == 'http://example.org/foo'
144 assert adapter.build('bar', {'baz': 'blub'}) == \
145 'http://example.org/bar/blub'
146 assert adapter.build('bari', {'bazi': 50}) == 'http://example.org/bar/50'
147 multivalues = MultiDict([('bazi', 50), ('bazi', None)])
148 assert adapter.build('bari', multivalues) == 'http://example.org/bar/50'
149 assert adapter.build('barf', {'bazf': 0.815}) == \
150 'http://example.org/bar/0.815'
151 assert adapter.build('barp', {'bazp': 'la/di'}) == \
152 'http://example.org/bar/la/di'
153 assert adapter.build('blah', {}) == '/hehe'
154 pytest.raises(r.BuildError, lambda: adapter.build('urks'))
155
156 adapter = map.bind('example.org', '/test', subdomain='blah')
157 assert adapter.build('index', {}) == 'http://example.org/test/'
158 assert adapter.build('foo', {}) == 'http://example.org/test/foo'
159 assert adapter.build('bar', {'baz': 'blub'}) == \
160 'http://example.org/test/bar/blub'
161 assert adapter.build('bari', {'bazi': 50}) == 'http://example.org/test/bar/50'
162 assert adapter.build('barf', {'bazf': 0.815}) == 'http://example.org/test/bar/0.815'
163 assert adapter.build('barp', {'bazp': 'la/di'}) == 'http://example.org/test/bar/la/di'
164 assert adapter.build('blah', {}) == '/test/hehe'
165
166 adapter = map.bind('example.org')
167 assert adapter.build('foo', {}) == '/foo'
168 assert adapter.build('foo', {}, force_external=True) == 'http://example.org/foo'
169 adapter = map.bind('example.org', url_scheme='')
170 assert adapter.build('foo', {}) == '/foo'
171 assert adapter.build('foo', {}, force_external=True) == '//example.org/foo'
136 map = r.Map(
137 [
138 r.Rule("/", endpoint="index"),
139 r.Rule("/foo", endpoint="foo"),
140 r.Rule("/bar/<baz>", endpoint="bar"),
141 r.Rule("/bar/<int:bazi>", endpoint="bari"),
142 r.Rule("/bar/<float:bazf>", endpoint="barf"),
143 r.Rule("/bar/<path:bazp>", endpoint="barp"),
144 r.Rule("/hehe", endpoint="blah", subdomain="blah"),
145 ]
146 )
147 adapter = map.bind("example.org", "/", subdomain="blah")
148
149 assert adapter.build("index", {}) == "http://example.org/"
150 assert adapter.build("foo", {}) == "http://example.org/foo"
151 assert adapter.build("bar", {"baz": "blub"}) == "http://example.org/bar/blub"
152 assert adapter.build("bari", {"bazi": 50}) == "http://example.org/bar/50"
153 assert adapter.build("barf", {"bazf": 0.815}) == "http://example.org/bar/0.815"
154 assert adapter.build("barp", {"bazp": "la/di"}) == "http://example.org/bar/la/di"
155 assert adapter.build("blah", {}) == "/hehe"
156 pytest.raises(r.BuildError, lambda: adapter.build("urks"))
157
158 adapter = map.bind("example.org", "/test", subdomain="blah")
159 assert adapter.build("index", {}) == "http://example.org/test/"
160 assert adapter.build("foo", {}) == "http://example.org/test/foo"
161 assert adapter.build("bar", {"baz": "blub"}) == "http://example.org/test/bar/blub"
162 assert adapter.build("bari", {"bazi": 50}) == "http://example.org/test/bar/50"
163 assert adapter.build("barf", {"bazf": 0.815}) == "http://example.org/test/bar/0.815"
164 assert (
165 adapter.build("barp", {"bazp": "la/di"}) == "http://example.org/test/bar/la/di"
166 )
167 assert adapter.build("blah", {}) == "/test/hehe"
168
169 adapter = map.bind("example.org")
170 assert adapter.build("foo", {}) == "/foo"
171 assert adapter.build("foo", {}, force_external=True) == "http://example.org/foo"
172 adapter = map.bind("example.org", url_scheme="")
173 assert adapter.build("foo", {}) == "/foo"
174 assert adapter.build("foo", {}, force_external=True) == "//example.org/foo"
175
176
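The building and matching tests above exercise werkzeug's converter syntax (`/bar/<int:bazi>`, `/bar/<path:bazp>`), which the library compiles into regular expressions internally. A rough stdlib-only sketch of that idea, not werkzeug's actual implementation (the `CONVERTERS` table and `compile_rule` helper are hypothetical):

```python
import re

# Hypothetical mini-converter table: regex fragment plus a Python coercion.
CONVERTERS = {"int": (r"\d+", int), "str": (r"[^/]+", str)}


def compile_rule(rule):
    """Compile a "/bar/<int:x>"-style rule into a match function."""
    placeholder = re.compile(r"<(?:(\w+):)?(\w+)>")

    def repl(m):
        conv, name = m.group(1) or "str", m.group(2)
        return "(?P<%s>%s)" % (name, CONVERTERS[conv][0])

    pattern = re.compile("^%s$" % placeholder.sub(repl, rule))
    types = {
        m.group(2): CONVERTERS[m.group(1) or "str"][1]
        for m in placeholder.finditer(rule)
    }

    def match(path):
        m = pattern.match(path)
        if m is None:
            return None
        return {k: types[k](v) for k, v in m.groupdict().items()}

    return match


match = compile_rule("/bar/<int:bazi>")
print(match("/bar/50"))   # {'bazi': 50}
print(match("/bar/abc"))  # None
```

The coercion step is why the tests can assert `adapter.match("/42") == ("an_int", {"blub": 42})` with an integer value rather than the raw string.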
177 def test_long_build():
178 long_args = dict(("v%d" % x, x) for x in range(10000))
179 map = r.Map(
180 [
181 r.Rule(
182 "".join("/<%s>" % k for k in long_args.keys()),
183 endpoint="bleep",
184 build_only=True,
185 )
186 ]
187 )
188 adapter = map.bind("localhost", "/")
189 url = adapter.build("bleep", long_args)
190 url += "/"
191 for v in long_args.values():
192 assert "/%d" % v in url
172193
173194
174195 def test_defaults():
175 map = r.Map([
176 r.Rule('/foo/', defaults={'page': 1}, endpoint='foo'),
177 r.Rule('/foo/<int:page>', endpoint='foo')
178 ])
179 adapter = map.bind('example.org', '/')
180
181 assert adapter.match('/foo/') == ('foo', {'page': 1})
182 pytest.raises(r.RequestRedirect, lambda: adapter.match('/foo/1'))
183 assert adapter.match('/foo/2') == ('foo', {'page': 2})
184 assert adapter.build('foo', {}) == '/foo/'
185 assert adapter.build('foo', {'page': 1}) == '/foo/'
186 assert adapter.build('foo', {'page': 2}) == '/foo/2'
196 map = r.Map(
197 [
198 r.Rule("/foo/", defaults={"page": 1}, endpoint="foo"),
199 r.Rule("/foo/<int:page>", endpoint="foo"),
200 ]
201 )
202 adapter = map.bind("example.org", "/")
203
204 assert adapter.match("/foo/") == ("foo", {"page": 1})
205 pytest.raises(r.RequestRedirect, lambda: adapter.match("/foo/1"))
206 assert adapter.match("/foo/2") == ("foo", {"page": 2})
207 assert adapter.build("foo", {}) == "/foo/"
208 assert adapter.build("foo", {"page": 1}) == "/foo/"
209 assert adapter.build("foo", {"page": 2}) == "/foo/2"
210
211
212 def test_negative():
213 map = r.Map(
214 [
215 r.Rule("/foos/<int(signed=True):page>", endpoint="foos"),
216 r.Rule("/bars/<float(signed=True):page>", endpoint="bars"),
217 r.Rule("/foo/<int:page>", endpoint="foo"),
218 r.Rule("/bar/<float:page>", endpoint="bar"),
219 ]
220 )
221 adapter = map.bind("example.org", "/")
222
223 assert adapter.match("/foos/-2") == ("foos", {"page": -2})
224 assert adapter.match("/foos/-50") == ("foos", {"page": -50})
225 assert adapter.match("/bars/-2.0") == ("bars", {"page": -2.0})
226 assert adapter.match("/bars/-0.185") == ("bars", {"page": -0.185})
227
228 # Make sure signed values are rejected in unsigned mode
229 pytest.raises(r.NotFound, lambda: adapter.match("/foo/-2"))
230 pytest.raises(r.NotFound, lambda: adapter.match("/foo/-50"))
231 pytest.raises(r.NotFound, lambda: adapter.match("/bar/-0.185"))
232 pytest.raises(r.NotFound, lambda: adapter.match("/bar/-2.0"))
187233
188234
189235 def test_greedy():
190 map = r.Map([
191 r.Rule('/foo', endpoint='foo'),
192 r.Rule('/<path:bar>', endpoint='bar'),
193 r.Rule('/<path:bar>/<path:blub>', endpoint='bar')
194 ])
195 adapter = map.bind('example.org', '/')
196
197 assert adapter.match('/foo') == ('foo', {})
198 assert adapter.match('/blub') == ('bar', {'bar': 'blub'})
199 assert adapter.match('/he/he') == ('bar', {'bar': 'he', 'blub': 'he'})
200
201 assert adapter.build('foo', {}) == '/foo'
202 assert adapter.build('bar', {'bar': 'blub'}) == '/blub'
203 assert adapter.build('bar', {'bar': 'blub', 'blub': 'bar'}) == '/blub/bar'
236 map = r.Map(
237 [
238 r.Rule("/foo", endpoint="foo"),
239 r.Rule("/<path:bar>", endpoint="bar"),
240 r.Rule("/<path:bar>/<path:blub>", endpoint="bar"),
241 ]
242 )
243 adapter = map.bind("example.org", "/")
244
245 assert adapter.match("/foo") == ("foo", {})
246 assert adapter.match("/blub") == ("bar", {"bar": "blub"})
247 assert adapter.match("/he/he") == ("bar", {"bar": "he", "blub": "he"})
248
249 assert adapter.build("foo", {}) == "/foo"
250 assert adapter.build("bar", {"bar": "blub"}) == "/blub"
251 assert adapter.build("bar", {"bar": "blub", "blub": "bar"}) == "/blub/bar"
204252
205253
206254 def test_path():
207 map = r.Map([
208 r.Rule('/', defaults={'name': 'FrontPage'}, endpoint='page'),
209 r.Rule('/Special', endpoint='special'),
210 r.Rule('/<int:year>', endpoint='year'),
211 r.Rule('/<path:name>:foo', endpoint='foopage'),
212 r.Rule('/<path:name>:<path:name2>', endpoint='twopage'),
213 r.Rule('/<path:name>', endpoint='page'),
214 r.Rule('/<path:name>/edit', endpoint='editpage'),
215 r.Rule('/<path:name>/silly/<path:name2>', endpoint='sillypage'),
216 r.Rule('/<path:name>/silly/<path:name2>/edit', endpoint='editsillypage'),
217 r.Rule('/Talk:<path:name>', endpoint='talk'),
218 r.Rule('/User:<username>', endpoint='user'),
219 r.Rule('/User:<username>/<path:name>', endpoint='userpage'),
220 r.Rule('/User:<username>/comment/<int:id>-<int:replyId>', endpoint='usercomment'),
221 r.Rule('/Files/<path:file>', endpoint='files'),
222 r.Rule('/<admin>/<manage>/<things>', endpoint='admin'),
223 ])
224 adapter = map.bind('example.org', '/')
225
226 assert adapter.match('/') == ('page', {'name': 'FrontPage'})
227 pytest.raises(r.RequestRedirect, lambda: adapter.match('/FrontPage'))
228 assert adapter.match('/Special') == ('special', {})
229 assert adapter.match('/2007') == ('year', {'year': 2007})
230 assert adapter.match('/Some:foo') == ('foopage', {'name': 'Some'})
231 assert adapter.match('/Some:bar') == ('twopage', {'name': 'Some', 'name2': 'bar'})
232 assert adapter.match('/Some/Page') == ('page', {'name': 'Some/Page'})
233 assert adapter.match('/Some/Page/edit') == ('editpage', {'name': 'Some/Page'})
234 assert adapter.match('/Foo/silly/bar') == ('sillypage', {'name': 'Foo', 'name2': 'bar'})
235 assert adapter.match(
236 '/Foo/silly/bar/edit') == ('editsillypage', {'name': 'Foo', 'name2': 'bar'})
237 assert adapter.match('/Talk:Foo/Bar') == ('talk', {'name': 'Foo/Bar'})
238 assert adapter.match('/User:thomas') == ('user', {'username': 'thomas'})
239 assert adapter.match('/User:thomas/projects/werkzeug') == \
240 ('userpage', {'username': 'thomas', 'name': 'projects/werkzeug'})
241 assert adapter.match('/User:thomas/comment/123-456') == \
242 ('usercomment', {'username': 'thomas', 'id': 123, 'replyId': 456})
243 assert adapter.match('/Files/downloads/werkzeug/0.2.zip') == \
244 ('files', {'file': 'downloads/werkzeug/0.2.zip'})
245 assert adapter.match('/Jerry/eats/cheese') == \
246 ('admin', {'admin': 'Jerry', 'manage': 'eats', 'things': 'cheese'})
255 map = r.Map(
256 [
257 r.Rule("/", defaults={"name": "FrontPage"}, endpoint="page"),
258 r.Rule("/Special", endpoint="special"),
259 r.Rule("/<int:year>", endpoint="year"),
260 r.Rule("/<path:name>:foo", endpoint="foopage"),
261 r.Rule("/<path:name>:<path:name2>", endpoint="twopage"),
262 r.Rule("/<path:name>", endpoint="page"),
263 r.Rule("/<path:name>/edit", endpoint="editpage"),
264 r.Rule("/<path:name>/silly/<path:name2>", endpoint="sillypage"),
265 r.Rule("/<path:name>/silly/<path:name2>/edit", endpoint="editsillypage"),
266 r.Rule("/Talk:<path:name>", endpoint="talk"),
267 r.Rule("/User:<username>", endpoint="user"),
268 r.Rule("/User:<username>/<path:name>", endpoint="userpage"),
269 r.Rule(
270 "/User:<username>/comment/<int:id>-<int:replyId>",
271 endpoint="usercomment",
272 ),
273 r.Rule("/Files/<path:file>", endpoint="files"),
274 r.Rule("/<admin>/<manage>/<things>", endpoint="admin"),
275 ]
276 )
277 adapter = map.bind("example.org", "/")
278
279 assert adapter.match("/") == ("page", {"name": "FrontPage"})
280 pytest.raises(r.RequestRedirect, lambda: adapter.match("/FrontPage"))
281 assert adapter.match("/Special") == ("special", {})
282 assert adapter.match("/2007") == ("year", {"year": 2007})
283 assert adapter.match("/Some:foo") == ("foopage", {"name": "Some"})
284 assert adapter.match("/Some:bar") == ("twopage", {"name": "Some", "name2": "bar"})
285 assert adapter.match("/Some/Page") == ("page", {"name": "Some/Page"})
286 assert adapter.match("/Some/Page/edit") == ("editpage", {"name": "Some/Page"})
287 assert adapter.match("/Foo/silly/bar") == (
288 "sillypage",
289 {"name": "Foo", "name2": "bar"},
290 )
291 assert adapter.match("/Foo/silly/bar/edit") == (
292 "editsillypage",
293 {"name": "Foo", "name2": "bar"},
294 )
295 assert adapter.match("/Talk:Foo/Bar") == ("talk", {"name": "Foo/Bar"})
296 assert adapter.match("/User:thomas") == ("user", {"username": "thomas"})
297 assert adapter.match("/User:thomas/projects/werkzeug") == (
298 "userpage",
299 {"username": "thomas", "name": "projects/werkzeug"},
300 )
301 assert adapter.match("/User:thomas/comment/123-456") == (
302 "usercomment",
303 {"username": "thomas", "id": 123, "replyId": 456},
304 )
305 assert adapter.match("/Files/downloads/werkzeug/0.2.zip") == (
306 "files",
307 {"file": "downloads/werkzeug/0.2.zip"},
308 )
309 assert adapter.match("/Jerry/eats/cheese") == (
310 "admin",
311 {"admin": "Jerry", "manage": "eats", "things": "cheese"},
312 )
247313
248314
249315 def test_dispatch():
250 env = create_environ('/')
251 map = r.Map([
252 r.Rule('/', endpoint='root'),
253 r.Rule('/foo/', endpoint='foo')
254 ])
316 env = create_environ("/")
317 map = r.Map([r.Rule("/", endpoint="root"), r.Rule("/foo/", endpoint="foo")])
255318 adapter = map.bind_to_environ(env)
256319
257320 raise_this = None
260323 if raise_this is not None:
261324 raise raise_this
262325 return Response(repr((endpoint, values)))
263 dispatch = lambda p, q=False: Response.force_type(
264 adapter.dispatch(view_func, p, catch_http_exceptions=q),
265 env
266 )
267
268 assert dispatch('/').data == b"('root', {})"
269 assert dispatch('/foo').status_code == 301
326
327 def dispatch(path, quiet=False):
328 return Response.force_type(
329 adapter.dispatch(view_func, path, catch_http_exceptions=quiet), env
330 )
331
332 assert dispatch("/").data == b"('root', {})"
333 assert dispatch("/foo").status_code == 308
270334 raise_this = r.NotFound()
271 pytest.raises(r.NotFound, lambda: dispatch('/bar'))
272 assert dispatch('/bar', True).status_code == 404
335 pytest.raises(r.NotFound, lambda: dispatch("/bar"))
336 assert dispatch("/bar", True).status_code == 404
273337
274338
275339 def test_http_host_before_server_name():
276340 env = {
277 'HTTP_HOST': 'wiki.example.com',
278 'SERVER_NAME': 'web0.example.com',
279 'SERVER_PORT': '80',
280 'SCRIPT_NAME': '',
281 'PATH_INFO': '',
282 'REQUEST_METHOD': 'GET',
283 'wsgi.url_scheme': 'http'
341 "HTTP_HOST": "wiki.example.com",
342 "SERVER_NAME": "web0.example.com",
343 "SERVER_PORT": "80",
344 "SCRIPT_NAME": "",
345 "PATH_INFO": "",
346 "REQUEST_METHOD": "GET",
347 "wsgi.url_scheme": "http",
284348 }
285 map = r.Map([r.Rule('/', endpoint='index', subdomain='wiki')])
286 adapter = map.bind_to_environ(env, server_name='example.com')
287 assert adapter.match('/') == ('index', {})
288 assert adapter.build('index', force_external=True) == 'http://wiki.example.com/'
289 assert adapter.build('index') == '/'
290
291 env['HTTP_HOST'] = 'admin.example.com'
292 adapter = map.bind_to_environ(env, server_name='example.com')
293 assert adapter.build('index') == 'http://wiki.example.com/'
349 map = r.Map([r.Rule("/", endpoint="index", subdomain="wiki")])
350 adapter = map.bind_to_environ(env, server_name="example.com")
351 assert adapter.match("/") == ("index", {})
352 assert adapter.build("index", force_external=True) == "http://wiki.example.com/"
353 assert adapter.build("index") == "/"
354
355 env["HTTP_HOST"] = "admin.example.com"
356 adapter = map.bind_to_environ(env, server_name="example.com")
357 assert adapter.build("index") == "http://wiki.example.com/"
294358
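The test above relies on werkzeug preferring `HTTP_HOST` over `SERVER_NAME` and peeling the configured `server_name` off the host to recover the subdomain, yielding the `"<invalid>"` sentinel when the host does not belong to it. A hypothetical stdlib-only sketch of that extraction (the `extract_subdomain` helper is not werkzeug's API):

```python
def extract_subdomain(host, server_name):
    """Return the subdomain of ``host`` relative to ``server_name``."""
    # Strip an optional port, then peel server_name off the right.
    host = host.rsplit(":", 1)[0]
    if host == server_name:
        return ""
    if host.endswith("." + server_name):
        return host[: -len(server_name) - 1]
    # Host is unrelated to the configured server name.
    return "<invalid>"
```

For example, `extract_subdomain("wiki.example.com", "example.com")` gives `"wiki"`, while binding `example.invalid` against `server_name="foo"` gives `"<invalid>"`, matching the `adapter.subdomain` assertion in `test_server_name_interpolation`.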
295359
296360 def test_adapter_url_parameter_sorting():
297 map = r.Map([r.Rule('/', endpoint='index')], sort_parameters=True,
298 sort_key=lambda x: x[1])
299 adapter = map.bind('localhost', '/')
300 assert adapter.build('index', {'x': 20, 'y': 10, 'z': 30},
301 force_external=True) == 'http://localhost/?y=10&x=20&z=30'
361 map = r.Map(
362 [r.Rule("/", endpoint="index")], sort_parameters=True, sort_key=lambda x: x[1]
363 )
364 adapter = map.bind("localhost", "/")
365 assert (
366 adapter.build("index", {"x": 20, "y": 10, "z": 30}, force_external=True)
367 == "http://localhost/?y=10&x=20&z=30"
368 )
302369
303370
304371 def test_request_direct_charset_bug():
305 map = r.Map([r.Rule(u'/öäü/')])
306 adapter = map.bind('localhost', '/')
307
308 with pytest.raises(r.RequestRedirect) as excinfo:
309 adapter.match(u'/öäü')
310 assert excinfo.value.new_url == 'http://localhost/%C3%B6%C3%A4%C3%BC/'
372 map = r.Map([r.Rule(u"/öäü/")])
373 adapter = map.bind("localhost", "/")
374
375 with pytest.raises(r.RequestRedirect) as excinfo:
376 adapter.match(u"/öäü")
377 assert excinfo.value.new_url == "http://localhost/%C3%B6%C3%A4%C3%BC/"
311378
312379
313380 def test_request_redirect_default():
314 map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}),
315 r.Rule(u'/foo/<int:bar>')])
316 adapter = map.bind('localhost', '/')
317
318 with pytest.raises(r.RequestRedirect) as excinfo:
319 adapter.match(u'/foo/42')
320 assert excinfo.value.new_url == 'http://localhost/foo'
381 map = r.Map([r.Rule(u"/foo", defaults={"bar": 42}), r.Rule(u"/foo/<int:bar>")])
382 adapter = map.bind("localhost", "/")
383
384 with pytest.raises(r.RequestRedirect) as excinfo:
385 adapter.match(u"/foo/42")
386 assert excinfo.value.new_url == "http://localhost/foo"
321387
322388
323389 def test_request_redirect_default_subdomain():
324 map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}, subdomain='test'),
325 r.Rule(u'/foo/<int:bar>', subdomain='other')])
326 adapter = map.bind('localhost', '/', subdomain='other')
327
328 with pytest.raises(r.RequestRedirect) as excinfo:
329 adapter.match(u'/foo/42')
330 assert excinfo.value.new_url == 'http://test.localhost/foo'
390 map = r.Map(
391 [
392 r.Rule(u"/foo", defaults={"bar": 42}, subdomain="test"),
393 r.Rule(u"/foo/<int:bar>", subdomain="other"),
394 ]
395 )
396 adapter = map.bind("localhost", "/", subdomain="other")
397
398 with pytest.raises(r.RequestRedirect) as excinfo:
399 adapter.match(u"/foo/42")
400 assert excinfo.value.new_url == "http://test.localhost/foo"
331401
332402
333403 def test_adapter_match_return_rule():
334 rule = r.Rule('/foo/', endpoint='foo')
404 rule = r.Rule("/foo/", endpoint="foo")
335405 map = r.Map([rule])
336 adapter = map.bind('localhost', '/')
337 assert adapter.match('/foo/', return_rule=True) == (rule, {})
406 adapter = map.bind("localhost", "/")
407 assert adapter.match("/foo/", return_rule=True) == (rule, {})
338408
339409
340410 def test_server_name_interpolation():
341 server_name = 'example.invalid'
342 map = r.Map([r.Rule('/', endpoint='index'),
343 r.Rule('/', endpoint='alt', subdomain='alt')])
344
345 env = create_environ('/', 'http://%s/' % server_name)
411 server_name = "example.invalid"
412 map = r.Map(
413 [r.Rule("/", endpoint="index"), r.Rule("/", endpoint="alt", subdomain="alt")]
414 )
415
416 env = create_environ("/", "http://%s/" % server_name)
346417 adapter = map.bind_to_environ(env, server_name=server_name)
347 assert adapter.match() == ('index', {})
348
349 env = create_environ('/', 'http://alt.%s/' % server_name)
418 assert adapter.match() == ("index", {})
419
420 env = create_environ("/", "http://alt.%s/" % server_name)
350421 adapter = map.bind_to_environ(env, server_name=server_name)
351 assert adapter.match() == ('alt', {})
352
353 env = create_environ('/', 'http://%s/' % server_name)
354 adapter = map.bind_to_environ(env, server_name='foo')
355 assert adapter.subdomain == '<invalid>'
422 assert adapter.match() == ("alt", {})
423
424 env = create_environ("/", "http://%s/" % server_name)
425 adapter = map.bind_to_environ(env, server_name="foo")
426 assert adapter.subdomain == "<invalid>"
356427
357428
358429 def test_rule_emptying():
359 rule = r.Rule('/foo', {'meh': 'muh'}, 'x', ['POST'],
360 False, 'x', True, None)
430 rule = r.Rule("/foo", {"meh": "muh"}, "x", ["POST"], False, "x", True, None)
361431 rule2 = rule.empty()
362432 assert rule.__dict__ == rule2.__dict__
363 rule.methods.add('GET')
433 rule.methods.add("GET")
364434 assert rule.__dict__ != rule2.__dict__
365 rule.methods.discard('GET')
366 rule.defaults['meh'] = 'aha'
435 rule.methods.discard("GET")
436 rule.defaults["meh"] = "aha"
367437 assert rule.__dict__ != rule2.__dict__
368438
369439
370440 def test_rule_unhashable():
371 rule = r.Rule('/foo', {'meh': 'muh'}, 'x', ['POST'],
372 False, 'x', True, None)
441 rule = r.Rule("/foo", {"meh": "muh"}, "x", ["POST"], False, "x", True, None)
373442 pytest.raises(TypeError, hash, rule)
374443
375444
376445 def test_rule_templates():
377 testcase = r.RuleTemplate([
378 r.Submount(
379 '/test/$app',
380 [r.Rule('/foo/', endpoint='handle_foo'),
381 r.Rule('/bar/', endpoint='handle_bar'),
382 r.Rule('/baz/', endpoint='handle_baz')]),
383 r.EndpointPrefix(
384 '${app}',
385 [r.Rule('/${app}-blah', endpoint='bar'),
386 r.Rule('/${app}-meh', endpoint='baz')]),
387 r.Subdomain(
388 '$app',
389 [r.Rule('/blah', endpoint='x_bar'),
390 r.Rule('/meh', endpoint='x_baz')])
391 ])
446 testcase = r.RuleTemplate(
447 [
448 r.Submount(
449 "/test/$app",
450 [
451 r.Rule("/foo/", endpoint="handle_foo"),
452 r.Rule("/bar/", endpoint="handle_bar"),
453 r.Rule("/baz/", endpoint="handle_baz"),
454 ],
455 ),
456 r.EndpointPrefix(
457 "${app}",
458 [
459 r.Rule("/${app}-blah", endpoint="bar"),
460 r.Rule("/${app}-meh", endpoint="baz"),
461 ],
462 ),
463 r.Subdomain(
464 "$app",
465 [r.Rule("/blah", endpoint="x_bar"), r.Rule("/meh", endpoint="x_baz")],
466 ),
467 ]
468 )
392469
393470 url_map = r.Map(
394 [testcase(app='test1'), testcase(app='test2'), testcase(app='test3'), testcase(app='test4')
395 ])
396
397 out = sorted([(x.rule, x.subdomain, x.endpoint)
398 for x in url_map.iter_rules()])
399
400 assert out == ([
401 ('/blah', 'test1', 'x_bar'),
402 ('/blah', 'test2', 'x_bar'),
403 ('/blah', 'test3', 'x_bar'),
404 ('/blah', 'test4', 'x_bar'),
405 ('/meh', 'test1', 'x_baz'),
406 ('/meh', 'test2', 'x_baz'),
407 ('/meh', 'test3', 'x_baz'),
408 ('/meh', 'test4', 'x_baz'),
409 ('/test/test1/bar/', '', 'handle_bar'),
410 ('/test/test1/baz/', '', 'handle_baz'),
411 ('/test/test1/foo/', '', 'handle_foo'),
412 ('/test/test2/bar/', '', 'handle_bar'),
413 ('/test/test2/baz/', '', 'handle_baz'),
414 ('/test/test2/foo/', '', 'handle_foo'),
415 ('/test/test3/bar/', '', 'handle_bar'),
416 ('/test/test3/baz/', '', 'handle_baz'),
417 ('/test/test3/foo/', '', 'handle_foo'),
418 ('/test/test4/bar/', '', 'handle_bar'),
419 ('/test/test4/baz/', '', 'handle_baz'),
420 ('/test/test4/foo/', '', 'handle_foo'),
421 ('/test1-blah', '', 'test1bar'),
422 ('/test1-meh', '', 'test1baz'),
423 ('/test2-blah', '', 'test2bar'),
424 ('/test2-meh', '', 'test2baz'),
425 ('/test3-blah', '', 'test3bar'),
426 ('/test3-meh', '', 'test3baz'),
427 ('/test4-blah', '', 'test4bar'),
428 ('/test4-meh', '', 'test4baz')
429 ])
471 [
472 testcase(app="test1"),
473 testcase(app="test2"),
474 testcase(app="test3"),
475 testcase(app="test4"),
476 ]
477 )
478
479 out = sorted([(x.rule, x.subdomain, x.endpoint) for x in url_map.iter_rules()])
480
481 assert out == [
482 ("/blah", "test1", "x_bar"),
483 ("/blah", "test2", "x_bar"),
484 ("/blah", "test3", "x_bar"),
485 ("/blah", "test4", "x_bar"),
486 ("/meh", "test1", "x_baz"),
487 ("/meh", "test2", "x_baz"),
488 ("/meh", "test3", "x_baz"),
489 ("/meh", "test4", "x_baz"),
490 ("/test/test1/bar/", "", "handle_bar"),
491 ("/test/test1/baz/", "", "handle_baz"),
492 ("/test/test1/foo/", "", "handle_foo"),
493 ("/test/test2/bar/", "", "handle_bar"),
494 ("/test/test2/baz/", "", "handle_baz"),
495 ("/test/test2/foo/", "", "handle_foo"),
496 ("/test/test3/bar/", "", "handle_bar"),
497 ("/test/test3/baz/", "", "handle_baz"),
498 ("/test/test3/foo/", "", "handle_foo"),
499 ("/test/test4/bar/", "", "handle_bar"),
500 ("/test/test4/baz/", "", "handle_baz"),
501 ("/test/test4/foo/", "", "handle_foo"),
502 ("/test1-blah", "", "test1bar"),
503 ("/test1-meh", "", "test1baz"),
504 ("/test2-blah", "", "test2bar"),
505 ("/test2-meh", "", "test2baz"),
506 ("/test3-blah", "", "test3bar"),
507 ("/test3-meh", "", "test3baz"),
508 ("/test4-blah", "", "test4bar"),
509 ("/test4-meh", "", "test4baz"),
510 ]
430511
431512
432513 def test_non_string_parts():
433 m = r.Map([
434 r.Rule('/<foo>', endpoint='foo')
435 ])
436 a = m.bind('example.com')
437 assert a.build('foo', {'foo': 42}) == '/42'
514 m = r.Map([r.Rule("/<foo>", endpoint="foo")])
515 a = m.bind("example.com")
516 assert a.build("foo", {"foo": 42}) == "/42"
438517
439518
440519 def test_complex_routing_rules():
441 m = r.Map([
442 r.Rule('/', endpoint='index'),
443 r.Rule('/<int:blub>', endpoint='an_int'),
444 r.Rule('/<blub>', endpoint='a_string'),
445 r.Rule('/foo/', endpoint='nested'),
446 r.Rule('/foobar/', endpoint='nestedbar'),
447 r.Rule('/foo/<path:testing>/', endpoint='nested_show'),
448 r.Rule('/foo/<path:testing>/edit', endpoint='nested_edit'),
449 r.Rule('/users/', endpoint='users', defaults={'page': 1}),
450 r.Rule('/users/page/<int:page>', endpoint='users'),
451 r.Rule('/foox', endpoint='foox'),
452 r.Rule('/<path:bar>/<path:blub>', endpoint='barx_path_path')
453 ])
454 a = m.bind('example.com')
455
456 assert a.match('/') == ('index', {})
457 assert a.match('/42') == ('an_int', {'blub': 42})
458 assert a.match('/blub') == ('a_string', {'blub': 'blub'})
459 assert a.match('/foo/') == ('nested', {})
460 assert a.match('/foobar/') == ('nestedbar', {})
461 assert a.match('/foo/1/2/3/') == ('nested_show', {'testing': '1/2/3'})
462 assert a.match('/foo/1/2/3/edit') == ('nested_edit', {'testing': '1/2/3'})
463 assert a.match('/users/') == ('users', {'page': 1})
464 assert a.match('/users/page/2') == ('users', {'page': 2})
465 assert a.match('/foox') == ('foox', {})
466 assert a.match('/1/2/3') == ('barx_path_path', {'bar': '1', 'blub': '2/3'})
467
468 assert a.build('index') == '/'
469 assert a.build('an_int', {'blub': 42}) == '/42'
470 assert a.build('a_string', {'blub': 'test'}) == '/test'
471 assert a.build('nested') == '/foo/'
472 assert a.build('nestedbar') == '/foobar/'
473 assert a.build('nested_show', {'testing': '1/2/3'}) == '/foo/1/2/3/'
474 assert a.build('nested_edit', {'testing': '1/2/3'}) == '/foo/1/2/3/edit'
475 assert a.build('users', {'page': 1}) == '/users/'
476 assert a.build('users', {'page': 2}) == '/users/page/2'
477 assert a.build('foox') == '/foox'
478 assert a.build('barx_path_path', {'bar': '1', 'blub': '2/3'}) == '/1/2/3'
520 m = r.Map(
521 [
522 r.Rule("/", endpoint="index"),
523 r.Rule("/<int:blub>", endpoint="an_int"),
524 r.Rule("/<blub>", endpoint="a_string"),
525 r.Rule("/foo/", endpoint="nested"),
526 r.Rule("/foobar/", endpoint="nestedbar"),
527 r.Rule("/foo/<path:testing>/", endpoint="nested_show"),
528 r.Rule("/foo/<path:testing>/edit", endpoint="nested_edit"),
529 r.Rule("/users/", endpoint="users", defaults={"page": 1}),
530 r.Rule("/users/page/<int:page>", endpoint="users"),
531 r.Rule("/foox", endpoint="foox"),
532 r.Rule("/<path:bar>/<path:blub>", endpoint="barx_path_path"),
533 ]
534 )
535 a = m.bind("example.com")
536
537 assert a.match("/") == ("index", {})
538 assert a.match("/42") == ("an_int", {"blub": 42})
539 assert a.match("/blub") == ("a_string", {"blub": "blub"})
540 assert a.match("/foo/") == ("nested", {})
541 assert a.match("/foobar/") == ("nestedbar", {})
542 assert a.match("/foo/1/2/3/") == ("nested_show", {"testing": "1/2/3"})
543 assert a.match("/foo/1/2/3/edit") == ("nested_edit", {"testing": "1/2/3"})
544 assert a.match("/users/") == ("users", {"page": 1})
545 assert a.match("/users/page/2") == ("users", {"page": 2})
546 assert a.match("/foox") == ("foox", {})
547 assert a.match("/1/2/3") == ("barx_path_path", {"bar": "1", "blub": "2/3"})
548
549 assert a.build("index") == "/"
550 assert a.build("an_int", {"blub": 42}) == "/42"
551 assert a.build("a_string", {"blub": "test"}) == "/test"
552 assert a.build("nested") == "/foo/"
553 assert a.build("nestedbar") == "/foobar/"
554 assert a.build("nested_show", {"testing": "1/2/3"}) == "/foo/1/2/3/"
555 assert a.build("nested_edit", {"testing": "1/2/3"}) == "/foo/1/2/3/edit"
556 assert a.build("users", {"page": 1}) == "/users/"
557 assert a.build("users", {"page": 2}) == "/users/page/2"
558 assert a.build("foox") == "/foox"
559 assert a.build("barx_path_path", {"bar": "1", "blub": "2/3"}) == "/1/2/3"
479560
480561
481562 def test_default_converters():
482563 class MyMap(r.Map):
483564 default_converters = r.Map.default_converters.copy()
484 default_converters['foo'] = r.UnicodeConverter
565 default_converters["foo"] = r.UnicodeConverter
566
485567 assert isinstance(r.Map.default_converters, ImmutableDict)
486 m = MyMap([
487 r.Rule('/a/<foo:a>', endpoint='a'),
488 r.Rule('/b/<foo:b>', endpoint='b'),
489 r.Rule('/c/<c>', endpoint='c')
490 ], converters={'bar': r.UnicodeConverter})
491 a = m.bind('example.org', '/')
492 assert a.match('/a/1') == ('a', {'a': '1'})
493 assert a.match('/b/2') == ('b', {'b': '2'})
494 assert a.match('/c/3') == ('c', {'c': '3'})
495 assert 'foo' not in r.Map.default_converters
568 m = MyMap(
569 [
570 r.Rule("/a/<foo:a>", endpoint="a"),
571 r.Rule("/b/<foo:b>", endpoint="b"),
572 r.Rule("/c/<c>", endpoint="c"),
573 ],
574 converters={"bar": r.UnicodeConverter},
575 )
576 a = m.bind("example.org", "/")
577 assert a.match("/a/1") == ("a", {"a": "1"})
578 assert a.match("/b/2") == ("b", {"b": "2"})
579 assert a.match("/c/3") == ("c", {"c": "3"})
580 assert "foo" not in r.Map.default_converters
496581
497582
498583 def test_uuid_converter():
499 m = r.Map([
500 r.Rule('/a/<uuid:a_uuid>', endpoint='a')
501 ])
502 a = m.bind('example.org', '/')
503 route, kwargs = a.match('/a/a8098c1a-f86e-11da-bd1a-00112444be1e')
504 assert type(kwargs['a_uuid']) == uuid.UUID
584 m = r.Map([r.Rule("/a/<uuid:a_uuid>", endpoint="a")])
585 a = m.bind("example.org", "/")
586 route, kwargs = a.match("/a/a8098c1a-f86e-11da-bd1a-00112444be1e")
587 assert type(kwargs["a_uuid"]) == uuid.UUID
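The `uuid` converter exercised above captures one path segment and coerces it to a `uuid.UUID`. Stripped of the routing machinery, the same idea can be sketched with the stdlib alone; the regex and the `match_uuid` helper below are illustrative approximations, not werkzeug's actual pattern:

```python
import re
import uuid

# Simplified stand-in for a uuid URL converter: match the segment with
# a regex, then coerce the captured text with uuid.UUID.
UUID_RE = re.compile(
    r"^/a/([0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12})$"
)

def match_uuid(path):
    m = UUID_RE.match(path)
    if m is None:
        return None
    return uuid.UUID(m.group(1))

value = match_uuid("/a/a8098c1a-f86e-11da-bd1a-00112444be1e")
```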
505588
506589
507590 def test_converter_with_tuples():
508 '''
591 """
509592 Regression test for https://github.com/pallets/werkzeug/issues/709
510 '''
593 """
594
511595 class TwoValueConverter(r.BaseConverter):
512
513596 def __init__(self, *args, **kwargs):
514597 super(TwoValueConverter, self).__init__(*args, **kwargs)
515 self.regex = r'(\w\w+)/(\w\w+)'
598 self.regex = r"(\w\w+)/(\w\w+)"
516599
517600 def to_python(self, two_values):
518 one, two = two_values.split('/')
601 one, two = two_values.split("/")
519602 return one, two
520603
521604 def to_url(self, values):
522605 return "%s/%s" % (values[0], values[1])
523606
524 map = r.Map([
525 r.Rule('/<two:foo>/', endpoint='handler')
526 ], converters={'two': TwoValueConverter})
527 a = map.bind('example.org', '/')
528 route, kwargs = a.match('/qwert/yuiop/')
529 assert kwargs['foo'] == ('qwert', 'yuiop')
607 map = r.Map(
608 [r.Rule("/<two:foo>/", endpoint="handler")],
609 converters={"two": TwoValueConverter},
610 )
611 a = map.bind("example.org", "/")
612 route, kwargs = a.match("/qwert/yuiop/")
613 assert kwargs["foo"] == ("qwert", "yuiop")
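The regression test above relies on a single rule regex capturing two path components that `to_python` then splits into a tuple. The core mechanism, outside werkzeug, is just this (a standalone sketch):

```python
import re

# Core of a two-value converter: the rule regex consumes "word/word"
# as one parameter, and to_python turns the matched text into a tuple.
two_value = re.compile(r"(\w\w+)/(\w\w+)")

def to_python(segment):
    one, two = segment.split("/")
    return one, two

m = two_value.fullmatch("qwert/yuiop")
result = to_python(m.group(0))
```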
530614
531615
532616 def test_anyconverter():
533 m = r.Map([
534 r.Rule('/<any(a1, a2):a>', endpoint='no_dot'),
535 r.Rule('/<any(a.1, a.2):a>', endpoint='yes_dot')
536 ])
537 a = m.bind('example.org', '/')
538 assert a.match('/a1') == ('no_dot', {'a': 'a1'})
539 assert a.match('/a2') == ('no_dot', {'a': 'a2'})
540 assert a.match('/a.1') == ('yes_dot', {'a': 'a.1'})
541 assert a.match('/a.2') == ('yes_dot', {'a': 'a.2'})
617 m = r.Map(
618 [
619 r.Rule("/<any(a1, a2):a>", endpoint="no_dot"),
620 r.Rule("/<any(a.1, a.2):a>", endpoint="yes_dot"),
621 ]
622 )
623 a = m.bind("example.org", "/")
624 assert a.match("/a1") == ("no_dot", {"a": "a1"})
625 assert a.match("/a2") == ("no_dot", {"a": "a2"})
626 assert a.match("/a.1") == ("yes_dot", {"a": "a.1"})
627 assert a.match("/a.2") == ("yes_dot", {"a": "a.2"})
542628
543629
544630 def test_build_append_unknown():
545 map = r.Map([
546 r.Rule('/bar/<float:bazf>', endpoint='barf')
547 ])
548 adapter = map.bind('example.org', '/', subdomain='blah')
549 assert adapter.build('barf', {'bazf': 0.815, 'bif': 1.0}) == \
550 'http://example.org/bar/0.815?bif=1.0'
551 assert adapter.build('barf', {'bazf': 0.815, 'bif': 1.0},
552 append_unknown=False) == 'http://example.org/bar/0.815'
631 map = r.Map([r.Rule("/bar/<float:bazf>", endpoint="barf")])
632 adapter = map.bind("example.org", "/", subdomain="blah")
633 assert (
634 adapter.build("barf", {"bazf": 0.815, "bif": 1.0})
635 == "http://example.org/bar/0.815?bif=1.0"
636 )
637 assert (
638 adapter.build("barf", {"bazf": 0.815, "bif": 1.0}, append_unknown=False)
639 == "http://example.org/bar/0.815"
640 )
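Values passed to `build` that the rule does not consume are appended as a query string (unless `append_unknown=False`). The serialization behavior the next few tests check, including repeating a key for each item of a list value, matches what the stdlib's `urlencode` does with `doseq=True` (a sketch of the mechanism, not werkzeug's code path):

```python
from urllib.parse import urlencode

# Build values not used by the rule become the query string; doseq=True
# repeats a key per list item, as in test_build_append_multiple below.
extra = {"bif": [1.0, 3.0], "pof": 2.0}
query = urlencode(extra, doseq=True)
url = "http://example.org/bar/0.815?" + query
```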
553641
554642
555643 def test_build_append_multiple():
556 map = r.Map([
557 r.Rule('/bar/<float:bazf>', endpoint='barf')
558 ])
559 adapter = map.bind('example.org', '/', subdomain='blah')
560 params = {'bazf': 0.815, 'bif': [1.0, 3.0], 'pof': 2.0}
561 a, b = adapter.build('barf', params).split('?')
562 assert a == 'http://example.org/bar/0.815'
563 assert set(b.split('&')) == set('pof=2.0&bif=1.0&bif=3.0'.split('&'))
644 map = r.Map([r.Rule("/bar/<float:foo>", endpoint="endp")])
645 adapter = map.bind("example.org", "/", subdomain="subd")
646 params = {"foo": 0.815, "x": [1.0, 3.0], "y": 2.0}
647 a, b = adapter.build("endp", params).split("?")
648 assert a == "http://example.org/bar/0.815"
649 assert set(b.split("&")) == set("y=2.0&x=1.0&x=3.0".split("&"))
650
651
652 def test_build_append_multidict():
653 map = r.Map([r.Rule("/bar/<float:foo>", endpoint="endp")])
654 adapter = map.bind("example.org", "/", subdomain="subd")
655 params = MultiDict((("foo", 0.815), ("x", 1.0), ("x", 3.0), ("y", 2.0)))
656 a, b = adapter.build("endp", params).split("?")
657 assert a == "http://example.org/bar/0.815"
658 assert set(b.split("&")) == set("y=2.0&x=1.0&x=3.0".split("&"))
659
660
661 def test_build_drop_none():
662 map = r.Map([r.Rule("/flob/<flub>", endpoint="endp")])
663 adapter = map.bind("", "/")
664 params = {"flub": None, "flop": None}
665 with pytest.raises(r.BuildError):
666 x = adapter.build("endp", params)
667 assert not x
668 params = {"flub": "x", "flop": None}
669 url = adapter.build("endp", params)
670 assert "flop" not in url
564671
565672
566673 def test_method_fallback():
567 map = r.Map([
568 r.Rule('/', endpoint='index', methods=['GET']),
569 r.Rule('/<name>', endpoint='hello_name', methods=['GET']),
570 r.Rule('/select', endpoint='hello_select', methods=['POST']),
571 r.Rule('/search_get', endpoint='search', methods=['GET']),
572 r.Rule('/search_post', endpoint='search', methods=['POST'])
573 ])
574 adapter = map.bind('example.com')
575 assert adapter.build('index') == '/'
576 assert adapter.build('index', method='GET') == '/'
577 assert adapter.build('hello_name', {'name': 'foo'}) == '/foo'
578 assert adapter.build('hello_select') == '/select'
579 assert adapter.build('hello_select', method='POST') == '/select'
580 assert adapter.build('search') == '/search_get'
581 assert adapter.build('search', method='GET') == '/search_get'
582 assert adapter.build('search', method='POST') == '/search_post'
674 map = r.Map(
675 [
676 r.Rule("/", endpoint="index", methods=["GET"]),
677 r.Rule("/<name>", endpoint="hello_name", methods=["GET"]),
678 r.Rule("/select", endpoint="hello_select", methods=["POST"]),
679 r.Rule("/search_get", endpoint="search", methods=["GET"]),
680 r.Rule("/search_post", endpoint="search", methods=["POST"]),
681 ]
682 )
683 adapter = map.bind("example.com")
684 assert adapter.build("index") == "/"
685 assert adapter.build("index", method="GET") == "/"
686 assert adapter.build("hello_name", {"name": "foo"}) == "/foo"
687 assert adapter.build("hello_select") == "/select"
688 assert adapter.build("hello_select", method="POST") == "/select"
689 assert adapter.build("search") == "/search_get"
690 assert adapter.build("search", method="GET") == "/search_get"
691 assert adapter.build("search", method="POST") == "/search_post"
583692
584693
585694 def test_implicit_head():
586 url_map = r.Map([
587 r.Rule('/get', methods=['GET'], endpoint='a'),
588 r.Rule('/post', methods=['POST'], endpoint='b')
589 ])
590 adapter = url_map.bind('example.org')
591 assert adapter.match('/get', method='HEAD') == ('a', {})
592 pytest.raises(r.MethodNotAllowed, adapter.match,
593 '/post', method='HEAD')
695 url_map = r.Map(
696 [
697 r.Rule("/get", methods=["GET"], endpoint="a"),
698 r.Rule("/post", methods=["POST"], endpoint="b"),
699 ]
700 )
701 adapter = url_map.bind("example.org")
702 assert adapter.match("/get", method="HEAD") == ("a", {})
703 pytest.raises(r.MethodNotAllowed, adapter.match, "/post", method="HEAD")
594704
595705
596706 def test_pass_str_as_router_methods():
597707 with pytest.raises(TypeError):
598 r.Rule('/get', methods='GET')
708 r.Rule("/get", methods="GET")
599709
600710
601711 def test_protocol_joining_bug():
602 m = r.Map([r.Rule('/<foo>', endpoint='x')])
603 a = m.bind('example.org')
604 assert a.build('x', {'foo': 'x:y'}) == '/x:y'
605 assert a.build('x', {'foo': 'x:y'}, force_external=True) == \
606 'http://example.org/x:y'
712 m = r.Map([r.Rule("/<foo>", endpoint="x")])
713 a = m.bind("example.org")
714 assert a.build("x", {"foo": "x:y"}) == "/x:y"
715 assert a.build("x", {"foo": "x:y"}, force_external=True) == "http://example.org/x:y"
607716
608717
609718 def test_allowed_methods_querying():
610 m = r.Map([r.Rule('/<foo>', methods=['GET', 'HEAD']),
611 r.Rule('/foo', methods=['POST'])])
612 a = m.bind('example.org')
613 assert sorted(a.allowed_methods('/foo')) == ['GET', 'HEAD', 'POST']
719 m = r.Map(
720 [r.Rule("/<foo>", methods=["GET", "HEAD"]), r.Rule("/foo", methods=["POST"])]
721 )
722 a = m.bind("example.org")
723 assert sorted(a.allowed_methods("/foo")) == ["GET", "HEAD", "POST"]
614724
615725
616726 def test_external_building_with_port():
617 map = r.Map([
618 r.Rule('/', endpoint='index'),
619 ])
620 adapter = map.bind('example.org:5000', '/')
621 built_url = adapter.build('index', {}, force_external=True)
622 assert built_url == 'http://example.org:5000/', built_url
727 map = r.Map([r.Rule("/", endpoint="index")])
728 adapter = map.bind("example.org:5000", "/")
729 built_url = adapter.build("index", {}, force_external=True)
730 assert built_url == "http://example.org:5000/", built_url
623731
624732
625733 def test_external_building_with_port_bind_to_environ():
626 map = r.Map([
627 r.Rule('/', endpoint='index'),
628 ])
734 map = r.Map([r.Rule("/", endpoint="index")])
629735 adapter = map.bind_to_environ(
630 create_environ('/', 'http://example.org:5000/'),
631 server_name="example.org:5000"
632 )
633 built_url = adapter.build('index', {}, force_external=True)
634 assert built_url == 'http://example.org:5000/', built_url
736 create_environ("/", "http://example.org:5000/"), server_name="example.org:5000"
737 )
738 built_url = adapter.build("index", {}, force_external=True)
739 assert built_url == "http://example.org:5000/", built_url
635740
636741
637742 def test_external_building_with_port_bind_to_environ_wrong_servername():
638 map = r.Map([
639 r.Rule('/', endpoint='index'),
640 ])
641 environ = create_environ('/', 'http://example.org:5000/')
743 map = r.Map([r.Rule("/", endpoint="index")])
744 environ = create_environ("/", "http://example.org:5000/")
642745 adapter = map.bind_to_environ(environ, server_name="example.org")
643 assert adapter.subdomain == '<invalid>'
746 assert adapter.subdomain == "<invalid>"
644747
645748
646749 def test_converter_parser():
647 args, kwargs = r.parse_converter_args(u'test, a=1, b=3.0')
648
649 assert args == ('test',)
650 assert kwargs == {'a': 1, 'b': 3.0}
651
652 args, kwargs = r.parse_converter_args('')
750 args, kwargs = r.parse_converter_args(u"test, a=1, b=3.0")
751
752 assert args == ("test",)
753 assert kwargs == {"a": 1, "b": 3.0}
754
755 args, kwargs = r.parse_converter_args("")
653756 assert not args and not kwargs
654757
655 args, kwargs = r.parse_converter_args('a, b, c,')
656 assert args == ('a', 'b', 'c')
758 args, kwargs = r.parse_converter_args("a, b, c,")
759 assert args == ("a", "b", "c")
657760 assert not kwargs
658761
659 args, kwargs = r.parse_converter_args('True, False, None')
762 args, kwargs = r.parse_converter_args("True, False, None")
660763 assert args == (True, False, None)
661764
662765 args, kwargs = r.parse_converter_args('"foo", u"bar"')
663 assert args == ('foo', 'bar')
766 assert args == ("foo", "bar")
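`parse_converter_args` accepts Python-literal-looking arguments (numbers, booleans, `None`, quoted strings) and treats bare words as strings. The positional-argument part can be roughly approximated with `ast.literal_eval`; the `parse_args` helper below is hypothetical, not werkzeug's implementation:

```python
import ast

# Rough approximation: try to evaluate each comma-separated piece as a
# Python literal, falling back to the bare word as a string.
def parse_args(text):
    args = []
    for part in text.split(","):
        part = part.strip()
        if not part:
            continue
        try:
            args.append(ast.literal_eval(part))
        except (ValueError, SyntaxError):
            args.append(part)  # bare word -> string
    return tuple(args)

parsed = parse_args("True, False, None")
```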
664767
665768
666769 def test_alias_redirects():
667 m = r.Map([
668 r.Rule('/', endpoint='index'),
669 r.Rule('/index.html', endpoint='index', alias=True),
670 r.Rule('/users/', defaults={'page': 1}, endpoint='users'),
671 r.Rule('/users/index.html', defaults={'page': 1}, alias=True,
672 endpoint='users'),
673 r.Rule('/users/page/<int:page>', endpoint='users'),
674 r.Rule('/users/page-<int:page>.html', alias=True, endpoint='users'),
675 ])
676 a = m.bind('example.com')
770 m = r.Map(
771 [
772 r.Rule("/", endpoint="index"),
773 r.Rule("/index.html", endpoint="index", alias=True),
774 r.Rule("/users/", defaults={"page": 1}, endpoint="users"),
775 r.Rule(
776 "/users/index.html", defaults={"page": 1}, alias=True, endpoint="users"
777 ),
778 r.Rule("/users/page/<int:page>", endpoint="users"),
779 r.Rule("/users/page-<int:page>.html", alias=True, endpoint="users"),
780 ]
781 )
782 a = m.bind("example.com")
677783
678784 def ensure_redirect(path, new_url, args=None):
679785 with pytest.raises(r.RequestRedirect) as excinfo:
680786 a.match(path, query_args=args)
681 assert excinfo.value.new_url == 'http://example.com' + new_url
682
683 ensure_redirect('/index.html', '/')
684 ensure_redirect('/users/index.html', '/users/')
685 ensure_redirect('/users/page-2.html', '/users/page/2')
686 ensure_redirect('/users/page-1.html', '/users/')
687 ensure_redirect('/users/page-1.html', '/users/?foo=bar', {'foo': 'bar'})
688
689 assert a.build('index') == '/'
690 assert a.build('users', {'page': 1}) == '/users/'
691 assert a.build('users', {'page': 2}) == '/users/page/2'
692
693
694 @pytest.mark.parametrize('prefix', ('', '/aaa'))
787 assert excinfo.value.new_url == "http://example.com" + new_url
788
789 ensure_redirect("/index.html", "/")
790 ensure_redirect("/users/index.html", "/users/")
791 ensure_redirect("/users/page-2.html", "/users/page/2")
792 ensure_redirect("/users/page-1.html", "/users/")
793 ensure_redirect("/users/page-1.html", "/users/?foo=bar", {"foo": "bar"})
794
795 assert a.build("index") == "/"
796 assert a.build("users", {"page": 1}) == "/users/"
797 assert a.build("users", {"page": 2}) == "/users/page/2"
798
799
800 @pytest.mark.parametrize("prefix", ("", "/aaa"))
695801 def test_double_defaults(prefix):
696 m = r.Map([
697 r.Rule(prefix + '/', defaults={'foo': 1, 'bar': False}, endpoint='x'),
698 r.Rule(prefix + '/<int:foo>', defaults={'bar': False}, endpoint='x'),
699 r.Rule(prefix + '/bar/', defaults={'foo': 1, 'bar': True}, endpoint='x'),
700 r.Rule(prefix + '/bar/<int:foo>', defaults={'bar': True}, endpoint='x')
701 ])
702 a = m.bind('example.com')
703
704 assert a.match(prefix + '/') == ('x', {'foo': 1, 'bar': False})
705 assert a.match(prefix + '/2') == ('x', {'foo': 2, 'bar': False})
706 assert a.match(prefix + '/bar/') == ('x', {'foo': 1, 'bar': True})
707 assert a.match(prefix + '/bar/2') == ('x', {'foo': 2, 'bar': True})
708
709 assert a.build('x', {'foo': 1, 'bar': False}) == prefix + '/'
710 assert a.build('x', {'foo': 2, 'bar': False}) == prefix + '/2'
711 assert a.build('x', {'bar': False}) == prefix + '/'
712 assert a.build('x', {'foo': 1, 'bar': True}) == prefix + '/bar/'
713 assert a.build('x', {'foo': 2, 'bar': True}) == prefix + '/bar/2'
714 assert a.build('x', {'bar': True}) == prefix + '/bar/'
802 m = r.Map(
803 [
804 r.Rule(prefix + "/", defaults={"foo": 1, "bar": False}, endpoint="x"),
805 r.Rule(prefix + "/<int:foo>", defaults={"bar": False}, endpoint="x"),
806 r.Rule(prefix + "/bar/", defaults={"foo": 1, "bar": True}, endpoint="x"),
807 r.Rule(prefix + "/bar/<int:foo>", defaults={"bar": True}, endpoint="x"),
808 ]
809 )
810 a = m.bind("example.com")
811
812 assert a.match(prefix + "/") == ("x", {"foo": 1, "bar": False})
813 assert a.match(prefix + "/2") == ("x", {"foo": 2, "bar": False})
814 assert a.match(prefix + "/bar/") == ("x", {"foo": 1, "bar": True})
815 assert a.match(prefix + "/bar/2") == ("x", {"foo": 2, "bar": True})
816
817 assert a.build("x", {"foo": 1, "bar": False}) == prefix + "/"
818 assert a.build("x", {"foo": 2, "bar": False}) == prefix + "/2"
819 assert a.build("x", {"bar": False}) == prefix + "/"
820 assert a.build("x", {"foo": 1, "bar": True}) == prefix + "/bar/"
821 assert a.build("x", {"foo": 2, "bar": True}) == prefix + "/bar/2"
822 assert a.build("x", {"bar": True}) == prefix + "/bar/"
823
824
825 def test_building_bytes():
826 m = r.Map(
827 [
828 r.Rule("/<a>", endpoint="a"),
829 r.Rule("/<b>", defaults={"b": b"\x01\x02\x03"}, endpoint="b"),
830 ]
831 )
832 a = m.bind("example.org", "/")
833 assert a.build("a", {"a": b"\x01\x02\x03"}) == "/%01%02%03"
834 assert a.build("b") == "/%01%02%03"
715835
716836
717837 def test_host_matching():
718 m = r.Map([
719 r.Rule('/', endpoint='index', host='www.<domain>'),
720 r.Rule('/', endpoint='files', host='files.<domain>'),
721 r.Rule('/foo/', defaults={'page': 1}, host='www.<domain>', endpoint='x'),
722 r.Rule('/<int:page>', host='files.<domain>', endpoint='x')
723 ], host_matching=True)
724
725 a = m.bind('www.example.com')
726 assert a.match('/') == ('index', {'domain': 'example.com'})
727 assert a.match('/foo/') == ('x', {'domain': 'example.com', 'page': 1})
728
729 with pytest.raises(r.RequestRedirect) as excinfo:
730 a.match('/foo')
731 assert excinfo.value.new_url == 'http://www.example.com/foo/'
732
733 a = m.bind('files.example.com')
734 assert a.match('/') == ('files', {'domain': 'example.com'})
735 assert a.match('/2') == ('x', {'domain': 'example.com', 'page': 2})
736
737 with pytest.raises(r.RequestRedirect) as excinfo:
738 a.match('/1')
739 assert excinfo.value.new_url == 'http://www.example.com/foo/'
838 m = r.Map(
839 [
840 r.Rule("/", endpoint="index", host="www.<domain>"),
841 r.Rule("/", endpoint="files", host="files.<domain>"),
842 r.Rule("/foo/", defaults={"page": 1}, host="www.<domain>", endpoint="x"),
843 r.Rule("/<int:page>", host="files.<domain>", endpoint="x"),
844 ],
845 host_matching=True,
846 )
847
848 a = m.bind("www.example.com")
849 assert a.match("/") == ("index", {"domain": "example.com"})
850 assert a.match("/foo/") == ("x", {"domain": "example.com", "page": 1})
851
852 with pytest.raises(r.RequestRedirect) as excinfo:
853 a.match("/foo")
854 assert excinfo.value.new_url == "http://www.example.com/foo/"
855
856 a = m.bind("files.example.com")
857 assert a.match("/") == ("files", {"domain": "example.com"})
858 assert a.match("/2") == ("x", {"domain": "example.com", "page": 2})
859
860 with pytest.raises(r.RequestRedirect) as excinfo:
861 a.match("/1")
862 assert excinfo.value.new_url == "http://www.example.com/foo/"
740863
741864
742865 def test_host_matching_building():
743 m = r.Map([
744 r.Rule('/', endpoint='index', host='www.domain.com'),
745 r.Rule('/', endpoint='foo', host='my.domain.com')
746 ], host_matching=True)
747
748 www = m.bind('www.domain.com')
749 assert www.match('/') == ('index', {})
750 assert www.build('index') == '/'
751 assert www.build('foo') == 'http://my.domain.com/'
752
753 my = m.bind('my.domain.com')
754 assert my.match('/') == ('foo', {})
755 assert my.build('foo') == '/'
756 assert my.build('index') == 'http://www.domain.com/'
866 m = r.Map(
867 [
868 r.Rule("/", endpoint="index", host="www.domain.com"),
869 r.Rule("/", endpoint="foo", host="my.domain.com"),
870 ],
871 host_matching=True,
872 )
873
874 www = m.bind("www.domain.com")
875 assert www.match("/") == ("index", {})
876 assert www.build("index") == "/"
877 assert www.build("foo") == "http://my.domain.com/"
878
879 my = m.bind("my.domain.com")
880 assert my.match("/") == ("foo", {})
881 assert my.build("foo") == "/"
882 assert my.build("index") == "http://www.domain.com/"
757883
758884
759885 def test_server_name_casing():
760 m = r.Map([
761 r.Rule('/', endpoint='index', subdomain='foo')
762 ])
886 m = r.Map([r.Rule("/", endpoint="index", subdomain="foo")])
763887
764888 env = create_environ()
765 env['SERVER_NAME'] = env['HTTP_HOST'] = 'FOO.EXAMPLE.COM'
766 a = m.bind_to_environ(env, server_name='example.com')
767 assert a.match('/') == ('index', {})
889 env["SERVER_NAME"] = env["HTTP_HOST"] = "FOO.EXAMPLE.COM"
890 a = m.bind_to_environ(env, server_name="example.com")
891 assert a.match("/") == ("index", {})
768892
769893 env = create_environ()
770 env['SERVER_NAME'] = '127.0.0.1'
771 env['SERVER_PORT'] = '5000'
772 del env['HTTP_HOST']
773 a = m.bind_to_environ(env, server_name='example.com')
894 env["SERVER_NAME"] = "127.0.0.1"
895 env["SERVER_PORT"] = "5000"
896 del env["HTTP_HOST"]
897 a = m.bind_to_environ(env, server_name="example.com")
774898 with pytest.raises(r.NotFound):
775899 a.match()
776900
777901
778902 def test_redirect_request_exception_code():
779 exc = r.RequestRedirect('http://www.google.com/')
903 exc = r.RequestRedirect("http://www.google.com/")
780904 exc.code = 307
781905 env = create_environ()
782906 strict_eq(exc.get_response(env).status_code, exc.code)
783907
784908
785909 def test_redirect_path_quoting():
786 url_map = r.Map([
787 r.Rule('/<category>', defaults={'page': 1}, endpoint='category'),
788 r.Rule('/<category>/page/<int:page>', endpoint='category')
789 ])
790 adapter = url_map.bind('example.com')
791
792 with pytest.raises(r.RequestRedirect) as excinfo:
793 adapter.match('/foo bar/page/1')
910 url_map = r.Map(
911 [
912 r.Rule("/<category>", defaults={"page": 1}, endpoint="category"),
913 r.Rule("/<category>/page/<int:page>", endpoint="category"),
914 ]
915 )
916 adapter = url_map.bind("example.com")
917
918 with pytest.raises(r.RequestRedirect) as excinfo:
919 adapter.match("/foo bar/page/1")
794920 response = excinfo.value.get_response({})
795 strict_eq(response.headers['location'],
796 u'http://example.com/foo%20bar')
921 strict_eq(response.headers["location"], u"http://example.com/foo%20bar")
797922
798923
799924 def test_unicode_rules():
800 m = r.Map([
801 r.Rule(u'/войти/', endpoint='enter'),
802 r.Rule(u'/foo+bar/', endpoint='foobar')
803 ])
804 a = m.bind(u'☃.example.com')
805 with pytest.raises(r.RequestRedirect) as excinfo:
806 a.match(u'/войти')
807 strict_eq(excinfo.value.new_url,
808 'http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/')
809
810 endpoint, values = a.match(u'/войти/')
811 strict_eq(endpoint, 'enter')
925 m = r.Map(
926 [r.Rule(u"/войти/", endpoint="enter"), r.Rule(u"/foo+bar/", endpoint="foobar")]
927 )
928 a = m.bind(u"☃.example.com")
929 with pytest.raises(r.RequestRedirect) as excinfo:
930 a.match(u"/войти")
931 strict_eq(
932 excinfo.value.new_url,
933 "http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/",
934 )
935
936 endpoint, values = a.match(u"/войти/")
937 strict_eq(endpoint, "enter")
812938 strict_eq(values, {})
813939
814940 with pytest.raises(r.RequestRedirect) as excinfo:
815 a.match(u'/foo+bar')
816 strict_eq(excinfo.value.new_url, 'http://xn--n3h.example.com/foo+bar/')
817
818 endpoint, values = a.match(u'/foo+bar/')
819 strict_eq(endpoint, 'foobar')
941 a.match(u"/foo+bar")
942 strict_eq(excinfo.value.new_url, "http://xn--n3h.example.com/foo+bar/")
943
944 endpoint, values = a.match(u"/foo+bar/")
945 strict_eq(endpoint, "foobar")
820946 strict_eq(values, {})
821947
822 url = a.build('enter', {}, force_external=True)
823 strict_eq(url, 'http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/')
824
825 url = a.build('foobar', {}, force_external=True)
826 strict_eq(url, 'http://xn--n3h.example.com/foo+bar/')
948 url = a.build("enter", {}, force_external=True)
949 strict_eq(url, "http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/")
950
951 url = a.build("foobar", {}, force_external=True)
952 strict_eq(url, "http://xn--n3h.example.com/foo+bar/")
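The expected URLs in this test combine two encodings: the non-ASCII host is IDNA (punycode) encoded, and the non-ASCII path is percent-encoded as UTF-8 bytes. Both are reproducible with the stdlib; this is a sketch of the encodings involved, not werkzeug's internal code path:

```python
from urllib.parse import quote

# Host: IDNA (punycode) encoding turns the snowman label into xn--n3h.
host = u"\u2603.example.com".encode("idna").decode("ascii")

# Path: percent-encode the UTF-8 bytes; "/" stays safe by default.
path = quote(u"/\u0432\u043e\u0439\u0442\u0438/")  # /войти/
url = "http://" + host + path
```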
827953
828954
829955 def test_empty_path_info():
830 m = r.Map([
831 r.Rule("/", endpoint="index"),
832 ])
956 m = r.Map([r.Rule("/", endpoint="index")])
833957
834958 b = m.bind("example.com", script_name="/approot")
835959 with pytest.raises(r.RequestRedirect) as excinfo:
842966 assert excinfo.value.new_url == "http://example.com/"
843967
844968
969 def test_both_bind_and_match_path_info_are_none():
970 m = r.Map([r.Rule(u"/", endpoint="index")])
971 ma = m.bind("example.org")
972 strict_eq(ma.match(), ("index", {}))
973
974
845975 def test_map_repr():
846 m = r.Map([
847 r.Rule(u'/wat', endpoint='enter'),
848 r.Rule(u'/woop', endpoint='foobar')
849 ])
976 m = r.Map([r.Rule(u"/wat", endpoint="enter"), r.Rule(u"/woop", endpoint="foobar")])
850977 rv = repr(m)
851 strict_eq(rv,
852 "Map([<Rule '/woop' -> foobar>, <Rule '/wat' -> enter>])")
978 strict_eq(rv, "Map([<Rule '/woop' -> foobar>, <Rule '/wat' -> enter>])")
853979
854980
855981 def test_empty_subclass_rules_with_custom_kwargs():
856982 class CustomRule(r.Rule):
857
858983 def __init__(self, string=None, custom=None, *args, **kwargs):
859984 self.custom = custom
860985 super(CustomRule, self).__init__(string, *args, **kwargs)
861986
862 rule1 = CustomRule(u'/foo', endpoint='bar')
987 rule1 = CustomRule(u"/foo", endpoint="bar")
863988 try:
864989 rule2 = rule1.empty()
865990 assert rule1.rule == rule2.rule
868993
869994
870995 def test_finding_closest_match_by_endpoint():
871 m = r.Map([
872 r.Rule(u'/foo/', endpoint='users.here'),
873 r.Rule(u'/wat/', endpoint='admin.users'),
874 r.Rule(u'/woop', endpoint='foo.users'),
875 ])
876 adapter = m.bind('example.com')
877 assert r.BuildError('admin.user', None, None, adapter).suggested.endpoint \
878 == 'admin.users'
996 m = r.Map(
997 [
998 r.Rule(u"/foo/", endpoint="users.here"),
999 r.Rule(u"/wat/", endpoint="admin.users"),
1000 r.Rule(u"/woop", endpoint="foo.users"),
1001 ]
1002 )
1003 adapter = m.bind("example.com")
1004 assert (
1005 r.BuildError("admin.user", None, None, adapter).suggested.endpoint
1006 == "admin.users"
1007 )
8791008
8801009
8811010 def test_finding_closest_match_by_values():
882 rule_id = r.Rule(u'/user/id/<id>/', endpoint='users')
883 rule_slug = r.Rule(u'/user/<slug>/', endpoint='users')
884 rule_random = r.Rule(u'/user/emails/<email>/', endpoint='users')
1011 rule_id = r.Rule(u"/user/id/<id>/", endpoint="users")
1012 rule_slug = r.Rule(u"/user/<slug>/", endpoint="users")
1013 rule_random = r.Rule(u"/user/emails/<email>/", endpoint="users")
8851014 m = r.Map([rule_id, rule_slug, rule_random])
886 adapter = m.bind('example.com')
887 assert r.BuildError('x', {'slug': ''}, None, adapter).suggested == \
888 rule_slug
1015 adapter = m.bind("example.com")
1016 assert r.BuildError("x", {"slug": ""}, None, adapter).suggested == rule_slug
8891017
8901018
8911019 def test_finding_closest_match_by_method():
892 post = r.Rule(u'/post/', endpoint='foobar', methods=['POST'])
893 get = r.Rule(u'/get/', endpoint='foobar', methods=['GET'])
894 put = r.Rule(u'/put/', endpoint='foobar', methods=['PUT'])
1020 post = r.Rule(u"/post/", endpoint="foobar", methods=["POST"])
1021 get = r.Rule(u"/get/", endpoint="foobar", methods=["GET"])
1022 put = r.Rule(u"/put/", endpoint="foobar", methods=["PUT"])
8951023 m = r.Map([post, get, put])
896 adapter = m.bind('example.com')
897 assert r.BuildError('invalid', {}, 'POST', adapter).suggested == post
898 assert r.BuildError('invalid', {}, 'GET', adapter).suggested == get
899 assert r.BuildError('invalid', {}, 'PUT', adapter).suggested == put
1024 adapter = m.bind("example.com")
1025 assert r.BuildError("invalid", {}, "POST", adapter).suggested == post
1026 assert r.BuildError("invalid", {}, "GET", adapter).suggested == get
1027 assert r.BuildError("invalid", {}, "PUT", adapter).suggested == put
9001028
9011029
9021030 def test_finding_closest_match_when_none_exist():
9031031 m = r.Map([])
904 assert not r.BuildError('invalid', {}, None, m.bind('test.com')).suggested
1032 assert not r.BuildError("invalid", {}, None, m.bind("test.com")).suggested
9051033
9061034
9071035 def test_error_message_without_suggested_rule():
908 m = r.Map([
909 r.Rule(u'/foo/', endpoint='world', methods=['GET']),
910 ])
911 adapter = m.bind('example.com')
1036 m = r.Map([r.Rule(u"/foo/", endpoint="world", methods=["GET"])])
1037 adapter = m.bind("example.com")
9121038
9131039 with pytest.raises(r.BuildError) as excinfo:
914 adapter.build('urks')
915 assert str(excinfo.value).startswith(
916 "Could not build url for endpoint 'urks'."
917 )
1040 adapter.build("urks")
1041 assert str(excinfo.value).startswith("Could not build url for endpoint 'urks'.")
9181042
9191043 with pytest.raises(r.BuildError) as excinfo:
920 adapter.build('world', method='POST')
1044 adapter.build("world", method="POST")
9211045 assert str(excinfo.value).startswith(
9221046 "Could not build url for endpoint 'world' ('POST')."
9231047 )
9241048
9251049 with pytest.raises(r.BuildError) as excinfo:
926 adapter.build('urks', values={'user_id': 5})
1050 adapter.build("urks", values={"user_id": 5})
9271051 assert str(excinfo.value).startswith(
9281052 "Could not build url for endpoint 'urks' with values ['user_id']."
9291053 )
9301054
9311055
9321056 def test_error_message_suggestion():
933 m = r.Map([
934 r.Rule(u'/foo/<id>/', endpoint='world', methods=['GET']),
935 ])
936 adapter = m.bind('example.com')
1057 m = r.Map([r.Rule(u"/foo/<id>/", endpoint="world", methods=["GET"])])
1058 adapter = m.bind("example.com")
9371059
9381060 with pytest.raises(r.BuildError) as excinfo:
939 adapter.build('helloworld')
1061 adapter.build("helloworld")
9401062 assert "Did you mean 'world' instead?" in str(excinfo.value)
9411063
9421064 with pytest.raises(r.BuildError) as excinfo:
943 adapter.build('world')
1065 adapter.build("world")
9441066 assert "Did you forget to specify values ['id']?" in str(excinfo.value)
9451067 assert "Did you mean to use methods" not in str(excinfo.value)
9461068
9471069 with pytest.raises(r.BuildError) as excinfo:
948 adapter.build('world', {'id': 2}, method='POST')
1070 adapter.build("world", {"id": 2}, method="POST")
9491071 assert "Did you mean to use methods ['GET', 'HEAD']?" in str(excinfo.value)
1072
1073
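The "Did you mean 'world' instead?" message asserted above comes from a closest-match lookup over the map's registered endpoints. A stdlib sketch of that kind of suggestion (the helper name `suggest_endpoint` is mine, not Werkzeug's; the real `BuildError.suggested` logic also weighs methods and values):

```python
import difflib


def suggest_endpoint(wanted, endpoints):
    """Return the closest registered endpoint name, or None if nothing
    is similar enough -- a simplified sketch of the suggestion the
    BuildError tests above assert on."""
    matches = difflib.get_close_matches(wanted, endpoints, n=1)
    return matches[0] if matches else None


print(suggest_endpoint("helloworld", ["world", "foo"]))  # -> world
```

`difflib.get_close_matches` defaults to a 0.6 similarity cutoff, which is why a near-miss like `helloworld` suggests `world` while an unrelated name yields no suggestion.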
1074 def test_no_memory_leak_from_Rule_builder():
1075 """See #1520"""
1076
1077 # generate a bunch of objects that *should* get collected
1078 for _ in range(100):
1079 r.Map([r.Rule("/a/<string:b>")])
1080
1081 # ensure that the garbage collection has had a chance to collect cyclic
1082 # objects
1083 for _ in range(5):
1084 gc.collect()
1085
1086 # assert they got collected!
1087 count = sum(1 for obj in gc.get_objects() if isinstance(obj, r.Rule))
1088 assert count == 0
1089
1090
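The leak test above relies on cyclic garbage (Rule/Map reference each other) only being reclaimed by the cyclic collector, then counts survivors with `gc.get_objects()`. The same pattern, sketched with a hypothetical self-referencing class instead of `r.Rule`:

```python
import gc


class Node:
    """Tiny stand-in for Rule: forms a reference cycle on construction."""

    def __init__(self):
        self.ref = self  # self-cycle; refcounting alone cannot free this


def count_live(cls):
    # same counting idiom as test_no_memory_leak_from_Rule_builder
    return sum(1 for obj in gc.get_objects() if isinstance(obj, cls))


for _ in range(100):
    Node()  # created and immediately dropped, but kept alive by the cycle

for _ in range(5):
    gc.collect()  # give the cyclic collector a chance to run

assert count_live(Node) == 0
```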
1091 def test_build_url_with_arg_self():
1092 map = r.Map([r.Rule("/foo/<string:self>", endpoint="foo")])
1093 adapter = map.bind("example.org", "/", subdomain="blah")
1094
1095 ret = adapter.build("foo", {"self": "bar"})
1096 assert ret == "http://example.org/foo/bar"
1097
1098
1099 def test_build_url_with_arg_keyword():
1100 map = r.Map([r.Rule("/foo/<string:class>", endpoint="foo")])
1101 adapter = map.bind("example.org", "/", subdomain="blah")
1102
1103 ret = adapter.build("foo", {"class": "bar"})
1104 assert ret == "http://example.org/foo/bar"
44
55 Tests the security helpers.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import os
1111 import posixpath
12
1213 import pytest
1314
14 from werkzeug.security import check_password_hash, generate_password_hash, \
15 safe_join, pbkdf2_hex, safe_str_cmp
15 from werkzeug.security import check_password_hash
16 from werkzeug.security import generate_password_hash
17 from werkzeug.security import pbkdf2_hex
18 from werkzeug.security import safe_join
19 from werkzeug.security import safe_str_cmp
1620
1721
1822 def test_safe_str_cmp():
19 assert safe_str_cmp('a', 'a') is True
20 assert safe_str_cmp(b'a', u'a') is True
21 assert safe_str_cmp('a', 'b') is False
22 assert safe_str_cmp(b'aaa', 'aa') is False
23 assert safe_str_cmp(b'aaa', 'bbb') is False
24 assert safe_str_cmp(b'aaa', u'aaa') is True
25 assert safe_str_cmp(u'aaa', u'aaa') is True
23 assert safe_str_cmp("a", "a") is True
24 assert safe_str_cmp(b"a", u"a") is True
25 assert safe_str_cmp("a", "b") is False
26 assert safe_str_cmp(b"aaa", "aa") is False
27 assert safe_str_cmp(b"aaa", "bbb") is False
28 assert safe_str_cmp(b"aaa", u"aaa") is True
29 assert safe_str_cmp(u"aaa", u"aaa") is True
2630
2731
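When available, the constant-time behavior `safe_str_cmp` is tested for above is delegated to the stdlib's `hmac.compare_digest` (that is what `_builtin_safe_str_cmp` refers to in the next test). A minimal illustration of the primitive itself:

```python
import hmac

# Constant-time comparison: the runtime does not depend on where the
# first mismatching byte occurs, which defeats timing attacks when
# comparing secrets such as signatures or tokens.
assert hmac.compare_digest(b"aaa", b"aaa") is True
assert hmac.compare_digest(b"aaa", b"bbb") is False
assert hmac.compare_digest("str1", "str2") is False  # ASCII str also accepted
```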
2832 def test_safe_str_cmp_no_builtin():
2933 import werkzeug.security as sec
34
3035 prev_value = sec._builtin_safe_str_cmp
3136 sec._builtin_safe_str_cmp = None
32 assert safe_str_cmp('a', 'ab') is False
33
34 assert safe_str_cmp('str', 'str') is True
35 assert safe_str_cmp('str1', 'str2') is False
37 assert safe_str_cmp("a", "ab") is False
38
39 assert safe_str_cmp("str", "str") is True
40 assert safe_str_cmp("str1", "str2") is False
3641 sec._builtin_safe_str_cmp = prev_value
3742
3843
3944 def test_password_hashing():
40 hash0 = generate_password_hash('default')
41 assert check_password_hash(hash0, 'default')
42 assert hash0.startswith('pbkdf2:sha256:50000$')
43
44 hash1 = generate_password_hash('default', 'sha1')
45 hash2 = generate_password_hash(u'default', method='sha1')
45 hash0 = generate_password_hash("default")
46 assert check_password_hash(hash0, "default")
47 assert hash0.startswith("pbkdf2:sha256:150000$")
48
49 hash1 = generate_password_hash("default", "sha1")
50 hash2 = generate_password_hash(u"default", method="sha1")
4651 assert hash1 != hash2
47 assert check_password_hash(hash1, 'default')
48 assert check_password_hash(hash2, 'default')
49 assert hash1.startswith('sha1$')
50 assert hash2.startswith('sha1$')
51
52 with pytest.raises(TypeError):
53 check_password_hash('$made$up$', 'default')
52 assert check_password_hash(hash1, "default")
53 assert check_password_hash(hash2, "default")
54 assert hash1.startswith("sha1$")
55 assert hash2.startswith("sha1$")
5456
5557 with pytest.raises(ValueError):
56 generate_password_hash('default', 'sha1', salt_length=0)
57
58 fakehash = generate_password_hash('default', method='plain')
59 assert fakehash == 'plain$$default'
60 assert check_password_hash(fakehash, 'default')
61
62 mhash = generate_password_hash(u'default', method='md5')
63 assert mhash.startswith('md5$')
64 assert check_password_hash(mhash, 'default')
65
66 legacy = 'md5$$c21f969b5f03d33d43e04f8f136e7682'
67 assert check_password_hash(legacy, 'default')
68
69 legacy = u'md5$$c21f969b5f03d33d43e04f8f136e7682'
70 assert check_password_hash(legacy, 'default')
58 check_password_hash("$made$up$", "default")
59
60 with pytest.raises(ValueError):
61 generate_password_hash("default", "sha1", salt_length=0)
62
63 fakehash = generate_password_hash("default", method="plain")
64 assert fakehash == "plain$$default"
65 assert check_password_hash(fakehash, "default")
66
67 mhash = generate_password_hash(u"default", method="md5")
68 assert mhash.startswith("md5$")
69 assert check_password_hash(mhash, "default")
70
71 legacy = "md5$$c21f969b5f03d33d43e04f8f136e7682"
72 assert check_password_hash(legacy, "default")
73
74 legacy = u"md5$$c21f969b5f03d33d43e04f8f136e7682"
75 assert check_password_hash(legacy, "default")
7176
7277
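The hunk above bumps the default-hash prefix assertion from `pbkdf2:sha256:50000$` to `pbkdf2:sha256:150000$`, i.e. the default PBKDF2 iteration count was raised. A stdlib sketch of that `method:iterations$salt$hex` layout (the `make_hash` helper is mine; Werkzeug's `generate_password_hash` supports more methods and options):

```python
import hashlib
import os


def make_hash(password, salt=None, iterations=150000):
    """Sketch of a 'pbkdf2:sha256:150000$salt$hexdigest' style hash,
    mirroring the prefix the test above checks. Illustrative only."""
    if salt is None:
        salt = os.urandom(8).hex()
    dk = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt.encode(), iterations
    )
    return "pbkdf2:sha256:%d$%s$%s" % (iterations, salt, dk.hex())


h = make_hash("default", salt="abcd1234")
assert h.startswith("pbkdf2:sha256:150000$")
```

Verification then re-derives the digest from the stored salt and iteration count and compares the result in constant time.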
7378 def test_safe_join():
74 assert safe_join('foo', 'bar/baz') == posixpath.join('foo', 'bar/baz')
75 assert safe_join('foo', '../bar/baz') is None
76 if os.name == 'nt':
77 assert safe_join('foo', 'foo\\bar') is None
79 assert safe_join("foo", "bar/baz") == posixpath.join("foo", "bar/baz")
80 assert safe_join("foo", "../bar/baz") is None
81 if os.name == "nt":
82 assert safe_join("foo", "foo\\bar") is None
7883
7984
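The `safe_join` cases above (accept `bar/baz`, reject `../bar/baz`) can be sketched with a normalize-then-prefix check. This is a simplified illustration under my own helper name; the real `safe_join` additionally rejects absolute paths and OS alternative separators, as the next test shows:

```python
import posixpath


def naive_safe_join(directory, filename):
    """Join `filename` onto `directory`, refusing anything that would
    escape `directory` -- a simplified sketch of safe_join's check."""
    path = posixpath.normpath(posixpath.join(directory, filename))
    # after normalization, the result must still live under `directory`
    if not path.startswith(posixpath.join(directory, "")):
        return None
    return path


assert naive_safe_join("foo", "bar/baz") == "foo/bar/baz"
assert naive_safe_join("foo", "../bar/baz") is None
```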
8085 def test_safe_join_os_sep():
8186 import werkzeug.security as sec
87
8288 prev_value = sec._os_alt_seps
83 sec._os_alt_seps = '*'
84 assert safe_join('foo', 'bar/baz*') is None
89 sec._os_alt_seps = "*"
90 assert safe_join("foo", "bar/baz*") is None
8591 sec._os_alt_steps = prev_value
8591 sec._os_alt_seps = prev_value
8692
8793
95101 # Assumes default keylen is 20
96102 # check('password', 'salt', 1, None,
97103 # '0c60c80f961f0e71f3a9b524af6012062fe037a6')
98 check('password', 'salt', 1, 20, 'sha1',
99 '0c60c80f961f0e71f3a9b524af6012062fe037a6')
100 check('password', 'salt', 2, 20, 'sha1',
101 'ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957')
102 check('password', 'salt', 4096, 20, 'sha1',
103 '4b007901b765489abead49d926f721d065a429c1')
104 check('passwordPASSWORDpassword', 'saltSALTsaltSALTsaltSALTsaltSALTsalt',
105 4096, 25, 'sha1', '3d2eec4fe41c849b80c8d83662c0e44a8b291a964cf2f07038')
106 check('pass\x00word', 'sa\x00lt', 4096, 16, 'sha1',
107 '56fa6aa75548099dcc37d7f03425e0c3')
104 check("password", "salt", 1, 20, "sha1", "0c60c80f961f0e71f3a9b524af6012062fe037a6")
105 check("password", "salt", 2, 20, "sha1", "ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957")
106 check(
107 "password", "salt", 4096, 20, "sha1", "4b007901b765489abead49d926f721d065a429c1"
108 )
109 check(
110 "passwordPASSWORDpassword",
111 "saltSALTsaltSALTsaltSALTsaltSALTsalt",
112 4096,
113 25,
114 "sha1",
115 "3d2eec4fe41c849b80c8d83662c0e44a8b291a964cf2f07038",
116 )
117 check(
118 "pass\x00word", "sa\x00lt", 4096, 16, "sha1", "56fa6aa75548099dcc37d7f03425e0c3"
119 )
108120
109121 # PBKDF2-HMAC-SHA256 test vectors
110 check('password', 'salt', 1, 32, 'sha256',
111 '120fb6cffcf8b32c43e7225256c4f837a86548c92ccc35480805987cb70be17b')
112 check('password', 'salt', 2, 32, 'sha256',
113 'ae4d0c95af6b46d32d0adff928f06dd02a303f8ef3c251dfd6e2d85a95474c43')
114 check('password', 'salt', 4096, 20, 'sha256',
115 'c5e478d59288c841aa530db6845c4c8d962893a0')
122 check(
123 "password",
124 "salt",
125 1,
126 32,
127 "sha256",
128 "120fb6cffcf8b32c43e7225256c4f837a86548c92ccc35480805987cb70be17b",
129 )
130 check(
131 "password",
132 "salt",
133 2,
134 32,
135 "sha256",
136 "ae4d0c95af6b46d32d0adff928f06dd02a303f8ef3c251dfd6e2d85a95474c43",
137 )
138 check(
139 "password",
140 "salt",
141 4096,
142 20,
143 "sha256",
144 "c5e478d59288c841aa530db6845c4c8d962893a0",
145 )
116146
117147 # This one is from the RFC but it just takes for ages
118148 # check('password', 'salt', 16777216, 20,
119149 # 'eefe3d61cd4da4e4e9945b3d6ba2158c2634e984')
120150
121151 # From Crypt-PBKDF2
122 check('password', 'ATHENA.MIT.EDUraeburn', 1, 16, 'sha1',
123 'cdedb5281bb2f801565a1122b2563515')
124 check('password', 'ATHENA.MIT.EDUraeburn', 1, 32, 'sha1',
125 'cdedb5281bb2f801565a1122b25635150ad1f7a04bb9f3a333ecc0e2e1f70837')
126 check('password', 'ATHENA.MIT.EDUraeburn', 2, 16, 'sha1',
127 '01dbee7f4a9e243e988b62c73cda935d')
128 check('password', 'ATHENA.MIT.EDUraeburn', 2, 32, 'sha1',
129 '01dbee7f4a9e243e988b62c73cda935da05378b93244ec8f48a99e61ad799d86')
130 check('password', 'ATHENA.MIT.EDUraeburn', 1200, 32, 'sha1',
131 '5c08eb61fdf71e4e4ec3cf6ba1f5512ba7e52ddbc5e5142f708a31e2e62b1e13')
132 check('X' * 64, 'pass phrase equals block size', 1200, 32, 'sha1',
133 '139c30c0966bc32ba55fdbf212530ac9c5ec59f1a452f5cc9ad940fea0598ed1')
134 check('X' * 65, 'pass phrase exceeds block size', 1200, 32, 'sha1',
135 '9ccad6d468770cd51b10e6a68721be611a8b4d282601db3b36be9246915ec82a')
136
137
138 def test_pbkdf2_non_native():
139 import werkzeug.security as sec
140 prev_value = sec._has_native_pbkdf2
141 sec._has_native_pbkdf2 = None
142
143 assert pbkdf2_hex('password', 'salt', 1, 20, 'sha1') \
144 == '0c60c80f961f0e71f3a9b524af6012062fe037a6'
145 sec._has_native_pbkdf2 = prev_value
152 check(
153 "password",
154 "ATHENA.MIT.EDUraeburn",
155 1,
156 16,
157 "sha1",
158 "cdedb5281bb2f801565a1122b2563515",
159 )
160 check(
161 "password",
162 "ATHENA.MIT.EDUraeburn",
163 1,
164 32,
165 "sha1",
166 "cdedb5281bb2f801565a1122b25635150ad1f7a04bb9f3a333ecc0e2e1f70837",
167 )
168 check(
169 "password",
170 "ATHENA.MIT.EDUraeburn",
171 2,
172 16,
173 "sha1",
174 "01dbee7f4a9e243e988b62c73cda935d",
175 )
176 check(
177 "password",
178 "ATHENA.MIT.EDUraeburn",
179 2,
180 32,
181 "sha1",
182 "01dbee7f4a9e243e988b62c73cda935da05378b93244ec8f48a99e61ad799d86",
183 )
184 check(
185 "password",
186 "ATHENA.MIT.EDUraeburn",
187 1200,
188 32,
189 "sha1",
190 "5c08eb61fdf71e4e4ec3cf6ba1f5512ba7e52ddbc5e5142f708a31e2e62b1e13",
191 )
192 check(
193 "X" * 64,
194 "pass phrase equals block size",
195 1200,
196 32,
197 "sha1",
198 "139c30c0966bc32ba55fdbf212530ac9c5ec59f1a452f5cc9ad940fea0598ed1",
199 )
200 check(
201 "X" * 65,
202 "pass phrase exceeds block size",
203 1200,
204 32,
205 "sha1",
206 "9ccad6d468770cd51b10e6a68721be611a8b4d282601db3b36be9246915ec82a",
207 )
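The `check(...)` calls above feed the standard PBKDF2 test vectors (the first group is from RFC 6070) through `pbkdf2_hex`. The first vector can be reproduced directly with the stdlib, which is what `pbkdf2_hex` wraps when a native implementation is available:

```python
import hashlib

# RFC 6070 test vector #1: PBKDF2-HMAC-SHA1("password", "salt", c=1, dkLen=20)
dk = hashlib.pbkdf2_hmac("sha1", b"password", b"salt", 1, 20)
assert dk.hex() == "0c60c80f961f0e71f3a9b524af6012062fe037a6"
```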
44
55 Added serving tests.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import os
11 import socket
1112 import ssl
13 import subprocess
1214 import sys
1315 import textwrap
1416 import time
15 import subprocess
16
17
18 import pytest
19 import requests.exceptions
20
21 from werkzeug import __version__ as version
22 from werkzeug import _reloader
23 from werkzeug import serving
1724
1825 try:
1926 import OpenSSL
2633 watchdog = None
2734
2835 try:
36 from http import client as httplib
37 except ImportError:
2938 import httplib
30 except ImportError:
31 from http import client as httplib
32
33 import requests
34 import requests.exceptions
35 import pytest
36
37 from werkzeug import __version__ as version, serving, _reloader
3839
3940
4041 def test_serving(dev_server):
41 server = dev_server('from werkzeug.testapp import test_app as app')
42 rv = requests.get('http://%s/?foo=bar&baz=blah' % server.addr).content
43 assert b'WSGI Information' in rv
44 assert b'foo=bar&amp;baz=blah' in rv
45 assert b'Werkzeug/' + version.encode('ascii') in rv
42 server = dev_server("from werkzeug.testapp import test_app as app")
43 rv = requests.get("http://%s/?foo=bar&baz=blah" % server.addr).content
44 assert b"WSGI Information" in rv
45 assert b"foo=bar&amp;baz=blah" in rv
46 assert b"Werkzeug/" + version.encode("ascii") in rv
4647
4748
4849 def test_absolute_requests(dev_server):
49 server = dev_server('''
50 def app(environ, start_response):
51 assert environ['HTTP_HOST'] == 'surelynotexisting.example.com:1337'
52 assert environ['PATH_INFO'] == '/index.htm'
53 addr = environ['HTTP_X_WERKZEUG_ADDR']
54 assert environ['SERVER_PORT'] == addr.split(':')[1]
55 start_response('200 OK', [('Content-Type', 'text/html')])
56 return [b'YES']
57 ''')
50 server = dev_server(
51 """
52 def app(environ, start_response):
53 assert environ['HTTP_HOST'] == 'surelynotexisting.example.com:1337'
54 assert environ['PATH_INFO'] == '/index.htm'
55 addr = environ['HTTP_X_WERKZEUG_ADDR']
56 assert environ['SERVER_PORT'] == addr.split(':')[1]
57 start_response('200 OK', [('Content-Type', 'text/html')])
58 return [b'YES']
59 """
60 )
5861
5962 conn = httplib.HTTPConnection(server.addr)
60 conn.request('GET', 'http://surelynotexisting.example.com:1337/index.htm#ignorethis',
61 headers={'X-Werkzeug-Addr': server.addr})
63 conn.request(
64 "GET",
65 "http://surelynotexisting.example.com:1337/index.htm#ignorethis",
66 headers={"X-Werkzeug-Addr": server.addr},
67 )
6268 res = conn.getresponse()
63 assert res.read() == b'YES'
69 assert res.read() == b"YES"
6470
6571
6672 def test_double_slash_path(dev_server):
67 server = dev_server('''
68 def app(environ, start_response):
69 assert 'fail' not in environ['HTTP_HOST']
70 start_response('200 OK', [('Content-Type', 'text/plain')])
71 return [b'YES']
72 ''')
73
74 r = requests.get(server.url + '//fail')
75 assert r.content == b'YES'
73 server = dev_server(
74 """
75 def app(environ, start_response):
76 assert 'fail' not in environ['HTTP_HOST']
77 start_response('200 OK', [('Content-Type', 'text/plain')])
78 return [b'YES']
79 """
80 )
81
82 r = requests.get(server.url + "//fail")
83 assert r.content == b"YES"
7684
7785
7886 def test_broken_app(dev_server):
79 server = dev_server('''
80 def app(environ, start_response):
81 1 // 0
82 ''')
83
84 r = requests.get(server.url + '/?foo=bar&baz=blah')
87 server = dev_server(
88 """
89 def app(environ, start_response):
90 1 // 0
91 """
92 )
93
94 r = requests.get(server.url + "/?foo=bar&baz=blah")
8595 assert r.status_code == 500
86 assert 'Internal Server Error' in r.text
87
88
89 @pytest.mark.skipif(not hasattr(ssl, 'SSLContext'),
90 reason='Missing PEP 466 (Python 2.7.9+) or Python 3.')
91 @pytest.mark.skipif(OpenSSL is None,
92 reason='OpenSSL is required for cert generation.')
96 assert "Internal Server Error" in r.text
97
98
99 @pytest.mark.skipif(
100 not hasattr(ssl, "SSLContext"),
101 reason="Missing PEP 466 (Python 2.7.9+) or Python 3.",
102 )
103 @pytest.mark.skipif(OpenSSL is None, reason="OpenSSL is required for cert generation.")
93104 def test_stdlib_ssl_contexts(dev_server, tmpdir):
94 certificate, private_key = \
95 serving.make_ssl_devcert(str(tmpdir.mkdir('certs')))
96
97 server = dev_server('''
98 def app(environ, start_response):
99 start_response('200 OK', [('Content-Type', 'text/html')])
100 return [b'hello']
101
102 import ssl
103 ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
104 ctx.load_cert_chain("%s", "%s")
105 kwargs['ssl_context'] = ctx
106 ''' % (certificate, private_key))
105 certificate, private_key = serving.make_ssl_devcert(str(tmpdir.mkdir("certs")))
106
107 server = dev_server(
108 """
109 def app(environ, start_response):
110 start_response('200 OK', [('Content-Type', 'text/html')])
111 return [b'hello']
112
113 import ssl
114 ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
115 ctx.load_cert_chain(r"%s", r"%s")
116 kwargs['ssl_context'] = ctx
117 """
118 % (certificate, private_key)
119 )
107120
108121 assert server.addr is not None
109122 r = requests.get(server.url, verify=False)
110 assert r.content == b'hello'
111
112
113 @pytest.mark.skipif(OpenSSL is None, reason='OpenSSL is not installed.')
123 assert r.content == b"hello"
124
125
126 @pytest.mark.skipif(OpenSSL is None, reason="OpenSSL is not installed.")
114127 def test_ssl_context_adhoc(dev_server):
115 server = dev_server('''
116 def app(environ, start_response):
117 start_response('200 OK', [('Content-Type', 'text/html')])
118 return [b'hello']
119
120 kwargs['ssl_context'] = 'adhoc'
121 ''')
128 server = dev_server(
129 """
130 def app(environ, start_response):
131 start_response('200 OK', [('Content-Type', 'text/html')])
132 return [b'hello']
133
134 kwargs['ssl_context'] = 'adhoc'
135 """
136 )
122137 r = requests.get(server.url, verify=False)
123 assert r.content == b'hello'
124
125
126 @pytest.mark.skipif(OpenSSL is None, reason='OpenSSL is not installed.')
138 assert r.content == b"hello"
139
140
141 @pytest.mark.skipif(OpenSSL is None, reason="OpenSSL is not installed.")
127142 def test_make_ssl_devcert(tmpdir):
128 certificate, private_key = \
129 serving.make_ssl_devcert(str(tmpdir))
143 certificate, private_key = serving.make_ssl_devcert(str(tmpdir))
130144 assert os.path.isfile(certificate)
131145 assert os.path.isfile(private_key)
132146
133147
134 @pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.')
148 @pytest.mark.skipif(watchdog is None, reason="Watchdog not installed.")
135149 def test_reloader_broken_imports(tmpdir, dev_server):
136150 # We explicitly assert that the server reloads on change, even though in
137151 # this case the import could've just been retried. This is to assert
141155 # of directories, this only works for the watchdog reloader. The stat
142156 # reloader is too inefficient to watch such a large amount of files.
143157
144 real_app = tmpdir.join('real_app.py')
158 real_app = tmpdir.join("real_app.py")
145159 real_app.write("lol syntax error")
146160
147 server = dev_server('''
148 trials = []
149 def app(environ, start_response):
150 assert not trials, 'should have reloaded'
151 trials.append(1)
152 import real_app
153 return real_app.real_app(environ, start_response)
154
155 kwargs['use_reloader'] = True
156 kwargs['reloader_interval'] = 0.1
157 kwargs['reloader_type'] = 'watchdog'
158 ''')
161 server = dev_server(
162 """
163 trials = []
164 def app(environ, start_response):
165 assert not trials, 'should have reloaded'
166 trials.append(1)
167 import real_app
168 return real_app.real_app(environ, start_response)
169
170 kwargs['use_reloader'] = True
171 kwargs['reloader_interval'] = 0.1
172 kwargs['reloader_type'] = 'watchdog'
173 """
174 )
159175 server.wait_for_reloader_loop()
160176
161177 r = requests.get(server.url)
162178 assert r.status_code == 500
163179
164 real_app.write(textwrap.dedent('''
165 def real_app(environ, start_response):
166 start_response('200 OK', [('Content-Type', 'text/html')])
167 return [b'hello']
168 '''))
180 real_app.write(
181 textwrap.dedent(
182 """
183 def real_app(environ, start_response):
184 start_response('200 OK', [('Content-Type', 'text/html')])
185 return [b'hello']
186 """
187 )
188 )
169189 server.wait_for_reloader()
170190
171191 r = requests.get(server.url)
172192 assert r.status_code == 200
173 assert r.content == b'hello'
174
175
176 @pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.')
193 assert r.content == b"hello"
194
195
196 @pytest.mark.skipif(watchdog is None, reason="Watchdog not installed.")
177197 def test_reloader_nested_broken_imports(tmpdir, dev_server):
178 real_app = tmpdir.mkdir('real_app')
179 real_app.join('__init__.py').write('from real_app.sub import real_app')
180 sub = real_app.mkdir('sub').join('__init__.py')
198 real_app = tmpdir.mkdir("real_app")
199 real_app.join("__init__.py").write("from real_app.sub import real_app")
200 sub = real_app.mkdir("sub").join("__init__.py")
181201 sub.write("lol syntax error")
182202
183 server = dev_server('''
184 trials = []
185 def app(environ, start_response):
186 assert not trials, 'should have reloaded'
187 trials.append(1)
188 import real_app
189 return real_app.real_app(environ, start_response)
190
191 kwargs['use_reloader'] = True
192 kwargs['reloader_interval'] = 0.1
193 kwargs['reloader_type'] = 'watchdog'
194 ''')
203 server = dev_server(
204 """
205 trials = []
206 def app(environ, start_response):
207 assert not trials, 'should have reloaded'
208 trials.append(1)
209 import real_app
210 return real_app.real_app(environ, start_response)
211
212 kwargs['use_reloader'] = True
213 kwargs['reloader_interval'] = 0.1
214 kwargs['reloader_type'] = 'watchdog'
215 """
216 )
195217 server.wait_for_reloader_loop()
196218
197219 r = requests.get(server.url)
198220 assert r.status_code == 500
199221
200 sub.write(textwrap.dedent('''
201 def real_app(environ, start_response):
202 start_response('200 OK', [('Content-Type', 'text/html')])
203 return [b'hello']
204 '''))
222 sub.write(
223 textwrap.dedent(
224 """
225 def real_app(environ, start_response):
226 start_response('200 OK', [('Content-Type', 'text/html')])
227 return [b'hello']
228 """
229 )
230 )
205231 server.wait_for_reloader()
206232
207233 r = requests.get(server.url)
208234 assert r.status_code == 200
209 assert r.content == b'hello'
210
211
212 @pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.')
235 assert r.content == b"hello"
236
237
238 @pytest.mark.skipif(watchdog is None, reason="Watchdog not installed.")
213239 def test_reloader_reports_correct_file(tmpdir, dev_server):
214 real_app = tmpdir.join('real_app.py')
215 real_app.write(textwrap.dedent('''
216 def real_app(environ, start_response):
217 start_response('200 OK', [('Content-Type', 'text/html')])
218 return [b'hello']
219 '''))
220
221 server = dev_server('''
222 trials = []
223 def app(environ, start_response):
224 assert not trials, 'should have reloaded'
225 trials.append(1)
226 import real_app
227 return real_app.real_app(environ, start_response)
228
229 kwargs['use_reloader'] = True
230 kwargs['reloader_interval'] = 0.1
231 kwargs['reloader_type'] = 'watchdog'
232 ''')
240 real_app = tmpdir.join("real_app.py")
241 real_app.write(
242 textwrap.dedent(
243 """
244 def real_app(environ, start_response):
245 start_response('200 OK', [('Content-Type', 'text/html')])
246 return [b'hello']
247 """
248 )
249 )
250
251 server = dev_server(
252 """
253 trials = []
254 def app(environ, start_response):
255 assert not trials, 'should have reloaded'
256 trials.append(1)
257 import real_app
258 return real_app.real_app(environ, start_response)
259
260 kwargs['use_reloader'] = True
261 kwargs['reloader_interval'] = 0.1
262 kwargs['reloader_type'] = 'watchdog'
263 """
264 )
233265 server.wait_for_reloader_loop()
234266
235267 r = requests.get(server.url)
236268 assert r.status_code == 200
237 assert r.content == b'hello'
238
239 real_app_binary = tmpdir.join('real_app.pyc')
240 real_app_binary.write('anything is fine here')
269 assert r.content == b"hello"
270
271 real_app_binary = tmpdir.join("real_app.pyc")
272 real_app_binary.write("anything is fine here")
241273 server.wait_for_reloader()
242274
243275 change_event = " * Detected change in '%(path)s', reloading" % {
244 'path': real_app_binary
276 # need to double escape Windows paths
277 "path": str(real_app_binary).replace("\\", "\\\\")
245278 }
246279 server.logfile.seek(0)
247280 for i in range(20):
250283 if change_event in log:
251284 break
252285 else:
253 raise RuntimeError('Change event not detected.')
286 raise RuntimeError("Change event not detected.")
254287
255288
256289 def test_windows_get_args_for_reloading(monkeypatch, tmpdir):
257 test_py_exe = r'C:\Users\test\AppData\Local\Programs\Python\Python36\python.exe'
258 monkeypatch.setattr(os, 'name', 'nt')
259 monkeypatch.setattr(sys, 'executable', test_py_exe)
260 test_exe = tmpdir.mkdir('test').join('test.exe')
261 monkeypatch.setattr(sys, 'argv', [test_exe.strpath, 'run'])
290 test_py_exe = r"C:\Users\test\AppData\Local\Programs\Python\Python36\python.exe"
291 monkeypatch.setattr(os, "name", "nt")
292 monkeypatch.setattr(sys, "executable", test_py_exe)
293 test_exe = tmpdir.mkdir("test").join("test.exe")
294 monkeypatch.setattr(sys, "argv", [test_exe.strpath, "run"])
262295 rv = _reloader._get_args_for_reloading()
263 assert rv == [test_exe.strpath, 'run']
264
265
266 def test_monkeypached_sleep(tmpdir):
296 assert rv == [test_exe.strpath, "run"]
297
298
299 def test_monkeypatched_sleep(tmpdir):
267300 # removing the staticmethod wrapper in the definition of
268301 # ReloaderLoop._sleep works most of the time, since `sleep` is a c
269302 # function, and unlike python functions which are descriptors, doesn't
271304 # `eventlet.monkey_patch` before importing `_reloader`, `time.sleep` is a
272305 # python function, and subsequently calling `ReloaderLoop._sleep` fails
273306 # with a TypeError. This test checks that _sleep is attached correctly.
274 script = tmpdir.mkdir('app').join('test.py')
275 script.write(textwrap.dedent('''
276 import time
277
278 def sleep(secs):
279 pass
280
281 # simulate eventlet.monkey_patch by replacing the builtin sleep
282 # with a regular function before _reloader is imported
283 time.sleep = sleep
284
285 from werkzeug._reloader import ReloaderLoop
286 ReloaderLoop()._sleep(0)
287 '''))
288 subprocess.check_call(['python', str(script)])
307 script = tmpdir.mkdir("app").join("test.py")
308 script.write(
309 textwrap.dedent(
310 """
311 import time
312
313 def sleep(secs):
314 pass
315
316 # simulate eventlet.monkey_patch by replacing the builtin sleep
317 # with a regular function before _reloader is imported
318 time.sleep = sleep
319
320 from werkzeug._reloader import ReloaderLoop
321 ReloaderLoop()._sleep(0)
322 """
323 )
324 )
325 subprocess.check_call([sys.executable, str(script)])
289326
290327
291328 def test_wrong_protocol(dev_server):
293330 # traceback
294331 # See https://github.com/pallets/werkzeug/pull/838
295332
296 server = dev_server('''
297 def app(environ, start_response):
298 start_response('200 OK', [('Content-Type', 'text/html')])
299 return [b'hello']
300 ''')
333 server = dev_server(
334 """
335 def app(environ, start_response):
336 start_response('200 OK', [('Content-Type', 'text/html')])
337 return [b'hello']
338 """
339 )
301340 with pytest.raises(requests.exceptions.ConnectionError):
302 requests.get('https://%s/' % server.addr)
341 requests.get("https://%s/" % server.addr)
303342
304343 log = server.logfile.read()
305 assert 'Traceback' not in log
306 assert '\n127.0.0.1' in log
344 assert "Traceback" not in log
345 assert "\n127.0.0.1" in log
307346
308347
309348 def test_absent_content_length_and_content_type(dev_server):
310 server = dev_server('''
311 def app(environ, start_response):
312 assert 'CONTENT_LENGTH' not in environ
313 assert 'CONTENT_TYPE' not in environ
314 start_response('200 OK', [('Content-Type', 'text/html')])
315 return [b'YES']
316 ''')
349 server = dev_server(
350 """
351 def app(environ, start_response):
352 assert 'CONTENT_LENGTH' not in environ
353 assert 'CONTENT_TYPE' not in environ
354 start_response('200 OK', [('Content-Type', 'text/html')])
355 return [b'YES']
356 """
357 )
317358
318359 r = requests.get(server.url)
319 assert r.content == b'YES'
360 assert r.content == b"YES"
320361
321362
322363 def test_set_content_length_and_content_type_if_provided_by_client(dev_server):
323 server = dev_server('''
324 def app(environ, start_response):
325 assert environ['CONTENT_LENGTH'] == '233'
326 assert environ['CONTENT_TYPE'] == 'application/json'
327 start_response('200 OK', [('Content-Type', 'text/html')])
328 return [b'YES']
329 ''')
330
331 r = requests.get(server.url, headers={
332 'content_length': '233',
333 'content_type': 'application/json'
334 })
335 assert r.content == b'YES'
364 server = dev_server(
365 """
366 def app(environ, start_response):
367 assert environ['CONTENT_LENGTH'] == '233'
368 assert environ['CONTENT_TYPE'] == 'application/json'
369 start_response('200 OK', [('Content-Type', 'text/html')])
370 return [b'YES']
371 """
372 )
373
374 r = requests.get(
375 server.url,
376 headers={"content_length": "233", "content_type": "application/json"},
377 )
378 assert r.content == b"YES"
336379
337380
338381 def test_port_must_be_integer(dev_server):
339382 def app(environ, start_response):
340 start_response('200 OK', [('Content-Type', 'text/html')])
341 return [b'hello']
383 start_response("200 OK", [("Content-Type", "text/html")])
384 return [b"hello"]
342385
343386 with pytest.raises(TypeError) as excinfo:
344 serving.run_simple(hostname='localhost', port='5001',
345 application=app, use_reloader=True)
346 assert 'port must be an integer' in str(excinfo.value)
387 serving.run_simple(
388 hostname="localhost", port="5001", application=app, use_reloader=True
389 )
390 assert "port must be an integer" in str(excinfo.value)
347391
348392 with pytest.raises(TypeError) as excinfo:
349 serving.run_simple(hostname='localhost', port='5001',
350 application=app, use_reloader=False)
351 assert 'port must be an integer' in str(excinfo.value)
393 serving.run_simple(
394 hostname="localhost", port="5001", application=app, use_reloader=False
395 )
396 assert "port must be an integer" in str(excinfo.value)
352397
353398
354399 def test_chunked_encoding(dev_server):
355 server = dev_server(r'''
356 from werkzeug.wrappers import Request
357 def app(environ, start_response):
358 assert environ['HTTP_TRANSFER_ENCODING'] == 'chunked'
359 assert environ.get('wsgi.input_terminated', False)
360 request = Request(environ)
361 assert request.mimetype == 'multipart/form-data'
362 assert request.files['file'].read() == b'This is a test\n'
363 assert request.form['type'] == 'text/plain'
364 start_response('200 OK', [('Content-Type', 'text/plain')])
365 return [b'YES']
366 ''')
367
368 testfile = os.path.join(os.path.dirname(__file__), 'res', 'chunked.txt')
369
370 if sys.version_info[0] == 2:
371 from httplib import HTTPConnection
372 else:
373 from http.client import HTTPConnection
374
375 conn = HTTPConnection('127.0.0.1', server.port)
400 server = dev_server(
401 r"""
402 from werkzeug.wrappers import Request
403 def app(environ, start_response):
404 assert environ['HTTP_TRANSFER_ENCODING'] == 'chunked'
405 assert environ.get('wsgi.input_terminated', False)
406 request = Request(environ)
407 assert request.mimetype == 'multipart/form-data'
408 assert request.files['file'].read() == b'This is a test\n'
409 assert request.form['type'] == 'text/plain'
410 start_response('200 OK', [('Content-Type', 'text/plain')])
411 return [b'YES']
412 """
413 )
414
415 testfile = os.path.join(os.path.dirname(__file__), "res", "chunked.http")
416
417 conn = httplib.HTTPConnection("127.0.0.1", server.port)
376418 conn.connect()
377 conn.putrequest('POST', '/', skip_host=1, skip_accept_encoding=1)
378 conn.putheader('Accept', 'text/plain')
379 conn.putheader('Transfer-Encoding', 'chunked')
419 conn.putrequest("POST", "/", skip_host=1, skip_accept_encoding=1)
420 conn.putheader("Accept", "text/plain")
421 conn.putheader("Transfer-Encoding", "chunked")
380422 conn.putheader(
381 'Content-Type',
382 'multipart/form-data; boundary='
383 '--------------------------898239224156930639461866')
423 "Content-Type",
424 "multipart/form-data; boundary="
425 "--------------------------898239224156930639461866",
426 )
384427 conn.endheaders()
385428
386 with open(testfile, 'rb') as f:
429 with open(testfile, "rb") as f:
387430 conn.send(f.read())
388431
389432 res = conn.getresponse()
390433 assert res.status == 200
391 assert res.read() == b'YES'
434 assert res.read() == b"YES"
392435
393436 conn.close()
394437
395438
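The two chunked-upload tests around here replay a pre-framed body from the `chunked.http` fixture. As a reminder of what that framing looks like on the wire, here is a minimal sketch of HTTP/1.1 chunked transfer coding (a hypothetical helper for illustration, not a Werkzeug API):

```python
def encode_chunked(data, chunk_size=16):
    """Frame ``data`` as chunked transfer coding: each chunk is
    ``<hex length>\\r\\n<bytes>\\r\\n``, terminated by a zero-size chunk."""
    out = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        out.append(b"%x\r\n" % len(chunk))  # chunk size in hex
        out.append(chunk)
        out.append(b"\r\n")
    out.append(b"0\r\n\r\n")  # zero-length chunk ends the body
    return b"".join(out)
```

A request framed this way carries `Transfer-Encoding: chunked` and no meaningful `Content-Length`, which is exactly the condition the dev server surfaces to the app as `wsgi.input_terminated`.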
396439 def test_chunked_encoding_with_content_length(dev_server):
397 server = dev_server(r'''
398 from werkzeug.wrappers import Request
399 def app(environ, start_response):
400 assert environ['HTTP_TRANSFER_ENCODING'] == 'chunked'
401 assert environ.get('wsgi.input_terminated', False)
402 request = Request(environ)
403 assert request.mimetype == 'multipart/form-data'
404 assert request.files['file'].read() == b'This is a test\n'
405 assert request.form['type'] == 'text/plain'
406 start_response('200 OK', [('Content-Type', 'text/plain')])
407 return [b'YES']
408 ''')
409
410 testfile = os.path.join(os.path.dirname(__file__), 'res', 'chunked.txt')
411
412 if sys.version_info[0] == 2:
413 from httplib import HTTPConnection
414 else:
415 from http.client import HTTPConnection
416
417 conn = HTTPConnection('127.0.0.1', server.port)
440 server = dev_server(
441 r"""
442 from werkzeug.wrappers import Request
443 def app(environ, start_response):
444 assert environ['HTTP_TRANSFER_ENCODING'] == 'chunked'
445 assert environ.get('wsgi.input_terminated', False)
446 request = Request(environ)
447 assert request.mimetype == 'multipart/form-data'
448 assert request.files['file'].read() == b'This is a test\n'
449 assert request.form['type'] == 'text/plain'
450 start_response('200 OK', [('Content-Type', 'text/plain')])
451 return [b'YES']
452 """
453 )
454
455 testfile = os.path.join(os.path.dirname(__file__), "res", "chunked.http")
456
457 conn = httplib.HTTPConnection("127.0.0.1", server.port)
418458 conn.connect()
419 conn.putrequest('POST', '/', skip_host=1, skip_accept_encoding=1)
420 conn.putheader('Accept', 'text/plain')
421 conn.putheader('Transfer-Encoding', 'chunked')
459 conn.putrequest("POST", "/", skip_host=1, skip_accept_encoding=1)
460 conn.putheader("Accept", "text/plain")
461 conn.putheader("Transfer-Encoding", "chunked")
422462 # Content-Length is invalid for chunked, but some libraries might send it
423 conn.putheader('Content-Length', '372')
463 conn.putheader("Content-Length", "372")
424464 conn.putheader(
425 'Content-Type',
426 'multipart/form-data; boundary='
427 '--------------------------898239224156930639461866')
465 "Content-Type",
466 "multipart/form-data; boundary="
467 "--------------------------898239224156930639461866",
468 )
428469 conn.endheaders()
429470
430 with open(testfile, 'rb') as f:
471 with open(testfile, "rb") as f:
431472 conn.send(f.read())
432473
433474 res = conn.getresponse()
434475 assert res.status == 200
435 assert res.read() == b'YES'
476 assert res.read() == b"YES"
436477
437478 conn.close()
479
480
481 def test_multiple_headers_concatenated_per_rfc_3875_section_4_1_18(dev_server):
482 server = dev_server(
483 r"""
484 from werkzeug.wrappers import Response
485 def app(environ, start_response):
486 start_response('200 OK', [('Content-Type', 'text/plain')])
487 return [environ['HTTP_XYZ'].encode()]
488 """
489 )
490
491 conn = httplib.HTTPConnection("127.0.0.1", server.port)
492 conn.connect()
493 conn.putrequest("GET", "/")
494 conn.putheader("Accept", "text/plain")
495 conn.putheader("XYZ", " a ")
496 conn.putheader("X-INGNORE-1", "Some nonsense")
497 conn.putheader("XYZ", " b")
498 conn.putheader("X-INGNORE-2", "Some nonsense")
499 conn.putheader("XYZ", "c ")
500 conn.putheader("X-INGNORE-3", "Some nonsense")
501 conn.putheader("XYZ", "d")
502 conn.endheaders()
503 conn.send(b"")
504 res = conn.getresponse()
505
506 assert res.status == 200
507 assert res.read() == b"a ,b,c ,d"
508
509 conn.close()
510
511
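The test above expects four `XYZ` headers to reach the app as one comma-joined `HTTP_XYZ` value. A simplified model of what a CGI/WSGI gateway does per RFC 3875 section 4.1.18 (illustrative only; header parsers strip whitespace after the colon but keep trailing whitespace, hence `b"a ,b,c ,d"`):

```python
def headers_to_environ(headers):
    """Collapse repeated request headers into HTTP_* environ keys,
    joining repeated values with commas (RFC 3875 section 4.1.18)."""
    environ = {}
    for name, value in headers:
        key = "HTTP_" + name.upper().replace("-", "_")
        value = value.lstrip()  # whitespace after ':' is not part of the value
        if key in environ:
            environ[key] = environ[key] + "," + value
        else:
            environ[key] = value
    return environ
```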
512 def test_multiline_header_folding_for_http_1_1(dev_server):
513 """
514 This is testing the provision of multi-line header folding per:
515 * RFC 2616 Section 2.2
516 * RFC 3875 Section 4.1.18
517 """
518 server = dev_server(
519 r"""
520 from werkzeug.wrappers import Response
521 def app(environ, start_response):
522 start_response('200 OK', [('Content-Type', 'text/plain')])
523 return [environ['HTTP_XYZ'].encode()]
524 """
525 )
526
527 conn = httplib.HTTPConnection("127.0.0.1", server.port)
528 conn.connect()
529 conn.putrequest("GET", "/")
530 conn.putheader("Accept", "text/plain")
531 conn.putheader("XYZ", "first-line", "second-line", "third-line")
532 conn.endheaders()
533 conn.send(b"")
534 res = conn.getresponse()
535
536 assert res.status == 200
537 assert res.read() == b"first-line\tsecond-line\tthird-line"
538
539 conn.close()
540
541
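The folding test above relies on `http.client.HTTPConnection.putheader` joining its extra value arguments with CRLF plus TAB, producing the obsolete line folding of RFC 2616 section 2.2; unfolding removes only the CRLF and keeps the TAB, which is why the app sees `first-line\tsecond-line\tthird-line`. A sketch of both directions (hypothetical helpers, stdlib-free):

```python
def fold_header(*lines):
    """Join header value lines with CRLF + TAB, the obsolete folding
    that http.client.putheader emits for multiple value arguments."""
    return "\r\n\t".join(lines)


def unfold_header(raw):
    """Unfold a folded header value: drop the CRLF, keep the leading
    TAB/space of each continuation line as a separator."""
    return raw.replace("\r\n", "")
```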
542 def can_test_unix_socket():
543 if not hasattr(socket, "AF_UNIX"):
544 return False
545 try:
546 import requests_unixsocket # noqa: F401
547 except ImportError:
548 return False
549 return True
550
551
552 @pytest.mark.skipif(not can_test_unix_socket(), reason="Only works on UNIX")
553 def test_unix_socket(tmpdir, dev_server):
554 socket_f = str(tmpdir.join("socket"))
555 dev_server(
556 """
557 app = None
558 kwargs['hostname'] = {socket!r}
559 """.format(
560 socket="unix://" + socket_f
561 )
562 )
563 assert os.path.exists(socket_f)
44
55 Tests the testing tools.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10
11 from __future__ import with_statement
10 import json
11 import sys
12 from functools import partial
13 from io import BytesIO
1214
1315 import pytest
1416
15 import sys
16 from io import BytesIO
17 from werkzeug._compat import iteritems, to_bytes, implements_iterator
18 from functools import partial
19
20 from tests import strict_eq
21
22 from werkzeug.wrappers import Request, Response, BaseResponse
23 from werkzeug.test import Client, EnvironBuilder, create_environ, \
24 ClientRedirectError, stream_encode_multipart, run_wsgi_app
17 from . import strict_eq
18 from werkzeug._compat import implements_iterator
19 from werkzeug._compat import iteritems
20 from werkzeug._compat import to_bytes
21 from werkzeug.datastructures import FileStorage
22 from werkzeug.datastructures import MultiDict
23 from werkzeug.formparser import parse_form_data
24 from werkzeug.test import Client
25 from werkzeug.test import ClientRedirectError
26 from werkzeug.test import create_environ
27 from werkzeug.test import EnvironBuilder
28 from werkzeug.test import run_wsgi_app
29 from werkzeug.test import stream_encode_multipart
2530 from werkzeug.utils import redirect
26 from werkzeug.formparser import parse_form_data
27 from werkzeug.datastructures import MultiDict, FileStorage
31 from werkzeug.wrappers import BaseResponse
32 from werkzeug.wrappers import Request
33 from werkzeug.wrappers import Response
34 from werkzeug.wsgi import pop_path_info
2835
2936
3037 def cookie_app(environ, start_response):
3138 """A WSGI application which sets a cookie, and returns as a response any
3239 cookie which exists.
3340 """
34 response = Response(environ.get('HTTP_COOKIE', 'No Cookie'),
35 mimetype='text/plain')
36 response.set_cookie('test', 'test')
41 response = Response(environ.get("HTTP_COOKIE", "No Cookie"), mimetype="text/plain")
42 response.set_cookie("test", "test")
3743 return response(environ, start_response)
3844
3945
4046 def redirect_loop_app(environ, start_response):
41 response = redirect('http://localhost/some/redirect/')
47 response = redirect("http://localhost/some/redirect/")
4248 return response(environ, start_response)
4349
4450
4551 def redirect_with_get_app(environ, start_response):
4652 req = Request(environ)
47 if req.url not in ('http://localhost/',
48 'http://localhost/first/request',
49 'http://localhost/some/redirect/'):
50 assert False, 'redirect_demo_app() did not expect URL "%s"' % req.url
51 if '/some/redirect' not in req.url:
52 response = redirect('http://localhost/some/redirect/')
53 if req.url not in (
54 "http://localhost/",
55 "http://localhost/first/request",
56 "http://localhost/some/redirect/",
57 ):
58 raise AssertionError('redirect_demo_app() did not expect URL "%s"' % req.url)
59 if "/some/redirect" not in req.url:
60 response = redirect("http://localhost/some/redirect/")
5361 else:
54 response = Response('current url: %s' % req.url)
62 response = Response("current url: %s" % req.url)
5563 return response(environ, start_response)
5664
5765
58 def redirect_with_post_app(environ, start_response):
59 req = Request(environ)
60 if req.url == 'http://localhost/some/redirect/':
61 assert req.method == 'GET', 'request should be GET'
62 assert not req.form, 'request should not have data'
63 response = Response('current url: %s' % req.url)
66 def external_redirect_demo_app(environ, start_response):
67 response = redirect("http://example.com/")
68 return response(environ, start_response)
69
70
71 def external_subdomain_redirect_demo_app(environ, start_response):
72 if "test.example.com" in environ["HTTP_HOST"]:
73 response = Response("redirected successfully to subdomain")
6474 else:
65 response = redirect('http://localhost/some/redirect/')
66 return response(environ, start_response)
67
68
69 def external_redirect_demo_app(environ, start_response):
70 response = redirect('http://example.com/')
71 return response(environ, start_response)
72
73
74 def external_subdomain_redirect_demo_app(environ, start_response):
75 if 'test.example.com' in environ['HTTP_HOST']:
76 response = Response('redirected successfully to subdomain')
77 else:
78 response = redirect('http://test.example.com/login')
75 response = redirect("http://test.example.com/login")
7976 return response(environ, start_response)
8077
8178
8279 def multi_value_post_app(environ, start_response):
8380 req = Request(environ)
84 assert req.form['field'] == 'val1', req.form['field']
85 assert req.form.getlist('field') == ['val1', 'val2'], req.form.getlist('field')
86 response = Response('ok')
81 assert req.form["field"] == "val1", req.form["field"]
82 assert req.form.getlist("field") == ["val1", "val2"], req.form.getlist("field")
83 response = Response("ok")
8784 return response(environ, start_response)
8885
8986
9087 def test_cookie_forging():
9188 c = Client(cookie_app)
92 c.set_cookie('localhost', 'foo', 'bar')
89 c.set_cookie("localhost", "foo", "bar")
9390 appiter, code, headers = c.open()
94 strict_eq(list(appiter), [b'foo=bar'])
91 strict_eq(list(appiter), [b"foo=bar"])
9592
9693
9794 def test_set_cookie_app():
9895 c = Client(cookie_app)
9996 appiter, code, headers = c.open()
100 assert 'Set-Cookie' in dict(headers)
97 assert "Set-Cookie" in dict(headers)
10198
10299
103100 def test_cookiejar_stores_cookie():
104101 c = Client(cookie_app)
105102 appiter, code, headers = c.open()
106 assert 'test' in c.cookie_jar._cookies['localhost.local']['/']
103 assert "test" in c.cookie_jar._cookies["localhost.local"]["/"]
107104
108105
109106 def test_no_initial_cookie():
110107 c = Client(cookie_app)
111108 appiter, code, headers = c.open()
112 strict_eq(b''.join(appiter), b'No Cookie')
109 strict_eq(b"".join(appiter), b"No Cookie")
113110
114111
115112 def test_resent_cookie():
116113 c = Client(cookie_app)
117114 c.open()
118115 appiter, code, headers = c.open()
119 strict_eq(b''.join(appiter), b'test=test')
116 strict_eq(b"".join(appiter), b"test=test")
120117
121118
122119 def test_disable_cookies():
123120 c = Client(cookie_app, use_cookies=False)
124121 c.open()
125122 appiter, code, headers = c.open()
126 strict_eq(b''.join(appiter), b'No Cookie')
123 strict_eq(b"".join(appiter), b"No Cookie")
127124
128125
129126 def test_cookie_for_different_path():
130127 c = Client(cookie_app)
131 c.open('/path1')
132 appiter, code, headers = c.open('/path2')
133 strict_eq(b''.join(appiter), b'test=test')
128 c.open("/path1")
129 appiter, code, headers = c.open("/path2")
130 strict_eq(b"".join(appiter), b"test=test")
134131
135132
136133 def test_environ_builder_basics():
137134 b = EnvironBuilder()
138135 assert b.content_type is None
139 b.method = 'POST'
136 b.method = "POST"
140137 assert b.content_type is None
141 b.form['test'] = 'normal value'
142 assert b.content_type == 'application/x-www-form-urlencoded'
143 b.files.add_file('test', BytesIO(b'test contents'), 'test.txt')
144 assert b.files['test'].content_type == 'text/plain'
145 b.form['test_int'] = 1
146 assert b.content_type == 'multipart/form-data'
138 b.form["test"] = "normal value"
139 assert b.content_type == "application/x-www-form-urlencoded"
140 b.files.add_file("test", BytesIO(b"test contents"), "test.txt")
141 assert b.files["test"].content_type == "text/plain"
142 b.form["test_int"] = 1
143 assert b.content_type == "multipart/form-data"
147144
148145 req = b.get_request()
149146 b.close()
150147
151 strict_eq(req.url, u'http://localhost/')
152 strict_eq(req.method, 'POST')
153 strict_eq(req.form['test'], u'normal value')
154 assert req.files['test'].content_type == 'text/plain'
155 strict_eq(req.files['test'].filename, u'test.txt')
156 strict_eq(req.files['test'].read(), b'test contents')
148 strict_eq(req.url, u"http://localhost/")
149 strict_eq(req.method, "POST")
150 strict_eq(req.form["test"], u"normal value")
151 assert req.files["test"].content_type == "text/plain"
152 strict_eq(req.files["test"].filename, u"test.txt")
153 strict_eq(req.files["test"].read(), b"test contents")
157154
158155
159156 def test_environ_builder_data():
160 b = EnvironBuilder(data='foo')
161 assert b.input_stream.getvalue() == b'foo'
162 b = EnvironBuilder(data=b'foo')
163 assert b.input_stream.getvalue() == b'foo'
164
165 b = EnvironBuilder(data={'foo': 'bar'})
166 assert b.form['foo'] == 'bar'
167 b = EnvironBuilder(data={'foo': ['bar1', 'bar2']})
168 assert b.form.getlist('foo') == ['bar1', 'bar2']
157 b = EnvironBuilder(data="foo")
158 assert b.input_stream.getvalue() == b"foo"
159 b = EnvironBuilder(data=b"foo")
160 assert b.input_stream.getvalue() == b"foo"
161
162 b = EnvironBuilder(data={"foo": "bar"})
163 assert b.form["foo"] == "bar"
164 b = EnvironBuilder(data={"foo": ["bar1", "bar2"]})
165 assert b.form.getlist("foo") == ["bar1", "bar2"]
169166
170167 def check_list_content(b, length):
171 foo = b.files.getlist('foo')
168 foo = b.files.getlist("foo")
172169 assert len(foo) == length
173170 for obj in foo:
174171 assert isinstance(obj, FileStorage)
175172
176 b = EnvironBuilder(data={'foo': BytesIO()})
173 b = EnvironBuilder(data={"foo": BytesIO()})
177174 check_list_content(b, 1)
178 b = EnvironBuilder(data={'foo': [BytesIO(), BytesIO()]})
175 b = EnvironBuilder(data={"foo": [BytesIO(), BytesIO()]})
179176 check_list_content(b, 2)
180177
181 b = EnvironBuilder(data={'foo': (BytesIO(),)})
178 b = EnvironBuilder(data={"foo": (BytesIO(),)})
182179 check_list_content(b, 1)
183 b = EnvironBuilder(data={'foo': [(BytesIO(),), (BytesIO(),)]})
180 b = EnvironBuilder(data={"foo": [(BytesIO(),), (BytesIO(),)]})
184181 check_list_content(b, 2)
185182
186183
184 def test_environ_builder_json():
185 @Request.application
186 def app(request):
187 assert request.content_type == "application/json"
188 return Response(json.loads(request.get_data(as_text=True))["foo"])
189
190 c = Client(app, Response)
191 response = c.post("/", json={"foo": "bar"})
192 assert response.data == b"bar"
193
194 with pytest.raises(TypeError):
195 c.post("/", json={"foo": "bar"}, data={"baz": "qux"})
196
197
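The test above exercises the `json=` argument added to the test `Client` in 0.15. Roughly, it serializes the payload, sets the content type, and refuses to be combined with `data=`; a simplified model of that behavior (an assumption for illustration, not the actual implementation):

```python
import json


def build_json_kwargs(payload, data=None):
    """Model what passing json= to the test client does: serialize the
    payload and set an application/json content type. Passing data=
    at the same time is an error."""
    if data is not None:
        raise TypeError("can't pass both 'json' and 'data'")
    return {"data": json.dumps(payload), "content_type": "application/json"}
```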
187198 def test_environ_builder_headers():
188 b = EnvironBuilder(environ_base={'HTTP_USER_AGENT': 'Foo/0.1'},
189 environ_overrides={'wsgi.version': (1, 1)})
190 b.headers['X-Beat-My-Horse'] = 'very well sir'
199 b = EnvironBuilder(
200 environ_base={"HTTP_USER_AGENT": "Foo/0.1"},
201 environ_overrides={"wsgi.version": (1, 1)},
202 )
203 b.headers["X-Beat-My-Horse"] = "very well sir"
191204 env = b.get_environ()
192 strict_eq(env['HTTP_USER_AGENT'], 'Foo/0.1')
193 strict_eq(env['HTTP_X_BEAT_MY_HORSE'], 'very well sir')
194 strict_eq(env['wsgi.version'], (1, 1))
195
196 b.headers['User-Agent'] = 'Bar/1.0'
205 strict_eq(env["HTTP_USER_AGENT"], "Foo/0.1")
206 strict_eq(env["HTTP_X_BEAT_MY_HORSE"], "very well sir")
207 strict_eq(env["wsgi.version"], (1, 1))
208
209 b.headers["User-Agent"] = "Bar/1.0"
197210 env = b.get_environ()
198 strict_eq(env['HTTP_USER_AGENT'], 'Bar/1.0')
211 strict_eq(env["HTTP_USER_AGENT"], "Bar/1.0")
199212
200213
201214 def test_environ_builder_headers_content_type():
202 b = EnvironBuilder(headers={'Content-Type': 'text/plain'})
215 b = EnvironBuilder(headers={"Content-Type": "text/plain"})
203216 env = b.get_environ()
204 assert env['CONTENT_TYPE'] == 'text/plain'
205 b = EnvironBuilder(content_type='text/html',
206 headers={'Content-Type': 'text/plain'})
217 assert env["CONTENT_TYPE"] == "text/plain"
218 b = EnvironBuilder(content_type="text/html", headers={"Content-Type": "text/plain"})
207219 env = b.get_environ()
208 assert env['CONTENT_TYPE'] == 'text/html'
220 assert env["CONTENT_TYPE"] == "text/html"
221 b = EnvironBuilder()
222 env = b.get_environ()
223 assert "CONTENT_TYPE" not in env
209224
210225
211226 def test_environ_builder_paths():
212 b = EnvironBuilder(path='/foo', base_url='http://example.com/')
213 strict_eq(b.base_url, 'http://example.com/')
214 strict_eq(b.path, '/foo')
215 strict_eq(b.script_root, '')
216 strict_eq(b.host, 'example.com')
217
218 b = EnvironBuilder(path='/foo', base_url='http://example.com/bar')
219 strict_eq(b.base_url, 'http://example.com/bar/')
220 strict_eq(b.path, '/foo')
221 strict_eq(b.script_root, '/bar')
222 strict_eq(b.host, 'example.com')
223
224 b.host = 'localhost'
225 strict_eq(b.base_url, 'http://localhost/bar/')
226 b.base_url = 'http://localhost:8080/'
227 strict_eq(b.host, 'localhost:8080')
228 strict_eq(b.server_name, 'localhost')
227 b = EnvironBuilder(path="/foo", base_url="http://example.com/")
228 strict_eq(b.base_url, "http://example.com/")
229 strict_eq(b.path, "/foo")
230 strict_eq(b.script_root, "")
231 strict_eq(b.host, "example.com")
232
233 b = EnvironBuilder(path="/foo", base_url="http://example.com/bar")
234 strict_eq(b.base_url, "http://example.com/bar/")
235 strict_eq(b.path, "/foo")
236 strict_eq(b.script_root, "/bar")
237 strict_eq(b.host, "example.com")
238
239 b.host = "localhost"
240 strict_eq(b.base_url, "http://localhost/bar/")
241 b.base_url = "http://localhost:8080/"
242 strict_eq(b.host, "localhost:8080")
243 strict_eq(b.server_name, "localhost")
229244 strict_eq(b.server_port, 8080)
230245
231 b.host = 'foo.invalid'
232 b.url_scheme = 'https'
233 b.script_root = '/test'
246 b.host = "foo.invalid"
247 b.url_scheme = "https"
248 b.script_root = "/test"
234249 env = b.get_environ()
235 strict_eq(env['SERVER_NAME'], 'foo.invalid')
236 strict_eq(env['SERVER_PORT'], '443')
237 strict_eq(env['SCRIPT_NAME'], '/test')
238 strict_eq(env['PATH_INFO'], '/foo')
239 strict_eq(env['HTTP_HOST'], 'foo.invalid')
240 strict_eq(env['wsgi.url_scheme'], 'https')
241 strict_eq(b.base_url, 'https://foo.invalid/test/')
250 strict_eq(env["SERVER_NAME"], "foo.invalid")
251 strict_eq(env["SERVER_PORT"], "443")
252 strict_eq(env["SCRIPT_NAME"], "/test")
253 strict_eq(env["PATH_INFO"], "/foo")
254 strict_eq(env["HTTP_HOST"], "foo.invalid")
255 strict_eq(env["wsgi.url_scheme"], "https")
256 strict_eq(b.base_url, "https://foo.invalid/test/")
242257
243258
244259 def test_environ_builder_content_type():
245260 builder = EnvironBuilder()
246261 assert builder.content_type is None
247 builder.method = 'POST'
262 builder.method = "POST"
248263 assert builder.content_type is None
249 builder.method = 'PUT'
264 builder.method = "PUT"
250265 assert builder.content_type is None
251 builder.method = 'PATCH'
266 builder.method = "PATCH"
252267 assert builder.content_type is None
253 builder.method = 'DELETE'
268 builder.method = "DELETE"
254269 assert builder.content_type is None
255 builder.method = 'GET'
270 builder.method = "GET"
256271 assert builder.content_type is None
257 builder.form['foo'] = 'bar'
258 assert builder.content_type == 'application/x-www-form-urlencoded'
259 builder.files.add_file('blafasel', BytesIO(b'foo'), 'test.txt')
260 assert builder.content_type == 'multipart/form-data'
272 builder.form["foo"] = "bar"
273 assert builder.content_type == "application/x-www-form-urlencoded"
274 builder.files.add_file("blafasel", BytesIO(b"foo"), "test.txt")
275 assert builder.content_type == "multipart/form-data"
261276 req = builder.get_request()
262 strict_eq(req.form['foo'], u'bar')
263 strict_eq(req.files['blafasel'].read(), b'foo')
277 strict_eq(req.form["foo"], u"bar")
278 strict_eq(req.files["blafasel"].read(), b"foo")
264279
265280
266281 def test_environ_builder_stream_switch():
267 d = MultiDict(dict(foo=u'bar', blub=u'blah', hu=u'hum'))
282 d = MultiDict(dict(foo=u"bar", blub=u"blah", hu=u"hum"))
268283 for use_tempfile in False, True:
269284 stream, length, boundary = stream_encode_multipart(
270 d, use_tempfile, threshold=150)
285 d, use_tempfile, threshold=150
286 )
271287 assert isinstance(stream, BytesIO) != use_tempfile
272288
273 form = parse_form_data({'wsgi.input': stream, 'CONTENT_LENGTH': str(length),
274 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' %
275 boundary})[1]
289 form = parse_form_data(
290 {
291 "wsgi.input": stream,
292 "CONTENT_LENGTH": str(length),
293 "CONTENT_TYPE": 'multipart/form-data; boundary="%s"' % boundary,
294 }
295 )[1]
276296 strict_eq(form, d)
277297 stream.close()
278298
279299
280300 def test_environ_builder_unicode_file_mix():
281301 for use_tempfile in False, True:
282 f = FileStorage(BytesIO(u'\N{SNOWMAN}'.encode('utf-8')),
283 'snowman.txt')
284 d = MultiDict(dict(f=f, s=u'\N{SNOWMAN}'))
302 f = FileStorage(BytesIO(u"\N{SNOWMAN}".encode("utf-8")), "snowman.txt")
303 d = MultiDict(dict(f=f, s=u"\N{SNOWMAN}"))
285304 stream, length, boundary = stream_encode_multipart(
286 d, use_tempfile, threshold=150)
305 d, use_tempfile, threshold=150
306 )
287307 assert isinstance(stream, BytesIO) != use_tempfile
288308
289 _, form, files = parse_form_data({
290 'wsgi.input': stream,
291 'CONTENT_LENGTH': str(length),
292 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' %
293 boundary
294 })
295 strict_eq(form['s'], u'\N{SNOWMAN}')
296 strict_eq(files['f'].name, 'f')
297 strict_eq(files['f'].filename, u'snowman.txt')
298 strict_eq(files['f'].read(),
299 u'\N{SNOWMAN}'.encode('utf-8'))
309 _, form, files = parse_form_data(
310 {
311 "wsgi.input": stream,
312 "CONTENT_LENGTH": str(length),
313 "CONTENT_TYPE": 'multipart/form-data; boundary="%s"' % boundary,
314 }
315 )
316 strict_eq(form["s"], u"\N{SNOWMAN}")
317 strict_eq(files["f"].name, "f")
318 strict_eq(files["f"].filename, u"snowman.txt")
319 strict_eq(files["f"].read(), u"\N{SNOWMAN}".encode("utf-8"))
300320 stream.close()
301321
302322
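The round-trip tests above encode a `MultiDict` with `stream_encode_multipart` and parse it back with `parse_form_data`. For reference, a hand-rolled sketch of the `multipart/form-data` framing involved (assumption: text fields only, no files, no streaming; much simpler than Werkzeug's real encoder):

```python
def encode_form_field(boundary, name, value):
    """One multipart part: boundary line, Content-Disposition header,
    blank line, then the field value."""
    return (
        "--%s\r\n"
        'Content-Disposition: form-data; name="%s"\r\n'
        "\r\n"
        "%s\r\n" % (boundary, name, value)
    )


def encode_form(boundary, fields):
    """Concatenate all parts and append the closing boundary."""
    body = "".join(encode_form_field(boundary, n, v) for n, v in fields)
    return body + "--%s--\r\n" % boundary
```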
303323 def test_create_environ():
304 env = create_environ('/foo?bar=baz', 'http://example.org/')
324 env = create_environ("/foo?bar=baz", "http://example.org/")
305325 expected = {
306 'wsgi.multiprocess': False,
307 'wsgi.version': (1, 0),
308 'wsgi.run_once': False,
309 'wsgi.errors': sys.stderr,
310 'wsgi.multithread': False,
311 'wsgi.url_scheme': 'http',
312 'SCRIPT_NAME': '',
313 'CONTENT_TYPE': '',
314 'CONTENT_LENGTH': '0',
315 'SERVER_NAME': 'example.org',
316 'REQUEST_METHOD': 'GET',
317 'HTTP_HOST': 'example.org',
318 'PATH_INFO': '/foo',
319 'SERVER_PORT': '80',
320 'SERVER_PROTOCOL': 'HTTP/1.1',
321 'QUERY_STRING': 'bar=baz'
326 "wsgi.multiprocess": False,
327 "wsgi.version": (1, 0),
328 "wsgi.run_once": False,
329 "wsgi.errors": sys.stderr,
330 "wsgi.multithread": False,
331 "wsgi.url_scheme": "http",
332 "SCRIPT_NAME": "",
333 "SERVER_NAME": "example.org",
334 "REQUEST_METHOD": "GET",
335 "HTTP_HOST": "example.org",
336 "PATH_INFO": "/foo",
337 "SERVER_PORT": "80",
338 "SERVER_PROTOCOL": "HTTP/1.1",
339 "QUERY_STRING": "bar=baz",
322340 }
323341 for key, value in iteritems(expected):
324342 assert env[key] == value
325 strict_eq(env['wsgi.input'].read(0), b'')
326 strict_eq(create_environ('/foo', 'http://example.com/')['SCRIPT_NAME'], '')
343 strict_eq(env["wsgi.input"].read(0), b"")
344 strict_eq(create_environ("/foo", "http://example.com/")["SCRIPT_NAME"], "")
345
346
347 def test_create_environ_query_string_error():
348 with pytest.raises(ValueError):
349 create_environ("/foo?bar=baz", query_string={"a": "b"})
350
351
352 def test_builder_from_environ():
353 environ = create_environ(
354 "/foo",
355 base_url="https://example.com/base",
356 query_string={"name": "Werkzeug"},
357 data={"foo": "bar"},
358 headers={"X-Foo": "bar"},
359 )
360 builder = EnvironBuilder.from_environ(environ)
361 try:
362 new_environ = builder.get_environ()
363 finally:
364 builder.close()
365 assert new_environ == environ
327366
328367
329368 def test_file_closing():
330369 closed = []
331370
332371 class SpecialInput(object):
333
334372 def read(self, size):
335 return ''
373 return ""
336374
337375 def close(self):
338376 closed.append(self)
339377
340 create_environ(data={'foo': SpecialInput()})
378 create_environ(data={"foo": SpecialInput()})
341379 strict_eq(len(closed), 1)
342380 builder = EnvironBuilder()
343 builder.files.add_file('blah', SpecialInput())
381 builder.files.add_file("blah", SpecialInput())
344382 builder.close()
345383 strict_eq(len(closed), 2)
346384
347385
348386 def test_follow_redirect():
349 env = create_environ('/', base_url='http://localhost')
387 env = create_environ("/", base_url="http://localhost")
350388 c = Client(redirect_with_get_app)
351389 appiter, code, headers = c.open(environ_overrides=env, follow_redirects=True)
352 strict_eq(code, '200 OK')
353 strict_eq(b''.join(appiter), b'current url: http://localhost/some/redirect/')
390 strict_eq(code, "200 OK")
391 strict_eq(b"".join(appiter), b"current url: http://localhost/some/redirect/")
354392
355393 # Test that the :cls:`Client` is aware of user defined response wrappers
356394 c = Client(redirect_with_get_app, response_wrapper=BaseResponse)
357 resp = c.get('/', follow_redirects=True)
395 resp = c.get("/", follow_redirects=True)
358396 strict_eq(resp.status_code, 200)
359 strict_eq(resp.data, b'current url: http://localhost/some/redirect/')
397 strict_eq(resp.data, b"current url: http://localhost/some/redirect/")
360398
361399 # test with URL other than '/' to make sure redirected URL's are correct
362400 c = Client(redirect_with_get_app, response_wrapper=BaseResponse)
363 resp = c.get('/first/request', follow_redirects=True)
401 resp = c.get("/first/request", follow_redirects=True)
364402 strict_eq(resp.status_code, 200)
365 strict_eq(resp.data, b'current url: http://localhost/some/redirect/')
403 strict_eq(resp.data, b"current url: http://localhost/some/redirect/")
366404
367405
368406 def test_follow_local_redirect():
371409
372410 def local_redirect_app(environ, start_response):
373411 req = Request(environ)
374 if '/from/location' in req.url:
375 response = redirect('/to/location', Response=LocalResponse)
412 if "/from/location" in req.url:
413 response = redirect("/to/location", Response=LocalResponse)
376414 else:
377 response = Response('current path: %s' % req.path)
415 response = Response("current path: %s" % req.path)
378416 return response(environ, start_response)
379417
380418 c = Client(local_redirect_app, response_wrapper=BaseResponse)
381 resp = c.get('/from/location', follow_redirects=True)
419 resp = c.get("/from/location", follow_redirects=True)
382420 strict_eq(resp.status_code, 200)
383 strict_eq(resp.data, b'current path: /to/location')
384
385
386 def test_follow_redirect_with_post_307():
387 def redirect_with_post_307_app(environ, start_response):
388 req = Request(environ)
389 if req.url == 'http://localhost/some/redirect/':
390 assert req.method == 'POST', 'request should be POST'
391 assert not req.form, 'request should not have data'
392 response = Response('current url: %s' % req.url)
393 else:
394 response = redirect('http://localhost/some/redirect/', code=307)
395 return response(environ, start_response)
396
397 c = Client(redirect_with_post_307_app, response_wrapper=BaseResponse)
398 resp = c.post('/', follow_redirects=True, data='foo=blub+hehe&blah=42')
399 assert resp.status_code == 200
400 assert resp.data == b'current url: http://localhost/some/redirect/'
421 strict_eq(resp.data, b"current path: /to/location")
422
423
424 @pytest.mark.parametrize(
425 ("code", "keep"), ((302, False), (301, False), (307, True), (308, True))
426 )
427 def test_follow_redirect_body(code, keep):
428 @Request.application
429 def app(request):
430 if request.url == "http://localhost/some/redirect/":
431 assert request.method == ("POST" if keep else "GET")
432 assert request.headers["X-Foo"] == "bar"
433
434 if keep:
435 assert request.form["foo"] == "bar"
436 else:
437 assert not request.form
438
439 return Response("current url: %s" % request.url)
440
441 return redirect("http://localhost/some/redirect/", code=code)
442
443 c = Client(app, response_wrapper=BaseResponse)
444 response = c.post(
445 "/", follow_redirects=True, data={"foo": "bar"}, headers={"X-Foo": "bar"}
446 )
447 assert response.status_code == 200
448 assert response.data == b"current url: http://localhost/some/redirect/"
401449
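The parametrized test above encodes the client-side rule for redirect status codes: 301/302 are historically rewritten to GET (dropping the body), while 307/308 must replay the original method and body. A plain-dict sketch of that rule (not a Werkzeug API):

```python
# Which redirect codes require the client to keep the original method/body.
REDIRECT_KEEPS_METHOD = {301: False, 302: False, 307: True, 308: True}


def method_after_redirect(code, method):
    """Return the method a well-behaved client uses when following a
    redirect of the given status code."""
    return method if REDIRECT_KEEPS_METHOD[code] else "GET"
```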
402450
403451 def test_follow_external_redirect():
404 env = create_environ('/', base_url='http://localhost')
452 env = create_environ("/", base_url="http://localhost")
405453 c = Client(external_redirect_demo_app)
406 pytest.raises(RuntimeError, lambda:
407 c.get(environ_overrides=env, follow_redirects=True))
454 pytest.raises(
455 RuntimeError, lambda: c.get(environ_overrides=env, follow_redirects=True)
456 )
408457
409458
410459 def test_follow_external_redirect_on_same_subdomain():
411 env = create_environ('/', base_url='http://example.com')
460 env = create_environ("/", base_url="http://example.com")
412461 c = Client(external_subdomain_redirect_demo_app, allow_subdomain_redirects=True)
413462 c.get(environ_overrides=env, follow_redirects=True)
414463
415464 # check that this does not work for real external domains
416 env = create_environ('/', base_url='http://localhost')
417 pytest.raises(RuntimeError, lambda:
418 c.get(environ_overrides=env, follow_redirects=True))
465 env = create_environ("/", base_url="http://localhost")
466 pytest.raises(
467 RuntimeError, lambda: c.get(environ_overrides=env, follow_redirects=True)
468 )
419469
420470 # check that subdomain redirects fail if no `allow_subdomain_redirects` is applied
421471 c = Client(external_subdomain_redirect_demo_app)
422 pytest.raises(RuntimeError, lambda:
423 c.get(environ_overrides=env, follow_redirects=True))
472 pytest.raises(
473 RuntimeError, lambda: c.get(environ_overrides=env, follow_redirects=True)
474 )
424475
425476
426477 def test_follow_redirect_loop():
427478 c = Client(redirect_loop_app, response_wrapper=BaseResponse)
428479 with pytest.raises(ClientRedirectError):
429 c.get('/', follow_redirects=True)
430
431
432 def test_follow_redirect_with_post():
433 c = Client(redirect_with_post_app, response_wrapper=BaseResponse)
434 resp = c.post('/', follow_redirects=True, data='foo=blub+hehe&blah=42')
435 strict_eq(resp.status_code, 200)
436 strict_eq(resp.data, b'current url: http://localhost/some/redirect/')
480 c.get("/", follow_redirects=True)
481
482
483 def test_follow_redirect_non_root_base_url():
484 @Request.application
485 def app(request):
486 if request.path == "/redirect":
487 return redirect("done")
488
489 return Response(request.path)
490
491 c = Client(app, response_wrapper=Response)
492 response = c.get(
493 "/redirect", base_url="http://localhost/other", follow_redirects=True
494 )
495 assert response.data == b"/done"
496
497
498 def test_follow_redirect_exhaust_intermediate():
499 class Middleware(object):
500 def __init__(self, app):
501 self.app = app
502 self.active = 0
503
504 def __call__(self, environ, start_response):
505 # Test client must exhaust response stream, otherwise the
506 # cleanup code that decrements this won't have run by the
507 # time the next request is started.
508 assert not self.active
509 self.active += 1
510 try:
511 for chunk in self.app(environ, start_response):
512 yield chunk
513 finally:
514 self.active -= 1
515
516 app = Middleware(redirect_with_get_app)
517 client = Client(app, Response)
518 response = client.get("/", follow_redirects=True, buffered=False)
519 assert response.data == b"current url: http://localhost/some/redirect/"
520 assert not app.active
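The cleanup behaviour this test relies on can be reproduced with a plain generator: a generator-based WSGI middleware only runs its `finally` block once the response iterable is exhausted or explicitly closed. A minimal stdlib sketch (names are illustrative, not Werkzeug's):

```python
def tracking_middleware(app):
    """Wrap a WSGI app and count responses that are still being streamed."""
    state = {"active": 0}

    def wrapped(environ, start_response):
        state["active"] += 1
        try:
            for chunk in app(environ, start_response):
                yield chunk
        finally:
            # runs only when the caller exhausts or closes the generator
            state["active"] -= 1

    wrapped.state = state
    return wrapped


def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"one", b"two"]


app = tracking_middleware(demo_app)
it = app({}, lambda status, headers: None)
next(it)                       # one chunk streamed; cleanup has not run yet
assert app.state["active"] == 1
it.close()                     # closing the generator triggers the finally block
assert app.state["active"] == 0
```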
521
522
523 def test_cookie_across_redirect():
524 @Request.application
525 def app(request):
526 if request.path == "/":
527 return Response(request.cookies.get("auth", "out"))
528
529 if request.path == "/in":
530 rv = redirect("/")
531 rv.set_cookie("auth", "in")
532 return rv
533
534 if request.path == "/out":
535 rv = redirect("/")
536 rv.delete_cookie("auth")
537 return rv
538
539 c = Client(app, Response)
540 assert c.get("/").data == b"out"
541 assert c.get("/in", follow_redirects=True).data == b"in"
542 assert c.get("/").data == b"in"
543 assert c.get("/out", follow_redirects=True).data == b"out"
544 assert c.get("/").data == b"out"
545
546
547 def test_redirect_mutate_environ():
548 @Request.application
549 def app(request):
550 if request.path == "/first":
551 return redirect("/prefix/second")
552
553 return Response(request.path)
554
555 def middleware(environ, start_response):
556 # modify the environ in place, shouldn't propagate to redirect request
557 pop_path_info(environ)
558 return app(environ, start_response)
559
560 c = Client(middleware, Response)
561 rv = c.get("/prefix/first", follow_redirects=True)
562 # if modified environ was used by client, this would be /
563 assert rv.data == b"/second"
437564
438565
439566 def test_path_info_script_name_unquoting():
440567 def test_app(environ, start_response):
441 start_response('200 OK', [('Content-Type', 'text/plain')])
442 return [environ['PATH_INFO'] + '\n' + environ['SCRIPT_NAME']]
568 start_response("200 OK", [("Content-Type", "text/plain")])
569 return [environ["PATH_INFO"] + "\n" + environ["SCRIPT_NAME"]]
570
443571 c = Client(test_app, response_wrapper=BaseResponse)
444 resp = c.get('/foo%40bar')
445 strict_eq(resp.data, b'/foo@bar\n')
572 resp = c.get("/foo%40bar")
573 strict_eq(resp.data, b"/foo@bar\n")
446574 c = Client(test_app, response_wrapper=BaseResponse)
447 resp = c.get('/foo%40bar', 'http://localhost/bar%40baz')
448 strict_eq(resp.data, b'/foo@bar\n/bar@baz')
575 resp = c.get("/foo%40bar", "http://localhost/bar%40baz")
576 strict_eq(resp.data, b"/foo@bar\n/bar@baz")
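The unquoting checked here mirrors what WSGI requires of servers: `PATH_INFO` and `SCRIPT_NAME` must arrive percent-decoded. The same transformation is available in the stdlib:

```python
from urllib.parse import unquote

# "%40" is the percent-encoding of "@"; WSGI apps see the decoded form
assert unquote("/foo%40bar") == "/foo@bar"
assert unquote("/bar%40baz") == "/bar@baz"
```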
449577
450578
451579 def test_multi_value_submit():
452580 c = Client(multi_value_post_app, response_wrapper=BaseResponse)
453 data = {
454 'field': ['val1', 'val2']
455 }
456 resp = c.post('/', data=data)
581 data = {"field": ["val1", "val2"]}
582 resp = c.post("/", data=data)
457583 strict_eq(resp.status_code, 200)
458584 c = Client(multi_value_post_app, response_wrapper=BaseResponse)
459 data = MultiDict({
460 'field': ['val1', 'val2']
461 })
462 resp = c.post('/', data=data)
585 data = MultiDict({"field": ["val1", "val2"]})
586 resp = c.post("/", data=data)
463587 strict_eq(resp.status_code, 200)
464588
465589
466590 def test_iri_support():
467 b = EnvironBuilder(u'/föö-bar', base_url=u'http://☃.net/')
468 strict_eq(b.path, '/f%C3%B6%C3%B6-bar')
469 strict_eq(b.base_url, 'http://xn--n3h.net/')
470
471
472 @pytest.mark.parametrize('buffered', (True, False))
473 @pytest.mark.parametrize('iterable', (True, False))
591 b = EnvironBuilder(u"/föö-bar", base_url=u"http://☃.net/")
592 strict_eq(b.path, "/f%C3%B6%C3%B6-bar")
593 strict_eq(b.base_url, "http://xn--n3h.net/")
594
595
596 @pytest.mark.parametrize("buffered", (True, False))
597 @pytest.mark.parametrize("iterable", (True, False))
474598 def test_run_wsgi_apps(buffered, iterable):
475599 leaked_data = []
476600
477601 def simple_app(environ, start_response):
478 start_response('200 OK', [('Content-Type', 'text/html')])
479 return ['Hello World!']
602 start_response("200 OK", [("Content-Type", "text/html")])
603 return ["Hello World!"]
480604
481605 def yielding_app(environ, start_response):
482 start_response('200 OK', [('Content-Type', 'text/html')])
483 yield 'Hello '
484 yield 'World!'
606 start_response("200 OK", [("Content-Type", "text/html")])
607 yield "Hello "
608 yield "World!"
485609
486610 def late_start_response(environ, start_response):
487 yield 'Hello '
488 yield 'World'
489 start_response('200 OK', [('Content-Type', 'text/html')])
490 yield '!'
611 yield "Hello "
612 yield "World"
613 start_response("200 OK", [("Content-Type", "text/html")])
614 yield "!"
491615
492616 def depends_on_close(environ, start_response):
493 leaked_data.append('harhar')
494 start_response('200 OK', [('Content-Type', 'text/html')])
617 leaked_data.append("harhar")
618 start_response("200 OK", [("Content-Type", "text/html")])
495619
496620 class Rv(object):
497
498621 def __iter__(self):
499 yield 'Hello '
500 yield 'World'
501 yield '!'
622 yield "Hello "
623 yield "World"
624 yield "!"
502625
503626 def close(self):
504 assert leaked_data.pop() == 'harhar'
627 assert leaked_data.pop() == "harhar"
505628
506629 return Rv()
507630
508 for app in (simple_app, yielding_app, late_start_response,
509 depends_on_close):
631 for app in (simple_app, yielding_app, late_start_response, depends_on_close):
510632 if iterable:
511633 app = iterable_middleware(app)
512634 app_iter, status, headers = run_wsgi_app(app, {}, buffered=buffered)
513 strict_eq(status, '200 OK')
514 strict_eq(list(headers), [('Content-Type', 'text/html')])
515 strict_eq(''.join(app_iter), 'Hello World!')
516
517 if hasattr(app_iter, 'close'):
635 strict_eq(status, "200 OK")
636 strict_eq(list(headers), [("Content-Type", "text/html")])
637 strict_eq("".join(app_iter), "Hello World!")
638
639 if hasattr(app_iter, "close"):
518640 app_iter.close()
519641 assert not leaked_data
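The machinery these cases exercise can be sketched in a few lines: call the WSGI app with a fake `start_response`, buffer the body (which forces lazy apps to call `start_response`), and call `close()` so resource-holding responses clean up. This is a simplified stand-in, not Werkzeug's `run_wsgi_app`:

```python
def run_app(app, environ):
    """Call a WSGI app, capture status/headers, buffer and close the body."""
    captured = {}

    def start_response(status, headers, exc_info=None):
        captured["status"] = status
        captured["headers"] = headers

    app_iter = app(environ, start_response)
    try:
        # iterating forces lazy apps to call start_response before we return
        chunks = [c.encode("latin1") if isinstance(c, str) else c for c in app_iter]
    finally:
        if hasattr(app_iter, "close"):
            app_iter.close()
    return b"".join(chunks), captured["status"], captured["headers"]


def simple_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    return ["Hello World!"]


body, status, headers = run_app(simple_app, {})
assert (body, status) == (b"Hello World!", "200 OK")
```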
520642
521643
644 @pytest.mark.parametrize("buffered", (True, False))
645 @pytest.mark.parametrize("iterable", (True, False))
646 def test_lazy_start_response_empty_response_app(buffered, iterable):
647 @implements_iterator
648 class app:
649 def __init__(self, environ, start_response):
650 self.start_response = start_response
651
652 def __iter__(self):
653 return self
654
655 def __next__(self):
656 self.start_response("200 OK", [("Content-Type", "text/html")])
657 raise StopIteration
658
659 if iterable:
660 app = iterable_middleware(app)
661 app_iter, status, headers = run_wsgi_app(app, {}, buffered=buffered)
662 strict_eq(status, "200 OK")
663 strict_eq(list(headers), [("Content-Type", "text/html")])
664 strict_eq("".join(app_iter), "")
665
666
522667 def test_run_wsgi_app_closing_iterator():
523668 got_close = []
524669
525670 @implements_iterator
526671 class CloseIter(object):
527
528672 def __init__(self):
529673 self.iterated = False
530674
538682 if self.iterated:
539683 raise StopIteration()
540684 self.iterated = True
541 return 'bar'
685 return "bar"
542686
543687 def bar(environ, start_response):
544 start_response('200 OK', [('Content-Type', 'text/plain')])
688 start_response("200 OK", [("Content-Type", "text/plain")])
545689 return CloseIter()
546690
547691 app_iter, status, headers = run_wsgi_app(bar, {})
548 assert status == '200 OK'
549 assert list(headers) == [('Content-Type', 'text/plain')]
550 assert next(app_iter) == 'bar'
692 assert status == "200 OK"
693 assert list(headers) == [("Content-Type", "text/plain")]
694 assert next(app_iter) == "bar"
551695 pytest.raises(StopIteration, partial(next, app_iter))
552696 app_iter.close()
553697
554 assert run_wsgi_app(bar, {}, True)[0] == ['bar']
698 assert run_wsgi_app(bar, {}, True)[0] == ["bar"]
555699
556700 assert len(got_close) == 2
557701
558702
559703 def iterable_middleware(app):
560 '''Guarantee that the app returns an iterable'''
704 """Guarantee that the app returns an iterable"""
705
561706 def inner(environ, start_response):
562707 rv = app(environ, start_response)
563708
564709 class Iterable(object):
565
566710 def __iter__(self):
567711 return iter(rv)
568712
569 if hasattr(rv, 'close'):
713 if hasattr(rv, "close"):
714
570715 def close(self):
571716 rv.close()
572717
573718 return Iterable()
719
574720 return inner
575721
576722
578724 @Request.application
579725 def test_app(request):
580726 response = Response(repr(sorted(request.cookies.items())))
581 response.set_cookie(u'test1', b'foo')
582 response.set_cookie(u'test2', b'bar')
727 response.set_cookie(u"test1", b"foo")
728 response.set_cookie(u"test2", b"bar")
583729 return response
730
584731 client = Client(test_app, Response)
585 resp = client.get('/')
586 strict_eq(resp.data, b'[]')
587 resp = client.get('/')
588 strict_eq(resp.data,
589 to_bytes(repr([('test1', u'foo'), ('test2', u'bar')]), 'ascii'))
732 resp = client.get("/")
733 strict_eq(resp.data, b"[]")
734 resp = client.get("/")
735 strict_eq(
736 resp.data, to_bytes(repr([("test1", u"foo"), ("test2", u"bar")]), "ascii")
737 )
590738
591739
592740 def test_correct_open_invocation_on_redirect():
595743
596744 def open(self, *args, **kwargs):
597745 self.counter += 1
598 env = kwargs.setdefault('environ_overrides', {})
599 env['werkzeug._foo'] = self.counter
746 env = kwargs.setdefault("environ_overrides", {})
747 env["werkzeug._foo"] = self.counter
600748 return Client.open(self, *args, **kwargs)
601749
602750 @Request.application
603751 def test_app(request):
604 return Response(str(request.environ['werkzeug._foo']))
752 return Response(str(request.environ["werkzeug._foo"]))
605753
606754 c = MyClient(test_app, response_wrapper=Response)
607 strict_eq(c.get('/').data, b'1')
608 strict_eq(c.get('/').data, b'2')
609 strict_eq(c.get('/').data, b'3')
755 strict_eq(c.get("/").data, b"1")
756 strict_eq(c.get("/").data, b"2")
757 strict_eq(c.get("/").data, b"3")
610758
611759
612760 def test_correct_encoding():
613 req = Request.from_values(u'/\N{SNOWMAN}', u'http://example.com/foo')
614 strict_eq(req.script_root, u'/foo')
615 strict_eq(req.path, u'/\N{SNOWMAN}')
761 req = Request.from_values(u"/\N{SNOWMAN}", u"http://example.com/foo")
762 strict_eq(req.script_root, u"/foo")
763 strict_eq(req.path, u"/\N{SNOWMAN}")
616764
617765
618766 def test_full_url_requests_with_args():
619 base = 'http://example.com/'
767 base = "http://example.com/"
620768
621769 @Request.application
622770 def test_app(request):
623 return Response(request.args['x'])
771 return Response(request.args["x"])
772
624773 client = Client(test_app, Response)
625 resp = client.get('/?x=42', base)
626 strict_eq(resp.data, b'42')
627 resp = client.get('http://www.example.com/?x=23', base)
628 strict_eq(resp.data, b'23')
774 resp = client.get("/?x=42", base)
775 strict_eq(resp.data, b"42")
776 resp = client.get("http://www.example.com/?x=23", base)
777 strict_eq(resp.data, b"23")
629778
630779
631780 def test_delete_requests_with_form():
632781 @Request.application
633782 def test_app(request):
634 return Response(request.form.get('x', None))
783 return Response(request.form.get("x", None))
635784
636785 client = Client(test_app, Response)
637 resp = client.delete('/', data={'x': 42})
638 strict_eq(resp.data, b'42')
786 resp = client.delete("/", data={"x": 42})
787 strict_eq(resp.data, b"42")
639788
640789
641790 def test_post_with_file_descriptor(tmpdir):
642791 c = Client(Response(), response_wrapper=Response)
643 f = tmpdir.join('some-file.txt')
644 f.write('foo')
645 with open(f.strpath, mode='rt') as data:
646 resp = c.post('/', data=data)
792 f = tmpdir.join("some-file.txt")
793 f.write("foo")
794 with open(f.strpath, mode="rt") as data:
795 resp = c.post("/", data=data)
647796 strict_eq(resp.status_code, 200)
648 with open(f.strpath, mode='rb') as data:
649 resp = c.post('/', data=data)
797 with open(f.strpath, mode="rb") as data:
798 resp = c.post("/", data=data)
650799 strict_eq(resp.status_code, 200)
651800
652801
654803 @Request.application
655804 def test_app(request):
656805 return Response(request.content_type)
806
657807 client = Client(test_app, Response)
658808
659 resp = client.get('/', data=b'testing', mimetype='text/css')
660 strict_eq(resp.data, b'text/css; charset=utf-8')
661
662 resp = client.get('/', data=b'testing', mimetype='application/octet-stream')
663 strict_eq(resp.data, b'application/octet-stream')
809 resp = client.get("/", data=b"testing", mimetype="text/css")
810 strict_eq(resp.data, b"text/css; charset=utf-8")
811
812 resp = client.get("/", data=b"testing", mimetype="application/octet-stream")
813 strict_eq(resp.data, b"application/octet-stream")
814
815
816 def test_raw_request_uri():
817 @Request.application
818 def app(request):
819 path_info = request.path
820 request_uri = request.environ["REQUEST_URI"]
821 return Response("\n".join((path_info, request_uri)))
822
823 client = Client(app, Response)
824 response = client.get("/hello%2fworld")
825 data = response.get_data(as_text=True)
826 assert data == "/hello/world\n/hello%2fworld"
44
55 URL helper tests.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import pytest
1111
12 from tests import strict_eq
13
12 from . import strict_eq
13 from werkzeug import urls
14 from werkzeug._compat import BytesIO
15 from werkzeug._compat import NativeStringIO
16 from werkzeug._compat import text_type
1417 from werkzeug.datastructures import OrderedMultiDict
15 from werkzeug import urls
16 from werkzeug._compat import text_type, NativeStringIO, BytesIO
1718
1819
1920 def test_parsing():
20 url = urls.url_parse('http://anon:hunter2@[2001:db8:0:1]:80/a/b/c')
21 assert url.netloc == 'anon:hunter2@[2001:db8:0:1]:80'
22 assert url.username == 'anon'
23 assert url.password == 'hunter2'
21 url = urls.url_parse("http://anon:hunter2@[2001:db8:0:1]:80/a/b/c")
22 assert url.netloc == "anon:hunter2@[2001:db8:0:1]:80"
23 assert url.username == "anon"
24 assert url.password == "hunter2"
2425 assert url.port == 80
25 assert url.ascii_host == '2001:db8:0:1'
26 assert url.ascii_host == "2001:db8:0:1"
2627
2728 assert url.get_file_location() == (None, None) # no file scheme
2829
2930
30 @pytest.mark.parametrize('implicit_format', (True, False))
31 @pytest.mark.parametrize('localhost', ('127.0.0.1', '::1', 'localhost'))
31 @pytest.mark.parametrize("implicit_format", (True, False))
32 @pytest.mark.parametrize("localhost", ("127.0.0.1", "::1", "localhost"))
3233 def test_fileurl_parsing_windows(implicit_format, localhost, monkeypatch):
3334 if implicit_format:
3435 pathformat = None
35 monkeypatch.setattr('os.name', 'nt')
36 monkeypatch.setattr("os.name", "nt")
3637 else:
37 pathformat = 'windows'
38 monkeypatch.delattr('os.name') # just to make sure it won't get used
39
40 url = urls.url_parse('file:///C:/Documents and Settings/Foobar/stuff.txt')
41 assert url.netloc == ''
42 assert url.scheme == 'file'
43 assert url.get_file_location(pathformat) == \
44 (None, r'C:\Documents and Settings\Foobar\stuff.txt')
45
46 url = urls.url_parse('file://///server.tld/file.txt')
47 assert url.get_file_location(pathformat) == ('server.tld', r'file.txt')
48
49 url = urls.url_parse('file://///server.tld')
50 assert url.get_file_location(pathformat) == ('server.tld', '')
51
52 url = urls.url_parse('file://///%s' % localhost)
53 assert url.get_file_location(pathformat) == (None, '')
54
55 url = urls.url_parse('file://///%s/file.txt' % localhost)
56 assert url.get_file_location(pathformat) == (None, r'file.txt')
38 pathformat = "windows"
39 monkeypatch.delattr("os.name") # just to make sure it won't get used
40
41 url = urls.url_parse("file:///C:/Documents and Settings/Foobar/stuff.txt")
42 assert url.netloc == ""
43 assert url.scheme == "file"
44 assert url.get_file_location(pathformat) == (
45 None,
46 r"C:\Documents and Settings\Foobar\stuff.txt",
47 )
48
49 url = urls.url_parse("file://///server.tld/file.txt")
50 assert url.get_file_location(pathformat) == ("server.tld", r"file.txt")
51
52 url = urls.url_parse("file://///server.tld")
53 assert url.get_file_location(pathformat) == ("server.tld", "")
54
55 url = urls.url_parse("file://///%s" % localhost)
56 assert url.get_file_location(pathformat) == (None, "")
57
58 url = urls.url_parse("file://///%s/file.txt" % localhost)
59 assert url.get_file_location(pathformat) == (None, r"file.txt")
5760
5861
5962 def test_replace():
60 url = urls.url_parse('http://de.wikipedia.org/wiki/Troll')
61 strict_eq(url.replace(query='foo=bar'),
62 urls.url_parse('http://de.wikipedia.org/wiki/Troll?foo=bar'))
63 strict_eq(url.replace(scheme='https'),
64 urls.url_parse('https://de.wikipedia.org/wiki/Troll'))
63 url = urls.url_parse("http://de.wikipedia.org/wiki/Troll")
64 strict_eq(
65 url.replace(query="foo=bar"),
66 urls.url_parse("http://de.wikipedia.org/wiki/Troll?foo=bar"),
67 )
68 strict_eq(
69 url.replace(scheme="https"),
70 urls.url_parse("https://de.wikipedia.org/wiki/Troll"),
71 )
6572
6673
6774 def test_quoting():
68 strict_eq(urls.url_quote(u'\xf6\xe4\xfc'), '%C3%B6%C3%A4%C3%BC')
75 strict_eq(urls.url_quote(u"\xf6\xe4\xfc"), "%C3%B6%C3%A4%C3%BC")
6976 strict_eq(urls.url_unquote(urls.url_quote(u'#%="\xf6')), u'#%="\xf6')
70 strict_eq(urls.url_quote_plus('foo bar'), 'foo+bar')
71 strict_eq(urls.url_unquote_plus('foo+bar'), u'foo bar')
72 strict_eq(urls.url_quote_plus('foo+bar'), 'foo%2Bbar')
73 strict_eq(urls.url_unquote_plus('foo%2Bbar'), u'foo+bar')
74 strict_eq(urls.url_encode({b'a': None, b'b': b'foo bar'}), 'b=foo+bar')
75 strict_eq(urls.url_encode({u'a': None, u'b': u'foo bar'}), 'b=foo+bar')
76 strict_eq(urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)'),
77 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)')
78 strict_eq(urls.url_quote_plus(42), '42')
79 strict_eq(urls.url_quote(b'\xff'), '%FF')
77 strict_eq(urls.url_quote_plus("foo bar"), "foo+bar")
78 strict_eq(urls.url_unquote_plus("foo+bar"), u"foo bar")
79 strict_eq(urls.url_quote_plus("foo+bar"), "foo%2Bbar")
80 strict_eq(urls.url_unquote_plus("foo%2Bbar"), u"foo+bar")
81 strict_eq(urls.url_encode({b"a": None, b"b": b"foo bar"}), "b=foo+bar")
82 strict_eq(urls.url_encode({u"a": None, u"b": u"foo bar"}), "b=foo+bar")
83 strict_eq(
84 urls.url_fix(u"http://de.wikipedia.org/wiki/Elf (Begriffsklärung)"),
85 "http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)",
86 )
87 strict_eq(urls.url_quote_plus(42), "42")
88 strict_eq(urls.url_quote(b"\xff"), "%FF")
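Most of the quoting behaviour asserted above has direct stdlib counterparts in `urllib.parse`, which these tests effectively cross-check:

```python
from urllib.parse import quote, quote_plus, unquote_plus

assert quote("öäü") == "%C3%B6%C3%A4%C3%BC"    # UTF-8 percent-encoding
assert quote_plus("foo bar") == "foo+bar"       # spaces become "+"
assert quote_plus("foo+bar") == "foo%2Bbar"     # a literal "+" is escaped
assert unquote_plus("foo+bar") == "foo bar"
```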
8089
8190
8291 def test_bytes_unquoting():
83 strict_eq(urls.url_unquote(urls.url_quote(
84 u'#%="\xf6', charset='latin1'), charset=None), b'#%="\xf6')
92 strict_eq(
93 urls.url_unquote(urls.url_quote(u'#%="\xf6', charset="latin1"), charset=None),
94 b'#%="\xf6',
95 )
8596
8697
8798 def test_url_decoding():
88 x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel')
89 strict_eq(x['foo'], u'42')
90 strict_eq(x['bar'], u'23')
91 strict_eq(x['uni'], u'Hänsel')
92
93 x = urls.url_decode(b'foo=42;bar=23;uni=H%C3%A4nsel', separator=b';')
94 strict_eq(x['foo'], u'42')
95 strict_eq(x['bar'], u'23')
96 strict_eq(x['uni'], u'Hänsel')
97
98 x = urls.url_decode(b'%C3%9Ch=H%C3%A4nsel', decode_keys=True)
99 strict_eq(x[u'Üh'], u'Hänsel')
99 x = urls.url_decode(b"foo=42&bar=23&uni=H%C3%A4nsel")
100 strict_eq(x["foo"], u"42")
101 strict_eq(x["bar"], u"23")
102 strict_eq(x["uni"], u"Hänsel")
103
104 x = urls.url_decode(b"foo=42;bar=23;uni=H%C3%A4nsel", separator=b";")
105 strict_eq(x["foo"], u"42")
106 strict_eq(x["bar"], u"23")
107 strict_eq(x["uni"], u"Hänsel")
108
109 x = urls.url_decode(b"%C3%9Ch=H%C3%A4nsel", decode_keys=True)
110 strict_eq(x[u"Üh"], u"Hänsel")
100111
101112
102113 def test_url_bytes_decoding():
103 x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel', charset=None)
104 strict_eq(x[b'foo'], b'42')
105 strict_eq(x[b'bar'], b'23')
106 strict_eq(x[b'uni'], u'Hänsel'.encode('utf-8'))
114 x = urls.url_decode(b"foo=42&bar=23&uni=H%C3%A4nsel", charset=None)
115 strict_eq(x[b"foo"], b"42")
116 strict_eq(x[b"bar"], b"23")
117 strict_eq(x[b"uni"], u"Hänsel".encode("utf-8"))
107118
108119
109120 def test_streamed_url_decoding():
110 item1 = u'a' * 100000
111 item2 = u'b' * 400
112 string = ('a=%s&b=%s&c=%s' % (item1, item2, item2)).encode('ascii')
113 gen = urls.url_decode_stream(BytesIO(string), limit=len(string),
114 return_iterator=True)
115 strict_eq(next(gen), ('a', item1))
116 strict_eq(next(gen), ('b', item2))
117 strict_eq(next(gen), ('c', item2))
121 item1 = u"a" * 100000
122 item2 = u"b" * 400
123 string = ("a=%s&b=%s&c=%s" % (item1, item2, item2)).encode("ascii")
124 gen = urls.url_decode_stream(
125 BytesIO(string), limit=len(string), return_iterator=True
126 )
127 strict_eq(next(gen), ("a", item1))
128 strict_eq(next(gen), ("b", item2))
129 strict_eq(next(gen), ("c", item2))
118130 pytest.raises(StopIteration, lambda: next(gen))
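The streaming decode being tested can be sketched with the stdlib: read the source in small chunks and yield one decoded `(key, value)` pair at a time, so a very large query string never has to be held in memory at once. Names here are illustrative, not Werkzeug's API:

```python
import io
from urllib.parse import unquote_plus


def _decode_pair(pair):
    key, _, value = pair.partition(b"=")
    return unquote_plus(key.decode("ascii")), unquote_plus(value.decode("ascii"))


def iter_query(stream, chunk_size=8):
    """Yield (key, value) pairs from a file-like percent-encoded query string."""
    buf = b""
    while True:
        chunk = stream.read(chunk_size)
        buf += chunk
        while b"&" in buf:
            pair, buf = buf.split(b"&", 1)
            yield _decode_pair(pair)
        if not chunk:
            if buf:
                yield _decode_pair(buf)
            return


pairs = list(iter_query(io.BytesIO(b"a=1&b=hello+world&c=%C3%A4")))
assert pairs == [("a", "1"), ("b", "hello world"), ("c", "ä")]
```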
119131
120132
121133 def test_stream_decoding_string_fails():
122 pytest.raises(TypeError, urls.url_decode_stream, 'testing')
134 pytest.raises(TypeError, urls.url_decode_stream, "testing")
123135
124136
125137 def test_url_encoding():
126 strict_eq(urls.url_encode({'foo': 'bar 45'}), 'foo=bar+45')
127 d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'}
128 strict_eq(urls.url_encode(d, sort=True), 'bar=23&blah=H%C3%A4nsel&foo=1')
129 strict_eq(urls.url_encode(d, sort=True, separator=u';'), 'bar=23;blah=H%C3%A4nsel;foo=1')
138 strict_eq(urls.url_encode({"foo": "bar 45"}), "foo=bar+45")
139 d = {"foo": 1, "bar": 23, "blah": u"Hänsel"}
140 strict_eq(urls.url_encode(d, sort=True), "bar=23&blah=H%C3%A4nsel&foo=1")
141 strict_eq(
142 urls.url_encode(d, sort=True, separator=u";"), "bar=23;blah=H%C3%A4nsel;foo=1"
143 )
130144
131145
132146 def test_sorted_url_encode():
133 strict_eq(urls.url_encode({u"a": 42, u"b": 23, 1: 1, 2: 2},
134 sort=True, key=lambda i: text_type(i[0])), '1=1&2=2&a=42&b=23')
135 strict_eq(urls.url_encode({u'A': 1, u'a': 2, u'B': 3, 'b': 4}, sort=True,
136 key=lambda x: x[0].lower() + x[0]), 'A=1&a=2&B=3&b=4')
147 strict_eq(
148 urls.url_encode(
149 {u"a": 42, u"b": 23, 1: 1, 2: 2}, sort=True, key=lambda i: text_type(i[0])
150 ),
151 "1=1&2=2&a=42&b=23",
152 )
153 strict_eq(
154 urls.url_encode(
155 {u"A": 1, u"a": 2, u"B": 3, "b": 4},
156 sort=True,
157 key=lambda x: x[0].lower() + x[0],
158 ),
159 "A=1&a=2&B=3&b=4",
160 )
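The sorted-output behaviour these assertions pin down can be had from the stdlib as well: `urllib.parse.urlencode` does not sort on its own, so stable output requires passing pre-sorted items:

```python
from urllib.parse import urlencode

d = {"foo": 1, "bar": 23, "blah": "Hänsel"}
# sort by key first; urlencode then emits pairs in that order
assert urlencode(sorted(d.items())) == "bar=23&blah=H%C3%A4nsel&foo=1"
```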
137161
138162
139163 def test_streamed_url_encoding():
140164 out = NativeStringIO()
141 urls.url_encode_stream({'foo': 'bar 45'}, out)
142 strict_eq(out.getvalue(), 'foo=bar+45')
143
144 d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'}
165 urls.url_encode_stream({"foo": "bar 45"}, out)
166 strict_eq(out.getvalue(), "foo=bar+45")
167
168 d = {"foo": 1, "bar": 23, "blah": u"Hänsel"}
145169 out = NativeStringIO()
146170 urls.url_encode_stream(d, out, sort=True)
147 strict_eq(out.getvalue(), 'bar=23&blah=H%C3%A4nsel&foo=1')
171 strict_eq(out.getvalue(), "bar=23&blah=H%C3%A4nsel&foo=1")
148172 out = NativeStringIO()
149 urls.url_encode_stream(d, out, sort=True, separator=u';')
150 strict_eq(out.getvalue(), 'bar=23;blah=H%C3%A4nsel;foo=1')
173 urls.url_encode_stream(d, out, sort=True, separator=u";")
174 strict_eq(out.getvalue(), "bar=23;blah=H%C3%A4nsel;foo=1")
151175
152176 gen = urls.url_encode_stream(d, sort=True)
153 strict_eq(next(gen), 'bar=23')
154 strict_eq(next(gen), 'blah=H%C3%A4nsel')
155 strict_eq(next(gen), 'foo=1')
177 strict_eq(next(gen), "bar=23")
178 strict_eq(next(gen), "blah=H%C3%A4nsel")
179 strict_eq(next(gen), "foo=1")
156180 pytest.raises(StopIteration, lambda: next(gen))
157181
158182
159183 def test_url_fixing():
160 x = urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)')
161 assert x == 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)'
184 x = urls.url_fix(u"http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)")
185 assert x == "http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)"
162186
163187 x = urls.url_fix("http://just.a.test/$-_.+!*'(),")
164188 assert x == "http://just.a.test/$-_.+!*'(),"
165189
166 x = urls.url_fix('http://höhöhö.at/höhöhö/hähähä')
167 assert x == r'http://xn--hhh-snabb.at/h%C3%B6h%C3%B6h%C3%B6/h%C3%A4h%C3%A4h%C3%A4'
190 x = urls.url_fix("http://höhöhö.at/höhöhö/hähähä")
191 assert x == r"http://xn--hhh-snabb.at/h%C3%B6h%C3%B6h%C3%B6/h%C3%A4h%C3%A4h%C3%A4"
168192
169193
170194 def test_url_fixing_filepaths():
171 x = urls.url_fix(r'file://C:\Users\Administrator\My Documents\ÑÈáÇíí')
172 assert x == (r'file:///C%3A/Users/Administrator/My%20Documents/'
173 r'%C3%91%C3%88%C3%A1%C3%87%C3%AD%C3%AD')
174
175 a = urls.url_fix(r'file:/C:/')
176 b = urls.url_fix(r'file://C:/')
177 c = urls.url_fix(r'file:///C:/')
178 assert a == b == c == r'file:///C%3A/'
179
180 x = urls.url_fix(r'file://host/sub/path')
181 assert x == r'file://host/sub/path'
182
183 x = urls.url_fix(r'file:///')
184 assert x == r'file:///'
195 x = urls.url_fix(r"file://C:\Users\Administrator\My Documents\ÑÈáÇíí")
196 assert x == (
197 r"file:///C%3A/Users/Administrator/My%20Documents/"
198 r"%C3%91%C3%88%C3%A1%C3%87%C3%AD%C3%AD"
199 )
200
201 a = urls.url_fix(r"file:/C:/")
202 b = urls.url_fix(r"file://C:/")
203 c = urls.url_fix(r"file:///C:/")
204 assert a == b == c == r"file:///C%3A/"
205
206 x = urls.url_fix(r"file://host/sub/path")
207 assert x == r"file://host/sub/path"
208
209 x = urls.url_fix(r"file:///")
210 assert x == r"file:///"
185211
186212
187213 def test_url_fixing_qs():
188 x = urls.url_fix(b'http://example.com/?foo=%2f%2f')
189 assert x == 'http://example.com/?foo=%2f%2f'
190
191 x = urls.url_fix('http://acronyms.thefreedictionary.com/'
192 'Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation')
193 assert x == ('http://acronyms.thefreedictionary.com/'
194 'Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation')
214 x = urls.url_fix(b"http://example.com/?foo=%2f%2f")
215 assert x == "http://example.com/?foo=%2f%2f"
216
217 x = urls.url_fix(
218 "http://acronyms.thefreedictionary.com/"
219 "Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation"
220 )
221 assert x == (
222 "http://acronyms.thefreedictionary.com/"
223 "Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation"
224 )
195225
196226
197227 def test_iri_support():
198 strict_eq(urls.uri_to_iri('http://xn--n3h.net/'),
199 u'http://\u2603.net/')
200 strict_eq(
201 urls.uri_to_iri(b'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th'),
202 u'http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th')
203 strict_eq(urls.iri_to_uri(u'http://☃.net/'), 'http://xn--n3h.net/')
204 strict_eq(
205 urls.iri_to_uri(u'http://üser:pässword@☃.net/påth'),
206 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th')
207
208 strict_eq(urls.uri_to_iri('http://test.com/%3Fmeh?foo=%26%2F'),
209 u'http://test.com/%3Fmeh?foo=%26%2F')
228 strict_eq(urls.uri_to_iri("http://xn--n3h.net/"), u"http://\u2603.net/")
229 strict_eq(
230 urls.uri_to_iri(b"http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th"),
231 u"http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th",
232 )
233 strict_eq(urls.iri_to_uri(u"http://☃.net/"), "http://xn--n3h.net/")
234 strict_eq(
235 urls.iri_to_uri(u"http://üser:pässword@☃.net/påth"),
236 "http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th",
237 )
238
239 strict_eq(
240 urls.uri_to_iri("http://test.com/%3Fmeh?foo=%26%2F"),
241 u"http://test.com/%3Fmeh?foo=%26%2F",
242 )
210243
211244 # this should work as well, might break on 2.4 because of a broken
212245 # idna codec
213 strict_eq(urls.uri_to_iri(b'/foo'), u'/foo')
214 strict_eq(urls.iri_to_uri(u'/foo'), '/foo')
215
216 strict_eq(urls.iri_to_uri(u'http://föö.com:8080/bam/baz'),
217 'http://xn--f-1gaa.com:8080/bam/baz')
246 strict_eq(urls.uri_to_iri(b"/foo"), u"/foo")
247 strict_eq(urls.iri_to_uri(u"/foo"), "/foo")
248
249 strict_eq(
250 urls.iri_to_uri(u"http://föö.com:8080/bam/baz"),
251 "http://xn--f-1gaa.com:8080/bam/baz",
252 )
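The IRI-to-URI direction checked above combines two steps: IDNA-encode the hostname and percent-encode the rest as UTF-8. A rough stdlib sketch (assumes the IRI has a host and no userinfo; Werkzeug's real implementation covers many more edge cases):

```python
from urllib.parse import quote, urlsplit, urlunsplit


def iri_to_uri(iri):
    """Sketch only: IDNA-encode the host, percent-encode the path (UTF-8)."""
    parts = urlsplit(iri)
    host = parts.hostname.encode("idna").decode("ascii")
    netloc = host if parts.port is None else "%s:%d" % (host, parts.port)
    path = quote(parts.path, safe="/%")
    return urlunsplit((parts.scheme, netloc, path, parts.query, parts.fragment))


assert iri_to_uri("http://föö.com:8080/bam/baz") == "http://xn--f-1gaa.com:8080/bam/baz"
```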
218253
219254
220255 def test_iri_safe_conversion():
221 strict_eq(urls.iri_to_uri(u'magnet:?foo=bar'),
222 'magnet:?foo=bar')
223 strict_eq(urls.iri_to_uri(u'itms-service://?foo=bar'),
224 'itms-service:?foo=bar')
225 strict_eq(urls.iri_to_uri(u'itms-service://?foo=bar',
226 safe_conversion=True),
227 'itms-service://?foo=bar')
256 strict_eq(urls.iri_to_uri(u"magnet:?foo=bar"), "magnet:?foo=bar")
257 strict_eq(urls.iri_to_uri(u"itms-service://?foo=bar"), "itms-service:?foo=bar")
258 strict_eq(
259 urls.iri_to_uri(u"itms-service://?foo=bar", safe_conversion=True),
260 "itms-service://?foo=bar",
261 )
228262
229263
230264 def test_iri_safe_quoting():
231 uri = 'http://xn--f-1gaa.com/%2F%25?q=%C3%B6&x=%3D%25#%25'
232 iri = u'http://föö.com/%2F%25?q=ö&x=%3D%25#%25'
265 uri = "http://xn--f-1gaa.com/%2F%25?q=%C3%B6&x=%3D%25#%25"
266 iri = u"http://föö.com/%2F%25?q=ö&x=%3D%25#%25"
233267 strict_eq(urls.uri_to_iri(uri), iri)
234268 strict_eq(urls.iri_to_uri(urls.uri_to_iri(uri)), uri)
235269
236270
237271 def test_ordered_multidict_encoding():
238272 d = OrderedMultiDict()
239 d.add('foo', 1)
240 d.add('foo', 2)
241 d.add('foo', 3)
242 d.add('bar', 0)
243 d.add('foo', 4)
244 assert urls.url_encode(d) == 'foo=1&foo=2&foo=3&bar=0&foo=4'
273 d.add("foo", 1)
274 d.add("foo", 2)
275 d.add("foo", 3)
276 d.add("bar", 0)
277 d.add("foo", 4)
278 assert urls.url_encode(d) == "foo=1&foo=2&foo=3&bar=0&foo=4"
245279
246280
247281 def test_multidict_encoding():
248282 d = OrderedMultiDict()
249 d.add('2013-10-10T23:26:05.657975+0000', '2013-10-10T23:26:05.657975+0000')
250 assert urls.url_encode(
251 d) == '2013-10-10T23%3A26%3A05.657975%2B0000=2013-10-10T23%3A26%3A05.657975%2B0000'
283 d.add("2013-10-10T23:26:05.657975+0000", "2013-10-10T23:26:05.657975+0000")
284 assert (
285 urls.url_encode(d)
286 == "2013-10-10T23%3A26%3A05.657975%2B0000=2013-10-10T23%3A26%3A05.657975%2B0000"
287 )
252288
253289
254290 def test_href():
255 x = urls.Href('http://www.example.com/')
256 strict_eq(x(u'foo'), 'http://www.example.com/foo')
257 strict_eq(x.foo(u'bar'), 'http://www.example.com/foo/bar')
258 strict_eq(x.foo(u'bar', x=42), 'http://www.example.com/foo/bar?x=42')
259 strict_eq(x.foo(u'bar', class_=42), 'http://www.example.com/foo/bar?class=42')
260 strict_eq(x.foo(u'bar', {u'class': 42}), 'http://www.example.com/foo/bar?class=42')
291 x = urls.Href("http://www.example.com/")
292 strict_eq(x(u"foo"), "http://www.example.com/foo")
293 strict_eq(x.foo(u"bar"), "http://www.example.com/foo/bar")
294 strict_eq(x.foo(u"bar", x=42), "http://www.example.com/foo/bar?x=42")
295 strict_eq(x.foo(u"bar", class_=42), "http://www.example.com/foo/bar?class=42")
296 strict_eq(x.foo(u"bar", {u"class": 42}), "http://www.example.com/foo/bar?class=42")
261297 pytest.raises(AttributeError, lambda: x.__blah__)
262298
263 x = urls.Href('blah')
264 strict_eq(x.foo(u'bar'), 'blah/foo/bar')
299 x = urls.Href("blah")
300 strict_eq(x.foo(u"bar"), "blah/foo/bar")
265301
266302 pytest.raises(TypeError, x.foo, {u"foo": 23}, x=42)
267303
268 x = urls.Href('')
269 strict_eq(x('foo'), 'foo')
304 x = urls.Href("")
305 strict_eq(x("foo"), "foo")
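The builder pattern behind `Href` can be sketched in a few lines: attribute access appends a path segment, calling appends more segments and turns keyword arguments into a query string. This toy version skips `Href`'s extras (e.g. trailing-underscore keywords like `class_`):

```python
import posixpath
from urllib.parse import urlencode


class Href:
    """Toy Href-style URL builder (illustrative, not Werkzeug's class)."""

    def __init__(self, base):
        self.base = base

    def __getattr__(self, name):
        if name.startswith("__"):
            raise AttributeError(name)
        return Href(posixpath.join(self.base, name))

    def __call__(self, *segments, **query):
        url = posixpath.join(self.base, *[str(s) for s in segments])
        if query:
            url += "?" + urlencode(sorted(query.items()))
        return url


x = Href("http://www.example.com/")
assert x("foo") == "http://www.example.com/foo"
assert x.foo("bar", x=42) == "http://www.example.com/foo/bar?x=42"
```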
270306
271307
272308 def test_href_url_join():
273 x = urls.Href(u'test')
274 assert x(u'foo:bar') == u'test/foo:bar'
275 assert x(u'http://example.com/') == u'test/http://example.com/'
276 assert x.a() == u'test/a'
309 x = urls.Href(u"test")
310 assert x(u"foo:bar") == u"test/foo:bar"
311 assert x(u"http://example.com/") == u"test/http://example.com/"
312 assert x.a() == u"test/a"
277313
278314
279315 def test_href_past_root():
280 base_href = urls.Href('http://www.blagga.com/1/2/3')
281 strict_eq(base_href('../foo'), 'http://www.blagga.com/1/2/foo')
282 strict_eq(base_href('../../foo'), 'http://www.blagga.com/1/foo')
283 strict_eq(base_href('../../../foo'), 'http://www.blagga.com/foo')
284 strict_eq(base_href('../../../../foo'), 'http://www.blagga.com/foo')
285 strict_eq(base_href('../../../../../foo'), 'http://www.blagga.com/foo')
286 strict_eq(base_href('../../../../../../foo'), 'http://www.blagga.com/foo')
316 base_href = urls.Href("http://www.blagga.com/1/2/3")
317 strict_eq(base_href("../foo"), "http://www.blagga.com/1/2/foo")
318 strict_eq(base_href("../../foo"), "http://www.blagga.com/1/foo")
319 strict_eq(base_href("../../../foo"), "http://www.blagga.com/foo")
320 strict_eq(base_href("../../../../foo"), "http://www.blagga.com/foo")
321 strict_eq(base_href("../../../../../foo"), "http://www.blagga.com/foo")
322 strict_eq(base_href("../../../../../../foo"), "http://www.blagga.com/foo")
287323
288324
289325 def test_url_unquote_plus_unicode():
290326 # was broken in 0.6
291 strict_eq(urls.url_unquote_plus(u'\x6d'), u'\x6d')
292 assert type(urls.url_unquote_plus(u'\x6d')) is text_type
327 strict_eq(urls.url_unquote_plus(u"\x6d"), u"\x6d")
328 assert type(urls.url_unquote_plus(u"\x6d")) is text_type
293329
294330
295331 def test_quoting_of_local_urls():
296 rv = urls.iri_to_uri(u'/foo\x8f')
297 strict_eq(rv, '/foo%C2%8F')
332 rv = urls.iri_to_uri(u"/foo\x8f")
333 strict_eq(rv, "/foo%C2%8F")
298334 assert type(rv) is str
299335
300336
301337 def test_url_attributes():
302 rv = urls.url_parse('http://foo%3a:bar%3a@[::1]:80/123?x=y#frag')
303 strict_eq(rv.scheme, 'http')
304 strict_eq(rv.auth, 'foo%3a:bar%3a')
305 strict_eq(rv.username, u'foo:')
306 strict_eq(rv.password, u'bar:')
307 strict_eq(rv.raw_username, 'foo%3a')
308 strict_eq(rv.raw_password, 'bar%3a')
309 strict_eq(rv.host, '::1')
338 rv = urls.url_parse("http://foo%3a:bar%3a@[::1]:80/123?x=y#frag")
339 strict_eq(rv.scheme, "http")
340 strict_eq(rv.auth, "foo%3a:bar%3a")
341 strict_eq(rv.username, u"foo:")
342 strict_eq(rv.password, u"bar:")
343 strict_eq(rv.raw_username, "foo%3a")
344 strict_eq(rv.raw_password, "bar%3a")
345 strict_eq(rv.host, "::1")
310346 assert rv.port == 80
311 strict_eq(rv.path, '/123')
312 strict_eq(rv.query, 'x=y')
313 strict_eq(rv.fragment, 'frag')
314
315 rv = urls.url_parse(u'http://\N{SNOWMAN}.com/')
316 strict_eq(rv.host, u'\N{SNOWMAN}.com')
317 strict_eq(rv.ascii_host, 'xn--n3h.com')
347 strict_eq(rv.path, "/123")
348 strict_eq(rv.query, "x=y")
349 strict_eq(rv.fragment, "frag")
350
351 rv = urls.url_parse(u"http://\N{SNOWMAN}.com/")
352 strict_eq(rv.host, u"\N{SNOWMAN}.com")
353 strict_eq(rv.ascii_host, "xn--n3h.com")
318354
319355
320356 def test_url_attributes_bytes():
321 rv = urls.url_parse(b'http://foo%3a:bar%3a@[::1]:80/123?x=y#frag')
322 strict_eq(rv.scheme, b'http')
323 strict_eq(rv.auth, b'foo%3a:bar%3a')
324 strict_eq(rv.username, u'foo:')
325 strict_eq(rv.password, u'bar:')
326 strict_eq(rv.raw_username, b'foo%3a')
327 strict_eq(rv.raw_password, b'bar%3a')
328 strict_eq(rv.host, b'::1')
357 rv = urls.url_parse(b"http://foo%3a:bar%3a@[::1]:80/123?x=y#frag")
358 strict_eq(rv.scheme, b"http")
359 strict_eq(rv.auth, b"foo%3a:bar%3a")
360 strict_eq(rv.username, u"foo:")
361 strict_eq(rv.password, u"bar:")
362 strict_eq(rv.raw_username, b"foo%3a")
363 strict_eq(rv.raw_password, b"bar%3a")
364 strict_eq(rv.host, b"::1")
329365 assert rv.port == 80
330 strict_eq(rv.path, b'/123')
331 strict_eq(rv.query, b'x=y')
332 strict_eq(rv.fragment, b'frag')
366 strict_eq(rv.path, b"/123")
367 strict_eq(rv.query, b"x=y")
368 strict_eq(rv.fragment, b"frag")
333369
334370
335371 def test_url_joining():
336 strict_eq(urls.url_join('/foo', '/bar'), '/bar')
337 strict_eq(urls.url_join('http://example.com/foo', '/bar'),
338 'http://example.com/bar')
339 strict_eq(urls.url_join('file:///tmp/', 'test.html'),
340 'file:///tmp/test.html')
341 strict_eq(urls.url_join('file:///tmp/x', 'test.html'),
342 'file:///tmp/test.html')
343 strict_eq(urls.url_join('file:///tmp/x', '../../../x.html'),
344 'file:///x.html')
372 strict_eq(urls.url_join("/foo", "/bar"), "/bar")
373 strict_eq(urls.url_join("http://example.com/foo", "/bar"), "http://example.com/bar")
374 strict_eq(urls.url_join("file:///tmp/", "test.html"), "file:///tmp/test.html")
375 strict_eq(urls.url_join("file:///tmp/x", "test.html"), "file:///tmp/test.html")
376 strict_eq(urls.url_join("file:///tmp/x", "../../../x.html"), "file:///x.html")
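The `url_join` cases above follow standard RFC 3986 relative-reference resolution. A stdlib sketch using `urllib.parse.urljoin` (not werkzeug itself, just the analogous behavior) shows the same rules:

```python
from urllib.parse import urljoin

# Stdlib analogue of the url_join cases above (urllib.parse, not
# werkzeug.urls): RFC 3986 relative-reference resolution.
# An absolute path in the reference replaces the base path entirely.
assert urljoin("http://example.com/foo", "/bar") == "http://example.com/bar"
# A relative reference resolves against the base path's directory:
# a trailing slash keeps "/tmp/", a final segment like "x" is dropped.
assert urljoin("file:///tmp/", "test.html") == "file:///tmp/test.html"
assert urljoin("file:///tmp/x", "test.html") == "file:///tmp/test.html"
```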
345377
346378
347379 def test_partial_unencoded_decode():
348 ref = u'foo=정상처리'.encode('euc-kr')
349 x = urls.url_decode(ref, charset='euc-kr')
350 strict_eq(x['foo'], u'정상처리')
380 ref = u"foo=정상처리".encode("euc-kr")
381 x = urls.url_decode(ref, charset="euc-kr")
382 strict_eq(x["foo"], u"정상처리")
351383
352384
353385 def test_iri_to_uri_idempotence_ascii_only():
354 uri = u'http://www.idempoten.ce'
386 uri = u"http://www.idempoten.ce"
355387 uri = urls.iri_to_uri(uri)
356388 assert urls.iri_to_uri(uri) == uri
357389
358390
359391 def test_iri_to_uri_idempotence_non_ascii():
360 uri = u'http://\N{SNOWMAN}/\N{SNOWMAN}'
392 uri = u"http://\N{SNOWMAN}/\N{SNOWMAN}"
361393 uri = urls.iri_to_uri(uri)
362394 assert urls.iri_to_uri(uri) == uri
363395
364396
365397 def test_uri_to_iri_idempotence_ascii_only():
366 uri = 'http://www.idempoten.ce'
398 uri = "http://www.idempoten.ce"
367399 uri = urls.uri_to_iri(uri)
368400 assert urls.uri_to_iri(uri) == uri
369401
370402
371403 def test_uri_to_iri_idempotence_non_ascii():
372 uri = 'http://xn--n3h/%E2%98%83'
404 uri = "http://xn--n3h/%E2%98%83"
373405 uri = urls.uri_to_iri(uri)
374406 assert urls.uri_to_iri(uri) == uri
375407
376408
377409 def test_iri_to_uri_to_iri():
378 iri = u'http://föö.com/'
410 iri = u"http://föö.com/"
379411 uri = urls.iri_to_uri(iri)
380412 assert urls.uri_to_iri(uri) == iri
381413
382414
383415 def test_uri_to_iri_to_uri():
384 uri = 'http://xn--f-rgao.com/%C3%9E'
416 uri = "http://xn--f-rgao.com/%C3%9E"
385417 iri = urls.uri_to_iri(uri)
386418 assert urls.iri_to_uri(iri) == uri
387419
388420
389421 def test_uri_iri_normalization():
390 uri = 'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93'
391 iri = u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713'
422 uri = "http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93"
423 iri = u"http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713"
392424
393425 tests = [
394 u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713',
395 u'http://xn--f-rgao.com/\u2610/fred?utf8=\N{CHECK MARK}',
396 b'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93',
397 u'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93',
398 u'http://föñ.com/\u2610/fred?utf8=%E2%9C%93',
399 b'http://xn--f-rgao.com/\xe2\x98\x90/fred?utf8=\xe2\x9c\x93',
426 u"http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713",
427 u"http://xn--f-rgao.com/\u2610/fred?utf8=\N{CHECK MARK}",
428 b"http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93",
429 u"http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93",
430 u"http://föñ.com/\u2610/fred?utf8=%E2%9C%93",
431 b"http://xn--f-rgao.com/\xe2\x98\x90/fred?utf8=\xe2\x9c\x93",
400432 ]
401433
402434 for test in tests:
406438 assert urls.iri_to_uri(urls.uri_to_iri(test)) == uri
407439 assert urls.uri_to_iri(urls.uri_to_iri(test)) == iri
408440 assert urls.iri_to_uri(urls.iri_to_uri(test)) == uri
441
442
443 def test_uri_to_iri_dont_unquote_space():
444 assert urls.uri_to_iri("abc%20def") == "abc%20def"
445
446
447 def test_iri_to_uri_dont_quote_reserved():
448 assert urls.iri_to_uri("/path[bracket]?(paren)") == "/path[bracket]?(paren)"
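The IRI/URI round-trip tests above rest on two separate mechanisms, which can be sketched with the stdlib alone (`urllib.parse` and the `idna` codec, not werkzeug's own implementation):

```python
from urllib.parse import quote, unquote

# Stdlib sketch of the two mechanisms behind iri_to_uri/uri_to_iri
# (urllib.parse, not werkzeug itself). Non-ASCII path characters become
# UTF-8 percent-escapes, and unquoting reverses that exactly.
assert quote(u"/\N{SNOWMAN}", safe="/") == "/%E2%98%83"
assert unquote("/%E2%98%83") == u"/\N{SNOWMAN}"
# Hostnames cannot be percent-encoded; they go through IDNA ("punycode")
# instead, which is why the tests pair föñ.com with xn--f-rgao.com.
assert u"föñ.com".encode("idna") == b"xn--f-rgao.com"
```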
44
55 General utilities.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
10 from __future__ import with_statement
10 import inspect
11 from datetime import datetime
1112
1213 import pytest
1314
14 from datetime import datetime
15 import inspect
16
1715 from werkzeug import utils
16 from werkzeug._compat import text_type
1817 from werkzeug.datastructures import Headers
19 from werkzeug.http import parse_date, http_date
18 from werkzeug.http import http_date
19 from werkzeug.http import parse_date
20 from werkzeug.test import Client
2021 from werkzeug.wrappers import BaseResponse
21 from werkzeug.test import Client
22 from werkzeug._compat import text_type
2322
2423
2524 def test_redirect():
26 resp = utils.redirect(u'/füübär')
27 assert b'/f%C3%BC%C3%BCb%C3%A4r' in resp.get_data()
28 assert resp.headers['Location'] == '/f%C3%BC%C3%BCb%C3%A4r'
25 resp = utils.redirect(u"/füübär")
26 assert b"/f%C3%BC%C3%BCb%C3%A4r" in resp.get_data()
27 assert resp.headers["Location"] == "/f%C3%BC%C3%BCb%C3%A4r"
2928 assert resp.status_code == 302
3029
31 resp = utils.redirect(u'http://☃.net/', 307)
32 assert b'http://xn--n3h.net/' in resp.get_data()
33 assert resp.headers['Location'] == 'http://xn--n3h.net/'
30 resp = utils.redirect(u"http://☃.net/", 307)
31 assert b"http://xn--n3h.net/" in resp.get_data()
32 assert resp.headers["Location"] == "http://xn--n3h.net/"
3433 assert resp.status_code == 307
3534
36 resp = utils.redirect('http://example.com/', 305)
37 assert resp.headers['Location'] == 'http://example.com/'
38 assert resp.status_code == 305
39
40
41 def test_redirect_no_unicode_header_keys():
42 # Make sure all headers are native keys. This was a bug at one point
43 # due to an incorrect conversion.
44 resp = utils.redirect('http://example.com/', 305)
45 for key, value in resp.headers.items():
46 assert type(key) == str
47 assert type(value) == text_type
48 assert resp.headers['Location'] == 'http://example.com/'
35 resp = utils.redirect("http://example.com/", 305)
36 assert resp.headers["Location"] == "http://example.com/"
4937 assert resp.status_code == 305
5038
5139
5240 def test_redirect_xss():
5341 location = 'http://example.com/?xss="><script>alert(1)</script>'
5442 resp = utils.redirect(location)
55 assert b'<script>alert(1)</script>' not in resp.get_data()
43 assert b"<script>alert(1)</script>" not in resp.get_data()
5644
5745 location = 'http://example.com/?xss="onmouseover="alert(1)'
5846 resp = utils.redirect(location)
59 assert b'href="http://example.com/?xss="onmouseover="alert(1)"' not in resp.get_data()
47 assert (
48 b'href="http://example.com/?xss="onmouseover="alert(1)"' not in resp.get_data()
49 )
6050
6151
6252 def test_redirect_with_custom_response_class():
6757 resp = utils.redirect(location, Response=MyResponse)
6858
6959 assert isinstance(resp, MyResponse)
70 assert resp.headers['Location'] == location
60 assert resp.headers["Location"] == location
7161
7262
7363 def test_cached_property():
7464 foo = []
7565
7666 class A(object):
77
7867 def prop(self):
7968 foo.append(42)
8069 return 42
70
8171 prop = utils.cached_property(prop)
8272
8373 a = A()
8979 foo = []
9080
9181 class A(object):
92
9382 def _prop(self):
9483 foo.append(42)
9584 return 42
96 prop = utils.cached_property(_prop, name='prop')
85
86 prop = utils.cached_property(_prop, name="prop")
9787 del _prop
9888
9989 a = A()
10595
10696 def test_can_set_cached_property():
10797 class A(object):
108
10998 @utils.cached_property
11099 def _prop(self):
111 return 'cached_property return value'
100 return "cached_property return value"
112101
113102 a = A()
114 a._prop = 'value'
115 assert a._prop == 'value'
103 a._prop = "value"
104 assert a._prop == "value"
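The behavior these `cached_property` tests exercise (compute once, then allow overriding by plain assignment) falls out of a non-data descriptor. A minimal stdlib-only sketch, not werkzeug's actual implementation:

```python
# Minimal sketch of a cached_property-style non-data descriptor
# (stdlib only; not werkzeug's implementation). Because it defines no
# __set__, a value stored in the instance __dict__ shadows the
# descriptor, which makes both caching and plain assignment work.
class cached_property(object):
    def __init__(self, func, name=None):
        self.func = func
        self.__name__ = name or func.__name__
        self.__doc__ = func.__doc__

    def __get__(self, obj, type=None):
        if obj is None:
            return self
        # Compute once and stash the result on the instance; later
        # attribute lookups hit __dict__ and never call __get__ again.
        value = obj.__dict__[self.__name__] = self.func(obj)
        return value


calls = []

class A(object):
    @cached_property
    def prop(self):
        calls.append(1)
        return 42

a = A()
assert (a.prop, a.prop) == (42, 42)
assert len(calls) == 1   # computed only once, then served from __dict__
a.prop = "overridden"    # plain assignment wins, as in the tests above
assert a.prop == "overridden"
```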
116105
117106
118107 def test_inspect_treats_cached_property_as_property():
119108 class A(object):
120
121109 @utils.cached_property
122110 def _prop(self):
123 return 'cached_property return value'
111 return "cached_property return value"
124112
125113 attrs = inspect.classify_class_attrs(A)
126114 for attr in attrs:
127 if attr.name == '_prop':
115 if attr.name == "_prop":
128116 break
129 assert attr.kind == 'property'
117 assert attr.kind == "property"
130118
131119
132120 def test_environ_property():
133121 class A(object):
134 environ = {'string': 'abc', 'number': '42'}
135
136 string = utils.environ_property('string')
137 missing = utils.environ_property('missing', 'spam')
138 read_only = utils.environ_property('number')
139 number = utils.environ_property('number', load_func=int)
140 broken_number = utils.environ_property('broken_number', load_func=int)
141 date = utils.environ_property('date', None, parse_date, http_date,
142 read_only=False)
143 foo = utils.environ_property('foo')
122 environ = {"string": "abc", "number": "42"}
123
124 string = utils.environ_property("string")
125 missing = utils.environ_property("missing", "spam")
126 read_only = utils.environ_property("number")
127 number = utils.environ_property("number", load_func=int)
128 broken_number = utils.environ_property("broken_number", load_func=int)
129 date = utils.environ_property(
130 "date", None, parse_date, http_date, read_only=False
131 )
132 foo = utils.environ_property("foo")
144133
145134 a = A()
146 assert a.string == 'abc'
147 assert a.missing == 'spam'
135 assert a.string == "abc"
136 assert a.missing == "spam"
148137
149138 def test_assign():
150 a.read_only = 'something'
139 a.read_only = "something"
140
151141 pytest.raises(AttributeError, test_assign)
152142 assert a.number == 42
153143 assert a.broken_number is None
154144 assert a.date is None
155145 a.date = datetime(2008, 1, 22, 10, 0, 0, 0)
156 assert a.environ['date'] == 'Tue, 22 Jan 2008 10:00:00 GMT'
146 assert a.environ["date"] == "Tue, 22 Jan 2008 10:00:00 GMT"
157147
158148
159149 def test_escape():
160150 class Foo(str):
161
162151 def __html__(self):
163152 return text_type(self)
164 assert utils.escape(None) == ''
165 assert utils.escape(42) == '42'
166 assert utils.escape('<>') == '&lt;&gt;'
167 assert utils.escape('"foo"') == '&quot;foo&quot;'
168 assert utils.escape(Foo('<foo>')) == '<foo>'
153
154 assert utils.escape(None) == ""
155 assert utils.escape(42) == "42"
156 assert utils.escape("<>") == "&lt;&gt;"
157 assert utils.escape('"foo"') == "&quot;foo&quot;"
158 assert utils.escape(Foo("<foo>")) == "<foo>"
169159
170160
171161 def test_unescape():
172 assert utils.unescape('&lt;&auml;&gt;') == u'<ä>'
162 assert utils.unescape("&lt;&auml;&gt;") == u"<ä>"
173163
174164
175165 def test_import_string():
176 import cgi
166 from datetime import date
177167 from werkzeug.debug import DebuggedApplication
178 assert utils.import_string('cgi.escape') is cgi.escape
179 assert utils.import_string(u'cgi.escape') is cgi.escape
180 assert utils.import_string('cgi:escape') is cgi.escape
181 assert utils.import_string('XXXXXXXXXXXX', True) is None
182 assert utils.import_string('cgi.XXXXXXXXXXXX', True) is None
183 assert utils.import_string(u'werkzeug.debug.DebuggedApplication') is DebuggedApplication
184 pytest.raises(ImportError, utils.import_string, 'XXXXXXXXXXXXXXXX')
185 pytest.raises(ImportError, utils.import_string, 'cgi.XXXXXXXXXX')
168
169 assert utils.import_string("datetime.date") is date
170 assert utils.import_string(u"datetime.date") is date
171 assert utils.import_string("datetime:date") is date
172 assert utils.import_string("XXXXXXXXXXXX", True) is None
173 assert utils.import_string("datetime.XXXXXXXXXXXX", True) is None
174 assert (
175 utils.import_string(u"werkzeug.debug.DebuggedApplication")
176 is DebuggedApplication
177 )
178 pytest.raises(ImportError, utils.import_string, "XXXXXXXXXXXXXXXX")
179 pytest.raises(ImportError, utils.import_string, "datetime.XXXXXXXXXX")
180
181
182 def test_import_string_provides_traceback(tmpdir, monkeypatch):
183 monkeypatch.syspath_prepend(str(tmpdir))
184 # Couple of packages
185 dir_a = tmpdir.mkdir("a")
186 dir_b = tmpdir.mkdir("b")
187 # Totally packages, I promise
188 dir_a.join("__init__.py").write("")
189 dir_b.join("__init__.py").write("")
190 # 'aa.a' that depends on 'bb.b', which in turn has a broken import
191 dir_a.join("aa.py").write("from b import bb")
192 dir_b.join("bb.py").write("from os import a_typo")
193
194 # Do we get all the useful information in the traceback?
195 with pytest.raises(ImportError) as baz_exc:
196 utils.import_string("a.aa")
197 traceback = "".join((str(line) for line in baz_exc.traceback))
198 assert "bb.py':1" in traceback # a bit different from a typical Python traceback
199 assert "from os import a_typo" in traceback
186200
187201
188202 def test_import_string_attribute_error(tmpdir, monkeypatch):
189203 monkeypatch.syspath_prepend(str(tmpdir))
190 tmpdir.join('foo_test.py').write('from bar_test import value')
191 tmpdir.join('bar_test.py').write('raise AttributeError("screw you!")')
204 tmpdir.join("foo_test.py").write("from bar_test import value")
205 tmpdir.join("bar_test.py").write('raise AttributeError("screw you!")')
192206 with pytest.raises(AttributeError) as foo_exc:
193 utils.import_string('foo_test')
194 assert 'screw you!' in str(foo_exc)
207 utils.import_string("foo_test")
208 assert "screw you!" in str(foo_exc)
195209
196210 with pytest.raises(AttributeError) as bar_exc:
197 utils.import_string('bar_test')
198 assert 'screw you!' in str(bar_exc)
211 utils.import_string("bar_test")
212 assert "screw you!" in str(bar_exc)
199213
200214
201215 def test_find_modules():
202 assert list(utils.find_modules('werkzeug.debug')) == [
203 'werkzeug.debug.console', 'werkzeug.debug.repr',
204 'werkzeug.debug.tbtools'
216 assert list(utils.find_modules("werkzeug.debug")) == [
217 "werkzeug.debug.console",
218 "werkzeug.debug.repr",
219 "werkzeug.debug.tbtools",
205220 ]
206221
207222
208223 def test_html_builder():
209224 html = utils.html
210225 xhtml = utils.xhtml
211 assert html.p('Hello World') == '<p>Hello World</p>'
212 assert html.a('Test', href='#') == '<a href="#">Test</a>'
213 assert html.br() == '<br>'
214 assert xhtml.br() == '<br />'
215 assert html.img(src='foo') == '<img src="foo">'
216 assert xhtml.img(src='foo') == '<img src="foo" />'
217 assert html.html(html.head(
218 html.title('foo'),
219 html.script(type='text/javascript')
220 )) == (
226 assert html.p("Hello World") == "<p>Hello World</p>"
227 assert html.a("Test", href="#") == '<a href="#">Test</a>'
228 assert html.br() == "<br>"
229 assert xhtml.br() == "<br />"
230 assert html.img(src="foo") == '<img src="foo">'
231 assert xhtml.img(src="foo") == '<img src="foo" />'
232 assert html.html(
233 html.head(html.title("foo"), html.script(type="text/javascript"))
234 ) == (
221235 '<html><head><title>foo</title><script type="text/javascript">'
222 '</script></head></html>'
223 )
224 assert html('<foo>') == '&lt;foo&gt;'
225 assert html.input(disabled=True) == '<input disabled>'
236 "</script></head></html>"
237 )
238 assert html("<foo>") == "&lt;foo&gt;"
239 assert html.input(disabled=True) == "<input disabled>"
226240 assert xhtml.input(disabled=True) == '<input disabled="disabled" />'
227 assert html.input(disabled='') == '<input>'
228 assert xhtml.input(disabled='') == '<input />'
229 assert html.input(disabled=None) == '<input>'
230 assert xhtml.input(disabled=None) == '<input />'
231 assert html.script('alert("Hello World");') == \
232 '<script>alert("Hello World");</script>'
233 assert xhtml.script('alert("Hello World");') == \
234 '<script>/*<![CDATA[*/alert("Hello World");/*]]>*/</script>'
241 assert html.input(disabled="") == "<input>"
242 assert xhtml.input(disabled="") == "<input />"
243 assert html.input(disabled=None) == "<input>"
244 assert xhtml.input(disabled=None) == "<input />"
245 assert (
246 html.script('alert("Hello World");') == '<script>alert("Hello World");</script>'
247 )
248 assert (
249 xhtml.script('alert("Hello World");')
250 == '<script>/*<![CDATA[*/alert("Hello World");/*]]>*/</script>'
251 )
235252
236253
237254 def test_validate_arguments():
238 take_none = lambda: None
239 take_two = lambda a, b: None
240 take_two_one_default = lambda a, b=0: None
241
242 assert utils.validate_arguments(take_two, (1, 2,), {}) == ((1, 2), {})
243 assert utils.validate_arguments(take_two, (1,), {'b': 2}) == ((1, 2), {})
255 def take_none():
256 pass
257
258 def take_two(a, b):
259 pass
260
261 def take_two_one_default(a, b=0):
262 pass
263
264 assert utils.validate_arguments(take_two, (1, 2), {}) == ((1, 2), {})
265 assert utils.validate_arguments(take_two, (1,), {"b": 2}) == ((1, 2), {})
244266 assert utils.validate_arguments(take_two_one_default, (1,), {}) == ((1, 0), {})
245267 assert utils.validate_arguments(take_two_one_default, (1, 2), {}) == ((1, 2), {})
246268
247 pytest.raises(utils.ArgumentValidationError,
248 utils.validate_arguments, take_two, (), {})
249
250 assert utils.validate_arguments(take_none, (1, 2,), {'c': 3}) == ((), {})
251 pytest.raises(utils.ArgumentValidationError,
252 utils.validate_arguments, take_none, (1,), {}, drop_extra=False)
253 pytest.raises(utils.ArgumentValidationError,
254 utils.validate_arguments, take_none, (), {'a': 1}, drop_extra=False)
269 pytest.raises(
270 utils.ArgumentValidationError, utils.validate_arguments, take_two, (), {}
271 )
272
273 assert utils.validate_arguments(take_none, (1, 2), {"c": 3}) == ((), {})
274 pytest.raises(
275 utils.ArgumentValidationError,
276 utils.validate_arguments,
277 take_none,
278 (1,),
279 {},
280 drop_extra=False,
281 )
282 pytest.raises(
283 utils.ArgumentValidationError,
284 utils.validate_arguments,
285 take_none,
286 (),
287 {"a": 1},
288 drop_extra=False,
289 )
255290
256291
257292 def test_header_set_duplication_bug():
258 headers = Headers([
259 ('Content-Type', 'text/html'),
260 ('Foo', 'bar'),
261 ('Blub', 'blah')
262 ])
263 headers['blub'] = 'hehe'
264 headers['blafasel'] = 'humm'
265 assert headers == Headers([
266 ('Content-Type', 'text/html'),
267 ('Foo', 'bar'),
268 ('blub', 'hehe'),
269 ('blafasel', 'humm')
270 ])
293 headers = Headers([("Content-Type", "text/html"), ("Foo", "bar"), ("Blub", "blah")])
294 headers["blub"] = "hehe"
295 headers["blafasel"] = "humm"
296 assert headers == Headers(
297 [
298 ("Content-Type", "text/html"),
299 ("Foo", "bar"),
300 ("blub", "hehe"),
301 ("blafasel", "humm"),
302 ]
303 )
271304
272305
273306 def test_append_slash_redirect():
274307 def app(env, sr):
275308 return utils.append_slash_redirect(env)(env, sr)
309
276310 client = Client(app, BaseResponse)
277 response = client.get('foo', base_url='http://example.org/app')
311 response = client.get("foo", base_url="http://example.org/app")
278312 assert response.status_code == 301
279 assert response.headers['Location'] == 'http://example.org/app/foo/'
313 assert response.headers["Location"] == "http://example.org/app/foo/"
280314
281315
282316 def test_cached_property_doc():
284318 def foo():
285319 """testing"""
286320 return 42
287 assert foo.__doc__ == 'testing'
288 assert foo.__name__ == 'foo'
321
322 assert foo.__doc__ == "testing"
323 assert foo.__name__ == "foo"
289324 assert foo.__module__ == __name__
290325
291326
292327 def test_secure_filename():
293 assert utils.secure_filename('My cool movie.mov') == 'My_cool_movie.mov'
294 assert utils.secure_filename('../../../etc/passwd') == 'etc_passwd'
295 assert utils.secure_filename(u'i contain cool \xfcml\xe4uts.txt') == \
296 'i_contain_cool_umlauts.txt'
297 assert utils.secure_filename('__filename__') == 'filename'
298 assert utils.secure_filename('foo$&^*)bar') == 'foobar'
328 assert utils.secure_filename("My cool movie.mov") == "My_cool_movie.mov"
329 assert utils.secure_filename("../../../etc/passwd") == "etc_passwd"
330 assert (
331 utils.secure_filename(u"i contain cool \xfcml\xe4uts.txt")
332 == "i_contain_cool_umlauts.txt"
333 )
334 assert utils.secure_filename("__filename__") == "filename"
335 assert utils.secure_filename("foo$&^*)bar") == "foobar"
44
55 Tests for the response and request objects.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import contextlib
11 import json
1112 import os
13 import pickle
14 from datetime import datetime
15 from datetime import timedelta
16 from io import BytesIO
1217
1318 import pytest
1419
15 import pickle
16 from io import BytesIO
17 from datetime import datetime, timedelta
20 from . import strict_eq
21 from werkzeug import wrappers
22 from werkzeug._compat import implements_iterator
1823 from werkzeug._compat import iteritems
19
20 from tests import strict_eq
21
22 from werkzeug import wrappers
23 from werkzeug.exceptions import SecurityError, RequestedRangeNotSatisfiable, \
24 BadRequest
25 from werkzeug.wsgi import LimitedStream, wrap_file
26 from werkzeug.datastructures import MultiDict, ImmutableOrderedMultiDict, \
27 ImmutableList, ImmutableTypeConversionDict, CharsetAccept, \
28 MIMEAccept, LanguageAccept, Accept, CombinedMultiDict
29 from werkzeug.test import Client, create_environ, run_wsgi_app
30 from werkzeug._compat import implements_iterator, text_type
24 from werkzeug._compat import text_type
25 from werkzeug.datastructures import Accept
26 from werkzeug.datastructures import CharsetAccept
27 from werkzeug.datastructures import CombinedMultiDict
28 from werkzeug.datastructures import Headers
29 from werkzeug.datastructures import ImmutableList
30 from werkzeug.datastructures import ImmutableOrderedMultiDict
31 from werkzeug.datastructures import ImmutableTypeConversionDict
32 from werkzeug.datastructures import LanguageAccept
33 from werkzeug.datastructures import MIMEAccept
34 from werkzeug.datastructures import MultiDict
35 from werkzeug.exceptions import BadRequest
36 from werkzeug.exceptions import RequestedRangeNotSatisfiable
37 from werkzeug.exceptions import SecurityError
38 from werkzeug.http import generate_etag
39 from werkzeug.test import Client
40 from werkzeug.test import create_environ
41 from werkzeug.test import run_wsgi_app
42 from werkzeug.wrappers.json import JSONMixin
43 from werkzeug.wsgi import LimitedStream
44 from werkzeug.wsgi import wrap_file
3145
3246
3347 class RequestTestResponse(wrappers.BaseResponse):
34
3548 """Subclass of the normal response class we use to test response
3649 and base classes. Has some methods to test if things in the
3750 response match.
4760
4861 def request_demo_app(environ, start_response):
4962 request = wrappers.BaseRequest(environ)
50 assert 'werkzeug.request' in environ
51 start_response('200 OK', [('Content-Type', 'text/plain')])
52 return [pickle.dumps({
53 'args': request.args,
54 'args_as_list': list(request.args.lists()),
55 'form': request.form,
56 'form_as_list': list(request.form.lists()),
57 'environ': prepare_environ_pickle(request.environ),
58 'data': request.get_data()
59 })]
63 assert "werkzeug.request" in environ
64 start_response("200 OK", [("Content-Type", "text/plain")])
65 return [
66 pickle.dumps(
67 {
68 "args": request.args,
69 "args_as_list": list(request.args.lists()),
70 "form": request.form,
71 "form_as_list": list(request.form.lists()),
72 "environ": prepare_environ_pickle(request.environ),
73 "data": request.get_data(),
74 }
75 )
76 ]
6077
6178
6279 def prepare_environ_pickle(environ):
7188
7289
7390 def assert_environ(environ, method):
74 strict_eq(environ['REQUEST_METHOD'], method)
75 strict_eq(environ['PATH_INFO'], '/')
76 strict_eq(environ['SCRIPT_NAME'], '')
77 strict_eq(environ['SERVER_NAME'], 'localhost')
78 strict_eq(environ['wsgi.version'], (1, 0))
79 strict_eq(environ['wsgi.url_scheme'], 'http')
91 strict_eq(environ["REQUEST_METHOD"], method)
92 strict_eq(environ["PATH_INFO"], "/")
93 strict_eq(environ["SCRIPT_NAME"], "")
94 strict_eq(environ["SERVER_NAME"], "localhost")
95 strict_eq(environ["wsgi.version"], (1, 0))
96 strict_eq(environ["wsgi.url_scheme"], "http")
8097
8198
8299 def test_base_request():
83100 client = Client(request_demo_app, RequestTestResponse)
84101
85102 # get requests
86 response = client.get('/?foo=bar&foo=hehe')
87 strict_eq(response['args'], MultiDict([('foo', u'bar'), ('foo', u'hehe')]))
88 strict_eq(response['args_as_list'], [('foo', [u'bar', u'hehe'])])
89 strict_eq(response['form'], MultiDict())
90 strict_eq(response['form_as_list'], [])
91 strict_eq(response['data'], b'')
92 assert_environ(response['environ'], 'GET')
103 response = client.get("/?foo=bar&foo=hehe")
104 strict_eq(response["args"], MultiDict([("foo", u"bar"), ("foo", u"hehe")]))
105 strict_eq(response["args_as_list"], [("foo", [u"bar", u"hehe"])])
106 strict_eq(response["form"], MultiDict())
107 strict_eq(response["form_as_list"], [])
108 strict_eq(response["data"], b"")
109 assert_environ(response["environ"], "GET")
93110
94111 # post requests with form data
95 response = client.post('/?blub=blah', data='foo=blub+hehe&blah=42',
96 content_type='application/x-www-form-urlencoded')
97 strict_eq(response['args'], MultiDict([('blub', u'blah')]))
98 strict_eq(response['args_as_list'], [('blub', [u'blah'])])
99 strict_eq(response['form'], MultiDict([('foo', u'blub hehe'), ('blah', u'42')]))
100 strict_eq(response['data'], b'')
112 response = client.post(
113 "/?blub=blah",
114 data="foo=blub+hehe&blah=42",
115 content_type="application/x-www-form-urlencoded",
116 )
117 strict_eq(response["args"], MultiDict([("blub", u"blah")]))
118 strict_eq(response["args_as_list"], [("blub", [u"blah"])])
119 strict_eq(response["form"], MultiDict([("foo", u"blub hehe"), ("blah", u"42")]))
120 strict_eq(response["data"], b"")
101121 # currently we do not guarantee that the values are ordered correctly
102122 # for post data.
103123 # strict_eq(response['form_as_list'], [('foo', ['blub hehe']), ('blah', ['42'])])
104 assert_environ(response['environ'], 'POST')
124 assert_environ(response["environ"], "POST")
105125
106126 # patch requests with form data
107 response = client.patch('/?blub=blah', data='foo=blub+hehe&blah=42',
108 content_type='application/x-www-form-urlencoded')
109 strict_eq(response['args'], MultiDict([('blub', u'blah')]))
110 strict_eq(response['args_as_list'], [('blub', [u'blah'])])
111 strict_eq(response['form'],
112 MultiDict([('foo', u'blub hehe'), ('blah', u'42')]))
113 strict_eq(response['data'], b'')
114 assert_environ(response['environ'], 'PATCH')
127 response = client.patch(
128 "/?blub=blah",
129 data="foo=blub+hehe&blah=42",
130 content_type="application/x-www-form-urlencoded",
131 )
132 strict_eq(response["args"], MultiDict([("blub", u"blah")]))
133 strict_eq(response["args_as_list"], [("blub", [u"blah"])])
134 strict_eq(response["form"], MultiDict([("foo", u"blub hehe"), ("blah", u"42")]))
135 strict_eq(response["data"], b"")
136 assert_environ(response["environ"], "PATCH")
115137
116138 # post requests with json data
117139 json = b'{"foo": "bar", "blub": "blah"}'
118 response = client.post('/?a=b', data=json, content_type='application/json')
119 strict_eq(response['data'], json)
120 strict_eq(response['args'], MultiDict([('a', u'b')]))
121 strict_eq(response['form'], MultiDict())
140 response = client.post("/?a=b", data=json, content_type="application/json")
141 strict_eq(response["data"], json)
142 strict_eq(response["args"], MultiDict([("a", u"b")]))
143 strict_eq(response["form"], MultiDict())
122144
123145
124146 def test_query_string_is_bytes():
125 req = wrappers.Request.from_values(u'/?foo=%2f')
126 strict_eq(req.query_string, b'foo=%2f')
147 req = wrappers.Request.from_values(u"/?foo=%2f")
148 strict_eq(req.query_string, b"foo=%2f")
127149
128150
129151 def test_request_repr():
130 req = wrappers.Request.from_values('/foobar')
152 req = wrappers.Request.from_values("/foobar")
131153 assert "<Request 'http://localhost/foobar' [GET]>" == repr(req)
132154 # test with non-ascii characters
133 req = wrappers.Request.from_values('/привет')
155 req = wrappers.Request.from_values("/привет")
134156 assert "<Request 'http://localhost/привет' [GET]>" == repr(req)
135157 # test with unicode type for python 2
136 req = wrappers.Request.from_values(u'/привет')
158 req = wrappers.Request.from_values(u"/привет")
137159 assert "<Request 'http://localhost/привет' [GET]>" == repr(req)
138160
139161
140162 def test_access_route():
141 req = wrappers.Request.from_values(headers={
142 'X-Forwarded-For': '192.168.1.2, 192.168.1.1'
143 })
144 req.environ['REMOTE_ADDR'] = '192.168.1.3'
145 assert req.access_route == ['192.168.1.2', '192.168.1.1']
146 strict_eq(req.remote_addr, '192.168.1.3')
163 req = wrappers.Request.from_values(
164 headers={"X-Forwarded-For": "192.168.1.2, 192.168.1.1"}
165 )
166 req.environ["REMOTE_ADDR"] = "192.168.1.3"
167 assert req.access_route == ["192.168.1.2", "192.168.1.1"]
168 strict_eq(req.remote_addr, "192.168.1.3")
147169
148170 req = wrappers.Request.from_values()
149 req.environ['REMOTE_ADDR'] = '192.168.1.3'
150 strict_eq(list(req.access_route), ['192.168.1.3'])
171 req.environ["REMOTE_ADDR"] = "192.168.1.3"
172 strict_eq(list(req.access_route), ["192.168.1.3"])
151173
152174
153175 def test_url_request_descriptors():
154 req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test')
155 strict_eq(req.path, u'/bar')
156 strict_eq(req.full_path, u'/bar?foo=baz')
157 strict_eq(req.script_root, u'/test')
158 strict_eq(req.url, u'http://example.com/test/bar?foo=baz')
159 strict_eq(req.base_url, u'http://example.com/test/bar')
160 strict_eq(req.url_root, u'http://example.com/test/')
161 strict_eq(req.host_url, u'http://example.com/')
162 strict_eq(req.host, 'example.com')
163 strict_eq(req.scheme, 'http')
164
165 req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test')
166 strict_eq(req.scheme, 'https')
176 req = wrappers.Request.from_values("/bar?foo=baz", "http://example.com/test")
177 strict_eq(req.path, u"/bar")
178 strict_eq(req.full_path, u"/bar?foo=baz")
179 strict_eq(req.script_root, u"/test")
180 strict_eq(req.url, u"http://example.com/test/bar?foo=baz")
181 strict_eq(req.base_url, u"http://example.com/test/bar")
182 strict_eq(req.url_root, u"http://example.com/test/")
183 strict_eq(req.host_url, u"http://example.com/")
184 strict_eq(req.host, "example.com")
185 strict_eq(req.scheme, "http")
186
187 req = wrappers.Request.from_values("/bar?foo=baz", "https://example.com/test")
188 strict_eq(req.scheme, "https")
167189
168190
169191 def test_url_request_descriptors_query_quoting():
170 next = 'http%3A%2F%2Fwww.example.com%2F%3Fnext%3D%2Fbaz%23my%3Dhash'
171 req = wrappers.Request.from_values('/bar?next=' + next, 'http://example.com/')
172 assert req.path == u'/bar'
173 strict_eq(req.full_path, u'/bar?next=' + next)
174 strict_eq(req.url, u'http://example.com/bar?next=' + next)
192 next = "http%3A%2F%2Fwww.example.com%2F%3Fnext%3D%2Fbaz%23my%3Dhash"
193 req = wrappers.Request.from_values("/bar?next=" + next, "http://example.com/")
194 assert req.path == u"/bar"
195 strict_eq(req.full_path, u"/bar?next=" + next)
196 strict_eq(req.url, u"http://example.com/bar?next=" + next)
175197
176198
177199 def test_url_request_descriptors_hosts():
178 req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test')
179 req.trusted_hosts = ['example.com']
180 strict_eq(req.path, u'/bar')
181 strict_eq(req.full_path, u'/bar?foo=baz')
182 strict_eq(req.script_root, u'/test')
183 strict_eq(req.url, u'http://example.com/test/bar?foo=baz')
184 strict_eq(req.base_url, u'http://example.com/test/bar')
185 strict_eq(req.url_root, u'http://example.com/test/')
186 strict_eq(req.host_url, u'http://example.com/')
187 strict_eq(req.host, 'example.com')
188 strict_eq(req.scheme, 'http')
189
190 req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test')
191 strict_eq(req.scheme, 'https')
192
193 req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test')
194 req.trusted_hosts = ['example.org']
200 req = wrappers.Request.from_values("/bar?foo=baz", "http://example.com/test")
201 req.trusted_hosts = ["example.com"]
202 strict_eq(req.path, u"/bar")
203 strict_eq(req.full_path, u"/bar?foo=baz")
204 strict_eq(req.script_root, u"/test")
205 strict_eq(req.url, u"http://example.com/test/bar?foo=baz")
206 strict_eq(req.base_url, u"http://example.com/test/bar")
207 strict_eq(req.url_root, u"http://example.com/test/")
208 strict_eq(req.host_url, u"http://example.com/")
209 strict_eq(req.host, "example.com")
210 strict_eq(req.scheme, "http")
211
212 req = wrappers.Request.from_values("/bar?foo=baz", "https://example.com/test")
213 strict_eq(req.scheme, "https")
214
215 req = wrappers.Request.from_values("/bar?foo=baz", "http://example.com/test")
216 req.trusted_hosts = ["example.org"]
195217 pytest.raises(SecurityError, lambda: req.url)
196218 pytest.raises(SecurityError, lambda: req.base_url)
197219 pytest.raises(SecurityError, lambda: req.url_root)
200222
201223
202224 def test_authorization_mixin():
203 request = wrappers.Request.from_values(headers={
204 'Authorization': 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=='
205 })
225 request = wrappers.Request.from_values(
226 headers={"Authorization": "Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ=="}
227 )
206228 a = request.authorization
207 strict_eq(a.type, 'basic')
208 strict_eq(a.username, 'Aladdin')
209 strict_eq(a.password, 'open sesame')
229 strict_eq(a.type, "basic")
230 strict_eq(a.username, u"Aladdin")
231 strict_eq(a.password, u"open sesame")
232
233
234 def test_authorization_with_unicode():
235 request = wrappers.Request.from_values(
236 headers={"Authorization": "Basic 0YDRg9GB0YHQutC40IE60JHRg9C60LLRiw=="}
237 )
238 a = request.authorization
239 strict_eq(a.type, "basic")
240 strict_eq(a.username, u"русскиЁ")
241 strict_eq(a.password, u"Буквы")
210242
211243
212244 def test_stream_only_mixing():
213245 request = wrappers.PlainRequest.from_values(
214 data=b'foo=blub+hehe',
215 content_type='application/x-www-form-urlencoded'
246 data=b"foo=blub+hehe", content_type="application/x-www-form-urlencoded"
216247 )
217248 assert list(request.files.items()) == []
218249 assert list(request.form.items()) == []
219250 pytest.raises(AttributeError, lambda: request.data)
220 strict_eq(request.stream.read(), b'foo=blub+hehe')
251 strict_eq(request.stream.read(), b"foo=blub+hehe")
221252
222253
223254 def test_request_application():
224255 @wrappers.Request.application
225256 def application(request):
226 return wrappers.Response('Hello World!')
257 return wrappers.Response("Hello World!")
227258
228259 @wrappers.Request.application
229260 def failing_application(request):
230261 raise BadRequest()
231262
232263 resp = wrappers.Response.from_app(application, create_environ())
233 assert resp.data == b'Hello World!'
264 assert resp.data == b"Hello World!"
234265 assert resp.status_code == 200
235266
236267 resp = wrappers.Response.from_app(failing_application, create_environ())
237 assert b'Bad Request' in resp.data
268 assert b"Bad Request" in resp.data
238269 assert resp.status_code == 400
239270
240271
241272 def test_base_response():
242273 # unicode
243 response = wrappers.BaseResponse(u'öäü')
244 strict_eq(response.get_data(), u'öäü'.encode('utf-8'))
274 response = wrappers.BaseResponse(u"öäü")
275 strict_eq(response.get_data(), u"öäü".encode("utf-8"))
245276
246277 # writing
247 response = wrappers.Response('foo')
248 response.stream.write('bar')
249 strict_eq(response.get_data(), b'foobar')
278 response = wrappers.Response("foo")
279 response.stream.write("bar")
280 strict_eq(response.get_data(), b"foobar")
250281
251282 # set cookie
252283 response = wrappers.BaseResponse()
253 response.set_cookie('foo', value='bar', max_age=60, expires=0,
254 path='/blub', domain='example.org', samesite='Strict')
255 strict_eq(response.headers.to_wsgi_list(), [
256 ('Content-Type', 'text/plain; charset=utf-8'),
257 ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, '
258 '01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub; '
259 'SameSite=Strict')
260 ])
284 response.set_cookie(
285 "foo",
286 value="bar",
287 max_age=60,
288 expires=0,
289 path="/blub",
290 domain="example.org",
291 samesite="Strict",
292 )
293 strict_eq(
294 response.headers.to_wsgi_list(),
295 [
296 ("Content-Type", "text/plain; charset=utf-8"),
297 (
298 "Set-Cookie",
299 "foo=bar; Domain=example.org; Expires=Thu, "
300 "01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub; "
301 "SameSite=Strict",
302 ),
303 ],
304 )
261305
262306 # delete cookie
263307 response = wrappers.BaseResponse()
264 response.delete_cookie('foo')
265 strict_eq(response.headers.to_wsgi_list(), [
266 ('Content-Type', 'text/plain; charset=utf-8'),
267 ('Set-Cookie', 'foo=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Max-Age=0; Path=/')
268 ])
308 response.delete_cookie("foo")
309 strict_eq(
310 response.headers.to_wsgi_list(),
311 [
312 ("Content-Type", "text/plain; charset=utf-8"),
313 (
314 "Set-Cookie",
315 "foo=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Max-Age=0; Path=/",
316 ),
317 ],
318 )
269319
270320 # close call forwarding
271321 closed = []
272322
273323 @implements_iterator
274324 class Iterable(object):
275
276325 def __next__(self):
277326 raise StopIteration()
278327
281330
282331 def close(self):
283332 closed.append(True)
333
284334 response = wrappers.BaseResponse(Iterable())
285335 response.call_on_close(lambda: closed.append(True))
286 app_iter, status, headers = run_wsgi_app(response,
287 create_environ(),
288 buffered=True)
289 strict_eq(status, '200 OK')
290 strict_eq(''.join(app_iter), '')
336 app_iter, status, headers = run_wsgi_app(response, create_environ(), buffered=True)
337 strict_eq(status, "200 OK")
338 strict_eq("".join(app_iter), "")
291339 strict_eq(len(closed), 2)
292340
293341 # with statement
301349 def test_response_status_codes():
302350 response = wrappers.BaseResponse()
303351 response.status_code = 404
304 strict_eq(response.status, '404 NOT FOUND')
305 response.status = '200 OK'
352 strict_eq(response.status, "404 NOT FOUND")
353 response.status = "200 OK"
306354 strict_eq(response.status_code, 200)
307 response.status = '999 WTF'
355 response.status = "999 WTF"
308356 strict_eq(response.status_code, 999)
309357 response.status_code = 588
310358 strict_eq(response.status_code, 588)
311 strict_eq(response.status, '588 UNKNOWN')
312 response.status = 'wtf'
359 strict_eq(response.status, "588 UNKNOWN")
360 response.status = "wtf"
313361 strict_eq(response.status_code, 0)
314 strict_eq(response.status, '0 wtf')
362 strict_eq(response.status, "0 wtf")
315363
316364 # invalid status codes
317365 with pytest.raises(ValueError) as empty_string_error:
318 wrappers.BaseResponse(None, '')
319 assert 'Empty status argument' in str(empty_string_error)
366 wrappers.BaseResponse(None, "")
367 assert "Empty status argument" in str(empty_string_error)
320368
321369 with pytest.raises(TypeError) as invalid_type_error:
322370 wrappers.BaseResponse(None, tuple())
323 assert 'Invalid status argument' in str(invalid_type_error)
371 assert "Invalid status argument" in str(invalid_type_error)
324372
325373
326374 def test_type_forcing():
327375 def wsgi_application(environ, start_response):
328 start_response('200 OK', [('Content-Type', 'text/html')])
329 return ['Hello World!']
330 base_response = wrappers.BaseResponse('Hello World!', content_type='text/html')
376 start_response("200 OK", [("Content-Type", "text/html")])
377 return ["Hello World!"]
378
379 base_response = wrappers.BaseResponse("Hello World!", content_type="text/html")
331380
332381 class SpecialResponse(wrappers.Response):
333
334382 def foo(self):
335383 return 42
336384
342390 response = SpecialResponse.force_type(orig_resp, fake_env)
343391 assert response.__class__ is SpecialResponse
344392 strict_eq(response.foo(), 42)
345 strict_eq(response.get_data(), b'Hello World!')
346 assert response.content_type == 'text/html'
393 strict_eq(response.get_data(), b"Hello World!")
394 assert response.content_type == "text/html"
347395
348396 # without env, no arbitrary conversion
349397 pytest.raises(TypeError, SpecialResponse.force_type, wsgi_application)
350398
351399
352400 def test_accept_mixin():
353 request = wrappers.Request({
354 'HTTP_ACCEPT': 'text/xml,application/xml,application/xhtml+xml,'
355 'text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5',
356 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7',
357 'HTTP_ACCEPT_ENCODING': 'gzip,deflate',
358 'HTTP_ACCEPT_LANGUAGE': 'en-us,en;q=0.5'
359 })
360 assert request.accept_mimetypes == MIMEAccept([
361 ('text/xml', 1), ('image/png', 1), ('application/xml', 1),
362 ('application/xhtml+xml', 1), ('text/html', 0.9),
363 ('text/plain', 0.8), ('*/*', 0.5)
364 ])
365 strict_eq(request.accept_charsets, CharsetAccept([
366 ('ISO-8859-1', 1), ('utf-8', 0.7), ('*', 0.7)
367 ]))
368 strict_eq(request.accept_encodings, Accept([
369 ('gzip', 1), ('deflate', 1)]))
370 strict_eq(request.accept_languages, LanguageAccept([
371 ('en-us', 1), ('en', 0.5)]))
372
373 request = wrappers.Request({'HTTP_ACCEPT': ''})
401 request = wrappers.Request(
402 {
403 "HTTP_ACCEPT": "text/xml,application/xml,application/xhtml+xml,"
404 "text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5",
405 "HTTP_ACCEPT_CHARSET": "ISO-8859-1,utf-8;q=0.7,*;q=0.7",
406 "HTTP_ACCEPT_ENCODING": "gzip,deflate",
407 "HTTP_ACCEPT_LANGUAGE": "en-us,en;q=0.5",
408 }
409 )
410 assert request.accept_mimetypes == MIMEAccept(
411 [
412 ("text/xml", 1),
413 ("image/png", 1),
414 ("application/xml", 1),
415 ("application/xhtml+xml", 1),
416 ("text/html", 0.9),
417 ("text/plain", 0.8),
418 ("*/*", 0.5),
419 ]
420 )
421 strict_eq(
422 request.accept_charsets,
423 CharsetAccept([("ISO-8859-1", 1), ("utf-8", 0.7), ("*", 0.7)]),
424 )
425 strict_eq(request.accept_encodings, Accept([("gzip", 1), ("deflate", 1)]))
426 strict_eq(request.accept_languages, LanguageAccept([("en-us", 1), ("en", 0.5)]))
427
428 request = wrappers.Request({"HTTP_ACCEPT": ""})
374429 strict_eq(request.accept_mimetypes, MIMEAccept())
375430
376431
377432 def test_etag_request_mixin():
378 request = wrappers.Request({
379 'HTTP_CACHE_CONTROL': 'no-store, no-cache',
380 'HTTP_IF_MATCH': 'W/"foo", bar, "baz"',
381 'HTTP_IF_NONE_MATCH': 'W/"foo", bar, "baz"',
382 'HTTP_IF_MODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT',
383 'HTTP_IF_UNMODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT'
384 })
433 request = wrappers.Request(
434 {
435 "HTTP_CACHE_CONTROL": "no-store, no-cache",
436 "HTTP_IF_MATCH": 'W/"foo", bar, "baz"',
437 "HTTP_IF_NONE_MATCH": 'W/"foo", bar, "baz"',
438 "HTTP_IF_MODIFIED_SINCE": "Tue, 22 Jan 2008 11:18:44 GMT",
439 "HTTP_IF_UNMODIFIED_SINCE": "Tue, 22 Jan 2008 11:18:44 GMT",
440 }
441 )
385442 assert request.cache_control.no_store
386443 assert request.cache_control.no_cache
387444
388445 for etags in request.if_match, request.if_none_match:
389 assert etags('bar')
446 assert etags("bar")
390447 assert etags.contains_raw('W/"foo"')
391 assert etags.contains_weak('foo')
392 assert not etags.contains('foo')
448 assert etags.contains_weak("foo")
449 assert not etags.contains("foo")
393450
394451 assert request.if_modified_since == datetime(2008, 1, 22, 11, 18, 44)
395452 assert request.if_unmodified_since == datetime(2008, 1, 22, 11, 18, 44)
397454
398455 def test_user_agent_mixin():
399456 user_agents = [
400 ('Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.11) '
401 'Gecko/20071127 Firefox/2.0.0.11', 'firefox', 'macos', '2.0.0.11',
402 'en-US'),
403 ('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; de-DE) Opera 8.54',
404 'opera', 'windows', '8.54', 'de-DE'),
405 ('Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420 '
406 '(KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3',
407 'safari', 'iphone', '3.0', 'en'),
408 ('Bot Googlebot/2.1 ( http://www.googlebot.com/bot.html)',
409 'google', None, '2.1', None),
410 ('Mozilla/5.0 (X11; CrOS armv7l 3701.81.0) AppleWebKit/537.31 '
411 '(KHTML, like Gecko) Chrome/26.0.1410.57 Safari/537.31',
412 'chrome', 'chromeos', '26.0.1410.57', None),
413 ('Mozilla/5.0 (Windows NT 6.3; Trident/7.0; .NET4.0E; rv:11.0) like Gecko',
414 'msie', 'windows', '11.0', None),
415 ('Mozilla/5.0 (SymbianOS/9.3; Series60/3.2 NokiaE5-00/101.003; '
416 'Profile/MIDP-2.1 Configuration/CLDC-1.1 ) AppleWebKit/533.4 (KHTML, like Gecko) '
417 'NokiaBrowser/7.3.1.35 Mobile Safari/533.4 3gpp-gba',
418 'safari', 'symbian', '533.4', None),
419 ('Mozilla/5.0 (X11; OpenBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0',
420 'firefox', 'openbsd', '45.0', None),
421 ('Mozilla/5.0 (X11; NetBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0',
422 'firefox', 'netbsd', '45.0', None),
423 ('Mozilla/5.0 (X11; FreeBSD amd64) AppleWebKit/537.36 (KHTML, like Gecko) '
424 'Chrome/48.0.2564.103 Safari/537.36',
425 'chrome', 'freebsd', '48.0.2564.103', None),
426 ('Mozilla/5.0 (X11; FreeBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0',
427 'firefox', 'freebsd', '45.0', None),
428 ('Mozilla/5.0 (X11; U; NetBSD amd64; en-US; rv:) Gecko/20150921 SeaMonkey/1.1.18',
429 'seamonkey', 'netbsd', '1.1.18', 'en-US'),
430 ('Mozilla/5.0 (Windows; U; Windows NT 6.2; WOW64; rv:1.8.0.7) '
431 'Gecko/20110321 MultiZilla/4.33.2.6a SeaMonkey/8.6.55',
432 'seamonkey', 'windows', '8.6.55', None),
433 ('Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120427 Firefox/12.0 SeaMonkey/2.9',
434 'seamonkey', 'linux', '2.9', None),
435 ('Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)',
436 'baidu', None, '2.0', None),
437 ('Mozilla/5.0 (X11; SunOS i86pc; rv:38.0) Gecko/20100101 Firefox/38.0',
438 'firefox', 'solaris', '38.0', None),
439 ('Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1',
440 'firefox', 'linux', '38.0', None),
441 ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) '
442 'Chrome/50.0.2661.75 Safari/537.36',
443 'chrome', 'windows', '50.0.2661.75', None),
444 ('Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)',
445 'bing', None, '2.0', None),
446 ('Mozilla/5.0 (X11; DragonFly x86_64) AppleWebKit/537.36 (KHTML, like Gecko) '
447 'Chrome/47.0.2526.106 Safari/537.36', 'chrome', 'dragonflybsd', '47.0.2526.106', None),
448 ('Mozilla/5.0 (X11; U; DragonFly i386; de; rv:1.9.1) Gecko/20090720 Firefox/3.5.1',
449 'firefox', 'dragonflybsd', '3.5.1', 'de')
450
457 (
458 "Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.11) "
459 "Gecko/20071127 Firefox/2.0.0.11",
460 "firefox",
461 "macos",
462 "2.0.0.11",
463 "en-US",
464 ),
465 (
466 "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; de-DE) Opera 8.54",
467 "opera",
468 "windows",
469 "8.54",
470 "de-DE",
471 ),
472 (
473 "Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420 "
474 "(KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3",
475 "safari",
476 "iphone",
477 "3.0",
478 "en",
479 ),
480 (
481 "Bot Googlebot/2.1 ( http://www.googlebot.com/bot.html)",
482 "google",
483 None,
484 "2.1",
485 None,
486 ),
487 (
488 "Mozilla/5.0 (X11; CrOS armv7l 3701.81.0) AppleWebKit/537.31 "
489 "(KHTML, like Gecko) Chrome/26.0.1410.57 Safari/537.31",
490 "chrome",
491 "chromeos",
492 "26.0.1410.57",
493 None,
494 ),
495 (
496 "Mozilla/5.0 (Windows NT 6.3; Trident/7.0; .NET4.0E; rv:11.0) like Gecko",
497 "msie",
498 "windows",
499 "11.0",
500 None,
501 ),
502 (
503 "Mozilla/5.0 (SymbianOS/9.3; Series60/3.2 NokiaE5-00/101.003; "
504 "Profile/MIDP-2.1 Configuration/CLDC-1.1 ) AppleWebKit/533.4 "
505 "(KHTML, like Gecko) NokiaBrowser/7.3.1.35 Mobile Safari/533.4 3gpp-gba",
506 "safari",
507 "symbian",
508 "533.4",
509 None,
510 ),
511 (
512 "Mozilla/5.0 (X11; OpenBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0",
513 "firefox",
514 "openbsd",
515 "45.0",
516 None,
517 ),
518 (
519 "Mozilla/5.0 (X11; NetBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0",
520 "firefox",
521 "netbsd",
522 "45.0",
523 None,
524 ),
525 (
526 "Mozilla/5.0 (X11; FreeBSD amd64) AppleWebKit/537.36 (KHTML, like Gecko) "
527 "Chrome/48.0.2564.103 Safari/537.36",
528 "chrome",
529 "freebsd",
530 "48.0.2564.103",
531 None,
532 ),
533 (
534 "Mozilla/5.0 (X11; FreeBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0",
535 "firefox",
536 "freebsd",
537 "45.0",
538 None,
539 ),
540 (
541 "Mozilla/5.0 (X11; U; NetBSD amd64; en-US; rv:) Gecko/20150921 "
542 "SeaMonkey/1.1.18",
543 "seamonkey",
544 "netbsd",
545 "1.1.18",
546 "en-US",
547 ),
548 (
549 "Mozilla/5.0 (Windows; U; Windows NT 6.2; WOW64; rv:1.8.0.7) "
550 "Gecko/20110321 MultiZilla/4.33.2.6a SeaMonkey/8.6.55",
551 "seamonkey",
552 "windows",
553 "8.6.55",
554 None,
555 ),
556 (
557 "Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120427 Firefox/12.0 "
558 "SeaMonkey/2.9",
559 "seamonkey",
560 "linux",
561 "2.9",
562 None,
563 ),
564 (
565 "Mozilla/5.0 (compatible; Baiduspider/2.0; "
566 "+http://www.baidu.com/search/spider.html)",
567 "baidu",
568 None,
569 "2.0",
570 None,
571 ),
572 (
573 "Mozilla/5.0 (X11; SunOS i86pc; rv:38.0) Gecko/20100101 Firefox/38.0",
574 "firefox",
575 "solaris",
576 "38.0",
577 None,
578 ),
579 (
580 "Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 "
581 "Iceweasel/38.7.1",
582 "firefox",
583 "linux",
584 "38.0",
585 None,
586 ),
587 (
588 "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
589 "(KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
590 "chrome",
591 "windows",
592 "50.0.2661.75",
593 None,
594 ),
595 (
596 "Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)",
597 "bing",
598 None,
599 "2.0",
600 None,
601 ),
602 (
603 "Mozilla/5.0 (X11; DragonFly x86_64) AppleWebKit/537.36 "
604 "(KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36",
605 "chrome",
606 "dragonflybsd",
607 "47.0.2526.106",
608 None,
609 ),
610 (
611 "Mozilla/5.0 (X11; U; DragonFly i386; de; rv:1.9.1) "
612 "Gecko/20090720 Firefox/3.5.1",
613 "firefox",
614 "dragonflybsd",
615 "3.5.1",
616 "de",
617 ),
451618 ]
452619 for ua, browser, platform, version, lang in user_agents:
453 request = wrappers.Request({'HTTP_USER_AGENT': ua})
620 request = wrappers.Request({"HTTP_USER_AGENT": ua})
454621 strict_eq(request.user_agent.browser, browser)
455622 strict_eq(request.user_agent.platform, platform)
456623 strict_eq(request.user_agent.version, version)
459626 strict_eq(request.user_agent.to_header(), ua)
460627 strict_eq(str(request.user_agent), ua)
461628
462 request = wrappers.Request({'HTTP_USER_AGENT': 'foo'})
629 request = wrappers.Request({"HTTP_USER_AGENT": "foo"})
463630 assert not request.user_agent
464631
465632
466633 def test_stream_wrapping():
467634 class LowercasingStream(object):
468
469635 def __init__(self, stream):
470636 self._stream = stream
471637
475641 def readline(self, size=-1):
476642 return self._stream.readline(size).lower()
477643
478 data = b'foo=Hello+World'
644 data = b"foo=Hello+World"
479645 req = wrappers.Request.from_values(
480 '/', method='POST', data=data,
481 content_type='application/x-www-form-urlencoded')
646 "/", method="POST", data=data, content_type="application/x-www-form-urlencoded"
647 )
482648 req.stream = LowercasingStream(req.stream)
483 assert req.form['foo'] == 'hello world'
649 assert req.form["foo"] == "hello world"
484650
485651
486652 def test_data_descriptor_triggers_parsing():
487 data = b'foo=Hello+World'
653 data = b"foo=Hello+World"
488654 req = wrappers.Request.from_values(
489 '/', method='POST', data=data,
490 content_type='application/x-www-form-urlencoded')
491
492 assert req.data == b''
493 assert req.form['foo'] == u'Hello World'
655 "/", method="POST", data=data, content_type="application/x-www-form-urlencoded"
656 )
657
658 assert req.data == b""
659 assert req.form["foo"] == u"Hello World"
494660
495661
496662 def test_get_data_method_parsing_caching_behavior():
497 data = b'foo=Hello+World'
663 data = b"foo=Hello+World"
498664 req = wrappers.Request.from_values(
499 '/', method='POST', data=data,
500 content_type='application/x-www-form-urlencoded')
665 "/", method="POST", data=data, content_type="application/x-www-form-urlencoded"
666 )
501667
502668 # get_data() caches, so form stays available
503669 assert req.get_data() == data
504 assert req.form['foo'] == u'Hello World'
670 assert req.form["foo"] == u"Hello World"
505671 assert req.get_data() == data
506672
507673 # here we access the form data first, caching is bypassed
508674 req = wrappers.Request.from_values(
509 '/', method='POST', data=data,
510 content_type='application/x-www-form-urlencoded')
511 assert req.form['foo'] == u'Hello World'
512 assert req.get_data() == b''
675 "/", method="POST", data=data, content_type="application/x-www-form-urlencoded"
676 )
677 assert req.form["foo"] == u"Hello World"
678 assert req.get_data() == b""
513679
514680 # Another case is uncached get data which trashes everything
515681 req = wrappers.Request.from_values(
516 '/', method='POST', data=data,
517 content_type='application/x-www-form-urlencoded')
682 "/", method="POST", data=data, content_type="application/x-www-form-urlencoded"
683 )
518684 assert req.get_data(cache=False) == data
519 assert req.get_data(cache=False) == b''
685 assert req.get_data(cache=False) == b""
520686 assert req.form == {}
521687
522688 # Or we can implicitly start the form parser which is similar to
523689 # the old .data behavior
524690 req = wrappers.Request.from_values(
525 '/', method='POST', data=data,
526 content_type='application/x-www-form-urlencoded')
527 assert req.get_data(parse_form_data=True) == b''
528 assert req.form['foo'] == u'Hello World'
691 "/", method="POST", data=data, content_type="application/x-www-form-urlencoded"
692 )
693 assert req.get_data(parse_form_data=True) == b""
694 assert req.form["foo"] == u"Hello World"
529695
530696
531697 def test_etag_response_mixin():
532 response = wrappers.Response('Hello World')
698 response = wrappers.Response("Hello World")
533699 assert response.get_etag() == (None, None)
534700 response.add_etag()
535 assert response.get_etag() == ('b10a8db164e0754105b7a99be72e3fe5', False)
701 assert response.get_etag() == ("b10a8db164e0754105b7a99be72e3fe5", False)
536702 assert not response.cache_control
537703 response.cache_control.must_revalidate = True
538704 response.cache_control.max_age = 60
539 response.headers['Content-Length'] = len(response.get_data())
540 assert response.headers['Cache-Control'] in ('must-revalidate, max-age=60',
541 'max-age=60, must-revalidate')
542
543 assert 'date' not in response.headers
705 response.headers["Content-Length"] = len(response.get_data())
706 assert response.headers["Cache-Control"] in (
707 "must-revalidate, max-age=60",
708 "max-age=60, must-revalidate",
709 )
710
711 assert "date" not in response.headers
544712 env = create_environ()
545 env.update({
546 'REQUEST_METHOD': 'GET',
547 'HTTP_IF_NONE_MATCH': response.get_etag()[0]
548 })
713 env.update({"REQUEST_METHOD": "GET", "HTTP_IF_NONE_MATCH": response.get_etag()[0]})
549714 response.make_conditional(env)
550 assert 'date' in response.headers
715 assert "date" in response.headers
551716
552717 # after the thing is invoked by the server as wsgi application
553718 # (we're emulating this here), there must not be any entity
554719 # headers left and the status code would have to be 304
555720 resp = wrappers.Response.from_app(response, env)
556721 assert resp.status_code == 304
557 assert 'content-length' not in resp.headers
722 assert "content-length" not in resp.headers
558723
559724 # make sure date is not overridden
560 response = wrappers.Response('Hello World')
725 response = wrappers.Response("Hello World")
561726 response.date = 1337
562727 d = response.date
563728 response.make_conditional(env)
564729 assert response.date == d
565730
566731 # make sure content length is only set if missing
567 response = wrappers.Response('Hello World')
732 response = wrappers.Response("Hello World")
568733 response.content_length = 999
569734 response.make_conditional(env)
570735 assert response.content_length == 999
571736
572737
573738 def test_etag_response_412():
574 response = wrappers.Response('Hello World')
739 response = wrappers.Response("Hello World")
575740 assert response.get_etag() == (None, None)
576741 response.add_etag()
577 assert response.get_etag() == ('b10a8db164e0754105b7a99be72e3fe5', False)
742 assert response.get_etag() == ("b10a8db164e0754105b7a99be72e3fe5", False)
578743 assert not response.cache_control
579744 response.cache_control.must_revalidate = True
580745 response.cache_control.max_age = 60
581 response.headers['Content-Length'] = len(response.get_data())
582 assert response.headers['Cache-Control'] in ('must-revalidate, max-age=60',
583 'max-age=60, must-revalidate')
584
585 assert 'date' not in response.headers
746 response.headers["Content-Length"] = len(response.get_data())
747 assert response.headers["Cache-Control"] in (
748 "must-revalidate, max-age=60",
749 "max-age=60, must-revalidate",
750 )
751
752 assert "date" not in response.headers
586753 env = create_environ()
587 env.update({
588 'REQUEST_METHOD': 'GET',
589 'HTTP_IF_MATCH': response.get_etag()[0] + "xyz"
590 })
754 env.update(
755 {"REQUEST_METHOD": "GET", "HTTP_IF_MATCH": response.get_etag()[0] + "xyz"}
756 )
591757 response.make_conditional(env)
592 assert 'date' in response.headers
758 assert "date" in response.headers
593759
594760 # after the thing is invoked by the server as wsgi application
595761 # (we're emulating this here), there must not be any entity
596762 # headers left and the status code would have to be 412
597763 resp = wrappers.Response.from_app(response, env)
598764 assert resp.status_code == 412
599 assert 'content-length' not in resp.headers
765 # Make sure there is a body still
766 assert resp.data != b""
600767
601768 # make sure date is not overridden
602 response = wrappers.Response('Hello World')
769 response = wrappers.Response("Hello World")
603770 response.date = 1337
604771 d = response.date
605772 response.make_conditional(env)
606773 assert response.date == d
607774
608775 # make sure content length is only set if missing
609 response = wrappers.Response('Hello World')
776 response = wrappers.Response("Hello World")
610777 response.content_length = 999
611778 response.make_conditional(env)
612779 assert response.content_length == 999
614781
615782 def test_range_request_basic():
616783 env = create_environ()
617 response = wrappers.Response('Hello World')
618 env['HTTP_RANGE'] = 'bytes=0-4'
784 response = wrappers.Response("Hello World")
785 env["HTTP_RANGE"] = "bytes=0-4"
619786 response.make_conditional(env, accept_ranges=True, complete_length=11)
620787 assert response.status_code == 206
621 assert response.headers['Accept-Ranges'] == 'bytes'
622 assert response.headers['Content-Range'] == 'bytes 0-4/11'
623 assert response.headers['Content-Length'] == '5'
624 assert response.data == b'Hello'
788 assert response.headers["Accept-Ranges"] == "bytes"
789 assert response.headers["Content-Range"] == "bytes 0-4/11"
790 assert response.headers["Content-Length"] == "5"
791 assert response.data == b"Hello"
625792
626793
627794 def test_range_request_out_of_bound():
628795 env = create_environ()
629 response = wrappers.Response('Hello World')
630 env['HTTP_RANGE'] = 'bytes=6-666'
796 response = wrappers.Response("Hello World")
797 env["HTTP_RANGE"] = "bytes=6-666"
631798 response.make_conditional(env, accept_ranges=True, complete_length=11)
632799 assert response.status_code == 206
633 assert response.headers['Accept-Ranges'] == 'bytes'
634 assert response.headers['Content-Range'] == 'bytes 6-10/11'
635 assert response.headers['Content-Length'] == '5'
636 assert response.data == b'World'
800 assert response.headers["Accept-Ranges"] == "bytes"
801 assert response.headers["Content-Range"] == "bytes 6-10/11"
802 assert response.headers["Content-Length"] == "5"
803 assert response.data == b"World"
637804
638805
639806 def test_range_request_with_file():
640807 env = create_environ()
641 resources = os.path.join(os.path.dirname(__file__), 'res')
642 fname = os.path.join(resources, 'test.txt')
643 with open(fname, 'rb') as f:
808 resources = os.path.join(os.path.dirname(__file__), "res")
809 fname = os.path.join(resources, "test.txt")
810 with open(fname, "rb") as f:
644811 fcontent = f.read()
645 with open(fname, 'rb') as f:
812 with open(fname, "rb") as f:
646813 response = wrappers.Response(wrap_file(env, f))
647 env['HTTP_RANGE'] = 'bytes=0-0'
648 response.make_conditional(env, accept_ranges=True, complete_length=len(fcontent))
814 env["HTTP_RANGE"] = "bytes=0-0"
815 response.make_conditional(
816 env, accept_ranges=True, complete_length=len(fcontent)
817 )
649818 assert response.status_code == 206
650 assert response.headers['Accept-Ranges'] == 'bytes'
651 assert response.headers['Content-Range'] == 'bytes 0-0/%d' % len(fcontent)
652 assert response.headers['Content-Length'] == '1'
819 assert response.headers["Accept-Ranges"] == "bytes"
820 assert response.headers["Content-Range"] == "bytes 0-0/%d" % len(fcontent)
821 assert response.headers["Content-Length"] == "1"
653822 assert response.data == fcontent[:1]
654823
655824
656825 def test_range_request_with_complete_file():
657826 env = create_environ()
658 resources = os.path.join(os.path.dirname(__file__), 'res')
659 fname = os.path.join(resources, 'test.txt')
660 with open(fname, 'rb') as f:
827 resources = os.path.join(os.path.dirname(__file__), "res")
828 fname = os.path.join(resources, "test.txt")
829 with open(fname, "rb") as f:
661830 fcontent = f.read()
662 with open(fname, 'rb') as f:
831 with open(fname, "rb") as f:
663832 fsize = os.path.getsize(fname)
664833 response = wrappers.Response(wrap_file(env, f))
665 env['HTTP_RANGE'] = 'bytes=0-%d' % (fsize - 1)
666 response.make_conditional(env, accept_ranges=True,
667 complete_length=fsize)
834 env["HTTP_RANGE"] = "bytes=0-%d" % (fsize - 1)
835 response.make_conditional(env, accept_ranges=True, complete_length=fsize)
668836 assert response.status_code == 200
669 assert response.headers['Accept-Ranges'] == 'bytes'
670 assert 'Content-Range' not in response.headers
671 assert response.headers['Content-Length'] == str(fsize)
837 assert response.headers["Accept-Ranges"] == "bytes"
838 assert "Content-Range" not in response.headers
839 assert response.headers["Content-Length"] == str(fsize)
672840 assert response.data == fcontent
673841
674842
675843 def test_range_request_without_complete_length():
676844 env = create_environ()
677 response = wrappers.Response('Hello World')
678 env['HTTP_RANGE'] = 'bytes=-'
845 response = wrappers.Response("Hello World")
846 env["HTTP_RANGE"] = "bytes=-"
679847 response.make_conditional(env, accept_ranges=True, complete_length=None)
680848 assert response.status_code == 200
681 assert response.data == b'Hello World'
849 assert response.data == b"Hello World"
682850
683851
684852 def test_invalid_range_request():
685853 env = create_environ()
686 response = wrappers.Response('Hello World')
687 env['HTTP_RANGE'] = 'bytes=-'
854 response = wrappers.Response("Hello World")
855 env["HTTP_RANGE"] = "bytes=-"
688856 with pytest.raises(RequestedRangeNotSatisfiable):
689857 response.make_conditional(env, accept_ranges=True, complete_length=11)
690858
696864 class WithoutFreeze(wrappers.BaseResponse, wrappers.ETagResponseMixin):
697865 pass
698866
699 response = WithFreeze('Hello World')
867 response = WithFreeze("Hello World")
700868 response.freeze()
701 strict_eq(response.get_etag(),
702 (text_type(wrappers.generate_etag(b'Hello World')), False))
703 response = WithoutFreeze('Hello World')
869 strict_eq(response.get_etag(), (text_type(generate_etag(b"Hello World")), False))
870 response = WithoutFreeze("Hello World")
704871 response.freeze()
705872 assert response.get_etag() == (None, None)
706 response = wrappers.Response('Hello World')
873 response = wrappers.Response("Hello World")
707874 response.freeze()
708875 assert response.get_etag() == (None, None)
709876
710877
711878 def test_authenticate_mixin():
712879 resp = wrappers.Response()
713 resp.www_authenticate.type = 'basic'
714 resp.www_authenticate.realm = 'Testing'
715 strict_eq(resp.headers['WWW-Authenticate'], u'Basic realm="Testing"')
880 resp.www_authenticate.type = "basic"
881 resp.www_authenticate.realm = "Testing"
882 strict_eq(resp.headers["WWW-Authenticate"], u'Basic realm="Testing"')
716883 resp.www_authenticate.realm = None
717884 resp.www_authenticate.type = None
718 assert 'WWW-Authenticate' not in resp.headers
885 assert "WWW-Authenticate" not in resp.headers
719886
720887
721888 def test_authenticate_mixin_quoted_qop():
722889 # Example taken from https://github.com/pallets/werkzeug/issues/633
723890 resp = wrappers.Response()
724 resp.www_authenticate.set_digest('REALM', 'NONCE', qop=("auth", "auth-int"))
725
726 actual = set((resp.headers['WWW-Authenticate'] + ',').split())
891 resp.www_authenticate.set_digest("REALM", "NONCE", qop=("auth", "auth-int"))
892
893 actual = set((resp.headers["WWW-Authenticate"] + ",").split())
727894 expected = set('Digest nonce="NONCE", realm="REALM", qop="auth, auth-int",'.split())
728895 assert actual == expected
729896
730 resp.www_authenticate.set_digest('REALM', 'NONCE', qop=("auth",))
731
732 actual = set((resp.headers['WWW-Authenticate'] + ',').split())
897 resp.www_authenticate.set_digest("REALM", "NONCE", qop=("auth",))
898
899 actual = set((resp.headers["WWW-Authenticate"] + ",").split())
733900 expected = set('Digest nonce="NONCE", realm="REALM", qop="auth",'.split())
734901 assert actual == expected
735902
736903
737904 def test_response_stream_mixin():
738905 response = wrappers.Response()
739 response.stream.write('Hello ')
740 response.stream.write('World!')
741 assert response.response == ['Hello ', 'World!']
742 assert response.get_data() == b'Hello World!'
906 response.stream.write("Hello ")
907 response.stream.write("World!")
908 assert response.response == ["Hello ", "World!"]
909 assert response.get_data() == b"Hello World!"
743910
744911
745912 def test_common_response_descriptors_mixin():
746913 response = wrappers.Response()
747 response.mimetype = 'text/html'
748 assert response.mimetype == 'text/html'
749 assert response.content_type == 'text/html; charset=utf-8'
750 assert response.mimetype_params == {'charset': 'utf-8'}
751 response.mimetype_params['x-foo'] = 'yep'
752 del response.mimetype_params['charset']
753 assert response.content_type == 'text/html; x-foo=yep'
914 response.mimetype = "text/html"
915 assert response.mimetype == "text/html"
916 assert response.content_type == "text/html; charset=utf-8"
917 assert response.mimetype_params == {"charset": "utf-8"}
918 response.mimetype_params["x-foo"] = "yep"
919 del response.mimetype_params["charset"]
920 assert response.content_type == "text/html; x-foo=yep"
754921
755922 now = datetime.utcnow().replace(microsecond=0)
756923
757924 assert response.content_length is None
758 response.content_length = '42'
925 response.content_length = "42"
759926 assert response.content_length == 42
760927
761 for attr in 'date', 'expires':
928 for attr in "date", "expires":
762929 assert getattr(response, attr) is None
763930 setattr(response, attr, now)
764931 assert getattr(response, attr) == now
775942 assert response.retry_after == now
776943
777944 assert not response.vary
778 response.vary.add('Cookie')
779 response.vary.add('Content-Language')
780 assert 'cookie' in response.vary
781 assert response.vary.to_header() == 'Cookie, Content-Language'
782 response.headers['Vary'] = 'Content-Encoding'
783 assert response.vary.as_set() == set(['content-encoding'])
784
785 response.allow.update(['GET', 'POST'])
786 assert response.headers['Allow'] == 'GET, POST'
787
788 response.content_language.add('en-US')
789 response.content_language.add('fr')
790 assert response.headers['Content-Language'] == 'en-US, fr'
945 response.vary.add("Cookie")
946 response.vary.add("Content-Language")
947 assert "cookie" in response.vary
948 assert response.vary.to_header() == "Cookie, Content-Language"
949 response.headers["Vary"] = "Content-Encoding"
950 assert response.vary.as_set() == {"content-encoding"}
951
952 response.allow.update(["GET", "POST"])
953 assert response.headers["Allow"] == "GET, POST"
954
955 response.content_language.add("en-US")
956 response.content_language.add("fr")
957 assert response.headers["Content-Language"] == "en-US, fr"
791958
792959
793960 def test_common_request_descriptors_mixin():
794961 request = wrappers.Request.from_values(
795 content_type='text/html; charset=utf-8',
796 content_length='23',
962 content_type="text/html; charset=utf-8",
963 content_length="23",
797964 headers={
798 'Referer': 'http://www.example.com/',
799 'Date': 'Sat, 28 Feb 2009 19:04:35 GMT',
800 'Max-Forwards': '10',
801 'Pragma': 'no-cache',
802 'Content-Encoding': 'gzip',
803 'Content-MD5': '9a3bc6dbc47a70db25b84c6e5867a072'
804 }
805 )
806
807 assert request.content_type == 'text/html; charset=utf-8'
808 assert request.mimetype == 'text/html'
809 assert request.mimetype_params == {'charset': 'utf-8'}
965 "Referer": "http://www.example.com/",
966 "Date": "Sat, 28 Feb 2009 19:04:35 GMT",
967 "Max-Forwards": "10",
968 "Pragma": "no-cache",
969 "Content-Encoding": "gzip",
970 "Content-MD5": "9a3bc6dbc47a70db25b84c6e5867a072",
971 },
972 )
973
974 assert request.content_type == "text/html; charset=utf-8"
975 assert request.mimetype == "text/html"
976 assert request.mimetype_params == {"charset": "utf-8"}
810977 assert request.content_length == 23
811 assert request.referrer == 'http://www.example.com/'
978 assert request.referrer == "http://www.example.com/"
812979 assert request.date == datetime(2009, 2, 28, 19, 4, 35)
813980 assert request.max_forwards == 10
814 assert 'no-cache' in request.pragma
815 assert request.content_encoding == 'gzip'
816 assert request.content_md5 == '9a3bc6dbc47a70db25b84c6e5867a072'
981 assert "no-cache" in request.pragma
982 assert request.content_encoding == "gzip"
983 assert request.content_md5 == "9a3bc6dbc47a70db25b84c6e5867a072"
817984
818985
819986 def test_request_mimetype_always_lowercase():
820 request = wrappers.Request.from_values(content_type='APPLICATION/JSON')
821 assert request.mimetype == 'application/json'
987 request = wrappers.Request.from_values(content_type="APPLICATION/JSON")
988 assert request.mimetype == "application/json"
822989
823990
824991 def test_shallow_mode():
825 request = wrappers.Request({'QUERY_STRING': 'foo=bar'}, shallow=True)
826 assert request.args['foo'] == 'bar'
827 pytest.raises(RuntimeError, lambda: request.form['foo'])
992 request = wrappers.Request({"QUERY_STRING": "foo=bar"}, shallow=True)
993 assert request.args["foo"] == "bar"
994 pytest.raises(RuntimeError, lambda: request.form["foo"])
828995
829996
830997 def test_form_parsing_failed():
831 data = b'--blah\r\n'
998 data = b"--blah\r\n"
832999 request = wrappers.Request.from_values(
8331000 input_stream=BytesIO(data),
8341001 content_length=len(data),
835 content_type='multipart/form-data; boundary=foo',
836 method='POST'
1002 content_type="multipart/form-data; boundary=foo",
1003 method="POST",
8371004 )
8381005 assert not request.files
8391006 assert not request.form
8401007
8411008 # Bad Content-Type
842 data = b'test'
1009 data = b"test"
8431010 request = wrappers.Request.from_values(
8441011 input_stream=BytesIO(data),
8451012 content_length=len(data),
846 content_type=', ',
847 method='POST'
1013 content_type=", ",
1014 method="POST",
8481015 )
8491016 assert not request.form
8501017
8511018
8521019 def test_file_closing():
853 data = (b'--foo\r\n'
854 b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
855 b'Content-Type: text/plain; charset=utf-8\r\n\r\n'
856 b'file contents, just the contents\r\n'
857 b'--foo--')
1020 data = (
1021 b"--foo\r\n"
1022 b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
1023 b"Content-Type: text/plain; charset=utf-8\r\n\r\n"
1024 b"file contents, just the contents\r\n"
1025 b"--foo--"
1026 )
8581027 req = wrappers.Request.from_values(
8591028 input_stream=BytesIO(data),
8601029 content_length=len(data),
861 content_type='multipart/form-data; boundary=foo',
862 method='POST'
863 )
864 foo = req.files['foo']
865 assert foo.mimetype == 'text/plain'
866 assert foo.filename == 'foo.txt'
1030 content_type="multipart/form-data; boundary=foo",
1031 method="POST",
1032 )
1033 foo = req.files["foo"]
1034 assert foo.mimetype == "text/plain"
1035 assert foo.filename == "foo.txt"
8671036
8681037 assert foo.closed is False
8691038 req.close()
8711040
8721041
8731042 def test_file_closing_with():
874 data = (b'--foo\r\n'
875 b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
876 b'Content-Type: text/plain; charset=utf-8\r\n\r\n'
877 b'file contents, just the contents\r\n'
878 b'--foo--')
1043 data = (
1044 b"--foo\r\n"
1045 b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n'
1046 b"Content-Type: text/plain; charset=utf-8\r\n\r\n"
1047 b"file contents, just the contents\r\n"
1048 b"--foo--"
1049 )
8791050 req = wrappers.Request.from_values(
8801051 input_stream=BytesIO(data),
8811052 content_length=len(data),
882 content_type='multipart/form-data; boundary=foo',
883 method='POST'
1053 content_type="multipart/form-data; boundary=foo",
1054 method="POST",
8841055 )
8851056 with req:
886 foo = req.files['foo']
887 assert foo.mimetype == 'text/plain'
888 assert foo.filename == 'foo.txt'
1057 foo = req.files["foo"]
1058 assert foo.mimetype == "text/plain"
1059 assert foo.filename == "foo.txt"
8891060
8901061 assert foo.closed is True
8911062
8921063
8931064 def test_url_charset_reflection():
8941065 req = wrappers.Request.from_values()
895 req.charset = 'utf-7'
896 assert req.url_charset == 'utf-7'
1066 req.charset = "utf-7"
1067 assert req.url_charset == "utf-7"
8971068
8981069
8991070 def test_response_streamed():
9071078 def gen():
9081079 if 0:
9091080 yield None
1081
9101082 r = wrappers.Response(gen())
9111083 assert r.is_streamed
9121084
9171089 yield item.upper()
9181090
9191091 def generator():
920 yield 'foo'
921 yield 'bar'
1092 yield "foo"
1093 yield "bar"
1094
9221095 req = wrappers.Request.from_values()
9231096 resp = wrappers.Response(generator())
924 del resp.headers['Content-Length']
1097 del resp.headers["Content-Length"]
9251098 resp.response = uppercasing(resp.iter_encoded())
9261099 actual_resp = wrappers.Response.from_app(resp, req.environ, buffered=True)
927 assert actual_resp.get_data() == b'FOOBAR'
1100 assert actual_resp.get_data() == b"FOOBAR"
9281101
9291102
9301103 def test_response_freeze():
9311104 def generate():
9321105 yield "foo"
9331106 yield "bar"
1107
9341108 resp = wrappers.Response(generate())
9351109 resp.freeze()
936 assert resp.response == [b'foo', b'bar']
937 assert resp.headers['content-length'] == '6'
1110 assert resp.response == [b"foo", b"bar"]
1111 assert resp.headers["content-length"] == "6"
9381112
9391113
9401114 def test_response_content_length_uses_encode():
941 r = wrappers.Response(u'你好')
1115 r = wrappers.Response(u"你好")
9421116 assert r.calculate_content_length() == 6
9431117
9441118
9451119 def test_other_method_payload():
946 data = b'Hello World'
947 req = wrappers.Request.from_values(input_stream=BytesIO(data),
948 content_length=len(data),
949 content_type='text/plain',
950 method='WHAT_THE_FUCK')
1120 data = b"Hello World"
1121 req = wrappers.Request.from_values(
1122 input_stream=BytesIO(data),
1123 content_length=len(data),
1124 content_type="text/plain",
1125 method="WHAT_THE_FUCK",
1126 )
9511127 assert req.get_data() == data
9521128 assert isinstance(req.stream, LimitedStream)
9531129
9541130
9551131 def test_urlfication():
9561132 resp = wrappers.Response()
957 resp.headers['Location'] = u'http://üser:pässword@☃.net/påth'
958 resp.headers['Content-Location'] = u'http://☃.net/'
1133 resp.headers["Location"] = u"http://üser:pässword@☃.net/påth"
1134 resp.headers["Content-Location"] = u"http://☃.net/"
9591135 headers = resp.get_wsgi_headers(create_environ())
960 assert headers['location'] == \
961 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th'
962 assert headers['content-location'] == 'http://xn--n3h.net/'
1136 assert headers["location"] == "http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th"
1137 assert headers["content-location"] == "http://xn--n3h.net/"
9631138
9641139
9651140 def test_new_response_iterator_behavior():
9661141 req = wrappers.Request.from_values()
967 resp = wrappers.Response(u'Hello Wörld!')
1142 resp = wrappers.Response(u"Hello Wörld!")
9681143
9691144 def get_content_length(resp):
9701145 headers = resp.get_wsgi_headers(req.environ)
971 return headers.get('content-length', type=int)
1146 return headers.get("content-length", type=int)
9721147
9731148 def generate_items():
9741149 yield "Hello "
9761151
9771152 # werkzeug encodes when set to `data` now, which happens
9781153 # if a string is passed to the response object.
979 assert resp.response == [u'Hello Wörld!'.encode('utf-8')]
980 assert resp.get_data() == u'Hello Wörld!'.encode('utf-8')
1154 assert resp.response == [u"Hello Wörld!".encode("utf-8")]
1155 assert resp.get_data() == u"Hello Wörld!".encode("utf-8")
9811156 assert get_content_length(resp) == 13
9821157 assert not resp.is_streamed
9831158 assert resp.is_sequence
9841159
9851160 # try the same for manual assignment
986 resp.set_data(u'Wörd')
987 assert resp.response == [u'Wörd'.encode('utf-8')]
988 assert resp.get_data() == u'Wörd'.encode('utf-8')
1161 resp.set_data(u"Wörd")
1162 assert resp.response == [u"Wörd".encode("utf-8")]
1163 assert resp.get_data() == u"Wörd".encode("utf-8")
9891164 assert get_content_length(resp) == 5
9901165 assert not resp.is_streamed
9911166 assert resp.is_sequence
9941169 resp.response = generate_items()
9951170 assert resp.is_streamed
9961171 assert not resp.is_sequence
997 assert resp.get_data() == u'Hello Wörld!'.encode('utf-8')
998 assert resp.response == [b'Hello ', u'Wörld!'.encode('utf-8')]
1172 assert resp.get_data() == u"Hello Wörld!".encode("utf-8")
1173 assert resp.response == [b"Hello ", u"Wörld!".encode("utf-8")]
9991174 assert not resp.is_streamed
10001175 assert resp.is_sequence
10011176
10061181 assert not resp.is_sequence
10071182 pytest.raises(RuntimeError, lambda: resp.get_data())
10081183 resp.make_sequence()
1009 assert resp.get_data() == u'Hello Wörld!'.encode('utf-8')
1010 assert resp.response == [b'Hello ', u'Wörld!'.encode('utf-8')]
1184 assert resp.get_data() == u"Hello Wörld!".encode("utf-8")
1185 assert resp.response == [b"Hello ", u"Wörld!".encode("utf-8")]
10111186 assert not resp.is_streamed
10121187 assert resp.is_sequence
10131188
10161191 resp.implicit_sequence_conversion = val
10171192 resp.response = ("foo", "bar")
10181193 assert resp.is_sequence
1019 resp.stream.write('baz')
1020 assert resp.response == ['foo', 'bar', 'baz']
1194 resp.stream.write("baz")
1195 assert resp.response == ["foo", "bar", "baz"]
10211196
10221197
10231198 def test_form_data_ordering():
10241199 class MyRequest(wrappers.Request):
10251200 parameter_storage_class = ImmutableOrderedMultiDict
10261201
1027 req = MyRequest.from_values('/?foo=1&bar=0&foo=3')
1028 assert list(req.args) == ['foo', 'bar']
1202 req = MyRequest.from_values("/?foo=1&bar=0&foo=3")
1203 assert list(req.args) == ["foo", "bar"]
10291204 assert list(req.args.items(multi=True)) == [
1030 ('foo', '1'),
1031 ('bar', '0'),
1032 ('foo', '3')
1205 ("foo", "1"),
1206 ("bar", "0"),
1207 ("foo", "3"),
10331208 ]
10341209 assert isinstance(req.args, ImmutableOrderedMultiDict)
10351210 assert isinstance(req.values, CombinedMultiDict)
1036 assert req.values['foo'] == '1'
1037 assert req.values.getlist('foo') == ['1', '3']
1211 assert req.values["foo"] == "1"
1212 assert req.values.getlist("foo") == ["1", "3"]
10381213
10391214
10401215 def test_storage_classes():
10421217 dict_storage_class = dict
10431218 list_storage_class = list
10441219 parameter_storage_class = dict
1045 req = MyRequest.from_values('/?foo=baz', headers={
1046 'Cookie': 'foo=bar'
1047 })
1220
1221 req = MyRequest.from_values("/?foo=baz", headers={"Cookie": "foo=bar"})
10481222 assert type(req.cookies) is dict
1049 assert req.cookies == {'foo': 'bar'}
1223 assert req.cookies == {"foo": "bar"}
10501224 assert type(req.access_route) is list
10511225
10521226 assert type(req.args) is dict
10531227 assert type(req.values) is CombinedMultiDict
1054 assert req.values['foo'] == u'baz'
1055
1056 req = wrappers.Request.from_values(headers={
1057 'Cookie': 'foo=bar'
1058 })
1228 assert req.values["foo"] == u"baz"
1229
1230 req = wrappers.Request.from_values(headers={"Cookie": "foo=bar"})
10591231 assert type(req.cookies) is ImmutableTypeConversionDict
1060 assert req.cookies == {'foo': 'bar'}
1232 assert req.cookies == {"foo": "bar"}
10611233 assert type(req.access_route) is ImmutableList
10621234
10631235 MyRequest.list_storage_class = tuple
10661238
10671239
10681240 def test_response_headers_passthrough():
1069 headers = wrappers.Headers()
1241 headers = Headers()
10701242 resp = wrappers.Response(headers=headers)
10711243 assert resp.headers is headers
10721244
10731245
10741246 def test_response_304_no_content_length():
1075 resp = wrappers.Response('Test', status=304)
1247 resp = wrappers.Response("Test", status=304)
10761248 env = create_environ()
1077 assert 'content-length' not in resp.get_wsgi_headers(env)
1249 assert "content-length" not in resp.get_wsgi_headers(env)
10781250
10791251
10801252 def test_ranges():
10811253 # basic range stuff
10821254 req = wrappers.Request.from_values()
10831255 assert req.range is None
1084 req = wrappers.Request.from_values(headers={'Range': 'bytes=0-499'})
1256 req = wrappers.Request.from_values(headers={"Range": "bytes=0-499"})
10851257 assert req.range.ranges == [(0, 500)]
10861258
10871259 resp = wrappers.Response()
10881260 resp.content_range = req.range.make_content_range(1000)
1089 assert resp.content_range.units == 'bytes'
1261 assert resp.content_range.units == "bytes"
10901262 assert resp.content_range.start == 0
10911263 assert resp.content_range.stop == 500
10921264 assert resp.content_range.length == 1000
1093 assert resp.headers['Content-Range'] == 'bytes 0-499/1000'
1265 assert resp.headers["Content-Range"] == "bytes 0-499/1000"
10941266
10951267 resp.content_range.unset()
1096 assert 'Content-Range' not in resp.headers
1097
1098 resp.headers['Content-Range'] = 'bytes 0-499/1000'
1099 assert resp.content_range.units == 'bytes'
1268 assert "Content-Range" not in resp.headers
1269
1270 resp.headers["Content-Range"] = "bytes 0-499/1000"
1271 assert resp.content_range.units == "bytes"
11001272 assert resp.content_range.start == 0
11011273 assert resp.content_range.stop == 500
11021274 assert resp.content_range.length == 1000
11031275
11041276
11051277 def test_auto_content_length():
1106 resp = wrappers.Response('Hello World!')
1278 resp = wrappers.Response("Hello World!")
11071279 assert resp.content_length == 12
11081280
1109 resp = wrappers.Response(['Hello World!'])
1281 resp = wrappers.Response(["Hello World!"])
11101282 assert resp.content_length is None
1111 assert resp.get_wsgi_headers({})['Content-Length'] == '12'
1283 assert resp.get_wsgi_headers({})["Content-Length"] == "12"
11121284
11131285
11141286 def test_stream_content_length():
11151287 resp = wrappers.Response()
1116 resp.stream.writelines(['foo', 'bar', 'baz'])
1117 assert resp.get_wsgi_headers({})['Content-Length'] == '9'
1288 resp.stream.writelines(["foo", "bar", "baz"])
1289 assert resp.get_wsgi_headers({})["Content-Length"] == "9"
11181290
11191291 resp = wrappers.Response()
1120 resp.make_conditional({'REQUEST_METHOD': 'GET'})
1121 resp.stream.writelines(['foo', 'bar', 'baz'])
1122 assert resp.get_wsgi_headers({})['Content-Length'] == '9'
1123
1124 resp = wrappers.Response('foo')
1125 resp.stream.writelines(['bar', 'baz'])
1126 assert resp.get_wsgi_headers({})['Content-Length'] == '9'
1292 resp.make_conditional({"REQUEST_METHOD": "GET"})
1293 resp.stream.writelines(["foo", "bar", "baz"])
1294 assert resp.get_wsgi_headers({})["Content-Length"] == "9"
1295
1296 resp = wrappers.Response("foo")
1297 resp.stream.writelines(["bar", "baz"])
1298 assert resp.get_wsgi_headers({})["Content-Length"] == "9"
11271299
11281300
11291301 def test_disabled_auto_content_length():
11301302 class MyResponse(wrappers.Response):
11311303 automatically_set_content_length = False
1132 resp = MyResponse('Hello World!')
1304
1305 resp = MyResponse("Hello World!")
11331306 assert resp.content_length is None
11341307
1135 resp = MyResponse(['Hello World!'])
1308 resp = MyResponse(["Hello World!"])
11361309 assert resp.content_length is None
1137 assert 'Content-Length' not in resp.get_wsgi_headers({})
1310 assert "Content-Length" not in resp.get_wsgi_headers({})
11381311
11391312 resp = MyResponse()
1140 resp.make_conditional({
1141 'REQUEST_METHOD': 'GET'
1142 })
1313 resp.make_conditional({"REQUEST_METHOD": "GET"})
11431314 assert resp.content_length is None
1144 assert 'Content-Length' not in resp.get_wsgi_headers({})
1145
1146
1147 def test_location_header_autocorrect():
1148 env = create_environ()
1149
1150 class MyResponse(wrappers.Response):
1151 autocorrect_location_header = False
1152 resp = MyResponse('Hello World!')
1153 resp.headers['Location'] = '/test'
1154 assert resp.get_wsgi_headers(env)['Location'] == '/test'
1155
1156 resp = wrappers.Response('Hello World!')
1157 resp.headers['Location'] = '/test'
1158 assert resp.get_wsgi_headers(env)['Location'] == 'http://localhost/test'
1315 assert "Content-Length" not in resp.get_wsgi_headers({})
1316
1317
1318 @pytest.mark.parametrize(
1319 ("auto", "location", "expect"),
1320 (
1321 (False, "/test", "/test"),
1322 (True, "/test", "http://localhost/test"),
1323 (True, "test", "http://localhost/a/b/test"),
1324 (True, "./test", "http://localhost/a/b/test"),
1325 (True, "../test", "http://localhost/a/test"),
1326 ),
1327 )
1328 def test_location_header_autocorrect(monkeypatch, auto, location, expect):
1329 monkeypatch.setattr(wrappers.Response, "autocorrect_location_header", auto)
1330 env = create_environ("/a/b/c")
1331 resp = wrappers.Response("Hello World!")
1332 resp.headers["Location"] = location
1333 assert resp.get_wsgi_headers(env)["Location"] == expect
11591334
11601335
11611336 def test_204_and_1XX_response_has_no_content_length():
11631338 assert response.content_length is None
11641339
11651340 headers = response.get_wsgi_headers(create_environ())
1166 assert 'Content-Length' not in headers
1341 assert "Content-Length" not in headers
11671342
11681343 response = wrappers.Response(status=100)
11691344 assert response.content_length is None
11701345
11711346 headers = response.get_wsgi_headers(create_environ())
1172 assert 'Content-Length' not in headers
1347 assert "Content-Length" not in headers
1348
1349
1350 def test_malformed_204_response_has_no_content_length():
1351 # flask-restful can generate a malformed response when doing `return '', 204`
1352 response = wrappers.Response(status=204)
1353 response.set_data(b"test")
1354 assert response.content_length == 4
1355
1356 env = create_environ()
1357 app_iter, status, headers = response.get_wsgi_response(env)
1358 assert status == "204 NO CONTENT"
1359 assert "Content-Length" not in headers
1360 assert b"".join(app_iter) == b"" # ensure data will not be sent
11731361
11741362
11751363 def test_modified_url_encoding():
11761364 class ModifiedRequest(wrappers.Request):
1177 url_charset = 'euc-kr'
1178
1179 req = ModifiedRequest.from_values(u'/?foo=정상처리'.encode('euc-kr'))
1180 strict_eq(req.args['foo'], u'정상처리')
1365 url_charset = "euc-kr"
1366
1367 req = ModifiedRequest.from_values(u"/?foo=정상처리".encode("euc-kr"))
1368 strict_eq(req.args["foo"], u"정상처리")
11811369
11821370
11831371 def test_request_method_case_sensitivity():
1184 req = wrappers.Request({'REQUEST_METHOD': 'get'})
1185 assert req.method == 'GET'
1372 req = wrappers.Request({"REQUEST_METHOD": "get"})
1373 assert req.method == "GET"
11861374
11871375
11881376 def test_is_xhr_warning():
11891377 req = wrappers.Request.from_values()
11901378
1191 with pytest.warns(DeprecationWarning) as record:
1379 with pytest.warns(DeprecationWarning):
11921380 req.is_xhr
1193
1194 assert len(record) == 1
1195 assert 'Request.is_xhr is deprecated' in str(record[0].message)
11961381
11971382
11981383 def test_write_length():
11991384 response = wrappers.Response()
1200 length = response.stream.write(b'bar')
1385 length = response.stream.write(b"bar")
12011386 assert length == 3
12021387
12031388
12051390 import zipfile
12061391
12071392 response = wrappers.Response()
1208 with contextlib.closing(zipfile.ZipFile(response.stream, mode='w')) as z:
1393 with contextlib.closing(zipfile.ZipFile(response.stream, mode="w")) as z:
12091394 z.writestr("foo", b"bar")
12101395
12111396 buffer = BytesIO(response.get_data())
1212 with contextlib.closing(zipfile.ZipFile(buffer, mode='r')) as z:
1213 assert z.namelist() == ['foo']
1214 assert z.read('foo') == b'bar'
1397 with contextlib.closing(zipfile.ZipFile(buffer, mode="r")) as z:
1398 assert z.namelist() == ["foo"]
1399 assert z.read("foo") == b"bar"
12151400
12161401
12171402 class TestSetCookie(object):
12191404
12201405 def test_secure(self):
12211406 response = wrappers.BaseResponse()
1222 response.set_cookie('foo', value='bar', max_age=60, expires=0,
1223 path='/blub', domain='example.org', secure=True,
1224 samesite=None)
1225 strict_eq(response.headers.to_wsgi_list(), [
1226 ('Content-Type', 'text/plain; charset=utf-8'),
1227 ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, '
1228 '01-Jan-1970 00:00:00 GMT; Max-Age=60; Secure; Path=/blub')
1229 ])
1407 response.set_cookie(
1408 "foo",
1409 value="bar",
1410 max_age=60,
1411 expires=0,
1412 path="/blub",
1413 domain="example.org",
1414 secure=True,
1415 samesite=None,
1416 )
1417 strict_eq(
1418 response.headers.to_wsgi_list(),
1419 [
1420 ("Content-Type", "text/plain; charset=utf-8"),
1421 (
1422 "Set-Cookie",
1423 "foo=bar; Domain=example.org; Expires=Thu, "
1424 "01-Jan-1970 00:00:00 GMT; Max-Age=60; Secure; Path=/blub",
1425 ),
1426 ],
1427 )
12301428
12311429 def test_httponly(self):
12321430 response = wrappers.BaseResponse()
1233 response.set_cookie('foo', value='bar', max_age=60, expires=0,
1234 path='/blub', domain='example.org', secure=False,
1235 httponly=True, samesite=None)
1236 strict_eq(response.headers.to_wsgi_list(), [
1237 ('Content-Type', 'text/plain; charset=utf-8'),
1238 ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, '
1239 '01-Jan-1970 00:00:00 GMT; Max-Age=60; HttpOnly; Path=/blub')
1240 ])
1431 response.set_cookie(
1432 "foo",
1433 value="bar",
1434 max_age=60,
1435 expires=0,
1436 path="/blub",
1437 domain="example.org",
1438 secure=False,
1439 httponly=True,
1440 samesite=None,
1441 )
1442 strict_eq(
1443 response.headers.to_wsgi_list(),
1444 [
1445 ("Content-Type", "text/plain; charset=utf-8"),
1446 (
1447 "Set-Cookie",
1448 "foo=bar; Domain=example.org; Expires=Thu, "
1449 "01-Jan-1970 00:00:00 GMT; Max-Age=60; HttpOnly; Path=/blub",
1450 ),
1451 ],
1452 )
12411453
12421454 def test_secure_and_httponly(self):
12431455 response = wrappers.BaseResponse()
1244 response.set_cookie('foo', value='bar', max_age=60, expires=0,
1245 path='/blub', domain='example.org', secure=True,
1246 httponly=True, samesite=None)
1247 strict_eq(response.headers.to_wsgi_list(), [
1248 ('Content-Type', 'text/plain; charset=utf-8'),
1249 ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, '
1250 '01-Jan-1970 00:00:00 GMT; Max-Age=60; Secure; HttpOnly; '
1251 'Path=/blub')
1252 ])
1456 response.set_cookie(
1457 "foo",
1458 value="bar",
1459 max_age=60,
1460 expires=0,
1461 path="/blub",
1462 domain="example.org",
1463 secure=True,
1464 httponly=True,
1465 samesite=None,
1466 )
1467 strict_eq(
1468 response.headers.to_wsgi_list(),
1469 [
1470 ("Content-Type", "text/plain; charset=utf-8"),
1471 (
1472 "Set-Cookie",
1473 "foo=bar; Domain=example.org; Expires=Thu, "
1474 "01-Jan-1970 00:00:00 GMT; Max-Age=60; Secure; HttpOnly; "
1475 "Path=/blub",
1476 ),
1477 ],
1478 )
12531479
12541480 def test_samesite(self):
12551481 response = wrappers.BaseResponse()
1256 response.set_cookie('foo', value='bar', max_age=60, expires=0,
1257 path='/blub', domain='example.org', secure=False,
1258 samesite='strict')
1259 strict_eq(response.headers.to_wsgi_list(), [
1260 ('Content-Type', 'text/plain; charset=utf-8'),
1261 ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, '
1262 '01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub; '
1263 'SameSite=Strict')
1264 ])
1482 response.set_cookie(
1483 "foo",
1484 value="bar",
1485 max_age=60,
1486 expires=0,
1487 path="/blub",
1488 domain="example.org",
1489 secure=False,
1490 samesite="strict",
1491 )
1492 strict_eq(
1493 response.headers.to_wsgi_list(),
1494 [
1495 ("Content-Type", "text/plain; charset=utf-8"),
1496 (
1497 "Set-Cookie",
1498 "foo=bar; Domain=example.org; Expires=Thu, "
1499 "01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub; "
1500 "SameSite=Strict",
1501 ),
1502 ],
1503 )
1504
1505
1506 class TestJSONMixin(object):
1507 class Request(JSONMixin, wrappers.Request):
1508 pass
1509
1510 class Response(JSONMixin, wrappers.Response):
1511 pass
1512
1513 def test_request(self):
1514 value = {u"ä": "b"}
1515 request = self.Request.from_values(json=value)
1516 assert request.json == value
1517 assert request.get_data()
1518
1519 def test_response(self):
1520 value = {u"ä": "b"}
1521 response = self.Response(
1522 response=json.dumps(value), content_type="application/json"
1523 )
1524 assert response.json == value
1525
1526 def test_force(self):
1527 value = [1, 2, 3]
1528 request = self.Request.from_values(json=value, content_type="text/plain")
1529 assert request.json is None
1530 assert request.get_json(force=True) == value
1531
1532 def test_silent(self):
1533 request = self.Request.from_values(
1534 data=b'{"a":}', content_type="application/json"
1535 )
1536 assert request.get_json(silent=True) is None
1537
1538 with pytest.raises(BadRequest):
1539 request.get_json()
1540
1541 def test_cache_disabled(self):
1542 value = [1, 2, 3]
1543 request = self.Request.from_values(json=value)
1544 assert request.get_json(cache=False) == [1, 2, 3]
1545 assert not request.get_data()
1546
1547 with pytest.raises(BadRequest):
1548 request.get_json()
44
55 Tests the WSGI utilities.
66
7 :copyright: (c) 2014 by Armin Ronacher.
8 :license: BSD, see LICENSE for more details.
7 :copyright: 2007 Pallets
8 :license: BSD-3-Clause
99 """
1010 import io
1111 import json
1212 import os
13 from contextlib import closing
14 from os import path
1513
1614 import pytest
1715
18 from tests import strict_eq
16 from . import strict_eq
1917 from werkzeug import wsgi
20 from werkzeug._compat import BytesIO, NativeStringIO, StringIO, to_bytes, \
21 to_native
22 from werkzeug.exceptions import BadRequest, ClientDisconnected
23 from werkzeug.test import Client, create_environ, run_wsgi_app
18 from werkzeug._compat import BytesIO
19 from werkzeug._compat import NativeStringIO
20 from werkzeug._compat import StringIO
21 from werkzeug.exceptions import BadRequest
22 from werkzeug.exceptions import ClientDisconnected
23 from werkzeug.test import Client
24 from werkzeug.test import create_environ
25 from werkzeug.test import run_wsgi_app
2426 from werkzeug.wrappers import BaseResponse
25 from werkzeug.urls import url_parse
26 from werkzeug.wsgi import _RangeWrapper, wrap_file
27
28
29 def test_shareddatamiddleware_get_file_loader():
30 app = wsgi.SharedDataMiddleware(None, {})
31 assert callable(app.get_file_loader('foo'))
32
33
34 def test_shared_data_middleware(tmpdir):
35 def null_application(environ, start_response):
36 start_response('404 NOT FOUND', [('Content-Type', 'text/plain')])
37 yield b'NOT FOUND'
38
39 test_dir = str(tmpdir)
40 with open(path.join(test_dir, to_native(u'äöü', 'utf-8')), 'w') as test_file:
41 test_file.write(u'FOUND')
42
43 for t in [list, dict]:
44 app = wsgi.SharedDataMiddleware(null_application, t([
45 ('/', path.join(path.dirname(__file__), 'res')),
46 ('/sources', path.join(path.dirname(__file__), 'res')),
47 ('/pkg', ('werkzeug.debug', 'shared')),
48 ('/foo', test_dir)
49 ]))
50
51 for p in '/test.txt', '/sources/test.txt', '/foo/äöü':
52 app_iter, status, headers = run_wsgi_app(app, create_environ(p))
53 assert status == '200 OK'
54 with closing(app_iter) as app_iter:
55 data = b''.join(app_iter).strip()
56 assert data == b'FOUND'
57
58 app_iter, status, headers = run_wsgi_app(
59 app, create_environ('/pkg/debugger.js'))
60 with closing(app_iter) as app_iter:
61 contents = b''.join(app_iter)
62 assert b'$(function() {' in contents
63
64 app_iter, status, headers = run_wsgi_app(
65 app, create_environ('/missing'))
66 assert status == '404 NOT FOUND'
67 assert b''.join(app_iter).strip() == b'NOT FOUND'
68
69
70 def test_dispatchermiddleware():
71 def null_application(environ, start_response):
72 start_response('404 NOT FOUND', [('Content-Type', 'text/plain')])
73 yield b'NOT FOUND'
74
75 def dummy_application(environ, start_response):
76 start_response('200 OK', [('Content-Type', 'text/plain')])
77 yield to_bytes(environ['SCRIPT_NAME'])
78
79 app = wsgi.DispatcherMiddleware(null_application, {
80 '/test1': dummy_application,
81 '/test2/very': dummy_application,
82 })
83 tests = {
84 '/test1': ('/test1', '/test1/asfd', '/test1/very'),
85 '/test2/very': ('/test2/very', '/test2/very/long/path/after/script/name')
86 }
87 for name, urls in tests.items():
88 for p in urls:
89 environ = create_environ(p)
90 app_iter, status, headers = run_wsgi_app(app, environ)
91 assert status == '200 OK'
92 assert b''.join(app_iter).strip() == to_bytes(name)
93
94 app_iter, status, headers = run_wsgi_app(
95 app, create_environ('/missing'))
96 assert status == '404 NOT FOUND'
97 assert b''.join(app_iter).strip() == b'NOT FOUND'
98
99
100 def test_get_host():
101 env = {'HTTP_X_FORWARDED_HOST': 'example.org',
102 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'}
103 assert wsgi.get_host(env) == 'example.org'
104 assert wsgi.get_host(create_environ('/', 'http://example.org')) == \
105 'example.org'
106
107
108 def test_get_host_multiple_forwarded():
109 env = {'HTTP_X_FORWARDED_HOST': 'example.com, example.org',
110 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'}
111 assert wsgi.get_host(env) == 'example.com'
112 assert wsgi.get_host(create_environ('/', 'http://example.com')) == \
113 'example.com'
114
115
116 def test_get_host_validation():
117 env = {'HTTP_X_FORWARDED_HOST': 'example.org',
118 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'}
119 assert wsgi.get_host(env, trusted_hosts=['.example.org']) == 'example.org'
120 pytest.raises(BadRequest, wsgi.get_host, env,
121 trusted_hosts=['example.com'])
27 from werkzeug.wsgi import _RangeWrapper
28 from werkzeug.wsgi import ClosingIterator
29 from werkzeug.wsgi import wrap_file
30
31
32 @pytest.mark.parametrize(
33 ("environ", "expect"),
34 (
35 pytest.param({"HTTP_HOST": "spam"}, "spam", id="host"),
36 pytest.param({"HTTP_HOST": "spam:80"}, "spam", id="host, strip http port"),
37 pytest.param(
38 {"wsgi.url_scheme": "https", "HTTP_HOST": "spam:443"},
39 "spam",
40 id="host, strip https port",
41 ),
42 pytest.param({"HTTP_HOST": "spam:8080"}, "spam:8080", id="host, custom port"),
43 pytest.param(
44 {"HTTP_HOST": "spam", "SERVER_NAME": "eggs", "SERVER_PORT": "80"},
45 "spam",
46 id="prefer host",
47 ),
48 pytest.param(
49 {"SERVER_NAME": "eggs", "SERVER_PORT": "80"},
50 "eggs",
51 id="name, ignore http port",
52 ),
53 pytest.param(
54 {"wsgi.url_scheme": "https", "SERVER_NAME": "eggs", "SERVER_PORT": "443"},
55 "eggs",
56 id="name, ignore https port",
57 ),
58 pytest.param(
59 {"SERVER_NAME": "eggs", "SERVER_PORT": "8080"},
60 "eggs:8080",
61 id="name, custom port",
62 ),
63 pytest.param(
64 {"HTTP_HOST": "ham", "HTTP_X_FORWARDED_HOST": "eggs"},
65 "ham",
66 id="ignore x-forwarded-host",
67 ),
68 ),
69 )
70 def test_get_host(environ, expect):
71 environ.setdefault("wsgi.url_scheme", "http")
72 assert wsgi.get_host(environ) == expect
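The parametrized cases above can be condensed into a small host-resolution sketch. This is a hypothetical helper, not Werkzeug's implementation; it omits trusted-host validation and, like the code under test, ignores X-Forwarded-Host:

```python
def get_host(environ):
    # Prefer the Host header; fall back to SERVER_NAME/SERVER_PORT.
    scheme = environ.get("wsgi.url_scheme", "http")
    if "HTTP_HOST" in environ:
        host = environ["HTTP_HOST"]
    else:
        host = environ["SERVER_NAME"]
        port = environ.get("SERVER_PORT")
        if port:
            host = "%s:%s" % (host, port)
    # Strip the port when it is the default for the scheme.
    if (scheme == "http" and host.endswith(":80")) or (
        scheme == "https" and host.endswith(":443")
    ):
        host = host.rsplit(":", 1)[0]
    return host
```

Each parametrized id above ("host, strip http port", "prefer host", "name, custom port", ...) maps onto one branch of this sketch.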
73
74
75 def test_get_host_validate_trusted_hosts():
76 env = {"SERVER_NAME": "example.org", "SERVER_PORT": "80", "wsgi.url_scheme": "http"}
77 assert wsgi.get_host(env, trusted_hosts=[".example.org"]) == "example.org"
78 pytest.raises(BadRequest, wsgi.get_host, env, trusted_hosts=["example.com"])
79 env["SERVER_PORT"] = "8080"
80 assert wsgi.get_host(env, trusted_hosts=[".example.org:8080"]) == "example.org:8080"
81 pytest.raises(BadRequest, wsgi.get_host, env, trusted_hosts=[".example.com"])
82 env = {"HTTP_HOST": "example.org", "wsgi.url_scheme": "http"}
83 assert wsgi.get_host(env, trusted_hosts=[".example.org"]) == "example.org"
84 pytest.raises(BadRequest, wsgi.get_host, env, trusted_hosts=["example.com"])
12285
12386
12487 def test_responder():
12588 def foo(environ, start_response):
126 return BaseResponse(b'Test')
89 return BaseResponse(b"Test")
90
12791 client = Client(wsgi.responder(foo), BaseResponse)
128 response = client.get('/')
92 response = client.get("/")
12993 assert response.status_code == 200
130 assert response.data == b'Test'
94 assert response.data == b"Test"
13195
13296
13397 def test_pop_path_info():
134 original_env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b///c'}
98 original_env = {"SCRIPT_NAME": "/foo", "PATH_INFO": "/a/b///c"}
13599
136100 # regular path info popping
137101 def assert_tuple(script_name, path_info):
138 assert env.get('SCRIPT_NAME') == script_name
139 assert env.get('PATH_INFO') == path_info
102 assert env.get("SCRIPT_NAME") == script_name
103 assert env.get("PATH_INFO") == path_info
104
140105 env = original_env.copy()
141 pop = lambda: wsgi.pop_path_info(env)
142
143 assert_tuple('/foo', '/a/b///c')
144 assert pop() == 'a'
145 assert_tuple('/foo/a', '/b///c')
146 assert pop() == 'b'
147 assert_tuple('/foo/a/b', '///c')
148 assert pop() == 'c'
149 assert_tuple('/foo/a/b///c', '')
106
107 def pop():
108 return wsgi.pop_path_info(env)
109
110 assert_tuple("/foo", "/a/b///c")
111 assert pop() == "a"
112 assert_tuple("/foo/a", "/b///c")
113 assert pop() == "b"
114 assert_tuple("/foo/a/b", "///c")
115 assert pop() == "c"
116 assert_tuple("/foo/a/b///c", "")
150117 assert pop() is None
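The semantics being asserted, including the preservation of repeated slashes on SCRIPT_NAME, can be sketched as follows (a hypothetical minimal version, not Werkzeug's `pop_path_info`, which also handles charsets):

```python
def pop_path_info(environ):
    # Move the first PATH_INFO segment onto SCRIPT_NAME and return it.
    path = environ.get("PATH_INFO")
    if not path:
        return None
    script_name = environ.get("SCRIPT_NAME", "")
    # Shift any leading slashes over to SCRIPT_NAME so they are preserved.
    stripped = path.lstrip("/")
    script_name += "/" * (len(path) - len(stripped))
    if "/" not in stripped:
        environ["PATH_INFO"] = ""
        environ["SCRIPT_NAME"] = script_name + stripped
        return stripped
    segment, rest = stripped.split("/", 1)
    environ["PATH_INFO"] = "/" + rest
    environ["SCRIPT_NAME"] = script_name + segment
    return segment
```

Note how the triple slash in `/a/b///c` ends up on SCRIPT_NAME (`/foo/a/b///c`) rather than being collapsed, matching `assert_tuple` above.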
151118
152119
153120 def test_peek_path_info():
154 env = {
155 'SCRIPT_NAME': '/foo',
156 'PATH_INFO': '/aaa/b///c'
157 }
158
159 assert wsgi.peek_path_info(env) == 'aaa'
160 assert wsgi.peek_path_info(env) == 'aaa'
161 assert wsgi.peek_path_info(env, charset=None) == b'aaa'
162 assert wsgi.peek_path_info(env, charset=None) == b'aaa'
121 env = {"SCRIPT_NAME": "/foo", "PATH_INFO": "/aaa/b///c"}
122
123 assert wsgi.peek_path_info(env) == "aaa"
124 assert wsgi.peek_path_info(env) == "aaa"
125 assert wsgi.peek_path_info(env, charset=None) == b"aaa"
126 assert wsgi.peek_path_info(env, charset=None) == b"aaa"
163127
164128
165129 def test_path_info_and_script_name_fetching():
166 env = create_environ(u'/\N{SNOWMAN}', u'http://example.com/\N{COMET}/')
167 assert wsgi.get_path_info(env) == u'/\N{SNOWMAN}'
168 assert wsgi.get_path_info(env, charset=None) == u'/\N{SNOWMAN}'.encode('utf-8')
169 assert wsgi.get_script_name(env) == u'/\N{COMET}'
170 assert wsgi.get_script_name(env, charset=None) == u'/\N{COMET}'.encode('utf-8')
130 env = create_environ(u"/\N{SNOWMAN}", u"http://example.com/\N{COMET}/")
131 assert wsgi.get_path_info(env) == u"/\N{SNOWMAN}"
132 assert wsgi.get_path_info(env, charset=None) == u"/\N{SNOWMAN}".encode("utf-8")
133 assert wsgi.get_script_name(env) == u"/\N{COMET}"
134 assert wsgi.get_script_name(env, charset=None) == u"/\N{COMET}".encode("utf-8")
171135
172136
173137 def test_query_string_fetching():
174 env = create_environ(u'/?\N{SNOWMAN}=\N{COMET}')
138 env = create_environ(u"/?\N{SNOWMAN}=\N{COMET}")
175139 qs = wsgi.get_query_string(env)
176 strict_eq(qs, '%E2%98%83=%E2%98%84')
140 strict_eq(qs, "%E2%98%83=%E2%98%84")
177141
178142
179143 def test_limited_stream():
180144 class RaisingLimitedStream(wsgi.LimitedStream):
181
182145 def on_exhausted(self):
183 raise BadRequest('input stream exhausted')
184
185 io = BytesIO(b'123456')
146 raise BadRequest("input stream exhausted")
147
148 io = BytesIO(b"123456")
186149 stream = RaisingLimitedStream(io, 3)
187 strict_eq(stream.read(), b'123')
150 strict_eq(stream.read(), b"123")
188151 pytest.raises(BadRequest, stream.read)
189152
190 io = BytesIO(b'123456')
153 io = BytesIO(b"123456")
191154 stream = RaisingLimitedStream(io, 3)
192155 strict_eq(stream.tell(), 0)
193 strict_eq(stream.read(1), b'1')
156 strict_eq(stream.read(1), b"1")
194157 strict_eq(stream.tell(), 1)
195 strict_eq(stream.read(1), b'2')
158 strict_eq(stream.read(1), b"2")
196159 strict_eq(stream.tell(), 2)
197 strict_eq(stream.read(1), b'3')
160 strict_eq(stream.read(1), b"3")
198161 strict_eq(stream.tell(), 3)
199162 pytest.raises(BadRequest, stream.read)
200163
201 io = BytesIO(b'123456\nabcdefg')
164 io = BytesIO(b"123456\nabcdefg")
202165 stream = wsgi.LimitedStream(io, 9)
203 strict_eq(stream.readline(), b'123456\n')
204 strict_eq(stream.readline(), b'ab')
205
206 io = BytesIO(b'123456\nabcdefg')
166 strict_eq(stream.readline(), b"123456\n")
167 strict_eq(stream.readline(), b"ab")
168
169 io = BytesIO(b"123456\nabcdefg")
207170 stream = wsgi.LimitedStream(io, 9)
208 strict_eq(stream.readlines(), [b'123456\n', b'ab'])
209
210 io = BytesIO(b'123456\nabcdefg')
171 strict_eq(stream.readlines(), [b"123456\n", b"ab"])
172
173 io = BytesIO(b"123456\nabcdefg")
211174 stream = wsgi.LimitedStream(io, 9)
212 strict_eq(stream.readlines(2), [b'12'])
213 strict_eq(stream.readlines(2), [b'34'])
214 strict_eq(stream.readlines(), [b'56\n', b'ab'])
215
216 io = BytesIO(b'123456\nabcdefg')
175 strict_eq(stream.readlines(2), [b"12"])
176 strict_eq(stream.readlines(2), [b"34"])
177 strict_eq(stream.readlines(), [b"56\n", b"ab"])
178
179 io = BytesIO(b"123456\nabcdefg")
217180 stream = wsgi.LimitedStream(io, 9)
218 strict_eq(stream.readline(100), b'123456\n')
219
220 io = BytesIO(b'123456\nabcdefg')
181 strict_eq(stream.readline(100), b"123456\n")
182
183 io = BytesIO(b"123456\nabcdefg")
221184 stream = wsgi.LimitedStream(io, 9)
222 strict_eq(stream.readlines(100), [b'123456\n', b'ab'])
223
224 io = BytesIO(b'123456')
185 strict_eq(stream.readlines(100), [b"123456\n", b"ab"])
186
187 io = BytesIO(b"123456")
225188 stream = wsgi.LimitedStream(io, 3)
226 strict_eq(stream.read(1), b'1')
227 strict_eq(stream.read(1), b'2')
228 strict_eq(stream.read(), b'3')
229 strict_eq(stream.read(), b'')
230
231 io = BytesIO(b'123456')
189 strict_eq(stream.read(1), b"1")
190 strict_eq(stream.read(1), b"2")
191 strict_eq(stream.read(), b"3")
192 strict_eq(stream.read(), b"")
193
194 io = BytesIO(b"123456")
232195 stream = wsgi.LimitedStream(io, 3)
233 strict_eq(stream.read(-1), b'123')
234
235 io = BytesIO(b'123456')
196 strict_eq(stream.read(-1), b"123")
197
198 io = BytesIO(b"123456")
236199 stream = wsgi.LimitedStream(io, 0)
237 strict_eq(stream.read(-1), b'')
238
239 io = StringIO(u'123456')
200 strict_eq(stream.read(-1), b"")
201
202 io = StringIO(u"123456")
240203 stream = wsgi.LimitedStream(io, 0)
241 strict_eq(stream.read(-1), u'')
242
243 io = StringIO(u'123\n456\n')
204 strict_eq(stream.read(-1), u"")
205
206 io = StringIO(u"123\n456\n")
244207 stream = wsgi.LimitedStream(io, 8)
245 strict_eq(list(stream), [u'123\n', u'456\n'])
208 strict_eq(list(stream), [u"123\n", u"456\n"])
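The core behavior under test, that reads never pass the configured byte limit regardless of how much the underlying stream holds, can be sketched with a minimal wrapper (a hypothetical class, not Werkzeug's `LimitedStream`, which also supports readline/readlines and exhaustion hooks):

```python
import io


class LimitedReader(object):
    """Byte-limited stream wrapper: reads stop at ``limit`` bytes."""

    def __init__(self, stream, limit):
        self._stream = stream
        self._limit = limit
        self._pos = 0

    def tell(self):
        # Position relative to the start of the limited region.
        return self._pos

    def read(self, size=-1):
        # Clamp the requested size to the bytes remaining in the limit.
        remaining = self._limit - self._pos
        if size is None or size < 0 or size > remaining:
            size = remaining
        data = self._stream.read(size)
        self._pos += len(data)
        return data
```

With a limit of 3 over `b"123456"`, `read()` returns at most `b"123"` and subsequent reads return `b""`, mirroring the first assertions in the test.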
246209
247210
248211 def test_limited_stream_json_load():
249212 stream = wsgi.LimitedStream(BytesIO(b'{"hello": "test"}'), 17)
250213 # flask.json adapts bytes to text with TextIOWrapper
251214 # this expects stream.readable() to exist and return true
252 stream = io.TextIOWrapper(io.BufferedReader(stream), 'UTF-8')
215 stream = io.TextIOWrapper(io.BufferedReader(stream), "UTF-8")
253216 data = json.load(stream)
254 assert data == {'hello': 'test'}
217 assert data == {"hello": "test"}
255218
256219
257220 def test_limited_stream_disconnection():
258 io = BytesIO(b'A bit of content')
221 io = BytesIO(b"A bit of content")
259222
260223 # disconnect detection on out of bytes
261224 stream = wsgi.LimitedStream(io, 255)
263226 stream.read()
264227
265228 # disconnect detection because file close
266 io = BytesIO(b'x' * 255)
229 io = BytesIO(b"x" * 255)
267230 io.close()
268231 stream = wsgi.LimitedStream(io, 255)
269232 with pytest.raises(ClientDisconnected):
271234
272235
273236 def test_path_info_extraction():
274 x = wsgi.extract_path_info('http://example.com/app', '/app/hello')
275 assert x == u'/hello'
276 x = wsgi.extract_path_info('http://example.com/app',
277 'https://example.com/app/hello')
278 assert x == u'/hello'
279 x = wsgi.extract_path_info('http://example.com/app/',
280 'https://example.com/app/hello')
281 assert x == u'/hello'
282 x = wsgi.extract_path_info('http://example.com/app/',
283 'https://example.com/app')
284 assert x == u'/'
285 x = wsgi.extract_path_info(u'http://☃.net/', u'/fööbär')
286 assert x == u'/fööbär'
287 x = wsgi.extract_path_info(u'http://☃.net/x', u'http://☃.net/x/fööbär')
288 assert x == u'/fööbär'
289
290 env = create_environ(u'/fööbär', u'http://☃.net/x/')
291 x = wsgi.extract_path_info(env, u'http://☃.net/x/fööbär')
292 assert x == u'/fööbär'
293
294 x = wsgi.extract_path_info('http://example.com/app/',
295 'https://example.com/a/hello')
237 x = wsgi.extract_path_info("http://example.com/app", "/app/hello")
238 assert x == u"/hello"
239 x = wsgi.extract_path_info(
240 "http://example.com/app", "https://example.com/app/hello"
241 )
242 assert x == u"/hello"
243 x = wsgi.extract_path_info(
244 "http://example.com/app/", "https://example.com/app/hello"
245 )
246 assert x == u"/hello"
247 x = wsgi.extract_path_info("http://example.com/app/", "https://example.com/app")
248 assert x == u"/"
249 x = wsgi.extract_path_info(u"http://☃.net/", u"/fööbär")
250 assert x == u"/fööbär"
251 x = wsgi.extract_path_info(u"http://☃.net/x", u"http://☃.net/x/fööbär")
252 assert x == u"/fööbär"
253
254 env = create_environ(u"/fööbär", u"http://☃.net/x/")
255 x = wsgi.extract_path_info(env, u"http://☃.net/x/fööbär")
256 assert x == u"/fööbär"
257
258 x = wsgi.extract_path_info("http://example.com/app/", "https://example.com/a/hello")
296259 assert x is None
297 x = wsgi.extract_path_info('http://example.com/app/',
298 'https://example.com/app/hello',
299 collapse_http_schemes=False)
260 x = wsgi.extract_path_info(
261 "http://example.com/app/",
262 "https://example.com/app/hello",
263 collapse_http_schemes=False,
264 )
300265 assert x is None
301266
302267
303268 def test_get_host_fallback():
304 assert wsgi.get_host({
305 'SERVER_NAME': 'foobar.example.com',
306 'wsgi.url_scheme': 'http',
307 'SERVER_PORT': '80'
308 }) == 'foobar.example.com'
309 assert wsgi.get_host({
310 'SERVER_NAME': 'foobar.example.com',
311 'wsgi.url_scheme': 'http',
312 'SERVER_PORT': '81'
313 }) == 'foobar.example.com:81'
269 assert (
270 wsgi.get_host(
271 {
272 "SERVER_NAME": "foobar.example.com",
273 "wsgi.url_scheme": "http",
274 "SERVER_PORT": "80",
275 }
276 )
277 == "foobar.example.com"
278 )
279 assert (
280 wsgi.get_host(
281 {
282 "SERVER_NAME": "foobar.example.com",
283 "wsgi.url_scheme": "http",
284 "SERVER_PORT": "81",
285 }
286 )
287 == "foobar.example.com:81"
288 )
314289
315290
316291 def test_get_current_url_unicode():
292 env = create_environ(query_string=u"foo=bar&baz=blah&meh=\xcf")
293 rv = wsgi.get_current_url(env)
294 strict_eq(rv, u"http://localhost/?foo=bar&baz=blah&meh=\xcf")
295
296
297 def test_get_current_url_invalid_utf8():
317298 env = create_environ()
318 env['QUERY_STRING'] = 'foo=bar&baz=blah&meh=\xcf'
299 # set the query string *after* wsgi dance, so \xcf is invalid
300 env["QUERY_STRING"] = "foo=bar&baz=blah&meh=\xcf"
319301 rv = wsgi.get_current_url(env)
320 strict_eq(rv,
321 u'http://localhost/?foo=bar&baz=blah&meh=\ufffd')
302 # it remains percent-encoded
303 strict_eq(rv, u"http://localhost/?foo=bar&baz=blah&meh=%CF")
322304
323305
324306 def test_multi_part_line_breaks():
325 data = 'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK'
307 data = "abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK"
326308 test_stream = NativeStringIO(data)
327 lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
328 buffer_size=16))
329 assert lines == ['abcdef\r\n', 'ghijkl\r\n', 'mnopqrstuvwxyz\r\n',
330 'ABCDEFGHIJK']
331
332 data = 'abc\r\nThis line is broken by the buffer length.' \
333 '\r\nFoo bar baz'
309 lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=16))
310 assert lines == ["abcdef\r\n", "ghijkl\r\n", "mnopqrstuvwxyz\r\n", "ABCDEFGHIJK"]
311
312 data = "abc\r\nThis line is broken by the buffer length.\r\nFoo bar baz"
334313 test_stream = NativeStringIO(data)
335 lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
336 buffer_size=24))
337 assert lines == ['abc\r\n', 'This line is broken by the buffer '
338 'length.\r\n', 'Foo bar baz']
314 lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=24))
315 assert lines == [
316 "abc\r\n",
317 "This line is broken by the buffer length.\r\n",
318 "Foo bar baz",
319 ]
339320
340321
341322 def test_multi_part_line_breaks_bytes():
342 data = b'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK'
323 data = b"abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK"
343324 test_stream = BytesIO(data)
344 lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
345 buffer_size=16))
346 assert lines == [b'abcdef\r\n', b'ghijkl\r\n', b'mnopqrstuvwxyz\r\n',
347 b'ABCDEFGHIJK']
348
349 data = b'abc\r\nThis line is broken by the buffer length.' \
350 b'\r\nFoo bar baz'
325 lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=16))
326 assert lines == [
327 b"abcdef\r\n",
328 b"ghijkl\r\n",
329 b"mnopqrstuvwxyz\r\n",
330 b"ABCDEFGHIJK",
331 ]
332
333 data = b"abc\r\nThis line is broken by the buffer length." b"\r\nFoo bar baz"
351334 test_stream = BytesIO(data)
352 lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
353 buffer_size=24))
354 assert lines == [b'abc\r\n', b'This line is broken by the buffer '
355 b'length.\r\n', b'Foo bar baz']
335 lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=24))
336 assert lines == [
337 b"abc\r\n",
338 b"This line is broken by the buffer " b"length.\r\n",
339 b"Foo bar baz",
340 ]
356341
357342
358343 def test_multi_part_line_breaks_problematic():
359 data = 'abc\rdef\r\nghi'
360 for x in range(1, 10):
344 data = "abc\rdef\r\nghi"
345 for _ in range(1, 10):
361346 test_stream = NativeStringIO(data)
362 lines = list(wsgi.make_line_iter(test_stream, limit=len(data),
363 buffer_size=4))
364 assert lines == ['abc\r', 'def\r\n', 'ghi']
347 lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=4))
348 assert lines == ["abc\r", "def\r\n", "ghi"]
365349
366350
367351 def test_iter_functions_support_iterators():
368 data = ['abcdef\r\nghi', 'jkl\r\nmnopqrstuvwxyz\r', '\nABCDEFGHIJK']
352 data = ["abcdef\r\nghi", "jkl\r\nmnopqrstuvwxyz\r", "\nABCDEFGHIJK"]
369353 lines = list(wsgi.make_line_iter(data))
370 assert lines == ['abcdef\r\n', 'ghijkl\r\n', 'mnopqrstuvwxyz\r\n',
371 'ABCDEFGHIJK']
354 assert lines == ["abcdef\r\n", "ghijkl\r\n", "mnopqrstuvwxyz\r\n", "ABCDEFGHIJK"]
372355
373356
374357 def test_make_chunk_iter():
375 data = [u'abcdefXghi', u'jklXmnopqrstuvwxyzX', u'ABCDEFGHIJK']
376 rv = list(wsgi.make_chunk_iter(data, 'X'))
377 assert rv == [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', u'ABCDEFGHIJK']
378
379 data = u'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK'
358 data = [u"abcdefXghi", u"jklXmnopqrstuvwxyzX", u"ABCDEFGHIJK"]
359 rv = list(wsgi.make_chunk_iter(data, "X"))
360 assert rv == [u"abcdef", u"ghijkl", u"mnopqrstuvwxyz", u"ABCDEFGHIJK"]
361
362 data = u"abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK"
380363 test_stream = StringIO(data)
381 rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data),
382 buffer_size=4))
383 assert rv == [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', u'ABCDEFGHIJK']
364 rv = list(wsgi.make_chunk_iter(test_stream, "X", limit=len(data), buffer_size=4))
365 assert rv == [u"abcdef", u"ghijkl", u"mnopqrstuvwxyz", u"ABCDEFGHIJK"]
384366
385367
386368 def test_make_chunk_iter_bytes():
387 data = [b'abcdefXghi', b'jklXmnopqrstuvwxyzX', b'ABCDEFGHIJK']
388 rv = list(wsgi.make_chunk_iter(data, 'X'))
389 assert rv == [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz', b'ABCDEFGHIJK']
390
391 data = b'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK'
369 data = [b"abcdefXghi", b"jklXmnopqrstuvwxyzX", b"ABCDEFGHIJK"]
370 rv = list(wsgi.make_chunk_iter(data, "X"))
371 assert rv == [b"abcdef", b"ghijkl", b"mnopqrstuvwxyz", b"ABCDEFGHIJK"]
372
373 data = b"abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK"
392374 test_stream = BytesIO(data)
393 rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data),
394 buffer_size=4))
395 assert rv == [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz', b'ABCDEFGHIJK']
396
397 data = b'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK'
375 rv = list(wsgi.make_chunk_iter(test_stream, "X", limit=len(data), buffer_size=4))
376 assert rv == [b"abcdef", b"ghijkl", b"mnopqrstuvwxyz", b"ABCDEFGHIJK"]
377
378 data = b"abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK"
398379 test_stream = BytesIO(data)
399 rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data),
400 buffer_size=4, cap_at_buffer=True))
401 assert rv == [b'abcd', b'ef', b'ghij', b'kl', b'mnop', b'qrst', b'uvwx',
402 b'yz', b'ABCD', b'EFGH', b'IJK']
380 rv = list(
381 wsgi.make_chunk_iter(
382 test_stream, "X", limit=len(data), buffer_size=4, cap_at_buffer=True
383 )
384 )
385 assert rv == [
386 b"abcd",
387 b"ef",
388 b"ghij",
389 b"kl",
390 b"mnop",
391 b"qrst",
392 b"uvwx",
393 b"yz",
394 b"ABCD",
395 b"EFGH",
396 b"IJK",
397 ]
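The re-chunking these tests exercise, buffering an iterable of string fragments and emitting one chunk per separator, can be sketched like this (a hypothetical helper, not Werkzeug's `make_chunk_iter`, which also handles streams, limits, and `cap_at_buffer`):

```python
def split_on_marker(parts, sep):
    """Re-chunk an iterable of strings at each ``sep``, dropping it."""
    buf = ""
    for part in parts:
        buf += part
        # A fragment may contain zero, one, or several separators.
        while sep in buf:
            chunk, buf = buf.split(sep, 1)
            yield chunk
    if buf:
        yield buf
```

The key property, shared with the code under test, is that separators falling across fragment boundaries are still detected, because incomplete data stays in the buffer.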
403398
404399
405400 def test_lines_longer_buffer_size():
406 data = '1234567890\n1234567890\n'
401 data = "1234567890\n1234567890\n"
407402 for bufsize in range(1, 15):
408 lines = list(wsgi.make_line_iter(NativeStringIO(data), limit=len(data),
409 buffer_size=4))
410 assert lines == ['1234567890\n', '1234567890\n']
403 lines = list(
404 wsgi.make_line_iter(
405 NativeStringIO(data), limit=len(data), buffer_size=bufsize
406 )
407 )
408 assert lines == ["1234567890\n", "1234567890\n"]
411409
412410
413411 def test_lines_longer_buffer_size_cap():
414 data = '1234567890\n1234567890\n'
412 data = "1234567890\n1234567890\n"
415413 for bufsize in range(1, 15):
416 lines = list(wsgi.make_line_iter(NativeStringIO(data), limit=len(data),
417 buffer_size=4, cap_at_buffer=True))
418 assert lines == ['1234', '5678', '90\n', '1234', '5678', '90\n']
414 lines = list(
415 wsgi.make_line_iter(
416 NativeStringIO(data),
417 limit=len(data),
418 buffer_size=bufsize,
419 cap_at_buffer=True,
420 )
421 )
422 assert len(lines[0]) == bufsize or lines[0].endswith("\n")
419423
420424
421425 def test_range_wrapper():
422 response = BaseResponse(b'Hello World')
426 response = BaseResponse(b"Hello World")
423427 range_wrapper = _RangeWrapper(response.response, 6, 4)
424 assert next(range_wrapper) == b'Worl'
425
426 response = BaseResponse(b'Hello World')
428 assert next(range_wrapper) == b"Worl"
429
430 response = BaseResponse(b"Hello World")
427431 range_wrapper = _RangeWrapper(response.response, 1, 0)
428432 with pytest.raises(StopIteration):
429433 next(range_wrapper)
430434
431 response = BaseResponse(b'Hello World')
435 response = BaseResponse(b"Hello World")
432436 range_wrapper = _RangeWrapper(response.response, 6, 100)
433 assert next(range_wrapper) == b'World'
434
435 response = BaseResponse((x for x in (b'He', b'll', b'o ', b'Wo', b'rl', b'd')))
437 assert next(range_wrapper) == b"World"
438
439 response = BaseResponse((x for x in (b"He", b"ll", b"o ", b"Wo", b"rl", b"d")))
436440 range_wrapper = _RangeWrapper(response.response, 6, 4)
437441 assert not range_wrapper.seekable
438 assert next(range_wrapper) == b'Wo'
439 assert next(range_wrapper) == b'rl'
440
441 response = BaseResponse((x for x in (b'He', b'll', b'o W', b'o', b'rld')))
442 assert next(range_wrapper) == b"Wo"
443 assert next(range_wrapper) == b"rl"
444
445 response = BaseResponse((x for x in (b"He", b"ll", b"o W", b"o", b"rld")))
442446 range_wrapper = _RangeWrapper(response.response, 6, 4)
443 assert next(range_wrapper) == b'W'
444 assert next(range_wrapper) == b'o'
445 assert next(range_wrapper) == b'rl'
447 assert next(range_wrapper) == b"W"
448 assert next(range_wrapper) == b"o"
449 assert next(range_wrapper) == b"rl"
446450 with pytest.raises(StopIteration):
447451 next(range_wrapper)
448452
449 response = BaseResponse((x for x in (b'Hello', b' World')))
453 response = BaseResponse((x for x in (b"Hello", b" World")))
450454 range_wrapper = _RangeWrapper(response.response, 1, 1)
451 assert next(range_wrapper) == b'e'
455 assert next(range_wrapper) == b"e"
452456 with pytest.raises(StopIteration):
453457 next(range_wrapper)
454458
455 resources = os.path.join(os.path.dirname(__file__), 'res')
459 resources = os.path.join(os.path.dirname(__file__), "res")
456460 env = create_environ()
457 with open(os.path.join(resources, 'test.txt'), 'rb') as f:
461 with open(os.path.join(resources, "test.txt"), "rb") as f:
458462 response = BaseResponse(wrap_file(env, f))
459463 range_wrapper = _RangeWrapper(response.response, 1, 2)
460464 assert range_wrapper.seekable
461 assert next(range_wrapper) == b'OU'
465 assert next(range_wrapper) == b"OU"
462466 with pytest.raises(StopIteration):
463467 next(range_wrapper)
464468
465 with open(os.path.join(resources, 'test.txt'), 'rb') as f:
469 with open(os.path.join(resources, "test.txt"), "rb") as f:
466470 response = BaseResponse(wrap_file(env, f))
467471 range_wrapper = _RangeWrapper(response.response, 2)
468 assert next(range_wrapper) == b'UND\n'
472 assert next(range_wrapper) == b"UND\n"
469473 with pytest.raises(StopIteration):
470474 next(range_wrapper)
471475
472476
473 def test_http_proxy(dev_server):
474 APP_TEMPLATE = r'''
475 from werkzeug.wrappers import Request, Response
476
477 @Request.application
478 def app(request):
479 return Response(u'%s|%s|%s' % (
480 request.headers.get('X-Special'),
481 request.environ['HTTP_HOST'],
482 request.path,
483 ))
484 '''
485
486 server = dev_server(APP_TEMPLATE)
487
488 app = wsgi.ProxyMiddleware(BaseResponse('ROOT'), {
489 '/foo': {
490 'target': server.url,
491 'host': 'faked.invalid',
492 'headers': {'X-Special': 'foo'},
493 },
494 '/bar': {
495 'target': server.url,
496 'host': None,
497 'remove_prefix': True,
498 'headers': {'X-Special': 'bar'},
499 },
500 '/autohost': {
501 'target': server.url,
502 },
503 })
504
505 client = Client(app, response_wrapper=BaseResponse)
506
507 rv = client.get('/')
508 assert rv.data == b'ROOT'
509
510 rv = client.get('/foo/bar')
511 assert rv.data.decode('ascii') == 'foo|faked.invalid|/foo/bar'
512
513 rv = client.get('/bar/baz')
514 assert rv.data.decode('ascii') == 'bar|localhost|/baz'
515
516 rv = client.get('/autohost/aha')
517 assert rv.data.decode('ascii') == 'None|%s|/autohost/aha' % url_parse(
518 server.url).ascii_host
477 def test_closing_iterator():
478 class Namespace(object):
479 got_close = False
480 got_additional = False
481
482 class Response(object):
483 def __init__(self, environ, start_response):
484 self.start = start_response
485
486 # Return a generator instead of making the object its own
487 # iterator. This ensures that ClosingIterator calls close on
488 # the iterable (the object), not the iterator.
489 def __iter__(self):
490 self.start("200 OK", [("Content-Type", "text/plain")])
491 yield "some content"
492
493 def close(self):
494 Namespace.got_close = True
495
496 def additional():
497 Namespace.got_additional = True
498
499 def app(environ, start_response):
500 return ClosingIterator(Response(environ, start_response), additional)
501
502 app_iter, status, headers = run_wsgi_app(app, create_environ(), buffered=True)
503
504 assert "".join(app_iter) == "some content"
505 assert Namespace.got_close
506 assert Namespace.got_additional
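The contract this new test pins down, close is called on the *iterable* (plus any extra callbacks) once iteration is done, can be sketched with a small wrapper (a hypothetical class, not Werkzeug's `ClosingIterator`, which also forwards WSGI-specific hooks):

```python
class ClosingWrapper(object):
    """Iterate a wrapped iterable; on close(), call the iterable's own
    close() first, then any extra callbacks."""

    def __init__(self, iterable, callbacks=None):
        self._iterator = iter(iterable)
        if callbacks is None:
            callbacks = []
        elif callable(callbacks):
            callbacks = [callbacks]
        else:
            callbacks = list(callbacks)
        # Take close from the iterable itself, not the iterator, which
        # is exactly what the generator-returning __iter__ above checks.
        close = getattr(iterable, "close", None)
        if close is not None:
            callbacks.insert(0, close)
        self._callbacks = callbacks

    def __iter__(self):
        return self

    def __next__(self):
        return next(self._iterator)

    def close(self):
        for callback in self._callbacks:
            callback()
```

A WSGI server drives exactly this pattern: exhaust the response iterable, then call `close()` so resources and teardown callbacks run.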
00 [tox]
11 envlist =
2 py{36,27}-hypothesis-uwsgi
3 py{35,34,py}
4 # run py33, py26 manually
2 py{37,36,35,34,27,py3,py}
53 stylecheck
64 docs-html
75 coverage-report
6 skip_missing_interpreters = true
87
98 [testenv]
10 passenv = LANG
11 setenv =
12 TOX_ENVTMPDIR={envtmpdir}
13
14 usedevelop = true
159 deps =
16 # remove once we drop support for 2.6, 3.3
17 py26,py33: py<1.5
18 py26,py33: pytest<3.3
19
10 coverage
11 pytest
12 pytest-timeout
2013 pytest-xprocess
21 coverage
2214 requests
15 requests_unixsocket
2316 pyopenssl
2417 greenlet
25 redis
26 python-memcached
2718 watchdog
28
29 hypothesis: hypothesis
30
31 uwsgi: uwsgi
32
33 whitelist_externals =
34 redis-server
35 memcached
36 uwsgi
37
38 commands =
39 coverage run -p -m pytest []
40 hypothesis: coverage run -p -m pytest [] tests/hypothesis
41
42 uwsgi: uwsgi --pyrun {envbindir}/coverage --pyargv 'run -p -m pytest -kUWSGI' --cache2=name=werkzeugtest,items=20 --master
43 # --pyrun doesn't pass pytest exit code up, so check for a marker
44 uwsgi: python -c 'import os, sys; sys.exit(os.path.exists("{envtmpdir}/test_uwsgi_failed"))'
19 commands = coverage run -p -m pytest --tb=short --basetemp={envtmpdir} {posargs}
4520
4621 [testenv:stylecheck]
47 deps = flake8
48 commands = flake8 []
22 deps = pre-commit
23 skip_install = true
24 commands = pre-commit run --all-files --show-diff-on-failure
4925
5026 [testenv:docs-html]
51 deps = sphinx
52 commands = sphinx-build -W -b html -d {envtmpdir}/doctrees docs docs/_build/html
53
54 [testenv:docs-linkcheck]
55 deps = sphinx
56 commands = sphinx-build -W -b linkcheck -d {envtmpdir}/doctrees docs docs/_build/linkcheck
27 deps =
28 Sphinx
29 Pallets-Sphinx-Themes
30 sphinx-issues
31 commands = sphinx-build -W -b html -d {envtmpdir}/doctrees docs {envtmpdir}/html
5732
5833 [testenv:coverage-report]
5934 deps = coverage
6035 skip_install = true
6136 commands =
6237 coverage combine
38 coverage html
6339 coverage report
64 coverage html
6540
6641 [testenv:codecov]
67 passenv = CI TRAVIS TRAVIS_*
42 passenv = CI TRAVIS TRAVIS_* APPVEYOR APPVEYOR_*
6843 deps = codecov
6944 skip_install = true
7045 commands =
7146 coverage combine
47 codecov
7248 coverage report
73 codecov
+0 -151 werkzeug/__init__.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug
3 ~~~~~~~~
4
5 Werkzeug is the Swiss Army knife of Python web development.
6
7 It provides useful classes and functions for any WSGI application to make
8 the life of a python web developer much easier. All of the provided
9 classes are independent from each other so you can mix it with any other
10 library.
11
12
13 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
14 :license: BSD, see LICENSE for more details.
15 """
16 from types import ModuleType
17 import sys
18
19 from werkzeug._compat import iteritems
20
21 __version__ = '0.14.1'
22
23
24 # This import magic raises concerns quite often, which is why the implementation
25 # and motivation are explained here in detail.
26 #
27 # The majority of the functions and classes provided by Werkzeug work on the
28 # HTTP and WSGI layer. There is no useful grouping for those which is why
29 # they are all importable from "werkzeug" instead of the modules where they are
30 # implemented. The downside is that everything would then be loaded at
31 # once, even if unused.
32 #
33 # The implementation of a lazy-loading module in this file replaces the
34 # werkzeug package when imported from within. Attribute access to the werkzeug
35 # module will then lazily import from the modules that implement the objects.
36
37
38 # import mapping to objects in other modules
39 all_by_module = {
40 'werkzeug.debug': ['DebuggedApplication'],
41 'werkzeug.local': ['Local', 'LocalManager', 'LocalProxy', 'LocalStack',
42 'release_local'],
43 'werkzeug.serving': ['run_simple'],
44 'werkzeug.test': ['Client', 'EnvironBuilder', 'create_environ',
45 'run_wsgi_app'],
46 'werkzeug.testapp': ['test_app'],
47 'werkzeug.exceptions': ['abort', 'Aborter'],
48 'werkzeug.urls': ['url_decode', 'url_encode', 'url_quote',
49 'url_quote_plus', 'url_unquote', 'url_unquote_plus',
50 'url_fix', 'Href', 'iri_to_uri', 'uri_to_iri'],
51 'werkzeug.formparser': ['parse_form_data'],
52 'werkzeug.utils': ['escape', 'environ_property', 'append_slash_redirect',
53 'redirect', 'cached_property', 'import_string',
54 'dump_cookie', 'parse_cookie', 'unescape',
55 'format_string', 'find_modules', 'header_property',
56 'html', 'xhtml', 'HTMLBuilder', 'validate_arguments',
57 'ArgumentValidationError', 'bind_arguments',
58 'secure_filename'],
59 'werkzeug.wsgi': ['get_current_url', 'get_host', 'pop_path_info',
60 'peek_path_info', 'SharedDataMiddleware',
61 'DispatcherMiddleware', 'ClosingIterator', 'FileWrapper',
62 'make_line_iter', 'LimitedStream', 'responder',
63 'wrap_file', 'extract_path_info'],
64 'werkzeug.datastructures': ['MultiDict', 'CombinedMultiDict', 'Headers',
65 'EnvironHeaders', 'ImmutableList',
66 'ImmutableDict', 'ImmutableMultiDict',
67 'TypeConversionDict',
68 'ImmutableTypeConversionDict', 'Accept',
69 'MIMEAccept', 'CharsetAccept',
70 'LanguageAccept', 'RequestCacheControl',
71 'ResponseCacheControl', 'ETags', 'HeaderSet',
72 'WWWAuthenticate', 'Authorization',
73 'FileMultiDict', 'CallbackDict', 'FileStorage',
74 'OrderedMultiDict', 'ImmutableOrderedMultiDict'
75 ],
76 'werkzeug.useragents': ['UserAgent'],
77 'werkzeug.http': ['parse_etags', 'parse_date', 'http_date', 'cookie_date',
78 'parse_cache_control_header', 'is_resource_modified',
79 'parse_accept_header', 'parse_set_header', 'quote_etag',
80 'unquote_etag', 'generate_etag', 'dump_header',
81 'parse_list_header', 'parse_dict_header',
82 'parse_authorization_header',
83 'parse_www_authenticate_header', 'remove_entity_headers',
84 'is_entity_header', 'remove_hop_by_hop_headers',
85 'parse_options_header', 'dump_options_header',
86 'is_hop_by_hop_header', 'unquote_header_value',
87 'quote_header_value', 'HTTP_STATUS_CODES'],
88 'werkzeug.wrappers': ['BaseResponse', 'BaseRequest', 'Request', 'Response',
89 'AcceptMixin', 'ETagRequestMixin',
90 'ETagResponseMixin', 'ResponseStreamMixin',
91 'CommonResponseDescriptorsMixin', 'UserAgentMixin',
92 'AuthorizationMixin', 'WWWAuthenticateMixin',
93 'CommonRequestDescriptorsMixin'],
94 'werkzeug.security': ['generate_password_hash', 'check_password_hash'],
95 # the undocumented easteregg ;-)
96 'werkzeug._internal': ['_easteregg']
97 }
98
99 # modules that should be imported when accessed as attributes of werkzeug
100 attribute_modules = frozenset(['exceptions', 'routing'])
101
102
103 object_origins = {}
104 for module, items in iteritems(all_by_module):
105 for item in items:
106 object_origins[item] = module
107
108
109 class module(ModuleType):
110
111 """Automatically import objects from the modules."""
112
113 def __getattr__(self, name):
114 if name in object_origins:
115 module = __import__(object_origins[name], None, None, [name])
116 for extra_name in all_by_module[module.__name__]:
117 setattr(self, extra_name, getattr(module, extra_name))
118 return getattr(module, name)
119 elif name in attribute_modules:
120 __import__('werkzeug.' + name)
121 return ModuleType.__getattribute__(self, name)
122
123 def __dir__(self):
124 """Just show what we want to show."""
125 result = list(new_module.__all__)
126 result.extend(('__file__', '__doc__', '__all__',
127 '__docformat__', '__name__', '__path__',
128 '__package__', '__version__'))
129 return result
130
131 # keep a reference to this module so that it's not garbage collected
132 old_module = sys.modules['werkzeug']
133
134
135 # setup the new module and patch it into the dict of loaded modules
136 new_module = sys.modules['werkzeug'] = module('werkzeug')
137 new_module.__dict__.update({
138 '__file__': __file__,
139 '__package__': 'werkzeug',
140 '__path__': __path__,
141 '__doc__': __doc__,
142 '__version__': __version__,
143 '__all__': tuple(object_origins) + tuple(attribute_modules),
144 '__docformat__': 'restructuredtext en'
145 })
146
147
148 # Due to bootstrapping issues we need to import exceptions here.
149 # Don't ask :-(
150 __import__('werkzeug.exceptions')
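The lazy-loading pattern in the deleted `werkzeug/__init__.py` above — swapping a `ModuleType` subclass into `sys.modules` so that attribute access triggers the real import — can be sketched in miniature. Everything here (`lazydemo`, the stdlib mapping) is illustrative, not werkzeug code:

```python
import sys
from types import ModuleType

# Map attribute names to the modules that implement them; stdlib modules
# stand in for the submodules a real package would lazily expose.
object_origins = {"sqrt": "math", "dumps": "json"}


class LazyModule(ModuleType):
    """Imports the implementing module on first attribute access."""

    def __getattr__(self, name):
        if name in object_origins:
            module = __import__(object_origins[name], None, None, [name])
            value = getattr(module, name)
            setattr(self, name, value)  # cache: later lookups skip __getattr__
            return value
        raise AttributeError(name)


# Install the module under its import name, much as werkzeug patched itself in.
lazy = sys.modules["lazydemo"] = LazyModule("lazydemo")
```

After this, `import lazydemo; lazydemo.sqrt(9)` imports `math` only at that moment. On Python 3.7+ a plain module-level `__getattr__` (PEP 562) achieves the same effect without replacing the module object.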
+0 -206 werkzeug/_compat.py
0 # flake8: noqa
1 # This whole file is full of lint errors
2 import codecs
3 import sys
4 import operator
5 import functools
6 import warnings
7
8 try:
9 import builtins
10 except ImportError:
11 import __builtin__ as builtins
12
13
14 PY2 = sys.version_info[0] == 2
15 WIN = sys.platform.startswith('win')
16
17 _identity = lambda x: x
18
19 if PY2:
20 unichr = unichr
21 text_type = unicode
22 string_types = (str, unicode)
23 integer_types = (int, long)
24
25 iterkeys = lambda d, *args, **kwargs: d.iterkeys(*args, **kwargs)
26 itervalues = lambda d, *args, **kwargs: d.itervalues(*args, **kwargs)
27 iteritems = lambda d, *args, **kwargs: d.iteritems(*args, **kwargs)
28
29 iterlists = lambda d, *args, **kwargs: d.iterlists(*args, **kwargs)
30 iterlistvalues = lambda d, *args, **kwargs: d.iterlistvalues(*args, **kwargs)
31
32 int_to_byte = chr
33 iter_bytes = iter
34
35 exec('def reraise(tp, value, tb=None):\n raise tp, value, tb')
36
37 def fix_tuple_repr(obj):
38 def __repr__(self):
39 cls = self.__class__
40 return '%s(%s)' % (cls.__name__, ', '.join(
41 '%s=%r' % (field, self[index])
42 for index, field in enumerate(cls._fields)
43 ))
44 obj.__repr__ = __repr__
45 return obj
46
47 def implements_iterator(cls):
48 cls.next = cls.__next__
49 del cls.__next__
50 return cls
51
52 def implements_to_string(cls):
53 cls.__unicode__ = cls.__str__
54 cls.__str__ = lambda x: x.__unicode__().encode('utf-8')
55 return cls
56
57 def native_string_result(func):
58 def wrapper(*args, **kwargs):
59 return func(*args, **kwargs).encode('utf-8')
60 return functools.update_wrapper(wrapper, func)
61
62 def implements_bool(cls):
63 cls.__nonzero__ = cls.__bool__
64 del cls.__bool__
65 return cls
66
67 from itertools import imap, izip, ifilter
68 range_type = xrange
69
70 from StringIO import StringIO
71 from cStringIO import StringIO as BytesIO
72 NativeStringIO = BytesIO
73
74 def make_literal_wrapper(reference):
75 return _identity
76
77 def normalize_string_tuple(tup):
78 """Normalizes a string tuple to a common type. Following Python 2
79 rules, upgrades to unicode are implicit.
80 """
81 if any(isinstance(x, text_type) for x in tup):
82 return tuple(to_unicode(x) for x in tup)
83 return tup
84
85 def try_coerce_native(s):
86 """Try to coerce a unicode string to native if possible. Otherwise,
87 leave it as unicode.
88 """
89 try:
90 return to_native(s)
91 except UnicodeError:
92 return s
93
94 wsgi_get_bytes = _identity
95
96 def wsgi_decoding_dance(s, charset='utf-8', errors='replace'):
97 return s.decode(charset, errors)
98
99 def wsgi_encoding_dance(s, charset='utf-8', errors='replace'):
100 if isinstance(s, bytes):
101 return s
102 return s.encode(charset, errors)
103
104 def to_bytes(x, charset=sys.getdefaultencoding(), errors='strict'):
105 if x is None:
106 return None
107 if isinstance(x, (bytes, bytearray, buffer)):
108 return bytes(x)
109 if isinstance(x, unicode):
110 return x.encode(charset, errors)
111 raise TypeError('Expected bytes')
112
113 def to_native(x, charset=sys.getdefaultencoding(), errors='strict'):
114 if x is None or isinstance(x, str):
115 return x
116 return x.encode(charset, errors)
117
118 else:
119 unichr = chr
120 text_type = str
121 string_types = (str, )
122 integer_types = (int, )
123
124 iterkeys = lambda d, *args, **kwargs: iter(d.keys(*args, **kwargs))
125 itervalues = lambda d, *args, **kwargs: iter(d.values(*args, **kwargs))
126 iteritems = lambda d, *args, **kwargs: iter(d.items(*args, **kwargs))
127
128 iterlists = lambda d, *args, **kwargs: iter(d.lists(*args, **kwargs))
129 iterlistvalues = lambda d, *args, **kwargs: iter(d.listvalues(*args, **kwargs))
130
131 int_to_byte = operator.methodcaller('to_bytes', 1, 'big')
132 iter_bytes = functools.partial(map, int_to_byte)
133
134 def reraise(tp, value, tb=None):
135 if value.__traceback__ is not tb:
136 raise value.with_traceback(tb)
137 raise value
138
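The Python 3 branch of `reraise` above re-raises an exception object with an explicit traceback attached. A small standalone check of that behavior (the surrounding try/except scaffolding is illustrative):

```python
import sys


def reraise(tp, value, tb=None):
    # Python 3 form: attach the traceback to the exception and re-raise it.
    if value.__traceback__ is not tb:
        raise value.with_traceback(tb)
    raise value


try:
    try:
        1 / 0
    except ZeroDivisionError:
        exc_type, exc_value, exc_tb = sys.exc_info()
    # Later, re-raise with the original traceback preserved:
    reraise(exc_type, exc_value, exc_tb)
except ZeroDivisionError as e:
    caught = e
```

The re-raised exception still carries the traceback pointing at the original `1 / 0` frame, which is what makes error reporting across thread or callback boundaries useful.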
139 fix_tuple_repr = _identity
140 implements_iterator = _identity
141 implements_to_string = _identity
142 implements_bool = _identity
143 native_string_result = _identity
144 imap = map
145 izip = zip
146 ifilter = filter
147 range_type = range
148
149 from io import StringIO, BytesIO
150 NativeStringIO = StringIO
151
152 _latin1_encode = operator.methodcaller('encode', 'latin1')
153
154 def make_literal_wrapper(reference):
155 if isinstance(reference, text_type):
156 return _identity
157 return _latin1_encode
158
159 def normalize_string_tuple(tup):
160 """Ensures that the items in the tuple are either all text strings
161 or all bytes; mixing the two raises a TypeError.
162 """
163 tupiter = iter(tup)
164 is_text = isinstance(next(tupiter, None), text_type)
165 for arg in tupiter:
166 if isinstance(arg, text_type) != is_text:
167 raise TypeError('Cannot mix str and bytes arguments (got %s)'
168 % repr(tup))
169 return tup
170
171 try_coerce_native = _identity
172 wsgi_get_bytes = _latin1_encode
173
174 def wsgi_decoding_dance(s, charset='utf-8', errors='replace'):
175 return s.encode('latin1').decode(charset, errors)
176
177 def wsgi_encoding_dance(s, charset='utf-8', errors='replace'):
178 if isinstance(s, text_type):
179 s = s.encode(charset)
180 return s.decode('latin1', errors)
181
182 def to_bytes(x, charset=sys.getdefaultencoding(), errors='strict'):
183 if x is None:
184 return None
185 if isinstance(x, (bytes, bytearray, memoryview)): # noqa
186 return bytes(x)
187 if isinstance(x, str):
188 return x.encode(charset, errors)
189 raise TypeError('Expected bytes')
190
191 def to_native(x, charset=sys.getdefaultencoding(), errors='strict'):
192 if x is None or isinstance(x, str):
193 return x
194 return x.decode(charset, errors)
195
196
197 def to_unicode(x, charset=sys.getdefaultencoding(), errors='strict',
198 allow_none_charset=False):
199 if x is None:
200 return None
201 if not isinstance(x, bytes):
202 return text_type(x)
203 if charset is None and allow_none_charset:
204 return x
205 return x.decode(charset, errors)
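The `wsgi_decoding_dance` / `wsgi_encoding_dance` pair above exists because PEP 3333 requires environ values to be native `str` decoded as latin-1, even when the underlying bytes are UTF-8; recovering the real text means re-encoding to latin-1 first. Restated standalone for Python 3:

```python
def wsgi_decoding_dance(s, charset="utf-8", errors="replace"):
    # WSGI hands over str decoded as latin-1; undo that, then decode
    # the recovered bytes with the real charset.
    return s.encode("latin1").decode(charset, errors)


def wsgi_encoding_dance(s, charset="utf-8", errors="replace"):
    # Encode real text to bytes, then re-present them as latin-1 str
    # so WSGI servers accept the value.
    if isinstance(s, str):
        s = s.encode(charset)
    return s.decode("latin1", errors)


# What a WSGI server would hand us for a UTF-8 path containing "café":
raw = "café".encode("utf-8").decode("latin1")
```

The round trip is lossless for any charset whose bytes are representable, because latin-1 maps every byte value 0-255 to a distinct code point.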
+0 -419 werkzeug/_internal.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug._internal
3 ~~~~~~~~~~~~~~~~~~
4
5 This module provides internally used helpers and constants.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 import re
11 import string
12 import inspect
13 from weakref import WeakKeyDictionary
14 from datetime import datetime, date
15 from itertools import chain
16
17 from werkzeug._compat import iter_bytes, text_type, BytesIO, int_to_byte, \
18 range_type, integer_types
19
20
21 _logger = None
22 _empty_stream = BytesIO()
23 _signature_cache = WeakKeyDictionary()
24 _epoch_ord = date(1970, 1, 1).toordinal()
25 _cookie_params = set((b'expires', b'path', b'comment',
26 b'max-age', b'secure', b'httponly',
27 b'version'))
28 _legal_cookie_chars = (string.ascii_letters +
29 string.digits +
30 u"/=!#$%&'*+-.^_`|~:").encode('ascii')
31
32 _cookie_quoting_map = {
33 b',': b'\\054',
34 b';': b'\\073',
35 b'"': b'\\"',
36 b'\\': b'\\\\',
37 }
38 for _i in chain(range_type(32), range_type(127, 256)):
39 _cookie_quoting_map[int_to_byte(_i)] = ('\\%03o' % _i).encode('latin1')
40
41
42 _octal_re = re.compile(br'\\[0-3][0-7][0-7]')
43 _quote_re = re.compile(br'[\\].')
44 _legal_cookie_chars_re = br'[\w\d!#%&\'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\=]'
45 _cookie_re = re.compile(br"""
46 (?P<key>[^=;]+)
47 (?:\s*=\s*
48 (?P<val>
49 "(?:[^\\"]|\\.)*" |
50 (?:.*?)
51 )
52 )?
53 \s*;
54 """, flags=re.VERBOSE)
55
56
57 class _Missing(object):
58
59 def __repr__(self):
60 return 'no value'
61
62 def __reduce__(self):
63 return '_missing'
64
65 _missing = _Missing()
66
67
68 def _get_environ(obj):
69 env = getattr(obj, 'environ', obj)
70 assert isinstance(env, dict), \
71 '%r is not a WSGI environment (has to be a dict)' % type(obj).__name__
72 return env
73
74
75 def _log(type, message, *args, **kwargs):
76 """Log into the internal werkzeug logger."""
77 global _logger
78 if _logger is None:
79 import logging
80 _logger = logging.getLogger('werkzeug')
81 # Only set up a default log handler if the
82 # end-user application didn't set anything up.
83 if not logging.root.handlers and _logger.level == logging.NOTSET:
84 _logger.setLevel(logging.INFO)
85 handler = logging.StreamHandler()
86 _logger.addHandler(handler)
87 getattr(_logger, type)(message.rstrip(), *args, **kwargs)
88
89
90 def _parse_signature(func):
91 """Return a signature object for the function."""
92 if hasattr(func, 'im_func'):
93 func = func.im_func
94
95 # if we have a cached validator for this function, return it
96 parse = _signature_cache.get(func)
97 if parse is not None:
98 return parse
99
100 # inspect the function signature and collect all the information
101 if hasattr(inspect, 'getfullargspec'):
102 tup = inspect.getfullargspec(func)
103 else:
104 tup = inspect.getargspec(func)
105 positional, vararg_var, kwarg_var, defaults = tup[:4]
106 defaults = defaults or ()
107 arg_count = len(positional)
108 arguments = []
109 for idx, name in enumerate(positional):
110 if isinstance(name, list):
111 raise TypeError('cannot parse functions that unpack tuples '
112 'in the function signature')
113 try:
114 default = defaults[idx - arg_count]
115 except IndexError:
116 param = (name, False, None)
117 else:
118 param = (name, True, default)
119 arguments.append(param)
120 arguments = tuple(arguments)
121
122 def parse(args, kwargs):
123 new_args = []
124 missing = []
125 extra = {}
126
127 # consume as many arguments as positional as possible
128 for idx, (name, has_default, default) in enumerate(arguments):
129 try:
130 new_args.append(args[idx])
131 except IndexError:
132 try:
133 new_args.append(kwargs.pop(name))
134 except KeyError:
135 if has_default:
136 new_args.append(default)
137 else:
138 missing.append(name)
139 else:
140 if name in kwargs:
141 extra[name] = kwargs.pop(name)
142
143 # handle extra arguments
144 extra_positional = args[arg_count:]
145 if vararg_var is not None:
146 new_args.extend(extra_positional)
147 extra_positional = ()
148 if kwargs and kwarg_var is None:
149 extra.update(kwargs)
150 kwargs = {}
151
152 return new_args, kwargs, missing, extra, extra_positional, \
153 arguments, vararg_var, kwarg_var
154 _signature_cache[func] = parse
155 return parse
156
157
158 def _date_to_unix(arg):
159 """Converts a timetuple, integer, or datetime object into seconds
160 since the epoch in UTC.
161 """
162 if isinstance(arg, datetime):
163 arg = arg.utctimetuple()
164 elif isinstance(arg, integer_types + (float,)):
165 return int(arg)
166 year, month, day, hour, minute, second = arg[:6]
167 days = date(year, month, 1).toordinal() - _epoch_ord + day - 1
168 hours = days * 24 + hour
169 minutes = hours * 60 + minute
170 seconds = minutes * 60 + second
171 return seconds
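`_date_to_unix` above reduces a UTC timetuple to epoch seconds by hand, using ordinal day counts instead of `calendar.timegm`. The same arithmetic restated standalone (the name `date_to_unix` is illustrative; the input is assumed to already be UTC):

```python
from datetime import date

# Ordinal of the Unix epoch, as in the original module.
_epoch_ord = date(1970, 1, 1).toordinal()


def date_to_unix(arg):
    """Convert a UTC timetuple-like sequence to seconds since the epoch."""
    year, month, day, hour, minute, second = arg[:6]
    # Days since the epoch: ordinal of the first of the month, plus the
    # day-of-month offset.
    days = date(year, month, 1).toordinal() - _epoch_ord + day - 1
    return ((days * 24 + hour) * 60 + minute) * 60 + second
```

For example, midnight on 1970-01-02 is exactly one day, 86400 seconds, after the epoch.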
172
173
174 class _DictAccessorProperty(object):
175
176 """Baseclass for `environ_property` and `header_property`."""
177 read_only = False
178
179 def __init__(self, name, default=None, load_func=None, dump_func=None,
180 read_only=None, doc=None):
181 self.name = name
182 self.default = default
183 self.load_func = load_func
184 self.dump_func = dump_func
185 if read_only is not None:
186 self.read_only = read_only
187 self.__doc__ = doc
188
189 def __get__(self, obj, type=None):
190 if obj is None:
191 return self
192 storage = self.lookup(obj)
193 if self.name not in storage:
194 return self.default
195 rv = storage[self.name]
196 if self.load_func is not None:
197 try:
198 rv = self.load_func(rv)
199 except (ValueError, TypeError):
200 rv = self.default
201 return rv
202
203 def __set__(self, obj, value):
204 if self.read_only:
205 raise AttributeError('read only property')
206 if self.dump_func is not None:
207 value = self.dump_func(value)
208 self.lookup(obj)[self.name] = value
209
210 def __delete__(self, obj):
211 if self.read_only:
212 raise AttributeError('read only property')
213 self.lookup(obj).pop(self.name, None)
214
215 def __repr__(self):
216 return '<%s %s>' % (
217 self.__class__.__name__,
218 self.name
219 )
220
221
222 def _cookie_quote(b):
223 buf = bytearray()
224 all_legal = True
225 _lookup = _cookie_quoting_map.get
226 _push = buf.extend
227
228 for char in iter_bytes(b):
229 if char not in _legal_cookie_chars:
230 all_legal = False
231 char = _lookup(char, char)
232 _push(char)
233
234 if all_legal:
235 return bytes(buf)
236 return bytes(b'"' + buf + b'"')
237
238
239 def _cookie_unquote(b):
240 if len(b) < 2:
241 return b
242 if b[:1] != b'"' or b[-1:] != b'"':
243 return b
244
245 b = b[1:-1]
246
247 i = 0
248 n = len(b)
249 rv = bytearray()
250 _push = rv.extend
251
252 while 0 <= i < n:
253 o_match = _octal_re.search(b, i)
254 q_match = _quote_re.search(b, i)
255 if not o_match and not q_match:
256 rv.extend(b[i:])
257 break
258 j = k = -1
259 if o_match:
260 j = o_match.start(0)
261 if q_match:
262 k = q_match.start(0)
263 if q_match and (not o_match or k < j):
264 _push(b[i:k])
265 _push(b[k + 1:k + 2])
266 i = k + 2
267 else:
268 _push(b[i:j])
269 rv.append(int(b[j + 1:j + 4], 8))
270 i = j + 4
271
272 return bytes(rv)
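The escaping that `_cookie_quote` and `_cookie_unquote` implement above follows the old Netscape cookie convention: bytes outside the legal set become backslash-octal escapes like `\054`. A tiny standalone sketch of both directions (the helper names are hypothetical, not werkzeug's API):

```python
import re

# Matches the backslash-octal escapes produced below, e.g. b"\\054".
_octal_re = re.compile(rb"\\[0-3][0-7][0-7]")


def quote_byte(ch):
    """Escape a single byte as a backslash-octal sequence."""
    return ("\\%03o" % ch[0]).encode("latin1")


def unquote(b):
    """Replace every octal escape with the byte it encodes."""
    return _octal_re.sub(lambda m: bytes([int(m.group()[1:], 8)]), b)
```

For instance, a comma (byte 44, octal 054) quotes to `b"\\054"`, and `unquote` maps it back.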
273
274
275 def _cookie_parse_impl(b):
276 """Low-level cookie parsing facility that operates on bytes."""
277 i = 0
278 n = len(b)
279
280 while i < n:
281 match = _cookie_re.search(b + b';', i)
282 if not match:
283 break
284
285 key = match.group('key').strip()
286 value = match.group('val') or b''
287 i = match.end(0)
288
289 # Ignore parameters. We have no interest in them.
290 if key.lower() not in _cookie_params:
291 yield _cookie_unquote(key), _cookie_unquote(value)
292
293
294 def _encode_idna(domain):
295 # If we're given bytes, make sure they fit into ASCII
296 if not isinstance(domain, text_type):
297 domain.decode('ascii')
298 return domain
299
300 # Otherwise check if it's already ascii, then return
301 try:
302 return domain.encode('ascii')
303 except UnicodeError:
304 pass
305
306 # Otherwise encode each part separately
307 parts = domain.split('.')
308 for idx, part in enumerate(parts):
309 parts[idx] = part.encode('idna')
310 return b'.'.join(parts)
311
312
313 def _decode_idna(domain):
314 # If the input is a string, try to encode it to ascii to
315 # do the idna decoding. If that fails because of a
316 # unicode error, then we already have a decoded idna domain.
317 if isinstance(domain, text_type):
318 try:
319 domain = domain.encode('ascii')
320 except UnicodeError:
321 return domain
322
323 # Decode each part separately. If a part fails, try to
324 # decode it with ascii and silently ignore errors. This makes
325 # most sense because the idna codec does not have error handling
326 parts = domain.split(b'.')
327 for idx, part in enumerate(parts):
328 try:
329 parts[idx] = part.decode('idna')
330 except UnicodeError:
331 parts[idx] = part.decode('ascii', 'ignore')
332
333 return '.'.join(parts)
334
335
336 def _make_cookie_domain(domain):
337 if domain is None:
338 return None
339 domain = _encode_idna(domain)
340 if b':' in domain:
341 domain = domain.split(b':', 1)[0]
342 if b'.' in domain:
343 return domain
344 raise ValueError(
345 'Setting \'domain\' for a cookie on a server running locally (ex: '
346 'localhost) is not supported by complying browsers. You should '
347 'have something like: \'127.0.0.1 localhost dev.localhost\' on '
348 'your hosts file and then point your server to run on '
349 '\'dev.localhost\' and also set \'domain\' for \'dev.localhost\''
350 )
351
352
353 def _easteregg(app=None):
354 """Like the name says. But who knows how it works?"""
355 def bzzzzzzz(gyver):
356 import base64
357 import zlib
358 return zlib.decompress(base64.b64decode(gyver)).decode('ascii')
359 gyver = u'\n'.join([x + (77 - len(x)) * u' ' for x in bzzzzzzz(b'''
360 eJyFlzuOJDkMRP06xRjymKgDJCDQStBYT8BCgK4gTwfQ2fcFs2a2FzvZk+hvlcRvRJD148efHt9m
361 9Xz94dRY5hGt1nrYcXx7us9qlcP9HHNh28rz8dZj+q4rynVFFPdlY4zH873NKCexrDM6zxxRymzz
362 4QIxzK4bth1PV7+uHn6WXZ5C4ka/+prFzx3zWLMHAVZb8RRUxtFXI5DTQ2n3Hi2sNI+HK43AOWSY
363 jmEzE4naFp58PdzhPMdslLVWHTGUVpSxImw+pS/D+JhzLfdS1j7PzUMxij+mc2U0I9zcbZ/HcZxc
364 q1QjvvcThMYFnp93agEx392ZdLJWXbi/Ca4Oivl4h/Y1ErEqP+lrg7Xa4qnUKu5UE9UUA4xeqLJ5
365 jWlPKJvR2yhRI7xFPdzPuc6adXu6ovwXwRPXXnZHxlPtkSkqWHilsOrGrvcVWXgGP3daXomCj317
366 8P2UOw/NnA0OOikZyFf3zZ76eN9QXNwYdD8f8/LdBRFg0BO3bB+Pe/+G8er8tDJv83XTkj7WeMBJ
367 v/rnAfdO51d6sFglfi8U7zbnr0u9tyJHhFZNXYfH8Iafv2Oa+DT6l8u9UYlajV/hcEgk1x8E8L/r
368 XJXl2SK+GJCxtnyhVKv6GFCEB1OO3f9YWAIEbwcRWv/6RPpsEzOkXURMN37J0PoCSYeBnJQd9Giu
369 LxYQJNlYPSo/iTQwgaihbART7Fcyem2tTSCcwNCs85MOOpJtXhXDe0E7zgZJkcxWTar/zEjdIVCk
370 iXy87FW6j5aGZhttDBoAZ3vnmlkx4q4mMmCdLtnHkBXFMCReqthSGkQ+MDXLLCpXwBs0t+sIhsDI
371 tjBB8MwqYQpLygZ56rRHHpw+OAVyGgaGRHWy2QfXez+ZQQTTBkmRXdV/A9LwH6XGZpEAZU8rs4pE
372 1R4FQ3Uwt8RKEtRc0/CrANUoes3EzM6WYcFyskGZ6UTHJWenBDS7h163Eo2bpzqxNE9aVgEM2CqI
373 GAJe9Yra4P5qKmta27VjzYdR04Vc7KHeY4vs61C0nbywFmcSXYjzBHdiEjraS7PGG2jHHTpJUMxN
374 Jlxr3pUuFvlBWLJGE3GcA1/1xxLcHmlO+LAXbhrXah1tD6Ze+uqFGdZa5FM+3eHcKNaEarutAQ0A
375 QMAZHV+ve6LxAwWnXbbSXEG2DmCX5ijeLCKj5lhVFBrMm+ryOttCAeFpUdZyQLAQkA06RLs56rzG
376 8MID55vqr/g64Qr/wqwlE0TVxgoiZhHrbY2h1iuuyUVg1nlkpDrQ7Vm1xIkI5XRKLedN9EjzVchu
377 jQhXcVkjVdgP2O99QShpdvXWoSwkp5uMwyjt3jiWCqWGSiaaPAzohjPanXVLbM3x0dNskJsaCEyz
378 DTKIs+7WKJD4ZcJGfMhLFBf6hlbnNkLEePF8Cx2o2kwmYF4+MzAxa6i+6xIQkswOqGO+3x9NaZX8
379 MrZRaFZpLeVTYI9F/djY6DDVVs340nZGmwrDqTCiiqD5luj3OzwpmQCiQhdRYowUYEA3i1WWGwL4
380 GCtSoO4XbIPFeKGU13XPkDf5IdimLpAvi2kVDVQbzOOa4KAXMFlpi/hV8F6IDe0Y2reg3PuNKT3i
381 RYhZqtkQZqSB2Qm0SGtjAw7RDwaM1roESC8HWiPxkoOy0lLTRFG39kvbLZbU9gFKFRvixDZBJmpi
382 Xyq3RE5lW00EJjaqwp/v3EByMSpVZYsEIJ4APaHmVtpGSieV5CALOtNUAzTBiw81GLgC0quyzf6c
383 NlWknzJeCsJ5fup2R4d8CYGN77mu5vnO1UqbfElZ9E6cR6zbHjgsr9ly18fXjZoPeDjPuzlWbFwS
384 pdvPkhntFvkc13qb9094LL5NrA3NIq3r9eNnop9DizWOqCEbyRBFJTHn6Tt3CG1o8a4HevYh0XiJ
385 sR0AVVHuGuMOIfbuQ/OKBkGRC6NJ4u7sbPX8bG/n5sNIOQ6/Y/BX3IwRlTSabtZpYLB85lYtkkgm
386 p1qXK3Du2mnr5INXmT/78KI12n11EFBkJHHp0wJyLe9MvPNUGYsf+170maayRoy2lURGHAIapSpQ
387 krEDuNoJCHNlZYhKpvw4mspVWxqo415n8cD62N9+EfHrAvqQnINStetek7RY2Urv8nxsnGaZfRr/
388 nhXbJ6m/yl1LzYqscDZA9QHLNbdaSTTr+kFg3bC0iYbX/eQy0Bv3h4B50/SGYzKAXkCeOLI3bcAt
389 mj2Z/FM1vQWgDynsRwNvrWnJHlespkrp8+vO1jNaibm+PhqXPPv30YwDZ6jApe3wUjFQobghvW9p
390 7f2zLkGNv8b191cD/3vs9Q833z8t''').splitlines()])
391
392 def easteregged(environ, start_response):
393 def injecting_start_response(status, headers, exc_info=None):
394 headers.append(('X-Powered-By', 'Werkzeug'))
395 return start_response(status, headers, exc_info)
396 if app is not None and environ.get('QUERY_STRING') != 'macgybarchakku':
397 return app(environ, injecting_start_response)
398 injecting_start_response('200 OK', [('Content-Type', 'text/html')])
399 return [(u'''
400 <!DOCTYPE html>
401 <html>
402 <head>
403 <title>About Werkzeug</title>
404 <style type="text/css">
405 body { font: 15px Georgia, serif; text-align: center; }
406 a { color: #333; text-decoration: none; }
407 h1 { font-size: 30px; margin: 20px 0 10px 0; }
408 p { margin: 0 0 30px 0; }
409 pre { font: 11px 'Consolas', 'Monaco', monospace; line-height: 0.95; }
410 </style>
411 </head>
412 <body>
413 <h1><a href="http://werkzeug.pocoo.org/">Werkzeug</a></h1>
414 <p>the Swiss Army knife of Python web development.</p>
415 <pre>%s\n\n\n</pre>
416 </body>
417 </html>''' % gyver).encode('latin1')]
418 return easteregged
+0 -277 werkzeug/_reloader.py
0 import os
1 import sys
2 import time
3 import subprocess
4 import threading
5 from itertools import chain
6
7 from werkzeug._internal import _log
8 from werkzeug._compat import PY2, iteritems, text_type
9
10
11 def _iter_module_files():
12 """Iterates over all relevant Python files: every file loaded as a
13 module, every file in the folders of already loaded modules, and
14 every file reachable through a package.
15 """
16 # The list call is necessary on Python 3 in case the module
17 # dictionary modifies during iteration.
18 for module in list(sys.modules.values()):
19 if module is None:
20 continue
21 filename = getattr(module, '__file__', None)
22 if filename:
23 if os.path.isdir(filename) and \
24 os.path.exists(os.path.join(filename, "__init__.py")):
25 filename = os.path.join(filename, "__init__.py")
26
27 old = None
28 while not os.path.isfile(filename):
29 old = filename
30 filename = os.path.dirname(filename)
31 if filename == old:
32 break
33 else:
34 if filename[-4:] in ('.pyc', '.pyo'):
35 filename = filename[:-1]
36 yield filename
37
38
39 def _find_observable_paths(extra_files=None):
40 """Finds all paths that should be observed."""
41 rv = set(os.path.dirname(os.path.abspath(x))
42 if os.path.isfile(x) else os.path.abspath(x)
43 for x in sys.path)
44
45 for filename in extra_files or ():
46 rv.add(os.path.dirname(os.path.abspath(filename)))
47
48 for module in list(sys.modules.values()):
49 fn = getattr(module, '__file__', None)
50 if fn is None:
51 continue
52 fn = os.path.abspath(fn)
53 rv.add(os.path.dirname(fn))
54
55 return _find_common_roots(rv)
56
57
58 def _get_args_for_reloading():
59 """Returns the executable. This contains a workaround for Windows:
60 if the executable is incorrectly reported as not having the .exe
61 extension, reloading can break.
62 """
63 rv = [sys.executable]
64 py_script = sys.argv[0]
65 if os.name == 'nt' and not os.path.exists(py_script) and \
66 os.path.exists(py_script + '.exe'):
67 py_script += '.exe'
68 if os.path.splitext(rv[0])[1] == '.exe' and os.path.splitext(py_script)[1] == '.exe':
69 rv.pop(0)
70 rv.append(py_script)
71 rv.extend(sys.argv[1:])
72 return rv
73
74
75 def _find_common_roots(paths):
76 """Finds the common roots of the given paths that need monitoring."""
77 paths = [x.split(os.path.sep) for x in paths]
78 root = {}
79 for chunks in sorted(paths, key=len, reverse=True):
80 node = root
81 for chunk in chunks:
82 node = node.setdefault(chunk, {})
83 node.clear()
84
85 rv = set()
86
87 def _walk(node, path):
88 for prefix, child in iteritems(node):
89 _walk(child, path + (prefix,))
90 if not node:
91 rv.add('/'.join(path))
92 _walk(root, ())
93 return rv
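The trie pruning in `_find_common_roots` above works by inserting the longest paths first, then calling `node.clear()` at the end of each path so anything nested below a covered path is discarded; the remaining leaves are the minimal set of roots. Restated standalone with POSIX-style paths (the name is illustrative):

```python
def find_common_roots(paths):
    """Reduce a set of paths to the minimal set of roots covering them."""
    split = [p.split("/") for p in paths]
    root = {}
    # Longest paths first: clear() at each endpoint prunes any deeper
    # entries, so only the shallowest covering paths survive as leaves.
    for chunks in sorted(split, key=len, reverse=True):
        node = root
        for chunk in chunks:
            node = node.setdefault(chunk, {})
        node.clear()

    rv = set()

    def walk(node, path):
        for prefix, child in node.items():
            walk(child, path + (prefix,))
        if not node:  # a leaf in the pruned trie is a common root
            rv.add("/".join(path))

    walk(root, ())
    return rv
```

Here `/a/b/c` is swallowed by `/a/b`, while `/a/d` survives as its own root.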
94
95
96 class ReloaderLoop(object):
97 name = None
98
99 # monkeypatched by testsuite. wrapping with `staticmethod` is required in
100 # case time.sleep has been replaced by a non-c function (e.g. by
101 # `eventlet.monkey_patch`) before we get here
102 _sleep = staticmethod(time.sleep)
103
104 def __init__(self, extra_files=None, interval=1):
105 self.extra_files = set(os.path.abspath(x)
106 for x in extra_files or ())
107 self.interval = interval
108
109 def run(self):
110 pass
111
112 def restart_with_reloader(self):
113 """Spawn a new Python interpreter with the same arguments as this one,
114 but running the reloader thread.
115 """
116 while 1:
117 _log('info', ' * Restarting with %s' % self.name)
118 args = _get_args_for_reloading()
119 new_environ = os.environ.copy()
120 new_environ['WERKZEUG_RUN_MAIN'] = 'true'
121
122 # A weird bug on Windows: sometimes unicode strings end up in the
123 # environment and subprocess.call does not like this; encode them
124 # to latin1 and continue.
125 if os.name == 'nt' and PY2:
126 for key, value in iteritems(new_environ):
127 if isinstance(value, text_type):
128 new_environ[key] = value.encode('iso-8859-1')
129
130 exit_code = subprocess.call(args, env=new_environ,
131 close_fds=False)
132 if exit_code != 3:
133 return exit_code
134
135 def trigger_reload(self, filename):
136 self.log_reload(filename)
137 sys.exit(3)
138
139 def log_reload(self, filename):
140 filename = os.path.abspath(filename)
141 _log('info', ' * Detected change in %r, reloading' % filename)
142
143
144 class StatReloaderLoop(ReloaderLoop):
145 name = 'stat'
146
147 def run(self):
148 mtimes = {}
149 while 1:
150 for filename in chain(_iter_module_files(),
151 self.extra_files):
152 try:
153 mtime = os.stat(filename).st_mtime
154 except OSError:
155 continue
156
157 old_time = mtimes.get(filename)
158 if old_time is None:
159 mtimes[filename] = mtime
160 continue
161 elif mtime > old_time:
162 self.trigger_reload(filename)
163 self._sleep(self.interval)
164
165
166 class WatchdogReloaderLoop(ReloaderLoop):
167
168 def __init__(self, *args, **kwargs):
169 ReloaderLoop.__init__(self, *args, **kwargs)
170 from watchdog.observers import Observer
171 from watchdog.events import FileSystemEventHandler
172 self.observable_paths = set()
173
174 def _check_modification(filename):
175 if filename in self.extra_files:
176 self.trigger_reload(filename)
177 dirname = os.path.dirname(filename)
178 if dirname.startswith(tuple(self.observable_paths)):
179 if filename.endswith(('.pyc', '.pyo', '.py')):
180 self.trigger_reload(filename)
181
182 class _CustomHandler(FileSystemEventHandler):
183
184 def on_created(self, event):
185 _check_modification(event.src_path)
186
187 def on_modified(self, event):
188 _check_modification(event.src_path)
189
190 def on_moved(self, event):
191 _check_modification(event.src_path)
192 _check_modification(event.dest_path)
193
194 def on_deleted(self, event):
195 _check_modification(event.src_path)
196
197 reloader_name = Observer.__name__.lower()
198 if reloader_name.endswith('observer'):
199 reloader_name = reloader_name[:-8]
200 reloader_name += ' reloader'
201
202 self.name = reloader_name
203
204 self.observer_class = Observer
205 self.event_handler = _CustomHandler()
206 self.should_reload = False
207
208 def trigger_reload(self, filename):
209 # This is called inside an event handler, which means throwing
210 # SystemExit has no effect.
211 # https://github.com/gorakhargosh/watchdog/issues/294
212 self.should_reload = True
213 self.log_reload(filename)
214
215 def run(self):
216 watches = {}
217 observer = self.observer_class()
218 observer.start()
219
220 try:
221 while not self.should_reload:
222 to_delete = set(watches)
223 paths = _find_observable_paths(self.extra_files)
224 for path in paths:
225 if path not in watches:
226 try:
227 watches[path] = observer.schedule(
228 self.event_handler, path, recursive=True)
229 except OSError:
230 # Clear this path from the list of watches. We don't want
231 # the same error message showing again in the next
232 # iteration.
233 watches[path] = None
234 to_delete.discard(path)
235 for path in to_delete:
236 watch = watches.pop(path, None)
237 if watch is not None:
238 observer.unschedule(watch)
239 self.observable_paths = paths
240 self._sleep(self.interval)
241 finally:
242 observer.stop()
243 observer.join()
244
245 sys.exit(3)
246
247
248 reloader_loops = {
249 'stat': StatReloaderLoop,
250 'watchdog': WatchdogReloaderLoop,
251 }
252
253 try:
254 __import__('watchdog.observers')
255 except ImportError:
256 reloader_loops['auto'] = reloader_loops['stat']
257 else:
258 reloader_loops['auto'] = reloader_loops['watchdog']
259
260
261 def run_with_reloader(main_func, extra_files=None, interval=1,
262 reloader_type='auto'):
263 """Run the given function in an independent python interpreter."""
264 import signal
265 reloader = reloader_loops[reloader_type](extra_files, interval)
266 signal.signal(signal.SIGTERM, lambda *args: sys.exit(0))
267 try:
268 if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
269 t = threading.Thread(target=main_func, args=())
270 t.setDaemon(True)
271 t.start()
272 reloader.run()
273 else:
274 sys.exit(reloader.restart_with_reloader())
275 except KeyboardInterrupt:
276 pass
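The restart protocol that `run_with_reloader` implements can be sketched standalone: the parent re-executes the script with `WERKZEUG_RUN_MAIN=true` in the environment and keeps restarting the child for as long as it exits with code 3 (the reload signal). `restart_until_done` is a hypothetical name for illustration:

```python
import os
import subprocess
import sys

# Sketch of the parent side of the reloader restart protocol: run the
# child with WERKZEUG_RUN_MAIN=true and loop while it exits with 3.
def restart_until_done(argv):
    env = dict(os.environ, WERKZEUG_RUN_MAIN="true")
    while True:
        exit_code = subprocess.call(argv, env=env)
        if exit_code != 3:
            return exit_code

# The child can tell it is the "main" run by checking the variable:
child = [
    sys.executable,
    "-c",
    "import os, sys;"
    "sys.exit(0 if os.environ.get('WERKZEUG_RUN_MAIN') == 'true' else 1)",
]
assert restart_until_done(child) == 0
```

In the real code the child is this same script; the daemon thread runs the app while the reloader loop watches files and exits with 3 on a change.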
+0
-16
werkzeug/contrib/__init__.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib
3 ~~~~~~~~~~~~~~~~
4
5 Contains user-submitted code that other users may find useful, but which
6 is not part of the Werkzeug core. Anyone can write code for inclusion in
7 the `contrib` package. All modules in this package are distributed as an
8 add-on library and thus are not part of Werkzeug itself.
9
10 This file itself is mostly for informational purposes and to tell the
11 Python interpreter that `contrib` is a package.
12
13 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
14 :license: BSD, see LICENSE for more details.
15 """
+0
-355
werkzeug/contrib/atom.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.atom
3 ~~~~~~~~~~~~~~~~~~~~~
4
5 This module provides a class called :class:`AtomFeed` which can be
6 used to generate feeds in the Atom syndication format (see :rfc:`4287`).
7
8 Example::
9
10 def atom_feed(request):
11 feed = AtomFeed("My Blog", feed_url=request.url,
12 url=request.host_url,
13 subtitle="My example blog for a feed test.")
14 for post in Post.query.limit(10).all():
15 feed.add(post.title, post.body, content_type='html',
16 author=post.author, url=post.url, id=post.uid,
17 updated=post.last_update, published=post.pub_date)
18 return feed.get_response()
19
20 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
21 :license: BSD, see LICENSE for more details.
22 """
23 from datetime import datetime
24
25 from werkzeug.utils import escape
26 from werkzeug.wrappers import BaseResponse
27 from werkzeug._compat import implements_to_string, string_types
28
29
30 XHTML_NAMESPACE = 'http://www.w3.org/1999/xhtml'
31
32
33 def _make_text_block(name, content, content_type=None):
34 """Helper function for the builder that creates an XML text block."""
35 if content_type == 'xhtml':
36 return u'<%s type="xhtml"><div xmlns="%s">%s</div></%s>\n' % \
37 (name, XHTML_NAMESPACE, content, name)
38 if not content_type:
39 return u'<%s>%s</%s>\n' % (name, escape(content), name)
40 return u'<%s type="%s">%s</%s>\n' % (name, content_type,
41 escape(content), name)
42
43
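The `_make_text_block` helper above can be sketched standalone, substituting the stdlib `html.escape` for `werkzeug.utils.escape` (an assumption for illustration; both escape `&`, `<`, `>` and quotes):

```python
from html import escape  # stdlib stand-in for werkzeug.utils.escape

XHTML_NAMESPACE = "http://www.w3.org/1999/xhtml"

# Sketch of _make_text_block: xhtml content is wrapped unescaped in a
# namespaced <div>; everything else is escaped, with an optional type.
def make_text_block(name, content, content_type=None):
    if content_type == "xhtml":
        return '<%s type="xhtml"><div xmlns="%s">%s</div></%s>\n' % (
            name, XHTML_NAMESPACE, content, name)
    if not content_type:
        return "<%s>%s</%s>\n" % (name, escape(content), name)
    return '<%s type="%s">%s</%s>\n' % (
        name, content_type, escape(content), name)

assert make_text_block("title", "a <b> c") == "<title>a &lt;b&gt; c</title>\n"
```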
44 def format_iso8601(obj):
45 """Format a datetime object for iso8601"""
46 iso8601 = obj.isoformat()
47 if obj.tzinfo:
48 return iso8601
49 return iso8601 + 'Z'
50
51
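The behavior of `format_iso8601` above, restated as a self-contained sketch: a naive datetime is assumed to be UTC and gets an explicit trailing "Z", while an aware datetime already carries its offset in `isoformat()` output.

```python
from datetime import datetime, timezone

# Sketch of format_iso8601: append "Z" only for naive datetimes.
def format_iso8601(obj):
    iso8601 = obj.isoformat()
    if obj.tzinfo:
        return iso8601
    return iso8601 + "Z"

assert format_iso8601(datetime(2019, 1, 2, 3, 4, 5)) == "2019-01-02T03:04:05Z"
assert format_iso8601(datetime(2019, 1, 2, tzinfo=timezone.utc)) == "2019-01-02T00:00:00+00:00"
```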
52 @implements_to_string
53 class AtomFeed(object):
54
55 """A helper class that creates Atom feeds.
56
57 :param title: the title of the feed. Required.
58 :param title_type: the type attribute for the title element. One of
59 ``'html'``, ``'text'`` or ``'xhtml'``.
60 :param url: the url for the feed (not the url *of* the feed)
61 :param id: a globally unique id for the feed. Must be a URI. If
62 not present the `feed_url` is used, but one of both is
63 required.
64 :param updated: the time the feed was modified the last time. Must
65 be a :class:`datetime.datetime` object. If not
66 present the latest entry's `updated` is used.
67 Treated as UTC if naive datetime.
68 :param feed_url: the URL to the feed. Should be the URL that was
69 requested.
70 :param author: the author of the feed. Must be either a string (the
71 name) or a dict with name (required) and uri or
72 email (both optional). Can also be a list of
73 strings and dicts (mixed) if there are
74 multiple authors. Required if not every entry has an
75 author element.
76 :param icon: an icon for the feed.
77 :param logo: a logo for the feed.
78 :param rights: copyright information for the feed.
79 :param rights_type: the type attribute for the rights element. One of
80 ``'html'``, ``'text'`` or ``'xhtml'``. Default is
81 ``'text'``.
82 :param subtitle: a short description of the feed.
83 :param subtitle_type: the type attribute for the subtitle element.
84 One of ``'text'``, ``'html'``
85 or ``'xhtml'``. Default is ``'text'``.
86 :param links: additional links. Must be a list of dictionaries with
87 href (required) and rel, type, hreflang, title, length
88 (all optional)
89 :param generator: the software that generated this feed. This must be
90 a tuple in the form ``(name, url, version)``. If
91 you don't want to specify one of them, set the item
92 to `None`.
93 :param entries: a list with the entries for the feed. Entries can also
94 be added later with :meth:`add`.
95
96 For more information on the elements see
97 http://www.atomenabled.org/developers/syndication/
98
99 Wherever a list is demanded, any iterable can be used.
100 """
101
102 default_generator = ('Werkzeug', None, None)
103
104 def __init__(self, title=None, entries=None, **kwargs):
105 self.title = title
106 self.title_type = kwargs.get('title_type', 'text')
107 self.url = kwargs.get('url')
108 self.feed_url = kwargs.get('feed_url', self.url)
109 self.id = kwargs.get('id', self.feed_url)
110 self.updated = kwargs.get('updated')
111 self.author = kwargs.get('author', ())
112 self.icon = kwargs.get('icon')
113 self.logo = kwargs.get('logo')
114 self.rights = kwargs.get('rights')
115 self.rights_type = kwargs.get('rights_type')
116 self.subtitle = kwargs.get('subtitle')
117 self.subtitle_type = kwargs.get('subtitle_type', 'text')
118 self.generator = kwargs.get('generator')
119 if self.generator is None:
120 self.generator = self.default_generator
121 self.links = kwargs.get('links', [])
122 self.entries = entries and list(entries) or []
123
124 if not hasattr(self.author, '__iter__') \
125 or isinstance(self.author, string_types + (dict,)):
126 self.author = [self.author]
127 for i, author in enumerate(self.author):
128 if not isinstance(author, dict):
129 self.author[i] = {'name': author}
130
131 if not self.title:
132 raise ValueError('title is required')
133 if not self.id:
134 raise ValueError('id is required')
135 for author in self.author:
136 if 'name' not in author:
137 raise TypeError('author must contain at least a name')
138
139 def add(self, *args, **kwargs):
140 """Add a new entry to the feed. This function can either be called
141 with a :class:`FeedEntry` or some keyword and positional arguments
142 that are forwarded to the :class:`FeedEntry` constructor.
143 """
144 if len(args) == 1 and not kwargs and isinstance(args[0], FeedEntry):
145 self.entries.append(args[0])
146 else:
147 kwargs['feed_url'] = self.feed_url
148 self.entries.append(FeedEntry(*args, **kwargs))
149
150 def __repr__(self):
151 return '<%s %r (%d entries)>' % (
152 self.__class__.__name__,
153 self.title,
154 len(self.entries)
155 )
156
157 def generate(self):
158 """Return a generator that yields pieces of XML."""
159 # atom demands either an author element in every entry or a global one
160 if not self.author:
161 if any(not e.author for e in self.entries):
162 self.author = ({'name': 'Unknown author'},)
163
164 if not self.updated:
165 dates = sorted([entry.updated for entry in self.entries])
166 self.updated = dates and dates[-1] or datetime.utcnow()
167
168 yield u'<?xml version="1.0" encoding="utf-8"?>\n'
169 yield u'<feed xmlns="http://www.w3.org/2005/Atom">\n'
170 yield ' ' + _make_text_block('title', self.title, self.title_type)
171 yield u' <id>%s</id>\n' % escape(self.id)
172 yield u' <updated>%s</updated>\n' % format_iso8601(self.updated)
173 if self.url:
174 yield u' <link href="%s" />\n' % escape(self.url)
175 if self.feed_url:
176 yield u' <link href="%s" rel="self" />\n' % \
177 escape(self.feed_url)
178 for link in self.links:
179 yield u' <link %s/>\n' % ''.join('%s="%s" ' %
180 (k, escape(link[k])) for k in link)
181 for author in self.author:
182 yield u' <author>\n'
183 yield u' <name>%s</name>\n' % escape(author['name'])
184 if 'uri' in author:
185 yield u' <uri>%s</uri>\n' % escape(author['uri'])
186 if 'email' in author:
187 yield ' <email>%s</email>\n' % escape(author['email'])
188 yield ' </author>\n'
189 if self.subtitle:
190 yield ' ' + _make_text_block('subtitle', self.subtitle,
191 self.subtitle_type)
192 if self.icon:
193 yield u' <icon>%s</icon>\n' % escape(self.icon)
194 if self.logo:
195 yield u' <logo>%s</logo>\n' % escape(self.logo)
196 if self.rights:
197 yield ' ' + _make_text_block('rights', self.rights,
198 self.rights_type)
199 generator_name, generator_url, generator_version = self.generator
200 if generator_name or generator_url or generator_version:
201 tmp = [u' <generator']
202 if generator_url:
203 tmp.append(u' uri="%s"' % escape(generator_url))
204 if generator_version:
205 tmp.append(u' version="%s"' % escape(generator_version))
206 tmp.append(u'>%s</generator>\n' % escape(generator_name))
207 yield u''.join(tmp)
208 for entry in self.entries:
209 for line in entry.generate():
210 yield u' ' + line
211 yield u'</feed>\n'
212
213 def to_string(self):
214 """Convert the feed into a string."""
215 return u''.join(self.generate())
216
217 def get_response(self):
218 """Return a response object for the feed."""
219 return BaseResponse(self.to_string(), mimetype='application/atom+xml')
220
221 def __call__(self, environ, start_response):
222 """Use the class as WSGI response object."""
223 return self.get_response()(environ, start_response)
224
225 def __str__(self):
226 return self.to_string()
227
228
229 @implements_to_string
230 class FeedEntry(object):
231
232 """Represents a single entry in a feed.
233
234 :param title: the title of the entry. Required.
235 :param title_type: the type attribute for the title element. One of
236 ``'html'``, ``'text'`` or ``'xhtml'``.
237 :param content: the content of the entry.
238 :param content_type: the type attribute for the content element. One
239 of ``'html'``, ``'text'`` or ``'xhtml'``.
240 :param summary: a summary of the entry's content.
241 :param summary_type: the type attribute for the summary element. One
242 of ``'html'``, ``'text'`` or ``'xhtml'``.
243 :param url: the url for the entry.
244 :param id: a globally unique id for the entry. Must be a URI. If
245 not present the URL is used, but one of both is required.
246 :param updated: the time the entry was modified the last time. Must
247 be a :class:`datetime.datetime` object. Treated as
248 UTC if naive datetime. Required.
249 :param author: the author of the entry. Must be either a string (the
250 name) or a dict with name (required) and uri or
251 email (both optional). Can also be a list of
252 strings and dicts (mixed) if there are
253 multiple authors. Required if the feed does not have an
254 author element.
255 :param published: the time the entry was initially published. Must
256 be a :class:`datetime.datetime` object. Treated as
257 UTC if naive datetime.
258 :param rights: copyright information for the entry.
259 :param rights_type: the type attribute for the rights element. One of
260 ``'html'``, ``'text'`` or ``'xhtml'``. Default is
261 ``'text'``.
262 :param links: additional links. Must be a list of dictionaries with
263 href (required) and rel, type, hreflang, title, length
264 (all optional)
265 :param categories: categories for the entry. Must be a list of dictionaries
266 with term (required), scheme and label (all optional)
267 :param xml_base: The xml base (url) for this feed item. If not provided
268 it will default to the item url.
269
270 For more information on the elements see
271 http://www.atomenabled.org/developers/syndication/
272
273 Wherever a list is demanded, any iterable can be used.
274 """
275
276 def __init__(self, title=None, content=None, feed_url=None, **kwargs):
277 self.title = title
278 self.title_type = kwargs.get('title_type', 'text')
279 self.content = content
280 self.content_type = kwargs.get('content_type', 'html')
281 self.url = kwargs.get('url')
282 self.id = kwargs.get('id', self.url)
283 self.updated = kwargs.get('updated')
284 self.summary = kwargs.get('summary')
285 self.summary_type = kwargs.get('summary_type', 'html')
286 self.author = kwargs.get('author', ())
287 self.published = kwargs.get('published')
288 self.rights = kwargs.get('rights')
289 self.links = kwargs.get('links', [])
290 self.categories = kwargs.get('categories', [])
291 self.xml_base = kwargs.get('xml_base', feed_url)
292
293 if not hasattr(self.author, '__iter__') \
294 or isinstance(self.author, string_types + (dict,)):
295 self.author = [self.author]
296 for i, author in enumerate(self.author):
297 if not isinstance(author, dict):
298 self.author[i] = {'name': author}
299
300 if not self.title:
301 raise ValueError('title is required')
302 if not self.id:
303 raise ValueError('id is required')
304 if not self.updated:
305 raise ValueError('updated is required')
306
307 def __repr__(self):
308 return '<%s %r>' % (
309 self.__class__.__name__,
310 self.title
311 )
312
313 def generate(self):
314 """Yields pieces of ATOM XML."""
315 base = ''
316 if self.xml_base:
317 base = ' xml:base="%s"' % escape(self.xml_base)
318 yield u'<entry%s>\n' % base
319 yield u' ' + _make_text_block('title', self.title, self.title_type)
320 yield u' <id>%s</id>\n' % escape(self.id)
321 yield u' <updated>%s</updated>\n' % format_iso8601(self.updated)
322 if self.published:
323 yield u' <published>%s</published>\n' % \
324 format_iso8601(self.published)
325 if self.url:
326 yield u' <link href="%s" />\n' % escape(self.url)
327 for author in self.author:
328 yield u' <author>\n'
329 yield u' <name>%s</name>\n' % escape(author['name'])
330 if 'uri' in author:
331 yield u' <uri>%s</uri>\n' % escape(author['uri'])
332 if 'email' in author:
333 yield u' <email>%s</email>\n' % escape(author['email'])
334 yield u' </author>\n'
335 for link in self.links:
336 yield u' <link %s/>\n' % ''.join('%s="%s" ' %
337 (k, escape(link[k])) for k in link)
338 for category in self.categories:
339 yield u' <category %s/>\n' % ''.join('%s="%s" ' %
340 (k, escape(category[k])) for k in category)
341 if self.summary:
342 yield u' ' + _make_text_block('summary', self.summary,
343 self.summary_type)
344 if self.content:
345 yield u' ' + _make_text_block('content', self.content,
346 self.content_type)
347 yield u'</entry>\n'
348
349 def to_string(self):
350 """Convert the feed item into a unicode object."""
351 return u''.join(self.generate())
352
353 def __str__(self):
354 return self.to_string()
+0
-913
werkzeug/contrib/cache.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.cache
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 The main problem with dynamic Web sites is, well, they're dynamic. Each
6 time a user requests a page, the web server executes a lot of code, queries
7 the database, and renders templates before the visitor sees the page.
8
9 This is a lot more expensive than just loading a file from the file system
10 and sending it to the visitor.
11
12 For most Web applications this overhead isn't a big deal, but once it
13 is, you will be glad to have a cache system in place.
14
15 How Caching Works
16 =================
17
18 Caching is pretty simple. Basically you have a cache object lurking around
19 somewhere that is connected to a remote cache or the file system or
20 something else. When the request comes in you check if the current page
21 is already in the cache and, if so, you return it from the cache.
22 Otherwise you generate the page and put it into the cache. (Or a fragment
23 of the page; you don't have to cache the full thing.)
24
25 Here is a simple example of how to cache a sidebar for 5 minutes::
26
27 def get_sidebar(user):
28 identifier = 'sidebar_for/user%d' % user.id
29 value = cache.get(identifier)
30 if value is not None:
31 return value
32 value = generate_sidebar_for(user=user)
33 cache.set(identifier, value, timeout=60 * 5)
34 return value
35
36 Creating a Cache Object
37 =======================
38
39 To create a cache object you just import the cache system of your choice
40 from the cache module and instantiate it. Then you can start working
41 with that object:
42
43 >>> from werkzeug.contrib.cache import SimpleCache
44 >>> c = SimpleCache()
45 >>> c.set("foo", "value")
46 >>> c.get("foo")
47 'value'
48 >>> c.get("missing") is None
49 True
50
51 Please keep in mind that you have to create the cache and put it somewhere
52 you have access to it (either as a module global you can import or you just
53 put it into your WSGI application).
54
55 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
56 :license: BSD, see LICENSE for more details.
57 """
58 import os
59 import re
60 import errno
61 import tempfile
62 import platform
63 from hashlib import md5
64 from time import time
65 try:
66 import cPickle as pickle
67 except ImportError: # pragma: no cover
68 import pickle
69
70 from werkzeug._compat import iteritems, string_types, text_type, \
71 integer_types, to_native
72 from werkzeug.posixemulation import rename
73
74
75 def _items(mappingorseq):
76 """Wrapper for efficient iteration over mappings represented by dicts
77 or sequences::
78
79 >>> for k, v in _items((i, i*i) for i in xrange(5)):
80 ... assert k*k == v
81
82 >>> for k, v in _items(dict((i, i*i) for i in xrange(5))):
83 ... assert k*k == v
84
85 """
86 if hasattr(mappingorseq, 'items'):
87 return iteritems(mappingorseq)
88 return mappingorseq
89
90
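The `_items` helper above can be restated as a Python 3 sketch (on Python 3, `iteritems` is just `dict.items`): it lets callers pass either a mapping or a sequence of `(key, value)` pairs and iterates both the same way.

```python
# Sketch of _items: normalize a dict or a pair-sequence to pair
# iteration, so set_many() can accept either form.
def items(mapping_or_seq):
    if hasattr(mapping_or_seq, "items"):
        return iter(mapping_or_seq.items())
    return iter(mapping_or_seq)

assert dict(items({"a": 1, "b": 2})) == {"a": 1, "b": 2}
assert dict(items([("a", 1), ("b", 2)])) == {"a": 1, "b": 2}
```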
91 class BaseCache(object):
92
93 """Baseclass for the cache systems. All the cache systems implement this
94 API or a superset of it.
95
96 :param default_timeout: the default timeout (in seconds) that is used if
97 no timeout is specified on :meth:`set`. A timeout
98 of 0 indicates that the cache never expires.
99 """
100
101 def __init__(self, default_timeout=300):
102 self.default_timeout = default_timeout
103
104 def _normalize_timeout(self, timeout):
105 if timeout is None:
106 timeout = self.default_timeout
107 return timeout
108
109 def get(self, key):
110 """Look up key in the cache and return the value for it.
111
112 :param key: the key to be looked up.
113 :returns: The value if it exists and is readable, else ``None``.
114 """
115 return None
116
117 def delete(self, key):
118 """Delete `key` from the cache.
119
120 :param key: the key to delete.
121 :returns: Whether the key existed and has been deleted.
122 :rtype: boolean
123 """
124 return True
125
126 def get_many(self, *keys):
127 """Returns a list of values for the given keys.
128 For each key an item in the list is created::
129
130 foo, bar = cache.get_many("foo", "bar")
131
132 Has the same error handling as :meth:`get`.
133
134 :param keys: The function accepts multiple keys as positional
135 arguments.
136 """
137 return [self.get(k) for k in keys]
138
139 def get_dict(self, *keys):
140 """Like :meth:`get_many` but return a dict::
141
142 d = cache.get_dict("foo", "bar")
143 foo = d["foo"]
144 bar = d["bar"]
145
146 :param keys: The function accepts multiple keys as positional
147 arguments.
148 """
149 return dict(zip(keys, self.get_many(*keys)))
150
151 def set(self, key, value, timeout=None):
152 """Add a new key/value to the cache (overwrites value, if key already
153 exists in the cache).
154
155 :param key: the key to set
156 :param value: the value for the key
157 :param timeout: the cache timeout for the key in seconds (if not
158 specified, it uses the default timeout). A timeout of
159 0 indicates that the cache never expires.
160 :returns: ``True`` if key has been updated, ``False`` for backend
161 errors. Pickling errors, however, will raise a subclass of
162 ``pickle.PickleError``.
163 :rtype: boolean
164 """
165 return True
166
167 def add(self, key, value, timeout=None):
168 """Works like :meth:`set` but does not overwrite the values of already
169 existing keys.
170
171 :param key: the key to set
172 :param value: the value for the key
173 :param timeout: the cache timeout for the key in seconds (if not
174 specified, it uses the default timeout). A timeout of
175 0 indicates that the cache never expires.
176 :returns: Same as :meth:`set`, but also ``False`` for already
177 existing keys.
178 :rtype: boolean
179 """
180 return True
181
182 def set_many(self, mapping, timeout=None):
183 """Sets multiple keys and values from a mapping.
184
185 :param mapping: a mapping with the keys/values to set.
186 :param timeout: the cache timeout for the key in seconds (if not
187 specified, it uses the default timeout). A timeout of
188 0 indicates that the cache never expires.
189 :returns: Whether all given keys have been set.
190 :rtype: boolean
191 """
192 rv = True
193 for key, value in _items(mapping):
194 if not self.set(key, value, timeout):
195 rv = False
196 return rv
197
198 def delete_many(self, *keys):
199 """Deletes multiple keys at once.
200
201 :param keys: The function accepts multiple keys as positional
202 arguments.
203 :returns: Whether all given keys have been deleted.
204 :rtype: boolean
205 """
206 return all(self.delete(key) for key in keys)
207
208 def has(self, key):
209 """Checks if a key exists in the cache without returning it. This is a
210 cheap operation that bypasses loading the actual data on the backend.
211
212 This method is optional and may not be implemented on all caches.
213
214 :param key: the key to check
215 """
216 raise NotImplementedError(
217 '%s doesn\'t have an efficient implementation of `has`. That '
218 'means it is impossible to check whether a key exists without '
219 'fully loading the key\'s data. Consider using `self.get` '
220 'explicitly if you don\'t care about performance.'
221 )
222
223 def clear(self):
224 """Clears the cache. Keep in mind that not all caches support
225 completely clearing the cache.
226
227 :returns: Whether the cache has been cleared.
228 :rtype: boolean
229 """
230 return True
231
232 def inc(self, key, delta=1):
233 """Increments the value of a key by `delta`. If the key does
234 not yet exist it is initialized with `delta`.
235
236 For supporting caches this is an atomic operation.
237
238 :param key: the key to increment.
239 :param delta: the delta to add.
240 :returns: The new value or ``None`` for backend errors.
241 """
242 value = (self.get(key) or 0) + delta
243 return value if self.set(key, value) else None
244
245 def dec(self, key, delta=1):
246 """Decrements the value of a key by `delta`. If the key does
247 not yet exist it is initialized with `-delta`.
248
249 For supporting caches this is an atomic operation.
250
251 :param key: the key to increment.
252 :param delta: the delta to subtract.
253 :returns: The new value or `None` for backend errors.
254 """
255 value = (self.get(key) or 0) - delta
256 return value if self.set(key, value) else None
257
258
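The default `inc`/`dec` above are a plain read-modify-write, which is why the docstrings only promise atomicity "for supporting caches" (backends like Redis or memcached override them with atomic server-side operations). A sketch backed by a plain dict instead of a cache backend:

```python
# Sketch of BaseCache's default inc/dec semantics over a plain dict.
# Note this read-modify-write is NOT atomic under concurrency.
store = {}

def inc(key, delta=1):
    value = (store.get(key) or 0) + delta
    store[key] = value
    return value

def dec(key, delta=1):
    value = (store.get(key) or 0) - delta
    store[key] = value
    return value

assert inc("hits") == 1        # missing key initialized with delta
assert inc("hits", 4) == 5
assert dec("hits") == 4
assert dec("misses") == -1     # missing key initialized with -delta
```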
259 class NullCache(BaseCache):
260
261 """A cache that doesn't cache. This can be useful for unit testing.
262
263 :param default_timeout: a dummy parameter that is ignored but exists
264 for API compatibility with other caches.
265 """
266
267 def has(self, key):
268 return False
269
270
271 class SimpleCache(BaseCache):
272
273 """Simple memory cache for single process environments. This class exists
274 mainly for the development server and is not 100% thread safe. For
275 simplicity it uses atomic operations where possible and avoids locks,
276 so under heavy load keys may be added multiple times.
277
278 :param threshold: the maximum number of items the cache stores before
279 it starts deleting some.
280 :param default_timeout: the default timeout that is used if no timeout is
281 specified on :meth:`~BaseCache.set`. A timeout of
282 0 indicates that the cache never expires.
283 """
284
285 def __init__(self, threshold=500, default_timeout=300):
286 BaseCache.__init__(self, default_timeout)
287 self._cache = {}
288 self.clear = self._cache.clear
289 self._threshold = threshold
290
291 def _prune(self):
292 if len(self._cache) > self._threshold:
293 now = time()
294 toremove = []
295 for idx, (key, (expires, _)) in enumerate(self._cache.items()):
296 if (expires != 0 and expires <= now) or idx % 3 == 0:
297 toremove.append(key)
298 for key in toremove:
299 self._cache.pop(key, None)
300
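The `_prune` method above uses a deliberately crude heuristic: once the cache grows past the threshold, it drops every expired entry plus every third entry regardless of expiry, so the cache shrinks even when nothing has expired yet. A standalone sketch over a plain dict of `(expires, value)` pairs, where `expires == 0` means "never":

```python
from time import time

# Sketch of SimpleCache._prune: drop expired entries and every third
# entry once the cache exceeds the threshold.
def prune(cache, threshold):
    if len(cache) > threshold:
        now = time()
        toremove = [
            key
            for idx, (key, (expires, _)) in enumerate(cache.items())
            if (expires != 0 and expires <= now) or idx % 3 == 0
        ]
        for key in toremove:
            cache.pop(key, None)

cache = {i: (0, None) for i in range(6)}  # six never-expiring entries
prune(cache, 3)
assert sorted(cache) == [1, 2, 4, 5]      # indices 0 and 3 were dropped
```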
301 def _normalize_timeout(self, timeout):
302 timeout = BaseCache._normalize_timeout(self, timeout)
303 if timeout > 0:
304 timeout = time() + timeout
305 return timeout
306
307 def get(self, key):
308 try:
309 expires, value = self._cache[key]
310 if expires == 0 or expires > time():
311 return pickle.loads(value)
312 except (KeyError, pickle.PickleError):
313 return None
314
315 def set(self, key, value, timeout=None):
316 expires = self._normalize_timeout(timeout)
317 self._prune()
318 self._cache[key] = (expires, pickle.dumps(value,
319 pickle.HIGHEST_PROTOCOL))
320 return True
321
322 def add(self, key, value, timeout=None):
323 expires = self._normalize_timeout(timeout)
324 self._prune()
325 item = (expires, pickle.dumps(value,
326 pickle.HIGHEST_PROTOCOL))
327 if key in self._cache:
328 return False
329 self._cache.setdefault(key, item)
330 return True
331
332 def delete(self, key):
333 return self._cache.pop(key, None) is not None
334
335 def has(self, key):
336 try:
337 expires, value = self._cache[key]
338 return expires == 0 or expires > time()
339 except KeyError:
340 return False
341
342 _test_memcached_key = re.compile(r'[^\x00-\x21\xff]{1,250}$').match
343
344
345 class MemcachedCache(BaseCache):
346
347 """A cache that uses memcached as backend.
348
349 The first argument can either be an object that resembles the API of a
350 :class:`memcache.Client` or a tuple/list of server addresses. In the
351 event that a tuple/list is passed, Werkzeug tries to import the best
352 available memcache library.
353
354 This cache looks into the following packages/modules to find bindings for
355 memcached:
356
357 - ``pylibmc``
358 - ``google.appengine.api.memcache``
359 - ``memcache``
360 - ``libmc``
361
362 Implementation notes: This cache backend works around some limitations in
363 memcached to simplify the interface. For example unicode keys are encoded
364 to utf-8 on the fly. Methods such as :meth:`~BaseCache.get_dict` return
365 the keys in the same format as passed. Furthermore all get methods
366 silently ignore key errors to not cause problems when untrusted user data
367 is passed to the get methods which is often the case in web applications.
368
369 :param servers: a list or tuple of server addresses or alternatively
370 a :class:`memcache.Client` or a compatible client.
371 :param default_timeout: the default timeout that is used if no timeout is
372 specified on :meth:`~BaseCache.set`. A timeout of
373 0 indicates that the cache never expires.
374 :param key_prefix: a prefix that is added before all keys. This makes it
375 possible to use the same memcached server for different
376 applications. Keep in mind that
377 :meth:`~BaseCache.clear` will also clear keys with a
378 different prefix.
379 """
380
381 def __init__(self, servers=None, default_timeout=300, key_prefix=None):
382 BaseCache.__init__(self, default_timeout)
383 if servers is None or isinstance(servers, (list, tuple)):
384 if servers is None:
385 servers = ['127.0.0.1:11211']
386 self._client = self.import_preferred_memcache_lib(servers)
387 if self._client is None:
388 raise RuntimeError('no memcache module found')
389 else:
390 # NOTE: servers is actually an already initialized memcache
391 # client.
392 self._client = servers
393
394 self.key_prefix = to_native(key_prefix)
395
396 def _normalize_key(self, key):
397 key = to_native(key, 'utf-8')
398 if self.key_prefix:
399 key = self.key_prefix + key
400 return key
401
402 def _normalize_timeout(self, timeout):
403 timeout = BaseCache._normalize_timeout(self, timeout)
404 if timeout > 0:
405 timeout = int(time()) + timeout
406 return timeout
407
408 def get(self, key):
409 key = self._normalize_key(key)
410 # memcached doesn't support keys longer than that. Because such
411 # over-long keys often come straight from untrusted user-submitted
412 # data, we fail silently when getting.
413 if _test_memcached_key(key):
414 return self._client.get(key)
415
416 def get_dict(self, *keys):
417 key_mapping = {}
418 have_encoded_keys = False
419 for key in keys:
420 encoded_key = self._normalize_key(key)
421 if not isinstance(key, str):
422 have_encoded_keys = True
423 if _test_memcached_key(key):
424 key_mapping[encoded_key] = key
425 _keys = list(key_mapping)
426 d = rv = self._client.get_multi(_keys)
427 if have_encoded_keys or self.key_prefix:
428 rv = {}
429 for key, value in iteritems(d):
430 rv[key_mapping[key]] = value
431 if len(rv) < len(keys):
432 for key in keys:
433 if key not in rv:
434 rv[key] = None
435 return rv
436
437 def add(self, key, value, timeout=None):
438 key = self._normalize_key(key)
439 timeout = self._normalize_timeout(timeout)
440 return self._client.add(key, value, timeout)
441
442 def set(self, key, value, timeout=None):
443 key = self._normalize_key(key)
444 timeout = self._normalize_timeout(timeout)
445 return self._client.set(key, value, timeout)
446
447 def get_many(self, *keys):
448 d = self.get_dict(*keys)
449 return [d[key] for key in keys]
450
451 def set_many(self, mapping, timeout=None):
452 new_mapping = {}
453 for key, value in _items(mapping):
454 key = self._normalize_key(key)
455 new_mapping[key] = value
456
457 timeout = self._normalize_timeout(timeout)
458 failed_keys = self._client.set_multi(new_mapping, timeout)
459 return not failed_keys
460
461 def delete(self, key):
462 key = self._normalize_key(key)
463 if _test_memcached_key(key):
464 return self._client.delete(key)
465
466 def delete_many(self, *keys):
467 new_keys = []
468 for key in keys:
469 key = self._normalize_key(key)
470 if _test_memcached_key(key):
471 new_keys.append(key)
472 return self._client.delete_multi(new_keys)
473
474 def has(self, key):
475 key = self._normalize_key(key)
476 if _test_memcached_key(key):
477 return self._client.append(key, '')
478 return False
479
480 def clear(self):
481 return self._client.flush_all()
482
483 def inc(self, key, delta=1):
484 key = self._normalize_key(key)
485 return self._client.incr(key, delta)
486
487 def dec(self, key, delta=1):
488 key = self._normalize_key(key)
489 return self._client.decr(key, delta)
490
491 def import_preferred_memcache_lib(self, servers):
492 """Returns an initialized memcache client. Used by the constructor."""
493 try:
494 import pylibmc
495 except ImportError:
496 pass
497 else:
498 return pylibmc.Client(servers)
499
500 try:
501 from google.appengine.api import memcache
502 except ImportError:
503 pass
504 else:
505 return memcache.Client()
506
507 try:
508 import memcache
509 except ImportError:
510 pass
511 else:
512 return memcache.Client(servers)
513
514 try:
515 import libmc
516 except ImportError:
517 pass
518 else:
519 return libmc.Client(servers)
520
521
522 # backwards compatibility
523 GAEMemcachedCache = MemcachedCache
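The key bookkeeping in ``get_dict`` above (the top of this hunk) is easy to miss: the client is queried with prefixed, normalized keys, and the results are mapped back to the keys the caller asked for. A minimal sketch of that reverse mapping, using a plain dict in place of a memcached client and a hypothetical `get_dict_sketch` helper name:

```python
def get_dict_sketch(store, keys, prefix=""):
    # A simplified sketch, not the library code: look up prefixed keys in
    # `store`, then map each stored key back to the caller's original key,
    # filling in None for keys that were not found.
    key_mapping = {prefix + key: key for key in keys}
    rv = {}
    for stored_key, value in store.items():
        if stored_key in key_mapping:
            rv[key_mapping[stored_key]] = value
    for key in keys:
        rv.setdefault(key, None)
    return rv
```

The real implementation additionally URL-encodes keys that are not valid memcached keys; the mapping-back step is the same idea.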
524
525
526 class RedisCache(BaseCache):
527
528 """Uses the Redis key-value store as a cache backend.
529
530 The first argument can be either a string denoting the address of the
531 Redis server or an object resembling an instance of the redis.Redis class.
532
533 Note: the Python Redis API already takes care of encoding unicode strings
534 on the fly.
535
536 .. versionadded:: 0.7
537
538 .. versionadded:: 0.8
539 `key_prefix` was added.
540
541 .. versionchanged:: 0.8
542 This cache backend now properly serializes objects.
543
544 .. versionchanged:: 0.8.3
545 This cache backend now supports password authentication.
546
547 .. versionchanged:: 0.10
548 ``**kwargs`` is now passed to the redis object.
549
550 :param host: address of the Redis server or an object whose API is
551 compatible with the official Python Redis client (redis-py).
552 :param port: port number on which the Redis server listens for connections.
553 :param password: password used to authenticate with the Redis server.
554 :param db: db (zero-based numeric index) on the Redis server to connect to.
555 :param default_timeout: the default timeout that is used if no timeout is
556 specified on :meth:`~BaseCache.set`. A timeout of
557 0 indicates that the cache never expires.
558 :param key_prefix: A prefix that should be added to all keys.
559
560 Any additional keyword arguments will be passed to ``redis.Redis``.
561 """
562
563 def __init__(self, host='localhost', port=6379, password=None,
564 db=0, default_timeout=300, key_prefix=None, **kwargs):
565 BaseCache.__init__(self, default_timeout)
566 if host is None:
567 raise ValueError('RedisCache host parameter may not be None')
568 if isinstance(host, string_types):
569 try:
570 import redis
571 except ImportError:
572 raise RuntimeError('no redis module found')
573 if kwargs.get('decode_responses', None):
574 raise ValueError('decode_responses is not supported by '
575 'RedisCache.')
576 self._client = redis.Redis(host=host, port=port, password=password,
577 db=db, **kwargs)
578 else:
579 self._client = host
580 self.key_prefix = key_prefix or ''
581
582 def _normalize_timeout(self, timeout):
583 timeout = BaseCache._normalize_timeout(self, timeout)
584 if timeout == 0:
585 timeout = -1
586 return timeout
587
588 def dump_object(self, value):
589 """Dumps an object into a string for redis. By default it serializes
590 integers as regular strings and pickles everything else.
591 """
592 t = type(value)
593 if t in integer_types:
594 return str(value).encode('ascii')
595 return b'!' + pickle.dumps(value)
596
597 def load_object(self, value):
598 """The reversal of :meth:`dump_object`. This might be called with
599 None.
600 """
601 if value is None:
602 return None
603 if value.startswith(b'!'):
604 try:
605 return pickle.loads(value[1:])
606 except pickle.PickleError:
607 return None
608 try:
609 return int(value)
610 except ValueError:
611 # before 0.8 we did not have serialization. Still support that.
612 return value
613
614 def get(self, key):
615 return self.load_object(self._client.get(self.key_prefix + key))
616
617 def get_many(self, *keys):
618 if self.key_prefix:
619 keys = [self.key_prefix + key for key in keys]
620 return [self.load_object(x) for x in self._client.mget(keys)]
621
622 def set(self, key, value, timeout=None):
623 timeout = self._normalize_timeout(timeout)
624 dump = self.dump_object(value)
625 if timeout == -1:
626 result = self._client.set(name=self.key_prefix + key,
627 value=dump)
628 else:
629 result = self._client.setex(name=self.key_prefix + key,
630 value=dump, time=timeout)
631 return result
632
633 def add(self, key, value, timeout=None):
634 timeout = self._normalize_timeout(timeout)
635 dump = self.dump_object(value)
636 return (
637 self._client.setnx(name=self.key_prefix + key, value=dump) and
638 self._client.expire(name=self.key_prefix + key, time=timeout)
639 )
640
641 def set_many(self, mapping, timeout=None):
642 timeout = self._normalize_timeout(timeout)
643 # Use transaction=False to batch without calling redis MULTI
644 # which is not supported by twemproxy
645 pipe = self._client.pipeline(transaction=False)
646
647 for key, value in _items(mapping):
648 dump = self.dump_object(value)
649 if timeout == -1:
650 pipe.set(name=self.key_prefix + key, value=dump)
651 else:
652 pipe.setex(name=self.key_prefix + key, value=dump,
653 time=timeout)
654 return pipe.execute()
655
656 def delete(self, key):
657 return self._client.delete(self.key_prefix + key)
658
659 def delete_many(self, *keys):
660 if not keys:
661 return
662 if self.key_prefix:
663 keys = [self.key_prefix + key for key in keys]
664 return self._client.delete(*keys)
665
666 def has(self, key):
667 return self._client.exists(self.key_prefix + key)
668
669 def clear(self):
670 status = False
671 if self.key_prefix:
672 keys = self._client.keys(self.key_prefix + '*')
673 if keys:
674 status = self._client.delete(*keys)
675 else:
676 status = self._client.flushdb()
677 return status
678
679 def inc(self, key, delta=1):
680 return self._client.incr(name=self.key_prefix + key, amount=delta)
681
682 def dec(self, key, delta=1):
683 return self._client.decr(name=self.key_prefix + key, amount=delta)
684
685
686 class FileSystemCache(BaseCache):
687
688 """A cache that stores the items on the file system. This cache depends
689 on being the only user of the `cache_dir`. Make absolutely sure that
690 nothing else stores files there, otherwise the cache may randomly delete
691 files from it during pruning.
692
693 :param cache_dir: the directory where cache files are stored.
694 :param threshold: the maximum number of items the cache stores before
695 it starts deleting some. A threshold value of 0
696 indicates no threshold.
697 :param default_timeout: the default timeout that is used if no timeout is
698 specified on :meth:`~BaseCache.set`. A timeout of
699 0 indicates that the cache never expires.
700 :param mode: the file mode wanted for the cache files, default 0600
701 """
702
703 #: used for temporary files by the FileSystemCache
704 _fs_transaction_suffix = '.__wz_cache'
705 #: cache element used to store the current number of files
706 _fs_count_file = '__wz_cache_count'
707
708 def __init__(self, cache_dir, threshold=500, default_timeout=300,
709 mode=0o600):
710 BaseCache.__init__(self, default_timeout)
711 self._path = cache_dir
712 self._threshold = threshold
713 self._mode = mode
714
715 try:
716 os.makedirs(self._path)
717 except OSError as ex:
718 if ex.errno != errno.EEXIST:
719 raise
720
721 self._update_count(value=len(self._list_dir()))
722
723 @property
724 def _file_count(self):
725 return self.get(self._fs_count_file) or 0
726
727 def _update_count(self, delta=None, value=None):
728 # If we have no threshold, don't count files
729 if self._threshold == 0:
730 return
731
732 if delta:
733 new_count = self._file_count + delta
734 else:
735 new_count = value or 0
736 self.set(self._fs_count_file, new_count, mgmt_element=True)
737
738 def _normalize_timeout(self, timeout):
739 timeout = BaseCache._normalize_timeout(self, timeout)
740 if timeout != 0:
741 timeout = time() + timeout
742 return int(timeout)
743
744 def _list_dir(self):
745 """return a list of (fully qualified) cache filenames
746 """
747 mgmt_files = [os.path.basename(self._get_filename(name))
748 for name in (self._fs_count_file,)]
749 return [os.path.join(self._path, fn) for fn in os.listdir(self._path)
750 if not fn.endswith(self._fs_transaction_suffix)
751 and fn not in mgmt_files]
752
753 def _prune(self):
754 if self._threshold == 0 or not self._file_count > self._threshold:
755 return
756
757 entries = self._list_dir()
758 now = time()
759 for idx, fname in enumerate(entries):
760 try:
761 remove = False
762 with open(fname, 'rb') as f:
763 expires = pickle.load(f)
764 remove = (expires != 0 and expires <= now) or idx % 3 == 0
765
766 if remove:
767 os.remove(fname)
768 except (IOError, OSError):
769 pass
770 self._update_count(value=len(self._list_dir()))
771
772 def clear(self):
773 for fname in self._list_dir():
774 try:
775 os.remove(fname)
776 except (IOError, OSError):
777 self._update_count(value=len(self._list_dir()))
778 return False
779 self._update_count(value=0)
780 return True
781
782 def _get_filename(self, key):
783 if isinstance(key, text_type):
784 key = key.encode('utf-8') # XXX unicode review
785 hash = md5(key).hexdigest()
786 return os.path.join(self._path, hash)
787
788 def get(self, key):
789 filename = self._get_filename(key)
790 try:
791 with open(filename, 'rb') as f:
792 pickle_time = pickle.load(f)
793 if pickle_time == 0 or pickle_time >= time():
794 return pickle.load(f)
795 else:
796 os.remove(filename)
797 return None
798 except (IOError, OSError, pickle.PickleError):
799 return None
800
801 def add(self, key, value, timeout=None):
802 filename = self._get_filename(key)
803 if not os.path.exists(filename):
804 return self.set(key, value, timeout)
805 return False
806
807 def set(self, key, value, timeout=None, mgmt_element=False):
808 # Management elements have no timeout
809 if mgmt_element:
810 timeout = 0
811
812 # Don't prune on management element update, to avoid loop
813 else:
814 self._prune()
815
816 timeout = self._normalize_timeout(timeout)
817 filename = self._get_filename(key)
818 try:
819 fd, tmp = tempfile.mkstemp(suffix=self._fs_transaction_suffix,
820 dir=self._path)
821 with os.fdopen(fd, 'wb') as f:
822 pickle.dump(timeout, f, 1)
823 pickle.dump(value, f, pickle.HIGHEST_PROTOCOL)
824 rename(tmp, filename)
825 os.chmod(filename, self._mode)
826 except (IOError, OSError):
827 return False
828 else:
829 # Management elements should not count towards threshold
830 if not mgmt_element:
831 self._update_count(delta=1)
832 return True
833
834 def delete(self, key, mgmt_element=False):
835 try:
836 os.remove(self._get_filename(key))
837 except (IOError, OSError):
838 return False
839 else:
840 # Management elements should not count towards threshold
841 if not mgmt_element:
842 self._update_count(delta=-1)
843 return True
844
845 def has(self, key):
846 filename = self._get_filename(key)
847 try:
848 with open(filename, 'rb') as f:
849 pickle_time = pickle.load(f)
850 if pickle_time == 0 or pickle_time >= time():
851 return True
852 else:
853 os.remove(filename)
854 return False
855 except (IOError, OSError, pickle.PickleError):
856 return False
857
858
859 class UWSGICache(BaseCache):
860 """Implements the cache using uWSGI's caching framework.
861
862 .. note::
863 This class cannot be used when running under PyPy, because the uWSGI
864 API implementation for PyPy is lacking the needed functionality.
865
866 :param default_timeout: The default timeout in seconds.
867 :param cache: The name of the caching instance to connect to, for
868 example ``mycache@localhost:3031``. Defaults to an empty string,
869 which means uWSGI will cache in the local instance. If the cache is
870 in the same instance as the werkzeug app, you only have to provide
871 the name of the cache.
872 """
873 def __init__(self, default_timeout=300, cache=''):
874 BaseCache.__init__(self, default_timeout)
875
876 if platform.python_implementation() == 'PyPy':
877 raise RuntimeError("uWSGI caching does not work under PyPy, see "
878 "the docs for more details.")
879
880 try:
881 import uwsgi
882 self._uwsgi = uwsgi
883 except ImportError:
884 raise RuntimeError("uWSGI could not be imported, are you "
885 "running under uWSGI?")
886
887 self.cache = cache
888
889 def get(self, key):
890 rv = self._uwsgi.cache_get(key, self.cache)
891 if rv is None:
892 return
893 return pickle.loads(rv)
894
895 def delete(self, key):
896 return self._uwsgi.cache_del(key, self.cache)
897
898 def set(self, key, value, timeout=None):
899 return self._uwsgi.cache_update(key, pickle.dumps(value),
900 self._normalize_timeout(timeout),
901 self.cache)
902
903 def add(self, key, value, timeout=None):
904 return self._uwsgi.cache_set(key, pickle.dumps(value),
905 self._normalize_timeout(timeout),
906 self.cache)
907
908 def clear(self):
909 return self._uwsgi.cache_clear(self.cache)
910
911 def has(self, key):
912 return self._uwsgi.cache_exists(key, self.cache) is not None
+0
-254
werkzeug/contrib/fixers.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.fixers
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 .. versionadded:: 0.5
6
7 This module includes various helpers that fix bugs in web servers. They may
8 be necessary for some versions of a buggy web server but not others. We try
9 to stay up to date with the status of the bugs as well as possible, but you
10 should verify that they fix the problem you encounter.
11
12 If you notice bugs in web servers not fixed in this module, consider
13 contributing a patch.
14
15 :copyright: Copyright 2009 by the Werkzeug Team, see AUTHORS for more details.
16 :license: BSD, see LICENSE for more details.
17 """
18 try:
19 from urllib import unquote
20 except ImportError:
21 from urllib.parse import unquote
22
23 from werkzeug.http import parse_options_header, parse_cache_control_header, \
24 parse_set_header
25 from werkzeug.useragents import UserAgent
26 from werkzeug.datastructures import Headers, ResponseCacheControl
27
28
29 class CGIRootFix(object):
30
31 """Wrap the application in this middleware if you are using FastCGI or CGI
32 and you have problems with your app root being set to the CGI script's path
33 instead of the path users are visiting.
34
35 .. versionchanged:: 0.9
36 Added `app_root` parameter and renamed from `LighttpdCGIRootFix`.
37
38 :param app: the WSGI application
39 :param app_root: Defaulting to ``'/'``, you can set this to something else
40 if your app is mounted somewhere else.
41 """
42
43 def __init__(self, app, app_root='/'):
44 self.app = app
45 self.app_root = app_root
46
47 def __call__(self, environ, start_response):
48 # only set PATH_INFO for older versions of Lighty or if no
49 # server software is provided. That's because the test was
50 # added in newer Werkzeug versions and we don't want to break
51 # people's code if they are using this fixer in a test that
52 # does not set the SERVER_SOFTWARE key.
53 if 'SERVER_SOFTWARE' not in environ or \
54 environ['SERVER_SOFTWARE'] < 'lighttpd/1.4.28':
55 environ['PATH_INFO'] = environ.get('SCRIPT_NAME', '') + \
56 environ.get('PATH_INFO', '')
57 environ['SCRIPT_NAME'] = self.app_root.strip('/')
58 return self.app(environ, start_response)
59
60 # backwards compatibility
61 LighttpdCGIRootFix = CGIRootFix
62
63
64 class PathInfoFromRequestUriFix(object):
65
66 """On Windows, environment variables are limited to the system charset,
67 which makes it impossible to store the `PATH_INFO` variable in the
68 environment without loss of information on some systems.
69
70 This is for example a problem for CGI scripts on a Windows Apache.
71
72 This fixer works by recreating the `PATH_INFO` from `REQUEST_URI`,
73 `REQUEST_URL`, or `UNENCODED_URL` (whatever is available). Thus the
74 fix can only be applied if the webserver supports either of these
75 variables.
76
77 :param app: the WSGI application
78 """
79
80 def __init__(self, app):
81 self.app = app
82
83 def __call__(self, environ, start_response):
84 for key in 'REQUEST_URL', 'REQUEST_URI', 'UNENCODED_URL':
85 if key not in environ:
86 continue
87 request_uri = unquote(environ[key])
88 script_name = unquote(environ.get('SCRIPT_NAME', ''))
89 if request_uri.startswith(script_name):
90 environ['PATH_INFO'] = request_uri[len(script_name):] \
91 .split('?', 1)[0]
92 break
93 return self.app(environ, start_response)
94
95
96 class ProxyFix(object):
97
98 """This middleware can be applied to add HTTP proxy support to an
99 application that was not designed with HTTP proxies in mind. It
101 sets `REMOTE_ADDR` and `HTTP_HOST` from the `X-Forwarded-*` headers. While
101 Werkzeug-based applications already can use
102 :py:func:`werkzeug.wsgi.get_host` to retrieve the current host even if
103 behind proxy setups, this middleware can be used for applications which
104 access the WSGI environment directly.
105
106 If you have more than one proxy server in front of your app, set
107 `num_proxies` accordingly.
108
109 Do not use this middleware in non-proxy setups for security reasons.
110
111 The original values of `REMOTE_ADDR` and `HTTP_HOST` are stored in
112 the WSGI environment as `werkzeug.proxy_fix.orig_remote_addr` and
113 `werkzeug.proxy_fix.orig_http_host`.
114
115 :param app: the WSGI application
116 :param num_proxies: the number of proxy servers in front of the app.
117 """
118
119 def __init__(self, app, num_proxies=1):
120 self.app = app
121 self.num_proxies = num_proxies
122
123 def get_remote_addr(self, forwarded_for):
124 """Selects the new remote addr from the given list of ips in
125 X-Forwarded-For. By default it picks the one that the `num_proxies`
126 proxy server provides. Before 0.9 it would always pick the first.
127
128 .. versionadded:: 0.8
129 """
130 if len(forwarded_for) >= self.num_proxies:
131 return forwarded_for[-self.num_proxies]
132
133 def __call__(self, environ, start_response):
134 getter = environ.get
135 forwarded_proto = getter('HTTP_X_FORWARDED_PROTO', '')
136 forwarded_for = getter('HTTP_X_FORWARDED_FOR', '').split(',')
137 forwarded_host = getter('HTTP_X_FORWARDED_HOST', '')
138 environ.update({
139 'werkzeug.proxy_fix.orig_wsgi_url_scheme': getter('wsgi.url_scheme'),
140 'werkzeug.proxy_fix.orig_remote_addr': getter('REMOTE_ADDR'),
141 'werkzeug.proxy_fix.orig_http_host': getter('HTTP_HOST')
142 })
143 forwarded_for = [x for x in [x.strip() for x in forwarded_for] if x]
144 remote_addr = self.get_remote_addr(forwarded_for)
145 if remote_addr is not None:
146 environ['REMOTE_ADDR'] = remote_addr
147 if forwarded_host:
148 environ['HTTP_HOST'] = forwarded_host
149 if forwarded_proto:
150 environ['wsgi.url_scheme'] = forwarded_proto
151 return self.app(environ, start_response)
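The ``num_proxies`` logic in ``get_remote_addr`` above is subtle: each proxy appends the address it saw to ``X-Forwarded-For``, so the trustworthy client address is the ``num_proxies``-th entry counted from the end, not the first entry (which a malicious client can spoof). A standalone sketch (the `select_remote_addr` name is mine):

```python
def select_remote_addr(x_forwarded_for, num_proxies=1):
    # Each proxy appends one address, so with N trusted proxies the
    # client address is the N-th entry from the end of the header.
    addrs = [x.strip() for x in x_forwarded_for.split(",") if x.strip()]
    if len(addrs) >= num_proxies:
        return addrs[-num_proxies]
    return None
```

Returning ``None`` when fewer entries than proxies are present means the middleware leaves ``REMOTE_ADDR`` untouched rather than trusting an incomplete chain.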
152
153
154 class HeaderRewriterFix(object):
155
156 """This middleware can remove response headers and add others. This is
157 useful, for example, to remove the `Date` header from responses if you
158 are using a server that adds that header whether it's already present or
159 not, or to add `X-Powered-By` headers::
160
161 app = HeaderRewriterFix(app, remove_headers=['Date'],
162 add_headers=[('X-Powered-By', 'WSGI')])
163
164 :param app: the WSGI application
165 :param remove_headers: a sequence of header keys that should be
166 removed.
167 :param add_headers: a sequence of ``(key, value)`` tuples that should
168 be added.
169 """
170
171 def __init__(self, app, remove_headers=None, add_headers=None):
172 self.app = app
173 self.remove_headers = set(x.lower() for x in (remove_headers or ()))
174 self.add_headers = list(add_headers or ())
175
176 def __call__(self, environ, start_response):
177 def rewriting_start_response(status, headers, exc_info=None):
178 new_headers = []
179 for key, value in headers:
180 if key.lower() not in self.remove_headers:
181 new_headers.append((key, value))
182 new_headers += self.add_headers
183 return start_response(status, new_headers, exc_info)
184 return self.app(environ, rewriting_start_response)
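The core of ``rewriting_start_response`` above is a pure transformation on the header list, which can be sketched on its own (the `rewrite_headers` name is mine, not the library's):

```python
def rewrite_headers(headers, remove=(), add=()):
    # drop headers by case-insensitive name, then append the new ones,
    # mirroring what HeaderRewriterFix does inside start_response
    removed = {name.lower() for name in remove}
    kept = [(k, v) for k, v in headers if k.lower() not in removed]
    return kept + list(add)
```

Matching case-insensitively is important because HTTP header names are case-insensitive, while WSGI passes them through as whatever strings the application produced.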
185
186
187 class InternetExplorerFix(object):
188
189 """This middleware fixes a couple of bugs with Microsoft Internet
190 Explorer. Currently the following fixes are applied:
191
192 - removal of `Vary` headers for unsupported mimetypes, which
193 causes trouble with caching. Can be disabled by passing
194 ``fix_vary=False`` to the constructor.
195 see: http://support.microsoft.com/kb/824847/en-us
196
197 - removes offending headers to work around caching bugs in
198 Internet Explorer if `Content-Disposition` is set. Can be
199 disabled by passing ``fix_attach=False`` to the constructor.
200
201 If it does not detect affected Internet Explorer versions it won't touch
202 the request / response.
203 """
204
205 # This code was inspired by Django fixers for the same bugs. The
206 # fix_vary and fix_attach fixers were originally implemented in Django
207 # by Michael Axiak and are available as part of the Django project:
208 # http://code.djangoproject.com/ticket/4148
209
210 def __init__(self, app, fix_vary=True, fix_attach=True):
211 self.app = app
212 self.fix_vary = fix_vary
213 self.fix_attach = fix_attach
214
215 def fix_headers(self, environ, headers, status=None):
216 if self.fix_vary:
217 header = headers.get('content-type', '')
218 mimetype, options = parse_options_header(header)
219 if mimetype not in ('text/html', 'text/plain', 'text/sgml'):
220 headers.pop('vary', None)
221
222 if self.fix_attach and 'content-disposition' in headers:
223 pragma = parse_set_header(headers.get('pragma', ''))
224 pragma.discard('no-cache')
225 header = pragma.to_header()
226 if not header:
227 headers.pop('pragma', '')
228 else:
229 headers['Pragma'] = header
230 header = headers.get('cache-control', '')
231 if header:
232 cc = parse_cache_control_header(header,
233 cls=ResponseCacheControl)
234 cc.no_cache = None
235 cc.no_store = False
236 header = cc.to_header()
237 if not header:
238 headers.pop('cache-control', '')
239 else:
240 headers['Cache-Control'] = header
241
242 def run_fixed(self, environ, start_response):
243 def fixing_start_response(status, headers, exc_info=None):
244 headers = Headers(headers)
245 self.fix_headers(environ, headers, status)
246 return start_response(status, headers.to_wsgi_list(), exc_info)
247 return self.app(environ, fixing_start_response)
248
249 def __call__(self, environ, start_response):
250 ua = UserAgent(environ)
251 if ua.browser != 'msie':
252 return self.app(environ, start_response)
253 return self.run_fixed(environ, start_response)
+0
-352
werkzeug/contrib/iterio.py less more
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.contrib.iterio
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module implements an :class:`IterIO` that converts an iterator into
6 a stream object and the other way round. Converting streams into
7 iterators requires the `greenlet`_ module.
8
9 To convert an iterator into a stream, all you have to do is pass it
10 directly to the :class:`IterIO` constructor. In this example we pass it
11 a newly created generator::
12
13 def foo():
14 yield "something\n"
15 yield "otherthings"
16 stream = IterIO(foo())
17 print stream.read() # read the whole iterator
18
19 The other way round works a bit differently because we have to ensure that
20 the code execution doesn't take place yet. An :class:`IterIO` call with a
21 callable as first argument does two things. The function itself is passed
22 an :class:`IterIO` stream it can feed. The object returned by the
23 :class:`IterIO` constructor, on the other hand, is not a stream object but
24 an iterator::
25
26 def foo(stream):
27 stream.write("some")
28 stream.write("thing")
29 stream.flush()
30 stream.write("otherthing")
31 iterator = IterIO(foo)
32 print iterator.next() # prints something
33 print iterator.next() # prints otherthing
34 iterator.next() # raises StopIteration
35
36 .. _greenlet: https://github.com/python-greenlet/greenlet
37
38 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
39 :license: BSD, see LICENSE for more details.
40 """
41 try:
42 import greenlet
43 except ImportError:
44 greenlet = None
45
46 from werkzeug._compat import implements_iterator
47
48
49 def _mixed_join(iterable, sentinel):
50 """concatenate any string type in an intelligent way."""
51 iterator = iter(iterable)
52 first_item = next(iterator, sentinel)
53 if isinstance(first_item, bytes):
54 return first_item + b''.join(iterator)
55 return first_item + u''.join(iterator)
56
57
58 def _newline(reference_string):
59 if isinstance(reference_string, bytes):
60 return b'\n'
61 return u'\n'
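``_mixed_join`` above lets the rest of the module stay agnostic about bytes versus text: it peeks at the first chunk to choose the join type, and an exhausted iterable yields the sentinel unchanged. A self-contained sketch of the same trick (the `mixed_join` name mirrors the private helper):

```python
def mixed_join(iterable, sentinel):
    # peek at the first item to decide whether to join as bytes or text;
    # if the iterable is empty, return the sentinel unchanged
    iterator = iter(iterable)
    first = next(iterator, sentinel)
    if isinstance(first, bytes):
        return first + b"".join(iterator)
    return first + "".join(iterator)
```

This is why the class docstring below insists on passing ``sentinel=b''`` when working with bytes: with the default ``''`` sentinel, an empty byte stream would hand back the wrong type.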
62
63
64 @implements_iterator
65 class IterIO(object):
66
67 """Instances of this object implement an interface compatible with the
68 standard Python :class:`file` object. Streams are either read-only or
69 write-only depending on how the object is created.
70
71 If the first argument is an iterable, a file-like object is returned that
72 returns the contents of the iterable. In case the iterable is empty
73 read operations will return the sentinel value.
74
75 If the first argument is a callable then the stream object will be
76 created and passed to that function. The caller itself however will
77 not receive a stream but an iterable. The function will be executed
78 step by step as something iterates over the returned iterable. Each
79 call to :meth:`flush` will create an item for the iterable. If
80 :meth:`flush` is called without any writes in-between the sentinel
81 value will be yielded.
82
83 Note for Python 3: due to the incompatible interface of bytes and
84 streams you should set the sentinel value explicitly to an empty
85 bytestring (``b''``) if you are expecting to deal with bytes as
86 otherwise the end of the stream is marked with the wrong sentinel
87 value.
88
89 .. versionadded:: 0.9
90 `sentinel` parameter was added.
91 """
92
93 def __new__(cls, obj, sentinel=''):
94 try:
95 iterator = iter(obj)
96 except TypeError:
97 return IterI(obj, sentinel)
98 return IterO(iterator, sentinel)
99
100 def __iter__(self):
101 return self
102
103 def tell(self):
104 if self.closed:
105 raise ValueError('I/O operation on closed file')
106 return self.pos
107
108 def isatty(self):
109 if self.closed:
110 raise ValueError('I/O operation on closed file')
111 return False
112
113 def seek(self, pos, mode=0):
114 if self.closed:
115 raise ValueError('I/O operation on closed file')
116 raise IOError(9, 'Bad file descriptor')
117
118 def truncate(self, size=None):
119 if self.closed:
120 raise ValueError('I/O operation on closed file')
121 raise IOError(9, 'Bad file descriptor')
122
123 def write(self, s):
124 if self.closed:
125 raise ValueError('I/O operation on closed file')
126 raise IOError(9, 'Bad file descriptor')
127
128 def writelines(self, list):
129 if self.closed:
130 raise ValueError('I/O operation on closed file')
131 raise IOError(9, 'Bad file descriptor')
132
133 def read(self, n=-1):
134 if self.closed:
135 raise ValueError('I/O operation on closed file')
136 raise IOError(9, 'Bad file descriptor')
137
138 def readlines(self, sizehint=0):
139 if self.closed:
140 raise ValueError('I/O operation on closed file')
141 raise IOError(9, 'Bad file descriptor')
142
143 def readline(self, length=None):
144 if self.closed:
145 raise ValueError('I/O operation on closed file')
146 raise IOError(9, 'Bad file descriptor')
147
148 def flush(self):
149 if self.closed:
150 raise ValueError('I/O operation on closed file')
151 raise IOError(9, 'Bad file descriptor')
152
153 def __next__(self):
154 if self.closed:
155 raise StopIteration()
156 line = self.readline()
157 if not line:
158 raise StopIteration()
159 return line
160
161
162 class IterI(IterIO):
163
164 """Convert a stream into an iterator."""
165
166 def __new__(cls, func, sentinel=''):
167 if greenlet is None:
168 raise RuntimeError('IterI requires greenlet support')
169 stream = object.__new__(cls)
170 stream._parent = greenlet.getcurrent()
171 stream._buffer = []
172 stream.closed = False
173 stream.sentinel = sentinel
174 stream.pos = 0
175
176 def run():
177 func(stream)
178 stream.close()
179
180 g = greenlet.greenlet(run, stream._parent)
181 while 1:
182 rv = g.switch()
183 if not rv:
184 return
185 yield rv[0]
186
187 def close(self):
188 if not self.closed:
189 self.closed = True
190 self._flush_impl()
191
192 def write(self, s):
193 if self.closed:
194 raise ValueError('I/O operation on closed file')
195 if s:
196 self.pos += len(s)
197 self._buffer.append(s)
198
199 def writelines(self, list):
200 for item in list:
201 self.write(item)
202
203 def flush(self):
204 if self.closed:
205 raise ValueError('I/O operation on closed file')
206 self._flush_impl()
207
208 def _flush_impl(self):
209 data = _mixed_join(self._buffer, self.sentinel)
210 self._buffer = []
211 if not data and self.closed:
212 self._parent.switch()
213 else:
214 self._parent.switch((data,))
215
216
217 class IterO(IterIO):
218
219 """Iter output. Wrap an iterator and give it a stream-like interface."""
220
221 def __new__(cls, gen, sentinel=''):
222 self = object.__new__(cls)
223 self._gen = gen
224 self._buf = None
225 self.sentinel = sentinel
226 self.closed = False
227 self.pos = 0
228 return self
229
230 def __iter__(self):
231 return self
232
233 def _buf_append(self, string):
234 """Replace string directly without appending to an empty string,
235 avoiding type issues."""
236 if not self._buf:
237 self._buf = string
238 else:
239 self._buf += string
240
241 def close(self):
242 if not self.closed:
243 self.closed = True
244 if hasattr(self._gen, 'close'):
245 self._gen.close()
246
247 def seek(self, pos, mode=0):
248 if self.closed:
249 raise ValueError('I/O operation on closed file')
250 if mode == 1:
251 pos += self.pos
252 elif mode == 2:
253 self.read()
254 self.pos = min(self.pos, self.pos + pos)
255 return
256 elif mode != 0:
257 raise IOError('Invalid argument')
258 buf = []
259 try:
260 tmp_end_pos = len(self._buf)
261 while pos > tmp_end_pos:
262 item = next(self._gen)
263 tmp_end_pos += len(item)
264 buf.append(item)
265 except StopIteration:
266 pass
267 if buf:
268 self._buf_append(_mixed_join(buf, self.sentinel))
269 self.pos = max(0, pos)
270
271 def read(self, n=-1):
272 if self.closed:
273 raise ValueError('I/O operation on closed file')
274 if n < 0:
275 self._buf_append(_mixed_join(self._gen, self.sentinel))
276 result = self._buf[self.pos:]
277 self.pos += len(result)
278 return result
279 new_pos = self.pos + n
280 buf = []
281 try:
282 tmp_end_pos = 0 if self._buf is None else len(self._buf)
283 while new_pos > tmp_end_pos or (self._buf is None and not buf):
284 item = next(self._gen)
285 tmp_end_pos += len(item)
286 buf.append(item)
287 except StopIteration:
288 pass
289 if buf:
290 self._buf_append(_mixed_join(buf, self.sentinel))
291
292 if self._buf is None:
293 return self.sentinel
294
295 new_pos = max(0, new_pos)
296 try:
297 return self._buf[self.pos:new_pos]
298 finally:
299 self.pos = min(new_pos, len(self._buf))
300
301 def readline(self, length=None):
302 if self.closed:
303 raise ValueError('I/O operation on closed file')
304
305 nl_pos = -1
306 if self._buf:
307 nl_pos = self._buf.find(_newline(self._buf), self.pos)
308 buf = []
309 try:
310 if self._buf is None:
311 pos = self.pos
312 else:
313 pos = len(self._buf)
314 while nl_pos < 0:
315 item = next(self._gen)
316 local_pos = item.find(_newline(item))
317 buf.append(item)
318 if local_pos >= 0:
319 nl_pos = pos + local_pos
320 break
321 pos += len(item)
322 except StopIteration:
323 pass
324 if buf:
325 self._buf_append(_mixed_join(buf, self.sentinel))
326
327 if self._buf is None:
328 return self.sentinel
329
330 if nl_pos < 0:
331 new_pos = len(self._buf)
332 else:
333 new_pos = nl_pos + 1
334 if length is not None and self.pos + length < new_pos:
335 new_pos = self.pos + length
336 try:
337 return self._buf[self.pos:new_pos]
338 finally:
339 self.pos = min(new_pos, len(self._buf))
340
341 def readlines(self, sizehint=0):
342 total = 0
343 lines = []
344 line = self.readline()
345 while line:
346 lines.append(line)
347 total += len(line)
348 if 0 < sizehint <= total:
349 break
350 line = self.readline()
351 return lines
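The essence of ``IterO.read`` above is a demand-driven buffer: chunks are pulled from the iterator only until enough data is available for the current position. A much-reduced text-only sketch of that idea (the `IterReader` class is mine, not the library's; it omits seeking, sentinels, and bytes handling):

```python
class IterReader:
    # simplified sketch of IterO: wrap an iterator of text chunks and
    # satisfy read(n) by pulling chunks into a buffer on demand
    def __init__(self, gen):
        self._gen = iter(gen)
        self._buf = ""
        self.pos = 0

    def read(self, n=-1):
        if n < 0:
            # exhaust the iterator for an unbounded read
            self._buf += "".join(self._gen)
        else:
            # pull chunks only until the requested range is buffered
            while len(self._buf) < self.pos + n:
                try:
                    self._buf += next(self._gen)
                except StopIteration:
                    break
        new_pos = len(self._buf) if n < 0 else min(self.pos + n, len(self._buf))
        result = self._buf[self.pos:new_pos]
        self.pos = new_pos
        return result
```

Note that, as in ``IterO``, the buffer only ever grows; the real class accepts this trade-off so that ``seek`` backwards can work without re-running the iterator.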
+0
-264
werkzeug/contrib/jsrouting.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.jsrouting
3 ~~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 Addon module that allows you to create a JavaScript URL-building
6 function from the rules of a routing map.
7
8 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
10 """
11 try:
12 from simplejson import dumps
13 except ImportError:
14 try:
15 from json import dumps
16 except ImportError:
17 def dumps(*args):
18 raise RuntimeError('simplejson required for jsrouting')
19
20 from inspect import getmro
21 from werkzeug.routing import NumberConverter
22 from werkzeug._compat import iteritems
23
24
25 def render_template(name_parts, rules, converters):
26 result = u''
27 if name_parts:
28 for idx in range(0, len(name_parts) - 1):
29 name = u'.'.join(name_parts[:idx + 1])
30 result += u"if (typeof %s === 'undefined') %s = {}\n" % (name, name)
31 result += '%s = ' % '.'.join(name_parts)
32 result += """(function (server_name, script_name, subdomain, url_scheme) {
33 var converters = [%(converters)s];
34 var rules = %(rules)s;
35 function in_array(array, value) {
36 if (array.indexOf != undefined) {
37 return array.indexOf(value) != -1;
38 }
39 for (var i = 0; i < array.length; i++) {
40 if (array[i] == value) {
41 return true;
42 }
43 }
44 return false;
45 }
46 function array_diff(array1, array2) {
47 array1 = array1.slice();
48 for (var i = array1.length-1; i >= 0; i--) {
49 if (in_array(array2, array1[i])) {
50 array1.splice(i, 1);
51 }
52 }
53 return array1;
54 }
55 function split_obj(obj) {
56 var names = [];
57 var values = [];
58 for (var name in obj) {
59 if (typeof(obj[name]) != 'function') {
60 names.push(name);
61 values.push(obj[name]);
62 }
63 }
64 return {names: names, values: values, original: obj};
65 }
66 function suitable(rule, args) {
67 var default_args = split_obj(rule.defaults || {});
68 var diff_arg_names = array_diff(rule.arguments, default_args.names);
69
70 for (var i = 0; i < diff_arg_names.length; i++) {
71 if (!in_array(args.names, diff_arg_names[i])) {
72 return false;
73 }
74 }
75
76 if (array_diff(rule.arguments, args.names).length == 0) {
77 if (rule.defaults == null) {
78 return true;
79 }
80 for (var i = 0; i < default_args.names.length; i++) {
81 var key = default_args.names[i];
82 var value = default_args.values[i];
83 if (value != args.original[key]) {
84 return false;
85 }
86 }
87 }
88
89 return true;
90 }
91 function build(rule, args) {
92 var tmp = [];
93 var processed = rule.arguments.slice();
94 for (var i = 0; i < rule.trace.length; i++) {
95 var part = rule.trace[i];
96 if (part.is_dynamic) {
97 var converter = converters[rule.converters[part.data]];
98 var data = converter(args.original[part.data]);
99 if (data == null) {
100 return null;
101 }
102 tmp.push(data);
103 processed.push(part.name);
104 } else {
105 tmp.push(part.data);
106 }
107 }
108 tmp = tmp.join('');
109 var pipe = tmp.indexOf('|');
110 var subdomain = tmp.substring(0, pipe);
111 var url = tmp.substring(pipe+1);
112
113 var unprocessed = array_diff(args.names, processed);
114 var first_query_var = true;
115 for (var i = 0; i < unprocessed.length; i++) {
116 if (first_query_var) {
117 url += '?';
118 } else {
119 url += '&';
120 }
121 first_query_var = false;
122 url += encodeURIComponent(unprocessed[i]);
123 url += '=';
124 url += encodeURIComponent(args.original[unprocessed[i]]);
125 }
126 return {subdomain: subdomain, path: url};
127 }
128 function lstrip(s, c) {
129 while (s && s.substring(0, 1) == c) {
130 s = s.substring(1);
131 }
132 return s;
133 }
134 function rstrip(s, c) {
135 while (s && s.substring(s.length-1, s.length) == c) {
136 s = s.substring(0, s.length-1);
137 }
138 return s;
139 }
140 return function(endpoint, args, force_external) {
141 args = split_obj(args);
142 var rv = null;
143 for (var i = 0; i < rules.length; i++) {
144 var rule = rules[i];
145 if (rule.endpoint != endpoint) continue;
146 if (suitable(rule, args)) {
147 rv = build(rule, args);
148 if (rv != null) {
149 break;
150 }
151 }
152 }
153 if (rv == null) {
154 return null;
155 }
156 if (!force_external && rv.subdomain == subdomain) {
157 return rstrip(script_name, '/') + '/' + lstrip(rv.path, '/');
158 } else {
159 return url_scheme + '://'
160 + (rv.subdomain ? rv.subdomain + '.' : '')
161 + server_name + rstrip(script_name, '/')
162 + '/' + lstrip(rv.path, '/');
163 }
164 };
165 })""" % {'converters': u', '.join(converters),
166 'rules': rules}
167
168 return result
169
170
171 def generate_map(map, name='url_map'):
172 """
173 Generates a JavaScript function containing the rules defined in
174 this map, to be used with a MapAdapter's generate_javascript
175 method. If you don't pass a name the returned JavaScript code is
176 an expression that returns a function. Otherwise it's a standalone
177 script that assigns the function with that name. Dotted names are
178 resolved (so you can use a name like 'obj.url_for')
179
180 In order to use JavaScript generation, simplejson must be installed.
181
182 Note that using this feature will expose the rules
183 defined in your map to users. If your rules contain sensitive
184 information, don't use JavaScript generation!
185 """
186 from warnings import warn
187 warn(DeprecationWarning('This module is deprecated'))
188 map.update()
189 rules = []
190 converters = []
191 for rule in map.iter_rules():
192 trace = [{
193 'is_dynamic': is_dynamic,
194 'data': data
195 } for is_dynamic, data in rule._trace]
196 rule_converters = {}
197 for key, converter in iteritems(rule._converters):
198 js_func = js_to_url_function(converter)
199 try:
200 index = converters.index(js_func)
201 except ValueError:
202 converters.append(js_func)
203 index = len(converters) - 1
204 rule_converters[key] = index
205 rules.append({
206 u'endpoint': rule.endpoint,
207 u'arguments': list(rule.arguments),
208 u'converters': rule_converters,
209 u'trace': trace,
210 u'defaults': rule.defaults
211 })
212
213 return render_template(name_parts=name and name.split('.') or [],
214 rules=dumps(rules),
215 converters=converters)
216
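The converter bookkeeping in `generate_map` above deduplicates JavaScript converter functions: each distinct function body gets exactly one slot in `converters`, and every rule key maps to that slot's index. A minimal sketch of that `list.index`/`append` pattern with plain strings (the helper name is illustrative, not part of the werkzeug API):

```python
def index_converters(rule_funcs):
    """Deduplicate converter function bodies, mapping each rule key to a
    shared index, mirroring the pattern used in generate_map above."""
    converters = []       # one entry per distinct function body
    rule_converters = {}  # rule argument name -> index into converters
    for key, js_func in rule_funcs.items():
        try:
            index = converters.index(js_func)
        except ValueError:
            converters.append(js_func)
            index = len(converters) - 1
        rule_converters[key] = index
    return converters, rule_converters
```

Two keys that share a converter body end up pointing at the same index, so the generated JavaScript only embeds each function once.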
217
218 def generate_adapter(adapter, name='url_for', map_name='url_map'):
219 """Generates the url building function for a map."""
220 values = {
221 u'server_name': dumps(adapter.server_name),
222 u'script_name': dumps(adapter.script_name),
223 u'subdomain': dumps(adapter.subdomain),
224 u'url_scheme': dumps(adapter.url_scheme),
225 u'name': name,
226 u'map_name': map_name
227 }
228 return u'''\
229 var %(name)s = %(map_name)s(
230 %(server_name)s,
231 %(script_name)s,
232 %(subdomain)s,
233 %(url_scheme)s
234 );''' % values
235
236
237 def js_to_url_function(converter):
238 """Get the JavaScript converter function from a rule."""
239 if hasattr(converter, 'js_to_url_function'):
240 data = converter.js_to_url_function()
241 else:
242 for cls in getmro(type(converter)):
243 if cls in js_to_url_functions:
244 data = js_to_url_functions[cls](converter)
245 break
246 else:
247 return 'encodeURIComponent'
248 return '(function(value) { %s })' % data
249
250
251 def NumberConverter_js_to_url(conv):
252 if conv.fixed_digits:
253 return u'''\
254 var result = value.toString();
255 while (result.length < %s)
256 result = '0' + result;
257 return result;''' % conv.fixed_digits
258 return u'return value.toString();'
259
260
261 js_to_url_functions = {
262 NumberConverter: NumberConverter_js_to_url
263 }
+0
-41
werkzeug/contrib/limiter.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.limiter
3 ~~~~~~~~~~~~~~~~~~~~~~~~
4
5 A middleware that limits incoming data. This works around problems with
6 Trac_ or Django_ because those stream request data directly into memory.
7
8 .. _Trac: http://trac.edgewall.org/
9 .. _Django: http://www.djangoproject.com/
10
11 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
12 :license: BSD, see LICENSE for more details.
13 """
14 from warnings import warn
15
16 from werkzeug.wsgi import LimitedStream
17
18
19 class StreamLimitMiddleware(object):
20
21 """Limits the input stream to a given number of bytes. This is useful if
22 you have a WSGI application that reads form data into memory (Django for
23 example) and you don't want users to harm the server by uploading tons of
24 data.
25
26 Default is 10MB
27
28 .. versionchanged:: 0.9
29 Deprecated middleware.
30 """
31
32 def __init__(self, app, maximum_size=1024 * 1024 * 10):
33 warn(DeprecationWarning('This middleware is deprecated'))
34 self.app = app
35 self.maximum_size = maximum_size
36
37 def __call__(self, environ, start_response):
38 limit = min(self.maximum_size, int(environ.get('CONTENT_LENGTH') or 0))
39 environ['wsgi.input'] = LimitedStream(environ['wsgi.input'], limit)
40 return self.app(environ, start_response)
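The middleware's whole job is the one line in `__call__`: cap `wsgi.input` at the smaller of `CONTENT_LENGTH` and `maximum_size`. A stdlib-only stand-in for the wrapped stream (a sketch, not werkzeug's actual `LimitedStream`) shows the effect of that cap:

```python
import io

class CappedStream:
    """Minimal stand-in for werkzeug's LimitedStream: reads at most
    `limit` bytes from the wrapped stream, then behaves as EOF."""

    def __init__(self, stream, limit):
        self._stream = stream
        self._remaining = limit

    def read(self, size=-1):
        if self._remaining <= 0:
            return b""
        if size < 0 or size > self._remaining:
            size = self._remaining
        data = self._stream.read(size)
        self._remaining -= len(data)
        return data

# a 100-byte body capped at 10 bytes: the rest is never read
body = CappedStream(io.BytesIO(b"x" * 100), limit=10)
assert body.read() == b"x" * 10
assert body.read() == b""
```

An application that greedily reads the whole request body then sees at most `maximum_size` bytes, no matter what the client sends.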
+0
-343
werkzeug/contrib/lint.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.lint
3 ~~~~~~~~~~~~~~~~~~~~~
4
5 .. versionadded:: 0.5
6
7 This module provides a middleware that performs sanity checks of the WSGI
8 application. It checks that :pep:`333` is properly implemented and warns
9 on some common HTTP errors such as non-empty responses for 304 status
10 codes.
11
12 This module provides a middleware, the :class:`LintMiddleware`. Wrap your
13 application with it and it will warn about common problems with WSGI and
14 HTTP while your application is running.
15
16 It's strongly recommended to use it during development.
17
18 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
19 :license: BSD, see LICENSE for more details.
20 """
21 try:
22 from urllib.parse import urlparse
23 except ImportError:
24 from urlparse import urlparse
25
26 from warnings import warn
27
28 from werkzeug.datastructures import Headers
29 from werkzeug.http import is_entity_header
30 from werkzeug.wsgi import FileWrapper
31 from werkzeug._compat import string_types
32
33
34 class WSGIWarning(Warning):
35
36 """Warning class for WSGI warnings."""
37
38
39 class HTTPWarning(Warning):
40
41 """Warning class for HTTP warnings."""
42
43
44 def check_string(context, obj, stacklevel=3):
45 if type(obj) is not str:
46 warn(WSGIWarning('%s requires bytestrings, got %s' %
47 (context, obj.__class__.__name__)), stacklevel=stacklevel)
48
49
50 class InputStream(object):
51
52 def __init__(self, stream):
53 self._stream = stream
54
55 def read(self, *args):
56 if len(args) == 0:
57 warn(WSGIWarning('wsgi does not guarantee an EOF marker on the '
58 'input stream, thus making calls to '
59 'wsgi.input.read() unsafe. Conforming servers '
60 'may never return from this call.'),
61 stacklevel=2)
62 elif len(args) != 1:
63 warn(WSGIWarning('too many parameters passed to wsgi.input.read()'),
64 stacklevel=2)
65 return self._stream.read(*args)
66
67 def readline(self, *args):
68 if len(args) == 0:
69 warn(WSGIWarning('Calls to wsgi.input.readline() without arguments'
70 ' are unsafe. Use wsgi.input.read() instead.'),
71 stacklevel=2)
72 elif len(args) == 1:
73 warn(WSGIWarning('wsgi.input.readline() was called with a size hint. '
74 'WSGI does not support this, although it\'s available '
75 'on all major servers.'),
76 stacklevel=2)
77 else:
78 raise TypeError('too many arguments passed to wsgi.input.readline()')
79 return self._stream.readline(*args)
80
81 def __iter__(self):
82 try:
83 return iter(self._stream)
84 except TypeError:
85 warn(WSGIWarning('wsgi.input is not iterable.'), stacklevel=2)
86 return iter(())
87
88 def close(self):
89 warn(WSGIWarning('application closed the input stream!'),
90 stacklevel=2)
91 self._stream.close()
92
93
94 class ErrorStream(object):
95
96 def __init__(self, stream):
97 self._stream = stream
98
99 def write(self, s):
100 check_string('wsgi.error.write()', s)
101 self._stream.write(s)
102
103 def flush(self):
104 self._stream.flush()
105
106 def writelines(self, seq):
107 for line in seq:
108 self.write(line)
109
110 def close(self):
111 warn(WSGIWarning('application closed the error stream!'),
112 stacklevel=2)
113 self._stream.close()
114
115
116 class GuardedWrite(object):
117
118 def __init__(self, write, chunks):
119 self._write = write
120 self._chunks = chunks
121
122 def __call__(self, s):
123 check_string('write()', s)
124 self._write(s)
125 self._chunks.append(len(s))
126
127
128 class GuardedIterator(object):
129
130 def __init__(self, iterator, headers_set, chunks):
131 self._iterator = iterator
132 self._next = iter(iterator).next
133 self.closed = False
134 self.headers_set = headers_set
135 self.chunks = chunks
136
137 def __iter__(self):
138 return self
139
140 def next(self):
141 if self.closed:
142 warn(WSGIWarning('iterated over closed app_iter'),
143 stacklevel=2)
144 rv = self._next()
145 if not self.headers_set:
146 warn(WSGIWarning('Application returned before it '
147 'started the response'), stacklevel=2)
148 check_string('application iterator items', rv)
149 self.chunks.append(len(rv))
150 return rv
151
152 def close(self):
153 self.closed = True
154 if hasattr(self._iterator, 'close'):
155 self._iterator.close()
156
157 if self.headers_set:
158 status_code, headers = self.headers_set
159 bytes_sent = sum(self.chunks)
160 content_length = headers.get('content-length', type=int)
161
162 if status_code == 304:
163 for key, value in headers:
164 key = key.lower()
165 if key not in ('expires', 'content-location') and \
166 is_entity_header(key):
167 warn(HTTPWarning('entity header %r found in 304 '
168 'response' % key))
169 if bytes_sent:
170 warn(HTTPWarning('304 responses must not have a body'))
171 elif 100 <= status_code < 200 or status_code == 204:
172 if content_length != 0:
173 warn(HTTPWarning('%r responses must have an empty '
174 'content length' % status_code))
175 if bytes_sent:
176 warn(HTTPWarning('%r responses must not have a body' %
177 status_code))
178 elif content_length is not None and content_length != bytes_sent:
179 warn(WSGIWarning('Content-Length and the number of bytes '
180 'sent to the client do not match.'))
181
182 def __del__(self):
183 if not self.closed:
184 try:
185 warn(WSGIWarning('Iterator was garbage collected before '
186 'it was closed.'))
187 except Exception:
188 pass
189
190
191 class LintMiddleware(object):
192
193 """This middleware wraps an application and warns on common errors.
194 Among other things it currently checks for the following problems:
195
196 - invalid status codes
197 - non-bytestrings sent to the WSGI server
198 - strings returned from the WSGI application
199 - non-empty conditional responses
200 - unquoted etags
201 - relative URLs in the Location header
202 - unsafe calls to wsgi.input
203 - unclosed iterators
204
205 Detected errors are emitted using the standard Python :mod:`warnings`
206 system and usually end up on :data:`stderr`.
207
208 ::
209
210 from werkzeug.contrib.lint import LintMiddleware
211 app = LintMiddleware(app)
212
213 :param app: the application to wrap
214 """
215
216 def __init__(self, app):
217 self.app = app
218
219 def check_environ(self, environ):
220 if type(environ) is not dict:
221 warn(WSGIWarning('WSGI environment is not a standard python dict.'),
222 stacklevel=4)
223 for key in ('REQUEST_METHOD', 'SERVER_NAME', 'SERVER_PORT',
224 'wsgi.version', 'wsgi.input', 'wsgi.errors',
225 'wsgi.multithread', 'wsgi.multiprocess',
226 'wsgi.run_once'):
227 if key not in environ:
228 warn(WSGIWarning('required environment key %r not found'
229 % key), stacklevel=3)
230 if environ['wsgi.version'] != (1, 0):
231 warn(WSGIWarning('environ is not a WSGI 1.0 environ'),
232 stacklevel=3)
233
234 script_name = environ.get('SCRIPT_NAME', '')
235 if script_name and script_name[:1] != '/':
236 warn(WSGIWarning('SCRIPT_NAME does not start with a slash: %r'
237 % script_name), stacklevel=3)
238 path_info = environ.get('PATH_INFO', '')
239 if path_info[:1] != '/':
240 warn(WSGIWarning('PATH_INFO does not start with a slash: %r'
241 % path_info), stacklevel=3)
242
243 def check_start_response(self, status, headers, exc_info):
244 check_string('status', status)
245 status_code = status.split(None, 1)[0]
246 if len(status_code) != 3 or not status_code.isdigit():
247 warn(WSGIWarning('Status code must be three digits'), stacklevel=3)
248 if len(status) < 4 or status[3] != ' ':
249 warn(WSGIWarning('Invalid value for status %r. Valid '
250 'status strings are three digits, a space '
251 'and a status explanation' % status), stacklevel=3)
252 status_code = int(status_code)
253 if status_code < 100:
254 warn(WSGIWarning('status code < 100 detected'), stacklevel=3)
255
256 if type(headers) is not list:
257 warn(WSGIWarning('header list is not a list'), stacklevel=3)
258 for item in headers:
259 if type(item) is not tuple or len(item) != 2:
260 warn(WSGIWarning('Headers must be 2-item tuples'),
261 stacklevel=3)
262 name, value = item
263 if type(name) is not str or type(value) is not str:
264 warn(WSGIWarning('header items must be strings'),
265 stacklevel=3)
266 if name.lower() == 'status':
267 warn(WSGIWarning('The status header is not supported due to '
268 'conflicts with the CGI spec.'),
269 stacklevel=3)
270
271 if exc_info is not None and not isinstance(exc_info, tuple):
272 warn(WSGIWarning('invalid value for exc_info'), stacklevel=3)
273
274 headers = Headers(headers)
275 self.check_headers(headers)
276
277 return status_code, headers
278
279 def check_headers(self, headers):
280 etag = headers.get('etag')
281 if etag is not None:
282 if etag.startswith(('W/', 'w/')):
283 if etag.startswith('w/'):
284 warn(HTTPWarning('weak etag indicator should be uppercase.'),
285 stacklevel=4)
286 etag = etag[2:]
287 if not (etag[:1] == etag[-1:] == '"'):
288 warn(HTTPWarning('unquoted etag emitted.'), stacklevel=4)
289
290 location = headers.get('location')
291 if location is not None:
292 if not urlparse(location).netloc:
293 warn(HTTPWarning('absolute URLs required for location header'),
294 stacklevel=4)
295
296 def check_iterator(self, app_iter):
297 if isinstance(app_iter, string_types):
298 warn(WSGIWarning('application returned a string. The response '
299 'will be sent character by character to the '
300 'client, which will kill performance. Return '
301 'a list or iterable instead.'), stacklevel=3)
302
303 def __call__(self, *args, **kwargs):
304 if len(args) != 2:
305 warn(WSGIWarning('Two arguments to WSGI app required'), stacklevel=2)
306 if kwargs:
307 warn(WSGIWarning('No keyword arguments to WSGI app allowed'),
308 stacklevel=2)
309 environ, start_response = args
310
311 self.check_environ(environ)
312 environ['wsgi.input'] = InputStream(environ['wsgi.input'])
313 environ['wsgi.errors'] = ErrorStream(environ['wsgi.errors'])
314
315 # hook our own file wrapper in so that applications will always
316 # iterate to the end and we can check the content length
317 environ['wsgi.file_wrapper'] = FileWrapper
318
319 headers_set = []
320 chunks = []
321
322 def checking_start_response(*args, **kwargs):
323 if len(args) not in (2, 3):
324 warn(WSGIWarning('Invalid number of arguments: %s, expected '
325 '2 or 3' % len(args)), stacklevel=2)
326 if kwargs:
327 warn(WSGIWarning('no keyword arguments allowed.'))
328
329 status, headers = args[:2]
330 if len(args) == 3:
331 exc_info = args[2]
332 else:
333 exc_info = None
334
335 headers_set[:] = self.check_start_response(status, headers,
336 exc_info)
337 return GuardedWrite(start_response(status, headers, exc_info),
338 chunks)
339
340 app_iter = self.app(environ, checking_start_response)
341 self.check_iterator(app_iter)
342 return GuardedIterator(app_iter, headers_set, chunks)
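The status-line validation in `check_start_response` above reduces to one rule: three digits, a space, then a reason phrase, with codes below 100 rejected. A hedged stand-alone version of just that check (the helper name is illustrative, not a werkzeug API):

```python
def status_line_ok(status):
    """Return True if `status` looks like a valid WSGI status string,
    e.g. '200 OK'. Mirrors the checks in check_start_response."""
    code = status.split(None, 1)[0]
    if len(code) != 3 or not code.isdigit():
        return False          # status code must be exactly three digits
    if len(status) < 4 or status[3] != " ":
        return False          # digits must be followed by a space
    return int(code) >= 100   # codes below 100 are invalid

assert status_line_ok("200 OK")
assert not status_line_ok("99 Too Short")
assert not status_line_ok("404NotFound")
```

The real middleware warns instead of returning a boolean, but the accepted shapes are the same.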
+0
-147
werkzeug/contrib/profiler.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.profiler
3 ~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module provides a simple WSGI profiler middleware for finding
6 bottlenecks in web application. It uses the :mod:`profile` or
7 :mod:`cProfile` module to do the profiling and writes the stats to the
8 stream provided (defaults to stderr).
9
10 Example usage::
11
12 from werkzeug.contrib.profiler import ProfilerMiddleware
13 app = ProfilerMiddleware(app)
14
15 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
16 :license: BSD, see LICENSE for more details.
17 """
18 import sys
19 import time
20 import os.path
21 try:
22 try:
23 from cProfile import Profile
24 except ImportError:
25 from profile import Profile
26 from pstats import Stats
27 available = True
28 except ImportError:
29 available = False
30
31
32 class MergeStream(object):
33
34 """An object that redirects `write` calls to multiple streams.
35 Use this to log to both `sys.stdout` and a file::
36
37 f = open('profiler.log', 'w')
38 stream = MergeStream(sys.stdout, f)
39 profiler = ProfilerMiddleware(app, stream)
40 """
41
42 def __init__(self, *streams):
43 if not streams:
44 raise TypeError('at least one stream must be given')
45 self.streams = streams
46
47 def write(self, data):
48 for stream in self.streams:
49 stream.write(data)
50
51
52 class ProfilerMiddleware(object):
53
54 """Simple profiler middleware. Wraps a WSGI application and profiles
55 a request. This intentionally buffers the response so that timings are
56 more exact.
57
58 By giving the `profile_dir` argument, pstat.Stats files are saved to that
59 directory, one file per request. Without it, a summary is printed to
60 `stream` instead.
61
62 For the exact meaning of `sort_by` and `restrictions` consult the
63 :mod:`profile` documentation.
64
65 .. versionadded:: 0.9
66 Added support for `restrictions` and `profile_dir`.
67
68 :param app: the WSGI application to profile.
69 :param stream: the stream for the profiled stats. defaults to stderr.
70 :param sort_by: a tuple of columns to sort the result by.
71 :param restrictions: a tuple of profiling restrictions, not used if dumping
72 to `profile_dir`.
73 :param profile_dir: directory name to save pstat files
74 """
75
76 def __init__(self, app, stream=None,
77 sort_by=('time', 'calls'), restrictions=(), profile_dir=None):
78 if not available:
79 raise RuntimeError('the profiler is not available because '
80 'profile or pstats is not installed.')
81 self._app = app
82 self._stream = stream or sys.stdout
83 self._sort_by = sort_by
84 self._restrictions = restrictions
85 self._profile_dir = profile_dir
86
87 def __call__(self, environ, start_response):
88 response_body = []
89
90 def catching_start_response(status, headers, exc_info=None):
91 start_response(status, headers, exc_info)
92 return response_body.append
93
94 def runapp():
95 appiter = self._app(environ, catching_start_response)
96 response_body.extend(appiter)
97 if hasattr(appiter, 'close'):
98 appiter.close()
99
100 p = Profile()
101 start = time.time()
102 p.runcall(runapp)
103 body = b''.join(response_body)
104 elapsed = time.time() - start
105
106 if self._profile_dir is not None:
107 prof_filename = os.path.join(self._profile_dir,
108 '%s.%s.%06dms.%d.prof' % (
109 environ['REQUEST_METHOD'],
110 environ.get('PATH_INFO', '').strip(
111 '/').replace('/', '.') or 'root',
112 elapsed * 1000.0,
113 time.time()
114 ))
115 p.dump_stats(prof_filename)
116
117 else:
118 stats = Stats(p, stream=self._stream)
119 stats.sort_stats(*self._sort_by)
120
121 self._stream.write('-' * 80)
122 self._stream.write('\nPATH: %r\n' % environ.get('PATH_INFO'))
123 stats.print_stats(*self._restrictions)
124 self._stream.write('-' * 80 + '\n\n')
125
126 return [body]
127
128
129 def make_action(app_factory, hostname='localhost', port=5000,
130 threaded=False, processes=1, stream=None,
131 sort_by=('time', 'calls'), restrictions=()):
132 """Return a new callback for :mod:`werkzeug.script` that starts a local
133 server with the profiler enabled.
134
135 ::
136
137 from werkzeug.contrib import profiler
138 action_profile = profiler.make_action(make_app)
139 """
140 def action(hostname=('h', hostname), port=('p', port),
141 threaded=threaded, processes=processes):
142 """Start a new development server."""
143 from werkzeug.serving import run_simple
144 app = ProfilerMiddleware(app_factory(), stream, sort_by, restrictions)
145 run_simple(hostname, port, app, False, None, threaded, processes)
146 return action
+0
-323
werkzeug/contrib/securecookie.py
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.contrib.securecookie
3 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module implements a cookie that is not alterable from the client
6 because it adds a checksum the server checks for. You can use it as
7 session replacement if all you have is a user id or something to mark
8 a logged in user.
9
10 Keep in mind that the data is still readable from the client as a
11 normal cookie is. However you don't have to store and flush the
12 sessions you have at the server.
13
14 Example usage:
15
16 >>> from werkzeug.contrib.securecookie import SecureCookie
17 >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
18
19 Dumping into a string so that one can store it in a cookie:
20
21 >>> value = x.serialize()
22
23 Loading from that string again:
24
25 >>> x = SecureCookie.unserialize(value, "deadbeef")
26 >>> x["baz"]
27 (1, 2, 3)
28
29 If someone modifies the cookie and the checksum is wrong, the unserialize
30 method will fail silently and return a new empty `SecureCookie` object.
31
32 Keep in mind that the values will be visible in the cookie so do not
33 store data in a cookie you don't want the user to see.
34
35 Application Integration
36 =======================
37
38 If you are using the werkzeug request objects you could integrate the
39 secure cookie into your application like this::
40
41 from werkzeug.utils import cached_property
42 from werkzeug.wrappers import BaseRequest
43 from werkzeug.contrib.securecookie import SecureCookie
44
45 # don't use this key but a different one; you could just use
46 # os.urandom(20) to get something random
47 SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea'
48
49 class Request(BaseRequest):
50
51 @cached_property
52 def client_session(self):
53 data = self.cookies.get('session_data')
54 if not data:
55 return SecureCookie(secret_key=SECRET_KEY)
56 return SecureCookie.unserialize(data, SECRET_KEY)
57
58 def application(environ, start_response):
59 request = Request(environ)
60
61 # get a response object here
62 response = ...
63
64 if request.client_session.should_save:
65 session_data = request.client_session.serialize()
66 response.set_cookie('session_data', session_data,
67 httponly=True)
68 return response(environ, start_response)
69
70 A less verbose integration can be achieved by using shorthand methods::
71
72 class Request(BaseRequest):
73
74 @cached_property
75 def client_session(self):
76 return SecureCookie.load_cookie(self, secret_key=COOKIE_SECRET)
77
78 def application(environ, start_response):
79 request = Request(environ)
80
81 # get a response object here
82 response = ...
83
84 request.client_session.save_cookie(response)
85 return response(environ, start_response)
86
87 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
88 :license: BSD, see LICENSE for more details.
89 """
90 import pickle
91 import base64
92 from hmac import new as hmac
93 from time import time
94 from hashlib import sha1 as _default_hash
95
96 from werkzeug._compat import iteritems, text_type, to_bytes
97 from werkzeug.urls import url_quote_plus, url_unquote_plus
98 from werkzeug._internal import _date_to_unix
99 from werkzeug.contrib.sessions import ModificationTrackingDict
100 from werkzeug.security import safe_str_cmp
101 from werkzeug._compat import to_native
102
103
104 class UnquoteError(Exception):
105
106 """Internal exception used to signal failures on quoting."""
107
108
109 class SecureCookie(ModificationTrackingDict):
110
111 """Represents a secure cookie. You can subclass this class and provide
112 an alternative mac method. The important thing is that the mac method
113 is a function with an interface similar to hashlib's. Required
114 methods are update() and digest().
115
116 Example usage:
117
118 >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef")
119 >>> x["foo"]
120 42
121 >>> x["baz"]
122 (1, 2, 3)
123 >>> x["blafasel"] = 23
124 >>> x.should_save
125 True
126
127 :param data: the initial data. Either a dict, list of tuples or `None`.
128 :param secret_key: the secret key. If not set `None` or not specified
129 it has to be set before :meth:`serialize` is called.
130 :param new: The initial value of the `new` flag.
131 """
132
133 #: The hash method to use. This has to be a module with a new function
134 #: or a function that creates a hashlib object. Such as `hashlib.md5`
135 #: Subclasses can override this attribute. The default hash is sha1.
136 #: Make sure to wrap this in staticmethod() if you store an arbitrary
137 #: function there such as hashlib.sha1 which might be implemented
138 #: as a function.
139 hash_method = staticmethod(_default_hash)
140
141 #: the module used for serialization. Unless overridden by subclasses
142 #: the standard pickle module is used.
143 serialization_method = pickle
144
145 #: if the contents should be base64 quoted. This can be disabled if the
146 #: serialization process returns cookie safe strings only.
147 quote_base64 = True
148
149 def __init__(self, data=None, secret_key=None, new=True):
150 ModificationTrackingDict.__init__(self, data or ())
151 # explicitly convert it into a bytestring because python 2.6
152 # no longer performs an implicit string conversion on hmac
153 if secret_key is not None:
154 secret_key = to_bytes(secret_key, 'utf-8')
155 self.secret_key = secret_key
156 self.new = new
157
158 def __repr__(self):
159 return '<%s %s%s>' % (
160 self.__class__.__name__,
161 dict.__repr__(self),
162 self.should_save and '*' or ''
163 )
164
165 @property
166 def should_save(self):
167 """True if the session should be saved. By default this is only true
168 for :attr:`modified` cookies, not :attr:`new`.
169 """
170 return self.modified
171
172 @classmethod
173 def quote(cls, value):
174 """Quote the value for the cookie. This can be any object supported
175 by :attr:`serialization_method`.
176
177 :param value: the value to quote.
178 """
179 if cls.serialization_method is not None:
180 value = cls.serialization_method.dumps(value)
181 if cls.quote_base64:
182 value = b''.join(base64.b64encode(value).splitlines()).strip()
183 return value
184
185 @classmethod
186 def unquote(cls, value):
187 """Unquote the value for the cookie. If unquoting does not work a
188 :exc:`UnquoteError` is raised.
189
190 :param value: the value to unquote.
191 """
192 try:
193 if cls.quote_base64:
194 value = base64.b64decode(value)
195 if cls.serialization_method is not None:
196 value = cls.serialization_method.loads(value)
197 return value
198 except Exception:
199 # unfortunately pickle and other serialization modules can
200 # cause pretty much every error here. If we get one we catch it
201 # and convert it into an UnquoteError
202 raise UnquoteError()
203
204 def serialize(self, expires=None):
205 """Serialize the secure cookie into a string.
206
207 If expires is provided, the session will be automatically invalidated
208 after expiration when you unserialize it. This provides better
209 protection against session cookie theft.
210
211 :param expires: an optional expiration date for the cookie (a
212 :class:`datetime.datetime` object)
213 """
214 if self.secret_key is None:
215 raise RuntimeError('no secret key defined')
216 if expires:
217 self['_expires'] = _date_to_unix(expires)
218 result = []
219 mac = hmac(self.secret_key, None, self.hash_method)
220 for key, value in sorted(self.items()):
221 result.append(('%s=%s' % (
222 url_quote_plus(key),
223 self.quote(value).decode('ascii')
224 )).encode('ascii'))
225 mac.update(b'|' + result[-1])
226 return b'?'.join([
227 base64.b64encode(mac.digest()).strip(),
228 b'&'.join(result)
229 ])
230
231 @classmethod
232 def unserialize(cls, string, secret_key):
233 """Load the secure cookie from a serialized string.
234
235 :param string: the cookie value to unserialize.
236 :param secret_key: the secret key used to serialize the cookie.
237 :return: a new :class:`SecureCookie`.
238 """
239 if isinstance(string, text_type):
240 string = string.encode('utf-8', 'replace')
241 if isinstance(secret_key, text_type):
242 secret_key = secret_key.encode('utf-8', 'replace')
243 try:
244 base64_hash, data = string.split(b'?', 1)
245 except (ValueError, IndexError):
246 items = ()
247 else:
248 items = {}
249 mac = hmac(secret_key, None, cls.hash_method)
250 for item in data.split(b'&'):
251 mac.update(b'|' + item)
252 if b'=' not in item:
253 items = None
254 break
255 key, value = item.split(b'=', 1)
256 # try to make the key a string
257 key = url_unquote_plus(key.decode('ascii'))
258 try:
259 key = to_native(key)
260 except UnicodeError:
261 pass
262 items[key] = value
263
264 # no parsing error and the mac looks okay, we can now
265 # securely unpickle our cookie.
266 try:
267 client_hash = base64.b64decode(base64_hash)
268 except TypeError:
269 items = client_hash = None
270 if items is not None and safe_str_cmp(client_hash, mac.digest()):
271 try:
272 for key, value in iteritems(items):
273 items[key] = cls.unquote(value)
274 except UnquoteError:
275 items = ()
276 else:
277 if '_expires' in items:
278 if time() > items['_expires']:
279 items = ()
280 else:
281 del items['_expires']
282 else:
283 items = ()
284 return cls(items, secret_key, False)
285
286 @classmethod
287 def load_cookie(cls, request, key='session', secret_key=None):
288 """Loads a :class:`SecureCookie` from a cookie in request. If the
289 cookie is not set, a new :class:`SecureCookie` instance is
290 returned.
291
292 :param request: a request object that has a `cookies` attribute
293 which is a dict of all cookie values.
294 :param key: the name of the cookie.
295 :param secret_key: the secret key used to unquote the cookie.
296 Always provide the value even though it has
297 no default!
298 """
299 data = request.cookies.get(key)
300 if not data:
301 return cls(secret_key=secret_key)
302 return cls.unserialize(data, secret_key)
303
304 def save_cookie(self, response, key='session', expires=None,
305 session_expires=None, max_age=None, path='/', domain=None,
306 secure=None, httponly=False, force=False):
307 """Saves the SecureCookie in a cookie on response object. All
308 parameters that are not described here are forwarded directly
309 to :meth:`~BaseResponse.set_cookie`.
310
311 :param response: a response object that has a
312 :meth:`~BaseResponse.set_cookie` method.
313 :param key: the name of the cookie.
314 :param session_expires: the expiration date of the secure cookie
315 stored information. If this is not provided
316 the cookie `expires` date is used instead.
317 """
318 if force or self.should_save:
319 data = self.serialize(session_expires or expires)
320 response.set_cookie(key, data, expires=expires, max_age=max_age,
321 path=path, domain=domain, secure=secure,
322 httponly=httponly)
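The serialize/unserialize pair removed above boils down to HMAC-signing a serialized payload and verifying the signature before trusting any of its contents. A minimal standalone sketch of that scheme (using JSON instead of pickle; the `sign`/`unsign` names are hypothetical, not part of Werkzeug):

```python
import base64
import hashlib
import hmac
import json


def sign(data, secret_key):
    # Serialize the payload and sign it with HMAC-SHA1, mirroring the
    # "<base64 mac>?<payload>" layout used by SecureCookie.serialize().
    payload = json.dumps(data, sort_keys=True).encode("utf-8")
    mac = hmac.new(secret_key, payload, hashlib.sha1).digest()
    return base64.b64encode(mac) + b"?" + payload


def unsign(blob, secret_key):
    # Split off the mac, recompute it over the payload, and compare in
    # constant time before deserializing anything.
    mac, payload = blob.split(b"?", 1)
    expected = base64.b64encode(
        hmac.new(secret_key, payload, hashlib.sha1).digest()
    )
    if not hmac.compare_digest(mac, expected):
        raise ValueError("signature mismatch")
    return json.loads(payload.decode("utf-8"))


cookie = sign({"user_id": 42}, b"secret")
assert unsign(cookie, b"secret") == {"user_id": 42}
```

Verifying before deserializing is the important design choice here: with pickle as the serialization method, decoding an unverified payload would allow arbitrary code execution.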
+0
-352
werkzeug/contrib/sessions.py less more
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.contrib.sessions
3 ~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module contains some helper classes that help one to add session
6 support to a Python WSGI application. For full client-side session
7 storage see :mod:`~werkzeug.contrib.securecookie` which implements a
8 secure, client-side session storage.
9
10
11 Application Integration
12 =======================
13
14 ::
15
16 from werkzeug.contrib.sessions import SessionMiddleware, \
17 FilesystemSessionStore
18
19 app = SessionMiddleware(app, FilesystemSessionStore())
20
21 The current session will then appear in the WSGI environment as
22 `werkzeug.session`. However, it's recommended to use the stores
23 directly in the application rather than the middleware; only for
24 very simple scripts is a session middleware sufficient.
25
26 This module does not implement methods or ways to check if a session is
27 expired. That should be done by a cron job and is storage specific. For
28 example, to prune unused filesystem sessions one could check the modified
29 time of the files. If sessions are stored in a database, the new()
30 method should add an expiration timestamp for the session.
31
32 For better flexibility it's recommended to not use the middleware but the
33 store and session object directly in the application dispatching::
34
35 session_store = FilesystemSessionStore()
36
37 def application(environ, start_response):
38 request = Request(environ)
39 sid = request.cookies.get('cookie_name')
40 if sid is None:
41 request.session = session_store.new()
42 else:
43 request.session = session_store.get(sid)
44 response = get_the_response_object(request)
45 if request.session.should_save:
46 session_store.save(request.session)
47 response.set_cookie('cookie_name', request.session.sid)
48 return response(environ, start_response)
49
50 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
51 :license: BSD, see LICENSE for more details.
52 """
53 import re
54 import os
55 import tempfile
56 from os import path
57 from time import time
58 from random import random
59 from hashlib import sha1
60 from pickle import dump, load, HIGHEST_PROTOCOL
61
62 from werkzeug.datastructures import CallbackDict
63 from werkzeug.utils import dump_cookie, parse_cookie
64 from werkzeug.wsgi import ClosingIterator
65 from werkzeug.posixemulation import rename
66 from werkzeug._compat import PY2, text_type
67 from werkzeug.filesystem import get_filesystem_encoding
68
69
70 _sha1_re = re.compile(r'^[a-f0-9]{40}$')
71
72
73 def _urandom():
74 if hasattr(os, 'urandom'):
75 return os.urandom(30)
76 return text_type(random()).encode('ascii')
77
78
79 def generate_key(salt=None):
80 if salt is None:
81 salt = repr(salt).encode('ascii')
82 return sha1(b''.join([
83 salt,
84 str(time()).encode('ascii'),
85 _urandom()
86 ])).hexdigest()
87
88
89 class ModificationTrackingDict(CallbackDict):
90 __slots__ = ('modified',)
91
92 def __init__(self, *args, **kwargs):
93 def on_update(self):
94 self.modified = True
95 self.modified = False
96 CallbackDict.__init__(self, on_update=on_update)
97 dict.update(self, *args, **kwargs)
98
99 def copy(self):
100 """Create a flat copy of the dict."""
101 missing = object()
102 result = object.__new__(self.__class__)
103 for name in self.__slots__:
104 val = getattr(self, name, missing)
105 if val is not missing:
106 setattr(result, name, val)
107 return result
108
109 def __copy__(self):
110 return self.copy()
111
112
113 class Session(ModificationTrackingDict):
114
115 """Subclass of a dict that keeps track of direct object changes. Changes
116 in mutable structures are not tracked, for those you have to set
117 `modified` to `True` by hand.
118 """
119 __slots__ = ModificationTrackingDict.__slots__ + ('sid', 'new')
120
121 def __init__(self, data, sid, new=False):
122 ModificationTrackingDict.__init__(self, data)
123 self.sid = sid
124 self.new = new
125
126 def __repr__(self):
127 return '<%s %s%s>' % (
128 self.__class__.__name__,
129 dict.__repr__(self),
130 self.should_save and '*' or ''
131 )
132
133 @property
134 def should_save(self):
135 """True if the session should be saved.
136
137 .. versionchanged:: 0.6
138 By default the session is now only saved if the session is
139 modified, not if it is new like it was before.
140 """
141 return self.modified
142
143
144 class SessionStore(object):
145
146 """Base class for all session stores. The Werkzeug contrib module does not
147 implement any useful stores besides the filesystem store; application
148 developers are encouraged to create their own stores.
149
150 :param session_class: The session class to use. Defaults to
151 :class:`Session`.
152 """
153
154 def __init__(self, session_class=None):
155 if session_class is None:
156 session_class = Session
157 self.session_class = session_class
158
159 def is_valid_key(self, key):
160 """Check if a key has the correct format."""
161 return _sha1_re.match(key) is not None
162
163 def generate_key(self, salt=None):
164 """Simple function that generates a new session key."""
165 return generate_key(salt)
166
167 def new(self):
168 """Generate a new session."""
169 return self.session_class({}, self.generate_key(), True)
170
171 def save(self, session):
172 """Save a session."""
173
174 def save_if_modified(self, session):
175 """Save if a session class wants an update."""
176 if session.should_save:
177 self.save(session)
178
179 def delete(self, session):
180 """Delete a session."""
181
182 def get(self, sid):
183 """Get a session for this sid or a new session object. This method
184 has to check if the session key is valid and create a new session if
185 that isn't the case.
186 """
187 return self.session_class({}, sid, True)
188
189
190 #: used for temporary files by the filesystem session store
191 _fs_transaction_suffix = '.__wz_sess'
192
193
194 class FilesystemSessionStore(SessionStore):
195
196 """Simple example session store that saves sessions on the filesystem.
197 This store works best on POSIX systems and Windows Vista / Windows
198 Server 2008 and newer.
199
200 .. versionchanged:: 0.6
201 `renew_missing` was added. Previously this was considered `True`,
202 now the default is `False` and it can be explicitly
203 enabled.
204
205 :param path: the path to the folder used for storing the sessions.
206 If not provided the default temporary directory is used.
207 :param filename_template: a string template used to give the session
208 a filename. ``%s`` is replaced with the
209 session id.
210 :param session_class: The session class to use. Defaults to
211 :class:`Session`.
212 :param renew_missing: set to `True` if you want the store to
213 give the user a new sid if the session was
214 not yet saved.
215 """
216
217 def __init__(self, path=None, filename_template='werkzeug_%s.sess',
218 session_class=None, renew_missing=False, mode=0o644):
219 SessionStore.__init__(self, session_class)
220 if path is None:
221 path = tempfile.gettempdir()
222 self.path = path
223 if isinstance(filename_template, text_type) and PY2:
224 filename_template = filename_template.encode(
225 get_filesystem_encoding())
226 assert not filename_template.endswith(_fs_transaction_suffix), \
227 'filename templates may not end with %s' % _fs_transaction_suffix
228 self.filename_template = filename_template
229 self.renew_missing = renew_missing
230 self.mode = mode
231
232 def get_session_filename(self, sid):
233 # out of the box, this should be a strict ASCII subset but
234 # you might reconfigure the session object to have a more
235 # arbitrary string.
236 if isinstance(sid, text_type) and PY2:
237 sid = sid.encode(get_filesystem_encoding())
238 return path.join(self.path, self.filename_template % sid)
239
240 def save(self, session):
241 fn = self.get_session_filename(session.sid)
242 fd, tmp = tempfile.mkstemp(suffix=_fs_transaction_suffix,
243 dir=self.path)
244 f = os.fdopen(fd, 'wb')
245 try:
246 dump(dict(session), f, HIGHEST_PROTOCOL)
247 finally:
248 f.close()
249 try:
250 rename(tmp, fn)
251 os.chmod(fn, self.mode)
252 except (IOError, OSError):
253 pass
254
255 def delete(self, session):
256 fn = self.get_session_filename(session.sid)
257 try:
258 os.unlink(fn)
259 except OSError:
260 pass
261
262 def get(self, sid):
263 if not self.is_valid_key(sid):
264 return self.new()
265 try:
266 f = open(self.get_session_filename(sid), 'rb')
267 except IOError:
268 if self.renew_missing:
269 return self.new()
270 data = {}
271 else:
272 try:
273 try:
274 data = load(f)
275 except Exception:
276 data = {}
277 finally:
278 f.close()
279 return self.session_class(data, sid, False)
280
281 def list(self):
282 """Lists all sessions in the store.
283
284 .. versionadded:: 0.6
285 """
286 before, after = self.filename_template.split('%s', 1)
287 filename_re = re.compile(r'%s(.{5,})%s$' % (re.escape(before),
288 re.escape(after)))
289 result = []
290 for filename in os.listdir(self.path):
291 #: this is a session that is still being saved.
292 if filename.endswith(_fs_transaction_suffix):
293 continue
294 match = filename_re.match(filename)
295 if match is not None:
296 result.append(match.group(1))
297 return result
298
299
300 class SessionMiddleware(object):
301
302 """A simple middleware that puts the session object of a store provided
303 into the WSGI environ. It automatically sets cookies and restores
304 sessions.
305
306 However, a middleware is not the preferred solution because it won't be as
307 fast as sessions managed by the application itself, and it puts a key into
308 the WSGI environment that is only relevant for the application, which goes
309 against the concept of WSGI.
310
311 The cookie parameters are the same as for the :func:`~dump_cookie`
312 function just prefixed with ``cookie_``. Additionally `max_age` is
313 called `cookie_age` and not `cookie_max_age` because of backwards
314 compatibility.
315 """
316
317 def __init__(self, app, store, cookie_name='session_id',
318 cookie_age=None, cookie_expires=None, cookie_path='/',
319 cookie_domain=None, cookie_secure=None,
320 cookie_httponly=False, environ_key='werkzeug.session'):
321 self.app = app
322 self.store = store
323 self.cookie_name = cookie_name
324 self.cookie_age = cookie_age
325 self.cookie_expires = cookie_expires
326 self.cookie_path = cookie_path
327 self.cookie_domain = cookie_domain
328 self.cookie_secure = cookie_secure
329 self.cookie_httponly = cookie_httponly
330 self.environ_key = environ_key
331
332 def __call__(self, environ, start_response):
333 cookie = parse_cookie(environ.get('HTTP_COOKIE', ''))
334 sid = cookie.get(self.cookie_name, None)
335 if sid is None:
336 session = self.store.new()
337 else:
338 session = self.store.get(sid)
339 environ[self.environ_key] = session
340
341 def injecting_start_response(status, headers, exc_info=None):
342 if session.should_save:
343 self.store.save(session)
344 headers.append(('Set-Cookie', dump_cookie(self.cookie_name,
345 session.sid, self.cookie_age,
346 self.cookie_expires, self.cookie_path,
347 self.cookie_domain, self.cookie_secure,
348 self.cookie_httponly)))
349 return start_response(status, headers, exc_info)
350 return ClosingIterator(self.app(environ, injecting_start_response),
351 lambda: self.store.save_if_modified(session))
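The `SessionStore` interface removed above (``new``/``get``/``save``/``delete``) can be mirrored by a minimal in-memory store. This is a hypothetical sketch, not a drop-in replacement; a real store also needs expiry handling and key validation as described in the module docstring:

```python
import uuid


class MemorySessionStore:
    # Minimal in-memory store following the SessionStore interface.
    # Sessions are plain dicts of the form {"sid": ..., "data": {...}}.

    def __init__(self):
        self._data = {}

    def new(self):
        # Generate a fresh session with a random id.
        return {"sid": uuid.uuid4().hex, "data": {}}

    def save(self, session):
        self._data[session["sid"]] = dict(session["data"])

    def get(self, sid):
        # Unknown sids get a brand-new session, matching the contract
        # of SessionStore.get().
        if sid not in self._data:
            return self.new()
        return {"sid": sid, "data": dict(self._data[sid])}

    def delete(self, session):
        self._data.pop(session["sid"], None)


store = MemorySessionStore()
session = store.new()
session["data"]["user"] = "alice"
store.save(session)
assert store.get(session["sid"])["data"] == {"user": "alice"}
```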
+0
-73
werkzeug/contrib/testtools.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.testtools
3 ~~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module implements extended wrappers for simplified testing.
6
7 `TestResponse`
8 A response wrapper which adds various cached attributes for
9 simplified assertions on various content types.
10
11 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
12 :license: BSD, see LICENSE for more details.
13 """
14 from werkzeug.utils import cached_property, import_string
15 from werkzeug.wrappers import Response
16
17 from warnings import warn
18 warn(DeprecationWarning('werkzeug.contrib.testtools is deprecated and '
19 'will be removed with Werkzeug 1.0'))
20
21
22 class ContentAccessors(object):
23
24 """
25 A mixin class for response objects that provides a couple of useful
26 accessors for unittesting.
27 """
28
29 def xml(self):
30 """Get an etree if possible."""
31 if 'xml' not in self.mimetype:
32 raise AttributeError(
33 'Not a XML response (Content-Type: %s)'
34 % self.mimetype)
35 for module in ['xml.etree.ElementTree', 'ElementTree',
36 'elementtree.ElementTree']:
37 etree = import_string(module, silent=True)
38 if etree is not None:
39 return etree.XML(self.body)
40 raise RuntimeError('You must have ElementTree installed '
41 'to use TestResponse.xml')
42 xml = cached_property(xml)
43
44 def lxml(self):
45 """Get an lxml etree if possible."""
46 if ('html' not in self.mimetype and 'xml' not in self.mimetype):
47 raise AttributeError('Not an HTML/XML response')
48 from lxml import etree
49 try:
50 from lxml.html import fromstring
51 except ImportError:
52 fromstring = etree.HTML
53 if self.mimetype == 'text/html':
54 return fromstring(self.data)
55 return etree.XML(self.data)
56 lxml = cached_property(lxml)
57
58 def json(self):
59 """Get the result of simplejson.loads if possible."""
60 if 'json' not in self.mimetype:
61 raise AttributeError('Not a JSON response')
62 try:
63 from simplejson import loads
64 except ImportError:
65 from json import loads
66 return loads(self.data)
67 json = cached_property(json)
68
69
70 class TestResponse(Response, ContentAccessors):
71
72 """Pass this to `werkzeug.test.Client` for easier unittesting."""
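The accessor pattern used by `ContentAccessors` above (guard on the mimetype, then decode the body lazily) can be sketched with a hypothetical response stub:

```python
import json


class ResponseStub:
    # Hypothetical stand-in for a response object: checks the mimetype
    # before decoding, raising AttributeError for mismatches just like
    # the ContentAccessors mixin does.

    def __init__(self, data, mimetype):
        self.data = data
        self.mimetype = mimetype

    @property
    def json(self):
        if "json" not in self.mimetype:
            raise AttributeError("Not a JSON response")
        return json.loads(self.data)


resp = ResponseStub('{"ok": true}', "application/json")
assert resp.json == {"ok": True}
```

Raising `AttributeError` (rather than `ValueError`) keeps a failed accessor behaving like a missing attribute, which is convenient in test assertions.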
+0
-284
werkzeug/contrib/wrappers.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.contrib.wrappers
3 ~~~~~~~~~~~~~~~~~~~~~~~~~
4
5 Extra wrappers or mixins contributed by the community. These wrappers can
6 be mixed into request objects to add extra functionality.
7
8 Example::
9
10 from werkzeug.wrappers import Request as RequestBase
11 from werkzeug.contrib.wrappers import JSONRequestMixin
12
13 class Request(RequestBase, JSONRequestMixin):
14 pass
15
16 Afterwards this request object provides the extra functionality of the
17 :class:`JSONRequestMixin`.
18
19 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
20 :license: BSD, see LICENSE for more details.
21 """
22 import codecs
23 try:
24 from simplejson import loads
25 except ImportError:
26 from json import loads
27
28 from werkzeug.exceptions import BadRequest
29 from werkzeug.utils import cached_property
30 from werkzeug.http import dump_options_header, parse_options_header
31 from werkzeug._compat import wsgi_decoding_dance
32
33
34 def is_known_charset(charset):
35 """Checks if the given charset is known to Python."""
36 try:
37 codecs.lookup(charset)
38 except LookupError:
39 return False
40 return True
41
42
43 class JSONRequestMixin(object):
44
45 """Add json method to a request object. This will parse the input data
46 through simplejson if possible.
47
48 :exc:`~werkzeug.exceptions.BadRequest` will be raised if the content-type
49 is not json or if the data itself cannot be parsed as json.
50 """
51
52 @cached_property
53 def json(self):
54 """Get the result of simplejson.loads if possible."""
55 if 'json' not in self.environ.get('CONTENT_TYPE', ''):
56 raise BadRequest('Not a JSON request')
57 try:
58 return loads(self.data.decode(self.charset, self.encoding_errors))
59 except Exception:
60 raise BadRequest('Unable to read JSON request')
61
62
63 class ProtobufRequestMixin(object):
64
65 """Add protobuf parsing method to a request object. This will parse the
66 input data through `protobuf`_ if possible.
67
68 :exc:`~werkzeug.exceptions.BadRequest` will be raised if the content-type
69 is not protobuf or if the data itself cannot be parsed properly.
70
71 .. _protobuf: http://code.google.com/p/protobuf/
72 """
73
74 #: by default the :class:`ProtobufRequestMixin` will raise a
75 #: :exc:`~werkzeug.exceptions.BadRequest` if the object is not
76 #: initialized. You can bypass that check by setting this
77 #: attribute to `False`.
78 protobuf_check_initialization = True
79
80 def parse_protobuf(self, proto_type):
81 """Parse the data into an instance of proto_type."""
82 if 'protobuf' not in self.environ.get('CONTENT_TYPE', ''):
83 raise BadRequest('Not a Protobuf request')
84
85 obj = proto_type()
86 try:
87 obj.ParseFromString(self.data)
88 except Exception:
89 raise BadRequest("Unable to parse Protobuf request")
90
91 # Fail if not all required fields are set
92 if self.protobuf_check_initialization and not obj.IsInitialized():
93 raise BadRequest("Partial Protobuf request")
94
95 return obj
96
97
98 class RoutingArgsRequestMixin(object):
99
100 """This request mixin adds support for the wsgiorg routing args
101 `specification`_.
102
103 .. _specification: https://wsgi.readthedocs.io/en/latest/specifications/routing_args.html
104 """
105
106 def _get_routing_args(self):
107 return self.environ.get('wsgiorg.routing_args', (()))[0]
108
109 def _set_routing_args(self, value):
110 if self.shallow:
111 raise RuntimeError('A shallow request tried to modify the WSGI '
112 'environment. If you really want to do that, '
113 'set `shallow` to False.')
114 self.environ['wsgiorg.routing_args'] = (value, self.routing_vars)
115
116 routing_args = property(_get_routing_args, _set_routing_args, doc='''
117 The positional URL arguments as `tuple`.''')
118 del _get_routing_args, _set_routing_args
119
120 def _get_routing_vars(self):
121 rv = self.environ.get('wsgiorg.routing_args')
122 if rv is not None:
123 return rv[1]
124 rv = {}
125 if not self.shallow:
126 self.routing_vars = rv
127 return rv
128
129 def _set_routing_vars(self, value):
130 if self.shallow:
131 raise RuntimeError('A shallow request tried to modify the WSGI '
132 'environment. If you really want to do that, '
133 'set `shallow` to False.')
134 self.environ['wsgiorg.routing_args'] = (self.routing_args, value)
135
136 routing_vars = property(_get_routing_vars, _set_routing_vars, doc='''
137 The keyword URL arguments as `dict`.''')
138 del _get_routing_vars, _set_routing_vars
139
140
141 class ReverseSlashBehaviorRequestMixin(object):
142
143 """This mixin reverses the trailing slash behavior of :attr:`script_root`
144 and :attr:`path`. This makes it possible to use :func:`~urlparse.urljoin`
145 directly on the paths.
146
147 Because it changes the behavior of :class:`Request` this class has to be
148 mixed in *before* the actual request class::
149
150 class MyRequest(ReverseSlashBehaviorRequestMixin, Request):
151 pass
152
153 This example shows the differences (for an application mounted on
154 `/application` and the request going to `/application/foo/bar`):
155
156 +---------------+-------------------+---------------------+
157 | | normal behavior | reverse behavior |
158 +===============+===================+=====================+
159 | `script_root` | ``/application`` | ``/application/`` |
160 +---------------+-------------------+---------------------+
161 | `path` | ``/foo/bar`` | ``foo/bar`` |
162 +---------------+-------------------+---------------------+
163 """
164
165 @cached_property
166 def path(self):
167 """Requested path as unicode. This works a bit like the regular path
168 info in the WSGI environment but will not include a leading slash.
169 """
170 path = wsgi_decoding_dance(self.environ.get('PATH_INFO') or '',
171 self.charset, self.encoding_errors)
172 return path.lstrip('/')
173
174 @cached_property
175 def script_root(self):
176 """The root path of the script including a trailing slash."""
177 path = wsgi_decoding_dance(self.environ.get('SCRIPT_NAME') or '',
178 self.charset, self.encoding_errors)
179 return path.rstrip('/') + '/'
180
181
182 class DynamicCharsetRequestMixin(object):
183
184 """If this mixin is mixed into a request class it will provide
185 a dynamic `charset` attribute. This means that if the charset is
186 transmitted in the content type headers it's used from there.
187
188 Because it changes the behavior of :class:`Request` this class has
189 to be mixed in *before* the actual request class::
190
191 class MyRequest(DynamicCharsetRequestMixin, Request):
192 pass
193
194 By default the request object assumes that the URL charset is the
195 same as the data charset. If the charset varies on each request
196 based on the transmitted data it's not a good idea to let the URLs
197 change based on that. Most browsers assume either utf-8 or latin1
198 for the URLs if they have trouble figuring it out. It's strongly
199 recommended to set the URL charset to utf-8::
200
201 class MyRequest(DynamicCharsetRequestMixin, Request):
202 url_charset = 'utf-8'
203
204 .. versionadded:: 0.6
205 """
206
207 #: the default charset that is assumed if the content type header
208 #: is missing or does not contain a charset parameter. The default
209 #: is latin1 which is what HTTP specifies as default charset.
210 #: You may however want to set this to utf-8 to better support
211 #: browsers that do not transmit a charset for incoming data.
212 default_charset = 'latin1'
213
214 def unknown_charset(self, charset):
215 """Called if a charset was provided but is not supported by
216 the Python codecs module. By default latin1 is assumed so as
217 not to lose any information; you may override this method to
218 change the behavior.
219
220 :param charset: the charset that was not found.
221 :return: the replacement charset.
222 """
223 return 'latin1'
224
225 @cached_property
226 def charset(self):
227 """The charset from the content type."""
228 header = self.environ.get('CONTENT_TYPE')
229 if header:
230 ct, options = parse_options_header(header)
231 charset = options.get('charset')
232 if charset:
233 if is_known_charset(charset):
234 return charset
235 return self.unknown_charset(charset)
236 return self.default_charset
237
238
239 class DynamicCharsetResponseMixin(object):
240
241 """If this mixin is mixed into a response class it will provide
242 a dynamic `charset` attribute. This means that the charset is
243 looked up in and stored in the `Content-Type` header and updates
244 itself automatically. This also means a small performance hit but
245 can be useful if you're working with different charsets on
246 responses.
247
248 Because the charset attribute is not a property at class-level, the
249 default value is stored in `default_charset`.
250
251 Because it changes the behavior of :class:`Response` this class has
252 to be mixed in *before* the actual response class::
253
254 class MyResponse(DynamicCharsetResponseMixin, Response):
255 pass
256
257 .. versionadded:: 0.6
258 """
259
260 #: the default charset.
261 default_charset = 'utf-8'
262
263 def _get_charset(self):
264 header = self.headers.get('content-type')
265 if header:
266 charset = parse_options_header(header)[1].get('charset')
267 if charset:
268 return charset
269 return self.default_charset
270
271 def _set_charset(self, charset):
272 header = self.headers.get('content-type')
273 ct, options = parse_options_header(header)
274 if not ct:
275 raise TypeError('Cannot set charset if Content-Type '
276 'header is missing.')
277 options['charset'] = charset
278 self.headers['Content-Type'] = dump_options_header(ct, options)
279
280 charset = property(_get_charset, _set_charset, doc="""
281 The charset for the response. It's stored inside the
282 Content-Type header as a parameter.""")
283 del _get_charset, _set_charset
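The charset lookup implemented by the two mixins above (parse the ``Content-Type`` header, fall back to a default when the charset parameter is missing or unknown to Python) can be sketched with the standard library. The helper name is hypothetical:

```python
import codecs
from email.message import Message


def charset_from_content_type(header, default="latin1"):
    # Parse the charset parameter out of a Content-Type header, falling
    # back to the default when it is absent or unknown to Python, just
    # as DynamicCharsetRequestMixin.charset does.
    msg = Message()
    msg["Content-Type"] = header
    charset = msg.get_param("charset")
    if charset is None:
        return default
    try:
        codecs.lookup(charset)
    except LookupError:
        return default
    return charset


assert charset_from_content_type("text/html; charset=utf-8") == "utf-8"
assert charset_from_content_type("text/html") == "latin1"
```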
+0
-2762
werkzeug/datastructures.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.datastructures
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 This module provides mixins and classes with an immutable interface.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 import re
11 import codecs
12 import mimetypes
13 from copy import deepcopy
14 from itertools import repeat
15 from collections import Container, Iterable, MutableSet
16
17 from werkzeug._internal import _missing, _empty_stream
18 from werkzeug._compat import iterkeys, itervalues, iteritems, iterlists, \
19 PY2, text_type, integer_types, string_types, make_literal_wrapper, \
20 to_native
21 from werkzeug.filesystem import get_filesystem_encoding
22
23
24 _locale_delim_re = re.compile(r'[_-]')
25
26
27 def is_immutable(self):
28 raise TypeError('%r objects are immutable' % self.__class__.__name__)
29
30
31 def iter_multi_items(mapping):
32 """Iterates over the items of a mapping yielding keys and values
33 without dropping any from more complex structures.
34 """
35 if isinstance(mapping, MultiDict):
36 for item in iteritems(mapping, multi=True):
37 yield item
38 elif isinstance(mapping, dict):
39 for key, value in iteritems(mapping):
40 if isinstance(value, (tuple, list)):
41 for value in value:
42 yield key, value
43 else:
44 yield key, value
45 else:
46 for item in mapping:
47 yield item
48
49
50 def native_itermethods(names):
51 if not PY2:
52 return lambda x: x
53
54 def setviewmethod(cls, name):
55 viewmethod_name = 'view%s' % name
56 viewmethod = lambda self, *a, **kw: ViewItems(self, name, 'view_%s' % name, *a, **kw)
57 viewmethod.__doc__ = \
58 '"""`%s()` object providing a view on %s"""' % (viewmethod_name, name)
59 setattr(cls, viewmethod_name, viewmethod)
60
61 def setitermethod(cls, name):
62 itermethod = getattr(cls, name)
63 setattr(cls, 'iter%s' % name, itermethod)
64 listmethod = lambda self, *a, **kw: list(itermethod(self, *a, **kw))
65 listmethod.__doc__ = \
66 'Like :py:meth:`iter%s`, but returns a list.' % name
67 setattr(cls, name, listmethod)
68
69 def wrap(cls):
70 for name in names:
71 setitermethod(cls, name)
72 setviewmethod(cls, name)
73 return cls
74 return wrap
75
76
77 class ImmutableListMixin(object):
78
79 """Makes a :class:`list` immutable.
80
81 .. versionadded:: 0.5
82
83 :private:
84 """
85
86 _hash_cache = None
87
88 def __hash__(self):
89 if self._hash_cache is not None:
90 return self._hash_cache
91 rv = self._hash_cache = hash(tuple(self))
92 return rv
93
94 def __reduce_ex__(self, protocol):
95 return type(self), (list(self),)
96
97 def __delitem__(self, key):
98 is_immutable(self)
99
100 def __iadd__(self, other):
101 is_immutable(self)
102 __imul__ = __iadd__
103
104 def __setitem__(self, key, value):
105 is_immutable(self)
106
107 def append(self, item):
108 is_immutable(self)
109 remove = append
110
111 def extend(self, iterable):
112 is_immutable(self)
113
114 def insert(self, pos, value):
115 is_immutable(self)
116
117 def pop(self, index=-1):
118 is_immutable(self)
119
120 def reverse(self):
121 is_immutable(self)
122
123 def sort(self, cmp=None, key=None, reverse=None):
124 is_immutable(self)
125
126
127 class ImmutableList(ImmutableListMixin, list):
128
129 """An immutable :class:`list`.
130
131 .. versionadded:: 0.5
132
133 :private:
134 """
135
136 def __repr__(self):
137 return '%s(%s)' % (
138 self.__class__.__name__,
139 list.__repr__(self),
140 )
141
142
143 class ImmutableDictMixin(object):
144
145 """Makes a :class:`dict` immutable.
146
147 .. versionadded:: 0.5
148
149 :private:
150 """
151 _hash_cache = None
152
153 @classmethod
154 def fromkeys(cls, keys, value=None):
155 instance = super(cls, cls).__new__(cls)
156 instance.__init__(zip(keys, repeat(value)))
157 return instance
158
159 def __reduce_ex__(self, protocol):
160 return type(self), (dict(self),)
161
162 def _iter_hashitems(self):
163 return iteritems(self)
164
165 def __hash__(self):
166 if self._hash_cache is not None:
167 return self._hash_cache
168 rv = self._hash_cache = hash(frozenset(self._iter_hashitems()))
169 return rv
170
171 def setdefault(self, key, default=None):
172 is_immutable(self)
173
174 def update(self, *args, **kwargs):
175 is_immutable(self)
176
177 def pop(self, key, default=None):
178 is_immutable(self)
179
180 def popitem(self):
181 is_immutable(self)
182
183 def __setitem__(self, key, value):
184 is_immutable(self)
185
186 def __delitem__(self, key):
187 is_immutable(self)
188
189 def clear(self):
190 is_immutable(self)
191
192
193 class ImmutableMultiDictMixin(ImmutableDictMixin):
194
195 """Makes a :class:`MultiDict` immutable.
196
197 .. versionadded:: 0.5
198
199 :private:
200 """
201
202 def __reduce_ex__(self, protocol):
203 return type(self), (list(iteritems(self, multi=True)),)
204
205 def _iter_hashitems(self):
206 return iteritems(self, multi=True)
207
208 def add(self, key, value):
209 is_immutable(self)
210
211 def popitemlist(self):
212 is_immutable(self)
213
214 def poplist(self, key):
215 is_immutable(self)
216
217 def setlist(self, key, new_list):
218 is_immutable(self)
219
220 def setlistdefault(self, key, default_list=None):
221 is_immutable(self)
222
223
224 class UpdateDictMixin(object):
225
226 """Makes dicts call `self.on_update` on modifications.
227
228 .. versionadded:: 0.5
229
230 :private:
231 """
232
233 on_update = None
234
235 def calls_update(name):
236 def oncall(self, *args, **kw):
237 rv = getattr(super(UpdateDictMixin, self), name)(*args, **kw)
238 if self.on_update is not None:
239 self.on_update(self)
240 return rv
241 oncall.__name__ = name
242 return oncall
243
244 def setdefault(self, key, default=None):
245 modified = key not in self
246 rv = super(UpdateDictMixin, self).setdefault(key, default)
247 if modified and self.on_update is not None:
248 self.on_update(self)
249 return rv
250
251 def pop(self, key, default=_missing):
252 modified = key in self
253 if default is _missing:
254 rv = super(UpdateDictMixin, self).pop(key)
255 else:
256 rv = super(UpdateDictMixin, self).pop(key, default)
257 if modified and self.on_update is not None:
258 self.on_update(self)
259 return rv
260
261 __setitem__ = calls_update('__setitem__')
262 __delitem__ = calls_update('__delitem__')
263 clear = calls_update('clear')
264 popitem = calls_update('popitem')
265 update = calls_update('update')
266 del calls_update
267
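The update-notification pattern used by `UpdateDictMixin` can be sketched standalone. `NotifyingDict` below is a hypothetical illustration, not a Werkzeug class, and only wires up `__setitem__` for brevity:

```python
# Minimal sketch of the UpdateDictMixin idea: a dict that invokes an
# on_update callback after each mutation.  NotifyingDict is a
# hypothetical name, not part of Werkzeug; only __setitem__ is
# instrumented here for brevity.
class NotifyingDict(dict):
    def __init__(self, initial=None, on_update=None):
        super(NotifyingDict, self).__init__(initial or ())
        self.on_update = on_update

    def __setitem__(self, key, value):
        super(NotifyingDict, self).__setitem__(key, value)
        if self.on_update is not None:
            self.on_update(self)


changes = []
d = NotifyingDict(on_update=lambda mapping: changes.append(dict(mapping)))
d["a"] = 1
d["b"] = 2
# changes now holds one snapshot per modification:
# [{'a': 1}, {'a': 1, 'b': 2}]
```

The mixin above generalizes this with `calls_update`, which wraps every mutating dict method the same way instead of overriding each one by hand.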
268
269 class TypeConversionDict(dict):
270
271 """Works like a regular dict but the :meth:`get` method can perform
272 type conversions. :class:`MultiDict` and :class:`CombinedMultiDict`
273 are subclasses of this class and provide the same feature.
274
275 .. versionadded:: 0.5
276 """
277
278 def get(self, key, default=None, type=None):
279 """Return the default value if the requested data doesn't exist.
280 If `type` is provided and is a callable, it should convert the value
281 and return it, or raise a :exc:`ValueError` if conversion is not
282 possible. In that case the function returns the default as if the
283 value was not found:
284
285 >>> d = TypeConversionDict(foo='42', bar='blub')
286 >>> d.get('foo', type=int)
287 42
288 >>> d.get('bar', -1, type=int)
289 -1
290
291 :param key: The key to be looked up.
292 :param default: The default value to be returned if the key can't
293 be looked up. If not further specified `None` is
294 returned.
295 :param type: A callable that is used to cast the value in the
296 :class:`MultiDict`. If a :exc:`ValueError` is raised
297 by this callable the default value is returned.
298 """
299 try:
300 rv = self[key]
301 except KeyError:
302 return default
303 if type is not None:
304 try:
305 rv = type(rv)
306 except ValueError:
307 rv = default
308 return rv
309
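The `get(key, default, type)` conversion behaviour documented above can be reproduced on a plain dict so the example is self-contained. `ConvertingDict` is a hypothetical name used only for this sketch:

```python
# Sketch of TypeConversionDict.get's conversion behaviour on a plain
# dict subclass.  ConvertingDict is a hypothetical name, not
# Werkzeug's class.
class ConvertingDict(dict):
    def get(self, key, default=None, type=None):
        try:
            rv = self[key]
        except KeyError:
            return default
        if type is not None:
            try:
                rv = type(rv)
            except ValueError:
                rv = default
        return rv


d = ConvertingDict(foo="42", bar="blub")
as_int = d.get("foo", type=int)        # converts "42" -> 42
fallback = d.get("bar", -1, type=int)  # int("blub") raises ValueError -> -1
```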
310
311 class ImmutableTypeConversionDict(ImmutableDictMixin, TypeConversionDict):
312
313 """Works like a :class:`TypeConversionDict` but does not support
314 modifications.
315
316 .. versionadded:: 0.5
317 """
318
319 def copy(self):
320 """Return a shallow mutable copy of this object. Keep in mind that
321 the standard library's :func:`copy` function is a no-op for this class
322 like for any other python immutable type (eg: :class:`tuple`).
323 """
324 return TypeConversionDict(self)
325
326 def __copy__(self):
327 return self
328
329
330 class ViewItems(object):
331
332 def __init__(self, multi_dict, method, repr_name, *a, **kw):
333 self.__multi_dict = multi_dict
334 self.__method = method
335 self.__repr_name = repr_name
336 self.__a = a
337 self.__kw = kw
338
339 def __get_items(self):
340 return getattr(self.__multi_dict, self.__method)(*self.__a, **self.__kw)
341
342 def __repr__(self):
343 return '%s(%r)' % (self.__repr_name, list(self.__get_items()))
344
345 def __iter__(self):
346 return iter(self.__get_items())
347
348
349 @native_itermethods(['keys', 'values', 'items', 'lists', 'listvalues'])
350 class MultiDict(TypeConversionDict):
351
352 """A :class:`MultiDict` is a dictionary subclass customized to deal with
353 multiple values for the same key which is for example used by the parsing
354 functions in the wrappers. This is necessary because some HTML form
355 elements pass multiple values for the same key.
356
357 :class:`MultiDict` implements all standard dictionary methods.
358 Internally, it saves all values for a key as a list, but the standard dict
359 access methods will only return the first value for a key. If you want to
360 gain access to the other values, too, you have to use the `list` methods as
361 explained below.
362
363 Basic Usage:
364
365 >>> d = MultiDict([('a', 'b'), ('a', 'c')])
366 >>> d
367 MultiDict([('a', 'b'), ('a', 'c')])
368 >>> d['a']
369 'b'
370 >>> d.getlist('a')
371 ['b', 'c']
372 >>> 'a' in d
373 True
374
375 It behaves like a normal dict; thus all dict functions will only return
376 the first value when multiple values for one key are found.
377
378 From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a
379 subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will
380 render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP
381 exceptions.
382
383 A :class:`MultiDict` can be constructed from an iterable of
384 ``(key, value)`` tuples, a dict, a :class:`MultiDict` or from Werkzeug 0.2
385 onwards some keyword parameters.
386
387 :param mapping: the initial value for the :class:`MultiDict`. Either a
388 regular dict, an iterable of ``(key, value)`` tuples
389 or `None`.
390 """
391
392 def __init__(self, mapping=None):
393 if isinstance(mapping, MultiDict):
394 dict.__init__(self, ((k, l[:]) for k, l in iterlists(mapping)))
395 elif isinstance(mapping, dict):
396 tmp = {}
397 for key, value in iteritems(mapping):
398 if isinstance(value, (tuple, list)):
399 if len(value) == 0:
400 continue
401 value = list(value)
402 else:
403 value = [value]
404 tmp[key] = value
405 dict.__init__(self, tmp)
406 else:
407 tmp = {}
408 for key, value in mapping or ():
409 tmp.setdefault(key, []).append(value)
410 dict.__init__(self, tmp)
411
412 def __getstate__(self):
413 return dict(self.lists())
414
415 def __setstate__(self, value):
416 dict.clear(self)
417 dict.update(self, value)
418
419 def __getitem__(self, key):
420 """Return the first data value for this key;
421 raises KeyError if not found.
422
423 :param key: The key to be looked up.
424 :raise KeyError: if the key does not exist.
425 """
426 if key in self:
427 lst = dict.__getitem__(self, key)
428 if len(lst) > 0:
429 return lst[0]
430 raise exceptions.BadRequestKeyError(key)
431
432 def __setitem__(self, key, value):
433 """Like :meth:`add` but removes an existing key first.
434
435 :param key: the key for the value.
436 :param value: the value to set.
437 """
438 dict.__setitem__(self, key, [value])
439
440 def add(self, key, value):
441 """Adds a new value for the key.
442
443 .. versionadded:: 0.6
444
445 :param key: the key for the value.
446 :param value: the value to add.
447 """
448 dict.setdefault(self, key, []).append(value)
449
450 def getlist(self, key, type=None):
451 """Return the list of items for a given key. If that key is not in the
452 `MultiDict`, the return value will be an empty list. Just like `get`,
453 `getlist` accepts a `type` parameter. All items will be converted
454 with the callable defined there.
455
456 :param key: The key to be looked up.
457 :param type: A callable that is used to cast the value in the
458 :class:`MultiDict`. If a :exc:`ValueError` is raised
459 by this callable the value will be removed from the list.
460 :return: a :class:`list` of all the values for the key.
461 """
462 try:
463 rv = dict.__getitem__(self, key)
464 except KeyError:
465 return []
466 if type is None:
467 return list(rv)
468 result = []
469 for item in rv:
470 try:
471 result.append(type(item))
472 except ValueError:
473 pass
474 return result
475
476 def setlist(self, key, new_list):
477 """Remove the old values for a key and add new ones. Note that the list
478 you pass the values in will be shallow-copied before it is inserted in
479 the dictionary.
480
481 >>> d = MultiDict()
482 >>> d.setlist('foo', ['1', '2'])
483 >>> d['foo']
484 '1'
485 >>> d.getlist('foo')
486 ['1', '2']
487
488 :param key: The key for which the values are set.
489 :param new_list: An iterable with the new values for the key. Old values
490 are removed first.
491 """
492 dict.__setitem__(self, key, list(new_list))
493
494 def setdefault(self, key, default=None):
495 """Returns the value for the key if it is in the dict, otherwise it
496 returns `default` and sets that value for `key`.
497
498 :param key: The key to be looked up.
499 :param default: The default value to be returned if the key is not
500 in the dict. If not further specified it's `None`.
501 """
502 if key not in self:
503 self[key] = default
504 else:
505 default = self[key]
506 return default
507
508 def setlistdefault(self, key, default_list=None):
509 """Like `setdefault` but sets multiple values. The list returned
510 is not a copy, but the list that is actually used internally. This
511 means that you can put new values into the dict by appending items
512 to the list:
513
514 >>> d = MultiDict({"foo": 1})
515 >>> d.setlistdefault("foo").extend([2, 3])
516 >>> d.getlist("foo")
517 [1, 2, 3]
518
519 :param key: The key to be looked up.
520 :param default_list: An iterable of default values. It is either copied
521 (in case it was a list) or converted into a list
522 before being returned.
523 :return: a :class:`list`
524 """
525 if key not in self:
526 default_list = list(default_list or ())
527 dict.__setitem__(self, key, default_list)
528 else:
529 default_list = dict.__getitem__(self, key)
530 return default_list
531
532 def items(self, multi=False):
533 """Return an iterator of ``(key, value)`` pairs.
534
535 :param multi: If set to `True` the iterator returned will have a pair
536 for each value of each key. Otherwise it will only
537 contain pairs for the first value of each key.
538 """
539
540 for key, values in iteritems(dict, self):
541 if multi:
542 for value in values:
543 yield key, value
544 else:
545 yield key, values[0]
546
547 def lists(self):
548 """Return a list of ``(key, values)`` pairs, where values is the list
549 of all values associated with the key."""
550
551 for key, values in iteritems(dict, self):
552 yield key, list(values)
553
554 def keys(self):
555 return iterkeys(dict, self)
556
557 __iter__ = keys
558
559 def values(self):
560 """Returns an iterator of the first value in every key's value list."""
561 for values in itervalues(dict, self):
562 yield values[0]
563
564 def listvalues(self):
565 """Return an iterator of all values associated with a key. Zipping
566 :meth:`keys` and this is the same as calling :meth:`lists`:
567
568 >>> d = MultiDict({"foo": [1, 2, 3]})
569 >>> zip(d.keys(), d.listvalues()) == d.lists()
570 True
571 """
572
573 return itervalues(dict, self)
574
575 def copy(self):
576 """Return a shallow copy of this object."""
577 return self.__class__(self)
578
579 def deepcopy(self, memo=None):
580 """Return a deep copy of this object."""
581 return self.__class__(deepcopy(self.to_dict(flat=False), memo))
582
583 def to_dict(self, flat=True):
584 """Return the contents as regular dict. If `flat` is `True` the
585 returned dict will only have the first item present, if `flat` is
586 `False` all values will be returned as lists.
587
588 :param flat: If set to `False` the dict returned will have lists
589 with all the values in it. Otherwise it will only
590 contain the first value for each key.
591 :return: a :class:`dict`
592 """
593 if flat:
594 return dict(iteritems(self))
595 return dict(self.lists())
596
597 def update(self, other_dict):
598 """update() extends rather than replaces existing key lists:
599
600 >>> a = MultiDict({'x': 1})
601 >>> b = MultiDict({'x': 2, 'y': 3})
602 >>> a.update(b)
603 >>> a
604 MultiDict([('y', 3), ('x', 1), ('x', 2)])
605
606 If the value list for a key in ``other_dict`` is empty, no new values
607 will be added to the dict and the key will not be created:
608
609 >>> x = {'empty_list': []}
610 >>> y = MultiDict()
611 >>> y.update(x)
612 >>> y
613 MultiDict([])
614 """
615 for key, value in iter_multi_items(other_dict):
616 MultiDict.add(self, key, value)
617
618 def pop(self, key, default=_missing):
619 """Pop the first value for a key in the dict. Afterwards the
620 key is removed from the dict, so additional values are discarded:
621
622 >>> d = MultiDict({"foo": [1, 2, 3]})
623 >>> d.pop("foo")
624 1
625 >>> "foo" in d
626 False
627
628 :param key: the key to pop.
629 :param default: if provided the value to return if the key was
630 not in the dictionary.
631 """
632 try:
633 lst = dict.pop(self, key)
634
635 if len(lst) == 0:
636 raise exceptions.BadRequestKeyError()
637
638 return lst[0]
639 except KeyError as e:
640 if default is not _missing:
641 return default
642 raise exceptions.BadRequestKeyError(str(e))
643
644 def popitem(self):
645 """Pop an item from the dict."""
646 try:
647 item = dict.popitem(self)
648
649 if len(item[1]) == 0:
650 raise exceptions.BadRequestKeyError()
651
652 return (item[0], item[1][0])
653 except KeyError as e:
654 raise exceptions.BadRequestKeyError(str(e))
655
656 def poplist(self, key):
657 """Pop the list for a key from the dict. If the key is not in the dict
658 an empty list is returned.
659
660 .. versionchanged:: 0.5
661 If the key no longer exists, an empty list is returned instead of
662 raising an error.
663 """
664 return dict.pop(self, key, [])
665
666 def popitemlist(self):
667 """Pop a ``(key, list)`` tuple from the dict."""
668 try:
669 return dict.popitem(self)
670 except KeyError as e:
671 raise exceptions.BadRequestKeyError(str(e))
672
673 def __copy__(self):
674 return self.copy()
675
676 def __deepcopy__(self, memo):
677 return self.deepcopy(memo=memo)
678
679 def __repr__(self):
680 return '%s(%r)' % (self.__class__.__name__, list(iteritems(self, multi=True)))
681
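The storage model `MultiDict` builds on can be illustrated with plain dict operations: every key maps to a list of values, plain item access returns the first value, and a list accessor returns all of them. A minimal standalone sketch:

```python
# Standalone illustration of MultiDict's internal storage: a dict of
# lists, built the same way MultiDict.__init__ handles an iterable of
# (key, value) pairs.
pairs = [("a", "b"), ("a", "c")]
storage = {}
for key, value in pairs:
    storage.setdefault(key, []).append(value)

first_value = storage["a"][0]    # what __getitem__ returns: 'b'
all_values = list(storage["a"])  # what getlist returns: ['b', 'c']
```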
682
683 class _omd_bucket(object):
684
685 """Wraps values in the :class:`OrderedMultiDict`. This makes it
686 possible to keep an order over multiple different keys. It requires
687 a lot of extra memory and slows down access a lot, but makes it
688 possible to access elements in O(1) and iterate in O(n).
689 """
690 __slots__ = ('prev', 'key', 'value', 'next')
691
692 def __init__(self, omd, key, value):
693 self.prev = omd._last_bucket
694 self.key = key
695 self.value = value
696 self.next = None
697
698 if omd._first_bucket is None:
699 omd._first_bucket = self
700 if omd._last_bucket is not None:
701 omd._last_bucket.next = self
702 omd._last_bucket = self
703
704 def unlink(self, omd):
705 if self.prev:
706 self.prev.next = self.next
707 if self.next:
708 self.next.prev = self.prev
709 if omd._first_bucket is self:
710 omd._first_bucket = self.next
711 if omd._last_bucket is self:
712 omd._last_bucket = self.prev
713
714
715 @native_itermethods(['keys', 'values', 'items', 'lists', 'listvalues'])
716 class OrderedMultiDict(MultiDict):
717
718 """Works like a regular :class:`MultiDict` but preserves the
719 order of the fields. To convert the ordered multi dict into a
720 list you can use the :meth:`items` method and pass it ``multi=True``.
721
722 In general an :class:`OrderedMultiDict` is an order of magnitude
723 slower than a :class:`MultiDict`.
724
725 .. admonition:: note
726
727 Due to a limitation in Python you cannot convert an ordered
728 multi dict into a regular dict by using ``dict(multidict)``.
729 Instead you have to use the :meth:`to_dict` method, otherwise
730 the internal bucket objects are exposed.
731 """
732
733 def __init__(self, mapping=None):
734 dict.__init__(self)
735 self._first_bucket = self._last_bucket = None
736 if mapping is not None:
737 OrderedMultiDict.update(self, mapping)
738
739 def __eq__(self, other):
740 if not isinstance(other, MultiDict):
741 return NotImplemented
742 if isinstance(other, OrderedMultiDict):
743 iter1 = iteritems(self, multi=True)
744 iter2 = iteritems(other, multi=True)
745 try:
746 for k1, v1 in iter1:
747 k2, v2 = next(iter2)
748 if k1 != k2 or v1 != v2:
749 return False
750 except StopIteration:
751 return False
752 try:
753 next(iter2)
754 except StopIteration:
755 return True
756 return False
757 if len(self) != len(other):
758 return False
759 for key, values in iterlists(self):
760 if other.getlist(key) != values:
761 return False
762 return True
763
764 __hash__ = None
765
766 def __ne__(self, other):
767 return not self.__eq__(other)
768
769 def __reduce_ex__(self, protocol):
770 return type(self), (list(iteritems(self, multi=True)),)
771
772 def __getstate__(self):
773 return list(iteritems(self, multi=True))
774
775 def __setstate__(self, values):
776 dict.clear(self)
777 for key, value in values:
778 self.add(key, value)
779
780 def __getitem__(self, key):
781 if key in self:
782 return dict.__getitem__(self, key)[0].value
783 raise exceptions.BadRequestKeyError(key)
784
785 def __setitem__(self, key, value):
786 self.poplist(key)
787 self.add(key, value)
788
789 def __delitem__(self, key):
790 self.pop(key)
791
792 def keys(self):
793 return (key for key, value in iteritems(self))
794
795 __iter__ = keys
796
797 def values(self):
798 return (value for key, value in iteritems(self))
799
800 def items(self, multi=False):
801 ptr = self._first_bucket
802 if multi:
803 while ptr is not None:
804 yield ptr.key, ptr.value
805 ptr = ptr.next
806 else:
807 returned_keys = set()
808 while ptr is not None:
809 if ptr.key not in returned_keys:
810 returned_keys.add(ptr.key)
811 yield ptr.key, ptr.value
812 ptr = ptr.next
813
814 def lists(self):
815 returned_keys = set()
816 ptr = self._first_bucket
817 while ptr is not None:
818 if ptr.key not in returned_keys:
819 yield ptr.key, self.getlist(ptr.key)
820 returned_keys.add(ptr.key)
821 ptr = ptr.next
822
823 def listvalues(self):
824 for key, values in iterlists(self):
825 yield values
826
827 def add(self, key, value):
828 dict.setdefault(self, key, []).append(_omd_bucket(self, key, value))
829
830 def getlist(self, key, type=None):
831 try:
832 rv = dict.__getitem__(self, key)
833 except KeyError:
834 return []
835 if type is None:
836 return [x.value for x in rv]
837 result = []
838 for item in rv:
839 try:
840 result.append(type(item.value))
841 except ValueError:
842 pass
843 return result
844
845 def setlist(self, key, new_list):
846 self.poplist(key)
847 for value in new_list:
848 self.add(key, value)
849
850 def setlistdefault(self, key, default_list=None):
851 raise TypeError('setlistdefault is unsupported for '
852 'ordered multi dicts')
853
854 def update(self, mapping):
855 for key, value in iter_multi_items(mapping):
856 OrderedMultiDict.add(self, key, value)
857
858 def poplist(self, key):
859 buckets = dict.pop(self, key, ())
860 for bucket in buckets:
861 bucket.unlink(self)
862 return [x.value for x in buckets]
863
864 def pop(self, key, default=_missing):
865 try:
866 buckets = dict.pop(self, key)
867 except KeyError as e:
868 if default is not _missing:
869 return default
870 raise exceptions.BadRequestKeyError(str(e))
871 for bucket in buckets:
872 bucket.unlink(self)
873 return buckets[0].value
874
875 def popitem(self):
876 try:
877 key, buckets = dict.popitem(self)
878 except KeyError as e:
879 raise exceptions.BadRequestKeyError(str(e))
880 for bucket in buckets:
881 bucket.unlink(self)
882 return key, buckets[0].value
883
884 def popitemlist(self):
885 try:
886 key, buckets = dict.popitem(self)
887 except KeyError as e:
888 raise exceptions.BadRequestKeyError(str(e))
889 for bucket in buckets:
890 bucket.unlink(self)
891 return key, [x.value for x in buckets]
892
893
894 def _options_header_vkw(value, kw):
895 return dump_options_header(value, dict((k.replace('_', '-'), v)
896 for k, v in kw.items()))
897
898
899 def _unicodify_header_value(value):
900 if isinstance(value, bytes):
901 value = value.decode('latin-1')
902 if not isinstance(value, text_type):
903 value = text_type(value)
904 return value
905
906
907 @native_itermethods(['keys', 'values', 'items'])
908 class Headers(object):
909
910 """An object that stores some headers. It has a dict-like interface
911 but is ordered and can store the same keys multiple times.
912
913 This data structure is useful if you want a nicer way to handle WSGI
914 headers which are stored as tuples in a list.
915
916 From Werkzeug 0.3 onwards, the :exc:`KeyError` raised by this class is
917 also a subclass of the :class:`~exceptions.BadRequest` HTTP exception
918 and will render a page for a ``400 BAD REQUEST`` if caught in a
919 catch-all for HTTP exceptions.
920
921 Headers is mostly compatible with the Python :class:`wsgiref.headers.Headers`
922 class, with the exception of `__getitem__`. :mod:`wsgiref` will return
923 `None` for ``headers['missing']``, whereas :class:`Headers` will raise
924 a :class:`KeyError`.
925
926 To create a new :class:`Headers` object pass it a list or dict of headers
927 which are used as default values. This does not reuse the list passed
928 to the constructor for internal usage.
929
930 :param defaults: The list of default values for the :class:`Headers`.
931
932 .. versionchanged:: 0.9
933 This data structure now stores unicode values similar to how the
934 multi dicts do it. The main difference is that bytes can be set as
935 well which will automatically be latin1 decoded.
936
937 .. versionchanged:: 0.9
938 The :meth:`linked` function was removed without replacement as it
939 was an API that does not support the changes to the encoding model.
940 """
941
942 def __init__(self, defaults=None):
943 self._list = []
944 if defaults is not None:
945 if isinstance(defaults, (list, Headers)):
946 self._list.extend(defaults)
947 else:
948 self.extend(defaults)
949
950 def __getitem__(self, key, _get_mode=False):
951 if not _get_mode:
952 if isinstance(key, integer_types):
953 return self._list[key]
954 elif isinstance(key, slice):
955 return self.__class__(self._list[key])
956 if not isinstance(key, string_types):
957 raise exceptions.BadRequestKeyError(key)
958 ikey = key.lower()
959 for k, v in self._list:
960 if k.lower() == ikey:
961 return v
962 # micro optimization: if we are in get mode we will catch that
963 # exception one stack level down so we can raise a standard
964 # key error instead of our special one.
965 if _get_mode:
966 raise KeyError()
967 raise exceptions.BadRequestKeyError(key)
968
969 def __eq__(self, other):
970 return other.__class__ is self.__class__ and \
971 set(other._list) == set(self._list)
972
973 __hash__ = None
974
975 def __ne__(self, other):
976 return not self.__eq__(other)
977
978 def get(self, key, default=None, type=None, as_bytes=False):
979 """Return the default value if the requested data doesn't exist.
980 If `type` is provided and is a callable, it should convert the value
981 and return it, or raise a :exc:`ValueError` if conversion is not
982 possible. In that case the function returns the default as if the
983 value was not found:
984
985 >>> d = Headers([('Content-Length', '42')])
986 >>> d.get('Content-Length', type=int)
987 42
988
989 If a headers object is bound you must not add unicode strings
990 because no encoding takes place.
991
992 .. versionadded:: 0.9
993 Added support for `as_bytes`.
994
995 :param key: The key to be looked up.
996 :param default: The default value to be returned if the key can't
997 be looked up. If not further specified `None` is
998 returned.
999 :param type: A callable that is used to cast the value in the
1000 :class:`Headers`. If a :exc:`ValueError` is raised
1001 by this callable the default value is returned.
1002 :param as_bytes: return bytes instead of unicode strings.
1003 """
1004 try:
1005 rv = self.__getitem__(key, _get_mode=True)
1006 except KeyError:
1007 return default
1008 if as_bytes:
1009 rv = rv.encode('latin1')
1010 if type is None:
1011 return rv
1012 try:
1013 return type(rv)
1014 except ValueError:
1015 return default
1016
1017 def getlist(self, key, type=None, as_bytes=False):
1018 """Return the list of items for a given key. If that key is not in the
1019 :class:`Headers`, the return value will be an empty list. Just like
1020 :meth:`get`, :meth:`getlist` accepts a `type` parameter. All items will
1021 be converted with the callable defined there.
1022
1023 .. versionadded:: 0.9
1024 Added support for `as_bytes`.
1025
1026 :param key: The key to be looked up.
1027 :param type: A callable that is used to cast the value in the
1028 :class:`Headers`. If a :exc:`ValueError` is raised
1029 by this callable the value will be removed from the list.
1030 :return: a :class:`list` of all the values for the key.
1031 :param as_bytes: return bytes instead of unicode strings.
1032 """
1033 ikey = key.lower()
1034 result = []
1035 for k, v in self:
1036 if k.lower() == ikey:
1037 if as_bytes:
1038 v = v.encode('latin1')
1039 if type is not None:
1040 try:
1041 v = type(v)
1042 except ValueError:
1043 continue
1044 result.append(v)
1045 return result
1046
1047 def get_all(self, name):
1048 """Return a list of all the values for the named field.
1049
1050 This method is compatible with the :mod:`wsgiref`
1051 :meth:`~wsgiref.headers.Headers.get_all` method.
1052 """
1053 return self.getlist(name)
1054
1055 def items(self, lower=False):
1056 for key, value in self:
1057 if lower:
1058 key = key.lower()
1059 yield key, value
1060
1061 def keys(self, lower=False):
1062 for key, _ in iteritems(self, lower):
1063 yield key
1064
1065 def values(self):
1066 for _, value in iteritems(self):
1067 yield value
1068
1069 def extend(self, iterable):
1070 """Extend the headers with a dict or an iterable yielding keys and
1071 values.
1072 """
1073 if isinstance(iterable, dict):
1074 for key, value in iteritems(iterable):
1075 if isinstance(value, (tuple, list)):
1076 for v in value:
1077 self.add(key, v)
1078 else:
1079 self.add(key, value)
1080 else:
1081 for key, value in iterable:
1082 self.add(key, value)
1083
1084 def __delitem__(self, key, _index_operation=True):
1085 if _index_operation and isinstance(key, (integer_types, slice)):
1086 del self._list[key]
1087 return
1088 key = key.lower()
1089 new = []
1090 for k, v in self._list:
1091 if k.lower() != key:
1092 new.append((k, v))
1093 self._list[:] = new
1094
1095 def remove(self, key):
1096 """Remove a key.
1097
1098 :param key: The key to be removed.
1099 """
1100 return self.__delitem__(key, _index_operation=False)
1101
1102 def pop(self, key=None, default=_missing):
1103 """Removes and returns a key or index.
1104
1105 :param key: The key to be popped. If this is an integer the item at
1106 that position is removed; if it's a string, the value for
1107 that key is removed. If the key is omitted or `None` the last
1108 item is removed.
1109 :return: an item.
1110 """
1111 if key is None:
1112 return self._list.pop()
1113 if isinstance(key, integer_types):
1114 return self._list.pop(key)
1115 try:
1116 rv = self[key]
1117 self.remove(key)
1118 except KeyError:
1119 if default is not _missing:
1120 return default
1121 raise
1122 return rv
1123
1124 def popitem(self):
1125 """Removes a key or index and returns a (key, value) item."""
1126 return self.pop()
1127
1128 def __contains__(self, key):
1129 """Check if a key is present."""
1130 try:
1131 self.__getitem__(key, _get_mode=True)
1132 except KeyError:
1133 return False
1134 return True
1135
1136 has_key = __contains__
1137
1138 def __iter__(self):
1139 """Yield ``(key, value)`` tuples."""
1140 return iter(self._list)
1141
1142 def __len__(self):
1143 return len(self._list)
1144
1145 def add(self, _key, _value, **kw):
1146 """Add a new header tuple to the list.
1147
1148 Keyword arguments can specify additional parameters for the header
1149 value, with underscores converted to dashes::
1150
1151 >>> d = Headers()
1152 >>> d.add('Content-Type', 'text/plain')
1153 >>> d.add('Content-Disposition', 'attachment', filename='foo.png')
1154
1155 The keyword argument dumping uses :func:`dump_options_header`
1156 behind the scenes.
1157
1158 .. versionadded:: 0.4.1
1159 keyword arguments were added for :mod:`wsgiref` compatibility.
1160 """
1161 if kw:
1162 _value = _options_header_vkw(_value, kw)
1163 _value = _unicodify_header_value(_value)
1164 self._validate_value(_value)
1165 self._list.append((_key, _value))
1166
1167 def _validate_value(self, value):
1168 if not isinstance(value, text_type):
1169 raise TypeError('Value should be unicode.')
1170 if u'\n' in value or u'\r' in value:
1171 raise ValueError('Detected newline in header value. This is '
1172 'a potential security problem')
1173
1174 def add_header(self, _key, _value, **_kw):
1175 """Add a new header tuple to the list.
1176
1177 An alias for :meth:`add` for compatibility with the :mod:`wsgiref`
1178 :meth:`~wsgiref.headers.Headers.add_header` method.
1179 """
1180 self.add(_key, _value, **_kw)
1181
1182 def clear(self):
1183 """Clears all headers."""
1184 del self._list[:]
1185
1186 def set(self, _key, _value, **kw):
1187 """Remove all header tuples for `key` and add a new one. The newly
1188 added key either appears at the end of the list if there was no
1189 entry or replaces the first one.
1190
1191 Keyword arguments can specify additional parameters for the header
1192 value, with underscores converted to dashes. See :meth:`add` for
1193 more information.
1194
1195 .. versionchanged:: 0.6.1
1196 :meth:`set` now accepts the same arguments as :meth:`add`.
1197
1198 :param key: The key to be inserted.
1199 :param value: The value to be inserted.
1200 """
1201 if kw:
1202 _value = _options_header_vkw(_value, kw)
1203 _value = _unicodify_header_value(_value)
1204 self._validate_value(_value)
1205 if not self._list:
1206 self._list.append((_key, _value))
1207 return
1208 listiter = iter(self._list)
1209 ikey = _key.lower()
1210 for idx, (old_key, old_value) in enumerate(listiter):
1211 if old_key.lower() == ikey:
1212 # replace first occurrence
1213 self._list[idx] = (_key, _value)
1214 break
1215 else:
1216 self._list.append((_key, _value))
1217 return
1218 self._list[idx + 1:] = [t for t in listiter if t[0].lower() != ikey]
1219
1220 def setdefault(self, key, default):
1221 """Returns the value for the key if it is in the dict, otherwise it
1222 returns `default` and sets that value for `key`.
1223
1224 :param key: The key to be looked up.
1225 :param default: The default value to be returned if the key is not
1226 in the dict. If not further specified it's `None`.
1227 """
1228 if key in self:
1229 return self[key]
1230 self.set(key, default)
1231 return default
1232
1233 def __setitem__(self, key, value):
1234 """Like :meth:`set` but also supports index/slice based setting."""
1235 if isinstance(key, (slice, integer_types)):
1236 if isinstance(key, integer_types):
1237 value = [value]
1238 value = [(k, _unicodify_header_value(v)) for (k, v) in value]
1239 [self._validate_value(v) for (k, v) in value]
1240 if isinstance(key, integer_types):
1241 self._list[key] = value[0]
1242 else:
1243 self._list[key] = value
1244 else:
1245 self.set(key, value)
1246
1247 def to_list(self, charset='iso-8859-1'):
1248 """Convert the headers into a list suitable for WSGI."""
1249 from warnings import warn
1250 warn(DeprecationWarning('Method removed, use to_wsgi_list instead'),
1251 stacklevel=2)
1252 return self.to_wsgi_list()
1253
1254 def to_wsgi_list(self):
1255 """Convert the headers into a list suitable for WSGI.
1256
1257 The values are byte strings encoded to latin1 on Python 2, and
1258 unicode strings on Python 3 for the WSGI server to encode.
1259
1260 :return: list
1261 """
1262 if PY2:
1263 return [(to_native(k), v.encode('latin1')) for k, v in self]
1264 return list(self)
1265
1266 def copy(self):
1267 return self.__class__(self._list)
1268
1269 def __copy__(self):
1270 return self.copy()
1271
1272 def __str__(self):
1273 """Returns formatted headers suitable for HTTP transmission."""
1274 strs = []
1275 for key, value in self.to_wsgi_list():
1276 strs.append('%s: %s' % (key, value))
1277 strs.append('\r\n')
1278 return '\r\n'.join(strs)
1279
1280 def __repr__(self):
1281 return '%s(%r)' % (
1282 self.__class__.__name__,
1283 list(self)
1284 )
1285
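The keyword-argument handling in `Headers.add()` and `Headers.set()` can be approximated standalone. `format_options` below is a simplified stand-in for `dump_options_header`; the real helper additionally quotes parameter values when required:

```python
# Rough sketch of how keyword arguments to Headers.add() become header
# parameters: underscores turn into dashes and key=value pairs are
# appended to the base value.  format_options is a simplified stand-in
# for dump_options_header, which also quotes values when needed.
def format_options(value, options):
    parts = [value]
    for key, val in options.items():
        parts.append("%s=%s" % (key.replace("_", "-"), val))
    return "; ".join(parts)


header_value = format_options("attachment", {"filename": "foo.png"})
# -> 'attachment; filename=foo.png'
```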
1286
1287 class ImmutableHeadersMixin(object):
1288
1289 """Makes a :class:`Headers` immutable. We do not mark them as
1290 hashable though, since the only use case for this data structure
1291 in Werkzeug is a view on a mutable structure.
1292
1293 .. versionadded:: 0.5
1294
1295 :private:
1296 """
1297
1298 def __delitem__(self, key):
1299 is_immutable(self)
1300
1301 def __setitem__(self, key, value):
1302 is_immutable(self)
1303 set = __setitem__
1304
1305 def add(self, item):
1306 is_immutable(self)
1307 remove = add_header = add
1308
1309 def extend(self, iterable):
1310 is_immutable(self)
1311
1312 def insert(self, pos, value):
1313 is_immutable(self)
1314
1315 def pop(self, index=-1):
1316 is_immutable(self)
1317
1318 def popitem(self):
1319 is_immutable(self)
1320
1321 def setdefault(self, key, default):
1322 is_immutable(self)
1323
1324
1325 class EnvironHeaders(ImmutableHeadersMixin, Headers):
1326
1327 """Read only version of the headers from a WSGI environment. This
1328 provides the same interface as `Headers` and is constructed from
1329 a WSGI environment.
1330
1331 From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a
1332 subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will
1333 render a page for a ``400 BAD REQUEST`` if caught in a catch-all for
1334 HTTP exceptions.
1335 """
1336
1337 def __init__(self, environ):
1338 self.environ = environ
1339
1340 def __eq__(self, other):
1341 return self.environ is other.environ
1342
1343 __hash__ = None
1344
1345 def __getitem__(self, key, _get_mode=False):
1346 # _get_mode is a no-op for this class as there is no index but
1347 # used because get() calls it.
1348 if not isinstance(key, string_types):
1349 raise KeyError(key)
1350 key = key.upper().replace('-', '_')
1351 if key in ('CONTENT_TYPE', 'CONTENT_LENGTH'):
1352 return _unicodify_header_value(self.environ[key])
1353 return _unicodify_header_value(self.environ['HTTP_' + key])
1354
1355 def __len__(self):
1356 # the iter is necessary because otherwise list calls our
1357 # len which would call list again and so forth.
1358 return len(list(iter(self)))
1359
1360 def __iter__(self):
1361 for key, value in iteritems(self.environ):
1362 if key.startswith('HTTP_') and key not in \
1363 ('HTTP_CONTENT_TYPE', 'HTTP_CONTENT_LENGTH'):
1364 yield (key[5:].replace('_', '-').title(),
1365 _unicodify_header_value(value))
1366 elif key in ('CONTENT_TYPE', 'CONTENT_LENGTH') and value:
1367 yield (key.replace('_', '-').title(),
1368 _unicodify_header_value(value))
1369
1370 def copy(self):
1371 raise TypeError('cannot create %r copies' % self.__class__.__name__)
1372
1373
1374 @native_itermethods(['keys', 'values', 'items', 'lists', 'listvalues'])
1375 class CombinedMultiDict(ImmutableMultiDictMixin, MultiDict):
1376
1377 """A read only :class:`MultiDict` that you can pass multiple :class:`MultiDict`
1378 instances as sequence and it will combine the return values of all wrapped
1379 dicts:
1380
1381 >>> from werkzeug.datastructures import CombinedMultiDict, MultiDict
1382 >>> post = MultiDict([('foo', 'bar')])
1383 >>> get = MultiDict([('blub', 'blah')])
1384 >>> combined = CombinedMultiDict([get, post])
1385 >>> combined['foo']
1386 'bar'
1387 >>> combined['blub']
1388 'blah'
1389
1390 This works for all read operations and will raise a `TypeError` for
1391 methods that would usually change data, since that isn't possible here.
1392
1393 From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a
1394 subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will
1395 render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP
1396 exceptions.
1397 """
1398
1399 def __reduce_ex__(self, protocol):
1400 return type(self), (self.dicts,)
1401
1402 def __init__(self, dicts=None):
1403 self.dicts = dicts or []
1404
1405 @classmethod
1406 def fromkeys(cls):
1407 raise TypeError('cannot create %r instances by fromkeys' %
1408 cls.__name__)
1409
1410 def __getitem__(self, key):
1411 for d in self.dicts:
1412 if key in d:
1413 return d[key]
1414 raise exceptions.BadRequestKeyError(key)
1415
1416 def get(self, key, default=None, type=None):
1417 for d in self.dicts:
1418 if key in d:
1419 if type is not None:
1420 try:
1421 return type(d[key])
1422 except ValueError:
1423 continue
1424 return d[key]
1425 return default
1426
1427 def getlist(self, key, type=None):
1428 rv = []
1429 for d in self.dicts:
1430 rv.extend(d.getlist(key, type))
1431 return rv
1432
1433 def _keys_impl(self):
1434 """This function exists so __len__ can be implemented more efficiently,
1435 saving one list creation from an iterator.
1436
1437 Using this for Python 2's ``dict.keys`` behavior would be useless since
1438 `dict.keys` in Python 2 returns a list, while we have a set here.
1439 """
1440 rv = set()
1441 for d in self.dicts:
1442 rv.update(iterkeys(d))
1443 return rv
1444
1445 def keys(self):
1446 return iter(self._keys_impl())
1447
1448 __iter__ = keys
1449
1450 def items(self, multi=False):
1451 found = set()
1452 for d in self.dicts:
1453 for key, value in iteritems(d, multi):
1454 if multi:
1455 yield key, value
1456 elif key not in found:
1457 found.add(key)
1458 yield key, value
1459
1460 def values(self):
1461 for key, value in iteritems(self):
1462 yield value
1463
1464 def lists(self):
1465 rv = {}
1466 for d in self.dicts:
1467 for key, values in iterlists(d):
1468 rv.setdefault(key, []).extend(values)
1469 return iteritems(rv)
1470
1471 def listvalues(self):
1472 return (x[1] for x in self.lists())
1473
1474 def copy(self):
1475 """Return a shallow copy of this object."""
1476 return self.__class__(self.dicts[:])
1477
1478 def to_dict(self, flat=True):
1479 """Return the contents as regular dict. If `flat` is `True` the
1480 returned dict will only have the first item present, if `flat` is
1481 `False` all values will be returned as lists.
1482
1483 :param flat: If set to `False` the dict returned will have lists
1484 with all the values in it. Otherwise it will only
1485 contain the first item for each key.
1486 :return: a :class:`dict`
1487 """
1488 rv = {}
1489 for d in reversed(self.dicts):
1490 rv.update(d.to_dict(flat))
1491 return rv
1492
1493 def __len__(self):
1494 return len(self._keys_impl())
1495
1496 def __contains__(self, key):
1497 for d in self.dicts:
1498 if key in d:
1499 return True
1500 return False
1501
1502 has_key = __contains__
1503
1504 def __repr__(self):
1505 return '%s(%r)' % (self.__class__.__name__, self.dicts)
1506
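The lookup order implemented by `__getitem__` and `get` above (first wrapped dict containing the key wins, and a failed type conversion falls through to the next dict) can be sketched without Werkzeug as:

```python
def combined_get(dicts, key, default=None, type=None):
    # First dict containing the key wins; a failed type conversion
    # falls through to the next dict, mirroring CombinedMultiDict.get.
    for d in dicts:
        if key in d:
            if type is not None:
                try:
                    return type(d[key])
                except ValueError:
                    continue
            return d[key]
    return default

get_args = {"blub": "blah"}
post_args = {"foo": "bar", "blub": "ignored"}
print(combined_get([get_args, post_args], "blub"))
```

Because `get_args` comes first in the sequence, its `"blub"` shadows the one in `post_args`, just as in the doctest for this class.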
1507
1508 class FileMultiDict(MultiDict):
1509
1510 """A special :class:`MultiDict` that has convenience methods to add
1511 files to it. This is used for :class:`EnvironBuilder` and generally
1512 useful for unit testing.
1513
1514 .. versionadded:: 0.5
1515 """
1516
1517 def add_file(self, name, file, filename=None, content_type=None):
1518 """Adds a new file to the dict. `file` can be a file name or
1519 a :class:`file`-like or a :class:`FileStorage` object.
1520
1521 :param name: the name of the field.
1522 :param file: a filename or :class:`file`-like object
1523 :param filename: an optional filename
1524 :param content_type: an optional content type
1525 """
1526 if isinstance(file, FileStorage):
1527 value = file
1528 else:
1529 if isinstance(file, string_types):
1530 if filename is None:
1531 filename = file
1532 file = open(file, 'rb')
1533 if filename and content_type is None:
1534 content_type = mimetypes.guess_type(filename)[0] or \
1535 'application/octet-stream'
1536 value = FileStorage(file, filename, name, content_type)
1537
1538 self.add(name, value)
1539
1540
1541 class ImmutableDict(ImmutableDictMixin, dict):
1542
1543 """An immutable :class:`dict`.
1544
1545 .. versionadded:: 0.5
1546 """
1547
1548 def __repr__(self):
1549 return '%s(%s)' % (
1550 self.__class__.__name__,
1551 dict.__repr__(self),
1552 )
1553
1554 def copy(self):
1555 """Return a shallow mutable copy of this object. Keep in mind that
1556 the standard library's :func:`copy` function is a no-op for this class
1557 like for any other Python immutable type (e.g. :class:`tuple`).
1558 """
1559 return dict(self)
1560
1561 def __copy__(self):
1562 return self
1563
1564
1565 class ImmutableMultiDict(ImmutableMultiDictMixin, MultiDict):
1566
1567 """An immutable :class:`MultiDict`.
1568
1569 .. versionadded:: 0.5
1570 """
1571
1572 def copy(self):
1573 """Return a shallow mutable copy of this object. Keep in mind that
1574 the standard library's :func:`copy` function is a no-op for this class
1575 like for any other Python immutable type (e.g. :class:`tuple`).
1576 """
1577 return MultiDict(self)
1578
1579 def __copy__(self):
1580 return self
1581
1582
1583 class ImmutableOrderedMultiDict(ImmutableMultiDictMixin, OrderedMultiDict):
1584
1585 """An immutable :class:`OrderedMultiDict`.
1586
1587 .. versionadded:: 0.6
1588 """
1589
1590 def _iter_hashitems(self):
1591 return enumerate(iteritems(self, multi=True))
1592
1593 def copy(self):
1594 """Return a shallow mutable copy of this object. Keep in mind that
1595 the standard library's :func:`copy` function is a no-op for this class
1596 like for any other Python immutable type (e.g. :class:`tuple`).
1597 """
1598 return OrderedMultiDict(self)
1599
1600 def __copy__(self):
1601 return self
1602
1603
1604 @native_itermethods(['values'])
1605 class Accept(ImmutableList):
1606
1607 """An :class:`Accept` object is just a list subclass for lists of
1608 ``(value, quality)`` tuples. It is automatically sorted by specificity
1609 and quality.
1610
1611 All :class:`Accept` objects work similarly to a list but provide extra
1612 functionality for working with the data. Containment checks are
1613 normalized to the rules of that header:
1614
1615 >>> a = CharsetAccept([('ISO-8859-1', 1), ('utf-8', 0.7)])
1616 >>> a.best
1617 'ISO-8859-1'
1618 >>> 'iso-8859-1' in a
1619 True
1620 >>> 'UTF8' in a
1621 True
1622 >>> 'utf7' in a
1623 False
1624
1625 To get the quality for an item you can use normal item lookup:
1626
1627 >>> a['utf-8']
1628 0.7
1629 >>> a['utf7']
1630 0
1631
1632 .. versionchanged:: 0.5
1633 :class:`Accept` objects are forced immutable now.
1634 """
1635
1636 def __init__(self, values=()):
1637 if values is None:
1638 list.__init__(self)
1639 self.provided = False
1640 elif isinstance(values, Accept):
1641 self.provided = values.provided
1642 list.__init__(self, values)
1643 else:
1644 self.provided = True
1645 values = sorted(values, key=lambda x: (self._specificity(x[0]), x[1], x[0]),
1646 reverse=True)
1647 list.__init__(self, values)
1648
1649 def _specificity(self, value):
1650 """Returns a tuple describing the value's specificity."""
1651 return value != '*',
1652
1653 def _value_matches(self, value, item):
1654 """Check if a value matches a given accept item."""
1655 return item == '*' or item.lower() == value.lower()
1656
1657 def __getitem__(self, key):
1658 """Besides index lookup (getting item n) you can also pass it a string
1659 to get the quality for the item. If the item is not in the list, the
1660 returned quality is ``0``.
1661 """
1662 if isinstance(key, string_types):
1663 return self.quality(key)
1664 return list.__getitem__(self, key)
1665
1666 def quality(self, key):
1667 """Returns the quality of the key.
1668
1669 .. versionadded:: 0.6
1670 In previous versions you had to use the item-lookup syntax
1671 (eg: ``obj[key]`` instead of ``obj.quality(key)``)
1672 """
1673 for item, quality in self:
1674 if self._value_matches(key, item):
1675 return quality
1676 return 0
1677
1678 def __contains__(self, value):
1679 for item, quality in self:
1680 if self._value_matches(value, item):
1681 return True
1682 return False
1683
1684 def __repr__(self):
1685 return '%s([%s])' % (
1686 self.__class__.__name__,
1687 ', '.join('(%r, %s)' % (x, y) for x, y in self)
1688 )
1689
1690 def index(self, key):
1691 """Get the position of an entry or raise :exc:`ValueError`.
1692
1693 :param key: The key to be looked up.
1694
1695 .. versionchanged:: 0.5
1696 This used to raise :exc:`IndexError`, which was inconsistent
1697 with the list API.
1698 """
1699 if isinstance(key, string_types):
1700 for idx, (item, quality) in enumerate(self):
1701 if self._value_matches(key, item):
1702 return idx
1703 raise ValueError(key)
1704 return list.index(self, key)
1705
1706 def find(self, key):
1707 """Get the position of an entry or return -1.
1708
1709 :param key: The key to be looked up.
1710 """
1711 try:
1712 return self.index(key)
1713 except ValueError:
1714 return -1
1715
1716 def values(self):
1717 """Iterate over all values."""
1718 for item in self:
1719 yield item[0]
1720
1721 def to_header(self):
1722 """Convert the header set into an HTTP header string."""
1723 result = []
1724 for value, quality in self:
1725 if quality != 1:
1726 value = '%s;q=%s' % (value, quality)
1727 result.append(value)
1728 return ','.join(result)
1729
1730 def __str__(self):
1731 return self.to_header()
1732
1733 def _best_single_match(self, match):
1734 for client_item, quality in self:
1735 if self._value_matches(match, client_item):
1736 # self is sorted by specificity descending, we can exit
1737 return client_item, quality
1738
1739 def best_match(self, matches, default=None):
1740 """Returns the best match from a list of possible matches based
1741 on the specificity and quality of the client. If two items have the
1742 same quality and specificity, the one that comes first is returned.
1743
1744 :param matches: a list of matches to check for
1745 :param default: the value that is returned if none match
1746 """
1747 result = default
1748 best_quality = -1
1749 best_specificity = (-1,)
1750 for server_item in matches:
1751 match = self._best_single_match(server_item)
1752 if not match:
1753 continue
1754 client_item, quality = match
1755 specificity = self._specificity(client_item)
1756 if quality <= 0 or quality < best_quality:
1757 continue
1758 # better quality or same quality but more specific => better match
1759 if quality > best_quality or specificity > best_specificity:
1760 result = server_item
1761 best_quality = quality
1762 best_specificity = specificity
1763 return result
1764
1765 @property
1766 def best(self):
1767 """The best match as value."""
1768 if self:
1769 return self[0][0]
1770
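The selection loop in `best_match` above can be sketched standalone. This is a plain-Python sketch, not Werkzeug itself; it assumes `accepted` is a list of `(value, quality)` pairs sorted most-specific/highest-quality first, as an `Accept` instance would hold them:

```python
def best_match(accepted, offered, default=None):
    # '*' matches anything but is less specific than a literal value,
    # so a literal match at equal quality wins; at equal quality and
    # specificity, the earlier entry in `offered` wins.
    def first_match(offer):
        for value, quality in accepted:
            if value == "*" or value.lower() == offer.lower():
                return value, quality
        return None

    result, best_quality, best_specificity = default, -1, -1
    for offer in offered:
        match = first_match(offer)
        if match is None:
            continue
        value, quality = match
        specificity = int(value != "*")
        if quality <= 0 or quality < best_quality:
            continue
        if quality > best_quality or specificity > best_specificity:
            result = offer
            best_quality = quality
            best_specificity = specificity
    return result
```

Note that a quality of `0` (the client explicitly rejecting a value) never wins, so the `default` is returned instead.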
1771
1772 class MIMEAccept(Accept):
1773
1774 """Like :class:`Accept` but with special methods and behavior for
1775 mimetypes.
1776 """
1777
1778 def _specificity(self, value):
1779 return tuple(x != '*' for x in value.split('/', 1))
1780
1781 def _value_matches(self, value, item):
1782 def _normalize(x):
1783 x = x.lower()
1784 return x == '*' and ('*', '*') or x.split('/', 1)
1785
1786 # this is from the application which is trusted. to avoid developer
1787 # frustration we actually check these for valid values
1788 if '/' not in value:
1789 raise ValueError('invalid mimetype %r' % value)
1790 value_type, value_subtype = _normalize(value)
1791 if value_type == '*' and value_subtype != '*':
1792 raise ValueError('invalid mimetype %r' % value)
1793
1794 if '/' not in item:
1795 return False
1796 item_type, item_subtype = _normalize(item)
1797 if item_type == '*' and item_subtype != '*':
1798 return False
1799 return (
1800 (item_type == item_subtype == '*' or
1801 value_type == value_subtype == '*') or
1802 (item_type == value_type and (item_subtype == '*' or
1803 value_subtype == '*' or
1804 item_subtype == value_subtype))
1805 )
1806
1807 @property
1808 def accept_html(self):
1809 """True if this object accepts HTML."""
1810 return (
1811 'text/html' in self or
1812 'application/xhtml+xml' in self or
1813 self.accept_xhtml
1814 )
1815
1816 @property
1817 def accept_xhtml(self):
1818 """True if this object accepts XHTML."""
1819 return (
1820 'application/xhtml+xml' in self or
1821 'application/xml' in self
1822 )
1823
1824 @property
1825 def accept_json(self):
1826 """True if this object accepts JSON."""
1827 return 'application/json' in self
1828
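The wildcard rules implemented by `MIMEAccept._value_matches` above can be sketched on their own. A simplified standalone version (it assumes both inputs are full `type/subtype` strings; the real method additionally raises `ValueError` for malformed server-side values):

```python
def mime_match(value, item):
    # 'type/*' in the client header matches any subtype of that type,
    # '*/*' matches everything, and '*/subtype' is treated as invalid
    # and never matches.
    def normalize(x):
        x = x.lower()
        return ("*", "*") if x == "*" else tuple(x.split("/", 1))

    value_type, value_subtype = normalize(value)
    item_type, item_subtype = normalize(item)
    if item_type == "*" and item_subtype != "*":
        return False
    return (
        item_type == item_subtype == "*"
        or value_type == value_subtype == "*"
        or (
            item_type == value_type
            and (item_subtype == "*" or value_subtype == "*"
                 or item_subtype == value_subtype)
        )
    )
```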
1829
1830 class LanguageAccept(Accept):
1831
1832 """Like :class:`Accept` but with normalization for languages."""
1833
1834 def _value_matches(self, value, item):
1835 def _normalize(language):
1836 return _locale_delim_re.split(language.lower())
1837 return item == '*' or _normalize(value) == _normalize(item)
1838
1839
1840 class CharsetAccept(Accept):
1841
1842 """Like :class:`Accept` but with normalization for charsets."""
1843
1844 def _value_matches(self, value, item):
1845 def _normalize(name):
1846 try:
1847 return codecs.lookup(name).name
1848 except LookupError:
1849 return name.lower()
1850 return item == '*' or _normalize(value) == _normalize(item)
1851
1852
1853 def cache_property(key, empty, type):
1854 """Return a new property object for a cache header. Useful if you
1855 want to add support for a cache extension in a subclass."""
1856 return property(lambda x: x._get_cache_value(key, empty, type),
1857 lambda x, v: x._set_cache_value(key, v, type),
1858 lambda x: x._del_cache_value(key),
1859 'accessor for %r' % key)
1860
1861
1862 class _CacheControl(UpdateDictMixin, dict):
1863
1864 """Subclass of a dict that stores values for a Cache-Control header. It
1865 has accessors for all the cache-control directives specified in RFC 2616.
1866 The class does not differentiate between request and response directives.
1867
1868 Because the cache-control directives in the HTTP header use dashes, the
1869 Python descriptors use underscores instead.
1870
1871 To get a header of the :class:`CacheControl` object again you can convert
1872 the object into a string or call the :meth:`to_header` method. If you plan
1873 to subclass it and add your own items, have a look at the source code for
1874 that class.
1875
1876 .. versionchanged:: 0.4
1877
1878 Setting `no_cache` or `private` to boolean `True` will set the implicit
1879 none-value which is ``*``:
1880
1881 >>> cc = ResponseCacheControl()
1882 >>> cc.no_cache = True
1883 >>> cc
1884 <ResponseCacheControl 'no-cache'>
1885 >>> cc.no_cache
1886 '*'
1887 >>> cc.no_cache = None
1888 >>> cc
1889 <ResponseCacheControl ''>
1890
1891 In versions before 0.5 the behavior documented here affected the now
1892 no longer existing `CacheControl` class.
1893 """
1894
1895 no_cache = cache_property('no-cache', '*', None)
1896 no_store = cache_property('no-store', None, bool)
1897 max_age = cache_property('max-age', -1, int)
1898 no_transform = cache_property('no-transform', None, None)
1899
1900 def __init__(self, values=(), on_update=None):
1901 dict.__init__(self, values or ())
1902 self.on_update = on_update
1903 self.provided = values is not None
1904
1905 def _get_cache_value(self, key, empty, type):
1906 """Used internally by the accessor properties."""
1907 if type is bool:
1908 return key in self
1909 if key in self:
1910 value = self[key]
1911 if value is None:
1912 return empty
1913 elif type is not None:
1914 try:
1915 value = type(value)
1916 except ValueError:
1917 pass
1918 return value
1919
1920 def _set_cache_value(self, key, value, type):
1921 """Used internally by the accessor properties."""
1922 if type is bool:
1923 if value:
1924 self[key] = None
1925 else:
1926 self.pop(key, None)
1927 else:
1928 if value is None:
1929 self.pop(key, None)
1930 elif value is True:
1931 self[key] = None
1932 else:
1933 self[key] = value
1934
1935 def _del_cache_value(self, key):
1936 """Used internally by the accessor properties."""
1937 if key in self:
1938 del self[key]
1939
1940 def to_header(self):
1941 """Convert the stored values into a cache control header."""
1942 return dump_header(self)
1943
1944 def __str__(self):
1945 return self.to_header()
1946
1947 def __repr__(self):
1948 return '<%s %s>' % (
1949 self.__class__.__name__,
1950 " ".join(
1951 "%s=%r" % (k, v) for k, v in sorted(self.items())
1952 ),
1953 )
1954
1955
1956 class RequestCacheControl(ImmutableDictMixin, _CacheControl):
1957
1958 """A cache control for requests. This is immutable and gives access
1959 to all the request-relevant cache control headers.
1960
1961 To get a header of the :class:`RequestCacheControl` object again you can
1962 convert the object into a string or call the :meth:`to_header` method. If
1963 you plan to subclass it and add your own items, have a look at the source code
1964 for that class.
1965
1966 .. versionadded:: 0.5
1967 In previous versions a `CacheControl` class existed that was used
1968 both for request and response.
1969 """
1970
1971 max_stale = cache_property('max-stale', '*', int)
1972 min_fresh = cache_property('min-fresh', '*', int)
1973 no_transform = cache_property('no-transform', None, None)
1974 only_if_cached = cache_property('only-if-cached', None, bool)
1975
1976
1977 class ResponseCacheControl(_CacheControl):
1978
1979 """A cache control for responses. Unlike :class:`RequestCacheControl`
1980 this is mutable and gives access to response-relevant cache control
1981 headers.
1982
1983 To get a header of the :class:`ResponseCacheControl` object again you can
1984 convert the object into a string or call the :meth:`to_header` method. If
1985 you plan to subclass it and add your own items, have a look at the source code
1986 for that class.
1987
1988 .. versionadded:: 0.5
1989 In previous versions a `CacheControl` class existed that was used
1990 both for request and response.
1991 """
1992
1993 public = cache_property('public', None, bool)
1994 private = cache_property('private', '*', None)
1995 must_revalidate = cache_property('must-revalidate', None, bool)
1996 proxy_revalidate = cache_property('proxy-revalidate', None, bool)
1997 s_maxage = cache_property('s-maxage', None, None)
1998
1999
2000 # attach cache_property to the _CacheControl as staticmethod
2001 # so that others can reuse it.
2002 _CacheControl.cache_property = staticmethod(cache_property)
2003
2004
2005 class CallbackDict(UpdateDictMixin, dict):
2006
2007 """A dict that calls a function passed every time something is changed.
2008 The function is passed the dict instance.
2009 """
2010
2011 def __init__(self, initial=None, on_update=None):
2012 dict.__init__(self, initial or ())
2013 self.on_update = on_update
2014
2015 def __repr__(self):
2016 return '<%s %s>' % (
2017 self.__class__.__name__,
2018 dict.__repr__(self)
2019 )
2020
2021
2022 class HeaderSet(MutableSet):
2023
2024 """Similar to the :class:`ETags` class this implements a set-like structure.
2025 Unlike :class:`ETags` this is case insensitive and used for vary, allow, and
2026 content-language headers.
2027
2028 If not constructed using the :func:`parse_set_header` function the
2029 instantiation works like this:
2030
2031 >>> hs = HeaderSet(['foo', 'bar', 'baz'])
2032 >>> hs
2033 HeaderSet(['foo', 'bar', 'baz'])
2034 """
2035
2036 def __init__(self, headers=None, on_update=None):
2037 self._headers = list(headers or ())
2038 self._set = set([x.lower() for x in self._headers])
2039 self.on_update = on_update
2040
2041 def add(self, header):
2042 """Add a new header to the set."""
2043 self.update((header,))
2044
2045 def remove(self, header):
2046 """Remove a header from the set. This raises an :exc:`KeyError` if the
2047 header is not in the set.
2048
2049 .. versionchanged:: 0.5
2050 In older versions a :exc:`IndexError` was raised instead of a
2051 :exc:`KeyError` if the object was missing.
2052
2053 :param header: the header to be removed.
2054 """
2055 key = header.lower()
2056 if key not in self._set:
2057 raise KeyError(header)
2058 self._set.remove(key)
2059 for idx, item in enumerate(self._headers):
2060 if item.lower() == key:
2061 del self._headers[idx]
2062 break
2063 if self.on_update is not None:
2064 self.on_update(self)
2065
2066 def update(self, iterable):
2067 """Add all the headers from the iterable to the set.
2068
2069 :param iterable: updates the set with the items from the iterable.
2070 """
2071 inserted_any = False
2072 for header in iterable:
2073 key = header.lower()
2074 if key not in self._set:
2075 self._headers.append(header)
2076 self._set.add(key)
2077 inserted_any = True
2078 if inserted_any and self.on_update is not None:
2079 self.on_update(self)
2080
2081 def discard(self, header):
2082 """Like :meth:`remove` but ignores errors.
2083
2084 :param header: the header to be discarded.
2085 """
2086 try:
2087 return self.remove(header)
2088 except KeyError:
2089 pass
2090
2091 def find(self, header):
2092 """Return the index of the header in the set or return -1 if not found.
2093
2094 :param header: the header to be looked up.
2095 """
2096 header = header.lower()
2097 for idx, item in enumerate(self._headers):
2098 if item.lower() == header:
2099 return idx
2100 return -1
2101
2102 def index(self, header):
2103 """Return the index of the header in the set or raise an
2104 :exc:`IndexError`.
2105
2106 :param header: the header to be looked up.
2107 """
2108 rv = self.find(header)
2109 if rv < 0:
2110 raise IndexError(header)
2111 return rv
2112
2113 def clear(self):
2114 """Clear the set."""
2115 self._set.clear()
2116 del self._headers[:]
2117 if self.on_update is not None:
2118 self.on_update(self)
2119
2120 def as_set(self, preserve_casing=False):
2121 """Return the set as real python set type. When calling this, all
2122 the items are converted to lowercase and the ordering is lost.
2123
2124 :param preserve_casing: if set to `True` the items in the set returned
2125 will have the original case like in the
2126 :class:`HeaderSet`, otherwise they will
2127 be lowercase.
2128 """
2129 if preserve_casing:
2130 return set(self._headers)
2131 return set(self._set)
2132
2133 def to_header(self):
2134 """Convert the header set into an HTTP header string."""
2135 return ', '.join(map(quote_header_value, self._headers))
2136
2137 def __getitem__(self, idx):
2138 return self._headers[idx]
2139
2140 def __delitem__(self, idx):
2141 rv = self._headers.pop(idx)
2142 self._set.remove(rv.lower())
2143 if self.on_update is not None:
2144 self.on_update(self)
2145
2146 def __setitem__(self, idx, value):
2147 old = self._headers[idx]
2148 self._set.remove(old.lower())
2149 self._headers[idx] = value
2150 self._set.add(value.lower())
2151 if self.on_update is not None:
2152 self.on_update(self)
2153
2154 def __contains__(self, header):
2155 return header.lower() in self._set
2156
2157 def __len__(self):
2158 return len(self._set)
2159
2160 def __iter__(self):
2161 return iter(self._headers)
2162
2163 def __nonzero__(self):
2164 return bool(self._set)
2165
2166 def __str__(self):
2167 return self.to_header()
2168
2169 def __repr__(self):
2170 return '%s(%r)' % (
2171 self.__class__.__name__,
2172 self._headers
2173 )
2174
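The core invariant of `HeaderSet` above, case-insensitive membership backed by a lowercased set, with original casing kept in a list for serialization order, can be sketched as a minimal standalone class (the real `to_header` additionally runs each value through `quote_header_value`):

```python
class MiniHeaderSet(object):
    # Membership tests go through the lowercased set; the list keeps
    # the original casing and insertion order for serialization.
    def __init__(self, headers=()):
        self._headers = list(headers)
        self._set = set(h.lower() for h in self._headers)

    def __contains__(self, header):
        return header.lower() in self._set

    def to_header(self):
        return ", ".join(self._headers)
```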
2175
2176 class ETags(Container, Iterable):
2177
2178 """A set that can be used to check if one etag is present in a collection
2179 of etags.
2180 """
2181
2182 def __init__(self, strong_etags=None, weak_etags=None, star_tag=False):
2183 self._strong = frozenset(not star_tag and strong_etags or ())
2184 self._weak = frozenset(weak_etags or ())
2185 self.star_tag = star_tag
2186
2187 def as_set(self, include_weak=False):
2188 """Convert the `ETags` object into a python set. Per default all the
2189 weak etags are not part of this set."""
2190 rv = set(self._strong)
2191 if include_weak:
2192 rv.update(self._weak)
2193 return rv
2194
2195 def is_weak(self, etag):
2196 """Check if an etag is weak."""
2197 return etag in self._weak
2198
2199 def is_strong(self, etag):
2200 """Check if an etag is strong."""
2201 return etag in self._strong
2202
2203 def contains_weak(self, etag):
2204 """Check if an etag is part of the set including weak and strong tags."""
2205 return self.is_weak(etag) or self.contains(etag)
2206
2207 def contains(self, etag):
2208 """Check if an etag is part of the set ignoring weak tags.
2209 It is also possible to use the ``in`` operator.
2210 """
2211 if self.star_tag:
2212 return True
2213 return self.is_strong(etag)
2214
2215 def contains_raw(self, etag):
2216 """When passed a quoted tag it will check if this tag is part of the
2217 set. If the tag is weak it is checked against weak and strong tags,
2218 otherwise strong only."""
2219 etag, weak = unquote_etag(etag)
2220 if weak:
2221 return self.contains_weak(etag)
2222 return self.contains(etag)
2223
2224 def to_header(self):
2225 """Convert the etags set into a HTTP header string."""
2226 if self.star_tag:
2227 return '*'
2228 return ', '.join(
2229 ['"%s"' % x for x in self._strong] +
2230 ['W/"%s"' % x for x in self._weak]
2231 )
2232
2233 def __call__(self, etag=None, data=None, include_weak=False):
2234 if [etag, data].count(None) != 1:
2235 raise TypeError('either etag or data is required, but not both')
2236 if etag is None:
2237 etag = generate_etag(data)
2238 if include_weak:
2239 if etag in self._weak:
2240 return True
2241 return etag in self._strong
2242
2243 def __bool__(self):
2244 return bool(self.star_tag or self._strong or self._weak)
2245
2246 __nonzero__ = __bool__
2247
2248 def __str__(self):
2249 return self.to_header()
2250
2251 def __iter__(self):
2252 return iter(self._strong)
2253
2254 def __contains__(self, etag):
2255 return self.contains(etag)
2256
2257 def __repr__(self):
2258 return '<%s %r>' % (self.__class__.__name__, str(self))
2259
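The matching rules of `contains_raw` above can be sketched without the class. A standalone sketch, taking the strong and weak sets as plain arguments: a `W/"..."` tag is checked against both sets, a strong tag against the strong set only, and a stored `*` matches any strong comparison:

```python
def contains_raw(strong, weak, raw, star_tag=False):
    # Unquote the tag first; weakness of the *incoming* tag decides
    # whether the weak set participates in the comparison.
    is_weak = raw.startswith("W/")
    tag = (raw[2:] if is_weak else raw).strip('"')
    if is_weak and tag in weak:
        return True
    return star_tag or tag in strong
```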
2260
2261 class IfRange(object):
2262
2263 """Very simple object that represents the `If-Range` header in parsed
2264 form. It will have either an etag or a date, or neither, but
2265 never both.
2266
2267 .. versionadded:: 0.7
2268 """
2269
2270 def __init__(self, etag=None, date=None):
2271 #: The etag parsed and unquoted. Ranges always operate on strong
2272 #: etags so the weakness information is not necessary.
2273 self.etag = etag
2274 #: The date in parsed format or `None`.
2275 self.date = date
2276
2277 def to_header(self):
2278 """Converts the object back into an HTTP header."""
2279 if self.date is not None:
2280 return http_date(self.date)
2281 if self.etag is not None:
2282 return quote_etag(self.etag)
2283 return ''
2284
2285 def __str__(self):
2286 return self.to_header()
2287
2288 def __repr__(self):
2289 return '<%s %r>' % (self.__class__.__name__, str(self))
2290
2291
2292 class Range(object):
2293
2294 """Represents a range header. All the methods are only supporting bytes
2295 as unit. It does store multiple ranges but :meth:`range_for_length` will
2296 only work if only one range is provided.
2297
2298 .. versionadded:: 0.7
2299 """
2300
2301 def __init__(self, units, ranges):
2302 #: The units of this range. Usually "bytes".
2303 self.units = units
2304 #: A list of ``(begin, end)`` tuples for the range header provided.
2305 #: The ranges are non-inclusive.
2306 self.ranges = ranges
2307
2308 def range_for_length(self, length):
2309 """If the range is for bytes, the length is not None and there is
2310 exactly one range and it is satisfiable it returns a ``(start, stop)``
2311 tuple, otherwise `None`.
2312 """
2313 if self.units != 'bytes' or length is None or len(self.ranges) != 1:
2314 return None
2315 start, end = self.ranges[0]
2316 if end is None:
2317 end = length
2318 if start < 0:
2319 start += length
2320 if is_byte_range_valid(start, end, length):
2321 return start, min(end, length)
2322
2323 def make_content_range(self, length):
2324 """Creates a :class:`~werkzeug.datastructures.ContentRange` object
2325 from the current range and given content length.
2326 """
2327 rng = self.range_for_length(length)
2328 if rng is not None:
2329 return ContentRange(self.units, rng[0], rng[1], length)
2330
2331 def to_header(self):
2332 """Converts the object back into an HTTP header."""
2333 ranges = []
2334 for begin, end in self.ranges:
2335 if end is None:
2336 ranges.append(begin >= 0 and '%s-' % begin or str(begin))
2337 else:
2338 ranges.append('%s-%s' % (begin, end - 1))
2339 return '%s=%s' % (self.units, ','.join(ranges))
2340
2341 def to_content_range_header(self, length):
2342 """Converts the object into `Content-Range` HTTP header,
2343 based on given length
2344 """
2345 range_for_length = self.range_for_length(length)
2346 if range_for_length is not None:
2347 return '%s %d-%d/%d' % (self.units,
2348 range_for_length[0],
2349 range_for_length[1] - 1, length)
2350 return None
2351
2352 def __str__(self):
2353 return self.to_header()
2354
2355 def __repr__(self):
2356 return '<%s %r>' % (self.__class__.__name__, str(self))
2357
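The resolution performed by `range_for_length` above can be sketched for a single bytes range. A standalone sketch of the satisfiability logic (simplified; the real method delegates the validity check to `is_byte_range_valid`):

```python
def range_for_length(start, end, length):
    # An open end ("500-") runs to the end of the resource, and a
    # negative start ("-500") is a suffix range counted from the end.
    # Returns a non-inclusive (start, stop) tuple, or None if the
    # range cannot be satisfied against this length.
    if end is None:
        end = length
    if start < 0:
        start += length
    if 0 <= start < end and start < length:
        return start, min(end, length)
    return None
```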
2358
2359 class ContentRange(object):
2360
2361 """Represents the content range header.
2362
2363 .. versionadded:: 0.7
2364 """
2365
2366 def __init__(self, units, start, stop, length=None, on_update=None):
2367 assert is_byte_range_valid(start, stop, length), \
2368 'Bad range provided'
2369 self.on_update = on_update
2370 self.set(start, stop, length, units)
2371
2372 def _callback_property(name):
2373 def fget(self):
2374 return getattr(self, name)
2375
2376 def fset(self, value):
2377 setattr(self, name, value)
2378 if self.on_update is not None:
2379 self.on_update(self)
2380 return property(fget, fset)
2381
2382 #: The units to use, usually "bytes"
2383 units = _callback_property('_units')
2384 #: The start point of the range or `None`.
2385 start = _callback_property('_start')
2386 #: The stop point of the range (non-inclusive) or `None`. Can only be
2387 #: `None` if start is also `None`.
2388 stop = _callback_property('_stop')
2389 #: The length of the range or `None`.
2390 length = _callback_property('_length')
2391
2392 def set(self, start, stop, length=None, units='bytes'):
2393 """Simple method to update the ranges."""
2394 assert is_byte_range_valid(start, stop, length), \
2395 'Bad range provided'
2396 self._units = units
2397 self._start = start
2398 self._stop = stop
2399 self._length = length
2400 if self.on_update is not None:
2401 self.on_update(self)
2402
2403 def unset(self):
2404 """Sets the units to `None` which indicates that the header should
2405 no longer be used.
2406 """
2407 self.set(None, None, units=None)
2408
2409 def to_header(self):
2410 if self.units is None:
2411 return ''
2412 if self.length is None:
2413 length = '*'
2414 else:
2415 length = self.length
2416 if self.start is None:
2417 return '%s */%s' % (self.units, length)
2418 return '%s %s-%s/%s' % (
2419 self.units,
2420 self.start,
2421 self.stop - 1,
2422 length
2423 )
2424
2425 def __nonzero__(self):
2426 return self.units is not None
2427
2428 __bool__ = __nonzero__
2429
2430 def __str__(self):
2431 return self.to_header()
2432
2433 def __repr__(self):
2434 return '<%s %r>' % (self.__class__.__name__, str(self))
2435
2436
2437 class Authorization(ImmutableDictMixin, dict):
2438
2439 """Represents an `Authorization` header sent by the client. You should
2440 not create this kind of object yourself but use it when it's returned by
2441 the `parse_authorization_header` function.
2442
2443 This object is a dict subclass and can be altered by setting dict items
2444 but it should be considered immutable as it's returned by the client and
2445 not meant for modifications.
2446
2447 .. versionchanged:: 0.5
2448 This object became immutable.
2449 """
2450
2451 def __init__(self, auth_type, data=None):
2452 dict.__init__(self, data or {})
2453 self.type = auth_type
2454
2455 username = property(lambda x: x.get('username'), doc='''
2456 The username transmitted. This is set for both basic and digest
2457 auth all the time.''')
2458 password = property(lambda x: x.get('password'), doc='''
2459 When the authentication type is basic this is the password
2460 transmitted by the client, else `None`.''')
2461 realm = property(lambda x: x.get('realm'), doc='''
2462 This is the server realm sent back for HTTP digest auth.''')
2463 nonce = property(lambda x: x.get('nonce'), doc='''
2464 The nonce the server sent for digest auth, sent back by the client.
2465 A nonce should be unique for every 401 response for HTTP digest
2466 auth.''')
2467 uri = property(lambda x: x.get('uri'), doc='''
2468 The URI from Request-URI of the Request-Line; duplicated because
2469 proxies are allowed to change the Request-Line in transit. HTTP
2470 digest auth only.''')
2471 nc = property(lambda x: x.get('nc'), doc='''
2472 The nonce count value transmitted by clients if a qop-header is
2473 also transmitted. HTTP digest auth only.''')
2474 cnonce = property(lambda x: x.get('cnonce'), doc='''
2475 If the server sent a qop-header in the ``WWW-Authenticate``
2476 header, the client has to provide this value for HTTP digest auth.
2477 See the RFC for more details.''')
2478 response = property(lambda x: x.get('response'), doc='''
2479 A string of 32 hex digits computed as defined in RFC 2617, which
2480 proves that the user knows a password. Digest auth only.''')
2481 opaque = property(lambda x: x.get('opaque'), doc='''
2482 The opaque header from the server returned unchanged by the client.
2483 It is recommended that this string be base64 or hexadecimal data.
2484 Digest auth only.''')
2485 qop = property(lambda x: x.get('qop'), doc='''
2486 Indicates what "quality of protection" the client has applied to
2487 the message for HTTP digest auth. Note that this is a single token,
2488 not a quoted list of alternatives as in WWW-Authenticate.''')
2489
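An `Authorization` object like the one above is normally produced by `parse_authorization_header`. The Basic-scheme half of that parsing can be sketched with the standard library alone; the function name here is illustrative, not werkzeug's API.

```python
import base64


def parse_basic_auth(header_value):
    """Split a 'Basic <base64>' Authorization header value into
    (username, password), or return None if it doesn't parse."""
    try:
        auth_type, data = header_value.split(None, 1)
    except ValueError:
        return None
    if auth_type.lower() != "basic":
        return None
    try:
        # binascii.Error from bad base64 is a ValueError subclass
        decoded = base64.b64decode(data).decode("utf-8")
        username, password = decoded.split(":", 1)
    except (ValueError, UnicodeDecodeError):
        return None
    return username, password
```

The result would populate the `username` and `password` properties documented above; digest parsing additionally extracts `realm`, `nonce`, `response`, and friends from a comma-separated parameter list.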
2490
2491 class WWWAuthenticate(UpdateDictMixin, dict):
2492
2493 """Provides simple access to `WWW-Authenticate` headers."""
2494
2495 #: list of keys that require quoting in the generated header
2496 _require_quoting = frozenset(['domain', 'nonce', 'opaque', 'realm', 'qop'])
2497
2498 def __init__(self, auth_type=None, values=None, on_update=None):
2499 dict.__init__(self, values or ())
2500 if auth_type:
2501 self['__auth_type__'] = auth_type
2502 self.on_update = on_update
2503
2504 def set_basic(self, realm='authentication required'):
2505 """Clear the auth info and enable basic auth."""
2506 dict.clear(self)
2507 dict.update(self, {'__auth_type__': 'basic', 'realm': realm})
2508 if self.on_update:
2509 self.on_update(self)
2510
2511 def set_digest(self, realm, nonce, qop=('auth',), opaque=None,
2512 algorithm=None, stale=False):
2513 """Clear the auth info and enable digest auth."""
2514 d = {
2515 '__auth_type__': 'digest',
2516 'realm': realm,
2517 'nonce': nonce,
2518 'qop': dump_header(qop)
2519 }
2520 if stale:
2521 d['stale'] = 'TRUE'
2522 if opaque is not None:
2523 d['opaque'] = opaque
2524 if algorithm is not None:
2525 d['algorithm'] = algorithm
2526 dict.clear(self)
2527 dict.update(self, d)
2528 if self.on_update:
2529 self.on_update(self)
2530
2531 def to_header(self):
2532 """Convert the stored values into a WWW-Authenticate header."""
2533 d = dict(self)
2534 auth_type = d.pop('__auth_type__', None) or 'basic'
2535 return '%s %s' % (auth_type.title(), ', '.join([
2536 '%s=%s' % (key, quote_header_value(value,
2537 allow_token=key not in self._require_quoting))
2538 for key, value in iteritems(d)
2539 ]))
2540
2541 def __str__(self):
2542 return self.to_header()
2543
2544 def __repr__(self):
2545 return '<%s %r>' % (
2546 self.__class__.__name__,
2547 self.to_header()
2548 )
2549
2550 def auth_property(name, doc=None):
2551 """A static helper function for subclasses to add extra authentication
2552 system properties onto a class::
2553
2554 class FooAuthenticate(WWWAuthenticate):
2555 special_realm = auth_property('special_realm')
2556
2557 For more information have a look at the source code to see how the
2558 regular properties (:attr:`realm` etc.) are implemented.
2559 """
2560
2561 def _set_value(self, value):
2562 if value is None:
2563 self.pop(name, None)
2564 else:
2565 self[name] = str(value)
2566 return property(lambda x: x.get(name), _set_value, doc=doc)
2567
2568 def _set_property(name, doc=None):
2569 def fget(self):
2570 def on_update(header_set):
2571 if not header_set and name in self:
2572 del self[name]
2573 elif header_set:
2574 self[name] = header_set.to_header()
2575 return parse_set_header(self.get(name), on_update)
2576 return property(fget, doc=doc)
2577
2578 type = auth_property('__auth_type__', doc='''
2579 The type of the auth mechanism. HTTP currently specifies
2580 `Basic` and `Digest`.''')
2581 realm = auth_property('realm', doc='''
2582 A string to be displayed to users so they know which username and
2583 password to use. This string should contain at least the name of
2584 the host performing the authentication and might additionally
2585 indicate the collection of users who might have access.''')
2586 domain = _set_property('domain', doc='''
2587 A list of URIs that define the protection space. If a URI is an
2588 absolute path, it is relative to the canonical root URL of the
2589 server being accessed.''')
2590 nonce = auth_property('nonce', doc='''
2591 A server-specified data string which should be uniquely generated
2592 each time a 401 response is made. It is recommended that this
2593 string be base64 or hexadecimal data.''')
2594 opaque = auth_property('opaque', doc='''
2595 A string of data, specified by the server, which should be returned
2596 by the client unchanged in the Authorization header of subsequent
2597 requests with URIs in the same protection space. It is recommended
2598 that this string be base64 or hexadecimal data.''')
2599 algorithm = auth_property('algorithm', doc='''
2600 A string indicating a pair of algorithms used to produce the digest
2601 and a checksum. If this is not present it is assumed to be "MD5".
2602 If the algorithm is not understood, the challenge should be ignored
2603 (and a different one used, if there is more than one).''')
2604 qop = _set_property('qop', doc='''
2605 A set of quality-of-privacy directives such as auth and auth-int.''')
2606
2607 def _get_stale(self):
2608 val = self.get('stale')
2609 if val is not None:
2610 return val.lower() == 'true'
2611
2612 def _set_stale(self, value):
2613 if value is None:
2614 self.pop('stale', None)
2615 else:
2616 self['stale'] = value and 'TRUE' or 'FALSE'
2617 stale = property(_get_stale, _set_stale, doc='''
2618 A flag, indicating that the previous request from the client was
2619 rejected because the nonce value was stale.''')
2620 del _get_stale, _set_stale
2621
2622 # make auth_property a staticmethod so that subclasses of
2623 # `WWWAuthenticate` can use it for new properties.
2624 auth_property = staticmethod(auth_property)
2625 del _set_property
2626
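The `to_header` serialization above can be sketched in isolation. `quote_if_needed` below is a simplified stand-in for `werkzeug.http.quote_header_value`, and the function names are invented for the sketch; only the quoting rule for keys in `_require_quoting` is taken from the class.

```python
def quote_if_needed(value, allow_token=True):
    """Quote a header parameter value unless it is a plain token.
    Simplified stand-in for werkzeug.http.quote_header_value."""
    value = str(value)
    if allow_token and value and all(
        c.isalnum() or c in "!#$%&'*+-.^_`|~" for c in value
    ):
        return value
    return '"%s"' % value.replace("\\", "\\\\").replace('"', '\\"')


def www_authenticate_header(auth_type, params,
                            require_quoting=("realm", "nonce", "opaque")):
    """Build a WWW-Authenticate header the way to_header above does:
    title-cased scheme, then comma-separated key=value parameters."""
    return "%s %s" % (
        auth_type.title(),
        ", ".join(
            "%s=%s" % (k, quote_if_needed(v, allow_token=k not in require_quoting))
            for k, v in params.items()
        ),
    )
```

For example, `www_authenticate_header("basic", {"realm": "login required"})` yields `'Basic realm="login required"'`, the challenge a browser turns into its login prompt.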
2627
2628 class FileStorage(object):
2629
2630 """The :class:`FileStorage` class is a thin wrapper over incoming files.
2631 It is used by the request object to represent uploaded files. All the
2632 attributes of the wrapper stream are proxied by the file storage so
2633 it's possible to do ``storage.read()`` instead of the long form
2634 ``storage.stream.read()``.
2635 """
2636
2637 def __init__(self, stream=None, filename=None, name=None,
2638 content_type=None, content_length=None,
2639 headers=None):
2640 self.name = name
2641 self.stream = stream or _empty_stream
2642
2643 # if no filename is provided we can attempt to get the filename
2644 # from the stream object passed. There we have to be careful to
2645 # skip things like <fdopen>, <stderr> etc. Python marks these
2646 # special filenames with angle brackets.
2647 if filename is None:
2648 filename = getattr(stream, 'name', None)
2649 s = make_literal_wrapper(filename)
2650 if filename and filename[0] == s('<') and filename[-1] == s('>'):
2651 filename = None
2652
2653 # On Python 3 we want to make sure the filename is always unicode.
2654 # This might not be if the name attribute is bytes due to the
2655 # file being opened from the bytes API.
2656 if not PY2 and isinstance(filename, bytes):
2657 filename = filename.decode(get_filesystem_encoding(),
2658 'replace')
2659
2660 self.filename = filename
2661 if headers is None:
2662 headers = Headers()
2663 self.headers = headers
2664 if content_type is not None:
2665 headers['Content-Type'] = content_type
2666 if content_length is not None:
2667 headers['Content-Length'] = str(content_length)
2668
2669 def _parse_content_type(self):
2670 if not hasattr(self, '_parsed_content_type'):
2671 self._parsed_content_type = \
2672 parse_options_header(self.content_type)
2673
2674 @property
2675 def content_type(self):
2676 """The content-type sent in the header. Usually not available"""
2677 return self.headers.get('content-type')
2678
2679 @property
2680 def content_length(self):
2681 """The content-length sent in the header. Usually not available"""
2682 return int(self.headers.get('content-length') or 0)
2683
2684 @property
2685 def mimetype(self):
2686 """Like :attr:`content_type`, but without parameters (e.g. without
2687 charset, type, etc.) and always lowercase. For example if the content
2688 type is ``text/HTML; charset=utf-8`` the mimetype would be
2689 ``'text/html'``.
2690
2691 .. versionadded:: 0.7
2692 """
2693 self._parse_content_type()
2694 return self._parsed_content_type[0].lower()
2695
2696 @property
2697 def mimetype_params(self):
2698 """The mimetype parameters as dict. For example if the content
2699 type is ``text/html; charset=utf-8`` the params would be
2700 ``{'charset': 'utf-8'}``.
2701
2702 .. versionadded:: 0.7
2703 """
2704 self._parse_content_type()
2705 return self._parsed_content_type[1]
2706
2707 def save(self, dst, buffer_size=16384):
2708 """Save the file to a destination path or file object. If the
2709 destination is a file object you have to close it yourself after the
2710 call. The buffer size is the number of bytes held in memory during
2711 the copy process. It defaults to 16KB.
2712
2713 For secure file saving also have a look at :func:`secure_filename`.
2714
2715 :param dst: a filename or open file object the uploaded file
2716 is saved to.
2717 :param buffer_size: the size of the buffer. This works the same as
2718 the `length` parameter of
2719 :func:`shutil.copyfileobj`.
2720 """
2721 from shutil import copyfileobj
2722 close_dst = False
2723 if isinstance(dst, string_types):
2724 dst = open(dst, 'wb')
2725 close_dst = True
2726 try:
2727 copyfileobj(self.stream, dst, buffer_size)
2728 finally:
2729 if close_dst:
2730 dst.close()
2731
2732 def close(self):
2733 """Close the underlying file if possible."""
2734 try:
2735 self.stream.close()
2736 except Exception:
2737 pass
2738
2739 def __nonzero__(self):
2740 return bool(self.filename)
2741 __bool__ = __nonzero__
2742
2743 def __getattr__(self, name):
2744 return getattr(self.stream, name)
2745
2746 def __iter__(self):
2747 return iter(self.stream)
2748
2749 def __repr__(self):
2750 return '<%s: %r (%r)>' % (
2751 self.__class__.__name__,
2752 self.filename,
2753 self.content_type
2754 )
2755
2756
2757 # circular dependencies
2758 from werkzeug.http import dump_options_header, dump_header, generate_etag, \
2759 quote_header_value, parse_set_header, unquote_etag, quote_etag, \
2760 parse_options_header, http_date, is_byte_range_valid
2761 from werkzeug import exceptions
+0
-470
werkzeug/debug/__init__.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug
3 ~~~~~~~~~~~~~~
4
5 WSGI application traceback debugger.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 import os
11 import re
12 import sys
13 import uuid
14 import json
15 import time
16 import getpass
17 import hashlib
18 import mimetypes
19 from itertools import chain
20 from os.path import join, dirname, basename, isfile
21 from werkzeug.wrappers import BaseRequest as Request, BaseResponse as Response
22 from werkzeug.http import parse_cookie
23 from werkzeug.debug.tbtools import get_current_traceback, render_console_html
24 from werkzeug.debug.console import Console
25 from werkzeug.security import gen_salt
26 from werkzeug._internal import _log
27 from werkzeug._compat import text_type
28
29
30 # DEPRECATED
31 #: import this here because it once was documented as being available
32 #: from this module. In case there are users left ...
33 from werkzeug.debug.repr import debug_repr # noqa
34
35
36 # A week
37 PIN_TIME = 60 * 60 * 24 * 7
38
39
40 def hash_pin(pin):
41 if isinstance(pin, text_type):
42 pin = pin.encode('utf-8', 'replace')
43 return hashlib.md5(pin + b'shittysalt').hexdigest()[:12]
44
45
46 _machine_id = None
47
48
49 def get_machine_id():
50 global _machine_id
51 rv = _machine_id
52 if rv is not None:
53 return rv
54
55 def _generate():
56 # Potential sources of secret information on linux. The machine-id
57 # is stable across boots, the boot id is not
58 for filename in '/etc/machine-id', '/proc/sys/kernel/random/boot_id':
59 try:
60 with open(filename, 'rb') as f:
61 return f.readline().strip()
62 except IOError:
63 continue
64
65 # On OS X we can use the computer's serial number assuming that
66 # ioreg exists and can spit out that information.
67 try:
68 # Also catch import errors: subprocess may not be available, e.g.
69 # Google App Engine
70 # See https://github.com/pallets/werkzeug/issues/925
71 from subprocess import Popen, PIPE
72 dump = Popen(['ioreg', '-c', 'IOPlatformExpertDevice', '-d', '2'],
73 stdout=PIPE).communicate()[0]
74 match = re.search(b'"serial-number" = <([^>]+)', dump)
75 if match is not None:
76 return match.group(1)
77 except (OSError, ImportError):
78 pass
79
80 # On Windows we can use winreg to get the machine guid
81 wr = None
82 try:
83 import winreg as wr
84 except ImportError:
85 try:
86 import _winreg as wr
87 except ImportError:
88 pass
89 if wr is not None:
90 try:
91 with wr.OpenKey(wr.HKEY_LOCAL_MACHINE,
92 'SOFTWARE\\Microsoft\\Cryptography', 0,
93 wr.KEY_READ | wr.KEY_WOW64_64KEY) as rk:
94 machineGuid, wrType = wr.QueryValueEx(rk, 'MachineGuid')
95 if (wrType == wr.REG_SZ):
96 return machineGuid.encode('utf-8')
97 else:
98 return machineGuid
99 except WindowsError:
100 pass
101
102 _machine_id = rv = _generate()
103 return rv
104
105
106 class _ConsoleFrame(object):
107
108 """Helper class so that we can reuse the frame console code for the
109 standalone console.
110 """
111
112 def __init__(self, namespace):
113 self.console = Console(namespace)
114 self.id = 0
115
116
117 def get_pin_and_cookie_name(app):
118 """Given an application object this returns a semi-stable 9-digit pin
119 code and a random key. The hope is that this is stable between
120 restarts to not make debugging particularly frustrating. If the pin
121 was forcefully disabled this returns `None`.
122
123 Second item in the resulting tuple is the cookie name for remembering.
124 """
125 pin = os.environ.get('WERKZEUG_DEBUG_PIN')
126 rv = None
127 num = None
128
129 # Pin was explicitly disabled
130 if pin == 'off':
131 return None, None
132
133 # Pin was provided explicitly
134 if pin is not None and pin.replace('-', '').isdigit():
135 # If there are separators in the pin, return it directly
136 if '-' in pin:
137 rv = pin
138 else:
139 num = pin
140
141 modname = getattr(app, '__module__',
142 getattr(app.__class__, '__module__'))
143
144 try:
145 # `getpass.getuser()` imports the `pwd` module,
146 # which does not exist in the Google App Engine sandbox.
147 username = getpass.getuser()
148 except ImportError:
149 username = None
150
151 mod = sys.modules.get(modname)
152
153 # This information only exists to make the cookie unique on the
154 # computer, not as a security feature.
155 probably_public_bits = [
156 username,
157 modname,
158 getattr(app, '__name__', getattr(app.__class__, '__name__')),
159 getattr(mod, '__file__', None),
160 ]
161
162 # This information is here to make it harder for an attacker to
163 # guess the cookie name. They are unlikely to be contained anywhere
164 # within the unauthenticated debug page.
165 private_bits = [
166 str(uuid.getnode()),
167 get_machine_id(),
168 ]
169
170 h = hashlib.md5()
171 for bit in chain(probably_public_bits, private_bits):
172 if not bit:
173 continue
174 if isinstance(bit, text_type):
175 bit = bit.encode('utf-8')
176 h.update(bit)
177 h.update(b'cookiesalt')
178
179 cookie_name = '__wzd' + h.hexdigest()[:20]
180
181 # If we need to generate a pin we salt it a bit more so that we don't
182 # end up with the same value and generate 9 digits.
183 if num is None:
184 h.update(b'pinsalt')
185 num = ('%09d' % int(h.hexdigest(), 16))[:9]
186
187 # Format the pin code in groups of digits for easier remembering if
188 # we don't have a result yet.
189 if rv is None:
190 for group_size in 5, 4, 3:
191 if len(num) % group_size == 0:
192 rv = '-'.join(num[x:x + group_size].rjust(group_size, '0')
193 for x in range(0, len(num), group_size))
194 break
195 else:
196 rv = num
197
198 return rv, cookie_name
199
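The final grouping step above, which splits the 9 generated digits using the first group size in 5, 4, 3 that divides the length evenly, can be sketched on its own (the function name is invented for the sketch):

```python
def format_pin(num):
    """Split a digit string into evenly sized dash-separated groups,
    preferring group sizes 5, then 4, then 3, as in the loop above.
    Falls back to the raw string when no size divides evenly."""
    for group_size in 5, 4, 3:
        if len(num) % group_size == 0:
            return "-".join(
                num[x:x + group_size].rjust(group_size, "0")
                for x in range(0, len(num), group_size)
            )
    return num
```

A 9-digit pin is not divisible by 5 or 4, so it ends up in groups of three: `format_pin("123456789")` gives `'123-456-789'`.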
200
201 class DebuggedApplication(object):
202 """Enables debugging support for a given application::
203
204 from werkzeug.debug import DebuggedApplication
205 from myapp import app
206 app = DebuggedApplication(app, evalex=True)
207
208 The `evalex` keyword argument allows evaluating expressions in a
209 traceback's frame context.
210
211 .. versionadded:: 0.9
212 The `lodgeit_url` parameter was deprecated.
213
214 :param app: the WSGI application to run debugged.
215 :param evalex: enable exception evaluation feature (interactive
216 debugging). This requires a non-forking server.
217 :param request_key: The key that points to the request object in the
218 environment. This parameter is ignored in current
219 versions.
220 :param console_path: the URL for a general purpose console.
221 :param console_init_func: the function that is executed before starting
222 the general purpose console. The return value
223 is used as initial namespace.
224 :param show_hidden_frames: by default hidden traceback frames are skipped.
225 You can show them by setting this parameter
226 to `True`.
227 :param pin_security: can be used to disable the pin based security system.
228 :param pin_logging: enables the logging of the pin system.
229 """
230
231 def __init__(self, app, evalex=False, request_key='werkzeug.request',
232 console_path='/console', console_init_func=None,
233 show_hidden_frames=False, lodgeit_url=None,
234 pin_security=True, pin_logging=True):
235 if lodgeit_url is not None:
236 from warnings import warn
237 warn(DeprecationWarning('Werkzeug now pastes into gists.'))
238 if not console_init_func:
239 console_init_func = None
240 self.app = app
241 self.evalex = evalex
242 self.frames = {}
243 self.tracebacks = {}
244 self.request_key = request_key
245 self.console_path = console_path
246 self.console_init_func = console_init_func
247 self.show_hidden_frames = show_hidden_frames
248 self.secret = gen_salt(20)
249 self._failed_pin_auth = 0
250
251 self.pin_logging = pin_logging
252 if pin_security:
253 # Print out the pin for the debugger on standard out.
254 if os.environ.get('WERKZEUG_RUN_MAIN') == 'true' and \
255 pin_logging:
256 _log('warning', ' * Debugger is active!')
257 if self.pin is None:
258 _log('warning', ' * Debugger PIN disabled. '
259 'DEBUGGER UNSECURED!')
260 else:
261 _log('info', ' * Debugger PIN: %s' % self.pin)
262 else:
263 self.pin = None
264
265 def _get_pin(self):
266 if not hasattr(self, '_pin'):
267 self._pin, self._pin_cookie = get_pin_and_cookie_name(self.app)
268 return self._pin
269
270 def _set_pin(self, value):
271 self._pin = value
272
273 pin = property(_get_pin, _set_pin)
274 del _get_pin, _set_pin
275
276 @property
277 def pin_cookie_name(self):
278 """The name of the pin cookie."""
279 if not hasattr(self, '_pin_cookie'):
280 self._pin, self._pin_cookie = get_pin_and_cookie_name(self.app)
281 return self._pin_cookie
282
283 def debug_application(self, environ, start_response):
284 """Run the application and conserve the traceback frames."""
285 app_iter = None
286 try:
287 app_iter = self.app(environ, start_response)
288 for item in app_iter:
289 yield item
290 if hasattr(app_iter, 'close'):
291 app_iter.close()
292 except Exception:
293 if hasattr(app_iter, 'close'):
294 app_iter.close()
295 traceback = get_current_traceback(
296 skip=1, show_hidden_frames=self.show_hidden_frames,
297 ignore_system_exceptions=True)
298 for frame in traceback.frames:
299 self.frames[frame.id] = frame
300 self.tracebacks[traceback.id] = traceback
301
302 try:
303 start_response('500 INTERNAL SERVER ERROR', [
304 ('Content-Type', 'text/html; charset=utf-8'),
305 # Disable Chrome's XSS protection, the debug
306 # output can cause false-positives.
307 ('X-XSS-Protection', '0'),
308 ])
309 except Exception:
310 # if we end up here there has been output but an error
311 # occurred. in that situation we can do nothing fancy any
312 # more, better log something into the error log and fall
313 # back gracefully.
314 environ['wsgi.errors'].write(
315 'Debugging middleware caught exception in streamed '
316 'response at a point where response headers were already '
317 'sent.\n')
318 else:
319 is_trusted = bool(self.check_pin_trust(environ))
320 yield traceback.render_full(evalex=self.evalex,
321 evalex_trusted=is_trusted,
322 secret=self.secret) \
323 .encode('utf-8', 'replace')
324
325 traceback.log(environ['wsgi.errors'])
326
327 def execute_command(self, request, command, frame):
328 """Execute a command in a console."""
329 return Response(frame.console.eval(command), mimetype='text/html')
330
331 def display_console(self, request):
332 """Display a standalone shell."""
333 if 0 not in self.frames:
334 if self.console_init_func is None:
335 ns = {}
336 else:
337 ns = dict(self.console_init_func())
338 ns.setdefault('app', self.app)
339 self.frames[0] = _ConsoleFrame(ns)
340 is_trusted = bool(self.check_pin_trust(request.environ))
341 return Response(render_console_html(secret=self.secret,
342 evalex_trusted=is_trusted),
343 mimetype='text/html')
344
345 def paste_traceback(self, request, traceback):
346 """Paste the traceback and return a JSON response."""
347 rv = traceback.paste()
348 return Response(json.dumps(rv), mimetype='application/json')
349
350 def get_resource(self, request, filename):
351 """Return a static resource from the shared folder."""
352 filename = join(dirname(__file__), 'shared', basename(filename))
353 if isfile(filename):
354 mimetype = mimetypes.guess_type(filename)[0] \
355 or 'application/octet-stream'
356 f = open(filename, 'rb')
357 try:
358 return Response(f.read(), mimetype=mimetype)
359 finally:
360 f.close()
361 return Response('Not Found', status=404)
362
363 def check_pin_trust(self, environ):
364 """Checks if the request passed the pin test. This returns `True` if the
365 request is trusted on a pin/cookie basis and returns `False` if not.
366 Additionally if the cookie's stored pin hash is wrong it will return
367 `None` so that appropriate action can be taken.
368 """
369 if self.pin is None:
370 return True
371 val = parse_cookie(environ).get(self.pin_cookie_name)
372 if not val or '|' not in val:
373 return False
374 ts, pin_hash = val.split('|', 1)
375 if not ts.isdigit():
376 return False
377 if pin_hash != hash_pin(self.pin):
378 return None
379 return (time.time() - PIN_TIME) < int(ts)
380
381 def _fail_pin_auth(self):
382 time.sleep(self._failed_pin_auth > 5 and 5.0 or 0.5)
383 self._failed_pin_auth += 1
384
385 def pin_auth(self, request):
386 """Authenticates with the pin."""
387 exhausted = False
388 auth = False
389 trust = self.check_pin_trust(request.environ)
390
391 # If the trust return value is `None` it means that the cookie is
392 # set but the stored pin hash value is bad. This means that the
393 # pin was changed. In this case we count a bad auth and unset the
394 # cookie. This way it becomes harder to guess the cookie name
395 # instead of the pin as we still count up failures.
396 bad_cookie = False
397 if trust is None:
398 self._fail_pin_auth()
399 bad_cookie = True
400
401 # If we're trusted, we're authenticated.
402 elif trust:
403 auth = True
404
405 # If we failed too many times, then we're locked out.
406 elif self._failed_pin_auth > 10:
407 exhausted = True
408
409 # Otherwise go through pin based authentication
410 else:
411 entered_pin = request.args.get('pin', '')
412 if entered_pin.strip().replace('-', '') == \
413 self.pin.replace('-', ''):
414 self._failed_pin_auth = 0
415 auth = True
416 else:
417 self._fail_pin_auth()
418
419 rv = Response(json.dumps({
420 'auth': auth,
421 'exhausted': exhausted,
422 }), mimetype='application/json')
423 if auth:
424 rv.set_cookie(self.pin_cookie_name, '%s|%s' % (
425 int(time.time()),
426 hash_pin(self.pin)
427 ), httponly=True)
428 elif bad_cookie:
429 rv.delete_cookie(self.pin_cookie_name)
430 return rv
431
432 def log_pin_request(self):
433 """Log the pin if needed."""
434 if self.pin_logging and self.pin is not None:
435 _log('info', ' * To enable the debugger you need to '
436 'enter the security pin:')
437 _log('info', ' * Debugger pin code: %s' % self.pin)
438 return Response('')
439
440 def __call__(self, environ, start_response):
441 """Dispatch the requests."""
442 # important: don't ever access a function here that reads the incoming
443 # form data! Otherwise the application won't have access to that data
444 # any more!
445 request = Request(environ)
446 response = self.debug_application
447 if request.args.get('__debugger__') == 'yes':
448 cmd = request.args.get('cmd')
449 arg = request.args.get('f')
450 secret = request.args.get('s')
451 traceback = self.tracebacks.get(request.args.get('tb', type=int))
452 frame = self.frames.get(request.args.get('frm', type=int))
453 if cmd == 'resource' and arg:
454 response = self.get_resource(request, arg)
455 elif cmd == 'paste' and traceback is not None and \
456 secret == self.secret:
457 response = self.paste_traceback(request, traceback)
458 elif cmd == 'pinauth' and secret == self.secret:
459 response = self.pin_auth(request)
460 elif cmd == 'printpin' and secret == self.secret:
461 response = self.log_pin_request()
462 elif self.evalex and cmd is not None and frame is not None \
463 and self.secret == secret and \
464 self.check_pin_trust(environ):
465 response = self.execute_command(request, cmd, frame)
466 elif self.evalex and self.console_path is not None and \
467 request.path == self.console_path:
468 response = self.display_console(request)
469 return response(environ, start_response)
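The cookie validated by `check_pin_trust` above carries a `timestamp|pin_hash` payload; the freshness-window logic can be sketched on its own. The names below are invented for the sketch, and `expected_hash` stands in for the module's `hash_pin(self.pin)` result.

```python
import time

PIN_TIME = 60 * 60 * 24 * 7  # one week, as in the module above


def cookie_is_trusted(cookie_value, expected_hash, now=None):
    """Return True for a fresh cookie with the right hash, False for a
    malformed or expired one, and None when the hash is wrong -- the
    signal check_pin_trust uses so the stale cookie gets deleted."""
    if not cookie_value or "|" not in cookie_value:
        return False
    ts, pin_hash = cookie_value.split("|", 1)
    if not ts.isdigit():
        return False
    if pin_hash != expected_hash:
        return None
    now = time.time() if now is None else now
    return (now - PIN_TIME) < int(ts)
```

The three-valued return is what lets `pin_auth` distinguish "count a failure and clear the cookie" (`None`) from an ordinary untrusted request (`False`).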
+0
-215
werkzeug/debug/console.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug.console
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 Interactive console support.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
9 """
10 import sys
11 import code
12 from types import CodeType
13
14 from werkzeug.utils import escape
15 from werkzeug.local import Local
16 from werkzeug.debug.repr import debug_repr, dump, helper
17
18
19 _local = Local()
20
21
22 class HTMLStringO(object):
23
24 """A StringO version that HTML-escapes on write."""
25
26 def __init__(self):
27 self._buffer = []
28
29 def isatty(self):
30 return False
31
32 def close(self):
33 pass
34
35 def flush(self):
36 pass
37
38 def seek(self, n, mode=0):
39 pass
40
41 def readline(self):
42 if len(self._buffer) == 0:
43 return ''
44 ret = self._buffer[0]
45 del self._buffer[0]
46 return ret
47
48 def reset(self):
49 val = ''.join(self._buffer)
50 del self._buffer[:]
51 return val
52
53 def _write(self, x):
54 if isinstance(x, bytes):
55 x = x.decode('utf-8', 'replace')
56 self._buffer.append(x)
57
58 def write(self, x):
59 self._write(escape(x))
60
61 def writelines(self, x):
62 self._write(escape(''.join(x)))
63
64
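`HTMLStringO` above escapes everything written to it before buffering. The same idea with the standard library's `html.escape` (a rough stand-in for `werkzeug.utils.escape`; the class name is invented for the sketch):

```python
import html


class EscapingBuffer(object):
    """Collects writes, HTML-escaping each one, like HTMLStringO above."""

    def __init__(self):
        self._buffer = []

    def write(self, x):
        # decode bytes first, as HTMLStringO._write does
        if isinstance(x, bytes):
            x = x.decode("utf-8", "replace")
        self._buffer.append(html.escape(x))

    def reset(self):
        """Return everything written so far and empty the buffer."""
        val = "".join(self._buffer)
        del self._buffer[:]
        return val
```

Any markup a console expression prints comes back inert, so the debugger can drop the captured output straight into its HTML page.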
65 class ThreadedStream(object):
66
67 """Thread-local wrapper for sys.stdout for the interactive console."""
68
69 def push():
70 if not isinstance(sys.stdout, ThreadedStream):
71 sys.stdout = ThreadedStream()
72 _local.stream = HTMLStringO()
73 push = staticmethod(push)
74
75 def fetch():
76 try:
77 stream = _local.stream
78 except AttributeError:
79 return ''
80 return stream.reset()
81 fetch = staticmethod(fetch)
82
83 def displayhook(obj):
84 try:
85 stream = _local.stream
86 except AttributeError:
87 return _displayhook(obj)
88 # stream._write bypasses escaping as debug_repr is
89 # already generating HTML for us.
90 if obj is not None:
91 _local._current_ipy.locals['_'] = obj
92 stream._write(debug_repr(obj))
93 displayhook = staticmethod(displayhook)
94
95 def __setattr__(self, name, value):
96 raise AttributeError('read only attribute %s' % name)
97
98 def __dir__(self):
99 return dir(sys.__stdout__)
100
101 def __getattribute__(self, name):
102 if name == '__members__':
103 return dir(sys.__stdout__)
104 try:
105 stream = _local.stream
106 except AttributeError:
107 stream = sys.__stdout__
108 return getattr(stream, name)
109
110 def __repr__(self):
111 return repr(sys.__stdout__)
112
113
114 # add the threaded stream as display hook
115 _displayhook = sys.displayhook
116 sys.displayhook = ThreadedStream.displayhook
117
118
119 class _ConsoleLoader(object):
120
121 def __init__(self):
122 self._storage = {}
123
124 def register(self, code, source):
125 self._storage[id(code)] = source
126 # register code objects of wrapped functions too.
127 for var in code.co_consts:
128 if isinstance(var, CodeType):
129 self._storage[id(var)] = source
130
131 def get_source_by_code(self, code):
132 try:
133 return self._storage[id(code)]
134 except KeyError:
135 pass
136
137
138 def _wrap_compiler(console):
139 compile = console.compile
140
141 def func(source, filename, symbol):
142 code = compile(source, filename, symbol)
143 console.loader.register(code, source)
144 return code
145 console.compile = func
146
147
148 class _InteractiveConsole(code.InteractiveInterpreter):
149
150 def __init__(self, globals, locals):
151 code.InteractiveInterpreter.__init__(self, locals)
152 self.globals = dict(globals)
153 self.globals['dump'] = dump
154 self.globals['help'] = helper
155 self.globals['__loader__'] = self.loader = _ConsoleLoader()
156 self.more = False
157 self.buffer = []
158 _wrap_compiler(self)
159
160 def runsource(self, source):
161 source = source.rstrip() + '\n'
162 ThreadedStream.push()
163 prompt = self.more and '... ' or '>>> '
164 try:
165 source_to_eval = ''.join(self.buffer + [source])
166 if code.InteractiveInterpreter.runsource(self,
167 source_to_eval, '<debugger>', 'single'):
168 self.more = True
169 self.buffer.append(source)
170 else:
171 self.more = False
172 del self.buffer[:]
173 finally:
174 output = ThreadedStream.fetch()
175 return prompt + escape(source) + output
176
177 def runcode(self, code):
178 try:
179 eval(code, self.globals, self.locals)
180 except Exception:
181 self.showtraceback()
182
183 def showtraceback(self):
184 from werkzeug.debug.tbtools import get_current_traceback
185 tb = get_current_traceback(skip=1)
186 sys.stdout._write(tb.render_summary())
187
188 def showsyntaxerror(self, filename=None):
189 from werkzeug.debug.tbtools import get_current_traceback
190 tb = get_current_traceback(skip=4)
191 sys.stdout._write(tb.render_summary())
192
193 def write(self, data):
194 sys.stdout.write(data)
195
196
197 class Console(object):
198
199 """An interactive console."""
200
201 def __init__(self, globals=None, locals=None):
202 if locals is None:
203 locals = {}
204 if globals is None:
205 globals = {}
206 self._ipy = _InteractiveConsole(globals, locals)
207
208 def eval(self, code):
209 _local._current_ipy = self._ipy
210 old_sys_stdout = sys.stdout
211 try:
212 return self._ipy.runsource(code)
213 finally:
214 sys.stdout = old_sys_stdout
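The `ThreadedStream`/`HTMLStringO` pair in the removed `console.py` swaps `sys.stdout` for a proxy whose attribute lookups are forwarded to a thread-local buffer, so each debugger request captures only its own output. A minimal standalone sketch of that technique (the name `ThreadLocalStdout` is illustrative, not from Werkzeug):

```python
import io
import sys
import threading

_local = threading.local()


class ThreadLocalStdout:
    """Proxy whose attribute lookups go to a per-thread buffer when
    one is installed, and to the real stdout otherwise."""

    def __getattr__(self, name):
        # __getattr__ is only hit for attributes the proxy itself
        # lacks, i.e. everything: write, flush, encoding, ...
        stream = getattr(_local, "stream", sys.__stdout__)
        return getattr(stream, name)


sys.stdout = ThreadLocalStdout()
_local.stream = io.StringIO()   # begin capturing this thread's prints
print("captured")
result = _local.stream.getvalue()
sys.stdout = sys.__stdout__     # restore the real stdout
```

Threads without a `_local.stream` fall through to the real `sys.__stdout__`, which is why the debugger can patch stdout process-wide without silencing the rest of the application.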
+0
-280
werkzeug/debug/repr.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug.repr
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module implements object representations for debugging purposes.
6 Unlike the default repr these reprs expose a lot more information and
7 produce HTML instead of ASCII.
8
9 Together with the CSS and JavaScript files of the debugger this gives
10 a colorful and more compact output.
11
12 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
13 :license: BSD.
14 """
15 import sys
16 import re
17 import codecs
18 from traceback import format_exception_only
19 try:
20 from collections import deque
21 except ImportError: # pragma: no cover
22 deque = None
23 from werkzeug.utils import escape
24 from werkzeug._compat import iteritems, PY2, text_type, integer_types, \
25 string_types
26
27
28 missing = object()
29 _paragraph_re = re.compile(r'(?:\r\n|\r|\n){2,}')
30 RegexType = type(_paragraph_re)
31
32
33 HELP_HTML = '''\
34 <div class=box>
35 <h3>%(title)s</h3>
36 <pre class=help>%(text)s</pre>
37 </div>\
38 '''
39 OBJECT_DUMP_HTML = '''\
40 <div class=box>
41 <h3>%(title)s</h3>
42 %(repr)s
43 <table>%(items)s</table>
44 </div>\
45 '''
46
47
48 def debug_repr(obj):
49 """Creates a debug repr of an object as HTML unicode string."""
50 return DebugReprGenerator().repr(obj)
51
52
53 def dump(obj=missing):
54 """Print the object details to stdout._write (for the interactive
55 console of the web debugger).
56 """
57 gen = DebugReprGenerator()
58 if obj is missing:
59 rv = gen.dump_locals(sys._getframe(1).f_locals)
60 else:
61 rv = gen.dump_object(obj)
62 sys.stdout._write(rv)
63
64
65 class _Helper(object):
66
67 """Displays an HTML version of the normal help. Usable only in the
68 interactive debugger because it requires a patched sys.stdout.
69 """
70
71 def __repr__(self):
72 return 'Type help(object) for help about object.'
73
74 def __call__(self, topic=None):
75 if topic is None:
76 sys.stdout._write('<span class=help>%s</span>' % repr(self))
77 return
78 import pydoc
79 pydoc.help(topic)
80 rv = sys.stdout.reset()
81 if isinstance(rv, bytes):
82 rv = rv.decode('utf-8', 'ignore')
83 paragraphs = _paragraph_re.split(rv)
84 if len(paragraphs) > 1:
85 title = paragraphs[0]
86 text = '\n\n'.join(paragraphs[1:])
87 else: # pragma: no cover
88 title = 'Help'
89 text = paragraphs[0]
90 sys.stdout._write(HELP_HTML % {'title': title, 'text': text})
91
92
93 helper = _Helper()
94
95
96 def _add_subclass_info(inner, obj, base):
97 if isinstance(base, tuple):
98 for base in base:
99 if type(obj) is base:
100 return inner
101 elif type(obj) is base:
102 return inner
103 module = ''
104 if obj.__class__.__module__ not in ('__builtin__', 'exceptions'):
105 module = '<span class="module">%s.</span>' % obj.__class__.__module__
106 return '%s%s(%s)' % (module, obj.__class__.__name__, inner)
107
108
109 class DebugReprGenerator(object):
110
111 def __init__(self):
112 self._stack = []
113
114 def _sequence_repr_maker(left, right, base=object(), limit=8):
115 def proxy(self, obj, recursive):
116 if recursive:
117 return _add_subclass_info(left + '...' + right, obj, base)
118 buf = [left]
119 have_extended_section = False
120 for idx, item in enumerate(obj):
121 if idx:
122 buf.append(', ')
123 if idx == limit:
124 buf.append('<span class="extended">')
125 have_extended_section = True
126 buf.append(self.repr(item))
127 if have_extended_section:
128 buf.append('</span>')
129 buf.append(right)
130 return _add_subclass_info(u''.join(buf), obj, base)
131 return proxy
132
133 list_repr = _sequence_repr_maker('[', ']', list)
134 tuple_repr = _sequence_repr_maker('(', ')', tuple)
135 set_repr = _sequence_repr_maker('set([', '])', set)
136 frozenset_repr = _sequence_repr_maker('frozenset([', '])', frozenset)
137 if deque is not None:
138 deque_repr = _sequence_repr_maker('<span class="module">collections.'
139 '</span>deque([', '])', deque)
140 del _sequence_repr_maker
141
142 def regex_repr(self, obj):
143 pattern = repr(obj.pattern)
144 if PY2:
145 pattern = pattern.decode('string-escape', 'ignore')
146 else:
147 pattern = codecs.decode(pattern, 'unicode-escape', 'ignore')
148 if pattern[:1] == 'u':
149 pattern = 'ur' + pattern[1:]
150 else:
151 pattern = 'r' + pattern
152 return u're.compile(<span class="string regex">%s</span>)' % pattern
153
154 def string_repr(self, obj, limit=70):
155 buf = ['<span class="string">']
156 a = repr(obj[:limit])
157 b = repr(obj[limit:])
158 if isinstance(obj, text_type) and PY2:
159 buf.append('u')
160 a = a[1:]
161 b = b[1:]
162 if b != "''":
163 buf.extend((escape(a[:-1]), '<span class="extended">', escape(b[1:]), '</span>'))
164 else:
165 buf.append(escape(a))
166 buf.append('</span>')
167 return _add_subclass_info(u''.join(buf), obj, (bytes, text_type))
168
169 def dict_repr(self, d, recursive, limit=5):
170 if recursive:
171 return _add_subclass_info(u'{...}', d, dict)
172 buf = ['{']
173 have_extended_section = False
174 for idx, (key, value) in enumerate(iteritems(d)):
175 if idx:
176 buf.append(', ')
177 if idx == limit - 1:
178 buf.append('<span class="extended">')
179 have_extended_section = True
180 buf.append('<span class="pair"><span class="key">%s</span>: '
181 '<span class="value">%s</span></span>' %
182 (self.repr(key), self.repr(value)))
183 if have_extended_section:
184 buf.append('</span>')
185 buf.append('}')
186 return _add_subclass_info(u''.join(buf), d, dict)
187
188 def object_repr(self, obj):
189 r = repr(obj)
190 if PY2:
191 r = r.decode('utf-8', 'replace')
192 return u'<span class="object">%s</span>' % escape(r)
193
194 def dispatch_repr(self, obj, recursive):
195 if obj is helper:
196 return u'<span class="help">%r</span>' % helper
197 if isinstance(obj, (integer_types, float, complex)):
198 return u'<span class="number">%r</span>' % obj
199 if isinstance(obj, string_types):
200 return self.string_repr(obj)
201 if isinstance(obj, RegexType):
202 return self.regex_repr(obj)
203 if isinstance(obj, list):
204 return self.list_repr(obj, recursive)
205 if isinstance(obj, tuple):
206 return self.tuple_repr(obj, recursive)
207 if isinstance(obj, set):
208 return self.set_repr(obj, recursive)
209 if isinstance(obj, frozenset):
210 return self.frozenset_repr(obj, recursive)
211 if isinstance(obj, dict):
212 return self.dict_repr(obj, recursive)
213 if deque is not None and isinstance(obj, deque):
214 return self.deque_repr(obj, recursive)
215 return self.object_repr(obj)
216
217 def fallback_repr(self):
218 try:
219 info = ''.join(format_exception_only(*sys.exc_info()[:2]))
220 except Exception: # pragma: no cover
221 info = '?'
222 if PY2:
223 info = info.decode('utf-8', 'ignore')
224 return u'<span class="brokenrepr">&lt;broken repr (%s)&gt;' \
225 u'</span>' % escape(info.strip())
226
227 def repr(self, obj):
228 recursive = False
229 for item in self._stack:
230 if item is obj:
231 recursive = True
232 break
233 self._stack.append(obj)
234 try:
235 try:
236 return self.dispatch_repr(obj, recursive)
237 except Exception:
238 return self.fallback_repr()
239 finally:
240 self._stack.pop()
241
242 def dump_object(self, obj):
243 repr = items = None
244 if isinstance(obj, dict):
245 title = 'Contents of'
246 items = []
247 for key, value in iteritems(obj):
248 if not isinstance(key, string_types):
249 items = None
250 break
251 items.append((key, self.repr(value)))
252 if items is None:
253 items = []
254 repr = self.repr(obj)
255 for key in dir(obj):
256 try:
257 items.append((key, self.repr(getattr(obj, key))))
258 except Exception:
259 pass
260 title = 'Details for'
261 title += ' ' + object.__repr__(obj)[1:-1]
262 return self.render_object_dump(items, title, repr)
263
264 def dump_locals(self, d):
265 items = [(key, self.repr(value)) for key, value in d.items()]
266 return self.render_object_dump(items, 'Local variables in frame')
267
268 def render_object_dump(self, items, title, repr=None):
269 html_items = []
270 for key, value in items:
271 html_items.append('<tr><th>%s<td><pre class=repr>%s</pre>' %
272 (escape(key), value))
273 if not html_items:
274 html_items.append('<tr><td><em>Nothing</em>')
275 return OBJECT_DUMP_HTML % {
276 'title': escape(title),
277 'repr': repr and '<pre class=repr>%s</pre>' % repr or '',
278 'items': '\n'.join(html_items)
279 }
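`DebugReprGenerator.repr` in the removed `repr.py` guards against self-referential containers by keeping a stack of the objects currently being rendered and comparing by identity. A stripped-down sketch of the same pattern, handling only lists (`SafeRepr` is a hypothetical name, not part of Werkzeug):

```python
class SafeRepr:
    """Build a repr while tracking the objects currently being
    rendered, so self-referential containers don't recurse forever."""

    def __init__(self):
        self._stack = []

    def repr(self, obj):
        # identity check: a plain `in` would use ==, which can itself
        # recurse into the cyclic structure
        recursive = any(item is obj for item in self._stack)
        self._stack.append(obj)
        try:
            if isinstance(obj, list):
                if recursive:
                    return "[...]"
                return "[" + ", ".join(self.repr(x) for x in obj) + "]"
            return repr(obj)
        finally:
            self._stack.pop()


cycle = [1, 2]
cycle.append(cycle)
```

With this in place, `SafeRepr().repr(cycle)` renders the inner reference as `[...]` instead of overflowing the stack, mirroring how the debugger's HTML reprs stay safe on cyclic data.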
werkzeug/debug/shared/console.png
Binary diff not shown
+0
-205
werkzeug/debug/shared/debugger.js
0 $(function() {
1 if (!EVALEX_TRUSTED) {
2 initPinBox();
3 }
4
5 /**
6 * if we are in console mode, show the console.
7 */
8 if (CONSOLE_MODE && EVALEX) {
9 openShell(null, $('div.console div.inner').empty(), 0);
10 }
11
12 $('div.traceback div.frame').each(function() {
13 var
14 target = $('pre', this),
15 consoleNode = null,
16 frameID = this.id.substring(6);
17
18 target.click(function() {
19 $(this).parent().toggleClass('expanded');
20 });
21
22 /**
23 * Add an interactive console to the frames
24 */
25 if (EVALEX && target.is('.current')) {
26 $('<img src="?__debugger__=yes&cmd=resource&f=console.png">')
27 .attr('title', 'Open an interactive python shell in this frame')
28 .click(function() {
29 consoleNode = openShell(consoleNode, target, frameID);
30 return false;
31 })
32 .prependTo(target);
33 }
34 });
35
36 /**
37 * toggle traceback types on click.
38 */
39 $('h2.traceback').click(function() {
40 $(this).next().slideToggle('fast');
41 $('div.plain').slideToggle('fast');
42 }).css('cursor', 'pointer');
43 $('div.plain').hide();
44
45 /**
46 * Add extra info (this is here so that only users with JavaScript
47 * enabled see it.)
48 */
49 $('span.nojavascript')
50 .removeClass('nojavascript')
51 .html('<p>To switch between the interactive traceback and the plaintext ' +
52 'one, you can click on the "Traceback" headline. From the text ' +
53 'traceback you can also create a paste of it. ' + (!EVALEX ? '' :
54 'For code execution mouse-over the frame you want to debug and ' +
55 'click on the console icon on the right side.' +
56 '<p>You can execute arbitrary Python code in the stack frames and ' +
57 'there are some extra helpers available for introspection:' +
58 '<ul><li><code>dump()</code> shows all variables in the frame' +
59 '<li><code>dump(obj)</code> dumps all that\'s known about the object</ul>'));
60
61 /**
62 * Add the pastebin feature
63 */
64 $('div.plain form')
65 .submit(function() {
66 var label = $('input[type="submit"]', this);
67 var old_val = label.val();
68 label.val('submitting...');
69 $.ajax({
70 dataType: 'json',
71 url: document.location.pathname,
72 data: {__debugger__: 'yes', tb: TRACEBACK, cmd: 'paste',
73 s: SECRET},
74 success: function(data) {
75 $('div.plain span.pastemessage')
76 .removeClass('pastemessage')
77 .text('Paste created: ')
78 .append($('<a>#' + data.id + '</a>').attr('href', data.url));
79 },
80 error: function() {
81 alert('Error: Could not submit paste. No network connection?');
82 label.val(old_val);
83 }
84 });
85 return false;
86 });
87
88 // if we have javascript we submit by ajax anyways, so no need for the
89 // not scaling textarea.
90 var plainTraceback = $('div.plain textarea');
91 plainTraceback.replaceWith($('<pre>').text(plainTraceback.text()));
92 });
93
94 function initPinBox() {
95 $('.pin-prompt form').submit(function(evt) {
96 evt.preventDefault();
97 var pin = this.pin.value;
98 var btn = this.btn;
99 btn.disabled = true;
100 $.ajax({
101 dataType: 'json',
102 url: document.location.pathname,
103 data: {__debugger__: 'yes', cmd: 'pinauth', pin: pin,
104 s: SECRET},
105 success: function(data) {
106 btn.disabled = false;
107 if (data.auth) {
108 EVALEX_TRUSTED = true;
109 $('.pin-prompt').fadeOut();
110 } else {
111 if (data.exhausted) {
112 alert('Error: too many attempts. Restart server to retry.');
113 } else {
114 alert('Error: incorrect pin');
115 }
116 }
117 console.log(data);
118 },
119 error: function() {
120 btn.disabled = false;
121 alert('Error: Could not verify PIN. Network error?');
122 }
123 });
124 });
125 }
126
127 function promptForPin() {
128 if (!EVALEX_TRUSTED) {
129 $.ajax({
130 url: document.location.pathname,
131 data: {__debugger__: 'yes', cmd: 'printpin', s: SECRET}
132 });
133 $('.pin-prompt').fadeIn(function() {
134 $('.pin-prompt input[name="pin"]').focus();
135 });
136 }
137 }
138
139
140 /**
141 * Helper function for shell initialization
142 */
143 function openShell(consoleNode, target, frameID) {
144 promptForPin();
145 if (consoleNode)
146 return consoleNode.slideToggle('fast');
147 consoleNode = $('<pre class="console">')
148 .appendTo(target.parent())
149 .hide()
150 var historyPos = 0, history = [''];
151 var output = $('<div class="output">[console ready]</div>')
152 .appendTo(consoleNode);
153 var form = $('<form>&gt;&gt;&gt; </form>')
154 .submit(function() {
155 var cmd = command.val();
156 $.get('', {
157 __debugger__: 'yes', cmd: cmd, frm: frameID, s: SECRET}, function(data) {
158 var tmp = $('<div>').html(data);
159 $('span.extended', tmp).each(function() {
160 var hidden = $(this).wrap('<span>').hide();
161 hidden
162 .parent()
163 .append($('<a href="#" class="toggle">&nbsp;&nbsp;</a>')
164 .click(function() {
165 hidden.toggle();
166 $(this).toggleClass('open')
167 return false;
168 }));
169 });
170 output.append(tmp);
171 command.focus();
172 consoleNode.scrollTop(consoleNode.get(0).scrollHeight);
173 var old = history.pop();
174 history.push(cmd);
175 if (typeof old != 'undefined')
176 history.push(old);
177 historyPos = history.length - 1;
178 });
179 command.val('');
180 return false;
181 }).
182 appendTo(consoleNode);
183
184 var command = $('<input type="text" autocomplete="off" autocorrect="off" autocapitalize="off" spellcheck="false">')
185 .appendTo(form)
186 .keydown(function(e) {
187 if (e.charCode == 100 && e.ctrlKey) {
188 output.text('--- screen cleared ---');
189 return false;
190 }
191 else if (e.charCode == 0 && (e.keyCode == 38 || e.keyCode == 40)) {
192 if (e.keyCode == 38 && historyPos > 0)
193 historyPos--;
194 else if (e.keyCode == 40 && historyPos < history.length)
195 historyPos++;
196 command.val(history[historyPos]);
197 return false;
198 }
199 });
200
201 return consoleNode.slideDown('fast', function() {
202 command.focus();
203 });
204 }
werkzeug/debug/shared/less.png
Binary diff not shown
werkzeug/debug/shared/more.png
Binary diff not shown
werkzeug/debug/shared/source.png
Binary diff not shown
+0
-143
werkzeug/debug/shared/style.css
0 @font-face {
1 font-family: 'Ubuntu';
2 font-style: normal;
3 font-weight: normal;
4 src: local('Ubuntu'), local('Ubuntu-Regular'),
5 url('?__debugger__=yes&cmd=resource&f=ubuntu.ttf') format('truetype');
6 }
7
8 body, input { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva',
9 'Verdana', sans-serif; color: #000; text-align: center;
10 margin: 1em; padding: 0; font-size: 15px; }
11 h1, h2, h3 { font-family: 'Ubuntu', 'Lucida Grande', 'Lucida Sans Unicode',
12 'Geneva', 'Verdana', sans-serif; font-weight: normal; }
13
14 input { background-color: #fff; margin: 0; text-align: left;
15 outline: none !important; }
16 input[type="submit"] { padding: 3px 6px; }
17 a { color: #11557C; }
18 a:hover { color: #177199; }
19 pre, code,
20 textarea { font-family: 'Consolas', 'Monaco', 'Bitstream Vera Sans Mono',
21 monospace; font-size: 14px; }
22
23 div.debugger { text-align: left; padding: 12px; margin: auto;
24 background-color: white; }
25 h1 { font-size: 36px; margin: 0 0 0.3em 0; }
26 div.detail p { margin: 0 0 8px 13px; font-size: 14px; white-space: pre-wrap;
27 font-family: monospace; }
28 div.explanation { margin: 20px 13px; font-size: 15px; color: #555; }
29 div.footer { font-size: 13px; text-align: right; margin: 30px 0;
30 color: #86989B; }
31
32 h2 { font-size: 16px; margin: 1.3em 0 0.0 0; padding: 9px;
33 background-color: #11557C; color: white; }
34 h2 em, h3 em { font-style: normal; color: #A5D6D9; font-weight: normal; }
35
36 div.traceback, div.plain { border: 1px solid #ddd; margin: 0 0 1em 0; padding: 10px; }
37 div.plain p { margin: 0; }
38 div.plain textarea,
39 div.plain pre { margin: 10px 0 0 0; padding: 4px;
40 background-color: #E8EFF0; border: 1px solid #D3E7E9; }
41 div.plain textarea { width: 99%; height: 300px; }
42 div.traceback h3 { font-size: 1em; margin: 0 0 0.8em 0; }
43 div.traceback ul { list-style: none; margin: 0; padding: 0 0 0 1em; }
44 div.traceback h4 { font-size: 13px; font-weight: normal; margin: 0.7em 0 0.1em 0; }
45 div.traceback pre { margin: 0; padding: 5px 0 3px 15px;
46 background-color: #E8EFF0; border: 1px solid #D3E7E9; }
47 div.traceback pre:hover { background-color: #DDECEE; color: black; cursor: pointer; }
48 div.traceback div.source.expanded pre + pre { border-top: none; }
49
50 div.traceback span.ws { display: none; }
51 div.traceback pre.before, div.traceback pre.after { display: none; background: white; }
52 div.traceback div.source.expanded pre.before,
53 div.traceback div.source.expanded pre.after {
54 display: block;
55 }
56
57 div.traceback div.source.expanded span.ws {
58 display: inline;
59 }
60
61 div.traceback blockquote { margin: 1em 0 0 0; padding: 0; }
62 div.traceback img { float: right; padding: 2px; margin: -3px 2px 0 0; display: none; }
63 div.traceback img:hover { background-color: #ddd; cursor: pointer;
64 border-color: #BFDDE0; }
65 div.traceback pre:hover img { display: block; }
66 div.traceback cite.filename { font-style: normal; color: #3B666B; }
67
68 pre.console { border: 1px solid #ccc; background: white!important;
69 color: black; padding: 5px!important;
70 margin: 3px 0 0 0!important; cursor: default!important;
71 max-height: 400px; overflow: auto; }
72 pre.console form { color: #555; }
73 pre.console input { background-color: transparent; color: #555;
74 width: 90%; font-family: 'Consolas', 'Deja Vu Sans Mono',
75 'Bitstream Vera Sans Mono', monospace; font-size: 14px;
76 border: none!important; }
77
78 span.string { color: #30799B; }
79 span.number { color: #9C1A1C; }
80 span.help { color: #3A7734; }
81 span.object { color: #485F6E; }
82 span.extended { opacity: 0.5; }
83 span.extended:hover { opacity: 1; }
84 a.toggle { text-decoration: none; background-repeat: no-repeat;
85 background-position: center center;
86 background-image: url(?__debugger__=yes&cmd=resource&f=more.png); }
87 a.toggle:hover { background-color: #444; }
88 a.open { background-image: url(?__debugger__=yes&cmd=resource&f=less.png); }
89
90 pre.console div.traceback,
91 pre.console div.box { margin: 5px 10px; white-space: normal;
92 border: 1px solid #11557C; padding: 10px;
93 font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva',
94 'Verdana', sans-serif; }
95 pre.console div.box h3,
96 pre.console div.traceback h3 { margin: -10px -10px 10px -10px; padding: 5px;
97 background: #11557C; color: white; }
98
99 pre.console div.traceback pre:hover { cursor: default; background: #E8EFF0; }
100 pre.console div.traceback pre.syntaxerror { background: inherit; border: none;
101 margin: 20px -10px -10px -10px;
102 padding: 10px; border-top: 1px solid #BFDDE0;
103 background: #E8EFF0; }
104 pre.console div.noframe-traceback pre.syntaxerror { margin-top: -10px; border: none; }
105
106 pre.console div.box pre.repr { padding: 0; margin: 0; background-color: white; border: none; }
107 pre.console div.box table { margin-top: 6px; }
108 pre.console div.box pre { border: none; }
109 pre.console div.box pre.help { background-color: white; }
110 pre.console div.box pre.help:hover { cursor: default; }
111 pre.console table tr { vertical-align: top; }
112 div.console { border: 1px solid #ccc; padding: 4px; background-color: #fafafa; }
113
114 div.traceback pre, div.console pre {
115 white-space: pre-wrap; /* css-3 should we be so lucky... */
116 white-space: -moz-pre-wrap; /* Mozilla, since 1999 */
117 white-space: -pre-wrap; /* Opera 4-6 ?? */
118 white-space: -o-pre-wrap; /* Opera 7 ?? */
119 word-wrap: break-word; /* Internet Explorer 5.5+ */
120 _white-space: pre; /* IE only hack to re-specify in
121 addition to word-wrap */
122 }
123
124
125 div.pin-prompt {
126 position: absolute;
127 display: none;
128 top: 0;
129 bottom: 0;
130 left: 0;
131 right: 0;
132 background: rgba(255, 255, 255, 0.8);
133 }
134
135 div.pin-prompt .inner {
136 background: #eee;
137 padding: 10px 50px;
138 width: 350px;
139 margin: 10% auto 0 auto;
140 border: 1px solid #ccc;
141 border-radius: 2px;
142 }
+0
-556
werkzeug/debug/tbtools.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.debug.tbtools
3 ~~~~~~~~~~~~~~~~~~~~~~
4
5 This module provides various traceback related utility functions.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD.
9 """
10 import re
11
12 import os
13 import sys
14 import json
15 import inspect
16 import traceback
17 import codecs
18 from tokenize import TokenError
19
20 from werkzeug.utils import cached_property, escape
21 from werkzeug.debug.console import Console
22 from werkzeug._compat import range_type, PY2, text_type, string_types, \
23 to_native, to_unicode
24 from werkzeug.filesystem import get_filesystem_encoding
25
26
27 _coding_re = re.compile(br'coding[:=]\s*([-\w.]+)')
28 _line_re = re.compile(br'^(.*?)$', re.MULTILINE)
29 _funcdef_re = re.compile(r'^(\s*def\s)|(.*(?<!\w)lambda(:|\s))|^(\s*@)')
30 UTF8_COOKIE = b'\xef\xbb\xbf'
31
32 system_exceptions = (SystemExit, KeyboardInterrupt)
33 try:
34 system_exceptions += (GeneratorExit,)
35 except NameError:
36 pass
37
38
39 HEADER = u'''\
40 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
41 "http://www.w3.org/TR/html4/loose.dtd">
42 <html>
43 <head>
44 <title>%(title)s // Werkzeug Debugger</title>
45 <link rel="stylesheet" href="?__debugger__=yes&amp;cmd=resource&amp;f=style.css"
46 type="text/css">
47 <!-- We need to make sure this has a favicon so that the debugger does
48 not by accident trigger a request to /favicon.ico which might
49 change the application state. -->
50 <link rel="shortcut icon"
51 href="?__debugger__=yes&amp;cmd=resource&amp;f=console.png">
52 <script src="?__debugger__=yes&amp;cmd=resource&amp;f=jquery.js"></script>
53 <script src="?__debugger__=yes&amp;cmd=resource&amp;f=debugger.js"></script>
54 <script type="text/javascript">
55 var TRACEBACK = %(traceback_id)d,
56 CONSOLE_MODE = %(console)s,
57 EVALEX = %(evalex)s,
58 EVALEX_TRUSTED = %(evalex_trusted)s,
59 SECRET = "%(secret)s";
60 </script>
61 </head>
62 <body style="background-color: #fff">
63 <div class="debugger">
64 '''
65 FOOTER = u'''\
66 <div class="footer">
67 Brought to you by <strong class="arthur">DON'T PANIC</strong>, your
68 friendly Werkzeug powered traceback interpreter.
69 </div>
70 </div>
71
72 <div class="pin-prompt">
73 <div class="inner">
74 <h3>Console Locked</h3>
75 <p>
76 The console is locked and needs to be unlocked by entering the PIN.
77 You can find the PIN printed out on the standard output of your
78 shell that runs the server.
79 <form>
80 <p>PIN:
81 <input type=text name=pin size=14>
82 <input type=submit name=btn value="Confirm Pin">
83 </form>
84 </div>
85 </div>
86 </body>
87 </html>
88 '''
89
90 PAGE_HTML = HEADER + u'''\
91 <h1>%(exception_type)s</h1>
92 <div class="detail">
93 <p class="errormsg">%(exception)s</p>
94 </div>
95 <h2 class="traceback">Traceback <em>(most recent call last)</em></h2>
96 %(summary)s
97 <div class="plain">
98 <form action="/?__debugger__=yes&amp;cmd=paste" method="post">
99 <p>
100 <input type="hidden" name="language" value="pytb">
101 This is the Copy/Paste friendly version of the traceback. <span
102 class="pastemessage">You can also paste this traceback into
103 a <a href="https://gist.github.com/">gist</a>:
104 <input type="submit" value="create paste"></span>
105 </p>
106 <textarea cols="50" rows="10" name="code" readonly>%(plaintext)s</textarea>
107 </form>
108 </div>
109 <div class="explanation">
110 The debugger caught an exception in your WSGI application. You can now
111 look at the traceback which led to the error. <span class="nojavascript">
112 If you enable JavaScript you can also use additional features such as code
113 execution (if the evalex feature is enabled), automatic pasting of the
114 exceptions and much more.</span>
115 </div>
116 ''' + FOOTER + '''
117 <!--
118
119 %(plaintext_cs)s
120
121 -->
122 '''
123
124 CONSOLE_HTML = HEADER + u'''\
125 <h1>Interactive Console</h1>
126 <div class="explanation">
127 In this console you can execute Python expressions in the context of the
128 application. The initial namespace was created by the debugger automatically.
129 </div>
130 <div class="console"><div class="inner">The Console requires JavaScript.</div></div>
131 ''' + FOOTER
132
133 SUMMARY_HTML = u'''\
134 <div class="%(classes)s">
135 %(title)s
136 <ul>%(frames)s</ul>
137 %(description)s
138 </div>
139 '''
140
141 FRAME_HTML = u'''\
142 <div class="frame" id="frame-%(id)d">
143 <h4>File <cite class="filename">"%(filename)s"</cite>,
144 line <em class="line">%(lineno)s</em>,
145 in <code class="function">%(function_name)s</code></h4>
146 <div class="source">%(lines)s</div>
147 </div>
148 '''
149
150 SOURCE_LINE_HTML = u'''\
151 <tr class="%(classes)s">
152 <td class=lineno>%(lineno)s</td>
153 <td>%(code)s</td>
154 </tr>
155 '''
156
157
158 def render_console_html(secret, evalex_trusted=True):
159 return CONSOLE_HTML % {
160 'evalex': 'true',
161 'evalex_trusted': evalex_trusted and 'true' or 'false',
162 'console': 'true',
163 'title': 'Console',
164 'secret': secret,
165 'traceback_id': -1
166 }
167
168
169 def get_current_traceback(ignore_system_exceptions=False,
170 show_hidden_frames=False, skip=0):
171 """Get the current exception info as a `Traceback` object. If
172 `ignore_system_exceptions` is true, system exceptions such as
173 `GeneratorExit`, `KeyboardInterrupt` and `SystemExit` are reraised
174 instead of being captured.
175 """
176 exc_type, exc_value, tb = sys.exc_info()
177 if ignore_system_exceptions and exc_type in system_exceptions:
178 raise
179 for x in range_type(skip):
180 if tb.tb_next is None:
181 break
182 tb = tb.tb_next
183 tb = Traceback(exc_type, exc_value, tb)
184 if not show_hidden_frames:
185 tb.filter_hidden_frames()
186 return tb
187
188
189 class Line(object):
190 """Helper for the source renderer."""
191 __slots__ = ('lineno', 'code', 'in_frame', 'current')
192
193 def __init__(self, lineno, code):
194 self.lineno = lineno
195 self.code = code
196 self.in_frame = False
197 self.current = False
198
199 def classes(self):
200 rv = ['line']
201 if self.in_frame:
202 rv.append('in-frame')
203 if self.current:
204 rv.append('current')
205 return rv
206 classes = property(classes)
207
208 def render(self):
209 return SOURCE_LINE_HTML % {
210 'classes': u' '.join(self.classes),
211 'lineno': self.lineno,
212 'code': escape(self.code)
213 }
214
215
216 class Traceback(object):
217 """Wraps a traceback."""
218
219 def __init__(self, exc_type, exc_value, tb):
220 self.exc_type = exc_type
221 self.exc_value = exc_value
222 if not isinstance(exc_type, str):
223 exception_type = exc_type.__name__
224 if exc_type.__module__ not in ('__builtin__', 'exceptions'):
225 exception_type = exc_type.__module__ + '.' + exception_type
226 else:
227 exception_type = exc_type
228 self.exception_type = exception_type
229
230 # we only add frames to the list that are not hidden. This follows
231 # the magic variables as defined by paste.exceptions.collector
232 self.frames = []
233 while tb:
234 self.frames.append(Frame(exc_type, exc_value, tb))
235 tb = tb.tb_next
236
237 def filter_hidden_frames(self):
238 """Remove the frames according to the paste spec."""
239 if not self.frames:
240 return
241
242 new_frames = []
243 hidden = False
244 for frame in self.frames:
245 hide = frame.hide
246 if hide in ('before', 'before_and_this'):
247 new_frames = []
248 hidden = False
249 if hide == 'before_and_this':
250 continue
251 elif hide in ('reset', 'reset_and_this'):
252 hidden = False
253 if hide == 'reset_and_this':
254 continue
255 elif hide in ('after', 'after_and_this'):
256 hidden = True
257 if hide == 'after_and_this':
258 continue
259 elif hide or hidden:
260 continue
261 new_frames.append(frame)
262
263 # if we only have one frame and that frame is from the codeop
264 # module, remove it.
265 if len(new_frames) == 1 and self.frames[0].module == 'codeop':
266 del self.frames[:]
267
268 # if the last frame is missing, something went terribly wrong :(
269 elif self.frames[-1] in new_frames:
270 self.frames[:] = new_frames
271
272 def is_syntax_error(self):
273 """Is it a syntax error?"""
274 return isinstance(self.exc_value, SyntaxError)
275 is_syntax_error = property(is_syntax_error)
276
277 def exception(self):
278 """String representation of the exception."""
279 buf = traceback.format_exception_only(self.exc_type, self.exc_value)
280 rv = ''.join(buf).strip()
281 return rv.decode('utf-8', 'replace') if PY2 else rv
282 exception = property(exception)
283
284 def log(self, logfile=None):
285 """Log the ASCII traceback into a file object."""
286 if logfile is None:
287 logfile = sys.stderr
288 tb = self.plaintext.rstrip() + u'\n'
289 if PY2:
290 tb = tb.encode('utf-8', 'replace')
291 logfile.write(tb)
292
293 def paste(self):
294 """Create a paste and return the paste id."""
295 data = json.dumps({
296 'description': 'Werkzeug Internal Server Error',
297 'public': False,
298 'files': {
299 'traceback.txt': {
300 'content': self.plaintext
301 }
302 }
303 }).encode('utf-8')
304 try:
305 from urllib2 import urlopen
306 except ImportError:
307 from urllib.request import urlopen
308 rv = urlopen('https://api.github.com/gists', data=data)
309 resp = json.loads(rv.read().decode('utf-8'))
310 rv.close()
311 return {
312 'url': resp['html_url'],
313 'id': resp['id']
314 }
315
316 def render_summary(self, include_title=True):
317 """Render the traceback for the interactive console."""
318 title = ''
319 frames = []
320 classes = ['traceback']
321 if not self.frames:
322 classes.append('noframe-traceback')
323
324 if include_title:
325 if self.is_syntax_error:
326 title = u'Syntax Error'
327 else:
328 title = u'Traceback <em>(most recent call last)</em>:'
329
330 for frame in self.frames:
331 frames.append(u'<li%s>%s' % (
332 frame.info and u' title="%s"' % escape(frame.info) or u'',
333 frame.render()
334 ))
335
336 if self.is_syntax_error:
337 description_wrapper = u'<pre class=syntaxerror>%s</pre>'
338 else:
339 description_wrapper = u'<blockquote>%s</blockquote>'
340
341 return SUMMARY_HTML % {
342 'classes': u' '.join(classes),
343 'title': title and u'<h3>%s</h3>' % title or u'',
344 'frames': u'\n'.join(frames),
345 'description': description_wrapper % escape(self.exception)
346 }
347
348 def render_full(self, evalex=False, secret=None,
349 evalex_trusted=True):
350 """Render the Full HTML page with the traceback info."""
351 exc = escape(self.exception)
352 return PAGE_HTML % {
353 'evalex': evalex and 'true' or 'false',
354 'evalex_trusted': evalex_trusted and 'true' or 'false',
355 'console': 'false',
356 'title': exc,
357 'exception': exc,
358 'exception_type': escape(self.exception_type),
359 'summary': self.render_summary(include_title=False),
360 'plaintext': escape(self.plaintext),
361 'plaintext_cs': re.sub('-{2,}', '-', self.plaintext),
362 'traceback_id': self.id,
363 'secret': secret
364 }
365
366 def generate_plaintext_traceback(self):
367 """Like the plaintext attribute but returns a generator"""
368 yield u'Traceback (most recent call last):'
369 for frame in self.frames:
370 yield u' File "%s", line %s, in %s' % (
371 frame.filename,
372 frame.lineno,
373 frame.function_name
374 )
375 yield u' ' + frame.current_line.strip()
376 yield self.exception
377
378 def plaintext(self):
379 return u'\n'.join(self.generate_plaintext_traceback())
380 plaintext = cached_property(plaintext)
381
382 id = property(lambda x: id(x))
383
384
385 class Frame(object):
386
387 """A single frame in a traceback."""
388
389 def __init__(self, exc_type, exc_value, tb):
390 self.lineno = tb.tb_lineno
391 self.function_name = tb.tb_frame.f_code.co_name
392 self.locals = tb.tb_frame.f_locals
393 self.globals = tb.tb_frame.f_globals
394
395 fn = inspect.getsourcefile(tb) or inspect.getfile(tb)
396 if fn[-4:] in ('.pyo', '.pyc'):
397 fn = fn[:-1]
398 # if it's a file on the file system resolve the real filename.
399 if os.path.isfile(fn):
400 fn = os.path.realpath(fn)
401 self.filename = to_unicode(fn, get_filesystem_encoding())
402 self.module = self.globals.get('__name__')
403 self.loader = self.globals.get('__loader__')
404 self.code = tb.tb_frame.f_code
405
406 # support for paste's traceback extensions
407 self.hide = self.locals.get('__traceback_hide__', False)
408 info = self.locals.get('__traceback_info__')
409 if info is not None:
410 try:
411 info = text_type(info)
412 except UnicodeError:
413 info = str(info).decode('utf-8', 'replace')
414 self.info = info
415
416 def render(self):
417 """Render a single frame in a traceback."""
418 return FRAME_HTML % {
419 'id': self.id,
420 'filename': escape(self.filename),
421 'lineno': self.lineno,
422 'function_name': escape(self.function_name),
423 'lines': self.render_line_context(),
424 }
425
426 def render_line_context(self):
427 before, current, after = self.get_context_lines()
428 rv = []
429
430 def render_line(line, cls):
431 line = line.expandtabs().rstrip()
432 stripped_line = line.strip()
433 prefix = len(line) - len(stripped_line)
434 rv.append(
435 '<pre class="line %s"><span class="ws">%s</span>%s</pre>' % (
436 cls, ' ' * prefix, escape(stripped_line) or ' '))
437
438 for line in before:
439 render_line(line, 'before')
440 render_line(current, 'current')
441 for line in after:
442 render_line(line, 'after')
443
444 return '\n'.join(rv)
445
446 def get_annotated_lines(self):
447 """Helper function that returns lines with extra information."""
448 lines = [Line(idx + 1, x) for idx, x in enumerate(self.sourcelines)]
449
450 # find function definition and mark lines
451 if hasattr(self.code, 'co_firstlineno'):
452 lineno = self.code.co_firstlineno - 1
453 while lineno > 0:
454 if _funcdef_re.match(lines[lineno].code):
455 break
456 lineno -= 1
457 try:
458 offset = len(inspect.getblock([x.code + '\n' for x
459 in lines[lineno:]]))
460 except TokenError:
461 offset = 0
462 for line in lines[lineno:lineno + offset]:
463 line.in_frame = True
464
465 # mark current line
466 try:
467 lines[self.lineno - 1].current = True
468 except IndexError:
469 pass
470
471 return lines
472
473 def eval(self, code, mode='single'):
474 """Evaluate code in the context of the frame."""
475 if isinstance(code, string_types):
476 if PY2 and isinstance(code, unicode): # noqa
477 code = UTF8_COOKIE + code.encode('utf-8')
478 code = compile(code, '<interactive>', mode)
479 return eval(code, self.globals, self.locals)
480
481 @cached_property
482 def sourcelines(self):
483 """The sourcecode of the file as list of unicode strings."""
484 # get sourcecode from loader or file
485 source = None
486 if self.loader is not None:
487 try:
488 if hasattr(self.loader, 'get_source'):
489 source = self.loader.get_source(self.module)
490 elif hasattr(self.loader, 'get_source_by_code'):
491 source = self.loader.get_source_by_code(self.code)
492 except Exception:
493 # we munch the exception so that we don't cause troubles
494 # if the loader is broken.
495 pass
496
497 if source is None:
498 try:
499 f = open(to_native(self.filename, get_filesystem_encoding()),
500 mode='rb')
501 except IOError:
502 return []
503 try:
504 source = f.read()
505 finally:
506 f.close()
507
508 # already unicode? return right away
509 if isinstance(source, text_type):
510 return source.splitlines()
511
512 # the source should be ascii, but we don't want to reject too
513 # many characters in the debugger if something breaks
514 charset = 'utf-8'
515 if source.startswith(UTF8_COOKIE):
516 source = source[3:]
517 else:
518 for idx, match in enumerate(_line_re.finditer(source)):
519 match = _coding_re.search(match.group())
520 if match is not None:
521 charset = match.group(1)
522 break
523 if idx > 1:
524 break
525
526 # on broken cookies we fall back to utf-8 too
527 charset = to_native(charset)
528 try:
529 codecs.lookup(charset)
530 except LookupError:
531 charset = 'utf-8'
532
533 return source.decode(charset, 'replace').splitlines()
534
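The charset sniffing above implements PEP 263: check for a UTF-8 BOM, then scan the first two lines for a ``coding:`` cookie and fall back to UTF-8 on a broken one. A compact standalone version (the regex and function names here are illustrative, not the module's `_coding_re`/`_line_re`):

```python
import codecs
import re

# PEP 263 coding cookie, e.g. "# -*- coding: latin-1 -*-"
_cookie_re = re.compile(br"coding[:=]\s*([-\w.]+)")


def sniff_source_charset(source):
    """Guess the charset of Python source given as bytes."""
    if source.startswith(codecs.BOM_UTF8):
        return "utf-8"
    # only the first two lines may carry the cookie (PEP 263)
    for line in source.splitlines()[:2]:
        match = _cookie_re.search(line)
        if match is not None:
            charset = match.group(1).decode("ascii")
            try:
                codecs.lookup(charset)
            except LookupError:
                break  # broken cookie: fall back to utf-8
            return charset
    return "utf-8"


print(sniff_source_charset(b"#!/usr/bin/env python\n# -*- coding: latin-1 -*-\n"))
# -> latin-1
print(sniff_source_charset(b"print('hi')\n"))
# -> utf-8
```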
535 def get_context_lines(self, context=5):
536 before = self.sourcelines[self.lineno - context - 1:self.lineno - 1]
537 past = self.sourcelines[self.lineno:self.lineno + context]
538 return (
539 before,
540 self.current_line,
541 past,
542 )
543
544 @property
545 def current_line(self):
546 try:
547 return self.sourcelines[self.lineno - 1]
548 except IndexError:
549 return u''
550
551 @cached_property
552 def console(self):
553 return Console(self.globals, self.locals)
554
555 id = property(lambda x: id(x))
werkzeug/exceptions.py (0 additions, 719 deletions)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.exceptions
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module implements a number of Python exceptions you can raise from
6 within your views to trigger a standard non-200 response.
7
8
9 Usage Example
10 -------------
11
12 ::
13
14 from werkzeug.wrappers import BaseRequest
15 from werkzeug.wsgi import responder
16 from werkzeug.exceptions import HTTPException, NotFound
17
18 def view(request):
19 raise NotFound()
20
21 @responder
22 def application(environ, start_response):
23 request = BaseRequest(environ)
24 try:
25 return view(request)
26 except HTTPException as e:
27 return e
28
29
30 As you can see from this example, these exceptions are callable WSGI
31 applications. For Python 2.4 compatibility they do not extend from
32 the response objects but only from the Python exception class.
33
34 As a matter of fact they are not Werkzeug response objects. However,
35 you can get a response object by calling ``get_response()`` on an
36 HTTP exception.
37
38 Keep in mind that you have to pass an environment to ``get_response()``
39 because some errors fetch additional information from the WSGI
40 environment.
41
42 If you want to hook in a different exception page for, say, a 404 status
43 code, you can add a second except for a specific subclass of an error::
44
45 @responder
46 def application(environ, start_response):
47 request = BaseRequest(environ)
48 try:
49 return view(request)
50 except NotFound as e:
51 return not_found(request)
52 except HTTPException as e:
53 return e
54
55
56 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
57 :license: BSD, see LICENSE for more details.
58 """
59 import sys
60
61 # For bootstrapping reasons we need to manually patch ourselves
62 # onto our parent module.
63 import werkzeug
64 werkzeug.exceptions = sys.modules[__name__]
65
66 from werkzeug._internal import _get_environ
67 from werkzeug._compat import iteritems, integer_types, text_type, \
68 implements_to_string
69
70 from werkzeug.wrappers import Response
71
72
73 @implements_to_string
74 class HTTPException(Exception):
75
76 """
77 Baseclass for all HTTP exceptions. This exception can be called as WSGI
78 application to render a default error page or you can catch the subclasses
79 of it independently and render nicer error messages.
80 """
81
82 code = None
83 description = None
84
85 def __init__(self, description=None, response=None):
86 Exception.__init__(self)
87 if description is not None:
88 self.description = description
89 self.response = response
90
91 @classmethod
92 def wrap(cls, exception, name=None):
93 """This method returns a new subclass of the exception provided that
94 also is a subclass of `BadRequest`.
95 """
96 class newcls(cls, exception):
97
98 def __init__(self, arg=None, *args, **kwargs):
99 cls.__init__(self, *args, **kwargs)
100 exception.__init__(self, arg)
101 newcls.__module__ = sys._getframe(1).f_globals.get('__name__')
102 newcls.__name__ = name or cls.__name__ + exception.__name__
103 return newcls
104
105 @property
106 def name(self):
107 """The status name."""
108 return HTTP_STATUS_CODES.get(self.code, 'Unknown Error')
109
110 def get_description(self, environ=None):
111 """Get the description."""
112 return u'<p>%s</p>' % escape(self.description)
113
114 def get_body(self, environ=None):
115 """Get the HTML body."""
116 return text_type((
117 u'<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n'
118 u'<title>%(code)s %(name)s</title>\n'
119 u'<h1>%(name)s</h1>\n'
120 u'%(description)s\n'
121 ) % {
122 'code': self.code,
123 'name': escape(self.name),
124 'description': self.get_description(environ)
125 })
126
127 def get_headers(self, environ=None):
128 """Get a list of headers."""
129 return [('Content-Type', 'text/html')]
130
131 def get_response(self, environ=None):
132 """Get a response object. If one was passed to the exception
133 it's returned directly.
134
135 :param environ: the optional environ for the request. This
136 can be used to modify the response depending
137 on how the request looked like.
138 :return: a :class:`Response` object or a subclass thereof.
139 """
140 if self.response is not None:
141 return self.response
142 if environ is not None:
143 environ = _get_environ(environ)
144 headers = self.get_headers(environ)
145 return Response(self.get_body(environ), self.code, headers)
146
147 def __call__(self, environ, start_response):
148 """Call the exception as WSGI application.
149
150 :param environ: the WSGI environment.
151 :param start_response: the response callable provided by the WSGI
152 server.
153 """
154 response = self.get_response(environ)
155 return response(environ, start_response)
156
157 def __str__(self):
158 code = self.code if self.code is not None else '???'
159 return '%s %s: %s' % (code, self.name, self.description)
160
161 def __repr__(self):
162 code = self.code if self.code is not None else '???'
163 return "<%s '%s: %s'>" % (self.__class__.__name__, code, self.name)
164
165
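`HTTPException.__call__` is what lets these exceptions double as WSGI applications, as the module docstring shows. The pattern can be sketched without Werkzeug at all (all class and variable names below are toy stand-ins, not Werkzeug's API):

```python
# Minimal sketch of the "exception doubles as WSGI app" pattern used by
# HTTPException above. Names here are illustrative only.

class ToyHTTPException(Exception):
    code = None
    name = None
    description = None

    def get_body(self):
        return "<h1>%s %s</h1>\n<p>%s</p>\n" % (self.code, self.name, self.description)

    def __call__(self, environ, start_response):
        # behave like a WSGI application: emit status, headers, body
        start_response(
            "%d %s" % (self.code, self.name), [("Content-Type", "text/html")]
        )
        return [self.get_body().encode("utf-8")]


class ToyNotFound(ToyHTTPException):
    code = 404
    name = "Not Found"
    description = "The requested URL was not found."


def application(environ, start_response):
    try:
        raise ToyNotFound()
    except ToyHTTPException as e:
        # the caught exception is itself a WSGI app
        return e(environ, start_response)


collected = {}


def start_response(status, headers):
    collected["status"] = status


body = b"".join(application({}, start_response))
print(collected["status"])  # -> 404 Not Found
```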
166 class BadRequest(HTTPException):
167
168 """*400* `Bad Request`
169
170 Raise if the browser sends something to the application the application
171 or server cannot handle.
172 """
173 code = 400
174 description = (
175 'The browser (or proxy) sent a request that this server could '
176 'not understand.'
177 )
178
179
180 class ClientDisconnected(BadRequest):
181
182 """Internal exception that is raised if Werkzeug detects a disconnected
183 client. Since the client is already gone at that point attempting to
184 send the error message to the client might not work and might ultimately
185 result in another exception in the server. Mainly this is here so that
186 it is silenced by default as far as Werkzeug is concerned.
187
188 Since disconnections cannot be reliably detected and are largely
189 unspecified by WSGI, this might or might not be raised when a client
190 is gone.
191
192 .. versionadded:: 0.8
193 """
194
195
196 class SecurityError(BadRequest):
197
198 """Raised if something triggers a security error. This is otherwise
199 exactly like a bad request error.
200
201 .. versionadded:: 0.9
202 """
203
204
205 class BadHost(BadRequest):
206
207 """Raised if the submitted host is badly formatted.
208
209 .. versionadded:: 0.11.2
210 """
211
212
213 class Unauthorized(HTTPException):
214
215 """*401* `Unauthorized`
216
217 Raise if the user is not authorized. Also used if you want to use HTTP
218 basic auth.
219 """
220 code = 401
221 description = (
222 'The server could not verify that you are authorized to access '
223 'the URL requested. You either supplied the wrong credentials (e.g. '
224 'a bad password), or your browser doesn\'t understand how to supply '
225 'the credentials required.'
226 )
227
228
229 class Forbidden(HTTPException):
230
231 """*403* `Forbidden`
232
233 Raise if the user doesn't have the permission for the requested resource
234 but was authenticated.
235 """
236 code = 403
237 description = (
238 'You don\'t have the permission to access the requested resource. '
239 'It is either read-protected or not readable by the server.'
240 )
241
242
243 class NotFound(HTTPException):
244
245 """*404* `Not Found`
246
247 Raise if a resource does not exist and never existed.
248 """
249 code = 404
250 description = (
251 'The requested URL was not found on the server. '
252 'If you entered the URL manually please check your spelling and '
253 'try again.'
254 )
255
256
257 class MethodNotAllowed(HTTPException):
258
259 """*405* `Method Not Allowed`
260
261 Raise if the server used a method the resource does not handle. For
262 example `POST` if the resource is view only. Especially useful for REST.
263
264 The first argument for this exception should be a list of allowed methods.
265 Strictly speaking the response would be invalid if you don't provide valid
266 methods in the header which you can do with that list.
267 """
268 code = 405
269 description = 'The method is not allowed for the requested URL.'
270
271 def __init__(self, valid_methods=None, description=None):
272 """Takes an optional list of valid http methods
273 starting with werkzeug 0.3 the list will be mandatory."""
274 HTTPException.__init__(self, description)
275 self.valid_methods = valid_methods
276
277 def get_headers(self, environ):
278 headers = HTTPException.get_headers(self, environ)
279 if self.valid_methods:
280 headers.append(('Allow', ', '.join(self.valid_methods)))
281 return headers
282
283
284 class NotAcceptable(HTTPException):
285
286 """*406* `Not Acceptable`
287
288 Raise if the server can't return any content conforming to the
289 `Accept` headers of the client.
290 """
291 code = 406
292
293 description = (
294 'The resource identified by the request is only capable of '
295 'generating response entities which have content characteristics '
296 'not acceptable according to the accept headers sent in the '
297 'request.'
298 )
299
300
301 class RequestTimeout(HTTPException):
302
303 """*408* `Request Timeout`
304
305 Raise to signal a timeout.
306 """
307 code = 408
308 description = (
309 'The server closed the network connection because the browser '
310 'didn\'t finish the request within the specified time.'
311 )
312
313
314 class Conflict(HTTPException):
315
316 """*409* `Conflict`
317
318 Raise to signal that a request cannot be completed because it conflicts
319 with the current state on the server.
320
321 .. versionadded:: 0.7
322 """
323 code = 409
324 description = (
325 'A conflict happened while processing the request. The resource '
326 'might have been modified while the request was being processed.'
327 )
328
329
330 class Gone(HTTPException):
331
332 """*410* `Gone`
333
334 Raise if a resource existed previously and went away without new location.
335 """
336 code = 410
337 description = (
338 'The requested URL is no longer available on this server and there '
339 'is no forwarding address. If you followed a link from a foreign '
340 'page, please contact the author of this page.'
341 )
342
343
344 class LengthRequired(HTTPException):
345
346 """*411* `Length Required`
347
348 Raise if the browser submitted data but no ``Content-Length`` header which
349 is required for the kind of processing the server does.
350 """
351 code = 411
352 description = (
353 'A request with this method requires a valid <code>Content-'
354 'Length</code> header.'
355 )
356
357
358 class PreconditionFailed(HTTPException):
359
360 """*412* `Precondition Failed`
361
362 Status code used in combination with ``If-Match``, ``If-None-Match``, or
363 ``If-Unmodified-Since``.
364 """
365 code = 412
366 description = (
367 'The precondition on the request for the URL failed positive '
368 'evaluation.'
369 )
370
371
372 class RequestEntityTooLarge(HTTPException):
373
374 """*413* `Request Entity Too Large`
375
376 The status code one should return if the data submitted exceeded a given
377 limit.
378 """
379 code = 413
380 description = (
381 'The data value transmitted exceeds the capacity limit.'
382 )
383
384
385 class RequestURITooLarge(HTTPException):
386
387 """*414* `Request URI Too Large`
388
389 Like *413* but for too long URLs.
390 """
391 code = 414
392 description = (
393 'The length of the requested URL exceeds the capacity limit '
394 'for this server. The request cannot be processed.'
395 )
396
397
398 class UnsupportedMediaType(HTTPException):
399
400 """*415* `Unsupported Media Type`
401
402 The status code returned if the server is unable to handle the media type
403 the client transmitted.
404 """
405 code = 415
406 description = (
407 'The server does not support the media type transmitted in '
408 'the request.'
409 )
410
411
412 class RequestedRangeNotSatisfiable(HTTPException):
413
414 """*416* `Requested Range Not Satisfiable`
415
416 The client asked for an invalid part of the file.
417
418 .. versionadded:: 0.7
419 """
420 code = 416
421 description = (
422 'The server cannot provide the requested range.'
423 )
424
425 def __init__(self, length=None, units="bytes", description=None):
426 """Takes an optional `Content-Range` header value based on ``length``
427 parameter.
428 """
429 HTTPException.__init__(self, description)
430 self.length = length
431 self.units = units
432
433 def get_headers(self, environ):
434 headers = HTTPException.get_headers(self, environ)
435 if self.length is not None:
436 headers.append(
437 ('Content-Range', '%s */%d' % (self.units, self.length)))
438 return headers
439
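For a 416 response the ``Content-Range`` header takes the form ``<units> */<complete-length>`` (RFC 7233), which is exactly what the format string above produces. A tiny illustration (the helper name is hypothetical):

```python
# RFC 7233: a 416 response may carry "Content-Range: <units> */<length>"
# to tell the client the actual size of the resource.

def content_range_header(length, units="bytes"):
    return ("Content-Range", "%s */%d" % (units, length))


print(content_range_header(1024))  # -> ('Content-Range', 'bytes */1024')
```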
440
441 class ExpectationFailed(HTTPException):
442
443 """*417* `Expectation Failed`
444
445 The server cannot meet the requirements of the Expect request-header.
446
447 .. versionadded:: 0.7
448 """
449 code = 417
450 description = (
451 'The server could not meet the requirements of the Expect header'
452 )
453
454
455 class ImATeapot(HTTPException):
456
457 """*418* `I'm a teapot`
458
459 The server should return this if it is a teapot and someone attempted
460 to brew coffee with it.
461
462 .. versionadded:: 0.7
463 """
464 code = 418
465 description = (
466 'This server is a teapot, not a coffee machine'
467 )
468
469
470 class UnprocessableEntity(HTTPException):
471
472 """*422* `Unprocessable Entity`
473
474 Used if the request is well formed, but the instructions are otherwise
475 incorrect.
476 """
477 code = 422
478 description = (
479 'The request was well-formed but was unable to be followed '
480 'due to semantic errors.'
481 )
482
483
484 class Locked(HTTPException):
485
486 """*423* `Locked`
487
488 Used if the resource that is being accessed is locked.
489 """
490 code = 423
491 description = (
492 'The resource that is being accessed is locked.'
493 )
494
495
496 class PreconditionRequired(HTTPException):
497
498 """*428* `Precondition Required`
499
500 The server requires this request to be conditional, typically to prevent
501 the lost update problem, which is a race condition between two or more
502 clients attempting to update a resource through PUT or DELETE. By requiring
503 each client to include a conditional header ("If-Match" or "If-Unmodified-
504 Since") with the proper value retained from a recent GET request, the
505 server ensures that each client has at least seen the previous revision of
506 the resource.
507 """
508 code = 428
509 description = (
510 'This request is required to be conditional; try using "If-Match" '
511 'or "If-Unmodified-Since".'
512 )
513
514
515 class TooManyRequests(HTTPException):
516
517 """*429* `Too Many Requests`
518
519 The server is limiting the rate at which this user receives responses, and
520 this request exceeds that rate. (The server may use any convenient method
521 to identify users and their request rates). The server may include a
522 "Retry-After" header to indicate how long the user should wait before
523 retrying.
524 """
525 code = 429
526 description = (
527 'This user has exceeded an allotted request count. Try again later.'
528 )
529
530
531 class RequestHeaderFieldsTooLarge(HTTPException):
532
533 """*431* `Request Header Fields Too Large`
534
535 The server refuses to process the request because the header fields are too
536 large. One or more individual fields may be too large, or the set of all
537 headers is too large.
538 """
539 code = 431
540 description = (
541 'One or more header fields exceeds the maximum size.'
542 )
543
544
545 class UnavailableForLegalReasons(HTTPException):
546
547 """*451* `Unavailable For Legal Reasons`
548
549 This status code indicates that the server is denying access to the
550 resource as a consequence of a legal demand.
551 """
552 code = 451
553 description = (
554 'Unavailable for legal reasons.'
555 )
556
557
558 class InternalServerError(HTTPException):
559
560 """*500* `Internal Server Error`
561
562 Raise if an internal server error occurred. This is a good fallback if an
563 unknown error occurred in the dispatcher.
564 """
565 code = 500
566 description = (
567 'The server encountered an internal error and was unable to '
568 'complete your request. Either the server is overloaded or there '
569 'is an error in the application.'
570 )
571
572
573 class NotImplemented(HTTPException):
574
575 """*501* `Not Implemented`
576
577 Raise if the application does not support the action requested by the
578 browser.
579 """
580 code = 501
581 description = (
582 'The server does not support the action requested by the '
583 'browser.'
584 )
585
586
587 class BadGateway(HTTPException):
588
589 """*502* `Bad Gateway`
590
591 If you do proxying in your application you should return this status code
592 if you received an invalid response from the upstream server it accessed
593 in attempting to fulfill the request.
594 """
595 code = 502
596 description = (
597 'The proxy server received an invalid response from an upstream '
598 'server.'
599 )
600
601
602 class ServiceUnavailable(HTTPException):
603
604 """*503* `Service Unavailable`
605
606 Status code you should return if a service is temporarily unavailable.
607 """
608 code = 503
609 description = (
610 'The server is temporarily unable to service your request due to '
611 'maintenance downtime or capacity problems. Please try again '
612 'later.'
613 )
614
615
616 class GatewayTimeout(HTTPException):
617
618 """*504* `Gateway Timeout`
619
620 Status code you should return if a connection to an upstream server
621 times out.
622 """
623 code = 504
624 description = (
625 'The connection to an upstream server timed out.'
626 )
627
628
629 class HTTPVersionNotSupported(HTTPException):
630
631 """*505* `HTTP Version Not Supported`
632
633 The server does not support the HTTP protocol version used in the request.
634 """
635 code = 505
636 description = (
637 'The server does not support the HTTP protocol version used in the '
638 'request.'
639 )
640
641
642 default_exceptions = {}
643 __all__ = ['HTTPException']
644
645
646 def _find_exceptions():
647 for name, obj in iteritems(globals()):
648 try:
649 is_http_exception = issubclass(obj, HTTPException)
650 except TypeError:
651 is_http_exception = False
652 if not is_http_exception or obj.code is None:
653 continue
654 __all__.append(obj.__name__)
655 old_obj = default_exceptions.get(obj.code, None)
656 if old_obj is not None and issubclass(obj, old_obj):
657 continue
658 default_exceptions[obj.code] = obj
659 _find_exceptions()
660 del _find_exceptions
661
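`_find_exceptions` above builds the code-to-class mapping by scanning module globals; when two classes share a code, the subclass is skipped in favour of its base (so `BadRequest`, not one of its subclasses, handles 400). The same scan can be reproduced on a toy namespace (all `Toy*` names are illustrative stand-ins):

```python
# Sketch of the _find_exceptions scan over a toy namespace: collect
# classes that define a "code", preferring base classes over subclasses
# that share the same code.

class ToyException(Exception):
    code = None


class ToyBadRequest(ToyException):
    code = 400


class ToyBadHost(ToyBadRequest):  # shares its base class's code
    code = 400


class ToyNotFound(ToyException):
    code = 404


def find_exceptions(namespace):
    mapping = {}
    for obj in namespace.values():
        if not (isinstance(obj, type) and issubclass(obj, ToyException)):
            continue
        if obj.code is None:
            continue
        old = mapping.get(obj.code)
        # keep the base class if a subclass shares its code
        if old is not None and issubclass(obj, old):
            continue
        mapping[obj.code] = obj
    return mapping


namespace = {"a": ToyBadRequest, "b": ToyBadHost, "c": ToyNotFound, "x": 42}
mapping = find_exceptions(namespace)
print(mapping[400].__name__)  # -> ToyBadRequest
```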
662
663 class Aborter(object):
664
665 """
666 When passed a dict of code -> exception items it can be used as a
667 callable that raises exceptions. If the first argument to the
668 callable is an integer it will be looked up in the mapping; if it's
669 a WSGI application it will be raised in a proxy exception.
670
671 The rest of the arguments are forwarded to the exception constructor.
672 """
673
674 def __init__(self, mapping=None, extra=None):
675 if mapping is None:
676 mapping = default_exceptions
677 self.mapping = dict(mapping)
678 if extra is not None:
679 self.mapping.update(extra)
680
681 def __call__(self, code, *args, **kwargs):
682 if not args and not kwargs and not isinstance(code, integer_types):
683 raise HTTPException(response=code)
684 if code not in self.mapping:
685 raise LookupError('no exception for %r' % code)
686 raise self.mapping[code](*args, **kwargs)
687
688
689 def abort(status, *args, **kwargs):
690 '''
691 Raises an :py:exc:`HTTPException` for the given status code or WSGI
692 application::
693
694 abort(404) # 404 Not Found
695 abort(Response('Hello World'))
696
697 Can be passed a WSGI application or a status code. If a status code
698 is given, it's looked up in the list of exceptions and that exception
699 is raised. If passed a WSGI application, it is wrapped in a proxy WSGI
700 exception and that is raised::
701
702 abort(404)
703 abort(Response('Hello World'))
704
705 '''
706 return _aborter(status, *args, **kwargs)
707
708 _aborter = Aborter()
709
710
711 #: an exception that is used internally to signal both a key error and a
712 #: bad request. Used by a lot of the datastructures.
713 BadRequestKeyError = BadRequest.wrap(KeyError)
714
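`BadRequestKeyError` shows what `wrap()` produces: a class that is simultaneously a `KeyError` and a `BadRequest`, so data structures can raise one exception that both `dict`-style callers and HTTP error handlers will catch. The technique in isolation (the `Toy*` names stand in for Werkzeug's classes):

```python
# Sketch of HTTPException.wrap(): build a class that subclasses both an
# HTTP error class and a builtin exception type.

class ToyBadRequest(Exception):
    code = 400


def wrap(cls, exception, name=None):
    class newcls(cls, exception):
        def __init__(self, arg=None):
            cls.__init__(self)
            exception.__init__(self, arg)

    newcls.__name__ = name or cls.__name__ + exception.__name__
    return newcls


ToyBadRequestKeyError = wrap(ToyBadRequest, KeyError)

caught_as_key_error = False
try:
    raise ToyBadRequestKeyError("missing_field")
except KeyError as e:
    # the same exception is also an HTTP 400
    caught_as_key_error = isinstance(e, ToyBadRequest)

print(ToyBadRequestKeyError.__name__)  # -> ToyBadRequestKeyError
```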
715
716 # imported here because of circular dependencies of werkzeug.utils
717 from werkzeug.utils import escape
718 from werkzeug.http import HTTP_STATUS_CODES
werkzeug/filesystem.py (0 additions, 66 deletions)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.filesystem
3 ~~~~~~~~~~~~~~~~~~~
4
5 Various utilities for the local filesystem.
6
7 :copyright: (c) 2015 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10
11 import codecs
12 import sys
13 import warnings
14
15 # We do not trust traditional unixes.
16 has_likely_buggy_unicode_filesystem = \
17 sys.platform.startswith('linux') or 'bsd' in sys.platform
18
19
20 def _is_ascii_encoding(encoding):
21 """
22 Given an encoding this figures out if the encoding is actually ASCII (which
23 is something we don't actually want in most cases). This is necessary
24 because ASCII comes under many names such as ANSI_X3.4-1968.
25 """
26 if encoding is None:
27 return False
28 try:
29 return codecs.lookup(encoding).name == 'ascii'
30 except LookupError:
31 return False
32
33
34 class BrokenFilesystemWarning(RuntimeWarning, UnicodeWarning):
35 '''The warning used by Werkzeug to signal a broken filesystem. Will only be
36 used once per runtime.'''
37
38
39 _warned_about_filesystem_encoding = False
40
41
42 def get_filesystem_encoding():
43 """
44 Returns the filesystem encoding that should be used. Note that this is
45 different from the Python understanding of the filesystem encoding which
46 might be deeply flawed. Do not use this value against Python's unicode APIs
47 because it might be different. See :ref:`filesystem-encoding` for the exact
48 behavior.
49
50 The concept of a filesystem encoding is generally not something you
51 should rely on. As such, if you ever need to use this function for
52 anything other than writing wrapper code, reconsider.
53 """
54 global _warned_about_filesystem_encoding
55 rv = sys.getfilesystemencoding()
56 if has_likely_buggy_unicode_filesystem and not rv \
57 or _is_ascii_encoding(rv):
58 if not _warned_about_filesystem_encoding:
59 warnings.warn(
60 'Detected a misconfigured UNIX filesystem: Will use UTF-8 as '
61 'filesystem encoding instead of {0!r}'.format(rv),
62 BrokenFilesystemWarning)
63 _warned_about_filesystem_encoding = True
64 return 'utf-8'
65 return rv
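The key trick in `_is_ascii_encoding` above is that `codecs.lookup()` normalizes the many aliases ASCII hides behind, such as ``ANSI_X3.4-1968`` (the charset name glibc reports for the C/POSIX locale). Demonstrated in isolation:

```python
import codecs

def is_ascii_encoding(encoding):
    """True if the (possibly aliased) encoding name means ASCII."""
    if encoding is None:
        return False
    try:
        # lookup() resolves aliases to a canonical codec name
        return codecs.lookup(encoding).name == "ascii"
    except LookupError:
        return False


# glibc reports the C/POSIX locale's charset under this alias:
print(is_ascii_encoding("ANSI_X3.4-1968"))  # -> True
print(is_ascii_encoding("utf-8"))           # -> False
```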
werkzeug/formparser.py (0 additions, 534 deletions)
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.formparser
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module implements the form parsing. It supports url-encoded forms
6 as well as non-nested multipart uploads.
7
8 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
10 """
11 import re
12 import codecs
13
14 # there are some platforms where SpooledTemporaryFile is not available.
15 # In that case we need to provide a fallback.
16 try:
17 from tempfile import SpooledTemporaryFile
18 except ImportError:
19 from tempfile import TemporaryFile
20 SpooledTemporaryFile = None
21
22 from itertools import chain, repeat, tee
23 from functools import update_wrapper
24
25 from werkzeug._compat import to_native, text_type, BytesIO
26 from werkzeug.urls import url_decode_stream
27 from werkzeug.wsgi import make_line_iter, \
28 get_input_stream, get_content_length
29 from werkzeug.datastructures import Headers, FileStorage, MultiDict
30 from werkzeug.http import parse_options_header
31
32
33 #: an iterator that yields empty strings
34 _empty_string_iter = repeat('')
35
36 #: a regular expression for multipart boundaries
37 _multipart_boundary_re = re.compile('^[ -~]{0,200}[!-~]$')
38
39 #: HTTP transfer encodings that are also available in Python and that
40 #: we support for multipart messages.
41 _supported_multipart_encodings = frozenset(['base64', 'quoted-printable'])
42
43
44 def default_stream_factory(total_content_length, filename, content_type,
45 content_length=None):
46 """The stream factory that is used per default."""
47 max_size = 1024 * 500
48 if SpooledTemporaryFile is not None:
49 return SpooledTemporaryFile(max_size=max_size, mode='wb+')
50 if total_content_length is None or total_content_length > max_size:
51 return TemporaryFile('wb+')
52 return BytesIO()
53
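The factory above prefers `SpooledTemporaryFile`, which buffers uploads in memory until they exceed ``max_size`` (500 kB here) and only then rolls over to a real file on disk, transparently to the caller. A rough demonstration of the behavior it relies on:

```python
from tempfile import SpooledTemporaryFile

# A spooled file buffers in memory up to max_size, then rolls over to a
# real temporary file on disk; readers and writers don't notice.
with SpooledTemporaryFile(max_size=1024 * 500, mode="wb+") as stream:
    stream.write(b"field=value")
    stream.seek(0)
    data = stream.read()

print(data)  # -> b'field=value'
```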
54
55 def parse_form_data(environ, stream_factory=None, charset='utf-8',
56 errors='replace', max_form_memory_size=None,
57 max_content_length=None, cls=None,
58 silent=True):
59 """Parse the form data in the environ and return it as tuple in the form
60 ``(stream, form, files)``. You should only call this method if the
61 transport method is `POST`, `PUT`, or `PATCH`.
62
63 If the mimetype of the transmitted data is `multipart/form-data` the
64 files multidict will be filled with `FileStorage` objects. If the
65 mimetype is unknown the input stream is wrapped and returned as the
66 first argument, otherwise the stream is empty.
67
68 This is a shortcut for the common usage of :class:`FormDataParser`.
69
70 Have a look at :ref:`dealing-with-request-data` for more details.
71
72 .. versionadded:: 0.5
73 The `max_form_memory_size`, `max_content_length` and
74 `cls` parameters were added.
75
76 .. versionadded:: 0.5.1
77 The optional `silent` flag was added.
78
79 :param environ: the WSGI environment to be used for parsing.
80 :param stream_factory: An optional callable that returns a new read and
81 writeable file descriptor. This callable works
82 the same as :meth:`~BaseResponse._get_file_stream`.
83 :param charset: The character set for URL and url encoded form data.
84 :param errors: The encoding error behavior.
85 :param max_form_memory_size: the maximum number of bytes to be accepted for
86 in-memory stored form data. If the data
87 exceeds the value specified an
88 :exc:`~exceptions.RequestEntityTooLarge`
89 exception is raised.
90 :param max_content_length: If this is provided and the transmitted data
91 is longer than this value an
92 :exc:`~exceptions.RequestEntityTooLarge`
93 exception is raised.
94 :param cls: an optional dict class to use. If this is not specified
95 or `None` the default :class:`MultiDict` is used.
96 :param silent: If set to False parsing errors will not be caught.
97 :return: A tuple in the form ``(stream, form, files)``.
98 """
99 return FormDataParser(stream_factory, charset, errors,
100 max_form_memory_size, max_content_length,
101 cls, silent).parse_from_environ(environ)
102
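For the `application/x-www-form-urlencoded` branch, the resulting `form` multidict holds roughly the pairs the stdlib would produce from the same body; a minimal sketch:

```python
from urllib.parse import parse_qsl

# Roughly what the urlencoded parsing branch yields for the ``form``
# part of the ``(stream, form, files)`` tuple: repeated keys survive.
body = 'name=alice&tags=a&tags=b'
pairs = parse_qsl(body)
print(pairs)  # [('name', 'alice'), ('tags', 'a'), ('tags', 'b')]
```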
103
104 def exhaust_stream(f):
105 """Helper decorator that exhausts the stream when the wrapped method returns."""
106
107 def wrapper(self, stream, *args, **kwargs):
108 try:
109 return f(self, stream, *args, **kwargs)
110 finally:
111 exhaust = getattr(stream, 'exhaust', None)
112 if exhaust is not None:
113 exhaust()
114 else:
115 while 1:
116 chunk = stream.read(1024 * 64)
117 if not chunk:
118 break
119 return update_wrapper(wrapper, f)
120
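The fallback branch of the decorator drains the stream in 64 KiB chunks; the same pattern in isolation, against a plain in-memory stream:

```python
from io import BytesIO

def exhaust(stream):
    # Drain the stream in 64 KiB chunks, mirroring the decorator's
    # fallback for streams without an ``exhaust()`` method.
    while True:
        chunk = stream.read(1024 * 64)
        if not chunk:
            break

s = BytesIO(b'a' * 200000)
exhaust(s)
remaining = s.read()
print(remaining)  # b''
```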
121
122 class FormDataParser(object):
123
124 """This class implements parsing of form data for Werkzeug. By itself
125 it can parse multipart and url encoded form data. It can be subclassed
126 and extended but for most mimetypes it is a better idea to use the
127 untouched stream and expose it as separate attributes on a request
128 object.
129
130 .. versionadded:: 0.8
131
132 :param stream_factory: An optional callable that returns a new read and
133 writeable file descriptor. This callable works
134 the same as :meth:`~BaseResponse._get_file_stream`.
135 :param charset: The character set for URL and url encoded form data.
136 :param errors: The encoding error behavior.
137 :param max_form_memory_size: the maximum number of bytes to be accepted for
138 in-memory stored form data. If the data
139 exceeds the value specified an
140 :exc:`~exceptions.RequestEntityTooLarge`
141 exception is raised.
142 :param max_content_length: If this is provided and the transmitted data
143 is longer than this value an
144 :exc:`~exceptions.RequestEntityTooLarge`
145 exception is raised.
146 :param cls: an optional dict class to use. If this is not specified
147 or `None` the default :class:`MultiDict` is used.
148 :param silent: If set to False parsing errors will not be caught.
149 """
150
151 def __init__(self, stream_factory=None, charset='utf-8',
152 errors='replace', max_form_memory_size=None,
153 max_content_length=None, cls=None,
154 silent=True):
155 if stream_factory is None:
156 stream_factory = default_stream_factory
157 self.stream_factory = stream_factory
158 self.charset = charset
159 self.errors = errors
160 self.max_form_memory_size = max_form_memory_size
161 self.max_content_length = max_content_length
162 if cls is None:
163 cls = MultiDict
164 self.cls = cls
165 self.silent = silent
166
167 def get_parse_func(self, mimetype, options):
168 return self.parse_functions.get(mimetype)
169
170 def parse_from_environ(self, environ):
171 """Parses the information from the environment as form data.
172
173 :param environ: the WSGI environment to be used for parsing.
174 :return: A tuple in the form ``(stream, form, files)``.
175 """
176 content_type = environ.get('CONTENT_TYPE', '')
177 content_length = get_content_length(environ)
178 mimetype, options = parse_options_header(content_type)
179 return self.parse(get_input_stream(environ), mimetype,
180 content_length, options)
181
182 def parse(self, stream, mimetype, content_length, options=None):
183 """Parses the information from the given stream, mimetype,
184 content length and mimetype parameters.
185
186 :param stream: an input stream
187 :param mimetype: the mimetype of the data
188 :param content_length: the content length of the incoming data
189 :param options: optional mimetype parameters (used for
190 the multipart boundary for instance)
191 :return: A tuple in the form ``(stream, form, files)``.
192 """
193 if self.max_content_length is not None and \
194 content_length is not None and \
195 content_length > self.max_content_length:
196 raise exceptions.RequestEntityTooLarge()
197 if options is None:
198 options = {}
199
200 parse_func = self.get_parse_func(mimetype, options)
201 if parse_func is not None:
202 try:
203 return parse_func(self, stream, mimetype,
204 content_length, options)
205 except ValueError:
206 if not self.silent:
207 raise
208
209 return stream, self.cls(), self.cls()
210
211 @exhaust_stream
212 def _parse_multipart(self, stream, mimetype, content_length, options):
213 parser = MultiPartParser(self.stream_factory, self.charset, self.errors,
214 max_form_memory_size=self.max_form_memory_size,
215 cls=self.cls)
216 boundary = options.get('boundary')
217 if boundary is None:
218 raise ValueError('Missing boundary')
219 if isinstance(boundary, text_type):
220 boundary = boundary.encode('ascii')
221 form, files = parser.parse(stream, boundary, content_length)
222 return stream, form, files
223
224 @exhaust_stream
225 def _parse_urlencoded(self, stream, mimetype, content_length, options):
226 if self.max_form_memory_size is not None and \
227 content_length is not None and \
228 content_length > self.max_form_memory_size:
229 raise exceptions.RequestEntityTooLarge()
230 form = url_decode_stream(stream, self.charset,
231 errors=self.errors, cls=self.cls)
232 return stream, form, self.cls()
233
234 #: mapping of mimetypes to parsing functions
235 parse_functions = {
236 'multipart/form-data': _parse_multipart,
237 'application/x-www-form-urlencoded': _parse_urlencoded,
238 'application/x-url-encoded': _parse_urlencoded
239 }
240
241
242 def is_valid_multipart_boundary(boundary):
243 """Checks if the string given is a valid multipart boundary."""
244 return _multipart_boundary_re.match(boundary) is not None
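The boundary rule can be checked in isolation with the same pattern the module uses: up to 200 printable ASCII characters, the last of which must not be a space:

```python
import re

# Same pattern as the module's _multipart_boundary_re.
boundary_re = re.compile('^[ -~]{0,200}[!-~]$')

ok = boundary_re.match('----WebKitFormBoundaryzx9qR7mE') is not None
bad = boundary_re.match('ends with space ') is not None
print(ok, bad)
```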
245
246
247 def _line_parse(line):
248 """Removes line ending characters and returns a tuple (`stripped_line`,
249 `is_terminated`).
250 """
251 if line[-2:] in ['\r\n', b'\r\n']:
252 return line[:-2], True
253 elif line[-1:] in ['\r', '\n', b'\r', b'\n']:
254 return line[:-1], True
255 return line, False
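The same logic in isolation, showing that it handles text and bytes input and reports whether a terminator was present at all:

```python
def line_parse(line):
    # Mirrors _line_parse: strip one line ending (CRLF, CR, or LF)
    # and report whether the line was terminated.
    if line[-2:] in ('\r\n', b'\r\n'):
        return line[:-2], True
    if line[-1:] in ('\r', '\n', b'\r', b'\n'):
        return line[:-1], True
    return line, False

r1 = line_parse('value\r\n')
r2 = line_parse(b'value\n')
r3 = line_parse('truncated')
print(r1, r2, r3)
```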
256
257
258 def parse_multipart_headers(iterable):
259 """Parses multipart headers from an iterable that yields lines (including
260 the trailing newline symbol). The iterable has to be newline terminated.
261
262 The iterable will stop at the line where the headers ended so it can be
263 further consumed.
264
265 :param iterable: iterable of strings that are newline terminated
266 """
267 result = []
268 for line in iterable:
269 line = to_native(line)
270 line, line_terminated = _line_parse(line)
271 if not line_terminated:
272 raise ValueError('unexpected end of line in multipart header')
273 if not line:
274 break
275 elif line[0] in ' \t' and result:
276 key, value = result[-1]
277 result[-1] = (key, value + '\n ' + line[1:])
278 else:
279 parts = line.split(':', 1)
280 if len(parts) == 2:
281 result.append((parts[0].strip(), parts[1].strip()))
282
283 # we link the list to the headers; no need to create a copy, the
284 # list was not shared anyway.
285 return Headers(result)
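A simplified, stdlib-only version of the folding loop above (the termination check is omitted; lines here are assumed to be already stripped of their line endings):

```python
def parse_headers(lines):
    # A line starting with a space or tab continues the previous
    # header's value; anything else starts a new ``key: value`` pair.
    result = []
    for line in lines:
        if not line:
            break
        if line[0] in ' \t' and result:
            key, value = result[-1]
            result[-1] = (key, value + '\n ' + line[1:])
        else:
            parts = line.split(':', 1)
            if len(parts) == 2:
                result.append((parts[0].strip(), parts[1].strip()))
    return result

headers = parse_headers([
    'Content-Disposition: form-data;',
    ' name="field"',
    '',
])
print(headers)
```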
286
287
288 _begin_form = 'begin_form'
289 _begin_file = 'begin_file'
290 _cont = 'cont'
291 _end = 'end'
292
293
294 class MultiPartParser(object):
295
296 def __init__(self, stream_factory=None, charset='utf-8', errors='replace',
297 max_form_memory_size=None, cls=None, buffer_size=64 * 1024):
298 self.charset = charset
299 self.errors = errors
300 self.max_form_memory_size = max_form_memory_size
301 self.stream_factory = default_stream_factory if stream_factory is None else stream_factory
302 self.cls = MultiDict if cls is None else cls
303
304 # make sure the buffer size is divisible by four so that we can base64
305 # decode chunk by chunk
306 assert buffer_size % 4 == 0, 'buffer size has to be divisible by 4'
307 # the buffer size also has to be at least 1024 bytes, otherwise
308 # long headers will not fit into the buffer
309 assert buffer_size >= 1024, 'buffer size has to be at least 1KB'
310
311 self.buffer_size = buffer_size
312
313 def _fix_ie_filename(self, filename):
314 """Internet Explorer 6 transmits the full file path when a file is
315 uploaded. This function strips the path if the filename looks like
316 an absolute Windows path or a UNC path.
317 """
318 if filename[1:3] == ':\\' or filename[:2] == '\\\\':
319 return filename.split('\\')[-1]
320 return filename
321
322 def _find_terminator(self, iterator):
323 """The terminator might have some additional newlines before it.
324 There is at least one application that sends additional newlines
325 before headers (the python setuptools package).
326 """
327 for line in iterator:
328 if not line:
329 break
330 line = line.strip()
331 if line:
332 return line
333 return b''
334
335 def fail(self, message):
336 raise ValueError(message)
337
338 def get_part_encoding(self, headers):
339 transfer_encoding = headers.get('content-transfer-encoding')
340 if transfer_encoding is not None and \
341 transfer_encoding in _supported_multipart_encodings:
342 return transfer_encoding
343
344 def get_part_charset(self, headers):
345 # Figure out input charset for current part
346 content_type = headers.get('content-type')
347 if content_type:
348 mimetype, ct_params = parse_options_header(content_type)
349 return ct_params.get('charset', self.charset)
350 return self.charset
351
352 def start_file_streaming(self, filename, headers, total_content_length):
353 if isinstance(filename, bytes):
354 filename = filename.decode(self.charset, self.errors)
355 filename = self._fix_ie_filename(filename)
356 content_type = headers.get('content-type')
357 try:
358 content_length = int(headers['content-length'])
359 except (KeyError, ValueError):
360 content_length = 0
361 container = self.stream_factory(total_content_length, filename,
362 content_type, content_length)
363 return filename, container
364
365 def in_memory_threshold_reached(self, byte_count):
366 raise exceptions.RequestEntityTooLarge()
367
368 def validate_boundary(self, boundary):
369 if not boundary:
370 self.fail('Missing boundary')
371 if not is_valid_multipart_boundary(boundary):
372 self.fail('Invalid boundary: %s' % boundary)
373 if len(boundary) > self.buffer_size: # pragma: no cover
374 # this should never happen because we check for a minimum size
375 # of 1024 and boundaries may not be longer than 200. The only
376 # situation when this happens is for non-debug builds where
377 # the assert is skipped.
378 self.fail('Boundary longer than buffer size')
379
380 def parse_lines(self, file, boundary, content_length, cap_at_buffer=True):
381 """Generate parts of
382 ``('begin_form', (headers, name))``
383 ``('begin_file', (headers, name, filename))``
384 ``('cont', bytestring)``
385 ``('end', None)``
386
387 Always obeys the grammar
388 parts = ( begin_form cont* end |
389 begin_file cont* end )*
390 """
391 next_part = b'--' + boundary
392 last_part = next_part + b'--'
393
394 iterator = chain(make_line_iter(file, limit=content_length,
395 buffer_size=self.buffer_size,
396 cap_at_buffer=cap_at_buffer),
397 _empty_string_iter)
398
399 terminator = self._find_terminator(iterator)
400
401 if terminator == last_part:
402 return
403 elif terminator != next_part:
404 self.fail('Expected boundary at start of multipart data')
405
406 while terminator != last_part:
407 headers = parse_multipart_headers(iterator)
408
409 disposition = headers.get('content-disposition')
410 if disposition is None:
411 self.fail('Missing Content-Disposition header')
412 disposition, extra = parse_options_header(disposition)
413 transfer_encoding = self.get_part_encoding(headers)
414 name = extra.get('name')
415
416 # Accept filename* to support non-ascii filenames as per rfc2231
417 filename = extra.get('filename') or extra.get('filename*')
418
419 # if no filename is given the part is a form field that we buffer
420 # in memory. A list is used as a temporary container.
421 if filename is None:
422 yield _begin_form, (headers, name)
423
424 # otherwise it is a file upload and we ask the stream factory
425 # for something we can write into.
426 else:
427 yield _begin_file, (headers, name, filename)
428
429 buf = b''
430 for line in iterator:
431 if not line:
432 self.fail('unexpected end of stream')
433
434 if line[:2] == b'--':
435 terminator = line.rstrip()
436 if terminator in (next_part, last_part):
437 break
438
439 if transfer_encoding is not None:
440 if transfer_encoding == 'base64':
441 transfer_encoding = 'base64_codec'
442 try:
443 line = codecs.decode(line, transfer_encoding)
444 except Exception:
445 self.fail('could not decode transfer encoded chunk')
446
447 # we have something in the buffer from the last iteration.
448 # this is usually a newline delimiter.
449 if buf:
450 yield _cont, buf
451 buf = b''
452
453 # If the line ends with windows CRLF we write everything except
454 # the last two bytes. In all other cases however we write
455 # everything except the last byte. If it was a newline, that's
456 # fine, otherwise it does not matter because we will write it
457 # the next iteration. This ensures we do not write the
458 # final newline into the stream. That way we do not have to
459 # truncate the stream. However we do have to make sure that
460 # if something other than a newline is in there we write it
461 # out.
462 if line[-2:] == b'\r\n':
463 buf = b'\r\n'
464 cutoff = -2
465 else:
466 buf = line[-1:]
467 cutoff = -1
468 yield _cont, line[:cutoff]
469
470 else: # pragma: no cover
471 raise ValueError('unexpected end of part')
472
473 # if we have a leftover in the buffer that is not a newline
474 # character we have to flush it, otherwise we will chop off
475 # the end of certain values.
476 if buf not in (b'', b'\r', b'\n', b'\r\n'):
477 yield _cont, buf
478
479 yield _end, None
480
481 def parse_parts(self, file, boundary, content_length):
482 """Generate ``('file', (name, val))`` and
483 ``('form', (name, val))`` parts.
484 """
485 in_memory = 0
486
487 for ellt, ell in self.parse_lines(file, boundary, content_length):
488 if ellt == _begin_file:
489 headers, name, filename = ell
490 is_file = True
491 guard_memory = False
492 filename, container = self.start_file_streaming(
493 filename, headers, content_length)
494 _write = container.write
495
496 elif ellt == _begin_form:
497 headers, name = ell
498 is_file = False
499 container = []
500 _write = container.append
501 guard_memory = self.max_form_memory_size is not None
502
503 elif ellt == _cont:
504 _write(ell)
505 # if we write into memory and there is a memory size limit we
506 # count the number of bytes in memory and raise an exception if
507 # there is too much data in memory.
508 if guard_memory:
509 in_memory += len(ell)
510 if in_memory > self.max_form_memory_size:
511 self.in_memory_threshold_reached(in_memory)
512
513 elif ellt == _end:
514 if is_file:
515 container.seek(0)
516 yield ('file',
517 (name, FileStorage(container, filename, name,
518 headers=headers)))
519 else:
520 part_charset = self.get_part_charset(headers)
521 yield ('form',
522 (name, b''.join(container).decode(
523 part_charset, self.errors)))
524
525 def parse(self, file, boundary, content_length):
526 formstream, filestream = tee(
527 self.parse_parts(file, boundary, content_length), 2)
528 form = (p[1] for p in formstream if p[0] == 'form')
529 files = (p[1] for p in filestream if p[0] == 'file')
530 return self.cls(form), self.cls(files)
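The `parse()` method above splits one tagged event stream into two filtered views with `itertools.tee`; the same pattern in isolation, collecting into plain dicts instead of the multidict class:

```python
from itertools import tee

# One stream of ('form'|'file', (name, value)) events, split in two.
events = [('form', ('a', '1')), ('file', ('f', 'blob')), ('form', ('b', '2'))]
formstream, filestream = tee(iter(events), 2)
form = dict(p[1] for p in formstream if p[0] == 'form')
files = dict(p[1] for p in filestream if p[0] == 'file')
print(form, files)
```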
531
532
533 from werkzeug import exceptions # imported at the bottom to avoid a circular import
+0
-1158
werkzeug/http.py less more
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.http
3 ~~~~~~~~~~~~~
4
5 Werkzeug comes with a bunch of utilities that help it deal with
6 HTTP data. Most of the classes and functions provided by this module are
7 used by the wrappers, but they are useful on their own, too, especially if
8 the response and request objects are not used.
9
10 This covers some of the more HTTP centric features of WSGI, some other
11 utilities such as cookie handling are documented in the `werkzeug.utils`
12 module.
13
14
15 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
16 :license: BSD, see LICENSE for more details.
17 """
18 import re
19 import warnings
20 from time import time, gmtime
21 try:
22 from email.utils import parsedate_tz
23 except ImportError: # pragma: no cover
24 from email.Utils import parsedate_tz
25 try:
26 from urllib.request import parse_http_list as _parse_list_header
27 from urllib.parse import unquote_to_bytes as _unquote
28 except ImportError: # pragma: no cover
29 from urllib2 import parse_http_list as _parse_list_header, \
30 unquote as _unquote
31 from datetime import datetime, timedelta
32 from hashlib import md5
33 import base64
34
35 from werkzeug._internal import _cookie_quote, _make_cookie_domain, \
36 _cookie_parse_impl
37 from werkzeug._compat import to_unicode, iteritems, text_type, \
38 string_types, try_coerce_native, to_bytes, PY2, \
39 integer_types
40
41
42 _cookie_charset = 'latin1'
43 # for explanation of "media-range", etc. see Sections 5.3.{1,2} of RFC 7231
44 _accept_re = re.compile(
45 r'''( # media-range capturing-parenthesis
46 [^\s;,]+ # type/subtype
47 (?:[ \t]*;[ \t]* # ";"
48 (?: # parameter non-capturing-parenthesis
49 [^\s;,q][^\s;,]* # token that doesn't start with "q"
50 | # or
51 q[^\s;,=][^\s;,]* # token that is more than just "q"
52 )
53 )* # zero or more parameters
54 ) # end of media-range
55 (?:[ \t]*;[ \t]*q= # weight is a "q" parameter
56 (\d*(?:\.\d+)?) # qvalue capturing-parentheses
57 [^,]* # "extension" accept params: who cares?
58 )? # accept params are optional
59 ''', re.VERBOSE)
60 _token_chars = frozenset("!#$%&'*+-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
61 '^_`abcdefghijklmnopqrstuvwxyz|~')
62 _etag_re = re.compile(r'([Ww]/)?(?:"(.*?)"|(.*?))(?:\s*,\s*|$)')
63 _unsafe_header_chars = set('()<>@,;:\"/[]?={} \t')
64 _option_header_piece_re = re.compile(r'''
65 ;\s*
66 (?P<key>
67 "[^"\\]*(?:\\.[^"\\]*)*" # quoted string
68 |
69 [^\s;,=*]+ # token
70 )
71 \s*
72 (?: # optionally followed by =value
73 (?: # equals sign, possibly with encoding
74 \*\s*=\s* # * indicates extended notation
75 (?P<encoding>[^\s]+?)
76 '(?P<language>[^\s]*?)'
77 |
78 =\s* # basic notation
79 )
80 (?P<value>
81 "[^"\\]*(?:\\.[^"\\]*)*" # quoted string
82 |
83 [^;,]+ # token
84 )?
85 )?
86 \s*
87 ''', flags=re.VERBOSE)
88 _option_header_start_mime_type = re.compile(r',\s*([^;,\s]+)([;,]\s*.+)?')
89
90 _entity_headers = frozenset([
91 'allow', 'content-encoding', 'content-language', 'content-length',
92 'content-location', 'content-md5', 'content-range', 'content-type',
93 'expires', 'last-modified'
94 ])
95 _hop_by_hop_headers = frozenset([
96 'connection', 'keep-alive', 'proxy-authenticate',
97 'proxy-authorization', 'te', 'trailer', 'transfer-encoding',
98 'upgrade'
99 ])
100
101
102 HTTP_STATUS_CODES = {
103 100: 'Continue',
104 101: 'Switching Protocols',
105 102: 'Processing',
106 200: 'OK',
107 201: 'Created',
108 202: 'Accepted',
109 203: 'Non Authoritative Information',
110 204: 'No Content',
111 205: 'Reset Content',
112 206: 'Partial Content',
113 207: 'Multi Status',
114 226: 'IM Used', # see RFC 3229
115 300: 'Multiple Choices',
116 301: 'Moved Permanently',
117 302: 'Found',
118 303: 'See Other',
119 304: 'Not Modified',
120 305: 'Use Proxy',
121 307: 'Temporary Redirect',
122 400: 'Bad Request',
123 401: 'Unauthorized',
124 402: 'Payment Required', # unused
125 403: 'Forbidden',
126 404: 'Not Found',
127 405: 'Method Not Allowed',
128 406: 'Not Acceptable',
129 407: 'Proxy Authentication Required',
130 408: 'Request Timeout',
131 409: 'Conflict',
132 410: 'Gone',
133 411: 'Length Required',
134 412: 'Precondition Failed',
135 413: 'Request Entity Too Large',
136 414: 'Request URI Too Long',
137 415: 'Unsupported Media Type',
138 416: 'Requested Range Not Satisfiable',
139 417: 'Expectation Failed',
140 418: 'I\'m a teapot', # see RFC 2324
141 422: 'Unprocessable Entity',
142 423: 'Locked',
143 424: 'Failed Dependency',
144 426: 'Upgrade Required',
145 428: 'Precondition Required', # see RFC 6585
146 429: 'Too Many Requests',
147 431: 'Request Header Fields Too Large',
148 449: 'Retry With', # proprietary MS extension
149 451: 'Unavailable For Legal Reasons',
150 500: 'Internal Server Error',
151 501: 'Not Implemented',
152 502: 'Bad Gateway',
153 503: 'Service Unavailable',
154 504: 'Gateway Timeout',
155 505: 'HTTP Version Not Supported',
156 507: 'Insufficient Storage',
157 510: 'Not Extended'
158 }
159
160
161 def wsgi_to_bytes(data):
162 """coerce wsgi unicode represented bytes to real ones
163
164 """
165 if isinstance(data, bytes):
166 return data
167 return data.encode('latin1') # XXX: utf8 fallback?
168
169
170 def bytes_to_wsgi(data):
171 assert isinstance(data, bytes), 'data must be bytes'
172 if isinstance(data, str):
173 return data
174 else:
175 return data.decode('latin1')
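The latin-1 round trip these two helpers rely on is lossless for every byte value, which is why PEP 3333 mandates it for WSGI native strings; a quick check:

```python
# Every possible byte survives a latin-1 decode/encode round trip,
# so bytes can be smuggled through WSGI's native-string interface.
raw = bytes(range(256))
native = raw.decode('latin1')
back = native.encode('latin1')
print(back == raw)
```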
176
177
178 def quote_header_value(value, extra_chars='', allow_token=True):
179 """Quote a header value if necessary.
180
181 .. versionadded:: 0.5
182
183 :param value: the value to quote.
184 :param extra_chars: a list of extra characters to skip quoting.
185 :param allow_token: if this is enabled token values are returned
186 unchanged.
187 """
188 if isinstance(value, bytes):
189 value = bytes_to_wsgi(value)
190 value = str(value)
191 if allow_token:
192 token_chars = _token_chars | set(extra_chars)
193 if set(value).issubset(token_chars):
194 return value
195 return '"%s"' % value.replace('\\', '\\\\').replace('"', '\\"')
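A self-contained sketch of the quoting rule above, using the same token alphabet: plain tokens pass through unchanged, anything else is quoted with backslash-escaped backslashes and quotes:

```python
token_chars = frozenset(
    "!#$%&'*+-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "^_`abcdefghijklmnopqrstuvwxyz|~"
)

def quote_value(value):
    # Tokens need no quoting; other values are wrapped in double quotes
    # with embedded backslashes and quotes escaped.
    if set(value).issubset(token_chars):
        return value
    return '"%s"' % value.replace('\\', '\\\\').replace('"', '\\"')

q1 = quote_value('token')
q2 = quote_value('needs quoting')
print(q1, q2)
```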
196
197
198 def unquote_header_value(value, is_filename=False):
199 r"""Unquotes a header value. (Reversal of :func:`quote_header_value`).
200 This does not use the real RFC unquoting but mimics what browsers
201 actually do when quoting.
202
203 .. versionadded:: 0.5
204
205 :param value: the header value to unquote.
206 """
207 if value and value[0] == value[-1] == '"':
208 # this is not the real unquoting, but fixing this so that the
209 # RFC is met will result in bugs with internet explorer and
210 # probably some other browsers as well. IE for example is
211 # uploading files with "C:\foo\bar.txt" as filename
212 value = value[1:-1]
213
214 # if this is a filename and the starting characters look like
215 # a UNC path, then just return the value without quotes. Using the
216 # replace sequence below on a UNC path has the effect of turning
217 # the leading double slash into a single slash and then
218 # _fix_ie_filename() doesn't work correctly. See #458.
219 if not is_filename or value[:2] != '\\\\':
220 return value.replace('\\\\', '\\').replace('\\"', '"')
221 return value
222
223
224 def dump_options_header(header, options):
225 """The reverse function to :func:`parse_options_header`.
226
227 :param header: the header to dump
228 :param options: a dict of options to append.
229 """
230 segments = []
231 if header is not None:
232 segments.append(header)
233 for key, value in iteritems(options):
234 if value is None:
235 segments.append(key)
236 else:
237 segments.append('%s=%s' % (key, quote_header_value(value)))
238 return '; '.join(segments)
239
240
241 def dump_header(iterable, allow_token=True):
242 """Dump an HTTP header again. This is the reversal of
243 :func:`parse_list_header`, :func:`parse_set_header` and
244 :func:`parse_dict_header`. This also quotes strings that include an
245 equals sign unless you pass it as dict of key, value pairs.
246
247 >>> dump_header({'foo': 'bar baz'})
248 'foo="bar baz"'
249 >>> dump_header(('foo', 'bar baz'))
250 'foo, "bar baz"'
251
252 :param iterable: the iterable or dict of values to quote.
253 :param allow_token: if set to `False` tokens as values are disallowed.
254 See :func:`quote_header_value` for more details.
255 """
256 if isinstance(iterable, dict):
257 items = []
258 for key, value in iteritems(iterable):
259 if value is None:
260 items.append(key)
261 else:
262 items.append('%s=%s' % (
263 key,
264 quote_header_value(value, allow_token=allow_token)
265 ))
266 else:
267 items = [quote_header_value(x, allow_token=allow_token)
268 for x in iterable]
269 return ', '.join(items)
270
271
272 def parse_list_header(value):
273 """Parse lists as described by RFC 2068 Section 2.
274
275 In particular, parse comma-separated lists where the elements of
276 the list may include quoted-strings. A quoted-string could
277 contain a comma. A non-quoted string could have quotes in the
278 middle. Quotes are removed automatically after parsing.
279
280 It basically works like :func:`parse_set_header` just that items
281 may appear multiple times and case sensitivity is preserved.
282
283 The return value is a standard :class:`list`:
284
285 >>> parse_list_header('token, "quoted value"')
286 ['token', 'quoted value']
287
288 To create a header from the :class:`list` again, use the
289 :func:`dump_header` function.
290
291 :param value: a string with a list header.
292 :return: :class:`list`
293 """
294 result = []
295 for item in _parse_list_header(value):
296 if item[:1] == item[-1:] == '"':
297 item = unquote_header_value(item[1:-1])
298 result.append(item)
299 return result
300
301
302 def parse_dict_header(value, cls=dict):
303 """Parse lists of key, value pairs as described by RFC 2068 Section 2 and
304 convert them into a python dict (or any other mapping object created from
305 the type with a dict like interface provided by the `cls` argument):
306
307 >>> d = parse_dict_header('foo="is a fish", bar="as well"')
308 >>> type(d) is dict
309 True
310 >>> sorted(d.items())
311 [('bar', 'as well'), ('foo', 'is a fish')]
312
313 If there is no value for a key it will be `None`:
314
315 >>> parse_dict_header('key_without_value')
316 {'key_without_value': None}
317
318 To create a header from the :class:`dict` again, use the
319 :func:`dump_header` function.
320
321 .. versionchanged:: 0.9
322 Added support for `cls` argument.
323
324 :param value: a string with a dict header.
325 :param cls: callable to use for storage of parsed results.
326 :return: an instance of `cls`
327 """
328 result = cls()
329 if not isinstance(value, text_type):
330 # XXX: validate
331 value = bytes_to_wsgi(value)
332 for item in _parse_list_header(value):
333 if '=' not in item:
334 result[item] = None
335 continue
336 name, value = item.split('=', 1)
337 if value[:1] == value[-1:] == '"':
338 value = unquote_header_value(value[1:-1])
339 result[name] = value
340 return result
341
342
343 def parse_options_header(value, multiple=False):
344 """Parse a ``Content-Type`` like header into a tuple with the content
345 type and the options:
346
347 >>> parse_options_header('text/html; charset=utf8')
348 ('text/html', {'charset': 'utf8'})
349
350 This should not be used to parse ``Cache-Control`` like headers that use
351 a slightly different format. For these headers use the
352 :func:`parse_dict_header` function.
353
354 .. versionadded:: 0.5
355
356 :param value: the header to parse.
357 :param multiple: Whether to try to parse and return multiple MIME types
358 :return: (mimetype, options) or (mimetype, options, mimetype, options, …)
359 if multiple=True
360 """
361 if not value:
362 return '', {}
363
364 result = []
365
366 value = "," + value.replace("\n", ",")
367 while value:
368 match = _option_header_start_mime_type.match(value)
369 if not match:
370 break
371 result.append(match.group(1)) # mimetype
372 options = {}
373 # Parse options
374 rest = match.group(2)
375 while rest:
376 optmatch = _option_header_piece_re.match(rest)
377 if not optmatch:
378 break
379 option, encoding, _, option_value = optmatch.groups()
380 option = unquote_header_value(option)
381 if option_value is not None:
382 option_value = unquote_header_value(
383 option_value,
384 option == 'filename')
385 if encoding is not None:
386 option_value = _unquote(option_value).decode(encoding)
387 options[option] = option_value
388 rest = rest[optmatch.end():]
389 result.append(options)
390 if multiple is False:
391 return tuple(result)
392 value = rest
393
394 return tuple(result) if result else ('', {})
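For the common single-value case, the stdlib's `email.message.Message` performs a comparable split of the content type and its parameters, which can serve as a sanity check against this parser:

```python
from email.message import Message

# The stdlib's header machinery splits type and parameters similarly.
m = Message()
m['Content-Type'] = 'text/html; charset=utf8'
mimetype = m.get_content_type()
charset = m.get_param('charset')
print(mimetype, charset)
```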
395
396
397 def parse_accept_header(value, cls=None):
398 """Parses an HTTP Accept-* header. This does not implement a complete
399 valid algorithm but one that supports at least value and quality
400 extraction.
401
402 Returns a new :class:`Accept` object (basically a list of ``(value, quality)``
403 tuples sorted by the quality with some additional accessor methods).
404
405 The second parameter can be a subclass of :class:`Accept` that is created
406 with the parsed values and returned.
407
408 :param value: the accept header string to be parsed.
409 :param cls: the wrapper class for the return value (can be
410 :class:`Accept` or a subclass thereof)
411 :return: an instance of `cls`.
412 """
413 if cls is None:
414 cls = Accept
415
416 if not value:
417 return cls(None)
418
419 result = []
420 for match in _accept_re.finditer(value):
421 quality = match.group(2)
422 if not quality:
423 quality = 1
424 else:
425 quality = max(min(float(quality), 1), 0)
426 result.append((match.group(1), quality))
427 return cls(result)
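A deliberately simplified sketch of the value/quality extraction (it does not implement the full RFC 7231 grammar the regex above handles, e.g. whitespace around `;q=` or extension parameters):

```python
def parse_accept(value):
    # Split on commas, read an optional ;q= weight, clamp it to
    # [0, 1], and sort best-first.
    result = []
    for item in value.split(','):
        parts = item.strip().split(';q=')
        quality = 1.0
        if len(parts) == 2:
            quality = max(min(float(parts[1]), 1.0), 0.0)
        result.append((parts[0].strip(), quality))
    return sorted(result, key=lambda t: t[1], reverse=True)

accepted = parse_accept('text/html;q=0.9, application/json')
print(accepted)
```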
428
429
430 def parse_cache_control_header(value, on_update=None, cls=None):
431 """Parse a cache control header. The RFC differs between response and
432 request cache control, this method does not. It's your responsibility
433 to not use the wrong control statements.
434
435 .. versionadded:: 0.5
436 The `cls` was added. If not specified an immutable
437 :class:`~werkzeug.datastructures.RequestCacheControl` is returned.
438
439 :param value: a cache control header to be parsed.
440 :param on_update: an optional callable that is called every time a value
441 on the :class:`~werkzeug.datastructures.CacheControl`
442 object is changed.
443 :param cls: the class for the returned object. By default
444 :class:`~werkzeug.datastructures.RequestCacheControl` is used.
445 :return: a `cls` object.
446 """
447 if cls is None:
448 cls = RequestCacheControl
449 if not value:
450 return cls(None, on_update)
451 return cls(parse_dict_header(value), on_update)
452
453
454 def parse_set_header(value, on_update=None):
455 """Parse a set-like header and return a
456 :class:`~werkzeug.datastructures.HeaderSet` object:
457
458 >>> hs = parse_set_header('token, "quoted value"')
459
460 The return value is an object that treats the items case-insensitively
461 and keeps the order of the items:
462
463 >>> 'TOKEN' in hs
464 True
465 >>> hs.index('quoted value')
466 1
467 >>> hs
468 HeaderSet(['token', 'quoted value'])
469
470 To create a header from the :class:`HeaderSet` again, use the
471 :func:`dump_header` function.
472
473 :param value: a set header to be parsed.
474 :param on_update: an optional callable that is called every time a
475 value on the :class:`~werkzeug.datastructures.HeaderSet`
476 object is changed.
477 :return: a :class:`~werkzeug.datastructures.HeaderSet`
478 """
479 if not value:
480 return HeaderSet(None, on_update)
481 return HeaderSet(parse_list_header(value), on_update)
482
483
484 def parse_authorization_header(value):
485 """Parse an HTTP basic/digest authorization header transmitted by the web
486 browser. The return value is `None` if the header was invalid or
487 not given, otherwise an :class:`~werkzeug.datastructures.Authorization`
488 object.
489
490 :param value: the authorization header to parse.
491 :return: a :class:`~werkzeug.datastructures.Authorization` object or `None`.
492 """
493 if not value:
494 return
495 value = wsgi_to_bytes(value)
496 try:
497 auth_type, auth_info = value.split(None, 1)
498 auth_type = auth_type.lower()
499 except ValueError:
500 return
501 if auth_type == b'basic':
502 try:
503 username, password = base64.b64decode(auth_info).split(b':', 1)
504 except Exception:
505 return
506 return Authorization('basic', {'username': bytes_to_wsgi(username),
507 'password': bytes_to_wsgi(password)})
508 elif auth_type == b'digest':
509 auth_map = parse_dict_header(auth_info)
510 for key in 'username', 'realm', 'nonce', 'uri', 'response':
511 if key not in auth_map:
512 return
513 if 'qop' in auth_map:
514 if not auth_map.get('nc') or not auth_map.get('cnonce'):
515 return
516 return Authorization('digest', auth_map)
517
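The ``basic`` branch above mirrors how clients build the header: base64 of ``username:password``. A standalone sketch of that round trip, using only the standard library (not Werkzeug's implementation):

```python
import base64

# Build the credentials the way a browser would...
header = 'Basic ' + base64.b64encode(b'alice:secret').decode('ascii')

# ...and parse them back with the same split logic as above.
auth_type, auth_info = header.split(None, 1)
username, password = base64.b64decode(auth_info).split(b':', 1)
assert auth_type.lower() == 'basic'
assert (username, password) == (b'alice', b'secret')
```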
518
519 def parse_www_authenticate_header(value, on_update=None):
520 """Parse an HTTP WWW-Authenticate header into a
521 :class:`~werkzeug.datastructures.WWWAuthenticate` object.
522
523 :param value: a WWW-Authenticate header to parse.
524 :param on_update: an optional callable that is called every time a value
525 on the :class:`~werkzeug.datastructures.WWWAuthenticate`
526 object is changed.
527 :return: a :class:`~werkzeug.datastructures.WWWAuthenticate` object.
528 """
529 if not value:
530 return WWWAuthenticate(on_update=on_update)
531 try:
532 auth_type, auth_info = value.split(None, 1)
533 auth_type = auth_type.lower()
534 except (ValueError, AttributeError):
535 return WWWAuthenticate(value.strip().lower(), on_update=on_update)
536 return WWWAuthenticate(auth_type, parse_dict_header(auth_info),
537 on_update)
538
539
540 def parse_if_range_header(value):
541 """Parses an if-range header which can be an etag or a date. Returns
542 a :class:`~werkzeug.datastructures.IfRange` object.
543
544 .. versionadded:: 0.7
545 """
546 if not value:
547 return IfRange()
548 date = parse_date(value)
549 if date is not None:
550 return IfRange(date=date)
551 # drop weakness information
552 return IfRange(unquote_etag(value)[0])
553
554
555 def parse_range_header(value, make_inclusive=True):
556 """Parses a range header into a :class:`~werkzeug.datastructures.Range`
557 object. If the header is missing or malformed `None` is returned.
558 `ranges` is a list of ``(start, stop)`` tuples where `stop` is
559 exclusive (`None` for open-ended ranges).
560
561 .. versionadded:: 0.7
562 """
563 if not value or '=' not in value:
564 return None
565
566 ranges = []
567 last_end = 0
568 units, rng = value.split('=', 1)
569 units = units.strip().lower()
570
571 for item in rng.split(','):
572 item = item.strip()
573 if '-' not in item:
574 return None
575 if item.startswith('-'):
576 if last_end < 0:
577 return None
578 try:
579 begin = int(item)
580 except ValueError:
581 return None
582 end = None
583 last_end = -1
584 elif '-' in item:
585 begin, end = item.split('-', 1)
586 begin = begin.strip()
587 end = end.strip()
588 if not begin.isdigit():
589 return None
590 begin = int(begin)
591 if begin < last_end or last_end < 0:
592 return None
593 if end:
594 if not end.isdigit():
595 return None
596 end = int(end) + 1
597 if begin >= end:
598 return None
599 else:
600 end = None
601 last_end = end
602 ranges.append((begin, end))
603
604 return Range(units, ranges)
605
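A short sanity check of the parser above (illustrative, assuming Werkzeug is importable); note the exclusive stops and the `None` stop for an open-ended range:

```python
from werkzeug.http import parse_range_header

rv = parse_range_header('bytes=0-499,1000-')
assert rv.units == 'bytes'
# stops are exclusive; an open-ended range has stop=None
assert rv.ranges == [(0, 500), (1000, None)]
# a syntactically invalid header yields None rather than raising
assert parse_range_header('bytes=abc') is None
```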
606
607 def parse_content_range_header(value, on_update=None):
608 """Parses a ``Content-Range`` header into a
609 :class:`~werkzeug.datastructures.ContentRange` object, or `None` if
610 parsing is not possible.
611
612 .. versionadded:: 0.7
613
614 :param value: a content range header to be parsed.
615 :param on_update: an optional callable that is called every time a value
616 on the :class:`~werkzeug.datastructures.ContentRange`
617 object is changed.
618 """
619 if value is None:
620 return None
621 try:
622 units, rangedef = (value or '').strip().split(None, 1)
623 except ValueError:
624 return None
625
626 if '/' not in rangedef:
627 return None
628 rng, length = rangedef.split('/', 1)
629 if length == '*':
630 length = None
631 elif length.isdigit():
632 length = int(length)
633 else:
634 return None
635
636 if rng == '*':
637 return ContentRange(units, None, None, length, on_update=on_update)
638 elif '-' not in rng:
639 return None
640
641 start, stop = rng.split('-', 1)
642 try:
643 start = int(start)
644 stop = int(stop) + 1
645 except ValueError:
646 return None
647
648 if is_byte_range_valid(start, stop, length):
649 return ContentRange(units, start, stop, length, on_update=on_update)
650
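For illustration (assuming Werkzeug is importable), the parser above converts the inclusive wire format into an exclusive ``stop`` and handles the ``*`` forms:

```python
from werkzeug.http import parse_content_range_header

rv = parse_content_range_header('bytes 0-98/100')
assert (rv.units, rv.start, rv.stop, rv.length) == ('bytes', 0, 99, 100)

# an unsatisfied range with a known total length parses too
star = parse_content_range_header('bytes */1234')
assert star.start is None and star.length == 1234
```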
651
652 def quote_etag(etag, weak=False):
653 """Quote an etag.
654
655 :param etag: the etag to quote.
656 :param weak: set to `True` to tag it "weak".
657 """
658 if '"' in etag:
659 raise ValueError('invalid etag')
660 etag = '"%s"' % etag
661 if weak:
662 etag = 'W/' + etag
663 return etag
664
665
666 def unquote_etag(etag):
667 """Unquote a single etag:
668
669 >>> unquote_etag('W/"bar"')
670 ('bar', True)
671 >>> unquote_etag('"bar"')
672 ('bar', False)
673
674 :param etag: the etag identifier to unquote.
675 :return: a ``(etag, weak)`` tuple.
676 """
677 if not etag:
678 return None, None
679 etag = etag.strip()
680 weak = False
681 if etag.startswith(('W/', 'w/')):
682 weak = True
683 etag = etag[2:]
684 if etag[:1] == etag[-1:] == '"':
685 etag = etag[1:-1]
686 return etag, weak
687
688
689 def parse_etags(value):
690 """Parse an etag header.
691
692 :param value: the tag header to parse
693 :return: an :class:`~werkzeug.datastructures.ETags` object.
694 """
695 if not value:
696 return ETags()
697 strong = []
698 weak = []
699 end = len(value)
700 pos = 0
701 while pos < end:
702 match = _etag_re.match(value, pos)
703 if match is None:
704 break
705 is_weak, quoted, raw = match.groups()
706 if raw == '*':
707 return ETags(star_tag=True)
708 elif quoted:
709 raw = quoted
710 if is_weak:
711 weak.append(raw)
712 else:
713 strong.append(raw)
714 pos = match.end()
715 return ETags(strong, weak)
716
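A small usage sketch for the etag parser above (illustrative, assuming Werkzeug is importable), showing the strong/weak distinction kept by the returned object:

```python
from werkzeug.http import parse_etags

etags = parse_etags('W/"a", "b"')
assert etags.contains('b')        # strong comparison matches a strong tag
assert not etags.contains('a')    # 'a' is only a weak tag
assert etags.contains_weak('a')   # weak comparison matches it
assert parse_etags('*').star_tag  # '*' matches everything
```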
717
718 def generate_etag(data):
719 """Generate an etag for some data."""
720 return md5(data).hexdigest()
721
722
723 def parse_date(value):
724 """Parse one of the following date formats into a datetime object:
725
726 .. sourcecode:: text
727
728 Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123
729 Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
730 Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format
731
732 If parsing fails the return value is `None`.
733
734 :param value: a string with a supported date format.
735 :return: a :class:`datetime.datetime` object.
736 """
737 if value:
738 t = parsedate_tz(value.strip())
739 if t is not None:
740 try:
741 year = t[0]
742 # unfortunately that function does not tell us if two digit
743 # years were part of the string, or if they were prefixed
744 # with two zeroes. So what we do is to assume that 69-99
745 # refer to the 1900s, and everything below to the 2000s
746 if year >= 0 and year <= 68:
747 year += 2000
748 elif year >= 69 and year <= 99:
749 year += 1900
750 return datetime(*((year,) + t[1:7])) - \
751 timedelta(seconds=t[-1] or 0)
752 except (ValueError, OverflowError):
753 return None
754
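A quick check of the three supported formats (illustrative, assuming Werkzeug is importable; only date fields are asserted since some versions return timezone-aware datetimes):

```python
from werkzeug.http import parse_date

dt = parse_date('Sun, 06 Nov 1994 08:49:37 GMT')   # RFC 822/1123
assert (dt.year, dt.month, dt.day) == (1994, 11, 6)
assert (dt.hour, dt.minute, dt.second) == (8, 49, 37)

# RFC 850 format with a two-digit year resolves to 1994
assert parse_date('Sunday, 06-Nov-94 08:49:37 GMT').year == 1994
assert parse_date('not a date') is None
```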
755
756 def _dump_date(d, delim):
757 """Used for `http_date` and `cookie_date`."""
758 if d is None:
759 d = gmtime()
760 elif isinstance(d, datetime):
761 d = d.utctimetuple()
762 elif isinstance(d, (integer_types, float)):
763 d = gmtime(d)
764 return '%s, %02d%s%s%s%s %02d:%02d:%02d GMT' % (
765 ('Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun')[d.tm_wday],
766 d.tm_mday, delim,
767 ('Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep',
768 'Oct', 'Nov', 'Dec')[d.tm_mon - 1],
769 delim, str(d.tm_year), d.tm_hour, d.tm_min, d.tm_sec
770 )
771
772
773 def cookie_date(expires=None):
774 """Formats the time to ensure compatibility with Netscape's cookie
775 standard.
776
777 Accepts a floating point number expressed in seconds since the epoch, a
778 datetime object or a timetuple. All times in UTC. The :func:`parse_date`
779 function can be used to parse such a date.
780
781 Outputs a string in the format ``Wdy, DD-Mon-YYYY HH:MM:SS GMT``.
782
783 :param expires: If provided that date is used, otherwise the current.
784 """
785 return _dump_date(expires, '-')
786
787
788 def http_date(timestamp=None):
789 """Formats the time to match the RFC1123 date format.
790
791 Accepts a floating point number expressed in seconds since the epoch, a
792 datetime object or a timetuple. All times in UTC. The :func:`parse_date`
793 function can be used to parse such a date.
794
795 Outputs a string in the format ``Wdy, DD Mon YYYY HH:MM:SS GMT``.
796
797 :param timestamp: If provided that date is used, otherwise the current.
798 """
799 return _dump_date(timestamp, ' ')
800
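Both formatters accept a timestamp, a datetime, or nothing (current time); a deterministic sketch for `http_date` (illustrative, assuming Werkzeug is importable):

```python
from datetime import datetime
from werkzeug.http import http_date

epoch = http_date(0)                                   # unix timestamp
rfc_example = http_date(datetime(1994, 11, 6, 8, 49, 37))  # naive = UTC
assert epoch == 'Thu, 01 Jan 1970 00:00:00 GMT'
assert rfc_example == 'Sun, 06 Nov 1994 08:49:37 GMT'
```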
801
802 def parse_age(value=None):
803 """Parses a base-10 integer count of seconds into a timedelta.
804
805 If parsing fails, the return value is `None`.
806
807 :param value: a string consisting of an integer represented in base-10
808 :return: a :class:`datetime.timedelta` object or `None`.
809 """
810 if not value:
811 return None
812 try:
813 seconds = int(value)
814 except ValueError:
815 return None
816 if seconds < 0:
817 return None
818 try:
819 return timedelta(seconds=seconds)
820 except OverflowError:
821 return None
822
823
824 def dump_age(age=None):
825 """Formats the duration as a base-10 integer.
826
827 :param age: should be an integer number of seconds,
828 a :class:`datetime.timedelta` object, or,
829 if the age is unknown, `None` (default).
830 """
831 if age is None:
832 return
833 if isinstance(age, timedelta):
834 # do the equivalent of Python 2.7's timedelta.total_seconds(),
835 # but disregarding fractional seconds
836 age = age.seconds + (age.days * 24 * 3600)
837
838 age = int(age)
839 if age < 0:
840 raise ValueError('age cannot be negative')
841
842 return str(age)
843
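The manual computation in `dump_age` (presumably kept for pre-2.7 compatibility) is equivalent to ``timedelta.total_seconds()`` truncated to whole seconds. A standalone stdlib check of that equivalence:

```python
from datetime import timedelta

age = timedelta(days=2, seconds=30, microseconds=500000)
manual = age.seconds + (age.days * 24 * 3600)  # as in dump_age above
assert manual == 172830
assert manual == int(age.total_seconds())      # fractional part discarded
```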
844
845 def is_resource_modified(environ, etag=None, data=None, last_modified=None,
846 ignore_if_range=True):
847 """Convenience method for conditional requests.
848
849 :param environ: the WSGI environment of the request to be checked.
850 :param etag: the etag for the response for comparison.
851 :param data: or alternatively the data of the response to automatically
852 generate an etag using :func:`generate_etag`.
853 :param last_modified: an optional date of the last modification.
854 :param ignore_if_range: If `False`, `If-Range` header will be taken into
855 account.
856 :return: `True` if the resource was modified, otherwise `False`.
857 """
858 if etag is None and data is not None:
859 etag = generate_etag(data)
860 elif data is not None:
861 raise TypeError('both data and etag given')
862 if environ['REQUEST_METHOD'] not in ('GET', 'HEAD'):
863 return False
864
865 unmodified = False
866 if isinstance(last_modified, string_types):
867 last_modified = parse_date(last_modified)
868
869 # ensure that microsecond is zero because the HTTP spec does not transmit
870 # that either and we might have some false positives. See issue #39
871 if last_modified is not None:
872 last_modified = last_modified.replace(microsecond=0)
873
874 if_range = None
875 if not ignore_if_range and 'HTTP_RANGE' in environ:
876 # http://tools.ietf.org/html/rfc7233#section-3.2
877 # A server MUST ignore an If-Range header field received in a request
878 # that does not contain a Range header field.
879 if_range = parse_if_range_header(environ.get('HTTP_IF_RANGE'))
880
881 if if_range is not None and if_range.date is not None:
882 modified_since = if_range.date
883 else:
884 modified_since = parse_date(environ.get('HTTP_IF_MODIFIED_SINCE'))
885
886 if modified_since and last_modified and last_modified <= modified_since:
887 unmodified = True
888
889 if etag:
890 etag, _ = unquote_etag(etag)
891 if if_range is not None and if_range.etag is not None:
892 unmodified = parse_etags(if_range.etag).contains(etag)
893 else:
894 if_none_match = parse_etags(environ.get('HTTP_IF_NONE_MATCH'))
895 if if_none_match:
896 # http://tools.ietf.org/html/rfc7232#section-3.2
897 # "A recipient MUST use the weak comparison function when comparing
898 # entity-tags for If-None-Match"
899 unmodified = if_none_match.contains_weak(etag)
900
901 # https://tools.ietf.org/html/rfc7232#section-3.1
902 # "Origin server MUST use the strong comparison function when
903 # comparing entity-tags for If-Match"
904 if_match = parse_etags(environ.get('HTTP_IF_MATCH'))
905 if if_match:
906 unmodified = not if_match.is_strong(etag)
907
908 return not unmodified
909
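A minimal sketch of the conditional-request logic above (illustrative, assuming Werkzeug is importable), using a hand-built WSGI environ:

```python
from werkzeug.http import is_resource_modified

env = {'REQUEST_METHOD': 'GET'}
# no conditional headers: the client has nothing cached -> "modified"
assert is_resource_modified(env, etag='abc')

# a matching If-None-Match means the cached copy is still fresh
env['HTTP_IF_NONE_MATCH'] = '"abc"'
assert not is_resource_modified(env, etag='abc')
```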
910
911 def remove_entity_headers(headers, allowed=('expires', 'content-location')):
912 """Remove all entity headers from a list or :class:`Headers` object. This
913 operation works in-place. `Expires` and `Content-Location` headers are
914 by default not removed. The reason for this is :rfc:`2616` section
915 10.3.5 which specifies some entity headers that should be sent.
916
917 .. versionchanged:: 0.5
918 added `allowed` parameter.
919
920 :param headers: a list or :class:`Headers` object.
921 :param allowed: a list of headers that should still be allowed even though
922 they are entity headers.
923 """
924 allowed = set(x.lower() for x in allowed)
925 headers[:] = [(key, value) for key, value in headers if
926 not is_entity_header(key) or key.lower() in allowed]
927
928
929 def remove_hop_by_hop_headers(headers):
930 """Remove all HTTP/1.1 "Hop-by-Hop" headers from a list or
931 :class:`Headers` object. This operation works in-place.
932
933 .. versionadded:: 0.5
934
935 :param headers: a list or :class:`Headers` object.
936 """
937 headers[:] = [(key, value) for key, value in headers if
938 not is_hop_by_hop_header(key)]
939
940
941 def is_entity_header(header):
942 """Check if a header is an entity header.
943
944 .. versionadded:: 0.5
945
946 :param header: the header to test.
947 :return: `True` if it's an entity header, `False` otherwise.
948 """
949 return header.lower() in _entity_headers
950
951
952 def is_hop_by_hop_header(header):
953 """Check if a header is an HTTP/1.1 "Hop-by-Hop" header.
954
955 .. versionadded:: 0.5
956
957 :param header: the header to test.
958 :return: `True` if it's an HTTP/1.1 "Hop-by-Hop" header, `False` otherwise.
959 """
960 return header.lower() in _hop_by_hop_headers
961
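Both removal helpers above rely on ``headers[:] = [...]`` slice assignment so the filtering mutates the caller's object in place. A standalone sketch of that pattern with a hand-written subset of hop-by-hop names (the real set lives in the module-level ``_hop_by_hop_headers``):

```python
# illustrative subset of the HTTP/1.1 hop-by-hop headers
hop_by_hop = {'connection', 'keep-alive', 'transfer-encoding', 'upgrade'}

headers = [('Connection', 'close'), ('Content-Type', 'text/html')]
headers[:] = [(k, v) for k, v in headers if k.lower() not in hop_by_hop]
assert headers == [('Content-Type', 'text/html')]
```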
962
963 def parse_cookie(header, charset='utf-8', errors='replace', cls=None):
964 """Parse a cookie. Either from a string or WSGI environ.
965
966 By default encoding errors are replaced. If you want a different behavior
967 you can set `errors` to ``'ignore'`` or ``'strict'``. In strict mode a
968 :exc:`UnicodeDecodeError` is raised.
969
970 .. versionchanged:: 0.5
971 This function now returns a :class:`TypeConversionDict` instead of a
972 regular dict. The `cls` parameter was added.
973
974 :param header: the header to be used to parse the cookie. Alternatively
975 this can be a WSGI environment.
976 :param charset: the charset for the cookie values.
977 :param errors: the error behavior for the charset decoding.
978 :param cls: an optional dict class to use. If this is not specified
979 or `None` the default :class:`TypeConversionDict` is
980 used.
981 """
982 if isinstance(header, dict):
983 header = header.get('HTTP_COOKIE', '')
984 elif header is None:
985 header = ''
986
987 # If the value is a unicode string it's mangled through latin1. This is
988 # done because under PEP 3333 on Python 3 all headers are assumed latin1,
989 # which is incorrect for cookies, which are sent in the page encoding. As
990 # a result we round-trip through latin1 and decode with the real charset.
991 if isinstance(header, text_type):
992 header = header.encode('latin1', 'replace')
993
994 if cls is None:
995 cls = TypeConversionDict
996
997 def _parse_pairs():
998 for key, val in _cookie_parse_impl(header):
999 key = to_unicode(key, charset, errors, allow_none_charset=True)
1000 val = to_unicode(val, charset, errors, allow_none_charset=True)
1001 yield try_coerce_native(key), val
1002
1003 return cls(_parse_pairs())
1004
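A short usage sketch (illustrative, assuming Werkzeug is importable); the `TypeConversionDict` return type is what makes the typed ``get`` possible:

```python
from werkzeug.http import parse_cookie

cookies = parse_cookie('a=b; n=42')
assert cookies['a'] == 'b'
# TypeConversionDict allows typed access with a conversion callable
assert cookies.get('n', type=int) == 42
```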
1005
1006 def dump_cookie(key, value='', max_age=None, expires=None, path='/',
1007 domain=None, secure=False, httponly=False,
1008 charset='utf-8', sync_expires=True, max_size=4093,
1009 samesite=None):
1010 """Creates a new Set-Cookie header without the ``Set-Cookie`` prefix.
1011 The parameters are the same as in the cookie `Morsel` object in the
1012 Python standard library but it accepts unicode data, too.
1013
1014 On Python 3 the return value of this function will be a unicode
1015 string, on Python 2 it will be a native string. In both cases the
1016 return value is usually restricted to ascii as the vast majority of
1017 values are properly escaped, but that is no guarantee. If a unicode
1018 string is returned it's tunneled through latin1 as required by
1019 PEP 3333.
1020
1021 The return value is not ASCII safe if the key contains unicode
1022 characters. This is technically against the specification but
1023 happens in the wild. It's strongly recommended to not use
1024 non-ASCII values for the keys.
1025
1026 :param max_age: should be a number of seconds, or `None` (default) if
1027 the cookie should last only as long as the client's
1028 browser session. Additionally `timedelta` objects
1029 are accepted, too.
1030 :param expires: should be a `datetime` object or unix timestamp.
1031 :param path: limits the cookie to a given path, per default it will
1032 span the whole domain.
1033 :param domain: Use this if you want to set a cross-domain cookie. For
1034 example, ``domain=".example.com"`` will set a cookie
1035 that is readable by the domain ``www.example.com``,
1036 ``foo.example.com`` etc. Otherwise, a cookie will only
1037 be readable by the domain that set it.
1038 :param secure: The cookie will only be available via HTTPS
1039 :param httponly: disallow JavaScript to access the cookie. This is an
1040 extension to the cookie standard and probably not
1041 supported by all browsers.
1042 :param charset: the encoding for unicode values.
1043 :param sync_expires: automatically set expires if max_age is defined
1044 but expires not.
1045 :param max_size: Warn if the final header value exceeds this size. The
1046 default, 4093, should be safely `supported by most browsers
1047 <cookie_>`_. Set to 0 to disable this check.
1048 :param samesite: Limits the scope of the cookie such that it will only
1049 be attached to requests if those requests are "same-site".
1050
1051 .. _`cookie`: http://browsercookielimits.squawky.net/
1052 """
1053 key = to_bytes(key, charset)
1054 value = to_bytes(value, charset)
1055
1056 if path is not None:
1057 path = iri_to_uri(path, charset)
1058 domain = _make_cookie_domain(domain)
1059 if isinstance(max_age, timedelta):
1060 max_age = (max_age.days * 60 * 60 * 24) + max_age.seconds
1061 if expires is not None:
1062 if not isinstance(expires, string_types):
1063 expires = cookie_date(expires)
1064 elif max_age is not None and sync_expires:
1065 expires = to_bytes(cookie_date(time() + max_age))
1066
1067 samesite = samesite.title() if samesite else None
1068 if samesite not in ('Strict', 'Lax', None):
1069 raise ValueError("invalid SameSite value; must be 'Strict', 'Lax' or None")
1070
1071 buf = [key + b'=' + _cookie_quote(value)]
1072
1073 # XXX: In theory all of these parameters that are not marked with `None`
1074 # should be quoted. Because stdlib did not quote it before I did not
1075 # want to introduce quoting there now.
1076 for k, v, q in ((b'Domain', domain, True),
1077 (b'Expires', expires, False,),
1078 (b'Max-Age', max_age, False),
1079 (b'Secure', secure, None),
1080 (b'HttpOnly', httponly, None),
1081 (b'Path', path, False),
1082 (b'SameSite', samesite, False)):
1083 if q is None:
1084 if v:
1085 buf.append(k)
1086 continue
1087
1088 if v is None:
1089 continue
1090
1091 tmp = bytearray(k)
1092 if not isinstance(v, (bytes, bytearray)):
1093 v = to_bytes(text_type(v), charset)
1094 if q:
1095 v = _cookie_quote(v)
1096 tmp += b'=' + v
1097 buf.append(bytes(tmp))
1098
1099 # The return value will be an incorrectly encoded latin1 header on
1100 # Python 3 for consistency with the headers object and a bytestring
1101 # on Python 2 because that's how the API makes more sense.
1102 rv = b'; '.join(buf)
1103 if not PY2:
1104 rv = rv.decode('latin1')
1105
1106 # Warn if the final value of the cookie exceeds the limit. If the
1107 # cookie is too large, then it may be silently ignored, which can be quite
1108 # hard to debug.
1109 cookie_size = len(rv)
1110
1111 if max_size and cookie_size > max_size:
1112 value_size = len(value)
1113 warnings.warn(
1114 'The "{key}" cookie is too large: the value was {value_size} bytes'
1115 ' but the header required {extra_size} extra bytes. The final size'
1116 ' was {cookie_size} bytes but the limit is {max_size} bytes.'
1117 ' Browsers may silently ignore cookies larger than this.'.format(
1118 key=key,
1119 value_size=value_size,
1120 extra_size=cookie_size - value_size,
1121 cookie_size=cookie_size,
1122 max_size=max_size
1123 ),
1124 stacklevel=2
1125 )
1126
1127 return rv
1128
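A usage sketch for the serializer above (illustrative, assuming Werkzeug is importable; only stable attribute substrings are checked since ``Expires`` depends on the current time):

```python
from datetime import timedelta
from werkzeug.http import dump_cookie

header = dump_cookie('session', 'abc', max_age=timedelta(hours=1),
                     httponly=True)
assert header.startswith('session=abc')
assert 'Max-Age=3600' in header   # timedelta converted to whole seconds
assert 'HttpOnly' in header       # flag attribute, emitted without a value
```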
1129
1130 def is_byte_range_valid(start, stop, length):
1131 """Checks if a given byte content range is valid for the given length.
1132
1133 .. versionadded:: 0.7
1134 """
1135 if (start is None) != (stop is None):
1136 return False
1137 elif start is None:
1138 return length is None or length >= 0
1139 elif length is None:
1140 return 0 <= start < stop
1141 elif start >= stop:
1142 return False
1143 return 0 <= start < length
1144
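The predicate above encodes the Content-Range invariants; a few illustrative checks (assuming Werkzeug is importable):

```python
from werkzeug.http import is_byte_range_valid

assert is_byte_range_valid(0, 500, 1000)        # fully specified, in bounds
assert is_byte_range_valid(None, None, None)    # nothing known yet is fine
assert not is_byte_range_valid(0, None, 1000)   # start and stop come together
assert not is_byte_range_valid(500, 400, 1000)  # start must be below stop
```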
1145
1146 # circular dependency fun
1147 from werkzeug.datastructures import Accept, HeaderSet, ETags, Authorization, \
1148 WWWAuthenticate, TypeConversionDict, IfRange, Range, ContentRange, \
1149 RequestCacheControl
1150
1151
1152 # DEPRECATED
1153 # backwards compatible imports
1154 from werkzeug.datastructures import ( # noqa
1155 MIMEAccept, CharsetAccept, LanguageAccept, Headers
1156 )
1157 from werkzeug.urls import iri_to_uri
+0
-420
werkzeug/local.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.local
3 ~~~~~~~~~~~~~~
4
5 This module implements context-local objects.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 import copy
11 from functools import update_wrapper
12 from werkzeug.wsgi import ClosingIterator
13 from werkzeug._compat import PY2, implements_bool
14
15 # since each thread has its own greenlet we can just use those as identifiers
16 # for the context. If greenlets are not available we fall back to the
17 # current thread ident depending on where it is.
18 try:
19 from greenlet import getcurrent as get_ident
20 except ImportError:
21 try:
22 from thread import get_ident
23 except ImportError:
24 from _thread import get_ident
25
26
27 def release_local(local):
28 """Releases the contents of the local for the current context.
29 This makes it possible to use locals without a manager.
30
31 Example::
32
33 >>> loc = Local()
34 >>> loc.foo = 42
35 >>> release_local(loc)
36 >>> hasattr(loc, 'foo')
37 False
38
39 With this function one can release :class:`Local` objects as well
40 as :class:`LocalStack` objects. However it is not possible to
41 release data held by proxies that way, one always has to retain
42 a reference to the underlying local object in order to be able
43 to release it.
44
45 .. versionadded:: 0.6.1
46 """
47 local.__release_local__()
48
49
50 class Local(object):
51 __slots__ = ('__storage__', '__ident_func__')
52
53 def __init__(self):
54 object.__setattr__(self, '__storage__', {})
55 object.__setattr__(self, '__ident_func__', get_ident)
56
57 def __iter__(self):
58 return iter(self.__storage__.items())
59
60 def __call__(self, proxy):
61 """Create a proxy for a name."""
62 return LocalProxy(self, proxy)
63
64 def __release_local__(self):
65 self.__storage__.pop(self.__ident_func__(), None)
66
67 def __getattr__(self, name):
68 try:
69 return self.__storage__[self.__ident_func__()][name]
70 except KeyError:
71 raise AttributeError(name)
72
73 def __setattr__(self, name, value):
74 ident = self.__ident_func__()
75 storage = self.__storage__
76 try:
77 storage[ident][name] = value
78 except KeyError:
79 storage[ident] = {name: value}
80
81 def __delattr__(self, name):
82 try:
83 del self.__storage__[self.__ident_func__()][name]
84 except KeyError:
85 raise AttributeError(name)
86
87
88 class LocalStack(object):
89
90 """This class works similar to a :class:`Local` but keeps a stack
91 of objects instead. This is best explained with an example::
92
93 >>> ls = LocalStack()
94 >>> ls.push(42)
95 >>> ls.top
96 42
97 >>> ls.push(23)
98 >>> ls.top
99 23
100 >>> ls.pop()
101 23
102 >>> ls.top
103 42
104
105 They can be force-released by using a :class:`LocalManager` or with
106 the :func:`release_local` function, but the correct way is to pop the
107 item from the stack after use. When the stack is empty it will
108 no longer be bound to the current context (and as such released).
109
110 Calling the stack without arguments returns a proxy that resolves to
111 the topmost item on the stack.
112
113 .. versionadded:: 0.6.1
114 """
115
116 def __init__(self):
117 self._local = Local()
118
119 def __release_local__(self):
120 self._local.__release_local__()
121
122 def _get__ident_func__(self):
123 return self._local.__ident_func__
124
125 def _set__ident_func__(self, value):
126 object.__setattr__(self._local, '__ident_func__', value)
127 __ident_func__ = property(_get__ident_func__, _set__ident_func__)
128 del _get__ident_func__, _set__ident_func__
129
130 def __call__(self):
131 def _lookup():
132 rv = self.top
133 if rv is None:
134 raise RuntimeError('object unbound')
135 return rv
136 return LocalProxy(_lookup)
137
138 def push(self, obj):
139 """Pushes a new item to the stack"""
140 rv = getattr(self._local, 'stack', None)
141 if rv is None:
142 self._local.stack = rv = []
143 rv.append(obj)
144 return rv
145
146 def pop(self):
147 """Removes the topmost item from the stack, will return the
148 old value or `None` if the stack was already empty.
149 """
150 stack = getattr(self._local, 'stack', None)
151 if stack is None:
152 return None
153 elif len(stack) == 1:
154 release_local(self._local)
155 return stack[-1]
156 else:
157 return stack.pop()
158
159 @property
160 def top(self):
161 """The topmost item on the stack. If the stack is empty,
162 `None` is returned.
163 """
164 try:
165 return self._local.stack[-1]
166 except (AttributeError, IndexError):
167 return None
168
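The proxy behavior described in the docstring can be sketched like this (illustrative, assuming Werkzeug is importable): the object returned by calling the stack always resolves to whatever is on top at access time.

```python
from werkzeug.local import LocalStack

stack = LocalStack()
top = stack()                    # proxy resolving to the current top
stack.push({'user': 'alice'})
assert top['user'] == 'alice'
stack.push({'user': 'bob'})
assert top['user'] == 'bob'      # the proxy follows the stack
stack.pop()
assert top['user'] == 'alice'
```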
169
170 class LocalManager(object):
171
172 """Local objects cannot manage themselves. For that you need a local
173 manager. You can pass a local manager multiple locals or add them later
174 by appending them to `manager.locals`. Every time the manager cleans up,
175 it will clean up all the data left in the locals for this context.
176
177 The `ident_func` parameter can be used to override the default ident
178 function for the wrapped locals.
179
180 .. versionchanged:: 0.6.1
181 Instead of a manager the :func:`release_local` function can be used
182 as well.
183
184 .. versionchanged:: 0.7
185 `ident_func` was added.
186 """
187
188 def __init__(self, locals=None, ident_func=None):
189 if locals is None:
190 self.locals = []
191 elif isinstance(locals, Local):
192 self.locals = [locals]
193 else:
194 self.locals = list(locals)
195 if ident_func is not None:
196 self.ident_func = ident_func
197 for local in self.locals:
198 object.__setattr__(local, '__ident_func__', ident_func)
199 else:
200 self.ident_func = get_ident
201
202 def get_ident(self):
203 """Return the context identifier the local objects use internally for
204 this context. You cannot override this method to change the behavior
205 but use it to link other context local objects (such as SQLAlchemy's
206 scoped sessions) to the Werkzeug locals.
207
208 .. versionchanged:: 0.7
209 You can pass a different ident function to the local manager that
210 will then be propagated to all the locals passed to the
211 constructor.
212 """
213 return self.ident_func()
214
215 def cleanup(self):
216 """Manually clean up the data in the locals for this context. Call
217 this at the end of the request or use `make_middleware()`.
218 """
219 for local in self.locals:
220 release_local(local)
221
222 def make_middleware(self, app):
223 """Wrap a WSGI application so that cleaning up happens after
224 request end.
225 """
226 def application(environ, start_response):
227 return ClosingIterator(app(environ, start_response), self.cleanup)
228 return application
229
230 def middleware(self, func):
231 """Like `make_middleware` but for decorating functions.
232
233 Example usage::
234
235 @manager.middleware
236 def application(environ, start_response):
237 ...
238
239 The difference to `make_middleware` is that the function passed
240 will have all the arguments copied from the inner application
241 (name, docstring, module).
242 """
243 return update_wrapper(self.make_middleware(func), func)
244
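A minimal sketch of the middleware cleanup described above (illustrative, assuming Werkzeug is importable): data stashed in a managed local during a request is released once the response iterator is closed, as a WSGI server would do.

```python
from werkzeug.local import Local, LocalManager

loc = Local()
manager = LocalManager([loc])

def app(environ, start_response):
    loc.request_id = 42          # stash something for this context
    return [b'ok']

wrapped = manager.make_middleware(app)
it = wrapped({}, lambda *args: None)
assert list(it) == [b'ok']
it.close()                       # a WSGI server calls close() when done
assert not hasattr(loc, 'request_id')  # the local was cleaned up
```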
245 def __repr__(self):
246 return '<%s storages: %d>' % (
247 self.__class__.__name__,
248 len(self.locals)
249 )
250
251
252 @implements_bool
253 class LocalProxy(object):
254
255 """Acts as a proxy for a werkzeug local. Forwards all operations to
256 the proxied object. The only operations not supported for forwarding
257 are right-hand operands and any kind of assignment.
258
259 Example usage::
260
261 from werkzeug.local import Local
262 l = Local()
263
264 # these are proxies
265 request = l('request')
266 user = l('user')
267
268
269 from werkzeug.local import LocalStack
270 _response_local = LocalStack()
271
272 # this is a proxy
273 response = _response_local()
274
275 Whenever something is bound to l.user / l.request the proxy objects
276 will forward all operations. If no object is bound a :exc:`RuntimeError`
277 will be raised.
278
279 To create proxies to :class:`Local` or :class:`LocalStack` objects,
280 call the object as shown above. If you want to have a proxy to an
281 object looked up by a function, you can (as of Werkzeug 0.6.1) pass
282 a function to the :class:`LocalProxy` constructor::
283
284 session = LocalProxy(lambda: get_current_request().session)
285
286 .. versionchanged:: 0.6.1
287 The class can be instantiated with a callable as well now.
288 """
289 __slots__ = ('__local', '__dict__', '__name__', '__wrapped__')
290
291 def __init__(self, local, name=None):
292 object.__setattr__(self, '_LocalProxy__local', local)
293 object.__setattr__(self, '__name__', name)
294 if callable(local) and not hasattr(local, '__release_local__'):
295 # "local" is a callable that is not an instance of Local or
296 # LocalManager: mark it as a wrapped function.
297 object.__setattr__(self, '__wrapped__', local)
298
299 def _get_current_object(self):
300 """Return the current object. This is useful if you want the real
301 object behind the proxy for performance reasons or because
302 you want to pass the object into a different context.
303 """
304 if not hasattr(self.__local, '__release_local__'):
305 return self.__local()
306 try:
307 return getattr(self.__local, self.__name__)
308 except AttributeError:
309 raise RuntimeError('no object bound to %s' % self.__name__)
310
311 @property
312 def __dict__(self):
313 try:
314 return self._get_current_object().__dict__
315 except RuntimeError:
316 raise AttributeError('__dict__')
317
318 def __repr__(self):
319 try:
320 obj = self._get_current_object()
321 except RuntimeError:
322 return '<%s unbound>' % self.__class__.__name__
323 return repr(obj)
324
325 def __bool__(self):
326 try:
327 return bool(self._get_current_object())
328 except RuntimeError:
329 return False
330
331 def __unicode__(self):
332 try:
333 return unicode(self._get_current_object()) # noqa
334 except RuntimeError:
335 return repr(self)
336
337 def __dir__(self):
338 try:
339 return dir(self._get_current_object())
340 except RuntimeError:
341 return []
342
343 def __getattr__(self, name):
344 if name == '__members__':
345 return dir(self._get_current_object())
346 return getattr(self._get_current_object(), name)
347
348 def __setitem__(self, key, value):
349 self._get_current_object()[key] = value
350
351 def __delitem__(self, key):
352 del self._get_current_object()[key]
353
354 if PY2:
355 __getslice__ = lambda x, i, j: x._get_current_object()[i:j]
356
357 def __setslice__(self, i, j, seq):
358 self._get_current_object()[i:j] = seq
359
360 def __delslice__(self, i, j):
361 del self._get_current_object()[i:j]
362
363 __setattr__ = lambda x, n, v: setattr(x._get_current_object(), n, v)
364 __delattr__ = lambda x, n: delattr(x._get_current_object(), n)
365 __str__ = lambda x: str(x._get_current_object())
366 __lt__ = lambda x, o: x._get_current_object() < o
367 __le__ = lambda x, o: x._get_current_object() <= o
368 __eq__ = lambda x, o: x._get_current_object() == o
369 __ne__ = lambda x, o: x._get_current_object() != o
370 __gt__ = lambda x, o: x._get_current_object() > o
371 __ge__ = lambda x, o: x._get_current_object() >= o
372 __cmp__ = lambda x, o: cmp(x._get_current_object(), o) # noqa
373 __hash__ = lambda x: hash(x._get_current_object())
374 __call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw)
375 __len__ = lambda x: len(x._get_current_object())
376 __getitem__ = lambda x, i: x._get_current_object()[i]
377 __iter__ = lambda x: iter(x._get_current_object())
378 __contains__ = lambda x, i: i in x._get_current_object()
379 __add__ = lambda x, o: x._get_current_object() + o
380 __sub__ = lambda x, o: x._get_current_object() - o
381 __mul__ = lambda x, o: x._get_current_object() * o
382 __floordiv__ = lambda x, o: x._get_current_object() // o
383 __mod__ = lambda x, o: x._get_current_object() % o
384 __divmod__ = lambda x, o: x._get_current_object().__divmod__(o)
385 __pow__ = lambda x, o: x._get_current_object() ** o
386 __lshift__ = lambda x, o: x._get_current_object() << o
387 __rshift__ = lambda x, o: x._get_current_object() >> o
388 __and__ = lambda x, o: x._get_current_object() & o
389 __xor__ = lambda x, o: x._get_current_object() ^ o
390 __or__ = lambda x, o: x._get_current_object() | o
391 __div__ = lambda x, o: x._get_current_object().__div__(o)
392 __truediv__ = lambda x, o: x._get_current_object().__truediv__(o)
393 __neg__ = lambda x: -(x._get_current_object())
394 __pos__ = lambda x: +(x._get_current_object())
395 __abs__ = lambda x: abs(x._get_current_object())
396 __invert__ = lambda x: ~(x._get_current_object())
397 __complex__ = lambda x: complex(x._get_current_object())
398 __int__ = lambda x: int(x._get_current_object())
399 __long__ = lambda x: long(x._get_current_object()) # noqa
400 __float__ = lambda x: float(x._get_current_object())
401 __oct__ = lambda x: oct(x._get_current_object())
402 __hex__ = lambda x: hex(x._get_current_object())
403 __index__ = lambda x: x._get_current_object().__index__()
404 __coerce__ = lambda x, o: x._get_current_object().__coerce__(x, o)
405 __enter__ = lambda x: x._get_current_object().__enter__()
406 __exit__ = lambda x, *a, **kw: x._get_current_object().__exit__(*a, **kw)
407 __radd__ = lambda x, o: o + x._get_current_object()
408 __rsub__ = lambda x, o: o - x._get_current_object()
409 __rmul__ = lambda x, o: o * x._get_current_object()
410 __rdiv__ = lambda x, o: o / x._get_current_object()
411 if PY2:
412 __rtruediv__ = lambda x, o: x._get_current_object().__rtruediv__(o)
413 else:
414 __rtruediv__ = __rdiv__
415 __rfloordiv__ = lambda x, o: o // x._get_current_object()
416 __rmod__ = lambda x, o: o % x._get_current_object()
417 __rdivmod__ = lambda x, o: x._get_current_object().__rdivmod__(o)
418 __copy__ = lambda x: copy.copy(x._get_current_object())
419 __deepcopy__ = lambda x, memo: copy.deepcopy(x._get_current_object(), memo)
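The operator-forwarding pattern above can be sketched in a few lines of plain Python. This is a minimal illustration of the idea, not Werkzeug's implementation; `TinyProxy` and `_lookup` are made-up names:

```python
class TinyProxy(object):
    """Forward attribute access and a few operators to a looked-up object."""

    def __init__(self, lookup):
        # Store the lookup callable the same way LocalProxy does, via
        # object.__setattr__, so a __setattr__ override could not interfere.
        object.__setattr__(self, "_lookup", lookup)

    def _get_current_object(self):
        return object.__getattribute__(self, "_lookup")()

    def __getattr__(self, name):
        # Any attribute not found on the proxy is fetched from the target.
        return getattr(self._get_current_object(), name)

    # Operators must be forwarded one by one, exactly as LocalProxy does.
    __repr__ = lambda x: repr(x._get_current_object())
    __len__ = lambda x: len(x._get_current_object())
    __add__ = lambda x, o: x._get_current_object() + o


value = [1, 2]
p = TinyProxy(lambda: value)
assert len(p) == 2
assert p + [3] == [1, 2, 3]
value.append(9)       # the proxy always sees the *current* object
assert len(p) == 3
```

Because the target is looked up on every operation, rebinding the underlying object is immediately visible through the proxy.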
+0
-106
werkzeug/posixemulation.py
0 # -*- coding: utf-8 -*-
1 r"""
2 werkzeug.posixemulation
3 ~~~~~~~~~~~~~~~~~~~~~~~
4
5 Provides a POSIX emulation for some features that are relevant to
6 web applications. The main purpose is to simplify support for
7 systems such as Windows NT that are not 100% POSIX compatible.
8
9 Currently this only implements a :func:`rename` function that
10 follows POSIX semantics, e.g. if the target file already exists it
11 will be replaced without asking.
12
13 This module was introduced in 0.6.1 and is not a public interface.
14 It might become one in later versions of Werkzeug.
15
16 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
17 :license: BSD, see LICENSE for more details.
18 """
19 import sys
20 import os
21 import errno
22 import time
23 import random
24
25 from ._compat import to_unicode
26 from .filesystem import get_filesystem_encoding
27
28
29 can_rename_open_file = False
30 if os.name == 'nt': # pragma: no cover
31 _rename = lambda src, dst: False
32 _rename_atomic = lambda src, dst: False
33
34 try:
35 import ctypes
36
37 _MOVEFILE_REPLACE_EXISTING = 0x1
38 _MOVEFILE_WRITE_THROUGH = 0x8
39 _MoveFileEx = ctypes.windll.kernel32.MoveFileExW
40
41 def _rename(src, dst):
42 src = to_unicode(src, get_filesystem_encoding())
43 dst = to_unicode(dst, get_filesystem_encoding())
44 if _rename_atomic(src, dst):
45 return True
46 retry = 0
47 rv = False
48 while not rv and retry < 100:
49 rv = _MoveFileEx(src, dst, _MOVEFILE_REPLACE_EXISTING |
50 _MOVEFILE_WRITE_THROUGH)
51 if not rv:
52 time.sleep(0.001)
53 retry += 1
54 return rv
55
56 # new in Vista and Windows Server 2008
57 _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction
58 _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction
59 _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW
60 _CloseHandle = ctypes.windll.kernel32.CloseHandle
61 can_rename_open_file = True
62
63 def _rename_atomic(src, dst):
64 ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, 'Werkzeug rename')
65 if ta == -1:
66 return False
67 try:
68 retry = 0
69 rv = False
70 while not rv and retry < 100:
71 rv = _MoveFileTransacted(src, dst, None, None,
72 _MOVEFILE_REPLACE_EXISTING |
73 _MOVEFILE_WRITE_THROUGH, ta)
74 if rv:
75 rv = _CommitTransaction(ta)
76 break
77 else:
78 time.sleep(0.001)
79 retry += 1
80 return rv
81 finally:
82 _CloseHandle(ta)
83 except Exception:
84 pass
85
86 def rename(src, dst):
87 # Try atomic or pseudo-atomic rename
88 if _rename(src, dst):
89 return
90 # Fall back to "move away and replace"
91 try:
92 os.rename(src, dst)
93 except OSError as e:
94 if e.errno != errno.EEXIST:
95 raise
96 old = "%s-%08x" % (dst, random.randint(0, sys.maxsize))
97 os.rename(dst, old)
98 os.rename(src, dst)
99 try:
100 os.unlink(old)
101 except Exception:
102 pass
103 else:
104 rename = os.rename
105 can_rename_open_file = True
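On Python 3.3 and later the standard library already provides the overwrite-on-rename semantics this module emulates: `os.replace` replaces an existing destination on both POSIX and Windows. A small sketch (not this module's code; `posix_rename` is an illustrative name):

```python
import os
import tempfile

def posix_rename(src, dst):
    # os.replace overwrites an existing dst on every platform, which is
    # exactly the POSIX rename behaviour described in the docstring above.
    os.replace(src, dst)

# Demo: an existing destination is silently replaced.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "new.txt")
dst = os.path.join(workdir, "old.txt")
with open(src, "w") as f:
    f.write("new contents")
with open(dst, "w") as f:
    f.write("old contents")
posix_rename(src, dst)
with open(dst) as f:
    assert f.read() == "new contents"
assert not os.path.exists(src)
```

Note that `os.rename` alone raises an error on Windows when the destination exists, which is why the ctypes fallback above (or `os.replace`) is needed there.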
+0
-1792
werkzeug/routing.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.routing
3 ~~~~~~~~~~~~~~~~
4
5 When it comes to combining multiple controller or view functions (however
6 you want to call them) you need a dispatcher. A simple way would be
7 applying regular expression tests on the ``PATH_INFO`` and calling
8 registered callback functions that return the value then.
9
10 This module implements a much more powerful system than simple regular
11 expression matching because it can also convert values in the URLs and
12 build URLs.
13
14 Here is a simple example that creates a URL map for an application with
15 two subdomains (www and kb) and some URL rules:
16
17 >>> m = Map([
18 ... # Static URLs
19 ... Rule('/', endpoint='static/index'),
20 ... Rule('/about', endpoint='static/about'),
21 ... Rule('/help', endpoint='static/help'),
22 ... # Knowledge Base
23 ... Subdomain('kb', [
24 ... Rule('/', endpoint='kb/index'),
25 ... Rule('/browse/', endpoint='kb/browse'),
26 ... Rule('/browse/<int:id>/', endpoint='kb/browse'),
27 ... Rule('/browse/<int:id>/<int:page>', endpoint='kb/browse')
28 ... ])
29 ... ], default_subdomain='www')
30
31 If the application doesn't use subdomains it's perfectly fine to not set
32 the default subdomain and not use the `Subdomain` rule factory. The endpoint
33 in the rules can be anything, for example import paths or unique
34 identifiers. The WSGI application can use those endpoints to get the
35 handler for that URL. It doesn't have to be a string at all but it's
36 recommended.
37
38 Now it's possible to create a URL adapter for one of the subdomains and
39 build URLs:
40
41 >>> c = m.bind('example.com')
42 >>> c.build("kb/browse", dict(id=42))
43 'http://kb.example.com/browse/42/'
44 >>> c.build("kb/browse", dict())
45 'http://kb.example.com/browse/'
46 >>> c.build("kb/browse", dict(id=42, page=3))
47 'http://kb.example.com/browse/42/3'
48 >>> c.build("static/about")
49 '/about'
50 >>> c.build("static/index", force_external=True)
51 'http://www.example.com/'
52
53 >>> c = m.bind('example.com', subdomain='kb')
54 >>> c.build("static/about")
55 'http://www.example.com/about'
56
57 The first argument to bind is the server name *without* the subdomain.
58 By default it will assume that the script is mounted on the root, but
59 often that's not the case so you can provide the real mount point as
60 second argument:
61
62 >>> c = m.bind('example.com', '/applications/example')
63
64 The third argument can be the subdomain, if not given the default
65 subdomain is used. For more details about binding have a look at the
66 documentation of the `MapAdapter`.
67
68 And here is how you can match URLs:
69
70 >>> c = m.bind('example.com')
71 >>> c.match("/")
72 ('static/index', {})
73 >>> c.match("/about")
74 ('static/about', {})
75 >>> c = m.bind('example.com', '/', 'kb')
76 >>> c.match("/")
77 ('kb/index', {})
78 >>> c.match("/browse/42/23")
79 ('kb/browse', {'id': 42, 'page': 23})
80
81 If matching fails you get a `NotFound` exception. If the rule thinks
82 it's a good idea to redirect (for example because the URL was defined
83 to have a slash at the end but the request was missing that slash) it
84 will raise a `RequestRedirect` exception. Both are subclasses of the
85 `HTTPException` so you can use those errors as responses in the
86 application.
87
88 If matching succeeded but the URL rule was incompatible with the given
89 method (for example, there were only rules for `GET` and `HEAD` and
90 the routing system tried to match a `POST` request) a `MethodNotAllowed`
91 exception is raised.
92
93
94 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
95 :license: BSD, see LICENSE for more details.
96 """
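The "simple regular expression tests on ``PATH_INFO``" approach that the docstring contrasts with this module can be sketched as follows. This is the naive alternative, not Werkzeug's code; `routes` and `dispatch` are illustrative names:

```python
import re

# A flat table of compiled patterns mapped to endpoint names.
routes = [
    (re.compile(r"^/$"), "index"),
    (re.compile(r"^/browse/(?P<id>\d+)/$"), "browse"),
]

def dispatch(path_info):
    for pattern, endpoint in routes:
        m = pattern.match(path_info)
        if m is not None:
            # Values come back as raw strings: no type conversion and,
            # crucially, no way to build URLs back from an endpoint.
            return endpoint, m.groupdict()
    return None

assert dispatch("/") == ("index", {})
assert dispatch("/browse/42/") == ("browse", {"id": "42"})
assert dispatch("/missing") is None
```

The `Map`/`Rule` system below improves on this by converting values (``<int:id>`` yields an ``int``) and by supporting the reverse operation, URL building.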
97 import difflib
98 import re
99 import uuid
100 import posixpath
101
102 from pprint import pformat
103 from threading import Lock
104
105 from werkzeug.urls import url_encode, url_quote, url_join
106 from werkzeug.utils import redirect, format_string
107 from werkzeug.exceptions import HTTPException, NotFound, MethodNotAllowed, \
108 BadHost
109 from werkzeug._internal import _get_environ, _encode_idna
110 from werkzeug._compat import itervalues, iteritems, to_unicode, to_bytes, \
111 text_type, string_types, native_string_result, \
112 implements_to_string, wsgi_decoding_dance
113 from werkzeug.datastructures import ImmutableDict, MultiDict
114 from werkzeug.utils import cached_property
115
116
117 _rule_re = re.compile(r'''
118 (?P<static>[^<]*) # static rule data
119 <
120 (?:
121 (?P<converter>[a-zA-Z_][a-zA-Z0-9_]*) # converter name
122 (?:\((?P<args>.*?)\))? # converter arguments
123 \: # variable delimiter
124 )?
125 (?P<variable>[a-zA-Z_][a-zA-Z0-9_]*) # variable name
126 >
127 ''', re.VERBOSE)
128 _simple_rule_re = re.compile(r'<([^>]+)>')
129 _converter_args_re = re.compile(r'''
130 ((?P<name>\w+)\s*=\s*)?
131 (?P<value>
132 True|False|
133 \d+\.\d+|
134 \d+\.|
135 \d+|
136 [\w\d_.]+|
137 [urUR]?(?P<stringval>"[^"]*?"|'[^']*')
138 )\s*,
139 ''', re.VERBOSE | re.UNICODE)
140
141
142 _PYTHON_CONSTANTS = {
143 'None': None,
144 'True': True,
145 'False': False
146 }
147
148
149 def _pythonize(value):
150 if value in _PYTHON_CONSTANTS:
151 return _PYTHON_CONSTANTS[value]
152 for convert in int, float:
153 try:
154 return convert(value)
155 except ValueError:
156 pass
157 if value[:1] == value[-1:] and value[0] in '"\'':
158 value = value[1:-1]
159 return text_type(value)
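The conversion rules of `_pythonize` (constants first, then numbers, then quoted strings) can be shown with a standalone sketch that mirrors the function above; `pythonize` here is a copy for illustration, not an import from Werkzeug:

```python
def pythonize(value):
    # Order matters: named constants first, then numeric literals,
    # then quoted strings; anything else stays a plain string.
    constants = {"None": None, "True": True, "False": False}
    if value in constants:
        return constants[value]
    for convert in (int, float):
        try:
            return convert(value)
        except ValueError:
            pass
    if value[:1] == value[-1:] and value[:1] in ("'", '"'):
        value = value[1:-1]
    return value

assert pythonize("True") is True
assert pythonize("42") == 42
assert pythonize("3.5") == 3.5
assert pythonize("'foo'") == "foo"
assert pythonize("bar") == "bar"
```

This is what lets a rule like ``<string(length=2):lang_code>`` receive ``length`` as an ``int`` rather than the string ``"2"``.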
160
161
162 def parse_converter_args(argstr):
163 argstr += ','
164 args = []
165 kwargs = {}
166
167 for item in _converter_args_re.finditer(argstr):
168 value = item.group('stringval')
169 if value is None:
170 value = item.group('value')
171 value = _pythonize(value)
172 if not item.group('name'):
173 args.append(value)
174 else:
175 name = item.group('name')
176 kwargs[name] = value
177
178 return tuple(args), kwargs
179
180
181 def parse_rule(rule):
182 """Parse a rule and return it as a generator. Each iteration yields tuples
183 in the form ``(converter, arguments, variable)``. If the converter is
184 `None` it's a static URL part, otherwise it's a dynamic one.
185
186 :internal:
187 """
188 pos = 0
189 end = len(rule)
190 do_match = _rule_re.match
191 used_names = set()
192 while pos < end:
193 m = do_match(rule, pos)
194 if m is None:
195 break
196 data = m.groupdict()
197 if data['static']:
198 yield None, None, data['static']
199 variable = data['variable']
200 converter = data['converter'] or 'default'
201 if variable in used_names:
202 raise ValueError('variable name %r used twice.' % variable)
203 used_names.add(variable)
204 yield converter, data['args'] or None, variable
205 pos = m.end()
206 if pos < end:
207 remaining = rule[pos:]
208 if '>' in remaining or '<' in remaining:
209 raise ValueError('malformed url rule: %r' % rule)
210 yield None, None, remaining
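The tokenization `parse_rule` performs can be sketched with a simplified placeholder regex (no converter arguments, no duplicate-name check); `tokenize` and `_placeholder` are illustrative names, not Werkzeug's API:

```python
import re

# Split a rule string into static text and <converter:variable> parts.
_placeholder = re.compile(r"<(?:(?P<converter>\w+):)?(?P<variable>\w+)>")

def tokenize(rule):
    pos = 0
    for m in _placeholder.finditer(rule):
        if m.start() > pos:
            # Static text between placeholders.
            yield None, None, rule[pos:m.start()]
        # Dynamic part; fall back to the "default" converter.
        yield m.group("converter") or "default", None, m.group("variable")
        pos = m.end()
    if pos < len(rule):
        yield None, None, rule[pos:]

assert list(tokenize("/browse/<int:id>/")) == [
    (None, None, "/browse/"),
    ("int", None, "id"),
    (None, None, "/"),
]
```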
211
212
213 class RoutingException(Exception):
214
215 """Special exceptions that require the application to redirect, notifying
216 about missing URLs, etc.
217
218 :internal:
219 """
220
221
222 class RequestRedirect(HTTPException, RoutingException):
223
224 """Raise if the map requests a redirect. This is for example the case if
225 `strict_slashes` is activated and a URL that requires a trailing slash was requested without one.
226
227 The attribute `new_url` contains the absolute destination url.
228 """
229 code = 301
230
231 def __init__(self, new_url):
232 RoutingException.__init__(self, new_url)
233 self.new_url = new_url
234
235 def get_response(self, environ):
236 return redirect(self.new_url, self.code)
237
238
239 class RequestSlash(RoutingException):
240
241 """Internal exception."""
242
243
244 class RequestAliasRedirect(RoutingException):
245
246 """This rule is an alias and wants to redirect to the canonical URL."""
247
248 def __init__(self, matched_values):
249 self.matched_values = matched_values
250
251
252 @implements_to_string
253 class BuildError(RoutingException, LookupError):
254
255 """Raised if the build system cannot find a URL for an endpoint with the
256 values provided.
257 """
258
259 def __init__(self, endpoint, values, method, adapter=None):
260 LookupError.__init__(self, endpoint, values, method)
261 self.endpoint = endpoint
262 self.values = values
263 self.method = method
264 self.adapter = adapter
265
266 @cached_property
267 def suggested(self):
268 return self.closest_rule(self.adapter)
269
270 def closest_rule(self, adapter):
271 def _score_rule(rule):
272 return sum([
273 0.98 * difflib.SequenceMatcher(
274 None, rule.endpoint, self.endpoint
275 ).ratio(),
276 0.01 * bool(set(self.values or ()).issubset(rule.arguments)),
277 0.01 * bool(rule.methods and self.method in rule.methods)
278 ])
279
280 if adapter and adapter.map._rules:
281 return max(adapter.map._rules, key=_score_rule)
282
283 def __str__(self):
284 message = []
285 message.append('Could not build url for endpoint %r' % self.endpoint)
286 if self.method:
287 message.append(' (%r)' % self.method)
288 if self.values:
289 message.append(' with values %r' % sorted(self.values.keys()))
290 message.append('.')
291 if self.suggested:
292 if self.endpoint == self.suggested.endpoint:
293 if self.method and self.method not in self.suggested.methods:
294 message.append(' Did you mean to use methods %r?' % sorted(
295 self.suggested.methods
296 ))
297 missing_values = self.suggested.arguments.union(
298 set(self.suggested.defaults or ())
299 ) - set(self.values.keys())
300 if missing_values:
301 message.append(
302 ' Did you forget to specify values %r?' %
303 sorted(missing_values)
304 )
305 else:
306 message.append(
307 ' Did you mean %r instead?' % self.suggested.endpoint
308 )
309 return u''.join(message)
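The "did you mean" suggestion in `closest_rule` ranks rules chiefly (weight 0.98) by endpoint string similarity. Its core can be sketched with `difflib` alone; `closest_endpoint` is an illustrative name, not Werkzeug's API:

```python
import difflib

def closest_endpoint(wanted, endpoints):
    # Pick the endpoint whose name is most similar to the requested one,
    # the dominant term in BuildError's _score_rule.
    return max(
        endpoints,
        key=lambda e: difflib.SequenceMatcher(None, e, wanted).ratio(),
    )

assert closest_endpoint("blog/shwo", ["blog/show", "blog/index", "user"]) == "blog/show"
```

The two remaining 0.01 terms in `_score_rule` act as tie-breakers, nudging the score toward rules whose arguments and methods also fit the failed build.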
310
311
312 class ValidationError(ValueError):
313
314 """Validation error. If a rule converter raises this exception the rule
315 does not match the current URL and the next URL is tried.
316 """
317
318
319 class RuleFactory(object):
320
321 """As soon as you have more complex URL setups it's a good idea to use rule
322 factories to avoid repetitive tasks. Some of them are built in, others can
323 be added by subclassing `RuleFactory` and overriding `get_rules`.
324 """
325
326 def get_rules(self, map):
327 """Subclasses of `RuleFactory` have to override this method and return
328 an iterable of rules."""
329 raise NotImplementedError()
330
331
332 class Subdomain(RuleFactory):
333
334 """All URLs provided by this factory have the subdomain set to a
335 specific domain. For example if you want to use the subdomain for
336 the current language this can be a good setup::
337
338 url_map = Map([
339 Rule('/', endpoint='#select_language'),
340 Subdomain('<string(length=2):lang_code>', [
341 Rule('/', endpoint='index'),
342 Rule('/about', endpoint='about'),
343 Rule('/help', endpoint='help')
344 ])
345 ])
346
347 All the rules except for the ``'#select_language'`` endpoint will now
348 listen on a two letter long subdomain that holds the language code
349 for the current request.
350 """
351
352 def __init__(self, subdomain, rules):
353 self.subdomain = subdomain
354 self.rules = rules
355
356 def get_rules(self, map):
357 for rulefactory in self.rules:
358 for rule in rulefactory.get_rules(map):
359 rule = rule.empty()
360 rule.subdomain = self.subdomain
361 yield rule
362
363
364 class Submount(RuleFactory):
365
366 """Like `Subdomain` but prefixes the URL rule with a given string::
367
368 url_map = Map([
369 Rule('/', endpoint='index'),
370 Submount('/blog', [
371 Rule('/', endpoint='blog/index'),
372 Rule('/entry/<entry_slug>', endpoint='blog/show')
373 ])
374 ])
375
376 Now the rule ``'blog/show'`` matches ``/blog/entry/<entry_slug>``.
377 """
378
379 def __init__(self, path, rules):
380 self.path = path.rstrip('/')
381 self.rules = rules
382
383 def get_rules(self, map):
384 for rulefactory in self.rules:
385 for rule in rulefactory.get_rules(map):
386 rule = rule.empty()
387 rule.rule = self.path + rule.rule
388 yield rule
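The `Submount` idea in miniature: a factory that yields prefixed copies of its rules. This sketch uses plain ``(rule, endpoint)`` tuples instead of `Rule` objects; `submount` is an illustrative name:

```python
def submount(path, rules):
    # Strip the trailing slash exactly as Submount.__init__ does, so the
    # child rules' own leading slashes produce clean joined paths.
    prefix = path.rstrip("/")
    for rule, endpoint in rules:
        yield prefix + rule, endpoint

assert list(submount("/blog/", [("/", "blog/index"), ("/entry/<slug>", "blog/show")])) == [
    ("/blog/", "blog/index"),
    ("/blog/entry/<slug>", "blog/show"),
]
```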
389
390
391 class EndpointPrefix(RuleFactory):
392
393 """Prefixes all endpoints (which must be strings for this factory) with
394 another string. This can be useful for sub applications::
395
396 url_map = Map([
397 Rule('/', endpoint='index'),
398 EndpointPrefix('blog/', [Submount('/blog', [
399 Rule('/', endpoint='index'),
400 Rule('/entry/<entry_slug>', endpoint='show')
401 ])])
402 ])
403 """
404
405 def __init__(self, prefix, rules):
406 self.prefix = prefix
407 self.rules = rules
408
409 def get_rules(self, map):
410 for rulefactory in self.rules:
411 for rule in rulefactory.get_rules(map):
412 rule = rule.empty()
413 rule.endpoint = self.prefix + rule.endpoint
414 yield rule
415
416
417 class RuleTemplate(object):
418
419 """Returns copies of the wrapped rules and expands string templates in
420 the endpoint, rule, defaults or subdomain sections.
421
422 Here is a small example of such a rule template::
423
424 from werkzeug.routing import Map, Rule, RuleTemplate
425
426 resource = RuleTemplate([
427 Rule('/$name/', endpoint='$name.list'),
428 Rule('/$name/<int:id>', endpoint='$name.show')
429 ])
430
431 url_map = Map([resource(name='user'), resource(name='page')])
432
433 When a rule template is called the keyword arguments are used to
434 replace the placeholders in all the string parameters.
435 """
436
437 def __init__(self, rules):
438 self.rules = list(rules)
439
440 def __call__(self, *args, **kwargs):
441 return RuleTemplateFactory(self.rules, dict(*args, **kwargs))
442
443
444 class RuleTemplateFactory(RuleFactory):
445
446 """A factory that fills in template variables into rules. Used by
447 `RuleTemplate` internally.
448
449 :internal:
450 """
451
452 def __init__(self, rules, context):
453 self.rules = rules
454 self.context = context
455
456 def get_rules(self, map):
457 for rulefactory in self.rules:
458 for rule in rulefactory.get_rules(map):
459 new_defaults = subdomain = None
460 if rule.defaults:
461 new_defaults = {}
462 for key, value in iteritems(rule.defaults):
463 if isinstance(value, string_types):
464 value = format_string(value, self.context)
465 new_defaults[key] = value
466 if rule.subdomain is not None:
467 subdomain = format_string(rule.subdomain, self.context)
468 new_endpoint = rule.endpoint
469 if isinstance(new_endpoint, string_types):
470 new_endpoint = format_string(new_endpoint, self.context)
471 yield Rule(
472 format_string(rule.rule, self.context),
473 new_defaults,
474 subdomain,
475 rule.methods,
476 rule.build_only,
477 new_endpoint,
478 rule.strict_slashes
479 )
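The ``$name`` expansion performed via `format_string` behaves much like the stdlib `string.Template`. A sketch of what one template call produces, assuming that equivalence (`expand` is an illustrative name):

```python
from string import Template

def expand(rule, endpoint, **context):
    # Substitute $-placeholders in the rule string and endpoint, as
    # RuleTemplateFactory does for each wrapped rule.
    return (
        Template(rule).substitute(context),
        Template(endpoint).substitute(context),
    )

assert expand("/$name/<int:id>", "$name.show", name="user") == (
    "/user/<int:id>",
    "user.show",
)
```

Note that the routing placeholders like ``<int:id>`` survive untouched: only ``$``-prefixed names are replaced, so templates and converters compose cleanly.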
480
481
482 @implements_to_string
483 class Rule(RuleFactory):
484
485 """A Rule represents one URL pattern. There are some options for `Rule`
486 that change the way it behaves and are passed to the `Rule` constructor.
487 Note that besides the rule-string all arguments *must* be keyword arguments
488 in order not to break the application on Werkzeug upgrades.
489
490 `string`
491 Rule strings basically are just normal URL paths with placeholders in
492 the format ``<converter(arguments):name>`` where the converter and the
493 arguments are optional. If no converter is defined the `default`
494 converter is used which means `string` in the normal configuration.
495
496 URL rules that end with a slash are branch URLs, others are leaves.
497 If you have `strict_slashes` enabled (which is the default), all
498 branch URLs that are matched without a trailing slash will trigger a
499 redirect to the same URL with the missing slash appended.
500
501 The converters are defined on the `Map`.
502
503 `endpoint`
504 The endpoint for this rule. This can be anything. A reference to a
505 function, a string, a number etc. The preferred way is using a string
506 because the endpoint is used for URL generation.
507
508 `defaults`
509 An optional dict with defaults for other rules with the same endpoint.
510 This is a bit tricky but useful if you want to have unique URLs::
511
512 url_map = Map([
513 Rule('/all/', defaults={'page': 1}, endpoint='all_entries'),
514 Rule('/all/page/<int:page>', endpoint='all_entries')
515 ])
516
517 If a user now visits ``http://example.com/all/page/1`` they will be
518 redirected to ``http://example.com/all/``. If `redirect_defaults` is
519 disabled on the `Map` instance this will only affect the URL
520 generation.
521
522 `subdomain`
523 The subdomain rule string for this rule. If not specified the rule
524 only matches for the `default_subdomain` of the map. If the map is
525 not bound to a subdomain this feature is disabled.
526
527 Can be useful if you want to have user profiles on different subdomains
528 and all subdomains are forwarded to your application::
529
530 url_map = Map([
531 Rule('/', subdomain='<username>', endpoint='user/homepage'),
532 Rule('/stats', subdomain='<username>', endpoint='user/stats')
533 ])
534
535 `methods`
536 A sequence of HTTP methods this rule applies to. If not specified, all
537 methods are allowed. For example this can be useful if you want different
538 endpoints for `POST` and `GET`. If methods are defined and the path
539 matches but the method matched against is not in this list or in the
540 list of another rule for that path the error raised is of the type
541 `MethodNotAllowed` rather than `NotFound`. If `GET` is present in the
542 list of methods and `HEAD` is not, `HEAD` is added automatically.
543
544 .. versionchanged:: 0.6.1
545 `HEAD` is now automatically added to the methods if `GET` is
546 present. The reason for this is that existing code often did not
547 work properly in servers not rewriting `HEAD` to `GET`
548 automatically and it was not documented how `HEAD` should be
549 treated. This was considered a bug in Werkzeug because of that.
550
551 `strict_slashes`
552 Override the `Map` setting for `strict_slashes` only for this rule. If
553 not specified the `Map` setting is used.
554
555 `build_only`
556 Set this to True and the rule will never match but will create a URL
557 that can be built. This is useful if you have resources on a subdomain
558 or folder that are not handled by the WSGI application (like static data).
559
560 `redirect_to`
561 If given this must be either a string or callable. In case of a
562 callable it's called with the url adapter that triggered the match and
563 the values of the URL as keyword arguments and has to return the target
564 for the redirect, otherwise it has to be a string with placeholders in
565 rule syntax::
566
567 def foo_with_slug(adapter, id):
568 # ask the database for the slug for the old id. this of
569 # course has nothing to do with werkzeug.
570 return 'foo/' + Foo.get_slug_for_id(id)
571
572 url_map = Map([
573 Rule('/foo/<slug>', endpoint='foo'),
574 Rule('/some/old/url/<slug>', redirect_to='foo/<slug>'),
575 Rule('/other/old/url/<int:id>', redirect_to=foo_with_slug)
576 ])
577
578 When the rule is matched the routing system will raise a
579 `RequestRedirect` exception with the target for the redirect.
580
581 Keep in mind that the URL will be joined against the URL root of the
582 script so don't use a leading slash on the target URL unless you
583 really mean root of that domain.
584
585 `alias`
586 If enabled this rule serves as an alias for another rule with the same
587 endpoint and arguments.
588
589 `host`
590 If provided and the URL map has host matching enabled this can be
591 used to provide a match rule for the whole host. This also means
592 that the subdomain feature is disabled.
593
594 .. versionadded:: 0.7
595 The `alias` and `host` parameters were added.
596 """
597
598 def __init__(self, string, defaults=None, subdomain=None, methods=None,
599 build_only=False, endpoint=None, strict_slashes=None,
600 redirect_to=None, alias=False, host=None):
601 if not string.startswith('/'):
602 raise ValueError('urls must start with a leading slash')
603 self.rule = string
604 self.is_leaf = not string.endswith('/')
605
606 self.map = None
607 self.strict_slashes = strict_slashes
608 self.subdomain = subdomain
609 self.host = host
610 self.defaults = defaults
611 self.build_only = build_only
612 self.alias = alias
613 if methods is None:
614 self.methods = None
615 else:
616 if isinstance(methods, str):
617 raise TypeError('param `methods` should be `Iterable[str]`, not `str`')
618 self.methods = set([x.upper() for x in methods])
619 if 'HEAD' not in self.methods and 'GET' in self.methods:
620 self.methods.add('HEAD')
621 self.endpoint = endpoint
622 self.redirect_to = redirect_to
623
624 if defaults:
625 self.arguments = set(map(str, defaults))
626 else:
627 self.arguments = set()
628 self._trace = self._converters = self._regex = self._argument_weights = None
629
630 def empty(self):
631 """
632 Return an unbound copy of this rule.
633
634 This can be useful if you want to reuse an already bound URL for another
635 map. See ``get_empty_kwargs`` to override what keyword arguments are
636 provided to the new copy.
637 """
638 return type(self)(self.rule, **self.get_empty_kwargs())
639
640 def get_empty_kwargs(self):
641 """
642 Provides kwargs for instantiating empty copy with empty()
643
644 Use this method to provide custom keyword arguments to the subclass of
645 ``Rule`` when calling ``some_rule.empty()``. Helpful when the subclass
646 has custom keyword arguments that are needed at instantiation.
647
648 Must return a ``dict`` that will be provided as kwargs to the new
649 instance of ``Rule``, following the initial ``self.rule`` value which
650 is always provided as the first, required positional argument.
651 """
652 defaults = None
653 if self.defaults:
654 defaults = dict(self.defaults)
655 return dict(defaults=defaults, subdomain=self.subdomain,
656 methods=self.methods, build_only=self.build_only,
657 endpoint=self.endpoint, strict_slashes=self.strict_slashes,
658 redirect_to=self.redirect_to, alias=self.alias,
659 host=self.host)
660
661 def get_rules(self, map):
662 yield self
663
664 def refresh(self):
665 """Rebinds and refreshes the URL. Call this if you modified the
666 rule in place.
667
668 :internal:
669 """
670 self.bind(self.map, rebind=True)
671
672 def bind(self, map, rebind=False):
673 """Bind the url to a map and create a regular expression based on
674 the information from the rule itself and the defaults from the map.
675
676 :internal:
677 """
678 if self.map is not None and not rebind:
679 raise RuntimeError('url rule %r already bound to map %r' %
680 (self, self.map))
681 self.map = map
682 if self.strict_slashes is None:
683 self.strict_slashes = map.strict_slashes
684 if self.subdomain is None:
685 self.subdomain = map.default_subdomain
686 self.compile()
687
688 def get_converter(self, variable_name, converter_name, args, kwargs):
689 """Looks up the converter for the given parameter.
690
691 .. versionadded:: 0.9
692 """
693 if converter_name not in self.map.converters:
694 raise LookupError('the converter %r does not exist' % converter_name)
695 return self.map.converters[converter_name](self.map, *args, **kwargs)
696
697 def compile(self):
698 """Compiles the regular expression and stores it."""
699 assert self.map is not None, 'rule not bound'
700
701 if self.map.host_matching:
702 domain_rule = self.host or ''
703 else:
704 domain_rule = self.subdomain or ''
705
706 self._trace = []
707 self._converters = {}
708 self._static_weights = []
709 self._argument_weights = []
710 regex_parts = []
711
712 def _build_regex(rule):
713 index = 0
714 for converter, arguments, variable in parse_rule(rule):
715 if converter is None:
716 regex_parts.append(re.escape(variable))
717 self._trace.append((False, variable))
718 for part in variable.split('/'):
719 if part:
720 self._static_weights.append((index, -len(part)))
721 else:
722 if arguments:
723 c_args, c_kwargs = parse_converter_args(arguments)
724 else:
725 c_args = ()
726 c_kwargs = {}
727 convobj = self.get_converter(
728 variable, converter, c_args, c_kwargs)
729 regex_parts.append('(?P<%s>%s)' % (variable, convobj.regex))
730 self._converters[variable] = convobj
731 self._trace.append((True, variable))
732 self._argument_weights.append(convobj.weight)
733 self.arguments.add(str(variable))
734 index = index + 1
735
736 _build_regex(domain_rule)
737 regex_parts.append('\\|')
738 self._trace.append((False, '|'))
739 _build_regex(self.is_leaf and self.rule or self.rule.rstrip('/'))
740 if not self.is_leaf:
741 self._trace.append((False, '/'))
742
743 if self.build_only:
744 return
745 regex = r'^%s%s$' % (
746 u''.join(regex_parts),
747 (not self.is_leaf or not self.strict_slashes) and
748 '(?<!/)(?P<__suffix__>/?)' or ''
749 )
750 self._regex = re.compile(regex, re.UNICODE)
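A condensed version of what `compile()` produces: each placeholder becomes a named group whose pattern comes from its converter. This sketch hard-codes two converter regexes and skips the domain part and suffix handling; `compile_rule` and `converter_regexes` are illustrative names:

```python
import re

converter_regexes = {"int": r"\d+", "default": r"[^/]+"}

def compile_rule(rule):
    parts = []
    pos = 0
    for m in re.finditer(r"<(?:(\w+):)?(\w+)>", rule):
        # Static text is escaped; placeholders become named groups.
        parts.append(re.escape(rule[pos:m.start()]))
        regex = converter_regexes[m.group(1) or "default"]
        parts.append("(?P<%s>%s)" % (m.group(2), regex))
        pos = m.end()
    parts.append(re.escape(rule[pos:]))
    return re.compile("^%s$" % "".join(parts))

r = compile_rule("/browse/<int:id>/")
assert r.match("/browse/42/").groupdict() == {"id": "42"}
assert r.match("/browse/x/") is None
```

The real implementation additionally prepends the subdomain/host part (separated by ``|``) and an optional ``__suffix__`` group that powers the trailing-slash redirect.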
751
752 def match(self, path, method=None):
753 """Check if the rule matches a given path. Path is a string in the
754 form ``"subdomain|/path"`` and is assembled by the map. If
755 the map is doing host matching the subdomain part will be the host
756 instead.
757
758 If the rule matches a dict with the converted values is returned,
759 otherwise the return value is `None`.
760
761 :internal:
762 """
763 if not self.build_only:
764 m = self._regex.search(path)
765 if m is not None:
766 groups = m.groupdict()
767 # we have a folder-like part of the url without a trailing
768 # slash and strict slashes enabled. raise an exception that
769 # tells the map to redirect to the same url but with a
770 # trailing slash
771 if self.strict_slashes and not self.is_leaf and \
772 not groups.pop('__suffix__') and \
773 (method is None or self.methods is None or
774 method in self.methods):
775 raise RequestSlash()
776 # if we are not in strict slashes mode we have to remove
777 # a __suffix__
778 elif not self.strict_slashes:
779 del groups['__suffix__']
780
781 result = {}
782 for name, value in iteritems(groups):
783 try:
784 value = self._converters[name].to_python(value)
785 except ValidationError:
786 return
787 result[str(name)] = value
788 if self.defaults:
789 result.update(self.defaults)
790
791 if self.alias and self.map.redirect_defaults:
792 raise RequestAliasRedirect(result)
793
794 return result
795
796 def build(self, values, append_unknown=True):
797 """Assembles the relative url for that rule and the subdomain.
798 If building doesn't work for some reason `None` is returned.
799
800 :internal:
801 """
802 tmp = []
803 add = tmp.append
804 processed = set(self.arguments)
805 for is_dynamic, data in self._trace:
806 if is_dynamic:
807 try:
808 add(self._converters[data].to_url(values[data]))
809 except ValidationError:
810 return
811 processed.add(data)
812 else:
813 add(url_quote(to_bytes(data, self.map.charset), safe='/:|+'))
814 domain_part, url = (u''.join(tmp)).split(u'|', 1)
815
816 if append_unknown:
817 query_vars = MultiDict(values)
818 for key in processed:
819 if key in query_vars:
820 del query_vars[key]
821
822 if query_vars:
823 url += u'?' + url_encode(query_vars, charset=self.map.charset,
824 sort=self.map.sort_parameters,
825 key=self.map.sort_key)
826
827 return domain_part, url
828
829 def provides_defaults_for(self, rule):
830 """Check if this rule has defaults for a given rule.
831
832 :internal:
833 """
834 return not self.build_only and self.defaults and \
835 self.endpoint == rule.endpoint and self != rule and \
836 self.arguments == rule.arguments
837
838 def suitable_for(self, values, method=None):
839 """Check if the dict of values has enough data for url generation.
840
841 :internal:
842 """
843 # if a method was given explicitly and that method is not supported
844 # by this rule, this rule is not suitable.
845 if method is not None and self.methods is not None \
846 and method not in self.methods:
847 return False
848
849 defaults = self.defaults or ()
850
851 # all arguments required must be either in the defaults dict or
852 # the value dictionary otherwise it's not suitable
853 for key in self.arguments:
854 if key not in defaults and key not in values:
855 return False
856
857 # in case defaults are given we ensure that either the value was
858 # skipped or the value is the same as the default value.
859 if defaults:
860 for key, value in iteritems(defaults):
861 if key in values and value != values[key]:
862 return False
863
864 return True
865
866 def match_compare_key(self):
867 """The match compare key for sorting.
868
869 Current implementation:
870
871 1. rules without any arguments come first for performance
872 reasons only as we expect them to match faster and some
873 common ones usually don't have any arguments (index pages etc.)
874 2. rules with more static parts come first so the second argument
875 is the negative length of the list of static weights.
876 3. we order by static weights, which is a combination of index
877 and length
878 4. the more complex rules come first so the next argument is the
879 negative length of the list of argument weights.
880 5. lastly we order by the actual argument weights.
881
882 :internal:
883 """
884 return bool(self.arguments), -len(self._static_weights), self._static_weights,\
885 -len(self._argument_weights), self._argument_weights
886
887 def build_compare_key(self):
888 """The build compare key for sorting.
889
890 :internal:
891 """
892 return self.alias and 1 or 0, -len(self.arguments), \
893 -len(self.defaults or ())
894
895 def __eq__(self, other):
896 return self.__class__ is other.__class__ and \
897 self._trace == other._trace
898
899 __hash__ = None
900
901 def __ne__(self, other):
902 return not self.__eq__(other)
903
904 def __str__(self):
905 return self.rule
906
907 @native_string_result
908 def __repr__(self):
909 if self.map is None:
910 return u'<%s (unbound)>' % self.__class__.__name__
911 tmp = []
912 for is_dynamic, data in self._trace:
913 if is_dynamic:
914 tmp.append(u'<%s>' % data)
915 else:
916 tmp.append(data)
917 return u'<%s %s%s -> %s>' % (
918 self.__class__.__name__,
919 repr((u''.join(tmp)).lstrip(u'|')).lstrip(u'u'),
920 self.methods is not None
921 and u' (%s)' % u', '.join(self.methods)
922 or u'',
923 self.endpoint
924 )
925
926
927 class BaseConverter(object):
928
929 """Base class for all converters."""
930 regex = '[^/]+'
931 weight = 100
932
933 def __init__(self, map):
934 self.map = map
935
936 def to_python(self, value):
937 return value
938
939 def to_url(self, value):
940 return url_quote(value, charset=self.map.charset)
941
942
943 class UnicodeConverter(BaseConverter):
944
945 """This converter is the default converter and accepts any string but
946 only one path segment. Thus the string cannot include a slash.
947
948 This is the default validator.
949
950 Example::
951
952 Rule('/pages/<page>'),
953 Rule('/<string(length=2):lang_code>')
954
955 :param map: the :class:`Map`.
956 :param minlength: the minimum length of the string. Must be greater
957 than or equal to 1.
958 :param maxlength: the maximum length of the string.
959 :param length: the exact length of the string.
960 """
961
962 def __init__(self, map, minlength=1, maxlength=None, length=None):
963 BaseConverter.__init__(self, map)
964 if length is not None:
965 length = '{%d}' % int(length)
966 else:
967 if maxlength is None:
968 maxlength = ''
969 else:
970 maxlength = int(maxlength)
971 length = '{%s,%s}' % (
972 int(minlength),
973 maxlength
974 )
975 self.regex = '[^/]' + length
976
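The length arguments above translate directly into the compiled regex. A minimal sketch, assuming `werkzeug` is importable (the rule path and endpoint name are made up for illustration):

```python
from werkzeug.routing import Map, Rule

# string(length=2) compiles to the regex [^/]{2}, so exactly
# two non-slash characters match this segment.
url_map = Map([Rule("/<string(length=2):lang>/about", endpoint="about")])
urls = url_map.bind("example.com")

matched = urls.match("/en/about")   # ('about', {'lang': 'en'})
too_long = urls.test("/eng/about")  # False: three characters do not match
```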
977
978 class AnyConverter(BaseConverter):
979
980 """Matches one of the items provided. Items can either be Python
981 identifiers or strings::
982
983 Rule('/<any(about, help, imprint, class, "foo,bar"):page_name>')
984
985 :param map: the :class:`Map`.
986 :param items: this function accepts the possible items as positional
987 arguments.
988 """
989
990 def __init__(self, map, *items):
991 BaseConverter.__init__(self, map)
992 self.regex = '(?:%s)' % '|'.join([re.escape(x) for x in items])
993
994
995 class PathConverter(BaseConverter):
996
997 """Like the default :class:`UnicodeConverter`, but it also matches
998 slashes. This is useful for wikis and similar applications::
999
1000 Rule('/<path:wikipage>')
1001 Rule('/<path:wikipage>/edit')
1002
1003 :param map: the :class:`Map`.
1004 """
1005 regex = '[^/].*?'
1006 weight = 200
1007
1008
1009 class NumberConverter(BaseConverter):
1010
1011 """Base class for `IntegerConverter` and `FloatConverter`.
1012
1013 :internal:
1014 """
1015 weight = 50
1016
1017 def __init__(self, map, fixed_digits=0, min=None, max=None):
1018 BaseConverter.__init__(self, map)
1019 self.fixed_digits = fixed_digits
1020 self.min = min
1021 self.max = max
1022
1023 def to_python(self, value):
1024 if (self.fixed_digits and len(value) != self.fixed_digits):
1025 raise ValidationError()
1026 value = self.num_convert(value)
1027 if (self.min is not None and value < self.min) or \
1028 (self.max is not None and value > self.max):
1029 raise ValidationError()
1030 return value
1031
1032 def to_url(self, value):
1033 value = self.num_convert(value)
1034 if self.fixed_digits:
1035 value = ('%%0%sd' % self.fixed_digits) % value
1036 return str(value)
1037
1038
1039 class IntegerConverter(NumberConverter):
1040
1041 """This converter only accepts integer values::
1042
1043 Rule('/page/<int:page>')
1044
1045 This converter does not support negative values.
1046
1047 :param map: the :class:`Map`.
1048 :param fixed_digits: the number of fixed digits in the URL. If you set
1049 this to ``4`` for example, the application will
1050 only match if the url looks like ``/0001/``. The
1051 default is variable length.
1052 :param min: the minimal value.
1053 :param max: the maximal value.
1054 """
1055 regex = r'\d+'
1056 num_convert = int
1057
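The `fixed_digits` behavior described in the docstring above can be sketched as follows, assuming `werkzeug` is importable (path and endpoint are illustrative):

```python
from werkzeug.routing import Map, Rule

# fixed_digits=4 only matches zero-padded four-digit values;
# to_python still converts the matched text to a plain int.
url_map = Map([Rule("/page/<int(fixed_digits=4):page>", endpoint="page")])
urls = url_map.bind("example.com")

matched = urls.match("/page/0001")  # ('page', {'page': 1})
unpadded = urls.test("/page/1")     # False: wrong number of digits
```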
1058
1059 class FloatConverter(NumberConverter):
1060
1061 """This converter only accepts floating point values::
1062
1063 Rule('/probability/<float:probability>')
1064
1065 This converter does not support negative values.
1066
1067 :param map: the :class:`Map`.
1068 :param min: the minimal value.
1069 :param max: the maximal value.
1070 """
1071 regex = r'\d+\.\d+'
1072 num_convert = float
1073
1074 def __init__(self, map, min=None, max=None):
1075 NumberConverter.__init__(self, map, 0, min, max)
1076
1077
1078 class UUIDConverter(BaseConverter):
1079
1080 """This converter only accepts UUID strings::
1081
1082 Rule('/object/<uuid:identifier>')
1083
1084 .. versionadded:: 0.10
1085
1086 :param map: the :class:`Map`.
1087 """
1088 regex = r'[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-' \
1089 r'[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}'
1090
1091 def to_python(self, value):
1092 return uuid.UUID(value)
1093
1094 def to_url(self, value):
1095 return str(value)
1096
1097
1098 #: the default converter mapping for the map.
1099 DEFAULT_CONVERTERS = {
1100 'default': UnicodeConverter,
1101 'string': UnicodeConverter,
1102 'any': AnyConverter,
1103 'path': PathConverter,
1104 'int': IntegerConverter,
1105 'float': FloatConverter,
1106 'uuid': UUIDConverter,
1107 }
1108
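New converters hook in through the `converters` argument of `Map`, which merges into this default mapping. A minimal sketch of a hypothetical comma-separated list converter, assuming `werkzeug` is importable (the converter name and rule are made up):

```python
from werkzeug.routing import BaseConverter, Map, Rule

class ListConverter(BaseConverter):
    """Hypothetical converter: matches comma-separated items in one
    path segment and converts them to and from a Python list."""

    regex = r"[^/]+"

    def to_python(self, value):
        return value.split(",")

    def to_url(self, value):
        # unquoted join is enough for this sketch
        return ",".join(value)

url_map = Map(
    [Rule("/tags/<list:tags>", endpoint="tags")],
    converters={"list": ListConverter},
)
urls = url_map.bind("example.com")
endpoint, args = urls.match("/tags/a,b,c")
# endpoint == 'tags', args == {'tags': ['a', 'b', 'c']}
```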
1109
1110 class Map(object):
1111
1112 """The map class stores all the URL rules and some configuration
1113 parameters. Some of the configuration values are only stored on the
1114 `Map` instance since those affect all rules, others are just defaults
1115 and can be overridden for each rule. Note that you have to specify all
1116 arguments besides the `rules` as keyword arguments!
1117
1118 :param rules: sequence of url rules for this map.
1119 :param default_subdomain: The default subdomain for rules without a
1120 subdomain defined.
1121 :param charset: charset of the url. defaults to ``"utf-8"``
1122 :param strict_slashes: Take care of trailing slashes.
1123 :param redirect_defaults: This will redirect to the default rule if it
1124 wasn't visited that way. This helps create
1125 unique URLs.
1126 :param converters: A dict of converters that adds additional converters
1127 to the list of converters. If you redefine one
1128 converter this will override the original one.
1129 :param sort_parameters: If set to `True` the url parameters are sorted.
1130 See `url_encode` for more details.
1131 :param sort_key: The sort key function for `url_encode`.
1132 :param encoding_errors: the error method to use for decoding
1133 :param host_matching: if set to `True` it enables the host matching
1134 feature and disables the subdomain one. If
1135 enabled the `host` parameter to rules is used
1136 instead of the `subdomain` one.
1137
1138 .. versionadded:: 0.5
1139 `sort_parameters` and `sort_key` were added.
1140
1141 .. versionadded:: 0.7
1142 `encoding_errors` and `host_matching` were added.
1143 """
1144
1145 #: .. versionadded:: 0.6
1146 #: a dict of default converters to be used.
1147 default_converters = ImmutableDict(DEFAULT_CONVERTERS)
1148
1149 def __init__(self, rules=None, default_subdomain='', charset='utf-8',
1150 strict_slashes=True, redirect_defaults=True,
1151 converters=None, sort_parameters=False, sort_key=None,
1152 encoding_errors='replace', host_matching=False):
1153 self._rules = []
1154 self._rules_by_endpoint = {}
1155 self._remap = True
1156 self._remap_lock = Lock()
1157
1158 self.default_subdomain = default_subdomain
1159 self.charset = charset
1160 self.encoding_errors = encoding_errors
1161 self.strict_slashes = strict_slashes
1162 self.redirect_defaults = redirect_defaults
1163 self.host_matching = host_matching
1164
1165 self.converters = self.default_converters.copy()
1166 if converters:
1167 self.converters.update(converters)
1168
1169 self.sort_parameters = sort_parameters
1170 self.sort_key = sort_key
1171
1172 for rulefactory in rules or ():
1173 self.add(rulefactory)
1174
1175 def is_endpoint_expecting(self, endpoint, *arguments):
1176 """Iterate over all rules and check if the endpoint expects
1177 the arguments provided. This is for example useful if you have
1178 some URLs that expect a language code and others that do not and
1179 you want to wrap the builder a bit so that the current language
1180 code is automatically added if not provided but endpoints expect
1181 it.
1182
1183 :param endpoint: the endpoint to check.
1184 :param arguments: this function accepts one or more arguments
1185 as positional arguments. Each one of them is
1186 checked.
1187 """
1188 self.update()
1189 arguments = set(arguments)
1190 for rule in self._rules_by_endpoint[endpoint]:
1191 if arguments.issubset(rule.arguments):
1192 return True
1193 return False
1194
1195 def iter_rules(self, endpoint=None):
1196 """Iterate over all rules or the rules of an endpoint.
1197
1198 :param endpoint: if provided only the rules for that endpoint
1199 are returned.
1200 :return: an iterator
1201 """
1202 self.update()
1203 if endpoint is not None:
1204 return iter(self._rules_by_endpoint[endpoint])
1205 return iter(self._rules)
1206
1207 def add(self, rulefactory):
1208 """Add a new rule or factory to the map and bind it. Requires that the
1209 rule is not bound to another map.
1210
1211 :param rulefactory: a :class:`Rule` or :class:`RuleFactory`
1212 """
1213 for rule in rulefactory.get_rules(self):
1214 rule.bind(self)
1215 self._rules.append(rule)
1216 self._rules_by_endpoint.setdefault(rule.endpoint, []).append(rule)
1217 self._remap = True
1218
1219 def bind(self, server_name, script_name=None, subdomain=None,
1220 url_scheme='http', default_method='GET', path_info=None,
1221 query_args=None):
1222 """Return a new :class:`MapAdapter` with the details specified to the
1223 call. Note that `script_name` will default to ``'/'`` if not further
1224 specified or `None`. The `server_name` at least is a requirement
1225 because the HTTP RFC requires absolute URLs for redirects and so all
1226 redirect exceptions raised by Werkzeug will contain the full canonical
1227 URL.
1228
1229 If no path_info is passed to :meth:`match` it will use the default path
1230 info passed to bind. While this doesn't really make sense for
1231 manual bind calls, it's useful if you bind a map to a WSGI
1232 environment which already contains the path info.
1233
1234 `subdomain` will default to the `default_subdomain` for this map if
1235 not defined. If there is no `default_subdomain` you cannot use the
1236 subdomain feature.
1237
1238 .. versionadded:: 0.7
1239 `query_args` added
1240
1241 .. versionadded:: 0.8
1242 `query_args` can now also be a string.
1243 """
1244 server_name = server_name.lower()
1245 if self.host_matching:
1246 if subdomain is not None:
1247 raise RuntimeError('host matching enabled and a '
1248 'subdomain was provided')
1249 elif subdomain is None:
1250 subdomain = self.default_subdomain
1251 if script_name is None:
1252 script_name = '/'
1253 try:
1254 server_name = _encode_idna(server_name)
1255 except UnicodeError:
1256 raise BadHost()
1257 return MapAdapter(self, server_name, script_name, subdomain,
1258 url_scheme, path_info, default_method, query_args)
1259
1260 def bind_to_environ(self, environ, server_name=None, subdomain=None):
1261 """Like :meth:`bind` but you can pass it a WSGI environment and it
1262 will fetch the information from that dictionary. Note that because of
1263 limitations in the protocol there is no way to get the current
1264 subdomain and real `server_name` from the environment. If you don't
1265 provide it, Werkzeug will use `SERVER_NAME` and `SERVER_PORT` (or
1266 `HTTP_HOST` if provided) as the `server_name`, with the subdomain
1267 feature disabled.
1268
1269 If `subdomain` is `None` but an environment and a server name is
1270 provided it will calculate the current subdomain automatically.
1271 Example: if `server_name` is ``'example.com'`` and the `SERVER_NAME`
1272 in the wsgi `environ` is ``'staging.dev.example.com'`` the calculated
1273 subdomain will be ``'staging.dev'``.
1274
1275 If the object passed as environ has an environ attribute, the value of
1276 this attribute is used instead. This allows you to pass request
1277 objects. Additionally `PATH_INFO` is added as a default of the
1278 :class:`MapAdapter` so that you don't have to pass the path info to
1279 the match method.
1280
1281 .. versionchanged:: 0.5
1282 previously this method accepted a bogus `calculate_subdomain`
1283 parameter that did not have any effect. It was removed because
1284 of that.
1285
1286 .. versionchanged:: 0.8
1287 This will no longer raise a ValueError when an unexpected server
1288 name was passed.
1289
1290 :param environ: a WSGI environment.
1291 :param server_name: an optional server name hint (see above).
1292 :param subdomain: optionally the current subdomain (see above).
1293 """
1294 environ = _get_environ(environ)
1295
1296 if 'HTTP_HOST' in environ:
1297 wsgi_server_name = environ['HTTP_HOST']
1298
1299 if environ['wsgi.url_scheme'] == 'http' \
1300 and wsgi_server_name.endswith(':80'):
1301 wsgi_server_name = wsgi_server_name[:-3]
1302 elif environ['wsgi.url_scheme'] == 'https' \
1303 and wsgi_server_name.endswith(':443'):
1304 wsgi_server_name = wsgi_server_name[:-4]
1305 else:
1306 wsgi_server_name = environ['SERVER_NAME']
1307
1308 if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not \
1309 in (('https', '443'), ('http', '80')):
1310 wsgi_server_name += ':' + environ['SERVER_PORT']
1311
1312 wsgi_server_name = wsgi_server_name.lower()
1313
1314 if server_name is None:
1315 server_name = wsgi_server_name
1316 else:
1317 server_name = server_name.lower()
1318
1319 if subdomain is None and not self.host_matching:
1320 cur_server_name = wsgi_server_name.split('.')
1321 real_server_name = server_name.split('.')
1322 offset = -len(real_server_name)
1323 if cur_server_name[offset:] != real_server_name:
1324 # This can happen even with valid configs if the server was
1325 # accessed directly by IP address in some situations.
1326 # Instead of raising an exception like in Werkzeug 0.7 or
1327 # earlier we go by an invalid subdomain which will result
1328 # in a 404 error on matching.
1329 subdomain = '<invalid>'
1330 else:
1331 subdomain = '.'.join(filter(None, cur_server_name[:offset]))
1332
1333 def _get_wsgi_string(name):
1334 val = environ.get(name)
1335 if val is not None:
1336 return wsgi_decoding_dance(val, self.charset)
1337
1338 script_name = _get_wsgi_string('SCRIPT_NAME')
1339 path_info = _get_wsgi_string('PATH_INFO')
1340 query_args = _get_wsgi_string('QUERY_STRING')
1341 return Map.bind(self, server_name, script_name,
1342 subdomain, environ['wsgi.url_scheme'],
1343 environ['REQUEST_METHOD'], path_info,
1344 query_args=query_args)
1345
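With a plain dict standing in for a WSGI environment, the binding described above can be sketched like this, assuming `werkzeug` is importable (host and path values are made up):

```python
from werkzeug.routing import Map, Rule

url_map = Map([Rule("/item/<int:id>", endpoint="item")])
environ = {
    "HTTP_HOST": "example.com",
    "SERVER_NAME": "example.com",
    "SERVER_PORT": "80",
    "REQUEST_METHOD": "GET",
    "SCRIPT_NAME": "",
    "PATH_INFO": "/item/7",
    "QUERY_STRING": "",
    "wsgi.url_scheme": "http",
}
# PATH_INFO becomes the adapter's default path, so match()
# needs no arguments here.
urls = url_map.bind_to_environ(environ)
matched = urls.match()  # ('item', {'id': 7})
```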
1346 def update(self):
1347 """Called before matching and building to keep the compiled rules
1348 in the correct order after things changed.
1349 """
1350 if not self._remap:
1351 return
1352
1353 with self._remap_lock:
1354 if not self._remap:
1355 return
1356
1357 self._rules.sort(key=lambda x: x.match_compare_key())
1358 for rules in itervalues(self._rules_by_endpoint):
1359 rules.sort(key=lambda x: x.build_compare_key())
1360 self._remap = False
1361
1362 def __repr__(self):
1363 rules = self.iter_rules()
1364 return '%s(%s)' % (self.__class__.__name__, pformat(list(rules)))
1365
1366
1367 class MapAdapter(object):
1368
1369 """Returned by :meth:`Map.bind` or :meth:`Map.bind_to_environ` and does
1370 the URL matching and building based on runtime information.
1371 """
1372
1373 def __init__(self, map, server_name, script_name, subdomain,
1374 url_scheme, path_info, default_method, query_args=None):
1375 self.map = map
1376 self.server_name = to_unicode(server_name)
1377 script_name = to_unicode(script_name)
1378 if not script_name.endswith(u'/'):
1379 script_name += u'/'
1380 self.script_name = script_name
1381 self.subdomain = to_unicode(subdomain)
1382 self.url_scheme = to_unicode(url_scheme)
1383 self.path_info = to_unicode(path_info)
1384 self.default_method = to_unicode(default_method)
1385 self.query_args = query_args
1386
1387 def dispatch(self, view_func, path_info=None, method=None,
1388 catch_http_exceptions=False):
1389 """Does the complete dispatching process. `view_func` is called with
1390 the endpoint and a dict with the values for the view. It should
1391 look up the view function, call it, and return a response object
1392 or WSGI application. http exceptions are not caught by default
1393 so that applications can display nicer error messages by just
1394 catching them by hand. If you want to stick with the default
1395 error messages you can pass it ``catch_http_exceptions=True`` and
1396 it will catch the http exceptions.
1397
1398 Here a small example for the dispatch usage::
1399
1400 from werkzeug.wrappers import Request, Response
1401 from werkzeug.wsgi import responder
1402 from werkzeug.routing import Map, Rule
1403
1404 def on_index(request):
1405 return Response('Hello from the index')
1406
1407 url_map = Map([Rule('/', endpoint='index')])
1408 views = {'index': on_index}
1409
1410 @responder
1411 def application(environ, start_response):
1412 request = Request(environ)
1413 urls = url_map.bind_to_environ(environ)
1414 return urls.dispatch(lambda e, v: views[e](request, **v),
1415 catch_http_exceptions=True)
1416
1417 Keep in mind that this method might return exception objects, too, so
1418 use :class:`Response.force_type` to get a response object.
1419
1420 :param view_func: a function that is called with the endpoint as
1421 first argument and the value dict as second. Has
1422 to dispatch to the actual view function with this
1423 information. (see above)
1424 :param path_info: the path info to use for matching. Overrides the
1425 path info specified on binding.
1426 :param method: the HTTP method used for matching. Overrides the
1427 method specified on binding.
1428 :param catch_http_exceptions: set to `True` to catch any of the
1429 werkzeug :class:`HTTPException`\s.
1430 """
1431 try:
1432 try:
1433 endpoint, args = self.match(path_info, method)
1434 except RequestRedirect as e:
1435 return e
1436 return view_func(endpoint, args)
1437 except HTTPException as e:
1438 if catch_http_exceptions:
1439 return e
1440 raise
1441
1442 def match(self, path_info=None, method=None, return_rule=False,
1443 query_args=None):
1444 """The usage is simple: you just pass the match method the current
1445 path info as well as the method (which defaults to `GET`). The
1446 following things can then happen:
1447
1448 - you receive a `NotFound` exception that indicates that no URL is
1449 matching. A `NotFound` exception is also a WSGI application you
1450 can call to get a default "page not found" page (happens to be the
1451 same object as `werkzeug.exceptions.NotFound`)
1452
1453 - you receive a `MethodNotAllowed` exception that indicates that there
1454 is a match for this URL but not for the current request method.
1455 This is useful for RESTful applications.
1456
1457 - you receive a `RequestRedirect` exception with a `new_url`
1458 attribute. This exception is used to notify you about a redirect
1459 that Werkzeug requests from your WSGI application. This is for
1460 example the case if you request ``/foo`` although the correct URL
1461 is ``/foo/``. You can use the `RequestRedirect` instance as a
1462 response-like object like all other subclasses of `HTTPException`.
1463
1464 - you get a tuple in the form ``(endpoint, arguments)`` if there is
1465 a match (unless `return_rule` is True, in which case you get a tuple
1466 in the form ``(rule, arguments)``)
1467
1468 If the path info is not passed to the match method the default path
1469 info of the map is used (defaults to the root URL if not defined
1470 explicitly).
1471
1472 All of the exceptions raised are subclasses of `HTTPException` so they
1473 can be used as WSGI responses. They will all render generic error or
1474 redirect pages.
1475
1476 Here is a small example for matching:
1477
1478 >>> m = Map([
1479 ... Rule('/', endpoint='index'),
1480 ... Rule('/downloads/', endpoint='downloads/index'),
1481 ... Rule('/downloads/<int:id>', endpoint='downloads/show')
1482 ... ])
1483 >>> urls = m.bind("example.com", "/")
1484 >>> urls.match("/", "GET")
1485 ('index', {})
1486 >>> urls.match("/downloads/42")
1487 ('downloads/show', {'id': 42})
1488
1489 And here is what happens on redirect and missing URLs:
1490
1491 >>> urls.match("/downloads")
1492 Traceback (most recent call last):
1493 ...
1494 RequestRedirect: http://example.com/downloads/
1495 >>> urls.match("/missing")
1496 Traceback (most recent call last):
1497 ...
1498 NotFound: 404 Not Found
1499
1500 :param path_info: the path info to use for matching. Overrides the
1501 path info specified on binding.
1502 :param method: the HTTP method used for matching. Overrides the
1503 method specified on binding.
1504 :param return_rule: return the rule that matched instead of just the
1505 endpoint (defaults to `False`).
1506 :param query_args: optional query arguments that are used for
1507 automatic redirects as string or dictionary. It's
1508 currently not possible to use the query arguments
1509 for URL matching.
1510
1511 .. versionadded:: 0.6
1512 `return_rule` was added.
1513
1514 .. versionadded:: 0.7
1515 `query_args` was added.
1516
1517 .. versionchanged:: 0.8
1518 `query_args` can now also be a string.
1519 """
1520 self.map.update()
1521 if path_info is None:
1522 path_info = self.path_info
1523 else:
1524 path_info = to_unicode(path_info, self.map.charset)
1525 if query_args is None:
1526 query_args = self.query_args
1527 method = (method or self.default_method).upper()
1528
1529 path = u'%s|%s' % (
1530 self.map.host_matching and self.server_name or self.subdomain,
1531 path_info and '/%s' % path_info.lstrip('/')
1532 )
1533
1534 have_match_for = set()
1535 for rule in self.map._rules:
1536 try:
1537 rv = rule.match(path, method)
1538 except RequestSlash:
1539 raise RequestRedirect(self.make_redirect_url(
1540 url_quote(path_info, self.map.charset,
1541 safe='/:|+') + '/', query_args))
1542 except RequestAliasRedirect as e:
1543 raise RequestRedirect(self.make_alias_redirect_url(
1544 path, rule.endpoint, e.matched_values, method, query_args))
1545 if rv is None:
1546 continue
1547 if rule.methods is not None and method not in rule.methods:
1548 have_match_for.update(rule.methods)
1549 continue
1550
1551 if self.map.redirect_defaults:
1552 redirect_url = self.get_default_redirect(rule, method, rv,
1553 query_args)
1554 if redirect_url is not None:
1555 raise RequestRedirect(redirect_url)
1556
1557 if rule.redirect_to is not None:
1558 if isinstance(rule.redirect_to, string_types):
1559 def _handle_match(match):
1560 value = rv[match.group(1)]
1561 return rule._converters[match.group(1)].to_url(value)
1562 redirect_url = _simple_rule_re.sub(_handle_match,
1563 rule.redirect_to)
1564 else:
1565 redirect_url = rule.redirect_to(self, **rv)
1566 raise RequestRedirect(str(url_join('%s://%s%s%s' % (
1567 self.url_scheme or 'http',
1568 self.subdomain and self.subdomain + '.' or '',
1569 self.server_name,
1570 self.script_name
1571 ), redirect_url)))
1572
1573 if return_rule:
1574 return rule, rv
1575 else:
1576 return rule.endpoint, rv
1577
1578 if have_match_for:
1579 raise MethodNotAllowed(valid_methods=list(have_match_for))
1580 raise NotFound()
1581
1582 def test(self, path_info=None, method=None):
1583 """Test if a rule would match. Works like `match` but returns `True`
1584 if the URL matches, or `False` if it does not.
1585
1586 :param path_info: the path info to use for matching. Overrides the
1587 path info specified on binding.
1588 :param method: the HTTP method used for matching. Overrides the
1589 method specified on binding.
1590 """
1591 try:
1592 self.match(path_info, method)
1593 except RequestRedirect:
1594 pass
1595 except HTTPException:
1596 return False
1597 return True
1598
1599 def allowed_methods(self, path_info=None):
1600 """Returns the valid methods that match for a given path.
1601
1602 .. versionadded:: 0.7
1603 """
1604 try:
1605 self.match(path_info, method='--')
1606 except MethodNotAllowed as e:
1607 return e.valid_methods
1608 except HTTPException:
1609 pass
1610 return []
1611
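A quick sketch of `allowed_methods` with two method-restricted rules on the same path, assuming `werkzeug` is importable (endpoint names are illustrative):

```python
from werkzeug.routing import Map, Rule

url_map = Map([
    Rule("/res", endpoint="list", methods=["GET"]),
    Rule("/res", endpoint="create", methods=["POST"]),
])
urls = url_map.bind("example.com")

# HEAD is implied by GET on the first rule, so at minimum
# GET, HEAD and POST are reported for this path.
methods = set(urls.allowed_methods("/res"))
```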
1612 def get_host(self, domain_part):
1613 """Figures out the full host name for the given domain part. The
1614 domain part is a subdomain in case host matching is disabled or
1615 a full host name.
1616 """
1617 if self.map.host_matching:
1618 if domain_part is None:
1619 return self.server_name
1620 return to_unicode(domain_part, 'ascii')
1621 subdomain = domain_part
1622 if subdomain is None:
1623 subdomain = self.subdomain
1624 else:
1625 subdomain = to_unicode(subdomain, 'ascii')
1626 return (subdomain and subdomain + u'.' or u'') + self.server_name
1627
1628 def get_default_redirect(self, rule, method, values, query_args):
1629 """A helper that returns the URL to redirect to if it finds one.
1630 This is used for default redirecting only.
1631
1632 :internal:
1633 """
1634 assert self.map.redirect_defaults
1635 for r in self.map._rules_by_endpoint[rule.endpoint]:
1636 # every rule that comes after this one, including this rule
1637 # itself, has a lower priority for the defaults. We order the
1638 # ones with the highest priority first for building.
1639 if r is rule:
1640 break
1641 if r.provides_defaults_for(rule) and \
1642 r.suitable_for(values, method):
1643 values.update(r.defaults)
1644 domain_part, path = r.build(values)
1645 return self.make_redirect_url(
1646 path, query_args, domain_part=domain_part)
1647
1648 def encode_query_args(self, query_args):
1649 if not isinstance(query_args, string_types):
1650 query_args = url_encode(query_args, self.map.charset)
1651 return query_args
1652
1653 def make_redirect_url(self, path_info, query_args=None, domain_part=None):
1654 """Creates a redirect URL.
1655
1656 :internal:
1657 """
1658 suffix = ''
1659 if query_args:
1660 suffix = '?' + self.encode_query_args(query_args)
1661 return str('%s://%s/%s%s' % (
1662 self.url_scheme or 'http',
1663 self.get_host(domain_part),
1664 posixpath.join(self.script_name[:-1].lstrip('/'),
1665 path_info.lstrip('/')),
1666 suffix
1667 ))
1668
1669 def make_alias_redirect_url(self, path, endpoint, values, method, query_args):
1670 """Internally called to make an alias redirect URL."""
1671 url = self.build(endpoint, values, method, append_unknown=False,
1672 force_external=True)
1673 if query_args:
1674 url += '?' + self.encode_query_args(query_args)
1675 assert url != path, 'detected invalid alias setting. No canonical ' \
1676 'URL found'
1677 return url
1678
1679 def _partial_build(self, endpoint, values, method, append_unknown):
1680 """Helper for :meth:`build`. Returns subdomain and path for the
1681 rule that accepts this endpoint, values and method.
1682
1683 :internal:
1684 """
1685 # in case the method is none, try with the default method first
1686 if method is None:
1687 rv = self._partial_build(endpoint, values, self.default_method,
1688 append_unknown)
1689 if rv is not None:
1690 return rv
1691
1692 # default method did not match or a specific method is passed,
1693 # check all and go with first result.
1694 for rule in self.map._rules_by_endpoint.get(endpoint, ()):
1695 if rule.suitable_for(values, method):
1696 rv = rule.build(values, append_unknown)
1697 if rv is not None:
1698 return rv
1699
1700 def build(self, endpoint, values=None, method=None, force_external=False,
1701 append_unknown=True):
1702 """Building URLs works pretty much the other way round. Instead of
1703 `match` you call `build` and pass it the endpoint and a dict of
1704 arguments for the placeholders.
1705
1706 The `build` function also accepts an argument called `force_external`
1707 which, if set to `True`, will force external URLs. By default,
1708 external URLs (which include the server name) are only used if the
1709 target URL is on a different subdomain.
1710
1711 >>> m = Map([
1712 ... Rule('/', endpoint='index'),
1713 ... Rule('/downloads/', endpoint='downloads/index'),
1714 ... Rule('/downloads/<int:id>', endpoint='downloads/show')
1715 ... ])
1716 >>> urls = m.bind("example.com", "/")
1717 >>> urls.build("index", {})
1718 '/'
1719 >>> urls.build("downloads/show", {'id': 42})
1720 '/downloads/42'
1721 >>> urls.build("downloads/show", {'id': 42}, force_external=True)
1722 'http://example.com/downloads/42'
1723
1724 Because URLs cannot contain non-ASCII data you will always get
1725 an ASCII-safe native string back. Non-ASCII characters are
1726 urlencoded with the charset defined on the map instance.
1727
1728 Additional values are converted to unicode and appended to the URL as
1729 URL querystring parameters:
1730
1731 >>> urls.build("index", {'q': 'My Searchstring'})
1732 '/?q=My+Searchstring'
1733
1734 When processing those additional values, lists are furthermore
1735 interpreted as multiple values (as per
1736 :py:class:`werkzeug.datastructures.MultiDict`):
1737
1738 >>> urls.build("index", {'q': ['a', 'b', 'c']})
1739 '/?q=a&q=b&q=c'
1740
1741 If no rule matches when building, a `BuildError` exception is
1742 raised.
1743
1744 The build method accepts an argument called `method` which allows you
1745 to specify the method you want to have a URL built for if you have
1746 different methods registered for the same endpoint.
1747
1748 .. versionadded:: 0.6
1749 the `append_unknown` parameter was added.
1750
1751 :param endpoint: the endpoint of the URL to build.
1752 :param values: the values for the URL to build. Unhandled values are
1753 appended to the URL as query parameters.
1754 :param method: the HTTP method for the rule if there are different
1755 URLs for different methods on the same endpoint.
1756 :param force_external: enforce full canonical external URLs. If the URL
1757 scheme is not provided, this will generate
1758 a protocol-relative URL.
1759 :param append_unknown: unknown parameters are appended to the generated
1760 URL as query string argument. Disable this
1761 if you want the builder to ignore those.
1762 """
1763 self.map.update()
1764 if values:
1765 if isinstance(values, MultiDict):
1766 valueiter = iteritems(values, multi=True)
1767 else:
1768 valueiter = iteritems(values)
1769 values = dict((k, v) for k, v in valueiter if v is not None)
1770 else:
1771 values = {}
1772
1773 rv = self._partial_build(endpoint, values, method, append_unknown)
1774 if rv is None:
1775 raise BuildError(endpoint, values, method, self)
1776 domain_part, path = rv
1777
1778 host = self.get_host(domain_part)
1779
1780 # shortcut this.
1781 if not force_external and (
1782 (self.map.host_matching and host == self.server_name) or
1783 (not self.map.host_matching and domain_part == self.subdomain)
1784 ):
1785 return str(url_join(self.script_name, './' + path.lstrip('/')))
1786 return str('%s//%s%s/%s' % (
1787 self.url_scheme + ':' if self.url_scheme else '',
1788 host,
1789 self.script_name[:-1],
1790 path.lstrip('/')
1791 ))
+0
-270
werkzeug/security.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.security
3 ~~~~~~~~~~~~~~~~~
4
5 Security related helpers such as secure password hashing tools.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 import os
11 import hmac
12 import hashlib
13 import posixpath
14 import codecs
15 from struct import Struct
16 from random import SystemRandom
17 from operator import xor
18 from itertools import starmap
19
20 from werkzeug._compat import range_type, PY2, text_type, izip, to_bytes, \
21 string_types, to_native
22
23
24 SALT_CHARS = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
25 DEFAULT_PBKDF2_ITERATIONS = 50000
26
27
28 _pack_int = Struct('>I').pack
29 _builtin_safe_str_cmp = getattr(hmac, 'compare_digest', None)
30 _sys_rng = SystemRandom()
31 _os_alt_seps = list(sep for sep in [os.path.sep, os.path.altsep]
32 if sep not in (None, '/'))
33
34
35 def _find_hashlib_algorithms():
36 algos = getattr(hashlib, 'algorithms', None)
37 if algos is None:
38 algos = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512')
39 rv = {}
40 for algo in algos:
41 func = getattr(hashlib, algo, None)
42 if func is not None:
43 rv[algo] = func
44 return rv
45 _hash_funcs = _find_hashlib_algorithms()
46
47
48 def pbkdf2_hex(data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS,
49 keylen=None, hashfunc=None):
50 """Like :func:`pbkdf2_bin`, but returns a hex-encoded string.
51
52 .. versionadded:: 0.9
53
54 :param data: the data to derive.
55 :param salt: the salt for the derivation.
56 :param iterations: the number of iterations.
57 :param keylen: the length of the resulting key. If not provided,
58 the digest size will be used.
59 :param hashfunc: the hash function to use. This can either be the
60 string name of a known hash function, or a function
61 from the hashlib module. Defaults to sha256.
62 """
63 rv = pbkdf2_bin(data, salt, iterations, keylen, hashfunc)
64 return to_native(codecs.encode(rv, 'hex_codec'))
65
66
67 _has_native_pbkdf2 = hasattr(hashlib, 'pbkdf2_hmac')
68
69
70 def pbkdf2_bin(data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS,
71 keylen=None, hashfunc=None):
72 """Returns a binary digest for the PBKDF2 hash algorithm of `data`
73 with the given `salt`. It iterates `iterations` times and produces a
74 key of `keylen` bytes. By default, SHA-256 is used as hash function;
75 a different hashlib `hashfunc` can be provided.
76
77 .. versionadded:: 0.9
78
79 :param data: the data to derive.
80 :param salt: the salt for the derivation.
81 :param iterations: the number of iterations.
82 :param keylen: the length of the resulting key. If not provided
83 the digest size will be used.
84 :param hashfunc: the hash function to use. This can either be the
85 string name of a known hash function or a function
86 from the hashlib module. Defaults to sha256.
87 """
88 if isinstance(hashfunc, string_types):
89 hashfunc = _hash_funcs[hashfunc]
90 elif not hashfunc:
91 hashfunc = hashlib.sha256
92 data = to_bytes(data)
93 salt = to_bytes(salt)
94
95 # If we're on Python with pbkdf2_hmac we can try to use it for
96 # compatible digests.
97 if _has_native_pbkdf2:
98 _test_hash = hashfunc()
99 if hasattr(_test_hash, 'name') and \
100 _test_hash.name in _hash_funcs:
101 return hashlib.pbkdf2_hmac(_test_hash.name,
102 data, salt, iterations,
103 keylen)
104
105 mac = hmac.HMAC(data, None, hashfunc)
106 if not keylen:
107 keylen = mac.digest_size
108
109 def _pseudorandom(x, mac=mac):
110 h = mac.copy()
111 h.update(x)
112 return bytearray(h.digest())
113 buf = bytearray()
114 for block in range_type(1, -(-keylen // mac.digest_size) + 1):
115 rv = u = _pseudorandom(salt + _pack_int(block))
116 for i in range_type(iterations - 1):
117 u = _pseudorandom(bytes(u))
118 rv = bytearray(starmap(xor, izip(rv, u)))
119 buf.extend(rv)
120 return bytes(buf[:keylen])
121
122
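When `hashlib.pbkdf2_hmac` is available, `pbkdf2_bin` delegates to it for compatible digests; the equivalent stdlib call can be sketched like this (the password, salt, and iteration count are illustrative, not values from the library):

```python
import binascii
import hashlib

# Derive a 32-byte key with PBKDF2-HMAC-SHA256, mirroring what
# pbkdf2_bin() does when the native hashlib.pbkdf2_hmac is present.
dk = hashlib.pbkdf2_hmac("sha256", b"secret-password", b"random-salt", 50000)
hex_key = binascii.hexlify(dk).decode("ascii")

# The derivation is deterministic for the same inputs...
assert dk == hashlib.pbkdf2_hmac("sha256", b"secret-password", b"random-salt", 50000)
# ...and changes completely if the salt changes.
assert dk != hashlib.pbkdf2_hmac("sha256", b"secret-password", b"other-salt", 50000)
print(len(hex_key))  # 64 hex characters for a 32-byte key
```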
123 def safe_str_cmp(a, b):
124 """This function compares strings in somewhat constant time. This
125 requires that the length of at least one string is known in advance.
126
127 Returns `True` if the two strings are equal, or `False` if they are not.
128
129 .. versionadded:: 0.7
130 """
131 if isinstance(a, text_type):
132 a = a.encode('utf-8')
133 if isinstance(b, text_type):
134 b = b.encode('utf-8')
135
136 if _builtin_safe_str_cmp is not None:
137 return _builtin_safe_str_cmp(a, b)
138
139 if len(a) != len(b):
140 return False
141
142 rv = 0
143 if PY2:
144 for x, y in izip(a, b):
145 rv |= ord(x) ^ ord(y)
146 else:
147 for x, y in izip(a, b):
148 rv |= x ^ y
149
150 return rv == 0
151
152
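On Pythons that provide it, the preferred primitive is `hmac.compare_digest`, which is exactly what `safe_str_cmp` uses when `_builtin_safe_str_cmp` is available:

```python
import hmac

# Constant-time comparison: the run time does not depend on where
# the first differing byte is, which defeats timing attacks.
token = "a" * 32
assert hmac.compare_digest(token, "a" * 32)
assert not hmac.compare_digest(token, "b" * 32)
```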
153 def gen_salt(length):
154 """Generate a random string of SALT_CHARS with specified ``length``."""
155 if length <= 0:
156 raise ValueError('Salt length must be positive')
157 return ''.join(_sys_rng.choice(SALT_CHARS) for _ in range_type(length))
158
159
160 def _hash_internal(method, salt, password):
161 """Internal password hash helper. Supports plaintext without salt,
162 as well as unsalted and salted hashes. When a salt is used, HMAC
163 is used internally.
164 """
165 if method == 'plain':
166 return password, method
167
168 if isinstance(password, text_type):
169 password = password.encode('utf-8')
170
171 if method.startswith('pbkdf2:'):
172 args = method[7:].split(':')
173 if len(args) not in (1, 2):
174 raise ValueError('Invalid number of arguments for PBKDF2')
175 method = args.pop(0)
176 iterations = args and int(args[0] or 0) or DEFAULT_PBKDF2_ITERATIONS
177 is_pbkdf2 = True
178 actual_method = 'pbkdf2:%s:%d' % (method, iterations)
179 else:
180 is_pbkdf2 = False
181 actual_method = method
182
183 hash_func = _hash_funcs.get(method)
184 if hash_func is None:
185 raise TypeError('invalid method %r' % method)
186
187 if is_pbkdf2:
188 if not salt:
189 raise ValueError('Salt is required for PBKDF2')
190 rv = pbkdf2_hex(password, salt, iterations,
191 hashfunc=hash_func)
192 elif salt:
193 if isinstance(salt, text_type):
194 salt = salt.encode('utf-8')
195 rv = hmac.HMAC(salt, password, hash_func).hexdigest()
196 else:
197 h = hash_func()
198 h.update(password)
199 rv = h.hexdigest()
200 return rv, actual_method
201
202
203 def generate_password_hash(password, method='pbkdf2:sha256', salt_length=8):
204 """Hash a password with the given method and salt with a string of
205 the given length. The format of the string returned includes the method
206 that was used so that :func:`check_password_hash` can check the hash.
207
208 The format for the hashed string looks like this::
209
210 method$salt$hash
211
212 This method can **not** generate unsalted passwords, but it is
213 possible to set ``method='plain'`` in order to enforce plaintext
214 passwords. If a salt is used, HMAC is used internally to salt the
215 password.
215
216 If PBKDF2 is wanted it can be enabled by setting the method to
217 ``pbkdf2:method:iterations`` where iterations is optional::
218
219 pbkdf2:sha256:80000$salt$hash
220 pbkdf2:sha256$salt$hash
221
222 :param password: the password to hash.
223 :param method: the hash method to use (one that hashlib supports). Can
224 optionally be in the format ``pbkdf2:<method>[:iterations]``
225 to enable PBKDF2.
226 :param salt_length: the length of the salt in letters.
227 """
228 salt = method != 'plain' and gen_salt(salt_length) or ''
229 h, actual_method = _hash_internal(method, salt, password)
230 return '%s$%s$%s' % (actual_method, salt, h)
231
232
233 def check_password_hash(pwhash, password):
234 """Check a password against a given salted and hashed password value.
235 In order to support unsalted legacy passwords this method supports
236 plain text passwords, md5 and sha1 hashes (both salted and unsalted).
237
238 Returns `True` if the password matched, `False` otherwise.
239
240 :param pwhash: a hashed string as returned by
241 :func:`generate_password_hash`.
242 :param password: the plaintext password to compare against the hash.
243 """
244 if pwhash.count('$') < 2:
245 return False
246 method, salt, hashval = pwhash.split('$', 2)
247 return safe_str_cmp(_hash_internal(method, salt, password)[0], hashval)
248
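The generate/check pair above can be sketched with only the standard library. The names `hash_password` and `check_password` below are hypothetical helpers for illustration, not Werkzeug API; they mirror the ``method$salt$hash`` format for the ``pbkdf2:sha256`` case:

```python
import binascii
import hashlib
import hmac
import os

def hash_password(password, iterations=50000):
    # Stdlib-only sketch of the pbkdf2:sha256 scheme; the real format
    # is produced by generate_password_hash() above.
    salt = binascii.hexlify(os.urandom(8)).decode("ascii")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                             salt.encode("utf-8"), iterations)
    return "pbkdf2:sha256:%d$%s$%s" % (
        iterations, salt, binascii.hexlify(dk).decode("ascii"))

def check_password(pwhash, password):
    # Split on the first two '$', exactly like check_password_hash().
    method, salt, hashval = pwhash.split("$", 2)
    iterations = int(method.split(":")[2])
    dk = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                             salt.encode("utf-8"), iterations)
    return hmac.compare_digest(binascii.hexlify(dk).decode("ascii"), hashval)

pwhash = hash_password("correct horse")
assert check_password(pwhash, "correct horse")
assert not check_password(pwhash, "wrong battery")
```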
249
250 def safe_join(directory, *pathnames):
251 """Safely join `directory` and one or more untrusted `pathnames`. If this
252 cannot be done, this function returns ``None``.
253
254 :param directory: the base directory.
255 :param pathnames: the untrusted pathnames relative to that directory.
256 """
257 parts = [directory]
258 for filename in pathnames:
259 if filename != '':
260 filename = posixpath.normpath(filename)
261 for sep in _os_alt_seps:
262 if sep in filename:
263 return None
264 if os.path.isabs(filename) or \
265 filename == '..' or \
266 filename.startswith('../'):
267 return None
268 parts.append(filename)
269 return posixpath.join(*parts)
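The traversal check above can be exercised with a self-contained re-statement; `demo_safe_join` is a POSIX-only sketch of the same logic (the alternative-separator check against ``_os_alt_seps`` is omitted here):

```python
import posixpath

def demo_safe_join(directory, *pathnames):
    # Sketch of safe_join(): normalize each untrusted segment, then
    # reject absolute paths and anything escaping the base directory.
    parts = [directory]
    for filename in pathnames:
        if filename != "":
            filename = posixpath.normpath(filename)
        if (posixpath.isabs(filename)
                or filename == ".."
                or filename.startswith("../")):
            return None
        parts.append(filename)
    return posixpath.join(*parts)

assert demo_safe_join("/var/www", "app", "static/logo.png") == "/var/www/app/static/logo.png"
assert demo_safe_join("/var/www", "../etc/passwd") is None
assert demo_safe_join("/var/www", "a/../../b") is None  # normalizes to ../b
```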
+0
-862
werkzeug/serving.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.serving
3 ~~~~~~~~~~~~~~~~
4
5 There are many ways to serve a WSGI application. While you're developing
6 it you usually don't want a full-blown webserver like Apache but a simple
7 standalone one. From Python 2.5 onwards there is the `wsgiref`_ server in
8 the standard library. If you're using an older version of Python you can
9 download the package from the Cheeseshop.
10
11 However there are some caveats. Source code won't reload itself when
12 changed, and each time you kill the server using ``^C`` you get a
13 `KeyboardInterrupt` error. While the latter is easy to solve, the
14 former can be quite painful in some situations.
15
16 The easiest way is to create a small ``start-myproject.py`` that runs the
17 application::
18
19 #!/usr/bin/env python
20 # -*- coding: utf-8 -*-
21 from myproject import make_app
22 from werkzeug.serving import run_simple
23
24 app = make_app(...)
25 run_simple('localhost', 8080, app, use_reloader=True)
26
27 You can also pass it an `extra_files` keyword argument with a list of
28 additional files (like configuration files) you want to observe.
29
30 For bigger applications you should consider using `click`
31 (http://click.pocoo.org) instead of a simple start file.
32
33
34 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
35 :license: BSD, see LICENSE for more details.
36 """
37 from __future__ import with_statement
38
39 import io
40 import os
41 import socket
42 import sys
43 import signal
44
45
46 can_fork = hasattr(os, "fork")
47
48
49 try:
50 import termcolor
51 except ImportError:
52 termcolor = None
53
54 try:
55 import ssl
56 except ImportError:
57 class _SslDummy(object):
58 def __getattr__(self, name):
59 raise RuntimeError('SSL support unavailable')
60 ssl = _SslDummy()
61
62
63 def _get_openssl_crypto_module():
64 try:
65 from OpenSSL import crypto
66 except ImportError:
67 raise TypeError('Using ad-hoc certificates requires the pyOpenSSL '
68 'library.')
69 else:
70 return crypto
71
72
73 try:
74 import SocketServer as socketserver
75 from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
76 except ImportError:
77 import socketserver
78 from http.server import HTTPServer, BaseHTTPRequestHandler
79
80 ThreadingMixIn = socketserver.ThreadingMixIn
81
82 if can_fork:
83 ForkingMixIn = socketserver.ForkingMixIn
84 else:
85 class ForkingMixIn(object):
86 pass
87
88 # important: do not use relative imports here or python -m will break
89 import werkzeug
90 from werkzeug._internal import _log
91 from werkzeug._compat import PY2, WIN, reraise, wsgi_encoding_dance
92 from werkzeug.urls import url_parse, url_unquote
93 from werkzeug.exceptions import InternalServerError
94
95
96 LISTEN_QUEUE = 128
97 can_open_by_fd = not WIN and hasattr(socket, 'fromfd')
98
99
100 class DechunkedInput(io.RawIOBase):
101 """An input stream that handles Transfer-Encoding 'chunked'"""
102
103 def __init__(self, rfile):
104 self._rfile = rfile
105 self._done = False
106 self._len = 0
107
108 def readable(self):
109 return True
110
111 def read_chunk_len(self):
112 try:
113 line = self._rfile.readline().decode('latin1')
114 _len = int(line.strip(), 16)
115 except ValueError:
116 raise IOError('Invalid chunk header')
117 if _len < 0:
118 raise IOError('Negative chunk length not allowed')
119 return _len
120
121 def readinto(self, buf):
122 read = 0
123 while not self._done and read < len(buf):
124 if self._len == 0:
125 # This is the first chunk or we fully consumed the previous
126 # one. Read the next length of the next chunk
127 self._len = self.read_chunk_len()
128
129 if self._len == 0:
130 # Found the final chunk of size 0. The stream is now exhausted,
131 # but there is still a final newline that should be consumed
132 self._done = True
133
134 if self._len > 0:
135 # There is data (left) in this chunk, so append it to the
136 # buffer. If this operation fully consumes the chunk, this will
137 # reset self._len to 0.
138 n = min(len(buf), self._len)
139 buf[read:read + n] = self._rfile.read(n)
140 self._len -= n
141 read += n
142
143 if self._len == 0:
144 # Skip the terminating newline of a chunk that has been fully
145 # consumed. This also applies to the 0-sized final chunk
146 terminator = self._rfile.readline()
147 if terminator not in (b'\n', b'\r\n', b'\r'):
148 raise IOError('Missing chunk terminating newline')
149
150 return read
151
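The loop that `readinto` implements can be sketched as a standalone decoder: read a hexadecimal chunk length, then that many bytes, then the terminating newline, until the final zero-length chunk. The `dechunk` helper and the sample body below are illustrative:

```python
import io

def dechunk(raw):
    # Minimal sketch of the DechunkedInput.readinto() loop above.
    rfile = io.BytesIO(raw)
    out = bytearray()
    while True:
        size = int(rfile.readline().strip(), 16)
        out += rfile.read(size)
        if rfile.readline() not in (b"\r\n", b"\n", b"\r"):
            raise IOError("Missing chunk terminating newline")
        if size == 0:
            return bytes(out)

body = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
assert dechunk(body) == b"Wikipedia"
```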
152
153 class WSGIRequestHandler(BaseHTTPRequestHandler, object):
154
155 """A request handler that implements WSGI dispatching."""
156
157 @property
158 def server_version(self):
159 return 'Werkzeug/' + werkzeug.__version__
160
161 def make_environ(self):
162 request_url = url_parse(self.path)
163
164 def shutdown_server():
165 self.server.shutdown_signal = True
166
167 url_scheme = self.server.ssl_context is None and 'http' or 'https'
168 path_info = url_unquote(request_url.path)
169
170 environ = {
171 'wsgi.version': (1, 0),
172 'wsgi.url_scheme': url_scheme,
173 'wsgi.input': self.rfile,
174 'wsgi.errors': sys.stderr,
175 'wsgi.multithread': self.server.multithread,
176 'wsgi.multiprocess': self.server.multiprocess,
177 'wsgi.run_once': False,
178 'werkzeug.server.shutdown': shutdown_server,
179 'SERVER_SOFTWARE': self.server_version,
180 'REQUEST_METHOD': self.command,
181 'SCRIPT_NAME': '',
182 'PATH_INFO': wsgi_encoding_dance(path_info),
183 'QUERY_STRING': wsgi_encoding_dance(request_url.query),
184 'REMOTE_ADDR': self.address_string(),
185 'REMOTE_PORT': self.port_integer(),
186 'SERVER_NAME': self.server.server_address[0],
187 'SERVER_PORT': str(self.server.server_address[1]),
188 'SERVER_PROTOCOL': self.request_version
189 }
190
191 for key, value in self.headers.items():
192 key = key.upper().replace('-', '_')
193 if key not in ('CONTENT_TYPE', 'CONTENT_LENGTH'):
194 key = 'HTTP_' + key
195 environ[key] = value
196
197 if environ.get('HTTP_TRANSFER_ENCODING', '').strip().lower() == 'chunked':
198 environ['wsgi.input_terminated'] = True
199 environ['wsgi.input'] = DechunkedInput(environ['wsgi.input'])
200
201 if request_url.scheme and request_url.netloc:
202 environ['HTTP_HOST'] = request_url.netloc
203
204 return environ
205
206 def run_wsgi(self):
207 if self.headers.get('Expect', '').lower().strip() == '100-continue':
208 self.wfile.write(b'HTTP/1.1 100 Continue\r\n\r\n')
209
210 self.environ = environ = self.make_environ()
211 headers_set = []
212 headers_sent = []
213
214 def write(data):
215 assert headers_set, 'write() before start_response'
216 if not headers_sent:
217 status, response_headers = headers_sent[:] = headers_set
218 try:
219 code, msg = status.split(None, 1)
220 except ValueError:
221 code, msg = status, ""
222 code = int(code)
223 self.send_response(code, msg)
224 header_keys = set()
225 for key, value in response_headers:
226 self.send_header(key, value)
227 key = key.lower()
228 header_keys.add(key)
229 if not ('content-length' in header_keys or
230 environ['REQUEST_METHOD'] == 'HEAD' or
231 code < 200 or code in (204, 304)):
232 self.close_connection = True
233 self.send_header('Connection', 'close')
234 if 'server' not in header_keys:
235 self.send_header('Server', self.version_string())
236 if 'date' not in header_keys:
237 self.send_header('Date', self.date_time_string())
238 self.end_headers()
239
240 assert isinstance(data, bytes), 'applications must write bytes'
241 self.wfile.write(data)
242 self.wfile.flush()
243
244 def start_response(status, response_headers, exc_info=None):
245 if exc_info:
246 try:
247 if headers_sent:
248 reraise(*exc_info)
249 finally:
250 exc_info = None
251 elif headers_set:
252 raise AssertionError('Headers already set')
253 headers_set[:] = [status, response_headers]
254 return write
255
256 def execute(app):
257 application_iter = app(environ, start_response)
258 try:
259 for data in application_iter:
260 write(data)
261 if not headers_sent:
262 write(b'')
263 finally:
264 if hasattr(application_iter, 'close'):
265 application_iter.close()
266 application_iter = None
267
268 try:
269 execute(self.server.app)
270 except (socket.error, socket.timeout) as e:
271 self.connection_dropped(e, environ)
272 except Exception:
273 if self.server.passthrough_errors:
274 raise
275 from werkzeug.debug.tbtools import get_current_traceback
276 traceback = get_current_traceback(ignore_system_exceptions=True)
277 try:
278 # if we haven't yet sent the headers but they are set
279 # we roll back to be able to set them again.
280 if not headers_sent:
281 del headers_set[:]
282 execute(InternalServerError())
283 except Exception:
284 pass
285 self.server.log('error', 'Error on request:\n%s',
286 traceback.plaintext)
287
288 def handle(self):
289 """Handles a request ignoring dropped connections."""
290 rv = None
291 try:
292 rv = BaseHTTPRequestHandler.handle(self)
293 except (socket.error, socket.timeout) as e:
294 self.connection_dropped(e)
295 except Exception:
296 if self.server.ssl_context is None or not is_ssl_error():
297 raise
298 if self.server.shutdown_signal:
299 self.initiate_shutdown()
300 return rv
301
302 def initiate_shutdown(self):
303 """A horrible, horrible way to kill the server for Python 2.6 and
304 later. It's the best we can do.
305 """
306 # Windows does not provide SIGKILL, go with SIGTERM then.
307 sig = getattr(signal, 'SIGKILL', signal.SIGTERM)
308 # reloader active
309 if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
310 os.kill(os.getpid(), sig)
311 # python 2.7
312 self.server._BaseServer__shutdown_request = True
313 # python 2.6
314 self.server._BaseServer__serving = False
315
316 def connection_dropped(self, error, environ=None):
317 """Called if the connection was closed by the client. By default
318 nothing happens.
319 """
320
321 def handle_one_request(self):
322 """Handle a single HTTP request."""
323 self.raw_requestline = self.rfile.readline()
324 if not self.raw_requestline:
325 self.close_connection = 1
326 elif self.parse_request():
327 return self.run_wsgi()
328
329 def send_response(self, code, message=None):
330 """Send the response header and log the response code."""
331 self.log_request(code)
332 if message is None:
333 message = code in self.responses and self.responses[code][0] or ''
334 if self.request_version != 'HTTP/0.9':
335 hdr = "%s %d %s\r\n" % (self.protocol_version, code, message)
336 self.wfile.write(hdr.encode('ascii'))
337
338 def version_string(self):
339 return BaseHTTPRequestHandler.version_string(self).strip()
340
341 def address_string(self):
342 if getattr(self, 'environ', None):
343 return self.environ['REMOTE_ADDR']
344 else:
345 return self.client_address[0]
346
347 def port_integer(self):
348 return self.client_address[1]
349
350 def log_request(self, code='-', size='-'):
351 msg = self.requestline
352 code = str(code)
353
354 if termcolor:
355 color = termcolor.colored
356
357 if code[0] == '1': # 1xx - Informational
358 msg = color(msg, attrs=['bold'])
359 elif code[0] == '2': # 2xx - Success
360 msg = color(msg, color='white')
361 elif code == '304': # 304 - Resource Not Modified
362 msg = color(msg, color='cyan')
363 elif code[0] == '3': # 3xx - Redirection
364 msg = color(msg, color='green')
365 elif code == '404': # 404 - Resource Not Found
366 msg = color(msg, color='yellow')
367 elif code[0] == '4': # 4xx - Client Error
368 msg = color(msg, color='red', attrs=['bold'])
369 else: # 5xx, or any other response
370 msg = color(msg, color='magenta', attrs=['bold'])
371
372 self.log('info', '"%s" %s %s', msg, code, size)
373
374 def log_error(self, *args):
375 self.log('error', *args)
376
377 def log_message(self, format, *args):
378 self.log('info', format, *args)
379
380 def log(self, type, message, *args):
381 _log(type, '%s - - [%s] %s\n' % (self.address_string(),
382 self.log_date_time_string(),
383 message % args))
384
385
386 #: backwards compatible name if someone is subclassing it
387 BaseRequestHandler = WSGIRequestHandler
388
389
390 def generate_adhoc_ssl_pair(cn=None):
391 from random import random
392 crypto = _get_openssl_crypto_module()
393
394 # pretty damn sure that this is not actually accepted by anyone
395 if cn is None:
396 cn = '*'
397
398 cert = crypto.X509()
399 cert.set_serial_number(int(random() * sys.maxsize))
400 cert.gmtime_adj_notBefore(0)
401 cert.gmtime_adj_notAfter(60 * 60 * 24 * 365)
402
403 subject = cert.get_subject()
404 subject.CN = cn
405 subject.O = 'Dummy Certificate' # noqa: E741
406
407 issuer = cert.get_issuer()
408 issuer.CN = 'Untrusted Authority'
409 issuer.O = 'Self-Signed' # noqa: E741
410
411 pkey = crypto.PKey()
412 pkey.generate_key(crypto.TYPE_RSA, 2048)
413 cert.set_pubkey(pkey)
414 cert.sign(pkey, 'sha256')
415
416 return cert, pkey
417
418
419 def make_ssl_devcert(base_path, host=None, cn=None):
420 """Creates an SSL key for development. This should be used instead of
421 the ``'adhoc'`` key which generates a new cert on each server start.
422 It accepts a path for where it should store the key and cert and
423 either a host or CN. If a host is given it will use the CN
424 ``*.host/CN=host``.
425
426 For more information see :func:`run_simple`.
427
428 .. versionadded:: 0.9
429
430 :param base_path: the path to the certificate and key. The extension
431 ``.crt`` is added for the certificate, ``.key`` is
432 added for the key.
433 :param host: the name of the host. This can be used as an alternative
434 for the `cn`.
435 :param cn: the `CN` to use.
436 """
437 from OpenSSL import crypto
438 if host is not None:
439 cn = '*.%s/CN=%s' % (host, host)
440 cert, pkey = generate_adhoc_ssl_pair(cn=cn)
441
442 cert_file = base_path + '.crt'
443 pkey_file = base_path + '.key'
444
445 with open(cert_file, 'wb') as f:
446 f.write(crypto.dump_certificate(crypto.FILETYPE_PEM, cert))
447 with open(pkey_file, 'wb') as f:
448 f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey))
449
450 return cert_file, pkey_file
451
452
453 def generate_adhoc_ssl_context():
454 """Generates an adhoc SSL context for the development server."""
455 crypto = _get_openssl_crypto_module()
456 import tempfile
457 import atexit
458
459 cert, pkey = generate_adhoc_ssl_pair()
460 cert_handle, cert_file = tempfile.mkstemp()
461 pkey_handle, pkey_file = tempfile.mkstemp()
462 atexit.register(os.remove, pkey_file)
463 atexit.register(os.remove, cert_file)
464
465 os.write(cert_handle, crypto.dump_certificate(crypto.FILETYPE_PEM, cert))
466 os.write(pkey_handle, crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey))
467 os.close(cert_handle)
468 os.close(pkey_handle)
469 ctx = load_ssl_context(cert_file, pkey_file)
470 return ctx
471
472
473 def load_ssl_context(cert_file, pkey_file=None, protocol=None):
474 """Loads SSL context from cert/private key files and optional protocol.
475 Many parameters are directly taken from the API of
476 :py:class:`ssl.SSLContext`.
477
478 :param cert_file: Path of the certificate to use.
479 :param pkey_file: Path of the private key to use. If not given, the key
480 will be obtained from the certificate file.
481 :param protocol: One of the ``PROTOCOL_*`` constants in the stdlib ``ssl``
482 module. Defaults to ``PROTOCOL_SSLv23``.
483 """
484 if protocol is None:
485 protocol = ssl.PROTOCOL_SSLv23
486 ctx = _SSLContext(protocol)
487 ctx.load_cert_chain(cert_file, pkey_file)
488 return ctx
489
490
491 class _SSLContext(object):
492
493 '''A dummy class with a small subset of Python3's ``ssl.SSLContext``, only
494 intended to be used with and by Werkzeug.'''
495
496 def __init__(self, protocol):
497 self._protocol = protocol
498 self._certfile = None
499 self._keyfile = None
500 self._password = None
501
502 def load_cert_chain(self, certfile, keyfile=None, password=None):
503 self._certfile = certfile
504 self._keyfile = keyfile or certfile
505 self._password = password
506
507 def wrap_socket(self, sock, **kwargs):
508 return ssl.wrap_socket(sock, keyfile=self._keyfile,
509 certfile=self._certfile,
510 ssl_version=self._protocol, **kwargs)
511
512
513 def is_ssl_error(error=None):
514 """Checks if the given error (or the current one) is an SSL error."""
515 exc_types = (ssl.SSLError,)
516 try:
517 from OpenSSL.SSL import Error
518 exc_types += (Error,)
519 except ImportError:
520 pass
521
522 if error is None:
523 error = sys.exc_info()[1]
524 return isinstance(error, exc_types)
525
526
527 def select_ip_version(host, port):
528 """Returns AF_INET4 or AF_INET6 depending on where to connect to."""
529 # disabled due to problems with current ipv6 implementations
530 # and various operating systems. Probably this code also is
531 # not supposed to work, but I can't come up with any other
532 # ways to implement this.
533 # try:
534 # info = socket.getaddrinfo(host, port, socket.AF_UNSPEC,
535 # socket.SOCK_STREAM, 0,
536 # socket.AI_PASSIVE)
537 # if info:
538 # return info[0][0]
539 # except socket.gaierror:
540 # pass
541 if ':' in host and hasattr(socket, 'AF_INET6'):
542 return socket.AF_INET6
543 return socket.AF_INET
544
545
546 def get_sockaddr(host, port, family):
547 """Returns a fully qualified socket address that can be properly
548 used by socket.bind."""
549 try:
550 res = socket.getaddrinfo(host, port, family,
551 socket.SOCK_STREAM, socket.SOL_TCP)
552 except socket.gaierror:
553 return (host, port)
554 return res[0][4]
555
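For a literal address, the `getaddrinfo` call made above resolves without a DNS lookup; this sketch uses `IPPROTO_TCP`, the portable spelling of the `SOL_TCP` constant used in the source, and an illustrative host/port:

```python
import socket

# Resolve a numeric host the way get_sockaddr() does.
res = socket.getaddrinfo("127.0.0.1", 8080, socket.AF_INET,
                         socket.SOCK_STREAM, socket.IPPROTO_TCP)
print(res[0][4])  # ('127.0.0.1', 8080)
```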
556
557 class BaseWSGIServer(HTTPServer, object):
558
559 """Simple single-threaded, single-process WSGI server."""
560 multithread = False
561 multiprocess = False
562 request_queue_size = LISTEN_QUEUE
563
564 def __init__(self, host, port, app, handler=None,
565 passthrough_errors=False, ssl_context=None, fd=None):
566 if handler is None:
567 handler = WSGIRequestHandler
568
569 self.address_family = select_ip_version(host, port)
570
571 if fd is not None:
572 real_sock = socket.fromfd(fd, self.address_family,
573 socket.SOCK_STREAM)
574 port = 0
575 HTTPServer.__init__(self, get_sockaddr(host, int(port),
576 self.address_family), handler)
577 self.app = app
578 self.passthrough_errors = passthrough_errors
579 self.shutdown_signal = False
580 self.host = host
581 self.port = self.socket.getsockname()[1]
582
583 # Patch in the original socket.
584 if fd is not None:
585 self.socket.close()
586 self.socket = real_sock
587 self.server_address = self.socket.getsockname()
588
589 if ssl_context is not None:
590 if isinstance(ssl_context, tuple):
591 ssl_context = load_ssl_context(*ssl_context)
592 if ssl_context == 'adhoc':
593 ssl_context = generate_adhoc_ssl_context()
594 # If we are on Python 2 the return value from socket.fromfd
595 # is an internal socket object but what we need for ssl wrap
596 # is the wrapper around it :(
597 sock = self.socket
598 if PY2 and not isinstance(sock, socket.socket):
599 sock = socket.socket(sock.family, sock.type, sock.proto, sock)
600 self.socket = ssl_context.wrap_socket(sock, server_side=True)
601 self.ssl_context = ssl_context
602 else:
603 self.ssl_context = None
604
605 def log(self, type, message, *args):
606 _log(type, message, *args)
607
608 def serve_forever(self):
609 self.shutdown_signal = False
610 try:
611 HTTPServer.serve_forever(self)
612 except KeyboardInterrupt:
613 pass
614 finally:
615 self.server_close()
616
617 def handle_error(self, request, client_address):
618 if self.passthrough_errors:
619 raise
620 return HTTPServer.handle_error(self, request, client_address)
621
622 def get_request(self):
623 con, info = self.socket.accept()
624 return con, info
625
626
627 class ThreadedWSGIServer(ThreadingMixIn, BaseWSGIServer):
628
629 """A WSGI server that does threading."""
630 multithread = True
631 daemon_threads = True
632
633
634 class ForkingWSGIServer(ForkingMixIn, BaseWSGIServer):
635
636 """A WSGI server that does forking."""
637 multiprocess = True
638
639 def __init__(self, host, port, app, processes=40, handler=None,
640 passthrough_errors=False, ssl_context=None, fd=None):
641 if not can_fork:
642 raise ValueError('Your platform does not support forking.')
643 BaseWSGIServer.__init__(self, host, port, app, handler,
644 passthrough_errors, ssl_context, fd)
645 self.max_children = processes
646
647
648 def make_server(host=None, port=None, app=None, threaded=False, processes=1,
649 request_handler=None, passthrough_errors=False,
650 ssl_context=None, fd=None):
651 """Create a new server instance that is either threaded, forking,
652 or single-process, handling one request after another.
653 """
654 if threaded and processes > 1:
655 raise ValueError("cannot have a multithreaded and "
656 "multi process server.")
657 elif threaded:
658 return ThreadedWSGIServer(host, port, app, request_handler,
659 passthrough_errors, ssl_context, fd=fd)
660 elif processes > 1:
661 return ForkingWSGIServer(host, port, app, processes, request_handler,
662 passthrough_errors, ssl_context, fd=fd)
663 else:
664 return BaseWSGIServer(host, port, app, request_handler,
665 passthrough_errors, ssl_context, fd=fd)
666
667
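The three server classes above all follow the stdlib `HTTPServer` interface. As an illustrative sketch (stdlib only, not Werkzeug itself), a WSGI server can be bound to an ephemeral port and inspected without ever serving a request:

```python
from wsgiref.simple_server import make_server


def app(environ, start_response):
    # Minimal WSGI application: always answer 200 with a fixed body.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]


# Port 0 asks the OS for any free port, the same way BaseWSGIServer
# reads the chosen port back via socket.getsockname().
srv = make_server("127.0.0.1", 0, app)
bound_port = srv.server_address[1]
srv.server_close()
```

This mirrors how `BaseWSGIServer.__init__` records `self.port = self.socket.getsockname()[1]` after binding.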
668 def is_running_from_reloader():
669 """Checks if the application is running from within the Werkzeug
670 reloader subprocess.
671
672 .. versionadded:: 0.10
673 """
674 return os.environ.get('WERKZEUG_RUN_MAIN') == 'true'
675
676
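The check is a plain environment-variable lookup; the variable name comes straight from the source above and can be reproduced standalone:

```python
import os


def is_running_from_reloader():
    # Mirrors the implementation above: the reloader parent process
    # sets WERKZEUG_RUN_MAIN to "true" before spawning the worker.
    return os.environ.get("WERKZEUG_RUN_MAIN") == "true"


os.environ.pop("WERKZEUG_RUN_MAIN", None)
outside = is_running_from_reloader()
os.environ["WERKZEUG_RUN_MAIN"] = "true"
inside = is_running_from_reloader()
del os.environ["WERKZEUG_RUN_MAIN"]
```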
677 def run_simple(hostname, port, application, use_reloader=False,
678 use_debugger=False, use_evalex=True,
679 extra_files=None, reloader_interval=1,
680 reloader_type='auto', threaded=False,
681 processes=1, request_handler=None, static_files=None,
682 passthrough_errors=False, ssl_context=None):
683 """Start a WSGI application. Optional features include a reloader,
684 multithreading and fork support.
685
686 This function has a command-line interface too::
687
688 python -m werkzeug.serving --help
689
690 .. versionadded:: 0.5
691 Added `static_files` to simplify serving static files, and the
692 `passthrough_errors` parameter.
693
694 .. versionadded:: 0.6
695 Support for SSL was added.
696
697 .. versionadded:: 0.8
698 Added support for automatically loading an SSL context from a
699 certificate file and private key.
700
701 .. versionadded:: 0.9
702 Added command-line interface.
703
704 .. versionadded:: 0.10
705 Improved the reloader and added support for changing the backend
706 through the `reloader_type` parameter. See :ref:`reloader`
707 for more information.
708
709 :param hostname: The host for the application, e.g. ``'localhost'``
710 :param port: The port for the server, e.g. ``8080``
711 :param application: the WSGI application to execute
712 :param use_reloader: should the server automatically restart the python
713 process if modules were changed?
714 :param use_debugger: should the werkzeug debugging system be used?
715 :param use_evalex: should the exception evaluation feature be enabled?
716 :param extra_files: a list of files the reloader should watch
717 additionally to the modules. For example configuration
718 files.
719 :param reloader_interval: the interval for the reloader in seconds.
720 :param reloader_type: the type of reloader to use. The default is
721 auto detection. Valid values are ``'stat'`` and
722 ``'watchdog'``. See :ref:`reloader` for more
723 information.
724 :param threaded: should the process handle each request in a separate
725 thread?
726 :param processes: if greater than 1 then handle each request in a new process
727 up to this maximum number of concurrent processes.
728 :param request_handler: optional parameter that can be used to replace
729 the default one. You can use this to replace it
730 with a different
731 :class:`~BaseHTTPServer.BaseHTTPRequestHandler`
732 subclass.
733 :param static_files: a list or dict of paths for static files. This works
734 exactly like :class:`SharedDataMiddleware`, it's actually
735 just wrapping the application in that middleware before
736 serving.
737 :param passthrough_errors: set this to `True` to disable error catching.
738 This means the server will die on errors, but
739 it can be useful for hooking in debuggers (pdb, etc.).
740 :param ssl_context: an SSL context for the connection. Either an
741 :class:`ssl.SSLContext`, a tuple in the form
742 ``(cert_file, pkey_file)``, the string ``'adhoc'`` if
743 the server should automatically create one, or ``None``
744 to disable SSL (which is the default).
745 """
746 if not isinstance(port, int):
747 raise TypeError('port must be an integer')
748 if use_debugger:
749 from werkzeug.debug import DebuggedApplication
750 application = DebuggedApplication(application, use_evalex)
751 if static_files:
752 from werkzeug.wsgi import SharedDataMiddleware
753 application = SharedDataMiddleware(application, static_files)
754
755 def log_startup(sock):
756 display_hostname = hostname not in ('', '*') and hostname or 'localhost'
757 if ':' in display_hostname:
758 display_hostname = '[%s]' % display_hostname
759 quit_msg = '(Press CTRL+C to quit)'
760 port = sock.getsockname()[1]
761 _log('info', ' * Running on %s://%s:%d/ %s',
762 ssl_context is None and 'http' or 'https',
763 display_hostname, port, quit_msg)
764
765 def inner():
766 try:
767 fd = int(os.environ['WERKZEUG_SERVER_FD'])
768 except (LookupError, ValueError):
769 fd = None
770 srv = make_server(hostname, port, application, threaded,
771 processes, request_handler,
772 passthrough_errors, ssl_context,
773 fd=fd)
774 if fd is None:
775 log_startup(srv.socket)
776 srv.serve_forever()
777
778 if use_reloader:
779 # If we're not running already in the subprocess that is the
780 # reloader we want to open up a socket early to make sure the
781 # port is actually available.
782 if os.environ.get('WERKZEUG_RUN_MAIN') != 'true':
783 if port == 0 and not can_open_by_fd:
784 raise ValueError('Cannot bind to a random port with enabled '
785 'reloader if the Python interpreter does '
786 'not support socket opening by fd.')
787
788 # Create and destroy a socket so that any exceptions are
789 # raised before we spawn a separate Python interpreter and
790 # lose this ability.
791 address_family = select_ip_version(hostname, port)
792 s = socket.socket(address_family, socket.SOCK_STREAM)
793 s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
794 s.bind(get_sockaddr(hostname, port, address_family))
795 if hasattr(s, 'set_inheritable'):
796 s.set_inheritable(True)
797
798 # If we can open the socket by file descriptor, then we can just
799 # reuse this one and our socket will survive the restarts.
800 if can_open_by_fd:
801 os.environ['WERKZEUG_SERVER_FD'] = str(s.fileno())
802 s.listen(LISTEN_QUEUE)
803 log_startup(s)
804 else:
805 s.close()
806
807 # Do not use relative imports, otherwise "python -m werkzeug.serving"
808 # breaks.
809 from werkzeug._reloader import run_with_reloader
810 run_with_reloader(inner, extra_files, reloader_interval,
811 reloader_type)
812 else:
813 inner()
814
815
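The reloader's socket hand-off in `run_simple` relies on an OS-level property: a second socket object created from the same file descriptor refers to the same bound port, so the listening socket survives the interpreter restart. A minimal stdlib sketch of the mechanism (no Werkzeug involved):

```python
import socket

# Bind once in the "parent", like run_simple does before spawning
# the reloader subprocess.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 0))
s.listen(5)
port = s.getsockname()[1]

# A "child" that inherits the fd can rebuild a socket object around it;
# this is what passing WERKZEUG_SERVER_FD enables across restarts.
child = socket.fromfd(s.fileno(), socket.AF_INET, socket.SOCK_STREAM)
child_port = child.getsockname()[1]

child.close()
s.close()
```

In the real code the child reads the fd from the `WERKZEUG_SERVER_FD` environment variable inside `inner()` and passes it to `make_server`.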
816 def run_with_reloader(*args, **kwargs):
817 # People keep using undocumented APIs. Do not use this function
818 # please, we do not guarantee that it continues working.
819 from werkzeug._reloader import run_with_reloader
820 return run_with_reloader(*args, **kwargs)
821
822
823 def main():
824 '''A simple command-line interface for :py:func:`run_simple`.'''
825
826 # in contrast to argparse, this works at least under Python < 2.7
827 import optparse
828 from werkzeug.utils import import_string
829
830 parser = optparse.OptionParser(
831 usage='Usage: %prog [options] app_module:app_object')
832 parser.add_option('-b', '--bind', dest='address',
833 help='The hostname:port the app should listen on.')
834 parser.add_option('-d', '--debug', dest='use_debugger',
835 action='store_true', default=False,
836 help='Use Werkzeug\'s debugger.')
837 parser.add_option('-r', '--reload', dest='use_reloader',
838 action='store_true', default=False,
839 help='Reload Python process if modules change.')
840 options, args = parser.parse_args()
841
842 hostname, port = None, None
843 if options.address:
844 address = options.address.split(':')
845 hostname = address[0]
846 if len(address) > 1:
847 port = address[1]
848
849 if len(args) != 1:
850 sys.stdout.write('No application supplied, or too many. See --help\n')
851 sys.exit(1)
852 app = import_string(args[0])
853
854 run_simple(
855 hostname=(hostname or '127.0.0.1'), port=int(port or 5000),
856 application=app, use_reloader=options.use_reloader,
857 use_debugger=options.use_debugger
858 )
859
860 if __name__ == '__main__':
861 main()
+0 -948 werkzeug/test.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.test
3 ~~~~~~~~~~~~~
4
5 This module implements a client to WSGI applications for testing.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 import sys
11 import mimetypes
12 from time import time
13 from random import random
14 from itertools import chain
15 from tempfile import TemporaryFile
16 from io import BytesIO
17
18 try:
19 from urllib2 import Request as U2Request
20 except ImportError:
21 from urllib.request import Request as U2Request
22 try:
23 from http.cookiejar import CookieJar
24 except ImportError: # Py2
25 from cookielib import CookieJar
26
27 from werkzeug._compat import iterlists, iteritems, itervalues, to_bytes, \
28 string_types, text_type, reraise, wsgi_encoding_dance, \
29 make_literal_wrapper
30 from werkzeug._internal import _empty_stream, _get_environ
31 from werkzeug.wrappers import BaseRequest
32 from werkzeug.urls import url_encode, url_fix, iri_to_uri, url_unquote, \
33 url_unparse, url_parse
34 from werkzeug.wsgi import get_host, get_current_url, ClosingIterator
35 from werkzeug.utils import dump_cookie, get_content_type
36 from werkzeug.datastructures import FileMultiDict, MultiDict, \
37 CombinedMultiDict, Headers, FileStorage, CallbackDict
38 from werkzeug.http import dump_options_header, parse_options_header
39
40
41 def stream_encode_multipart(values, use_tempfile=True, threshold=1024 * 500,
42 boundary=None, charset='utf-8'):
43 """Encode a dict of values (either strings, file-like objects, or
44 :class:`FileStorage` objects) into a multipart-encoded byte stream
45 stored in a file-like object.
46 """
47 if boundary is None:
48 boundary = '---------------WerkzeugFormPart_%s%s' % (time(), random())
49 _closure = [BytesIO(), 0, False]
50
51 if use_tempfile:
52 def write_binary(string):
53 stream, total_length, on_disk = _closure
54 if on_disk:
55 stream.write(string)
56 else:
57 length = len(string)
58 if length + _closure[1] <= threshold:
59 stream.write(string)
60 else:
61 new_stream = TemporaryFile('wb+')
62 new_stream.write(stream.getvalue())
63 new_stream.write(string)
64 _closure[0] = new_stream
65 _closure[2] = True
66 _closure[1] = total_length + length
67 else:
68 write_binary = _closure[0].write
69
70 def write(string):
71 write_binary(string.encode(charset))
72
73 if not isinstance(values, MultiDict):
74 values = MultiDict(values)
75
76 for key, values in iterlists(values):
77 for value in values:
78 write('--%s\r\nContent-Disposition: form-data; name="%s"' %
79 (boundary, key))
80 reader = getattr(value, 'read', None)
81 if reader is not None:
82 filename = getattr(value, 'filename',
83 getattr(value, 'name', None))
84 content_type = getattr(value, 'content_type', None)
85 if content_type is None:
86 content_type = filename and \
87 mimetypes.guess_type(filename)[0] or \
88 'application/octet-stream'
89 if filename is not None:
90 write('; filename="%s"\r\n' % filename)
91 else:
92 write('\r\n')
93 write('Content-Type: %s\r\n\r\n' % content_type)
94 while 1:
95 chunk = reader(16384)
96 if not chunk:
97 break
98 write_binary(chunk)
99 else:
100 if not isinstance(value, string_types):
101 value = str(value)
102
103 value = to_bytes(value, charset)
104 write('\r\n\r\n')
105 write_binary(value)
106 write('\r\n')
107 write('--%s--\r\n' % boundary)
108
109 length = int(_closure[0].tell())
110 _closure[0].seek(0)
111 return _closure[0], length, boundary
112
113
114 def encode_multipart(values, boundary=None, charset='utf-8'):
115 """Like `stream_encode_multipart` but returns a tuple in the form
116 (``boundary``, ``data``) where data is a bytestring.
117 """
118 stream, length, boundary = stream_encode_multipart(
119 values, use_tempfile=False, boundary=boundary, charset=charset)
120 return boundary, stream.read()
121
122
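The wire format produced above is ordinary `multipart/form-data`. A stripped-down sketch of a single text field (the boundary here is arbitrary; the real code derives one from `time()` and `random()`) shows the framing that `stream_encode_multipart` writes:

```python
boundary = "ExampleBoundary123"  # arbitrary, for illustration only


def encode_field(boundary, name, value):
    # One form-data part followed by the closing boundary, as bytes.
    part = (
        "--%s\r\n"
        'Content-Disposition: form-data; name="%s"\r\n\r\n'
        "%s\r\n"
        "--%s--\r\n"
    ) % (boundary, name, value, boundary)
    return part.encode("utf-8")


body = encode_field(boundary, "key", "value")
```

File parts additionally carry a `filename="..."` parameter and a `Content-Type` header, exactly as the `reader is not None` branch above emits them.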
123 def File(fd, filename=None, mimetype=None):
124 """Backwards compat."""
125 from warnings import warn
126 warn(DeprecationWarning('werkzeug.test.File is deprecated, use the '
127 'EnvironBuilder or FileStorage instead'))
128 return FileStorage(fd, filename=filename, content_type=mimetype)
129
130
131 class _TestCookieHeaders(object):
132
133 """A headers adapter for cookielib
134 """
135
136 def __init__(self, headers):
137 self.headers = headers
138
139 def getheaders(self, name):
140 headers = []
141 name = name.lower()
142 for k, v in self.headers:
143 if k.lower() == name:
144 headers.append(v)
145 return headers
146
147 def get_all(self, name, default=None):
148 rv = []
149 for k, v in self.headers:
150 if k.lower() == name.lower():
151 rv.append(v)
152 return rv or default or []
153
154
155 class _TestCookieResponse(object):
156
157 """Something that looks like a httplib.HTTPResponse, but is actually just an
158 adapter for our test responses to make them available for cookielib.
159 """
160
161 def __init__(self, headers):
162 self.headers = _TestCookieHeaders(headers)
163
164 def info(self):
165 return self.headers
166
167
168 class _TestCookieJar(CookieJar):
169
170 """A cookielib.CookieJar modified to inject cookie headers into WSGI
171 environments and to extract them from WSGI application responses.
172 """
173
174 def inject_wsgi(self, environ):
175 """Inject the cookies as client headers into the server's wsgi
176 environment.
177 """
178 cvals = []
179 for cookie in self:
180 cvals.append('%s=%s' % (cookie.name, cookie.value))
181 if cvals:
182 environ['HTTP_COOKIE'] = '; '.join(cvals)
183
184 def extract_wsgi(self, environ, headers):
185 """Extract the server's set-cookie headers as cookies into the
186 cookie jar.
187 """
188 self.extract_cookies(
189 _TestCookieResponse(headers),
190 U2Request(get_current_url(environ)),
191 )
192
193
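`inject_wsgi` just serializes the jar into a single `HTTP_COOKIE` environ value; the joining logic can be shown standalone:

```python
def inject_cookies(environ, cookies):
    # cookies: iterable of (name, value) pairs, as the jar yields them.
    cvals = ["%s=%s" % (name, value) for name, value in cookies]
    if cvals:
        environ["HTTP_COOKIE"] = "; ".join(cvals)
    return environ


environ = inject_cookies({}, [("session", "abc"), ("lang", "en")])
```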
194 def _iter_data(data):
195 """Iterates over a `dict` or :class:`MultiDict` yielding all keys and
196 values.
197 This is used to iterate over the data passed to the
198 :class:`EnvironBuilder`.
199 """
200 if isinstance(data, MultiDict):
201 for key, values in iterlists(data):
202 for value in values:
203 yield key, value
204 else:
205 for key, values in iteritems(data):
206 if isinstance(values, list):
207 for value in values:
208 yield key, value
209 else:
210 yield key, values
211
212
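For a plain `dict`, the generator flattens list values into repeated keys so that a key can appear multiple times in the encoded form data. The non-`MultiDict` branch behaves like this:

```python
def iter_data(data):
    # Mirrors the plain-dict branch above: list values yield one
    # (key, value) pair per element, everything else yields as-is.
    for key, values in data.items():
        if isinstance(values, list):
            for value in values:
                yield key, value
        else:
            yield key, values


pairs = sorted(iter_data({"a": ["1", "2"], "b": "3"}))
```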
213 class EnvironBuilder(object):
214
215 """This class can be used to conveniently create a WSGI environment
216 for testing purposes. It can be used to quickly create WSGI environments
217 or request objects from arbitrary data.
218
219 The signature of this class is also used in some other places as of
220 Werkzeug 0.5 (:func:`create_environ`, :meth:`BaseResponse.from_values`,
221 :meth:`Client.open`). Because of this most of the functionality is
222 available through the constructor alone.
223
224 Files and regular form data can be manipulated independently of each
225 other with the :attr:`form` and :attr:`files` attributes, but are
226 passed with the same argument to the constructor: `data`.
227
228 `data` can be any of these values:
229
230 - a `str` or `bytes` object: The object is converted into an
231 :attr:`input_stream`, the :attr:`content_length` is set and you have to
232 provide a :attr:`content_type`.
233 - a `dict` or :class:`MultiDict`: The keys have to be strings. The values
234 have to be either any of the following objects, or a list of any of the
235 following objects:
236
237 - a :class:`file`-like object: These are converted into
238 :class:`FileStorage` objects automatically.
239 - a `tuple`: The :meth:`~FileMultiDict.add_file` method is called
240 with the key and the unpacked `tuple` items as positional
241 arguments.
242 - a `str`: The string is set as form data for the associated key.
243 - a file-like object: The object content is loaded in memory and then
244 handled like a regular `str` or a `bytes`.
245
246 .. versionadded:: 0.6
247 `path` and `base_url` can now be unicode strings that are encoded using
248 the :func:`iri_to_uri` function.
249
250 :param path: the path of the request. In the WSGI environment this will
251 end up as `PATH_INFO`. If the `query_string` is not defined
252 and there is a question mark in the `path` everything after
253 it is used as query string.
254 :param base_url: the base URL is a URL that is used to extract the WSGI
255 URL scheme, host (server name + server port) and the
256 script root (`SCRIPT_NAME`).
257 :param query_string: an optional string or dict with URL parameters.
258 :param method: the HTTP method to use, defaults to `GET`.
259 :param input_stream: an optional input stream. Do not specify this and
260 `data`. As soon as an input stream is set you can't
261 modify :attr:`form` and :attr:`files` unless you
262 set the :attr:`input_stream` to `None` again.
263 :param content_type: The content type for the request. As of 0.5 you
264 don't have to provide this when specifying files
265 and form data via `data`.
266 :param content_length: The content length for the request. You don't
267 have to specify this when providing data via
268 `data`.
269 :param errors_stream: an optional error stream that is used for
270 `wsgi.errors`. Defaults to :data:`stderr`.
271 :param multithread: controls `wsgi.multithread`. Defaults to `False`.
272 :param multiprocess: controls `wsgi.multiprocess`. Defaults to `False`.
273 :param run_once: controls `wsgi.run_once`. Defaults to `False`.
274 :param headers: an optional list or :class:`Headers` object of headers.
275 :param data: a string or dict of form data or a file-object.
276 See explanation above.
277 :param environ_base: an optional dict of environment defaults.
278 :param environ_overrides: an optional dict of environment overrides.
279 :param charset: the charset used to encode unicode data.
280 """
281
282 #: the server protocol to use. defaults to HTTP/1.1
283 server_protocol = 'HTTP/1.1'
284
285 #: the wsgi version to use. defaults to (1, 0)
286 wsgi_version = (1, 0)
287
288 #: the default request class for :meth:`get_request`
289 request_class = BaseRequest
290
291 def __init__(self, path='/', base_url=None, query_string=None,
292 method='GET', input_stream=None, content_type=None,
293 content_length=None, errors_stream=None, multithread=False,
294 multiprocess=False, run_once=False, headers=None, data=None,
295 environ_base=None, environ_overrides=None, charset='utf-8',
296 mimetype=None):
297 path_s = make_literal_wrapper(path)
298 if query_string is None and path_s('?') in path:
299 path, query_string = path.split(path_s('?'), 1)
300 self.charset = charset
301 self.path = iri_to_uri(path)
302 if base_url is not None:
303 base_url = url_fix(iri_to_uri(base_url, charset), charset)
304 self.base_url = base_url
305 if isinstance(query_string, (bytes, text_type)):
306 self.query_string = query_string
307 else:
308 if query_string is None:
309 query_string = MultiDict()
310 elif not isinstance(query_string, MultiDict):
311 query_string = MultiDict(query_string)
312 self.args = query_string
313 self.method = method
314 if headers is None:
315 headers = Headers()
316 elif not isinstance(headers, Headers):
317 headers = Headers(headers)
318 self.headers = headers
319 if content_type is not None:
320 self.content_type = content_type
321 if errors_stream is None:
322 errors_stream = sys.stderr
323 self.errors_stream = errors_stream
324 self.multithread = multithread
325 self.multiprocess = multiprocess
326 self.run_once = run_once
327 self.environ_base = environ_base
328 self.environ_overrides = environ_overrides
329 self.input_stream = input_stream
330 self.content_length = content_length
331 self.closed = False
332
333 if data:
334 if input_stream is not None:
335 raise TypeError('can\'t provide input stream and data')
336 if hasattr(data, 'read'):
337 data = data.read()
338 if isinstance(data, text_type):
339 data = data.encode(self.charset)
340 if isinstance(data, bytes):
341 self.input_stream = BytesIO(data)
342 if self.content_length is None:
343 self.content_length = len(data)
344 else:
345 for key, value in _iter_data(data):
346 if isinstance(value, (tuple, dict)) or \
347 hasattr(value, 'read'):
348 self._add_file_from_data(key, value)
349 else:
350 self.form.setlistdefault(key).append(value)
351
352 if mimetype is not None:
353 self.mimetype = mimetype
354
355 def _add_file_from_data(self, key, value):
356 """Called in the EnvironBuilder to add files from the data dict."""
357 if isinstance(value, tuple):
358 self.files.add_file(key, *value)
359 elif isinstance(value, dict):
360 from warnings import warn
361 warn(DeprecationWarning('it\'s no longer possible to pass dicts '
362 'as `data`. Use tuples or FileStorage '
363 'objects instead'), stacklevel=2)
364 value = dict(value)
365 mimetype = value.pop('mimetype', None)
366 if mimetype is not None:
367 value['content_type'] = mimetype
368 self.files.add_file(key, **value)
369 else:
370 self.files.add_file(key, value)
371
372 def _get_base_url(self):
373 return url_unparse((self.url_scheme, self.host,
374 self.script_root, '', '')).rstrip('/') + '/'
375
376 def _set_base_url(self, value):
377 if value is None:
378 scheme = 'http'
379 netloc = 'localhost'
380 script_root = ''
381 else:
382 scheme, netloc, script_root, qs, anchor = url_parse(value)
383 if qs or anchor:
384 raise ValueError('base url must not contain a query string '
385 'or fragment')
386 self.script_root = script_root.rstrip('/')
387 self.host = netloc
388 self.url_scheme = scheme
389
390 base_url = property(_get_base_url, _set_base_url, doc='''
391 The base URL is a URL that is used to extract the WSGI
392 URL scheme, host (server name + server port) and the
393 script root (`SCRIPT_NAME`).''')
394 del _get_base_url, _set_base_url
395
396 def _get_content_type(self):
397 ct = self.headers.get('Content-Type')
398 if ct is None and not self._input_stream:
399 if self._files:
400 return 'multipart/form-data'
401 elif self._form:
402 return 'application/x-www-form-urlencoded'
403 return None
404 return ct
405
406 def _set_content_type(self, value):
407 if value is None:
408 self.headers.pop('Content-Type', None)
409 else:
410 self.headers['Content-Type'] = value
411
412 content_type = property(_get_content_type, _set_content_type, doc='''
413 The content type for the request. Reflected from and to the
414 :attr:`headers`. Do not set if you set :attr:`files` or
415 :attr:`form` for auto detection.''')
416 del _get_content_type, _set_content_type
417
418 def _get_content_length(self):
419 return self.headers.get('Content-Length', type=int)
420
421 def _get_mimetype(self):
422 ct = self.content_type
423 if ct:
424 return ct.split(';')[0].strip()
425
426 def _set_mimetype(self, value):
427 self.content_type = get_content_type(value, self.charset)
428
429 def _get_mimetype_params(self):
430 def on_update(d):
431 self.headers['Content-Type'] = \
432 dump_options_header(self.mimetype, d)
433 d = parse_options_header(self.headers.get('content-type', ''))[1]
434 return CallbackDict(d, on_update)
435
436 mimetype = property(_get_mimetype, _set_mimetype, doc='''
437 The mimetype (content type without charset etc.)
438
439 .. versionadded:: 0.14
440 ''')
441 mimetype_params = property(_get_mimetype_params, doc='''
442 The mimetype parameters as dict. For example if the content
443 type is ``text/html; charset=utf-8`` the params would be
444 ``{'charset': 'utf-8'}``.
445
446 .. versionadded:: 0.14
447 ''')
448 del _get_mimetype, _set_mimetype, _get_mimetype_params
449
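Splitting a `Content-Type` value into mimetype and parameters follows standard header syntax. The stdlib `email` parser gives the same split (a sketch for comparison, not the `parse_options_header` implementation Werkzeug uses):

```python
from email.message import Message

msg = Message()
msg["Content-Type"] = "text/html; charset=utf-8"

# get_content_type() strips parameters; get_params() returns the
# mimetype first, followed by (name, value) parameter pairs.
mimetype = msg.get_content_type()
params = dict(msg.get_params()[1:])
```

This matches the `mimetype` property above (everything before the first `;`) and the `mimetype_params` dict, e.g. `{'charset': 'utf-8'}`.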
450 def _set_content_length(self, value):
451 if value is None:
452 self.headers.pop('Content-Length', None)
453 else:
454 self.headers['Content-Length'] = str(value)
455
456 content_length = property(_get_content_length, _set_content_length, doc='''
457 The content length as integer. Reflected from and to the
458 :attr:`headers`. Do not set if you set :attr:`files` or
459 :attr:`form` for auto detection.''')
460 del _get_content_length, _set_content_length
461
462 def form_property(name, storage, doc):
463 key = '_' + name
464
465 def getter(self):
466 if self._input_stream is not None:
467 raise AttributeError('an input stream is defined')
468 rv = getattr(self, key)
469 if rv is None:
470 rv = storage()
471 setattr(self, key, rv)
472
473 return rv
474
475 def setter(self, value):
476 self._input_stream = None
477 setattr(self, key, value)
478 return property(getter, setter, doc=doc)
479
480 form = form_property('form', MultiDict, doc='''
481 A :class:`MultiDict` of form values.''')
482 files = form_property('files', FileMultiDict, doc='''
483 A :class:`FileMultiDict` of uploaded files. You can use the
484 :meth:`~FileMultiDict.add_file` method to add new files to the
485 dict.''')
486 del form_property
487
488 def _get_input_stream(self):
489 return self._input_stream
490
491 def _set_input_stream(self, value):
492 self._input_stream = value
493 self._form = self._files = None
494
495 input_stream = property(_get_input_stream, _set_input_stream, doc='''
496 An optional input stream. If you set this it will clear
497 :attr:`form` and :attr:`files`.''')
498 del _get_input_stream, _set_input_stream
499
500 def _get_query_string(self):
501 if self._query_string is None:
502 if self._args is not None:
503 return url_encode(self._args, charset=self.charset)
504 return ''
505 return self._query_string
506
507 def _set_query_string(self, value):
508 self._query_string = value
509 self._args = None
510
511 query_string = property(_get_query_string, _set_query_string, doc='''
512 The query string. If you set this to a string :attr:`args` will
513 no longer be available.''')
514 del _get_query_string, _set_query_string
515
516 def _get_args(self):
517 if self._query_string is not None:
518 raise AttributeError('a query string is defined')
519 if self._args is None:
520 self._args = MultiDict()
521 return self._args
522
523 def _set_args(self, value):
524 self._query_string = None
525 self._args = value
526
527 args = property(_get_args, _set_args, doc='''
528 The URL arguments as :class:`MultiDict`.''')
529 del _get_args, _set_args
530
531 @property
532 def server_name(self):
533 """The server name (read-only, use :attr:`host` to set)"""
534 return self.host.split(':', 1)[0]
535
536 @property
537 def server_port(self):
538 """The server port as integer (read-only, use :attr:`host` to set)"""
539 pieces = self.host.split(':', 1)
540 if len(pieces) == 2 and pieces[1].isdigit():
541 return int(pieces[1])
542 elif self.url_scheme == 'https':
543 return 443
544 return 80
545
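`server_name` and `server_port` are both derived from `host`, with the URL scheme supplying the default port. The same parsing can be written standalone:

```python
def split_host(host, url_scheme="http"):
    # Mirrors the properties above: an explicit numeric port wins,
    # otherwise the scheme decides between 443 and 80.
    pieces = host.split(":", 1)
    name = pieces[0]
    if len(pieces) == 2 and pieces[1].isdigit():
        return name, int(pieces[1])
    return name, 443 if url_scheme == "https" else 80


a = split_host("example.com:8080")
b = split_host("example.com", "https")
```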
546 def __del__(self):
547 try:
548 self.close()
549 except Exception:
550 pass
551
552 def close(self):
553 """Closes all files. If you put real :class:`file` objects into the
554 :attr:`files` dict you can call this method to automatically close
555 them all in one go.
556 """
557 if self.closed:
558 return
559 try:
560 files = itervalues(self.files)
561 except AttributeError:
562 files = ()
563 for f in files:
564 try:
565 f.close()
566 except Exception:
567 pass
568 self.closed = True
569
570 def get_environ(self):
571 """Return the built environ."""
572 input_stream = self.input_stream
573 content_length = self.content_length
574
575 mimetype = self.mimetype
576 content_type = self.content_type
577
578 if input_stream is not None:
579 start_pos = input_stream.tell()
580 input_stream.seek(0, 2)
581 end_pos = input_stream.tell()
582 input_stream.seek(start_pos)
583 content_length = end_pos - start_pos
584 elif mimetype == 'multipart/form-data':
585 values = CombinedMultiDict([self.form, self.files])
586 input_stream, content_length, boundary = \
587 stream_encode_multipart(values, charset=self.charset)
588 content_type = mimetype + '; boundary="%s"' % boundary
589 elif mimetype == 'application/x-www-form-urlencoded':
590 # XXX: py2v3 review
591 values = url_encode(self.form, charset=self.charset)
592 values = values.encode('ascii')
593 content_length = len(values)
594 input_stream = BytesIO(values)
595 else:
596 input_stream = _empty_stream
597
598 result = {}
599 if self.environ_base:
600 result.update(self.environ_base)
601
602 def _path_encode(x):
603 return wsgi_encoding_dance(url_unquote(x, self.charset), self.charset)
604
605 qs = wsgi_encoding_dance(self.query_string)
606
607 result.update({
608 'REQUEST_METHOD': self.method,
609 'SCRIPT_NAME': _path_encode(self.script_root),
610 'PATH_INFO': _path_encode(self.path),
611 'QUERY_STRING': qs,
612 'SERVER_NAME': self.server_name,
613 'SERVER_PORT': str(self.server_port),
614 'HTTP_HOST': self.host,
615 'SERVER_PROTOCOL': self.server_protocol,
616 'CONTENT_TYPE': content_type or '',
617 'CONTENT_LENGTH': str(content_length or '0'),
618 'wsgi.version': self.wsgi_version,
619 'wsgi.url_scheme': self.url_scheme,
620 'wsgi.input': input_stream,
621 'wsgi.errors': self.errors_stream,
622 'wsgi.multithread': self.multithread,
623 'wsgi.multiprocess': self.multiprocess,
624 'wsgi.run_once': self.run_once
625 })
626 for key, value in self.headers.to_wsgi_list():
627 result['HTTP_%s' % key.upper().replace('-', '_')] = value
628 if self.environ_overrides:
629 result.update(self.environ_overrides)
630 return result
631
632 def get_request(self, cls=None):
633 """Returns a request with the data. If the request class is not
634 specified :attr:`request_class` is used.
635
636 :param cls: The request wrapper to use.
637 """
638 if cls is None:
639 cls = self.request_class
640 return cls(self.get_environ())
641
642
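`get_environ` assembles the same CGI-style and `wsgi.*` keys that the stdlib's `wsgiref.util.setup_testing_defaults` fills in for test environments. As a rough stdlib-only comparison:

```python
from wsgiref.util import setup_testing_defaults

# setup_testing_defaults only fills keys that are missing, so values
# set beforehand are preserved -- similar in spirit to environ_base
# versus the builder's own values above.
environ = {"REQUEST_METHOD": "POST", "PATH_INFO": "/submit"}
setup_testing_defaults(environ)
```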
643 class ClientRedirectError(Exception):
644
645 """
646 If a redirect loop is detected when using follow_redirects=True with
647 the :class:`Client`, then this exception is raised.
648 """
649
650
651 class Client(object):
652
653 """This class allows sending requests to a wrapped application.
654
655 The response wrapper can be a class or factory function that takes
656 three arguments: app_iter, status and headers. The default response
657 wrapper just returns a tuple.
658
659 Example::
660
661 class ClientResponse(BaseResponse):
662 ...
663
664 client = Client(MyApplication(), response_wrapper=ClientResponse)
665
666 The use_cookies parameter indicates whether cookies should be stored and
667 sent for subsequent requests. This is True by default, but passing False
668 will disable this behaviour.
669
670 If you want to request some subdomain of your application, set
671 `allow_subdomain_redirects` to `True`; otherwise no external
672 redirects are allowed.
673
674 .. versionadded:: 0.5
675 `use_cookies` is new in this version. Older versions did not provide
676 builtin cookie support.
677
678 .. versionadded:: 0.14
679 The `mimetype` parameter was added.
680 """
681
682 def __init__(self, application, response_wrapper=None, use_cookies=True,
683 allow_subdomain_redirects=False):
684 self.application = application
685 self.response_wrapper = response_wrapper
686 if use_cookies:
687 self.cookie_jar = _TestCookieJar()
688 else:
689 self.cookie_jar = None
690 self.allow_subdomain_redirects = allow_subdomain_redirects
691
692 def set_cookie(self, server_name, key, value='', max_age=None,
693 expires=None, path='/', domain=None, secure=None,
694 httponly=False, charset='utf-8'):
695 """Sets a cookie in the client's cookie jar. The server name
696 is required and has to match the one that is also passed to
697 the open call.
698 """
699 assert self.cookie_jar is not None, 'cookies disabled'
700 header = dump_cookie(key, value, max_age, expires, path, domain,
701 secure, httponly, charset)
702 environ = create_environ(path, base_url='http://' + server_name)
703 headers = [('Set-Cookie', header)]
704 self.cookie_jar.extract_wsgi(environ, headers)
705
706 def delete_cookie(self, server_name, key, path='/', domain=None):
707 """Deletes a cookie in the test client."""
708 self.set_cookie(server_name, key, expires=0, max_age=0,
709 path=path, domain=domain)
710
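The `set_cookie` helper above serializes a cookie header with `dump_cookie` and feeds it back through a synthetic response. A rough sketch of the same serialization step using only the standard library's `http.cookies` (this is an illustrative stand-in, not the Werkzeug API):

```python
from http.cookies import SimpleCookie

# Build a Set-Cookie header value, roughly what dump_cookie produces
# for a set_cookie(server_name, "session", "abc123") call above.
cookie = SimpleCookie()
cookie["session"] = "abc123"
cookie["session"]["path"] = "/"
cookie["session"]["max-age"] = 3600
header = cookie["session"].OutputString()
print(header)
```

The resulting string is what would be placed in the `('Set-Cookie', header)` pair before being extracted into the test client's cookie jar.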
711 def run_wsgi_app(self, environ, buffered=False):
712 """Runs the wrapped WSGI app with the given environment."""
713 if self.cookie_jar is not None:
714 self.cookie_jar.inject_wsgi(environ)
715 rv = run_wsgi_app(self.application, environ, buffered=buffered)
716 if self.cookie_jar is not None:
717 self.cookie_jar.extract_wsgi(environ, rv[2])
718 return rv
719
720 def resolve_redirect(self, response, new_location, environ, buffered=False):
721 """Resolves a single redirect and triggers the request again
722 directly on this test client.
723 """
724 scheme, netloc, script_root, qs, anchor = url_parse(new_location)
725 base_url = url_unparse((scheme, netloc, '', '', '')).rstrip('/') + '/'
726
727 cur_server_name = netloc.split(':', 1)[0].split('.')
728 real_server_name = get_host(environ).rsplit(':', 1)[0].split('.')
729 if cur_server_name == ['']:
730 # this is a local redirect having autocorrect_location_header=False
731 cur_server_name = real_server_name
732 base_url = EnvironBuilder(environ).base_url
733
734 if self.allow_subdomain_redirects:
735 allowed = cur_server_name[-len(real_server_name):] == real_server_name
736 else:
737 allowed = cur_server_name == real_server_name
738
739 if not allowed:
740 raise RuntimeError('%r does not support redirect to '
741 'external targets' % self.__class__)
742
743 status_code = int(response[1].split(None, 1)[0])
744 if status_code == 307:
745 method = environ['REQUEST_METHOD']
746 else:
747 method = 'GET'
748
749 # For redirect handling we temporarily disable the response
750 # wrapper. This is not threadsafe but not a real concern
751 # since the test client must not be shared anyways.
752 old_response_wrapper = self.response_wrapper
753 self.response_wrapper = None
754 try:
755 return self.open(path=script_root, base_url=base_url,
756 query_string=qs, as_tuple=True,
757 buffered=buffered, method=method)
758 finally:
759 self.response_wrapper = old_response_wrapper
760
761 def open(self, *args, **kwargs):
762 """Takes the same arguments as the :class:`EnvironBuilder` class with
763 some additions: You can provide a :class:`EnvironBuilder` or a WSGI
764 environment as the only argument instead of the :class:`EnvironBuilder`
765 arguments and two optional keyword arguments (`as_tuple`, `buffered`)
766 that change the type of the return value or the way the application is
767 executed.
768
769 .. versionchanged:: 0.5
770 If a dict is provided as file in the dict for the `data` parameter
771 the content type has to be called `content_type` now instead of
772 `mimetype`. This change was made for consistency with
773 :class:`werkzeug.FileWrapper`.
774
775 The `follow_redirects` parameter was added to :func:`open`.
776
777 Additional parameters:
778
779 :param as_tuple: Returns a tuple in the form ``(environ, result)``
780 :param buffered: Set this to True to buffer the application run.
781 This will automatically close the application for
782 you as well.
783 :param follow_redirects: Set this to True if the `Client` should
784 follow HTTP redirects.
785 """
786 as_tuple = kwargs.pop('as_tuple', False)
787 buffered = kwargs.pop('buffered', False)
788 follow_redirects = kwargs.pop('follow_redirects', False)
789 environ = None
790 if not kwargs and len(args) == 1:
791 if isinstance(args[0], EnvironBuilder):
792 environ = args[0].get_environ()
793 elif isinstance(args[0], dict):
794 environ = args[0]
795 if environ is None:
796 builder = EnvironBuilder(*args, **kwargs)
797 try:
798 environ = builder.get_environ()
799 finally:
800 builder.close()
801
802 response = self.run_wsgi_app(environ, buffered=buffered)
803
804 # handle redirects
805 redirect_chain = []
806 while 1:
807 status_code = int(response[1].split(None, 1)[0])
808 if status_code not in (301, 302, 303, 305, 307) \
809 or not follow_redirects:
810 break
811 new_location = response[2]['location']
812 new_redirect_entry = (new_location, status_code)
813 if new_redirect_entry in redirect_chain:
814 raise ClientRedirectError('loop detected')
815 redirect_chain.append(new_redirect_entry)
816 environ, response = self.resolve_redirect(response, new_location,
817 environ,
818 buffered=buffered)
819
820 if self.response_wrapper is not None:
821 response = self.response_wrapper(*response)
822 if as_tuple:
823 return environ, response
824 return response
825
826 def get(self, *args, **kw):
827 """Like open but method is enforced to GET."""
828 kw['method'] = 'GET'
829 return self.open(*args, **kw)
830
831 def patch(self, *args, **kw):
832 """Like open but method is enforced to PATCH."""
833 kw['method'] = 'PATCH'
834 return self.open(*args, **kw)
835
836 def post(self, *args, **kw):
837 """Like open but method is enforced to POST."""
838 kw['method'] = 'POST'
839 return self.open(*args, **kw)
840
841 def head(self, *args, **kw):
842 """Like open but method is enforced to HEAD."""
843 kw['method'] = 'HEAD'
844 return self.open(*args, **kw)
845
846 def put(self, *args, **kw):
847 """Like open but method is enforced to PUT."""
848 kw['method'] = 'PUT'
849 return self.open(*args, **kw)
850
851 def delete(self, *args, **kw):
852 """Like open but method is enforced to DELETE."""
853 kw['method'] = 'DELETE'
854 return self.open(*args, **kw)
855
856 def options(self, *args, **kw):
857 """Like open but method is enforced to OPTIONS."""
858 kw['method'] = 'OPTIONS'
859 return self.open(*args, **kw)
860
861 def trace(self, *args, **kw):
862 """Like open but method is enforced to TRACE."""
863 kw['method'] = 'TRACE'
864 return self.open(*args, **kw)
865
866 def __repr__(self):
867 return '<%s %r>' % (
868 self.__class__.__name__,
869 self.application
870 )
871
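The redirect handling in `open` records each `(location, status)` pair in `redirect_chain` and raises `ClientRedirectError` once a pair repeats. A minimal stdlib sketch of that loop detection, where `fetch` is a hypothetical callback standing in for running the wrapped application:

```python
def follow(fetch, url, redirect_codes=(301, 302, 303, 305, 307)):
    # fetch(url) -> (status_code, location_or_None); hypothetical callback.
    # Mirrors the loop-detection idea in Client.open above.
    chain = []
    status, location = fetch(url)
    while status in redirect_codes:
        entry = (location, status)
        if entry in chain:
            raise RuntimeError("redirect loop detected")
        chain.append(entry)
        status, location = fetch(location)
    return status

# A fake app: /a redirects to /b, /b redirects back to /a.
looping = {"/a": (302, "/b"), "/b": (302, "/a")}
try:
    follow(lambda u: looping[u], "/a")
except RuntimeError as exc:
    print(exc)
```

Remembering the full `(location, status)` pair rather than just the URL means a 302 followed later by a 301 to the same location is not misreported as a loop.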
872
873 def create_environ(*args, **kwargs):
874 """Create a new WSGI environ dict based on the values passed. The first
875 parameter should be the path of the request which defaults to '/'. The
876 second one can either be an absolute path (in that case the host is
877 localhost:80) or a full path to the request with scheme, netloc, port and
878 the path to the script.
879
880 This accepts the same arguments as the :class:`EnvironBuilder`
881 constructor.
882
883 .. versionchanged:: 0.5
884 This function is now a thin wrapper over :class:`EnvironBuilder` which
885 was added in 0.5. The `headers`, `environ_base`, `environ_overrides`
886 and `charset` parameters were added.
887 """
888 builder = EnvironBuilder(*args, **kwargs)
889 try:
890 return builder.get_environ()
891 finally:
892 builder.close()
893
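`create_environ` delegates to `EnvironBuilder`, which fills in the full set of CGI and `wsgi.*` keys. As a hand-rolled sketch, these are roughly the keys PEP 3333 requires of a WSGI environ (`make_environ` is an illustrative name, not part of Werkzeug; the real builder sets many more keys):

```python
import sys
from io import BytesIO

def make_environ(path="/", method="GET", host="localhost"):
    # A minimal PEP 3333 environ; EnvironBuilder adds headers, content
    # length, query parsing, and more on top of this.
    return {
        "REQUEST_METHOD": method,
        "SCRIPT_NAME": "",
        "PATH_INFO": path,
        "QUERY_STRING": "",
        "SERVER_NAME": host,
        "SERVER_PORT": "80",
        "SERVER_PROTOCOL": "HTTP/1.1",
        "wsgi.version": (1, 0),
        "wsgi.url_scheme": "http",
        "wsgi.input": BytesIO(b""),
        "wsgi.errors": sys.stderr,
        "wsgi.multithread": False,
        "wsgi.multiprocess": False,
        "wsgi.run_once": False,
    }

env = make_environ("/index")
print(env["PATH_INFO"], env["REQUEST_METHOD"])
```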
894
895 def run_wsgi_app(app, environ, buffered=False):
896 """Return a tuple in the form (app_iter, status, headers) of the
897 application output. This works best if you pass it an application that
898 always returns an iterator.
899
900 Sometimes applications may use the `write()` callable returned
901 by the `start_response` function. This tries to resolve such edge
902 cases automatically. But if you don't get the expected output you
903 should set `buffered` to `True` which enforces buffering.
904
905 If passed an invalid WSGI application the behavior of this function is
906 undefined. Never pass non-conforming WSGI applications to this function.
907
908 :param app: the application to execute.
909 :param buffered: set to `True` to enforce buffering.
910 :return: tuple in the form ``(app_iter, status, headers)``
911 """
912 environ = _get_environ(environ)
913 response = []
914 buffer = []
915
916 def start_response(status, headers, exc_info=None):
917 if exc_info is not None:
918 reraise(*exc_info)
919 response[:] = [status, headers]
920 return buffer.append
921
922 app_rv = app(environ, start_response)
923 close_func = getattr(app_rv, 'close', None)
924 app_iter = iter(app_rv)
925
926 # when buffering we emit the close call early and convert the
927 # application iterator into a regular list
928 if buffered:
929 try:
930 app_iter = list(app_iter)
931 finally:
932 if close_func is not None:
933 close_func()
934
935 # otherwise we iterate the application iter until we have a response, chain
936 # the already received data with the already collected data and wrap it in
937 # a new `ClosingIterator` if we need to restore a `close` callable from the
938 # original return value.
939 else:
940 while not response:
941 buffer.append(next(app_iter))
942 if buffer:
943 app_iter = chain(buffer, app_iter)
944 if close_func is not None and app_iter is not app_rv:
945 app_iter = ClosingIterator(app_iter, close_func)
946
947 return app_iter, response[0], Headers(response[1])
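The unbuffered branch above pulls items from the application iterator until `start_response` has fired, then chains the collected prefix back onto the remaining iterator. A compact stdlib re-implementation of that idea (a sketch under simplifying assumptions, not the full `run_wsgi_app` with `write()` and `close()` handling):

```python
from itertools import chain

def run_wsgi_app_sketch(app, environ):
    # Collect (status, headers) via start_response, pulling from the
    # app iterator only until the response has actually started.
    response = []
    buffer = []

    def start_response(status, headers, exc_info=None):
        response[:] = [status, headers]
        return buffer.append  # the (rarely used) write() callable

    app_iter = iter(app(environ, start_response))
    while not response:
        buffer.append(next(app_iter))
    if buffer:
        app_iter = chain(buffer, app_iter)
    return app_iter, response[0], response[1]

def demo_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello"]

app_iter, status, headers = run_wsgi_app_sketch(demo_app, {})
print(status, list(app_iter))
```

This works whether the app calls `start_response` eagerly (as `demo_app` does) or lazily from inside a generator, since iteration forces the call.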
+0
-230
werkzeug/testapp.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.testapp
3 ~~~~~~~~~~~~~~~~
4
5 Provide a small test application that can be used to test a WSGI server
6 and check it for WSGI compliance.
7
8 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
9 :license: BSD, see LICENSE for more details.
10 """
11 import os
12 import sys
13 import werkzeug
14 from textwrap import wrap
15 from werkzeug.wrappers import BaseRequest as Request, BaseResponse as Response
16 from werkzeug.utils import escape
17 import base64
18
19 logo = Response(base64.b64decode('''
20 R0lGODlhoACgAOMIAAEDACwpAEpCAGdgAJaKAM28AOnVAP3rAP/////////
21 //////////////////////yH5BAEKAAgALAAAAACgAKAAAAT+EMlJq704680R+F0ojmRpnuj0rWnrv
22 nB8rbRs33gu0bzu/0AObxgsGn3D5HHJbCUFyqZ0ukkSDlAidctNFg7gbI9LZlrBaHGtzAae0eloe25
23 7w9EDOX2fst/xenyCIn5/gFqDiVVDV4aGeYiKkhSFjnCQY5OTlZaXgZp8nJ2ekaB0SQOjqphrpnOiq
24 ncEn65UsLGytLVmQ6m4sQazpbtLqL/HwpnER8bHyLrLOc3Oz8PRONPU1crXN9na263dMt/g4SzjMeX
25 m5yDpLqgG7OzJ4u8lT/P69ej3JPn69kHzN2OIAHkB9RUYSFCFQYQJFTIkCDBiwoXWGnowaLEjRm7+G
26 p9A7Hhx4rUkAUaSLJlxHMqVMD/aSycSZkyTplCqtGnRAM5NQ1Ly5OmzZc6gO4d6DGAUKA+hSocWYAo
27 SlM6oUWX2O/o0KdaVU5vuSQLAa0ADwQgMEMB2AIECZhVSnTno6spgbtXmHcBUrQACcc2FrTrWS8wAf
28 78cMFBgwIBgbN+qvTt3ayikRBk7BoyGAGABAdYyfdzRQGV3l4coxrqQ84GpUBmrdR3xNIDUPAKDBSA
29 ADIGDhhqTZIWaDcrVX8EsbNzbkvCOxG8bN5w8ly9H8jyTJHC6DFndQydbguh2e/ctZJFXRxMAqqPVA
30 tQH5E64SPr1f0zz7sQYjAHg0In+JQ11+N2B0XXBeeYZgBZFx4tqBToiTCPv0YBgQv8JqA6BEf6RhXx
31 w1ENhRBnWV8ctEX4Ul2zc3aVGcQNC2KElyTDYyYUWvShdjDyMOGMuFjqnII45aogPhz/CodUHFwaDx
32 lTgsaOjNyhGWJQd+lFoAGk8ObghI0kawg+EV5blH3dr+digkYuAGSaQZFHFz2P/cTaLmhF52QeSb45
33 Jwxd+uSVGHlqOZpOeJpCFZ5J+rkAkFjQ0N1tah7JJSZUFNsrkeJUJMIBi8jyaEKIhKPomnC91Uo+NB
34 yyaJ5umnnpInIFh4t6ZSpGaAVmizqjpByDegYl8tPE0phCYrhcMWSv+uAqHfgH88ak5UXZmlKLVJhd
35 dj78s1Fxnzo6yUCrV6rrDOkluG+QzCAUTbCwf9SrmMLzK6p+OPHx7DF+bsfMRq7Ec61Av9i6GLw23r
36 idnZ+/OO0a99pbIrJkproCQMA17OPG6suq3cca5ruDfXCCDoS7BEdvmJn5otdqscn+uogRHHXs8cbh
37 EIfYaDY1AkrC0cqwcZpnM6ludx72x0p7Fo/hZAcpJDjax0UdHavMKAbiKltMWCF3xxh9k25N/Viud8
38 ba78iCvUkt+V6BpwMlErmcgc502x+u1nSxJSJP9Mi52awD1V4yB/QHONsnU3L+A/zR4VL/indx/y64
39 gqcj+qgTeweM86f0Qy1QVbvmWH1D9h+alqg254QD8HJXHvjQaGOqEqC22M54PcftZVKVSQG9jhkv7C
40 JyTyDoAJfPdu8v7DRZAxsP/ky9MJ3OL36DJfCFPASC3/aXlfLOOON9vGZZHydGf8LnxYJuuVIbl83y
41 Az5n/RPz07E+9+zw2A2ahz4HxHo9Kt79HTMx1Q7ma7zAzHgHqYH0SoZWyTuOLMiHwSfZDAQTn0ajk9
42 YQqodnUYjByQZhZak9Wu4gYQsMyEpIOAOQKze8CmEF45KuAHTvIDOfHJNipwoHMuGHBnJElUoDmAyX
43 c2Qm/R8Ah/iILCCJOEokGowdhDYc/yoL+vpRGwyVSCWFYZNljkhEirGXsalWcAgOdeAdoXcktF2udb
44 qbUhjWyMQxYO01o6KYKOr6iK3fE4MaS+DsvBsGOBaMb0Y6IxADaJhFICaOLmiWTlDAnY1KzDG4ambL
45 cWBA8mUzjJsN2KjSaSXGqMCVXYpYkj33mcIApyhQf6YqgeNAmNvuC0t4CsDbSshZJkCS1eNisKqlyG
46 cF8G2JeiDX6tO6Mv0SmjCa3MFb0bJaGPMU0X7c8XcpvMaOQmCajwSeY9G0WqbBmKv34DsMIEztU6Y2
47 KiDlFdt6jnCSqx7Dmt6XnqSKaFFHNO5+FmODxMCWBEaco77lNDGXBM0ECYB/+s7nKFdwSF5hgXumQe
48 EZ7amRg39RHy3zIjyRCykQh8Zo2iviRKyTDn/zx6EefptJj2Cw+Ep2FSc01U5ry4KLPYsTyWnVGnvb
49 UpyGlhjBUljyjHhWpf8OFaXwhp9O4T1gU9UeyPPa8A2l0p1kNqPXEVRm1AOs1oAGZU596t6SOR2mcB
50 Oco1srWtkaVrMUzIErrKri85keKqRQYX9VX0/eAUK1hrSu6HMEX3Qh2sCh0q0D2CtnUqS4hj62sE/z
51 aDs2Sg7MBS6xnQeooc2R2tC9YrKpEi9pLXfYXp20tDCpSP8rKlrD4axprb9u1Df5hSbz9QU0cRpfgn
52 kiIzwKucd0wsEHlLpe5yHXuc6FrNelOl7pY2+11kTWx7VpRu97dXA3DO1vbkhcb4zyvERYajQgAADs
53 ='''), mimetype='image/png')
54
55
56 TEMPLATE = u'''\
57 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
58 "http://www.w3.org/TR/html4/loose.dtd">
59 <title>WSGI Information</title>
60 <style type="text/css">
61 @import url(http://fonts.googleapis.com/css?family=Ubuntu);
62
63 body { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva',
64 'Verdana', sans-serif; background-color: white; color: #000;
65 font-size: 15px; text-align: center; }
66 #logo { float: right; padding: 0 0 10px 10px; }
67 div.box { text-align: left; width: 45em; margin: auto; padding: 50px 0;
68 background-color: white; }
69 h1, h2 { font-family: 'Ubuntu', 'Lucida Grande', 'Lucida Sans Unicode',
70 'Geneva', 'Verdana', sans-serif; font-weight: normal; }
71 h1 { margin: 0 0 30px 0; }
72 h2 { font-size: 1.4em; margin: 1em 0 0.5em 0; }
73 table { width: 100%%; border-collapse: collapse; border: 1px solid #AFC5C9 }
74 table th { background-color: #AFC1C4; color: white; font-size: 0.72em;
75 font-weight: normal; width: 18em; vertical-align: top;
76 padding: 0.5em 0 0.1em 0.5em; }
77 table td { border: 1px solid #AFC5C9; padding: 0.1em 0 0.1em 0.5em; }
78 code { font-family: 'Consolas', 'Monaco', 'Bitstream Vera Sans Mono',
79 monospace; font-size: 0.7em; }
80 ul li { line-height: 1.5em; }
81 ul.path { font-size: 0.7em; margin: 0 -30px; padding: 8px 30px;
82 list-style: none; background: #E8EFF0; }
83 ul.path li { line-height: 1.6em; }
84 li.virtual { color: #999; text-decoration: underline; }
85 li.exp { background: white; }
86 </style>
87 <div class="box">
88 <img src="?resource=logo" id="logo" alt="[The Werkzeug Logo]" />
89 <h1>WSGI Information</h1>
90 <p>
91 This page displays all available information about the WSGI server and
92 the underlying Python interpreter.
93 <h2 id="python-interpreter">Python Interpreter</h2>
94 <table>
95 <tr>
96 <th>Python Version
97 <td>%(python_version)s
98 <tr>
99 <th>Platform
100 <td>%(platform)s [%(os)s]
101 <tr>
102 <th>API Version
103 <td>%(api_version)s
104 <tr>
105 <th>Byteorder
106 <td>%(byteorder)s
107 <tr>
108 <th>Werkzeug Version
109 <td>%(werkzeug_version)s
110 </table>
111 <h2 id="wsgi-environment">WSGI Environment</h2>
112 <table>%(wsgi_env)s</table>
113 <h2 id="installed-eggs">Installed Eggs</h2>
114 <p>
115 The following python packages were installed on the system as
116 Python eggs:
117 <ul>%(python_eggs)s</ul>
118 <h2 id="sys-path">System Path</h2>
119 <p>
120 The following paths are the current contents of the load path, which
121 is searched for Python packages. Note that not
122 all items in this path are folders. Gray and underlined items are
123 entries pointing to invalid resources or used by custom import hooks
124 such as the zip importer.
125 <p>
126 Items with a bright background were expanded for display from a relative
127 path. If you encounter such paths in the output you might want to check
128 your setup as relative paths are usually problematic in multithreaded
129 environments.
130 <ul class="path">%(sys_path)s</ul>
131 </div>
132 '''
133
134
135 def iter_sys_path():
136 if os.name == 'posix':
137 def strip(x):
138 prefix = os.path.expanduser('~')
139 if x.startswith(prefix):
140 x = '~' + x[len(prefix):]
141 return x
142 else:
143 strip = lambda x: x
144
145 cwd = os.path.abspath(os.getcwd())
146 for item in sys.path:
147 path = os.path.join(cwd, item or os.path.curdir)
148 yield strip(os.path.normpath(path)), \
149 not os.path.isdir(path), path != item
150
151
152 def render_testapp(req):
153 try:
154 import pkg_resources
155 except ImportError:
156 eggs = ()
157 else:
158 eggs = sorted(pkg_resources.working_set,
159 key=lambda x: x.project_name.lower())
160 python_eggs = []
161 for egg in eggs:
162 try:
163 version = egg.version
164 except (ValueError, AttributeError):
165 version = 'unknown'
166 python_eggs.append('<li>%s <small>[%s]</small>' % (
167 escape(egg.project_name),
168 escape(version)
169 ))
170
171 wsgi_env = []
172 sorted_environ = sorted(req.environ.items(),
173 key=lambda x: repr(x[0]).lower())
174 for key, value in sorted_environ:
175 wsgi_env.append('<tr><th>%s<td><code>%s</code>' % (
176 escape(str(key)),
177 ' '.join(wrap(escape(repr(value))))
178 ))
179
180 sys_path = []
181 for item, virtual, expanded in iter_sys_path():
182 class_ = []
183 if virtual:
184 class_.append('virtual')
185 if expanded:
186 class_.append('exp')
187 sys_path.append('<li%s>%s' % (
188 class_ and ' class="%s"' % ' '.join(class_) or '',
189 escape(item)
190 ))
191
192 return (TEMPLATE % {
193 'python_version': '<br>'.join(escape(sys.version).splitlines()),
194 'platform': escape(sys.platform),
195 'os': escape(os.name),
196 'api_version': sys.api_version,
197 'byteorder': sys.byteorder,
198 'werkzeug_version': werkzeug.__version__,
199 'python_eggs': '\n'.join(python_eggs),
200 'wsgi_env': '\n'.join(wsgi_env),
201 'sys_path': '\n'.join(sys_path)
202 }).encode('utf-8')
203
204
205 def test_app(environ, start_response):
206 """Simple test application that dumps the environment. You can use
207 it to check if Werkzeug is working properly:
208
209 .. sourcecode:: pycon
210
211 >>> from werkzeug.serving import run_simple
212 >>> from werkzeug.testapp import test_app
213 >>> run_simple('localhost', 3000, test_app)
214 * Running on http://localhost:3000/
215
216 The application displays important information from the WSGI environment,
217 the Python interpreter and the installed libraries.
218 """
219 req = Request(environ, populate_request=False)
220 if req.args.get('resource') == 'logo':
221 response = logo
222 else:
223 response = Response(render_testapp(req), mimetype='text/html')
224 return response(environ, start_response)
225
226
227 if __name__ == '__main__':
228 from werkzeug.serving import run_simple
229 run_simple('localhost', 5000, test_app, use_reloader=True)
+0
-1007
werkzeug/urls.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.urls
3 ~~~~~~~~~~~~~
4
5 ``werkzeug.urls`` used to provide several wrapper functions for Python 2
6 urlparse, whose main purpose was to work around the behavior of the Py2
7 stdlib and its lack of unicode support. While this was already a somewhat
8 inconvenient situation, it got even more complicated because Python 3's
9 ``urllib.parse`` actually does handle unicode properly. In other words,
10 this module would wrap two libraries with completely different behavior. So
11 now this module contains a 2-and-3-compatible backport of Python 3's
12 ``urllib.parse``, which is mostly API-compatible.
13
14 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
15 :license: BSD, see LICENSE for more details.
16 """
17 import os
18 import re
19 from werkzeug._compat import text_type, PY2, to_unicode, \
20 to_native, implements_to_string, try_coerce_native, \
21 normalize_string_tuple, make_literal_wrapper, \
22 fix_tuple_repr
23 from werkzeug._internal import _encode_idna, _decode_idna
24 from werkzeug.datastructures import MultiDict, iter_multi_items
25 from collections import namedtuple
26
27
28 # A regular expression for what a valid scheme looks like
29 _scheme_re = re.compile(r'^[a-zA-Z0-9+-.]+$')
30
31 # Characters that are safe in any part of a URL.
32 _always_safe = (b'abcdefghijklmnopqrstuvwxyz'
33 b'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.-+')
34
35 _hexdigits = '0123456789ABCDEFabcdef'
36 _hextobyte = dict(
37 ((a + b).encode(), int(a + b, 16))
38 for a in _hexdigits for b in _hexdigits
39 )
40 _bytetohex = [
41 ('%%%02X' % char).encode('ascii') for char in range(256)
42 ]
43
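The `_hextobyte` / `_bytetohex` tables above precompute both directions of percent-encoding: two hex digits (as bytes) to a byte value, and a byte value to its `%XX` escape. The same tables can be built and checked directly:

```python
hexdigits = "0123456789ABCDEFabcdef"

# All 484 two-digit hex pairs (both cases) map to their byte value.
hextobyte = {(a + b).encode(): int(a + b, 16) for a in hexdigits for b in hexdigits}

# Every byte maps to its uppercase percent escape.
bytetohex = [("%%%02X" % char).encode("ascii") for char in range(256)]

print(hextobyte[b"2F"], bytetohex[0x2F])
```

Precomputing both tables makes quoting and unquoting a single dict/list lookup per byte instead of repeated `int(..., 16)` / formatting calls.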
44
45 _URLTuple = fix_tuple_repr(namedtuple(
46 '_URLTuple',
47 ['scheme', 'netloc', 'path', 'query', 'fragment']
48 ))
49
50
51 class BaseURL(_URLTuple):
52
53 '''Superclass of :py:class:`URL` and :py:class:`BytesURL`.'''
54 __slots__ = ()
55
56 def replace(self, **kwargs):
57 """Return an URL with the same values, except for those parameters
58 given new values by whichever keyword arguments are specified."""
59 return self._replace(**kwargs)
60
61 @property
62 def host(self):
63 """The host part of the URL if available, otherwise `None`. The
64 host is either the hostname or the IP address mentioned in the
65 URL. It will not contain the port.
66 """
67 return self._split_host()[0]
68
69 @property
70 def ascii_host(self):
71 """Works exactly like :attr:`host` but will return a result that
72 is restricted to ASCII. If it finds a netloc that is not ASCII
73 it will attempt to idna-encode it. This is useful for socket
74 operations when the URL might include internationalized characters.
75 """
76 rv = self.host
77 if rv is not None and isinstance(rv, text_type):
78 try:
79 rv = _encode_idna(rv)
80 except UnicodeError:
81 rv = rv.encode('ascii', 'ignore')
82 return to_native(rv, 'ascii', 'ignore')
83
84 @property
85 def port(self):
86 """The port in the URL as an integer if it was present, `None`
87 otherwise. This does not fill in default ports.
88 """
89 try:
90 rv = int(to_native(self._split_host()[1]))
91 if 0 <= rv <= 65535:
92 return rv
93 except (ValueError, TypeError):
94 pass
95
96 @property
97 def auth(self):
98 """The authentication part in the URL if available, `None`
99 otherwise.
100 """
101 return self._split_netloc()[0]
102
103 @property
104 def username(self):
105 """The username if it was part of the URL, `None` otherwise.
106 This undergoes URL decoding and will always be a unicode string.
107 """
108 rv = self._split_auth()[0]
109 if rv is not None:
110 return _url_unquote_legacy(rv)
111
112 @property
113 def raw_username(self):
114 """The username if it was part of the URL, `None` otherwise.
115 Unlike :attr:`username` this one is not being decoded.
116 """
117 return self._split_auth()[0]
118
119 @property
120 def password(self):
121 """The password if it was part of the URL, `None` otherwise.
122 This undergoes URL decoding and will always be a unicode string.
123 """
124 rv = self._split_auth()[1]
125 if rv is not None:
126 return _url_unquote_legacy(rv)
127
128 @property
129 def raw_password(self):
130 """The password if it was part of the URL, `None` otherwise.
131 Unlike :attr:`password` this one is not being decoded.
132 """
133 return self._split_auth()[1]
134
135 def decode_query(self, *args, **kwargs):
136 """Decodes the query part of the URL. This is a shortcut for
137 calling :func:`url_decode` on the query argument. The arguments and
138 keyword arguments are forwarded to :func:`url_decode` unchanged.
139 """
140 return url_decode(self.query, *args, **kwargs)
141
142 def join(self, *args, **kwargs):
143 """Joins this URL with another one. This is just a convenience
144 function for calling into :meth:`url_join` and then parsing the
145 return value again.
146 """
147 return url_parse(url_join(self, *args, **kwargs))
148
149 def to_url(self):
150 """Returns a URL string or bytes depending on the type of the
151 information stored. This is just a convenience function
152 for calling :meth:`url_unparse` for this URL.
153 """
154 return url_unparse(self)
155
156 def decode_netloc(self):
157 """Decodes the netloc part into a string."""
158 rv = _decode_idna(self.host or '')
159
160 if ':' in rv:
161 rv = '[%s]' % rv
162 port = self.port
163 if port is not None:
164 rv = '%s:%d' % (rv, port)
165 auth = ':'.join(filter(None, [
166 _url_unquote_legacy(self.raw_username or '', '/:%@'),
167 _url_unquote_legacy(self.raw_password or '', '/:%@'),
168 ]))
169 if auth:
170 rv = '%s@%s' % (auth, rv)
171 return rv
172
173 def to_uri_tuple(self):
174 """Returns a :class:`BytesURL` tuple that holds a URI. This will
175 encode all the information in the URL properly to ASCII using the
176 rules a web browser would follow.
177
178 It's usually more interesting to directly call :meth:`iri_to_uri` which
179 will return a string.
180 """
181 return url_parse(iri_to_uri(self).encode('ascii'))
182
183 def to_iri_tuple(self):
184 """Returns a :class:`URL` tuple that holds an IRI. This will try
185 to decode as much information as possible in the URL without
186 losing information similar to how a web browser does it for the
187 URL bar.
188
189 It's usually more interesting to directly call :meth:`uri_to_iri` which
190 will return a string.
191 """
192 return url_parse(uri_to_iri(self))
193
194 def get_file_location(self, pathformat=None):
195 """Returns a tuple with the location of the file in the form
196 ``(server, location)``. If the netloc is empty in the URL or
197 points to localhost, it's represented as ``None``.
198
199 The `pathformat` by default is autodetection but needs to be set
200 when working with URLs of a specific system. The supported values
201 are ``'windows'`` when working with Windows or DOS paths and
202 ``'posix'`` when working with posix paths.
203
204 If the URL does not point to a local file, the server and location
205 are both represented as ``None``.
206
207 :param pathformat: The expected format of the path component.
208 Currently ``'windows'`` and ``'posix'`` are
209 supported. Defaults to ``None`` which is
210 autodetect.
211 """
212 if self.scheme != 'file':
213 return None, None
214
215 path = url_unquote(self.path)
216 host = self.netloc or None
217
218 if pathformat is None:
219 if os.name == 'nt':
220 pathformat = 'windows'
221 else:
222 pathformat = 'posix'
223
224 if pathformat == 'windows':
225 if path[:1] == '/' and path[1:2].isalpha() and path[2:3] in '|:':
226 path = path[1:2] + ':' + path[3:]
227 windows_share = path[:3] in ('\\' * 3, '/' * 3)
228 import ntpath
229 path = ntpath.normpath(path)
230 # Windows shared drives are represented as ``\\host\\directory``.
231 # That results in a URL like ``file://///host/directory``, and a
232 # path like ``///host/directory``. We need to special-case this
233 # because the path contains the hostname.
234 if windows_share and host is None:
235 parts = path.lstrip('\\').split('\\', 1)
236 if len(parts) == 2:
237 host, path = parts
238 else:
239 host = parts[0]
240 path = ''
241 elif pathformat == 'posix':
242 import posixpath
243 path = posixpath.normpath(path)
244 else:
245 raise TypeError('Invalid path format %s' % repr(pathformat))
246
247 if host in ('127.0.0.1', '::1', 'localhost'):
248 host = None
249
250 return host, path
251
252 def _split_netloc(self):
253 if self._at in self.netloc:
254 return self.netloc.split(self._at, 1)
255 return None, self.netloc
256
257 def _split_auth(self):
258 auth = self._split_netloc()[0]
259 if not auth:
260 return None, None
261 if self._colon not in auth:
262 return auth, None
263 return auth.split(self._colon, 1)
264
265 def _split_host(self):
266 rv = self._split_netloc()[1]
267 if not rv:
268 return None, None
269
270 if not rv.startswith(self._lbracket):
271 if self._colon in rv:
272 return rv.split(self._colon, 1)
273 return rv, None
274
275 idx = rv.find(self._rbracket)
276 if idx < 0:
277 return rv, None
278
279 host = rv[1:idx]
280 rest = rv[idx + 1:]
281 if rest.startswith(self._colon):
282 return host, rest[1:]
283 return host, None
284
285
286 @implements_to_string
287 class URL(BaseURL):
288
289 """Represents a parsed URL. This behaves like a regular tuple but
290 also has some extra attributes that give further insight into the
291 URL.
292 """
293 __slots__ = ()
294 _at = '@'
295 _colon = ':'
296 _lbracket = '['
297 _rbracket = ']'
298
299 def __str__(self):
300 return self.to_url()
301
302 def encode_netloc(self):
303 """Encodes the netloc part to an ASCII-safe native string."""
304 rv = self.ascii_host or ''
305 if ':' in rv:
306 rv = '[%s]' % rv
307 port = self.port
308 if port is not None:
309 rv = '%s:%d' % (rv, port)
310 auth = ':'.join(filter(None, [
311 url_quote(self.raw_username or '', 'utf-8', 'strict', '/:%'),
312 url_quote(self.raw_password or '', 'utf-8', 'strict', '/:%'),
313 ]))
314 if auth:
315 rv = '%s@%s' % (auth, rv)
316 return to_native(rv)
317
318 def encode(self, charset='utf-8', errors='replace'):
319 """Encodes the URL to a tuple made out of bytes. The charset is
320 only being used for the path, query and fragment.
321 """
322 return BytesURL(
323 self.scheme.encode('ascii'),
324 self.encode_netloc(),
325 self.path.encode(charset, errors),
326 self.query.encode(charset, errors),
327 self.fragment.encode(charset, errors)
328 )
329
330
331 class BytesURL(BaseURL):
332
333 """Represents a parsed URL in bytes."""
334 __slots__ = ()
335 _at = b'@'
336 _colon = b':'
337 _lbracket = b'['
338 _rbracket = b']'
339
340 def __str__(self):
341 return self.to_url().decode('utf-8', 'replace')
342
343 def encode_netloc(self):
344 """Returns the netloc unchanged as bytes."""
345 return self.netloc
346
347 def decode(self, charset='utf-8', errors='replace'):
348 """Decodes the URL to a tuple made out of strings. The charset is
349 only being used for the path, query and fragment.
350 """
351 return URL(
352 self.scheme.decode('ascii'),
353 self.decode_netloc(),
354 self.path.decode(charset, errors),
355 self.query.decode(charset, errors),
356 self.fragment.decode(charset, errors)
357 )
358
359
360 def _unquote_to_bytes(string, unsafe=''):
361 if isinstance(string, text_type):
362 string = string.encode('utf-8')
363 if isinstance(unsafe, text_type):
364 unsafe = unsafe.encode('utf-8')
365 unsafe = frozenset(bytearray(unsafe))
366 bits = iter(string.split(b'%'))
367 result = bytearray(next(bits, b''))
368 for item in bits:
369 try:
370 char = _hextobyte[item[:2]]
371 if char in unsafe:
372 raise KeyError()
373 result.append(char)
374 result.extend(item[2:])
375 except KeyError:
376 result.extend(b'%')
377 result.extend(item)
378 return bytes(result)
379
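`_unquote_to_bytes` above falls back to re-emitting the literal `%` and its trailing characters whenever an escape is malformed (or decodes to a byte listed as unsafe). The standard library's `unquote_to_bytes` behaves the same way for malformed escapes, which makes for a quick cross-check:

```python
from urllib.parse import unquote_to_bytes

# Valid escapes are decoded; malformed ones pass through unchanged.
print(unquote_to_bytes("a%2Fb%ZZ"))  # b'a/b%ZZ'
```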
380
381 def _url_encode_impl(obj, charset, encode_keys, sort, key):
382 iterable = iter_multi_items(obj)
383 if sort:
384 iterable = sorted(iterable, key=key)
385 for key, value in iterable:
386 if value is None:
387 continue
388 if not isinstance(key, bytes):
389 key = text_type(key).encode(charset)
390 if not isinstance(value, bytes):
391 value = text_type(value).encode(charset)
392 yield url_quote_plus(key) + '=' + url_quote_plus(value)
393
394
395 def _url_unquote_legacy(value, unsafe=''):
396 try:
397 return url_unquote(value, charset='utf-8',
398 errors='strict', unsafe=unsafe)
399 except UnicodeError:
400 return url_unquote(value, charset='latin1', unsafe=unsafe)
401
402
403 def url_parse(url, scheme=None, allow_fragments=True):
404 """Parses a URL from a string into a :class:`URL` tuple. If the URL
405 is lacking a scheme it can be provided as the second argument. Otherwise,
406 it is ignored. Optionally fragments can be stripped from the URL
407 by setting `allow_fragments` to `False`.
408
409 The inverse of this function is :func:`url_unparse`.
410
411 :param url: the URL to parse.
412 :param scheme: the default scheme to use if the URL has none.
413 :param allow_fragments: if set to `False` a fragment will be removed
414 from the URL.
415 """
416 s = make_literal_wrapper(url)
417 is_text_based = isinstance(url, text_type)
418
419 if scheme is None:
420 scheme = s('')
421 netloc = query = fragment = s('')
422 i = url.find(s(':'))
423 if i > 0 and _scheme_re.match(to_native(url[:i], errors='replace')):
424 # make sure "iri" is not actually a port number (in which case
425 # "scheme" is really part of the path)
426 rest = url[i + 1:]
427 if not rest or any(c not in s('0123456789') for c in rest):
428 # not a port number
429 scheme, url = url[:i].lower(), rest
430
431 if url[:2] == s('//'):
432 delim = len(url)
433 for c in s('/?#'):
434 wdelim = url.find(c, 2)
435 if wdelim >= 0:
436 delim = min(delim, wdelim)
437 netloc, url = url[2:delim], url[delim:]
438 if (s('[') in netloc and s(']') not in netloc) or \
439 (s(']') in netloc and s('[') not in netloc):
440 raise ValueError('Invalid IPv6 URL')
441
442 if allow_fragments and s('#') in url:
443 url, fragment = url.split(s('#'), 1)
444 if s('?') in url:
445 url, query = url.split(s('?'), 1)
446
447 result_type = is_text_based and URL or BytesURL
448 return result_type(scheme, netloc, url, query, fragment)
449
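`url_parse` returns the same five components as the standard library's `urlsplit`, which can serve as a reference for what each field holds:

```python
from urllib.parse import urlsplit

# The five-tuple mirrors (scheme, netloc, path, query, fragment) above.
parts = urlsplit("http://user:pw@example.com:8080/path?q=1#frag")
print(parts.scheme, parts.netloc, parts.path, parts.query, parts.fragment)
```

As in `BaseURL`, the netloc keeps the raw `user:pw@host:port` string, with `hostname`, `port`, `username`, and `password` exposed as derived accessors.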
450
451 def url_quote(string, charset='utf-8', errors='strict', safe='/:', unsafe=''):
452 """URL encode a single string with a given encoding.
453
454 :param string: the string to quote.
455 :param charset: the charset to be used.
456 :param safe: an optional sequence of safe characters.
457 :param unsafe: an optional sequence of unsafe characters.
458
459 .. versionadded:: 0.9.2
460 The `unsafe` parameter was added.
461 """
462 if not isinstance(string, (text_type, bytes, bytearray)):
463 string = text_type(string)
464 if isinstance(string, text_type):
465 string = string.encode(charset, errors)
466 if isinstance(safe, text_type):
467 safe = safe.encode(charset, errors)
468 if isinstance(unsafe, text_type):
469 unsafe = unsafe.encode(charset, errors)
470 safe = frozenset(bytearray(safe) + _always_safe) - frozenset(bytearray(unsafe))
471 rv = bytearray()
472 for char in bytearray(string):
473 if char in safe:
474 rv.append(char)
475 else:
476 rv.extend(_bytetohex[char])
477 return to_native(bytes(rv))
478
479
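A minimal stand-alone sketch of the percent-encoding loop in `url_quote` above: bytes in the safe set pass through, everything else becomes `%XX`. The always-safe set below is an assumption roughly matching RFC 3986 unreserved characters, not werkzeug's exact `_always_safe`:

```python
# Hypothetical simplified version of the quoting loop above.
_always_safe = (b"abcdefghijklmnopqrstuvwxyz"
                b"ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.-~")

def quote_sketch(string, safe="/:"):
    # build the combined safe set as integer byte values
    safe_set = frozenset(bytearray(safe.encode("ascii")) + bytearray(_always_safe))
    out = bytearray()
    for byte in bytearray(string.encode("utf-8")):
        if byte in safe_set:
            out.append(byte)          # pass safe bytes through unchanged
        else:
            out.extend(b"%%%02X" % byte)  # percent-encode everything else
    return out.decode("ascii")

print(quote_sketch("a b/\xfc"))  # → a%20b/%C3%BC
```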
480 def url_quote_plus(string, charset='utf-8', errors='strict', safe=''):
481 """URL encode a single string with the given encoding and convert
482 whitespace to "+".
483
484 :param string: The string to quote.
485 :param charset: The charset to be used.
486 :param safe: An optional sequence of safe characters.
487 """
488 return url_quote(string, charset, errors, safe + ' ', '+').replace(' ', '+')
489
490
491 def url_unparse(components):
492 """The reverse operation to :meth:`url_parse`. This accepts arbitrary
493 as well as :class:`URL` tuples and returns a URL as a string.
494
495 :param components: the parsed URL as tuple which should be converted
496 into a URL string.
497 """
498 scheme, netloc, path, query, fragment = \
499 normalize_string_tuple(components)
500 s = make_literal_wrapper(scheme)
501 url = s('')
502
503 # We generally treat file:///x and file:/x the same which is also
504 # what browsers seem to do. This also allows us to ignore a scheme
505 # registry for netloc utilization or having to differentiate between
506 # empty and missing netloc.
507 if netloc or (scheme and path.startswith(s('/'))):
508 if path and path[:1] != s('/'):
509 path = s('/') + path
510 url = s('//') + (netloc or s('')) + path
511 elif path:
512 url += path
513 if scheme:
514 url = scheme + s(':') + url
515 if query:
516 url = url + s('?') + query
517 if fragment:
518 url = url + s('#') + fragment
519 return url
520
521
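The reassembly done by `url_unparse` above corresponds to the stdlib's `urllib.parse.urlunsplit`, which joins the same five components back into a URL string:

```python
# Sketch: urlunsplit rebuilds a URL from (scheme, netloc, path, query, fragment),
# the same tuple shape url_unparse accepts above.
from urllib.parse import urlunsplit

url = urlunsplit(("http", "example.com", "/path", "x=1", "frag"))
print(url)  # → http://example.com/path?x=1#frag
```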
522 def url_unquote(string, charset='utf-8', errors='replace', unsafe=''):
523 """URL decode a single string with a given encoding. If the charset
524 is set to `None` no unicode decoding is performed and raw bytes
525 are returned.
526
527 :param string: the string to unquote.
528 :param charset: the charset of the query string. If set to `None`
529 no unicode decoding will take place.
530 :param errors: the error handling for the charset decoding.
531 """
532 rv = _unquote_to_bytes(string, unsafe)
533 if charset is not None:
534 rv = rv.decode(charset, errors)
535 return rv
536
537
538 def url_unquote_plus(s, charset='utf-8', errors='replace'):
539 """URL decode a single string with the given `charset` and decode "+" to
540 whitespace.
541
542 By default, decoding errors are replaced. If you want a different
543 behavior you can set `errors` to ``'ignore'`` or ``'strict'``. In
544 strict mode a :exc:`UnicodeDecodeError` is raised.
545
546 :param s: The string to unquote.
547 :param charset: the charset of the query string. If set to `None`
548 no unicode decoding will take place.
549 :param errors: The error handling for the `charset` decoding.
550 """
551 if isinstance(s, text_type):
552 s = s.replace(u'+', u' ')
553 else:
554 s = s.replace(b'+', b' ')
555 return url_unquote(s, charset, errors)
556
557
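The "+"-to-space step in `url_unquote_plus` above matches the stdlib's `urllib.parse.unquote_plus`, which replaces plus signs before percent-decoding:

```python
# Sketch: unquote_plus applies the same "+" → space replacement before
# percent-decoding, mirroring url_unquote_plus above.
from urllib.parse import unquote_plus

decoded = unquote_plus("a+b%26c")
print(decoded)  # → a b&c
```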
558 def url_fix(s, charset='utf-8'):
559 r"""Sometimes you get a URL from a user that just isn't a real URL because
560 it contains unsafe characters like ' ' and so on. This function can fix
561 some of the problems in a way similar to how browsers handle data entered
562 by the user:
563
564 >>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)')
565 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)'
566
567 :param s: the string with the URL to fix.
568 :param charset: The target charset for the URL if the url was given as
569 unicode string.
570 """
571 # First step is to switch to unicode processing and to convert
572 # backslashes (which are invalid in URLs anyways) to slashes. This is
573 # consistent with what Chrome does.
574 s = to_unicode(s, charset, 'replace').replace('\\', '/')
575
576 # For the specific case that we look like a malformed windows URL
577 # we want to fix this up manually:
578 if s.startswith('file://') and s[7:8].isalpha() and s[8:10] in (':/', '|/'):
579 s = 'file:///' + s[7:]
580
581 url = url_parse(s)
582 path = url_quote(url.path, charset, safe='/%+$!*\'(),')
583 qs = url_quote_plus(url.query, charset, safe=':&%=+$!*\'(),')
584 anchor = url_quote_plus(url.fragment, charset, safe=':&%=+$!*\'(),')
585 return to_native(url_unparse((url.scheme, url.encode_netloc(),
586 path, qs, anchor)))
587
588
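The quoting step of `url_fix` above can be sketched stand-alone with the stdlib. `fix_sketch` below is a hypothetical, simplified variant (it skips the `file://` special case and the query/fragment quoting):

```python
# Simplified sketch of url_fix above: normalize backslashes, split the URL,
# and percent-encode unsafe characters in the path.
from urllib.parse import quote, urlsplit, urlunsplit

def fix_sketch(s):
    parts = urlsplit(s.replace("\\", "/"))
    path = quote(parts.path, safe="/%+$!*'(),")
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

fixed = fix_sketch("http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)")
print(fixed)  # → http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)
```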
589 def uri_to_iri(uri, charset='utf-8', errors='replace'):
590 r"""
591 Converts a URI in a given charset to an IRI.
592
593 Examples for URI versus IRI:
594
595 >>> uri_to_iri(b'http://xn--n3h.net/')
596 u'http://\u2603.net/'
597 >>> uri_to_iri(b'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th')
598 u'http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th'
599
600 Query strings are left unchanged:
601
602 >>> uri_to_iri('/?foo=24&x=%26%2f')
603 u'/?foo=24&x=%26%2f'
604
605 .. versionadded:: 0.6
606
607 :param uri: The URI to convert.
608 :param charset: The charset of the URI.
609 :param errors: The error handling on decode.
610 """
611 if isinstance(uri, tuple):
612 uri = url_unparse(uri)
613 uri = url_parse(to_unicode(uri, charset))
614 path = url_unquote(uri.path, charset, errors, '%/;?')
615 query = url_unquote(uri.query, charset, errors, '%;/?:@&=+,$#')
616 fragment = url_unquote(uri.fragment, charset, errors, '%;/?:@&=+,$#')
617 return url_unparse((uri.scheme, uri.decode_netloc(),
618 path, query, fragment))
619
620
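The core decoding step of `uri_to_iri` above is percent-decoding back to text; the stdlib's `urllib.parse.unquote` performs the same utf-8 decode:

```python
# Sketch: unquote performs the percent-decode + utf-8 decode step that
# uri_to_iri above applies to the path, query, and fragment.
from urllib.parse import unquote

iri_path = unquote("/p%C3%A5th")
print(iri_path)  # → /påth
```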
621 def iri_to_uri(iri, charset='utf-8', errors='strict', safe_conversion=False):
622 r"""
623 Converts any unicode-based IRI to an acceptable ASCII URI. Werkzeug always
624 uses utf-8 URLs internally because this is what browsers and HTTP do as
625 well. In some places where it accepts a URL it also accepts a unicode IRI
626 and converts it into a URI.
627
628 Examples for IRI versus URI:
629
630 >>> iri_to_uri(u'http://☃.net/')
631 'http://xn--n3h.net/'
632 >>> iri_to_uri(u'http://üser:pässword@☃.net/påth')
633 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th'
634
635 There is a general problem with IRI and URI conversion with some
636 protocols that appear in the wild and violate the URI
637 specification. In places where Werkzeug goes through a forced IRI to
638 URI conversion it will set the `safe_conversion` flag, which will
639 not perform a conversion if the end result is already ASCII. This
640 can mean that the return value is not an entirely correct URI, but
641 it will not destroy such invalid URLs in the process.
642
643 As an example consider the following two IRIs::
644
645 magnet:?xt=uri:whatever
646 itms-services://?action=download-manifest
647
648 The internal representation after parsing of those URLs is the same
649 and there is no way to reconstruct the original one. If safe
650 conversion is enabled, however, this function becomes a no-op for both
651 of those strings as they both can be considered URIs.
652
653 .. versionadded:: 0.6
654
655 .. versionchanged:: 0.9.6
656 The `safe_conversion` parameter was added.
657
658 :param iri: The IRI to convert.
659 :param charset: The charset for the URI.
660 :param safe_conversion: indicates if a safe conversion should take place.
661 For more information see the explanation above.
662 """
663 if isinstance(iri, tuple):
664 iri = url_unparse(iri)
665
666 if safe_conversion:
667 try:
668 native_iri = to_native(iri)
669 ascii_iri = to_native(iri).encode('ascii')
670 if ascii_iri.split() == [ascii_iri]:
671 return native_iri
672 except UnicodeError:
673 pass
674
675 iri = url_parse(to_unicode(iri, charset, errors))
676
677 netloc = iri.encode_netloc()
678 path = url_quote(iri.path, charset, errors, '/:~+%')
679 query = url_quote(iri.query, charset, errors, '%&[]:;$*()+,!?*/=')
680 fragment = url_quote(iri.fragment, charset, errors, '=%&[]:;$()+,!?*/')
681
682 return to_native(url_unparse((iri.scheme, netloc,
683 path, query, fragment)))
684
685
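The two conversions `iri_to_uri` above performs can be sketched with the stdlib: IDNA-encode (punycode) the host and percent-encode non-ASCII path characters. The host and path values below are purely illustrative:

```python
# Sketch of the host and path handling in iri_to_uri above.
from urllib.parse import quote

host = "bücher.de".encode("idna").decode("ascii")  # punycode the host
path = quote("/påth", safe="/:~+%")                # percent-encode the path
uri = "http://" + host + path
print(uri)  # → http://xn--bcher-kva.de/p%C3%A5th
```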
686 def url_decode(s, charset='utf-8', decode_keys=False, include_empty=True,
687 errors='replace', separator='&', cls=None):
688 """
689 Parse a querystring and return it as :class:`MultiDict`. There is a
690 difference in key decoding on different Python versions. On Python 3
691 keys will always be fully decoded whereas on Python 2, keys will
692 remain bytestrings if they fit into ASCII. On 2.x keys can be forced
693 to be unicode by setting `decode_keys` to `True`.
694
695 If the charset is set to `None` no unicode decoding will happen and
696 raw bytes will be returned.
697
698 By default a missing value for a key will default to an empty string.
699 If you don't want that behavior you can set `include_empty` to `False`.
700
701 By default, decoding errors are replaced. If you want a different
702 behavior you can set `errors` to ``'ignore'`` or ``'strict'``. In
703 strict mode a :exc:`UnicodeDecodeError` is raised.
704
705 .. versionchanged:: 0.5
706 In previous versions ";" and "&" could be used for url decoding.
707 This changed in 0.5 where only "&" is supported. If you want to
708 use ";" instead a different `separator` can be provided.
709
710 The `cls` parameter was added.
711
712 :param s: a string with the query string to decode.
713 :param charset: the charset of the query string. If set to `None`
714 no unicode decoding will take place.
715 :param decode_keys: Used on Python 2.x to control whether keys should
716 be forced to be unicode objects. If set to `True`
717 then keys will be unicode in all cases. Otherwise,
718 they remain `str` if they fit into ASCII.
719 :param include_empty: Set to `False` if you don't want empty values to
720 appear in the dict.
721 :param errors: the decoding error behavior.
722 :param separator: the pair separator to be used, defaults to ``&``
723 :param cls: an optional dict class to use. If this is not specified
724 or `None` the default :class:`MultiDict` is used.
725 """
726 if cls is None:
727 cls = MultiDict
728 if isinstance(s, text_type) and not isinstance(separator, text_type):
729 separator = separator.decode(charset or 'ascii')
730 elif isinstance(s, bytes) and not isinstance(separator, bytes):
731 separator = separator.encode(charset or 'ascii')
732 return cls(_url_decode_impl(s.split(separator), charset, decode_keys,
733 include_empty, errors))
734
735
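The pair splitting done by `url_decode` above corresponds to the stdlib's `urllib.parse.parse_qsl`; its `keep_blank_values` flag plays the role of `include_empty`:

```python
# Sketch: parse_qsl splits on "&" and "=" like url_decode above, yielding
# (key, value) pairs; blank values are kept when requested.
from urllib.parse import parse_qsl

pairs = parse_qsl("a=1&a=2&b=", keep_blank_values=True)
print(pairs)  # → [('a', '1'), ('a', '2'), ('b', '')]
```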
736 def url_decode_stream(stream, charset='utf-8', decode_keys=False,
737 include_empty=True, errors='replace', separator='&',
738 cls=None, limit=None, return_iterator=False):
739 """Works like :func:`url_decode` but decodes a stream. The behavior
740 of stream and limit follows functions like
741 :func:`~werkzeug.wsgi.make_line_iter`. The generator of pairs is
742 directly fed to the `cls` so you can consume the data while it's
743 parsed.
744
745 .. versionadded:: 0.8
746
747 :param stream: a stream with the encoded querystring
748 :param charset: the charset of the query string. If set to `None`
749 no unicode decoding will take place.
750 :param decode_keys: Used on Python 2.x to control whether keys should
751 be forced to be unicode objects. If set to `True`,
752 keys will be unicode in all cases. Otherwise, they
753 remain `str` if they fit into ASCII.
754 :param include_empty: Set to `False` if you don't want empty values to
755 appear in the dict.
756 :param errors: the decoding error behavior.
757 :param separator: the pair separator to be used, defaults to ``&``
758 :param cls: an optional dict class to use. If this is not specified
759 or `None` the default :class:`MultiDict` is used.
760 :param limit: the content length of the URL data. Not necessary if
761 a limited stream is provided.
762 :param return_iterator: if set to `True` the `cls` argument is ignored
763 and an iterator over all decoded pairs is
764 returned
765 """
766 from werkzeug.wsgi import make_chunk_iter
767 if return_iterator:
768 cls = lambda x: x
769 elif cls is None:
770 cls = MultiDict
771 pair_iter = make_chunk_iter(stream, separator, limit)
772 return cls(_url_decode_impl(pair_iter, charset, decode_keys,
773 include_empty, errors))
774
775
776 def _url_decode_impl(pair_iter, charset, decode_keys, include_empty, errors):
777 for pair in pair_iter:
778 if not pair:
779 continue
780 s = make_literal_wrapper(pair)
781 equal = s('=')
782 if equal in pair:
783 key, value = pair.split(equal, 1)
784 else:
785 if not include_empty:
786 continue
787 key = pair
788 value = s('')
789 key = url_unquote_plus(key, charset, errors)
790 if charset is not None and PY2 and not decode_keys:
791 key = try_coerce_native(key)
792 yield key, url_unquote_plus(value, charset, errors)
793
794
795 def url_encode(obj, charset='utf-8', encode_keys=False, sort=False, key=None,
796 separator=b'&'):
797 """URL encode a dict/`MultiDict`. If a value is `None` it will not appear
798 in the result string. By default only values are encoded into the target
799 charset. If `encode_keys` is set to ``True``, unicode keys are
800 supported too.
801
802 If `sort` is set to `True` the items are sorted by `key` or the default
803 sorting algorithm.
804
805 .. versionadded:: 0.5
806 `sort`, `key`, and `separator` were added.
807
808 :param obj: the object to encode into a query string.
809 :param charset: the charset of the query string.
810 :param encode_keys: set to `True` if you have unicode keys. (Ignored on
811 Python 3.x)
812 :param sort: set to `True` if you want parameters to be sorted by `key`.
813 :param separator: the separator to be used for the pairs.
814 :param key: an optional function to be used for sorting. For more details
815 check out the :func:`sorted` documentation.
816 """
817 separator = to_native(separator, 'ascii')
818 return separator.join(_url_encode_impl(obj, charset, encode_keys, sort, key))
819
820
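The joining done by `url_encode` above matches the stdlib's `urllib.parse.urlencode`, which quote-plus-encodes each pair and joins with `&` (the `None`-filtering behavior is werkzeug-specific):

```python
# Sketch: urlencode builds "key=value" pairs joined by "&", with values
# quote_plus-encoded, like url_encode above.
from urllib.parse import urlencode

qs = urlencode([("a", 1), ("b", "x y")])
print(qs)  # → a=1&b=x+y
```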
821 def url_encode_stream(obj, stream=None, charset='utf-8', encode_keys=False,
822 sort=False, key=None, separator=b'&'):
823 """Like :meth:`url_encode` but writes the results to a stream
824 object. If the stream is `None` a generator over all encoded
825 pairs is returned.
826
827 .. versionadded:: 0.8
828
829 :param obj: the object to encode into a query string.
830 :param stream: a stream to write the encoded object into or `None` if
831 an iterator over the encoded pairs should be returned. In
832 that case the separator argument is ignored.
833 :param charset: the charset of the query string.
834 :param encode_keys: set to `True` if you have unicode keys. (Ignored on
835 Python 3.x)
836 :param sort: set to `True` if you want parameters to be sorted by `key`.
837 :param separator: the separator to be used for the pairs.
838 :param key: an optional function to be used for sorting. For more details
839 check out the :func:`sorted` documentation.
840 """
841 separator = to_native(separator, 'ascii')
842 gen = _url_encode_impl(obj, charset, encode_keys, sort, key)
843 if stream is None:
844 return gen
845 for idx, chunk in enumerate(gen):
846 if idx:
847 stream.write(separator)
848 stream.write(chunk)
849
850
851 def url_join(base, url, allow_fragments=True):
852 """Join a base URL and a possibly relative URL to form an absolute
853 interpretation of the latter.
854
855 :param base: the base URL for the join operation.
856 :param url: the URL to join.
857 :param allow_fragments: indicates whether fragments should be allowed.
858 """
859 if isinstance(base, tuple):
860 base = url_unparse(base)
861 if isinstance(url, tuple):
862 url = url_unparse(url)
863
864 base, url = normalize_string_tuple((base, url))
865 s = make_literal_wrapper(base)
866
867 if not base:
868 return url
869 if not url:
870 return base
871
872 bscheme, bnetloc, bpath, bquery, bfragment = \
873 url_parse(base, allow_fragments=allow_fragments)
874 scheme, netloc, path, query, fragment = \
875 url_parse(url, bscheme, allow_fragments)
876 if scheme != bscheme:
877 return url
878 if netloc:
879 return url_unparse((scheme, netloc, path, query, fragment))
880 netloc = bnetloc
881
882 if path[:1] == s('/'):
883 segments = path.split(s('/'))
884 elif not path:
885 segments = bpath.split(s('/'))
886 if not query:
887 query = bquery
888 else:
889 segments = bpath.split(s('/'))[:-1] + path.split(s('/'))
890
891 # If the rightmost part is "./" we want to keep the slash but
892 # remove the dot.
893 if segments[-1] == s('.'):
894 segments[-1] = s('')
895
896 # Resolve ".." and "."
897 segments = [segment for segment in segments if segment != s('.')]
898 while 1:
899 i = 1
900 n = len(segments) - 1
901 while i < n:
902 if segments[i] == s('..') and \
903 segments[i - 1] not in (s(''), s('..')):
904 del segments[i - 1:i + 1]
905 break
906 i += 1
907 else:
908 break
909
910 # Remove trailing ".." if the URL is absolute
911 unwanted_marker = [s(''), s('..')]
912 while segments[:2] == unwanted_marker:
913 del segments[1]
914
915 path = s('/').join(segments)
916 return url_unparse((scheme, netloc, path, query, fragment))
917
918
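The `".."` and `"."` segment resolution implemented by the loop in `url_join` above is the same dot-segment removal the stdlib's `urllib.parse.urljoin` performs:

```python
# Sketch: urljoin resolves relative references against a base URL,
# including ".." / "." segment removal as in url_join above.
from urllib.parse import urljoin

joined = urljoin("http://example.com/a/b/c", "../d")
print(joined)  # → http://example.com/a/d
```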
919 class Href(object):
920
921 """Implements a callable that constructs URLs with the given base. The
922 function can be called with any number of positional and keyword
923 arguments which are then used to assemble the URL. Works with URLs
924 and posix paths.
925
926 Positional arguments are appended as individual segments to
927 the path of the URL:
928
929 >>> href = Href('/foo')
930 >>> href('bar', 23)
931 '/foo/bar/23'
932 >>> href('foo', bar=23)
933 '/foo/foo?bar=23'
934
935 If any of the arguments (positional or keyword) evaluates to `None` it
936 will be skipped. If no keyword arguments are given the last argument
937 can be a :class:`dict` or :class:`MultiDict` (or any other dict subclass),
938 otherwise the keyword arguments are used for the query parameters, cutting
939 off the first trailing underscore of the parameter name:
940
941 >>> href(is_=42)
942 '/foo?is=42'
943 >>> href({'foo': 'bar'})
944 '/foo?foo=bar'
945
946 Combining both methods is not allowed:
947
948 >>> href({'foo': 'bar'}, bar=42)
949 Traceback (most recent call last):
950 ...
951 TypeError: keyword arguments and query-dicts can't be combined
952
953 Accessing attributes on the href object creates a new href object with
954 the attribute name as prefix:
955
956 >>> bar_href = href.bar
957 >>> bar_href("blub")
958 '/foo/bar/blub'
959
960 If `sort` is set to `True` the items are sorted by `key` or the default
961 sorting algorithm:
962
963 >>> href = Href("/", sort=True)
964 >>> href(a=1, b=2, c=3)
965 '/?a=1&b=2&c=3'
966
967 .. versionadded:: 0.5
968 `sort` and `key` were added.
969 """
970
971 def __init__(self, base='./', charset='utf-8', sort=False, key=None):
972 if not base:
973 base = './'
974 self.base = base
975 self.charset = charset
976 self.sort = sort
977 self.key = key
978
979 def __getattr__(self, name):
980 if name[:2] == '__':
981 raise AttributeError(name)
982 base = self.base
983 if base[-1:] != '/':
984 base += '/'
985 return Href(url_join(base, name), self.charset, self.sort, self.key)
986
987 def __call__(self, *path, **query):
988 if path and isinstance(path[-1], dict):
989 if query:
990 raise TypeError('keyword arguments and query-dicts '
991 'can\'t be combined')
992 query, path = path[-1], path[:-1]
993 elif query:
994 query = dict([(k.endswith('_') and k[:-1] or k, v)
995 for k, v in query.items()])
996 path = '/'.join([to_unicode(url_quote(x, self.charset), 'ascii')
997 for x in path if x is not None]).lstrip('/')
998 rv = self.base
999 if path:
1000 if not rv.endswith('/'):
1001 rv += '/'
1002 rv = url_join(rv, './' + path)
1003 if query:
1004 rv += '?' + to_unicode(url_encode(query, self.charset, sort=self.sort,
1005 key=self.key), 'ascii')
1006 return to_native(rv)
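A rough stand-alone sketch of the call behavior documented in the `Href` docstring above. `href_sketch` is hypothetical and skips `Href`'s attribute chaining, query-dict argument, and sorting; it strips all trailing underscores rather than just one:

```python
# Simplified sketch of Href.__call__ above: quoted positional segments are
# appended to the base path, keyword arguments become the query string.
from urllib.parse import quote, urlencode

def href_sketch(base, *path, **query):
    segments = [quote(str(p)) for p in path if p is not None]
    url = base.rstrip("/") + ("/" + "/".join(segments) if segments else "")
    if query:
        url += "?" + urlencode({k.rstrip("_"): v for k, v in query.items()})
    return url

print(href_sketch("/foo", "bar", 23))      # → /foo/bar/23
print(href_sketch("/foo", "foo", bar=23))  # → /foo/foo?bar=23
```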
+0
-212
werkzeug/useragents.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.useragents
3 ~~~~~~~~~~~~~~~~~~~
4
5 This module provides a helper to inspect user agent strings. It is
6 far from complete but should work for most of the currently available
7 browsers.
8
9
10 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
11 :license: BSD, see LICENSE for more details.
12 """
13 import re
14
15
16 class UserAgentParser(object):
17
18 """A simple user agent parser. Used by the `UserAgent`."""
19
20 platforms = (
21 ('cros', 'chromeos'),
22 ('iphone|ios', 'iphone'),
23 ('ipad', 'ipad'),
24 (r'darwin|mac|os\s*x', 'macos'),
25 ('win', 'windows'),
26 (r'android', 'android'),
27 ('netbsd', 'netbsd'),
28 ('openbsd', 'openbsd'),
29 ('freebsd', 'freebsd'),
30 ('dragonfly', 'dragonflybsd'),
31 ('(sun|i86)os', 'solaris'),
32 (r'x11|lin(\b|ux)?', 'linux'),
33 (r'nintendo\s+wii', 'wii'),
34 ('irix', 'irix'),
35 ('hp-?ux', 'hpux'),
36 ('aix', 'aix'),
37 ('sco|unix_sv', 'sco'),
38 ('bsd', 'bsd'),
39 ('amiga', 'amiga'),
40 ('blackberry|playbook', 'blackberry'),
41 ('symbian', 'symbian')
42 )
43 browsers = (
44 ('googlebot', 'google'),
45 ('msnbot', 'msn'),
46 ('yahoo', 'yahoo'),
47 ('ask jeeves', 'ask'),
48 (r'aol|america\s+online\s+browser', 'aol'),
49 ('opera', 'opera'),
50 ('edge', 'edge'),
51 ('chrome', 'chrome'),
52 ('seamonkey', 'seamonkey'),
53 ('firefox|firebird|phoenix|iceweasel', 'firefox'),
54 ('galeon', 'galeon'),
55 ('safari|version', 'safari'),
56 ('webkit', 'webkit'),
57 ('camino', 'camino'),
58 ('konqueror', 'konqueror'),
59 ('k-meleon', 'kmeleon'),
60 ('netscape', 'netscape'),
61 (r'msie|microsoft\s+internet\s+explorer|trident/.+? rv:', 'msie'),
62 ('lynx', 'lynx'),
63 ('links', 'links'),
64 ('Baiduspider', 'baidu'),
65 ('bingbot', 'bing'),
66 ('mozilla', 'mozilla')
67 )
68
69 _browser_version_re = r'(?:%s)[/\sa-z(]*(\d+[.\da-z]+)?'
70 _language_re = re.compile(
71 r'(?:;\s*|\s+)(\b\w{2}\b(?:-\b\w{2}\b)?)\s*;|'
72 r'(?:\(|\[|;)\s*(\b\w{2}\b(?:-\b\w{2}\b)?)\s*(?:\]|\)|;)'
73 )
74
75 def __init__(self):
76 self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platforms]
77 self.browsers = [(b, re.compile(self._browser_version_re % a, re.I))
78 for a, b in self.browsers]
79
80 def __call__(self, user_agent):
81 for platform, regex in self.platforms:
82 match = regex.search(user_agent)
83 if match is not None:
84 break
85 else:
86 platform = None
87 for browser, regex in self.browsers:
88 match = regex.search(user_agent)
89 if match is not None:
90 version = match.group(1)
91 break
92 else:
93 browser = version = None
94 match = self._language_re.search(user_agent)
95 if match is not None:
96 language = match.group(1) or match.group(2)
97 else:
98 language = None
99 return platform, browser, version, language
100
101
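A tiny sketch of the technique `UserAgentParser.__call__` above uses: try patterns in order, capturing an optional version number next to the matched browser name. The pattern below is a deliberately reduced, hypothetical subset of the real table:

```python
import re

# Reduced sketch of the browser matching above: name and optional version.
_browser_re = re.compile(r"(firefox|chrome|safari)[/\s]*([\d.]+)?", re.I)

m = _browser_re.search("Mozilla/5.0 (X11; Linux x86_64) Firefox/61.0")
browser = m.group(1).lower() if m else None
version = m.group(2) if m else None
print(browser, version)  # → firefox 61.0
```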
102 class UserAgent(object):
103
104 """Represents a user agent. Pass it a WSGI environment or a user agent
105 string and you can inspect some of the details from the user agent
106 string via the attributes. The following attributes exist:
107
108 .. attribute:: string
109
110 the raw user agent string
111
112 .. attribute:: platform
113
114 the browser platform. The following platforms are currently
115 recognized:
116
117 - `aix`
118 - `amiga`
119 - `android`
120 - `blackberry`
121 - `bsd`
122 - `chromeos`
123 - `dragonflybsd`
124 - `freebsd`
125 - `hpux`
126 - `ipad`
127 - `iphone`
128 - `irix`
129 - `linux`
130 - `macos`
131 - `netbsd`
132 - `openbsd`
133 - `sco`
134 - `solaris`
135 - `symbian`
136 - `wii`
137 - `windows`
138
139 .. attribute:: browser
140
141 the name of the browser. The following browsers are currently
142 recognized:
143
144 - `aol` *
145 - `ask` *
146 - `baidu` *
147 - `bing` *
148 - `camino`
149 - `chrome`
150 - `firefox`
151 - `galeon`
152 - `google` *
153 - `kmeleon`
154 - `konqueror`
155 - `links`
156 - `lynx`
157 - `mozilla`
158 - `msie`
159 - `msn`
160 - `netscape`
161 - `opera`
162 - `safari`
163 - `seamonkey`
164 - `webkit`
165 - `yahoo` *
166
167 (Browsers marked with a star (``*``) are crawlers.)
168
169 .. attribute:: version
170
171 the version of the browser
172
173 .. attribute:: language
174
175 the language of the browser
176 """
177
178 _parser = UserAgentParser()
179
180 def __init__(self, environ_or_string):
181 if isinstance(environ_or_string, dict):
182 environ_or_string = environ_or_string.get('HTTP_USER_AGENT', '')
183 self.string = environ_or_string
184 self.platform, self.browser, self.version, self.language = \
185 self._parser(environ_or_string)
186
187 def to_header(self):
188 return self.string
189
190 def __str__(self):
191 return self.string
192
193 def __nonzero__(self):
194 return bool(self.browser)
195
196 __bool__ = __nonzero__
197
198 def __repr__(self):
199 return '<%s %r/%s>' % (
200 self.__class__.__name__,
201 self.browser,
202 self.version
203 )
204
205
206 # conceptually this belongs in this module but because we want to lazily
207 # load the user agent module (which happens in wrappers.py) we have to import
208 # it afterwards. The class itself has the module set to this module so
209 # pickle, inspect and similar modules treat the object as if it was really
210 # implemented here.
211 from werkzeug.wrappers import UserAgentMixin # noqa
+0
-628
werkzeug/utils.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.utils
3 ~~~~~~~~~~~~~~
4
5 This module implements various utilities for WSGI applications. Most of
6 them are used by the request and response wrappers, but they are also
7 useful on their own, especially for middleware development.
8
9 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
10 :license: BSD, see LICENSE for more details.
11 """
12 import re
13 import os
14 import sys
15 import pkgutil
16 try:
17 from html.entities import name2codepoint
18 except ImportError:
19 from htmlentitydefs import name2codepoint
20
21 from werkzeug._compat import unichr, text_type, string_types, iteritems, \
22 reraise, PY2
23 from werkzeug._internal import _DictAccessorProperty, \
24 _parse_signature, _missing
25
26
27 _format_re = re.compile(r'\$(?:(%s)|\{(%s)\})' % (('[a-zA-Z_][a-zA-Z0-9_]*',) * 2))
28 _entity_re = re.compile(r'&([^;]+);')
29 _filename_ascii_strip_re = re.compile(r'[^A-Za-z0-9_.-]')
30 _windows_device_files = ('CON', 'AUX', 'COM1', 'COM2', 'COM3', 'COM4', 'LPT1',
31 'LPT2', 'LPT3', 'PRN', 'NUL')
32
33
34 class cached_property(property):
35
36 """A decorator that converts a function into a lazy property. The
37 function wrapped is called the first time to retrieve the result
38 and then that calculated result is used the next time you access
39 the value::
40
41 class Foo(object):
42
43 @cached_property
44 def foo(self):
45 # calculate something important here
46 return 42
47
48 The class has to have a `__dict__` in order for this property to
49 work.
50 """
51
52 # implementation detail: A subclass of python's builtin property
53 # decorator, we override __get__ to check for a cached value. If one
54 # chooses to invoke __get__ by hand the property will still work as
55 # expected because the lookup logic is replicated in __get__ for
56 # manual invocation.
57
58 def __init__(self, func, name=None, doc=None):
59 self.__name__ = name or func.__name__
60 self.__module__ = func.__module__
61 self.__doc__ = doc or func.__doc__
62 self.func = func
63
64 def __set__(self, obj, value):
65 obj.__dict__[self.__name__] = value
66
67 def __get__(self, obj, type=None):
68 if obj is None:
69 return self
70 value = obj.__dict__.get(self.__name__, _missing)
71 if value is _missing:
72 value = self.func(obj)
73 obj.__dict__[self.__name__] = value
74 return value
75
76
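A behavioral sketch of the descriptor above, re-implemented stand-alone here for illustration (`CachedProperty` and `Foo` are hypothetical names): the wrapped function runs once, after which the cached entry in the instance `__dict__` shadows the non-data descriptor on later lookups.

```python
class CachedProperty(object):
    # simplified re-implementation of the cached_property pattern above
    def __init__(self, func):
        self.func = func
        self.__name__ = func.__name__

    def __get__(self, obj, type=None):
        if obj is None:
            return self
        # cache on the instance; later lookups hit __dict__ directly
        value = obj.__dict__[self.__name__] = self.func(obj)
        return value

class Foo(object):
    calls = 0

    @CachedProperty
    def answer(self):
        Foo.calls += 1
        return 42

f = Foo()
first, second = f.answer, f.answer
print(first, second, Foo.calls)  # → 42 42 1
```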
77 class environ_property(_DictAccessorProperty):
78
79 """Maps request attributes to environment variables. This works not only
80 for the Werkzeug request object, but also any other class with an
81 environ attribute:
82
83 >>> class Test(object):
84 ... environ = {'key': 'value'}
85 ... test = environ_property('key')
86 >>> var = Test()
87 >>> var.test
88 'value'
89
90 If you pass it a second value it's used as default if the key does not
91 exist, the third one can be a converter that takes a value and converts
92 it. If it raises :exc:`ValueError` or :exc:`TypeError` the default value
93 is used. If no default value is provided `None` is used.
94
95 By default the property is read-only. You have to explicitly enable
96 write access by passing ``read_only=False`` to the constructor.
97 """
98
99 read_only = True
100
101 def lookup(self, obj):
102 return obj.environ
103
104
105 class header_property(_DictAccessorProperty):
106
107 """Like `environ_property` but for headers."""
108
109 def lookup(self, obj):
110 return obj.headers
111
112
113 class HTMLBuilder(object):
114
115 """Helper object for HTML generation.
116
117 By default there are two instances of this class: the `html` one, and
118 the `xhtml` one, for those two dialects. The class uses keyword parameters
119 and positional parameters to generate small snippets of HTML.
120
121 Keyword parameters are converted to XML/SGML attributes, positional
122 arguments are used as children. Because Python accepts positional
123 arguments before keyword arguments it's a good idea to use a list with the
124 star-syntax for some children:
125
126 >>> html.p(class_='foo', *[html.a('foo', href='foo.html'), ' ',
127 ... html.a('bar', href='bar.html')])
128 u'<p class="foo"><a href="foo.html">foo</a> <a href="bar.html">bar</a></p>'
129
130 This class works around some browser limitations and cannot be used for
131 arbitrary SGML/XML generation. For that purpose lxml and similar
132 libraries exist.
133
134 Calling the builder escapes the string passed:
135
136 >>> html.p(html("<foo>"))
137 u'<p>&lt;foo&gt;</p>'
138 """
139
140 _entity_re = re.compile(r'&([^;]+);')
141 _entities = name2codepoint.copy()
142 _entities['apos'] = 39
143 _empty_elements = set([
144 'area', 'base', 'basefont', 'br', 'col', 'command', 'embed', 'frame',
145 'hr', 'img', 'input', 'keygen', 'isindex', 'link', 'meta', 'param',
146 'source', 'wbr'
147 ])
148 _boolean_attributes = set([
149 'selected', 'checked', 'compact', 'declare', 'defer', 'disabled',
150 'ismap', 'multiple', 'nohref', 'noresize', 'noshade', 'nowrap'
151 ])
152 _plaintext_elements = set(['textarea'])
153 _c_like_cdata = set(['script', 'style'])
154
155 def __init__(self, dialect):
156 self._dialect = dialect
157
158 def __call__(self, s):
159 return escape(s)
160
161 def __getattr__(self, tag):
162 if tag[:2] == '__':
163 raise AttributeError(tag)
164
165 def proxy(*children, **arguments):
166 buffer = '<' + tag
167 for key, value in iteritems(arguments):
168 if value is None:
169 continue
170 if key[-1] == '_':
171 key = key[:-1]
172 if key in self._boolean_attributes:
173 if not value:
174 continue
175 if self._dialect == 'xhtml':
176 value = '="' + key + '"'
177 else:
178 value = ''
179 else:
180 value = '="' + escape(value) + '"'
181 buffer += ' ' + key + value
182 if not children and tag in self._empty_elements:
183 if self._dialect == 'xhtml':
184 buffer += ' />'
185 else:
186 buffer += '>'
187 return buffer
188 buffer += '>'
189
190 children_as_string = ''.join([text_type(x) for x in children
191 if x is not None])
192
193 if children_as_string:
194 if tag in self._plaintext_elements:
195 children_as_string = escape(children_as_string)
196 elif tag in self._c_like_cdata and self._dialect == 'xhtml':
197 children_as_string = '/*<![CDATA[*/' + \
198 children_as_string + '/*]]>*/'
199 buffer += children_as_string + '</' + tag + '>'
200 return buffer
201 return proxy
202
203 def __repr__(self):
204 return '<%s for %r>' % (
205 self.__class__.__name__,
206 self._dialect
207 )
208
209
210 html = HTMLBuilder('html')
211 xhtml = HTMLBuilder('xhtml')
212
213
214 def get_content_type(mimetype, charset):
215 """Returns the full content type string with charset for a mimetype.
216
217 If the mimetype represents text, the charset will be appended as a charset
218 parameter, otherwise the mimetype is returned unchanged.
219
220 :param mimetype: the mimetype to be used as content type.
221 :param charset: the charset to be appended in case it was a text mimetype.
222 :return: the content type.
223 """
224 if mimetype.startswith('text/') or \
225 mimetype == 'application/xml' or \
226 (mimetype.startswith('application/') and
227 mimetype.endswith('+xml')):
228 mimetype += '; charset=' + charset
229 return mimetype
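The branching above can be sketched as a standalone helper. This is a minimal sketch mirroring the logic, not werkzeug's public API; the function name is illustrative:

```python
def content_type_with_charset(mimetype, charset):
    """Append a charset parameter for text-like mimetypes.

    Mirrors the rule above: text/*, application/xml and
    application/*+xml get a charset parameter, everything
    else is returned unchanged.
    """
    if (
        mimetype.startswith("text/")
        or mimetype == "application/xml"
        or (mimetype.startswith("application/") and mimetype.endswith("+xml"))
    ):
        return mimetype + "; charset=" + charset
    return mimetype
```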
230
231
232 def format_string(string, context):
233 """String-template format a string:
234
235 >>> format_string('$foo and ${foo}s', dict(foo=42))
236 '42 and 42s'
237
238 This does not do any attribute lookup etc. For more advanced string
239 formatting, have a look at the `werkzeug.template` module.
240
241 :param string: the format string.
242 :param context: a dict with the variables to insert.
243 """
244 def lookup_arg(match):
245 x = context[match.group(1) or match.group(2)]
246 if not isinstance(x, string_types):
247 x = type(string)(x)
248 return x
249 return _format_re.sub(lookup_arg, string)
250
251
252 def secure_filename(filename):
253 r"""Pass it a filename and it will return a secure version of it. This
254 filename can then safely be stored on a regular file system and passed
255 to :func:`os.path.join`. The filename returned is an ASCII only string
256 for maximum portability.
257
258 On Windows systems the function also makes sure that the file is not
259 named after one of the special device files.
260
261 >>> secure_filename("My cool movie.mov")
262 'My_cool_movie.mov'
263 >>> secure_filename("../../../etc/passwd")
264 'etc_passwd'
265 >>> secure_filename(u'i contain cool \xfcml\xe4uts.txt')
266 'i_contain_cool_umlauts.txt'
267
268 The function might return an empty filename. It's your responsibility
269 to ensure that the filename is unique and that you generate a random
270 filename if the function returned an empty one.
271
272 .. versionadded:: 0.5
273
274 :param filename: the filename to secure
275 """
276 if isinstance(filename, text_type):
277 from unicodedata import normalize
278 filename = normalize('NFKD', filename).encode('ascii', 'ignore')
279 if not PY2:
280 filename = filename.decode('ascii')
281 for sep in os.path.sep, os.path.altsep:
282 if sep:
283 filename = filename.replace(sep, ' ')
284 filename = str(_filename_ascii_strip_re.sub('', '_'.join(
285 filename.split()))).strip('._')
286
287 # on nt a couple of special files are present in each folder. We
288 # have to ensure that the target file is not such a filename. In
289 # this case we prepend an underscore
290 if os.name == 'nt' and filename and \
291 filename.split('.')[0].upper() in _windows_device_files:
292 filename = '_' + filename
293
294 return filename
295
296
297 def escape(s, quote=None):
298 """Replace special characters "&", "<", ">" and (") to HTML-safe sequences.
299
300 There is special handling for `None`, which escapes to an empty string.
301
302 .. versionchanged:: 0.9
303 `quote` is now implicitly on.
304
305 :param s: the string to escape.
306 :param quote: ignored.
307 """
308 if s is None:
309 return ''
310 elif hasattr(s, '__html__'):
311 return text_type(s.__html__())
312 elif not isinstance(s, string_types):
313 s = text_type(s)
314 if quote is not None:
315 from warnings import warn
316 warn(DeprecationWarning('quote parameter is implicit now'), stacklevel=2)
317 s = s.replace('&', '&amp;').replace('<', '&lt;') \
318 .replace('>', '&gt;').replace('"', "&quot;")
319 return s
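The escaping logic above can be condensed into a Python 3 sketch. The name `escape_sketch` is illustrative, not part of werkzeug:

```python
def escape_sketch(s):
    """Minimal Python 3 sketch of the escape() logic above.

    None becomes an empty string, objects with __html__ are trusted
    to escape themselves, everything else is stringified and has
    &, <, > and " replaced (ampersand first, to avoid double-escaping).
    """
    if s is None:
        return ""
    if hasattr(s, "__html__"):
        return str(s.__html__())
    if not isinstance(s, str):
        s = str(s)
    return (
        s.replace("&", "&amp;")
        .replace("<", "&lt;")
        .replace(">", "&gt;")
        .replace('"', "&quot;")
    )
```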
320
321
322 def unescape(s):
323 """The reverse function of `escape`. This unescapes all the HTML
324 entities, not only the XML entities inserted by `escape`.
325
326 :param s: the string to unescape.
327 """
328 def handle_match(m):
329 name = m.group(1)
330 if name in HTMLBuilder._entities:
331 return unichr(HTMLBuilder._entities[name])
332 try:
333 if name[:2] in ('#x', '#X'):
334 return unichr(int(name[2:], 16))
335 elif name.startswith('#'):
336 return unichr(int(name[1:]))
337 except ValueError:
338 pass
339 return u''
340 return _entity_re.sub(handle_match, s)
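The numeric-entity half of the handler above can be shown in isolation. This sketch handles only `&#NN;` and `&#xNN;` forms; named entities would need the `HTMLBuilder._entities` table. The regex and function name here are illustrative assumptions, not werkzeug's internal `_entity_re`:

```python
import re

# numeric character references only: &#65; and &#x41; / &#X41;
_num_entity_re = re.compile(r"&(#[xX]?[0-9a-fA-F]+);")

def unescape_numeric(s):
    """Resolve numeric HTML character references, dropping invalid ones."""
    def handle(m):
        name = m.group(1)
        try:
            if name[:2] in ("#x", "#X"):
                return chr(int(name[2:], 16))
            return chr(int(name[1:]))
        except ValueError:
            return ""
    return _num_entity_re.sub(handle, s)
```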
341
342
343 def redirect(location, code=302, Response=None):
344 """Returns a response object (a WSGI application) that, if called,
345 redirects the client to the target location. Supported codes are 301,
346 302, 303, 305, and 307. 300 is not supported because it's not a real
347 redirect and 304 because it's the answer for a request with defined
348 If-Modified-Since headers.
349
350 .. versionadded:: 0.6
351 The location can now be a unicode string that is encoded using
352 the :func:`iri_to_uri` function.
353
354 .. versionadded:: 0.10
355 The class used for the Response object can now be passed in.
356
357 :param location: the location the response should redirect to.
358 :param code: the redirect status code. defaults to 302.
359 :param class Response: a Response class to use when instantiating a
360 response. The default is :class:`werkzeug.wrappers.Response` if
361 unspecified.
362 """
363 if Response is None:
364 from werkzeug.wrappers import Response
365
366 display_location = escape(location)
367 if isinstance(location, text_type):
368 # Safe conversion is necessary here as we might redirect
369 # to a broken URI scheme (for instance itms-services).
370 from werkzeug.urls import iri_to_uri
371 location = iri_to_uri(location, safe_conversion=True)
372 response = Response(
373 '<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">\n'
374 '<title>Redirecting...</title>\n'
375 '<h1>Redirecting...</h1>\n'
376 '<p>You should be redirected automatically to target URL: '
377 '<a href="%s">%s</a>. If not click the link.' %
378 (escape(location), display_location), code, mimetype='text/html')
379 response.headers['Location'] = location
380 return response
381
382
383 def append_slash_redirect(environ, code=301):
384 """Redirects to the same URL but with a slash appended. The behavior
385 of this function is undefined if the path ends with a slash already.
386
387 :param environ: the WSGI environment for the request that triggers
388 the redirect.
389 :param code: the status code for the redirect.
390 """
391 new_path = environ['PATH_INFO'].strip('/') + '/'
392 query_string = environ.get('QUERY_STRING')
393 if query_string:
394 new_path += '?' + query_string
395 return redirect(new_path, code)
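The target-path construction above can be isolated into a small helper. A minimal sketch assuming a WSGI-style dict with `PATH_INFO` and `QUERY_STRING`; the function name is illustrative. Note that `strip('/')` also drops the leading slash, so the result is a relative location:

```python
def slash_redirect_location(environ):
    """Build the relative target for a trailing-slash redirect."""
    new_path = environ["PATH_INFO"].strip("/") + "/"
    query_string = environ.get("QUERY_STRING")
    if query_string:
        new_path += "?" + query_string
    return new_path
```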
396
397
398 def import_string(import_name, silent=False):
399 """Imports an object based on a string. This is useful if you want to
400 use import paths as endpoints or something similar. An import path can
401 be specified either in dotted notation (``xml.sax.saxutils.escape``)
402 or with a colon as object delimiter (``xml.sax.saxutils:escape``).
403
404 If `silent` is True the return value will be `None` if the import fails.
405
406 :param import_name: the dotted name for the object to import.
407 :param silent: if set to `True` import errors are ignored and
408 `None` is returned instead.
409 :return: imported object
410 """
411 # force the import name to automatically convert to strings
412 # __import__ is not able to handle unicode strings in the fromlist
413 # if the module is a package
414 import_name = str(import_name).replace(':', '.')
415 try:
416 try:
417 __import__(import_name)
418 except ImportError:
419 if '.' not in import_name:
420 raise
421 else:
422 return sys.modules[import_name]
423
424 module_name, obj_name = import_name.rsplit('.', 1)
425 try:
426 module = __import__(module_name, None, None, [obj_name])
427 except ImportError:
428 # support importing modules not yet set up by the parent module
429 # (or package for that matter)
430 module = import_string(module_name)
431
432 try:
433 return getattr(module, obj_name)
434 except AttributeError as e:
435 raise ImportError(e)
436
437 except ImportError as e:
438 if not silent:
439 reraise(
440 ImportStringError,
441 ImportStringError(import_name, e),
442 sys.exc_info()[2])
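The dotted/colon import strategy above can be approximated with `importlib`. This is a modern-stdlib sketch of the idea, not werkzeug's implementation (which uses `__import__` directly and has richer error handling); the function name is illustrative:

```python
import importlib

def import_string_sketch(import_name):
    """Import a module or object from "pkg.mod.obj" or "pkg.mod:obj"."""
    import_name = import_name.replace(":", ".")
    try:
        # first try the whole path as a module
        return importlib.import_module(import_name)
    except ImportError:
        # otherwise split off the last component and fetch it
        # as an attribute of the parent module
        module_name, _, obj_name = import_name.rpartition(".")
        if not module_name:
            raise
        module = importlib.import_module(module_name)
        return getattr(module, obj_name)
```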
443
444
445 def find_modules(import_path, include_packages=False, recursive=False):
446 """Finds all the modules below a package. This can be useful to
447 automatically import all views / controllers so that their metaclasses /
448 function decorators have a chance to register themselves on the
449 application.
450
451 Packages are not returned unless `include_packages` is `True`. This can
452 also recursively list modules but in that case it will import all the
453 packages to get the correct load path of that module.
454
455 :param import_path: the dotted name for the package to find child modules.
456 :param include_packages: set to `True` if packages should be returned, too.
457 :param recursive: set to `True` if recursion should happen.
458 :return: generator
459 """
460 module = import_string(import_path)
461 path = getattr(module, '__path__', None)
462 if path is None:
463 raise ValueError('%r is not a package' % import_path)
464 basename = module.__name__ + '.'
465 for importer, modname, ispkg in pkgutil.iter_modules(path):
466 modname = basename + modname
467 if ispkg:
468 if include_packages:
469 yield modname
470 if recursive:
471 for item in find_modules(modname, include_packages, True):
472 yield item
473 else:
474 yield modname
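The module discovery above rests on `pkgutil.iter_modules`. A non-recursive sketch over an already-imported package object (the name `find_modules_sketch` is illustrative):

```python
import pkgutil

def find_modules_sketch(package, include_packages=False):
    """Yield dotted names of the modules directly below a package."""
    base = package.__name__ + "."
    for _importer, name, ispkg in pkgutil.iter_modules(package.__path__):
        if not ispkg or include_packages:
            yield base + name
```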
475
476
477 def validate_arguments(func, args, kwargs, drop_extra=True):
478 """Checks if the function accepts the arguments and keyword arguments.
479 Returns a new ``(args, kwargs)`` tuple that can safely be passed to
480 the function without causing a `TypeError` because the function signature
481 is incompatible. If `drop_extra` is set to `True` (which is the default)
482 any extra positional or keyword arguments are dropped automatically.
483
484 The exception raised provides three attributes:
485
486 `missing`
487 A set of argument names that the function expected but were
488 missing.
489
490 `extra`
491 A dict of keyword arguments that the function cannot handle but
492 were provided.
493
494 `extra_positional`
495 A list of values that were given as positional arguments but the
496 function cannot accept.
497
498 This can be useful for decorators that forward user submitted data to
499 a view function::
500
501 from werkzeug.utils import ArgumentValidationError, validate_arguments
502
503 def sanitize(f):
504 def proxy(request):
505 data = request.values.to_dict()
506 try:
507 args, kwargs = validate_arguments(f, (request,), data)
508 except ArgumentValidationError:
509 raise BadRequest('The browser failed to transmit all '
510 'the data expected.')
511 return f(*args, **kwargs)
512 return proxy
513
514 :param func: the function the validation is performed against.
515 :param args: a tuple of positional arguments.
516 :param kwargs: a dict of keyword arguments.
517 :param drop_extra: set to `False` if you don't want extra arguments
518 to be silently dropped.
519 :return: tuple in the form ``(args, kwargs)``.
520 """
521 parser = _parse_signature(func)
522 args, kwargs, missing, extra, extra_positional = parser(args, kwargs)[:5]
523 if missing:
524 raise ArgumentValidationError(tuple(missing))
525 elif (extra or extra_positional) and not drop_extra:
526 raise ArgumentValidationError(None, extra, extra_positional)
527 return tuple(args), kwargs
528
529
530 def bind_arguments(func, args, kwargs):
531 """Bind the arguments provided into a dict. When passed a function,
532 a tuple of arguments and a dict of keyword arguments `bind_arguments`
533 returns a dict of names as the function would see it. This can be useful
534 to implement a cache decorator that uses the function arguments to build
535 the cache key based on the values of the arguments.
536
537 :param func: the function the arguments should be bound for.
538 :param args: tuple of positional arguments.
539 :param kwargs: a dict of keyword arguments.
540 :return: a :class:`dict` of bound keyword arguments.
541 """
542 args, kwargs, missing, extra, extra_positional, \
543 arg_spec, vararg_var, kwarg_var = _parse_signature(func)(args, kwargs)
544 values = {}
545 for (name, has_default, default), value in zip(arg_spec, args):
546 values[name] = value
547 if vararg_var is not None:
548 values[vararg_var] = tuple(extra_positional)
549 elif extra_positional:
550 raise TypeError('too many positional arguments')
551 if kwarg_var is not None:
552 multikw = set(extra) & set([x[0] for x in arg_spec])
553 if multikw:
554 raise TypeError('got multiple values for keyword argument ' +
555 repr(next(iter(multikw))))
556 values[kwarg_var] = extra
557 elif extra:
558 raise TypeError('got unexpected keyword argument ' +
559 repr(next(iter(extra))))
560 return values
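The binding above can be reproduced with the modern stdlib. This is an `inspect.signature`-based equivalent, not what werkzeug uses internally (its `_parse_signature` parser predates that API); the function name is illustrative:

```python
import inspect

def bind_arguments_sketch(func, args, kwargs):
    """Map positional and keyword arguments to parameter names."""
    bound = inspect.signature(func).bind(*args, **kwargs)
    # fill in unsupplied defaults, matching bind_arguments' behavior
    bound.apply_defaults()
    return dict(bound.arguments)
```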
561
562
563 class ArgumentValidationError(ValueError):
564
565 """Raised if :func:`validate_arguments` fails to validate"""
566
567 def __init__(self, missing=None, extra=None, extra_positional=None):
568 self.missing = set(missing or ())
569 self.extra = extra or {}
570 self.extra_positional = extra_positional or []
571 ValueError.__init__(self, 'function arguments invalid. ('
572 '%d missing, %d additional)' % (
573 len(self.missing),
574 len(self.extra) + len(self.extra_positional)
575 ))
576
577
578 class ImportStringError(ImportError):
579
580 """Provides information about a failed :func:`import_string` attempt."""
581
582 #: String in dotted notation that failed to be imported.
583 import_name = None
584 #: Wrapped exception.
585 exception = None
586
587 def __init__(self, import_name, exception):
588 self.import_name = import_name
589 self.exception = exception
590
591 msg = (
592 'import_string() failed for %r. Possible reasons are:\n\n'
593 '- missing __init__.py in a package;\n'
594 '- package or module path not included in sys.path;\n'
595 '- duplicated package or module name taking precedence in '
596 'sys.path;\n'
597 '- missing module, class, function or variable;\n\n'
598 'Debugged import:\n\n%s\n\n'
599 'Original exception:\n\n%s: %s')
600
601 name = ''
602 tracked = []
603 for part in import_name.replace(':', '.').split('.'):
604 name += (name and '.') + part
605 imported = import_string(name, silent=True)
606 if imported:
607 tracked.append((name, getattr(imported, '__file__', None)))
608 else:
609 track = ['- %r found in %r.' % (n, i) for n, i in tracked]
610 track.append('- %r not found.' % name)
611 msg = msg % (import_name, '\n'.join(track),
612 exception.__class__.__name__, str(exception))
613 break
614
615 ImportError.__init__(self, msg)
616
617 def __repr__(self):
618 return '<%s(%r, %r)>' % (self.__class__.__name__, self.import_name,
619 self.exception)
620
621
622 # DEPRECATED
623 # these objects were previously in this module as well. we import
624 # them here for backwards compatibility with old pickles.
625 from werkzeug.datastructures import ( # noqa
626 MultiDict, CombinedMultiDict, Headers, EnvironHeaders)
627 from werkzeug.http import parse_cookie, dump_cookie # noqa
+0
-2028
werkzeug/wrappers.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.wrappers
3 ~~~~~~~~~~~~~~~~~
4
5 The wrappers are simple request and response objects which you can
6 subclass to do whatever you want them to do. The request object contains
7 the information transmitted by the client (web browser) and the response
8 object contains all the information sent back to the browser.
9
10 An important detail is that the request object is created with the WSGI
11 environ and will act as a high-level proxy, whereas the response object is an
12 actual WSGI application.
13
14 Like everything else in Werkzeug these objects will work correctly with
15 unicode data. Incoming form data parsed by the request object will be
16 decoded into a unicode object if possible and if it makes sense.
17
18
19 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
20 :license: BSD, see LICENSE for more details.
21 """
22 from functools import update_wrapper
23 from datetime import datetime, timedelta
24 from warnings import warn
25
26 from werkzeug.http import HTTP_STATUS_CODES, \
27 parse_accept_header, parse_cache_control_header, parse_etags, \
28 parse_date, generate_etag, is_resource_modified, unquote_etag, \
29 quote_etag, parse_set_header, parse_authorization_header, \
30 parse_www_authenticate_header, remove_entity_headers, \
31 parse_options_header, dump_options_header, http_date, \
32 parse_if_range_header, parse_cookie, dump_cookie, \
33 parse_range_header, parse_content_range_header, dump_header, \
34 parse_age, dump_age
35 from werkzeug.urls import url_decode, iri_to_uri, url_join
36 from werkzeug.formparser import FormDataParser, default_stream_factory
37 from werkzeug.utils import cached_property, environ_property, \
38 header_property, get_content_type
39 from werkzeug.wsgi import get_current_url, get_host, \
40 ClosingIterator, get_input_stream, get_content_length, _RangeWrapper
41 from werkzeug.datastructures import MultiDict, CombinedMultiDict, Headers, \
42 EnvironHeaders, ImmutableMultiDict, ImmutableTypeConversionDict, \
43 ImmutableList, MIMEAccept, CharsetAccept, LanguageAccept, \
44 ResponseCacheControl, RequestCacheControl, CallbackDict, \
45 ContentRange, iter_multi_items
46 from werkzeug._internal import _get_environ
47 from werkzeug._compat import to_bytes, string_types, text_type, \
48 integer_types, wsgi_decoding_dance, wsgi_get_bytes, \
49 to_unicode, to_native, BytesIO
50
51
52 def _run_wsgi_app(*args):
53 """This function replaces itself to ensure that the test module is not
54 imported unless required. DO NOT USE!
55 """
56 global _run_wsgi_app
57 from werkzeug.test import run_wsgi_app as _run_wsgi_app
58 return _run_wsgi_app(*args)
59
60
61 def _warn_if_string(iterable):
62 """Helper for the response objects to check if the iterable returned
63 to the WSGI server is not a string.
64 """
65 if isinstance(iterable, string_types):
66 warn(Warning('response iterable was set to a string. This appears '
67 'to work but means that the server will send the '
68 'data to the client char by char. This is almost '
69 'never intended behavior, use response.data to assign '
70 'strings to the response object.'), stacklevel=2)
71
72
73 def _assert_not_shallow(request):
74 if request.shallow:
75 raise RuntimeError('A shallow request tried to consume '
76 'form data. If you really want to do '
77 'that, set `shallow` to False.')
78
79
80 def _iter_encoded(iterable, charset):
81 for item in iterable:
82 if isinstance(item, text_type):
83 yield item.encode(charset)
84 else:
85 yield item
86
87
88 def _clean_accept_ranges(accept_ranges):
89 if accept_ranges is True:
90 return "bytes"
91 elif accept_ranges is False:
92 return "none"
93 elif isinstance(accept_ranges, text_type):
94 return to_native(accept_ranges)
95 raise ValueError("Invalid accept_ranges value")
96
97
98 class BaseRequest(object):
99
100 """Very basic request object. This does not implement advanced stuff like
101 entity tag parsing or cache controls. The request object is created with
102 the WSGI environment as first argument and will add itself to the WSGI
103 environment as ``'werkzeug.request'`` unless it's created with
104 `populate_request` set to False.
105
106 There are a couple of mixins available that add additional functionality
107 to the request object; there is also a class called `Request` which
108 subclasses `BaseRequest` and all the important mixins.
109
110 It's a good idea to create a custom subclass of the :class:`BaseRequest`
111 and add missing functionality either via mixins or direct implementation.
112 Here is an example of such a subclass::
113
114 from werkzeug.wrappers import BaseRequest, ETagRequestMixin
115
116 class Request(BaseRequest, ETagRequestMixin):
117 pass
118
119 Request objects are **read only**. As of 0.5 modifications are not
120 allowed in any place. Unlike the lower level parsing functions the
121 request object will use immutable objects everywhere possible.
122
123 By default the request object will assume all the text data is `utf-8`
124 encoded. Please refer to `the unicode chapter <unicode.txt>`_ for more
125 details about customizing the behavior.
126
127 By default the request object will be added to the WSGI
128 environment as `werkzeug.request` to support the debugging system.
129 If you don't want that, set `populate_request` to `False`.
130
131 If `shallow` is `True` the environment is initialized as a shallow
132 object around the environ. Every operation that would modify the
133 environ in any way (such as consuming form data) raises an exception
134 unless the `shallow` attribute is explicitly set to `False`. This
135 is useful for middlewares where you don't want to consume the form
136 data by accident. A shallow request is not populated to the WSGI
137 environment.
138
139 .. versionchanged:: 0.5
140 read-only mode was enforced by using immutable classes for all
141 data.
142 """
143
144 #: the charset for the request, defaults to utf-8
145 charset = 'utf-8'
146
147 #: the error handling procedure for errors, defaults to 'replace'
148 encoding_errors = 'replace'
149
150 #: the maximum content length. This is forwarded to the form data
151 #: parsing function (:func:`parse_form_data`). When set and the
152 #: :attr:`form` or :attr:`files` attribute is accessed and the
153 #: parsing fails because more than the specified value is transmitted
154 #: a :exc:`~werkzeug.exceptions.RequestEntityTooLarge` exception is raised.
155 #:
156 #: Have a look at :ref:`dealing-with-request-data` for more details.
157 #:
158 #: .. versionadded:: 0.5
159 max_content_length = None
160
161 #: the maximum form field size. This is forwarded to the form data
162 #: parsing function (:func:`parse_form_data`). When set and the
163 #: :attr:`form` or :attr:`files` attribute is accessed and the
164 #: data in memory for post data is longer than the specified value a
165 #: :exc:`~werkzeug.exceptions.RequestEntityTooLarge` exception is raised.
166 #:
167 #: Have a look at :ref:`dealing-with-request-data` for more details.
168 #:
169 #: .. versionadded:: 0.5
170 max_form_memory_size = None
171
172 #: the class to use for `args` and `form`. The default is an
173 #: :class:`~werkzeug.datastructures.ImmutableMultiDict` which supports
174 #: multiple values per key. Alternatively it makes sense to use an
175 #: :class:`~werkzeug.datastructures.ImmutableOrderedMultiDict` which
176 #: preserves order or a :class:`~werkzeug.datastructures.ImmutableDict`
177 #: which is the fastest but only remembers the last key. It is also
178 #: possible to use mutable structures, but this is not recommended.
179 #:
180 #: .. versionadded:: 0.6
181 parameter_storage_class = ImmutableMultiDict
182
183 #: the type to be used for list values from the incoming WSGI environment.
184 #: By default an :class:`~werkzeug.datastructures.ImmutableList` is used
185 #: (for example for :attr:`access_list`).
186 #:
187 #: .. versionadded:: 0.6
188 list_storage_class = ImmutableList
189
190 #: the type to be used for dict values from the incoming WSGI environment.
191 #: By default an
192 #: :class:`~werkzeug.datastructures.ImmutableTypeConversionDict` is used
193 #: (for example for :attr:`cookies`).
194 #:
195 #: .. versionadded:: 0.6
196 dict_storage_class = ImmutableTypeConversionDict
197
198 #: The form data parser that should be used. Can be replaced to customize
199 #: the form data parsing.
200 form_data_parser_class = FormDataParser
201
202 #: Optionally a list of hosts that is trusted by this request. By default
203 #: all hosts are trusted, which means that whatever host the client
204 #: sends will be accepted.
205 #:
206 #: This is the recommended setup as a webserver should manually be set up
207 #: to only route correct hosts to the application, and remove the
208 #: `X-Forwarded-Host` header if it is not being used (see
209 #: :func:`werkzeug.wsgi.get_host`).
210 #:
211 #: .. versionadded:: 0.9
212 trusted_hosts = None
213
214 #: Indicates whether the data descriptor should be allowed to read and
215 #: buffer up the input stream. By default it's enabled.
216 #:
217 #: .. versionadded:: 0.9
218 disable_data_descriptor = False
219
220 def __init__(self, environ, populate_request=True, shallow=False):
221 self.environ = environ
222 if populate_request and not shallow:
223 self.environ['werkzeug.request'] = self
224 self.shallow = shallow
225
226 def __repr__(self):
227 # make sure the __repr__ even works if the request was created
228 # from an invalid WSGI environment. If we display the request
229 # in a debug session we don't want the repr to blow up.
230 args = []
231 try:
232 args.append("'%s'" % to_native(self.url, self.url_charset))
233 args.append('[%s]' % self.method)
234 except Exception:
235 args.append('(invalid WSGI environ)')
236
237 return '<%s %s>' % (
238 self.__class__.__name__,
239 ' '.join(args)
240 )
241
242 @property
243 def url_charset(self):
244 """The charset that is assumed for URLs. Defaults to the value
245 of :attr:`charset`.
246
247 .. versionadded:: 0.6
248 """
249 return self.charset
250
251 @classmethod
252 def from_values(cls, *args, **kwargs):
253 """Create a new request object based on the values provided. If
254 environ is given missing values are filled from there. This method is
255 useful for small scripts when you need to simulate a request from a URL.
256 Do not use this method for unit testing; there is a full-featured client
257 object (:class:`Client`) that allows you to create multipart requests
258 and supports cookies.
259
260 This accepts the same options as the
261 :class:`~werkzeug.test.EnvironBuilder`.
262
263 .. versionchanged:: 0.5
264 This method now accepts the same arguments as
265 :class:`~werkzeug.test.EnvironBuilder`. Because of this the
266 `environ` parameter is now called `environ_overrides`.
267
268 :return: request object
269 """
270 from werkzeug.test import EnvironBuilder
271 charset = kwargs.pop('charset', cls.charset)
272 kwargs['charset'] = charset
273 builder = EnvironBuilder(*args, **kwargs)
274 try:
275 return builder.get_request(cls)
276 finally:
277 builder.close()
278
279 @classmethod
280 def application(cls, f):
281 """Decorate a function as responder that accepts the request as first
282 argument. This works like the :func:`responder` decorator but the
283 function is passed the request object as first argument and the
284 request object will be closed automatically::
285
286 @Request.application
287 def my_wsgi_app(request):
288 return Response('Hello World!')
289
290 As of Werkzeug 0.14 HTTP exceptions are automatically caught and
291 converted to responses instead of failing.
292
293 :param f: the WSGI callable to decorate
294 :return: a new WSGI callable
295 """
296 #: return a callable that wraps the -2nd argument with the request
297 #: and calls the function with all the arguments up to that one and
298 #: the request. The return value is then called with the latest
299 #: two arguments. This makes it possible to use this decorator for
300 #: both methods and standalone WSGI functions.
301 from werkzeug.exceptions import HTTPException
302
303 def application(*args):
304 request = cls(args[-2])
305 with request:
306 try:
307 resp = f(*args[:-2] + (request,))
308 except HTTPException as e:
309 resp = e.get_response(args[-2])
310 return resp(*args[-2:])
311
312 return update_wrapper(application, f)
313
314 def _get_file_stream(self, total_content_length, content_type, filename=None,
315 content_length=None):
316 """Called to get a stream for the file upload.
317
318 This must provide a file-like class with `read()`, `readline()`
319 and `seek()` methods that is both writeable and readable.
320
321 The default implementation returns a temporary file if the total
322 content length is higher than 500KB. Because many browsers do not
323 provide a content length for the files, only the total content
324 length matters.
325
326 :param total_content_length: the total content length of all the
327 data in the request combined. This value
328 is guaranteed to be there.
329 :param content_type: the mimetype of the uploaded file.
330 :param filename: the filename of the uploaded file. May be `None`.
331 :param content_length: the length of this file. This value is usually
332 not provided because web browsers do not provide
333 this value.
334 """
335 return default_stream_factory(
336 total_content_length=total_content_length,
337 content_type=content_type,
338 filename=filename,
339 content_length=content_length)
340
341 @property
342 def want_form_data_parsed(self):
343 """Returns True if the request method carries content. As of
344 Werkzeug 0.9 this will be the case if a content type is transmitted.
345
346 .. versionadded:: 0.8
347 """
348 return bool(self.environ.get('CONTENT_TYPE'))
349
350 def make_form_data_parser(self):
351 """Creates the form data parser. Instantiates the
352 :attr:`form_data_parser_class` with some parameters.
353
354 .. versionadded:: 0.8
355 """
356 return self.form_data_parser_class(self._get_file_stream,
357 self.charset,
358 self.encoding_errors,
359 self.max_form_memory_size,
360 self.max_content_length,
361 self.parameter_storage_class)
362
363 def _load_form_data(self):
364 """Method used internally to retrieve submitted data. After calling
365 this sets `form` and `files` on the request object to multi dicts
366 filled with the incoming form data. As a matter of fact the input
367 stream will be empty afterwards. You can also call this method to
368 force the parsing of the form data.
369
370 .. versionadded:: 0.8
371 """
372 # abort early if we have already consumed the stream
373 if 'form' in self.__dict__:
374 return
375
376 _assert_not_shallow(self)
377
378 if self.want_form_data_parsed:
379 content_type = self.environ.get('CONTENT_TYPE', '')
380 content_length = get_content_length(self.environ)
381 mimetype, options = parse_options_header(content_type)
382 parser = self.make_form_data_parser()
383 data = parser.parse(self._get_stream_for_parsing(),
384 mimetype, content_length, options)
385 else:
386 data = (self.stream, self.parameter_storage_class(),
387 self.parameter_storage_class())
388
389 # inject the values into the instance dict so that we bypass
390 # our cached_property non-data descriptor.
391 d = self.__dict__
392 d['stream'], d['form'], d['files'] = data
393
394 def _get_stream_for_parsing(self):
395 """This is the same as accessing :attr:`stream` with the difference
396 that if it finds cached data from calling :meth:`get_data` first it
397 will create a new stream out of the cached data.
398
399 .. versionadded:: 0.9.3
400 """
401 cached_data = getattr(self, '_cached_data', None)
402 if cached_data is not None:
403 return BytesIO(cached_data)
404 return self.stream
405
406 def close(self):
407 """Closes associated resources of this request object. This
408 closes all file handles explicitly. You can also use the request
409 object in a with statement which will automatically close it.
410
411 .. versionadded:: 0.9
412 """
413 files = self.__dict__.get('files')
414 for key, value in iter_multi_items(files or ()):
415 value.close()
416
417 def __enter__(self):
418 return self
419
420 def __exit__(self, exc_type, exc_value, tb):
421 self.close()
422
423 @cached_property
424 def stream(self):
425 """
426 If the incoming form data was not encoded with a known mimetype
427 the data is stored unmodified in this stream for consumption. Most
428 of the time it is a better idea to use :attr:`data` which will give
429 you that data as a string. The stream only returns the data once.
430
431 Unlike :attr:`input_stream` this stream is properly guarded so that you
432 can't accidentally read past the length of the input. Werkzeug will
433 internally always refer to this stream to read data which makes it
434 possible to wrap this object with a stream that does filtering.
435
436 .. versionchanged:: 0.9
437 This stream is now always available but might be consumed by the
438 form parser later on. Previously the stream was only set if no
439 parsing happened.
440 """
441 _assert_not_shallow(self)
442 return get_input_stream(self.environ)
443
444 input_stream = environ_property('wsgi.input', """
445 The WSGI input stream.
446
447 In general it's a bad idea to use this one because you can easily read past
448 the boundary. Use the :attr:`stream` instead.
449 """)
450
451 @cached_property
452 def args(self):
453 """The parsed URL parameters (the part in the URL after the question
454 mark).
455
456 By default an
457 :class:`~werkzeug.datastructures.ImmutableMultiDict`
458 is returned from this function. This can be changed by setting
459 :attr:`parameter_storage_class` to a different type. This might
460 be necessary if the order of the form data is important.
461 """
462 return url_decode(wsgi_get_bytes(self.environ.get('QUERY_STRING', '')),
463 self.url_charset, errors=self.encoding_errors,
464 cls=self.parameter_storage_class)
465
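The query-string parsing sketched in the `args` property above can be approximated with the standard library. This is a simplified sketch only: `parse_args` is a hypothetical helper, and werkzeug's `url_decode` additionally honors the configured charset, error handling and `parameter_storage_class`.

```python
from urllib.parse import parse_qsl

def parse_args(query_string):
    # Repeated keys are preserved as separate pairs, similar to a MultiDict.
    # keep_blank_values=True keeps keys whose value is an empty string.
    return parse_qsl(query_string, keep_blank_values=True)

pairs = parse_args("a=1&a=2&b=")
```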
466 @cached_property
467 def data(self):
468 """
469 Contains the incoming request data as a string in case it came with
470 a mimetype Werkzeug does not handle.
471 """
472
473 if self.disable_data_descriptor:
474 raise AttributeError('data descriptor is disabled')
475 # XXX: this should eventually be deprecated.
476
477 # We trigger form data parsing first which means that the descriptor
478 # will not cache the data that would otherwise be .form or .files
479 # data. This restores the behavior that was there in Werkzeug
480 # before 0.9. New code should use :meth:`get_data` explicitly as
481 # this will make behavior explicit.
482 return self.get_data(parse_form_data=True)
483
484 def get_data(self, cache=True, as_text=False, parse_form_data=False):
485 """This reads the buffered incoming data from the client into one
486 bytestring. By default this is cached but that behavior can be
487 changed by setting `cache` to `False`.
488
489 Usually it's a bad idea to call this method without checking the
490 content length first as a client could send dozens of megabytes or more
491 to cause memory problems on the server.
492
493 Note that if the form data was already parsed this method will not
494 return anything as form data parsing does not cache the data like
495 this method does. To implicitly invoke the form data parsing function,
496 set `parse_form_data` to `True`. When this is done the return value
497 of this method will be an empty string if the form parser handles
498 the data. This is generally not necessary because if the whole data
499 is cached (which is the default) the form parser will use the cached
500 data to parse the form data. In any case, be aware that you should
501 check the content length first before calling this method
502 to avoid exhausting server memory.
503
504 If `as_text` is set to `True` the return value will be a decoded
505 unicode string.
506
507 .. versionadded:: 0.9
508 """
509 rv = getattr(self, '_cached_data', None)
510 if rv is None:
511 if parse_form_data:
512 self._load_form_data()
513 rv = self.stream.read()
514 if cache:
515 self._cached_data = rv
516 if as_text:
517 rv = rv.decode(self.charset, self.encoding_errors)
518 return rv
519
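The caching strategy used by `get_data` above can be modeled with plain stdlib code. `DataCache` is a deliberately minimal stand-in (not werkzeug's class); only the `_cached_data` attribute and `stream` name mirror the implementation above.

```python
from io import BytesIO

class DataCache(object):
    """Minimal model of the get_data caching strategy."""

    def __init__(self, stream):
        self.stream = stream
        self._cached_data = None

    def get_data(self, cache=True, as_text=False):
        rv = self._cached_data
        if rv is None:
            rv = self.stream.read()  # the stream only yields data once
            if cache:
                self._cached_data = rv
        if as_text:
            rv = rv.decode("utf-8")
        return rv

req = DataCache(BytesIO(b"payload"))
first = req.get_data()
second = req.get_data()  # served from the cache, not the exhausted stream
```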
520 @cached_property
521 def form(self):
522 """The form parameters. By default an
523 :class:`~werkzeug.datastructures.ImmutableMultiDict`
524 is returned from this function. This can be changed by setting
525 :attr:`parameter_storage_class` to a different type. This might
526 be necessary if the order of the form data is important.
527
528 Please keep in mind that file uploads will not end up here, but instead
529 in the :attr:`files` attribute.
530
531 .. versionchanged:: 0.9
532
533 Previous to Werkzeug 0.9 this would only contain form data for POST
534 and PUT requests.
535 """
536 self._load_form_data()
537 return self.form
538
539 @cached_property
540 def values(self):
541 """A :class:`werkzeug.datastructures.CombinedMultiDict` that combines
542 :attr:`args` and :attr:`form`."""
543 args = []
544 for d in self.args, self.form:
545 if not isinstance(d, MultiDict):
546 d = MultiDict(d)
547 args.append(d)
548 return CombinedMultiDict(args)
549
550 @cached_property
551 def files(self):
552 """:class:`~werkzeug.datastructures.MultiDict` object containing
553 all uploaded files. Each key in :attr:`files` is the name from the
554 ``<input type="file" name="">``. Each value in :attr:`files` is a
555 Werkzeug :class:`~werkzeug.datastructures.FileStorage` object.
556
557 It basically behaves like a standard file object you know from Python,
558 with the difference that it also has a
559 :meth:`~werkzeug.datastructures.FileStorage.save` function that can
560 store the file on the filesystem.
561
562 Note that :attr:`files` will only contain data if the request method was
563 POST, PUT or PATCH and the ``<form>`` that posted to the request had
564 ``enctype="multipart/form-data"``. It will be empty otherwise.
565
566 See the :class:`~werkzeug.datastructures.MultiDict` /
567 :class:`~werkzeug.datastructures.FileStorage` documentation for
568 more details about the used data structure.
569 """
570 self._load_form_data()
571 return self.files
572
573 @cached_property
574 def cookies(self):
575 """A :class:`dict` with the contents of all cookies transmitted with
576 the request."""
577 return parse_cookie(self.environ, self.charset,
578 self.encoding_errors,
579 cls=self.dict_storage_class)
580
581 @cached_property
582 def headers(self):
583 """The headers from the WSGI environ as immutable
584 :class:`~werkzeug.datastructures.EnvironHeaders`.
585 """
586 return EnvironHeaders(self.environ)
587
588 @cached_property
589 def path(self):
590 """Requested path as unicode. This works a bit like the regular path
591 info in the WSGI environment but will always include a leading slash,
592 even if the URL root is accessed.
593 """
594 raw_path = wsgi_decoding_dance(self.environ.get('PATH_INFO') or '',
595 self.charset, self.encoding_errors)
596 return '/' + raw_path.lstrip('/')
597
598 @cached_property
599 def full_path(self):
600 """Requested path as unicode, including the query string."""
601 return self.path + u'?' + to_unicode(self.query_string, self.url_charset)
602
603 @cached_property
604 def script_root(self):
605 """The root path of the script without the trailing slash."""
606 raw_path = wsgi_decoding_dance(self.environ.get('SCRIPT_NAME') or '',
607 self.charset, self.encoding_errors)
608 return raw_path.rstrip('/')
609
610 @cached_property
611 def url(self):
612 """The reconstructed current URL as IRI.
613 See also: :attr:`trusted_hosts`.
614 """
615 return get_current_url(self.environ,
616 trusted_hosts=self.trusted_hosts)
617
618 @cached_property
619 def base_url(self):
620 """Like :attr:`url` but without the querystring
621 See also: :attr:`trusted_hosts`.
622 """
623 return get_current_url(self.environ, strip_querystring=True,
624 trusted_hosts=self.trusted_hosts)
625
626 @cached_property
627 def url_root(self):
628 """The full URL root (with hostname), this is the application
629 root as IRI.
630 See also: :attr:`trusted_hosts`.
631 """
632 return get_current_url(self.environ, True,
633 trusted_hosts=self.trusted_hosts)
634
635 @cached_property
636 def host_url(self):
637 """Just the host with scheme as IRI.
638 See also: :attr:`trusted_hosts`.
639 """
640 return get_current_url(self.environ, host_only=True,
641 trusted_hosts=self.trusted_hosts)
642
643 @cached_property
644 def host(self):
645 """Just the host including the port if available.
646 See also: :attr:`trusted_hosts`.
647 """
648 return get_host(self.environ, trusted_hosts=self.trusted_hosts)
649
650 query_string = environ_property(
651 'QUERY_STRING', '', read_only=True,
652 load_func=wsgi_get_bytes, doc='The URL parameters as raw bytestring.')
653 method = environ_property(
654 'REQUEST_METHOD', 'GET', read_only=True,
655 load_func=lambda x: x.upper(),
656 doc="The request method. (For example ``'GET'`` or ``'POST'``).")
657
658 @cached_property
659 def access_route(self):
660 """If a forwarded header exists this is a list of all ip addresses
661 from the client ip to the last proxy server.
662 """
663 if 'HTTP_X_FORWARDED_FOR' in self.environ:
664 addr = self.environ['HTTP_X_FORWARDED_FOR'].split(',')
665 return self.list_storage_class([x.strip() for x in addr])
666 elif 'REMOTE_ADDR' in self.environ:
667 return self.list_storage_class([self.environ['REMOTE_ADDR']])
668 return self.list_storage_class()
669
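The forwarded-chain lookup in `access_route` above boils down to a few lines of dict handling. A standalone sketch of the same logic (returning a plain list instead of `list_storage_class`):

```python
def access_route(environ):
    # Mirrors the logic above: prefer the X-Forwarded-For chain, fall
    # back to REMOTE_ADDR, otherwise return an empty list.
    if "HTTP_X_FORWARDED_FOR" in environ:
        return [x.strip() for x in environ["HTTP_X_FORWARDED_FOR"].split(",")]
    if "REMOTE_ADDR" in environ:
        return [environ["REMOTE_ADDR"]]
    return []

route = access_route({"HTTP_X_FORWARDED_FOR": "203.0.113.5, 10.0.0.1"})
```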
670 @property
671 def remote_addr(self):
672 """The remote address of the client."""
673 return self.environ.get('REMOTE_ADDR')
674
675 remote_user = environ_property('REMOTE_USER', doc='''
676 If the server supports user authentication, and the script is
677 protected, this attribute contains the username the user has
678 authenticated as.''')
679
680 scheme = environ_property('wsgi.url_scheme', doc='''
681 URL scheme (http or https).
682
683 .. versionadded:: 0.7''')
684
685 @property
686 def is_xhr(self):
687 """True if the request was triggered via a JavaScript XMLHttpRequest.
688 This only works with libraries that support the ``X-Requested-With``
689 header and set it to "XMLHttpRequest". Libraries that do this include
690 Prototype, jQuery and MochiKit, among others.
691
692 .. deprecated:: 0.13
693 ``X-Requested-With`` is not standard and is unreliable.
694 """
695 warn(DeprecationWarning(
696 'Request.is_xhr is deprecated. Given that the X-Requested-With '
697 'header is not a part of any spec, it is not reliable'
698 ), stacklevel=2)
699 return self.environ.get(
700 'HTTP_X_REQUESTED_WITH', ''
701 ).lower() == 'xmlhttprequest'
702
703 is_secure = property(lambda x: x.environ['wsgi.url_scheme'] == 'https',
704 doc='`True` if the request is secure.')
705 is_multithread = environ_property('wsgi.multithread', doc='''
706 boolean that is `True` if the application is served by
707 a multithreaded WSGI server.''')
708 is_multiprocess = environ_property('wsgi.multiprocess', doc='''
709 boolean that is `True` if the application is served by
710 a WSGI server that spawns multiple processes.''')
711 is_run_once = environ_property('wsgi.run_once', doc='''
712 boolean that is `True` if the application will be executed only
713 once in a process lifetime. This is the case for CGI for example,
714 but it's not guaranteed that the execution only happens one time.''')
715
716
717 class BaseResponse(object):
718
719 """Base response class. The most important fact about a response object
720 is that it's a regular WSGI application. It's initialized with a couple
721 of response parameters (headers, body, status code etc.) and will start a
722 valid WSGI response when called with the environ and start response
723 callable.
724
725 Because it's a WSGI application itself, processing usually ends before the
726 actual response is sent to the server. This helps debugging systems
727 because they can catch all the exceptions before responses are started.
728
729 Here is a small example WSGI application that takes advantage of the
730 response objects::
731
732 from werkzeug.wrappers import BaseResponse as Response
733
734 def index():
735 return Response('Index page')
736
737 def application(environ, start_response):
738 path = environ.get('PATH_INFO') or '/'
739 if path == '/':
740 response = index()
741 else:
742 response = Response('Not Found', status=404)
743 return response(environ, start_response)
744
745 Like :class:`BaseRequest`, this object is missing a lot of functionality
746 that is implemented in mixins. This gives you better control over the actual
747 API of your response objects, so you can create subclasses and add custom
748 functionality. A full featured response object is available as
749 :class:`Response` which implements a couple of useful mixins.
750
751 To enforce a new type on already existing responses you can use the
752 :meth:`force_type` method. This is useful if you're working with different
753 subclasses of response objects and you want to post process them with a
754 known interface.
755
756 By default the response object will assume all the text data is `utf-8`
757 encoded. Please refer to `the unicode chapter <unicode.txt>`_ for more
758 details about customizing the behavior.
759
760 Response can be any kind of iterable or string. If it's a string it's
761 considered to be an iterable with one item, the string passed.
762 Headers can be a list of tuples or a
763 :class:`~werkzeug.datastructures.Headers` object.
764
765 Special note for `mimetype` and `content_type`: For most mime types
766 `mimetype` and `content_type` work the same, the difference affects
767 only 'text' mimetypes. If the mimetype passed with `mimetype` is a
768 mimetype starting with `text/`, the charset parameter of the response
769 object is appended to it. In contrast the `content_type` parameter is
770 always added as a header unmodified.
771
772 .. versionchanged:: 0.5
773 the `direct_passthrough` parameter was added.
774
775 :param response: a string or response iterable.
776 :param status: a string with a status or an integer with the status code.
777 :param headers: a list of headers or a
778 :class:`~werkzeug.datastructures.Headers` object.
779 :param mimetype: the mimetype for the response. See notice above.
780 :param content_type: the content type for the response. See notice above.
781 :param direct_passthrough: if set to `True` :meth:`iter_encoded` is not
782 called before iteration which makes it
783 possible to pass special iterators through
784 unchanged (see :func:`wrap_file` for more
785 details.)
786 """
787
788 #: the charset of the response.
789 charset = 'utf-8'
790
791 #: the default status if none is provided.
792 default_status = 200
793
794 #: the default mimetype if none is provided.
795 default_mimetype = 'text/plain'
796
797 #: if set to `False` accessing properties on the response object will
798 #: not try to consume the response iterator and convert it into a list.
799 #:
800 #: .. versionadded:: 0.6.2
801 #:
802 #: That attribute was previously called `implicit_seqence_conversion`.
803 #: (Notice the typo). If you did use this feature, you have to adapt
804 #: your code to the name change.
805 implicit_sequence_conversion = True
806
807 #: Should this response object correct the location header to be RFC
808 #: conformant? This is true by default.
809 #:
810 #: .. versionadded:: 0.8
811 autocorrect_location_header = True
812
813 #: Should this response object automatically set the content-length
814 #: header if possible? This is true by default.
815 #:
816 #: .. versionadded:: 0.8
817 automatically_set_content_length = True
818
819 #: Warn if a cookie header exceeds this size. The default, 4093, should be
820 #: safely `supported by most browsers <cookie_>`_. A cookie larger than
821 #: this size will still be sent, but it may be ignored or handled
822 #: incorrectly by some browsers. Set to 0 to disable this check.
823 #:
824 #: .. versionadded:: 0.13
825 #:
826 #: .. _`cookie`: http://browsercookielimits.squawky.net/
827 max_cookie_size = 4093
828
829 def __init__(self, response=None, status=None, headers=None,
830 mimetype=None, content_type=None, direct_passthrough=False):
831 if isinstance(headers, Headers):
832 self.headers = headers
833 elif not headers:
834 self.headers = Headers()
835 else:
836 self.headers = Headers(headers)
837
838 if content_type is None:
839 if mimetype is None and 'content-type' not in self.headers:
840 mimetype = self.default_mimetype
841 if mimetype is not None:
842 mimetype = get_content_type(mimetype, self.charset)
843 content_type = mimetype
844 if content_type is not None:
845 self.headers['Content-Type'] = content_type
846 if status is None:
847 status = self.default_status
848 if isinstance(status, integer_types):
849 self.status_code = status
850 else:
851 self.status = status
852
853 self.direct_passthrough = direct_passthrough
854 self._on_close = []
855
856 # we set the response after the headers so that if a class changes
857 # the charset attribute, the data is set in the correct charset.
858 if response is None:
859 self.response = []
860 elif isinstance(response, (text_type, bytes, bytearray)):
861 self.set_data(response)
862 else:
863 self.response = response
864
865 def call_on_close(self, func):
866 """Adds a function to the internal list of functions that should
867 be called as part of closing down the response. Since 0.7 this
868 function also returns the function that was passed so that this
869 can be used as a decorator.
870
871 .. versionadded:: 0.6
872 """
873 self._on_close.append(func)
874 return func
875
876 def __repr__(self):
877 if self.is_sequence:
878 body_info = '%d bytes' % sum(map(len, self.iter_encoded()))
879 else:
880 body_info = 'streamed' if self.is_streamed else 'likely-streamed'
881 return '<%s %s [%s]>' % (
882 self.__class__.__name__,
883 body_info,
884 self.status
885 )
886
887 @classmethod
888 def force_type(cls, response, environ=None):
889 """Enforce that the WSGI response is a response object of the current
890 type. Werkzeug will use the :class:`BaseResponse` internally in many
891 situations like the exceptions. If you call :meth:`get_response` on an
892 exception you will get back a regular :class:`BaseResponse` object, even
893 if you are using a custom subclass.
894
895 This method can enforce a given response type, and it will also
896 convert arbitrary WSGI callables into response objects if an environ
897 is provided::
898
899 # convert a Werkzeug response object into an instance of the
900 # MyResponseClass subclass.
901 response = MyResponseClass.force_type(response)
902
903 # convert any WSGI application into a response object
904 response = MyResponseClass.force_type(response, environ)
905
906 This is especially useful if you want to post-process responses in
907 the main dispatcher and use functionality provided by your subclass.
908
909 Keep in mind that this will modify response objects in place if
910 possible!
911
912 :param response: a response object or wsgi application.
913 :param environ: a WSGI environment object.
914 :return: a response object.
915 """
916 if not isinstance(response, BaseResponse):
917 if environ is None:
918 raise TypeError('cannot convert WSGI application into '
919 'response objects without an environ')
920 response = BaseResponse(*_run_wsgi_app(response, environ))
921 response.__class__ = cls
922 return response
923
924 @classmethod
925 def from_app(cls, app, environ, buffered=False):
926 """Create a new response object from an application output. This
927 works best if you pass it an application that returns a generator all
928 the time. Sometimes applications may use the `write()` callable
929 returned by the `start_response` function. This tries to resolve such
930 edge cases automatically. But if you don't get the expected output
931 you should set `buffered` to `True` which enforces buffering.
932
933 :param app: the WSGI application to execute.
934 :param environ: the WSGI environment to execute against.
935 :param buffered: set to `True` to enforce buffering.
936 :return: a response object.
937 """
938 return cls(*_run_wsgi_app(app, environ, buffered))
939
940 def _get_status_code(self):
941 return self._status_code
942
943 def _set_status_code(self, code):
944 self._status_code = code
945 try:
946 self._status = '%d %s' % (code, HTTP_STATUS_CODES[code].upper())
947 except KeyError:
948 self._status = '%d UNKNOWN' % code
949 status_code = property(_get_status_code, _set_status_code,
950 doc='The HTTP Status code as number')
951 del _get_status_code, _set_status_code
952
953 def _get_status(self):
954 return self._status
955
956 def _set_status(self, value):
957 try:
958 self._status = to_native(value)
959 except AttributeError:
960 raise TypeError('Invalid status argument')
961
962 try:
963 self._status_code = int(self._status.split(None, 1)[0])
964 except ValueError:
965 self._status_code = 0
966 self._status = '0 %s' % self._status
967 except IndexError:
968 raise ValueError('Empty status argument')
969 status = property(_get_status, _set_status, doc='The HTTP Status code')
970 del _get_status, _set_status
971
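The status setter above splits the status line into a code and a reason phrase, falling back to code 0 for non-numeric statuses. A standalone sketch of that parsing (a hypothetical `split_status` helper, raising `ValueError` where the property does):

```python
def split_status(value):
    # Mirrors the setter above: the leading integer becomes the code;
    # a non-numeric status gets "0 " prepended; an empty one is an error.
    parts = value.split(None, 1)
    if not parts:
        raise ValueError("Empty status argument")
    try:
        return int(parts[0]), value
    except ValueError:
        return 0, "0 %s" % value

code, status = split_status("404 NOT FOUND")
```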
972 def get_data(self, as_text=False):
973 """The string representation of the request body. Whenever you call
974 this property the request iterable is encoded and flattened. This
975 can lead to unwanted behavior if you stream big data.
976
977 This behavior can be disabled by setting
978 :attr:`implicit_sequence_conversion` to `False`.
979
980 If `as_text` is set to `True` the return value will be a decoded
981 unicode string.
982
983 .. versionadded:: 0.9
984 """
985 self._ensure_sequence()
986 rv = b''.join(self.iter_encoded())
987 if as_text:
988 rv = rv.decode(self.charset)
989 return rv
990
991 def set_data(self, value):
992 """Sets a new string as response. The value set must either by a
993 unicode or bytestring. If a unicode string is set it's encoded
994 automatically to the charset of the response (utf-8 by default).
995
996 .. versionadded:: 0.9
997 """
998 # if a unicode string is set, it's encoded directly so that we
999 # can set the content length
1000 if isinstance(value, text_type):
1001 value = value.encode(self.charset)
1002 else:
1003 value = bytes(value)
1004 self.response = [value]
1005 if self.automatically_set_content_length:
1006 self.headers['Content-Length'] = str(len(value))
1007
1008 data = property(get_data, set_data, doc='''
1009 A descriptor that calls :meth:`get_data` and :meth:`set_data`. This
1010 should not be used and will eventually get deprecated.
1011 ''')
1012
1013 def calculate_content_length(self):
1014 """Returns the content length if available or `None` otherwise."""
1015 try:
1016 self._ensure_sequence()
1017 except RuntimeError:
1018 return None
1019 return sum(len(x) for x in self.iter_encoded())
1020
1021 def _ensure_sequence(self, mutable=False):
1022 """This method can be called by methods that need a sequence. If
1023 `mutable` is true, it will also ensure that the response sequence
1024 is a standard Python list.
1025
1026 .. versionadded:: 0.6
1027 """
1028 if self.is_sequence:
1029 # if we need a mutable object, we ensure it's a list.
1030 if mutable and not isinstance(self.response, list):
1031 self.response = list(self.response)
1032 return
1033 if self.direct_passthrough:
1034 raise RuntimeError('Attempted implicit sequence conversion '
1035 'but the response object is in direct '
1036 'passthrough mode.')
1037 if not self.implicit_sequence_conversion:
1038 raise RuntimeError('The response object required the iterable '
1039 'to be a sequence, but the implicit '
1040 'conversion was disabled. Call '
1041 'make_sequence() yourself.')
1042 self.make_sequence()
1043
1044 def make_sequence(self):
1045 """Converts the response iterator in a list. By default this happens
1046 automatically if required. If `implicit_sequence_conversion` is
1047 disabled, this method is not automatically called and some properties
1048 might raise exceptions. This also encodes all the items.
1049
1050 .. versionadded:: 0.6
1051 """
1052 if not self.is_sequence:
1053 # if we consume an iterable we have to ensure that the close
1054 # method of the iterable is called if available when we tear
1055 # down the response
1056 close = getattr(self.response, 'close', None)
1057 self.response = list(self.iter_encoded())
1058 if close is not None:
1059 self.call_on_close(close)
1060
1061 def iter_encoded(self):
1062 """Iter the response encoded with the encoding of the response.
1063 If the response object is invoked as WSGI application the return
1064 value of this method is used as application iterator unless
1065 :attr:`direct_passthrough` was activated.
1066 """
1067 if __debug__:
1068 _warn_if_string(self.response)
1069 # Encode in a separate function so that self.response is fetched
1070 # early. This allows us to wrap the response with the return
1071 # value from get_app_iter or iter_encoded.
1072 return _iter_encoded(self.response, self.charset)
1073
1074 def set_cookie(self, key, value='', max_age=None, expires=None,
1075 path='/', domain=None, secure=False, httponly=False,
1076 samesite=None):
1077 """Sets a cookie. The parameters are the same as in the cookie `Morsel`
1078 object in the Python standard library but it accepts unicode data, too.
1079
1080 A warning is raised if the size of the cookie header exceeds
1081 :attr:`max_cookie_size`, but the header will still be set.
1082
1083 :param key: the key (name) of the cookie to be set.
1084 :param value: the value of the cookie.
1085 :param max_age: should be a number of seconds, or `None` (default) if
1086 the cookie should last only as long as the client's
1087 browser session.
1088 :param expires: should be a `datetime` object or UNIX timestamp.
1089 :param path: limits the cookie to a given path, per default it will
1090 span the whole domain.
1091 :param domain: if you want to set a cross-domain cookie. For example,
1092 ``domain=".example.com"`` will set a cookie that is
1093 readable by the domain ``www.example.com``,
1094 ``foo.example.com`` etc. Otherwise, a cookie will only
1095 be readable by the domain that set it.
1096 :param secure: If `True`, the cookie will only be available via HTTPS
1097 :param httponly: disallow JavaScript to access the cookie. This is an
1098 extension to the cookie standard and probably not
1099 supported by all browsers.
1100 :param samesite: Limits the scope of the cookie such that it will only
1101 be attached to requests if those requests are
1102 "same-site".
1103 """
1104 self.headers.add('Set-Cookie', dump_cookie(
1105 key,
1106 value=value,
1107 max_age=max_age,
1108 expires=expires,
1109 path=path,
1110 domain=domain,
1111 secure=secure,
1112 httponly=httponly,
1113 charset=self.charset,
1114 max_size=self.max_cookie_size,
1115 samesite=samesite
1116 ))
1117
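The Morsel parameters mentioned in `set_cookie` above map directly onto the standard library's `http.cookies`. A rough stdlib-only equivalent of a simple call (werkzeug's `dump_cookie` additionally handles unicode values and the `max_cookie_size` warning):

```python
from http.cookies import SimpleCookie

c = SimpleCookie()
c["session"] = "abc123"
c["session"]["path"] = "/"
c["session"]["httponly"] = True  # same flag as the httponly parameter above
# Render just the header value, without the "Set-Cookie:" prefix.
header_value = c.output(header="").strip()

# The size check sketched above: 4093 bytes is the documented safe limit.
too_big = len(header_value) > 4093
```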
1118 def delete_cookie(self, key, path='/', domain=None):
1119 """Delete a cookie. Fails silently if key doesn't exist.
1120
1121 :param key: the key (name) of the cookie to be deleted.
1122 :param path: if the cookie that should be deleted was limited to a
1123 path, the path has to be defined here.
1124 :param domain: if the cookie that should be deleted was limited to a
1125 domain, that domain has to be defined here.
1126 """
1127 self.set_cookie(key, expires=0, max_age=0, path=path, domain=domain)
1128
1129 @property
1130 def is_streamed(self):
1131 """If the response is streamed (the response is not an iterable with
1132 length information) this property is `True`. In this case streamed
1133 means that there is no information about the number of iterations.
1134 This is usually `True` if a generator is passed to the response object.
1135
1136 This is useful for checking before applying some sort of post
1137 filtering that should not take place for streamed responses.
1138 """
1139 try:
1140 len(self.response)
1141 except (TypeError, AttributeError):
1142 return True
1143 return False
1144
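The `is_streamed` check above relies on `len()` failing for generators. A standalone sketch of the same test shows why a generator counts as streamed while a list or tuple does not:

```python
def is_streamed(response):
    # A response iterable without a usable len() counts as streamed.
    try:
        len(response)
    except (TypeError, AttributeError):
        return True
    return False

gen = (chunk for chunk in (b"a", b"b"))  # generators have no length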
1145 @property
1146 def is_sequence(self):
1147 """If the iterator is buffered, this property will be `True`. A
1148 response object will consider an iterator to be buffered if the
1149 response attribute is a list or tuple.
1150
1151 .. versionadded:: 0.6
1152 """
1153 return isinstance(self.response, (tuple, list))
1154
1155 def close(self):
1156 """Close the wrapped response if possible. You can also use the object
1157 in a with statement which will automatically close it.
1158
1159 .. versionadded:: 0.9
1160 Can now be used in a with statement.
1161 """
1162 if hasattr(self.response, 'close'):
1163 self.response.close()
1164 for func in self._on_close:
1165 func()
1166
1167 def __enter__(self):
1168 return self
1169
1170 def __exit__(self, exc_type, exc_value, tb):
1171 self.close()
1172
1173 def freeze(self):
1174 """Call this method if you want to make your response object ready for
1175 being pickled. This buffers the generator if there is one. It will
1176 also set the `Content-Length` header to the length of the body.
1177
1178 .. versionchanged:: 0.6
1179 The `Content-Length` header is now set.
1180 """
1181 # we explicitly set the length to a list of the *encoded* response
1182 # iterator. Even if the implicit sequence conversion is disabled.
1183 self.response = list(self.iter_encoded())
1184 self.headers['Content-Length'] = str(sum(map(len, self.response)))
1185
1186 def get_wsgi_headers(self, environ):
1187 """This is automatically called right before the response is started
1188 and returns headers modified for the given environment. It returns a
1189 copy of the headers from the response with some modifications applied
1190 if necessary.
1191
1192 For example the location header (if present) is joined with the root
1193 URL of the environment. Also the content length is automatically set
1194 to zero here for certain status codes.
1195
1196 .. versionchanged:: 0.6
1197 Previously that function was called `fix_headers` and modified
1198 the response object in place. Also since 0.6, IRIs in location
1199 and content-location headers are handled properly.
1200
1201 Also starting with 0.6, Werkzeug will attempt to set the content
1202 length if it is able to figure it out on its own. This is the
1203 case if all the strings in the response iterable are already
1204 encoded and the iterable is buffered.
1205
1206 :param environ: the WSGI environment of the request.
1207 :return: returns a new :class:`~werkzeug.datastructures.Headers`
1208 object.
1209 """
1210 headers = Headers(self.headers)
1211 location = None
1212 content_location = None
1213 content_length = None
1214 status = self.status_code
1215
1216 # iterate over the headers to find all values in one go. Because
1217 # get_wsgi_headers is used for each response, this gives us a tiny
1218 # speedup.
1219 for key, value in headers:
1220 ikey = key.lower()
1221 if ikey == u'location':
1222 location = value
1223 elif ikey == u'content-location':
1224 content_location = value
1225 elif ikey == u'content-length':
1226 content_length = value
1227
1228 # make sure the location header is an absolute URL
1229 if location is not None:
1230 old_location = location
1231 if isinstance(location, text_type):
1232 # Safe conversion is necessary here as we might redirect
1233 # to a broken URI scheme (for instance itms-services).
1234 location = iri_to_uri(location, safe_conversion=True)
1235
1236 if self.autocorrect_location_header:
1237 current_url = get_current_url(environ, root_only=True)
1238 if isinstance(current_url, text_type):
1239 current_url = iri_to_uri(current_url)
1240 location = url_join(current_url, location)
1241 if location != old_location:
1242 headers['Location'] = location
1243
1244 # make sure the content location is a URL
1245 if content_location is not None and \
1246 isinstance(content_location, text_type):
1247 headers['Content-Location'] = iri_to_uri(content_location)
1248
1249 if status in (304, 412):
1250 remove_entity_headers(headers)
1251
1252 # if we can determine the content length automatically, we
1253 # should try to do that. But only if this does not involve
1254 # flattening the iterator or encoding of unicode strings in
1255 # the response. We however should not do that if we have a 304
1256 # response.
1257 if self.automatically_set_content_length and \
1258 self.is_sequence and content_length is None and \
1259 status not in (204, 304) and \
1260 not (100 <= status < 200):
1261 try:
1262 content_length = sum(len(to_bytes(x, 'ascii'))
1263 for x in self.response)
1264 except UnicodeError:
1265 # aha, something non-bytestringy in there, too bad, we
1266 # can't safely figure out the length of the response.
1267 pass
1268 else:
1269 headers['Content-Length'] = str(content_length)
1270
1271 return headers
1272
1273 def get_app_iter(self, environ):
1274 """Returns the application iterator for the given environ. Depending
1275 on the request method and the current status code the return value
1276 might be an empty iterable rather than the response body.
1277
1278 If the request method is `HEAD` or the status code is in a range
1279 where the HTTP specification requires an empty response, an empty
1280 iterable is returned.
1281
1282 .. versionadded:: 0.6
1283
1284 :param environ: the WSGI environment of the request.
1285 :return: a response iterable.
1286 """
1287 status = self.status_code
1288 if environ['REQUEST_METHOD'] == 'HEAD' or \
1289 100 <= status < 200 or status in (204, 304, 412):
1290 iterable = ()
1291 elif self.direct_passthrough:
1292 if __debug__:
1293 _warn_if_string(self.response)
1294 return self.response
1295 else:
1296 iterable = self.iter_encoded()
1297 return ClosingIterator(iterable, self.close)
1298
1299 def get_wsgi_response(self, environ):
1300 """Returns the final WSGI response as tuple. The first item in
1301 the tuple is the application iterator, the second the status and
1302 the third the list of headers. The response returned is created
1303 specially for the given environment. For example if the request
1304 method in the WSGI environment is ``'HEAD'`` the response will
1305 be empty and only the headers and status code will be present.
1306
1307 .. versionadded:: 0.6
1308
1309 :param environ: the WSGI environment of the request.
1310 :return: an ``(app_iter, status, headers)`` tuple.
1311 """
1312 headers = self.get_wsgi_headers(environ)
1313 app_iter = self.get_app_iter(environ)
1314 return app_iter, self.status, headers.to_wsgi_list()
1315
1316 def __call__(self, environ, start_response):
1317 """Process this response as WSGI application.
1318
1319 :param environ: the WSGI environment.
1320 :param start_response: the response callable provided by the WSGI
1321 server.
1322 :return: an application iterator
1323 """
1324 app_iter, status, headers = self.get_wsgi_response(environ)
1325 start_response(status, headers)
1326 return app_iter
1327
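As a concrete illustration of the WSGI calling convention that ``__call__`` above implements, here is a hedged sketch using a minimal stand-in class (``DemoResponse`` is invented for this example and is not Werkzeug's class):

```python
# A minimal response object implementing the WSGI application protocol:
# call start_response with status and headers, then return an iterable
# of bytes. This mirrors the shape of __call__ above, nothing more.
class DemoResponse:
    def __init__(self, body, status="200 OK"):
        self.body = body.encode("utf-8")
        self.status = status

    def __call__(self, environ, start_response):
        headers = [("Content-Type", "text/plain"),
                   ("Content-Length", str(len(self.body)))]
        start_response(self.status, headers)
        return [self.body]

collected = {}

def start_response(status, headers):
    # A fake server-side start_response that just records its arguments.
    collected["status"] = status
    collected["headers"] = dict(headers)

app_iter = DemoResponse("Hello")({"REQUEST_METHOD": "GET"}, start_response)
assert b"".join(app_iter) == b"Hello"
assert collected["status"] == "200 OK"
```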
1328
1329 class AcceptMixin(object):
1330
1331 """A mixin for classes with an :attr:`~BaseResponse.environ` attribute
1332 to get all the HTTP accept headers as
1333 :class:`~werkzeug.datastructures.Accept` objects (or subclasses
1334 thereof).
1335 """
1336
1337 @cached_property
1338 def accept_mimetypes(self):
1339 """List of mimetypes this client supports as
1340 :class:`~werkzeug.datastructures.MIMEAccept` object.
1341 """
1342 return parse_accept_header(self.environ.get('HTTP_ACCEPT'), MIMEAccept)
1343
1344 @cached_property
1345 def accept_charsets(self):
1346 """List of charsets this client supports as
1347 :class:`~werkzeug.datastructures.CharsetAccept` object.
1348 """
1349 return parse_accept_header(self.environ.get('HTTP_ACCEPT_CHARSET'),
1350 CharsetAccept)
1351
1352 @cached_property
1353 def accept_encodings(self):
1354 """List of encodings this client accepts. Encodings in a HTTP term
1355 are compression encodings such as gzip. For charsets have a look at
1356 :attr:`accept_charset`.
1357 """
1358 return parse_accept_header(self.environ.get('HTTP_ACCEPT_ENCODING'))
1359
1360 @cached_property
1361 def accept_languages(self):
1362 """List of languages this client accepts as
1363 :class:`~werkzeug.datastructures.LanguageAccept` object.
1364
1365 .. versionchanged:: 0.5
1366 In previous versions this was a regular
1367 :class:`~werkzeug.datastructures.Accept` object.
1368 """
1369 return parse_accept_header(self.environ.get('HTTP_ACCEPT_LANGUAGE'),
1370 LanguageAccept)
1371
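To make the behavior of these accept properties concrete, here is a hedged, simplified sketch of how an ``Accept`` header maps to quality-sorted ``(value, quality)`` pairs. This is not Werkzeug's parser (``parse_accept_header`` handles more edge cases); it only illustrates the q-value ordering:

```python
# Simplified Accept header parsing: split on commas, read the optional
# ";q=" parameter, and sort by descending quality.
def parse_accept(value):
    items = []
    for part in value.split(","):
        bits = part.strip().split(";")
        quality = 1.0  # per HTTP, a missing q-value defaults to 1
        for param in bits[1:]:
            key, _, val = param.strip().partition("=")
            if key == "q":
                quality = float(val)
        items.append((bits[0].strip(), quality))
    items.sort(key=lambda item: item[1], reverse=True)
    return items

result = parse_accept("text/html;q=0.9, application/json")
assert result == [("application/json", 1.0), ("text/html", 0.9)]
```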
1372
1373 class ETagRequestMixin(object):
1374
1375 """Add entity tag and cache descriptors to a request object or object with
1376 a WSGI environment available as :attr:`~BaseRequest.environ`. This not
1377 only provides access to etags but also to the cache control header.
1378 """
1379
1380 @cached_property
1381 def cache_control(self):
1382 """A :class:`~werkzeug.datastructures.RequestCacheControl` object
1383 for the incoming cache control headers.
1384 """
1385 cache_control = self.environ.get('HTTP_CACHE_CONTROL')
1386 return parse_cache_control_header(cache_control, None,
1387 RequestCacheControl)
1388
1389 @cached_property
1390 def if_match(self):
1391 """An object containing all the etags in the `If-Match` header.
1392
1393 :rtype: :class:`~werkzeug.datastructures.ETags`
1394 """
1395 return parse_etags(self.environ.get('HTTP_IF_MATCH'))
1396
1397 @cached_property
1398 def if_none_match(self):
1399 """An object containing all the etags in the `If-None-Match` header.
1400
1401 :rtype: :class:`~werkzeug.datastructures.ETags`
1402 """
1403 return parse_etags(self.environ.get('HTTP_IF_NONE_MATCH'))
1404
1405 @cached_property
1406 def if_modified_since(self):
1407 """The parsed `If-Modified-Since` header as datetime object."""
1408 return parse_date(self.environ.get('HTTP_IF_MODIFIED_SINCE'))
1409
1410 @cached_property
1411 def if_unmodified_since(self):
1412 """The parsed `If-Unmodified-Since` header as datetime object."""
1413 return parse_date(self.environ.get('HTTP_IF_UNMODIFIED_SINCE'))
1414
1415 @cached_property
1416 def if_range(self):
1417 """The parsed `If-Range` header.
1418
1419 .. versionadded:: 0.7
1420
1421 :rtype: :class:`~werkzeug.datastructures.IfRange`
1422 """
1423 return parse_if_range_header(self.environ.get('HTTP_IF_RANGE'))
1424
1425 @cached_property
1426 def range(self):
1427 """The parsed `Range` header.
1428
1429 .. versionadded:: 0.7
1430
1431 :rtype: :class:`~werkzeug.datastructures.Range`
1432 """
1433 return parse_range_header(self.environ.get('HTTP_RANGE'))
1434
1435
1436 class UserAgentMixin(object):
1437
1438 """Adds a `user_agent` attribute to the request object which contains the
1439 parsed user agent of the browser that triggered the request as a
1440 :class:`~werkzeug.useragents.UserAgent` object.
1441 """
1442
1443 @cached_property
1444 def user_agent(self):
1445 """The current user agent."""
1446 from werkzeug.useragents import UserAgent
1447 return UserAgent(self.environ)
1448
1449
1450 class AuthorizationMixin(object):
1451
1452 """Adds an :attr:`authorization` property that represents the parsed
1453 value of the `Authorization` header as
1454 :class:`~werkzeug.datastructures.Authorization` object.
1455 """
1456
1457 @cached_property
1458 def authorization(self):
1459 """The `Authorization` object in parsed form."""
1460 header = self.environ.get('HTTP_AUTHORIZATION')
1461 return parse_authorization_header(header)
1462
1463
1464 class StreamOnlyMixin(object):
1465
1466 """If mixed in before the request object this will change the bahavior
1467 of it to disable handling of form parsing. This disables the
1468 :attr:`files`, :attr:`form` attributes and will just provide a
1469 :attr:`stream` attribute that however is always available.
1470
1471 .. versionadded:: 0.9
1472 """
1473
1474 disable_data_descriptor = True
1475 want_form_data_parsed = False
1476
1477
1478 class ETagResponseMixin(object):
1479
1480 """Adds extra functionality to a response object for etag and cache
1481 handling. This mixin requires an object with at least a `headers`
1482 object that implements a dict like interface similar to
1483 :class:`~werkzeug.datastructures.Headers`.
1484
1485 If you want the :meth:`freeze` method to automatically add an etag, you
1486 have to mixin this method before the response base class. The default
1487 response class does not do that.
1488 """
1489
1490 @property
1491 def cache_control(self):
1492 """The Cache-Control general-header field is used to specify
1493 directives that MUST be obeyed by all caching mechanisms along the
1494 request/response chain.
1495 """
1496 def on_update(cache_control):
1497 if not cache_control and 'cache-control' in self.headers:
1498 del self.headers['cache-control']
1499 elif cache_control:
1500 self.headers['Cache-Control'] = cache_control.to_header()
1501 return parse_cache_control_header(self.headers.get('cache-control'),
1502 on_update,
1503 ResponseCacheControl)
1504
1505 def _wrap_response(self, start, length):
1506 """Wrap existing Response in case of Range Request context."""
1507 if self.status_code == 206:
1508 self.response = _RangeWrapper(self.response, start, length)
1509
1510 def _is_range_request_processable(self, environ):
1511 """Return ``True`` if `Range` header is present and if underlying
1512 resource is considered unchanged when compared with `If-Range` header.
1513 """
1514 return (
1515 'HTTP_IF_RANGE' not in environ
1516 or not is_resource_modified(
1517 environ, self.headers.get('etag'), None,
1518 self.headers.get('last-modified'), ignore_if_range=False
1519 )
1520 ) and 'HTTP_RANGE' in environ
1521
1522 def _process_range_request(self, environ, complete_length=None, accept_ranges=None):
1523 """Handle Range Request related headers (RFC7233). If `Accept-Ranges`
1524 header is valid, and Range Request is processable, we set the headers
1525 as described by the RFC, and wrap the underlying response in a
1526 RangeWrapper.
1527
1528 Returns ``True`` if Range Request can be fulfilled, ``False`` otherwise.
1529
1530 :raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
1531 if `Range` header could not be parsed or satisfied.
1532 """
1533 from werkzeug.exceptions import RequestedRangeNotSatisfiable
1534 if accept_ranges is None:
1535 return False
1536 self.headers['Accept-Ranges'] = accept_ranges
1537 if not self._is_range_request_processable(environ) or complete_length is None:
1538 return False
1539 parsed_range = parse_range_header(environ.get('HTTP_RANGE'))
1540 if parsed_range is None:
1541 raise RequestedRangeNotSatisfiable(complete_length)
1542 range_tuple = parsed_range.range_for_length(complete_length)
1543 content_range_header = parsed_range.to_content_range_header(complete_length)
1544 if range_tuple is None or content_range_header is None:
1545 raise RequestedRangeNotSatisfiable(complete_length)
1546 content_length = range_tuple[1] - range_tuple[0]
1547 # Be sure not to send 206 response
1548 # if requested range is the full content.
1549 if content_length != complete_length:
1550 self.headers['Content-Length'] = content_length
1551 self.content_range = content_range_header
1552 self.status_code = 206
1553 self._wrap_response(range_tuple[0], content_length)
1554 return True
1555 return False
1556
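The arithmetic in ``_process_range_request`` above can be sketched as follows. This is a hedged stand-alone illustration of the RFC 7233 bookkeeping (``range_for_length`` here is a simplified invention mirroring the semantics of the parsed range's method of the same name, with an exclusive end):

```python
# For "bytes=0-4" against a 100-byte resource, the satisfiable slice
# is bytes 0..4 inclusive, i.e. the half-open range (0, 5).
def range_for_length(start, end, complete_length):
    end = min(end, complete_length)
    if start >= end:
        return None  # unsatisfiable range
    return start, end

rng = range_for_length(0, 5, 100)
assert rng == (0, 5)

# Content-Length of the 206 response is the slice length; the
# Content-Range header uses inclusive byte positions.
content_length = rng[1] - rng[0]
content_range = "bytes %d-%d/%d" % (rng[0], rng[1] - 1, 100)
assert content_length == 5
assert content_range == "bytes 0-4/100"
```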
1557 def make_conditional(self, request_or_environ, accept_ranges=False,
1558 complete_length=None):
1559 """Make the response conditional to the request. This method works
1560 best if an etag was defined for the response already. The `add_etag`
1561 method can be used to do that. If called without etag just the date
1562 header is set.
1563
1564 This does nothing if the request method in the request or environ is
1565 anything but GET or HEAD.
1566
1567 For optimal performance when handling range requests, it's recommended
1568 that your response data object implements `seekable`, `seek` and `tell`
1569 methods as described by :py:class:`io.IOBase`. Objects returned by
1570 :meth:`~werkzeug.wsgi.wrap_file` automatically implement those methods.
1571
1572 It does not remove the body of the response because that's something
1573 the :meth:`__call__` function does for us automatically.
1574
1575 Returns self so that you can do ``return resp.make_conditional(req)``
1576 but modifies the object in-place.
1577
1578 :param request_or_environ: a request object or WSGI environment to be
1579 used to make the response conditional
1580 against.
1581 :param accept_ranges: This parameter dictates the value of
1582 `Accept-Ranges` header. If ``False`` (default),
1583 the header is not set. If ``True``, it will be set
1584 to ``"bytes"``. If ``None``, it will be set to
1585 ``"none"``. If it's a string, it will use this
1586 value.
1587 :param complete_length: Will be used only in valid Range Requests.
1588 It will set `Content-Range` complete length
1589 value and compute `Content-Length` real value.
1590 This parameter is mandatory for successful
1591 Range Requests completion.
1592 :raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable`
1593 if `Range` header could not be parsed or satisfied.
1594 """
1595 environ = _get_environ(request_or_environ)
1596 if environ['REQUEST_METHOD'] in ('GET', 'HEAD'):
1597 # if the date is not in the headers, add it now. We however
1598 # will not override an already existing header. Unfortunately
1599 # this header will be overridden by many WSGI servers including
1600 # wsgiref.
1601 if 'date' not in self.headers:
1602 self.headers['Date'] = http_date()
1603 accept_ranges = _clean_accept_ranges(accept_ranges)
1604 is206 = self._process_range_request(environ, complete_length, accept_ranges)
1605 if not is206 and not is_resource_modified(
1606 environ, self.headers.get('etag'), None,
1607 self.headers.get('last-modified')
1608 ):
1609 if parse_etags(environ.get('HTTP_IF_MATCH')):
1610 self.status_code = 412
1611 else:
1612 self.status_code = 304
1613 if self.automatically_set_content_length and 'content-length' not in self.headers:
1614 length = self.calculate_content_length()
1615 if length is not None:
1616 self.headers['Content-Length'] = length
1617 return self
1618
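The conditional-GET decision that ``make_conditional`` performs can be reduced to the following hedged sketch. ``conditional_status`` is invented for this illustration; the real method delegates the comparison to ``is_resource_modified`` and also handles ``If-Match`` and range requests:

```python
# Core of a conditional GET: if the client already holds an entity
# whose ETag matches ours, answer 304 Not Modified instead of 200.
def conditional_status(method, etag, if_none_match):
    if method not in ("GET", "HEAD"):
        return 200  # conditional handling only applies to safe methods
    if if_none_match and etag in if_none_match:
        return 304
    return 200

assert conditional_status("GET", '"abc"', ['"abc"']) == 304
assert conditional_status("GET", '"abc"', ['"xyz"']) == 200
assert conditional_status("POST", '"abc"', ['"abc"']) == 200
```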
1619 def add_etag(self, overwrite=False, weak=False):
1620 """Add an etag for the current response if there is none yet."""
1621 if overwrite or 'etag' not in self.headers:
1622 self.set_etag(generate_etag(self.get_data()), weak)
1623
1624 def set_etag(self, etag, weak=False):
1625 """Set the etag, and override the old one if there was one."""
1626 self.headers['ETag'] = quote_etag(etag, weak)
1627
1628 def get_etag(self):
1629 """Return a tuple in the form ``(etag, is_weak)``. If there is no
1630 ETag the return value is ``(None, None)``.
1631 """
1632 return unquote_etag(self.headers.get('ETag'))
1633
1634 def freeze(self, no_etag=False):
1635 """Call this method if you want to make your response object ready for
1636 pickling. This buffers the generator if there is one. This also
1637 sets the etag unless `no_etag` is set to `True`.
1638 """
1639 if not no_etag:
1640 self.add_etag()
1641 super(ETagResponseMixin, self).freeze()
1642
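What ``add_etag`` and ``set_etag`` boil down to can be sketched as below. The ``quote_etag`` here is a simplified re-implementation for illustration; Werkzeug's ``generate_etag`` similarly derives the tag from a hash of the body bytes:

```python
from hashlib import md5

# Quote an etag value, optionally marking it weak with the W/ prefix.
def quote_etag(etag, weak=False):
    etag = '"%s"' % etag
    return "W/" + etag if weak else etag

body = b"Hello World"
etag = quote_etag(md5(body).hexdigest())
assert etag.startswith('"') and etag.endswith('"')
assert quote_etag("x", weak=True) == 'W/"x"'
```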
1643 accept_ranges = header_property('Accept-Ranges', doc='''
1644 The `Accept-Ranges` header. Even though the name would indicate
1645 that multiple values are supported, it must be one string token only.
1646
1647 The values ``'bytes'`` and ``'none'`` are common.
1648
1649 .. versionadded:: 0.7''')
1650
1651 def _get_content_range(self):
1652 def on_update(rng):
1653 if not rng:
1654 del self.headers['content-range']
1655 else:
1656 self.headers['Content-Range'] = rng.to_header()
1657 rv = parse_content_range_header(self.headers.get('content-range'),
1658 on_update)
1659 # always provide a content range object to make the descriptor
1660 # more user friendly. It provides an unset() method that can be
1661 # used to remove the header quickly.
1662 if rv is None:
1663 rv = ContentRange(None, None, None, on_update=on_update)
1664 return rv
1665
1666 def _set_content_range(self, value):
1667 if not value:
1668 del self.headers['content-range']
1669 elif isinstance(value, string_types):
1670 self.headers['Content-Range'] = value
1671 else:
1672 self.headers['Content-Range'] = value.to_header()
1673 content_range = property(_get_content_range, _set_content_range, doc='''
1674 The `Content-Range` header as
1675 :class:`~werkzeug.datastructures.ContentRange` object. Even if the
1676 header is not set it will provide such an object for easier
1677 manipulation.
1678
1679 .. versionadded:: 0.7''')
1680 del _get_content_range, _set_content_range
1681
1682
1683 class ResponseStream(object):
1684
1685 """A file descriptor like object used by the :class:`ResponseStreamMixin` to
1686 represent the body of the stream. It directly pushes into the response
1687 iterable of the response object.
1688 """
1689
1690 mode = 'wb+'
1691
1692 def __init__(self, response):
1693 self.response = response
1694 self.closed = False
1695
1696 def write(self, value):
1697 if self.closed:
1698 raise ValueError('I/O operation on closed file')
1699 self.response._ensure_sequence(mutable=True)
1700 self.response.response.append(value)
1701 self.response.headers.pop('Content-Length', None)
1702 return len(value)
1703
1704 def writelines(self, seq):
1705 for item in seq:
1706 self.write(item)
1707
1708 def close(self):
1709 self.closed = True
1710
1711 def flush(self):
1712 if self.closed:
1713 raise ValueError('I/O operation on closed file')
1714
1715 def isatty(self):
1716 if self.closed:
1717 raise ValueError('I/O operation on closed file')
1718 return False
1719
1720 def tell(self):
1721 self.response._ensure_sequence()
1722 return sum(map(len, self.response.response))
1723
1724 @property
1725 def encoding(self):
1726 return self.response.charset
1727
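The append-based write model of ``ResponseStream`` can be illustrated with this hedged stand-in (``DemoStream`` is invented for the example; the real class appends into the response iterable and pops ``Content-Length``):

```python
# A minimal write-only stream over a list of byte chunks, mirroring
# ResponseStream's write() and tell() behavior.
class DemoStream:
    def __init__(self):
        self.chunks = []
        self.closed = False

    def write(self, value):
        if self.closed:
            raise ValueError("I/O operation on closed file")
        self.chunks.append(value)
        return len(value)

    def tell(self):
        # Position equals the total number of bytes written so far.
        return sum(map(len, self.chunks))

stream = DemoStream()
stream.write(b"Hello ")
stream.write(b"World")
assert stream.tell() == 11
assert b"".join(stream.chunks) == b"Hello World"
```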
1728
1729 class ResponseStreamMixin(object):
1730
1731 """Mixin for :class:`BaseRequest` subclasses. Classes that inherit from
1732 this mixin will automatically get a :attr:`stream` property that provides
1733 a write-only interface to the response iterable.
1734 """
1735
1736 @cached_property
1737 def stream(self):
1738 """The response iterable as write-only stream."""
1739 return ResponseStream(self)
1740
1741
1742 class CommonRequestDescriptorsMixin(object):
1743
1744 """A mixin for :class:`BaseRequest` subclasses. Request objects that
1745 mix this class in will automatically get descriptors for a couple of
1746 HTTP headers with automatic type conversion.
1747
1748 .. versionadded:: 0.5
1749 """
1750
1751 content_type = environ_property('CONTENT_TYPE', doc='''
1752 The Content-Type entity-header field indicates the media type of
1753 the entity-body sent to the recipient or, in the case of the HEAD
1754 method, the media type that would have been sent had the request
1755 been a GET.''')
1756
1757 @cached_property
1758 def content_length(self):
1759 """The Content-Length entity-header field indicates the size of the
1760 entity-body in bytes or, in the case of the HEAD method, the size of
1761 the entity-body that would have been sent had the request been a
1762 GET.
1763 """
1764 return get_content_length(self.environ)
1765
1766 content_encoding = environ_property('HTTP_CONTENT_ENCODING', doc='''
1767 The Content-Encoding entity-header field is used as a modifier to the
1768 media-type. When present, its value indicates what additional content
1769 codings have been applied to the entity-body, and thus what decoding
1770 mechanisms must be applied in order to obtain the media-type
1771 referenced by the Content-Type header field.
1772
1773 .. versionadded:: 0.9''')
1774 content_md5 = environ_property('HTTP_CONTENT_MD5', doc='''
1775 The Content-MD5 entity-header field, as defined in RFC 1864, is an
1776 MD5 digest of the entity-body for the purpose of providing an
1777 end-to-end message integrity check (MIC) of the entity-body. (Note:
1778 a MIC is good for detecting accidental modification of the
1779 entity-body in transit, but is not proof against malicious attacks.)
1780
1781 .. versionadded:: 0.9''')
1782 referrer = environ_property('HTTP_REFERER', doc='''
1783 The Referer[sic] request-header field allows the client to specify,
1784 for the server's benefit, the address (URI) of the resource from which
1785 the Request-URI was obtained (the "referrer", although the header
1786 field is misspelled).''')
1787 date = environ_property('HTTP_DATE', None, parse_date, doc='''
1788 The Date general-header field represents the date and time at which
1789 the message was originated, having the same semantics as orig-date
1790 in RFC 822.''')
1791 max_forwards = environ_property('HTTP_MAX_FORWARDS', None, int, doc='''
1792 The Max-Forwards request-header field provides a mechanism with the
1793 TRACE and OPTIONS methods to limit the number of proxies or gateways
1794 that can forward the request to the next inbound server.''')
1795
1796 def _parse_content_type(self):
1797 if not hasattr(self, '_parsed_content_type'):
1798 self._parsed_content_type = \
1799 parse_options_header(self.environ.get('CONTENT_TYPE', ''))
1800
1801 @property
1802 def mimetype(self):
1803 """Like :attr:`content_type`, but without parameters (eg, without
1804 charset, type etc.) and always lowercase. For example if the content
1805 type is ``text/HTML; charset=utf-8`` the mimetype would be
1806 ``'text/html'``.
1807 """
1808 self._parse_content_type()
1809 return self._parsed_content_type[0].lower()
1810
1811 @property
1812 def mimetype_params(self):
1813 """The mimetype parameters as dict. For example if the content
1814 type is ``text/html; charset=utf-8`` the params would be
1815 ``{'charset': 'utf-8'}``.
1816 """
1817 self._parse_content_type()
1818 return self._parsed_content_type[1]
1819
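The split that backs ``mimetype`` and ``mimetype_params`` can be sketched as follows. ``split_content_type`` is a simplified invention for illustration; Werkzeug's ``parse_options_header`` additionally handles quoting and continuations:

```python
# Split a Content-Type value into a lowercased mimetype and a dict of
# its parameters.
def split_content_type(value):
    parts = value.split(";")
    mimetype = parts[0].strip().lower()
    params = {}
    for part in parts[1:]:
        key, _, val = part.partition("=")
        params[key.strip()] = val.strip()
    return mimetype, params

mimetype, params = split_content_type("text/HTML; charset=utf-8")
assert mimetype == "text/html"
assert params == {"charset": "utf-8"}
```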
1820 @cached_property
1821 def pragma(self):
1822 """The Pragma general-header field is used to include
1823 implementation-specific directives that might apply to any recipient
1824 along the request/response chain. All pragma directives specify
1825 optional behavior from the viewpoint of the protocol; however, some
1826 systems MAY require that behavior be consistent with the directives.
1827 """
1828 return parse_set_header(self.environ.get('HTTP_PRAGMA', ''))
1829
1830
1831 class CommonResponseDescriptorsMixin(object):
1832
1833 """A mixin for :class:`BaseResponse` subclasses. Response objects that
1834 mix this class in will automatically get descriptors for a couple of
1835 HTTP headers with automatic type conversion.
1836 """
1837
1838 def _get_mimetype(self):
1839 ct = self.headers.get('content-type')
1840 if ct:
1841 return ct.split(';')[0].strip()
1842
1843 def _set_mimetype(self, value):
1844 self.headers['Content-Type'] = get_content_type(value, self.charset)
1845
1846 def _get_mimetype_params(self):
1847 def on_update(d):
1848 self.headers['Content-Type'] = \
1849 dump_options_header(self.mimetype, d)
1850 d = parse_options_header(self.headers.get('content-type', ''))[1]
1851 return CallbackDict(d, on_update)
1852
1853 mimetype = property(_get_mimetype, _set_mimetype, doc='''
1854 The mimetype (content type without charset etc.)''')
1855 mimetype_params = property(_get_mimetype_params, doc='''
1856 The mimetype parameters as dict. For example if the content
1857 type is ``text/html; charset=utf-8`` the params would be
1858 ``{'charset': 'utf-8'}``.
1859
1860 .. versionadded:: 0.5
1861 ''')
1862 location = header_property('Location', doc='''
1863 The Location response-header field is used to redirect the recipient
1864 to a location other than the Request-URI for completion of the request
1865 or identification of a new resource.''')
1866 age = header_property('Age', None, parse_age, dump_age, doc='''
1867 The Age response-header field conveys the sender's estimate of the
1868 amount of time since the response (or its revalidation) was
1869 generated at the origin server.
1870
1871 Age values are non-negative decimal integers, representing time in
1872 seconds.''')
1873 content_type = header_property('Content-Type', doc='''
1874 The Content-Type entity-header field indicates the media type of the
1875 entity-body sent to the recipient or, in the case of the HEAD method,
1876 the media type that would have been sent had the request been a GET.
1877 ''')
1878 content_length = header_property('Content-Length', None, int, str, doc='''
1879 The Content-Length entity-header field indicates the size of the
1880 entity-body, in decimal number of OCTETs, sent to the recipient or,
1881 in the case of the HEAD method, the size of the entity-body that would
1882 have been sent had the request been a GET.''')
1883 content_location = header_property('Content-Location', doc='''
1884 The Content-Location entity-header field MAY be used to supply the
1885 resource location for the entity enclosed in the message when that
1886 entity is accessible from a location separate from the requested
1887 resource's URI.''')
1888 content_encoding = header_property('Content-Encoding', doc='''
1889 The Content-Encoding entity-header field is used as a modifier to the
1890 media-type. When present, its value indicates what additional content
1891 codings have been applied to the entity-body, and thus what decoding
1892 mechanisms must be applied in order to obtain the media-type
1893 referenced by the Content-Type header field.''')
1894 content_md5 = header_property('Content-MD5', doc='''
1895 The Content-MD5 entity-header field, as defined in RFC 1864, is an
1896 MD5 digest of the entity-body for the purpose of providing an
1897 end-to-end message integrity check (MIC) of the entity-body. (Note:
1898 a MIC is good for detecting accidental modification of the
1899 entity-body in transit, but is not proof against malicious attacks.)
1900 ''')
1901 date = header_property('Date', None, parse_date, http_date, doc='''
1902 The Date general-header field represents the date and time at which
1903 the message was originated, having the same semantics as orig-date
1904 in RFC 822.''')
1905 expires = header_property('Expires', None, parse_date, http_date, doc='''
1906 The Expires entity-header field gives the date/time after which the
1907 response is considered stale. A stale cache entry may not normally be
1908 returned by a cache.''')
1909 last_modified = header_property('Last-Modified', None, parse_date,
1910 http_date, doc='''
1911 The Last-Modified entity-header field indicates the date and time at
1912 which the origin server believes the variant was last modified.''')
1913
1914 def _get_retry_after(self):
1915 value = self.headers.get('retry-after')
1916 if value is None:
1917 return
1918 elif value.isdigit():
1919 return datetime.utcnow() + timedelta(seconds=int(value))
1920 return parse_date(value)
1921
1922 def _set_retry_after(self, value):
1923 if value is None:
1924 if 'retry-after' in self.headers:
1925 del self.headers['retry-after']
1926 return
1927 elif isinstance(value, datetime):
1928 value = http_date(value)
1929 else:
1930 value = str(value)
1931 self.headers['Retry-After'] = value
1932
1933 retry_after = property(_get_retry_after, _set_retry_after, doc='''
1934 The Retry-After response-header field can be used with a 503 (Service
1935 Unavailable) response to indicate how long the service is expected
1936 to be unavailable to the requesting client.
1937
1938 Time in seconds until expiration or date.''')
1939
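The two ``Retry-After`` forms handled by the property above can be sketched like this. ``retry_after_to_datetime`` is invented for the illustration; the real getter falls back to ``parse_date`` for the HTTP-date form:

```python
from datetime import datetime, timedelta

# Retry-After is either an integer delay in seconds or an HTTP date.
# Integer delays are converted to an absolute datetime.
def retry_after_to_datetime(value, now):
    if value.isdigit():
        return now + timedelta(seconds=int(value))
    return None  # a real parser would fall back to parse_date()

now = datetime(2019, 1, 1, 12, 0, 0)
assert retry_after_to_datetime("120", now) == datetime(2019, 1, 1, 12, 2, 0)
assert retry_after_to_datetime("Wed, 21 Oct 2015 07:28:00 GMT", now) is None
```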
1940 def _set_property(name, doc=None):
1941 def fget(self):
1942 def on_update(header_set):
1943 if not header_set and name in self.headers:
1944 del self.headers[name]
1945 elif header_set:
1946 self.headers[name] = header_set.to_header()
1947 return parse_set_header(self.headers.get(name), on_update)
1948
1949 def fset(self, value):
1950 if not value:
1951 del self.headers[name]
1952 elif isinstance(value, string_types):
1953 self.headers[name] = value
1954 else:
1955 self.headers[name] = dump_header(value)
1956 return property(fget, fset, doc=doc)
1957
1958 vary = _set_property('Vary', doc='''
1959 The Vary field value indicates the set of request-header fields that
1960 fully determines, while the response is fresh, whether a cache is
1961 permitted to use the response to reply to a subsequent request
1962 without revalidation.''')
1963 content_language = _set_property('Content-Language', doc='''
1964 The Content-Language entity-header field describes the natural
1965 language(s) of the intended audience for the enclosed entity. Note
1966 that this might not be equivalent to all the languages used within
1967 the entity-body.''')
1968 allow = _set_property('Allow', doc='''
1969 The Allow entity-header field lists the set of methods supported
1970 by the resource identified by the Request-URI. The purpose of this
1971 field is strictly to inform the recipient of valid methods
1972 associated with the resource. An Allow header field MUST be
1973 present in a 405 (Method Not Allowed) response.''')
1974
1975 del _set_property, _get_mimetype, _set_mimetype, _get_retry_after, \
1976 _set_retry_after
1977
1978
1979 class WWWAuthenticateMixin(object):
1980
1981 """Adds a :attr:`www_authenticate` property to a response object."""
1982
1983 @property
1984 def www_authenticate(self):
1985 """The `WWW-Authenticate` header in a parsed form."""
1986 def on_update(www_auth):
1987 if not www_auth and 'www-authenticate' in self.headers:
1988 del self.headers['www-authenticate']
1989 elif www_auth:
1990 self.headers['WWW-Authenticate'] = www_auth.to_header()
1991 header = self.headers.get('www-authenticate')
1992 return parse_www_authenticate_header(header, on_update)
1993
1994
1995 class Request(BaseRequest, AcceptMixin, ETagRequestMixin,
1996 UserAgentMixin, AuthorizationMixin,
1997 CommonRequestDescriptorsMixin):
1998
1999 """Full featured request object implementing the following mixins:
2000
2001 - :class:`AcceptMixin` for accept header parsing
2002 - :class:`ETagRequestMixin` for etag and cache control handling
2003 - :class:`UserAgentMixin` for user agent introspection
2004 - :class:`AuthorizationMixin` for HTTP auth handling
2005 - :class:`CommonRequestDescriptorsMixin` for common headers
2006 """
2007
2008
2009 class PlainRequest(StreamOnlyMixin, Request):
2010
2011 """A request object without special form parsing capabilities.
2012
2013 .. versionadded:: 0.9
2014 """
2015
2016
2017 class Response(BaseResponse, ETagResponseMixin, ResponseStreamMixin,
2018 CommonResponseDescriptorsMixin,
2019 WWWAuthenticateMixin):
2020
2021 """Full featured response object implementing the following mixins:
2022
2023 - :class:`ETagResponseMixin` for etag and cache control handling
2024 - :class:`ResponseStreamMixin` to add support for the `stream` property
2025 - :class:`CommonResponseDescriptorsMixin` for various HTTP descriptors
2026 - :class:`WWWAuthenticateMixin` for HTTP authentication support
2027 """
+0
-1364
werkzeug/wsgi.py
0 # -*- coding: utf-8 -*-
1 """
2 werkzeug.wsgi
3 ~~~~~~~~~~~~~
4
5 This module implements WSGI related helpers.
6
7 :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details.
8 :license: BSD, see LICENSE for more details.
9 """
10 import io
11 try:
12 import httplib
13 except ImportError:
14 from http import client as httplib
15 import mimetypes
16 import os
17 import posixpath
18 import re
19 import socket
20 from datetime import datetime
21 from functools import partial, update_wrapper
22 from itertools import chain
23 from time import mktime, time
24 from zlib import adler32
25
26 from werkzeug._compat import BytesIO, PY2, implements_iterator, iteritems, \
27 make_literal_wrapper, string_types, text_type, to_bytes, to_unicode, \
28 try_coerce_native, wsgi_get_bytes
29 from werkzeug._internal import _empty_stream, _encode_idna
30 from werkzeug.filesystem import get_filesystem_encoding
31 from werkzeug.http import http_date, is_resource_modified, \
32 is_hop_by_hop_header
33 from werkzeug.urls import uri_to_iri, url_join, url_parse, url_quote
34 from werkzeug.datastructures import EnvironHeaders
35
36
37 def responder(f):
38 """Marks a function as responder. Decorate a function with it and it
39 will automatically call the return value as WSGI application.
40
41 Example::
42
43 @responder
44 def application(environ, start_response):
45 return Response('Hello World!')
46 """
47 return update_wrapper(lambda *a: f(*a)(*a[-2:]), f)
48
49
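The double call in ``responder`` above (``f(*a)`` followed by ``(*a[-2:])``) can be demonstrated end to end. This sketch copies the decorator verbatim and exercises it with a fake ``start_response``:

```python
from functools import update_wrapper

# The decorated function is called with (environ, start_response) and
# must return a WSGI app; the wrapper then invokes that app with the
# last two positional arguments, i.e. the same (environ, start_response).
def responder(f):
    return update_wrapper(lambda *a: f(*a)(*a[-2:]), f)

@responder
def application(environ, start_response):
    def app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello World!"]
    return app

seen = []
result = application({}, lambda status, headers: seen.append(status))
assert result == [b"Hello World!"]
assert seen == ["200 OK"]
```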
50 def get_current_url(environ, root_only=False, strip_querystring=False,
51 host_only=False, trusted_hosts=None):
52 """A handy helper function that recreates the full URL as IRI for the
53 current request or parts of it. Here's an example:
54
55 >>> from werkzeug.test import create_environ
56 >>> env = create_environ("/?param=foo", "http://localhost/script")
57 >>> get_current_url(env)
58 'http://localhost/script/?param=foo'
59 >>> get_current_url(env, root_only=True)
60 'http://localhost/script/'
61 >>> get_current_url(env, host_only=True)
62 'http://localhost/'
63 >>> get_current_url(env, strip_querystring=True)
64 'http://localhost/script/'
65
66 Optionally it verifies that the host is in a list of trusted hosts.
67 If the host is not in there it will raise a
68 :exc:`~werkzeug.exceptions.SecurityError`.
69
70 Note that the string returned might contain unicode characters as the
71 representation is an IRI, not a URI. If you need an ASCII-only
72 representation you can use the :func:`~werkzeug.urls.iri_to_uri`
73 function:
74
75 >>> from werkzeug.urls import iri_to_uri
76 >>> iri_to_uri(get_current_url(env))
77 'http://localhost/script/?param=foo'
78
79 :param environ: the WSGI environment to get the current URL from.
80 :param root_only: set `True` if you only want the root URL.
81 :param strip_querystring: set to `True` if you don't want the querystring.
82 :param host_only: set to `True` if the host URL should be returned.
83 :param trusted_hosts: a list of trusted hosts, see :func:`host_is_trusted`
84 for more information.
85 """
86 tmp = [environ['wsgi.url_scheme'], '://', get_host(environ, trusted_hosts)]
87 cat = tmp.append
88 if host_only:
89 return uri_to_iri(''.join(tmp) + '/')
90 cat(url_quote(wsgi_get_bytes(environ.get('SCRIPT_NAME', ''))).rstrip('/'))
91 cat('/')
92 if not root_only:
93 cat(url_quote(wsgi_get_bytes(environ.get('PATH_INFO', '')).lstrip(b'/')))
94 if not strip_querystring:
95 qs = get_query_string(environ)
96 if qs:
97 cat('?' + qs)
98 return uri_to_iri(''.join(tmp))
99
100
101 def host_is_trusted(hostname, trusted_list):
102 """Checks if a host is trusted against a list. This also takes care
103 of port normalization.
104
105 .. versionadded:: 0.9
106
107 :param hostname: the hostname to check
108 :param trusted_list: a list of hostnames to check against. If a
109 hostname starts with a dot it will match against
110 all subdomains as well.
111 """
112 if not hostname:
113 return False
114
115 if isinstance(trusted_list, string_types):
116 trusted_list = [trusted_list]
117
118 def _normalize(hostname):
119 if ':' in hostname:
120 hostname = hostname.rsplit(':', 1)[0]
121 return _encode_idna(hostname)
122
123 try:
124 hostname = _normalize(hostname)
125 except UnicodeError:
126 return False
127 for ref in trusted_list:
128 if ref.startswith('.'):
129 ref = ref[1:]
130 suffix_match = True
131 else:
132 suffix_match = False
133 try:
134 ref = _normalize(ref)
135 except UnicodeError:
136 return False
137 if ref == hostname:
138 return True
139 if suffix_match and hostname.endswith(b'.' + ref):
140 return True
141 return False
142
143
144 def get_host(environ, trusted_hosts=None):
145 """Return the real host for the given WSGI environment. This first checks
146 the `X-Forwarded-Host` header, then the normal `Host` header, and finally
147 the `SERVER_NAME` environment variable (using the first one it finds).
148
149 Optionally it verifies that the host is in a list of trusted hosts.
150 If the host is not in there it will raise a
151 :exc:`~werkzeug.exceptions.SecurityError`.
152
153 :param environ: the WSGI environment to get the host of.
154 :param trusted_hosts: a list of trusted hosts, see :func:`host_is_trusted`
155 for more information.
156 """
157 if 'HTTP_X_FORWARDED_HOST' in environ:
158 rv = environ['HTTP_X_FORWARDED_HOST'].split(',', 1)[0].strip()
159 elif 'HTTP_HOST' in environ:
160 rv = environ['HTTP_HOST']
161 else:
162 rv = environ['SERVER_NAME']
163 if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not \
164 in (('https', '443'), ('http', '80')):
165 rv += ':' + environ['SERVER_PORT']
166 if trusted_hosts is not None:
167 if not host_is_trusted(rv, trusted_hosts):
168 from werkzeug.exceptions import SecurityError
169 raise SecurityError('Host "%s" is not trusted' % rv)
170 return rv
171
172
173 def get_content_length(environ):
174 """Returns the content length from the WSGI environment as
175 integer. If it's not available or chunked transfer encoding is used,
176 ``None`` is returned.
177
178 .. versionadded:: 0.9
179
180 :param environ: the WSGI environ to fetch the content length from.
181 """
182 if environ.get('HTTP_TRANSFER_ENCODING', '') == 'chunked':
183 return None
184
185 content_length = environ.get('CONTENT_LENGTH')
186 if content_length is not None:
187 try:
188 return max(0, int(content_length))
189 except (ValueError, TypeError):
190 pass
191
192
193 def get_input_stream(environ, safe_fallback=True):
194 """Returns the input stream from the WSGI environment and wraps it
195 in the most sensible way possible. The stream returned is not the
196 raw WSGI stream in most cases but one that is safe to read from
197 without taking into account the content length.
198
199 If content length is not set, the stream will be empty for safety reasons.
200 If the WSGI server supports chunked or infinite streams, it should set
201 the ``wsgi.input_terminated`` value in the WSGI environ to indicate that.
202
203 .. versionadded:: 0.9
204
205 :param environ: the WSGI environ to fetch the stream from.
206 :param safe_fallback: use an empty stream as a safe fallback when the
207 content length is not set. Disabling this allows infinite streams,
208 which can be a denial-of-service risk.
209 """
210 stream = environ['wsgi.input']
211 content_length = get_content_length(environ)
212
213 # A WSGI extension that tells us if the input is terminated. In
214 # that case we return the stream unchanged as we know we can safely
215 # read it until the end.
216 if environ.get('wsgi.input_terminated'):
217 return stream
218
219 # If the request doesn't specify a content length, returning the stream is
220 # potentially dangerous because it could be infinite, malicious or not. If
221 # safe_fallback is true, return an empty stream instead for safety.
222 if content_length is None:
223 return _empty_stream if safe_fallback else stream
224
225 # Otherwise limit the stream to the content length
226 return LimitedStream(stream, content_length)
227
228
229 def get_query_string(environ):
230 """Returns the `QUERY_STRING` from the WSGI environment. This also takes
231 care about the WSGI decoding dance on Python 3 environments as a
232 native string. The string returned will be restricted to ASCII
233 characters.
234
235 .. versionadded:: 0.9
236
237 :param environ: the WSGI environment object to get the query string from.
238 """
239 qs = wsgi_get_bytes(environ.get('QUERY_STRING', ''))
240 # QUERY_STRING really should be ascii safe but some browsers
241 # will send us some unicode stuff (I am looking at you IE).
242 # In that case we want to urllib quote it badly.
243 return try_coerce_native(url_quote(qs, safe=':&%=+$!*\'(),'))
244
245
246 def get_path_info(environ, charset='utf-8', errors='replace'):
247 """Returns the `PATH_INFO` from the WSGI environment and properly
248 decodes it. This also takes care of the WSGI decoding dance
249 on Python 3 environments. If the `charset` is set to `None` a
250 bytestring is returned.
251
252 .. versionadded:: 0.9
253
254 :param environ: the WSGI environment object to get the path from.
255 :param charset: the charset for the path info, or `None` if no
256 decoding should be performed.
257 :param errors: the decoding error handling.
258 """
259 path = wsgi_get_bytes(environ.get('PATH_INFO', ''))
260 return to_unicode(path, charset, errors, allow_none_charset=True)
261
262
263 def get_script_name(environ, charset='utf-8', errors='replace'):
264 """Returns the `SCRIPT_NAME` from the WSGI environment and properly
265 decodes it. This also takes care of the WSGI decoding dance
266 on Python 3 environments. If the `charset` is set to `None` a
267 bytestring is returned.
268
269 .. versionadded:: 0.9
270
271 :param environ: the WSGI environment object to get the path from.
272 :param charset: the charset for the path, or `None` if no
273 decoding should be performed.
274 :param errors: the decoding error handling.
275 """
276 path = wsgi_get_bytes(environ.get('SCRIPT_NAME', ''))
277 return to_unicode(path, charset, errors, allow_none_charset=True)
278
279
280 def pop_path_info(environ, charset='utf-8', errors='replace'):
281 """Removes and returns the next segment of `PATH_INFO`, pushing it onto
282 `SCRIPT_NAME`. Returns `None` if there is nothing left on `PATH_INFO`.
283
284 If the `charset` is set to `None` a bytestring is returned.
285
286 If there are empty segments (``'/foo//bar'``) these are ignored but
287 properly pushed to the `SCRIPT_NAME`:
288
289 >>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'}
290 >>> pop_path_info(env)
291 'a'
292 >>> env['SCRIPT_NAME']
293 '/foo/a'
294 >>> pop_path_info(env)
295 'b'
296 >>> env['SCRIPT_NAME']
297 '/foo/a/b'
298
299 .. versionadded:: 0.5
300
301 .. versionchanged:: 0.9
302 The path is now decoded and a charset and encoding
303 parameter can be provided.
304
305 :param environ: the WSGI environment that is modified.
306 """
307 path = environ.get('PATH_INFO')
308 if not path:
309 return None
310
311 script_name = environ.get('SCRIPT_NAME', '')
312
313 # shift multiple leading slashes over
314 old_path = path
315 path = path.lstrip('/')
316 if path != old_path:
317 script_name += '/' * (len(old_path) - len(path))
318
319 if '/' not in path:
320 environ['PATH_INFO'] = ''
321 environ['SCRIPT_NAME'] = script_name + path
322 rv = wsgi_get_bytes(path)
323 else:
324 segment, path = path.split('/', 1)
325 environ['PATH_INFO'] = '/' + path
326 environ['SCRIPT_NAME'] = script_name + segment
327 rv = wsgi_get_bytes(segment)
328
329 return to_unicode(rv, charset, errors, allow_none_charset=True)
330
331
332 def peek_path_info(environ, charset='utf-8', errors='replace'):
333 """Returns the next segment on the `PATH_INFO` or `None` if there
334 is none. Works like :func:`pop_path_info` without modifying the
335 environment:
336
337 >>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'}
338 >>> peek_path_info(env)
339 'a'
340 >>> peek_path_info(env)
341 'a'
342
343 If the `charset` is set to `None` a bytestring is returned.
344
345 .. versionadded:: 0.5
346
347 .. versionchanged:: 0.9
348 The path is now decoded and a charset and encoding
349 parameter can be provided.
350
351 :param environ: the WSGI environment that is checked.
352 """
353 segments = environ.get('PATH_INFO', '').lstrip('/').split('/', 1)
354 if segments[0]:
355 return to_unicode(wsgi_get_bytes(segments[0]),
356 charset, errors, allow_none_charset=True)
357
358
359 def extract_path_info(environ_or_baseurl, path_or_url, charset='utf-8',
360 errors='replace', collapse_http_schemes=True):
361 """Extracts the path info from the given URL (or WSGI environment) and
362 path. The path info returned is a unicode string, not a bytestring
363 suitable for a WSGI environment. The URLs might also be IRIs.
364
365 If the path info could not be determined, `None` is returned.
366
367 Some examples:
368
369 >>> extract_path_info('http://example.com/app', '/app/hello')
370 u'/hello'
371 >>> extract_path_info('http://example.com/app',
372 ... 'https://example.com/app/hello')
373 u'/hello'
374 >>> extract_path_info('http://example.com/app',
375 ... 'https://example.com/app/hello',
376 ... collapse_http_schemes=False) is None
377 True
378
379 Instead of providing a base URL you can also pass a WSGI environment.
380
381 .. versionadded:: 0.6
382
383 :param environ_or_baseurl: a WSGI environment dict, a base URL or
384 base IRI. This is the root of the
385 application.
386 :param path_or_url: an absolute path from the server root, a
387 relative path (in which case it's the path info)
388 or a full URL. Also accepts IRIs and unicode
389 parameters.
390 :param charset: the charset for byte data in URLs
391 :param errors: the error handling on decode
392 :param collapse_http_schemes: if set to `False` the algorithm does
393 not assume that http and https on the
394 same server point to the same
395 resource.
396 """
397 def _normalize_netloc(scheme, netloc):
398 parts = netloc.split(u'@', 1)[-1].split(u':', 1)
399 if len(parts) == 2:
400 netloc, port = parts
401 if (scheme == u'http' and port == u'80') or \
402 (scheme == u'https' and port == u'443'):
403 port = None
404 else:
405 netloc = parts[0]
406 port = None
407 if port is not None:
408 netloc += u':' + port
409 return netloc
410
411 # make sure whatever we are working on is an IRI and parse it
412 path = uri_to_iri(path_or_url, charset, errors)
413 if isinstance(environ_or_baseurl, dict):
414 environ_or_baseurl = get_current_url(environ_or_baseurl,
415 root_only=True)
416 base_iri = uri_to_iri(environ_or_baseurl, charset, errors)
417 base_scheme, base_netloc, base_path = url_parse(base_iri)[:3]
418 cur_scheme, cur_netloc, cur_path = \
419 url_parse(url_join(base_iri, path))[:3]
420
421 # normalize the network location
422 base_netloc = _normalize_netloc(base_scheme, base_netloc)
423 cur_netloc = _normalize_netloc(cur_scheme, cur_netloc)
424
425 # is that IRI even on a known HTTP scheme?
426 if collapse_http_schemes:
427 for scheme in base_scheme, cur_scheme:
428 if scheme not in (u'http', u'https'):
429 return None
430 else:
431 if not (base_scheme in (u'http', u'https') and
432 base_scheme == cur_scheme):
433 return None
434
435 # are the netlocs compatible?
436 if base_netloc != cur_netloc:
437 return None
438
439 # are we below the application path?
440 base_path = base_path.rstrip(u'/')
441 if not cur_path.startswith(base_path):
442 return None
443
444 return u'/' + cur_path[len(base_path):].lstrip(u'/')
445
446
447 class ProxyMiddleware(object):
448 """This middleware routes some requests to the provided WSGI app and
449 proxies some requests to an external server. This is not something that
450 can generally be done on the WSGI layer and some HTTP requests will not
451 tunnel through correctly (for instance websocket requests cannot be
452 proxied through WSGI). As a result this is only really useful for some
453 basic requests that can be forwarded.
454
455 Example configuration::
456
457 app = ProxyMiddleware(app, {
458 '/static/': {
459 'target': 'http://127.0.0.1:5001/',
460 }
461 })
462
463 For each host options can be specified. The following options are
464 supported:
465
466 ``target``:
467 the target URL to dispatch to
468 ``remove_prefix``:
469 if set to `True` the prefix is chopped off the URL before
470 dispatching it to the server.
471 ``host``:
472 When set to ``'<auto>'`` which is the default the host header is
473 automatically rewritten to the URL of the target. If set to `None`
474 then the host header is unmodified from the client request. Any
475 other value overwrites the host header with that value.
476 ``headers``:
477 An optional dictionary of headers that should be sent with the
478 request to the target host.
479 ``ssl_context``:
480 In case this is an HTTPS target host then an SSL context can be
481 provided here (:class:`ssl.SSLContext`). This can be used for instance
482 to disable SSL verification.
483
484 In this case everything below ``'/static/'`` is proxied to the server on
485 port 5001. The host header is automatically rewritten; with ``remove_prefix``
486 enabled, the leading ``/static/`` prefix is also chopped off the URLs.
487
488 .. versionadded:: 0.14
489 """
490
491 def __init__(self, app, targets, chunk_size=2 << 13, timeout=10):
492 def _set_defaults(opts):
493 opts.setdefault('remove_prefix', False)
494 opts.setdefault('host', '<auto>')
495 opts.setdefault('headers', {})
496 opts.setdefault('ssl_context', None)
497 return opts
498 self.app = app
499 self.targets = dict(('/%s/' % k.strip('/'), _set_defaults(v))
500 for k, v in iteritems(targets))
501 self.chunk_size = chunk_size
502 self.timeout = timeout
503
504 def proxy_to(self, opts, path, prefix):
505 target = url_parse(opts['target'])
506
507 def application(environ, start_response):
508 headers = list(EnvironHeaders(environ).items())
509 headers[:] = [(k, v) for k, v in headers
510 if not is_hop_by_hop_header(k) and
511 k.lower() not in ('content-length', 'host')]
512 headers.append(('Connection', 'close'))
513 if opts['host'] == '<auto>':
514 headers.append(('Host', target.ascii_host))
515 elif opts['host'] is None:
516 headers.append(('Host', environ['HTTP_HOST']))
517 else:
518 headers.append(('Host', opts['host']))
519 headers.extend(opts['headers'].items())
520
521 remote_path = path
522 if opts['remove_prefix']:
523 remote_path = '%s/%s' % (
524 target.path.rstrip('/'),
525 remote_path[len(prefix):].lstrip('/')
526 )
527
528 content_length = environ.get('CONTENT_LENGTH')
529 chunked = False
530 if content_length not in ('', None):
531 headers.append(('Content-Length', content_length))
532 elif content_length is not None:  # empty string: length unknown, use chunked
533 headers.append(('Transfer-Encoding', 'chunked'))
534 chunked = True
535
536 try:
537 if target.scheme == 'http':
538 con = httplib.HTTPConnection(
539 target.ascii_host, target.port or 80,
540 timeout=self.timeout)
541 elif target.scheme == 'https':
542 con = httplib.HTTPSConnection(
543 target.ascii_host, target.port or 443,
544 timeout=self.timeout,
545 context=opts['ssl_context'])
546 con.connect()
547 con.putrequest(environ['REQUEST_METHOD'], url_quote(remote_path),
548 skip_host=True)
549
550 for k, v in headers:
551 if k.lower() == 'connection':
552 v = 'close'
553 con.putheader(k, v)
554 con.endheaders()
555
556 stream = get_input_stream(environ)
557 while 1:
558 data = stream.read(self.chunk_size)
559 if not data:
560 break
561 if chunked:
562 con.send(b'%x\r\n%s\r\n' % (len(data), data))
563 else:
564 con.send(data)
if chunked:
con.send(b'0\r\n\r\n')  # terminate the chunked request body
565
566 resp = con.getresponse()
567 except socket.error:
568 from werkzeug.exceptions import BadGateway
569 return BadGateway()(environ, start_response)
570
571 start_response('%d %s' % (resp.status, resp.reason),
572 [(k.title(), v) for k, v in resp.getheaders()
573 if not is_hop_by_hop_header(k)])
574
575 def read():
576 while 1:
577 try:
578 data = resp.read(self.chunk_size)
579 except socket.error:
580 break
581 if not data:
582 break
583 yield data
584 return read()
585 return application
586
587 def __call__(self, environ, start_response):
588 path = environ['PATH_INFO']
589 app = self.app
590 for prefix, opts in iteritems(self.targets):
591 if path.startswith(prefix):
592 app = self.proxy_to(opts, path, prefix)
593 break
594 return app(environ, start_response)
595
596
597 class SharedDataMiddleware(object):
598
599 """A WSGI middleware that provides static content for development
600 environments or simple server setups. Usage is quite simple::
601
602 import os
603 from werkzeug.wsgi import SharedDataMiddleware
604
605 app = SharedDataMiddleware(app, {
606 '/shared': os.path.join(os.path.dirname(__file__), 'shared')
607 })
608
609 The contents of the folder ``./shared`` will now be available on
610 ``http://example.com/shared/``. This is pretty useful during development
611 because a standalone media server is not required. One can also mount
612 files on the root folder and still continue to use the application because
613 the shared data middleware forwards all unhandled requests to the
614 application, even if the requests are below one of the shared folders.
615
616 If `pkg_resources` is available you can also tell the middleware to serve
617 files from package data::
618
619 app = SharedDataMiddleware(app, {
620 '/shared': ('myapplication', 'shared_files')
621 })
622
623 This will then serve the ``shared_files`` folder in the `myapplication`
624 Python package.
625
626 The optional `disallow` parameter can be a list of :func:`~fnmatch.fnmatch`
627 rules for files that are not accessible from the web. If `cache` is set to
628 `False` no caching headers are sent.
629
630 Currently the middleware does not support non-ASCII filenames. If the
631 encoding on the file system happens to be the encoding of the URI it may
632 work but this could also be by accident. We strongly suggest using
633 ASCII-only filenames for static files.
634
635 The middleware will guess the mimetype using the Python `mimetypes`
636 module. If it's unable to figure out the mimetype it will fall back
637 to `fallback_mimetype`.
638
639 .. versionchanged:: 0.5
640 The cache timeout is configurable now.
641
642 .. versionadded:: 0.6
643 The `fallback_mimetype` parameter was added.
644
645 :param app: the application to wrap. If you don't want to wrap an
646 application you can pass it :exc:`NotFound`.
647 :param exports: a list or dict of exported files and folders.
648 :param disallow: a list of :func:`~fnmatch.fnmatch` rules.
649 :param fallback_mimetype: the fallback mimetype for unknown files.
650 :param cache: enable or disable caching headers.
651 :param cache_timeout: the cache timeout in seconds for the headers.
652 """
653
654 def __init__(self, app, exports, disallow=None, cache=True,
655 cache_timeout=60 * 60 * 12, fallback_mimetype='text/plain'):
656 self.app = app
657 self.exports = []
658 self.cache = cache
659 self.cache_timeout = cache_timeout
660 if hasattr(exports, 'items'):
661 exports = iteritems(exports)
662 for key, value in exports:
663 if isinstance(value, tuple):
664 loader = self.get_package_loader(*value)
665 elif isinstance(value, string_types):
666 if os.path.isfile(value):
667 loader = self.get_file_loader(value)
668 else:
669 loader = self.get_directory_loader(value)
670 else:
671 raise TypeError('unknown def %r' % value)
672 self.exports.append((key, loader))
673 if disallow is not None:
674 from fnmatch import fnmatch
# the docstring documents a list of patterns; also accept one string
if isinstance(disallow, string_types):
disallow = [disallow]
675 self.is_allowed = lambda x: not any(fnmatch(x, p) for p in disallow)
676 self.fallback_mimetype = fallback_mimetype
677
678 def is_allowed(self, filename):
679 """Subclasses can override this method to disallow the access to
680 certain files. However by providing `disallow` in the constructor
681 this method is overwritten.
682 """
683 return True
684
685 def _opener(self, filename):
686 return lambda: (
687 open(filename, 'rb'),
688 datetime.utcfromtimestamp(os.path.getmtime(filename)),
689 int(os.path.getsize(filename))
690 )
691
692 def get_file_loader(self, filename):
693 return lambda x: (os.path.basename(filename), self._opener(filename))
694
695 def get_package_loader(self, package, package_path):
696 from pkg_resources import DefaultProvider, ResourceManager, \
697 get_provider
698 loadtime = datetime.utcnow()
699 provider = get_provider(package)
700 manager = ResourceManager()
701 filesystem_bound = isinstance(provider, DefaultProvider)
702
703 def loader(path):
704 if path is None:
705 return None, None
706 path = posixpath.join(package_path, path)
707 if not provider.has_resource(path):
708 return None, None
709 basename = posixpath.basename(path)
710 if filesystem_bound:
711 return basename, self._opener(
712 provider.get_resource_filename(manager, path))
713 s = provider.get_resource_string(manager, path)
714 return basename, lambda: (
715 BytesIO(s),
716 loadtime,
717 len(s)
718 )
719 return loader
720
721 def get_directory_loader(self, directory):
722 def loader(path):
723 if path is not None:
724 path = os.path.join(directory, path)
725 else:
726 path = directory
727 if os.path.isfile(path):
728 return os.path.basename(path), self._opener(path)
729 return None, None
730 return loader
731
732 def generate_etag(self, mtime, file_size, real_filename):
733 if not isinstance(real_filename, bytes):
734 real_filename = real_filename.encode(get_filesystem_encoding())
735 return 'wzsdm-%d-%s-%s' % (
736 mktime(mtime.timetuple()),
737 file_size,
738 adler32(real_filename) & 0xffffffff
739 )
740
741 def __call__(self, environ, start_response):
742 cleaned_path = get_path_info(environ)
743 if PY2:
744 cleaned_path = cleaned_path.encode(get_filesystem_encoding())
745 # sanitize the path for non unix systems
746 cleaned_path = cleaned_path.strip('/')
747 for sep in os.sep, os.altsep:
748 if sep and sep != '/':
749 cleaned_path = cleaned_path.replace(sep, '/')
750 path = '/' + '/'.join(x for x in cleaned_path.split('/')
751 if x and x != '..')
752 file_loader = None
753 for search_path, loader in self.exports:
754 if search_path == path:
755 real_filename, file_loader = loader(None)
756 if file_loader is not None:
757 break
758 if not search_path.endswith('/'):
759 search_path += '/'
760 if path.startswith(search_path):
761 real_filename, file_loader = loader(path[len(search_path):])
762 if file_loader is not None:
763 break
764 if file_loader is None or not self.is_allowed(real_filename):
765 return self.app(environ, start_response)
766
767 guessed_type = mimetypes.guess_type(real_filename)
768 mime_type = guessed_type[0] or self.fallback_mimetype
769 f, mtime, file_size = file_loader()
770
771 headers = [('Date', http_date())]
772 if self.cache:
773 timeout = self.cache_timeout
774 etag = self.generate_etag(mtime, file_size, real_filename)
775 headers += [
776 ('Etag', '"%s"' % etag),
777 ('Cache-Control', 'max-age=%d, public' % timeout)
778 ]
779 if not is_resource_modified(environ, etag, last_modified=mtime):
780 f.close()
781 start_response('304 Not Modified', headers)
782 return []
783 headers.append(('Expires', http_date(time() + timeout)))
784 else:
785 headers.append(('Cache-Control', 'public'))
786
787 headers.extend((
788 ('Content-Type', mime_type),
789 ('Content-Length', str(file_size)),
790 ('Last-Modified', http_date(mtime))
791 ))
792 start_response('200 OK', headers)
793 return wrap_file(environ, f)
794
795
796 class DispatcherMiddleware(object):
797
798 """Allows one to mount middlewares or applications in a WSGI application.
799 This is useful if you want to combine multiple WSGI applications::
800
801 app = DispatcherMiddleware(app, {
802 '/app2': app2,
803 '/app3': app3
804 })
805 """
806
807 def __init__(self, app, mounts=None):
808 self.app = app
809 self.mounts = mounts or {}
810
811 def __call__(self, environ, start_response):
812 script = environ.get('PATH_INFO', '')
813 path_info = ''
814 while '/' in script:
815 if script in self.mounts:
816 app = self.mounts[script]
817 break
818 script, last_item = script.rsplit('/', 1)
819 path_info = '/%s%s' % (last_item, path_info)
820 else:
821 app = self.mounts.get(script, self.app)
822 original_script_name = environ.get('SCRIPT_NAME', '')
823 environ['SCRIPT_NAME'] = original_script_name + script
824 environ['PATH_INFO'] = path_info
825 return app(environ, start_response)
826
827
828 @implements_iterator
829 class ClosingIterator(object):
830
831 """The WSGI specification requires that all middlewares and gateways
832 respect the `close` callback of an iterator. Because it is often useful
833 to attach an additional close action to a returned iterator, and writing
834 a custom iterator for that is tedious, this class can be used instead::
835
836 return ClosingIterator(app(environ, start_response), [cleanup_session,
837 cleanup_locals])
838
839 If there is just one close function it can be passed instead of the list.
840
841 A closing iterator is not needed if the application uses response objects
842 and finishes processing once the response is started::
843
844 try:
845 return response(environ, start_response)
846 finally:
847 cleanup_session()
848 cleanup_locals()
849 """
850
851 def __init__(self, iterable, callbacks=None):
852 iterator = iter(iterable)
853 self._next = partial(next, iterator)
854 if callbacks is None:
855 callbacks = []
856 elif callable(callbacks):
857 callbacks = [callbacks]
858 else:
859 callbacks = list(callbacks)
860 iterable_close = getattr(iterator, 'close', None)
861 if iterable_close:
862 callbacks.insert(0, iterable_close)
863 self._callbacks = callbacks
864
865 def __iter__(self):
866 return self
867
868 def __next__(self):
869 return self._next()
870
871 def close(self):
872 for callback in self._callbacks:
873 callback()
874
875
876 def wrap_file(environ, file, buffer_size=8192):
877 """Wraps a file. This uses the WSGI server's file wrapper if available
878 or otherwise the generic :class:`FileWrapper`.
879
880 .. versionadded:: 0.5
881
882 If the file wrapper from the WSGI server is used it's important to not
883 iterate over it from inside the application but to pass it through
884 unchanged. If you want to pass out a file wrapper inside a response
885 object you have to set :attr:`~BaseResponse.direct_passthrough` to `True`.
886
887 More information about file wrappers is available in :pep:`333`.
888
889 :param file: a :class:`file`-like object with a :meth:`~file.read` method.
890 :param buffer_size: number of bytes for one iteration.
891 """
892 return environ.get('wsgi.file_wrapper', FileWrapper)(file, buffer_size)
893
894
895 @implements_iterator
896 class FileWrapper(object):
897
898 """This class can be used to convert a :class:`file`-like object into
899 an iterable. It yields `buffer_size` blocks until the file is fully
900 read.
901
902 You should not use this class directly but rather use the
903 :func:`wrap_file` function that uses the WSGI server's file wrapper
904 support if it's available.
905
906 .. versionadded:: 0.5
907
908 If you're using this object together with a :class:`BaseResponse` you have
909 to use the `direct_passthrough` mode.
910
911 :param file: a :class:`file`-like object with a :meth:`~file.read` method.
912 :param buffer_size: number of bytes for one iteration.
913 """
914
915 def __init__(self, file, buffer_size=8192):
916 self.file = file
917 self.buffer_size = buffer_size
918
919 def close(self):
920 if hasattr(self.file, 'close'):
921 self.file.close()
922
923 def seekable(self):
924 if hasattr(self.file, 'seekable'):
925 return self.file.seekable()
926 if hasattr(self.file, 'seek'):
927 return True
928 return False
929
930 def seek(self, *args):
931 if hasattr(self.file, 'seek'):
932 self.file.seek(*args)
933
934 def tell(self):
935 if hasattr(self.file, 'tell'):
936 return self.file.tell()
937 return None
938
939 def __iter__(self):
940 return self
941
942 def __next__(self):
943 data = self.file.read(self.buffer_size)
944 if data:
945 return data
946 raise StopIteration()
947
948
949 @implements_iterator
950 class _RangeWrapper(object):
951 # private for now, but should we make it public in the future?
952
953 """This class can be used to convert an iterable object into
954 an iterable that will only yield a piece of the underlying content.
955 It yields blocks until the underlying stream range is fully read.
956 The yielded blocks will never exceed the block size of the original
957 iterator, but they may be smaller.
958
959 If you're using this object together with a :class:`BaseResponse` you have
960 to use the `direct_passthrough` mode.
961
962 :param iterable: an iterable object with a :meth:`__next__` method.
963 :param start_byte: byte from which read will start.
964 :param byte_range: how many bytes to read.
965 """
966
967 def __init__(self, iterable, start_byte=0, byte_range=None):
968 self.iterable = iter(iterable)
969 self.byte_range = byte_range
970 self.start_byte = start_byte
971 self.end_byte = None
972 if byte_range is not None:
973 self.end_byte = self.start_byte + self.byte_range
974 self.read_length = 0
975 self.seekable = hasattr(iterable, 'seekable') and iterable.seekable()
976 self.end_reached = False
977
978 def __iter__(self):
979 return self
980
981 def _next_chunk(self):
982 try:
983 chunk = next(self.iterable)
984 self.read_length += len(chunk)
985 return chunk
986 except StopIteration:
987 self.end_reached = True
988 raise
989
990 def _first_iteration(self):
991 chunk = None
992 if self.seekable:
993 self.iterable.seek(self.start_byte)
994 self.read_length = self.iterable.tell()
995 contextual_read_length = self.read_length
996 else:
997 while self.read_length <= self.start_byte:
998 chunk = self._next_chunk()
999 if chunk is not None:
1000 chunk = chunk[self.start_byte - self.read_length:]
1001 contextual_read_length = self.start_byte
1002 return chunk, contextual_read_length
1003
1004 def _next(self):
1005 if self.end_reached:
1006 raise StopIteration()
1007 chunk = None
1008 contextual_read_length = self.read_length
1009 if self.read_length == 0:
1010 chunk, contextual_read_length = self._first_iteration()
1011 if chunk is None:
1012 chunk = self._next_chunk()
1013 if self.end_byte is not None and self.read_length >= self.end_byte:
1014 self.end_reached = True
1015 return chunk[:self.end_byte - contextual_read_length]
1016 return chunk
1017
1018 def __next__(self):
1019 chunk = self._next()
1020 if chunk:
1021 return chunk
1022 self.end_reached = True
1023 raise StopIteration()
1024
1025 def close(self):
1026 if hasattr(self.iterable, 'close'):
1027 self.iterable.close()
1028
1029
1030 def _make_chunk_iter(stream, limit, buffer_size):
1031 """Helper for the line and chunk iter functions."""
1032 if isinstance(stream, (bytes, bytearray, text_type)):
1033         raise TypeError('Passed a string or bytes object instead of '
1034                         'a true iterator or stream.')
1035 if not hasattr(stream, 'read'):
1036 for item in stream:
1037 if item:
1038 yield item
1039 return
1040 if not isinstance(stream, LimitedStream) and limit is not None:
1041 stream = LimitedStream(stream, limit)
1042 _read = stream.read
1043 while 1:
1044 item = _read(buffer_size)
1045 if not item:
1046 break
1047 yield item
1048
1049
1050 def make_line_iter(stream, limit=None, buffer_size=10 * 1024,
1051 cap_at_buffer=False):
1052     """Safely iterates line-by-line over an input stream. If the input
1053     stream is not a :class:`LimitedStream` the `limit` parameter is mandatory.
1054
1055     This uses the stream's :meth:`~file.read` method internally, as opposed
1056     to the :meth:`~file.readline` method, which is unsafe and can only be
1057     used in violation of the WSGI specification. The same problem applies
1058     to the `__iter__` method of the input stream, which calls
1059     :meth:`~file.readline` without arguments.
1060
1061 If you need line-by-line processing it's strongly recommended to iterate
1062 over the input stream using this helper function.
1063
1064 .. versionchanged:: 0.8
1065 This function now ensures that the limit was reached.
1066
1067 .. versionadded:: 0.9
1068        added support for iterators as input streams.
1069
1070 .. versionadded:: 0.11.10
1071 added support for the `cap_at_buffer` parameter.
1072
1073     :param stream: the stream or iterable to iterate over.
1074     :param limit: the limit in bytes for the stream. (Usually the
1075                   content length. Not necessary if the `stream`
1076                   is a :class:`LimitedStream`.)
1077     :param buffer_size: The optional buffer size.
1078     :param cap_at_buffer: if this is set, chunks are split if they are
1079                           longer than the buffer size. Note that the
1080                           internal buffer may still grow to up to twice
1081                           the buffer size, however.
1082 """
1083 _iter = _make_chunk_iter(stream, limit, buffer_size)
1084
1085 first_item = next(_iter, '')
1086 if not first_item:
1087 return
1088
1089 s = make_literal_wrapper(first_item)
1090 empty = s('')
1091 cr = s('\r')
1092 lf = s('\n')
1093 crlf = s('\r\n')
1094
1095 _iter = chain((first_item,), _iter)
1096
1097 def _iter_basic_lines():
1098 _join = empty.join
1099 buffer = []
1100 while 1:
1101 new_data = next(_iter, '')
1102 if not new_data:
1103 break
1104 new_buf = []
1105 buf_size = 0
1106 for item in chain(buffer, new_data.splitlines(True)):
1107 new_buf.append(item)
1108 buf_size += len(item)
1109 if item and item[-1:] in crlf:
1110 yield _join(new_buf)
1111 new_buf = []
1112 elif cap_at_buffer and buf_size >= buffer_size:
1113 rv = _join(new_buf)
1114 while len(rv) >= buffer_size:
1115 yield rv[:buffer_size]
1116 rv = rv[buffer_size:]
1117 new_buf = [rv]
1118 buffer = new_buf
1119 if buffer:
1120 yield _join(buffer)
1121
1122 # This hackery is necessary to merge 'foo\r' and '\n' into one item
1123 # of 'foo\r\n' if we were unlucky and we hit a chunk boundary.
1124 previous = empty
1125 for item in _iter_basic_lines():
1126 if item == lf and previous[-1:] == cr:
1127 previous += item
1128 item = empty
1129 if previous:
1130 yield previous
1131 previous = item
1132 if previous:
1133 yield previous
1134
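The chunk-boundary handling that `make_line_iter` performs can be illustrated with a minimal standalone sketch. This version only splits on `'\n'` and so skips the bare-`'\r'` line endings and buffer capping of the real function; all names are illustrative:

```python
def line_iter(chunks):
    # minimal sketch: reassemble '\n'-terminated lines from arbitrary
    # chunks, so a line split across a chunk boundary comes out whole
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        while True:
            i = buffer.find(b"\n")
            if i < 0:
                break
            yield buffer[: i + 1]
            buffer = buffer[i + 1 :]
    if buffer:
        yield buffer  # trailing data without a final newline

print(list(line_iter([b"foo\r", b"\nbar\nb", b"az"])))
# [b'foo\r\n', b'bar\n', b'baz']
```

Note how the `'foo\r'` / `'\n'` boundary is merged into a single `'foo\r\n'` line, which is the same situation the "hackery" comment above refers to.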
1135
1136 def make_chunk_iter(stream, separator, limit=None, buffer_size=10 * 1024,
1137 cap_at_buffer=False):
1138 """Works like :func:`make_line_iter` but accepts a separator
1139 which divides chunks. If you want newline based processing
1140 you should use :func:`make_line_iter` instead as it
1141 supports arbitrary newline markers.
1142
1143 .. versionadded:: 0.8
1144
1145 .. versionadded:: 0.9
1146        added support for iterators as input streams.
1147
1148 .. versionadded:: 0.11.10
1149 added support for the `cap_at_buffer` parameter.
1150
1151     :param stream: the stream or iterable to iterate over.
1152     :param separator: the separator that divides chunks.
1153     :param limit: the limit in bytes for the stream. (Usually the
1154                   content length. Not necessary if the `stream`
1155                   is otherwise already limited.)
1156     :param buffer_size: The optional buffer size.
1157     :param cap_at_buffer: if this is set, chunks are split if they are
1158                           longer than the buffer size. Note that the
1159                           internal buffer may still grow to up to twice
1160                           the buffer size, however.
1161 """
1162 _iter = _make_chunk_iter(stream, limit, buffer_size)
1163
1164 first_item = next(_iter, '')
1165 if not first_item:
1166 return
1167
1168 _iter = chain((first_item,), _iter)
1169 if isinstance(first_item, text_type):
1170 separator = to_unicode(separator)
1171 _split = re.compile(r'(%s)' % re.escape(separator)).split
1172 _join = u''.join
1173 else:
1174 separator = to_bytes(separator)
1175 _split = re.compile(b'(' + re.escape(separator) + b')').split
1176 _join = b''.join
1177
1178 buffer = []
1179 while 1:
1180 new_data = next(_iter, '')
1181 if not new_data:
1182 break
1183 chunks = _split(new_data)
1184 new_buf = []
1185 buf_size = 0
1186 for item in chain(buffer, chunks):
1187 if item == separator:
1188 yield _join(new_buf)
1189 new_buf = []
1190 buf_size = 0
1191 else:
1192 buf_size += len(item)
1193 new_buf.append(item)
1194
1195 if cap_at_buffer and buf_size >= buffer_size:
1196 rv = _join(new_buf)
1197 while len(rv) >= buffer_size:
1198 yield rv[:buffer_size]
1199 rv = rv[buffer_size:]
1200 new_buf = [rv]
1201 buf_size = len(rv)
1202
1203 buffer = new_buf
1204 if buffer:
1205 yield _join(buffer)
1206
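The buffering that lets a separator span a chunk boundary can be sketched on its own. A minimal standalone version (illustrative names, not the werkzeug implementation; the `cap_at_buffer` logic is omitted):

```python
import re

def chunk_iter(chunks, separator):
    # minimal sketch: yield pieces delimited by `separator`, even when
    # the separator is split across two incoming chunks
    pattern = re.compile(re.escape(separator))
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        parts = pattern.split(buffer)
        buffer = parts.pop()  # the tail may still hold a partial separator
        for part in parts:
            yield part
    if buffer:
        yield buffer

print(list(chunk_iter([b"a;;b;", b";c"], b";;")))
# [b'a', b'b', b'c']
```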
1207
1208 @implements_iterator
1209 class LimitedStream(io.IOBase):
1210
1211 """Wraps a stream so that it doesn't read more than n bytes. If the
1212 stream is exhausted and the caller tries to get more bytes from it
1213 :func:`on_exhausted` is called which by default returns an empty
1214     string. The return value of that function is forwarded to the
1215     calling reader function, so if it returns an empty string
1216     :meth:`read` will return an empty string as well.
1217
1218 The limit however must never be higher than what the stream can
1219 output. Otherwise :meth:`readlines` will try to read past the
1220 limit.
1221
1222 .. admonition:: Note on WSGI compliance
1223
1224        calls to :meth:`readline` and :meth:`readlines` are not
1225        WSGI compliant because they pass a size argument to the
1226        readline methods. Unfortunately the WSGI PEP is not safely
1227        implementable without a size argument to :meth:`readline`
1228        because there is no EOF marker in the stream. As a result,
1229        the use of :meth:`readline` is discouraged.
1230
1231 For the same reason iterating over the :class:`LimitedStream`
1232 is not portable. It internally calls :meth:`readline`.
1233
1234        We strongly suggest using :meth:`read` only, or using
1235        :func:`make_line_iter`, which safely iterates line-by-line
1236        over a WSGI input stream.
1237
1238 :param stream: the stream to wrap.
1239     :param limit: the limit for the stream, must not be longer than
1240                   what the stream can provide if the stream does not
1241                   end with `EOF` (like `wsgi.input`)
1242 """
1243
1244 def __init__(self, stream, limit):
1245 self._read = stream.read
1246 self._readline = stream.readline
1247 self._pos = 0
1248 self.limit = limit
1249
1250 def __iter__(self):
1251 return self
1252
1253 @property
1254 def is_exhausted(self):
1255 """If the stream is exhausted this attribute is `True`."""
1256 return self._pos >= self.limit
1257
1258 def on_exhausted(self):
1259 """This is called when the stream tries to read past the limit.
1260 The return value of this function is returned from the reading
1261 function.
1262 """
1263         # Read zero bytes from the stream so that we get the
1264         # correct end of stream marker.
1265 return self._read(0)
1266
1267 def on_disconnect(self):
1268 """What should happen if a disconnect is detected? The return
1269 value of this function is returned from read functions in case
1270 the client went away. By default a
1271 :exc:`~werkzeug.exceptions.ClientDisconnected` exception is raised.
1272 """
1273 from werkzeug.exceptions import ClientDisconnected
1274 raise ClientDisconnected()
1275
1276 def exhaust(self, chunk_size=1024 * 64):
1277 """Exhaust the stream. This consumes all the data left until the
1278 limit is reached.
1279
1280         :param chunk_size: the size of each chunk. Chunks of this size
1281                            are read until the stream is exhausted and
1282                            the results thrown away.
1283 """
1284 to_read = self.limit - self._pos
1285 chunk = chunk_size
1286 while to_read > 0:
1287 chunk = min(to_read, chunk)
1288 self.read(chunk)
1289 to_read -= chunk
1290
1291 def read(self, size=None):
1292         """Read `size` bytes or, if `size` is not provided, read everything.
1293
1294         :param size: the number of bytes to read.
1295 """
1296 if self._pos >= self.limit:
1297 return self.on_exhausted()
1298         if size is None or size == -1:  # -1 is for consistency with file objects
1299 size = self.limit
1300 to_read = min(self.limit - self._pos, size)
1301 try:
1302 read = self._read(to_read)
1303 except (IOError, ValueError):
1304 return self.on_disconnect()
1305 if to_read and len(read) != to_read:
1306 return self.on_disconnect()
1307 self._pos += len(read)
1308 return read
1309
1310 def readline(self, size=None):
1311 """Reads one line from the stream."""
1312 if self._pos >= self.limit:
1313 return self.on_exhausted()
1314 if size is None:
1315 size = self.limit - self._pos
1316 else:
1317 size = min(size, self.limit - self._pos)
1318 try:
1319 line = self._readline(size)
1320 except (ValueError, IOError):
1321 return self.on_disconnect()
1322 if size and not line:
1323 return self.on_disconnect()
1324 self._pos += len(line)
1325 return line
1326
1327 def readlines(self, size=None):
1328 """Reads a file into a list of strings. It calls :meth:`readline`
1329         until the file is read to the end. It supports the optional
1330         `size` argument if the underlying stream supports it for
1331 `readline`.
1332 """
1333 last_pos = self._pos
1334 result = []
1335 if size is not None:
1336 end = min(self.limit, last_pos + size)
1337 else:
1338 end = self.limit
1339 while 1:
1340 if size is not None:
1341 size -= last_pos - self._pos
1342 if self._pos >= end:
1343 break
1344 result.append(self.readline(size))
1345 if size is not None:
1346 last_pos = self._pos
1347 return result
1348
1349 def tell(self):
1350 """Returns the position of the stream.
1351
1352 .. versionadded:: 0.9
1353 """
1354 return self._pos
1355
1356 def __next__(self):
1357 line = self.readline()
1358 if not line:
1359 raise StopIteration()
1360 return line
1361
1362 def readable(self):
1363 return True
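The core invariant of `LimitedStream` — never hand the caller more than `limit` bytes, and return an empty result once the limit is exhausted — can be sketched with a plain wrapper (illustrative only, not the werkzeug class; disconnect detection and readline support are omitted):

```python
import io

class CappedReader:
    # minimal sketch of the LimitedStream idea: clamp every read to
    # the remaining budget, return b"" once the limit is reached
    def __init__(self, stream, limit):
        self._read = stream.read
        self._pos = 0
        self.limit = limit

    def read(self, size=None):
        if self._pos >= self.limit:
            return b""
        if size is None or size == -1:
            size = self.limit
        to_read = min(self.limit - self._pos, size)
        data = self._read(to_read)
        self._pos += len(data)
        return data

s = CappedReader(io.BytesIO(b"hello world"), 5)
print(s.read(3), s.read(), s.read())  # b'hel' b'lo' b''
```

Even though the underlying `BytesIO` holds eleven bytes, the wrapper never reads past the fifth, which is exactly what makes it safe on a `wsgi.input` stream that has no EOF marker.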
66 Changes the deprecated werkzeug imports to the full canonical imports.
77 This is a terrible hack, don't trust the diff untested.
88
9 :copyright: (c) 2014 by the Werkzeug Team.
10 :license: BSD, see LICENSE for more details.
9 :copyright: 2007 Pallets
10 :license: BSD-3-Clause
1111 """
12 from __future__ import with_statement
12 import difflib
13 import os
14 import posixpath
15 import re
1316 import sys
14 import os
15 import re
16 import posixpath
17 import difflib
18
19
20 _from_import_re = re.compile(r'(\s*(>>>|\.\.\.)?\s*)from werkzeug import\s+')
21 _direct_usage = re.compile('(?<!`)(werkzeug\.)([a-zA-Z_][a-zA-Z0-9_]+)')
17
18
19 _from_import_re = re.compile(r"(\s*(>>>|\.\.\.)?\s*)from werkzeug import\s+")
20 _direct_usage = re.compile(r"(?<!`)(werkzeug\.)([a-zA-Z_][a-zA-Z0-9_]+)")
2221
2322 # not necessarily in sync with current werkzeug/__init__.py
2423 all_by_module = {
25 'werkzeug.debug': ['DebuggedApplication'],
26 'werkzeug.local': ['Local', 'LocalManager', 'LocalProxy', 'LocalStack',
27 'release_local'],
28 'werkzeug.serving': ['run_simple'],
29 'werkzeug.test': ['Client', 'EnvironBuilder', 'create_environ',
30 'run_wsgi_app'],
31 'werkzeug.testapp': ['test_app'],
32 'werkzeug.exceptions': ['abort', 'Aborter'],
33 'werkzeug.urls': ['url_decode', 'url_encode', 'url_quote',
34 'url_quote_plus', 'url_unquote', 'url_unquote_plus',
35 'url_fix', 'Href', 'iri_to_uri', 'uri_to_iri'],
36 'werkzeug.formparser': ['parse_form_data'],
37 'werkzeug.utils': ['escape', 'environ_property', 'append_slash_redirect',
38 'redirect', 'cached_property', 'import_string',
39 'dump_cookie', 'parse_cookie', 'unescape',
40 'format_string', 'find_modules', 'header_property',
41 'html', 'xhtml', 'HTMLBuilder', 'validate_arguments',
42 'ArgumentValidationError', 'bind_arguments',
43 'secure_filename'],
44 'werkzeug.wsgi': ['get_current_url', 'get_host', 'pop_path_info',
45 'peek_path_info', 'SharedDataMiddleware',
46 'DispatcherMiddleware', 'ClosingIterator', 'FileWrapper',
47 'make_line_iter', 'LimitedStream', 'responder',
48 'wrap_file', 'extract_path_info'],
49 'werkzeug.datastructures': ['MultiDict', 'CombinedMultiDict', 'Headers',
50 'EnvironHeaders', 'ImmutableList',
51 'ImmutableDict', 'ImmutableMultiDict',
52 'TypeConversionDict',
53 'ImmutableTypeConversionDict', 'Accept',
54 'MIMEAccept', 'CharsetAccept',
55 'LanguageAccept', 'RequestCacheControl',
56 'ResponseCacheControl', 'ETags', 'HeaderSet',
57 'WWWAuthenticate', 'Authorization',
58 'FileMultiDict', 'CallbackDict', 'FileStorage',
59 'OrderedMultiDict', 'ImmutableOrderedMultiDict'
60 ],
61 'werkzeug.useragents': ['UserAgent'],
62 'werkzeug.http': ['parse_etags', 'parse_date', 'http_date', 'cookie_date',
63 'parse_cache_control_header', 'is_resource_modified',
64 'parse_accept_header', 'parse_set_header', 'quote_etag',
65 'unquote_etag', 'generate_etag', 'dump_header',
66 'parse_list_header', 'parse_dict_header',
67 'parse_authorization_header',
68 'parse_www_authenticate_header', 'remove_entity_headers',
69 'is_entity_header', 'remove_hop_by_hop_headers',
70 'parse_options_header', 'dump_options_header',
71 'is_hop_by_hop_header', 'unquote_header_value',
72 'quote_header_value', 'HTTP_STATUS_CODES'],
73 'werkzeug.wrappers': ['BaseResponse', 'BaseRequest', 'Request', 'Response',
74 'AcceptMixin', 'ETagRequestMixin',
75 'ETagResponseMixin', 'ResponseStreamMixin',
76 'CommonResponseDescriptorsMixin', 'UserAgentMixin',
77 'AuthorizationMixin', 'WWWAuthenticateMixin',
78 'CommonRequestDescriptorsMixin'],
79 'werkzeug.security': ['generate_password_hash', 'check_password_hash'],
24 "werkzeug.debug": ["DebuggedApplication"],
25 "werkzeug.local": [
26 "Local",
27 "LocalManager",
28 "LocalProxy",
29 "LocalStack",
30 "release_local",
31 ],
32 "werkzeug.serving": ["run_simple"],
33 "werkzeug.test": ["Client", "EnvironBuilder", "create_environ", "run_wsgi_app"],
34 "werkzeug.testapp": ["test_app"],
35 "werkzeug.exceptions": ["abort", "Aborter"],
36 "werkzeug.urls": [
37 "url_decode",
38 "url_encode",
39 "url_quote",
40 "url_quote_plus",
41 "url_unquote",
42 "url_unquote_plus",
43 "url_fix",
44 "Href",
45 "iri_to_uri",
46 "uri_to_iri",
47 ],
48 "werkzeug.formparser": ["parse_form_data"],
49 "werkzeug.utils": [
50 "escape",
51 "environ_property",
52 "append_slash_redirect",
53 "redirect",
54 "cached_property",
55 "import_string",
56 "dump_cookie",
57 "parse_cookie",
58 "unescape",
59 "format_string",
60 "find_modules",
61 "header_property",
62 "html",
63 "xhtml",
64 "HTMLBuilder",
65 "validate_arguments",
66 "ArgumentValidationError",
67 "bind_arguments",
68 "secure_filename",
69 ],
70 "werkzeug.wsgi": [
71 "get_current_url",
72 "get_host",
73 "pop_path_info",
74 "peek_path_info",
75 "SharedDataMiddleware",
76 "DispatcherMiddleware",
77 "ClosingIterator",
78 "FileWrapper",
79 "make_line_iter",
80 "LimitedStream",
81 "responder",
82 "wrap_file",
83 "extract_path_info",
84 ],
85 "werkzeug.datastructures": [
86 "MultiDict",
87 "CombinedMultiDict",
88 "Headers",
89 "EnvironHeaders",
90 "ImmutableList",
91 "ImmutableDict",
92 "ImmutableMultiDict",
93 "TypeConversionDict",
94 "ImmutableTypeConversionDict",
95 "Accept",
96 "MIMEAccept",
97 "CharsetAccept",
98 "LanguageAccept",
99 "RequestCacheControl",
100 "ResponseCacheControl",
101 "ETags",
102 "HeaderSet",
103 "WWWAuthenticate",
104 "Authorization",
105 "FileMultiDict",
106 "CallbackDict",
107 "FileStorage",
108 "OrderedMultiDict",
109 "ImmutableOrderedMultiDict",
110 ],
111 "werkzeug.useragents": ["UserAgent"],
112 "werkzeug.http": [
113 "parse_etags",
114 "parse_date",
115 "http_date",
116 "cookie_date",
117 "parse_cache_control_header",
118 "is_resource_modified",
119 "parse_accept_header",
120 "parse_set_header",
121 "quote_etag",
122 "unquote_etag",
123 "generate_etag",
124 "dump_header",
125 "parse_list_header",
126 "parse_dict_header",
127 "parse_authorization_header",
128 "parse_www_authenticate_header",
129 "remove_entity_headers",
130 "is_entity_header",
131 "remove_hop_by_hop_headers",
132 "parse_options_header",
133 "dump_options_header",
134 "is_hop_by_hop_header",
135 "unquote_header_value",
136 "quote_header_value",
137 "HTTP_STATUS_CODES",
138 ],
139 "werkzeug.wrappers": [
140 "BaseResponse",
141 "BaseRequest",
142 "Request",
143 "Response",
144 "AcceptMixin",
145 "ETagRequestMixin",
146 "ETagResponseMixin",
147 "ResponseStreamMixin",
148 "CommonResponseDescriptorsMixin",
149 "UserAgentMixin",
150 "AuthorizationMixin",
151 "WWWAuthenticateMixin",
152 "CommonRequestDescriptorsMixin",
153 ],
154 "werkzeug.security": ["generate_password_hash", "check_password_hash"],
80155 # the undocumented easteregg ;-)
81 'werkzeug._internal': ['_easteregg']
156 "werkzeug._internal": ["_easteregg"],
82157 }
83158
84159 by_item = {}
85 for module, names in all_by_module.iteritems():
160 for module, names in all_by_module.items():
86161 for name in names:
87162 by_item[name] = module
88163
89164
90165 def find_module(item):
91 return by_item.get(item, 'werkzeug')
166 return by_item.get(item, "werkzeug")
92167
93168
94169 def complete_fromlist(fromlist, lineiter):
95170 fromlist = fromlist.strip()
96171 if not fromlist:
97172 return []
98 if fromlist[0] == '(':
99 if fromlist[-1] == ')':
100 return fromlist[1:-1].strip().split(',')
101 fromlist = fromlist[1:].strip().split(',')
173 if fromlist[0] == "(":
174 if fromlist[-1] == ")":
175 return fromlist[1:-1].strip().split(",")
176 fromlist = fromlist[1:].strip().split(",")
102177 for line in lineiter:
103178 line = line.strip()
104179 abort = False
105 if line.endswith(')'):
180 if line.endswith(")"):
106181 line = line[:-1]
107182 abort = True
108 fromlist.extend(line.split(','))
183 fromlist.extend(line.split(","))
109184 if abort:
110185 break
111186 return fromlist
112 elif fromlist[-1] == '\\':
113 fromlist = fromlist[:-1].strip().split(',')
187 elif fromlist[-1] == "\\":
188 fromlist = fromlist[:-1].strip().split(",")
114189 for line in lineiter:
115190 line = line.strip()
116191 abort = True
117 if line.endswith('\\'):
192 if line.endswith("\\"):
118193 abort = False
119194 line = line[:-1]
120 fromlist.extend(line.split(','))
195 fromlist.extend(line.split(","))
121196 if abort:
122197 break
123198 return fromlist
124 return fromlist.split(',')
199 return fromlist.split(",")
125200
126201
127202 def rewrite_from_imports(fromlist, indentation, lineiter):
132207 continue
133208 if len(item) == 1:
134209 parsed_items.append((item[0], None))
135 elif len(item) == 3 and item[1] == 'as':
210 elif len(item) == 3 and item[1] == "as":
136211 parsed_items.append((item[0], item[2]))
137212 else:
138 raise ValueError('invalid syntax for import')
213 raise ValueError("invalid syntax for import")
139214
140215 new_imports = {}
141216 for item, alias in parsed_items:
142217 new_imports.setdefault(find_module(item), []).append((item, alias))
143218
144219 for module_name, items in sorted(new_imports.items()):
145 fromlist_items = sorted(['%s%s' % (
146 item,
147 alias is not None and (' as ' + alias) or ''
148 ) for (item, alias) in items], reverse=True)
149
150 prefix = '%sfrom %s import ' % (indentation, module_name)
220 fromlist_items = sorted(
221 [
222 "%s%s" % (item, alias is not None and (" as " + alias) or "")
223 for (item, alias) in items
224 ],
225 reverse=True,
226 )
227
228 prefix = "%sfrom %s import " % (indentation, module_name)
151229 item_buffer = []
152230 while fromlist_items:
153231 item_buffer.append(fromlist_items.pop())
154 fromlist = ', '.join(item_buffer)
232 fromlist = ", ".join(item_buffer)
155233 if len(fromlist) + len(prefix) > 79:
156 yield prefix + ', '.join(item_buffer[:-1]) + ', \\'
234 yield prefix + ", ".join(item_buffer[:-1]) + ", \\"
157235 item_buffer = [item_buffer[-1]]
158236 # doctest continuations
159 indentation = indentation.replace('>', '.')
160 prefix = indentation + ' '
161 yield prefix + ', '.join(item_buffer)
237 indentation = indentation.replace(">", ".")
238 prefix = indentation + " "
239 yield prefix + ", ".join(item_buffer)
162240
163241
164242 def inject_imports(lines, imports):
165243 pos = 0
166244 for idx, line in enumerate(lines):
167 if re.match(r'(from|import)\s+werkzeug', line):
245 if re.match(r"(from|import)\s+werkzeug", line):
168246 pos = idx
169247 break
170 lines[pos:pos] = ['from %s import %s' % (mod, ', '.join(sorted(attrs)))
171 for mod, attrs in sorted(imports.items())]
248 lines[pos:pos] = [
249 "from %s import %s" % (mod, ", ".join(sorted(attrs)))
250 for mod, attrs in sorted(imports.items())
251 ]
172252
173253
174254 def rewrite_file(filename):
182262 # rewrite from imports
183263 match = _from_import_re.search(line)
184264 if match is not None:
185 fromlist = line[match.end():]
186 new_file.extend(rewrite_from_imports(fromlist,
187 match.group(1),
188 lineiter))
265 fromlist = line[match.end() :]
266 new_file.extend(rewrite_from_imports(fromlist, match.group(1), lineiter))
189267 continue
190268
191269 def _handle_match(match):
192270 # rewrite attribute access to 'werkzeug'
193271 attr = match.group(2)
194272 mod = find_module(attr)
195 if mod == 'werkzeug':
273 if mod == "werkzeug":
196274 return match.group(0)
197275 deferred_imports.setdefault(mod, []).append(attr)
198276 return attr
277
199278 new_file.append(_direct_usage.sub(_handle_match, line))
200279 if deferred_imports:
201280 inject_imports(new_file, deferred_imports)
202281
203282 for line in difflib.unified_diff(
204 old_file, new_file,
205 posixpath.normpath(posixpath.join('a', filename)),
206 posixpath.normpath(posixpath.join('b', filename)),
207 lineterm=''
283 old_file,
284 new_file,
285 posixpath.normpath(posixpath.join("a", filename)),
286 posixpath.normpath(posixpath.join("b", filename)),
287 lineterm="",
208288 ):
209289 print(line)
210290
211291
212292 def rewrite_in_folders(folders):
213293 for folder in folders:
214 for dirpath, dirnames, filenames in os.walk(folder):
294 for dirpath, _dirnames, filenames in os.walk(folder):
215295 for filename in filenames:
216296 filename = os.path.join(dirpath, filename)
217 if filename.endswith(('.rst', '.py')):
297 if filename.endswith((".rst", ".py")):
218298 rewrite_file(filename)
219299
220300
221301 def main():
222302 if len(sys.argv) == 1:
223 print('usage: werkzeug-import-rewrite.py [folders]')
303 print("usage: werkzeug-import-rewrite.py [folders]")
224304 sys.exit(1)
225305 rewrite_in_folders(sys.argv[1:])
226306
227307
228 if __name__ == '__main__':
308 if __name__ == "__main__":
229309 main()