Imported Upstream version 1.10.0
Bas Couwenberg
<!--- Is this a bug? -->
<!--- This issue tracker is only used for tracking bugs. Please use the mailing
list, if you have any question or need help: https://mapproxy.org/support -->

<!--- It is a bug! -->
<!--- Please provide a general summary of the issue in the Title above -->

## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->

## Expected Behavior
<!--- Tell us what should happen -->

## Actual Behavior
<!--- Tell us what happens instead -->

## Possible Fix
<!--- Not obligatory, but suggest a fix or reason for the bug -->

## Steps to Reproduce
<!--- Provide an unambiguous set of steps to reproduce this bug -->
<!--- Include _minimal_ but _complete_ configurations and test requests. -->
<!--- Use https://gist.github.com to link to larger configurations. -->
1.
2.
3.
4.

## Context
<!--- How has this bug affected you? What were you trying to accomplish? -->

## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Version used:
* Environment name and version (e.g. Python 2.7.5 with mod_wsgi 4.5.9):
* Server type and version:
* Operating System and version:
language: python

python:
  - "2.7"
  - "3.3"
  - "3.4"

services:
  - couchdb
  - riak
  - redis-server

addons:
  apt:

    - libprotoc-dev

env:
  global:
    - MAPPROXY_TEST_COUCHDB=http://127.0.0.1:5984
    - MAPPROXY_TEST_REDIS=127.0.0.1:6379

    # do not load /etc/boto.cfg with Python 3 incompatible plugin
    # https://github.com/travis-ci/travis-ci/issues/5246#issuecomment-166460882
    - BOTO_CONFIG=/doesnotexist

cache:
  directories:

install:
  # riak packages are not compatible with Python 3
  - "if [[ $TRAVIS_PYTHON_VERSION = '2.7' ]]; then pip install protobuf>=2.4.1 riak==2.2 riak_pb>=2.0; export MAPPROXY_TEST_RIAK_PBC=pbc://localhost:8087; fi"
  - "pip install -r requirements-tests.txt"
  - "pip freeze"

script: nosetests mapproxy
- Richard Duivenvoorde
- Stephan Holl
- Steven D. Lander
- Tom Payne
- Joseph Svrcek
1.10.0 2017-05-18
~~~~~~~~~~~~~~~~~

Improvements:

- Support for S3 cache.
- Support for the ArcGIS Compact Cache format version 1.
- Support for GeoPackage files.
- Support for Redis cache.
- Support meta_tiles for tiled sources with bulk_meta_tiles option.
- mbtiles/sqlite cache: Store multiple tiles in one transaction.
- mbtiles/sqlite cache: Make timeout and WAL configurable.
- ArcGIS REST source: Improve handling for ImageServer endpoints.
- ArcGIS REST source: Support FeatureInfo requests.
- ArcGIS REST source: Support min_res and max_res.
- Support merging of RGB images with fixed transparency.
- Coverages: Clip source requests at coverage boundaries.
- Coverages: Build the difference, union or intersection of multiple coverages.
- Coverages: Create coverages from webmercator tile coordinates like 05/182/123
  with expire tiles files.
- Coverages: Add native support for GeoJSON (no OGR/GDAL required).
- mapproxy-seed: Add --duration, -reseed-file and -reseed-interval options.

Fixes:

- Fix level selection for grids with small res_factor.
- mapproxy-util scales: Fix for Python 3.
- WMS: Fix FeatureInfo precision for transformed requests.
- Auth-API: Fix FeatureInfo for layers with limitto.
- Fix subpixel transformation deviations with Pillow 3.4 or higher.
- mapproxy-seed: Reduce log output, especially in --quiet mode.
- mapproxy-seed: Improve tile counter for tile grids with custom resolutions.
- mapproxy-seed: Improve saving of the seed progress for --continue.
- Fix band-merging when not all sources return an image.

Other:

- Python 2.6 is no longer supported.


1.9.1 2017-01-18
~~~~~~~~~~~~~~~~

Fixes:

- serve-develop: fixed reloader for Windows installations made
  with recent pip version (#279)

1.9.0 2016-07-22
~~~~~~~~~~~~~~~~

MapProxy is an open source proxy for geospatial data. It caches, accelerates and transforms data from existing map services and serves any desktop or web GIS client.

.. image:: https://mapproxy.org/mapproxy.png

MapProxy is a tile cache, but also offers many new and innovative features like full support for WMS clients.

MapProxy is actively developed and supported by `Omniscale <https://omniscale.com>`_, it is released under the Apache Software License 2.0, runs on Unix/Linux and Windows and is easy to install and to configure.

Go to https://mapproxy.org/ for more information.

The documentation is available at: https://mapproxy.org/docs/latest/

    def __init__(self, app, global_conf):
        self.app = app

    def __call__(self, environ, start_response):
        if random.randint(0, 1) == 1:
            return self.app(environ, start_response)
        else:
            start_response('403 Forbidden',
                [('content-type', 'text/plain')])
            return ['no luck today']

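To see how such a filter behaves, here is a self-contained sketch that can be run outside of MapProxy. The ``dummy_app`` stands in for the real MapProxy WSGI application; all names are illustrative:

```python
import random

def dummy_app(environ, start_response):
    # stands in for the MapProxy WSGI application
    start_response('200 OK', [('content-type', 'text/plain')])
    return [b'tile data']

class RandomAuthFilter(object):
    def __init__(self, app, global_conf=None):
        self.app = app

    def __call__(self, environ, start_response):
        # deny roughly half of all requests at random
        if random.randint(0, 1) == 1:
            return self.app(environ, start_response)
        start_response('403 Forbidden', [('content-type', 'text/plain')])
        return [b'no luck today']

# drive the filter with a minimal WSGI call
app = RandomAuthFilter(dummy_app)
captured = {}
def start_response(status, headers):
    captured['status'] = status

body = b''.join(app({}, start_response))
```

Repeated calls will alternate randomly between the ``200 OK`` tile response and the ``403 Forbidden`` answer.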
        self.app = app
        self.prefix = prefix

    def __call__(self, environ, start_response):
        # put authorize callback function into environment
        environ['mapproxy.authorize'] = self.authorize
        return self.app(environ, start_response)

    def authorize(self, service, layers=[], environ=None, **kw):
        allowed = denied = False
    def __init__(self, app, global_conf):
        self.app = app

    def __call__(self, environ, start_response):
        environ['mapproxy.authorize'] = self.authorize
        return self.app(environ, start_response)

    def authorize(self, service, layers=[], environ=None, **kw):
        instance_name = environ.get('mapproxy.instance_name', '')
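The ``authorize`` callback returns a result dictionary with an ``authorized`` state and per-layer permissions. A self-contained sketch of a callback that only permits one layer (the whitelist and layer names are hypothetical; see the Auth-API documentation for the full result format):

```python
ALLOWED_LAYERS = {'osm'}  # hypothetical whitelist of permitted layers

def authorize(service, layers=[], environ=None, **kw):
    # build per-layer permissions; 'partial' tells MapProxy to
    # check the individual layer entries
    perms = {}
    for layer in layers:
        perms[layer] = {
            'map': layer in ALLOWED_LAYERS,
            'featureinfo': False,
        }
    return {'authorized': 'partial', 'layers': perms}

result = authorize('wms.map', layers=['osm', 'secret'])
```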
.. versionadded:: 1.2.0

MapProxy supports multiple backends to store the internal tiles. The default backend is file based and does not require any further configuration.


Configuration
=============


The following backend types are available.

- :ref:`cache_file`
- :ref:`cache_mbtiles`
- :ref:`cache_sqlite`
- :ref:`cache_geopackage`
- :ref:`cache_couchdb`
- :ref:`cache_riak`
- :ref:`cache_redis`
- :ref:`cache_s3`
- :ref:`cache_compact`

.. _cache_file:

``file``
========

.. versionadded:: 1.6.0

.. _cache_mbtiles:

``mbtiles``
===========

The MBTiles format specification does not include any timestamps for each tile, so the seeding function is limited. If you include any ``refresh_before`` time in a seed task, all tiles will be recreated regardless of the value. The cleanup process does not support any ``remove_before`` times for MBTiles and it always removes all tiles.
Use the ``--summary`` option of the ``mapproxy-seed`` tool.

The note about ``bulk_meta_tiles`` for SQLite below applies to MBTiles as well.

.. _cache_sqlite:

``sqlite``
==========

.. versionadded:: 1.6.0

Use SQLite databases to store the tiles, similar to the ``mbtiles`` cache. The difference to the ``mbtiles`` cache is that the ``sqlite`` cache stores each level in a separate database. This makes it easy to remove complete levels during ``mapproxy-seed`` cleanup processes. The ``sqlite`` cache also stores the timestamp of each tile.

Available options:

``dirname``:
  The directory where the level databases will be stored.

``tile_lock_dir``:
  Directory where MapProxy should write lock files when it creates new tiles for this cache. Defaults to ``cache_data/tile_locks``.

        type: sqlite
        directory: /path/to/cache

.. note::

    .. versionadded:: 1.10.0

    All tiles from a meta-tile request are stored in one transaction in the SQLite file to increase performance. You need to activate the :ref:`bulk_meta_tiles <bulk_meta_tiles>` option to get the same benefit when you are using tiled sources.

    ::

      caches:
        sqlite_cache:
          sources: [mytilesource]
          bulk_meta_tiles: true
          grids: [GLOBAL_MERCATOR]
          cache:
            type: sqlite
            directory: /path/to/cache

.. _cache_couchdb:

``couchdb``
===========
The ``_attachments`` part is the internal structure of CouchDB where the tile itself is stored. You can access the tile directly at ``http://localhost:9999/mywms_tiles/mygrid-3-1-2/tile``.

.. _cache_riak:

``riak``
========

      default_ports:
        pb: 8087
        http: 8098

.. _cache_redis:

``redis``
=========

.. versionadded:: 1.10.0

Store tiles in a `Redis <https://redis.io/>`_ in-memory database. This backend is useful for short-term caching. A typical use case is a small Redis cache that allows you to benefit from meta-tiling.

Your Redis database should be configured with the ``maxmemory`` and ``maxmemory-policy`` options to limit the memory usage. For example::

    maxmemory 256mb
    maxmemory-policy volatile-ttl


Requirements
------------

You will need the `Python Redis client <https://pypi.python.org/pypi/redis>`_. You can install it in the usual way, for example with ``pip install redis``.

Configuration
-------------

Available options:

``host``:
  Host name of the Redis server. Defaults to ``127.0.0.1``.

``port``:
  Port of the Redis server. Defaults to ``6379``.

``db``:
  Number of the Redis database. Please refer to the Redis documentation. Defaults to ``0``.

``prefix``:
  The prefix added to each tile key in the Redis cache. Used to distinguish tiles from different caches and grids. Defaults to ``cache-name_grid-name``.

``default_ttl``:
  The default time-to-live of each tile in the Redis cache in seconds. Defaults to 3600 seconds (1 hour).


Example
-------

::

    redis_cache:
      sources: [mywms]
      grids: [mygrid]
      cache:
        type: redis
        default_ttl: 600

.. _cache_geopackage:

``geopackage``
==============

.. versionadded:: 1.10.0

Store tiles in a `GeoPackage <http://www.geopackage.org/>`_ database. MapProxy creates a tile table if one isn't defined and populates the required metadata fields.
This backend is good for datasets that require portability.

Available options:

``filename``:
  The path to the GeoPackage file. Defaults to ``cachename.gpkg``.

``table_name``:
  The name of the table where the tiles should be stored (or retrieved from, if using an existing cache). Defaults to ``cachename_gridname``.

``levels``:
  Set this to ``true`` to cache in a directory where each level is stored in a separate GeoPackage. Defaults to ``false``.
  If set to ``true``, ``filename`` is ignored.

``directory``:
  If ``levels`` is ``true``, use this option to specify the directory in which the GeoPackage files are stored.

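A sketch of the per-level layout could look like the following; the cache name and paths are placeholders::

    caches:
      geopackage_level_cache:
        sources: [mywms]
        grids: [GLOBAL_MERCATOR]
        cache:
          type: geopackage
          levels: true
          directory: /path/to/gpkg_dir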
You can set ``sources`` to an empty list if you use an existing GeoPackage file and do not have a source.

::

    caches:
      geopackage_cache:
        sources: []
        grids: [GLOBAL_MERCATOR]
        cache:
          type: geopackage
          filename: /path/to/bluemarble.gpkg
          table_name: bluemarble_tiles

.. note::

    The GeoPackage format specification does not include any timestamps for each tile, so the seeding function is limited. If you include any ``refresh_before`` time in a seed task, all tiles will be recreated regardless of the value. The cleanup process does not support any ``remove_before`` times for GeoPackage and it always removes all tiles.
    Use the ``--summary`` option of the ``mapproxy-seed`` tool.


.. _cache_s3:

``s3``
======

.. versionadded:: 1.10.0

Store tiles in the `Amazon Simple Storage Service (S3) <https://aws.amazon.com/s3/>`_.


Requirements
------------

You will need the Python `boto3 <https://github.com/boto/boto3>`_ package. You can install it in the usual way, for example with ``pip install boto3``.

Configuration
-------------

Available options:

``bucket_name``:
  The bucket used for this cache. You can set the default bucket with ``globals.cache.s3.bucket_name``.

``profile_name``:
  Optional profile name for `shared credentials <http://boto3.readthedocs.io/en/latest/guide/configuration.html>`_ for this cache. Alternative methods of authentication are the ``AWS_ACCESS_KEY_ID`` and ``AWS_SECRET_ACCESS_KEY`` environment variables, or an `IAM role <http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html>`_ when using an Amazon EC2 instance.
  You can set the default profile with ``globals.cache.s3.profile_name``.

``directory``:
  Base directory (path) where all tiles are stored.

``directory_layout``:
  Defines the directory layout of the tiles (``12/12345/67890.png``, ``L12/R00010932/C00003039.png``, etc.). See :ref:`cache_file` for available options. Defaults to ``tms`` (e.g. ``12/12345/67890.png``). This cache also supports ``reverse_tms``, where tiles are stored as ``y/x/z.format``. See the note below.

.. note::
    The hierarchical ``directory_layouts`` can hit limitations of S3 *"if you are routinely processing 100 or more requests per second"*. ``directory_layout: reverse_tms`` can work around this limitation. Please read `S3 Request Rate and Performance Considerations <http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html>`_ for more information on this issue.

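To illustrate the difference between the two layouts, this sketch maps a tile coordinate to an object key. The format strings are illustrative, not MapProxy's internal code:

```python
def tile_key(x, y, z, layout='tms', ext='png'):
    # 'tms' keys start with the zoom level, 'reverse_tms' with the
    # (well distributed) y value, which spreads keys across S3 partitions
    if layout == 'tms':
        return '%d/%d/%d.%s' % (z, x, y, ext)
    if layout == 'reverse_tms':
        return '%d/%d/%d.%s' % (y, x, z, ext)
    raise ValueError('unknown layout: %s' % layout)

print(tile_key(12345, 67890, 12))                 # 12/12345/67890.png
print(tile_key(12345, 67890, 12, 'reverse_tms'))  # 67890/12345/12.png
```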
Example
-------

::

    caches:
      my_layer_20110501_epsg_4326_cache_out:
        sources: [my_layer_20110501_cache]
        cache:
          type: s3
          directory: /1.0.0/my_layer/default/20110501/4326/
          bucket_name: my-s3-tiles-cache

    globals:
      cache:
        s3:
          profile_name: default


.. _cache_compact:

``compact``
===========

.. versionadded:: 1.10.0

Store tiles in ArcGIS-compatible compact cache files. A single compact cache ``.bundle`` file stores up to about 16,000 tiles. There is one additional ``.bundlx`` index file for each ``.bundle`` data file.

Only version 1 of the compact cache format (ArcGIS 10.0-10.2) is supported. Version 2 (ArcGIS 10.3 or higher) is not supported at the moment.

Available options:

``directory``:
  Directory where MapProxy should store the level directories. This will not add the cache name or grid name to the path. You can use this option to point MapProxy to an existing compact cache.

``version``:
  The version of the ArcGIS compact cache format. This option is required.


You can set ``sources`` to an empty list if you use existing compact cache files and do not have a source.


The following configuration will load tiles from ``/path/to/cache/L00/R0000C0000.bundle``, etc.

::

    caches:
      compact_cache:
        sources: []
        grids: [webmercator]
        cache:
          type: compact
          version: 1
          directory: /path/to/cache

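The bundle a tile lands in can be computed from its coordinate. A sketch, assuming 128x128 tiles per bundle (128*128 = 16384, the "about 16,000 tiles" above) and the ``LZZ/RrrrrCcccc.bundle`` naming with hexadecimal row/column offsets shown in the example path:

```python
BUNDLE_SIZE = 128  # assumption: 128x128 tiles per bundle

def bundle_path(x, y, z):
    # row/column of the bundle's first tile, rendered as four hex digits
    r = (y // BUNDLE_SIZE) * BUNDLE_SIZE
    c = (x // BUNDLE_SIZE) * BUNDLE_SIZE
    return 'L%02d/R%04xC%04x.bundle' % (z, r, c)

print(bundle_path(0, 0, 0))      # L00/R0000C0000.bundle
print(bundle_path(300, 200, 5))  # L05/R0080C0100.bundle
```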
.. note::

    The compact cache format does not include any timestamps for each tile, so the seeding function is limited. If you include any ``refresh_before`` time in a seed task, all tiles will be recreated regardless of the value. The cleanup process does not support any ``remove_before`` times for compact caches and it always removes all tiles.
    Use the ``--summary`` option of the ``mapproxy-seed`` tool.


.. note::

    The compact cache format is append-only to allow parallel read and write operations. Removing or refreshing tiles with ``mapproxy-seed`` does not reduce the size of the cache files. Therefore, this format is not suitable for caches that require frequent updates.
# built documents.
#
# The short X.Y version.
version = '1.10'
# The full version, including alpha/beta/rc tags.
release = '1.10.0a0'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
""""""""""""""""""""""""""
If set to ``true``, MapProxy will only issue a single request to the source. This option can reduce the request latency for uncached areas (on-demand caching).

By default MapProxy requests all uncached meta-tiles that intersect the requested bbox. With a typical configuration it is not uncommon that a request triggers four requests, each larger than 2000x2000 pixels. With the ``minimize_meta_requests`` option enabled, each request will trigger only one request to the source. That request will be aligned to the next tile boundaries and the tiles will be cached.

.. index:: watermark

"""""""""""""""""""""""""""""""""

Change the ``meta_size`` and ``meta_buffer`` of this cache. See :ref:`global cache options <meta_size>` for more details.

``bulk_meta_tiles``
"""""""""""""""""""

Enables meta-tile handling for tiled sources. See :ref:`global cache options <meta_size>` for more details.

``image``
"""""""""
""""""""

The extent of your grid. You can use either a list or a string with the lower-left and upper-right coordinates. You can set the SRS of the coordinates with the ``bbox_srs`` option. If that option is not set, the ``srs`` of the grid will be used.

MapProxy always expects your BBOX coordinates in west, south, east, north order (min x, min y, max x, max y), regardless of your SRS :ref:`axis order <axis_order>`.

::

    bbox: [0, 40, 15, 55]
The following values are supported:

``ll`` or ``sw``:
  The x=0, y=0 tile is in the lower-left/south-west corner of the tile grid. This is the default.

``ul`` or ``nw``:
  The x=0, y=0 tile is in the upper-left/north-west corner of the tile grid.

``cache``
"""""""""

The following options define how tiles are created and stored. Most options can also be set individually for each cache.

.. versionadded:: 1.6.0 ``tile_lock_dir``
.. versionadded:: 1.10.0 ``bulk_meta_tiles``


.. _meta_size:

``meta_size``
  MapProxy does not make a single request for every tile it needs; instead it requests a large meta-tile that consists of multiple tiles. ``meta_size`` defines how large a meta-tile is. A ``meta_size`` of ``[4, 4]`` will request 16 tiles in one pass. With a tile size of 256x256 this results in one 1024x1024 request to the source. Tiled sources are still requested tile by tile, but you can configure MapProxy to load multiple tiles in bulk with ``bulk_meta_tiles``.


.. _bulk_meta_tiles:

``bulk_meta_tiles``
  Enables meta-tile handling for caches with tile sources.
  If set to ``true``, MapProxy will request neighboring tiles from the source even if only one tile is requested from the cache. ``meta_size`` defines how many tiles should be requested in one step and ``concurrent_tile_creators`` defines how many requests are made in parallel. This option improves the performance of caches that can store multiple tiles with one request, like SQLite/MBTiles, but not of the ``file`` cache.

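A sketch of how these options could be combined in the ``globals`` section; the values are illustrative, not recommendations::

    globals:
      cache:
        meta_size: [4, 4]
        meta_buffer: 80
        bulk_meta_tiles: true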

``meta_buffer``
  MapProxy will increase the size of each meta-tile request by this number of

  can either be absolute (e.g. ``/tmp/lock/mapproxy``) or relative to the
  mapproxy.yaml file. Defaults to ``./cache_data/dir_of_the_cache/tile_locks``.

``concurrent_tile_creators``
  This limits the number of parallel requests MapProxy will make to a source. This limit is per request for this cache, not for all MapProxy requests. To limit the requests MapProxy makes to a single server, use the ``concurrent_requests`` option.

  Example: A request in an uncached region requires MapProxy to fetch four meta-tiles. A ``concurrent_tile_creators`` value of two allows MapProxy to make two requests to the source WMS in parallel. The splitting of the meta-tile and the encoding of the new tiles will happen in parallel, too.


``link_single_color_images``
    http:
      ssl_ca_certs: /etc/ssl/certs/ca-certificates.crt

If you want to use SSL but do not need certificate verification, then you can disable it with the ``ssl_no_cert_checks`` option. You can also disable this check on a source level, see :ref:`WMS source options <wms_source_ssl_no_cert_checks>`.

::

    http:
MapProxy supports coverages for :doc:`sources <sources>` and in the :doc:`mapproxy-seed tool <seed>`. Refer to the corresponding sections in the documentation.


There are five different ways to describe a coverage:

- a simple rectangular bounding box,
- a text file with one or more (multi)polygons in WKT format,
- a GeoJSON file with (multi)polygon features,
- (multi)polygons from any data source readable with OGR (e.g. Shapefile, GeoJSON, PostGIS),
- a file with webmercator tile coordinates.

.. versionadded:: 1.10

    You can also build intersections, unions and differences between multiple coverages.

Requirements
------------

For simple box coverages.

``bbox`` or ``datasource``:
  A simple BBOX as a list of minx, miny, maxx, maxy, e.g. ``[4, -30, 10, -28]``, or as a string ``4,-30,10,-28``.

Polygon file
""""""""""""

``datasource``:
  The path to the polygon file. Should be relative to the proxy configuration or absolute.

GeoJSON
"""""""

.. versionadded:: 1.10
    Previous versions required OGR/GDAL for reading GeoJSON.

You can use GeoJSON files with Polygon and MultiPolygon geometries. FeatureCollections and Features of these geometries are supported as well. MapProxy uses OGR to read GeoJSON files if you define a ``where`` filter.

``datasource``:
  The path to the GeoJSON file. Should be relative to the proxy configuration or absolute.

OGR datasource
""""""""""""""

  statement (e.g. ``'CNTRY_NAME="Germany"'``) or a full select statement. Refer to the
  `OGR SQL support documentation <http://www.gdal.org/ogr/ogr_sql.html>`_. If this
  option is unset, the first layer from the datasource will be used.


Expire tiles
""""""""""""

.. versionadded:: 1.10

Text file with webmercator tile coordinates. The tiles should be in ``z/x/y`` format (e.g. ``14/1283/6201``),
with one tile coordinate per line. Only tiles in the webmercator grid are supported (the origin is always ``nw``).

``expire_tiles``:
  File or directory with expire tile files. Directories are loaded recursively.

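A coverage based on expire-tile files could be sketched as follows; the coverage name and path are placeholders::

    coverages:
      expired_tiles:
        expire_tiles: /path/to/expire_tiles/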
Union
"""""

.. versionadded:: 1.10

A union coverage contains the combined coverage of one or more sub-coverages. This can be used to combine multiple coverages for a single source. Each sub-coverage can be of any supported type and SRS.

``union``:
  A list of multiple coverages.

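For example, a union of a BBOX coverage and a WKT polygon file might look like this; the filename is a placeholder::

    coverage:
      union:
        - bbox: [5, 50, 10, 55]
          srs: 'EPSG:4326'
        - datasource: coverage.wkt
          srs: 'EPSG:4326'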
Difference
""""""""""

.. versionadded:: 1.10

A difference coverage subtracts the coverage of the other sub-coverages from the first coverage. This can be used to exclude parts from a coverage. Each sub-coverage can be of any supported type and SRS.

``difference``:
  A list of multiple coverages.


Intersection
""""""""""""

.. versionadded:: 1.10

An intersection coverage contains only areas that are covered by all sub-coverages. This can be used to limit a larger coverage to a smaller area. Each sub-coverage can be of any supported type and SRS.

``intersection``:
  A list of multiple coverages.


Clipping
--------

.. versionadded:: 1.10.0

By default MapProxy tries to fetch and serve the full source image even if a coverage only touches it.
Clipping by coverage can be enabled by setting ``clip: true``. If enabled, all areas outside the coverage will be converted to transparent pixels.

The ``clip`` option is only active for source coverages, not for seeding coverages.


Examples
--------

        srs: 'EPSG:4326'


Example of an intersection coverage with clipping::

    sources:
      mywms:
        type: wms
        req:
          url: http://example.com/service?
          layers: base
        coverage:
          clip: true
          intersection:
            - bbox: [5, 50, 10, 55]
              srs: 'EPSG:4326'
            - datasource: coverage.geojson
              srs: 'EPSG:4326'


mapproxy-seed
"""""""""""""

114 | 114 | |
115 | 115 | <Directory /path/to/mapproxy/> |
116 | 116 | Order deny,allow |
117 | Require all granted # for Apache 2.4 | |
118 | # Allow from all # for Apache 2.2 | |
117 | # For Apache 2.4: | |
118 | Require all granted | |
119 | # For Apache 2.2: | |
120 | # Allow from all | |
119 | 121 | </Directory> |
120 | 122 | |
121 | 123 |
93 | 93 | |
94 | 94 | GDAL *(optional)* |
95 | 95 | ~~~~~~~~~~~~~~~~~ |
96 | The :doc:`coverage feature <coverages>` allows you to read geometries from OGR datasources (Shapefiles, PostGIS, etc.). This package is optional and only required for OGR datasource support. OGR is part of GDAL (``libgdal-dev``). | |
96 | The :doc:`coverage feature <coverages>` allows you to read geometries from OGR datasources (Shapefiles, PostGIS, etc.). This package is optional and only required for OGR datasource support (BBOX, WKT and GeoJSON coverages are supported natively). OGR is part of GDAL (``libgdal-dev``). | |
97 | 97 | |
98 | 98 | .. _lxml_install: |
99 | 99 |
0 | 0 | Installation on Windows |
1 | 1 | ======================= |
2 | 2 | |
3 | .. note:: You can also :doc:`install MapProxy inside an existing OSGeo4W installation<install_osgeo4w>`. | |
4 | ||
5 | At frist you need a working Python installation. You can download Python from: http://www.python.org/download/. MapProxy requires Python 2.7, 3.3 or 3.4. Python 2.6 should still work, but it is no longer officially supported. | |
6 | ||
3 | First you need a working Python installation. You can download Python from: https://www.python.org/download/. MapProxy requires Python 2.7, 3.3, 3.4, 3.5 or 3.6. Python 2.6 should still work, but it is no longer officially supported. We recommend the latest available 2.7 version. | |
7 | 4 | |
8 | 5 | Virtualenv |
9 | 6 | ---------- |
23 | 20 | |
24 | 21 | .. note:: Apache mod_wsgi does not work well with virtualenv on Windows. If you want to use mod_wsgi for deployment, then you should skip the creation of the virtualenv.
25 | 22 | |
26 | After you activated the new environment, you have access to ``python`` and ``easy_install``. | |
23 | After activating the new environment, you have access to ``python`` and ``pip``. | |
27 | 24 | To install MapProxy with most dependencies call:: |
28 | 25 | |
29 | easy_install MapProxy | |
26 | pip install MapProxy | |
30 | 27 | |
31 | 28 | This might take a minute. You can skip the next step. |
32 | 29 | |
33 | 30 | |
34 | Setuptools | |
35 | ---------- | |
31 | PIP | |
32 | --- | |
36 | 33 | |
37 | MapProxy and most dependencies can be installed with the ``easy_install`` command. | |
38 | You need to `install the setuptool package <http://pypi.python.org/pypi/setuptools>`_ to get the ``easy_install`` command. | |
34 | MapProxy and most dependencies can be installed with the ``pip`` command. ``pip`` is already installed if you are using Python >=2.7.9 or Python >=3.4. `Read the pip documentation for more information <https://pip.pypa.io/en/stable/installing/>`_. | |
39 | 35 | |
40 | 36 | After that you can install MapProxy with:: |
41 | 37 | |
42 | c:\Python27\Scripts\easy_install MapProxy | |
38 | c:\Python27\Scripts\pip install MapProxy | |
43 | 39 | |
44 | 40 | This might take a minute. |
45 | 41 | |
52 | 48 | Pillow and YAML |
53 | 49 | ~~~~~~~~~~~~~~~ |
54 | 50 | |
55 | Pillow and PyYAML are installed automatically by ``easy_install``. | |
51 | Pillow and PyYAML are installed automatically by ``pip``. | |
56 | 52 | |
57 | 53 | PyProj |
58 | 54 | ~~~~~~ |
59 | 55 | |
60 | 56 | Since libproj4 is generally not available on a Windows system, you will also need to install the Python package ``pyproj``. |
57 | You need to manually download the ``pyproj`` package for your system. See below for *Platform dependent packages*. | |
61 | 58 | |
62 | 59 | :: |
63 | 60 | |
64 | easy_install pyproj | |
61 | pip install path\to\pyproj-xxx.whl | |
65 | 62 | |
66 | 63 | |
67 | 64 | Shapely and GEOS *(optional)* |
68 | 65 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
69 | Shapely can be installed with ``easy_install Shapely``. This will already include the required ``geos.dll``. | |
66 | Shapely can be installed with ``pip install Shapely``. This will already include the required ``geos.dll``. | |
70 | 67 | |
71 | 68 | |
72 | 69 | GDAL *(optional)* |
86 | 83 | set GDAL_DRIVER_PATH=C:\Program Files (x86)\GDAL\gdalplugins |
87 | 84 | |
88 | 85 | |
86 | .. _win_platform_packages: | |
87 | ||
89 | 88 | Platform dependent packages |
90 | 89 | --------------------------- |
91 | 90 | |
92 | All Python packages are downloaded from http://pypi.python.org/, but not all platform combinations might be available as a binary package, especially if you run a 64bit version of Windows. | |
91 | ``pip`` downloads all packages from https://pypi.python.org/, but binary packages may not be available for all platform combinations, especially if you run a 64bit version of Python. | |
93 | 92 | |
94 | If you run into troubles during installation, because it is trying to compile something (e.g. complaining about ``vcvarsall.bat``), you should look at Christoph Gohlke's `Unofficial Windows Binaries for Python Extension Packages <http://www.lfd.uci.edu/~gohlke/pythonlibs/>`_. | |
93 | If you run into trouble during installation because pip is trying to compile something (e.g. complaining about ``vcvarsall.bat``), you should look at Christoph Gohlke's `Unofficial Windows Binaries for Python Extension Packages <http://www.lfd.uci.edu/~gohlke/pythonlibs/>`_. This is a reliable site for binary Python packages. You need to download the right package: the ``cpxx`` code refers to the Python version (e.g. ``cp27`` for Python 2.7); ``win32`` is for 32bit Python installations and ``amd64`` for 64bit. | |
95 | 94 | |
96 | You can install the ``.exe`` packages with ``easy_install``:: | |
95 | You can install the ``.whl``, ``.zip`` or ``.exe`` packages with ``pip``:: | |
97 | 96 | |
98 | easy_install path\to\package-xxx.exe | |
97 | pip install path\to\package-xxx.whl | |
99 | 98 | |
100 | 99 | |
101 | 100 | Check installation |
109 | 108 | |
110 | 109 | Now continue with :ref:`Create a configuration <create_configuration>` from the installation documentation. |
111 | 110 | |
112 |
503 | 503 | Export tiles like the internal cache directory structure. This is compatible with TileCache. |
504 | 504 | |
505 | 505 | ``mbtile``: |
506 | Exports tiles into a MBTile file. | |
506 | Export tiles into an MBTiles file. | |
507 | ||
508 | ``sqlite``: | |
509 | Export tiles into SQLite level files. | |
510 | ||
511 | ``geopackage``: | |
512 | Export tiles into a GeoPackage file. | |
507 | 513 | |
508 | 514 | ``arcgis``: |
509 | Exports tiles in a ArcGIS exploded cache directory structure. | |
510 | ||
515 | Export tiles in an ArcGIS exploded cache directory structure. | |
516 | ||
517 | ``compact-v1``: | |
518 | Export tiles as ArcGIS compact cache bundle files (version 1). | |
511 | 519 | |
512 | 520 | |
513 | 521 | Examples |
20 | 20 | |
21 | 21 | .. option:: -s <seed.yaml>, --seed-conf=<seed.yaml>
22 | 22 | |
23 | The seed configuration. You can also pass the configration as the last argument to ``mapproxy-seed`` | |
23 | The seed configuration. You can also pass the configuration as the last argument to ``mapproxy-seed`` | |
24 | 24 | |
25 | 25 | .. option:: -f <mapproxy.yaml>, --proxy-conf=<mapproxy.yaml> |
26 | 26 | |
66 | 66 | |
67 | 67 | Filename where MapProxy stores the seeding progress for the ``--continue`` option. Defaults to ``.mapproxy_seed_progress`` in the current working directory. MapProxy will remove that file after a successful seed. |
68 | 68 | |
69 | .. option:: --duration | |
70 | ||
71 | Stop the seeding process after this duration. This option accepts durations in the following format: 120s, 15m, 4h, 0.5d | |
72 | Use this option in combination with ``--continue`` to be able to resume the seeding. | |
73 | ||
74 | .. option:: --reseed-file | |
75 | ||
76 | File created by ``mapproxy-seed`` at the start of a new seeding. | |
77 | ||
78 | .. option:: --reseed-interval | |
79 | ||
80 | Only start seeding if ``--reseed-file`` is older than this duration. | |
81 | This option accepts durations in the following format: 120s, 15m, 4h, 0.5d | |
82 | Use this option in combination with ``--continue`` to be able to resume the seeding. | |
83 | ||
84 | ||
69 | 85 | .. option:: --use-cache-lock |
70 | 86 | |
71 | 87 | Lock each cache to prevent multiple parallel `mapproxy-seed` calls to work on the same cache. |
80 | 96 | |
81 | 97 | .. versionadded:: 1.7.0 |
82 | 98 | ``--log-config`` option |
99 | ||
100 | .. versionadded:: 1.10.0 | |
101 | ``--duration``, ``--reseed-file`` and ``--reseed-interval`` options | |
102 | ||
103 | ||
83 | 104 | |
84 | 105 | |
85 | 106 | Examples |
377 | 398 | austria: |
378 | 399 | bbox: [9.36, 46.33, 17.28, 49.09] |
379 | 400 | srs: 'EPSG:4326' |
401 | ||
402 | ||
403 | .. _background_seeding: | |
404 | ||
405 | Example: Background seeding | |
406 | --------------------------- | |
407 | ||
408 | .. versionadded:: 1.10.0 | |
409 | ||
410 | The ``--duration`` option allows you to run MapProxy seeding for a limited time. In combination with the ``--continue`` option, you can resume the seeding process at a later time. | |
411 | You can use this to call ``mapproxy-seed`` with ``cron`` to seed in the off-hours. | |
412 | ||
413 | However, this will restart the seeding process from the beginning every time the seeding is completed. | |
414 | You can prevent this with the ``--reseed-interval`` and ``--reseed-file`` options. | |
415 | The following example runs seeding for six hours. It will seed for another six hours every time you call the command again. Once all seed and cleanup tasks have been processed, the command will exit immediately every time you call it within 14 days after the first call. After 14 days, the modification time of the ``reseed.time`` file will be updated and the re-seeding process starts again. | |
416 | ||
417 | :: | |
418 | ||
419 | mapproxy-seed -f mapproxy.yaml -s seed.yaml \ | |
420 | --reseed-interval 14d --duration 6h --reseed-file reseed.time \ | |
421 | --continue --progress-file .mapproxy_seed_progress | |
422 | ||
423 | You can also use the ``--reseed-file`` as an ``mtime``-file for the ``refresh_before`` and ``remove_before`` options. | |
424 | ||
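For example, a seed task can refresh only tiles that are older than the last re-seeding cycle by pointing ``refresh_before`` at this file (the seed, cache and file names are only placeholders)::

    seeds:
      myseed:
        caches: [mycache]
        refresh_before:
          mtime: reseed.time
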
380 | 425 | |
381 | 426 | |
382 | 427 | .. _seed_old_configuration: |
107 | 107 | |
108 | 108 | A list of image mime types the server should offer. |
109 | 109 | |
110 | .. _wms_featureinfo_types: | |
111 | ||
110 | 112 | ``featureinfo_types`` |
111 | 113 | """"""""""""""""""""" |
112 | 114 | |
113 | A list of feature info types the server should offer. Available types are ``text``, ``html`` and ``xml``. The types then are advertised in the capabilities with the correct mime type. | |
115 | A list of feature info types the server should offer. Available types are ``text``, ``html``, ``xml`` and ``json``. The types are advertised in the capabilities with the correct mime type. Defaults to ``[text, html, xml]``. | |
114 | 116 | |
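For example, to also offer JSON feature info::

    services:
      wms:
        featureinfo_types: [text, html, xml, json]
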
115 | 117 | ``featureinfo_xslt`` |
116 | 118 | """""""""""""""""""" |
256 | 256 | .. _arcgis_label: |
257 | 257 | |
258 | 258 | ArcGIS REST API |
259 | """ | |
259 | """"""""""""""" | |
260 | ||
261 | .. versionadded:: 1.9.0 | |
260 | 262 | |
261 | 263 | Use the type ``arcgis`` for ArcGIS MapServer and ImageServer REST server endpoints. This |
262 | 264 | source is based on :ref:`the WMS source <wms_label>` and most WMS options apply to the |
265 | 267 | ``req`` |
266 | 268 | ^^^^^^^ |
267 | 269 | |
268 | This describes the ArcGIS source. The only required option is ``url``. You need to set ``transparent`` to ``true`` if you want to use this source as an overlay. | |
269 | :: | |
270 | ||
271 | req: | |
272 | url: http://example.org/ArcGIS/rest/services/Imagery/MapService | |
273 | layers: show: 0,1 | |
274 | transparent: true | |
275 | ||
276 | .. _example_configuration: | |
270 | This describes the ArcGIS source. The only required option is ``url``. You need to set ``transparent`` to ``true`` if you want to use this source as an overlay. You can also add ArcGIS specific parameters to ``req``, for example to set the `interpolation method for ImageServers <http://resources.arcgis.com/en/help/rest/apiref/exportimage.html>`_. | |
271 | ||
272 | ||
273 | ``opts`` | |
274 | ^^^^^^^^ | |
275 | ||
276 | .. versionadded:: 1.10.0 | |
277 | ||
278 | This option affects the requests MapProxy sends to the source ArcGIS server. | |
279 | ||
280 | ``featureinfo`` | |
281 | If this is set to ``true``, MapProxy will mark the layer as queryable and incoming `GetFeatureInfo` requests will be forwarded as ``identify`` requests to the source server. ArcGIS REST servers support only the HTML and JSON formats. You need to enable JSON support with :ref:`wms_featureinfo_types`. | |
282 | ||
283 | ``featureinfo_return_geometries`` | |
284 | Whether the source should include the feature geometries. | |
285 | ||
286 | ``featureinfo_tolerance`` | |
287 | Tolerance in pixels within which the ArcGIS server should identify features. | |
277 | 288 | |
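A queryable source using these options could look like this (the URL is only a placeholder)::

    my_arcgis_source:
      type: arcgis
      opts:
        featureinfo: true
        featureinfo_return_geometries: false
        featureinfo_tolerance: 5
      req:
        url: http://example.org/ArcGIS/rest/services/Roads/MapServer
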
278 | 289 | Example configuration |
279 | 290 | ^^^^^^^^^^^^^^^^^^^^^ |
280 | 291 | |
281 | Minimal example:: | |
292 | MapServer example:: | |
282 | 293 | |
283 | 294 | my_minimal_arcgissource: |
284 | 295 | type: arcgis |
285 | 296 | req: |
297 | layers: show:0,1 | |
286 | 298 | url: http://example.org/ArcGIS/rest/services/Imagery/MapService |
287 | ||
288 | Full example:: | |
299 | transparent: true | |
300 | ||
301 | ImageServer example:: | |
289 | 302 | |
290 | 303 | my_arcgissource: |
291 | 304 | type: arcgis |
292 | 305 | coverage: |
293 | 306 | polygons: GM.txt |
294 | polygons_srs: EPSG:900913 | |
307 | srs: EPSG:3857 | |
295 | 308 | req: |
296 | url: http://example.org/ArcGIS/rest/services/Imagery/MapService | |
297 | layers: show:0,1 | |
298 | transparent: true | |
309 | url: http://example.org/ArcGIS/rest/services/World/MODIS/ImageServer | |
310 | interpolation: RSP_CubicConvolution | |
311 | bandIds: 2,0,1 | |
312 | ||
299 | 313 | |
300 | 314 | .. _tiles_label: |
301 | 315 | |
360 | 374 | - ``headers`` |
361 | 375 | - ``client_timeout`` |
362 | 376 | - ``ssl_ca_certs`` |
363 | - ``ssl_no_cert_checks`` (:ref:`see above <wms_source-ssl_no_cert_checks>`) | |
377 | - ``ssl_no_cert_checks`` (:ref:`see above <wms_source_ssl_no_cert_checks>`) | |
364 | 378 | |
365 | 379 | See :ref:`HTTP Options <http_ssl>` for detailed documentation. |
366 | 380 |
1 | 1 | demo: |
2 | 2 | wms: |
3 | 3 | md: |
4 | title: MapProxy WMS Proxy | |
5 | abstract: This is the fantastic MapProxy. | |
6 | online_resource: http://mapproxy.org/ | |
7 | contact: | |
8 | person: Your Name Here | |
9 | position: Technical Director | |
10 | organization: | |
11 | address: Fakestreet 123 | |
12 | city: Somewhere | |
13 | postcode: 12345 | |
14 | country: Germany | |
15 | phone: +49(0)000-000000-0 | |
16 | fax: +49(0)000-000000-0 | |
17 | email: info@omniscale.de | |
18 | access_constraints: | |
19 | This service is intended for private and | |
20 | evaluation use only. The data is licensed | |
21 | as Creative Commons Attribution-Share Alike 2.0 | |
22 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
23 | fees: 'None' | |
4 | title: MapProxy WMS Proxy | |
5 | abstract: This is the fantastic MapProxy. | |
6 | online_resource: http://mapproxy.org/ | |
7 | contact: | |
8 | person: Your Name Here | |
9 | position: Technical Director | |
10 | organization: | |
11 | address: Fakestreet 123 | |
12 | city: Somewhere | |
13 | postcode: 12345 | |
14 | country: Germany | |
15 | phone: +49(0)000-000000-0 | |
16 | fax: +49(0)000-000000-0 | |
17 | email: info@omniscale.de | |
18 | access_constraints: | |
19 | Insert license and copyright information for this service. | |
20 | fees: 'None' | |
24 | 21 | |
25 | 22 | sources: |
26 | 23 | test_wms: |
7 | 7 | contact: |
8 | 8 | person: Your Name Here |
9 | 9 | position: Technical Director |
10 | organization: | |
10 | organization: | |
11 | 11 | address: Fakestreet 123 |
12 | 12 | city: Somewhere |
13 | 13 | postcode: 12345 |
16 | 16 | fax: +49(0)000-000000-0 |
17 | 17 | email: info@omniscale.de |
18 | 18 | access_constraints: |
19 | This service is intended for private and | |
20 | evaluation use only. The data is licensed | |
21 | as Creative Commons Attribution-Share Alike 2.0 | |
22 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
19 | Insert license and copyright information for this service. | |
23 | 20 | fees: 'None' |
24 | 21 | |
25 | 22 | sources: |
7 | 7 | contact: |
8 | 8 | person: Your Name Here |
9 | 9 | position: Technical Director |
10 | organization: | |
10 | organization: | |
11 | 11 | address: Fakestreet 123 |
12 | 12 | city: Somewhere |
13 | 13 | postcode: 12345 |
16 | 16 | fax: +49(0)000-000000-0 |
17 | 17 | email: info@omniscale.de |
18 | 18 | access_constraints: |
19 | This service is intended for private and | |
20 | evaluation use only. The data is licensed | |
21 | as Creative Commons Attribution-Share Alike 2.0 | |
22 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
19 | Insert license and copyright information for this service. | |
23 | 20 | fees: 'None' |
24 | 21 | |
25 | 22 | sources: |
7 | 7 | contact: |
8 | 8 | person: Your Name Here |
9 | 9 | position: Technical Director |
10 | organization: | |
10 | organization: | |
11 | 11 | address: Fakestreet 123 |
12 | 12 | city: Somewhere |
13 | 13 | postcode: 12345 |
16 | 16 | fax: +49(0)000-000000-0 |
17 | 17 | email: info@omniscale.de |
18 | 18 | access_constraints: |
19 | This service is intended for private and | |
20 | evaluation use only. The data is licensed | |
21 | as Creative Commons Attribution-Share Alike 2.0 | |
22 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
19 | Insert license and copyright information for this service. | |
23 | 20 | fees: 'None' |
24 | 21 | |
25 | 22 | sources: |
7 | 7 | contact: |
8 | 8 | person: Your Name Here |
9 | 9 | position: Technical Director |
10 | organization: | |
10 | organization: | |
11 | 11 | address: Fakestreet 123 |
12 | 12 | city: Somewhere |
13 | 13 | postcode: 12345 |
16 | 16 | fax: +49(0)000-000000-0 |
17 | 17 | email: info@omniscale.de |
18 | 18 | access_constraints: |
19 | This service is intended for private and | |
20 | evaluation use only. The data is licensed | |
21 | as Creative Commons Attribution-Share Alike 2.0 | |
22 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
19 | Insert license and copyright information for this service. | |
23 | 20 | fees: 'None' |
24 | 21 | |
25 | 22 | sources: |
0 | 0 | # This file is part of the MapProxy project. |
1 | 1 | # Copyright (C) 2010 Omniscale <http://omniscale.de> |
2 | # | |
2 | # | |
3 | 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 4 | # you may not use this file except in compliance with the License. |
5 | 5 | # You may obtain a copy of the License at |
6 | # | |
6 | # | |
7 | 7 | # http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | |
8 | # | |
9 | 9 | # Unless required by applicable law or agreed to in writing, software |
10 | 10 | # distributed under the License is distributed on an "AS IS" BASIS, |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
16 | 16 | Tile caching (creation, caching and retrieval of tiles). |
17 | 17 | |
18 | 18 | .. digraph:: Schematic Call Graph |
19 | ||
19 | ||
20 | 20 | ranksep = 0.1; |
21 | node [shape="box", height="0", width="0"] | |
22 | ||
21 | node [shape="box", height="0", width="0"] | |
22 | ||
23 | 23 | cl [label="CacheMapLayer" href="<mapproxy.layer.CacheMapLayer>"] |
24 | 24 | tm [label="TileManager", href="<mapproxy.cache.tile.TileManager>"]; |
25 | 25 | fc [label="FileCache", href="<mapproxy.cache.file.FileCache>"]; |
30 | 30 | tm -> fc [label="load\\nstore\\nis_cached"]; |
31 | 31 | tm -> s [label="get_map"] |
32 | 32 | } |
33 | ||
33 | ||
34 | 34 | |
35 | 35 | """ |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2016 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement | |
16 | import errno | |
17 | import hashlib | |
18 | import os | |
19 | import shutil | |
20 | import struct | |
21 | ||
22 | from mapproxy.image import ImageSource | |
23 | from mapproxy.cache.base import TileCacheBase, tile_buffer | |
24 | from mapproxy.util.fs import ensure_directory, write_atomic | |
25 | from mapproxy.util.lock import FileLock | |
26 | from mapproxy.compat import BytesIO | |
27 | ||
28 | import logging | |
29 | log = logging.getLogger(__name__) | |
30 | ||
31 | ||
32 | class CompactCacheV1(TileCacheBase): | |
33 | supports_timestamp = False | |
34 | ||
35 | def __init__(self, cache_dir): | |
36 | self.lock_cache_id = 'compactcache-' + hashlib.md5(cache_dir.encode('utf-8')).hexdigest() | |
37 | self.cache_dir = cache_dir | |
38 | ||
39 | def _get_bundle(self, tile_coord): | |
40 | x, y, z = tile_coord | |
41 | ||
42 | level_dir = os.path.join(self.cache_dir, 'L%02d' % z) | |
43 | ||
44 | c = x // BUNDLEX_GRID_WIDTH * BUNDLEX_GRID_WIDTH | |
45 | r = y // BUNDLEX_GRID_HEIGHT * BUNDLEX_GRID_HEIGHT | |
46 | ||
47 | basename = 'R%04xC%04x' % (r, c) | |
48 | return Bundle(os.path.join(level_dir, basename), offset=(c, r)) | |
49 | ||
50 | def is_cached(self, tile): | |
51 | if tile.coord is None: | |
52 | return True | |
53 | if tile.source: | |
54 | return True | |
55 | ||
56 | return self._get_bundle(tile.coord).is_cached(tile) | |
57 | ||
58 | def store_tile(self, tile): | |
59 | if tile.stored: | |
60 | return True | |
61 | ||
62 | return self._get_bundle(tile.coord).store_tile(tile) | |
63 | ||
64 | def load_tile(self, tile, with_metadata=False): | |
65 | if tile.source or tile.coord is None: | |
66 | return True | |
67 | ||
68 | return self._get_bundle(tile.coord).load_tile(tile) | |
69 | ||
70 | def remove_tile(self, tile): | |
71 | if tile.coord is None: | |
72 | return True | |
73 | ||
74 | return self._get_bundle(tile.coord).remove_tile(tile) | |
75 | ||
76 | def load_tile_metadata(self, tile): | |
77 | if self.load_tile(tile): | |
78 | tile.timestamp = -1 | |
79 | ||
80 | def remove_level_tiles_before(self, level, timestamp): | |
81 | if timestamp == 0: | |
82 | level_dir = os.path.join(self.cache_dir, 'L%02d' % level) | |
83 | shutil.rmtree(level_dir, ignore_errors=True) | |
84 | return True | |
85 | return False | |
86 | ||
87 | BUNDLE_EXT = '.bundle' | |
88 | BUNDLEX_EXT = '.bundlx' | |
89 | ||
90 | class Bundle(object): | |
91 | def __init__(self, base_filename, offset): | |
92 | self.base_filename = base_filename | |
93 | self.lock_filename = base_filename + '.lck' | |
94 | self.offset = offset | |
95 | ||
96 | def _rel_tile_coord(self, tile_coord): | |
97 | return ( | |
98 | tile_coord[0] % BUNDLEX_GRID_WIDTH, | |
99 | tile_coord[1] % BUNDLEX_GRID_HEIGHT, | |
100 | ) | |
101 | ||
102 | def is_cached(self, tile): | |
103 | if tile.source or tile.coord is None: | |
104 | return True | |
105 | ||
106 | idx = BundleIndex(self.base_filename + BUNDLEX_EXT) | |
107 | x, y = self._rel_tile_coord(tile.coord) | |
108 | offset = idx.tile_offset(x, y) | |
109 | if offset == 0: | |
110 | return False | |
111 | ||
112 | bundle = BundleData(self.base_filename + BUNDLE_EXT, self.offset) | |
113 | size = bundle.read_size(offset) | |
114 | return size != 0 | |
115 | ||
116 | def store_tile(self, tile): | |
117 | if tile.stored: | |
118 | return True | |
119 | ||
120 | with tile_buffer(tile) as buf: | |
121 | data = buf.read() | |
122 | ||
123 | with FileLock(self.lock_filename): | |
124 | bundle = BundleData(self.base_filename + BUNDLE_EXT, self.offset) | |
125 | idx = BundleIndex(self.base_filename + BUNDLEX_EXT) | |
126 | x, y = self._rel_tile_coord(tile.coord) | |
127 | offset = idx.tile_offset(x, y) | |
128 | offset, size = bundle.append_tile(data, prev_offset=offset) | |
129 | idx.update_tile_offset(x, y, offset=offset, size=size) | |
130 | ||
131 | return True | |
132 | ||
133 | def load_tile(self, tile, with_metadata=False): | |
134 | if tile.source or tile.coord is None: | |
135 | return True | |
136 | ||
137 | idx = BundleIndex(self.base_filename + BUNDLEX_EXT) | |
138 | x, y = self._rel_tile_coord(tile.coord) | |
139 | offset = idx.tile_offset(x, y) | |
140 | if offset == 0: | |
141 | return False | |
142 | ||
143 | bundle = BundleData(self.base_filename + BUNDLE_EXT, self.offset) | |
144 | data = bundle.read_tile(offset) | |
145 | if not data: | |
146 | return False | |
147 | tile.source = ImageSource(BytesIO(data)) | |
148 | ||
149 | return True | |
150 | ||
151 | def remove_tile(self, tile): | |
152 | if tile.coord is None: | |
153 | return True | |
154 | ||
155 | with FileLock(self.lock_filename): | |
156 | idx = BundleIndex(self.base_filename + BUNDLEX_EXT) | |
157 | x, y = self._rel_tile_coord(tile.coord) | |
158 | idx.remove_tile_offset(x, y) | |
159 | ||
160 | return True | |
161 | ||
162 | ||
163 | BUNDLEX_GRID_WIDTH = 128 | |
164 | BUNDLEX_GRID_HEIGHT = 128 | |
165 | BUNDLEX_HEADER_SIZE = 16 | |
166 | BUNDLEX_HEADER = b'\x03\x00\x00\x00\x10\x00\x00\x00\x00\x40\x00\x00\x05\x00\x00\x00' | |
167 | BUNDLEX_FOOTER_SIZE = 16 | |
168 | BUNDLEX_FOOTER = b'\x00\x00\x00\x00\x10\x00\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00' | |
169 | ||
170 | class BundleIndex(object): | |
171 | def __init__(self, filename): | |
172 | self.filename = filename | |
173 | # defer initialization to update/remove calls to avoid | |
174 | # index creation on is_cached (prevents new files in read-only caches) | |
175 | self._initialized = False | |
176 | ||
177 | def _init_index(self): | |
178 | self._initialized = True | |
179 | if os.path.exists(self.filename): | |
180 | return | |
181 | ensure_directory(self.filename) | |
182 | buf = BytesIO() | |
183 | buf.write(BUNDLEX_HEADER) | |
184 | for i in range(BUNDLEX_GRID_WIDTH * BUNDLEX_GRID_HEIGHT): | |
185 | buf.write(struct.pack('<Q', (i*4)+BUNDLE_HEADER_SIZE)[:5]) | |
186 | buf.write(BUNDLEX_FOOTER) | |
187 | write_atomic(self.filename, buf.getvalue()) | |
188 | ||
189 | def _tile_offset(self, x, y): | |
190 | return BUNDLEX_HEADER_SIZE + (x * BUNDLEX_GRID_HEIGHT + y) * 5 | |
191 | ||
192 | def tile_offset(self, x, y): | |
193 | idx_offset = self._tile_offset(x, y) | |
194 | try: | |
195 | with open(self.filename, 'rb') as f: | |
196 | f.seek(idx_offset) | |
197 | offset = struct.unpack('<Q', f.read(5) + b'\x00\x00\x00')[0] | |
198 | return offset | |
199 | except IOError as ex: | |
200 | if ex.errno == errno.ENOENT: | |
201 | # missing bundle file -> missing tile | |
202 | return 0 | |
203 | raise | |
204 | ||
205 | def update_tile_offset(self, x, y, offset, size): | |
206 | self._init_index() | |
207 | idx_offset = self._tile_offset(x, y) | |
208 | offset = struct.pack('<Q', offset)[:5] | |
209 | with open(self.filename, 'r+b') as f: | |
210 | f.seek(idx_offset, os.SEEK_SET) | |
211 | f.write(offset) | |
212 | ||
213 | def remove_tile_offset(self, x, y): | |
214 | self._init_index() | |
215 | idx_offset = self._tile_offset(x, y) | |
216 | with open(self.filename, 'r+b') as f: | |
217 | f.seek(idx_offset) | |
218 | f.write(b'\x00' * 5) | |
219 | ||
220 | # The bundle file has a header with 15 little-endian long values (60 bytes). | |
221 | # NOTE: the fixed values might be some flags for image options (format, aliasing) | |
222 | # all files available for testing had the same values however. | |
223 | BUNDLE_HEADER_SIZE = 60 | |
224 | BUNDLE_HEADER = [ | |
225 | 3 , # 0, fixed | |
226 | 16384 , # 1, max. num of tiles 128*128 = 16384 | |
227 | 16 , # 2, size of largest tile | |
228 | 5 , # 3, fixed | |
229 | 0 , # 4, num of tiles in bundle (*4) | |
230 | 0 , # 5, fixed | |
231 | 60+65536 , # 6, bundle size | |
232 | 0 , # 7, fixed | |
233 | 40 , # 8 fixed | |
234 | 0 , # 9, fixed | |
235 | 16 , # 10, fixed | |
236 | 0 , # 11, y0 | |
237 | 127 , # 12, y1 | |
238 | 0 , # 13, x0 | |
239 | 127 , # 14, x1 | |
240 | ] | |
241 | BUNDLE_HEADER_STRUCT_FORMAT = '<lllllllllllllll' | |
242 | ||
243 | class BundleData(object): | |
244 | def __init__(self, filename, tile_offsets): | |
245 | self.filename = filename | |
246 | self.tile_offsets = tile_offsets | |
247 | if not os.path.exists(self.filename): | |
248 | self._init_bundle() | |
249 | ||
250 | def _init_bundle(self): | |
251 | ensure_directory(self.filename) | |
252 | header = list(BUNDLE_HEADER) | |
253 | header[13], header[11] = self.tile_offsets | |
254 | header[14], header[12] = header[13]+127, header[11]+127 | |
255 | write_atomic(self.filename, | |
256 | struct.pack(BUNDLE_HEADER_STRUCT_FORMAT, *header) + | |
257 | # zero-size entry for each tile | |
258 | (b'\x00' * (BUNDLEX_GRID_HEIGHT * BUNDLEX_GRID_WIDTH * 4))) | |
259 | ||
260 | def read_size(self, offset): | |
261 | with open(self.filename, 'rb') as f: | |
262 | f.seek(offset) | |
263 | return struct.unpack('<L', f.read(4))[0] | |
264 | ||
265 | def read_tile(self, offset): | |
266 | with open(self.filename, 'rb') as f: | |
267 | f.seek(offset) | |
268 | size = struct.unpack('<L', f.read(4))[0] | |
269 | if size <= 0: | |
270 | return False | |
271 | return f.read(size) | |
272 | ||
273 | def append_tile(self, data, prev_offset): | |
274 | size = len(data) | |
275 | is_new_tile = True | |
276 | with open(self.filename, 'r+b') as f: | |
277 | if prev_offset: | |
278 | f.seek(prev_offset, os.SEEK_SET) | |
279 | if f.tell() == prev_offset: | |
280 | if struct.unpack('<L', f.read(4))[0] > 0: | |
281 | is_new_tile = False | |
282 | ||
283 | f.seek(0, os.SEEK_END) | |
284 | offset = f.tell() | |
285 | if offset == 0: | |
286 | f.write(b'\x00' * 16) # header | |
287 | offset = 16 | |
288 | f.write(struct.pack('<L', size)) | |
289 | f.write(data) | |
290 | ||
291 | # update header | |
292 | f.seek(0, os.SEEK_SET) | |
293 | header = list(struct.unpack(BUNDLE_HEADER_STRUCT_FORMAT, f.read(60))) | |
294 | header[2] = max(header[2], size) | |
295 | header[6] += size + 4 | |
296 | if is_new_tile: | |
297 | header[4] += 4 | |
298 | f.seek(0, os.SEEK_SET) | |
299 | f.write(struct.pack(BUNDLE_HEADER_STRUCT_FORMAT, *header)) | |
300 | ||
301 | return offset, size |
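The 5-byte offset encoding used by ``BundleIndex`` can be sketched in isolation. These helpers are a standalone illustration, not part of MapProxy's API:

```python
import struct

# Bundle index entries store each tile's byte offset as the low five
# bytes of a little-endian unsigned 64bit integer.

def pack_offset(offset):
    # Pack as 8-byte little-endian unsigned and keep the first 5 bytes.
    return struct.pack('<Q', offset)[:5]

def unpack_offset(data):
    # Pad the 5 stored bytes back to 8 bytes before unpacking.
    return struct.unpack('<Q', data + b'\x00\x00\x00')[0]
```

An offset of ``0`` marks a missing tile, which is why ``tile_offset`` above treats zero as "not cached".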
19 | 19 | |
20 | 20 | from mapproxy.util.fs import ensure_directory, write_atomic |
21 | 21 | from mapproxy.image import ImageSource, is_single_color_image |
22 | from mapproxy.cache import path | |
22 | 23 | from mapproxy.cache.base import TileCacheBase, tile_buffer |
23 | from mapproxy.compat import string_type | |
24 | 24 | |
25 | 25 | import logging |
26 | 26 | log = logging.getLogger('mapproxy.cache.file') |
30 | 30 | This class is responsible to store and load the actual tile data. |
31 | 31 | """ |
32 | 32 | def __init__(self, cache_dir, file_ext, directory_layout='tc', |
33 | link_single_color_images=False, lock_timeout=60.0): | |
33 | link_single_color_images=False): | |
34 | 34 | """ |
35 | 35 | :param cache_dir: the path where the tile will be stored |
36 | 36 | :param file_ext: the file extension that will be appended to |
41 | 41 | self.cache_dir = cache_dir |
42 | 42 | self.file_ext = file_ext |
43 | 43 | self.link_single_color_images = link_single_color_images |
44 | self._tile_location, self._level_location = path.location_funcs(layout=directory_layout) | |
45 | if self._level_location is None: | |
46 | self.level_location = None # disable level based clean-ups | |
44 | 47 | |
45 | if directory_layout == 'tc': | |
46 | self.tile_location = self._tile_location_tc | |
47 | self.level_location = self._level_location | |
48 | elif directory_layout == 'mp': | |
49 | self.tile_location = self._tile_location_mp | |
50 | self.level_location = self._level_location | |
51 | elif directory_layout == 'tms': | |
52 | self.tile_location = self._tile_location_tms | |
53 | self.level_location = self._level_location_tms | |
54 | elif directory_layout == 'quadkey': | |
55 | self.tile_location = self._tile_location_quadkey | |
56 | self.level_location = self._level_location | |
57 | elif directory_layout == 'arcgis': | |
58 | self.tile_location = self._tile_location_arcgiscache | |
59 | self.level_location = self._level_location_arcgiscache | |
60 | else: | |
61 | raise ValueError('unknown directory_layout "%s"' % directory_layout) | |
48 | def tile_location(self, tile, create_dir=False): | |
49 | return self._tile_location(tile, self.cache_dir, self.file_ext, create_dir=create_dir) | |
62 | 50 | |
63 | def _level_location(self, level): | |
51 | def level_location(self, level): | |
64 | 52 | """ |
65 | 53 | Return the path where all tiles for `level` will be stored. |
66 | 54 | |
67 | 55 | >>> c = FileCache(cache_dir='/tmp/cache/', file_ext='png') |
68 | >>> c._level_location(2) | |
56 | >>> c.level_location(2) | |
69 | 57 | '/tmp/cache/02' |
70 | 58 | """ |
71 | if isinstance(level, string_type): | |
72 | return os.path.join(self.cache_dir, level) | |
73 | else: | |
74 | return os.path.join(self.cache_dir, "%02d" % level) | |
75 | ||
76 | def _tile_location_tc(self, tile, create_dir=False): | |
77 | """ | |
78 | Return the location of the `tile`. Caches the result as ``location`` | |
79 | property of the `tile`. | |
80 | ||
81 | :param tile: the tile object | |
82 | :param create_dir: if True, create all necessary directories | |
83 | :return: the full filename of the tile | |
84 | ||
85 | >>> from mapproxy.cache.tile import Tile | |
86 | >>> c = FileCache(cache_dir='/tmp/cache/', file_ext='png') | |
87 | >>> c.tile_location(Tile((3, 4, 2))).replace('\\\\', '/') | |
88 | '/tmp/cache/02/000/000/003/000/000/004.png' | |
89 | """ | |
90 | if tile.location is None: | |
91 | x, y, z = tile.coord | |
92 | parts = (self._level_location(z), | |
93 | "%03d" % int(x / 1000000), | |
94 | "%03d" % (int(x / 1000) % 1000), | |
95 | "%03d" % (int(x) % 1000), | |
96 | "%03d" % int(y / 1000000), | |
97 | "%03d" % (int(y / 1000) % 1000), | |
98 | "%03d.%s" % (int(y) % 1000, self.file_ext)) | |
99 | tile.location = os.path.join(*parts) | |
100 | if create_dir: | |
101 | ensure_directory(tile.location) | |
102 | return tile.location | |
103 | ||
104 | def _tile_location_mp(self, tile, create_dir=False): | |
105 | """ | |
106 | Return the location of the `tile`. Caches the result as ``location`` | |
107 | property of the `tile`. | |
108 | ||
109 | :param tile: the tile object | |
110 | :param create_dir: if True, create all necessary directories | |
111 | :return: the full filename of the tile | |
112 | ||
113 | >>> from mapproxy.cache.tile import Tile | |
114 | >>> c = FileCache(cache_dir='/tmp/cache/', file_ext='png', directory_layout='mp') | |
115 | >>> c.tile_location(Tile((3, 4, 2))).replace('\\\\', '/') | |
116 | '/tmp/cache/02/0000/0003/0000/0004.png' | |
117 | >>> c.tile_location(Tile((12345678, 98765432, 22))).replace('\\\\', '/') | |
118 | '/tmp/cache/22/1234/5678/9876/5432.png' | |
119 | """ | |
120 | if tile.location is None: | |
121 | x, y, z = tile.coord | |
122 | parts = (self._level_location(z), | |
123 | "%04d" % int(x / 10000), | |
124 | "%04d" % (int(x) % 10000), | |
125 | "%04d" % int(y / 10000), | |
126 | "%04d.%s" % (int(y) % 10000, self.file_ext)) | |
127 | tile.location = os.path.join(*parts) | |
128 | if create_dir: | |
129 | ensure_directory(tile.location) | |
130 | return tile.location | |
131 | ||
132 | def _tile_location_tms(self, tile, create_dir=False): | |
133 | """ | |
134 | Return the location of the `tile`. Caches the result as ``location`` | |
135 | property of the `tile`. | |
136 | ||
137 | :param tile: the tile object | |
138 | :param create_dir: if True, create all necessary directories | |
139 | :return: the full filename of the tile | |
140 | ||
141 | >>> from mapproxy.cache.tile import Tile | |
142 | >>> c = FileCache(cache_dir='/tmp/cache/', file_ext='png', directory_layout='tms') | |
143 | >>> c.tile_location(Tile((3, 4, 2))).replace('\\\\', '/') | |
144 | '/tmp/cache/2/3/4.png' | |
145 | """ | |
146 | if tile.location is None: | |
147 | x, y, z = tile.coord | |
148 | tile.location = os.path.join( | |
149 | self.level_location(str(z)), | |
150 | str(x), str(y) + '.' + self.file_ext | |
151 | ) | |
152 | if create_dir: | |
153 | ensure_directory(tile.location) | |
154 | return tile.location | |
155 | ||
156 | def _level_location_tms(self, z): | |
157 | return self._level_location(str(z)) | |
158 | ||
159 | def _tile_location_quadkey(self, tile, create_dir=False): | |
160 | """ | |
161 | Return the location of the `tile`. Caches the result as ``location`` | |
162 | property of the `tile`. | |
163 | ||
164 | :param tile: the tile object | |
165 | :param create_dir: if True, create all necessary directories | |
166 | :return: the full filename of the tile | |
167 | ||
168 | >>> from mapproxy.cache.tile import Tile | |
169 | >>> from mapproxy.cache.file import FileCache | |
170 | >>> c = FileCache(cache_dir='/tmp/cache/', file_ext='png', directory_layout='quadkey') | |
171 | >>> c.tile_location(Tile((3, 4, 2))).replace('\\\\', '/') | |
172 | '/tmp/cache/11.png' | |
173 | """ | |
174 | if tile.location is None: | |
175 | x, y, z = tile.coord | |
176 | quadKey = "" | |
177 | for i in range(z,0,-1): | |
178 | digit = 0 | |
179 | mask = 1 << (i-1) | |
180 | if (x & mask) != 0: | |
181 | digit += 1 | |
182 | if (y & mask) != 0: | |
183 | digit += 2 | |
184 | quadKey += str(digit) | |
185 | tile.location = os.path.join( | |
186 | self.cache_dir, quadKey + '.' + self.file_ext | |
187 | ) | |
188 | if create_dir: | |
189 | ensure_directory(tile.location) | |
190 | return tile.location | |
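The loop above is the standard Bing Maps quadkey construction: for each zoom level, one base-4 digit interleaves the corresponding bit of x (contributing 1) and y (contributing 2). Extracted as a standalone sketch:

```python
def quadkey(x, y, z):
    """Interleave the bits of x and y into a base-4 Bing Maps quadkey."""
    key = ""
    for i in range(z, 0, -1):
        digit = 0
        mask = 1 << (i - 1)   # test one bit per zoom level, MSB first
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key += str(digit)
    return key

quadkey(3, 4, 2)  # "11", matching the '/tmp/cache/11.png' doctest above
```

Because each digit encodes one quadrant subdivision, a quadkey's length equals its zoom level and every prefix names an ancestor tile.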
191 | ||
192 | def _tile_location_arcgiscache(self, tile, create_dir=False): | |
193 | """ | |
194 | Return the location of the `tile`. Caches the result as ``location`` | |
195 | property of the `tile`. | |
196 | ||
197 | :param tile: the tile object | |
198 | :param create_dir: if True, create all necessary directories | |
199 | :return: the full filename of the tile | |
200 | ||
201 | >>> from mapproxy.cache.tile import Tile | |
202 | >>> from mapproxy.cache.file import FileCache | |
203 | >>> c = FileCache(cache_dir='/tmp/cache/', file_ext='png', directory_layout='arcgis') | |
204 | >>> c.tile_location(Tile((1234567, 87654321, 9))).replace('\\\\', '/') | |
205 | '/tmp/cache/L09/R05397fb1/C0012d687.png' | |
206 | """ | |
207 | if tile.location is None: | |
208 | x, y, z = tile.coord | |
209 | parts = (self._level_location_arcgiscache(z), 'R%08x' % y, 'C%08x.%s' % (x, self.file_ext)) | |
210 | tile.location = os.path.join(*parts) | |
211 | if create_dir: | |
212 | ensure_directory(tile.location) | |
213 | return tile.location | |
214 | ||
215 | def _level_location_arcgiscache(self, z): | |
216 | return self._level_location('L%02d' % z) | |
59 | return self._level_location(level, self.cache_dir) | |
217 | 60 | |
218 | 61 | def _single_color_tile_location(self, color, create_dir=False): |
219 | 62 | """ |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2011-2013 Omniscale <http://omniscale.de> | |
2 | ||
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement | |
16 | ||
17 | import hashlib | |
18 | import logging | |
19 | import os | |
20 | import re | |
21 | import sqlite3 | |
22 | import threading | |
23 | ||
24 | from mapproxy.cache.base import TileCacheBase, tile_buffer, REMOVE_ON_UNLOCK | |
25 | from mapproxy.compat import BytesIO, PY2, itertools | |
26 | from mapproxy.image import ImageSource | |
27 | from mapproxy.srs import get_epsg_num | |
28 | from mapproxy.util.fs import ensure_directory | |
29 | from mapproxy.util.lock import FileLock | |
30 | ||
31 | ||
32 | log = logging.getLogger(__name__) | |
33 | ||
34 | class GeopackageCache(TileCacheBase): | |
35 | supports_timestamp = False | |
36 | ||
37 | def __init__(self, geopackage_file, tile_grid, table_name, with_timestamps=False, timeout=30, wal=False): | |
38 | self.tile_grid = tile_grid | |
39 | self.table_name = self._check_table_name(table_name) | |
40 | self.lock_cache_id = 'gpkg' + hashlib.md5(geopackage_file.encode('utf-8')).hexdigest() | |
41 | self.geopackage_file = geopackage_file | |
42 | # XXX timestamps not implemented | |
43 | self.supports_timestamp = with_timestamps | |
44 | self.timeout = timeout | |
45 | self.wal = wal | |
46 | self.ensure_gpkg() | |
47 | self._db_conn_cache = threading.local() | |
48 | ||
49 | @property | |
50 | def db(self): | |
51 | if not getattr(self._db_conn_cache, 'db', None): | |
52 | self.ensure_gpkg() | |
53 | self._db_conn_cache.db = sqlite3.connect(self.geopackage_file, timeout=self.timeout) | |
54 | return self._db_conn_cache.db | |
55 | ||
56 | def cleanup(self): | |
57 | """ | |
58 | Close all open connection and remove them from cache. | |
59 | """ | |
60 | if getattr(self._db_conn_cache, 'db', None): | |
61 | self._db_conn_cache.db.close() | |
62 | self._db_conn_cache.db = None | |
63 | ||
64 | @staticmethod | |
65 | def _check_table_name(table_name): | |
66 | """ | |
67 | >>> GeopackageCache._check_table_name("test") | |
68 | 'test' | |
69 | >>> GeopackageCache._check_table_name("test_2") | |
70 | 'test_2' | |
71 | >>> GeopackageCache._check_table_name("test-2") | |
72 | 'test-2' | |
73 | >>> GeopackageCache._check_table_name("test3;") | |
74 | Traceback (most recent call last): | |
75 | ... | |
76 | ValueError: The table_name test3; contains unsupported characters. | |
77 | >>> GeopackageCache._check_table_name("table name") | |
78 | Traceback (most recent call last): | |
79 | ... | |
80 | ValueError: The table_name table name contains unsupported characters. | |
81 | ||
82 | @param table_name: A desired name for a geopackage table. | |
83 | @return: The table name if it is valid, otherwise a ValueError is raised. | |
84 | """ | |
85 | # Regex string indicating table names which will be accepted. | |
86 | regex_str = '^[a-zA-Z0-9_-]+$' | |
87 | if re.match(regex_str, table_name): | |
88 | return table_name | |
89 | else: | |
90 | msg = ("The table name may only contain alphanumeric characters, an underscore, " | |
91 | "or a dash: {}".format(regex_str)) | |
92 | log.info(msg) | |
93 | raise ValueError("The table_name {0} contains unsupported characters.".format(table_name)) | |
94 | ||
95 | def ensure_gpkg(self): | |
96 | if not os.path.isfile(self.geopackage_file): | |
97 | with FileLock(self.geopackage_file + '.init.lck', | |
98 | remove_on_unlock=REMOVE_ON_UNLOCK): | |
99 | ensure_directory(self.geopackage_file) | |
100 | self._initialize_gpkg() | |
101 | else: | |
102 | if not self.check_gpkg(): | |
103 | ensure_directory(self.geopackage_file) | |
104 | self._initialize_gpkg() | |
105 | ||
106 | def check_gpkg(self): | |
107 | if not self._verify_table(): | |
108 | return False | |
109 | if not self._verify_gpkg_contents(): | |
110 | return False | |
111 | if not self._verify_tile_size(): | |
112 | return False | |
113 | return True | |
114 | ||
115 | def _verify_table(self): | |
116 | with sqlite3.connect(self.geopackage_file) as db: | |
117 | cur = db.execute("""SELECT name FROM sqlite_master WHERE type='table' AND name=?""", | |
118 | (self.table_name,)) | |
119 | content = cur.fetchone() | |
120 | if not content: | |
121 | # Table doesn't exist; _initialize_gpkg will create a new one. | |
122 | return False | |
123 | return True | |
124 | ||
125 | def _verify_gpkg_contents(self): | |
126 | with sqlite3.connect(self.geopackage_file) as db: | |
127 | cur = db.execute("""SELECT * FROM gpkg_contents WHERE table_name = ?""" | |
128 | , (self.table_name,)) | |
129 | ||
130 | results = cur.fetchone() | |
131 | if not results: | |
132 | # Table doesn't exist in gpkg_contents; _initialize_gpkg will add it. | |
133 | return False | |
134 | gpkg_data_type = results[1] | |
135 | gpkg_srs_id = results[9] | |
136 | cur = db.execute("""SELECT * FROM gpkg_spatial_ref_sys WHERE srs_id = ?""" | |
137 | , (gpkg_srs_id,)) | |
138 | ||
139 | gpkg_coordsys_id = cur.fetchone()[3] | |
140 | if gpkg_data_type.lower() != "tiles": | |
141 | log.info("The geopackage table name already exists for a data type other than tiles.") | |
142 | raise ValueError("table_name is improperly configured.") | |
143 | if gpkg_coordsys_id != get_epsg_num(self.tile_grid.srs.srs_code): | |
144 | log.info( | |
145 | "The geopackage {0} table name {1} already exists and has an SRS of {2}, which does not match the configured" \ | |
146 | " Mapproxy SRS of {3}.".format(self.geopackage_file, self.table_name, gpkg_coordsys_id, | |
147 | get_epsg_num(self.tile_grid.srs.srs_code))) | |
148 | raise ValueError("srs is improperly configured.") | |
149 | return True | |
150 | ||
151 | def _verify_tile_size(self): | |
152 | with sqlite3.connect(self.geopackage_file) as db: | |
153 | cur = db.execute( | |
154 | """SELECT * FROM gpkg_tile_matrix WHERE table_name = ?""", | |
155 | (self.table_name,)) | |
156 | ||
157 | results = cur.fetchall() | |
158 | if not results: | |
159 | # There is no tile conflict. Return to allow the creation of new tiles. | |
160 | return True | |
161 | | |
162 | results = results[0] | |
163 | tile_size = self.tile_grid.tile_size | |
164 | ||
165 | gpkg_table_name, gpkg_zoom_level, gpkg_matrix_width, gpkg_matrix_height, gpkg_tile_width, gpkg_tile_height, \ | |
166 | gpkg_pixel_x_size, gpkg_pixel_y_size = results | |
167 | resolution = self.tile_grid.resolution(gpkg_zoom_level) | |
168 | if gpkg_tile_width != tile_size[0] or gpkg_tile_height != tile_size[1]: | |
169 | log.info( | |
170 | "The geopackage {0} table name {1} already exists and has tile sizes of ({2},{3})" | |
171 | " which is different from the configured tile sizes of ({4},{5}).".format(self.geopackage_file, | |
172 | self.table_name, | |
173 | gpkg_tile_width, | |
174 | gpkg_tile_height, | |
175 | tile_size[0], | |
176 | tile_size[1])) | |
177 | log.info("The current mapproxy configuration is invalid for this geopackage.") | |
178 | raise ValueError("tile_size is improperly configured.") | |
179 | if not is_close(gpkg_pixel_x_size, resolution) or not is_close(gpkg_pixel_y_size, resolution): | |
180 | log.info( | |
181 | "The geopackage {0} table name {1} already exists and level {2} has a resolution of ({3:.13f},{4:.13f})" | |
182 | " which is different from the configured resolution of ({5:.13f},{6:.13f}).".format(self.geopackage_file, | |
183 | self.table_name, | |
184 | gpkg_zoom_level, | |
185 | gpkg_pixel_x_size, | |
186 | gpkg_pixel_y_size, | |
187 | resolution, | |
188 | resolution)) | |
189 | log.info("The current mapproxy configuration is invalid for this geopackage.") | |
190 | raise ValueError("res is improperly configured.") | |
191 | return True | |
192 | ||
193 | def _initialize_gpkg(self): | |
194 | log.info('initializing Geopackage file %s', self.geopackage_file) | |
195 | db = sqlite3.connect(self.geopackage_file) | |
196 | ||
197 | if self.wal: | |
198 | db.execute('PRAGMA journal_mode=wal') | |
199 | ||
200 | proj = get_epsg_num(self.tile_grid.srs.srs_code) | |
201 | stmts = [""" | |
202 | CREATE TABLE IF NOT EXISTS gpkg_contents | |
203 | (table_name TEXT NOT NULL PRIMARY KEY, -- The name of the tiles, or feature table | |
204 | data_type TEXT NOT NULL, -- Type of data stored in the table: "features" per clause Features (http://www.geopackage.org/spec/#features), "tiles" per clause Tiles (http://www.geopackage.org/spec/#tiles), or an implementer-defined value for other data tables per clause in an Extended GeoPackage | |
205 | identifier TEXT UNIQUE, -- A human-readable identifier (e.g. short name) for the table_name content | |
206 | description TEXT DEFAULT '', -- A human-readable description for the table_name content | |
207 | last_change DATETIME NOT NULL DEFAULT (strftime('%Y-%m-%dT%H:%M:%fZ','now')), -- Timestamp value in ISO 8601 format as defined by the strftime function %Y-%m-%dT%H:%M:%fZ format string applied to the current time | |
208 | min_x DOUBLE, -- Bounding box minimum easting or longitude for all content in table_name | |
209 | min_y DOUBLE, -- Bounding box minimum northing or latitude for all content in table_name | |
210 | max_x DOUBLE, -- Bounding box maximum easting or longitude for all content in table_name | |
211 | max_y DOUBLE, -- Bounding box maximum northing or latitude for all content in table_name | |
212 | srs_id INTEGER, -- Spatial Reference System ID: gpkg_spatial_ref_sys.srs_id; when data_type is features, SHALL also match gpkg_geometry_columns.srs_id; When data_type is tiles, SHALL also match gpkg_tile_matrix_set.srs.id | |
213 | CONSTRAINT fk_gc_r_srs_id FOREIGN KEY (srs_id) REFERENCES gpkg_spatial_ref_sys(srs_id)) | |
214 | """, | |
215 | """ | |
216 | CREATE TABLE IF NOT EXISTS gpkg_spatial_ref_sys | |
217 | (srs_name TEXT NOT NULL, -- Human readable name of this SRS (Spatial Reference System) | |
218 | srs_id INTEGER NOT NULL PRIMARY KEY, -- Unique identifier for each Spatial Reference System within a GeoPackage | |
219 | organization TEXT NOT NULL, -- Case-insensitive name of the defining organization e.g. EPSG or epsg | |
220 | organization_coordsys_id INTEGER NOT NULL, -- Numeric ID of the Spatial Reference System assigned by the organization | |
221 | definition TEXT NOT NULL, -- Well-known Text representation of the Spatial Reference System | |
222 | description TEXT) | |
223 | """, | |
224 | """ | |
225 | CREATE TABLE IF NOT EXISTS gpkg_tile_matrix | |
226 | (table_name TEXT NOT NULL, -- Tile Pyramid User Data Table Name | |
227 | zoom_level INTEGER NOT NULL, -- 0 <= zoom_level <= max_level for table_name | |
228 | matrix_width INTEGER NOT NULL, -- Number of columns (>= 1) in tile matrix at this zoom level | |
229 | matrix_height INTEGER NOT NULL, -- Number of rows (>= 1) in tile matrix at this zoom level | |
230 | tile_width INTEGER NOT NULL, -- Tile width in pixels (>= 1) for this zoom level | |
231 | tile_height INTEGER NOT NULL, -- Tile height in pixels (>= 1) for this zoom level | |
232 | pixel_x_size DOUBLE NOT NULL, -- In t_table_name srid units or default meters for srid 0 (>0) | |
233 | pixel_y_size DOUBLE NOT NULL, -- In t_table_name srid units or default meters for srid 0 (>0) | |
234 | CONSTRAINT pk_ttm PRIMARY KEY (table_name, zoom_level), CONSTRAINT fk_tmm_table_name FOREIGN KEY (table_name) REFERENCES gpkg_contents(table_name)) | |
235 | """, | |
236 | """ | |
237 | CREATE TABLE IF NOT EXISTS gpkg_tile_matrix_set | |
238 | (table_name TEXT NOT NULL PRIMARY KEY, -- Tile Pyramid User Data Table Name | |
239 | srs_id INTEGER NOT NULL, -- Spatial Reference System ID: gpkg_spatial_ref_sys.srs_id | |
240 | min_x DOUBLE NOT NULL, -- Bounding box minimum easting or longitude for all content in table_name | |
241 | min_y DOUBLE NOT NULL, -- Bounding box minimum northing or latitude for all content in table_name | |
242 | max_x DOUBLE NOT NULL, -- Bounding box maximum easting or longitude for all content in table_name | |
243 | max_y DOUBLE NOT NULL, -- Bounding box maximum northing or latitude for all content in table_name | |
244 | CONSTRAINT fk_gtms_table_name FOREIGN KEY (table_name) REFERENCES gpkg_contents(table_name), CONSTRAINT fk_gtms_srs FOREIGN KEY (srs_id) REFERENCES gpkg_spatial_ref_sys (srs_id)) | |
245 | """, | |
246 | """ | |
247 | CREATE TABLE IF NOT EXISTS [{0}] | |
248 | (id INTEGER PRIMARY KEY AUTOINCREMENT, -- Autoincrement primary key | |
249 | zoom_level INTEGER NOT NULL, -- min(zoom_level) <= zoom_level <= max(zoom_level) for t_table_name | |
250 | tile_column INTEGER NOT NULL, -- 0 to tile_matrix matrix_width - 1 | |
251 | tile_row INTEGER NOT NULL, -- 0 to tile_matrix matrix_height - 1 | |
252 | tile_data BLOB NOT NULL, -- Of an image MIME type specified in clauses Tile Encoding PNG, Tile Encoding JPEG, Tile Encoding WEBP | |
253 | UNIQUE (zoom_level, tile_column, tile_row)) | |
254 | """.format(self.table_name) | |
255 | ] | |
256 | ||
257 | for stmt in stmts: | |
258 | db.execute(stmt) | |
259 | ||
260 | db.execute("PRAGMA foreign_keys = 1;") | |
261 | ||
262 | # WKT insert statement and entries for gpkg_spatial_ref_sys. | |
263 | wkt_statement = """ | |
264 | INSERT OR REPLACE INTO gpkg_spatial_ref_sys ( | |
265 | srs_id, | |
266 | organization, | |
267 | organization_coordsys_id, | |
268 | srs_name, | |
269 | definition) | |
270 | VALUES (?, ?, ?, ?, ?) | |
271 | """ | |
272 | wkt_entries = [(3857, 'epsg', 3857, 'WGS 84 / Pseudo-Mercator', | |
273 | """ | |
274 | PROJCS["WGS 84 / Pseudo-Mercator",GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,\ | |
275 | AUTHORITY["EPSG","7030"]],AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],\ | |
276 | UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]],\ | |
277 | PROJECTION["Mercator_1SP"],PARAMETER["central_meridian",0],PARAMETER["scale_factor",1],PARAMETER["false_easting",0],\ | |
278 | PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["X",EAST],AXIS["Y",NORTH]\ | |
279 | """ | |
280 | ), | |
281 | (4326, 'epsg', 4326, 'WGS 84', | |
282 | """ | |
283 | GEOGCS["WGS 84",DATUM["WGS_1984",SPHEROID["WGS 84",6378137,298.257223563,AUTHORITY["EPSG","7030"]],\ | |
284 | AUTHORITY["EPSG","6326"]],PRIMEM["Greenwich",0,AUTHORITY["EPSG","8901"]],UNIT["degree",0.0174532925199433,\ | |
285 | AUTHORITY["EPSG","9122"]],AUTHORITY["EPSG","4326"]]\ | |
286 | """ | |
287 | ), | |
288 | (-1, 'NONE', -1, ' ', 'undefined'), | |
289 | (0, 'NONE', 0, ' ', 'undefined') | |
290 | ] | |
291 | ||
292 | if get_epsg_num(self.tile_grid.srs.srs_code) not in [4326, 3857]: | |
293 | wkt_entries.append((proj, 'epsg', proj, 'Not provided', "Added via Mapproxy.")) | |
294 | db.commit() | |
295 | ||
296 | # Add geopackage version to the header (1.0) | |
297 | db.execute("PRAGMA application_id = 1196437808;") | |
298 | db.commit() | |
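The magic number 1196437808 in the PRAGMA above is not arbitrary: it is the ASCII tag 'GP10' (GeoPackage 1.0) packed as a big-endian 32-bit integer into SQLite's application_id header field, which is how tools recognize the file as a GeoPackage:

```python
import struct

# 'GP10' as a big-endian 32-bit integer: 0x47503130 == 1196437808
application_id = struct.unpack('>I', b'GP10')[0]
```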
299 | ||
300 | for wkt_entry in wkt_entries: | |
301 | try: | |
302 | db.execute(wkt_statement, (wkt_entry[0], wkt_entry[1], wkt_entry[2], wkt_entry[3], wkt_entry[4])) | |
303 | except sqlite3.IntegrityError: | |
304 | log.info("srs_id {0} already exists.".format(wkt_entry[0])) | |
305 | db.commit() | |
306 | ||
307 | # Ensure that tile table exists here, don't overwrite a valid entry. | |
308 | try: | |
309 | db.execute(""" | |
310 | INSERT INTO gpkg_contents ( | |
311 | table_name, | |
312 | data_type, | |
313 | identifier, | |
314 | description, | |
315 | min_x, | |
316 | max_x, | |
317 | min_y, | |
318 | max_y, | |
319 | srs_id) | |
320 | VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?); | |
321 | """, (self.table_name, | |
322 | "tiles", | |
323 | self.table_name, | |
324 | "Created with Mapproxy.", | |
325 | self.tile_grid.bbox[0], | |
326 | self.tile_grid.bbox[2], | |
327 | self.tile_grid.bbox[1], | |
328 | self.tile_grid.bbox[3], | |
329 | proj)) | |
330 | except sqlite3.IntegrityError: | |
331 | pass | |
332 | db.commit() | |
333 | ||
334 | # Ensure that tile set exists here, don't overwrite a valid entry. | |
335 | try: | |
336 | db.execute(""" | |
337 | INSERT INTO gpkg_tile_matrix_set (table_name, srs_id, min_x, max_x, min_y, max_y) | |
338 | VALUES (?, ?, ?, ?, ?, ?); | |
339 | """, ( | |
340 | self.table_name, proj, self.tile_grid.bbox[0], self.tile_grid.bbox[2], self.tile_grid.bbox[1], | |
341 | self.tile_grid.bbox[3])) | |
342 | except sqlite3.IntegrityError: | |
343 | pass | |
344 | db.commit() | |
345 | ||
346 | tile_size = self.tile_grid.tile_size | |
347 | for grid, resolution, level in zip(self.tile_grid.grid_sizes, | |
348 | self.tile_grid.resolutions, range(20)): | |
349 | db.execute("""INSERT OR REPLACE INTO gpkg_tile_matrix | |
350 | (table_name, zoom_level, matrix_width, matrix_height, tile_width, tile_height, pixel_x_size, pixel_y_size) | |
351 | VALUES(?, ?, ?, ?, ?, ?, ?, ?) | |
352 | """, | |
353 | (self.table_name, level, grid[0], grid[1], tile_size[0], tile_size[1], resolution, resolution)) | |
354 | db.commit() | |
355 | db.close() | |
356 | ||
357 | def is_cached(self, tile): | |
358 | if tile.coord is None: | |
359 | return True | |
360 | if tile.source: | |
361 | return True | |
362 | ||
363 | return self.load_tile(tile) | |
364 | ||
365 | ||
366 | def store_tile(self, tile): | |
367 | if tile.stored: | |
368 | return True | |
369 | return self._store_bulk([tile]) | |
370 | ||
371 | def store_tiles(self, tiles): | |
372 | tiles = [t for t in tiles if not t.stored] | |
373 | return self._store_bulk(tiles) | |
374 | ||
375 | ||
376 | def _store_bulk(self, tiles): | |
377 | records = [] | |
378 | # tile_buffer (as_buffer) will encode the tile to the target format; | |
379 | # we collect all tiles first, to avoid keeping the db transaction | |
380 | # open during this slow encoding | |
381 | for tile in tiles: | |
382 | with tile_buffer(tile) as buf: | |
383 | if PY2: | |
384 | content = buffer(buf.read()) | |
385 | else: | |
386 | content = buf.read() | |
387 | x, y, level = tile.coord | |
388 | records.append((level, x, y, content)) | |
389 | ||
390 | cursor = self.db.cursor() | |
391 | try: | |
392 | stmt = "INSERT OR REPLACE INTO [{0}] (zoom_level, tile_column, tile_row, tile_data) VALUES (?,?,?,?)".format( | |
393 | self.table_name) | |
394 | cursor.executemany(stmt, records) | |
395 | self.db.commit() | |
396 | except sqlite3.OperationalError as ex: | |
397 | log.warn('unable to store tile: %s', ex) | |
398 | return False | |
399 | return True | |
400 | ||
401 | def load_tile(self, tile, with_metadata=False): | |
402 | if tile.source or tile.coord is None: | |
403 | return True | |
404 | ||
405 | cur = self.db.cursor() | |
406 | cur.execute("""SELECT tile_data FROM [{0}] | |
407 | WHERE tile_column = ? AND | |
408 | tile_row = ? AND | |
409 | zoom_level = ?""".format(self.table_name), tile.coord) | |
410 | ||
411 | content = cur.fetchone() | |
412 | if content: | |
413 | tile.source = ImageSource(BytesIO(content[0])) | |
414 | return True | |
415 | else: | |
416 | return False | |
417 | ||
418 | def load_tiles(self, tiles, with_metadata=False): | |
419 | # associate the right tiles with the cursor | |
420 | tile_dict = {} | |
421 | coords = [] | |
422 | for tile in tiles: | |
423 | if tile.source or tile.coord is None: | |
424 | continue | |
425 | x, y, level = tile.coord | |
426 | coords.append(x) | |
427 | coords.append(y) | |
428 | coords.append(level) | |
429 | tile_dict[(x, y)] = tile | |
430 | ||
431 | if not tile_dict: | |
432 | # all tiles loaded or coords are None | |
433 | return True | |
434 | ||
435 | stmt_base = "SELECT tile_column, tile_row, tile_data FROM [{0}] WHERE ".format(self.table_name) | |
436 | ||
437 | loaded_tiles = 0 | |
438 | ||
439 | # SQLite is limited to 999 bound variables by default -> split into multiple requests if more are needed | |
440 | while coords: | |
441 | cur_coords = coords[:999] | |
442 | ||
443 | stmt = stmt_base + ' OR '.join( | |
444 | ['(tile_column = ? AND tile_row = ? AND zoom_level = ?)'] * (len(cur_coords) // 3)) | |
445 | ||
446 | cursor = self.db.cursor() | |
447 | cursor.execute(stmt, cur_coords) | |
448 | ||
449 | for row in cursor: | |
450 | loaded_tiles += 1 | |
451 | tile = tile_dict[(row[0], row[1])] | |
452 | data = row[2] | |
453 | tile.size = len(data) | |
454 | tile.source = ImageSource(BytesIO(data)) | |
455 | cursor.close() | |
456 | ||
457 | coords = coords[999:] | |
458 | ||
459 | return loaded_tiles == len(tile_dict) | |
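`load_tiles` above works around SQLite's default limit of 999 bound variables by slicing the flat coords list into chunks of at most 999 values (333 tiles) and issuing one OR-chained query per chunk. A self-contained sketch of the same batching against an in-memory database (the `tiles` table and its columns are illustrative, not the GeoPackage schema):

```python
import sqlite3

def fetch_rows(db, coords, batch=999):
    """Fetch (x, y, z) rows in chunks that respect SQLite's variable limit."""
    rows = []
    while coords:
        cur_coords = coords[:batch]  # batch is a multiple of 3 (x, y, z triples)
        stmt = "SELECT x, y, z FROM tiles WHERE " + " OR ".join(
            ["(x = ? AND y = ? AND z = ?)"] * (len(cur_coords) // 3))
        rows.extend(db.execute(stmt, cur_coords).fetchall())
        coords = coords[batch:]
    return rows

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tiles (x INTEGER, y INTEGER, z INTEGER)")
db.executemany("INSERT INTO tiles VALUES (?, ?, ?)",
               [(x, 0, 1) for x in range(500)])
coords = []
for x in range(500):
    coords.extend([x, 0, 1])
rows = fetch_rows(db, coords)  # 500 tiles -> two queries (333 + 167 tiles)
```

Exceeding the variable limit raises `sqlite3.OperationalError: too many SQL variables`, which is why the chunking is done up front rather than handled after the fact.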
460 | ||
461 | def remove_tile(self, tile): | |
462 | cursor = self.db.cursor() | |
463 | cursor.execute( | |
464 | "DELETE FROM [{0}] WHERE (tile_column = ? AND tile_row = ? AND zoom_level = ?)".format(self.table_name), | |
465 | tile.coord) | |
466 | self.db.commit() | |
467 | if cursor.rowcount: | |
468 | return True | |
469 | return False | |
470 | ||
471 | def remove_level_tiles_before(self, level, timestamp): | |
472 | if timestamp == 0: | |
473 | cursor = self.db.cursor() | |
474 | cursor.execute( | |
475 | "DELETE FROM [{0}] WHERE (zoom_level = ?)".format(self.table_name), (level,)) | |
476 | self.db.commit() | |
477 | log.info("Cursor rowcount = {0}".format(cursor.rowcount)) | |
478 | if cursor.rowcount: | |
479 | return True | |
480 | return False | |
481 | ||
482 | def load_tile_metadata(self, tile): | |
483 | self.load_tile(tile) | |
484 | ||
485 | ||
486 | class GeopackageLevelCache(TileCacheBase): | |
487 | ||
488 | def __init__(self, geopackage_dir, tile_grid, table_name, timeout=30, wal=False): | |
489 | self.lock_cache_id = 'gpkg-' + hashlib.md5(geopackage_dir.encode('utf-8')).hexdigest() | |
490 | self.cache_dir = geopackage_dir | |
491 | self.tile_grid = tile_grid | |
492 | self.table_name = table_name | |
493 | self.timeout = timeout | |
494 | self.wal = wal | |
495 | self._geopackage = {} | |
496 | self._geopackage_lock = threading.Lock() | |
497 | ||
498 | def _get_level(self, level): | |
499 | if level in self._geopackage: | |
500 | return self._geopackage[level] | |
501 | ||
502 | with self._geopackage_lock: | |
503 | if level not in self._geopackage: | |
504 | geopackage_filename = os.path.join(self.cache_dir, '%s.gpkg' % level) | |
505 | self._geopackage[level] = GeopackageCache( | |
506 | geopackage_filename, | |
507 | self.tile_grid, | |
508 | self.table_name, | |
509 | with_timestamps=True, | |
510 | timeout=self.timeout, | |
511 | wal=self.wal, | |
512 | ) | |
513 | ||
514 | return self._geopackage[level] | |
515 | ||
516 | def cleanup(self): | |
517 | """ | |
518 | Close all open connection and remove them from cache. | |
519 | """ | |
520 | with self._geopackage_lock: | |
521 | for gp in self._geopackage.values(): | |
522 | gp.cleanup() | |
523 | ||
524 | def is_cached(self, tile): | |
525 | if tile.coord is None: | |
526 | return True | |
527 | if tile.source: | |
528 | return True | |
529 | ||
530 | return self._get_level(tile.coord[2]).is_cached(tile) | |
531 | ||
532 | def store_tile(self, tile): | |
533 | if tile.stored: | |
534 | return True | |
535 | ||
536 | return self._get_level(tile.coord[2]).store_tile(tile) | |
537 | ||
538 | def store_tiles(self, tiles): | |
539 | failed = False | |
540 | for level, tiles in itertools.groupby(tiles, key=lambda t: t.coord[2]): | |
541 | tiles = [t for t in tiles if not t.stored] | |
542 | res = self._get_level(level).store_tiles(tiles) | |
543 | if not res: failed = True | |
544 | return not failed | |
545 | ||
546 | def load_tile(self, tile, with_metadata=False): | |
547 | if tile.source or tile.coord is None: | |
548 | return True | |
549 | ||
550 | return self._get_level(tile.coord[2]).load_tile(tile, with_metadata=with_metadata) | |
551 | ||
552 | def load_tiles(self, tiles, with_metadata=False): | |
553 | level = None | |
554 | for tile in tiles: | |
555 | if tile.source or tile.coord is None: | |
556 | continue | |
557 | level = tile.coord[2] | |
558 | break | |
559 | ||
560 | if level is None: | |
561 | return True | |
562 | ||
563 | return self._get_level(level).load_tiles(tiles, with_metadata=with_metadata) | |
564 | ||
565 | def remove_tile(self, tile): | |
566 | if tile.coord is None: | |
567 | return True | |
568 | ||
569 | return self._get_level(tile.coord[2]).remove_tile(tile) | |
570 | ||
571 | def remove_level_tiles_before(self, level, timestamp): | |
572 | level_cache = self._get_level(level) | |
573 | if timestamp == 0: | |
574 | level_cache.cleanup() | |
575 | os.unlink(level_cache.geopackage_file) | |
576 | return True | |
577 | else: | |
578 | return level_cache.remove_level_tiles_before(level, timestamp) | |
579 | ||
580 | ||
581 | def is_close(a, b, rel_tol=1e-09, abs_tol=0.0): | |
582 | """ | |
583 | See PEP 485 (math.isclose); provided here for Python versions before 3.5. | |
584 | ||
585 | >>> is_close(0.0, 0.0) | |
586 | True | |
587 | >>> is_close(1, 1.0) | |
588 | True | |
589 | >>> is_close(0.01, 0.001) | |
590 | False | |
591 | >>> is_close(0.0001001, 0.0001, rel_tol=1e-02) | |
592 | True | |
593 | >>> is_close(0.0001001, 0.0001) | |
594 | False | |
595 | ||
596 | @param a: An int or float. | |
597 | @param b: An int or float. | |
598 | @param rel_tol: Relative tolerance - maximum allowed difference between two numbers. | |
599 | @param abs_tol: Absolute tolerance - minimum allowed absolute difference, useful for comparisons near zero. | |
600 | @return: True if the values a and b are close. | |
601 | ||
602 | """ | |
603 | return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol) |
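This `is_close` backport uses the same formula as `math.isclose` from PEP 485 (Python 3.5+). A minimal standalone sketch, compared against the stdlib where available:

```python
import math

def is_close(a, b, rel_tol=1e-09, abs_tol=0.0):
    # PEP 485: the difference must fall within the larger of the relative
    # tolerance (scaled by the bigger magnitude) and the absolute tolerance.
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)

# On Python >= 3.5 this agrees with the stdlib implementation.
assert is_close(1, 1.0) == math.isclose(1, 1.0)
assert is_close(0.01, 0.001) == math.isclose(0.01, 0.001)
assert is_close(0.0001001, 0.0001, rel_tol=1e-02) == math.isclose(0.0001001, 0.0001, rel_tol=1e-02)
```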
20 | 20 | import time |
21 | 21 | |
22 | 22 | from mapproxy.image import ImageSource |
23 | from mapproxy.cache.base import TileCacheBase, tile_buffer, CacheBackendError | |
23 | from mapproxy.cache.base import TileCacheBase, tile_buffer, REMOVE_ON_UNLOCK | |
24 | 24 | from mapproxy.util.fs import ensure_directory |
25 | 25 | from mapproxy.util.lock import FileLock |
26 | from mapproxy.compat import BytesIO, PY2 | |
26 | from mapproxy.compat import BytesIO, PY2, itertools | |
27 | 27 | |
28 | 28 | import logging |
29 | 29 | log = logging.getLogger(__name__) |
37 | 37 | class MBTilesCache(TileCacheBase): |
38 | 38 | supports_timestamp = False |
39 | 39 | |
40 | def __init__(self, mbtile_file, with_timestamps=False): | |
40 | def __init__(self, mbtile_file, with_timestamps=False, timeout=30, wal=False): | |
41 | 41 | self.lock_cache_id = 'mbtiles-' + hashlib.md5(mbtile_file.encode('utf-8')).hexdigest() |
42 | 42 | self.mbtile_file = mbtile_file |
43 | 43 | self.supports_timestamp = with_timestamps |
44 | self.timeout = timeout | |
45 | self.wal = wal | |
44 | 46 | self.ensure_mbtile() |
45 | 47 | self._db_conn_cache = threading.local() |
46 | 48 | |
48 | 50 | def db(self): |
49 | 51 | if not getattr(self._db_conn_cache, 'db', None): |
50 | 52 | self.ensure_mbtile() |
51 | self._db_conn_cache.db = sqlite3.connect(self.mbtile_file) | |
53 | self._db_conn_cache.db = sqlite3.connect(self.mbtile_file, self.timeout) | |
52 | 54 | return self._db_conn_cache.db |
53 | 55 | |
54 | 56 | def cleanup(self): |
61 | 63 | |
62 | 64 | def ensure_mbtile(self): |
63 | 65 | if not os.path.exists(self.mbtile_file): |
64 | with FileLock(os.path.join(os.path.dirname(self.mbtile_file), 'init.lck'), | |
65 | remove_on_unlock=True): | |
66 | with FileLock(self.mbtile_file + '.init.lck', | |
67 | remove_on_unlock=REMOVE_ON_UNLOCK): | |
66 | 68 | if not os.path.exists(self.mbtile_file): |
67 | 69 | ensure_directory(self.mbtile_file) |
68 | 70 | self._initialize_mbtile() |
70 | 72 | def _initialize_mbtile(self): |
71 | 73 | log.info('initializing MBTile file %s', self.mbtile_file) |
72 | 74 | db = sqlite3.connect(self.mbtile_file) |
75 | ||
76 | if self.wal: | |
77 | db.execute('PRAGMA journal_mode=wal') | |
78 | ||
73 | 79 | stmt = """ |
74 | 80 | CREATE TABLE tiles ( |
75 | 81 | zoom_level integer, |
134 | 140 | def store_tile(self, tile): |
135 | 141 | if tile.stored: |
136 | 142 | return True |
137 | with tile_buffer(tile) as buf: | |
138 | if PY2: | |
139 | content = buffer(buf.read()) | |
143 | return self._store_bulk([tile]) | |
144 | ||
145 | def store_tiles(self, tiles): | |
146 | tiles = [t for t in tiles if not t.stored] | |
147 | return self._store_bulk(tiles) | |
148 | ||
149 | def _store_bulk(self, tiles): | |
150 | records = [] | |
151 | # tile_buffer (as_buffer) encodes the tile to the target format. | |
152 | # We collect all records first, to avoid keeping the db transaction | |
153 | # open during this slow encoding. | |
154 | for tile in tiles: | |
155 | with tile_buffer(tile) as buf: | |
156 | if PY2: | |
157 | content = buffer(buf.read()) | |
158 | else: | |
159 | content = buf.read() | |
160 | x, y, level = tile.coord | |
161 | if self.supports_timestamp: | |
162 | records.append((level, x, y, content, time.time())) | |
163 | else: | |
164 | records.append((level, x, y, content)) | |
165 | ||
166 | cursor = self.db.cursor() | |
167 | try: | |
168 | if self.supports_timestamp: | |
169 | stmt = "INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data, last_modified) VALUES (?,?,?,?, datetime(?, 'unixepoch', 'localtime'))" | |
170 | cursor.executemany(stmt, records) | |
140 | 171 | else: |
141 | content = buf.read() | |
142 | x, y, level = tile.coord | |
143 | cursor = self.db.cursor() | |
144 | try: | |
145 | if self.supports_timestamp: | |
146 | stmt = "INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data, last_modified) VALUES (?,?,?,?, datetime(?, 'unixepoch', 'localtime'))" | |
147 | cursor.execute(stmt, (level, x, y, content, time.time())) | |
148 | else: | |
149 | stmt = "INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data) VALUES (?,?,?,?)" | |
150 | cursor.execute(stmt, (level, x, y, content)) | |
151 | self.db.commit() | |
152 | except sqlite3.OperationalError as ex: | |
153 | log.warn('unable to store tile: %s', ex) | |
154 | return False | |
155 | return True | |
172 | stmt = "INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data) VALUES (?,?,?,?)" | |
173 | cursor.executemany(stmt, records) | |
174 | self.db.commit() | |
175 | except sqlite3.OperationalError as ex: | |
176 | log.warn('unable to store tile: %s', ex) | |
177 | return False | |
178 | return True | |
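The `_store_bulk` refactor replaces one `execute` + `commit` per tile with a single `executemany` in one transaction. The pattern can be sketched against an in-memory SQLite database with a simplified `tiles` table (no timestamp column):

```python
import sqlite3

# Simplified sketch of _store_bulk: collect all encoded records first,
# then write them in a single transaction with executemany().
db = sqlite3.connect(':memory:')
db.execute("""
    CREATE TABLE tiles (
        zoom_level integer, tile_column integer, tile_row integer,
        tile_data blob,
        UNIQUE (zoom_level, tile_column, tile_row))
""")

records = [(2, 0, 0, b'img-a'), (2, 0, 1, b'img-b')]  # (level, x, y, content)
cursor = db.cursor()
cursor.executemany(
    "INSERT OR REPLACE INTO tiles (zoom_level, tile_column, tile_row, tile_data) VALUES (?,?,?,?)",
    records)
db.commit()

assert db.execute('SELECT COUNT(*) FROM tiles').fetchone()[0] == 2
```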
156 | 179 | |
157 | 180 | def load_tile(self, tile, with_metadata=False): |
158 | 181 | if tile.source or tile.coord is None: |
270 | 293 | class MBTilesLevelCache(TileCacheBase): |
271 | 294 | supports_timestamp = True |
272 | 295 | |
273 | def __init__(self, mbtiles_dir): | |
296 | def __init__(self, mbtiles_dir, timeout=30, wal=False): | |
274 | 297 | self.lock_cache_id = 'sqlite-' + hashlib.md5(mbtiles_dir.encode('utf-8')).hexdigest() |
275 | 298 | self.cache_dir = mbtiles_dir |
276 | 299 | self._mbtiles = {} |
300 | self.timeout = timeout | |
301 | self.wal = wal | |
277 | 302 | self._mbtiles_lock = threading.Lock() |
278 | 303 | |
279 | 304 | def _get_level(self, level): |
286 | 311 | self._mbtiles[level] = MBTilesCache( |
287 | 312 | mbtile_filename, |
288 | 313 | with_timestamps=True, |
314 | timeout=self.timeout, | |
315 | wal=self.wal, | |
289 | 316 | ) |
290 | 317 | |
291 | 318 | return self._mbtiles[level] |
311 | 338 | return True |
312 | 339 | |
313 | 340 | return self._get_level(tile.coord[2]).store_tile(tile) |
341 | ||
342 | def store_tiles(self, tiles): | |
343 | failed = False | |
344 | for level, tiles in itertools.groupby(tiles, key=lambda t: t.coord[2]): | |
345 | tiles = [t for t in tiles if not t.stored] | |
346 | res = self._get_level(level).store_tiles(tiles) | |
347 | if not res: failed = True | |
348 | return not failed | |
314 | 349 | |
315 | 350 | def load_tile(self, tile, with_metadata=False): |
316 | 351 | if tile.source or tile.coord is None: |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2010-2016 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | import os | |
16 | from mapproxy.compat import string_type | |
17 | from mapproxy.util.fs import ensure_directory | |
18 | ||
19 | ||
20 | def location_funcs(layout): | |
21 | if layout == 'tc': | |
22 | return tile_location_tc, level_location | |
23 | elif layout == 'mp': | |
24 | return tile_location_mp, level_location | |
25 | elif layout == 'tms': | |
26 | return tile_location_tms, level_location | |
27 | elif layout == 'reverse_tms': | |
28 | return tile_location_reverse_tms, None | |
29 | elif layout == 'quadkey': | |
30 | return tile_location_quadkey, no_level_location | |
31 | elif layout == 'arcgis': | |
32 | return tile_location_arcgiscache, level_location_arcgiscache | |
33 | else: | |
34 | raise ValueError('unknown directory_layout "%s"' % layout) | |
35 | ||
36 | def level_location(level, cache_dir): | |
37 | """ | |
38 | Return the path where all tiles for `level` will be stored. | |
39 | ||
40 | >>> level_location(2, '/tmp/cache') | |
41 | '/tmp/cache/02' | |
42 | """ | |
43 | if isinstance(level, string_type): | |
44 | return os.path.join(cache_dir, level) | |
45 | else: | |
46 | return os.path.join(cache_dir, "%02d" % level) | |
47 | ||
48 | ||
49 | def level_part(level): | |
50 | """ | |
51 | Return the directory name part for `level`. | |
52 | ||
53 | >>> level_part(2) | |
54 | '02' | |
55 | >>> level_part('2') | |
56 | '2' | |
57 | """ | |
58 | if isinstance(level, string_type): | |
59 | return level | |
60 | else: | |
61 | return "%02d" % level | |
62 | ||
63 | ||
64 | def tile_location_tc(tile, cache_dir, file_ext, create_dir=False): | |
65 | """ | |
66 | Return the location of the `tile`. Caches the result as ``location`` | |
67 | property of the `tile`. | |
68 | ||
69 | :param tile: the tile object | |
70 | :param create_dir: if True, create all necessary directories | |
71 | :return: the full filename of the tile | |
72 | ||
73 | >>> from mapproxy.cache.tile import Tile | |
74 | >>> tile_location_tc(Tile((3, 4, 2)), '/tmp/cache', 'png').replace('\\\\', '/') | |
75 | '/tmp/cache/02/000/000/003/000/000/004.png' | |
76 | """ | |
77 | if tile.location is None: | |
78 | x, y, z = tile.coord | |
79 | parts = (cache_dir, | |
80 | level_part(z), | |
81 | "%03d" % int(x / 1000000), | |
82 | "%03d" % (int(x / 1000) % 1000), | |
83 | "%03d" % (int(x) % 1000), | |
84 | "%03d" % int(y / 1000000), | |
85 | "%03d" % (int(y / 1000) % 1000), | |
86 | "%03d.%s" % (int(y) % 1000, file_ext)) | |
87 | tile.location = os.path.join(*parts) | |
88 | if create_dir: | |
89 | ensure_directory(tile.location) | |
90 | return tile.location | |
91 | ||
92 | def tile_location_mp(tile, cache_dir, file_ext, create_dir=False): | |
93 | """ | |
94 | Return the location of the `tile`. Caches the result as ``location`` | |
95 | property of the `tile`. | |
96 | ||
97 | :param tile: the tile object | |
98 | :param create_dir: if True, create all necessary directories | |
99 | :return: the full filename of the tile | |
100 | ||
101 | >>> from mapproxy.cache.tile import Tile | |
102 | >>> tile_location_mp(Tile((3, 4, 2)), '/tmp/cache', 'png').replace('\\\\', '/') | |
103 | '/tmp/cache/02/0000/0003/0000/0004.png' | |
104 | >>> tile_location_mp(Tile((12345678, 98765432, 22)), '/tmp/cache', 'png').replace('\\\\', '/') | |
105 | '/tmp/cache/22/1234/5678/9876/5432.png' | |
106 | """ | |
107 | if tile.location is None: | |
108 | x, y, z = tile.coord | |
109 | parts = (cache_dir, | |
110 | level_part(z), | |
111 | "%04d" % int(x / 10000), | |
112 | "%04d" % (int(x) % 10000), | |
113 | "%04d" % int(y / 10000), | |
114 | "%04d.%s" % (int(y) % 10000, file_ext)) | |
115 | tile.location = os.path.join(*parts) | |
116 | if create_dir: | |
117 | ensure_directory(tile.location) | |
118 | return tile.location | |
119 | ||
120 | def tile_location_tms(tile, cache_dir, file_ext, create_dir=False): | |
121 | """ | |
122 | Return the location of the `tile`. Caches the result as ``location`` | |
123 | property of the `tile`. | |
124 | ||
125 | :param tile: the tile object | |
126 | :param create_dir: if True, create all necessary directories | |
127 | :return: the full filename of the tile | |
128 | ||
129 | >>> from mapproxy.cache.tile import Tile | |
130 | >>> tile_location_tms(Tile((3, 4, 2)), '/tmp/cache', 'png').replace('\\\\', '/') | |
131 | '/tmp/cache/2/3/4.png' | |
132 | """ | |
133 | if tile.location is None: | |
134 | x, y, z = tile.coord | |
135 | tile.location = os.path.join( | |
136 | cache_dir, level_part(str(z)), | |
137 | str(x), str(y) + '.' + file_ext | |
138 | ) | |
139 | if create_dir: | |
140 | ensure_directory(tile.location) | |
141 | return tile.location | |
142 | ||
143 | def tile_location_reverse_tms(tile, cache_dir, file_ext, create_dir=False): | |
144 | """ | |
145 | Return the location of the `tile`. Caches the result as ``location`` | |
146 | property of the `tile`. | |
147 | ||
148 | :param tile: the tile object | |
149 | :param create_dir: if True, create all necessary directories | |
150 | :return: the full filename of the tile | |
151 | ||
152 | >>> from mapproxy.cache.tile import Tile | |
153 | >>> tile_location_reverse_tms(Tile((3, 4, 2)), '/tmp/cache', 'png').replace('\\\\', '/') | |
154 | '/tmp/cache/4/3/2.png' | |
155 | """ | |
156 | if tile.location is None: | |
157 | x, y, z = tile.coord | |
158 | tile.location = os.path.join( | |
159 | cache_dir, str(y), str(x), str(z) + '.' + file_ext | |
160 | ) | |
161 | if create_dir: | |
162 | ensure_directory(tile.location) | |
163 | return tile.location | |
164 | ||
165 | def level_location_tms(level, cache_dir): | |
166 | return level_location(str(level), cache_dir=cache_dir) | |
167 | ||
168 | def tile_location_quadkey(tile, cache_dir, file_ext, create_dir=False): | |
169 | """ | |
170 | Return the location of the `tile`. Caches the result as ``location`` | |
171 | property of the `tile`. | |
172 | ||
173 | :param tile: the tile object | |
174 | :param create_dir: if True, create all necessary directories | |
175 | :return: the full filename of the tile | |
176 | ||
177 | >>> from mapproxy.cache.tile import Tile | |
178 | >>> tile_location_quadkey(Tile((3, 4, 2)), '/tmp/cache', 'png').replace('\\\\', '/') | |
179 | '/tmp/cache/11.png' | |
180 | """ | |
181 | if tile.location is None: | |
182 | x, y, z = tile.coord | |
183 | quadKey = "" | |
184 | for i in range(z,0,-1): | |
185 | digit = 0 | |
186 | mask = 1 << (i-1) | |
187 | if (x & mask) != 0: | |
188 | digit += 1 | |
189 | if (y & mask) != 0: | |
190 | digit += 2 | |
191 | quadKey += str(digit) | |
192 | tile.location = os.path.join( | |
193 | cache_dir, quadKey + '.' + file_ext | |
194 | ) | |
195 | if create_dir: | |
196 | ensure_directory(tile.location) | |
197 | return tile.location | |
198 | ||
199 | def no_level_location(level, cache_dir): | |
200 | # dummy for quadkey cache which stores all tiles in one directory | |
201 | raise NotImplementedError('cache does not have any level location') | |
202 | ||
203 | def tile_location_arcgiscache(tile, cache_dir, file_ext, create_dir=False): | |
204 | """ | |
205 | Return the location of the `tile`. Caches the result as ``location`` | |
206 | property of the `tile`. | |
207 | ||
208 | :param tile: the tile object | |
209 | :param create_dir: if True, create all necessary directories | |
210 | :return: the full filename of the tile | |
211 | ||
212 | >>> from mapproxy.cache.tile import Tile | |
213 | >>> tile_location_arcgiscache(Tile((1234567, 87654321, 9)), '/tmp/cache', 'png').replace('\\\\', '/') | |
214 | '/tmp/cache/L09/R05397fb1/C0012d687.png' | |
215 | """ | |
216 | if tile.location is None: | |
217 | x, y, z = tile.coord | |
218 | parts = (cache_dir, 'L%02d' % z, 'R%08x' % y, 'C%08x.%s' % (x, file_ext)) | |
219 | tile.location = os.path.join(*parts) | |
220 | if create_dir: | |
221 | ensure_directory(tile.location) | |
222 | return tile.location | |
223 | ||
224 | def level_location_arcgiscache(z, cache_dir): | |
225 | return level_location('L%02d' % z, cache_dir=cache_dir)⏎ |
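The `quadkey` layout above follows the Bing Maps tile scheme: each digit encodes the quadrant (0-3) at one zoom step. The loop from `tile_location_quadkey` extracted as a standalone helper for illustration:

```python
def quadkey(x, y, z):
    # Bing Maps scheme: at each zoom step the digit (0-3) encodes which
    # quadrant the tile falls into, most significant bit first.
    key = ""
    for i in range(z, 0, -1):
        digit = 0
        mask = 1 << (i - 1)
        if x & mask:
            digit += 1
        if y & mask:
            digit += 2
        key += str(digit)
    return key

assert quadkey(3, 4, 2) == '11'  # matches the doctest: /tmp/cache/11.png
```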
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2017 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement, absolute_import | |
16 | ||
17 | import hashlib | |
18 | ||
19 | from mapproxy.image import ImageSource | |
20 | from mapproxy.cache.base import ( | |
21 | TileCacheBase, | |
22 | tile_buffer, | |
23 | ) | |
24 | from mapproxy.compat import BytesIO | |
25 | ||
26 | try: | |
27 | import redis | |
28 | except ImportError: | |
29 | redis = None | |
30 | ||
31 | ||
32 | import logging | |
33 | log = logging.getLogger(__name__) | |
34 | ||
35 | ||
36 | class RedisCache(TileCacheBase): | |
37 | def __init__(self, host, port, prefix, ttl=0, db=0): | |
38 | if redis is None: | |
39 | raise ImportError("Redis backend requires 'redis' package.") | |
40 | ||
41 | self.prefix = prefix | |
42 | self.lock_cache_id = 'redis-' + hashlib.md5((host + str(port) + prefix + str(db)).encode('utf-8')).hexdigest() | |
43 | self.ttl = ttl | |
44 | self.r = redis.StrictRedis(host=host, port=port, db=db) | |
45 | ||
46 | def _key(self, tile): | |
47 | x, y, z = tile.coord | |
48 | return self.prefix + '-%d-%d-%d' % (z, x, y) | |
49 | ||
50 | def is_cached(self, tile): | |
51 | if tile.coord is None or tile.source: | |
52 | return True | |
53 | ||
54 | return self.r.exists(self._key(tile)) | |
55 | ||
56 | def store_tile(self, tile): | |
57 | if tile.stored: | |
58 | return True | |
59 | ||
60 | key = self._key(tile) | |
61 | ||
62 | with tile_buffer(tile) as buf: | |
63 | data = buf.read() | |
64 | ||
65 | r = self.r.set(key, data) | |
66 | if self.ttl: | |
67 | # use ms expire times for unit-tests | |
68 | self.r.pexpire(key, int(self.ttl * 1000)) | |
69 | return r | |
70 | ||
71 | def load_tile(self, tile, with_metadata=False): | |
72 | if tile.source or tile.coord is None: | |
73 | return True | |
74 | key = self._key(tile) | |
75 | tile_data = self.r.get(key) | |
76 | if tile_data: | |
77 | tile.source = ImageSource(BytesIO(tile_data)) | |
78 | return True | |
79 | return False | |
80 | ||
81 | def remove_tile(self, tile): | |
82 | if tile.coord is None: | |
83 | return True | |
84 | ||
85 | key = self._key(tile) | |
86 | self.r.delete(key) | |
87 | return True |
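Note the key layout: tile coordinates are `(x, y, z)`, but `_key` orders them as `prefix-z-x-y`. A standalone sketch of the same scheme (no Redis server required):

```python
def redis_key(prefix, coord):
    # Mirrors RedisCache._key: coord is (x, y, z), the key is prefix-z-x-y.
    x, y, z = coord
    return prefix + '-%d-%d-%d' % (z, x, y)

assert redis_key('osm', (3, 4, 2)) == 'osm-2-3-4'
```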
29 | 29 | from mapproxy.client.log import log_request |
30 | 30 | from mapproxy.cache.tile import TileCreator, Tile |
31 | 31 | from mapproxy.source import SourceError |
32 | from mapproxy.util.lock import LockTimeout | |
32 | 33 | |
33 | 34 | def has_renderd_support(): |
34 | 35 | if not json or not requests: |
70 | 71 | if result['status'] == 'error': |
71 | 72 | log_request(address, 500, None, duration=duration, method='RENDERD') |
72 | 73 | raise SourceError("Error from renderd: %s" % result.get('error_message', 'unknown error from renderd')) |
74 | elif result['status'] == 'lock': | |
75 | log_request(address, 503, None, duration=duration, method='RENDERD') | |
76 | raise LockTimeout("Lock timeout from renderd: %s" % result.get('error_message', 'unknown lock timeout error from renderd')) | |
73 | 77 | |
74 | 78 | log_request(address, 200, None, duration=duration, method='RENDERD') |
75 | 79 |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2016 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement | |
16 | ||
17 | import hashlib | |
18 | import sys | |
19 | import threading | |
20 | ||
21 | from mapproxy.image import ImageSource | |
22 | from mapproxy.cache import path | |
23 | from mapproxy.cache.base import tile_buffer, TileCacheBase | |
24 | from mapproxy.util import async | |
25 | from mapproxy.util.py import reraise_exception | |
26 | ||
27 | try: | |
28 | import boto3 | |
29 | import botocore | |
30 | except ImportError: | |
31 | boto3 = None | |
32 | ||
33 | ||
34 | import logging | |
35 | log = logging.getLogger('mapproxy.cache.s3') | |
36 | ||
37 | ||
38 | _s3_sessions_cache = threading.local() | |
39 | def s3_session(profile_name=None): | |
40 | if not hasattr(_s3_sessions_cache, 'sessions'): | |
41 | _s3_sessions_cache.sessions = {} | |
42 | if profile_name not in _s3_sessions_cache.sessions: | |
43 | _s3_sessions_cache.sessions[profile_name] = boto3.session.Session(profile_name=profile_name) | |
44 | return _s3_sessions_cache.sessions[profile_name] | |
45 | ||
46 | class S3ConnectionError(Exception): | |
47 | pass | |
48 | ||
49 | class S3Cache(TileCacheBase): | |
50 | ||
51 | def __init__(self, base_path, file_ext, directory_layout='tms', | |
52 | bucket_name='mapproxy', profile_name=None, | |
53 | _concurrent_writer=4): | |
54 | super(S3Cache, self).__init__() | |
55 | self.lock_cache_id = hashlib.md5(base_path.encode('utf-8') + bucket_name.encode('utf-8')).hexdigest() | |
56 | self.bucket_name = bucket_name | |
57 | try: | |
58 | self.bucket = self.conn().head_bucket(Bucket=bucket_name) | |
59 | except botocore.exceptions.ClientError as e: | |
60 | if e.response['Error']['Code'] == '404': | |
61 | raise S3ConnectionError('No such bucket: %s' % bucket_name) | |
62 | elif e.response['Error']['Code'] == '403': | |
63 | raise S3ConnectionError('Access denied. Check your credentials') | |
64 | else: | |
65 | reraise_exception( | |
66 | S3ConnectionError('Unknown error: %s' % e), | |
67 | sys.exc_info(), | |
68 | ) | |
69 | ||
70 | self.base_path = base_path | |
71 | self.file_ext = file_ext | |
72 | self._concurrent_writer = _concurrent_writer | |
73 | ||
74 | self._tile_location, _ = path.location_funcs(layout=directory_layout) | |
75 | ||
76 | def tile_key(self, tile): | |
77 | return self._tile_location(tile, self.base_path, self.file_ext).lstrip('/') | |
78 | ||
79 | def conn(self): | |
80 | if boto3 is None: | |
81 | raise ImportError("S3 Cache requires 'boto3' package.") | |
82 | ||
83 | try: | |
84 | return s3_session().client("s3") | |
85 | except Exception as e: | |
86 | raise S3ConnectionError('Error during connection %s' % e) | |
87 | ||
88 | def load_tile_metadata(self, tile): | |
89 | if tile.timestamp: | |
90 | return | |
91 | self.is_cached(tile) | |
92 | ||
93 | def _set_metadata(self, response, tile): | |
94 | if 'LastModified' in response: | |
95 | tile.timestamp = float(response['LastModified'].strftime('%s')) | |
96 | if 'ContentLength' in response: | |
97 | tile.size = response['ContentLength'] | |
98 | ||
99 | def is_cached(self, tile): | |
100 | if tile.is_missing(): | |
101 | key = self.tile_key(tile) | |
102 | try: | |
103 | r = self.conn().head_object(Bucket=self.bucket_name, Key=key) | |
104 | self._set_metadata(r, tile) | |
105 | except botocore.exceptions.ClientError as e: | |
106 | if e.response['Error']['Code'] in ('404', 'NoSuchKey'): | |
107 | return False | |
108 | raise | |
109 | ||
110 | return True | |
111 | ||
112 | def load_tiles(self, tiles, with_metadata=True): | |
113 | p = async.Pool(min(4, len(tiles))) | |
114 | return all(p.map(self.load_tile, tiles)) | |
115 | ||
116 | def load_tile(self, tile, with_metadata=True): | |
117 | if not tile.is_missing(): | |
118 | return True | |
119 | ||
120 | key = self.tile_key(tile) | |
121 | log.debug('S3:load_tile, key: %s' % key) | |
122 | ||
123 | try: | |
124 | r = self.conn().get_object(Bucket=self.bucket_name, Key=key) | |
125 | self._set_metadata(r, tile) | |
126 | tile.source = ImageSource(r['Body']) | |
127 | except botocore.exceptions.ClientError as e: | |
128 | error = e.response.get('Errors', e.response)['Error'] # moto get_object can return Error wrapped in Errors... | |
129 | if error['Code'] in ('404', 'NoSuchKey'): | |
130 | return False | |
131 | raise | |
132 | ||
133 | return True | |
134 | ||
135 | def remove_tile(self, tile): | |
136 | key = self.tile_key(tile) | |
137 | log.debug('remove_tile, key: %s' % key) | |
138 | self.conn().delete_object(Bucket=self.bucket_name, Key=key) | |
139 | ||
140 | def store_tiles(self, tiles): | |
141 | p = async.Pool(min(self._concurrent_writer, len(tiles))) | |
142 | p.map(self.store_tile, tiles) | |
143 | ||
144 | def store_tile(self, tile): | |
145 | if tile.stored: | |
146 | return | |
147 | ||
148 | key = self.tile_key(tile) | |
149 | log.debug('S3: store_tile, key: %s' % key) | |
150 | ||
151 | extra_args = {} | |
152 | if self.file_ext in ('jpeg', 'png'): | |
153 | extra_args['ContentType'] = 'image/' + self.file_ext | |
154 | with tile_buffer(tile) as buf: | |
155 | self.conn().upload_fileobj( | |
156 | NopCloser(buf), # upload_fileobj closes buf, wrap in NopCloser | |
157 | self.bucket_name, | |
158 | key, | |
159 | ExtraArgs=extra_args) | |
160 | ||
161 | class NopCloser(object): | |
162 | def __init__(self, wrapped): | |
163 | self.wrapped = wrapped | |
164 | ||
165 | def close(self): | |
166 | pass | |
167 | ||
168 | def __getattr__(self, name): | |
169 | return getattr(self.wrapped, name) |
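`upload_fileobj` closes the file object handed to it, which would close the shared `tile_buffer`. `NopCloser` swallows `close()` and delegates everything else via `__getattr__`; the behavior can be demonstrated with a plain `BytesIO`:

```python
from io import BytesIO

class NopCloser(object):
    # Delegate everything to the wrapped file object except close(),
    # so an API that closes its input cannot close our shared buffer.
    def __init__(self, wrapped):
        self.wrapped = wrapped
    def close(self):
        pass
    def __getattr__(self, name):
        return getattr(self.wrapped, name)

buf = BytesIO(b'tile-data')
wrapper = NopCloser(buf)
wrapper.close()                         # swallowed, buf stays usable
assert not buf.closed
assert wrapper.read() == b'tile-data'   # other methods pass through
```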
36 | 36 | |
37 | 37 | from __future__ import with_statement |
38 | 38 | |
39 | from functools import partial | |
39 | 40 | from contextlib import contextmanager |
40 | 41 | from mapproxy.grid import MetaGrid |
41 | 42 | from mapproxy.image.merge import merge_images |
42 | 43 | from mapproxy.image.tile import TileSplitter |
43 | 44 | from mapproxy.layer import MapQuery, BlankImage |
44 | 45 | from mapproxy.util import async |
46 | from mapproxy.util.py import reraise | |
45 | 47 | |
46 | 48 | class TileManager(object): |
47 | 49 | """ |
55 | 57 | """ |
56 | 58 | def __init__(self, grid, cache, sources, format, locker, image_opts=None, request_format=None, |
57 | 59 | meta_buffer=None, meta_size=None, minimize_meta_requests=False, identifier=None, |
58 | pre_store_filter=None, concurrent_tile_creators=1, tile_creator_class=None): | |
60 | pre_store_filter=None, concurrent_tile_creators=1, tile_creator_class=None, | |
61 | bulk_meta_tiles=False, | |
62 | ): | |
59 | 63 | self.grid = grid |
60 | 64 | self.cache = cache |
61 | 65 | self.locker = locker |
77 | 81 | self.meta_grid = MetaGrid(grid, meta_size=meta_size, meta_buffer=meta_buffer) |
78 | 82 | elif any(source.supports_meta_tiles for source in sources): |
79 | 83 | raise ValueError('meta tiling configured but not supported by all sources') |
84 | elif meta_size and not meta_size == [1, 1] and bulk_meta_tiles: | |
85 | # meta tiles configured but all sources are tiled | |
86 | # use bulk_meta_tile mode that downloads the tiles in parallel | |
87 | self.meta_grid = MetaGrid(grid, meta_size=meta_size, meta_buffer=0) | |
88 | self.tile_creator_class = partial(self.tile_creator_class, bulk_meta_tiles=True) | |
80 | 89 | |
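The `functools.partial` trick above bakes `bulk_meta_tiles=True` into `tile_creator_class`, so call sites keep instantiating the creator without knowing about the flag. A minimal sketch of the pattern, using a stand-in class:

```python
from functools import partial

class DemoTileCreator(object):
    # hypothetical stand-in for TileCreator; only the flag handling is shown
    def __init__(self, tile_mgr, dimensions=None, bulk_meta_tiles=False):
        self.tile_mgr = tile_mgr
        self.bulk_meta_tiles = bulk_meta_tiles

creator_cls = partial(DemoTileCreator, bulk_meta_tiles=True)
creator = creator_cls(tile_mgr=None)  # call site is unchanged
assert creator.bulk_meta_tiles
```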
81 | 90 | @contextmanager |
82 | 91 | def session(self): |
195 | 204 | return tile |
196 | 205 | |
197 | 206 | class TileCreator(object): |
198 | def __init__(self, tile_mgr, dimensions=None, image_merger=None): | |
207 | def __init__(self, tile_mgr, dimensions=None, image_merger=None, bulk_meta_tiles=False): | |
199 | 208 | self.cache = tile_mgr.cache |
200 | 209 | self.sources = tile_mgr.sources |
201 | 210 | self.grid = tile_mgr.grid |
202 | 211 | self.meta_grid = tile_mgr.meta_grid |
212 | self.bulk_meta_tiles = bulk_meta_tiles | |
203 | 213 | self.tile_mgr = tile_mgr |
204 | 214 | self.dimensions = dimensions |
205 | 215 | self.image_merger = image_merger |
282 | 292 | try: |
283 | 293 | img = source.get_map(query) |
284 | 294 | except BlankImage: |
285 | return None | |
295 | return None, None | |
286 | 296 | else: |
287 | return img | |
288 | ||
289 | imgs = [] | |
290 | for img in async.imap(get_map_from_source, self.sources): | |
291 | if img is not None: | |
292 | imgs.append(img) | |
293 | ||
294 | merger = self.image_merger | |
295 | if not merger: | |
296 | merger = merge_images | |
297 | return merger(imgs, size=query.size, image_opts=self.tile_mgr.image_opts) | |
297 | return (img, source.coverage) | |
298 | ||
299 | layers = [] | |
300 | for layer in async.imap(get_map_from_source, self.sources): | |
301 | if layer[0] is not None: | |
302 | layers.append(layer) | |
303 | ||
304 | return merge_images(layers, size=query.size, bbox=query.bbox, bbox_srs=query.srs, | |
305 | image_opts=self.tile_mgr.image_opts, merger=self.image_merger) | |
298 | 306 | |
299 | 307 | def _create_meta_tiles(self, meta_tiles): |
308 | if self.bulk_meta_tiles: | |
309 | created_tiles = [] | |
310 | for meta_tile in meta_tiles: | |
311 | created_tiles.extend(self._create_bulk_meta_tile(meta_tile)) | |
312 | return created_tiles | |
313 | ||
300 | 314 | if self.tile_mgr.concurrent_tile_creators > 1 and len(meta_tiles) > 1: |
301 | 315 | return self._create_threaded(self._create_meta_tile, meta_tiles) |
302 | 316 | |
306 | 320 | return created_tiles |
307 | 321 | |
308 | 322 | def _create_meta_tile(self, meta_tile): |
323 | """ | |
324 | _create_meta_tile queries a single meta tile and splits it into | |
325 | tiles. | |
326 | """ | |
309 | 327 | tile_size = self.grid.tile_size |
310 | 328 | query = MapQuery(meta_tile.bbox, meta_tile.size, self.grid.srs, self.tile_mgr.request_format, |
311 | 329 | dimensions=self.dimensions) |
320 | 338 | if meta_tile_image.cacheable: |
321 | 339 | self.cache.store_tiles(splitted_tiles) |
322 | 340 | return splitted_tiles |
323 | # else | |
341 | # else | |
324 | 342 | tiles = [Tile(coord) for coord in meta_tile.tiles] |
325 | 343 | self.cache.load_tiles(tiles) |
326 | 344 | return tiles |
345 | ||
346 | def _create_bulk_meta_tile(self, meta_tile): | |
347 | """ | |
348 | _create_bulk_meta_tile queries each tile of the meta tile in parallel | |
349 | (using concurrent_tile_creators). | |
350 | """ | |
351 | tile_size = self.grid.tile_size | |
352 | main_tile = Tile(meta_tile.main_tile_coord) | |
353 | with self.tile_mgr.lock(main_tile): | |
354 | if not all(self.is_cached(t) for t in meta_tile.tiles if t is not None): | |
355 | async_pool = async.Pool(self.tile_mgr.concurrent_tile_creators) | |
356 | def query_tile(coord): | |
357 | try: | |
358 | query = MapQuery(self.grid.tile_bbox(coord), tile_size, self.grid.srs, self.tile_mgr.request_format, | |
359 | dimensions=self.dimensions) | |
360 | tile_image = self._query_sources(query) | |
361 | if tile_image is None: | |
362 | return None | |
363 | ||
364 | if self.tile_mgr.image_opts != tile_image.image_opts: | |
365 | # call as_buffer to force conversion into cache format | |
366 | tile_image.as_buffer(self.tile_mgr.image_opts) | |
367 | ||
368 | tile = Tile(coord, cacheable=tile_image.cacheable) | |
369 | tile.source = tile_image | |
370 | tile = self.tile_mgr.apply_tile_filter(tile) | |
371 | except BlankImage: | |
372 | return None | |
373 | else: | |
374 | return tile | |
375 | ||
376 | tiles = [] | |
377 | for tile_task in async_pool.imap(query_tile, | |
378 | [t for t in meta_tile.tiles if t is not None], | |
379 | use_result_objects=True, | |
380 | ): | |
381 | if tile_task.exception is None: | |
382 | tile = tile_task.result | |
383 | if tile is not None: | |
384 | tiles.append(tile) | |
385 | else: | |
386 | ex = tile_task.exception | |
387 | async_pool.shutdown(True) | |
388 | reraise(ex) | |
389 | ||
390 | self.cache.store_tiles([t for t in tiles if t.cacheable]) | |
391 | return tiles | |
392 | ||
393 | # else | |
394 | tiles = [Tile(coord) for coord in meta_tile.tiles] | |
395 | self.cache.load_tiles(tiles) | |
396 | return tiles | |
397 | ||
327 | 398 | |
328 | 399 | class Tile(object): |
329 | 400 | """ |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | ||
15 | from mapproxy.client.http import HTTPClient | |
16 | from mapproxy.client.wms import WMSInfoClient | |
17 | from mapproxy.srs import SRS | |
18 | from mapproxy.featureinfo import create_featureinfo_doc | |
14 | 19 | |
15 | 20 | class ArcGISClient(object): |
16 | 21 | def __init__(self, request_template, http_client=None): |
32 | 37 | req.params.transparent = query.transparent |
33 | 38 | |
34 | 39 | return req.complete_url |
40 | ||
41 | def combined_client(self, other, query): | |
42 | return | |
43 | ||
44 | class ArcGISInfoClient(WMSInfoClient): | |
45 | def __init__(self, request_template, supported_srs=None, http_client=None, | |
46 | return_geometries=False, | |
47 | tolerance=5, | |
48 | ): | |
49 | self.request_template = request_template | |
50 | self.http_client = http_client or HTTPClient() | |
51 | if not supported_srs and self.request_template.params.srs is not None: | |
52 | supported_srs = [SRS(self.request_template.params.srs)] | |
53 | self.supported_srs = supported_srs or [] | |
54 | self.return_geometries = return_geometries | |
55 | self.tolerance = tolerance | |
56 | ||
57 | def get_info(self, query): | |
58 | if self.supported_srs and query.srs not in self.supported_srs: | |
59 | query = self._get_transformed_query(query) | |
60 | resp = self._retrieve(query) | |
61 | # always use query.info_format and not the content-type from the response (even the ESRI example server always returns text/plain) | 
62 | return create_featureinfo_doc(resp.read(), query.info_format) | |
63 | ||
64 | def _query_url(self, query): | |
65 | req = self.request_template.copy() | |
66 | req.params.bbox = query.bbox | |
67 | req.params.size = query.size | |
68 | req.params.pos = query.pos | |
69 | req.params.srs = query.srs.srs_code | |
70 | if query.info_format.startswith('text/html'): | |
71 | req.params['f'] = 'html' | |
72 | else: | |
73 | req.params['f'] = 'json' | |
74 | ||
75 | req.params['tolerance'] = self.tolerance | |
76 | req.params['returnGeometry'] = str(self.return_geometries).lower() | |
77 | ||
78 | return req.complete_url |
133 | 133 | """ |
134 | 134 | req_srs = query.srs |
135 | 135 | req_bbox = query.bbox |
136 | req_coord = make_lin_transf((0, 0, query.size[0], query.size[1]), req_bbox)(query.pos) | |
137 | ||
136 | 138 | info_srs = self._best_supported_srs(req_srs) |
137 | 139 | info_bbox = req_srs.transform_bbox_to(info_srs, req_bbox) |
138 | ||
139 | req_coord = make_lin_transf((0, query.size[1], query.size[0], 0), req_bbox)(query.pos) | |
140 | # calculate new info_size to keep square pixels after transform_bbox_to | |
141 | info_aratio = (info_bbox[3] - info_bbox[1])/(info_bbox[2] - info_bbox[0]) | |
142 | info_size = query.size[0], int(info_aratio*query.size[0]) | |
140 | 143 | |
141 | 144 | info_coord = req_srs.transform_to(info_srs, req_coord) |
142 | info_pos = make_lin_transf((info_bbox), (0, query.size[1], query.size[0], 0))(info_coord) | |
145 | info_pos = make_lin_transf((info_bbox), (0, 0, info_size[0], info_size[1]))(info_coord) | |
143 | 146 | info_pos = int(round(info_pos[0])), int(round(info_pos[1])) |
144 | 147 | |
145 | 148 | info_query = InfoQuery( |
146 | 149 | bbox=info_bbox, |
147 | size=query.size, | |
150 | size=info_size, | |
148 | 151 | srs=info_srs, |
149 | 152 | pos=info_pos, |
150 | 153 | info_format=query.info_format, |
18 | 18 | 'ImageChops', 'quantize'] |
19 | 19 | |
20 | 20 | try: |
21 | import PIL | |
21 | 22 | from PIL import Image, ImageColor, ImageDraw, ImageFont, ImagePalette, ImageChops, ImageMath |
22 | 23 | # prevent pyflakes warnings |
23 | 24 | Image, ImageColor, ImageDraw, ImageFont, ImagePalette, ImageChops, ImageMath |
24 | 25 | except ImportError: |
25 | try: | |
26 | import Image, ImageColor, ImageDraw, ImageFont, ImagePalette, ImageChops, ImageMath | |
27 | # prevent pyflakes warnings | |
28 | Image, ImageColor, ImageDraw, ImageFont, ImagePalette, ImageChops, ImageMath | |
29 | except ImportError: | |
30 | # allow MapProxy to start without PIL (for tilecache only). | |
31 | # issue warning and raise ImportError on first use of | |
32 | # a function that requires PIL | |
33 | warnings.warn('PIL is not available') | |
34 | class NoPIL(object): | |
35 | def __getattr__(self, name): | |
36 | if name.startswith('__'): | |
37 | raise AttributeError() | |
38 | raise ImportError('PIL is not available') | |
39 | ImageDraw = ImageFont = ImagePalette = ImageChops = NoPIL() | |
40 | # add some dummy stuff required on import/load time | |
41 | Image = NoPIL() | |
42 | Image.NEAREST = Image.BILINEAR = Image.BICUBIC = 1 | |
43 | Image.Image = NoPIL | |
44 | ImageColor = NoPIL() | |
45 | ImageColor.getrgb = lambda x: x | |
26 | # allow MapProxy to start without PIL (for tilecache only). | |
27 | # issue warning and raise ImportError on first use of | |
28 | # a function that requires PIL | |
29 | warnings.warn('PIL is not available') | |
30 | class NoPIL(object): | |
31 | def __getattr__(self, name): | |
32 | if name.startswith('__'): | |
33 | raise AttributeError() | |
34 | raise ImportError('PIL is not available') | |
35 | ImageDraw = ImageFont = ImagePalette = ImageChops = NoPIL() | |
36 | # add some dummy stuff required on import/load time | |
37 | Image = NoPIL() | |
38 | Image.NEAREST = Image.BILINEAR = Image.BICUBIC = 1 | |
39 | Image.Image = NoPIL | |
40 | ImageColor = NoPIL() | |
41 | ImageColor.getrgb = lambda x: x | |
46 | 42 | |
47 | 43 | def has_alpha_composite_support(): |
48 | 44 | return hasattr(Image, 'alpha_composite') |
45 | ||
46 | def transform_uses_center(): | |
47 | # transformation behavior changed with Pillow 3.4 | |
48 | # https://github.com/python-pillow/Pillow/commit/5232361718bae0f0ccda76bfd5b390ebf9179b18 | |
49 | if hasattr(PIL, 'PILLOW_VERSION'): | |
50 | if not PIL.PILLOW_VERSION.startswith(('1.', '2.', '3.0', '3.1', '3.2', '3.3')): | |
51 | return True | |
52 | return False | |
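The version gate above checks the Pillow version prefix. A standalone sketch of the same logic (version strings assumed to look like `'3.4.2'`; this era of Pillow never shipped a 3.10, so the prefix test is unambiguous):

```python
# Standalone sketch of the Pillow version gate above: the center-based
# transform behavior applies from Pillow 3.4 on.
def transform_uses_center(pillow_version):
    return not pillow_version.startswith(
        ('1.', '2.', '3.0', '3.1', '3.2', '3.3'))

assert transform_uses_center('3.4.2')
assert not transform_uses_center('3.3.1')
```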
49 | 53 | |
50 | 54 | def quantize_pil(img, colors=256, alpha=False, defaults=None): |
51 | 55 | if hasattr(Image, 'FASTOCTREE'): |
20 | 20 | load_datasource, |
21 | 21 | load_ogr_datasource, |
22 | 22 | load_polygons, |
23 | load_expire_tiles, | |
23 | 24 | require_geom_support, |
24 | 25 | build_multipolygon, |
25 | 26 | ) |
26 | from mapproxy.util.coverage import coverage | |
27 | from mapproxy.util.coverage import ( | |
28 | coverage, | |
29 | diff_coverage, | |
30 | union_coverage, | |
31 | intersection_coverage, | |
32 | ) | |
27 | 33 | from mapproxy.compat import string_type |
28 | 34 | |
29 | 35 | bbox_string_re = re.compile(r'[-+]?\d*.?\d+,[-+]?\d*.?\d+,[-+]?\d*.?\d+,[-+]?\d*.?\d+') |
30 | 36 | |
31 | 37 | def load_coverage(conf, base_path=None): |
32 | if 'ogr_datasource' in conf: | |
38 | clip = False | |
39 | if 'clip' in conf: | |
40 | clip = conf['clip'] | |
41 | ||
42 | if 'union' in conf: | |
43 | parts = [] | |
44 | for cov in conf['union']: | |
45 | parts.append(load_coverage(cov)) | |
46 | return union_coverage(parts, clip=clip) | |
47 | elif 'intersection' in conf: | |
48 | parts = [] | |
49 | for cov in conf['intersection']: | |
50 | parts.append(load_coverage(cov)) | |
51 | return intersection_coverage(parts, clip=clip) | |
52 | elif 'difference' in conf: | |
53 | parts = [] | |
54 | for cov in conf['difference']: | |
55 | parts.append(load_coverage(cov)) | |
56 | return diff_coverage(parts, clip=clip) | |
57 | elif 'ogr_datasource' in conf: | |
33 | 58 | require_geom_support() |
34 | 59 | srs = conf['ogr_srs'] |
35 | 60 | datasource = conf['ogr_datasource'] |
69 | 94 | where = conf.get('where', None) |
70 | 95 | geom = load_datasource(datasource, where) |
71 | 96 | bbox, geom = build_multipolygon(geom, simplify=True) |
97 | elif 'expire_tiles' in conf: | |
98 | require_geom_support() | |
99 | filename = abspath(conf['expire_tiles']) | |
100 | geom = load_expire_tiles(filename) | |
101 | _, geom = build_multipolygon(geom, simplify=False) | |
102 | return coverage(geom, SRS(3857)) | |
72 | 103 | else: |
73 | 104 | return None |
74 | return coverage(geom or bbox, SRS(srs)) | |
105 | ||
106 | return coverage(geom or bbox, SRS(srs), clip=clip) |
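The new branches above enable set-operation coverages in the configuration. A hypothetical example (keys per the spec below; bbox values illustrative):

```yaml
coverage:
  # everything in the outer bbox except the inner one
  difference:
    - bbox: [5, 50, 10, 55]
      srs: 'EPSG:4326'
    - bbox: [6, 51, 8, 53]
      srs: 'EPSG:4326'
  clip: true
```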
62 | 62 | meta_buffer = 80, |
63 | 63 | minimize_meta_requests = False, |
64 | 64 | link_single_color_images = False, |
65 | sqlite_timeout = 30, | |
65 | 66 | ) |
66 | 67 | |
67 | 68 | grid = dict( |
620 | 620 | request = create_request(self.conf["req"], params) |
621 | 621 | http_client, request.url = self.http_client(request.url) |
622 | 622 | coverage = self.coverage() |
623 | res_range = resolution_range(self.conf) | |
623 | 624 | |
624 | 625 | client = ArcGISClient(request, http_client) |
625 | 626 | image_opts = self.image_opts(format=params.get('format')) |
626 | 627 | return ArcGISSource(client, image_opts=image_opts, coverage=coverage, |
628 | res_range=res_range, | |
627 | 629 | supported_srs=supported_srs, |
628 | 630 | supported_formats=supported_formats or None) |
631 | ||
632 | ||
633 | def fi_source(self, params=None): | |
634 | from mapproxy.client.arcgis import ArcGISInfoClient | |
635 | from mapproxy.request.arcgis import create_identify_request | |
636 | from mapproxy.source.arcgis import ArcGISInfoSource | |
637 | from mapproxy.srs import SRS | |
638 | ||
639 | if params is None: params = {} | |
640 | request_format = self.conf['req'].get('format') | |
641 | if request_format: | |
642 | params['format'] = request_format | |
643 | supported_srs = [SRS(code) for code in self.conf.get('supported_srs', [])] | |
644 | fi_source = None | |
645 | if self.conf.get('opts', {}).get('featureinfo', False): | |
646 | opts = self.conf['opts'] | |
647 | tolerance = opts.get('featureinfo_tolerance', 5) | |
648 | return_geometries = opts.get('featureinfo_return_geometries', False) | |
649 | ||
650 | fi_request = create_identify_request(self.conf['req'], params) | |
651 | ||
652 | ||
653 | http_client, fi_request.url = self.http_client(fi_request.url) | |
654 | fi_client = ArcGISInfoClient(fi_request, | |
655 | supported_srs=supported_srs, | |
656 | http_client=http_client, | |
657 | tolerance=tolerance, | |
658 | return_geometries=return_geometries, | |
659 | ) | |
660 | fi_source = ArcGISInfoSource(fi_client) | |
661 | return fi_source | |
629 | 662 | |
630 | 663 | |
631 | 664 | class WMSSourceConfiguration(SourceConfiguration): |
952 | 985 | return self.context.globals.get_path('cache_dir', self.conf, |
953 | 986 | global_key='cache.base_dir') |
954 | 987 | |
988 | @memoize | |
989 | def has_multiple_grids(self): | |
990 | return len(self.grid_confs()) > 1 | |
991 | ||
955 | 992 | def lock_dir(self): |
956 | 993 | lock_dir = self.context.globals.get_path('cache.tile_lock_dir', self.conf) |
957 | 994 | if not lock_dir: |
964 | 1001 | cache_dir = self.cache_dir() |
965 | 1002 | directory_layout = self.conf.get('cache', {}).get('directory_layout', 'tc') |
966 | 1003 | if self.conf.get('cache', {}).get('directory'): |
1004 | if self.has_multiple_grids(): | |
1005 | raise ConfigurationError( | |
1006 | "using single directory for cache with multiple grids in %s" % | |
1007 | (self.conf['name']), | |
1008 | ) | |
967 | 1009 | pass |
968 | 1010 | elif self.conf.get('cache', {}).get('use_grid_names'): |
969 | 1011 | cache_dir = os.path.join(cache_dir, self.conf['name'], grid_conf.tile_grid().name) |
977 | 1019 | log.warn('link_single_color_images not supported on windows') |
978 | 1020 | link_single_color_images = False |
979 | 1021 | |
980 | lock_timeout = self.context.globals.get_value('http.client_timeout', {}) | |
981 | ||
982 | 1022 | return FileCache( |
983 | 1023 | cache_dir, |
984 | 1024 | file_ext=file_ext, |
985 | 1025 | directory_layout=directory_layout, |
986 | lock_timeout=lock_timeout, | |
987 | 1026 | link_single_color_images=link_single_color_images, |
988 | 1027 | ) |
989 | 1028 | |
999 | 1038 | else: |
1000 | 1039 | mbfile_path = os.path.join(self.cache_dir(), filename) |
1001 | 1040 | |
1041 | sqlite_timeout = self.context.globals.get_value('cache.sqlite_timeout', self.conf) | |
1042 | wal = self.context.globals.get_value('cache.sqlite_wal', self.conf) | |
1043 | ||
1002 | 1044 | return MBTilesCache( |
1003 | 1045 | mbfile_path, |
1046 | timeout=sqlite_timeout, | |
1047 | wal=wal, | |
1048 | ) | |
1049 | ||
1050 | def _geopackage_cache(self, grid_conf, file_ext): | |
1051 | from mapproxy.cache.geopackage import GeopackageCache, GeopackageLevelCache | |
1052 | ||
1053 | filename = self.conf['cache'].get('filename') | |
1054 | table_name = self.conf['cache'].get('table_name') or \ | |
1055 | "{}_{}".format(self.conf['name'], grid_conf.tile_grid().name) | |
1056 | levels = self.conf['cache'].get('levels') | |
1057 | ||
1058 | if not filename: | |
1059 | filename = self.conf['name'] + '.gpkg' | |
1060 | if filename.startswith('.' + os.sep): | |
1061 | gpkg_file_path = self.context.globals.abspath(filename) | |
1062 | else: | |
1063 | gpkg_file_path = os.path.join(self.cache_dir(), filename) | |
1064 | ||
1065 | cache_dir = self.conf['cache'].get('directory') | |
1066 | if cache_dir: | |
1067 | cache_dir = os.path.join( | |
1068 | self.context.globals.abspath(cache_dir), | |
1069 | grid_conf.tile_grid().name | |
1070 | ) | |
1071 | else: | |
1072 | cache_dir = self.cache_dir() | |
1073 | cache_dir = os.path.join( | |
1074 | cache_dir, | |
1075 | self.conf['name'], | |
1076 | grid_conf.tile_grid().name | |
1077 | ) | |
1078 | ||
1079 | if levels: | |
1080 | return GeopackageLevelCache( | |
1081 | cache_dir, grid_conf.tile_grid(), table_name | |
1082 | ) | |
1083 | else: | |
1084 | return GeopackageCache( | |
1085 | gpkg_file_path, grid_conf.tile_grid(), table_name | |
1086 | ) | |
1087 | ||
1088 | def _s3_cache(self, grid_conf, file_ext): | |
1089 | from mapproxy.cache.s3 import S3Cache | |
1090 | ||
1091 | bucket_name = self.context.globals.get_value('cache.bucket_name', self.conf, | |
1092 | global_key='cache.s3.bucket_name') | |
1093 | ||
1094 | if not bucket_name: | |
1095 | raise ConfigurationError("no bucket_name configured for s3 cache %s" % self.conf['name']) | |
1096 | ||
1097 | profile_name = self.context.globals.get_value('cache.profile_name', self.conf, | |
1098 | global_key='cache.s3.profile_name') | |
1099 | ||
1100 | directory_layout = self.conf['cache'].get('directory_layout', 'tms') | |
1101 | ||
1102 | base_path = self.conf['cache'].get('directory', None) | |
1103 | if base_path is None: | |
1104 | base_path = os.path.join(self.conf['name'], grid_conf.tile_grid().name) | |
1105 | ||
1106 | return S3Cache( | |
1107 | base_path=base_path, | |
1108 | file_ext=file_ext, | |
1109 | directory_layout=directory_layout, | |
1110 | bucket_name=bucket_name, | |
1111 | profile_name=profile_name, | |
1004 | 1112 | ) |
1005 | 1113 | |
1006 | 1114 | def _sqlite_cache(self, grid_conf, file_ext): |
1020 | 1128 | grid_conf.tile_grid().name |
1021 | 1129 | ) |
1022 | 1130 | |
1131 | sqlite_timeout = self.context.globals.get_value('cache.sqlite_timeout', self.conf) | |
1132 | wal = self.context.globals.get_value('cache.sqlite_wal', self.conf) | |
1133 | ||
1023 | 1134 | return MBTilesLevelCache( |
1024 | 1135 | cache_dir, |
1136 | timeout=sqlite_timeout, | |
1137 | wal=wal, | |
1025 | 1138 | ) |
1026 | 1139 | |
1027 | 1140 | def _couchdb_cache(self, grid_conf, file_ext): |
1071 | 1184 | return RiakCache(nodes=nodes, protocol=protocol, bucket=bucket, |
1072 | 1185 | tile_grid=grid_conf.tile_grid(), |
1073 | 1186 | use_secondary_index=use_secondary_index, |
1187 | ) | |
1188 | ||
1189 | def _redis_cache(self, grid_conf, file_ext): | |
1190 | from mapproxy.cache.redis import RedisCache | |
1191 | ||
1192 | host = self.conf['cache'].get('host', '127.0.0.1') | |
1193 | port = self.conf['cache'].get('port', 6379) | |
1194 | db = self.conf['cache'].get('db', 0) | |
1195 | ttl = self.conf['cache'].get('default_ttl', 3600) | |
1196 | ||
1197 | prefix = self.conf['cache'].get('prefix') | |
1198 | if not prefix: | |
1199 | prefix = self.conf['name'] + '_' + grid_conf.tile_grid().name | |
1200 | ||
1201 | return RedisCache( | |
1202 | host=host, | |
1203 | port=port, | |
1204 | db=db, | |
1205 | prefix=prefix, | |
1206 | ttl=ttl, | |
1207 | ) | |
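A hypothetical cache entry exercising the options read above (values shown are the code's defaults; `prefix` falls back to `<cache_name>_<grid_name>` when omitted):

```yaml
caches:
  osm_cache:
    grids: [GLOBAL_WEBMERCATOR]
    sources: [osm_wms]
    cache:
      type: redis
      host: 127.0.0.1
      port: 6379
      db: 0
      default_ttl: 3600
```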
1208 | ||
1209 | def _compact_cache(self, grid_conf, file_ext): | |
1210 | from mapproxy.cache.compact import CompactCacheV1 | |
1211 | ||
1212 | cache_dir = self.cache_dir() | |
1213 | if self.conf.get('cache', {}).get('directory'): | |
1214 | if self.has_multiple_grids(): | |
1215 | raise ConfigurationError( | |
1216 | "using single directory for cache with multiple grids in %s" % | |
1217 | (self.conf['name']), | |
1218 | ) | |
1219 | pass | |
1220 | else: | |
1221 | cache_dir = os.path.join(cache_dir, self.conf['name'], grid_conf.tile_grid().name) | |
1222 | ||
1223 | if self.conf['cache']['version'] != 1: | |
1224 | raise ConfigurationError("compact cache only supports version 1") | |
1225 | return CompactCacheV1( | |
1226 | cache_dir=cache_dir, | |
1074 | 1227 | ) |
1075 | 1228 | |
1076 | 1229 | def _tile_cache(self, grid_conf, file_ext): |
1227 | 1380 | factor=source.get('factor', 1.0), |
1228 | 1381 | ) |
1229 | 1382 | |
1230 | return band_merger.merge, sources, source_image_opts | |
1383 | return band_merger, sources, source_image_opts | |
1231 | 1384 | |
1232 | 1385 | @memoize |
1233 | 1386 | def caches(self): |
1252 | 1405 | global_key='cache.meta_buffer') |
1253 | 1406 | meta_size = self.context.globals.get_value('meta_size', self.conf, |
1254 | 1407 | global_key='cache.meta_size') |
1408 | bulk_meta_tiles = self.context.globals.get_value('bulk_meta_tiles', self.conf, | |
1409 | global_key='cache.bulk_meta_tiles') | |
1255 | 1410 | minimize_meta_requests = self.context.globals.get_value('minimize_meta_requests', self.conf, |
1256 | 1411 | global_key='cache.minimize_meta_requests') |
1257 | 1412 | concurrent_tile_creators = self.context.globals.get_value('concurrent_tile_creators', self.conf, |
1335 | 1490 | minimize_meta_requests=minimize_meta_requests, |
1336 | 1491 | concurrent_tile_creators=concurrent_tile_creators, |
1337 | 1492 | pre_store_filter=tile_filter, |
1338 | tile_creator_class=tile_creator_class) | |
1493 | tile_creator_class=tile_creator_class, | |
1494 | bulk_meta_tiles=bulk_meta_tiles, | |
1495 | ) | |
1339 | 1496 | extent = merge_layer_extents(sources) |
1340 | 1497 | if extent.is_default: |
1341 | 1498 | extent = map_extent_from_grid(tile_grid) |
1492 | 1649 | return dimensions |
1493 | 1650 | |
1494 | 1651 | @memoize |
1495 | def tile_layers(self): | |
1652 | def tile_layers(self, grid_name_as_path=False): | |
1496 | 1653 | from mapproxy.service.tile import TileLayer |
1497 | 1654 | from mapproxy.cache.dummy import DummyCache |
1498 | 1655 | |
1523 | 1680 | tile_layers = [] |
1524 | 1681 | for cache_name in sources: |
1525 | 1682 | for grid, extent, cache_source in self.context.caches[cache_name].caches(): |
1526 | ||
1527 | 1683 | if dimensions and not isinstance(cache_source.cache, DummyCache): |
1528 | 1684 | # caching of dimension layers is not supported yet |
1529 | 1685 | raise ConfigurationError( |
1534 | 1690 | md = {} |
1535 | 1691 | md['title'] = self.conf['title'] |
1536 | 1692 | md['name'] = self.conf['name'] |
1537 | md['name_path'] = (self.conf['name'], grid.srs.srs_code.replace(':', '').upper()) | |
1538 | 1693 | md['grid_name'] = grid.name |
1694 | if grid_name_as_path: | |
1695 | md['name_path'] = (md['name'], md['grid_name']) | |
1696 | else: | |
1697 | md['name_path'] = (self.conf['name'], grid.srs.srs_code.replace(':', '').upper()) | |
1539 | 1698 | md['name_internal'] = md['name_path'][0] + '_' + md['name_path'][1] |
1540 | 1699 | md['format'] = self.context.caches[cache_name].image_opts().format |
1541 | 1700 | md['cache_name'] = cache_name |
1611 | 1770 | def tile_layers(self, conf, use_grid_names=False): |
1612 | 1771 | layers = odict() |
1613 | 1772 | for layer_name, layer_conf in iteritems(self.context.layers): |
1614 | for tile_layer in layer_conf.tile_layers(): | |
1773 | for tile_layer in layer_conf.tile_layers(grid_name_as_path=use_grid_names): | |
1615 | 1774 | if not tile_layer: continue |
1616 | 1775 | if use_grid_names: |
1617 | # new style layer names are tuples | |
1618 | tile_layer.md['name_path'] = (tile_layer.md['name'], tile_layer.md['grid_name']) | |
1619 | 1776 | layers[tile_layer.md['name_path']] = tile_layer |
1620 | 1777 | else: |
1621 | 1778 | layers[tile_layer.md['name_internal']] = tile_layer |
35 | 35 | else: |
36 | 36 | return [], True |
37 | 37 | |
38 | coverage = { | |
38 | coverage = recursive({ | |
39 | 39 | 'polygons': str(), |
40 | 40 | 'polygons_srs': str(), |
41 | 41 | 'bbox': one_of(str(), [number()]), |
46 | 46 | 'datasource': one_of(str(), [number()]), |
47 | 47 | 'where': str(), |
48 | 48 | 'srs': str(), |
49 | } | |
49 | 'expire_tiles': str(), | |
50 | 'union': [recursive()], | |
51 | 'difference': [recursive()], | |
52 | 'intersection': [recursive()], | |
53 | 'clip': bool(), | |
54 | }) | |
55 | ||
50 | 56 | image_opts = { |
51 | 57 | 'mode': str(), |
52 | 58 | 'colors': number(), |
105 | 111 | }, |
106 | 112 | 'sqlite': { |
107 | 113 | 'directory': str(), |
114 | 'sqlite_timeout': number(), | |
115 | 'sqlite_wal': bool(), | |
108 | 116 | 'tile_lock_dir': str(), |
109 | 117 | }, |
110 | 118 | 'mbtiles': { |
111 | 119 | 'filename': str(), |
112 | 'tile_lock_dir': str(), | |
120 | 'sqlite_timeout': number(), | |
121 | 'sqlite_wal': bool(), | |
122 | 'tile_lock_dir': str(), | |
123 | }, | |
124 | 'geopackage': { | |
125 | 'filename': str(), | |
126 | 'directory': str(), | |
127 | 'tile_lock_dir': str(), | |
128 | 'table_name': str(), | |
129 | 'levels': bool(), | |
113 | 130 | }, |
114 | 131 | 'couchdb': { |
115 | 132 | 'url': str(), |
120 | 137 | 'tile_id': str(), |
121 | 138 | 'tile_lock_dir': str(), |
122 | 139 | }, |
140 | 's3': { | |
141 | 'bucket_name': str(), | |
142 | 'directory_layout': str(), | |
143 | 'directory': str(), | |
144 | 'profile_name': str(), | |
145 | 'tile_lock_dir': str(), | |
146 | }, | |
123 | 147 | 'riak': { |
124 | 148 | 'nodes': [riak_node], |
125 | 149 | 'protocol': one_of('pbc', 'http', 'https'), |
129 | 153 | 'http': number(), |
130 | 154 | }, |
131 | 155 | 'secondary_index': bool(), |
132 | } | |
156 | 'tile_lock_dir': str(), | |
157 | }, | |
158 | 'redis': { | |
159 | 'host': str(), | |
160 | 'port': int(), | |
161 | 'db': int(), | |
162 | 'prefix': str(), | |
163 | 'default_ttl': int(), | |
164 | }, | |
165 | 'compact': { | |
166 | 'directory': str(), | |
167 | required('version'): number(), | |
168 | 'tile_lock_dir': str(), | |
169 | }, | |
133 | 170 | } |
134 | 171 | |
135 | 172 | on_error = { |
323 | 360 | 'tile_lock_dir': str(), |
324 | 361 | 'meta_size': [number()], |
325 | 362 | 'meta_buffer': number(), |
363 | 'bulk_meta_tiles': bool(), | |
326 | 364 | 'max_tile_limit': number(), |
327 | 365 | 'minimize_meta_requests': bool(), |
328 | 366 | 'concurrent_tile_creators': int(), |
329 | 367 | 'link_single_color_images': bool(), |
368 | 's3': { | |
369 | 'bucket_name': str(), | |
370 | 'profile_name': str(), | |
371 | }, | |
330 | 372 | }, |
331 | 373 | 'grid': { |
332 | 374 | 'tile_size': [int()], |
355 | 397 | 'cache_dir': str(), |
356 | 398 | 'meta_size': [number()], |
357 | 399 | 'meta_buffer': number(), |
400 | 'bulk_meta_tiles': bool(), | |
358 | 401 | 'minimize_meta_requests': bool(), |
359 | 402 | 'concurrent_tile_creators': int(), |
360 | 403 | 'disable_storage': bool(), |
485 | 528 | 'transparent': bool(), |
486 | 529 | 'time': str() |
487 | 530 | }, |
531 | 'opts': { | |
532 | 'featureinfo': bool(), | |
533 | 'featureinfo_tolerance': number(), | |
534 | 'featureinfo_return_geometries': bool(), | |
535 | }, | |
488 | 536 | 'supported_srs': [str()], |
489 | 537 | 'http': http_opts |
490 | 538 | }), |
49 | 49 | email: info@omniscale.de |
50 | 50 | # multiline strings are possible with the right indentation |
51 | 51 | access_constraints: |
52 | This service is intended for private and evaluation use only. | |
53 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
54 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
52 | Insert license and copyright information for this service. | |
55 | 53 | fees: 'None' |
56 | 54 | |
57 | 55 | wms: |
105 | 103 | email: info@omniscale.de |
106 | 104 | # multiline strings are possible with the right indentation |
107 | 105 | access_constraints: |
108 | This service is intended for private and evaluation use only. | |
109 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
110 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
106 | Insert license and copyright information for this service. | |
111 | 107 | fees: 'None' |
112 | 108 | |
113 | 109 | layers: |
13 | 13 | contact: |
14 | 14 | person: Your Name Here |
15 | 15 | position: Technical Director |
16 | organization: | |
16 | organization: | |
17 | 17 | address: Fakestreet 123 |
18 | 18 | city: Somewhere |
19 | 19 | postcode: 12345 |
22 | 22 | fax: +49(0)000-000000-0 |
23 | 23 | email: info@omniscale.de |
24 | 24 | access_constraints: |
25 | This service is intended for private and evaluation use only. | |
26 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
27 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
25 | Insert license and copyright information for this service. | |
28 | 26 | fees: 'None' |
29 | 27 | |
30 | 28 | layers: |
34 | 32 | # - name: osm_full_example |
35 | 33 | # title: Omniscale OSM WMS - osm.omniscale.net |
36 | 34 | # sources: [osm_cache_full_example] |
37 | ||
35 | ||
38 | 36 | caches: |
39 | 37 | osm_cache: |
40 | 38 | grids: [GLOBAL_MERCATOR, global_geodetic_sqrt2] |
41 | 39 | sources: [osm_wms] |
42 | ||
40 | ||
43 | 41 | # osm_cache_full_example: |
44 | 42 | # meta_buffer: 20 |
45 | 43 | # meta_size: [5, 5] |
76 | 74 | # # # always request in this format |
77 | 75 | # # format: image/png |
78 | 76 | # map: /home/map/mapserver.map |
79 | ||
77 | ||
80 | 78 | |
81 | 79 | grids: |
82 | 80 | global_geodetic_sqrt2: |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | import copy |
16 | import json | |
17 | ||
18 | from functools import reduce | |
16 | 19 | from io import StringIO |
17 | from mapproxy.compat import string_type, PY2, BytesIO | |
20 | ||
21 | from mapproxy.compat import string_type, PY2, BytesIO, iteritems | |
18 | 22 | |
19 | 23 | try: |
20 | 24 | from lxml import etree, html |
119 | 123 | |
120 | 124 | return cls(result_tree) |
121 | 125 | |
126 | ||
127 | class JSONFeatureInfoDoc(FeatureInfoDoc): | |
128 | info_type = 'json' | |
129 | ||
130 | def __init__(self, content): | |
131 | self.content = content | |
132 | ||
133 | def as_string(self): | |
134 | return self.content | |
135 | ||
136 | @classmethod | |
137 | def combine(cls, docs): | |
138 | contents = [json.loads(d.content) for d in docs] | |
139 | combined = reduce(lambda a, b: merge_dict(a, b), contents) | |
140 | return cls(json.dumps(combined)) | |
141 | ||
142 | ||
143 | def merge_dict(base, other): | |
144 | """ | |
145 | Return `base` dict with values from `other` merged in. | 
146 | """ | |
147 | for k, v in iteritems(other): | |
148 | if k not in base: | |
149 | base[k] = v | |
150 | else: | |
151 | if isinstance(base[k], dict): | |
152 | merge_dict(base[k], v) | |
153 | elif isinstance(base[k], list): | |
154 | base[k].extend(v) | |
155 | else: | |
156 | base[k] = v | |
157 | return base | |
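`merge_dict` above recurses into nested dicts, concatenates lists, and lets `other` overwrite scalars. A standalone copy with a quick check of those semantics:

```python
# Standalone copy of the deep-merge semantics above: missing keys are
# copied, nested dicts are merged recursively, lists are concatenated,
# and scalar values from `other` overwrite those in `base`.
def merge_dict(base, other):
    for k, v in other.items():
        if k not in base:
            base[k] = v
        elif isinstance(base[k], dict):
            merge_dict(base[k], v)
        elif isinstance(base[k], list):
            base[k].extend(v)
        else:
            base[k] = v
    return base

a = {'type': 'FeatureCollection', 'features': [1], 'crs': {'name': 'a'}}
b = {'features': [2], 'crs': {'code': 4326}}
merged = merge_dict(a, b)
# merged['features'] == [1, 2]; merged['crs'] == {'name': 'a', 'code': 4326}
```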
158 | ||
122 | 159 | def create_featureinfo_doc(content, info_format): |
123 | 160 | info_format = info_format.split(';', 1)[0].strip() # remove mime options like charset |
124 | 161 | if info_format in ('text/xml', 'application/vnd.ogc.gml'): |
125 | 162 | return XMLFeatureInfoDoc(content) |
126 | 163 | if info_format == 'text/html': |
127 | 164 | return HTMLFeatureInfoDoc(content) |
165 | if info_format == 'application/json': | |
166 | return JSONFeatureInfoDoc(content) | |
128 | 167 | |
129 | 168 | return TextFeatureInfoDoc(content) |
130 | 169 |
408 | 408 | threshold = thresholds.pop() if thresholds else None |
409 | 409 | |
410 | 410 | if threshold_result is not None: |
411 | return threshold_result | |
411 | # Use the previous level that was within stretch_factor, | 
412 | # but only if this level's res is smaller than res. | 
413 | # This fixes selection for resolutions that are closer together than stretch_factor. | 
414 | # | |
415 | if l_res < res: | |
416 | return threshold_result | |
412 | 417 | |
413 | 418 | if l_res <= res*self.stretch_factor: |
419 | # l_res is within stretch_factor: | 
420 | # remember this level; check for thresholds or a better res in the next iteration | 
414 | 421 | threshold_result = level |
415 | 422 | prev_l_res = l_res |
416 | 423 | return level |
1059 | 1066 | def deg_to_m(deg): |
1060 | 1067 | return deg * (6378137 * 2 * math.pi) / 360 |
1061 | 1068 | |
1062 | OGC_PIXLE_SIZE = 0.00028 #m/px | |
1069 | OGC_PIXEL_SIZE = 0.00028 #m/px | |
1063 | 1070 | |
1064 | 1071 | def ogc_scale_to_res(scale): |
1065 | return scale * OGC_PIXLE_SIZE | |
1072 | return scale * OGC_PIXEL_SIZE | |
1066 | 1073 | def res_to_ogc_scale(res): |
1067 | return res / OGC_PIXLE_SIZE | |
1074 | return res / OGC_PIXEL_SIZE | |
1068 | 1075 | |
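A quick standalone check of the scale/resolution round trip above (0.00028 m is the OGC-standardized pixel size):

```python
# The OGC pixel size ties map scale to ground resolution:
# a 1:50,000 map at 0.28 mm/px gives 50000 * 0.00028 = 14 m/px.
OGC_PIXEL_SIZE = 0.00028  # m/px

def ogc_scale_to_res(scale):
    return scale * OGC_PIXEL_SIZE

def res_to_ogc_scale(res):
    return res / OGC_PIXEL_SIZE

res = ogc_scale_to_res(50000)   # 1:50,000 -> ~14 m/px
assert abs(res - 14.0) < 1e-9
assert abs(res_to_ogc_scale(res) - 50000) < 1e-6
```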
1069 | 1076 | def resolution_range(min_res=None, max_res=None, max_scale=None, min_scale=None): |
1070 | 1077 | if min_scale == max_scale == min_res == max_res == None: |
30 | 30 | |
31 | 31 | def mask_image(img, bbox, bbox_srs, coverage): |
32 | 32 | geom = mask_polygons(bbox, SRS(bbox_srs), coverage) |
33 | mask = image_mask_from_geom(img, bbox, geom) | |
33 | mask = image_mask_from_geom(img.size, bbox, geom) | |
34 | 34 | img = img.convert('RGBA') |
35 | 35 | img.paste((255, 255, 255, 0), (0, 0), mask) |
36 | 36 | return img |
40 | 40 | coverage = coverage.intersection(bbox, bbox_srs) |
41 | 41 | return flatten_to_polygons(coverage.geom) |
42 | 42 | |
43 | def image_mask_from_geom(img, bbox, polygons): | |
44 | transf = make_lin_transf(bbox, (0, 0) + img.size) | |
43 | def image_mask_from_geom(size, bbox, polygons): | |
44 | mask = Image.new('L', size, 255) | |
45 | if len(polygons) == 0: | |
46 | return mask | |
45 | 47 | |
46 | mask = Image.new('L', img.size, 255) | |
48 | transf = make_lin_transf(bbox, (0, 0) + size) | |
49 | ||
50 | # use a negative buffer of ~0.1 pixel | 
51 | buffer = -0.1 * min((bbox[2] - bbox[0]) / size[0], (bbox[3] - bbox[1]) / size[1]) | |
52 | ||
47 | 53 | draw = ImageDraw.Draw(mask) |
48 | 54 | |
49 | for p in polygons: | |
55 | def draw_polygon(p): | |
50 | 56 | draw.polygon([transf(coord) for coord in p.exterior.coords], fill=0) |
51 | 57 | for ring in p.interiors: |
52 | 58 | draw.polygon([transf(coord) for coord in ring.coords], fill=255) |
53 | 59 | |
60 | for p in polygons: | |
61 | # the slightly smaller polygon does not include pixels that only touch the coverage boundary | 
62 | buffered = p.buffer(buffer, resolution=1, join_style=2) | |
63 | ||
64 | if buffered.type == 'MultiPolygon': | |
65 | # negative buffer can turn polygon into multipolygon | |
66 | for p in buffered: | |
67 | draw_polygon(p) | |
68 | else: | |
69 | draw_polygon(buffered) | |
70 | ||
54 | 71 | return mask |
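The buffer distance used in the hunk above is derived from the per-pixel resolution of the bbox. A minimal standalone sketch of that computation (the helper name `mask_buffer` is illustrative, not part of MapProxy):

```python
def mask_buffer(bbox, size, factor=-0.1):
    # resolution (map units per pixel) in x and y direction
    res_x = (bbox[2] - bbox[0]) / size[0]
    res_y = (bbox[3] - bbox[1]) / size[1]
    # shrink polygons by ~1/10 of a pixel so pixels that only
    # touch the coverage boundary are not drawn into the mask
    return factor * min(res_x, res_y)
```

For a 512x512 image covering a 1024x2048 map-unit bbox this yields a buffer of -0.2 map units, i.e. one tenth of the finer (x) resolution.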
35 | 35 | self.layers = [] |
36 | 36 | self.cacheable = True |
37 | 37 | |
38 | def add(self, layer_img, layer=None): | |
38 | def add(self, img, coverage=None): | |
39 | 39 | """ |
40 | 40 | Add one layer image to merge. Bottom-layers first. |
41 | 41 | """ |
42 | if layer_img is not None: | |
43 | self.layers.append((layer_img, layer)) | |
42 | if img is not None: | |
43 | self.layers.append((img, coverage)) | |
44 | ||
45 | ||
46 | class LayerMerger(LayerMerger): | |
44 | 47 | |
45 | 48 | def merge(self, image_opts, size=None, bbox=None, bbox_srs=None, coverage=None): |
46 | 49 | """ |
53 | 56 | if not self.layers: |
54 | 57 | return BlankImageSource(size=size, image_opts=image_opts, cacheable=True) |
55 | 58 | if len(self.layers) == 1: |
56 | layer_img, layer = self.layers[0] | |
59 | layer_img, layer_coverage = self.layers[0] | |
57 | 60 | layer_opts = layer_img.image_opts |
58 | 61 | if (((layer_opts and not layer_opts.transparent) or image_opts.transparent) |
59 | 62 | and (not size or size == layer_img.size) |
60 | and (not layer or not layer.coverage or not layer.coverage.clip) | |
63 | and (not layer_coverage or not layer_coverage.clip) | |
61 | 64 | and not coverage): |
62 | 65 | # layer is opaque, no need to make transparent or add bgcolor |
63 | 66 | return layer_img |
67 | 70 | |
68 | 71 | cacheable = self.cacheable |
69 | 72 | result = create_image(size, image_opts) |
70 | for layer_img, layer in self.layers: | |
73 | for layer_img, layer_coverage in self.layers: | |
71 | 74 | if not layer_img.cacheable: |
72 | 75 | cacheable = False |
73 | 76 | img = layer_img.as_image() |
77 | 80 | else: |
78 | 81 | opacity = layer_image_opts.opacity |
79 | 82 | |
80 | if layer and layer.coverage and layer.coverage.clip: | |
81 | img = mask_image(img, bbox, bbox_srs, layer.coverage) | |
83 | if layer_coverage and layer_coverage.clip: | |
84 | img = mask_image(img, bbox, bbox_srs, layer_coverage) | |
82 | 85 | |
83 | 86 | if result.mode != 'RGBA': |
84 | 87 | merge_composite = False |
85 | 88 | else: |
86 | 89 | merge_composite = has_alpha_composite_support() |
90 | ||
91 | if 'transparency' in img.info: | |
92 | # non-paletted PNGs can have a fixed transparency value | |
93 | # convert to RGBA to have full alpha | |
94 | img = img.convert('RGBA') | |
87 | 95 | |
88 | 96 | if merge_composite: |
89 | 97 | if opacity is not None and opacity < 1.0: |
95 | 103 | ImageChops.constant(alpha, int(255 * opacity)) |
96 | 104 | ) |
97 | 105 | img.putalpha(alpha) |
98 | if img.mode == 'RGB': | |
99 | result.paste(img, (0, 0)) | |
100 | else: | |
106 | if img.mode in ('RGBA', 'P'): | |
101 | 107 | # assume paletted images have transparency |
102 | 108 | if img.mode == 'P': |
103 | 109 | img = img.convert('RGBA') |
104 | 110 | result = Image.alpha_composite(result, img) |
111 | else: | |
112 | result.paste(img, (0, 0)) | |
105 | 113 | else: |
106 | 114 | if opacity is not None and opacity < 1.0: |
107 | 115 | img = img.convert(result.mode) |
108 | 116 | result = Image.blend(result, img, layer_image_opts.opacity) |
109 | elif img.mode == 'RGBA' or img.mode == 'P': | |
117 | elif img.mode in ('RGBA', 'P'): | |
110 | 118 | # assume paletted images have transparency |
111 | 119 | if img.mode == 'P': |
112 | 120 | img = img.convert('RGBA') |
148 | 156 | self.cacheable = True |
149 | 157 | self.mode = mode |
150 | 158 | self.max_band = {} |
159 | self.max_src_images = 0 | |
151 | 160 | |
152 | 161 | def add_ops(self, dst_band, src_img, src_band, factor=1.0): |
153 | 162 | self.ops.append(band_ops( |
158 | 167 | )) |
159 | 168 | # store highest requested band index for each source |
160 | 169 | self.max_band[src_img] = max(self.max_band.get(src_img, 0), src_band) |
170 | self.max_src_images = max(src_img+1, self.max_src_images) | |
161 | 171 | |
162 | 172 | def merge(self, sources, image_opts, size=None, bbox=None, bbox_srs=None, coverage=None): |
163 | if not sources: | |
173 | if len(sources) < self.max_src_images: | |
164 | 174 | return BlankImageSource(size=size, image_opts=image_opts, cacheable=True) |
165 | 175 | |
166 | 176 | if size is None: |
218 | 228 | return ImageSource(result, size=size, image_opts=image_opts) |
219 | 229 | |
220 | 230 | |
221 | def merge_images(images, image_opts, size=None): | |
231 | def merge_images(layers, image_opts, size=None, bbox=None, bbox_srs=None, merger=None): | |
222 | 232 | """ |
223 | 233 | Merge multiple images into one. |
224 | 234 | |
226 | 236 | :param format: the format of the output `ImageSource` |
227 | 237 | :param size: size of the merged image, if ``None`` the size |
228 | 238 | of the first image is used |
239 | :param bbox: bounding box of the merged image | 
240 | :param bbox_srs: SRS of the bounding box | 
241 | :param merger: optional merger instance (defaults to ``LayerMerger``) | 
229 | 242 | :rtype: `ImageSource` |
230 | 243 | """ |
231 | merger = LayerMerger() | |
232 | for img in images: | |
233 | merger.add(img) | |
234 | return merger.merge(image_opts=image_opts, size=size) | |
244 | if merger is None: | |
245 | merger = LayerMerger() | |
246 | ||
247 | # BandMerger does not support coverages, pass only the images | 
248 | if isinstance(merger, BandMerger): | |
249 | sources = [l[0] if isinstance(l, tuple) else l for l in layers] | |
250 | return merger.merge(sources, image_opts=image_opts, size=size, bbox=bbox, bbox_srs=bbox_srs) | |
251 | ||
252 | for layer in layers: | |
253 | if isinstance(layer, tuple): | |
254 | merger.add(layer[0], layer[1]) | |
255 | else: | |
256 | merger.add(layer) | |
257 | ||
258 | return merger.merge(image_opts=image_opts, size=size, bbox=bbox, bbox_srs=bbox_srs) | |
259 | ||
235 | 260 | |
236 | 261 | def concat_legends(legends, format='png', size=None, bgcolor='#ffffff', transparent=True): |
237 | 262 | """ |
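`merge_images` now accepts either plain images or `(image, coverage)` tuples. The normalization it performs can be sketched in isolation (plain strings stand in for image and coverage objects here; the helper name is made up):

```python
def split_layers(layers):
    """Split a mixed list of images and (image, coverage) tuples
    into parallel lists of images and coverages (None if absent)."""
    images, coverages = [], []
    for layer in layers:
        if isinstance(layer, tuple):
            images.append(layer[0])
            coverages.append(layer[1])
        else:
            images.append(layer)
            coverages.append(None)
    return images, coverages
```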
0 | 0 | # This file is part of the MapProxy project. |
1 | 1 | # Copyright (C) 2010 Omniscale <http://omniscale.de> |
2 | # | |
2 | # | |
3 | 3 | # Licensed under the Apache License, Version 2.0 (the "License"); |
4 | 4 | # you may not use this file except in compliance with the License. |
5 | 5 | # You may obtain a copy of the License at |
6 | # | |
6 | # | |
7 | 7 | # http://www.apache.org/licenses/LICENSE-2.0 |
8 | # | |
8 | # | |
9 | 9 | # Unless required by applicable law or agreed to in writing, software |
10 | 10 | # distributed under the License is distributed on an "AS IS" BASIS, |
11 | 11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. |
32 | 32 | """ |
33 | 33 | self.tile_grid = tile_grid |
34 | 34 | self.tile_size = tile_size |
35 | ||
35 | ||
36 | 36 | def merge(self, ordered_tiles, image_opts): |
37 | 37 | """ |
38 | 38 | Merge all tiles into one image. |
39 | ||
39 | ||
40 | 40 | :param ordered_tiles: list of tiles, sorted row-wise (top to bottom) |
41 | 41 | :rtype: `ImageSource` |
42 | 42 | """ |
46 | 46 | tile = ordered_tiles.pop() |
47 | 47 | return tile |
48 | 48 | src_size = self._src_size() |
49 | ||
49 | ||
50 | 50 | result = create_image(src_size, image_opts) |
51 | 51 | |
52 | 52 | cacheable = True |
72 | 72 | else: |
73 | 73 | raise |
74 | 74 | return ImageSource(result, size=src_size, image_opts=image_opts, cacheable=cacheable) |
75 | ||
75 | ||
76 | 76 | def _src_size(self): |
77 | 77 | width = self.tile_grid[0]*self.tile_size[0] |
78 | 78 | height = self.tile_grid[1]*self.tile_size[1] |
79 | 79 | return width, height |
80 | ||
80 | ||
81 | 81 | def _tile_offset(self, i): |
82 | 82 | """ |
83 | 83 | Return the image offset (upper-left coord) of the i-th tile, |
85 | 85 | """ |
86 | 86 | return (i%self.tile_grid[0]*self.tile_size[0], |
87 | 87 | i//self.tile_grid[0]*self.tile_size[1]) |
88 | ||
88 | ||
89 | 89 | |
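`_tile_offset` maps a row-wise tile index to the pixel offset of that tile in the merged image: the column is the index modulo the grid width, the row is the integer division by it. A standalone sketch:

```python
def tile_offset(i, tile_grid, tile_size):
    # tiles are ordered row-wise (top to bottom), so the column is
    # i % grid_width and the row is i // grid_width
    return (i % tile_grid[0] * tile_size[0],
            i // tile_grid[0] * tile_size[1])
```

For a 3x2 grid of 256px tiles, tile index 4 sits in the second row, second column, i.e. at pixel offset (256, 256).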
90 | 90 | class TileSplitter(object): |
91 | 91 | """ |
105 | 105 | minx, miny = crop_coord |
106 | 106 | maxx = minx + tile_size[0] |
107 | 107 | maxy = miny + tile_size[1] |
108 | ||
108 | ||
109 | 109 | if (minx < 0 or miny < 0 or maxx > self.meta_img.size[0] |
110 | 110 | or maxy > self.meta_img.size[1]): |
111 | 111 | |
120 | 120 | else: |
121 | 121 | crop = self.meta_img.crop((minx, miny, maxx, maxy)) |
122 | 122 | return ImageSource(crop, size=tile_size, image_opts=self.image_opts) |
123 | ||
123 | ||
124 | 124 | |
125 | 125 | class TiledImage(object): |
126 | 126 | """ |
141 | 141 | self.tile_size = tile_size |
142 | 142 | self.src_bbox = src_bbox |
143 | 143 | self.src_srs = src_srs |
144 | ||
144 | ||
145 | 145 | def image(self, image_opts): |
146 | 146 | """ |
147 | 147 | Return the tiles as one merged image. |
148 | ||
148 | ||
149 | 149 | :rtype: `ImageSource` |
150 | 150 | """ |
151 | 151 | tm = TileMerger(self.tile_grid, self.tile_size) |
152 | 152 | return tm.merge(self.tiles, image_opts=image_opts) |
153 | ||
153 | ||
154 | 154 | def transform(self, req_bbox, req_srs, out_size, image_opts): |
155 | 155 | """ |
156 | 156 | Return the tiles as one merged and transformed image. |
157 | ||
157 | ||
158 | 158 | :param req_bbox: the bbox of the output image |
159 | 159 | :param req_srs: the srs of the req_bbox |
160 | 160 | :param out_size: the size in pixel of the output image |
14 | 14 | |
15 | 15 | from __future__ import division |
16 | 16 | |
17 | from mapproxy.compat.image import Image | |
17 | from mapproxy.compat.image import Image, transform_uses_center | |
18 | 18 | from mapproxy.image import ImageSource, image_filter |
19 | 19 | from mapproxy.srs import make_lin_transf, bbox_equals |
20 | 20 | |
136 | 136 | to_src_px = make_lin_transf(src_bbox, src_quad) |
137 | 137 | to_dst_w = make_lin_transf(dst_quad, dst_bbox) |
138 | 138 | meshes = [] |
139 | ||
140 | # more recent versions of Pillow use center coordinates for | |
141 | # transformations; otherwise we need to add half a pixel manually | 
142 | if transform_uses_center(): | |
143 | px_offset = 0.0 | |
144 | else: | |
145 | px_offset = 0.5 | |
146 | ||
139 | 147 | def dst_quad_to_src(quad): |
140 | 148 | src_quad = [] |
141 | 149 | for dst_px in [(quad[0], quad[1]), (quad[0], quad[3]), |
142 | 150 | (quad[2], quad[3]), (quad[2], quad[1])]: |
143 | dst_w = to_dst_w((dst_px[0]+0.5, dst_px[1]+0.5)) | |
151 | dst_w = to_dst_w((dst_px[0]+px_offset, dst_px[1]+px_offset)) | |
144 | 152 | src_w = self.dst_srs.transform_to(self.src_srs, dst_w) |
145 | 153 | src_px = to_src_px(src_w) |
146 | 154 | src_quad.extend(src_px) |
151 | 151 | |
152 | 152 | @property |
153 | 153 | def coord(self): |
154 | return make_lin_transf((0, self.size[1], self.size[0], 0), self.bbox)(self.pos) | |
154 | return make_lin_transf((0, 0, self.size[0], self.size[1]), self.bbox)(self.pos) | |
155 | 155 | |
156 | 156 | class LegendQuery(object): |
157 | 157 | def __init__(self, format, scale): |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | 15 | from functools import partial as fp |
16 | from mapproxy.compat import string_type | |
17 | from mapproxy.compat.modules import urlparse | |
16 | 18 | from mapproxy.request.base import RequestParams, BaseRequest |
17 | from mapproxy.compat import string_type | |
18 | ||
19 | from mapproxy.srs import make_lin_transf | |
19 | 20 | |
20 | 21 | class ArcGISExportRequestParams(RequestParams): |
21 | 22 | """ |
85 | 86 | del _set_srs |
86 | 87 | |
87 | 88 | |
89 | ||
90 | class ArcGISIdentifyRequestParams(ArcGISExportRequestParams): | |
91 | def _get_format(self): | |
92 | """ | |
93 | The requested format as string (w/o any 'image/', 'text/', etc prefixes) | |
94 | """ | |
95 | return self["format"] | |
96 | def _set_format(self, format): | |
97 | self["format"] = format.rsplit("/")[-1] | |
98 | format = property(_get_format, _set_format) | |
99 | del _get_format | |
100 | del _set_format | |
101 | ||
102 | def _get_bbox(self): | |
103 | """ | |
104 | ``bbox`` as a tuple (minx, miny, maxx, maxy). | |
105 | """ | |
106 | if 'mapExtent' not in self.params or self.params['mapExtent'] is None: | |
107 | return None | |
108 | points = [float(val) for val in self.params['mapExtent'].split(',')] | |
109 | return tuple(points[:4]) | |
110 | def _set_bbox(self, value): | |
111 | if value is not None and not isinstance(value, string_type): | |
112 | value = ','.join(str(x) for x in value) | |
113 | self['mapExtent'] = value | |
114 | bbox = property(_get_bbox, _set_bbox) | |
115 | del _get_bbox | |
116 | del _set_bbox | |
117 | ||
118 | def _get_size(self): | |
119 | """ | |
120 | Size of the request in pixel as a tuple (width, height), | |
121 | or None if one is missing. | |
122 | """ | |
123 | if 'imageDisplay' not in self.params or self.params['imageDisplay'] is None: | |
124 | return None | |
125 | dim = [float(val) for val in self.params['imageDisplay'].split(',')] | |
126 | return tuple(dim[:2]) | |
127 | def _set_size(self, value): | |
128 | if value is not None and not isinstance(value, string_type): | |
129 | value = ','.join(str(x) for x in value) + ',96' | |
130 | self['imageDisplay'] = value | |
131 | size = property(_get_size, _set_size) | |
132 | del _get_size | |
133 | del _set_size | |
134 | ||
135 | def _get_pos(self): | |
136 | size = self.size | |
137 | vals = self['geometry'].split(',') | |
138 | x, y = float(vals[0]), float(vals[1]) | |
139 | return make_lin_transf(self.bbox, (0, 0, size[0], size[1]))((x, y)) | |
140 | ||
141 | def _set_pos(self, value): | |
142 | size = self.size | |
143 | req_coord = make_lin_transf((0, 0, size[0], size[1]), self.bbox)(value) | |
144 | self['geometry'] = '%f,%f' % req_coord | |
145 | pos = property(_get_pos, _set_pos) | |
146 | del _get_pos | |
147 | del _set_pos | |
148 | ||
149 | @property | |
150 | def srs(self): | |
151 | srs = self.params.get('sr', None) | |
152 | if srs: | |
153 | return 'EPSG:%s' % srs | |
154 | ||
155 | @srs.setter | |
156 | def srs(self, srs): | |
157 | if hasattr(srs, 'srs_code'): | |
158 | code = srs.srs_code | |
159 | else: | |
160 | code = srs | |
161 | self.params['sr'] = code.rsplit(':', 1)[-1] | |
162 | ||
88 | 163 | class ArcGISRequest(BaseRequest): |
89 | 164 | request_params = ArcGISExportRequestParams |
90 | 165 | fixed_params = {"f": "image"} |
92 | 167 | def __init__(self, param=None, url='', validate=False, http=None): |
93 | 168 | BaseRequest.__init__(self, param, url, validate, http) |
94 | 169 | |
95 | self.url = self.url.rstrip("/") | |
96 | if not self.url.endswith("export"): | |
97 | self.url += "/export" | |
170 | self.url = rest_endpoint(url) | |
98 | 171 | |
99 | 172 | def copy(self): |
100 | 173 | return self.__class__(param=self.params.copy(), url=self.url) |
107 | 180 | return params.query_string |
108 | 181 | |
109 | 182 | |
110 | def create_request(req_data, param): | |
183 | class ArcGISIdentifyRequest(BaseRequest): | |
184 | request_params = ArcGISIdentifyRequestParams | |
185 | fixed_params = {'geometryType': 'esriGeometryPoint'} | |
186 | def __init__(self, param=None, url='', validate=False, http=None): | |
187 | BaseRequest.__init__(self, param, url, validate, http) | |
188 | ||
189 | self.url = rest_identify_endpoint(url) | |
190 | ||
191 | def copy(self): | |
192 | return self.__class__(param=self.params.copy(), url=self.url) | |
193 | ||
194 | @property | |
195 | def query_string(self): | |
196 | params = self.params.copy() | |
197 | for key, value in self.fixed_params.items(): | |
198 | params[key] = value | |
199 | return params.query_string | |
200 | ||
201 | ||
202 | ||
203 | def create_identify_request(req_data, param): | |
111 | 204 | req_data = req_data.copy() |
112 | 205 | |
113 | 206 | # Pop the URL off the request data. |
114 | 207 | url = req_data['url'] |
115 | 208 | del req_data['url'] |
116 | 209 | |
210 | return ArcGISIdentifyRequest(url=url, param=req_data) | |
211 | ||
212 | def create_request(req_data, param): | |
213 | req_data = req_data.copy() | |
214 | ||
215 | # Pop the URL off the request data. | |
216 | url = req_data['url'] | |
217 | del req_data['url'] | |
218 | ||
117 | 219 | if 'format' in param: |
118 | 220 | req_data['format'] = param['format'] |
119 | 221 | |
122 | 224 | req_data['transparent'] = str(req_data['transparent']) |
123 | 225 | |
124 | 226 | return ArcGISRequest(url=url, param=req_data) |
227 | ||
228 | ||
229 | def rest_endpoint(url): | |
230 | parts = urlparse.urlsplit(url) | |
231 | path = parts.path.rstrip('/').split('/') | |
232 | ||
233 | if path[-1] in ('export', 'exportImage'): | |
234 | if path[-2] == 'MapServer': | |
235 | path[-1] = 'export' | |
236 | elif path[-2] == 'ImageServer': | |
237 | path[-1] = 'exportImage' | |
238 | elif path[-1] == 'MapServer': | |
239 | path.append('export') | |
240 | elif path[-1] == 'ImageServer': | |
241 | path.append('exportImage') | |
242 | ||
243 | parts = parts[0], parts[1], '/'.join(path), parts[3], parts[4] | |
244 | return urlparse.urlunsplit(parts) | |
245 | ||
246 | ||
247 | def rest_identify_endpoint(url): | |
248 | parts = urlparse.urlsplit(url) | |
249 | path = parts.path.rstrip('/').split('/') | |
250 | ||
251 | if path[-1] in ('export', 'exportImage'): | |
252 | path[-1] = 'identify' | |
253 | elif path[-1] in ('MapServer', 'ImageServer'): | |
254 | path.append('identify') | |
255 | ||
256 | parts = parts[0], parts[1], '/'.join(path), parts[3], parts[4] | |
257 | return urlparse.urlunsplit(parts) | |
258 |
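The endpoint rewriting added above only needs stdlib `urllib.parse`; this condensed sketch mirrors the logic of `rest_endpoint` (the service URLs in the examples are made up):

```python
from urllib.parse import urlsplit, urlunsplit

def rest_endpoint(url):
    # append /export (MapServer) or /exportImage (ImageServer),
    # normalizing an already present export suffix
    parts = urlsplit(url)
    path = parts.path.rstrip('/').split('/')
    if path[-1] in ('export', 'exportImage'):
        if path[-2] == 'MapServer':
            path[-1] = 'export'
        elif path[-2] == 'ImageServer':
            path[-1] = 'exportImage'
    elif path[-1] == 'MapServer':
        path.append('export')
    elif path[-1] == 'ImageServer':
        path.append('exportImage')
    return urlunsplit((parts[0], parts[1], '/'.join(path), parts[3], parts[4]))
```

A trailing `export`/`exportImage` is corrected to match the service type, so a MapServer URL that mistakenly ends in `exportImage` still resolves to `…/MapServer/export`.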
733 | 733 | Version('1.3.0'): (('text', 'text/plain'), |
734 | 734 | ('html', 'text/html'), |
735 | 735 | ('xml', 'text/xml'), |
736 | ('json', 'application/json'), | |
736 | 737 | ), |
737 | 738 | None: (('text', 'text/plain'), |
738 | 739 | ('html', 'text/html'), |
739 | 740 | ('xml', 'application/vnd.ogc.gml'), |
741 | ('json', 'application/json'), | |
740 | 742 | ) |
741 | 743 | } |
742 | 744 |
90 | 90 | |
91 | 91 | self.last_modified = timestamp |
92 | 92 | if (timestamp or etag_data) and max_age is not None: |
93 | self.headers['Cache-control'] = 'max-age=%d public' % max_age | |
93 | self.headers['Cache-control'] = 'public, max-age=%d, s-maxage=%d' % (max_age, max_age) | |
94 | 94 | |
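The new header value adds `s-maxage` so shared (proxy) caches honor the same lifetime as private caches. The formatting can be checked in isolation (the helper name is illustrative):

```python
def cache_control(max_age):
    # s-maxage mirrors max-age so shared caches use the same TTL
    return 'public, max-age=%d, s-maxage=%d' % (max_age, max_age)
```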
95 | 95 | def make_conditional(self, req): |
96 | 96 | """ |
95 | 95 | parser = optparse.OptionParser("%prog grids [options] mapproxy_conf") |
96 | 96 | parser.add_option("-f", "--mapproxy-conf", dest="mapproxy_conf", |
97 | 97 | help="MapProxy configuration") |
98 | ||
99 | parser.add_option("-q", "--quiet", | |
100 | action="count", dest="quiet", default=0, | |
101 | help="reduce number of messages to stdout, repeat to disable progress output") | |
98 | 102 | |
99 | 103 | parser.add_option("--source", dest="source", |
100 | 104 | help="source to export (source or cache)") |
203 | 207 | 'type': 'mbtiles', |
204 | 208 | 'filename': options.dest, |
205 | 209 | } |
210 | elif options.type == 'sqlite': | |
211 | cache_conf['cache'] = { | |
212 | 'type': 'sqlite', | |
213 | 'directory': options.dest, | |
214 | } | |
215 | elif options.type == 'geopackage': | |
216 | cache_conf['cache'] = { | |
217 | 'type': 'geopackage', | |
218 | 'filename': options.dest, | |
219 | } | |
220 | elif options.type == 'compact-v1': | |
221 | cache_conf['cache'] = { | |
222 | 'type': 'compact', | |
223 | 'version': 1, | |
224 | 'directory': options.dest, | |
225 | } | |
206 | 226 | elif options.type in ('tc', 'mapproxy'): |
207 | 227 | cache_conf['cache'] = { |
208 | 228 | 'type': 'file', |
256 | 276 | |
257 | 277 | print(format_export_task(task, custom_grid=custom_grid)) |
258 | 278 | |
259 | logger = ProgressLog(verbose=True, silent=False) | |
279 | logger = ProgressLog(verbose=options.quiet==0, silent=options.quiet>=2) | |
260 | 280 | try: |
261 | 281 | seed_task(task, progress_logger=logger, dry_run=options.dry_run, |
262 | 282 | concurrency=options.concurrency) |
90 | 90 | if args[0] == '-': |
91 | 91 | values = values_from_stdin() |
92 | 92 | elif options.eval: |
93 | values = map(eval, args) | |
93 | values = [eval(a) for a in args] | |
94 | 94 | else: |
95 | values = map(float, args) | |
95 | values = [float(a) for a in args] | |
96 | 96 | |
97 | 97 | values.sort(reverse=True) |
98 | 98 |
15 | 15 | from __future__ import print_function |
16 | 16 | |
17 | 17 | import os |
18 | from mapproxy.compat.itertools import izip_longest | |
18 | 19 | from mapproxy.seed.util import format_cleanup_task |
19 | 20 | from mapproxy.util.fs import cleanup_directory |
20 | from mapproxy.seed.seeder import TileWorkerPool, TileWalker, TileCleanupWorker | |
21 | from mapproxy.seed.seeder import ( | |
22 | TileWorkerPool, TileWalker, TileCleanupWorker, | |
23 | SeedProgress, | |
24 | ) | |
25 | from mapproxy.seed.util import ProgressLog | |
21 | 26 | |
22 | 27 | def cleanup(tasks, concurrency=2, dry_run=False, skip_geoms_for_last_levels=0, |
23 | 28 | verbose=True, progress_logger=None): |
27 | 32 | if task.coverage is False: |
28 | 33 | continue |
29 | 34 | |
35 | # seed_progress for tilewalker cleanup | |
36 | seed_progress = None | |
37 | # cleanup_progress for os.walk based cleanup | |
38 | cleanup_progress = None | |
39 | if progress_logger and progress_logger.progress_store: | |
40 | progress_logger.current_task_id = task.id | |
41 | start_progress = progress_logger.progress_store.get(task.id) | |
42 | seed_progress = SeedProgress(old_progress_identifier=start_progress) | |
43 | cleanup_progress = DirectoryCleanupProgress(old_dir=start_progress) | |
44 | ||
30 | 45 | if task.complete_extent: |
31 | if hasattr(task.tile_manager.cache, 'level_location'): | |
32 | simple_cleanup(task, dry_run=dry_run, progress_logger=progress_logger) | |
46 | if callable(getattr(task.tile_manager.cache, 'level_location', None)): | |
47 | simple_cleanup(task, dry_run=dry_run, progress_logger=progress_logger, | |
48 | cleanup_progress=cleanup_progress) | |
33 | 49 | continue |
34 | elif hasattr(task.tile_manager.cache, 'remove_level_tiles_before'): | |
50 | elif callable(getattr(task.tile_manager.cache, 'remove_level_tiles_before', None)): | |
35 | 51 | cache_cleanup(task, dry_run=dry_run, progress_logger=progress_logger) |
36 | 52 | continue |
37 | 53 | |
38 | 54 | tilewalker_cleanup(task, dry_run=dry_run, concurrency=concurrency, |
39 | 55 | skip_geoms_for_last_levels=skip_geoms_for_last_levels, |
40 | progress_logger=progress_logger) | |
56 | progress_logger=progress_logger, | |
57 | seed_progress=seed_progress, | |
58 | ) | |
41 | 59 | |
42 | def simple_cleanup(task, dry_run, progress_logger=None): | |
60 | ||
61 | def simple_cleanup(task, dry_run, progress_logger=None, cleanup_progress=None): | |
43 | 62 | """ |
44 | 63 | Cleanup cache level on file system level. |
45 | 64 | """ |
65 | ||
46 | 66 | for level in task.levels: |
47 | 67 | level_dir = task.tile_manager.cache.level_location(level) |
48 | 68 | if dry_run: |
52 | 72 | file_handler = None |
53 | 73 | if progress_logger: |
54 | 74 | progress_logger.log_message('removing old tiles in ' + normpath(level_dir)) |
75 | if progress_logger.progress_store: | |
76 | cleanup_progress.step_dir(level_dir) | |
77 | if cleanup_progress.already_processed(): | |
78 | continue | |
79 | progress_logger.progress_store.add( | |
80 | task.id, | |
81 | cleanup_progress.current_progress_identifier(), | |
82 | ) | |
83 | progress_logger.progress_store.write() | |
84 | ||
55 | 85 | cleanup_directory(level_dir, task.remove_timestamp, |
56 | 86 | file_handler=file_handler, remove_empty_dirs=True) |
57 | 87 | |
77 | 107 | return path |
78 | 108 | |
79 | 109 | def tilewalker_cleanup(task, dry_run, concurrency, skip_geoms_for_last_levels, |
80 | progress_logger=None): | |
110 | progress_logger=None, seed_progress=None): | |
81 | 111 | """ |
82 | 112 | Cleanup tiles with tile traversal. |
83 | 113 | """ |
87 | 117 | dry_run=dry_run, size=concurrency) |
88 | 118 | tile_walker = TileWalker(task, tile_worker_pool, handle_stale=True, |
89 | 119 | work_on_metatiles=False, progress_logger=progress_logger, |
90 | skip_geoms_for_last_levels=skip_geoms_for_last_levels) | |
120 | skip_geoms_for_last_levels=skip_geoms_for_last_levels, | |
121 | seed_progress=seed_progress) | |
91 | 122 | try: |
92 | 123 | tile_walker.walk() |
93 | 124 | except KeyboardInterrupt: |
95 | 126 | raise |
96 | 127 | finally: |
97 | 128 | tile_worker_pool.stop() |
129 | ||
130 | ||
131 | class DirectoryCleanupProgress(object): | |
132 | def __init__(self, old_dir=None): | |
133 | self.old_dir = old_dir | |
134 | self.current_dir = None | |
135 | ||
136 | def step_dir(self, dir): | |
137 | self.current_dir = dir | |
138 | ||
139 | def already_processed(self): | |
140 | return self.can_skip(self.old_dir, self.current_dir) | |
141 | ||
142 | def current_progress_identifier(self): | |
143 | if self.already_processed() or self.current_dir is None: | |
144 | return self.old_dir | |
145 | return self.current_dir | |
146 | ||
147 | @staticmethod | |
148 | def can_skip(old_dir, current_dir): | |
149 | """ | |
150 | Return True if `current_dir` is before `old_dir` when compared | 
151 | lexicographically. | 
152 | ||
153 | >>> DirectoryCleanupProgress.can_skip(None, '/00') | |
154 | False | |
155 | >>> DirectoryCleanupProgress.can_skip(None, '/00/000/000') | |
156 | False | |
157 | ||
158 | >>> DirectoryCleanupProgress.can_skip('/01/000/001', '/00') | |
159 | True | |
160 | >>> DirectoryCleanupProgress.can_skip('/01/000/001', '/01/000/000') | |
161 | True | |
162 | >>> DirectoryCleanupProgress.can_skip('/01/000/001', '/01/000/000/000') | |
163 | True | |
164 | >>> DirectoryCleanupProgress.can_skip('/01/000/001', '/01/000/001') | |
165 | False | |
166 | >>> DirectoryCleanupProgress.can_skip('/01/000/001', '/01/000/001/000') | |
167 | False | |
168 | """ | |
169 | if old_dir is None: | |
170 | return False | |
171 | if current_dir is None: | |
172 | return False | |
173 | for old, current in izip_longest(old_dir.split(os.path.sep), current_dir.split(os.path.sep), fillvalue=None): | |
174 | if old is None: | |
175 | return False | |
176 | if current is None: | |
177 | return False | |
178 | if old < current: | |
179 | return False | |
180 | if old > current: | |
181 | return True | |
182 | return False | |
183 | ||
184 | def running(self): | |
185 | return True |
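The component-wise comparison behind `can_skip` can be exercised on its own; this sketch uses `'/'` instead of `os.path.sep` for brevity and mirrors the doctest cases above:

```python
from itertools import zip_longest

def can_skip(old_dir, current_dir):
    # True if current_dir sorts before old_dir, compared
    # path-component-wise (missing components pad as None)
    if old_dir is None or current_dir is None:
        return False
    for old, current in zip_longest(old_dir.split('/'), current_dir.split('/')):
        if old is None or current is None:
            return False
        if old < current:
            return False
        if old > current:
            return True
    return False
```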
14 | 14 | |
15 | 15 | from __future__ import print_function |
16 | 16 | |
17 | import errno | |
18 | import os | |
19 | import re | |
20 | import signal | |
17 | 21 | import sys |
22 | import time | |
18 | 23 | import logging |
19 | 24 | from logging.config import fileConfig |
20 | 25 | |
21 | from optparse import OptionParser | |
26 | from subprocess import Popen | |
27 | from optparse import OptionParser, OptionValueError | |
22 | 28 | |
23 | 29 | from mapproxy.config.loader import load_configuration, ConfigurationError |
24 | 30 | from mapproxy.seed.config import load_seed_tasks_conf |
28 | 34 | ProgressLog, ProgressStore) |
29 | 35 | from mapproxy.seed.cachelock import CacheLocker |
30 | 36 | |
37 | SECONDS_PER_DAY = 60 * 60 * 24 | |
38 | SECONDS_PER_MINUTE = 60 | |
39 | ||
31 | 40 | def setup_logging(logging_conf=None): |
32 | 41 | if logging_conf is not None: |
33 | 42 | fileConfig(logging_conf, {'here': './'}) |
41 | 50 | "[%(asctime)s] %(name)s - %(levelname)s - %(message)s") |
42 | 51 | ch.setFormatter(formatter) |
43 | 52 | mapproxy_log.addHandler(ch) |
53 | ||
54 | ||
55 | def check_duration(option, opt, value, parser): | |
56 | try: | |
57 | setattr(parser.values, option.dest, parse_duration(value)) | |
58 | except ValueError: | |
59 | raise OptionValueError( | |
60 | "option %s: invalid duration value: %r, expected (10s, 15m, 0.5h, 3d, etc)" | |
61 | % (opt, value), | |
62 | ) | |
63 | ||
64 | ||
65 | def parse_duration(string): | |
66 | match = re.match(r'^(\d*\.?\d+)(s|m|h|d)', string) | 
67 | if not match: | |
68 | raise ValueError('invalid duration, not in format: 10s, 0.5h, etc.') | |
69 | duration = float(match.group(1)) | |
70 | unit = match.group(2) | |
71 | if unit == 's': | |
72 | return duration | |
73 | duration *= 60 | |
74 | if unit == 'm': | |
75 | return duration | |
76 | duration *= 60 | |
77 | if unit == 'h': | |
78 | return duration | |
79 | duration *= 24 | |
80 | return duration | |
81 | ||
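The duration parser converts a number plus unit suffix into seconds. A condensed standalone sketch of the same behavior, using a lookup table instead of cascading multiplications (and with the decimal point escaped in the pattern):

```python
import re

def parse_duration(string):
    # accepts e.g. '10s', '15m', '0.5h', '3d' and returns seconds
    match = re.match(r'^(\d*\.?\d+)(s|m|h|d)', string)
    if not match:
        raise ValueError('invalid duration, not in format: 10s, 0.5h, etc.')
    value, unit = float(match.group(1)), match.group(2)
    factor = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}[unit]
    return value * factor
```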
44 | 82 | |
45 | 83 | class SeedScript(object): |
46 | 84 | usage = "usage: %prog [options] seed_conf" |
96 | 134 | default=None, |
97 | 135 | help="filename for storing the seed progress (for --continue option)") |
98 | 136 | |
137 | parser.add_option("--duration", dest="duration", | |
138 | help="stop seeding after the given duration (120s, 15m, 4h, 0.5d, etc.)", | 
139 | type=str, action="callback", callback=check_duration) | |
140 | ||
141 | parser.add_option("--reseed-file", dest="reseed_file", | |
142 | help="start of last re-seed", metavar="FILE", | |
143 | default=None) | |
144 | parser.add_option("--reseed-interval", dest="reseed_interval", | |
145 | help="only start seeding if --reseed-file is older than --reseed-interval", | 
146 | metavar="DURATION", | |
147 | type=str, action="callback", callback=check_duration, | |
148 | default=None) | |
149 | ||
99 | 150 | parser.add_option("--log-config", dest='logging_conf', default=None, |
100 | 151 | help="logging configuration") |
101 | 152 | |
117 | 168 | |
118 | 169 | setup_logging(options.logging_conf) |
119 | 170 | |
171 | if options.duration: | |
172 | # calls with --duration are handled in call_with_duration | |
173 | sys.exit(self.call_with_duration(options, args)) | |
174 | ||
120 | 175 | try: |
121 | 176 | mapproxy_conf = load_configuration(options.conf_file, seed=True) |
122 | 177 | except ConfigurationError as ex: |
131 | 186 | if not sys.stdout.isatty() and options.quiet == 0: |
132 | 187 | # disable verbose output for non-ttys |
133 | 188 | options.quiet = 1 |
189 | ||
190 | progress = None | |
191 | if options.continue_seed or options.progress_file: | |
192 | if not options.progress_file: | |
193 | options.progress_file = '.mapproxy_seed_progress' | |
194 | progress = ProgressStore(options.progress_file, | |
195 | continue_seed=options.continue_seed) | |
196 | ||
197 | if options.reseed_file: | |
198 | if not os.path.exists(options.reseed_file): | |
199 | # create --reseed-file if missing | |
200 | with open(options.reseed_file, 'w'): | |
201 | pass | |
202 | else: | |
203 | if progress and not os.path.exists(options.progress_file): | |
204 | # we have an existing --reseed-file but no --progress-file | |
205 | # meaning the last seed call was completed | |
206 | if options.reseed_interval and ( | |
207 | os.path.getmtime(options.reseed_file) > (time.time() - options.reseed_interval) | |
208 | ): | |
209 | print("no need for re-seeding") | |
210 | sys.exit(1) | |
211 | os.utime(options.reseed_file, (time.time(), time.time())) | |
134 | 212 | |
135 | 213 | with mapproxy_conf: |
136 | 214 | try: |
150 | 228 | for task in cleanup_tasks: |
151 | 229 | print(format_cleanup_task(task)) |
152 | 230 | return 0 |
153 | ||
154 | progress = None | |
155 | if options.continue_seed or options.progress_file: | |
156 | if options.progress_file: | |
157 | progress_file = options.progress_file | |
158 | else: | |
159 | progress_file = '.mapproxy_seed_progress' | |
160 | progress = ProgressStore(progress_file, | |
161 | continue_seed=options.continue_seed) | |
162 | 231 | |
163 | 232 | try: |
164 | 233 | if options.interactive: |
177 | 246 | print('========== Cleanup tasks ==========') |
178 | 247 | print('Start cleanup process (%d task%s)' % ( |
179 | 248 | len(cleanup_tasks), 's' if len(cleanup_tasks) > 1 else '')) |
180 | logger = ProgressLog(verbose=options.quiet==0, silent=options.quiet>=2) | |
249 | logger = ProgressLog(verbose=options.quiet==0, silent=options.quiet>=2, | |
250 | progress_store=progress) | |
181 | 251 | cleanup(cleanup_tasks, verbose=options.quiet==0, dry_run=options.dry_run, |
182 | 252 | concurrency=options.concurrency, progress_logger=logger, |
183 | 253 | skip_geoms_for_last_levels=options.geom_levels) |
224 | 294 | |
225 | 295 | return seed_names, cleanup_names |
226 | 296 | |
297 | def call_with_duration(self, options, args): | |
298 | # --duration is implemented by calling mapproxy-seed again in a separate | |
299 | # process (but without --duration) and terminating that process | |
300 | # after --duration | |
301 | ||
302 | argv = sys.argv[:] | |
303 | for i, arg in enumerate(sys.argv): | |
304 | if arg == '--duration': | |
305 | argv = sys.argv[:i] + sys.argv[i+2:] | |
306 | break | |
307 | elif arg.startswith('--duration='): | |
308 | argv = sys.argv[:i] + sys.argv[i+1:] | |
309 | break | |
310 | ||
311 | # call mapproxy-seed again, poll status, terminate after --duration | |
312 | cmd = Popen(args=argv) | |
313 | start = time.time() | |
314 | while True: | |
315 | if (time.time() - start) > options.duration: | |
316 | try: | |
317 | cmd.send_signal(signal.SIGINT) | |
318 | # try to stop with SIGINT first | |
319 | # escalate to SIGTERM after 10 seconds | |
320 | for _ in range(10): | |
321 | time.sleep(1) | |
322 | if cmd.poll() is not None: | |
323 | break | |
324 | else: | |
325 | cmd.terminate() | |
326 | except OSError as ex: | |
327 | if ex.errno != errno.ESRCH: # no such process | |
328 | raise | |
329 | return 0 | |
330 | if cmd.poll() is not None: | |
331 | return cmd.returncode | |
332 | try: | |
333 | time.sleep(1) | |
334 | except KeyboardInterrupt: | |
335 | # force termination | |
336 | start = 0 | |
337 | ||
338 | ||
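The `--duration` handling above strips the flag from `sys.argv`, re-runs the command in a child process, and escalates from SIGINT to SIGTERM once the time budget is spent. A standalone sketch of that supervise-and-terminate pattern (the `run_with_duration` helper and its arguments are illustrative, not part of MapProxy):

```python
import signal
import subprocess
import sys
import time

def run_with_duration(argv, duration, grace=10):
    # Start the child and poll it until it exits or the budget runs out.
    cmd = subprocess.Popen(argv)
    start = time.time()
    while True:
        if cmd.poll() is not None:
            return cmd.returncode  # child finished on its own
        if time.time() - start > duration:
            # Ask politely with SIGINT first ...
            cmd.send_signal(signal.SIGINT)
            for _ in range(grace):
                time.sleep(1)
                if cmd.poll() is not None:
                    break
            else:
                # ... then escalate to SIGTERM after the grace period.
                cmd.terminate()
            cmd.wait()
            return 0
        time.sleep(0.1)
```

The shipped code additionally swallows `ESRCH` in case the child exits between the poll and the signal; a robust wrapper should do the same.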
227 | 339 | def interactive(self, seed_tasks, cleanup_tasks): |
228 | 340 | selected_seed_tasks = [] |
229 | 341 | print('========== Select seeding tasks ==========') |
263 | 375 | result.extend(args.split(',')) |
264 | 376 | return result |
265 | 377 | |
378 | ||
266 | 379 | if __name__ == '__main__': |
267 | 380 | main() |
15 | 15 | from __future__ import print_function, division |
16 | 16 | |
17 | 17 | import sys |
18 | from collections import deque | |
18 | 19 | from contextlib import contextmanager |
19 | 20 | import time |
20 | 21 | try: |
31 | 32 | from mapproxy.seed.util import format_seed_task, timestamp |
32 | 33 | from mapproxy.seed.cachelock import DummyCacheLocker, CacheLockedError |
33 | 34 | |
34 | from mapproxy.seed.util import (exp_backoff, ETA, limit_sub_bbox, | |
35 | from mapproxy.seed.util import (exp_backoff, limit_sub_bbox, | |
35 | 36 | status_symbol, BackoffError) |
36 | 37 | |
37 | 38 | import logging |
53 | 54 | queue_class = multiprocessing.Queue |
54 | 55 | |
55 | 56 | |
56 | class TileProcessor(object): | |
57 | def __init__(self, dry_run=False): | |
58 | self._lastlog = time.time() | |
59 | self.dry_run = dry_run | |
60 | ||
61 | def log_progress(self, progress): | |
62 | if (self._lastlog + .1) < time.time(): | |
63 | # log progress at most every 100ms | |
64 | print('[%s] %6.2f%% %s \tETA: %s\r' % ( | |
65 | timestamp(), progress[1]*100, progress[0], | |
66 | progress[2] | |
67 | ), end=' ') | |
68 | sys.stdout.flush() | |
69 | self._lastlog = time.time() | |
70 | ||
71 | def process(self, tiles, progress): | |
72 | if not self.dry_run: | |
73 | self.process_tiles(tiles) | |
74 | ||
75 | self.log_progress(progress) | |
76 | ||
77 | def stop(self): | |
78 | raise NotImplementedError() | |
79 | ||
80 | def process_tiles(self, tiles): | |
81 | raise NotImplementedError() | |
82 | ||
83 | ||
84 | class TileWorkerPool(TileProcessor): | |
57 | class TileWorkerPool(object): | |
85 | 58 | """ |
86 | 59 | Manages multiple TileWorker. |
87 | 60 | """ |
88 | 61 | def __init__(self, task, worker_class, size=2, dry_run=False, progress_logger=None): |
89 | TileProcessor.__init__(self, dry_run=dry_run) | |
90 | 62 | self.tiles_queue = queue_class(size) |
91 | 63 | self.task = task |
92 | 64 | self.dry_run = dry_run |
192 | 164 | class SeedProgress(object): |
193 | 165 | def __init__(self, old_progress_identifier=None): |
194 | 166 | self.progress = 0.0 |
195 | self.eta = ETA() | |
196 | 167 | self.level_progress_percentages = [1.0] |
197 | self.level_progresses = [] | |
168 | self.level_progresses = None | |
169 | self.level_progresses_level = 0 | |
198 | 170 | self.progress_str_parts = [] |
199 | self.old_level_progresses = None | |
200 | if old_progress_identifier is not None: | |
201 | self.old_level_progresses = old_progress_identifier | |
171 | self.old_level_progresses = old_progress_identifier | |
202 | 172 | |
203 | 173 | def step_forward(self, subtiles=1): |
204 | 174 | self.progress += self.level_progress_percentages[-1] / subtiles |
205 | self.eta.update(self.progress) | |
206 | 175 | |
207 | 176 | @property |
208 | 177 | def progress_str(self): |
210 | 179 | |
211 | 180 | @contextmanager |
212 | 181 | def step_down(self, i, subtiles): |
182 | if self.level_progresses is None: | |
183 | self.level_progresses = [] | |
184 | self.level_progresses = self.level_progresses[:self.level_progresses_level] | |
213 | 185 | self.level_progresses.append((i, subtiles)) |
186 | self.level_progresses_level += 1 | |
214 | 187 | self.progress_str_parts.append(status_symbol(i, subtiles)) |
215 | 188 | self.level_progress_percentages.append(self.level_progress_percentages[-1] / subtiles) |
189 | ||
216 | 190 | yield |
191 | ||
217 | 192 | self.level_progress_percentages.pop() |
218 | 193 | self.progress_str_parts.pop() |
219 | self.level_progresses.pop() | |
194 | ||
195 | self.level_progresses_level -= 1 | |
196 | if self.level_progresses_level == 0: | |
197 | self.level_progresses = [] | |
220 | 198 | |
221 | 199 | def already_processed(self): |
222 | if self.old_level_progresses == []: | |
223 | return True | |
224 | ||
225 | if self.old_level_progresses is None: | |
226 | return False | |
227 | ||
228 | if self.progress_is_behind(self.old_level_progresses, self.level_progresses): | |
229 | return True | |
230 | else: | |
231 | return False | |
200 | return self.can_skip(self.old_level_progresses, self.level_progresses) | |
232 | 201 | |
233 | 202 | def current_progress_identifier(self): |
234 | return self.level_progresses | |
203 | if self.already_processed() or self.level_progresses is None: | |
204 | return self.old_level_progresses | |
205 | return self.level_progresses[:] | |
235 | 206 | |
236 | 207 | @staticmethod |
237 | def progress_is_behind(old_progress, current_progress): | |
208 | def can_skip(old_progress, current_progress): | |
238 | 209 | """ |
239 | 210 | Return True if the `current_progress` is behind the `old_progress`,
240 | 211 | i.e. the current run has not yet reached the point where the old run stopped.
241 | 212 | |
242 | >>> SeedProgress.progress_is_behind([], [(0, 1)]) | |
213 | >>> SeedProgress.can_skip(None, [(0, 4)]) | |
214 | False | |
215 | >>> SeedProgress.can_skip([], [(0, 4)]) | |
243 | 216 | True |
244 | >>> SeedProgress.progress_is_behind([(0, 1), (1, 4)], [(0, 1)]) | |
245 | False | |
246 | >>> SeedProgress.progress_is_behind([(0, 1), (1, 4)], [(0, 1), (0, 4)]) | |
217 | >>> SeedProgress.can_skip([(0, 4)], None) | |
218 | False | |
219 | >>> SeedProgress.can_skip([(0, 4)], [(0, 4)]) | |
220 | False | |
221 | >>> SeedProgress.can_skip([(1, 4)], [(0, 4)]) | |
247 | 222 | True |
248 | >>> SeedProgress.progress_is_behind([(0, 1), (1, 4)], [(0, 1), (1, 4)]) | |
223 | >>> SeedProgress.can_skip([(0, 4)], [(0, 4), (0, 4)]) | |
224 | False | |
225 | ||
226 | >>> SeedProgress.can_skip([(0, 4), (0, 4), (2, 4)], [(0, 4), (0, 4)]) | |
227 | False | |
228 | >>> SeedProgress.can_skip([(0, 4), (0, 4), (2, 4)], [(0, 4), (0, 4), (1, 4)]) | |
249 | 229 | True |
250 | >>> SeedProgress.progress_is_behind([(0, 1), (1, 4)], [(0, 1), (3, 4)]) | |
251 | False | |
252 | ||
253 | """ | |
254 | for old, current in izip_longest(old_progress, current_progress, fillvalue=(9e15, 9e15)): | |
230 | >>> SeedProgress.can_skip([(0, 4), (0, 4), (2, 4)], [(0, 4), (0, 4), (2, 4)]) | |
231 | False | |
232 | >>> SeedProgress.can_skip([(0, 4), (0, 4), (2, 4)], [(0, 4), (0, 4), (3, 4)]) | |
233 | False | |
234 | >>> SeedProgress.can_skip([(0, 4), (0, 4), (2, 4)], [(0, 4), (1, 4)]) | |
235 | False | |
236 | >>> SeedProgress.can_skip([(0, 4), (0, 4), (2, 4)], [(0, 4), (1, 4), (0, 4)]) | |
237 | False | |
238 | """ | |
239 | if current_progress is None: | |
240 | return False | |
241 | if old_progress is None: | |
242 | return False | |
243 | if old_progress == []: | |
244 | return True | |
245 | for old, current in izip_longest(old_progress, current_progress, fillvalue=None): | |
246 | if old is None: | |
247 | return False | |
248 | if current is None: | |
249 | return False | |
255 | 250 | if old < current: |
256 | 251 | return False |
257 | 252 | if old > current: |
258 | 253 | return True |
259 | return True | |
254 | return False | |
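`can_skip` compares the stored and the current descend paths in lockstep: an empty stored progress marks a finished task, and a lexicographically larger stored entry means the current position was already covered by the previous run. The same logic in isolation, using Python 3's `zip_longest` (this sketch mirrors the doctests above; it is not the shipped implementation):

```python
from itertools import zip_longest  # izip_longest on Python 2

def can_skip(old_progress, current_progress):
    if current_progress is None or old_progress is None:
        return False      # nothing stored, or no current position
    if old_progress == []:
        return True       # empty list marks a completed task
    for old, current in zip_longest(old_progress, current_progress):
        if old is None or current is None:
            return False  # one path is deeper than the other
        if old < current:
            return False  # current position is already past the stored one
        if old > current:
            return True   # still behind the stored position: skip
    return False

assert can_skip([], [(0, 4)]) is True
assert can_skip([(1, 4)], [(0, 4)]) is True
assert can_skip([(0, 4)], [(0, 4), (0, 4)]) is False
```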
260 | 255 | |
261 | 256 | def running(self): |
262 | 257 | return True |
269 | 264 | |
270 | 265 | |
271 | 266 | class TileWalker(object): |
267 | """ | |
268 | TileWalker traverses all tiles in a tile grid and calls worker_pool.process | |
269 | for each (meta) tile. It walks the tile grid (pyramid) depth-first. | |
270 | Intersections with coverages are checked before handling subtiles in the next level, | |
271 | allowing the walker to determine whether all subtiles should be seeded or skipped. | |
272 | """ | |
272 | 273 | def __init__(self, task, worker_pool, handle_stale=False, handle_uncached=False, |
273 | 274 | work_on_metatiles=True, skip_geoms_for_last_levels=0, progress_logger=None, |
274 | 275 | seed_progress=None): |
282 | 283 | self.progress_logger = progress_logger |
283 | 284 | |
284 | 285 | num_seed_levels = len(task.levels) |
285 | self.report_till_level = task.levels[int(num_seed_levels * 0.8)] | |
286 | if num_seed_levels >= 4: | |
287 | self.report_till_level = task.levels[num_seed_levels-2] | |
288 | else: | |
289 | self.report_till_level = task.levels[num_seed_levels-1] | |
286 | 290 | meta_size = self.tile_mgr.meta_grid.meta_size if self.tile_mgr.meta_grid else (1, 1) |
287 | 291 | self.tiles_per_metatile = meta_size[0] * meta_size[1] |
288 | 292 | self.grid = MetaGrid(self.tile_mgr.grid, meta_size=meta_size, meta_buffer=0) |
289 | 293 | self.count = 0 |
290 | 294 | self.seed_progress = seed_progress or SeedProgress() |
295 | ||
296 | # It is possible that we 'walk' through the same tile multiple times | |
297 | # when seeding irregular tile grids[0]. limit_sub_bbox prevents us from | |
298 | # recursing into the same area multiple times, but a tile can still be | |
299 | # processed more than once. Locking prevents a tile from being seeded | |
300 | # multiple times, but we may still count the same tile more than once | |
301 | # (in dry-mode, or while the tile is in the process queue). | |
302 | ||
303 | # Tile counts can be off by 280% with sqrt2 grids. | |
304 | # We keep a small cache of already processed tiles to skip most duplicates. | |
305 | # A simple cache of 64 tile coordinates for each level already brings the | |
306 | # difference down to ~8%, which is good enough and faster than a more | |
307 | # sophisticated FIFO cache with O(1) lookup, or even caching all tiles. | |
308 | ||
309 | # [0] irregular tile grids: where one tile does not have exactly 4 subtiles | |
310 | # (typically when you use res_factor, or a custom res list). | |
311 | self.seeded_tiles = {l: deque(maxlen=64) for l in task.levels} | |
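A `deque` with `maxlen` is a cheap fixed-size FIFO: appending to a full deque silently drops the entry at the opposite end, so the per-level cache above never grows past 64 coordinates. A minimal sketch of that duplicate-skipping pattern (the `dedup_recent` helper is illustrative, not MapProxy API):

```python
from collections import deque

def dedup_recent(items, cache_size=64):
    """Yield items, skipping any seen within the last `cache_size` distinct items."""
    seen = deque(maxlen=cache_size)
    for item in items:
        if item in seen:
            continue           # recent duplicate: skip it
        seen.appendleft(item)  # a full deque drops its oldest entry automatically
        yield item

assert list(dedup_recent([1, 2, 1, 3, 2])) == [1, 2, 3]
```

Membership tests on a 64-item deque are O(n), but with n = 64 that is still cheaper in practice than a synchronized set or an O(1) FIFO cache, which is the trade-off the comment above describes.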
291 | 312 | |
292 | 313 | def walk(self): |
293 | 314 | assert self.handle_stale or self.handle_uncached |
329 | 350 | if current_level in levels: |
330 | 351 | levels = levels[1:] |
331 | 352 | process = True |
332 | current_level += 1 | |
333 | 353 | |
334 | 354 | for i, (subtile, sub_bbox, intersection) in enumerate(subtiles): |
335 | 355 | if subtile is None: # no intersection |
346 | 366 | if self.seed_progress.already_processed(): |
347 | 367 | self.seed_progress.step_forward() |
348 | 368 | else: |
349 | self._walk(sub_bbox, levels, current_level=current_level, | |
369 | self._walk(sub_bbox, levels, current_level=current_level+1, | |
350 | 370 | all_subtiles=all_subtiles) |
351 | 371 | |
352 | 372 | if not process: |
353 | 373 | continue |
374 | ||
375 | # check if subtile was already processed. see comment in __init__ | |
376 | if subtile in self.seeded_tiles[current_level]: | |
377 | continue | |
378 | self.seeded_tiles[current_level].appendleft(subtile) | |
354 | 379 | |
355 | 380 | if not self.work_on_metatiles: |
356 | 381 | # collect actual tiles |
434 | 459 | self.remove_timestamp = remove_timestamp |
435 | 460 | self.coverage = coverage |
436 | 461 | self.complete_extent = complete_extent |
462 | ||
463 | @property | |
464 | def id(self): | |
465 | return 'cleanup', self.md['name'], self.md['cache_name'], self.md['grid_name'] | |
437 | 466 | |
438 | 467 | def intersects(self, bbox): |
439 | 468 | if self.coverage.contains(bbox, self.grid.srs): return CONTAINS |
41 | 41 | dict.__setitem__(self, key, val) |
42 | 42 | dict.__setitem__(self, val, key) |
43 | 43 | |
44 | class ETA(object): | |
45 | def __init__(self): | |
46 | self.avgs = [] | |
47 | self.last_tick_start = time.time() | |
48 | self.progress = 0.0 | |
49 | self.ticks = 10000 | |
50 | self.tick_duration_sums = 0.0 | |
51 | self.tick_duration_divisor = 0.0 | |
52 | self.tick_count = 0 | |
53 | ||
54 | def update(self, progress): | |
55 | self.progress = progress | |
56 | missing_ticks = (self.progress * self.ticks) - self.tick_count | |
57 | if missing_ticks: | |
58 | tick_duration = (time.time() - self.last_tick_start) / missing_ticks | |
59 | ||
60 | while missing_ticks > 0: | |
61 | ||
62 | # reduce the influence of older measurements | |
63 | self.tick_duration_sums *= 0.999 | |
64 | self.tick_duration_divisor *= 0.999 | |
65 | ||
66 | self.tick_count += 1 | |
67 | ||
68 | self.tick_duration_sums += tick_duration | |
69 | self.tick_duration_divisor += 1 | |
70 | ||
71 | missing_ticks -= 1 | |
72 | ||
73 | self.last_tick_start = time.time() | |
74 | ||
75 | def eta_string(self): | |
76 | timestamp = self.eta() | |
77 | if timestamp is None: | |
78 | return 'N/A' | |
79 | try: | |
80 | return time.strftime('%Y-%m-%d-%H:%M:%S', time.localtime(timestamp)) | |
81 | except (ValueError, OSError): # OSError since Py 3.3 | |
82 | # raised when time is out of range (e.g. year >2038) | |
83 | return 'N/A' | |
84 | ||
85 | def eta(self): | |
86 | if not self.tick_count: return | |
87 | return (self.last_tick_start + | |
88 | ((self.tick_duration_sums/self.tick_duration_divisor) | |
89 | * (self.ticks - self.tick_count))) | |
90 | ||
91 | def __str__(self): | |
92 | return self.eta_string() | |
93 | ||
94 | 44 | class ProgressStore(object): |
95 | 45 | """ |
96 | 46 | Reads and stores seed progresses to a file. |
141 | 91 | if not out: |
142 | 92 | out = sys.stdout |
143 | 93 | self.out = out |
144 | self.lastlog = time.time() | |
94 | self._laststep = time.time() | |
95 | self._lastprogress = 0 | |
96 | ||
145 | 97 | self.verbose = verbose |
146 | 98 | self.silent = silent |
147 | 99 | self.current_task_id = None |
156 | 108 | def log_step(self, progress): |
157 | 109 | if not self.verbose: |
158 | 110 | return |
159 | if (self.lastlog + .1) < time.time(): | |
160 | # log progress at most every 100ms | |
161 | self.out.write('[%s] %6.2f%%\t%-20s ETA: %s\r' % ( | |
111 | if (self._laststep + .5) < time.time(): | |
112 | # log progress at most every 500ms | |
113 | self.out.write('[%s] %6.2f%%\t%-20s \r' % ( | |
162 | 114 | timestamp(), progress.progress*100, progress.progress_str, |
163 | progress.eta | |
164 | 115 | )) |
165 | 116 | self.out.flush() |
166 | self.lastlog = time.time() | |
117 | self._laststep = time.time() | |
167 | 118 | |
168 | 119 | def log_progress(self, progress, level, bbox, tiles): |
169 | if self.progress_store and self.current_task_id: | |
170 | self.progress_store.add(self.current_task_id, | |
171 | progress.current_progress_identifier()) | |
172 | self.progress_store.write() | |
120 | progress_interval = 1 | |
121 | if not self.verbose: | |
122 | progress_interval = 30 | |
123 | ||
124 | log_progress = False | |
125 | if progress.progress == 1.0 or (self._lastprogress + progress_interval) < time.time(): | |
126 | self._lastprogress = time.time() | |
127 | log_progress = True | |
128 | ||
129 | if log_progress: | |
130 | if self.progress_store and self.current_task_id: | |
131 | self.progress_store.add(self.current_task_id, | |
132 | progress.current_progress_identifier()) | |
133 | self.progress_store.write() | |
173 | 134 | 
174 | 135 | if self.silent:
175 | 136 | return
176 | self.out.write('[%s] %2s %6.2f%% %s (%d tiles) ETA: %s\n' % ( | |
177 | timestamp(), level, progress.progress*100, | |
178 | format_bbox(bbox), tiles, progress.eta)) | |
179 | self.out.flush() | |
137 | ||
138 | if log_progress: | |
139 | self.out.write('[%s] %2s %6.2f%% %s (%d tiles)\n' % ( | |
140 | timestamp(), level, progress.progress*100, | |
141 | format_bbox(bbox), tiles)) | |
142 | self.out.flush() | |
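The rewritten `log_progress` throttles both store writes and output with a timestamp check: a short interval for verbose runs, a longer one for quiet runs. The pattern in isolation (the `Throttled` class is a hypothetical helper, not MapProxy API):

```python
import time

class Throttled:
    """Call the wrapped callback at most once per `interval` seconds."""
    def __init__(self, callback, interval=1.0):
        self.callback = callback
        self.interval = interval
        self._last = 0.0

    def __call__(self, *args):
        now = time.time()
        if now - self._last >= self.interval:
            self._last = now
            self.callback(*args)

messages = []
log = Throttled(messages.append, interval=30.0)
log('first')   # passes: nothing was logged yet
log('second')  # suppressed: still inside the interval
assert messages == ['first']
```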
180 | 143 | |
181 | 144 | |
182 | 145 | def limit_sub_bbox(bbox, sub_bbox): |
0 | 0 | <?xml version="1.0"?> |
1 | <Capabilities xmlns="http://www.opengis.net/wmts/1.0" xmlns:ows="http://www.opengis.net/ows/1.1" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:gml="http://www.opengis.net/gml" xsi:schemaLocation="http://www.opengis.net/wmts/1.0 ../wmtsGetCapabilities_response.xsd" version="1.0.0"> | |
1 | <Capabilities xmlns="http://www.opengis.net/wmts/1.0" xmlns:ows="http://www.opengis.net/ows/1.1" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:gml="http://www.opengis.net/gml" xsi:schemaLocation="http://www.opengis.net/wmts/1.0 http://schemas.opengis.net/wmts/1.0/wmtsGetCapabilities_response.xsd" version="1.0.0"> | |
2 | 2 | <ows:ServiceIdentification> |
3 | 3 | <ows:Title>{{service.title}}</ows:Title> |
4 | 4 | <ows:Abstract>{{service.abstract}}</ows:Abstract> |
547 | 547 | if layer_task.exception is None: |
548 | 548 | layer, layer_img = layer_task.result |
549 | 549 | if layer_img is not None: |
550 | layer_merger.add(layer_img, layer=layer) | |
550 | layer_merger.add(layer_img, layer.coverage) | |
551 | 551 | else: |
552 | 552 | ex = layer_task.exception |
553 | 553 | async_pool.shutdown(True) |
565 | 565 | if layer_task.exception is None: |
566 | 566 | layer, layer_img = layer_task.result |
567 | 567 | if layer_img is not None: |
568 | layer_merger.add(layer_img, layer=layer) | |
568 | layer_merger.add(layer_img, layer.coverage) | |
569 | 569 | rendered += 1 |
570 | 570 | else: |
571 | 571 | layer_merger.cacheable = False |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | from mapproxy.source.wms import WMSSource | |
15 | from mapproxy.source.wms import WMSSource, WMSInfoSource | |
16 | 16 | |
17 | 17 | import logging |
18 | 18 | log = logging.getLogger('mapproxy.source.arcgis') |
20 | 20 | |
21 | 21 | class ArcGISSource(WMSSource): |
22 | 22 | def __init__(self, client, image_opts=None, coverage=None, |
23 | supported_srs=None, supported_formats=None): | |
24 | WMSSource.__init__(self, client, image_opts=image_opts, coverage=coverage, | |
25 | supported_srs=supported_srs, supported_formats=supported_formats) | |
23 | res_range=None, supported_srs=None, supported_formats=None): | |
24 | WMSSource.__init__(self, client, image_opts=image_opts, | |
25 | coverage=coverage, res_range=res_range, | |
26 | supported_srs=supported_srs, | |
27 | supported_formats=supported_formats) | |
28 | ||
29 | ||
30 | class ArcGISInfoSource(WMSInfoSource): | |
31 | def __init__(self, client): | |
32 | self.client = client | |
33 | ||
34 | def get_info(self, query): | |
35 | doc = self.client.get_info(query) | |
36 | return doc
14 | 14 | |
15 | 15 | from __future__ import print_function |
16 | 16 | |
17 | import re | |
17 | 18 | import threading |
18 | 19 | import sys |
19 | 20 | import cgi |
35 | 36 | else: |
36 | 37 | from http.server import HTTPServer as HTTPServer_, BaseHTTPRequestHandler |
37 | 38 | |
38 | class RequestsMissmatchError(AssertionError): | |
39 | class RequestsMismatchError(AssertionError): | |
39 | 40 | def __init__(self, assertions): |
40 | 41 | self.assertions = assertions |
41 | 42 | |
43 | 44 | assertions = [] |
44 | 45 | for assertion in self.assertions: |
45 | 46 | assertions.append(text_indent(str(assertion), ' ', ' - ')) |
46 | return 'requests missmatch:\n' + '\n'.join(assertions) | |
47 | return 'requests mismatch:\n' + '\n'.join(assertions) | |
47 | 48 | |
48 | 49 | class RequestError(str): |
49 | 50 | pass |
55 | 56 | text = first_indent + text |
56 | 57 | return text.replace('\n', '\n' + indent) |
57 | 58 | |
58 | class RequestMissmatch(object): | |
59 | class RequestMismatch(object): | |
59 | 60 | def __init__(self, msg, expected, actual): |
60 | 61 | self.msg = msg |
61 | 62 | self.expected = expected |
62 | 63 | self.actual = actual |
63 | 64 | |
64 | 65 | def __str__(self): |
65 | return ('requests missmatch, expected:\n' + | |
66 | return ('requests mismatch, expected:\n' + | |
66 | 67 | text_indent(str(self.expected), ' ') + |
67 | 68 | '\n got:\n' + text_indent(str(self.actual), ' ')) |
68 | 69 | |
161 | 162 | if 'method' in req: |
162 | 163 | if req['method'] != method: |
163 | 164 | self.server.assertions.append( |
164 | RequestMissmatch('unexpected method', req['method'], method) | |
165 | RequestMismatch('unexpected method', req['method'], method) | |
165 | 166 | ) |
166 | 167 | self.server.shutdown = True |
167 | 168 | if req.get('require_basic_auth', False): |
176 | 177 | for k, v in req['headers'].items(): |
177 | 178 | if k not in self.headers: |
178 | 179 | self.server.assertions.append( |
179 | RequestMissmatch('missing header', k, self.headers) | |
180 | RequestMismatch('missing header', k, self.headers) | |
180 | 181 | ) |
181 | 182 | elif self.headers[k] != v: |
182 | 183 | self.server.assertions.append( |
183 | RequestMissmatch('header missmatch', '%s: %s' % (k, v), self.headers) | |
184 | RequestMismatch('header mismatch', '%s: %s' % (k, v), self.headers) | |
184 | 185 | ) |
185 | 186 | if not query_comparator(req['path'], self.query_data): |
186 | 187 | self.server.assertions.append( |
187 | RequestMissmatch('requests differ', req['path'], self.query_data) | |
188 | RequestMismatch('requests differ', req['path'], self.query_data) | |
188 | 189 | ) |
189 | 190 | query_actual = set(query_to_dict(self.query_data).items()) |
190 | 191 | query_expected = set(query_to_dict(req['path']).items()) |
191 | 192 | self.server.assertions.append( |
192 | RequestMissmatch('requests params differ', query_expected - query_actual, query_actual - query_expected) | |
193 | RequestMismatch('requests params differ', query_expected - query_actual, query_actual - query_expected) | |
193 | 194 | ) |
194 | 195 | self.server.shutdown = True |
195 | 196 | if 'req_assert_function' in req: |
270 | 271 | |
271 | 272 | if not self._thread.sucess and value: |
272 | 273 | print('requests to mock httpd did not ' |
273 | 'match expectations:\n %s' % RequestsMissmatchError(self._thread.assertions)) | |
274 | 'match expectations:\n %s' % RequestsMismatchError(self._thread.assertions)) | |
274 | 275 | if value: |
275 | 276 | raise reraise((type, value, traceback)) |
276 | 277 | if not self._thread.sucess: |
277 | raise RequestsMissmatchError(self._thread.assertions) | |
278 | raise RequestsMismatchError(self._thread.assertions) | |
278 | 279 | |
279 | 280 | def wms_query_eq(expected, actual): |
280 | 281 | """ |
311 | 312 | |
312 | 313 | return True |
313 | 314 | |
315 | numbers_only = re.compile(r'^-?\d+\.\d+(,-?\d+\.\d+)*$') | |
316 | ||
314 | 317 | def query_eq(expected, actual): |
315 | 318 | """ |
316 | 319 | >>> query_eq('bAR=baz&foo=bizz', 'foO=bizz&bar=baz') |
321 | 324 | True |
322 | 325 | >>> query_eq('/1/2/3.png', '/1/2/0.png') |
323 | 326 | False |
324 | """ | |
325 | return (query_to_dict(expected) == query_to_dict(actual) and | |
326 | path_from_query(expected) == path_from_query(actual)) | |
327 | ||
328 | def assert_query_eq(expected, actual): | |
327 | >>> query_eq('/map?point=2.9999999999,1.00000000001', '/map?point=3.0,1.0') | |
328 | True | |
329 | """ | |
330 | ||
331 | if path_from_query(expected) != path_from_query(actual): | |
332 | return False | |
333 | ||
334 | expected = query_to_dict(expected) | |
335 | actual = query_to_dict(actual) | |
336 | ||
337 | if set(expected.keys()) != set(actual.keys()): | |
338 | return False | |
339 | ||
340 | for ke, ve in expected.items(): | |
341 | if numbers_only.match(ve): | |
342 | if not float_string_almost_eq(ve, actual[ke]): | |
343 | return False | |
344 | else: | |
345 | if ve != actual[ke]: | |
346 | return False | |
347 | ||
348 | return True | |
349 | ||
350 | def float_string_almost_eq(expected, actual): | |
351 | """ | |
352 | Compare two strings of comma-separated floats for approximate equality. | |
353 | Both strings must contain floats (not bare integers). | |
354 | ||
355 | >>> float_string_almost_eq('12345678900', '12345678901') | |
356 | False | |
357 | >>> float_string_almost_eq('12345678900.0', '12345678901.0') | |
358 | True | |
359 | ||
360 | >>> float_string_almost_eq('12345678900.0,-3.0', '12345678901.0,-2.9999999999') | |
361 | True | |
362 | """ | |
363 | if not numbers_only.match(expected) or not numbers_only.match(actual): | |
364 | return False | |
365 | ||
366 | expected_nums = [float(x) for x in expected.split(',')] | |
367 | actual_nums = [float(x) for x in actual.split(',')] | |
368 | ||
369 | if len(expected_nums) != len(actual_nums): | |
370 | return False | |
371 | ||
372 | for e, a in zip(expected_nums, actual_nums): | |
373 | if abs(e - a) > abs((e+a)/2)/10e9: | |
374 | return False | |
375 | ||
376 | return True | |
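`float_string_almost_eq` hand-rolls a relative tolerance (difference against the mean, scaled by 1e10). On Python 3.5+ the same per-component check can be expressed with `math.isclose`; a sketch (the function name and tolerance are assumptions, not the shipped code):

```python
import math

def floats_almost_eq(expected, actual, rel_tol=1e-9):
    """Compare two comma-separated float strings component-wise."""
    try:
        exp = [float(x) for x in expected.split(',')]
        act = [float(x) for x in actual.split(',')]
    except ValueError:
        return False  # not all components are numbers
    if len(exp) != len(act):
        return False
    return all(math.isclose(e, a, rel_tol=rel_tol) for e, a in zip(exp, act))

assert floats_almost_eq('12345678900.0,-3.0', '12345678901.0,-2.9999999999')
assert not floats_almost_eq('1.0', '2.0')
```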
377 | ||
378 | def assert_query_eq(expected, actual, fuzzy_number_compare=False): | |
329 | 379 | path_actual = path_from_query(actual) |
330 | 380 | path_expected = path_from_query(expected) |
331 | 381 | assert path_expected == path_actual, path_expected + '!=' + path_actual |
333 | 383 | query_actual = set(query_to_dict(actual).items()) |
334 | 384 | query_expected = set(query_to_dict(expected).items()) |
335 | 385 | |
336 | assert query_expected == query_actual, '%s != %s\t%s|%s' % ( | |
386 | if fuzzy_number_compare: | |
387 | equal = query_eq(expected, actual) | |
388 | else: | |
389 | equal = query_expected == query_actual | |
390 | assert equal, '%s != %s\t%s|%s' % ( | |
337 | 391 | expected, actual, query_expected - query_actual, query_actual - query_expected) |
338 | 392 | |
339 | 393 | def path_from_query(query): |
390 | 444 | yield |
391 | 445 | except: |
392 | 446 | if not t.sucess: |
393 | print(str(RequestsMissmatchError(t.assertions))) | |
447 | print(str(RequestsMismatchError(t.assertions))) | |
394 | 448 | raise |
395 | 449 | finally: |
396 | 450 | t.shutdown = True |
397 | 451 | t.join(1) |
398 | 452 | if not t.sucess: |
399 | raise RequestsMissmatchError(t.assertions) | |
453 | raise RequestsMismatchError(t.assertions) | |
400 | 454 | |
401 | 455 | @contextmanager |
402 | 456 | def mock_single_req_httpd(address, request_handler): |
406 | 460 | yield |
407 | 461 | except: |
408 | 462 | if not t.sucess: |
409 | print(str(RequestsMissmatchError(t.assertions))) | |
463 | print(str(RequestsMismatchError(t.assertions))) | |
410 | 464 | raise |
411 | 465 | finally: |
412 | 466 | t.shutdown = True |
413 | 467 | t.join(1) |
414 | 468 | if not t.sucess: |
415 | raise RequestsMissmatchError(t.assertions) | |
469 | raise RequestsMismatchError(t.assertions) | |
416 | 470 | |
417 | 471 | |
418 | 472 | def make_wsgi_env(query_string, extra_environ={}): |
0 | 0 | services: |
1 | 1 | tms: |
2 | wms: | |
3 | featureinfo_types: ['json'] | |
2 | 4 | |
3 | 5 | layers: |
4 | 6 | - name: app2_layer |
7 | 9 | - name: app2_with_layers_layer |
8 | 10 | title: ArcGIS Cache Layer |
9 | 11 | sources: [app2_with_layers_cache] |
12 | - name: app2_with_layers_fi_layer | |
13 | title: ArcGIS Cache Layer | |
14 | sources: [app2_with_layers_fi_cache] | |
10 | 15 | - name: app2_wrong_url_layer |
11 | 16 | title: ArcGIS Cache Layer |
12 | 17 | sources: [app2_wrong_url_cache] |
18 | 23 | app2_with_layers_cache: |
19 | 24 | grids: [GLOBAL_MERCATOR] |
20 | 25 | sources: [app2_with_layers_source] |
26 | app2_with_layers_fi_cache: | |
27 | grids: [GLOBAL_MERCATOR] | |
28 | sources: [app2_with_layers_fi_source] | |
21 | 29 | app2_wrong_url_cache: |
22 | 30 | grids: [GLOBAL_MERCATOR] |
23 | 31 | sources: [app2_wrong_url_source] |
31 | 39 | type: arcgis |
32 | 40 | req: |
33 | 41 | layers: show:0,1 |
34 | url: http://localhost:42423/arcgis/rest/services/ExampleLayer/ImageServer | |
42 | url: http://localhost:42423/arcgis/rest/services/ExampleLayer/MapServer | |
43 | app2_with_layers_fi_source: | |
44 | type: arcgis | |
45 | opts: | |
46 | featureinfo: true | |
47 | featureinfo_tolerance: 10 | |
48 | featureinfo_return_geometries: true | |
49 | supported_srs: ['EPSG:3857'] | |
50 | req: | |
51 | layers: show:1,2,3 | |
52 | url: http://localhost:42423/arcgis/rest/services/ExampleLayer/MapServer | |
35 | 53 | app2_wrong_url_source: |
36 | 54 | type: arcgis |
37 | 55 | req: |
Binary diff not shown
0 | globals: | |
1 | cache: | |
2 | base_dir: cache_data/ | |
3 | ||
4 | services: | |
5 | tms: | |
6 | wms: | |
7 | md: | |
8 | title: MapProxy test fixture | |
9 | ||
10 | layers: | |
11 | - name: gpkg | |
12 | title: TMS Cache Layer | |
13 | sources: [gpkg_cache, new_gpkg, new_gpkg_table] | |
14 | - name: gpkg_new | |
15 | title: TMS Cache Layer | |
16 | sources: [new_gpkg] | |
17 | ||
18 | caches: | |
19 | gpkg_cache: | |
20 | grids: [cache_grid] | |
21 | cache: | |
22 | type: geopackage | |
23 | filename: ./cache.gpkg | |
24 | table_name: cache | |
25 | tile_lock_dir: ./testlockdir | |
26 | sources: [tms] | |
27 | new_gpkg: | |
28 | grids: [new_grid] | |
29 | sources: [] | |
30 | cache: | |
31 | type: geopackage | |
32 | filename: ./cache_new.gpkg | |
33 | table_name: cache | |
34 | tile_lock_dir: ./testlockdir | |
35 | new_gpkg_table: | |
36 | grids: [cache_grid] | |
37 | cache: | |
38 | type: geopackage | |
39 | filename: ./cache.gpkg | |
40 | table_name: new_cache | |
41 | tile_lock_dir: ./testlockdir | |
42 | sources: [tms] | |
43 | ||
44 | grids: | |
45 | cache_grid: | |
46 | srs: EPSG:900913 | |
47 | new_grid: | |
48 | srs: EPSG:4326 | |
49 | ||
50 | ||
51 | sources: | |
52 | tms: | |
53 | type: tile | |
54 | url: http://localhost:42423/tiles/%(tc_path)s.png | |
55 |
0 | globals: | |
1 | cache: | |
2 | s3: | |
3 | bucket_name: default_bucket | |
4 | ||
5 | services: | |
6 | tms: | |
7 | wms: | |
8 | md: | |
9 | title: MapProxy S3 | |
10 | ||
11 | layers: | |
12 | - name: default | |
13 | title: Default | |
14 | sources: [default_cache] | |
15 | - name: quadkey | |
16 | title: Quadkey | |
17 | sources: [quadkey_cache] | |
18 | - name: reverse | |
19 | title: Reverse | |
20 | sources: [reverse_cache] | |
21 | ||
22 | caches: | |
23 | default_cache: | |
24 | grids: [webmercator] | |
25 | cache: | |
26 | type: s3 | |
27 | sources: [tms] | |
28 | ||
29 | quadkey_cache: | |
30 | grids: [webmercator] | |
31 | cache: | |
32 | type: s3 | |
33 | bucket_name: tiles | |
34 | directory_layout: quadkey | |
35 | directory: quadkeytiles | |
36 | sources: [tms] | |
37 | ||
38 | reverse_cache: | |
39 | grids: [webmercator] | |
40 | cache: | |
41 | type: s3 | |
42 | bucket_name: tiles | |
43 | directory_layout: reverse_tms | |
44 | directory: reversetiles | |
45 | sources: [tms] | |
46 | ||
47 | grids: | |
48 | webmercator: | |
49 | name: WebMerc | |
50 | base: GLOBAL_WEBMERCATOR | |
51 | ||
52 | ||
53 | sources: | |
54 | tms: | |
55 | type: tile | |
56 | url: http://localhost:42423/tiles/%(tc_path)s.png | |
57 |
25 | 25 | fax: +49(0)441-9392774-9 |
26 | 26 | email: info@omniscale.de |
27 | 27 | access_constraints: |
28 | This service is intended for private and evaluation use only. | |
29 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
30 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
28 | Here be dragons. | |
31 | 29 | |
32 | 30 | layers: |
33 | 31 | - name: wms_cache |
24 | 24 | fax: +49(0)441-9392774-9 |
25 | 25 | email: info@omniscale.de |
26 | 26 | access_constraints: |
27 | This service is intended for private and evaluation use only. | |
28 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
29 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
27 | Here be dragons. | |
30 | 28 | |
31 | 29 | layers: |
32 | 30 | - name: jpeg_cache_tiff_source |
40 | 40 | fax: +49(0)441-9392774-9 |
41 | 41 | email: info@omniscale.de |
42 | 42 | access_constraints: |
43 | This service is intended for private and evaluation use only. | |
44 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
45 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
43 | Here be dragons. | |
46 | 44 | inspire_md: |
47 | 45 | type: linked |
48 | 46 | languages: |
40 | 40 | fax: +49(0)441-9392774-9 |
41 | 41 | email: info@omniscale.de |
42 | 42 | access_constraints: |
43 | This service is intended for private and evaluation use only. | |
44 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
45 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
43 | Here be dragons. | |
46 | 44 | keyword_list: |
47 | 45 | - vocabulary: GEMET |
48 | 46 | keywords: [Orthoimagery] |
40 | 40 | fax: +49(0)441-9392774-9 |
41 | 41 | email: info@omniscale.de |
42 | 42 | access_constraints: |
43 | This service is intended for private and evaluation use only. | |
44 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
45 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
43 | Here be dragons. | |
46 | 44 | |
47 | 45 | layers: |
48 | 46 | - name: direct |
25 | 25 | fax: +49(0)441-9392774-9 |
26 | 26 | email: info@omniscale.de |
27 | 27 | access_constraints: |
28 | This service is intended for private and evaluation use only. | |
29 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
30 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
28 | Here be dragons. | |
31 | 29 | |
32 | 30 | layers: |
33 | 31 | - name: wms_legend |
25 | 25 | fax: +49(0)441-9392774-9 |
26 | 26 | email: info@omniscale.de |
27 | 27 | access_constraints: |
28 | This service is intended for private and evaluation use only. | |
29 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
30 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
28 | Here be dragons. | |
31 | 29 | |
32 | 30 | layers: |
33 | 31 | - name: mixed_mode |
34 | title: cache with PNG and JPEG | |
32 | title: cache with PNG and JPEG | |
35 | 33 | sources: [mixed_cache] |
36 | 34 | |
37 | 35 | caches: |
25 | 25 | fax: +49(0)441-9392774-9 |
26 | 26 | email: info@omniscale.de |
27 | 27 | access_constraints: |
28 | This service is intended for private and evaluation use only. | |
29 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
30 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
28 | Here be dragons. | |
31 | 29 | |
32 | 30 | layers: |
33 | 31 | - name: res |
25 | 25 | fax: +49(0)441-9392774-9 |
26 | 26 | email: info@omniscale.de |
27 | 27 | access_constraints: |
28 | This service is intended for private and evaluation use only. | |
29 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
30 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
28 | Here be dragons. | |
31 | 29 | |
32 | 30 | layers: |
33 | 31 | - name: wms_cache |
27 | 27 | <ContactElectronicMailAddress>osm@omniscale.de</ContactElectronicMailAddress> |
28 | 28 | </ContactInformation> |
29 | 29 | <Fees>none</Fees> |
30 | <AccessConstraints>This service is intended for private and evaluation use only. The data is licensed as Creative Commons Attribution-Share Alike 2.0 (http://creativecommons.org/licenses/by-sa/2.0/)</AccessConstraints> | |
30 | <AccessConstraints>Here be dragons.</AccessConstraints> | |
31 | 31 | </Service> |
32 | 32 | <Capability> |
33 | 33 | <Request> |
27 | 27 | <ContactElectronicMailAddress>info@omniscale.de</ContactElectronicMailAddress> |
28 | 28 | </ContactInformation> |
29 | 29 | <Fees>None</Fees> |
30 | <AccessConstraints>This service is intended for private and evaluation use only. The data is licensed as Creative Commons Attribution-Share Alike 2.0 (http://creativecommons.org/licenses/by-sa/2.0/)</AccessConstraints> | |
30 | <AccessConstraints>Here be dragons.</AccessConstraints> | |
31 | 31 | </Service> |
32 | 32 | <Capability> |
33 | 33 | <Request> |
23 | 23 | <ContactElectronicMailAddress>info@omniscale.de</ContactElectronicMailAddress> |
24 | 24 | </ContactInformation> |
25 | 25 | <Fees>None</Fees> |
26 | <AccessConstraints>This service is intended for private and evaluation use only. The data is licensed as Creative Commons Attribution-Share Alike 2.0 (http://creativecommons.org/licenses/by-sa/2.0/)</AccessConstraints> | |
26 | <AccessConstraints>Here be dragons.</AccessConstraints> | |
27 | 27 | </Service> |
28 | 28 | <Capability> |
29 | 29 | <Request> |
22 | 22 | fax: +49(0)441-9392774-9 |
23 | 23 | email: info@omniscale.de |
24 | 24 | access_constraints: |
25 | This service is intended for private and evaluation use only. | |
26 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
27 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
25 | Here be dragons. | |
28 | 26 | |
29 | 27 | layers: |
30 | 28 | - name: direct |
27 | 27 | fax: +49(0)441-9392774-9 |
28 | 28 | email: info@omniscale.de |
29 | 29 | access_constraints: |
30 | This service is intended for private and evaluation use only. | |
31 | The data is licensed as Creative Commons Attribution-Share Alike 2.0 | |
32 | (http://creativecommons.org/licenses/by-sa/2.0/) | |
30 | Here be dragons. | |
33 | 31 | |
34 | 32 | layers: |
35 | 33 | - name: wms_cache |
15 | 15 | from __future__ import with_statement, division |
16 | 16 | |
17 | 17 | from io import BytesIO |
18 | from mapproxy.request.arcgis import ArcGISRequest | |
18 | from mapproxy.request.wms import WMS111FeatureInfoRequest | |
19 | 19 | from mapproxy.test.image import is_png, create_tmp_image |
20 | 20 | from mapproxy.test.http import mock_httpd |
21 | 21 | from mapproxy.test.system import module_setup, module_teardown, SystemTest |
31 | 31 | |
32 | 32 | transp = create_tmp_image((512, 512), mode='RGBA', color=(0, 0, 0, 0)) |
33 | 33 | |
34 | ||
34 | 35 | class TestArcgisSource(SystemTest): |
35 | 36 | config = test_config |
36 | 37 | def setup(self): |
37 | 38 | SystemTest.setup(self) |
39 | self.common_fi_req = WMS111FeatureInfoRequest(url='/service?', | |
40 | param=dict(x='10', y='20', width='200', height='200', layers='app2_with_layers_fi_layer', | |
41 | format='image/png', query_layers='app2_with_layers_fi_layer', styles='', | |
42 | bbox='1000,400,2000,1400', srs='EPSG:3857', info_format='application/json')) | |
38 | 43 | |
39 | 44 | def test_get_tile(self): |
40 | expected_req = [({'path': '/arcgis/rest/services/ExampleLayer/ImageServer/export?f=image&format=png&imageSR=900913&bboxSR=900913&bbox=-20037508.342789244,-20037508.342789244,20037508.342789244,20037508.342789244&size=512,512'}, | |
45 | expected_req = [({'path': '/arcgis/rest/services/ExampleLayer/ImageServer/exportImage?f=image&format=png&imageSR=900913&bboxSR=900913&bbox=-20037508.342789244,-20037508.342789244,20037508.342789244,20037508.342789244&size=512,512'}, | |
41 | 46 | {'body': transp, 'headers': {'content-type': 'image/png'}}), |
42 | 47 | ] |
43 | 48 | |
49 | 54 | assert is_png(data) |
50 | 55 | |
51 | 56 | def test_get_tile_with_layer(self): |
52 | expected_req = [({'path': '/arcgis/rest/services/ExampleLayer/ImageServer/export?f=image&format=png&layers=show:0,1&imageSR=900913&bboxSR=900913&bbox=-20037508.342789244,-20037508.342789244,20037508.342789244,20037508.342789244&size=512,512'}, | |
57 | expected_req = [({'path': '/arcgis/rest/services/ExampleLayer/MapServer/export?f=image&format=png&layers=show:0,1&imageSR=900913&bboxSR=900913&bbox=-20037508.342789244,-20037508.342789244,20037508.342789244,20037508.342789244&size=512,512'}, | |
53 | 58 | {'body': transp, 'headers': {'content-type': 'image/png'}}), |
54 | 59 | ] |
55 | 60 | |
61 | 66 | assert is_png(data) |
62 | 67 | |
63 | 68 | def test_get_tile_from_missing_arcgis_layer(self): |
64 | expected_req = [({'path': '/arcgis/rest/services/NonExistentLayer/ImageServer/export?f=image&format=png&imageSR=900913&bboxSR=900913&bbox=-20037508.342789244,-20037508.342789244,20037508.342789244,20037508.342789244&size=512,512'}, | |
69 | expected_req = [({'path': '/arcgis/rest/services/NonExistentLayer/ImageServer/exportImage?f=image&format=png&imageSR=900913&bboxSR=900913&bbox=-20037508.342789244,-20037508.342789244,20037508.342789244,20037508.342789244&size=512,512'}, | |
65 | 70 | {'body': b'', 'status': 400}), |
66 | 71 | ] |
67 | 72 | |
68 | 73 | with mock_httpd(('localhost', 42423), expected_req, bbox_aware_query_comparator=True): |
69 | 74 | resp = self.app.get('/tms/1.0.0/app2_wrong_url_layer/0/0/1.png', status=500) |
70 | 75 | eq_(resp.status_code, 500) |
76 | ||
77 | def test_identify(self): | |
78 | expected_req = [( | |
79 | {'path': '/arcgis/rest/services/ExampleLayer/MapServer/identify?f=json&' | |
80 | 'geometry=1050.000000,1300.000000&returnGeometry=true&imageDisplay=200,200,96' | |
81 | '&mapExtent=1000.0,400.0,2000.0,1400.0&layers=show:1,2,3' | |
82 | '&tolerance=10&geometryType=esriGeometryPoint&sr=3857' | |
83 | }, | |
84 | {'body': b'{"results": []}', 'headers': {'content-type': 'application/json'}}), | |
85 | ] | |
86 | ||
87 | with mock_httpd(('localhost', 42423), expected_req, bbox_aware_query_comparator=True): | |
88 | resp = self.app.get(self.common_fi_req) | |
89 | eq_(resp.content_type, 'application/json') | |
90 | eq_(resp.content_length, len(resp.body)) | |
91 | eq_(resp.body, b'{"results": []}') | |
92 | ||
93 | ||
94 | def test_transformed_identify(self): | |
95 | expected_req = [( | |
96 | {'path': '/arcgis/rest/services/ExampleLayer/MapServer/identify?f=json&' | |
97 | 'geometry=573295.377585,6927820.884193&returnGeometry=true&imageDisplay=200,321,96' | |
98 | '&mapExtent=556597.453966,6446275.84102,890555.926346,6982997.92039&layers=show:1,2,3' | |
99 | '&tolerance=10&geometryType=esriGeometryPoint&sr=3857' | |
100 | }, | |
101 | {'body': b'{"results": []}', 'headers': {'content-type': 'application/json'}}), | |
102 | ] | |
103 | ||
104 | with mock_httpd(('localhost', 42423), expected_req): | |
105 | self.common_fi_req.params.bbox = '5,50,8,53' | |
106 | self.common_fi_req.params.srs = 'EPSG:4326' | |
107 | resp = self.app.get(self.common_fi_req) | |
108 | eq_(resp.content_type, 'application/json') | |
109 | eq_(resp.content_length, len(resp.body)) | |
110 | eq_(resp.body, b'{"results": []}') |
263 | 263 | return { |
264 | 264 | 'authorized': 'partial', |
265 | 265 | 'layers': { |
266 | 'layer1b': {'featureinfo': True, 'limited_to': {'srs': 'EPSG:4326', 'geometry': [-40.0, -40.0, 0.0, 0.0]}}, | |
266 | 'layer1b': {'featureinfo': True, 'limited_to': {'srs': 'EPSG:4326', 'geometry': [-80.0, -40.0, 0.0, -10.0]}}, | |
267 | 267 | } |
268 | 268 | } |
269 | 269 |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2011 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement, division | |
16 | ||
17 | import os | |
18 | import shutil | |
19 | ||
20 | from io import BytesIO | |
21 | ||
22 | from mapproxy.request.wms import WMS111MapRequest | |
23 | from mapproxy.test.http import MockServ | |
24 | from mapproxy.test.image import is_png, create_tmp_image | |
25 | from mapproxy.test.system import prepare_env, create_app, module_teardown, SystemTest | |
26 | from mapproxy.cache.geopackage import GeopackageCache | |
27 | from mapproxy.grid import TileGrid | |
28 | from nose.tools import eq_ | |
29 | import sqlite3 | |
30 | ||
31 | test_config = {} | |
32 | ||
33 | ||
34 | def setup_module(): | |
35 | prepare_env(test_config, 'cache_geopackage.yaml') | |
36 | ||
37 | shutil.copy(os.path.join(test_config['fixture_dir'], 'cache.gpkg'), | |
38 | test_config['base_dir']) | |
39 | create_app(test_config) | |
40 | ||
41 | ||
42 | def teardown_module(): | |
43 | module_teardown(test_config) | |
44 | ||
45 | ||
46 | class TestGeopackageCache(SystemTest): | |
47 | config = test_config | |
48 | table_name = 'cache' | |
49 | ||
50 | def setup(self): | |
51 | SystemTest.setup(self) | |
52 | self.common_map_req = WMS111MapRequest(url='/service?', | |
53 | param=dict(service='WMS', | |
54 | version='1.1.1', bbox='-180,-80,0,0', | |
55 | width='200', height='200', | |
56 | layers='gpkg', srs='EPSG:4326', | |
57 | format='image/png', | |
58 | styles='', request='GetMap')) | |
59 | ||
60 | def test_get_map_cached(self): | |
61 | resp = self.app.get(self.common_map_req) | |
62 | eq_(resp.content_type, 'image/png') | |
63 | data = BytesIO(resp.body) | |
64 | assert is_png(data) | |
65 | ||
66 | def test_get_map_uncached(self): | |
67 | assert os.path.exists(os.path.join(test_config['base_dir'], 'cache.gpkg')) # already created on startup | |
68 | ||
69 | self.common_map_req.params.bbox = '-180,0,0,80' | |
70 | serv = MockServ(port=42423) | |
71 | serv.expects('/tiles/01/000/000/000/000/000/001.png') | |
72 | serv.returns(create_tmp_image((256, 256))) | |
73 | with serv: | |
74 | resp = self.app.get(self.common_map_req) | |
75 | eq_(resp.content_type, 'image/png') | |
76 | data = BytesIO(resp.body) | |
77 | assert is_png(data) | |
78 | ||
79 | # now cached | |
80 | resp = self.app.get(self.common_map_req) | |
81 | eq_(resp.content_type, 'image/png') | |
82 | data = BytesIO(resp.body) | |
83 | assert is_png(data) | |
84 | ||
85 | def test_bad_config_geopackage_no_gpkg_contents(self): | |
86 | gpkg_file = os.path.join(test_config['base_dir'], 'cache.gpkg') | |
87 | table_name = 'no_gpkg_contents' | |
88 | ||
89 | with sqlite3.connect(gpkg_file) as db: | |
90 | cur = db.execute('''SELECT name FROM sqlite_master WHERE type='table' AND name=?''', | |
91 | (table_name,)) | |
92 | content = cur.fetchone() | |
93 | assert content[0] == table_name | |
94 | ||
95 | with sqlite3.connect(gpkg_file) as db: | |
96 | cur = db.execute('''SELECT table_name FROM gpkg_contents WHERE table_name=?''', | |
97 | (table_name,)) | |
98 | content = cur.fetchone() | |
99 | assert not content | |
100 | ||
101 | GeopackageCache(gpkg_file, TileGrid(srs=4326), table_name=table_name) | |
102 | ||
103 | with sqlite3.connect(gpkg_file) as db: | |
104 | cur = db.execute('''SELECT table_name FROM gpkg_contents WHERE table_name=?''', | |
105 | (table_name,)) | |
106 | content = cur.fetchone() | |
107 | assert content[0] == table_name | |
108 | ||
109 | def test_bad_config_geopackage_no_spatial_ref_sys(self): | |
110 | gpkg_file = os.path.join(test_config['base_dir'], 'cache.gpkg') | |
111 | organization_coordsys_id = 3785 | |
112 | table_name = 'no_gpkg_spatial_ref_sys' | 
113 | ||
114 | with sqlite3.connect(gpkg_file) as db: | |
115 | cur = db.execute('''SELECT organization_coordsys_id FROM gpkg_spatial_ref_sys WHERE organization_coordsys_id=?''', | |
116 | (organization_coordsys_id,)) | |
117 | content = cur.fetchone() | |
118 | assert not content | |
119 | ||
120 | GeopackageCache(gpkg_file, TileGrid(srs=3785), table_name=table_name) | |
121 | ||
122 | with sqlite3.connect(gpkg_file) as db: | |
123 | cur = db.execute( | |
124 | '''SELECT organization_coordsys_id FROM gpkg_spatial_ref_sys WHERE organization_coordsys_id=?''', | |
125 | (organization_coordsys_id,)) | |
126 | content = cur.fetchone() | |
127 | assert content[0] == organization_coordsys_id |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2016 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement, division | |
16 | ||
17 | from io import BytesIO | |
18 | ||
19 | from mapproxy.request.wms import WMS111MapRequest | |
20 | from mapproxy.test.image import is_png, create_tmp_image | |
21 | from mapproxy.test.system import prepare_env, create_app, module_teardown, SystemTest | |
22 | ||
23 | from nose.tools import eq_ | |
24 | from nose.plugins.skip import SkipTest | |
25 | ||
26 | try: | |
27 | import boto3 | |
28 | from moto import mock_s3 | |
29 | except ImportError: | |
30 | boto3 = None | |
31 | mock_s3 = None | |
32 | ||
33 | ||
34 | test_config = {} | |
35 | ||
36 | _mock = None | |
37 | ||
38 | def setup_module(): | |
39 | if not mock_s3 or not boto3: | |
40 | raise SkipTest("boto3 and moto required for S3 tests") | |
41 | ||
42 | global _mock | |
43 | _mock = mock_s3() | |
44 | _mock.start() | |
45 | ||
46 | boto3.client("s3").create_bucket(Bucket="default_bucket") | |
47 | boto3.client("s3").create_bucket(Bucket="tiles") | |
48 | boto3.client("s3").create_bucket(Bucket="reversetiles") | |
49 | ||
50 | prepare_env(test_config, 'cache_s3.yaml') | |
51 | create_app(test_config) | |
52 | ||
53 | def teardown_module(): | |
54 | module_teardown(test_config) | |
55 | _mock.stop() | |
56 | ||
57 | class TestS3Cache(SystemTest): | |
58 | config = test_config | |
59 | table_name = 'cache' | |
60 | ||
61 | def setup(self): | |
62 | SystemTest.setup(self) | |
63 | self.common_map_req = WMS111MapRequest(url='/service?', | |
64 | param=dict(service='WMS', | |
65 | version='1.1.1', bbox='-150,-40,-140,-30', | |
66 | width='100', height='100', | |
67 | layers='default', srs='EPSG:4326', | |
68 | format='image/png', | |
69 | styles='', request='GetMap')) | |
70 | ||
71 | def test_get_map_cached(self): | |
72 | # mock_s3 interferes with MockServ, use boto to manually upload tile | |
73 | tile = create_tmp_image((256, 256)) | |
74 | boto3.client("s3").upload_fileobj( | |
75 | BytesIO(tile), | |
76 | Bucket='default_bucket', | |
77 | Key='default_cache/WebMerc/4/1/9.png', | |
78 | ) | |
79 | ||
80 | resp = self.app.get(self.common_map_req) | |
81 | eq_(resp.content_type, 'image/png') | |
82 | data = BytesIO(resp.body) | |
83 | assert is_png(data) | |
84 | ||
85 | ||
86 | def test_get_map_cached_quadkey(self): | |
87 | # mock_s3 interferes with MockServ, use boto to manually upload tile | |
88 | tile = create_tmp_image((256, 256)) | |
89 | boto3.client("s3").upload_fileobj( | |
90 | BytesIO(tile), | |
91 | Bucket='tiles', | |
92 | Key='quadkeytiles/2003.png', | |
93 | ) | |
94 | ||
95 | self.common_map_req.params.layers = 'quadkey' | |
96 | resp = self.app.get(self.common_map_req) | |
97 | eq_(resp.content_type, 'image/png') | |
98 | data = BytesIO(resp.body) | |
99 | assert is_png(data) | |
100 | ||
101 | def test_get_map_cached_reverse_tms(self): | |
102 | # mock_s3 interferes with MockServ, use boto to manually upload tile | |
103 | tile = create_tmp_image((256, 256)) | |
104 | boto3.client("s3").upload_fileobj( | |
105 | BytesIO(tile), | |
106 | Bucket='tiles', | |
107 | Key='reversetiles/9/1/4.png', | |
108 | ) | |
109 | ||
110 | self.common_map_req.params.layers = 'reverse' | |
111 | resp = self.app.get(self.common_map_req) | |
112 | eq_(resp.content_type, 'image/png') | |
113 | data = BytesIO(resp.body) | |
114 | assert is_png(data) |
87 | 87 | assert 'Last-modified' not in resp.headers |
88 | 88 | else: |
89 | 89 | eq_(resp.headers['Last-modified'], format_httpdate(timestamp)) |
90 | eq_(resp.headers['Cache-control'], 'max-age=%d public' % max_age) | |
90 | eq_(resp.headers['Cache-control'], 'public, max-age=%d, s-maxage=%d' % (max_age, max_age)) | |
91 | 91 | |
92 | 92 | def test_get_cached_tile(self): |
93 | 93 | etag, max_age = self._update_timestamp() |
70 | 70 | |
71 | 71 | def test_tms_capabilities(self): |
72 | 72 | resp = self.app.get('/tms/1.0.0/') |
73 | assert 'http://localhost/tms/1.0.0/multi_cache/wmts_incompatible_grid' in resp | |
74 | assert 'http://localhost/tms/1.0.0/multi_cache/GLOBAL_WEBMERCATOR' in resp | |
75 | assert 'http://localhost/tms/1.0.0/multi_cache/InspireCrs84Quad' in resp | |
76 | assert 'http://localhost/tms/1.0.0/multi_cache/gk3' in resp | |
77 | assert 'http://localhost/tms/1.0.0/cache/utm32' in resp | |
73 | assert 'http://localhost/tms/1.0.0/multi_cache/EPSG25832' in resp | |
74 | assert 'http://localhost/tms/1.0.0/multi_cache/EPSG3857' in resp | |
75 | assert 'http://localhost/tms/1.0.0/multi_cache/CRS84' in resp | |
76 | assert 'http://localhost/tms/1.0.0/multi_cache/EPSG31467' in resp | |
77 | assert 'http://localhost/tms/1.0.0/cache/EPSG25832' in resp | |
78 | 78 | xml = resp.lxml |
79 | 79 | assert xml.xpath('count(//TileMap)') == 5 |
80 | 80 |
178 | 178 | def _check_cache_control_headers(self, resp, etag, max_age): |
179 | 179 | eq_(resp.headers['ETag'], etag) |
180 | 180 | eq_(resp.headers['Last-modified'], 'Fri, 13 Feb 2009 23:31:30 GMT') |
181 | eq_(resp.headers['Cache-control'], 'max-age=%d public' % max_age) | |
181 | eq_(resp.headers['Cache-control'], 'public, max-age=%d, s-maxage=%d' % (max_age, max_age)) | |
182 | 182 | |
183 | 183 | def test_get_cached_tile(self): |
184 | 184 | etag, max_age = self._update_timestamp() |
438 | 438 | # broken bbox for the requested srs |
439 | 439 | url = """/service?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&BBOX=-72988843.697212,-255661507.634227,142741550.188860,255661507.634227&SRS=EPSG:25833&WIDTH=164&HEIGHT=388&LAYERS=wms_cache_100&STYLES=&FORMAT=image/png&TRANSPARENT=TRUE""" |
440 | 440 | resp = self.app.get(url) |
441 | is_111_exception(resp.lxml, 'Request too large or invalid BBOX.') | |
441 | # result depends on proj version | |
442 | is_111_exception(resp.lxml, re_msg='Request too large or invalid BBOX.|Could not transform BBOX: Invalid result.') | |
442 | 443 | |
443 | 444 | def test_get_map_broken_bbox(self): |
444 | 445 | url = """/service?VERSION=1.1.11&REQUEST=GetMap&SRS=EPSG:31468&BBOX=-10000855.0573254,2847125.18913603,-9329367.42767611,4239924.78564583&WIDTH=130&HEIGHT=62&LAYERS=wms_cache&STYLES=&FORMAT=image/png&TRANSPARENT=TRUE""" |
528 | 529 | def test_get_featureinfo_transformed(self): |
529 | 530 | expected_req = ({'path': r'/service?LAYERs=foo,bar&SERVICE=WMS&FORMAT=image%2Fpng' |
530 | 531 | '&REQUEST=GetFeatureInfo&HEIGHT=200&SRS=EPSG%3A900913' |
531 | '&BBOX=5197367.93088,5312902.73895,5311885.44223,5434731.78213' | |
532 | '&BBOX=1172272.30156,7196018.03449,1189711.04571,7213496.99738' | |
532 | 533 | '&styles=&VERSION=1.1.1&feature_count=100' |
533 | '&WIDTH=200&QUERY_LAYERS=foo,bar&X=14&Y=78'}, | |
534 | '&WIDTH=200&QUERY_LAYERS=foo,bar&X=14&Y=20'}, | |
534 | 535 | {'body': b'info', 'headers': {'content-type': 'text/plain'}}) |
535 | 536 | |
536 | 537 | # our fi point at x=10,y=20
537 | p_25832 = (3570269+10*(3643458 - 3570269)/200, 5540889+20*(5614078 - 5540889)/200) | |
538 | # the transformed fi point at x=10,y=22 | |
539 | p_900913 = (5197367.93088+14*(5311885.44223 - 5197367.93088)/200, | |
540 | 5312902.73895+78*(5434731.78213 - 5312902.73895)/200) | |
538 | p_25832 = (600000+10*(610000 - 600000)/200, 6010000-20*(6010000 - 6000000)/200) | |
539 | # the transformed fi point at x=14,y=20 | |
540 | p_900913 = (1172272.30156+14*(1189711.04571-1172272.30156)/200, | |
541 | 7213496.99738-20*(7213496.99738 - 7196018.03449)/200) | |
541 | 542 | |
542 | 543 | # are they the same? |
543 | # check with tolerance: pixel resolution is ~570 and x/y position is rounded to pizel | |
544 | assert abs(SRS(25832).transform_to(SRS(900913), p_25832)[0] - p_900913[0]) < 570/2 | |
545 | assert abs(SRS(25832).transform_to(SRS(900913), p_25832)[1] - p_900913[1]) < 570/2 | |
544 | # check with tolerance: pixel resolution is ~50 and x/y position is rounded to pixel | 
545 | assert abs(SRS(25832).transform_to(SRS(900913), p_25832)[0] - p_900913[0]) < 50 | |
546 | assert abs(SRS(25832).transform_to(SRS(900913), p_25832)[1] - p_900913[1]) < 50 | |
546 | 547 | |
547 | 548 | with mock_httpd(('localhost', 42423), [expected_req], bbox_aware_query_comparator=True): |
548 | self.common_fi_req.params['bbox'] = '3570269,5540889,3643458,5614078' | |
549 | self.common_fi_req.params['bbox'] = '600000,6000000,610000,6010000' | |
549 | 550 | self.common_fi_req.params['srs'] = 'EPSG:25832' |
550 | 551 | self.common_fi_req.params.pos = 10, 20 |
551 | 552 | self.common_fi_req.params['feature_count'] = 100 |
14 | 14 | |
15 | 15 | import requests |
16 | 16 | from mapproxy.test.http import ( |
17 | MockServ, RequestsMissmatchError, mock_httpd, | |
18 | basic_auth_value, | |
17 | MockServ, RequestsMismatchError, mock_httpd, | |
18 | basic_auth_value, query_eq, | |
19 | 19 | ) |
20 | 20 | |
21 | 21 | from nose.tools import eq_ |
47 | 47 | try: |
48 | 48 | with serv: |
49 | 49 | requests.get('http://localhost:%d/test' % serv.port) |
50 | except RequestsMissmatchError as ex: | |
50 | except RequestsMismatchError as ex: | |
51 | 51 | assert ex.assertions[0].expected == 'Accept: Coffee' |
52 | 52 | |
53 | 53 | def test_expects_post(self): |
64 | 64 | try: |
65 | 65 | with serv: |
66 | 66 | requests.get('http://localhost:%d/test' % serv.port) |
67 | except RequestsMissmatchError as ex: | |
67 | except RequestsMismatchError as ex: | |
68 | 68 | assert ex.assertions[0].expected == 'POST' |
69 | 69 | assert ex.assertions[0].actual == 'GET' |
70 | 70 | else: |
136 | 136 | with serv: |
137 | 137 | resp = requests.get('http://localhost:%d/test1' % serv.port) |
138 | 138 | eq_(resp.content, b'hello1') |
139 | except RequestsMissmatchError as ex: | |
140 | assert 'requests missmatch:\n - missing requests' in str(ex) | |
139 | except RequestsMismatchError as ex: | |
140 | assert 'requests mismatch:\n - missing requests' in str(ex) | |
141 | 141 | else: |
142 | 142 | raise AssertionError('AssertionError expected') |
143 | 143 | |
176 | 176 | raise AssertionError('RequestException expected') |
177 | 177 | resp = requests.get('http://localhost:%d/test2' % serv.port) |
178 | 178 | eq_(resp.content, b'hello2') |
179 | except RequestsMissmatchError as ex: | |
179 | except RequestsMismatchError as ex: | |
180 | 180 | assert 'unexpected request' in ex.assertions[0] |
181 | 181 | else: |
182 | 182 | raise AssertionError('AssertionError expected') |
206 | 206 | 'Authorization': basic_auth_value('foo', 'bar'), 'Accept': 'Coffee'} |
207 | 207 | ) |
208 | 208 | eq_(resp.content, b'ok') |
209 | ||
210 | ||
211 | def test_query_eq(): | |
212 | assert query_eq('?baz=42&foo=bar', '?foo=bar&baz=42') | |
213 | assert query_eq('?baz=42.00&foo=bar', '?foo=bar&baz=42.0') | |
214 | assert query_eq('?baz=42.000000001&foo=bar', '?foo=bar&baz=42.0') | |
215 | assert not query_eq('?baz=42.00000001&foo=bar', '?foo=bar&baz=42.0') | |
216 | ||
217 | assert query_eq('?baz=42.000000001,23.99999999999&foo=bar', '?foo=bar&baz=42.0,24.0') | |
218 | assert not query_eq('?baz=42.00000001&foo=bar', '?foo=bar&baz=42.0')⏎ |
Binary diff not shown
31 | 31 | stop = time.time() |
32 | 32 | |
33 | 33 | duration = stop - start |
34 | assert duration < 0.2 | |
34 | assert duration < 0.5, "took %s" % duration | |
35 | 35 | |
36 | 36 | eq_(len(result), 40) |
37 | 37 | |
67 | 67 | stop = time.time() |
68 | 68 | |
69 | 69 | duration = stop - start |
70 | assert duration < 0.1 | |
70 | assert duration < 0.2, "took %s" % duration | |
71 | 71 | |
72 | 72 | eq_(len(result), 40) |
73 | 73 |
322 | 322 | ((0.0, -90.0, 180.0, 90.0), (512, 512), SRS(4326))]) |
323 | 323 | |
324 | 324 | |
325 | class TestTileManagerWMSSourceConcurrent(TestTileManagerWMSSource): | |
326 | def setup(self): | |
327 | TestTileManagerWMSSource.setup(self) | |
328 | self.tile_mgr.concurrent_tile_creators = 2 | |
329 | ||
325 | 330 | class TestTileManagerWMSSourceMinimalMetaRequests(object): |
326 | 331 | def setup(self): |
327 | 332 | self.file_cache = MockFileCache('/dev/null', 'png') |
480 | 485 | locker=self.locker) |
481 | 486 | |
482 | 487 | assert self.tile_mgr.meta_grid is None |
488 | ||
489 | ||
490 | class TestTileManagerBulkMetaTiles(object): | |
491 | def setup(self): | |
492 | self.file_cache = MockFileCache('/dev/null', 'png') | |
493 | self.grid = TileGrid(SRS(4326), bbox=[-180, -90, 180, 90], origin='ul') | |
494 | self.source_base = SolidColorMockSource(color='#ff0000') | |
495 | self.source_base.supports_meta_tiles = False | |
496 | self.source_overlay = MockSource() | |
497 | self.source_overlay.supports_meta_tiles = False | |
498 | self.locker = TileLocker(tmp_lock_dir, 10, "id") | |
499 | self.tile_mgr = TileManager(self.grid, self.file_cache, | |
500 | [self.source_base, self.source_overlay], 'png', | |
501 | meta_size=[2, 2], meta_buffer=0, | |
502 | locker=self.locker, | |
503 | bulk_meta_tiles=True, | |
504 | ) | |
505 | ||
506 | def test_bulk_get(self): | |
507 | tiles = self.tile_mgr.creator().create_tiles([Tile((0, 0, 2))]) | |
508 | eq_(len(tiles), 2*2) | |
509 | eq_(self.file_cache.stored_tiles, set([(0, 0, 2), (1, 0, 2), (0, 1, 2), (1, 1, 2)])) | |
510 | for requested in [self.source_base.requested, self.source_overlay.requested]: | |
511 | eq_(set(requested), set([ | |
512 | ((-180.0, 0.0, -90.0, 90.0), (256, 256), SRS(4326)), | |
513 | ((-90.0, 0.0, 0.0, 90.0), (256, 256), SRS(4326)), | |
514 | ((-180.0, -90.0, -90.0, 0.0), (256, 256), SRS(4326)), | |
515 | ((-90.0, -90.0, 0.0, 0.0), (256, 256), SRS(4326)), | |
516 | ])) | |
517 | ||
518 | def test_bulk_get_error(self): | |
519 | self.tile_mgr.sources = [self.source_base, ErrorSource()] | |
520 | try: | |
521 | self.tile_mgr.creator().create_tiles([Tile((0, 0, 2))]) | |
522 | except Exception as ex: | |
523 | eq_(ex.args[0], "source error") | |
524 | ||
525 | def test_bulk_get_multiple_meta_tiles(self): | |
526 | tiles = self.tile_mgr.creator().create_tiles([Tile((1, 0, 2)), Tile((2, 0, 2))]) | |
527 | eq_(len(tiles), 2*2*2) | |
528 | eq_(self.file_cache.stored_tiles, set([ | |
529 | (0, 0, 2), (1, 0, 2), (0, 1, 2), (1, 1, 2), | |
530 | (2, 0, 2), (3, 0, 2), (2, 1, 2), (3, 1, 2), | |
531 | ])) | |
532 | ||
533 | class ErrorSource(MapLayer): | |
534 | def __init__(self, *args): | |
535 | MapLayer.__init__(self, *args) | |
536 | self.requested = [] | |
537 | ||
538 | def get_map(self, query): | |
539 | self.requested.append((query.bbox, query.size, query.srs)) | |
540 | raise Exception("source error") | |
541 | ||
542 | class TestTileManagerBulkMetaTilesConcurrent(TestTileManagerBulkMetaTiles): | |
543 | def setup(self): | |
544 | TestTileManagerBulkMetaTiles.setup(self) | |
545 | self.tile_mgr.concurrent_tile_creators = 2 | |
546 | ||
483 | 547 | |
484 | 548 | default_image_opts = ImageOptions(resampling='bicubic') |
485 | 549 |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2016 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement, division | |
16 | ||
17 | import os | |
18 | import time | |
19 | import struct | |
20 | ||
21 | from io import BytesIO | |
22 | ||
23 | from mapproxy.cache.compact import CompactCacheV1 | |
24 | from mapproxy.cache.tile import Tile | |
25 | from mapproxy.image import ImageSource | |
26 | from mapproxy.image.opts import ImageOptions | |
27 | from mapproxy.test.unit.test_cache_tile import TileCacheTestBase | |
28 | ||
29 | from nose.tools import eq_ | |
30 | ||
31 | class TestCompactCacheV1(TileCacheTestBase): | |
32 | ||
33 | always_loads_metadata = True | |
34 | ||
35 | def setup(self): | |
36 | TileCacheTestBase.setup(self) | |
37 | self.cache = CompactCacheV1( | |
38 | cache_dir=self.cache_dir, | |
39 | ) | |
40 | ||
41 | def test_bundle_files(self): | |
42 | assert not os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundle')) | |
43 | assert not os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundlx')) | |
44 | self.cache.store_tile(self.create_tile(coord=(0, 0, 0))) | |
45 | assert os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundle')) | |
46 | assert os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundlx')) | |
47 | ||
48 | assert not os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundle')) | |
49 | assert not os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundlx')) | |
50 | self.cache.store_tile(self.create_tile(coord=(127, 127, 12))) | |
51 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundle')) | |
52 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundlx')) | |
53 | ||
54 | assert not os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0100C0080.bundle')) | |
55 | assert not os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0100C0080.bundlx')) | |
56 | self.cache.store_tile(self.create_tile(coord=(128, 256, 12))) | |
57 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0100C0080.bundle')) | |
58 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0100C0080.bundlx')) | |
59 | ||
60 | def test_bundle_files_not_created_on_is_cached(self): | |
61 | assert not os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundle')) | |
62 | assert not os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundlx')) | |
63 | self.cache.is_cached(Tile(coord=(0, 0, 0))) | |
64 | assert not os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundle')) | |
65 | assert not os.path.exists(os.path.join(self.cache_dir, 'L00', 'R0000C0000.bundlx')) | |
66 | ||
67 | def test_missing_tiles(self): | |
68 | self.cache.store_tile(self.create_tile(coord=(130, 200, 8))) | |
69 | assert os.path.exists(os.path.join(self.cache_dir, 'L08', 'R0080C0080.bundle')) | |
70 | assert os.path.exists(os.path.join(self.cache_dir, 'L08', 'R0080C0080.bundlx')) | |
71 | ||
72 | # test that all other tiles in this bundle are missing | |
73 | assert self.cache.is_cached(Tile((130, 200, 8))) | |
74 | for x in range(128, 256): | |
75 | for y in range(128, 256): | |
76 | if x == 130 and y == 200: | |
77 | continue | |
78 | assert not self.cache.is_cached(Tile((x, y, 8))), (x, y) | |
79 | assert not self.cache.load_tile(Tile((x, y, 8))), (x, y) | |
80 | ||
81 | def test_remove_level_tiles_before(self): | |
82 | self.cache.store_tile(self.create_tile(coord=(0, 0, 12))) | |
83 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundle')) | |
84 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundlx')) | |
85 | ||
86 | # not removed with timestamp | |
87 | self.cache.remove_level_tiles_before(12, time.time()) | |
88 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundle')) | |
89 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0000C0000.bundlx')) | |
90 | ||
91 | # removed with timestamp=0 (remove_all:true in seed.yaml) | |
92 | self.cache.remove_level_tiles_before(12, 0) | |
93 | assert not os.path.exists(os.path.join(self.cache_dir, 'L12')) | |
94 | ||
95 | ||
96 | def test_bundle_header(self): | |
97 | t = Tile((5000, 1000, 12), ImageSource(BytesIO(b'a' * 4000), image_opts=ImageOptions(format='image/png'))) | |
98 | self.cache.store_tile(t) | |
99 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0380C1380.bundle')) | |
100 | assert os.path.exists(os.path.join(self.cache_dir, 'L12', 'R0380C1380.bundlx')) | |
101 | ||
102 | def assert_header(tile_bytes_written, max_tile_bytes): | |
103 | with open(os.path.join(self.cache_dir, 'L12', 'R0380C1380.bundle'), 'r+b') as f: | |
104 | header = struct.unpack('<lllllllllllllll', f.read(60)) | |
105 | eq_(header[11], 896) | |
106 | eq_(header[12], 1023) | |
107 | eq_(header[13], 4992) | |
108 | eq_(header[14], 5119) | |
109 | eq_(header[6], 60 + 128*128*4 + sum(tile_bytes_written)) | |
110 | eq_(header[2], max_tile_bytes) | |
111 | eq_(header[4], len(tile_bytes_written)*4) | |
112 | ||
113 | assert_header([4000 + 4], 4000) | |
114 | ||
115 | t = Tile((5000, 1001, 12), ImageSource(BytesIO(b'a' * 6000), image_opts=ImageOptions(format='image/png'))) | |
116 | self.cache.store_tile(t) | |
117 | assert_header([4000 + 4, 6000 + 4], 6000) | |
118 | ||
119 | t = Tile((4992, 999, 12), ImageSource(BytesIO(b'a' * 1000), image_opts=ImageOptions(format='image/png'))) | |
120 | self.cache.store_tile(t) | |
121 | assert_header([4000 + 4, 6000 + 4, 1000 + 4], 6000) | |
122 | ||
123 | t = Tile((5000, 1001, 12), ImageSource(BytesIO(b'a' * 3000), image_opts=ImageOptions(format='image/png'))) | |
124 | self.cache.store_tile(t) | |
125 | assert_header([4000 + 4, 6000 + 4 + 3000 + 4, 1000 + 4], 6000) # still contains bytes from overwritten tile | |
126 |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2016 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement, division | |
16 | ||
17 | import os | |
18 | import time | |
19 | import sqlite3 | |
20 | import threading | |
21 | ||
22 | from io import BytesIO | |
23 | ||
24 | from mapproxy.image import ImageSource | |
25 | from mapproxy.cache.geopackage import GeopackageCache, GeopackageLevelCache | |
26 | from mapproxy.cache.tile import Tile | |
27 | from mapproxy.grid import tile_grid, TileGrid | |
28 | from mapproxy.test.unit.test_cache_tile import TileCacheTestBase | |
29 | ||
30 | from nose.tools import eq_ | |
31 | ||
32 | class TestGeopackageCache(TileCacheTestBase): | |
33 | ||
34 | always_loads_metadata = True | |
35 | ||
36 | def setup(self): | |
37 | TileCacheTestBase.setup(self) | |
38 | self.gpkg_file = os.path.join(self.cache_dir, 'tmp.gpkg') | |
39 | self.table_name = 'test_tiles' | |
40 | self.cache = GeopackageCache( | |
41 | self.gpkg_file, | |
42 | tile_grid=tile_grid(3857, name='global-webmercator'), | |
43 | table_name=self.table_name, | |
44 | ) | |
45 | ||
46 | def teardown(self): | |
47 | if self.cache: | |
48 | self.cache.cleanup() | |
49 | TileCacheTestBase.teardown(self) | |
50 | ||
51 | def test_new_geopackage(self): | |
52 | assert os.path.exists(self.gpkg_file) | |
53 | ||
54 | with sqlite3.connect(self.gpkg_file) as db: | |
55 | cur = db.execute('''SELECT name FROM sqlite_master WHERE type='table' AND name=?''', | |
56 | (self.table_name,)) | |
57 | content = cur.fetchone() | |
58 | assert content[0] == self.table_name | |
59 | ||
60 | with sqlite3.connect(self.gpkg_file) as db: | |
61 | cur = db.execute('''SELECT table_name, data_type FROM gpkg_contents WHERE table_name = ?''', | |
62 | (self.table_name,)) | |
63 | content = cur.fetchone() | |
64 | assert content[0] == self.table_name | |
65 | assert content[1] == 'tiles' | |
66 | ||
67 | with sqlite3.connect(self.gpkg_file) as db: | |
68 | cur = db.execute('''SELECT table_name FROM gpkg_tile_matrix WHERE table_name = ?''', | |
69 | (self.table_name,)) | |
70 | content = cur.fetchall() | |
71 | assert len(content) == 20 | |
72 | ||
73 | with sqlite3.connect(self.gpkg_file) as db: | |
74 | cur = db.execute('''SELECT table_name FROM gpkg_tile_matrix_set WHERE table_name = ?''', | |
75 | (self.table_name,)) | |
76 | content = cur.fetchone() | |
77 | assert content[0] == self.table_name | |
78 | ||
79 | def test_load_empty_tileset(self): | |
80 | assert self.cache.load_tiles([Tile(None)]) == True | |
81 | assert self.cache.load_tiles([Tile(None), Tile(None), Tile(None)]) == True | |
82 | ||
83 | def test_load_more_than_2000_tiles(self): | |
84 | # prepare data | |
85 | for i in range(0, 2010): | |
86 | assert self.cache.store_tile(Tile((i, 0, 10), ImageSource(BytesIO(b'foo')))) | |
87 | ||
88 | tiles = [Tile((i, 0, 10)) for i in range(0, 2010)] | |
89 | assert self.cache.load_tiles(tiles) | |
90 | ||
91 | def test_timeouts(self): | |
92 | self.cache._db_conn_cache.db = sqlite3.connect(self.cache.geopackage_file, timeout=0.05) | |
93 | ||
94 | def block(): | |
95 | # block database by delaying the commit | |
96 | db = sqlite3.connect(self.cache.geopackage_file) | |
97 | cur = db.cursor() | |
98 | stmt = "INSERT OR REPLACE INTO {0} (zoom_level, tile_column, tile_row, tile_data) " \ | |
99 | "VALUES (?,?,?,?)".format(self.table_name) | |
100 | cur.execute(stmt, (3, 1, 1, '1234')) | |
101 | time.sleep(0.2) | |
102 | db.commit() | |
103 | ||
104 | try: | |
105 | assert self.cache.store_tile(self.create_tile((0, 0, 1))) == True | |
106 | ||
107 | t = threading.Thread(target=block) | |
108 | t.start() | |
109 | time.sleep(0.05) | |
110 | assert self.cache.store_tile(self.create_tile((0, 0, 1))) == False | |
111 | finally: | |
112 | t.join() | |
113 | ||
114 | assert self.cache.store_tile(self.create_tile((0, 0, 1))) == True | |
115 | ||
116 | ||
117 | class TestGeopackageLevelCache(TileCacheTestBase): | |
118 | ||
119 | always_loads_metadata = True | |
120 | ||
121 | def setup(self): | |
122 | TileCacheTestBase.setup(self) | |
123 | self.cache = GeopackageLevelCache( | |
124 | self.cache_dir, | |
125 | tile_grid=tile_grid(3857, name='global-webmercator'), | |
126 | table_name='test_tiles', | |
127 | ) | |
128 | ||
129 | def teardown(self): | |
130 | if self.cache: | |
131 | self.cache.cleanup() | |
132 | TileCacheTestBase.teardown(self) | |
133 | ||
134 | def test_level_files(self): | |
135 | if os.path.exists(self.cache_dir): | |
136 | eq_(os.listdir(self.cache_dir), []) | |
137 | ||
138 | self.cache.store_tile(self.create_tile((0, 0, 1))) | |
139 | eq_(os.listdir(self.cache_dir), ['1.gpkg']) | |
140 | ||
141 | self.cache.store_tile(self.create_tile((0, 0, 5))) | |
142 | eq_(sorted(os.listdir(self.cache_dir)), ['1.gpkg', '5.gpkg']) | |
143 | ||
144 | def test_remove_level_files(self): | |
145 | self.cache.store_tile(self.create_tile((0, 0, 1))) | |
146 | self.cache.store_tile(self.create_tile((0, 0, 2))) | |
147 | eq_(sorted(os.listdir(self.cache_dir)), ['1.gpkg', '2.gpkg']) | |
148 | ||
149 | self.cache.remove_level_tiles_before(1, timestamp=0) | |
150 | eq_(os.listdir(self.cache_dir), ['2.gpkg']) | |
151 | ||
152 | def test_remove_level_tiles_before(self): | |
153 | self.cache.store_tile(self.create_tile((0, 0, 1))) | |
154 | self.cache.store_tile(self.create_tile((0, 0, 2))) | |
155 | ||
156 | eq_(sorted(os.listdir(self.cache_dir)), ['1.gpkg', '2.gpkg']) | |
157 | assert self.cache.is_cached(Tile((0, 0, 1))) | |
158 | ||
159 | self.cache.remove_level_tiles_before(1, timestamp=time.time() - 60) | |
160 | assert self.cache.is_cached(Tile((0, 0, 1))) | |
161 | ||
162 | self.cache.remove_level_tiles_before(1, timestamp=0) | |
163 | assert not self.cache.is_cached(Tile((0, 0, 1))) | |
164 | ||
165 | eq_(sorted(os.listdir(self.cache_dir)), ['1.gpkg', '2.gpkg']) | |
166 | assert self.cache.is_cached(Tile((0, 0, 2))) | |
167 | ||
168 | ||
169 | def test_bulk_store_tiles_with_different_levels(self): | |
170 | self.cache.store_tiles([ | |
171 | self.create_tile((0, 0, 1)), | |
172 | self.create_tile((0, 0, 2)), | |
173 | self.create_tile((1, 0, 2)), | |
174 | self.create_tile((1, 0, 1)), | |
175 | ]) | |
176 | ||
177 | eq_(sorted(os.listdir(self.cache_dir)), ['1.gpkg', '2.gpkg']) | |
178 | assert self.cache.is_cached(Tile((0, 0, 1))) | |
179 | assert self.cache.is_cached(Tile((1, 0, 1))) | |
180 | assert self.cache.is_cached(Tile((0, 0, 2))) | |
181 | assert self.cache.is_cached(Tile((1, 0, 2))) | |
182 | ||
183 | class TestGeopackageCacheInitErrors(object): | |
184 | table_name = 'cache' | |
185 | ||
186 | def test_bad_config_geopackage_srs(self): | |
187 | error_msg = None | |
188 | gpkg_file = os.path.join(os.path.dirname(__file__), | |
189 | 'fixture', | |
190 | 'cache.gpkg') | |
191 | table_name = 'cache' | |
192 | try: | |
193 | GeopackageCache(gpkg_file, TileGrid(srs=4326), table_name) | |
194 | except ValueError as ve: | |
195 | error_msg = ve | |
196 | assert "srs is improperly configured." in str(error_msg) | |
197 | ||
198 | def test_bad_config_geopackage_tile(self): | |
199 | error_msg = None | |
200 | gpkg_file = os.path.join(os.path.dirname(__file__), | |
201 | 'fixture', | |
202 | 'cache.gpkg') | |
203 | table_name = 'cache' | |
204 | try: | |
205 | GeopackageCache(gpkg_file, TileGrid(srs=900913, tile_size=(512, 512)), table_name) | |
206 | except ValueError as ve: | |
207 | error_msg = ve | |
208 | assert "tile_size is improperly configured." in str(error_msg) | |
209 | ||
210 | def test_bad_config_geopackage_res(self): | |
211 | error_msg = None | |
212 | gpkg_file = os.path.join(os.path.dirname(__file__), | |
213 | 'fixture', | |
214 | 'cache.gpkg') | |
215 | table_name = 'cache' | |
216 | try: | |
217 | GeopackageCache(gpkg_file, TileGrid(srs=900913, res=[1000, 100, 10]), table_name) | |
218 | except ValueError as ve: | |
219 | error_msg = ve | |
220 | assert "res is improperly configured." in str(error_msg) |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2017 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from __future__ import with_statement | |
16 | ||
17 | try: | |
18 | import redis | |
19 | except ImportError: | |
20 | redis = None | |
21 | ||
22 | import time | |
23 | import os | |
24 | ||
25 | from nose.plugins.skip import SkipTest | |
26 | ||
27 | from mapproxy.cache.tile import Tile | |
28 | from mapproxy.cache.redis import RedisCache | |
29 | ||
30 | from mapproxy.test.unit.test_cache_tile import TileCacheTestBase | |
31 | ||
32 | class TestRedisCache(TileCacheTestBase): | |
33 | always_loads_metadata = False | |
34 | def setup(self): | |
35 | if not redis: | |
36 | raise SkipTest("redis required for Redis tests") | |
37 | ||
38 | redis_host = os.environ.get('MAPPROXY_TEST_REDIS') | |
39 | if not redis_host: | |
40 | raise SkipTest("MAPPROXY_TEST_REDIS not set") | |
41 | self.host, self.port = redis_host.split(':') | |
42 | ||
43 | TileCacheTestBase.setup(self) | |
44 | ||
45 | self.cache = RedisCache(self.host, int(self.port), prefix='mapproxy-test', db=1) | |
46 | ||
47 | def teardown(self): | |
48 | for k in self.cache.r.keys('mapproxy-test-*'): | |
49 | self.cache.r.delete(k) | |
50 | ||
51 | def test_expire(self): | |
52 | cache = RedisCache(self.host, int(self.port), prefix='mapproxy-test', db=1, ttl=0) | |
53 | t1 = self.create_tile(coord=(9382, 1234, 9)) | |
54 | assert cache.store_tile(t1) | |
55 | time.sleep(0.1) | |
56 | t2 = Tile(t1.coord) | |
57 | assert cache.is_cached(t2) | |
58 | ||
59 | cache = RedisCache(self.host, int(self.port), prefix='mapproxy-test', db=1, ttl=0.05) | |
60 | t1 = self.create_tile(coord=(5382, 2234, 9)) | |
61 | assert cache.store_tile(t1) | |
62 | time.sleep(0.1) | |
63 | t2 = Tile(t1.coord) | |
64 | assert not cache.is_cached(t2) | |
65 | ||
66 | def test_double_remove(self): | |
67 | tile = self.create_tile() | |
68 | self.create_cached_tile(tile) | |
69 | assert self.cache.remove_tile(tile) | |
70 | assert self.cache.remove_tile(tile) |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2011 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | try: | |
16 | import boto3 | |
17 | from moto import mock_s3 | |
18 | except ImportError: | |
19 | boto3 = None | |
20 | mock_s3 = None | |
21 | ||
22 | from nose.plugins.skip import SkipTest | |
23 | ||
24 | from mapproxy.cache.s3 import S3Cache | |
25 | from mapproxy.test.unit.test_cache_tile import TileCacheTestBase | |
26 | ||
27 | ||
28 | class TestS3Cache(TileCacheTestBase): | |
29 | always_loads_metadata = True | |
30 | uses_utc = True | |
31 | ||
32 | def setup(self): | |
33 | if not mock_s3 or not boto3: | |
34 | raise SkipTest("boto3 and moto required for S3 tests") | |
35 | ||
36 | TileCacheTestBase.setup(self) | |
37 | ||
38 | self.mock = mock_s3() | |
39 | self.mock.start() | |
40 | ||
41 | self.bucket_name = "test" | |
42 | dir_name = 'mapproxy' | |
43 | ||
44 | boto3.client("s3").create_bucket(Bucket=self.bucket_name) | |
45 | ||
46 | self.cache = S3Cache(dir_name, | |
47 | file_ext='png', | |
48 | directory_layout='tms', | |
49 | bucket_name=self.bucket_name, | |
50 | profile_name=None, | |
51 | _concurrent_writer=1, # moto is not thread safe | |
52 | ) | |
53 | ||
54 | def teardown(self): | |
55 | self.mock.stop() | |
56 | TileCacheTestBase.teardown(self) | |
57 | ||
58 | def check_tile_key(self, layout, tile_coord, key): | |
59 | cache = S3Cache('/mycache/webmercator', 'png', bucket_name=self.bucket_name, directory_layout=layout) | |
60 | cache.store_tile(self.create_tile(tile_coord)) | |
61 | ||
62 | # raises if the key is missing | |
63 | boto3.client("s3").head_object(Bucket=self.bucket_name, Key=key) | |
64 | ||
65 | def test_tile_keys(self): | |
66 | yield self.check_tile_key, 'mp', (12345, 67890, 2), 'mycache/webmercator/02/0001/2345/0006/7890.png' | |
67 | yield self.check_tile_key, 'mp', (12345, 67890, 12), 'mycache/webmercator/12/0001/2345/0006/7890.png' | |
68 | ||
69 | yield self.check_tile_key, 'tc', (12345, 67890, 2), 'mycache/webmercator/02/000/012/345/000/067/890.png' | |
70 | yield self.check_tile_key, 'tc', (12345, 67890, 12), 'mycache/webmercator/12/000/012/345/000/067/890.png' | |
71 | ||
72 | yield self.check_tile_key, 'tms', (12345, 67890, 2), 'mycache/webmercator/2/12345/67890.png' | |
73 | yield self.check_tile_key, 'tms', (12345, 67890, 12), 'mycache/webmercator/12/12345/67890.png' | |
74 | ||
75 | yield self.check_tile_key, 'quadkey', (0, 0, 0), 'mycache/webmercator/.png' | |
76 | yield self.check_tile_key, 'quadkey', (0, 0, 1), 'mycache/webmercator/0.png' | |
77 | yield self.check_tile_key, 'quadkey', (1, 1, 1), 'mycache/webmercator/3.png' | |
78 | yield self.check_tile_key, 'quadkey', (12345, 67890, 12), 'mycache/webmercator/200200331021.png' | |
79 | ||
80 | yield self.check_tile_key, 'arcgis', (1, 2, 3), 'mycache/webmercator/L03/R00000002/C00000001.png' | |
81 | yield self.check_tile_key, 'arcgis', (9, 2, 3), 'mycache/webmercator/L03/R00000002/C00000009.png' | |
82 | yield self.check_tile_key, 'arcgis', (10, 2, 3), 'mycache/webmercator/L03/R00000002/C0000000a.png' | |
83 | yield self.check_tile_key, 'arcgis', (12345, 67890, 12), 'mycache/webmercator/L12/R00010932/C00003039.png' | |
84 |
14 | 14 | |
15 | 15 | from __future__ import with_statement |
16 | 16 | |
17 | import datetime | |
17 | 18 | import os |
18 | 19 | import shutil |
19 | 20 | import threading |
28 | 29 | from mapproxy.cache.tile import Tile |
29 | 30 | from mapproxy.cache.file import FileCache |
30 | 31 | from mapproxy.cache.mbtiles import MBTilesCache, MBTilesLevelCache |
31 | from mapproxy.cache.base import CacheBackendError | |
32 | 32 | from mapproxy.image import ImageSource |
33 | 33 | from mapproxy.image.opts import ImageOptions |
34 | 34 | from mapproxy.test.image import create_tmp_image_buf, is_png |
35 | 35 | |
36 | from nose.tools import eq_, assert_raises | |
36 | from nose.tools import eq_ | |
37 | 37 | |
38 | 38 | tile_image = create_tmp_image_buf((256, 256), color='blue') |
39 | 39 | tile_image2 = create_tmp_image_buf((256, 256), color='red') |
40 | 40 | |
41 | def timestamp_is_now(timestamp, delta=5): | |
42 | return abs(timestamp - time.time()) <= delta | |
43 | 41 | |
44 | 42 | class TileCacheTestBase(object): |
45 | 43 | always_loads_metadata = False |
44 | uses_utc = False | |
46 | 45 | |
47 | 46 | def setup(self): |
48 | 47 | self.cache_dir = tempfile.mkdtemp() |
51 | 50 | if hasattr(self, 'cache_dir') and os.path.exists(self.cache_dir): |
52 | 51 | shutil.rmtree(self.cache_dir) |
53 | 52 | |
54 | def create_tile(self, coord=(0, 0, 4)): | |
53 | def create_tile(self, coord=(3009, 589, 12)): | |
55 | 54 | return Tile(coord, |
56 | 55 | ImageSource(tile_image, |
57 | 56 | image_opts=ImageOptions(format='image/png'))) |
58 | 57 | |
59 | def create_another_tile(self, coord=(0, 0, 4)): | |
58 | def create_another_tile(self, coord=(3009, 589, 12)): | |
60 | 59 | return Tile(coord, |
61 | 60 | ImageSource(tile_image2, |
62 | 61 | image_opts=ImageOptions(format='image/png'))) |
63 | 62 | |
64 | 63 | def test_is_cached_miss(self): |
65 | assert not self.cache.is_cached(Tile((0, 0, 4))) | |
64 | assert not self.cache.is_cached(Tile((3009, 589, 12))) | |
66 | 65 | |
67 | 66 | def test_is_cached_hit(self): |
68 | 67 | tile = self.create_tile() |
69 | 68 | self.create_cached_tile(tile) |
70 | assert self.cache.is_cached(Tile((0, 0, 4))) | |
69 | assert self.cache.is_cached(Tile((3009, 589, 12))) | |
71 | 70 | |
72 | 71 | def test_is_cached_none(self): |
73 | 72 | assert self.cache.is_cached(Tile(None)) |
76 | 75 | assert self.cache.load_tile(Tile(None)) |
77 | 76 | |
78 | 77 | def test_load_tile_not_cached(self): |
79 | tile = Tile((0, 0, 4)) | |
78 | tile = Tile((3009, 589, 12)) | |
80 | 79 | assert not self.cache.load_tile(tile) |
81 | 80 | assert tile.source is None |
82 | 81 | assert tile.is_missing() |
84 | 83 | def test_load_tile_cached(self): |
85 | 84 | tile = self.create_tile() |
86 | 85 | self.create_cached_tile(tile) |
87 | tile = Tile((0, 0, 4)) | |
86 | tile = Tile((3009, 589, 12)) | |
88 | 87 | assert self.cache.load_tile(tile) == True |
89 | 88 | assert not tile.is_missing() |
90 | 89 | |
91 | 90 | def test_store_tiles(self): |
92 | tiles = [self.create_tile((x, 0, 4)) for x in range(4)] | |
91 | tiles = [self.create_tile((x, 589, 12)) for x in range(4)] | |
93 | 92 | tiles[0].stored = True |
94 | 93 | self.cache.store_tiles(tiles) |
95 | 94 | |
96 | tiles = [Tile((x, 0, 4)) for x in range(4)] | |
95 | tiles = [Tile((x, 589, 12)) for x in range(4)] | |
97 | 96 | assert tiles[0].is_missing() |
98 | 97 | assert self.cache.load_tile(tiles[0]) == False |
99 | 98 | assert tiles[0].is_missing() |
144 | 143 | assert self.cache.load_tile(tile, with_metadata=True) |
145 | 144 | assert tile.source is not None |
146 | 145 | if tile.timestamp: |
147 | assert timestamp_is_now(tile.timestamp, delta=10) | |
146 | now = time.time() | |
147 | if self.uses_utc: | |
148 | now = time.mktime(datetime.datetime.utcnow().timetuple()) | |
149 | assert abs(tile.timestamp - now) <= 10 | |
148 | 150 | if tile.size: |
149 | 151 | assert tile.size == size |
150 | 152 | |
171 | 173 | # tile object is marked as stored, |
172 | 174 | # check that it is not stored 'again'
173 | 175 | # (used for disable_storage) |
174 | tile = Tile((0, 0, 4), ImageSource(BytesIO(b'foo'))) | |
176 | tile = Tile((1234, 589, 12), ImageSource(BytesIO(b'foo'))) | |
175 | 177 | tile.stored = True |
176 | 178 | self.cache.store_tile(tile) |
177 | 179 | |
178 | 180 | assert self.cache.is_cached(tile) |
179 | 181 | |
180 | tile = Tile((0, 0, 4)) | |
182 | tile = Tile((1234, 589, 12)) | |
181 | 183 | assert not self.cache.is_cached(tile) |
182 | 184 | |
183 | 185 | def test_remove(self): |
187 | 189 | |
188 | 190 | self.cache.remove_tile(Tile((1, 0, 4))) |
189 | 191 | assert not self.cache.is_cached(Tile((1, 0, 4))) |
192 | ||
193 | # check if we can recreate a removed tile | |
194 | tile = self.create_tile((1, 0, 4)) | |
195 | self.create_cached_tile(tile) | |
196 | assert self.cache.is_cached(Tile((1, 0, 4))) | |
190 | 197 | |
191 | 198 | def create_cached_tile(self, tile): |
192 | 199 | self.cache.store_tile(tile) |
247 | 254 | with open(loc, 'wb') as f: |
248 | 255 | f.write(b'foo') |
249 | 256 | |
257 | ||
258 | def check_tile_location(self, layout, tile_coord, path): | |
259 | cache = FileCache('/tmp/foo', 'png', directory_layout=layout) | |
260 | eq_(cache.tile_location(Tile(tile_coord)), path) | |
261 | ||
262 | def test_tile_locations(self): | |
263 | yield self.check_tile_location, 'mp', (12345, 67890, 2), '/tmp/foo/02/0001/2345/0006/7890.png' | |
264 | yield self.check_tile_location, 'mp', (12345, 67890, 12), '/tmp/foo/12/0001/2345/0006/7890.png' | |
265 | ||
266 | yield self.check_tile_location, 'tc', (12345, 67890, 2), '/tmp/foo/02/000/012/345/000/067/890.png' | |
267 | yield self.check_tile_location, 'tc', (12345, 67890, 12), '/tmp/foo/12/000/012/345/000/067/890.png' | |
268 | ||
269 | yield self.check_tile_location, 'tms', (12345, 67890, 2), '/tmp/foo/2/12345/67890.png' | |
270 | yield self.check_tile_location, 'tms', (12345, 67890, 12), '/tmp/foo/12/12345/67890.png' | |
271 | ||
272 | yield self.check_tile_location, 'quadkey', (0, 0, 0), '/tmp/foo/.png' | |
273 | yield self.check_tile_location, 'quadkey', (0, 0, 1), '/tmp/foo/0.png' | |
274 | yield self.check_tile_location, 'quadkey', (1, 1, 1), '/tmp/foo/3.png' | |
275 | yield self.check_tile_location, 'quadkey', (12345, 67890, 12), '/tmp/foo/200200331021.png' | |
276 | ||
277 | yield self.check_tile_location, 'arcgis', (1, 2, 3), '/tmp/foo/L03/R00000002/C00000001.png' | |
278 | yield self.check_tile_location, 'arcgis', (9, 2, 3), '/tmp/foo/L03/R00000002/C00000009.png' | |
279 | yield self.check_tile_location, 'arcgis', (10, 2, 3), '/tmp/foo/L03/R00000002/C0000000a.png' | |
280 | yield self.check_tile_location, 'arcgis', (12345, 67890, 12), '/tmp/foo/L12/R00010932/C00003039.png' | |
281 | ||
282 | ||
283 | def check_level_location(self, layout, level, path): | |
284 | cache = FileCache('/tmp/foo', 'png', directory_layout=layout) | |
285 | eq_(cache.level_location(level), path) | |
286 | ||
287 | def test_level_locations(self): | |
288 | yield self.check_level_location, 'mp', 2, '/tmp/foo/02' | |
289 | yield self.check_level_location, 'mp', 12, '/tmp/foo/12' | |
290 | ||
291 | yield self.check_level_location, 'tc', 2, '/tmp/foo/02' | |
292 | yield self.check_level_location, 'tc', 12, '/tmp/foo/12' | |
293 | ||
294 | yield self.check_level_location, 'tms', '2', '/tmp/foo/2' | |
295 | yield self.check_level_location, 'tms', 12, '/tmp/foo/12' | |
296 | ||
297 | yield self.check_level_location, 'arcgis', 3, '/tmp/foo/L03' | |
298 | yield self.check_level_location, 'arcgis', 3, '/tmp/foo/L03' | |
299 | yield self.check_level_location, 'arcgis', 3, '/tmp/foo/L03' | |
300 | yield self.check_level_location, 'arcgis', 12, '/tmp/foo/L12' | |
301 | ||
302 | def test_level_location_quadkey(self): | |
303 | try: | |
304 | self.check_level_location('quadkey', 0, None) | |
305 | except NotImplementedError: | |
306 | pass | |
307 | else: | |
308 | assert False, "expected NotImplementedError" | |
250 | 309 | |
251 | 310 | class TestMBTileCache(TileCacheTestBase): |
252 | 311 | def setup(self): |
346 | 405 | |
347 | 406 | eq_(sorted(os.listdir(self.cache_dir)), ['1.mbtile', '2.mbtile']) |
348 | 407 | assert self.cache.is_cached(Tile((0, 0, 2))) |
408 | ||
409 | def test_bulk_store_tiles_with_different_levels(self): | |
410 | self.cache.store_tiles([ | |
411 | self.create_tile((0, 0, 1)), | |
412 | self.create_tile((0, 0, 2)), | |
413 | self.create_tile((1, 0, 2)), | |
414 | self.create_tile((1, 0, 1)), | |
415 | ]) | |
416 | ||
417 | eq_(sorted(os.listdir(self.cache_dir)), ['1.mbtile', '2.mbtile']) | |
418 | assert self.cache.is_cached(Tile((0, 0, 1))) | |
419 | assert self.cache.is_cached(Tile((1, 0, 1))) | |
420 | assert self.cache.is_cached(Tile((0, 0, 2))) | |
421 | assert self.cache.is_cached(Tile((1, 0, 2))) |
295 | 295 | http = MockHTTPClient() |
296 | 296 | wms = WMSInfoClient(req, http_client=http, supported_srs=[SRS(25832)]) |
297 | 297 | fi_req = InfoQuery((8, 50, 9, 51), (512, 512), |
298 | SRS(4326), (256, 256), 'text/plain') | |
298 | SRS(4326), (128, 64), 'text/plain') | |
299 | 299 | |
300 | 300 | wms.get_info(fi_req) |
301 | 301 | |
302 | 302 | assert wms_query_eq(http.requested[0], |
303 | 303 | TESTSERVER_URL+'/service?map=foo&LAYERS=foo&SERVICE=WMS&FORMAT=image%2Fpng' |
304 | '&REQUEST=GetFeatureInfo&HEIGHT=512&SRS=EPSG%3A25832&info_format=text/plain' | |
304 | '&REQUEST=GetFeatureInfo&SRS=EPSG%3A25832&info_format=text/plain' | |
305 | 305 | '&query_layers=foo' |
306 | '&VERSION=1.1.1&WIDTH=512&STYLES=&x=259&y=255' | |
307 | '&BBOX=428333.552496,5538630.70275,500000.0,5650300.78652') | |
306 | '&VERSION=1.1.1&WIDTH=512&HEIGHT=797&STYLES=&x=135&y=101' | |
307 | '&BBOX=428333.552496,5538630.70275,500000.0,5650300.78652'), http.requested[0] | |
308 | 308 | |
309 | 309 | def test_transform_fi_request(self): |
310 | 310 | req = WMS111FeatureInfoRequest(url=TESTSERVER_URL + '/service?map=foo', param={'layers':'foo', 'srs': 'EPSG:25832'}) |
311 | 311 | http = MockHTTPClient() |
312 | 312 | wms = WMSInfoClient(req, http_client=http) |
313 | 313 | fi_req = InfoQuery((8, 50, 9, 51), (512, 512), |
314 | SRS(4326), (256, 256), 'text/plain') | |
314 | SRS(4326), (128, 64), 'text/plain') | |
315 | 315 | |
316 | 316 | wms.get_info(fi_req) |
317 | 317 | |
318 | 318 | assert wms_query_eq(http.requested[0], |
319 | 319 | TESTSERVER_URL+'/service?map=foo&LAYERS=foo&SERVICE=WMS&FORMAT=image%2Fpng' |
320 | '&REQUEST=GetFeatureInfo&HEIGHT=512&SRS=EPSG%3A25832&info_format=text/plain' | |
320 | '&REQUEST=GetFeatureInfo&SRS=EPSG%3A25832&info_format=text/plain' | |
321 | 321 | '&query_layers=foo' |
322 | '&VERSION=1.1.1&WIDTH=512&STYLES=&x=259&y=255' | |
323 | '&BBOX=428333.552496,5538630.70275,500000.0,5650300.78652') | |
322 | '&VERSION=1.1.1&WIDTH=512&HEIGHT=797&STYLES=&x=135&y=101' | |
323 | '&BBOX=428333.552496,5538630.70275,500000.0,5650300.78652'), http.requested[0] | |
324 | 324 | |
325 | 325 | class TestWMSMapRequest100(object): |
326 | 326 | def setup(self): |
0 | # This file is part of the MapProxy project. | |
1 | # Copyright (C) 2010 Omniscale <http://omniscale.de> | |
2 | # | |
3 | # Licensed under the Apache License, Version 2.0 (the "License"); | |
4 | # you may not use this file except in compliance with the License. | |
5 | # You may obtain a copy of the License at | |
6 | # | |
7 | # http://www.apache.org/licenses/LICENSE-2.0 | |
8 | # | |
9 | # Unless required by applicable law or agreed to in writing, software | |
10 | # distributed under the License is distributed on an "AS IS" BASIS, | |
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. | |
12 | # See the License for the specific language governing permissions and | |
13 | # limitations under the License. | |
14 | ||
15 | from io import BytesIO | |
16 | ||
17 | from mapproxy.client.arcgis import ArcGISInfoClient | |
18 | from mapproxy.layer import InfoQuery | |
19 | from mapproxy.request.arcgis import ArcGISIdentifyRequest | |
20 | from mapproxy.srs import SRS | |
21 | from mapproxy.test.http import assert_query_eq | |
22 | ||
23 | TESTSERVER_ADDRESS = ('127.0.0.1', 56413) | |
24 | TESTSERVER_URL = 'http://%s:%s' % TESTSERVER_ADDRESS | |
25 | ||
26 | ||
27 | ||
28 | class MockHTTPClient(object): | |
29 | def __init__(self): | |
30 | self.requested = [] | |
31 | ||
32 | def open(self, url, data=None): | |
33 | self.requested.append(url) | |
34 | result = BytesIO(b'{}') | |
35 | result.seek(0) | |
36 | result.headers = {} | |
37 | return result | |
38 | ||
39 | class TestArcGISInfoClient(object): | |
40 | def test_fi_request(self): | |
41 | req = ArcGISIdentifyRequest(url=TESTSERVER_URL + '/MapServer/export?map=foo', param={'layers':'foo'}) | |
42 | http = MockHTTPClient() | |
43 | wms = ArcGISInfoClient(req, http_client=http, supported_srs=[SRS(4326)]) | |
44 | fi_req = InfoQuery((8, 50, 9, 51), (512, 512), | |
45 | SRS(4326), (128, 64), 'text/plain') | |
46 | ||
47 | wms.get_info(fi_req) | |
48 | ||
49 | assert_query_eq(http.requested[0], | |
50 | TESTSERVER_URL+'/MapServer/identify?map=foo' | |
51 | '&imageDisplay=512,512,96&sr=4326&f=json' | |
52 | '&layers=foo&tolerance=5&returnGeometry=false' | |
53 | '&geometryType=esriGeometryPoint&geometry=8.250000,50.875000' | |
54 | '&mapExtent=8,50,9,51', | |
55 | fuzzy_number_compare=True) | |
56 | ||
57 | def test_transform_fi_request_supported_srs(self): | |
58 | req = ArcGISIdentifyRequest(url=TESTSERVER_URL + '/MapServer/export?map=foo', param={'layers':'foo'}) | |
59 | http = MockHTTPClient() | |
60 | wms = ArcGISInfoClient(req, http_client=http, supported_srs=[SRS(25832)]) | |
61 | fi_req = InfoQuery((8, 50, 9, 51), (512, 512), | |
62 | SRS(4326), (128, 64), 'text/plain') | |
63 | ||
64 | wms.get_info(fi_req) | |
65 | ||
66 | assert_query_eq(http.requested[0], | |
67 | TESTSERVER_URL+'/MapServer/identify?map=foo' | |
68 | '&imageDisplay=512,797,96&sr=25832&f=json' | |
69 | '&layers=foo&tolerance=5&returnGeometry=false' | |
70 | '&geometryType=esriGeometryPoint&geometry=447229.979084,5636149.370634' | |
71 | '&mapExtent=428333.552496,5538630.70275,500000.0,5650300.78652', | |
72 | fuzzy_number_compare=True)⏎ |
23 | 23 | merge_dict, |
24 | 24 | ConfigurationError, |
25 | 25 | ) |
26 | from mapproxy.config.coverage import load_coverage | |
26 | 27 | from mapproxy.config.spec import validate_options |
27 | 28 | from mapproxy.cache.tile import TileManager |
29 | from mapproxy.seed.spec import validate_seed_conf | |
28 | 30 | from mapproxy.test.helper import TempFile |
29 | 31 | from mapproxy.test.unit.test_grid import assert_almost_equal_bbox |
30 | 32 | from nose.tools import eq_, assert_raises |
922 | 924 | |
923 | 925 | conf.globals.image_options.image_opts({}, 'image/jpeg') |
924 | 926 | |
927 | class TestLoadCoverage(object): | |
928 | def test_union(self): | |
929 | conf = { | |
930 | 'coverages': { | |
931 | 'covname': { | |
932 | 'union': [ | |
933 | {'bbox': [0, 0, 10, 10], 'srs': 'EPSG:4326'}, | |
934 | {'bbox': [10, 0, 20, 10], 'srs': 'EPSG:4326', 'unknown': True}, | |
935 | ], | |
936 | }, | |
937 | }, | |
938 | } | |
939 | ||
940 | errors, informal_only = validate_seed_conf(conf) | |
941 | assert informal_only | |
942 | assert len(errors) == 1 | |
943 | eq_(errors[0], "unknown 'unknown' in coverages.covname.union[1]") |
20 | 20 | from lxml import etree, html |
21 | 21 | from nose.tools import eq_ |
22 | 22 | |
23 | from mapproxy.featureinfo import (combined_inputs, XSLTransformer, | |
24 | XMLFeatureInfoDoc, HTMLFeatureInfoDoc) | |
23 | from mapproxy.featureinfo import ( | |
24 | combined_inputs, | |
25 | XSLTransformer, | |
26 | XMLFeatureInfoDoc, | |
27 | HTMLFeatureInfoDoc, | |
28 | JSONFeatureInfoDoc, | |
29 | ) | |
25 | 30 | from mapproxy.test.helper import strip_whitespace |
26 | 31 | |
27 | 32 | def test_combined_inputs(): |
176 | 181 | b"<p>baz2\n<p>foo</p>\n<body><p>bar</p></body>", |
177 | 182 | result.as_string()) |
178 | 183 | eq_(result.info_type, 'text') |
184 | ||
185 | class TestJSONFeatureInfoDocs(object): | |
186 | def test_combine(self): | |
187 | docs = [ | |
188 | JSONFeatureInfoDoc('{}'), | |
189 | JSONFeatureInfoDoc('{"results": [{"foo": 1}]}'), | |
190 | JSONFeatureInfoDoc('{"results": [{"bar": 2}]}'), | |
191 | ] | |
192 | result = JSONFeatureInfoDoc.combine(docs) | |
193 | ||
194 | eq_('''{"results": [{"foo": 1}, {"bar": 2}]}''', | |
195 | result.as_string()) | |
196 | eq_(result.info_type, 'json') |
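The combine behavior asserted above can be sketched independently: merging several JSON feature-info documents by concatenating their `results` arrays. This is a minimal sketch of the merge semantics the test checks, not MapProxy's actual `JSONFeatureInfoDoc.combine` implementation.

```python
import json

def combine_json_results(docs):
    # Merge the "results" lists of several JSON feature-info
    # documents into a single document. Documents without a
    # "results" key (like the empty '{}') contribute nothing.
    combined = {"results": []}
    for raw in docs:
        doc = json.loads(raw)
        combined["results"].extend(doc.get("results", []))
    return json.dumps(combined)

merged = combine_json_results([
    '{}',
    '{"results": [{"foo": 1}]}',
    '{"results": [{"bar": 2}]}',
])
```

This mirrors the expected output of the test: a single document whose `results` list holds the entries of all inputs in order.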
15 | 15 | from __future__ import division, with_statement |
16 | 16 | |
17 | 17 | import os |
18 | import tempfile | |
19 | import shutil | |
18 | 20 | |
19 | 21 | from mapproxy.srs import SRS, bbox_equals |
20 | 22 | from mapproxy.util.geom import ( |
21 | 23 | load_polygons, |
22 | 24 | load_datasource, |
25 | load_geojson, | |
26 | load_expire_tiles, | |
23 | 27 | transform_geometry, |
24 | 28 | geom_support, |
25 | 29 | bbox_polygon, |
26 | 30 | build_multipolygon, |
27 | 31 | ) |
28 | from mapproxy.util.coverage import coverage, MultiCoverage | |
32 | from mapproxy.util.coverage import ( | |
33 | coverage, | |
34 | MultiCoverage, | |
35 | union_coverage, | |
36 | diff_coverage, | |
37 | intersection_coverage, | |
38 | ) | |
29 | 39 | from mapproxy.layer import MapExtent, DefaultMapExtent |
30 | 40 | from mapproxy.test.helper import TempFile |
31 | 41 | |
137 | 147 | eq_(polygon.type, 'Polygon') |
138 | 148 | assert polygon.equals(shapely.geometry.Polygon([(0, 0), (15, 0), (15, 10), (0, 10)])) |
139 | 149 | |
150 | ||
151 | class TestGeoJSONLoading(object): | |
152 | def test_geojson(self): | |
153 | yield (self.check_geojson, | |
154 | '''{"type": "Polygon", "coordinates": [[[0, 0], [10, 0], [10, 10], [0, 0]]]}''', | |
155 | shapely.geometry.Polygon([[0, 0], [10, 0], [10, 10], [0, 0]]), | |
156 | ) | |
157 | ||
158 | yield (self.check_geojson, | |
159 | '''{"type": "MultiPolygon", "coordinates": [[[[0, 0], [10, 0], [10, 10], [0, 0]]], [[[20, 0], [30, 0], [20, 10], [20, 0]]]]}''', | |
160 | shapely.geometry.Polygon([[0, 0], [10, 0], [10, 10], [0, 0]]).union(shapely.geometry.Polygon([[20, 0], [30, 0], [20, 10], [20, 0]])), | |
161 | ) | |
162 | ||
163 | yield (self.check_geojson, | |
164 | '''{"type": "Feature", "geometry": {"type": "Polygon", "coordinates": [[[0, 0], [10, 0], [10, 10], [0, 0]]]}}''', | |
165 | shapely.geometry.Polygon([[0, 0], [10, 0], [10, 10], [0, 0]]), | |
166 | ) | |
167 | ||
168 | yield (self.check_geojson, | |
169 | '''{"type": "FeatureCollection", "features": [{"type": "Feature", "geometry": {"type": "Polygon", "coordinates": [[[0, 0], [10, 0], [10, 10], [0, 0]]]}}]}''', | |
170 | shapely.geometry.Polygon([[0, 0], [10, 0], [10, 10], [0, 0]]), | |
171 | ) | |
172 | ||
173 | def check_geojson(self, geojson, geometry): | |
174 | with TempFile() as fname: | |
175 | with open(fname, 'w') as f: | |
176 | f.write(geojson) | |
177 | polygon = load_geojson(fname) | |
178 | bbox, polygon = build_multipolygon(polygon, simplify=True) | |
179 | assert polygon.is_valid | |
180 | assert polygon.type in ('Polygon', 'MultiPolygon'), polygon.type | |
181 | assert polygon.equals(geometry) | |
182 | ||
183 | ||
140 | 184 | class TestTransform(object): |
141 | 185 | def test_polygon_transf(self): |
142 | 186 | p1 = shapely.geometry.Polygon([(0, 0), (10, 0), (10, 10), (0, 10)]) |
267 | 311 | assert coverage([-10, 10, 80, 80], SRS(4326)) != coverage([-10, 10.0, 80.0, 80], SRS(31467)) |
268 | 312 | |
269 | 313 | |
314 | class TestUnionCoverage(object): | |
315 | def setup(self): | |
316 | self.coverage = union_coverage([ | |
317 | coverage([0, 0, 10, 10], SRS(4326)), | |
318 | coverage(shapely.wkt.loads("POLYGON((10 0, 20 0, 20 10, 10 10, 10 0))"), SRS(4326)), | |
319 | coverage(shapely.wkt.loads("POLYGON((-1000000 0, 0 0, 0 1000000, -1000000 1000000, -1000000 0))"), SRS(3857)), | |
320 | ]) | |
321 | ||
322 | def test_bbox(self): | |
323 | assert bbox_equals(self.coverage.bbox, [-8.98315284, 0.0, 20.0, 10.0], 0.0001), self.coverage.bbox | |
324 | ||
325 | def test_contains(self): | |
326 | assert self.coverage.contains((0, 0, 5, 5), SRS(4326)) | |
327 | assert self.coverage.contains((-50000, 0, -20000, 20000), SRS(3857)) | |
328 | assert not self.coverage.contains((-50000, -100, -20000, 20000), SRS(3857)) | |
329 | ||
330 | def test_intersects(self): | |
331 | assert self.coverage.intersects((0, 0, 5, 5), SRS(4326)) | |
332 | assert self.coverage.intersects((5, 0, 25, 5), SRS(4326)) | |
333 | assert self.coverage.intersects((-50000, 0, -20000, 20000), SRS(3857)) | |
334 | assert self.coverage.intersects((-50000, -100, -20000, 20000), SRS(3857)) | |
335 | ||
336 | ||
337 | class TestDiffCoverage(object): | |
338 | def setup(self): | |
339 | g1 = coverage(shapely.wkt.loads("POLYGON((-10 0, 20 0, 20 10, -10 10, -10 0))"), SRS(4326)) | |
340 | g2 = coverage([0, 2, 8, 8], SRS(4326)) | |
341 | g3 = coverage(shapely.wkt.loads("POLYGON((-1000000 500000, 0 500000, 0 1000000, -1000000 1000000, -1000000 500000))"), SRS(3857)) | |
342 | self.coverage = diff_coverage([g1, g2, g3]) | |
343 | ||
344 | def test_bbox(self): | |
345 | assert bbox_equals(self.coverage.bbox, [-10, 0.0, 20.0, 10.0], 0.0001), self.coverage.bbox | |
346 | ||
347 | def test_contains(self): | |
348 | assert self.coverage.contains((0, 0, 1, 1), SRS(4326)) | |
349 | assert self.coverage.contains((-1100000, 510000, -1050000, 600000), SRS(3857)) | |
350 | assert not self.coverage.contains((-1100000, 510000, -990000, 600000), SRS(3857)) # touches g3 | 
351 | assert not self.coverage.contains((4, 4, 5, 5), SRS(4326)) # in g2 | |
352 | ||
353 | def test_intersects(self): | |
354 | assert self.coverage.intersects((0, 0, 1, 1), SRS(4326)) | |
355 | assert self.coverage.intersects((-1100000, 510000, -1050000, 600000), SRS(3857)) | |
356 | assert self.coverage.intersects((-1100000, 510000, -990000, 600000), SRS(3857)) # touches g3 | 
357 | assert not self.coverage.intersects((4, 4, 5, 5), SRS(4326)) # in g2 | |
358 | ||
359 | ||
360 | class TestIntersectionCoverage(object): | |
361 | def setup(self): | |
362 | g1 = coverage(shapely.wkt.loads("POLYGON((0 0, 10 0, 10 10, 0 10, 0 0))"), SRS(4326)) | |
363 | g2 = coverage([5, 5, 15, 15], SRS(4326)) | |
364 | self.coverage = intersection_coverage([g1, g2]) | |
365 | ||
366 | def test_bbox(self): | |
367 | assert bbox_equals(self.coverage.bbox, [5.0, 5.0, 10.0, 10.0], 0.0001), self.coverage.bbox | |
368 | ||
369 | def test_contains(self): | |
370 | assert not self.coverage.contains((0, 0, 1, 1), SRS(4326)) | |
371 | assert self.coverage.contains((6, 6, 7, 7), SRS(4326)) | |
372 | ||
373 | def test_intersects(self): | |
374 | assert self.coverage.intersects((3, 6, 7, 7), SRS(4326)) | 
375 | assert self.coverage.intersects((6, 6, 7, 7), SRS(4326)) | 
376 | assert not self.coverage.intersects((0, 0, 1, 1), SRS(4326)) | |
377 | ||
378 | ||
270 | 379 | class TestMultiCoverage(object): |
271 | 380 | def setup(self): |
272 | 381 | # box from 10 10 to 80 80 with small spike/corner to -10 60 (upper left) |
363 | 472 | |
364 | 473 | geoms = load_datasource(fname) |
365 | 474 | eq_(len(geoms), 2) |
475 | ||
476 | def test_geojson(self): | |
477 | with TempFile() as fname: | |
478 | with open(fname, 'wb') as f: | |
479 | f.write(b'''{"type": "FeatureCollection", "features": [ | |
480 | {"type": "Feature", "geometry": {"type": "Polygon", "coordinates": [[[0, 0], [10, 0], [10, 10], [0, 0]]]} }, | |
481 | {"type": "Feature", "geometry": {"type": "MultiPolygon", "coordinates": [[[[0, 0], [10, 0], [10, 10], [0, 0]]], [[[0, 0], [10, 0], [10, 10], [0, 0]]], [[[0, 0], [10, 0], [10, 10], [0, 0]]]]} }, | |
482 | {"type": "Feature", "geometry": {"type": "Point", "coordinates": [0, 0]} } | |
483 | ]}''') | |
484 | ||
485 | geoms = load_datasource(fname) | |
486 | eq_(len(geoms), 4) | |
487 | ||
488 | def test_expire_tiles_dir(self): | |
489 | dirname = tempfile.mkdtemp() | |
490 | try: | |
491 | fname = os.path.join(dirname, 'tiles') | |
492 | with open(fname, 'wb') as f: | |
493 | f.write(b"4/2/5\n") | |
494 | f.write(b"4/2/6\n") | |
495 | f.write(b"4/4/3\n") | |
496 | ||
497 | geoms = load_expire_tiles(dirname) | |
498 | eq_(len(geoms), 3) | |
499 | finally: | |
500 | shutil.rmtree(dirname) | |
501 | ||
502 | def test_expire_tiles_file(self): | |
503 | with TempFile() as fname: | |
504 | with open(fname, 'wb') as f: | |
505 | f.write(b"4/2/5\n") | |
506 | f.write(b"4/2/6\n") | |
507 | f.write(b"error\n") | |
508 | f.write(b"4/2/1\n") # rest of file is ignored | |
509 | ||
510 | geoms = load_expire_tiles(fname) | |
511 | eq_(len(geoms), 2) |
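The expire-tiles tests above rely on a simple line format: one `z/x/y` tile coordinate per line, with parsing stopping at the first malformed line. A minimal sketch of that parsing behavior (not MapProxy's `load_expire_tiles`, which additionally converts tiles to geometries):

```python
def parse_expire_tiles(lines):
    # Parse expire-tile lines of the form "z/x/y" into
    # (z, x, y) tuples. Parsing stops at the first malformed
    # line, mirroring the "rest of file is ignored" behavior
    # asserted in test_expire_tiles_file above.
    tiles = []
    for line in lines:
        parts = line.strip().split('/')
        if len(parts) != 3:
            break
        try:
            tiles.append(tuple(int(p) for p in parts))
        except ValueError:
            break
    return tiles

tiles = parse_expire_tiles(["4/2/5", "4/2/6", "error", "4/2/1"])
```

With the inputs from the test, the `error` line terminates parsing, so only the first two tiles are returned.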
676 | 676 | assert t1[2] == t2[0] |
677 | 677 | assert t1[0] == t3[0] |
678 | 678 | assert t1[1] == t3[3] |
679 | ||
680 | ||
681 | class TestClosestLevelTinyResFactor(object): | |
682 | def setup(self): | |
683 | self.grid = TileGrid(SRS(31467), | |
684 | bbox=[420000,30000,900000,350000], origin='ul', | |
685 | res=[4000,3750,3500,3250,3000,2750,2500,2250,2000,1750,1500,1250,1000,750,650,500,250,100,50,20,10,5,2.5,2,1.5,1,0.5], | |
686 | ) | |
687 | ||
688 | def test_closest_level(self): | |
689 | eq_(self.grid.closest_level(5000), 0) | |
690 | eq_(self.grid.closest_level(4000), 0) | |
691 | eq_(self.grid.closest_level(3750), 1) | |
692 | eq_(self.grid.closest_level(3500), 2) | |
693 | eq_(self.grid.closest_level(3250), 3) | |
694 | eq_(self.grid.closest_level(3000), 4) | |
679 | 695 | |
680 | 696 | |
681 | 697 | class TestOrigins(object): |
18 | 18 | import os |
19 | 19 | from io import BytesIO |
20 | 20 | from mapproxy.compat.image import Image, ImageDraw |
21 | from mapproxy.image import ImageSource, ReadBufWrapper, is_single_color_image | |
22 | from mapproxy.image import peek_image_format | |
21 | from mapproxy.image import ( | |
22 | ImageSource, | |
23 | BlankImageSource, | |
24 | ReadBufWrapper, | |
25 | is_single_color_image, | |
26 | peek_image_format, | |
27 | _make_transparent as make_transparent, | |
28 | SubImageSource, | |
29 | img_has_transparency, | |
30 | quantize, | |
31 | ) | |
23 | 32 | from mapproxy.image.merge import merge_images, BandMerger |
24 | from mapproxy.image import _make_transparent as make_transparent, SubImageSource, img_has_transparency, quantize | |
25 | 33 | from mapproxy.image.opts import ImageOptions |
26 | 34 | from mapproxy.image.tile import TileMerger, TileSplitter |
27 | 35 | from mapproxy.image.transform import ImageTransformer |
310 | 318 | (10*10, (127, 127, 255, 255)), |
311 | 319 | ]) |
312 | 320 | |
321 | def test_merge_L(self): | |
322 | img1 = ImageSource(Image.new('RGBA', (10, 10), (255, 0, 255, 255))) | |
323 | img2 = ImageSource(Image.new('L', (10, 10), 100)) | |
324 | ||
325 | # img2 overlays img1 | |
326 | result = merge_images([img1, img2], ImageOptions(transparent=True)) | |
327 | img = result.as_image() | |
328 | assert_img_colors_eq(img, [ | |
329 | (10*10, (100, 100, 100, 255)), | |
330 | ]) | |
331 | ||
313 | 332 | def test_paletted_merge(self): |
314 | 333 | if not hasattr(Image, 'FASTOCTREE'): |
315 | 334 | raise SkipTest() |
345 | 364 | result = merge_images([img1, img2], ImageOptions(transparent=False)) |
346 | 365 | img = result.as_image() |
347 | 366 | eq_(img.getpixel((0, 0)), (0, 255, 255)) |
367 | ||
368 | def test_merge_rgb_with_transp(self): | |
369 | img1 = ImageSource(Image.new('RGB', (10, 10), (255, 0, 255))) | |
370 | raw = Image.new('RGB', (10, 10), (0, 255, 255)) | |
371 | raw.info = {'transparency': (0, 255, 255)} # make full transparent | |
372 | img2 = ImageSource(raw) | |
373 | ||
374 | result = merge_images([img1, img2], ImageOptions(transparent=False)) | |
375 | img = result.as_image() | |
376 | eq_(img.getpixel((0, 0)), (255, 0, 255)) | |
348 | 377 | |
349 | 378 | |
350 | 379 | class TestLayerCompositeMerge(object): |
581 | 610 | self.img1 = ImageSource(Image.new('RGB', (10, 10), (100, 110, 120))) |
582 | 611 | self.img2 = ImageSource(Image.new('RGB', (10, 10), (200, 210, 220))) |
583 | 612 | self.img3 = ImageSource(Image.new('RGB', (10, 10), (0, 255, 0))) |
613 | self.blank = BlankImageSource(size=(10, 10), image_opts=ImageOptions()) | |
584 | 614 | |
585 | 615 | def test_merge_noops(self): |
586 | 616 | """ |
594 | 624 | eq_(img.size, (10, 10)) |
595 | 625 | eq_(img.getpixel((0, 0)), (0, 0, 0)) |
596 | 626 | |
597 | def test_merge_no_source(self): | |
598 | """ | |
599 | Check that empty source list returns BlankImageSource. | |
627 | def test_merge_missing_source(self): | |
628 | """ | |
629 | Check that empty source list or source list with missing images | |
630 | returns BlankImageSource. | |
600 | 631 | """ |
601 | 632 | merger = BandMerger(mode='RGB') |
602 | 633 | merger.add_ops(dst_band=0, src_img=0, src_band=0) |
634 | merger.add_ops(dst_band=1, src_img=1, src_band=0) | |
635 | merger.add_ops(dst_band=2, src_img=2, src_band=0) | |
603 | 636 | |
604 | 637 | img_opts = ImageOptions('RGBA', transparent=True) |
605 | 638 | result = merger.merge([], img_opts, size=(10, 10)) |
607 | 640 | |
608 | 641 | eq_(img.size, (10, 10)) |
609 | 642 | eq_(img.getpixel((0, 0)), (255, 255, 255, 0)) |
643 | ||
644 | result = merger.merge([self.img0, self.img1], img_opts, size=(10, 10)) | |
645 | img = result.as_image() | |
646 | ||
647 | eq_(img.size, (10, 10)) | |
648 | eq_(img.getpixel((0, 0)), (255, 255, 255, 0)) | |
649 | ||
610 | 650 | |
611 | 651 | def test_rgb_merge(self): |
612 | 652 | """ |
12 | 12 | # See the License for the specific language governing permissions and |
13 | 13 | # limitations under the License. |
14 | 14 | |
15 | from mapproxy.compat.image import Image | |
15 | from mapproxy.compat.image import Image, ImageDraw | |
16 | 16 | from mapproxy.srs import SRS |
17 | 17 | from mapproxy.image import ImageSource |
18 | 18 | from mapproxy.image.opts import ImageOptions |
19 | 19 | from mapproxy.image.mask import mask_image_source_from_coverage |
20 | from mapproxy.image.merge import LayerMerger | |
20 | 21 | from mapproxy.util.coverage import load_limited_to |
21 | from mapproxy.test.image import assert_img_colors_eq | |
22 | from mapproxy.test.image import assert_img_colors_eq, create_image | |
23 | from nose.tools import eq_ | |
22 | 24 | |
23 | 25 | try: |
24 | 26 | from shapely.geometry import Polygon |
72 | 74 | geom = 'POLYGON((2 2, 2 8, 8 8, 8 2, 2 2), (4 4, 4 6, 6 6, 6 4, 4 4))' |
73 | 75 | |
74 | 76 | result = mask_image_source_from_coverage(img, [0, 0, 10, 10], SRS(4326), coverage(geom)) |
75 | # 60*61 - 20*21 = 3240 | |
77 | # 60*60 - 20*20 = 3200 | |
76 | 78 | assert_img_colors_eq(result.as_image().getcolors(), |
77 | [(10000-3240, (255, 255, 255, 0)), (3240, (100, 0, 200, 255))]) | |
79 | [(10000-3200, (255, 255, 255, 0)), (3200, (100, 0, 200, 255))]) | |
78 | 80 | |
79 | 81 | def test_shapely_mask_with_transform_partial_image_transparent(self): |
80 | 82 | img = ImageSource(Image.new('RGB', (100, 100), color=(100, 0, 200)), |
86 | 88 | # 20*20 = 400 |
87 | 89 | assert_img_colors_eq(result.as_image().getcolors(), |
88 | 90 | [(10000-400, (255, 255, 255, 0)), (400, (100, 0, 200, 255))]) |
91 | ||
92 | ||
93 | class TestLayerCoverageMerge(object): | |
94 | def setup(self): | |
95 | self.coverage1 = coverage(Polygon([(0, 0), (0, 10), (10, 10), (10, 0)]), 3857) | |
96 | self.coverage2 = coverage([2, 2, 8, 8], 3857) | |
97 | ||
98 | def test_merge_single_coverage(self): | |
99 | merger = LayerMerger() | |
100 | merger.add(ImageSource(Image.new('RGB', (10, 10), (255, 255, 255))), self.coverage1) | |
101 | result = merger.merge(image_opts=ImageOptions(transparent=True), bbox=(5, 0, 15, 10), bbox_srs=3857) | |
102 | img = result.as_image() | |
103 | eq_(img.mode, 'RGBA') | |
104 | eq_(img.getpixel((4, 0)), (255, 255, 255, 255)) | |
105 | eq_(img.getpixel((6, 0)), (255, 255, 255, 0)) | |
106 | ||
107 | def test_merge_overlapping_coverage(self): | |
108 | color1 = (255, 255, 0) | |
109 | color2 = (0, 255, 255) | |
110 | merger = LayerMerger() | |
111 | merger.add(ImageSource(Image.new('RGB', (10, 10), color1)), self.coverage1) | |
112 | merger.add(ImageSource(Image.new('RGB', (10, 10), color2)), self.coverage2) | |
113 | ||
114 | result = merger.merge(image_opts=ImageOptions(), bbox=(0, 0, 10, 10), bbox_srs=3857) | |
115 | img = result.as_image() | |
116 | eq_(img.mode, 'RGB') | |
117 | ||
118 | expected = create_image((10, 10), color1, 'RGB') | |
119 | draw = ImageDraw.Draw(expected) | |
120 | draw.polygon([(2, 2), (7, 2), (7, 7), (2, 7)], fill=color2) | |
121 | ||
122 | for x in range(0, 9): | |
123 | for y in range(0, 9): | |
124 | eq_(img.getpixel((x, y)), expected.getpixel((x, y))) | |
125 |
21 | 21 | from mapproxy.request.wms import (wms_request, WMSMapRequest, WMSMapRequestParams, |
22 | 22 | WMS111MapRequest, WMS100MapRequest, WMS130MapRequest, |
23 | 23 | WMS111FeatureInfoRequest) |
24 | from mapproxy.request.arcgis import ArcGISRequest | |
24 | from mapproxy.request.arcgis import ArcGISRequest, ArcGISIdentifyRequest | |
25 | 25 | from mapproxy.exception import RequestError |
26 | 26 | from mapproxy.request.wms.exception import (WMS111ExceptionHandler, WMSImageExceptionHandler, |
27 | 27 | WMSBlankExceptionHandler) |
230 | 230 | req.params.bboxSR = SRS("EPSG:4326") |
231 | 231 | eq_("4326", req.params.bboxSR) |
232 | 232 | eq_("4326", req.params["bboxSR"]) |
233 | ||
234 | def check_endpoint(self, url, expected): | |
235 | req = ArcGISRequest(url=url) | |
236 | eq_(req.url, expected) | |
237 | ||
238 | def test_endpoint_urls(self): | |
239 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer/', 'http://example.com/ArcGIS/rest/MapServer/export' | |
240 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer', 'http://example.com/ArcGIS/rest/MapServer/export' | |
241 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer/export', 'http://example.com/ArcGIS/rest/MapServer/export' | |
242 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/', 'http://example.com/ArcGIS/rest/ImageServer/exportImage' | |
243 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer', 'http://example.com/ArcGIS/rest/ImageServer/exportImage' | |
244 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/export', 'http://example.com/ArcGIS/rest/ImageServer/exportImage' | |
245 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/exportImage', 'http://example.com/ArcGIS/rest/ImageServer/exportImage' | |
246 | ||
247 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer/export?param=foo', 'http://example.com/ArcGIS/rest/MapServer/export?param=foo' | |
248 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/export?param=foo', 'http://example.com/ArcGIS/rest/ImageServer/exportImage?param=foo' | |
249 | ||
250 | ||
251 | class TestArcGISIdentifyRequest(object): | 
252 | def test_base_request(self): | |
253 | req = ArcGISIdentifyRequest(url="http://example.com/ArcGIS/rest/MapServer/") | |
254 | eq_("http://example.com/ArcGIS/rest/MapServer/identify", req.url) | |
255 | req.params.bbox = [-180.0, -90.0, 180.0, 90.0] | |
256 | eq_((-180.0, -90.0, 180.0, 90.0), req.params.bbox) | |
257 | eq_("-180.0,-90.0,180.0,90.0", req.params["mapExtent"]) | |
258 | req.params.size = [256, 256] | |
259 | eq_((256, 256), req.params.size) | |
260 | eq_("256,256,96", req.params["imageDisplay"]) | |
261 | req.params.srs = "EPSG:4326" | |
262 | eq_("EPSG:4326", req.params.srs) | |
263 | eq_("4326", req.params["sr"]) | |
264 | ||
265 | def check_endpoint(self, url, expected): | |
266 | req = ArcGISIdentifyRequest(url=url) | |
267 | eq_(req.url, expected) | |
268 | ||
269 | def test_endpoint_urls(self): | |
270 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer/', 'http://example.com/ArcGIS/rest/MapServer/identify' | |
271 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer', 'http://example.com/ArcGIS/rest/MapServer/identify' | |
272 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer/export', 'http://example.com/ArcGIS/rest/MapServer/identify' | |
273 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/', 'http://example.com/ArcGIS/rest/ImageServer/identify' | |
274 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer', 'http://example.com/ArcGIS/rest/ImageServer/identify' | |
275 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/export', 'http://example.com/ArcGIS/rest/ImageServer/identify' | |
276 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/exportImage', 'http://example.com/ArcGIS/rest/ImageServer/identify' | |
277 | ||
278 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/MapServer/export?param=foo', 'http://example.com/ArcGIS/rest/MapServer/identify?param=foo' | |
279 | yield self.check_endpoint, 'http://example.com/ArcGIS/rest/ImageServer/export?param=foo', 'http://example.com/ArcGIS/rest/ImageServer/identify?param=foo' | |
233 | 280 | |
234 | 281 | |
235 | 282 | class TestRequest(object): |
153 | 153 | |
154 | 154 | def test_seed_full_bbox_continue(self): |
155 | 155 | task = self.make_bbox_task([-180, -90, 180, 90], SRS(4326), [0, 1, 2]) |
156 | seed_progress = SeedProgress([(0, 1), (0, 2)]) | |
156 | seed_progress = SeedProgress([(0, 1), (1, 2)]) | |
157 | 157 | seeder = TileWalker(task, self.seed_pool, handle_uncached=True, seed_progress=seed_progress) |
158 | 158 | seeder.walk() |
159 | 159 | |
262 | 262 | before_timestamp_from_options({'minutes': 15}) + 60 * 15, |
263 | 263 | time.time(), -1 |
264 | 264 | ) |
265 | ||
266 | class TestSeedProgress(object): | |
267 | def test_progress_identifier(self): | |
268 | old = SeedProgress() | |
269 | with old.step_down(0, 2): | |
270 | with old.step_down(0, 4): | |
271 | eq_(old.current_progress_identifier(), [(0, 2), (0, 4)]) | |
272 | # previous leaves are still present | 
273 | eq_(old.current_progress_identifier(), [(0, 2), (0, 4)]) | |
274 | with old.step_down(1, 4): | |
275 | eq_(old.current_progress_identifier(), [(0, 2), (1, 4)]) | |
276 | eq_(old.current_progress_identifier(), [(0, 2), (1, 4)]) | |
277 | ||
278 | eq_(old.current_progress_identifier(), []) # empty list after seed | |
279 | ||
280 | with old.step_down(1, 2): | |
281 | eq_(old.current_progress_identifier(), [(1, 2)]) | |
282 | with old.step_down(0, 4): | |
283 | with old.step_down(1, 4): | |
284 | eq_(old.current_progress_identifier(), [(1, 2), (0, 4), (1, 4)]) | |
285 | ||
286 | def test_already_processed(self): | |
287 | new = SeedProgress([(0, 2)]) | |
288 | with new.step_down(0, 2): | |
289 | assert not new.already_processed() | |
290 | with new.step_down(0, 2): | |
291 | assert not new.already_processed() | |
292 | ||
293 | new = SeedProgress([(1, 2)]) | |
294 | with new.step_down(0, 2): | |
295 | assert new.already_processed() | |
296 | with new.step_down(0, 2): | |
297 | assert new.already_processed() | |
298 | ||
299 | ||
300 | new = SeedProgress([(0, 2), (1, 4), (2, 4)]) | |
301 | with new.step_down(0, 2): | |
302 | assert not new.already_processed() | |
303 | with new.step_down(0, 4): | |
304 | assert new.already_processed() | |
305 | with new.step_down(1, 4): | |
306 | assert not new.already_processed() | |
307 | with new.step_down(1, 4): | |
308 | assert new.already_processed() | |
309 | with new.step_down(2, 4): | |
310 | assert not new.already_processed() | |
311 | with new.step_down(3, 4): | |
312 | assert not new.already_processed() | |
313 | with new.step_down(2, 4): | |
314 | assert not new.already_processed() |
14 | 14 | |
15 | 15 | from __future__ import with_statement, division |
16 | 16 | |
17 | from mapproxy.layer import MapQuery | |
17 | from mapproxy.layer import MapQuery, InfoQuery | |
18 | 18 | from mapproxy.srs import SRS |
19 | 19 | from mapproxy.service.wms import combined_layers |
20 | 20 | from nose.tools import eq_ |
75 | 75 | eq_(combined[1].client.request_template.params.layers, ['c', 'd']) |
76 | 76 | eq_(combined[2].client.request_template.params.layers, ['e', 'f']) |
77 | 77 | |
78 | ||
79 | class TestInfoQuery(object): | |
80 | def test_coord(self): | |
81 | query = InfoQuery((8, 50, 9, 51), (400, 1000), | |
82 | SRS(4326), (100, 600), 'text/plain') | |
83 | eq_(query.coord, (8.25, 50.4)) |
91 | 91 | raise |
92 | 92 | if len(args[0]) == 1: |
93 | 93 | eventlet.sleep() |
94 | return _result_iter([call(*zip(*args)[0])], use_result_objects) | |
94 | return _result_iter([call(*list(zip(*args))[0])], use_result_objects) | |
95 | 95 | pool = eventlet.greenpool.GreenPool(self.size) |
96 | 96 | return _result_iter(pool.imap(call, *args), use_result_objects) |
97 | 97 |
24 | 24 | load_polygon_lines, |
25 | 25 | transform_geometry, |
26 | 26 | bbox_polygon, |
27 | EmptyGeometryError, | |
27 | 28 | ) |
28 | 29 | from mapproxy.srs import SRS |
29 | 30 | |
38 | 39 | # missing Shapely is handled by require_geom_support |
39 | 40 | pass |
40 | 41 | |
41 | def coverage(geom, srs): | |
42 | def coverage(geom, srs, clip=False): | |
42 | 43 | if isinstance(geom, (list, tuple)): |
43 | return BBOXCoverage(geom, srs) | |
44 | return BBOXCoverage(geom, srs, clip=clip) | |
44 | 45 | else: |
45 | return GeomCoverage(geom, srs) | |
46 | return GeomCoverage(geom, srs, clip=clip) | |
46 | 47 | |
47 | 48 | def load_limited_to(limited_to): |
48 | 49 | require_geom_support() |
106 | 107 | return '<MultiCoverage %r: %r>' % (self.extent.llbbox, self.coverages) |
107 | 108 | |
108 | 109 | class BBOXCoverage(object): |
109 | clip = False | |
110 | def __init__(self, bbox, srs): | |
110 | def __init__(self, bbox, srs, clip=False): | |
111 | 111 | self.bbox = bbox |
112 | 112 | self.srs = srs |
113 | 113 | self.geom = None |
114 | self.clip = clip | |
114 | 115 | |
115 | 116 | @property |
116 | 117 | def extent(self): |
138 | 139 | |
139 | 140 | if intersection[0] >= intersection[2] or intersection[1] >= intersection[3]: |
140 | 141 | return None |
141 | return BBOXCoverage(intersection, self.srs) | |
142 | return BBOXCoverage(intersection, self.srs, clip=self.clip) | |
142 | 143 | |
143 | 144 | def contains(self, bbox, srs): |
144 | 145 | bbox = self._bbox_in_coverage_srs(bbox, srs) |
149 | 150 | return self |
150 | 151 | |
151 | 152 | bbox = self.srs.transform_bbox_to(srs, self.bbox) |
152 | return BBOXCoverage(bbox, srs) | |
153 | return BBOXCoverage(bbox, srs, clip=self.clip) | |
153 | 154 | |
154 | 155 | def __eq__(self, other): |
155 | 156 | if not isinstance(other, BBOXCoverage): |
217 | 218 | return self |
218 | 219 | |
219 | 220 | geom = transform_geometry(self.srs, srs, self.geom) |
220 | return GeomCoverage(geom, srs) | |
221 | return GeomCoverage(geom, srs, clip=self.clip) | |
221 | 222 | |
222 | 223 | def intersects(self, bbox, srs): |
223 | 224 | bbox = self._geom_in_coverage_srs(bbox, srs) |
226 | 227 | |
227 | 228 | def intersection(self, bbox, srs): |
228 | 229 | bbox = self._geom_in_coverage_srs(bbox, srs) |
229 | return GeomCoverage(self.geom.intersection(bbox), self.srs) | |
230 | return GeomCoverage(self.geom.intersection(bbox), self.srs, clip=self.clip) | |
230 | 231 | |
231 | 232 | def contains(self, bbox, srs): |
232 | 233 | bbox = self._geom_in_coverage_srs(bbox, srs) |
254 | 255 | return not self.__eq__(other) |
255 | 256 | |
256 | 257 | def __repr__(self): |
257 | return '<GeomCoverage %r: %r>' % (self.extent.llbbox, self.geom)⏎ | |
258 | return '<GeomCoverage %r: %r>' % (self.extent.llbbox, self.geom) | |
259 | ||
260 | def union_coverage(coverages, clip=None): | |
261 | """ | |
262 | Create a coverage that is the union of all `coverages`. | |
263 | Resulting coverage is in the SRS of the first coverage. | |
264 | """ | |
265 | srs = coverages[0].srs | |
266 | ||
267 | coverages = [c.transform_to(srs) for c in coverages] | |
268 | ||
269 | geoms = [] | |
270 | for c in coverages: | |
271 | if isinstance(c, BBOXCoverage): | |
272 | geoms.append(bbox_polygon(c.bbox)) | |
273 | else: | |
274 | geoms.append(c.geom) | |
275 | ||
276 | import shapely.ops | |
277 | union = shapely.ops.cascaded_union(geoms) | |
278 | ||
279 | return GeomCoverage(union, srs=srs, clip=clip) | |
280 | ||
281 | def diff_coverage(coverages, clip=None): | |
282 | """ | |
283 | Create a coverage by subtracting all `coverages` from the first one. | |
284 | Resulting coverage is in the SRS of the first coverage. | |
285 | """ | |
286 | srs = coverages[0].srs | |
287 | ||
288 | coverages = [c.transform_to(srs) for c in coverages] | |
289 | ||
290 | geoms = [] | |
291 | for c in coverages: | |
292 | if isinstance(c, BBOXCoverage): | |
293 | geoms.append(bbox_polygon(c.bbox)) | |
294 | else: | |
295 | geoms.append(c.geom) | |
296 | import shapely.ops  # needed here: the import in union_coverage is function-local | |
297 | sub = shapely.ops.cascaded_union(geoms[1:]) | |
298 | diff = geoms[0].difference(sub) | |
299 | ||
300 | if diff.is_empty: | |
301 | raise EmptyGeometryError("diff did not return any geometry") | |
302 | ||
303 | return GeomCoverage(diff, srs=srs, clip=clip) | |
304 | ||
305 | def intersection_coverage(coverages, clip=None): | |
306 | """ | |
307 | Create a coverage by creating the intersection of all `coverages`. | |
308 | Resulting coverage is in the SRS of the first coverage. | |
309 | """ | |
310 | srs = coverages[0].srs | |
311 | ||
312 | coverages = [c.transform_to(srs) for c in coverages] | |
313 | ||
314 | geoms = [] | |
315 | for c in coverages: | |
316 | if isinstance(c, BBOXCoverage): | |
317 | geoms.append(bbox_polygon(c.bbox)) | |
318 | else: | |
319 | geoms.append(c.geom) | |
320 | from functools import reduce  # reduce is not a builtin on Python 3 | |
321 | intersection = reduce(lambda a, b: a.intersection(b), geoms) | |
322 | ||
323 | if intersection.is_empty: | |
324 | raise EmptyGeometryError("intersection did not return any geometry") | |
325 | ||
326 | return GeomCoverage(intersection, srs=srs, clip=clip)⏎ |
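The three set-operation helpers above (`union_coverage`, `diff_coverage`, `intersection_coverage`) follow one pattern: transform every coverage to the first coverage's SRS, turn `BBOXCoverage` instances into polygons, then fold the geometries with a Shapely set operation. The fold in `intersection_coverage` can be sketched without Shapely using plain `(minx, miny, maxx, maxy)` tuples — `bbox_intersection` and `intersect_all` are illustrative names for this sketch, not MapProxy API:

```python
from functools import reduce

def bbox_intersection(a, b):
    # Intersect two (minx, miny, maxx, maxy) boxes; None if disjoint.
    minx, miny = max(a[0], b[0]), max(a[1], b[1])
    maxx, maxy = min(a[2], b[2]), min(a[3], b[3])
    if minx >= maxx or miny >= maxy:
        return None
    return (minx, miny, maxx, maxy)

def intersect_all(bboxes):
    # Same fold as reduce(lambda a, b: a.intersection(b), geoms) above,
    # propagating an empty result instead of a Shapely empty geometry.
    result = reduce(lambda a, b: bbox_intersection(a, b) if a else None, bboxes)
    if result is None:
        raise ValueError('intersection did not return any geometry')
    return result
```

With real geometries the intersection can be an arbitrary polygon; with axis-aligned boxes it stays a box, which is why the tuple sketch suffices here.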
603 | 603 | _log('info', ' * Restarting with reloader') |
604 | 604 | |
605 | 605 | args = [sys.executable] + sys.argv |
606 | # pip installs commands as .exe, but sys.argv[0] | |
607 | # can miss the prefix. add .exe to avoid file-not-found | |
608 | # in subprocess call | |
609 | if os.name == 'nt' and '.' not in args[1]: | |
610 | args[1] = args[1] + '.exe' | |
611 | ||
606 | if os.name == 'nt': | |
607 | # pip installs commands as .exe, but sys.argv[0] | |
608 | # can miss the prefix. | |
609 | # Add .exe to avoid file-not-found in subprocess call. | |
610 | # Also, recent pip versions create .exe commands that are not | |
611 | # executable by Python, but there is a -script.py which | |
612 | # we need to call in this case. Check for this first. | |
613 | if os.path.exists(args[1] + '-script.py'): | |
614 | args[1] = args[1] + '-script.py' | |
615 | elif not args[1].endswith('.exe'): | |
616 | args[1] = args[1] + '.exe' | |
612 | 617 | new_environ = os.environ.copy() |
613 | 618 | new_environ['WERKZEUG_RUN_MAIN'] = 'true' |
614 | 619 |
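The Windows branch above resolves `sys.argv[0]` to something the reloader can re-exec: pip may install a command either as `<cmd>.exe` or with a `<cmd>-script.py` wrapper that must be run through Python. The lookup order can be sketched as a pure function — `resolve_windows_command` and the injected `exists` predicate are illustrative, not part of MapProxy:

```python
def resolve_windows_command(cmd, exists):
    # Prefer the pip-generated '<cmd>-script.py' wrapper; otherwise
    # make sure the command carries the .exe suffix.
    if exists(cmd + '-script.py'):
        return cmd + '-script.py'
    if not cmd.endswith('.exe'):
        return cmd + '.exe'
    return cmd
```

Injecting `exists` (instead of calling `os.path.exists` directly) keeps the lookup testable without touching the filesystem.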
13 | 13 | md = cap.metadata() |
14 | 14 | eq_(md['name'], 'OGC:WMS') |
15 | 15 | eq_(md['title'], 'Omniscale OpenStreetMap WMS') |
16 | eq_(md['access_constraints'], 'This service is intended for private and evaluation use only. The data is licensed as Creative Commons Attribution-Share Alike 2.0 (http://creativecommons.org/licenses/by-sa/2.0/)') | |
16 | eq_(md['access_constraints'], 'Here be dragons.') | |
17 | 17 | eq_(md['fees'], 'none') |
18 | 18 | eq_(md['online_resource'], 'http://omniscale.de/') |
19 | 19 | eq_(md['abstract'], 'Omniscale OpenStreetMap WMS (powered by MapProxy)') |
27 | 27 | <ContactElectronicMailAddress>osm@omniscale.de</ContactElectronicMailAddress> |
28 | 28 | </ContactInformation> |
29 | 29 | <Fees>none</Fees> |
30 | <AccessConstraints>This service is intended for private and evaluation use only. The data is licensed as Creative Commons Attribution-Share Alike 2.0 (http://creativecommons.org/licenses/by-sa/2.0/)</AccessConstraints> | |
30 | <AccessConstraints>Here be dragons.</AccessConstraints> | |
31 | 31 | </Service> |
32 | 32 | <Capability> |
33 | 33 | <Request> |
15 | 15 | from __future__ import division, with_statement |
16 | 16 | |
17 | 17 | import os |
18 | import json | |
18 | 19 | import codecs |
19 | 20 | from functools import partial |
20 | 21 | from contextlib import closing |
21 | 22 | |
23 | from mapproxy.grid import tile_grid | |
22 | 24 | from mapproxy.compat import string_type |
23 | 25 | |
24 | 26 | import logging |
54 | 56 | |
55 | 57 | Returns a list of Shapely Polygons. |
56 | 58 | """ |
57 | # check if it is a wkt file | |
59 | # check if it is a wkt or geojson file | |
58 | 60 | if os.path.exists(os.path.abspath(datasource)): |
59 | 61 | with open(os.path.abspath(datasource), 'rb') as fp: |
60 | 62 | data = fp.read(50) |
61 | 63 | if data.lower().lstrip().startswith((b'polygon', b'multipolygon')): |
62 | 64 | return load_polygons(datasource) |
63 | ||
65 | # only load geojson directly if we don't have a filter | |
66 | if where is None and data and data.startswith(b'{'): | |
67 | return load_geojson(datasource) | |
64 | 68 | # otherwise pass to OGR |
65 | 69 | return load_ogr_datasource(datasource, where=where) |
66 | 70 | |
110 | 114 | |
111 | 115 | return polygons |
112 | 116 | |
117 | def load_geojson(datasource): | |
118 | with open(datasource) as f: | |
119 | geojson = json.load(f) | |
120 | t = geojson.get('type') | |
121 | if not t: | |
122 | raise CoverageReadError("not a GeoJSON") | |
123 | geometries = [] | |
124 | if t == 'FeatureCollection': | |
125 | for feature in geojson.get('features', []): | |
126 | geom = feature.get('geometry') | |
127 | if geom: | |
128 | geometries.append(geom) | |
129 | elif t == 'Feature': | |
130 | if 'geometry' in geojson: | |
131 | geometries.append(geojson['geometry']) | |
132 | elif t in ('Polygon', 'MultiPolygon'): | |
133 | geometries.append(geojson) | |
134 | else: | |
135 | log_config.warn('skipping feature of type %s from %s: not a Polygon/MultiPolygon', | |
136 | t, datasource) | |
137 | ||
138 | polygons = [] | |
139 | for geom in geometries: | |
140 | geom = shapely.geometry.asShape(geom) | |
141 | if geom.type == 'Polygon': | |
142 | polygons.append(geom) | |
143 | elif geom.type == 'MultiPolygon': | |
144 | for p in geom: | |
145 | polygons.append(p) | |
146 | else: | |
147 | log_config.warn('ignoring non-polygon geometry (%s) from %s', | |
148 | geom.type, datasource) | |
149 | ||
150 | return polygons | |
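`load_geojson` dispatches on the top-level GeoJSON `type`: a `FeatureCollection` contributes each feature's geometry, a single `Feature` its own geometry, and a bare `Polygon`/`MultiPolygon` is taken as-is; anything else is skipped with a warning. The dispatch can be sketched without Shapely — `geojson_polygon_geometries` is an illustrative helper that returns raw geometry dicts instead of Shapely geometries:

```python
def geojson_polygon_geometries(geojson):
    # Collect Polygon/MultiPolygon geometry dicts from a parsed
    # GeoJSON object, mirroring the type dispatch in load_geojson.
    t = geojson.get('type')
    if not t:
        raise ValueError('not a GeoJSON')
    if t == 'FeatureCollection':
        geoms = [f.get('geometry') for f in geojson.get('features', [])]
    elif t == 'Feature':
        geoms = [geojson.get('geometry')]
    elif t in ('Polygon', 'MultiPolygon'):
        geoms = [geojson]
    else:
        geoms = []  # e.g. Point/LineString: skipped with a warning above
    return [g for g in geoms
            if g and g.get('type') in ('Polygon', 'MultiPolygon')]
```

The final filter drops features with null or non-areal geometries, matching the "not a Polygon/MultiPolygon" warning path.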
151 | ||
113 | 152 | def load_polygon_lines(line_iter, source='<string>'): |
114 | 153 | polygons = [] |
115 | 154 | for line in line_iter: |
172 | 211 | transf = partial(transform_xy, from_srs, to_srs) |
173 | 212 | |
174 | 213 | if geometry.type == 'Polygon': |
175 | return transform_polygon(transf, geometry) | |
176 | ||
177 | if geometry.type == 'MultiPolygon': | |
178 | return transform_multipolygon(transf, geometry) | |
179 | ||
180 | raise ValueError('cannot transform %s' % geometry.type) | |
214 | result = transform_polygon(transf, geometry) | |
215 | elif geometry.type == 'MultiPolygon': | |
216 | result = transform_multipolygon(transf, geometry) | |
217 | else: | |
218 | raise ValueError('cannot transform %s' % geometry.type) | |
219 | ||
220 | if not result.is_valid: | |
221 | result = result.buffer(0) | |
222 | return result | |
181 | 223 | |
182 | 224 | def transform_polygon(transf, polygon): |
183 | 225 | ext = transf(polygon.exterior.xy) |
215 | 257 | |
216 | 258 | return [] |
217 | 259 | |
218 | ||
260 | def load_expire_tiles(expire_dir, grid=None): | |
261 | if grid is None: | |
262 | grid = tile_grid(3857, origin='nw') | |
263 | tiles = set() | |
264 | ||
265 | def parse(filename): | |
266 | with open(filename) as f: | |
267 | try: | |
268 | for line in f: | |
269 | if not line.strip(): | |
270 | continue | |
271 | tile = tuple(map(int, line.split('/'))) | |
272 | tiles.add(tile) | |
273 | except ValueError: | |
274 | log_config.warn('found error in %s, skipping rest of file', filename) | |
275 | ||
276 | if os.path.isdir(expire_dir): | |
277 | for root, dirs, files in os.walk(expire_dir): | |
278 | for name in files: | |
279 | filename = os.path.join(root, name) | |
280 | parse(filename) | |
281 | else: | |
282 | parse(expire_dir) | |
283 | ||
284 | boxes = [] | |
285 | for tile in tiles: | |
286 | z, x, y = tile | |
287 | boxes.append(shapely.geometry.box(*grid.tile_bbox((x, y, z)))) | |
288 | ||
289 | return boxes |
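`load_expire_tiles` reads osm2pgsql-style expiry files, one `z/x/y` tile per line, and abandons a file at the first malformed line. The line parsing can be sketched as a standalone function — `parse_expire_lines` is an illustrative name; the real code additionally maps each tile to a bbox polygon via the grid:

```python
def parse_expire_lines(lines):
    # Parse 'z/x/y' entries into a set of (z, x, y) tuples;
    # stop at the first corrupt line, like parse() above.
    tiles = set()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            z, x, y = map(int, line.split('/'))
        except ValueError:
            break  # malformed entry: skip rest of the file
        tiles.add((z, x, y))
    return tiles
```

Collecting into a set also deduplicates tiles that appear in more than one expiry file.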
69 | 69 | |
70 | 70 | def memoize(func): |
71 | 71 | @wraps(func) |
72 | def wrapper(self, *args): | |
72 | def wrapper(self, *args, **kwargs): | |
73 | 73 | if not hasattr(self, '__memoize_cache'): |
74 | 74 | self.__memoize_cache = {} |
75 | 75 | cache = self.__memoize_cache.setdefault(func, {}) |
76 | if args not in cache: | |
77 | cache[args] = func(self, *args) | |
78 | return cache[args] | |
76 | key = args + tuple(sorted(kwargs.items()))  # sort so kwarg order can't miss the cache | |
77 | if key not in cache: | |
78 | cache[key] = func(self, *args, **kwargs) | |
79 | return cache[key] | |
79 | 80 | return wrapper |
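The change above extends the per-instance `memoize` decorator to cover keyword arguments by folding the kwargs into the cache key. A self-contained sketch of the pattern, with the kwargs sorted so that keyword order does not cause spurious cache misses (`_memoize_cache` is this sketch's attribute name, not necessarily MapProxy's):

```python
from functools import wraps

def memoize(func):
    # Cache method results per instance, keyed on both positional
    # and keyword arguments.
    @wraps(func)
    def wrapper(self, *args, **kwargs):
        if not hasattr(self, '_memoize_cache'):
            self._memoize_cache = {}
        cache = self._memoize_cache.setdefault(func, {})
        key = args + tuple(sorted(kwargs.items()))
        if key not in cache:
            cache[key] = func(self, *args, **kwargs)
        return cache[key]
    return wrapper
```

Storing the cache on `self` (rather than in the decorator) keeps cached results scoped to each instance's lifetime instead of leaking across instances.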
80 | 81 |
10 | 10 | from scriptine.shell import backtick_, sh |
11 | 11 | |
12 | 12 | PACKAGE_NAME = 'MapProxy' |
13 | REMOTE_DOC_LOCATION = 'omniscale.de:domains/mapproxy.org/docs' | |
14 | REMOTE_REL_LOCATION = 'omniscale.de:domains/mapproxy.org/static/rel' | |
13 | REMOTE_DOC_LOCATION = 'mapproxy.org:/opt/www/mapproxy.org/docs' | |
14 | REMOTE_REL_LOCATION = 'mapproxy.org:/opt/www/mapproxy.org/static/rel' | |
15 | 15 | |
16 | 16 | VERSION_FILES = [ |
17 | 17 | ('setup.py', 'version="###"'), |
77 | 77 | remote_rel_location = REMOTE_REL_LOCATION |
78 | 78 | sh('scp dist/MapProxy-%(ver)s.* %(remote_rel_location)s' % locals()) |
79 | 79 | |
80 | def upload_test_sdist_command(): | |
81 | date = backtick_('date +%Y%m%d').strip() | |
82 | print('python setup.py egg_info -R -D -b ".dev%s" register -r testpypi sdist upload -r testpypi' % (date, )) | |
83 | ||
80 | 84 | def upload_final_sdist_command(): |
81 | 85 | sh('python setup.py egg_info -b "" -D sdist upload') |
82 | 86 |
0 | WebTest==2.0.10 | |
1 | lxml==3.2.4 | |
2 | nose==1.3.0 | |
3 | Shapely==1.5.8 | |
4 | PyYAML==3.10 | |
5 | Pillow==2.8.1 | |
6 | WebOb==1.2.3 | |
7 | beautifulsoup4==4.4.0 | |
8 | coverage==3.7 | |
9 | requests==2.0.1 | |
10 | six==1.4.1 | |
11 | waitress==0.8.7 | |
0 | WebTest==2.0.25 | |
1 | lxml==3.7.3 | |
2 | nose==1.3.7 | |
3 | Shapely==1.5.17 | |
4 | PyYAML==3.12 | |
5 | Pillow==4.0.0 | |
6 | WebOb==1.7.1 | |
7 | coverage==4.3.4 | |
8 | requests==2.13.0 | |
9 | boto3==1.4.4 | |
10 | moto==0.4.31 | |
11 | eventlet==0.20.1 | |
12 | beautifulsoup4==4.5.3 | |
13 | boto==2.46.1 | |
14 | botocore==1.5.14 | |
15 | docutils==0.13.1 | |
16 | enum-compat==0.0.2 | |
17 | futures==3.0.5 | |
18 | greenlet==0.4.12 | |
19 | httpretty==0.8.10 | |
20 | Jinja2==2.9.5 | |
21 | jmespath==0.9.1 | |
22 | MarkupSafe==0.23 | |
23 | olefile==0.44 | |
24 | python-dateutil==2.6.0 | |
25 | pytz==2016.10 | |
26 | s3transfer==0.1.10 | |
27 | six==1.10.0 | |
28 | waitress==1.0.2 | |
29 | Werkzeug==0.11.15 | |
30 | xmltodict==0.10.2 | |
31 | redis==2.10.5 |
53 | 53 | |
54 | 54 | setup( |
55 | 55 | name='MapProxy', |
56 | version="1.8.2a0", | |
56 | version="1.10.0a0", | |
57 | 57 | description='An accelerating proxy for web map services', |
58 | 58 | long_description=long_description(7), |
59 | 59 | author='Oliver Tonnhofer', |
31 | 31 | sphinx-build -b html -d {envtmpdir}/doctrees . {envtmpdir}/html |
32 | 32 | sphinx-build -b latex -d {envtmpdir}/doctrees . {envtmpdir}/latex |
33 | 33 | make -C {envtmpdir}/latex all-pdf |
34 | rsync -a --delete-after {envtmpdir}/html/ {envtmpdir}/latex/MapProxy.pdf ssh-226270-upload@mapproxy.org:domains/mapproxy.org/docs/nightly/ | |
34 | rsync -a --delete-after {envtmpdir}/html/ {envtmpdir}/latex/MapProxy.pdf os@mapproxy.org:/opt/www/mapproxy.org/docs/nightly/ |