Codebase list glance / c52bde0
Merge tag '21.0.0_rc1' into debian/victoria

glance 21.0.0.0rc1 release candidate

meta:version: 21.0.0.0rc1
meta:diff-start: -
meta:series: victoria
meta:release-type: release candidate
meta:pypi: no
meta:first: no
meta:release:Author: Thierry Carrez <thierry@openstack.org>
meta:release:Commit: Abhishek Kekane <akekane@redhat.com>
meta:release:Change-Id: I589a07cbfba551158e68a9035258eeddfc86d634
meta:release:Code-Review+2: Hervé Beraud <hberaud@redhat.com>
meta:release:Code-Review+2: Thierry Carrez <thierry@openstack.org>
meta:release:Workflow+1: Thierry Carrez <thierry@openstack.org>

Thomas Goirand, 3 years ago
176 changed file(s) with 6950 addition(s) and 8854 deletion(s).
2020 name: glance-tox-oslo-tips-base
2121 parent: tox
2222 abstract: true
23 nodeset: ubuntu-bionic
23 nodeset: ubuntu-focal
2424 timeout: 2400
2525 description: Abstract job for Glance vs. oslo libraries
2626 # NOTE(rosmaita): we only need functional test jobs, oslo is
4848 - name: openstack/taskflow
4949
5050 - job:
51 name: glance-tox-functional-py37-oslo-tips
51 name: glance-tox-functional-py38-oslo-tips
5252 parent: glance-tox-oslo-tips-base
5353 description: |
54 Glance py37 functional tests vs. oslo libraries masters
55 vars:
56 python_version: 3.7
57 tox_envlist: functional-py37
54 Glance py38 functional tests vs. oslo libraries masters
55 vars:
56 python_version: 3.8
57 tox_envlist: functional-py38
5858
5959 - job:
6060 name: glance-tox-functional-py36-oslo-tips
6161 parent: glance-tox-oslo-tips-base
6262 description: |
6363 Glance py36 functional tests vs. oslo libraries masters
64 nodeset: ubuntu-bionic
6465 vars:
6566 python_version: 3.6
6667 tox_envlist: functional-py36
6970 name: glance-tox-keystone-tips-base
7071 parent: tox
7172 abstract: true
72 nodeset: ubuntu-bionic
73 nodeset: ubuntu-focal
7374 timeout: 2400
7475 description: Abstract job for Glance vs. keystone
7576 required-projects:
7879 - name: openstack/python-keystoneclient
7980
8081 - job:
81 name: glance-tox-py37-keystone-tips
82 name: glance-tox-py38-keystone-tips
8283 parent: glance-tox-keystone-tips-base
8384 description: |
84 Glance py37 unit tests vs. keystone masters
85 vars:
86 python_version: 3.7
87 tox_envlist: py37
85 Glance py38 unit tests vs. keystone masters
86 vars:
87 python_version: 3.8
88 tox_envlist: py38
8889
8990 - job:
9091 name: glance-tox-py36-keystone-tips
9192 parent: glance-tox-keystone-tips-base
9293 description: |
9394 Glance py36 unit tests vs. keystone masters
95 nodeset: ubuntu-bionic
9496 vars:
9597 python_version: 3.6
9698 tox_envlist: py36
9799
98100 - job:
99 name: glance-tox-functional-py37-keystone-tips
101 name: glance-tox-functional-py38-keystone-tips
100102 parent: glance-tox-keystone-tips-base
101103 description: |
102 Glance py37 functional tests vs. keystone masters
103 vars:
104 python_version: 3.7
105 tox_envlist: functional-py37
104 Glance py38 functional tests vs. keystone masters
105 vars:
106 python_version: 3.8
107 tox_envlist: functional-py38
106108
107109 - job:
108110 name: glance-tox-functional-py36-keystone-tips
109111 parent: glance-tox-keystone-tips-base
110112 description: |
111113 Glance py36 functional tests vs. keystone masters
114 nodeset: ubuntu-bionic
112115 vars:
113116 python_version: 3.6
114117 tox_envlist: functional-py36
117120 name: glance-tox-glance_store-tips-base
118121 parent: tox
119122 abstract: true
120 nodeset: ubuntu-bionic
123 nodeset: ubuntu-focal
121124 timeout: 2400
122125 description: Abstract job for Glance vs. glance_store
123126 required-projects:
124127 - name: openstack/glance_store
125128
126129 - job:
127 name: glance-tox-py37-glance_store-tips
130 name: glance-tox-py38-glance_store-tips
128131 parent: glance-tox-glance_store-tips-base
129132 description: |
130 Glance py37 unit tests vs. glance_store master
131 vars:
132 python_version: 3.7
133 tox_envlist: py37
133 Glance py38 unit tests vs. glance_store master
134 vars:
135 python_version: 3.8
136 tox_envlist: py38
134137
135138 - job:
136139 name: glance-tox-py36-glance_store-tips
137140 parent: glance-tox-glance_store-tips-base
138141 description: |
139142 Glance py36 unit tests vs. glance_store master
143 nodeset: ubuntu-bionic
140144 vars:
141145 python_version: 3.6
142146 tox_envlist: py36
143147
144148 - job:
145 name: glance-tox-functional-py37-glance_store-tips
149 name: glance-tox-functional-py38-glance_store-tips
146150 parent: glance-tox-glance_store-tips-base
147151 description: |
148 Glance py37 functional tests vs. glance_store master
149 vars:
150 python_version: 3.7
151 tox_envlist: functional-py37
152 Glance py38 functional tests vs. glance_store master
153 vars:
154 python_version: 3.8
155 tox_envlist: functional-py38
152156
153157 - job:
154158 name: glance-tox-functional-py36-glance_store-tips
155159 parent: glance-tox-glance_store-tips-base
156160 description: |
157161 Glance py36 functional tests vs. glance_store master
162 nodeset: ubuntu-bionic
158163 vars:
159164 python_version: 3.6
160165 tox_envlist: functional-py36
163168 name: glance-tox-cursive-tips-base
164169 parent: tox
165170 abstract: true
166 nodeset: ubuntu-bionic
171 nodeset: ubuntu-focal
167172 timeout: 2400
168173 description: Abstract job for Glance vs. cursive and related libs
169174 required-projects:
172177 - name: openstack/castellan
173178
174179 - job:
175 name: glance-tox-py37-cursive-tips
180 name: glance-tox-py38-cursive-tips
176181 parent: glance-tox-cursive-tips-base
177182 description: |
178 Glance py37 unit tests vs. cursive (and related libs) master
179 vars:
180 python_version: 3.7
181 tox_envlist: py37
183 Glance py38 unit tests vs. cursive (and related libs) master
184 vars:
185 python_version: 3.8
186 tox_envlist: py38
182187
183188 - job:
184189 name: glance-tox-py36-cursive-tips
185190 parent: glance-tox-cursive-tips-base
186191 description: |
187192 Glance py36 unit tests vs. cursive (and related libs) master
193 nodeset: ubuntu-bionic
188194 vars:
189195 python_version: 3.6
190196 tox_envlist: py36
191197
192198 - job:
193 name: glance-tox-functional-py37-cursive-tips
199 name: glance-tox-functional-py38-cursive-tips
194200 parent: glance-tox-cursive-tips-base
195201 description: |
196 Glance py37 functional tests vs. cursive (and related libs) master
197 vars:
198 python_version: 3.7
199 tox_envlist: functional-py37
202 Glance py38 functional tests vs. cursive (and related libs) master
203 vars:
204 python_version: 3.8
205 tox_envlist: functional-py38
200206
201207 - job:
202208 name: glance-tox-functional-py36-cursive-tips
203209 parent: glance-tox-cursive-tips-base
204210 description: |
205211 Glance py36 functional tests vs. cursive (and related libs) master
212 nodeset: ubuntu-bionic
206213 vars:
207214 python_version: 3.6
208215 tox_envlist: functional-py36
216
217 - job:
218 name: tempest-integrated-storage-import-workflow
219 parent: tempest-integrated-storage
220 description: |
221 The regular tempest-integrated-storage job but with glance metadata injection
222 post-run: playbooks/post-check-metadata-injection.yaml
223 vars:
224 devstack_localrc:
225 GLANCE_STANDALONE: True
226 GLANCE_USE_IMPORT_WORKFLOW: True
227 devstack_local_conf:
228 post-config:
229 $GLANCE_IMAGE_IMPORT_CONF:
230 image_import_opts:
231 image_import_plugins: "['inject_image_metadata', 'image_conversion']"
232 inject_metadata_properties:
233 ignore_user_roles:
234 inject: |
235 "glance_devstack_test":"doyouseeme?"
236 image_conversion:
237 output_format: raw
238
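For orientation, the ``$GLANCE_IMAGE_IMPORT_CONF`` post-config in the job above is rendered by devstack into an INI file. A rough sketch of the resulting ``glance-image-import.conf`` (assuming devstack's usual section/key rendering; the exact output may differ):

```ini
[image_import_opts]
image_import_plugins = ['inject_image_metadata', 'image_conversion']

[inject_metadata_properties]
ignore_user_roles =
inject = "glance_devstack_test":"doyouseeme?"

[image_conversion]
output_format = raw
```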
239 - job:
240 name: tempest-integrated-storage-wsgi-import
241 parent: tempest-integrated-storage
242 description: |
243 The regular tempest-integrated-storage job but with glance in wsgi mode
244 vars:
245 devstack_localrc:
246 GLANCE_STANDALONE: False
247 GLANCE_USE_IMPORT_WORKFLOW: True
248 devstack_local_conf:
249 post-config:
250 $GLANCE_API_CONF:
251 DEFAULT:
252 enabled_import_methods: "[\"copy-image\", \"glance-direct\"]"
253 wsgi:
254 python_interpreter: /usr/bin/python3
255 $GLANCE_IMAGE_IMPORT_CONF:
256 image_import_opts:
257 image_import_plugins: "['image_conversion']"
258 image_conversion:
259 output_format: raw
260
261 - job:
262 name: glance-ceph-thin-provisioning
263 parent: devstack-plugin-ceph-tempest-py3
264 description: |
265 Just like devstack-plugin-ceph-tempest-py3, but with thin provisioning enabled
266 required-projects:
267 - name: openstack/glance_store
268 vars:
269 devstack_local_conf:
270 post-config:
271 $GLANCE_API_CONF:
272 glance_store:
273 rbd_thin_provisioning: True
209274
210275 - project:
211276 templates:
212277 - check-requirements
213278 - integrated-gate-storage
214279 - openstack-lower-constraints-jobs
215 - openstack-python3-ussuri-jobs
280 - openstack-python3-victoria-jobs
216281 - periodic-stable-jobs
217282 - publish-openstack-docs-pti
218283 - release-notes-jobs-python3
219284 check:
220285 jobs:
221286 - openstack-tox-functional-py36
222 - openstack-tox-functional-py37
287 - openstack-tox-functional-py38
223288 - glance-code-constants-check
224 - devstack-plugin-ceph-tempest-py3:
289 - glance-ceph-thin-provisioning:
225290 voting: false
226291 irrelevant-files: &tempest-irrelevant-files
227292 - ^(test-|)requirements.txt$
238303 - ^\.zuul\.yaml$
239304 - tempest-integrated-storage:
240305 irrelevant-files: *tempest-irrelevant-files
306 - tempest-integrated-storage-import-workflow:
307 irrelevant-files: *tempest-irrelevant-files
308 - tempest-integrated-storage-wsgi-import:
309 irrelevant-files: *tempest-irrelevant-files
241310 - grenade:
242311 irrelevant-files: *tempest-irrelevant-files
243312 - tempest-ipv6-only:
244313 irrelevant-files: *tempest-irrelevant-files
314 - nova-ceph-multistore
245315
246316 gate:
247317 jobs:
248318 - openstack-tox-functional-py36
249 - openstack-tox-functional-py37
319 - openstack-tox-functional-py38
250320 - tempest-integrated-storage:
251321 irrelevant-files: *tempest-irrelevant-files
322 - tempest-integrated-storage-import-workflow:
323 irrelevant-files: *tempest-irrelevant-files
252324 - grenade:
253325 irrelevant-files: *tempest-irrelevant-files
254326 - tempest-ipv6-only:
255327 irrelevant-files: *tempest-irrelevant-files
328 - nova-ceph-multistore
256329 experimental:
257330 jobs:
258 - glance-tox-py37-glance_store-tips
331 - glance-tox-py38-glance_store-tips
259332 - glance-tox-py36-glance_store-tips
260 - glance-tox-functional-py37-glance_store-tips
333 - glance-tox-functional-py38-glance_store-tips
261334 - glance-tox-functional-py36-glance_store-tips
262 - barbican-simple-crypto-devstack-tempest
335 - barbican-tempest-plugin-simple-crypto
263336 - grenade-multinode
264337 - tempest-pg-full:
265338 irrelevant-files: *tempest-irrelevant-files
279352 # to define these jobs in the openstack/project-config repo.
280353 # That would make us less agile in adjusting these tests, so we
281354 # aren't doing that either.
282 - glance-tox-functional-py37-oslo-tips:
355 - glance-tox-functional-py38-oslo-tips:
283356 branches: master
284357 - glance-tox-functional-py36-oslo-tips:
285358 branches: master
286 - glance-tox-py37-keystone-tips:
359 - glance-tox-py38-keystone-tips:
287360 branches: master
288361 - glance-tox-py36-keystone-tips:
289362 branches: master
290 - glance-tox-functional-py37-keystone-tips:
363 - glance-tox-functional-py38-keystone-tips:
291364 branches: master
292365 - glance-tox-functional-py36-keystone-tips:
293366 branches: master
294 - glance-tox-py37-glance_store-tips:
367 - glance-tox-py38-glance_store-tips:
295368 branches: master
296369 - glance-tox-py36-glance_store-tips:
297370 branches: master
298 - glance-tox-functional-py37-glance_store-tips:
371 - glance-tox-functional-py38-glance_store-tips:
299372 branches: master
300373 - glance-tox-functional-py36-glance_store-tips:
301374 branches: master
302 - glance-tox-py37-cursive-tips:
375 - glance-tox-py38-cursive-tips:
303376 branches: master
304377 - glance-tox-py36-cursive-tips:
305378 branches: master
306 - glance-tox-functional-py37-cursive-tips:
379 - glance-tox-functional-py38-cursive-tips:
307380 branches: master
308381 - glance-tox-functional-py36-cursive-tips:
309382 branches: master
9191 show_authors = False
9292
9393 # The name of the Pygments (syntax highlighting) style to use.
94 pygments_style = 'sphinx'
94 pygments_style = 'native'
9595
9696 # openstackdocstheme options
97 repository_name = 'openstack/glance'
98 bug_project = 'glance'
99 bug_tag = 'api-ref'
97 openstackdocs_repo_name = 'openstack/glance'
98 openstackdocs_bug_project = 'glance'
99 openstackdocs_bug_tag = 'api-ref'
100100
101101 # -- Options for man page output ----------------------------------------------
102102
139139 # relative to this directory. They are copied after the builtin static files,
140140 # so a file named "default.css" will overwrite the builtin "default.css".
141141 # html_static_path = ['_static']
142
143 # If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
144 # using the given strftime format.
145 # html_last_updated_fmt = '%b %d, %Y'
146 html_last_updated_fmt = '%Y-%m-%d %H:%M'
147142
148143 # If true, SmartyPants will be used to convert quotes and dashes to
149144 # typographically correct entities.
210210
211211 To set the behavior of the import workflow in case of error, you can use the
212212 optional boolean body parameter ``all_stores_must_succeed``.
213 When set to True, if an error occurs during the upload in at least one store,
213 When set to True (default), if an error occurs during the upload in at least one store,
214214 the workflow fails, the data is deleted from stores where copying is done and
215215 the state of the image remains unchanged.
216 When set to False (default), the workflow will fail only if the upload fails
216 When set to False, the workflow will fail only if the upload fails
217217 on all stores specified. In case of a partial success, the locations added to
218218 the image will be the stores where the data has been correctly uploaded.
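The two modes of ``all_stores_must_succeed`` can be sketched as a small decision function (a hypothetical helper for illustration, not Glance's actual implementation):

```python
def import_outcome(upload_ok, all_stores_must_succeed=True):
    """Model the ``all_stores_must_succeed`` semantics described above.

    ``upload_ok`` maps store name -> whether the upload succeeded there.
    Returns (workflow_succeeded, locations_added).
    """
    succeeded = [store for store, ok in upload_ok.items() if ok]
    failed = [store for store, ok in upload_ok.items() if not ok]
    if all_stores_must_succeed:
        # strict mode (default): one failure fails the whole workflow
        # and the data already copied is deleted
        if failed:
            return False, []
        return True, succeeded
    # tolerant mode: fail only when every specified store failed;
    # on partial success, only the successful stores become locations
    if not succeeded:
        return False, []
    return True, succeeded
```

For example, with one store failing out of two, the strict mode fails the import while the tolerant mode keeps the successful location.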
219219
282282 from stores, ...) only if the copying fails on all stores specified by
283283 the user. In case of a partial success, the locations added to the
284284 image will be the stores where the data has been correctly uploaded.
285
286 - By default, you may perform the copy-image operation only on images that
287 you own. This action is governed by policy, so some users may be granted
288 permission to copy unowned images. Consult your cloud's local
289 documentation for details.
285290
286291 **Synchronous Postconditions**
287292
274274 visibility-in-query:
275275 description: |
276276 Filters the response by an image visibility value. A valid value is
277 ``public``, ``private``, ``community``, or ``shared``. (Note that if you
278 filter on ``shared``, the images included in the response will only be
279 those where your member status is ``accepted`` unless you explicitly
280 include a ``member_status`` filter in the request.) If you omit this
281 parameter, the response shows ``public``, ``private``, and those ``shared``
282 images with a member status of ``accepted``.
277 ``public``, ``private``, ``community``, ``shared``, or ``all``. (Note
278 that if you filter on ``shared``, the images included in the response
279 will only be those where your member status is ``accepted`` unless you
280 explicitly include a ``member_status`` filter in the request.) If you
281 omit this parameter, the response shows ``public``, ``private``, and those
282 ``shared`` images with a member status of ``accepted``.
283283 in: query
284284 required: false
285285 type: string
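The interaction between the ``visibility`` and ``member_status`` query parameters described above can be sketched as a filter function (a hypothetical helper for illustration; the real service filters in the database layer):

```python
def visible_images(images, member_status='accepted', visibility=None):
    """Model the ``visibility`` query-parameter semantics.

    ``images`` is a list of dicts with a ``visibility`` key and, for
    shared images, the caller's ``member_status``.
    """
    def match(img):
        if visibility == 'all':
            return True
        if visibility is not None:
            if img['visibility'] != visibility:
                return False
            if visibility == 'shared':
                # shared results honor the member_status filter,
                # which defaults to 'accepted'
                return img.get('member_status') == member_status
            return True
        # parameter omitted: public, private, and accepted shared images
        if img['visibility'] in ('public', 'private'):
            return True
        return (img['visibility'] == 'shared'
                and img.get('member_status') == 'accepted')
    return [img for img in images if match(img)]
```

Note that ``community`` images are only returned when explicitly requested (or with ``visibility=all``).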
299299 description: |
300300 A boolean parameter indicating the behavior of the import workflow when an
301301 error occurs.
302 When set to True, if an error occurs during the upload in at least one
302 When set to True (default), if an error occurs during the upload in at least one
303303 store, the workflow fails, the data is deleted from stores where copying
304304 is done (not staging), and the state of the image is unchanged.
305305 When set to False, the workflow will fail (data deleted from stores, ...)
306306 only if the import fails on all stores specified by the user. In case of a
307307 partial success, the locations added to the image will be the stores where
308 the data has been correctly uploaded.
308 the data has been correctly uploaded. Default is True.
309309 in: body
310310 required: false
311311 type: boolean
babel.cfg (deleted, 1 line):
0 [python: **.py]
1919 postgresql-client [platform:dpkg]
2020 postgresql-devel [platform:rpm]
2121 postgresql-server [platform:rpm]
22 qemu [platform:dpkg devstack build-image-dib]
23 qemu-utils [platform:dpkg devstack build-image-dib]
2224 libpq-dev [platform:dpkg]
00 # The order of packages is significant, because pip processes them in the order
11 # of appearance. Changing the order has an impact on the overall integration
22 # process, which may cause wedges in the gate later.
3 sphinx!=1.6.6,!=1.6.7,!=2.1.0,>=1.6.2 # BSD
3 sphinx>=2.0.0,!=2.1.0 # BSD
44 os-api-ref>=1.4.0 # Apache-2.0
5 openstackdocstheme>=1.20.0 # Apache-2.0
6 reno>=2.5.0 # Apache-2.0
5 openstackdocstheme>=2.2.1 # Apache-2.0
6 reno>=3.1.0 # Apache-2.0
77 sphinxcontrib-apidoc>=0.2.0 # BSD
88 whereto>=0.3.0 # Apache-2.0
99
2727 Starting a server
2828 -----------------
2929
30 There are two ways to start a Glance server (either the API server or the
31 registry server):
30 There are two ways to start a Glance server:
3231
3332 * Manually calling the server program
3433
6059 * ``/etc``
6160
6261 The filename that is searched for depends on the server application name. So,
63 if you are starting up the API server, ``glance-api.conf`` is searched for,
64 otherwise ``glance-registry.conf``.
62 if you are starting up the API server, ``glance-api.conf`` is searched for.
6563
6664 If no configuration file is found, you will see an error, like::
6765
6967 ERROR: Unable to locate any configuration file. Cannot load application glance-api
7068
7169 Here is an example showing how you can manually start the ``glance-api`` server
72 and ``glance-registry`` in a shell.::
70 in a shell::
7371
7472 $ sudo glance-api --config-file glance-api.conf --debug &
7573 jsuh@mc-ats1:~$ 2011-04-13 14:50:12 DEBUG [glance-api] ********************************************************************************
8785 2011-04-13 14:50:12 DEBUG [routes.middleware] Initialized with method overriding = True, and path info altering = True
8886 2011-04-13 14:50:12 DEBUG [eventlet.wsgi.server] (21354) wsgi starting up on http://65.114.169.29:9292/
8987
90 $ sudo glance-registry --config-file glance-registry.conf &
91 jsuh@mc-ats1:~$ 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] PRAGMA table_info("images")
92 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] ()
93 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
94 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (0, u'created_at', u'DATETIME', 1, None, 0)
95 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (1, u'updated_at', u'DATETIME', 0, None, 0)
96 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (2, u'deleted_at', u'DATETIME', 0, None, 0)
97 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (3, u'deleted', u'BOOLEAN', 1, None, 0)
98 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (4, u'id', u'INTEGER', 1, None, 1)
99 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (5, u'name', u'VARCHAR(255)', 0, None, 0)
100 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (6, u'disk_format', u'VARCHAR(20)', 0, None, 0)
101 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (7, u'container_format', u'VARCHAR(20)', 0, None, 0)
102 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (8, u'size', u'INTEGER', 0, None, 0)
103 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (9, u'status', u'VARCHAR(30)', 1, None, 0)
104 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (10, u'is_public', u'BOOLEAN', 1, None, 0)
105 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (11, u'location', u'TEXT', 0, None, 0)
106 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] PRAGMA table_info("image_properties")
107 2011-04-13 14:51:16 INFO [sqlalchemy.engine.base.Engine.0x...feac] ()
108 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Col ('cid', 'name', 'type', 'notnull', 'dflt_value', 'pk')
109 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (0, u'created_at', u'DATETIME', 1, None, 0)
110 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (1, u'updated_at', u'DATETIME', 0, None, 0)
111 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (2, u'deleted_at', u'DATETIME', 0, None, 0)
112 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (3, u'deleted', u'BOOLEAN', 1, None, 0)
113 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (4, u'id', u'INTEGER', 1, None, 1)
114 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (5, u'image_id', u'INTEGER', 1, None, 0)
115 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (6, u'key', u'VARCHAR(255)', 1, None, 0)
116 2011-04-13 14:51:16 DEBUG [sqlalchemy.engine.base.Engine.0x...feac] Row (7, u'value', u'TEXT', 0, None, 0)
117
11888 $ ps aux | grep glance
11989 root 20009 0.7 0.1 12744 9148 pts/1 S 12:47 0:00 /usr/bin/python /usr/bin/glance-api glance-api.conf --debug
120 root 20012 2.0 0.1 25188 13356 pts/1 S 12:47 0:00 /usr/bin/python /usr/bin/glance-registry glance-registry.conf
12190 jsuh 20017 0.0 0.0 3368 744 pts/1 S+ 12:47 0:00 grep glance
12291
12392 Simply supply the configuration file as the parameter to the ``--config-file``
124 option (the ``etc/glance-api.conf`` and ``etc/glance-registry.conf`` sample
125 configuration files were used in the above example) and then any other options
126 you want to use. (``--debug`` was used above to show some of the debugging
127 output that the server shows when starting up. Call the server program
128 with ``--help`` to see all available options you can specify on the
129 command line.)
93 option (the ``etc/glance-api.conf`` sample configuration file was used in the
94 above example) and then any other options you want to use. (``--debug`` was
95 used above to show some of the debugging output that the server shows when
96 starting up. Call the server program with ``--help`` to see all available
97 options you can specify on the command line.)
13098
13199 For more information on configuring the server via the ``paste.deploy``
132100 configuration files, see the section entitled
161129 You must use the ``sudo`` program to run ``glance-control`` currently, as the
162130 pid files for the server programs are written to /var/run/glance/
163131
164 Here is an example that shows how to start the ``glance-registry`` server
132 Here is an example that shows how to start the ``glance-api`` server
165133 with the ``glance-control`` wrapper script. ::
166134
167135
168136 $ sudo glance-control api start glance-api.conf
169137 Starting glance-api with /home/jsuh/glance.conf
170138
171 $ sudo glance-control registry start glance-registry.conf
172 Starting glance-registry with /home/jsuh/glance.conf
173
174139 $ ps aux | grep glance
175140 root 20038 4.0 0.1 12728 9116 ? Ss 12:51 0:00 /usr/bin/python /usr/bin/glance-api /home/jsuh/glance-api.conf
176 root 20039 6.0 0.1 25188 13356 ? Ss 12:51 0:00 /usr/bin/python /usr/bin/glance-registry /home/jsuh/glance-registry.conf
177141 jsuh 20042 0.0 0.0 3368 744 pts/1 S+ 12:51 0:00 grep glance
178142
179143
217181
218182 as this example shows::
219183
220 $ sudo glance-control registry stop
221 Stopping glance-registry pid: 17602 signal: 15
184 $ sudo glance-control api stop
185 Stopping glance-api pid: 17602 signal: 15
222186
223187 Restarting a server
224188 -------------------
226190 You can restart a server with the ``glance-control`` program, as demonstrated
227191 here::
228192
229 $ sudo glance-control registry restart etc/glance-registry.conf
230 Stopping glance-registry pid: 17611 signal: 15
231 Starting glance-registry with /home/jpipes/repos/glance/trunk/etc/glance-registry.conf
193 $ sudo glance-control api restart etc/glance-api.conf
194 Stopping glance-api pid: 17611 signal: 15
195 Starting glance-api with /home/jpipes/repos/glance/trunk/etc/glance-api.conf
232196
233197 Reloading a server
234198 ------------------
213213
214214 For the ``copy-image`` method, make sure that ``copy-image`` is included
215215 in the list specified by your ``enabled_import_methods`` setting as well
216 as you have multiple glance backends configured in your environment.
216 as ensuring that you have multiple glance backends configured in your
217 environment. To allow the ``copy-image`` operation to be performed by
218 users on images they do not own, you can set the ``copy_image`` policy
219 to something other than the default, for example::
220
221 "copy_image": "'public':%(visibility)s"
217222
218223 .. _iir_plugins:
219224
221226 -----------------------------------------
222227 Starting with the Ussuri release, it is possible to copy existing image
223228 data into multiple stores using the interoperable image import workflow.
229
230 By default, a user may copy only the images that they own. Unless
231 the cloud operator has adjusted the policy to permit copying of
232 unowned images, a user will receive a Forbidden (operation not
233 permitted) response for such copy operations. Even when copying of
234 unowned images is allowed by policy, the ownership of the image
235 remains unchanged.
224236
225237 Operator or end user can either copy the existing image by specifying
226238 ``all_stores`` as True in request body or by passing list of desired
1313 hypervisors
1414
1515 Using image properties
16 ~~~~~~~~~~~~~~~~~~~~~~
16 ----------------------
1717
1818 Some important points to keep in mind:
1919
7373 .. _image_property_keys_and_values:
7474
7575 Image property keys and values
76 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
76 ------------------------------
7777
7878 Here is a list of useful image properties and the values they expect.
7979
80 .. list-table::
81 :widths: 15 35 50 90
82 :header-rows: 1
83
84 * - Specific to
85 - Key
86 - Description
87 - Supported values
88 * - All
89 - ``architecture``
90 - The CPU architecture that must be supported by the hypervisor. For
91 example, ``x86_64``, ``arm``, or ``ppc64``. Run :command:`uname -m`
92 to get the architecture of a machine. We strongly recommend using
93 the architecture data vocabulary defined by the `libosinfo project
94 <http://libosinfo.org/>`_ for this purpose.
95 - * ``alpha`` - `DEC 64-bit RISC
96 <https://en.wikipedia.org/wiki/DEC_Alpha>`_
97 * ``armv7l`` - `ARM Cortex-A7 MPCore
98 <https://en.wikipedia.org/wiki/ARM_architecture>`_
99 * ``cris`` - `Ethernet, Token Ring, AXis—Code Reduced Instruction
100 Set <https://en.wikipedia.org/wiki/ETRAX_CRIS>`_
101 * ``i686`` - `Intel sixth-generation x86 (P6 micro architecture)
102 <https://en.wikipedia.org/wiki/X86>`_
103 * ``ia64`` - `Itanium <https://en.wikipedia.org/wiki/Itanium>`_
104 * ``lm32`` - `Lattice Micro32
105 <https://en.wikipedia.org/wiki/Milkymist>`_
106 * ``m68k`` - `Motorola 68000
107 <https://en.wikipedia.org/wiki/Motorola_68000_family>`_
108 * ``microblaze`` - `Xilinx 32-bit FPGA (Big Endian)
109 <https://en.wikipedia.org/wiki/MicroBlaze>`_
110 * ``microblazeel`` - `Xilinx 32-bit FPGA (Little Endian)
111 <https://en.wikipedia.org/wiki/MicroBlaze>`_
112 * ``mips`` - `MIPS 32-bit RISC (Big Endian)
113 <https://en.wikipedia.org/wiki/MIPS_architecture>`_
114 * ``mipsel`` - `MIPS 32-bit RISC (Little Endian)
115 <https://en.wikipedia.org/wiki/MIPS_architecture>`_
116 * ``mips64`` - `MIPS 64-bit RISC (Big Endian)
117 <https://en.wikipedia.org/wiki/MIPS_architecture>`_
118 * ``mips64el`` - `MIPS 64-bit RISC (Little Endian)
119 <https://en.wikipedia.org/wiki/MIPS_architecture>`_
120 * ``openrisc`` - `OpenCores RISC
121 <https://en.wikipedia.org/wiki/OpenRISC#QEMU_support>`_
122 * ``parisc`` - `HP Precision Architecture RISC
123 <https://en.wikipedia.org/wiki/PA-RISC>`_
124 * ``parisc64`` - `HP Precision Architecture 64-bit RISC
125 <https://en.wikipedia.org/wiki/PA-RISC>`_
126 * ``ppc`` - `PowerPC 32-bit <https://en.wikipedia.org/wiki/PowerPC>`_
127 * ``ppc64`` - `PowerPC 64-bit <https://en.wikipedia.org/wiki/PowerPC>`_
128 * ``ppcemb`` - `PowerPC (Embedded 32-bit)
129 <https://en.wikipedia.org/wiki/PowerPC>`_
130 * ``s390`` - `IBM Enterprise Systems Architecture/390
131 <https://en.wikipedia.org/wiki/S390>`_
132 * ``s390x`` - `S/390 64-bit <https://en.wikipedia.org/wiki/S390x>`_
133 * ``sh4`` - `SuperH SH-4 (Little Endian)
134 <https://en.wikipedia.org/wiki/SuperH>`_
135 * ``sh4eb`` - `SuperH SH-4 (Big Endian)
136 <https://en.wikipedia.org/wiki/SuperH>`_
137 * ``sparc`` - `Scalable Processor Architecture, 32-bit
138 <https://en.wikipedia.org/wiki/Sparc>`_
139 * ``sparc64`` - `Scalable Processor Architecture, 64-bit
140 <https://en.wikipedia.org/wiki/Sparc>`_
141 * ``unicore32`` - `Microprocessor Research and Development Center RISC
142 Unicore32 <https://en.wikipedia.org/wiki/Unicore>`_
143 * ``x86_64`` - `64-bit extension of IA-32
144 <https://en.wikipedia.org/wiki/X86>`_
145 * ``xtensa`` - `Tensilica Xtensa configurable microprocessor core
146 <https://en.wikipedia.org/wiki/Xtensa#Processor_Cores>`_
147 * ``xtensaeb`` - `Tensilica Xtensa configurable microprocessor core
148 <https://en.wikipedia.org/wiki/Xtensa#Processor_Cores>`_ (Big Endian)
149 * - All
150 - ``hypervisor_type``
151 - The hypervisor type. Note that ``qemu`` is used for both QEMU and KVM
152 hypervisor types.
153 - ``hyperv``, ``ironic``, ``lxc``, ``qemu``, ``uml``, ``vmware``, or
154 ``xen``.
155 * - All
156 - ``instance_type_rxtx_factor``
157 - Optional property allows created servers to have a different bandwidth
158 cap than that defined in the network they are attached to. This factor
159 is multiplied by the ``rxtx_base`` property of the network. The
160 ``rxtx_base`` property defaults to ``1.0``, which is the same as the
161 attached network. This parameter is only available for Xen or NSX based
162 systems.
163 - Float (default value is ``1.0``)
164 * - All
165 - ``instance_uuid``
166 - For snapshot images, this is the UUID of the server used to create this
167 image.
168 - Valid server UUID
169 * - All
170 - ``img_config_drive``
171 - Specifies whether the image needs a config drive.
172 - ``mandatory`` or ``optional`` (default if property is not used).
173 * - All
174 - ``kernel_id``
175 - The ID of an image stored in the Image service that should be used as
176 the kernel when booting an AMI-style image.
177 - Valid image ID
178 * - All
179 - ``os_admin_user``
180 - The name of the user with admin privileges.
181 - Valid username (defaults to ``root`` for Linux guests and ``Administrator`` for Windows guests).
182 * - All
183 - ``os_distro``
184 - The common name of the operating system distribution in lowercase
185 (uses the same data vocabulary as the
186 `libosinfo project`_). Specify only a recognized
187 value for this field. Deprecated values are listed to assist you in
188 searching for the recognized value.
189 - * ``arch`` - Arch Linux. Do not use ``archlinux`` or ``org.archlinux``.
190 * ``centos`` - Community Enterprise Operating System. Do not use
191 ``org.centos`` or ``CentOS``.
192 * ``debian`` - Debian. Do not use ``Debian`` or ``org.debian``.
193 * ``fedora`` - Fedora. Do not use ``Fedora``, ``org.fedora``, or
194 ``org.fedoraproject``.
195 * ``freebsd`` - FreeBSD. Do not use ``org.freebsd``, ``freeBSD``, or
196 ``FreeBSD``.
197 * ``gentoo`` - Gentoo Linux. Do not use ``Gentoo`` or ``org.gentoo``.
198 * ``mandrake`` - Mandrakelinux (MandrakeSoft) distribution. Do not use
199 ``mandrakelinux`` or ``MandrakeLinux``.
200 * ``mandriva`` - Mandriva Linux. Do not use ``mandrivalinux``.
201 * ``mes`` - Mandriva Enterprise Server. Do not use ``mandrivaent`` or
202 ``mandrivaES``.
203 * ``msdos`` - Microsoft Disc Operating System. Do not use ``ms-dos``.
204 * ``netbsd`` - NetBSD. Do not use ``NetBSD`` or ``org.netbsd``.
205 * ``netware`` - Novell NetWare. Do not use ``novell`` or ``NetWare``.
206 * ``openbsd`` - OpenBSD. Do not use ``OpenBSD`` or ``org.openbsd``.
207 * ``opensolaris`` - OpenSolaris. Do not use ``OpenSolaris`` or
208 ``org.opensolaris``.
209 * ``opensuse`` - openSUSE. Do not use ``suse``, ``SuSE``, or
210 ``org.opensuse``.
211 * ``rhel`` - Red Hat Enterprise Linux. Do not use ``redhat``, ``RedHat``,
212 or ``com.redhat``.
213 * ``sled`` - SUSE Linux Enterprise Desktop. Do not use ``com.suse``.
214 * ``ubuntu`` - Ubuntu. Do not use ``Ubuntu``, ``com.ubuntu``,
215 ``org.ubuntu``, or ``canonical``.
216 * ``windows`` - Microsoft Windows. Do not use ``com.microsoft.server``
217 or ``windoze``.
218 * - All
219 - ``os_version``
220 - The operating system version as specified by the distributor.
221 - Valid version number (for example, ``11.10``).
222 * - All
223 - ``os_secure_boot``
224 - Secure Boot is a security standard. When the instance starts,
225 Secure Boot first examines software such as firmware and OS by their
226 signature and only allows them to run if the signatures are valid.
227
228 For Hyper-V: Images must be prepared as Generation 2 VMs. Instance must
229 also contain ``hw_machine_type=hyperv-gen2`` image property. Linux
230 guests will also require bootloader's digital signature provided as
231 ``os_secure_boot_signature`` and
232 ``hypervisor_version_requires'>=10.0'`` image properties.
233 - * ``required`` - Enable the Secure Boot feature.
234 * ``disabled`` or ``optional`` - (default) Disable the Secure Boot
235 feature.
236 * - All
237 - ``os_shutdown_timeout``
238 - By default, guests will be given 60 seconds to perform a graceful
239 shutdown. After that, the VM is powered off. This property allows
240 overriding the amount of time (unit: seconds) to allow a guest OS to
241 cleanly shut down before power off. A value of 0 (zero) means the guest
242 will be powered off immediately with no opportunity for guest OS
243 clean-up.
244 - Integer value (in seconds) with a minimum of 0 (zero). Default is 60.
245 * - All
246 - ``ramdisk_id``
247 - The ID of an image stored in the Image service that should be used as the
248 ramdisk when booting an AMI-style image.
249 - Valid image ID.
250 * - All
251 - ``trait:<trait_name>``
252 - Added in the Rocky release. Functionality is similar to traits specified
253 in `flavor extra specs <https://docs.openstack.org/nova/latest/user/flavors.html#extra-specs>`_.
254
255 Traits allow specifying a server to build on a compute node with the set
256 of traits specified in the image. The traits are associated with the
257 resource provider that represents the compute node in the Placement API.
258
259 The syntax of specifying traits is **trait:<trait_name>=value**, for
260 example:
261
262 * ``trait:HW_CPU_X86_AVX2=required``
263 * ``trait:STORAGE_DISK_SSD=required``
264
265 The nova scheduler will pass required traits specified on the image to
266 the Placement API to include only resource providers that can satisfy
267 the required traits. Traits for the resource providers can be managed
268 using the `osc-placement plugin <https://docs.openstack.org/osc-placement/latest/index.html>`_.
269
270 Image traits are used by the nova scheduler even for volume-backed
271 instances, if the volume source is an image with traits.
272 - The only valid value is ``required``; any other value is invalid.
273
274 * ``required`` - <trait_name> is required on the resource provider that
275 represents the compute node on which the image is launched.
276 * - All
277 - ``vm_mode``
278 - The virtual machine mode. This represents the host/guest ABI
279 (application binary interface) used for the virtual machine.
280 - * ``hvm`` - Fully virtualized. This is the mode used by QEMU and KVM.
281 * ``xen`` - Xen 3.0 paravirtualized.
282 * ``uml`` - User Mode Linux paravirtualized.
283 * ``exe`` - Executables in containers. This is the mode used by LXC.
284 * - libvirt API driver
285 - ``hw_cpu_sockets``
286 - The preferred number of sockets to expose to the guest.
287 - Integer.
288 * - libvirt API driver
289 - ``hw_cpu_cores``
290 - The preferred number of cores to expose to the guest.
291 - Integer.
292 * - libvirt API driver
293 - ``hw_cpu_threads``
294 - The preferred number of threads to expose to the guest.
295 - Integer.
296 * - libvirt API driver
297 - ``hw_cpu_policy``
298 - Used to pin the virtual CPUs (vCPUs) of instances to the host’s
299 physical CPU cores (pCPUs). Host aggregates should be used to separate
300 these pinned instances from unpinned instances as the latter will not
301 respect the resourcing requirements of the former.
302 - * ``shared`` - (default) The guest vCPUs will be allowed to freely float
303 across host pCPUs, albeit potentially constrained by NUMA policy.
304 * ``dedicated`` - The guest vCPUs will be strictly pinned to a set of
305 host pCPUs. In the absence of an explicit vCPU topology request, the
306 drivers typically expose all vCPUs as sockets with one core and one
307 thread. When strict CPU pinning is in effect the guest CPU topology
308 will be setup to match the topology of the CPUs to which it is pinned.
309 This option implies an overcommit ratio of 1.0. For example, if a two
310 vCPU guest is pinned to a single host core with two threads, then the
311 guest will get a topology of one socket, one core, two threads.
312 * - libvirt API driver
313 - ``hw_cpu_thread_policy``
314 - Further refine ``hw_cpu_policy=dedicated`` by stating how hardware CPU
315 threads in a simultaneous multithreading-based (SMT) architecture should be
316 used. SMT-based architectures include Intel processors with
317 Hyper-Threading technology. In these architectures, processor cores
318 share a number of components with one or more other cores. Cores in
319 such architectures are commonly referred to as hardware threads, while
320 the cores that a given core share components with are known as thread
321 siblings.
322 - * ``prefer`` - (default) The host may or may not have an SMT
323 architecture. Where an SMT architecture is present, thread siblings
324 are preferred.
325 * ``isolate`` - The host must not have an SMT architecture or must
326 emulate a non-SMT architecture. If the host does not have an SMT
327 architecture, each vCPU is placed on a different core as expected. If
328 the host does have an SMT architecture - that is, one or more cores
329 have thread siblings - then each vCPU is placed on a different
330 physical core. No vCPUs from other guests are placed on the same core.
331 All but one thread sibling on each utilized core is therefore
332 guaranteed to be unusable.
333 * ``require`` - The host must have an SMT architecture. Each vCPU is
334 allocated on thread siblings. If the host does not have an SMT
335 architecture, then it is not used. If the host has an SMT
336 architecture, but not enough cores with free thread siblings are
337 available, then scheduling fails.
338 * - libvirt API driver
339 - ``hw_cdrom_bus``
340 - Specifies the type of disk controller to attach CD-ROM devices to.
341 - As for ``hw_disk_bus``.
342 * - libvirt API driver
343 - ``hw_disk_bus``
344 - Specifies the type of disk controller to attach disk devices to.
345 - Options depend on the value of `nova's virt_type config option
346 <https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.virt_type>`_:
347
348 * For ``qemu`` and ``kvm``: one of ``scsi``, ``virtio``,
349 ``uml``, ``xen``, ``ide``, ``usb``, or ``lxc``.
350 * For ``xen``: one of ``xen`` or ``ide``.
351 * For ``uml``: must be ``uml``.
352 * For ``lxc``: must be ``lxc``.
353 * For ``parallels``: one of ``ide`` or ``scsi``.
354 * - libvirt API driver
355 - ``hw_firmware_type``
356 - Specifies the type of firmware with which to boot the guest.
357 - One of ``bios`` or ``uefi``.
358 * - libvirt API driver
359 - ``hw_mem_encryption``
360 - Enables encryption of guest memory at the hardware level, if
361 there are compute hosts available which support this. See
362 `nova's documentation on configuration of the KVM hypervisor
363 <https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#amd-sev-secure-encrypted-virtualization>`_
364 for more details.
365 - ``true`` or ``false`` (default).
366 * - libvirt API driver
367 - ``hw_pointer_model``
368 - Input devices that allow interaction with a graphical framebuffer,
369 for example to provide a graphic tablet for absolute cursor movement.
370 Currently only supported by the KVM/QEMU hypervisor configuration
371 and VNC or SPICE consoles must be enabled.
372 - ``usbtablet``
373 * - libvirt API driver
374 - ``hw_rng_model``
375 - Adds a random-number generator device to the image's instances. This
376 image property by itself does not guarantee that a hardware RNG will be
377 used; it expresses a preference that may or may not be satisfied
378 depending upon Nova configuration.
379
380 The cloud administrator can enable and control device behavior by
381 configuring the instance's flavor. By default:
382
383 * The generator device is disabled.
384 * ``/dev/urandom`` is used as the default entropy source. To
385 specify a physical HW RNG device, use the following option in
386 the ``nova.conf`` file:
387
388 .. code-block:: ini
389
390 rng_dev_path=/dev/hwrng
391
392 * The use of a hardware random number generator must be configured in a
393 flavor's extra_specs by setting ``hw_rng:allowed`` to True in the
394 flavor definition.
395 - ``virtio``, or other supported device.
396 * - libvirt API driver
397 - ``hw_time_hpet``
398 - Adds support for the High Precision Event Timer (HPET) for x86 guests
399 in the libvirt driver when ``hypervisor_type=qemu`` and
400 ``architecture=i686`` or ``architecture=x86_64``. The timer can be
401 enabled by setting ``hw_time_hpet=true``. By default HPET remains
402 disabled.
403 - ``true`` or ``false`` (default)
404 * - libvirt API driver, Hyper-V driver
405 - ``hw_machine_type``
406 - For libvirt: Enables booting an ARM system using the specified
407 machine type. If an ARM image is used and its machine type is
408 not explicitly specified, then Compute uses the ``virt`` machine
409 type as the default for ARMv7 and AArch64.
410
411 For Hyper-V: Specifies whether the Hyper-V instance will be a generation
412 1 or generation 2 VM. By default, if the property is not provided, the
413 instances will be generation 1 VMs. If the image is specific for
414 generation 2 VMs but the property is not provided accordingly, the
415 instance will fail to boot.
416 - For libvirt: Valid types can be viewed by using the
417 :command:`virsh capabilities` command (machine types are displayed in
418 the ``machine`` tag).
419
420 For Hyper-V: Acceptable values are either ``hyperv-gen1`` or
421 ``hyperv-gen2``.
422 * - libvirt API driver, XenAPI driver
423 - ``os_type``
424 - The operating system installed on the image. The ``libvirt`` API driver
425 and ``XenAPI`` driver contain logic that takes different actions
426 depending on the value of the ``os_type`` parameter of the image.
427 For example, for ``os_type=windows`` images, it creates a FAT32-based
428 swap partition instead of a Linux swap partition, and it limits the
429 injected host name to less than 16 characters.
430 - ``linux`` or ``windows``.
431
432 * - libvirt API driver
433 - ``hw_scsi_model``
434 - Enables the use of VirtIO SCSI (``virtio-scsi``) to provide block
435 device access for compute instances; by default, instances use VirtIO
436 Block (``virtio-blk``). VirtIO SCSI is a para-virtualized SCSI
437 controller device that provides improved scalability and performance,
438 and supports advanced SCSI hardware.
439 - ``virtio-scsi``
440 * - libvirt API driver
441 - ``hw_serial_port_count``
442 - Specifies the count of serial ports that should be provided. If
443 ``hw:serial_port_count`` is not set in the flavor's extra_specs, then
444 any count is permitted. If ``hw:serial_port_count`` is set, then this
445 provides the default serial port count. It is permitted to override the
446 default serial port count, but only with a lower value.
447 - Integer
448 * - libvirt API driver
449 - ``hw_video_model``
450 - The graphic device model presented to the guest.
451 ``hw_video_model=none`` disables the graphics device in the guest and
452 should generally be used with GPU passthrough.
453 - ``vga``, ``cirrus``, ``vmvga``, ``xen``, ``qxl``, ``virtio``, ``gop`` or ``none``.
454 * - libvirt API driver
455 - ``hw_video_ram``
456 - Maximum RAM for the video image. Used only if a ``hw_video:ram_max_mb``
457 value has been set in the flavor's extra_specs and that value is higher
458 than the value set in ``hw_video_ram``.
459 - Integer in MB (for example, ``64``).
460 * - libvirt API driver
461 - ``hw_watchdog_action``
462 - Enables a virtual hardware watchdog device that carries out the
463 specified action if the server hangs. The watchdog uses the
464 ``i6300esb`` device (emulating a PCI Intel 6300ESB). If
465 ``hw_watchdog_action`` is not specified, the watchdog is disabled.
466 - * ``disabled`` - (default) The device is not attached. Allows the user to
467 disable the watchdog for the image, even if it has been enabled using
468 the image's flavor.
469 * ``reset`` - Forcefully reset the guest.
470 * ``poweroff`` - Forcefully power off the guest.
471 * ``pause`` - Pause the guest.
472 * ``none`` - Only enable the watchdog; do nothing if the server hangs.
473 * - libvirt API driver
474 - ``os_command_line``
475 - The kernel command line to be used by the ``libvirt`` driver, instead
476 of the default. For Linux Containers (LXC), the value is used as
477 arguments for initialization. This key is valid only for Amazon kernel,
478 ``ramdisk``, or machine images (``aki``, ``ari``, or ``ami``).
479 -
480 * - libvirt API driver and VMware API driver
481 - ``hw_vif_model``
482 - Specifies the model of virtual network interface device to use.
483 - The valid options depend on the configured hypervisor.

484 * ``KVM`` and ``QEMU``: ``e1000``, ``ne2k_pci``, ``pcnet``,
485 ``rtl8139``, and ``virtio``.
486 * VMware: ``e1000``, ``e1000e``, ``VirtualE1000``, ``VirtualE1000e``,
487 ``VirtualPCNet32``, ``VirtualSriovEthernetCard``, and
488 ``VirtualVmxnet``.
489 * Xen: ``e1000``, ``netfront``, ``ne2k_pci``, ``pcnet``, and
490 ``rtl8139``.
491 * - libvirt API driver
492 - ``hw_vif_multiqueue_enabled``
493 - If ``true``, this enables the ``virtio-net multiqueue`` feature. In
494 this case, the driver sets the number of queues equal to the number
495 of guest vCPUs. This makes the network performance scale across a
496 number of vCPUs.
497 - ``true`` | ``false``
498 * - libvirt API driver
499 - ``hw_boot_menu``
500 - If ``true``, enables the BIOS bootmenu. In cases where both the image
501 metadata and Extra Spec are set, the Extra Spec setting is used. This
502 allows for flexibility in setting/overriding the default behavior as
503 needed.
504 - ``true`` or ``false``
505 * - libvirt API driver
506 - ``hw_pmu``
507 - Controls emulation of a virtual performance monitoring unit (vPMU) in the guest.
508 To reduce latency in realtime workloads, disable the vPMU by setting ``hw_pmu=false``.
509 - ``true`` or ``false``
510 * - libvirt API driver
511 - ``img_hide_hypervisor_id``
512 - Some hypervisors add a signature to their guests. While the presence
513 of the signature can enable some paravirtualization features on the
514 guest, it can also have the effect of preventing some drivers from
515 loading. Hiding the signature by setting this property to ``true``
516 may allow such drivers to load and work.
517 - ``true`` or ``false``
518 * - VMware API driver
519 - ``vmware_adaptertype``
520 - The virtual SCSI or IDE controller used by the hypervisor.
521 - ``lsiLogic``, ``lsiLogicsas``, ``busLogic``, ``ide``, or
522 ``paraVirtual``.
523 * - VMware API driver
524 - ``vmware_ostype``
525 - A VMware GuestID which describes the operating system installed in
526 the image. This value is passed to the hypervisor when creating a
527 virtual machine. If not specified, the key defaults to ``otherGuest``.
528 - See `thinkvirt.com <http://www.thinkvirt.com/?q=node/181>`_.
529 * - VMware API driver
530 - ``vmware_image_version``
531 - Currently unused.
532 - ``1``
533 * - XenAPI driver
534 - ``auto_disk_config``
535 - If ``true``, the root partition on the disk is automatically resized
536 before the instance boots. This value is only taken into account by
537 the Compute service when using a Xen-based hypervisor with the
538 ``XenAPI`` driver. The Compute service will only attempt to resize if
539 there is a single partition on the image, and only if the partition
540 is in ``ext3`` or ``ext4`` format.
541 - ``true`` or ``false``
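The table above defines plain key/value metadata on the image. As a minimal, hypothetical sketch (not part of Glance), the following shows how a client might sanity-check a few of these properties before setting them; the rules encode only what the table states:

```python
# Hypothetical client-side sanity checks for a few of the properties in
# the table above. The property names and allowed values come from the
# table; the helper itself is illustrative and not part of Glance.

def validate_image_properties(props):
    """Return a list of error strings for the given property dict."""
    errors = []

    config_drive = props.get("img_config_drive")
    if config_drive is not None and config_drive not in ("mandatory", "optional"):
        errors.append("img_config_drive must be 'mandatory' or 'optional'")

    secure_boot = props.get("os_secure_boot")
    if secure_boot is not None and secure_boot not in ("required", "disabled", "optional"):
        errors.append("os_secure_boot must be 'required', 'disabled', or 'optional'")

    timeout = props.get("os_shutdown_timeout")
    if timeout is not None and (not isinstance(timeout, int) or timeout < 0):
        errors.append("os_shutdown_timeout must be an integer >= 0")

    return errors
```

For example, ``validate_image_properties({'img_config_drive': 'mandatory'})`` returns an empty list, while an unknown ``os_secure_boot`` value produces one error string.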
80 ``architecture``
81 :Type: str
82
83 The CPU architecture that must be supported by the hypervisor. For
84 example, ``x86_64``, ``arm``, or ``ppc64``. Run :command:`uname -m`
85 to get the architecture of a machine. We strongly recommend using
86 the architecture data vocabulary defined by the `libosinfo project
87 <http://libosinfo.org/>`_ for this purpose.
88
89 One of:
90
91 * ``alpha`` - `DEC 64-bit RISC <https://en.wikipedia.org/wiki/DEC_Alpha>`_
92 * ``armv7l`` - `ARM Cortex-A7 MPCore <https://en.wikipedia.org/wiki/ARM_architecture>`_
93 * ``cris`` - `Ethernet, Token Ring, AXis—Code Reduced Instruction Set <https://en.wikipedia.org/wiki/ETRAX_CRIS>`_
94 * ``i686`` - `Intel sixth-generation x86 (P6 micro architecture) <https://en.wikipedia.org/wiki/X86>`_
95 * ``ia64`` - `Itanium <https://en.wikipedia.org/wiki/Itanium>`_
96 * ``lm32`` - `Lattice Micro32 <https://en.wikipedia.org/wiki/Milkymist>`_
97 * ``m68k`` - `Motorola 68000 <https://en.wikipedia.org/wiki/Motorola_68000_family>`_
98 * ``microblaze`` - `Xilinx 32-bit FPGA (Big Endian) <https://en.wikipedia.org/wiki/MicroBlaze>`_
99 * ``microblazeel`` - `Xilinx 32-bit FPGA (Little Endian) <https://en.wikipedia.org/wiki/MicroBlaze>`_
100 * ``mips`` - `MIPS 32-bit RISC (Big Endian) <https://en.wikipedia.org/wiki/MIPS_architecture>`_
101 * ``mipsel`` - `MIPS 32-bit RISC (Little Endian) <https://en.wikipedia.org/wiki/MIPS_architecture>`_
102 * ``mips64`` - `MIPS 64-bit RISC (Big Endian) <https://en.wikipedia.org/wiki/MIPS_architecture>`_
103 * ``mips64el`` - `MIPS 64-bit RISC (Little Endian) <https://en.wikipedia.org/wiki/MIPS_architecture>`_
104 * ``openrisc`` - `OpenCores RISC <https://en.wikipedia.org/wiki/OpenRISC#QEMU_support>`_
105 * ``parisc`` - `HP Precision Architecture RISC <https://en.wikipedia.org/wiki/PA-RISC>`_
106 * ``parisc64`` - `HP Precision Architecture 64-bit RISC <https://en.wikipedia.org/wiki/PA-RISC>`_
107 * ``ppc`` - `PowerPC 32-bit <https://en.wikipedia.org/wiki/PowerPC>`_
108 * ``ppc64`` - `PowerPC 64-bit <https://en.wikipedia.org/wiki/PowerPC>`_
109 * ``ppcemb`` - `PowerPC (Embedded 32-bit) <https://en.wikipedia.org/wiki/PowerPC>`_
110 * ``s390`` - `IBM Enterprise Systems Architecture/390 <https://en.wikipedia.org/wiki/S390>`_
111 * ``s390x`` - `S/390 64-bit <https://en.wikipedia.org/wiki/S390x>`_
112 * ``sh4`` - `SuperH SH-4 (Little Endian) <https://en.wikipedia.org/wiki/SuperH>`_
113 * ``sh4eb`` - `SuperH SH-4 (Big Endian) <https://en.wikipedia.org/wiki/SuperH>`_
114 * ``sparc`` - `Scalable Processor Architecture, 32-bit <https://en.wikipedia.org/wiki/Sparc>`_
115 * ``sparc64`` - `Scalable Processor Architecture, 64-bit <https://en.wikipedia.org/wiki/Sparc>`_
116 * ``unicore32`` - `Microprocessor Research and Development Center RISC Unicore32 <https://en.wikipedia.org/wiki/Unicore>`_
117 * ``x86_64`` - `64-bit extension of IA-32 <https://en.wikipedia.org/wiki/X86>`_
118 * ``xtensa`` - `Tensilica Xtensa configurable microprocessor core <https://en.wikipedia.org/wiki/Xtensa#Processor_Cores>`_
119 * ``xtensaeb`` - `Tensilica Xtensa configurable microprocessor core <https://en.wikipedia.org/wiki/Xtensa#Processor_Cores>`_ (Big Endian)
120
121 ``hypervisor_type``
122 :Type: str
123
124 The hypervisor type. Note that ``qemu`` is used for both QEMU and KVM
125 hypervisor types.
126
127 One of:
128
129 - ``hyperv``
130 - ``ironic``
131 - ``lxc``
132 - ``qemu``
133 - ``uml``
134 - ``vmware``
135 - ``xen``
136
137 ``instance_uuid``
138 :Type: str
139
140 For snapshot images, this is the UUID of the server used to create this
141 image. The value must be a valid server UUID.
142
143 ``img_config_drive``
144 :Type: str
145
146 Specifies whether the image needs a config drive.
147
148 One of:
149
150 - ``mandatory``
151 - ``optional`` (default if property is not used)
152
153 ``kernel_id``
154 :Type: str
155
156 The ID of an image stored in the Image service that should be used as
157 the kernel when booting an AMI-style image. The value must be a valid image
158 ID.
159
160 ``os_admin_user``
161 :Type: str
162
163 The name of the user with admin privileges.
164 The value must be a valid username (defaults to ``root`` for Linux guests and
165 ``Administrator`` for Windows guests).
166
167 ``os_distro``
168 :Type: str
169
170 The common name of the operating system distribution in lowercase
171 (uses the same data vocabulary as the `libosinfo project`_). Specify only a
172 recognized value for this field. Deprecated values are listed to assist you
173 in searching for the recognized value.
174
175 One of:
176
177 * ``arch`` - Arch Linux. Do not use ``archlinux`` or ``org.archlinux``.
178 * ``centos`` - Community Enterprise Operating System. Do not use
179 ``org.centos`` or ``CentOS``.
180 * ``debian`` - Debian. Do not use ``Debian`` or ``org.debian``.
181 * ``fedora`` - Fedora. Do not use ``Fedora``, ``org.fedora``, or
182 ``org.fedoraproject``.
183 * ``freebsd`` - FreeBSD. Do not use ``org.freebsd``, ``freeBSD``, or
184 ``FreeBSD``.
185 * ``gentoo`` - Gentoo Linux. Do not use ``Gentoo`` or ``org.gentoo``.
186 * ``mandrake`` - Mandrakelinux (MandrakeSoft) distribution. Do not use
187 ``mandrakelinux`` or ``MandrakeLinux``.
188 * ``mandriva`` - Mandriva Linux. Do not use ``mandrivalinux``.
189 * ``mes`` - Mandriva Enterprise Server. Do not use ``mandrivaent`` or
190 ``mandrivaES``.
191 * ``msdos`` - Microsoft Disc Operating System. Do not use ``ms-dos``.
192 * ``netbsd`` - NetBSD. Do not use ``NetBSD`` or ``org.netbsd``.
193 * ``netware`` - Novell NetWare. Do not use ``novell`` or ``NetWare``.
194 * ``openbsd`` - OpenBSD. Do not use ``OpenBSD`` or ``org.openbsd``.
195 * ``opensolaris`` - OpenSolaris. Do not use ``OpenSolaris`` or
196 ``org.opensolaris``.
197 * ``opensuse`` - openSUSE. Do not use ``suse``, ``SuSE``, or
198 ``org.opensuse``.
199 * ``rhel`` - Red Hat Enterprise Linux. Do not use ``redhat``, ``RedHat``,
200 or ``com.redhat``.
201 * ``sled`` - SUSE Linux Enterprise Desktop. Do not use ``com.suse``.
202 * ``ubuntu`` - Ubuntu. Do not use ``Ubuntu``, ``com.ubuntu``,
203 ``org.ubuntu``, or ``canonical``.
204 * ``windows`` - Microsoft Windows. Do not use ``com.microsoft.server``
205 or ``windoze``.
206
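The deprecated aliases above map one-to-one onto recognized values, so a client can normalize a value before setting the property. This helper is illustrative only (hypothetical, not part of Glance); the mapping restates the list above:

```python
# Deprecated os_distro aliases from the list above, mapped to their
# recognized values. Hypothetical helper, not part of Glance.
_DEPRECATED_OS_DISTRO = {
    "archlinux": "arch", "org.archlinux": "arch",
    "org.centos": "centos", "CentOS": "centos",
    "Debian": "debian", "org.debian": "debian",
    "Fedora": "fedora", "org.fedora": "fedora", "org.fedoraproject": "fedora",
    "org.freebsd": "freebsd", "freeBSD": "freebsd", "FreeBSD": "freebsd",
    "Gentoo": "gentoo", "org.gentoo": "gentoo",
    "mandrakelinux": "mandrake", "MandrakeLinux": "mandrake",
    "mandrivalinux": "mandriva",
    "mandrivaent": "mes", "mandrivaES": "mes",
    "ms-dos": "msdos",
    "NetBSD": "netbsd", "org.netbsd": "netbsd",
    "novell": "netware", "NetWare": "netware",
    "OpenBSD": "openbsd", "org.openbsd": "openbsd",
    "OpenSolaris": "opensolaris", "org.opensolaris": "opensolaris",
    "suse": "opensuse", "SuSE": "opensuse", "org.opensuse": "opensuse",
    "redhat": "rhel", "RedHat": "rhel", "com.redhat": "rhel",
    "com.suse": "sled",
    "Ubuntu": "ubuntu", "com.ubuntu": "ubuntu", "org.ubuntu": "ubuntu",
    "canonical": "ubuntu",
    "com.microsoft.server": "windows", "windoze": "windows",
}

def normalize_os_distro(value):
    """Map a deprecated alias to its recognized value, or pass through."""
    return _DEPRECATED_OS_DISTRO.get(value, value)
```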
207 ``os_version``
208 :Type: str
209
210 The operating system version as specified by the distributor.
211
212 The value must be a valid version number (for example, ``11.10``).
213
214 ``os_secure_boot``
215 :Type: str
216
217 Secure Boot is a security standard. When the instance starts,
218 Secure Boot first examines software such as firmware and OS by their
219 signature and only allows them to run if the signatures are valid.
220
221 For Hyper-V: Images must be prepared as Generation 2 VMs. Instance must
222 also contain ``hw_machine_type=hyperv-gen2`` image property. Linux
223 guests will also require bootloader's digital signature provided as
224 ``os_secure_boot_signature`` and
225 ``hypervisor_version_requires'>=10.0'`` image properties.
226
227 One of:
228
229 * ``required`` - Enable the Secure Boot feature.
230 * ``disabled`` or ``optional`` - (default if property not used) Disable the
231 Secure Boot feature.
232
233 ``os_shutdown_timeout``
234 :Type: int
235
236 By default, guests will be given 60 seconds to perform a graceful
237 shutdown. After that, the VM is powered off. This property allows
238 overriding the amount of time (unit: seconds) to allow a guest OS to
239 cleanly shut down before power off. A value of 0 (zero) means the guest
240 will be powered off immediately with no opportunity for guest OS
241 clean-up.
242
243 ``ramdisk_id``
244 The ID of an image stored in the Image service that should be used as the
245 ramdisk when booting an AMI-style image.
246
247 The value must be a valid image ID.
248
249 ``trait:<trait_name>``
250 :Type: str
251
252 Added in the Rocky release. Functionality is similar to traits specified
253 in `flavor extra specs <https://docs.openstack.org/nova/latest/user/flavors.html#extra-specs>`_.
254
255 Traits allow specifying a server to build on a compute node with the set
256 of traits specified in the image. The traits are associated with the
257 resource provider that represents the compute node in the Placement API.
258
259 The syntax of specifying traits is **trait:<trait_name>=value**, for
260 example:
261
262 * ``trait:HW_CPU_X86_AVX2=required``
263 * ``trait:STORAGE_DISK_SSD=required``
264
265 The nova scheduler will pass required traits specified on the image to
266 the Placement API to include only resource providers that can satisfy
267 the required traits. Traits for the resource providers can be managed
268 using the `osc-placement plugin <https://docs.openstack.org/osc-placement/latest/index.html>`_.
269
270 Image traits are used by the nova scheduler even for volume-backed
271 instances, if the volume source is an image with traits.
272
273 The only valid value is ``required``. Any other value is invalid.
274
275 One of:
276
277 * ``required`` - <trait_name> is required on the resource provider that
278 represents the compute node on which the image is launched.
279
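As a sketch, a client could validate a trait property before setting it. The ``trait:<trait_name>=required`` form comes from the text above; the uppercase-with-underscores pattern for trait names is an assumption based on Placement trait naming, and the helper itself is hypothetical:

```python
import re

# Illustrative parser for trait image properties as described above.
# The key/value form comes from this document; the trait-name character
# set is an assumption based on Placement trait naming conventions.
_TRAIT_KEY = re.compile(r"^trait:([A-Z0-9_]+)$")

def parse_trait_property(key, value):
    """Return the trait name if key/value form a valid trait property."""
    match = _TRAIT_KEY.match(key)
    if match is None:
        raise ValueError("not a trait property key: %r" % key)
    if value != "required":
        raise ValueError("only 'required' is a valid trait value, got %r" % value)
    return match.group(1)
```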
280 ``vm_mode``
281 :Type: str
282
283 The virtual machine mode. This represents the host/guest ABI
284 (application binary interface) used for the virtual machine.
285
286 One of:
287
288 * ``hvm`` - Fully virtualized. This is the mode used by QEMU and KVM.
289 * ``xen`` - Xen 3.0 paravirtualized.
290 * ``uml`` - User Mode Linux paravirtualized.
291 * ``exe`` - Executables in containers. This is the mode used by LXC.
292
293 ``hw_cpu_sockets``
294 :Type: int
295
296 The preferred number of sockets to expose to the guest.
297
298 Only supported by the libvirt driver.
299
300 ``hw_cpu_cores``
301 :Type: int
302
303 The preferred number of cores to expose to the guest.
304
305 Only supported by the libvirt driver.
306
307 ``hw_cpu_threads``
308 :Type: int
309
310 The preferred number of threads to expose to the guest.
311
312 Only supported by the libvirt driver.
313
314 ``hw_cpu_policy``
315 :Type: str
316
317 Used to pin the virtual CPUs (vCPUs) of instances to the host’s
318 physical CPU cores (pCPUs). Host aggregates should be used to separate
319 these pinned instances from unpinned instances as the latter will not
320 respect the resourcing requirements of the former.
321
322 Only supported by the libvirt driver.
323
324 One of:
325
326 * ``shared`` - (default if property not specified) The guest vCPUs will be
327 allowed to freely float across host pCPUs, albeit potentially constrained
328 by NUMA policy.
329 * ``dedicated`` - The guest vCPUs will be strictly pinned to a set of
330 host pCPUs. In the absence of an explicit vCPU topology request, the
331 drivers typically expose all vCPUs as sockets with one core and one
332 thread. When strict CPU pinning is in effect the guest CPU topology
333 will be setup to match the topology of the CPUs to which it is pinned.
334 This option implies an overcommit ratio of 1.0. For example, if a two
335 vCPU guest is pinned to a single host core with two threads, then the
336 guest will get a topology of one socket, one core, two threads.
337
338 ``hw_cpu_thread_policy``
339 :Type: str
340
341 Further refine ``hw_cpu_policy=dedicated`` by stating how hardware CPU
342 threads in a simultaneous multithreading-based (SMT) architecture should be
343 used. SMT-based architectures include Intel processors with
344 Hyper-Threading technology. In these architectures, processor cores
345 share a number of components with one or more other cores. Cores in
346 such architectures are commonly referred to as hardware threads, while
347 the cores that a given core shares components with are known as thread
348 siblings.
349
350 Only supported by the libvirt driver.
351
352 One of:
353
354 * ``prefer`` - (default if property not specified) The host may or may not
355 have an SMT architecture. Where an SMT architecture is present, thread
356 siblings are preferred.
357 * ``isolate`` - The host must not have an SMT architecture or must
358 emulate a non-SMT architecture. If the host does not have an SMT
359 architecture, each vCPU is placed on a different core as expected. If
360 the host does have an SMT architecture - that is, one or more cores
361 have thread siblings - then each vCPU is placed on a different
362 physical core. No vCPUs from other guests are placed on the same core.
363 All but one thread sibling on each utilized core is therefore
364 guaranteed to be unusable.
365 * ``require`` - The host must have an SMT architecture. Each vCPU is
366 allocated on thread siblings. If the host does not have an SMT
367 architecture, then it is not used. If the host has an SMT
368 architecture, but not enough cores with free thread siblings are
369 available, then scheduling fails.
370
371 ``hw_cdrom_bus``
372 :Type: str
373
374 Specifies the type of disk controller to attach CD-ROM devices to.
375 Valid values are the same as for ``hw_disk_bus``.
376
377 Only supported by the libvirt driver.
378
379 ``hw_disk_bus``
380 :Type: str
381
382 Specifies the type of disk controller to attach disk devices to.
383
384 Only supported by the libvirt driver.
385
386 Options depend on the value of `nova's virt_type config option
387 <https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.virt_type>`_:
388
389 * For ``qemu`` and ``kvm``: one of ``scsi``, ``virtio``,
390 ``uml``, ``xen``, ``ide``, ``usb``, or ``lxc``.
391 * For ``xen``: one of ``xen`` or ``ide``.
392 * For ``uml``: must be ``uml``.
393 * For ``lxc``: must be ``lxc``.
394 * For ``parallels``: one of ``ide`` or ``scsi``.
395
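The per-``virt_type`` options above can be restated as a lookup table. The mapping repeats the list in this document; the helper is illustrative and not part of Glance or Nova:

```python
# hw_disk_bus options from the list above, keyed by nova's virt_type.
# Illustrative only; not part of Glance or Nova.
ALLOWED_DISK_BUSES = {
    "qemu": {"scsi", "virtio", "uml", "xen", "ide", "usb", "lxc"},
    "kvm": {"scsi", "virtio", "uml", "xen", "ide", "usb", "lxc"},
    "xen": {"xen", "ide"},
    "uml": {"uml"},
    "lxc": {"lxc"},
    "parallels": {"ide", "scsi"},
}

def disk_bus_allowed(virt_type, bus):
    """True if the given hw_disk_bus value is listed for this virt_type."""
    return bus in ALLOWED_DISK_BUSES.get(virt_type, set())
```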
396 ``hw_firmware_type``
397 Specifies the type of firmware with which to boot the guest.
398
399 Only supported by the libvirt driver.
400
401 One of:
402
403 * ``bios``
404 * ``uefi``
405
406 ``hw_mem_encryption``
407 :Type: bool
408
409 Enables encryption of guest memory at the hardware level, if
410 there are compute hosts available which support this. See
411 `nova's documentation on configuration of the KVM hypervisor
412 <https://docs.openstack.org/nova/latest/admin/configuration/hypervisor-kvm.html#amd-sev-secure-encrypted-virtualization>`_
413 for more details.
414
415 Only supported by the libvirt driver.
416
417 ``hw_pointer_model``
418 :Type: str
419
420 Input devices that allow interaction with a graphical framebuffer,
421 for example to provide a graphic tablet for absolute cursor movement.
422 Currently only supported by the KVM/QEMU hypervisor configuration
423 and VNC or SPICE consoles must be enabled.
424
425 Only supported by the libvirt driver.
426
427 One of:
428
429 - ``usbtablet``
430
431 ``hw_rng_model``
432 :Type: str
433
434 Adds a random-number generator device to the image's instances. This
435 image property by itself does not guarantee that a hardware RNG will be
436 used; it expresses a preference that may or may not be satisfied
437 depending upon Nova configuration.
438
439 The cloud administrator can enable and control device behavior by
440 configuring the instance's flavor. By default:
441
442 * The generator device is disabled.
443 * ``/dev/urandom`` is used as the default entropy source. To
444 specify a physical hardware RNG device, use the following option in
445 the ``nova.conf`` file:
446
447 .. code-block:: ini
448
449 rng_dev_path=/dev/hwrng
450
451 * The use of a hardware random number generator must be configured in a
452 flavor's extra_specs by setting ``hw_rng:allowed`` to True in the
453 flavor definition.
454
455 Only supported by the libvirt driver.
456
457 One of:
458
459 - ``virtio``
460 - Other supported device.
461
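As a sketch of how these settings fit together (the image and flavor names
below are hypothetical), the image property requests the device model while
the flavor extra spec permits its use:

.. code-block:: console

   $ openstack image set --property hw_rng_model=virtio my-image
   $ openstack flavor set --property hw_rng:allowed=true m1.small
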
462 ``hw_time_hpet``
463 :Type: bool
464
465 Adds support for the High Precision Event Timer (HPET) for x86 guests
466 in the libvirt driver when ``hypervisor_type=qemu`` and
467 ``architecture=i686`` or ``architecture=x86_64``. The timer can be
468 enabled by setting ``hw_time_hpet=true``. By default HPET remains
469 disabled.
470
471 Only supported by the libvirt driver.
472
473 ``hw_machine_type``
474 :Type: str
475
476 For libvirt: Enables booting an ARM system using the specified
477 machine type. If an ARM image is used and its machine type is
478 not explicitly specified, then Compute uses the ``virt`` machine
479 type as the default for ARMv7 and AArch64.
480
481 For Hyper-V: Specifies whether the Hyper-V instance will be a generation
482 1 or generation 2 VM. By default, if the property is not provided, the
483 instances will be generation 1 VMs. If the image is specific for
484 generation 2 VMs but the property is not provided accordingly, the
485 instance will fail to boot.
486
487 For libvirt: Valid types can be viewed by using the
488 :command:`virsh capabilities` command (machine types are displayed in
489 the ``machine`` tag).
490
491 For hyper-V: Acceptable values are either ``hyperv-gen1`` or
492 ``hyperv-gen2``.
493
494 Only supported by the libvirt and Hyper-V drivers.
495
496 ``os_type``
497 :Type: str
498
499 The operating system installed on the image. The ``libvirt`` API driver
500 contains logic that takes different actions
501 depending on the value of the ``os_type`` parameter of the image.
502 For example, for ``os_type=windows`` images, it creates a FAT32-based
503 swap partition instead of a Linux swap partition, and it limits the
504 injected host name to less than 16 characters.
505
506 Only supported by the libvirt driver.
507
508 One of:
509
510 * ``linux``
511 * ``windows``
512
513 ``hw_scsi_model``
514 :Type: str
515
516 Enables the use of VirtIO SCSI (``virtio-scsi``) to provide block
517 device access for compute instances; by default, instances use VirtIO
518 Block (``virtio-blk``). VirtIO SCSI is a para-virtualized SCSI
519 controller device that provides improved scalability and performance,
520 and supports advanced SCSI hardware.
521
522 Only supported by the libvirt driver.
523
524 One of:
525
526 * ``virtio-scsi``
527
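A minimal sketch (the image name is hypothetical); ``virtio-scsi`` is
typically paired with ``hw_disk_bus=scsi`` so that the instance's disks
attach to the SCSI controller:

.. code-block:: console

   $ openstack image set --property hw_scsi_model=virtio-scsi \
       --property hw_disk_bus=scsi my-image
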
528 ``hw_serial_port_count``
529 :Type: int
530
531 Specifies the count of serial ports that should be provided. If
532 ``hw:serial_port_count`` is not set in the flavor's extra_specs, then
533 any count is permitted. If ``hw:serial_port_count`` is set, then this
534 provides the default serial port count. It is permitted to override the
535 default serial port count, but only with a lower value.
536
537 Only supported by the libvirt driver.
538
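For example (the flavor and image names are hypothetical), a flavor may set a
default of four serial ports while an image overrides it with a lower count:

.. code-block:: console

   $ openstack flavor set --property hw:serial_port_count=4 m1.small
   $ openstack image set --property hw_serial_port_count=2 my-image
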
539 ``hw_video_model``
540 :Type: str
541
542 The graphic device model presented to the guest. ``none`` disables the
543 graphics device in the guest and should generally be used when using GPU
544 passthrough.
545
546 One of:
547
548 * ``vga``
549 * ``cirrus``
550 * ``vmvga``
551 * ``xen``
552 * ``qxl``
553 * ``virtio``
554 * ``gop``
555 * ``none``
556
557 Only supported by the libvirt driver.
558
559 ``hw_video_ram``
560 :Type: int
561
562 Maximum RAM in MB for the video image. Used only if a ``hw_video:ram_max_mb``
563 value has been set in the flavor's extra_specs and that value is higher
564 than the value set in ``hw_video_ram``.
565
566 Only supported by the libvirt driver.
567
568 ``hw_watchdog_action``
569 :Type: str
570
571 Enables a virtual hardware watchdog device that carries out the
572 specified action if the server hangs. The watchdog uses the
573 ``i6300esb`` device (emulating a PCI Intel 6300ESB). If
574 ``hw_watchdog_action`` is not specified, the watchdog is disabled.
575
576 Only supported by the libvirt driver.
577
578 One of:
579
580 * ``disabled`` - (default) The device is not attached. Allows the user to
581 disable the watchdog for the image, even if it has been enabled using
582 the image's flavor.
583 * ``reset`` - Forcefully reset the guest.
584 * ``poweroff`` - Forcefully power off the guest.
585 * ``pause`` - Pause the guest.
586 * ``none`` - Only enable the watchdog; do nothing if the server hangs.
587
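As a sketch (the image name is hypothetical), a watchdog that forcefully
resets a hung guest can be requested with:

.. code-block:: console

   $ openstack image set --property hw_watchdog_action=reset my-image
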
588 ``os_command_line``
589 :Type: str
590
591 The kernel command line to be used by the ``libvirt`` driver, instead
592 of the default. For Linux Containers (LXC), the value is used as
593 arguments for initialization. This key is valid only for Amazon kernel,
594 ``ramdisk``, or machine images (``aki``, ``ari``, or ``ami``).
595
596 Only supported by the libvirt driver.
597
598 ``hw_vif_model``
599 :Type: str
600
601 Specifies the model of virtual network interface device to use.
602
603 Only supported by the libvirt driver and VMware API drivers.
604
605 The valid options depend on the configured hypervisor.
606
607 * ``KVM`` and ``QEMU``: ``e1000``, ``ne2k_pci``, ``pcnet``,
608 ``rtl8139``, and ``virtio``.
609 * VMware: ``e1000``, ``e1000e``, ``VirtualE1000``, ``VirtualE1000e``,
610 ``VirtualPCNet32``, ``VirtualSriovEthernetCard``, and
611 ``VirtualVmxnet``.
612 * Xen: ``e1000``, ``netfront``, ``ne2k_pci``, ``pcnet``, and
613 ``rtl8139``.
614
615 ``hw_vif_multiqueue_enabled``
616 :Type: bool
617
618 If ``true``, this enables the ``virtio-net multiqueue`` feature. In
619 this case, the driver sets the number of queues equal to the number
620 of guest vCPUs. This makes the network performance scale across a
621 number of vCPUs.
622
623 Only supported by the libvirt driver.
624
625 ``hw_boot_menu``
626 :Type: bool
627
628 If ``true``, enables the BIOS bootmenu. In cases where both the image
629 metadata and Extra Spec are set, the Extra Spec setting is used. This
630 allows for flexibility in setting/overriding the default behavior as
631 needed.
632
633 Only supported by the libvirt driver.
634
635 ``hw_pmu``
636 :Type: bool
637
638 Controls emulation of a virtual performance monitoring unit (vPMU) in the
639 guest. To reduce latency in realtime workloads disable the vPMU by setting
640 ``hw_pmu=false``.
641
642 Only supported by the libvirt driver.
643
644 ``img_hide_hypervisor_id``
645 :Type: bool
646
647 Some hypervisors add a signature to their guests. While the presence
648 of the signature can enable some paravirtualization features on the
649 guest, it can also have the effect of preventing some drivers from
650 loading. Hiding the signature by setting this property to ``true``
651 may allow such drivers to load and work.
652
653 Only supported by the libvirt driver.
654
655 ``vmware_adaptertype``
656 :Type: str
657
658 The virtual SCSI or IDE controller used by the hypervisor.
659
660 Only supported by the VMWare API driver.
661
662 One of:
663
664 * ``lsiLogic``
665 * ``lsiLogicsas``
666 * ``busLogic``
667 * ``ide``
668 * ``paraVirtual``
669
670 ``vmware_ostype``
671 A VMware GuestID which describes the operating system installed in
672 the image. This value is passed to the hypervisor when creating a
673 virtual machine. If not specified, the key defaults to ``otherGuest``.
674 See `thinkvirt.com <http://www.thinkvirt.com/?q=node/181>`_ for supported
675 values.
676
677 Only supported by the VMWare API driver.
678
679 ``vmware_image_version``
680 :Type: int
681
682 Currently unused.
683
684 ``instance_type_rxtx_factor``
685 :Type: float
686
687 Deprecated and currently unused.
688
689 ``auto_disk_config``
690 :Type: bool
691
692 Deprecated and currently unused.
102102 * ``/etc``
103103
104104 All options set in ``glance-manage.conf`` override those set in
105 ``glance-registry.conf`` and ``glance-api.conf``.
105 ``glance-api.conf``.
+0
-40
doc/source/cli/glanceregistry.rst less more
0 ===============
1 glance-registry
2 ===============
3
4 --------------------------------------
5 Server for the Glance Registry Service
6 --------------------------------------
7
8 .. include:: header.txt
9
10 .. include:: ../deprecate-registry.inc
11
12
13 SYNOPSIS
14 ========
15
16 ::
17
18 glance-registry [options]
19
20 DESCRIPTION
21 ===========
22
23 glance-registry is a server daemon that serves image metadata through a
24 REST-like API.
25
26 OPTIONS
27 =======
28
29 **General options**
30
31 .. include:: general_options.txt
32
33 FILES
34 =====
35
36 **/etc/glance/glance-registry.conf**
37 Default configuration file for Glance Registry
38
39 .. include:: footer.txt
0 # -*- coding: utf-8 -*-
10 # Copyright (c) 2010 OpenStack Foundation.
21 #
32 # Licensed under the Apache License, Version 2.0 (the "License");
1312 # See the License for the specific language governing permissions and
1413 # limitations under the License.
1514
16 #
17 # Glance documentation build configuration file, created by
18 # sphinx-quickstart on Tue May 18 13:50:15 2010.
19 #
20 # This file is execfile()'d with the current directory set to its containing
21 # dir.
22 #
23 # Note that not all possible configuration values are present in this
24 # autogenerated file.
25 #
26 # All configuration values have a default; values that are commented out
27 # serve to show the default.
15 # Glance documentation build configuration file
2816
2917 import os
3018 import sys
3220 # If extensions (or modules to document with autodoc) are in another directory,
3321 # add these directories to sys.path here. If the directory is relative to the
3422 # documentation root, use os.path.abspath to make it absolute, like shown here.
35 sys.path = [
36 os.path.abspath('../..'),
37 os.path.abspath('../../bin')
38 ] + sys.path
23 sys.path.insert(0, os.path.abspath('../..'))
24 sys.path.insert(0, os.path.abspath('../../bin'))
3925
4026 # -- General configuration ---------------------------------------------------
4127
4228 # Add any Sphinx extension module names here, as strings. They can be
4329 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
44 extensions = ['stevedore.sphinxext',
45 'sphinx.ext.viewcode',
46 'oslo_config.sphinxext',
47 'oslo_config.sphinxconfiggen',
48 'openstackdocstheme',
49 'sphinxcontrib.apidoc',
50 ]
30 extensions = [
31 'stevedore.sphinxext',
32 'sphinx.ext.viewcode',
33 'oslo_config.sphinxext',
34 'oslo_config.sphinxconfiggen',
35 'openstackdocstheme',
36 'sphinxcontrib.apidoc',
37 ]
5138
5239 # openstackdocstheme options
53 repository_name = 'openstack/glance'
54 bug_project = 'glance'
55 bug_tag = 'documentation'
40 openstackdocs_repo_name = 'openstack/glance'
41 openstackdocs_bug_project = 'glance'
42 openstackdocs_bug_tag = 'documentation'
5643
5744 # sphinxcontrib.apidoc options
5845 apidoc_module_dir = '../../glance'
7360 '_static/glance-cache'),
7461 ('../../etc/oslo-config-generator/glance-manage.conf',
7562 '_static/glance-manage'),
76 ('../../etc/oslo-config-generator/glance-registry.conf',
77 '_static/glance-registry'),
7863 ('../../etc/oslo-config-generator/glance-scrubber.conf',
7964 '_static/glance-scrubber'),
8065 ]
81
82
83 # Add any paths that contain templates here, relative to this directory.
84 # templates_path = []
85
86 # The suffix of source filenames.
87 source_suffix = '.rst'
88
89 # The encoding of source files.
90 #source_encoding = 'utf-8'
9166
9267 # The master toctree document.
9368 master_doc = 'index'
9570 # General information about the project.
9671 copyright = u'2010-present, OpenStack Foundation.'
9772
98 # The language for content autogenerated by Sphinx. Refer to documentation
99 # for a list of supported languages.
100 #language = None
101
102 # There are two options for replacing |today|: either, you set today to some
103 # non-false value, then it is used:
104 #today = ''
105 # Else, today_fmt is used as the format for a strftime call.
106 #today_fmt = '%B %d, %Y'
107
108 # List of documents that shouldn't be included in the build.
109 #unused_docs = []
110
111 # List of directories, relative to source directory, that shouldn't be searched
112 # for source files.
113 #exclude_trees = ['api']
11473 exclude_patterns = [
11574 # The man directory includes some snippet files that are included
11675 # in other documents during the build but that should not be
12180 'cli/openstack_options.txt',
12281 ]
12382
124 # The reST default role (for this markup: `text`) to use for all documents.
125 #default_role = None
126
127 # If true, '()' will be appended to :func: etc. cross-reference text.
128 #add_function_parentheses = True
129
13083 # If true, the current module name will be prepended to all description
13184 # unit titles (such as .. function::).
13285 add_module_names = True
13689 show_authors = True
13790
13891 # The name of the Pygments (syntax highlighting) style to use.
139 pygments_style = 'sphinx'
92 pygments_style = 'native'
14093
14194 # A list of ignored prefixes for module index sorting.
14295 modindex_common_prefix = ['glance.']
161114 [u'OpenStack'], 1),
162115 ('cli/glancemanage', 'glance-manage', u'Glance Management Utility',
163116 [u'OpenStack'], 1),
164 ('cli/glanceregistry', 'glance-registry', u'Glance Registry Server',
165 [u'OpenStack'], 1),
166117 ('cli/glancereplicator', 'glance-replicator', u'Glance Replicator',
167118 [u'OpenStack'], 1),
168119 ('cli/glancescrubber', 'glance-scrubber', u'Glance Scrubber Service',
178129 # html_theme = '_theme'
179130 html_theme = 'openstackdocs'
180131
181 # Theme options are theme-specific and customize the look and feel of a theme
182 # further. For a list of options available for each theme, see the
183 # documentation.
184 #html_theme_options = {}
185
186 # Add any paths that contain custom themes here, relative to this directory.
187 #html_theme_path = ['_theme']
188
189 # The name for this set of Sphinx documents. If None, it defaults to
190 # "<project> v<release> documentation".
191 #html_title = None
192
193 # A shorter title for the navigation bar. Default is the same as html_title.
194 #html_short_title = None
195
196 # The name of an image file (relative to this directory) to place at the top
197 # of the sidebar.
198 #html_logo = None
199
200 # The name of an image file (within the static path) to use as favicon of the
201 # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
202 # pixels large.
203 #html_favicon = None
204
205132 # Add any paths that contain custom static files (such as style sheets) here,
206133 # relative to this directory. They are copied after the builtin static files,
207134 # so a file named "default.css" will overwrite the builtin "default.css".
208135 html_static_path = ['_static']
209
210 # If true, SmartyPants will be used to convert quotes and dashes to
211 # typographically correct entities.
212 #html_use_smartypants = True
213
214 # Custom sidebar templates, maps document names to template names.
215 #html_sidebars = {}
216
217 # Additional templates that should be rendered to pages, maps page names to
218 # template names.
219 #html_additional_pages = {}
220136
221137 # Add any paths that contain "extra" files, such as .htaccess or
222138 # robots.txt.
228144 # If false, no index is generated.
229145 html_use_index = True
230146
231 # If true, the index is split into individual pages for each letter.
232 #html_split_index = False
233
234 # If true, links to the reST sources are added to the pages.
235 #html_show_sourcelink = True
236
237 # If true, an OpenSearch description file will be output, and all pages will
238 # contain a <link> tag referring to it. The value of this option must be the
239 # base URL from which the finished HTML is served.
240 #html_use_opensearch = ''
241
242 # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml").
243 #html_file_suffix = ''
244
245 # Output file base name for HTML help builder.
246 htmlhelp_basename = 'glancedoc'
247
248147
249148 # -- Options for LaTeX output ------------------------------------------------
250
251 # The paper size ('letter' or 'a4').
252 #latex_paper_size = 'letter'
253
254 # The font size ('10pt', '11pt' or '12pt').
255 #latex_font_size = '10pt'
256149
257150 # Grouping the document tree into LaTeX files. List of tuples
258151 # (source start file, target name, title, author,
261154 ('index', 'Glance.tex', u'Glance Documentation',
262155 u'Glance Team', 'manual'),
263156 ]
264
265 # The name of an image file (relative to this directory) to place at the top of
266 # the title page.
267 #latex_logo = None
268
269 # For "manual" documents, if this is true, then toplevel headings are parts,
270 # not chapters.
271 #latex_use_parts = False
272
273 # Additional stuff for the LaTeX preamble.
274 #latex_preamble = ''
275
276 # Documents to append as an appendix to all manuals.
277 #latex_appendices = []
278
279 # If false, no module index is generated.
280 #latex_use_modindex = True
911911 Configuring the Cinder Storage Backend
912912 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
913913
914 **Note**: Currently Cinder store is experimental. Current deployers should be
915 aware that the use of it in production right now may be risky. It is expected
916 to work well with most iSCSI Cinder backends such as LVM iSCSI, but will not
917 work with some backends especially if they don't support host-attach.
918
919914 **Note**: To create a Cinder volume from an image in this store quickly, additional
920915 settings are required. Please see the
921916 `Volume-backed image <https://docs.openstack.org/cinder/latest/admin/blockstorage-volume-backed-image.html>`_
10551050 `This option is specific to the Cinder storage backend.`
10561051
10571052 Path to the rootwrap configuration file to use for running commands as root.
1053
1054 Configuring multiple Cinder Storage Backends
1055 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1056
1057 From Victoria onwards Glance fully supports configuring multiple cinder
1058 backends, and the user/operator decides which cinder backend to use. When
1059 using cinder as a store for glance, the operator may configure which volume
1060 types to use by setting the ``enabled_backends`` configuration option in
1061 ``glance-api.conf``. For each of the stores defined in ``enabled_backends``,
1062 the administrator must set a specific volume type via the
1063 ``cinder_volume_type`` configuration option in that store's own config section.
1064
1065 **NOTE** Even though in cinder one backend can be associated with multiple
1066 volume types, glance supports only one store per cinder volume type.
1067
1068 Below are some multiple cinder store configuration examples.
1069
1070 Example 1: Fresh deployment
1071
1072 For example, if cinder has 2 volume types configured, `fast` and `slow`, then
1073 the glance configuration should look like::
1074
1075 [DEFAULT]
1076 # list of enabled stores identified by their property group name
1077 enabled_backends = fast:cinder, slow:cinder
1078
1079 # the default store, if not set glance-api service will not start
1080 [glance_store]
1081 default_backend = fast
1082
1083 # conf props for fast store instance
1084 [fast]
1085 rootwrap_config = /etc/glance/rootwrap.conf
1086 cinder_volume_type = glance-fast
1087 description = LVM based cinder store
1088 cinder_catalog_info = volumev2::publicURL
1089 cinder_store_auth_address = http://localhost/identity/v3
1090 cinder_store_user_name = glance
1091 cinder_store_password = admin
1092 cinder_store_project_name = service
1093 # etc..
1094
1095 # conf props for slow store instance
1096 [slow]
1097 rootwrap_config = /etc/glance/rootwrap.conf
1098 cinder_volume_type = glance-slow
1099 description = NFS based cinder store
1100 cinder_catalog_info = volumev2::publicURL
1101 cinder_store_auth_address = http://localhost/identity/v3
1102 cinder_store_user_name = glance
1103 cinder_store_password = admin
1104 cinder_store_project_name = service
1105 # etc..
1106
1107 Example 2: Upgrade from a single cinder store to multiple cinder stores. If
1108 `default_volume_type` is set in `cinder.conf` and `cinder_volume_type` is
1109 also set in `glance-api.conf`, then the operator needs to create one store in
1110 glance where `cinder_volume_type` is the same as in the old glance configuration::
1111
1112 # cinder.conf
1113 The glance administrator has to find out what the default volume-type is
1114 in the cinder installation, so they need to check with the cinder admin
1115 or cloud admin to identify the default volume-type, and then explicitly
1116 configure that as the value of ``cinder_volume_type``.
1117
1118 Example config before upgrade::
1119
1120 # old configuration in glance
1121 [glance_store]
1122 stores = cinder, file, http
1123 default_store = cinder
1124 cinder_state_transition_timeout = 300
1125 rootwrap_config = /etc/glance/rootwrap.conf
1126 cinder_catalog_info = volumev2::publicURL
1127 cinder_volume_type = glance-old
1128
1129 Example config after upgrade::
1130
1131 # new configuration in glance
1132 [DEFAULT]
1133 enabled_backends = old:cinder, new:cinder
1134
1135 [glance_store]
1136 default_backend = new
1137
1138 [new]
1139 rootwrap_config = /etc/glance/rootwrap.conf
1140 cinder_volume_type = glance-new
1141 description = LVM based cinder store
1142 cinder_catalog_info = volumev2::publicURL
1143 cinder_store_auth_address = http://localhost/identity/v3
1144 cinder_store_user_name = glance
1145 cinder_store_password = admin
1146 cinder_store_project_name = service
1147 # etc..
1148
1149 [old]
1150 rootwrap_config = /etc/glance/rootwrap.conf
1151 cinder_volume_type = glance-old # as per old cinder.conf
1152 description = NFS based cinder store
1153 cinder_catalog_info = volumev2::publicURL
1154 cinder_store_auth_address = http://localhost/identity/v3
1155 cinder_store_user_name = glance
1156 cinder_store_password = admin
1157 cinder_store_project_name = service
1158 # etc..
1159
1160 Example 3: Upgrade from a single cinder store to multiple cinder stores. If
1161 `default_volume_type` is not set in `cinder.conf` and `cinder_volume_type` is
1162 not set in `glance-api.conf` either, then the administrator needs to create
1163 one store in glance that replicates the exact old configuration::
1164
1165 # cinder.conf
1166 The glance administrator has to find out what the default volume-type is
1167 in the cinder installation, so they need to check with the cinder admin
1168 or cloud admin to identify the default volume-type, and then explicitly
1169 configure that as the value of ``cinder_volume_type``.
1170
1171 Example config before upgrade::
1172
1173 # old configuration in glance
1174 [glance_store]
1175 stores = cinder, file, http
1176 default_store = cinder
1177 cinder_state_transition_timeout = 300
1178 rootwrap_config = /etc/glance/rootwrap.conf
1179 cinder_catalog_info = volumev2::publicURL
1180
1181 Example config after upgrade::
1182
1183 # new configuration in glance
1184 [DEFAULT]
1185 enabled_backends = old:cinder, new:cinder
1186
1187 [glance_store]
1188 default_backend = new
1189
1190 # cinder store as per old (single store configuration)
1191 [old]
1192 rootwrap_config = /etc/glance/rootwrap.conf
1193 description = LVM based cinder store
1194 cinder_catalog_info = volumev2::publicURL
1195 cinder_store_auth_address = http://localhost/identity/v3
1196 cinder_store_user_name = glance
1197 cinder_store_password = admin
1198 cinder_store_project_name = service
1199 # etc..
1200
1201 [new]
1202 rootwrap_config = /etc/glance/rootwrap.conf
1203 cinder_volume_type = glance-new
1204 description = NFS based cinder store
1205 cinder_catalog_info = volumev2::publicURL
1206 cinder_store_auth_address = http://localhost/identity/v3
1207 cinder_store_user_name = glance
1208 cinder_store_password = admin
1209 cinder_store_project_name = service
1210 # etc..
1211
1212 Example 4: Upgrade from a single cinder store to multiple cinder stores. If
1213 `default_volume_type` is set in `cinder.conf` but `cinder_volume_type` is
1214 not set in `glance-api.conf`, then the administrator needs to set, on one of
1215 the stores, `cinder_volume_type` to the same value as the
1216 `default_volume_type` set in `cinder.conf`::
1217
1218 # cinder.conf
1219 The glance administrator has to find out what the default volume-type is
1220 in the cinder installation, so they need to check with the cinder admin
1221 or cloud admin to identify the default volume-type, and then explicitly
1222 configure that as the value of ``cinder_volume_type``.
1223
1224 Example config before upgrade::
1225
1226 # old configuration in glance
1227 [glance_store]
1228 stores = cinder, file, http
1229 default_store = cinder
1230 cinder_state_transition_timeout = 300
1231 rootwrap_config = /etc/glance/rootwrap.conf
1232 cinder_catalog_info = volumev2::publicURL
1233
1234 Example config after upgrade::
1235
1236 # new configuration in glance
1237 [DEFAULT]
1238 enabled_backends = old:cinder,new:cinder
1239
1240 [glance_store]
1241 default_backend = old
1242
1243 [old]
1244 rootwrap_config = /etc/glance/rootwrap.conf
1245 cinder_volume_type = glance-old # as per old cinder.conf
1246 description = LVM based cinder store
1247 cinder_catalog_info = volumev2::publicURL
1248 cinder_store_auth_address = http://localhost/identity/v3
1249 cinder_store_user_name = glance
1250 cinder_store_password = admin
1251 cinder_store_project_name = service
1252 # etc..
1253
1254 [new]
1255 rootwrap_config = /etc/glance/rootwrap.conf
1256 cinder_volume_type = glance-new
1257 description = NFS based cinder store
1258 cinder_catalog_info = volumev2::publicURL
1259 cinder_store_auth_address = http://localhost/identity/v3
1260 cinder_store_user_name = glance
1261 cinder_store_password = admin
1262 cinder_store_project_name = service
1263 # etc..
1264
1265 While upgrading from a single cinder store to multiple cinder stores, location
1266 URLs for legacy images will be changed from ``cinder://volume-id`` to
1267 ``cinder://store-name/volume-id``.
1268
1269 **Note** After upgrading from a single cinder store to multiple cinder
1270 stores, the first ``image-list``, ``GET``, or ``image-show`` call for an
1271 image will take additional time, because a lazy-loading operation is
1272 performed to update the legacy image location URL to the new format.
1273 Subsequent ``GET``, ``image-list``, or ``image-show`` calls will perform
1274 as they did before.
10581275
10591276 Configuring the VMware Storage Backend
10601277 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+0
-13
doc/source/configuration/glance_registry.rst less more
0 .. _glance-registry.conf:
1
2 --------------------
3 glance-registry.conf
4 --------------------
5
6 .. include:: ../deprecate-registry.inc
7
8 This configuration file controls how the registry server operates. More
9 information can be found in :ref:`configuring-the-glance-registry`.
10
11 .. show-options::
12 :config-file: etc/oslo-config-generator/glance-registry.conf
1515 <../_static/glance-api.conf.sample>`_.
1616
1717 .. literalinclude:: ../_static/glance-api.conf.sample
18
19
20 Sample configuration for Glance Registry
21 ----------------------------------------
22
23 This sample configuration can also be viewed in `glance-registry.conf.sample
24 <../_static/glance-registry.conf.sample>`_.
25
26 .. literalinclude:: ../_static/glance-registry.conf.sample
2718
2819
2920 Sample configuration for Glance Scrubber
4949 language features).
5050 * New features can use Python-3-only language constructs, but bugfixes
5151 likely to be backported should be more conservative and write for
52 Python 2 compatibilty.
52 Python 2 compatibility.
5353 * The code for drivers may continue to use the six compatibility library at
5454 their discretion.
5555 * We will not remove six from mainline Cinder code that impacts the drivers
3636 [composite:rootapp]
3737 paste.composite_factory = glance.api:root_app_factory
3838 /: apiversions
39 /v1: apiv1app
4039 /v2: apiv2app
4140
4241 [app:apiversions]
4342 paste.app_factory = glance.api.versions:create_resource
44
45 [app:apiv1app]
46 paste.app_factory = glance.api.v1.router:API.factory
4743
4844 [app:apiv2app]
4945 paste.app_factory = glance.api.v2.router:API.factory
234234 #
235235 # (integer value)
236236 #image_location_quota = 10
237
238 # DEPRECATED:
239 # Python module path of data access API.
240 #
241 # Specifies the path to the API to use for accessing the data model.
242 # This option determines how the image catalog data will be accessed.
243 #
244 # Possible values:
245 # * glance.db.sqlalchemy.api
246 # * glance.db.registry.api
247 # * glance.db.simple.api
248 #
249 # If this option is set to ``glance.db.sqlalchemy.api`` then the image
250 # catalog data is stored in and read from the database via the
251 # SQLAlchemy Core and ORM APIs.
252 #
253 # Setting this option to ``glance.db.registry.api`` will force all
254 # database access requests to be routed through the Registry service.
255 # This avoids data access from the Glance API nodes for an added layer
256 # of security, scalability and manageability.
257 #
258 # NOTE: In v2 OpenStack Images API, the registry service is optional.
259 # In order to use the Registry API in v2, the option
260 # ``enable_v2_registry`` must be set to ``True``.
261 #
262 # Finally, when this configuration option is set to
263 # ``glance.db.simple.api``, image catalog data is stored in and read
264 # from an in-memory data structure. This is primarily used for testing.
265 #
266 # Related options:
267 # * enable_v2_api
268 # * enable_v2_registry
269 #
270 # (string value)
271 # This option is deprecated for removal since Queens.
272 # Its value may be silently ignored in the future.
273 # Reason:
274 # Glance registry service is deprecated for removal.
275 #
276 # More information can be found from the spec:
277 # http://specs.openstack.org/openstack/glance-
278 # specs/specs/queens/approved/glance/deprecate-registry.html
279 #data_api = glance.db.sqlalchemy.api
280237
281238 #
282239 # The default number of results to return for a request.
455412 #user_storage_quota = 0
456413
457414 #
458 # Deploy the v2 OpenStack Images API.
459 #
460 # When this option is set to ``True``, Glance service will respond
461 # to requests on registered endpoints conforming to the v2 OpenStack
462 # Images API.
463 #
464 # NOTES:
465 # * If this option is disabled, then the ``enable_v2_registry``
466 # option, which is enabled by default, is also recommended
467 # to be disabled.
468 #
469 # Possible values:
470 # * True
471 # * False
472 #
473 # Related options:
474 # * enable_v2_registry
475 #
476 # (boolean value)
477 #enable_v2_api = true
478
479 #
480 # DEPRECATED FOR REMOVAL
481 # (boolean value)
482 #enable_v1_registry = true
483
484 # DEPRECATED:
485 # Deploy the v2 API Registry service.
486 #
487 # When this option is set to ``True``, the Registry service
488 # will be enabled in Glance for v2 API requests.
489 #
490 # NOTES:
491 # * Use of Registry is optional in v2 API, so this option
492 # must only be enabled if both ``enable_v2_api`` is set to
493 # ``True`` and the ``data_api`` option is set to
494 # ``glance.db.registry.api``.
495 #
496 # * If deploying only the v1 OpenStack Images API, this option,
497 # which is enabled by default, should be disabled.
498 #
499 # Possible values:
500 # * True
501 # * False
502 #
503 # Related options:
504 # * enable_v2_api
505 # * data_api
506 #
507 # (boolean value)
508 # This option is deprecated for removal since Queens.
509 # Its value may be silently ignored in the future.
510 # Reason:
511 # Glance registry service is deprecated for removal.
512 #
513 # More information can be found from the spec:
514 # http://specs.openstack.org/openstack/glance-
515 # specs/specs/queens/approved/glance/deprecate-registry.html
516 #enable_v2_registry = true
517
518 #
519415 # Host address of the pydev server.
520416 #
521417 # Provide a string value representing the hostname or IP of the
718614 # roles - <No description provided>
719615 # policies - <No description provided>
720616 #property_protection_rule_format = roles
721
722 #
723 # List of allowed exception modules to handle RPC exceptions.
724 #
725 # Provide a comma separated list of modules whose exceptions are
726 # permitted to be recreated upon receiving exception data via an RPC
727 # call made to Glance. The default list includes
728 # ``glance.common.exception``, ``builtins``, and ``exceptions``.
729 #
730 # The RPC protocol permits interaction with Glance via calls across a
731 # network or within the same system. Including a list of exception
732 # namespaces with this option enables RPC to propagate the exceptions
733 # back to the users.
734 #
735 # Possible values:
736 # * A comma separated list of valid exception modules
737 #
738 # Related options:
739 # * None
740 # (list value)
741 #allowed_rpc_exception_modules = glance.common.exception,builtins,exceptions
742617
743618 #
744619 # IP address to bind the glance servers to.
1117992 # (list value)
1118993 #disabled_notifications =
1119994
1120 # DEPRECATED:
1121 # Address the registry server is hosted on.
1122 #
1123 # Possible values:
1124 # * A valid IP or hostname
1125 #
1126 # Related options:
1127 # * None
1128 #
1129 # (host address value)
1130 # This option is deprecated for removal since Queens.
1131 # Its value may be silently ignored in the future.
1132 # Reason:
1133 # Glance registry service is deprecated for removal.
1134 #
1135 # More information can be found from the spec:
1136 # http://specs.openstack.org/openstack/glance-
1137 # specs/specs/queens/approved/glance/deprecate-registry.html
1138 #registry_host = 0.0.0.0
1139
1140 # DEPRECATED:
1141 # Port the registry server is listening on.
1142 #
1143 # Possible values:
1144 # * A valid port number
1145 #
1146 # Related options:
1147 # * None
1148 #
1149 # (port value)
1150 # Minimum value: 0
1151 # Maximum value: 65535
1152 # This option is deprecated for removal since Queens.
1153 # Its value may be silently ignored in the future.
1154 # Reason:
1155 # Glance registry service is deprecated for removal.
1156 #
1157 # More information can be found from the spec:
1158 # http://specs.openstack.org/openstack/glance-
1159 # specs/specs/queens/approved/glance/deprecate-registry.html
1160 #registry_port = 9191
1161
1162 # DEPRECATED: Whether to pass through the user token when making requests to the
1163 # registry. To prevent failures with token expiration during big files upload,
1164 # it is recommended to set this parameter to False.If "use_user_token" is not in
1165 # effect, then admin credentials can be specified. (boolean value)
1166 # This option is deprecated for removal.
1167 # Its value may be silently ignored in the future.
1168 # Reason: This option was considered harmful and has been deprecated in M
1169 # release. It will be removed in O release. For more information read OSSN-0060.
1170 # Related functionality with uploading big images has been implemented with
1171 # Keystone trusts support.
1172 #use_user_token = true
1173
1174 # DEPRECATED: The administrators user name. If "use_user_token" is not in
1175 # effect, then admin credentials can be specified. (string value)
1176 # This option is deprecated for removal.
1177 # Its value may be silently ignored in the future.
1178 # Reason: This option was considered harmful and has been deprecated in M
1179 # release. It will be removed in O release. For more information read OSSN-0060.
1180 # Related functionality with uploading big images has been implemented with
1181 # Keystone trusts support.
1182 #admin_user = <None>
1183
1184 # DEPRECATED: The administrators password. If "use_user_token" is not in effect,
1185 # then admin credentials can be specified. (string value)
1186 # This option is deprecated for removal.
1187 # Its value may be silently ignored in the future.
1188 # Reason: This option was considered harmful and has been deprecated in M
1189 # release. It will be removed in O release. For more information read OSSN-0060.
1190 # Related functionality with uploading big images has been implemented with
1191 # Keystone trusts support.
1192 #admin_password = <None>
1193
1194 # DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
1195 # not in effect, then admin tenant name can be specified. (string value)
1196 # This option is deprecated for removal.
1197 # Its value may be silently ignored in the future.
1198 # Reason: This option was considered harmful and has been deprecated in M
1199 # release. It will be removed in O release. For more information read OSSN-0060.
1200 # Related functionality with uploading big images has been implemented with
1201 # Keystone trusts support.
1202 #admin_tenant_name = <None>
1203
1204 # DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
1205 # effect and using keystone auth, then URL of keystone can be specified. (string
1206 # value)
1207 # This option is deprecated for removal.
1208 # Its value may be silently ignored in the future.
1209 # Reason: This option was considered harmful and has been deprecated in M
1210 # release. It will be removed in O release. For more information read OSSN-0060.
1211 # Related functionality with uploading big images has been implemented with
1212 # Keystone trusts support.
1213 #auth_url = <None>
1214
1215 # DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
1216 # in effect, then auth strategy can be specified. (string value)
1217 # This option is deprecated for removal.
1218 # Its value may be silently ignored in the future.
1219 # Reason: This option was considered harmful and has been deprecated in M
1220 # release. It will be removed in O release. For more information read OSSN-0060.
1221 # Related functionality with uploading big images has been implemented with
1222 # Keystone trusts support.
1223 #auth_strategy = noauth
1224
1225 # DEPRECATED: The region for the authentication service. If "use_user_token" is
1226 # not in effect and using keystone auth, then region name can be specified.
1227 # (string value)
1228 # This option is deprecated for removal.
1229 # Its value may be silently ignored in the future.
1230 # Reason: This option was considered harmful and has been deprecated in M
1231 # release. It will be removed in O release. For more information read OSSN-0060.
1232 # Related functionality with uploading big images has been implemented with
1233 # Keystone trusts support.
1234 #auth_region = <None>
1235
1236 # DEPRECATED:
1237 # Protocol to use for communication with the registry server.
1238 #
1239 # Provide a string value representing the protocol to use for
1240 # communication with the registry server. By default, this option is
1241 # set to ``http`` and the connection is not secure.
1242 #
1243 # This option can be set to ``https`` to establish a secure connection
1244 # to the registry server. In this case, provide a key to use for the
1245 # SSL connection using the ``registry_client_key_file`` option. Also
1246 # include the CA file and cert file using the options
1247 # ``registry_client_ca_file`` and ``registry_client_cert_file``
1248 # respectively.
1249 #
1250 # Possible values:
1251 # * http
1252 # * https
1253 #
1254 # Related options:
1255 # * registry_client_key_file
1256 # * registry_client_cert_file
1257 # * registry_client_ca_file
1258 #
1259 # (string value)
1260 # Possible values:
1261 # http - <No description provided>
1262 # https - <No description provided>
1263 # This option is deprecated for removal since Queens.
1264 # Its value may be silently ignored in the future.
1265 # Reason:
1266 # Glance registry service is deprecated for removal.
1267 #
1268 # More information can be found from the spec:
1269 # http://specs.openstack.org/openstack/glance-
1270 # specs/specs/queens/approved/glance/deprecate-registry.html
1271 #registry_client_protocol = http
1272
1273 # DEPRECATED:
1274 # Absolute path to the private key file.
1275 #
1276 # Provide a string value representing a valid absolute path to the
1277 # private key file to use for establishing a secure connection to
1278 # the registry server.
1279 #
1280 # NOTE: This option must be set if ``registry_client_protocol`` is
1281 # set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
1282 # environment variable may be set to a filepath of the key file.
1283 #
1284 # Possible values:
1285 # * String value representing a valid absolute path to the key
1286 # file.
1287 #
1288 # Related options:
1289 # * registry_client_protocol
1290 #
1291 # (string value)
1292 # This option is deprecated for removal since Queens.
1293 # Its value may be silently ignored in the future.
1294 # Reason:
1295 # Glance registry service is deprecated for removal.
1296 #
1297 # More information can be found from the spec:
1298 # http://specs.openstack.org/openstack/glance-
1299 # specs/specs/queens/approved/glance/deprecate-registry.html
1300 #
1301 # This option has a sample default set, which means that
1302 # its actual default value may vary from the one documented
1303 # below.
1304 #registry_client_key_file = /etc/ssl/key/key-file.pem
1305
1306 # DEPRECATED:
1307 # Absolute path to the certificate file.
1308 #
1309 # Provide a string value representing a valid absolute path to the
1310 # certificate file to use for establishing a secure connection to
1311 # the registry server.
1312 #
1313 # NOTE: This option must be set if ``registry_client_protocol`` is
1314 # set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
1315 # environment variable may be set to a filepath of the certificate
1316 # file.
1317 #
1318 # Possible values:
1319 # * String value representing a valid absolute path to the
1320 # certificate file.
1321 #
1322 # Related options:
1323 # * registry_client_protocol
1324 #
1325 # (string value)
1326 # This option is deprecated for removal since Queens.
1327 # Its value may be silently ignored in the future.
1328 # Reason:
1329 # Glance registry service is deprecated for removal.
1330 #
1331 # More information can be found from the spec:
1332 # http://specs.openstack.org/openstack/glance-
1333 # specs/specs/queens/approved/glance/deprecate-registry.html
1334 #
1335 # This option has a sample default set, which means that
1336 # its actual default value may vary from the one documented
1337 # below.
1338 #registry_client_cert_file = /etc/ssl/certs/file.crt
1339
1340 # DEPRECATED:
1341 # Absolute path to the Certificate Authority file.
1342 #
1343 # Provide a string value representing a valid absolute path to the
1344 # certificate authority file to use for establishing a secure
1345 # connection to the registry server.
1346 #
1347 # NOTE: This option must be set if ``registry_client_protocol`` is
1348 # set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
1349 # environment variable may be set to a filepath of the CA file.
1350 # This option is ignored if the ``registry_client_insecure`` option
1351 # is set to ``True``.
1352 #
1353 # Possible values:
1354 # * String value representing a valid absolute path to the CA
1355 # file.
1356 #
1357 # Related options:
1358 # * registry_client_protocol
1359 # * registry_client_insecure
1360 #
1361 # (string value)
1362 # This option is deprecated for removal since Queens.
1363 # Its value may be silently ignored in the future.
1364 # Reason:
1365 # Glance registry service is deprecated for removal.
1366 #
1367 # More information can be found from the spec:
1368 # http://specs.openstack.org/openstack/glance-
1369 # specs/specs/queens/approved/glance/deprecate-registry.html
1370 #
1371 # This option has a sample default set, which means that
1372 # its actual default value may vary from the one documented
1373 # below.
1374 #registry_client_ca_file = /etc/ssl/cafile/file.ca
1375
1376 # DEPRECATED:
1377 # Set verification of the registry server certificate.
1378 #
1379 # Provide a boolean value to determine whether or not to validate
1380 # SSL connections to the registry server. By default, this option
1381 # is set to ``False`` and the SSL connections are validated.
1382 #
1383 # If set to ``True``, the connection to the registry server is not
1384 # validated via a certifying authority and the
1385 # ``registry_client_ca_file`` option is ignored. This is the
1386 # registry's equivalent of specifying --insecure on the command line
1387 # using glanceclient for the API.
1388 #
1389 # Possible values:
1390 # * True
1391 # * False
1392 #
1393 # Related options:
1394 # * registry_client_protocol
1395 # * registry_client_ca_file
1396 #
1397 # (boolean value)
1398 # This option is deprecated for removal since Queens.
1399 # Its value may be silently ignored in the future.
1400 # Reason:
1401 # Glance registry service is deprecated for removal.
1402 #
1403 # More information can be found from the spec:
1404 # http://specs.openstack.org/openstack/glance-
1405 # specs/specs/queens/approved/glance/deprecate-registry.html
1406 #registry_client_insecure = false
1407
1408 # DEPRECATED:
1409 # Timeout value for registry requests.
1410 #
1411 # Provide an integer value representing the period of time in seconds
1412 # that the API server will wait for a registry request to complete.
1413 # The default value is 600 seconds.
1414 #
1415 # A value of 0 implies that a request will never timeout.
1416 #
1417 # Possible values:
1418 # * Zero
1419 # * Positive integer
1420 #
1421 # Related options:
1422 # * None
1423 #
1424 # (integer value)
1425 # Minimum value: 0
1426 # This option is deprecated for removal since Queens.
1427 # Its value may be silently ignored in the future.
1428 # Reason:
1429 # Glance registry service is deprecated for removal.
1430 #
1431 # More information can be found from the spec:
1432 # http://specs.openstack.org/openstack/glance-
1433 # specs/specs/queens/approved/glance/deprecate-registry.html
1434 #registry_client_timeout = 600
1435
1436 #
1437 # Send headers received from identity when making requests to
1438 # registry.
1439 #
1440 # Typically, Glance registry can be deployed in multiple flavors,
1441 # which may or may not include authentication. For example,
1442 # ``trusted-auth`` is a flavor that does not require the registry
1443 # service to authenticate the requests it receives. However, the
1444 # registry service may still need a user context to be populated to
1445 # serve the requests. This can be achieved by the caller
1446 # (the Glance API usually) passing through the headers it received
1447 # from authenticating with identity for the same request. The typical
1448 # headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,
1449 # ``X-Identity-Status`` and ``X-Service-Catalog``.
1450 #
1451 # Provide a boolean value to determine whether to send the identity
1452 # headers to provide tenant and user information along with the
1453 # requests to registry service. By default, this option is set to
1454 # ``False``, which means that user and tenant information is not
1455 # available readily. It must be obtained by authenticating. Hence, if
1456 # this is set to ``False``, ``flavor`` must be set to value that
1457 # either includes authentication or authenticated user context.
1458 #
1459 # Possible values:
1460 # * True
1461 # * False
1462 #
1463 # Related options:
1464 # * flavor
1465 #
1466 # (boolean value)
1467 #send_identity_headers = false
1468
1469995 #
1470996 # The amount of time, in seconds, to delay image scrubbing.
1471997 #
19871513 # Related options:
19881514 # * None
19891515 #
1516 # NOTE: You cannot use an encrypted volume_type associated with an NFS backend.
1517 # An encrypted volume stored on an NFS backend will raise an exception whenever
1518 # glance_store tries to write or access image data stored in that volume.
1519 # Consult your Cinder administrator to determine an appropriate volume_type.
1520 #
19901521 # (string value)
19911522 #cinder_volume_type = <None>
19921523
22431774 #
22441775 # Filesystem store metadata file.
22451776 #
2246 # The path to a file which contains the metadata to be returned with
2247 # any location associated with the filesystem store. The file must
2248 # contain a valid JSON object. The object should contain the keys
2249 # ``id`` and ``mountpoint``. The value for both keys should be a
2250 # string.
1777 # The path to a file which contains the metadata to be returned with any
1778 # location associated with the filesystem store. Once this option is set,
1779 # it is used for new images created afterward only - previously existing
1780 # images are not affected.
1782 #
1783 # The file must contain a valid JSON object. The object should contain the keys
1784 # ``id`` and ``mountpoint``. The value for both keys should be a string.
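#
# For example (the values below are purely illustrative, not defaults),
# a valid metadata file could look like:
#
#     {"id": "fs-store-1", "mountpoint": "/var/lib/glance/images"}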
22511785 #
22521786 # Possible values:
22531787 # * A valid path to the store metadata file
23011835 # Minimum value: 1
23021836 #filesystem_store_chunk_size = 65536
23031837
1838 #
1839 # Enable or disable thin provisioning in this backend.
1840 #
1841 # This configuration option enables the feature of not really writing null byte
1842 # sequences to the filesystem; the holes that can appear will automatically
1843 # be interpreted by the filesystem as null bytes, and do not really consume
1844 # your storage.
1845 # Enabling this feature will also speed up image upload and save network traffic
1846 # in addition to saving space in the backend, as null byte sequences are not
1847 # sent over the network.
1848 #
1849 # Possible Values:
1850 # * True
1851 # * False
1852 #
1853 # Related options:
1854 # * None
1855 #
1856 # (boolean value)
1857 #filesystem_thin_provisioning = false
1858
23041859
23051860 [glance.store.http.store]
23061861
24812036 #
24822037 # (integer value)
24832038 #rados_connect_timeout = 0
2039
2040 #
2041 # Enable or disable thin provisioning in this backend.
2042 #
2043 # This configuration option enables the feature of not really writing null byte
2044 # sequences to the RBD backend; the holes that can appear will automatically
2045 # be interpreted by Ceph as null bytes, and do not really consume your storage.
2046 # Enabling this feature will also speed up image upload and save network traffic
2047 # in addition to saving space in the backend, as null byte sequences are not
2048 # sent over the network.
2049 #
2050 # Possible Values:
2051 # * True
2052 # * False
2053 #
2054 # Related options:
2055 # * None
2056 #
2057 # (boolean value)
2058 #rbd_thin_provisioning = false
24842059
24852060
24862061 [glance.store.s3.store]
37783353 # Related options:
37793354 # * None
37803355 #
3356 # NOTE: You cannot use an encrypted volume_type associated with an NFS backend.
3357 # An encrypted volume stored on an NFS backend will raise an exception whenever
3358 # glance_store tries to write or access image data stored in that volume.
3359 # Consult your Cinder administrator to determine an appropriate volume_type.
3360 #
37813361 # (string value)
37823362 #cinder_volume_type = <None>
37833363
38843464 #
38853465 # Filesystem store metadata file.
38863466 #
3887 # The path to a file which contains the metadata to be returned with
3888 # any location associated with the filesystem store. The file must
3889 # contain a valid JSON object. The object should contain the keys
3890 # ``id`` and ``mountpoint``. The value for both keys should be a
3891 # string.
3467 # The path to a file which contains the metadata to be returned with any
3468 # location associated with the filesystem store. Once this option is set,
3469 # it is used for new images created afterward only - previously existing
3470 # images are not affected.
3472 #
3473 # The file must contain a valid JSON object. The object should contain the keys
3474 # ``id`` and ``mountpoint``. The value for both keys should be a string.
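#
# For example (the values below are purely illustrative, not defaults),
# a valid metadata file could look like:
#
#     {"id": "fs-store-1", "mountpoint": "/var/lib/glance/images"}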
38923475 #
38933476 # Possible values:
38943477 # * A valid path to the store metadata file
39433526 #filesystem_store_chunk_size = 65536
39443527
39453528 #
3529 # Enable or disable thin provisioning in this backend.
3530 #
3531 # This configuration option enables the feature of not really writing null byte
3532 # sequences to the filesystem; the holes that can appear will automatically
3533 # be interpreted by the filesystem as null bytes, and do not really consume
3534 # your storage.
3535 # Enabling this feature will also speed up image upload and save network traffic
3536 # in addition to saving space in the backend, as null byte sequences are not
3537 # sent over the network.
3538 #
3539 # Possible Values:
3540 # * True
3541 # * False
3542 #
3543 # Related options:
3544 # * None
3545 #
3546 # (boolean value)
3547 #filesystem_thin_provisioning = false
3548
3549 #
39463550 # Path to the CA bundle file.
39473551 #
39483552 # This configuration option enables the operator to use a custom
41083712 #
41093713 # (integer value)
41103714 #rados_connect_timeout = 0
3715
3716 #
3717 # Enable or disable thin provisioning in this backend.
3718 #
3719 # This configuration option enables the feature of not really writing null byte
3720 # sequences to the RBD backend; the holes that can appear will automatically
3721 # be interpreted by Ceph as null bytes, and do not really consume your storage.
3722 # Enabling this feature will also speed up image upload and save network traffic
3723 # in addition to saving space in the backend, as null byte sequences are not
3724 # sent over the network.
3725 #
3726 # Possible Values:
3727 # * True
3728 # * False
3729 #
3730 # Related options:
3731 # * None
3732 #
3733 # (boolean value)
3734 #rbd_thin_provisioning = false
41113735
41123736 #
41133737 # The host where the S3 server is listening.
50644688 #auth_version = <None>
50654689
50664690 # Interface to use for the Identity API endpoint. Valid values are "public",
5067 # "internal" or "admin"(default). (string value)
5068 #interface = admin
4691 # "internal" (default) or "admin". (string value)
4692 #interface = internal
50694693
50704694 # Do not handle authorization requests within the middleware, but delegate the
50714695 # authorization decision to downstream WSGI components. (boolean value)
56125236 # client queue does not exist. (integer value)
56135237 #direct_mandatory_flag = True
56145238
5239 # Enable the x-cancel-on-ha-failover flag so that the rabbitmq server will
5240 # cancel and notify consumers when a queue is down. (boolean value)
5241 #enable_cancel_on_failover = false
5242
56155243
56165244 [oslo_middleware]
56175245
56375265 # logged informing operators that policies are being invoked with mismatching
56385266 # scope. (boolean value)
56395267 #enforce_scope = false
5268
5269 # This option controls whether or not to use the old deprecated defaults when
5270 # evaluating policies. If ``True``, the old deprecated defaults are not
5271 # evaluated. This means that if any existing token is allowed under the old
5272 # defaults but disallowed under the new defaults, it will be disallowed. It is
5273 # encouraged to enable this flag along with the ``enforce_scope`` flag so that
5274 # you get the benefits of the new defaults and ``scope_type`` together. (boolean value)
5275 #enforce_new_defaults = false
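
# As an illustrative example, to opt in to both the new defaults and
# scope checking together, an operator could set:
#
#     enforce_scope = true
#     enforce_new_defaults = true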
56405276
56415277 # The relative or absolute path of a file that maps roles to permissions for a
56425278 # given service. Relative paths must be specified in relation to the
60475683 # its actual default value may vary from the one documented
60485684 # below.
60495685 #conversion_format = raw
5686
5687
5688 [wsgi]
5689
5690 #
5691 # From glance.api
5692 #
5693
5694 #
5695 # The number of threads (per worker process) in the pool for processing
5696 # asynchronous tasks. This controls how many asynchronous tasks (i.e. for
5697 # interoperable image import) each worker can run at a time. If this is
5698 # too large, you *may* have increased memory footprint per worker and/or you
5699 # may overwhelm other system resources such as disk or outbound network
5700 # bandwidth. If this is too small, image import requests will have to wait
5701 # until a thread becomes available to begin processing. (integer value)
5702 # Minimum value: 1
5703 #task_pool_threads = 16
5704
5705 #
5706 # Path to the python interpreter to use when spawning external
5707 # processes. By default this is sys.executable, which should be the
5708 # same interpreter running Glance itself. However, in some situations
5709 # (e.g. uwsgi) this may not actually point to a python interpreter
5710 # itself. (string value)
5711 #python_interpreter = /opt/stack/glance/.tox/genconfig/bin/python
115115 #
116116 # (integer value)
117117 #image_location_quota = 10
118
119 # DEPRECATED:
120 # Python module path of data access API.
121 #
122 # Specifies the path to the API to use for accessing the data model.
123 # This option determines how the image catalog data will be accessed.
124 #
125 # Possible values:
126 # * glance.db.sqlalchemy.api
127 # * glance.db.registry.api
128 # * glance.db.simple.api
129 #
130 # If this option is set to ``glance.db.sqlalchemy.api`` then the image
131 # catalog data is stored in and read from the database via the
132 # SQLAlchemy Core and ORM APIs.
133 #
134 # Setting this option to ``glance.db.registry.api`` will force all
135 # database access requests to be routed through the Registry service.
136 # This avoids data access from the Glance API nodes for an added layer
137 # of security, scalability and manageability.
138 #
139 # NOTE: In v2 OpenStack Images API, the registry service is optional.
140 # In order to use the Registry API in v2, the option
141 # ``enable_v2_registry`` must be set to ``True``.
142 #
143 # Finally, when this configuration option is set to
144 # ``glance.db.simple.api``, image catalog data is stored in and read
145 # from an in-memory data structure. This is primarily used for testing.
146 #
147 # Related options:
148 # * enable_v2_api
149 # * enable_v2_registry
150 #
151 # (string value)
152 # This option is deprecated for removal since Queens.
153 # Its value may be silently ignored in the future.
154 # Reason:
155 # Glance registry service is deprecated for removal.
156 #
157 # More information can be found from the spec:
158 # http://specs.openstack.org/openstack/glance-
159 # specs/specs/queens/approved/glance/deprecate-registry.html
160 #data_api = glance.db.sqlalchemy.api
161118
162119 #
163120 # The default number of results to return for a request.
336293 #user_storage_quota = 0
337294
338295 #
339 # Deploy the v2 OpenStack Images API.
340 #
341 # When this option is set to ``True``, Glance service will respond
342 # to requests on registered endpoints conforming to the v2 OpenStack
343 # Images API.
344 #
345 # NOTES:
346 # * If this option is disabled, then the ``enable_v2_registry``
347 # option, which is enabled by default, is also recommended
348 # to be disabled.
349 #
350 # Possible values:
351 # * True
352 # * False
353 #
354 # Related options:
355 # * enable_v2_registry
356 #
357 # (boolean value)
358 #enable_v2_api = true
359
360 #
361 # DEPRECATED FOR REMOVAL
362 # (boolean value)
363 #enable_v1_registry = true
364
365 # DEPRECATED:
366 # Deploy the v2 API Registry service.
367 #
368 # When this option is set to ``True``, the Registry service
369 # will be enabled in Glance for v2 API requests.
370 #
371 # NOTES:
372 # * Use of Registry is optional in v2 API, so this option
373 # must only be enabled if both ``enable_v2_api`` is set to
374 # ``True`` and the ``data_api`` option is set to
375 # ``glance.db.registry.api``.
376 #
377 # * If deploying only the v1 OpenStack Images API, this option,
378 # which is enabled by default, should be disabled.
379 #
380 # Possible values:
381 # * True
382 # * False
383 #
384 # Related options:
385 # * enable_v2_api
386 # * data_api
387 #
388 # (boolean value)
389 # This option is deprecated for removal since Queens.
390 # Its value may be silently ignored in the future.
391 # Reason:
392 # Glance registry service is deprecated for removal.
393 #
394 # More information can be found from the spec:
395 # http://specs.openstack.org/openstack/glance-
396 # specs/specs/queens/approved/glance/deprecate-registry.html
397 #enable_v2_registry = true
398
399 #
400296 # Host address of the pydev server.
401297 #
402298 # Provide a string value representing the hostname or IP of the
654550 #
655551 # (string value)
656552 #image_cache_dir = <None>
657
658 # DEPRECATED:
659 # Address the registry server is hosted on.
660 #
661 # Possible values:
662 # * A valid IP or hostname
663 #
664 # Related options:
665 # * None
666 #
667 # (host address value)
668 # This option is deprecated for removal since Queens.
669 # Its value may be silently ignored in the future.
670 # Reason:
671 # Glance registry service is deprecated for removal.
672 #
673 # More information can be found from the spec:
674 # http://specs.openstack.org/openstack/glance-
675 # specs/specs/queens/approved/glance/deprecate-registry.html
676 #registry_host = 0.0.0.0
677
678 # DEPRECATED:
679 # Port the registry server is listening on.
680 #
681 # Possible values:
682 # * A valid port number
683 #
684 # Related options:
685 # * None
686 #
687 # (port value)
688 # Minimum value: 0
689 # Maximum value: 65535
690 # This option is deprecated for removal since Queens.
691 # Its value may be silently ignored in the future.
692 # Reason:
693 # Glance registry service is deprecated for removal.
694 #
695 # More information can be found from the spec:
696 # http://specs.openstack.org/openstack/glance-
697 # specs/specs/queens/approved/glance/deprecate-registry.html
698 #registry_port = 9191
699
700 # DEPRECATED:
701 # Protocol to use for communication with the registry server.
702 #
703 # Provide a string value representing the protocol to use for
704 # communication with the registry server. By default, this option is
705 # set to ``http`` and the connection is not secure.
706 #
707 # This option can be set to ``https`` to establish a secure connection
708 # to the registry server. In this case, provide a key to use for the
709 # SSL connection using the ``registry_client_key_file`` option. Also
710 # include the CA file and cert file using the options
711 # ``registry_client_ca_file`` and ``registry_client_cert_file``
712 # respectively.
713 #
714 # Possible values:
715 # * http
716 # * https
717 #
718 # Related options:
719 # * registry_client_key_file
720 # * registry_client_cert_file
721 # * registry_client_ca_file
722 #
723 # (string value)
724 # Possible values:
725 # http - <No description provided>
726 # https - <No description provided>
727 # This option is deprecated for removal since Queens.
728 # Its value may be silently ignored in the future.
729 # Reason:
730 # Glance registry service is deprecated for removal.
731 #
732 # More information can be found from the spec:
733 # http://specs.openstack.org/openstack/glance-
734 # specs/specs/queens/approved/glance/deprecate-registry.html
735 #registry_client_protocol = http
736
737 # DEPRECATED:
738 # Absolute path to the private key file.
739 #
740 # Provide a string value representing a valid absolute path to the
741 # private key file to use for establishing a secure connection to
742 # the registry server.
743 #
744 # NOTE: This option must be set if ``registry_client_protocol`` is
745 # set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
746 # environment variable may be set to a filepath of the key file.
747 #
748 # Possible values:
749 # * String value representing a valid absolute path to the key
750 # file.
751 #
752 # Related options:
753 # * registry_client_protocol
754 #
755 # (string value)
756 # This option is deprecated for removal since Queens.
757 # Its value may be silently ignored in the future.
758 # Reason:
759 # Glance registry service is deprecated for removal.
760 #
761 # More information can be found from the spec:
762 # http://specs.openstack.org/openstack/glance-
763 # specs/specs/queens/approved/glance/deprecate-registry.html
764 #
765 # This option has a sample default set, which means that
766 # its actual default value may vary from the one documented
767 # below.
768 #registry_client_key_file = /etc/ssl/key/key-file.pem
769
770 # DEPRECATED:
771 # Absolute path to the certificate file.
772 #
773 # Provide a string value representing a valid absolute path to the
774 # certificate file to use for establishing a secure connection to
775 # the registry server.
776 #
777 # NOTE: This option must be set if ``registry_client_protocol`` is
778 # set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
779 # environment variable may be set to a filepath of the certificate
780 # file.
781 #
782 # Possible values:
783 # * String value representing a valid absolute path to the
784 # certificate file.
785 #
786 # Related options:
787 # * registry_client_protocol
788 #
789 # (string value)
790 # This option is deprecated for removal since Queens.
791 # Its value may be silently ignored in the future.
792 # Reason:
793 # Glance registry service is deprecated for removal.
794 #
795 # More information can be found from the spec:
796 # http://specs.openstack.org/openstack/glance-
797 # specs/specs/queens/approved/glance/deprecate-registry.html
798 #
799 # This option has a sample default set, which means that
800 # its actual default value may vary from the one documented
801 # below.
802 #registry_client_cert_file = /etc/ssl/certs/file.crt
803
804 # DEPRECATED:
805 # Absolute path to the Certificate Authority file.
806 #
807 # Provide a string value representing a valid absolute path to the
808 # certificate authority file to use for establishing a secure
809 # connection to the registry server.
810 #
811 # NOTE: This option must be set if ``registry_client_protocol`` is
812 # set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
813 # environment variable may be set to a filepath of the CA file.
814 # This option is ignored if the ``registry_client_insecure`` option
815 # is set to ``True``.
816 #
817 # Possible values:
818 # * String value representing a valid absolute path to the CA
819 # file.
820 #
821 # Related options:
822 # * registry_client_protocol
823 # * registry_client_insecure
824 #
825 # (string value)
826 # This option is deprecated for removal since Queens.
827 # Its value may be silently ignored in the future.
828 # Reason:
829 # Glance registry service is deprecated for removal.
830 #
831 # More information can be found from the spec:
832 # http://specs.openstack.org/openstack/glance-
833 # specs/specs/queens/approved/glance/deprecate-registry.html
834 #
835 # This option has a sample default set, which means that
836 # its actual default value may vary from the one documented
837 # below.
838 #registry_client_ca_file = /etc/ssl/cafile/file.ca
839
840 # DEPRECATED:
841 # Set verification of the registry server certificate.
842 #
843 # Provide a boolean value to determine whether or not to validate
844 # SSL connections to the registry server. By default, this option
845 # is set to ``False`` and SSL connections are validated.
846 #
847 # If set to ``True``, the connection to the registry server is not
848 # validated via a certifying authority and the
849 # ``registry_client_ca_file`` option is ignored. This is the
850 # registry's equivalent of specifying --insecure on the command line
851 # using glanceclient for the API.
852 #
853 # Possible values:
854 # * True
855 # * False
856 #
857 # Related options:
858 # * registry_client_protocol
859 # * registry_client_ca_file
860 #
861 # (boolean value)
862 # This option is deprecated for removal since Queens.
863 # Its value may be silently ignored in the future.
864 # Reason:
865 # Glance registry service is deprecated for removal.
866 #
867 # More information can be found from the spec:
868 # http://specs.openstack.org/openstack/glance-
869 # specs/specs/queens/approved/glance/deprecate-registry.html
870 #registry_client_insecure = false
871
872 # DEPRECATED:
873 # Timeout value for registry requests.
874 #
875 # Provide an integer value representing the period of time in seconds
876 # that the API server will wait for a registry request to complete.
877 # The default value is 600 seconds.
878 #
879 # A value of 0 implies that a request will never time out.
880 #
881 # Possible values:
882 # * Zero
883 # * Positive integer
884 #
885 # Related options:
886 # * None
887 #
888 # (integer value)
889 # Minimum value: 0
890 # This option is deprecated for removal since Queens.
891 # Its value may be silently ignored in the future.
892 # Reason:
893 # Glance registry service is deprecated for removal.
894 #
895 # More information can be found from the spec:
896 # http://specs.openstack.org/openstack/glance-
897 # specs/specs/queens/approved/glance/deprecate-registry.html
898 #registry_client_timeout = 600
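Taken together, the deprecated registry client options above combine as in the following sketch (the file paths are hypothetical, and the whole option group is slated for removal along with the registry service):

```ini
[DEFAULT]
# Hypothetical secure registry client configuration (deprecated).
registry_client_protocol = https
registry_client_key_file = /etc/glance/ssl/registry.key
registry_client_cert_file = /etc/glance/ssl/registry.crt
registry_client_ca_file = /etc/glance/ssl/ca.crt
registry_client_insecure = false
registry_client_timeout = 600
```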
899
900 # DEPRECATED: Whether to pass through the user token when making requests to the
901 # registry. To prevent failures with token expiration during big file uploads,
902 # it is recommended to set this parameter to False. If "use_user_token" is not in
903 # effect, then admin credentials can be specified. (boolean value)
904 # This option is deprecated for removal.
905 # Its value may be silently ignored in the future.
906 # Reason: This option was considered harmful and has been deprecated in M
907 # release. It will be removed in O release. For more information read OSSN-0060.
908 # Related functionality with uploading big images has been implemented with
909 # Keystone trusts support.
910 #use_user_token = true
911
912 # DEPRECATED: The administrator's user name. If "use_user_token" is not in
913 # effect, then admin credentials can be specified. (string value)
914 # This option is deprecated for removal.
915 # Its value may be silently ignored in the future.
916 # Reason: This option was considered harmful and has been deprecated in M
917 # release. It will be removed in O release. For more information read OSSN-0060.
918 # Related functionality with uploading big images has been implemented with
919 # Keystone trusts support.
920 #admin_user = <None>
921
922 # DEPRECATED: The administrator's password. If "use_user_token" is not in effect,
923 # then admin credentials can be specified. (string value)
924 # This option is deprecated for removal.
925 # Its value may be silently ignored in the future.
926 # Reason: This option was considered harmful and has been deprecated in M
927 # release. It will be removed in O release. For more information read OSSN-0060.
928 # Related functionality with uploading big images has been implemented with
929 # Keystone trusts support.
930 #admin_password = <None>
931
932 # DEPRECATED: The tenant name of the administrative user. If "use_user_token" is
933 # not in effect, then admin tenant name can be specified. (string value)
934 # This option is deprecated for removal.
935 # Its value may be silently ignored in the future.
936 # Reason: This option was considered harmful and has been deprecated in M
937 # release. It will be removed in O release. For more information read OSSN-0060.
938 # Related functionality with uploading big images has been implemented with
939 # Keystone trusts support.
940 #admin_tenant_name = <None>
941
942 # DEPRECATED: The URL to the keystone service. If "use_user_token" is not in
943 # effect and keystone auth is used, then the keystone URL can be specified. (string
944 # value)
945 # This option is deprecated for removal.
946 # Its value may be silently ignored in the future.
947 # Reason: This option was considered harmful and has been deprecated in M
948 # release. It will be removed in O release. For more information read OSSN-0060.
949 # Related functionality with uploading big images has been implemented with
950 # Keystone trusts support.
951 #auth_url = <None>
952
953 # DEPRECATED: The strategy to use for authentication. If "use_user_token" is not
954 # in effect, then auth strategy can be specified. (string value)
955 # This option is deprecated for removal.
956 # Its value may be silently ignored in the future.
957 # Reason: This option was considered harmful and has been deprecated in M
958 # release. It will be removed in O release. For more information read OSSN-0060.
959 # Related functionality with uploading big images has been implemented with
960 # Keystone trusts support.
961 #auth_strategy = noauth
962
963 # DEPRECATED: The region for the authentication service. If "use_user_token" is
964 # not in effect and using keystone auth, then region name can be specified.
965 # (string value)
966 # This option is deprecated for removal.
967 # Its value may be silently ignored in the future.
968 # Reason: This option was considered harmful and has been deprecated in M
969 # release. It will be removed in O release. For more information read OSSN-0060.
970 # Related functionality with uploading big images has been implemented with
971 # Keystone trusts support.
972 #auth_region = <None>
973553
974554 #
975555 # From oslo.log
14651045 # Related options:
14661046 # * None
14671047 #
1048 # NOTE: You cannot use an encrypted volume_type associated with an NFS backend.
1049 # An encrypted volume stored on an NFS backend will raise an exception whenever
1050 # glance_store tries to write or access image data stored in that volume.
1051 # Consult your Cinder administrator to determine an appropriate volume_type.
1052 #
14681053 # (string value)
14691054 #cinder_volume_type = <None>
14701055
15711156 #
15721157 # Filesystem store metadata file.
15731158 #
1574 # The path to a file which contains the metadata to be returned with
1575 # any location associated with the filesystem store. The file must
1576 # contain a valid JSON object. The object should contain the keys
1577 # ``id`` and ``mountpoint``. The value for both keys should be a
1578 # string.
1159 # The path to a file which contains the metadata to be returned
1160 # with any location associated with the filesystem store. Once this
1161 # option is set, it is used only for new images created afterward;
1162 # previously existing images are not affected.
1164 #
1165 # The file must contain a valid JSON object. The object should contain the keys
1166 # ``id`` and ``mountpoint``. The value for both keys should be a string.
15791167 #
15801168 # Possible values:
15811169 # * A valid path to the store metadata file
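A minimal metadata file satisfying the description above might look like this (the ``id`` and ``mountpoint`` values are hypothetical):

```json
{
    "id": "filesystem_store_1",
    "mountpoint": "/var/lib/glance/images"
}
```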
16301218 #filesystem_store_chunk_size = 65536
16311219
16321220 #
1221 # Enable or disable thin provisioning in this backend.
1222 #
1223 # When this option is enabled, null byte sequences are not actually
1224 # written to the filesystem; the resulting holes are interpreted by
1225 # the filesystem as null bytes and do not consume any real storage.
1226 # Besides saving space in the backend, enabling this feature also
1227 # speeds up image upload and saves network traffic, since null byte
1228 # sequences are not sent over the network.
1230 #
1231 # Possible Values:
1232 # * True
1233 # * False
1234 #
1235 # Related options:
1236 # * None
1237 #
1238 # (boolean value)
1239 #filesystem_thin_provisioning = false
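The space savings described above come from sparse files; a minimal sketch of the underlying mechanism (illustrative only, not Glance code):

```python
import os
import tempfile

# Create a 1 GiB sparse file by extending it without writing any data.
# The resulting "holes" read back as null bytes but consume (almost)
# no storage -- the mechanism thin provisioning relies on.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.truncate(1024 * 1024 * 1024)

st = os.stat(path)
print(st.st_size)          # apparent size: 1073741824
print(st.st_blocks * 512)  # actual allocation: typically near zero
os.unlink(path)
```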
1240
1241 #
16331242 # Path to the CA bundle file.
16341243 #
16351244 # This configuration option enables the operator to use a custom
17951404 #
17961405 # (integer value)
17971406 #rados_connect_timeout = 0
1407
1408 #
1409 # Enable or disable thin provisioning in this backend.
1410 #
1411 # When this option is enabled, null byte sequences are not actually
1412 # written to the RBD backend; the resulting holes are interpreted by
1413 # Ceph as null bytes and do not consume any real storage. Besides
1414 # saving space in the backend, enabling this feature also speeds up
1415 # image upload and saves network traffic, since null byte sequences
1416 # are not sent over the network.
1417 #
1418 # Possible Values:
1419 # * True
1420 # * False
1421 #
1422 # Related options:
1423 # * None
1424 #
1425 # (boolean value)
1426 #rbd_thin_provisioning = false
17981427
17991428 #
18001429 # The host where the S3 server is listening.
27152344 # scope. (boolean value)
27162345 #enforce_scope = false
27172346
2347 # This option controls whether or not to use old deprecated defaults when
2348 # evaluating policies. If ``True``, the old deprecated defaults are not going to
2349 # be evaluated. This means if any existing token is allowed for old defaults but
2350 # is disallowed for new defaults, it will be disallowed. It is encouraged to
2351 # enable this flag along with the ``enforce_scope`` flag so that you can get the
2352 # benefits of new defaults and ``scope_type`` together. (boolean value)
2353 #enforce_new_defaults = false
2354
27182355 # The relative or absolute path of a file that maps roles to permissions for a
27192356 # given service. Relative paths must be specified in relation to the
27202357 # configuration file setting this option. (string value)
+0
-35
etc/glance-registry-paste.ini
0 # Use this pipeline for no auth - DEFAULT
1 [pipeline:glance-registry]
2 pipeline = healthcheck osprofiler unauthenticated-context registryapp
3
4 # Use this pipeline for keystone auth
5 [pipeline:glance-registry-keystone]
6 pipeline = healthcheck osprofiler authtoken context registryapp
7
8 # Use this pipeline for authZ only. This means that the registry will treat a
9 # user as authenticated without making requests to keystone to reauthenticate
10 # the user.
11 [pipeline:glance-registry-trusted-auth]
12 pipeline = healthcheck osprofiler context registryapp
13
14 [app:registryapp]
15 paste.app_factory = glance.registry.api:API.factory
16
17 [filter:healthcheck]
18 paste.filter_factory = oslo_middleware:Healthcheck.factory
19 backends = disable_by_file
20 disable_by_file_path = /etc/glance/healthcheck_disable
21
22 [filter:context]
23 paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
24
25 [filter:unauthenticated-context]
26 paste.filter_factory = glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
27
28 [filter:authtoken]
29 paste.filter_factory = keystonemiddleware.auth_token:filter_factory
30
31 [filter:osprofiler]
32 paste.filter_factory = osprofiler.web:WsgiMiddleware.factory
33 hmac_keys = SECRET_KEY #DEPRECATED
34 enabled = yes #DEPRECATED
+0
-1896
etc/glance-registry.conf
0 [DEFAULT]
1
2 #
3 # From glance.registry
4 #
5
6 # DEPRECATED:
7 # Set the image owner to tenant or the authenticated user.
8 #
9 # Assign a boolean value to determine the owner of an image. When set to
10 # True, the owner of the image is the tenant. When set to False, the
11 # owner of the image will be the authenticated user issuing the request.
12 # Setting it to False makes the image private to the associated user and
13 # sharing with other users within the same tenant (or "project")
14 # requires explicit image sharing via image membership.
15 #
16 # Possible values:
17 # * True
18 # * False
19 #
20 # Related options:
21 # * None
22 #
23 # (boolean value)
24 # This option is deprecated for removal since Rocky.
25 # Its value may be silently ignored in the future.
26 # Reason:
27 # The non-default setting for this option misaligns Glance with other
28 # OpenStack services with respect to resource ownership. Further, surveys
29 # indicate that this option is not used by operators. The option will be
30 # removed early in the 'S' development cycle following the standard OpenStack
31 # deprecation policy. As the option is not in wide use, no migration path is
32 # proposed.
33 #owner_is_tenant = true
34
35 # DEPRECATED:
36 # Role used to identify an authenticated user as administrator.
37 #
38 # Provide a string value representing a Keystone role to identify an
39 # administrative user. Users with this role will be granted
40 # administrative privileges.
41 #
42 # NOTE: The default value for this option has changed in this release.
43 #
44 # Possible values:
45 # * A string value which is a valid Keystone role
46 #
47 # Related options:
48 # * None
49 #
50 # (string value)
51 # This option is deprecated for removal since Ussuri.
52 # Its value may be silently ignored in the future.
53 # Reason:
54 # This option is redundant as its goal can be achieved via policy file
55 # configuration. Additionally, it can override any configured policies,
56 # leading to unexpected behavior and difficulty in policy configuration.
57 # The option will be removed early in the Victoria development cycle,
58 # following the standard OpenStack deprecation policy.
59 #
60 # Because this can be a security issue, the default value of this
61 # configuration option has been changed in this release.
62 #
63 # Please see the 'Deprecation Notes' section of the Ussuri Glance
64 # Release Notes for more information.
65 #admin_role = __NOT_A_ROLE_07697c71e6174332989d3d5f2a7d2e7c_NOT_A_ROLE__
66
67 #
68 # Allow limited access to unauthenticated users.
69 #
70 # Assign a boolean to determine API access for unauthenticated
71 # users. When set to False, the API cannot be accessed by
72 # unauthenticated users. When set to True, unauthenticated users can
73 # access the API with read-only privileges. This however only applies
74 # when using ContextMiddleware.
75 #
76 # Possible values:
77 # * True
78 # * False
79 #
80 # Related options:
81 # * None
82 #
83 # (boolean value)
84 #allow_anonymous_access = false
85
86 #
87 # Limit the request ID length.
88 #
89 # Provide an integer value to limit the length of the request ID to
90 # the specified length. The default value is 64. Users can change this
91 # to any integer value between 0 and 16384; however, keep in mind that
92 # a larger value may flood the logs.
93 #
94 # Possible values:
95 # * Integer value between 0 and 16384
96 #
97 # Related options:
98 # * None
99 #
100 # (integer value)
101 # Minimum value: 0
102 #max_request_id_length = 64
103
104 # DEPRECATED:
105 # Allow users to add additional/custom properties to images.
106 #
107 # Glance defines a standard set of properties (in its schema) that
108 # appear on every image. These properties are also known as
109 # ``base properties``. In addition to these properties, Glance
110 # allows users to add custom properties to images. These are known
111 # as ``additional properties``.
112 #
113 # By default, this configuration option is set to ``True`` and users
114 # are allowed to add additional properties. The number of additional
115 # properties that can be added to an image can be controlled via
116 # ``image_property_quota`` configuration option.
117 #
118 # Possible values:
119 # * True
120 # * False
121 #
122 # Related options:
123 # * image_property_quota
124 #
125 # (boolean value)
126 # This option is deprecated for removal since Ussuri.
127 # Its value may be silently ignored in the future.
128 # Reason:
129 # This option is redundant. Control custom image property usage via the
130 # 'image_property_quota' configuration option. This option is scheduled
131 # to be removed during the Victoria development cycle.
132 #allow_additional_image_properties = true
133
134 #
135 # Secure hashing algorithm used for computing the 'os_hash_value' property.
136 #
137 # This option configures the Glance "multihash", which consists of two
138 # image properties: the 'os_hash_algo' and the 'os_hash_value'. The
139 # 'os_hash_algo' will be populated by the value of this configuration
140 # option, and the 'os_hash_value' will be populated by the hexdigest computed
141 # when the algorithm is applied to the uploaded or imported image data.
142 #
143 # The value must be a valid secure hash algorithm name recognized by the
144 # python 'hashlib' library. You can determine what these are by examining
145 # the 'hashlib.algorithms_available' data member of the version of the
146 # library being used in your Glance installation. For interoperability
147 # purposes, however, we recommend that you use the set of secure hash
148 # names supplied by the 'hashlib.algorithms_guaranteed' data member because
149 # those algorithms are guaranteed to be supported by the 'hashlib' library
150 # on all platforms. Thus, any image consumer using 'hashlib' locally should
151 # be able to verify the 'os_hash_value' of the image.
152 #
153 # The default value of 'sha512' is a performant secure hash algorithm.
154 #
155 # If this option is misconfigured, any attempts to store image data will fail.
156 # For that reason, we recommend using the default value.
157 #
158 # Possible values:
159 # * Any secure hash algorithm name recognized by the Python 'hashlib'
160 # library
161 #
162 # Related options:
163 # * None
164 #
165 # (string value)
166 #hashing_algorithm = sha512
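The guaranteed algorithm names recommended above can be inspected directly with Python's ``hashlib``; a quick sketch:

```python
import hashlib

# The interoperable names recommended above, guaranteed on all platforms.
print(sorted(hashlib.algorithms_guaranteed))

# The multihash pairs the algorithm name ('os_hash_algo') with the
# hexdigest of the image data ('os_hash_value'). For the default
# sha512, the hexdigest is always 128 hex characters long.
digest = hashlib.new("sha512", b"example image bytes").hexdigest()
print(len(digest))  # 128
```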
167
168 #
169 # Maximum number of image members per image.
170 #
171 # This limits the maximum number of users an image can be shared with. Any negative
172 # value is interpreted as unlimited.
173 #
174 # Related options:
175 # * None
176 #
177 # (integer value)
178 #image_member_quota = 128
179
180 #
181 # Maximum number of properties allowed on an image.
182 #
183 # This enforces an upper limit on the number of additional properties an image
184 # can have. Any negative value is interpreted as unlimited.
185 #
186 # NOTE: This won't have any impact if additional properties are disabled. Please
187 # refer to ``allow_additional_image_properties``.
188 #
189 # Related options:
190 # * ``allow_additional_image_properties``
191 #
192 # (integer value)
193 #image_property_quota = 128
194
195 #
196 # Maximum number of tags allowed on an image.
197 #
198 # Any negative value is interpreted as unlimited.
199 #
200 # Related options:
201 # * None
202 #
203 # (integer value)
204 #image_tag_quota = 128
205
206 #
207 # Maximum number of locations allowed on an image.
208 #
209 # Any negative value is interpreted as unlimited.
210 #
211 # Related options:
212 # * None
213 #
214 # (integer value)
215 #image_location_quota = 10
216
217 # DEPRECATED:
218 # Python module path of data access API.
219 #
220 # Specifies the path to the API to use for accessing the data model.
221 # This option determines how the image catalog data will be accessed.
222 #
223 # Possible values:
224 # * glance.db.sqlalchemy.api
225 # * glance.db.registry.api
226 # * glance.db.simple.api
227 #
228 # If this option is set to ``glance.db.sqlalchemy.api`` then the image
229 # catalog data is stored in and read from the database via the
230 # SQLAlchemy Core and ORM APIs.
231 #
232 # Setting this option to ``glance.db.registry.api`` will force all
233 # database access requests to be routed through the Registry service.
234 # This avoids data access from the Glance API nodes for an added layer
235 # of security, scalability and manageability.
236 #
237 # NOTE: In v2 OpenStack Images API, the registry service is optional.
238 # In order to use the Registry API in v2, the option
239 # ``enable_v2_registry`` must be set to ``True``.
240 #
241 # Finally, when this configuration option is set to
242 # ``glance.db.simple.api``, image catalog data is stored in and read
243 # from an in-memory data structure. This is primarily used for testing.
244 #
245 # Related options:
246 # * enable_v2_api
247 # * enable_v2_registry
248 #
249 # (string value)
250 # This option is deprecated for removal since Queens.
251 # Its value may be silently ignored in the future.
252 # Reason:
253 # Glance registry service is deprecated for removal.
254 #
255 # More information can be found from the spec:
256 # http://specs.openstack.org/openstack/glance-
257 # specs/specs/queens/approved/glance/deprecate-registry.html
258 #data_api = glance.db.sqlalchemy.api
259
260 #
261 # The default number of results to return for a request.
262 #
263 # Responses to certain API requests, like list images, may return
264 # multiple items. The number of results returned can be explicitly
265 # controlled by specifying the ``limit`` parameter in the API request.
266 # However, if a ``limit`` parameter is not specified, this
267 # configuration value will be used as the default number of results to
268 # be returned for any API request.
269 #
270 # NOTES:
271 # * The value of this configuration option may not be greater than
272 # the value specified by ``api_limit_max``.
273 # * Setting this to a very large value may slow down database
274 # queries and increase response times. Setting this to a
275 # very low value may result in poor user experience.
276 #
277 # Possible values:
278 # * Any positive integer
279 #
280 # Related options:
281 # * api_limit_max
282 #
283 # (integer value)
284 # Minimum value: 1
285 #limit_param_default = 25
286
287 #
288 # Maximum number of results that could be returned by a request.
289 #
290 # As described in the help text of ``limit_param_default``, some
291 # requests may return multiple results. The number of results to be
292 # returned are governed either by the ``limit`` parameter in the
293 # request or the ``limit_param_default`` configuration option.
294 # The value, in either case, can't be greater than the absolute maximum
295 # defined by this configuration option. Anything greater than this
296 # value is trimmed down to the maximum value defined here.
297 #
298 # NOTE: Setting this to a very large value may slow down database
299 # queries and increase response times. Setting this to a
300 # very low value may result in poor user experience.
301 #
302 # Possible values:
303 # * Any positive integer
304 #
305 # Related options:
306 # * limit_param_default
307 #
308 # (integer value)
309 # Minimum value: 1
310 #api_limit_max = 1000
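A sketch (not Glance's actual code) of how these two options interact with a request's ``limit`` parameter:

```python
# Hypothetical constants mirroring the documented defaults.
LIMIT_PARAM_DEFAULT = 25   # used when the request omits ``limit``
API_LIMIT_MAX = 1000       # absolute ceiling; larger requests are trimmed

def effective_limit(requested=None):
    """Return the number of results a request would actually get."""
    limit = LIMIT_PARAM_DEFAULT if requested is None else requested
    return min(limit, API_LIMIT_MAX)

print(effective_limit())      # 25   (no limit given: the default applies)
print(effective_limit(5000))  # 1000 (trimmed down to api_limit_max)
```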
311
312 #
313 # Show direct image location when returning an image.
314 #
315 # This configuration option indicates whether to show the direct image
316 # location when returning image details to the user. The direct image
317 # location is where the image data is stored in backend storage. This
318 # image location is shown under the image property ``direct_url``.
319 #
320 # When multiple image locations exist for an image, the best location
321 # is displayed based on the location strategy indicated by the
322 # configuration option ``location_strategy``.
323 #
324 # NOTES:
325 # * Revealing image locations can present a GRAVE SECURITY RISK as
326 # image locations can sometimes include credentials. Hence, this
327 # is set to ``False`` by default. Set this to ``True`` with
328 # EXTREME CAUTION and ONLY IF you know what you are doing!
329 # * If an operator wishes to avoid showing any image location(s)
330 # to the user, then both this option and
331 # ``show_multiple_locations`` MUST be set to ``False``.
332 #
333 # Possible values:
334 # * True
335 # * False
336 #
337 # Related options:
338 # * show_multiple_locations
339 # * location_strategy
340 #
341 # (boolean value)
342 #show_image_direct_url = false
343
344 # DEPRECATED:
345 # Show all image locations when returning an image.
346 #
347 # This configuration option indicates whether to show all the image
348 # locations when returning image details to the user. When multiple
349 # image locations exist for an image, the locations are ordered based
350 # on the location strategy indicated by the configuration option
351 # ``location_strategy``. The image locations are shown under the
352 # image property ``locations``.
353 #
354 # NOTES:
355 # * Revealing image locations can present a GRAVE SECURITY RISK as
356 # image locations can sometimes include credentials. Hence, this
357 # is set to ``False`` by default. Set this to ``True`` with
358 # EXTREME CAUTION and ONLY IF you know what you are doing!
359 # * See https://wiki.openstack.org/wiki/OSSN/OSSN-0065 for more
360 # information.
361 # * If an operator wishes to avoid showing any image location(s)
362 # to the user, then both this option and
363 # ``show_image_direct_url`` MUST be set to ``False``.
364 #
365 # Possible values:
366 # * True
367 # * False
368 #
369 # Related options:
370 # * show_image_direct_url
371 # * location_strategy
372 #
373 # (boolean value)
374 # This option is deprecated for removal since Newton.
375 # Its value may be silently ignored in the future.
376 # Reason: Use of this option, deprecated since Newton, is a security risk and
377 # will be removed once we figure out a way to satisfy those use cases that
378 # currently require it. An earlier announcement that the same functionality can
379 # be achieved with greater granularity by using policies is incorrect. You
380 # cannot work around this option via policy configuration at the present time,
381 # though that is the direction we believe the fix will take. Please keep an eye
382 # on the Glance release notes to stay up to date on progress in addressing this
383 # issue.
384 #show_multiple_locations = false
385
386 #
387 # Maximum size of image a user can upload in bytes.
388 #
389 # An image upload greater than the size mentioned here would result
390 # in an image creation failure. This configuration option defaults to
391 # 1099511627776 bytes (1 TiB).
392 #
393 # NOTES:
394 # * This value should only be increased after careful
395 # consideration and must be set less than or equal to
396 # 8 EiB (9223372036854775808).
397 # * This value must be set with careful consideration of the
398 # backend storage capacity. Setting this to a very low value
399 # may result in a large number of image failures. And, setting
400 # this to a very large value may result in faster consumption
401 # of storage. Hence, this must be set according to the nature of
402 # images created and storage capacity available.
403 #
404 # Possible values:
405 # * Any positive number less than or equal to 9223372036854775808
406 #
407 # (integer value)
408 # Minimum value: 1
409 # Maximum value: 9223372036854775808
410 #image_size_cap = 1099511627776
411
412 #
413 # Maximum amount of image storage per tenant.
414 #
415 # This enforces an upper limit on the cumulative storage consumed by all images
416 # of a tenant across all stores. This is a per-tenant limit.
417 #
418 # The default unit for this configuration option is Bytes. However, storage
419 # units can be specified using case-sensitive literals ``B``, ``KB``, ``MB``,
420 # ``GB`` and ``TB`` representing Bytes, KiloBytes, MegaBytes, GigaBytes and
421 # TeraBytes respectively. Note that there should not be any space between the
422 # value and unit. Value ``0`` signifies no quota enforcement. Negative values
423 # are invalid and result in errors.
424 #
425 # Possible values:
426 # * A string that is a valid concatenation of a non-negative integer
427 # representing the storage value and an optional string literal
428 # representing storage units as mentioned above.
429 #
430 # Related options:
431 # * None
432 #
433 # (string value)
434 #user_storage_quota = 0
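For instance, a hypothetical setting capping each tenant at 500 GigaBytes across all stores, using the unit syntax described above (no space between value and unit, case-sensitive literal):

```ini
[DEFAULT]
user_storage_quota = 500GB
```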
435
436 #
437 # Deploy the v2 OpenStack Images API.
438 #
439 # When this option is set to ``True``, Glance service will respond
440 # to requests on registered endpoints conforming to the v2 OpenStack
441 # Images API.
442 #
443 # NOTES:
444 # * If this option is disabled, then the ``enable_v2_registry``
445 # option, which is enabled by default, is also recommended
446 # to be disabled.
447 #
448 # Possible values:
449 # * True
450 # * False
451 #
452 # Related options:
453 # * enable_v2_registry
454 #
455 # (boolean value)
456 #enable_v2_api = true
457
458 #
459 # DEPRECATED FOR REMOVAL
460 # (boolean value)
461 #enable_v1_registry = true
462
463 # DEPRECATED:
464 # Deploy the v2 API Registry service.
465 #
466 # When this option is set to ``True``, the Registry service
467 # will be enabled in Glance for v2 API requests.
468 #
469 # NOTES:
470 # * Use of Registry is optional in v2 API, so this option
471 # must only be enabled if both ``enable_v2_api`` is set to
472 # ``True`` and the ``data_api`` option is set to
473 # ``glance.db.registry.api``.
474 #
475 # * If deploying only the v1 OpenStack Images API, this option,
476 # which is enabled by default, should be disabled.
477 #
478 # Possible values:
479 # * True
480 # * False
481 #
482 # Related options:
483 # * enable_v2_api
484 # * data_api
485 #
486 # (boolean value)
487 # This option is deprecated for removal since Queens.
488 # Its value may be silently ignored in the future.
489 # Reason:
490 # Glance registry service is deprecated for removal.
491 #
492 # More information can be found from the spec:
493 # http://specs.openstack.org/openstack/glance-
494 # specs/specs/queens/approved/glance/deprecate-registry.html
495 #enable_v2_registry = true
496
497 #
498 # Host address of the pydev server.
499 #
500 # Provide a string value representing the hostname or IP of the
501 # pydev server to use for debugging. The pydev server listens for
502 # debug connections on this address, facilitating remote debugging
503 # in Glance.
504 #
505 # Possible values:
506 # * Valid hostname
507 # * Valid IP address
508 #
509 # Related options:
510 # * None
511 #
512 # (host address value)
513 #
514 # This option has a sample default set, which means that
515 # its actual default value may vary from the one documented
516 # below.
517 #pydev_worker_debug_host = localhost
518
519 #
520 # Port number that the pydev server will listen on.
521 #
522 # Provide a port number to bind the pydev server to. The pydev
523 # process accepts debug connections on this port and facilitates
524 # remote debugging in Glance.
525 #
526 # Possible values:
527 # * A valid port number
528 #
529 # Related options:
530 # * None
531 #
532 # (port value)
533 # Minimum value: 0
534 # Maximum value: 65535
535 #pydev_worker_debug_port = 5678
536
537 #
538 # AES key for encrypting store location metadata.
539 #
540 # Provide a string value representing the AES cipher to use for
541 # encrypting Glance store metadata.
542 #
543 # NOTE: The AES key to use must be set to a random string of length
544 # 16, 24 or 32 bytes.
545 #
546 # Possible values:
547 # * String value representing a valid AES key
548 #
549 # Related options:
550 # * None
551 #
552 # (string value)
553 #metadata_encryption_key = <None>
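#
# For example, a random 32-character key can be generated with a command
# such as ``openssl rand -hex 16`` and set here (the value below is purely
# illustrative; never reuse it):
#
#     metadata_encryption_key = 8c2f8aeb41dc3bbe1f9c5a137f61c581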
554
555 #
556 # Digest algorithm to use for digital signature.
557 #
558 # Provide a string value representing the digest algorithm to
559 # use for generating digital signatures. By default, ``sha256``
560 # is used.
561 #
562 # To get a list of the available algorithms supported by the version
563 # of OpenSSL on your platform, run the command:
564 # ``openssl list-message-digest-algorithms``.
565 # Examples are 'sha1', 'sha256', and 'sha512'.
566 #
567 # NOTE: ``digest_algorithm`` is not related to Glance's image signing
568 # and verification. It is only used to sign the universally unique
569 # identifier (UUID) as a part of the certificate file and key file
570 # validation.
571 #
572 # Possible values:
573 # * An OpenSSL message digest algorithm identifier
574 #
575 # Related options:
576 # * None
577 #
578 # (string value)
579 #digest_algorithm = sha256
580
581 #
582 # The URL that provides the location where the temporary data will be stored.
583 #
584 # This option is for Glance internal use only. Glance will save the
585 # image data uploaded by the user to the 'staging' endpoint during the
586 # image import process.
587 #
588 # This option does not change the 'staging' API endpoint in any way.
589 #
590 # NOTE: Using the same path as [task]/work_dir is discouraged.
591 #
592 # NOTE: 'file://<absolute-directory-path>' is the only scheme the
593 # api_image_import flow supports for now.
594 #
595 # NOTE: The staging path must be on a shared filesystem available to
596 # all Glance API nodes.
597 #
598 # Possible values:
599 # * String starting with 'file://' followed by absolute FS path
600 #
601 # Related options:
602 # * [task]/work_dir
603 #
604 # (string value)
605 #node_staging_uri = file:///tmp/staging/
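#
# For example, pointing staging at a directory on a shared filesystem
# (the path below is hypothetical):
#
#     node_staging_uri = file:///var/lib/glance/staging/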
606
607 #
608 # List of enabled Image Import Methods
609 #
610 # 'glance-direct', 'copy-image' and 'web-download' are enabled by default.
611 #
612 # Related options:
613 # * [DEFAULT]/node_staging_uri (list value)
614 #enabled_import_methods = [glance-direct,web-download,copy-image]
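#
# For example, a deployment that wants to disallow 'copy-image' (a
# hypothetical policy choice) could set:
#
#     enabled_import_methods = [glance-direct,web-download]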
615
616 #
617 # IP address to bind the glance servers to.
618 #
619 # Provide an IP address to bind the glance server to. The default
620 # value is ``0.0.0.0``.
621 #
622 # Edit this option to enable the server to listen on one particular
623 # IP address on the network card. This facilitates selection of a
624 # particular network interface for the server.
625 #
626 # Possible values:
627 # * A valid IPv4 address
628 # * A valid IPv6 address
629 #
630 # Related options:
631 # * None
632 #
633 # (host address value)
634 #bind_host = 0.0.0.0
635
636 #
637 # Port number on which the server will listen.
638 #
639 # Provide a valid port number to bind the server's socket to. This
640 # port is then used to identify the server and to accept network
641 # messages that arrive for it. The default bind_port value for the API
642 # server is 9292 and for the registry server is 9191.
643 #
644 # Possible values:
645 # * A valid port number (0 to 65535)
646 #
647 # Related options:
648 # * None
649 #
650 # (port value)
651 # Minimum value: 0
652 # Maximum value: 65535
653 #bind_port = <None>
654
655 #
656 # Set the number of incoming connection requests.
657 #
658 # Provide a positive integer value to limit the number of requests in
659 # the backlog queue. The default queue size is 4096.
660 #
661 # An incoming connection to a TCP listener socket is queued before a
662 # connection can be established with the server. Setting the backlog
663 # for a TCP socket ensures a limited queue size for incoming traffic.
664 #
665 # Possible values:
666 # * Positive integer
667 #
668 # Related options:
669 # * None
670 #
671 # (integer value)
672 # Minimum value: 1
673 #backlog = 4096
674
675 #
676 # Set the wait time before a connection recheck.
677 #
678 # Provide a positive integer value representing time in seconds which
679 # is set as the idle wait time before a TCP keep alive packet can be
680 # sent to the host. The default value is 600 seconds.
681 #
682 # Setting ``tcp_keepidle`` helps verify at regular intervals that a
683 # connection is intact and prevents frequent TCP connection
684 # reestablishment.
685 #
686 # Possible values:
687 # * Positive integer value representing time in seconds
688 #
689 # Related options:
690 # * None
691 #
692 # (integer value)
693 # Minimum value: 1
694 #tcp_keepidle = 600
695
696 # DEPRECATED: The HTTP header used to determine the scheme for the original
697 # request, even if it was removed by an SSL terminating proxy. Typical value is
698 # "HTTP_X_FORWARDED_PROTO". (string value)
699 # This option is deprecated for removal.
700 # Its value may be silently ignored in the future.
701 # Reason: Use the http_proxy_to_wsgi middleware instead.
702 #secure_proxy_ssl_header = <None>
703
704 #
705 # Number of Glance worker processes to start.
706 #
707 # Provide a non-negative integer value to set the number of child
708 # process workers to service requests. By default, the number of
709 # available CPUs is used as the value for ``workers``, capped at 8.
710 # For example, if the processor count is 6, 6 workers will be used;
711 # if the processor count is 24, only 8 workers will be used. The cap
712 # applies only to the default value; if 24 workers is configured, 24 is used.
713 #
714 # Each worker process listens on the port set in the
715 # configuration file and contains a greenthread pool of size 1000.
716 #
717 # NOTE: Setting the number of workers to zero triggers the creation
718 # of a single API process with a greenthread pool of size 1000.
719 #
720 # Possible values:
721 # * 0
722 # * Positive integer value (typically equal to the number of CPUs)
723 #
724 # Related options:
725 # * None
726 #
727 # (integer value)
728 # Minimum value: 0
729 #workers = <None>
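#
# For example, to pin a node to 4 worker processes regardless of its
# CPU count (an illustrative choice):
#
#     workers = 4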
730
731 #
732 # Maximum line size of message headers.
733 #
734 # Provide an integer value representing a length to limit the size of
735 # message headers. The default value is 16384.
736 #
737 # NOTE: ``max_header_line`` may need to be increased when using large
738 # tokens (typically those generated by the Keystone v3 API with big
739 # service catalogs). Keep in mind, however, that larger values for
740 # ``max_header_line`` may flood the logs.
741 #
742 # Setting ``max_header_line`` to 0 sets no limit for the line size of
743 # message headers.
744 #
745 # Possible values:
746 # * 0
747 # * Positive integer
748 #
749 # Related options:
750 # * None
751 #
752 # (integer value)
753 # Minimum value: 0
754 #max_header_line = 16384
755
756 #
757 # Set keep alive option for HTTP over TCP.
758 #
759 # Provide a boolean value to determine sending of keep alive packets.
760 # If set to ``False``, the server returns the header
761 # "Connection: close". If set to ``True``, the server returns a
762 # "Connection: Keep-Alive" in its responses. This enables retention of
763 # the same TCP connection for HTTP conversations instead of opening a
764 # new one with each new request.
765 #
766 # This option must be set to ``False`` if the client socket connection
767 # needs to be closed explicitly after the response is received and
768 # read successfully by the client.
769 #
770 # Possible values:
771 # * True
772 # * False
773 #
774 # Related options:
775 # * None
776 #
777 # (boolean value)
778 #http_keepalive = true
779
780 #
781 # Timeout for client connections' socket operations.
782 #
783 # Provide a valid integer value representing time in seconds to set
784 # the period of wait before an incoming connection can be closed. The
785 # default value is 900 seconds.
786 #
787 # The value zero implies wait forever.
788 #
789 # Possible values:
790 # * Zero
791 # * Positive integer
792 #
793 # Related options:
794 # * None
795 #
796 # (integer value)
797 # Minimum value: 0
798 #client_socket_timeout = 900
799
800 #
801 # From oslo.log
802 #
803
804 # If set to true, the logging level will be set to DEBUG instead of the default
805 # INFO level. (boolean value)
806 # Note: This option can be changed without restarting.
807 #debug = false
808
809 # The name of a logging configuration file. This file is appended to any
810 # existing logging configuration files. For details about logging configuration
811 # files, see the Python logging module documentation. Note that when logging
812 # configuration files are used then all logging configuration is set in the
813 # configuration file and other logging configuration options are ignored (for
814 # example, log-date-format). (string value)
815 # Note: This option can be changed without restarting.
816 # Deprecated group/name - [DEFAULT]/log_config
817 #log_config_append = <None>
818
819 # Defines the format string for %%(asctime)s in log records. Default:
820 # %(default)s . This option is ignored if log_config_append is set. (string
821 # value)
822 #log_date_format = %Y-%m-%d %H:%M:%S
823
824 # (Optional) Name of log file to send logging output to. If no default is set,
825 # logging will go to stderr as defined by use_stderr. This option is ignored if
826 # log_config_append is set. (string value)
827 # Deprecated group/name - [DEFAULT]/logfile
828 #log_file = <None>
829
830 # (Optional) The base directory used for relative log_file paths. This option
831 # is ignored if log_config_append is set. (string value)
832 # Deprecated group/name - [DEFAULT]/logdir
833 #log_dir = <None>
834
835 # Uses logging handler designed to watch file system. When log file is moved or
836 # removed this handler will open a new log file with specified path
837 # instantaneously. It makes sense only if log_file option is specified and Linux
838 # platform is used. This option is ignored if log_config_append is set. (boolean
839 # value)
840 #watch_log_file = false
841
842 # Use syslog for logging. Existing syslog format is DEPRECATED and will be
843 # changed later to honor RFC5424. This option is ignored if log_config_append is
844 # set. (boolean value)
845 #use_syslog = false
846
847 # Enable journald for logging. If running in a systemd environment you may wish
848 # to enable journal support. Doing so will use the journal native protocol which
849 # includes structured metadata in addition to log messages. This option is
850 # ignored if log_config_append is set. (boolean value)
851 #use_journal = false
852
853 # Syslog facility to receive log lines. This option is ignored if
854 # log_config_append is set. (string value)
855 #syslog_log_facility = LOG_USER
856
857 # Use JSON formatting for logging. This option is ignored if log_config_append
858 # is set. (boolean value)
859 #use_json = false
860
861 # Log output to standard error. This option is ignored if log_config_append is
862 # set. (boolean value)
863 #use_stderr = false
864
865 # Log output to Windows Event Log. (boolean value)
866 #use_eventlog = false
867
868 # The amount of time before the log files are rotated. This option is ignored
869 # unless log_rotation_type is set to "interval". (integer value)
870 #log_rotate_interval = 1
871
872 # Rotation interval type. The time of the last file change (or the time when the
873 # service was started) is used when scheduling the next rotation. (string value)
874 # Possible values:
875 # Seconds - <No description provided>
876 # Minutes - <No description provided>
877 # Hours - <No description provided>
878 # Days - <No description provided>
879 # Weekday - <No description provided>
880 # Midnight - <No description provided>
881 #log_rotate_interval_type = days
882
883 # Maximum number of rotated log files. (integer value)
884 #max_logfile_count = 30
885
886 # Log file maximum size in MB. This option is ignored if "log_rotation_type" is
887 # not set to "size". (integer value)
888 #max_logfile_size_mb = 200
889
890 # Log rotation type. (string value)
891 # Possible values:
892 # interval - Rotate logs at predefined time intervals.
893 # size - Rotate logs once they reach a predefined size.
894 # none - Do not rotate log files.
895 #log_rotation_type = none
896
897 # Format string to use for log messages with context. Used by
898 # oslo_log.formatters.ContextFormatter (string value)
899 #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
900
901 # Format string to use for log messages when context is undefined. Used by
902 # oslo_log.formatters.ContextFormatter (string value)
903 #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s
904
905 # Additional data to append to log message when logging level for the message is
906 # DEBUG. Used by oslo_log.formatters.ContextFormatter (string value)
907 #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d
908
909 # Prefix each line of exception output with this format. Used by
910 # oslo_log.formatters.ContextFormatter (string value)
911 #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
912
913 # Defines the format string for %(user_identity)s that is used in
914 # logging_context_format_string. Used by oslo_log.formatters.ContextFormatter
915 # (string value)
916 #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s
917
918 # List of package logging levels in logger=LEVEL pairs. This option is ignored
919 # if log_config_append is set. (list value)
920 #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,oslo_policy=INFO,dogpile.core.dogpile=INFO
921
922 # Enables or disables publication of error events. (boolean value)
923 #publish_errors = false
924
925 # The format for an instance that is passed with the log message. (string value)
926 #instance_format = "[instance: %(uuid)s] "
927
928 # The format for an instance UUID that is passed with the log message. (string
929 # value)
930 #instance_uuid_format = "[instance: %(uuid)s] "
931
932 # Interval, number of seconds, of log rate limiting. (integer value)
933 #rate_limit_interval = 0
934
935 # Maximum number of logged messages per rate_limit_interval. (integer value)
936 #rate_limit_burst = 0
937
938 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or
939 # empty string. Logs with level greater or equal to rate_limit_except_level are
940 # not filtered. An empty string means that all levels are filtered. (string
941 # value)
942 #rate_limit_except_level = CRITICAL
943
944 # Enables or disables fatal status of deprecations. (boolean value)
945 #fatal_deprecations = false
946
947 #
948 # From oslo.messaging
949 #
950
951 # Size of RPC connection pool. (integer value)
952 #rpc_conn_pool_size = 30
953
954 # The pool size limit for connections expiration policy (integer value)
955 #conn_pool_min_size = 2
956
957 # The time-to-live in sec of idle connections in the pool (integer value)
958 #conn_pool_ttl = 1200
959
960 # Size of executor thread pool when executor is threading or eventlet. (integer
961 # value)
962 # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size
963 #executor_thread_pool_size = 64
964
965 # Seconds to wait for a response from a call. (integer value)
966 #rpc_response_timeout = 60
967
968 # The network address and optional user credentials for connecting to the
969 # messaging backend, in URL format. The expected format is:
970 #
971 # driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
972 #
973 # Example: rabbit://rabbitmq:password@127.0.0.1:5672//
974 #
975 # For full details on the fields in the URL see the documentation of
976 # oslo_messaging.TransportURL at
977 # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
978 # (string value)
979 #transport_url = rabbit://
980
981 # The default exchange under which topics are scoped. May be overridden by an
982 # exchange name specified in the transport_url option. (string value)
983 #control_exchange = openstack
984
985
986 [database]
987
988 #
989 # From oslo.db
990 #
991
992 # If True, SQLite uses synchronous mode. (boolean value)
993 #sqlite_synchronous = true
994
995 # The back end to use for the database. (string value)
996 # Deprecated group/name - [DEFAULT]/db_backend
997 #backend = sqlalchemy
998
999 # The SQLAlchemy connection string to use to connect to the database. (string
1000 # value)
1001 # Deprecated group/name - [DEFAULT]/sql_connection
1002 # Deprecated group/name - [DATABASE]/sql_connection
1003 # Deprecated group/name - [sql]/connection
1004 #connection = <None>
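#
# For example, a typical MySQL connection string (host name and
# credentials below are placeholders):
#
#     connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance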
1005
1006 # The SQLAlchemy connection string to use to connect to the slave database.
1007 # (string value)
1008 #slave_connection = <None>
1009
1010 # The SQL mode to be used for MySQL sessions. This option, including the
1011 # default, overrides any server-set SQL mode. To use whatever SQL mode is set by
1012 # the server configuration, set this to no value. Example: mysql_sql_mode=
1013 # (string value)
1014 #mysql_sql_mode = TRADITIONAL
1015
1016 # If True, transparently enables support for handling MySQL Cluster (NDB).
1017 # (boolean value)
1018 #mysql_enable_ndb = false
1019
1020 # Connections which have been present in the connection pool longer than this
1021 # number of seconds will be replaced with a new one the next time they are
1022 # checked out from the pool. (integer value)
1023 # Deprecated group/name - [DATABASE]/idle_timeout
1024 # Deprecated group/name - [database]/idle_timeout
1025 # Deprecated group/name - [DEFAULT]/sql_idle_timeout
1026 # Deprecated group/name - [DATABASE]/sql_idle_timeout
1027 # Deprecated group/name - [sql]/idle_timeout
1028 #connection_recycle_time = 3600
1029
1030 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0
1031 # indicates no limit. (integer value)
1032 # Deprecated group/name - [DEFAULT]/sql_max_pool_size
1033 # Deprecated group/name - [DATABASE]/sql_max_pool_size
1034 #max_pool_size = 5
1035
1036 # Maximum number of database connection retries during startup. Set to -1 to
1037 # specify an infinite retry count. (integer value)
1038 # Deprecated group/name - [DEFAULT]/sql_max_retries
1039 # Deprecated group/name - [DATABASE]/sql_max_retries
1040 #max_retries = 10
1041
1042 # Interval between retries of opening a SQL connection. (integer value)
1043 # Deprecated group/name - [DEFAULT]/sql_retry_interval
1044 # Deprecated group/name - [DATABASE]/reconnect_interval
1045 #retry_interval = 10
1046
1047 # If set, use this value for max_overflow with SQLAlchemy. (integer value)
1048 # Deprecated group/name - [DEFAULT]/sql_max_overflow
1049 # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow
1050 #max_overflow = 50
1051
1052 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer
1053 # value)
1054 # Minimum value: 0
1055 # Maximum value: 100
1056 # Deprecated group/name - [DEFAULT]/sql_connection_debug
1057 #connection_debug = 0
1058
1059 # Add Python stack traces to SQL as comment strings. (boolean value)
1060 # Deprecated group/name - [DEFAULT]/sql_connection_trace
1061 #connection_trace = false
1062
1063 # If set, use this value for pool_timeout with SQLAlchemy. (integer value)
1064 # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout
1065 #pool_timeout = <None>
1066
1067 # Enable the experimental use of database reconnect on connection lost. (boolean
1068 # value)
1069 #use_db_reconnect = false
1070
1071 # Seconds between retries of a database transaction. (integer value)
1072 #db_retry_interval = 1
1073
1074 # If True, increases the interval between retries of a database operation up to
1075 # db_max_retry_interval. (boolean value)
1076 #db_inc_retry_interval = true
1077
1078 # If db_inc_retry_interval is set, the maximum seconds between retries of a
1079 # database operation. (integer value)
1080 #db_max_retry_interval = 10
1081
1082 # Maximum retries in case of connection error or deadlock error before error is
1083 # raised. Set to -1 to specify an infinite retry count. (integer value)
1084 #db_max_retries = 20
1085
1086 # Optional URL parameters to append onto the connection URL at connect time;
1087 # specify as param1=value1&param2=value2&... (string value)
1088 #connection_parameters =
1089
1090 #
1091 # From oslo.db.concurrency
1092 #
1093
1094 # Enable the experimental use of thread pooling for all DB API calls (boolean
1095 # value)
1096 # Deprecated group/name - [DEFAULT]/dbapi_use_tpool
1097 #use_tpool = false
1098
1099
1100 [keystone_authtoken]
1101
1102 #
1103 # From keystonemiddleware.auth_token
1104 #
1105
1106 # Complete "public" Identity API endpoint. This endpoint should not be an
1107 # "admin" endpoint, as it should be accessible by all end users. Unauthenticated
1108 # clients are redirected to this endpoint to authenticate. Although this
1109 # endpoint should ideally be unversioned, client support in the wild varies. If
1110 # you're using a versioned v2 endpoint here, then this should *not* be the same
1111 # endpoint the service user utilizes for validating tokens, because normal end
1112 # users may not be able to reach that endpoint. (string value)
1113 # Deprecated group/name - [keystone_authtoken]/auth_uri
1114 #www_authenticate_uri = <None>
1115
1116 # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not
1117 # be an "admin" endpoint, as it should be accessible by all end users.
1118 # Unauthenticated clients are redirected to this endpoint to authenticate.
1119 # Although this endpoint should ideally be unversioned, client support in the
1120 # wild varies. If you're using a versioned v2 endpoint here, then this should
1121 # *not* be the same endpoint the service user utilizes for validating tokens,
1122 # because normal end users may not be able to reach that endpoint. This option
1123 # is deprecated in favor of www_authenticate_uri and will be removed in the S
1124 # release. (string value)
1125 # This option is deprecated for removal since Queens.
1126 # Its value may be silently ignored in the future.
1127 # Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and
1128 # will be removed in the S release.
1129 #auth_uri = <None>
1130
1131 # API version of the Identity API endpoint. (string value)
1132 #auth_version = <None>
1133
1134 # Interface to use for the Identity API endpoint. Valid values are "public",
1135 # "internal", or "admin" (default). (string value)
1136 #interface = admin
1137
1138 # Do not handle authorization requests within the middleware, but delegate the
1139 # authorization decision to downstream WSGI components. (boolean value)
1140 #delay_auth_decision = false
1141
1142 # Request timeout value for communicating with Identity API server. (integer
1143 # value)
1144 #http_connect_timeout = <None>
1145
1146 # How many times to retry when communicating with the Identity API
1147 # server. (integer value)
1148 #http_request_max_retries = 3
1149
1150 # Request environment key where the Swift cache object is stored. When
1151 # auth_token middleware is deployed with a Swift cache, use this option to have
1152 # the middleware share a caching backend with swift. Otherwise, use the
1153 # ``memcached_servers`` option instead. (string value)
1154 #cache = <None>
1155
1156 # Required if identity server requires client certificate (string value)
1157 #certfile = <None>
1158
1159 # Required if identity server requires client certificate (string value)
1160 #keyfile = <None>
1161
1162 # A PEM encoded Certificate Authority to use when verifying HTTPS connections.
1163 # Defaults to system CAs. (string value)
1164 #cafile = <None>
1165
1166 # Verify HTTPS connections. (boolean value)
1167 #insecure = false
1168
1169 # The region in which the identity server can be found. (string value)
1170 #region_name = <None>
1171
1172 # Optionally specify a list of memcached server(s) to use for caching. If left
1173 # undefined, tokens will instead be cached in-process. (list value)
1174 # Deprecated group/name - [keystone_authtoken]/memcache_servers
1175 #memcached_servers = <None>
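#
# For example, with a single memcached instance on a host named
# 'controller' (a placeholder name):
#
#     memcached_servers = controller:11211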
1176
1177 # In order to prevent excessive effort spent validating tokens, the middleware
1178 # caches previously-seen tokens for a configurable duration (in seconds). Set to
1179 # -1 to disable caching completely. (integer value)
1180 #token_cache_time = 300
1181
1182 # (Optional) If defined, indicate whether token data should be authenticated or
1183 # authenticated and encrypted. If MAC, token data is authenticated (with HMAC)
1184 # in the cache. If ENCRYPT, token data is encrypted and authenticated in the
1185 # cache. If the value is not one of these options or empty, auth_token will
1186 # raise an exception on initialization. (string value)
1187 # Possible values:
1188 # None - <No description provided>
1189 # MAC - <No description provided>
1190 # ENCRYPT - <No description provided>
1191 #memcache_security_strategy = None
1192
1193 # (Optional, mandatory if memcache_security_strategy is defined) This string is
1194 # used for key derivation. (string value)
1195 #memcache_secret_key = <None>
1196
1197 # (Optional) Number of seconds memcached server is considered dead before it is
1198 # tried again. (integer value)
1199 #memcache_pool_dead_retry = 300
1200
1201 # (Optional) Maximum total number of open connections to every memcached server.
1202 # (integer value)
1203 #memcache_pool_maxsize = 10
1204
1205 # (Optional) Socket timeout in seconds for communicating with a memcached
1206 # server. (integer value)
1207 #memcache_pool_socket_timeout = 3
1208
1209 # (Optional) Number of seconds a connection to memcached is held unused in the
1210 # pool before it is closed. (integer value)
1211 #memcache_pool_unused_timeout = 60
1212
1213 # (Optional) Number of seconds that an operation will wait to get a memcached
1214 # client connection from the pool. (integer value)
1215 #memcache_pool_conn_get_timeout = 10
1216
1217 # (Optional) Use the advanced (eventlet safe) memcached client pool. The
1218 # advanced pool will only work under python 2.x. (boolean value)
1219 #memcache_use_advanced_pool = false
1220
1221 # (Optional) Indicate whether to set the X-Service-Catalog header. If False,
1222 # middleware will not ask for service catalog on token validation and will not
1223 # set the X-Service-Catalog header. (boolean value)
1224 #include_service_catalog = true
1225
1226 # Used to control the use and type of token binding. Can be set to: "disabled"
1227 # to not check token binding; "permissive" (default) to validate binding
1228 # information if the bind type is of a form known to the server and ignore it if
1229 # not; "strict", like "permissive", but the token will be rejected if the bind
1230 # type is unknown; "required", meaning some form of token binding is required;
1231 # or, finally, the name of a binding method that must be present in tokens.
1232 # (string value)
1233 #enforce_token_bind = permissive
1234
1235 # A choice of roles that must be present in a service token. Service tokens are
1236 # allowed to request that an expired token can be used and so this check should
1237 # tightly control that only actual services send this token. Roles here are
1238 # applied as an ANY check, so possession of any one role in this list is
1239 # sufficient. For backwards compatibility reasons this currently only affects
1240 # the allow_expired check. (list value)
1241 #service_token_roles = service
1242
1243 # For backwards compatibility reasons we must treat as valid those service
1244 # tokens that fail the service_token_roles check. Setting this to true will
1245 # become the default in a future release and should be enabled if possible.
1246 # (boolean value)
1247 #service_token_roles_required = false
1248
1249 # The name or type of the service as it appears in the service catalog. This is
1250 # used to validate tokens that have restricted access rules. (string value)
1251 #service_type = <None>
1252
1253 # Authentication type to load (string value)
1254 # Deprecated group/name - [keystone_authtoken]/auth_plugin
1255 #auth_type = <None>
1256
1257 # Config Section from which to load plugin specific options (string value)
1258 #auth_section = <None>
1259
1260
1261 [oslo_messaging_amqp]
1262
1263 #
1264 # From oslo.messaging
1265 #
1266
1267 # Name for the AMQP container. Must be globally unique. Defaults to a generated
1268 # UUID. (string value)
1269 #container_name = <None>
1270
1271 # Timeout for inactive connections (in seconds) (integer value)
1272 #idle_timeout = 0
1273
1274 # Debug: dump AMQP frames to stdout (boolean value)
1275 #trace = false
1276
1277 # Attempt to connect via SSL. If no other ssl-related parameters are given, it
1278 # will use the system's CA-bundle to verify the server's certificate. (boolean
1279 # value)
1280 #ssl = false
1281
1282 # CA certificate PEM file used to verify the server's certificate (string value)
1283 #ssl_ca_file =
1284
1285 # Self-identifying certificate PEM file for client authentication (string value)
1286 #ssl_cert_file =
1287
1288 # Private key PEM file used to sign ssl_cert_file certificate (optional) (string
1289 # value)
1290 #ssl_key_file =
1291
1292 # Password for decrypting ssl_key_file (if encrypted) (string value)
1293 #ssl_key_password = <None>
1294
1295 # By default SSL checks that the name in the server's certificate matches the
1296 # hostname in the transport_url. In some configurations it may be preferable to
1297 # use the virtual hostname instead, for example if the server uses the Server
1298 # Name Indication TLS extension (rfc6066) to provide a certificate per virtual
1299 # host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
1300 # virtual host name instead of the DNS name. (boolean value)
1301 #ssl_verify_vhost = false
1302
1303 # Space separated list of acceptable SASL mechanisms (string value)
1304 #sasl_mechanisms =
1305
1306 # Path to directory that contains the SASL configuration (string value)
1307 #sasl_config_dir =
1308
1309 # Name of configuration file (without .conf suffix) (string value)
1310 #sasl_config_name =
1311
1312 # SASL realm to use if no realm present in username (string value)
1313 #sasl_default_realm =
1314
1315 # Seconds to pause before attempting to re-connect. (integer value)
1316 # Minimum value: 1
1317 #connection_retry_interval = 1
1318
1319 # Increase the connection_retry_interval by this many seconds after each
1320 # unsuccessful failover attempt. (integer value)
1321 # Minimum value: 0
1322 #connection_retry_backoff = 2
1323
1324 # Maximum limit for connection_retry_interval + connection_retry_backoff
1325 # (integer value)
1326 # Minimum value: 1
1327 #connection_retry_interval_max = 30
1328
1329 # Time to pause between re-connecting an AMQP 1.0 link that failed due to a
1330 # recoverable error. (integer value)
1331 # Minimum value: 1
1332 #link_retry_delay = 10
1333
1334 # The maximum number of attempts to re-send a reply message which failed due to
1335 # a recoverable error. (integer value)
1336 # Minimum value: -1
1337 #default_reply_retry = 0
1338
1339 # The deadline for an rpc reply message delivery. (integer value)
1340 # Minimum value: 5
1341 #default_reply_timeout = 30
1342
1343 # The deadline for an rpc cast or call message delivery. Only used when caller
1344 # does not provide a timeout expiry. (integer value)
1345 # Minimum value: 5
1346 #default_send_timeout = 30
1347
1348 # The deadline for a sent notification message delivery. Only used when caller
1349 # does not provide a timeout expiry. (integer value)
1350 # Minimum value: 5
1351 #default_notify_timeout = 30
1352
1353 # The duration to schedule a purge of idle sender links. Detach link after
1354 # expiry. (integer value)
1355 # Minimum value: 1
1356 #default_sender_link_timeout = 600
1357
1358 # Indicates the addressing mode used by the driver.
1359 # Permitted values:
1360 # 'legacy' - use legacy non-routable addressing
1361 # 'routable' - use routable addresses
1362 # 'dynamic' - use legacy addresses if the message bus does not support routing
1363 # otherwise use routable addressing (string value)
1364 #addressing_mode = dynamic
1365
1366 # Enable virtual host support for those message buses that do not natively
1367 # support virtual hosting (such as qpidd). When set to true the virtual host
1368 # name will be added to all message bus addresses, effectively creating a
1369 # private 'subnet' per virtual host. Set to False if the message bus supports
1370 # virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative
1371 # as the name of the virtual host. (boolean value)
1372 #pseudo_vhost = true
1373
1374 # address prefix used when sending to a specific server (string value)
1375 #server_request_prefix = exclusive
1376
1377 # address prefix used when broadcasting to all servers (string value)
1378 #broadcast_prefix = broadcast
1379
1380 # address prefix when sending to any server in group (string value)
1381 #group_request_prefix = unicast
1382
1383 # Address prefix for all generated RPC addresses (string value)
1384 #rpc_address_prefix = openstack.org/om/rpc
1385
1386 # Address prefix for all generated Notification addresses (string value)
1387 #notify_address_prefix = openstack.org/om/notify
1388
1389 # Appended to the address prefix when sending a fanout message. Used by the
1390 # message bus to identify fanout messages. (string value)
1391 #multicast_address = multicast
1392
1393 # Appended to the address prefix when sending to a particular RPC/Notification
1394 # server. Used by the message bus to identify messages sent to a single
1395 # destination. (string value)
1396 #unicast_address = unicast
1397
1398 # Appended to the address prefix when sending to a group of consumers. Used by
1399 # the message bus to identify messages that should be delivered in a round-robin
1400 # fashion across consumers. (string value)
1401 #anycast_address = anycast
1402
1403 # Exchange name used in notification addresses.
1404 # Exchange name resolution precedence:
1405 # Target.exchange if set
1406 # else default_notification_exchange if set
1407 # else control_exchange if set
1408 # else 'notify' (string value)
1409 #default_notification_exchange = <None>
1410
1411 # Exchange name used in RPC addresses.
1412 # Exchange name resolution precedence:
1413 # Target.exchange if set
1414 # else default_rpc_exchange if set
1415 # else control_exchange if set
1416 # else 'rpc' (string value)
1417 #default_rpc_exchange = <None>
1418
1419 # Window size for incoming RPC Reply messages. (integer value)
1420 # Minimum value: 1
1421 #reply_link_credit = 200
1422
1423 # Window size for incoming RPC Request messages (integer value)
1424 # Minimum value: 1
1425 #rpc_server_credit = 100
1426
1427 # Window size for incoming Notification messages (integer value)
1428 # Minimum value: 1
1429 #notify_server_credit = 100
1430
1431 # Send messages of this type pre-settled.
1432 # Pre-settled messages will not receive acknowledgement
1433 # from the peer. Note well: pre-settled messages may be
1434 # silently discarded if the delivery fails.
1435 # Permitted values:
1436 # 'rpc-call' - send RPC Calls pre-settled
1437 # 'rpc-reply' - send RPC Replies pre-settled
1438 # 'rpc-cast' - Send RPC Casts pre-settled
1439 # 'notify' - Send Notifications pre-settled
1440 # (multi valued)
1441 #pre_settled = rpc-cast
1442 #pre_settled = rpc-reply
1443
1444
1445 [oslo_messaging_kafka]
1446
1447 #
1448 # From oslo.messaging
1449 #
1450
1451 # Max fetch bytes of Kafka consumer (integer value)
1452 #kafka_max_fetch_bytes = 1048576
1453
1454 # Default timeout(s) for Kafka consumers (floating point value)
1455 #kafka_consumer_timeout = 1.0
1456
1457 # DEPRECATED: Pool Size for Kafka Consumers (integer value)
1458 # This option is deprecated for removal.
1459 # Its value may be silently ignored in the future.
1460 # Reason: Driver no longer uses connection pool.
1461 #pool_size = 10
1462
1463 # DEPRECATED: The pool size limit for connections expiration policy (integer
1464 # value)
1465 # This option is deprecated for removal.
1466 # Its value may be silently ignored in the future.
1467 # Reason: Driver no longer uses connection pool.
1468 #conn_pool_min_size = 2
1469
1470 # DEPRECATED: The time-to-live in sec of idle connections in the pool (integer
1471 # value)
1472 # This option is deprecated for removal.
1473 # Its value may be silently ignored in the future.
1474 # Reason: Driver no longer uses connection pool.
1475 #conn_pool_ttl = 1200
1476
1477 # Group id for Kafka consumer. Consumers in one group will coordinate message
1478 # consumption (string value)
1479 #consumer_group = oslo_messaging_consumer
1480
1481 # Upper bound on the delay for KafkaProducer batching in seconds (floating point
1482 # value)
1483 #producer_batch_timeout = 0.0
1484
1485 # Size of batch for the producer async send (integer value)
1486 #producer_batch_size = 16384
1487
1488 # The compression codec for all data generated by the producer. If not set,
1489 # compression will not be used. Note that the allowed values of this depend on
1490 # the kafka version (string value)
1491 # Possible values:
1492 # none - <No description provided>
1493 # gzip - <No description provided>
1494 # snappy - <No description provided>
1495 # lz4 - <No description provided>
1496 # zstd - <No description provided>
1497 #compression_codec = none
1498
1499 # Enable asynchronous consumer commits (boolean value)
1500 #enable_auto_commit = false
1501
1502 # The maximum number of records returned in a poll call (integer value)
1503 #max_poll_records = 500
1504
1505 # Protocol used to communicate with brokers (string value)
1506 # Possible values:
1507 # PLAINTEXT - <No description provided>
1508 # SASL_PLAINTEXT - <No description provided>
1509 # SSL - <No description provided>
1510 # SASL_SSL - <No description provided>
1511 #security_protocol = PLAINTEXT
1512
1513 # Mechanism when security protocol is SASL (string value)
1514 #sasl_mechanism = PLAIN
1515
1516 # CA certificate PEM file used to verify the server certificate (string value)
1517 #ssl_cafile =
1518
1519 # Client certificate PEM file used for authentication. (string value)
1520 #ssl_client_cert_file =
1521
1522 # Client key PEM file used for authentication. (string value)
1523 #ssl_client_key_file =
1524
1525 # Client key password file used for authentication. (string value)
1526 #ssl_client_key_password =
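
Tying the Kafka security options above together, a deployment using SASL over TLS might set (all values illustrative):

```ini
[oslo_messaging_kafka]
# Authenticate with SASL PLAIN over an encrypted channel; the CA file
# path is an example only.
security_protocol = SASL_SSL
sasl_mechanism = PLAIN
ssl_cafile = /etc/ssl/certs/kafka-ca.pem
```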
1527
1528
1529 [oslo_messaging_notifications]
1530
1531 #
1532 # From oslo.messaging
1533 #
1534
1535 # The driver(s) to handle sending notifications. Possible values are messaging,
1536 # messagingv2, routing, log, test, noop (multi valued)
1537 # Deprecated group/name - [DEFAULT]/notification_driver
1538 #driver =
1539
1540 # A URL representing the messaging driver to use for notifications. If not set,
1541 # we fall back to the same configuration used for RPC. (string value)
1542 # Deprecated group/name - [DEFAULT]/notification_transport_url
1543 #transport_url = <None>
1544
1545 # AMQP topic used for OpenStack notifications. (list value)
1546 # Deprecated group/name - [rpc_notifier2]/topics
1547 # Deprecated group/name - [DEFAULT]/notification_topics
1548 #topics = notifications
1549
1550 # The maximum number of attempts to re-send a notification message which failed
1551 # to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite
1552 # (integer value)
1553 #retry = -1
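
For instance, a deployment that wants versioned notifications enabled with bounded retries might set (illustrative values):

```ini
[oslo_messaging_notifications]
# Emit notifications over the RPC transport and give up after five
# failed delivery attempts instead of retrying indefinitely.
driver = messagingv2
topics = notifications
retry = 5
```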
1554
1555
1556 [oslo_messaging_rabbit]
1557
1558 #
1559 # From oslo.messaging
1560 #
1561
1562 # Use durable queues in AMQP. (boolean value)
1563 #amqp_durable_queues = false
1564
1565 # Auto-delete queues in AMQP. (boolean value)
1566 #amqp_auto_delete = false
1567
1568 # Connect over SSL. (boolean value)
1569 # Deprecated group/name - [oslo_messaging_rabbit]/rabbit_use_ssl
1570 #ssl = false
1571
1572 # SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and
1573 # SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some
1574 # distributions. (string value)
1575 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version
1576 #ssl_version =
1577
1578 # SSL key file (valid only if SSL enabled). (string value)
1579 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile
1580 #ssl_key_file =
1581
1582 # SSL cert file (valid only if SSL enabled). (string value)
1583 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile
1584 #ssl_cert_file =
1585
1586 # SSL certification authority file (valid only if SSL enabled). (string value)
1587 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs
1588 #ssl_ca_file =
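
The four SSL options above are typically set together; a sketch (certificate paths are examples only):

```ini
[oslo_messaging_rabbit]
# Illustrative TLS client setup for RabbitMQ; adjust paths to match
# your deployment.
ssl = true
ssl_ca_file = /etc/ssl/certs/rabbit-ca.pem
ssl_cert_file = /etc/glance/rabbit-client.pem
ssl_key_file = /etc/glance/rabbit-client.key
```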
1589
1590 # EXPERIMENTAL: Run the health check heartbeat thread through a native Python
1591 # thread. By default, if this option is not set, the health check heartbeat
1592 # inherits the execution model from the parent process. For example, if the
1593 # parent process has monkey patched the stdlib using eventlet/greenlet, the
1594 # heartbeat will be run through a green thread. (boolean value)
1595 #heartbeat_in_pthread = false
1596
1597 # How long to wait before reconnecting in response to an AMQP consumer cancel
1598 # notification. (floating point value)
1599 #kombu_reconnect_delay = 1.0
1600
1601 # EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not
1602 # be used. This option may not be available in future versions. (string value)
1603 #kombu_compression = <None>
1604
1605 # How long to wait for a missing client before abandoning the attempt to send
1606 # it its replies. This value should not be longer than rpc_response_timeout.
1607 # Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout
1608 #kombu_missing_consumer_retry_timeout = 60
1609
1610 # Determines how the next RabbitMQ node is chosen in case the one we are
1611 # currently connected to becomes unavailable. Takes effect only if more than one
1612 # RabbitMQ node is provided in config. (string value)
1613 # Possible values:
1614 # round-robin - <No description provided>
1615 # shuffle - <No description provided>
1616 #kombu_failover_strategy = round-robin
1617
1618 # The RabbitMQ login method. (string value)
1619 # Possible values:
1620 # PLAIN - <No description provided>
1621 # AMQPLAIN - <No description provided>
1622 # RABBIT-CR-DEMO - <No description provided>
1623 #rabbit_login_method = AMQPLAIN
1624
1625 # How frequently to retry connecting with RabbitMQ. (integer value)
1626 #rabbit_retry_interval = 1
1627
1628 # How long to backoff for between retries when connecting to RabbitMQ. (integer
1629 # value)
1630 #rabbit_retry_backoff = 2
1631
1632 # Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
1633 # (integer value)
1634 #rabbit_interval_max = 30
1635
1636 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this
1637 # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring
1638 # is no longer controlled by the x-ha-policy argument when declaring a queue. If
1639 # you just want to make sure that all queues (except those with auto-generated
1640 # names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA
1641 # '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value)
1642 #rabbit_ha_queues = false
1643
1644 # Positive integer representing duration in seconds for queue TTL (x-expires).
1645 # Queues which are unused for the duration of the TTL are automatically deleted.
1646 # The parameter affects only reply and fanout queues. (integer value)
1647 # Minimum value: 1
1648 #rabbit_transient_queues_ttl = 1800
1649
1650 # Specifies the number of messages to prefetch. Setting to zero allows unlimited
1651 # messages. (integer value)
1652 #rabbit_qos_prefetch_count = 0
1653
1654 # Number of seconds after which the Rabbit broker is considered down if
1655 # heartbeat's keep-alive fails (0 disables heartbeat). (integer value)
1656 #heartbeat_timeout_threshold = 60
1657
1658 # How many times during the heartbeat_timeout_threshold we check the heartbeat.
1659 # (integer value)
1660 #heartbeat_rate = 2
1661
1662 # Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send is
1663 # used for replies, so the MessageUndeliverable exception is raised in case the
1664 # client queue does not exist. (integer value)
1665 #direct_mandatory_flag = True
1666
1667
1668 [oslo_policy]
1669
1670 #
1671 # From oslo.policy
1672 #
1673
1674 # This option controls whether or not to enforce scope when evaluating policies.
1675 # If ``True``, the scope of the token used in the request is compared to the
1676 # ``scope_types`` of the policy being enforced. If the scopes do not match, an
1677 # ``InvalidScope`` exception will be raised. If ``False``, a message will be
1678 # logged informing operators that policies are being invoked with mismatching
1679 # scope. (boolean value)
1680 #enforce_scope = false
1681
1682 # The relative or absolute path of a file that maps roles to permissions for a
1683 # given service. Relative paths must be specified in relation to the
1684 # configuration file setting this option. (string value)
1685 #policy_file = policy.json
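
A policy file in this format is a simple mapping of rule names to role checks; a minimal sketch (the rules shown are illustrative, not a recommended policy):

```json
{
    "default": "role:admin",
    "get_image": "role:admin or role:member"
}
```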
1686
1687 # Default rule. Enforced when a requested rule is not found. (string value)
1688 #policy_default_rule = default
1689
1690 # Directories where policy configuration files are stored. They can be relative
1691 # to any directory in the search path defined by the config_dir option, or
1692 # absolute paths. The file defined by policy_file must exist for these
1693 # directories to be searched. Missing or empty directories are ignored. (multi
1694 # valued)
1695 #policy_dirs = policy.d
1696
1697 # Content Type to send and receive data for REST based policy check (string
1698 # value)
1699 # Possible values:
1700 # application/x-www-form-urlencoded - <No description provided>
1701 # application/json - <No description provided>
1702 #remote_content_type = application/x-www-form-urlencoded
1703
1704 # server identity verification for REST based policy check (boolean value)
1705 #remote_ssl_verify_server_crt = false
1706
1707 # Absolute path to ca cert file for REST based policy check (string value)
1708 #remote_ssl_ca_crt_file = <None>
1709
1710 # Absolute path to client cert for REST based policy check (string value)
1711 #remote_ssl_client_crt_file = <None>
1712
1713 # Absolute path to client key file for REST based policy check (string value)
1714 #remote_ssl_client_key_file = <None>
1715
1716
1717 [paste_deploy]
1718
1719 #
1720 # From glance.registry
1721 #
1722
1723 #
1724 # Deployment flavor to use in the server application pipeline.
1725 #
1726 # Provide a string value representing the appropriate deployment
1727 # flavor used in the server application pipeline. This is typically
1728 # the partial name of a pipeline in the paste configuration file with
1729 # the service name removed.
1730 #
1731 # For example, if your paste section name in the paste configuration
1732 # file is [pipeline:glance-api-keystone], set ``flavor`` to
1733 # ``keystone``.
1734 #
1735 # Possible values:
1736 # * String value representing a partial pipeline name.
1737 #
1738 # Related Options:
1739 # * config_file
1740 #
1741 # (string value)
1742 #
1743 # This option has a sample default set, which means that
1744 # its actual default value may vary from the one documented
1745 # below.
1746 #flavor = keystone
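
To sketch the naming convention described above: setting ``flavor = keystone`` in ``glance-api.conf`` selects the paste section whose name is the service name plus the flavor (the filter names in the pipeline below are illustrative, not the exact shipped pipeline):

```ini
# In glance-api-paste.ini, this section is selected when
# [paste_deploy] flavor = keystone, i.e. the service name
# "glance-api-" is stripped from the section name.
[pipeline:glance-api-keystone]
pipeline = healthcheck authtoken context rootapp
```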
1747
1748 #
1749 # Name of the paste configuration file.
1750 #
1751 # Provide a string value representing the name of the paste
1752 # configuration file to use for configuring pipelines for
1753 # server application deployments.
1754 #
1755 # NOTES:
1756 # * Provide the name or the path relative to the glance directory
1757 # for the paste configuration file and not the absolute path.
1758 # * The sample paste configuration file shipped with Glance need
1759 # not be edited in most cases as it comes with ready-made
1760 # pipelines for all common deployment flavors.
1761 #
1762 # If no value is specified for this option, the ``paste.ini`` file
1763 # with the prefix of the corresponding Glance service's configuration
1764 # file name will be searched for in the known configuration
1765 # directories. (For example, if this option is missing from or has no
1766 # value set in ``glance-api.conf``, the service will look for a file
1767 # named ``glance-api-paste.ini``.) If the paste configuration file is
1768 # not found, the service will not start.
1769 #
1770 # Possible values:
1771 # * A string value representing the name of the paste configuration
1772 # file.
1773 #
1774 # Related Options:
1775 # * flavor
1776 #
1777 # (string value)
1778 #
1779 # This option has a sample default set, which means that
1780 # its actual default value may vary from the one documented
1781 # below.
1782 #config_file = glance-api-paste.ini
1783
1784
1785 [profiler]
1786
1787 #
1788 # From glance.registry
1789 #
1790
1791 #
1792 # Enable the profiling for all services on this node.
1793 #
1794 # Default value is False (fully disable the profiling feature).
1795 #
1796 # Possible values:
1797 #
1798 # * True: Enables the feature
1799 # * False: Disables the feature. Profiling cannot be started via
1800 # this project's operations. If profiling is triggered by
1801 # another project, this project's part of the trace will be
1802 # empty.
1803 # (boolean value)
1804 # Deprecated group/name - [profiler]/profiler_enabled
1805 #enabled = false
1806
1807 #
1808 # Enable SQL requests profiling in services.
1809 #
1810 # Default value is False (SQL requests won't be traced).
1811 #
1812 # Possible values:
1813 #
1814 # * True: Enables SQL requests profiling. Each SQL query will be part of the
1815 # trace and can then be analyzed by how much time was spent on it.
1816 # * False: Disables SQL requests profiling. The spent time is only shown on a
1817 # higher level of operations. Single SQL queries cannot be analyzed this way.
1818 # (boolean value)
1819 #trace_sqlalchemy = false
1820
1821 #
1822 # Secret key(s) to use for encrypting context data for performance profiling.
1823 #
1824 # This string value should have the following format: <key1>[,<key2>,...<keyn>],
1825 # where each key is some random string. A user who triggers the profiling via
1826 # the REST API has to set one of these keys in the headers of the REST API call
1827 # to include profiling results of this node for this particular project.
1828 #
1829 # Both "enabled" flag and "hmac_keys" config options should be set to enable
1830 # profiling. Also, to generate correct profiling information across all services
1831 # at least one key needs to be consistent between OpenStack projects. This
1832 # ensures it can be used from client side to generate the trace, containing
1833 # information from all possible resources.
1834 # (string value)
1835 #hmac_keys = SECRET_KEY
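
To illustrate why the caller and the service must share a key, here is a sketch of HMAC-signing a trace context using only Python's stdlib. The function name and payload shape are hypothetical; osprofiler's actual wire format may differ.

```python
import hashlib
import hmac
import json


def sign_trace_context(context: dict, key: str) -> str:
    """Sign a profiling context with one of the configured hmac_keys.

    This mirrors the general idea behind signed trace headers: the
    payload is serialized deterministically and keyed-hashed, so only
    parties holding the same key produce matching signatures.
    """
    payload = json.dumps(context, sort_keys=True).encode("utf-8")
    return hmac.new(key.encode("utf-8"), payload, hashlib.sha1).hexdigest()


# The REST caller and the service only agree when they share a key,
# which is why at least one entry in hmac_keys should be consistent
# across the OpenStack projects participating in a trace.
client_sig = sign_trace_context({"base_id": "42"}, "SECRET_KEY")
server_sig = sign_trace_context({"base_id": "42"}, "SECRET_KEY")
assert client_sig == server_sig
```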
1836
1837 #
1838 # Connection string for a notifier backend.
1839 #
1840 # Default value is ``messaging://`` which sets the notifier to oslo_messaging.
1841 #
1842 # Examples of possible values:
1843 #
1844 # * ``messaging://`` - use oslo_messaging driver for sending spans.
1845 # * ``redis://127.0.0.1:6379`` - use redis driver for sending spans.
1846 # * ``mongodb://127.0.0.1:27017`` - use mongodb driver for sending spans.
1847 # * ``elasticsearch://127.0.0.1:9200`` - use elasticsearch driver for sending
1848 # spans.
1849 # * ``jaeger://127.0.0.1:6831`` - use jaeger tracing as driver for sending
1850 # spans.
1851 # (string value)
1852 #connection_string = messaging://
1853
1854 #
1855 # Document type for notification indexing in elasticsearch.
1856 # (string value)
1857 #es_doc_type = notification
1858
1859 #
1860 # This parameter is a time value parameter (for example: es_scroll_time=2m),
1861 # indicating for how long the nodes that participate in the search will maintain
1862 # relevant resources in order to continue and support it.
1863 # (string value)
1864 #es_scroll_time = 2m
1865
1866 #
1867 # Elasticsearch splits large requests in batches. This parameter defines
1868 # maximum size of each batch (for example: es_scroll_size=10000).
1869 # (integer value)
1870 #es_scroll_size = 10000
1871
1872 #
1873 # Redis Sentinel provides a timeout option on the connections.
1874 # This parameter defines that timeout (for example: socket_timeout=0.1).
1875 # (floating point value)
1876 #socket_timeout = 0.1
1877
1878 #
1879 # Redis Sentinel uses a service name to identify a master redis service.
1880 # This parameter defines the name (for example:
1881 # ``sentinel_service_name=mymaster``).
1882 # (string value)
1883 #sentinel_service_name = mymaster
1884
1885 #
1886 # Enable filtering of traces that contain an error/exception into a separate place.
1887 #
1888 # Default value is set to False.
1889 #
1890 # Possible values:
1891 #
1892 # * True: Enable filtering of traces that contain an error/exception.
1893 # * False: Disable the filter.
1894 # (boolean value)
1895 #filter_error_trace = false
115115 #
116116 # (integer value)
117117 #image_location_quota = 10
118
119 # DEPRECATED:
120 # Python module path of data access API.
121 #
122 # Specifies the path to the API to use for accessing the data model.
123 # This option determines how the image catalog data will be accessed.
124 #
125 # Possible values:
126 # * glance.db.sqlalchemy.api
127 # * glance.db.registry.api
128 # * glance.db.simple.api
129 #
130 # If this option is set to ``glance.db.sqlalchemy.api`` then the image
131 # catalog data is stored in and read from the database via the
132 # SQLAlchemy Core and ORM APIs.
133 #
134 # Setting this option to ``glance.db.registry.api`` will force all
135 # database access requests to be routed through the Registry service.
136 # This avoids data access from the Glance API nodes for an added layer
137 # of security, scalability and manageability.
138 #
139 # NOTE: In v2 OpenStack Images API, the registry service is optional.
140 # In order to use the Registry API in v2, the option
141 # ``enable_v2_registry`` must be set to ``True``.
142 #
143 # Finally, when this configuration option is set to
144 # ``glance.db.simple.api``, image catalog data is stored in and read
145 # from an in-memory data structure. This is primarily used for testing.
146 #
147 # Related options:
148 # * enable_v2_api
149 # * enable_v2_registry
150 #
151 # (string value)
152 # This option is deprecated for removal since Queens.
153 # Its value may be silently ignored in the future.
154 # Reason:
155 # Glance registry service is deprecated for removal.
156 #
157 # More information can be found from the spec:
158 # http://specs.openstack.org/openstack/glance-
159 # specs/specs/queens/approved/glance/deprecate-registry.html
160 #data_api = glance.db.sqlalchemy.api
161118
162119 #
163120 # The default number of results to return for a request.
334291 #
335292 # (string value)
336293 #user_storage_quota = 0
337
338 #
339 # Deploy the v2 OpenStack Images API.
340 #
341 # When this option is set to ``True``, Glance service will respond
342 # to requests on registered endpoints conforming to the v2 OpenStack
343 # Images API.
344 #
345 # NOTES:
346 # * If this option is disabled, then the ``enable_v2_registry``
347 # option, which is enabled by default, is also recommended
348 # to be disabled.
349 #
350 # Possible values:
351 # * True
352 # * False
353 #
354 # Related options:
355 # * enable_v2_registry
356 #
357 # (boolean value)
358 #enable_v2_api = true
359
360 #
361 # DEPRECATED FOR REMOVAL
362 # (boolean value)
363 #enable_v1_registry = true
364
365 # DEPRECATED:
366 # Deploy the v2 API Registry service.
367 #
368 # When this option is set to ``True``, the Registry service
369 # will be enabled in Glance for v2 API requests.
370 #
371 # NOTES:
372 # * Use of Registry is optional in v2 API, so this option
373 # must only be enabled if both ``enable_v2_api`` is set to
374 # ``True`` and the ``data_api`` option is set to
375 # ``glance.db.registry.api``.
376 #
377 # * If deploying only the v1 OpenStack Images API, this option,
378 # which is enabled by default, should be disabled.
379 #
380 # Possible values:
381 # * True
382 # * False
383 #
384 # Related options:
385 # * enable_v2_api
386 # * data_api
387 #
388 # (boolean value)
389 # This option is deprecated for removal since Queens.
390 # Its value may be silently ignored in the future.
391 # Reason:
392 # Glance registry service is deprecated for removal.
393 #
394 # More information can be found from the spec:
395 # http://specs.openstack.org/openstack/glance-
396 # specs/specs/queens/approved/glance/deprecate-registry.html
397 #enable_v2_registry = true
398294
399295 #
400296 # Host address of the pydev server.
12581154 # Related options:
12591155 # * None
12601156 #
1157 # NOTE: You cannot use an encrypted volume_type associated with an NFS backend.
1158 # An encrypted volume stored on an NFS backend will raise an exception whenever
1159 # glance_store tries to write or access image data stored in that volume.
1160 # Consult your Cinder administrator to determine an appropriate volume_type.
1161 #
12611162 # (string value)
12621163 #cinder_volume_type = <None>
12631164
13641265 #
13651266 # Filesystem store metadata file.
13661267 #
1367 # The path to a file which contains the metadata to be returned with
1368 # any location associated with the filesystem store. The file must
1369 # contain a valid JSON object. The object should contain the keys
1370 # ``id`` and ``mountpoint``. The value for both keys should be a
1371 # string.
1268 # The path to a file which contains the metadata to be returned with any
1269 # location associated with the filesystem store.
1270 #
1271 # Once this option is set, it is used only for new images created
1272 # afterward; previously existing images are not affected.
1273 #
1274 # The file must contain a valid JSON object. The object should contain the keys
1275 # ``id`` and ``mountpoint``. The value for both keys should be a string.
13721276 #
13731277 # Possible values:
13741278 # * A valid path to the store metadata file
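
For example, a metadata file satisfying the constraints above might look like this (the ``id`` and ``mountpoint`` values are illustrative):

```json
{
    "id": "nfs-share-1",
    "mountpoint": "/var/lib/glance/images"
}
```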
14231327 #filesystem_store_chunk_size = 65536
14241328
14251329 #
1330 # Enable or disable thin provisioning in this backend.
1331 #
1332 # This configuration option enables the feature of not actually writing
1333 # null byte sequences to the filesystem; the holes that may appear will
1334 # automatically be interpreted by the filesystem as null bytes and will
1335 # not really consume your storage.
1336 # Enabling this feature will also speed up image upload and save network
1337 # traffic, in addition to saving space in the backend, as null byte
1338 # sequences are not sent over the network.
1339 #
1340 # Possible Values:
1341 # * True
1342 # * False
1343 #
1344 # Related options:
1345 # * None
1346 #
1347 # (boolean value)
1348 #filesystem_thin_provisioning = false
1349
1350 #
14261351 # Path to the CA bundle file.
14271352 #
14281353 # This configuration option enables the operator to use a custom
15881513 #
15891514 # (integer value)
15901515 #rados_connect_timeout = 0
1516
1517 #
1518 # Enable or disable thin provisioning in this backend.
1519 #
1520 # This configuration option enables the feature of not actually writing
1521 # null byte sequences to the RBD backend; the holes that may appear will
1522 # automatically be interpreted by Ceph as null bytes and will not really
1523 # consume your storage. Enabling this feature will also speed up image
1524 # upload and save network traffic, in addition to saving space in the
1525 # backend, as null byte sequences are not sent over the network.
1526 #
1527 # Possible Values:
1528 # * True
1529 # * False
1530 #
1531 # Related options:
1532 # * None
1533 #
1534 # (boolean value)
1535 #rbd_thin_provisioning = false
15911536
15921537 #
15931538 # The host where the S3 server is listening.
25242469 # scope. (boolean value)
25252470 #enforce_scope = false
25262471
2472 # This option controls whether or not to use old deprecated defaults when
2473 # evaluating policies. If ``True``, the old deprecated defaults are not going to
2474 # be evaluated. This means if any existing token is allowed for old defaults but
2475 # is disallowed for new defaults, it will be disallowed. It is encouraged to
2476 # enable this flag along with the ``enforce_scope`` flag so that you can get the
2477 # benefits of new defaults and ``scope_type`` together. (boolean value)
2478 #enforce_new_defaults = false
2479
25272480 # The relative or absolute path of a file that maps roles to permissions for a
25282481 # given service. Relative paths must be specified in relation to the
25292482 # configuration file setting this option. (string value)
9393 "description": "The kernel command line to be used by the libvirt driver, instead of the default. For linux containers (LXC), the value is used as arguments for initialization. This key is valid only for Amazon kernel, ramdisk, or machine images (aki, ari, or ami).",
9494 "type": "string"
9595 },
96 "os_type": {
97 "title": "OS Type",
98 "description": "The operating system installed on the image. The libvirt driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters.",
99 "type": "string",
100 "enum": [
101 "linux",
102 "windows"
103 ]
104 },
96105 "hw_vif_model": {
97106 "title": "Virtual Network Interface",
98107 "description": "Specifies the model of virtual network interface device to use. The valid options depend on the hypervisor configuration. libvirt driver options: KVM and QEMU: e1000, ne2k_pci, pcnet, rtl8139, spapr-vlan, and virtio. Xen: e1000, netfront, ne2k_pci, pcnet, and rtl8139.",
208208 },
209209 "hw_vif_multiqueue_enabled": {
210210 "title": "Multiqueue Enabled",
211 j "description": "If true, this enables the virtio-net multiqueue feature. In this case, the driver sets the number of queues equal to the number of guest vCPUs. This makes the network performance scale across a number of vCPUs.",
211 "description": "If true, this enables the virtio-net multiqueue feature. In this case, the driver sets the number of queues equal to the number of guest vCPUs. This makes the network performance scale across a number of vCPUs.",
212212 "type": "string",
213213 "enum": ["true", "false"]
214214 }
55 "protected": true,
66 "resource_type_associations": [
77 {
8 "name": "OS::Glance::Image"
8 "name": "OS::Glance::Image",
9 "prefix": "hw_"
910 },
1011 {
1112 "name": "OS::Cinder::Volume",
13 "prefix": "hw_",
1214 "properties_target": "image"
1315 },
1416 {
15 "name": "OS::Nova::Flavor"
17 "name": "OS::Nova::Flavor",
18 "prefix": "hw:"
1619 }
1720 ],
1821 "properties": {
19 "hw_watchdog_action": {
22 "watchdog_action": {
2023 "title": "Watchdog Action",
2124 "description": "For the libvirt driver, you can enable and set the behavior of a virtual hardware watchdog device for each flavor. Watchdog devices keep an eye on the guest server, and carry out the configured action, if the server hangs. The watchdog uses the i6300esb device (emulating a PCI Intel 6300ESB). If hw_watchdog_action is not specified, the watchdog is disabled. Watchdog behavior set using a specific image's properties will override behavior set using flavors.",
2225 "type": "string",
99 }
1010 ],
1111 "properties": {
12 "os_type": {
13 "title": "OS Type",
14 "description": "The operating system installed on the image. The XenAPI driver contains logic that takes different actions depending on the value of the os_type parameter of the image. For example, for os_type=windows images, it creates a FAT32-based swap partition instead of a Linux swap partition, and it limits the injected host name to less than 16 characters.",
15 "type": "string",
16 "enum": [
17 "linux",
18 "windows"
19 ]
20 },
2112 "auto_disk_config": {
2213 "title": "Disk Adapter Type",
2314 "description": "If true, the root partition on the disk is automatically resized before the instance boots. This value is only taken into account by the Compute service when using a Xen-based hypervisor with the XenAPI driver. The Compute service will only attempt to resize if there is a single partition on the image, and only if the partition is in ext3 or ext4 format.",
+0 -10 etc/oslo-config-generator/glance-registry.conf (file deleted)
0 [DEFAULT]
1 wrap_width = 80
2 output_file = etc/glance-registry.conf.sample
3 namespace = glance.registry
4 namespace = oslo.messaging
5 namespace = oslo.db
6 namespace = oslo.db.concurrency
7 namespace = oslo.policy
8 namespace = keystonemiddleware.auth_token
9 namespace = oslo.log
1919
2020
2121 def root_app_factory(loader, global_conf, **local_conf):
22 if not CONF.enable_v2_api and '/v2' in local_conf:
23 del local_conf['/v2']
2422 return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)
3434 def wrapped(context, image, image_repo, **kwargs):
3535 if CONF.enabled_backends:
3636 store_utils.update_store_in_locations(
37 image, image_repo)
37 context, image, image_repo)
3838
3939 return func(context, image, image_repo, **kwargs)
4040
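The hunk above fixes the decorator to thread `context` through to the store-location update. The general shape of that decorator can be sketched as below; `updates_locations` and its parameters are illustrative names, not Glance's API:

```python
# Sketch of the decorator pattern above: optionally run a store
# location update (passing the context along, which is what the
# fix adds) before delegating to the wrapped function.
import functools

def updates_locations(update_fn, enabled):
    def decorator(func):
        @functools.wraps(func)
        def wrapped(context, image, image_repo, **kwargs):
            if enabled:
                # context must be forwarded here, mirroring the fix
                update_fn(context, image, image_repo)
            return func(context, image, image_repo, **kwargs)
        return wrapped
    return decorator
```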
+0
-129
glance/api/cached_images.py less more
0 # Copyright 2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 Controller for Image Cache Management API
17 """
18
19 from oslo_log import log as logging
20 import webob.exc
21
22 from glance.api import policy
23 from glance.api.v1 import controller
24 from glance.common import exception
25 from glance.common import wsgi
26 from glance import image_cache
27
28 LOG = logging.getLogger(__name__)
29
30
31 class Controller(controller.BaseController):
32 """
33 A controller for managing cached images.
34 """
35
36 def __init__(self):
37 self.cache = image_cache.ImageCache()
38 self.policy = policy.Enforcer()
39
40 def _enforce(self, req):
41 """Authorize request against 'manage_image_cache' policy"""
42 try:
43 self.policy.enforce(req.context, 'manage_image_cache', {})
44 except exception.Forbidden:
45 LOG.debug("User not permitted to manage the image cache")
46 raise webob.exc.HTTPForbidden()
47
48 def get_cached_images(self, req):
49 """
50 GET /cached_images
51
52 Returns a mapping of records about cached images.
53 """
54 self._enforce(req)
55 images = self.cache.get_cached_images()
56 return dict(cached_images=images)
57
58 def delete_cached_image(self, req, image_id):
59 """
60 DELETE /cached_images/<IMAGE_ID>
61
62 Removes an image from the cache.
63 """
64 self._enforce(req)
65 self.cache.delete_cached_image(image_id)
66
67 def delete_cached_images(self, req):
68 """
69 DELETE /cached_images - Clear all active cached images
70
71 Removes all images from the cache.
72 """
73 self._enforce(req)
74 return dict(num_deleted=self.cache.delete_all_cached_images())
75
76 def get_queued_images(self, req):
77 """
78 GET /queued_images
79
80 Returns a mapping of records about queued images.
81 """
82 self._enforce(req)
83 images = self.cache.get_queued_images()
84 return dict(queued_images=images)
85
86 def queue_image(self, req, image_id):
87 """
88 PUT /queued_images/<IMAGE_ID>
89
90 Queues an image for caching. We do not check to see if
91 the image is in the registry here. That is done by the
92 prefetcher...
93 """
94 self._enforce(req)
95 self.cache.queue_image(image_id)
96
97 def delete_queued_image(self, req, image_id):
98 """
99 DELETE /queued_images/<IMAGE_ID>
100
101 Removes an image from the cache.
102 """
103 self._enforce(req)
104 self.cache.delete_queued_image(image_id)
105
106 def delete_queued_images(self, req):
107 """
108 DELETE /queued_images - Clear all active queued images
109
110 Removes all images from the cache.
111 """
112 self._enforce(req)
113 return dict(num_deleted=self.cache.delete_all_queued_images())
114
115
116 class CachedImageDeserializer(wsgi.JSONRequestDeserializer):
117 pass
118
119
120 class CachedImageSerializer(wsgi.JSONResponseSerializer):
121 pass
122
123
124 def create_resource():
125 """Cached Images resource factory method"""
126 deserializer = CachedImageDeserializer()
127 serializer = CachedImageSerializer()
128 return wsgi.Resource(Controller(), deserializer, serializer)
2020 from oslo_utils import excutils
2121 from oslo_utils import units
2222
23 import glance.async_
2324 from glance.common import exception
24 from glance.common import wsgi
2525 from glance.i18n import _, _LE, _LW
2626
2727 LOG = logging.getLogger(__name__)
196196 return memoizer_wrapper
197197
198198
199 def get_thread_pool(lock_name, size=1024):
200 """Initializes eventlet thread pool.
199 # NOTE(danms): This is the default pool size that will be used for
200 # the get_thread_pool() cache wrapper below. This is a global here
201 # because it needs to be overridden for the pure-wsgi mode in
202 # wsgi_app.py where native threads are used.
203 DEFAULT_POOL_SIZE = 1024
204
205
206 def get_thread_pool(lock_name, size=None):
207 """Initializes thread pool.
201208
202209 If thread pool is present in cache, then returns it from cache
203210 else create new pool, stores it in cache and return newly created
204211 pool.
205212
206213 @param lock_name: Name of the lock.
207 @param size: Size of eventlet pool.
208
209 @return: eventlet pool
214 @param size: Size of pool.
215
216 @return: ThreadPoolModel
210217 """
218
219 if size is None:
220 size = DEFAULT_POOL_SIZE
221
211222 @memoize(lock_name)
212223 def _get_thread_pool():
213 return wsgi.get_asynchronous_eventlet_pool(size=size)
224 threadpool_cls = glance.async_.get_threadpool_model()
225 LOG.debug('Initializing named threadpool %r', lock_name)
226 return threadpool_cls(size)
214227
215228 return _get_thread_pool
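The memoized named-pool pattern in the hunk above can be shown standalone, using `concurrent.futures` as a stand-in for Glance's pluggable threadpool model (`glance.async_.get_threadpool_model`); the module-level cache is illustrative:

```python
# Standalone sketch of get_thread_pool above: one pool per lock
# name, created lazily and cached, with a module-level default
# size that callers can override per pool.
from concurrent.futures import ThreadPoolExecutor

DEFAULT_POOL_SIZE = 1024
_pools = {}

def get_thread_pool(lock_name, size=None):
    """Return the cached pool for lock_name, creating it on first use."""
    if size is None:
        size = DEFAULT_POOL_SIZE
    if lock_name not in _pools:
        _pools[lock_name] = ThreadPoolExecutor(max_workers=size)
    return _pools[lock_name]
```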
3737 from glance.i18n import _LE, _LI
3838 from glance import image_cache
3939 from glance import notifier
40 import glance.registry.client.v1.api as registry
4140
4241 LOG = logging.getLogger(__name__)
4342
105104 LOG.debug("User not permitted to perform '%s' action", action)
106105 raise webob.exc.HTTPForbidden(explanation=e.msg, request=req)
107106
108 def _get_v1_image_metadata(self, request, image_id):
109 """
110 Retrieves image metadata using registry for v1 api and creates
111 dictionary-like mash-up of image core and custom properties.
112 """
113 try:
114 image_metadata = registry.get_image_metadata(request.context,
115 image_id)
116 return utils.create_mashup_dict(image_metadata)
117 except exception.NotFound as e:
118 LOG.debug("No metadata found for image '%s'", image_id)
119 raise webob.exc.HTTPNotFound(explanation=e.msg, request=request)
120
121107 def _get_v2_image_metadata(self, request, image_id):
122108 """
123109 Retrieves image and for v2 api and creates adapter like object
182168 return method(request, image_id, image_iterator, image_metadata)
183169 except exception.ImageNotFound:
184170 msg = _LE("Image cache contained image file for image '%s', "
185 "however the registry did not contain metadata for "
171 "however the database did not contain metadata for "
186172 "that image!") % image_id
187173 LOG.error(msg)
188174 self.cache.delete_cached_image(image_id)
7171
7272 def _get_allowed_versions(self):
7373 allowed_versions = {}
74 if CONF.enable_v2_api:
75 allowed_versions['v2'] = 2
76 allowed_versions['v2.0'] = 2
77 allowed_versions['v2.1'] = 2
78 allowed_versions['v2.2'] = 2
79 allowed_versions['v2.3'] = 2
80 allowed_versions['v2.4'] = 2
81 allowed_versions['v2.5'] = 2
82 allowed_versions['v2.6'] = 2
83 allowed_versions['v2.7'] = 2
84 allowed_versions['v2.9'] = 2
85 if CONF.enabled_backends:
86 allowed_versions['v2.8'] = 2
74 allowed_versions['v2'] = 2
75 allowed_versions['v2.0'] = 2
76 allowed_versions['v2.1'] = 2
77 allowed_versions['v2.2'] = 2
78 allowed_versions['v2.3'] = 2
79 allowed_versions['v2.4'] = 2
80 allowed_versions['v2.5'] = 2
81 allowed_versions['v2.6'] = 2
82 allowed_versions['v2.7'] = 2
83 allowed_versions['v2.9'] = 2
84 if CONF.enabled_backends:
85 allowed_versions['v2.8'] = 2
8786 return allowed_versions
8887
8988 def _match_version_string(self, subject):
1515
1616 """Policy Engine For Glance"""
1717
18 # TODO(smcginnis) update this once six has support for collections.abc
19 # (https://github.com/benjaminp/six/pull/241) or clean up once we drop py2.7.
20 try:
21 from collections.abc import Mapping
22 except ImportError:
23 from collections import Mapping
24
18 from collections import abc
2519 import copy
2620
2721 from oslo_config import cfg
389383 task_proxy_kwargs=proxy_kwargs)
390384
391385
392 class ImageTarget(Mapping):
386 class ImageTarget(abc.Mapping):
393387 SENTINEL = object()
394388
395389 def __init__(self, target):
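Subclassing `collections.abc.Mapping`, as `ImageTarget` now does, requires only `__getitem__`, `__iter__` and `__len__`; the mixin then derives `get()`, `keys()`, `items()`, `__contains__` and friends. A minimal sketch (hypothetical class, not Glance's `ImageTarget`):

```python
# Read-only mapping view over an object's attributes, showing the
# three abstract methods collections.abc.Mapping requires.
from collections import abc

class AttrMapping(abc.Mapping):
    def __init__(self, target, names):
        self._target = target
        self._names = tuple(names)

    def __getitem__(self, key):
        if key not in self._names:
            raise KeyError(key)
        return getattr(self._target, key)

    def __iter__(self):
        return iter(self._names)

    def __len__(self):
        return len(self._names)
```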
0 # Copyright 2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 SUPPORTED_FILTERS = ['name', 'status', 'container_format', 'disk_format',
16 'min_ram', 'min_disk', 'size_min', 'size_max',
17 'is_public', 'changes-since', 'protected']
18
19 SUPPORTED_PARAMS = ('limit', 'marker', 'sort_key', 'sort_dir')
20
21 # Metadata which only an admin can change once the image is active
22 ACTIVE_IMMUTABLE = ('size', 'checksum')
23
24 # Metadata which cannot be changed (irrespective of the current image state)
25 IMMUTABLE = ('status', 'id')
+0 -96 glance/api/v1/controller.py (file deleted)
0 # Copyright 2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 import glance_store as store
16 from oslo_log import log as logging
17 import webob.exc
18
19 from glance.common import exception
20 from glance.i18n import _
21 import glance.registry.client.v1.api as registry
22
23
24 LOG = logging.getLogger(__name__)
25
26
27 class BaseController(object):
28 def get_image_meta_or_404(self, request, image_id):
29 """
30 Grabs the image metadata for an image with a supplied
31 identifier or raises an HTTPNotFound (404) response
32
33 :param request: The WSGI/Webob Request object
34 :param image_id: The opaque image identifier
35
36 :raises HTTPNotFound: if image does not exist
37 """
38 context = request.context
39 try:
40 return registry.get_image_metadata(context, image_id)
41 except exception.NotFound:
42 LOG.debug("Image with identifier %s not found", image_id)
43 msg = _("Image with identifier %s not found") % image_id
44 raise webob.exc.HTTPNotFound(
45 msg, request=request, content_type='text/plain')
46 except exception.Forbidden:
47 LOG.debug("Forbidden image access")
48 raise webob.exc.HTTPForbidden(_("Forbidden image access"),
49 request=request,
50 content_type='text/plain')
51
52 def get_active_image_meta_or_error(self, request, image_id):
53 """
54 Same as get_image_meta_or_404 except that it will raise a 403 if the
55 image is deactivated or 404 if the image is otherwise not 'active'.
56 """
57 image = self.get_image_meta_or_404(request, image_id)
58 if image['status'] == 'deactivated':
59 LOG.debug("Image %s is deactivated", image_id)
60 msg = _("Image %s is deactivated") % image_id
61 raise webob.exc.HTTPForbidden(
62 msg, request=request, content_type='text/plain')
63 if image['status'] != 'active':
64 LOG.debug("Image %s is not active", image_id)
65 msg = _("Image %s is not active") % image_id
66 raise webob.exc.HTTPNotFound(
67 msg, request=request, content_type='text/plain')
68 return image
69
70 def update_store_acls(self, req, image_id, location_uri, public=False):
71 if location_uri:
72 try:
73 read_tenants = []
74 write_tenants = []
75 members = registry.get_image_members(req.context, image_id)
76 if members:
77 for member in members:
78 if member['can_share']:
79 write_tenants.append(member['member_id'])
80 else:
81 read_tenants.append(member['member_id'])
82 store.set_acls(location_uri, public=public,
83 read_tenants=read_tenants,
84 write_tenants=write_tenants,
85 context=req.context)
86 except store.UnknownScheme:
87 msg = _("Store for image_id not found: %s") % image_id
88 raise webob.exc.HTTPBadRequest(explanation=msg,
89 request=req,
90 content_type='text/plain')
91 except store.NotFound:
92 msg = _("Data for image_id not found: %s") % image_id
93 raise webob.exc.HTTPNotFound(explanation=msg,
94 request=req,
95 content_type='text/plain')
+0 -40 glance/api/v1/filters.py (file deleted)
0 # Copyright 2012, Piston Cloud Computing, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 def validate(filter, value):
17 return FILTER_FUNCTIONS.get(filter, lambda v: True)(value)
18
19
20 def validate_int_in_range(min=0, max=None):
21 def _validator(v):
22 try:
23 if max is None:
24 return min <= int(v)
25 return min <= int(v) <= max
26 except ValueError:
27 return False
28 return _validator
29
30
31 def validate_boolean(v):
32 return v.lower() in ('none', 'true', 'false', '1', '0')
33
34
35 FILTER_FUNCTIONS = {'size_max': validate_int_in_range(), # build validator
36 'size_min': validate_int_in_range(), # build validator
37 'min_ram': validate_int_in_range(), # build validator
38 'protected': validate_boolean,
39 'is_public': validate_boolean, }
0 # Copyright 2011 OpenStack Foundation
0 # Copyright 2020 Red Hat, Inc.
11 # All Rights Reserved.
22 #
33 # Licensed under the Apache License, Version 2.0 (the "License"); you may
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15
1615 from glance.common import wsgi
1716
1817
18 def init(mapper):
19 reject_resource = wsgi.Resource(wsgi.RejectMethodController())
20 mapper.connect("/v1", controller=reject_resource,
21 action="reject")
22
23
1924 class API(wsgi.Router):
20
21 """WSGI router for Glance v1 API requests."""
25 """WSGI entry point for satisfy grenade."""
2226
2327 def __init__(self, mapper):
24 reject_method_resource = wsgi.Resource(wsgi.RejectMethodController())
28 mapper = mapper or wsgi.APIMapper()
2529
26 mapper.connect("/",
27 controller=reject_method_resource,
28 action="reject")
30 init(mapper)
2931
3032 super(API, self).__init__(mapper)
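The reworked router above keeps a `/v1` endpoint only so that every request to it is rejected. A bare-WSGI sketch of that behaviour (the exact status code returned by Glance's `RejectMethodController` is an assumption here):

```python
# Minimal WSGI app sketching the "reject v1" router: anything
# under /v1 gets a 405, mirroring (approximately) what
# wsgi.RejectMethodController does now that the v1 API is gone.
def reject_v1_app(environ, start_response):
    if environ.get('PATH_INFO', '').startswith('/v1'):
        start_response('405 Method Not Allowed',
                       [('Content-Type', 'text/plain')])
        return [b'The v1 API has been removed']
    start_response('404 Not Found', [('Content-Type', 'text/plain')])
    return [b'']
```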
+0 -293 glance/api/v1/upload_utils.py (file deleted)
0 # Copyright 2013 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14 import glance_store as store_api
15 from oslo_config import cfg
16 from oslo_log import log as logging
17 from oslo_utils import encodeutils
18 from oslo_utils import excutils
19 import webob.exc
20
21 from glance.common import exception
22 from glance.common import store_utils
23 from glance.common import utils
24 import glance.db
25 from glance.i18n import _, _LE, _LI
26 import glance.registry.client.v1.api as registry
27
28
29 CONF = cfg.CONF
30 LOG = logging.getLogger(__name__)
31
32
33 def initiate_deletion(req, location_data, id):
34 """
35 Deletes image data from the location of backend store.
36
37 :param req: The WSGI/Webob Request object
38 :param location_data: Location to the image data in a data store
39 :param id: Opaque image identifier
40 """
41 store_utils.delete_image_location_from_backend(req.context,
42 id, location_data)
43
44
45 def _kill(req, image_id, from_state):
46 """
47 Marks the image status to `killed`.
48
49 :param req: The WSGI/Webob Request object
50 :param image_id: Opaque image identifier
51 :param from_state: Permitted current status for transition to 'killed'
52 """
53 # TODO(dosaboy): http://docs.openstack.org/developer/glance/statuses.html
54 # needs updating to reflect the fact that queued->killed and saving->killed
55 # are both allowed.
56 registry.update_image_metadata(req.context, image_id,
57 {'status': 'killed'},
58 from_state=from_state)
59
60
61 def safe_kill(req, image_id, from_state):
62 """
63 Mark image killed without raising exceptions if it fails.
64
65 Since _kill is meant to be called from exceptions handlers, it should
66 not raise itself, rather it should just log its error.
67
68 :param req: The WSGI/Webob Request object
69 :param image_id: Opaque image identifier
70 :param from_state: Permitted current status for transition to 'killed'
71 """
72 try:
73 _kill(req, image_id, from_state)
74 except Exception:
75 LOG.exception(_LE("Unable to kill image %(id)s: "), {'id': image_id})
76
77
78 def upload_data_to_store(req, image_meta, image_data, store, notifier):
79 """
80 Upload image data to specified store.
81
82 Upload image data to the store and cleans up on error.
83 """
84 image_id = image_meta['id']
85
86 db_api = glance.db.get_api(v1_mode=True)
87 image_size = image_meta.get('size')
88
89 try:
90 # By default image_data will be passed as CooperativeReader object.
91 # But if 'user_storage_quota' is enabled and 'remaining' is not None
92 # then it will be passed as object of LimitingReader to
93 # 'store_add_to_backend' method.
94 image_data = utils.CooperativeReader(image_data)
95
96 remaining = glance.api.common.check_quota(
97 req.context, image_size, db_api, image_id=image_id)
98 if remaining is not None:
99 image_data = utils.LimitingReader(image_data, remaining)
100
101 (uri,
102 size,
103 checksum,
104 location_metadata) = store_api.store_add_to_backend(
105 image_meta['id'],
106 image_data,
107 image_meta['size'],
108 store,
109 context=req.context)
110
111 location_data = {'url': uri,
112 'metadata': location_metadata,
113 'status': 'active'}
114
115 try:
116 # recheck the quota in case there were simultaneous uploads that
117 # did not provide the size
118 glance.api.common.check_quota(
119 req.context, size, db_api, image_id=image_id)
120 except exception.StorageQuotaFull:
121 with excutils.save_and_reraise_exception():
122 LOG.info(_LI('Cleaning up %s after exceeding '
123 'the quota'), image_id)
124 store_utils.safe_delete_from_backend(
125 req.context, image_meta['id'], location_data)
126
127 def _kill_mismatched(image_meta, attr, actual):
128 supplied = image_meta.get(attr)
129 if supplied and supplied != actual:
130 msg = (_("Supplied %(attr)s (%(supplied)s) and "
131 "%(attr)s generated from uploaded image "
132 "(%(actual)s) did not match. Setting image "
133 "status to 'killed'.") % {'attr': attr,
134 'supplied': supplied,
135 'actual': actual})
136 LOG.error(msg)
137 safe_kill(req, image_id, 'saving')
138 initiate_deletion(req, location_data, image_id)
139 raise webob.exc.HTTPBadRequest(explanation=msg,
140 content_type="text/plain",
141 request=req)
142
143 # Verify any supplied size/checksum value matches size/checksum
144 # returned from store when adding image
145 _kill_mismatched(image_meta, 'size', size)
146 _kill_mismatched(image_meta, 'checksum', checksum)
147
148 # Update the database with the checksum returned
149 # from the backend store
150 LOG.debug("Updating image %(image_id)s data. "
151 "Checksum set to %(checksum)s, size set "
152 "to %(size)d", {'image_id': image_id,
153 'checksum': checksum,
154 'size': size})
155 update_data = {'checksum': checksum,
156 'size': size}
157 try:
158 try:
159 state = 'saving'
160 image_meta = registry.update_image_metadata(req.context,
161 image_id,
162 update_data,
163 from_state=state)
164 except exception.Duplicate:
165 image = registry.get_image_metadata(req.context, image_id)
166 if image['status'] == 'deleted':
167 raise exception.ImageNotFound()
168 else:
169 raise
170 except exception.NotAuthenticated as e:
171 # Delete image data due to possible token expiration.
172 LOG.debug("Authentication error - the token may have "
173 "expired during file upload. Deleting image data for "
174 " %s", image_id)
175 initiate_deletion(req, location_data, image_id)
176 raise webob.exc.HTTPUnauthorized(explanation=e.msg, request=req)
177 except exception.ImageNotFound:
178 msg = _("Image %s could not be found after upload. The image may"
179 " have been deleted during the upload.") % image_id
180 LOG.info(msg)
181
182 # NOTE(jculp): we need to clean up the datastore if an image
183 # resource is deleted while the image data is being uploaded
184 #
185 # We get "location_data" from above call to store.add(), any
186 # exceptions that occur there handle this same issue internally,
187 # Since this is store-agnostic, should apply to all stores.
188 initiate_deletion(req, location_data, image_id)
189 raise webob.exc.HTTPPreconditionFailed(explanation=msg,
190 request=req,
191 content_type='text/plain')
192
193 except store_api.StoreAddDisabled:
194 msg = _("Error in store configuration. Adding images to store "
195 "is disabled.")
196 LOG.exception(msg)
197 safe_kill(req, image_id, 'saving')
198 notifier.error('image.upload', msg)
199 raise webob.exc.HTTPGone(explanation=msg, request=req,
200 content_type='text/plain')
201
202 except (store_api.Duplicate, exception.Duplicate) as e:
203 msg = (_("Attempt to upload duplicate image: %s") %
204 encodeutils.exception_to_unicode(e))
205 LOG.warn(msg)
206 # NOTE(dosaboy): do not delete the image since it is likely that this
207 # conflict is a result of another concurrent upload that will be
208 # successful.
209 notifier.error('image.upload', msg)
210 raise webob.exc.HTTPConflict(explanation=msg,
211 request=req,
212 content_type="text/plain")
213
214 except exception.Forbidden as e:
215 msg = (_("Forbidden upload attempt: %s") %
216 encodeutils.exception_to_unicode(e))
217 LOG.warn(msg)
218 safe_kill(req, image_id, 'saving')
219 notifier.error('image.upload', msg)
220 raise webob.exc.HTTPForbidden(explanation=msg,
221 request=req,
222 content_type="text/plain")
223
224 except store_api.StorageFull as e:
225 msg = (_("Image storage media is full: %s") %
226 encodeutils.exception_to_unicode(e))
227 LOG.error(msg)
228 safe_kill(req, image_id, 'saving')
229 notifier.error('image.upload', msg)
230 raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
231 request=req,
232 content_type='text/plain')
233
234 except store_api.StorageWriteDenied as e:
235 msg = (_("Insufficient permissions on image storage media: %s") %
236 encodeutils.exception_to_unicode(e))
237 LOG.error(msg)
238 safe_kill(req, image_id, 'saving')
239 notifier.error('image.upload', msg)
240 raise webob.exc.HTTPServiceUnavailable(explanation=msg,
241 request=req,
242 content_type='text/plain')
243
244 except exception.ImageSizeLimitExceeded:
245 msg = (_("Denying attempt to upload image larger than %d bytes.")
246 % CONF.image_size_cap)
247 LOG.warn(msg)
248 safe_kill(req, image_id, 'saving')
249 notifier.error('image.upload', msg)
250 raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
251 request=req,
252 content_type='text/plain')
253
254 except exception.StorageQuotaFull as e:
255 msg = (_("Denying attempt to upload image because it exceeds the "
256 "quota: %s") % encodeutils.exception_to_unicode(e))
257 LOG.warn(msg)
258 safe_kill(req, image_id, 'saving')
259 notifier.error('image.upload', msg)
260 raise webob.exc.HTTPRequestEntityTooLarge(explanation=msg,
261 request=req,
262 content_type='text/plain')
263
264 except webob.exc.HTTPError:
265 # NOTE(bcwaldon): Ideally, we would just call 'raise' here,
266 # but something in the above function calls is affecting the
267 # exception context and we must explicitly re-raise the
268 # caught exception.
269 msg = _LE("Received HTTP error while uploading image %s") % image_id
270 notifier.error('image.upload', msg)
271 with excutils.save_and_reraise_exception():
272 LOG.exception(msg)
273 safe_kill(req, image_id, 'saving')
274
275 except (ValueError, IOError):
276 msg = _("Client disconnected before sending all data to backend")
277 LOG.warn(msg)
278 safe_kill(req, image_id, 'saving')
279 raise webob.exc.HTTPBadRequest(explanation=msg,
280 content_type="text/plain",
281 request=req)
282
283 except Exception:
284 msg = _("Failed to upload image %s") % image_id
285 LOG.exception(msg)
286 safe_kill(req, image_id, 'saving')
287 notifier.error('image.upload', msg)
288 raise webob.exc.HTTPInternalServerError(explanation=msg,
289 request=req,
290 content_type='text/plain')
291
292 return image_meta, location_data
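The deleted `upload_data_to_store` above checked quota twice: once with the declared size (which may be absent) and again with the actual size reported by the store, cleaning up on failure. That double-check pattern, with hypothetical helper names, looks like:

```python
# Sketch of the recheck-quota-after-upload pattern from the
# removed v1 upload path: a client may omit the size, so the
# quota must be re-verified once the store reports the real size,
# and the just-written data deleted if it now exceeds quota.
class QuotaFull(Exception):
    """Raised when an upload would exceed the storage quota."""

def check_quota(used, limit, size):
    if size is not None and used + size > limit:
        raise QuotaFull()

def upload(data, declared_size, used, limit, store_add, delete):
    check_quota(used, limit, declared_size)   # None passes trivially
    location, actual_size = store_add(data)
    try:
        check_quota(used, limit, actual_size)  # recheck with real size
    except QuotaFull:
        delete(location)                       # clean up, then re-raise
        raise
    return location, actual_size
```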
140140 image = image_repo.get(image_id)
141141 image.status = 'saving'
142142 try:
143 if CONF.data_api == 'glance.db.registry.api':
144 # create a trust if backend is registry
145 try:
146 # request user plugin for current token
147 user_plugin = req.environ.get('keystone.token_auth')
148 roles = []
149 # use roles from request environment because they
150 # are not transformed to lower-case unlike cxt.roles
151 for role_info in req.environ.get(
152 'keystone.token_info')['token']['roles']:
153 roles.append(role_info['name'])
154 refresher = trust_auth.TokenRefresher(user_plugin,
155 cxt.project_id,
156 roles)
157 except Exception as e:
158 LOG.info(_LI("Unable to create trust: %s "
159 "Use the existing user token."),
160 encodeutils.exception_to_unicode(e))
143 # create a trust if backend is registry
144 try:
145 # request user plugin for current token
146 user_plugin = req.environ.get('keystone.token_auth')
147 roles = []
148 # use roles from request environment because they
149 # are not transformed to lower-case unlike cxt.roles
150 for role_info in req.environ.get(
151 'keystone.token_info')['token']['roles']:
152 roles.append(role_info['name'])
153 refresher = trust_auth.TokenRefresher(user_plugin,
154 cxt.project_id,
155 roles)
156 except Exception as e:
157 LOG.info(_LI("Unable to create trust: %s "
158 "Use the existing user token."),
159 encodeutils.exception_to_unicode(e))
161160
162161 image_repo.save(image, from_state='queued')
163162 image.set_data(data, size, backend=backend)
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15 import datetime
1516 import hashlib
1617 import os
1718 import re
2425 from oslo_log import log as logging
2526 from oslo_serialization import jsonutils as json
2627 from oslo_utils import encodeutils
28 from oslo_utils import timeutils as oslo_timeutils
2729 import six
2830 from six.moves import http_client as http
2931 import six.moves.urllib.parse as urlparse
9799
98100 return image
99101
102 def _bust_import_lock(self, admin_image_repo, admin_task_repo,
103 image, task, task_id):
104 if task:
105 # FIXME(danms): It would be good if we had a 'canceled' or
106 # 'aborted' status here.
107 try:
108 task.fail('Expired lock preempted')
109 admin_task_repo.save(task)
110 except exception.InvalidTaskStatusTransition:
111 # NOTE(danms): This may happen if we try to fail a
112 # task that is in a terminal state, but where the lock
113 # was never dropped from the image. We will log the
114 # image, task, and status below so we can just ignore
115 # here.
116 pass
117
118 try:
119 admin_image_repo.delete_property_atomic(
120 image, 'os_glance_import_task', task_id)
121 except exception.NotFound:
122 LOG.warning('Image %(image)s has stale import task %(task)s '
123 'but we lost the race to remove it.',
124 {'image': image.image_id,
125 'task': task_id})
126 # We probably lost the race to expire the old lock, but
127 # act like it is not yet expired to avoid a retry loop.
128 raise exception.Conflict('Image has active task')
129
130 LOG.warning('Image %(image)s has stale import task %(task)s '
131 'in status %(status)s from %(owner)s; removed lock '
132 'because it had expired.',
133 {'image': image.image_id,
134 'task': task_id,
135 'status': task and task.status or 'missing',
136 'owner': task and task.owner or 'unknown owner'})
137
138 def _enforce_import_lock(self, req, image):
139 admin_context = req.context.elevated()
140 admin_image_repo = self.gateway.get_repo(admin_context)
141 admin_task_repo = self.gateway.get_task_repo(admin_context)
142 other_task = image.extra_properties['os_glance_import_task']
143
144 expiry = datetime.timedelta(minutes=60)
145 bustable_states = ('pending', 'processing', 'success', 'failure')
146
147 try:
148 task = admin_task_repo.get(other_task)
149 except exception.NotFound:
150 # NOTE(danms): This could happen if we failed to do an import
151 # a long time ago, and the task record has since been culled from
152 # the database, but the task id is still in the lock field.
153 LOG.warning('Image %(image)s has non-existent import '
154 'task %(task)s; considering it stale',
155 {'image': image.image_id,
156 'task': other_task})
157 task = None
158 age = 0
159 else:
160 age = oslo_timeutils.utcnow() - task.updated_at
161 if task.status == 'pending':
162 # NOTE(danms): Tasks in pending state could be queued,
163 # blocked or otherwise right-about-to-get-going, so we
164 # double the expiry time for safety. We will report
165 # time remaining below, so this is not too obscure.
166 expiry *= 2
167
168 if not task or (task.status in bustable_states and age >= expiry):
169 self._bust_import_lock(admin_image_repo, admin_task_repo,
170 image, task, other_task)
171 return task
172
173 if task.status in bustable_states:
174 LOG.warning('Image %(image)s has active import task %(task)s in '
175 'status %(status)s; lock remains valid for %(expire)i '
176 'more seconds',
177 {'image': image.image_id,
178 'task': task.task_id,
179 'status': task.status,
180 'expire': (expiry - age).total_seconds()})
181 else:
182 LOG.debug('Image %(image)s has import task %(task)s in status '
183 '%(status)s and does not qualify for expiry.',
184 {'image': image.image_id,
185 'task': task.task_id,
186 'status': task.status})
187 raise exception.Conflict('Image has active task')
188
189 def _cleanup_stale_task_progress(self, image_repo, image, task):
190 """Cleanup stale in-progress information from a previous task.
191
192 If we stole the lock from another task, we should try to clean up
193 the in-progress status information from that task while we have
194 the lock.
195 """
196 stores = task.task_input.get('backend', [])
197 keys = ['os_glance_importing_to_stores', 'os_glance_failed_import']
198 changed = set()
199 for store in stores:
200 for key in keys:
201 values = image.extra_properties.get(key, '').split(',')
202 if store in values:
203 values.remove(store)
204 changed.add(key)
205 image.extra_properties[key] = ','.join(values)
206 if changed:
207 image_repo.save(image)
208 LOG.debug('Image %(image)s had stale import progress info '
209 '%(keys)s from task %(task)s which was cleaned up',
210 {'image': image.image_id, 'task': task.task_id,
211 'keys': ','.join(changed)})
212
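The expiry rules enforced above (a base 60-minute window measured from the task's `updated_at`, doubled for tasks still in `pending`, and busting only tasks in one of the bustable states) can be sketched independently of Glance. This is a minimal sketch with a hypothetical `lock_is_stale` helper, not the actual Glance code:

```python
import datetime

BUSTABLE_STATES = ('pending', 'processing', 'success', 'failure')


def lock_is_stale(task_status, task_updated_at, now=None):
    """Decide whether an import-lock-holding task has expired.

    Mirrors the logic in _enforce_import_lock: a missing task record is
    always considered stale; 'pending' tasks get a doubled expiry window
    because they may legitimately be queued behind other work.
    """
    if task_status is None:
        return True  # task record was culled; consider the lock stale
    expiry = datetime.timedelta(minutes=60)
    if task_status == 'pending':
        expiry *= 2
    now = now or datetime.datetime.utcnow()
    age = now - task_updated_at
    return task_status in BUSTABLE_STATES and age >= expiry
```

A task in a terminal state (`success`, `failure`) that never dropped the lock is treated the same as a stuck `processing` task: once the window elapses, the lock may be busted.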
100213 @utils.mutating
101214 def import_image(self, req, image_id, body):
102215 image_repo = self.gateway.get_repo(req.context)
103216 task_factory = self.gateway.get_task_factory(req.context)
104 executor_factory = self.gateway.get_task_executor_factory(req.context)
105217 task_repo = self.gateway.get_task_repo(req.context)
106218 import_method = body.get('method').get('name')
107219 uri = body.get('method').get('uri')
108220 all_stores_must_succeed = body.get('all_stores_must_succeed', True)
221 stole_lock_from_task = None
109222
110223 try:
111224 image = image_repo.get(image_id)
135248 raise webob.exc.HTTPForbidden(
136249 explanation=_("Operation not permitted"))
137250
251 # NOTE(danms): For copy-image only, we check policy to decide
252 # if the user should be able to do this. Otherwise, we forbid
253 # the import if the user is not the owner.
254 if import_method == 'copy-image':
255 self.policy.enforce(req.context, 'copy_image',
256 dict(policy.ImageTarget(image)))
257 elif not authorization.is_image_mutable(req.context, image):
258 raise webob.exc.HTTPForbidden(
259 explanation=_("Operation not permitted"))
260
261 if 'os_glance_import_task' in image.extra_properties:
262 # NOTE(danms): This will raise exception.Conflict if the
263 # lock is present and valid, or return if absent or invalid.
264 stole_lock_from_task = self._enforce_import_lock(req, image)
265
138266 stores = [None]
139267 if CONF.enabled_backends:
140268 try:
174302 raise webob.exc.HTTPConflict(explanation=e.msg)
175303 except exception.NotFound as e:
176304 raise webob.exc.HTTPNotFound(explanation=e.msg)
305 except exception.Forbidden as e:
306 raise webob.exc.HTTPForbidden(explanation=e.msg)
177307
178308 if (not all_stores_must_succeed) and (not CONF.enabled_backends):
179309 msg = (_("All_stores_must_succeed can only be set with "
183313 task_input = {'image_id': image_id,
184314 'import_req': body,
185315 'backend': stores}
316
317 if import_method == 'copy-image':
318 # If this is a copy-image import and we passed the policy check,
319 # grab an admin context for the task so it can manipulate metadata
320 # as admin.
321 admin_context = req.context.elevated()
322 else:
323 admin_context = None
324
325 executor_factory = self.gateway.get_task_executor_factory(
326 req.context, admin_context=admin_context)
186327
187328 if (import_method == 'web-download' and
188329 not utils.validate_import_uri(uri)):
194335 import_task = task_factory.new_task(task_type='api_image_import',
195336 owner=req.context.owner,
196337 task_input=task_input)
338
339 # NOTE(danms): Try to grab the lock for this task
340 try:
341 image_repo.set_property_atomic(image,
342 'os_glance_import_task',
343 import_task.task_id)
344 except exception.Duplicate:
345 msg = (_("New operation on image '%s' is not permitted as "
346 "prior operation is still in progress") % image_id)
347 raise exception.Conflict(msg)
348
349 # NOTE(danms): We now have the import lock on this image. If we
350 # busted the lock above and have a reference to that task, try
351 # to clean up the import status information left over from that
352 # execution.
353 if stole_lock_from_task:
354 self._cleanup_stale_task_progress(image_repo, image,
355 stole_lock_from_task)
356
197357 task_repo.add(import_task)
198358 task_executor = executor_factory.new_task_executor(req.context)
199 pool = common.get_thread_pool("tasks_eventlet_pool")
200 pool.spawn_n(import_task.run, task_executor)
359 pool = common.get_thread_pool("tasks_pool")
360 pool.spawn(import_task.run, task_executor)
201361 except exception.Forbidden as e:
202362 LOG.debug("User not permitted to create image import task.")
203363 raise webob.exc.HTTPForbidden(explanation=e.msg)
714874 'size', 'virtual_size', 'direct_url', 'self',
715875 'file', 'schema', 'id', 'os_hash_algo',
716876 'os_hash_value')
717 _reserved_properties = ('location', 'deleted', 'deleted_at')
877 _reserved_properties = ('location', 'deleted', 'deleted_at',
878 'os_glance_import_task')
718879 _base_properties = ('checksum', 'created_at', 'container_format',
719880 'disk_format', 'id', 'min_disk', 'min_ram', 'name',
720881 'size', 'virtual_size', 'status', 'tags', 'owner',
7777 task_input=task['input'])
7878 task_repo.add(new_task)
7979 task_executor = executor_factory.new_task_executor(req.context)
80 pool = common.get_thread_pool("tasks_eventlet_pool")
81 pool.spawn_n(new_task.run, task_executor)
80 pool = common.get_thread_pool("tasks_pool")
81 pool.spawn(new_task.run, task_executor)
8282 except exception.Forbidden as e:
8383 msg = (_LW("Forbidden to create task. Reason: %(reason)s")
8484 % {'reason': encodeutils.exception_to_unicode(e)})
7575 }
7676
7777 version_objs = []
78 if CONF.enable_v2_api:
79 if CONF.enabled_backends:
80 version_objs.extend([
81 build_version_object(2.10, 'v2', 'CURRENT'),
82 build_version_object(2.9, 'v2', 'SUPPORTED'),
83 build_version_object(2.8, 'v2', 'SUPPORTED')
84 ])
85 else:
86 version_objs.extend([
87 build_version_object(2.9, 'v2', 'CURRENT'),
88 ])
78 if CONF.enabled_backends:
8979 version_objs.extend([
90 build_version_object(2.7, 'v2', 'SUPPORTED'),
91 build_version_object(2.6, 'v2', 'SUPPORTED'),
92 build_version_object(2.5, 'v2', 'SUPPORTED'),
93 build_version_object(2.4, 'v2', 'SUPPORTED'),
94 build_version_object(2.3, 'v2', 'SUPPORTED'),
95 build_version_object(2.2, 'v2', 'SUPPORTED'),
96 build_version_object(2.1, 'v2', 'SUPPORTED'),
97 build_version_object(2.0, 'v2', 'SUPPORTED'),
80 build_version_object(2.10, 'v2', 'CURRENT'),
81 build_version_object(2.9, 'v2', 'SUPPORTED'),
82 build_version_object(2.8, 'v2', 'SUPPORTED')
9883 ])
84 else:
85 version_objs.extend([
86 build_version_object(2.9, 'v2', 'CURRENT'),
87 ])
88 version_objs.extend([
89 build_version_object(2.7, 'v2', 'SUPPORTED'),
90 build_version_object(2.6, 'v2', 'SUPPORTED'),
91 build_version_object(2.5, 'v2', 'SUPPORTED'),
92 build_version_object(2.4, 'v2', 'SUPPORTED'),
93 build_version_object(2.3, 'v2', 'SUPPORTED'),
94 build_version_object(2.2, 'v2', 'SUPPORTED'),
95 build_version_object(2.1, 'v2', 'SUPPORTED'),
96 build_version_object(2.0, 'v2', 'SUPPORTED'),
97 ])
9998
10099 status = explicit and http_client.OK or http_client.MULTIPLE_CHOICES
101100 response = webob.Response(request=req,
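The refactor above drops the `enable_v2_api` guard and always advertises 2.0 through 2.7 as SUPPORTED, varying only the top of the list on whether multiple stores are configured. A sketch of the resulting negotiation table, with `build_version_object` reduced to a hypothetical stand-in that keeps only the fields needed here:

```python
def build_version_object(version, path, status):
    # Hypothetical stand-in for glance.api.versions.build_version_object;
    # only the fields needed for this illustration are kept.
    return {'version': version, 'path': path, 'status': status}


def build_version_list(enabled_backends):
    # Mirrors the refactor above: 2.10 is CURRENT only when multiple
    # stores (enabled_backends) are configured, otherwise 2.9 is CURRENT,
    # and 2.0-2.7 are always advertised as SUPPORTED.
    version_objs = []
    if enabled_backends:
        version_objs.extend([
            build_version_object(2.10, 'v2', 'CURRENT'),
            build_version_object(2.9, 'v2', 'SUPPORTED'),
            build_version_object(2.8, 'v2', 'SUPPORTED'),
        ])
    else:
        version_objs.append(build_version_object(2.9, 'v2', 'CURRENT'))
    version_objs.extend(
        build_version_object(v, 'v2', 'SUPPORTED')
        for v in (2.7, 2.6, 2.5, 2.4, 2.3, 2.2, 2.1, 2.0))
    return version_objs
```

Note that 2.8 (the multi-store import API) is only advertised at all when `enabled_backends` is set.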
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15 import futurist
1516 from oslo_log import log as logging
1617
1718 from glance.i18n import _LE
4546 glance.domain.Image object into ORM semantics
4647 image_factory: glance.domain.ImageFactory object to be used for
4748 creating new images for certain types of tasks viz. import, cloning
49 admin_repo: glance.db.ImageRepo object which acts as a translator for
50 glance.domain.Image object into ORM semantics, but with an admin
51 context (optional)
4852 """
4953
50 def __init__(self, context, task_repo, image_repo, image_factory):
54 def __init__(self, context, task_repo, image_repo, image_factory,
55 admin_repo=None):
5156 self.context = context
5257 self.task_repo = task_repo
5358 self.image_repo = image_repo
5459 self.image_factory = image_factory
60 self.admin_repo = admin_repo
5561
5662 def begin_processing(self, task_id):
5763 task = self.task_repo.get(task_id)
6975 LOG.error(msg)
7076 task.fail(_LE("Internal error occurred while trying to process task."))
7177 self.task_repo.save(task)
78
79
80 class ThreadPoolModel(object):
81 """Base class for an abstract ThreadPool.
82
83 Do not instantiate this directly, use one of the concrete
84 implementations.
85 """
86
87 DEFAULTSIZE = 1
88
89 @staticmethod
90 def get_threadpool_executor_class():
91 """Returns a futurist.ThreadPoolExecutor class."""
92 pass
93
94 def __init__(self, size=None):
95 if size is None:
96 size = self.DEFAULTSIZE
97
98 threadpool_cls = self.get_threadpool_executor_class()
99 LOG.debug('Creating threadpool model %r with size %i',
100 threadpool_cls.__name__, size)
101 self.pool = threadpool_cls(size)
102
103 def spawn(self, fn, *args, **kwargs):
104 """Spawn a function with args using the thread pool."""
105 LOG.debug('Spawning with %s: %s(%s, %s)' % (
106 self.get_threadpool_executor_class().__name__,
107 fn, args, kwargs))
108 return self.pool.submit(fn, *args, **kwargs)
109
110
111 class EventletThreadPoolModel(ThreadPoolModel):
112 """A ThreadPoolModel suitable for use with evenlet/greenthreads."""
113 DEFAULTSIZE = 1024
114
115 @staticmethod
116 def get_threadpool_executor_class():
117 return futurist.GreenThreadPoolExecutor
118
119
120 class NativeThreadPoolModel(ThreadPoolModel):
121 """A ThreadPoolModel suitable for use with native threads."""
122 DEFAULTSIZE = 16
123
124 @staticmethod
125 def get_threadpool_executor_class():
126 return futurist.ThreadPoolExecutor
127
128
129 _THREADPOOL_MODEL = None
130
131
132 def set_threadpool_model(thread_type):
133 """Set the system-wide threadpool model.
134
135 This sets the type of ThreadPoolModel to use globally in the process.
136 It should be called very early in init, and only once.
137
138 :param thread_type: A string indicating the threading type in use,
139 either "eventlet" or "native"
140 :raises: RuntimeError if the model is already set or some thread_type
141 other than one of the supported ones is provided.
142 """
143 global _THREADPOOL_MODEL
144
145 if thread_type == 'native':
146 model = NativeThreadPoolModel
147 elif thread_type == 'eventlet':
148 model = EventletThreadPoolModel
149 else:
150 raise RuntimeError(
151 ('Invalid thread type %r '
152 '(must be "native" or "eventlet")') % (thread_type))
153
154 if _THREADPOOL_MODEL is model:
155 # Re-setting the same model is fine...
156 return
157
158 if _THREADPOOL_MODEL is not None:
159 # ...changing it is not.
160 raise RuntimeError('Thread model is already set')
161
162 LOG.info('Threadpool model set to %r', model.__name__)
163 _THREADPOOL_MODEL = model
164
165
166 def get_threadpool_model():
167 """Returns the system-wide threadpool model class.
168
169 This must be called after set_threadpool_model() whenever
170 some code needs to know what the threadpool implementation is.
171
172 This may only be called after set_threadpool_model() has been
173 called to set the desired threading mode. If it is called before
174 the model is set, it will raise AssertionError. This would likely
175 be the case if this got run in a test before the model was
176 initialized, or if glance modules that use threading were imported
177 and run from some other code without setting the model first.
178
179 :raises: AssertionError if the model has not yet been set.
180 """
181 global _THREADPOOL_MODEL
182 assert _THREADPOOL_MODEL
183 return _THREADPOOL_MODEL
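The module above selects a process-wide thread pool implementation exactly once at startup and refuses to change it afterwards. The same select-once pattern can be sketched with the stdlib `concurrent.futures` standing in for futurist (an assumption for portability; Glance actually uses `futurist.GreenThreadPoolExecutor` and `futurist.ThreadPoolExecutor`), and with only the native branch implemented:

```python
from concurrent import futures


class NativeThreadPoolModel:
    """Stand-in for the futurist-backed native model (default size 16)."""
    DEFAULTSIZE = 16

    def __init__(self, size=None):
        self.pool = futures.ThreadPoolExecutor(size or self.DEFAULTSIZE)

    def spawn(self, fn, *args, **kwargs):
        """Submit fn to the pool; returns a Future."""
        return self.pool.submit(fn, *args, **kwargs)


_THREADPOOL_MODEL = None


def set_threadpool_model(thread_type):
    """Set the process-wide model once; re-setting the same type is a no-op."""
    global _THREADPOOL_MODEL
    if thread_type != 'native':  # eventlet branch omitted in this sketch
        raise RuntimeError('Invalid thread type %r' % thread_type)
    if _THREADPOOL_MODEL not in (None, NativeThreadPoolModel):
        raise RuntimeError('Thread model is already set')
    _THREADPOOL_MODEL = NativeThreadPoolModel


def get_threadpool_model():
    assert _THREADPOOL_MODEL, 'set_threadpool_model() must be called first'
    return _THREADPOOL_MODEL
```

Callers then do `get_threadpool_model()(size)` to build a pool and `spawn()` work onto it, which is what `common.get_thread_pool("tasks_pool")` does on behalf of the import controller.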
4040 self.image_repo = image_repo
4141 self.image_id = image_id
4242 self.uri = uri
43 self._path = None
4344 super(_WebDownload, self).__init__(
4445 name='%s-WebDownload-%s' % (task_type, task_id))
4546
116117 {"error": encodeutils.exception_to_unicode(e),
117118 "task_id": self.task_id})
118119
119 path = self.store.add(self.image_id, data, 0)[0]
120 return path
120 self._path, bytes_written = self.store.add(self.image_id, data, 0)[0:2]
121 try:
122 content_length = int(data.headers['content-length'])
123 if bytes_written != content_length:
124 msg = (_("Task %(task_id)s failed because downloaded data "
125 "size %(data_size)i is different from expected %("
126 "expected)i") %
127 {"task_id": self.task_id, "data_size": bytes_written,
128 "expected": content_length})
129 raise exception.ImportTaskError(msg)
130 except (KeyError, ValueError):
131 pass
132 return self._path
121133
122134 def revert(self, result, **kwargs):
123135 if isinstance(result, failure.Failure):
130142 image = self.image_repo.get(self.image_id)
131143 image.status = 'queued'
132144 self.image_repo.save(image)
145
146 # NOTE(abhishekk): Deleting partial image data from staging area
147 if self._path is not None:
148 LOG.debug(('Deleting image %(image_id)s from staging '
149 'area.'), {'image_id': self.image_id})
150 try:
151 if CONF.enabled_backends:
152 store_api.delete(self._path, None)
153 else:
154 store_api.delete_from_backend(self._path)
155 except Exception:
156 LOG.exception(_LE("Error reverting web-download "
157 "task: %(task_id)s"), {
158 'task_id': self.task_id})
133159
134160
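The size check added to `execute()` above compares the bytes actually written to staging against the response's Content-Length header, while tolerating responses that omit the header or carry an unparseable value. Its shape, as a standalone sketch (`verify_size` is a hypothetical name):

```python
def verify_size(bytes_written, headers, task_id='<task>'):
    """Raise ValueError if the downloaded byte count disagrees with a
    well-formed Content-Length header; silently accept a missing or
    malformed header, as the web-download flow above does.
    """
    try:
        content_length = int(headers['content-length'])
    except (KeyError, ValueError):
        return  # header absent or malformed: nothing to verify against
    if bytes_written != content_length:
        raise ValueError(
            'Task %s failed: downloaded %i bytes, expected %i'
            % (task_id, bytes_written, content_length))
```

On failure the flow's `revert()` then uses the recorded `self._path` to delete the partial data from the staging area.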
135161 def get_flow(**kwargs):
1111 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
14 import functools
1415 import os
1516
1617 import glance_store as store_api
1920 from oslo_config import cfg
2021 from oslo_log import log as logging
2122 from oslo_utils import encodeutils
23 from oslo_utils import timeutils
24 from oslo_utils import units
2225 import six
26 import taskflow
2327 from taskflow.patterns import linear_flow as lf
2428 from taskflow import retry
2529 from taskflow import task
7983
8084 def __init__(self, message):
8185 super(_NoStoresSucceeded, self).__init__(message)
86
87
88 class ImportActionWrapper(object):
89 """Wrapper for all the image metadata operations we do during an import.
90
91 This is used to consolidate the changes we make to image metadata during
92 an import operation, and can be used with an admin-capable repo to
93 enable non-owner controlled modification of that data if desired.
94
95 Use this as a context manager to make multiple changes followed by
96 a save of the image in one operation. An _ImportActions object is
97 yielded from the context manager, which defines the available operations.
98
99 :param image_repo: The ImageRepo we should use to fetch/save the image
100 :param image-id: The ID of the image we should be altering
101 """
102
103 def __init__(self, image_repo, image_id, task_id):
104 self._image_repo = image_repo
105 self._image_id = image_id
106 self._task_id = task_id
107
108 def __enter__(self):
109 self._image = self._image_repo.get(self._image_id)
110 self._image_previous_status = self._image.status
111 self._assert_task_lock(self._image)
112
113 return _ImportActions(self._image)
114
115 def __exit__(self, type, value, traceback):
116 if type is not None:
117 # NOTE(danms): Do not save the image if we raised in context
118 return
119
120 # NOTE(danms): If we were in the middle of a long-running
121 # set_data() where someone else stole our lock, we may race
122 # with them to update image locations and erase one that
123 # someone else is working on. Checking the task lock here
124 # again is not perfect exclusion, but in lieu of actual
125 # thread-safe location updating, this at least reduces the
126 # likelihood of that happening.
127 self.assert_task_lock()
128
129 if self._image_previous_status != self._image.status:
130 LOG.debug('Image %(image_id)s status changing from '
131 '%(old_status)s to %(new_status)s',
132 {'image_id': self._image_id,
133 'old_status': self._image_previous_status,
134 'new_status': self._image.status})
135 self._image_repo.save(self._image, self._image_previous_status)
136
137 @property
138 def image_id(self):
139 return self._image_id
140
141 def drop_lock_for_task(self):
142 """Delete the import lock for our task.
143
144 This is an atomic operation and thus does not require a context
145 for the image save. Note that after calling this method, no
146 further actions will be allowed on the image.
147
148 :raises: NotFound if the image was not locked by the expected task.
149 """
150 image = self._image_repo.get(self._image_id)
151 self._image_repo.delete_property_atomic(image,
152 'os_glance_import_task',
153 self._task_id)
154
155 def _assert_task_lock(self, image):
156 task_lock = image.extra_properties.get('os_glance_import_task')
157 if task_lock != self._task_id:
158 LOG.error('Image %(image)s import task %(task)s attempted to '
159 'take action on image, but other task %(other)s holds '
160 'the lock; Aborting.',
161 {'image': self._image_id,
162 'task': self._task_id,
163 'other': task_lock})
164 raise exception.TaskAbortedError()
165
166 def assert_task_lock(self):
167 """Assert that we own the task lock on the image.
168
169 :raises: TaskAbortedError if we do not
170 """
171 image = self._image_repo.get(self._image_id)
172 self._assert_task_lock(image)
173
174
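The wrapper above batches all metadata edits made inside the `with` block and saves the image once on clean exit, deliberately skipping the save when the body raised. The shape of that context manager, reduced to its essentials (`ActionWrapper` and `FakeRepo` are hypothetical stand-ins; the lock assertions are omitted):

```python
class ActionWrapper:
    """Load on enter, yield the object, save once on clean exit."""

    def __init__(self, repo, image_id):
        self._repo = repo
        self._image_id = image_id

    def __enter__(self):
        self._image = self._repo.get(self._image_id)
        return self._image

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            return  # do not persist changes made before the failure
        self._repo.save(self._image)


class FakeRepo:
    """Minimal in-memory repo for demonstration."""

    def __init__(self):
        self.images = {'img-1': {'status': 'queued'}}
        self.saves = 0

    def get(self, image_id):
        return self.images[image_id]

    def save(self, image):
        self.saves += 1
```

When the wrapped repo is admin-capable, this gives tasks one narrow, auditable path for modifying an image they may not own, which is the point of `ImportActionWrapper`.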
175 class _ImportActions(object):
176 """Actions available for being performed on an image during import.
177
178 This defines the available actions that can be performed on an image
179 during import, which may be done with an image owned by another user.
180
181 Do not instantiate this object directly, get it from ImportActionWrapper.
182 """
183
184 IMPORTING_STORES_KEY = 'os_glance_importing_to_stores'
185 IMPORT_FAILED_KEY = 'os_glance_failed_import'
186
187 def __init__(self, image):
188 self._image = image
189
190 @property
191 def image_id(self):
192 return self._image.image_id
193
194 @property
195 def image_status(self):
196 return self._image.status
197
198 def merge_store_list(self, list_key, stores, subtract=False):
199 stores = set([store for store in stores if store])
200 existing = set(
201 self._image.extra_properties.get(list_key, '').split(','))
202
203 if subtract:
204 if stores - existing:
205 LOG.debug('Stores %(stores)s not in %(key)s for '
206 'image %(image_id)s',
207 {'stores': ','.join(sorted(stores - existing)),
208 'key': list_key,
209 'image_id': self.image_id})
210 merged_stores = existing - stores
211 else:
212 merged_stores = existing | stores
213
214 stores_list = ','.join(sorted((store for store in
215 merged_stores if store)))
216 self._image.extra_properties[list_key] = stores_list
217 LOG.debug('Image %(image_id)s %(key)s=%(stores)s',
218 {'image_id': self.image_id,
219 'key': list_key,
220 'stores': stores_list})
221
222 def add_importing_stores(self, stores):
223 """Add a list of stores to the importing list.
224
225 Add stores to os_glance_importing_to_stores
226
227 :param stores: A list of store names
228 """
229 self.merge_store_list(self.IMPORTING_STORES_KEY, stores)
230
231 def remove_importing_stores(self, stores):
232 """Remove a list of stores from the importing list.
233
234 Remove stores from os_glance_importing_to_stores
235
236 :param stores: A list of store names
237 """
238 self.merge_store_list(self.IMPORTING_STORES_KEY, stores, subtract=True)
239
240 def add_failed_stores(self, stores):
241 """Add a list of stores to the failed list.
242
243 Add stores to os_glance_failed_import
244
245 :param stores: A list of store names
246 """
247 self.merge_store_list(self.IMPORT_FAILED_KEY, stores)
248
249 def remove_failed_stores(self, stores):
250 """Remove a list of stores from the failed list.
251
252 Remove stores from os_glance_failed_import
253
254 :param stores: A list of store names
255 """
256 self.merge_store_list(self.IMPORT_FAILED_KEY, stores, subtract=True)
257
258 def set_image_data(self, uri, task_id, backend, set_active,
259 callback=None):
260 """Populate image with data on a specific backend.
261
262 This is used during an image import operation to populate the data
263 in a given store for the image. If this object wraps an admin-capable
264 image_repo, then this will be done with admin credentials on behalf
265 of a user already determined to be able to perform this operation
266 (such as a copy-image import of an existing image owned by another
267 user).
268
269 :param uri: Source URL for image data
270 :param task_id: The task responsible for this operation
271 :param backend: The backend store to target the data
272 :param set_active: Whether or not to set the image to 'active'
273 state after the operation completes
274 :param callback: A callback function with signature:
275 fn(action, chunk_bytes, total_bytes)
276 which should be called while processing the image
277 approximately every minute.
278 """
279 if callback:
280 callback = functools.partial(callback, self)
281 return image_import.set_image_data(self._image, uri, task_id,
282 backend=backend,
283 set_active=set_active,
284 callback=callback)
285
286 def set_image_status(self, status):
287 """Set the image status.
288
289 :param status: The new status of the image
290 """
291 self._image.status = status
292
293 def remove_location_for_store(self, backend):
294 """Remove a location from an image given a backend store.
295
296 Given a backend store, remove the corresponding location from the
297 image's set of locations. If the last location is removed, remove
298 the image checksum, hash information, and size.
299
300 :param backend: The backend store to remove from the image
301 """
302
303 for i, location in enumerate(self._image.locations):
304 if location.get('metadata', {}).get('store') == backend:
305 try:
306 self._image.locations.pop(i)
307 except (store_exceptions.NotFound,
308 store_exceptions.Forbidden):
309 msg = (_("Error deleting from store %(store)s when "
310 "reverting.") % {'store': backend})
311 LOG.warning(msg)
312 # NOTE(yebinama): Some store drivers don't document which
313 # exceptions they throw.
314 except Exception:
315 msg = (_("Unexpected exception when deleting from store "
316 "%(store)s.") % {'store': backend})
317 LOG.warning(msg)
318 else:
319 if len(self._image.locations) == 0:
320 self._image.checksum = None
321 self._image.os_hash_algo = None
322 self._image.os_hash_value = None
323 self._image.size = None
324 break
82325
83326
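`merge_store_list()` above maintains the comma-separated store-list properties (`os_glance_importing_to_stores`, `os_glance_failed_import`). Its merge/subtract semantics, as a standalone sketch operating on the raw property string:

```python
def merge_store_list(current, stores, subtract=False):
    """Merge (or remove) store names into a comma-separated property
    value, dropping empty entries, as _ImportActions.merge_store_list
    does on image.extra_properties.
    """
    stores = {s for s in stores if s}
    existing = set(current.split(','))
    merged = (existing - stores) if subtract else (existing | stores)
    return ','.join(sorted(s for s in merged if s))
```

Subtracting a store that is not present is tolerated (the real code only logs a debug message), so repeated removals are safe.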
84327 class _DeleteFromFS(task.Task):
124367 'fn': file_path})
125368
126369
370 class _ImageLock(task.Task):
371 def __init__(self, task_id, task_type, action_wrapper):
372 self.task_id = task_id
373 self.task_type = task_type
374 self.action_wrapper = action_wrapper
375 super(_ImageLock, self).__init__(
376 name='%s-ImageLock-%s' % (task_type, task_id))
377
378 def execute(self):
379 self.action_wrapper.assert_task_lock()
380 LOG.debug('Image %(image)s import task %(task)s lock confirmed',
381 {'image': self.action_wrapper.image_id,
382 'task': self.task_id})
383
384 def revert(self, result, **kwargs):
385 """Drop our claim on the image.
386
387 If we have failed, we need to drop our import_task lock on the image
388 so that something else can have a try. Note that we may have been
389 preempted so we should only drop *our* lock.
390 """
391 try:
392 self.action_wrapper.drop_lock_for_task()
393 except exception.NotFound:
394 LOG.warning('Image %(image)s import task %(task)s lost its '
395 'lock during execution!',
396 {'image': self.action_wrapper.image_id,
397 'task': self.task_id})
398 else:
399 LOG.debug('Image %(image)s import task %(task)s dropped '
400 'its lock after failure',
401 {'image': self.action_wrapper.image_id,
402 'task': self.task_id})
403
404
127405 class _VerifyStaging(task.Task):
128406
129407 # NOTE(jokke): This could be also for example "staging_path" but to
200478
201479 class _ImportToStore(task.Task):
202480
203 def __init__(self, task_id, task_type, image_repo, uri, image_id, backend,
204 all_stores_must_succeed, set_active):
481 def __init__(self, task_id, task_type, task_repo, action_wrapper, uri,
482 backend, all_stores_must_succeed, set_active):
205483 self.task_id = task_id
206484 self.task_type = task_type
207 self.image_repo = image_repo
485 self.task_repo = task_repo
486 self.action_wrapper = action_wrapper
208487 self.uri = uri
209 self.image_id = image_id
210488 self.backend = backend
211489 self.all_stores_must_succeed = all_stores_must_succeed
212490 self.set_active = set_active
491 self.last_status = 0
213492 super(_ImportToStore, self).__init__(
214493 name='%s-ImportToStore-%s' % (task_type, task_id))
215494
216495 def execute(self, file_path=None):
217496 """Bringing the imported image to back end store
218497
219 :param image_id: Glance Image ID
220498 :param file_path: path to the image file
221499 """
222500 # NOTE(flaper87): Let's dance... and fall
259537 # NOTE(jokke): The different options here are kind of pointless as we
260538 # will need the file path anyways for our delete workflow for now.
261539 # For future proofing keeping this as is.
262 image = self.image_repo.get(self.image_id)
263 if image.status == "deleted":
540
541 with self.action_wrapper as action:
542 self._execute(action, file_path)
543
544 def _execute(self, action, file_path):
545 self.last_status = timeutils.now()
546
547 if action.image_status == "deleted":
264548 raise exception.ImportTaskError("Image has been deleted, aborting"
265549 " import.")
266550 try:
267 image_import.set_image_data(image, file_path or self.uri,
268 self.task_id, backend=self.backend,
269 set_active=self.set_active)
551 action.set_image_data(file_path or self.uri,
552 self.task_id, backend=self.backend,
553 set_active=self.set_active,
554 callback=self._status_callback)
270555 # NOTE(yebinama): set_image_data catches Exception and raises from
271556 # them. Can't be more specific on exceptions caught.
272557 except Exception:
277562 {'task_id': self.task_id, 'task_type': self.task_type})
278563 LOG.warning(msg)
279564 if self.backend is not None:
280 failed_import = image.extra_properties.get(
281 'os_glance_failed_import', '').split(',')
282 failed_import.append(self.backend)
283 image.extra_properties['os_glance_failed_import'] = ','.join(
284 failed_import).lstrip(',')
565 action.add_failed_stores([self.backend])
566
285567 if self.backend is not None:
286 importing = image.extra_properties.get(
287 'os_glance_importing_to_stores', '').split(',')
288 try:
289 importing.remove(self.backend)
290 image.extra_properties[
291 'os_glance_importing_to_stores'] = ','.join(
292 importing).lstrip(',')
293 except ValueError:
294 LOG.debug("Store %s not found in property "
295 "os_glance_importing_to_stores.", self.backend)
296 # NOTE(flaper87): We need to save the image again after
297 # the locations have been set in the image.
298 self.image_repo.save(image)
568 action.remove_importing_stores([self.backend])
569
570 def _status_callback(self, action, chunk_bytes, total_bytes):
571 # NOTE(danms): Only log status every five minutes
572 if timeutils.now() - self.last_status > 300:
573 LOG.debug('Image import %(image_id)s copied %(copied)i MiB',
574 {'image_id': action.image_id,
575 'copied': total_bytes // units.Mi})
576 self.last_status = timeutils.now()
577
578 task = script_utils.get_task(self.task_repo, self.task_id)
579 if task is None:
580 LOG.error(
581 'Status callback for task %(task)s found no task object!',
582 {'task': self.task_id})
583 raise exception.TaskNotFound(self.task_id)
584 if task.status != 'processing':
585 LOG.error('Task %(task)s expected "processing" status, '
586 'but found "%(status)s"; aborting.', {'task': self.task_id, 'status': task.status})
587 raise exception.TaskAbortedError()
588
589 task.message = _('Copied %i MiB') % (total_bytes // units.Mi)
590 self.task_repo.save(task)
299591
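`_status_callback` above throttles its task-message updates to roughly one per five minutes by comparing against a `last_status` timestamp. The throttle pattern itself, sketched with a hypothetical `Throttled` wrapper (the injectable `clock` is an addition for testability):

```python
import time


class Throttled:
    """Invoke the wrapped function at most once per `interval` seconds,
    mirroring the five-minute throttle in _ImportToStore._status_callback.
    Calls inside the window are silently dropped.
    """

    def __init__(self, fn, interval=300, clock=time.monotonic):
        self._fn = fn
        self._interval = interval
        self._clock = clock
        self._last = clock()  # as in execute(): window starts at setup

    def __call__(self, *args, **kwargs):
        if self._clock() - self._last > self._interval:
            self._last = self._clock()
            return self._fn(*args, **kwargs)
```

Because the timestamp is initialized at construction, the first progress report only appears after a full interval has elapsed, matching `self.last_status = timeutils.now()` in `_execute()`.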
300592 def revert(self, result, **kwargs):
301593 """
303595
304596 :param result: taskflow result object
305597 """
306 image = self.image_repo.get(self.image_id)
307 for i, location in enumerate(image.locations):
308 if location.get('metadata', {}).get('store') == self.backend:
309 try:
310 image.locations.pop(i)
311 except (store_exceptions.NotFound,
312 store_exceptions.Forbidden):
313 msg = (_("Error deleting from store %{store}s when "
314 "reverting.") % {'store': self.backend})
315 LOG.warning(msg)
316 # NOTE(yebinama): Some store drivers doesn't document which
317 # exceptions they throw.
318 except Exception:
319 msg = (_("Unexpected exception when deleting from store"
320 "%{store}s.") % {'store': self.backend})
321 LOG.warning(msg)
322 else:
323 if len(image.locations) == 0:
324 image.checksum = None
325 image.os_hash_algo = None
326 image.os_hash_value = None
327 image.size = None
328 self.image_repo.save(image)
329 break
598 with self.action_wrapper as action:
599 action.remove_location_for_store(self.backend)
600 action.remove_importing_stores([self.backend])
601 if isinstance(result, taskflow.types.failure.Failure):
602 # We are the store that failed, so add us to the failed list
603 action.add_failed_stores([self.backend])
330604
331605
332606 class _VerifyImageState(task.Task):
333607
334 def __init__(self, task_id, task_type, image_repo, image_id,
335 import_method):
608 def __init__(self, task_id, task_type, action_wrapper, import_method):
336609 self.task_id = task_id
337610 self.task_type = task_type
338 self.image_repo = image_repo
339 self.image_id = image_id
611 self.action_wrapper = action_wrapper
340612 self.import_method = import_method
341613 super(_VerifyImageState, self).__init__(
342614 name='%s-VerifyImageState-%s' % (task_type, task_id))
346618
347619 :param image_id: Glance Image ID
348620 """
349 new_image = self.image_repo.get(self.image_id)
350 if new_image.status != 'active':
351 raise _NoStoresSucceeded(_('None of the uploads finished!'))
621 with self.action_wrapper as action:
622 if action.image_status != 'active':
623 raise _NoStoresSucceeded(_('None of the uploads finished!'))
352624
353625 def revert(self, result, **kwargs):
354626 """Set back to queued if this wasn't copy-image job."""
355 if self.import_method != 'copy-image':
356 new_image = self.image_repo.get(self.image_id)
357 new_image.status = 'queued'
358 self.image_repo.save_image(new_image)
627 with self.action_wrapper as action:
628 if self.import_method != 'copy-image':
629 action.set_image_status('queued')
359630
360631
361632 class _CompleteTask(task.Task):
362633
363 def __init__(self, task_id, task_type, task_repo, image_id):
634 def __init__(self, task_id, task_type, task_repo, action_wrapper):
364635 self.task_id = task_id
365636 self.task_type = task_type
366637 self.task_repo = task_repo
367 self.image_id = image_id
638 self.action_wrapper = action_wrapper
368639 super(_CompleteTask, self).__init__(
369640 name='%s-CompleteTask-%s' % (task_type, task_id))
370641
371 def execute(self):
372 """Finishing the task flow
373
374 :param image_id: Glance Image ID
375 """
376 task = script_utils.get_task(self.task_repo, self.task_id)
377 if task is None:
378 return
642 def _finish_task(self, task):
379643 try:
380 task.succeed({'image_id': self.image_id})
644 task.succeed({'image_id': self.action_wrapper.image_id})
381645 except Exception as e:
382646 # Note: The message string contains Error in it to indicate
383647 # in the task.message that it's an error message for the user.
395659 'e': encodeutils.exception_to_unicode(e)})
396660 finally:
397661 self.task_repo.save(task)
662
663 def _drop_lock(self):
664 try:
665 self.action_wrapper.drop_lock_for_task()
666 except exception.NotFound:
667 # NOTE(danms): This would be really bad, but there is probably
668 # not much point in reverting all the way back if we got this
669 # far. Log the carnage for forensics.
670 LOG.error('Image %(image)s import task %(task)s did not hold the '
671 'lock upon completion!',
672 {'image': self.action_wrapper.image_id,
673 'task': self.task_id})
674
675 def execute(self):
676 """Finish the task flow.
677
678 Marks the task succeeded and drops the task lock.
679 """
680 task = script_utils.get_task(self.task_repo, self.task_id)
681 if task is not None:
682 self._finish_task(task)
683 self._drop_lock()
398684
399685 LOG.info(_LI("%(task_id)s of %(task_type)s completed"),
400686 {'task_id': self.task_id, 'task_type': self.task_type})
414700 task_type = kwargs.get('task_type')
415701 task_repo = kwargs.get('task_repo')
416702 image_repo = kwargs.get('image_repo')
703 admin_repo = kwargs.get('admin_repo')
417704 image_id = kwargs.get('image_id')
418705 import_method = kwargs.get('import_req')['method']['name']
419706 uri = kwargs.get('import_req')['method'].get('uri')
425712 if not CONF.enabled_backends and not CONF.node_staging_uri.endswith('/'):
426713 separator = '/'
427714
715 # Instantiate an action wrapper with the admin repo if we got one,
716 # otherwise with the regular repo.
717 action_wrapper = ImportActionWrapper(admin_repo or image_repo, image_id,
718 task_id)
719
428720 if not uri and import_method in ['glance-direct', 'copy-image']:
429721 if CONF.enabled_backends:
430722 separator, staging_dir = store_utils.get_dir_separator()
433725 uri = separator.join((CONF.node_staging_uri, str(image_id)))
434726
435727 flow = lf.Flow(task_type, retry=retry.AlwaysRevert())
728
729 flow.add(_ImageLock(task_id, task_type, action_wrapper))
436730
437731 if import_method in ['web-download', 'copy-image']:
438732 internal_plugin = internal_plugins.get_import_plugin(**kwargs)
464758 import_task = lf.Flow(task_name)
465759 import_to_store = _ImportToStore(task_id,
466760 task_name,
467 image_repo,
761 task_repo,
762 action_wrapper,
468763 file_uri,
469 image_id,
470764 store,
471765 all_stores_must_succeed,
472766 set_active)
478772
479773 verify_task = _VerifyImageState(task_id,
480774 task_type,
481 image_repo,
482 image_id,
775 action_wrapper,
483776 import_method)
484777 flow.add(verify_task)
485778
486779 complete_task = _CompleteTask(task_id,
487780 task_type,
488781 task_repo,
489 image_id)
782 action_wrapper)
490783 flow.add(complete_task)
491784
492 image = image_repo.get(image_id)
493 from_state = image.status
494 if import_method != 'copy-image':
495 image.status = 'importing'
496
497 image.extra_properties[
498 'os_glance_importing_to_stores'] = ','.join((store for store in
499 stores if
500 store is not None))
501 image.extra_properties['os_glance_failed_import'] = ''
502 image_repo.save(image, from_state=from_state)
785 with action_wrapper as action:
786 if import_method != 'copy-image':
787 action.set_image_status('importing')
788 action.add_importing_stores(stores)
789 action.remove_failed_stores(stores)
503790
504791 return flow
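The `with action_wrapper as action:` blocks above replace direct `image_repo.get()`/`save()` pairs. A minimal, hypothetical sketch of that pattern (the class and method names below are illustrative only, not Glance's actual `ImportActionWrapper` API): fetch the image on entry, hand the block a narrow mutation interface, and persist exactly once on a clean exit.

```python
# Hypothetical sketch of the action-wrapper pattern; names are illustrative,
# not Glance's real API.
class ActionWrapper:
    def __init__(self, image_repo, image_id):
        self.image_repo = image_repo
        self.image_id = image_id

    def __enter__(self):
        # Fetch a fresh view of the image for this block of mutations
        self._image = self.image_repo.get(self.image_id)
        return _Action(self._image)

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            # Persist only if the block completed without raising
            self.image_repo.save(self._image)
        return False  # never swallow exceptions


class _Action:
    """Narrow mutation interface handed to the `with` block."""
    def __init__(self, image):
        self._image = image

    def set_image_status(self, status):
        self._image.status = status
```

Funneling every mutation through one context manager means each task step saves at most once, and a step that raises reverts without persisting its partial changes.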
6969 self.image_repo = image_repo
7070 self.image_id = image_id
7171 self.dest_path = ""
72 self.python = CONF.wsgi.python_interpreter
7273 super(_ConvertImage, self).__init__(
7374 name='%s-Convert_Image-%s' % (task_type, task_id))
7475
8788 "--output=json",
8889 src_path,
8990 prlimit=utils.QEMU_IMG_PROC_LIMITS,
91 python_exec=self.python,
9092 log_errors=putils.LOG_ALL_ERRORS,)
9193 except OSError as exc:
9294 with excutils.save_and_reraise_exception():
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15 import futurist
1615 from oslo_config import cfg
1716 from oslo_log import log as logging
1817 from oslo_utils import encodeutils
8685
8786 class TaskExecutor(glance.async_.TaskExecutor):
8887
89 def __init__(self, context, task_repo, image_repo, image_factory):
88 def __init__(self, context, task_repo, image_repo, image_factory,
89 admin_repo=None):
9090 self.context = context
9191 self.task_repo = task_repo
9292 self.image_repo = image_repo
9393 self.image_factory = image_factory
94 self.admin_repo = admin_repo
9495 super(TaskExecutor, self).__init__(context, task_repo, image_repo,
95 image_factory)
96 image_factory,
97 admin_repo=admin_repo)
9698
9799 @staticmethod
98100 def _fetch_an_executor():
100102 return None
101103 else:
102104 max_workers = CONF.taskflow_executor.max_workers
103 try:
104 return futurist.GreenThreadPoolExecutor(
105 max_workers=max_workers)
106 except RuntimeError:
107 # NOTE(harlowja): I guess eventlet isn't being made
108 # useable, well just use native threads then (or try to).
109 return futurist.ThreadPoolExecutor(max_workers=max_workers)
105 threadpool_cls = glance.async_.get_threadpool_model()
106 return threadpool_cls(max_workers).pool
110107
111108 def _get_flow(self, task):
112109 try:
121118 'image_factory': self.image_factory,
122119 'backend': task_input.get('backend')
123120 }
121
122 if self.admin_repo:
123 kwds['admin_repo'] = self.admin_repo
124124
125125 if task.type == "import":
126126 uri = script_utils.validate_location_uri(
6161 from oslo_log import log as logging
6262 import osprofiler.initializer
6363
64 import glance.async_
6465 from glance.common import config
6566 from glance.common import exception
6667 from glance.common import wsgi
106107 host=CONF.bind_host
107108 )
108109
110 # NOTE(danms): Configure system-wide threading model to use eventlet
111 glance.async_.set_threadpool_model('eventlet')
112
109113 # NOTE(abhishekk): Added initialize_prefetcher KW argument to Server
110114 # object so that prefetcher object should only be initialized in case
111115 # of API service and ignored in case of registry. Once registry is
1717 """
1818 A simple cache management utility for Glance.
1919 """
20 from __future__ import print_function
2120
2221 import argparse
2322 import collections
1717 Thanks for some of the code, Swifties ;)
1818 """
1919
20 from __future__ import print_function
21 from __future__ import with_statement
22
2320 import argparse
2421 import fcntl
2522 import os
5047
5148 ALL_COMMANDS = ['start', 'status', 'stop', 'shutdown', 'restart',
5249 'reload', 'force-reload']
53 ALL_SERVERS = ['api', 'registry', 'scrubber']
54 RELOAD_SERVERS = ['glance-api', 'glance-registry']
55 GRACEFUL_SHUTDOWN_SERVERS = ['glance-api', 'glance-registry',
56 'glance-scrubber']
50 ALL_SERVERS = ['api', 'scrubber']
51 RELOAD_SERVERS = ['glance-api']
52 GRACEFUL_SHUTDOWN_SERVERS = ['glance-api', 'glance-scrubber']
5753 MAX_DESCRIPTORS = 32768
5854 MAX_MEMORY = 2 * units.Gi # 2 GB
5955 USAGE = """%(prog)s [options] <SERVER> <COMMAND> [CONFPATH]
1919 """
2020 Glance Management Utility
2121 """
22
23 from __future__ import print_function
2422
2523 # FIXME(sirp): When we have glance-admin we can consider merging this into it
2624 # Perhaps for consistency with Nova, we would then rename glance-admin ->
546544 logging.register_options(CONF)
547545 CONF.set_default(name='use_stderr', default=True)
548546 cfg_files = cfg.find_config_files(project='glance',
549 prog='glance-registry')
550 cfg_files.extend(cfg.find_config_files(project='glance',
551 prog='glance-api'))
547 prog='glance-api')
552548 cfg_files.extend(cfg.find_config_files(project='glance',
553549 prog='glance-manage'))
554550 config.parse_args(default_config_files=cfg_files)
+0
-102
glance/cmd/registry.py
0 #!/usr/bin/env python
1
2 # Copyright 2010 United States Government as represented by the
3 # Administrator of the National Aeronautics and Space Administration.
4 # Copyright 2011 OpenStack Foundation
5 # All Rights Reserved.
6 #
7 # Licensed under the Apache License, Version 2.0 (the "License"); you may
8 # not use this file except in compliance with the License. You may obtain
9 # a copy of the License at
10 #
11 # http://www.apache.org/licenses/LICENSE-2.0
12 #
13 # Unless required by applicable law or agreed to in writing, software
14 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
15 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
16 # License for the specific language governing permissions and limitations
17 # under the License.
18
19 """
20 Reference implementation server for Glance Registry
21 """
22
23 import os
24 import sys
25
26 import eventlet
27 # NOTE(jokke): As per the eventlet commit
28 # b756447bab51046dfc6f1e0e299cc997ab343701 there's circular import happening
29 # which can be solved making sure the hubs are properly and fully imported
30 # before calling monkey_patch(). This is solved in eventlet 0.22.0 but we
31 # need to address it before that is widely used around.
32 eventlet.hubs.get_hub()
33
34 if os.name == 'nt':
35 # eventlet monkey patching the os module causes subprocess.Popen to fail
36 # on Windows when using pipes due to missing non-blocking IO support.
37 eventlet.patcher.monkey_patch(os=False)
38 else:
39 eventlet.patcher.monkey_patch()
40
41 # Monkey patch the original current_thread to use the up-to-date _active
42 # global variable. See https://bugs.launchpad.net/bugs/1863021 and
43 # https://github.com/eventlet/eventlet/issues/592
44 import __original_module_threading as orig_threading
45 import threading
46 orig_threading.current_thread.__globals__['_active'] = threading._active
47
48 from oslo_reports import guru_meditation_report as gmr
49 from oslo_utils import encodeutils
50
51 # If ../glance/__init__.py exists, add ../ to Python search path, so that
52 # it will override what happens to be installed in /usr/(local/)lib/python...
53 possible_topdir = os.path.normpath(os.path.join(os.path.abspath(sys.argv[0]),
54 os.pardir,
55 os.pardir))
56 if os.path.exists(os.path.join(possible_topdir, 'glance', '__init__.py')):
57 sys.path.insert(0, possible_topdir)
58
59 from oslo_config import cfg
60 from oslo_log import log as logging
61 import osprofiler.initializer
62
63 from glance.common import config
64 from glance.common import wsgi
65 from glance import notifier
66 from glance import version
67
68 CONF = cfg.CONF
69 CONF.import_group("profiler", "glance.common.wsgi")
70 logging.register_options(CONF)
71 wsgi.register_cli_opts()
72
73
74 def main():
75 try:
76 config.parse_args()
77 config.set_config_defaults()
78 wsgi.set_eventlet_hub()
79 logging.setup(CONF, 'glance')
80 gmr.TextGuruMeditation.setup_autorun(version)
81 notifier.set_defaults()
82
83 if CONF.profiler.enabled:
84 osprofiler.initializer.init_from_conf(
85 conf=CONF,
86 context={},
87 project="glance",
88 service="registry",
89 host=CONF.bind_host
90 )
91
92 server = wsgi.Server()
93 server.start(config.load_paste_app('glance-registry'),
94 default_port=9191)
95 server.wait()
96 except RuntimeError as e:
97 sys.exit("ERROR: %s" % encodeutils.exception_to_unicode(e))
98
99
100 if __name__ == '__main__':
101 main()
1414 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1515 # License for the specific language governing permissions and limitations
1616 # under the License.
17
18 from __future__ import print_function
1917
2018 import os
2119 import sys
5957 help=("Pass in your authentication token if you have "
6058 "one. If you use this option the same token is "
6159 "used for both the source and the target.")),
62 cfg.StrOpt('mastertoken',
63 short='M',
64 default='',
65 deprecated_since='Pike',
66 deprecated_reason='use sourcetoken instead',
67 help=("Pass in your authentication token if you have "
68 "one. This is the token used for the source system.")),
69 cfg.StrOpt('slavetoken',
70 short='S',
71 default='',
72 deprecated_since='Pike',
73 deprecated_reason='use targettoken instead',
74 help=("Pass in your authentication token if you have "
75 "one. This is the token used for the target system.")),
7660 cfg.StrOpt('command',
7761 positional=True,
7862 required=False,
8670 CONF = cfg.CONF
8771 CONF.register_cli_opts(cli_opts)
8872
89 # TODO(stevelle) Remove deprecated opts some time after Queens
9073 CONF.register_opt(
9174 cfg.StrOpt('sourcetoken',
9275 default='',
93 deprecated_opts=[cfg.DeprecatedOpt('mastertoken')],
9476 help=("Pass in your authentication token if you have "
9577 "one. This is the token used for the source.")))
9678 CONF.register_opt(
9779 cfg.StrOpt('targettoken',
9880 default='',
99 deprecated_opts=[cfg.DeprecatedOpt('slavetoken')],
10081 help=("Pass in your authentication token if you have "
10182 "one. This is the token used for the target.")))
10283
1818
1919 import logging
2020 import os
21 import sys
2122
2223 from oslo_config import cfg
2324 from oslo_middleware import cors
158159 """)),
159160 ]
160161
161 _DEPRECATE_GLANCE_V1_MSG = _('The Images (Glance) version 1 API has been '
162 'DEPRECATED in the Newton release and will be '
163 'removed on or after Pike release, following '
164 'the standard OpenStack deprecation policy. '
165 'Hence, the configuration options specific to '
166 'the Images (Glance) v1 API are hereby '
167 'deprecated and subject to removal. Operators '
168 'are advised to deploy the Images (Glance) v2 '
169 'API.')
170
171162 common_opts = [
172163 cfg.BoolOpt('allow_additional_image_properties', default=True,
173164 deprecated_for_removal=True,
276267
277268 Related options:
278269 * None
279
280 """)),
281 # TODO(abashmak): Add choices parameter to this option:
282 # choices('glance.db.sqlalchemy.api',
283 # 'glance.db.registry.api',
284 # 'glance.db.simple.api')
285 # This will require a fix to the functional tests which
286 # set this option to a test version of the registry api module:
287 # (glance.tests.functional.v2.registry_data_api), in order to
288 # bypass keystone authentication for the Registry service.
289 # All such tests are contained in:
290 # glance/tests/functional/v2/test_images.py
291 cfg.StrOpt('data_api',
292 default='glance.db.sqlalchemy.api',
293 deprecated_for_removal=True,
294 deprecated_since="Queens",
295 deprecated_reason=_("""
296 Glance registry service is deprecated for removal.
297
298 More information can be found from the spec:
299 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
300 """),
301 help=_("""
302 Python module path of data access API.
303
304 Specifies the path to the API to use for accessing the data model.
305 This option determines how the image catalog data will be accessed.
306
307 Possible values:
308 * glance.db.sqlalchemy.api
309 * glance.db.registry.api
310 * glance.db.simple.api
311
312 If this option is set to ``glance.db.sqlalchemy.api`` then the image
313 catalog data is stored in and read from the database via the
314 SQLAlchemy Core and ORM APIs.
315
316 Setting this option to ``glance.db.registry.api`` will force all
317 database access requests to be routed through the Registry service.
318 This avoids data access from the Glance API nodes for an added layer
319 of security, scalability and manageability.
320
321 NOTE: In v2 OpenStack Images API, the registry service is optional.
322 In order to use the Registry API in v2, the option
323 ``enable_v2_registry`` must be set to ``True``.
324
325 Finally, when this configuration option is set to
326 ``glance.db.simple.api``, image catalog data is stored in and read
327 from an in-memory data structure. This is primarily used for testing.
328
329 Related options:
330 * enable_v2_api
331 * enable_v2_registry
332270
333271 """)),
334272 cfg.IntOpt('limit_param_default', default=25, min=1,
510448 * None
511449
512450 """)),
513 cfg.BoolOpt('enable_v2_api',
514 default=True,
515 deprecated_reason=_('The Images (Glance) version 1 API has '
516 'been DEPRECATED in the Newton release. '
517 'It will be removed on or after Pike '
518 'release, following the standard '
519 'OpenStack deprecation policy. Once we '
520 'remove the Images (Glance) v1 API, only '
521 'the Images (Glance) v2 API can be '
522 'deployed and will be enabled by default '
523 'making this option redundant.'),
524 deprecated_since='Newton',
525 help=_("""
526 Deploy the v2 OpenStack Images API.
527
528 When this option is set to ``True``, Glance service will respond
529 to requests on registered endpoints conforming to the v2 OpenStack
530 Images API.
531
532 NOTES:
533 * If this option is disabled, then the ``enable_v2_registry``
534 option, which is enabled by default, is also recommended
535 to be disabled.
536
537 Possible values:
538 * True
539 * False
540
541 Related options:
542 * enable_v2_registry
543
544 """)),
545 cfg.BoolOpt('enable_v1_registry',
546 default=True,
547 deprecated_reason=_DEPRECATE_GLANCE_V1_MSG,
548 deprecated_since='Newton',
549 help=_("""
550 DEPRECATED FOR REMOVAL
551 """)),
552 cfg.BoolOpt('enable_v2_registry',
553 default=True,
554 deprecated_for_removal=True,
555 deprecated_since="Queens",
556 deprecated_reason=_("""
557 Glance registry service is deprecated for removal.
558
559 More information can be found from the spec:
560 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
561 """),
562 help=_("""
563 Deploy the v2 API Registry service.
564
565 When this option is set to ``True``, the Registry service
566 will be enabled in Glance for v2 API requests.
567
568 NOTES:
569 * Use of Registry is optional in v2 API, so this option
570 must only be enabled if both ``enable_v2_api`` is set to
571 ``True`` and the ``data_api`` option is set to
572 ``glance.db.registry.api``.
573
574 * If deploying only the v1 OpenStack Images API, this option,
575 which is enabled by default, should be disabled.
576
577 Possible values:
578 * True
579 * False
580
581 Related options:
582 * enable_v2_api
583 * data_api
584
585 """)),
586451 cfg.HostAddressOpt('pydev_worker_debug_host',
587452 sample_default='localhost',
588453 help=_("""
701566 * [DEFAULT]/node_staging_uri""")),
702567 ]
703568
569 wsgi_opts = [
570 cfg.IntOpt('task_pool_threads',
571 default=16,
572 min=1,
573 help=_("""
574 The number of threads (per worker process) in the pool for processing
575 asynchronous tasks. This controls how many asynchronous tasks (i.e. for
576 image interoperable import) each worker can run at a time. If this is
577 too large, you *may* have increased memory footprint per worker and/or you
578 may overwhelm other system resources such as disk or outbound network
579 bandwidth. If this is too small, image import requests will have to wait
580 until a thread becomes available to begin processing.""")),
581 cfg.StrOpt('python_interpreter',
582 default=sys.executable,
583 help=_("""
584 Path to the python interpreter to use when spawning external
585 processes. By default this is sys.executable, which should be the
586 same interpreter running Glance itself. However, in some situations
587 (e.g. uwsgi) this may not actually point to a python interpreter
588 itself.""")),
589 ]
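For reference, the options above register under the `[wsgi]` group, so a deployment would tune them in `glance-api.conf` along these lines (the interpreter path is an illustrative example; the real default is `sys.executable`):

```ini
[wsgi]
# Threads per worker for asynchronous tasks such as interoperable image import
task_pool_threads = 16
# Interpreter for spawned external processes (may need setting under uwsgi)
python_interpreter = /usr/bin/python3
```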
590
591
704592 CONF = cfg.CONF
705593 CONF.register_opts(paste_deploy_opts, group='paste_deploy')
706594 CONF.register_opts(image_format_opts, group='image_format')
707595 CONF.register_opts(task_opts, group='task')
708596 CONF.register_opts(common_opts)
597 CONF.register_opts(wsgi_opts, group='wsgi')
709598 policy.Enforcer(CONF)
710599
711600
358358 message = _("An import task exception occurred")
359359
360360
361 class TaskAbortedError(ImportTaskError):
362 message = _("Task was aborted externally")
363
364
361365 class DuplicateLocation(Duplicate):
362366 message = _("The location %(location)s already exists")
363367
0 # Copyright 2020 Red Hat, Inc
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 This is a python implementation of virtual disk format inspection routines
17 gathered from various public specification documents, as well as qemu disk
18 driver code. It attempts to store and parse the minimum amount of data
19 required, and in a streaming-friendly manner to collect metadata about
20 complex-format images.
21 """
22
23 import struct
24
25 from oslo_log import log as logging
26
27 LOG = logging.getLogger(__name__)
28
29
30 class CaptureRegion(object):
31 """Represents a region of a file we want to capture.
32
33 A region of a file we want to capture requires a byte offset into
34 the file and a length. This is expected to be used by a data
35 processing loop, calling capture() with the most recently-read
36 chunk. This class handles the task of grabbing the desired region
37 of data across potentially multiple fractional and unaligned reads.
38
39 :param offset: Byte offset into the file starting the region
40 :param length: The length of the region
41 """
42 def __init__(self, offset, length):
43 self.offset = offset
44 self.length = length
45 self.data = b''
46
47 @property
48 def complete(self):
49 """Returns True when we have captured the desired data."""
50 return self.length == len(self.data)
51
52 def capture(self, chunk, current_position):
53 """Process a chunk of data.
54
55 This should be called for each chunk in the read loop, at least
56 until complete returns True.
57
58 :param chunk: A chunk of bytes in the file
59 :param current_position: The position of the file processed by the
60 read loop so far. Note that this will be
61 the position in the file *after* the chunk
62 being presented.
63 """
64 read_start = current_position - len(chunk)
65 if (read_start <= self.offset <= current_position or
66 self.offset <= read_start <= (self.offset + self.length)):
67 if read_start < self.offset:
68 lead_gap = self.offset - read_start
69 else:
70 lead_gap = 0
71 self.data += chunk[lead_gap:]
72 self.data = self.data[:self.length]
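To make the chunk arithmetic above concrete, here is the same capture logic (condensed from the class above, logging omitted) exercised standalone against deliberately unaligned reads:

```python
# Condensed copy of CaptureRegion, as shown in the diff above, to illustrate
# how a region is assembled across fractional, unaligned reads.
class CaptureRegion:
    def __init__(self, offset, length):
        self.offset = offset
        self.length = length
        self.data = b''

    @property
    def complete(self):
        return self.length == len(self.data)

    def capture(self, chunk, current_position):
        read_start = current_position - len(chunk)
        if (read_start <= self.offset <= current_position or
                self.offset <= read_start <= (self.offset + self.length)):
            # Skip any part of the chunk before the region's start
            lead_gap = self.offset - read_start if read_start < self.offset else 0
            self.data += chunk[lead_gap:]
            self.data = self.data[:self.length]


# Capture bytes 10..14 of a stream delivered in uneven chunks
region = CaptureRegion(offset=10, length=5)
stream = b'0123456789ABCDEFGHIJ'
pos = 0
for size in (4, 7, 9):  # chunk sizes do not align with the region
    chunk = stream[pos:pos + size]
    pos += size
    region.capture(chunk, pos)
print(region.complete, region.data)  # True b'ABCDE'
```

The second chunk contributes only its tail (after the lead gap) and the third only the head the region still needs, which is exactly the behavior the read loop in `eat_chunk()` relies on.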
73
74
75 class ImageFormatError(Exception):
76 """An unrecoverable image format error that aborts the process."""
77 pass
78
79
80 class TraceDisabled(object):
81 """A logger-like thing that swallows tracing when we do not want it."""
82 def debug(self, *a, **k):
83 pass
84
85 info = debug
86 warning = debug
87 error = debug
88
89
90 class FileInspector(object):
91 """A stream-based disk image inspector.
92
93 This base class works on raw images and is subclassed for more
94 complex types. It is to be presented with the file to be examined
95 one chunk at a time, during read processing and will only store
96 as much data as necessary to determine required attributes of
97 the file.
98 """
99
100 def __init__(self, tracing=False):
101 self._total_count = 0
102
103 # NOTE(danms): The logging in here is extremely verbose for a reason,
104 # but should never really be enabled at that level at runtime. To
105 # retain all that work and assist in future debug, we have a separate
106 # debug flag that can be passed from a manual tool to turn it on.
107 if tracing:
108 self._log = logging.getLogger(str(self))
109 else:
110 self._log = TraceDisabled()
111 self._capture_regions = {}
112
113 def _capture(self, chunk, only=None):
114 for name, region in self._capture_regions.items():
115 if only and name not in only:
116 continue
117 if not region.complete:
118 region.capture(chunk, self._total_count)
119
120 def eat_chunk(self, chunk):
121 """Call this to present chunks of the file to the inspector."""
122 pre_regions = set(self._capture_regions.keys())
123
124 # Increment our position-in-file counter
125 self._total_count += len(chunk)
126
127 # Run through the regions we know of to see if they want this
128 # data
129 self._capture(chunk)
130
131 # Let the format do some post-read processing of the stream
132 self.post_process()
133
134 # Check to see if the post-read processing added new regions
135 # which may require the current chunk.
136 new_regions = set(self._capture_regions.keys()) - pre_regions
137 if new_regions:
138 self._capture(chunk, only=new_regions)
139
140 def post_process(self):
141 """Post-read hook to process what has been read so far.
142
143 This will be called after each chunk is read and potentially captured
144 by the defined regions. If any regions are defined by this call,
145 those regions will be presented with the current chunk in case it
146 is within one of the new regions.
147 """
148 pass
149
150 def region(self, name):
151 """Get a CaptureRegion by name."""
152 return self._capture_regions[name]
153
154 def new_region(self, name, region):
155 """Add a new CaptureRegion by name."""
156 if self.has_region(name):
157 # This is a bug, we tried to add the same region twice
158 raise ImageFormatError('Inspector re-added region %s' % name)
159 self._capture_regions[name] = region
160
161 def has_region(self, name):
162 """Returns True if named region has been defined."""
163 return name in self._capture_regions
164
165 @property
166 def format_match(self):
167 """Returns True if the file appears to be the expected format."""
168 return True
169
170 @property
171 def virtual_size(self):
172 """Returns the virtual size of the disk image, or zero if unknown."""
173 return self._total_count
174
175 @property
176 def actual_size(self):
177 """Returns the total size of the file, usually smaller than
178 virtual_size.
179 """
180 return self._total_count
181
182 def __str__(self):
183 """The string name of this file format."""
184 return 'raw'
185
186 @property
187 def context_info(self):
188 """Return info on amount of data held in memory for auditing.
189
190 This is a dict of region:sizeinbytes items that the inspector
191 uses to examine the file.
192 """
193 return {name: len(region.data) for name, region in
194 self._capture_regions.items()}
195
196
197 # The qcow2 format consists of a big-endian 72-byte header, of which
198 # only a small portion has information we care about:
199 #
200 # Dec Hex Name
201 # 0 0x00 Magic 4-bytes 'QFI\xfb'
202 # 4 0x04 Version (uint32_t, should always be 2 for modern files)
203 # . . .
204 # 24 0x18 Size in bytes (uint64_t)
205 #
206 # https://people.gnome.org/~markmc/qcow-image-format.html
207 class QcowInspector(FileInspector):
208 """QEMU QCOW2 Format
209
210 This should only require about 32 bytes of the beginning of the file
211 to determine the virtual size.
212 """
213 def __init__(self, *a, **k):
214 super(QcowInspector, self).__init__(*a, **k)
215 self.new_region('header', CaptureRegion(0, 512))
216
217 def _qcow_header_data(self):
218 magic, version, bf_offset, bf_sz, cluster_bits, size = (
219 struct.unpack('>4sIQIIQ', self.region('header').data[:32]))
220 return magic, size
221
222 @property
223 def virtual_size(self):
224 if not self.region('header').complete:
225 return 0
226 if not self.format_match:
227 return 0
228 magic, size = self._qcow_header_data()
229 return size
230
231 @property
232 def format_match(self):
233 if not self.region('header').complete:
234 return False
235 magic, size = self._qcow_header_data()
236 return magic == b'QFI\xFB'
237
238 def __str__(self):
239 return 'qcow2'
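The header layout described in the comment above can be exercised standalone. The bytes below are a synthetic header built for illustration (not taken from a real image), unpacked with the same `>4sIQIIQ` format string `_qcow_header_data()` uses:

```python
import struct

# Synthetic 512-byte qcow2 header: big-endian magic, version, backing-file
# offset/size, cluster_bits, then the virtual size at offset 24. The values
# other than magic/version/size are arbitrary placeholders.
virtual_size = 8 * 1024 ** 3  # 8 GiB
header = struct.pack('>4sIQIIQ', b'QFI\xfb', 2, 0, 0, 16, virtual_size)
header += b'\x00' * (512 - len(header))

# The same unpack the inspector performs on its captured 'header' region
magic, version, bf_offset, bf_sz, cluster_bits, size = struct.unpack(
    '>4sIQIIQ', header[:32])
print(magic == b'QFI\xfb', size)  # True 8589934592
```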
240
241
242 # The VHD (or VPC as QEMU calls it) format consists of a big-endian
243 # 512-byte "footer" at the beginning of the file with various
244 # information, most of which does not matter to us:
245 #
246 # Dec Hex Name
247 # 0 0x00 Magic string (8-bytes, always 'conectix')
248 # 40 0x28 Disk size (uint64_t)
249 #
250 # https://github.com/qemu/qemu/blob/master/block/vpc.c
251 class VHDInspector(FileInspector):
252 """Connectix/MS VPC VHD Format
253
254 This should only require about 512 bytes of the beginning of the file
255 to determine the virtual size.
256 """
257 def __init__(self, *a, **k):
258 super(VHDInspector, self).__init__(*a, **k)
259 self.new_region('header', CaptureRegion(0, 512))
260
261 @property
262 def format_match(self):
263 return self.region('header').data.startswith(b'conectix')
264
265 @property
266 def virtual_size(self):
267 if not self.region('header').complete:
268 return 0
269
270 if not self.format_match:
271 return 0
272
273 return struct.unpack('>Q', self.region('header').data[40:48])[0]
274
275 def __str__(self):
276 return 'vhd'
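The two checks `VHDInspector` performs (the `conectix` magic and the big-endian disk size at offset 40) can likewise be shown against a synthetic footer built for illustration:

```python
import struct

# Synthetic 512-byte VPC/VHD footer: 8-byte 'conectix' magic, 32 bytes of
# fields we do not model here, then the uint64 disk size at offset 40.
disk_size = 40 * 1024 ** 3  # 40 GiB, an arbitrary example value
footer = b'conectix' + b'\x00' * 32 + struct.pack('>Q', disk_size)
footer += b'\x00' * (512 - len(footer))

# Mirrors format_match and virtual_size on the captured header region
assert footer.startswith(b'conectix')
parsed = struct.unpack('>Q', footer[40:48])[0]
print(parsed)  # 42949672960
```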
277
278
279 # The VHDX format consists of a complex dynamic little-endian
280 # structure with multiple regions of metadata and data, linked by
281 # offsets within the file (and within regions), identified by MSFT
282 # GUID strings. The header is a 320KiB structure, only a few pieces of
283 # which we actually need to capture and interpret:
284 #
285 # Dec Hex Name
286 # 0 0x00000 Identity (Technically 9-bytes, padded to 64KiB, the first
287 # 8 bytes of which are 'vhdxfile')
288 # 196608 0x30000 The Region table (64KiB of a 32-byte header, followed
289 # by up to 2047 36-byte region table entry structures)
290 #
291 # The region table header includes two items we need to read and parse,
292 # which are:
293 #
294 # 196608 0x30000 4-byte signature ('regi')
295 # 196616 0x30008 Entry count (uint32-t)
296 #
297 # The region table entries follow the region table header immediately
298 # and are identified by a 16-byte GUID, and provide an offset of the
299 # start of that region. We care about the "metadata region", identified
300 # by the METAREGION class variable. The region table entry is (offsets
301 # from the beginning of the entry, since it could be in multiple places):
302 #
303 # 0 0x00000 16-byte MSFT GUID
304 # 16 0x00010 Offset of the actual metadata region (uint64_t)
305 #
306 # When we find the METAREGION table entry, we need to grab that offset
307 # and start examining the region structure at that point. That
308 # consists of a metadata table of structures, which point to places in
309 # the data in an unstructured space that follows. The header is
310 # (offsets relative to the region start):
311 #
312 # 0 0x00000 8-byte signature ('metadata')
313 # . . .
314 # 16 0x00010 2-byte entry count (up to 2047 entries max)
315 #
316 # This header is followed by the specified number of metadata entry
317 # structures, identified by GUID:
318 #
319 # 0 0x00000 16-byte MSFT GUID
320 # 16 0x00010 4-byte offset (uint32_t, relative to the beginning of
321 # the metadata region)
322 #
323 # We need to find the "Virtual Disk Size" metadata item, identified by
324 # the GUID in the VIRTUAL_DISK_SIZE class variable, grab the offset,
325 # add it to the offset of the metadata region, and examine that 8-byte
326 # chunk of data that follows.
327 #
328 # The "Virtual Disk Size" is a naked uint64_t which contains the size
329 # of the virtual disk, and is our ultimate target here.
330 #
331 # https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-vhdx/83e061f8-f6e2-4de1-91bd-5d518a43d477
332 class VHDXInspector(FileInspector):
333 """MS VHDX Format
334
335 This requires some complex parsing of the stream. The first 256KiB
336 of the image is stored to get the header and region information,
337 and then we capture the first metadata region to read those
338 records, find the location of the virtual size data and parse
339 it. This needs to store the metadata table entries up until the
340 VDS record, which may consist of up to 2047 32-byte entries at
341 max. Finally, it must store a chunk of data at the offset of the
342 actual VDS uint64.
343
344 """
345 METAREGION = '8B7CA206-4790-4B9A-B8FE-575F050F886E'
346 VIRTUAL_DISK_SIZE = '2FA54224-CD1B-4876-B211-5DBED83BF4B8'
347
348 def __init__(self, *a, **k):
349 super(VHDXInspector, self).__init__(*a, **k)
350 self.new_region('ident', CaptureRegion(0, 32))
351 self.new_region('header', CaptureRegion(192 * 1024, 64 * 1024))
352
353 def post_process(self):
354 # After reading a chunk, we may have the following conditions:
355 #
356 # 1. We may have just completed the header region, and if so,
357 # we need to immediately read and calculate the location of
358 # the metadata region, as it may be starting in the same
359 # read we just did.
360 # 2. We may have just completed the metadata region, and if so,
361 # we need to immediately calculate the location of the
362 # "virtual disk size" record, as it may be starting in the
363 # same read we just did.
364 if self.region('header').complete and not self.has_region('metadata'):
365 region = self._find_meta_region()
366 if region:
367 self.new_region('metadata', region)
368 elif self.has_region('metadata') and not self.has_region('vds'):
369 region = self._find_meta_entry(self.VIRTUAL_DISK_SIZE)
370 if region:
371 self.new_region('vds', region)
372
373 @property
374 def format_match(self):
375 return self.region('ident').data.startswith(b'vhdxfile')
376
377 @staticmethod
378 def _guid(buf):
379 """Format a MSFT GUID from the 16-byte input buffer."""
380 guid_format = '<IHHBBBBBBBB'
381 return '%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X' % (
382 struct.unpack(guid_format, buf))
383
384 def _find_meta_region(self):
385 # The region table entries start after a 16-byte table header
386 region_entry_first = 16
387
388 # Parse the region table header to find the number of regions
389 regi, cksum, count, reserved = struct.unpack(
390 '<IIII', self.region('header').data[:16])
391 if regi != 0x69676572:
392 raise ImageFormatError('Region signature not found at %x' % (
393 self.region('header').offset))
394
395 if count >= 2048:
396 raise ImageFormatError('Region count is %i (limit 2047)' % count)
397
398 # Process the regions until we find the metadata one; grab the
399 # offset and return
400 self._log.debug('Region entry first is %x', region_entry_first)
401 self._log.debug('Region entries %i', count)
402 meta_offset = 0
403 for i in range(0, count):
404 entry_start = region_entry_first + (i * 32)
405 entry_end = entry_start + 32
406 entry = self.region('header').data[entry_start:entry_end]
407 self._log.debug('Entry offset is %x', entry_start)
408
409 # GUID is the first 16 bytes
410 guid = self._guid(entry[:16])
411 if guid == self.METAREGION:
412 # This entry is the metadata region entry
413 meta_offset, meta_len, meta_req = struct.unpack(
414 '<QII', entry[16:])
415 self._log.debug('Meta entry %i specifies offset: %x',
416 i, meta_offset)
417 # NOTE(danms): The meta_len in the region descriptor is the
418 # entire size of the metadata table and data. This can be
419 # very large, so we should only capture the size required
420 # for the maximum length of the table, which is one 32-byte
421 # table header, plus up to 2047 32-byte entries.
422 meta_len = 2048 * 32
423 return CaptureRegion(meta_offset, meta_len)
424
425 self._log.warning('Did not find metadata region')
426 return None
427
428 def _find_meta_entry(self, desired_guid):
429 meta_buffer = self.region('metadata').data
430 if len(meta_buffer) < 32:
431 # Not enough data yet for full header
432 return None
433
434 # Make sure we found the metadata region by checking the signature
435 sig, reserved, count = struct.unpack('<8sHH', meta_buffer[:12])
436 if sig != b'metadata':
437 raise ImageFormatError(
438 'Invalid signature for metadata region: %r' % sig)
439
440 entries_size = 32 + (count * 32)
441 if len(meta_buffer) < entries_size:
442 # Not enough data yet for all metadata entries. This is not
443 # strictly necessary as we could process whatever we have until
444 # we find the V-D-S one, but there are only 2047 32-byte
445 # entries max (~64k).
446 return None
447
448 if count >= 2048:
449 raise ImageFormatError(
450 'Metadata item count is %i (limit 2047)' % count)
451
452 for i in range(0, count):
453 entry_offset = 32 + (i * 32)
454 guid = self._guid(meta_buffer[entry_offset:entry_offset + 16])
455 if guid == desired_guid:
456 # Found the item we are looking for by id.
457 # Stop our region from capturing
458 item_offset, item_length, _reserved = struct.unpack(
459 '<III',
460 meta_buffer[entry_offset + 16:entry_offset + 28])
461 self.region('metadata').length = len(meta_buffer)
462 self._log.debug('Found entry at offset %x', item_offset)
463 # Metadata item offset is from the beginning of the metadata
464 # region, not the file.
465 return CaptureRegion(
466 self.region('metadata').offset + item_offset,
467 item_length)
468
469 self._log.warning('Did not find guid %s', desired_guid)
470 return None
471
472 @property
473 def virtual_size(self):
474 # Until we have found the offset and have enough metadata buffered
475 # to read it, return "unknown"
476 if not self.has_region('vds') or not self.region('vds').complete:
477 return 0
478
479 size, = struct.unpack('<Q', self.region('vds').data)
480 return size
481
482 def __str__(self):
483 return 'vhdx'
484
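The mixed-endian GUID formatting done by `_guid()` above can be sketched standalone. MSFT GUIDs store the first three fields little-endian and the trailing eight bytes in order, unlike the big-endian RFC 4122 text form; the raw bytes below are the on-disk encoding of the METAREGION GUID from the class above.

```python
import struct

def msft_guid(buf):
    # First three fields little-endian (uint32, uint16, uint16),
    # remaining 8 bytes emitted in order.
    return '%08X-%04X-%04X-%02X%02X-%02X%02X%02X%02X%02X%02X' % (
        struct.unpack('<IHHBBBBBBBB', buf))

# On-disk bytes of the metadata-region GUID:
raw = bytes.fromhex('06A27C8B90479A4BB8FE575F050F886E')
print(msft_guid(raw))  # 8B7CA206-4790-4B9A-B8FE-575F050F886E
```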
485
486 # The VMDK format comes in a large number of variations, but the
487 # single-file 'monolithicSparse' version 4 one is mostly what we care
488 # about. It contains a 512-byte little-endian header, followed by a
489 # variable-length "descriptor" region of text. The header looks like:
490 #
491 # Dec Hex Name
492 # 0 0x00 4-byte magic string 'KDMV'
493 # 4 0x04 Version (uint32_t)
494 # 8 0x08 Flags (uint32_t, unused by us)
495 # 16 0x10 Number of 512 byte sectors in the disk (uint64_t)
496 # 24 0x18 Granularity (uint64_t, unused by us)
497 # 32 0x20 Descriptor offset in 512-byte sectors (uint64_t)
498 # 40 0x28 Descriptor size in 512-byte sectors (uint64_t)
499 #
500 # After we have the header, we need to find the descriptor region,
501 # which starts at the sector identified in the "descriptor offset"
502 # field, and is "descriptor size" 512-byte sectors long. Once we have
503 # that region, we need to parse it as text, looking for the
504 # createType=XXX line that specifies the mechanism by which the data
505 # extents are stored in this file. We only support the
506 # "monolithicSparse" format, so we just need to confirm that this file
507 # contains that specifier.
508 #
509 # https://www.vmware.com/app/vmdk/?src=vmdk
510 class VMDKInspector(FileInspector):
511 """vmware VMDK format (monolithicSparse variant only)
512
513 This needs to store the 512 byte header and the descriptor region
514 which should be just after that. The descriptor region is some
515 variable number of 512 byte sectors, but is just text defining the
516 layout of the disk.
517 """
518 def __init__(self, *a, **k):
519 super(VMDKInspector, self).__init__(*a, **k)
520 self.new_region('header', CaptureRegion(0, 512))
521
522 def post_process(self):
523 # If we have just completed the header region, we need to calculate
524 # the location and length of the descriptor, which should immediately
525 # follow and may have been partially-read in this read.
526 if not self.region('header').complete:
527 return
528
529 sig, ver, _flags, _sectors, _grain, desc_sec, desc_num = struct.unpack(
530 '<4sIIQQQQ', self.region('header').data[:44])
531
532 if sig != b'KDMV':
533 raise ImageFormatError('Signature KDMV not found: %r' % sig)
535
536 if ver not in (1, 2, 3):
537 raise ImageFormatError('Unsupported format version %i' % ver)
539
540 if not self.has_region('descriptor'):
541 self.new_region('descriptor', CaptureRegion(
542 desc_sec * 512, desc_num * 512))
543
544 @property
545 def format_match(self):
546 return self.region('header').data.startswith(b'KDMV')
547
548 @property
549 def virtual_size(self):
550 if not self.has_region('descriptor'):
551 # Not enough data yet
552 return 0
553
554 descriptor_rgn = self.region('descriptor')
555 if not descriptor_rgn.complete:
556 # Not enough data yet
557 return 0
558
559 descriptor = descriptor_rgn.data
560 type_idx = descriptor.index(b'createType="') + len(b'createType="')
561 type_end = descriptor.find(b'"', type_idx)
562 # Make sure we don't grab and log a huge chunk of data in a
563 # maliciously-formatted descriptor region
564 if type_end - type_idx < 64:
565 vmdktype = descriptor[type_idx:type_end]
566 else:
567 vmdktype = b'formatnotfound'
568 if vmdktype != b'monolithicSparse':
569 raise ImageFormatError('Unsupported VMDK format %s' % vmdktype)
570 return 0
571
572 # If we have the descriptor, we definitely have the header
573 _sig, _ver, _flags, sectors, _grain, _desc_sec, _desc_num = (
574 struct.unpack('<IIIQQQQ', self.region('header').data[:44]))
575
576 return sectors * 512
577
578 def __str__(self):
579 return 'vmdk'
580
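The 512-byte header layout documented above can be exercised against a synthetic header. This is a sketch with assumed example values (2048 sectors, granularity 128, descriptor at sector 1, 20 sectors long), not a real VMDK.

```python
import struct

# Magic, version, flags, capacity in 512-byte sectors, granularity,
# descriptor offset (sectors), descriptor size (sectors).
header = struct.pack('<4sIIQQQQ', b'KDMV', 1, 0, 2048, 128, 1, 20)
header += b'\x00' * (512 - len(header))  # pad to a full sector

sig, ver, _flags, sectors, _grain, desc_sec, desc_num = struct.unpack(
    '<4sIIQQQQ', header[:44])
print(sig, ver, sectors * 512)  # b'KDMV' 1 1048576
```

The virtual size is simply the sector count times 512, which is what `VMDKInspector.virtual_size` returns once the descriptor has been validated.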
581
582 # The VirtualBox VDI format consists of a 512-byte little-endian
583 # header, some of which we care about:
584 #
585 # Dec Hex Name
586 # 64 0x40 4-byte Magic (0xbeda107f)
587 # . . .
588 # 368 0x170 Size in bytes (uint64_t)
589 #
590 # https://github.com/qemu/qemu/blob/master/block/vdi.c
591 class VDIInspector(FileInspector):
592 """VirtualBox VDI format
593
594 This only needs to store the first 512 bytes of the image.
595 """
596 def __init__(self, *a, **k):
597 super(VDIInspector, self).__init__(*a, **k)
598 self.new_region('header', CaptureRegion(0, 512))
599
600 @property
601 def format_match(self):
602 if not self.region('header').complete:
603 return False
604
605 signature, = struct.unpack('<I', self.region('header').data[0x40:0x44])
606 return signature == 0xbeda107f
607
608 @property
609 def virtual_size(self):
610 if not self.region('header').complete:
611 return 0
612 if not self.format_match:
613 return 0
614
615 size, = struct.unpack('<Q', self.region('header').data[0x170:0x178])
616 return size
617
618 def __str__(self):
619 return 'vdi'
620
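The two VDI fields read above can likewise be shown on a synthetic 512-byte header: the 0xbeda107f magic at offset 0x40 and the uint64 virtual size at 0x170. The 8 MiB size is an arbitrary example value.

```python
import struct

header = bytearray(512)
struct.pack_into('<I', header, 0x40, 0xbeda107f)      # magic
struct.pack_into('<Q', header, 0x170, 8 * 1024 * 1024)  # size in bytes

magic, = struct.unpack_from('<I', header, 0x40)
size, = struct.unpack_from('<Q', header, 0x170)
print(hex(magic), size)  # 0xbeda107f 8388608
```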
621
622 class InfoWrapper(object):
623 """A file-like object that wraps another and updates a format inspector.
624
625 This passes chunks to the format inspector while reading. If the inspector
626 fails, it logs the error and stops calling it, but continues proxying data
627 from the source to its user.
628 """
629 def __init__(self, source, fmt):
630 self._source = source
631 self._format = fmt
632 self._error = False
633
634 def __iter__(self):
635 return self
636
637 def _process_chunk(self, chunk):
638 if not self._error:
639 try:
640 self._format.eat_chunk(chunk)
641 except Exception as e:
642 # Absolutely do not allow the format inspector to break
643 # our streaming of the image. If we failed, just stop
644 # trying, log and keep going.
645 LOG.error('Format inspector failed, aborting: %s', e)
646 self._error = True
647
648 def __next__(self):
649 try:
650 chunk = next(self._source)
651 except StopIteration:
652 raise
653 self._process_chunk(chunk)
654 return chunk
655
656 def read(self, size):
657 chunk = self._source.read(size)
658 self._process_chunk(chunk)
659 return chunk
660
661 def close(self):
662 if hasattr(self._source, 'close'):
663 self._source.close()
664
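The fail-safe proxying idea in `InfoWrapper` can be sketched in isolation: keep streaming chunks to the consumer even if the observer raises, and disable the observer after its first failure. The `Watcher` class below is a hypothetical stand-in for a format inspector.

```python
class Watcher:
    def __init__(self):
        self.seen = 0

    def eat_chunk(self, chunk):
        self.seen += len(chunk)
        raise RuntimeError('inspector broke')

class SafeWrapper:
    def __init__(self, source, watcher):
        self._source, self._watcher, self._error = source, watcher, False

    def __iter__(self):
        return self

    def __next__(self):
        chunk = next(self._source)
        if not self._error:
            try:
                self._watcher.eat_chunk(chunk)
            except Exception:
                self._error = True  # stop calling, keep streaming
        return chunk

w = Watcher()
data = b''.join(SafeWrapper(iter([b'ab', b'cd']), w))
print(data, w.seen)  # b'abcd' 2
```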
665
666 def get_inspector(format_name):
667 """Returns a FormatInspector class based on the given name.
668
669 :param format_name: The name of the disk_format (raw, qcow2, etc).
670 :returns: A FormatInspector or None if unsupported.
671 """
672 formats = {
673 'raw': FileInspector,
674 'qcow2': QcowInspector,
675 'vhd': VHDInspector,
676 'vhdx': VHDXInspector,
677 'vmdk': VMDKInspector,
678 'vdi': VDIInspector,
679 }
680
681 return formats.get(format_name)
+0
-302
glance/common/rpc.py
0 # Copyright 2013 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 RPC Controller
17 """
18 import datetime
19 import traceback
20
21 from oslo_config import cfg
22 from oslo_log import log as logging
23 from oslo_utils import encodeutils
24 import oslo_utils.importutils as imp
25 import six
26 from webob import exc
27
28 from glance.common import client
29 from glance.common import exception
30 from glance.common import timeutils
31 from glance.common import wsgi
32 from glance.i18n import _, _LE
33
34 LOG = logging.getLogger(__name__)
35
36
37 rpc_opts = [
38 cfg.ListOpt('allowed_rpc_exception_modules',
39 default=['glance.common.exception',
40 'builtins',
41 'exceptions',
42 ],
43 help=_("""
44 List of allowed exception modules to handle RPC exceptions.
45
46 Provide a comma separated list of modules whose exceptions are
47 permitted to be recreated upon receiving exception data via an RPC
48 call made to Glance. The default list includes
49 ``glance.common.exception``, ``builtins``, and ``exceptions``.
50
51 The RPC protocol permits interaction with Glance via calls across a
52 network or within the same system. Including a list of exception
53 namespaces with this option enables RPC to propagate the exceptions
54 back to the users.
55
56 Possible values:
57 * A comma separated list of valid exception modules
58
59 Related options:
60 * None
61 """)),
62 ]
63
64 CONF = cfg.CONF
65 CONF.register_opts(rpc_opts)
66
67
68 class RPCJSONSerializer(wsgi.JSONResponseSerializer):
69
70 @staticmethod
71 def _to_primitive(_type, _value):
72 return {"_type": _type, "_value": _value}
73
74 def _sanitizer(self, obj):
75 if isinstance(obj, datetime.datetime):
76 return self._to_primitive("datetime",
77 obj.isoformat())
78
79 return super(RPCJSONSerializer, self)._sanitizer(obj)
80
81
82 class RPCJSONDeserializer(wsgi.JSONRequestDeserializer):
83
84 @staticmethod
85 def _to_datetime(obj):
86 return timeutils.normalize_time(timeutils.parse_isotime(obj))
87
88 def _sanitizer(self, obj):
89 try:
90 _type, _value = obj["_type"], obj["_value"]
91 return getattr(self, "_to_" + _type)(_value)
92 except (KeyError, AttributeError):
93 return obj
94
95
96 class Controller(object):
97 """
98 Base RPCController.
99
100 This is the base controller for RPC based APIs. Commands
101 handled by this controller respect the following form:
102
103 ::
104
105 [{
106 'command': 'method_name',
107 'kwargs': {...}
108 }]
109
110 The controller is capable of processing more than one command
111 per request and will always return a list of results.
112
113 :param bool raise_exc: Specifies whether to raise
114 exceptions instead of "serializing" them.
115
116 """
117
118 def __init__(self, raise_exc=False):
119 self._registered = {}
120 self.raise_exc = raise_exc
121
122 def register(self, resource, filtered=None, excluded=None, refiner=None):
123 """
124 Exports methods through the RPC Api.
125
126 :param resource: Resource's instance to register.
127 :param filtered: List of methods that *can* be registered. Read
128 as "Method must be in this list".
129 :param excluded: List of methods to exclude.
130 :param refiner: Callable to use as filter for methods.
131
132 :raises TypeError: If refiner is not callable.
133
134 """
135
136 funcs = [x for x in dir(resource) if not x.startswith("_")]
137
138 if filtered:
139 funcs = [f for f in funcs if f in filtered]
140
141 if excluded:
142 funcs = [f for f in funcs if f not in excluded]
143
144 if refiner:
145 funcs = filter(refiner, funcs)
146
147 for name in funcs:
148 meth = getattr(resource, name)
149
150 if not callable(meth):
151 continue
152
153 self._registered[name] = meth
154
155 def __call__(self, req, body):
156 """
157 Executes the command
158 """
159
160 if not isinstance(body, list):
161 msg = _("Request must be a list of commands")
162 raise exc.HTTPBadRequest(explanation=msg)
163
164 def validate(cmd):
165 if not isinstance(cmd, dict):
166 msg = _("Bad Command: %s") % str(cmd)
167 raise exc.HTTPBadRequest(explanation=msg)
168
169 command, kwargs = cmd.get("command"), cmd.get("kwargs")
170
171 if (not command or not isinstance(command, six.string_types) or
172 (kwargs and not isinstance(kwargs, dict))):
173 msg = _("Wrong command structure: %s") % (str(cmd))
174 raise exc.HTTPBadRequest(explanation=msg)
175
176 method = self._registered.get(command)
177 if not method:
178 # Just raise 404 if the user tries to
179 # access a private method. No need for
180 # 403 here since logically the command
181 # is not registered to the rpc dispatcher
182 raise exc.HTTPNotFound(explanation=_("Command not found"))
183
184 return True
185
186 # If more than one command was sent, they might be intended
187 # to be executed sequentially; therefore, let's first verify
188 # they're all valid before executing them.
190 commands = list(filter(validate, body))
191
192 results = []
193 for cmd in commands:
194 # kwargs is not required
195 command, kwargs = cmd["command"], cmd.get("kwargs", {})
196 method = self._registered[command]
197 try:
198 result = method(req.context, **kwargs)
199 except Exception as e:
200 if self.raise_exc:
201 raise
202
203 cls, val = e.__class__, encodeutils.exception_to_unicode(e)
204 msg = (_LE("RPC Call Error: %(val)s\n%(tb)s") %
205 dict(val=val, tb=traceback.format_exc()))
206 LOG.error(msg)
207
208 # NOTE(flaper87): Don't propagate any exceptions
209 # other than the ones allowed by the user.
210 module = cls.__module__
211 if module not in CONF.allowed_rpc_exception_modules:
212 cls = exception.RPCError
213 val = encodeutils.exception_to_unicode(
214 exception.RPCError(cls=cls, val=val))
215
216 cls_path = "%s.%s" % (cls.__module__, cls.__name__)
217 result = {"_error": {"cls": cls_path, "val": val}}
218 results.append(result)
219 return results
220
221
222 class RPCClient(client.BaseClient):
223
224 def __init__(self, *args, **kwargs):
225 self._serializer = RPCJSONSerializer()
226 self._deserializer = RPCJSONDeserializer()
227
228 self.raise_exc = kwargs.pop("raise_exc", True)
229 self.base_path = kwargs.pop("base_path", '/rpc')
230 super(RPCClient, self).__init__(*args, **kwargs)
231
232 @client.handle_unauthenticated
233 def bulk_request(self, commands):
234 """
235 Execute multiple commands in a single request.
236
237 :param commands: List of commands to send. Commands
238 must respect the following form
239
240 ::
241
242 {
243 'command': 'method_name',
244 'kwargs': method_kwargs
245 }
246
247 """
248 body = self._serializer.to_json(commands)
249 response = super(RPCClient, self).do_request('POST',
250 self.base_path,
251 body)
252 return self._deserializer.from_json(response.read())
253
254 def do_request(self, method, **kwargs):
255 """
256 Simple do_request override. This method serializes
257 the outgoing body and builds the command that will
258 be sent.
259
260 :param method: The remote python method to call
261 :param kwargs: Dynamic parameters that will be
262 passed to the remote method.
263 """
264 content = self.bulk_request([{'command': method,
265 'kwargs': kwargs}])
266
267 # NOTE(flaper87): Return the first result if
268 # a single command was executed.
269 content = content[0]
270
271 # NOTE(flaper87): Check if content is an error
272 # and re-raise it if raise_exc is True. Before
273 # checking if content contains the '_error' key,
274 # verify if it is an instance of dict - since the
275 # RPC call may have returned something different.
276 if self.raise_exc and (isinstance(content, dict)
277 and '_error' in content):
278 error = content['_error']
279 try:
280 exc_cls = imp.import_class(error['cls'])
281 raise exc_cls(error['val'])
282 except ImportError:
283 # NOTE(flaper87): The exception
284 # class couldn't be imported, using
285 # a generic exception.
286 raise exception.RPCError(**error)
287 return content
288
289 def __getattr__(self, item):
290 """
291 This method returns a method_proxy that
292 will execute the rpc call in the registry
293 service.
294 """
295 if item.startswith('_'):
296 raise AttributeError(item)
297
298 def method_proxy(**kw):
299 return self.do_request(item, **kw)
300
301 return method_proxy
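The `__getattr__` proxy pattern used by `RPCClient` above can be sketched on its own: attribute access returns a closure that forwards the method name and kwargs to a single dispatch function. The lambda below is a stand-in for the real `do_request` call, not an actual RPC.

```python
class Proxy:
    def __init__(self, dispatch):
        self._dispatch = dispatch

    def __getattr__(self, item):
        # Only called for attributes not found normally; refuse
        # private names so internal lookups fail fast.
        if item.startswith('_'):
            raise AttributeError(item)

        def method_proxy(**kw):
            return self._dispatch(item, **kw)
        return method_proxy

p = Proxy(lambda name, **kw: (name, kw))
print(p.get_image(id=1))  # ('get_image', {'id': 1})
```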
136136 return image
137137
138138
139 def set_image_data(image, uri, task_id, backend=None, set_active=True):
139 def set_image_data(image, uri, task_id, backend=None, set_active=True,
140 callback=None):
140141 data_iter = None
141142 try:
142143 LOG.info(_LI("Task %(task_id)s: Got image data uri %(data_uri)s to be "
143144 "imported"), {"data_uri": uri, "task_id": task_id})
144145 data_iter = script_utils.get_image_data_iter(uri)
146 if callback:
147 # If a callback was provided, wrap our data iterator to call
148 # the function every 60 seconds.
149 data_iter = script_utils.CallbackIterator(
150 data_iter, callback, min_interval=60)
145151 image.set_data(data_iter, backend=backend, set_active=set_active)
146152 except Exception as e:
147153 with excutils.save_and_reraise_exception():
2222
2323
2424 from oslo_log import log as logging
25 from oslo_utils import timeutils
2526 from six.moves import urllib
2627
2728 from glance.common import exception
138139 return open(uri, "rb")
139140
140141 return urllib.request.urlopen(uri)
142
143
144 class CallbackIterator(object):
145 """A proxy iterator that calls a callback function periodically
146
147 This is used to wrap a reading file object and proxy its chunks
148 through to another caller. Periodically, the callback function
149 will be called with information about the data processed so far,
150 allowing for status updating or cancel flag checking. The function
151 can be called every time we process a chunk, or only after we have
152 processed a certain amount of data since the last call.
153
154 :param source: A source iterator whose content will be proxied
155 through this object.
156 :param callback: A function to be called periodically while iterating.
157 The signature should be fn(chunk_bytes, total_bytes),
158 where chunk is the number of bytes since the last
159 call of the callback, and total_bytes is the total amount
160 copied thus far.
161 :param min_interval: Limit the calls to callback to only when this many
162 seconds have elapsed since the last callback (a
163 close() or final iteration may fire the callback in
164 less time to ensure completion).
165 """
166
167 def __init__(self, source, callback, min_interval=None):
168 self._source = source
169 self._callback = callback
170 self._min_interval = min_interval
171 self._chunk_bytes = 0
172 self._total_bytes = 0
173 self._timer = None
174
175 @property
176 def callback_due(self):
177 """Indicates if a callback should be made.
178
179 If no time-based limit is set, this will always be True.
180 If a limit is set, then this returns True exactly once,
181 resetting the timer when it does.
182 """
183 if not self._min_interval:
184 return True
185
186 if not self._timer:
187 self._timer = timeutils.StopWatch(self._min_interval)
188 self._timer.start()
189
190 if self._timer.expired():
191 self._timer.restart()
192 return True
193 else:
194 return False
195
196 def __iter__(self):
197 return self
198
199 def __next__(self):
200 try:
201 chunk = next(self._source)
202 except StopIteration:
203 # NOTE(danms): Make sure we call the callback the last
204 # time if we have processed data since the last one.
205 self._call_callback(b'', is_last=True)
206 raise
207
208 self._call_callback(chunk)
209 return chunk
210
211 def close(self):
212 self._call_callback(b'', is_last=True)
213 if hasattr(self._source, 'close'):
214 return self._source.close()
215
216 def _call_callback(self, chunk, is_last=False):
217 self._total_bytes += len(chunk)
218 self._chunk_bytes += len(chunk)
219
220 if not self._chunk_bytes:
221 # NOTE(danms): Never call the callback if we haven't processed
222 # any data since the last time
223 return
224
225 if is_last or self.callback_due:
226 # FIXME(danms): Perhaps we should only abort the read if
227 # the callback raises a known abort exception, otherwise
228 # log and swallow. Need to figure out what exception
229 # read() callers would be expecting that we could raise
230 # from here.
231 self._callback(self._chunk_bytes, self._total_bytes)
232 self._chunk_bytes = 0
233
234 def read(self, size=None):
235 chunk = self._source.read(size)
236 self._call_callback(chunk)
237 return chunk
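The periodic-callback idea above reduces to a small generator when simplified to call back on every chunk (i.e. no `min_interval`): the callback receives `(bytes_since_last_call, total_bytes)`, the same signature `CallbackIterator` documents.

```python
progress = []

def report(chunk_bytes, total_bytes):
    progress.append((chunk_bytes, total_bytes))

def with_callback(source, callback):
    # Proxy chunks through, reporting progress after each one.
    total = 0
    for chunk in source:
        total += len(chunk)
        callback(len(chunk), total)
        yield chunk

data = list(with_callback(iter([b'abc', b'de']), report))
print(data, progress)  # [b'abc', b'de'] [(3, 3), (2, 5)]
```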
2020 import six.moves.urllib.parse as urlparse
2121
2222 import glance.db as db_api
23 from glance.i18n import _LE
23 from glance.i18n import _LE, _LW
2424 from glance import scrubber
2525
2626 LOG = logging.getLogger(__name__)
2727
2828 CONF = cfg.CONF
29 CONF.import_opt('use_user_token', 'glance.registry.client')
3029
3130 RESTRICTED_URI_SCHEMAS = frozenset(['file', 'filesystem', 'swift+config'])
3231
9291
9392 db_queue = scrubber.get_scrub_queue()
9493
95 if not CONF.use_user_token:
96 context = None
94 context = None
9795
9896 ret = db_queue.add_location(image_id, location)
9997 if ret:
179177 return
180178
181179
182 def update_store_in_locations(image, image_repo):
180 def update_store_in_locations(context, image, image_repo):
181 store_updated = False
183182 for loc in image.locations:
184183 if (not loc['metadata'].get(
185184 'store') or loc['metadata'].get(
186185 'store') not in CONF.enabled_backends):
186 if loc['url'].startswith("cinder://"):
187 _update_cinder_location_and_store_id(context, loc)
188
187189 store_id = _get_store_id_from_uri(loc['url'])
188190 if store_id:
189191 if 'store' in loc['metadata']:
196198 'new': store_id,
197199 'id': image.image_id})
198200
201 store_updated = True
199202 loc['metadata']['store'] = store_id
200 image_repo.save(image)
203
204 if store_updated:
205 image_repo.save(image)
206
207
208 def _update_cinder_location_and_store_id(context, loc):
209 """Update store location of legacy images
210
211 While upgrading from a single cinder store to multiple stores,
212 images that have a store configured with a volume type matching
213 the image-volume's type will be migrated/associated to that store,
214 and their location url will be updated to the new format,
215 i.e. cinder://store-id/volume-id.
216 If there is no store configured for the image, the location url
217 will not be updated.
218 """
219 uri = loc['url']
220 volume_id = loc['url'].split("/")[-1]
221 scheme = urlparse.urlparse(uri).scheme
222 location_map = store_api.location.SCHEME_TO_CLS_BACKEND_MAP
223 if scheme not in location_map:
224 LOG.warning(_LW("Unknown scheme '%(scheme)s' found in uri '%(uri)s'"),
225 {'scheme': scheme, 'uri': uri})
226 return
227
228 for store in location_map[scheme]:
229 store_instance = location_map[scheme][store]['store']
230 if store_instance.is_image_associated_with_store(context, volume_id):
231 url_prefix = store_instance.url_prefix
232 loc['url'] = "%s/%s" % (url_prefix, volume_id)
233 loc['metadata']['store'] = "%s" % store
234 return
235
236 LOG.warning(_LW("Unable to update location url '%s' of legacy "
237 "image: no matching store found."), uri)
201238
202239
203240 def get_updated_store_location(locations):
1818 """
1919 Utility methods for working with WSGI servers
2020 """
21 from __future__ import print_function
2221
2322 import abc
2423 import errno
99 # License for the specific language governing permissions and limitations
1010 # under the License.
1111
12 import atexit
1213 import os
1314
1415 import glance_store
1617 from oslo_log import log as logging
1718 import osprofiler.initializer
1819
20 from glance.api import common
21 import glance.async_
1922 from glance.common import config
2023 from glance.common import store_utils
2124 from glance.i18n import _
2528 CONF.import_group("profiler", "glance.common.wsgi")
2629 CONF.import_opt("enabled_backends", "glance.common.wsgi")
2730 logging.register_options(CONF)
31 LOG = logging.getLogger(__name__)
2832
2933 CONFIG_FILES = ['glance-api-paste.ini',
3034 'glance-image-import.conf',
6468 host=CONF.bind_host)
6569
6670
71 def drain_threadpools():
72 # NOTE(danms): If there are any other named pools that we need to
73 # drain before exit, they should be in this list.
74 pools_to_drain = ['tasks_pool']
75 for pool_name in pools_to_drain:
76 pool_model = common.get_thread_pool(pool_name)
77 LOG.info('Waiting for remaining threads in pool %r', pool_name)
78 pool_model.pool.shutdown()
79
80
6781 def init_app():
6882 config.set_config_defaults()
6983 config_files = _get_config_files()
7084 CONF([], project='glance', default_config_files=config_files)
7185 logging.setup(CONF, "glance")
86
87 # NOTE(danms): We are running inside uwsgi or mod_wsgi, so no eventlet;
88 # use native threading instead.
89 glance.async_.set_threadpool_model('native')
90 atexit.register(drain_threadpools)
91
92 # NOTE(danms): Change the default threadpool size since we
93 # are dealing with native threads and not greenthreads.
94 # Right now, the only pool of default size is tasks_pool,
95 # so if others are created this will need to change to be
96 # more specific.
97 common.DEFAULT_POOL_SIZE = CONF.wsgi.task_pool_threads
7298
7399 if CONF.enabled_backends:
74100 if store_utils.check_reserved_stores(CONF.enabled_backends):
1111 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
14
15 import copy
1416
1517 from oslo_context import context
1618
7173 """Admins can see deleted by default"""
7274 return self.show_deleted or self.is_admin
7375
76 def elevated(self):
77 """Return a copy of this context with admin flag set."""
78
79 context = copy.copy(self)
80 context.roles = copy.deepcopy(self.roles)
81 if 'admin' not in context.roles:
82 context.roles.append('admin')
83
84 context.is_admin = True
85
86 return context
87
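The `elevated()` method added above follows a standard pattern: shallow-copy the context, deep-copy the mutable roles list so the original is untouched, then grant admin. A minimal sketch with a hypothetical `Ctx` class:

```python
import copy

class Ctx:
    def __init__(self, roles):
        self.roles = roles
        self.is_admin = False

    def elevated(self):
        # Copy the context; deep-copy roles so appending 'admin'
        # does not mutate the original context's list.
        ctx = copy.copy(self)
        ctx.roles = copy.deepcopy(self.roles)
        if 'admin' not in ctx.roles:
            ctx.roles.append('admin')
        ctx.is_admin = True
        return ctx

c = Ctx(['member'])
e = c.elevated()
print(e.roles, c.roles)  # ['member', 'admin'] ['member']
```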
7488
7589 def get_admin_context(show_deleted=False):
7690 """Create an administrator context."""
3333 CONF.import_opt('metadata_encryption_key', 'glance.common.config')
3434
3535
36 def get_api(v1_mode=False):
37 """
38 When using v2_registry with v2_api or alone, it is essential that the opt
39 'data_api' be set to 'glance.db.registry.api'. This requires us to
40 differentiate what this method returns as the db api. i.e., we do not want
41 to return 'glance.db.registry.api' for a call from v1 api.
42 Reference bug #1516706
43 """
44 if v1_mode:
45 # prevent v1_api from talking to v2_registry.
46 if CONF.data_api == 'glance.db.simple.api':
47 api = importutils.import_module(CONF.data_api)
48 else:
49 api = importutils.import_module('glance.db.sqlalchemy.api')
50 else:
51 api = importutils.import_module(CONF.data_api)
36 def get_api():
37 api = importutils.import_module('glance.db.sqlalchemy.api')
5238
5339 if hasattr(api, 'configure'):
5440 api.configure()
7056 'min_disk', 'min_ram', 'is_public',
7157 'locations', 'checksum', 'owner',
7258 'protected'])
59 IMAGE_ATOMIC_PROPS = set(['os_glance_import_task'])
7360
7461
7562 class ImageRepo(object):
199186 image.image_id,
200187 image_values,
201188 purge_props=True,
202 from_state=from_state)
189 from_state=from_state,
190 atomic_props=(
191 IMAGE_ATOMIC_PROPS))
203192 except (exception.ImageNotFound, exception.Forbidden):
204193 msg = _("No image found with ID %s") % image.image_id
205194 raise exception.ImageNotFound(msg)
218207 # NOTE(markwash): don't update tags?
219208 new_values = self.db_api.image_destroy(self.context, image.image_id)
220209 image.updated_at = new_values['updated_at']
210
211 def set_property_atomic(self, image, name, value):
212 self.db_api.image_set_property_atomic(
213 image.image_id, name, value)
214
215 def delete_property_atomic(self, image, name, value):
216 self.db_api.image_delete_property_atomic(
217 image.image_id, name, value)
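The repo methods above simply forward to the driver's atomic calls; their contract — exactly one of several competing claimers succeeds, and release only works for the claimer that holds the expected value — can be sketched with an in-memory stand-in. This is a toy that assumes only the claim/release semantics, not the real `db_api`:

```python
import threading


class Duplicate(Exception):
    """Raised when the property is already claimed."""


class InMemoryPropertyStore:
    """Toy stand-in for the driver's atomic property calls."""

    def __init__(self):
        self._props = {}
        self._lock = threading.Lock()

    def set_property_atomic(self, image_id, name, value):
        # Only succeeds if no one else holds (image_id, name).
        with self._lock:
            if (image_id, name) in self._props:
                raise Duplicate()
            self._props[(image_id, name)] = value

    def delete_property_atomic(self, image_id, name, value):
        # Only succeeds if the property holds exactly the expected value.
        with self._lock:
            if self._props.get((image_id, name)) == value:
                del self._props[(image_id, name)]
                return
            raise KeyError(name)


store = InMemoryPropertyStore()
store.set_property_atomic('img-1', 'os_glance_import_task', 'task-A')
try:
    store.set_property_atomic('img-1', 'os_glance_import_task', 'task-B')
    winner = 'task-B'
except Duplicate:
    winner = 'task-A'  # the second claimer loses the race
```

Requiring the caller of `delete_property_atomic` to pass the value it expects is what makes the release safe: a task can only release a lock it actually owns.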
221218
222219
223220 class ImageProxy(glance.domain.proxy.Image):
4646 # Migration-related constants
4747 EXPAND_BRANCH = 'expand'
4848 CONTRACT_BRANCH = 'contract'
49 CURRENT_RELEASE = 'ussuri'
49 CURRENT_RELEASE = 'victoria'
5050 ALEMBIC_INIT_VERSION = 'liberty'
51 LATEST_REVISION = 'train_contract01'
51 LATEST_REVISION = 'ussuri_contract01'
5252 INIT_VERSION = 0
5353
5454 MIGRATE_REPO_PATH = os.path.join(
glance/db/registry/__init__.py deleted (empty file)
glance/db/registry/api.py deleted (546 lines)
0 # Copyright 2013 Red Hat, Inc.
1 # Copyright 2015 Mirantis, Inc.
2 # All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License"); you may
5 # not use this file except in compliance with the License. You may obtain
6 # a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13 # License for the specific language governing permissions and limitations
14 # under the License.
15
16 """
17 This is the Registry's Driver API.
18
19 This API relies on the registry RPC client (version >= 2). The functions below
20 work as a proxy for the database back-end configured in the registry service,
21 which means that everything returned by that back-end will be also returned by
22 this API.
23
24
25 This API exists for supporting deployments not willing to put database
26 credentials in glance-api. Those deployments can rely on this registry driver
27 that will talk to a remote registry service, which will then access the
28 database back-end.
29 """
30
31 import functools
32
33 from glance.db import utils as db_utils
34 from glance.registry.client.v2 import api
35
36
37 def configure():
38 api.configure_registry_client()
39
40
41 def _get_client(func):
42 """Injects a client instance to the each function
43
44 This decorator creates an instance of the Registry
45 client and passes it as an argument to each function
46 in this API.
47 """
48 @functools.wraps(func)
49 def wrapper(context, *args, **kwargs):
50 client = api.get_registry_client(context)
51 return func(client, *args, **kwargs)
52 return wrapper
53
54
55 @_get_client
56 def image_create(client, values, v1_mode=False):
57 """Create an image from the values dictionary."""
58 return client.image_create(values=values, v1_mode=v1_mode)
59
60
61 @_get_client
62 def image_update(client, image_id, values, purge_props=False, from_state=None,
63 v1_mode=False):
64 """
65 Set the given properties on an image and update it.
66
67 :raises ImageNotFound: if image does not exist.
68 """
69 return client.image_update(values=values,
70 image_id=image_id,
71 purge_props=purge_props,
72 from_state=from_state,
73 v1_mode=v1_mode)
74
75
76 @_get_client
77 def image_destroy(client, image_id):
78 """Destroy the image or raise if it does not exist."""
79 return client.image_destroy(image_id=image_id)
80
81
82 @_get_client
83 def image_get(client, image_id, force_show_deleted=False, v1_mode=False):
84 return client.image_get(image_id=image_id,
85 force_show_deleted=force_show_deleted,
86 v1_mode=v1_mode)
87
88
89 def is_image_visible(context, image, status=None):
90 """Return True if the image is visible in this context."""
91 return db_utils.is_image_visible(context, image, image_member_find, status)
92
93
94 @_get_client
95 def image_get_all(client, filters=None, marker=None, limit=None,
96 sort_key=None, sort_dir=None,
97 member_status='accepted', is_public=None,
98 admin_as_user=False, return_tag=False, v1_mode=False):
99 """
100 Get all images that match zero or more filters.
101
102 :param filters: dict of filter keys and values. If a 'properties'
103 key is present, it is treated as a dict of key/value
104 filters on the image properties attribute
105 :param marker: image id after which to start page
106 :param limit: maximum number of images to return
107 :param sort_key: image attribute by which results should be sorted
108 :param sort_dir: direction in which results should be sorted (asc, desc)
109 :param member_status: only return shared images that have this membership
110 status
111 :param is_public: If true, return only public images. If false, return
112 only private and shared images.
113 :param admin_as_user: For backwards compatibility. If true, then return to
114 an admin the equivalent set of images which it would see
115 if it were a regular user
116 :param return_tag: Indicates whether each image entry in the result
117 should include its relevant tag entries. This can improve
118 upper-layer query performance by avoiding separate calls
119 :param v1_mode: If true, mutates the 'visibility' value of each image
120 into the v1-compatible field 'is_public'
121 """
122 sort_key = ['created_at'] if not sort_key else sort_key
123 sort_dir = ['desc'] if not sort_dir else sort_dir
124 return client.image_get_all(filters=filters, marker=marker, limit=limit,
125 sort_key=sort_key, sort_dir=sort_dir,
126 member_status=member_status,
127 is_public=is_public,
128 admin_as_user=admin_as_user,
129 return_tag=return_tag,
130 v1_mode=v1_mode)
131
132
133 @_get_client
134 def image_property_create(client, values, session=None):
135 """Create an ImageProperty object"""
136 return client.image_property_create(values=values)
137
138
139 @_get_client
140 def image_property_delete(client, prop_ref, image_ref, session=None):
141 """
142 Used internally by _image_property_create and image_property_update
143 """
144 return client.image_property_delete(prop_ref=prop_ref, image_ref=image_ref)
145
146
147 @_get_client
148 def image_member_create(client, values, session=None):
149 """Create an ImageMember object"""
150 return client.image_member_create(values=values)
151
152
153 @_get_client
154 def image_member_update(client, memb_id, values):
155 """Update an ImageMember object"""
156 return client.image_member_update(memb_id=memb_id, values=values)
157
158
159 @_get_client
160 def image_member_delete(client, memb_id, session=None):
161 """Delete an ImageMember object"""
162 client.image_member_delete(memb_id=memb_id)
163
164
165 @_get_client
166 def image_member_find(client, image_id=None, member=None, status=None,
167 include_deleted=False):
168 """Find all members that meet the given criteria.
169
170 Note: currently include_deleted should be true only when creating a new
171 image membership, since a deleted image membership may exist between
172 the same image and tenant; in that case the membership will be reused.
173 It should be false in all other cases.
174
175 :param image_id: identifier of image entity
176 :param member: tenant to which membership has been granted
177 :include_deleted: A boolean indicating whether the result should include
178 the deleted record of image member
179 """
180 return client.image_member_find(image_id=image_id,
181 member=member,
182 status=status,
183 include_deleted=include_deleted)
184
185
186 @_get_client
187 def image_member_count(client, image_id):
188 """Return the number of image members for this image
189
190 :param image_id: identifier of image entity
191 """
192 return client.image_member_count(image_id=image_id)
193
194
195 @_get_client
196 def image_tag_set_all(client, image_id, tags):
197 client.image_tag_set_all(image_id=image_id, tags=tags)
198
199
200 @_get_client
201 def image_tag_create(client, image_id, value, session=None):
202 """Create an image tag."""
203 return client.image_tag_create(image_id=image_id, value=value)
204
205
206 @_get_client
207 def image_tag_delete(client, image_id, value, session=None):
208 """Delete an image tag."""
209 client.image_tag_delete(image_id=image_id, value=value)
210
211
212 @_get_client
213 def image_tag_get_all(client, image_id, session=None):
214 """Get a list of tags for a specific image."""
215 return client.image_tag_get_all(image_id=image_id)
216
217
218 @_get_client
219 def image_location_delete(client, image_id, location_id, status, session=None):
220 """Delete an image location."""
221 client.image_location_delete(image_id=image_id, location_id=location_id,
222 status=status)
223
224
225 @_get_client
226 def image_location_update(client, image_id, location, session=None):
227 """Update image location."""
228 client.image_location_update(image_id=image_id, location=location)
229
230
231 @_get_client
232 def user_get_storage_usage(client, owner_id, image_id=None, session=None):
233 return client.user_get_storage_usage(owner_id=owner_id, image_id=image_id)
234
235
236 @_get_client
237 def task_get(client, task_id, session=None, force_show_deleted=False):
238 """Get a single task object
239 :returns: task dictionary
240 """
241 return client.task_get(task_id=task_id, session=session,
242 force_show_deleted=force_show_deleted)
243
244
245 @_get_client
246 def task_get_all(client, filters=None, marker=None, limit=None,
247 sort_key='created_at', sort_dir='desc', admin_as_user=False):
248 """Get all tasks that match zero or more filters.
249
250 :param filters: dict of filter keys and values.
251 :param marker: task id after which to start page
252 :param limit: maximum number of tasks to return
253 :param sort_key: task attribute by which results should be sorted
254 :param sort_dir: direction in which results should be sorted (asc, desc)
255 :param admin_as_user: For backwards compatibility. If true, then return to
256 an admin the equivalent set of tasks which it would see
257 if it were a regular user
258 :returns: tasks set
259 """
260 return client.task_get_all(filters=filters, marker=marker, limit=limit,
261 sort_key=sort_key, sort_dir=sort_dir,
262 admin_as_user=admin_as_user)
263
264
265 @_get_client
266 def task_create(client, values, session=None):
267 """Create a task object"""
268 return client.task_create(values=values, session=session)
269
270
271 @_get_client
272 def task_delete(client, task_id, session=None):
273 """Delete a task object"""
274 return client.task_delete(task_id=task_id, session=session)
275
276
277 @_get_client
278 def task_update(client, task_id, values, session=None):
279 return client.task_update(task_id=task_id, values=values, session=session)
280
281
282 # Metadef
283 @_get_client
284 def metadef_namespace_get_all(
285 client, marker=None, limit=None, sort_key='created_at',
286 sort_dir=None, filters=None, session=None):
287 return client.metadef_namespace_get_all(
288 marker=marker, limit=limit,
289 sort_key=sort_key, sort_dir=sort_dir, filters=filters)
290
291
292 @_get_client
293 def metadef_namespace_get(client, namespace_name, session=None):
294 return client.metadef_namespace_get(namespace_name=namespace_name)
295
296
297 @_get_client
298 def metadef_namespace_create(client, values, session=None):
299 return client.metadef_namespace_create(values=values)
300
301
302 @_get_client
303 def metadef_namespace_update(
304 client, namespace_id, namespace_dict,
305 session=None):
306 return client.metadef_namespace_update(
307 namespace_id=namespace_id, namespace_dict=namespace_dict)
308
309
310 @_get_client
311 def metadef_namespace_delete(client, namespace_name, session=None):
312 return client.metadef_namespace_delete(
313 namespace_name=namespace_name)
314
315
316 @_get_client
317 def metadef_object_get_all(client, namespace_name, session=None):
318 return client.metadef_object_get_all(
319 namespace_name=namespace_name)
320
321
322 @_get_client
323 def metadef_object_get(
324 client,
325 namespace_name, object_name, session=None):
326 return client.metadef_object_get(
327 namespace_name=namespace_name, object_name=object_name)
328
329
330 @_get_client
331 def metadef_object_create(
332 client,
333 namespace_name, object_dict, session=None):
334 return client.metadef_object_create(
335 namespace_name=namespace_name, object_dict=object_dict)
336
337
338 @_get_client
339 def metadef_object_update(
340 client,
341 namespace_name, object_id,
342 object_dict, session=None):
343 return client.metadef_object_update(
344 namespace_name=namespace_name, object_id=object_id,
345 object_dict=object_dict)
346
347
348 @_get_client
349 def metadef_object_delete(
350 client,
351 namespace_name, object_name,
352 session=None):
353 return client.metadef_object_delete(
354 namespace_name=namespace_name, object_name=object_name)
355
356
357 @_get_client
358 def metadef_object_delete_namespace_content(
359 client,
360 namespace_name, session=None):
361 return client.metadef_object_delete_namespace_content(
362 namespace_name=namespace_name)
363
364
365 @_get_client
366 def metadef_object_count(
367 client,
368 namespace_name, session=None):
369 return client.metadef_object_count(
370 namespace_name=namespace_name)
371
372
373 @_get_client
374 def metadef_property_get_all(
375 client,
376 namespace_name, session=None):
377 return client.metadef_property_get_all(
378 namespace_name=namespace_name)
379
380
381 @_get_client
382 def metadef_property_get(
383 client,
384 namespace_name, property_name,
385 session=None):
386 return client.metadef_property_get(
387 namespace_name=namespace_name, property_name=property_name)
388
389
390 @_get_client
391 def metadef_property_create(
392 client,
393 namespace_name, property_dict,
394 session=None):
395 return client.metadef_property_create(
396 namespace_name=namespace_name, property_dict=property_dict)
397
398
399 @_get_client
400 def metadef_property_update(
401 client,
402 namespace_name, property_id,
403 property_dict, session=None):
404 return client.metadef_property_update(
405 namespace_name=namespace_name, property_id=property_id,
406 property_dict=property_dict)
407
408
409 @_get_client
410 def metadef_property_delete(
411 client,
412 namespace_name, property_name,
413 session=None):
414 return client.metadef_property_delete(
415 namespace_name=namespace_name, property_name=property_name)
416
417
418 @_get_client
419 def metadef_property_delete_namespace_content(
420 client,
421 namespace_name, session=None):
422 return client.metadef_property_delete_namespace_content(
423 namespace_name=namespace_name)
424
425
426 @_get_client
427 def metadef_property_count(
428 client,
429 namespace_name, session=None):
430 return client.metadef_property_count(
431 namespace_name=namespace_name)
432
433
434 @_get_client
435 def metadef_resource_type_create(client, values, session=None):
436 return client.metadef_resource_type_create(values=values)
437
438
439 @_get_client
440 def metadef_resource_type_get(
441 client,
442 resource_type_name, session=None):
443 return client.metadef_resource_type_get(
444 resource_type_name=resource_type_name)
445
446
447 @_get_client
448 def metadef_resource_type_get_all(client, session=None):
449 return client.metadef_resource_type_get_all()
450
451
452 @_get_client
453 def metadef_resource_type_delete(
454 client,
455 resource_type_name, session=None):
456 return client.metadef_resource_type_delete(
457 resource_type_name=resource_type_name)
458
459
460 @_get_client
461 def metadef_resource_type_association_get(
462 client,
463 namespace_name, resource_type_name,
464 session=None):
465 return client.metadef_resource_type_association_get(
466 namespace_name=namespace_name, resource_type_name=resource_type_name)
467
468
469 @_get_client
470 def metadef_resource_type_association_create(
471 client,
472 namespace_name, values, session=None):
473 return client.metadef_resource_type_association_create(
474 namespace_name=namespace_name, values=values)
475
476
477 @_get_client
478 def metadef_resource_type_association_delete(
479 client,
480 namespace_name, resource_type_name, session=None):
481 return client.metadef_resource_type_association_delete(
482 namespace_name=namespace_name, resource_type_name=resource_type_name)
483
484
485 @_get_client
486 def metadef_resource_type_association_get_all_by_namespace(
487 client,
488 namespace_name, session=None):
489 return client.metadef_resource_type_association_get_all_by_namespace(
490 namespace_name=namespace_name)
491
492
493 @_get_client
494 def metadef_tag_get_all(client, namespace_name, filters=None, marker=None,
495 limit=None, sort_key='created_at', sort_dir=None,
496 session=None):
497 return client.metadef_tag_get_all(
498 namespace_name=namespace_name, filters=filters, marker=marker,
499 limit=limit, sort_key=sort_key, sort_dir=sort_dir, session=session)
500
501
502 @_get_client
503 def metadef_tag_get(client, namespace_name, name, session=None):
504 return client.metadef_tag_get(
505 namespace_name=namespace_name, name=name)
506
507
508 @_get_client
509 def metadef_tag_create(
510 client, namespace_name, tag_dict, session=None):
511 return client.metadef_tag_create(
512 namespace_name=namespace_name, tag_dict=tag_dict)
513
514
515 @_get_client
516 def metadef_tag_create_tags(
517 client, namespace_name, tag_list, session=None):
518 return client.metadef_tag_create_tags(
519 namespace_name=namespace_name, tag_list=tag_list)
520
521
522 @_get_client
523 def metadef_tag_update(
524 client, namespace_name, id, tag_dict, session=None):
525 return client.metadef_tag_update(
526 namespace_name=namespace_name, id=id, tag_dict=tag_dict)
527
528
529 @_get_client
530 def metadef_tag_delete(
531 client, namespace_name, name, session=None):
532 return client.metadef_tag_delete(
533 namespace_name=namespace_name, name=name)
534
535
536 @_get_client
537 def metadef_tag_delete_namespace_content(
538 client, namespace_name, session=None):
539 return client.metadef_tag_delete_namespace_content(
540 namespace_name=namespace_name)
541
542
543 @_get_client
544 def metadef_tag_count(client, namespace_name, session=None):
545 return client.metadef_tag_count(namespace_name=namespace_name)
0 # flake8: noqa
1 # Note(jokke): SimpleDB is only used for unittests and #noqa
2 # has not been supported in production since moving
3 # to alembic migrations.
425425 return images
426426
427427
428 def image_set_property_atomic(image_id, name, value):
429 try:
430 image = DATA['images'][image_id]
431 except KeyError:
432 LOG.warn(_LW('Could not find image %s'), image_id)
433 raise exception.ImageNotFound()
434
435 prop = _image_property_format(image_id,
436 name,
437 value)
438 image['properties'].append(prop)
439
440
441 def image_delete_property_atomic(image_id, name, value):
442 try:
443 image = DATA['images'][image_id]
444 except KeyError:
445 LOG.warn(_LW('Could not find image %s'), image_id)
446 raise exception.ImageNotFound()
447
448 for i, prop in enumerate(image['properties']):
449 if prop['name'] == name and prop['value'] == value:
450 del image['properties'][i]
451 return
452
453 raise exception.NotFound()
454
455
428456 def _image_get(context, image_id, force_show_deleted=False, status=None):
429457 try:
430458 image = DATA['images'][image_id]
755783
756784 @log_call
757785 def image_update(context, image_id, image_values, purge_props=False,
758 from_state=None, v1_mode=False):
786 from_state=None, v1_mode=False, atomic_props=None):
759787 global DATA
760788 try:
761789 image = DATA['images'][image_id]
766794 if location_data is not None:
767795 _image_locations_set(context, image_id, location_data)
768796
797 if atomic_props is None:
798 atomic_props = []
799
769800 # replace values for properties that already exist
770801 new_properties = image_values.pop('properties', {})
771802 for prop in image['properties']:
772 if prop['name'] in new_properties:
803 if prop['name'] in atomic_props:
804 continue
805 elif prop['name'] in new_properties:
773806 prop['value'] = new_properties.pop(prop['name'])
774807 elif purge_props:
775808 # this matches weirdness in the sqlalchemy api
776809 prop['deleted'] = True
777810
778811 image['updated_at'] = timeutils.utcnow()
779 _image_update(image, image_values, new_properties)
812 _image_update(image, image_values,
813 {k: v for k, v in new_properties.items()
814 if k not in atomic_props})
780815 DATA['images'][image_id] = image
781816
782817 image = _normalize_locations(context, copy.deepcopy(image))
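The in-memory driver above now skips atomic properties both when updating existing entries and when applying new ones. The filtering step can be exercised on its own — `filter_atomic` is a hypothetical helper name, not part of the diff:

```python
def filter_atomic(new_properties, atomic_props):
    # Ordinary image updates must leave atomically-managed properties
    # alone; only the dedicated atomic calls may touch them.
    atomic_props = atomic_props or []
    return {k: v for k, v in new_properties.items()
            if k not in atomic_props}


filtered = filter_atomic(
    {'os_glance_import_task': 'task-A', 'os_distro': 'ubuntu'},
    ['os_glance_import_task'])
# 'os_glance_import_task' is dropped; 'os_distro' passes through
```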
1111 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
14
15 from __future__ import with_statement
1614 from logging import config as log_config
1715
1816 from alembic import context
150150
151151
152152 def image_update(context, image_id, values, purge_props=False,
153 from_state=None, v1_mode=False):
153 from_state=None, v1_mode=False, atomic_props=None):
154154 """
155155 Set the given properties on an image and update it.
156156
157157 :raises: ImageNotFound if image does not exist.
158158 """
159159 image = _image_update(context, values, image_id, purge_props,
160 from_state=from_state)
160 from_state=from_state, atomic_props=atomic_props)
161161 if v1_mode:
162162 image = db_utils.mutate_image_dict_to_v1(image)
163163 return image
777777 setattr(image_ref, k, values[k])
778778
779779
780 def image_set_property_atomic(image_id, name, value):
781 """
782 Atomically set an image property to a value.
783
784 This will only succeed if the property does not currently exist
785 and it is created successfully. This can be used by multiple
786 competing threads to ensure that only one of those threads
787 succeeds in creating the property.
788
789 Note that ImageProperty objects are marked as deleted=$id and so we must
790 first try to atomically update-and-undelete such a property, if it
791 exists. If that does not work, we should try to create the property. The
792 latter should fail with DBDuplicateEntry because of the UniqueConstraint
793 across ImageProperty(image_id, name).
794
795 :param image_id: The ID of the image on which to create the property
796 :param name: The property name
797 :param value: The value to set for the property
798 :raises Duplicate: If the property already exists
799 """
800 session = get_session()
801 with session.begin():
802 connection = session.connection()
803 table = models.ImageProperty.__table__
804
805 # This should be:
806 # UPDATE image_properties SET value=$value, deleted=0
807 # WHERE name=$name AND image_id=$image_id AND deleted!=0
808 result = connection.execute(table.update().where(
809 sa_sql.and_(table.c.name == name,
810 table.c.image_id == image_id,
811 table.c.deleted != 0)).values(
812 value=value, deleted=0))
813 if result.rowcount == 1:
814 # Found and updated a deleted property, so we win
815 return
816
817 # There might have been no deleted property, or the property
818 # exists and is undeleted, so try to create it and use that
819 # to determine if we've lost the race or not.
820
821 try:
822 connection.execute(table.insert(),
823 dict(deleted=False,
824 created_at=timeutils.utcnow(),
825 image_id=image_id,
826 name=name,
827 value=value))
828 except db_exception.DBDuplicateEntry:
829 # Lost the race to create the new property
830 raise exception.Duplicate()
831
832 # If we got here, we created a new row, UniqueConstraint would have
833 # caused us to fail if we lost the race
834
835
836 def image_delete_property_atomic(image_id, name, value):
837 """
838 Atomically delete an image property.
839
840 This will only succeed if the referenced image has a property set
841 to exactly the value provided.
842
843 :param image_id: The ID of the image on which to delete the property
844 :param name: The property name
845 :param value: The value the property is expected to be set to
846 :raises NotFound: If the property does not exist
847 """
848 session = get_session()
849 with session.begin():
850 connection = session.connection()
851 table = models.ImageProperty.__table__
852
853 result = connection.execute(table.delete().where(
854 sa_sql.and_(table.c.name == name,
855 table.c.value == value,
856 table.c.image_id == image_id,
857 table.c.deleted == 0)))
858 if result.rowcount == 1:
859 return
860
861 raise exception.NotFound()
862
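The two helpers above implement an update-then-insert compare-and-set: first try to revive an existing soft-deleted row, then fall back to an INSERT and let the unique constraint on (image_id, name) arbitrate between concurrent claimers. A stand-alone sketch of that pattern using stdlib `sqlite3` — the table shape and `Duplicate` exception here are simplified stand-ins, not Glance's real models:

```python
import sqlite3


class Duplicate(Exception):
    """Raised when another claimer already holds the property."""


conn = sqlite3.connect(':memory:')
conn.execute("""CREATE TABLE image_properties (
    image_id TEXT, name TEXT, value TEXT, deleted INTEGER NOT NULL,
    UNIQUE (image_id, name))""")


def image_set_property_atomic(image_id, name, value):
    # Step 1: atomically revive a soft-deleted row, if one exists.
    cur = conn.execute(
        "UPDATE image_properties SET value=?, deleted=0 "
        "WHERE name=? AND image_id=? AND deleted!=0",
        (value, name, image_id))
    if cur.rowcount == 1:
        return  # found and revived a deleted property: we win
    # Step 2: no deleted row, so INSERT and let the UNIQUE constraint
    # decide whether we won the race against a competing claimer.
    try:
        conn.execute(
            "INSERT INTO image_properties (image_id, name, value, deleted) "
            "VALUES (?, ?, ?, 0)", (image_id, name, value))
    except sqlite3.IntegrityError:
        raise Duplicate()


image_set_property_atomic('img-1', 'os_glance_import_task', 'task-A')
```

A second claimer for the same (image_id, name) now raises `Duplicate`, which is exactly the signal a losing thread needs to back off.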
863
780864 @retry(retry_on_exception=_retry_on_deadlock, wait_fixed=500,
781865 stop_max_attempt_number=50)
782866 @utils.no_4byte_params
783867 def _image_update(context, values, image_id, purge_props=False,
784 from_state=None):
868 from_state=None, atomic_props=None):
785869 """
786870 Used internally by image_create and image_update
787871
788872 :param context: Request context
789873 :param values: A dict of attributes to set
790874 :param image_id: If None, create the image, otherwise, find and update it
875 :param from_state: If not None, require the image to be in this state
876 in order to do the update
877 :param purge_props: If True, delete properties found in the database but
878 not present in values
879 :param atomic_props: If non-None, refuse to create or update properties
880 in this list
791881 """
792882
793883 # NOTE(jbresnah) values is altered in this so a copy is needed
878968 % values['id'])
879969
880970 _set_properties_for_image(context, image_ref, properties, purge_props,
881 session)
971 atomic_props, session)
882972
883973 if location_data:
884974 _image_locations_set(context, image_ref.id, location_data,
9931083
9941084 @utils.no_4byte_params
9951085 def _set_properties_for_image(context, image_ref, properties,
996 purge_props=False, session=None):
1086 purge_props=False, atomic_props=None,
1087 session=None):
9971088 """
9981089 Create or update a set of image_properties for a given image
9991090
10001091 :param context: Request context
10011092 :param image_ref: An Image object
10021093 :param properties: A dict of properties to set
1094 :param purge_props: If True, delete properties in the database
1095 that are not in properties
1096 :param atomic_props: If non-None, skip update/create/delete of properties
1097 named in this list
10031098 :param session: A SQLAlchemy session to use (if present)
10041099 """
1100
1101 if atomic_props is None:
1102 atomic_props = []
1103
10051104 orig_properties = {}
10061105 for prop_ref in image_ref.properties:
10071106 orig_properties[prop_ref.name] = prop_ref
10101109 prop_values = {'image_id': image_ref.id,
10111110 'name': name,
10121111 'value': value}
1013 if name in orig_properties:
1112 if name in atomic_props:
1113 # NOTE(danms): Never update or create properties in the list
1114 # of atomics
1115 continue
1116 elif name in orig_properties:
10141117 prop_ref = orig_properties[name]
10151118 _image_property_update(context, prop_ref, prop_values,
10161119 session=session)
10191122
10201123 if purge_props:
10211124 for key in orig_properties.keys():
1022 if key not in properties:
1125 if key in atomic_props:
1126 continue
1127 elif key not in properties:
10231128 prop_ref = orig_properties[key]
10241129 image_property_delete(context, prop_ref.name,
10251130 image_ref.id, session=session)
1313 # License for the specific language governing permissions and limitations
1414 # under the License.
1515
16 # TODO(smcginnis) update this once six has support for collections.abc
17 # (https://github.com/benjaminp/six/pull/241) or clean up once we drop py2.7.
18 try:
19 from collections.abc import MutableMapping
20 except ImportError:
21 from collections import MutableMapping
22
16 from collections import abc
2317 import datetime
2418 import uuid
2519
292286 raise NotImplementedError()
293287
294288
295 class ExtraProperties(MutableMapping, dict):
289 class ExtraProperties(abc.MutableMapping, dict):
296290
297291 def __getitem__(self, key):
298292 return dict.__getitem__(self, key)
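The switch to `collections.abc` matters because Python 3.10 removed the old aliases from `collections`; the mixin pattern itself is unchanged. A condensed, runnable version of the class — assuming, as the visible `__getitem__` suggests, that the five abstract methods all delegate to `dict`:

```python
from collections import abc


class ExtraProperties(abc.MutableMapping, dict):
    # dict provides concrete storage; MutableMapping supplies the mixin
    # methods (update, pop, setdefault, ...) on top of these five.
    def __getitem__(self, key):
        return dict.__getitem__(self, key)

    def __setitem__(self, key, value):
        return dict.__setitem__(self, key, value)

    def __delitem__(self, key):
        return dict.__delitem__(self, key)

    def __iter__(self):
        return dict.__iter__(self)

    def __len__(self):
        return dict.__len__(self)


props = ExtraProperties()
props.update({'os_distro': 'ubuntu'})
```

There is no metaclass conflict here: `ABCMeta` is a subclass of `type`, so inheriting from both the ABC and `dict` is legal.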
496490 class TaskExecutorFactory(object):
497491 eventlet_deprecation_warned = False
498492
499 def __init__(self, task_repo, image_repo, image_factory):
493 def __init__(self, task_repo, image_repo, image_factory, admin_repo=None):
500494 self.task_repo = task_repo
501495 self.image_repo = image_repo
502496 self.image_factory = image_factory
497 self.admin_repo = admin_repo
503498
504499 def new_task_executor(self, context):
505500 try:
525520 return executor(context,
526521 self.task_repo,
527522 self.image_repo,
528 self.image_factory)
523 self.image_factory,
524 admin_repo=self.admin_repo)
529525 except ImportError:
530526 with excutils.save_and_reraise_exception():
531527 LOG.exception(_LE("Failed to load the %s executor provided "
102102 base_item = self.helper.unproxy(item)
103103 result = self.base.remove(base_item)
104104 return self.helper.proxy(result)
105
106 def set_property_atomic(self, item, name, value):
107 msg = '%s is only valid for images' % __name__
108 assert hasattr(item, 'image_id'), msg
109 self.base.set_property_atomic(item, name, value)
110
111 def delete_property_atomic(self, item, name, value):
112 msg = '%s is only valid for images' % __name__
113 assert hasattr(item, 'image_id'), msg
114 self.base.delete_property_atomic(item, name, value)
105115
106116
107117 class MemberRepo(object):
132132 notifier_task_stub_repo, context)
133133 return authorized_task_stub_repo
134134
135 def get_task_executor_factory(self, context):
135 def get_task_executor_factory(self, context, admin_context=None):
136136 task_repo = self.get_task_repo(context)
137137 image_repo = self.get_repo(context)
138138 image_factory = self.get_image_factory(context)
139 if admin_context:
140 admin_repo = self.get_repo(admin_context)
141 else:
142 admin_repo = None
139143 return glance.domain.TaskExecutorFactory(task_repo,
140144 image_repo,
141 image_factory)
145 image_factory,
146 admin_repo=admin_repo)
142147
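The gateway change above threads an optional admin context down into the executor factory. The shape of that plumbing, reduced to a sketch — the class and method names come from the diff, but the repo objects are trivial stand-ins:

```python
class TaskExecutorFactory:
    """Sketch of the factory's new optional admin_repo argument."""

    def __init__(self, task_repo, image_repo, image_factory,
                 admin_repo=None):
        self.task_repo = task_repo
        self.image_repo = image_repo
        self.image_factory = image_factory
        self.admin_repo = admin_repo


class Gateway:
    def get_repo(self, context):
        # Stand-in for building the real image repo for a context.
        return ('repo-for', context)

    def get_task_executor_factory(self, context, admin_context=None):
        # Only build an admin repo when an admin context is supplied,
        # mirroring the conditional in the diff.
        admin_repo = self.get_repo(admin_context) if admin_context else None
        return TaskExecutorFactory(('tasks', context),
                                   ('images', context),
                                   ('factory', context),
                                   admin_repo=admin_repo)


gw = Gateway()
factory = gw.get_task_executor_factory('user-ctx')
elevated = gw.get_task_executor_factory('user-ctx',
                                        admin_context='admin-ctx')
```

Defaulting `admin_repo` to `None` keeps every existing caller working while letting the import flow opt in to an elevated repo.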
143148 def get_metadef_namespace_factory(self, context):
144149 ns_factory = glance.domain.MetadefNamespaceFactory()
8888 "glance/domain",
8989 "glance/image_cache",
9090 "glance/quota",
91 "glance/registry",
9291 "glance/store",
9392 "glance/tests",
9493 ]
1515 """
1616 Cache driver that uses SQLite to store information about cached images
1717 """
18
19 from __future__ import absolute_import
2018 from contextlib import contextmanager
2119 import os
2220 import sqlite3
4949 invalid/
5050 queue/
5151 """
52
53 from __future__ import absolute_import
5452 from contextlib import contextmanager
5553 import errno
5654 import os
1313 msgstr ""
1414 "Project-Id-Version: glance VERSION\n"
1515 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
16 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
16 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1717 "MIME-Version: 1.0\n"
1818 "Content-Type: text/plain; charset=UTF-8\n"
1919 "Content-Transfer-Encoding: 8bit\n"
11351135 msgstr "Umleitung auf %(uri)s für Autorisierung."
11361136
11371137 #, python-format
1138 msgid "Registry service can't use %s"
1139 msgstr "Registrierungsdienst kann %s nicht verwenden"
1140
1141 #, python-format
11421138 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
11431139 msgstr ""
11441140 "Registrierungsdatenbank wurde nicht ordnungsgemäß auf einem API-Server "
1313 msgstr ""
1414 "Project-Id-Version: glance VERSION\n"
1515 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
16 "POT-Creation-Date: 2020-04-23 13:13+0000\n"
16 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1717 "MIME-Version: 1.0\n"
1818 "Content-Type: text/plain; charset=UTF-8\n"
1919 "Content-Transfer-Encoding: 8bit\n"
47574757 msgstr "Redirecting to %(uri)s for authorisation."
47584758
47594759 #, python-format
4760 msgid "Registry service can't use %s"
4761 msgstr "Registry service can't use %s"
4762
4763 #, python-format
47644760 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
47654761 msgstr ""
47664762 "Registry was not configured correctly on API server. Reason: %(reason)s"
1111 msgstr ""
1212 "Project-Id-Version: glance VERSION\n"
1313 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
14 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
14 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1515 "MIME-Version: 1.0\n"
1616 "Content-Type: text/plain; charset=UTF-8\n"
1717 "Content-Transfer-Encoding: 8bit\n"
11061106 msgstr "Redirigiendo a %(uri)s para la autorización. "
11071107
11081108 #, python-format
1109 msgid "Registry service can't use %s"
1110 msgstr "El servicio de registro no puede usar %s"
1111
1112 #, python-format
11131109 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
11141110 msgstr ""
11151111 "El registro no se ha configurado correctamente en el servidor de API. Razón: "
1111 msgstr ""
1212 "Project-Id-Version: glance VERSION\n"
1313 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
14 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
14 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1515 "MIME-Version: 1.0\n"
1616 "Content-Type: text/plain; charset=UTF-8\n"
1717 "Content-Transfer-Encoding: 8bit\n"
11281128 msgstr "Redirection vers %(uri)s pour autorisation."
11291129
11301130 #, python-format
1131 msgid "Registry service can't use %s"
1132 msgstr "Le service de registre ne peut pas utiliser %s"
1133
1134 #, python-format
11351131 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
11361132 msgstr ""
11371133 "Le registre n'a pas été configuré correctement sur le serveur d'API. Cause : "
88 msgstr ""
99 "Project-Id-Version: glance VERSION\n"
1010 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
11 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
11 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1212 "MIME-Version: 1.0\n"
1313 "Content-Type: text/plain; charset=UTF-8\n"
1414 "Content-Transfer-Encoding: 8bit\n"
11151115 msgstr "Reindirizzamento a %(uri)s per l'autorizzazione."
11161116
11171117 #, python-format
1118 msgid "Registry service can't use %s"
1119 msgstr "Il servizio registro non può utilizzare %s"
1120
1121 #, python-format
11221118 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
11231119 msgstr ""
11241120 "Il registro non è stato configurato correttamente sul server API. Motivo: "
99 msgstr ""
1010 "Project-Id-Version: glance VERSION\n"
1111 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
12 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
12 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1313 "MIME-Version: 1.0\n"
1414 "Content-Type: text/plain; charset=UTF-8\n"
1515 "Content-Transfer-Encoding: 8bit\n"
12371237 msgstr "許可のために %(uri)s にリダイレクトしています。"
12381238
12391239 #, python-format
1240 msgid "Registry service can't use %s"
1241 msgstr "レジストリーサービスでは %s を使用できません"
1242
1243 #, python-format
12441240 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
12451241 msgstr ""
12461242 "レジストリーが API サーバーで正しく設定されていませんでした。理由: %(reason)s"
88 msgstr ""
99 "Project-Id-Version: glance VERSION\n"
1010 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
11 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
11 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1212 "MIME-Version: 1.0\n"
1313 "Content-Type: text/plain; charset=UTF-8\n"
1414 "Content-Transfer-Encoding: 8bit\n"
10571057 msgstr "권한 부여를 위해 %(uri)s(으)로 경로 재지정 중입니다."
10581058
10591059 #, python-format
1060 msgid "Registry service can't use %s"
1061 msgstr "레지스트리 서비스에서 %s을(를) 사용할 수 없음"
1062
1063 #, python-format
10641060 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
10651061 msgstr ""
10661062 "API 서버에서 레지스트리가 올바르게 구성되지 않았습니다. 이유: %(reason)s"
1111 msgstr ""
1212 "Project-Id-Version: glance VERSION\n"
1313 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
14 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
14 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1515 "MIME-Version: 1.0\n"
1616 "Content-Type: text/plain; charset=UTF-8\n"
1717 "Content-Transfer-Encoding: 8bit\n"
11001100 msgstr "Redirecionando para %(uri)s para obter autorização."
11011101
11021102 #, python-format
1103 msgid "Registry service can't use %s"
1104 msgstr "Serviço de registro não pode utilizar %s"
1105
1106 #, python-format
11071103 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
11081104 msgstr ""
11091105 "O registro não foi configurado corretamente no servidor de API. Motivo: "
22 msgstr ""
33 "Project-Id-Version: glance VERSION\n"
44 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
5 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
5 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
66 "MIME-Version: 1.0\n"
77 "Content-Type: text/plain; charset=UTF-8\n"
88 "Content-Transfer-Encoding: 8bit\n"
10791079 msgstr "Перенаправляется на %(uri)s для предоставления доступа."
10801080
10811081 #, python-format
1082 msgid "Registry service can't use %s"
1083 msgstr "Служба реестра не может использовать %s"
1084
1085 #, python-format
10861082 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
10871083 msgstr "Реестр настроен неправильно на сервере API. Причина: %(reason)s"
10881084
88 msgstr ""
99 "Project-Id-Version: glance VERSION\n"
1010 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
11 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
11 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1212 "MIME-Version: 1.0\n"
1313 "Content-Type: text/plain; charset=UTF-8\n"
1414 "Content-Transfer-Encoding: 8bit\n"
924924 msgstr "Yetkilendirme için %(uri)s adresine yeniden yönlendiriliyor."
925925
926926 #, python-format
927 msgid "Registry service can't use %s"
928 msgstr "Kayıt defteri servisi %s kullanamaz"
929
930 #, python-format
931927 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
932928 msgstr ""
933929 "Kayıt defteri API sunucusunda doğru bir şekilde yapılandırılamadı. Nedeni: "
1414 msgstr ""
1515 "Project-Id-Version: glance VERSION\n"
1616 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
17 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
17 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1818 "MIME-Version: 1.0\n"
1919 "Content-Type: text/plain; charset=UTF-8\n"
2020 "Content-Transfer-Encoding: 8bit\n"
10671067 msgstr "对于授权,正在重定向至 %(uri)s。"
10681068
10691069 #, python-format
1070 msgid "Registry service can't use %s"
1071 msgstr "注册服务无法使用 %s"
1072
1073 #, python-format
10741070 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
10751071 msgstr "API 服务器上未正确配置注册表。原因:%(reason)s"
10761072
77 msgstr ""
88 "Project-Id-Version: glance VERSION\n"
99 "Report-Msgid-Bugs-To: https://bugs.launchpad.net/openstack-i18n/\n"
10 "POT-Creation-Date: 2020-04-09 18:19+0000\n"
10 "POT-Creation-Date: 2020-06-09 14:43+0000\n"
1111 "MIME-Version: 1.0\n"
1212 "Content-Type: text/plain; charset=UTF-8\n"
1313 "Content-Transfer-Encoding: 8bit\n"
10131013 msgstr "正在重新導向至 %(uri)s 以進行授權。"
10141014
10151015 #, python-format
1016 msgid "Registry service can't use %s"
1017 msgstr "登錄服務無法使用 %s"
1018
1019 #, python-format
10201016 msgid "Registry was not configured correctly on API server. Reason: %(reason)s"
10211017 msgstr "API 伺服器上未正確地配置登錄。原因:%(reason)s"
10221018
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15 # TODO(smcginnis) update this once six has support for collections.abc
16 # (https://github.com/benjaminp/six/pull/241) or clean up once we drop py2.7.
17 try:
18 from collections.abc import MutableSequence
19 except ImportError:
20 from collections import MutableSequence
21
15 from collections import abc
2216 import copy
2317 import functools
2418
3226 from oslo_utils import excutils
3327
3428 from glance.common import exception
29 from glance.common import format_inspector
3530 from glance.common import utils
3631 import glance.domain.proxy
3732 from glance.i18n import _, _LE, _LI, _LW
191186
192187
193188 @functools.total_ordering
194 class StoreLocations(MutableSequence):
189 class StoreLocations(abc.MutableSequence):
195190 """
196191 The proxy for store location property. It takes responsibility for::
197192
555550 img_signature_key_type=key_type
556551 )
557552
553 if not self.image.virtual_size:
554 inspector = format_inspector.get_inspector(self.image.disk_format)
555 else:
556 # No need to do this again
557 inspector = None
558
559 if inspector and self.image.container_format == 'bare':
560 fmt = inspector()
561 data = format_inspector.InfoWrapper(data, fmt)
562 LOG.debug('Enabling in-flight format inspection for %s', fmt)
563 else:
564 fmt = None
565
558566 self._upload_to_store(data, verifier, backend, size)
567
568 if fmt and fmt.format_match and fmt.virtual_size:
569 self.image.virtual_size = fmt.virtual_size
570 LOG.info('Image format matched and virtual size computed: %i',
571 self.image.virtual_size)
572 elif fmt:
573 LOG.warning('Image format %s did not match; '
574 'unable to calculate virtual size',
575 self.image.disk_format)
576
559577 if set_active and self.image.status != 'active':
560578 self.image.status = 'active'
561579
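The hunk above enables in-flight format inspection: when the image has no known virtual size, the incoming data stream is wrapped so an inspector sees every chunk on its way to the store, and the computed virtual size is read back after `_upload_to_store` finishes. A stdlib-only sketch of that wrapping idea (`QcowSniffer`, `InspectingReader`, and `eat_chunk` are hypothetical stand-ins, not the `glance.common.format_inspector` API):

```python
import io


class QcowSniffer:
    """Toy inspector: checks the first four bytes for the qcow2 magic."""

    MAGIC = b'QFI\xfb'

    def __init__(self):
        self._head = b''

    def eat_chunk(self, chunk):
        # Only the first few bytes matter for magic detection.
        if len(self._head) < 4:
            self._head += chunk[:4 - len(self._head)]

    @property
    def format_match(self):
        return self._head == self.MAGIC


class InspectingReader:
    """Pass-through reader that feeds every chunk to an inspector,
    so inspection adds no extra read of the source stream."""

    def __init__(self, source, inspector):
        self._source = source
        self._inspector = inspector

    def read(self, size=-1):
        chunk = self._source.read(size)
        if chunk:
            self._inspector.eat_chunk(chunk)
        return chunk


sniffer = QcowSniffer()
data = InspectingReader(io.BytesIO(QcowSniffer.MAGIC + b'\x00' * 60), sniffer)
while data.read(16):   # simulate the store consuming the stream
    pass
```

The design point mirrored from the diff: inspection happens as a side effect of the single pass the upload already makes, which is why the result is only consulted after the upload call returns.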
1313
1414 __all__ = [
1515 'list_api_opts',
16 'list_registry_opts',
1716 'list_scrubber_opts',
1817 'list_cache_opts',
1918 'list_manage_opts',
3635 import glance.common.location_strategy
3736 import glance.common.location_strategy.store_type
3837 import glance.common.property_utils
39 import glance.common.rpc
4038 import glance.common.wsgi
4139 import glance.image_cache
4240 import glance.image_cache.drivers.sqlite
4341 import glance.notifier
44 import glance.registry
45 import glance.registry.client
46 import glance.registry.client.v1.api
4742 import glance.scrubber
4843
4944
5449 glance.common.config.common_opts,
5550 glance.common.location_strategy.location_strategy_opts,
5651 glance.common.property_utils.property_opts,
57 glance.common.rpc.rpc_opts,
5852 glance.common.wsgi.bind_opts,
5953 glance.common.wsgi.eventlet_opts,
6054 glance.common.wsgi.socket_opts,
6357 glance.image_cache.drivers.sqlite.sqlite_opts,
6458 glance.image_cache.image_cache_opts,
6559 glance.notifier.notifier_opts,
66 glance.registry.registry_addr_opts,
67 glance.registry.client.registry_client_ctx_opts,
68 glance.registry.client.registry_client_opts,
69 glance.registry.client.v1.api.registry_client_ctx_opts,
7060 glance.scrubber.scrubber_opts))),
7161 ('image_format', glance.common.config.image_format_opts),
7262 ('task', glance.common.config.task_opts),
7666 ('store_type_location_strategy',
7767 glance.common.location_strategy.store_type.store_type_opts),
7868 profiler.list_opts()[0],
79 ('paste_deploy', glance.common.config.paste_deploy_opts)
80 ]
81 _registry_opts = [
82 (None, list(itertools.chain(
83 glance.api.middleware.context.context_opts,
84 glance.common.config.common_opts,
85 glance.common.wsgi.bind_opts,
86 glance.common.wsgi.socket_opts,
87 glance.common.wsgi.wsgi_opts,
88 glance.common.wsgi.eventlet_opts))),
89 profiler.list_opts()[0],
90 ('paste_deploy', glance.common.config.paste_deploy_opts)
69 ('paste_deploy', glance.common.config.paste_deploy_opts),
70 ('wsgi', glance.common.config.wsgi_opts),
9171 ]
9272 _scrubber_opts = [
9373 (None, list(itertools.chain(
10080 (None, list(itertools.chain(
10181 glance.common.config.common_opts,
10282 glance.image_cache.drivers.sqlite.sqlite_opts,
103 glance.image_cache.image_cache_opts,
104 glance.registry.registry_addr_opts,
105 glance.registry.client.registry_client_opts,
106 glance.registry.client.registry_client_ctx_opts))),
83 glance.image_cache.image_cache_opts))),
10784 ]
10885 _manage_opts = [
10986 (None, [])
136113 return [(g, copy.deepcopy(o)) for g, o in _api_opts]
137114
138115
139 def list_registry_opts():
140 """Return a list of oslo_config options available in Glance Registry
141 service.
142 """
143 return [(g, copy.deepcopy(o)) for g, o in _registry_opts]
144
145
146116 def list_scrubber_opts():
147117 """Return a list of oslo_config options available in Glance Scrubber
148118 service.
2020 policy.RuleDefault(name="modify_image", check_str="rule:default"),
2121 policy.RuleDefault(name="publicize_image", check_str="role:admin"),
2222 policy.RuleDefault(name="communitize_image", check_str="rule:default"),
23 policy.RuleDefault(name="copy_from", check_str="rule:default"),
2423
2524 policy.RuleDefault(name="download_image", check_str="rule:default"),
2625 policy.RuleDefault(name="upload_image", check_str="rule:default"),
3938
4039 policy.RuleDefault(name="deactivate", check_str="rule:default"),
4140 policy.RuleDefault(name="reactivate", check_str="rule:default"),
41
42 policy.RuleDefault(name="copy_image", check_str="role:admin"),
4243 ]
4344
4445
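The policy hunk above removes the `copy_from` rule and adds `copy_image` with `check_str="role:admin"`, making the copy-existing-image operation admin-only by default. As a toy illustration of what a `role:<name>` check string means when evaluated against request credentials (this is not oslo.policy's parser, which supports a much richer grammar):

```python
def role_check_allows(check_str, credentials):
    """Evaluate a bare 'role:<name>' check string against a credentials
    dict. Anything else (e.g. 'rule:default') is treated as allow-all
    here; real deployments rely on oslo.policy for the full grammar."""
    kind, _, value = check_str.partition(':')
    if kind == 'role':
        return value in credentials.get('roles', [])
    return True


admin_creds = {'roles': ['admin', 'member']}
member_creds = {'roles': ['member']}
```

With the new default, a request carrying only the `member` role would fail the `copy_image` check, while operators can still relax the rule in their policy file.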
glance/registry/__init__.py  +0 -68  (file deleted)
0 # Copyright 2010-2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 Registry API
17 """
18
19 from oslo_config import cfg
20
21 from glance.i18n import _
22
23
24 registry_addr_opts = [
25 cfg.HostAddressOpt('registry_host',
26 default='0.0.0.0',
27 deprecated_for_removal=True,
28 deprecated_since="Queens",
29 deprecated_reason=_("""
30 Glance registry service is deprecated for removal.
31
32 More information can be found from the spec:
33 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
34 """),
35 help=_("""
36 Address the registry server is hosted on.
37
38 Possible values:
39 * A valid IP or hostname
40
41 Related options:
42 * None
43
44 """)),
45 cfg.PortOpt('registry_port', default=9191,
46 deprecated_for_removal=True,
47 deprecated_since="Queens",
48 deprecated_reason=_("""
49 Glance registry service is deprecated for removal.
50
51 More information can be found from the spec:
52 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
53 """),
54 help=_("""
55 Port the registry server is listening on.
56
57 Possible values:
58 * A valid port number
59
60 Related options:
61 * None
62
63 """)),
64 ]
65
66 CONF = cfg.CONF
67 CONF.register_opts(registry_addr_opts)
glance/registry/api/__init__.py  +0 -40  (file deleted)
0 # Copyright 2014 Hewlett-Packard Development Company, L.P.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 import debtcollector
16 from oslo_config import cfg
17
18 from glance.common import wsgi
19 from glance.registry.api import v1
20 from glance.registry.api import v2
21
22 CONF = cfg.CONF
23 CONF.import_opt('enable_v1_registry', 'glance.common.config')
24 CONF.import_opt('enable_v2_registry', 'glance.common.config')
25
26
27 class API(wsgi.Router):
28 """WSGI entry point for all Registry requests."""
29
30 def __init__(self, mapper):
31 mapper = mapper or wsgi.APIMapper()
32 if CONF.enable_v1_registry:
33 v1.init(mapper)
34 if CONF.enable_v2_registry:
35 debtcollector.deprecate("Glance Registry service has been "
36 "deprecated for removal.")
37 v2.init(mapper)
38
39 super(API, self).__init__(mapper)
glance/registry/api/v1/__init__.py  +0 -91  (file deleted)
0 # Copyright 2010-2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from glance.common import wsgi
16 from glance.registry.api.v1 import images
17 from glance.registry.api.v1 import members
18
19
20 def init(mapper):
21 images_resource = images.create_resource()
22
23 mapper.connect("/",
24 controller=images_resource,
25 action="index")
26 mapper.connect("/images",
27 controller=images_resource,
28 action="index",
29 conditions={'method': ['GET']})
30 mapper.connect("/images",
31 controller=images_resource,
32 action="create",
33 conditions={'method': ['POST']})
34 mapper.connect("/images/detail",
35 controller=images_resource,
36 action="detail",
37 conditions={'method': ['GET']})
38 mapper.connect("/images/{id}",
39 controller=images_resource,
40 action="show",
41 conditions=dict(method=["GET"]))
42 mapper.connect("/images/{id}",
43 controller=images_resource,
44 action="update",
45 conditions=dict(method=["PUT"]))
46 mapper.connect("/images/{id}",
47 controller=images_resource,
48 action="delete",
49 conditions=dict(method=["DELETE"]))
50
51 members_resource = members.create_resource()
52
53 mapper.connect("/images/{image_id}/members",
54 controller=members_resource,
55 action="index",
56 conditions={'method': ['GET']})
57 mapper.connect("/images/{image_id}/members",
58 controller=members_resource,
59 action="create",
60 conditions={'method': ['POST']})
61 mapper.connect("/images/{image_id}/members",
62 controller=members_resource,
63 action="update_all",
64 conditions=dict(method=["PUT"]))
65 mapper.connect("/images/{image_id}/members/{id}",
66 controller=members_resource,
67 action="show",
68 conditions={'method': ['GET']})
69 mapper.connect("/images/{image_id}/members/{id}",
70 controller=members_resource,
71 action="update",
72 conditions={'method': ['PUT']})
73 mapper.connect("/images/{image_id}/members/{id}",
74 controller=members_resource,
75 action="delete",
76 conditions={'method': ['DELETE']})
77 mapper.connect("/shared-images/{id}",
78 controller=members_resource,
79 action="index_shared_images")
80
81
82 class API(wsgi.Router):
83 """WSGI entry point for all Registry requests."""
84
85 def __init__(self, mapper):
86 mapper = mapper or wsgi.APIMapper()
87
88 init(mapper)
89
90 super(API, self).__init__(mapper)
glance/registry/api/v1/images.py  +0 -569  (file deleted)
0 # Copyright 2010-2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 Reference implementation registry server WSGI controller
17 """
18
19 from oslo_config import cfg
20 from oslo_log import log as logging
21 from oslo_utils import encodeutils
22 from oslo_utils import strutils
23 from oslo_utils import uuidutils
24 from webob import exc
25
26 from glance.common import exception
27 from glance.common import timeutils
28 from glance.common import utils
29 from glance.common import wsgi
30 import glance.db
31 from glance.i18n import _, _LE, _LI, _LW
32
33
34 LOG = logging.getLogger(__name__)
35
36 CONF = cfg.CONF
37
38 DISPLAY_FIELDS_IN_INDEX = ['id', 'name', 'size',
39 'disk_format', 'container_format',
40 'checksum']
41
42 SUPPORTED_FILTERS = ['name', 'status', 'container_format', 'disk_format',
43 'min_ram', 'min_disk', 'size_min', 'size_max',
44 'changes-since', 'protected']
45
46 SUPPORTED_SORT_KEYS = ('name', 'status', 'container_format', 'disk_format',
47 'size', 'id', 'created_at', 'updated_at')
48
49 SUPPORTED_SORT_DIRS = ('asc', 'desc')
50
51 SUPPORTED_PARAMS = ('limit', 'marker', 'sort_key', 'sort_dir')
52
53
54 def _normalize_image_location_for_db(image_data):
55 """
56 This function takes the legacy locations field and the newly added
57 location_data field from the image_data values dictionary which flows
58 over the wire between the registry and API servers and converts it
59 into the location_data format only which is then consumable by the
60 Image object.
61
62 :param image_data: a dict of values representing information in the image
63 :returns: a new image data dict
64 """
65 if 'locations' not in image_data and 'location_data' not in image_data:
66 image_data['locations'] = None
67 return image_data
68
69 locations = image_data.pop('locations', [])
70 location_data = image_data.pop('location_data', [])
71
72 location_data_dict = {}
73 for l in locations:
74 location_data_dict[l] = {}
75 for l in location_data:
76 location_data_dict[l['url']] = {'metadata': l['metadata'],
77 'status': l['status'],
78 # Note(zhiyan): New location has no ID.
79 'id': l['id'] if 'id' in l else None}
80
81 # NOTE(jbresnah) preserve original order. tests assume original order,
82 # should that be defined functionality
83 ordered_keys = locations[:]
84 for ld in location_data:
85 if ld['url'] not in ordered_keys:
86 ordered_keys.append(ld['url'])
87
88 location_data = []
89 for loc in ordered_keys:
90 data = location_data_dict[loc]
91 if data:
92 location_data.append({'url': loc,
93 'metadata': data['metadata'],
94 'status': data['status'],
95 'id': data['id']})
96 else:
97 location_data.append({'url': loc,
98 'metadata': {},
99 'status': 'active',
100 'id': None})
101
102 image_data['locations'] = location_data
103 return image_data
104
105
106 class Controller(object):
107
108 def __init__(self):
109 self.db_api = glance.db.get_api()
110
111 def _get_images(self, context, filters, **params):
112 """Get images, wrapping in exception if necessary."""
113 # NOTE(markwash): for backwards compatibility, is_public=True for
114 # admins actually means "treat me as if I'm not an admin and show me
115 # all my images"
116 if context.is_admin and params.get('is_public') is True:
117 params['admin_as_user'] = True
118 del params['is_public']
119 try:
120 return self.db_api.image_get_all(context, filters=filters,
121 v1_mode=True, **params)
122 except exception.ImageNotFound:
123 LOG.warn(_LW("Invalid marker. Image %(id)s could not be "
124 "found."), {'id': params.get('marker')})
125 msg = _("Invalid marker. Image could not be found.")
126 raise exc.HTTPBadRequest(explanation=msg)
127 except exception.Forbidden:
128 LOG.warn(_LW("Access denied to image %(id)s but returning "
129 "'not found'"), {'id': params.get('marker')})
130 msg = _("Invalid marker. Image could not be found.")
131 raise exc.HTTPBadRequest(explanation=msg)
132 except Exception:
133 LOG.exception(_LE("Unable to get images"))
134 raise
135
136 def index(self, req):
137 """Return a basic filtered list of public, non-deleted images
138
139 :param req: the Request object coming from the wsgi layer
140 :returns: a mapping of the following form
141
142 .. code-block:: python
143
144 dict(images=[image_list])
145
146 Where image_list is a sequence of mappings
147
148 ::
149
150 {
151 'id': <ID>,
152 'name': <NAME>,
153 'size': <SIZE>,
154 'disk_format': <DISK_FORMAT>,
155 'container_format': <CONTAINER_FORMAT>,
156 'checksum': <CHECKSUM>
157 }
158
159 """
160 params = self._get_query_params(req)
161 images = self._get_images(req.context, **params)
162
163 results = []
164 for image in images:
165 result = {}
166 for field in DISPLAY_FIELDS_IN_INDEX:
167 result[field] = image[field]
168 results.append(result)
169
170 LOG.debug("Returning image list")
171 return dict(images=results)
172
173 def detail(self, req):
174 """Return a filtered list of public, non-deleted images in detail
175
176 :param req: the Request object coming from the wsgi layer
177 :returns: a mapping of the following form
178
179 ::
180
181 {'images':
182 [{
183 'id': <ID>,
184 'name': <NAME>,
185 'size': <SIZE>,
186 'disk_format': <DISK_FORMAT>,
187 'container_format': <CONTAINER_FORMAT>,
188 'checksum': <CHECKSUM>,
189 'min_disk': <MIN_DISK>,
190 'min_ram': <MIN_RAM>,
191 'store': <STORE>,
192 'status': <STATUS>,
193 'created_at': <TIMESTAMP>,
194 'updated_at': <TIMESTAMP>,
195 'deleted_at': <TIMESTAMP>|<NONE>,
196 'properties': {'distro': 'Ubuntu 10.04 LTS', {...}}
197 }, {...}]
198 }
199
200 """
201 params = self._get_query_params(req)
202
203 images = self._get_images(req.context, **params)
204 image_dicts = [make_image_dict(i) for i in images]
205 LOG.debug("Returning detailed image list")
206 return dict(images=image_dicts)
207
208 def _get_query_params(self, req):
209 """Extract necessary query parameters from http request.
210
211 :param req: the Request object coming from the wsgi layer
212 :returns: dictionary of filters to apply to list of images
213 """
214 params = {
215 'filters': self._get_filters(req),
216 'limit': self._get_limit(req),
217 'sort_key': [self._get_sort_key(req)],
218 'sort_dir': [self._get_sort_dir(req)],
219 'marker': self._get_marker(req),
220 }
221
222 if req.context.is_admin:
223 # Only admin gets to look for non-public images
224 params['is_public'] = self._get_is_public(req)
225
226 # need to copy items because the params dict is modified in the loop body
227 items = list(params.items())
228 for key, value in items:
229 if value is None:
230 del params[key]
231
232 # Fix for LP Bug #1132294
233 # Ensure all shared images are returned in v1
234 params['member_status'] = 'all'
235 return params
236
237 def _get_filters(self, req):
238 """Return a dictionary of query param filters from the request
239
240 :param req: the Request object coming from the wsgi layer
241 :returns: a dict of key/value filters
242 """
243 filters = {}
244 properties = {}
245
246 for param in req.params:
247 if param in SUPPORTED_FILTERS:
248 filters[param] = req.params.get(param)
249 if param.startswith('property-'):
250 _param = param[9:]
251 properties[_param] = req.params.get(param)
252
253 if 'changes-since' in filters:
254 isotime = filters['changes-since']
255 try:
256 filters['changes-since'] = timeutils.parse_isotime(isotime)
257 except ValueError:
258 raise exc.HTTPBadRequest(_("Unrecognized changes-since value"))
259
260 if 'protected' in filters:
261 value = self._get_bool(filters['protected'])
262 if value is None:
263 raise exc.HTTPBadRequest(_("protected must be True, or "
264 "False"))
265
266 filters['protected'] = value
267
268 # only allow admins to filter on 'deleted'
269 if req.context.is_admin:
270 deleted_filter = self._parse_deleted_filter(req)
271 if deleted_filter is not None:
272 filters['deleted'] = deleted_filter
273 elif 'changes-since' not in filters:
274 filters['deleted'] = False
275 elif 'changes-since' not in filters:
276 filters['deleted'] = False
277
278 if properties:
279 filters['properties'] = properties
280
281 return filters
282
283 def _get_limit(self, req):
284 """Parse a limit query param into something usable."""
285 try:
286 limit = int(req.params.get('limit', CONF.limit_param_default))
287 except ValueError:
288 raise exc.HTTPBadRequest(_("limit param must be an integer"))
289
290 if limit < 0:
291 raise exc.HTTPBadRequest(_("limit param must be positive"))
292
293 return min(CONF.api_limit_max, limit)
294
295 def _get_marker(self, req):
296 """Parse a marker query param into something usable."""
297 marker = req.params.get('marker')
298
299 if marker and not uuidutils.is_uuid_like(marker):
300 msg = _('Invalid marker format')
301 raise exc.HTTPBadRequest(explanation=msg)
302
303 return marker
304
305 def _get_sort_key(self, req):
306 """Parse a sort key query param from the request object."""
307 sort_key = req.params.get('sort_key', 'created_at')
308 if sort_key is not None and sort_key not in SUPPORTED_SORT_KEYS:
309 _keys = ', '.join(SUPPORTED_SORT_KEYS)
310 msg = _("Unsupported sort_key. Acceptable values: %s") % (_keys,)
311 raise exc.HTTPBadRequest(explanation=msg)
312 return sort_key
313
314 def _get_sort_dir(self, req):
315 """Parse a sort direction query param from the request object."""
316 sort_dir = req.params.get('sort_dir', 'desc')
317 if sort_dir is not None and sort_dir not in SUPPORTED_SORT_DIRS:
318 _keys = ', '.join(SUPPORTED_SORT_DIRS)
319 msg = _("Unsupported sort_dir. Acceptable values: %s") % (_keys,)
320 raise exc.HTTPBadRequest(explanation=msg)
321 return sort_dir
322
323 def _get_bool(self, value):
324 value = value.lower()
325 if value == 'true' or value == '1':
326 return True
327 elif value == 'false' or value == '0':
328 return False
329
330 return None
331
332 def _get_is_public(self, req):
333 """Parse is_public into something usable."""
334 is_public = req.params.get('is_public')
335
336 if is_public is None:
337 # NOTE(vish): This preserves the default value of showing only
338 # public images.
339 return True
340 elif is_public.lower() == 'none':
341 return None
342
343 value = self._get_bool(is_public)
344 if value is None:
345 raise exc.HTTPBadRequest(_("is_public must be None, True, or "
346 "False"))
347
348 return value
349
350 def _parse_deleted_filter(self, req):
351 """Parse deleted into something usable."""
352 deleted = req.params.get('deleted')
353 if deleted is None:
354 return None
355 return strutils.bool_from_string(deleted)
356
357 def show(self, req, id):
358 """Return data about the given image id."""
359 try:
360 image = self.db_api.image_get(req.context, id, v1_mode=True)
361 LOG.debug("Successfully retrieved image %(id)s", {'id': id})
362 except exception.ImageNotFound:
363 LOG.info(_LI("Image %(id)s not found"), {'id': id})
364 raise exc.HTTPNotFound()
365 except exception.Forbidden:
366 # If it's private and doesn't belong to them, don't let on
367 # that it exists
368 LOG.info(_LI("Access denied to image %(id)s but returning"
369 " 'not found'"), {'id': id})
370 raise exc.HTTPNotFound()
371 except Exception:
372 LOG.exception(_LE("Unable to show image %s"), id)
373 raise
374
375 return dict(image=make_image_dict(image))
376
377 @utils.mutating
378 def delete(self, req, id):
379 """Deletes an existing image with the registry.
380
381 :param req: wsgi Request object
382 :param id: The opaque internal identifier for the image
383
384 :returns: 200 if delete was successful, a fault if not. On
385 success, the body contains the deleted image
386 information as a mapping.
387 """
388 try:
389 deleted_image = self.db_api.image_destroy(req.context, id)
390 LOG.info(_LI("Successfully deleted image %(id)s"), {'id': id})
391 return dict(image=make_image_dict(deleted_image))
392 except exception.ForbiddenPublicImage:
393 LOG.info(_LI("Delete denied for public image %(id)s"), {'id': id})
394 raise exc.HTTPForbidden()
395 except exception.Forbidden:
396 # If it's private and doesn't belong to them, don't let on
397 # that it exists
398 LOG.info(_LI("Access denied to image %(id)s but returning"
399 " 'not found'"), {'id': id})
400 raise exc.HTTPNotFound()
401 except exception.ImageNotFound:
402 LOG.info(_LI("Image %(id)s not found"), {'id': id})
403 raise exc.HTTPNotFound()
404 except Exception:
405 LOG.exception(_LE("Unable to delete image %s"), id)
406 raise
407
408 @utils.mutating
409 def create(self, req, body):
410 """Registers a new image with the registry.
411
412 :param req: wsgi Request object
413 :param body: Dictionary of information about the image
414
415 :returns: The newly-created image information as a mapping,
416 which will include the newly-created image's internal id
417 in the 'id' field
418 """
419 image_data = body['image']
420
421 # Ensure the image has a status set
422 image_data.setdefault('status', 'active')
423
424 # Set up the image owner
425 if not req.context.is_admin or 'owner' not in image_data:
426 image_data['owner'] = req.context.owner
427
428 image_id = image_data.get('id')
429 if image_id and not uuidutils.is_uuid_like(image_id):
430 LOG.info(_LI("Rejecting image creation request for invalid image "
431 "id '%(bad_id)s'"), {'bad_id': image_id})
432 msg = _("Invalid image id format")
433 raise exc.HTTPBadRequest(explanation=msg)
434
435 if 'location' in image_data:
436 image_data['locations'] = [image_data.pop('location')]
437
438 try:
439 image_data = _normalize_image_location_for_db(image_data)
440 image_data = self.db_api.image_create(req.context, image_data,
441 v1_mode=True)
442 image_data = dict(image=make_image_dict(image_data))
443 LOG.info(_LI("Successfully created image %(id)s"),
444 {'id': image_data['image']['id']})
445 return image_data
446 except exception.Duplicate:
447 msg = _("Image with identifier %s already exists!") % image_id
448 LOG.warn(msg)
449 raise exc.HTTPConflict(msg)
450 except exception.Invalid as e:
451 msg = (_("Failed to add image metadata. "
452 "Got error: %s") % encodeutils.exception_to_unicode(e))
453 LOG.error(msg)
454 raise exc.HTTPBadRequest(msg)
455 except Exception:
456 LOG.exception(_LE("Unable to create image %s"), image_id)
457 raise
458
459 @utils.mutating
460 def update(self, req, id, body):
461 """Updates an existing image with the registry.
462
463 :param req: wsgi Request object
464 :param body: Dictionary of information about the image
465 :param id: The opaque internal identifier for the image
466
467 :returns: The updated image information as a mapping
468 """
469 image_data = body['image']
470 from_state = body.get('from_state')
471
472 # Prohibit modification of 'owner'
473 if not req.context.is_admin and 'owner' in image_data:
474 del image_data['owner']
475
476 if 'location' in image_data:
477 image_data['locations'] = [image_data.pop('location')]
478
479 purge_props = req.headers.get("X-Glance-Registry-Purge-Props", "false")
480 try:
481 # These fields hold sensitive data, which should not be printed in
482 # the logs.
483 sensitive_fields = ['locations', 'location_data']
484 LOG.debug("Updating image %(id)s with metadata: %(image_data)r",
485 {'id': id,
486 'image_data': {k: v for k, v in image_data.items()
487 if k not in sensitive_fields}})
488 image_data = _normalize_image_location_for_db(image_data)
489 if purge_props == "true":
490 purge_props = True
491 else:
492 purge_props = False
493
494 updated_image = self.db_api.image_update(req.context, id,
495 image_data,
496 purge_props=purge_props,
497 from_state=from_state,
498 v1_mode=True)
499
500 LOG.info(_LI("Updated metadata for image %(id)s"), {'id': id})
501 return dict(image=make_image_dict(updated_image))
502 except exception.Invalid as e:
503 msg = (_("Failed to update image metadata. "
504 "Got error: %s") % encodeutils.exception_to_unicode(e))
505 LOG.error(msg)
506 raise exc.HTTPBadRequest(msg)
507 except exception.ImageNotFound:
508 LOG.info(_LI("Image %(id)s not found"), {'id': id})
509 raise exc.HTTPNotFound(body='Image not found',
510 request=req,
511 content_type='text/plain')
512 except exception.ForbiddenPublicImage:
513 LOG.info(_LI("Update denied for public image %(id)s"), {'id': id})
514 raise exc.HTTPForbidden()
515 except exception.Forbidden:
516 # If it's private and doesn't belong to them, don't let on
517 # that it exists
518 LOG.info(_LI("Access denied to image %(id)s but returning"
519 " 'not found'"), {'id': id})
520 raise exc.HTTPNotFound(body='Image not found',
521 request=req,
522 content_type='text/plain')
523 except exception.Conflict as e:
524 LOG.info(encodeutils.exception_to_unicode(e))
525 raise exc.HTTPConflict(body='Image operation conflicts',
526 request=req,
527 content_type='text/plain')
528 except Exception:
529 LOG.exception(_LE("Unable to update image %s"), id)
530 raise
531
532
533 def _limit_locations(image):
534 locations = image.pop('locations', [])
535 image['location_data'] = locations
536 image['location'] = None
537 for loc in locations:
538 if loc['status'] == 'active':
539 image['location'] = loc['url']
540 break
541
542
543 def make_image_dict(image):
544 """Create a dict representation of an image which we can use to
545 serialize the image.
546 """
547
548 def _fetch_attrs(d, attrs):
549 return {a: d[a] for a in attrs if a in d.keys()}
550
551 # TODO(sirp): should this be a dict, or a list of dicts?
552 # A plain dict is more convenient, but list of dicts would provide
553 # access to created_at, etc
554 properties = {p['name']: p['value'] for p in image['properties']
555 if not p['deleted']}
556
557 image_dict = _fetch_attrs(image, glance.db.IMAGE_ATTRS)
558 image_dict['properties'] = properties
559 _limit_locations(image_dict)
560
561 return image_dict
562
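Together, `make_image_dict` and `_limit_locations` shape the v1 response: the full location list moves under `location_data`, while the legacy scalar `location` field gets only the URL of the first *active* location. A self-contained walk-through (the sample image data is illustrative):

```python
# Copy of _limit_locations above, exercised with illustrative data: the
# v1 API exposes only the first active location as the legacy 'location'.
def _limit_locations(image):
    locations = image.pop('locations', [])
    image['location_data'] = locations
    image['location'] = None
    for loc in locations:
        if loc['status'] == 'active':
            image['location'] = loc['url']
            break

img = {'id': 'abc', 'locations': [
    {'url': 'file:///old', 'status': 'deleted'},
    {'url': 'rbd://pool/img', 'status': 'active'},
    {'url': 'http://mirror/img', 'status': 'active'},
]}
_limit_locations(img)
# img['location'] now holds the first active URL; the 'locations' key
# is gone and the full list lives under 'location_data'.
```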
563
564 def create_resource():
565 """Images resource factory method."""
566 deserializer = wsgi.JSONRequestDeserializer()
567 serializer = wsgi.JSONResponseSerializer()
568 return wsgi.Resource(Controller(), deserializer, serializer)
+0
-366
glance/registry/api/v1/members.py
0 # Copyright 2010-2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from oslo_log import log as logging
16 from oslo_utils import encodeutils
17 import webob.exc
18
19 from glance.common import exception
20 from glance.common import utils
21 from glance.common import wsgi
22 import glance.db
23 from glance.i18n import _, _LI, _LW
24
25
26 LOG = logging.getLogger(__name__)
27
28
29 class Controller(object):
30
31 def _check_can_access_image_members(self, context):
32 if context.owner is None and not context.is_admin:
33 raise webob.exc.HTTPUnauthorized(_("No authenticated user"))
34
35 def __init__(self):
36 self.db_api = glance.db.get_api()
37
38 def is_image_sharable(self, context, image):
39 """Return True if the image can be shared to others in this context."""
40 # Is admin == image sharable
41 if context.is_admin:
42 return True
43
44 # Only allow sharing if we have an owner
45 if context.owner is None:
46 return False
47
48 # If we own the image, we can share it
49 if context.owner == image['owner']:
50 return True
51
52 members = self.db_api.image_member_find(context,
53 image_id=image['id'],
54 member=context.owner)
55 if members:
56 return members[0]['can_share']
57
58 return False
59
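The sharing policy in `is_image_sharable` reduces to a small pure function. This sketch (the name and signature are illustrative, not the glance API) makes the precedence explicit: admin, then owner, then an existing membership's `can_share` flag, then deny:

```python
# Illustrative pure-function restatement of the sharing decision above.
def can_share_image(is_admin, context_owner, image_owner, memberships):
    if is_admin:                      # admins can always share
        return True
    if context_owner is None:         # anonymous contexts never can
        return False
    if context_owner == image_owner:  # owners can share their images
        return True
    for m in memberships:             # otherwise defer to the membership
        if m.get('member') == context_owner:
            return m.get('can_share', False)
    return False
```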
60 def index(self, req, image_id):
61 """
62 Get the members of an image.
63 """
64 try:
65 self.db_api.image_get(req.context, image_id, v1_mode=True)
66 except exception.NotFound:
67 msg = _("Image %(id)s not found") % {'id': image_id}
68 LOG.warn(msg)
69 raise webob.exc.HTTPNotFound(msg)
70 except exception.Forbidden:
71 # If it's private and doesn't belong to them, don't let on
72 # that it exists
73 msg = _LW("Access denied to image %(id)s but returning"
74 " 'not found'") % {'id': image_id}
75 LOG.warn(msg)
76 raise webob.exc.HTTPNotFound()
77
78 members = self.db_api.image_member_find(req.context, image_id=image_id)
79 LOG.debug("Returning member list for image %(id)s", {'id': image_id})
80 return dict(members=make_member_list(members,
81 member_id='member',
82 can_share='can_share'))
83
84 @utils.mutating
85 def update_all(self, req, image_id, body):
86 """
87 Replaces the members of the image with those specified in the
88 body. The body is a dict with the following format::
89
90 {'memberships': [
91 {'member_id': <MEMBER_ID>,
92 ['can_share': [True|False]]}, ...
93 ]}
94 """
95 self._check_can_access_image_members(req.context)
96
97 # Make sure the image exists
98 try:
99 image = self.db_api.image_get(req.context, image_id, v1_mode=True)
100 except exception.NotFound:
101 msg = _("Image %(id)s not found") % {'id': image_id}
102 LOG.warn(msg)
103 raise webob.exc.HTTPNotFound(msg)
104 except exception.Forbidden:
105 # If it's private and doesn't belong to them, don't let on
106 # that it exists
107 msg = _LW("Access denied to image %(id)s but returning"
108 " 'not found'") % {'id': image_id}
109 LOG.warn(msg)
110 raise webob.exc.HTTPNotFound()
111
112 # Can they manipulate the membership?
113 if not self.is_image_sharable(req.context, image):
114 msg = (_LW("User lacks permission to share image %(id)s") %
115 {'id': image_id})
116 LOG.warn(msg)
117 msg = _("No permission to share that image")
118 raise webob.exc.HTTPForbidden(msg)
119
120 # Get the membership list
121 try:
122 memb_list = body['memberships']
123 except Exception as e:
124 # Malformed entity...
125 msg = _LW("Invalid membership association specified for "
126 "image %(id)s") % {'id': image_id}
127 LOG.warn(msg)
128 msg = (_("Invalid membership association: %s") %
129 encodeutils.exception_to_unicode(e))
130 raise webob.exc.HTTPBadRequest(explanation=msg)
131
132 add = []
133 existing = {}
134 # Walk through the incoming memberships
135 for memb in memb_list:
136 try:
137 datum = dict(image_id=image['id'],
138 member=memb['member_id'],
139 can_share=None)
140 except Exception as e:
141 # Malformed entity...
142 msg = _LW("Invalid membership association specified for "
143 "image %(id)s") % {'id': image_id}
144 LOG.warn(msg)
145 msg = (_("Invalid membership association: %s") %
146 encodeutils.exception_to_unicode(e))
147 raise webob.exc.HTTPBadRequest(explanation=msg)
148
149 # Figure out what can_share should be
150 if 'can_share' in memb:
151 datum['can_share'] = bool(memb['can_share'])
152
153 # Try to find the corresponding membership
154 members = self.db_api.image_member_find(req.context,
155 image_id=datum['image_id'],
156 member=datum['member'],
157 include_deleted=True)
158 try:
159 member = members[0]
160 except IndexError:
161 # Default can_share
162 datum['can_share'] = bool(datum['can_share'])
163 add.append(datum)
164 else:
165 # Are we overriding can_share?
166 if datum['can_share'] is None:
167 datum['can_share'] = member['can_share']
168
169 existing[member['id']] = {
170 'values': datum,
171 'membership': member,
172 }
173
174 # We now have a filtered list of memberships to add and
175 # memberships to modify. Let's start by walking through all
176 # the existing image memberships...
177 existing_members = self.db_api.image_member_find(req.context,
178 image_id=image['id'],
179 include_deleted=True)
180 for member in existing_members:
181 if member['id'] in existing:
182 # Just update the membership in place
183 update = existing[member['id']]['values']
184 self.db_api.image_member_update(req.context,
185 member['id'],
186 update)
187 else:
188 if not member['deleted']:
189 # Outdated one; needs to be deleted
190 self.db_api.image_member_delete(req.context, member['id'])
191
192 # Now add the non-existent ones
193 for memb in add:
194 self.db_api.image_member_create(req.context, memb)
195
196 # Make an appropriate result
197 LOG.info(_LI("Successfully updated memberships for image %(id)s"),
198 {'id': image_id})
199 return webob.exc.HTTPNoContent()
200
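`update_all` reconciles the request body against the existing membership rows: matching memberships are updated in place, stale ones are deleted, and new ones are created. A minimal pure sketch of that three-way partition (a hypothetical helper, keyed by member id instead of database row id for brevity):

```python
# Illustrative three-way partition behind update_all(): which desired
# memberships to add, which to update, and which existing rows to drop.
def reconcile(existing, desired):
    desired_by_member = {d['member']: d for d in desired}
    existing_members = {e['member'] for e in existing}
    to_add = [d for m, d in desired_by_member.items()
              if m not in existing_members]
    to_update = [d for m, d in desired_by_member.items()
                 if m in existing_members]
    to_delete = [e for e in existing
                 if e['member'] not in desired_by_member]
    return to_add, to_update, to_delete

existing = [{'member': 'a'}, {'member': 'b'}]
desired = [{'member': 'b', 'can_share': False},
           {'member': 'c', 'can_share': True}]
to_add, to_update, to_delete = reconcile(existing, desired)
```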
201 @utils.mutating
202 def update(self, req, image_id, id, body=None):
203 """
204 Adds a membership to the image, or updates an existing one.
205 If a body is present, it is a dict with the following format::
206
207 {'member': {
208 'can_share': [True|False]
209 }}
210
211 If `can_share` is provided, the member's ability to share is
212 set accordingly. If it is not provided, existing memberships
213 remain unchanged and new memberships default to False.
214 """
215 self._check_can_access_image_members(req.context)
216
217 # Make sure the image exists
218 try:
219 image = self.db_api.image_get(req.context, image_id, v1_mode=True)
220 except exception.NotFound:
221 msg = _("Image %(id)s not found") % {'id': image_id}
222 LOG.warn(msg)
223 raise webob.exc.HTTPNotFound(msg)
224 except exception.Forbidden:
225 # If it's private and doesn't belong to them, don't let on
226 # that it exists
227 msg = _LW("Access denied to image %(id)s but returning"
228 " 'not found'") % {'id': image_id}
229 LOG.warn(msg)
230 raise webob.exc.HTTPNotFound()
231
232 # Can they manipulate the membership?
233 if not self.is_image_sharable(req.context, image):
234 msg = (_LW("User lacks permission to share image %(id)s") %
235 {'id': image_id})
236 LOG.warn(msg)
237 msg = _("No permission to share that image")
238 raise webob.exc.HTTPForbidden(msg)
239
240 # Determine the applicable can_share value
241 can_share = None
242 if body:
243 try:
244 can_share = bool(body['member']['can_share'])
245 except Exception as e:
246 # Malformed entity...
247 msg = _LW("Invalid membership association specified for "
248 "image %(id)s") % {'id': image_id}
249 LOG.warn(msg)
250 msg = (_("Invalid membership association: %s") %
251 encodeutils.exception_to_unicode(e))
252 raise webob.exc.HTTPBadRequest(explanation=msg)
253
254 # Look up an existing membership...
255 members = self.db_api.image_member_find(req.context,
256 image_id=image_id,
257 member=id,
258 include_deleted=True)
259 if members:
260 if can_share is not None:
261 values = dict(can_share=can_share)
262 self.db_api.image_member_update(req.context,
263 members[0]['id'],
264 values)
265 else:
266 values = dict(image_id=image['id'], member=id,
267 can_share=bool(can_share))
268 self.db_api.image_member_create(req.context, values)
269
270 LOG.info(_LI("Successfully updated a membership for image %(id)s"),
271 {'id': image_id})
272 return webob.exc.HTTPNoContent()
273
274 @utils.mutating
275 def delete(self, req, image_id, id):
276 """
277 Removes a membership from the image.
278 """
279 self._check_can_access_image_members(req.context)
280
281 # Make sure the image exists
282 try:
283 image = self.db_api.image_get(req.context, image_id, v1_mode=True)
284 except exception.NotFound:
285 msg = _("Image %(id)s not found") % {'id': image_id}
286 LOG.warn(msg)
287 raise webob.exc.HTTPNotFound(msg)
288 except exception.Forbidden:
289 # If it's private and doesn't belong to them, don't let on
290 # that it exists
291 msg = _LW("Access denied to image %(id)s but returning"
292 " 'not found'") % {'id': image_id}
293 LOG.warn(msg)
294 raise webob.exc.HTTPNotFound()
295
296 # Can they manipulate the membership?
297 if not self.is_image_sharable(req.context, image):
298 msg = (_LW("User lacks permission to share image %(id)s") %
299 {'id': image_id})
300 LOG.warn(msg)
301 msg = _("No permission to share that image")
302 raise webob.exc.HTTPForbidden(msg)
303
304 # Look up an existing membership
305 members = self.db_api.image_member_find(req.context,
306 image_id=image_id,
307 member=id)
308 if members:
309 self.db_api.image_member_delete(req.context, members[0]['id'])
310 else:
311 LOG.debug("%(id)s is not a member of image %(image_id)s",
312 {'id': id, 'image_id': image_id})
313 msg = _("Membership could not be found.")
314 raise webob.exc.HTTPNotFound(explanation=msg)
315
316 # Make an appropriate result
317 LOG.info(_LI("Successfully deleted a membership from image %(id)s"),
318 {'id': image_id})
319 return webob.exc.HTTPNoContent()
320
321 def default(self, req, *args, **kwargs):
322 """Reject the unrouted 'show' and 'create' actions with 405."""
323 LOG.debug("The method %s is not allowed for this resource",
324 req.environ['REQUEST_METHOD'])
325 raise webob.exc.HTTPMethodNotAllowed(
326 headers=[('Allow', 'PUT, DELETE')])
327
328 def index_shared_images(self, req, id):
329 """
330 Retrieves images shared with the given member.
331 """
332 try:
333 members = self.db_api.image_member_find(req.context, member=id)
334 except exception.NotFound:
335 msg = _LW("Member %(id)s not found") % {'id': id}
336 LOG.warn(msg)
337 msg = _("Membership could not be found.")
338 raise webob.exc.HTTPBadRequest(explanation=msg)
339
340 LOG.debug("Returning list of images shared with member %(id)s",
341 {'id': id})
342 return dict(shared_images=make_member_list(members,
343 image_id='image_id',
344 can_share='can_share'))
345
346
347 def make_member_list(members, **attr_map):
348 """
349 Create a dict representation of a list of members which we can use
350 to serialize the members list. Keyword arguments map each response
351 attribute name to the database attribute it is read from.
352 """
353
354 def _fetch_memb(memb, attr_map):
355 return {k: memb[v] for k, v in attr_map.items() if v in memb.keys()}
356
357 # Return the list of members with the given attribute mapping
358 return [_fetch_memb(memb, attr_map) for memb in members]
359
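`make_member_list` renames database columns to response keys through its keyword arguments. Reproduced here with an illustrative row to show the mapping (attributes absent from a row are simply skipped):

```python
# Copy of make_member_list above: each keyword maps a response key to a
# database column, and missing columns are silently omitted.
def make_member_list(members, **attr_map):
    def _fetch_memb(memb, attr_map):
        return {k: memb[v] for k, v in attr_map.items() if v in memb}
    return [_fetch_memb(memb, attr_map) for memb in members]

rows = [{'member': 'tenant-a', 'can_share': True, 'deleted': False}]
out = make_member_list(rows, member_id='member', can_share='can_share')
# 'member' is exposed as 'member_id'; 'deleted' is not in the map, so
# it never reaches the response.
```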
360
361 def create_resource():
362 """Image members resource factory method."""
363 deserializer = wsgi.JSONRequestDeserializer()
364 serializer = wsgi.JSONResponseSerializer()
365 return wsgi.Resource(Controller(), deserializer, serializer)
+0
-35
glance/registry/api/v2/__init__.py
0 # Copyright 2013 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from glance.common import wsgi
16 from glance.registry.api.v2 import rpc
17
18
19 def init(mapper):
20 rpc_resource = rpc.create_resource()
21 mapper.connect("/rpc", controller=rpc_resource,
22 conditions=dict(method=["POST"]),
23 action="__call__")
24
25
26 class API(wsgi.Router):
27 """WSGI entry point for all Registry requests."""
28
29 def __init__(self, mapper):
30 mapper = mapper or wsgi.APIMapper()
31
32 init(mapper)
33
34 super(API, self).__init__(mapper)
+0
-53
glance/registry/api/v2/rpc.py
0 # Copyright 2013 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 RPC Controller
17 """
18
19 from oslo_config import cfg
20
21 from glance.common import rpc
22 from glance.common import wsgi
23 import glance.db
24 from glance.i18n import _
25
26
27 CONF = cfg.CONF
28
29
30 class Controller(rpc.Controller):
31
32 def __init__(self, raise_exc=False):
33 super(Controller, self).__init__(raise_exc)
34
35 # NOTE(flaper87): Avoid using registry's db
36 # driver for the registry service. It would
37 # end up in an infinite loop.
38 if CONF.data_api == "glance.db.registry.api":
39 msg = _("Registry service can't use %s") % CONF.data_api
40 raise RuntimeError(msg)
41
42 # NOTE(flaper87): Register the
43 # db_api as a resource to expose.
44 db_api = glance.db.get_api()
45 self.register(glance.db.unwrap(db_api))
46
47
48 def create_resource():
49 """Images resource factory method."""
50 deserializer = rpc.RPCJSONDeserializer()
51 serializer = rpc.RPCJSONSerializer()
52 return wsgi.Resource(Controller(), deserializer, serializer)
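The RPC controller exposes the registered db_api object's methods over a single POST `/rpc` endpoint. A toy version of that dispatch pattern (names here are illustrative; the real registration and call logic lives in `glance.common.rpc`):

```python
# Toy sketch of the register-and-dispatch pattern used by the /rpc
# endpoint: an object is registered, then named method calls are
# forwarded to it with keyword arguments.
class RPCDispatcher:
    def __init__(self):
        self._resources = {}

    def register(self, resource):
        self._resources[type(resource).__name__] = resource

    def call(self, resource_name, method, **kwargs):
        target = self._resources[resource_name]
        return getattr(target, method)(**kwargs)

class FakeDBAPI:
    """Stand-in for the real db_api exposed by the controller."""
    def image_get(self, image_id):
        return {'id': image_id, 'status': 'active'}

dispatcher = RPCDispatcher()
dispatcher.register(FakeDBAPI())
result = dispatcher.call('FakeDBAPI', 'image_get', image_id='abc')
```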
+0
-264
glance/registry/client/__init__.py
0 # Copyright 2013 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from oslo_config import cfg
16
17 from glance.i18n import _
18
19
20 registry_client_opts = [
21 cfg.StrOpt('registry_client_protocol',
22 default='http',
23 choices=('http', 'https'),
24 deprecated_for_removal=True,
25 deprecated_since="Queens",
26 deprecated_reason=_("""
27 Glance registry service is deprecated for removal.
28
29 More information can be found from the spec:
30 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
31 """),
32 help=_("""
33 Protocol to use for communication with the registry server.
34
35 Provide a string value representing the protocol to use for
36 communication with the registry server. By default, this option is
37 set to ``http`` and the connection is not secure.
38
39 This option can be set to ``https`` to establish a secure connection
40 to the registry server. In this case, provide a key to use for the
41 SSL connection using the ``registry_client_key_file`` option. Also
42 include the CA file and cert file using the options
43 ``registry_client_ca_file`` and ``registry_client_cert_file``
44 respectively.
45
46 Possible values:
47 * http
48 * https
49
50 Related options:
51 * registry_client_key_file
52 * registry_client_cert_file
53 * registry_client_ca_file
54
55 """)),
56 cfg.StrOpt('registry_client_key_file',
57 sample_default='/etc/ssl/key/key-file.pem',
58 deprecated_for_removal=True,
59 deprecated_since="Queens",
60 deprecated_reason=_("""
61 Glance registry service is deprecated for removal.
62
63 More information can be found from the spec:
64 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
65 """),
66 help=_("""
67 Absolute path to the private key file.
68
69 Provide a string value representing a valid absolute path to the
70 private key file to use for establishing a secure connection to
71 the registry server.
72
73 NOTE: This option must be set if ``registry_client_protocol`` is
74 set to ``https``. Alternatively, the GLANCE_CLIENT_KEY_FILE
75 environment variable may be set to a filepath of the key file.
76
77 Possible values:
78 * String value representing a valid absolute path to the key
79 file.
80
81 Related options:
82 * registry_client_protocol
83
84 """)),
85 cfg.StrOpt('registry_client_cert_file',
86 sample_default='/etc/ssl/certs/file.crt',
87 deprecated_for_removal=True,
88 deprecated_since="Queens",
89 deprecated_reason=_("""
90 Glance registry service is deprecated for removal.
91
92 More information can be found from the spec:
93 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
94 """),
95 help=_("""
96 Absolute path to the certificate file.
97
98 Provide a string value representing a valid absolute path to the
99 certificate file to use for establishing a secure connection to
100 the registry server.
101
102 NOTE: This option must be set if ``registry_client_protocol`` is
103 set to ``https``. Alternatively, the GLANCE_CLIENT_CERT_FILE
104 environment variable may be set to a filepath of the certificate
105 file.
106
107 Possible values:
108 * String value representing a valid absolute path to the
109 certificate file.
110
111 Related options:
112 * registry_client_protocol
113
114 """)),
115 cfg.StrOpt('registry_client_ca_file',
116 sample_default='/etc/ssl/cafile/file.ca',
117 deprecated_for_removal=True,
118 deprecated_since="Queens",
119 deprecated_reason=_("""
120 Glance registry service is deprecated for removal.
121
122 More information can be found from the spec:
123 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
124 """),
125 help=_("""
126 Absolute path to the Certificate Authority file.
127
128 Provide a string value representing a valid absolute path to the
129 certificate authority file to use for establishing a secure
130 connection to the registry server.
131
132 NOTE: This option must be set if ``registry_client_protocol`` is
133 set to ``https``. Alternatively, the GLANCE_CLIENT_CA_FILE
134 environment variable may be set to a filepath of the CA file.
135 This option is ignored if the ``registry_client_insecure`` option
136 is set to ``True``.
137
138 Possible values:
139 * String value representing a valid absolute path to the CA
140 file.
141
142 Related options:
143 * registry_client_protocol
144 * registry_client_insecure
145
146 """)),
147 cfg.BoolOpt('registry_client_insecure',
148 default=False,
149 deprecated_for_removal=True,
150 deprecated_since="Queens",
151 deprecated_reason=_("""
152 Glance registry service is deprecated for removal.
153
154 More information can be found from the spec:
155 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
156 """),
157 help=_("""
158 Set verification of the registry server certificate.
159
160 Provide a boolean value to determine whether or not to validate
161 SSL connections to the registry server. By default, this option
162 is set to ``False`` and the SSL connections are validated.
163
164 If set to ``True``, the connection to the registry server is not
165 validated via a certifying authority and the
166 ``registry_client_ca_file`` option is ignored. This is the
167 registry's equivalent of specifying --insecure on the command line
168 using glanceclient for the API.
169
170 Possible values:
171 * True
172 * False
173
174 Related options:
175 * registry_client_protocol
176 * registry_client_ca_file
177
178 """)),
179 cfg.IntOpt('registry_client_timeout',
180 default=600,
181 min=0,
182 deprecated_for_removal=True,
183 deprecated_since="Queens",
184 deprecated_reason=_("""
185 Glance registry service is deprecated for removal.
186
187 More information can be found from the spec:
188 http://specs.openstack.org/openstack/glance-specs/specs/queens/approved/glance/deprecate-registry.html
189 """),
190 help=_("""
191 Timeout value for registry requests.
192
193 Provide an integer value representing the period of time in seconds
194 that the API server will wait for a registry request to complete.
195 The default value is 600 seconds.
196
197 A value of 0 implies that a request will never time out.
198
199 Possible values:
200 * Zero
201 * Positive integer
202
203 Related options:
204 * None
205
206 """)),
207 ]
208
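These options map onto the `[DEFAULT]` section of glance-api.conf (they are registered without a group). An illustrative fragment, with the file paths taken from the sample defaults above:

```ini
[DEFAULT]
registry_client_protocol = https
registry_client_key_file = /etc/ssl/key/key-file.pem
registry_client_cert_file = /etc/ssl/certs/file.crt
registry_client_ca_file = /etc/ssl/cafile/file.ca
registry_client_insecure = false
registry_client_timeout = 600
```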
209 _DEPRECATE_USE_USER_TOKEN_MSG = ('This option was considered harmful and '
210 'has been deprecated in M release. It will '
211 'be removed in O release. For more '
212 'information read OSSN-0060. '
213 'Related functionality with uploading big '
214 'images has been implemented with Keystone '
215 'trusts support.')
216
217 registry_client_ctx_opts = [
218 cfg.BoolOpt('use_user_token', default=True, deprecated_for_removal=True,
219 deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
220 help=_('Whether to pass through the user token when '
221 'making requests to the registry. To prevent '
222 'failures with token expiration during big '
223 'files upload, it is recommended to set this '
224 'parameter to False. '
225 'If "use_user_token" is not in effect, then '
226 'admin credentials can be specified.')),
227 cfg.StrOpt('admin_user', secret=True, deprecated_for_removal=True,
228 deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
229 help=_("The administrator's user name. "
230 'If "use_user_token" is not in effect, then '
231 'admin credentials can be specified.')),
232 cfg.StrOpt('admin_password', secret=True, deprecated_for_removal=True,
233 deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
234 help=_("The administrator's password. "
235 'If "use_user_token" is not in effect, then '
236 'admin credentials can be specified.')),
237 cfg.StrOpt('admin_tenant_name', secret=True, deprecated_for_removal=True,
238 deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
239 help=_('The tenant name of the administrative user. '
240 'If "use_user_token" is not in effect, then '
241 'admin tenant name can be specified.')),
242 cfg.StrOpt('auth_url', deprecated_for_removal=True,
243 deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
244 help=_('The URL to the keystone service. '
245 'If "use_user_token" is not in effect and '
246 'using keystone auth, then URL of keystone '
247 'can be specified.')),
248 cfg.StrOpt('auth_strategy', default='noauth', deprecated_for_removal=True,
249 deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
250 help=_('The strategy to use for authentication. '
251 'If "use_user_token" is not in effect, then '
252 'auth strategy can be specified.')),
253 cfg.StrOpt('auth_region', deprecated_for_removal=True,
254 deprecated_reason=_DEPRECATE_USE_USER_TOKEN_MSG,
255 help=_('The region for the authentication service. '
256 'If "use_user_token" is not in effect and '
257 'using keystone auth, then region name can '
258 'be specified.')),
259 ]
260
261 CONF = cfg.CONF
262 CONF.register_opts(registry_client_opts)
263 CONF.register_opts(registry_client_ctx_opts)
+0
-0
glance/registry/client/v1/__init__.py
(Empty file)
+0
-227
glance/registry/client/v1/api.py
0 # Copyright 2010-2011 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 Registry's Client API
17 """
18
19 import os
20
21 from oslo_config import cfg
22 from oslo_log import log as logging
23 from oslo_serialization import jsonutils
24
25 from glance.common import exception
26 from glance.i18n import _
27 from glance.registry.client.v1 import client
28
29 LOG = logging.getLogger(__name__)
30
31 registry_client_ctx_opts = [
32 cfg.BoolOpt('send_identity_headers',
33 default=False,
34 help=_("""
35 Send headers received from identity when making requests to
36 registry.
37
38 Typically, Glance registry can be deployed in multiple flavors,
39 which may or may not include authentication. For example,
40 ``trusted-auth`` is a flavor that does not require the registry
41 service to authenticate the requests it receives. However, the
42 registry service may still need a user context to be populated to
43 serve the requests. This can be achieved by the caller
44 (the Glance API usually) passing through the headers it received
45 from authenticating with identity for the same request. The typical
46 headers sent are ``X-User-Id``, ``X-Tenant-Id``, ``X-Roles``,
47 ``X-Identity-Status`` and ``X-Service-Catalog``.
48
49 Provide a boolean value to determine whether to send the identity
50 headers to provide tenant and user information along with the
51 requests to registry service. By default, this option is set to
52 ``False``, which means that user and tenant information is not
53 readily available. It must be obtained by authenticating. Hence, if
54 this is set to ``False``, ``flavor`` must be set to a value that
55 either includes authentication or an authenticated user context.
56
57 Possible values:
58 * True
59 * False
60
61 Related options:
62 * flavor
63
64 """)),
65 ]
66
67 CONF = cfg.CONF
68 CONF.register_opts(registry_client_ctx_opts)
69 _registry_client = 'glance.registry.client'
70 CONF.import_opt('registry_client_protocol', _registry_client)
71 CONF.import_opt('registry_client_key_file', _registry_client)
72 CONF.import_opt('registry_client_cert_file', _registry_client)
73 CONF.import_opt('registry_client_ca_file', _registry_client)
74 CONF.import_opt('registry_client_insecure', _registry_client)
75 CONF.import_opt('registry_client_timeout', _registry_client)
76 CONF.import_opt('use_user_token', _registry_client)
77 CONF.import_opt('admin_user', _registry_client)
78 CONF.import_opt('admin_password', _registry_client)
79 CONF.import_opt('admin_tenant_name', _registry_client)
80 CONF.import_opt('auth_url', _registry_client)
81 CONF.import_opt('auth_strategy', _registry_client)
82 CONF.import_opt('auth_region', _registry_client)
83 CONF.import_opt('metadata_encryption_key', 'glance.common.config')
84
85 _CLIENT_CREDS = None
86 _CLIENT_HOST = None
87 _CLIENT_PORT = None
88 _CLIENT_KWARGS = {}
89 # AES key used to encrypt 'location' metadata
90 _METADATA_ENCRYPTION_KEY = None
91
92
93 def configure_registry_client():
94 """
95 Sets up a registry client for use in registry lookups
96 """
97 global _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT, _METADATA_ENCRYPTION_KEY
98 try:
99 host, port = CONF.registry_host, CONF.registry_port
100 except cfg.ConfigFileValueError:
101 msg = _("Configuration option was not valid")
102 LOG.error(msg)
103 raise exception.BadRegistryConnectionConfiguration(reason=msg)
104 except IndexError:
105 msg = _("Could not find required configuration option")
106 LOG.error(msg)
107 raise exception.BadRegistryConnectionConfiguration(reason=msg)
108
109 _CLIENT_HOST = host
110 _CLIENT_PORT = port
111 _METADATA_ENCRYPTION_KEY = CONF.metadata_encryption_key
112 _CLIENT_KWARGS = {
113 'use_ssl': CONF.registry_client_protocol.lower() == 'https',
114 'key_file': CONF.registry_client_key_file,
115 'cert_file': CONF.registry_client_cert_file,
116 'ca_file': CONF.registry_client_ca_file,
117 'insecure': CONF.registry_client_insecure,
118 'timeout': CONF.registry_client_timeout,
119 }
120
121 if not CONF.use_user_token:
122 configure_registry_admin_creds()
123
124
125 def configure_registry_admin_creds():
126 global _CLIENT_CREDS
127
128 if CONF.auth_url or os.getenv('OS_AUTH_URL'):
129 strategy = 'keystone'
130 else:
131 strategy = CONF.auth_strategy
132
133 _CLIENT_CREDS = {
134 'user': CONF.admin_user,
135 'password': CONF.admin_password,
136 'username': CONF.admin_user,
137 'tenant': CONF.admin_tenant_name,
138 'auth_url': os.getenv('OS_AUTH_URL') or CONF.auth_url,
139 'strategy': strategy,
140 'region': CONF.auth_region,
141 }
142
143
144 def get_registry_client(cxt):
145 global _CLIENT_CREDS, _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
146 global _METADATA_ENCRYPTION_KEY
147 kwargs = _CLIENT_KWARGS.copy()
148 if CONF.use_user_token:
149 kwargs['auth_token'] = cxt.auth_token
150 if _CLIENT_CREDS:
151 kwargs['creds'] = _CLIENT_CREDS
152
153 if CONF.send_identity_headers:
154 identity_headers = {
155 'X-User-Id': cxt.user_id or '',
156 'X-Tenant-Id': cxt.project_id or '',
157 'X-Roles': ','.join(cxt.roles),
158 'X-Identity-Status': 'Confirmed',
159 'X-Service-Catalog': jsonutils.dumps(cxt.service_catalog),
160 }
161 kwargs['identity_headers'] = identity_headers
162
163 kwargs['request_id'] = cxt.request_id
164
165 return client.RegistryClient(_CLIENT_HOST, _CLIENT_PORT,
166 _METADATA_ENCRYPTION_KEY, **kwargs)
167
168
169 def get_images_list(context, **kwargs):
170 c = get_registry_client(context)
171 return c.get_images(**kwargs)
172
173
174 def get_images_detail(context, **kwargs):
175 c = get_registry_client(context)
176 return c.get_images_detailed(**kwargs)
177
178
179 def get_image_metadata(context, image_id):
180 c = get_registry_client(context)
181 return c.get_image(image_id)
182
183
184 def add_image_metadata(context, image_meta):
185 LOG.debug("Adding image metadata...")
186 c = get_registry_client(context)
187 return c.add_image(image_meta)
188
189
190 def update_image_metadata(context, image_id, image_meta,
191 purge_props=False, from_state=None):
192 LOG.debug("Updating image metadata for image %s...", image_id)
193 c = get_registry_client(context)
194 return c.update_image(image_id, image_meta, purge_props=purge_props,
195 from_state=from_state)
196
197
198 def delete_image_metadata(context, image_id):
199 LOG.debug("Deleting image metadata for image %s...", image_id)
200 c = get_registry_client(context)
201 return c.delete_image(image_id)
202
203
204 def get_image_members(context, image_id):
205 c = get_registry_client(context)
206 return c.get_image_members(image_id)
207
208
209 def get_member_images(context, member_id):
210 c = get_registry_client(context)
211 return c.get_member_images(member_id)
212
213
214 def replace_members(context, image_id, member_data):
215 c = get_registry_client(context)
216 return c.replace_members(image_id, member_data)
217
218
219 def add_member(context, image_id, member_id, can_share=None):
220 c = get_registry_client(context)
221 return c.add_member(image_id, member_id, can_share=can_share)
222
223
224 def delete_member(context, image_id, member_id):
225 c = get_registry_client(context)
226 return c.delete_member(image_id, member_id)
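For context on the removed `glance/registry/client/v1/api.py`: it followed a configure-once, build-per-request pattern, capturing connection settings into module globals at startup and constructing a short-lived client for each call. A minimal sketch of that pattern (all names below are illustrative stand-ins, not the real glance API):

```python
# Sketch of the module-global client pattern used by the removed v1 api.py:
# configuration is captured once into module globals, and every helper
# builds a fresh client from them, mixing in per-request state.

_CLIENT_HOST = None
_CLIENT_PORT = None
_CLIENT_KWARGS = {}


class FakeRegistryClient:
    """Stand-in for RegistryClient; just records what it was built with."""
    def __init__(self, host, port, **kwargs):
        self.host = host
        self.port = port
        self.kwargs = kwargs


def configure_registry_client(host, port, **kwargs):
    """Capture connection settings once, at service startup."""
    global _CLIENT_HOST, _CLIENT_PORT, _CLIENT_KWARGS
    _CLIENT_HOST = host
    _CLIENT_PORT = port
    _CLIENT_KWARGS = kwargs


def get_registry_client(auth_token=None):
    """Build a fresh client per request from the captured settings."""
    kwargs = dict(_CLIENT_KWARGS)
    if auth_token:
        kwargs['auth_token'] = auth_token
    return FakeRegistryClient(_CLIENT_HOST, _CLIENT_PORT, **kwargs)


configure_registry_client('localhost', 9191, timeout=600)
client = get_registry_client(auth_token='tok')
```

The real module layered the helper functions (`get_image_metadata`, `add_member`, etc.) on top of exactly this: each one called `get_registry_client(context)` and delegated to the resulting client.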
+0
-276
glance/registry/client/v1/client.py less more
0 # Copyright 2013 OpenStack Foundation
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 Simple client class to speak with any RESTful service that implements
17 the Glance Registry API
18 """
19
20 from oslo_log import log as logging
21 from oslo_serialization import jsonutils
22 from oslo_utils import excutils
23 import six
24
25 from glance.common.client import BaseClient
26 from glance.common import crypt
27 from glance.common import exception
28 from glance.i18n import _LE
29 from glance.registry.api.v1 import images
30
31 LOG = logging.getLogger(__name__)
32
33
34 class RegistryClient(BaseClient):
35
36 """A client for the Registry image metadata service."""
37
38 DEFAULT_PORT = 9191
39
40 def __init__(self, host=None, port=None, metadata_encryption_key=None,
41 identity_headers=None, **kwargs):
42 """
43 :param metadata_encryption_key: Key used to encrypt 'location' metadata
44 """
45 self.metadata_encryption_key = metadata_encryption_key
46 # NOTE (dprince): by default base client overwrites host and port
47 # settings when using keystone. configure_via_auth=False disables
48 # this behaviour to ensure we still send requests to the Registry API
49 self.identity_headers = identity_headers
50 # store available passed request id for do_request call
51 self._passed_request_id = kwargs.pop('request_id', None)
52 BaseClient.__init__(self, host, port, configure_via_auth=False,
53 **kwargs)
54
55 def decrypt_metadata(self, image_metadata):
56 if self.metadata_encryption_key:
57 if image_metadata.get('location'):
58 location = crypt.urlsafe_decrypt(self.metadata_encryption_key,
59 image_metadata['location'])
60 image_metadata['location'] = location
61 if image_metadata.get('location_data'):
62 ld = []
63 for loc in image_metadata['location_data']:
64 url = crypt.urlsafe_decrypt(self.metadata_encryption_key,
65 loc['url'])
66 ld.append({'id': loc['id'], 'url': url,
67 'metadata': loc['metadata'],
68 'status': loc['status']})
69 image_metadata['location_data'] = ld
70 return image_metadata
71
72 def encrypt_metadata(self, image_metadata):
73 if self.metadata_encryption_key:
74 location_url = image_metadata.get('location')
75 if location_url:
76 location = crypt.urlsafe_encrypt(self.metadata_encryption_key,
77 location_url,
78 64)
79 image_metadata['location'] = location
80 if image_metadata.get('location_data'):
81 ld = []
82 for loc in image_metadata['location_data']:
83 if loc['url'] == location_url:
84 url = location
85 else:
86 url = crypt.urlsafe_encrypt(
87 self.metadata_encryption_key, loc['url'], 64)
88 ld.append({'url': url, 'metadata': loc['metadata'],
89 'status': loc['status'],
90 # NOTE(zhiyan): New location has no ID field.
91 'id': loc.get('id')})
92 image_metadata['location_data'] = ld
93 return image_metadata
94
95 def get_images(self, **kwargs):
96 """
97 Returns a list of image id/name mappings from Registry
98
99 :param filters: dict of keys & expected values to filter results
100 :param marker: image id after which to start page
101 :param limit: max number of images to return
102 :param sort_key: results will be ordered by this image attribute
103 :param sort_dir: direction in which to order results (asc, desc)
104 """
105 params = self._extract_params(kwargs, images.SUPPORTED_PARAMS)
106 res = self.do_request("GET", "/images", params=params)
107 image_list = jsonutils.loads(res.read())['images']
108 for image in image_list:
109 image = self.decrypt_metadata(image)
110 return image_list
111
112 def do_request(self, method, action, **kwargs):
113 try:
114 kwargs['headers'] = kwargs.get('headers', {})
115 kwargs['headers'].update(self.identity_headers or {})
116 if self._passed_request_id:
117 request_id = self._passed_request_id
118 if six.PY3 and isinstance(request_id, bytes):
119 request_id = request_id.decode('utf-8')
120 kwargs['headers']['X-Openstack-Request-ID'] = request_id
121 res = super(RegistryClient, self).do_request(method,
122 action,
123 **kwargs)
124 status = res.status
125 request_id = res.getheader('x-openstack-request-id')
126 if six.PY3 and isinstance(request_id, bytes):
127 request_id = request_id.decode('utf-8')
128 LOG.debug("Registry request %(method)s %(action)s HTTP %(status)s"
129 " request id %(request_id)s",
130 {'method': method, 'action': action,
131 'status': status, 'request_id': request_id})
132
133 # a 404 condition is not fatal, we shouldn't log at a fatal
134 # level for it.
135 except exception.NotFound:
136 raise
137
138 # The following exception logging should only really be used
139 # in extreme and unexpected cases.
140 except Exception as exc:
141 with excutils.save_and_reraise_exception():
142 exc_name = exc.__class__.__name__
143 LOG.exception(_LE("Registry client request %(method)s "
144 "%(action)s raised %(exc_name)s"),
145 {'method': method, 'action': action,
146 'exc_name': exc_name})
147 return res
148
149 def get_images_detailed(self, **kwargs):
150 """
151 Returns a list of detailed image data mappings from Registry
152
153 :param filters: dict of keys & expected values to filter results
154 :param marker: image id after which to start page
155 :param limit: max number of images to return
156 :param sort_key: results will be ordered by this image attribute
157 :param sort_dir: direction in which to order results (asc, desc)
158 """
159 params = self._extract_params(kwargs, images.SUPPORTED_PARAMS)
160 res = self.do_request("GET", "/images/detail", params=params)
161 image_list = jsonutils.loads(res.read())['images']
162 for image in image_list:
163 image = self.decrypt_metadata(image)
164 return image_list
165
166 def get_image(self, image_id):
167 """Returns a mapping of image metadata from Registry."""
168 res = self.do_request("GET", "/images/%s" % image_id)
169 data = jsonutils.loads(res.read())['image']
170 return self.decrypt_metadata(data)
171
172 def add_image(self, image_metadata):
173 """
174 Tells registry about an image's metadata
175 """
176 headers = {
177 'Content-Type': 'application/json',
178 }
179
180 if 'image' not in image_metadata:
181 image_metadata = dict(image=image_metadata)
182
183 encrypted_metadata = self.encrypt_metadata(image_metadata['image'])
184 image_metadata['image'] = encrypted_metadata
185 body = jsonutils.dump_as_bytes(image_metadata)
186
187 res = self.do_request("POST", "/images", body=body, headers=headers)
188 # Registry returns a JSONified dict(image=image_info)
189 data = jsonutils.loads(res.read())
190 image = data['image']
191 return self.decrypt_metadata(image)
192
193 def update_image(self, image_id, image_metadata, purge_props=False,
194 from_state=None):
195 """
196 Updates Registry's information about an image
197 """
198 if 'image' not in image_metadata:
199 image_metadata = dict(image=image_metadata)
200
201 encrypted_metadata = self.encrypt_metadata(image_metadata['image'])
202 image_metadata['image'] = encrypted_metadata
203 image_metadata['from_state'] = from_state
204 body = jsonutils.dump_as_bytes(image_metadata)
205
206 headers = {
207 'Content-Type': 'application/json',
208 }
209
210 if purge_props:
211 headers["X-Glance-Registry-Purge-Props"] = "true"
212
213 res = self.do_request("PUT", "/images/%s" % image_id, body=body,
214 headers=headers)
215 data = jsonutils.loads(res.read())
216 image = data['image']
217 return self.decrypt_metadata(image)
218
219 def delete_image(self, image_id):
220 """
221 Deletes Registry's information about an image
222 """
223 res = self.do_request("DELETE", "/images/%s" % image_id)
224 data = jsonutils.loads(res.read())
225 image = data['image']
226 return image
227
228 def get_image_members(self, image_id):
229 """Return a list of membership associations from Registry."""
230 res = self.do_request("GET", "/images/%s/members" % image_id)
231 data = jsonutils.loads(res.read())['members']
232 return data
233
234 def get_member_images(self, member_id):
235 """Return a list of membership associations from Registry."""
236 res = self.do_request("GET", "/shared-images/%s" % member_id)
237 data = jsonutils.loads(res.read())['shared_images']
238 return data
239
240 def replace_members(self, image_id, member_data):
241 """Replace registry's information about image membership."""
242 if isinstance(member_data, (list, tuple)):
243 member_data = dict(memberships=list(member_data))
244 elif (isinstance(member_data, dict) and
245 'memberships' not in member_data):
246 member_data = dict(memberships=[member_data])
247
248 body = jsonutils.dump_as_bytes(member_data)
249
250 headers = {'Content-Type': 'application/json', }
251
252 res = self.do_request("PUT", "/images/%s/members" % image_id,
253 body=body, headers=headers)
254 return self.get_status_code(res) == 204
255
256 def add_member(self, image_id, member_id, can_share=None):
257 """Add to registry's information about image membership."""
258 body = None
259 headers = {}
260 # Build up a body if can_share is specified
261 if can_share is not None:
262 body = jsonutils.dump_as_bytes(
263 dict(member=dict(can_share=can_share)))
264 headers['Content-Type'] = 'application/json'
265
266 url = "/images/%s/members/%s" % (image_id, member_id)
267 res = self.do_request("PUT", url, body=body,
268 headers=headers)
269 return self.get_status_code(res) == 204
270
271 def delete_member(self, image_id, member_id):
272 """Delete registry's information about image membership."""
273 res = self.do_request("DELETE", "/images/%s/members/%s" %
274 (image_id, member_id))
275 return self.get_status_code(res) == 204
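The removed `RegistryClient` transparently encrypted the `location` field on the way in and decrypted it on the way out. A self-contained sketch of that round-trip; note the real code uses `glance.common.crypt.urlsafe_encrypt` (AES), while base64 stands in here purely to keep the example runnable and reversible:

```python
import base64

# Sketch of the location-metadata round-trip performed by the removed
# RegistryClient.encrypt_metadata()/decrypt_metadata(). base64 is a
# stand-in for the real AES helpers, NOT an equivalent.

def urlsafe_encrypt(key, text):  # stand-in, not glance.common.crypt
    return base64.urlsafe_b64encode(text.encode()).decode()


def urlsafe_decrypt(key, text):  # stand-in
    return base64.urlsafe_b64decode(text.encode()).decode()


def encrypt_metadata(key, image_metadata):
    # Only touch 'location' when an encryption key is configured.
    if key and image_metadata.get('location'):
        image_metadata['location'] = urlsafe_encrypt(
            key, image_metadata['location'])
    return image_metadata


def decrypt_metadata(key, image_metadata):
    if key and image_metadata.get('location'):
        image_metadata['location'] = urlsafe_decrypt(
            key, image_metadata['location'])
    return image_metadata


meta = {'location': 'file:///var/lib/glance/images/abc'}
enc = encrypt_metadata('k' * 16, dict(meta))   # what got sent to registry
dec = decrypt_metadata('k' * 16, dict(enc))    # what callers saw back
```

The real code additionally walked `location_data`, encrypting each entry's `url` the same way and reusing the already-encrypted value when it matched the top-level `location`.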
+0
-0
glance/registry/client/v2/__init__.py less more
(Empty file)
+0
-109
glance/registry/client/v2/api.py less more
0 # Copyright 2013 Red Hat, Inc
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 Registry's Client V2
17 """
18
19 import os
20
21 from oslo_config import cfg
22 from oslo_log import log as logging
23
24 from glance.common import exception
25 from glance.i18n import _
26 from glance.registry.client.v2 import client
27
28 LOG = logging.getLogger(__name__)
29
30 CONF = cfg.CONF
31 _registry_client = 'glance.registry.client'
32 CONF.import_opt('registry_client_protocol', _registry_client)
33 CONF.import_opt('registry_client_key_file', _registry_client)
34 CONF.import_opt('registry_client_ca_file', _registry_client)
35 CONF.import_opt('registry_client_insecure', _registry_client)
36 CONF.import_opt('registry_client_timeout', _registry_client)
37 CONF.import_opt('use_user_token', _registry_client)
38 CONF.import_opt('admin_user', _registry_client)
39 CONF.import_opt('admin_password', _registry_client)
40 CONF.import_opt('admin_tenant_name', _registry_client)
41 CONF.import_opt('auth_url', _registry_client)
42 CONF.import_opt('auth_strategy', _registry_client)
43 CONF.import_opt('auth_region', _registry_client)
44
45 _CLIENT_CREDS = None
46 _CLIENT_HOST = None
47 _CLIENT_PORT = None
48 _CLIENT_KWARGS = {}
49
50
51 def configure_registry_client():
52 """
53 Sets up a registry client for use in registry lookups
54 """
55 global _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
56 try:
57 host, port = CONF.registry_host, CONF.registry_port
58 except cfg.ConfigFileValueError:
59 msg = _("Configuration option was not valid")
60 LOG.error(msg)
61 raise exception.BadRegistryConnectionConfiguration(msg)
62 except IndexError:
63 msg = _("Could not find required configuration option")
64 LOG.error(msg)
65 raise exception.BadRegistryConnectionConfiguration(msg)
66
67 _CLIENT_HOST = host
68 _CLIENT_PORT = port
69 _CLIENT_KWARGS = {
70 'use_ssl': CONF.registry_client_protocol.lower() == 'https',
71 'key_file': CONF.registry_client_key_file,
72 'cert_file': CONF.registry_client_cert_file,
73 'ca_file': CONF.registry_client_ca_file,
74 'insecure': CONF.registry_client_insecure,
75 'timeout': CONF.registry_client_timeout,
76 }
77
78 if not CONF.use_user_token:
79 configure_registry_admin_creds()
80
81
82 def configure_registry_admin_creds():
83 global _CLIENT_CREDS
84
85 if CONF.auth_url or os.getenv('OS_AUTH_URL'):
86 strategy = 'keystone'
87 else:
88 strategy = CONF.auth_strategy
89
90 _CLIENT_CREDS = {
91 'user': CONF.admin_user,
92 'password': CONF.admin_password,
93 'username': CONF.admin_user,
94 'tenant': CONF.admin_tenant_name,
95 'auth_url': os.getenv('OS_AUTH_URL') or CONF.auth_url,
96 'strategy': strategy,
97 'region': CONF.auth_region,
98 }
99
100
101 def get_registry_client(cxt):
102 global _CLIENT_CREDS, _CLIENT_KWARGS, _CLIENT_HOST, _CLIENT_PORT
103 kwargs = _CLIENT_KWARGS.copy()
104 if CONF.use_user_token:
105 kwargs['auth_token'] = cxt.auth_token
106 if _CLIENT_CREDS:
107 kwargs['creds'] = _CLIENT_CREDS
108 return client.RegistryClient(_CLIENT_HOST, _CLIENT_PORT, **kwargs)
+0
-27
glance/registry/client/v2/client.py less more
0 # Copyright 2013 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 """
16 Simple client class to speak with any RESTful service that implements
17 the Glance Registry API
18 """
19
20 from glance.common import rpc
21
22
23 class RegistryClient(rpc.RPCClient):
24 """Registry's V2 Client."""
25
26 DEFAULT_PORT = 9191
2929 else:
3030 eventlet.patcher.monkey_patch()
3131
32 import glance.async_
33 # NOTE(danms): Default to eventlet threading for tests
34 glance.async_.set_threadpool_model('eventlet')
35
3236 # See http://code.google.com/p/python-nose/issues/detail?id=373
3337 # The code below enables tests to work with i18n _() blocks
3438 import six.moves.builtins as __builtin__
99 "modify_image": "",
1010 "publicize_image": "",
1111 "communitize_image": "",
12 "copy_from": "",
1312
1413 "download_image": "",
1514 "upload_image": "",
3333 import subprocess
3434 import sys
3535 import tempfile
36 import textwrap
3637 import time
38 from unittest import mock
39 import uuid
3740
3841 import fixtures
42 import glance_store
3943 from os_win import utilsfactory as os_win_utilsfactory
4044 from oslo_config import cfg
4145 from oslo_serialization import jsonutils
4347 from six.moves import range
4448 import six.moves.urllib.parse as urlparse
4549 import testtools
46
50 import webob
51
52 from glance.common import config
4753 from glance.common import utils
54 from glance.common import wsgi
4855 from glance.db.sqlalchemy import api as db_api
4956 from glance import tests as glance_tests
5057 from glance.tests import utils as test_utils
5966
6067
6168 CONF = cfg.CONF
69
70
71 import glance.async_
72 # NOTE(danms): Default to eventlet threading for tests
73 try:
74 glance.async_.set_threadpool_model('eventlet')
75 except RuntimeError:
76 pass
6277
6378
6479 @six.add_metaclass(abc.ABCMeta)
87102 self.show_image_direct_url = False
88103 self.show_multiple_locations = False
89104 self.property_protection_file = ''
90 self.enable_v2_api = True
91105 self.needs_database = False
92106 self.log_file = None
93107 self.sock = sock
95109 self.process_pid = None
96110 self.server_module = None
97111 self.stop_kill = False
98 self.use_user_token = True
99112 self.send_identity_credentials = False
100113
101114 def write_conf(self, **kwargs):
267280 else:
268281 rc = test_utils.wait_for_fork(
269282 self.process_pid,
270 expected_exitcode=expected_exitcode)
283 expected_exitcode=expected_exitcode,
284 force=False)
271285 # avoid an FD leak
272286 if self.sock:
273287 os.close(fd)
283297
284298 if self.stop_kill:
285299 os.kill(self.process_pid, signal.SIGTERM)
286 rc = test_utils.wait_for_fork(self.process_pid, raise_error=False)
300 rc = test_utils.wait_for_fork(self.process_pid, raise_error=False,
301 force=self.stop_kill)
287302 return (rc, '', '')
288303
289304
400415 default_sql_connection = SQLITE_CONN_TEMPLATE % self.test_dir
401416 self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION',
402417 default_sql_connection)
403 self.data_api = kwargs.get("data_api",
404 "glance.db.sqlalchemy.api")
405418 self.user_storage_quota = '0'
406419 self.lock_path = self.test_dir
407420
412425
413426 self.conf_base = """[DEFAULT]
414427 debug = %(debug)s
415 default_log_levels = eventlet.wsgi.server=DEBUG
428 default_log_levels = eventlet.wsgi.server=DEBUG,stevedore.extension=INFO
416429 bind_host = %(bind_host)s
417430 bind_port = %(bind_port)s
418431 metadata_encryption_key = %(metadata_encryption_key)s
419 use_user_token = %(use_user_token)s
420432 send_identity_credentials = %(send_identity_credentials)s
421433 log_file = %(log_file)s
422434 image_size_cap = %(image_size_cap)d
427439 send_identity_headers = %(send_identity_headers)s
428440 image_cache_dir = %(image_cache_dir)s
429441 image_cache_driver = %(image_cache_driver)s
430 data_api = %(data_api)s
431442 sql_connection = %(sql_connection)s
432443 show_image_direct_url = %(show_image_direct_url)s
433444 show_multiple_locations = %(show_multiple_locations)s
434445 user_storage_quota = %(user_storage_quota)s
435 enable_v2_api = %(enable_v2_api)s
436446 lock_path = %(lock_path)s
437447 property_protection_file = %(property_protection_file)s
438448 property_protection_rule_format = %(property_protection_rule_format)s
488498 [composite:rootapp]
489499 paste.composite_factory = glance.api:root_app_factory
490500 /: apiversions
491 /v1: apiv1app
492501 /v2: apiv2app
493502
494503 [app:apiversions]
495504 paste.app_factory = glance.api.versions:create_resource
496
497 [app:apiv1app]
498 paste.app_factory = glance.api.v1.router:API.factory
499505
500506 [app:apiv2app]
501507 paste.app_factory = glance.api.v2.router:API.factory
578584 default_sql_connection = SQLITE_CONN_TEMPLATE % self.test_dir
579585 self.sql_connection = os.environ.get('GLANCE_TEST_SQL_CONNECTION',
580586 default_sql_connection)
581 self.data_api = kwargs.get("data_api",
582 "glance.db.sqlalchemy.api")
583587 self.user_storage_quota = '0'
584588 self.lock_path = self.test_dir
585589
590594
591595 self.conf_base = """[DEFAULT]
592596 debug = %(debug)s
593 default_log_levels = eventlet.wsgi.server=DEBUG
597 default_log_levels = eventlet.wsgi.server=DEBUG,stevedore.extension=INFO
594598 bind_host = %(bind_host)s
595599 bind_port = %(bind_port)s
596600 metadata_encryption_key = %(metadata_encryption_key)s
597 use_user_token = %(use_user_token)s
598601 send_identity_credentials = %(send_identity_credentials)s
599602 log_file = %(log_file)s
600603 image_size_cap = %(image_size_cap)d
605608 send_identity_headers = %(send_identity_headers)s
606609 image_cache_dir = %(image_cache_dir)s
607610 image_cache_driver = %(image_cache_driver)s
608 data_api = %(data_api)s
609611 sql_connection = %(sql_connection)s
610612 show_image_direct_url = %(show_image_direct_url)s
611613 show_multiple_locations = %(show_multiple_locations)s
612614 user_storage_quota = %(user_storage_quota)s
613 enable_v2_api = %(enable_v2_api)s
614615 lock_path = %(lock_path)s
615616 property_protection_file = %(property_protection_file)s
616617 property_protection_rule_format = %(property_protection_rule_format)s
674675 [composite:rootapp]
675676 paste.composite_factory = glance.api:root_app_factory
676677 /: apiversions
677 /v1: apiv1app
678678 /v2: apiv2app
679679
680680 [app:apiversions]
681681 paste.app_factory = glance.api.versions:create_resource
682
683 [app:apiv1app]
684 paste.app_factory = glance.api.v1.router:API.factory
685682
686683 [app:apiv2app]
687684 paste.app_factory = glance.api.v2.router:API.factory
14761473 self._attached_server_logs.append(s.log_file)
14771474 self.addDetail(
14781475 s.server_name, testtools.content.text_content(s.dump_log()))
1476
1477
1478 class SynchronousAPIBase(test_utils.BaseTestCase):
1479 """A base class that provides synchronous calling into the API.
1480
1481 This provides a way to directly call into the API WSGI stack
1482 without starting a separate server, and with a simple paste
1483 pipeline. Configured with multi-store and a real database.
1484
1485 This differs from the FunctionalTest lineage above in that they
1486 start a full copy of the API server as a separate process, whereas
1487 this calls directly into the WSGI stack. This test base is
1488 appropriate for situations where you need to be able to mock the
1489 state of the world (i.e. warp time, or inject errors) but should
1490 not be used for happy-path testing where FunctionalTest provides
1491 more isolation.
1492
1493 To use this, inherit and run start_server() before you are ready
1494 to make API calls (either in your setUp() or per-test if you need
1495 to change config or mocking).
1496
1497 Once started, use the api_get(), api_put(), api_post(), and
1498 api_delete() methods to make calls to the API.
1499
1500 """
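The key idea in `SynchronousAPIBase` is calling the WSGI application in-process instead of talking to a separate server. A hedged sketch of that calling convention using only the stdlib; the stub app below stands in for the real paste-loaded glance-api pipeline, and `SyncCaller`/`api_get` are illustrative names, not the class's actual internals (which use `webob.Request.blank`):

```python
import json

# Minimal WSGI stand-in for the real glance-api stack.
def fake_glance_api(environ, start_response):
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [json.dumps({'images': []}).encode()]


class SyncCaller:
    """Call a WSGI app directly: no sockets, no child process."""
    def __init__(self, app):
        self.app = app

    def api_get(self, path):
        environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': path}
        captured = {}

        def start_response(status, headers):
            captured['status'] = status
            captured['headers'] = headers

        body = b''.join(self.app(environ, start_response))
        return captured['status'], body


caller = SyncCaller(fake_glance_api)
status, body = caller.api_get('/v2/images')
```

Because the request never leaves the process, tests built this way can mock global state (time, store errors) around the call, which is exactly the trade-off the docstring above describes versus the forked-server `FunctionalTest` classes.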
1501
1502 TENANT = str(uuid.uuid4())
1503
1504 @mock.patch('oslo_db.sqlalchemy.enginefacade.writer.get_engine')
1505 def setup_database(self, mock_get_engine):
1506 """Configure and prepare a fresh sqlite database."""
1507 db_file = 'sqlite:///%s/test.db' % self.test_dir
1508 self.config(connection=db_file, group='database')
1509
1510 # NOTE(danms): Make sure that we clear the current global
1511 # database configuration, provision a temporary database file,
1512 # and run migrations with our configuration to define the
1513 # schema there.
1514 db_api.clear_db_env()
1515 engine = db_api.get_engine()
1516 mock_get_engine.return_value = engine
1517 with mock.patch('logging.config'):
1518 # NOTE(danms): The alembic config in the env module will break our
1519 # BaseTestCase logging setup. So mock that out to prevent it while
1520 # we db_sync.
1521 test_utils.db_sync(engine=engine)
1522
1523 def setup_simple_paste(self):
1524 """Setup a very simple no-auth paste pipeline.
1525
1526 This configures the API to be very direct, including only the
1527 middleware absolutely required for consistent API calls.
1528 """
1529 self.paste_config = os.path.join(self.test_dir, 'glance-api-paste.ini')
1530 with open(self.paste_config, 'w') as f:
1531 f.write(textwrap.dedent("""
1532 [filter:context]
1533 paste.filter_factory = glance.api.middleware.context:\
1534 ContextMiddleware.factory
1535 [filter:fakeauth]
1536 paste.filter_factory = glance.tests.utils:\
1537 FakeAuthMiddleware.factory
1538 [pipeline:glance-api]
1539 pipeline = context rootapp
1540 [composite:rootapp]
1541 paste.composite_factory = glance.api:root_app_factory
1542 /v2: apiv2app
1543 [app:apiv2app]
1544 paste.app_factory = glance.api.v2.router:API.factory
1545 """))
1546
1547 def _store_dir(self, store):
1548 return os.path.join(self.test_dir, store)
1549
1550 def setup_stores(self):
1551 """Configures multiple backend stores.
1552
1553 This configures the API with three file-backed stores (store1,
1554 store2, and store3) as well as a os_glance_staging_store for
1555 imports.
1556
1557 """
1558 self.config(enabled_backends={'store1': 'file', 'store2': 'file',
1559 'store3': 'file'})
1560 glance_store.register_store_opts(CONF,
1561 reserved_stores=wsgi.RESERVED_STORES)
1562 self.config(default_backend='store1',
1563 group='glance_store')
1564 self.config(filesystem_store_datadir=self._store_dir('store1'),
1565 group='store1')
1566 self.config(filesystem_store_datadir=self._store_dir('store2'),
1567 group='store2')
1568 self.config(filesystem_store_datadir=self._store_dir('store3'),
1569 group='store3')
1570 self.config(filesystem_store_datadir=self._store_dir('staging'),
1571 group='os_glance_staging_store')
1572
1573 glance_store.create_multi_stores(CONF,
1574 reserved_stores=wsgi.RESERVED_STORES)
1575 glance_store.verify_store()
1576
1577 def setUp(self):
1578 super(SynchronousAPIBase, self).setUp()
1579
1580 self.setup_database()
1581 self.setup_simple_paste()
1582 self.setup_stores()
1583
1584 def start_server(self):
1585 """Builds and "starts" the API server.
1586
1587 Note that this doesn't actually "start" anything like
1588 FunctionalTest does above, but that terminology is used here
1589 to make it seem like the same sort of pattern.
1590 """
1591 config.set_config_defaults()
1592 self.api = config.load_paste_app('glance-api',
1593 conf_file=self.paste_config)
1594
1595 def _headers(self, custom_headers=None):
1596 base_headers = {
1597 'X-Identity-Status': 'Confirmed',
1598 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
1599 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
1600 'X-Tenant-Id': self.TENANT,
1601 'Content-Type': 'application/json',
1602 'X-Roles': 'admin',
1603 }
1604 base_headers.update(custom_headers or {})
1605 return base_headers
1606
1607 def api_request(self, method, url, headers=None, data=None,
1608 json=None, body_file=None):
1609 """Perform a request against the API.
1610
1611 NOTE: Most code should use api_get(), api_post(), api_put(),
1612 or api_delete() instead!
1613
1614 :param method: The HTTP method to use (i.e. GET, POST, etc)
1615 :param url: The *path* part of the URL to call (i.e. /v2/images)
1616 :param headers: Optional updates to the default set of headers
1617 :param data: Optional bytes data payload to send (overrides @json)
1618 :param json: Optional dict structure to be jsonified and sent as
1619 the payload (mutually exclusive with @data)
1620 :param body_file: Optional io.IOBase to provide as the input data
1621 stream for the request (overrides @data)
1622 :returns: A webob.Response object
1623 """
1624 headers = self._headers(headers)
1625 req = webob.Request.blank(url, method=method,
1626 headers=headers)
1627 if json and not data:
1628 data = jsonutils.dumps(json).encode()
1629 if data and not body_file:
1630 req.body = data
1631 elif body_file:
1632 req.body_file = body_file
1633 return self.api(req)
1634
1635 def api_get(self, url, headers=None):
1636 """Perform a GET request against the API.
1637
1638 :param url: The *path* part of the URL to call (i.e. /v2/images)
1639 :param headers: Optional updates to the default set of headers
1640 :returns: A webob.Response object
1641 """
1642 return self.api_request('GET', url, headers=headers)
1643
1644 def api_post(self, url, headers=None, data=None, json=None,
1645 body_file=None):
1646 """Perform a POST request against the API.
1647
1648 :param url: The *path* part of the URL to call (i.e. /v2/images)
1649 :param headers: Optional updates to the default set of headers
1650 :param data: Optional bytes data payload to send (overrides @json)
1651 :param json: Optional dict structure to be jsonified and sent as
1652 the payload (mutually exclusive with @data)
1653 :param body_file: Optional io.IOBase to provide as the input data
1654 stream for the request (overrides @data)
1655 :returns: A webob.Response object
1656 """
1657 return self.api_request('POST', url, headers=headers,
1658 data=data, json=json,
1659 body_file=body_file)
1660
1661 def api_put(self, url, headers=None, data=None, json=None, body_file=None):
1662 """Perform a PUT request against the API.
1663
1664 :param url: The *path* part of the URL to call (i.e. /v2/images)
1665 :param headers: Optional updates to the default set of headers
1666 :param data: Optional bytes data payload to send (overrides @json,
1667 mutually exclusive with body_file)
1668 :param json: Optional dict structure to be jsonified and sent as
1669 the payload (mutually exclusive with @data)
1670 :param body_file: Optional io.IOBase to provide as the input data
1671 stream for the request (overrides @data)
1672 :returns: A webob.Response object
1673 """
1674 return self.api_request('PUT', url, headers=headers,
1675 data=data, json=json,
1676 body_file=body_file)
1677
1678 def api_delete(self, url, headers=None):
1679 """Perform a DELETE request against the API.
1680
1681 :param url: The *path* part of the URL to call (i.e. /v2/images)
1682 :param headers: Optional updates to the default set of headers
1683 :returns: A webob.Response object
1684 """
1685 return self.api_request('DELETE', url, headers=headers)
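The body-selection precedence in api_request() above (body_file overrides data, which in turn overrides json) can be sketched as a standalone stub. This is an illustrative helper, not Glance code; build_body() and its names are assumptions, and webob is replaced by plain bytes so the sketch runs on its own.

```python
import io
import json as jsonlib


def build_body(data=None, json=None, body_file=None):
    """Return the request body bytes, mirroring api_request()'s precedence:
    body_file overrides data, which overrides json."""
    if json is not None and data is None:
        # jsonify the dict payload, as api_request() does with jsonutils
        data = jsonlib.dumps(json).encode()
    if body_file is not None:
        # a stream wins over an inline payload
        return body_file.read()
    return data


print(build_body(json={'name': 'img'}))                 # b'{"name": "img"}'
print(build_body(data=b'raw', json={'ignored': True}))  # b'raw'
print(build_body(body_file=io.BytesIO(b'streamed')))    # b'streamed'
```
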
+0
-91
glance/tests/functional/db/test_simple.py
0 # Copyright 2012 OpenStack Foundation
1 # Copyright 2013 IBM Corp.
2 # All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License"); you may
5 # not use this file except in compliance with the License. You may obtain
6 # a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13 # License for the specific language governing permissions and limitations
14 # under the License.
15
16 from glance.api import CONF
17 import glance.db.simple.api
18 import glance.tests.functional.db as db_tests
19 from glance.tests.functional.db import base
20
21
22 def get_db(config, workers=1):
23 CONF.set_override('data_api', 'glance.db.simple.api')
24 CONF.set_override('workers', workers)
25 db_api = glance.db.get_api()
26 return db_api
27
28
29 def reset_db(db_api):
30 db_api.reset()
31
32
33 class TestSimpleDriver(base.TestDriver,
34 base.DriverTests,
35 base.FunctionalInitWrapper):
36
37 def setUp(self):
38 db_tests.load(get_db, reset_db)
39 super(TestSimpleDriver, self).setUp()
40 self.addCleanup(db_tests.reset)
41
42
43 class TestSimpleQuota(base.DriverQuotaTests,
44 base.FunctionalInitWrapper):
45
46 def setUp(self):
47 db_tests.load(get_db, reset_db)
48 super(TestSimpleQuota, self).setUp()
49 self.addCleanup(db_tests.reset)
50
51
52 class TestSimpleVisibility(base.TestVisibility,
53 base.VisibilityTests,
54 base.FunctionalInitWrapper):
55
56 def setUp(self):
57 db_tests.load(get_db, reset_db)
58 super(TestSimpleVisibility, self).setUp()
59 self.addCleanup(db_tests.reset)
60
61
62 class TestSimpleMembershipVisibility(base.TestMembershipVisibility,
63 base.MembershipVisibilityTests,
64 base.FunctionalInitWrapper):
65
66 def setUp(self):
67 db_tests.load(get_db, reset_db)
68 super(TestSimpleMembershipVisibility, self).setUp()
69 self.addCleanup(db_tests.reset)
70
71
72 class TestSimpleTask(base.TaskTests,
73 base.FunctionalInitWrapper):
74
75 def setUp(self):
76 db_tests.load(get_db, reset_db)
77 super(TestSimpleTask, self).setUp()
78 self.addCleanup(db_tests.reset)
79
80
81 class TestTooManyWorkers(base.TaskTests):
82
83 def setUp(self):
84 def get_db_too_many_workers(config):
85 self.assertRaises(SystemExit, get_db, config, 2)
86 return get_db(config)
87
88 db_tests.load(get_db_too_many_workers, reset_db)
89 super(TestTooManyWorkers, self).setUp()
90 self.addCleanup(db_tests.reset)
169169 db_tests.load(get_db, reset_db_metadef)
170170 super(TestMetadefSqlAlchemyDriver, self).setUp()
171171 self.addCleanup(db_tests.reset)
172
173
174 class TestImageAtomicOps(base.TestDriver,
175 base.FunctionalInitWrapper):
176
177 def setUp(self):
178 db_tests.load(get_db, reset_db)
179 super(TestImageAtomicOps, self).setUp()
180
181 self.addCleanup(db_tests.reset)
182 self.image = self.db_api.image_create(
183 self.adm_context,
184 {'status': 'active',
185 'owner': self.adm_context.owner,
186 'properties': {'speed': '88mph'}})
187
188 @staticmethod
189 def _propdict(list_of_props):
190 """
191 Convert a list of ImageProperty objects to dict, ignoring
192 deleted values.
193 """
194 return {x.name: x.value
195 for x in list_of_props
196 if x.deleted == 0}
197
198 def assertOnlyImageHasProp(self, image_id, name, value):
199 images_with_prop = self.db_api.image_get_all(
200 self.adm_context,
201 {'properties': {name: value}})
202 self.assertEqual(1, len(images_with_prop))
203 self.assertEqual(image_id, images_with_prop[0]['id'])
204
205 def test_update(self):
206 """Try to double-create a property atomically.
207
208 This should ensure that a second attempt to create the property
209 atomically fails with Duplicate.
210 """
211
212 # Atomically create the property
213 self.db_api.image_set_property_atomic(self.image['id'],
214 'test_property', 'foo')
215
216 # Make sure only the matched image got it
217 self.assertOnlyImageHasProp(self.image['id'], 'test_property', 'foo')
218
219 # Trying again should fail
220 self.assertRaises(exception.Duplicate,
221 self.db_api.image_set_property_atomic,
222 self.image['id'], 'test_property', 'bar')
223
224 # Ensure that only the first one stuck
225 image = self.db_api.image_get(self.adm_context, self.image['id'])
226 self.assertEqual({'speed': '88mph', 'test_property': 'foo'},
227 self._propdict(image['properties']))
228 self.assertOnlyImageHasProp(self.image['id'], 'test_property', 'foo')
229
230 def test_update_drop_update(self):
231 """Try to create, delete, re-create property atomically.
232
233 If we fail to undelete and claim the property, this will
234 fail as duplicate.
235 """
236
237 # Atomically create the property
238 self.db_api.image_set_property_atomic(self.image['id'],
239 'test_property', 'foo')
240
241 # Ensure that it stuck
242 image = self.db_api.image_get(self.adm_context, self.image['id'])
243 self.assertEqual({'speed': '88mph', 'test_property': 'foo'},
244 self._propdict(image['properties']))
245 self.assertOnlyImageHasProp(self.image['id'], 'test_property', 'foo')
246
247 # Update the image with the property removed, like image_repo.save()
248 new_props = self._propdict(image['properties'])
249 del new_props['test_property']
250 self.db_api.image_update(self.adm_context, self.image['id'],
251 values={'properties': new_props},
252 purge_props=True)
253
254 # Make sure that a fetch shows the property deleted
255 image = self.db_api.image_get(self.adm_context, self.image['id'])
256 self.assertEqual({'speed': '88mph'},
257 self._propdict(image['properties']))
258
259 # Atomically update the property, which still exists, but is
260 # deleted
261 self.db_api.image_set_property_atomic(self.image['id'],
262 'test_property', 'bar')
263
264 # Make sure we updated the property and undeleted it
265 image = self.db_api.image_get(self.adm_context, self.image['id'])
266 self.assertEqual({'speed': '88mph', 'test_property': 'bar'},
267 self._propdict(image['properties']))
268 self.assertOnlyImageHasProp(self.image['id'], 'test_property', 'bar')
269
270 def test_update_prop_multiple_images(self):
271 """Create and delete properties on two images, then set on one.
272
273 This tests that the resurrect-from-deleted mode of the method only
274 matches deleted properties from our image.
275 """
276
277 images = self.db_api.image_get_all(self.adm_context)
278
279 image_id1 = images[0]['id']
280 image_id2 = images[-1]['id']
281
282 # Atomically create the property on each image
283 self.db_api.image_set_property_atomic(image_id1,
284 'test_property', 'foo')
285 self.db_api.image_set_property_atomic(image_id2,
286 'test_property', 'bar')
287
288 # Make sure they got the right property value each
289 self.assertOnlyImageHasProp(image_id1, 'test_property', 'foo')
290 self.assertOnlyImageHasProp(image_id2, 'test_property', 'bar')
291
292 # Delete the property on both images
293 self.db_api.image_update(self.adm_context, image_id1,
294 {'properties': {}},
295 purge_props=True)
296 self.db_api.image_update(self.adm_context, image_id2,
297 {'properties': {}},
298 purge_props=True)
299
300 # Set the property value on one of the images. Both will have a
301 # deleted previous value for the property, but only one should
302 # be updated
303 self.db_api.image_set_property_atomic(image_id2,
304 'test_property', 'baz')
305
306 # Make sure the update affected only the intended image
307 self.assertOnlyImageHasProp(image_id2, 'test_property', 'baz')
308
309 def test_delete(self):
310 """Try to double-delete a property atomically.
311
312 This should ensure that a second attempt fails.
313 """
314
315 self.db_api.image_delete_property_atomic(self.image['id'],
316 'speed', '88mph')
317
318 self.assertRaises(exception.NotFound,
319 self.db_api.image_delete_property_atomic,
320 self.image['id'], 'speed', '88mph')
321
322 def test_delete_create_delete(self):
323 """Try to delete, re-create, and then re-delete property."""
324 self.db_api.image_delete_property_atomic(self.image['id'],
325 'speed', '88mph')
326 self.db_api.image_update(self.adm_context, self.image['id'],
327 {'properties': {'speed': '89mph'}},
328 purge_props=True)
329
330 # We should no longer be able to delete the property by the *old*
331 # value
332 self.assertRaises(exception.NotFound,
333 self.db_api.image_delete_property_atomic,
334 self.image['id'], 'speed', '88mph')
335
336 # Only the new value should result in proper deletion
337 self.db_api.image_delete_property_atomic(self.image['id'],
338 'speed', '89mph')
339
340 def test_image_update_ignores_atomics(self):
341 image = self.db_api.image_get_all(self.adm_context)[0]
342
343 # Set two atomic properties atomically
344 self.db_api.image_set_property_atomic(image['id'],
345 'test1', 'foo')
346 self.db_api.image_set_property_atomic(image['id'],
347 'test2', 'bar')
348
349 # Try to change test1, delete test2, add test3 and test4 via
350 # normal image_update() where the first three are passed as
351 # atomic
352 self.db_api.image_update(
353 self.adm_context, image['id'],
354 {'properties': {'test1': 'baz', 'test3': 'bat', 'test4': 'yep'}},
355 purge_props=True, atomic_props=['test1', 'test2', 'test3'])
356
357 # Expect that none of the updates to the atomics are applied, but
358 # the regular property is added.
359 image = self.db_api.image_get(self.adm_context, image['id'])
360 self.assertEqual({'test1': 'foo', 'test2': 'bar', 'test4': 'yep'},
361 self._propdict(image['properties']))
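The semantics exercised by TestImageAtomicOps above can be modeled with a small in-memory store: setting a live property a second time raises Duplicate, while a soft-deleted row is resurrected with the new value. This is a hedged sketch of the behavior under test, not the real SQLAlchemy implementation; PropStore and its method names are illustrative.

```python
class Duplicate(Exception):
    pass


class PropStore:
    def __init__(self):
        # name -> (value, deleted) mimicking soft-deleted property rows
        self.props = {}

    def set_atomic(self, name, value):
        current = self.props.get(name)
        if current and not current[1]:
            # a live (non-deleted) row already exists
            raise Duplicate(name)
        # create a new row, or resurrect a soft-deleted one
        self.props[name] = (value, False)

    def delete(self, name):
        value, _ = self.props[name]
        self.props[name] = (value, True)


store = PropStore()
store.set_atomic('test_property', 'foo')
try:
    store.set_atomic('test_property', 'bar')
except Duplicate:
    print('duplicate')            # a second live create fails
store.delete('test_property')
store.set_atomic('test_property', 'bar')  # resurrects the deleted row
print(store.props['test_property'])       # ('bar', False)
```
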
1616 import time
1717
1818 from oslo_serialization import jsonutils
19 from oslo_utils import timeutils
1920 import requests
2021 from six.moves import http_client as http
2122
128129 entity_id = request_path.rsplit('/', 1)[1]
129130 msg = "Entity {0} failed to copy image to stores '{1}' within {2} sec"
130131 raise Exception(msg.format(entity_id, ",".join(stores), max_sec))
132
133
134 def poll_entity(url, headers, callback, max_sec=10, delay_sec=0.2,
135 require_success=True):
136 """Poll a given URL passing the parsed entity to a callback.
137
138 This is a utility method that repeatedly GETs a URL, and calls
139 a callback with the result. The callback determines if we should
140 keep polling by returning True (up to the timeout).
141
142 :param url: The url to fetch
143 :param headers: The request headers to use for the fetch
144 :param callback: A function that takes the parsed entity and is expected
145 to return True if we should keep polling
146 :param max_sec: The overall timeout before we fail
147 :param delay_sec: The time between fetches
148 :param require_success: Assert resp_code is http.OK each time before
149 calling the callback
150 """
151
152 timer = timeutils.StopWatch(max_sec)
153 timer.start()
154
155 while not timer.expired():
156 resp = requests.get(url, headers=headers)
157 if require_success and resp.status_code != http.OK:
158 raise Exception(
159 'Received %i response from server' % resp.status_code)
160 entity = resp.json()
161 keep_polling = callback(entity)
162 if keep_polling is not True:
163 return keep_polling
164 time.sleep(delay_sec)
165
166 raise Exception('Poll timeout of %i seconds exceeded!' % max_sec)
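The poll_entity() loop above can be reduced to a self-contained sketch: repeatedly fetch an entity and hand it to a callback; any return value other than True ends the poll and is passed back to the caller. A fake fetcher stands in for requests.get() here, and poll() is an illustrative name, not the Glance helper.

```python
import time


def poll(fetch, callback, max_sec=2.0, delay_sec=0.01):
    """Keep fetching until the callback returns something other than True."""
    deadline = time.monotonic() + max_sec
    while time.monotonic() < deadline:
        entity = fetch()
        keep_polling = callback(entity)
        if keep_polling is not True:
            # same convention as poll_entity(): only the literal True
            # keeps the loop going; anything else is the result
            return keep_polling
        time.sleep(delay_sec)
    raise Exception('Poll timeout of %i seconds exceeded!' % max_sec)


# Simulate an image moving through statuses until it becomes active
statuses = iter(['queued', 'importing', 'active'])
result = poll(lambda: {'status': next(statuses)},
              lambda e: True if e['status'] != 'active' else e['status'])
print(result)  # active
```
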
7878 """
7979 self.cleanup()
8080 kwargs = self.__dict__.copy()
81 kwargs['use_user_token'] = True
8281 self.start_servers(delayed_delete=True, daemon=True,
8382 metadata_encryption_key='', **kwargs)
8483 path = "http://%s:%d/v2/images" % ("127.0.0.1", self.api_port)
114113 """
115114 self.cleanup()
116115 kwargs = self.__dict__.copy()
117 kwargs['use_user_token'] = True
118116 self.start_servers(delayed_delete=True, daemon=False,
119117 metadata_encryption_key='', **kwargs)
120118 path = "http://%s:%d/v2/images" % ("127.0.0.1", self.api_port)
163161 # Start servers.
164162 self.cleanup()
165163 kwargs = self.__dict__.copy()
166 kwargs['use_user_token'] = True
167164 self.start_servers(delayed_delete=True, daemon=False,
168165 default_store='file', **kwargs)
169166
241238 def test_scrubber_restore_image(self):
242239 self.cleanup()
243240 kwargs = self.__dict__.copy()
244 kwargs['use_user_token'] = True
245241 self.start_servers(delayed_delete=True, daemon=False,
246242 metadata_encryption_key='', **kwargs)
247243 path = "http://%s:%d/v2/images" % ("127.0.0.1", self.api_port)
1616 """
1717 Utility methods to set testcases up for Swift tests.
1818 """
19
20 from __future__ import print_function
2119
2220 import threading
2321
9999 self.assertEqual(versions, content)
100100
101101 def test_v2_api_configuration(self):
102 self.api_server.enable_v2_api = True
103102 self.start_servers(**self.__dict__.copy())
104103
105104 url = 'http://127.0.0.1:%d/v%%s/' % self.api_port
1414
1515 import hashlib
1616 import os
17 import subprocess
18 import tempfile
19 import time
1720 import uuid
1821
1922 from oslo_serialization import jsonutils
23 from oslo_utils import units
2024 import requests
2125 import six
2226 from six.moves import http_client as http
3539 TENANT4 = str(uuid.uuid4())
3640
3741
42 def get_auth_header(tenant, tenant_id=None, role='', headers=None):
43 """Return headers to authenticate as a specific tenant.
44
45 :param tenant: Tenant for the auth token
46 :param tenant_id: Optional tenant ID for the X-Tenant-Id header
47 :param role: Optional user role
48 :param headers: Optional dict of headers to update and return
49 """
50 if not headers:
51 headers = {}
52 auth_token = 'user:%s:%s' % (tenant, role)
53 headers.update({'X-Auth-Token': auth_token})
54 if tenant_id:
55 headers.update({'X-Tenant-Id': tenant_id})
56 return headers
57
58
3859 class TestImages(functional.FunctionalTest):
3960
4061 def setUp(self):
4263 self.cleanup()
4364 self.include_scrubber = False
4465 self.api_server.deployment_flavor = 'noauth'
45 self.api_server.data_api = 'glance.db.sqlalchemy.api'
4666 for i in range(3):
4767 ret = test_utils.start_http_server("foo_image_id%d" % i,
4868 "foo_image%d" % i)
4969 setattr(self, 'http_server%d' % i, ret[1])
5070 setattr(self, 'http_port%d' % i, ret[2])
51 self.api_server.use_user_token = True
5271 self.api_server.send_identity_credentials = True
5372
5473 def tearDown(self):
873892
874893 self.stop_servers()
875894
895 def _create_qcow(self, size):
896 fn = tempfile.mktemp(prefix='glance-unittest-images-',
897 suffix='.qcow')
898 subprocess.check_output(
899 'qemu-img create -f qcow %s %i' % (fn, size),
900 shell=True)
901 return fn
902
903 def test_image_upload_qcow_virtual_size_calculation(self):
904 self.start_servers(**self.__dict__.copy())
905
906 # Create an image
907 headers = self._headers({'Content-Type': 'application/json'})
908 data = jsonutils.dumps({'name': 'myqcow', 'disk_format': 'qcow2',
909 'container_format': 'bare'})
910 response = requests.post(self._url('/v2/images'),
911 headers=headers, data=data)
912 self.assertEqual(http.CREATED, response.status_code,
913 'Failed to create: %s' % response.text)
914 image = response.json()
915
916 # Upload a qcow
917 fn = self._create_qcow(128 * units.Mi)
918 raw_size = os.path.getsize(fn)
919 headers = self._headers({'Content-Type': 'application/octet-stream'})
920 response = requests.put(self._url('/v2/images/%s/file' % image['id']),
921 headers=headers,
922 data=open(fn, 'rb').read())
923 os.remove(fn)
924 self.assertEqual(http.NO_CONTENT, response.status_code)
925
926 # Check the image attributes
927 response = requests.get(self._url('/v2/images/%s' % image['id']),
928 headers=self._headers())
929 self.assertEqual(http.OK, response.status_code)
930 image = response.json()
931 self.assertEqual(128 * units.Mi, image['virtual_size'])
932 self.assertEqual(raw_size, image['size'])
933
934 def test_image_import_qcow_virtual_size_calculation(self):
935 self.start_servers(**self.__dict__.copy())
936
937 # Create an image
938 headers = self._headers({'Content-Type': 'application/json'})
939 data = jsonutils.dumps({'name': 'myqcow', 'disk_format': 'qcow2',
940 'container_format': 'bare'})
941 response = requests.post(self._url('/v2/images'),
942 headers=headers, data=data)
943 self.assertEqual(http.CREATED, response.status_code,
944 'Failed to create: %s' % response.text)
945 image = response.json()
946
947 # Stage a qcow
948 fn = self._create_qcow(128 * units.Mi)
949 raw_size = os.path.getsize(fn)
950 headers = self._headers({'Content-Type': 'application/octet-stream'})
951 response = requests.put(self._url('/v2/images/%s/stage' % image['id']),
952 headers=headers,
953 data=open(fn, 'rb').read())
954 os.remove(fn)
955 self.assertEqual(http.NO_CONTENT, response.status_code)
956
957 # Verify image is in uploading state and checksum is None
958 func_utils.verify_image_hashes_and_status(self, image['id'],
959 status='uploading')
960
961 # Import image to store
962 path = self._url('/v2/images/%s/import' % image['id'])
963 headers = self._headers({
964 'content-type': 'application/json',
965 'X-Roles': 'admin',
966 })
967 data = jsonutils.dumps({'method': {
968 'name': 'glance-direct'
969 }})
970 response = requests.post(
971 self._url('/v2/images/%s/import' % image['id']),
972 headers=headers, data=data)
973 self.assertEqual(http.ACCEPTED, response.status_code)
974
975 # Verify image is in active state and checksum is set
976 # NOTE(abhishekk): As import is an async call, we need to allow
977 # some time for it to complete.
978 path = self._url('/v2/images/%s' % image['id'])
979 func_utils.wait_for_status(request_path=path,
980 request_headers=self._headers(),
981 status='active',
982 max_sec=15,
983 delay_sec=0.2)
984
985 # Check the image attributes
986 response = requests.get(self._url('/v2/images/%s' % image['id']),
987 headers=self._headers())
988 self.assertEqual(http.OK, response.status_code)
989 image = response.json()
990 self.assertEqual(128 * units.Mi, image['virtual_size'])
991 self.assertEqual(raw_size, image['size'])
992
876993 def test_hidden_images(self):
877994 # Image list should be empty
878995 self.api_server.show_multiple_locations = True
33403457 self.api_server.deployment_flavor = 'fakeauth'
33413458
33423459 kwargs = self.__dict__.copy()
3343 kwargs['use_user_token'] = True
33443460 self.start_servers(**kwargs)
33453461
33463462 owners = ['admin', 'tenant1', 'tenant2', 'none']
36423758 base_headers.update(custom_headers or {})
36433759 return base_headers
36443760
3645 def test_v2_not_enabled(self):
3646 self.api_server.enable_v2_api = False
3647 self.start_servers(**self.__dict__.copy())
3648 path = self._url('/v2/images')
3649 response = requests.get(path, headers=self._headers())
3650 self.assertEqual(http.MULTIPLE_CHOICES, response.status_code)
3651 self.stop_servers()
3652
3653 def test_v2_enabled(self):
3654 self.api_server.enable_v2_api = True
3655 self.start_servers(**self.__dict__.copy())
3656 path = self._url('/v2/images')
3657 response = requests.get(path, headers=self._headers())
3658 self.assertEqual(http.OK, response.status_code)
3659 self.stop_servers()
3660
36613761 def test_image_direct_url_visible(self):
36623762
36633763 self.api_server.show_image_direct_url = True
39434043
39444044 def test_image_member_lifecycle(self):
39454045
3946 def get_header(tenant, role=''):
3947 auth_token = 'user:%s:%s' % (tenant, role)
3948 headers = {'X-Auth-Token': auth_token}
3949 return headers
3950
39514046 # Image list should be empty
39524047 path = self._url('/v2/images')
3953 response = requests.get(path, headers=get_header('tenant1'))
4048 response = requests.get(path, headers=get_auth_header('tenant1'))
39544049 self.assertEqual(http.OK, response.status_code)
39554050 images = jsonutils.loads(response.text)['images']
39564051 self.assertEqual(0, len(images))
39754070
39764071 # Image list should contain 6 images for tenant1
39774072 path = self._url('/v2/images')
3978 response = requests.get(path, headers=get_header('tenant1'))
4073 response = requests.get(path, headers=get_auth_header('tenant1'))
39794074 self.assertEqual(http.OK, response.status_code)
39804075 images = jsonutils.loads(response.text)['images']
39814076 self.assertEqual(6, len(images))
39824077
39834078 # Image list should contain 3 images for TENANT3
39844079 path = self._url('/v2/images')
3985 response = requests.get(path, headers=get_header(TENANT3))
4080 response = requests.get(path, headers=get_auth_header(TENANT3))
39864081 self.assertEqual(http.OK, response.status_code)
39874082 images = jsonutils.loads(response.text)['images']
39884083 self.assertEqual(3, len(images))
39904085 # Add Image member for tenant1-shared image
39914086 path = self._url('/v2/images/%s/members' % image_fixture[3]['id'])
39924087 body = jsonutils.dumps({'member': TENANT3})
3993 response = requests.post(path, headers=get_header('tenant1'),
4088 response = requests.post(path, headers=get_auth_header('tenant1'),
39944089 data=body)
39954090 self.assertEqual(http.OK, response.status_code)
39964091 image_member = jsonutils.loads(response.text)
40024097
40034098 # Image list should contain 3 images for TENANT3
40044099 path = self._url('/v2/images')
4005 response = requests.get(path, headers=get_header(TENANT3))
4100 response = requests.get(path, headers=get_auth_header(TENANT3))
40064101 self.assertEqual(http.OK, response.status_code)
40074102 images = jsonutils.loads(response.text)['images']
40084103 self.assertEqual(3, len(images))
40104105 # Image list should contain 0 shared images for TENANT3
40114106 # because default is accepted
40124107 path = self._url('/v2/images?visibility=shared')
4013 response = requests.get(path, headers=get_header(TENANT3))
4108 response = requests.get(path, headers=get_auth_header(TENANT3))
40144109 self.assertEqual(http.OK, response.status_code)
40154110 images = jsonutils.loads(response.text)['images']
40164111 self.assertEqual(0, len(images))
40174112
40184113 # Image list should contain 4 images for TENANT3 with status pending
40194114 path = self._url('/v2/images?member_status=pending')
4020 response = requests.get(path, headers=get_header(TENANT3))
4115 response = requests.get(path, headers=get_auth_header(TENANT3))
40214116 self.assertEqual(http.OK, response.status_code)
40224117 images = jsonutils.loads(response.text)['images']
40234118 self.assertEqual(4, len(images))
40244119
40254120 # Image list should contain 4 images for TENANT3 with status all
40264121 path = self._url('/v2/images?member_status=all')
4027 response = requests.get(path, headers=get_header(TENANT3))
4122 response = requests.get(path, headers=get_auth_header(TENANT3))
40284123 self.assertEqual(http.OK, response.status_code)
40294124 images = jsonutils.loads(response.text)['images']
40304125 self.assertEqual(4, len(images))
40324127 # Image list should contain 1 image for TENANT3 with status pending
40334128 # and visibility shared
40344129 path = self._url('/v2/images?member_status=pending&visibility=shared')
4035 response = requests.get(path, headers=get_header(TENANT3))
4130 response = requests.get(path, headers=get_auth_header(TENANT3))
40364131 self.assertEqual(http.OK, response.status_code)
40374132 images = jsonutils.loads(response.text)['images']
40384133 self.assertEqual(1, len(images))
40414136 # Image list should contain 0 image for TENANT3 with status rejected
40424137 # and visibility shared
40434138 path = self._url('/v2/images?member_status=rejected&visibility=shared')
4044 response = requests.get(path, headers=get_header(TENANT3))
4139 response = requests.get(path, headers=get_auth_header(TENANT3))
40454140 self.assertEqual(http.OK, response.status_code)
40464141 images = jsonutils.loads(response.text)['images']
40474142 self.assertEqual(0, len(images))
40494144 # Image list should contain 0 image for TENANT3 with status accepted
40504145 # and visibility shared
40514146 path = self._url('/v2/images?member_status=accepted&visibility=shared')
4052 response = requests.get(path, headers=get_header(TENANT3))
4147 response = requests.get(path, headers=get_auth_header(TENANT3))
40534148 self.assertEqual(http.OK, response.status_code)
40544149 images = jsonutils.loads(response.text)['images']
40554150 self.assertEqual(0, len(images))
40574152 # Image list should contain 0 image for TENANT3 with status accepted
40584153 # and visibility private
40594154 path = self._url('/v2/images?visibility=private')
4060 response = requests.get(path, headers=get_header(TENANT3))
4155 response = requests.get(path, headers=get_auth_header(TENANT3))
40614156 self.assertEqual(http.OK, response.status_code)
40624157 images = jsonutils.loads(response.text)['images']
40634158 self.assertEqual(0, len(images))
40644159
40654160 # Image tenant2-shared's image members list should contain no members
40664161 path = self._url('/v2/images/%s/members' % image_fixture[7]['id'])
4067 response = requests.get(path, headers=get_header('tenant2'))
4162 response = requests.get(path, headers=get_auth_header('tenant2'))
40684163 self.assertEqual(http.OK, response.status_code)
40694164 body = jsonutils.loads(response.text)
40704165 self.assertEqual(0, len(body['members']))
40734168 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
40744169 TENANT3))
40754170 body = jsonutils.dumps({'status': 'accepted'})
4076 response = requests.put(path, headers=get_header('tenant1'), data=body)
4171 response = requests.put(path, headers=get_auth_header('tenant1'),
4172 data=body)
40774173 self.assertEqual(http.FORBIDDEN, response.status_code)
40784174
40794175 # Tenant 1, who is the owner can get status of its own image member
40804176 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
40814177 TENANT3))
4082 response = requests.get(path, headers=get_header('tenant1'))
4178 response = requests.get(path, headers=get_auth_header('tenant1'))
40834179 self.assertEqual(http.OK, response.status_code)
40844180 body = jsonutils.loads(response.text)
40854181 self.assertEqual('pending', body['status'])
40894185 # Tenant 3, who is the member can get status of its own status
40904186 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
40914187 TENANT3))
4092 response = requests.get(path, headers=get_header(TENANT3))
4188 response = requests.get(path, headers=get_auth_header(TENANT3))
40934189 self.assertEqual(http.OK, response.status_code)
40944190 body = jsonutils.loads(response.text)
40954191 self.assertEqual('pending', body['status'])
40994195 # Tenant 2, who not the owner cannot get status of image member
41004196 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
41014197 TENANT3))
4102 response = requests.get(path, headers=get_header('tenant2'))
4198 response = requests.get(path, headers=get_auth_header('tenant2'))
41034199 self.assertEqual(http.NOT_FOUND, response.status_code)
41044200
41054201 # Tenant 3 can change status of image member
41064202 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
41074203 TENANT3))
41084204 body = jsonutils.dumps({'status': 'accepted'})
4109 response = requests.put(path, headers=get_header(TENANT3), data=body)
4205 response = requests.put(path, headers=get_auth_header(TENANT3),
4206 data=body)
41104207 self.assertEqual(http.OK, response.status_code)
41114208 image_member = jsonutils.loads(response.text)
41124209 self.assertEqual(image_fixture[3]['id'], image_member['image_id'])
41164213 # Image list should contain 4 images for TENANT3 because status is
41174214 # accepted
41184215 path = self._url('/v2/images')
4119 response = requests.get(path, headers=get_header(TENANT3))
4216 response = requests.get(path, headers=get_auth_header(TENANT3))
41204217 self.assertEqual(http.OK, response.status_code)
41214218 images = jsonutils.loads(response.text)['images']
41224219 self.assertEqual(4, len(images))
41254222 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
41264223 TENANT3))
41274224 body = jsonutils.dumps({'status': 'invalid-status'})
4128 response = requests.put(path, headers=get_header(TENANT3), data=body)
4225 response = requests.put(path, headers=get_auth_header(TENANT3),
4226 data=body)
41294227 self.assertEqual(http.BAD_REQUEST, response.status_code)
41304228
41314229 # Owner cannot change status of image
41324230 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
41334231 TENANT3))
41344232 body = jsonutils.dumps({'status': 'accepted'})
4135 response = requests.put(path, headers=get_header('tenant1'), data=body)
4233 response = requests.put(path, headers=get_auth_header('tenant1'),
4234 data=body)
41364235 self.assertEqual(http.FORBIDDEN, response.status_code)
41374236
41384237 # Add Image member for tenant2-shared image
41394238 path = self._url('/v2/images/%s/members' % image_fixture[7]['id'])
41404239 body = jsonutils.dumps({'member': TENANT4})
4141 response = requests.post(path, headers=get_header('tenant2'),
4240 response = requests.post(path, headers=get_auth_header('tenant2'),
41424241 data=body)
41434242 self.assertEqual(http.OK, response.status_code)
41444243 image_member = jsonutils.loads(response.text)
41504249 # Add Image member to public image
41514250 path = self._url('/v2/images/%s/members' % image_fixture[2]['id'])
41524251 body = jsonutils.dumps({'member': TENANT2})
4153 response = requests.post(path, headers=get_header('tenant1'),
4252 response = requests.post(path, headers=get_auth_header('tenant1'),
41544253 data=body)
41554254 self.assertEqual(http.FORBIDDEN, response.status_code)
41564255
41574256 # Add Image member to private image
41584257 path = self._url('/v2/images/%s/members' % image_fixture[1]['id'])
41594258 body = jsonutils.dumps({'member': TENANT2})
4160 response = requests.post(path, headers=get_header('tenant1'),
4259 response = requests.post(path, headers=get_auth_header('tenant1'),
41614260 data=body)
41624261 self.assertEqual(http.FORBIDDEN, response.status_code)
41634262
41644263 # Add Image member to community image
41654264 path = self._url('/v2/images/%s/members' % image_fixture[0]['id'])
41664265 body = jsonutils.dumps({'member': TENANT2})
4167 response = requests.post(path, headers=get_header('tenant1'),
4266 response = requests.post(path, headers=get_auth_header('tenant1'),
41684267 data=body)
41694268 self.assertEqual(http.FORBIDDEN, response.status_code)
41704269
41714270 # Image tenant1-shared's members list should contain 1 member
41724271 path = self._url('/v2/images/%s/members' % image_fixture[3]['id'])
4173 response = requests.get(path, headers=get_header('tenant1'))
4272 response = requests.get(path, headers=get_auth_header('tenant1'))
41744273 self.assertEqual(http.OK, response.status_code)
41754274 body = jsonutils.loads(response.text)
41764275 self.assertEqual(1, len(body['members']))
41774276
41784277 # Admin can see any members
41794278 path = self._url('/v2/images/%s/members' % image_fixture[3]['id'])
4180 response = requests.get(path, headers=get_header('tenant1', 'admin'))
4279 response = requests.get(path, headers=get_auth_header('tenant1',
4280 role='admin'))
41814281 self.assertEqual(http.OK, response.status_code)
41824282 body = jsonutils.loads(response.text)
41834283 self.assertEqual(1, len(body['members']))
41844284
41854285 # Image members not found for private image not owned by TENANT 1
41864286 path = self._url('/v2/images/%s/members' % image_fixture[7]['id'])
4187 response = requests.get(path, headers=get_header('tenant1'))
4287 response = requests.get(path, headers=get_auth_header('tenant1'))
41884288 self.assertEqual(http.NOT_FOUND, response.status_code)
41894289
41904290 # Image members forbidden for public image
41914291 path = self._url('/v2/images/%s/members' % image_fixture[2]['id'])
4192 response = requests.get(path, headers=get_header('tenant1'))
4292 response = requests.get(path, headers=get_auth_header('tenant1'))
41934293 self.assertIn("Only shared images have members", response.text)
41944294 self.assertEqual(http.FORBIDDEN, response.status_code)
41954295
41964296 # Image members forbidden for community image
41974297 path = self._url('/v2/images/%s/members' % image_fixture[0]['id'])
4198 response = requests.get(path, headers=get_header('tenant1'))
4298 response = requests.get(path, headers=get_auth_header('tenant1'))
41994299 self.assertIn("Only shared images have members", response.text)
42004300 self.assertEqual(http.FORBIDDEN, response.status_code)
42014301
42024302 # Image members forbidden for private image
42034303 path = self._url('/v2/images/%s/members' % image_fixture[1]['id'])
4204 response = requests.get(path, headers=get_header('tenant1'))
4304 response = requests.get(path, headers=get_auth_header('tenant1'))
42054305 self.assertIn("Only shared images have members", response.text)
42064306 self.assertEqual(http.FORBIDDEN, response.status_code)
42074307
42084308 # Image Member Cannot delete Image membership
42094309 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
42104310 TENANT3))
4211 response = requests.delete(path, headers=get_header(TENANT3))
4311 response = requests.delete(path, headers=get_auth_header(TENANT3))
42124312 self.assertEqual(http.FORBIDDEN, response.status_code)
42134313
42144314 # Delete Image member
42154315 path = self._url('/v2/images/%s/members/%s' % (image_fixture[3]['id'],
42164316 TENANT3))
4217 response = requests.delete(path, headers=get_header('tenant1'))
4317 response = requests.delete(path, headers=get_auth_header('tenant1'))
42184318 self.assertEqual(http.NO_CONTENT, response.status_code)
42194319
42204320 # Now the image has no members
42214321 path = self._url('/v2/images/%s/members' % image_fixture[3]['id'])
4222 response = requests.get(path, headers=get_header('tenant1'))
4322 response = requests.get(path, headers=get_auth_header('tenant1'))
42234323 self.assertEqual(http.OK, response.status_code)
42244324 body = jsonutils.loads(response.text)
42254325 self.assertEqual(0, len(body['members']))
42284328 path = self._url('/v2/images/%s/members' % image_fixture[3]['id'])
42294329 for i in range(10):
42304330 body = jsonutils.dumps({'member': str(uuid.uuid4())})
4231 response = requests.post(path, headers=get_header('tenant1'),
4331 response = requests.post(path, headers=get_auth_header('tenant1'),
42324332 data=body)
42334333 self.assertEqual(http.OK, response.status_code)
42344334
42354335 body = jsonutils.dumps({'member': str(uuid.uuid4())})
4236 response = requests.post(path, headers=get_header('tenant1'),
4336 response = requests.post(path, headers=get_auth_header('tenant1'),
42374337 data=body)
42384338 self.assertEqual(http.REQUEST_ENTITY_TOO_LARGE, response.status_code)
42394339
42404340 # Get Image member should return not found for public image
42414341 path = self._url('/v2/images/%s/members/%s' % (image_fixture[2]['id'],
42424342 TENANT3))
4243 response = requests.get(path, headers=get_header('tenant1'))
4343 response = requests.get(path, headers=get_auth_header('tenant1'))
42444344 self.assertEqual(http.NOT_FOUND, response.status_code)
42454345
42464346 # Get Image member should return not found for community image
42474347 path = self._url('/v2/images/%s/members/%s' % (image_fixture[0]['id'],
42484348 TENANT3))
4249 response = requests.get(path, headers=get_header('tenant1'))
4349 response = requests.get(path, headers=get_auth_header('tenant1'))
42504350 self.assertEqual(http.NOT_FOUND, response.status_code)
42514351
42524352 # Get Image member should return not found for private image
42534353 path = self._url('/v2/images/%s/members/%s' % (image_fixture[1]['id'],
42544354 TENANT3))
4255 response = requests.get(path, headers=get_header('tenant1'))
4355 response = requests.get(path, headers=get_auth_header('tenant1'))
42564356 self.assertEqual(http.NOT_FOUND, response.status_code)
42574357
42584358 # Delete Image member should return forbidden for public image
42594359 path = self._url('/v2/images/%s/members/%s' % (image_fixture[2]['id'],
42604360 TENANT3))
4261 response = requests.delete(path, headers=get_header('tenant1'))
4361 response = requests.delete(path, headers=get_auth_header('tenant1'))
42624362 self.assertEqual(http.FORBIDDEN, response.status_code)
42634363
42644364 # Delete Image member should return forbidden for community image
42654365 path = self._url('/v2/images/%s/members/%s' % (image_fixture[0]['id'],
42664366 TENANT3))
4267 response = requests.delete(path, headers=get_header('tenant1'))
4367 response = requests.delete(path, headers=get_auth_header('tenant1'))
42684368 self.assertEqual(http.FORBIDDEN, response.status_code)
42694369
42704370 # Delete Image member should return forbidden for private image
42714371 path = self._url('/v2/images/%s/members/%s' % (image_fixture[1]['id'],
42724372 TENANT3))
4273 response = requests.delete(path, headers=get_header('tenant1'))
4373 response = requests.delete(path, headers=get_auth_header('tenant1'))
42744374 self.assertEqual(http.FORBIDDEN, response.status_code)
42754375
42764376 self.stop_servers()
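The member tests above switch from a local `get_header()` to a shared `get_auth_header()` helper. A minimal sketch of what such a helper looks like, modeled on the old inline version visible later in this diff; the exact signature and defaults of the real shared helper are assumptions:

```python
def get_auth_header(tenant, tenant_id=None, role='member', headers=None):
    """Return headers simulating keystone auth for a tenant (sketch).

    Mirrors the old inline get_header(): the fake auth token encodes
    user, tenant and role as 'user:<tenant>:<role>'.
    """
    headers = dict(headers or {})
    headers['X-Auth-Token'] = 'user:%s:%s' % (tenant, role)
    if tenant_id:
        headers['X-Tenant-Id'] = tenant_id
    return headers
```

Callers can then layer it onto the base test headers, as the diff does with `self._headers(custom_headers=get_auth_header(...))`.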
43604460 self.cleanup()
43614461 self.include_scrubber = False
43624462 self.api_server_multiple_backend.deployment_flavor = 'noauth'
4363 self.api_server_multiple_backend.data_api = 'glance.db.sqlalchemy.api'
43644463 for i in range(3):
43654464 ret = test_utils.start_http_server("foo_image_id%d" % i,
43664465 "foo_image%d" % i)
55555654 'X-Roles': 'admin'
55565655 })
55575656
5657 # NOTE(abhishekk): Deleting file3 image directory to trigger the
5658 # failure, so that we can verify that revert call does not delete
5659 # the data from existing stores
5660 # NOTE(danms): Do this before we start the import, on a later store,
5661 # which will cause that store to fail after we have already completed
5662 # the first one.
5663 os.rmdir(self.test_dir + "/images_3")
5664
55585665 data = jsonutils.dumps(
55595666 {'method': {'name': 'copy-image'},
55605667 'stores': ['file2', 'file3']})
55615668 response = requests.post(path, headers=headers, data=data)
55625669 self.assertEqual(http.ACCEPTED, response.status_code)
55635670
5564 # NOTE(abhishekk): Deleting file3 image directory to trigger the
5565 # failure, so that we can verify that revert call does not delete
5566 # the data from existing stores
5567 os.rmdir(self.test_dir + "/images_3")
5568
5569 # Verify image is copied
5570 # NOTE(abhishekk): As import is a async call we need to provide
5571 # some timelap to complete the call.
5572 path = self._url('/v2/images/%s' % image_id)
5573 func_utils.wait_for_copying(request_path=path,
5574 request_headers=self._headers(),
5575 stores=['file2'],
5576 max_sec=10,
5577 delay_sec=0.2,
5578 start_delay_sec=1,
5579 failure_scenario=True)
5580
5581 # Ensure data is not deleted from existing stores on failure
5671 def poll_callback(image):
5672 # NOTE(danms): We need to wait for the specific
5673 # arrangement we're expecting, which is that file3 has
5674 # failed, nothing else is importing, and file2 has been
5675 # removed from stores by the revert.
5676 return not (image['os_glance_importing_to_stores'] == '' and
5677 image['os_glance_failed_import'] == 'file3' and
5678 image['stores'] == 'file1')
5679
5680 func_utils.poll_entity(self._url('/v2/images/%s' % image_id),
5681 self._headers(),
5682 poll_callback)
5683
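`func_utils.poll_entity` above keeps fetching the image until the callback reports that the expected arrangement has been reached. A rough stdlib sketch of that polling pattern; the real helper takes a URL and headers, while `get_json` here is an assumed stand-in:

```python
import time

def poll_entity(get_json, callback, max_sec=10, delay_sec=0.2):
    """Poll get_json() until callback(entity) returns falsy, i.e. the
    entity has reached the state the test is waiting for (sketch)."""
    deadline = time.monotonic() + max_sec
    while time.monotonic() < deadline:
        entity = get_json()
        if not callback(entity):
            return entity
        time.sleep(delay_sec)
    raise AssertionError('entity never reached the expected state')
```

Returning truthy from the callback means "keep waiting", which matches the inverted condition in `poll_callback` above.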
5684 # Here we check that the failure of 'file3' caused 'file2' to
5685 # be removed from image['stores'], and that 'file3' is reported
5686 # as failed in the appropriate status list. Since the import
5687 # started with 'store1' being populated, that should remain,
5688 # but 'store2' should be reverted/removed.
55825689 path = self._url('/v2/images/%s' % image_id)
55835690 response = requests.get(path, headers=self._headers())
55845691 self.assertEqual(http.OK, response.status_code)
55855692 self.assertIn('file1', jsonutils.loads(response.text)['stores'])
55865693 self.assertNotIn('file2', jsonutils.loads(response.text)['stores'])
55875694 self.assertNotIn('file3', jsonutils.loads(response.text)['stores'])
5695 fail_key = 'os_glance_failed_import'
5696 pend_key = 'os_glance_importing_to_stores'
5697 self.assertEqual('file3', jsonutils.loads(response.text)[fail_key])
5698 self.assertEqual('', jsonutils.loads(response.text)[pend_key])
55885699
55895700 # Copy newly created image to file2 and file3 stores and
55905701 # all_stores_must_succeed set to false.
55985709 {'method': {'name': 'copy-image'},
55995710 'stores': ['file2', 'file3'],
56005711 'all_stores_must_succeed': False})
5601 response = requests.post(path, headers=headers, data=data)
5712
5713 for i in range(0, 5):
5714 response = requests.post(path, headers=headers, data=data)
5715 if response.status_code != http.CONFLICT:
5716 break
5717 # We might race with the revert of the previous task and do not
5718 # really have a good way to make sure that it's done. In order
5719 # to make sure we tolerate the 409 possibility when import
5720 # locking is added, gracefully wait a few times before failing.
5721 time.sleep(1)
5722
56025723 self.assertEqual(http.ACCEPTED, response.status_code)
56035724
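The loop above tolerates transient 409s while a prior task's revert is still releasing the import lock. The same pattern as a small helper; `post_fn` and the bare status-code check are assumptions mirroring the test:

```python
import time

def post_with_conflict_retry(post_fn, retries=5, delay_sec=1.0):
    """Re-issue a request while it returns 409 Conflict, to ride out
    a racing revert that still holds the import lock (sketch)."""
    for _ in range(retries):
        response = post_fn()
        if response.status_code != 409:
            break
        time.sleep(delay_sec)
    return response
```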
56045725 # Verify image is copied
61506271 self.cleanup()
61516272 self.include_scrubber = False
61526273 self.api_server_multiple_backend.deployment_flavor = 'noauth'
6153 self.api_server_multiple_backend.data_api = 'glance.db.sqlalchemy.api'
61546274 for i in range(3):
61556275 ret = test_utils.start_http_server("foo_image_id%d" % i,
61566276 "foo_image%d" % i)
61856305
61866306 try:
61876307 def get_header(tenant, tenant_id=None, role=''):
6188 auth_token = 'user:%s:%s' % (tenant, role)
6189 headers = {'X-Auth-Token': auth_token}
6190 if tenant_id:
6191 headers.update({'X-Tenant-Id': tenant_id})
6192 return self._headers(custom_headers=headers)
6308 return self._headers(custom_headers=get_auth_header(
6309 tenant, tenant_id, role))
61936310
61946311 # Image list should be empty
61956312 path = self._url('/v2/images')
65556672 self.skipTest("Remote connection closed abruptly: %s" % e.args[0])
65566673
65576674 self.stop_servers()
6675
6676
6677 class TestCopyImagePermissions(functional.MultipleBackendFunctionalTest):
6678
6679 def setUp(self):
6680 super(TestCopyImagePermissions, self).setUp()
6681 self.cleanup()
6682 self.include_scrubber = False
6683 self.api_server_multiple_backend.deployment_flavor = 'noauth'
6684
6685 def _url(self, path):
6686 return 'http://127.0.0.1:%d%s' % (self.api_port, path)
6687
6688 def _headers(self, custom_headers=None):
6689 base_headers = {
6690 'X-Identity-Status': 'Confirmed',
6691 'X-Auth-Token': '932c5c84-02ac-4fe5-a9ba-620af0e2bb96',
6692 'X-User-Id': 'f9a41d13-0c13-47e9-bee2-ce4e8bfe958e',
6693 'X-Tenant-Id': TENANT1,
6694 'X-Roles': 'member',
6695 }
6696 base_headers.update(custom_headers or {})
6697 return base_headers
6698
6699 def _create_and_import_image_data(self):
6700 # Create a public image
6701 path = self._url('/v2/images')
6702 headers = self._headers({'content-type': 'application/json'})
6703 data = jsonutils.dumps({'name': 'image-1', 'type': 'kernel',
6704 'visibility': 'public',
6705 'disk_format': 'aki',
6706 'container_format': 'aki'})
6707 response = requests.post(path, headers=headers, data=data)
6708 self.assertEqual(http.CREATED, response.status_code)
6709
6710 image = jsonutils.loads(response.text)
6711 image_id = image['id']
6712
6713 path = self._url('/v2/images/%s/import' % image_id)
6714 headers = self._headers({
6715 'content-type': 'application/json',
6716 'X-Roles': 'admin'
6717 })
6718
6719 # Start http server locally
6720 thread, httpd, port = test_utils.start_standalone_http_server()
6721
6722 image_data_uri = 'http://localhost:%s/' % port
6723 data = jsonutils.dumps(
6724 {'method': {'name': 'web-download', 'uri': image_data_uri},
6725 'stores': ['file1']})
6726 response = requests.post(path, headers=headers, data=data)
6727 self.assertEqual(http.ACCEPTED, response.status_code)
6728
6729 # Verify image is in active state and checksum is set
6730        # NOTE(abhishekk): As import is an async call we need to allow
6731        # some time for the call to complete.
6732 path = self._url('/v2/images/%s' % image_id)
6733 func_utils.wait_for_status(request_path=path,
6734 request_headers=self._headers(),
6735 status='active',
6736 max_sec=40,
6737 delay_sec=0.2,
6738 start_delay_sec=1)
6739 with requests.get(image_data_uri) as r:
6740 expect_c = six.text_type(hashlib.md5(r.content).hexdigest())
6741 expect_h = six.text_type(hashlib.sha512(r.content).hexdigest())
6742 func_utils.verify_image_hashes_and_status(self,
6743 image_id,
6744 checksum=expect_c,
6745 os_hash_value=expect_h,
6746 status='active')
6747
6748 # kill the local http server
6749 httpd.shutdown()
6750 httpd.server_close()
6751
6752 return image_id
6753
6754 def _test_copy_public_image_as_non_admin(self):
6755 self.start_servers(**self.__dict__.copy())
6756
6757 # Create a publicly-visible image as TENANT1
6758 image_id = self._create_and_import_image_data()
6759
6760 # Ensure image is created in the one store
6761 path = self._url('/v2/images/%s' % image_id)
6762 response = requests.get(path, headers=self._headers())
6763 self.assertEqual(http.OK, response.status_code)
6764 self.assertEqual('file1', jsonutils.loads(response.text)['stores'])
6765
6766 # Copy newly created image to file2 store as TENANT2
6767 path = self._url('/v2/images/%s/import' % image_id)
6768 headers = self._headers({
6769 'content-type': 'application/json',
6770 })
6771 headers = get_auth_header(TENANT2, TENANT2,
6772 role='member', headers=headers)
6773 data = jsonutils.dumps(
6774 {'method': {'name': 'copy-image'},
6775 'stores': ['file2']})
6776 response = requests.post(path, headers=headers, data=data)
6777 return image_id, response
6778
6779 def test_copy_public_image_as_non_admin(self):
6780 rules = {
6781 "context_is_admin": "role:admin",
6782 "default": "",
6783 "add_image": "",
6784 "get_image": "",
6785 "modify_image": "",
6786 "upload_image": "",
6787 "get_image_location": "",
6788 "delete_image": "",
6789 "restricted": "",
6790 "download_image": "",
6791 "add_member": "",
6792 "publicize_image": "",
6793 "copy_image": "role:admin",
6794 }
6795
6796 self.set_policy_rules(rules)
6797
6798 image_id, response = self._test_copy_public_image_as_non_admin()
6799 # Expect failure to copy another user's image
6800 self.assertEqual(http.FORBIDDEN, response.status_code)
6801
6802 def test_copy_public_image_as_non_admin_permitted(self):
6803 rules = {
6804 "context_is_admin": "role:admin",
6805 "default": "",
6806 "add_image": "",
6807 "get_image": "",
6808 "modify_image": "",
6809 "upload_image": "",
6810 "get_image_location": "",
6811 "delete_image": "",
6812 "restricted": "",
6813 "download_image": "",
6814 "add_member": "",
6815 "publicize_image": "",
6816 "copy_image": "'public':%(visibility)s",
6817 }
6818
6819 self.set_policy_rules(rules)
6820
6821 image_id, response = self._test_copy_public_image_as_non_admin()
6822 # Expect success because image is public
6823 self.assertEqual(http.ACCEPTED, response.status_code)
6824
6825 # Verify image is copied
6826        # NOTE(abhishekk): As import is an async call we need to allow
6827        # some time for the call to complete.
6828 path = self._url('/v2/images/%s' % image_id)
6829 func_utils.wait_for_copying(request_path=path,
6830 request_headers=self._headers(),
6831 stores=['file2'],
6832 max_sec=40,
6833 delay_sec=0.2,
6834 start_delay_sec=1)
6835
6836 # Ensure image is copied to the file2 and file3 store
6837 path = self._url('/v2/images/%s' % image_id)
6838 response = requests.get(path, headers=self._headers())
6839 self.assertEqual(http.OK, response.status_code)
6840 self.assertIn('file2', jsonutils.loads(response.text)['stores'])
0 # Copyright 2020 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 import datetime
16 from testtools import content as ttc
17 import time
18 from unittest import mock
19 import uuid
20
21 from oslo_log import log as logging
22 from oslo_serialization import jsonutils
23 from oslo_utils import fixture as time_fixture
24 from oslo_utils import units
25
26 from glance.tests import functional
27 from glance.tests import utils as test_utils
28
29
30 LOG = logging.getLogger(__name__)
31
32
33 class TestImageImportLocking(functional.SynchronousAPIBase):
34 def _import_copy(self, image_id, stores):
35 """Do an import of image_id to the given stores."""
36 body = {'method': {'name': 'copy-image'},
37 'stores': stores,
38 'all_stores': False}
39
40 return self.api_post(
41 '/v2/images/%s/import' % image_id,
42 json=body)
43
44 def _import_direct(self, image_id, stores):
45 """Do an import of image_id to the given stores."""
46 body = {'method': {'name': 'glance-direct'},
47 'stores': stores,
48 'all_stores': False}
49
50 return self.api_post(
51 '/v2/images/%s/import' % image_id,
52 json=body)
53
54 def _create_and_stage(self, data_iter=None):
55 resp = self.api_post('/v2/images',
56 json={'name': 'foo',
57 'container_format': 'bare',
58 'disk_format': 'raw'})
59 image = jsonutils.loads(resp.text)
60
61 if data_iter:
62 resp = self.api_put(
63 '/v2/images/%s/stage' % image['id'],
64 headers={'Content-Type': 'application/octet-stream'},
65 body_file=data_iter)
66 else:
67 resp = self.api_put(
68 '/v2/images/%s/stage' % image['id'],
69 headers={'Content-Type': 'application/octet-stream'},
70 data=b'IMAGEDATA')
71 self.assertEqual(204, resp.status_code)
72
73 return image['id']
74
75 def _create_and_import(self, stores=[], data_iter=None):
76 """Create an image, stage data, and import into the given stores.
77
78 :returns: image_id
79 """
80 image_id = self._create_and_stage(data_iter=data_iter)
81
82 resp = self._import_direct(image_id, stores)
83 self.assertEqual(202, resp.status_code)
84
85 # Make sure it goes active
86 for i in range(0, 10):
87 image = self.api_get('/v2/images/%s' % image_id).json
88 if not image.get('os_glance_import_task'):
89 break
90 self.addDetail('Create-Import task id',
91 ttc.text_content(image['os_glance_import_task']))
92 time.sleep(1)
93
94 self.assertEqual('active', image['status'])
95
96 return image_id
97
98 def _get_image_import_task(self, image_id, task_id=None):
99 if task_id is None:
100 image = self.api_get('/v2/images/%s' % image_id).json
101 task_id = image['os_glance_import_task']
102
103 return self.api_get('/v2/tasks/%s' % task_id).json
104
105 def _test_import_copy(self, warp_time=False):
106 self.start_server()
107 state = {'want_run': True}
108
109 # Create and import an image with no pipeline stall
110 image_id = self._create_and_import(stores=['store1'])
111
112 # Set up a fake data pipeline that will stall until we are ready
113 # to unblock it
114 def slow_fake_set_data(data_iter, backend=None, set_active=True):
115 me = str(uuid.uuid4())
116             while state['want_run']:
117 LOG.info('fake_set_data running %s' % me)
118 state['running'] = True
119 time.sleep(0.1)
120 LOG.info('fake_set_data ended %s' % me)
121
122 # Constrain oslo timeutils time so we can manipulate it
123 tf = time_fixture.TimeFixture()
124 self.useFixture(tf)
125
126 # Turn on the delayed data pipeline and start a copy-image
127 # import which will hang out for a while
128 with mock.patch('glance.domain.proxy.Image.set_data') as mock_sd:
129 mock_sd.side_effect = slow_fake_set_data
130
131 resp = self._import_copy(image_id, ['store2'])
132 self.addDetail('First import response',
133 ttc.text_content(str(resp)))
134 self.assertEqual(202, resp.status_code)
135
136 # Wait to make sure the data stream gets started
137 for i in range(0, 10):
138 if 'running' in state:
139 break
140 time.sleep(0.1)
141
142 # Make sure the first import got to the point where the
143 # hanging loop will hold it in processing state
144 self.assertTrue(state.get('running', False),
145 'slow_fake_set_data() never ran')
146
147 # Make sure the task is available and in the right state
148 first_import_task = self._get_image_import_task(image_id)
149 self.assertEqual('processing', first_import_task['status'])
150
151 # If we're warping time, then advance the clock by two hours
152 if warp_time:
153 tf.advance_time_delta(datetime.timedelta(hours=2))
154
155 # Try a second copy-image import. If we are warping time,
156 # expect the lock to be busted. If not, then we should get
157 # a 409 Conflict.
158 resp = self._import_copy(image_id, ['store3'])
159 time.sleep(0.1)
160
161 self.addDetail('Second import response',
162 ttc.text_content(str(resp)))
163 if warp_time:
164 self.assertEqual(202, resp.status_code)
165 else:
166 self.assertEqual(409, resp.status_code)
167
168 self.addDetail('First task', ttc.text_content(str(first_import_task)))
169
170 # Grab the current import task for our image, and also
171 # refresh our first task object
172 second_import_task = self._get_image_import_task(image_id)
173 first_import_task = self._get_image_import_task(
174 image_id, first_import_task['id'])
175
176 if warp_time:
177 # If we warped time and busted the lock, then we expect the
178 # current task to be different than the original task
179 self.assertNotEqual(first_import_task['id'],
180 second_import_task['id'])
181 # The original task should be failed with the expected message
182 self.assertEqual('failure', first_import_task['status'])
183 self.assertEqual('Expired lock preempted',
184 first_import_task['message'])
185 # The new task should be off and running
186 self.assertEqual('processing', second_import_task['status'])
187 else:
188 # We didn't bust the lock, so we didn't start another
189 # task, so confirm it hasn't changed
190 self.assertEqual(first_import_task['id'],
191 second_import_task['id'])
192
193 return image_id, state
194
195 def test_import_copy_locked(self):
196 self._test_import_copy(warp_time=False)
197
198 def test_import_copy_bust_lock(self):
199 image_id, state = self._test_import_copy(warp_time=True)
200
201 # After the import has busted the lock, wait for our
202 # new import to start. We used a different store than
203 # the stalled task so we can tell the difference.
204 for i in range(0, 10):
205 image = self.api_get('/v2/images/%s' % image_id).json
206 if image['stores'] == 'store1,store3':
207 break
208 time.sleep(0.1)
209
210 # After completion, we expect store1 (original) and store3 (new)
211 # and that the other task is still stuck importing
212 image = self.api_get('/v2/images/%s' % image_id).json
213 self.assertEqual('store1,store3', image['stores'])
214 self.assertEqual('', image['os_glance_failed_import'])
215
216 # Free up the stalled task and give eventlet time to let it
217 # play out the rest of the task
218 state['want_run'] = False
219 for i in range(0, 10):
220 image = self.api_get('/v2/images/%s' % image_id).json
221 time.sleep(0.1)
222
223 # After that, we expect everything to be cleaned up and in the
224 # terminal state that we expect.
225 image = self.api_get('/v2/images/%s' % image_id).json
226 self.assertEqual('', image.get('os_glance_import_task', ''))
227 self.assertEqual('', image['os_glance_importing_to_stores'])
228 self.assertEqual('', image['os_glance_failed_import'])
229 self.assertEqual('store1,store3', image['stores'])
230
231 @mock.patch('oslo_utils.timeutils.StopWatch.expired', new=lambda x: True)
232 def test_import_task_status(self):
233 self.start_server()
234
235 # Generate 3 MiB of data for the image, enough to get a few
236 # status messages
237 limit = 3 * units.Mi
238 image_id = self._create_and_stage(data_iter=test_utils.FakeData(limit))
239
240 # This utility function will grab the current task status at
241 # any time and stash it into a list of statuses if it finds a
242 # new one
243 statuses = []
244
245 def grab_task_status():
246 image = self.api_get('/v2/images/%s' % image_id).json
247 task_id = image['os_glance_import_task']
248 task = self.api_get('/v2/tasks/%s' % task_id).json
249 msg = task['message']
250 if msg not in statuses:
251 statuses.append(msg)
252
253 # This is the only real thing we have mocked out, which is the
254 # "upload this to glance_store" part, which we override so we
255 # can control the block size and check our task status
256 # synchronously and not depend on timers. It just reads the
257 # source data in 64KiB chunks and throws it away.
258 def fake_upload(data, *a, **k):
259 while True:
260 grab_task_status()
261
262 if not data.read(65536):
263 break
264 time.sleep(0.1)
265
266 with mock.patch('glance.location.ImageProxy._upload_to_store') as mu:
267 mu.side_effect = fake_upload
268
269 # Start the import...
270 resp = self._import_direct(image_id, ['store2'])
271 self.assertEqual(202, resp.status_code)
272
273 # ...and wait until it finishes
274 for i in range(0, 100):
275 image = self.api_get('/v2/images/%s' % image_id).json
276 if not image.get('os_glance_import_task'):
277 break
278 time.sleep(0.1)
279
280 # Image should be in active state and we should have gotten a
281 # new message every 1MiB in the process. We mocked StopWatch
282 # to always be expired so that we fire the callback every
283 # time.
284 self.assertEqual('active', image['status'])
285 self.assertEqual(['', 'Copied 0 MiB', 'Copied 1 MiB', 'Copied 2 MiB',
286 'Copied 3 MiB'],
287 statuses)
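The lock-busting behaviour exercised above (a second `copy-image` import succeeds only once the first task's lock has expired, forced here by advancing `TimeFixture` two hours) can be modelled with a toy lock. The class, the one-hour expiry, and the injectable clock are assumptions for illustration, not Glance's actual implementation:

```python
import datetime

class ImportLock:
    """Toy model of the per-image import-task lock (sketch)."""
    EXPIRY = datetime.timedelta(hours=1)  # assumed expiry window

    def __init__(self, clock):
        self._clock = clock      # injectable, like oslo's TimeFixture
        self.holder = None
        self._taken_at = None

    def acquire(self, task_id):
        now = self._clock()
        if self.holder and now - self._taken_at < self.EXPIRY:
            return False         # API answers 409 Conflict
        # Lock is free or expired: preempt it; the old task is failed
        # with 'Expired lock preempted'.
        self.holder, self._taken_at = task_id, now
        return True
```

Warping the clock past `EXPIRY` is exactly what `tf.advance_time_delta(datetime.timedelta(hours=2))` does for the real lock in the test.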
2525 import glance.common.client
2626 from glance.common import config
2727 import glance.db.sqlalchemy.api
28 import glance.registry.client.v1.client
2928 from glance import tests as glance_tests
3029 from glance.tests import utils as test_utils
3130
5554 [composite:rootapp]
5655 paste.composite_factory = glance.api:root_app_factory
5756 /: apiversions
58 /v1: apiv1app
5957 /v2: apiv2app
6058
6159 [app:apiversions]
6260 paste.app_factory = glance.api.versions:create_resource
63
64 [app:apiv1app]
65 paste.app_factory = glance.api.v1.router:API.factory
6661
6762 [app:apiv2app]
6863 paste.app_factory = glance.api.v2.router:API.factory
9287 paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
9388 """
9489
95 TESTING_REGISTRY_PASTE_CONF = """
96 [pipeline:glance-registry]
97 pipeline = unauthenticated-context registryapp
98
99 [pipeline:glance-registry-fakeauth]
100 pipeline = fakeauth context registryapp
101
102 [app:registryapp]
103 paste.app_factory = glance.registry.api.v1:API.factory
104
105 [filter:context]
106 paste.filter_factory = glance.api.middleware.context:ContextMiddleware.factory
107
108 [filter:unauthenticated-context]
109 paste.filter_factory =
110 glance.api.middleware.context:UnauthenticatedContextMiddleware.factory
111
112 [filter:fakeauth]
113 paste.filter_factory = glance.tests.utils:FakeAuthMiddleware.factory
114 """
115
11690 CONF = cfg.CONF
11791
11892
12599 self._setup_database()
126100 self._setup_stores()
127101 self._setup_property_protection()
128 self.glance_registry_app = self._load_paste_app(
129 'glance-registry',
130 flavor=getattr(self, 'registry_flavor', ''),
131 conf=getattr(self, 'registry_paste_conf',
132 TESTING_REGISTRY_PASTE_CONF),
133 )
134 self._connect_registry_client()
135102 self.glance_api_app = self._load_paste_app(
136103 'glance-api',
137104 flavor=getattr(self, 'api_flavor', ''),
202169 return config.load_paste_app(name, flavor=flavor,
203170 conf_file=conf_file_path)
204171
205 def _connect_registry_client(self):
206 def get_connection_type(self2):
207 def wrapped(*args, **kwargs):
208 return test_utils.HttplibWsgiAdapter(self.glance_registry_app)
209 return wrapped
210
211 self.mock_object(glance.common.client.BaseClient,
212 'get_connection_type', get_connection_type)
213
214172 def tearDown(self):
215173 glance.db.sqlalchemy.api.clear_db_env()
216174 super(ApiTest, self).tearDown()
2727 def __init__(self, *args, **kwargs):
2828 super(TestPropertyQuotaViolations, self).__init__(*args, **kwargs)
2929 self.api_flavor = 'noauth'
30 self.registry_flavor = 'fakeauth'
3130
3231 def _headers(self, custom_headers=None):
3332 base_headers = {
5555 def __init__(self, *args, **kwargs):
5656 super(TestTasksApi, self).__init__(*args, **kwargs)
5757 self.api_flavor = 'fakeauth'
58 self.registry_flavor = 'fakeauth'
5958
6059 def _wait_on_task_execution(self, max_wait=5):
6160 """Wait until all the tasks have finished execution and are in
2626 import webob
2727
2828 from glance.api.middleware import context
29 from glance.api.v1 import router
29 from glance.api.v2 import router
3030 import glance.common.client
31 from glance.registry.api import v1 as rserver
32 from glance.tests import utils
3331
3432
3533 DEBUG = False
3634
3735
38 class FakeRegistryConnection(object):
39
40 def __init__(self, registry=None):
41 self.registry = registry or rserver
42
43 def __call__(self, *args, **kwargs):
44 # NOTE(flaper87): This method takes
45 # __init__'s place in the chain.
46 return self
47
48 def connect(self):
49 return True
50
51 def close(self):
52 return True
53
54 def request(self, method, url, body=None, headers=None):
55 self.req = webob.Request.blank("/" + url.lstrip("/"))
56 self.req.method = method
57 if headers:
58 self.req.headers = headers
59 if body:
60 self.req.body = body
61
62 def getresponse(self):
63 mapper = routes.Mapper()
64 server = self.registry.API(mapper)
65 # NOTE(markwash): we need to pass through context auth information if
66 # we have it.
67 if 'X-Auth-Token' in self.req.headers:
68 api = utils.FakeAuthMiddleware(server)
69 else:
70 api = context.UnauthenticatedContextMiddleware(server)
71 webob_res = self.req.get_response(api)
72
73 return utils.FakeHTTPResponse(status=webob_res.status_int,
74 headers=webob_res.headers,
75 data=webob_res.body)
76
77
78 def stub_out_registry_and_store_server(stubs, base_dir, **kwargs):
79 """Mocks calls to 127.0.0.1 on 9191 and 9292 for testing.
36 def stub_out_store_server(stubs, base_dir, **kwargs):
37 """Mocks calls to 127.0.0.1 on 9292 for testing.
8038
8139 Done so that a real Glance server does not need to be up and
8240 running
11472 def close(self):
11573 return True
11674
117 def _clean_url(self, url):
118 # TODO(bcwaldon): Fix the hack that strips off v1
119 return url.replace('/v1', '', 1) if url.startswith('/v1') else url
120
12175 def putrequest(self, method, url):
122 self.req = webob.Request.blank(self._clean_url(url))
76 self.req = webob.Request.blank(url)
12377 if self.stub_force_sendfile:
12478 fake_sendfile = FakeSendFile(self.req)
12579 stubs.Set(sendfile, 'sendfile', fake_sendfile.sendfile)
14195 self.req.body += data.split("\r\n")[1]
14296
14397 def request(self, method, url, body=None, headers=None):
144 self.req = webob.Request.blank(self._clean_url(url))
98 self.req = webob.Request.blank(url)
14599 self.req.method = method
146100 if headers:
147101 self.req.headers = headers
6464 sys.argv = self.__argv_backup
6565 super(TestGlanceApiCmd, self).tearDown()
6666
67 @mock.patch('glance.async_.set_threadpool_model',)
6768 @mock.patch.object(prefetcher, 'Prefetcher')
68 def test_supported_default_store(self, mock_prefetcher):
69 def test_supported_default_store(self, mock_prefetcher, mock_set_model):
6970 self.config(group='glance_store', default_store='file')
7071 glance.cmd.api.main()
72 # Make sure we declared the system threadpool model as eventlet
73 mock_set_model.assert_called_once_with('eventlet')
7174
7275 @mock.patch.object(prefetcher, 'Prefetcher')
76 @mock.patch('glance.async_.set_threadpool_model', new=mock.MagicMock())
7377 def test_worker_creation_failure(self, mock_prefetcher):
7478 failure = exc.WorkerCreationFailure(reason='test')
7579 self.mock_object(glance.common.wsgi.Server, 'start',
1313 # under the License.
1414
1515 import testtools
16 from unittest import mock
1617 import webob
1718
1819 import glance.api.common
123124 self.assertEqual('CD', next(checked_image))
124125 self.assertEqual('E', next(checked_image))
125126 self.assertRaises(exception.GlanceException, next, checked_image)
127
128
129 class TestThreadPool(testtools.TestCase):
130 @mock.patch('glance.async_.get_threadpool_model')
131 def test_get_thread_pool(self, mock_gtm):
132 get_thread_pool = glance.api.common.get_thread_pool
133
134 pool1 = get_thread_pool('pool1', size=123)
135 get_thread_pool('pool2', size=456)
136 pool1a = get_thread_pool('pool1')
137
138 # Two calls for the same pool should return the exact same thing
139 self.assertEqual(pool1, pool1a)
140
141 # Only two calls to get new threadpools should have been made
142 mock_gtm.return_value.assert_has_calls(
143 [mock.call(123), mock.call(456)])
144
145 @mock.patch('glance.async_.get_threadpool_model')
146 def test_get_thread_pool_log(self, mock_gtm):
147 with mock.patch.object(glance.api.common, 'LOG') as mock_log:
148 glance.api.common.get_thread_pool('test-pool')
149 mock_log.debug.assert_called_once_with(
150 'Initializing named threadpool %r', 'test-pool')
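The new `TestThreadPool` tests above check that `get_thread_pool` caches one pool object per name and reuses it on later calls. A minimal standalone sketch of that caching behavior (`ThreadPoolModel` and the module-level cache are illustrative stand-ins, not Glance's real implementation):

```python
# Minimal sketch of a named-threadpool cache: the first call for a
# given name creates a pool of the requested size; later calls return
# the cached object and ignore the size argument.
_POOLS = {}


class ThreadPoolModel:
    def __init__(self, size):
        self.size = size


def get_thread_pool(name, size=16):
    if name not in _POOLS:
        _POOLS[name] = ThreadPoolModel(size)
    return _POOLS[name]
```

As in the test, two lookups of the same name return the identical object, so the second `size` argument has no effect.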
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15 import sys
1516 from unittest import mock
1617
1718 from glance_store import exceptions as store_exceptions
1819 from oslo_config import cfg
20 from oslo_utils import units
21 import taskflow
1922
2023 import glance.async_.flows.api_image_import as import_flow
21 from glance.common.exception import ImportTaskError
24 from glance.common import exception
25 from glance.common.scripts.image_import import main as image_import
2226 from glance import context
2327 from glance import gateway
2428 import glance.tests.utils as test_utils
5862
5963 self.mock_task_repo = mock.MagicMock()
6064 self.mock_image_repo = mock.MagicMock()
65 self.mock_image_repo.get.return_value.extra_properties = {
66 'os_glance_import_task': TASK_ID1}
6167
6268 @mock.patch('glance.async_.flows.api_image_import._VerifyStaging.__init__')
6369 @mock.patch('taskflow.patterns.linear_flow.Flow.add')
97103 import_req=self.gd_task_input['import_req'])
98104
99105
106 class TestImageLock(test_utils.BaseTestCase):
107 def setUp(self):
108 super(TestImageLock, self).setUp()
109 self.img_repo = mock.MagicMock()
110
111 @mock.patch('glance.async_.flows.api_image_import.LOG')
112 def test_execute_confirms_lock(self, mock_log):
113 self.img_repo.get.return_value.extra_properties = {
114 'os_glance_import_task': TASK_ID1}
115 wrapper = import_flow.ImportActionWrapper(self.img_repo, IMAGE_ID1,
116 TASK_ID1)
117 imagelock = import_flow._ImageLock(TASK_ID1, TASK_TYPE, wrapper)
118 imagelock.execute()
119 mock_log.debug.assert_called_once_with('Image %(image)s import task '
120 '%(task)s lock confirmed',
121 {'image': IMAGE_ID1,
122 'task': TASK_ID1})
123
124 @mock.patch('glance.async_.flows.api_image_import.LOG')
125 def test_execute_confirms_lock_not_held(self, mock_log):
126 wrapper = import_flow.ImportActionWrapper(self.img_repo, IMAGE_ID1,
127 TASK_ID1)
128 imagelock = import_flow._ImageLock(TASK_ID1, TASK_TYPE, wrapper)
129 self.assertRaises(exception.TaskAbortedError,
130 imagelock.execute)
131
132 @mock.patch('glance.async_.flows.api_image_import.LOG')
133 def test_revert_drops_lock(self, mock_log):
134 wrapper = import_flow.ImportActionWrapper(self.img_repo, IMAGE_ID1,
135 TASK_ID1)
136 imagelock = import_flow._ImageLock(TASK_ID1, TASK_TYPE, wrapper)
137 with mock.patch.object(wrapper, 'drop_lock_for_task') as mock_drop:
138 imagelock.revert(None)
139 mock_drop.assert_called_once_with()
140 mock_log.debug.assert_called_once_with('Image %(image)s import task '
141 '%(task)s dropped its lock '
142 'after failure',
143 {'image': IMAGE_ID1,
144 'task': TASK_ID1})
145
146 @mock.patch('glance.async_.flows.api_image_import.LOG')
147 def test_revert_drops_lock_missing(self, mock_log):
148 wrapper = import_flow.ImportActionWrapper(self.img_repo, IMAGE_ID1,
149 TASK_ID1)
150 imagelock = import_flow._ImageLock(TASK_ID1, TASK_TYPE, wrapper)
151 with mock.patch.object(wrapper, 'drop_lock_for_task') as mock_drop:
152 mock_drop.side_effect = exception.NotFound()
153 imagelock.revert(None)
154 mock_log.warning.assert_called_once_with('Image %(image)s import task '
155 '%(task)s lost its lock '
156 'during execution!',
157 {'image': IMAGE_ID1,
158 'task': TASK_ID1})
159
160
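The `_ImageLock` tests above rely on a lock recorded as an image property: a task owns the image while `os_glance_import_task` equals its task ID. A simplified sketch of that ownership check, with hypothetical helper names (not Glance's API):

```python
# Sketch: a task "holds the lock" on an image when the image's
# os_glance_import_task property equals that task's ID; any other
# holder aborts the operation.
class TaskAbortedError(Exception):
    pass


def confirm_lock(extra_properties, task_id,
                 lock_key='os_glance_import_task'):
    holder = extra_properties.get(lock_key)
    if holder != task_id:
        raise TaskAbortedError(
            'lock held by %r, not %r' % (holder, task_id))


def drop_lock(extra_properties, task_id,
              lock_key='os_glance_import_task'):
    # Only the current holder may drop the lock.
    confirm_lock(extra_properties, task_id, lock_key)
    del extra_properties[lock_key]
```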
100161 class TestImportToStoreTask(test_utils.BaseTestCase):
101162
102163 def setUp(self):
107168 overwrite=False)
108169 self.img_factory = self.gateway.get_image_factory(self.context)
109170
171 def test_execute(self):
172 wrapper = mock.MagicMock()
173 action = mock.MagicMock()
174 task_repo = mock.MagicMock()
175 wrapper.__enter__.return_value = action
176 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
177 task_repo, wrapper,
178 "http://url",
179 "store1", False,
180 True)
181 # Assert file_path is honored
182 with mock.patch.object(image_import, '_execute') as mock_execute:
183 image_import.execute(mock.sentinel.path)
184 mock_execute.assert_called_once_with(action, mock.sentinel.path)
185
186 # Assert file_path is optional
187 with mock.patch.object(image_import, '_execute') as mock_execute:
188 image_import.execute()
189 mock_execute.assert_called_once_with(action, None)
190
191 def test_execute_body_with_store(self):
192 image = mock.MagicMock()
193 img_repo = mock.MagicMock()
194 img_repo.get.return_value = image
195 task_repo = mock.MagicMock()
196 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
197 TASK_ID1)
198 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
199 task_repo, wrapper,
200 "http://url",
201 "store1", False,
202 True)
203 action = mock.MagicMock()
204 image_import._execute(action, mock.sentinel.path)
205 action.set_image_data.assert_called_once_with(
206 mock.sentinel.path,
207 TASK_ID1, backend='store1',
208 set_active=True,
209 callback=image_import._status_callback)
210 action.remove_importing_stores(['store1'])
211
212 def test_execute_body_with_store_no_path(self):
213 image = mock.MagicMock()
214 img_repo = mock.MagicMock()
215 img_repo.get.return_value = image
216 task_repo = mock.MagicMock()
217 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
218 TASK_ID1)
219 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
220 task_repo, wrapper,
221 "http://url",
222 "store1", False,
223 True)
224 action = mock.MagicMock()
225 image_import._execute(action, None)
226 action.set_image_data.assert_called_once_with(
227 'http://url',
228 TASK_ID1, backend='store1',
229 set_active=True,
230 callback=image_import._status_callback)
231 action.remove_importing_stores(['store1'])
232
233 def test_execute_body_without_store(self):
234 image = mock.MagicMock()
235 img_repo = mock.MagicMock()
236 img_repo.get.return_value = image
237 task_repo = mock.MagicMock()
238 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
239 TASK_ID1)
240 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
241 task_repo, wrapper,
242 "http://url",
243 None, False,
244 True)
245 action = mock.MagicMock()
246 image_import._execute(action, mock.sentinel.path)
247 action.set_image_data.assert_called_once_with(
248 mock.sentinel.path,
249 TASK_ID1, backend=None,
250 set_active=True,
251 callback=image_import._status_callback)
252 action.remove_importing_stores.assert_not_called()
253
254 @mock.patch('glance.async_.flows.api_image_import.LOG.debug')
255 @mock.patch('oslo_utils.timeutils.now')
256 def test_status_callback_limits_rate(self, mock_now, mock_log):
257 img_repo = mock.MagicMock()
258 task_repo = mock.MagicMock()
259 task_repo.get.return_value.status = 'processing'
260 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
261 TASK_ID1)
262 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
263 task_repo, wrapper,
264 "http://url",
265 None, False,
266 True)
267
268 expected_calls = []
269 log_call = mock.call('Image import %(image_id)s copied %(copied)i MiB',
270 {'image_id': IMAGE_ID1,
271 'copied': 0})
272 action = mock.MagicMock(image_id=IMAGE_ID1)
273
274 mock_now.return_value = 1000
275 image_import._status_callback(action, 32, 32)
276 # First call will emit immediately because we only ran __init__
277 # which sets the last status to zero
278 expected_calls.append(log_call)
279 mock_log.assert_has_calls(expected_calls)
280
281 image_import._status_callback(action, 32, 64)
282 # Second call will not emit any other logs because no time
283 # has passed
284 mock_log.assert_has_calls(expected_calls)
285
286 mock_now.return_value += 190
287 image_import._status_callback(action, 32, 96)
288 # Third call will not emit any other logs because not enough
289 # time has passed
290 mock_log.assert_has_calls(expected_calls)
291
292 mock_now.return_value += 300
293 image_import._status_callback(action, 32, 128)
294 # Fourth call will emit because we crossed five minutes
295 expected_calls.append(log_call)
296 mock_log.assert_has_calls(expected_calls)
297
298 mock_now.return_value += 150
299 image_import._status_callback(action, 32, 128)
300 # Fifth call will not emit any other logs because not enough
301 # time has passed
302 mock_log.assert_has_calls(expected_calls)
303
304 mock_now.return_value += 3600
305 image_import._status_callback(action, 32, 128)
306 # Sixth call will emit because we crossed five minutes
307 expected_calls.append(log_call)
308 mock_log.assert_has_calls(expected_calls)
309
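The timing assertions in `test_status_callback_limits_rate` above describe a callback that logs the first report immediately (the last-logged timestamp starts at zero) and then at most once per interval. A standalone sketch of that rate limiter, with an injected clock in place of `oslo_utils.timeutils.now` (class and names are illustrative, not Glance's API):

```python
# Sketch of a rate-limited progress callback: log immediately on the
# first report, then suppress further log lines until INTERVAL seconds
# have elapsed since the last emitted line.
messages = []


class ProgressLogger:
    INTERVAL = 300  # seconds between log lines, per the test timings

    def __init__(self, clock):
        self._clock = clock
        self._last_logged = 0  # zero forces the first call to log

    def status_callback(self, copied_mib):
        now = self._clock()
        if now - self._last_logged >= self.INTERVAL:
            messages.append('copied %i MiB' % copied_mib)
            self._last_logged = now
```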
110310 def test_raises_when_image_deleted(self):
111311 img_repo = mock.MagicMock()
112 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
113 img_repo, "http://url",
114 IMAGE_ID1, "store1", False,
312 task_repo = mock.MagicMock()
313 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
314 TASK_ID1)
315 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
316 task_repo, wrapper,
317 "http://url",
318 "store1", False,
115319 True)
116320 image = self.img_factory.new_image(image_id=UUID1)
117321 image.status = "deleted"
118322 img_repo.get.return_value = image
119 self.assertRaises(ImportTaskError, image_import.execute)
323 self.assertRaises(exception.ImportTaskError, image_import.execute)
120324
121325 @mock.patch("glance.async_.flows.api_image_import.image_import")
122326 def test_remove_store_from_property(self, mock_import):
123327 img_repo = mock.MagicMock()
124 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
125 img_repo, "http://url",
126 IMAGE_ID1, "store1", True,
127 True)
128 extra_properties = {"os_glance_importing_to_stores": "store1,store2"}
328 task_repo = mock.MagicMock()
329 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
330 TASK_ID1)
331 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
332 task_repo, wrapper,
333 "http://url",
334 "store1", True,
335 True)
336 extra_properties = {"os_glance_importing_to_stores": "store1,store2",
337 "os_glance_import_task": TASK_ID1}
129338 image = self.img_factory.new_image(image_id=UUID1,
130339 extra_properties=extra_properties)
131340 img_repo.get.return_value = image
133342 self.assertEqual(
134343 image.extra_properties['os_glance_importing_to_stores'], "store2")
135344
345 def test_revert_updates_status_keys(self):
346 img_repo = mock.MagicMock()
347 task_repo = mock.MagicMock()
348 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
349 TASK_ID1)
350 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
351 task_repo, wrapper,
352 "http://url",
353 "store1", True,
354 True)
355 extra_properties = {"os_glance_importing_to_stores": "store1,store2",
356 "os_glance_import_task": TASK_ID1}
357 image = self.img_factory.new_image(image_id=UUID1,
358 extra_properties=extra_properties)
359 img_repo.get.return_value = image
360
361 fail_key = 'os_glance_failed_import'
362 pend_key = 'os_glance_importing_to_stores'
363
364 image_import.revert(None)
365 self.assertEqual('store2', image.extra_properties[pend_key])
366
367 try:
368 raise Exception('foo')
369 except Exception:
370 fake_exc_info = sys.exc_info()
371
372 extra_properties = {"os_glance_importing_to_stores": "store1,store2"}
373 image_import.revert(taskflow.types.failure.Failure(fake_exc_info))
374 self.assertEqual('store2', image.extra_properties[pend_key])
375 self.assertEqual('store1', image.extra_properties[fail_key])
376
136377 @mock.patch("glance.async_.flows.api_image_import.image_import")
137378 def test_raises_when_all_stores_must_succeed(self, mock_import):
138379 img_repo = mock.MagicMock()
139 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
140 img_repo, "http://url",
141 IMAGE_ID1, "store1", True,
142 True)
143 image = self.img_factory.new_image(image_id=UUID1)
380 task_repo = mock.MagicMock()
381 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
382 TASK_ID1)
383 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
384 task_repo, wrapper,
385 "http://url",
386 "store1", True,
387 True)
388 extra_properties = {'os_glance_import_task': TASK_ID1}
389 image = self.img_factory.new_image(image_id=UUID1,
390 extra_properties=extra_properties)
144391 img_repo.get.return_value = image
145392 mock_import.set_image_data.side_effect = \
146393 cursive_exception.SignatureVerificationError(
151398 @mock.patch("glance.async_.flows.api_image_import.image_import")
152399 def test_doesnt_raise_when_not_all_stores_must_succeed(self, mock_import):
153400 img_repo = mock.MagicMock()
154 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
155 img_repo, "http://url",
156 IMAGE_ID1, "store1", False,
157 True)
158 image = self.img_factory.new_image(image_id=UUID1)
401 task_repo = mock.MagicMock()
402 wrapper = import_flow.ImportActionWrapper(img_repo, IMAGE_ID1,
403 TASK_ID1)
404 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
405 task_repo, wrapper,
406 "http://url",
407 "store1", False,
408 True)
409 extra_properties = {'os_glance_import_task': TASK_ID1}
410 image = self.img_factory.new_image(image_id=UUID1,
411 extra_properties=extra_properties)
159412 img_repo.get.return_value = image
160413 mock_import.set_image_data.side_effect = \
161414 cursive_exception.SignatureVerificationError(
166419 "store1")
167420 except cursive_exception.SignatureVerificationError:
168421 self.fail("Exception shouldn't be raised")
422
423 @mock.patch('glance.common.scripts.utils.get_task')
424 def test_status_callback_updates_task_message(self, mock_get):
425 task_repo = mock.MagicMock()
426 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
427 task_repo, mock.MagicMock(),
428 "http://url",
429 "store1", False,
430 True)
431 task = mock.MagicMock()
432 task.status = 'processing'
433 mock_get.return_value = task
434 action = mock.MagicMock()
435 image_import._status_callback(action, 128, 256 * units.Mi)
436 mock_get.assert_called_once_with(task_repo, TASK_ID1)
437 task_repo.save.assert_called_once_with(task)
438 self.assertEqual(_('Copied %i MiB' % 256), task.message)
439
440 @mock.patch('glance.common.scripts.utils.get_task')
441 def test_status_aborts_missing_task(self, mock_get):
442 task_repo = mock.MagicMock()
443 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
444 task_repo, mock.MagicMock(),
445 "http://url",
446 "store1", False,
447 True)
448 mock_get.return_value = None
449 action = mock.MagicMock()
450 self.assertRaises(exception.TaskNotFound,
451 image_import._status_callback,
452 action, 128, 256 * units.Mi)
453 mock_get.assert_called_once_with(task_repo, TASK_ID1)
454 task_repo.save.assert_not_called()
455
456 @mock.patch('glance.common.scripts.utils.get_task')
457 def test_status_aborts_invalid_task_state(self, mock_get):
458 task_repo = mock.MagicMock()
459 image_import = import_flow._ImportToStore(TASK_ID1, TASK_TYPE,
460 task_repo, mock.MagicMock(),
461 "http://url",
462 "store1", False,
463 True)
464 task = mock.MagicMock()
465 task.status = 'failed'
466 mock_get.return_value = task
467 action = mock.MagicMock()
468 self.assertRaises(exception.TaskAbortedError,
469 image_import._status_callback,
470 action, 128, 256 * units.Mi)
471 mock_get.assert_called_once_with(task_repo, TASK_ID1)
472 task_repo.save.assert_not_called()
169473
170474
171475 class TestDeleteFromFS(test_utils.BaseTestCase):
220524 mock_unlink.assert_not_called()
221525
222526
527 class TestImportCopyImageTask(test_utils.BaseTestCase):
528
529 def setUp(self):
530 super(TestImportCopyImageTask, self).setUp()
531
532 self.context = context.RequestContext(user_id=TENANT1,
533 project_id=TENANT1,
534 overwrite=False)
535
536 @mock.patch("glance.async_.flows.api_image_import.image_import")
537 def test_init_copy_flow_as_non_owner(self, mock_import):
538 img_repo = mock.MagicMock()
539 admin_repo = mock.MagicMock()
540
541 fake_req = {"method": {"name": "copy-image"},
542 "backend": ['cheap']}
543
544 fake_img = mock.MagicMock()
545 fake_img.id = IMAGE_ID1
546 fake_img.status = 'active'
547 fake_img.extra_properties = {'os_glance_import_task': TASK_ID1}
548 admin_repo.get.return_value = fake_img
549
550 import_flow.get_flow(task_id=TASK_ID1,
551 task_type=TASK_TYPE,
552 task_repo=mock.MagicMock(),
553 image_repo=img_repo,
554 admin_repo=admin_repo,
555 image_id=IMAGE_ID1,
556 import_req=fake_req,
557 backend=['cheap'])
558
559 # Assert that we saved the image with the admin repo instead of the
560 # user-context one at the end of get_flow() when we initialize the
561 # parameters.
562 admin_repo.save.assert_called_once_with(fake_img, 'active')
563 img_repo.save.assert_not_called()
564
565
223566 class TestVerifyImageStateTask(test_utils.BaseTestCase):
224567 def test_verify_active_status(self):
225 fake_img = mock.MagicMock(status='active')
568 fake_img = mock.MagicMock(status='active',
569 extra_properties={
570 'os_glance_import_task': TASK_ID1})
226571 mock_repo = mock.MagicMock()
227572 mock_repo.get.return_value = fake_img
573 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
574 TASK_ID1)
228575
229576 task = import_flow._VerifyImageState(TASK_ID1, TASK_TYPE,
230 mock_repo, IMAGE_ID1,
231 'anything!')
577 wrapper, 'anything!')
232578
233579 task.execute()
234580
237583 task.execute)
238584
239585 def test_revert_copy_status_unchanged(self):
240 fake_img = mock.MagicMock(status='active')
586 wrapper = mock.MagicMock()
587 task = import_flow._VerifyImageState(TASK_ID1, TASK_TYPE,
588 wrapper, 'copy-image')
589 task.revert(mock.sentinel.result)
590
591 # If we are doing copy-image, no state update should be made
592 wrapper.__enter__.return_value.set_image_status.assert_not_called()
593
594 def test_reverts_state_nocopy(self):
595 wrapper = mock.MagicMock()
596 task = import_flow._VerifyImageState(TASK_ID1, TASK_TYPE,
597 wrapper, 'glance-direct')
598 task.revert(mock.sentinel.result)
599
600 # Except for copy-image, image state should revert to queued
601 action = wrapper.__enter__.return_value
602 action.set_image_status.assert_called_once_with('queued')
603
604
605 class TestImportActionWrapper(test_utils.BaseTestCase):
606 def test_wrapper_success(self):
241607 mock_repo = mock.MagicMock()
242 mock_repo.get.return_value = fake_img
243 task = import_flow._VerifyImageState(TASK_ID1, TASK_TYPE,
244 mock_repo, IMAGE_ID1,
245 'copy-image')
246 task.revert(mock.sentinel.result)
247
248 # If we are doing copy-image, no state update should be made
249 mock_repo.save_image.assert_not_called()
250
251 def test_reverts_state_nocopy(self):
252 fake_img = mock.MagicMock(status='importing')
608 mock_repo.get.return_value.extra_properties = {
609 'os_glance_import_task': TASK_ID1}
610 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
611 TASK_ID1)
612 with wrapper as action:
613 self.assertIsInstance(action, import_flow._ImportActions)
614 mock_repo.get.assert_has_calls([mock.call(IMAGE_ID1),
615 mock.call(IMAGE_ID1)])
616 mock_repo.save.assert_called_once_with(
617 mock_repo.get.return_value,
618 mock_repo.get.return_value.status)
619
620 def test_wrapper_failure(self):
253621 mock_repo = mock.MagicMock()
254 mock_repo.get.return_value = fake_img
255 task = import_flow._VerifyImageState(TASK_ID1, TASK_TYPE,
256 mock_repo, IMAGE_ID1,
257 'glance-direct')
258 task.revert(mock.sentinel.result)
259
260 # Except for copy-image, image state should revert to queued
261 mock_repo.save_image.assert_called_once()
622 mock_repo.get.return_value.extra_properties = {
623 'os_glance_import_task': TASK_ID1}
624 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
625 TASK_ID1)
626
627 class SpecificError(Exception):
628 pass
629
630 try:
631 with wrapper:
632 raise SpecificError('some failure')
633 except SpecificError:
634 # NOTE(danms): Make sure we only caught the test exception
635 # and aren't hiding anything else
636 pass
637
638 mock_repo.get.assert_called_once_with(IMAGE_ID1)
639 mock_repo.save.assert_not_called()
640
641 @mock.patch.object(import_flow, 'LOG')
642 def test_wrapper_logs_status(self, mock_log):
643 mock_repo = mock.MagicMock()
644 mock_image = mock_repo.get.return_value
645 mock_image.extra_properties = {'os_glance_import_task': TASK_ID1}
646 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
647 TASK_ID1)
648
649 mock_image.status = 'foo'
650 with wrapper as action:
651 action.set_image_status('bar')
652
653 mock_log.debug.assert_called_once_with(
654 'Image %(image_id)s status changing from '
655 '%(old_status)s to %(new_status)s',
656 {'image_id': IMAGE_ID1,
657 'old_status': 'foo',
658 'new_status': 'bar'})
659 self.assertEqual('bar', mock_image.status)
660
661 def test_image_id_property(self):
662 mock_repo = mock.MagicMock()
663 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
664 TASK_ID1)
665 self.assertEqual(IMAGE_ID1, wrapper.image_id)
666
667 def test_drop_lock_for_task(self):
668 mock_repo = mock.MagicMock()
669 mock_repo.get.return_value.extra_properties = {
670 'os_glance_import_task': TASK_ID1}
671 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
672 TASK_ID1)
673 wrapper.drop_lock_for_task()
674 mock_repo.delete_property_atomic.assert_called_once_with(
675 mock_repo.get.return_value, 'os_glance_import_task', TASK_ID1)
676
677 def test_assert_task_lock(self):
678 mock_repo = mock.MagicMock()
679 mock_repo.get.return_value.extra_properties = {
680 'os_glance_import_task': TASK_ID1}
681 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
682 TASK_ID1)
683 wrapper.assert_task_lock()
684
685 # Try again with a different task ID and it should fail
686 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
687 'foo')
688 self.assertRaises(exception.TaskAbortedError,
689 wrapper.assert_task_lock)
690
691 def _grab_image(self, wrapper):
692 with wrapper:
693 pass
694
695 @mock.patch.object(import_flow, 'LOG')
696 def test_check_task_lock(self, mock_log):
697 mock_repo = mock.MagicMock()
698 wrapper = import_flow.ImportActionWrapper(mock_repo, IMAGE_ID1,
699 TASK_ID1)
700 image = mock.MagicMock(image_id=IMAGE_ID1)
701 image.extra_properties = {'os_glance_import_task': TASK_ID1}
702 mock_repo.get.return_value = image
703 self._grab_image(wrapper)
704 mock_log.error.assert_not_called()
705
706 image.extra_properties['os_glance_import_task'] = 'somethingelse'
707 self.assertRaises(exception.TaskAbortedError,
708 self._grab_image, wrapper)
709 mock_log.error.assert_called_once_with(
710 'Image %(image)s import task %(task)s attempted to take action on '
711 'image, but other task %(other)s holds the lock; Aborting.',
712 {'image': image.image_id,
713 'task': TASK_ID1,
714 'other': 'somethingelse'})
715
716
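`test_wrapper_success` and `test_wrapper_failure` above verify a save-on-success contract: the wrapper fetches the image on entry and persists it only when the `with` block exits cleanly. A simplified sketch of that context-manager pattern (both classes here are stand-ins for Glance's wrapper and repo, not its real signatures):

```python
# Sketch of the save-on-success pattern: fetch on entry, save on clean
# exit, and save nothing if the body raised.
class ActionWrapper:
    def __init__(self, repo, image_id):
        self._repo = repo
        self._image_id = image_id

    def __enter__(self):
        self._image = self._repo.get(self._image_id)
        return self._image

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self._repo.save(self._image)
        return False  # never swallow the caller's exception


class InMemoryRepo:
    def __init__(self, images):
        self.images = images
        self.saved = []

    def get(self, image_id):
        return self.images[image_id]

    def save(self, image):
        self.saved.append(dict(image))
```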
717 class TestImportActions(test_utils.BaseTestCase):
718 def setUp(self):
719 super(TestImportActions, self).setUp()
720 self.image = mock.MagicMock()
721 self.image.image_id = IMAGE_ID1
722 self.image.status = 'active'
723 self.image.extra_properties = {'speed': '88mph'}
724 self.image.checksum = mock.sentinel.checksum
725 self.image.os_hash_algo = mock.sentinel.hash_algo
726 self.image.os_hash_value = mock.sentinel.hash_value
727 self.image.size = mock.sentinel.size
728 self.actions = import_flow._ImportActions(self.image)
729
730 def test_image_property_proxies(self):
731 self.assertEqual(IMAGE_ID1, self.actions.image_id)
732 self.assertEqual('active', self.actions.image_status)
733
734 def test_merge_store_list(self):
735 # Addition with no existing property works
736 self.actions.merge_store_list('stores', ['foo', 'bar'])
737 self.assertEqual({'speed': '88mph',
738 'stores': 'bar,foo'},
739 self.image.extra_properties)
740
741 # Addition adds to the list
742 self.actions.merge_store_list('stores', ['baz'])
743 self.assertEqual('bar,baz,foo', self.image.extra_properties['stores'])
744
745 # Removal preserves the rest
746 self.actions.merge_store_list('stores', ['foo'], subtract=True)
747 self.assertEqual('bar,baz', self.image.extra_properties['stores'])
748
749 # Duplicates aren't duplicated
750 self.actions.merge_store_list('stores', ['bar'])
751 self.assertEqual('bar,baz', self.image.extra_properties['stores'])
752
753 # Removing the last store leaves the key empty but present
754 self.actions.merge_store_list('stores', ['baz', 'bar'], subtract=True)
755 self.assertEqual('', self.image.extra_properties['stores'])
756
757 # Make sure we ignore falsey stores
758 self.actions.merge_store_list('stores', ['', None])
759 self.assertEqual('', self.image.extra_properties['stores'])
760
761 @mock.patch.object(import_flow, 'LOG')
762 def test_merge_store_logs_info(self, mock_log):
763 # Removal from non-present key logs debug, but does not fail
764 self.actions.merge_store_list('stores', ['foo,bar'], subtract=True)
765 mock_log.debug.assert_has_calls([
766 mock.call(
767 'Stores %(stores)s not in %(key)s for image %(image_id)s',
768 {'image_id': IMAGE_ID1,
769 'key': 'stores',
770 'stores': 'foo,bar'}),
771 mock.call(
772 'Image %(image_id)s %(key)s=%(stores)s',
773 {'image_id': IMAGE_ID1,
774 'key': 'stores',
775 'stores': ''}),
776 ])
777
778 mock_log.debug.reset_mock()
779
780 self.actions.merge_store_list('stores', ['foo'])
781 self.assertEqual('foo', self.image.extra_properties['stores'])
782
783 mock_log.debug.reset_mock()
784
785 # Removal from a list where store is not present logs debug,
786 # but does not fail
787 self.actions.merge_store_list('stores', ['bar'], subtract=True)
788 self.assertEqual('foo', self.image.extra_properties['stores'])
789 mock_log.debug.assert_has_calls([
790 mock.call(
791 'Stores %(stores)s not in %(key)s for image %(image_id)s',
792 {'image_id': IMAGE_ID1,
793 'key': 'stores',
794 'stores': 'bar'}),
795 mock.call(
796 'Image %(image_id)s %(key)s=%(stores)s',
797 {'image_id': IMAGE_ID1,
798 'key': 'stores',
799 'stores': 'foo'}),
800 ])
801
802 def test_store_list_helpers(self):
803 self.actions.add_importing_stores(['foo', 'bar', 'baz'])
804 self.actions.remove_importing_stores(['bar'])
805 self.actions.add_failed_stores(['foo', 'bar'])
806 self.actions.remove_failed_stores(['foo'])
807 self.assertEqual({'speed': '88mph',
808 'os_glance_importing_to_stores': 'baz,foo',
809 'os_glance_failed_import': 'bar'},
810 self.image.extra_properties)
811
812 @mock.patch.object(image_import, 'set_image_data')
813 def test_set_image_data(self, mock_sid):
814 self.assertEqual(mock_sid.return_value,
815 self.actions.set_image_data(
816 mock.sentinel.uri, mock.sentinel.task_id,
817 mock.sentinel.backend, mock.sentinel.set_active))
818 mock_sid.assert_called_once_with(
819 self.image, mock.sentinel.uri, mock.sentinel.task_id,
820 backend=mock.sentinel.backend, set_active=mock.sentinel.set_active,
821 callback=None)
822
823 @mock.patch.object(image_import, 'set_image_data')
824 def test_set_image_data_with_callback(self, mock_sid):
825 def fake_set_image_data(image, uri, task_id, backend=None,
826 set_active=False,
827 callback=None):
828 callback(mock.sentinel.chunk, mock.sentinel.total)
829
830 mock_sid.side_effect = fake_set_image_data
831
832 callback = mock.MagicMock()
833 self.actions.set_image_data(mock.sentinel.uri, mock.sentinel.task_id,
834 mock.sentinel.backend,
835 mock.sentinel.set_active,
836 callback=callback)
837
838 # Make sure our callback was triggered through the functools.partial
839 # to include the original params and the action wrapper
840 callback.assert_called_once_with(self.actions,
841 mock.sentinel.chunk,
842 mock.sentinel.total)
843
844 def test_remove_location_for_store(self):
845 self.image.locations = [
846 {},
847 {'metadata': {}},
848 {'metadata': {'store': 'foo'}},
849 {'metadata': {'store': 'bar'}},
850 ]
851
852 self.actions.remove_location_for_store('foo')
853 self.assertEqual([{}, {'metadata': {}},
854 {'metadata': {'store': 'bar'}}],
855 self.image.locations)
856
857 # Add a second definition for bar and make sure only one is removed
858 self.image.locations.append({'metadata': {'store': 'bar'}})
859 self.actions.remove_location_for_store('bar')
860 self.assertEqual([{}, {'metadata': {}},
861 {'metadata': {'store': 'bar'}}],
862 self.image.locations)
863
864 def test_remove_location_for_store_last_location(self):
865 self.image.locations = [{'metadata': {'store': 'foo'}}]
866 self.actions.remove_location_for_store('foo')
867 self.assertEqual([], self.image.locations)
868 self.assertIsNone(self.image.checksum)
869 self.assertIsNone(self.image.os_hash_algo)
870 self.assertIsNone(self.image.os_hash_value)
871 self.assertIsNone(self.image.size)
872
873 @mock.patch.object(import_flow, 'LOG')
874 def test_remove_location_for_store_pop_failures(self, mock_log):
875 class TestList(list):
876 def pop(self):
877 pass
878
879 self.image.locations = TestList([{'metadata': {'store': 'foo'}}])
880 with mock.patch.object(self.image.locations, 'pop',
881 new_callable=mock.PropertyMock) as mock_pop:
882
883 mock_pop.side_effect = store_exceptions.NotFound(image='image')
884 self.actions.remove_location_for_store('foo')
885 mock_log.warning.assert_called_once_with(
886 _('Error deleting from store foo when reverting.'))
887 mock_log.warning.reset_mock()
888
889 mock_pop.side_effect = store_exceptions.Forbidden()
890 self.actions.remove_location_for_store('foo')
891 mock_log.warning.assert_called_once_with(
892 _('Error deleting from store foo when reverting.'))
893 mock_log.warning.reset_mock()
894
895 mock_pop.side_effect = Exception
896 self.actions.remove_location_for_store('foo')
897 mock_log.warning.assert_called_once_with(
898 _('Unexpected exception when deleting from store foo.'))
899 mock_log.warning.reset_mock()
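The pop-failure tests above assert that store errors hit during location removal are logged as warnings, never re-raised. A minimal sketch of that contract, with hypothetical `NotFoundError`/`ForbiddenError` stand-ins for the glance_store exceptions (this is not Glance's actual implementation):

```python
# Hedged sketch: remove at most one location matching store_id, and
# swallow store errors with a warning so a revert never explodes.
# NotFoundError/ForbiddenError are stand-ins for glance_store's
# NotFound/Forbidden exceptions.
import logging

LOG = logging.getLogger(__name__)


class NotFoundError(Exception):
    """Stand-in for glance_store's NotFound."""


class ForbiddenError(Exception):
    """Stand-in for glance_store's Forbidden."""


def remove_location_for_store(image, store_id):
    for index, location in enumerate(image['locations']):
        if location.get('metadata', {}).get('store') == store_id:
            try:
                image['locations'].pop(index)
            except (NotFoundError, ForbiddenError):
                LOG.warning('Error deleting from store %s when reverting.',
                            store_id)
            except Exception:
                LOG.warning('Unexpected exception when deleting from '
                            'store %s.', store_id)
            break  # only the first matching location is removed
```

Note the `break`: as the duplicate-`bar` test above asserts, only one matching location is removed per call.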
900
901
902 @mock.patch('glance.common.scripts.utils.get_task')
903 class TestCompleteTask(test_utils.BaseTestCase):
904 def setUp(self):
905 super(TestCompleteTask, self).setUp()
906 self.task_repo = mock.MagicMock()
907 self.task = mock.MagicMock()
908 self.wrapper = mock.MagicMock(image_id=IMAGE_ID1)
909
910 def test_execute(self, mock_get_task):
911 complete = import_flow._CompleteTask(TASK_ID1, TASK_TYPE,
912 self.task_repo, self.wrapper)
913 mock_get_task.return_value = self.task
914 complete.execute()
915 mock_get_task.assert_called_once_with(self.task_repo,
916 TASK_ID1)
917 self.task.succeed.assert_called_once_with({'image_id': IMAGE_ID1})
918 self.task_repo.save.assert_called_once_with(self.task)
919 self.wrapper.drop_lock_for_task.assert_called_once_with()
920
921 def test_execute_no_task(self, mock_get_task):
922 mock_get_task.return_value = None
923 complete = import_flow._CompleteTask(TASK_ID1, TASK_TYPE,
924 self.task_repo, self.wrapper)
925 complete.execute()
926 self.task_repo.save.assert_not_called()
927 self.wrapper.drop_lock_for_task.assert_called_once_with()
928
929 def test_execute_succeed_fails(self, mock_get_task):
930 mock_get_task.return_value = self.task
931 self.task.succeed.side_effect = Exception('testing')
932 complete = import_flow._CompleteTask(TASK_ID1, TASK_TYPE,
933 self.task_repo, self.wrapper)
934 complete.execute()
935 self.task.fail.assert_called_once_with(
936 _('Error: <class \'Exception\'>: testing'))
937 self.task_repo.save.assert_called_once_with(self.task)
938 self.wrapper.drop_lock_for_task.assert_called_once_with()
939
940 def test_execute_drop_lock_fails(self, mock_get_task):
941 mock_get_task.return_value = self.task
942 self.wrapper.drop_lock_for_task.side_effect = exception.NotFound()
943 complete = import_flow._CompleteTask(TASK_ID1, TASK_TYPE,
944 self.task_repo, self.wrapper)
945 with mock.patch('glance.async_.flows.api_image_import.LOG') as m_log:
946 complete.execute()
947 m_log.error.assert_called_once_with('Image %(image)s import task '
948 '%(task)s did not hold the '
949 'lock upon completion!',
950 {'image': IMAGE_ID1,
951 'task': TASK_ID1})
952 self.task.succeed.assert_called_once_with({'image_id': IMAGE_ID1})
1818 from glance_store._drivers import filesystem
1919 from glance_store import backend
2020 from oslo_config import cfg
21 from taskflow.types import failure
2122
2223 from glance.async_.flows._internal_plugins import web_download
2324 from glance.async_.flows import api_image_import
6566 self.image_id, self.uri)
6667 with mock.patch.object(script_utils,
6768 'get_image_data_iter') as mock_iter:
68 mock_iter.return_value = b"dddd"
69 web_download_task.execute()
70 mock_add.assert_called_once_with(self.image_id, b"dddd", 0)
69 mock_add.return_value = ["path", 4]
70 mock_iter.return_value.headers = {}
71 self.assertEqual(web_download_task.execute(), "path")
72 mock_add.assert_called_once_with(self.image_id,
73 mock_iter.return_value, 0)
74
75 @mock.patch.object(filesystem.Store, 'add')
76 def test_web_download_with_content_length(self, mock_add):
77 web_download_task = web_download._WebDownload(
78 self.task.task_id, self.task_type, self.task_repo,
79 self.image_id, self.uri)
80 with mock.patch.object(script_utils,
81 'get_image_data_iter') as mock_iter:
82 mock_iter.return_value.headers = {'content-length': '4'}
83 mock_add.return_value = ["path", 4]
84 self.assertEqual(web_download_task.execute(), "path")
85 mock_add.assert_called_once_with(self.image_id,
86 mock_iter.return_value, 0)
87
88 @mock.patch.object(filesystem.Store, 'add')
89 def test_web_download_with_invalid_content_length(self, mock_add):
90 web_download_task = web_download._WebDownload(
91 self.task.task_id, self.task_type, self.task_repo,
92 self.image_id, self.uri)
93 with mock.patch.object(script_utils,
94 'get_image_data_iter') as mock_iter:
95 mock_iter.return_value.headers = {'content-length': "not_valid"}
96 mock_add.return_value = ["path", 4]
97 self.assertEqual(web_download_task.execute(), "path")
98 mock_add.assert_called_once_with(self.image_id,
99 mock_iter.return_value, 0)
100
101 @mock.patch.object(filesystem.Store, 'add')
102 def test_web_download_fails_when_data_size_different(self, mock_add):
103 web_download_task = web_download._WebDownload(
104 self.task.task_id, self.task_type, self.task_repo,
105 self.image_id, self.uri)
106 with mock.patch.object(script_utils,
107 'get_image_data_iter') as mock_iter:
108 mock_iter.return_value.headers = {'content-length': '4'}
109 mock_add.return_value = ["path", 3]
110 self.assertRaises(
111 glance.common.exception.ImportTaskError,
112 web_download_task.execute)
71113
72114 def test_web_download_node_staging_uri_is_none(self):
73115 self.config(node_staging_uri=None)
133175 delete_from_fs_task.execute(staging_path)
134176 self.assertEqual(1, mock_exists.call_count)
135177 self.assertEqual(1, mock_unlik.call_count)
178
179 @mock.patch.object(filesystem.Store, 'add')
180 @mock.patch("glance.async_.flows._internal_plugins.web_download.store_api")
181 def test_web_download_revert_with_failure(self, mock_store_api,
182 mock_add):
183 web_download_task = web_download._WebDownload(
184 self.task.task_id, self.task_type, self.task_repo,
185 self.image_id, self.uri)
186 with mock.patch.object(script_utils,
187 'get_image_data_iter') as mock_iter:
188 mock_iter.return_value.headers = {'content-length': '4'}
189 mock_add.return_value = "/path/to_downloaded_data", 3
190 self.assertRaises(
191 glance.common.exception.ImportTaskError,
192 web_download_task.execute)
193
194 web_download_task.revert(None)
195 mock_store_api.delete_from_backend.assert_called_once_with(
196 "/path/to_downloaded_data")
197
198 @mock.patch("glance.async_.flows._internal_plugins.web_download.store_api")
199 def test_web_download_revert_without_failure_multi_store(self,
200 mock_store_api):
201 enabled_backends = {
202 'fast': 'file',
203 'cheap': 'file'
204 }
205 self.config(enabled_backends=enabled_backends)
206 web_download_task = web_download._WebDownload(
207 self.task.task_id, self.task_type, self.task_repo,
208 self.image_id, self.uri)
209 web_download_task._path = "/path/to_downloaded_data"
210 web_download_task.revert("/path/to_downloaded_data")
211 mock_store_api.delete.assert_called_once_with(
212 "/path/to_downloaded_data", None)
213
214 @mock.patch("glance.async_.flows._internal_plugins.web_download.store_api")
215 def test_web_download_revert_with_failure_without_path(self,
216 mock_store_api):
217 result = failure.Failure.from_exception(
218 glance.common.exception.ImportTaskError())
219 web_download_task = web_download._WebDownload(
220 self.task.task_id, self.task_type, self.task_repo,
221 self.image_id, self.uri)
222 web_download_task.revert(result)
223 mock_store_api.delete_from_backend.assert_not_called()
224
225 @mock.patch("glance.async_.flows._internal_plugins.web_download.store_api")
226 def test_web_download_revert_with_failure_with_path(self, mock_store_api):
227 result = failure.Failure.from_exception(
228 glance.common.exception.ImportTaskError())
229 web_download_task = web_download._WebDownload(
230 self.task.task_id, self.task_type, self.task_repo,
231 self.image_id, self.uri)
232 web_download_task._path = "/path/to_downloaded_data"
233 web_download_task.revert(result)
234 mock_store_api.delete_from_backend.assert_called_once_with(
235 "/path/to_downloaded_data")
236
237 @mock.patch("glance.async_.flows._internal_plugins.web_download.store_api")
238 def test_web_download_delete_fails_on_revert(self, mock_store_api):
239 result = failure.Failure.from_exception(
240 glance.common.exception.ImportTaskError())
241 mock_store_api.delete_from_backend.side_effect = Exception
242 web_download_task = web_download._WebDownload(
243 self.task.task_id, self.task_type, self.task_repo,
244 self.image_id, self.uri)
245 web_download_task._path = "/path/to_downloaded_data"
246 # this will verify that revert does not break because of failure
247 # while deleting data in staging area
248 web_download_task.revert(result)
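The content-length tests above pin down a simple rule: a valid `content-length` header that disagrees with the bytes the store actually wrote is fatal, while a missing or unparsable header is ignored. A hedged sketch of that check, with a stand-in `ImportTaskError` and an assumed helper name (`check_downloaded_size` is illustrative, not Glance's real function):

```python
# Hedged sketch of the download size verification these tests exercise.
# ImportTaskError stands in for glance.common.exception.ImportTaskError.
class ImportTaskError(Exception):
    """Stand-in for glance.common.exception.ImportTaskError."""


def check_downloaded_size(headers, bytes_written):
    try:
        expected = int(headers.get('content-length', ''))
    except ValueError:
        # A missing or invalid content-length header is not fatal;
        # there is simply nothing to verify against.
        return
    if bytes_written != expected:
        raise ImportTaskError('Downloaded %i bytes, expected %i'
                              % (bytes_written, expected))
```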
1515
1616 from unittest import mock
1717
18 import futurist
1819 import glance_store as store
1920 from oslo_config import cfg
2021 from taskflow.patterns import linear_flow
5354
5455 # assert the call
5556 mock_run.assert_called_once_with(task_id, task_type)
57
58 def test_with_admin_repo(self):
59 admin_repo = mock.MagicMock()
60 executor = glance.async_.TaskExecutor(self.context,
61 self.task_repo,
62 self.image_repo,
63 self.image_factory,
64 admin_repo=admin_repo)
65 self.assertEqual(admin_repo, executor.admin_repo)
5666
5767
5868 class TestImportTaskFlow(test_utils.BaseTestCase):
6878 'glance-direct', 'web-download', 'copy-image'])
6979 self.config(node_staging_uri='file:///tmp/staging')
7080 store.create_stores(CONF)
71 self.base_flow = ['ConfigureStaging', 'ImportToStore',
81 self.base_flow = ['ImageLock', 'ConfigureStaging', 'ImportToStore',
7282 'DeleteFromFS', 'VerifyImageState',
7383 'CompleteTask']
7484 self.import_plugins = ['Convert_Image',
7787
7888 def _get_flow(self, import_req=None):
7989 inputs = {
80 'task_id': mock.MagicMock(),
90 'task_id': mock.sentinel.task_id,
8191 'task_type': mock.MagicMock(),
8292 'task_repo': mock.MagicMock(),
8393 'image_repo': mock.MagicMock(),
8494 'image_id': mock.MagicMock(),
8595 'import_req': import_req or mock.MagicMock()
8696 }
97 inputs['image_repo'].get.return_value = mock.MagicMock(
98 extra_properties={'os_glance_import_task': mock.sentinel.task_id})
8799 flow = api_image_import.get_flow(**inputs)
88100 return flow
89101
105117 flow = self._get_flow()
106118
107119 flow_comp = self._get_flow_tasks(flow)
108 # assert flow has 5 tasks
109 self.assertEqual(5, len(flow_comp))
120 # assert flow has all the tasks
121 self.assertEqual(len(self.base_flow), len(flow_comp))
110122 for c in self.base_flow:
111123 self.assertIn(c, flow_comp)
112124
124136 flow = self._get_flow(import_req=import_req)
125137
126138 flow_comp = self._get_flow_tasks(flow)
127 # assert flow has 6 tasks
128 self.assertEqual(6, len(flow_comp))
139 # assert flow has all the tasks
140 self.assertEqual(len(self.base_flow) + 1, len(flow_comp))
129141 for c in self.base_flow:
130142 self.assertIn(c, flow_comp)
131143 self.assertIn('WebDownload', flow_comp)
146158 flow = self._get_flow(import_req=import_req)
147159
148160 flow_comp = self._get_flow_tasks(flow)
149 # assert flow has 6 tasks
150 self.assertEqual(6, len(flow_comp))
161 # assert flow has all the tasks
162 self.assertEqual(len(self.base_flow) + 1, len(flow_comp))
151163 for c in self.base_flow:
152164 self.assertIn(c, flow_comp)
153165 self.assertIn('CopyImage', flow_comp)
163175 flow = self._get_flow()
164176
165177 flow_comp = self._get_flow_tasks(flow)
166 # assert flow has 8 tasks (base_flow + plugins)
167 self.assertEqual(8, len(flow_comp))
178 # assert flow has all the tasks (base_flow + plugins)
179 plugins = CONF.image_import_opts.image_import_plugins
180 self.assertEqual(len(self.base_flow) + len(plugins), len(flow_comp))
168181 for c in self.base_flow:
169182 self.assertIn(c, flow_comp)
170183 for c in self.import_plugins:
191204 flow = self._get_flow(import_req=import_req)
192205
193206 flow_comp = self._get_flow_tasks(flow)
194 # assert flow has 6 tasks
195 self.assertEqual(6, len(flow_comp))
207 # assert flow has all the tasks (just base and conversion)
208 self.assertEqual(len(self.base_flow) + 1, len(flow_comp))
196209 for c in self.base_flow:
197210 self.assertIn(c, flow_comp)
198211 self.assertIn('CopyImage', flow_comp)
212
213
214 @mock.patch('glance.async_._THREADPOOL_MODEL', new=None)
215 class TestSystemThreadPoolModel(test_utils.BaseTestCase):
216 def test_eventlet_model(self):
217 model_cls = glance.async_.EventletThreadPoolModel
218 self.assertEqual(futurist.GreenThreadPoolExecutor,
219 model_cls.get_threadpool_executor_class())
220
221 def test_native_model(self):
222 model_cls = glance.async_.NativeThreadPoolModel
223 self.assertEqual(futurist.ThreadPoolExecutor,
224 model_cls.get_threadpool_executor_class())
225
226 @mock.patch('glance.async_.ThreadPoolModel.get_threadpool_executor_class')
227 def test_base_model_spawn(self, mock_gte):
228 pool_cls = mock.MagicMock()
229 pool_cls.configure_mock(__name__='fake')
230 mock_gte.return_value = pool_cls
231
232 model = glance.async_.ThreadPoolModel()
233 result = model.spawn(print, 'foo', bar='baz')
234
235 pool = pool_cls.return_value
236
237 # Make sure the default size was passed to the executor
238 pool_cls.assert_called_once_with(1)
239
240 # Make sure we submitted the function to the executor
241 pool.submit.assert_called_once_with(print, 'foo', bar='baz')
242
243 # This isn't used anywhere, but make sure we get the future
244 self.assertEqual(pool.submit.return_value, result)
245
246 @mock.patch('glance.async_.ThreadPoolModel.get_threadpool_executor_class')
247 def test_base_model_init_with_size(self, mock_gte):
248 mock_gte.return_value.__name__ = 'TestModel'
249 with mock.patch.object(glance.async_, 'LOG') as mock_log:
250 glance.async_.ThreadPoolModel(123)
251 mock_log.debug.assert_called_once_with(
252 'Creating threadpool model %r with size %i',
253 'TestModel', 123)
254 mock_gte.return_value.assert_called_once_with(123)
255
256 def test_set_threadpool_model_native(self):
257 glance.async_.set_threadpool_model('native')
258 self.assertEqual(glance.async_.NativeThreadPoolModel,
259 glance.async_._THREADPOOL_MODEL)
260
261 def test_set_threadpool_model_eventlet(self):
262 glance.async_.set_threadpool_model('eventlet')
263 self.assertEqual(glance.async_.EventletThreadPoolModel,
264 glance.async_._THREADPOOL_MODEL)
265
266 def test_set_threadpool_model_unknown(self):
267 # Unknown threadpool models are not tolerated
268 self.assertRaises(RuntimeError,
269 glance.async_.set_threadpool_model,
270 'danthread9000')
271
272 def test_set_threadpool_model_again(self):
273 # Setting the model to the same thing is fine
274 glance.async_.set_threadpool_model('native')
275 glance.async_.set_threadpool_model('native')
276
277 def test_set_threadpool_model_different(self):
278 glance.async_.set_threadpool_model('native')
279 # The model cannot be switched at runtime
280 self.assertRaises(RuntimeError,
281 glance.async_.set_threadpool_model,
282 'eventlet')
283
284 def test_set_threadpool_model_log(self):
285 with mock.patch.object(glance.async_, 'LOG') as mock_log:
286 glance.async_.set_threadpool_model('eventlet')
287 mock_log.info.assert_called_once_with(
288 'Threadpool model set to %r', 'EventletThreadPoolModel')
289
290 def test_get_threadpool_model(self):
291 glance.async_.set_threadpool_model('native')
292 self.assertEqual(glance.async_.NativeThreadPoolModel,
293 glance.async_.get_threadpool_model())
294
295 def test_get_threadpool_model_unset(self):
296 # If the model is not set, we get an AssertionError
297 self.assertRaises(AssertionError,
298 glance.async_.get_threadpool_model)
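Taken together, the `TestSystemThreadPoolModel` cases above fully specify the module-level model guard: the model may be set once, re-setting it to the same value is a no-op, switching at runtime raises `RuntimeError`, unknown names raise `RuntimeError`, and reading before setting is a programming error. A minimal sketch of that contract (model classes reduced to name strings; not Glance's actual code):

```python
# Hedged sketch of the set/get threadpool model contract asserted above.
_THREADPOOL_MODEL = None

_MODELS = {'native': 'NativeThreadPoolModel',
           'eventlet': 'EventletThreadPoolModel'}


def set_threadpool_model(kind):
    global _THREADPOOL_MODEL
    if kind not in _MODELS:
        # Unknown threadpool models are not tolerated
        raise RuntimeError('Unknown threadpool model %r' % kind)
    model = _MODELS[kind]
    if _THREADPOOL_MODEL is None:
        _THREADPOOL_MODEL = model
    elif _THREADPOOL_MODEL != model:
        # The model cannot be switched at runtime
        raise RuntimeError('Threadpool model cannot be changed')


def get_threadpool_model():
    # Reading before setting is a programming error, hence assert
    assert _THREADPOOL_MODEL is not None, 'Threadpool model is unset'
    return _THREADPOOL_MODEL
```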
1414
1515 from unittest import mock
1616
17 import futurist
1718 import glance_store
1819 from oslo_config import cfg
1920 from taskflow import engines
2021
22 import glance.async_
2123 from glance.async_ import taskflow_executor
2224 from glance.common.scripts.image_import import main as image_import
2325 from glance import domain
3032 class TestTaskExecutor(test_utils.BaseTestCase):
3133
3234 def setUp(self):
35 # NOTE(danms): Makes sure that we have a model set to something
36 glance.async_._THREADPOOL_MODEL = None
37 glance.async_.set_threadpool_model('eventlet')
38
3339 super(TestTaskExecutor, self).setUp()
3440
3541 glance_store.register_opts(CONF)
6773 self.image_repo,
6874 self.image_factory)
6975
76 def test_fetch_an_executor_parallel(self):
77 self.config(engine_mode='parallel', group='taskflow_executor')
78 pool = self.executor._fetch_an_executor()
79 self.assertIsInstance(pool, futurist.GreenThreadPoolExecutor)
80
81 def test_fetch_an_executor_serial(self):
82 pool = self.executor._fetch_an_executor()
83 self.assertIsNone(pool)
84
7085 def test_begin_processing(self):
7186 with mock.patch.object(engines, 'load') as load_mock:
7287 engine = mock.Mock()
99114 self.assertEqual('failure', self.task.status)
100115 self.task_repo.save.assert_called_with(self.task)
101116 self.assertEqual(1, import_mock.call_count)
117
118 @mock.patch('stevedore.driver.DriverManager')
119 def test_get_flow_with_admin_repo(self, mock_driver):
120 admin_repo = mock.MagicMock()
121 executor = taskflow_executor.TaskExecutor(self.context,
122 self.task_repo,
123 self.image_repo,
124 self.image_factory,
125 admin_repo=admin_repo)
126 self.assertEqual(mock_driver.return_value.driver,
127 executor._get_flow(self.task))
128 mock_driver.assert_called_once_with(
129 'glance.flows', self.task.type,
130 invoke_on_load=True,
131 invoke_kwds={'task_id': self.task.task_id,
132 'task_type': self.task.type,
133 'context': self.context,
134 'task_repo': self.task_repo,
135 'image_repo': self.image_repo,
136 'image_factory': self.image_factory,
137 'backend': None,
138 'admin_repo': admin_repo,
139 'uri': 'http://cloud.foo/image.qcow2'})
7070 :returns: the number of how many store drivers been loaded.
7171 """
7272 self.config(enabled_backends={'fast': 'file', 'cheap': 'file',
73 'readonly_store': 'http'})
73 'readonly_store': 'http',
74 'fast-cinder': 'cinder'})
7475 store.register_store_opts(CONF)
7576
7677 self.config(default_backend='fast',
121121 self.assertEqual(1, mock_set_img_data.call_count)
122122 mock_delete_data.assert_called_once_with(
123123 mock_create_image().context, image_id, 'location')
124
125 @mock.patch('oslo_utils.timeutils.StopWatch')
126 @mock.patch('glance.common.scripts.utils.get_image_data_iter')
127 def test_set_image_data_with_callback(self, mock_gidi, mock_sw):
128 data = [b'0' * 60, b'0' * 50, b'0' * 10, b'0' * 150]
129 result_data = []
130 mock_gidi.return_value = iter(data)
131 mock_sw.return_value.expired.side_effect = [False, True, False,
132 False]
133 image = mock.MagicMock()
134 callback = mock.MagicMock()
135
136 def fake_set_data(data_iter, **kwargs):
137 for chunk in data_iter:
138 result_data.append(chunk)
139
140 image.set_data.side_effect = fake_set_data
141 image_import_script.set_image_data(image, 'http://fake', None,
142 callback=callback)
143
144 mock_gidi.assert_called_once_with('http://fake')
145 self.assertEqual(data, result_data)
146 # Since we only fired the timer once, only two calls expected
147 # for the four reads we did, including the final obligatory one
148 callback.assert_has_calls([mock.call(110, 110),
149 mock.call(160, 270)])
123123 location = 'cinder://'
124124 self.assertRaises(urllib.error.URLError,
125125 script_utils.validate_location_uri, location)
126
127
128 class TestCallbackIterator(test_utils.BaseTestCase):
129 def test_iterator_iterates(self):
130 # Include a zero-length generation to make sure we don't trigger
131 # the callback when nothing new has happened.
132 items = ['1', '2', '', '3']
133 callback = mock.MagicMock()
134 cb_iter = script_utils.CallbackIterator(iter(items), callback)
135 iter_items = list(cb_iter)
136 callback.assert_has_calls([mock.call(1, 1),
137 mock.call(1, 2),
138 mock.call(1, 3)])
139 self.assertEqual(items, iter_items)
140
141 # Make sure we don't call the callback on close if we
142 # have processed all the data
143 callback.reset_mock()
144 cb_iter.close()
145 callback.assert_not_called()
146
147 @mock.patch('oslo_utils.timeutils.StopWatch')
148 def test_iterator_iterates_granularly(self, mock_sw):
149 items = ['1', '2', '3']
150 callback = mock.MagicMock()
151 mock_sw.return_value.expired.side_effect = [False, True, False]
152 cb_iter = script_utils.CallbackIterator(iter(items), callback,
153 min_interval=30)
154 iter_items = list(cb_iter)
155 self.assertEqual(items, iter_items)
156 # The timer only fired once, but we should still expect the final
157 # chunk to be emitted.
158 callback.assert_has_calls([mock.call(2, 2),
159 mock.call(1, 3)])
160
161 mock_sw.assert_called_once_with(30)
162 mock_sw.return_value.start.assert_called_once_with()
163 mock_sw.return_value.restart.assert_called_once_with()
164
165 # Make sure we don't call the callback on close if we
166 # have processed all the data
167 callback.reset_mock()
168 cb_iter.close()
169 callback.assert_not_called()
170
171 def test_proxy_close(self):
172 callback = mock.MagicMock()
173 source = mock.MagicMock()
174 del source.close
175 # NOTE(danms): This will generate AttributeError if it
176 # tries to call close after the del above.
177 script_utils.CallbackIterator(source, callback).close()
178
179 source = mock.MagicMock()
180 source.close.return_value = 'foo'
181 script_utils.CallbackIterator(source, callback).close()
182 source.close.assert_called_once_with()
183
184 # We didn't process any data, so no callback should be expected
185 callback.assert_not_called()
186
187 @mock.patch('oslo_utils.timeutils.StopWatch')
188 def test_proxy_read(self, mock_sw):
189 items = ['1', '2', '3']
190 source = mock.MagicMock()
191 source.read.side_effect = items
192 callback = mock.MagicMock()
193 mock_sw.return_value.expired.side_effect = [False, True, False]
194 cb_iter = script_utils.CallbackIterator(source, callback,
195 min_interval=30)
196 results = [cb_iter.read(1) for i in range(len(items))]
197 self.assertEqual(items, results)
198 # The timer only fired once while reading, so we only expect
199 # one callback.
200 callback.assert_has_calls([mock.call(2, 2)])
201 cb_iter.close()
202 # If we close with residue since the last callback, we should
203 # call the callback with that.
204 callback.assert_has_calls([mock.call(2, 2),
205 mock.call(1, 3)])
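The `TestCallbackIterator` cases above describe a wrapper that counts bytes flowing through an iterator and reports `(bytes since last callback, total bytes)` to a callback, flushing any residue on `close()` and proxying `close()` only if the source has one. A simplified sketch of that behavior, omitting the `min_interval`/`StopWatch` throttling (so the callback fires on every non-empty chunk):

```python
# Hedged sketch of CallbackIterator without interval throttling; the
# real class additionally rate-limits callbacks with a StopWatch.
class CallbackIterator:
    def __init__(self, source, callback):
        self._source = source
        self._callback = callback
        self._pending = 0   # bytes since the last callback
        self._total = 0     # bytes seen overall

    def _accumulate(self, chunk):
        self._pending += len(chunk)
        self._total += len(chunk)
        if self._pending:
            # Zero-length chunks do not trigger the callback
            self._callback(self._pending, self._total)
            self._pending = 0
        return chunk

    def __iter__(self):
        for chunk in self._source:
            yield self._accumulate(chunk)

    def close(self):
        # Flush residue not yet reported, then close the source only
        # if it actually supports close().
        if self._pending:
            self._callback(self._pending, self._total)
            self._pending = 0
        if hasattr(self._source, 'close'):
            self._source.close()
```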
0 # Copyright 2020 Red Hat, Inc
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 import io
16 import os
17 import re
18 import subprocess
19 import tempfile
20 from unittest import mock
21
22 from oslo_utils import units
23
24 from glance.common import format_inspector
25 from glance.tests import utils as test_utils
26
27
28 def get_size_from_qemu_img(filename):
29 output = subprocess.check_output('qemu-img info "%s"' % filename,
30 shell=True)
31 for line in output.split(b'\n'):
32 m = re.search(b'^virtual size: .* .([0-9]+) bytes', line.strip())
33 if m:
34 return int(m.group(1))
35
36 raise Exception('Could not find virtual size with qemu-img')
37
38
39 class TestFormatInspectors(test_utils.BaseTestCase):
40 def setUp(self):
41 super(TestFormatInspectors, self).setUp()
42 self._created_files = []
43
44 def tearDown(self):
45 super(TestFormatInspectors, self).tearDown()
46 for fn in self._created_files:
47 try:
48 os.remove(fn)
49 except Exception:
50 pass
51
52 def _create_img(self, fmt, size):
53 if fmt == 'vhd':
54 # QEMU calls the vhd format vpc
55 fmt = 'vpc'
56
57 fn = tempfile.mktemp(prefix='glance-unittest-formatinspector-',
58 suffix='.%s' % fmt)
59 self._created_files.append(fn)
60 subprocess.check_output(
61 'qemu-img create -f %s %s %i' % (fmt, fn, size),
62 shell=True)
63 return fn
64
65 def _test_format_at_block_size(self, format_name, img, block_size):
66 fmt = format_inspector.get_inspector(format_name)()
67 self.assertIsNotNone(fmt,
68 'Did not get format inspector for %s' % (
69 format_name))
70 wrapper = format_inspector.InfoWrapper(open(img, 'rb'), fmt)
71
72 while True:
73 chunk = wrapper.read(block_size)
74 if not chunk:
75 break
76
77 wrapper.close()
78 return fmt
79
80 def _test_format_at_image_size(self, format_name, image_size):
81 img = self._create_img(format_name, image_size)
82
83 # Some formats have internal alignment restrictions making this not
84 # always exactly like image_size, so get the real value for comparison
85 virtual_size = get_size_from_qemu_img(img)
86
87 # Read the format in various sizes, some of which will read whole
88 # sections in a single read, others will be completely unaligned, etc.
89 for block_size in (64 * units.Ki, 512, 17, 1 * units.Mi):
90 fmt = self._test_format_at_block_size(format_name, img, block_size)
91 self.assertTrue(fmt.format_match,
92 'Failed to match %s at size %i block %i' % (
93 format_name, image_size, block_size))
94 self.assertEqual(virtual_size, fmt.virtual_size,
95 ('Failed to calculate size for %s at size %i '
96 'block %i') % (format_name, image_size,
97 block_size))
98 memory = sum(fmt.context_info.values())
99 self.assertLess(memory, 512 * units.Ki,
100 'Format used more than 512KiB of memory: %s' % (
101 fmt.context_info))
102
103 def _test_format(self, format_name):
104 # Try a few different image sizes, including some odd and very small
105 # sizes
106 for image_size in (512, 513, 2057, 7):
107 self._test_format_at_image_size(format_name, image_size * units.Mi)
108
109 def test_qcow2(self):
110 self._test_format('qcow2')
111
112 def test_vhd(self):
113 self._test_format('vhd')
114
115 def test_vhdx(self):
116 self._test_format('vhdx')
117
118 def test_vmdk(self):
119 self._test_format('vmdk')
120
121 def test_vdi(self):
122 self._test_format('vdi')
123
124 def _test_format_with_invalid_data(self, format_name):
125 fmt = format_inspector.get_inspector(format_name)()
126 wrapper = format_inspector.InfoWrapper(open(__file__, 'rb'), fmt)
127 while True:
128 chunk = wrapper.read(32)
129 if not chunk:
130 break
131
132 wrapper.close()
133 self.assertFalse(fmt.format_match)
134 self.assertEqual(0, fmt.virtual_size)
135 memory = sum(fmt.context_info.values())
136 self.assertLess(memory, 512 * units.Ki,
137 'Format used more than 512KiB of memory: %s' % (
138 fmt.context_info))
139
140 def test_qcow2_invalid(self):
141 self._test_format_with_invalid_data('qcow2')
142
143 def test_vhd_invalid(self):
144 self._test_format_with_invalid_data('vhd')
145
146 def test_vhdx_invalid(self):
147 self._test_format_with_invalid_data('vhdx')
148
149 def test_vmdk_invalid(self):
150 self._test_format_with_invalid_data('vmdk')
151
152 def test_vdi_invalid(self):
153 self._test_format_with_invalid_data('vdi')
154
155
156 class TestFormatInspectorInfra(test_utils.BaseTestCase):
157 def _test_capture_region_bs(self, bs):
158 data = b''.join(chr(x).encode() for x in range(ord('A'), ord('z')))
159
160 regions = [
161 format_inspector.CaptureRegion(3, 9),
162 format_inspector.CaptureRegion(0, 256),
163 format_inspector.CaptureRegion(32, 8),
164 ]
165
166 for region in regions:
167 # None of them should be complete yet
168 self.assertFalse(region.complete)
169
170 pos = 0
171 for i in range(0, len(data), bs):
172 chunk = data[i:i + bs]
173 pos += len(chunk)
174 for region in regions:
175 region.capture(chunk, pos)
176
177 self.assertEqual(data[3:12], regions[0].data)
178 self.assertEqual(data[0:256], regions[1].data)
179 self.assertEqual(data[32:40], regions[2].data)
180
181 # The small regions should be complete
182 self.assertTrue(regions[0].complete)
183 self.assertTrue(regions[2].complete)
184
185 # This region extended past the available data, so not complete
186 self.assertFalse(regions[1].complete)
187
188 def test_capture_region(self):
189 for block_size in (1, 3, 7, 13, 32, 64):
190 self._test_capture_region_bs(block_size)
191
192 def _get_wrapper(self, data):
193 source = io.BytesIO(data)
194 fake_fmt = mock.create_autospec(format_inspector.get_inspector('raw'))
195 return format_inspector.InfoWrapper(source, fake_fmt)
196
197 def test_info_wrapper_file_like(self):
198 data = b''.join(chr(x).encode() for x in range(ord('A'), ord('z')))
199 wrapper = self._get_wrapper(data)
200
201 read_data = b''
202 while True:
203 chunk = wrapper.read(8)
204 if not chunk:
205 break
206 read_data += chunk
207
208 self.assertEqual(data, read_data)
209
210 def test_info_wrapper_iter_like(self):
211 data = b''.join(chr(x).encode() for x in range(ord('A'), ord('z')))
212 wrapper = self._get_wrapper(data)
213
214 read_data = b''
215 for chunk in wrapper:
216 read_data += chunk
217
218 self.assertEqual(data, read_data)
219
220 def test_info_wrapper_file_like_eats_error(self):
221 wrapper = self._get_wrapper(b'123456')
222 wrapper._format.eat_chunk.side_effect = Exception('fail')
223
224 data = b''
225 while True:
226 chunk = wrapper.read(3)
227 if not chunk:
228 break
229 data += chunk
230
231 # Make sure we got all the data despite the error
232 self.assertEqual(b'123456', data)
233
234 # Make sure we only called this once and never again after
235 # the error was raised
236 wrapper._format.eat_chunk.assert_called_once_with(b'123')
237
238 def test_info_wrapper_iter_like_eats_error(self):
239 fake_fmt = mock.create_autospec(format_inspector.get_inspector('raw'))
240 wrapper = format_inspector.InfoWrapper(iter([b'123', b'456']),
241 fake_fmt)
242 fake_fmt.eat_chunk.side_effect = Exception('fail')
243
244 data = b''
245 for chunk in wrapper:
246 data += chunk
247
248 # Make sure we got all the data despite the error
249 self.assertEqual(b'123456', data)
250
251 # Make sure we only called this once and never again after
252 # the error was raised
253 fake_fmt.eat_chunk.assert_called_once_with(b'123')
254
255 def test_get_inspector(self):
256 self.assertEqual(format_inspector.QcowInspector,
257 format_inspector.get_inspector('qcow2'))
258 self.assertIsNone(format_inspector.get_inspector('foo'))
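The `_test_capture_region_bs` helper above streams contiguous chunks into regions via `capture(chunk, pos)`, where `pos` is the stream offset just *after* the chunk, and a region collects bytes `[offset, offset + length)`. A minimal sketch of that contract (names mirror the tests; this is not Glance's actual `CaptureRegion`):

```python
# Hedged sketch of a streaming byte-range capturer matching the
# CaptureRegion behavior the tests above assert.
class CaptureRegion:
    def __init__(self, offset, length):
        self.offset = offset
        self.length = length
        self.data = b''

    @property
    def complete(self):
        # Complete once the full requested length has been captured
        return len(self.data) >= self.length

    def capture(self, chunk, current_position):
        # current_position is the stream offset *after* this chunk
        chunk_start = current_position - len(chunk)
        region_end = self.offset + self.length
        if chunk_start >= region_end or current_position <= self.offset:
            return  # chunk lies entirely outside the region
        start = max(self.offset - chunk_start, 0)
        end = min(region_end - chunk_start, len(chunk))
        self.data += chunk[start:end]
```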
glance/tests/unit/common/test_rpc.py (+0, -358)
0 # -*- coding: utf-8 -*-
1
2 # Copyright 2013 Red Hat, Inc.
3 # All Rights Reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License"); you may
6 # not use this file except in compliance with the License. You may obtain
7 # a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
13 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
14 # License for the specific language governing permissions and limitations
15 # under the License.
16 import datetime
17
18 from oslo_serialization import jsonutils
19 from oslo_utils import encodeutils
20 import routes
21 import six
22 from six.moves import http_client as http
23 import webob
24
25 from glance.common import exception
26 from glance.common import rpc
27 from glance.common import wsgi
28 from glance.tests.unit import base
29 from glance.tests import utils as test_utils
30
31
32 class FakeResource(object):
33 """
34 Fake resource defining some methods that
35 will be called later by the api.
36 """
37
38 def get_images(self, context, keyword=None):
39 return keyword
40
41 def count_images(self, context, images):
42 return len(images)
43
44 def get_all_images(self, context):
45 return False
46
47 def raise_value_error(self, context):
48 raise ValueError("Yep, Just like that!")
49
50 def raise_weird_error(self, context):
51 class WeirdError(Exception):
52 pass
53 raise WeirdError("Weirdness")
54
55
56 def create_api():
57 deserializer = rpc.RPCJSONDeserializer()
58 serializer = rpc.RPCJSONSerializer()
59 controller = rpc.Controller()
60 controller.register(FakeResource())
61 res = wsgi.Resource(controller, deserializer, serializer)
62
63 mapper = routes.Mapper()
64 mapper.connect("/rpc", controller=res,
65 conditions=dict(method=["POST"]),
66 action="__call__")
67 return test_utils.FakeAuthMiddleware(wsgi.Router(mapper), is_admin=True)
68
69
70 class TestRPCController(base.IsolatedUnitTest):
71
72 def setUp(self):
73 super(TestRPCController, self).setUp()
74 self.res = FakeResource()
75 self.controller = rpc.Controller()
76 self.controller.register(self.res)
77
78 def test_register(self):
79 res = FakeResource()
80 controller = rpc.Controller()
81 controller.register(res)
82 self.assertIn("get_images", controller._registered)
83 self.assertIn("get_all_images", controller._registered)
84
85 def test_register_filtered(self):
86 res = FakeResource()
87 controller = rpc.Controller()
88 controller.register(res, filtered=["get_all_images"])
89 self.assertIn("get_all_images", controller._registered)
90
91 def test_register_excluded(self):
92 res = FakeResource()
93 controller = rpc.Controller()
94 controller.register(res, excluded=["get_all_images"])
95 self.assertIn("get_images", controller._registered)
96
97 def test_register_refiner(self):
98 res = FakeResource()
99 controller = rpc.Controller()
100
101 # Not callable
102 self.assertRaises(TypeError,
103 controller.register,
104 res, refiner="get_all_images")
105
106 # Filter returns False
107 controller.register(res, refiner=lambda x: False)
108 self.assertNotIn("get_images", controller._registered)
109 self.assertNotIn("get_all_images", controller._registered)
110
111 # Filter returns True
112 controller.register(res, refiner=lambda x: True)
113 self.assertIn("get_images", controller._registered)
114 self.assertIn("get_all_images", controller._registered)
115
116 def test_request(self):
117 api = create_api()
118 req = webob.Request.blank('/rpc')
119 req.method = 'POST'
120 req.body = jsonutils.dump_as_bytes([
121 {
122 "command": "get_images",
123 "kwargs": {"keyword": 1}
124 }
125 ])
126 res = req.get_response(api)
127 returned = jsonutils.loads(res.body)
128 self.assertIsInstance(returned, list)
129 self.assertEqual(1, returned[0])
130
131 def test_request_exc(self):
132 api = create_api()
133 req = webob.Request.blank('/rpc')
134 req.method = 'POST'
135 req.body = jsonutils.dump_as_bytes([
136 {
137 "command": "get_all_images",
138 "kwargs": {"keyword": 1}
139 }
140 ])
141
142 # Sending non-accepted keyword
143 # to get_all_images method
144 res = req.get_response(api)
145 returned = jsonutils.loads(res.body)
146 self.assertIn("_error", returned[0])
147
148 def test_rpc_errors(self):
149 api = create_api()
150 req = webob.Request.blank('/rpc')
151 req.method = 'POST'
152 req.content_type = 'application/json'
153
154 # Body is not a list, it should fail
155 req.body = jsonutils.dump_as_bytes({})
156 res = req.get_response(api)
157 self.assertEqual(http.BAD_REQUEST, res.status_int)
158
159 # cmd is not dict, it should fail.
160 req.body = jsonutils.dump_as_bytes([None])
161 res = req.get_response(api)
162 self.assertEqual(http.BAD_REQUEST, res.status_int)
163
164 # No command key, it should fail.
165 req.body = jsonutils.dump_as_bytes([{}])
166 res = req.get_response(api)
167 self.assertEqual(http.BAD_REQUEST, res.status_int)
168
169 # kwargs not dict, it should fail.
170 req.body = jsonutils.dump_as_bytes([{"command": "test", "kwargs": 2}])
171 res = req.get_response(api)
172 self.assertEqual(http.BAD_REQUEST, res.status_int)
173
174 # Command does not exist, it should fail.
175 req.body = jsonutils.dump_as_bytes([{"command": "test"}])
176 res = req.get_response(api)
177 self.assertEqual(http.NOT_FOUND, res.status_int)
178
179 def test_rpc_exception_propagation(self):
180 api = create_api()
181 req = webob.Request.blank('/rpc')
182 req.method = 'POST'
183 req.content_type = 'application/json'
184
185 req.body = jsonutils.dump_as_bytes([{"command": "raise_value_error"}])
186 res = req.get_response(api)
187 self.assertEqual(http.OK, res.status_int)
188
189 returned = jsonutils.loads(res.body)[0]
190 err_cls = 'builtins.ValueError' if six.PY3 else 'exceptions.ValueError'
191 self.assertEqual(err_cls, returned['_error']['cls'])
192
193 req.body = jsonutils.dump_as_bytes([{"command": "raise_weird_error"}])
194 res = req.get_response(api)
195 self.assertEqual(http.OK, res.status_int)
196
197 returned = jsonutils.loads(res.body)[0]
198 self.assertEqual('glance.common.exception.RPCError',
199 returned['_error']['cls'])
200
201
202 class TestRPCClient(base.IsolatedUnitTest):
203
204 def setUp(self):
205 super(TestRPCClient, self).setUp()
206 self.api = create_api()
207 self.client = rpc.RPCClient(host="http://127.0.0.1:9191")
208 self.client._do_request = self.fake_request
209
210 def fake_request(self, method, url, body, headers):
211 req = webob.Request.blank(url.path)
212 body = encodeutils.to_utf8(body)
213 req.body = body
214 req.method = method
215
216 webob_res = req.get_response(self.api)
217 return test_utils.FakeHTTPResponse(status=webob_res.status_int,
218 headers=webob_res.headers,
219 data=webob_res.body)
220
221 def test_method_proxy(self):
222 proxy = self.client.some_method
223 self.assertIn("method_proxy", str(proxy))
224
225 def test_bulk_request(self):
226 commands = [{"command": "get_images", 'kwargs': {'keyword': True}},
227 {"command": "get_all_images"}]
228
229 res = self.client.bulk_request(commands)
230 self.assertEqual(2, len(res))
231 self.assertTrue(res[0])
232 self.assertFalse(res[1])
233
234 def test_exception_raise(self):
235 try:
236 self.client.raise_value_error()
237 self.fail("Exception not raised")
238 except ValueError as exc:
239 self.assertEqual("Yep, Just like that!", str(exc))
240
241 def test_rpc_exception(self):
242 try:
243 self.client.raise_weird_error()
244 self.fail("Exception not raised")
245 except exception.RPCError:
246 pass
247
248 def test_non_str_or_dict_response(self):
249 rst = self.client.count_images(images=[1, 2, 3, 4])
250 self.assertEqual(4, rst)
251 self.assertIsInstance(rst, int)
252
253
254 class TestRPCJSONSerializer(test_utils.BaseTestCase):
255
256 def test_to_json(self):
257 fixture = {"key": "value"}
258 expected = b'{"key": "value"}'
259 actual = rpc.RPCJSONSerializer().to_json(fixture)
260 self.assertEqual(expected, actual)
261
262 def test_to_json_with_date_format_value(self):
263 fixture = {"date": datetime.datetime(1900, 3, 8, 2)}
264 expected = {"date": {"_value": "1900-03-08T02:00:00",
265 "_type": "datetime"}}
266 actual = rpc.RPCJSONSerializer().to_json(fixture)
267 actual = jsonutils.loads(actual)
268 for k in expected['date']:
269 self.assertEqual(expected['date'][k], actual['date'][k])
270
271 def test_to_json_with_more_deep_format(self):
272 fixture = {"is_public": True, "name": [{"name1": "test"}]}
273 expected = {"is_public": True, "name": [{"name1": "test"}]}
274 actual = rpc.RPCJSONSerializer().to_json(fixture)
276 actual = jsonutils.loads(actual)
277 for k in expected:
278 self.assertEqual(expected[k], actual[k])
279
280 def test_default(self):
281 fixture = {"key": "value"}
282 response = webob.Response()
283 rpc.RPCJSONSerializer().default(response, fixture)
284 self.assertEqual(http.OK, response.status_int)
285 content_types = [h for h in response.headerlist
286 if h[0] == 'Content-Type']
287 self.assertEqual(1, len(content_types))
288 self.assertEqual('application/json', response.content_type)
289 self.assertEqual(b'{"key": "value"}', response.body)
290
291
292 class TestRPCJSONDeserializer(test_utils.BaseTestCase):
293
294 def test_has_body_no_content_length(self):
295 request = wsgi.Request.blank('/')
296 request.method = 'POST'
297 request.body = b'asdf'
298 request.headers.pop('Content-Length')
299 self.assertFalse(rpc.RPCJSONDeserializer().has_body(request))
300
301 def test_has_body_zero_content_length(self):
302 request = wsgi.Request.blank('/')
303 request.method = 'POST'
304 request.body = b'asdf'
305 request.headers['Content-Length'] = 0
306 self.assertFalse(rpc.RPCJSONDeserializer().has_body(request))
307
308 def test_has_body_has_content_length(self):
309 request = wsgi.Request.blank('/')
310 request.method = 'POST'
311 request.body = b'asdf'
312 self.assertIn('Content-Length', request.headers)
313 self.assertTrue(rpc.RPCJSONDeserializer().has_body(request))
314
315 def test_no_body_no_content_length(self):
316 request = wsgi.Request.blank('/')
317 self.assertFalse(rpc.RPCJSONDeserializer().has_body(request))
318
319 def test_from_json(self):
320 fixture = '{"key": "value"}'
321 expected = {"key": "value"}
322 actual = rpc.RPCJSONDeserializer().from_json(fixture)
323 self.assertEqual(expected, actual)
324
325 def test_from_json_malformed(self):
326 fixture = 'kjasdklfjsklajf'
327 self.assertRaises(webob.exc.HTTPBadRequest,
328 rpc.RPCJSONDeserializer().from_json, fixture)
329
330 def test_default_no_body(self):
331 request = wsgi.Request.blank('/')
332 actual = rpc.RPCJSONDeserializer().default(request)
333 expected = {}
334 self.assertEqual(expected, actual)
335
336 def test_default_with_body(self):
337 request = wsgi.Request.blank('/')
338 request.method = 'POST'
339 request.body = b'{"key": "value"}'
340 actual = rpc.RPCJSONDeserializer().default(request)
341 expected = {"body": {"key": "value"}}
342 self.assertEqual(expected, actual)
343
344 def test_has_body_has_transfer_encoding(self):
345 request = wsgi.Request.blank('/')
346 request.method = 'POST'
347 request.body = b'fake_body'
348 request.headers['transfer-encoding'] = ''
349 self.assertIn('transfer-encoding', request.headers)
350 self.assertTrue(rpc.RPCJSONDeserializer().has_body(request))
351
352 def test_to_json_with_date_format_value(self):
353 fixture = ('{"date": {"_value": "1900-03-08T02:00:00.000000",'
354 '"_type": "datetime"}}')
355 expected = {"date": datetime.datetime(1900, 3, 8, 2)}
356 actual = rpc.RPCJSONDeserializer().from_json(fixture)
357 self.assertEqual(expected, actual)
1313 # License for the specific language governing permissions and limitations
1414 # under the License.
1515
16 import glance_store as store
1716 import tempfile
1817 from unittest import mock
1918
19 import glance_store as store
20 from glance_store._drivers import cinder
2021 from oslo_config import cfg
2122 from oslo_log import log as logging
2223 import six
2526 from glance.common import exception
2627 from glance.common import store_utils
2728 from glance.common import utils
29 from glance.tests.unit import base
2830 from glance.tests import utils as test_utils
2931
3032
4042 image = mock.Mock()
4143 image_repo = mock.Mock()
4244 image_repo.save = mock.Mock()
45 context = mock.Mock()
4346 locations = [{
4447 'url': 'rbd://aaaaaaaa/images/id',
4548 'metadata': metadata
4851 with mock.patch.object(
4952 store_utils, '_get_store_id_from_uri') as mock_get_store_id:
5053 mock_get_store_id.return_value = store_id
51 store_utils.update_store_in_locations(image, image_repo)
54 store_utils.update_store_in_locations(context, image, image_repo)
5255 self.assertEqual(image.locations[0]['metadata'].get(
5356 'store'), expected)
5457 self.assertEqual(store_id_call_count, mock_get_store_id.call_count)
8992 self.config(enabled_backends=enabled_backends)
9093 self._test_update_store_in_location({}, None, None,
9194 save_call_count=0)
95
96
97 class TestCinderStoreUtils(base.MultiStoreClearingUnitTest):
98 """Test glance.common.store_utils module for cinder multistore"""
99
100 @mock.patch.object(cinder.Store, 'is_image_associated_with_store')
101 @mock.patch.object(cinder.Store, 'url_prefix',
102 new_callable=mock.PropertyMock)
103 def _test_update_cinder_store_in_location(self, mock_url_prefix,
104 mock_associate_store,
105 is_valid=True):
106 volume_id = 'db457a25-8f16-4b2c-a644-eae8d17fe224'
107 store_id = 'fast-cinder'
108 expected = 'fast-cinder'
109 image = mock.Mock()
110 image_repo = mock.Mock()
111 image_repo.save = mock.Mock()
112 context = mock.Mock()
113 mock_associate_store.return_value = is_valid
114 locations = [{
115 'url': 'cinder://%s' % volume_id,
116 'metadata': {}
117 }]
118 mock_url_prefix.return_value = 'cinder://%s' % store_id
119 image.locations = locations
120 store_utils.update_store_in_locations(context, image, image_repo)
121
122 if is_valid:
123 # This is the case where we found an image that has an
124 # old-style URL which does not include the store name,
125 # but for which we know the corresponding store that
126 # refers to the volume type that backs it. We expect that
127 # the URL should be updated to point to the store/volume from
128 # just a naked pointer to the volume, as was the old
129 # format i.e. this is the case when store is valid and location
130 # url, metadata are updated and image_repo.save is called
131 expected_url = mock_url_prefix.return_value + '/' + volume_id
132 self.assertEqual(expected_url, image.locations[0].get('url'))
133 self.assertEqual(expected, image.locations[0]['metadata'].get(
134 'store'))
135 self.assertEqual(1, image_repo.save.call_count)
136 else:
137 # Here, we've got an image backed by a volume which does
138 # not have a corresponding store specifying the volume_type.
139 # Expect that we leave these alone and do not touch the
140 # location URL since we cannot update it with a valid store i.e.
141 # this is the case when store is invalid and location url,
142 # metadata are not updated and image_repo.save is not called
143 self.assertEqual(locations[0]['url'],
144 image.locations[0].get('url'))
145 self.assertEqual({}, image.locations[0]['metadata'])
146 self.assertEqual(0, image_repo.save.call_count)
147
148 def test_update_cinder_store_location_valid_type(self):
149 self._test_update_cinder_store_in_location()
150
151 def test_update_cinder_store_location_invalid_type(self):
152 self._test_update_cinder_store_in_location(is_valid=False)
92153
93154
94155 class TestUtils(test_utils.BaseTestCase):
0 # -*- coding: utf-8 -*-
1 # Copyright 2020, Red Hat, Inc.
2 # All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License"); you may
5 # not use this file except in compliance with the License. You may obtain
6 # a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13 # License for the specific language governing permissions and limitations
14 # under the License.
15
16 from unittest import mock
17
18 from glance.api import common
19 import glance.async_
20 from glance.common import wsgi_app
21 from glance.tests import utils as test_utils
22
23
24 class TestWsgiAppInit(test_utils.BaseTestCase):
25 @mock.patch('glance.common.config.load_paste_app')
26 @mock.patch('glance.async_.set_threadpool_model')
27 @mock.patch('glance.common.wsgi_app._get_config_files')
28 def test_wsgi_init_sets_thread_settings(self, mock_config_files,
29 mock_set_model,
30 mock_load):
31 mock_config_files.return_value = []
32 self.config(task_pool_threads=123, group='wsgi')
33 common.DEFAULT_POOL_SIZE = 1024
34 wsgi_app.init_app()
35 # Make sure we declared the system threadpool model as native
36 mock_set_model.assert_called_once_with('native')
37 # Make sure we set the default pool size
38 self.assertEqual(123, common.DEFAULT_POOL_SIZE)
39 mock_load.assert_called_once_with('glance-api')
40
41 @mock.patch('atexit.register')
42 @mock.patch('glance.common.config.load_paste_app')
43 @mock.patch('glance.async_.set_threadpool_model')
44 @mock.patch('glance.common.wsgi_app._get_config_files')
45 def test_wsgi_init_registers_exit_handler(self, mock_config_files,
46 mock_set_model,
47 mock_load, mock_exit):
48 mock_config_files.return_value = []
49 wsgi_app.init_app()
50 mock_exit.assert_called_once_with(wsgi_app.drain_threadpools)
51
52 @mock.patch('glance.async_._THREADPOOL_MODEL', new=None)
53 def test_drain_threadpools(self):
54 # Initialize the thread pool model and tasks_pool, like API
55 # under WSGI would, and so we have a pointer to that exact
56 # pool object in the cache
57 glance.async_.set_threadpool_model('native')
58 model = common.get_thread_pool('tasks_pool')
59
60 with mock.patch.object(model.pool, 'shutdown') as mock_shutdown:
61 wsgi_app.drain_threadpools()
62 # Make sure that shutdown() was called on the tasks_pool
63 # ThreadPoolExecutor
64 mock_shutdown.assert_called_once_with()
1111
1212 """Fixtures for Glance unit tests."""
1313 # NOTE(mriedem): This is needed for importing from fixtures.
14 from __future__ import absolute_import
1514
1615 import logging as std_logging
1716 import os
1111 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
14 from mock import patch
14 from unittest.mock import patch
15
1516 from oslo_policy import policy
1617 # NOTE(jokke): simplified transition to py3, behaves like py2 xrange
1718 from six.moves import http_client as http
100101 self.context = context.RequestContext(is_admin=True)
101102 self.request = webob.Request.blank('')
102103 self.request.context = self.context
103
104 def test_checksum_v1_header(self):
105 cache_filter = ChecksumTestCacheFilter()
106 headers = {"x-image-meta-checksum": "1234567890"}
107 resp = webob.Response(request=self.request, headers=headers)
108 cache_filter._process_GET_response(resp, None)
109
110 self.assertEqual("1234567890", cache_filter.cache.image_checksum)
111104
112105 def test_checksum_v2_header(self):
113106 cache_filter = ChecksumTestCacheFilter()
275268 context has not 'download_image' role.
276269 """
277270
278 def fake_get_v1_image_metadata(*args, **kwargs):
271 def fake_get_v2_image_metadata(*args, **kwargs):
279272 return {'status': 'active', 'properties': {}}
280273
281274 image_id = 'test1'
282 request = webob.Request.blank('/v1/images/%s' % image_id)
275 request = webob.Request.blank('/v2/images/%s/file' % image_id)
283276 request.context = context.RequestContext()
284277
285278 cache_filter = ProcessRequestTestCacheFilter()
286 cache_filter._get_v1_image_metadata = fake_get_v1_image_metadata
279 cache_filter._get_v2_image_metadata = fake_get_v2_image_metadata
287280
288281 enforcer = self._enforcer_from_rules({'download_image': '!'})
289282 cache_filter.policy = enforcer
1515 import testtools
1616 import webob
1717
18 from glance.api import cached_images
1918 from glance.api import policy
19 from glance.api.v2 import cached_images
2020 from glance.common import exception
2121 from glance import image_cache
2222
7070 return 1
7171
7272
73 class FakeController(cached_images.Controller):
73 class FakeController(cached_images.CacheController):
7474 def __init__(self):
7575 self.cache = FakeCache()
7676 self.policy = FakePolicyEnforcer()
7979 class TestController(testtools.TestCase):
8080 def test_initialization_without_conf(self):
8181 self.assertRaises(exception.BadDriverConfiguration,
82 cached_images.Controller)
82 cached_images.CacheController)
8383
8484
8585 class TestCachedImages(testtools.TestCase):
170170 project_domain_id="project-domain")
171171 self.assertEqual('user tenant domain user-domain project-domain',
172172 ctx.to_dict()["user_identity"])
173
174 def test_elevated(self):
175 """Make sure we get a whole admin-capable context from elevated()."""
176 ctx = context.RequestContext(service_catalog=['foo'],
177 user_id='dan',
178 project_id='openstack',
179 roles=['member'])
180 admin = ctx.elevated()
181 self.assertEqual('dan', admin.user_id)
182 self.assertEqual('openstack', admin.project_id)
183 self.assertEqual(sorted(['member', 'admin']),
184 sorted(admin.roles))
185 self.assertEqual(['foo'], admin.service_catalog)
186 self.assertTrue(admin.is_admin)
187
188 def test_elevated_again(self):
189 """Make sure a second elevation looks the same."""
190 ctx = context.RequestContext(service_catalog=['foo'],
191 user_id='dan',
192 project_id='openstack',
193 roles=['member'])
194 admin = ctx.elevated()
195 admin = admin.elevated()
196 self.assertEqual('dan', admin.user_id)
197 self.assertEqual('openstack', admin.project_id)
198 self.assertEqual(sorted(['member', 'admin']),
199 sorted(admin.roles))
200 self.assertEqual(['foo'], admin.service_catalog)
201 self.assertTrue(admin.is_admin)
3232
3333 CONF = cfg.CONF
3434 CONF.import_opt('metadata_encryption_key', 'glance.common.config')
35
36
37 @mock.patch('oslo_utils.importutils.import_module')
38 class TestDbUtilities(test_utils.BaseTestCase):
39 def setUp(self):
40 super(TestDbUtilities, self).setUp()
41 self.config(data_api='silly pants')
42 self.api = mock.Mock()
43
44 def test_get_api_calls_configure_if_present(self, import_module):
45 import_module.return_value = self.api
46 self.assertEqual(glance.db.get_api(), self.api)
47 import_module.assert_called_once_with('silly pants')
48 self.api.configure.assert_called_once_with()
49
50 def test_get_api_skips_configure_if_missing(self, import_module):
51 import_module.return_value = self.api
52 del self.api.configure
53 self.assertEqual(glance.db.get_api(), self.api)
54 import_module.assert_called_once_with('silly pants')
55 self.assertFalse(hasattr(self.api, 'configure'))
56
57 def test_get_api_calls_for_v1_api(self, import_module):
58 api = glance.db.get_api(v1_mode=True)
59 self.assertNotEqual(api, self.api)
60 import_module.assert_called_once_with('glance.db.sqlalchemy.api')
61 api.configure.assert_called_once_with()
6235
6336
6437 UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
394367 image)
395368 self.assertIn(fake_uuid, encodeutils.exception_to_unicode(exc))
396369
370 def test_save_excludes_atomic_props(self):
371 fake_uuid = str(uuid.uuid4())
372 image = self.image_repo.get(UUID1)
373
374 # Try to set the property normally
375 image.extra_properties['os_glance_import_task'] = fake_uuid
376 self.image_repo.save(image)
377
378 # Expect it was ignored
379 image = self.image_repo.get(UUID1)
380 self.assertNotIn('os_glance_import_task', image.extra_properties)
381
382 # Set the property atomically
383 self.image_repo.set_property_atomic(image,
384 'os_glance_import_task', fake_uuid)
385 # Expect it is set
386 image = self.image_repo.get(UUID1)
387 self.assertEqual(fake_uuid,
388 image.extra_properties['os_glance_import_task'])
389
390 # Try to clobber it
391 image.extra_properties['os_glance_import_task'] = 'foo'
392 self.image_repo.save(image)
393
394 # Expect it is unchanged
395 image = self.image_repo.get(UUID1)
396 self.assertEqual(fake_uuid,
397 image.extra_properties['os_glance_import_task'])
398
399 # Try to delete it
400 del image.extra_properties['os_glance_import_task']
401 self.image_repo.save(image)
402
403 # Expect it is still present and set accordingly
404 image = self.image_repo.get(UUID1)
405 self.assertEqual(fake_uuid,
406 image.extra_properties['os_glance_import_task'])
407
397408 def test_remove_image(self):
398409 image = self.image_repo.get(UUID1)
399410 previous_update_time = image.updated_at
435446 self.db.image_restore,
436447 self.context,
437448 image_id)
449
450 def test_image_set_property_atomic(self):
451 image_id = uuid.uuid4()
452 image = _db_fixture(image_id, name='test')
453
454 self.assertRaises(exception.ImageNotFound,
455 self.db.image_set_property_atomic,
456 image_id, 'foo', 'bar')
457
458 self.db.image_create(self.context, image)
459 self.db.image_set_property_atomic(image_id, 'foo', 'bar')
460 image = self.db.image_get(self.context, image_id)
461 self.assertEqual('foo', image['properties'][0]['name'])
462 self.assertEqual('bar', image['properties'][0]['value'])
463
464 def test_set_property_atomic(self):
465 image = self.image_repo.get(UUID1)
466 self.image_repo.set_property_atomic(image, 'foo', 'bar')
467 image = self.image_repo.get(image.image_id)
468 self.assertEqual({'foo': 'bar'}, image.extra_properties)
469
470 def test_image_delete_property_atomic(self):
471 image_id = uuid.uuid4()
472 image = _db_fixture(image_id, name='test')
473
474 self.assertRaises(exception.NotFound,
475 self.db.image_delete_property_atomic,
476 image_id, 'foo', 'bar')
477 self.db.image_create(self.context, image)
478 self.db.image_set_property_atomic(image_id, 'foo', 'bar')
479 self.db.image_delete_property_atomic(image_id, 'foo', 'bar')
480 image = self.image_repo.get(image_id)
481 self.assertEqual({}, image.extra_properties)
482
483 def test_delete_property_atomic(self):
484 image = self.image_repo.get(UUID1)
485 self.image_repo.set_property_atomic(image, 'foo', 'bar')
486 image = self.image_repo.get(image.image_id)
487 self.image_repo.delete_property_atomic(image, 'foo', 'bar')
488 image = self.image_repo.get(image.image_id)
489 self.assertEqual({}, image.extra_properties)
438490
439491
440492 class TestEncryptedLocations(test_utils.BaseTestCase):
545545 mock_executor.assert_called_once_with(context,
546546 self.task_repo,
547547 self.image_repo,
548 self.image_factory)
548 self.image_factory,
549 admin_repo=None)
550
551 def test_new_task_executor_with_admin(self):
552 admin_repo = mock.MagicMock()
553 task_executor_factory = domain.TaskExecutorFactory(
554 self.task_repo,
555 self.image_repo,
556 self.image_factory,
557 admin_repo=admin_repo)
558 context = mock.Mock()
559 with mock.patch.object(oslo_utils.importutils,
560 'import_class') as mock_import_class:
561 mock_executor = mock.Mock()
562 mock_import_class.return_value = mock_executor
563 task_executor_factory.new_task_executor(context)
564
565 mock_executor.assert_called_once_with(context,
566 self.task_repo,
567 self.image_repo,
568 self.image_factory,
569 admin_repo=admin_repo)
549570
550571 def test_new_task_executor_error(self):
551572 task_executor_factory = domain.TaskExecutorFactory(self.task_repo,
4949 add = fake_method
5050 save = fake_method
5151 remove = fake_method
52 set_property_atomic = fake_method
53 delete_property_atomic = fake_method
5254
5355
5456 class TestProxyRepoPlain(test_utils.BaseTestCase):
7981
8082 def test_remove(self):
8183 self._test_method('remove', None, 'flying')
84
85 def test_set_property_atomic(self):
86 image = mock.MagicMock()
87 image.image_id = 'foo'
88 self._test_method('set_property_atomic', None, image, 'foo', 'bar')
89
90 def test_set_property_nonimage(self):
91 self.assertRaises(
92 AssertionError,
93 self._test_method,
94 'set_property_atomic', None, 'notimage', 'foo', 'bar')
95
96 def test_delete_property_atomic(self):
97 image = mock.MagicMock()
98 image.image_id = 'foo'
99 self._test_method('delete_property_atomic', None, image, 'foo', 'bar')
100
101 def test_delete_property_nonimage(self):
102 self.assertRaises(
103 AssertionError,
104 self._test_method,
105 'delete_property_atomic', None, 'notimage', 'foo', 'bar')
82106
83107
84108 class TestProxyRepoWrapping(test_utils.BaseTestCase):
0 # Copyright 2020 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from unittest import mock
16
17 from glance import gateway
18 import glance.tests.utils as test_utils
19
20
21 class TestGateway(test_utils.BaseTestCase):
22 def setUp(self):
23 super(TestGateway, self).setUp()
24 self.gateway = gateway.Gateway()
25 self.context = mock.sentinel.context
26
27 @mock.patch('glance.domain.TaskExecutorFactory')
28 def test_get_task_executor_factory(self, mock_factory):
29 @mock.patch.object(self.gateway, 'get_task_repo')
30 @mock.patch.object(self.gateway, 'get_repo')
31 @mock.patch.object(self.gateway, 'get_image_factory')
32 def _test(mock_gif, mock_gr, mock_gtr):
33 self.gateway.get_task_executor_factory(self.context)
34 mock_gtr.assert_called_once_with(self.context)
35 mock_gr.assert_called_once_with(self.context)
36 mock_gif.assert_called_once_with(self.context)
37 mock_factory.assert_called_once_with(
38 mock_gtr.return_value,
39 mock_gr.return_value,
40 mock_gif.return_value,
41 admin_repo=None)
42
43 _test()
44
45 @mock.patch('glance.domain.TaskExecutorFactory')
46 def test_get_task_executor_factory_with_admin(self, mock_factory):
47 @mock.patch.object(self.gateway, 'get_task_repo')
48 @mock.patch.object(self.gateway, 'get_repo')
49 @mock.patch.object(self.gateway, 'get_image_factory')
50 def _test(mock_gif, mock_gr, mock_gtr):
51 mock_gr.side_effect = [mock.sentinel.image_repo,
52 mock.sentinel.admin_repo]
53 self.gateway.get_task_executor_factory(
54 self.context,
55 admin_context=mock.sentinel.admin_context)
56 mock_gtr.assert_called_once_with(self.context)
57 mock_gr.assert_has_calls([
58 mock.call(self.context),
59 mock.call(mock.sentinel.admin_context),
60 ])
61 mock_gif.assert_called_once_with(self.context)
62 mock_factory.assert_called_once_with(
63 mock_gtr.return_value,
64 mock.sentinel.image_repo,
65 mock_gif.return_value,
66 admin_repo=mock.sentinel.admin_repo)
67
68 _test()
1010 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1111 # License for the specific language governing permissions and limitations
1212 # under the License.
13
14 from __future__ import absolute_import
1513 from unittest import mock
1614
1715 import copy
1111 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
14
15 from __future__ import absolute_import
1614
1715 from contextlib import contextmanager
1816 import datetime
1111 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
14
15 from __future__ import absolute_import
1614 from unittest import mock
1715
1816 import fixtures
1313 # License for the specific language governing permissions and limitations
1414 # under the License.
1515
16 # TODO(smcginnis) update this once six has support for collections.abc
17 # (https://github.com/benjaminp/six/pull/241) or clean up once we drop py2.7.
18 try:
19 from collections.abc import Iterable
20 except ImportError:
21 from collections import Iterable
16 from collections import abc
2217 from unittest import mock
2318
2419 import hashlib
2520 import os.path
26
2721 import oslo_config.cfg
2822
2923 import glance.api.policy
3630 UUID1 = 'c80a1a6c-bd1f-41c5-90ee-81afedb1d58d'
3731
3832
39 class IterableMock(mock.Mock, Iterable):
33 class IterableMock(mock.Mock, abc.Iterable):
4034
4135 def __iter__(self):
4236 while False:
1313 # under the License.
1414 import copy
1515 from unittest import mock
16 from unittest.mock import patch
1617 import uuid
1718
18 from mock import patch
1919 from oslo_utils import encodeutils
2020 from oslo_utils import units
2121
1313 # under the License.
1414
1515 from unittest import mock
16 from unittest.mock import patch
1617 import uuid
1718
1819 import glance_store
19 from mock import patch
2020 from oslo_config import cfg
2121 # NOTE(jokke): simplified transition to py3, behaves like py2 xrange
2222 from six.moves import range
5353 self.os_hash_algo = None
5454 self.os_hash_value = None
5555 self.checksum = None
56 self.disk_format = 'raw'
57 self.container_format = 'bare'
58 self.virtual_size = 0
5659
5760 def delete(self):
5861 self.status = 'deleted'
109112 }
110113 image_stub = ImageStub(UUID2, status='queued', locations=[],
111114 extra_properties=extra_properties)
115 image_stub.disk_format = 'iso'
112116 image = glance.location.ImageProxy(image_stub, context,
113117 self.store_api, self.store_utils)
114118 with mock.patch.object(image, "_upload_to_store") as mloc:
236240 self.mock_object(unit_test_utils.FakeStoreAPI, 'get_from_backend',
237241 fake_get_from_backend)
238242 # This time, image1.get_data() returns the data wrapped in a
239 # LimitingReader|CooperativeReader pipeline, so peeking under
240 # the hood of those objects to get at the underlying string.
241 self.assertEqual('ZZZ', image1.get_data().data.fd)
243 # LimitingReader|CooperativeReader|InfoWrapper pipeline, so
244 # peeking under the hood of those objects to get at the
245 # underlying string.
246 self.assertEqual('ZZZ', image1.get_data().data.fd._source)
242247
243248 image1.locations.pop(0)
244249 self.assertEqual(1, len(image1.locations))
247252 def test_image_set_data(self):
248253 context = glance.context.RequestContext(user=USER1)
249254 image_stub = ImageStub(UUID2, status='queued', locations=[])
250 image = glance.location.ImageProxy(image_stub, context,
251 self.store_api, self.store_utils)
252 image.set_data('YYYY', 4)
255 # We are going to pass an iterable data source, so use the
256 # FakeStoreAPIReader that actually reads from that data
257 store_api = unit_test_utils.FakeStoreAPIReader()
258 image = glance.location.ImageProxy(image_stub, context,
259 store_api, self.store_utils)
260 image.set_data(iter(['YYYY']), 4)
253261 self.assertEqual(4, image.size)
254262 # NOTE(markwash): FakeStore returns image_id for location
255263 self.assertEqual(UUID2, image.locations[0]['url'])
256264 self.assertEqual('Z', image.checksum)
257265 self.assertEqual('active', image.status)
266 self.assertEqual(4, image.virtual_size)
267
268 def test_image_set_data_inspector_no_match(self):
269 context = glance.context.RequestContext(user=USER1)
270 image_stub = ImageStub(UUID2, status='queued', locations=[])
271 image_stub.disk_format = 'qcow2'
272 # We are going to pass an iterable data source, so use the
273 # FakeStoreAPIReader that actually reads from that data
274 store_api = unit_test_utils.FakeStoreAPIReader()
275 image = glance.location.ImageProxy(image_stub, context,
276 store_api, self.store_utils)
277 image.set_data(iter(['YYYY']), 4)
278 self.assertEqual(4, image.size)
279 # NOTE(markwash): FakeStore returns image_id for location
280 self.assertEqual(UUID2, image.locations[0]['url'])
281 self.assertEqual('Z', image.checksum)
282 self.assertEqual('active', image.status)
283 self.assertEqual(0, image.virtual_size)
284
285 @mock.patch('glance.common.format_inspector.get_inspector')
286 def test_image_set_data_inspector_not_needed(self, mock_gi):
287 context = glance.context.RequestContext(user=USER1)
288 image_stub = ImageStub(UUID2, status='queued', locations=[])
289 image_stub.virtual_size = 123
290 image_stub.disk_format = 'qcow2'
291 # We are going to pass an iterable data source, so use the
292 # FakeStoreAPIReader that actually reads from that data
293 store_api = unit_test_utils.FakeStoreAPIReader()
294 image = glance.location.ImageProxy(image_stub, context,
295 store_api, self.store_utils)
296 image.set_data(iter(['YYYY']), 4)
297 self.assertEqual(4, image.size)
298 # NOTE(markwash): FakeStore returns image_id for location
299 self.assertEqual(UUID2, image.locations[0]['url'])
300 self.assertEqual('Z', image.checksum)
301 self.assertEqual('active', image.status)
302 self.assertEqual(123, image.virtual_size)
303 # If the image already had virtual_size set (i.e. we're setting
304 # a new location), we should not re-calculate the value.
305 mock_gi.assert_not_called()
258306
259307 def test_image_set_data_location_metadata(self):
260308 context = glance.context.RequestContext(user=USER1)
280328 def test_image_set_data_unknown_size(self):
281329 context = glance.context.RequestContext(user=USER1)
282330 image_stub = ImageStub(UUID2, status='queued', locations=[])
331 image_stub.disk_format = 'iso'
283332 image = glance.location.ImageProxy(image_stub, context,
284333 self.store_api, self.store_utils)
285334 image.set_data('YYYY', None)
311360 self.store_api, self.store_utils)
312361 image.set_data('YYYY', 4)
313362 self.assertEqual('active', image.status)
314 mock_log.info.assert_called_once_with(
363 mock_log.info.assert_any_call(
315364 u'Successfully verified signature for image %s',
316365 UUID2)
317366
0 # Copyright 2020 Red Hat, Inc
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from glance.tests import utils as test_utils
16
17
18 class TestFakeData(test_utils.BaseTestCase):
19 def test_via_read(self):
20 fd = test_utils.FakeData(1024)
21 data = []
22 for i in range(0, 1025, 256):
23 chunk = fd.read(256)
24 data.append(chunk)
25 if not chunk:
26 break
27
28 self.assertEqual(5, len(data))
29 # Make sure we got a zero-length final read
30 self.assertEqual(b'', data[-1])
31 # Make sure we only got 1024 bytes
32 self.assertEqual(1024, len(b''.join(data)))
33
34 def test_via_iter(self):
35 data = b''.join(list(test_utils.FakeData(1024)))
36 self.assertEqual(1024, len(data))
269269 pass
270270
271271
272 class FakeStoreAPIReader(FakeStoreAPI):
273 """A store API that actually reads from the data pipe."""
274
275 def add_to_backend_with_multihash(self, conf, image_id, data, size,
276 hashing_algo, scheme=None, context=None,
277 verifier=None):
278 for chunk in data:
279 pass
280
281 return super(FakeStoreAPIReader, self).add_to_backend_with_multihash(
282 conf, image_id, data, size, hashing_algo,
283 scheme=scheme, context=context, verifier=verifier)
284
285
272286 class FakePolicyEnforcer(object):
273287 def __init__(self, *_args, **kwargs):
274288 self.rules = {}
3838 req)
3939
4040 def test_get_stores(self):
41 available_stores = ['cheap', 'fast', 'readonly_store']
41 available_stores = ['cheap', 'fast', 'readonly_store', 'fast-cinder']
4242 req = unit_test_utils.get_fake_request()
4343 output = self.controller.get_stores(req)
4444 self.assertIn('stores', output)
4747 self.assertIn(stores['id'], available_stores)
4848
4949 def test_get_stores_read_only_store(self):
50 available_stores = ['cheap', 'fast', 'readonly_store']
50 available_stores = ['cheap', 'fast', 'readonly_store', 'fast-cinder']
5151 req = unit_test_utils.get_fake_request()
5252 output = self.controller.get_stores(req)
5353 self.assertIn('stores', output)
1313 # under the License.
1414
1515 import datetime
16 import eventlet
1716 import hashlib
1817 import os
1918 from unittest import mock
2322 import glance_store as store
2423 from oslo_config import cfg
2524 from oslo_serialization import jsonutils
25 from oslo_utils import fixture
2626 import six
2727 from six.moves import http_client as http
2828 # NOTE(jokke): simplified transition to py3, behaves like py2 xrange
3939 from glance.tests.unit import base
4040 from glance.tests.unit.keymgr import fake as fake_keymgr
4141 import glance.tests.unit.utils as unit_test_utils
42 from glance.tests.unit.v2 import test_tasks_resource
4243 import glance.tests.utils as test_utils
4344
4445 CONF = cfg.CONF
139140 self.disk_format = disk_format
140141 self.locations = locations
141142 self.owner = unit_test_utils.TENANT1
143 self.created_at = ''
144 self.updated_at = ''
145 self.min_disk = ''
146 self.min_ram = ''
147 self.protected = False
148 self.checksum = ''
149 self.os_hash_algo = ''
150 self.os_hash_value = ''
151 self.size = 0
152 self.virtual_size = 0
153 self.visibility = 'public'
154 self.os_hidden = False
155 self.name = 'foo'
156 self.tags = []
157 self.extra_properties = {}
158
159 # NOTE(danms): This fixture looks more like the db object than
160 # the proxy model. This needs fixing all through the tests
161 # below.
162 self.image_id = self.id
142163
143164
144165 class TestImagesController(base.IsolatedUnitTest):
725746 self.controller.import_image, request, UUID4,
726747 {'method': {'name': 'glance-direct'}})
727748
728 def test_image_import_raises_bad_request(self):
749 @mock.patch('glance.db.simple.api.image_set_property_atomic')
750 @mock.patch('glance.api.common.get_thread_pool')
751 def test_image_import_raises_bad_request(self, mock_gpt, mock_spa):
729752 request = unit_test_utils.get_fake_request()
730753 with mock.patch.object(
731754 glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
732755 mock_get.return_value = FakeImage(status='uploading')
733756 # NOTE(abhishekk): Due to
734757 # https://bugs.launchpad.net/glance/+bug/1712463 taskflow is not
735 # executing. Once it is fixed instead of mocking spawn_n method
758 # executing. Once it is fixed instead of mocking spawn method
736759 # we should mock execute method of _ImportToStore task.
737 with mock.patch.object(eventlet.GreenPool, 'spawn_n',
738 side_effect=ValueError):
739 self.assertRaises(webob.exc.HTTPBadRequest,
740 self.controller.import_image, request, UUID4,
741 {'method': {'name': 'glance-direct'}})
760 mock_gpt.return_value.spawn.side_effect = ValueError
761 self.assertRaises(webob.exc.HTTPBadRequest,
762 self.controller.import_image, request, UUID4,
763 {'method': {'name': 'glance-direct'}})
764 self.assertTrue(mock_gpt.return_value.spawn.called)
742765
743766 def test_image_import_invalid_uri_filtering(self):
744767 request = unit_test_utils.get_fake_request()
29342957 pos = self.controller._get_locations_op_pos('1', None, True)
29352958 self.assertIsNone(pos)
29362959
2937 def test_image_import(self):
2938 request = unit_test_utils.get_fake_request()
2960 @mock.patch('glance.db.simple.api.image_set_property_atomic')
2961 @mock.patch.object(glance.api.authorization.TaskFactoryProxy, 'new_task')
2962 @mock.patch.object(glance.domain.TaskExecutorFactory, 'new_task_executor')
2963 @mock.patch('glance.api.common.get_thread_pool')
2964 def test_image_import(self, mock_gtp, mock_nte, mock_nt, mock_spa):
2965 request = unit_test_utils.get_fake_request()
2966 image = FakeImage(status='uploading')
29392967 with mock.patch.object(
29402968 glance.api.authorization.ImageRepoProxy, 'get') as mock_get:
2941 mock_get.return_value = FakeImage(status='uploading')
2969 mock_get.return_value = image
29422970 output = self.controller.import_image(
29432971 request, UUID4, {'method': {'name': 'glance-direct'}})
29442972
29452973 self.assertEqual(UUID4, output)
2974
2975 # Make sure we set the lock on the image
2976 mock_spa.assert_called_once_with(UUID4, 'os_glance_import_task',
2977 mock_nt.return_value.task_id)
2978
2979 # Make sure we grabbed a thread pool, and that we asked it
2980 # to spawn the task's run method with it.
2981 mock_gtp.assert_called_once_with('tasks_pool')
2982 mock_gtp.return_value.spawn.assert_called_once_with(
2983 mock_nt.return_value.run, mock_nte.return_value)
29462984
29472985 @mock.patch.object(glance.domain.TaskFactory, 'new_task')
29482986 @mock.patch.object(glance.api.authorization.ImageRepoProxy, 'get')
29582996 # NOTE(danms): Make sure we failed early and never even created
29592997 # a task
29602998 mock_new_task.assert_not_called()
2999
3000 @mock.patch('glance.db.simple.api.image_set_property_atomic')
3001 @mock.patch('glance.context.RequestContext.elevated')
3002 @mock.patch.object(glance.domain.TaskFactory, 'new_task')
3003 @mock.patch.object(glance.api.authorization.ImageRepoProxy, 'get')
3004 def test_image_import_copy_allowed_by_policy(self, mock_get,
3005 mock_new_task,
3006 mock_elevated,
3007 mock_spa,
3008 allowed=True):
3009 # NOTE(danms): FakeImage is owned by utils.TENANT1. Try to do the
3010 # import as TENANT2, but with a policy exception
3011 request = unit_test_utils.get_fake_request(tenant=TENANT2)
3012 mock_get.return_value = FakeImage(status='active', locations=[])
3013
3014 self.policy.rules = {'copy_image': allowed}
3015
3016 req_body = {'method': {'name': 'copy-image'},
3017 'stores': ['cheap']}
3018
3019 with mock.patch.object(
3020 self.controller.gateway,
3021 'get_task_executor_factory',
3022 side_effect=self.controller.gateway.get_task_executor_factory
3023 ) as mock_tef:
3024 self.controller.import_image(request, UUID4, req_body)
3025 # Make sure we passed an admin context to our task executor factory
3026 mock_tef.assert_called_once_with(
3027 request.context,
3028 admin_context=mock_elevated.return_value)
3029
3030 expected_input = {'image_id': UUID4,
3031 'import_req': mock.ANY,
3032 'backend': mock.ANY}
3033 mock_new_task.assert_called_with(task_type='api_image_import',
3034 owner=TENANT2,
3035 task_input=expected_input)
3036
3037 def test_image_import_copy_not_allowed_by_policy(self):
3038 # Make sure that if the policy check fails, we fail a copy-image with
3039 # Forbidden
3040 self.assertRaises(webob.exc.HTTPForbidden,
3041 self.test_image_import_copy_allowed_by_policy,
3042 allowed=False)
3043
3044 @mock.patch.object(glance.api.authorization.ImageRepoProxy, 'get')
3045 def test_image_import_locked(self, mock_get):
3046 task = test_tasks_resource._db_fixture(test_tasks_resource.UUID1,
3047 status='pending')
3048 self.db.task_create(None, task)
3049 image = FakeImage(status='uploading')
3050 # Image is locked with a valid task that has not aged out, so
3051 # the lock will not be busted.
3052 image.extra_properties['os_glance_import_task'] = task['id']
3053 mock_get.return_value = image
3054
3055 request = unit_test_utils.get_fake_request(tenant=TENANT1)
3056 req_body = {'method': {'name': 'glance-direct'}}
3057
3058 exc = self.assertRaises(webob.exc.HTTPConflict,
3059 self.controller.import_image,
3060 request, UUID1, req_body)
3061 self.assertEqual('Image has active task', str(exc))
3062
3063 @mock.patch('glance.db.simple.api.image_set_property_atomic')
3064 @mock.patch('glance.db.simple.api.image_delete_property_atomic')
3065 @mock.patch.object(glance.api.authorization.TaskFactoryProxy, 'new_task')
3066 @mock.patch.object(glance.api.authorization.ImageRepoProxy, 'get')
3067 def test_image_import_locked_by_reaped_task(self, mock_get, mock_nt,
3068 mock_dpi, mock_spi):
3069 image = FakeImage(status='uploading')
3070 # Image is locked by some other task that TaskRepo will not find
3071 image.extra_properties['os_glance_import_task'] = 'missing'
3072 mock_get.return_value = image
3073
3074 request = unit_test_utils.get_fake_request(tenant=TENANT1)
3075 req_body = {'method': {'name': 'glance-direct'}}
3076
3077 mock_nt.return_value.task_id = 'mytask'
3078 self.controller.import_image(request, UUID1, req_body)
3079
3080 # We should have atomically deleted the missing task lock
3081 mock_dpi.assert_called_once_with(image.id, 'os_glance_import_task',
3082 'missing')
3083 # We should have atomically grabbed the lock with our task id
3084 mock_spi.assert_called_once_with(image.id, 'os_glance_import_task',
3085 'mytask')
3086
3087 @mock.patch.object(glance.api.authorization.ImageRepoProxy, 'save')
3088 @mock.patch('glance.db.simple.api.image_set_property_atomic')
3089 @mock.patch('glance.db.simple.api.image_delete_property_atomic')
3090 @mock.patch.object(glance.api.authorization.TaskFactoryProxy, 'new_task')
3091 @mock.patch.object(glance.api.authorization.ImageRepoProxy, 'get')
3092 def test_image_import_locked_by_bustable_task(self, mock_get, mock_nt,
3093 mock_dpi, mock_spi,
3094 mock_save,
3095 task_status='processing'):
3096 if task_status == 'processing':
3097 # NOTE(danms): Only set task_input on one of the tested
3098 # states to make sure we don't choke on a task without
3099 # some of the data set yet.
3100 task_input = {'backend': ['store2']}
3101 else:
3102 task_input = {}
3103 task = test_tasks_resource._db_fixture(
3104 test_tasks_resource.UUID1,
3105 status=task_status,
3106 input=task_input)
3107 self.db.task_create(None, task)
3108 image = FakeImage(status='uploading')
3109 # Image is locked by a task in 'processing' state
3110 image.extra_properties['os_glance_import_task'] = task['id']
3111 image.extra_properties['os_glance_importing_to_stores'] = 'store2'
3112 mock_get.return_value = image
3113
3114 request = unit_test_utils.get_fake_request(tenant=TENANT1)
3115 req_body = {'method': {'name': 'glance-direct'}}
3116
3117 # Task has only been running for ten minutes
3118 time_fixture = fixture.TimeFixture(task['updated_at'] +
3119 datetime.timedelta(minutes=10))
3120 self.useFixture(time_fixture)
3121
3122 mock_nt.return_value.task_id = 'mytask'
3123
3124 # Task holds the lock, API refuses to bust it
3125 self.assertRaises(webob.exc.HTTPConflict,
3126 self.controller.import_image,
3127 request, UUID1, req_body)
3128 mock_dpi.assert_not_called()
3129 mock_spi.assert_not_called()
3130 mock_nt.assert_not_called()
3131
3132 # Fast forward to 90 minutes from now
3133 time_fixture.advance_time_delta(datetime.timedelta(minutes=90))
3134 self.controller.import_image(request, UUID1, req_body)
3135
3136 # API deleted the other task's lock and locked it for us
3137 mock_dpi.assert_called_once_with(image.id, 'os_glance_import_task',
3138 task['id'])
3139 mock_spi.assert_called_once_with(image.id, 'os_glance_import_task',
3140 'mytask')
3141
3142 # If we stored task_input with information about the stores
3143 # and thus triggered the cleanup code, make sure that cleanup
3144 # happened here.
3145 if task_status == 'processing':
3146 self.assertNotIn('store2',
3147 image.extra_properties[
3148 'os_glance_importing_to_stores'])
3149
3150 def test_image_import_locked_by_bustable_terminal_task_failure(self):
3151 # Make sure we don't fail with a task status transition error
3152 self.test_image_import_locked_by_bustable_task(task_status='failure')
3153
3154 def test_image_import_locked_by_bustable_terminal_task_success(self):
3155 # Make sure we don't fail with a task status transition error
3156 self.test_image_import_locked_by_bustable_task(task_status='success')
3157
3158 def test_cleanup_stale_task_progress(self):
3159 img_repo = mock.MagicMock()
3160 image = mock.MagicMock()
3161 task = mock.MagicMock()
3162
3163 # No backend info from the old task, means no action
3164 task.task_input = {}
3165 image.extra_properties = {}
3166 self.controller._cleanup_stale_task_progress(img_repo, image, task)
3167 img_repo.save.assert_not_called()
3168
3169 # If we have info but no stores, no action
3170 task.task_input = {'backend': []}
3171 self.controller._cleanup_stale_task_progress(img_repo, image, task)
3172 img_repo.save.assert_not_called()
3173
3174 # If task had stores, but image does not have those stores in
3175 # the lists, no action
3176 task.task_input = {'backend': ['store1', 'store2']}
3177 self.controller._cleanup_stale_task_progress(img_repo, image, task)
3178 img_repo.save.assert_not_called()
3179
3180 # If the image has stores in the lists, but not the ones we care
3181 # about, make sure they are not disturbed
3182 image.extra_properties = {'os_glance_failed_import': 'store3'}
3183 self.controller._cleanup_stale_task_progress(img_repo, image, task)
3184 img_repo.save.assert_not_called()
3185
3186 # Only if the image has stores that relate to our old task should
3187 # we take action, and only on those stores.
3188 image.extra_properties = {
3189 'os_glance_importing_to_stores': 'foo,store1,bar',
3190 'os_glance_failed_import': 'foo,store2,bar',
3191 }
3192 self.controller._cleanup_stale_task_progress(img_repo, image, task)
3193 img_repo.save.assert_called_once_with(image)
3194 self.assertEqual({'os_glance_importing_to_stores': 'foo,bar',
3195 'os_glance_failed_import': 'foo,bar'},
3196 image.extra_properties)
3197
3198 def test_bust_import_lock_race_to_delete(self):
3199 image_repo = mock.MagicMock()
3200 task_repo = mock.MagicMock()
3201 image = mock.MagicMock()
3202 task = mock.MagicMock(id='foo')
3203 # Simulate a race where we tried to bust a specific lock and
3204 # someone else already had, and/or re-locked it
3205 image_repo.delete_property_atomic.side_effect = exception.NotFound
3206 self.assertRaises(exception.Conflict,
3207 self.controller._bust_import_lock,
3208 image_repo, task_repo,
3209 image, task, task.id)
3210
3211 def test_enforce_lock_log_not_bustable(self, task_status='processing'):
3212 task = test_tasks_resource._db_fixture(
3213 test_tasks_resource.UUID1,
3214 status=task_status)
3215 self.db.task_create(None, task)
3216 request = unit_test_utils.get_fake_request(tenant=TENANT1)
3217 image = FakeImage()
3218 image.extra_properties['os_glance_import_task'] = task['id']
3219
3220 # Freeze time to make this repeatable
3221 time_fixture = fixture.TimeFixture(task['updated_at'] +
3222 datetime.timedelta(minutes=55))
3223 self.useFixture(time_fixture)
3224
3225 expected_expire = 300
3226 if task_status == 'pending':
3227 # NOTE(danms): Tasks in 'pending' get double the expiry time,
3228 # so we'd be expecting an extra hour here.
3229 expected_expire += 3600
3230
3231 with mock.patch.object(glance.api.v2.images, 'LOG') as mock_log:
3232 self.assertRaises(exception.Conflict,
3233 self.controller._enforce_import_lock,
3234 request, image)
3235 mock_log.warning.assert_called_once_with(
3236 'Image %(image)s has active import task %(task)s in '
3237 'status %(status)s; lock remains valid for %(expire)i '
3238 'more seconds',
3239 {'image': image.id,
3240 'task': task['id'],
3241 'status': task_status,
3242 'expire': expected_expire})
3243
3244 def test_enforce_lock_pending_takes_longer(self):
3245 self.test_enforce_lock_log_not_bustable(task_status='pending')
29613246
29623247 def test_delete_encryption_key_no_encryption_key(self):
29633248 request = unit_test_utils.get_fake_request()
45724857 self.assertEqual(http.CREATED, response.status_int)
45734858 header_value = response.headers.get(header_name)
45744859 self.assertIsNotNone(header_value)
4575 self.assertItemsEqual(enabled_methods, header_value.split(','))
4860 self.assertCountEqual(enabled_methods, header_value.split(','))
45764861
45774862 # check single method
45784863 self.config(enabled_import_methods=['swift-party-time'])
292292 self.assertRaises(webob.exc.HTTPNotFound,
293293 self.controller.get, request, UUID4)
294294
295 @mock.patch('glance.api.common.get_thread_pool')
295296 @mock.patch.object(glance.gateway.Gateway, 'get_task_factory')
296297 @mock.patch.object(glance.gateway.Gateway, 'get_task_executor_factory')
297298 @mock.patch.object(glance.gateway.Gateway, 'get_task_repo')
298299 def test_create(self, mock_get_task_repo, mock_get_task_executor_factory,
299 mock_get_task_factory):
300 mock_get_task_factory, mock_get_thread_pool):
300301 # setup
301302 request = unit_test_utils.get_fake_request()
302303 task = {
331332 self.assertEqual(1, get_task_repo.add.call_count)
332333 self.assertEqual(
333334 1, get_task_executor_factory.new_task_executor.call_count)
335
336 # Make sure that we spawned the task's run method
337 mock_get_thread_pool.assert_called_once_with('tasks_pool')
338 mock_get_thread_pool.return_value.spawn.assert_called_once_with(
339 new_task.run,
340 get_task_executor_factory.new_task_executor.return_value)
334341
335342 @mock.patch('glance.common.scripts.utils.get_image_data_iter')
336343 @mock.patch('glance.common.scripts.utils.validate_location_uri')
1919 import os
2020 import shlex
2121 import shutil
22 import signal
2223 import socket
2324 import subprocess
2425 import threading
3132 from oslo_config import fixture as cfg_fixture
3233 from oslo_log.fixture import logging_error as log_fixture
3334 from oslo_log import log
35 from oslo_utils import timeutils
36 from oslo_utils import units
3437 import six
3538 from six.moves import BaseHTTPServer
3639 from six.moves import http_client as http
4851 from glance.tests.unit import fixtures as glance_fixtures
4952
5053 CONF = cfg.CONF
54 LOG = log.getLogger(__name__)
5155 try:
5256 CONF.debug
5357 except cfg.NoSuchOptError:
279283
280284 def wait_for_fork(pid,
281285 raise_error=True,
282 expected_exitcode=0):
286 expected_exitcode=0,
287 force=True):
283288 """
284289 Wait for a process to complete
285290
288293 is raised.
289294 """
290295
291 rc = 0
292 try:
293 (pid, rc) = os.waitpid(pid, 0)
294 rc = os.WEXITSTATUS(rc)
295 if rc != expected_exitcode:
296 raise RuntimeError('The exit code %d is not %d'
297 % (rc, expected_exitcode))
298 except Exception:
299 if raise_error:
300 raise
301
302 return rc
296 # For the first period, we wait without being pushy, but after
297 # this timer expires, we start sending SIGTERM
298 term_timer = timeutils.StopWatch(5)
299 term_timer.start()
300
301 # After this timer expires we start sending SIGKILL
302 nice_timer = timeutils.StopWatch(7)
303 nice_timer.start()
304
305 # Process gets a maximum amount of time to exit before we fail the
306 # test
307 total_timer = timeutils.StopWatch(10)
308 total_timer.start()
309
310 while not total_timer.expired():
311 try:
312 cpid, rc = os.waitpid(pid, force and os.WNOHANG or 0)
313 if cpid == 0 and force:
314 if not term_timer.expired():
315 # Waiting for exit on first signal
316 pass
317 elif not nice_timer.expired():
318 # Politely ask the process to GTFO
319 LOG.warning('Killing child %i with SIGTERM' % pid)
320 os.kill(pid, signal.SIGTERM)
321 else:
322 # No more Mr. Nice Guy
323 LOG.warning('Killing child %i with SIGKILL' % pid)
324 os.kill(pid, signal.SIGKILL)
325 expected_exitcode = signal.SIGKILL
326 time.sleep(1)
327 continue
328 LOG.info('waitpid(%i) returned %i,%i' % (pid, cpid, rc))
329 if rc != expected_exitcode:
330 raise RuntimeError('The exit code %d is not %d'
331 % (rc, expected_exitcode))
332 return rc
333 except ChildProcessError:
334 # Nothing to wait for
335 return 0
336 except Exception as e:
337 LOG.error('Got wait error: %s' % e)
338 if raise_error:
339 raise
340
341 raise RuntimeError('Gave up waiting for %i to exit!' % pid)
303342
304343
305344 def execute(cmd,
689728 thread.start()
690729
691730 return thread, httpd, port
731
732
733 class FakeData(object):
734 """Generate a bunch of data without storing it in memory.
735
736 This acts like a read-only file object which generates fake data
737 in chunks when read() is called or it is used as a generator. It
738 can generate an arbitrary amount of data without storing it in
739 memory.
740
741 :param length: The number of bytes to generate
742 :param chunk_size: The chunk size to return in iteration mode, or when
743 read() is called unbounded
744
745 """
746 def __init__(self, length, chunk_size=64 * units.Ki):
747 self._max = length
748 self._chunk_size = chunk_size
749 self._len = 0
750
751 def read(self, length=None):
752 if length is None:
753 length = self._chunk_size
754
755 length = min(length, self._max - self._len)
756
757 self._len += length
758 if length == 0:
759 return b''
760 else:
761 return b'0' * length
762
763 def __iter__(self):
764 return self
765
766 def __next__(self):
767 r = self.read()
768 if len(r) == 0:
769 raise StopIteration()
770 else:
771 return r
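The `FakeData` helper above behaves like a read-only file that manufactures its payload on demand. A minimal standalone sketch of the same behavior (reproduced here so it runs on its own; it mirrors, but is not, the class in the diff):

```python
# Standalone sketch mirroring the FakeData helper above: a read-only,
# memory-light data source usable via read() or plain iteration.
class FakeData(object):
    def __init__(self, length, chunk_size=64 * 1024):
        self._max = length
        self._chunk_size = chunk_size
        self._len = 0

    def read(self, length=None):
        if length is None:
            length = self._chunk_size
        # Never hand out more than the remaining byte budget
        length = min(length, self._max - self._len)
        self._len += length
        return b'0' * length if length else b''

    def __iter__(self):
        return self

    def __next__(self):
        r = self.read()
        if not r:
            raise StopIteration()
        return r


# read() honors an explicit size and signals EOF with b''
fd = FakeData(1024)
assert len(fd.read(256)) == 256
assert len(fd.read()) == 768   # remainder, capped at the budget
assert fd.read() == b''        # exhausted

# Iterating yields chunk_size-bounded chunks totalling `length` bytes
assert sum(len(chunk) for chunk in FakeData(200 * 1024)) == 200 * 1024
```

This is why the `test_via_read` case earlier in the diff expects exactly five reads for 1024 bytes in 256-byte requests: four full chunks plus a final zero-length read marking EOF.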
0 alabaster==0.7.10
10 alembic==0.8.10
21 amqp==2.2.2
32 appdirs==1.4.3
43 asn1crypto==0.24.0
54 automaton==1.14.0
65 Babel==2.3.4
6 boto3==1.9.199
77 cachetools==2.0.1
88 castellan==0.17.0
9 certifi==2018.1.18
10 cffi==1.11.5
9 cffi==1.13.2
1110 chardet==3.0.4
1211 cliff==2.11.0
1312 cmd2==0.8.1
1817 ddt==1.0.1
1918 debtcollector==1.2.0
2019 decorator==4.2.1
21 defusedxml==0.5.0
20 defusedxml==0.6.0
21 dnspython==1.16.0
2222 doc8==0.6.0
2323 docutils==0.14
24 dogpile.cache==0.6.5
25 dulwich==0.19.0
26 enum-compat==0.0.2
27 eventlet==0.22.0
24 entrypoints==0.3
25 eventlet==0.25.1
2826 extras==1.0.0
2927 fasteners==0.14.1
3028 fixtures==3.0.0
3129 future==0.16.0
3230 futurist==1.2.0
33 gitdb2==2.0.3
34 GitPython==2.1.8
35 glance-store==1.0.0
31 glance-store==2.3.0
3632 greenlet==0.4.13
3733 httplib2==0.9.1
3834 idna==2.6
39 imagesize==1.0.0
4035 iso8601==0.1.11
4136 Jinja2==2.10
42 jsonschema==2.6.0
37 jsonschema==3.2.0
4338 keystoneauth1==3.4.0
4439 keystonemiddleware==4.17.0
45 kombu==4.1.0
40 kombu==4.3.0
4641 linecache2==1.0.0
4742 lxml==4.1.1
4843 Mako==1.0.7
4944 MarkupSafe==1.0
50 mccabe==0.2.1
5145 mock==2.0.0
46 monotonic==1.5
5247 mox3==0.25.0
5348 msgpack==0.5.6
5449 netaddr==0.7.19
5550 netifaces==0.10.6
5651 networkx==1.11
57 openstackdocstheme==1.20.0
58 os-api-ref==1.4.0
5952 os-client-config==1.29.0
60 os-testr==1.0.0
6153 os-win==3.0.0
62 oslo.cache==1.29.0
6354 oslo.concurrency==3.26.0
6455 oslo.config==5.2.0
6556 oslo.context==2.19.2
66 oslo.db==4.27.0
57 oslo.db==5.0.0
6758 oslo.i18n==3.15.3
6859 oslo.log==3.36.0
6960 oslo.messaging==5.29.0
7970 Paste==2.0.2
8071 PasteDeploy==1.5.0
8172 pbr==2.0.0
73 pika==0.10.0
8274 pika-pool==0.1.3
83 pika==0.10.0
75 positional==1.2.1
8476 prettytable==0.7.1
8577 psutil==3.2.2
8678 psycopg2==2.8.4
9183 PyMySQL==0.7.6
9284 pyOpenSSL==17.1.0
9385 pyparsing==2.2.0
94 pyperclip==1.6.0
86 pyperclip==1.8.0
9587 pysendfile==2.0.0
9688 python-barbicanclient==4.6.0
9789 python-dateutil==2.7.0
10294 python-swiftclient==3.2.0
10395 pytz==2018.3
10496 PyYAML==3.12
105 qpid-python==0.26
106 reno==2.5.0
10797 repoze.lru==0.7
10898 requests==2.14.2
10999 requestsexceptions==1.4.0
113103 Routes==2.3.1
114104 simplegeneric==0.8.1
115105 six==1.10.0
116 smmap2==2.0.3
117 snowballstemmer==1.2.1
118 Sphinx==1.6.2
119 sphinxcontrib-websupport==1.0.1
106 SQLAlchemy==1.0.10
120107 sqlalchemy-migrate==0.11.0
121 SQLAlchemy==1.0.10
122108 sqlparse==0.2.2
123109 statsd==3.2.2
124110 stestr==2.0.0
132118 testtools==2.2.0
133119 traceback2==1.4.0
134120 unittest2==1.1.0
135 urllib3==1.22
136121 vine==1.1.4
137122 voluptuous==0.11.1
138123 WebOb==1.8.1
139 whereto===0.3.0
140124 wrapt==1.10.11
141125 WSME==0.8.0
142126 xattr==0.9.2
0 # This playbook is for OpenDev infra consumption only.
1 - hosts: controller
2 tasks:
3 - name: Run glance validation script
4 shell:
5 executable: /bin/bash
6 cmd: |
7 source /opt/stack/devstack/openrc
8 set -xe
9 cirrosimg=$(glance image-list | grep cirros | cut -d" " -f 2)
10
11 echo "Dumping the cirros image for debugging..."
12 glance image-show $cirrosimg
13
14 echo "Checking that the cirros image was decorated with metadata on import..."
15 glance image-list --property-filter 'glance_devstack_test=doyouseeme?' | grep cirros
16
17 echo "Checking that the cirros image was converted to raw on import..."
18 glance image-show $cirrosimg | egrep -e 'disk_format.*raw'
19 environment: '{{ zuul | zuul_legacy_vars }}'
0 ---
1 deprecations:
2 - |
3 This release removes endpoints and config options related
4 to glance-registry. Including but not limited to config
5 option 'data-api' which has no production supported
6 options left. SimpleDB has not been supported since
7 moving DB migrations to alembic and registry is removed.
8 All registry specific options and config files have been
9 removed. 'glance-registry' command has been removed.
0 ---
1 deprecations:
2 - |
3 The deprecated 'enable_v2_api' config option has been
4 removed.
0 ---
1 fixes:
2 - |
3 Bug 1884596_: A change was added to the import API which provides
4 time-based locking of an image to exclude other import operations
5 from starting until the lock-holding task completes. The lock is
6 tied to the task created to perform the work, and the UUID of
7 that task is stored in the ``os_glance_import_task`` image property,
8 which indicates who owns the lock. If the task holding the lock fails
9 to make progress for 60 minutes, another import operation will be
10 allowed to steal the lock and start another import operation.
11
12 .. _1884596: https://bugs.launchpad.net/glance/+bug/1884596
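The time-based lock stealing described in this note can be sketched as follows; the helper name, the `get_task` lookup, and the property access are illustrative assumptions, not Glance's actual implementation:

```python
# Hypothetical sketch of the time-based import-lock check described in
# the release note above; names here are illustrative, not Glance's code.
from datetime import datetime, timedelta, timezone

LOCK_TIMEOUT = timedelta(minutes=60)  # staleness threshold from the note


def may_steal_lock(image_properties, get_task, now=None):
    """Return True if a new import operation may take over the lock."""
    now = now or datetime.now(timezone.utc)
    task_id = image_properties.get('os_glance_import_task')
    if task_id is None:
        return True  # no lock holder recorded on the image
    task = get_task(task_id)
    # If the lock-holding task has made no progress for 60 minutes,
    # another import operation is allowed to steal the lock.
    return now - task['updated_at'] > LOCK_TIMEOUT
```

A fresh `updated_at` on the lock-holding task blocks new imports; one older than the timeout allows the lock to be stolen.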
0 ---
1 upgrade:
2 - |
3 The ``glance-replicator`` options ``mastertoken`` and ``slavetoken`` were
4 deprecated in the Pike release cycle. These options have now been removed.
5 The options ``sourcetoken`` and ``targettoken`` should be used instead.
0 ---
1 features:
2 - |
3 Added support for cinder multiple stores.
4 upgrade:
5 - |
6 During an upgrade from a single cinder store to multiple cinder
7 stores, legacy image location URLs will be updated to the new
8 format based on the volume type configured in the stores.
9 Legacy location url: cinder://<volume-id>
10 New location url: cinder://<store-id>/<volume-id>
11
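The location URL rewrite described in this note can be sketched as below; the helper name and the `store_for_volume` lookup are assumptions for illustration, not Glance's actual code:

```python
# Illustrative sketch of the legacy-to-new cinder location URL rewrite
# described above; the helper and store lookup are assumed names.
from urllib.parse import urlparse


def upgrade_cinder_location(url, store_for_volume):
    """Rewrite cinder://<volume-id> to cinder://<store-id>/<volume-id>."""
    parsed = urlparse(url)
    if parsed.scheme != 'cinder':
        raise ValueError('not a cinder location: %s' % url)
    if parsed.path:
        # Already in the new cinder://<store-id>/<volume-id> format.
        return url
    volume_id = parsed.netloc
    # Store chosen with respect to the volume type configured in the stores.
    store_id = store_for_volume(volume_id)
    return 'cinder://%s/%s' % (store_id, volume_id)
```

Legacy URLs gain a store segment; URLs already in the new format pass through unchanged.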
0 ---
1 features:
2 - |
3 Added policy support to allow copying image to multiple stores,
4 even if those images are not owned by the current user's project.
5
6 fixes:
7 - |
8 Bug 1888349_: glance-cache-manage utility is broken
9 - |
10 Bug 1886374_: Improve lazy loading mechanism for multiple stores
11 - |
12 Bug 1885003_: Interrupted copy-image may break a subsequent operation
13 - |
14 Bug 1884587_: image import copy-image API should reflect proper authorization
15 - |
16 Bug 1876419_: Failed to parse json file /etc/glance/metadefs/compute-vmware.json
17 - |
18 Bug 1856581_: metadefs: OS::Glance::CommonImageProperties out of date
19 - |
20 Bug 1843576_: Glance metadefs is missing Image property hw_vif_multiqueue_enabled
21 - |
22 Bug 1856578_: docs: image schema customization restrictions
23 - |
24 Bug 1808814_: admin docs: interoperable image import revision for stein
25 - |
26 Bug 1870336_: Update 'common image properties' doc
27 - |
28 Bug 1888713_: Async tasks, image import not supported in pure-WSGI mode
29
30 .. _1888349: https://code.launchpad.net/bugs/1888349
31 .. _1886374: https://code.launchpad.net/bugs/1886374
32 .. _1885003: https://code.launchpad.net/bugs/1885003
33 .. _1884587: https://code.launchpad.net/bugs/1884587
34 .. _1876419: https://code.launchpad.net/bugs/1876419
35 .. _1856581: https://code.launchpad.net/bugs/1856581
36 .. _1843576: https://code.launchpad.net/bugs/1843576
37 .. _1856578: https://code.launchpad.net/bugs/1856578
38 .. _1808814: https://code.launchpad.net/bugs/1808814
39 .. _1870336: https://code.launchpad.net/bugs/1870336
40 .. _1888713: https://code.launchpad.net/bugs/1888713
0 ---
1 features:
2 - |
3 Added support to calculate the virtual size of an image based on its disk format
4 - |
5 Added support for sparse image upload for the filesystem and rbd drivers
6 - |
7 Improved performance of rbd store chunk upload
8 - |
9 Added support to configure multiple cinder stores
10
11 upgrade:
12 - |
13 After upgrading, deployments using the cinder backend should
14 update their config to specify a volume type. Existing images on
15 those backends will be updated at runtime (lazily, when they are
16 first read) to a location URL that includes the store and volume
17 type information.
18
19 fixes:
20 - |
21 Bug 1891190_: test_reload() functional test causes hang and jobs TIMED_OUT
22 - |
23 Bug 1891352_: Failed import of one store will remain in progress forever if all_stores_must_succeed=True
24 - |
25 Bug 1887099_: Invalid metadefs for watchdog
26
27 .. _1891190: https://code.launchpad.net/bugs/1891190
28 .. _1891352: https://code.launchpad.net/bugs/1891352
29 .. _1887099: https://code.launchpad.net/bugs/1887099
0 ---
1 prelude: |
2 The Victoria release includes some important milestones in Glance
3 development priorities.
4
5 * Added support to calculate the virtual size of an image based on its disk format
6
7 * Added support for sparse image upload for the filesystem and rbd drivers of
8 glance_store
9
10 * Improved performance of rbd store chunk upload
11
12 * Fixed some important bugs around copy-image import method and importing
13 image to multiple stores
14
15 * Added support to configure multiple cinder stores
16
17 fixes:
18 - |
19 Bug 1795950_: Fix cleaning of web-download image import in node_staging_uri
20 - |
21 Bug 1895663_: Image import "web-download" doesn't check on download size
22
23 .. _1795950: https://code.launchpad.net/bugs/1795950
24 .. _1895663: https://code.launchpad.net/bugs/1895663
3030
3131 # -- General configuration ------------------------------------------------
3232
33 import openstackdocstheme
34
3533 # If your documentation needs a minimal Sphinx version, state it here.
3634 # needs_sphinx = '1.0'
3735
3937 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
4038 # ones.
4139 extensions = [
40 'openstackdocstheme',
4241 'reno.sphinxext',
4342 ]
4443
9291 # show_authors = False
9392
9493 # The name of the Pygments (syntax highlighting) style to use.
95 pygments_style = 'sphinx'
94 pygments_style = 'native'
9695
9796 # A list of ignored prefixes for module index sorting.
9897 # modindex_common_prefix = []
9998
10099 # If true, keep warnings as "system message" paragraphs in the built documents.
101100 # keep_warnings = False
101
102 # openstackdocstheme options
103 openstackdocs_repo_name = 'openstack/glance'
104 openstackdocs_bug_project = 'glance'
105 openstackdocs_auto_name = False
106 openstackdocs_bug_tag = 'releasenotes'
102107
103108
104109 # -- Options for HTML output ----------------------------------------------
111116 # further. For a list of options available for each theme, see the
112117 # documentation.
113118 # html_theme_options = {}
114
115 # Add any paths that contain custom themes here, relative to this directory.
116 # html_theme_path = []
117 html_theme_path = [openstackdocstheme.get_html_theme_path()]
118119
119120 # The name for this set of Sphinx documents. If None, it defaults to
120121 # "<project> v<release> documentation".
55 :maxdepth: 1
66
77 unreleased
8 ussuri
89 train
910 stein
1011 rocky
0 ===========================
1 Ussuri Series Release Notes
2 ===========================
3
4 .. release-notes::
5 :branch: stable/ussuri
22 # process, which may cause wedges in the gate later.
33
44 pbr!=2.1.0,>=2.0.0 # Apache-2.0
5 defusedxml>=0.5.0 # PSF
5 defusedxml>=0.6.0 # PSF
66
77 # < 0.8.0/0.8 does not work, see https://bugs.launchpad.net/bugs/1153983
88 SQLAlchemy!=1.1.5,!=1.1.6,!=1.1.7,!=1.1.8,>=1.0.10 # MIT
9 eventlet!=0.23.0,!=0.25.0,>=0.22.0 # MIT
9 eventlet>=0.25.1 # MIT
1010 PasteDeploy>=1.5.0 # MIT
1111 Routes>=2.3.1 # MIT
1212 WebOb>=1.8.1 # MIT
1919 oslo.context>=2.19.2 # Apache-2.0
2020 oslo.upgradecheck>=0.1.0 # Apache-2.0
2121 oslo.utils>=3.33.0 # Apache-2.0
22 stevedore>=1.20.0 # Apache-2.0
22 stevedore!=3.0.0,>=1.20.0 # Apache-2.0
2323 futurist>=1.2.0 # Apache-2.0
2424 taskflow>=2.16.0 # Apache-2.0
2525 keystoneauth1>=3.4.0 # Apache-2.0
3030 # For paste.util.template used in keystone.common.template
3131 Paste>=2.0.2 # MIT
3232
33 jsonschema>=2.6.0 # MIT
33 jsonschema>=3.2.0 # MIT
3434 pyOpenSSL>=17.1.0 # Apache-2.0
3535 # Required by openstack.common libraries
3636 six>=1.10.0 # MIT
3737
38 oslo.db>=4.27.0 # Apache-2.0
38 oslo.db>=5.0.0 # Apache-2.0
3939 oslo.i18n>=3.15.3 # Apache-2.0
4040 oslo.log>=3.36.0 # Apache-2.0
4141 oslo.messaging>=5.29.0,!=9.0.0 # Apache-2.0
4747 osprofiler>=1.4.0 # Apache-2.0
4848
4949 # Glance Store
50 glance-store>=1.0.0 # Apache-2.0
50 glance-store>=2.3.0 # Apache-2.0
5151
5252
5353 debtcollector>=1.2.0 # Apache-2.0
1616 Programming Language :: Python :: 3
1717 Programming Language :: Python :: 3.6
1818 Programming Language :: Python :: 3.7
19 Programming Language :: Python :: 3.8
1920
2021 [files]
2122 data_files =
2324 etc/glance-api.conf
2425 etc/glance-cache.conf
2526 etc/glance-manage.conf
26 etc/glance-registry.conf
2727 etc/glance-scrubber.conf
2828 etc/glance-api-paste.ini
29 etc/glance-registry-paste.ini
3029 etc/glance/metadefs = etc/metadefs/*
3130 packages =
3231 glance
4039 glance-cache-cleaner = glance.cmd.cache_cleaner:main
4140 glance-control = glance.cmd.control:main
4241 glance-manage = glance.cmd.manage:main
43 glance-registry = glance.cmd.registry:main
4442 glance-replicator = glance.cmd.replicator:main
4543 glance-scrubber = glance.cmd.scrubber:main
4644 glance-status = glance.cmd.status:main
5149 store_type_strategy = glance.common.location_strategy.store_type
5250 oslo.config.opts =
5351 glance.api = glance.opts:list_api_opts
54 glance.registry = glance.opts:list_registry_opts
5552 glance.scrubber = glance.opts:list_scrubber_opts
5653 glance.cache= glance.opts:list_cache_opts
5754 glance.manage = glance.opts:list_manage_opts
9390 tag_date = 0
9491 tag_svn_revision = 0
9592
96 [compile_catalog]
97 directory = glance/locale
98 domain = glance
99
100 [update_catalog]
101 domain = glance
102 output_dir = glance/locale
103 input_file = glance/locale/glance.pot
104
105 [extract_messages]
106 keywords = _ gettext ngettext l_ lazy_gettext
107 mapping_file = babel.cfg
108 output_file = glance/locale/glance.pot
22 # process, which may cause wedges in the gate later.
33
44 # Hacking already pins down pep8, pyflakes and flake8
5 hacking>=3.0,<3.1.0 # Apache-2.0
5 hacking>=3.0.1,<3.1.0 # Apache-2.0
66
77 # For translations processing
88 Babel!=2.4.0,>=2.3.4 # BSD
2121 stestr>=2.0.0 # Apache-2.0
2222 doc8>=0.6.0 # Apache-2.0
2323 Pygments>=2.2.0 # BSD license
24 boto3>=1.9.199 # Apache-2.0
2425
2526 # Optional packages that should be installed when testing
2627 PyMySQL>=0.7.6 # MIT License
2222 sudo -H mysql -u root -p$DB_ROOT_PW -h localhost -e "
2323 DELETE FROM mysql.user WHERE User='';
2424 FLUSH PRIVILEGES;
25 GRANT ALL PRIVILEGES ON *.*
26 TO '$DB_USER'@'%' identified by '$DB_PW' WITH GRANT OPTION;"
25 CREATE USER '$DB_USER'@'%' IDENTIFIED BY '$DB_PW';
26 GRANT ALL PRIVILEGES ON *.* TO '$DB_USER'@'%' WITH GRANT OPTION;"
2727
2828 # Now create our database.
2929 mysql -u $DB_USER -p$DB_PW -h 127.0.0.1 -e "
0 #!/usr/bin/env python3
1 # Copyright 2020 Red Hat, Inc
2 # All Rights Reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License"); you may
5 # not use this file except in compliance with the License. You may obtain
6 # a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
12 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
13 # License for the specific language governing permissions and limitations
14 # under the License.
15
16 """This is a helper tool to test Glance's stream-based format inspection."""
17
18 # Example usage:
19 #
20 # test_format_inspector.py -f qcow2 -v -i ~/cirros-0.5.1-x86_64-disk.img
21
22 import argparse
23 import logging
24 import sys
25
26 from oslo_utils import units
27
28 from glance.common import format_inspector
29 from glance.tests.unit.common import test_format_inspector
30
31
32 def main():
33 formats = ['raw', 'qcow2', 'vhd', 'vhdx', 'vmdk', 'vdi']
34
35 parser = argparse.ArgumentParser()
36 parser.add_argument('-d', '--debug', action='store_true')
37 parser.add_argument('-f', '--format', default='raw',
38 help='Format (%s)' % ','.join(sorted(formats)))
39 parser.add_argument('-b', '--block-size', default=65536, type=int,
40 help='Block read size')
41 parser.add_argument('--context-limit', default=(1 * 1024), type=int,
42 help='Maximum memory footprint (KiB)')
43 parser.add_argument('-i', '--input', default=None,
44 help='Input file. Defaults to stdin')
45 parser.add_argument('-v', '--verify', action='store_true',
46 help=('Verify our number with qemu-img '
47 '(requires --input)'))
48 args = parser.parse_args()
49
50 if args.debug:
51 logging.basicConfig(level=logging.DEBUG)
52 else:
53 logging.basicConfig(level=logging.INFO)
54
55 fmt = format_inspector.get_inspector(args.format)(tracing=args.debug)
56
57 if args.input:
58 input_stream = open(args.input, 'rb')
59 else:
60 input_stream = sys.stdin.buffer
61
62 stream = format_inspector.InfoWrapper(input_stream, fmt)
63 count = 0
64 found_size = False
65 while True:
66 chunk = stream.read(int(args.block_size))
67 # This could stream to an output destination or stdout for testing
68 # sys.stdout.write(chunk)
69 if not chunk:
70 break
71 count += len(chunk)
72 if args.format != 'raw' and not found_size and fmt.virtual_size != 0:
73 # Print the point at which we've seen enough of the file to
74 # know what the virtual size is. This is almost always less
75 # than the raw_size
76 print('Determined virtual size at byte %i' % count)
77 found_size = True
78
79 if fmt.format_match:
80 print('Source was %s file, virtual size %i MiB (%i bytes)' % (
81 fmt, fmt.virtual_size / units.Mi, fmt.virtual_size))
82 else:
83 print('*** Format inspector did not detect file as %s' % args.format)
84
85 print('Raw size %i MiB (%i bytes)' % (fmt.actual_size / units.Mi,
86 fmt.actual_size))
87 print('Required contexts: %s' % str(fmt.context_info))
88 mem_total = sum(fmt.context_info.values())
89 print('Total memory footprint: %i bytes' % mem_total)
90
91 # To make sure we're not storing the whole image, complain if the
92 # format inspector stored more than context_limit data
93 if mem_total > args.context_limit * 1024:
94 print('*** ERROR: Memory footprint exceeded!')
95
96 if args.verify and args.input:
97 size = test_format_inspector.get_size_from_qemu_img(args.input)
98 if size != fmt.virtual_size:
99 print('*** QEMU disagrees with our size of %i: %i' % (
100 fmt.virtual_size, size))
101 else:
102 print('Confirmed size with qemu-img')
103
104
105 if __name__ == '__main__':
106 sys.exit(main())
00 [tox]
11 minversion = 3.1.0
22 # python runtimes: https://governance.openstack.org/tc/reference/runtimes/ussuri.html
3 envlist = functional-py37,py37,py36,pep8
3 envlist = functional-py38,py38,py36,pep8
44 skipsdist = True
55 skip_missing_interpreters = true
66 # this allows tox to infer the base python from the environment name
8585 [testenv:genconfig]
8686 commands =
8787 oslo-config-generator --config-file etc/oslo-config-generator/glance-api.conf
88 oslo-config-generator --config-file etc/oslo-config-generator/glance-registry.conf
8988 oslo-config-generator --config-file etc/oslo-config-generator/glance-scrubber.conf
9089 oslo-config-generator --config-file etc/oslo-config-generator/glance-cache.conf
9190 oslo-config-generator --config-file etc/oslo-config-generator/glance-manage.conf
198197 -c{toxinidir}/lower-constraints.txt
199198 -r{toxinidir}/test-requirements.txt
200199 -r{toxinidir}/requirements.txt
200 commands =
201 {[testenv]commands}
202 stestr run --slowest {posargs}