cinder-tempest-plugin / 89023fe
Merge tag '1.7.0' into debian/zed (cinder-tempest-plugin 1.7.0 release)
meta:version: 1.7.0
meta:diff-start: -
meta:series: zed
meta:release-type: release
meta:pypi: no
meta:first: yes
meta:release:Author: Luigi Toscano <ltoscano@redhat.com>
meta:release:Commit: Luigi Toscano <ltoscano@redhat.com>
meta:release:Change-Id: Ib461bdb37e9c918477ad91e86afcd6ee6aee3fe8
meta:release:Code-Review+1: Rajat Dhasmana <rajatdhasmana@gmail.com>
meta:release:Code-Review+2: Elod Illes <elod.illes@est.tech>
meta:release:Code-Review+1: Jon Bernard <jobernar@redhat.com>
meta:release:Code-Review+2: Hervé Beraud <herveberaud.pro@gmail.com>
meta:release:Workflow+1: Hervé Beraud <herveberaud.pro@gmail.com>
Committed by Thomas Goirand, 3 years ago
11 changed file(s) with 412 addition(s) and 166 deletion(s).
33 - tempest-plugin-jobs
44 check:
55 jobs:
6 - cinder-tempest-plugin-lvm-multiattach
67 - cinder-tempest-plugin-lvm-lio-barbican
7 - cinder-tempest-plugin-lvm-lio-barbican-centos-8-stream:
8 - cinder-tempest-plugin-lvm-lio-barbican-centos-9-stream:
89 voting: false
910 - cinder-tempest-plugin-lvm-tgt-barbican
1011 - nova-ceph-multistore:
1112 voting: false
1213 - cinder-tempest-plugin-cbak-ceph
1314 - cinder-tempest-plugin-cbak-s3
15 # As per the Tempest "Stable Branch Support Policy", Tempest will only
16 # support the "Maintained" stable branches and not the "Extended Maintenance"
17 # branches. That is what we need to do for all tempest plugins. Only jobs
18 # for the current releasable ("Maintained") stable branches should be listed
19 # here.
20 - cinder-tempest-plugin-basic-yoga
1421 - cinder-tempest-plugin-basic-xena
1522 - cinder-tempest-plugin-basic-wallaby
16 - cinder-tempest-plugin-basic-victoria
17 - cinder-tempest-plugin-basic-ussuri
1823 # Set this job to voting once we have some actual tests to run
1924 - cinder-tempest-plugin-protection-functional:
2025 voting: false
2530 - cinder-tempest-plugin-cbak-ceph
2631 experimental:
2732 jobs:
33 - cinder-tempest-plugin-cbak-ceph-yoga
2834 - cinder-tempest-plugin-cbak-ceph-xena
2935 - cinder-tempest-plugin-cbak-ceph-wallaby
30 - cinder-tempest-plugin-cbak-ceph-victoria
31 - cinder-tempest-plugin-cbak-ceph-ussuri
3236
3337 - job:
3438 name: cinder-tempest-plugin-protection-functional
5155 - cinder-tempest-plugin
5256
5357 - job:
58 name: cinder-tempest-plugin-lvm-multiattach
59 description: |
60 This enables multiattach tests along with standard tempest tests
61 parent: devstack-tempest
62 required-projects:
63 - opendev.org/openstack/tempest
64 - opendev.org/openstack/cinder-tempest-plugin
65 - opendev.org/openstack/cinder
66 vars:
67 tempest_test_regex: '(^tempest\.(api|scenario)|(^cinder_tempest_plugin))'
68 tempest_test_exclude_list: '{{ ansible_user_dir }}/{{ zuul.projects["opendev.org/openstack/tempest"].src_dir }}/tools/tempest-integrated-gate-storage-exclude-list.txt'
69 tox_envlist: all
70 devstack_localrc:
71 ENABLE_VOLUME_MULTIATTACH: true
72 tempest_plugins:
73 - cinder-tempest-plugin
74 irrelevant-files:
75 - ^.*\.rst$
76 - ^doc/.*$
77 - ^releasenotes/.*$
78
79 - job:
5480 name: cinder-tempest-plugin-lvm-barbican-base-abstract
5581 description: |
5682 This is a base job for lvm with lio & tgt targets
79105 # FIXME: 'creator' should be re-added by the barbican devstack plugin
80106 # but the value below override everything.
81107 tempest_roles: member,creator
108 volume:
109 build_timeout: 300
82110 volume-feature-enabled:
83111 volume_revert: True
84112 devstack_services:
95123 description: |
96124 This is a base job for lvm with lio & tgt targets
97125 with cinderlib tests.
98 branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train)).*$
126 branches: ^(?!stable/(ocata|pike|queens|rocky|stein|train|ussuri|victoria)).*$
99127 parent: cinder-tempest-plugin-lvm-barbican-base-abstract
100128 roles:
101129 - zuul: opendev.org/openstack/cinderlib
113141 name: cinder-tempest-plugin-lvm-barbican-base
114142 description: |
115143 This is a base job for lvm with lio & tgt targets
116 with cinderlib tests to run on stable/train testing.
117 branches: stable/train
144 with cinderlib tests to run on stable/train through stable/victoria.
145 Because those stable branches use tempest 26.1.0 (pinned in the
146 devstack stackrc file), we must use the compatible
147 cinder-tempest-plugin version 1.3.0.
148 branches:
149 - stable/train
150 - stable/ussuri
151 - stable/victoria
118152 parent: cinder-tempest-plugin-lvm-barbican-base-abstract
119153 roles:
120154 - zuul: opendev.org/openstack/cinderlib
149183 Integration tests that runs with the ceph devstack plugin, py3
150184 and enable the backup service.
151185 vars:
186 configure_swap_size: 4096
152187 devstack_local_conf:
153188 test-config:
154189 $TEMPEST_CONFIG:
158193 c-bak: true
159194
160195 - job:
196 name: cinder-tempest-plugin-cbak-ceph-yoga
197 parent: cinder-tempest-plugin-cbak-ceph
198 nodeset: openstack-single-node-focal
199 override-checkout: stable/yoga
200
201 - job:
161202 name: cinder-tempest-plugin-cbak-ceph-xena
162203 parent: cinder-tempest-plugin-cbak-ceph
163204 nodeset: openstack-single-node-focal
168209 parent: cinder-tempest-plugin-cbak-ceph
169210 nodeset: openstack-single-node-focal
170211 override-checkout: stable/wallaby
171
172 - job:
173 name: cinder-tempest-plugin-cbak-ceph-victoria
174 parent: cinder-tempest-plugin-cbak-ceph
175 nodeset: openstack-single-node-focal
176 override-checkout: stable/victoria
177
178 - job:
179 name: cinder-tempest-plugin-cbak-ceph-ussuri
180 parent: cinder-tempest-plugin-cbak-ceph
181 nodeset: openstack-single-node-bionic
182 override-checkout: stable/ussuri
183212
184213 # variant for pre-Ussuri branches (no volume revert for Ceph),
185214 # in case this job is used on those branches
209238 nodeset: devstack-single-node-centos-8-stream
210239 description: |
211240 This job configures Cinder with LVM, LIO, barbican and
212 runs tempest tests and cinderlib tests on CentOS 8.
241 runs tempest tests and cinderlib tests on CentOS Stream 8.
242
243 - job:
244 name: cinder-tempest-plugin-lvm-lio-barbican-centos-9-stream
245 parent: cinder-tempest-plugin-lvm-lio-barbican
246 nodeset: devstack-single-node-centos-9-stream
247 description: |
248 This job configures Cinder with LVM, LIO, barbican and
249 runs tempest tests and cinderlib tests on CentOS Stream 9.
213250
214251 - job:
215252 name: cinder-tempest-plugin-lvm-tgt-barbican
257294 - ^releasenotes/.*$
258295
259296 - job:
297 name: cinder-tempest-plugin-basic-yoga
298 parent: cinder-tempest-plugin-basic
299 nodeset: openstack-single-node-focal
300 override-checkout: stable/yoga
301
302 - job:
260303 name: cinder-tempest-plugin-basic-xena
261304 parent: cinder-tempest-plugin-basic
262305 nodeset: openstack-single-node-focal
267310 parent: cinder-tempest-plugin-basic
268311 nodeset: openstack-single-node-focal
269312 override-checkout: stable/wallaby
270
271 - job:
272 name: cinder-tempest-plugin-basic-victoria
273 parent: cinder-tempest-plugin-basic
274 nodeset: openstack-single-node-focal
275 override-checkout: stable/victoria
276
277 - job:
278 name: cinder-tempest-plugin-basic-ussuri
279 parent: cinder-tempest-plugin-basic
280 nodeset: openstack-single-node-bionic
281 override-checkout: stable/ussuri
3434 LOG_COLOR=False
3535 RECLONE=yes
3636 ENABLED_SERVICES=c-api,c-bak,c-sch,c-vol,cinder,dstat,g-api,g-reg,key
37 ENABLED_SERVICES+=,mysql,n-api,n-cond,n-cpu,n-crt,n-sch,rabbit,tempest
37 ENABLED_SERVICES+=,mysql,n-api,n-cond,n-cpu,n-crt,n-sch,rabbit,tempest,placement-api
3838 CINDER_ENABLED_BACKENDS=lvmdriver-1
3939 CINDER_DEFAULT_VOLUME_TYPE=lvmdriver-1
4040 CINDER_VOLUME_CLEAR=none
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15 from tempest.api.volume import api_microversion_fixture
1615 from tempest.common import compute
1716 from tempest.common import waiters
1817 from tempest import config
18 from tempest.lib.common import api_microversion_fixture
1919 from tempest.lib.common import api_version_utils
2020 from tempest.lib.common.utils import data_utils
2121 from tempest.lib.common.utils import test_utils
5757 def setUp(self):
5858 super(BaseVolumeTest, self).setUp()
5959 self.useFixture(api_microversion_fixture.APIMicroversionFixture(
60 self.request_microversion))
60 volume_microversion=self.request_microversion))
6161
6262 @classmethod
6363 def resource_setup(cls):
7171 def create_volume(cls, wait_until='available', **kwargs):
7272 """Wrapper utility that returns a test volume.
7373
74 :param wait_until: wait till volume status.
74 :param wait_until: wait until the volume reaches this status; None means no wait.
7575 """
7676 if 'size' not in kwargs:
7777 kwargs['size'] = CONF.volume.volume_size
9292 cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
9393 cls.volumes_client.delete_volume,
9494 volume['id'])
95 waiters.wait_for_volume_resource_status(cls.volumes_client,
96 volume['id'], wait_until)
95 if wait_until:
96 waiters.wait_for_volume_resource_status(cls.volumes_client,
97 volume['id'], wait_until)
9798 return volume
9899
99100 @classmethod
198199 cls.admin_volume_types_client.delete_volume_type, type_id)
199200 test_utils.call_and_ignore_notfound_exc(
200201 cls.admin_volume_types_client.wait_for_resource_deletion, type_id)
202
203
204 class CreateMultipleResourceTest(BaseVolumeTest):
205
206 def _create_multiple_resource(self, callback, repeat_count=5,
207 **kwargs):
208
209 res = []
210 for _ in range(repeat_count):
211 res.append(callback(**kwargs)['id'])
212 return res
213
214 def _wait_for_multiple_resources(self, callback, wait_list, **kwargs):
215
216 for r in wait_list:
217 callback(resource_id=r, **kwargs)
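The two helpers added above implement a fire-then-wait pattern: kick off N create calls without waiting, collect the resulting IDs, then poll each one to completion. A minimal standalone sketch of the same pattern, using stub callbacks (`fake_create` and `fake_wait` are illustrative stand-ins, not tempest APIs):

```python
def create_multiple(callback, repeat_count=5, **kwargs):
    """Fire off repeat_count create calls without waiting on any of them."""
    return [callback(**kwargs)['id'] for _ in range(repeat_count)]


def wait_for_multiple(callback, wait_list, **kwargs):
    """Poll every created resource until it reaches the desired state."""
    for resource_id in wait_list:
        callback(resource_id=resource_id, **kwargs)


# Stand-in "client": creation returns immediately; waiting just records
# what would have been polled.
counter = {'n': 0}

def fake_create(**kwargs):
    counter['n'] += 1
    return {'id': 'vol-%d' % counter['n']}

waited = []

def fake_wait(resource_id, status):
    waited.append((resource_id, status))

ids = create_multiple(fake_create, repeat_count=3)
wait_for_multiple(fake_wait, ids, status='available')
```

Because creation is decoupled from waiting, all three requests are in flight on the server before the first status poll, which is exactly what the new stress tests rely on.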
2020 from cinder_tempest_plugin.api.volume import base
2121
2222 CONF = config.CONF
23
24
25 class VolumeFromImageTest(base.BaseVolumeTest):
26
27 @classmethod
28 def skip_checks(cls):
29 super(VolumeFromImageTest, cls).skip_checks()
30 if not CONF.service_available.glance:
31 raise cls.skipException("Glance service is disabled")
32
33 @classmethod
34 def create_volume_no_wait(cls, **kwargs):
35 """Returns a test volume.
36
37 This does not wait for volume creation to finish,
38 so that multiple operations can happen on the
39 Cinder server in parallel.
40 """
41 if 'size' not in kwargs:
42 kwargs['size'] = CONF.volume.volume_size
43
44 if 'imageRef' in kwargs:
45 image = cls.os_primary.image_client_v2.show_image(
46 kwargs['imageRef'])
47 min_disk = image['min_disk']
48 kwargs['size'] = max(kwargs['size'], min_disk)
49
50 if 'name' not in kwargs:
51 name = data_utils.rand_name(cls.__name__ + '-Volume')
52 kwargs['name'] = name
53
54 volume = cls.volumes_client.create_volume(**kwargs)['volume']
55 cls.addClassResourceCleanup(
56 cls.volumes_client.wait_for_resource_deletion, volume['id'])
57 cls.addClassResourceCleanup(test_utils.call_and_ignore_notfound_exc,
58 cls.volumes_client.delete_volume,
59 volume['id'])
60
61 return volume
62
63 @decorators.idempotent_id('8976a11b-1ddc-49b6-b66f-8c26adf3fa9e')
64 def test_create_from_image_multiple(self):
65 """Create a handful of volumes from the same image at once.
66
67 The purpose of this test is to stress volume drivers,
68 image download, the image cache, etc., within Cinder.
69 """
70
71 img_uuid = CONF.compute.image_ref
72
73 vols = []
74 for v in range(0, 5):
75 vols.append(self.create_volume_no_wait(imageRef=img_uuid))
76
77 for v in vols:
78 waiters.wait_for_volume_resource_status(self.volumes_client,
79 v['id'],
80 'available')
8123
8224
8325 class VolumeAndVolumeTypeFromImageTest(base.BaseVolumeAdminTest):
0 # Copyright 2022 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from tempest.common import waiters
16 from tempest import config
17 from tempest.lib import decorators
18
19 from cinder_tempest_plugin.api.volume import base
20
21 CONF = config.CONF
22
23
24 class CreateVolumesFromSnapshotTest(base.CreateMultipleResourceTest):
25
26 @decorators.idempotent_id('3b879ad1-d861-4ad3-b2c8-c89162e867c3')
27 def test_create_multiple_volume_from_snapshot(self):
28 """Create multiple volumes from a snapshot."""
29
30 volume = self.create_volume()
31 snapshot = self.create_snapshot(volume_id=volume['id'])
32 kwargs_create = {"snapshot_id": snapshot['id'], "wait_until": None}
33 res = self._create_multiple_resource(self.create_volume,
34 **kwargs_create)
35 kwargs_wait = {"client": self.volumes_client, "status": "available"}
36 self._wait_for_multiple_resources(
37 waiters.wait_for_volume_resource_status, res, **kwargs_wait)
38
39
40 class CreateVolumesFromSourceVolumeTest(base.CreateMultipleResourceTest):
41
42 @decorators.idempotent_id('b4a250d1-3ffd-4727-a2f5-9d858b298558')
43 def test_create_multiple_volume_from_source_volume(self):
44 """Create multiple volumes from a source volume.
45
46 The purpose of this test is to check the synchronization
47 of driver clone method with simultaneous requests.
48 """
49
50 volume = self.create_volume()
51 kwargs_create = {"source_volid": volume['id'], "wait_until": None}
52 res = self._create_multiple_resource(self.create_volume,
53 **kwargs_create)
54 kwargs_wait = {"client": self.volumes_client, "status": "available"}
55 self._wait_for_multiple_resources(
56 waiters.wait_for_volume_resource_status, res, **kwargs_wait)
57
58
59 class CreateVolumesFromBackupTest(base.CreateMultipleResourceTest):
60
61 @classmethod
62 def skip_checks(cls):
63 super(CreateVolumesFromBackupTest, cls).skip_checks()
64 if not CONF.volume_feature_enabled.backup:
65 raise cls.skipException("Cinder backup feature disabled")
66
67 @decorators.idempotent_id('9db67083-bf1a-486c-8f77-3778467f39a1')
68 def test_create_multiple_volume_from_backup(self):
69 """Create multiple volumes from a backup."""
70
71 volume = self.create_volume()
72 backup = self.create_backup(volume_id=volume['id'])
73 kwargs_create = {"backup_id": backup['id'], "wait_until": None}
74 res = self._create_multiple_resource(self.create_volume,
75 **kwargs_create)
76 kwargs_wait = {"client": self.volumes_client, "status": "available"}
77 self._wait_for_multiple_resources(
78 waiters.wait_for_volume_resource_status, res, **kwargs_wait)
79
80
81 class CreateVolumesFromImageTest(base.CreateMultipleResourceTest):
82
83 @classmethod
84 def skip_checks(cls):
85 super(CreateVolumesFromImageTest, cls).skip_checks()
86 if not CONF.service_available.glance:
87 raise cls.skipException("Glance service is disabled")
88
89 @decorators.idempotent_id('8976a11b-1ddc-49b6-b66f-8c26adf3fa9e')
90 def test_create_from_image_multiple(self):
91 """Create a handful of volumes from the same image at once.
92
93 The purpose of this test is to stress volume drivers,
94 image download, the image cache, etc., within Cinder.
95 """
96
97 img_uuid = CONF.compute.image_ref
98
99 kwargs_create = {"imageRef": img_uuid, "wait_until": None}
100 res = self._create_multiple_resource(self.create_volume,
101 **kwargs_create)
102 kwargs_wait = {"client": self.volumes_client, "status": "available"}
103 self._wait_for_multiple_resources(
104 waiters.wait_for_volume_resource_status, res, **kwargs_wait)
1212 # License for the specific language governing permissions and limitations
1313 # under the License.
1414
15 import contextlib
16
1517 from oslo_log import log
1618
1719 from tempest.common import waiters
5456 if item not in disks_list_before_attach][0]
5557 return volume_name
5658
57 def _get_file_md5(self, ip_address, filename, dev_name=None,
58 mount_path='/mnt', private_key=None, server=None):
59
60 ssh_client = self.get_remote_client(ip_address,
61 private_key=private_key,
62 server=server)
59 @contextlib.contextmanager
60 def mount_dev_path(self, ssh_client, dev_name, mount_path):
6361 if dev_name is not None:
6462 ssh_client.exec_command('sudo mount /dev/%s %s' % (dev_name,
6563 mount_path))
66
67 md5_sum = ssh_client.exec_command(
68 'sudo md5sum %s/%s|cut -c 1-32' % (mount_path, filename))
69 if dev_name is not None:
64 yield
7065 ssh_client.exec_command('sudo umount %s' % mount_path)
66 else:
67 yield
68
69 def _get_file_md5(self, ip_address, filename, dev_name=None,
70 mount_path='/mnt', private_key=None, server=None):
71
72 ssh_client = self.get_remote_client(ip_address,
73 private_key=private_key,
74 server=server)
75 with self.mount_dev_path(ssh_client, dev_name, mount_path):
76 md5_sum = ssh_client.exec_command(
77 'sudo md5sum %s/%s|cut -c 1-32' % (mount_path, filename))
7178 return md5_sum
7279
7380 def _count_files(self, ip_address, dev_name=None, mount_path='/mnt',
7582 ssh_client = self.get_remote_client(ip_address,
7683 private_key=private_key,
7784 server=server)
78 if dev_name is not None:
79 ssh_client.exec_command('sudo mount /dev/%s %s' % (dev_name,
80 mount_path))
81 count = ssh_client.exec_command('sudo ls -l %s | wc -l' % mount_path)
82 if dev_name is not None:
83 ssh_client.exec_command('sudo umount %s' % mount_path)
85 with self.mount_dev_path(ssh_client, dev_name, mount_path):
86 count = ssh_client.exec_command(
87 'sudo ls -l %s | wc -l' % mount_path)
8488 # We subtract 2 from the count: one for the "total" summary line
8589 # that 'ls -l' prints first, and one for the lost+found directory
8690 # created when the filesystem was made
99103 private_key=private_key,
100104 server=server)
101105
102 if dev_name is not None:
103 ssh_client.exec_command('sudo mount /dev/%s %s' % (dev_name,
104 mount_path))
105 ssh_client.exec_command(
106 'sudo dd bs=1024 count=100 if=/dev/urandom of=/%s/%s' %
107 (mount_path, filename))
108 md5 = ssh_client.exec_command(
109 'sudo md5sum -b %s/%s|cut -c 1-32' % (mount_path, filename))
110 ssh_client.exec_command('sudo sync')
111 if dev_name is not None:
112 ssh_client.exec_command('sudo umount %s' % mount_path)
106 with self.mount_dev_path(ssh_client, dev_name, mount_path):
107 ssh_client.exec_command(
108 'sudo dd bs=1024 count=100 if=/dev/urandom of=/%s/%s' %
109 (mount_path, filename))
110 md5 = ssh_client.exec_command(
111 'sudo md5sum -b %s/%s|cut -c 1-32' % (mount_path, filename))
112 ssh_client.exec_command('sudo sync')
113113 return md5
114114
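The `mount_dev_path` refactor above folds the repeated mount/umount pairs into one context manager. A self-contained variant of the same pattern, with `exec_command` replaced by an injectable `run` callable (an illustrative stand-in, not the plugin's API) and a `try`/`finally` added so the unmount also happens if the body raises:

```python
import contextlib


@contextlib.contextmanager
def mounted(run, dev_name, mount_path='/mnt'):
    """Mount /dev/<dev_name> around the body; no-op when dev_name is None.

    `run` is any callable that executes a shell command (the plugin would
    pass ssh_client.exec_command); here a list-recording stub suffices.
    """
    if dev_name is None:
        yield
        return
    run('sudo mount /dev/%s %s' % (dev_name, mount_path))
    try:
        yield
    finally:
        # Unlike the plugin's version, unmount even if the body fails,
        # so a failed check never leaves the device mounted.
        run('sudo umount %s' % mount_path)


commands = []
with mounted(commands.append, 'vdb'):
    commands.append('sudo md5sum /mnt/file1')
```

The recorded command list shows the mount bracketing the body exactly as the three refactored helpers expect.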
115115 def get_md5_from_file(self, instance, instance_ip, filename,
123123 private_key=self.keypair['private_key'],
124124 server=instance)
125125 return count, md5_sum
126
127 def write_data_to_device(self, ip_address, out_dev, in_dev='/dev/urandom',
128 bs=1024, count=100, private_key=None,
129 server=None, sha_sum=False):
130 ssh_client = self.get_remote_client(
131 ip_address, private_key=private_key, server=server)
132
133 # Write data to device
134 write_command = (
135 'sudo dd bs=%(bs)s count=%(count)s if=%(in_dev)s of=%(out_dev)s '
136 '&& sudo dd bs=%(bs)s count=%(count)s if=%(out_dev)s' %
137 {'bs': str(bs), 'count': str(count), 'in_dev': in_dev,
138 'out_dev': out_dev})
139 if sha_sum:
140 # If we want to read sha1sum instead of the device data
141 write_command += ' | sha1sum | head -c 40'
142 data = ssh_client.exec_command(write_command)
143
144 return data
145
146 def read_data_from_device(self, ip_address, in_dev, bs=1024, count=100,
147 private_key=None, server=None, sha_sum=False):
148 ssh_client = self.get_remote_client(
149 ip_address, private_key=private_key, server=server)
150
151 # Read data from device
152 read_command = ('sudo dd bs=%(bs)s count=%(count)s if=%(in_dev)s' %
153 {'bs': bs, 'count': count, 'in_dev': in_dev})
154 if sha_sum:
155 # If we want to read sha1sum instead of the device data
156 read_command += ' | sha1sum | head -c 40'
157 data = ssh_client.exec_command(read_command)
158
159 return data
126160
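The new `write_data_to_device` and `read_data_from_device` helpers differ only in whether `dd` writes first; both can pipe through `sha1sum` so only a 40-character digest crosses the SSH channel. A sketch of just the command construction (`dd_command` is a hypothetical helper, not part of the plugin):

```python
def dd_command(in_dev, out_dev=None, bs=1024, count=100, sha_sum=False):
    """Build the dd pipeline used by the read/write helpers above."""
    cmd = 'sudo dd bs=%d count=%d if=%s' % (bs, count, in_dev)
    if out_dev is not None:
        # Write path: copy data onto the device, then immediately read
        # it back so the digest reflects what actually landed on disk.
        cmd += ' of=%s && sudo dd bs=%d count=%d if=%s' % (
            out_dev, bs, count, out_dev)
    if sha_sum:
        # Compare 40-char digests instead of shipping raw bytes over SSH.
        cmd += ' | sha1sum | head -c 40'
    return cmd


read_cmd = dd_command('/dev/vdc', sha_sum=True)
```

Comparing digests rather than raw device contents is what lets the multiattach test verify, cheaply, that both servers see the same bytes.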
127161 def _attach_and_get_volume_device_name(self, server, volume, instance_ip,
128162 private_key):
3535 1) Create an instance with ephemeral disk
3636 2) Create a volume, attach it to the instance and create a filesystem
3737 on it and mount it
38 3) Mount the volume, create a file and write data into it, Unmount it
38 3) Create a file and write data into it, Unmount it
3939 4) create snapshot
4040 5) repeat 3 and 4 two more times (simply creating 3 snapshots)
4141
9292 # Detach the volume
9393 self.nova_volume_detach(server, volume)
9494
95 # Create volume from snapshot, attach it to instance and check file
96 # and contents for snap1
97 volume_snap_1 = self.create_volume(snapshot_id=snapshot1['id'])
98 volume_device_name, __ = self._attach_and_get_volume_device_name(
99 server, volume_snap_1, instance_ip, self.keypair['private_key'])
100 count_snap_1, md5_file_1 = self.get_md5_from_file(
101 server, instance_ip, 'file1', dev_name=volume_device_name)
102 # Detach the volume
103 self.nova_volume_detach(server, volume_snap_1)
95 snap_map = {1: snapshot1, 2: snapshot2, 3: snapshot3}
96 file_map = {1: file1_md5, 2: file2_md5, 3: file3_md5}
10497
105 self.assertEqual(count_snap_1, 1)
106 self.assertEqual(file1_md5, md5_file_1)
98 # Loop over 3 times to check the data integrity of all 3 snapshots
99 for i in range(1, 4):
100 # Create volume from snapshot, attach it to instance and check file
101 # and contents for snap
102 volume_snap = self.create_volume(snapshot_id=snap_map[i]['id'])
103 volume_device_name, __ = self._attach_and_get_volume_device_name(
104 server, volume_snap, instance_ip, self.keypair['private_key'])
105 count_snap, md5_file = self.get_md5_from_file(
106 server, instance_ip, 'file' + str(i),
107 dev_name=volume_device_name)
108 # Detach the volume
109 self.nova_volume_detach(server, volume_snap)
107110
108 # Create volume from snapshot, attach it to instance and check file
109 # and contents for snap2
110 volume_snap_2 = self.create_volume(snapshot_id=snapshot2['id'])
111 volume_device_name, __ = self._attach_and_get_volume_device_name(
112 server, volume_snap_2, instance_ip, self.keypair['private_key'])
113 count_snap_2, md5_file_2 = self.get_md5_from_file(
114 server, instance_ip, 'file2', dev_name=volume_device_name)
115 # Detach the volume
116 self.nova_volume_detach(server, volume_snap_2)
117
118 self.assertEqual(count_snap_2, 2)
119 self.assertEqual(file2_md5, md5_file_2)
120
121 # Create volume from snapshot, attach it to instance and check file
122 # and contents for snap3
123 volume_snap_3 = self.create_volume(snapshot_id=snapshot3['id'])
124 volume_device_name, __ = self._attach_and_get_volume_device_name(
125 server, volume_snap_3, instance_ip, self.keypair['private_key'])
126 count_snap_3, md5_file_3 = self.get_md5_from_file(
127 server, instance_ip, 'file3', dev_name=volume_device_name)
128 # Detach the volume
129 self.nova_volume_detach(server, volume_snap_3)
130
131 self.assertEqual(count_snap_3, 3)
132 self.assertEqual(file3_md5, md5_file_3)
111 self.assertEqual(count_snap, i)
112 self.assertEqual(file_map[i], md5_file)
0 # Copyright 2022 Red Hat, Inc.
1 # All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License"); you may
4 # not use this file except in compliance with the License. You may obtain
5 # a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
11 # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
12 # License for the specific language governing permissions and limitations
13 # under the License.
14
15 from tempest import config
16 from tempest.lib import decorators
17 from tempest.lib import exceptions as lib_exc
18
19 from cinder_tempest_plugin.scenario import manager
20 from tempest.scenario import manager as tempest_manager
21
22 CONF = config.CONF
23
24
25 class VolumeMultiattachTests(manager.ScenarioTest,
26 tempest_manager.EncryptionScenarioTest):
27
28 compute_min_microversion = '2.60'
29 compute_max_microversion = 'latest'
30
31 def setUp(self):
32 super(VolumeMultiattachTests, self).setUp()
33 self.keypair = self.create_keypair()
34 self.security_group = self.create_security_group()
35
36 @classmethod
37 def skip_checks(cls):
38 super(VolumeMultiattachTests, cls).skip_checks()
39 if not CONF.compute_feature_enabled.volume_multiattach:
40 raise cls.skipException('Volume multi-attach is not available.')
41
42 def _verify_attachment(self, volume_id, server_id):
43 volume = self.volumes_client.show_volume(volume_id)['volume']
44 server_ids = (
45 [attachment['server_id'] for attachment in volume['attachments']])
46 self.assertIn(server_id, server_ids)
47
48 @decorators.idempotent_id('e6604b85-5280-4f7e-90b5-186248fd3423')
49 def test_multiattach_data_integrity(self):
50
51 # Create an instance
52 server_1 = self.create_server(
53 key_name=self.keypair['name'],
54 security_groups=[{'name': self.security_group['name']}])
55
56 # Create multiattach type
57 multiattach_vol_type = self.create_volume_type(
58 extra_specs={'multiattach': "<is> True"})
59
60 # Create a multiattach volume
61 volume = self.create_volume(volume_type=multiattach_vol_type['id'])
62
63 # Create encrypted volume
64 encrypted_volume = self.create_encrypted_volume(
65 'luks', volume_type='luks')
66
67 # Create a normal volume
68 simple_volume = self.create_volume()
69
70 # Attach normal and encrypted volumes (these volumes are not used in
71 # the current test but are used to emulate a real-world scenario
72 # where different types of volumes will be attached to the server)
73 self.attach_volume(server_1, simple_volume)
74 self.attach_volume(server_1, encrypted_volume)
75
76 instance_ip = self.get_server_ip(server_1)
77
78 # Attach volume to instance and find its device name (e.g. /dev/vdb)
79 volume_device_name_inst_1, __ = (
80 self._attach_and_get_volume_device_name(
81 server_1, volume, instance_ip, self.keypair['private_key']))
82
83 out_device = '/dev/' + volume_device_name_inst_1
84
85 # This data is written from the first server and will be used to
86 # verify when reading data from second server
87 device_data_inst_1 = self.write_data_to_device(
88 instance_ip, out_device, private_key=self.keypair['private_key'],
89 server=server_1, sha_sum=True)
90
91 # Create another instance
92 server_2 = self.create_server(
93 key_name=self.keypair['name'],
94 security_groups=[{'name': self.security_group['name']}])
95
96 instance_2_ip = self.get_server_ip(server_2)
97
98 # Attach volume to instance and find its device name (e.g. /dev/vdc)
99 volume_device_name_inst_2, __ = (
100 self._attach_and_get_volume_device_name(
101 server_2, volume, instance_2_ip, self.keypair['private_key']))
102
103 in_device = '/dev/' + volume_device_name_inst_2
104
105 # Read data from volume device
106 device_data_inst_2 = self.read_data_from_device(
107 instance_2_ip, in_device, private_key=self.keypair['private_key'],
108 server=server_2, sha_sum=True)
109
110 self._verify_attachment(volume['id'], server_1['id'])
111 self._verify_attachment(volume['id'], server_2['id'])
112 self.assertEqual(device_data_inst_1, device_data_inst_2)
113
114 @decorators.idempotent_id('53514da8-f49c-4cda-8792-ff4a2fa69977')
115 def test_volume_multiattach_same_host_negative(self):
116 # Create an instance
117 server = self.create_server(
118 key_name=self.keypair['name'],
119 security_groups=[{'name': self.security_group['name']}])
120
121 # Create multiattach type
122 multiattach_vol_type = self.create_volume_type(
123 extra_specs={'multiattach': "<is> True"})
124
125 # Create an empty volume
126 volume = self.create_volume(volume_type=multiattach_vol_type['id'])
127
128 # Attach volume to instance
129 attachment = self.attach_volume(server, volume)
130
131 self.assertEqual(server['id'], attachment['serverId'])
132
133 # Try attaching the volume to the same instance
134 self.assertRaises(lib_exc.BadRequest, self.attach_volume, server,
135 volume)
1313 # License for the specific language governing permissions and limitations
1414 # under the License.
1515
16 import http.client as http_client
1617 import time
1718
1819 from oslo_serialization import jsonutils as json
19 from six.moves import http_client
2020 from tempest.lib.common import rest_client
2121 from tempest.lib import exceptions as lib_exc
2222
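The `six.moves.http_client` shim is replaced by importing the Python 3 stdlib module directly. The status constants it re-exported keep the same names, so callers need no change beyond the import line; a quick sanity check:

```python
# Python 3 stdlib module, aliased the same way the plugin now does it.
import http.client as http_client

# These are the same constants six.moves.http_client re-exported.
codes = (http_client.OK, http_client.ACCEPTED, http_client.NOT_FOUND)
```

This drop-in swap is what allows `six` to be removed from requirements.txt below.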
33
44 pbr!=2.1.0,>=2.0.0 # Apache-2.0
55 oslo.config>=5.1.0 # Apache-2.0
6 six>=1.10.0 # MIT
76 oslo.serialization!=2.19.1,>=2.18.0 # Apache-2.0
87 tempest>=27.0.0 # Apache-2.0
00 [metadata]
11 name = cinder-tempest-plugin
22 summary = Tempest plugin tests for Cinder.
3 description-file =
3 description_file =
44 README.rst
55 author = OpenStack
6 author-email = openstack-discuss@lists.openstack.org
7 home-page = http://www.openstack.org/
6 author_email = openstack-discuss@lists.openstack.org
7 home_page = http://www.openstack.org/
8 python_requires = >=3.6
89 classifier =
910 Environment :: OpenStack
1011 Intended Audience :: Information Technology
1213 License :: OSI Approved :: Apache Software License
1314 Operating System :: POSIX :: Linux
1415 Programming Language :: Python
16 Programming Language :: Python :: 3 :: Only
1517 Programming Language :: Python :: 3
1618 Programming Language :: Python :: 3.6
1719 Programming Language :: Python :: 3.7