OpenStack Kilo: Multiple Ceph Storage Backends for Cinder
Environment
| Host       | Address      | Version        |
|------------|--------------|----------------|
| controller | 66.66.66.71  | OpenStack-kilo |
| compute    | 66.66.66.72  | OpenStack-kilo |
| ceph01     | 66.66.66.235 | Ceph-Giant     |
| ceph02     | 66.66.66.232 | Ceph-Giant     |
Cinder Configuration
The following is done on the compute node.
[root@compute ~]# vi /etc/cinder/cinder.conf
[DEFAULT]
glance_host = controller
enabled_backends = ceph01,ceph02                        # enable both backends here

[ceph01]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph01.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = df76a280-68fb-4bfb-bfab-976f0c71efa2  # import this secret into virsh yourself; same below
volume_backend_name = ceph01                            # name the backend so it can be referenced later; same below

[ceph02]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = ceph-cinder
rbd_ceph_conf = /etc/ceph/ceph02.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
glance_api_version = 2
rbd_user = ceph-cinder
rbd_secret_uuid = 207a92a6-acaf-47c2-9556-e560a79ba472
volume_backend_name = ceph02
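Note that this assumes each cluster's config file and the matching Cinder keyring already exist on the compute node at the paths referenced above. A minimal sketch of getting them there (the source paths and keyring file names are assumptions based on a default Ceph layout):

[root@ceph01 my-cluster]# scp /etc/ceph/ceph.conf root@66.66.66.72:/etc/ceph/ceph01.conf
[root@ceph01 my-cluster]# scp /etc/ceph/ceph.client.cinder.keyring root@66.66.66.72:/etc/ceph/
[root@ceph02 ~]# scp /etc/ceph/ceph.conf root@66.66.66.72:/etc/ceph/ceph02.conf
[root@ceph02 ~]# scp /etc/ceph/ceph.client.ceph-cinder.keyring root@66.66.66.72:/etc/ceph/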
For the details of importing the keys into virsh, see my other post on setting up Cinder storage.
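In short, it comes down to defining a libvirt secret with the UUID from cinder.conf and attaching the Ceph key to it. A minimal sketch for the ceph01 backend, where the XML file name is illustrative and client.cinder.key is assumed to hold the output of ceph auth get-key client.cinder from the ceph01 cluster (repeat with the other UUID and the ceph-cinder key for ceph02):

[root@compute ~]# cat > ceph01-secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>df76a280-68fb-4bfb-bfab-976f0c71efa2</uuid>
  <usage type='ceph'>
    <name>client.cinder secret for ceph01</name>
  </usage>
</secret>
EOF
[root@compute ~]# virsh secret-define --file ceph01-secret.xml
[root@compute ~]# virsh secret-set-value --secret df76a280-68fb-4bfb-bfab-976f0c71efa2 --base64 "$(cat client.cinder.key)"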
Restart the Cinder volume service:
[root@compute ~]# systemctl restart openstack-cinder-volume
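If either backend fails to come up after the restart, the service status and the Cinder volume log are the first places to look (the log path assumes the default packaging):

[root@compute ~]# systemctl status openstack-cinder-volume
[root@compute ~]# tail -n 20 /var/log/cinder/volume.log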
Check whether both backends came up successfully.
On the controller node:
[root@controller ~]# source admin-openrc
[root@controller ~]# cinder service-list
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
|      Binary      |      Host      | Zone |  Status | State |         Updated_at         | Disabled Reason |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler |   controller   | nova | enabled |   up  | 2018-09-05T04:28:07.000000 |        -        |
|  cinder-volume   | compute@ceph01 | nova | enabled |   up  | 2018-09-05T04:28:05.000000 |        -        |
|  cinder-volume   | compute@ceph02 | nova | enabled |   up  | 2018-09-05T04:28:05.000000 |        -        |
+------------------+----------------+------+---------+-------+----------------------------+-----------------+
Both compute@ceph01 and compute@ceph02 are listed with State up, so the multi-Ceph setup is working.
Setting Up Volume Types
If you create a volume now, the scheduler picks a suitable backend for it. To choose a specific storage pool at creation time, define volume types.
Create the volume types:
[root@controller ~]# cinder type-create ceph01
[root@controller ~]# cinder type-create ceph02
List the volume types:
[root@controller ~]# cinder type-list
+--------------------------------------+--------+
|                  ID                  |  Name  |
+--------------------------------------+--------+
| 8c50de7d-d6ba-4866-ba42-93d14859860b | ceph01 |
| abdaa7a5-f95b-4f24-ab21-6d5bc74344a7 | ceph02 |
+--------------------------------------+--------+
At this point you still cannot target a specific pool; each type also needs its volume_backend_name set:
[root@controller ~]# cinder type-key ceph01 set volume_backend_name=ceph01
[root@controller ~]# cinder type-key ceph02 set volume_backend_name=ceph02
The volume_backend_name values here are the ones we set in cinder.conf at the start.
Verify that the keys were set:
[root@controller ~]# cinder extra-specs-list
+--------------------------------------+--------+-------------------------------------+
|                  ID                  |  Name  |             extra_specs             |
+--------------------------------------+--------+-------------------------------------+
| 8c50de7d-d6ba-4866-ba42-93d14859860b | ceph01 | {u'volume_backend_name': u'ceph01'} |
| abdaa7a5-f95b-4f24-ab21-6d5bc74344a7 | ceph02 | {u'volume_backend_name': u'ceph02'} |
+--------------------------------------+--------+-------------------------------------+
Now volumes can be created in a specific storage pool by choosing the corresponding type.
Create a volume of type ceph01:
[root@controller ~]# cinder create --display-name ceph01 --volume-type ceph01 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2018-09-05T04:40:44.892618      |
| display_description |                 None                 |
|     display_name    |                ceph01                |
|      encrypted      |                False                 |
|          id         | 29b71020-8f0f-46d5-a2e3-5e89953a15ee |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                ceph01                |
+---------------------+--------------------------------------+
Check whether the volume was created successfully:
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 29b71020-8f0f-46d5-a2e3-5e89953a15ee | available |    ceph01    |  1   |    ceph01   |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
The status is available, so the volume was created successfully.
Now check whether the data landed in the pool on the ceph01 cluster:
[root@ceph01 my-cluster]# rbd ls ceph-cinder
volume-29b71020-8f0f-46d5-a2e3-5e89953a15ee
The image name matches the ID of the volume we just created, so the volume type routes to the right backend.
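For a closer look than a bare listing, rbd info on the same pool prints the image's size and layout (an optional extra check):

[root@ceph01 my-cluster]# rbd info ceph-cinder/volume-29b71020-8f0f-46d5-a2e3-5e89953a15ee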
Next, create a ceph02 volume:
[root@controller ~]# cinder create --display-name ceph02 --volume-type ceph02 1
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|     attachments     |                  []                  |
|  availability_zone  |                 nova                 |
|       bootable      |                false                 |
|      created_at     |      2018-09-05T04:44:26.657639      |
| display_description |                 None                 |
|     display_name    |                ceph02                |
|      encrypted      |                False                 |
|          id         | e170af88-5cb3-4b47-b550-f254bf544b50 |
|       metadata      |                  {}                  |
|     multiattach     |                false                 |
|         size        |                  1                   |
|     snapshot_id     |                 None                 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                ceph02                |
+---------------------+--------------------------------------+
Check whether the volume was created successfully:
[root@controller ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 29b71020-8f0f-46d5-a2e3-5e89953a15ee | available |    ceph01    |  1   |    ceph01   |  false   |             |
| e170af88-5cb3-4b47-b550-f254bf544b50 | available |    ceph02    |  1   |    ceph02   |  false   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
The ceph02 volume's status is available, so it was created successfully.
Now check whether the data landed in the pool on the ceph02 cluster:
[root@ceph02 ~]# rbd ls volumes
volume-e170af88-5cb3-4b47-b550-f254bf544b50
The image name in the pool matches the volume ID, so this type is routed correctly as well.
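As a final cross-check from the OpenStack side, an admin-scoped cinder show reports which backend a volume was scheduled to; the os-vol-host-attr:host field should reference the ceph02 backend here:

[root@controller ~]# cinder show e170af88-5cb3-4b47-b550-f254bf544b50 | grep os-vol-host-attr:host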