Integrating OpenStack with Ceph
- On the Ceph cluster, create the pools that OpenStack needs:

```bash
sudo ceph osd pool create volumes 128
sudo ceph osd pool create images 128
sudo ceph osd pool create backups 128
sudo ceph osd pool create vms 128
```
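A quick check that the pools exist and how full they are (128 is the PG count per pool; adjust it to the size of your cluster):

```bash
sudo ceph osd lspools   # should list volumes, images, backups and vms
sudo ceph df            # per-pool usage
```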
- Copy /etc/ceph/ceph.conf from the Ceph server to the OpenStack compute and glance nodes.
- Install the Ceph dependencies on those nodes:
```bash
sudo yum install python-rbd ceph-common
```
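An optional sanity check that the client packages are usable on each node (the `rbd` Python module is what python-rbd installs):

```bash
ceph --version            # the CLI from ceph-common
python -c "import rbd"    # exits silently if the python binding is importable
```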
- On the Ceph admin-node, create the cephx users that OpenStack will authenticate with:

```bash
sudo ceph auth get-or-create client.glance mon 'allow *' osd 'allow * pool=images' -o client.glance.keyring
sudo ceph auth get-or-create client.cinder mon 'allow *' osd 'allow * pool=volumes, allow * pool=vms, allow * pool=images' -o client.cinder.keyring
sudo ceph auth get-or-create client.cinder-backup mon 'allow *' osd 'allow * pool=backups' -o client.cinder-backup.keyring
```
If you set the wrong capabilities, you can change them afterwards with `ceph auth caps`:

```bash
sudo ceph auth caps client.glance mon 'allow *' osd 'allow * pool=images' -o client.glance.keyring
```
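The keyrings then have to be placed on the OpenStack nodes. A sketch based on the upstream Ceph guide, assuming hypothetical hostnames `{glance-host}` and `{cinder-host}`:

```bash
# push the glance keyring to the glance node and fix ownership
ceph auth get-or-create client.glance | ssh {glance-host} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {glance-host} sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring

# same idea for cinder and cinder-backup on the cinder node
ceph auth get-or-create client.cinder | ssh {cinder-host} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {cinder-host} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
ceph auth get-or-create client.cinder-backup | ssh {cinder-host} sudo tee /etc/ceph/ceph.client.cinder-backup.keyring
ssh {cinder-host} sudo chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring
```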
- Get the key for the cinder client:

```bash
ceph auth get-key client.cinder >> client.cinder.key
sz client.cinder.key   # then send this file to every compute node
uuidgen                # aff9070f-b853-4d19-b77c-b2aa7baca432
                       # d2b06849-6a8c-40b7-bfea-0d2a729ac70d
# generate a UUID and write it into secret.xml
```
On each compute node, put the UUID into a `secret.xml` for libvirt:

```xml
<secret ephemeral='no' private='no'>
  <uuid>{your UUID}</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
```
Then run:
```bash
sudo virsh secret-define --file secret.xml
sudo virsh secret-set-value --secret {your UUID} --base64 $(cat client.cinder.key)
rm -rf client.cinder.key secret.xml
```
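To confirm that libvirt actually stored the secret, the following should work on the compute node (using whichever UUID you generated):

```bash
sudo virsh secret-list                            # the UUID should be listed with usage "ceph client.cinder secret"
sudo virsh secret-get-value --secret {your UUID}  # prints the base64 key that was just set
```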
Repeat the steps above on compute2 as well.
- Edit /etc/glance/glance-api.conf:
```ini
[DEFAULT]
...
default_store = rbd
...

[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
show_image_direct_url = True
show_multiple_locations = True

[paste_deploy]
flavor = keystone
```
If Glance fails to connect, check whether the keyring file under /etc/ceph (not /etc/cinder!) follows the `ceph.client.*.keyring` naming format.
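Once glance-api has been restarted, a quick way to confirm that images really land in the `images` pool (the cirros image name here is just an example, and the service name assumes an RDO/CentOS install):

```bash
sudo systemctl restart openstack-glance-api

# raw images are preferred on RBD so copy-on-write cloning works
qemu-img convert -f qcow2 -O raw cirros-0.3.4-x86_64-disk.img cirros.raw
openstack image create "cirros-raw" --disk-format raw --container-format bare --file cirros.raw --public

# the new image's ID should show up as an RBD image in the images pool
sudo rbd ls images --id glance
```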
- Edit /etc/cinder/cinder.conf:
```ini
[DEFAULT]
...
enabled_backends = ceph
glance_api_version = 2

### add the following
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
host_ip = 10.0.5.10   ## replace this with the local machine's IP
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337   ## the UUID you generated earlier

# * backup *
backup_driver = cinder.backup.drivers.ceph
backup_ceph_conf = /etc/ceph/ceph.conf
backup_ceph_user = cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool = backups
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true

[libvirt]
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
```
If Cinder fails, check whether the `public network` entry in /etc/ceph/ceph.conf has had an underscore added to it.
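After restarting the cinder services, a test volume should show up as an RBD image in the `volumes` pool. A sketch, with service names as on an RDO/CentOS install:

```bash
sudo systemctl restart openstack-cinder-volume openstack-cinder-backup

openstack volume create --size 1 test-rbd-volume
openstack volume list                  # the volume should reach "available"

# the backing RBD image is named volume-<volume id>
sudo rbd ls volumes --id cinder
```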
- On each compute node, add the following `[client]` section to /etc/ceph/ceph.conf (the /etc/nova/nova.conf side is sketched after the commands below):
```ini
[client]
rbd cache = true
rbd cache writethrough until flush = true
admin socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log file = /var/log/qemu/qemu-guest-$pid.log
rbd concurrent management ops = 20
```
Create the socket and log directories referenced above and give them the right ownership:

```bash
mkdir -p /var/run/ceph/guests/ /var/log/qemu/
chown qemu:libvirt /var/run/ceph/guests /var/log/qemu/
```
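For nova itself, the RBD settings go into the `[libvirt]` section of /etc/nova/nova.conf. A sketch based on the upstream guide, reusing the cinder user and the secret UUID from the earlier steps (the exact option set varies between releases, so check it against your version):

```ini
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
disk_cachemodes = "network=writeback"
inject_password = false
inject_key = false
inject_partition = -2
```

Restart nova-compute afterwards, e.g. `sudo systemctl restart openstack-nova-compute`.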