Ceph Installation and Use as OpenStack Volume Storage
2019-08-27 16:59:41

0. Configure the installation source (save as /etc/yum.repos.d/ceph.repo):

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
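
After saving the repo file, refresh the yum metadata cache so the new repository is picked up:

yum clean all && yum makecache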

1. Install ceph-deploy on the admin (deploy) host:

yum install ceph-deploy

2. Configure the firewall on every Ceph host:

firewall-cmd --add-service ceph
firewall-cmd --add-service ceph-mon
firewall-cmd --permanent --add-service ceph
firewall-cmd --permanent --add-service ceph-mon

3. On every Ceph host, install time synchronization and the Ceph packages:

yum install ntp ntpdate
yum install ceph
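
Ceph monitors are sensitive to clock skew, so make sure the time service actually runs; a minimal sketch using the ntpd unit installed above:

systemctl enable --now ntpd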

4. On the ceph-deploy node, create the block storage cluster (ceph-deploy assumes passwordless SSH and hostname resolution to server-1/2/3 are already in place):

ceph-deploy new server-1
ceph-deploy mon create-initial
ceph-deploy admin server-1 server-2 server-3
ceph-deploy mgr create server-1
ceph-deploy mgr create server-2
ceph-deploy mgr create server-3
ceph-deploy osd create --data /dev/sdb server-1
ceph-deploy osd create --data /dev/sdb server-2
ceph-deploy osd create --data /dev/sdb server-3
ceph -s

5. Separate SATA and SSD devices into different storage pools via CRUSH rules:

  5.1 Export the current CRUSH map:

    ceph osd getcrushmap -o mycrushmap

  5.2 Decompile the CRUSH map into text:

    crushtool -d ./mycrushmap > mycrushmap.txt

  5.3 Edit the CRUSH map text (see the rule sketch below):
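
    For example, a replicated rule that places data only on SSD devices; this is a minimal sketch assuming the OSDs carry the ssd device class (Nautilus assigns hdd/ssd classes automatically at OSD creation), with the rule name ssd_rule matching the one used in step 6:

    rule ssd_rule {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step chooseleaf firstn 0 type host
        step emit
    }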

  5.4 Compile the edited text back into a binary map:

    crushtool -c mycrushmap.txt  -o mycrushmap-new 

  5.5 Re-import the new CRUSH map:

    ceph osd setcrushmap -i mycrushmap-new
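
    To verify the import, list the CRUSH rules and confirm ssd_rule is present:

    ceph osd crush rule ls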

6. Create the storage pool and bind it to the SSD rule:

ceph osd pool create test-pool 128 128
ceph osd pool set test-pool crush_rule ssd_rule
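
Since Luminous, a pool should also be tagged with the application that will use it before RBD clients write to it; the rule binding can be checked at the same time:

ceph osd pool application enable test-pool rbd
ceph osd pool get test-pool crush_rule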

7. Integration with OpenStack:

  7.1 On the glance-api node, install the client library: yum install python-rbd

      On the nova-compute and cinder-volume nodes, install: yum install ceph-common
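
The auth caps in 7.2 reference three pools (images, volumes, vms) that must exist first; a minimal sketch with the conventional pool names and PG counts (adjust to the cluster size):

ceph osd pool create images 128
ceph osd pool create volumes 128
ceph osd pool create vms 128
rbd pool init images
rbd pool init volumes
rbd pool init vms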

  7.2 Create the access keys and export them to keyring files:

    ceph auth get-or-create client.glance mon 'profile rbd' osd 'profile rbd pool=images' > ceph.client.glance.keyring
    ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images' > ceph.client.cinder.keyring
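
The keyrings then need to reach the consuming nodes, and step 7.6 needs the bare cinder key; a sketch assuming the glance and cinder hostnames from 7.5 (compute-1 is a hypothetical nova-compute host):

    scp ceph.client.glance.keyring glance:/etc/ceph/
    scp ceph.client.cinder.keyring cinder:/etc/ceph/
    ceph auth get-key client.cinder > client.cinder.key
    scp client.cinder.key compute-1:~/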

7.3 Configure the glance service to use the Ceph backend (in /etc/glance/glance-api.conf):

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
show_image_direct_url = True

7.4 Configure the cinder service (in /etc/cinder/cinder.conf; note that rbd_secret_uuid must match the libvirt secret defined in 7.6):

[DEFAULT]
...
enabled_backends = ceph
glance_api_version = 2
...

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5
rbd_store_chunk_size = 4
rados_connect_timeout = -1
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337

7.5 Copy the Ceph config file to the glance and cinder nodes, then restart the services:

scp ./ceph.conf glance:/etc/ceph/
scp ./ceph.conf cinder:/etc/ceph/
# restart the services
systemctl restart openstack-glance-api
systemctl restart openstack-cinder-volume

7.6 Import the access key into libvirt on the compute nodes (save the XML below as secret.xml):

<secret ephemeral='no' private='no'>
   <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
   <usage type='ceph'>
      <name>client.cinder secret</name>
   </usage>
</secret>
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key)
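
With the secret in place, each nova-compute node can reference the same user and UUID from /etc/nova/nova.conf so that instances boot from the vms pool; a minimal sketch using the standard [libvirt] RBD options:

[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337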

