Deploying Ceph 14.2.10 with ceph-deploy on BCLinux (aarch64)

Run ssh-copy-id so the deploy node can log in to the other three hosts without a password.
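The key distribution can be scripted; printed here as a dry run, since it needs the live hosts and an existing key pair (generated beforehand with ssh-keygen if absent):

```shell
# Dry run: print the per-host commands instead of executing them.
for h in ceph-1 ceph-2 ceph-3; do
    echo "ssh-copy-id root@$h"
done
```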

All machines use the same disk layout; vdb is planned as the Ceph data disk.

Install ceph-deploy

 pip install ceph-deploy

Passwordless SSH is in place; next, set the host names. On each node (ceph-0 through ceph-3):

hostnamectl --static set-hostname ceph-0   # ceph-1 / ceph-2 / ceph-3 on the other nodes
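The shorthand above expands to one command per host; as a dry-run loop:

```shell
# Dry run: the one command per host that the shorthand expands to.
for i in 0 1 2 3; do
    echo "on ceph-$i: hostnamectl --static set-hostname ceph-$i"
done
```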

Configure /etc/hosts:

172.17.163.105 ceph-0
172.17.112.206 ceph-1
172.17.227.100 ceph-2
172.17.67.157 ceph-3
scp /etc/hosts root@ceph-1:/etc/hosts
scp /etc/hosts root@ceph-2:/etc/hosts
scp /etc/hosts root@ceph-3:/etc/hosts

Install the packages on the deploy node first.

 rpm -ivhU liboath/liboath-*

Filter out the packages that cannot be installed:

 find rpmbuild/RPMS/ | grep \\.rpm | grep -v debug | grep -v k8s | grep -v mgr\-rook | grep -v mgr\-ssh | xargs -i echo "{} \\"
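An equivalent filter can be expressed with find's own predicates and handed straight to yum; the trailing "echo" keeps this a dry run (paths assume the default ~/rpmbuild layout from the build step):

```shell
# mkdir -p is a no-op on the build host; it keeps the sketch runnable anywhere.
mkdir -p rpmbuild/RPMS
find rpmbuild/RPMS/ -name '*.rpm' \
    ! -name '*debug*' ! -name '*k8s*' ! -name '*mgr-rook*' ! -name '*mgr-ssh*' \
    | xargs echo yum install -y
```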

Add the ceph user, then install the packages:

useradd ceph
yum install -y rpmbuild/RPMS/noarch/ceph-mgr-dashboard-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-cloud-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-grafana-dashboards-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-local-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/aarch64/librgw-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-base-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-mirror-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-test-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rados-objclass-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mds-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mgr-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd1-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libradospp-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-nbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mon-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-radosgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rados-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librgw2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-common-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-compat-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-osd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rados-14.2.10-0.oe1.bclinux.aarch64.rpm

Installation succeeds (the captured run was a second reinstall).

Note: if the ceph user has not been created beforehand, the install emits a warning.

Distribute the compiled RPMs plus the el8 liboath packages.

On ceph-0:

rsync -avr -P liboath root@ceph-1:~/
rsync -avr -P liboath root@ceph-2:~/
rsync -avr -P liboath root@ceph-3:~/
rsync -avr -P rpmbuild/RPMS root@ceph-1:~/rpmbuild/
rsync -avr -P rpmbuild/RPMS root@ceph-2:~/rpmbuild/
rsync -avr -P rpmbuild/RPMS root@ceph-3:~/rpmbuild/
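The six rsync invocations above can be folded into one loop; echoed as a dry run because it needs the live hosts:

```shell
# Dry run: print the rsync command pair for each target host.
for h in ceph-1 ceph-2 ceph-3; do
    echo "rsync -avr -P liboath root@$h:~/"
    echo "rsync -avr -P rpmbuild/RPMS root@$h:~/rpmbuild/"
done
```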

Log in to ceph-1, ceph-2 and ceph-3 and run the following on each (this could later be wrapped in Ansible):

cd ~ 
rpm -ivhU liboath/liboath-*
useradd ceph
yum install -y rpmbuild/RPMS/noarch/ceph-mgr-dashboard-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-cloud-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-grafana-dashboards-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/noarch/ceph-mgr-diskprediction-local-14.2.10-0.oe1.bclinux.noarch.rpm \
rpmbuild/RPMS/aarch64/librgw-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-base-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-mirror-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-test-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rados-objclass-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mds-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mgr-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librbd1-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libradospp-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-nbd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/libcephfs-devel-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-mon-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-radosgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-argparse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-cephfs-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rados-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/rbd-fuse-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librgw2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python3-rgw-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-common-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/librados2-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-ceph-compat-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/ceph-osd-14.2.10-0.oe1.bclinux.aarch64.rpm \
rpmbuild/RPMS/aarch64/python-rados-14.2.10-0.oe1.bclinux.aarch64.rpm

Installation log

Time synchronization (NTP)

On ceph-0:

yum install ntpdate

Edit /etc/ntp.conf:

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noepeer noquery
restrict source nomodify notrap noepeer noquery
restrict 127.0.0.1 
restrict ::1
tos maxclock 5
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
server asia.pool.ntp.org

Start ntpd:

systemctl enable ntpd --now

On ceph-1, ceph-2 and ceph-3:

 yum install -y ntpdate 

Edit /etc/ntp.conf (the clients sync against ceph-0):

driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noepeer noquery
restrict source nomodify notrap noepeer noquery
restrict 127.0.0.1 
restrict ::1
tos maxclock 5
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
server ceph-0

Start ntpd:

systemctl enable ntpd --now

Deploy the mon nodes and generate ceph.conf:

cd /etc/ceph/
ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3

This fails:

[ceph_deploy][ERROR ] UnsupportedPlatform: Platform is not supported: bclinux 21.10U3 LTS 21.10U3

Edit /usr/lib/python2.7/site-packages/ceph_deploy/calamari.py and add bclinux to the supported distros. The diff:

[root@ceph-0 ceph_deploy]# diff calamari.py calamari.py.bak -Npr
*** calamari.py	2023-11-10 16:56:49.445013228 +0800
--- calamari.py.bak	2023-11-10 16:56:14.793013228 +0800
*************** def distro_is_supported(distro_name):
*** 13,19 ****
      An enforcer of supported distros that can differ from what ceph-deploy
      supports.
      """
!     supported = ['centos', 'redhat', 'ubuntu', 'debian', 'bclinux']
      if distro_name in supported:
          return True
      return False
--- 13,19 ----
      An enforcer of supported distros that can differ from what ceph-deploy
      supports.
      """
!     supported = ['centos', 'redhat', 'ubuntu', 'debian']
      if distro_name in supported:
          return True
      return False

Also edit /usr/lib/python2.7/site-packages/ceph_deploy/hosts/__init__.py to map bclinux to the centos handler:

[root@ceph-0 ceph_deploy]# diff -Npr hosts/__init__.py hosts/__init__.py.bak 
*** hosts/__init__.py	2023-11-10 17:06:27.585013228 +0800
--- hosts/__init__.py.bak	2023-11-10 17:05:48.697013228 +0800
*************** def _get_distro(distro, fallback=None, u
*** 101,107 ****
          'fedora': fedora,
          'suse': suse,
          'virtuozzo': centos,
-         'bclinux': centos,
          'arch': arch
      }
--- 101,106 ----

ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3

ceph.conf is now generated successfully. Log:

[root@ceph-0 ceph]# ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffffb246c280>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xffffb236e9d0>
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /usr/sbin/ip link show
[ceph-0][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-0][DEBUG ] IP addresses found: [u'172.18.0.1', u'172.17.163.105']
[ceph_deploy.new][DEBUG ] Resolving host ceph-0
[ceph_deploy.new][DEBUG ] Monitor ceph-0 at 172.17.163.105
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-1][DEBUG ] connected to host: ceph-0 
[ceph-1][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-1
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution. Please report issues with this software via:
https://gitee.com/src-openeuler/dhcp/issues
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /usr/sbin/ip link show
[ceph-1][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-1][DEBUG ] IP addresses found: [u'172.17.112.206']
[ceph_deploy.new][DEBUG ] Resolving host ceph-1
[ceph_deploy.new][DEBUG ] Monitor ceph-1 at 172.17.112.206
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-2][DEBUG ] connected to host: ceph-0 
[ceph-2][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-2
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /usr/sbin/ip link show
[ceph-2][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-2][DEBUG ] IP addresses found: [u'172.17.227.100']
[ceph_deploy.new][DEBUG ] Resolving host ceph-2
[ceph_deploy.new][DEBUG ] Monitor ceph-2 at 172.17.227.100
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph-3][DEBUG ] connected to host: ceph-0 
[ceph-3][INFO  ] Running command: ssh -CT -o BatchMode=yes ceph-3
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /usr/sbin/ip link show
[ceph-3][INFO  ] Running command: /usr/sbin/ip addr show
[ceph-3][DEBUG ] IP addresses found: [u'172.17.67.157']
[ceph_deploy.new][DEBUG ] Resolving host ceph-3
[ceph_deploy.new][DEBUG ] Monitor ceph-3 at 172.17.67.157
[ceph_deploy.new][DEBUG ] Monitor initial members are ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.17.163.105', '172.17.112.206', '172.17.227.100', '172.17.67.157']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

The auto-generated /etc/ceph/ceph.conf:

[global]
fsid = ff72b496-d036-4f1b-b2ad-55358f3c16cb
mon_initial_members = ceph-0, ceph-1, ceph-2, ceph-3
mon_host = 172.17.163.105,172.17.112.206,172.17.227.100,172.17.67.157
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

Since this is only a test environment with no second network attached, the public_network parameter is left unset for now.
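For reference, on a deployment with a dedicated client-facing subnet the [global] section would carry one extra line like the following (the /16 range here is illustrative, derived from the monitor addresses above):

```
public_network = 172.17.0.0/16
```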

Deploy the monitors

cd /etc/ceph
ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3

Failure:

[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][ERROR ] Traceback (most recent call last):
[ceph-3][ERROR ]   File "/bin/ceph", line 151, in <module>
[ceph-3][ERROR ]     from ceph_daemon import admin_socket, DaemonWatcher, Termsize
[ceph-3][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_daemon.py", line 24, in <module>
[ceph-3][ERROR ]     from prettytable import PrettyTable, HEADER
[ceph-3][ERROR ] ImportError: No module named prettytable
[ceph-3][WARNIN] monitor: mon.ceph-3, might not be running yet
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][ERROR ] Traceback (most recent call last):
[ceph-3][ERROR ]   File "/bin/ceph", line 151, in <module>
[ceph-3][ERROR ]     from ceph_daemon import admin_socket, DaemonWatcher, Termsize
[ceph-3][ERROR ]   File "/usr/lib/python2.7/site-packages/ceph_daemon.py", line 24, in <module>
[ceph-3][ERROR ]     from prettytable import PrettyTable, HEADER
[ceph-3][ERROR ] ImportError: No module named prettytable

[ceph-3][WARNIN] monitor ceph-3 does not exist in monmap
[ceph-3][WARNIN] neither `public_addr` nor `public_network` keys are defined for monitors
[ceph-3][WARNIN] monitors may not be able to form quorum

Fix for "No module named prettytable":

pip install PrettyTable

Download the offline package.
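A sketch of the offline download step, echoed as a dry run since it needs internet access: pip can pre-fetch a package and its dependencies into a directory for transfer ("preetytable-python27" is the directory name the original rsync commands use, typo preserved).

```shell
# Dry run: print the fetch command run on an internet-connected machine.
echo "pip download prettytable -d preetytable-python27"
```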

Distribute it to the other three machines and install offline:

rsync -avr -P preetytable-python27 root@ceph-1:~/
rsync -avr -P preetytable-python27 root@ceph-2:~/
rsync -avr -P preetytable-python27 root@ceph-3:~/

Deploy the monitors again:

cd /etc/ceph
ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3

Log:

[root@ceph-0 ~]# cd /etc/ceph
[root@ceph-0 ceph]# ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff992fb320>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function mon at 0xffff993967d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  keyrings                      : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-0 ...
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-0][DEBUG ] determining if provided host has same hostname in remote
[ceph-0][DEBUG ] get remote short hostname
[ceph-0][DEBUG ] deploying mon to ceph-0
[ceph-0][DEBUG ] get remote short hostname
[ceph-0][DEBUG ] remote hostname: ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][DEBUG ] create the mon path if it does not exist
[ceph-0][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-0/done
[ceph-0][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-0][DEBUG ] create the init path if it does not exist
[ceph-0][INFO  ] Running command: systemctl enable ceph.target
[ceph-0][INFO  ] Running command: systemctl enable ceph-mon@ceph-0
[ceph-0][INFO  ] Running command: systemctl start ceph-mon@ceph-0
[ceph-0][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-0.asok mon_status
[ceph-0][DEBUG ] ********************************************************************************
[ceph-0][DEBUG ] status for monitor: mon.ceph-0
[ceph-0][DEBUG ] {
[ceph-0][DEBUG ]   "election_epoch": 8, 
[ceph-0][DEBUG ]   "extra_probe_peers": [
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     {
[ceph-0][DEBUG ]       "addrvec": [
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v2"
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         {
[ceph-0][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-0][DEBUG ]           "nonce": 0, 
[ceph-0][DEBUG ]           "type": "v1"
[ceph-0][DEBUG ]         }
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }
[ceph-0][DEBUG ]   ], 
[ceph-0][DEBUG ]   "feature_map": {
[ceph-0][DEBUG ]     "mon": [
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-0][DEBUG ]         "num": 1, 
[ceph-0][DEBUG ]         "release": "luminous"
[ceph-0][DEBUG ]       }
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "features": {
[ceph-0][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-0][DEBUG ]     "quorum_mon": [
[ceph-0][DEBUG ]       "kraken", 
[ceph-0][DEBUG ]       "luminous", 
[ceph-0][DEBUG ]       "mimic", 
[ceph-0][DEBUG ]       "osdmap-prune", 
[ceph-0][DEBUG ]       "nautilus"
[ceph-0][DEBUG ]     ], 
[ceph-0][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-0][DEBUG ]     "required_mon": [
[ceph-0][DEBUG ]       "kraken", 
[ceph-0][DEBUG ]       "luminous", 
[ceph-0][DEBUG ]       "mimic", 
[ceph-0][DEBUG ]       "osdmap-prune", 
[ceph-0][DEBUG ]       "nautilus"
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "monmap": {
[ceph-0][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-0][DEBUG ]     "epoch": 1, 
[ceph-0][DEBUG ]     "features": {
[ceph-0][DEBUG ]       "optional": [], 
[ceph-0][DEBUG ]       "persistent": [
[ceph-0][DEBUG ]         "kraken", 
[ceph-0][DEBUG ]         "luminous", 
[ceph-0][DEBUG ]         "mimic", 
[ceph-0][DEBUG ]         "osdmap-prune", 
[ceph-0][DEBUG ]         "nautilus"
[ceph-0][DEBUG ]       ]
[ceph-0][DEBUG ]     }, 
[ceph-0][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-0][DEBUG ]     "min_mon_release": 14, 
[ceph-0][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-0][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-0][DEBUG ]     "mons": [
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-3", 
[ceph-0][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 0
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-1", 
[ceph-0][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 1
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-0", 
[ceph-0][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 2
[ceph-0][DEBUG ]       }, 
[ceph-0][DEBUG ]       {
[ceph-0][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-0][DEBUG ]         "name": "ceph-2", 
[ceph-0][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-0][DEBUG ]         "public_addrs": {
[ceph-0][DEBUG ]           "addrvec": [
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v2"
[ceph-0][DEBUG ]             }, 
[ceph-0][DEBUG ]             {
[ceph-0][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-0][DEBUG ]               "nonce": 0, 
[ceph-0][DEBUG ]               "type": "v1"
[ceph-0][DEBUG ]             }
[ceph-0][DEBUG ]           ]
[ceph-0][DEBUG ]         }, 
[ceph-0][DEBUG ]         "rank": 3
[ceph-0][DEBUG ]       }
[ceph-0][DEBUG ]     ]
[ceph-0][DEBUG ]   }, 
[ceph-0][DEBUG ]   "name": "ceph-0", 
[ceph-0][DEBUG ]   "outside_quorum": [], 
[ceph-0][DEBUG ]   "quorum": [
[ceph-0][DEBUG ]     0, 
[ceph-0][DEBUG ]     1, 
[ceph-0][DEBUG ]     2, 
[ceph-0][DEBUG ]     3
[ceph-0][DEBUG ]   ], 
[ceph-0][DEBUG ]   "quorum_age": 917, 
[ceph-0][DEBUG ]   "rank": 2, 
[ceph-0][DEBUG ]   "state": "peon", 
[ceph-0][DEBUG ]   "sync_provider": []
[ceph-0][DEBUG ] }
[ceph-0][DEBUG ] ********************************************************************************
[ceph-0][INFO  ] monitor: mon.ceph-0 is running
[ceph-0][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-0.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-1 ...
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-1][DEBUG ] determining if provided host has same hostname in remote
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] deploying mon to ceph-1
[ceph-1][DEBUG ] get remote short hostname
[ceph-1][DEBUG ] remote hostname: ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][DEBUG ] create the mon path if it does not exist
[ceph-1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-1/done
[ceph-1][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-1][DEBUG ] create the init path if it does not exist
[ceph-1][INFO  ] Running command: systemctl enable ceph.target
[ceph-1][INFO  ] Running command: systemctl enable ceph-mon@ceph-1
[ceph-1][INFO  ] Running command: systemctl start ceph-mon@ceph-1
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][DEBUG ] status for monitor: mon.ceph-1
[ceph-1][DEBUG ] {
[ceph-1][DEBUG ]   "election_epoch": 8, 
[ceph-1][DEBUG ]   "extra_probe_peers": [
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     {
[ceph-1][DEBUG ]       "addrvec": [
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v2"
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         {
[ceph-1][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-1][DEBUG ]           "nonce": 0, 
[ceph-1][DEBUG ]           "type": "v1"
[ceph-1][DEBUG ]         }
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }
[ceph-1][DEBUG ]   ], 
[ceph-1][DEBUG ]   "feature_map": {
[ceph-1][DEBUG ]     "mon": [
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-1][DEBUG ]         "num": 1, 
[ceph-1][DEBUG ]         "release": "luminous"
[ceph-1][DEBUG ]       }
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "features": {
[ceph-1][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-1][DEBUG ]     "quorum_mon": [
[ceph-1][DEBUG ]       "kraken", 
[ceph-1][DEBUG ]       "luminous", 
[ceph-1][DEBUG ]       "mimic", 
[ceph-1][DEBUG ]       "osdmap-prune", 
[ceph-1][DEBUG ]       "nautilus"
[ceph-1][DEBUG ]     ], 
[ceph-1][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-1][DEBUG ]     "required_mon": [
[ceph-1][DEBUG ]       "kraken", 
[ceph-1][DEBUG ]       "luminous", 
[ceph-1][DEBUG ]       "mimic", 
[ceph-1][DEBUG ]       "osdmap-prune", 
[ceph-1][DEBUG ]       "nautilus"
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "monmap": {
[ceph-1][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-1][DEBUG ]     "epoch": 1, 
[ceph-1][DEBUG ]     "features": {
[ceph-1][DEBUG ]       "optional": [], 
[ceph-1][DEBUG ]       "persistent": [
[ceph-1][DEBUG ]         "kraken", 
[ceph-1][DEBUG ]         "luminous", 
[ceph-1][DEBUG ]         "mimic", 
[ceph-1][DEBUG ]         "osdmap-prune", 
[ceph-1][DEBUG ]         "nautilus"
[ceph-1][DEBUG ]       ]
[ceph-1][DEBUG ]     }, 
[ceph-1][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-1][DEBUG ]     "min_mon_release": 14, 
[ceph-1][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-1][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-1][DEBUG ]     "mons": [
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-3", 
[ceph-1][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 0
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-1", 
[ceph-1][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 1
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-0", 
[ceph-1][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 2
[ceph-1][DEBUG ]       }, 
[ceph-1][DEBUG ]       {
[ceph-1][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-1][DEBUG ]         "name": "ceph-2", 
[ceph-1][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-1][DEBUG ]         "public_addrs": {
[ceph-1][DEBUG ]           "addrvec": [
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v2"
[ceph-1][DEBUG ]             }, 
[ceph-1][DEBUG ]             {
[ceph-1][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-1][DEBUG ]               "nonce": 0, 
[ceph-1][DEBUG ]               "type": "v1"
[ceph-1][DEBUG ]             }
[ceph-1][DEBUG ]           ]
[ceph-1][DEBUG ]         }, 
[ceph-1][DEBUG ]         "rank": 3
[ceph-1][DEBUG ]       }
[ceph-1][DEBUG ]     ]
[ceph-1][DEBUG ]   }, 
[ceph-1][DEBUG ]   "name": "ceph-1", 
[ceph-1][DEBUG ]   "outside_quorum": [], 
[ceph-1][DEBUG ]   "quorum": [
[ceph-1][DEBUG ]     0, 
[ceph-1][DEBUG ]     1, 
[ceph-1][DEBUG ]     2, 
[ceph-1][DEBUG ]     3
[ceph-1][DEBUG ]   ], 
[ceph-1][DEBUG ]   "quorum_age": 921, 
[ceph-1][DEBUG ]   "rank": 1, 
[ceph-1][DEBUG ]   "state": "peon", 
[ceph-1][DEBUG ]   "sync_provider": []
[ceph-1][DEBUG ] }
[ceph-1][DEBUG ] ********************************************************************************
[ceph-1][INFO  ] monitor: mon.ceph-1 is running
[ceph-1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-2 ...
dhclient(1626) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-2][DEBUG ] determining if provided host has same hostname in remote
[ceph-2][DEBUG ] get remote short hostname
[ceph-2][DEBUG ] deploying mon to ceph-2
[ceph-2][DEBUG ] get remote short hostname
[ceph-2][DEBUG ] remote hostname: ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][DEBUG ] create the mon path if it does not exist
[ceph-2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-2/done
[ceph-2][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-2][DEBUG ] create the init path if it does not exist
[ceph-2][INFO  ] Running command: systemctl enable ceph.target
[ceph-2][INFO  ] Running command: systemctl enable ceph-mon@ceph-2
[ceph-2][INFO  ] Running command: systemctl start ceph-mon@ceph-2
[ceph-2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-2.asok mon_status
[ceph-2][DEBUG ] ********************************************************************************
[ceph-2][DEBUG ] status for monitor: mon.ceph-2
[ceph-2][DEBUG ] {
[ceph-2][DEBUG ]   "election_epoch": 8, 
[ceph-2][DEBUG ]   "extra_probe_peers": [
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.67.157:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.67.157:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     {
[ceph-2][DEBUG ]       "addrvec": [
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v2"
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         {
[ceph-2][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-2][DEBUG ]           "nonce": 0, 
[ceph-2][DEBUG ]           "type": "v1"
[ceph-2][DEBUG ]         }
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }
[ceph-2][DEBUG ]   ], 
[ceph-2][DEBUG ]   "feature_map": {
[ceph-2][DEBUG ]     "mon": [
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-2][DEBUG ]         "num": 1, 
[ceph-2][DEBUG ]         "release": "luminous"
[ceph-2][DEBUG ]       }
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "features": {
[ceph-2][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-2][DEBUG ]     "quorum_mon": [
[ceph-2][DEBUG ]       "kraken", 
[ceph-2][DEBUG ]       "luminous", 
[ceph-2][DEBUG ]       "mimic", 
[ceph-2][DEBUG ]       "osdmap-prune", 
[ceph-2][DEBUG ]       "nautilus"
[ceph-2][DEBUG ]     ], 
[ceph-2][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-2][DEBUG ]     "required_mon": [
[ceph-2][DEBUG ]       "kraken", 
[ceph-2][DEBUG ]       "luminous", 
[ceph-2][DEBUG ]       "mimic", 
[ceph-2][DEBUG ]       "osdmap-prune", 
[ceph-2][DEBUG ]       "nautilus"
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "monmap": {
[ceph-2][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-2][DEBUG ]     "epoch": 1, 
[ceph-2][DEBUG ]     "features": {
[ceph-2][DEBUG ]       "optional": [], 
[ceph-2][DEBUG ]       "persistent": [
[ceph-2][DEBUG ]         "kraken", 
[ceph-2][DEBUG ]         "luminous", 
[ceph-2][DEBUG ]         "mimic", 
[ceph-2][DEBUG ]         "osdmap-prune", 
[ceph-2][DEBUG ]         "nautilus"
[ceph-2][DEBUG ]       ]
[ceph-2][DEBUG ]     }, 
[ceph-2][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-2][DEBUG ]     "min_mon_release": 14, 
[ceph-2][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-2][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-2][DEBUG ]     "mons": [
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-3", 
[ceph-2][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 0
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-1", 
[ceph-2][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 1
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-0", 
[ceph-2][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 2
[ceph-2][DEBUG ]       }, 
[ceph-2][DEBUG ]       {
[ceph-2][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-2][DEBUG ]         "name": "ceph-2", 
[ceph-2][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-2][DEBUG ]         "public_addrs": {
[ceph-2][DEBUG ]           "addrvec": [
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v2"
[ceph-2][DEBUG ]             }, 
[ceph-2][DEBUG ]             {
[ceph-2][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-2][DEBUG ]               "nonce": 0, 
[ceph-2][DEBUG ]               "type": "v1"
[ceph-2][DEBUG ]             }
[ceph-2][DEBUG ]           ]
[ceph-2][DEBUG ]         }, 
[ceph-2][DEBUG ]         "rank": 3
[ceph-2][DEBUG ]       }
[ceph-2][DEBUG ]     ]
[ceph-2][DEBUG ]   }, 
[ceph-2][DEBUG ]   "name": "ceph-2", 
[ceph-2][DEBUG ]   "outside_quorum": [], 
[ceph-2][DEBUG ]   "quorum": [
[ceph-2][DEBUG ]     0, 
[ceph-2][DEBUG ]     1, 
[ceph-2][DEBUG ]     2, 
[ceph-2][DEBUG ]     3
[ceph-2][DEBUG ]   ], 
[ceph-2][DEBUG ]   "quorum_age": 926, 
[ceph-2][DEBUG ]   "rank": 3, 
[ceph-2][DEBUG ]   "state": "peon", 
[ceph-2][DEBUG ]   "sync_provider": []
[ceph-2][DEBUG ] }
[ceph-2][DEBUG ] ********************************************************************************
[ceph-2][INFO  ] monitor: mon.ceph-2 is running
[ceph-2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-3 ...
dhclient(1634) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO  ] distro info: bclinux 21.10U3 21.10U3 LTS
[ceph-3][DEBUG ] determining if provided host has same hostname in remote
[ceph-3][DEBUG ] get remote short hostname
[ceph-3][DEBUG ] deploying mon to ceph-3
[ceph-3][DEBUG ] get remote short hostname
[ceph-3][DEBUG ] remote hostname: ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][DEBUG ] create the mon path if it does not exist
[ceph-3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-3/done
[ceph-3][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-3][DEBUG ] create the init path if it does not exist
[ceph-3][INFO  ] Running command: systemctl enable ceph.target
[ceph-3][INFO  ] Running command: systemctl enable ceph-mon@ceph-3
[ceph-3][INFO  ] Running command: systemctl start ceph-mon@ceph-3
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status
[ceph-3][DEBUG ] ********************************************************************************
[ceph-3][DEBUG ] status for monitor: mon.ceph-3
[ceph-3][DEBUG ] {
[ceph-3][DEBUG ]   "election_epoch": 8, 
[ceph-3][DEBUG ]   "extra_probe_peers": [
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.112.206:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.112.206:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.163.105:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.163.105:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     {
[ceph-3][DEBUG ]       "addrvec": [
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.227.100:3300", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v2"
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         {
[ceph-3][DEBUG ]           "addr": "172.17.227.100:6789", 
[ceph-3][DEBUG ]           "nonce": 0, 
[ceph-3][DEBUG ]           "type": "v1"
[ceph-3][DEBUG ]         }
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }
[ceph-3][DEBUG ]   ], 
[ceph-3][DEBUG ]   "feature_map": {
[ceph-3][DEBUG ]     "mon": [
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "features": "0x3ffddff8ffacffff", 
[ceph-3][DEBUG ]         "num": 1, 
[ceph-3][DEBUG ]         "release": "luminous"
[ceph-3][DEBUG ]       }
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "features": {
[ceph-3][DEBUG ]     "quorum_con": "4611087854031667199", 
[ceph-3][DEBUG ]     "quorum_mon": [
[ceph-3][DEBUG ]       "kraken", 
[ceph-3][DEBUG ]       "luminous", 
[ceph-3][DEBUG ]       "mimic", 
[ceph-3][DEBUG ]       "osdmap-prune", 
[ceph-3][DEBUG ]       "nautilus"
[ceph-3][DEBUG ]     ], 
[ceph-3][DEBUG ]     "required_con": "2449958747315912708", 
[ceph-3][DEBUG ]     "required_mon": [
[ceph-3][DEBUG ]       "kraken", 
[ceph-3][DEBUG ]       "luminous", 
[ceph-3][DEBUG ]       "mimic", 
[ceph-3][DEBUG ]       "osdmap-prune", 
[ceph-3][DEBUG ]       "nautilus"
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "monmap": {
[ceph-3][DEBUG ]     "created": "2023-11-11 09:54:05.372287", 
[ceph-3][DEBUG ]     "epoch": 1, 
[ceph-3][DEBUG ]     "features": {
[ceph-3][DEBUG ]       "optional": [], 
[ceph-3][DEBUG ]       "persistent": [
[ceph-3][DEBUG ]         "kraken", 
[ceph-3][DEBUG ]         "luminous", 
[ceph-3][DEBUG ]         "mimic", 
[ceph-3][DEBUG ]         "osdmap-prune", 
[ceph-3][DEBUG ]         "nautilus"
[ceph-3][DEBUG ]       ]
[ceph-3][DEBUG ]     }, 
[ceph-3][DEBUG ]     "fsid": "ff72b496-d036-4f1b-b2ad-55358f3c16cb", 
[ceph-3][DEBUG ]     "min_mon_release": 14, 
[ceph-3][DEBUG ]     "min_mon_release_name": "nautilus", 
[ceph-3][DEBUG ]     "modified": "2023-11-11 09:54:05.372287", 
[ceph-3][DEBUG ]     "mons": [
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.67.157:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-3", 
[ceph-3][DEBUG ]         "public_addr": "172.17.67.157:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.67.157:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.67.157:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 0
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.112.206:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-1", 
[ceph-3][DEBUG ]         "public_addr": "172.17.112.206:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.112.206:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.112.206:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 1
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.163.105:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-0", 
[ceph-3][DEBUG ]         "public_addr": "172.17.163.105:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.163.105:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.163.105:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 2
[ceph-3][DEBUG ]       }, 
[ceph-3][DEBUG ]       {
[ceph-3][DEBUG ]         "addr": "172.17.227.100:6789/0", 
[ceph-3][DEBUG ]         "name": "ceph-2", 
[ceph-3][DEBUG ]         "public_addr": "172.17.227.100:6789/0", 
[ceph-3][DEBUG ]         "public_addrs": {
[ceph-3][DEBUG ]           "addrvec": [
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.227.100:3300", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v2"
[ceph-3][DEBUG ]             }, 
[ceph-3][DEBUG ]             {
[ceph-3][DEBUG ]               "addr": "172.17.227.100:6789", 
[ceph-3][DEBUG ]               "nonce": 0, 
[ceph-3][DEBUG ]               "type": "v1"
[ceph-3][DEBUG ]             }
[ceph-3][DEBUG ]           ]
[ceph-3][DEBUG ]         }, 
[ceph-3][DEBUG ]         "rank": 3
[ceph-3][DEBUG ]       }
[ceph-3][DEBUG ]     ]
[ceph-3][DEBUG ]   }, 
[ceph-3][DEBUG ]   "name": "ceph-3", 
[ceph-3][DEBUG ]   "outside_quorum": [], 
[ceph-3][DEBUG ]   "quorum": [
[ceph-3][DEBUG ]     0, 
[ceph-3][DEBUG ]     1, 
[ceph-3][DEBUG ]     2, 
[ceph-3][DEBUG ]     3
[ceph-3][DEBUG ]   ], 
[ceph-3][DEBUG ]   "quorum_age": 931, 
[ceph-3][DEBUG ]   "rank": 0, 
[ceph-3][DEBUG ]   "state": "leader", 
[ceph-3][DEBUG ]   "sync_provider": []
[ceph-3][DEBUG ] }
[ceph-3][DEBUG ] ********************************************************************************
[ceph-3][INFO  ] monitor: mon.ceph-3 is running
[ceph-3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-3.asok mon_status

[errno 2] error connecting to the cluster

No key yet? Continue with the next step and then check again.
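The "[errno 2] error connecting to the cluster" above usually means the ceph CLI cannot find the cluster conf or an admin keyring yet, because gatherkeys/admin has not run. A minimal local sketch of that check, assuming the default cluster name "ceph" and the default /etc/ceph paths:

```shell
# Check for the two files the ceph CLI needs before it can reach the
# monitors; both are absent until gatherkeys/admin distributes them.
for f in /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring; do
  if [ -e "$f" ]; then
    echo "present: $f"
  else
    echo "missing: $f"
  fi
done
```

On a node that has not received the keys yet, both lines will read "missing", which matches the errno 2 symptom.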

Gather the keys

ceph-deploy gatherkeys ceph-0 ceph-1 ceph-2 ceph-3
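gatherkeys pulls the cluster keyrings from the monitors into the ceph-deploy working directory. If it succeeds, the following files should appear there (the standard ceph-deploy filenames for a cluster named "ceph"); this sketch just lists the expected names:

```shell
# Keyrings that ceph-deploy gatherkeys writes into the current directory.
for k in ceph.client.admin.keyring \
         ceph.mon.keyring \
         ceph.bootstrap-osd.keyring \
         ceph.bootstrap-mds.keyring \
         ceph.bootstrap-mgr.keyring \
         ceph.bootstrap-rgw.keyring; do
  echo "$k"
done
```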

Log output

ceph -s now shows the daemon services running.

Deploy the admin node

ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[root@ceph-0 ceph]# ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff91add0f0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['ceph-0', 'ceph-1', 'ceph-2', 'ceph-3']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0xffff91c777d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-0
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-1
dhclient(1613) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-2
dhclient(1626) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-3
dhclient(1634) is already running - exiting. This version of ISC DHCP is based on the release available
on ftp.isc.org. Features have been added and other changes
have been made to the base software release in order to make
it work better with this distribution.Please report issues with this software via: 
https://gitee.com/src-openeuler/dhcp/issuesexiting.
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

ceph -s

Deploy the OSDs

ceph-deploy osd create ceph-0 --data /dev/vdb
ceph-deploy osd create ceph-1 --data /dev/vdb
ceph-deploy osd create ceph-2 --data /dev/vdb
ceph-deploy osd create ceph-3 --data /dev/vdb
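The four per-host commands above can be collapsed into a loop. A dry-run sketch that only prints each command (hostnames and the /dev/vdb data disk are taken from the setup above; drop the echo to actually execute):

```shell
# Print one 'ceph-deploy osd create' per host instead of running it;
# assumes every node exposes the same /dev/vdb data disk.
for host in ceph-0 ceph-1 ceph-2 ceph-3; do
  echo ceph-deploy osd create "$host" --data /dev/vdb
done
```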

Logs for ceph-0 / ceph-1 / ceph-2 / ceph-3

[root@ceph-0 ceph]# ceph-deploy osd create ceph-0 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-0 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9cc8cd20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-0
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff9cd1bed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph-0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][WARNIN] osd keyring does not exist yet, creating one
[ceph-0][DEBUG ] create a keyring file
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-0][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-837353d8-91ff-4418-bc8f-a655d94049d4 /dev/vdb
[ceph-0][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-0][WARNIN]  stdout: Volume group "ceph-837353d8-91ff-4418-bc8f-a655d94049d4" successfully created
[ceph-0][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f ceph-837353d8-91ff-4418-bc8f-a655d94049d4
[ceph-0][WARNIN]  stdout: Logical volume "osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f" created.
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool --gen-print-key
[ceph-0][WARNIN] Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[ceph-0][WARNIN] Running command: /usr/bin/ln -s /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-0][WARNIN]  stderr: 2023-11-11 10:48:34.800 ffff843261e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-0][WARNIN] 2023-11-11 10:48:34.800 ffff843261e0 -1 AuthRegistry(0xffff7c081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-0][WARNIN]  stderr: got monmap epoch 1
[ceph-0][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring --create-keyring --name osd.0 --add-key AQB/605l8419HxAAhIoXMxEJCV5J6qOB8AyHrw==
[ceph-0][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-0][WARNIN] added entity osd.0 auth(key=AQB/605l8419HxAAhIoXMxEJCV5J6qOB8AyHrw==)
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-0][WARNIN] Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid c1870346-8e19-4788-b1dd-19bd75d6ec2f --setuser ceph --setgroup ceph
[ceph-0][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f --path /var/lib/ceph/osd/ceph-0 --no-mon-config
[ceph-0][WARNIN] Running command: /usr/bin/ln -snf /dev/ceph-837353d8-91ff-4418-bc8f-a655d94049d4/osd-block-c1870346-8e19-4788-b1dd-19bd75d6ec2f /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /dev/dm-3
[ceph-0][WARNIN] Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-0][WARNIN] Running command: /usr/bin/systemctl enable ceph-volume@lvm-0-c1870346-8e19-4788-b1dd-19bd75d6ec2f
[ceph-0][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-c1870346-8e19-4788-b1dd-19bd75d6ec2f.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-0][WARNIN] Running command: /usr/bin/systemctl enable --runtime ceph-osd@0
[ceph-0][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-0][WARNIN] Running command: /usr/bin/systemctl start ceph-osd@0
[ceph-0][WARNIN] --> ceph-volume lvm activate successful for osd ID: 0
[ceph-0][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-0][INFO  ] checking OSD status...
[ceph-0][DEBUG ] find the location of an executable
[ceph-0][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-0 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-1 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-1 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff87d9ed20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-1
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff87e2ded0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
dhclient(1613) is already running - exiting.
This version of ISC DHCP is based on the release available on ftp.isc.org. Features have been added and other changes have been made to the base software release in order to make it work better with this distribution.
Please report issues with this software via: https://gitee.com/src-openeuler/dhcp/issues
exiting.
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph-1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][WARNIN] osd keyring does not exist yet, creating one
[ceph-1][DEBUG ] create a keyring file
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-89d26557-d392-4a46-8d3d-6904076cd4e0 /dev/vdb
[ceph-1][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-1][WARNIN]  stdout: Volume group "ceph-89d26557-d392-4a46-8d3d-6904076cd4e0" successfully created
[ceph-1][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-4aa0152e-d817-4583-817b-81ada419624a ceph-89d26557-d392-4a46-8d3d-6904076cd4e0
[ceph-1][WARNIN]  stdout: Logical volume "osd-block-4aa0152e-d817-4583-817b-81ada419624a" created.
[ceph-1][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-1][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-1][WARNIN] Running command: /bin/ln -s /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
[ceph-1][WARNIN]  stderr: 2023-11-11 10:49:41.805 ffff89d6d1e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-1][WARNIN] 2023-11-11 10:49:41.805 ffff89d6d1e0 -1 AuthRegistry(0xffff84081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-1][WARNIN]  stderr: got monmap epoch 1
[ceph-1][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-1/keyring --create-keyring --name osd.1 --add-key AQDC605lWArnLhAAEYhGC+H+Jy224yAIJhL0gA==
[ceph-1][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-1/keyring
[ceph-1][WARNIN] added entity osd.1 auth(key=AQDC605lWArnLhAAEYhGC+H+Jy224yAIJhL0gA==)
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
[ceph-1][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 4aa0152e-d817-4583-817b-81ada419624a --setuser ceph --setgroup ceph
[ceph-1][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a --path /var/lib/ceph/osd/ceph-1 --no-mon-config
[ceph-1][WARNIN] Running command: /bin/ln -snf /dev/ceph-89d26557-d392-4a46-8d3d-6904076cd4e0/osd-block-4aa0152e-d817-4583-817b-81ada419624a /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-1][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
[ceph-1][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-1-4aa0152e-d817-4583-817b-81ada419624a
[ceph-1][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-1-4aa0152e-d817-4583-817b-81ada419624a.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-1][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@1
[ceph-1][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@1.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-1][WARNIN] Running command: /bin/systemctl start ceph-osd@1
[ceph-1][WARNIN] --> ceph-volume lvm activate successful for osd ID: 1
[ceph-1][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-1][INFO  ] checking OSD status...
[ceph-1][DEBUG ] find the location of an executable
[ceph-1][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-1 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-2 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-2 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff9a808d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-2
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff9a897ed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][WARNIN] osd keyring does not exist yet, creating one
[ceph-2][DEBUG ] create a keyring file
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-2][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-2][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f /dev/vdb
[ceph-2][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-2][WARNIN]  stdout: Volume group "ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f" successfully created
[ceph-2][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f
[ceph-2][WARNIN]  stdout: Logical volume "osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960" created.
[ceph-2][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-2][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-2][WARNIN] Running command: /bin/ln -s /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
[ceph-2][WARNIN]  stderr: 2023-11-11 10:50:01.837 ffff947321e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-2][WARNIN] 2023-11-11 10:50:01.837 ffff947321e0 -1 AuthRegistry(0xffff8c081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-2][WARNIN]  stderr: got monmap epoch 1
[ceph-2][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQDW605lUIA0MhAAqOoCGrnDsVpfoIIKVtCXHg==
[ceph-2][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-2/keyring
[ceph-2][WARNIN] added entity osd.2 auth(key=AQDW605lUIA0MhAAqOoCGrnDsVpfoIIKVtCXHg==)
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
[ceph-2][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid fe7a2030-94ac-4bbb-af27-7950509b0960 --setuser ceph --setgroup ceph
[ceph-2][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
[ceph-2][WARNIN] Running command: /bin/ln -snf /dev/ceph-8d4ef242-6dc1-4161-9b6a-15a626b86c6f/osd-block-fe7a2030-94ac-4bbb-af27-7950509b0960 /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-2][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
[ceph-2][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-2-fe7a2030-94ac-4bbb-af27-7950509b0960
[ceph-2][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-fe7a2030-94ac-4bbb-af27-7950509b0960.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-2][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@2
[ceph-2][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-2][WARNIN] Running command: /bin/systemctl start ceph-osd@2
[ceph-2][WARNIN] --> ceph-volume lvm activate successful for osd ID: 2
[ceph-2][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-2][INFO  ] checking OSD status...
[ceph-2][DEBUG ] find the location of an executable
[ceph-2][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-2 is now ready for osd use.
[root@ceph-0 ceph]# ceph-deploy osd create ceph-3 --data /dev/vdb
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create ceph-3 --data /dev/vdb
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff7f600d20>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : ceph-3
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0xffff7f68fed0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/vdb
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/vdb
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph-3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][WARNIN] osd keyring does not exist yet, creating one
[ceph-3][DEBUG ] create a keyring file
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/vdb
[ceph-3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN] Running command: /usr/sbin/vgcreate --force --yes ceph-a75dd665-280f-4901-90db-d72aea971fd7 /dev/vdb
[ceph-3][WARNIN]  stdout: Physical volume "/dev/vdb" successfully created.
[ceph-3][WARNIN]  stdout: Volume group "ceph-a75dd665-280f-4901-90db-d72aea971fd7" successfully created
[ceph-3][WARNIN] Running command: /usr/sbin/lvcreate --yes -l 25599 -n osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 ceph-a75dd665-280f-4901-90db-d72aea971fd7
[ceph-3][WARNIN]  stdout: Logical volume "osd-block-223dea89-7b5f-4584-b294-bbc0457cd250" created.
[ceph-3][WARNIN] Running command: /bin/ceph-authtool --gen-print-key
[ceph-3][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-3][WARNIN] Running command: /bin/ln -s /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
[ceph-3][WARNIN]  stderr: 2023-11-11 10:50:22.197 ffffa80151e0 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,: (2) No such file or directory
[ceph-3][WARNIN] 2023-11-11 10:50:22.197 ffffa80151e0 -1 AuthRegistry(0xffffa0081d58) no keyring found at /etc/ceph/ceph.client.bootstrap-osd.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin,, disabling cephx
[ceph-3][WARNIN]  stderr: got monmap epoch 1
[ceph-3][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring --create-keyring --name osd.3 --add-key AQDr605lrGtEDRAAvHq/3Wbxx0jH8NgtcKN/aA==
[ceph-3][WARNIN]  stdout: creating /var/lib/ceph/osd/ceph-3/keyring
[ceph-3][WARNIN] added entity osd.3 auth(key=AQDr605lrGtEDRAAvHq/3Wbxx0jH8NgtcKN/aA==)
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
[ceph-3][WARNIN] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 3 --monmap /var/lib/ceph/osd/ceph-3/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-3/ --osd-uuid 223dea89-7b5f-4584-b294-bbc0457cd250 --setuser ceph --setgroup ceph
[ceph-3][WARNIN] --> ceph-volume lvm prepare successful for: /dev/vdb
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 --path /var/lib/ceph/osd/ceph-3 --no-mon-config
[ceph-3][WARNIN] Running command: /bin/ln -snf /dev/ceph-a75dd665-280f-4901-90db-d72aea971fd7/osd-block-223dea89-7b5f-4584-b294-bbc0457cd250 /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-3
[ceph-3][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
[ceph-3][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-3-223dea89-7b5f-4584-b294-bbc0457cd250
[ceph-3][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-223dea89-7b5f-4584-b294-bbc0457cd250.service → /usr/lib/systemd/system/ceph-volume@.service.
[ceph-3][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@3
[ceph-3][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service → /usr/lib/systemd/system/ceph-osd@.service.
[ceph-3][WARNIN] Running command: /bin/systemctl start ceph-osd@3
[ceph-3][WARNIN] --> ceph-volume lvm activate successful for osd ID: 3
[ceph-3][WARNIN] --> ceph-volume lvm create successful for: /dev/vdb
[ceph-3][INFO  ] checking OSD status...
[ceph-3][DEBUG ] find the location of an executable
[ceph-3][INFO  ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph-3 is now ready for osd use.

ceph -s still shows the same status; the OSD storage has not shown up yet
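The deploy log above ends each host with a ceph osd stat --format=json check. A minimal sketch of verifying from that JSON that all four OSDs are up and in — the field layout is assumed from Nautilus (14.x), and the sample payload below is illustrative, not captured from this cluster:

```python
import json

def all_osds_healthy(stat_json: str) -> bool:
    """Return True when every OSD counted in the osdmap is both up and in."""
    data = json.loads(stat_json)
    # Some releases nest the counters under an "osdmap" key; others emit them flat.
    osdmap = data.get("osdmap", data)
    return osdmap["num_osds"] == osdmap["num_up_osds"] == osdmap["num_in_osds"]

# Illustrative payload only -- the counts are assumptions, not cluster output.
sample = '{"epoch": 17, "num_osds": 4, "num_up_osds": 4, "num_in_osds": 4}'
print(all_osds_healthy(sample))  # True when 4/4 OSDs are up and in
```

With all four hosts reporting healthy counts, the missing capacity in ceph -s is usually just the mgr, deployed next.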

Deploy mgr

 ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3
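Once the mgr daemons come up, one becomes active and the rest stand by. A hedged sketch of summarizing a ceph mgr stat --format=json payload — the field names (available, active_name, num_standby) are assumed from Nautilus, and the sample is illustrative rather than taken from this cluster:

```python
import json

def mgr_summary(stat_json: str) -> str:
    """Summarize a `ceph mgr stat --format=json` payload (field names assumed from Nautilus)."""
    data = json.loads(stat_json)
    state = "available" if data.get("available") else "unavailable"
    return f"active mgr {data.get('active_name', '?')} ({state}), {data.get('num_standby', 0)} standby"

# Illustrative payload only; with four mgrs, one is active and three stand by.
sample = '{"epoch": 12, "available": true, "active_name": "ceph-0", "num_standby": 3}'
print(mgr_summary(sample))  # active mgr ceph-0 (available), 3 standby
```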

Log output

[root@ceph-0 ceph]# ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-0 ceph-1 ceph-2 ceph-3
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  mgr                           : [('ceph-0', 'ceph-0'), ('ceph-1', 'ceph-1'), ('ceph-2', 'ceph-2'), ('ceph-3', 'ceph-3')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xffff94d07730>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mgr at 0xffff94e71dd0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-0:ceph-0 ceph-1:ceph-1 ceph-2:ceph-2 ceph-3:ceph-3
[ceph-0][DEBUG ] connected to host: ceph-0 
[ceph-0][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-0][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-0
[ceph-0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-0][WARNIN] mgr keyring does not exist yet, creating one
[ceph-0][DEBUG ] create a keyring file
[ceph-0][DEBUG ] create path recursively if it doesn't exist
[ceph-0][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-0 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-0/keyring
[ceph-0][INFO  ] Running command: systemctl enable ceph-mgr@ceph-0
[ceph-0][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-0.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-0][INFO  ] Running command: systemctl start ceph-mgr@ceph-0
[ceph-0][INFO  ] Running command: systemctl enable ceph.target
[ceph-1][DEBUG ] connected to host: ceph-1 
[ceph-1][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-1][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-1
[ceph-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-1][WARNIN] mgr keyring does not exist yet, creating one
[ceph-1][DEBUG ] create a keyring file
[ceph-1][DEBUG ] create path recursively if it doesn't exist
[ceph-1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-1 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-1/keyring
[ceph-1][INFO  ] Running command: systemctl enable ceph-mgr@ceph-1
[ceph-1][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-1.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-1][INFO  ] Running command: systemctl start ceph-mgr@ceph-1
[ceph-1][INFO  ] Running command: systemctl enable ceph.target
[ceph-2][DEBUG ] connected to host: ceph-2 
[ceph-2][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-2][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-2
[ceph-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-2][WARNIN] mgr keyring does not exist yet, creating one
[ceph-2][DEBUG ] create a keyring file
[ceph-2][DEBUG ] create path recursively if it doesn't exist
[ceph-2][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-2 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-2/keyring
[ceph-2][INFO  ] Running command: systemctl enable ceph-mgr@ceph-2
[ceph-2][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-2.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-2][INFO  ] Running command: systemctl start ceph-mgr@ceph-2
[ceph-2][INFO  ] Running command: systemctl enable ceph.target
[ceph-3][DEBUG ] connected to host: ceph-3 
[ceph-3][DEBUG ] detect platform information from remote host
21.10U3 LTS
bclinux
[ceph-3][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO  ] Distro info: bclinux 21.10U3 21.10U3 LTS
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-3
[ceph-3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-3][WARNIN] mgr keyring does not exist yet, creating one
[ceph-3][DEBUG ] create a keyring file
[ceph-3][DEBUG ] create path recursively if it doesn't exist
[ceph-3][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mgr --keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-3 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-3/keyring
[ceph-3][INFO  ] Running command: systemctl enable ceph-mgr@ceph-3
[ceph-3][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-3.service → /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-3][INFO  ] Running command: systemctl start ceph-mgr@ceph-3
[ceph-3][INFO  ] Running command: systemctl enable ceph.target

Check the mgr daemons with ceph -s

Note: it took about 15 minutes before the OSDs reported their data capacity.

Verify Ceph block storage (RBD)

Create a storage pool (the two trailing numbers are pg_num and pgp_num, the placement-group counts)

ceph osd pool create vdbench 250 250
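The 250/250 arguments above are pg_num and pgp_num. A common rule of thumb for sizing them (a sketch only, not an official sizing tool; the OSD and replica counts below are assumed for illustration) is (OSDs × 100) / replica count, rounded up to a power of two:

```shell
#!/bin/sh
# Rule-of-thumb PG sizing (hypothetical values: 4 OSDs, 3x replication).
#   target = osds * 100 / replicas, rounded up to a power of two.
osds=4
replicas=3
target=$(( osds * 100 / replicas ))   # 400 / 3 = 133
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$(( pg * 2 ))                    # double until >= target
done
echo "$pg"                            # prints 256
```

With these assumed values the rule suggests pg_num 256; the 250 used above works, but a power of two lets Ceph balance placement groups more evenly.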

Tag the pool for block storage (RBD)

ceph osd pool application enable vdbench rbd

Create a 20 GB image (compression not configured)

rbd create image1 --size 20G --pool vdbench --image-format 2 --image-feature layering

Map the image to a Linux block device

rbd map vdbench/image1

As the command output shows, the /dev/rbd0 device file has been created.
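As a hypothetical follow-up (the device name /dev/rbd0 and the mount point are assumptions based on the output above), the mapped device can be formatted and mounted like any ordinary block device:

```shell
# Assumes /dev/rbd0 exists from the 'rbd map vdbench/image1' step above.
mkfs.xfs /dev/rbd0              # create a filesystem on the RBD device
mkdir -p /mnt/rbd0              # hypothetical mount point
mount /dev/rbd0 /mnt/rbd0       # mount it for use

# Tear-down when finished:
umount /mnt/rbd0
rbd unmap vdbench/image1
```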

References:

ceph-deploy – Deploy Ceph with minimal infrastructure — ceph-deploy 2.1.0 documentation

ceph-deploy: deploying a specific Ceph version ("mons are allowing insecure global_id reclaim") — ggrong0213's blog, CSDN

Ceph usage: enabling the dashboard and Prometheus monitoring — cyh00001, cnblogs.com
