Ceph Learning Notes (updated from time to time)

Published 2024-02-06



Overview

Ceph can provide object storage and block device services to cloud platforms, deploy a file system, and serve other purposes. A Ceph cluster requires at least one Ceph Monitor, one Ceph Manager, and one Ceph OSD (Object Storage Daemon). A Ceph Metadata Server (MDS) is also required when running Ceph File System clients.

Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, OSD map, MDS map, and CRUSH map. These maps are the critical cluster state that Ceph daemons need in order to coordinate with each other. Monitors are also responsible for managing authentication between daemons and clients. At least three monitors are normally required for redundancy and high availability.

Managers: A Ceph Manager daemon (ceph-mgr) keeps track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load. Manager daemons also host Python-based modules that manage and expose Ceph cluster information, including the web-based Ceph Dashboard and a REST API. At least two managers are normally required for high availability.

Ceph OSDs: An Object Storage Daemon (ceph-osd) stores data, handles data replication, recovery, and rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD daemons for a heartbeat. At least three Ceph OSDs are normally required for redundancy and high availability.

MDSs: A Ceph Metadata Server (ceph-mds) stores metadata on behalf of the Ceph File System (Ceph Block Devices and Ceph Object Storage do not use MDS).
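For a quick look at each of these daemon types on a running cluster, commands along these lines can be used from any node that holds the admin keyring (a sketch; output will of course vary):

ceph -s           # overall health plus a mon/mgr/osd/mds summary
ceph mon stat     # monitor quorum
ceph mgr stat     # active and standby managers
ceph osd tree     # OSDs and their place in the CRUSH hierarchy
ceph fs status    # file systems and their MDS daemons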

Ceph Architecture

Every component scales out (capacity can easily grow to the PB level)

No single point of failure (highly reliable)

Runs on commodity hardware

Ceph Cluster Deployment

Fully manual build and install from source:

Install the required dependencies
Download the source package
Write ceph.conf by hand (see the minimal sketch below)
Verify the cluster status
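Purely as an illustration of what such a hand-written file might contain (the fsid is a placeholder to be generated with uuidgen; the host name and network are the lab values used later in these notes), a minimal ceph.conf could look like:

[global]
fsid = <cluster-fsid>
mon_initial_members = ceph-node1
mon_host = 192.168.227.71
public_network = 192.168.227.0/24
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 3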

Deployment with tools such as ceph-deploy, SaltStack, or Ansible

Deployment with ceph-deploy

Operating system: CentOS 7
NICs: two NICs per node, one for the storage network and one for the management network
Number of nodes: 3 (3 OSDs, 1 MON)
Disks per node: 2 (1 journal disk, 1 data disk)
Node IP information:
192.168.227.71 ceph-node1
192.168.227.72 ceph-node2
192.168.227.73 ceph-node3

hostnamectl set-hostname ceph-node1

echo "192.168.227.71 ceph-node1 ceph_node1
192.168.227.72 ceph-node2 ceph_node2
192.168.227.73 ceph-node3 ceph_node3" >> /etc/hosts
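The actual ceph-deploy run is not recorded in these notes; for reference, a rough sketch of the usual flow from the admin node (assuming the ceph/EPEL repositories are configured and /dev/sdb is the data disk on each node):

yum install ceph-deploy -y
mkdir my-cluster && cd my-cluster
ceph-deploy new ceph-node1                            # generate ceph.conf and the mon keyring
ceph-deploy install ceph-node1 ceph-node2 ceph-node3  # install ceph packages on all nodes
ceph-deploy mon create-initial                        # create the initial monitor(s) and gather keys
ceph-deploy admin ceph-node1 ceph-node2 ceph-node3    # push ceph.conf and the admin keyring
ceph-deploy osd create --data /dev/sdb ceph-node1
ceph-deploy osd create --data /dev/sdb ceph-node2
ceph-deploy osd create --data /dev/sdb ceph-node3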

ceph orch daemon add osd ceph-node2:/dev/sdb
ceph orch

Installation with cephadm

CentOS 8:
rpm -ivh https://mirrors.ustc.edu.cn/epel/8/Everything/x86_64/Packages/e/epel-release-8-19.el8.noarch.rpm
yum install ceph podman-docker python3 lvm2 -y
dnf search release-ceph
dnf install --assumeyes centos-release-ceph-reef  ##centos-release-ceph-quincy
dnf install --assumeyes cephadm
cephadm install ceph-common 
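A quick sanity check that the tooling is in place (versions will differ):

cephadm version
ceph --version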

Bootstrap a new cluster

The first step in creating a new Ceph cluster is to run the cephadm bootstrap command on the cluster's first host. Doing so creates the cluster's first monitor daemon, and that monitor daemon needs an IP address, so you must pass the IP address of the first host to the command.

cephadm bootstrap --mon-ip <mon-ip>

[root@ceph_node1 ~]# cephadm bootstrap --mon-ip 192.168.227.71

This command will:

1. Create a monitor and a manager daemon for the new cluster on the local host.
2. Generate a new SSH key for the Ceph cluster and add it to the root user's /root/.ssh/authorized_keys file.
3. Write a copy of the public key to /etc/ceph/ceph.pub.
4. Write a minimal configuration file to /etc/ceph/ceph.conf; this file is needed to communicate with the new cluster.
5. Write a copy of the client.admin administrative (privileged!) secret key to /etc/ceph/ceph.client.admin.keyring.
6. Add the _admin label to the bootstrap host. By default, any host with this label will (also) get a copy of /etc/ceph/ceph.conf and /etc/ceph/ceph.client.admin.keyring.

On success, the end of the bootstrap output includes the Dashboard URL and the initial admin credentials, for example:


	     URL: https://ceph-node1:8443/
	    User: admin
	Password: vdeqc6nx11



[root@ceph_node1 ~]# cephadm bootstrap --mon-ip 192.168.227.71
Creating directory /etc/ceph for ceph.conf
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.6.1 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: dc5452d2-bb53-11ee-a6f0-00155d1f4246
Verifying IP 192.168.227.71 port 3300 ...
Verifying IP 192.168.227.71 port 6789 ...
Mon IP `192.168.227.71` is in CIDR network `192.168.227.0/24`
Mon IP `192.168.227.71` is in CIDR network `192.168.227.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v18...

Ceph version: ceph version 18.2.1 (7fe91d5d5842e04be3b4f514d6dd990c54b29c76) reef (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
firewalld ready
Enabling firewalld service ceph-mon in current zone...
Waiting for mon to start...
Waiting for mon...
/usr/bin/ceph: timeout after 60 seconds
Non-zero exit code 124 from /usr/bin/podman run --rm --ipc=host --stop-signal=SIGTERM --net=host --entrypoint /usr/bin/ceph --init -e CONTAINER_IMAGE=quay.io/ceph/ceph:v18 -e NODE_NAME=ceph_node1 -e CEPH_USE_RANDOM_NONCE=1 -v /var/lib/ceph/dc5452d2-bb53-11ee-a6f0-00155d1f4246/mon.ceph_node1:/var/lib/ceph/mon/ceph-ceph_node1:z -v /tmp/ceph-tmp2xmdiax3:/etc/ceph/ceph.client.admin.keyring:z -v /tmp/ceph-tmpw6qkop7z:/etc/ceph/ceph.conf:z quay.io/ceph/ceph:v18 status
mon not available, waiting (1/15)...
[... the same "timeout after 60 seconds" / "Non-zero exit code 124" block repeats for attempts 2/15 through 15/15 ...]
Error: mon not available after 15 tries


	***************
	Cephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change
	this behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:

	   > cephadm rm-cluster --force --fsid dc5452d2-bb53-11ee-a6f0-00155d1f4246

	in case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:

	   > cephadm rm-cluster --force --zap-osds --fsid <fsid>

	for more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster
	***************


ERROR: mon not available after 15 tries



 cephadm rm-cluster --force --fsid dc5452d2-bb53-11ee-a6f0-00155d1f4246

 cephadm rm-cluster --force --zap-osds --fsid dc5452d2-bb53-11ee-a6f0-00155d1f4246


[root@ceph_node1 ~]# cephadm bootstrap --mon-ip 192.168.227.71
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
podman (/usr/bin/podman) version 4.6.1 is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: bd1270f8-bb61-11ee-be68-00155d1f4246
Verifying IP 192.168.227.71 port 3300 ...
OSError: [Errno 99] Cannot assign requested address


	***************
	Cephadm hit an issue during cluster installation. Current cluster files will NOT BE DELETED automatically to change
	this behaviour you can pass the --cleanup-on-failure. To remove this broken cluster manually please run:

	   > cephadm rm-cluster --force --fsid bd1270f8-bb61-11ee-be68-00155d1f4246

	in case of any previous broken installation user must use the rm-cluster command to delete the broken cluster:

	   > cephadm rm-cluster --force --zap-osds --fsid <fsid>

	for more information please refer to https://docs.ceph.com/en/latest/cephadm/operations/#purging-a-cluster
	***************


Traceback (most recent call last):
  File "/usr/lib64/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib64/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/sbin/cephadm/__main__.py", line 10700, in <module>
  File "/usr/sbin/cephadm/__main__.py", line 10688, in main
  File "/usr/sbin/cephadm/__main__.py", line 6156, in _rollback
  File "/usr/sbin/cephadm/__main__.py", line 2495, in _default_image
  File "/usr/sbin/cephadm/__main__.py", line 6262, in command_bootstrap
  File "/usr/sbin/cephadm/__main__.py", line 5522, in prepare_mon_addresses
  File "/usr/sbin/cephadm/__main__.py", line 1693, in check_ip_port
  File "/usr/sbin/cephadm/__main__.py", line 1644, in attempt_bind
  File "/usr/sbin/cephadm/__main__.py", line 1637, in attempt_bind
OSError: [Errno 99] Cannot assign requested address
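Errno 99 ("Cannot assign requested address") at the port-check step usually means the address passed to --mon-ip is not actually configured on this host. A quick way to check before retrying (standard iproute2/ss tools, assumed present on CentOS 8):

ip -4 addr show                   # confirm 192.168.227.71 is assigned to a local interface
ss -tlnp | grep -E '3300|6789'    # confirm nothing else already holds the mon ports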



podman image rm quay.io/ceph/ceph:v18 
[root@ceph-node1 ~]# podman image ls
REPOSITORY         TAG         IMAGE ID      CREATED      SIZE
quay.io/ceph/ceph  v18         7f099bcd7014  4 weeks ago  1.28 GB
[root@ceph-node1 ~]# podman ps
CONTAINER ID  IMAGE                  COMMAND               CREATED         STATUS         PORTS       NAMES
908283df6120  quay.io/ceph/ceph:v18  -n mon.ceph_node1...  25 minutes ago  Up 25 minutes              ceph-9c903c22-c32f-11ee-a0cd-00155d1f4246-mon-ceph_node1
ba527bf12f52  quay.io/ceph/ceph:v18  -n mgr.ceph_node1...  25 minutes ago  Up 25 minutes              ceph-9c903c22-c32f-11ee-a0cd-00155d1f4246-mgr-ceph_node1-btosat
[root@ceph-node1 ~]# 
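As the podman ps output above shows, mon and mgr containers from another bootstrap attempt (fsid 9c903c22-c32f-11ee-a0cd-00155d1f4246) were still running, which is also likely why the image could not be removed. A sketch of cleaning that up before retrying (the fsid is the one shown above):

cephadm ls                                                        # daemons cephadm knows about on this host
cephadm rm-cluster --force --fsid 9c903c22-c32f-11ee-a0cd-00155d1f4246
podman ps                                                         # verify no ceph containers remain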








Copy the cluster's public SSH key to each node so that cephadm can manage them:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-node3

ceph config set mon public_network 192.168.227.0/24

Add the remaining hosts to the cluster (the _admin label also distributes ceph.conf and the admin keyring to them):

ceph orch host add ceph-node2 192.168.227.72 --labels=_admin
ceph orch host add ceph-node3 192.168.227.73 --labels=_admin

ceph orch host label add <host> _admin

ceph orch daemon add mon ceph-node2:192.168.227.72
ceph orch daemon add mon ceph-node3:192.168.227.73

 ceph orch apply mon ceph-node1,ceph-node2,ceph-node3
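To confirm the hosts and monitors were picked up (a sketch):

ceph orch host ls
ceph orch ps --daemon_type mon
ceph mon stat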

ceph orch ls --service_type mgr
ceph orch ps  --daemon_type mgr

ceph orch apply osd --all-available-devices    # consume all available (unused) devices as OSDs


ceph orch daemon add osd ceph-node1:/dev/sdb
ceph orch daemon add osd ceph-node2:/dev/sdb
ceph orch daemon add osd ceph-node3:/dev/sdb

ceph osd tree
ceph osd ls
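A couple of other useful views once the OSDs are in (a sketch):

ceph -s
ceph osd df
ceph orch ps --daemon_type osd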


ceph fs volume create <fs_name> --placement="<placement spec>"
ceph fs volume create zhnfs

ceph fs volume info zhnfs

ceph osd pool stats
ceph fs authorize zhnfs client.foo1 / rw    # grant client.foo1 read/write access to the root of zhnfs


mount -t ceph :/ /mnt/cephfs -o name=admin,secret=AQBYSjZfQF+UJBAAC6QJjNACndkw2LcCR2XLFA==
mount -t ceph :/ /mnt -o name=admin,secret=AQBYSjZfQF+UJBAAC6QJjNACndkw2LcCR2XLFA==
mkdir /mnt/
mount -t ceph :/ /mnt -o name=admin,secret=AQBYSjZfQF+UJBAAC6QJjNACndkw2LcCR2XLFA==
mount -t ceph :/ /mnt
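A cleaner way to mount as the client authorized above rather than as admin (a sketch; the monitor address and mount point are assumptions, and with more than one file system a fs/mds_namespace mount option may also be needed):

mkdir -p /mnt/cephfs
ceph auth get-key client.foo1 > /etc/ceph/foo1.secret
mount -t ceph 192.168.227.71:6789:/ /mnt/cephfs -o name=foo1,secretfile=/etc/ceph/foo1.secret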



Check OSD status:
ceph osd tree

Remove an OSD (a consolidated sketch follows step 6):
1. Lower the OSD's CRUSH weight so its data drains off
ceph osd crush reweight osd.1 0.1
2. Stop the OSD process
systemctl stop ceph-osd@1
3. Mark the OSD out
ceph osd out osd.1
4. Remove it from the CRUSH map
ceph osd crush remove osd.1
5. Delete the OSD
ceph osd rm osd.1
6. Delete the OSD's authentication entry
ceph auth del osd.1
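The same removal flow, gathered into a small script for convenience (OSD_ID is a hypothetical variable; set it to the OSD number, and wait for rebalancing between steps as needed):

#!/usr/bin/env bash
set -euo pipefail
OSD_ID=1

ceph osd crush reweight "osd.${OSD_ID}" 0     # drain data off the OSD (the notes above use 0.1)
# wait for rebalancing to finish before continuing (watch `ceph -s`)
ceph osd out "osd.${OSD_ID}"                  # mark it out
systemctl stop "ceph-osd@${OSD_ID}"           # stop the daemon on the OSD's host
ceph osd crush remove "osd.${OSD_ID}"         # remove it from the CRUSH map
ceph auth del "osd.${OSD_ID}"                 # delete its auth key
ceph osd rm "osd.${OSD_ID}"                   # remove the OSD id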

Prepare and activate an OSD (a one-step alternative follows these steps)
1. Format the disk
mkfs.ext4 /dev/sdb
2. Mount the disk at the OSD's data directory
mount /dev/sdb /var/lib/ceph/osd/ceph-1
(Note: with the ceph-volume lvm workflow used in steps 3-4, ceph-volume manages the device itself, so manually formatting and mounting it as in steps 1-2 is generally unnecessary, and the device must not be mounted when prepare runs.)
3. Prepare the OSD
ceph-volume lvm prepare --data /dev/sdb
4. Activate the OSD, using the OSD id and OSD fsid reported by prepare
ceph-volume lvm activate $id $fsid
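For reference, a single command that both prepares and activates (a sketch; /dev/sdb is the lab's data disk):

ceph-volume lvm create --data /dev/sdb
ceph-volume lvm list          # shows the resulting OSD id and fsid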


If any OSD is still down after activation, restart it with the following commands and watch the status:
systemctl | grep ceph
systemctl restart ceph-osd@1.service
ceph osd tree