
## I. Environment Preparation
- Reference: https://www.kancloud.cn/willseecloud/ceph/1788301
- Versions used in this deployment:
  - ceph-deploy: 2.0.x
  - Ceph: Nautilus 14.2.x
  - OS: CentOS 7.6

### 1. Components

- ceph-deploy: the deployment node, responsible for deploying the cluster as a whole; a node inside the Ceph cluster can also double as the deployment node.
- monitor: the Ceph monitor node, which handles the cluster's key management tasks; typically 3 or 5 monitors are deployed.
- mgr: the Ceph Manager node, which provides a unified entry point for the outside world.
- osd: the Object Storage Daemon, the node that actually stores the data.
- rgw: the Ceph Object Gateway, a service that lets clients access the Ceph cluster through standard object-storage APIs.
- mds: the Ceph Metadata Server, which stores the metadata of the file system service; only needed when CephFS is used (a quick status-check sketch follows this list).
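As a quick orientation aid (not part of the original article), the commands below can be used to check each of these components once the cluster from the following sections is running; the systemd unit names assume the default ceph-deploy layout used later in this guide.

```sh
ceph -s             # overall status: mon quorum, mgr active/standby, osd/rgw/mds summary
ceph mon stat       # monitor quorum membership
ceph mgr stat       # which mgr is active and how many standbys exist
ceph osd tree       # OSD layout and up/down state
ceph mds stat       # MDS state (only meaningful once CephFS is deployed)
# per-node daemon status (run on the node in question):
systemctl status 'ceph-mon@*' 'ceph-mgr@*' 'ceph-osd@*'
```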

### 2. Release History

The first Ceph release was 0.1, back in January 2008. The version numbering scheme stayed the same for years, until after 0.94.1 (the first point release of Hammer) shipped in April 2015; to avoid 0.99 (and 0.100 or 1.00?), a new scheme was adopted:

- x.0.z - development releases
- x.1.z - release candidates
- x.2.z - stable / bug-fix releases

x starts at 9, standing for Infernalis (I is the ninth letter), so the first development release of the ninth cycle is 9.0.0; subsequent development releases are 9.0.1, 9.0.2, and so on.

| Release name | Version series | First release |
| ---------- | ------------- | -------------------------- |
| Argonaut | 0.48 (LTS) | June 3, 2012 |
| Bobtail | 0.56 (LTS) | January 1, 2013 |
| Cuttlefish | 0.61 | May 7, 2013 |
| Dumpling | 0.67 (LTS) | August 14, 2013 |
| Emperor | 0.72 | November 9, 2013 |
| Firefly | 0.80 (LTS) | May 2014 |
| Giant | 0.87 | October 2014 |
| Hammer | 0.94 | April 2015 |
| Infernalis | 9.2.x | November 2015 |
| Jewel | 10.2.x | April 2016 |
| Kraken | 11.2.x | January 2017 |
| Luminous | 12.2.x | August 2017 |
| Mimic | 13.2.x | June 2018 |
| Nautilus | 14.2.x | March 2019 |
| Octopus | 15.2.x | March 2020 |
| Pacific | 16.2.x | March 2021 |
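
Reading a version string under the scheme above, using the release targeted by this guide as an example (a minimal sketch; the output is abridged and the exact build hash will differ):

```sh
$ ceph --version
ceph version 14.2.5 (...) nautilus (stable)
# 14 -> 14th release cycle, i.e. Nautilus (N is the 14th letter)
# .2 -> stable series
# .5 -> fifth bug-fix release in that series
```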

### 3. Base Configuration

> Base configuration covers installing common packages, hosts resolution, disabling the firewall, SELinux and NetworkManager, raising the file-descriptor limit, and adding the Yum repository. Run these steps on every node.

- Install common packages

```sh
yum install -y epel-release
yum install bash-completion vim net-tools tree nmap telnet gcc gcc-c++ unzip lrzsz wget jq sshpass -y
```

- Hosts resolution

```sh
cat <<- 'EOF' >> /etc/hosts
172.20.5.25 ceph-k8s-01
172.20.5.4 ceph-k8s-02
172.20.5.52 ceph-k8s-03
EOF
```

- Disable the firewall and SELinux

```sh
# disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
```

- File descriptor limit

```sh
ulimit -n 65536
\cp -a /etc/security/limits.conf{,.bak}
cat <<- 'EOF' >> /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
EOF
```

- Add the Yum repository

```sh
cat <<- 'EOF' > /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
EOF
```
### 4. Time Synchronization

> JD Cloud hosts already have time synchronization configured, so this step can be skipped there.

- Install NTP

```sh
yum -y install ntp
```

- Configure NTP

```sh
# ceph-k8s-01
$ vi /etc/ntp.conf
driftfile /var/lib/ntp/drift
restrict default nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
restrict 172.20.5.0 mask 255.255.255.0 nomodify notrap # allow this subnet to synchronize time
server 127.127.1.0 # fall back to the local clock when external time servers are unreachable
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor
# ceph-k8s-02 and ceph-k8s-03: synchronize from ceph-k8s-01
$ vi /etc/ntp.conf
server 172.20.5.25 iburst minpoll 4 maxpoll 10
```

- Start the service

```sh
systemctl start ntpd
systemctl enable ntpd
```

- Verify

```sh
$ ntpq -p
remote refid st t when poll reach delay offset jitter
==============================================================================
*ceph-k8s-01 LOCAL(0) 6 u 8 16 1 0.211 74.407 0.837
```

## II. Cluster Deployment

> ceph-k8s-01 is used as the deployment node. Unless otherwise noted, all of the following commands are run on ceph-k8s-01.

### 1. Install ceph-deploy

```sh
# install on ceph-k8s-01
yum install ceph-deploy python-setuptools -y
# check the installed version
ceph-deploy --version
```

- Passwordless SSH login

```sh
ssh-keygen
sshpass -p ${password} ssh-copy-id -o StrictHostKeyChecking=no root@ceph-k8s-01
sshpass -p ${password} ssh-copy-id -o StrictHostKeyChecking=no root@ceph-k8s-02
sshpass -p ${password} ssh-copy-id -o StrictHostKeyChecking=no root@ceph-k8s-03
```

### 2. Install Ceph

```sh
# working directory
mkdir -p /export/ceph/ && cd /export/ceph/
```

- Declare the 3 monitor nodes

```sh
# public-network: the network clients use to reach the Ceph cluster
# cluster-network: the network used for internal cluster traffic
ceph-deploy new \
--public-network 172.20.5.0/24 \
ceph-k8s-01 ceph-k8s-02 ceph-k8s-03
```
- Inspect the generated configuration files
```sh
$ cat ceph.conf
[global]
fsid = 8523ac6f-68ba-43a9-9498-5efcaf397a79
public_network = 172.20.5.0/24
mon_initial_members = ceph-k8s-01, ceph-k8s-02, ceph-k8s-03
mon_host = 172.20.5.25,172.20.5.4,172.20.5.52
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

$ cat ceph.mon.keyring
[mon.]
key = AQDWXXJhAAAAABAAZmrw/yIqAyfhJJMHygHrQQ==
caps mon = allow *
```

- Install Ceph
- By default the latest point release (14.2.22) is installed. To pin a specific version, run `yum install ceph-14.2.5 ceph-radosgw-14.2.5` on every node beforehand; in that case `ceph-deploy install` does not need to be run.
```sh
# --no-adjust-repos: use the locally configured repo instead of downloading the upstream one
ceph-deploy install --no-adjust-repos ceph-k8s-01 ceph-k8s-02 ceph-k8s-03
```

### 3. Initialize the Monitors

- Initialize the monitors and gather all keys. For high availability, a production Ceph cluster should run at least three monitors.

```sh
ceph-deploy mon create-initial
```

- Push the config and admin keyring to all nodes

```sh
ceph-deploy admin ceph-k8s-01 ceph-k8s-02 ceph-k8s-03
```

- Check the quorum status
- Look at the `quorum_leader_name` field (see the jq sketch after the code block)

```sh
ceph quorum_status
```
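
Since jq was installed in the base configuration, the leader and the quorum members can be pulled straight out of the JSON; a minimal sketch:

```sh
# quorum_status already emits JSON; json-pretty just makes it readable
ceph quorum_status -f json-pretty | jq -r '.quorum_leader_name'
ceph quorum_status -f json-pretty | jq -r '.quorum_names[]'
```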

### 4. Create the Managers

- Ceph Manager daemons run in active/standby mode: the active instance serves requests, while a standby stays dormant, keeping its state in sync so it is ready to take over at any time; the roles can switch (see the sketch after the code block).

```sh
ceph-deploy mgr create ceph-k8s-01 ceph-k8s-02 ceph-k8s-03
```
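
A small sketch, not part of the original steps, for observing and exercising the active/standby behaviour described above:

```sh
ceph mgr stat               # shows the active mgr and the standby count
ceph mgr fail ceph-k8s-01   # ask the active mgr to step down; a standby takes over
ceph -s | grep mgr          # confirm the new active/standby layout
```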

- Check the cluster status

```sh
$ ceph -s
cluster:
id: 8523ac6f-68ba-43a9-9498-5efcaf397a79
health: HEALTH_WARN
mons are allowing insecure global_id reclaim

services:
mon: 3 daemons, quorum ceph-k8s-02,ceph-k8s-01,ceph-k8s-03 (age 44m)
mgr: ceph-k8s-01(active, since 16s), standbys: ceph-k8s-03, ceph-k8s-02
osd: 0 osds: 0 up, 0 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
# disable the insecure global_id reclaim mode (clears the HEALTH_WARN above)
$ ceph config set mon auth_allow_insecure_global_id_reclaim false
```

- At this point the cluster health is HEALTH_WARN. The health detail shows this is because the OSD count (0) is less than osd_pool_default_size (3):

```sh
$ ceph health detail
HEALTH_WARN OSD count 0 < osd_pool_default_size 3
TOO_FEW_OSDS OSD count 0 < osd_pool_default_size 3
```

### 5. Add OSDs

- Add 3 OSDs using the /dev/vdc disk on each node. Make sure the device is not in use and does not contain any important data.

```sh
ceph-deploy osd create --data /dev/vdc ceph-k8s-01
ceph-deploy osd create --data /dev/vdc ceph-k8s-02
ceph-deploy osd create --data /dev/vdc ceph-k8s-03
```

- Ceph turns the disk into LVM volumes and then adds it to the cluster:

```sh
$ pvs |grep ceph
/dev/vdc ceph-ecd61702-3518-4782-aa1a-ea1d9070dcc3 lvm2 a-- <500.00g 0
$ vgs |grep ceph
ceph-ecd61702-3518-4782-aa1a-ea1d9070dcc3 1 1 0 wz--n- <500.00g 0
$ lvs | grep ceph
osd-block-7a6a3f0c-2bb9-4511-99d6-a19ecf090a45 ceph-ecd61702-3518-4782-aa1a-ea1d9070dcc3 -wi-ao---- <500.00g
```

- Check the cluster status

```sh
$ ceph -s
cluster:
id: 8523ac6f-68ba-43a9-9498-5efcaf397a79
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph-k8s-02,ceph-k8s-01,ceph-k8s-03 (age 57m)
mgr: ceph-k8s-01(active, since 13m), standbys: ceph-k8s-03, ceph-k8s-02
osd: 3 osds: 3 up (since 4m), 3 in (since 4m)

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 1.5 TiB / 1.5 TiB avail
pgs:
```

- List the OSDs

```sh
$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.46489 root default
-3 0.48830 host ceph-k8s-01
0 hdd 0.48830 osd.0 up 1.00000 1.00000
-5 0.48830 host ceph-k8s-02
1 hdd 0.48830 osd.1 up 1.00000 1.00000
-7 0.48830 host ceph-k8s-03
2 hdd 0.48830 osd.2 up 1.00000 1.00000
```

### 6. Tearing Down the Cluster
- Before reinstalling the cluster, clean up the old one first:
```sh
# if OSDs were added, remove the LVM volumes first and then zap the disks
lvs
lvremove osd-xxx ceph-xxx
ceph-deploy disk zap {ceph-node} /dev/vdc
# purge Ceph from the nodes
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
ceph-deploy purge {ceph-node} [{ceph-node}]
# remove leftovers
yum remove -y `rpm -qa|grep ceph`
yum clean all
rm -rf /export/ceph/
rm -rf /etc/ceph/*
rm -rf /var/lib/ceph/*
rm -rf /var/log/ceph/*
rm -rf /var/run/ceph/*
```

## III. Block Storage

```sh
# make sure /etc/ceph contains the cluster's ceph.conf and /etc/ceph/ceph.client.admin.keyring
# create a pool
$ ceph osd pool create disk 16 16
pool 'disk' created
# create an image
# rbd create (pool_name)/(rbd_name) --size xxxxxMB
$ rbd create disk/my-disk-01 --size 10GB
$ rbd -p disk ls
my-disk-01
# map the image
$ rbd map disk/my-disk-01
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable disk/my-disk-01 object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
# the kernel does not support some of the image features, so the map fails; check them with rbd info. The CentOS 3.10 kernel only supports the layering feature
$ rbd info disk/my-disk-01
rbd image 'my-disk-01':
size 10 GiB in 2560 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 380b2a08c7a3
block_name_prefix: rbd_data.380b2a08c7a3
format: 2
features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
op_features:
flags:
create_timestamp: Fri Oct 22 17:32:11 2021
access_timestamp: Fri Oct 22 17:32:11 2021
modify_timestamp: Fri Oct 22 17:32:11 2021

$ rbd feature disable disk/my-disk-01 object-map fast-diff deep-flatten
$ rbd map disk/my-disk-01
/dev/rbd0
$ mkfs.xfs /dev/rbd0
$ mount -t xfs /dev/rbd0 /mnt/
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda3 40G 3.0G 37G 8% /
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 16K 3.9G 1% /dev/shm
tmpfs 3.9G 11M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vdb 500G 33M 500G 1% /export
tmpfs 783M 0 783M 0% /run/user/0
tmpfs 3.9G 52K 3.9G 1% /var/lib/ceph/osd/ceph-0
/dev/rbd0 10G 33M 10G 1% /mnt
```
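
A minimal cleanup sketch, not part of the original walkthrough, for detaching and removing the test image afterwards; deleting the pool is optional and guarded by a monitor setting:

```sh
umount /mnt                      # detach the filesystem
rbd unmap /dev/rbd0              # unmap the block device
rbd rm disk/my-disk-01           # delete the image
# only if the whole pool should go away as well:
ceph tell mon.* injectargs --mon-allow-pool-delete=true
ceph osd pool delete disk disk --yes-i-really-really-mean-it
```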

## IV. Object Storage

### 1. Deploy RGW

- RGW, the Ceph Object Gateway, is a service that lets clients access the Ceph cluster through standard object-storage APIs. To use the object gateway component, at least one RGW instance must be deployed. Create the instances as follows.

```sh
# already installed by default when the cluster was deployed
$ rpm -qa |grep radosgw
ceph-radosgw-14.2.22-0.el7.x86_64
# on a new server, or if it is not installed, install it manually
$ ceph-deploy install --no-adjust-repos --rgw {node_name}
```

- Create the RGW instances

```sh
ceph-deploy rgw create ceph-k8s-01 ceph-k8s-02 ceph-k8s-03
```

- By default an RGW instance listens on port 7480. To change the port:

```sh
# change the RGW port
cat >> ceph.conf <<EOF
[client.rgw.ceph-k8s-01]
rgw frontends = civetweb port=81
[client.rgw.ceph-k8s-02]
rgw frontends = civetweb port=81
[client.rgw.ceph-k8s-03]
rgw frontends = civetweb port=81
EOF
# push the updated configuration
ceph-deploy --overwrite-conf config push ceph-k8s-01 ceph-k8s-02 ceph-k8s-03
# restart the service; run the matching command on each node
systemctl restart ceph-radosgw@rgw.ceph-k8s-01
systemctl restart ceph-radosgw@rgw.ceph-k8s-02
systemctl restart ceph-radosgw@rgw.ceph-k8s-03
```

- Access RGW

```sh
curl http://172.20.5.25:81
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
```

- By default, deploying RGW automatically creates the following 4 pools:

```sh
$ ceph osd lspools
1 .rgw.root
2 default.rgw.control
3 default.rgw.meta
4 default.rgw.log
```
### 2. Install s3cmd

- Create a user

```sh
radosgw-admin user create --conf $WORK_DIR/ceph.conf --uid=${USERNAME} \
--display-name="${USERNAME} for ceph" \
--email=${USERNAME}@jd.com
```

- List / inspect users

```sh
radosgw-admin user list
radosgw-admin user info --uid ${USERNAME}
```

- Install s3cmd
> s3cmd.tar.gz download: https://coding.jd.com/delivery-ops/ceph-deploy/tree/master/s3

```sh
tar -xzvf s3cmd.tar.gz
mv s3cmd /usr/local/
ln -s /usr/local/s3cmd/s3cmd /usr/bin/s3cmd
# install the dependency
rpm -ivh python-dateutil-1.5-7.el7.noarch.rpm
```

- Run `s3cmd --configure`

```yaml
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: HF48SN6NPEZ9XO071UWY
Secret Key: F680MfSu1kcPCJElLfc12GbXyvRnOQwAKDLuTRpx
Default Region [US]:

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 169.0.0.31:7480

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: 169.0.0.31:7480

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: no

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name:

New settings:
Access Key: HF48SN6NPEZ9XO071UWY
Secret Key: F680MfSu1kcPCJElLfc12GbXyvRnOQwAKDLuTRpx
Default Region: US
S3 Endpoint: 169.0.0.31:7480
DNS-style bucket+hostname:port template for accessing a bucket: 169.0.0.31:7480
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
```

- Verify the s3cmd tool and the object store

```sh
s3cmd mb s3://oss-demo1
s3cmd ls
```

- About bucket name case sensitivity (a usage sketch follows this list):
  - host_bucket = 127.0.0.1:7480/%(bucket)s
    - with this setting, bucket names must either start with an uppercase letter (e.g. s3://Oss-demo1) or use underscores (e.g. s3://my_test_1)
  - host_bucket = 127.0.0.1:7480
    - with this setting, bucket names such as s3://oss-demo1 work
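
A minimal object round-trip with s3cmd to confirm uploads work end to end; the bucket and file names here are only illustrative:

```sh
echo hello > test.txt
s3cmd put test.txt s3://oss-demo1/            # upload an object
s3cmd ls s3://oss-demo1/                      # list objects in the bucket
s3cmd get s3://oss-demo1/test.txt copy.txt    # download it again
s3cmd del s3://oss-demo1/test.txt             # delete the object
s3cmd rb s3://oss-demo1                       # remove the (now empty) bucket
```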

## V. File System (CephFS)

### 1. Deploy the Metadata Servers

- The Ceph Metadata Server stores the metadata of the file system service and is only needed when file storage (CephFS) is used.

```sh
# deploy mds from the ceph-deploy node
ceph-deploy mds create ceph-k8s-01 ceph-k8s-02 ceph-k8s-03
```

- Check the MDS status

```sh
$ ceph -s
cluster:
id: 8523ac6f-68ba-43a9-9498-5efcaf397a79
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph-k8s-02,ceph-k8s-01,ceph-k8s-03 (age 46m)
mgr: ceph-k8s-01(active, since 67m), standbys: ceph-k8s-03, ceph-k8s-02
mds: 3 up:standby
osd: 3 osds: 3 up (since 57m), 3 in (since 57m)
rgw: 3 daemons active (ceph-k8s-01, ceph-k8s-02, ceph-k8s-03)

task status:

data:
pools: 4 pools, 128 pgs
objects: 187 objects, 1.2 KiB
usage: 3.0 GiB used, 1.5 TiB / 1.5 TiB avail
pgs: 128 active+clean

io:
client: 51 KiB/s rd, 0 B/s wr, 51 op/s rd, 34 op/s wr
$ ceph mds stat
3 up:standby
```

### 2. Create the File System

- Create the pools. Aim for roughly 100 PGs per OSD; the PG count should be a power of two, and the per-PG capacity implied by each pool's total size and PG count should be roughly the same across pools (a worked example follows the code block).

```sh
# pool-name: name of the pool
# pg-num: total number of PGs in the pool
# pgp-num: number of PGs used for placement, usually equal to pg-num
# pg-num and pgp-num can only be increased, never decreased
# ceph osd pool create {pool-name} {pg-num} [{pgp-num}]
$ ceph osd pool create cephfs_data 16 16
pool 'cephfs_data' created
$ ceph osd pool create cephfs_metadata 16 16
pool 'cephfs_metadata' created
$ ceph osd pool ls
.rgw.root
default.rgw.control
default.rgw.meta
default.rgw.log
cephfs_data
$ ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 1.5 TiB 1.5 TiB 11 MiB 3.0 GiB 0.20
TOTAL 1.5 TiB 1.5 TiB 11 MiB 3.0 GiB 0.20

POOLS:
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
.rgw.root 1 32 1.2 KiB 4 768 KiB 0 474 GiB
default.rgw.control 2 32 0 B 8 0 B 0 474 GiB
default.rgw.meta 3 32 0 B 0 0 B 0 474 GiB
default.rgw.log 4 32 0 B 175 0 B 0 474 GiB
cephfs_data 5 16 0 B 0 0 B 0 474 GiB
```
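
A worked example of the sizing rule above, as a hedged sketch of the common "(OSDs × ~100 target PGs per OSD) / replica count, rounded to a power of two" rule of thumb; the numbers match this 3-OSD, 3-replica cluster:

```sh
# (3 OSDs * 100) / 3 replicas = 100 PGs for the whole cluster,
# rounded up to the nearest power of two -> 128 PGs in total.
echo $(( 3 * 100 / 3 ))
# those PGs are then split across all pools; with the four RGW pools already
# holding 32 PGs each, small values such as 16 per CephFS pool keep the
# cluster-wide total close to the target.
```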

- Create the file system

```sh
$ ceph fs new cephfs_demo cephfs_metadata cephfs_data
new fs with metadata pool 6 and data pool 5
$ ceph fs ls
name: cephfs_demo, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
```

- Check the status

```sh
$ ceph -s
cluster:
id: 8523ac6f-68ba-43a9-9498-5efcaf397a79
health: HEALTH_OK

services:
mon: 3 daemons, quorum ceph-k8s-02,ceph-k8s-01,ceph-k8s-03 (age 81m)
mgr: ceph-k8s-01(active, since 102m), standbys: ceph-k8s-03, ceph-k8s-02
mds: cephfs_demo:1 {0=ceph-k8s-02=up:active} 2 up:standby
osd: 3 osds: 3 up (since 92m), 3 in (since 92m)
rgw: 3 daemons active (ceph-k8s-01, ceph-k8s-02, ceph-k8s-03)

task status:

data:
pools: 6 pools, 160 pgs
objects: 209 objects, 3.4 KiB
usage: 3.0 GiB used, 1.5 TiB / 1.5 TiB avail
pgs: 160 active+clean
```

- One active MDS and two standbys:

```sh
$ ceph mds stat
cephfs_demo:1 {0=ceph-k8s-02=up:active} 2 up:standby
```

### 3. Kernel Mount

Linux kernels >= 2.6.34 support mounting CephFS natively through the kernel client.
- Mount the Ceph file system (an explicit secretfile variant is sketched after the code block)

```sh
$ yum install -y ceph-common
$ mkdir /opt/mycephfs
$ mount -o name=admin -t ceph 172.20.5.25:6789:/ /opt/mycephfs
$ df -h|grep mycephfs
172.20.5.25:6789:/ 474G 0 474G 0% /opt/mycephfs
```
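
The mount above relies on the mount.ceph helper from ceph-common finding the admin keyring under /etc/ceph. As a hedged alternative, the secret can be passed explicitly; the secret-file path and the commented fstab line are illustrative assumptions:

```sh
# extract the admin key into a root-only file
ceph auth get-key client.admin > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 172.20.5.25:6789,172.20.5.4:6789,172.20.5.52:6789:/ /opt/mycephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret
# optional /etc/fstab entry for mounting at boot:
# 172.20.5.25:6789,172.20.5.4:6789,172.20.5.52:6789:/ /opt/mycephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime 0 0
```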

### 4. FUSE Mount

Linux kernels older than 2.6.34 do not support the kernel mount; use ceph-fuse instead.
- Install the client

```sh
yum install -y ceph-fuse
```

- Mount

```sh
mkdir -p /opt/ceph-fuse
ceph-fuse -n client.admin \
-m 172.20.5.25:6789,172.20.5.4:6789,172.20.5.52:6789 /opt/ceph-fuse
```

- Verify

```sh
$ df -Th | grep ceph-fuse
ceph-fuse fuse.ceph-fuse 474G 0 474G 0% /opt/ceph-fuse
```
## VI. Common Issues

### 1. Repairing an OSD

- Manually repairing an OSD that cannot start because the contents of /var/lib/ceph/osd/ceph-x were lost.
- Occasionally, after rebooting a Ceph node, none of its OSDs can start and all files under the mount directory (/var/lib/ceph/osd/ceph-x/) are gone; an accidental deletion can cause the same symptom.

```sh
# osd.0 on ceph-deploy-01 is down and /var/lib/ceph/osd/ceph-0/ is empty
$ ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.46489 root default
-3 0.48830 host ceph-deploy-01
0 hdd 0.48830 osd.0 down 1.00000 1.00000
-5 0.48830 host ceph-deploy-02
1 hdd 0.48830 osd.1 up 1.00000 1.00000
-7 0.48830 host ceph-deploy-03
2 hdd 0.48830 osd.2 up 1.00000 1.00000
```

- Mount a tmpfs at the OSD directory

```sh
mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
```

- On a healthy node, check which files the directory should contain and copy them to ceph-deploy-01

```sh
$ ll /var/lib/ceph/osd/ceph-1/
total 24
lrwxrwxrwx 1 ceph ceph 93 Nov 3 18:42 block -> /dev/ceph-c9d0db9d-8291-4b86-9c80-120e7430fa1f/osd-block-8b485f6c-0890-4fec-9eef-ced2c2a8d6c8
-rw------- 1 ceph ceph 37 Nov 3 18:42 ceph_fsid
-rw------- 1 ceph ceph 37 Nov 3 18:42 fsid
-rw------- 1 ceph ceph 55 Nov 3 18:42 keyring
-rw------- 1 ceph ceph 6 Nov 3 18:42 ready
-rw------- 1 ceph ceph 10 Nov 3 18:42 type
-rw------- 1 ceph ceph 2 Nov 3 18:42 whoami
$ scp /var/lib/ceph/osd/ceph-1/* root@ceph-deploy-01:/var/lib/ceph/osd/ceph-0
```

- Check the logical volume information

```sh
$ ceph-volume lvm list

====== osd.0 =======

[block] /dev/ceph-784527fe-91ff-40d7-8c8a-cb8c899099da/osd-block-9a88c371-5b60-423d-841b-7c226179110e

block device /dev/ceph-784527fe-91ff-40d7-8c8a-cb8c899099da/osd-block-9a88c371-5b60-423d-841b-7c226179110e
block uuid LvT13e-kgvk-pTkM-mPWp-Wk2X-RmcA-GwPe5F
cephx lockbox secret
cluster fsid 7af6e250-e7e3-454d-8220-e9557c445164
cluster name ceph
crush device class None
encrypted 0
osd fsid 9a88c371-5b60-423d-841b-7c226179110e
osd id 0
osdspec affinity
type block
vdo 0
devices /dev/sdb
```

- Check the auth information

```sh
$ ceph auth list
osd.0
key: AQCAK4FhkxXuBxAA/KfMzP3JAiA8sy3yBx+REg==
caps: [mgr] allow profile osd
caps: [mon] allow profile osd
caps: [osd] allow *
...
```

- Using the `osd fsid` field from the ceph-volume output, rewrite the fsid file

```sh
echo "9a88c371-5b60-423d-841b-7c226179110e" > fsid
```

- Using the key of osd.0 from the auth list, rewrite the keyring file with the OSD id and key in the following format:

```sh
cat <<- 'EOF' > keyring
[osd.0]
key = AQCAK4FhkxXuBxAA/KfMzP3JAiA8sy3yBx+REg==
EOF
```

- Set whoami to the id of this OSD

```sh
echo 0 > whoami
```

- Using the [block] device from the ceph-volume output, recreate the symlink

```sh
ln -s /dev/ceph-784527fe-91ff-40d7-8c8a-cb8c899099da/osd-block-9a88c371-5b60-423d-841b-7c226179110e /var/lib/ceph/osd/ceph-0/block
```

- Fix ownership

```sh
chown ceph:ceph /dev/ceph-784527fe-91ff-40d7-8c8a-cb8c899099da/osd-block-9a88c371-5b60-423d-841b-7c226179110e
chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
```

- Start the OSD

```sh
systemctl start ceph-osd@0
```

- osd.0 is now back up:

```sh
ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.46489 root default
-3 0.48830 host ceph-deploy-01
0 hdd 0.48830 osd.0 up 1.00000 1.00000
-5 0.48830 host ceph-deploy-02
1 hdd 0.48830 osd.1 up 1.00000 1.00000
-7 0.48830 host ceph-deploy-03
2 hdd 0.48830 osd.2 up 1.00000 1.00000
```
## VII. Integration with Kubernetes

### 1. rbd-provisioner

> rbd-provisioner provides a dynamic-provisioning implementation of Ceph RBD persistent storage, similar to kubernetes.io/rbd, for Kubernetes 1.5+. Some users deploy their clusters with kubeadm, or run kube-controller-manager as a container.
>
> In that setup Kubernetes creates Ceph RBD PVs/PVCs without any problem, but dynamic provisioning of the storage lifecycle fails with `"rbd: create volume failed, err: failed to create rbd image: executable file not found in $PATH:"`.
>
> The cause is that the kube-controller-manager image from gcr.io does not ship the ceph-common package, so the rbd command is missing and the controller cannot create RBD images for pods.

- Note: the Ceph client bundled in rbd-provisioner is 13.2.1. Newer Ceph releases (e.g. 14.2.22) may be incompatible; Ceph 14.2.5 is recommended.

#### 1. Ceph Side

- Create a pool
```sh
ceph osd pool create kubeadm-rbd 16 16
ceph osd pool application enable kubeadm-rbd rbd
```
- Get the admin key
```sh
ceph auth get-key client.admin |base64
```

#### 2. Kubernetes Side

##### 1. Namespace
- If you change the namespace, remember to adjust the namespace field in the YAML below accordingly
```sh
mkdir -p /data/deploy/ceph-rbd && cd /data/deploy/ceph-rbd
mkdir -p setup
cat <<- 'EOF' > setup/ceph-rbd-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ceph
EOF
kubectl apply -f setup
```

##### 2. rbd-provisioner

###### 1. RBAC

```yaml
cat <<- 'EOF' > ceph-rbd-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
  namespace: ceph
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: ceph
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
  namespace: ceph
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
  namespace: ceph
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: ceph
EOF
```

###### 2. Deployment

```yaml
cat <<- 'EOF' > ceph-rbd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
  namespace: ceph
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: "ai-image.jd.com/ceph/rbd-provisioner:latest"
          imagePullPolicy: IfNotPresent
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
EOF
```

##### 3. Secret

- Replace the key with the one obtained above

```sh
cat <<- 'EOF' > ceph-rbd-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-rbd-admin-secret
  namespace: ceph
type: "kubernetes.io/rbd"
data:
  # ceph auth get-key client.admin | base64
  key: QVFBWkdJbGgzcnV4Q0JBQW9WVS9yVjFmZHRPQzIvajA5Y1d6QVE9PQ==
EOF
```

##### 4. StorageClass

- Adjust the monitors addresses and the pool name

```yaml
cat <<- 'EOF' > ceph-rbd-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubeadm-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 169.0.0.21:6789,169.0.0.22:6789,169.0.0.23:6789
  pool: kubeadm-rbd
  adminId: admin
  adminSecretNamespace: ceph
  adminSecretName: ceph-rbd-admin-secret
  userId: admin
  userSecretNamespace: ceph
  userSecretName: ceph-rbd-admin-secret
  imageFormat: "2"
  imageFeatures: layering
  fsType: ext4
EOF
```
- Apply the manifests (a verification sketch follows the code block)

```sh
kubectl apply -f .
```
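
A quick, hedged way to confirm the provisioner is running and the StorageClass is registered before moving on to the demo:

```sh
kubectl -n ceph get pods -l app=rbd-provisioner        # the provisioner pod should be Running
kubectl get storageclass kubeadm-rbd                   # the StorageClass should be listed
kubectl -n ceph logs deploy/rbd-provisioner --tail=20  # watch for provisioning errors
```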

##### 5. Demo

- Deploy the demo

```yaml
mkdir -p demo
cat <<- 'EOF' > demo/ceph-pvc-demo.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: kubeadm-rbd
  resources:
    requests:
      storage: 2Gi
EOF
cat <<- 'EOF' > demo/ceph-nginx-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: my-nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      initContainers:
        - image: nginx
          name: init-nginx
          command:
            - "/bin/sh"
          args:
            - "-c"
            - "echo '123' > /usr/share/nginx/html/index.html && exit 0 || exit 1"
          volumeMounts:
            - name: pvc
              mountPath: "/usr/share/nginx/html"
      containers:
        - image: nginx
          name: nginx
          volumeMounts:
            - name: pvc
              mountPath: "/usr/share/nginx/html"
      volumes:
        - name: pvc
          persistentVolumeClaim:
            claimName: ceph-pvc-demo
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: my-nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF
kubectl apply -f demo/
```

- Test the demo

```sh
$ curl `kubectl get svc my-nginx -o jsonpath={".spec.clusterIP"}`
123
```

- Delete the demo

```sh
kubectl delete -f demo/
```

 

 

### 2. Ceph-csi
> Docs: https://docs.ceph.com/docs/master/rbd/rbd-kubernetes/
> GitHub: https://github.com/ceph/ceph-csi/

- CSI (Container Storage Interface) is a standard for exposing arbitrary block and file storage systems to containerized workloads on container orchestration systems (COs) such as Kubernetes
- In Kubernetes v1.13 and later, Ceph block devices can be used through ceph-csi
- Environment versions:
  - Kubernetes: 1.19.15
  - Ceph: 14.2.22
  - ceph-csi: v3.4.0

#### 1. Ceph Side
##### 1. Create a Pool

```sh
ceph osd pool create kubernetes 16 16
# initialize the pool for use by RBD
rbd pool init kubernetes
```

##### 2. Configure ceph-csi

- Create a new user for Kubernetes and ceph-csi. Run the following and record the generated key:

```sh
$ ceph auth get-or-create client.kubernetes \
mon 'profile rbd' \
osd 'profile rbd pool=kubernetes' \
mgr 'profile rbd pool=kubernetes'
[client.kubernetes]
key = AQAINIFh2uXTFBAArcmnnMkvT8hA/mgYE3Uddg==
```

- Get the monitor addresses

```sh
$ ceph mon dump
epoch 1
fsid 3c351db3-0b53-491e-9296-07d06a5f75ce
last_changed 2021-11-04 11:33:12.205128
created 2021-11-04 11:33:12.205128
min_mon_release 14 (nautilus)
0: [v2:169.0.0.31:3300/0,v1:169.0.0.31:6789/0] mon.ceph-deploy-01
1: [v2:169.0.0.32:3300/0,v1:169.0.0.32:6789/0] mon.ceph-deploy-02
2: [v2:169.0.0.33:3300/0,v1:169.0.0.33:6789/0] mon.ceph-deploy-03
dumped monmap epoch 1
```

#### 2. Kubernetes Side

- Difference from the GitHub manifests: a Namespace definition has been added

##### 1. Namespace

- If you use a different namespace, remember to adjust the namespace field in the YAML below

```sh
# working directory
mkdir -p /data/deploy/ceph-csi && cd /data/deploy/ceph-csi
mkdir -p setup
cat <<- 'EOF' > setup/cephcsi-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ceph-system
EOF
kubectl apply -f setup
```

##### 2. ConfigMap

- Note: ceph-csi currently supports only the legacy v1 protocol, i.e. the monitors' port 6789
- clusterID: replace with the cluster fsid
- monitors: replace with the list of monitor addresses

```yaml
cat <<EOF > csi-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-config
  namespace: ceph-system
data:
  config.json: |-
    [
      {
        "clusterID": "3c351db3-0b53-491e-9296-07d06a5f75ce",
        "monitors": [
          "169.0.0.31:6789",
          "169.0.0.32:6789",
          "169.0.0.33:6789"
        ]
      }
    ]
EOF
```

- Newer ceph-csi versions require the key management service (KMS) provider details to be defined. If no KMS is configured, put an empty configuration in csi-kms-config-map. See: https://github.com/ceph/ceph-csi/tree/master/examples/kms

```yaml
cat <<EOF > csi-kms-config-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-csi-encryption-kms-config
  namespace: ceph-system
data:
  config.json: |-
    {}
EOF
```

##### 3. Secret

- Generate the cephx secret for csi. ceph-csi needs cephx credentials to communicate with the Ceph cluster. Use the newly created Kubernetes user ID and cephx key, similar to the following example:

```yaml
cat <<EOF > csi-rbd-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-system
stringData:
  userID: kubernetes
  # ceph auth get-key client.kubernetes
  userKey: AQAGlYhhn1FvGRAALZ6NwVSrr1vSiDtgmAwikQ==
EOF
```

##### 4. RBAC

- csi-provisioner

```yaml
cat <<- 'EOF' > csi-provisioner-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-provisioner
  namespace: ceph-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-external-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "update", "delete", "patch"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims/status"]
    verbs: ["update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshots"]
    verbs: ["get", "list"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents"]
    verbs: ["create", "get", "list", "watch", "update", "delete"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["snapshot.storage.k8s.io"]
    resources: ["volumesnapshotcontents/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    namespace: ceph-system
roleRef:
  kind: ClusterRole
  name: rbd-external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # replace with non-default namespace name
  namespace: ceph-system
  name: rbd-external-provisioner-cfg
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["get", "watch", "list", "delete", "update", "create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-provisioner-role-cfg
  # replace with non-default namespace name
  namespace: ceph-system
subjects:
  - kind: ServiceAccount
    name: rbd-csi-provisioner
    # replace with non-default namespace name
    namespace: ceph-system
roleRef:
  kind: Role
  name: rbd-external-provisioner-cfg
  apiGroup: rbac.authorization.k8s.io
EOF
```

- Node plugin

```yaml
cat <<- 'EOF' > csi-nodeplugin-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-csi-nodeplugin
  namespace: ceph-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get"]
  # allow to read Vault Token and connection options from the Tenants namespace
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["serviceaccounts"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["list", "get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-csi-nodeplugin
subjects:
  - kind: ServiceAccount
    name: rbd-csi-nodeplugin
    namespace: ceph-system
roleRef:
  kind: ClusterRole
  name: rbd-csi-nodeplugin
  apiGroup: rbac.authorization.k8s.io
EOF
```

##### 5. Deploy

- csi-provisioner

```yaml
cat <<- 'EOF' > csi-rbdplugin-provisioner.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
name: csi-rbdplugin-provisioner
namespace: ceph-system
spec:
replicas: 3
selector:
matchLabels:
app: csi-rbdplugin-provisioner
template:
metadata:
labels:
app: csi-rbdplugin-provisioner
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- csi-rbdplugin-provisioner
topologyKey: "kubernetes.io/hostname"
serviceAccountName: rbd-csi-provisioner
priorityClassName: system-cluster-critical
containers:
- name: csi-provisioner
image: k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
args:
- "--csi-address=$(ADDRESS)"
- "--v=5"
- "--timeout=150s"
- "--retry-interval-start=500ms"
- "--leader-election=true"
# set it to true to use topology based provisioning
- "--feature-gates=Topology=false"
# if fstype is not specified in storageclass, ext4 is default
- "--default-fstype=ext4"
- "--extra-create-metadata=true"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-snapshotter
image: k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1
args:
- "--csi-address=$(ADDRESS)"
- "--v=5"
- "--timeout=150s"
- "--leader-election=true"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
securityContext:
privileged: true
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-attacher
image: k8s.gcr.io/sig-storage/csi-attacher:v3.2.1
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
- "--leader-election=true"
- "--retry-interval-start=500ms"
env:
- name: ADDRESS
value: /csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-resizer
image: k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
args:
- "--csi-address=$(ADDRESS)"
- "--v=5"
- "--timeout=150s"
- "--leader-election"
- "--retry-interval-start=500ms"
- "--handle-volume-inuse-error=false"
env:
- name: ADDRESS
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: csi-rbdplugin
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
# for stable functionality replace canary with latest release version
image: quay.io/cephcsi/cephcsi:v3.4.0
args:
- "--nodeid=$(NODE_ID)"
- "--type=rbd"
- "--controllerserver=true"
- "--endpoint=$(CSI_ENDPOINT)"
- "--v=5"
- "--drivername=rbd.csi.ceph.com"
- "--pidlimit=-1"
- "--rbdhardmaxclonedepth=8"
- "--rbdsoftmaxclonedepth=4"
- "--enableprofiling=false"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# - name: POD_NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: spec.namespace
# - name: KMS_CONFIGMAP_NAME
# value: encryptionConfig
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- mountPath: /dev
name: host-dev
- mountPath: /sys
name: host-sys
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: ceph-csi-encryption-kms-config
mountPath: /etc/ceph-csi-encryption-kms-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: csi-rbdplugin-controller
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
# for stable functionality replace canary with latest release version
image: quay.io/cephcsi/cephcsi:v3.4.0
args:
- "--type=controller"
- "--v=5"
- "--drivername=rbd.csi.ceph.com"
- "--drivernamespace=$(DRIVER_NAMESPACE)"
env:
- name: DRIVER_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: liveness-prometheus
image: quay.io/cephcsi/cephcsi:v3.4.0
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport=8680"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi-provisioner.sock
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
imagePullPolicy: "IfNotPresent"
volumes:
- name: host-dev
hostPath:
path: /dev
- name: host-sys
hostPath:
path: /sys
- name: lib-modules
hostPath:
path: /lib/modules
- name: socket-dir
emptyDir: {
medium: "Memory"
}
- name: ceph-csi-config
configMap:
name: ceph-csi-config
- name: ceph-csi-encryption-kms-config
configMap:
name: ceph-csi-encryption-kms-config
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
---
kind: Service
apiVersion: v1
metadata:
name: csi-rbdplugin-provisioner
namespace: ceph-system
labels:
app: csi-metrics
spec:
selector:
app: csi-rbdplugin-provisioner
ports:
- name: http-metrics
port: 8080
protocol: TCP
targetPort: 8680
EOF
```

- nodeplugin

```yaml
cat <<- 'EOF' > csi-rbdplugin.yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
name: csi-rbdplugin
namespace: ceph-system
spec:
selector:
matchLabels:
app: csi-rbdplugin
template:
metadata:
labels:
app: csi-rbdplugin
spec:
serviceAccountName: rbd-csi-nodeplugin
hostNetwork: true
hostPID: true
priorityClassName: system-node-critical
# to use e.g. Rook orchestrated cluster, and mons' FQDN is
# resolved through k8s service, set dns policy to cluster first
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: driver-registrar
# This is necessary only for systems with SELinux, where
# non-privileged sidecar containers cannot access unix domain socket
# created by privileged CSI driver container.
securityContext:
privileged: true
image: k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
args:
- "--v=5"
- "--csi-address=/csi/csi.sock"
- "--kubelet-registration-path=/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock"
env:
- name: KUBE_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: registration-dir
mountPath: /registration
- name: csi-rbdplugin
securityContext:
privileged: true
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
# for stable functionality replace canary with latest release version
image: quay.io/cephcsi/cephcsi:v3.4.0
args:
- "--nodeid=$(NODE_ID)"
- "--pluginpath=/var/lib/kubelet/plugins"
- "--stagingpath=/var/lib/kubelet/plugins/kubernetes.io/csi/pv/"
- "--type=rbd"
- "--nodeserver=true"
- "--endpoint=$(CSI_ENDPOINT)"
- "--v=5"
- "--drivername=rbd.csi.ceph.com"
- "--enableprofiling=false"
# If topology based provisioning is desired, configure required
# node labels representing the nodes topology domain
# and pass the label names below, for CSI to consume and advertise
# its equivalent topology domain
# - "--domainlabels=failure-domain/region,failure-domain/zone"
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# - name: POD_NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: spec.namespace
# - name: KMS_CONFIGMAP_NAME
# value: encryptionConfig
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- mountPath: /dev
name: host-dev
- mountPath: /sys
name: host-sys
- mountPath: /run/mount
name: host-mount
- mountPath: /lib/modules
name: lib-modules
readOnly: true
- name: ceph-csi-config
mountPath: /etc/ceph-csi-config/
- name: ceph-csi-encryption-kms-config
mountPath: /etc/ceph-csi-encryption-kms-config/
- name: plugin-dir
mountPath: /var/lib/kubelet/plugins
mountPropagation: "Bidirectional"
- name: mountpoint-dir
mountPath: /var/lib/kubelet/pods
mountPropagation: "Bidirectional"
- name: keys-tmp-dir
mountPath: /tmp/csi/keys
- name: liveness-prometheus
securityContext:
privileged: true
image: quay.io/cephcsi/cephcsi:v3.4.0
args:
- "--type=liveness"
- "--endpoint=$(CSI_ENDPOINT)"
- "--metricsport=8680"
- "--metricspath=/metrics"
- "--polltime=60s"
- "--timeout=3s"
env:
- name: CSI_ENDPOINT
value: unix:///csi/csi.sock
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- name: socket-dir
mountPath: /csi
imagePullPolicy: "IfNotPresent"
volumes:
- name: socket-dir
hostPath:
path: /var/lib/kubelet/plugins/rbd.csi.ceph.com
type: DirectoryOrCreate
- name: plugin-dir
hostPath:
path: /var/lib/kubelet/plugins
type: Directory
- name: mountpoint-dir
hostPath:
path: /var/lib/kubelet/pods
type: DirectoryOrCreate
- name: registration-dir
hostPath:
path: /var/lib/kubelet/plugins_registry/
type: Directory
- name: host-dev
hostPath:
path: /dev
- name: host-sys
hostPath:
path: /sys
- name: host-mount
hostPath:
path: /run/mount
- name: lib-modules
hostPath:
path: /lib/modules
- name: ceph-csi-config
configMap:
name: ceph-csi-config
- name: ceph-csi-encryption-kms-config
configMap:
name: ceph-csi-encryption-kms-config
- name: keys-tmp-dir
emptyDir: {
medium: "Memory"
}
---
# This is a service to expose the liveness metrics
apiVersion: v1
kind: Service
metadata:
name: csi-metrics-rbdplugin
namespace: ceph-system
labels:
app: csi-metrics
spec:
ports:
- name: http-metrics
port: 8080
protocol: TCP
targetPort: 8680
selector:
app: csi-rbdplugin
EOF
```

##### 6. StorageClass
- clusterID: replace with the cluster fsid

```yaml
cat <<EOF > csi-rbd-storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 3c351db3-0b53-491e-9296-07d06a5f75ce
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-system
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-system
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
```
- Apply the manifests (a quick check follows)

```sh
# the required images are archived in object storage: finance-bbo-for-download/ceph-csi/v3.4.0/
kubectl apply -f .
```
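
A hedged sanity check after applying the manifests; all provisioner and nodeplugin pods should end up Running and the StorageClass should appear:

```sh
kubectl -n ceph-system get pods -o wide          # csi-rbdplugin-provisioner-* and csi-rbdplugin-* pods
kubectl get storageclass csi-rbd-sc              # the StorageClass from the previous step
kubectl -n ceph-system logs ds/csi-rbdplugin -c csi-rbdplugin --tail=20   # look for driver/registration errors
```
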
##### 7. Demo

```yaml
mkdir -p demo
cat <<EOF > demo/raw-block-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
kubectl apply -f demo/
```
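
The PVC above requests a raw block volume, so a consuming pod attaches it via `volumeDevices` rather than a filesystem mount. A minimal sketch of such a pod (the image, pod name, and device path are illustrative assumptions, not from the original article):

```sh
cat <<EOF > demo/raw-block-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: centos
      image: centos:7
      command: ["sleep", "infinity"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda    # the RBD image appears inside the container as this device
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
EOF
kubectl apply -f demo/raw-block-pod.yaml
```
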
- Delete the demo
```sh
kubectl delete -f demo/
```

 
