Using the Ceph File System

The environment for this demonstration is as follows:

hostname            IP                roles
node01.srv.world    192.168.10.101    Object Storage; Monitor Daemon; Manager Daemon
node02.srv.world    192.168.10.102    Object Storage
node03.srv.world    192.168.10.103    Object Storage
dlp.srv.world       192.168.10.142    client
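
All nodes in this series resolve one another by hostname. If DNS is not set up, equivalent /etc/hosts entries on every node would be assumed, along these lines:

192.168.10.101  node01.srv.world node01
192.168.10.102  node02.srv.world node02
192.168.10.103  node03.srv.world node03
192.168.10.142  dlp.srv.world dlp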

1. Install the required packages on the dlp node

[root@node01 ~]#  ssh dlp "dnf -y install centos-release-ceph-quincy; dnf -y install ceph-fuse"
Last metadata expiration check: 0:16:40 ago on Tue 12 Sep 2023 04:55:35 PM CST.
Dependencies resolved.
================================================================================
 Package                       Arch      Version         Repository        Size
================================================================================
Installing:
 centos-release-ceph-quincy    noarch    1.0-2.el9s      extras-common    7.4 k

Transaction Summary
================================================================================
Install  1 Package

Total download size: 7.4 k
Installed size: 915
Downloading Packages:
centos-release-ceph-quincy-1.0-2.el9s.noarch.rp  45 kB/s | 7.4 kB     00:00
--------------------------------------------------------------------------------
Total                                           578  B/s | 7.4 kB     00:13
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : centos-release-ceph-quincy-1.0-2.el9s.noarch           1/1
  Verifying        : centos-release-ceph-quincy-1.0-2.el9s.noarch           1/1

Installed:
  centos-release-ceph-quincy-1.0-2.el9s.noarch

Complete!
CentOS-9-stream - Ceph Quincy                   280 kB/s | 526 kB     00:01
Last metadata expiration check: 0:00:01 ago on Tue 12 Sep 2023 05:12:31 PM CST.
Dependencies resolved.
================================================================================
 Package         Arch        Version                Repository             Size
================================================================================
Installing:
 ceph-fuse       x86_64      2:18.2.0-1.el9s        centos-ceph-reef      815 k
Installing dependencies:
 fuse3-libs      x86_64      3.10.2-6.el9           appstream              91 k

Transaction Summary
================================================================================
Install  2 Packages

Total download size: 906 k
Installed size: 2.6 M
Downloading Packages:
(1/2): fuse3-libs-3.10.2-6.el9.x86_64.rpm       9.0 kB/s |  91 kB     00:10
(2/2): ceph-fuse-18.2.0-1.el9s.x86_64.rpm        81 kB/s | 815 kB     00:10
--------------------------------------------------------------------------------
Total                                            39 kB/s | 906 kB     00:23
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                        1/1
  Installing       : fuse3-libs-3.10.2-6.el9.x86_64                         1/2
  Installing       : ceph-fuse-2:18.2.0-1.el9s.x86_64                       2/2
  Running scriptlet: ceph-fuse-2:18.2.0-1.el9s.x86_64                       2/2
  Verifying        : ceph-fuse-2:18.2.0-1.el9s.x86_64                       1/2
  Verifying        : fuse3-libs-3.10.2-6.el9.x86_64                         2/2

Installed:
  ceph-fuse-2:18.2.0-1.el9s.x86_64        fuse3-libs-3.10.2-6.el9.x86_64

Complete!
[root@node01 ~]#
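
To confirm the client packages actually landed on dlp before moving on, an optional sanity check:

[root@node01 ~]# ssh dlp "rpm -q centos-release-ceph-quincy ceph-fuse"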

2. Configure the Metadata Server on node01

a. Create the required directory, generate a keyring, set its ownership, register the key with the cluster, and start the service

[root@node01 ~]# mkdir -p /var/lib/ceph/mds/ceph-node01
[root@node01 ~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
creating /var/lib/ceph/mds/ceph-node01/keyring
[root@node01 ~]# chown -R ceph:ceph /var/lib/ceph/mds/ceph-node01
[root@node01 ~]# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
added key for mds.node01
[root@node01 ~]# systemctl enable --now ceph-mds@node01
Created symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@node01.service → /usr/lib/systemd/system/ceph-mds@.service.
[root@node01 ~]# systemctl  status ceph-mds@node01.service
● ceph-mds@node01.service - Ceph metadata server daemon
     Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; preset: disabled)
     Active: active (running) since Tue 2023-09-12 17:21:18 CST; 1min 8s ago
   Main PID: 15670 (ceph-mds)
      Tasks: 16
     Memory: 15.1M
        CPU: 213ms
     CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@node01.service
             └─15670 /usr/bin/ceph-mds -f --cluster ceph --id node01 --setuser ceph --setgroup ceph

Sep 12 17:21:18 node01.srv.world systemd[1]: Started Ceph metadata server daemon.
Sep 12 17:21:19 node01.srv.world ceph-mds[15670]: starting mds.node01 at
[root@node01 ~]#
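
The capabilities registered above (rwx on the OSDs, allow on the MDS, and the mds profile on the monitors) are the standard set a manually deployed MDS needs. To verify that the key and its caps were recorded in the cluster's auth database, you can query it directly (output omitted here):

[root@node01 ~]# ceph auth get mds.node01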

3. Create the data and metadata pools on node01 and create the file system (the 32 passed to each pool-create command is its placement-group count)

[root@node01 ~]# ceph osd pool create cephfs_data 32
pool 'cephfs_data' created
[root@node01 ~]# ceph osd pool create cephfs_metadata 32
pool 'cephfs_metadata' created
[root@node01 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 4 and data pool 3
[root@node01 ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@node01 ~]# ceph mds stat
cephfs:1 {0=node01=up:active}
[root@node01 ~]#  ceph fs status cephfs
cephfs - 0 clients
======
RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  node01  Reqs:    0 /s    10     13     12      0
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  96.0k  18.9G
  cephfs_data      data       0   18.9G
MDS version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
[root@node01 ~]#
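
The mount in the next step assumes the admin keyring already exists under /etc/ceph on dlp, as set up in the earlier chapters of this series. If it is missing, copy it (and ceph.conf) over first, for example:

[root@node01 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring dlp:/etc/ceph/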

4. Mount the file system on the dlp node

[root@dlp ~]# ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
[root@dlp ~]# chmod 600 admin.key
[root@dlp ~]#  mount -t ceph node01.srv.world:6789:/ /mnt -o name=admin,secretfile=admin.key
[root@dlp ~]# df -hT
Filesystem            Type      Size  Used Avail Use% Mounted on
devtmpfs              devtmpfs  4.0M     0  4.0M   0% /dev
tmpfs                 tmpfs     467M     0  467M   0% /dev/shm
tmpfs                 tmpfs     187M  3.9M  183M   3% /run
/dev/mapper/cs-root   xfs        17G  1.5G   16G   9% /
/dev/nvme0n1p1        xfs       960M  197M  764M  21% /boot
tmpfs                 tmpfs      94M     0   94M   0% /run/user/0
192.168.10.101:6789:/ ceph       19G     0   19G   0% /mnt
[root@dlp ~]#
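
The ceph-fuse package installed in step 1 offers a userspace alternative to the kernel mount above. A minimal sketch, assuming /etc/ceph/ceph.client.admin.keyring is present on dlp (ceph-fuse authenticates as client.admin by default):

[root@dlp ~]# umount /mnt    # only if the kernel mount is still active
[root@dlp ~]# ceph-fuse -m node01.srv.world:6789 /mnt

To make the kernel mount persistent across reboots, an /etc/fstab entry along these lines would work (assuming admin.key was created in root's home directory, as above):

node01.srv.world:6789:/ /mnt ceph name=admin,secretfile=/root/admin.key,noatime,_netdev 0 0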

On node01, we can now see that one client is using CephFS:

[root@node01 ~]#  ceph fs status cephfs
cephfs - 1 clients
======
RANK  STATE    MDS       ACTIVITY     DNS    INOS   DIRS   CAPS
 0    active  node01  Reqs:    0 /s    11     14     12      2
      POOL         TYPE     USED  AVAIL
cephfs_metadata  metadata  31.7M  17.7G
  cephfs_data      data    3072M  17.7G
MDS version: ceph version 18.2.0 (5dd24139a1eada541a3bc16b6941c5dde975e26d) reef (stable)
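
To see exactly which clients hold sessions against the MDS, the session list can be dumped (output omitted here):

[root@node01 ~]# ceph tell mds.node01 client ls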

5. On node01, remove the file system and the pools created above

a. Stop the MDS service

[root@node01 ~]#  systemctl stop ceph-mds@node01
[root@node01 ~]#  systemctl status ceph-mds@node01
○ ceph-mds@node01.service - Ceph metadata server daemon
     Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; preset: disabled)
     Active: inactive (dead) since Tue 2023-09-12 19:18:45 CST; 9s ago
   Duration: 1h 57min 21.264s
    Process: 15670 ExecStart=/usr/bin/ceph-mds -f --cluster ${CLUSTER} --id node01 --setuser ceph --setgroup ceph (code=exited, status=0/SUCCESS)
   Main PID: 15670 (code=exited, status=0/SUCCESS)
        CPU: 43.374s

Sep 12 17:21:18 node01.srv.world systemd[1]: Started Ceph metadata server daemon.
Sep 12 17:21:19 node01.srv.world ceph-mds[15670]: starting mds.node01 at
Sep 12 19:18:40 node01.srv.world systemd[1]: Stopping Ceph metadata server daemon...
Sep 12 19:18:40 node01.srv.world ceph-mds[15670]: 2023-09-12T19:18:40.206+0800 7fa9e5698640 -1 received  signal: Terminated from /usr/lib/systemd/systemd -->
Sep 12 19:18:40 node01.srv.world ceph-mds[15670]: 2023-09-12T19:18:40.206+0800 7fa9e5698640 -1 mds.node01 *** got signal Terminated ***
Sep 12 19:18:45 node01.srv.world systemd[1]: ceph-mds@node01.service: Deactivated successfully.
Sep 12 19:18:45 node01.srv.world systemd[1]: Stopped Ceph metadata server daemon.
Sep 12 19:18:45 node01.srv.world systemd[1]: ceph-mds@node01.service: Consumed 43.374s CPU time.
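
Before removing the file system itself, make sure no client still has it mounted, and mark the file system as failed so the monitors will accept the removal; a hedged sketch:

[root@node01 ~]# ssh dlp "umount /mnt"
[root@node01 ~]# ceph fs fail cephfs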

b. Remove the CephFS file system

[root@node01 ~]# ceph fs rm cephfs --yes-i-really-mean-it

c. Delete the pools

[root@node01 ~]# ceph osd pool delete cephfs_data cephfs_data --yes-i-really-really-mean-it
pool 'cephfs_data' removed
[root@node01 ~]# ceph osd pool delete cephfs_metadata cephfs_metadata --yes-i-really-really-mean-it
pool 'cephfs_metadata' removed
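
If the delete commands above are refused with an error mentioning mon_allow_pool_delete, the monitors are blocking pool deletion; it can be enabled (and ideally disabled again afterwards) like this:

[root@node01 ~]# ceph config set mon mon_allow_pool_delete true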

The next section will cover configuring the Ceph Object Gateway. Stay tuned!
