Understanding and Deploying CEPH Storage on Ubuntu 18.04 LTS, A-Z


1./ Setup Three Node Ceph Storage Cluster on Ubuntu 18.04

https://computingforgeeks.com/wp-content/uploads/2018/10/Ceph-Architecture-Ubuntu-18.04.png

 

The basic components of a Ceph storage cluster

  • Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, and the CRUSH map.

  • Ceph OSDs: A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least 3 Ceph OSDs are normally required for redundancy and high availability.

  • MDSs: A Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph Filesystem (i.e., Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers allow POSIX file system users to execute basic commands (like ls, find, etc.) without placing an enormous burden on the Ceph Storage Cluster.

  • Managers: A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load.

2./ Prepare the environment for the 8 LAB servers

Install 8 Ubuntu 18.04 LTS servers with the following specifications:

CPU: 4 threads

RAM: 2 GB

HDD: 80 GB

Change the hostname configuration as follows on all nodes:

nano /etc/hosts

#Paste

10.0.1.51 rgw.ceph.com   rgw

10.0.1.52 mon01.ceph.com mon01

10.0.1.53 mon02.ceph.com mon02

10.0.1.54 mon03.ceph.com mon03

10.0.1.55 ceph-admin.com ceph-admin

10.0.1.56 osd01.ceph.com osd01

10.0.1.57 osd02.ceph.com osd02

10.0.1.58 osd03.ceph.com osd03

10.0.1.42 client01-ceph.ceph.com client01-ceph

Set the hostname to match each server, then paste:

HOSTNAME=rgw

hostnamectl

sudo hostnamectl set-hostname $HOSTNAME

#echo -e with escape

echo -e "127.0.0.1    $HOSTNAME" >> /etc/hosts

sed -i -e s/"preserve_hostname: false"/"preserve_hostname: true"/g /etc/cloud/cloud.cfg
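
A quick sanity check (not part of the original steps) after editing /etc/hosts and setting the hostname; the names follow the table above:

hostnamectl status              # the static hostname should match the node's role (rgw, mon01, ...)
getent hosts mon01.ceph.com     # name resolution should be answered from /etc/hosts
ping -c 2 ceph-admin            # every node should reach the others by their short names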

Next:

 

sudo apt update

sudo apt -y upgrade

sudo reboot

3./ Installation

3.1/ Prepare the ceph-admin node

The ceph-admin server has IP address 10.0.1.55.

Import the repository key:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#Result

OK

Add the Ceph repository to your system. This installation will use the Ceph Nautilus repository:

echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

Update your repository and install ceph-deploy:

sudo apt update -y

sudo apt -y install ceph-deploy

3.2/ Prepare the Ceph nodes

The admin node must have password-less SSH access to Ceph nodes. When ceph-deploy logs in to a Ceph node as a user, that particular user must have passwordless sudo privileges.

Add the SSH user on all Ceph nodes, including the rgw, osd, and monitor nodes.

On the ceph-admin server:

export USER_NAME="ceph-admin"

export USER_PASS="StrOngP@ssw0rd"

sudo useradd --create-home -s /bin/bash ${USER_NAME}

echo "${USER_NAME}:${USER_PASS}"|sudo chpasswd

echo "${USER_NAME} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${USER_NAME}

sudo chmod 0440 /etc/sudoers.d/${USER_NAME}

Verify that the ceph-admin user can run commands as root without a password:

root@ceph-admin:/etc/ceph# su - ceph-admin

ceph-admin@ceph-admin:~$ sudo su -

root@ceph-admin:~#

Generate SSH keys on the ceph-admin node. Note: do not set a passphrase:

# su - ceph-admin

$ ssh-keygen​​ 

Generating public/private rsa key pair.

Enter file in which to save the key (/home/ceph-admin/.ssh/id_rsa):​​ 

Created directory '/home/ceph-admin/.ssh'.

Enter passphrase (empty for no passphrase):​​ 

Enter same​​ passphrase again:​​ 

Your identification has been saved in /home/ceph-admin/.ssh/id_rsa.

Your public key has been saved in /home/ceph-admin/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:DZZdfRS1Yo+unWAkDum7juShEF67pm7VdSkfWlsCpbA ceph-admin@ceph-admin

The​​ key's randomart image is:

+---[RSA 2048]----+

|     .  ..  .. o=|

|     o..o .  . o |

|    E .= o  o o  |

|      +.O .. +   |

| . .. .oS.*. . . |

|. o.....ooo .    |

| o.. o . . o .   |

| ...= o . . + .  |

|oooo o.+.  . o   |

+----[SHA256]-----+

 

$ ls /home/ceph-admin/.ssh/

config  id_rsa  id_rsa.pub  known_hosts

Configure the ~/.ssh/config file:

cat /home/ceph-admin/.ssh/config

Host osd01
  Hostname osd01
  User ceph-admin

Host osd02
  Hostname osd02
  User ceph-admin

Host osd03
  Hostname osd03
  User ceph-admin

Host ceph-admin
  Hostname ceph-admin
  User ceph-admin

Host mon01
  Hostname mon01
  User ceph-admin

Host mon02
  Hostname mon02
  User ceph-admin

Host mon03
  Hostname mon03
  User ceph-admin

Host rgw
  Hostname rgw
  User ceph-admin

Copy the key to all nodes:

for i in rgw mon01 mon02 mon03 osd01 osd02 osd03; do
  ssh-copy-id $i
done

You can now SSH to every node without being asked for a password:

su - ceph-admin

ssh ceph-admin@rgw

Last login: Tue Sep  8 15:36:40 2020 from 10.0.1.55

ceph-admin@rgw:~$

4./ Deploy the Ceph Storage Cluster

All nodes must keep accurate time. If NTP is not installed yet, install it as follows:

sudo apt install ntp
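
A quick, hedged way to confirm the clocks really are in sync on each node (which command applies depends on whether ntpd or chrony is in use):

timedatectl                     # "System clock synchronized: yes" is what you want
ntpq -p                         # with ntpd installed, lists the peers and their offsets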

On the ceph-admin server, create the working directory for the files and keys that ceph-deploy generates while bootstrapping the cluster:

su - ceph-admin

cd ~

mkdir ceph-deploy

cd ceph-deploy

4.1/ Initialize the Ceph monitor nodes

On the ceph-admin node:

ceph-deploy new mon01 mon02 mon03

#Result

[mon03][DEBUG ] connected to host: mon03​​ 

[mon03][DEBUG ] detect platform information from remote host

[mon03][DEBUG ] detect machine type

[mon03][DEBUG ] find the location of an executable

[mon03][INFO ​​ ] Running command: sudo /bin/ip link show

[mon03][INFO ​​ ] Running command: sudo /bin/ip addr show

[mon03][DEBUG ] IP addresses found: [u'10.0.1.54']

[ceph_deploy.new][DEBUG ] Resolving host mon03

[ceph_deploy.new][DEBUG ] Monitor mon03 at 10.0.1.54

[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon01', 'mon02', 'mon03']

[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.1.52', '10.0.1.53', '10.0.1.54']

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
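
Optional, and an assumption rather than part of the original steps: before installing the packages you can pin the public network in the freshly generated ceph.conf so the daemons bind to the LAB subnet (10.0.1.0/24 here):

# Run inside ~/ceph-deploy on ceph-admin; [global] is the only section at this point, so appending is safe.
echo "public network = 10.0.1.0/24" >> ceph.conf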

4.2/ Install the Ceph packages on all nodes

On the ceph-admin node:

ceph-deploy install mon01 mon02 mon03 osd01 osd02 osd03 rgw

#Result

[rgw][DEBUG ] Adding system user ceph....done

[rgw][DEBUG ] Setting system user ceph properties....done

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.

[rgw][DEBUG ] Setting up radosgw (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.

[rgw][DEBUG ] Setting up python-webtest (2.0.28-1ubuntu1) ...

[rgw][DEBUG ] Setting up ceph-base (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service →​​ /lib/systemd/system/ceph-crash.service.

[rgw][DEBUG ] Setting up python-pecan (1.2.1-2) ...

[rgw][DEBUG ] update-alternatives: using /usr/bin/python2-pecan to provide /usr/bin/pecan (pecan) in auto mode

[rgw][DEBUG ] update-alternatives: using /usr/bin/python2-gunicorn_pecan to provide /usr/bin/gunicorn_pecan (gunicorn_pecan) in auto mode

[rgw][DEBUG ] Setting up ceph-osd (13.2.10-1bionic) ...

[rgw][DEBUG ] chown: cannot access '/var/lib/ceph/osd/*/block*': No such file or directory

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.

[rgw][DEBUG ] Setting up ceph-mds (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.

[rgw][DEBUG ] Setting up ceph-mon (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.

[rgw][DEBUG ] Created symlink​​ /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.

[rgw][DEBUG ] Setting up ceph-mgr (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.

[rgw][DEBUG ] Setting up ceph (13.2.10-1bionic) ...

[rgw][DEBUG ] Processing triggers for systemd (237-3ubuntu10.42) ...

[rgw][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

[rgw][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...

[rgw][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1.2) ...

[rgw][INFO ​​ ] Running command: sudo ceph --version

[rgw][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

 

# Verify on the other Ceph nodes

root@rgw:~/.ssh# sudo ceph --version

ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

#

root@mon01:~# sudo ceph --version

ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

 

4.3/ Initialize the monitors and gather the keys

Initialize the monitor servers:

su - ceph-admin

cd /home/ceph-admin/ceph-deploy

ceph-deploy mon create-initial

#Result

[mon01][DEBUG ] connected to host: mon01​​ 

[mon01][DEBUG ] detect platform information from remote host

[mon01][DEBUG ] detect machine type

[mon01][DEBUG ] get remote short hostname

[mon01][DEBUG ] fetch remote file

[mon01][INFO ​​ ]​​ Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.mon01.asok mon_status

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.admin

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-mds

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-mgr

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-osd

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-rgw

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.client.admin.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-mds.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-mgr.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] keyring 'ceph.mon.keyring' already exists

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-osd.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-rgw.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Destroy temp directory /tmp/tmp1nOYb4

The keyrings are written to the working folder.

On ceph-admin:

su - ceph-admin

cd ~/ceph-deploy

 

cat ceph.client.admin.keyring

[client.admin]
        key = AQDOW1dfhiOEBRAAjspKZW4cea3P8qzwDm12gg==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

Deploy the manager daemons (ceph-mgr):

ceph-deploy mgr create mon01 mon02 mon03

#Result

[mon03][DEBUG ] connected to host: mon03​​ 

[mon03][DEBUG ] detect platform information from remote host

[mon03][DEBUG ] detect​​ machine type

[ceph_deploy.mgr][INFO ​​ ] Distro info: Ubuntu 18.04 bionic

[ceph_deploy.mgr][DEBUG ] remote host will use systemd

[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to mon03

[mon03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[mon03][WARNIN] mgr keyring does not exist yet, creating one

[mon03][DEBUG ] create a keyring file

[mon03][DEBUG ] create path recursively if it doesn't exist

[mon03][INFO ​​ ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring​​ /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.mon03 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-mon03/keyring

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph-mgr@mon03

[mon03][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/[email protected] → /lib/systemd/system/[email protected].

[mon03][INFO ​​ ] Running command: sudo systemctl start ceph-mgr@mon03

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph.target

Add the Metadata Servers (ceph-mds):

ceph-deploy mds create mon01 mon02 mon03

#Result

[mon03][DEBUG ] connected to host: mon03​​ 

[mon03][DEBUG ] detect platform information from remote host

[mon03][DEBUG ] detect machine type

[ceph_deploy.mds][INFO ​​ ] Distro info: Ubuntu 18.04 bionic

[ceph_deploy.mds][DEBUG ] remote host will use systemd

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to mon03

[mon03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[mon03][WARNIN] mds keyring does not exist yet, creating one

[mon03][DEBUG ] create a keyring file

[mon03][DEBUG ] create path if it doesn't exist

[mon03][INFO ​​ ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.mon03 osd​​ allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-mon03/keyring

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph-mds@mon03

[mon03][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/[email protected] → /lib/systemd/system/[email protected].

[mon03][INFO ​​ ] Running command: sudo systemctl start ceph-mds@mon03

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph.target

4.4/ Copy the Ceph admin key

Copy the configuration file and admin key to the admin node and the Ceph nodes:

ceph-deploy admin mon01 mon02 mon03 osd01 osd02 osd03

#Result

[osd03][DEBUG ] connected to host: osd03​​ 

[osd03][DEBUG ] detect platform information from remote host

[osd03][DEBUG ] detect machine type

[osd03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

4.5/ Add the OSD disks

On each OSD server, add three 5 GB disks: sdb, sdc, sdd.

root@osd01:~# lsblk

NAME     MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0      7:0    0 89.1M  1 loop /snap/core/7917
loop1      7:1    0 96.6M  1 loop /snap/core/9804
sda        8:0    0  100G  0 disk
├─sda1     8:1    0    1M  0 part
├─sda2     8:2    0    2G  0 part /boot
├─sda3     8:3    0    4G  0 part [SWAP]
└─sda4     8:4    0   94G  0 part /
sdb        8:16   0    5G  0 disk
sdc        8:32   0    5G  0 disk
sdd        8:48   0    5G  0 disk

The command to create an OSD from a data device is:

ceph-deploy osd create --data {device} {ceph-node}
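
Side note (an assumption, not in the original steps): if a disk has been used before, ceph-deploy may refuse to create an OSD on it because of leftover partitions or LVM metadata. A hedged sketch to wipe the disks first, using the same hosts and devices as this lab:

for j in osd01 osd02 osd03; do
  for i in sdb sdc sdd; do
    ceph-deploy disk zap $j /dev/$i    # destroys any existing data on the device
  done
done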

In this lab, run the create command for every disk on every OSD host:

for i in sdb sdc sdd; do
  for j in osd01 osd02 osd03; do
    ceph-deploy osd create --data /dev/$i $j
  done
done

#Result:

/usr/local/sbin:/usr/sbin:/sbin

[osd03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2

[osd03][WARNIN] Running command: /bin/ln -s /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961 /var/lib/ceph/osd/ceph-8/block

[osd03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-8/activate.monmap

[osd03][WARNIN] ​​ stderr: got monmap epoch 1

[osd03][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-8/keyring --create-keyring --name osd.8 --add-key AQCqYFdfTn/4MhAA7CmiQgsX3fpMAVSZTsJ4Mw==

[osd03][WARNIN] ​​ stdout: creating /var/lib/ceph/osd/ceph-8/keyring

[osd03][WARNIN] added entity osd.8 auth auth(auid = 18446744073709551615 key=AQCqYFdfTn/4MhAA7CmiQgsX3fpMAVSZTsJ4Mw== with 0 caps)

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/keyring

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/

[osd03][WARNIN] Running command:​​ /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 8 --monmap /var/lib/ceph/osd/ceph-8/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-8/ --osd-uuid 57b836ef-6f83-495e-8d9b-4ab413ee6961 --setuser ceph --setgroup ceph

[osd03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdd

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8

[osd03][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961 --path /var/lib/ceph/osd/ceph-8 --no-mon-config

[osd03][WARNIN] Running command: /bin/ln -snf​​ /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961 /var/lib/ceph/osd/ceph-8/block

[osd03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block

[osd03][WARNIN] Running command: /bin/chown​​ -R ceph:ceph /dev/dm-2

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8

[osd03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-8-57b836ef-6f83-495e-8d9b-4ab413ee6961

[osd03][WARNIN]  stderr: Created symlink /etc/systemd/system/multi-user.target.wants/[email protected] → /lib/systemd/system/[email protected].

[osd03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@8

[osd03][WARNIN]  stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/[email protected] → /lib/systemd/system/[email protected].

[osd03][WARNIN] Running command: /bin/systemctl start ceph-osd@8

[osd03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 8

[osd03][WARNIN] --> ceph-volume lvm create successful for: /dev/sdd

[osd03][INFO ​​ ] checking OSD status...

[osd03][DEBUG ] find the location of an executable

[osd03][INFO ​​ ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host osd03 is now ready for osd use.

Verify the disks again:

root@osd01:~# lsblk

NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                                                                                                   7:0    0 89.1M  1 loop /snap/core/7917
loop1                                                                                                   7:1    0 96.6M  1 loop /snap/core/9804
sda                                                                                                     8:0    0  100G  0 disk
├─sda1                                                                                                  8:1    0    1M  0 part
├─sda2                                                                                                  8:2    0    2G  0 part /boot
├─sda3                                                                                                  8:3    0    4G  0 part [SWAP]
└─sda4                                                                                                  8:4    0   94G  0 part /
sdb                                                                                                     8:16   0    5G  0 disk
└─ceph--0cb8bcc0--6c7e--4b48--b2e9--95524d974fff-osd--block--ee5db9ec--c8f9--41e8--8d76--32074fba8775 253:0    0    5G  0 lvm
sdc                                                                                                     8:32   0    5G  0 disk
└─ceph--ec347fc1--b759--4849--8487--7487a9b55965-osd--block--e3118288--36fd--42d2--8217--09f36aa0e65e 253:1    0    5G  0 lvm
sdd                                                                                                     8:48   0    5G  0 disk
└─ceph--a307c8cd--9567--4e05--b3c3--a09a9c1db476-osd--block--93371b2d--ee43--493b--ac23--e05b73db2942 253:2    0    5G  0 lvm

4.6/ Check the cluster status

root@osd01:~# sudo ceph health

HEALTH_OK

#

root@osd02:~# ceph health

HEALTH_OK

root@osd02:~# sudo ceph status

  cluster:
    id:     ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum mon01,mon02,mon03
    mgr: mon01(active), standbys: mon02, mon03
    osd: 9 osds: 9 up, 9 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.0 GiB used, 36 GiB / 45 GiB avail
    pgs:

Check Quorum status

ceph quorum_status --format json-pretty

 

root@osd01:~# ceph quorum_status --format json-pretty

 

{
    "election_epoch": 6,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "mon01",
        "mon02",
        "mon03"
    ],
    "quorum_leader_name": "mon01",
    "monmap": {
        "epoch": 1,
        "fsid": "ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a",
        "modified": "2020-09-08 17:23:55.482563",
        "created": "2020-09-08 17:23:55.482563",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "mon01",
                "addr": "10.0.1.52:6789/0",
                "public_addr": "10.0.1.52:6789/0"
            },
            {
                "rank": 1,
                "name": "mon02",
                "addr": "10.0.1.53:6789/0",
                "public_addr": "10.0.1.53:6789/0"
            },
            {
                "rank": 2,
                "name": "mon03",
                "addr": "10.0.1.54:6789/0",
                "public_addr": "10.0.1.54:6789/0"
            }
        ]
    }
}

 

4.7/ Enable the Ceph Dashboard

sudo ceph mgr module enable dashboard

sudo ceph mgr module ls

 

{
    "enabled_modules": [
        "balancer",
        "crash",
        "dashboard",
        "iostat",
        "restful",
        "status"
    ],
    "disabled_modules": [
        {
            "name": "hello",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "influx",
            "can_run": false,
            "error_string": "influxdb python module not found"
        },
        {
            "name": "localpool",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "prometheus",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "selftest",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "smart",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "telegraf",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "telemetry",
            "can_run": true,
            "error_string": ""
        },
        {
            "name": "zabbix",
            "can_run": true,
            "error_string": ""
        }
    ]
}

Generate a self-signed certificate for dashboard access:

sudo ceph dashboard create-self-signed-cert

#Result

Self-signed certificate created
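
Optionally (an assumption, not part of the original steps), the dashboard bind address and port can be set explicitly; on Mimic the module serves HTTPS on port 8443 by default:

sudo ceph config set mgr mgr/dashboard/server_addr 10.0.1.52   # mon01's IP, used here only as an example
sudo ceph config set mgr mgr/dashboard/server_port 8443
sudo ceph mgr module disable dashboard
sudo ceph mgr module enable dashboard                          # restart the module so the change takes effect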

# This command does not work on this Ceph release (Mimic does not have ac-user-create)

#​​ sudo ceph dashboard ac-user-create admin 'Str0ngP@sswOrd' administrator

sudo ceph dashboard ac-user-create admin Str0ngP@sswOrd administrator

no valid command found; 10 closest matches:

dashboard set-enable-browsable-api <value>

dashboard set-rgw-api-port <int>

dashboard get-rgw-api-admin-resource

dashboard get-rgw-api-ssl-verify

dashboard get-rgw-api-secret-key

dashboard set-rgw-api-access-key <value>

dashboard get-rgw-api-port

dashboard get-enable-browsable-api

dashboard set-rgw-api-ssl-verify <value>

dashboard set-rest-requests-timeout <int>

Error EINVAL: invalid command

#https://tracker.ceph.com/issues/23973

#This command is the one that actually creates the user and password

ceph dashboard set-login-credentials ceph-admin 123456@@##

Username and password updated
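
To see which monitor is currently serving the dashboard, ceph mgr services prints the active URL (the output below is illustrative):

sudo ceph mgr services
# {
#     "dashboard": "https://mon01.ceph.com:8443/"
# }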

Enabling the Object Gateway Management Frontend:

sudo radosgw-admin user create --uid=ceph-admin --display-name='Ceph Admin' --system

#Result

{
    "user_id": "ceph-admin",
    "display_name": "Ceph Admin",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "auid": 0,
    "subusers": [],
    "keys": [
        {
            "user": "ceph-admin",
            "access_key": "HP0IUU3NGNP1S10BJSS9",
            "secret_key": "BIn9FN5R4CABc8wayajKQQ1N0wtRolBRZKReVYg7"
        }
    ],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "system": "true",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "check_on_raw": false,
        "max_size": -1,
        "max_size_kb": 0,
        "max_objects": -1
    },
    "temp_url_keys": [],
    "type": "rgw",
    "mfa_ids": []
}

Finally, provide the credentials to the dashboard:

sudo ceph dashboard set-rgw-api-access-key <api-access-key>

sudo ceph dashboard set-rgw-api-secret-key <api-secret-key>

ceph dashboard set-rgw-api-access-key HP0IUU3NGNP1S10BJSS9

#Result

Option RGW_API_ACCESS_KEY updated

 

sudo ceph dashboard set-rgw-api-secret-key BIn9FN5R4CABc8wayajKQQ1N0wtRolBRZKReVYg7

#Result

Option RGW_API_SECRET_KEY updated

If you are using a self-signed certificate in your Object Gateway setup, then you should disable certificate verification:

#

sudo ceph dashboard set-rgw-api-ssl-verify False

#Result

Option RGW_API_SSL_VERIFY updated

4.8/ Add the Rados Gateway

To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance of Rados Gateway:

$ ceph-deploy rgw create {gateway-node}

Example:

su - ceph-admin

cd /home/ceph-admin/ceph-deploy

ceph-deploy rgw create rgw

#Result

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf

[ceph_deploy.cli][INFO ​​ ] Invoked (2.0.1): /usr/bin/ceph-deploy​​ rgw create rgw

[ceph_deploy.cli][INFO ​​ ] ceph-deploy options:

[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('rgw', 'rgw.rgw')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7faa2e8136e0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7faa2eeb15d0>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False

[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts rgw:rgw.rgw

...

[rgw][DEBUG ] connected to host: rgw​​ 

[rgw][DEBUG ] detect platform information from remote host

[rgw][DEBUG ] detect machine type

[ceph_deploy.rgw][INFO ​​ ] Distro info: Ubuntu 18.04 bionic

[ceph_deploy.rgw][DEBUG ] remote host will use systemd

[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to rgw

[rgw][DEBUG ] write cluster​​ configuration to /etc/ceph/{cluster}.conf

[rgw][WARNIN] rgw keyring does not exist yet, creating one

[rgw][DEBUG ] create a keyring file

[rgw][DEBUG ] create path recursively if it doesn't exist

[rgw][INFO ​​ ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.rgw osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.rgw/keyring

[rgw][INFO  ] Running command: sudo systemctl enable ceph-radosgw@rgw.rgw

[rgw][WARNIN] Created symlink /etc/systemd/system/ceph-radosgw.target.wants/[email protected] → /lib/systemd/system/[email protected].

[rgw][INFO  ] Running command: sudo systemctl start ceph-radosgw@rgw.rgw

[rgw][INFO ​​ ] Running command: sudo systemctl enable ceph.target

[ceph_deploy.rgw][INFO ​​ ] The Ceph Object Gateway (RGW) is now running on host rgw and default port 7480

By default, the RGW instance will listen on port 7480. This can be changed by editing ceph.conf on the node running the RGW, as follows.

On the rgw server:

nano /etc/ceph/ceph.conf

#Paste at the end of the file

#edit port of RGW Port 80

[client]

rgw frontends = civetweb port=80

Restart the radosgw service on the rgw server:

service ceph-radosgw@rgw.rgw restart

service ceph-radosgw@rgw.rgw status

#Result

[email protected] - Ceph rados gateway
   Loaded: loaded (/lib/systemd/system/[email protected]; indirect; vendor preset: enabled)
   Active: active (running) since Thu 2020-09-17 17:27:53 +07; 1s ago
 Main PID: 650818 (radosgw)
    Tasks: 581
   CGroup: /system.slice/system-ceph\x2dradosgw.slice/[email protected]
           └─650818 /usr/bin/radosgw -f --cluster ceph --name client.rgw.rgw --setuser ceph --setgroup ceph

 

Sep 17 17:27:53 rgw systemd[1]: Started Ceph​​ rados gateway.

netstat -pnltu

 

tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      650818/radosgw

Open the dashboard and check Ceph:

https://mon01.ceph.com:8443/#/dashboard

#​​ https://docs.ceph.com/en/latest/rados/operations/data-placement/

5./ Create and Use Block Storage

By default, Ceph block devices use the rbd pool. The pool types are covered in a later section.

5.1/ Connect an Ubuntu client to the block storage

First, create a client VPS with the following details:

IP: 10.0.1.42

Configure the hostname entries on all Ceph nodes and the client:

10.0.1.51 rgw.ceph.com   rgw

10.0.1.52 mon01.ceph.com mon01

10.0.1.53 mon02.ceph.com mon02

10.0.1.54 mon03.ceph.com mon03

10.0.1.55 ceph-admin.com ceph-admin

10.0.1.56 osd01.ceph.com osd01

10.0.1.57 osd02.ceph.com osd02

10.0.1.58 osd03.ceph.com osd03

10.0.1.42 client01-ceph.ceph.com client01-ceph

Continue on the client01-ceph server:

export USER_NAME="ceph-admin"

export USER_PASS="StrOngP@ssw0rd"

sudo useradd --create-home -s /bin/bash ${USER_NAME}

echo "${USER_NAME}:${USER_PASS}"|sudo chpasswd

echo "${USER_NAME} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${USER_NAME}

sudo chmod 0440 /etc/sudoers.d/${USER_NAME}

On the ceph-admin server, copy the public key to the client01-ceph node:

ssh-copy-id client01-ceph

 

Number of key(s) added: 1

 

Now try logging into the machine, with:  ​​​​ "ssh 'client01-ceph'"

and check to make sure that only the key(s) you wanted were added.

Test the connection; if it logs in without asking for a password, it is working.

ssh 'client01-ceph'

 

 

 

 

 

 

6./ Troubleshooting

6.1/ mon01 reports an error

_check_auth_rotating possible clock skew, rotating keys expired way too early

The ceph-mgr service cannot be started.

The dashboard can still be reached at https://mon02.ceph.com:8443/, which automatically took over from mon01 to manage Ceph. Ceph had not been used yet, but the disks were already reported as nearly full.

Checking the logs on mon02 and mon03 shows no errors.

Checking mon01 shows it is no longer in the quorum.

Re-check the configuration file:

cat /etc/ceph/ceph.conf

[global]

fsid = ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a

mon_initial_members = mon01, mon02, mon03

mon_host = 10.0.1.52,10.0.1.53,10.0.1.54

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required =​​ cephx

#################

#auth_cluster_required = none

#auth_service_required = none

#auth_client_required = none

After switching cluster, service, and client authentication back to cephx, mon01 automatically rejoined the cluster and the error cleared.

Enable the services on mon01 so they start at boot:

systemctl enable ceph-mgr@mon01

systemctl enable ceph-mds@mon01
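
Because rotating-key errors are usually caused by clock skew, it can also help to confirm time synchronization on the monitors (a hedged sketch):

sudo ceph time-sync-status      # skew and latency per monitor as seen by the quorum
timedatectl                     # on each mon, confirm "System clock synchronized: yes"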

6.2/ Fix a full OSD

ceph-admin@osd01:/etc/ceph$ ceph osd df

ceph osd status

ceph osd reweight-by-utilization

#Result: no change

no change

moved 0 / 120 (0%)

avg 60

stddev 36.7696 -> 36.7696 (expected baseline 5.47723)

min osd.2 with 12 -> 12 pgs (0.2 -> 0.2 * mean)

max osd.4 with 40 -> 40 pgs (0.666667 -> 0.666667 * mean)

 

oload 120

max_change 0.05

max_change_osds 4

average_utilization​​ 0.9293

overload_utilization 1.1152

ceph pg set_full_ratio 0.97

Error:

no valid command found; 10 closest matches:

pg ls {<int>} {<states> [<states>...]}

pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}​​ {<int>}

pg ls-by-primary <osdname (id|osd.id)>​​ {<int>} {<states> [<states>...]}

pg ls-by-osd <osdname (id|osd.id)> {<int>} {<states> [<states>...]}

pg dump_pools_json

pg ls-by-pool <poolstr> {<states> [<states>...]}

pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}

pg stat

pg getmap

Error EINVAL: invalid command

Check the version again:

root@osd01:~# ceph -v

ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

root@osd01:~# ceph osd set-full-ratio 0.97

osd set-full-ratio 0.97

With this error, the fix is to add more OSDs or to add more disks to the existing OSDs.
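
Before adding capacity, the current ratios and per-OSD utilisation can be inspected; a hedged sketch (0.97 simply mirrors the command above and is not a general recommendation):

ceph osd df tree                 # utilisation per OSD, grouped by host
ceph osd dump | grep ratio       # current full_ratio / backfillfull_ratio / nearfull_ratio
ceph osd set-nearfull-ratio 0.90 # Mimic replacements for the removed "ceph pg set_*_ratio" commands
ceph osd set-full-ratio 0.97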

# https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/pdf/troubleshooting_guide/Red_Hat_Ceph_Storage-3-Troubleshooting_Guide-en-US.pdf

 

#​​ http://centosquestions.com/what-do-you-do-when-a-ceph-osd-is-nearfull/

6.3/ OSDs reported down

root@mon01:~# sudo ceph status

  cluster:
    id:     ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a
    health: HEALTH_WARN
            noout flag(s) set
            2 osds down
            2 hosts (2 osds) down
            908/681 objects misplaced (133.333%)
            Reduced data availability: 2 pgs inactive, 38 pgs stale
            Degraded data redundancy: 38 pgs undersized
            1 slow ops, oldest one blocked for 82 sec, mon.mon01 has slow ops
            too few PGs per OSD (13 < min 30)

  services:
    mon: 3 daemons, quorum mon01,mon02,mon03
    mgr: mon03(active), standbys: mon01, mon02
    osd: 3 osds: 1 up, 3 in
         flags noout
    rgw: 1 daemon active

  data:
    pools:   5 pools, 40 pgs
    objects: 227 objects, 1.6 KiB
    usage:   3.1 GiB used, 297 GiB / 300 GiB avail
    pgs:     5.000% pgs unknown
             908/681 objects misplaced (133.333%)
             38 stale+active+undersized+remapped
             2  unknown

When the osd3 host is rebooted, the other two OSDs change status to up.

As soon as osd3 comes back up, the status returns to its previous state.

#​​ https://docs.ceph.com/en/latest/rados/operations/monitoring-osd-pg/

Check whether the firewall has been opened. After opening the required ports, the status is OK.
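
For reference, a hedged ufw sketch of the ports Ceph daemons normally use (adapt it to whatever firewall is actually in place):

sudo ufw allow 6789/tcp          # ceph-mon
sudo ufw allow 6800:7300/tcp     # ceph-osd / ceph-mgr / ceph-mds
sudo ufw allow 8443/tcp          # mgr dashboard (HTTPS)
sudo ufw allow 80/tcp            # radosgw after moving civetweb to port 80 (7480 by default)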

 

6.4/ Fix an OSD that is down with a full data store

Wipe the OSD completely and add it back.

Remove OSD from Ceph Cluster

First, check which OSD is down and should be removed from the Ceph cluster using: ceph osd tree. Let's say it is osd.20 that is down and needs to be removed. Now use the following commands (a consolidated sketch follows these steps):

a. ceph osd out osd.20 (If you see “osd.20 is already out” — it’s ok.)

b. ceph osd down osd.20

c. Remove it: ceph osd rm osd.20. If it says 'Error EBUSY: osd.20 is still up; must be down before removal.', the OSD is not dead yet. Go to the host it resides on, stop it (systemctl stop ceph-osd@20), and repeat the rm operation.

d. ceph osd crush rm osd.20

e. Remove its authorization (this prevents problems with 'couldn't add new osd with same number'): ceph auth del osd.20.

f. Make sure it is safe to destroy the OSD:

ceph osd destroy 20 --yes-i-really-mean-it

g. Now check with the following command: ceph -s or ceph -w

h. If you want to remove the Ceph LVM volume created on the host machine (suppose the LVM was created on sdb), use the following commands:

To find the Ceph volume group name: lvs

To remove the logical volume: lvremove <ceph-VG-name>
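
A consolidated sketch of the removal above, following the manual procedure from the upstream docs; the id 20 and the <osd-host> placeholder are examples only:

OSD_ID=20
ceph osd out osd.${OSD_ID}
ssh <osd-host> sudo systemctl stop ceph-osd@${OSD_ID}   # run against the node that hosts the OSD
ceph osd crush rm osd.${OSD_ID}
ceph auth del osd.${OSD_ID}
ceph osd rm osd.${OSD_ID}
ceph -s                                                 # the OSD count should drop and PGs recover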

 

#​​ https://vineetcic.medium.com/how-to-remove-add-osd-from-ceph-cluster-1c038eefe522

7./ Create Pools and Share Storage on CEPH

7.1/ Create a Pool

root@mon01:~# ceph osd lspools

 

1 .rgw.root

2 default.rgw.control

3 default.rgw.meta

4 default.rgw.log

5 default.rgw.buckets.index

 

root@mon01:~# sudo ceph osd pool create pool-01 100

pool 'pool-01' created

 

root@mon01:~# ceph osd lspools

 

1 .rgw.root

2 default.rgw.control

3 default.rgw.meta

4 default.rgw.log

5 default.rgw.buckets.index

6 pool-01

Associate Pool to Application

Pools need to be associated with an​​ application before use. Pools that will be used with CephFS or pools that are automatically created by RGW are automatically associated.

Pool application types:

--- Ceph Filesystem ---

$ sudo ceph osd pool application enable <pool-name> cephfs

 

--- Ceph Block Device ---

$ sudo ceph osd pool application enable <pool-name> rbd

 

--- Ceph Object Gateway ---

$ sudo ceph osd pool application enable <pool-name> rgw

Example:

root@mon01:~# sudo ceph osd pool application enable pool-01 rbd

enabled application 'rbd' on pool 'pool-01'

Pools that are intended for use with RBD should be initialized using the rbd tool:

root@mon01:~# sudo rbd pool init pool-01
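
To confirm the pool is ready for RBD, a few hedged checks:

sudo ceph osd pool application get pool-01   # should report rbd
sudo ceph osd pool get pool-01 size          # replication factor (3 by default)
sudo ceph osd pool get pool-01 pg_num        # the 100 PGs requested above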

If you want to remove the pool:

To disable app, use:

ceph osd pool application disable <poolname> <app> {--yes-i-really-mean-it}

To obtain I/O information for a specific pool or all, execute:

$ sudo ceph osd pool stats [{pool-name}]

Delete a Pool

To delete a pool, execute:

sudo ceph osd pool delete {pool-name} [{pool-name} --yes-i-really-really-mean-it]

7.2/ Create a Block Device for Use

7.3/ Configure the client

On the MON server:

root@mon01:~# ceph osd pool create rbd 8

pool 'rbd' created

root@mon01:~# rbd pool init rbd

On the client:

apt-get --assume-yes -q --no-install-recommends install ca-certificates apt-transport-https

On the ceph-admin server:

cd /home/ceph-admin/

ceph-admin@ceph-admin:~$ ceph-deploy install client01-ceph

#Result

[client01-ceph][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.

[client01-ceph][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.

[client01-ceph][DEBUG ]​​ Setting up ceph-mgr (13.2.10-1bionic) ...

[client01-ceph][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.

[client01-ceph][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.

[client01-ceph][DEBUG ] Setting up ceph (13.2.10-1bionic) ...

[client01-ceph][DEBUG ] Processing triggers for systemd (237-3ubuntu10.42) ...

[client01-ceph][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

[client01-ceph][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...

[client01-ceph][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1.2) ...

[client01-ceph][INFO ​​ ] Running command: sudo ceph --version

[client01-ceph][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

 

cd /home/ceph-admin/ceph-deploy

ceph-admin@ceph-admin:~$ ceph-deploy admin client01-ceph

 

[client01-ceph][DEBUG ] connected to host: client01-ceph​​ 

[client01-ceph][DEBUG​​ ] detect platform information from remote host

[client01-ceph][DEBUG ] detect machine type

[client01-ceph][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

On the client:

root@client01-ceph:~# sudo rbd create rbd01 --size 10G --image-feature layering

#show list

root@client01-ceph:~# sudo rbd ls -l

NAME   SIZE PARENT FMT PROT LOCK
rbd01 10 GiB

# map the image to device

root@client01-ceph:~# sudo rbd map rbd01​​ 

/dev/rbd0

# show mapping

root@client01-ceph:~# rbd showmapped

id pool image snap device
0  rbd  rbd01 -    /dev/rbd0

# format with XFS

root@client01-ceph:~# sudo mkfs.xfs /dev/rbd0

meta-data=/dev/rbd0              isize=512    agcount=17, agsize=162816 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0, rmapbt=0, reflink=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

# mount device

root@client01-ceph:~# sudo mount /dev/rbd0 /mnt

#Check

root@client01-ceph:~# df -hT

Filesystem     Type      Size  Used Avail Use% Mounted on
udev           devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     395M  1.3M  394M   1% /run
/dev/sda4      ext4       93G  4.1G   84G   5% /
tmpfs          tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs          tmpfs     5.0M     0  5.0M   0% /run/lock
tmpfs          tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/sda2      ext4      2.0G   81M  1.8G   5% /boot
/dev/loop1     squashfs   98M   98M     0 100% /snap/core/10126
tmpfs          tmpfs     395M     0  395M   0% /run/user/0
/dev/loop0     squashfs   98M   98M     0 100% /snap/core/10444
/dev/rbd0      xfs        10G   44M   10G   1% /mnt
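
The rbd map and the mount above do not survive a reboot. A hedged sketch for making them persistent with the rbdmap service that ships with the ceph packages (names and paths are the ones used in this lab):

# image to map at boot, in the form <pool>/<image>  id=<user>,keyring=<path>
echo "rbd/rbd01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" | sudo tee -a /etc/ceph/rbdmap
sudo systemctl enable rbdmap.service
# fstab entry using the udev-created path; whether noauto or _netdev fits best depends on the distro's rbdmap script
echo "/dev/rbd/rbd/rbd01 /mnt xfs noauto 0 0" | sudo tee -a /etc/fstab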

After downloading a test file to the /mnt partition, the data is spread evenly across the 3 OSDs.

# https://www.server-world.info/en/note?os=Ubuntu_18.04&p=ceph&f=2

#​​ https://computingforgeeks.com/create-a-pool-in-ceph-storage-cluster/

7.4/ Benchmark Ceph performance
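
The exact fio invocation was not recorded in these notes; a sketch consistent with the parameters visible in the output below (4k random read/write, libaio, iodepth 64, roughly 75% reads) would be:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 \
    --name=test --filename=/mnt/test --bs=4k --iodepth=64 \
    --size=200M --readwrite=randrw --rwmixread=75
# change --size to 500M or 1G for the other runs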

File 200MB

###########################

Thu Dec ​​ 3 17:21:04 +07 2020

 

test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64

fio-2.0.9

Starting 1 process

test: Laying out IO file(s) (1 file(s) / 200MB)

 

test: (groupid=0, jobs=1): err= 0: pid=22741: Thu Dec ​​ 3 17:22:34 2020

 ​​​​ read : io=153332KB, bw=2206.7KB/s, iops=551 , runt= 69488msec

 ​​​​ write: io=51468KB, bw=758450 B/s, iops=185 , runt= 69488msec

 ​​​​ cpu  ​​ ​​​​  ​​ ​​ ​​ ​​ ​​​​ : usr=0.43%, sys=1.23%, ctx=37593, majf=0, minf=4

 ​​​​ IO depths  ​​ ​​​​ : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%

 ​​ ​​ ​​ ​​​​ submit  ​​ ​​​​ : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ complete ​​ : 0=0.0%, 4=100.0%, 8=0.0%,​​ 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ issued  ​​ ​​​​ : total=r=38333/w=12867/d=0, short=r=0/w=0/d=0

 

Run status group 0 (all jobs):

 ​​ ​​​​ READ: io=153332KB, aggrb=2206KB/s, minb=2206KB/s, maxb=2206KB/s, mint=69488msec, maxt=69488msec

 ​​​​ WRITE: io=51468KB, aggrb=740KB/s, minb=740KB/s, maxb=740KB/s, mint=69488msec, maxt=69488msec

 

Disk stats (read/write):

 ​​​​ rbd0: ios=38273/12836, merge=0/4, ticks=2433864/1924656, in_queue=4455840, util=100.00%

Thu Dec ​​ 3 17:22:34 +07 2020

File 500MB

###########################

Thu Dec ​​ 3 17:27:55 +07 2020

 

test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64

fio-2.0.9

Starting 1 process

test: Laying out IO file(s) (1 file(s) / 500MB)

 

test: (groupid=0, jobs=1): err= 0: pid=25983: Thu Dec ​​ 3 17:30:45 2020

 ​​​​ read : io=383784KB, bw=3463.2KB/s, iops=865 , runt=110819msec

 ​​​​ write: io=128216KB, bw=1156.2KB/s, iops=289 , runt=110819msec

 ​​​​ cpu  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : usr=0.57%, sys=2.05%, ctx=89319, majf=0, minf=5

 ​​​​ IO depths  ​​ ​​​​ : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

 ​​ ​​ ​​ ​​​​ submit  ​​ ​​​​ : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ complete ​​ : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ issued  ​​ ​​​​ : total=r=95946/w=32054/d=0, short=r=0/w=0/d=0

 

Run status group 0 (all​​ jobs):

 ​​ ​​​​ READ: io=383784KB, aggrb=3463KB/s, minb=3463KB/s, maxb=3463KB/s, mint=110819msec, maxt=110819msec

 ​​​​ WRITE: io=128216KB, aggrb=1156KB/s, minb=1156KB/s, maxb=1156KB/s, mint=110819msec, maxt=110819msec

 

Disk stats (read/write):

 ​​​​ rbd0: ios=95865/32024, merge=0/4, ticks=3175912/3886596, in_queue=7170572, util=100.00%

Thu Dec ​​ 3 17:30:45 +07 2020

File 1GB

###########################

Thu Dec ​​ 3 17:34:03 +07 2020

 

test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64

fio-2.0.9

Starting 1 process

test: Laying out IO file(s) (1 file(s) / 1024MB)

 

test: (groupid=0, jobs=1): err= 0: pid=28765: Thu Dec ​​ 3 17:37:23 2020

 ​​​​ read : io=783980KB, bw=6015.7KB/s, iops=1503 , runt=130324msec

 ​​​​ write: io=264596KB, bw=2030.3KB/s, iops=507 , runt=130324msec

 ​​​​ cpu  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : usr=0.96%, sys=3.22%, ctx=182980, majf=0, minf=5

 ​​​​ IO depths  ​​ ​​​​ : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

 ​​ ​​ ​​ ​​​​ submit  ​​ ​​​​ : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ complete ​​ : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ issued  ​​ ​​​​ : total=r=195995/w=66149/d=0, short=r=0/w=0/d=0

 

Run status group 0 (all jobs):

 ​​ ​​​​ READ: io=783980KB, aggrb=6015KB/s, minb=6015KB/s, maxb=6015KB/s, mint=130324msec, maxt=130324msec

 ​​​​ WRITE: io=264596KB, aggrb=2030KB/s, minb=2030KB/s, maxb=2030KB/s, mint=130324msec, maxt=130324msec

 

Disk stats (read/write):

 ​​​​ rbd0: ios=195931/66136, merge=0/0, ticks=3011516/5307948, in_queue=8403132, util=100.00%

Thu Dec ​​ 3 17:37:23 +07 2020

################

Test of a regular local disk that does not go through Ceph but is cached by the RAID card.

File 1GB

###########################

Thu Dec ​​ 3 17:38:16 +07 2020

 

test: (g=0): rw=randrw, bs=4K-4K/4K-4K, ioengine=libaio, iodepth=64

fio-2.0.9

Starting 1 process

test: Laying out IO file(s)​​ (1 file(s) / 1024MB)

 

test: (groupid=0, jobs=1): err= 0: pid=30103: Thu Dec ​​ 3 17:38:26 2020

 ​​​​ read : io=787416KB, bw=98563KB/s, iops=24640 , runt= ​​ 7989msec

 ​​​​ write: io=261160KB, bw=32690KB/s, iops=8172 , runt= ​​ 7989msec

 ​​​​ cpu  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : usr=7.62%, sys=40.12%, ctx=25433, majf=0, minf=5

 ​​​​ IO depths  ​​ ​​​​ : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%

 ​​ ​​ ​​ ​​​​ submit  ​​ ​​​​ : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ complete ​​ : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%

 ​​ ​​ ​​ ​​​​ issued  ​​ ​​​​ : total=r=196854/w=65290/d=0, short=r=0/w=0/d=0

 

Run status group 0 (all jobs):

 ​​ ​​​​ READ: io=787416KB, aggrb=98562KB/s, minb=98562KB/s, maxb=98562KB/s, mint=7989msec, maxt=7989msec

 ​​​​ WRITE: io=261160KB, aggrb=32689KB/s, minb=32689KB/s, maxb=32689KB/s, mint=7989msec, maxt=7989msec

 

Disk stats (read/write):

 ​​​​ sda: ios=192133/63639, merge=0/3, ticks=108124/362968, in_queue=471576, util=98.79%

Thu Dec ​​ 3 17:38:26 +07 2020

Remarks: CEPH performance here is quite slow, possibly because there are not enough OSDs or because the test was run on a single server. In theory, the more OSD nodes there are, the higher the throughput.

Note: when data is deleted, the reported OSD usage is not updated to reflect the deletion.

The dashboard once data is present:

8./ Frequently Used CEPH Commands

ceph osd df

ceph df

ceph-disk list ​​ 

ceph-volume

ceph pg dump

ceph osd dump

root@mon01:~# ceph osd stat

root@mon01:~# ceph osd tree

ceph pg stat

 

############

Commands related to PGs (Placement Groups):

ceph osd lspools

 

ceph osd lspools

 

1 .rgw.root

2 default.rgw.control

3​​ default.rgw.meta

4 default.rgw.log

5 default.rgw.buckets.index

root@mon01:~# ceph osd stat

 

3 osds: 3 up, 3 in; epoch: e275

flags noout

 

root@mon01:~# ceph osd tree

 

ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.29306 root default
-3       0.09769     host osd01
 0   hdd 0.09769         osd.0       up  1.00000 1.00000
-5       0.09769     host osd02
 1   hdd 0.09769         osd.1       up  1.00000 1.00000
-7       0.09769     host osd03
 2   hdd 0.09769         osd.2       up  1.00000 1.00000

 

root@mon01:~# ceph pg stat

#

40 pgs: 38 stale+active+undersized+remapped, 2 unknown; 1.6 KiB data, 3.1 GiB used, 297 GiB / 300 GiB avail; 908/681 objects misplaced (133.333%)

 

Note:

If the dashboard reports:

ceph TOO_FEW_PGS: too few PGs per OSD (13 < min 30)

you can leave it alone. The warning appears because nothing has touched the cluster since it was created; once the Ceph storage is actually used, the warning clears.
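
If the warning persists once the pools actually hold data, the PG count of a pool can be raised (a hedged example; on Mimic pgp_num must be raised along with pg_num):

sudo ceph osd pool set pool-01 pg_num 64
sudo ceph osd pool set pool-01 pgp_num 64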

#​​ https://github.com/rook/rook/issues/1329

During the lab, the disks may be wiped and re-added, growing from 5 GB to 100 GB, or the number of disks per OSD node may change from 3 to 1.

 

#References

https://fixloinhanh.com

https://computingforgeeks.com/how-to-deploy-ceph-storage-cluster-on-ubuntu-18-04-lts/

https://computingforgeeks.com/create-a-pool-in-ceph-storage-cluster/

SaKuRai
