Understanding and Deploying CEPH Storage on Ubuntu 18.04 LTS, A to Z

1./ Setup Three Node Ceph Storage Cluster on Ubuntu 18.04

https://computingforgeeks.com/wp-content/uploads/2018/10/Ceph-Architecture-Ubuntu-18.04.png

 

The basic components of a Ceph storage cluster (a quick way to check each of them is sketched after this list):

  • Monitors: A Ceph Monitor (ceph-mon) maintains maps of the cluster state, including the monitor map, manager map, the OSD map, and the CRUSH map.

  • Ceph OSDs: A Ceph OSD (object storage daemon, ceph-osd) stores data, handles data replication, recovery, rebalancing, and provides some monitoring information to Ceph Monitors and Managers by checking other Ceph OSD Daemons for a heartbeat. At least 3 Ceph OSDs are normally required for redundancy and high availability.

  • MDSs: A Ceph Metadata Server (MDS, ceph-mds) stores metadata on behalf of the Ceph Filesystem (i.e., Ceph Block Devices and Ceph Object Storage do not use MDS). Ceph Metadata Servers allow POSIX file system users to execute basic commands (like ls, find, etc.) without placing an enormous burden on the Ceph Storage Cluster.

  • Managers: A Ceph Manager daemon (ceph-mgr) is responsible for keeping track of runtime metrics and the current state of the Ceph cluster, including storage utilization, current performance metrics, and system load.
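
Once the cluster described below is up, each of these daemon types can be checked from any node that holds the admin keyring. A quick optional status sketch (not part of the original walkthrough):

sudo ceph -s          # overall health plus a summary of mon/mgr/osd services
sudo ceph mon stat    # monitor quorum
sudo ceph osd stat    # OSDs up/in
sudo ceph mds stat    # MDS state (only relevant once CephFS is in use)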

2./ Preparing the environment for the 8 LAB servers

Install 8 Ubuntu 18.04 LTS servers:

CPU: 4 Threads

RAM: 2 GB

HDD: 80GB

Change the hostname as follows on all of the nodes:

nano /etc/hosts

#Paste

10.0.1.51 rgw.ceph.com rgw

10.0.1.52 mon01.ceph.com mon01

10.0.1.53 mon02.ceph.com mon02

10.0.1.54 mon03.ceph.com mon03

10.0.1.55 ceph-admin.com ceph-admin

10.0.1.56 osd01.ceph.com osd01

10.0.1.57 osd02.ceph.com osd02

10.0.1.58 osd03.ceph.com osd03

10.0.1.42 client01-ceph.ceph.com client01-ceph

Change the hostname to match each server, then paste:

HOSTNAME=rgw

hostnamectl

sudo hostnamectl set-hostname $HOSTNAME

#echo -e with escape

echo -e "127.0.0.1       $HOSTNAME" >> /etc/hosts

sed -i -e s/"preserve_hostname: false"/"preserve_hostname: true"/g /etc/cloud/cloud.cfg
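
A small optional check (a sketch, assuming the values above) that the hostname and hosts entries took effect:

hostnamectl status              # confirm the new hostname
grep "$HOSTNAME" /etc/hosts     # confirm the appended /etc/hosts entry
ping -c 1 mon01.ceph.com        # confirm the nodes can resolve each other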

Next:

 

sudo apt update

sudo apt -y upgrade

sudo reboot

3./ Installation

3.1/ Preparing the ceph-admin node

The ceph-admin server has the IP address 10.0.1.55.

Import repository key

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -

#Result

OK

Add the Ceph repository to your system. This installation uses the Ceph Nautilus release:

echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

Update your repository and install ceph-deploy:

sudo apt update -y

sudo apt -y install ceph-deploy

3.2/ Preparing the Ceph nodes

The admin node must have password-less SSH access to Ceph nodes. When ceph-deploy logs in to a Ceph node as a user, that particular user must have passwordless sudo privileges.

Add an SSH user on all of the Ceph nodes: the rgw, osd, and monitor nodes.

On the ceph-admin server:

export USER_NAME="ceph-admin"

export USER_PASS="StrOngP@ssw0rd"

sudo useradd --create-home -s /bin/bash ${USER_NAME}

echo "${USER_NAME}:${USER_PASS}"|sudo chpasswd

echo "${USER_NAME} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${USER_NAME}

sudo chmod 0440 /etc/sudoers.d/${USER_NAME}

Verify that the ceph-admin user can run commands as root without a password:

root@ceph-admin:/etc/ceph# su - ceph-admin

ceph-admin@ceph-admin:~$ sudo su -

root@ceph-admin:~#

Generate SSH keys on the ceph-admin node. Note: do not set a passphrase:

# su - ceph-admin

$ ssh-keygen​​ 

Generating public/private rsa key pair.

Enter file in which to save the key (/home/ceph-admin/.ssh/id_rsa):​​ 

Created directory '/home/ceph-admin/.ssh'.

Enter passphrase (empty for no passphrase):​​ 

Enter same passphrase again:

Your identification has been saved in /home/ceph-admin/.ssh/id_rsa.

Your public key has been saved in /home/ceph-admin/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:DZZdfRS1Yo+unWAkDum7juShEF67pm7VdSkfWlsCpbA ceph-admin@ceph-admin

The​​ key's randomart image is:

+---[RSA 2048]----+

|  ​​ ​​ ​​​​ . ​​ .. ​​ .. o=|

|  ​​ ​​ ​​ ​​​​ o..o . ​​ . o|

|  ​​ ​​ ​​​​ E .= o ​​ o o |

|  ​​ ​​ ​​ ​​ ​​​​ +.O .. + ​​ |

| . .. .oS.*. . . |

|. o.....ooo .  ​​ ​​​​ |

| o.. o . . o .  ​​​​ |

| ...= o . . + . ​​ |

|oooo o.+. ​​ . o  ​​​​ |

+----[SHA256]-----+

 

$ ls /home/ceph-admin/.ssh/

config ​​ id_rsa ​​ id_rsa.pub ​​ known_hosts

Configure the ~/.ssh/config file:

cat /home/ceph-admin/.ssh/config

Host osd01
  Hostname osd01
  User ceph-admin
Host osd02
  Hostname osd02
  User ceph-admin
Host osd03
  Hostname osd03
  User ceph-admin
Host ceph-admin
  Hostname ceph-admin
  User ceph-admin
Host mon01
  Hostname mon01
  User ceph-admin
Host mon02
  Hostname mon02
  User ceph-admin
Host mon03
  Hostname mon03
  User ceph-admin
Host rgw
  Hostname rgw
  User ceph-admin

Copy the key to all of the nodes:

for i in rgw mon01 mon02 mon03 osd01 osd02 osd03; do
  ssh-copy-id $i
done
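
If ssh later complains about "Bad owner or permissions" on the config file created above, tighten its permissions (an optional step, not shown in the original):

chmod 600 /home/ceph-admin/.ssh/config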

You can now SSH to every node without being prompted for a password:

su - ceph-admin

ssh ceph-admin@rgw

Last login: Tue Sep ​​ 8 15:36:40 2020 from​​ 10.0.1.55

ceph-admin@rgw:~$

4./ Deploying the Ceph Storage Cluster

All nodes must have accurate, synchronized time. If NTP is not installed yet, install it as follows:

sudo apt install ntp
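
Optionally, confirm that the clock is actually synchronized (assuming the ntp package installed above):

timedatectl status    # shows "System clock synchronized: yes" once in sync
ntpq -p               # lists the NTP peers the daemon is polling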

On the ceph-admin server, create the working directory for the files and keys that ceph-deploy generates when bootstrapping the cluster:

su - ceph-admin

cd ~

mkdir ceph-deploy

cd ceph-deploy

4.1/ Initializing the Ceph monitor nodes

On the ceph-admin node:

ceph-deploy new mon01 mon02 mon03

#Result

[mon03][DEBUG ] connected to host: mon03​​ 

[mon03][DEBUG ] detect platform information from remote host

[mon03][DEBUG ] detect machine type

[mon03][DEBUG ] find the location of an executable

[mon03][INFO ​​ ] Running command: sudo /bin/ip link show

[mon03][INFO ​​ ] Running command: sudo /bin/ip addr show

[mon03][DEBUG ] IP addresses found: [u'10.0.1.54']

[ceph_deploy.new][DEBUG ] Resolving host mon03

[ceph_deploy.new][DEBUG ] Monitor mon03 at 10.0.1.54

[ceph_deploy.new][DEBUG ] Monitor initial members are ['mon01', 'mon02', 'mon03']

[ceph_deploy.new][DEBUG ] Monitor addrs are ['10.0.1.52', '10.0.1.53', '10.0.1.54']

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
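
At this point the working directory contains ceph.conf, ceph.mon.keyring and a deployment log. Optionally (not shown in the original), the LAB network can be declared in the generated ceph.conf so the daemons bind to the intended interface; a hedged example:

#Append under [global] in /home/ceph-admin/ceph-deploy/ceph.conf
public network = 10.0.1.0/24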

4.2/ Installing the Ceph packages on all nodes

On the ceph-admin node:

ceph-deploy install mon01 mon02 mon03 osd01 osd02 osd03 rgw

#Result

[rgw][DEBUG ] Adding system user ceph....done

[rgw][DEBUG ] Setting system user ceph properties....done

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service.

[rgw][DEBUG ] Setting up radosgw (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target.

[rgw][DEBUG ] Setting up python-webtest (2.0.28-1ubuntu1) ...

[rgw][DEBUG ] Setting up ceph-base (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service →​​ /lib/systemd/system/ceph-crash.service.

[rgw][DEBUG ] Setting up python-pecan (1.2.1-2) ...

[rgw][DEBUG ] update-alternatives: using /usr/bin/python2-pecan to provide /usr/bin/pecan (pecan) in auto mode

[rgw][DEBUG ] update-alternatives: using /usr/bin/python2-gunicorn_pecan to provide /usr/bin/gunicorn_pecan (gunicorn_pecan) in auto mode

[rgw][DEBUG ] Setting up ceph-osd (13.2.10-1bionic) ...

[rgw][DEBUG ] chown: cannot access '/var/lib/ceph/osd/*/block*': No such file or directory

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target.

[rgw][DEBUG ] Setting up ceph-mds (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target.

[rgw][DEBUG ] Setting up ceph-mon (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.

[rgw][DEBUG ] Created symlink​​ /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target.

[rgw][DEBUG ] Setting up ceph-mgr (13.2.10-1bionic) ...

[rgw][DEBUG ] Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.

[rgw][DEBUG ] Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target.

[rgw][DEBUG ] Setting up ceph (13.2.10-1bionic) ...

[rgw][DEBUG ] Processing triggers for systemd (237-3ubuntu10.42) ...

[rgw][DEBUG ] Processing triggers for man-db (2.8.3-2ubuntu0.1) ...

[rgw][DEBUG ] Processing triggers for ureadahead (0.100.0-21) ...

[rgw][DEBUG ] Processing triggers for libc-bin (2.27-3ubuntu1.2) ...

[rgw][INFO ​​ ] Running command: sudo ceph --version

[rgw][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

 

# Verify on the other Ceph nodes

root@rgw:~/.ssh# sudo ceph --version

ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

#

root@mon01:~# sudo ceph --version

ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

 

4.3/ Deploying the initial monitors and gathering the keys

Initialize the monitor servers:

su - ceph-admin

cd /home/ceph-admin/ceph-deploy

ceph-deploy mon create-initial

#Result

[mon01][DEBUG ] connected to host: mon01​​ 

[mon01][DEBUG ] detect platform information from remote host

[mon01][DEBUG ] detect machine type

[mon01][DEBUG ] get remote short hostname

[mon01][DEBUG ] fetch remote file

[mon01][INFO ​​ ]​​ Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.mon01.asok mon_status

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.admin

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-mds

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-mgr

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-osd

[mon01][INFO ​​ ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-mon01/keyring auth get client.bootstrap-rgw

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.client.admin.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-mds.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-mgr.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] keyring 'ceph.mon.keyring' already exists

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-osd.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Storing ceph.bootstrap-rgw.keyring

[ceph_deploy.gatherkeys][INFO ​​ ] Destroy temp directory /tmp/tmp1nOYb4

The keyrings are written into the working directory.

On ceph-admin:

su - ceph-admin

cd ~/ceph-deploy

 

cat ceph.client.admin.keyring

[client.admin]
        key = AQDOW1dfhiOEBRAAjspKZW4cea3P8qzwDm12gg==
        caps mds = "allow *"
        caps mgr = "allow *"
        caps mon = "allow *"
        caps osd = "allow *"

Deploy the manager daemons (ceph-mgr):

ceph-deploy mgr create mon01 mon02 mon03

#Result

[mon03][DEBUG ] connected to host: mon03​​ 

[mon03][DEBUG ] detect platform information from remote host

[mon03][DEBUG ] detect​​ machine type

[ceph_deploy.mgr][INFO ​​ ] Distro info: Ubuntu 18.04 bionic

[ceph_deploy.mgr][DEBUG ] remote host will use systemd

[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to mon03

[mon03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[mon03][WARNIN] mgr keyring does not exist yet, creating one

[mon03][DEBUG ] create a keyring file

[mon03][DEBUG ] create path recursively if it doesn't exist

[mon03][INFO ​​ ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mgr --keyring​​ /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.mon03 mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-mon03/keyring

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph-mgr@mon03

[mon03][WARNIN] Created symlink /etc/systemd/system/ceph-mgr.target.wants/[email protected] → /lib/systemd/system/[email protected].

[mon03][INFO ​​ ] Running command: sudo systemctl start ceph-mgr@mon03

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph.target

Add the metadata servers (ceph-mds):

ceph-deploy mds create mon01 mon02 mon03

#Result

[mon03][DEBUG ] connected to host: mon03​​ 

[mon03][DEBUG ] detect platform information from remote host

[mon03][DEBUG ] detect machine type

[ceph_deploy.mds][INFO ​​ ] Distro info: Ubuntu 18.04 bionic

[ceph_deploy.mds][DEBUG ] remote host will use systemd

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to mon03

[mon03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[mon03][WARNIN] mds keyring does not exist yet, creating one

[mon03][DEBUG ] create a keyring file

[mon03][DEBUG ] create path if it doesn't exist

[mon03][INFO ​​ ] Running command: sudo ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.mon03 osd​​ allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-mon03/keyring

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph-mds@mon03

[mon03][WARNIN] Created symlink /etc/systemd/system/ceph-mds.target.wants/[email protected] → /lib/systemd/system/[email protected].

[mon03][INFO ​​ ] Running command: sudo systemctl start ceph-mds@mon03

[mon03][INFO ​​ ] Running command: sudo systemctl enable ceph.target
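
The MDS daemons stay on standby until a CephFS filesystem exists. This guide does not create one, but a minimal sketch (pool names and PG counts are illustrative) would be:

ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 32
ceph fs new cephfs cephfs_metadata cephfs_data
ceph fs ls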

4.4/ Copying the Ceph admin key

Copy the configuration file and admin key to the admin node and the Ceph nodes:

ceph-deploy admin mon01 mon02 mon03 osd01 osd02 osd03

#Result

[osd03][DEBUG ] connected to host: osd03​​ 

[osd03][DEBUG ] detect platform information from remote host

[osd03][DEBUG ] detect machine type

[osd03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
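
ceph-deploy writes ceph.conf and ceph.client.admin.keyring to /etc/ceph on each of those nodes. If you also want to run ceph commands there as a non-root user, the keyring can be made readable (an optional step from the upstream quick start, not in the original output):

sudo chmod +r /etc/ceph/ceph.client.admin.keyring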

4.5/ Adding the OSD disks

On each OSD server, add 3 disks of 5 GB each: sdb, sdc, sdd.

root@osd01:~# lsblk

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0    7:0    0 89.1M  1 loop /snap/core/7917
loop1    7:1    0 96.6M  1 loop /snap/core/9804
sda      8:0    0  100G  0 disk
├─sda1   8:1    0    1M  0 part
├─sda2   8:2    0    2G  0 part /boot
├─sda3   8:3    0    4G  0 part [SWAP]
└─sda4   8:4    0   94G  0 part /
sdb      8:16   0    5G  0 disk
sdc      8:32   0    5G  0 disk
sdd      8:48   0    5G  0 disk

The command to create an OSD on a data device is:

ceph-deploy osd create --data {device} {ceph-node}

In this case:

for i in sdb sdc sdd; do
  for j in osd01 osd02 osd03; do
    ceph-deploy osd create --data /dev/$i $j
  done
done

#Result:

/usr/local/sbin:/usr/sbin:/sbin

[osd03][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2

[osd03][WARNIN] Running command: /bin/ln -s /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961 /var/lib/ceph/osd/ceph-8/block

[osd03][WARNIN] Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-8/activate.monmap

[osd03][WARNIN] ​​ stderr: got monmap epoch 1

[osd03][WARNIN] Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-8/keyring --create-keyring --name osd.8 --add-key AQCqYFdfTn/4MhAA7CmiQgsX3fpMAVSZTsJ4Mw==

[osd03][WARNIN] ​​ stdout: creating /var/lib/ceph/osd/ceph-8/keyring

[osd03][WARNIN] added entity osd.8 auth auth(auid = 18446744073709551615 key=AQCqYFdfTn/4MhAA7CmiQgsX3fpMAVSZTsJ4Mw== with 0 caps)

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/keyring

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/

[osd03][WARNIN] Running command:​​ /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 8 --monmap /var/lib/ceph/osd/ceph-8/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-8/ --osd-uuid 57b836ef-6f83-495e-8d9b-4ab413ee6961 --setuser ceph --setgroup ceph

[osd03][WARNIN] --> ceph-volume lvm prepare successful for: /dev/sdd

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8

[osd03][WARNIN] Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961 --path /var/lib/ceph/osd/ceph-8 --no-mon-config

[osd03][WARNIN] Running command: /bin/ln -snf​​ /dev/ceph-511557d7-9e40-48ff-b5f7-50191cf4394f/osd-block-57b836ef-6f83-495e-8d9b-4ab413ee6961 /var/lib/ceph/osd/ceph-8/block

[osd03][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block

[osd03][WARNIN] Running command: /bin/chown​​ -R ceph:ceph /dev/dm-2

[osd03][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8

[osd03][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-8-57b836ef-6f83-495e-8d9b-4ab413ee6961

[osd03][WARNIN] ​​ stderr: Created symlink​​ /etc/systemd/system/multi-user.target.wants/[email protected] → /lib/systemd/system/[email protected].

[osd03][WARNIN] Running command: /bin/systemctl enable --runtime ceph-osd@8

[osd03][WARNIN] ​​ stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/[email protected] → /lib/systemd/system/[email protected].

[osd03][WARNIN] Running command: /bin/systemctl start ceph-osd@8

[osd03][WARNIN] --> ceph-volume lvm activate successful for osd ID: 8

[osd03][WARNIN] --> ceph-volume lvm create successful for: /dev/sdd

[osd03][INFO ​​ ] checking OSD status...

[osd03][DEBUG ] find the location of an executable

[osd03][INFO ​​ ] Running command: sudo /usr/bin/ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host osd03 is now ready for osd use.

Verify the result:

root@osd01:~# lsblk

NAME                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
loop0                                                                                                   7:0    0 89.1M  1 loop /snap/core/7917
loop1                                                                                                   7:1    0 96.6M  1 loop /snap/core/9804
sda                                                                                                     8:0    0  100G  0 disk
├─sda1                                                                                                  8:1    0    1M  0 part
├─sda2                                                                                                  8:2    0    2G  0 part /boot
├─sda3                                                                                                  8:3    0    4G  0 part [SWAP]
└─sda4                                                                                                  8:4    0   94G  0 part /
sdb                                                                                                     8:16   0    5G  0 disk
└─ceph--0cb8bcc0--6c7e--4b48--b2e9--95524d974fff-osd--block--ee5db9ec--c8f9--41e8--8d76--32074fba8775 253:0    0    5G  0 lvm
sdc                                                                                                     8:32   0    5G  0 disk
└─ceph--ec347fc1--b759--4849--8487--7487a9b55965-osd--block--e3118288--36fd--42d2--8217--09f36aa0e65e 253:1    0    5G  0 lvm
sdd                                                                                                     8:48   0    5G  0 disk
└─ceph--a307c8cd--9567--4e05--b3c3--a09a9c1db476-osd--block--93371b2d--ee43--493b--ac23--e05b73db2942 253:2    0    5G  0 lvm

4.6/ Checking the cluster status

root@osd01:~# sudo ceph health

HEALTH_OK

#

root@osd02:~# ceph health

HEALTH_OK

root@osd02:~# sudo ceph status

 ​​​​ cluster:

 ​​ ​​ ​​​​ id:  ​​ ​​ ​​​​ ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a

 ​​ ​​ ​​​​ health: HEALTH_OK

​​ 

 ​​​​ services:

 ​​ ​​ ​​​​ mon: 3 daemons, quorum mon01,mon02,mon03

 ​​ ​​ ​​​​ mgr: mon01(active), standbys: mon02, mon03

 ​​ ​​ ​​​​ osd: 9 osds: 9 up, 9 in

​​ 

 ​​​​ data:

 ​​ ​​ ​​​​ pools:  ​​​​ 0 pools, 0 pgs

 ​​ ​​ ​​​​ objects: 0 ​​ objects, 0 B

 ​​ ​​ ​​​​ usage:  ​​​​ 9.0 GiB used, 36 GiB / 45 GiB avail

 ​​ ​​ ​​​​ pgs:  ​​ ​​ ​​​​ 

Check Quorum status

ceph quorum_status --format json-pretty

 

root@osd01:~# ceph quorum_status --format json-pretty

 

{

 ​​ ​​ ​​​​ "election_epoch": 6,

 ​​ ​​ ​​​​ "quorum": [

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 0,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 1,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 2

 ​​ ​​ ​​​​ ],

 ​​ ​​ ​​​​ "quorum_names": [

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "mon01",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "mon02",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "mon03"

 ​​ ​​ ​​​​ ],

 ​​ ​​ ​​​​ "quorum_leader_name": "mon01",

 ​​ ​​ ​​​​ "monmap": {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "epoch": 1,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "fsid": "ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "modified": "2020-09-08 17:23:55.482563",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "created": "2020-09-08 17:23:55.482563",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "features": {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "persistent": [

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "kraken",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "luminous",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "mimic",

 ​​​​  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "osdmap-prune"

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ ],

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "optional": []

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "mons": [

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "rank": 0,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "mon01",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "addr": "10.0.1.52:6789/0",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "public_addr":​​ "10.0.1.52:6789/0"

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "rank": 1,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "mon02",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "addr": "10.0.1.53:6789/0",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "public_addr": "10.0.1.53:6789/0"

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "rank":​​ 2,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "mon03",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "addr": "10.0.1.54:6789/0",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "public_addr": "10.0.1.54:6789/0"

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ }

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ ]

 ​​ ​​ ​​​​ }

}
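
Two other useful views at this point (standard Ceph commands; output omitted here):

sudo ceph osd tree    # CRUSH hierarchy: which OSDs sit on which host and their up/down state
sudo ceph df          # raw capacity and per-pool usage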

 

4.7/ Enable the Ceph Dashboard

sudo ceph mgr module enable dashboard

sudo ceph mgr module ls

 

{

 ​​ ​​ ​​​​ "enabled_modules": [

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "balancer",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "crash",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "dashboard",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "iostat",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "restful",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "status"

 ​​ ​​ ​​​​ ],

 ​​ ​​ ​​​​ "disabled_modules": [

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "hello",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "influx",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": false,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": "influxdb python module not found"

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "localpool",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "prometheus",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "selftest",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "smart",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "telegraf",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "telemetry",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ },

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "name": "zabbix",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "can_run": true,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "error_string": ""

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ }

 ​​ ​​ ​​​​ ]

}

Generate a self-signed certificate for accessing the dashboard:

sudo ceph dashboard create-self-signed-cert

#Result

Self-signed certificate created

# This command could not be used on this Ceph release:

# sudo ceph dashboard ac-user-create admin 'Str0ngP@sswOrd' administrator

sudo ceph dashboard ac-user-create admin Str0ngP@sswOrd administrator

no valid command found; 10 closest matches:

dashboard set-enable-browsable-api <value>

dashboard set-rgw-api-port <int>

dashboard get-rgw-api-admin-resource

dashboard get-rgw-api-ssl-verify

dashboard get-rgw-api-secret-key

dashboard set-rgw-api-access-key <value>

dashboard get-rgw-api-port

dashboard get-enable-browsable-api

dashboard set-rgw-api-ssl-verify <value>

dashboard set-rest-requests-timeout <int>

Error EINVAL: invalid command

#https://tracker.ceph.com/issues/23973

# This command creates the dashboard user and password instead:

ceph dashboard set-login-credentials ceph-admin 123456@@##

Username and password updated
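
To see which manager is currently serving the dashboard and on which port (8443 by default with the self-signed certificate), query the active mgr; the URL below is only illustrative:

sudo ceph mgr services
#{
#    "dashboard": "https://mon01.ceph.com:8443/"
#}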

Enabling the Object Gateway Management Frontend:

sudo radosgw-admin user create --uid=ceph-admin --display-name='Ceph Admin' --system

#Result

{

 ​​ ​​ ​​​​ "user_id": "ceph-admin",

 ​​ ​​ ​​​​ "display_name": "Ceph Admin",

 ​​ ​​ ​​​​ "email": "",

 ​​ ​​ ​​​​ "suspended": 0,

 ​​ ​​ ​​​​ "max_buckets": 1000,

 ​​ ​​ ​​​​ "auid": 0,

 ​​ ​​ ​​​​ "subusers": [],

 ​​ ​​ ​​​​ "keys": [

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "user": "ceph-admin",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "access_key": "HP0IUU3NGNP1S10BJSS9",

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "secret_key": "BIn9FN5R4CABc8wayajKQQ1N0wtRolBRZKReVYg7"

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ }

 ​​ ​​ ​​​​ ],

 ​​ ​​ ​​​​ "swift_keys": [],

 ​​ ​​ ​​​​ "caps": [],

 ​​ ​​ ​​​​ "op_mask": "read, write, delete",

 ​​ ​​ ​​​​ "system": "true",

 ​​ ​​ ​​​​ "default_placement": "",

 ​​ ​​ ​​​​ "placement_tags": [],

 ​​ ​​ ​​​​ "bucket_quota": {

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "enabled": false,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "check_on_raw": false,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "max_size": -1,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "max_size_kb": 0,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "max_objects": -1

 ​​ ​​ ​​​​ },

 ​​ ​​ ​​​​ "user_quota": {

 ​​ ​​ ​​ ​​ ​​ ​​​​ ​​ "enabled": false,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "check_on_raw": false,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "max_size": -1,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "max_size_kb": 0,

 ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ "max_objects": -1

 ​​ ​​ ​​​​ },

 ​​ ​​ ​​​​ "temp_url_keys": [],

 ​​ ​​ ​​​​ "type": "rgw",

 ​​ ​​ ​​​​ "mfa_ids": []

}

Finally, provide the credentials to the dashboard:

sudo​​ ceph dashboard set-rgw-api-access-key <api-access-key>

sudo ceph dashboard set-rgw-api-secret-key <api-secret-key>

ceph dashboard set-rgw-api-access-key HP0IUU3NGNP1S10BJSS9

#Result

Option RGW_API_ACCESS_KEY updated

 

sudo ceph dashboard set-rgw-api-secret-key BIn9FN5R4CABc8wayajKQQ1N0wtRolBRZKReVYg7

#Result

Option RGW_API_SECRET_KEY updated

If you are using a self-signed certificate in your Object Gateway setup, then you should disable certificate verification:

#

sudo ceph dashboard set-rgw-api-ssl-verify False

#Result

Option RGW_API_SSL_VERIFY updated

4.8/ Add the Rados Gateway

To use the Ceph Object Gateway component of Ceph, you must deploy an instance of RGW. Execute the following to create a new instance of Rados Gateway:

$ ceph-deploy rgw create {gateway-node}

Example:

su - ceph-admin

cd /home/ceph-admin/ceph-deploy

ceph-deploy rgw create rgw

#Result

[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-admin/.cephdeploy.conf

[ceph_deploy.cli][INFO ​​ ] Invoked (2.0.1): /usr/bin/ceph-deploy​​ rgw create rgw

[ceph_deploy.cli][INFO ​​ ] ceph-deploy options:

[ceph_deploy.cli][INFO ​​ ] ​​ username  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : None

[ceph_deploy.cli][INFO ​​ ] ​​ verbose  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : False

[ceph_deploy.cli][INFO ​​ ] ​​ rgw  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : [('rgw', 'rgw.rgw')]

[ceph_deploy.cli][INFO ​​ ] ​​ overwrite_conf  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : False

[ceph_deploy.cli][INFO ​​ ] ​​ subcommand  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : create

[ceph_deploy.cli][INFO ​​ ] ​​ quiet  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : False

[ceph_deploy.cli][INFO ​​ ] ​​ cd_conf  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7faa2e8136e0>

[ceph_deploy.cli][INFO ​​ ] ​​ cluster  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : ceph

[ceph_deploy.cli][INFO ​​ ] ​​ func  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : <function rgw at 0x7faa2eeb15d0>

[ceph_deploy.cli][INFO ​​ ] ​​ ceph_conf  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : None

[ceph_deploy.cli][INFO ​​ ] ​​ default_release  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ : False

[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts rgw:rgw.rgw

...

[rgw][DEBUG ] connected to host: rgw​​ 

[rgw][DEBUG ] detect platform information from remote host

[rgw][DEBUG ] detect machine type

[ceph_deploy.rgw][INFO ​​ ] Distro info: Ubuntu 18.04 bionic

[ceph_deploy.rgw][DEBUG ] remote host will use systemd

[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to rgw

[rgw][DEBUG ] write cluster​​ configuration to /etc/ceph/{cluster}.conf

[rgw][WARNIN] rgw keyring does not exist yet, creating one

[rgw][DEBUG ] create a keyring file

[rgw][DEBUG ] create path recursively if it doesn't exist

[rgw][INFO ​​ ] Running command: sudo ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.rgw osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.rgw/keyring

[rgw][INFO ​​ ] Running command: sudo systemctl enable [email protected]

[rgw][WARNIN] Created symlink /etc/systemd/system/ceph-radosgw.target.wants/[email protected] → /lib/systemd/system/[email protected].

[rgw][INFO ​​ ] Running command: sudo systemctl start [email protected]

[rgw][INFO ​​ ] Running command: sudo systemctl enable ceph.target

[ceph_deploy.rgw][INFO ​​ ] The Ceph Object Gateway (RGW) is now running on host rgw and default port 7480

By default, the RGW instance listens on port 7480. This can be changed by editing ceph.conf on the node running the RGW as follows:

On the rgw server:

nano /etc/ceph/ceph.conf

#Paste at the end of the file

#change the RGW port to 80

[client]

rgw frontends = civetweb port=80

Restart the radosgw service on the rgw server:

service [email protected] restart

service [email protected] status

#Result

[email protected] - Ceph rados gateway

 ​​ ​​​​ Loaded: loaded (/lib/systemd/system/[email protected]; indirect; vendor preset: enabled)

 ​​ ​​​​ Active: active (running) since Thu 2020-09-17 17:27:53 +07; 1s ago

​​ Main PID: 650818 (radosgw)

 ​​ ​​ ​​​​ Tasks: 581

 ​​ ​​​​ CGroup: /system.slice/system-ceph\x2dradosgw.slice/[email protected]

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ └─650818 /usr/bin/radosgw -f --cluster ceph --name client.rgw.rgw --setuser ceph --setgroup ceph

 

Sep 17 17:27:53 rgw systemd[1]: Started Ceph​​ rados gateway.

netstat -pnltu

 

tcp  ​​ ​​ ​​ ​​ ​​ ​​​​ 0  ​​ ​​ ​​ ​​​​ 0 0.0.0.0:80  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 0.0.0.0:*  ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ LISTEN  ​​ ​​ ​​ ​​​​ 650818/radosgw  ​​​​ 

Access the dashboard and verify Ceph:

https://mon01.ceph.com:8443/#/dashboard

#​​ https://docs.ceph.com/en/latest/rados/operations/data-placement/

5./ Creating and Using Block Storage

By default, Ceph block devices use the rbd pool. The different pool types are covered further below.
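
If the rbd pool does not exist yet on the cluster, it has to be created before any block device can be used. A minimal sketch (pool name, PG count and image name are illustrative), run from a node with the admin keyring:

sudo ceph osd pool create rbd 64
sudo ceph osd pool application enable rbd rbd
sudo rbd create disk01 --size 1024    # a 1 GiB test image
sudo rbd ls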

5.1/ Connecting an Ubuntu client to the block storage

First, create a client VPS with the following details:

IP: 10.0.1.42

Configure the hostname entries on all the Ceph nodes and on the client:

10.0.1.51 rgw.ceph.com rgw

10.0.1.52 mon01.ceph.com mon01

10.0.1.53 mon02.ceph.com mon02

10.0.1.54 mon03.ceph.com mon03

10.0.1.55 ceph-admin.com ceph-admin

10.0.1.56 osd01.ceph.com osd01

10.0.1.57 osd02.ceph.com osd02

10.0.1.58 osd03.ceph.com osd03

10.0.1.42 client01-ceph.ceph.com client01-ceph

Next, on the client01-ceph server:

export USER_NAME="ceph-admin"

export USER_PASS="StrOngP@ssw0rd"

sudo useradd --create-home -s /bin/bash ${USER_NAME}

echo "${USER_NAME}:${USER_PASS}"|sudo chpasswd

echo "${USER_NAME} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/${USER_NAME}

sudo chmod 0440 /etc/sudoers.d/${USER_NAME}

On the ceph-admin server, copy the public key to the client01-ceph node:

ssh-copy-id client01-ceph

 

Number of key(s) added: 1

 

Now try logging into the machine, with: "ssh 'client01-ceph'"

and check to make sure that only the key(s) you wanted were added.

Test the connection; if it succeeds without prompting for a password, everything is in place.

ssh 'client01-ceph'
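
The client-side steps after this point are not captured in this document. A hedged sketch of what they typically look like with ceph-deploy (run from /home/ceph-admin/ceph-deploy on ceph-admin; the image name disk01 is illustrative):

ceph-deploy install client01-ceph     # install the Ceph packages on the client
ceph-deploy admin client01-ceph       # push ceph.conf and the admin keyring
#Then, on client01-ceph, map and mount an image created on the cluster:
sudo rbd map disk01
#If the kernel rejects unsupported image features, disable them first:
#  sudo rbd feature disable disk01 object-map fast-diff deep-flatten
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt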

 

 

 

 

 

 

6./ Troubleshooting

6.1/ Error reported on mon01

_check_auth_rotating possible clock skew, rotating keys expired way too early

The ceph-mgr service could not be started.

The dashboard was still reachable at https://mon02.ceph.com:8443/, which had automatically taken over from mon01 to manage Ceph. Although Ceph was not yet in use, the disks were already reported as nearly full.

Checking the logs on mon02 and mon03 revealed no errors.

Checking mon01 showed that it was no longer in the quorum.

Re-check the configuration file:

cat /etc/ceph/ceph.conf

[global]

fsid = ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a

mon_initial_members = mon01, mon02, mon03

mon_host = 10.0.1.52,10.0.1.53,10.0.1.54

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

#################

#auth_cluster_required = none

#auth_service_required = none

#auth_client_required = none

After changing the cluster, service, and client authentication configuration back to cephx, mon01 automatically rejoined the cluster and the error cleared.

Enable the services to start at boot on mon01:

systemctl enable ceph-mgr@mon01

systemctl enable ceph-mds@mon01

6.2/ Fixing a full OSD

ceph-admin@osd01:/etc/ceph$ ceph osd df

ceph osd status

ceph osd reweight-by-utilization

#Result (no change)

no change

moved 0 / 120 (0%)

avg 60

stddev 36.7696 -> 36.7696 (expected baseline 5.47723)

min osd.2 with 12 -> 12 pgs (0.2 -> 0.2 * mean)

max osd.4 with 40 -> 40 pgs (0.666667 -> 0.666667 * mean)

 

oload 120

max_change 0.05

max_change_osds 4

average_utilization 0.9293

overload_utilization 1.1152

ceph pg set_full_ratio 0.97

Error:

no valid command found; 10 closest matches:

pg ls {<int>} {<states> [<states>...]}

pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}

pg ls-by-primary <osdname (id|osd.id)> {<int>} {<states> [<states>...]}

pg ls-by-osd <osdname (id|osd.id)> {<int>} {<states> [<states>...]}

pg dump_pools_json

pg ls-by-pool <poolstr> {<states> [<states>...]}

pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}

pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}

pg stat

pg getmap

Error EINVAL: invalid command

Check the version again:

root@osd01:~# ceph -v

ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

root@osd01:~# ceph osd set-full-ratio 0.97

osd set-full-ratio 0.97

With this kind of error, the fix is to add more OSDs or to add more disks to the existing OSDs.
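
For example, after attaching a new empty disk (hypothetically /dev/sde) to osd01, it can be added with the same command used in section 4.5:

ceph-deploy osd create --data /dev/sde osd01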

# https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/pdf/troubleshooting_guide/Red_Hat_Ceph_Storage-3-Troubleshooting_Guide-en-US.pdf

 

# http://centosquestions.com/what-do-you-do-when-a-ceph-osd-is-nearfull/

6.3/ OSDs reported Down

root@mon01:~# sudo ceph status

 ​​​​ cluster:

 ​​ ​​ ​​​​ id:  ​​ ​​ ​​​​ ae0d4ca3-da52-4c95-a9c4-8bbfcd31c41a

 ​​ ​​ ​​​​ health: HEALTH_WARN

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ noout flag(s) set

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 2 osds down

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 2 hosts (2 osds) down

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 908/681 objects misplaced (133.333%)

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ Reduced​​ data availability: 2 pgs inactive, 38 pgs stale

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ Degraded data redundancy: 38 pgs undersized

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 1 slow ops, oldest one blocked for 82 sec, mon.mon01 has slow ops

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ too few PGs per OSD (13 < min 30)

​​ 

 ​​​​ services:

 ​​ ​​ ​​​​ mon: 3 daemons, quorum mon01,mon02,mon03

 ​​ ​​ ​​​​ mgr: mon03(active), standbys: mon01, mon02

 ​​ ​​ ​​​​ osd: 3 osds: 1 up, 3 in

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ flags noout

 ​​ ​​ ​​​​ rgw: 1 daemon active

​​ 

 ​​​​ data:

 ​​ ​​ ​​​​ pools:  ​​​​ 5 pools, 40 pgs

 ​​ ​​ ​​​​ objects: 227 ​​ objects, 1.6 KiB

 ​​ ​​ ​​​​ usage:  ​​​​ 3.1 GiB used,​​ 297 GiB / 300 GiB avail

 ​​ ​​ ​​​​ pgs:  ​​ ​​ ​​​​ 5.000% pgs unknown

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 908/681 objects misplaced (133.333%)

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 38 stale+active+undersized+remapped

 ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​ ​​​​ 2 ​​ unknown

After rebooting host osd03, the remaining two OSDs changed their status back to up.