ceph-diary-2-add-new-osd.md

Table of Contents

Name

Environment

This article is based on ceph version 13.2.6 (7b695f835b03642f85998b2ae7b6dd093d9fbce4) mimic (stable).
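To confirm the version on your own nodes, the ceph CLI can report it directly (a quick check, assuming the ceph client is installed; ceph versions needs to reach the monitors):

ceph --version     # version of the locally installed ceph binaries
ceph versions      # versions reported by every running daemon in the cluster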

Content

The existing Ceph cluster:

[root@storage02-ib ~]# ceph osd status
+----+----------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id |            host            |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+----------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | storage02-ib.lobj.eth6.org | 1696G | 3892G |    0   |     0   |    0   |     0   | exists,up |
| 1  | storage02-ib.lobj.eth6.org | 1631G | 3957G |    0   |     0   |    0   |     0   | exists,up |
| 2  | storage02-ib.lobj.eth6.org | 1736G | 3852G |    0   |     0   |    0   |     0   | exists,up |
| 3  | storage02-ib.lobj.eth6.org | 1867G | 3721G |    0   |     0   |    0   |     0   | exists,up |
| 4  | storage02-ib.lobj.eth6.org | 1927G | 3661G |    0   |     0   |    0   |     0   | exists,up |
| 5  | storage03-ib.lobj.eth6.org | 1491G | 4097G |    0   |     0   |    0   |     0   | exists,up |
| 6  | storage03-ib.lobj.eth6.org | 1523G | 4065G |    0   |     0   |    0   |     0   | exists,up |
| 7  | storage03-ib.lobj.eth6.org | 1423G | 4165G |    0   |     0   |    1   |     0   | exists,up |
| 10 | storage03-ib.lobj.eth6.org | 1450G | 4138G |    0   |     0   |    0   |     0   | exists,up |
+----+----------------------------+-------+-------+--------+---------+--------+---------+-----------+

As shown above, we now want to add a new disk to the storage03-ib.lobj.eth6.org node, that is, add a new OSD.

The disk has already been physically installed in the machine; run fdisk -l to find the new drive:

[root@storage03-ib ceph-9]# fdisk -l

Disk /dev/sdb: 128.0 GB, 128035676160 bytes, 250069680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b0447

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     2099199     1048576   83  Linux
/dev/sdb2         2099200   211814399   104857600   83  Linux

Disk /dev/sda: 256.1 GB, 256060514304 bytes, 500118192 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 6001.2 GB, 6001175126016 bytes, 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sde: 6001.2 GB, 6001175126016 bytes, 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdf: 6001.2 GB, 6001175126016 bytes, 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdg: 6001.2 GB, 6001175126016 bytes, 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/sdc: 6001.2 GB, 6001175126016 bytes, 11721045168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/ceph--db085012--287e--4e5a--80a0--1c92f8966bc2-osd--block--eefef80a--4553--4832--8a83--1f70231f4b89: 6001.2 GB, 6001172414464 bytes, 11721039872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/ceph--8761018c--32af--4732--869f--33c88aeec7bd-osd--block--3635f9d5--2f63--4d60--a1ed--696dc3b5e673: 6001.2 GB, 6001172414464 bytes, 11721039872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/ceph--c8971056--0ac6--4229--a7a2--9764390d0b99-osd--block--1d78ea09--2aa7--4422--b6d0--95a41c337f30: 6001.2 GB, 6001172414464 bytes, 11721039872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/ceph--29dd0bd9--2377--4b89--8a29--11fcfdf0faa6-osd--block--8faa1c0e--fa55--478e--9f4c--15fd9e93943a: 6001.2 GB, 6001172414464 bytes, 11721039872 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

The newly added drive is /dev/sde.

If there are many disks and you no longer remember which one you just plugged in (especially when, as in my case, the device name lands in the middle of the list and is hard to spot), you can use the pvs command to check each disk's VG name: a freshly inserted disk that carries no LVM volume will show no LVM information (see the loop sketch after the examples below).

For example, this is the information for a disk that is already in use:

[root@storage03-ib ceph-9]# pvs /dev/sdc
  PV         VG                                        Fmt  Attr PSize  PFree
  /dev/sdc   ceph-29dd0bd9-2377-4b89-8a29-11fcfdf0faa6 lvm2 a--  <5.46t    0 

And this is the new disk:

[root@storage03-ib ceph-9]# pvs /dev/sde
  Failed to find physical volume "/dev/sde".
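To check every drive in one pass, a short shell loop like the one below works (a sketch; it assumes the data disks all show up as /dev/sd? devices). Disks that are not LVM physical volumes print the "Failed to find physical volume" message:

# Sketch: show the LVM PV/VG status of every /dev/sd? device;
# brand-new disks with no LVM metadata report "Failed to find physical volume".
for d in /dev/sd?; do echo "== $d =="; pvs "$d" 2>&1; done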

Next, log in to the ceph-deploy admin machine and add the OSD with the ceph-deploy tool (after logging in, remember to switch to the dedicated ceph-deploy user).

A new OSD is added with the command ceph-deploy osd create --data {target-device} {target-host}.

[root@storage01-ib ceph01.lobj.eth6.org]# su ceph-deploy-user
[ceph-deploy-user@storage01-ib ceph01.lobj.eth6.org]$ ceph-deploy osd create --data /dev/sde storage03-ib.lobj.eth6.org
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-deploy-user/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sde storage03-ib.lobj.eth6.org
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7facea9a0bd8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : storage03-ib.lobj.eth6.org
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7faceabe0a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sde
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sde
[storage03-ib.lobj.eth6.org][DEBUG ] connection detected need for sudo
[storage03-ib.lobj.eth6.org][DEBUG ] connected to host: storage03-ib.lobj.eth6.org 
[storage03-ib.lobj.eth6.org][DEBUG ] detect platform information from remote host
[storage03-ib.lobj.eth6.org][DEBUG ] detect machine type
[storage03-ib.lobj.eth6.org][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to storage03-ib.lobj.eth6.org
[storage03-ib.lobj.eth6.org][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[storage03-ib.lobj.eth6.org][DEBUG ] find the location of an executable
[storage03-ib.lobj.eth6.org][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sde
[storage03-ib.lobj.eth6.org][WARNIN] usage: ceph-volume lvm create [-h] --data DATA [--filestore]
[storage03-ib.lobj.eth6.org][WARNIN]                               [--journal JOURNAL] [--bluestore]
[storage03-ib.lobj.eth6.org][WARNIN]                               [--block.db BLOCK_DB] [--block.wal BLOCK_WAL]
[storage03-ib.lobj.eth6.org][WARNIN]                               [--osd-id OSD_ID] [--osd-fsid OSD_FSID]
[storage03-ib.lobj.eth6.org][WARNIN]                               [--cluster-fsid CLUSTER_FSID]
[storage03-ib.lobj.eth6.org][WARNIN]                               [--crush-device-class CRUSH_DEVICE_CLASS]
[storage03-ib.lobj.eth6.org][WARNIN]                               [--dmcrypt] [--no-systemd]
[storage03-ib.lobj.eth6.org][WARNIN] ceph-volume lvm create: error: GPT headers found, they must be removed on: /dev/sde
[storage03-ib.lobj.eth6.org][ERROR ] RuntimeError: command returned non-zero exit status: 2
[ceph_deploy.osd][ERROR ] Failed to execute command: /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sde
[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs

As you can see, a GPT partition table was found on the new drive, so the creation failed; we have to wipe the drive's partition table manually (had the drive been brand new, this step would simply have succeeded).

Here we just brute-force the partition table away rather than bothering with PV and VG operations.
Note: check again and again that the target drive really is the intended one; if you run this against the wrong drive, its partition table is gone.

[root@storage03-ib ceph-9]# dd if=/dev/zero of=/dev/sde bs=512K count=1
1+0 records in
1+0 records out
524288 bytes (524 kB) copied, 0.00109677 s, 478 MB/s

The dd command fills the first 512 KB of the drive with zeros, wiping out the partition information.
Note that if this disk was previously mounted, a reboot is needed for the change to take effect.
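As an alternative to raw dd, Ceph's own tooling can wipe a device so it can be reused; a sketch for the same disk and host (both commands destroy everything on /dev/sde):

sudo ceph-volume lvm zap /dev/sde                          # run directly on the OSD host
ceph-deploy disk zap storage03-ib.lobj.eth6.org /dev/sde   # or from the ceph-deploy admin node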

Then re-run the OSD creation:

[ceph-deploy-user@storage01-ib ceph01.lobj.eth6.org]$ ceph-deploy osd create --data /dev/sde storage03-ib.lobj.eth6.org
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph-deploy-user/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sde storage03-ib.lobj.eth6.org
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  bluestore                     : None
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f09deba1bd8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : xfs
[ceph_deploy.cli][INFO  ]  block_wal                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  journal                       : None
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  host                          : storage03-ib.lobj.eth6.org
[ceph_deploy.cli][INFO  ]  filestore                     : None
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f09dede1a28>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.cli][INFO  ]  data                          : /dev/sde
[ceph_deploy.cli][INFO  ]  block_db                      : None
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  debug                         : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sde
[storage03-ib.lobj.eth6.org][DEBUG ] connection detected need for sudo
[storage03-ib.lobj.eth6.org][DEBUG ] connected to host: storage03-ib.lobj.eth6.org 
[storage03-ib.lobj.eth6.org][DEBUG ] detect platform information from remote host
[storage03-ib.lobj.eth6.org][DEBUG ] detect machine type
[storage03-ib.lobj.eth6.org][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: CentOS Linux 7.6.1810 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to storage03-ib.lobj.eth6.org
[storage03-ib.lobj.eth6.org][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[storage03-ib.lobj.eth6.org][DEBUG ] find the location of an executable
[storage03-ib.lobj.eth6.org][INFO  ] Running command: sudo /usr/sbin/ceph-volume --cluster ceph lvm create --bluestore --data /dev/sde
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /usr/sbin/vgcreate --force --yes ceph-d524c6c7-15f3-4fcc-b494-7253907e3af2 /dev/sde
[storage03-ib.lobj.eth6.org][DEBUG ]  stdout: Physical volume "/dev/sde" successfully created.
[storage03-ib.lobj.eth6.org][DEBUG ]  stdout: Volume group "ceph-d524c6c7-15f3-4fcc-b494-7253907e3af2" successfully created
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /usr/sbin/lvcreate --yes -l 100%FREE -n osd-block-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893 ceph-d524c6c7-15f3-4fcc-b494-7253907e3af2
[storage03-ib.lobj.eth6.org][DEBUG ]  stdout: Logical volume "osd-block-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893" created.
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ceph-authtool --gen-print-key
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-8
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-8
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -h ceph:ceph /dev/ceph-d524c6c7-15f3-4fcc-b494-7253907e3af2/osd-block-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-4
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ln -s /dev/ceph-d524c6c7-15f3-4fcc-b494-7253907e3af2/osd-block-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893 /var/lib/ceph/osd/ceph-8/block
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-8/activate.monmap
[storage03-ib.lobj.eth6.org][DEBUG ]  stderr: got monmap epoch 3
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-8/keyring --create-keyring --name osd.8 --add-key AQBAkyldqQHgDxAAvwEeZNU9iOV+M0FEBq0Odw==
[storage03-ib.lobj.eth6.org][DEBUG ]  stdout: creating /var/lib/ceph/osd/ceph-8/keyring
[storage03-ib.lobj.eth6.org][DEBUG ] added entity osd.8 auth auth(auid = 18446744073709551615 key=AQBAkyldqQHgDxAAvwEeZNU9iOV+M0FEBq0Odw== with 0 caps)
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/keyring
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8/
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 8 --monmap /var/lib/ceph/osd/ceph-8/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-8/ --osd-uuid e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893 --setuser ceph --setgroup ceph
[storage03-ib.lobj.eth6.org][DEBUG ] --> ceph-volume lvm prepare successful for: /dev/sde
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-d524c6c7-15f3-4fcc-b494-7253907e3af2/osd-block-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893 --path /var/lib/ceph/osd/ceph-8 --no-mon-config
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/ln -snf /dev/ceph-d524c6c7-15f3-4fcc-b494-7253907e3af2/osd-block-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893 /var/lib/ceph/osd/ceph-8/block
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-8/block
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -R ceph:ceph /dev/dm-4
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-8
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/systemctl enable ceph-volume@lvm-8-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893
[storage03-ib.lobj.eth6.org][DEBUG ]  stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-8-e1b6d9ba-f19d-4470-82e9-c3f1fa3e5893.service to /usr/lib/systemd/system/ceph-volume@.service.
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/systemctl enable --runtime ceph-osd@8
[storage03-ib.lobj.eth6.org][DEBUG ]  stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@8.service to /usr/lib/systemd/system/ceph-osd@.service.
[storage03-ib.lobj.eth6.org][DEBUG ] Running command: /bin/systemctl start ceph-osd@8
[storage03-ib.lobj.eth6.org][DEBUG ] --> ceph-volume lvm activate successful for osd ID: 8
[storage03-ib.lobj.eth6.org][DEBUG ] --> ceph-volume lvm create successful for: /dev/sde
[storage03-ib.lobj.eth6.org][INFO  ] checking OSD status...
[storage03-ib.lobj.eth6.org][DEBUG ] find the location of an executable
[storage03-ib.lobj.eth6.org][INFO  ] Running command: sudo /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host storage03-ib.lobj.eth6.org is now ready for osd use.
[ceph-deploy-user@storage01-ib ceph01.lobj.eth6.org]$ 

The OSD was added successfully. Run ceph osd status to see the new OSD:

[root@storage02-ib ~]# ceph osd status
+----+----------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id |            host            |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+----------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | storage02-ib.lobj.eth6.org | 1689G | 3899G |    0   |     0   |    0   |     0   | exists,up |
| 1  | storage02-ib.lobj.eth6.org | 1631G | 3957G |    0   |     0   |    0   |     0   | exists,up |
| 2  | storage02-ib.lobj.eth6.org | 1784G | 3804G |    0   |     0   |    0   |     0   | exists,up |
| 3  | storage02-ib.lobj.eth6.org | 1862G | 3726G |    0   |     0   |    0   |     0   | exists,up |
| 4  | storage02-ib.lobj.eth6.org | 1892G | 3696G |    0   |     0   |    0   |     0   | exists,up |
| 5  | storage03-ib.lobj.eth6.org | 1515G | 4073G |    0   |     0   |    0   |     0   | exists,up |
| 6  | storage03-ib.lobj.eth6.org | 1546G | 4042G |    0   |     0   |    0   |     0   | exists,up |
| 7  | storage03-ib.lobj.eth6.org | 1438G | 4150G |    0   |     0   |    1   |     0   | exists,up |
| 8  | storage03-ib.lobj.eth6.org | 1102M | 5587G |    0   |     0   |    0   |     0   | exists,up |
| 10 | storage03-ib.lobj.eth6.org | 1481G | 4107G |    0   |     0   |    0   |     0   | exists,up |
+----+----------------------------+-------+-------+--------+---------+--------+---------+-----------+
[root@storage02-ib ~]# 

We can see the new osd.8 has appeared. Now run ceph -s to take a look:

[root@storage02-ib ~]# ceph -s
  cluster:
    id:     0f7be0a4-2a05-4658-8829-f3d2f62579d2
    health: HEALTH_WARN
            234857/4527742 objects misplaced (5.187%)
            Degraded data redundancy: 785524/4527742 objects degraded (17.349%), 94 pgs degraded, 94 pgs undersized
 
  services:
    mon: 3 daemons, quorum storage01-ib,storage02-ib,storage03-ib
    mgr: storage01-ib(active), standbys: storage02-ib, storage03-ib
    osd: 10 osds: 10 up, 10 in; 115 remapped pgs
    rgw: 2 daemons active
 
  data:
    pools:   5 pools, 288 pgs
    objects: 2.26 M objects, 8.6 TiB
    usage:   14 TiB used, 40 TiB / 55 TiB avail
    pgs:     785524/4527742 objects degraded (17.349%)
             234857/4527742 objects misplaced (5.187%)
             173 active+clean
             91  active+undersized+degraded+remapped+backfill_wait
             18  active+remapped+backfill_wait
             3   active+remapped+backfilling
             3   active+undersized+degraded+remapped+backfilling
 
  io:
    recovery: 108 MiB/s, 27 objects/s
 
[root@storage02-ib ~]# 

You can see 234857/4527742 objects misplaced (5.187%): because a new disk was added, the data has to be rebalanced across the cluster.
With that, the new OSD has been added.
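To follow the rebalance until the cluster is healthy again, something along these lines works (a sketch):

watch -n 10 'ceph -s'    # re-check cluster status until all PGs are active+clean
ceph osd df tree         # per-OSD utilization, to watch data move onto the new osd.8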

Reference