Why is my Ceph cluster's raw used value (964G) in the GLOBAL section far higher than the used value (244G) in the POOLS section?


[en@ceph01 ~]$ sudo ceph df
GLOBAL:
SIZE        AVAIL       RAW USED     %RAW USED
6.00TiB     5.06TiB       964GiB         15.68
POOLS:
NAME                    ID     USED        %USED     MAX AVAIL     OBJECTS
.rgw.root               1      1.09KiB         0       1.56TiB           4
default.rgw.control     2           0B         0       1.56TiB           8
default.rgw.meta        3           0B         0       1.56TiB           0
default.rgw.log         4           0B         0       1.56TiB         207
cephfs_data             5       244GiB      9.22       2.34TiB     4829661
cephfs_meta             6       168MiB         0       2.34TiB        4160
[en@ceph01 ~]$ sudo ceph osd df
ID CLASS WEIGHT  REWEIGHT SIZE    USE    DATA   OMAP    META    AVAIL   %USE  VAR  PGS
0   hdd 2.00000  1.00000 2.00TiB 331GiB 326GiB 1.64GiB 3.38GiB 1.68TiB 16.17 1.03  77
1   hdd 2.00000  1.00000 2.00TiB 346GiB 341GiB 1.69GiB 3.51GiB 1.66TiB 16.90 1.08  78
2   hdd 2.00000  1.00000 2.00TiB 286GiB 282GiB 1.31GiB 2.96GiB 1.72TiB 13.97 0.89  69
TOTAL 6.00TiB 964GiB 949GiB 4.64GiB 9.86GiB 5.06TiB 15.68
MIN/MAX VAR: 0.89/1.08  STDDEV: 1.24

info about ceph cluster:

>pool 5 'cephfs_data' replicated size 2 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 33 flags hashpspool stripe_width 0 application cephfs
>pool 6 'cephfs_meta' replicated size 2 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 31 flags hashpspool stripe_width 0 application cephfs
> max_osd 3

There is 1 answer

Arkadiy Bolotov:

This is most likely due to bluestore_min_alloc_size_hdd being set to 64K.
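To put rough numbers on it: cephfs_data stores 244 GiB across 4,829,661 objects, i.e. about 53 KiB per object on average. BlueStore rounds each object's on-disk footprint up to a multiple of the allocation unit, so if many of those objects are small files well below 64 KiB, each one still occupies a full 64 KiB per replica; with replicated size 2 that can push RAW USED far beyond the ~488 GiB (244 GiB × 2) you would expect from replication alone. A rough sketch of the rounding, using hypothetical object sizes and assuming 64K = 65536 bytes with 2 replicas:

# Raw footprint of a single 4 KiB object: it rounds up to one full
# 64 KiB allocation unit, then is doubled for size-2 replication.
echo $(( ((4096 + 65536 - 1) / 65536) * 65536 * 2 ))    # -> 131072 bytes

# A 70 KiB object needs two allocation units per replica.
echo $(( ((71680 + 65536 - 1) / 65536) * 65536 * 2 ))   # -> 262144 bytes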

More info here: "ceph df (octopus) shows USED is 7 times higher than STORED in erasure coded pool"
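You can check which allocation size your OSDs are using. The command below is a sketch: run it on the host where the OSD's admin socket lives, and osd.0 is just an example ID. The value is reported in bytes, so 65536 means 64K.

sudo ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

Note that the allocation unit is baked in when an OSD is created, so changing the option only takes effect for OSDs that are redeployed afterwards.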