After having finished the basic installation of the cluster and Ceph, it's time to set up the cluster storage.
First off I will recap what has been done so far and what the disks look like, then explain how I plan to use them:
We have installed the Ceph software and set up monitors on all of our nodes:
pveceph install -version hammer
pveceph init --network 10.10.67.0/24 # only once
pveceph createmon
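Before going any further it is worth checking that all three monitors are up and have formed a quorum. These are plain Ceph status commands, nothing specific to this setup:
ceph -s # overall cluster status, including monitor quorum and health
ceph mon stat # compact list of monitors and the current quorum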
The actual disk layout is:
node01, node02:
– 1x 370GB HW-RAID0 SSD (osd.0, osd.3)
– 2x 1TB SATA (osd.1, osd.2, osd.4, osd.5)
node03:
– 2x 2TB HW-RAID0 SATA (osd.6, osd.7)
– 1x 1TB HW-RAID1 (SW-RAID)
– 1x 870GB HW-RAID1
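To see how this looks on an individual node and which partitions Ceph is already using, the disks can be listed (plain lsblk works just as well):
ceph-disk list # shows all disks/partitions and marks existing ceph data and journal partitions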
Within Ceph we want 3 copies of the data, one on each node. We will be using the SSDs as a writeback cache pool. Ideally only the cache pool “local” to the VM should be used, because the main bottleneck is network bandwidth (only 1GBit). Additionally the SSDs will hold our (external) journals.
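For reference, once both the backing pool and an SSD pool exist, the writeback tiering itself is wired up with the standard Ceph cache-tier commands. This is only a minimal sketch: the cache pool name ssd-cache is an assumption, and a real cache tier additionally needs hit-set and target-size tuning:
ceph osd tier add rbd ssd-cache # attach the (assumed) SSD pool as a cache tier of the rbd pool
ceph osd tier cache-mode ssd-cache writeback # use it as a writeback cache
ceph osd tier set-overlay rbd ssd-cache # send client traffic through the cache tier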
First, create 2 OSDs on the SATA drives with external journals on the SSD; this can easily be accomplished from the web GUI.
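The same can be done on the command line; a sketch, assuming the SATA drives show up as /dev/sdc and /dev/sdd and the SSD as /dev/sdb:
pveceph createosd /dev/sdc -journal_dev /dev/sdb # OSD on the first SATA disk, journal on the SSD
pveceph createosd /dev/sdd -journal_dev /dev/sdb # OSD on the second SATA disk, journal on the SSD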
Then we need to create the OSD on the remaining space of the SSD, which is not as easy and needs to be done from the command line:
DEVICE=/dev/sdb # the SSD HW-RAID0
PARTITION=3 # the next unused partition
OSD_UUID=$(uuidgen -r) # a unique UUID for the OSD
PTYPE_UUID=4fbd7e29-9d25-41b8-afd0-062c0ceff05d # the default PTYPE UUID ceph uses (from the source)
FSID=345abc67-de89-f012-345a-bc67de89f012 # taken from /etc/ceph/ceph.conf
sgdisk --largest-new=$PARTITION --change-name="$PARTITION:ceph data" --partition-guid=$PARTITION:$OSD_UUID --typecode=$PARTITION:$PTYPE_UUID -- $DEVICE
partprobe # to read new partition table
gdisk -l $DEVICE # verify the rest of the space on the device got allocated to a ceph data partition
ceph-disk prepare --cluster ceph --cluster-uuid $FSID $DEVICE$PARTITION
ceph-disk activate $DEVICE$PARTITION
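At this point all OSDs should be up and in, and the pool can be created either from the GUI or on the command line. A minimal sketch; the pg_num value of 128 is an assumption for a cluster of this size:
ceph osd tree # all OSDs should show up as "up" and "in" under their hosts
pveceph createpool rbd -size 3 -min_size 2 -pg_num 128 # 3 replicas, one per node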
After creating a pool named rbd, the keyring needs to be copied so that Proxmox can use the pool as a storage:
cd /etc/pve/priv/
mkdir ceph
cp /etc/ceph/ceph.client.admin.keyring ceph/rbd.keyring
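The keyring file name has to match the ID of the RBD storage defined in Proxmox (here: rbd). That storage can be added via the GUI or in /etc/pve/storage.cfg; a sketch, where the monitor addresses on the 10.10.67.0/24 Ceph network are assumptions:
rbd: rbd
    monhost 10.10.67.1;10.10.67.2;10.10.67.3
    pool rbd
    content images
    username admin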