How to add a device to an existing storagePoolClaim in OpenEBS?
How do I add a device to an existing storagePoolClaim? I thought I could just edit the SPC and add the disk to it, but I do not see the disk being reformatted as it should be.
GitHub issue 2258 in the openEBS repo is tracking this. At present it can be done by patching a few resources. The content below is from the GitHub workaround.
For expanding a cStor pool (type=striped) with additional disks.
A brief explanation of the cStor pool components:
- Storage Pool CR (SP) - specifies the Disk CRs used by the pool.
- cStor Storage Pool CR (CSP) - specifies the unique disk paths used by the pool.
- cStor Storage Pool Deployment and its associated Pod.
When the SPC spec is created with a set of disks, the cstor-operator segregates the disks by node, and on each node a cStor Pool is created using the disks from that node. After the pool is provisioned, it can be expanded only with disks already discovered on the same node.
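For orientation, these resources can be listed with kubectl. This is a minimal sketch; it assumes the OpenEBS CRDs register the short names spc, sp, csp, and disk:

    # List the pool claims, storage pools, cstor pools, and discovered disks
    kubectl get spc
    kubectl get sp
    kubectl get csp
    kubectl get disk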
The following steps are for expanding a single cStor Storage Pool and will need to be repeated on each of the cStor Pools corresponding to an SPC.
Step 1: Identify the cStor Pool (CSP) and Storage Pool (SP) associated with the SPC.
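A sketch of the listing commands; the label openebs.io/storage-pool-claim and the SPC name cstor-disk are assumptions, so use the label and name of your own SPC:

    # SPs and CSPs created for the SPC; --show-labels helps spot the node label on each pool
    kubectl get sp -l openebs.io/storage-pool-claim=cstor-disk --show-labels
    kubectl get csp -l openebs.io/storage-pool-claim=cstor-disk --show-labels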
From the resulting list, pick the cStor Pool that needs to be expanded; the CSP and the SP will have the same name. The rest of the steps assume that cstor-disk-vt1u needs to be expanded. From the same output, also note the node on which the pool is running. In this case the node is gke-kmova-helm-default-pool-2c01cdf6-dxbf.
Step 2: Identify the new disk that needs to be attached to the cStor Pool. The disks discovered on a given node can be listed with the command sketched below.
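This is a sketch that assumes the NDM Disk CRs carry a kubernetes.io/hostname label; substitute your own node name:

    # Disks discovered by NDM on the target node
    kubectl get disk -l "kubernetes.io/hostname=gke-kmova-helm-default-pool-2c01cdf6-dxbf"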
To see the disks already used on the node gke-kmova-helm-default-pool-2c01cdf6-dxbf, check the diskList of the Storage Pool running there, as sketched below.
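A sketch; the field path spec.disks.diskList is an assumption based on the SP schema, so verify it against the full YAML if this returns nothing:

    # Disk CR names already consumed by the pool on this node
    kubectl get sp cstor-disk-vt1u -o jsonpath='{.spec.disks.diskList}'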
In this case, disk-ffca7a8731976830057238c5dc25e94c is unused.
Step 3: Patch the CSP with the disk path details. Get the disk path listed under the unique by-id path in devLinks, as sketched below.
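A sketch for reading the by-id path from the Disk CR; the exact devlinks layout depends on your NDM version:

    # Inspect the Disk CR and note the /dev/disk/by-id/... entry under devlinks
    kubectl get disk disk-ffca7a8731976830057238c5dc25e94c -o yaml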
Patch the above disk path into the CSP, as sketched below.
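A sketch of the patch; the JSON-pointer path /spec/disks/diskList/- is an assumption about the CSP layout, so confirm it against kubectl get csp cstor-disk-vt1u -o yaml before patching:

    # Append the new disk path to the CSP's diskList
    kubectl patch csp cstor-disk-vt1u --type json \
      -p '[{"op": "add", "path": "/spec/disks/diskList/-", "value": "/dev/disk/by-id/scsi-0Google_PersistentDisk_kmova-n2-d1"}]'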
Verify that the disk is patched by executing kubectl get csp cstor-disk-vt1u -o yaml and checking that the new disk path has been added under diskList.
Step 4: Patch the SP with the disk name. The command sketched below patches the SP (cstor-disk-vt1u) with the disk (disk-ffca7a8731976830057238c5dc25e94c).
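A sketch, with the same caveat about the diskList path as in Step 3:

    # Append the new Disk CR name to the SP's diskList
    kubectl patch sp cstor-disk-vt1u --type json \
      -p '[{"op": "add", "path": "/spec/disks/diskList/-", "value": "disk-ffca7a8731976830057238c5dc25e94c"}]'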
Verify that the disk is patched by executing kubectl get sp cstor-disk-vt1u -o yaml and checking that the new disk has been added under diskList.
Step 5: Expand the pool. The last step is to update the cStor pool pod (cstor-disk-vt1u) with the disk path (/dev/disk/by-id/scsi-0Google_PersistentDisk_kmova-n2-d1).
Identify the cStor pool pod associated with the CSP cstor-disk-vt1u, as sketched below.
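A sketch, assuming OpenEBS runs in the openebs namespace and that the pool deployment is named after the CSP:

    # Find the cStor pool pod for CSP cstor-disk-vt1u
    kubectl get pods -n openebs | grep cstor-disk-vt1u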
Check the pool name inside the pool pod, as sketched below.
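A sketch; the container name cstor-pool and the openebs namespace are assumptions, and the pod name comes from the previous step:

    # List the zpool inside the pool container; the pool is named cstor-<CSP uid>
    POOL_POD=<pool-pod-name-from-previous-step>
    kubectl exec -it -n openebs "$POOL_POD" -c cstor-pool -- zpool list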
Extract the pool name from the above output; in this case it is cstor-deaf87e6-ec78-11e8-893b-42010a80003a.
Expand the pool with the additional disk, as sketched below.
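A sketch of the expansion itself, reusing the pod name from the previous step; zpool add appends the new device to the striped pool:

    # Append the new device to the striped pool
    kubectl exec -it -n openebs "$POOL_POD" -c cstor-pool -- \
      zpool add cstor-deaf87e6-ec78-11e8-893b-42010a80003a /dev/disk/by-id/scsi-0Google_PersistentDisk_kmova-n2-d1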
You can execute the zpool list command from the previous step again to see the increase in capacity.