How to attach a Cloud Block Storage volume to an OnMetal server with pyrax?


I would like to automate the attachment of a Cloud Block Storage volume to an OnMetal server running CentOS 7 by writing a Python script that makes use of the pyrax Python module. Do you know how to do this?

Answer by Erik Sjölund (accepted):

Attaching a Cloud Block Storage volume to an OnMetal server is a bit more complicated than attaching it to a normal Rackspace virtual server. You will notice this when you try to attach a Cloud Block Storage volume to an OnMetal server in the Rackspace Cloud Control Panel, where the web interface shows this text:

Note: When attaching volumes to OnMetal servers, you must log into the OnMetal server to set the initiator name, discover the targets and then connect to the target.

So you can attach the volume in the web interface, but you additionally need to log in to the OnMetal server and run a few commands. The actual commands can be copied and pasted from the web interface into a terminal on the OnMetal server.

You also need to run a command before detaching the volume.

The web interface is not actually needed, though; the whole procedure can be done with the Python module pyrax.

First, install the RPM package iscsi-initiator-utils on the OnMetal server:

[root@server-01 ~]# yum -y install iscsi-initiator-utils
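
If you want to automate this step as well, the same package can be installed over SSH with paramiko. A minimal sketch (the IP address is a placeholder for the address of your OnMetal server):

import paramiko

# Minimal sketch: install iscsi-initiator-utils over SSH instead of
# logging in manually. Replace the placeholder IP address with the
# address of your OnMetal server.
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect("203.0.113.10", username="root", allow_agent=True)
stdin, stdout, stderr = ssh_client.exec_command(
    "yum -y install iscsi-initiator-utils")
print(stdout.read())
ssh_client.close()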

Assuming the volume_id and the server_id are known, the following Python code first attaches the volume and then detaches it. Unfortunately, the mountpoint argument of attach_to_instance() is not honored for OnMetal servers, so we need to run the command lsblk -n -d before and after attaching the volume. By comparing the two outputs we can then deduce the device name used for the attached volume. (Deducing the device name is not taken care of by the Python code below; see the sketch after the example output.)

#!/usr/bin/python
# Disclaimer: use the script at your own risk!
import json
import os
import paramiko
import pyrax

# Replace server_id and volume_id
# with your settings
server_id = "cbdcb7e3-5231-40ad-bba6-45aaeabf0a8d"
volume_id = "35abb4ba-caee-4cae-ada3-a16f6fa2ab50"
# Just to demonstrate that the mountpoint argument for
# attach_to_instance() is not working for OnMetal servers
disk_device = "/dev/xvdd"

# Run each remote command over SSH, print its stdout and raise an
# error if a command exits with a non-zero exit status
def run_ssh_commands(ssh_client, remote_commands):
    for remote_command in remote_commands:
        stdin, stdout, stderr = ssh_client.exec_command(remote_command)
        print("")
        print("command: " + remote_command)
        for line in stdout.read().splitlines():
            print(" stdout: " + line)
        exit_status = stdout.channel.recv_exit_status()
        if exit_status != 0:
            raise RuntimeError("The command :\n{}\n"
                               "exited with exit status: {}\n"
                               "stderr: {}".format(remote_command,
                                                   exit_status,
                                                   stderr.read()))

pyrax.set_setting("identity_type", "rackspace")
pyrax.set_default_region('IAD')
creds_file = os.path.expanduser("~/.rackspace_cloud_credentials")
pyrax.set_credential_file(creds_file)
server = pyrax.cloudservers.servers.get(server_id)
vol = pyrax.cloud_blockstorage.find(id=volume_id)
vol.attach_to_instance(server, mountpoint=disk_device)
pyrax.utils.wait_until(vol, "status", "in-use", interval=3, attempts=0,
                       verbose=True)

# Connect to the OnMetal server as root over SSH
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server.accessIPv4, username='root', allow_agent=True)

# The new metadata is only available if we get() the server once more                                                                                                                                   
server = pyrax.cloudservers.servers.get(server_id)

metadata = server.metadata["volumes_" + volume_id]
parsed_json = json.loads(metadata)
target_iqn = parsed_json["target_iqn"]
target_portal = parsed_json["target_portal"]
initiator_name = parsed_json["initiator_name"]

# Set the initiator name, discover the targets and log in to the target,
# running lsblk -n -d before and after to show the new device appearing
run_ssh_commands(ssh_client, [
    "lsblk -n -d",
    "echo InitiatorName={} > /etc/iscsi/initiatorname.iscsi".format(initiator_name),
    "iscsiadm -m discovery --type sendtargets --portal {}".format(target_portal),
    "iscsiadm -m node --targetname={} --portal {} --login".format(target_iqn, target_portal),
    "lsblk -n -d",
    "iscsiadm -m node --targetname={} --portal {} --logout".format(target_iqn, target_portal),
    "lsblk -n -d"
])

vol.detach()
pyrax.utils.wait_until(vol, "status", "available", interval=3, attempts=0,
                                    verbose=True)

Running the Python code looks like this:

user@ubuntu:~$ python attach.py 2> /dev/null
Current value of status: attaching (elapsed:  1.0 seconds)
Current value of status: in-use (elapsed:  4.9 seconds)

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk

command: echo InitiatorName=iqn.2008-10.org.openstack:a24b6f80-cf02-48fc-9a25-ccc3ed3fb918 > /etc/iscsi/initiatorname.iscsi

command: iscsiadm -m discovery --type sendtargets --portal 10.190.142.116:3260
 stdout: 10.190.142.116:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50
 stdout: 10.69.193.1:3260,1 iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --login
 stdout: Logging in to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] (multiple)
 stdout: Login to [iface: default, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
 stdout: sdb    8:16   0   50G  0 disk

command: iscsiadm -m node --targetname=iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50 --portal 10.190.142.116:3260 --logout
 stdout: Logging out of session [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260]
 stdout: Logout of [sid: 5, target: iqn.2010-11.com.rackspace:35abb4bb-caee-4c5e-ad53-a16f6f12ab50, portal: 10.190.142.116,3260] successful.

command: lsblk -n -d
 stdout: sda    8:0    0 29.8G  0 disk
Current value of status: detaching (elapsed:  0.8 seconds)
Current value of status: available (elapsed:  4.7 seconds)
user@ubuntu:~$
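
Since the mountpoint argument is ignored, the device name has to be deduced by comparing the lsblk -n -d output from before and after the iSCSI login. A minimal sketch of how that comparison could look (the helper function is my own and not part of the script above):

def deduce_new_device(lsblk_before, lsblk_after):
    # Each line of "lsblk -n -d" output starts with the device name,
    # for example "sda    8:0    0 29.8G  0 disk"
    before = set(line.split()[0] for line in lsblk_before.splitlines() if line.strip())
    after = set(line.split()[0] for line in lsblk_after.splitlines() if line.strip())
    new_devices = after - before
    if len(new_devices) != 1:
        raise RuntimeError("Expected exactly one new device, "
                           "found: {}".format(new_devices))
    return "/dev/" + new_devices.pop()

# With the example output above:
#   deduce_new_device("sda    8:0    0 29.8G  0 disk",
#                     "sda    8:0    0 29.8G  0 disk\nsdb    8:16   0   50G  0 disk")
# returns "/dev/sdb"

To use it, run_ssh_commands() would have to return the captured stdout of each command instead of only printing it.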

Just one additional note:

Although it is not mentioned in the official Rackspace documentation

https://support.rackspace.com/how-to/attach-a-cloud-block-storage-volume-to-an-onmetal-server/

Rackspace Managed Infrastructure Support, in a forum post from 5 Aug 2015, also recommends running

iscsiadm -m node -T $TARGET_IQN -p $TARGET_PORTAL --op update -n node.startup -v automatic

to make the connection persistent, so that the iSCSI session is automatically restored on startup.
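
In the script above this would correspond to one more entry in the list passed to run_ssh_commands(), placed after the --login command and using the long option names the script already uses. A sketch of that addition (my own, not part of the original script):

# Make the iSCSI session persistent so that it is restored automatically
# on startup; target_iqn and target_portal are the values read from the
# server metadata in the script above.
run_ssh_commands(ssh_client, [
    ("iscsiadm -m node --targetname={} --portal {} "
     "--op update -n node.startup -v automatic").format(target_iqn, target_portal)
])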

Update

Regarding deducing the new device name: Major Hayden writes in a blog post that

[root@server-01 ~]# ls /dev/disk/by-path/

could be used to find a path to the new device. If you would like to dereference the symlinks, I guess this would work:

[root@server-01 ~]# find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 -exec realpath {} \;
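
If you want the script to do this lookup as well, the same command could be run over the existing paramiko connection; a sketch, reusing the connected ssh_client from the script above:

# Sketch: list the /dev/disk/by-path entries on the OnMetal server and
# resolve each symlink to the underlying device node.
stdin, stdout, stderr = ssh_client.exec_command(
    "find -L /dev/disk/by-path -maxdepth 1 -mindepth 1 -exec realpath {} \\;")
for device in stdout.read().splitlines():
    print("device: " + device)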