
Cinder – EMC VNX integration – Openstack Newton

I decided to share this procedure because it took me nearly two days to get these things working together. Hopefully it will be helpful for someone else; if not, at least it will be a good reference for me in the future.

A long time ago, when I started my adventure with Openstack Juno, one of the first labs I set up was built from old Sun Netra servers and an EMC VNX. OK, I was a newbie, and it took me some time, maybe a week or two, but I was learning Openstack. Surprisingly, it was not such an easy task even now, when I know a bit more 😉

I used Red Hat Openstack Platform 10 (Newton) and an EMC VNX 5600.

Since the Newton release an additional Python library, storops, is needed to interact with the VNX; the VNX driver shipped with RH OSP 10 doesn’t work without it. We’ll also need Navisphere CLI, which you can get from the EMC download pages.

Driver installation

EMC suggests using the osp_deploy.sh script (you can even patch your overcloud images with it), but you can also simply install storops with pip.

On all compute and block storage nodes install:

rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm


yum install python-pip


pip install storops
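
A quick way to check the library is in place (optional, just a sanity check):

pip show storops
python -c "import storops"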

Navisphere CLI (download it from the EMC download pages first):

yum install NaviCLI-Linux-64-x86-en_US-
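
Once installed, you can optionally verify connectivity to the array with something along these lines (the SP IP and credentials below are placeholders; scope 0 corresponds to a global account):

/opt/Navisphere/bin/naviseccli -h <SP_IP> -User <username> -Password <password> -Scope 0 getagent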

Network Configuration

This part is straightforward, so I won’t write much about it. The network layer has to be configured between all initiators and targets. Before continuing, make sure you can ping all targets from the compute and block nodes, sourcing from the IP on your storage network.
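
For example, something like this from each node (the storage-side source IP and target IP are placeholders):

ping -c 3 -I <storage_network_ip> <target_ip>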

iSCSI initiators registration

The driver supports auto registration; to enable it, use the following option in the cinder.conf backend configuration:

initiator_auto_registration = True

However, I don’t recommend this if the number of paths and targets discovered is higher than expected. I want to use only 4 targets, but my discovery shows many others, some of them reachable via the MGMT network (unwanted). To avoid any unwanted behavior, we’ll register everything manually.

Go to your first storage node (these steps should be repeated on all storage and compute nodes)

Send a target discovery (the -p argument is the IP of one of your iSCSI targets; you should be able to ping it first)

sudo iscsiadm -m discovery -t st -p

Log in only to the targets that you want to use, and only through the paths you want to use.

The list of targets can be much longer than you expect, so use only the IPs you want to connect to

sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00154400512.a4 -p -l
sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00154400512.a5 -p -l
sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00154400512.b4 -p -l
sudo iscsiadm -m node -T iqn.1992-04.com.emc:cx.ckm00154400512.b5 -p -l
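
You can confirm the sessions were established with:

sudo iscsiadm -m session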

Now go to your EMC, navigate to Hosts -> Initiators and wait until your initiators appear on the list


Right-click one of the initiators and register it as follows. The most important thing here is the Host Name: it has to be exactly the same as the output of the hostname command on the host being registered


Right-click the remaining entries listed as the same initiator and register them as the Existing Host configured in the previous step


After that the list should look similar to the one below


Do not create any LUNs or storage groups; the driver will take care of that.

Log out of all targets from the node

sudo iscsiadm -m node -u

Repeat the above steps on all compute and storage (controller) nodes

Configure multipath

On all compute and storage nodes edit /etc/multipath.conf (create it if it doesn’t exist)

blacklist {
    # Skip the files under /dev that are definitely not FC/iSCSI devices
    # Different systems may need different customization
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"

    # Skip the LUNZ device from VNX
    device {
        vendor "DGC"
        product "LUNZ"
    }
}

defaults {
    user_friendly_names no
    flush_on_last_del yes
}

devices {
    # Device attributes for EMC CLARiiON and VNX series ALUA
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_selector "round-robin 0"
        path_checker emc_clariion
        features "1 queue_if_no_path"
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}

start multipathd

systemctl start multipathd
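
Optionally enable it at boot as well; later, once a volume gets attached, you can verify the paths with multipath -ll:

systemctl enable multipathd
sudo multipath -ll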

On all compute nodes add the following line to /etc/nova/nova.conf under the [libvirt] section.
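
For Newton the relevant option is iscsi_use_multipath (it was renamed to volume_use_multipath in later releases):

[libvirt]
iscsi_use_multipath = True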


restart nova-compute

systemctl restart openstack-nova-compute.service

Configure storage backend

On the storage node (in my case the controller node) edit the /etc/cinder/cinder.conf file

add a new backend configuration

[emc-vnx1]
storage_vnx_pool_names = example_pool
san_ip =
san_secondary_ip =
san_login = username
san_password = password
storage_vnx_authentication_type = global
naviseccli_path = /opt/Navisphere/bin/naviseccli
volume_driver = cinder.volume.drivers.emc.vnx.driver.EMCVNXDriver
volume_backend_name = emc-vnx1
storage_protocol = iscsi
iscsi_initiators = {"overcloud-compute-0":[""],"overcloud-compute-1":[""],"overcloud-compute-2":[""],"overcloud-compute-3":[""],"overcloud-controller-0":[""]}

add the new backend to the list of enabled backends

enabled_backends = tripleo_iscsi,nfs,emc-vnx1

Restart volume service

systemctl restart openstack-cinder-volume.service
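
Optionally, check that the new backend’s cinder-volume service reports as up and map a volume type to it so volumes can be scheduled there (the type name below is just an example):

cinder service-list
cinder type-create emc-vnx1
cinder type-key emc-vnx1 set volume_backend_name=emc-vnx1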

Nova and block allocation timeouts

With this storage architecture, every volume create operation requires Cinder to create a LUN in the dedicated storage pool. This process takes more time, so when spawning many instances with bigger volumes at once it is highly probable that the operation will fail due to a timeout. To avoid this we need to increase some timeouts in nova.conf on all compute nodes.

Increase the default values in /etc/nova/nova.conf on all compute nodes.
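
The options that typically matter here are the block device allocation retries in the [DEFAULT] section; the values below are only an example:

[DEFAULT]
block_device_allocate_retries = 300
block_device_allocate_retries_interval = 3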


Restart nova compute

systemctl restart openstack-nova-compute.service

From now on, all volumes on this backend will be created as LUNs in example_pool on our VNX.
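
As a quick end-to-end test, you can create a small volume against the new backend (the name and size are only examples):

cinder create --volume-type emc-vnx1 --name vnx-test 1
cinder show vnx-test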