VMware SRP Guide
This wiki is meant to help people get an InfiniBand SRP target working under RedHat/CentOS 7.3, serving VMware ESXi 6.0.x SRP initiators via InfiniBand (although the process should work under any RHEL / CentOS 7.x build).
This guide should cover all of the steps, from compiling/installing/configuring the Mellanox OFED drivers, to compiling/installing SCST, to adding ZFS on Linux (ZoL), and finally configuring SCST with all of the above, as well as the few ESX6 steps required to remove all of the 'inband' drivers, install the OFED drivers, and be an active SRP initiator.
Not all of these steps are required for everyone, but I'm sure *someone* will appreciate having them all together in one place :)
For the purposes of this guide, the syntax assumes you are always logged in as 'root'
VMware ESX 6.0.x SRP Initiator Setup
To be continued...
RedHat/CentOS 7.3 SRP Target Setup
These instructions are meant to be used with SCST 3.2.x (latest stable branch) as well as Mellanox OFED Drivers 3.4.2 (latest at the time of writing).
They should be viable for Mellanox ConnectX-2/3/4 Adapters, with or without an Infiniband Switch.
NOTE: All InfiniBand connectivity requires 'a' subnet manager functioning 'somewhere' in the 'fabric'. I will cover the very basics of this shortly, but the gist of it is: you want one subnet manager configured and running. On this subnet manager you need to configure at least one 'partition'. This acts like an ethernet VLAN, except that InfiniBand won't play nice without one. For the purposes of this guide you won't need more than one. But... if you are on top of managing your subnet manager and partitions already, consider the pros/cons of creating one specifically for SRP traffic, segmenting it from IPoIB, and binding all of your SRP-only interfaces to that partition.
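A quick way to confirm that a subnet manager is actually answering on the fabric is the sminfo tool from the infiniband-diags package. This is just a sketch; it assumes the InfiniBand diagnostic tools are installed and your HCA link is physically up:

```shell
# Ask the fabric's subnet manager to identify itself (sketch).
# sminfo ships with infiniband-diags; without a reachable SM it errors out.
if command -v sminfo >/dev/null 2>&1; then
    sminfo || echo "No subnet manager answered; check your SM host/switch"
else
    echo "sminfo not found; install infiniband-diags first"
fi
```

If sminfo reports an SM in the MASTER state, your fabric has what it needs.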
The basic order you want to do things in is: install either base OS, and update it to current. A Minimal OS installation is recommended. It is highly recommended that during OS installation you do NOT add any of the InfiniBand or iSCSI packages that come with the OS; I can't guarantee they won't get in the way somewhere down the line. Some development-type packages may show up as missing/required when making/installing later on; add them manually and retry the step.
The Mellanox OFED Drivers
Mellanox OFED Driver Download Page http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
Mellanox OFED Linux User Manual v3.40 http://www.mellanox.com/related-docs/prod_software/Mellanox_OFED_Linux_User_Manual_v3.40.pdf
My Driver Direct Download Link for RHEL/CentOS 7.3 x64 (Latest w/ OFED) http://www.mellanox.com/page/mlnx_ofed_eula?mtag=linux_sw_drivers&mrequest=downloads&mtype=ofed&mver=MLNX_OFED-3.4-2.0.0.0&mname=MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-x86_64.tgz
Step 1: Install the prerequisite packages required by the Mellanox OFED Driver package
# [root@NAS01 ~]$ yum install tcl tk -y
Step 2: Download the Mellanox OFED drivers (.tgz version for this guide), and put them in /tmp
# [root@NAS01 ~]$ cd /tmp
[root@NAS01 ~]$ tar xvf MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-x86_64.tgz
[root@NAS01 ~]$ cd MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-x86_64
NOTE: If you just run the ./mlnxofedinstall script against the latest & greatest RedHat/CentOS kernel, you will fail 'later down the road' in the SCST installation, specifically in the ib_srpt module, which is required for this exercise.
Step 3: Initially run the mlnxofedinstall script with the --add-kernel-support flag (REQUIRED)
# [root@NAS01 ~]$ ./mlnxofedinstall --add-kernel-support --without-fw-update
NOTE: This will actually take the installation package and use it to rebuild an entirely new installation package, customized for your specific Linux kernel. Note the name and location of the new .tgz package it creates. The --without-fw-update flag isn't required, but is useful if you 'don't even want to go there' on seeing if the driver package wants to auto-flash your HCA firmware. (Use your own best judgement)
Step 4: Extract the new package that was just created, customized for your Linux kernel.
# [root@NAS01 ~]$ cd /tmp/MLNX_OFED_LINUX-3.4-2.0.0.0-3.10.0-514.el7.x86_64/
[root@NAS01 ~]$ tar xvf MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-ext.tgz
[root@NAS01 ~]$ cd MLNX_OFED_LINUX-3.4-2.0.0.0-rhel7.3-ext
NOTE: In my example I'm using the RedHat 7.3 OFED driver, so my file names may differ from yours. Look for the -ext suffix before the .tgz extension.
Step 5: Now we can run the Mellanox OFED installation script
# [root@NAS01 ~]$ ./mlnxofedinstall
Step 6: Validate the new Mellanox Drivers can Stop/Start
# [root@NAS01 ~]$ /etc/init.d/openibd restart
Unloading HCA driver:                    [  OK  ]
Loading HCA driver and Access Layer:     [  OK  ]
Look good? Move on! Nothing to see here....
NOTE: If you get an error here about iSCSI or SRP being 'used', and the service doesn't automagically stop and start, then you likely have a conflict with an 'inband' driver. You should try and resolve that conflict before you try and move forward.
I don't have a definitive how-to guide for every possible scenario, but a good rule of thumb is to run the uninstaller that came with the Mellanox OFED package to try and 'clean' up everything.
/usr/sbin/ofed_uninstall.sh
Additionally, search for 'how to remove inband srp/infiniband/scst/iscsi' for whatever looks like it's conflicting for you. Then repeat the OFED & SCST (if you had one previously) installations per this guide and validate again.
Step 7: Validate the new Mellanox Drivers using the supplied Self Test script
# [root@NAS01 ~]$ hca_self_test.ofed
Valid output, for me, looks like:
# ---- Performing Adapter Device Self Test ----
Number of CAs Detected ................. 1
PCI Device Check ....................... PASS
Kernel Arch ............................ x86_64
Host Driver Version .................... MLNX_OFED_LINUX-3.4-2.0.0.0 (OFED-3.4-2.0.0): modules
Host Driver RPM Check .................. PASS
Firmware on CA #0 HCA .................. v2.10.0720
Host Driver Initialization ............. PASS
Number of CA Ports Active .............. 0
Error Counter Check on CA #0 (HCA)...... PASS
Kernel Syslog Check .................... PASS
Node GUID on CA #0 (HCA) ............... NA
------------------ DONE ---------------------
I'm currently using the Mellanox ConnectX-2 Dual Port 40Gb QDR Adapter. My firmware is v2.10.0720. I believe the minimum version you need to be at for this guide to be successful is v2.9.1200, or possibly v2.9.1000. If you allow the OFED driver package to push firmware, in my hardware scenario it only includes v2.9.1000. For ConnectX-3/4 based adapters, please google-fu your way to the recommended firmware for SRP support.
Validate that your InfiniBand device is being picked up by the driver, and what its device name is.
# [root@nas01]# ibv_devices
Output should look like:
# device                 node GUID
  ------              ----------------
  mlx4_0          0002c903002805a4
And my device name is mlx4_0, and the GUID (like a MAC Address or WWN) of the adapter itself is 0002c903002805a4. This acts like a 'Node Name' of the HCA itself, not of the interfaces/ports it has. The port GUIDs will always be similar: usually Port 1 = node GUID +1, Port 2 = node GUID +2.
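That +1/+2 relationship is easy to check with a little shell arithmetic. A small sketch using my node GUID from above (substitute your own):

```shell
# Derive the likely port GUIDs from the HCA node GUID reported by
# ibv_devices: port 1 is usually node GUID +1, port 2 is +2.
node_guid=0002c903002805a4
for port in 1 2; do
    printf "port %d GUID: %016x\n" "$port" $(( 16#$node_guid + port ))
done
# port 1 GUID: 0002c903002805a5
# port 2 GUID: 0002c903002805a6
```

Those values will show up again later as the per-port SRP target addresses.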
Validate the Device Info for your HCA
# [root@nas01]# ibv_devinfo
Output should look like:
# hca_id: mlx4_0
    transport:                  InfiniBand (0)
    fw_ver:                     2.10.720
    node_guid:                  0002:c903:0028:05a4
    sys_image_guid:             0002:c903:0028:05a7
    vendor_id:                  0x02c9
    vendor_part_id:             26428
    hw_ver:                     0xB0
    board_id:                   MT_0D80110009
    phys_port_cnt:              2
    Device ports:
        port:   1
            state:              PORT_ACTIVE (4)
            max_mtu:            4096 (5)
            active_mtu:         4096 (5)
            sm_lid:             1
            port_lid:           2
            port_lmc:           0x00
            link_layer:         InfiniBand
        port:   2
            state:              PORT_ACTIVE (4)
            max_mtu:            4096 (5)
            active_mtu:         4096 (5)
            sm_lid:             1
            port_lid:           3
            port_lmc:           0x00
            link_layer:         InfiniBand
DON'T WORRY if your ports aren't ACTIVE yet. It's likely due to a bad or missing InfiniBand partition configuration.
Step 8a: Configuring InfiniBand Partitions
Edit the Infiniband Partition Configuration file, which is the single location for defining how you participate in Infiniband fabrics.
# [root@NAS01 ~]$ vi /etc/rdma/partitions.conf
Here's a sample 'universal' configuration that should work for 'everyone'
# For reference:
# IPv4 IANA reserved multicast addresses:
#   http://www.iana.org/assignments/multicast-addresses/multicast-addresses.txt
# IPv6 IANA reserved multicast addresses:
#   http://www.iana.org/assignments/ipv6-multicast-addresses/ipv6-multicast-addresses.xml
#
# mtu =
#   1 = 256
#   2 = 512
#   3 = 1024
#   4 = 2048
#   5 = 4096
#
# rate =
#   2 = 2.5 GBit/s
#   3 = 10 GBit/s
#   4 = 30 GBit/s
#   5 = 5 GBit/s
#   6 = 20 GBit/s
#   7 = 40 GBit/s
#   8 = 60 GBit/s
#   9 = 80 GBit/s
#   10 = 120 GBit/s

Default=0x7fff, ipoib, mtu=5, ALL=full;
This creates a single InfiniBand partition (0x7fff); (0xffff) should also be valid. I'm also setting my MTU to 4096 bytes, which is mtu=5, and not hard-coding my link speed. ALL=full; specifies which interfaces are allowed to join this 'vlan'.
If you are doing direct-connect InfiniBand HCA-to-HCA with a software subnet manager, then you might not want to specify the MTU in partitions.conf, and instead try to set it to 65520 on the CX3/CX4 adapters at the software/driver layer. (I still need to get confirmation from someone who has tested this.)
If you have a Mellanox InfiniBand switch, you need to match the partitions.conf file settings on the switch. Use something like WinSCP to connect to your switch, and *BACKUP* your existing configuration file, located in /usr/voltaire/config/partitions.config, before making any changes.
You can copy an identical partitions.conf file from your host to your switch, or from your switch to your host, depending on where you define it first. If they match, you should be set.
If you have a software subnet manager, you need to match the partition.conf file settings on the daemon config. (Check OpenSM documentation)
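Whichever subnet manager you use, you can confirm from the host side that a port actually joined your partition by reading its pkey table out of sysfs. A sketch, assuming the device name mlx4_0 and port 1 from my ibv_devices output (adjust for yours):

```shell
# List the partition keys (pkeys) this HCA port has joined. A full
# member of partition 0x7fff shows up as 0xffff (high bit = full membership).
pkey_dir=/sys/class/infiniband/mlx4_0/ports/1/pkeys
if [ -d "$pkey_dir" ]; then
    grep -H . "$pkey_dir"/* | grep -v 0x0000
else
    echo "no pkey table in sysfs (driver not loaded, or different device name?)"
fi
```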
OK... Let's assume you have properly configured & matching partition.conf files by now...
Step 8b: (Optional) Validating InfiniBand Partitions & Subnet Manager connectivity
# [root@NAS01 ~]$ osmtest -f c
Output should be similar to:
# Command Line Arguments
Done with args
Flow = Create Inventory
Jan 23 22:15:20 337670 [5260E740] 0x7f -> Setting log level to: 0x03
Jan 23 22:15:20 338142 [5260E740] 0x02 -> osm_vendor_init: 1000 pending umads specified
Jan 23 22:15:20 372978 [5260E740] 0x02 -> osm_vendor_bind: Mgmt class 0x03 binding to port GUID 0x2c903002805a5
Jan 23 22:15:20 419733 [5260E740] 0x02 -> osmtest_validate_sa_class_port_info:
-----------------------------
SA Class Port Info:
base_ver:1
class_ver:2
cap_mask:0x2602
cap_mask2:0x0
resp_time_val:0x10
-----------------------------
OSMTEST: TEST "Create Inventory" PASS
Looking for that PASS
# [root@NAS01 ~]$ osmtest -f a
Ignore any errors like:

# 0x01 -> __osmv_sa_mad_rcv_cb: ERR 5501: Remote error:0x0300

All we care about is:
# OSMTEST: TEST "All Validations" PASS
Finally
# [root@NAS01 ~]$ osmtest -v
Ignore any errors... All we care about is:
# OSMTEST: TEST "All Validations" PASS
Additionally, you can check the relationship between Infiniband Devices and Network Devices by:
# [root@NAS01 ~]$ ibdev2netdev
Which in my case looks like:
# mlx4_0 port 1 ==> ib0 (Up)
mlx4_0 port 2 ==> ib1 (Up)
Congrats, you have working InfiniBand :) Now let's work on getting the SRP protocol bound on top of it, with at least one of the interfaces of your HCA as a 'target'.
The SCST Package
Step 9: Prepare to install the SCST Package
# [root@NAS01 ~]$ yum install svn
[root@NAS01 ~]$ cd /tmp
[root@NAS01 ~]$ svn checkout svn://svn.code.sf.net/p/scst/svn/branches/3.2.x/ scst-svn
Step 10: Install the SCST Package
The folder name is relative to the version of SCST I'm using... Note the make 2perf, rather than make 2release or make 2anything-else.
# [root@NAS01 ~]$ cd /tmp/scst-svn/
[root@NAS01 ~]$ make 2perf
[root@NAS01 ~]$ cd scst
[root@NAS01 ~]$ make install
[root@NAS01 ~]$ cd ../scstadmin
[root@NAS01 ~]$ make install
[root@NAS01 ~]$ cd ../srpt
[root@NAS01 ~]$ make install
[root@NAS01 ~]$ cd ../iscsi-scst
[root@NAS01 ~]$ make install
You can combine these into a one-liner, but for me it was easier to see the issues I was having in SRPT by performing the make install steps one by one.
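If you do want the one-liner form, something like this keeps the same order and stops at the first failure. A sketch; the /tmp/scst-svn path assumes the svn checkout from Step 9:

```shell
# Build SCST in 'perf' mode, then install each component in order,
# stopping at the first error so a failure (e.g. in srpt) is obvious.
if [ -d /tmp/scst-svn ]; then
    cd /tmp/scst-svn && make 2perf && \
    for d in scst scstadmin srpt iscsi-scst; do
        make -C "$d" install || { echo "install failed in $d"; break; }
    done
else
    echo "SCST source tree not found at /tmp/scst-svn"
fi
```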
Now, if the correct Mellanox OFED drivers are loaded with kernel support, and no conflicting 'inband' drivers got in the way, then the SRPT install above should have created a module called ib_srpt. The 'whole' trick for me in getting this setup right was understanding that the SCST make depends on the OFED make having been done with kernel support.
Step 11: Validate the correct ib_srpt.ko file is loaded for the module ib_srpt
This step tripped me up for a while....
# [root@nas01]# modinfo ib_srpt
Output should look like:
# filename:       /lib/modules/3.10.0-514.el7.x86_64/extra/ib_srpt.ko
license:        Dual BSD/GPL
description:    InfiniBand SCSI RDMA Protocol target v3.2.x#MOFED ((not yet released))
author:         Vu Pham and Bart Van Assche
rhelversion:    7.3
srcversion:     D993FDBF1BE83A3622BF4CC
depends:        rdma_cm,ib_core,scst,mlx_compat,ib_cm,ib_mad
vermagic:       3.10.0-514.el7.x86_64 SMP mod_unload modversions
parm:           rdma_cm_port:Port number RDMA/CM will bind to. (short)
parm:           srp_max_rdma_size:Maximum size of SRP RDMA transfers for new connections. (int)
parm:           srp_max_req_size:Maximum size of SRP request messages in bytes. (int)
parm:           srp_max_rsp_size:Maximum size of SRP response messages in bytes. (int)
parm:           use_srq:Whether or not to use SRQ (bool)
parm:           srpt_srq_size:Shared receive queue (SRQ) size. (int)
parm:           srpt_sq_size:Per-channel send queue (SQ) size. (int)
parm:           use_port_guid_in_session_name:Use target port ID in the session name such that redundant paths between multiport systems can be masked. (bool)
parm:           use_node_guid_in_target_name:Use HCA node GUID as SCST target name. (bool)
parm:           srpt_service_guid:Using this value for ioc_guid, id_ext, and cm_listen_id instead of using the node_guid of the first HCA.
parm:           max_sge_delta:Number to subtract from max_sge. (uint)
i.e.
/lib/modules/`uname -r`/extra/ib_srpt.ko <-- Where 'uname -r' is whatever comes up for you. In my case 3.10.0-514.el7.x86_64
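The same check can be scripted so you can re-run it after any kernel or driver update. A sketch using modinfo -n, which prints just the filename field shown above:

```shell
# Verify that ib_srpt resolves to the SCST-built module directly under
# .../extra/, not to a leftover 'inband' dummy module deeper in the tree.
expected="/lib/modules/$(uname -r)/extra/ib_srpt.ko"
actual="$(modinfo -n ib_srpt 2>/dev/null)"
if [ -z "$actual" ]; then
    echo "ib_srpt module not found at all"
elif [ "$actual" = "$expected" ]; then
    echo "OK: $actual"
else
    echo "CONFLICT: expected $expected, found $actual"
fi
```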
If it looks different, like the below example, you have a problem with an 'inband' driver conflict.
# filename:       /lib/modules/3.10.0-514.el7.x86_64/extra/mlnx-ofa_kernel/drivers/infiniband/ulp/srpt/ib_srpt.ko
version:        0.1
license:        Dual BSD/GPL
description:    ib_srpt dummy kernel module
author:         Alaa Hleihel
rhelversion:    7.3
srcversion:     646BEB37C9062B1D74593ED
depends:        mlx_compat
vermagic:       3.10.0-514.el7.x86_64 SMP mod_unload modversions
If you have this problem, please perform the following steps: manually remove the ib_srpt.ko file from whatever location it is in. Reboot. Re-make the SCST package per the above instructions. Then re-check with -> modinfo ib_srpt
If you don't have this problem, you are getting close :) and have the foundations ready. We just need to find some information, and set up /etc/scst.conf
Step 12: The SCST Configuration File
In order to create/edit the SCST configuration file with the right information, you will need at least:
1) Discover the SRP addresses of your InfiniBand HCA interfaces, and have them handy.
2) Create, or have the location of, the volume you want to present via SCST to the SRP initiator(s).
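Since we're about to need it, here is a minimal sketch of what an SRP-only /etc/scst.conf can look like. Everything in it is a placeholder to adapt: disk01 is a hypothetical device name, /dev/zvol/tank/vol01 stands in for whatever ZoL volume you create, and the TARGET name is the SRP address of one of your HCA ports (mine appear in the scstadmin -list_target output under Final Notes):

```
# /etc/scst.conf - minimal SRP sketch; all names below are placeholders

HANDLER vdisk_blockio {
        DEVICE disk01 {
                filename /dev/zvol/tank/vol01
        }
}

TARGET_DRIVER ib_srpt {
        TARGET fe80:0000:0000:0000:0002:c903:0028:05a5 {
                enabled 1
                LUN 0 disk01
        }
}
```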
Final Notes:
Managing the SCST Service/Configuration:
Check the status, stop, start, restart the service.
# [root@NAS01 ~]$ service scst status
[root@NAS01 ~]$ service scst stop
[root@NAS01 ~]$ service scst start
[root@NAS01 ~]$ service scst restart
You need to restart the SCST service after making changes to /etc/scst.conf if you want those changes to become active.
But be aware that when restarting the service there will be a temporary disruption in all SCST-presented traffic (iSCSI/SRP): the SCSI virtual disks for all initiators may disappear/reappear, and this may take up to 60 seconds or so while the SRP discovery/login process happens. So if you have critical VMs/Databases running that have strict time-outs, you want to plan accordingly.
I don't want to overstate the disruption that normally occurs during a quick 'add a LUN mapping' and restart. I've tested it via ESX6 with a Windows 2016 Server VM with Resource Monitor open, and you can see it hang for a bit, and then quickly it's back and life is good. If that works for you, then don't worry about it.
However, if you make a mistake in your /etc/scst.conf, for example, and it takes you longer to get back up than you planned, well then, there's that...
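If a full service restart feels heavy for a small change, scstadmin can also apply the config file directly. In my understanding this is less disruptive than a restart, but treat it as a sketch and verify the behavior on your own SCST version first:

```shell
# Apply /etc/scst.conf through scstadmin instead of a full
# 'service scst restart' (exact behavior varies by SCST version).
if command -v scstadmin >/dev/null 2>&1; then
    scstadmin -config /etc/scst.conf
else
    echo "scstadmin not in PATH (this guide installs it under /usr/local/sbin)"
fi
```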
List SCST Target Addresses
# [root@NAS01 ~]$ /usr/local/sbin/scstadmin -list_target
The output should look like:
# Collecting current configuration: done.

Driver       Target
-----------------------------------------------------------------
copy_manager copy_manager_tgt
ib_srpt      fe80:0000:0000:0000:0002:c903:0028:05a5
             fe80:0000:0000:0000:0002:c903:0028:05a6