A good hint, but if possible I would like to go beyond PartSurfer. I should explain my needs better: I have a bunch of HP 300GB 6G SAS 10K 2.5in DP drives (P/N 507127-B21) that came from a couple of stand-alone DL380 G7 servers. We'd like to organize hosts and storage into a cluster, and we are considering HP MSA disk arrays; at this time we are looking at the P2000 G3 and the MSA 2040. The two arrays cost about the same, but the MSA 2040 is newer and faster than the P2000 G3. The P2000 G3 has been on the market since 2010, so I think it is going to be retired by HP. Excluding controller capabilities, the two MSA arrays are very similar, almost identical.
Both can be expanded with D2700 disk enclosures, and the D2700 supports ProLiant HDDs like the 507127-B21. At the same time, the D2700 supports E2D55A, which is the P/N of the HP MSA 300GB 6G SAS 10K SFF DP drive. Both HDDs have the same specifications but different part numbers, and both can be installed in D2700 disk enclosures, so there can't be any relevant difference between them. In a totally new architecture the obvious approach is to buy an MSA 2040 with compatible HDDs; we can find them in the QuickSpecs, and PartSurfer is also a good source of information. But in our situation it is at least 'unpleasant' not to consider the value of 25 ProLiant HDDs, which exceeds the cost of the MSA 2040 itself. :-(
In the last few months, my company and our sister company have been working on a number of cool new projects. As a result, we needed to purchase more servers and implement an enterprise-grade SAN. This is how we got started with the HPe MSA 2040 SAN (formerly known as the HP MSA 2040 SAN), specifically a fully loaded HPe MSA 2040 Dual Controller SAN unit.

The Purchase

For the server, we purchased another ProLiant DL360p Gen8 (with 2 X 10-core processors and 128GB of RAM, the exact same as our existing server), however I won't be getting into that in this blog post. Now for storage, we decided to pull the trigger and purchase an MSA 2040 Dual Controller SAN.
We purchased it as a CTO (Configure to Order) and loaded it up with 4 X 1Gb iSCSI RJ45 SFP+ modules (there's a minimum requirement of one 4-pack of SFPs) and 24 X HPe 900GB 2.5-inch 10K RPM SAS Dual Port Enterprise drives. Even though we have the four 1Gb iSCSI modules, we aren't using them to connect to the SAN. We also placed an order for 4 X 10Gb DAC cables. To connect the SAN to the servers, we purchased 2 X HPe dual-port 10Gb Server SFP+ NICs, one for each server. The SAN will connect to each server with 2 X 10Gb DAC cables, one going to Controller A and one going to Controller B.
HPe MSA 2040 Configuration

I must say that configuration was an absolute breeze. As always, using Intelligent Provisioning on the DL360p, we had ESXi up and running in seconds, installed to the on-board 8GB micro-SD card. I'm completely new to the MSA 2040 and have actually never played with, or configured, one. After turning it on, I immediately downloaded the latest firmware for both the drives and the controllers themselves.
It's a well-known fact that to enable iSCSI on the unit, you have to have the controllers running the latest firmware version. Turning on the unit, I noticed the management NIC on the controllers quickly grabbed an IP from my DHCP server. Logging in, I found the web interface extremely easy to use. Right away I went to the firmware upgrade section and uploaded the appropriate firmware file for the 24 X 900GB drives. The firmware took seconds to flash. I went ahead and restarted the entire storage unit to make sure that the drives were restarted with the flashed firmware (a proper shutdown, of course). While you can update the controller firmware with the web interface, I chose not to do this, as HPe provides a Windows executable that will connect to the management interface and update both controllers.
Even though I didn’t have the unit configured yet, it’s a very interesting process that occurs. You can do live controller firmware updates with a Dual Controller MSA 2040 (as in no downtime). The way this works is, the firmware update utility first updates Controller A. If you have a multipath (MPIO) configuration where your hosts are configured to use both controllers, all I/O is passed to the other controller while the firmware update takes place. When it is complete, I/O resumes on that controller and the firmware update then takes place on the other controller.
This allows you to do online firmware updates that result in absolutely ZERO downtime. PLEASE REMEMBER, this does not apply to drive firmware updates. When you update the hard drive firmware, there can be ZERO I/O occurring; you'd want to make sure all your connected hosts are offline and no software connection exists to the SAN. Anyway, the firmware update completed successfully. Now it was time to configure the unit and start playing. I read through a couple of quick documents on where to get started; if I did this right the first time, I wouldn't have to bother doing it again. I used the available wizards to first configure the actual storage, and then the provisioning and mapping to the hosts.
When deploying a SAN, you should always write down and create a map of your Storage Area Network topology. It helps when it comes time to configure, and really helps with reducing mistakes in the configuration. I quickly jotted down the IP configuration for the various ports on each controller and the IPs I was going to assign to the NICs on the servers, and drew out a quick diagram of how things would connect. Since the MSA 2040 is a dual-controller SAN, you want to make sure that each host can directly access both controllers. Therefore, in my configuration with a 2-port NIC, port 1 on the NIC would connect to a port on Controller A of the SAN, while port 2 would connect to Controller B.
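Purely as an illustration (the subnets and addresses below are made up for the example, not my actual assignments), the jotted-down plan for two directly attached hosts can be as simple as this. Each DAC link is its own point-to-point segment, so each one gets its own small subnet:

```
Controller A port 1 <-> Server1 NIC port 1   10.0.1.0/24   (SAN 10.0.1.1, host 10.0.1.11)
Controller B port 1 <-> Server1 NIC port 2   10.0.2.0/24   (SAN 10.0.2.1, host 10.0.2.11)
Controller A port 2 <-> Server2 NIC port 1   10.0.3.0/24   (SAN 10.0.3.1, host 10.0.3.12)
Controller B port 2 <-> Server2 NIC port 2   10.0.4.0/24   (SAN 10.0.4.1, host 10.0.4.12)
```

Having this on paper before touching the wizard makes the host-port configuration step almost mechanical.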
When you do this and configure all the software properly (as I did in my case), you can create a configuration that allows load balancing and fault tolerance. Keep in mind that in the Active/Active design of the MSA 2040, each controller has ownership of its configured vDisks. Most I/O goes only through the controller that owns the vDisk, but in the event that controller goes down, ownership jumps over to the other controller and I/O proceeds uninterrupted until you resolve the fault. For the first part, I had to run the configuration wizard and set the various environment settings. This includes time, management port settings, unit names, friendly names, and, most importantly, host connection settings. I configured all the host ports for iSCSI and set the applicable IP addresses that I had planned in my SAN topology document in the paragraph above. Although the host ports can sit on the same subnets, it is best practice to use multiple subnets. Jumping into the storage provisioning wizard, I decided to create 2 separate RAID 5 arrays.
The first array contains disks 1 to 12 (and while I have controller ownership set to auto, it will be assigned to Controller A), and the second array contains disks 13 to 24 (again, ownership is set to auto, but it will be assigned to Controller B). After this, I assigned the LUN numbers and then mapped the LUNs to all ports on the MSA 2040, ultimately allowing access to both iSCSI targets (and RAID volumes) from any port. I'm now sitting here thinking "This was too easy". And it turns out it was just that easy! The RAID volumes started to initialize.

VMware vSphere Configuration

At this point, I jumped on to my demo environment and configured the distributed vSwitches for iSCSI. I mapped the various uplinks to the various port groups and confirmed that there was hardware link connectivity.
I jumped into the software iSCSI initiator, typed in the discovery IP, and BAM! The iSCSI initiator found all available paths and both RAID disks I had configured. I did this for the other host as well, connected to the iSCSI target, formatted the volumes as VMFS, and I was done! I'm still shocked that such a high-performance and powerful unit was this easy to configure and get running. I've had it running for 24 hours now and have had no problems.
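For anyone who prefers the command line, here is a rough sketch of the same initiator steps using esxcli. This is not a transcript of my setup: the adapter name (vmhba33), the discovery address, and the device identifier are placeholders you would replace with your own values.

```
# Enable the software iSCSI initiator on the host
esxcli iscsi software set --enabled=true

# Find the name assigned to the software iSCSI adapter (e.g. vmhba33)
esxcli iscsi adapter list

# Add the SAN's iSCSI IP as a dynamic discovery (send targets) address
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.1.1:3260

# Rescan so the MSA volumes and their paths appear
esxcli storage core adapter rescan --adapter=vmhba33

# Set round-robin path selection on each discovered MSA LUN
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
```

The distributed switch, port groups, and VMFS formatting were done through the vSphere client as described above.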
This DESTROYS my old storage configuration in performance; thankfully I can keep my old setup for a vDP (VMware vSphere Data Protection) instance.

HPe MSA 2040 Pictures

I've attached some pics below. I have to apologize for how rough the images/setup look. Keep in mind this is a test demo environment for showcasing the technologies and their capabilities.
Hi, very useful post. I have decided to create in my lab a new VMware environment with 2 x HP DL360p Gen8 (2 CPUs with 8 cores, 48GB RAM, SDHC for ESXi 5.5, 8 x 1Gb NICs) + 1 HP MSA 2040 dual controller with 9 x 600GB SAS drives + 8 x 1Gb iSCSI SFP transceivers. I'm planning to configure the MSA with 3 vdisks: the 1st vdisk RAID 5 with physical disks 1 to 3, the 2nd vdisk RAID 5 with disks 4 to 6, the 3rd vdisk RAID 1 with disks 7 to 8, and a global spare with disk 9. Then for each vdisk I create one volume with the entire capacity of the vdisk, and for each volume I create one LUN per VM. The 3rd vdisk I would like to use for replicas of any VM. Does my configuration look OK to you? Thanks a lot. Bye, Fabio.

Hi Fabio, thanks for clearing that up.
Actually, that would work great the way you originally planned it. Just make sure you provision enough storage. One other thing I want to mention (I found this out after I provisioned everything): if you want to use the storage snapshot capabilities, you'll need to leave free space on the RAID volumes. I tried to snap my storage the other day and was unable to. Let me know if you have any other questions and I'll do my best to help out! I'm pretty new to the unit myself, but it's very simple and easy to configure. Super powerful device! Stephen.

Hello Satish, it is indeed a supported configuration.

Hey Stephen, great post.
I've also got the MSA 2040 with the 900GB SAS drives, connected through dual 12Gb SAS controllers to 2 x HP DL380 G8 ESXi hosts. It's a fantastic and fast system, better than the old Dell MD1000 I had! Setup was indeed a breeze. I was debating between the FC and the SAS controllers, and I chose SAS. I only have 2 ESXi hosts with VMware Essentials Plus, so 3 hosts max. During the data migration from our old data server to the MSA with robocopy, I saw the data copied at around 700MB/s (for large files, and during the weekend). Backup will be done with Veeam, going to disk and then to tape. The backup server is connected directly to MSA port 4 with a 6Gb SAS HBA, and an LTO-5 tape library is connected to the backup server, also with a 6Gb HBA. I'm very pleased with my setup.

Hi Lars, glad to hear it's working out so well for you!
I'm still surprised how awesome this SAN is. I literally set it up (with barely any experience), and it's just been sitting there, working, with no issues since! Love the performance as well. And nice touch with the LTO-5 tape library; I fell in love with the HP MSL tape libraries a few years back, and the performance on those is always pretty impressive! I'm hoping I'll be rolling these out for clients soon when it's time to migrate to a virtualized infrastructure! Cheers, Stephen.
Hello Steve, that is correct (re: the 2 x 10Gb DACs and the 2 x unused 1Gb SFP+ modules). The configuration is working great for me and I have absolutely no complaints. Being a small business, I was worried about spending this much money on this config, but it has exceeded my expectations and turned out to be a great investment. For vMotion, each of my servers actually has a dual-port 10GBase-T NIC inside it. I just connected a 3-foot Cat6 cable from server to server and dedicated it to vMotion. vMotion over 10Gb is awesome, it's INCREDIBLY fast. When doing updates to my hosts, it moves 10-15 VMs (each with 6-12GB of RAM allocated) to the other host in less than 40 seconds for all of them. As for Storage vMotion, I have no complaints. It sure as heck beats my old configuration.
I don't have any numbers on Storage vMotion, but it is fast! And to your comment about SAS: originally I looked at using SAS, but ended up not touching it because of the same issue; I was concerned about adding more hosts in the future. Also, as I'm sure you're aware, iSCSI provides more compatibility for hosts, connectivity, expansion, etc. In the far, far future, I wanted the ability to re-purpose this unit when it comes time to perform my own infrastructure refresh. Keep in mind that with the 2040, the only thing that's going to slow you down is your disks and RAID level. The controllers can handle SSD drives and SSD speeds, but keep in mind you want fast disks, you want lots of disks, and you want a fast RAID level. Let me know if you have any questions! Cheers, Stephen.

Hi Stephen, would you mind emailing me an image of the cable connections on this configuration? This is really close to what we are planning as a replacement for our existing SAN and 2-host configuration, and I appreciate you sharing this level of detail. What are your thoughts on using 15K SAS or an SSD for read caching, and how might that affect the IOPS you experienced? I haven't found much discussion of this array from real deployments, and I appreciate you sharing the performance stats in your blog; really helpful. Cheers, Dan.
Hi Alex, there are two controllers, but that's the difference between the SAN and the SAS model. As far as the SAN model goes, it has SFP+ module ports that are to be used with certified HP SFP+ modules (the supported modules are listed in the QuickSpecs document). The SAN can be used with either DAC cables, FC SFP+ modules, iSCSI SFP+ modules, or the 1Gb RJ45 SFP+ modules. You should be fine doing what you mentioned. In my case, we use DAC. I also just configured and sold the same configuration as I have to a client of mine, and it is working GREAT! Let me know if you have any specific questions. If the unit you purchase is pre-configured, there is a chance you may have to log in to the CLI and change the ports from FC to iSCSI, but this is well documented and appears easy to do. But I wouldn't be surprised if it just worked out of the box for you. Cheers, Stephen.
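As a rough illustration of that CLI change (the syntax here is my recollection of the MSA 2040 CLI reference guide, so treat it as an assumption and confirm it against the guide for your firmware release), the process looks something like this over SSH to the management controller:

```
# Check the current personality and link state of the converged host ports
show ports

# Switch the converged (CNC) ports from FC to iSCSI
# (the controllers restart to apply the change; per-port iSCSI IPs are set afterwards)
set host-port-mode iSCSI
```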
Hi Stephen, the article on your blog is great and fantastic. Can you share a diagram of the back-side connections? We recently purchased the items below; they are not configured yet and still in the packaging. Please provide some valuable suggestions for setting up this environment.
HP MSA 1040 2-port FC DC LFF storage – quantity 1
HP MSA 4TB 6G SAS drives – quantity 10
HP 81Q PCI-e FC HBA – quantity 4
HP 5m Multi-Mode OM3 50/125um LC/LC 8Gb FC and 10GbE laser-enhanced cable pack – quantity 4
HP MSA 2040 8Gb SW FC SFP 4-pack – quantity 1
DL380 G8 server – quantity 2
Please provide a brief idea about setting this up. Thanks in advance.

Hi Florian, they would, and they do (in my case, and in other cases where people are using VMware), but you have to keep in mind that in any SAN or shared-storage environment you are using a filesystem that would be considered a clustered filesystem. Some filesystems do not allow access by multiple hosts because they were not designed to handle this, and that can cause corruption. Filesystems like VMFS are designed as clustered filesystems, in that they can be accessed by multiple hosts. Hope this answers your question! Cheers, Stephen.

Just thought I'd drop by to say, in case anyone is reading and about to buy an MSA 2040 to use with Direct Attach Cables, this is definitely not a supported configuration. Like Stephen, I carefully checked all the QuickSpecs docs for the NIC, for the server, and for the SAN before specifying one recently.
However, even though they are on the list, HP categorically do not support direct-attached 10GbE iSCSI, regardless of whether you use fibre or DAC cables. This can be verified online using their SPOCK storage compatibility resource, which lists all the qualified driver versions, etc.
SPOCK has a separate column for "direct connect". It's pretty difficult to understand what difference this could make, but the problem is that if you deploy an unsupported solution and something goes wrong, they would just blame that. The mitigation in my case was to convert the SAN to Fibre Channel and buy two 82Q QLogic HBAs for the hosts. This is the only supported way to use the SAN without storage switching.

Hello Stephen, you wrote the following in one of your posts: "For vMotion, each of my servers actually has a dual-port 10GBase-T NIC inside it. I just connected a 3-foot Cat6 cable from server to server and dedicated it to vMotion.
vMotion over 10Gb is awesome, it's INCREDIBLY fast. When doing updates to my hosts, it moves 10-15 VMs (each with 6-12GB of RAM allocated) to the other host in less than 40 seconds for all of them. As for Storage vMotion, I have no complaints. It sure as heck beats my old configuration.
I don't have any numbers on Storage vMotion, but it is fast!" Are you really direct-connecting the two 10Gb NICs to each other with a simple Cat cable? Or are you using a (10Gb) switch to connect them? Would be grateful for an answer. Thank you, Franz.

Hi Franz, as I mentioned, my servers each have an extra 10GBase-T NIC in them. Since I only have two physical hosts, I connected one port on each server to the other server using a short Cat6 cable. So yes, I'm using a simple Cat cable to connect the two NICs together. Once I did this, I configured the networking on the ESXi hosts, configured a separate subnet for that connection, and enabled vMotion over those NICs. Keep in mind, I can do this because I only have 2 hosts. If I had more than 2 hosts, I would require a 10Gig switch. Hope this helps!
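For reference, a hedged sketch of what that dedicated vMotion link can look like on each host from the ESXi shell. The uplink name (vmnic4), the vSwitch/port-group names, and the 172.16.0.x addresses are placeholders, and the final tag command assumes ESXi 5.5 or later (older builds enable vMotion on the VMkernel port through the client instead):

```
# Dedicated standard vSwitch and port group for the back-to-back 10Gb link
esxcli network vswitch standard add --vswitch-name=vSwitch-vMotion
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-vMotion --uplink-name=vmnic4
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-vMotion --portgroup-name=vMotion

# VMkernel interface on its own small subnet, used only for vMotion
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.16.0.1 --netmask=255.255.255.0 --type=static

# Mark the interface for vMotion traffic
esxcli network ip interface tag add --interface-name=vmk1 --tagname=VMotion
```

The second host mirrors this with 172.16.0.2 on the other end of the cable.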
Hi Stephen, congrats on your setup! I'm a systems engineer at a distributor. One hint: you can purchase only the chassis and buy the two controllers separately; then you're not forced to buy transceivers at all, and the price of the components is the same. Make sure you have the latest firmware (GL200) to enable tiering and virtual pools.
Then you can make smaller RAID sets (3+1) and pool volumes across the RAIDs; this will give you a huge bump in speed. Paired with an SSD read cache (it can be just one SSD), this storage is nuts!

We have just implemented a similar configuration with the HP MSA 2040 iSCSI connected to ESXi 5.1. In addition, we replicate the volumes to an HP MSA 1040 that was initially connected locally and then shipped off site to replicate over a 100Mb IPsec tunnel. Moving the replica target controller was a bit tricky, but basically you have to detach the remote volume, remove the remote host, and ship it off site.
After reinstallation and validation of all IP connectivity, you re-add the remote host and reattach the volume.

I also just purchased the MSA 2040 (SAS version) for our vSphere 5.5 environment. How exactly did you update the drive firmware? There is a critical firmware upgrade for the SSD drives I loaded, but I can't get it to run. I tried installing the HP Management agents for the vCenter Client plug-in, but that can't even detect the existing firmware on the disks. When I try to upload the CP026705.vmexe file via the web interface for the controller, it loads the file to the disks but then fails the update: 6 of 6 disks failed to load (error code = -210). CAPI error: An error was encountered updating disks. Code Load Fail. Failed to successfully update disk firmware.
Steven, thanks for showing the demo. If you have 3 VMware hosts to connect: do the RJ-45 1Gb modules come as a single or a pair? It seems like the MSA 2040 can take up to 4 SFP modules per controller. "To connect to the hosts, I used 10Gb DAC cables which have SFP+ modules built in to the cables on both ends." So you bought an adapter card that has SFP+ ports, and you ended up connecting your VMware hosts with 10Gb SFP+ connections? Can you please post a picture of the back showing the SFP connections in your setup? Is it true that your setup is like page 5, figure 2, without the HP 16Gb SAN switch (FC)? Would it be: VMware host with HP ENET 10Gb 2-port 530SFP adapter, to MSA 2040 controller A or B with C8R25A HP MSA 2040 10Gb RJ-45 iSCSI SFP+ 4-pack transceiver? Thanks much.

Hello Jon, please see the post for pictures of the connection. The RJ-45 1Gb SFP+ modules come in a 4-pack when ordered.
My unit has 4 of them, but I don't use them. When I ordered, it was mandatory to order a 4-pack, so that's the only reason why I have them. I don't use them; they are only installed in the unit so I don't lose them, lol. I used the 10Gb SFP+ DAC (Direct Attach Cables) to connect the servers to the SAN. The servers have SFP+ NICs installed.
Please see the pictures from the post above. The document you posted above is HP's best practice guide. I used that guide as a reference, but built my own configuration. I did NOT use Fibre Channel; I used iSCSI instead and utilized the DAC cables. You cannot use the DAC cables with FC. Please note that if you use the RJ45 1Gb modules, they will not connect to a NIC in the server that has an SFP+ connection; the RJ45 modules use standard RJ45 cables. Also, the card you mentioned uses SFP, not SFP+. You'll need an SFP+ NIC. In my configuration I used iSCSI with:
HP Ethernet 10Gb 2-port 560SFP+ Adapter (665249-B21)
HP BladeSystem c-Class Small Form-Factor Pluggable 3m 10GbE Copper Cable (487655-B21)
You keep referencing SFP. Please note that neither the SAN nor my NICs use SFP; they use SFP+. Hope this helps, Stephen.
Hi Stephen, thanks for your quick reply. I have to say your build is the most cost-efficient. Allow me to recap to make sure I understand your setup. Host (server side): 2 X HP Ethernet 10Gb 2-port 560SFP+ Adapter (665249-B21), one to the A controller and one to the B controller. This adapter has 2 ports with connectors that will fit the 487655-B21 cable, or the shorter 487652-B21. What I was missing, or my question, is: the SAN doesn't come with any SFP+ modules unless you order them as options. Trying to answer my own question: Stephen, yours is a 10Gb iSCSI configuration, and for the MSA 2040 10Gb iSCSI configuration the user can use DAC cables instead of SFPs. This is confirmed by reading your post over and over; you said "Not familiar with the SAS version." Here is where I found the SAS and iSCSI difference: "Step 2b – SFPs. NOTE: MSA 2040 SAN Controller does not ship with any SFPs.
MSA SAS controllers do not require SFP modules. Customer must select one of the following SFP options. Each MSA 2040 SAN controller can be configured with 2 or 4 SFPs. MSA SFPs are for use only with MSA 2040 SAN Controllers. For MSA 2040 10Gb iSCSI configuration, user can use DAC cables instead of SFPs." Indeed, the iSCSI version will be the cost-effective way to go. Great job, Stephen. Please help me convince myself to do the same setup, as I was trying to do the following setup. Anyone is welcome to jump in. 3 HP G8/G9 servers.
Hi Jon, each server has 1 NIC for SAN connectivity, but each NIC has 2 interfaces. The 560SFP+ has two DAC cables running to the SAN (1 cable to the top controller and 1 cable to the bottom for redundancy). You need to make sure you use a "supported" cable from HPe for it to work (supported cables are listed in the spec sheets, or you can reference this post or other posts which have the part numbers). The cables I'm using are terminated with SFP+ modules on both ends (they are part of the cable). The SFP+ module ends slide into the SFP+ ports in the NIC and in the SAN controller. And that is correct: on CTO orders, it's mandatory to order a 4-pack of SFP+ modules with some part numbers. That is why I have them.
Keep in mind I don't use the modules, since I'm using the SFP+ DAC cables. I am familiar with the SAS version, but that is for SAS connectivity, which isn't mentioned in any of my posts. My configuration is using iSCSI. If you want to use iSCSI or Fibre Channel, you need to order the SAN. If you want to use SAS connectivity, you need to order the SAS MSA. When you place an order, you need to choose whether you want the MSA 2040 SAN or the MSA 2040 SAS enclosure. These are two separate units that have different types of controllers.
The SAS controllers only have SAS interfaces to connect to the host via SAS, whereas the SAN model has converged network ports that can be used with Fibre Channel or iSCSI. In your proposed configuration, you cannot use the MSA 2040 SAS with any type of iSCSI connectivity; you require the MSA 2040 SAN to use iSCSI technologies. You're not very clear in your last post, but just so you know, the SAN does not support 10Gb RJ45 connectors (HP doesn't have any 10Gb RJ45 modules for it). You must use either 10Gb iSCSI over fiber, 10Gb iSCSI over DAC SFP+, 1Gb RJ45, or Fibre Channel. The reason why I used the SFP+ DAC cables was to achieve 10Gb connectivity using iSCSI. That is why I have to have the SFP+ NIC in the servers. If you only want to use RJ45 cables, then you'll be stuck using the 1Gb RJ45 SFP+ transceiver modules and stuck at 1Gb speed.
If you want more speed, you'll need to use the DAC cables, fiber, or Fibre Channel. I hope this clears things up.

Morning Jon, technically you could use 10Gb SFP+ DAC cables to connect the SAN to the switch, in which case you would then be able to use 10GBase-T (RJ45) to connect the servers to the switch.
This is something I looked at and almost did, but chose not to, just to keep costs down for the time being. In my setup, when I do expand and add more physical hosts, I might purchase either a 10Gb switch that handles SFP+, or a combo of SFP+ and 10Gb RJ45 connections. This will give me some flexibility as far as connecting the hosts to the SAN. In this scenario, some hosts would use SFP+ to connect to the switch, whereas other hosts would connect via 10GBase-T RJ45. If you do choose to use a 10Gb switch with DAC cables, just try to do some testing first. Technically this should work, but I haven't talked to anyone who has done this, as most people use fiber SFP+ modules to achieve it. One day I might purchase a 10Gb switch, so I'll give it a try and see if it can connect. The only considerations would be whether the switch is compatible with the DAC cables, and whether the SAN is compatible speaking to a switch via 10Gb DAC cables. Getting back on topic: since you only have 3 physical hosts, you definitely could just use 10Gb DAC SFP+ cables to connect directly to the SAN. Since each controller has 4 ports, this gives you direct connection access for up to 4 hosts using DAC cables (each host having two connections to the SAN for redundancy). As for your final question: the solution you decide to choose should reflect your I/O requirements.
The first item you posted utilizes high-performance SSD drives (typically optimized for heavy-I/O DB workloads). This may be overkill if it's just a couple of standard VMs delivering services such as file sharing and Exchange for a small group of users. The second item you posted is the LFF format (Large Form Factor disks). The disks are 3.5-inch versus the 2.5-inch SFF format. Not only does this unit house fewer disks, it may also deliver lower I/O and performance. I'm just curious, have you reached out to an HP Partner in your area? By working directly with an HP Partner, they can help you design a configuration. They can also build you a CTO (Configure to Order) which will contain exactly what you need, and nothing you don't need. The first part number you mentioned may be overkill, while the second item you listed may not be enough. This may also provide you with some extra pricing discounts depending on the situation, as HP Partners have access to quite a few incentives to help companies adopt HPe solutions. Hope this helps and answers your question. Let me know if I can help with anything else. Cheers.

Hi Jon, I would wait until you talk to an HPe solution architect before choosing part numbers.
They may have information or part numbers that aren't easily discovered using online methods. Also, I'm guessing they will probably recommend a CTO (Configure to Order) type of product order. Technically both methods you suggested would work, however you should test first to make sure the Cisco switches accept the SFP+ cables.
Originally I wanted to use switches, however I decided not to, to keep costs at a minimum. If you go this route, you could actually connect all 8 SFP+ ports on the SAN to the switch for redundancy and increased performance. You could also add a second switch and balance half the connections from each controller to switch 1 and switch 2 for redundancy. Also, I'm curious as to why you chose the LFF version of the SAN? I would recommend using the SFF (2.5-inch disk) version of the SAN for actual production workloads; I normally only use LFF (3.5-inch disk) configurations for archival or backup purposes. This is my own opinion, but I actually find the reliability and lifespan of the SFF disks to be greater than LFF. As I mentioned before, the SFF version of the SAN holds 24 disks instead of the 12 disks the LFF version holds. You will also see increased performance, since the disk count is higher and the smaller disks generally perform better. Cheers, Stephen.

Hi Stephen, I'm back, with 3 HP DL380 Gen9 E5-2640v3 servers, 3 HP Ethernet 10Gb 2-port 530T Adapters 657128-001 (RJ-45), and an HP MSA 2040 ES SAN in hand.
I chose to use DAC cables. As for the following quote from you: how many IPs did you use? I agree I should use 2 subnets: one subnet for hosts and guests (for users to access the hosted VM servers), and then a subnet for storage (only the VMware hosts and the storage).
My first draft does not follow good practice. Please let me know what you think. Also, do you have virtual servers in different VLANs, and a DHCP server serving the different VLANs?

Hi Jon, I'm a little bit puzzled by your configuration. You mentioned you're using DAC cables (which are SFP+ 10Gb cables), however you also mentioned your server has RJ45 NICs. You can't use SFP+ DAC cables with that NIC. Your storage network and VM communication networks should be separated. In my configuration, each host has one 10Gb DAC SFP+ cable going to each controller on the MSA 2040 SAN. So each host has two connections to the SAN (via Controller A and Controller B).
This provides redundancy. Each connection to the SAN (a total of 4, since I have two physical hosts) resides on its own subnet. This is because the DAC cables go directly to the SAN and pass through no switch fabric (no switches). Your VM communication for normal networking should be separated: it shouldn't use the same subnets, and it shouldn't sit on the same network if you do have your SAN going through switching fabric. One last comment.

Hi Stephen, I use DAC cables. What confuses me is the iSCSI with IP addresses on the MSA; I thought iSCSI didn't use IP? Yes, I would like to have a separate VLAN for my storage; the physical hosts each use 10Gb DAC SFP+ cables to connect directly to the SAN. So 6 x 10Gb DAC SFP+ cables, and all 6 ports across Controllers A and B, will be on VLAN X. Then the rest of the 1Gb NICs on the 3 hosts are on VLANs Y and Z.
But I'm still working this out with my network admin. Now, since VLAN X doesn't really talk to Y and Z (am I right?), does VLAN X have to be created by the network admin? Or do I just whip up VLAN X, document it, and call it a day? A question for anyone who has run DAC cables from host to MSA and is using a switch (3560G) for client access (for those 4 x 1Gb ports on the HP host): are any of you in favor of a port channel with a trunk to give the host 2Gb (or 4Gb) instead of 1Gb, or two 2Gb port channels with trunks?
Or are you in favor of trunking the needed VLANs on all 4 ports and dealing with the 4 ports in the VMware switch? Thanks much.

Hi John, the initial configuration wizard should have prompted you for this. If not, log in to the web GUI, click on "System" on the left, click the blue "Action" button, and then "Set Host Ports". In there, you can configure the iSCSI IP address settings.
You can also enable jumbo frames by going to the Advanced tab. There is a chance that the unit's converged network ports may be configured in FC mode. If this is the case, you'll need to use the console to change the converged ports from FC to iSCSI. Cheers, Stephen.

Great article!
I'm building a similar setup with an MSA 2040, two DL380s, and two ZyXEL XS1920 switches in a stacked configuration. I'll be connecting the MSA 2040 to the ZyXEL switches using DAC cables. This gives me near 10Gbps line speed to the storage.
The DL380s will have 10GBase-T NICs (FlexFabric 10Gb 2P 533FLR-T Adapter). The Ethernet NICs will have lower bandwidth than DAC (I believe I should get approximately 7Gbps line speed), but using the ZyXEL switches I can now connect more servers via 10GBase-T Ethernet and still maintain dual paths to any new clusters I create.

Hi Ndeye, I can't exactly answer your question. However, any special drivers you may require should be available on the HPe support website. Please note, when searching for your model on the HPe support website, make sure you choose the right one; there are multiple MSA 2040 units (SAN and SAS units). Here is a link to the US site: When I checked my model on that site, I noticed there were drivers available for specific uses, however I'm not sure if I saw a DSM driver. There's a chance that it may not be needed. As always, you can also check the best practice documents, which should explain everything (this document is vSphere-specific). I hope that helps! Cheers.
Hi Antonio, no such thing as a dumb question! When it comes to spending money on this stuff, it's always good to ask before spending! To answer your question: yes, if you buy only one 4-pack of transceivers, you can load 2 into Controller A and 2 into Controller B. When I purchased my unit, I ordered only one 4-pack (as it was mandatory to buy at least one 4-pack in CTO configurations). When I received it from HPe, they had 2 loaded into each controller. Let me know if you have any other questions!

Hi Stephen, nice to write to and read you! Please, I need your help. The SMU reference guide for firmware release G220, under "Using the Configuration Wizard: Port configuration", says: To configure iSCSI ports: 1. Set the port-specific options: IP Address. For IPv4 or IPv6, the port IP address.
For corresponding ports in each controller, assign one port to one subnet and the other port to a second subnet. Ensure that each iSCSI host port in the storage system is assigned a different IP address. For example, in a system using IPv4:
– Controller A port 3: 10.10.10.100
– Controller A port 4: 10.11.10.120
– Controller B port 3: 10.10.10.110
– Controller B port 4: 10.11.10.130
Well, I see that each port on a controller is on a different network segment. Suppose we have an iSCSI SAN with two servers and two dedicated LAN switches, and the servers have two NICs each. Must each NIC be on a different segment, matching the port on each controller? How does Windows Server 2012 R2 work with that configuration? Thanks and regards, Martin.
Hi Martin, that is correct.

Hi Rene, I'm not that familiar with Hyper-V and blades, however: first and foremost, I would recommend creating separate arrays/volumes for the blade host operating systems and your virtual machine storage datastore. The idea of having the host blade OSes and guest VMs stored in the same datastore does not sit well with me; the space provided for the Hyper-V hypervisor should be minimized to only what is required. Second, you'll need to configure your blade networking so that the Hyper-V host OS has access to the same physical network and subnet the MSA SAN resides on. Once this is done and the host OS is configured, you should be able to provide the Hyper-V host OS with access to the SAN. Please note, as with all cluster-enabled (multiple-host-access) filesystems, make sure you take care to follow the guidelines that Microsoft has laid out for multiple-initiator access to the filesystem, to avoid corruption due to misconfiguration. I hope this helps! Hopefully someone with specific knowledge can chime in and add to what I said, or make any corrections to my statements if needed. Stephen.

Hi Stephen, many thanks for the speed of your response. Having spoken to HP yesterday, your comments fall in line with their advice.
Our perception of the tiering mechanics was a little off, and we now understand that both types of disk need to reside in the same pool for tiering to take effect. It's a tussle between having all disks in one group, owned by one controller, giving the maximum number of IOPS, and splitting the disks into two groups, with only one group containing the SSDs. While the latter means that both controllers are active, there will be a drop in IOPS, and there will be one group that only contains the 10K drives. That's a decision we have yet to make.
Your benchmark performance results look as though we needn't worry, although you had more disks in the array if I recall. Many thanks, Stephen. I may return with more questions as we proceed with the setup of the unit. J Howard.

Hello Stephen, please, I need help. I have read the topics and they are very good. I have a question, even though it is not about FC but about the SAS ports. I have a new HP MSA 2040 SAS (MOT27A), and after I configure the MSA, no host appears. I verified:
The host card: HP H241
The server: HP DL380 Gen9 – Windows 2016
The cable between the MSA 2040 and the server: mini-SAS HD connector to mini-SAS HD connector (716197-B21)
The only message I get on the rear graphical view, on ports A1 to A4 and B1 to B4, is: "There is no active connection to this port." I have verified that all firmware and drivers are up to date. Have I forgotten to do anything? Thanks a lot. Dondia Jhoel.

Hi Dondia Jhoel, my knowledge of the SAS model is limited, however there are a few things to keep in mind. 1. Did you use the cables listed in the QuickSpecs compatibility document?
2. Have you configured disk groups and created a volume? (I can't confirm, but you may need to do this in the web interface beforehand.) 3. Are you using the latest and correct drivers on the Windows Server host?
I'd recommend downloading the ProLiant Support Pack and making sure all your HPe drivers and software are up to date. As always, I also recommend reading the best practice documentation available for the unit. I hope this helps, and my apologies for not being more help. Stephen.

Hey Stephen, great post regarding the MSA 2040.
We just bought an MSA 2042 with two controllers, two 400GB SSDs, and 9 x 4TB 7.2K SAS drives. All my network hosts have 1Gb for iSCSI. The goal is to use the MSA 2042 for data storage to replace a file server and also to store VMs. We also purchased an HP DL380 G9 with 8 network ports and two HP OfficeConnect 1920 16-port switches. I was planning to set up 8 HDDs as RAID 6 with one global spare, and use both SSDs for read cache on Controller A.
I have redundant connections from the MSA host ports to both switches, using the same subnet mask but separate from the LAN subnet. The HP server runs full Server 2012 R2 with Hyper-V to host the server VMs. I am planning to connect LUNs to the host server to store VMs, and to connect LUNs inside the server VMs. We created Team 1 with 3 NICs for LAN access for the server VMs, and one NIC for the host server on the LAN. Team A has 2 NICs going to OfficeConnect switch A, and Team B has 2 NICs going to OfficeConnect switch B. Since we can't team the host ports on the MSA, I was trying to bond ports on the OfficeConnect 1920 switches using LACP, but it shows as faulty on the Server 2012 R2 host under Teaming for SAN Team A and Team B.
I can only select Switch Independent / Dynamic on the Server 2012 R2 host for the faulty error to go away; it comes back if I pick LACP with Dynamic. My question is how to set up my switches and the Server 2012 R2 host's NIC teaming for the SAN subnet, so that data and VMs get the best performance and the most bandwidth. Is my RAID configuration correct for data read/write speed? Will my server VMs have performance issues since they're on the SAN? Do I need to aggregate ports on the switch to create one big bandwidth pipe to the SAN? Is it better not to team the NICs for the server VMs and to use a dedicated NIC to talk to the LAN? Thank you in advance for your expertise.
Hi Nish, first and foremost, you should not be using NIC teaming, or any other type of link aggregation, with iSCSI and/or the MSA 2040. The correct procedure for iSCSI redundancy and throughput is to take advantage of MPIO (Multi-Path Input/Output). This technology not only offers redundancy, but combines and aggregates the speed of multiple interfaces for multiple connections between the hosts (initiators) and the SAN (target). Typically with MPIO you would try to utilize multiple subnets and multiple switches for redundancy, however this isn't a requirement (it should function even if you have one subnet, if configured properly).
You'll need to configure the network layer (IPs and subnets) on the SAN, as well as configure the initiator on the Windows Server to utilize MPIO. While most of my experience is with VMware and Linux, you should be able to configure Windows Server to use different types of MPIO policies; you'll be looking to set the policy to round-robin. As for SAN LUN access for both the host OS and the guest VMs, you need to be very careful about creating the proper host mappings on the MSA 2040. You want to make sure that the host OS isn't doing discovery, mounting, or scanning on LUNs mapped to the VMs inside, and vice versa.
Make sure you put time and effort into creating these mappings (you don't want instances connecting to, or even seeing, LUNs they shouldn't have access to). This can all be handled by host mapping on the MSA itself. As far as the network design itself goes, it all depends on the level of complexity you feel comfortable with. Normally I would recommend keeping the SAN and data networks separate, and making sure that each host, the SAN, and the guest VMs have access to their applicable networks. If it were me, I'd have multiple dedicated (isolated) physical networks assigned to the Storage Area Network using MPIO and multiple subnets. I would then also have a completely separate physical network for data to the local area network (you could use link aggregation or bonding for LAN access). This way you can separate the types of traffic and keep things simple. I hope this helps! Cheers, Stephen.
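To make the Windows side of this concrete, here is a minimal PowerShell sketch for Server 2012 R2, assuming the in-box Microsoft DSM and the software iSCSI initiator; the portal addresses below are placeholders for whatever MSA host-port IPs you assign:

```powershell
# Enable MPIO and claim iSCSI-attached disks for the Microsoft DSM
Install-WindowsFeature -Name Multipath-IO
New-MSDSMSupportedHW -VendorId "MSFT2005" -ProductId "iSCSIBusType_0x9"
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR    # round-robin across paths
# A reboot is required after enabling MPIO before the claim takes effect

# Start the iSCSI initiator service and log in to one portal per subnet/path
Start-Service MSiSCSI
Set-Service MSiSCSI -StartupType Automatic
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.100"
New-IscsiTargetPortal -TargetPortalAddress "10.11.10.120"
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

NIC teaming stays out of the picture entirely: each iSCSI NIC keeps its own IP address, and MPIO handles both the failover and the aggregation across paths.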
Hi Yen, without more detail I can't really comment on the setup. If the simulation software supports it, and the servers are using a cluster filesystem (a cluster filesystem allows multiple hosts to read/write to it simultaneously without corruption), you should be able to have each server connect directly to the SAN (using a switch, and DAC or fiber cables). You'll need to check the documentation for the simulation software to see what they recommend for the storage fabric/connectivity. Cheers, Stephen.

Hi Daniel, I chose to go with iSCSI because I wanted to have iSCSI across my storage fabric.
This way I could either connect the SAN directly to the hosts, or, in the future as my requirements grow (host count grows, or my feature requirements grow), use the DAC cables to connect the SAN to a switch. With SAS I would have been stuck only connecting it directly to the hosts. Using the setup I have now (iSCSI), I can and have used the SAN for other purposes as well (other than for my vSphere environment). iSCSI with DAC provided me much more flexibility than going with the SAS model. Cheers, Stephen.
Hi Stephen, good day to you! We have purchased MSA 2042 storage and two DL360 Gen9 servers. We have connected them as direct-attached storage using DAC, and the storage firmware is updated to the latest. Below are the part numbers of the server, storage, and DAC cable:
755258-B21 – HP DL360 Gen9 8SFF CTO Server
Q0F72A – HPE MSA 2042 SAN Dual Controller with Mainstream Endurance Solid State Drives SFF Storage – CTO
J9285B – HPE X242 10Gb SFP+ to SFP+ 7m DAC Cable
We installed ESXi 6.0 on both servers. The storage has 2 controllers, Controller A & B.

Hi Murali, I'm assuming you've already configured the host ports on the SAN as iSCSI, that you've configured the host ports with a proper IP configuration, and that you have the storage unit fully configured. Did you enable the software iSCSI initiators on the ESXi hosts to enable iSCSI communication? After you enable the software iSCSI initiator, you should be able to add one of the IPs of the SAN to the discovery list, and it should then self-populate with the other IP addresses. Make sure, after it is detected, that you also configure the MPIO settings for round robin! Cheers, Stephen.

Hi Stephen, good morning! Could you please tell me how to enable the software iSCSI initiators on the ESXi hosts to enable iSCSI communication? We created an iSCSI software adapter in ESXi, but after the rescan I wasn't able to see the hosts on my ESXi tab. We checked the logs.
Hi Murali, I've gone through your comment and blocked out your drive serial numbers. Please make sure you don't publicly post any serial numbers from your configuration, as these can be used maliciously (with warranty claims, etc.). You'll need to reference the best practice document to make sure that the SAN host ports are configured as iSCSI.
I could tell you the command, however I recommend you read the document to fully understand what you're doing, especially if you'll be managing this system. Also, you'll need to follow VMware's instructions on how to enable the software iSCSI initiator. Again, I highly recommend you read their documentation, as it's important you understand what you're doing. Furthermore, I would recommend that you test the TCP/IP communication over the iSCSI links to see if they respond on the configured IPs. This will help diagnose whether you have any problems with the SFP+ modules, or whether it's just configuration that you need to do to get everything up and working. It sounds like you have a bunch of unknowns with your setup and configuration. I might recommend reaching out and hiring a consultant to help configure this, as it sounds like you may require more assistance than what can be publicly provided over a blog post. Stephen.

I'm in the process of setting up a setup very much like you describe here, but with a different disk config.
We also have the Advanced Data Services license for tiering and remote snap. Do I need to switch the 10Gb iSCSI on the controllers and servers to go through a switch instead, for both setups, so that replication can be done between them? Or can I keep the direct connects (servers to SANs) and use another 10Gb port on each controller on each SAN to go to a switch which can also reach the 10Gb switch at the other location? Or would those ports on the SAN then not have access to all the volumes in a way that lets it replicate them across, and also be used by the ESXi guests on both sides?

Hi Gregory, while I've never used the remote snap feature or replication, I think you should be able to keep the servers and SAN connected via DAC, and then just use another free port on the controllers to connect to a switch for replication. Keep in mind you'll want a port from each controller on that switch. I could be wrong, but the hosts have nothing to do with those features, hence why I'm leaning towards just adding a switch on unused ports. I'd also recommend checking the HPe documentation, which might help with your setup; they have a best practices document as well as one on replication. Hope that helps, and my apologies I couldn't provide something more definitive. Cheers, Stephen.