I had the opportunity to spend some time working with a Dell EqualLogic PS5000x SAN yesterday and here are some of my initial thoughts.
3 x Dell 1950, 2.4GHz dual core, 16GB RAM
VMware ESX 3.5 U1
1 x Intel 100/1000 Ethernet port (iSCSI)
1 x 3Com 4924
1 x EqualLogic PS5000x, 16 x 400GB 10K SAS drives
Using the software iSCSI initiator
Snapshots are very efficient and effective. I loaded up a few 10GB virtual machines on a LUN, took a snapshot, and then deleted them. You can’t restore while the LUN is online, so you need to take it offline to restore; however, you can set a snapshot to read/write and mount it while the main LUN/volume stays online.
After restoring from the snapshot, all the data was intact.
You can set snapshots to expire after so many days, schedule automatic snapshots, and so on.
The management interface is very, very user friendly and intuitive. It works best if you run it outside the browser; if you do run it in the browser, IE performs the best.
Thin provisioning is pretty interesting and will definitely prove viable in certain scenarios.
You are able to set up LUNs/volumes as automatic, RAID 5, RAID 50, or RAID 10. This is independent of the storage pool configuration. For example, if you have multiple storage pools across multiple members with different RAID configurations, it will automatically move your data and find the best fit on the RAID type you specify. If you leave it set to automatic, it will do an analysis and migrate data to the areas where it will get the best performance. When I get access to a second unit I’ll be testing this out.
Volume/LUN cloning is pretty simple and straightforward. You can clone the volume/LUN while it’s online; however, you cannot set up thin provisioning on the clone if the original LUN/volume doesn’t have thin provisioning enabled.
There is no dedicated management Ethernet interface. It’s a shared “group” IP address that floats between all three active NICs on the controller.
According to Dell, the next firmware release, 4.0, will add the ability to designate a specific Ethernet port on the controller as a dedicated management port. But we would lose an iSCSI port that way, so it might not be a viable option. The controllers can also be managed via serial cable.
Applying new firmware takes about 60 seconds, and during this time both controllers are unable to serve data. An engineer told me that the next firmware will decrease this to about 20-30 seconds.
There is about a 10% slowdown in read/write speeds when using straight VMDK files, compared to storing the main VMDK file on local or fiber storage and then using the MS iSCSI initiator in the guest to mount the iSCSI LUN. The downside to using the MS iSCSI initiator is the amount of CPU it uses: almost 80% more than using a straight VMDK disk.
Reads and writes are generally about 20-25% slower on iSCSI than on the fiber disk. However, I think the demo setup is somewhat handicapped: we are only using one connection to the iSCSI network from the ESX server, and the switch we are using isn’t exactly the best choice. It seems to cap at about 40-50MB/s.
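For some context on that cap: a single gigabit link tops out well below what a 16-spindle array should be able to push, which suggests the switch and single-path setup, not the EqualLogic, are the limit here. Here is a quick back-of-the-envelope sketch in Python; the ~10% protocol overhead figure is an assumption on my part, not a measured value:

# Rough sanity check on the single GigE iSCSI path.
# The 10% protocol overhead figure is an assumption, not a measured value.
GIGABIT_BPS = 1_000_000_000                 # raw line rate of one gigabit Ethernet link
raw_mb_s = GIGABIT_BPS / 8 / 1_000_000      # ~125 MB/s before any protocol overhead

protocol_overhead = 0.10                    # assumed Ethernet/IP/TCP/iSCSI framing cost at 1500 MTU
ceiling_mb_s = raw_mb_s * (1 - protocol_overhead)   # ~112 MB/s practical ceiling per link

observed_mb_s = 45                          # midpoint of the 40-50MB/s cap seen in this demo
utilization = observed_mb_s / ceiling_mb_s

print(f"Practical ceiling of one GigE link: ~{ceiling_mb_s:.0f} MB/s")
print(f"Observed: {observed_mb_s} MB/s ({utilization:.0%} of that ceiling)")
# Hitting only ~40% of a single link points at the switch / single-path setup
# as the bottleneck rather than the array itself.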
The I/O response times were about 10ms higher on iSCSI than on the fiber disk; for example, 13.4ms on fiber versus 24ms on iSCSI.
More tests will be needed. However, while iSCSI is generally a little slower, given the nature of it the performance is still fast and efficient, and with iSCSI HBAs and better switches I have no doubt these numbers will improve.
Using IOMeter for my tests inside a virtual machine, the server’s CPU usage was about 2-5% lower on the iSCSI virtual machine compared to the fiber virtual machine.
There is much more to test and explore, but these are my initial thoughts and I figured I would share them.