3.2GB/s of sustained throughput from a budget SAN

The configuration

  • 2 HP G8 servers
  • Each server with 3 FusionIO cards
  • 2 Emulex 16Gb FC HBAs
  • DataCore SANsymphony-V

This was in 2012. Since then, DataCore SANsymphony-V has reached V9, which is up to 25% faster. Other options are also emerging to replace the FusionIO cards, such as WhipTail or the Intel 910 series SSDs. Of course the servers can be any brand, but little can replace SANsymphony-V. You can easily get this kind of performance in 2U. The price is much lower than you would think, and it’s a SAN, not a DAS.
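
As a back-of-the-envelope sanity check of my own (not a DataCore figure): a 16Gb FC port carries roughly 1,600MB/s of usable bandwidth, so a pair of ports lines up almost exactly with the 3.2GB/s headline number.

```python
# Back-of-the-envelope check: can a pair of 16Gb FC ports carry 3.2GB/s?
# 16GFC runs at 14.025Gbit/s on the wire with 64b/66b encoding, which works
# out to roughly 1,600MB/s of usable bandwidth per port (approximate figure).
usable_mb_per_port = 1600   # ~MB/s per 16Gb FC port
ports = 2                   # the two Emulex HBAs

print(f"~{usable_mb_per_port * ports / 1000:.1f}GB/s")  # ~3.2GB/s
```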

Would you like to become a DataCore SANsymphony-V Implementation Engineer? I will be delivering DCIE training from the 3rd to the 5th of June. If you are interested, you can leave me a message on the Contact Me page.

You can see a short video of the above configuration here: http://www.youtube.com/watch?v=yr1dNUTaIKE

Intel vs ARM

In Q1 2013, Intel had $12.6B in revenue and ARM had $263.9M. It’s not a fair comparison, since Intel has foundries and manufactures its own chips: the full price of a CPU is revenue for Intel, while ARM’s revenue per CPU is a royalty of around 4.6 cents. Also, ARM essentially has one product, the CPU (and a small GPU that goes with it), while Intel sells motherboards, servers, network cards, RAID controllers, SSDs and more. Still, the lion’s share of Intel’s revenue is CPUs.

In Q1 2013, Intel made $2B in profit and ARM made $112.6M. Roughly 16% of Intel’s revenue is profit, while roughly 43% of ARM’s revenue is profit.
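
Just to spell out the margin arithmetic, using the figures quoted above:

```python
# Profit margin = profit / revenue, using the Q1 2013 figures quoted above.
intel_revenue, intel_profit = 12.6e9, 2.0e9
arm_revenue, arm_profit = 263.9e6, 112.6e6

print(f"Intel margin: {intel_profit / intel_revenue:.0%}")  # ~16%
print(f"ARM margin:   {arm_profit / arm_revenue:.0%}")      # ~43%
```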

More statistics for Q1 2013: ARM partners shipped 2.6B chips, an increase of 35%, while Intel’s PC Client Group revenue went down by 6%.

In 2012, Intel spent more than $10B on R&D.

What will Haswell and Bay Trail do for Intel?

Dynamic vs Fixed VHD(x) – actual performance figures

Today I thought of measuring the difference between fixed and dynamic VHD and VHDX. We read a lot about it, but I want to replace “faster” or “slower” with actual figures. Of course the figures will vary from one setup to another, but I still want something indicative.

I already have the following setup up and running:

One Core i7 PC with 32GB RAM and an Intel 520 series SSD, running Windows Server 2012 as the base OS with the iSCSI Target Server role and VMware Workstation 9.0 installed. In VMware Workstation I have 2 VMs, each allocated 7GB RAM and running Windows Server 2012 with the Hyper-V role enabled. These 2 VMs form a cluster and are presented with an iSCSI CSV from the physical Windows Server 2012. In other words, a small cluster on a single PC.

What is important is that I have a Hyper-V 3 cluster using a CSV, and that CSV sits on an Intel 520 series SSD. In this cluster I have a Windows Server 2012 VM, which is also the AD domain controller for the cluster. This VM has the following drives, all of which are on the CSV:

  • C: – Boot / OS
  • D: – DVD – An ISO file of around 3.44GB
  • E: – Test drive – 25GB Fixed VHD
  • F: – Test drive – 25GB Fixed VHDX
  • G: – Test drive – 25GB Dynamic VHD
  • H: – Test drive – 25GB Dynamic VHDX

I ran 3 tests on each of the 4 test drives (a short sketch of how such a copy can be timed follows the list):

  • DVD to Disk – Creation: copy the DVD contents to folder 1 on the test disk
  • Disk to Disk – Creation: copy the contents of folder 1 to folder 2 on the same drive
  • Disk to Disk – Overwriting: delete the contents of folder 2 and re-run the Disk to Disk – Creation copy
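
For anyone who wants to reproduce a test like this, a minimal sketch of timing a folder copy and turning it into an MB/s figure could look like the following (the paths are hypothetical):

```python
import shutil
import time
from pathlib import Path

def copy_and_measure(src: str, dst: str) -> float:
    """Copy a folder tree and return the average throughput in MB/s."""
    total_bytes = sum(f.stat().st_size for f in Path(src).rglob("*") if f.is_file())
    start = time.perf_counter()
    shutil.copytree(src, dst)   # dst must not already exist
    elapsed = time.perf_counter() - start
    return total_bytes / elapsed / (1024 * 1024)

# Hypothetical paths: folder 1 holds the copied DVD contents on the E: test drive.
rate = copy_and_measure(r"E:\folder 1", r"E:\folder 2")
print(f"{rate:.2f}MB/s")
```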

This is not high-performance gear and it’s nothing close to a production system; however, the idea is to run the same setup against dynamic and fixed VHD and VHDX and see the performance difference. Because I’m using VMware, the virtual NIC over which the CSV is presented is 1GbE. I’m not a VMware expert, and I wonder if it can be 10GbE like in Hyper-V.

The results

Drive   Type           DVD to Disk (Creation)   Disk to Disk (Creation)   Disk to Disk (Overwriting)
E:      Fixed VHD      12.96MB/s                34.65MB/s                 34.22MB/s
F:      Fixed VHDX     12.93MB/s                35.39MB/s                 34.62MB/s
G:      Dynamic VHD    11.28MB/s                25.05MB/s                 37.12MB/s
H:      Dynamic VHDX   12.23MB/s                33.56MB/s                 35.36MB/s

My interpretation

If the difference in performance is very small, it could be within the margin of error of the tests. Each test was run only once, due to time constraints. The penalty percentages below are derived directly from the throughput figures in the results table; a short sketch reproducing them follows the list.

  • DVD to Disk – Creation
    • Fixed VHDX is negligibly slower than fixed VHD; this could be within the margin of error
    • The dynamic VHD penalty compared to fixed VHD is 15%
    • The dynamic VHDX penalty compared to fixed VHDX is 6%
  • Disk to Disk – Creation
    • Fixed VHDX is around 2% faster than fixed VHD
    • The dynamic VHD penalty compared to fixed VHD is a whopping 38%
    • The dynamic VHDX penalty compared to fixed VHDX is 5%
  • Disk to Disk – Overwriting
    • Fixed VHDX is marginally faster than fixed VHD
    • Dynamic VHD is actually 8% faster than fixed VHD*
    • Dynamic VHDX is actually 2% faster than fixed VHDX*
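
For completeness, the percentages above can be reproduced from the results table by dividing the fixed-disk rate by the dynamic-disk rate (a negative result means the dynamic disk was faster):

```python
# Reproduce the penalty/gain percentages from the throughput figures in the results table.
rates = {
    "DVD to Disk - Creation":     {"fixed_vhd": 12.96, "fixed_vhdx": 12.93, "dyn_vhd": 11.28, "dyn_vhdx": 12.23},
    "Disk to Disk - Creation":    {"fixed_vhd": 34.65, "fixed_vhdx": 35.39, "dyn_vhd": 25.05, "dyn_vhdx": 33.56},
    "Disk to Disk - Overwriting": {"fixed_vhd": 34.22, "fixed_vhdx": 34.62, "dyn_vhd": 37.12, "dyn_vhdx": 35.36},
}

for test, r in rates.items():
    # Positive = the dynamic disk is slower than the fixed one; negative = it is faster.
    vhd_penalty = (r["fixed_vhd"] / r["dyn_vhd"] - 1) * 100
    vhdx_penalty = (r["fixed_vhdx"] / r["dyn_vhdx"] - 1) * 100
    print(f"{test}: dynamic VHD {vhd_penalty:+.0f}%, dynamic VHDX {vhdx_penalty:+.0f}%")
```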

The SandForce Factor

One of the reasons why dynamic disks are slower than fixed ones is that a dynamic disk needs to expand as data is written. This expansion is a performance penalty, but it is hugely improved in VHDX, where the penalty drops from 38% to around 5%. However, the Disk to Disk – Overwriting test shows the dynamic disks actually beating the fixed ones. In my opinion the reason is the underlying SSD. I am using the Intel 520 series, which uses a SandForce controller, and SandForce controllers compress data before writing it to the flash cells. So when the fixed VHD(X) files were created, zero-filled, very little was actually written to the flash cells, and during the Disk to Disk – Creation test a lot of flash cells had to be allocated for the new data. In the Disk to Disk – Overwriting test, the SandForce controller was merely replacing data with almost identical data. This could account for the behaviour marked with * in the results table.
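
SandForce’s compression (DuraWrite) is proprietary, so the following is only an illustration of the general point rather than what the controller actually does, but it shows why a zero-filled fixed VHD(X) costs a compressing controller almost nothing to store, while real data does not:

```python
import os
import zlib

block = 1024 * 1024  # work on 1MiB blocks

zeros = bytes(block)       # what a freshly created, zero-filled fixed VHD(X) contains
data = os.urandom(block)   # stand-in for real, poorly compressible file contents

print("zero-filled block compresses to", len(zlib.compress(zeros)), "bytes")  # ~1KB
print("real-data block compresses to", len(zlib.compress(data)), "bytes")     # ~1MiB
```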

Further Tests

While it would be interesting to run the same tests on Hyper-V 2 (which has no VHDX), it is probably not worth the hassle. On the other hand, it would be interesting to run the same tests on a mechanical HDD to eliminate the SandForce controller variable. Perhaps when I find some time, I’ll do just that.

Hyper-V Virtual Machine Storage Migration looks inactive for a long time

In a Windows Server 2012 cluster, when you use Failover Cluster Manager to initiate a Virtual Machine Storage move, the Information column shows “Starting virtual machine storage migration…” and stays like that. In reality the migration is running, but nothing more is shown. When it is done, the Information column goes empty and the Resources tab shows the new storage location.

The word “Starting” is very misleading, since the migration is not starting but actually running. To see the real status, go to Hyper-V Manager, where you get a status such as “Moving Storage (78%) – Synchronizing files”.

Dell issue with Intel NIC cards

Last update – 24th April 2013

A customer needed a small cluster for around 10 virtual machines, so we decided to go for a design I’ll be talking about in a future post. The hardware requirement for this cluster was 2 rack-mount servers, each with as many hot-swap hard drives as possible. An RFQ was created and Dell won with the R510. The price was good and it has 12 hot-swappable 3.5” front HDD bays. Part of the RFQ was for the NICs to be Intel and for no OS to be installed.

The servers arrived, and the first thing was to install Windows Server 2008 R2 as the hypervisor. I literally spent 3 days and nights trying to do this, but the servers were just slow, dead slow, unbelievably slow. At times it took hours to copy the Windows installation files. The Dell agents in Malta were also baffled by this issue; I called them several times during those 3 days. Eventually I took the servers to their offices, where they got immediate attention, particularly from Stefan and Brian, though the whole team was on the ball. In less than 24 hours, Brian found this Microsoft KB – http://support.microsoft.com/kb/2383674.

So if you remove the Intel network cards that Dell supplied pre-installed, install Windows Server 2008 R2, update the Intel network card drivers from Microsoft and then plug the Intel network cards back in, everything works fine. On the other hand, the same Intel NICs work well in other systems.

My guess is that at PCIe level, the Intel NIC and the Dell PERC (Dell’s name for its LSI RAID controller) compete with each other to become bus master. This is not proven, just a guess on my part.

That’s fine, but I would expect that when Dell builds a server and puts Intel network cards in it, the minimum would be a note saying “If installing Windows Server 2008 R2, please read this web post before installing”.

Recently I installed Windows Server 2012 on a different, but still R510, server and did not encounter this issue, nor when installing Windows Server 2012 on an R710. I love and use the Dell R510, mainly because I can have 12 front hot-swappable 3.5″ drive bays, which makes it ideal for storage. The R720xd is the new offering from Dell with 12 x 3.5″ or 24 x 2.5″ front hot-swappable drive bays and Intel processors, and it can be ordered with Intel on-board NICs, which makes it ideal as a storage server.

About this blog

I have been thinking of starting a blog for around a year, but I never had the time. Because this has been a long time coming, I already have a number of posts in mind, which I’ll be working on and publishing soon. These include:

  • Dell issue with Intel NIC cards
  • The meaning of High-Availability
  • A fully highly available cluster, including HA storage, on just 2 servers
  • Zeus file server
  • 3CX updates

Of course, now that I have the blog, every time I see something that is valuable to share, it will get blogged. It is also important to improve the look and feel of this blog; I’ll be working on that in the coming weeks.

Let me be clear on some very important stuff

  • This is my personal blog. I did not make it part of my company, so that I can be fully independent
  • No IT product is perfect. We customers (yes, I’m a customer: I work for a re-seller / distributor, but I’m a customer, not a vendor) have the right to know about the issues a product has. Please don’t tell us only the good points – it looks fake
  • I’m not paid or in any way supported by anyone to write good or bad things about any product. What I write about is what I experience
  • The products I like will end up being the products I use most. The products I use most will be the products I find things to write about, both positive and negative
  • Anything I write is my own experience or opinion. It will differ from your experience and opinion. I’m interested to know about your experience and opinion: the more we share, the more we learn and the better we get. If you are not interested in my experience and opinion, then feel free not to visit this blog
  • Should I write something that you think is technically incorrect, then please do share your thoughts and I will publish them. That way we and the other readers will learn more. This is about knowledge sharing. This is about community

About Myself

Hi All

Thanks for visiting my blog.

I’m Philip, from Malta – Europe. For international readers, Malta is a tiny island in the middle of the Mediterranean. We are independent and part of the European Union.

I have been active in corporate ICT since the age of 16, when I had my first corporate customer. I was still attending school, where my formal education was in electronics. The first customer was a small factory. There were no mobiles back then, so I went to the customer by public transport (which was, and still is, horrible in Malta) and charged them Lm0.25c (less than €1 even after accounting for the official inflation rate) per hour. I soon got a second client, where the hourly rate was more reasonable.

After school, I worked as the IT head of a factory that employed 550 people at the time, while still keeping my private corporate customers part time. In 1996, I decided to go freelance full time, and at the end of 2000 I formed my first company – Micro Technology Consultancy Ltd.

My ICT interests today are storage, virtualisation and VoIP. On a personal level, science and local politics are 2 of the subjects that interest me.

I hope you will like my blog. I will be tweeting updates on @pcortis – please follow for updates.