Saturday 31 January 2009

Citrix Marketing Bullsh_t Dispelled by VMware (again)

I can't tell you how pleased I was to see VMware _finally_ publish some data on a more realistic XenApp scenario than they have previously. Back in the day they used to push the 'single vCPU for Terminal Servers' argument, which as any Citrix guy will know is sub-optimal in more than a few respects (mostly relating to the relationship of the TS console session and logons to CPU0).

Now I'll be the first to admit that any kind of test and resulting statistical analysis can be very subjective, but I'll tell you what I think is the most important point to take from this latest study from VMware - the performance isn't that different. Sure, ESX came out on top, but not by much (I'm sure Citrix will come out with some kind of data that shows the contrary before too long, and that's fine). And why is that important? Because I am sick and tired of the marketing bullshit from Citrix and the subsequent throwaway one-liners that propagate through the intertubes and into the minds of Administrators in the enterprise, regarding XenServer being "optimised for XenApp". I have _never_ seen any further details on what this actually means.

Well I'm throwing down the gauntlet on this one. I'm opening up the comments on this post, in the hope that Simon Crosby, Ian Pratt, or anyone else at Citrix whom I highly respect on a technical level will tell us all what the fuck these XenApp-specific optimisations in XenServer actually are, and why the same configuration could not be implemented with tweaks to ESX (not that they're even necessary, going by the VMware post). I'll even take a reply from hype-master Roger Klorese. If it is some kind of secret, I have no problem with that - just tell us so (and I'll continue to call it complete BS). If it's something that can only be disclosed under NDA, then tell us that too, because I guarantee you that 90% of the people who read this blog regularly are working for large enterprises who have NDAs in place with Citrix, so we can follow that avenue if necessary (and of course honour the NDA - I'm not stupid enough to lose my job for the sake of this blog). Cut the bullshit Citrix - we just want to know.

UPDATE As numerous commenters have pointed out, I completely neglected to mention the excellent work of Project VRC. But again, in line with my original point, the differences between the hypervisors are so minor that it really doesn't matter - as Duncan said, the main thing is that Citrix workloads are finally viable targets for virtualisation, because the difference to physical is also insignificant when you consider all the benefits of virtualisation.

As for Simon Crosby's retort, I won't bother picking through it. Citrix are easily as guilty as anyone when it comes to spreading one-sided "studies" under the guise of science, and their best buddies at Microsoft are the kings of the hill in that arena. Here's a tip for you Simon - Microsoft are also the kings of "embrace, extend, extinguish", remember? We all know what's gonna happen when they enter the connection broker market.

BUT I will say this much in credit to Simon - he hit the nail on the head with his comments regarding VMware's draconian "thou shalt not publish benchmarks" stance. This has been a bugbear of mine for a long time, and I've said as much to fairly senior people at VMware. A change in that would be most welcome indeed.

Also, as a few of the anonymous commenters pointed out, Citrix offered this explanation six months ago as to what the 'secret XenApp optimisation sauce' actually is - which I can only assume is now completely null and void with the widespread availability of RVI / EPT. Oh well, it was fun while it lasted I guess.

Friday 30 January 2009

Next London VMUG - March 10th

As announced by Al Davies and reported by others, the next London VMware User Group meet will be on March 10th.

Come along to hopefully learn something, and say hi to some of the big names in the Virtual blogosphere such as Mike "RTFM or STFU" Laverick, Alan "Oui Oui, Poo Poo" Renouf, and the lads from Xtravirt. There's always the chance of Tom Howarth dropping by, as well as any number of people from VMware UK. The "bloggers attempting to get fame by association" from vinternals, Shyam and I, will also be there.

And if none of that interests you, then the pub session afterwards should. Pint of Peroni, thanks!

Wednesday 28 January 2009

Shanklin Elevated From "Demigod" to "FullGod"... Film at 11.

After having a look around the new VI Toolkit release today, and the accompanying videos and documentation, I am convinced Carter will be hailed as a new God by VI admins the world over. I mean, even Boche has stated this release may be enough to move his lazy ass into finally learning the ways of the 'Shell (I've been at him for ages about this, but alas he just ignores me).

I sooooo want to show the VI Java API some love (which was quietly updated to 1.0U1 only days ago), but with Windows Server 2008 R2 Core finally providing .NET support, I'm more likely to wrap PowerShell scripts up in webservices for interop purposes than get dirty with Java (and that's no reflection on the quality of what Steve Jin has done, which is top class).

Massive props to Carter and his team. With tools like this, I don't give a fuck if vCenter or anything else becomes Linux-based. Let's hope vCenter Orchestrator has native PowerShell support.

VI Toolkit 1.5 Released

The long-awaited version 1.5 of the VI Toolkit has finally GA'ed, as Carter announced in the wee hours of this morning (2:30AM... I always wondered if Carter was a robot sent from the future, hopefully not to destroy us all though).

Anyway 'nuff said, I'm gonna go get it now!

Sunday 18 January 2009

The Myth of Infrastructure Contention

Back... caught u lookin for the same thing. It's a new thing, check out this... oh no wait. It ain't a new thing, it's just another Sunday Arvo Architecture And Philosophy post. This time I'm going to focus on a long-time thorn in many of our sides - the myth of infrastructure resource contention.

This ugly beast rears its head in many ways, but in particular when consolidating workloads or changing hardware standards (such as standardising on blades). Of course, the people raising these arguments are often server admins with little or no knowledge of the storage and network architecture in their environments, or consultants who have either never worked in large environments or also do not know the storage and network architectures of the environment they have come into. Which is not their fault - due to the necessary delineation of responsibility in any enterprise, they just don't get exposure to the big picture. And again, I should say from the outset that I'm talking ENTERPRISE people! Seriously, if I cop shit from one more person who claims to know better based on their 20-host "strong" ESX infrastructure or home fucking lab, I am going to break out the shuriken. YES THE ENTERPRISE IS DIFFERENT. If you have never worked in a large environment, you can probably stop reading right now (unless you want to work in a large environment, in which case you should pay close attention). Can you tell how much these baseless concerns get to me? Now where was I...

OK, a few more disclaimers. In this post I will try to stay as generic as possible to allow for broader applicability, and focus on single paths and simplistic components for the sake of clarity. Yes, I know about Layer 3 switches and other convergent network devices and topologies, but they don't help to clarify things for those who may not know of such things. Additionally, the diagrams below are a weird kind of mish-mash of various things I've seen in my time in several large enterprises, and I suck at Visio. Again, I have labelled things for clarity more than accuracy, and chopped stuff down in the name of broad applicability. Keep that in mind before you write to me saying I've got it all wrong.

IP Networks
So let's tackle the big one first, IP networks. Before virtualisation, your network may have looked something like this:

Does that surprise you? If it does, go ask one of your network team to draw out what a typical server class network looks like, from border to server. I bet I'm not far off. Go and do it now, I'll wait for you to get back.

OK, enlightened now? And in fact if you are, the penny has probably already dropped. But in case it hasn't, let's see what happens when the virtualisation train comes rolling in, and your friendly architecture and engineering team propose putting those 100 physicals into a blade chassis. It is precisely at this point that most operations staff, without a view of the big picture, start screaming bloody murder. "You idiot designers, how the hell do you think you can connect 100 boxes with only 4Gb of (active) links!!! @$%#@%# no way I'm letting that into production you @#%$%!!!". However, when we virtualise those 100 physical boxes and throw them all into a blade chassis, our diagram becomes:

OK, _now_ the penny has definitely dropped (or you shouldn't have administrative access to production systems). IT DOESN'T MATTER WHAT IS BELOW THE ACCESS LAYER. Because a single hop away (or 2 if you're lucky), all that bandwidth is concentrated by an order of magnitude. The network guys have known this all along. They probably laughed at the server guys' demands for GbE to the endpoints, knowing that in the grand scheme of things it would make fuck all difference in 90% of cases. But they humoured us anyway. And lucky for them they did, because the average network guy's mantra of "the core needs to be a multiple of the edge" needs to be tossed out on its arse, for different reasons. But that's another post :-).
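Before we move on to storage, let me put some purely illustrative numbers on that "fuck all difference" claim (they're plucked out of the air to make the point, not measurements): say each of those 100 physicals pushes an average of 5Mbit/s of sustained traffic - plenty won't even manage that. That's 500Mbit/s of aggregate demand, which fits inside a single GbE uplink with headroom to spare, never mind the 4Gb of active uplinks on the chassis. The 100Gb of theoretical edge bandwidth those physicals had before was never being used - it just looked good on a diagram.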

Fibre Channel Networks
I know I know, I really don't need to be as blatant about it this time, because you know I'm going to follow the exact same logic with storage. But just to drive the point home, here again we have our before virtualisation infrastructure:

And again, after sticking everything onto a blade chassis:

I don't think the above needs any further explanation.

I'm sure there are a million variations out there that may give rise to what some may think are legitimate arguments. You may have a dedicated backup network, it may even be non-routed. To which I would ask: what is the backup server connected at? What are you backing up to? What's the overall throughput of that backup system? Point is, there will _always_ be concentration of bandwidth on the backend, be it networking or storage, and your physical boxes don't use anywhere near the amount of bandwidth that you think they do. You may get the odd outlier, sure. Just stick it on its own box, but still put ESX underneath it - even without the added benefits of SAN and cluster membership, from an administrative perspective you still get many advantages of virtualising the OS (remember, enterprise. We don't pay on a per-host basis, so the additional cost of ESX doesn't factor in for enterprises like it would for smaller shops).

OK, time to wrap this one up. Your environment may vary from the diagrams above, but you _will_ have concentration points like those above, somewhere. That being the case, if you don't have network or storage bandwidth problems before virtualisation, don't think that you will have them afterwards just because you massively cut the aggregate endpoint connectivity.

Thursday 15 January 2009

vInternals Launches vIdiot Program!

Funny thing, nature. As the Tao Te Ching says, "when beauty is abstracted, then ugliness has been implied." That is, for something to be considered "beautiful", there must be something else considered "ugly". And so too, there cannot be a "vExpert" without a "vIdiot".

So we here at vinternals have taken it upon ourselves once again to be the Yin to VMware's Yang, and commenced nominations for the inaugural vIdiot award. The nomination process itself is pretty straightforward. Simply find a dumbarse quote from someone who is among the unenlightened, and forward it along with a link to the source. Here are some examples:

Can anyone tell me how VMware keep saying that they are using Para metal Virtualization and overcommiting at the same time?!!! - Mohamed Fawzi

In the case of unplanned downtime, VMotion can’t live migrate because there is no warning. - Jeff Woolsey

We’re designing Windows Server virtualization to scale up to 64 processors, which I’m proud to say is something no other vendor’s product supports. We are also providing a much more dynamic VM environment with hot-add of processors, memory, disk and networking as well a greater scalability with more SMP support and memory. - Mike Neil (talking up what will be in Hyper-V 1.0 one month before they chopped everything)

We know you don't get it. That's fine. Chin up, and do please keep writing. - Simon Crosby (directed at Mike D)

You’ve got to question whether it’s worth paying $50,000 for that. I know the VMware camp go on about features like VMotion, but for $50,000 I could pay someone to move my virtual machines for me. - David Furey


I probably burned a few bridges just now. Oh well, so I'll never be an MVP, sif care. But a vExpert I could be, with some help from you! So if you value a balanced view, mixed humour and deep tech, and thoughts on the future just as much as the pure technical info that appears more regularly on other sites (like those listed in the "linkage" section down the right there), get on over to the nomination page and do me proud. Anyone who mails me on vinternals at gmail dot com can find out my name. Shameless? Don't know the meaning of the word ;)

Send your vIdiot entries to vinternals at gmail dot com, entries will close on February 13th and results posted around the end of Feb!

UPDATE Whoops, looks like a few of us got it wrong with the nomination process. So I've struck out the shameless plea for nominations, as I've already had at least one!

Sunday 11 January 2009

Boot Windows 7 From VHD!

In the virtualisation space, one of the often-demo'ed features of the upcoming Windows 7 / 2008 R2 is the ability to boot directly from VHD. Microsoft have effectively created a "loopback HBA", which the bootloader can use to address VHDs just like a regular disk. This is pretty cool for a whole host of reasons, and easy to achieve.

1. If you're going to install another Windows 7, first set the boot menu description of your current installation with bcdedit /set {current} description "Windows 7 HDD"

2. Now create a new VHD. You can do this via the GUI in disk management, or via the CLI with diskpart. Since you're gonna need diskpart for the install, might as well use that now. I'm sure you can figure out what I'm doing here - fire up diskpart and enter create vdisk file=d:\vhd\windows7.vhd type=fixed maximum=20000

3. Installation time. Drop in your Windows 7 installation media, reboot, and when you get to the first screen prompt of the installation hit Shift+F10 to bring up a command window. Run diskpart, then enter select vdisk file=d:\vhd\windows7.vhd, then enter attach vdisk, then exit diskpart. Note that if you have multiple physical disks, Windows PE may not have honoured your drive letter assignments. Enter list vol from within diskpart to see what it's done if you run into any trouble.

4. Continue the installation process, and when you get to the choice of disks to install onto, you should see your VHD sitting in the list of available options just like a regular disk.

5. Select the VHD and go read something while the install finishes. You're done!

When the machine reboots, you'll notice that the VHD boot menu option is now the default. This can be easily changed using bcdedit from within either of the Windows environments you have booted into.
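To give a rough idea - and this is a sketch from memory rather than something I've just run, so sanity check it against bcdedit /? - boot into whichever install you want as the default and run:

bcdedit /default {current}

Or run bcdedit /enum, note the identifier of the entry you want, and pass that to /default instead of {current}.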

Next thing I'll probably try is to see just how much I can cut down the original install so I can use VHDs for everything, kind of like a semi-client hypervisor. Sounds like a perfect job for Server Core... hopefully it's functional enough to be able to do this - I can't imagine why it wouldn't be.

UPDATE Just been messing around with Server Core in a VM, and it looks like all systems are go. Using the same commands as above I could create / attach / format a VHD. Server Core looks _great_ in 2008 R2. Installing PowerShell and launching it nearly brought a tear to my eye... finally Server Core has arrived.
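For anyone wanting to repeat the create / attach / format exercise, something along these lines should do it - a bit of PowerShell feeding diskpart a script file (the path, size and label are just placeholders, not what I actually ran):

@'
rem create, attach and format a ~10GB fixed VHD
create vdisk file=c:\vhd\test.vhd type=fixed maximum=10000
select vdisk file=c:\vhd\test.vhd
attach vdisk
create partition primary
format fs=ntfs label=vhdtest quick
assign
'@ | Set-Content "$env:TEMP\vhdtest.txt"
diskpart /s "$env:TEMP\vhdtest.txt"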

UPDATE 2 Reader Patrick S has just pinged me to let me know that this technique also allows for installation of Windows XP / Server 2003! I assume you'd need to use the Windows 7 PE environment or something... but he's done it, so there's a way. Nice one Patrick, thanks for the heads up!

Wednesday 7 January 2009

VMworld Europe 2009 - Boche Will Be There, Will You?

Hot off the intertubes - bench-pressing maniac Jason Boche, who also has a popular virtualisation blog (that's right people, it's spelled with an 's'. Sometimes I wonder where the "English" in "US English" went) has announced he will be attending VMworld Europe 2009. This is great news, because it's more likely that my as yet unannounced 'US vs EU Bloggers Sumo Challenge' will go ahead if more US bloggers actually get to the event :)

Oh, those lowly bloggers from vinternals will be there as well, but who cares.

UPDATE Legendary Scott Herold will also be there to represent Team USA! Thank god Oglesby doesn't blog!

VMware 2009 Product Lineup

I was under the impression that some of these products were still in super secret squirrel status, but as they're now appearing on the VMware Products homepage, I guess that's no longer the case!


Since you're all going to go and check out the VMware spiel anyway, I won't bother repeating it here.

What I will say though is that I'm not sure how wise it is for VMware to simultaneously spread themselves even thinner by releasing more products and eat into the opportunities for ISVs. 2009 is going to be a tough year for everyone, and a healthy ISV market is good for a software company. The last thing VMware should be doing is trying to offer everything themselves, thus driving ISVs like Veeam, Quest and the many others into Microsoft's camp, where they will no doubt be welcomed with open arms!

Sunday 4 January 2009

The Rise of the Stupid Endpoint

Welcome to the first SAAAP of 2009. And boy am I feeling philosophical today :-). The similarities between this post and David Isenberg's article Rise of the Stupid Network end with the title, but if you haven't read Isenberg's article you should do so - the principles he outlined over 10 years ago are just as applicable today to network infrastructure.

The Failure of Distributed Computing
The proliferation of distributed computing in the datacenter was essentially driven by the low cost of x86 hardware. Rather than spending huge sums on mainframe or proprietary Unix infrastructure and having it sit there underutilised for a long period (hmmm, sound familiar?) or using it for non-critical applications, why not buy smaller, cheaper x86 servers instead? The management of these many small systems was never really factored into the equation however, and before too long there were more applications for management of this infrastructure than you could poke a stick at. But for a long time, those management systems were generally lacking in intelligence. And because of that, more staff were needed to manage the management systems, respond to alerts etc etc. x86 didn't turn out to be as cheap as promised, not because of the hardware but because of the lack of intelligent management systems.

What We Learned from Management Systems
The distributed computing management systems taught a new generation of system administrators (myself included) what had existed in the mainframe infrastructure for decades - the advantages of centralisation. Monitoring baselines, patch and software distribution, backup, job scheduling... these things have always been centralised in enterprise distributed computing environments. Such environments would be extremely expensive to operate otherwise. Yet for some reason this train of thought never really made it all the way to the personality of the endpoint. Because of this, system-level backup and restore is of paramount importance in today's datacenter. DR and lifecycle management are painfully manual and labour-intensive processes. Administrative overhead is exacerbated.

Endpoint Stupidity, Centralised Intelligence
You cannot make something stupid without simultaneously making something else intelligent. And that intelligence needs to be centralised. A single endpoint has many touch points in the enterprise. Are all these touch points still required? Is the cost of removing them greater than the cost of living with them? Is the pain simply due to a lack of intelligence in the tools, or the processes that you follow? Orchestration tools may be the solution for this, but do not fall into the trap of merely alleviating the pain and not addressing the underlying cause.

Infrastructure Like Water
"The height of cultivation is really nothing special. It is merely simplicity; the ability to express the utmost with the minimum." - Bruce Lee
That's what it all boils down to. In order for distributed computing infrastructure to work, we need to simplify the endpoint. When the endpoint is simple, it can be formless like a cloud (sorry, I couldn't resist), empty like a cup. You cannot have elasticity in the datacenter without this simplification. A piece of hardware may be running ESX today, Windows tomorrow, and be shut down the next day. In order to do that, the personality of the hardware cannot exist on the hardware itself. Likewise with a workload. It may ask for one level of resources today and another tomorrow - it doesn't matter. It may be running in one location today and another tomorrow - it shouldn't matter. Virtualisation has obviously made this much easier than it would have been otherwise, and I'm taking virtualisation as a whole here - not just hypervisors, but VLANs, VSANs, VIPs, DDNS etc etc. But the same principles need to apply to both physical and virtual infrastructure.

End Game
So that's the what; it's up to people like you and me to figure out the how. With problems like these to solve, how can 2009 not be a good year :-)

Friday 2 January 2009

VI Toolkit and the PowerShell 2.0 Integrated Script Editor

Finally got a chance to install the PowerShell 2.0 CTP3, and have a look at the Integrated Script Editor (ISE). It has all the basic functionality you would expect from a scripting editor, including a very handy tab completion of cmdlets. It didn't pick up the VI Toolkit cmdlets straight away for me though, so here's how you'd enable that if you run into the same issue.

The ISE only picks up snapins that are loaded via one of the 'AllHosts' profiles, which is either '$pshome\profile.ps1' (the AllUsers\AllHosts profile) or '$home\[My] Documents\WindowsPowerShell\profile.ps1' (the CurrentUser\AllHosts profile). Until now I have stuck with using the CurrentUser\CurrentHost profile to customise my shell, which is in '$home\[My] Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'. As a result, all the VI Toolkit cmdlets are available in any regular PowerShell console session; however, they aren't available in the ISE.

So the fix was obviously simple - rename '$home\[My] Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1' to 'profile.ps1'. For anyone out there who doesn't load the VI Toolkit with their normal PowerShell environment, simply add the following line...

Add-PSSnapin VMware.VimAutomation.Core

... to $home\[My] Documents\WindowsPowerShell\profile.ps1 and you're done!
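And if you want to be a little more defensive about it (totally optional, just how I'd write it), wrap the snapin load in a check so that re-running your profile doesn't complain about the snapin already being loaded:

if (-not (Get-PSSnapin VMware.VimAutomation.Core -ErrorAction SilentlyContinue)) {
    Add-PSSnapin VMware.VimAutomation.Core
}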

Thursday 1 January 2009

Deploying OVF's with the VMware OVF Tool

I finally got around to OVF'ing a Windows XP build. Although I have an unattended-setup XP .iso which makes building XP VMs hassle-free and pretty fast, it's obviously nowhere near as hassle-free and fast as deploying from OVF!

Although you can import OVFs with VMware Workstation, it's a bit cumbersome. I've also had some weird experiences with it, such as the "split disk into 2GB files" option being ticked and greyed out when I didn't want the option selected (both for export and import operations). VMware Player is no better, as you can't control where the OVF is deployed to. Back when we were developing statelesx, we used a Java-based version of the OVF Tool from VMware to create the virtual appliance package. But someone at VMware has been quietly beavering away (I say quietly because I haven't seen _any_ publicity from VMware about this tool), and it's now at a 1.0.0 build (although still labelled as a "technology preview").

And just like Boche, they've injected some steroids into the tool... for example (note I haven't tried this... yet) you could pull an OVF straight off a webserver and plonk it onto a host, specifying datastore and portgroup details with:

ovfTool "http://my_ovf_server/ovfs/myAppliance.ovf" "vi://username:pass@my_vc_server/my_datacenter/host/my_host&datastore=my_datastore&net:ovf_network_name=vc_network_name"

So check it out - it's a nice and fast way to deploy OVFs locally for use with Workstation or Player, but it also has some serious enterprise functionality.
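And while I haven't tried this one yet either (so treat it as a guess based on the same source/target pattern as above), deploying locally for Workstation or Player should just be a matter of pointing the target at a .vmx path instead of a vi:// locator, something like:

ovfTool "http://my_ovf_server/ovfs/myAppliance.ovf" "C:\VMs\myAppliance\myAppliance.vmx"

The local path is obviously whatever you want it to be, and a local .ovf file should work as the source just as well as a URL.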