
Virtualization Mini Forum

Ottawa, January 21 2009

I was invited to, and attended, the Virtualization Mini Forum held at the Westin Hotel in Ottawa.

Due to circumstances entirely of my own making, my arrival was delayed until after 9 AM, so I missed the opening remarks.

These are my notes from the day.

What's Happening

VMware has virtualized CPU and memory, but more importantly, I/O operations. Servers with enough I/O bandwidth to feed today's 4-core CPUs exist now, with room for growth through the 6-, 8-, and 12-core parts currently announced.

VMware has abstracted the hardware: the VMware x86 virtual machine is the reference platform of the near and medium future. This has major implications for

  • longevity of systems: when hardware dies, the VM moves; no longer tied to ancient, failing hardware
  • easy standardization for build targets
  • the entire way IT services are designed, built, deployed and operated

Definition: Cloud

The term "Cloud" has been over-used. The term "private cloud" has been derisively dismissed as nothing more than a fancy term for a private datacenter. It isn't. The term "cloud" implies not just virtialization

  • Automation (ie point-and-click deployment)
  • Flexibility (need it today, don't need it tomorrow)
  • Management: keeping track of storage, systems, assets, licenses
It is mostly in the last area that the projected savings lie: reducing TCO by reducing per-instance admin costs.

2015: the merge date

By 2015, most large companies will have their own "private cloud" infrastructures built out, and will be in a position to use "public cloud" compute and storage resources rather than continuing their own build-out. From there, usage will shift toward the public resources instead of refreshing aging physical hardware in their own facilities.

Virtual Desktop

I attended two breakout sessions about Virtual Desktop, mostly because I've always thought it was a neat problem and I wanted to see what the solutions were.

These guys think big: their 'basic building block' is composed of:

  • two clusters of eight ESX servers each;
  • 5TB of storage served up across 14 LUNs, with
  • 72 virtual desktops per LUN
14x72=1008 virtual desktops per 'building block'. This is definitely scaled up MUCH larger than anything I've ever had even a sniff at. I feel very small.

Disk footprint in this scenario is 5GB/instance.
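
Quick sanity check on those numbers (the inputs are the figures from the session; the derived values are just my arithmetic), in Python:

# Sanity-check the VDI "building block" numbers quoted above.
clusters = 2
esx_per_cluster = 8
luns = 14
desktops_per_lun = 72
storage_tb = 5

esx_hosts = clusters * esx_per_cluster          # 16 ESX servers
desktops = luns * desktops_per_lun              # 14 x 72 = 1008 virtual desktops
gb_per_lun = storage_tb * 1024 / luns           # ~366 GB per LUN
gb_per_desktop = gb_per_lun / desktops_per_lun  # ~5 GB per desktop, matching the quoted footprint
desktops_per_host = desktops / esx_hosts        # ~63 desktops per ESX server

print(f"{esx_hosts} hosts, {desktops} desktops, "
      f"{gb_per_lun:.0f} GB/LUN, {gb_per_desktop:.1f} GB/desktop, "
      f"{desktops_per_host:.0f} desktops/host")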

The VM layer cake looks like this (well, inverted):

  • hardware
  • ESX
  • base image
  • replica image
  • linked clone from replica image <- this is where the user instance is
  • thin apps <- this means you don't have to load all apps in each image
  • folder redirection <- to keep users from writing to c:
Thin apps are 'packaged' so that they run off of network shares rather than the image's C: drive. This means apps can be assigned to users based on need, rather than general roll-out.
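
To keep the layering straight in my head, here's a toy model of it in Python. The class and field names are mine, not VMware's object model; it just shows what hangs off what:

# Toy model of the desktop layer cake -- illustrative only, not VMware's API.
from dataclasses import dataclass, field

@dataclass
class BaseImage:
    name: str                     # the patched, hardened gold image

@dataclass
class Replica:
    base: BaseImage               # read-only copy that the clones link against

@dataclass
class LinkedClone:
    replica: Replica              # shares its blocks with the replica
    user: str
    thin_apps: list = field(default_factory=list)  # run from a share, not installed in the image
    profile_share: str = ""       # folder redirection keeps user data off C:

base = BaseImage("gold-desktop-image")
clone = LinkedClone(Replica(base), user="dave",
                    thin_apps=["office", "visio"],
                    profile_share=r"\\filer\profiles\dave")
print(f"{clone.user}: {len(clone.thin_apps)} thin apps, profile on {clone.profile_share}")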

Security Issues: per-instance security agents are bad in this environment. If every virtual desktop kicks off a virus scan at noon, the entire environment will suffer. Solution?

  • snapshot the VDI (virtual desktop instance) that you want to scan
  • run the scan on that snapshot on an entirely different ESX server, one which is in place for the exclusive purpose of scanning VMs
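
A sketch of that off-host scanning loop in Python; every helper here is a stand-in I made up, not a real VMware or AV-vendor call, but it shows the shape of the workflow:

# Hypothetical off-host virus-scan flow: snapshot each desktop, scan the
# snapshot on a dedicated ESX host, discard it.  The running VDI never takes
# the AV hit.  All functions below are stand-ins, not real APIs.

def snapshot_vdi(vdi):                  # stand-in: take a copy-on-write snapshot
    return f"{vdi}-snap"

def scan_on_host(snapshot, scan_host):  # stand-in: AV engine runs on scan_host only
    print(f"scanning {snapshot} on {scan_host}")
    return []                           # pretend there are no findings

def discard_snapshot(snapshot):         # stand-in: drop the snapshot afterwards
    print(f"discarding {snapshot}")

def scan_desktops(desktops, scan_host="esx-scan-01"):
    findings = {}
    for vdi in desktops:
        snap = snapshot_vdi(vdi)        # the running desktop is untouched
        findings[vdi] = scan_on_host(snap, scan_host)
        discard_snapshot(snap)
    return findings

scan_desktops(["vdi-0001", "vdi-0002"])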
Persistent vs. Non-Persistent: the system keeps track of who is entitled to a persistent image (ie one that is not abandoned on logout). So your knowledge workers get a persistent image, while your help desk people and your visiting contractors don't.

Aside

Back of the envelope calculations for a "basic" building block:

  • $2K per ESX instance = $32K
  • $4500 per hardware server = $83K
  • storage? $50K
That's $165K per vBlock, or $165 per user (assuming maximum density) before end user hardware (thin clients or whatever), end user software licenses, network infrastructure to glue it all together… and before the other VMware glue like vSphere and VDI and the clustering kit.
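
Same envelope as Python; the line items are the figures above, and the per-user number just divides by the 1008-desktop maximum density:

# Back-of-the-envelope capital cost of one VDI building block, figures as above.
esx_licences = 32_000   # $2K per ESX instance x 16
server_hw    = 83_000   # $4500-ish per hardware server
storage      = 50_000   # rough guess for 5TB of suitable storage
desktops     = 1008     # maximum density per building block

capex = esx_licences + server_hw + storage
print(f"${capex:,} per block, ${capex / desktops:,.0f} per user")
# -> $165,000 per block and roughly the $165/user quoted above,
#    before thin clients, end-user licences, networking, and the VMware glue.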

vDesktop ROI argument:

The ROI argument here again comes down to TCO reduction of admin overhead. Capital expenditure for vDesktop is probably greater than kitting out a large organization with traditional desktops.

Operationally you save with:

  • less helpdesk
  • less h/w diagnosis (the Sun Ray argument: throw it away, replace it, user connects back to their VDI like nothing has happened)
  • faster provisioning: point and click
  • standard "platforms" (ie different one for coders, helpdesk, managers, execs...)

10 Gigabit Ethernet

I attended one breakout session by Cisco where they talked about 10GE (10 gigabit per second Ethernet).

10 GE is seen as the solution to interface-creep: many ESX servers can have 6 to 8 physical 10/100 or 1G interfaces on them, plus FC (fiber channel) interfaces. Blade chassis usually have even more. 10 GE will give the bandwidth required to reduce the number of interfaces by aggregating all the traffic down one or two very large pipes.

Right now 10 GE is fiber. 10 GE over copper may come (ie cat 6, cat 7 wiring) but the interfaces will cost 10 watts EACH to run, on EACH side of the wire. Compromise short-haul (10m) interfaces using SFP-type connectors with a reinforced carrier exist today.

10 GE should penetrate the data center "this year" for backbone operations.

10 GE has a concept of "channels", sort of a built-in QoS. This is to enable fragile protocols like FCoE. Each channel has (or can have) a different set of thresholds for bandwidth reservation and latency sensitivity. Some of these channels are fixed in the specification (ie FCoE), some are user-configurable. Individual channels in the 10 GE stream can be paused, and the utilized bandwidth dynamically shaped to meet channel requirements. The standard calls for the switching device to be configurable, and the switch will pass those settings to its 10 GE peer on the end-device.
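
The channel idea clicked for me as a weighted-allocation problem. A toy model (mine, not the actual standard and certainly not Cisco config syntax): each channel gets a guaranteed share, and paused or idle channels donate their slack back to the rest.

# Toy model of per-channel bandwidth reservation on a 10 GE link.  Illustrative
# only -- not the DCB/FCoE standards and not any real switch configuration.
LINK_GBPS = 10.0

channels = {
    # name      (guaranteed share, paused?)
    "fcoe":    (0.40, False),   # lossless storage traffic, fixed in the spec
    "vmotion": (0.20, False),
    "vm-data": (0.30, False),
    "mgmt":    (0.10, True),    # a paused channel gives up its share
}

active = {name: share for name, (share, paused) in channels.items() if not paused}
total = sum(active.values())

for name, share in active.items():
    gbps = LINK_GBPS * share / total   # slack from paused channels is redistributed
    print(f"{name:8s} {gbps:4.2f} Gbit/s")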

10 GE adapters will be able to virtualize themselves into (up to) 128 virtual adapters. The adapter will present these virtual adapters to the BIOS, giving hardware partitioning as an additional way of protecting VMs on the host, plus better flexibility (in-band vs. out-of-band management). It wasn't made clear to me how these virtual adapters at the BIOS level get glued to the physical switch port at the other end of the wire.
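
And a toy sketch of the adapter carve-up (again purely my illustration; the talk didn't go into how the firmware and BIOS actually wire this up):

# Toy sketch: one physical 10 GE adapter carved into virtual adapters that the
# BIOS would present as separate NICs.  Purely illustrative.
MAX_VIRTUAL_ADAPTERS = 128

def carve(physical_nic, wanted):
    if wanted > MAX_VIRTUAL_ADAPTERS:
        raise ValueError(f"adapter supports at most {MAX_VIRTUAL_ADAPTERS} virtual adapters")
    return [f"{physical_nic}-v{i}" for i in range(wanted)]

print(carve("10ge-0", 8))   # each could be dedicated to a VM, management, vMotion, ...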

Aside: Cisco virtual switch

Cisco is selling a virtual switch module that replaces VMware's vSwitch. This extends IOS/XOS capability into the ESX server. End VMs once again have a 'switch' and 'port' that their interfaces are connected to, so you can dig into the virtual switch the same way you dig into a physical one today.

(I think I saw elsewhere on the net, the cost of this virtual switch is in the order of $500 per CPU in the ESX host. Yow!)

EMC/Cisco joint project: Acadia

Acadia will sell 'vBlock' units, turnkey hardware (compute, network and storage) and software, plus support (hardware and software, single point of contact for all components in the vBlock), plus services to get it all running. Scaling:

  • vBlock 0: 300 to 800 VMs
  • vBlock 1: 800 to 3000 VMs
  • vBlock 2: 3000 to 6000 VMs
(I wonder if there is room at the low end of the market for a turn-key office-in-a-box type solution for startups. Problem is, startups don't have any money and can't do anything, let alone anything "right".)
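
Those tiers as a quick lookup, Python again (the ranges are as quoted; the function is just my shorthand for them):

# Acadia vBlock sizing tiers as quoted in the session.
VBLOCK_TIERS = [
    ("vBlock 0",  300,  800),
    ("vBlock 1",  800, 3000),
    ("vBlock 2", 3000, 6000),
]

def pick_vblock(vm_count):
    for name, low, high in VBLOCK_TIERS:
        if low <= vm_count <= high:
            return name
    return None   # below 300 or above 6000 VMs: outside the quoted range

print(pick_vblock(1200))   # -> vBlock 1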

Overall

I feel very small.
