While the Hypervisor API was published on the T1 web site some time ago, it hasn’t been possible so far to give the implementation a test run.
Recent news, however, gives reason to hope that the wait to actually access the hypervisor and partition a T1 system into Logical Domains is nearing its end. The LDoms v1.0 software has been in Nevada since build 41 and has now been included in the GA release, Solaris 10 11/06.
From the presentation by Ashley Saulsbury I gather that the hypervisor is layered under the OBP layer of the individual logical domains. This would imply that it is either implemented in OBP or running in an underlying service domain/partition of its own.
So where is the hypervisor? In firmware (OBP), or is there a separate (minimal?) OS instance running it? Is there a service domain for the hypervisor and the virtualization layer? If so, what happens to the domains when the service domain is rebooted or crashes? What happens if a domain gets shut down with power-off? Does that terminate the domain or the whole platform? (Yes, I know what I’d wish to happen, but I’ve also learned not to expect something new to stick to my expectations :)
I’d also like to test how much overhead the hypervisor (and the individual OS instances in each domain) adds compared to a similar zones configuration.
Assuming that each domain has an individual OBP layer, what is visible at the level of the OBP?
PCI Express is supposed to be mapped into the address space of the logical domain, so the OBP running there can detect the devices. Is a PCI card thus limited to a single domain, or does it get virtualized and shared?
A bit about console access to logical domains is described in the manual page for vntsd(1M). This virtual network terminal server daemon implements access based on the telnet protocol, similar to a “virtual annex”. But where does it run? Since the slides state that the hypervisor is “not an OS”, does it run on every logical domain?
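As far as I can tell from the man page, vntsd binds to the loopback interface of whichever domain hosts it and assigns one TCP port per console group, reportedly starting at port 5000. A minimal sketch of attaching to such a console from that machine, with host and port being assumptions on my part:

    #!/usr/bin/env python
    # Sketch only: attach to a domain console exported by vntsd.
    # Address and port are assumptions; vntsd reportedly binds to
    # localhost and uses one TCP port per console group, from 5000 up.
    import telnetlib

    CONSOLE_HOST = "localhost"  # assumed vntsd listen address
    CONSOLE_PORT = 5000         # assumed port of the first console group

    tn = telnetlib.Telnet(CONSOLE_HOST, CONSOLE_PORT, timeout=10)
    tn.interact()               # hand the session to the terminal, like telnet(1)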
How is network access passed on?
How is storage shared and represented to the individual domains? There is more than one way to do that; which route has been chosen and implemented so far?
If I suspend a logical domain, move the image to another system, and then reload the domain from the suspended image, will it live?
That would open up interesting clustering perspectives unique to the Niagara platform.
Can cores be assigned to individual domains? And what happens if such a core gets deactivated from the service partition?
Is it possible to run IDN¹, and listen in on its traffic from another domain?
Is it possible to place a domain as router or firewall between other logical domains?
I’m just imagining a T1000 running a busy beehive of domains around a honeypot installation.
Footnotes:
1. IDN, short for Inter-Domain Networking, was used in the E10k. Implemented as a special ASIC-configuration state (board mask) that had to be scheduled, it basically mapped a reserved memory buffer from one domain to another, and it took a certain toll on the system. By design, IDN violated the principle that domains should be separated from each other. It also created a dependency where, in certain cases, a failed domain could drag all other domains within the same IDN downhill with it. Another side effect of the implementation as a mapped memory buffer was that IDN couldn’t be used for a cluster heartbeat: a “mailbox” just doesn’t provide that kind of signal.
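To illustrate that limitation with a generic sketch (this is the polled-mailbox pattern in miniature, not the actual IDN implementation): all the reader ever observes is the absence of updates, which could mean a dead peer just as well as a busy one; there is no link-down event of the kind a real network interface would deliver.

    # Generic sketch of a polled shared-memory "mailbox"; not IDN code.
    # The reader can only notice that the counter stopped changing; it
    # cannot tell a dead writer from a stalled or merely slow one.
    import mmap
    import struct
    import time

    PATH = "/tmp/mailbox"  # hypothetical file standing in for the mapped buffer

    def write_heartbeat(buf, seq):
        """Writer side: bump a sequence counter in the shared buffer."""
        buf[0:8] = struct.pack("<Q", seq)

    def watch_heartbeat(buf, timeout=3.0, poll=0.5):
        """Reader side: poll the counter and give up after a silent period."""
        last, last_change = None, time.time()
        while True:
            seq = struct.unpack("<Q", buf[0:8])[0]
            if seq != last:
                last, last_change = seq, time.time()
            elif time.time() - last_change > timeout:
                # "No news" is all we get: peer dead? suspended? just slow?
                return "heartbeat lost (or the peer is merely busy)"
            time.sleep(poll)

    # Both sides would mmap the same file; one process demonstrates both here.
    with open(PATH, "wb") as f:
        f.write(b"\x00" * 8)
    with open(PATH, "r+b") as f:
        shared = mmap.mmap(f.fileno(), 8)
        write_heartbeat(shared, 1)
        print(watch_heartbeat(shared))  # times out: the counter never changes again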
So far, I haven’t been able to give LDoms the try I would have wished for: both the firmware images and the LDoms Manager software are still unavailable outside Sun.
My trial period isn’t quite over yet, but it looks like I’ll have to return this wonderful toy before support for logical domains hits the streets.
I had asked for permission to upgrade the OBP of the Try-and-Buy machine in case I get access to the image in time. Not only was I given immediate clearance for the procedure; if all goes well, I’ll also get the image. So there’s still hope!
By now, the trial period is over, and I have to return the system. Sadly, I didn’t get access to the needed firmware images or the LDoms Manager software, and Sun was adamant about the trial period.
So there won’t be a chapter on LDoms in the scheduled update for our OpenSolaris Book, and the LDoms chapter planned for the upcoming Cluster book will remain unwritten as well.