I am settling in quite well with the hwVPS account I am using to host this very blog. I have set it up with BlueOnyx on CentOS 5.3, and it is simply delightful. As an employee, I could have picked any account size I wanted, but I stuck with the 512 MiB version with one 3 GHz CPU, because I have no desire to waste resources. I believe that will be more than adequate for my purposes, and if not, I can upgrade; too many people don't get that. It is a paravirtualized Xen domU running on some truly state-of-the-art hardware, set up in a state-of-the-art way. I am not saying it is impossible for others to do, but I am saying that very few have the skill to pull this kind of performance out of the hardware the way it is done here. As it stands, we are riding the edge of what Linux and Xen can do, gleefully looking forward to patches and updates as the kernel developers bring them into the mainstream. We are not bold enough to toss in untested code; stability is a priority. So far, stability has been perfect for my installation: not a hiccup to mention, and it is nearing a month (July 14th) of existence.
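For the curious, it is easy to confirm from inside the guest that you are running paravirtualized under Xen. A quick sketch, assuming a stock CentOS 5 Xen guest kernel (the exact version string is only an example):

    # The CentOS 5 paravirtualized guest kernel carries an "xen" suffix:
    uname -r
    # 2.6.18-128.el5xen

    # On kernels that expose it, sysfs also reports the hypervisor type:
    cat /sys/hypervisor/type
    # xen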
I love Xen as virtualization software. It is about as cut and dried as it gets, so in some aspects and some applications it may not have as big an advantage as the software engineering marvel once called Virtuozzo, which has since been taken over by an evil empire now calling itself Parallels. I do not know its state today, but I did know at least some of the engineers involved in its creation, and they were a passionate bunch driven to excellence; now they, if they are even still there, are being cut short and corrupted by the lust for money and corporate power. Their software started out gleaming with passion and excellence, but it began declining not long after. But anyway, back to my point. In the beginning, when Virtuozzo was in good hands, each VPS ran from a templated operating system that actually shared the same files, in such a way that Linux could cache that one file set once and have it serve every VPS instance. Talk about totally nerd cool! And another cool aspect of that setup: all of the VPS instances could mainly run off one file set. That's right, one 600-or-so-megabyte set of files for as many VPS instances as you could fit in your RAM, which in our case, back in the day, was several hundred. (A tiny illustration of this shared-file-set idea follows below.) So in some respects that works well: people get their own environment to muck up, destroy, or actually use, and no one can crash the hardware, only their own environment.

The downsides were the fine-tuning of many complex memory types and resources, which could not always be applied across the board given the broad uses customers had in mind for their VPS instances. The other downside was that each instance ran many modified operating system files and packages in order to function correctly, so a yum update to the latest version of the OS was not part of the deal, and keeping up to date with yum or the like was not possible for the end user. If it were one piece of hardware under the control of one person for a specific, organized use, that person could do wonderful things with it, but in the real world of a hosting company, that is not the case.

That is where Xen comes in. Even though the concepts are a little hard to grasp at times, and it is strange having the virtual environments so totally separate that you cannot easily see what they are doing or help when something goes wrong, once you get the hang of it and put some good policies in place so you can gain access to the domUs (the name for Xen's virtual environments) if and when you need to, it works quite well. The division of the hardware's resources is much more solid; there is no sharing of memory as if they all ran off the same kernel and filesystem. That solidity leads to predictability, which is a must-have when planning the use of the resources. It also seems that, for example, CPU overuse in one domU cannot affect another. With Xen, you can run a fully virtualized OS with emulated devices, at various minor costs in performance, or you can run at near-native hardware performance with a paravirtualized OS using the Xen-specific drivers. Most mainstream Linux distributions ship these drivers, and I believe even Windows has some, though they are considered unstable.
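To make that shared-file-set idea concrete, here is a tiny, hypothetical illustration. Virtuozzo's real mechanism was its own template filesystem, not plain hard links, but the caching principle is the same: the page cache is keyed by inode, so one inode reachable from many paths is held in RAM exactly once.

    # Hypothetical paths, purely for illustration.
    mkdir -p /srv/template /srv/vps1 /srv/vps2
    echo "pretend this is part of a 600 MB template" > /srv/template/libdemo.so
    ln /srv/template/libdemo.so /srv/vps1/libdemo.so
    ln /srv/template/libdemo.so /srv/vps2/libdemo.so
    # All three paths report the same inode number (first column), so
    # reads through any of them hit one copy in the kernel's page cache:
    ls -li /srv/template/libdemo.so /srv/vps1/libdemo.so /srv/vps2/libdemo.so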
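And for a sketch of what the dom0 side of a paravirtualized guest looks like, here is a minimal, hypothetical config for the Xen 3.x "xm" toolstack; the guest name, volume, and bridge are made up for the example:

    # /etc/xen/blog.cfg -- hypothetical minimal paravirtualized domU
    name       = "blog"
    memory     = 512                          # MiB, matching my account
    vcpus      = 1
    bootloader = "/usr/bin/pygrub"            # boots the guest's own kernel
    disk       = ['phy:/dev/vg0/blog,xvda,w'] # backing block device
    vif        = ['bridge=xenbr0']            # one NIC on the host bridge
    on_crash   = 'restart'

With that in place, the everyday access I mentioned is just a few commands from dom0:

    xm create /etc/xen/blog.cfg   # start the guest
    xm console blog               # watch its console; Ctrl-] detaches
    xm list                       # confirm it is running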
All the technical differences aside, it is just like using my own hosted hardware, with the added ability to restart or reboot my hwVPS and watch the console if it ever has an issue. I can update it like any plain old Linux install should be able to do. The big difference is that I have some seriously high-end hardware backing my system, including a high-end network-attached storage system that allows my hwVPS to be transferred between two different pieces of hardware. How much would I be paying a month? In this case, 45 dollars, which is really nothing compared to the actual costs of having that kind of hardware at your disposal. I think I will rant later about the misperceptions of the general public, along with the lofty expectations they attach to hosting that are not actually covered by what they pay; they feel entitled to more because they compare with larger companies that are undercutting prices to gain a larger footing in the hosting industry and stamp out the competition.
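That transfer between machines is Xen live migration, and it needs exactly the setup described: the guest's disk on storage both hosts can reach, and xend configured to accept relocation. A hedged sketch with the Xen 3.x toolstack (the target hostname is made up):

    # On both hosts, xend must allow relocation, set in
    # /etc/xen/xend-config.sxp:
    #   (xend-relocation-server yes)
    #   (xend-relocation-port 8002)
    # Then a running guest can be moved with barely any downtime:
    xm migrate --live blog node2.example.com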