The Quest for the Perfect Cloud Storage
Bill McGonigle
bill at bfccomputing.com
Mon Dec 21 23:11:46 EST 2009
On 12/18/2009 01:53 PM, Alan Johnson wrote:
> Why can't you pool the storage resources of the same
> physical hardware the same way?
Can vs. should. I've seen both Xen proper and Citrix XenServer wig out
and cause downtime. It's a matter of how many eggs you put in one
basket. Some folks think drawing the line between applications and
storage is worth doing.
I've got at least one low-budget setup where Xen is running the OS
that provides ZFS to the DomUs - you have to tweak the Xen scheduler to
prevent race conditions, and that excludes certain other classes of
applications - I'm told some network-switching apps fall into that
category.
Basically the technology isn't as perfect as we'd like.
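For the curious, the tweak is along these lines - just a sketch,
assuming Xen 3.x's xm tools and the default credit scheduler; the guest
name and the numbers are made up and will depend on your setup:

  # Give the storage domain (dom0 here) a bigger CPU share so I/O
  # doesn't get starved under contention (credit weight, default 256)
  xm sched-credit -d Domain-0 -w 512

  # Optionally cap a noisy guest (cap is in percent of one CPU)
  xm sched-credit -d some-guest -w 256 -c 50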
> 2. Ideally, n+2 redundant (like RAID6), but n+1 and mirroring are
> worth considering. In fact mirroring would be a nice option to
> have for some write-heavy VMs. Even striping would be useful in
> some instances.
Combining Linux RAID with ZFS-backed iSCSI might be a way to do it. You
can do some load balancing that way too, if you know your systems' load
characteristics.
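Roughly like this - a sketch, assuming OpenSolaris storage nodes using
the old shareiscsi property (COMSTAR is the newer way) and open-iscsi
plus mdadm on the Linux side; pool names, IPs, and device names are
placeholders:

  # On each storage node: carve out a zvol and export it over iSCSI
  zfs create -V 100G tank/vm01
  zfs set shareiscsi=on tank/vm01

  # On the Linux box: find and log in to the targets on both nodes
  iscsiadm -m discovery -t sendtargets -p 10.0.0.11
  iscsiadm -m discovery -t sendtargets -p 10.0.0.12
  iscsiadm -m node --login

  # Mirror the two LUNs with md - a RAID1 that spans storage nodes
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc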
> 3. The node software would run on the dom0 of XenServer physical
> nodes giving it direct access to the block devices within. From
> what I can tell, XenServer is a custom distro of Linux with
> RPM/YUM package management.
yeah (see above), but it's meant to be primarily menu-driven. I don't
know how well tweaks are supported, and it ships an older Xen. CentOS
5.4 is worth considering, SuSE has a newer Xen than that, and Fedora 12
is just getting the plumbing set up for the new pv_ops Xen, the real
future of kernel virtualization support.
> 4. Multilevel storage support within the nodes, so for example, a set
> of 256GB SSD, 300GB 10K, 500GB 2.7K, and 750GB 5.4K drives will
> all be used intelligently without need for human interaction after
> setup.
yeah, ZFS has options for planning this, depending on your workload.
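Something like this hybrid pool, for instance - the device names are
placeholders, and whether the SSD does more good as read cache (L2ARC)
or as a dedicated log (slog) depends on your read/write mix:

  # Bulk capacity: raidz2 (n+2 redundancy) across the big slow drives
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

  # SSD as read cache (L2ARC)
  zpool add tank cache c2t0d0

  # Fast disk (or an SSD slice) as a dedicated ZIL/slog for sync writes
  zpool add tank log c2t1d0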
> 5. Multilevel storage across nodes would be a neat concept, but some
> intelligence about load balancing across identical nodes is
> certainly desired.
Without going to a full cluster filesystem, some of that load falls on
the sysadmin to set things up right.
> The closest I can come up with so far is to run one FreeBSD VM on each
Note that FreeBSD's ZFS lags quite a bit - you're not going to get
block-level dedup, for instance. If you can handle something with a
Solaris kernel, you're going to track features and fixes faster.
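On a recent OpenSolaris build (dedup landed in snv_128, late 2009) it's
just this - assuming a pool named tank:

  # Enable block-level dedup (datasets inherit it from the pool)
  zfs set dedup=on tank

  # See how much you're actually saving
  zpool get dedupratio tank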
-Bill
--
Bill McGonigle, Owner
BFC Computing, LLC
http://bfccomputing.com/
Telephone: +1.603.448.4440
Email, IM, VOIP: bill at bfccomputing.com
VCard: http://bfccomputing.com/vcard/bill.vcf
Social networks: bill_mcgonigle/bill.mcgonigle