DIY NAS review with an HP Microserver and FreeNAS
Tom Buskey
tom at buskey.name
Thu May 8 17:06:04 EDT 2014
On Thu, May 8, 2014 at 3:52 PM, Mark Komarinski <mkomarinski at wayga.org> wrote:
> (I've had this in my draft mailbox for quite a while after I promised
> sending this. It's getting rather long, so I'll send this and then open
> up for questions. tl;dr: I like it.)
>
> So there are really two parts to this review. First, the hardware.
>
> I started with the HP ProLiant G7 N54L Microserver, 2x Kingston 4GB RAM,
> and 2x WD RED 4TB drives.
>
>
I've heard good things about the HP Microservers for home labs.
> drive bays (which clearly say 'not hot swap') and the motherboard.
>
They're SATA, which *is* hot swappable, but people might install an OS that
doesn't support it.  HP doesn't want support calls to debug the OS, and I
don't blame them.
> put in an optical drive. The rear has two short PCI expansion slots, a
> few USB ports, power, and GigE port. I'm not looking at it right now
> but I also think it has an external SATA connector.
>
Do they still have IPMI on them?  Do you happen to know the maximum RAM
limit?
I think previous models had only 1 PCIe slot.  There are two-port gigabit
PCIe x1 cards for < $50 that work with VMware ESXi, and one-port Intel
cards for < $20.
> First login on the web interface has you set the password and you get
> in. Using ZFS does make things easier in that I just selected the two
> drives and it automatically set them up as a mirror and created the
> volume for me. Unlike LVM which I'm more accustomed to, that entire
> volume is the filesystem. With the volume created, you can create
> either a dataset (used for sharing via NFS/CIFS/AFP) or a zvol (for
> exporting via iSCSI). Everything I'm doing currently uses datasets,
> though I might tinker with iSCSI again later in the year.
>
And unlike LVM, you can resize datasets without unmounting them.  Like a
NetApp.
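A quick sketch of what that looks like from the command line (the pool and
dataset names here are hypothetical):

```shell
# Assumes a pool "tank" with a dataset "tank/home" (hypothetical names).
# Change the dataset's quota while it stays mounted and in use:
zfs set quota=500G tank/home

# Confirm the new limit:
zfs get quota tank/home

# Remove the quota entirely:
zfs set quota=none tank/home
```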
> Creating a dataset by default makes a space that shares the same amount
> of space as the volume. You can then set a quota to limit the amount of
> space per volume[*]. You can also select compression or deduplication.
> Compression runs it through a variety of algorithms, with lz4 being
>
Compression can also speed up throughput from the OS to the drive (you're
pushing fewer bits through the controller to the disk), and it can be
turned on/off on the fly.
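For example (the dataset name is hypothetical; toggling compression only
affects blocks written after the change):

```shell
# Enable lz4 compression on a live, mounted dataset:
zfs set compression=lz4 tank/media

# Check the current setting:
zfs get compression tank/media

# Turn it off again; already-written blocks stay compressed on disk:
zfs set compression=off tank/media
```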
> recommended. Dedup is a lot more effort and there's lots of warnings
> about enabling dedup if you don't have sufficient RAM. While I'm
>
I've been burned by a lack of RAM for dedup before: a RAID 1 with 2 TB
drives and 3 GB of RAM that crashed during a power outage on OpenSolaris.
I had to boot single user (more RAM for the OS) on another system with 8 GB
and wait days for ZFS to finish importing the pool.  Then I copied
everything to another volume with dedup off; that's the only way to stop
using dedup once it's been turned on for a volume.  I was only reducing the
space used by 10%.  I might've been able to recover with less RAM, but I've
heard 4-5 GB of RAM per 1 TB of deduped disk.  FreeNAS, being on FreeBSD,
would have different limits.
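If anyone's tempted anyway, ZFS can estimate the dedup table before you
commit.  A sketch with a hypothetical pool name; the 320-bytes-per-entry
figure is the commonly quoted in-core cost, so treat it as a rough rule of
thumb:

```shell
# Simulate dedup on an existing pool WITHOUT enabling it.  This walks the
# pool and prints a dedup table (DDT) histogram plus an estimated ratio:
zdb -S tank

# Rough in-core DDT size: allocated blocks x ~320 bytes per entry.
# If that number approaches your RAM, don't turn dedup on.
```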
> storing lots of compressed media files for now (video and music files),
> it will tell you how compressed the dataset is. In my case it winds up
> being 1.02x.
>
For the default compression, it doesn't slow things down even if the data
doesn't compress.  If it does compress, you get faster throughput.  MP3s,
videos, and gzipped files don't compress much.
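You can read that ratio straight off the dataset (hypothetical name again):

```shell
# compressratio is a read-only property computed from the stored data:
zfs get compressratio tank/media
# A value like 1.02x would mean the media files barely compressed at all.
```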
> ZFS can set up routine snapshots, which is a good incentive to move my
> home directory there. When you enable a new snapshot, you can set it up
>
Snapshots take very little space unless you're snapshotting databases or VM
images that change a lot.  Even snapshots every 15 minutes take little
space.
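FreeNAS drives this through its periodic snapshot tasks, but the underlying
commands are simple.  A sketch with hypothetical names:

```shell
# Take a snapshot of a dataset; a fresh snapshot reports near-zero USED:
zfs snapshot tank/home@2014-05-08

# List snapshots and the space each one pins down:
zfs list -t snapshot -r tank/home

# Space grows only as the live data diverges from the snapshot.
# Destroy a snapshot when it's no longer needed:
zfs destroy tank/home@2014-05-08
```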
More information about the gnhlug-discuss mailing list