moving linux installs
Ben Scott
dragonhawk at gmail.com
Sat Apr 19 23:05:03 EDT 2008
On Sat, Apr 19, 2008 at 1:26 AM, Bill McGonigle <bill at bfccomputing.com> wrote:
> initrd needed new drivers ...
Yah, "rebuild your initrd's ahead of time" is a good thing to know
ahead of time. :-/
> modprobe.conf's needed to be updated to make that happen
Really? I haven't done this in a while, but you used to be able to
specify "mkinitrd --with=foo" to force a module to be included in the
initrd. You just built a "fat" initrd with drivers for both old and
new hardware, and it worked. Well, the couple of times I had to use
it. :)
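For instance, something along these lines (from memory, so check your
distro's mkinitrd man page; the module names here are just examples):

    # build an initrd with drivers for both old and new controllers
    mkinitrd --with=ahci --with=megaraid_sas /boot/initrd-fat.img $(uname -r)

Point a spare GRUB stanza at the fat image and the same disk should
boot on either box.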
> it seems that grub needs to be run on the final destination hardware
> because of the way it does BIOS probes, so preparing the disks
> ahead of time wasn't obviously possible.
I'm a little confused about what you mean here. The GRUB shell (the
GRUB work-alike that runs under Linux) probes to figure out which
Linux kernel devices correspond to which BIOS devices, so that you can
type "root (hd0)" while running Linux and still have the GRUB shell
actually scribble on the correct /dev/sd* node. Is that what you
mean?
If so, it's supposedly possible to manually map host OS devices to
GRUB devices for the GRUB shell. I've never done it, but the syntax
is apparently very simple (cat /boot/grub/device.map to see what I
mean).
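A typical device.map is just the BIOS-to-Linux mapping, one disk per
line:

    # cat /boot/grub/device.map
    (hd0)   /dev/sda
    (hd1)   /dev/sdb

If I remember right, the GRUB shell also takes a --device-map=FILE
option, so you could hand it an edited map describing the *target*
machine's disk order instead of the one you're running on.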
That said, boot loader portability and robustness have long been the
bane of Linux. GRUB is at least better than LILO (which fell apart if
you so much as breathed on the boot partition without running the map
installer afterwards). But anything that changes the ordering of BIOS
drives is going to make your GRUB config file really confused.
Ran into a similar situation earlier this week with an Asus Eee
laptop, where the user wanted to boot Ubuntu from an SD card using
the BIOS's POST-time boot device selector. When installing Ubuntu,
the flash card is seen as the second BIOS fixed disk (in GRUB
terms, "(hd1)"), but when booting from the SD card, it is seen as the
first BIOS fixed disk, "(hd0)". This was complicated by the fact that the Ubuntu
installer is very narrow-minded about what acceptable boot
configurations are. It was also complicated by the fact that we were
trying to also integrate with the stock install on the internal flash
drive. It was confusing as hell, trying to keep track of when the SD
card will be "(hd0)" and when it will be "(hd1)", and also remembering
that it will always be sdb (because Linux doesn't use the BIOS (which
is, of course, why we need initrd -- look kids, Big Ben, Parliament)).
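To make the mismatch concrete, a menu.lst stanza for the boot-from-SD
case would look something like this (hypothetical; the kernel version
and partition numbers are invented for the example):

    title  Ubuntu (SD card, via BIOS boot selector)
    root   (hd0,0)
    kernel /boot/vmlinuz-2.6.24-16-generic root=/dev/sdb1 ro
    initrd /boot/initrd.img-2.6.24-16-generic

Note that "root (hd0,0)" is the BIOS's idea of the disk (first,
because we booted from it), while "root=/dev/sdb1" is the kernel's
idea (always second, behind the internal flash). The two don't have
to agree, and here they don't.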
To tell the truth, I'm still not sure why one thing worked, but who
am I to argue with success?
Ultimately, I think the root cause of this particular mess is "the
IBM-PC BIOS sucks", but I expect we all knew that already. Go buy a
Sun if you want real firmware. ;-)
> ... Windows ... beat the pants off of us on
> linux, because the former has multiple hardware profiles ...
I beg to differ on that.
Hardware profiles get you... to tell you the truth, I'm not really
sure *what* they get you. Sure, you can prevent selected drivers from
loading in selected profiles. Who cares?
You still have to play BIOS drive ordering games in the BOOT.INI
file Windows uses, in order to create a fault-tolerant boot config
with Windows's software RAID. Windows won't boot if the BIOS ordering
doesn't match what BOOT.INI thinks is going on. Sound familiar?
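For reference, a fault-tolerant BOOT.INI looks something like this
(the rdisk() numbers follow the BIOS drive ordering, which is exactly
the problem):

    [boot loader]
    timeout=5
    default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows (primary)" /fastdetect
    multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Windows (mirror)" /fastdetect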
Windows will still bluescreen during boot if the disk controller for
the system disk isn't configured (in the registry) to start during
boot. The only fix (after the fact) I've been able to find is
"reinstall the operating system". If you know of something better,
please let me know, because I've got a disk image at work that I can't
boot (the hardware it was made on is no longer available).
If you know ahead of time, there's a way to turn on *all* the disk
controller drivers Windows has available, which sort of solves the
problem, but I suspect it leaves the system config rather messier than
it was before.
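If anyone wants to try it, the trick is (if memory serves) a pile of
registry entries that force the generic disk drivers to load at boot,
something like:

    Windows Registry Editor Version 5.00

    ; Start=0 means "load at boot"; Microsoft's KB article on moving
    ; a system disk has the full list of services and device IDs
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
    "Start"=dword:00000000

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\intelide]
    "Start"=dword:00000000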
Still, I've long maintained that "We don't suck more than Windows
does" is a pretty piss-poor goal for Linux to aim for.
> ... [Mac OS X] just has everything built-in ...
That doesn't impress me so much. Linux could do the same if it only
had to run on a few dozen different hardware models.
> Or, is there a better way that hasn't occurred to me?
If all you're worried about is migrating an installed system from
one machine to another, you might consider doing a minimal install
onto a spare disk on the new hardware, attaching the old disks as
secondary disks in the new hardware, fixing up the config and stuff on
the old disks to match the new hardware (using that sophisticated
migration tool, "cp"), and then removing the spare disk and turning
the old disks into the primary disks for the new hardware. It's rude
and crude, but it's also quick and effective.
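In shell terms, the fix-up step is roughly this (a sketch only;
device names, config files, and the initrd command will vary by
distro):

    # old system's root, attached as a secondary disk on the new box
    mount /dev/sdb1 /mnt/old

    # copy hardware-specific config from the fresh minimal install
    cp /etc/modprobe.conf /mnt/old/etc/modprobe.conf

    # rebuild the old install's initrd against the new hardware
    chroot /mnt/old mkinitrd -f /boot/initrd-<version>.img <version>

Then swap the disks, run grub-install (on the new hardware, per the
above), and cross your fingers.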
-- Ben