<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 TRANSITIONAL//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; CHARSET=UTF-8">
<META NAME="GENERATOR" CONTENT="GtkHTML/3.28.3">
</HEAD>
<BODY TEXT="#000000" BGCOLOR="#ffffff">
<BR>
<BLOCKQUOTE TYPE=CITE>
<PRE>
Various thoughts on this problem:
I am wondering if you tried adding
--metadata=0.90
to mdadm when you try to start the RAID from a command prompt (if you
are not doing an automated install), and whether that makes any difference?
</PRE>
</BLOCKQUOTE>
<BR>
<BLOCKQUOTE TYPE=CITE>
<PRE>
Or, does it still tell you there are not enough disks?
<A HREF="https://wiki.archlinux.org/index.php/Installing_with_Software_RAID_or_LVM">https://wiki.archlinux.org/index.php/Installing_with_Software_RAID_or_LVM</A>
I know you're using Ubuntu (not Arch), but it seems like there are
versioning issues with RAID metadata and Grub 0.97 and later (i.e. Grub2).
If after boot failure you have a shell (busybox) does it allow you to
run cat /proc/mdstat?
I'm also wondering if the installed grub.conf specifies the root as
the correct md device (e.g. /dev/md2 or whatever the "/" partition is)?
Usually the UUIDs are listed in /etc/mdadm.conf:
ARRAY /dev/md0 UUID=3aaa0122:29827cfa:5331ad66:ca767371
#ARRAY /dev/md1 super-minor=1
#ARRAY /dev/md2 devices=/dev/hda1,/dev/hdb1
Though your partitions are type fd (Linux RAID autodetect), you ought
to be able to boot from a LiveCD and mount them: mount /dev/sda3
/mnt/raiddisk (or whatever the root "/" partition is). Then examine
the contents of the /etc/mdadm.conf that the install creates and also
/boot/grub/grub.conf (mount /dev/sda1 /mnt/raidboot) to see if
anything is amiss there.
It could be that grub is having problems with the disk signatures (or
maybe that version of grub has a bug but that seems unlikely in a
release CD for Ubuntu 10.10). Does mdadm --detail --scan produce
normal output? Or you can do mdadm --examine /dev/sda1 to see the RAID
superblock UUID for a given drive.
<A HREF="http://ubuntuforums.org/archive/index.php/t-410136.html">http://ubuntuforums.org/archive/index.php/t-410136.html</A>
I hope that helps; somewhere between what the install is writing to /etc/mdadm.conf, /boot/grub/grub.conf, the RAID signatures on the drives, and Grub presumably lies the answer.
</PRE>
</BLOCKQUOTE>
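(Just to keep the checks suggested above in one place, from a live CD they boil down to roughly this; the device names, mount points and config paths are only examples:)<BR>
<PRE>
# mount the root and boot partitions read-only and inspect the configs
mkdir -p /mnt/raiddisk /mnt/raidboot
mount -o ro /dev/sda3 /mnt/raiddisk
mount -o ro /dev/sda1 /mnt/raidboot
cat /mnt/raiddisk/etc/mdadm.conf
cat /mnt/raidboot/grub/grub.conf

# and see what mdadm itself finds on the disks
mdadm --detail --scan
mdadm --examine /dev/sda1
</PRE>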
<BR>
After Thursday's session at the glug (big thank you, Sam and Jason) it was clear that:<BR>
1/ I needed to do it from scratch, and <BR>
2/ I wasn't crazy<BR>
<BR>
I just managed to "stabilize" the array;<BR>
I'll install on top of it tomorrow.<BR>
<BR>
First I dd_rescue'd the disks to make sure they were totally clean,<BR>
because for some reason the disks seemed to keep old superblock information that zero-superblock didn't clean up.<BR>
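The wipe was something along these lines (a sketch, not the exact commands I typed, and the device names are only examples):<BR>
<PRE>
# clear any old md superblock on each member partition first
mdadm --zero-superblock /dev/sda1

# then blank the whole drive so nothing stale survives a re-partition
# (0.90/1.0 superblocks sit near the END of the device, so a quick
#  wipe of the first few MB is not enough)
dd if=/dev/zero of=/dev/sda bs=1M
# ...and the same for sdb, sdc, etc.
</PRE>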
<BR>
Second, I decided not to use the installer's partitioner ever again,<BR>
but to do the partitioning from the grml bootable disk,<BR>
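The partitioning itself was nothing fancy, roughly one RAID partition per disk, type fd, something like this (placeholder device names again):<BR>
<PRE>
# from grml: one primary partition spanning the disk,
# type fd (Linux RAID autodetect)
echo ',,fd' | sfdisk /dev/sda
echo ',,fd' | sfdisk /dev/sdb
# ...repeat for the remaining member disks
</PRE>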
<BR>
When I assembled md0 the mdadm tool warned about the metadata,<BR>
and as you said, I issued --metadata=1.0 (instead of 0.9).<BR>
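In practice that meant re-creating the array with the metadata version spelled out, roughly like this (level and device names are from memory, treat it as a sketch):<BR>
<PRE>
# build the RAID 5 array with explicit 1.0 superblocks
# (5 active devices plus 1 spare, matching the layout described below)
mdadm --create /dev/md0 --metadata=1.0 --level=5 \
      --raid-devices=5 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
</PRE>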
<BR>
The other problem that I didn't know about was this:<BR>
the array was type 5 (RAID 5) with 5 devices and 1 spare,<BR>
and when mdadm detects that it has more devices than needed for the configured array type,<BR>
it does a "trick":<BR>
instead of using the 5 devices, it starts with an array of 4 and restores the 5th from degraded mode<BR>
(it assumes it has 4 devices and 2 spares, and that it is running degraded).<BR>
<BR>
That seems to be internal trickery to speed up the initial syncing,<BR>
so at first I freaked out, but then I read more and learned about it, so I let the array rebuild/sync and finish its jig;<BR>
after finishing, the array showed the original configuration of 5/1 instead of 4/2.<BR>
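Watching it do its jig was just this (nothing exotic, shown here for completeness):<BR>
<PRE>
# follow the initial recovery/resync until it finishes
watch -n 5 cat /proc/mdstat

# afterwards this should report 5 active devices and 1 spare again
mdadm --detail /dev/md0
</PRE>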
<BR>
Then I rebooted from the live disk, and the whole array was fine and clean.<BR>
I haven't installed on top of it yet, so I am not saying eureka,<BR>
but at least this is the first time the array comes up whole and clean after a reboot.<BR>
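For reference, the after-reboot check was basically this (the mdadm.conf path may differ between grml and Ubuntu):<BR>
<PRE>
# reassemble from the superblocks and confirm the array comes up clean
mdadm --assemble --scan
cat /proc/mdstat

# record the arrays so the installed system assembles them the same way
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
</PRE>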
<BR>
<BR>
<BR>
<BR>
</BODY>
</HTML>