RAID1 Issue on RPi4

Bruce Labitt bruce.labitt at myfairpoint.net
Sat May 29 20:10:26 EDT 2021


Seems to be an NVMe case related thing.  Perhaps my clamping arrangement 
shorted out a connection.
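
Next time it drops out I'll try to catch what the enclosure is doing. 
Assuming the NVMe case is a USB bridge (which would explain the disk 
showing up as sdX rather than nvme0n1), something like:

$ dmesg | tail -n 50        # look for USB disconnect/reset messages
$ lsusb                     # is the bridge still enumerated?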

I have the disks up.  However, the NVMe disk is now reporting as 
/dev/sdd, not /dev/sdb.
This is dumb.  mdadm.conf has the array configured as:

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=rpi4:0 
UUID=82415afc:85be4701:d47937be:cdb8b4e8
    devices=/dev/sdb1,/dev/sdc1
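
From what I've read, mdadm can assemble an array purely from the array 
UUID stored in the member superblocks, so the devices= line may be the 
fragile part here.  Something like this might survive a device rename 
(untested, same array UUID as above):

DEVICE partitions
ARRAY /dev/md0 metadata=1.2 name=rpi4:0 UUID=82415afc:85be4701:d47937be:cdb8b4e8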

Is there a way to specify the devices by UUID so this always works?  If 
so, how do I get the UUIDs of the individual disks?
Right now, because of the RAID1 mirroring, sdc and sdd have identical 
UUIDs.  But:

$ sudo mdadm --detail /dev/md0
/dev/md0:
            Version : 1.2
      Creation Time : Wed May 26 09:47:08 2021
         Raid Level : raid1
         Array Size : 976628736 (931.39 GiB 1000.07 GB)
      Used Dev Size : 976628736 (931.39 GiB 1000.07 GB)
       Raid Devices : 2
      Total Devices : 1
        Persistence : Superblock is persistent

      Intent Bitmap : Internal

        Update Time : Sat May 29 19:47:24 2021
              State : clean, degraded
     Active Devices : 1
    Working Devices : 1
     Failed Devices : 0
      Spare Devices : 0

Consistency Policy : bitmap

     Number   Major   Minor   RaidDevice State
        -       0        0        0      removed
        1       8       17        1      active sync
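
On getting per-device identifiers: I believe blkid reports the shared 
array UUID plus a per-member UUID_SUB for RAID members, and the PARTUUID 
stays unique even on mirrored partitions.  Untested here, but something 
like:

$ sudo blkid /dev/sdc1 /dev/sdd1    # UUID is the array's; UUID_SUB differs per member
$ lsblk -o NAME,SIZE,UUID,PARTUUID  # PARTUUID is per-partition, not mirrored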

Is there a way to get back to a working two-device RAID1 array?
There's nothing of value on the array at the moment, but I'm really not 
happy about a disk simply disappearing - this doesn't seem robust.
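
If simply re-adding the member under its new name is the answer, I'm 
guessing something like this, assuming the partition table on the NVMe 
(now sdd) survived:

$ sudo mdadm --manage /dev/md0 --add /dev/sdd1
$ cat /proc/mdstat                  # watch the resync progress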

On 5/29/21 4:57 PM, Bruce Labitt wrote:
> So I got the RAID1 array running on the RPi4.  Today, I tried to add a
> subdirectory to the array.  Apparently something bad happened and one
> of the disks disappeared.  When this happened there was some kind of
> major upset, as the OS stopped functioning, like it no longer knew where
> commands were located.  sudo stopped working.  I could type in commands
> via ssh and see the characters, but the command interpreter didn't
> function correctly.  I got a bash message saying it couldn't find the
> command.  I could not establish another ssh session with the RPi.  At
> that point, I pulled the power.
>
> Upon a normal reboot, I find sdb is missing.  md0 is still intact with
> just sdc1.  I'm not sure how mkdir would cause this, but...
>
> $ sudo mdadm --detail /dev/md0 states there are 2 RAID devices, but 1
> total device: 1 active device, 1 working device, 0 failed devices, 0
> spare devices.  Disk 0's state is removed, and Disk 1's state is active
> sync /dev/sdc1.
>
> Assuming I can get sdb back, how do I get it back in the array?  Just $
> sudo mdadm --manage /dev/md0 --add /dev/sdb1 ?  Is there a way with UUIDs?
>
> Weird that the NVMe disk sdb has just disappeared.  Even after a reboot
> it isn't present, even though the idiot light is on.  fdisk doesn't show
> it.  parted lets me select it, but print shows info for sda?  parted
> will select and print data on sdc.
>
> Is there anything that can be done to 'rescue' the NVMe disk sdb?
>
>
> _______________________________________________
> gnhlug-discuss mailing list
> gnhlug-discuss at mail.gnhlug.org
> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
>


