RAID help


Mick-10
Hi All,

I haven't had to set up a software RAID for years now.  I want to set up
two RAID 1 arrays on a new file server to serve SMB to MS Windows clients.  The
first RAID 1 will have two disks, on which a multi-partition OS installation will
take place.  The second RAID 1 will have two disks for a single data partition.

From what I recall, I used mdadm with --auto=mdp to create a RAID 1 from 2
disks, before using fdisk to partition the new /dev/md0 as necessary.  All
this is lost in the fog of time.  Now I read that these days udev names the
devices/partitions, so I am not sure what the implications of this are or how
to proceed.

What is current practice?  Create multiple /dev/mdX devices for the OS partitions
I want and then put a filesystem on each one, or create one /dev/md0 which
is thereafter partitioned?  Grateful for any pointers
to resolve my confusion.
--
Regards,
Mick


Re: RAID help

Paul Hartman-3
On Tue, Oct 15, 2013 at 2:34 AM, Mick <[hidden email]> wrote:

> Hi All,
>
> I haven't had to set up a software RAID for years and now.  I want to set up
> two RAID 1 arrays on a new file server to serve SBM to MSWindows clients.  The
> first RAID1 having two disks, where a multipartition OS installation will take
> place.  The second RAID1 having two disks for a single data partition.
>
> From what I recall I used mdadm with --auto=mdp, to create a RAID1 from 2
> disks, before I used fdisk to partition the new /dev/md0 as necessary.  All
> this is lost in the fog of time.  Now I read that these days udev names the
> devices/partitions, so I am not sure what the implication of this is and how
> to proceed.
>
> What is current practice?  Create multiple /dev/mdXs for the OS partitions I
> would want and then stick a fs on each one, or create one /dev/md0 which
> thereafter is formatted with multiple partitions?  Grateful for any pointers
> to resolve my confusion.

One of the best resources is the kernel RAID wiki:
https://raid.wiki.kernel.org/


Re: RAID help

Mick-10
On Tuesday 15 Oct 2013 20:28:46 Paul Hartman wrote:

> On Tue, Oct 15, 2013 at 2:34 AM, Mick <[hidden email]> wrote:
> > Hi All,
> >
> > I haven't had to set up a software RAID for years and now.  I want to set
> > up two RAID 1 arrays on a new file server to serve SBM to MSWindows
> > clients.  The first RAID1 having two disks, where a multipartition OS
> > installation will take place.  The second RAID1 having two disks for a
> > single data partition.
> >
> > From what I recall I used mdadm with --auto=mdp, to create a RAID1 from 2
> > disks, before I used fdisk to partition the new /dev/md0 as necessary.
> > All this is lost in the fog of time.  Now I read that these days udev
> > names the devices/partitions, so I am not sure what the implication of
> > this is and how to proceed.
> >
> > What is current practice?  Create multiple /dev/mdXs for the OS
> > partitions I would want and then stick a fs on each one, or create one
> > /dev/md0 which thereafter is formatted with multiple partitions?
> > Grateful for any pointers to resolve my confusion.
>
> One of the best resources is the kernel RAID wiki:
> https://raid.wiki.kernel.org/
Thanks Paul!  After a cursory look, it seems that both ways of partitioning a
RAID 1 are still available:

https://raid.wiki.kernel.org/index.php/Partitioning_RAID_/_LVM_on_RAID

========================================================
# df -h
 Filesystem            Size  Used Avail Use% Mounted on
 /dev/md2              3.8G  640M  3.0G  18% /
 /dev/md1               97M   11M   81M  12% /boot
 /dev/md5              3.8G  1.1G  2.5G  30% /usr
 /dev/md6              9.6G  8.5G  722M  93% /var/www
 /dev/md7              3.8G  951M  2.7G  26% /var/lib
 /dev/md8              3.8G   38M  3.6G   1% /var/spool
 /dev/md9              1.9G  231M  1.5G  13% /tmp
 /dev/md10             8.7G  329M  7.9G   4% /var/www/html
=========================================================

and:

mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror --raid-devices=2
/dev/sda /dev/sdb

which is thereafter partitioned with fdisk.  This is the one I have used in
the past.


Which one is preferable, or what are the pros & cons of each?

--
Regards,
Mick


Re: RAID help

Nicolas Sebrecht
On Tue, Oct 15, 2013 at 10:42:18PM +0100, Mick wrote:

> mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror --raid-devices=2
> /dev/sda /dev/sdb
>
> which is thereafter partitioned with fdisk.  This is the one I have used in
> the past.
>
> Which one is preferable, or what are the pros & cons of each?

For a basic RAID1, the best is to keep it as simple as possible. So
mirroring while disk looks better. It will also keep MBR/GPT synced.

I tend to make manual partitions that I mirror but this is because I
usually require to do more complex setups (e.g. mixing mirror types), or
because I need to have the setup more flexible.
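For a whole-disk RAID 1 as suggested above, the creation step might look like this (a sketch only; the device and array names are taken from earlier in the thread, and the commands are destructive to existing data):

```shell
# Mirror two whole disks into one array. The partition table then
# lives on the array device, so both members stay in sync.
mdadm --create /dev/md0 --verbose \
      --level=mirror --raid-devices=2 \
      /dev/sda /dev/sdb

# Partition the array itself; partitions appear as /dev/md0p1, md0p2, ...
fdisk /dev/md0
```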

--
Nicolas Sebrecht


Re: RAID help

Nicolas Sebrecht
On Wed, Oct 16, 2013 at 08:10:40PM +0200, Nicolas Sebrecht wrote:

> On Tue, Oct 15, 2013 at 10:42:18PM +0100, Mick wrote:
>
> > mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror --raid-devices=2
> > /dev/sda /dev/sdb
> >
> > which is thereafter partitioned with fdisk.  This is the one I have used in
> > the past.
> >
> > Which one is preferable, or what are the pros & cons of each?
>
> For a basic RAID1, the best is to keep it as simple as possible. So
> mirroring while disk looks better. It will also keep MBR/GPT synced.

s/while/the whole/

> I tend to make manual partitions that I mirror but this is because I
> usually require to do more complex setups (e.g. mixing mirror types), or
> because I need to have the setup more flexible.

--
Nicolas Sebrecht


Re: RAID help

Mick-10
On Wednesday 16 Oct 2013 21:14:38 Nicolas Sebrecht wrote:

> On Wed, Oct 16, 2013 at 08:10:40PM +0200, Nicolas Sebrecht wrote:
> > On Tue, Oct 15, 2013 at 10:42:18PM +0100, Mick wrote:
> > > mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror
> > > --raid-devices=2 /dev/sda /dev/sdb
> > >
> > > which is thereafter partitioned with fdisk.  This is the one I have
> > > used in the past.
> > >
> > > Which one is preferable, or what are the pros & cons of each?
> >
> > For a basic RAID1, the best is to keep it as simple as possible. So
> > mirroring while disk looks better. It will also keep MBR/GPT synced.
>
> s/while/the whole/
>
> > I tend to make manual partitions that I mirror but this is because I
> > usually require to do more complex setups (e.g. mixing mirror types), or
> > because I need to have the setup more flexible.
Thank you both, I will go with mirroring the whole disk, before I partition
the array.
--
Regards,
Mick


Re: [O/T] RAID help - now won't boot

Mick-10
On Wednesday 16 Oct 2013 21:14:38 Nicolas Sebrecht wrote:

> On Wed, Oct 16, 2013 at 08:10:40PM +0200, Nicolas Sebrecht wrote:
> > On Tue, Oct 15, 2013 at 10:42:18PM +0100, Mick wrote:
> > > mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror
> > > --raid-devices=2 /dev/sda /dev/sdb
> > >
> > > which is thereafter partitioned with fdisk.  This is the one I have
> > > used in the past.
> > >
> > > Which one is preferable, or what are the pros & cons of each?
> >
> > For a basic RAID1, the best is to keep it as simple as possible. So
> > mirroring while disk looks better. It will also keep MBR/GPT synced.
>
> s/while/the whole/
>
> > I tend to make manual partitions that I mirror but this is because I
> > usually require to do more complex setups (e.g. mixing mirror types), or
> > because I need to have the setup more flexible.
OK, I spent some time experimenting in a VM: two small un-partitioned virtual
disks, which I used to create /dev/md0 as RAID 1 using sysrescuecd.  Then I
used fdisk to create an MSDOS partition table on /dev/md0, followed by 4
partitions on /dev/md0:
======================
~$ fdisk -l

Disk /dev/sda: 10.5 GB, 10522460160 bytes
255 heads, 63 sectors/track, 1279 cylinders, total 20551680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 10.5 GB, 10522460160 bytes
255 heads, 63 sectors/track, 1279 cylinders, total 20551680 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md0: 10.5 GB, 10521337856 bytes
2 heads, 4 sectors/track, 2568686 cylinders, total 20549488 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c3148

    Device Boot      Start         End      Blocks   Id  System
/dev/md0p1   *        2048      718847      358400   83  Linux
/dev/md0p2          718848     3790847     1536000   82  Linux swap / Solaris
/dev/md0p3         3790848    18470911     7340032   83  Linux
/dev/md0p4        18470912    20549487     1039288   83  Linux
======================

So, no partition tables on the /dev/sda or /dev/sdb drives, and of course no
partitions at all.  The partitions were created on the /dev/md0 block device.

I then rebooted with an Ubuntu server CD and installed the OS on the
above filesystems.  It recognised the RAID 1 array as /dev/md127,
instead of /dev/md0.

Trying to install GRUB on /dev/sda, /dev/sdb, or /dev/md127p1 failed.  The
only way to install GRUB and complete the Ubuntu server OS installation was to
install it on /dev/md127, which it accepted.  However, on rebooting it failed
with:  "FATAL: No boot medium found! System halted."


Rebooting with sysrescueCD and selecting the option to scan for and boot any
Linux OS it can find, it picks up the RAID 1 installation and boots into it
without any problem.  This is what I can see now:
======================
~$ lsblk
NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda         8:0    0   9.8G  0 disk  
└─md0       9:0    0   9.8G  0 raid1
  ├─md0p1 259:0    0   350M  0 md    /boot
  ├─md0p2 259:1    0   1.5G  0 md    [SWAP]
  ├─md0p3 259:2    0     7G  0 md    /
  └─md0p4 259:3    0  1015M  0 md    /home
sdb         8:16   0   9.8G  0 disk  
└─md0       9:0    0   9.8G  0 raid1
  ├─md0p1 259:0    0   350M  0 md    /boot
  ├─md0p2 259:1    0   1.5G  0 md    [SWAP]
  ├─md0p3 259:2    0     7G  0 md    /
  └─md0p4 259:3    0  1015M  0 md    /home
======================


======================
~$ df -h -T
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/md0p3     ext4   6.9G  1.2G  5.4G  18% /
udev           tmpfs   10M  8.0K   10M   1% /dev
none           tmpfs  146M  352K  146M   1% /run
none           tmpfs  5.0M     0  5.0M   0% /run/lock
none           tmpfs  730M     0  730M   0% /run/shm
/dev/md0p1     ext2   329M   27M  285M   9% /boot
/dev/md0p4     ext4   999M   18M  931M   2% /home
======================


======================
~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4]
[raid10]
md0 : active raid1 sda[0] sdb[1]
      10274744 blocks super 1.2 [2/2] [UU]
     
unused devices: <none>
======================


======================
~$ sudo blkid

/dev/sr0: LABEL="sysrcd-3.8.0" TYPE="iso9660"
/dev/sda: UUID="59195572-751a-3bd9-7771-6e5411b032c8"
UUID_SUB="3acd1b2c-1c95-7c07-a8b2-8aa1b2a0a169" LABEL="sysresccd:0"
TYPE="linux_raid_member"
/dev/sdb: UUID="59195572-751a-3bd9-7771-6e5411b032c8" UUID_SUB="c63e97ba-42cb-
c4f8-550d-f1effae33d3f" LABEL="sysresccd:0" TYPE="linux_raid_member"
/dev/md0p1: UUID="d9dbe2bc-0453-46e4-a5b0-779e55246004" TYPE="ext2"
/dev/md0p2: UUID="f1a41bba-d519-42d5-8b9d-19292da899bd" TYPE="swap"
/dev/md0p3: UUID="63d67a30-b4e9-4792-a081-cf1caae281ae" TYPE="ext4"
/dev/md0p4: UUID="d6dc0b67-cbd3-47ae-a886-34299f491279" TYPE="ext4"
======================


======================
~$ sudo mdadm -Db /dev/md0
 
ARRAY /dev/md0 metadata=1.2 name=sysresccd:0
UUID=59195572:751a3bd9:77716e54:11b032c8
======================


======================
~$ sudo mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Sat Oct 19 14:17:46 2013
     Raid Level : raid1
     Array Size : 10274744 (9.80 GiB 10.52 GB)
  Used Dev Size : 10274744 (9.80 GiB 10.52 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Sun Oct 20 10:26:56 2013
          State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : sysresccd:0
           UUID : 59195572:751a3bd9:77716e54:11b032c8
         Events : 19

    Number   Major   Minor   RaidDevice State
       0       8        0        0      active sync   /dev/sda
       1       8       16        1      active sync   /dev/sdb
======================


This is my /etc/mdadm/mdadm.conf:
======================
~$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md0 metadata=1.2 UUID=59195572:751a3bd9:77716e54:11b032c8

# This file was auto-generated on Sat, 19 Oct 2013 15:23:12 +0100
# by mkconf $Id$
======================

Any ideas why the Ubuntu installation won't boot?

PS.  In case you ask:  I'm trying Ubuntu because the user would struggle
to look after a Gentoo system for this implementation.

--
Regards,
Mick


Re: [O/T] RAID help - now won't boot

Michael Hampicke-7
Am 20.10.2013 11:54, schrieb Mick:
> Any ideas why the Ubuntu installation won't boot?
>

My guess would be that you cannot boot because you installed grub in /dev/md0.

Upon boot the BIOS cannot find stage 1 of the bootloader, which normally
lies in the MBR (which also houses the partition table).

Is a setup like the one you want - sda and sdb as RAID 1, with partitions only
on md0 - bootable at all?



Re: [O/T] RAID help - now won't boot

Mick-10
On Sunday 20 Oct 2013 13:57:34 Michael Hampicke wrote:
> Am 20.10.2013 11:54, schrieb Mick:
> > Any ideas why the Ubuntu installation won't boot?
>
> My guess would be, you cannot boot, because if you install grub in
> /dev/md0.
>
> Upon boot the bios cannot find stage1 of the bootloader, which normally
> lies in the MBR (which also houses the partition table).

I see ... so the MBR code installed in the /dev/md0 block device is further
down the disk than where the BIOS looks for it, and that's why it errors out?

Meanwhile, there is no MBR or partition table on /dev/sda or /dev/sdb for the
BIOS to jump to.  Hmm ...

It seems to me then that I *have* to create normal partitions on /dev/sda &
/dev/sdb, or I would need a different boot drive.  Is there another way to
overcome this problem?


> Is a setup as you wish - sda and sdb as raid1, and partitions only on
> md0 - bootable in general?

Yes, in this case.  It makes it easy to set a faulty drive to the failed state
and remove it from the RAID in a single step, rather than the alternative, which
would involve removing it from multiple /dev/mdX devices, one for each partition.
I could, I guess, install LVM on top of RAID, but this adds complexity for
functionality (increasing LV sizes) that will not be used in this implementation.

Any suggestions welcome.
--
Regards,
Mick


Re: [O/T] RAID help - now won't boot

Michael Hampicke-7
Am 20.10.2013 15:13, schrieb Mick:

> On Sunday 20 Oct 2013 13:57:34 Michael Hampicke wrote:
>> Am 20.10.2013 11:54, schrieb Mick:
>>> Any ideas why the Ubuntu installation won't boot?
>>
>> My guess would be, you cannot boot, because if you install grub in
>> /dev/md0.
>>
>> Upon boot the bios cannot find stage1 of the bootloader, which normally
>> lies in the MBR (which also houses the partition table).
>
> I see ... so installing the MBR code in the /dev/md0 block device is further
> down the disk than where BIOS is looking for it and that's why it errors out?
>
That would be my guess. Maybe someone more knowledgeable about how mdadm
writes to the disk can jump in and provide additional info. But
I'm pretty sure that if you install grub in md0, it's not in the place on
the disk where the BIOS is actually looking.

>
> It seems to me then that I *have* to create normal partitions on /dev/sda &
> /dev/sdb, or I would need a different boot drive.  Is there another way to
> overcome this problem.

Maybe create two mds: md1 (sda1, sdb1) as a small boot partition which
contains stage 2+, the kernel and the initramfs, and md2 (sda2, sdb2)
which acts as another block device with a partition table, etc.
In this setup you could install grub in the MBR of sda and sdb
(grub-install /dev/sda ...)

A quick google on this subject returned no usable results. But I am off
now until tomorrow.
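The two-array layout suggested above might be set up roughly as follows (a sketch under the assumption that sda1/sdb1 and sda2/sdb2 have already been created with fdisk; sizes and numbering are illustrative):

```shell
# md1: small mirror for /boot (stage2+, kernel, initramfs)
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# md2: the rest of each disk, used as a partitionable block device
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# stage1 goes into the MBR of each physical disk, so either disk can boot
grub-install /dev/sda
grub-install /dev/sdb
```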



Re: [O/T] RAID help - now won't boot

J. Roeleveld
Michael Hampicke <[hidden email]> wrote:
> Am 20.10.2013 15:13, schrieb Mick:
>> On Sunday 20 Oct 2013 13:57:34 Michael Hampicke wrote:
>>> Am 20.10.2013 11:54, schrieb Mick:
>>>> Any ideas why the Ubuntu installation won't boot?
>>>
>>> My guess would be that you cannot boot because you installed grub in
>>> /dev/md0.
>>>
>>> Upon boot the BIOS cannot find stage 1 of the bootloader, which normally
>>> lies in the MBR (which also houses the partition table).
>>
>> I see ... so the MBR code installed in the /dev/md0 block device is further
>> down the disk than where the BIOS looks for it, and that's why it errors out?
>
> That would be my guess. Maybe someone more knowledgeable about how mdadm
> writes to the disk can jump in and provide additional info. But
> I'm pretty sure that if you install grub in md0, it's not in the place on
> the disk where the BIOS is actually looking.
>
>> It seems to me then that I *have* to create normal partitions on /dev/sda &
>> /dev/sdb, or I would need a different boot drive. Is there another way to
>> overcome this problem?
>
> Maybe create two mds: md1 (sda1, sdb1) as a small boot partition which
> contains stage 2+, the kernel and the initramfs, and md2 (sda2, sdb2)
> which acts as another block device with a partition table, etc.
> In this setup you could install grub in the MBR of sda and sdb
> (grub-install /dev/sda ...)
>
> A quick google on this subject returned no usable results. But I am off
> now until tomorrow.


I would suggest trying it using the older metadata format.
Check the man pages, but I think it would be --metadata=0.90 (or similar) during creation.
That might put the metadata at the end, rather than at the front. (Or it's the other way round and the new metadata puts it at the end.)
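As a sketch, creating the array with the older format would look something like this (device names as used earlier in the thread; destructive to existing data):

```shell
# 0.90 metadata is stored at the end of each member device, leaving the
# start of the disk free for the MBR and the partition table.
mdadm --create /dev/md0 --metadata=0.90 \
      --level=mirror --raid-devices=2 \
      /dev/sda /dev/sdb
```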

--
Joost
PS. I have never tried it this way (full-disk RAID for the boot device) using Linux software RAID.
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: [O/T] RAID help - now won't boot

Mick-10
On Sunday 20 Oct 2013 15:31:12 [hidden email] wrote:

> I would suggest trying it by usong the older metadata format.
> Check the man pages, but I thinl it would be --metadata=0.90 (or similar)
> during creation. That might put the metadata at the end, rather then at
> the front. (Or it's the other way round and new metadata does it at the
> end.)
>
> --
> Joost
> Ps. I have never tried it this way (full disk raid for boot device) using
> linux software raid.
Ha!  Yes, this made a difference, thanks!  With metadata 0.90 I can see the
same partitions I set up on /dev/md0 also on /dev/sda and /dev/sdb.  The only
problem now is that the Ubuntu server CD wants to format /dev/sda2 as swap and
fails at that stage.  :-/

Not sure how to bypass this.

I may also try metadata=1.0 to see if this makes a difference, which also
positions the RAID data superblock at the end of the device:

Sub-Version Superblock Position on Device
-----------  -----------------------------
0.9         At the end of the device
1.0         At the end of the device
1.1         At the beginning of the device
1.2         4K from the beginning of the device
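The superblock version and its location on an existing member can be checked with mdadm's examine mode, e.g.:

```shell
# Prints the superblock found on the member device, including a
# "Version :" line (0.90, 1.0, 1.1 or 1.2).
mdadm --examine /dev/sda
```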

--
Regards,
Mick


Re: [O/T] RAID help - now won't boot

J. Roeleveld
Mick <[hidden email]> wrote:
> On Sunday 20 Oct 2013 15:31:12 [hidden email] wrote:
>> I would suggest trying it using the older metadata format.
>> Check the man pages, but I think it would be --metadata=0.90 (or similar)
>> during creation. That might put the metadata at the end, rather than at
>> the front. (Or it's the other way round and the new metadata puts it at
>> the end.)
>>
>> --
>> Joost
>> PS. I have never tried it this way (full-disk RAID for the boot device)
>> using Linux software RAID.
>
> Ha! Yes, this made a difference, thanks! With metadata 0.90 I can see the
> same partitions I set up on /dev/md0 also on /dev/sda and /dev/sdb. The only
> problem now is that the Ubuntu server CD wants to format /dev/sda2 as swap
> and fails at that stage. :-/
>
> Not sure how to bypass this.
>
> I may also try metadata=1.0 to see if this makes a difference, which also
> positions the RAID data superblock at the end of the device:
>
> Sub-Version  Superblock Position on Device
> -----------  -----------------------------
> 0.9          At the end of the device
> 1.0          At the end of the device
> 1.1          At the beginning of the device
> 1.2          4K from the beginning of the device

To bypass the swap format you could try either deselecting the format option (if it exists) or setting the partition type to something else.
The partition type can be set back to swap later from a livecd without having to reinstall.

Other option:
1 install to single disk

2 using sysresccd create a degraded raid1 using the 2nd drive

3 copy the partitions and date from drive 1 to the degraded raid device

4 add disk 1 to the raid

5 wait for the raid device is synchronized

6 change fstab and grub config to reflect the new disklayout
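The steps above might be sketched as follows (an untested outline; the partition layout, mount point and rsync-based copy are my assumptions, not part of the original suggestion):

```shell
# 2: build a degraded mirror using only the second drive
mdadm --create /dev/md0 --metadata=0.90 --level=1 \
      --raid-devices=2 missing /dev/sdb1

# 3: copy the installed system from disk 1 onto the degraded array
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/raid
rsync -aHAX --exclude=/proc --exclude=/sys --exclude=/dev / /mnt/raid/

# 4-5: add the first disk and wait for the resync to complete
mdadm --add /dev/md0 /dev/sda1
cat /proc/mdstat

# 6: then edit fstab and the grub config on the array to use /dev/md0
```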

--
Joost

Re: [O/T] RAID help - now won't boot

Nicolas Sebrecht-2
The 21/10/13, J. Roeleveld wrote:

>  Ha!  Yes, this made a difference, thanks!  With metadata 0.90 I can see the
>  same partitions I set up on /dev/md0, also on /dev/sda and /dev/sdb.

Sorry to come back late in this thread. As other contributors pointed
out correctly, the problem was RAID metadata at the beginning.

>                                                                        The only
>  problem now is that the Ubuntu server CD wants to format /dev/sda2 as swap and
>  fails at that stage.  :-/
>
>  Not sure how to by-pass this.

Yes. Most of the installers suck at that game. What I would do (I have
already done it this way) is:
  - install the disks in another machine with virtualization capacity;
  - create the RAID 1 (metadata=0.90);
  - create a virtual machine with the built RAID as single disk;
  - boot on the CD to install any distro;
  - move the disk out to the target bare metal machine;
  - update fstab and grub if needed.

This has the advantage of not requiring you to bypass the installer at any
stage, at the price of temporarily installing the disks somewhere else.
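Handing the assembled array to a VM as its only disk (step three above) can be done with e.g. QEMU; a sketch, where the ISO filename is an assumption:

```shell
# The guest sees /dev/md0 as one plain disk, so the installer partitions
# and formats it without knowing about the RAID underneath.
qemu-system-x86_64 -m 2048 \
    -drive file=/dev/md0,format=raw,if=virtio \
    -cdrom ubuntu-server.iso -boot d
```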

>  I may
>  also try metadata=1.0 to see if this makes a difference, which also
>  positions the RAID data superblock at the end of the device:
>
>  Sub-Version  Superblock Position on Device
>  -----------  -----------------------------
>  0.9          At the end of the device
>  1.0          At the end of the device
>  1.1          At the beginning of the device
>  1.2          4K from the beginning of the device
>
>    To bypass the swap format you could try either deselecting the format
>    option (if it exists) or setting the partition type to something else.
>    The partition type can be set back to swap later from a livecd without
>    having to reinstall.
>
>    Other option:
>    1 install to single disk
>
>    2 using sysresccd create a degraded raid1 using the 2nd drive
>
>    3 copy the partitions and date from drive 1 to the degraded raid device

What is "copy the date"?

>    4 add disk 1 to the raid

I might miss something but I guess you're going to erase the installed
system (on disk 1) from the unused disk (disk 2), here.

I believe it would only be possible by installing the system on the
degraded RAID, which will likely mean coming back to the original swap
problem.

>    5 wait for the raid device is synchronized
>
>    6 change fstab and grub config to reflect the new disklayout

--
Nicolas Sebrecht


Re: [O/T] RAID help - now won't boot

J. Roeleveld
Nicolas Sebrecht <[hidden email]> wrote:

>The 21/10/13, J. Roeleveld wrote:
>
>>    Other option:
>>    1 install to single disk
>>
>>    2 using sysresccd create a degraded raid1 using the 2nd drive
>>
>>    3 copy the partitions and date from drive 1 to the degraded raid
>device
>
>What is "copy the date"?

A typo. I meant to say "copy the data".


>>    4 add disk 1 to the raid
>
>I might miss something but I guess you're going to erase the installed
>system (on disk 1) from the unused disk (disk 2), here.

No. At this point, the raid (with disk2) has a copy of disk1.

>
>I believe it would only be possible by installing the system on the
>degraded RAID, which will likely mean coming back to the original swap
>problem.

That is why I suggested installing on a normal single disk first, then copying over onto the degraded RAID built using disk 2.

>>    5 wait for the raid device is synchronized
>>
>>    6 change fstab and grub config to reflect the new disklayout




Re: [O/T] RAID help - now won't boot

Mick-10
On Monday 21 Oct 2013 09:55:42 J. Roeleveld wrote:
> Nicolas Sebrecht <[hidden email]> wrote:

> >I believe it would only be possible by installing the system on the
> >degraded RAID, which will likely mean coming back to the original swap
> >problem.
>
> That is why I suggested installing on a normal single disk. The copying
> over onto the degraded raid using disk2.

I'm fast gravitating towards this option ...

Although with metadata 0.90 I was able to progress with the installation
(after I deselected the swap partitions), the grub-install script wanted to
install to /dev/md127p1 but failed.  I had to override the Ubuntu installer,
since I could only install grub in the /dev/md127 block device.  BTW, I'm
still at a loss as to why Ubuntu sees the RAID 1 as /dev/md127 and not as the
/dev/md0 I originally created with sysrescuecd.

Either way, it won't boot again.  Now it stays on a blank screen, with no
error shown at all.  I'll have another go with sysrescueCD to see if I can
install grub on /dev/sda and /dev/sdb, and if this doesn't work either, I'll
stop wasting time and follow your suggestion of installing on a single disk
first, before mirroring it afterwards.
--
Regards,
Mick


Re: [O/T] RAID help - now won't boot

Nicolas Sebrecht-2
The 21/10/13, Mick wrote:

> I'm fast gravitating towards this option ...
>
> Although with metadata 0.90 I was able to progress with the installation
> (after I deselected the swap partitions) the grub-install script wanted to
> install in /dev/md127p1 but it failed.  I had to override the Ubuntu installer
> since I could only install grub in the /dev/md127 block device.

Which is the one we expect. /dev/md127p1 is the first partition of
/dev/md127.

>                                                                  BTW, I'm
> still at a loss as to why for Ubuntu the RAID 1 is seen as /dev/md127 and not
> /dev/md0 which I created originally with sysrescuecd.

Names of RAID devices are assigned at boot time. They depend on
/etc/mdadm.conf, which should be part of the initramfs. Otherwise,
consider the name random.
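One common way to pin the name (a sketch; the update-initramfs step is Debian/Ubuntu-specific, and duplicate ARRAY lines already present in mdadm.conf should be pruned by hand):

```shell
# Record the array under the name it was created with, then rebuild the
# initramfs so early boot assembles it as /dev/md0 rather than /dev/md127.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```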

> Either way, it won't boot again.  Now it stays on a blank screen, no error at
> all shown.

I don't understand why this blank screen. Or do you mean a black screen?

>             I'll have another go with sysrescueCD to see if I install grub on
> /dev/sda and /dev/sdb and if this does not work either,

It should work. Linux software RAID is assembled once the kernel is up
and running; before that, the system boots as usual from a single disk.
Though I'm not sure how mdadm will handle the disk change behind its back.

Once the installed system is bootable, I suggest you try reinstalling
grub. This will be needed at some point in the future anyway, to
update it.

>                                                         I'll stop wasting time
> and follow your suggestion of installing on a single disk first, before I
> mirror it thereafter.

--
Nicolas Sebrecht


Re: [O/T] RAID help - now won't boot

Mick-10
On Tuesday 22 Oct 2013 08:10:18 Nicolas Sebrecht wrote:

> The 21/10/13, Mick wrote:
> > I'm fast gravitating towards this option ...
> >
> > Although with metadata 0.90 I was able to progress with the installation
> > (after I deselected the swap partitions) the grub-install script wanted
> > to install in /dev/md127p1 but it failed.  I had to override the Ubuntu
> > installer since I could only install grub in the /dev/md127 block
> > device.
>
> Which is the one we expect. /dev/md127p1 is the first partition of
> /dev/md127.
Right, although Ubuntu's installer would point only to /dev/md127p1.  I had to
ask it not to install GRUB in the MBR (why it would choose /dev/md127p1 as the
device where the MBR resides is another matter), which then allowed me to edit
the entry and point it to /dev/md127.  Pointing the GRUB installer to /dev/sda
or sdb failed (no filesystem found).


> > Either way, it won't boot again.  Now it stays on a blank screen, no
> > error at all shown.
>
> I don't understand why this blank screen. Or do you mean a black screen?

Yes, sorry, poor choice of words: the colour of the screen is black, and the
content blank (well, there is a horizontal, non-flashing cursor at the top
left of the screen).


> >  I'll have another go with sysrescueCD to see if I install grub on
> > /dev/sda and /dev/sdb and if this does not work either,
>
> It should work. Linux software RAID is assembled once the kernel is up
> and running. Before, the system boot as usual on a single disk. Though,
> I'm not sure how mdadm will handle the disk change behind his back.

Yes, it will!  :-)

OK, having tried a couple of options this is what I have concluded.

The superblock with metadata 0.90 is written at the end of each disk, so the
start of each member still looks like a plain disk to the BIOS and GRUB.  No
need to partition each disk separately, because any partitions created on
/dev/md0 also show up on each disk, i.e. the partition table created on
/dev/md0 is readily recognised on each of /dev/sda & sdb, as can be verified
with fdisk -l.  Installing grub on /dev/sda and /dev/sdb is a straightforward
exercise, either with the Ubuntu installer or afterwards, once booted into the
new installation with sysrescueCD.

With any other metadata format things are more complicated.  The location of
the metadata changes: the 1.1 and 1.2 formats put the superblock at (or near)
the start of the disk, which is exactly where the MBR and partition table
would otherwise live.  This is what I did to be able to use mdadm -e 1.2,
which is the default metadata format these days:


Boot with sysrescueCD to create the array and partitions on it
==============================================================
mdadm --create --auto=mdp --verbose /dev/md0 --level=mirror --raid-devices=2 \
/dev/sda /dev/sdb
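A quick read-only sanity check before partitioning (a sketch; `md0` is the
array name used above, and /proc/mdstat may be absent on kernels without md
support):

```shell
# Peek at the array state before partitioning; read-only and safe to run.
MD=md0                                     # assumption: array name from above
if [ -r /proc/mdstat ]; then
    STATUS=$(grep -A2 "^$MD " /proc/mdstat || true)
    [ -n "$STATUS" ] || STATUS="$MD not assembled yet"
else
    STATUS="no /proc/mdstat on this kernel"
fi
echo "$STATUS"
# 'mdadm --detail /dev/md0' gives the full per-member view once assembled.
```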

Then fdisk /dev/md0 and create 4 partitions.  Leave them all with the default
Linux type of 83 (unless you want to use some fs exotica here).  Then 'fdisk
/dev/sda', which will create an MSDOS partition table automatically if the
disk doesn't yet have one.  Then I created one primary partition, accepting
the default start and end values offered by fdisk.  Repeat for /dev/sdb.
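The same layout can also be scripted with sfdisk instead of walking through
fdisk interactively.  A sketch only: the sizes below are placeholders, and the
pipe into sfdisk is left commented out because it rewrites the partition
table.

```shell
# Four MBR partitions, all type 83; blank start fields let sfdisk pick the
# next free sector. The sizes are placeholders - adjust to your own layout.
LAYOUT=',20G,83
,4G,83
,2G,83
,,83'
echo "$LAYOUT"                       # review the layout first
# echo "$LAYOUT" | sfdisk /dev/md0   # uncomment to actually write it
```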


Create fs on the RAID array:
===========================
'mkfs.ext4 -c -L <label_name> /dev/md0p1' and repeat for each partition on the
RAID except for the swap partition (if you have created one for this purpose).
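The per-partition mkfs runs can be wrapped in a loop.  A sketch, assuming the
four-partition layout above with swap on /dev/md0p2; the commands are
collected into a string rather than executed, so nothing gets formatted by
accident.

```shell
# Build the mkfs command for every data partition, skipping p2 (swap here).
CMDS=""
for part in /dev/md0p1 /dev/md0p3 /dev/md0p4; do
    CMDS="$CMDS mkfs.ext4 -c -L data${part##*p} $part;"
done
echo "$CMDS"   # review, then run the commands for real
```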


Boot with Ubuntu server CD to install the OS:
============================================
Go through the tedious installation process; yeah, I know - it is faster than
Gentoo, but it *feels* more tedious to me! :p

You'll find that the fs and labels are not recognised by Ubuntu's partition
manager.  Select each RAID partition (except swap) and set a fs plus a mount
point.  These are used by the installer to populate fstab.  Leave the
partitions unformatted and ignore any warnings that come up later when you
write these changes to disk.

Continue with the installation until it is time to install GRUB.  The
installer will choose /dev/md127p1.  Select to *not* install GRUB in the MBR,
which will allow you to edit the drive entry.  Choose /dev/md127 and complete
the installation.


Format a swap partition on the RAID:
===================================
Reboot with sysrescueCD, use fdisk /dev/md0 (that's how it will be recognised
by sysrescueCD) and change the fs type of the swap partition to 82.  Then
'mkswap -c -L swap /dev/md0p2', or whichever partition you have your swap on.
Edit /etc/fstab to include your new swap partition.  I don't know why Ubuntu's
installer would not accept the swap partition on a RAID1 device, but this was
the work-around I used.
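Collected as commands, the swap work-around looks roughly like this (a
sketch: /dev/md0p2 and the 'swap' label are the assumptions from above, and
everything is echoed rather than executed):

```shell
SWAPDEV=/dev/md0p2                 # assumption: swap lives on partition 2
echo mkswap -c -L swap "$SWAPDEV"  # -c checks for bad blocks, -L sets label
echo swapon "$SWAPDEV"             # enable it for the running system
FSTAB_LINE="LABEL=swap none swap sw 0 0"
echo "$FSTAB_LINE"                 # append this line to /etc/fstab
```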


Install GRUB on each disk:
=========================
Reboot with sysrescueCD, but this time use the alternative option to boot into
a Linux OS on the disk.  It will probe the disks, assemble the RAID1 array and
boot into the Ubuntu OS.  Install grub on /dev/sda and /dev/sdb - no need to
stop the array.  Then 'update-initramfs -u' and from then on you can reboot
into your new installation.
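This last step boils down to three commands.  Sketched here with the commands
gathered into a string for review rather than run directly (execute them as
root on the real box):

```shell
# Install GRUB on the MBR of both mirror members, then rebuild the initramfs.
GRUB_CMDS=""
for disk in /dev/sda /dev/sdb; do
    GRUB_CMDS="$GRUB_CMDS grub-install $disk;"
done
GRUB_CMDS="$GRUB_CMDS update-initramfs -u"
echo "$GRUB_CMDS"
```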

Note, if you get a drive failure, you will need to reinstall grub in the MBR
of the new disk.
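For the failure case, the replacement sequence with mdadm is short.  A sketch
assuming /dev/sdb is the member that failed and has been physically swapped
for a blank disk; shown as an echoed string, since the real thing is
destructive.

```shell
NEWDISK=/dev/sdb   # assumption: sdb failed and was swapped for a blank disk
# Mark the old member failed and remove it, add the replacement so the
# mirror resyncs, then reinstate GRUB on the new disk's MBR.
REPLACE="mdadm /dev/md0 --fail $NEWDISK --remove $NEWDISK;"
REPLACE="$REPLACE mdadm /dev/md0 --add $NEWDISK;"
REPLACE="$REPLACE grub-install $NEWDISK"
echo "$REPLACE"
```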

I've written all this from memory so please correct any errors I've made.


The problem (at least with grub2, which is all that I tried) is that the grub
installer does not take kindly to installing itself on the MBR of disks which
are partitionless, and the position of the RAID metadata on the disk messes up
GRUB's ability to find the partition table.  I thought that creating a
partition table alone would stop it having a fit, but ultimately the creation
of an empty partition with the fd partition type was necessary.

I'm posting this on the off chance that anyone would like to run a RAID1 with
partitions on a single RAID device, with no LVM.  It makes removing/adding a
disk a single-line command, which reduces the likelihood of operator error for
the users I have in mind for this particular implementation.

Thanks again for your help!  :-)
--
Regards,
Mick
