i have been given a machine with 2 discs (2x 160gb, sata), linux debian 4.0, and a task to make it run as raid1.
in the beginning the layout was simple:
the first disc (/dev/sda) had 2 partitions:
- sda1 – 2gb, swap
- sda2 – rest of disc, root filesystem
second disc (sdb) didn't have any partitions.
it was up to me what exactly i would do, but the outcome had to be:
- all important data will be in raid1 setup on both discs
- current data cannot be lost
- installing everything from scratch is not an option.
- the machine has the lilo loader on it, and that shouldn't be changed
so, after some tests i did it, and i'll write down how to do it for future reference.
all naming conventions in the following text will use names from the machine described above (sda1, sda2, sdb).
- of course: apt-get install mdadm. mdadm is the tool to make raid arrays on linux.
- since the debian kernel has everything important loaded, i don't need to do anything here, but you may need to: modprobe md_mod; modprobe raid1
- create 2 partitions (sdb1, sdb2) on the sdb disc. their layout and sizes should be the same as on the source disc. in my case i decided to use the sdb1 partition as /tmp space – 2gb should be enough
- let's create a filesystem on the sdb1 partition (future /tmp space): mkfs -t ext3 /dev/sdb1
- now i need to create the md0 device (it didn't exist on my system; if it does on yours – just skip this point). to create it, use: mknod -m 0660 /dev/md0 b 9 0; chgrp disk /dev/md0
- once i have /dev/md0, i create the array. i do so by creating a new array in raid1 mode that will contain the sdb2 partition and a "missing" disc. this means the array will be in "degraded" mode, but this is perfectly fine for us
- mdadm --create /dev/md0 -l1 -n2 /dev/sdb2 missing
- now, the filesystem on /dev/md0: mkfs -t ext3 /dev/md0
- then you should edit /etc/fstab and change the device for the root filesystem from "/dev/sda2" to "/dev/md0". the ready line can look like this: "/dev/md0 / ext3 defaults 0 1"
- i add information about /tmp to fstab: "/dev/sdb1 /tmp ext3 defaults,errors=continue,noexec,nosuid 0 0". it is very important to use "0" at the end – otherwise, if one of the discs failed, the system would not boot up correctly, claiming that it can't mount /tmp.
- in /etc/lilo.conf i modify the "root=" entry to point to /dev/md0: "root=/dev/md0". "boot=" stays "/dev/sda"
- mkdir /mnt; mount /dev/md0 /mnt; cd /; tar cf - --exclude=./proc --exclude=./mnt --exclude=./sys . | ( cd /mnt; tar xvf - )
- the above series of commands makes the /mnt directory, mounts our raid device there, and copies the whole filesystem to it.
- since i skipped /proc and /sys, i have to create them now, and fix permissions: mkdir /mnt/proc /mnt/sys; chmod 555 /mnt/proc
- ok. now we have: the root filesystem on /dev/sda2, a 2-device raid1 on /dev/sdb2 (and a missing disc) with a copy of the root filesystem, and configured /etc/fstab and /etc/lilo.conf. so, just issue the "lilo" command to install the new bootblock (it should go without errors), and reboot the machine.
- after bootup the root filesystem should be mounted on /dev/md0, and cat /proc/mdstat should show that the array (md0) is working, but degraded.
- now, we add the (at the moment unused) /dev/sda2 to md0: mdadm --add /dev/md0 /dev/sda2
- the raid rebuild process now runs. its progress can be seen by viewing the /proc/mdstat file. a full rebuild took me about 40 minutes. we can't proceed before the rebuild finishes.
- after it finishes, we have to modify /etc/lilo.conf once again. this time the "boot=" parameter should be changed to "/dev/md0", and we should add a new parameter: "raid-extra-boot=/dev/sda,/dev/sdb"
- after modifying lilo.conf we should issue the "lilo" command again to make the change permanent.
- at this moment the migration is practically finished. we can simply do one more reboot to test if it works (it should, and it did work for me 🙂)
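the whole procedure above can be condensed into the following sketch. it is a dry-run – every command is prefixed with "echo", so nothing is executed; remove the RUN=echo line (and double-check the device names against your machine!) to run it for real. note the double dashes in mdadm options:

```shell
#!/bin/sh
# dry-run sketch of the migration; device names are the ones from this article
RUN=echo    # remove this line (set RUN="") to actually execute the commands

$RUN apt-get install mdadm
$RUN mkfs -t ext3 /dev/sdb1                              # future /tmp
$RUN mknod -m 0660 /dev/md0 b 9 0                        # only if /dev/md0 is missing
$RUN chgrp disk /dev/md0
$RUN mdadm --create /dev/md0 -l1 -n2 /dev/sdb2 missing   # degraded raid1
$RUN mkfs -t ext3 /dev/md0
# ... edit /etc/fstab and /etc/lilo.conf by hand as described above ...
$RUN mount /dev/md0 /mnt
# copy the root fs, skipping virtual filesystems and the mountpoint itself:
$RUN sh -c 'cd / && tar cf - --exclude=./proc --exclude=./mnt --exclude=./sys . | ( cd /mnt && tar xf - )'
$RUN mkdir /mnt/proc /mnt/sys
$RUN chmod 555 /mnt/proc
$RUN lilo
# reboot here, then attach the old root partition to the array:
$RUN mdadm --add /dev/md0 /dev/sda2
```
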
now, the procedure i showed above is not meant to be a full-fledged raid howto or manual. there are better sources for that kind of information.
this procedure is only meant to help in similar cases (lilo, migration of root filesystem to raid1).
if you have any questions about it – do not hesitate to ask. and if you don't understand something – please tell me so – i'll be glad to fix all that's not clear.
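one small helper: the "degraded" state you should see after the first reboot shows up in /proc/mdstat as "[2/1]" (2 devices expected, 1 active) with a "[_U]" marker. the snippet below runs against a mocked-up sample (the block count is made up) so it can be shown without real raid hardware; on the live machine you would grep /proc/mdstat directly:

```shell
# sample of what /proc/mdstat looks like for our degraded 2-disc raid1
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb2[1]
      156183936 blocks [2/1] [_U]

unused devices: <none>
EOF

# "_" marks the missing member, "U" the working one
if grep -q '\[_U\]\|\[U_\]' /tmp/mdstat.sample; then
    echo "md0 is degraded - safe to mdadm --add the second partition"
fi
```
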
Thanks a lot for sharing this. I’ve been postponing my migration to RAID for almost three months now, not having time to do some research and find the most efficient way of doing it.
My situation is almost identical, so it should go rather smoothly. I hope it will 🙂
Thanks again.
You do not need to have the same partition schema on both drives. I'm talking about the 3rd point. You can for example create an sdb1 partition smaller than the original sda2 partition, but big enough to copy all data from the sda2 partition. Then follow the steps till the 15th point. After reboot you can remove all partitions from the sda drive and create the same partitions as on sdb (only those for which you need RAID1). This way you can for example later create other partitions and combine them into RAID0, and/or put LVM on it.
You can also create two smaller swap partitions, one on each drive, and mount them with the same pri option in fstab to distribute access.
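the fstab entries for such balanced swap could look like this (device names are just an example; equal pri= values make the kernel stripe swap across both discs):

```
/dev/sda1  none  swap  sw,pri=1  0  0
/dev/sdb1  none  swap  sw,pri=1  0  0
```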
Let me guess – Hetzner data center and one of their DSX000 servers? 🙂
I had a similar request from one of my colleagues and I solved it roughly the same way – the most important difference was that I used a rescue system (the system shouldn’t be used anyways, moreover it gave me certainty that I will transform the data as it should be – no processes would be interfering) and rsync instead of tar.
All in all – it works, but whenever possible – just do the raid in advance 🙂
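the rsync variant mentioned above could look something like this (assuming rsync is installed). it is demonstrated on throwaway directories so it can run anywhere; for the real migration the source would be / and the destination /mnt (the mounted /dev/md0):

```shell
#!/bin/sh
# sketch: copying a root filesystem with rsync instead of tar.
# $src stands in for /, $dst for /mnt.
src=$(mktemp -d)
dst=$(mktemp -d)

# fake a tiny root filesystem to copy
mkdir -p "$src/etc" "$src/proc"
echo "test" > "$src/etc/some-config"

# -a keeps permissions/ownership/times, --numeric-ids avoids uid/gid remapping;
# the excludes are anchored at the transfer root (here: $src)
rsync -a --numeric-ids --exclude=/proc --exclude=/sys --exclude=/mnt "$src/" "$dst/"
```
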
@Pawel J. Sawicki:
yes. exactly this 🙂
Leet 🙂
BTW: Hetzner is quite ok in terms of price/quality/service/etc… (at least currently) but (as a precaution) I did order some slices from Slicehost – you never know 🙂
slicehost seems to be prohibitively expensive. at least – judging from prices on their front page.
Thanks!
Great tutorial, thanks. Just in step 7 it should be:
mdadm --create /dev/md0 -l1 -n2 /dev/sdb2 missing
hmm, ok, so it should be two dashes 🙂
@David Podhola:
yeah, there were 2 dashes. wordpress mangled the commands 🙁 same in other steps (like 12).
sorry for that.