This assumes you already have two similar HDDs set up in a server and that you know how to install Proxmox VE (just plop the CD in the tray and follow the prompts).
Install Proxmox VE as usual onto the first drive (/dev/sda)
Add the extra packages
mdadm will ask which MD arrays are needed for the root fs. Go with the default (all)
apt-get install mdadm initramfs-tools vim
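If you would rather skip the interactive prompt, you can preseed the answer first and install non-interactively. This is just a sketch and assumes the debconf key is still named mdadm/initrdstart; the interactive install above works fine either way.
echo "mdadm mdadm/initrdstart string all" | debconf-set-selections
DEBIAN_FRONTEND=noninteractive apt-get install -y mdadm initramfs-tools vim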
Add raid1 to /etc/modules
echo "raid1" >> /etc/modules
Regenerate initrd.img file
mkinitramfs -o /boot/test -r /dev/mapper/pve-root
Rename the old img file (replace .x with whatever kernel version you are using)
mv /boot/initrd.img-2.6.x-pve /boot/initrd.img-2.6.x-pve.original
Rename new img file
mv /boot/test /boot/initrd.img-2.6.x-pve
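To be sure the raid1 module actually made it into the new image, you can list its contents. This assumes lsinitramfs is available (it ships with newer initramfs-tools); on older versions you may have to unpack the image by hand instead.
lsinitramfs /boot/initrd.img-2.6.x-pve | grep raid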
Make sure GRUB is set up on both HDDs
grub-install --no-floppy /dev/sda
grub-install --no-floppy /dev/sdb
If your /etc/fstab references the /boot partition by UUID, change that entry to point to the proper md0 device.
Replace the UUID=XXXXXXXXX /boot ext3 defaults 0 1 line with /dev/md0 /boot ext3 defaults 0 1
vim /etc/fstab
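If you would rather script the edit instead of using vim, a sed one-liner along these lines should do it. This is only a sketch: it assumes GNU sed and that the /boot line really starts with UUID=, so check the file afterwards.
sed -i 's|^UUID=[^[:space:]]*\([[:space:]]\+/boot[[:space:]]\)|/dev/md0\1|' /etc/fstab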
Clone the partition table from the first drive to the second
sfdisk -d /dev/sda | sfdisk --force /dev/sdb
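Optionally, double-check that the two drives now have matching partition layouts before going any further:
sfdisk -l /dev/sda
sfdisk -l /dev/sdb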
Create the md devices using only the second drive
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
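Both arrays should now show up in a degraded state, with one member listed as missing. A quick way to confirm:
cat /proc/mdstat
mdadm --detail /dev/md0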
Save the new mdadm.conf file
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
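The appended lines should look roughly like the ones below. The UUIDs here are just placeholders and the exact fields vary with the mdadm version, so treat this as a rough example only.
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx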
Get the boot device set up
mkfs.ext3 /dev/md0
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
cp -ax /boot/* /mnt/md0
umount /mnt/md0
umount /boot; mount /dev/md0 /boot
sfdisk --change-id /dev/sda 1 fd
mdadm --add /dev/md0 /dev/sda1
Set up the data device
pvcreate /dev/md1
vgextend pve /dev/md1
This step takes a long time, but if it works you know you have a good drive!
pvmove /dev/sda2 /dev/md1
vgreduce pve /dev/sda2
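At this point the pve volume group should be sitting entirely on /dev/md1, and /dev/sda2 should show up as a physical volume with no volume group. You can confirm this with the usual LVM reporting tools:
pvs
vgs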
sfdisk --change-id /dev/sda 2 fd
mdadm --add /dev/md1 /dev/sda2
You can monitor the array rebuild progress with the following command:
watch -n 1 cat /proc/mdstat
If for some reason mdstat reports that the rebuild will take days to complete, you can raise the minimum RAID rebuild rate with the following command:
echo 60000 >/proc/sys/dev/raid/speed_limit_min
The default setting of 1000 may be too slow in most situations. That said, raising it too much on a system that is in use is not advisable. Also make sure you don't set the value higher than /proc/sys/dev/raid/speed_limit_max.
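To see what the limits are currently set to before changing anything:
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max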
This was originally from the link below, which helped us out when we needed to install Proxmox VE with mirrored drives for a customer.
I originally found the post here: http://www.petercarrero.com/content/2010/07/31/adding-software-raid-proxmox-ve-install
The site went down and I saved a copy for myself to use. It came back up a few months later and the original post is back online. I have used this many times. Thank you!