CentOS 5.7 64 bit DRBD Apache/MySQL Failover - DRBD, Heartbeat, Apache, MySQL, phpMyAdmin, Webmin, APF, BFD, and Linux Malware Detect.
This is a two-server setup on CentOS 5.7 64 bit with Apache, MySQL, phpMyAdmin, DRBD, APF, BFD, Linux Malware Detect, and Webmin.
Download and install CentOS 5 64 bit. This guide only covers one NIC, but we usually use a secondary NIC for transferring DRBD data.
We used a 20 GB root partition, a 4 GB meta partition, and a 100 GB data partition.
20 GB /
4 GB /meta
100 GB /data
Generate SSH keys on both servers and copy each public key to the other server.
on server1
----------
ssh-keygen -t dsa
on server2
----------
ssh-keygen -t dsa
on server1
----------
scp ~/.ssh/id_dsa.pub root@server2:~/.ssh/authorized_keys
on server2
----------
scp ~/.ssh/id_dsa.pub root@server1:~/.ssh/authorized_keys
on server1
----------
vi /etc/hosts or nano /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
10.0.211.180 server1
10.0.211.181 server2
on server2
----------
vi /etc/hosts or nano /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
10.0.211.181 server2
10.0.211.180 server1
Do this on both server1 and server2
-----------------------------------
wget http://apt.sw.be/redhat/el5/en/x86_64/rpmforge/RPMS//rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm
rpm -Uvh rpmforge-release-0.3.6-1.el5.rf.x86_64.rpm
wget http://prdownloads.sourceforge.net/webadmin/webmin-1.570-1.noarch.rpm
rpm -Uvh webmin-1.570-1.noarch.rpm
yum install httpd mysql-server mysql php phpmyadmin drbd83 kmod-drbd83 htop ncurses ncurses-devel php-cli php-common php-dba php-gd php-imap php-mbstring php-mcrypt php-mhash php-mysql php-ncurses php-odbc php-pdo php-pear php-pear-Auth-SASL php-pear-File php-pear-HTTP-Request php-pear-Log php-pear-MDB2 php-pear-MDB2-Driver-mysql php-pspell php-snmp php-soap php-tidy php-xml php-xmlrpc php-bcmath php-apc php-eaccelerator php-embedded php-ldap php-memcache
yum groupinstall "Cluster Storage"
cd
wget http://www.rfxn.com/downloads/apf-current.tar.gz
wget http://www.rfxn.com/downloads/bfd-current.tar.gz
wget http://www.rfxn.com/downloads/maldetect-current.tar.gz
tar xvzf apf-current.tar.gz
tar xvzf bfd-current.tar.gz
tar xvzf maldetect-current.tar.gz
yum -y update
vi /usr/share/phpmyadmin/config.inc.php
change
$cfg['Servers'][$i]['auth_type'] = 'cookie';
to
$cfg['Servers'][$i]['auth_type'] = 'http';
Also change phpmyadmin.conf
vi /etc/httpd/conf.d/phpmyadmin.conf
change
Allow from 127.0.0.1
to
Allow from all
/etc/init.d/httpd restart
Now your MySQL root password will work to access phpMyAdmin.
Quick and Dirty APF install
---------------------------
cd apf-*
sh install.sh
Edit /etc/apf/conf.apf:
vi /etc/apf/conf.apf
Test the firewall first with development mode on so you do not lock yourself out of the server, then set this to 0 to turn development mode off:
DEVEL_MODE="1"
Also edit
# Common inbound (ingress) TCP ports
IG_TCP_CPORTS="22"
Make it something like this
# Common inbound (ingress) TCP ports
IG_TCP_CPORTS="22,80,443,10000"
Test it, save it, and restart APF:
/etc/init.d/apf restart
Add opposite server to allow file
vi /etc/apf/allow_hosts.rules
Add the opposite server's IP to the allow file on each server so the firewall never blocks it.
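For example, here is a small sketch (the `allow_ip` helper name is ours, not part of APF) that appends an IP only if it is not already listed, so re-running the setup never duplicates entries:

```shell
# Hypothetical helper: append an IP to a rules file only if it is not
# already there (exact-line match), so repeated runs stay idempotent.
allow_ip() {
    # $1 = rules file, $2 = IP address
    grep -qx "$2" "$1" 2>/dev/null || echo "$2" >> "$1"
}

# On server1, allow server2 (and the reverse on server2):
# allow_ip /etc/apf/allow_hosts.rules 10.0.211.181
```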
Quick and Dirty BFD install
----------------------------
cd
cd bfd-*
sh install.sh
If you want to turn on email alerts, edit the conf.bfd file:
vi /usr/local/bfd/conf.bfd
Quick and Dirty Malware Detect install
--------------------------------------
cd
cd maldetect-*
sh install.sh
If you want to turn on email alerts, edit conf.maldet:
vi /usr/local/maldetect/conf.maldet
on server1
----------
/etc/init.d/httpd start
/etc/init.d/mysqld start
mysqladmin -u root password NEWPASSWORD
First we are going to edit drbd.conf, then copy it to the second server.
vi /etc/drbd.conf or nano /etc/drbd.conf
resource meta {
protocol C;
handlers {
pri-on-incon-degr "echo 'DRBD: primary requested but inconsistent!' | wall; /etc/init.d/heartbeat stop"; #"halt -f";
pri-lost-after-sb "echo 'DRBD: primary requested but lost!'| wall; /etc/init.d/heartbeat stop"; #"halt -f";
}
startup {
degr-wfc-timeout 30; # 30 seconds
}
disk {
on-io-error detach;
}
net {
timeout 120;
connect-int 20;
ping-int 20;
max-buffers 2048;
max-epoch-size 2048;
ko-count 30;
cram-hmac-alg "sha1";
shared-secret "jhlkjh980986987";
}
syncer {
rate 100M; # synchronization data transfer rate
al-extents 257;
}
on server1 {
device /dev/drbd0;
disk /dev/hda2;
address 10.0.211.180:7789;
meta-disk internal;
}
on server2 {
device /dev/drbd0;
disk /dev/hda2;
address 10.0.211.181:7789;
meta-disk internal;
}
}
resource data {
protocol C;
handlers {
pri-on-incon-degr "echo 'DRBD: primary requested but inconsistent!' | wall; /etc/init.d/heartbeat stop"; #"halt -f";
pri-lost-after-sb "echo 'DRBD: primary requested but lost!'| wall; /etc/init.d/heartbeat stop"; #"halt -f";
}
startup {
degr-wfc-timeout 30; # 30 seconds
}
disk {
on-io-error detach;
}
net {
timeout 120;
connect-int 20;
ping-int 20;
max-buffers 2048;
max-epoch-size 2048;
ko-count 30;
cram-hmac-alg "sha1";
shared-secret "iupoiu098098098";
}
syncer {
rate 100M; # synchronization data transfer rate
al-extents 257;
}
on server1 {
device /dev/drbd1;
disk /dev/hda5;
address 10.0.211.180:7788;
meta-disk internal;
}
on server2 {
device /dev/drbd1;
disk /dev/hda5;
address 10.0.211.181:7788;
meta-disk internal;
}
}
scp /etc/drbd.conf root@server2:/etc/
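Before touching the disks, it can be worth letting drbdadm parse the config on both nodes; `dump` prints the parsed resources and complains loudly about syntax errors:

```shell
# Parse-check /etc/drbd.conf; error output here means a typo in the config.
drbdadm dump all
```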
Do this on both server1 and server2
-----------------------------------
dd if=/dev/zero of=/dev/hda2 bs=1M count=50
dd if=/dev/zero of=/dev/hda5 bs=1M count=50
drbdadm create-md meta
drbdadm create-md data
on server1
----------
mkfs.ext3 /dev/drbd0
mkfs.ext3 /dev/drbd1
Do this on server1 then server2
-----------------------------------
/etc/init.d/drbd start
on server1
----------
drbdadm -- --overwrite-data-of-peer primary meta
drbdadm -- --overwrite-data-of-peer primary data
Now you have to wait for your drives to sync. You can check the status a few ways:
1. /etc/init.d/drbd status
2. service drbd status
3. cat /proc/drbd
4. watch -n 0 cat /proc/drbd
5. watch -n 0 /etc/init.d/drbd status
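If you want to script the wait, here is a minimal sketch (`drbd_synced` is our own helper, not part of DRBD). While a resync is running, /proc/drbd shows a disk state of Inconsistent, so we simply check for that word:

```shell
# Sketch: succeed only when no DRBD resource still reports an
# Inconsistent disk state in /proc/drbd.
# PROC_DRBD is overridable so the logic can be tried on a saved copy.
PROC_DRBD="${PROC_DRBD:-/proc/drbd}"

drbd_synced() {
    ! grep -q 'Inconsistent' "$PROC_DRBD" 2>/dev/null
}

# Example polling loop:
# until drbd_synced; do sleep 30; done; echo "sync finished"
```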
We can now mount our DRBD drives and start moving data onto them.
On Server1
----------
mount /dev/drbd1 /data/
mount /dev/drbd0 /meta/
Move MySQL to a DRBD drive. You could do this a lot of ways; this is what we did.
-------------------------------------------------------------------------------
mkdir /data/var
mkdir /data/var/lib
/etc/init.d/mysqld stop
cd /var/lib
mv mysql /data/var/lib/
ln -s /data/var/lib/mysql/ mysql
mkdir /meta/etc
cd /etc/
mv my.cnf /meta/etc/
ln -s /meta/etc/my.cnf my.cnf
/etc/init.d/mysqld start
Move Apache to DRBD Drive
-------------------------
cd /etc
/etc/init.d/httpd stop
mv httpd /meta/etc
ln -s /meta/etc/httpd httpd
cd /meta/etc/httpd
rm -rf *logs* *modules* *run*
ln -s /var/log/httpd logs
ln -s /usr/lib64/httpd/modules modules
ln -s /var/run run
cd /var
mv www /data/var/
ln -s /data/var/www www
/etc/init.d/httpd start
Now we need to get the second server ready for when we fail over!
on server2
----------
mkdir /meta
mkdir /data
cd /etc
rm -rf my.cnf
ln -s /meta/etc/my.cnf my.cnf
rm -rf httpd
ln -s /meta/etc/httpd httpd
cd /var
rm -rf www
ln -s /data/var/www www
cd lib
rm -rf mysql
ln -s /data/var/lib/mysql mysql
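A quick sanity-check sketch (`check_link` is our own helper) to confirm each path really is a symlink pointing into the DRBD mounts:

```shell
# Sketch: succeed only if $1 is a symlink whose target is exactly $2.
check_link() {
    [ -L "$1" ] && [ "$(readlink "$1")" = "$2" ]
}

# Example usage on server2:
# check_link /etc/my.cnf /meta/etc/my.cnf && echo "my.cnf ok"
# check_link /var/lib/mysql /data/var/lib/mysql && echo "mysql ok"
```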
Everything you need should now be on your DRBD drives. Next you need to set up Heartbeat for failover.
I personally would wait until everything is done syncing, then fail over manually just to verify your drives are in sync.
Make sure to put some data on server1 first.
Manual failover
---------------
On Server1
----------
service drbd status
/etc/init.d/httpd stop
/etc/init.d/mysqld stop
umount /data
umount /meta
drbdadm secondary meta
drbdadm secondary data
On Server2
----------
service drbd status
drbdadm primary meta
drbdadm primary data
mount /dev/drbd0 /meta
mount /dev/drbd1 /data
/etc/init.d/mysqld start
/etc/init.d/httpd start
We now have our data tested and working... Time to automate it...
Do this on both server1 and server2
-----------------------------------
wget -O /etc/yum.repos.d/pacemaker.repo http://clusterlabs.org/rpm/epel-5/clusterlabs.repo
yum install -y pacemaker corosync heartbeat openaislib openais-devel pacemaker-libs pacemaker-libs-devel
yum update
/etc/init.d/heartbeat start
On Server1
----------
vi /etc/ha.d/ha.cf
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
bcast eth0
bcast eth1
auto_failback off
node server1
node server2
crm yes
Save ha.cf and scp it to server2.
scp /etc/ha.d/ha.cf root@server2:/etc/ha.d/
Now edit authkeys and scp it to server2:
vi /etc/ha.d/authkeys
auth 1
1 sha1 654654htrt
scp /etc/ha.d/authkeys root@server2:/etc/ha.d/
chmod 600 /etc/ha.d/authkeys
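Instead of a hand-typed secret like the one above, you can generate a random one. This is just a sketch (it writes to a hypothetical /tmp path for illustration; the real file is /etc/ha.d/authkeys and must be identical on both nodes):

```shell
# Generate a random 40-hex-character secret and write an authkeys file.
KEY=$(dd if=/dev/urandom bs=64 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$KEY" > /tmp/authkeys.example
chmod 600 /tmp/authkeys.example
```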
on Server2
----------
chmod 600 /etc/ha.d/authkeys
We are now ready to configure Heartbeat/Corosync/Pacemaker using crm.
This configuration has one shared IP, MySQL, Apache, and the Meta and Data DRBD drives, with server1 as the master and server2 as the slave.
On Server1
------------
crm
configure
edit
Paste the following after your node lines.
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip="10.0.211.235" broadcast="10.0." cidr_netmask="24" nic="eth0" \
op monitor interval="30s" timeout="5s" \
meta target-role="Started"
primitive Data ocf:heartbeat:Filesystem \
params device="/dev/drbd1" directory="/data" fstype="ext3"
primitive Meta ocf:heartbeat:Filesystem \
params device="/dev/drbd0" directory="/meta" fstype="ext3"
primitive drbd_data ocf:linbit:drbd \
params drbd_resource="data" \
op monitor interval="15s"
primitive drbd_meta ocf:linbit:drbd \
params drbd_resource="meta" \
op monitor interval="15s"
primitive httpd lsb:httpd \
meta target-role="Started"
primitive mysqld lsb:mysqld \
meta target-role="Started"
group g_drbd drbd_meta drbd_data
group g_services ClusterIP Meta Data httpd mysqld
ms ms_g_drbd g_drbd \
meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
colocation c_g_services_on_g_drbd inf: g_services ms_g_drbd:Master
order o_g_services_after_g_drbd inf: ms_g_drbd:promote g_services:start
property $id="cib-bootstrap-options" \
dc-version="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87" \
cluster-infrastructure="Heartbeat" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false" \
last-lrm-refresh="1322262755"
commit
Now type:
cd
status
and you should see something like this, although it may take a few minutes:
crm(live)# status
============
Last updated: Sat Dec 3 00:15:07 2011
Stack: Heartbeat
Current DC: server1 (47f36fac-dd26-4170-9ac9-fc10aba33e28) - partition with quorum
Version: 1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87
2 Nodes configured, 2 expected votes
2 Resources configured.
============
Online: [ server2 server1 ]
Resource Group: g_services
ClusterIP (ocf::heartbeat:IPaddr2): Started server1
Meta (ocf::heartbeat:Filesystem): Started server1
Data (ocf::heartbeat:Filesystem): Started server1
httpd (lsb:httpd): Started server1
mysqld (lsb:mysqld): Started server1
crond (lsb:crond): Started server1
Master/Slave Set: ms_g_drbd
Masters: [ server1 ]
Slaves: [ server2 ]
Now test failing over. I make mistakes, so if you see one, let me know.
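One way to exercise failover (this assumes the crm shell that ships with Pacemaker; run it from either node):

```shell
# Put server1 in standby; Pacemaker should migrate g_services and promote
# the DRBD master role to server2.
crm node standby server1

# Watch until everything reports Started on server2.
crm status

# Bring server1 back online as the standby node.
crm node online server1
```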
I have been using DRBD and these other services for years, and I used many sources over time to gather this information!
Thank you all that helped!