– Setup a minimal CentOS 5 install on both nodes
– be sure that both nodes can correctly resolve each other's names (either through DNS or /etc/hosts)
– yum update (as usual …)
– yum install heartbeat drbd kmod-drbd (available in the extras repository)
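For the /etc/hosts approach, one entry per node on both machines is enough (the IP addresses below are placeholders, not taken from the article):

```
192.168.0.1   node1 node1.centos.org
192.168.0.2   node2 node2.centos.org
```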

Current situation
* node1 , source disk /dev/sdb that will be replicated
* node2 , target disk /dev/sdb

DRBD Configuration

vi /etc/drbd.conf
global { usage-count no; }
resource repdata {
  protocol C;
  startup { wfc-timeout 0; degr-wfc-timeout 120; }
  disk { on-io-error detach; } # or panic, …
  net { cram-hmac-alg "sha1"; shared-secret "Cent0Sru!3z"; } # don't forget to choose a secret for auth !
  syncer { rate 10M; }
  on node1 {
    device /dev/drbd0;
    disk /dev/sdb;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/sdb;
    meta-disk internal;
  }
}
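The syncer rate of 10M caps resynchronization at roughly 10 MiB/s; a quick shell sketch of how long a full initial sync would take (the 100 GiB device size is only an assumed example, not from this setup):

```shell
# Estimate full-sync duration from the DRBD syncer rate.
# SIZE_GIB is a hypothetical device size for illustration.
SIZE_GIB=100
RATE_MIB_S=10                      # syncer { rate 10M; } => ~10 MiB/s
SIZE_MIB=$((SIZE_GIB * 1024))
SECS=$((SIZE_MIB / RATE_MIB_S))
echo "Estimated full sync: $((SECS / 60)) minutes"
```

Raising the rate speeds up the initial sync but steals bandwidth from the replication link, so keep it well below the link capacity.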

scp /etc/drbd.conf root@node2:/etc/

– Initialize the meta-data area on disk before starting drbd (on both nodes !)
[root@node1 etc]# drbdadm create-md repdata
[root@node2 etc]# drbdadm create-md repdata

– start drbd on both nodes (service drbd start)
[root@node1 etc]# service drbd start
[root@node2 etc]# service drbd start

[root@node1 etc]# drbdadm -- --overwrite-data-of-peer primary repdata
[root@node1 etc]# watch -n 1 cat /proc/drbd
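In /proc/drbd, cs: is the connection state and st: the local/peer roles; a small sketch of how a script could check for the healthy Primary/Secondary state (the sample line mimics typical drbd 8.0 output, it is not captured from a real cluster):

```shell
# Parse a /proc/drbd status line; on a live node you would read the
# real file instead, e.g.: line=$(grep 'cs:' /proc/drbd)
line='0: cs:Connected st:Primary/Secondary ds:UpToDate/UpToDate C r---'
case "$line" in
  *cs:Connected*st:Primary/*) msg="primary and connected" ;;
  *cs:Connected*)             msg="connected, secondary"  ;;
  *)                          msg="degraded: $line"       ;;
esac
echo "$msg"
```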

– we can now format /dev/drbd0 and mount it on node1 : mkfs.ext3 /dev/drbd0 ; mkdir /repdata ; mount /dev/drbd0 /repdata
– create some fake data on node 1 :
[root@node1 etc]# for i in {1..5};do dd if=/dev/zero of=/repdata/file$i bs=1M count=100;done
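To later verify on node2 that the replicated files are intact, checksums can be recorded on node1 first; a runnable sketch (a temporary directory stands in for /repdata so it works anywhere):

```shell
# Record and verify checksums of test files; on the real setup,
# replace "$dir" with /repdata and re-run the check after failover.
dir=$(mktemp -d)
for i in 1 2 3; do
  dd if=/dev/zero of="$dir/file$i" bs=1024 count=10 2>/dev/null
done
md5sum "$dir"/file* > "$dir/checksums.md5"
md5sum -c --quiet "$dir/checksums.md5" && result="all files OK"
echo "$result"
rm -rf "$dir"
```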

– now switch manually to the second node :
[root@node1 /]# umount /repdata ; drbdadm secondary repdata
[root@node2 /]# mkdir /repdata ; drbdadm primary repdata ; mount /dev/drbd0 /repdata
[root@node2 /]# ls /repdata/
file1 file2 file3 file4 file5 lost+found
Great, data was replicated … now let's delete/add some files :
[root@node2 /]# rm /repdata/file2 ; dd if=/dev/zero of=/repdata/file6 bs=100M count=2

– Now switch back to the first node :
[root@node2 /]# umount /repdata/ ; drbdadm secondary repdata
[root@node1 /]# drbdadm primary repdata ; mount /dev/drbd0 /repdata
[root@node1 /]# ls /repdata/
file1 file3 file4 file5 file6 lost+found

OK … DRBD is working … let's be sure that it will always be started : chkconfig drbd on (on both nodes)

Heartbeat V2 Configuration

vi /etc/ha.d/ha.cf
keepalive 1
deadtime 30
warntime 10
initdead 120
bcast eth0
node node1
node node2
crm yes
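If broadcast is not possible on the network, heartbeat can use unicast instead of the bcast line (the peer IP below is a placeholder; each node points at the other):

```
ucast eth0 192.168.0.2
```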

vi /etc/ha.d/authkeys (with permissions 600 !!!) :
auth 1
1 sha1 MySecret
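Rather than a fixed word like MySecret, the key can be generated from /dev/urandom (a common approach; the file name authkeys.sample is used here so the sketch does not touch /etc/ha.d):

```shell
# Generate a random sha1 secret and write a sample authkeys file
# with the required 600 permissions.
secret=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum | awk '{print $1}')
printf 'auth 1\n1 sha1 %s\n' "$secret" > authkeys.sample
chmod 600 authkeys.sample
cat authkeys.sample
```

Both nodes must of course end up with the same secret.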

Start the heartbeat service on node1 :

[root@node1 ha.d]# service heartbeat start
Starting High-Availability services: [OK]

Check the cluster status :
[root@node1 ha.d]# crm_mon

Now replicate ha.cf and authkeys to node2 and start heartbeat there :
[root@node1 ha.d]# scp /etc/ha.d/ha.cf /etc/ha.d/authkeys root@node2:/etc/ha.d/
[root@node2 ha.d]# service heartbeat start

Verify cluster with crm_mon :
Last updated: Wed Sep 12 16:20:39 2007
Current DC: node1.centos.org (6cb712e4-4e4f-49bf-8200-4f15d6bd7385)
2 Nodes configured.
0 Resources configured.
Node: node1 (6cb712e4-4e4f-49bf-8200-4f15d6bd7385): online
Node: node2 (f6112aae-8e2b-403f-ae93-e5fd4ac4d27e): online

vi /var/lib/heartbeat/crm/cib.xml

 <cib generated="false" admin_epoch="0" epoch="25" num_updates="1" have_quorum="true" ignore_dtd="false" num_peers="0" cib-last-written="Sun Sep 16 19:47:18 2007" cib_feature_revision="1.3" ccm_transition="1">
   <configuration>
     <crm_config/>
     <nodes>
       <node id="6cb712e4-4e4f-49bf-8200-4f15d6bd7385" uname="node1" type="normal"/>
       <node id="f6112aae-8e2b-403f-ae93-e5fd4ac4d27e" uname="node2" type="normal"/>
     </nodes>
     <resources>
       <group id="My-DRBD-group" ordered="true" collocated="true">
         <primitive id="IP-Addr" class="ocf" type="IPaddr2" provider="heartbeat">
           <instance_attributes id="IP-Addr_instance_attrs">
             <attributes>
               <nvpair id="IP-Addr_target_role" name="target_role" value="started"/>
               <nvpair id="2e967596-73fe-444e-82ea-18f61f3848d7" name="ip" value=""/><!-- fill in the cluster's virtual IP -->
             </attributes>
           </instance_attributes>
         </primitive>
         <instance_attributes id="My-DRBD-group_instance_attrs">
           <attributes>
             <nvpair id="My-DRBD-group_target_role" name="target_role" value="started"/>
           </attributes>
         </instance_attributes>
         <primitive id="DRBD_data" class="heartbeat" type="drbddisk" provider="heartbeat">
           <instance_attributes id="DRBD_data_instance_attrs">
             <attributes>
               <nvpair id="DRBD_data_target_role" name="target_role" value="started"/>
               <nvpair id="93d753a8-e69a-4ea5-a73d-ab0d0367f001" name="1" value="repdata"/>
             </attributes>
           </instance_attributes>
         </primitive>
         <primitive id="FS_repdata" class="ocf" type="Filesystem" provider="heartbeat">
           <instance_attributes id="FS_repdata_instance_attrs">
             <attributes>
               <nvpair id="FS_repdata_target_role" name="target_role" value="started"/>
               <nvpair id="96d659dd-0881-46df-86af-d2ec3854a73f" name="fstype" value="ext3"/>
               <nvpair id="8a150609-e5cb-4a75-99af-059ddbfbc635" name="device" value="/dev/drbd0"/>
               <nvpair id="de9706e8-7dfb-4505-b623-5f316b1920a3" name="directory" value="/repdata"/>
             </attributes>
           </instance_attributes>
         </primitive>
       </group>
     </resources>
     <constraints>
       <rsc_location id="runs_on_pref_node" rsc="My-DRBD-group">
         <rule id="prefered_runs_on_pref_node" score="100">
           <expression attribute="#uname" id="786ef2b1-4289-4570-8923-4c926025e8fd" operation="eq" value="node1"/>
         </rule>
       </rsc_location>
     </constraints>
   </configuration>
   <status/>
 </cib>

Firewall considerations
You will need to make sure that the nodes can talk to each other on the following ports :
DRBD : 7788 (tcp)
Heartbeat : 694 (udp)
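On CentOS 5 these rules can go in /etc/sysconfig/iptables; a sketch (the RH-Firewall-1-INPUT chain is the stock CentOS one, and heartbeat's default udp port 694 is assumed):

```
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport 7788 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 694 -j ACCEPT
```

Restricting both rules with -s <peer ip> keeps the ports closed to the rest of the network.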


– If you still have a Heartbeat v1 haresources file, it can be converted to a cib with the provided script :
/usr/local/lib/heartbeat/haresources2cib.py --stdout -c /etc/ha.d/ha.cf /etc/ha.d/haresources

Some useful crm_resource commands :

– list all configured resources :
crm_resource -L

– find on which node a resource is running :
crm_resource -W -r DRBD_data

– start/stop a resource :
crm_resource -r DRBD_data -p target_role -v started
crm_resource -r DRBD_data -p target_role -v stopped

– show the definition of a resource (xml) :
crm_resource -x -r DRBD_data

– migrate a resource away from its current node :
crm_resource -M -r DRBD_data

– migrate a resource to a specific node :
crm_resource -M -r DRBD_data -H node2

– un-migrate (remove the constraints created by -M) :
crm_resource -U -r DRBD_data

– delete a resource :
crm_resource -D -r DRBD_data -t primitive

– delete a whole group :
crm_resource -D -r My-DRBD-group -t group

– tell the cluster to stop/start managing a resource :
crm_resource -p is_managed -r DRBD_data -t primitive -v off
crm_resource -p is_managed -r DRBD_data -t primitive -v on

– clean up a resource's status on a node :
crm_resource -C -H node2 -r DRBD_data

– re-probe for resources, on all nodes or on one node only :
crm_resource -P
crm_resource -P -H node2
