
Re: new to pacemaker and heartbeat on debian...getting error..



On Monday 24 October 2011 23:02:42, Joey L wrote:
> > I'd say version 3 (1:3.0.3-2).
> > But you have heartbeat and corosync installed, you must choose only one
> > (I'd choose corosync).
> 
> Okay - I'll choose corosync -- though pacemaker is installed, I cannot
> control it, meaning there is no init file in the /etc/init.d directory,
> but it says it is installed.

pacemaker is launched by corosync (or by heartbeat, if that is what you chose).
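
On Debian with the corosync 1.x stack, pacemaker runs as a corosync plugin rather than from its own init script, which is why /etc/init.d has no pacemaker entry. A minimal sketch of the plugin stanza (the service.d path and ver value follow the usual corosync 1.x convention; verify against your package's documentation):

```
# /etc/corosync/service.d/pcmk  (or a service{} block in corosync.conf)
service {
    # load the Pacemaker cluster resource manager at corosync startup
    name: pacemaker
    ver: 0
}
```

With this in place, restarting corosync should also start the pacemaker daemons.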

> > cibadmin is less readable than crm configure show.
> > 
> > Here is mine, truncated and anonymised to show only one IPaddr resource:
> > 
> > root@node1:~# crm configure show
> > node node1 \
> >        attributes standby="off"
> > node node2 \
> >        attributes standby="off"
> > primitive res_IPaddr2_res1 ocf:heartbeat:IPaddr2 \
> >        params ip="A.B.C.D" \
> >        operations $id="res_IPaddr2_res1-operations" \
> >        op start interval="0" timeout="20" \
> >        op stop interval="0" timeout="20" \
> >        op monitor interval="10" timeout="20" start-delay="0"
> > property $id="cib-bootstrap-options" \
> >        expected-quorum-votes="2" \
> >        dc-version="1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04" \
> >        no-quorum-policy="ignore" \
> >        cluster-infrastructure="openais" \
> >        last-lrm-refresh="1316614496" \
> >        stonith-enabled="false"
> > 
> > No clone primitive.
> > Side note: with 2 nodes and 2 votes, you'd better configure
> > no-quorum-policy="ignore", otherwise your resources won't start with only
> > one node (and then, why use a cluster at all!). The default config is
> > fine with at least 3 nodes.
> 
> not sure where to set these options.. but I did this and this is the current state:
> 
> 
> root@deb1:/home/mjh#  crm configure show
> node deb1
> node deb2
> primitive failover-ip ocf:heartbeat:IPaddr \
> 	params ip="192.168.2.113" \
> 	op monitor interval="10s"
> property $id="cib-bootstrap-options" \
> 	dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
> 	cluster-infrastructure="openais" \
> 	expected-quorum-votes="2" \
> 	stonith-enabled="false"
> 
> 
> root@deb2:/home/mjh#  crm configure show
> node deb1
> node deb2
> primitive failover-ip ocf:heartbeat:IPaddr \
> 	params ip="192.168.2.113" \
> 	op monitor interval="10s"
> property $id="cib-bootstrap-options" \
> 	dc-version="1.0.9-74392a28b7f31d7ddc86689598bd23114f58978b" \
> 	cluster-infrastructure="openais" \
> 	expected-quorum-votes="2" \
> 	stonith-enabled="false"
> 
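
Your configuration above indeed has no no-quorum-policy set yet. With the crm shell it can be set as a cluster property at any time (a sketch; same property name as in my config earlier):

```
# Keep resources running when quorum is lost -- required for failover
# in a 2-node cluster (2 votes, so one node alone never has quorum).
crm configure property no-quorum-policy=ignore

# Check that it now appears under cib-bootstrap-options:
crm configure show
```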
> > Yes, the syslog, while very verbose, can certainly tell us why the
> > resource does not start.
> 
> here is from one server:
[...]

Here we are:

> Oct 24 16:58:54 deb1 lrmd: [4804]: info: RA output:
> (failover-ip:start:stderr) ERROR: Cannot use default route w/o netmask
> [192.168.2.113]
> Oct 24 16:58:54 deb1 IPaddr[5292]: ERROR: /usr/lib/heartbeat/findif
> failed [rc=1].
> Oct 24 16:58:54 deb1 lrmd: [4804]: WARN: Managed failover-ip:start
> process 5292 exited with return code 1.
[...]
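
That findif error generally means the agent could not match 192.168.2.113 against the subnet of any configured interface, so it could not infer a netmask. Passing the netmask (and, if needed, the NIC) explicitly usually fixes it; a sketch using the IPaddr2 agent (the cidr_netmask and nic values here are assumptions, adjust them to your network):

```
# Define the resource with an explicit prefix length and interface;
# since the primitive already exists, "crm configure edit" can be
# used to add the params in place instead.
crm configure primitive failover-ip ocf:heartbeat:IPaddr2 \
    params ip="192.168.2.113" cidr_netmask="24" nic="eth0" \
    op monitor interval="10s"
```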
 
> I have a very small network --- this IP is not being used at all.
> All I did was clone a vbox VM to another machine and install.
> After installing, I realized the NIC hardware addresses were the same,
> so I did a refresh and restarted the machines.
> 
> Do I have to stop network manager? Or do I have to do anything
> special for pacemaker and corosync?

I hope you are not using network manager for a cluster!

You really should use static IPs.
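
On Debian that means taking the cluster interface out of network manager's control and declaring it in /etc/network/interfaces instead; a sketch for one node (addresses are example values):

```
# /etc/network/interfaces -- static configuration for the cluster NIC
auto eth0
iface eth0 inet static
    address 192.168.2.111
    netmask 255.255.255.0
    gateway 192.168.2.1
```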

Here we can see in the logs that the resource agent for the IP resource cannot
find the network interface on which to add the failover IP.

Show us the output of "ifconfig".

Are your two cluster nodes vbox VMs? How is the network configured, bridged
or NAT?
I think it must be bridged to work, but I'm not sure.
