On 06/26/2013 09:11 PM, David Parker wrote:
>
> What I'm looking for is a way to have the client be aware of both
> servers, and gracefully failover between them. I thought about using
> Pacemaker and Corosync to provide a virtual IP which floats between the
> servers, but would that work with NFS? Let's say I have an established
> NFS mount and server1 fails, and the virtual IP fails over to server2.
> Wouldn't there be a bunch of NFS socket and state information which
> server2 is unaware of, therefore rendering the connection useless on the
> client? Also, data integrity is essential in this scenario, so what
> about active writes to the NFS share which are happening at the time the
> server-side failover takes place?
>
I have also studied NFS fail-over with Pacemaker/Corosync/DRBD, and it
could work with NFSv3; NFSv4 uses TCP, which makes things very hard. But
even with NFSv3 I ran into strange situations (the details of which I no
longer remember); the bottom line is that I decided NFS fail-over is too
fiddly and too hard to control reliably.
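For reference, the floating-IP side of that kind of setup is the easy
part; with pcs it looks roughly like this (the address, netmask and
resource names below are only placeholders, not a tested config):

    # virtual IP that follows the active node
    pcs resource create nfs_vip ocf:heartbeat:IPaddr2 \
        ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s
    # NFS server resource, kept on the same node as the VIP
    pcs resource create nfs_daemon ocf:heartbeat:nfsserver \
        nfs_shared_infodir=/var/lib/nfs op monitor interval=30s
    pcs constraint colocation add nfs_daemon with nfs_vip INFINITY
    pcs constraint order nfs_vip then nfs_daemon

The hard part is not the VIP; it's the NFS state, plus the DRBD and
filesystem resources underneath, and that is exactly where it gets fiddly.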
Now I'm looking into Gluster instead: it replicates the data between the
nodes, and the clients mount the gluster volumes directly via glusterfs.
This seems like a much better, simpler and more robust approach. I
suggest you take a look at Gluster; it's an exceptionally good technology.
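If you want to try it, the basic recipe is short - roughly this for a
two-node replicated volume (hostnames, brick paths and the volume name
are just examples):

    # on server1
    gluster peer probe server2
    gluster volume create gv0 replica 2 \
        server1:/data/brick1/gv0 server2:/data/brick1/gv0
    gluster volume start gv0

    # on each client
    mount -t glusterfs server1:/gv0 /mnt/gv0

The server named in the mount command is only used to fetch the volume
info; after that the glusterfs client talks to all the bricks itself, so
if server1 goes down the client keeps working against server2 with no
floating IP involved.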