# HA (High Availability) for storage?   Is it possible?

## DingbatCA

I have two servers, each running one RAID5 array shared as /data via NFS. The two arrays need to be mirrored.

Looking for some way to deal with hardware failure.  Let's say one of the servers catches on fire.  How do I make sure that none of the clients lose their connection, and gracefully transfer them over to the other server?   How do I keep the servers mirrored?  

The whole point of this is fault tolerance.  Putting a single server between them to act as a load balancer only adds a single point of failure. Is there any way to do this, or do I need to shell out the $$,$$$ to a storage vendor like NetApp?

----------

## Crenshaw

http://en.wikipedia.org/wiki/List_of_file_systems

I guess you should look into the "Distributed parallel fault-tolerant file systems" section. 

http://en.wikipedia.org/wiki/GlusterFS

http://en.wikipedia.org/wiki/Lustre_(file_system)

But honestly, unless you really do have reasons to deploy such a thing, it isn't worth it.

----------

## ccp

The standard NFS mount protocol does not support automatic failover, so fully transparent failover is impossible. However, if you accept some level of manual intervention, there are some solutions.

On the NFS client you can use automount, which can be configured with multiple servers for the same share. But I think a mount point with open sessions on it will not be able to fail over without some manual action.
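For what it's worth, an autofs indirect map accepts a list of replicated servers for one share; the client picks a reachable one, but only at mount time (the hostnames and paths below are made up):

```
# /etc/auto.master - hand the /mnt directory to the map below
/mnt    /etc/auto.data

# /etc/auto.data - replicated servers for a read-only share;
# autofs probes them and mounts from the first one that responds
data    -ro,soft    server1,server2:/data
```

Note this only helps for read-only data, and once the mount is established it sticks to the chosen server, which is the limitation I mentioned above.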

Most filers actually use a shared storage system with two controllers to facilitate failover. So, assuming you have a solution that mirrors data in real time between the two servers, you could also consider using symbolic links and some scripts on the client side to do the trick.

Ping.

----------

## bbgermany

I did this with DRBD and linux-ha. You should have a look at this.

Heartbeat will take care of the IP, the mountpoints and the other needed services. DRBD will provide the mirrored network RAID.
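As a rough sketch, the whole failover group in my setup boils down to one Heartbeat v1 haresources line like this (node name, device, filesystem and IP are just examples for your setup):

```
# /etc/ha.d/haresources - node1 is the preferred primary.
# On takeover, Heartbeat promotes the DRBD resource, mounts it,
# brings up the floating service IP, then starts the NFS server.
node1 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 \
      192.168.0.100 nfs-kernel-server
```

The clients only ever mount from the floating IP, so when the standby takes over they keep talking to "the same" server.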

bb

----------

## phoenixp

 *ccp wrote:*   

> Standard NFS mounting protocol does not support automatically fail over so to have transparent fail over is impossible.

 

I don't believe anything in computing is "impossible". More effort than it's worth...sure; impossible, no way!  :Very Happy: 

It's been ages since I played around with it, but I'm pretty sure you could achieve something like this by jumping down the network layers and implementing something interesting in the routing tables. Obviously authentication needs to be propagated correctly and your data has to be mirrored in damn-near-real-time(tm).

About the load balancer being a single point of failure: if you have the budget, you can have two machines handle the load balancing (once again with source routing).

Hope this gives you some ideas to feed the search monster,

phen.

----------

## DingbatCA

GlusterFS gives me real time mirroring.  

NFS can't do failover...   Alternatives?

----------

## think4urs11

 *DingbatCA wrote:*   

> Alternatives?

 

Mounting the GlusterFS on the client via NFS?

http://www.gluster.com/community/documentation/index.php/Client_Installation_and_Configuration#NFS_Client

Combined with DRBD, that should do what you want.
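Assuming the Gluster servers (re-)export the volume over NFS as the linked doc describes, the client side stays plain NFS, e.g. an fstab entry along these lines (hostname and paths made up):

```
# /etc/fstab - mount the Gluster-backed export as ordinary NFSv3
# over TCP; the client neither knows nor cares about Gluster
server1:/data   /data   nfs   vers=3,tcp,soft,intr   0 0
```

The point being: the replication happens server-side, and the clients keep using the NFS stack they already have.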

----------

## ccp

 *phoenixp wrote:*   

> I don't believe anything in computing is "impossible". More effort than it's worth...sure; impossible, no way!  

 

I agree with your statement, but I hate it when people twist my words and take them out of context. As you quoted, I said it is impossible under standard NFS conditions; if you are willing to modify things, there is no reason it can't be done. However, I believe that was not the original poster's intent.

I think the alternative is to use GFS on top of NBD to get the result.

----------

## phoenixp

 *ccp wrote:*   

>  *phoenixp wrote:*   I don't believe anything in computing is "impossible". More effort than it's worth...sure; impossible, no way!   
> 
> I agree with your statement, but I hate it when people twist my words and take them out of context. As you quoted, I said it is impossible under standard NFS conditions; if you are willing to modify things, there is no reason it can't be done. However, I believe that was not the original poster's intent.
> 
> I think the alternative is to use GFS on top of NBD to get the result.

 

I interpret the OP differently, apparently. I think the post shows awareness of the limitations of "standard NFS condition[s]" and is thus a request for workarounds/leads the OP may not have considered otherwise.

I apologise if I offended you. I merely wanted to reassure the OP that the goal was not in fact unattainable.

C'est la vie

phen

----------

## John R. Graham

Have you all seen the Multipathing for Gentoo guide yet?

- John

----------

## DingbatCA

I could be wrong, but multipathing on Gentoo still has a single point of failure: the storage (SAN).  Unless you buy enterprise-class storage.  Which is out of my $0 budget, by a lot!

----------

## bbgermany

As I said before, you should have a look at DRBD. This looks like a good howto for DRBD + NFS with nearly any filesystem: http://www.linux-ha.org/DRBD/NFS
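To give you an idea of the scale of it, a minimal DRBD resource mirroring one local array between your two servers looks roughly like this (hostnames, devices and addresses are placeholders for your setup):

```
# /etc/drbd.conf - one resource, synchronously mirrored
# between the two NFS servers over a dedicated link
resource r0 {
  protocol C;                  # fully synchronous replication

  on nfs1 {
    device    /dev/drbd0;
    disk      /dev/md0;        # the local RAID5 array
    address   10.0.0.1:7788;
    meta-disk internal;
  }

  on nfs2 {
    device    /dev/drbd0;
    disk      /dev/md0;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

You then mount and export /dev/drbd0 (not /dev/md0 directly) on whichever node is currently primary, and let Heartbeat decide which node that is.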

bb

----------

## DingbatCA

I am currently setting up two VMs to play with DRBD.  It is the best choice I have found.

----------

