# HTTP relay/caching server. Possible?

## meulie

Hi all,

Following set-up:

box 1: 20GB of free space, and fast multi-homed connectivity.

box 2: > 100GB of file archives, but slow internet connectivity.

What I would like to do is turn box 1 into an HTTP relay/caching server:

Every time box 1 gets a request for a file that it doesn't have locally, it will retrieve it from box 2 and serve it. It will also cache the file for x days, so that when the file is requested again within those x days it can be served directly.

Is there a way to get this done? Any existing package/combination of packages that can handle this?

(it's all static pages/files).
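One way to sketch this behaviour is Squid running in reverse-proxy (accelerator) mode on box 1, with box 2 as the origin server. The hostname `archive.example.org`, the IP `192.168.1.2`, the 15 GB cache size, and the 7-day retention are all placeholders for illustration — adjust them to the real setup:

```conf
# squid.conf sketch: box 1 accelerates box 2 (all names/IPs are assumptions)
http_port 80 accel defaultsite=archive.example.org
cache_peer 192.168.1.2 parent 80 0 no-query originserver name=box2

acl archive_site dstdomain archive.example.org
http_access allow archive_site
cache_peer_access box2 allow archive_site

# On-disk cache on box 1: up to 15 GB under /var/cache/squid
cache_dir ufs /var/cache/squid 15000 16 256

# Keep objects for roughly x days (here 7 = 10080 minutes), even if the
# origin sends short/absent Expires or Last-Modified headers
refresh_pattern . 10080 90% 10080 override-expire override-lastmod
```

With something like this, a cache miss is fetched from box 2 once, and repeat requests within the refresh window are answered from box 1's disk without touching the slow link.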

----------

## Jaglover

Is there any reason you cannot mount box 2 over NFS?

----------

## alunduil

What's wrong with using squid to accomplish this?

Regards,

Alunduil

----------

## meulie

 *Jaglover wrote:*   

> Is there any reason you cannot mount box 2 over NFS?

 

Does NFS have good enough caching? I don't want box 1 to collect the file from box 2 more than once a week, even if the file is requested 100 times/day...

----------

## meulie

 *alunduil wrote:*   

> What's wrong with using squid to accomplish this?
> 
> Regards,
> 
> Alunduil

 

Nothing wrong with Squid, if it can indeed do this. My only experience with Squid is as a standard proxy server to lessen bandwidth usage when 10 people read the same newspaper online...    :Cool: 

Can you give me some pointers on how to accomplish this with Squid?

----------

## depontius

 *meulie wrote:*   

>  *Jaglover wrote:*   Is there any reason you cannot mount box 2 over NFS? 
> 
> Does NFS have good enough caching? I don't want box 1 to collect the file from box 2 more than once a week, even if the file is requested 100 times/day...

 

There is now client-side caching for NFSv4.  I haven't used it yet, so I can't say anything about how good it is.  But it's there, and I've been building my kernels with it enabled for whenever I get the time to do the userspace stuff.
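The userspace side is the `cachefilesd` daemon on top of the kernel's FS-Cache support (`CONFIG_FSCACHE` plus `CONFIG_NFS_FSCACHE`). A rough sketch of wiring it up on box 1, assuming box 2 exports `/archive` — the paths and export name are illustrative:

```shell
# /etc/cachefilesd.conf (sketch) points the cache at local disk, e.g.:
#   dir /var/cache/fscache
#   tag mycache
#   brun 10%

# Start the cache daemon, then mount the export with the 'fsc' option:
/etc/init.d/cachefilesd start
mount -t nfs4 -o fsc box2:/archive /mnt/archive
```

One caveat worth noting for the once-a-week goal: as I understand it, FS-Cache avoids re-transferring file *data*, but the NFS client still revalidates attributes with the server on access, so box 2 isn't left entirely alone; attribute-cache mount options like `actimeo` can stretch that, but not to a week.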

----------

