# Fastest Apache Configuration for Single Purpose Web Server?

## Crimjob

Hello All!

I currently have multiple Gentoo machines set up, but for one of them (which happens to be my "router"), I've set up hosts-file-based ad blocking. I've pumped ~24,000 malicious URLs into the hosts file and pointed them to 127.0.0.1. I've set the local error page for Apache to a shorter delay and customized it so I know when it's working. It seems to work great, effectively blocking 99% of advertisements I'd see on a day-to-day basis. Only thing is, sometimes it's slow.
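For anyone unfamiliar with the setup, the hosts file just maps each blocked domain to the local machine. The domains below are placeholders, not entries from my actual list:

```
# /etc/hosts - each blocked domain resolves locally, where Apache
# serves the "ad blocked" page instead of the real ad
127.0.0.1    ads.example.com
127.0.0.1    tracker.example.net
```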

The content of the pages I'm serving from this Apache server is merely the plain text "ad blocked". This usually loads instantly, but sometimes, regardless of the Apache error delay, it still takes 20-35 seconds for the page to fully load (the straggler being the "ad" that was "blocked").

Is anyone aware of a "stripped" mode or settings for Apache to make it work as fast as possible to serve these requests? Or would I be better off doing something else like RBLDNS?

----------

## cach0rr0

nginx == your friend

does heaps better for massive concurrency and static content than apache. apache is way overkill for that (honestly, nginx probably is too, but it can easily handle that load)

if you're open to the nginx route, happy to help with config - if not, and you'd rather stick with apache, no worries!
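for reference, a minimal server block along those lines might look roughly like this (listen address and the "ad blocked" body are assumptions based on the setup described above):

```
# sketch only - answer every request with the same tiny plain-text
# response, logging off to keep it as light as possible
server {
    listen 127.0.0.1:80 default_server;
    access_log off;

    location / {
        default_type text/plain;
        return 200 "ad blocked";
    }
}
```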

----------

## Crimjob

Looks interesting, thanks for the quick reply!

I'll certainly give it a try and see what happens  :Smile: 

I've got other machines on the network for real web serving, so I'm more than flexible. Apache is just widely used at work so I'm familiar with it.

----------

## malern

Maybe I'm missing something but wouldn't it be quicker and easier to not run anything and just keep port 80 closed? That way your web browser will give up as soon as the connection is refused, which should be quicker than it sending the http request and parsing the error doc each time.

----------

## cach0rr0

 *malern wrote:*   

> Maybe I'm missing something but wouldn't it be quicker and easier to not run anything and just keep port 80 closed? 

 

He'd have to do that for every single one of those 20k URLs

and it wouldn't help for e.g. malware hosted on an alternate port

----------

## malern

 *cach0rr0 wrote:*   

> He'd have to do that for every single one of those 20k URLs

 

I mean keep port 80 closed on 127.0.0.1 (i.e. don't have any servers listening on 127.0.0.1:80, so any connections get an RST response). He's set up his hosts file to send all the bad domains to 127.0.0.1, so his web browser would try to connect there, get a connection-refused response, and move on to the next thing. Currently he's running Apache on that port to return the same error doc for all requests, so the browser is doing a full HTTP request and response for each item, which seems a bit redundant as he just wants those requests to fail.
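To illustrate the difference: connecting to a local port with no listener fails immediately with "connection refused" (a TCP RST), so the browser can give up at once instead of completing an HTTP exchange. A quick sketch (the port number is an arbitrary choice assumed to be unused):

```python
import socket

def is_refused(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if connecting to host:port is refused immediately."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # something answered, so the port is open
    except ConnectionRefusedError:
        return True       # an RST came straight back - instant failure

# nothing listening on this port -> refused right away, no waiting
print(is_refused("127.0.0.1", 59999))
```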

----------

## Anarcho

But the problem is that I think you get an error page displayed where the ads should be. I think it wouldn't look too nice.

----------

## Crimjob

 *Anarcho wrote:*   

> But the problem is that I think you get an error page displayed where the ads should be. I think it wouldn't look too nice.

 

On top of that, it doesn't always work nicely. It was the first idea I tried, prior to redirecting the requests to a web server. Some pages behaved: they stopped loading once the page was done. Others weren't so nice, spinning the little loading logo in whatever browser you use until the request finally timed out because it couldn't find some specific affiliate URL, which is quite annoying. The workaround for that was of course to give it a page to always display no matter what.

Currently working on some production systems so this is on hold but it's definitely the first idea I'm going to try once I get some free time / go back to night shifts  :Smile: 

----------

## Ant P.

It'd be easier (and more efficient on all machines involved) to point those domains at an empty BIND zone. The web browser will give up as soon as it gets the NXDOMAIN response from the DNS server instead of trying to make an HTTP connection, and the browser does prefetching and caching of DNS queries already.
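A sketch of what that looks like in named.conf (zone name and file path are examples; `db.empty` is an empty zone file containing only SOA and NS records, as shipped by some distros):

```
// claim authority over the blocked domain with an empty zone, so any
// lookup under it gets NXDOMAIN straight away
zone "ads.example.com" {
    type master;
    file "/etc/bind/db.empty";
};
```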

----------

## darkphader

 *Ant P. wrote:*   

> It'd be easier (and more efficient on all machines involved) to point those domains at an empty BIND zone. The web browser will give up as soon as it gets the NXDOMAIN response from the DNS server instead of trying to make an HTTP connection, and the browser does prefetching and caching of DNS queries already.

 

Late to see this post but I'm in agreement with this method. I use Unbound for DNS cache and simply put, for example:

```
local-zone: "fastclick.net." refuse
local-zone: "incredimail.com." refuse
```

lines in the unbound.conf file.

If you really wanted to pull up a web page you could use a stub-zone (to point to an authoritative server such as NSD or BIND acting in this manner) or local-data instead of a refusal. Point all of your systems at the Unbound cache to eliminate access to those sites and keep all edits in one place.
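For the local-data variant, that looks roughly like the following in unbound.conf (the address is whatever host runs your web server; shown here as localhost):

```
# answer the blocked domain locally instead of refusing the query
local-zone: "fastclick.net." redirect
local-data: "fastclick.net. A 127.0.0.1"
```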

Chris

----------

