# T1 speed out of a modem!

## dreamer3

Ok, so it can't really be that fast... What I'm trying to do is speed it up as much as possible.

I am using:

- named, for DNS name caching
- squid, as a proxy cache (and transparent proxy for the local network)
- /etc/hosts, modified to direct ad servers to 127.0.0.1

Here are my problems/questions.

1. How can I get named to cache names LONGER (or is that limited by the numbers in the DNS record)?

2. How can I get squid to look into /etc/hosts for those blocked IPs and quit serving them (it appears to be going straight to DNS to resolve the names)?

3. Are there any easy tweaks I can make to squid so it will cache non-HTML files (graphics, etc.) LONGER without checking for new versions?

I'm used to cable/DSL so I'm trying to optimize my connection as much as possible. Thanks in advance for your help.

----------

## rac

 *dreamer3 wrote:*   

> How can I get squid to look into /etc/hosts for those blocked IPs and quit serving them

 

You could try adding the line "order hosts,bind" to /etc/host.conf.
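If you don't have that file yet, a minimal /etc/host.conf might look like this (the `multi` line is optional and just an assumption about a typical setup):

```
# resolve via /etc/hosts first, then fall back to DNS (bind)
order hosts,bind
# allow a host listed in /etc/hosts to have multiple addresses
multi on
```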

----------

## dreamer3

I already have that file but it's named hosts.conf?

Should it be host.conf? (without the s)

EDIT: Ok, I copied it to the correct name.  Ping and wget (non-proxied) seem to work fine for the ad-blocked sites, but anything that goes through squid still downloads the ads just fine.  Is there an additional squid config value I'm going to need to change?

----------

## helmers

I've heard that you can use Privoxy in conjunction with squid. Read up on www.privoxy.org  :Razz: 

--

Regards,

Helmers

----------

## Matje

You _can_ get squid to block certain IPs/hostnames, you know...

```
acl spam url_regex eguard.com beweb.com metriweb.be sitestat.pi.be offshoreclicks.com saucybabes.net doubleclick.net
http_access deny spam
```

----------

## elzbal

 *dreamer3 wrote:*   

> 1. How can I get named to cache names LONGER (or is that limited by the numbers in the DNS record)?

 

That is limited by a 'TTL' (Time To Live) number in the DNS record. This is chosen by the DNS administrator of the site you are browsing. The purpose is to make sure that DNS names correctly expire and true name lookups occur on a regular basis, and is helpful when systems get moved to new locations. Admins generally set this to something between an hour and a day, with a day being more common. In short, that is not something that you can control with normal DNS products (BIND/named, djbdns).
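For illustration, the TTL is just a field in each zone record; this is a hypothetical zone snippet, not taken from any real site:

```
; <name>           <TTL>  <class> <type> <value>
www.example.com.   3600   IN      A      192.0.2.10   ; cacheable for at most 1 hour
```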

A good simple summary (which ironically is attached to some Windows DNS product) can be found here:

http://www.jhsoft.com/help/df_ttl.htm

For what it's worth, using your ISP's DNS servers may be a better way to go. They have more users, and therefore have more DNS entries cached. Getting a cached record from the ISP is almost certainly slower than getting a cached record from the local server, but the ISP's cache hit rate will be much higher and the ISP's cache miss penalty will probably be a bit lower.

Also, you should be aware that every once in a while, a serious root-gaining exploit will come out in various versions of BIND/named. By running named right now, you may be vulnerable to a rather serious attack.

http://www.cert.org/advisories/CA-2002-31.html

(If you emerged BIND in the normal Gentoo manner, you are probably running 9.2.x, which was not mentioned in this advisory. However, IMHO, it's just a matter of time. BIND has a pretty poor record of security.)

----------

## dreamer3

 *elzbal wrote:*   

>  *dreamer3 wrote:*   1. How can I get named to cache names LONGER (or is that limited by the numbers in the DNS record?) That is limited by a 'TTL' (Time To Live) number in the DNS record ... chosen by the DNS administrator ... purpose is to make sure that DNS names correctly expire and true name lookups occur on a regular basis ...  that is not something that you can control with normal DNS products (BIND/named, djbdns).
> 
> 

 

I'm familiar with the concept, but thanks for the review.  I didn't realize it would be so hard to artificially adjust the TTLs...

 *Quote:*   

> For what it's worth, using your ISP's DNS servers may be a better way to go ... Getting a cached record from the ISP is almost certainly slower than getting a cached record from the local server, but the ISP's cache hit rate will be much higher and the ISP's cache miss penalty will probably be a bit lower.

 

I have a caching, forwarding server... DNS requests are sent to the LOCAL ISP cache if they cannot be fulfilled locally.

 *Quote:*   

> Also, you should be aware that every once in a while, a serious root-gaining exploit will come out in various versions of BIND/named. By running named right now, you may be vulnerable to a rather serious attack.
> 
> http://www.cert.org/advisories/CA-2002-31.html
> 
> (If you emerged BIND in the normal Gentoo manner, you are probably running 9.2.x, which was not mentioned in this advisory. However, IMHO, it's just a matter of time. BIND has a pretty poor record of security.)

 

I don't have any "login"-type services running (telnet, ssh, etc.)... that would make a root compromise a good bit more difficult, right?

And I could just chroot-jail BIND if I were still concerned, could I not?
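For what it's worth, named can be chrooted with its `-t` flag and dropped to an unprivileged user with `-u`. On Gentoo those flags would typically go in the init script's conf.d file; the path, variable name, and chroot directory below are assumptions about your setup, not a verified config:

```
# /etc/conf.d/named (sketch -- names and paths are assumptions)
# -u: run as the unprivileged 'named' user after binding port 53
# -t: chroot to this directory before reading named.conf
OPTIONS="-u named -t /chroot/dns"
```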

----------

## lotas

A quick note about your problem with squid and the ad servers: first, I think you have to restart squid. I know I did when Slashdot moved to their new servers; I just restarted both squid and named and it worked. Also, it's not nice. I'm a webmaster, and all revenue from my site comes from banner ads. It's not nice to use a service like mine (web search engine and open directory, free email accounts, and news) and not pay for it in one way or another. I've seen other places, like slashdot.org and phpnuke.org, offer paid registration, but that wouldn't be a very good idea for my site. Anyway, that's my EUR0.02 on the subject.

----------

## frogger

Which version of Squid are you using?  I believe the older 2.4 versions didn't support /etc/hosts lookup.  I think the latest 2.4 version includes the patch to support this.  If you are running an older version, I have the patch if you'd like it.  However, you'd be better off running the latest stable release.

Just add the names to /etc/hosts on the proxy server, and it should check there first before going to DNS.
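For example, the ad-blocking entries in /etc/hosts would look something like this (the hostnames here are just placeholders, not a real blocklist):

```
# point ad servers at the local machine so requests fail fast
127.0.0.1   ads.example.com
127.0.0.1   banners.example.net
```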

----------

## dreamer3

 *frogger wrote:*   

> Which version of Squid are you using?  I believe the older 2.4 versions didn't support /etc/hosts  lookup.  I think the latest 2.4 version includes the patch to support this.  If you are running an older version I have the patch if you'd like it.  However, you'd be better off running the latest stable.
> 
> Just add the names to /etc/hosts on the proxy server, and it should check there first before going to DNS.

 

Running 2.4.7 and the names ARE in /etc/hosts, but they have no effect.

----------

## frogger

Hmm, perhaps the latest 2.4 doesn't include the patch for this.  I can't remember if I needed to apply it or not when I last updated my squid install.

I have put the patch on my webserver-- it's available at

http://frogger974.homelinux.org/patches/hosts.patch

You'll need to patch squid, recompile, and restart the server.

There is now a new option available in squid.conf, here is the excerpt from my config file (you shouldn't need to change the default however):

```
#  TAG: hosts_file
#        Location of the host-local IP name-address associations
#        database.  Most Operating Systems have such a file: under
#        Un*X it's by default in /etc/hosts; MS-Windows NT/2000 places
#        that in %SystemRoot%(by default
#        c:\winnt)\system32\drivers\etc\hosts, while Windows 9x/ME
#        places that in %windir%(usually c:\windows)\hosts
#
#        The file contains newline-separated definitions, in the
#        form ip_address_in_dotted_form name [name ...]; names are
#        whitespace-separated.  Lines beginning with a hash (#)
#        character are comments.
#
#        The file is checked at startup and upon configuration.  If
#        set to 'none', it won't be checked.  If append_domain is
#        used, that domain will be added to domain-local (i.e. not
#        containing any dot character) host definitions.
#
#Default:
# hosts_file /etc/hosts
```

----------

## dreamer3

Update 3-26-03

Summary

I've improved perceived modem performance quite a bit and hopefully (with the new init strings I added today) fixed some of the "freeze" issues I have where everything just stops for a few seconds and then picks up again.  Has anyone else seen this and found a solution?  My sister swears her Windows XP PC (with a USR win-modem) doesn't do this, and we are both on the same phone line (which would be my first suspect).

Modem itself

Went from crapola generic junk to a real non-winmodem USR 56k. (This helped a lot.)

Later...

Added to /etc/ppp/options (any other advice out there?)

```
mtu 576
mru 576
```

Added to /etc/ppp/if-up (still playing with this)

```
/sbin/ifconfig ppp0 txqueuelen 10
```

Modified /etc/wvdial.conf (settings copied from the Windows .INF file for my exact modem).  One would assume they are more tweaked than the wvdial quick probe defaults.

```
Init1 = AT
Init2 = AT&F1E0Q0V1L0&C1&D2S0=0
Init3 = ATS6=8
# error compression ON and error control required
Init4 = AT&K1&M5
# hardware flow control
Init5 = AT&H1&R2&I0
```

DNS

Tried djbdns, and I like the service methodology, but sometimes it goes crazy talking to what seems like hundreds of DNS servers to try to resolve certain names, and it uses all my bandwidth for longer than 5 to 10 seconds, which is WAY unacceptable.  Named never had any of these problems.  So I'm back to named... (chrooted now, of course.)

Squid

This sets some tough caching rules for squid on certain things...  dynamic pages aren't cached in general, images are cached for a LONG time, and a regular site can go 8 hours without causing a re-fetch of its HTML... of course, don't just copy these; rather, read the section and the docs and get an idea of what would work well for you.

I wish I could tell it to (sometimes) ignore the no-cache directives for pages, but it seems that isn't possible.  A lot of web sites are using this more and more on their static pages for who knows what reason, and I don't care to see updates to microsoft.com every 3 minutes... every 8 hours is fine.

Added to /etc/squid/squid.conf:

```
refresh_pattern         cgi-bin         0       20%     2
refresh_pattern         \.asp$          0       20%     2
refresh_pattern         \.aspx$         0       20%     2
refresh_pattern         \.acgi$         0       20%     2
refresh_pattern         \.cgi$          0       20%     2
refresh_pattern         \.pl$           0       20%     2
#refresh_pattern        \.shtml$        1       20%     2
refresh_pattern         \.php$          0       20%     2
refresh_pattern         \.php3$         0       20%     2
refresh_pattern         \?              0       20%     2
refresh_pattern ^ftp:                   1440    20%     10080
refresh_pattern ^gopher:                1440    0%      1440
refresh_pattern -i \.(gif|jpg|jpeg|png)$ 10080 90% 20160 reload-into-ims override-expire override-lastmod
refresh_pattern -i \.(png|ico)$         10080 90% 20160 reload-into-ims override-expire override-lastmod
refresh_pattern -i \.(mid|wav)$         10080 90% 20160 reload-into-ims override-expire override-lastmod
refresh_pattern \/$                     480 50% 4320 reload-into-ims
refresh_pattern -i \.htm$               480 50% 4320 reload-into-ims override-expire
refresh_pattern -i \.html$              480 50% 4320 reload-into-ims override-expire
refresh_pattern -i \.shtml$             480 50% 4320 reload-into-ims override-expire
refresh_pattern -i \.css$               480 50% 4320 reload-into-ims override-expire
refresh_pattern .                       480     50%     4320
#refresh_pattern .                      0       20%     4320
```
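For reference, the fields in each refresh_pattern rule are: a regex matched against the URL (`-i` makes it case-insensitive), a minimum age in minutes during which a cached object is always considered fresh, a percentage of the object's age since its Last-Modified time used when the server gives no explicit expiry, and a maximum age in minutes. An annotated single line, as a sketch (the specific numbers are just an example, not a recommendation):

```
#               [-i] regex min(mins) lm-factor max(mins) options
refresh_pattern -i \.gif$  10080     90%       20160     override-expire
```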

----------

