# Setting up two subnets (100Mbit and 1000Mbit)

## Unclethommy

Hi there, I have recently managed to connect two diskless clients to my main server and have them working happily through the router; all was fine. Now I have invested in 4 cheap gigabit cards with Realtek chipsets (r8169 module compiled into the kernel), as I didn't want to splash out on a gigabit switch just yet, so that I can serve NFS to the diskless clients over dedicated 1000Mbit links ONLY. The server will now have two EXTRA gigabit cards, each of which is connected directly to a gigabit card in a client. However, the diskless clients still need to use their old 100Mbit onboard NICs to boot via PXE (the new gigabit cards do not support this option yet). So my setup is now as follows:

```
Server --- 100 NIC (ip=192.168.1.100) ------- Switch/Router (ip=192.168.1.1) --- Internet
 |  |                                           |
 |  |                                           |__ 100 NIC (PXE boot) Client 1 "BLACK" (ip=192.168.1.201)
 |  |                                           |
 |  |                                           |__ 100 NIC (PXE boot) Client 2 "WHITE" (ip=192.168.1.202)
 |  |
 |  |--- 1000 NIC (ip=192.168.10.101) --------- 1000 NIC (NFS) Client 1 (ip=192.168.10.201)
 |
 |------ 1000 NIC (ip=192.168.10.102) --------- 1000 NIC (NFS) Client 2 (ip=192.168.10.202)
```

I have tried to set this network up using the following in the server's /etc/conf.d/net:

```
config_eth0=( "192.168.1.100 netmask 255.255.255.0 brd 192.168.1.255" )
routes_eth0=( "default gw 192.168.1.1" )
dns_domain_eth0="MSHOME"
dns_servers_eth0="62.30.112.39 194.117.134.19"

config_eth1=( "192.168.10.101 netmask 255.255.255.0 brd 192.168.10.255" )
dns_domain_eth1="GIGABIT"
dns_servers_eth1="62.30.112.39 194.117.134.19"

config_eth2=( "192.168.10.102 netmask 255.255.255.0 brd 192.168.10.255" )
dns_domain_eth2="GIGABIT"
dns_servers_eth2="62.30.112.39 194.117.134.19"
```

The server also operates as a DHCP server for the PXE booting; the /etc/dhcp/dhcpd.conf is:

```
option domain-name "domain";
default-lease-time 600;
max-lease-time 7200;
ddns-update-style interim;
option domain-name-servers 62.30.112.39;
option routers 192.168.1.1;

subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.203 192.168.1.205;
}

option option-150 code 150 = text ;

class "pxeclient" {
    match if substring(option vendor-class-identifier, 0, 9) = "PXEClient";
    vendor-option-space PXE;
    filename "pxelinux.0";
}

host black {
    hardware ethernet 00:08:74:C5:99:45;
    next-server 192.168.1.100;
    fixed-address 192.168.1.201;
}

host white {
    hardware ethernet 00:08:74:C5:98:61;
    next-server 192.168.1.100;
    fixed-address 192.168.1.202;
}
```

For client 1, /etc/conf.d/net is:

```
config_eth0=( "noop" "192.168.1.201 netmask 255.255.255.0 brd 192.168.1.255" )
routes_eth0=( "default via 192.168.1.1" )
dns_domain_eth0="MSHOME"
dns_servers_eth0="62.30.112.39 194.117.134.19"

config_eth1=( "192.168.10.201 netmask 255.255.255.0 brd 192.168.10.255" )
dns_domain_eth1="GIGABIT"
dns_servers_eth1="62.30.112.39 194.117.134.19"
```

For client 2:

```
config_eth0=( "noop" "192.168.1.202 netmask 255.255.255.0 brd 192.168.1.255" )
routes_eth0=( "default via 192.168.1.1" )
dns_domain_eth0="MSHOME"
dns_servers_eth0="62.30.112.39 194.117.134.19"

config_eth1=( "192.168.10.202 netmask 255.255.255.0 brd 192.168.10.255" )
dns_domain_eth1="GIGABIT"
dns_servers_eth1="62.30.112.39 194.117.134.19"
```

However, I can't ping the gigabit NICs from the server, which seems to suggest that the setup is incorrect... Also, rather peculiarly, eth1 gets started WITH eth0 (whether the driver is compiled directly into the kernel or as a module), even though I haven't added it to the runlevel using rc-update...

Could anyone help me get my gigabit cards working on a different subnet, so that I can then use that subnet exclusively to send NFS data across at much higher speeds?

Thanks in advance

----------

## sschlueter

You have to set up two distinct, non-overlapping subnets for the two host-to-host gigabit connections.
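For example, the gigabit part of the server's /etc/conf.d/net could look like this (a sketch only; any two non-overlapping private networks will do):

```shell
config_eth1=( "192.168.10.101 netmask 255.255.255.0 brd 192.168.10.255" )
config_eth2=( "192.168.11.102 netmask 255.255.255.0 brd 192.168.11.255" )
```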

----------

## Unclethommy

Would you be able to provide me with a starter configuration showing how I could do this for one machine as an example, so I can then replicate it on the other?

----------

## nobspangle

I think your configuration is probably OK; you just need to change the subnet of one of the gigabit connections.

Change eth2 to 192.168.11.102 and you should be OK.

----------

## sschlueter

The configuration is not OK. Unclethommy inserted two additional NICs into the server and configured both to work on the same subnet.

@Unclethommy: Just change one of the two subnets:

```
 |  |--- 1000 NIC (ip=192.168.10.101) --------- 1000 NIC (NFS) Client 1 (ip=192.168.10.201)
 |
 |------ 1000 NIC (ip=192.168.11.102) --------- 1000 NIC (NFS) Client 2 (ip=192.168.11.202)
```

----------

## nobspangle

 *sschlueter wrote:*   

> The configuration is not ok. Unclethommy inserted two additional NICs into the server and configured both to work on the same subnet.
> 
> @Unclethommy: Just change one of the two subnets:
> 
> | |---------1000NIC (ip=192.168.10.101) -------------------------------------------------------------------------1000 NIC (NFS) Client 1 (ip = 192.168.10.201)
> ...

 

That's what I meant, change one of the subnets, everything else is probably OK.

----------

## sschlueter

Ah sorry, I see, ok  :Smile: 

----------

## xbmodder

Why do you want to use subnets? If you have to do routing, you're probably going to slow down your machine and have a huge number of issues. I would say just bridge all the interfaces.

----------

## Unclethommy

From what I understand, the 100Mbit connection needs to be up constantly so that PXE information can be sent to the diskless client. If I were to bridge it, I would need to stop this connection before bringing up the new bridged connection with the gigabit interfaces. I thought this was not possible, because if the original connection used to boot the computer dies, doesn't the diskless client die with it? Can I stop the connection after it has booted and then bring up the correct bridged network before I mount any NFS drive structures? I suppose I could experiment... Would the NICs stay in the same places? I'd just have to create the appropriate rule in /etc/conf.d/net to bridge all the network adaptors on each machine, right? And Gentoo would do the rest and send the data down the fastest line, i.e. the gigabit NICs?

Also, would the computational overhead be drastically reduced? I didn't realise that having different subnets affected performance so much, which is important, as I want absolutely maximum speed to allow fast NFS access.

----------

## xbmodder

Yes. Your system (your router) would have a much easier time doing its job. As long as there isn't an IP change, moving over to bridging will be seamless. If there is an IP change, SNAT/DNAT will be able to fix that for you.

----------

## sschlueter

I guess the Linux kernel's performance is not significantly better when bridging as compared to simple routing. Anyway, if you're worried about possible performance degradation, just buy a gigabit switch   :Wink: 

----------

## Unclethommy

So will the system know that two of the network cards being bridged on the server are connected directly to other network cards and the other is connected to the normal router/switch?

I will try both and see whether this works and will post back but please let me know your thoughts...

----------

## d_adams

I'm not sure how much you paid for your gb cards, but a 5 port netgear gigabit switch was somewhere in the neighborhood of $60 at bestbuy.

If you can handle waiting for a few days, newegg has one for even less, after rebates it was $35

http://www.newegg.com/Product/Product.aspx?Item=N82E16833122128

----------

## Unclethommy

Sadly, I am in the UK. I got my gigabit cards very cheaply considering they have Realtek chipsets, £5/$10 each, and I got four. Currently I want to postpone purchasing a switch for as long as possible, as they cost £30/$60 here. If there is only a slight hit on the server machine's performance when channelling network traffic, then I am willing to take it, as long as it doesn't overwhelm the CPU, i.e. utilization > 10%.

----------

## xbmodder

The fact they are cheap cards means they are usually bad...

----------

## Unclethommy

OK, so I have tried the easiest step first, which was simply to change the subnets of the two gigabit cards in my server machine...

I have tried to ping one of my machines from my server and I get:

```
ping 192.168.11.111
PING 192.168.11.111 (192.168.11.111) 56(84) bytes of data.
64 bytes from 192.168.11.111: icmp_seq=1 ttl=64 time=15.1 ms
64 bytes from 192.168.11.111: icmp_seq=2 ttl=64 time=0.077 ms
64 bytes from 192.168.11.111: icmp_seq=3 ttl=64 time=0.089 ms
```

I have set it up so that 192.168.11.11 is the gigabit card in my server and 192.168.11.111 is the gigabit card in my client...

Shouldn't it be outputting something like `64 bytes from 192.168.11.11: icmp_seq=1 ttl=64 time=15.1 ms`, because the signal should be going from 192.168.11.11 TO 192.168.11.111? Am I missing something?

----------

## sschlueter

The ping command sends an ICMP "echo request" to the IP address given. The thing is that you can't be sure those echo requests actually reach their destination. So, by convention, every system that receives an ICMP echo request sends an ICMP "echo reply" back to the sender. If the sender receives the reply, it can be sure that the original request actually reached its destination, and it can also measure the round-trip time.
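You can see the same thing by pinging any address that answers, loopback for example: the reply line names the host that answered, not the sender:

```shell
# The "64 bytes from ..." line reports the address of the replying host
# (the destination we pinged), which is why the client's address shows up.
ping -c 1 127.0.0.1
```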

----------

## Unclethommy

Ah okay...

So now I think I have the two gigabit direct connections set up. The next problem seems to be that things like distcc are broken when I use the gigabit IPs as available hosts rather than the old IPs of the 100Mbit NICs. I've added the new IPs of the gigabit client NICs to the distcc host list, but the remote compilations distcc sends out don't seem to be reaching the clients (I am getting red bars in distcc-gui and a status of "blocked"). Do I need to do anything else to tell the server and clients to route all traffic, e.g. distcc commands and especially my NFS mounts, through the faster gigabit connections? My /etc/hosts on the server is as follows. Is there any other config file I should modify to make the computers aware that they can use the direct connection to talk to each other, rather than the slower route to the switch via the 100Mbit NICs?

```
127.0.0.1      localhost
192.168.1.201  black.MSHOME black
192.168.1.202  white.MSHOME white
192.168.1.100  heaven.MSHOME heaven
```

For example, would adding the following to /etc/hosts help?

```
192.168.11.111 blackG.MSHOME blackG
192.168.22.222 whiteG.MSHOME whiteG
```

The second problem is that when the computer boots, it needs to mount the root file structure, which it only wants to do over the PXE-capable low-speed NIC... I have tried exporting the NFS share to the gigabit IP address alongside that mount, e.g. in /etc/exports:

```
/diskless/black black(rw,sync,no_root_squash,subtree_check) 192.168.11.111(rw,sync,no_root_squash,subtree_check)
```

It seems to allow mounting the file system over both connections, but I want some way of turning off the connection over the low-speed NIC and keeping only the gigabit connection in use.

Any ideas?

BTW: I am only trying this route instead of the bridging route because I use wake-on-lan on the server's low-speed NIC. If I were to bridge, then I presume I would lose the ability to set the right options in ethtool to enable wake-on-lan before the system shuts down... would this be correct, or could I still have access to the individual NICs and send them commands like

```
ethtool -s eth0 wol g
```

when eth0 is bridged to both eth1 and eth2?

Last edited by Unclethommy on Mon Sep 24, 2007 7:47 am; edited 1 time in total

----------

## sschlueter

Wow, this thread is sooooo confused  :Smile: 

Could you please post the output of

```
ifconfig
```

and

```
netstat -rn
```

of ALL systems involved?

----------

## xbmodder

So, let's say that eth0 is the 100Mbit link to the clients and the router, and eth1/eth2 are the GigE links to the clients:

```
brctl addbr gig
brctl addif gig eth1
brctl addif gig eth2
ifconfig eth1 0.0.0.0 up
ifconfig eth2 0.0.0.0 up
ifconfig gig 192.168.10.1 netmask 255.255.255.0 up
```

Add the default gateway of 192.168.10.1 to the gigE clients.
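On each client that would presumably be something like this (note: it would replace the existing default route via 192.168.1.1):

```shell
route add default gw 192.168.10.1
```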

you'd be happy!

----------

## sschlueter

Could we please agree on this routing vs. bridging thing first? I think it only adds to the confusion if I try to help setting it up via routing and xbmodder tries to help setting it up via bridging.

----------

## Unclethommy

My server's ifconfig:

```
eth0      Link encap:Ethernet  HWaddr 00:0D:87:37:8D:DD
          inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1808 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1759 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1418387 (1.3 Mb)  TX bytes:244310 (238.5 Kb)
          Interrupt:16 Base address:0xc800

eth1      Link encap:Ethernet  HWaddr 00:0C:76:AB:00:24
          inet addr:192.168.11.11  Bcast:192.168.11.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:21 Base address:0x6f00

eth2      Link encap:Ethernet  HWaddr 00:0C:76:B1:00:48
          inet addr:192.168.22.22  Bcast:192.168.22.255  Mask:255.255.255.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:22 Base address:0x8e00

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:257 errors:0 dropped:0 overruns:0 frame:0
          TX packets:257 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:17072 (16.6 Kb)  TX bytes:17072 (16.6 Kb)
```

The ifconfig of one of my clients (I can't get the details for the other, as it has a hardware fault at the moment and is unplugged):

```
eth0      Link encap:Ethernet  HWaddr 00:08:74:C5:99:45
          inet addr:192.168.1.201  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17493 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13472 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:10293540 (9.8 Mb)  TX bytes:2218054 (2.1 Mb)
          Interrupt:16

eth1      Link encap:Ethernet  HWaddr 00:0C:76:AB:00:1F
          inet addr:192.168.11.111  Bcast:192.168.11.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:53 errors:0 dropped:0 overruns:0 frame:0
          TX packets:68 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:6756 (6.5 Kb)  TX bytes:6858 (6.6 Kb)
          Interrupt:17 Base address:0x4000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
```

The eth0 NICs are the 100Mbit cards; the others are the gigabit cards...

netstat -rn for the server is:

```
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.22.0    0.0.0.0         255.255.255.0   U         0 0          0 eth2
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
192.168.11.0    0.0.0.0         255.255.255.0   U         0 0          0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
```

and for the same client:

```
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
192.168.1.0     0.0.0.0         255.255.255.0   U         0 0          0 eth0
192.168.11.0    0.0.0.0         255.255.255.0   U         0 0          0 eth1
127.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
0.0.0.0         192.168.1.1     0.0.0.0         UG        0 0          0 eth0
```

Sorry for the confusion. I guess I wanted an answer to my question of whether I can set the wake-on-lan states of the individual cards in a bridge before deciding what to do; if I can't, then I won't be able to wake my main server up. So I'd like to try the routing option first, unless someone can tell me that I can set wake-on-lan properly while bridging.

Thanks for your patience.

----------

## sschlueter

Your client has no specific route to the 192.168.22.* network, which means it will use its default gateway, 192.168.1.1, to reach that network. You have to add a routing table entry so that the client uses the server as the gateway to that network, over its gigabit connection.

On the client:

```
route add -net 192.168.22.0 netmask 255.255.255.0 gw 192.168.11.11
```
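Presumably client 2 will need the mirror entry once it's back up (untested, since that box is unplugged at the moment):

```shell
route add -net 192.168.11.0 netmask 255.255.255.0 gw 192.168.22.22
```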

IP forwarding must be enabled on the server. If it's not currently enabled, enable it (and make it permanent):

```
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
```

I can't really comment on the wake-on-lan question, but I guess that neither bridging nor routing interferes with WOL, as the physical interfaces still exist and receive packets just the same. It's only the processing of received packets or frames that differs between routing and bridging.

----------

## Unclethommy

Hi there, sorry for the long delay; I was on holiday and forgot to check on my posts...

I have now returned and tried the routing option and manually entering 

```
route add -net 192.168.22.0 netmask 255.255.255.0 gw 192.168.11.11
```

seems to work, and now I can ping the other client from the first one.

I presume I have to add a line to the /etc/conf.d/net file to make this permanent?

Should it be something like:

```
config_eth1=( "noop" "192.168.11.111 netmask 255.255.255.0 brd 192.168.11.255" )
routes_eth1=( "via 192.168.22.22" ) ???
dns_domain_eth1="GIGABIT"
dns_servers_eth1="62.30.112.39 194.117.134.19"
```
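Or maybe the destination network and the server's gateway address need to be in there, mirroring the manual route command (just guessing at the syntax):

```shell
routes_eth1=( "-net 192.168.22.0 netmask 255.255.255.0 gw 192.168.11.11" )
```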

I am now going to see if distcc can recognise the gigabit network and send data over it rather than the 100Mbit network; I will report back.

EDIT: Distcc seems to be working over the gigabit network cards... super!

Thanks everyone for helping me get this solved... now I just need to know how to permanently add the correct routes to the cards, as stated above...

----------

