# 10 gigabit networking with no switch?

## 1clue

Is it possible/feasible to go point to point with 10 gigabit networking, either fiber or copper?

I'm contemplating 3 hosts: a nas and 2 kvm hosts. They will want high speed networking between all three points. I'd like to get 3 dual port nics of reasonable quality and go direct attach in a triangle, no forwarding from one physical interface to another. The links will be vlan trunks and there will be virtual routers on each box, if that matters.
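
To make the idea concrete, the triangle would be three independent point-to-point subnets, one per link, so nothing ever needs forwarding for direct traffic. A rough sketch with `ip` commands (interface names and addresses here are made up, and the VLAN sub-interfaces on top are omitted):

```shell
# One /30 subnet per direct-attach link. Interface names (enp1s0f0 etc.)
# are placeholders; real names depend on the NICs and slots.

# On the NAS: port 0 to kvm1, port 1 to kvm2
ip addr add 10.0.1.1/30 dev enp1s0f0    # nas <-> kvm1 link
ip addr add 10.0.2.1/30 dev enp1s0f1    # nas <-> kvm2 link

# On kvm1:
ip addr add 10.0.1.2/30 dev enp1s0f0    # kvm1 <-> nas link
ip addr add 10.0.3.1/30 dev enp1s0f1    # kvm1 <-> kvm2 link

# On kvm2:
ip addr add 10.0.2.2/30 dev enp1s0f0    # kvm2 <-> nas link
ip addr add 10.0.3.2/30 dev enp1s0f1    # kvm2 <-> kvm1 link

# Optional: jumbo frames on the 10G links, if both ends agree
ip link set enp1s0f0 mtu 9000
```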

I'd love to know if this is feasible. Nics aren't all that bad but 10 gigabit managed switches are outrageous.

----------

## szatox

You can do that with fiber; depending on the line length it might be cheaper than copper. With only 3 points a double loop topology seems to be the way to go -> you know the latency in advance, you know it does not depend on load, and you can expect it to be more than you can use so you won't choke it. Even a single loop if you're feeling lucky - however, breaking a single line would take the whole network down, effectively stopping your VMs until it's replaced.

10Gbps ethernet might be better for short lines. I don't know if you can use it without a switch, but even if you can, you would still need an extra NIC every single time you attach a new endpoint. If you're going to grow your infrastructure, this will not scale.

----------

## NeddySeagoon

1clue,

10G fills 2 PCIe lanes. Dual port cards will need 4 PCIe lanes each. They seem to be all 8-lane anyway.

Do you have enough suitable slots in your hardware?

At £400 each, they will make a real hole in someone's pocket.

----------

## 1clue

Yeah, about the cost. It seems that if a single-port nic costs X then the dual-port version costs about 1.25X.

A switch on the other hand costs 10 X or more which is beyond my budget. 

I have suitable slots available for everything except the newly purchased but not yet fully assembled C2758 router, which only has 4 lanes. Not sure if it can handle router/VPN/IDS/IPS duties along with 10Gbps traffic too, but as you said, all the nics want 8 lanes for some reason.

----------

## stealthy

To be honest, I don't know the answer, but I do have a couple of things that came to mind, which hopefully you have already accounted for.

Some (or most) dual-port 10G cards, although they have 2 ports, usually only have 1 controller on the card. So in effect both ports are actually sharing 10G.

On the other hand, 10G Ethernet cards seem to also work at 1000/100 speeds, so based on the Ethernet standards you should theoretically be able to connect host to host. (Take this with a grain of salt, I am just speculating.)

Since you mentioned you are going to be trunking, have you considered just getting 4-port ethernet cards? You can do link aggregation on those... although that will only get you 4G.

Or: a while back, when I was setting up a backup solution, I was thinking of installing separate nics and a switch to do the backup. While searching for a cheap solution on ebay, I ran across used 4G Fibre Channel cards and a Fibre Channel switch.

Needless to say, I ended up getting those, and although (if I remember right) I had to set up iSCSI to get it to do what I wanted, the solution worked.

----------

## 1clue

Good questions, I have answers.

Dual port cards, single controller:

I'm not bottom-feeding on cards.  I'm not looking for the cheapest possible solution for the actual hardware I intend to buy, only trying to put off the purchase of the switch, which is where the money is.  I know that some nics are substandard, based on my experiences with networking in the gigabit range.  Cheap cards might have the feature count, but they tend to cause more interrupts on the motherboard, which kills overall performance.  I was doing this back when gigabit was pushing the edge of hardware capabilities; the same thing is happening now, only there are more details to watch.

The rule of thumb is that Intel cards have good onboard hardware that keeps interrupts from causing havoc on the motherboard.  Other brands are OK too, and no matter what, you need to look at the needs of the host to see what card features you need.  Prices can vary drastically depending on, for example, support for the latest greatest virtualization tech on the cards.

Single-port cards generally take 2 PCIe lanes, but for some reason the 2-port cards want 8 lanes.  This makes me a little cranky given the layout of my motherboards.

That said, once I determine acceptable cards I'm fine with looking on ebay for them, where they are likely very much cheaper than newegg.

Multi-speed connections:  I don't want that.  Every host I have has one or several gigabit nics.  My new router has a 4-lane PCIe slot (one reason why I'm a little cranky about the dual-port cards wanting 8 lanes) and it has 7 gigabit ports as well.  If necessary I'll put a 4-port nic on one of the heavier boxes and have it forward packets, but IMO that's going to be an undesirable load on VM hosts that would be better off doing something else.
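
If I do end up with one box forwarding packets, the setup would be roughly this sketch (all addresses hypothetical):

```shell
# On the forwarding host (the box with the 4-port nic):
sysctl -w net.ipv4.ip_forward=1              # let the kernel route between its interfaces

# On each gigabit-only machine, send traffic for the 10G subnets via that box.
# 192.168.1.10 stands in for the forwarding host's LAN address,
# 10.0.0.0/16 for the range covering the point-to-point links.
ip route add 10.0.0.0/16 via 192.168.1.10
```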

4-port nics:  I did research on link aggregation.  On a bigger network that would be OK, but in my case the bottleneck on my gigabit network comes when I'm making some sort of monster single connection, like transferring a huge file, or a database connection.  Link aggregation limits each connection to the speed of a single nic, so I would not gain anything if my monster stream is the only thing going on.  That's pretty close to what happens most of the time, so link aggregation fails exactly when I actually need high speed.
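
The single-flow limit is visible right in how a bond works: the kernel hashes each flow to exactly one slave. A minimal sketch of an 802.3ad bond with iproute2 (interface names assumed):

```shell
# Create an LACP bond over two gigabit ports (eth1/eth2 are placeholders).
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set eth1 down
ip link set eth1 master bond0
ip link set eth2 down
ip link set eth2 master bond0
ip link set bond0 up

# layer3+4 hashes on IP addresses plus ports, so two concurrent flows MAY
# land on different slaves -- but one big file transfer is a single flow,
# so it always stays pinned to a single 1G link.
```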

Backups:  I'm not a huge fan of network backups for servers with big drives, although 10gbE connections would make me reconsider that.  So far every virtual machine host has a slot-load sata bay, so I can stick a raw hard drive in there and use it like a tape cartridge.  I don't use compression or any special software, I just copy files over into a directory.  I can mount any of the backups on any linux box and get random access to the files.  I've gone through just about every form of backup media around, and been bitten by a lot of them.  This way costs a bit more per terabyte, but restoring from a crash is incredibly much easier than with any sort of backup software.
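
For the curious, the whole scheme is just plain file copies into a dated directory; a minimal sketch (the paths here are temp-dir stand-ins -- in real use `BACKUP_MNT` would be the slot-loaded drive's mountpoint after a `mount /dev/sdX1`):

```shell
# Stand-ins so the sketch runs anywhere; replace with real paths in practice.
SRC=$(mktemp -d)                        # pretend: the data to back up
BACKUP_MNT=$(mktemp -d)                 # pretend: the mounted backup drive
echo "important data" > "$SRC/db-dump.sql"

# One dated directory per backup run, plain recursive copy, no special format.
STAMP=$(date +%Y-%m-%d)
mkdir -p "$BACKUP_MNT/backup-$STAMP"
cp -a "$SRC"/. "$BACKUP_MNT/backup-$STAMP"/

# Restoring is just mounting the drive on any Linux box and reading the files:
cat "$BACKUP_MNT/backup-$STAMP/db-dump.sql"
```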

----------

## dalu

This is probably slightly out of context,

I was wondering since 10GE cards are so expensive...

the original idea was, build a NAS and just have thin clients connected to the server

-> no more need to buy extra HDDs/SSDs for each new device, just export as iSCSI or similar.

Keep hardware costs low. But then I saw the price of those multi port 10GE hubs/switches.

And I was looking for alternatives, and found

- Thunderbolt

- USB 3.1

However, Thunderbolt switches don't exist, and I haven't dug into the topic very deeply, so I'm not sure if Ethernet over Thunderbolt would work or is in any way supported by Linux.

Also I haven't tried anything with Ethernet over USB yet.

How much does fiber cost?

I don't mean to derail so if it doesn't fit the topic just ignore it.

----------

## 1clue

Dalu,

IMO this is not off topic at all, but keep in mind I'm going to try to focus on something that will work for my situation.

The cards aren't the problem. They're not so crazy compared to the cost of the hardware that would use them.  The problem is the switch.  Worse yet, in my case I need a VLAN-aware switch with routing capability and access rules, which means a managed switch, or at least a smart switch.

Thunderbolt is a single-host thing.  There are Thunderbolt to 10gbE adapters; I don't know if Linux supports them.  Mostly Thunderbolt was made to support crazy monitors, IMO.  Which means I want it on my workstation.  Given the cost of Thunderbolt and Thunderbolt 2-capable storage, it's not a cost savings.

USB3:  Again, a single-host thing.  They also make NICs for USB3, but not 10gbE nics, since USB3 only supports half that.  Otherwise I don't see how it's going to be a workable solution.

Fiber:  Depends on what fiber.  The slower stuff is cheaper, but the faster stuff is 10gbE and up.  There are 10gbE cards which have SFP+ ports, and fiber can hook up to that.  Some of them at least can go short distances without an SFP+ transceiver, and those cables seem to be comparable in price to cat6 and up.

You might also be interested in looking at Infiniband.  Some of the slower speed hardware is not too bad, and they typically bond 4 channels of it together.  Still pretty expensive and you generally have a bunch of switches rather than just one so its main benefit other than speed is reliability.

----------

## dalu

Idk 1clue,

Thunderbolt2

http://www.newegg.com/Product/Product.aspx?Item=N82E16813995027

69 USD

10GE SFP+ Transceivers

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&DEPA=0&Order=BESTMATCH&Description=10ge&N=100006519&isNodeId=1

~320 USD

TB cables cost half as much as 10GE ones

since you're going ptp ...

I'd probably go

NAS 2 TB2

 - KVM1 2 TB2; 1<->NAS, 1<->KVM2

 - KVM2 2 TB2; 1<->NAS, 1<->KVM1

If you don't need the KVMs to talk to each other (since storage is NAS anyway), NAS 2, each KVM 1 TB2 card

Infiniband is crazy expensive

USB 3.1 (not 3.0) is supposed to have 10Gbit/s bandwidth (or transfer speed)

----------

## 1clue

This is the part that hurts: $530-- http://www.newegg.com/Product/Product.aspx?Item=N82E16816102380&cm_re=Thunderbolt_network_adapter-_-16-102-380-_-Product

That's on top of the thunderbolt card, a thunderbolt cable, and a cat6 cable.

----------

## dalu

 *1clue wrote:*   

> This is the part that hurts: $530-- http://www.newegg.com/Product/Product.aspx?Item=N82E16816102380&cm_re=Thunderbolt_network_adapter-_-16-102-380-_-Product
> 
> That's on top of the thunderbolt card, a thunderbolt cable, and a cat6 cable.

 

Ouch yeah, but do you *need* the 10GBase-T adapter?

I checked ebay and apparently there are some refurbished 40GE Infiniband cards for ~100 USD, 2 ports, PCIe x8.

Cables are like 60-120 USD. 3m, that's not very long, but long enough if used in a rack.

But 30 days warranty...

Mellanox MHQH29B-XTR

Or just go with your original choice of 10GE. Idk if you need the 2 KVMs to be inter-connected, depends on your virtualization solution I guess.

You know, perhaps you could dump the NAS and put the drives in the KVMs and pick a virtualization solution that supports this setup.

----------

## 1clue

I'm limited to a smartphone today so research is difficult. I believe that Thunderbolt is like USB in the sense that there is a single master with a bunch of peripherals. The difference is, I believe, baked into the hardware. Smartphones, I think, have dual-mode hardware.

----------

## szatox

It's hard to find details, but it seems to be more closely related to FireWire than USB, and FireWire is a bit like ethernet: it does not require a controller. You use a crossed twisted pair and have duplex enabled.

Sure, there is a difference in hardware, but the protocol itself seems to be superior in terms of throughput and reliability. USB was designed for low price while FireWire was designed for throughput, and TB appears to have inherited this quality.

That's something you might like: during my brief search session I spotted some remarks on IP over TB. With 20Gbit throughput and duplex it might actually be an interesting option, as a reasonably compatible replacement for ethernet. (IP is IP, eh? Routing won't be hard to set up if you ever need it.)

I wonder how well it works with Linux.

----------

## 1clue

The best spec for original Thunderbolt I can find is here:  http://www.intel.com/content/www/us/en/architecture-and-technology/thunderbolt/thunderbolt-technology-brief.html

It's a combination of PCIe and a display adapter on a single connection.

The PC side has some sort of chipset on the motherboard, or a PCIe card, which provides a GPU and some PCIe lanes that are interfaced into the DisplayPort connector via a controller.

The device side has a controller which demultiplexes the display and the PCIe, and has some sort of functionality and possibly another port to daisy chain more devices with.

The way I see this, a bus master is required, although no documentation I see mentions such a master.  As much as it might be fun to play with this, I have way too many projects for this to be appealing to me as an experiment, but I'm surely interested if someone else tries it.

Thunderbolt 2 seems to have peer-to-peer networking in the plan:  http://www.crn.com/news/networking/300072373/thunderbolt-2-offers-peer-to-peer-networking.htm

I'll still want to see how it works before diving in, although it seems this could solve some of my other problems as well.

My asking questions here is aimed at finding the proper steps to get high speed networking in a small office which may eventually need to expand high speed networking, but not for the near future.  I'm mostly focused on doing things correctly such that I'm not throwing money away.

I don't object to buying a real 10gbE nic, I object to buying a bargain-basement one which is not suited to my hardware or my purpose.  I object to making uneducated decisions about topology, and having to throw it out when I expand.

I'm educating myself as best I can, but there seems to be inconsistent information out there.  I don't know if that is because of misinformation or if I'm reading best practices for completely different scenarios.

My application would benefit greatly not only from VM host <-> NAS communication, but also from VM host <-> VM host communication, with routing rules between vlans at the same time.  Finally, there are several gigabit-only machines which will be driving this communication.

The question in my mind is, where does one draw the line between 'getting by without a switch' and 'I need a switch'?

Also, choosing the correct NIC seems to matter a lot.  Cheap NICs overload the CPU with interrupts and are detrimental to the performance of the host, which IMO negates the point of having high speed networking.  Choosing a too-expensive NIC for a situation which does not require it is a waste of money, since it wouldn't improve performance.
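
As a concrete example of what tuning looks like on the interrupt side, the standard tools can inspect and adjust it (the device name `eth0` is just a placeholder):

```shell
# How many IRQ lines the card uses and how they spread across CPUs:
grep eth0 /proc/interrupts

# Current interrupt-coalescing settings on the NIC:
ethtool -c eth0

# Raising rx-usecs batches more packets per interrupt -- fewer interrupts
# per second at the cost of a little latency:
ethtool -C eth0 rx-usecs 50
```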

I'd want to see a comparison of TB2 peer to peer networking and a properly chosen set of 10gbE nics with tuned communications between VM hosts and NAS devices.

Thanks.

----------

## carbinefreak

For work I set up and manage a medium-sized cluster, ~8 hosts and 500 ESXi VMs, that uses dual-port Intel X520 10G SFP+ cards with a huge (60+ port) switch backhaul of mostly copper SFP+ links. I recently started playing with a few TB of iSCSI storage on a QNAP TS-1279 with the same NIC. Initial performance was great, but the reliability isn't there; the iSCSI service started to fail under a medium load. If you're set on using iSCSI, just make sure your NAS can support the load properly, otherwise you won't have any VMs at all. If you're worried about cost, it might be better to look into 10G RJ45 connections. Netgear makes a decently cheap 10G RJ45 switch, and you can pick up some PCIe 10G RJ45 cards, but I can't find any for under ~$400.

http://www.amazon.com/Netgear-ProSAFE-10-Gigabit-Ethernet-XS708E-100NES/dp/B00B46AEE6

As a side note, if you were dead set on not using a switch, you could get away pretty cheap using the Intel SFP+ cards and a few ~5m copper SFP+ twinax cables, but you are severely limited on distance and future growth.

----------

## 1clue

Carbinefreak,

The only thing I'm 'set on' is not making a mistake that would cause me to start over from scratch.  The NetGear link you gave is not adequate; I would need forwarding rules.  Not sure if a 'smart' switch is good enough, but I know an unmanaged one is not.

Right now, I have very little hardware that can even benefit from 10gbps.  Most of my setup described here is yet to be purchased, and it's a big enough expense for me that I'm agonizing over it.

I don't want to buy half-assed equipment just because my budget is low.  If I take this step to 10gbps, then I want to have hardware that will be on par for a real network if I expand.

Certainly I won't buy any more hardware that can't handle a 10gbps nic easily.

The iSCSI would be the last step in the plan.  I've been around enterprise enough (not managing this end of things, but close enough to see it) that I know the money has to go somewhere; companies won't just lay down that kind of skin for no reason.  And before I try to put the boot partition on a remote box, I have to be confident I can tune the nics to get good throughput.

I've almost talked myself out of 10gbE for now. I have a lot of other things to worry about, and while high speed nics would certainly speed things up they won't speed things up as much as some other steps I could take.

Thanks.

----------

## szatox

 *Quote:*   

> I'm educating myself as best I can, but there seems to be inconsistent information out there. I don't know if that is because of misinformation or if I'm reading best practices for completely different scenarios. 

I think THIS is a good answer, particularly when you take all the marketing bullshit into account. Everyone says he is the best, while what you actually need is "good enough". And yes, you can expect the rankings to shuffle as the use case changes.

 *Quote:*   

> The iSCSI would be a last step in the plan idea. I've been around enterprise enough (not managing this end of things but close enough to see it) that I know the money has to go somewhere, companies won't just lay down that kind of skin for no reason.

C'mon, accountants are spending this money because marketing tells them it will make them shine and bring peace to the world. Well, more or less. You aren't going to tell me accountants make decisions based on technical stuff, are you?

Anyway, there is one really funny thing from some big company's marketing leaflets (err... professional training, of course) that I want to share: the purpose of FC SAN is to move IO traffic away from your LAN so you don't get network congestion. A few pages later they suggest using an IP SAN within a VLAN so you can leverage your 10G ethernet for IO traffic. This way they can sell you a SAN without selling you a SAN. Brilliant!

----------

## 1clue

I don't know what accountants you talk to, but the ones I talk to will argue all day over a penny.

Surely there is some overzealousness in Enterprise SAN but probably not much, given the cost of downtime.

Just as surely my downtime cost might be personally tragic but won't have nearly that price tag.

What I'm saying is that Enterprise SAN is expensive because downtime is expensive; I'm not saying I need an Enterprise SAN.  But when I finally get around to it, I'm not going to throw one together out of a 6-year-old box either.

----------

