# virt-manager: NIC passthrough

## taskman

Hi,

I am looking for advice to set up "libvirt" and "qemu" with virt-manager.

I can run VMs without network access.

But my goal is to give every VM its own passed-through NIC (up to four).

For this I bought a PCIe CNA/NIC with an Intel 82580 chipset.

```
mm@mypc ~ $ lspci | grep Ethernet

02:00.0 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)

02:00.1 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)

02:00.2 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)

02:00.3 Ethernet controller: Intel Corporation 82580 Gigabit Network Connection (rev 01)

05:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)

mm@mypc ~ $ ip addr show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 brd 127.255.255.255 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host 

       valid_lft forever preferred_lft forever

2: enp2s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether 00:1b:21:a7:42:b0 brd ff:ff:ff:ff:ff:ff

3: enp2s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether 00:1b:21:a7:42:b1 brd ff:ff:ff:ff:ff:ff

4: enp2s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether 00:1b:21:a7:42:b2 brd ff:ff:ff:ff:ff:ff

5: enp2s0f3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether 00:1b:21:a7:42:b3 brd ff:ff:ff:ff:ff:ff

6: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 38:d5:47:7c:65:32 brd ff:ff:ff:ff:ff:ff

    inet 192.168.178.50/24 brd 192.168.178.255 scope global enp5s0

       valid_lft forever preferred_lft forever

    inet6 fd00::3ad5:47ff:fe7c:6532/64 scope global dynamic mngtmpaddr 

       valid_lft 7101sec preferred_lft 3501sec

...

```

I just need to configure the network in virt-manager, but the options available make no sense to me.

After selecting "specified shared device" I need to put in a bridge name manually.

Do I have to set up bridges before starting virt-manager?

Because every time I put in a device name, the VM won't install; instead I get an error message:

 *Quote:*   

> Unable to complete install: 'Unable to add bridge enp2s0f3 port vnet0: Operation not supported'
> 
> Traceback (most recent call last):
> ...

 

Log ...

 *Quote:*   

> 2018-12-21 22:53:49.311+0000: 23604: info : libvirt version: 4.9.0
> 
> 2018-12-21 22:53:49.311+0000: 23604: info : hostname: mypc
> 
> 2018-12-21 22:53:49.311+0000: 23604: error : virDBusGetSystemBus:109 : internal error: Unable to get DBus system bus connection: Failed to connect to socket /var/run/dbus/system_bus_socket: No such file or directory
> ...

 

It also doesn't matter whether I choose the hypervisor default, e1000e, or virtio in the advanced settings.

Long story short...

How do I make my NICs available to the VMs only?

Some additional information ...

```
mm@mypc ~ $ emerge -vp app-emulation/libvirt app-emulation/qemu

These are the packages that would be merged, in order:

Calculating dependencies... done!

[ebuild   R    ] app-emulation/qemu-3.0.0::gentoo  USE="aio alsa bzip2 caps fdt filecaps gtk nls opengl pin-upstream-blobs pulseaudio seccomp spice usb usbredir vhost-net vte xattr -accessibility -bluetooth -capstone -curl -debug -glusterfs -gnutls -gtk2 -infiniband -iscsi -jpeg -lzo -ncurses -nfs -numa -png -python -rbd -sasl -sdl -sdl2 (-selinux) -smartcard -snappy -ssh (-static) -static-user -systemtap -tci -test -vde -virgl -virtfs -vnc -xen -xfs" PYTHON_TARGETS="python2_7 python3_6 -python3_4 -python3_5" QEMU_SOFTMMU_TARGETS="arm x86_64 -aarch64 -alpha -cris -hppa -i386 -lm32 -m68k -microblaze -microblazeel -mips -mips64 -mips64el -mipsel -moxie -nios2 -or1k -ppc -ppc64 -ppcemb -riscv32 -riscv64 -s390x -sh4 -sh4eb -sparc -sparc64 -tricore -unicore32 -xtensa -xtensaeb" QEMU_USER_TARGETS="x86_64 -aarch64 -aarch64_be -alpha -arm -armeb -cris -hppa -i386 -m68k -microblaze -microblazeel -mips -mips64 -mips64el -mipsel -mipsn32 -mipsn32el -nios2 -or1k -ppc -ppc64 -ppc64abi32 -ppc64le -riscv32 -riscv64 -s390x -sh4 -sh4eb -sparc -sparc32plus -sparc64 -tilegx -xtensa -xtensaeb" 0 KiB

[ebuild   R    ] app-emulation/libvirt-4.9.0:0/4.9.0::gentoo  USE="caps dbus libvirtd nls policykit qemu udev zfs -apparmor -audit -firewalld -fuse -glusterfs -iscsi -libssh -lvm -lxc -macvtap -nfs -numa (-openvz) -parted -pcap -phyp -rbd -sasl (-selinux) -uml -vepa -virt-network -virtualbox -wireshark-plugins -xen -zeroconf" 0 KiB

Total: 2 packages (2 reinstalls), Size of downloads: 0 KiB
```

I am using bliss-kernel ...

```
mm@mypc ~ $ uname -a

Linux mypc 4.14.33-FC.01 #2 SMP Mon Apr 9 10:36:59 EDT 2018 x86_64 AMD FX(tm)-8350 Eight-Core Processor AuthenticAMD GNU/Linux
```

Tools I have installed ...

```
mm@mypc ~ $ equery list 'app-emulation/*'

 * Searching for * in app-emulation ...

[IP-] [  ] app-emulation/libvirt-4.9.0:0/4.9.0

[IP-] [  ] app-emulation/libvirt-glib-2.0.0:0

[IP-] [  ] app-emulation/qemu-3.0.0:0

[IP-] [  ] app-emulation/spice-0.14.0-r2:0

[IP-] [  ] app-emulation/spice-protocol-0.12.14:0

[IP-] [  ] app-emulation/virt-manager-2.0.0:0
```

PLX HALP!

----------

## NeddySeagoon

taskman,

Passthrough and bridging are different things.

When you use passthrough, the host does not see the hardware item; it's passed through to the VM. The VM 'owns' it.

When you use a bridge, the host may or may not have a connection to the bridge.

The host donates the actual interface to the bridge, and if the host is to have a connection to the bridge, the bridge gets an IP address, not the interface donated to it.
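As a minimal sketch with iproute2 (interface name taken from the first post; the host IP here is hypothetical, and a permanent setup belongs in the network config files):

```
# create a bridge and donate the physical interface to it
ip link add br0 type bridge
ip link set enp2s0f0 master br0
ip link set enp2s0f0 up
ip link set br0 up

# if the host should have a connection, the IP goes on the bridge,
# not on the donated interface
ip addr add 192.168.178.60/24 dev br0
```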

I have an Intel Corporation 82575GB 4-port card that I wanted to pass through, but it has a hardware bug, so I have to use bridging instead.

----------

## taskman

Thanks for the clarification.

I don't know what led me to bridging in the first place.

I added "virt-network" to app-emulation/libvirt, and then I needed to compile some additional tools I don't need, like dnsmasq, radvd and a few others.

But still I don't know how to create a NIC passthrough.

This is what I have done now ...

Edit > Connection Details > Add Network > Forwarding to physical network > Mode: Routed

Then I need to choose a device, and this leads to one of the following errors ...

w/o IPv4 network address space definition ...

 *Quote:*   

> Error creating virtual network: XML error: route forwarding requested, but no IP address provided for network 'network1'
> 
> Traceback (most recent call last):
> 
>   File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper callback(asyncjob, *args, **kwargs)
> ...

 

w/ IPv4 network address space definition ...

 *Quote:*   

> Error creating virtual network: internal error: Network is already in use by interface enp5s0
> 
> Traceback (most recent call last):
> 
>   File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper callback(asyncjob, *args, **kwargs)
> ...

 

Am I still doing it wrong?

How do I connect to my network?

I don't want to use a DMZ.

There are already DHCP and DNS servers in my network.

IPv6 I get via neighbor discovery.

I am confused.

----------

## NeddySeagoon

taskman,

There are four things you can do.

a) libvirt will set up NAT by default, using 192.168.122.0/24 (No IPv6). This is free with USE=virt-network

b) Donate an interface to a bridge and connect the KVM(s) to a bridge. The host may or may not connect to this bridge. 

c) Create an empty bridge, no real hardware at all, then route traffic to it. The guests can use this bridge.

d) Pass the NIC to the KVM. The host runs a PCI-Stub driver and the Guest sees the real hardware.

I do all of the first three. I would prefer d) but my hardware has a bug, so it won't work for me.

You appear not to want a)

Which of the other solutions do you want?

I can't help much with d)
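For c), the empty bridge really is just a bridge with no real port, e.g. (names and addresses hypothetical):

```
# a bridge with no physical hardware attached
ip link add br9 type bridge
ip addr add 10.0.9.1/24 dev br9
ip link set br9 up
# then route (or NAT) 10.0.9.0/24 on the host as needed
```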

----------

## taskman

My goal is to have up to four VMs, each with its own NIC.

- switching between Windows and some amd64- and ARM-Distributions (w/ one NIC)

- PFSense (w/ two NICs)

- Kali for hacking/pentesting other VMs (w/ one NIC)

Most of this is for learning purposes, so I think I want option d).

Maybe bridging would work too, but that would mean a performance loss, and I don't want that, because my next step will be Windows gaming in a VM with GPU passthrough.

----------

## NeddySeagoon

taskman,

Here's an outline of bridging.  This is on the host. My firewall/router runs as a KVM on this host.

udev is not permitted to rename the interfaces.

```
# eth interfaces for firewall

# we don't want them getting IP addresses

# as they are being donated to bridges

config_eth0="null"

config_eth1="null"

config_eth2="null"

config_eth3="null"

config_eth4="null"

# the big bad internet - we may not need an IP here as all traffic goes to the router.

bridge_br0="eth1"

# the DMZ

bridge_br1="eth2"

config_br1="192.168.10.254/24"

# wireless

bridge_br2="eth3"

config_br2="192.168.54.254/24"

# protected wired

bridge_br3="eth4"

config_br3="192.168.100.254/24"
```

I actually only need the IP address on br3 here.  The IP address is static because I need to know where this system is when the router KVM doesn't start. It runs my DHCP server too. My main PC also has a static IP so I can fix stuff remotely. This server is in my garage, 50m away from the house.

Now that we have our bridges, with or without an IP on the host, the guests can connect to them using the Virtual Machine Manager.

Choose the guest NIC and the Network source dropdown will list the bridges and their host interfaces.

I use virtio everywhere to avoid emulating hardware. 
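In the guest's libvirt XML, that combination ends up looking roughly like this (bridge name br3 from my config above; just a fragment, not a full domain definition):

```
<interface type='bridge'>
  <source bridge='br3'/>
  <model type='virtio'/>
</interface>
```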

The guest will work as normal.  I actually use this. :)

For real passthrough, you need pci-stub support in the host kernel, the right NIC driver in the guest kernel, and some configuration which you may not be able to do via the GUI.

The authoritative guide from Red Hat will need to be Gentooified. It also won't tell you very much about compile-time options.
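As a rough sketch of the host side (8086:150e is the 82580's vendor:device ID as shown later in this thread; check yours with lspci -nn):

```
# host kernel option
CONFIG_PCI_STUB=y

# kernel command line: claim the NIC before its normal driver does
pci-stub.ids=8086:150e
```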

I've been through it a few times, but the host kernel always detected a PCI bridge chip hardware bug on my 4-port NIC.

----------

## Anon-E-moose

This might have some clues (follow the links to ubuntu documentation) https://askubuntu.com/questions/1028489/intel-pci-e-quad-port-card-passthrough

I pass through a pcie nic card but it's a single port card and likely to be far different than what you're trying to do.

Edit to add: I just looked at the first post; since the ports have individual addresses, you might be able to use regular passthrough.

Basically, unbind them from the Linux side and bind them selectively to the virtual machine.

```
# unbind addon ethernet from linux
echo 0000:03:00.0 > /sys/bus/pci/devices/0000:03:00.0/driver/unbind
```

Set up the virtual side:

```
modprobe vfio_pci   # if not loaded

# ethernet card IDs (from lspci -nnk or similar)
echo "10ec 8168" > /sys/bus/pci/drivers/vfio-pci/new_id
echo 0000:03:00.0 > /sys/bus/pci/drivers/vfio-pci/bind
```

Then let the VM know about it:

```
qemu-system-x86_64 --enable-kvm -m 1024 -cpu athlon -vga std -net none -device vfio-pci,host=03:00.0 /n/don/virtual/XP.img
```

And unbind the devices when done.

Of course all the IDs, addresses, etc. have to be changed to match what your system has.
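If the VM is managed through libvirt rather than raw qemu, the same passthrough can also be expressed as a hostdev fragment in the domain XML (PCI address from the example above; with managed='yes', libvirt is supposed to do the unbind/rebind itself):

```
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```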

----------

## taskman

Thanks all for helping me and happy Xmas.

I'm using NIC passthrough now, but unbinding is still a bit of a mess.

This is what I have done so far ...

I added iommu=pt iommu=1 to my grub.cfg.

FYI: For Intel CPUs you should add intel_iommu=on instead.

```
mm@mypc ~ $ cat /boot/grub/grub.cfg

...

menuentry "Gentoo - 4.14.33-FC.01" {

    linux /@/kernels/4.14.33-FC.01/vmlinuz root=rpool/ROOT/gentoo by=id elevator=noop quiet logo.nologo triggers=zfs iommu=pt iommu=1

...
```

Next step was getting the vendor IDs and iommu-groups.

For that I used the following script ...

```
mm@mypc ~ $ cat bin/ls-iommu.bash 

#!/bin/bash

shopt -s nullglob

for d in /sys/kernel/iommu_groups/*/devices/*; do 

    n=${d#*/iommu_groups/*}; n=${n%%/*}

    printf 'IOMMU Group %s ' "$n"

    lspci -nns "${d##*/}"

done;

exit 0
```

```
mm@mypc ~/bin $ ./ls-iommu.bash 

...

IOMMU Group 16 02:00.0 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

IOMMU Group 17 02:00.1 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

IOMMU Group 18 02:00.2 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

IOMMU Group 19 02:00.3 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

...
```

With this information I could create the modprobe entries and unbind the devices ...

```
mypc ~ # echo "options vfio-pci ids=8086:150e" > /etc/modprobe.d/vfio.conf

mypc ~ # echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" >> /etc/modprobe.d/vfio.conf

mypc ~ # echo "vfio_pci" >> /etc/modules-load.d/local.conf

mypc ~ # echo "0000:02:00.0" > /sys/bus/pci/devices/0000\:02\:00.0/driver/unbind 

mypc ~ # echo "0000:02:00.1" > /sys/bus/pci/devices/0000\:02\:00.1/driver/unbind

mypc ~ # echo "0000:02:00.2" > /sys/bus/pci/devices/0000\:02\:00.2/driver/unbind

mypc ~ # echo "0000:02:00.3" > /sys/bus/pci/devices/0000\:02\:00.3/driver/unbind
```

After a reboot I checked that IOMMU and VFIO are working (Intel users should grep for "dmar|iommu|vfio" instead).

```
mypc ~ # dmesg | grep -iE "amd-vi|vfio"

[    0.639202] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40

[    0.639202] AMD-Vi: Interrupt remapping enabled

[    0.639260] AMD-Vi: Lazy IO/TLB flushing enabled

[    5.051610] VFIO - User Level meta-driver version: 0.3

[    5.174191] vfio_pci: add [8086:150e[ffff:ffff]] class 0x000000/00000000
```

But I realized that only one device got unbound; three devices still show up.

```
mypc ~ # ip addr show

...

2: enp2s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether 00:1b:21:a7:42:b0 brd ff:ff:ff:ff:ff:ff

3: enp2s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether 00:1b:21:a7:42:b1 brd ff:ff:ff:ff:ff:ff

4: enp2s0f2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000

    link/ether 00:1b:21:a7:42:b2 brd ff:ff:ff:ff:ff:ff

 ...
```

Why did only one device of the IOMMU groups get unbound on the host?

Everything else works fine now!

----------

## Anon-E-moose

I'm not sure why, but I don't have a multi-port Ethernet card to check with.

What does "ls /sys/bus/pci/devices/0000\:02\:00.*" return?  Is there a driver subdir under each of them, and does it have an unbind under it?

And what does "ls /sys/bus/pci/drivers" return?

----------

## taskman

 *Anon-E-moose wrote:*   

> I'm not sure why but I don't have a multi-ethernet card to check with.
> 
> What does "ls /sys/bus/pci/devices/0000\:02\:00.*" return?  Is there a driver subdir under each of them, and does it have an unbind under it?
> 
> And what does "ls /sys/bus/pci/drivers" return?

 

```
mm@mypc ~ $ ls /sys/bus/pci/devices/0000\:02\:00.*

'/sys/bus/pci/devices/0000:02:00.0':

broken_parity_status      current_link_speed  dma_mask_bits    iommu          local_cpus      msi_bus    power   reset      revision          uevent

class                     current_link_width  driver           iommu_group    max_link_speed  msi_irqs   ptp     resource   subsystem         vendor

config                    d3cold_allowed      driver_override  irq            max_link_width  net        remove  resource0  subsystem_device

consistent_dma_mask_bits  device              enable           local_cpulist  modalias        numa_node  rescan  resource3  subsystem_vendor

'/sys/bus/pci/devices/0000:02:00.1':

broken_parity_status      current_link_speed  dma_mask_bits    iommu          local_cpus      msi_bus    power   reset      revision          uevent

class                     current_link_width  driver           iommu_group    max_link_speed  msi_irqs   ptp     resource   subsystem         vendor

config                    d3cold_allowed      driver_override  irq            max_link_width  net        remove  resource0  subsystem_device

consistent_dma_mask_bits  device              enable           local_cpulist  modalias        numa_node  rescan  resource3  subsystem_vendor

'/sys/bus/pci/devices/0000:02:00.2':

broken_parity_status      current_link_speed  dma_mask_bits    iommu          local_cpus      msi_bus    power   reset      revision          uevent

class                     current_link_width  driver           iommu_group    max_link_speed  msi_irqs   ptp     resource   subsystem         vendor

config                    d3cold_allowed      driver_override  irq            max_link_width  net        remove  resource0  subsystem_device

consistent_dma_mask_bits  device              enable           local_cpulist  modalias        numa_node  rescan  resource3  subsystem_vendor

'/sys/bus/pci/devices/0000:02:00.3':

broken_parity_status      current_link_speed  dma_mask_bits    iommu          local_cpus      msi_bus    rescan     resource3         subsystem_vendor

class                     current_link_width  driver           iommu_group    max_link_speed  numa_node  reset      revision          uevent

config                    d3cold_allowed      driver_override  irq            max_link_width  power      resource   subsystem         vendor

consistent_dma_mask_bits  device              enable           local_cpulist  modalias        remove     resource0  subsystem_device

```

```
mm@mypc ~ $ ls -l /sys/bus/pci/drivers/

insgesamt 0

drwxr-xr-x 2 root root 0 25. Dez 20:07 8250_mid

drwxr-xr-x 2 root root 0 25. Dez 20:07 agpgart-intel

drwxr-xr-x 2 root root 0 25. Dez 20:07 agpgart-sis

drwxr-xr-x 2 root root 0 25. Dez 20:07 agpgart-via

drwxr-xr-x 2 root root 0 25. Dez 20:07 ahci

drwxr-xr-x 2 root root 0 25. Dez 20:07 ata_piix

drwxr-xr-x 2 root root 0 25. Dez 20:07 dw_dmac_pci

drwxr-xr-x 2 root root 0 25. Dez 20:07 ehci-pci

drwxr-xr-x 2 root root 0 25. Dez 20:07 fam15h_power

drwxr-xr-x 2 root root 0 25. Dez 20:07 i2c-designware-pci

drwxr-xr-x 2 root root 0 25. Dez 20:07 igb

drwxr-xr-x 2 root root 0 25. Dez 20:07 intel_pmc_core

drwxr-xr-x 2 root root 0 25. Dez 20:07 iosf_mbi_pci

drwxr-xr-x 2 root root 0 25. Dez 20:07 k10temp

drwxr-xr-x 2 root root 0 25. Dez 20:07 nvidia

drwxr-xr-x 2 root root 0 25. Dez 20:07 nvidia-nvswitch

drwxr-xr-x 2 root root 0 25. Dez 20:07 ohci-pci

drwxr-xr-x 2 root root 0 25. Dez 20:07 pci-stub

drwxr-xr-x 2 root root 0 25. Dez 20:07 pcieport

drwxr-xr-x 2 root root 0 25. Dez 20:07 piix4_smbus

drwxr-xr-x 2 root root 0 25. Dez 20:07 serial

drwxr-xr-x 2 root root 0 25. Dez 20:07 shpchp

drwxr-xr-x 2 root root 0 25. Dez 20:07 snd_hda_intel

drwxr-xr-x 2 root root 0 25. Dez 20:07 uhci_hcd

drwxr-xr-x 2 root root 0 25. Dez 20:07 vfio-pci

drwxr-xr-x 2 root root 0 25. Dez 20:07 xen-platform-pci

drwxr-xr-x 2 root root 0 25. Dez 20:07 xhci_hcd

```

```
mm@mypc ~ $ lspci -nnk -d 8086:150e

02:00.0 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

   Subsystem: Intel Corporation Ethernet Server Adapter I340-T4 [8086:12a1]

   Kernel driver in use: igb

   Kernel modules: igb

02:00.1 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

   Subsystem: Intel Corporation Ethernet Server Adapter I340-T4 [8086:12a1]

   Kernel driver in use: igb

   Kernel modules: igb

02:00.2 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

   Subsystem: Intel Corporation Ethernet Server Adapter I340-T4 [8086:12a1]

   Kernel driver in use: igb

   Kernel modules: igb

02:00.3 Ethernet controller [0200]: Intel Corporation 82580 Gigabit Network Connection [8086:150e] (rev 01)

   Subsystem: Intel Corporation Ethernet Server Adapter I340-T4 [8086:12a1]

   Kernel driver in use: vfio-pci

   Kernel modules: igb
```

```
mm@mypc ~ $ ls /sys/bus/pci/drivers/{igb,pci-stub,vfio-pci}

/sys/bus/pci/drivers/igb:

0000:02:00.0  0000:02:00.1  0000:02:00.2  0000:05:00.0  bind  module  new_id  remove_id  uevent  unbind

/sys/bus/pci/drivers/pci-stub:

bind  new_id  remove_id  uevent  unbind

/sys/bus/pci/drivers/vfio-pci:

0000:02:00.3  bind  module  new_id  remove_id  uevent  unbind
```

----------

## Anon-E-moose

I'm not sure why it's grabbing eth ports 2-4, but freeing them involves something like

```
echo 0000:02:00.2 > /sys/bus/pci/drivers/igb/unbind
```

then

```
echo 0000:02:00.2 > /sys/bus/pci/devices/0000:02:00.2/driver/unbind
```

I would just do the one and make sure it gets freed from the driver and unbound; then you should be able to do the others.

It sounds like some of the eth "cards" are being grabbed before the original attempt to unbind all of them works; it may be a timing issue.

I put all my unbind of drivers/devices in /etc/local.d/baselayout1.start but that might not be far enough in the boot process for you to get everything done.

You could put all the unbind drivers/devices in a separate script and have it execute after you're sure all the booting has been done.
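A sketch of such a script for all four ports, using the kernel's driver_override mechanism (available since kernel 3.16), with the addresses from this thread; run as root:

```
#!/bin/sh
for fn in 0 1 2 3; do
    dev="0000:02:00.$fn"
    # tell the driver core that only vfio-pci may claim this device
    echo vfio-pci > /sys/bus/pci/devices/$dev/driver_override
    # release it from igb if it is currently bound
    if [ -e /sys/bus/pci/devices/$dev/driver ]; then
        echo "$dev" > /sys/bus/pci/devices/$dev/driver/unbind
    fi
    # re-probe so vfio-pci picks it up
    echo "$dev" > /sys/bus/pci/drivers_probe
done
```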

Edit to add: The way I do it (inside a shell script as I bring up a vm) is

```
  echo "10ec 8168" >/sys/bus/pci/drivers/vfio-pci/new_id 

  echo 0000:03:00.0 >/sys/bus/pci/drivers/vfio-pci/bind 
```

I don't know if it's necessary to have the first line for each bind (2nd line) with a multi-port adapter, but it might be. It's something to investigate.

----------

## gjy0724

It sounds like you are looking for something similar to what I am doing.  I have two physical interfaces on my desktop: one dedicated to the desktop itself, the second dedicated for use by my VMs under KVM/qemu/libvirt.  I am using systemd, so if you are using OpenRC then this will likely not work.  I create a bridge (br0) and tie my second interface to the bridge, then set all VMs to use the bridge (br0) interface.  Each VM then has an IP (or IPs, if it has multiple interfaces) on the local network, so it can reach the internet and is accessible locally as well.

My post on this issue can be found here: https://forums.gentoo.org/viewtopic-t-1063212-highlight-gjy0724.html

The end result was the following, which I am currently using:

/etc/systemd/network/br0.netdev

```
[NetDev]

Name=br0

Kind=bridge
```

/etc/systemd/network/br0.network

```
[Match]

Name=br0
```

/etc/systemd/network/enp4s1.network

```
[Match]

Name=enp4s1

[Network]

Bridge=br0
```
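If the host itself should also get an address on the bridge, br0.network presumably needs a [Network] section too, e.g.:

```
[Match]

Name=br0

[Network]

DHCP=yes
```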

----------

## taskman

Soon I will buy a new computer, and there I will add systemd to get rid of the log files.

But for now I run with an initrd, and these logs make me nuts.

ATM I have been banging my head on the keyboard (for 12 hours) trying to get a DIY kernel.

Then I'll try dracut to set up an rd.driver.pre hook with vfio.

I haven't been able to put vfio into the initramfs so far.

```
genkernel --install initramfs
```

and

```
bliss-initramfs
```

failed on me, and dracut can't find the vfio modules of bliss-kernel.

----------

