Reputation: 35
My LXC containers usually run behind a masqueraded bridge on a private network.
This time I would like to put the containers directly on the host's LAN, but I can't get it to work. I use LXC 2.0.7-2+deb9u2 on Debian, and I am following this documentation: LXC/SimpleBridge.
cfrbr0 is the bridge on the host; its IP is 192.168.0.12/24, it contains the physical interface (which is up with no IP), and the lxc-net service is down.
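For completeness, the bridge is declared in the host's /etc/network/interfaces roughly as below (a sketch only: enp0s3 stands for whatever the physical NIC is called, and 192.168.0.254 is the LAN router):
auto cfrbr0
iface cfrbr0 inet static
address 192.168.0.12
netmask 255.255.255.0
gateway 192.168.0.254
bridge_ports enp0s3
bridge_stp off
bridge_fd 0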
The container config:
lxc.network.type = veth
lxc.network.name = eth0
lxc.network.flags = up
lxc.network.ipv4.gateway = auto
lxc.network.link = cfrbr0
lxc.network.ipv4 = 192.168.0.13/24
/etc/lxc/lxc-usernet:
test veth cfrbr0 100
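For reference, each line in that file follows the pattern <user or @group> <type> <bridge> <count>, so this entry allows the user test to create up to 100 veth devices attached to cfrbr0:
# /etc/lxc/lxc-usernet
# <user or @group> <type> <bridge> <count>
test veth cfrbr0 100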
$ sudo service lxc-net stop
$ lxc-start -n test-ct
$ lxc-attach -n test-ct -- sudo -i
# ip a
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 9e:82:4f:5a:6c:74 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.13/24 brd 192.168.0.255 scope global eth0
valid_lft forever preferred_lft forever
# ip r
default via 192.168.0.12 dev eth0
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.13
# ping 192.168.0.12
PING 192.168.0.12 (192.168.0.12) 56(84) bytes of data.
64 bytes from 192.168.0.12: icmp_seq=1 ttl=64 time=0.081 ms
But:
# ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1) 56(84) bytes of data.
From 192.168.0.12: icmp_seq=2 Redirect Host(New nexthop: 192.168.0.254)
The host itself can ping 1.1.1.1, and the veth is added to the bridge. IP forwarding is set to 1 on the host. FYI, the host is a VirtualBox VM on macOS (same issue with VirtualBox on Debian Stretch).
I think I'm misconfiguring the host-shared bridge, because I have no problems with a masqueraded bridge and an LXC private network. As a workaround, is there a way to put the containers on the local network while keeping a masqueraded bridge?
Thank you for your suggestions!
Upvotes: 3
Views: 4976
Reputation: 171
When using VirtualBox on a Mac (over WiFi?), this requires a different approach because of how WiFi bridging works (https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/network_bridged.html).
The key in this setup is not to bridge eth0 and not to use lxc-net. On the host, /etc/network/interfaces is standard:
auto eth0
iface eth0 inet static
address 192.168.0.8
netmask 255.255.255.0
gateway 192.168.0.1
A bridge is not needed (hence no lxc-net), but set the container config to create a virtual interface like this:
lxc.net.0.type = veth
lxc.net.0.veth.pair = veth0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
lxc.net.0.ipv4.address = 192.168.0.64/32
lxc.net.0.ipv4.gateway = 192.168.0.8
lxc.net.0.script.up = /var/lib/lxc/netup.sh 192.168.0.64
lxc.net.0.script.down = /var/lib/lxc/netdown.sh 192.168.0.64
Some notes on this config: (1) there is no lxc.net.0.link since we don't want a bridge, (2) the lxc.net.0.ipv4.gateway address is the host's IP address, (3) note the netmask is /32, (4) the scripts are explained below.
The netup.sh script routes incoming IP traffic to the container and creates an ARP entry so that eth0 will accept traffic for it:
#!/bin/sh
ip route add ${1}/32 dev veth0
arp -i eth0 -Ds ${1} eth0 pub
The netdown.sh script simply removes the ARP entry (the IP route will go away automatically when veth0 is destroyed).
#!/bin/sh
arp -d -i eth0 ${1} pub
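If the arp command (from the net-tools package) is not available on the host, the same published/proxy ARP entry can be managed with iproute2 instead; an equivalent sketch of the two scripts, using the same ${1} convention:
#!/bin/sh
# netup.sh, iproute2 variant: host route plus proxy-ARP entry for the container IP in $1
ip route add ${1}/32 dev veth0
ip neigh add proxy ${1} dev eth0
#!/bin/sh
# netdown.sh, iproute2 variant: remove the proxy-ARP entry
ip neigh del proxy ${1} dev eth0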
On the guest, /etc/network/interfaces can be empty, since in this case the setup was done in the container config file.
The end result on the host:
# ip route
default via 192.168.0.1 dev eth0 metric 202
192.168.0.0/24 dev eth0 scope link src 192.168.0.8
192.168.0.64 dev veth0 scope link
# arp -a
...
? (192.168.0.64) at 00:16:3e:xx:xx:xx [ether] on veth0
? (192.168.0.64) at * PERM PUP on eth0
# ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:xx:xx:xx
inet addr:192.168.0.8 Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:11476 errors:0 dropped:0 overruns:0 frame:0
TX packets:10425 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1253928 (1.1 MiB) TX bytes:1328460 (1.2 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:112 (112.0 B) TX bytes:112 (112.0 B)
veth0 Link encap:Ethernet HWaddr FE:9D:3D:14:4B:87
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:20 errors:0 dropped:0 overruns:0 frame:0
TX packets:20 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1748 (1.7 KiB) TX bytes:1748 (1.7 KiB)
And in the container:
# ip route
default via 192.168.0.8 dev eth0
192.168.0.8 dev eth0 scope link
# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:xx:xx:xx
inet addr:192.168.0.64 Bcast:255.255.255.255 Mask:255.255.255.255
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:247 errors:0 dropped:0 overruns:0 frame:0
TX packets:247 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:23322 (22.7 KiB) TX bytes:23322 (22.7 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
# ping -c 3 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=58 time=43.458 ms
64 bytes from 1.1.1.1: seq=1 ttl=58 time=41.121 ms
64 bytes from 1.1.1.1: seq=2 ttl=58 time=40.891 ms
--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 40.891/41.823/43.458 ms
I know this was stated in the question, but for anyone starting from scratch, make sure forwarding is enabled: echo 1 > /proc/sys/net/ipv4/ip_forward ; echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf.
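The same thing can be done with sysctl directly, and persisted with a drop-in file instead of appending to /etc/sysctl.conf (the file name under /etc/sysctl.d/ is arbitrary):
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" > /etc/sysctl.d/99-ip-forward.conf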
My other answer works for KVM and might be useful for others, so I won't edit it, but this one is more specific to VirtualBox and WiFi.
Upvotes: 1
Reputation: 171
After experiencing this exact problem, the key for me was to manually set the src in the container's default route.
On the host, /etc/network/interfaces:
auto br0
iface br0 inet static
address xx.xx.xx.zz
netmask 255.255.255.0
gateway xx.xx.xx.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0
(Note there is no entry for eth0, and xx.xx.xx.zz is the primary/main public IP of the host.)
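One more thing worth checking before bringing br0 up: the bridge_* options above are handled by the bridge-utils package, so it has to be installed for ifupdown to create the bridge:
apt-get install bridge-utils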
On the host, the network part of /var/lib/lxc/[container]/config:
lxc.net.0.type = veth
lxc.net.0.link = br0
lxc.net.0.flags = up
lxc.net.0.hwaddr = 00:16:3e:xx:xx:xx
On the container, /etc/network/interfaces:
auto eth0
iface eth0 inet static
address xx.xx.xx.yy
netmask 255.255.255.0
post-up ip route add default via xx.xx.xx.1 src xx.xx.xx.yy
hostname $(hostname)
Note the src xx.xx.xx.yy, where the IP address is the external IP being assigned to the container.
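To try this on a running container before touching the interfaces file, the same route can be set by hand from inside the container (using the same placeholder addresses):
ip route replace default via xx.xx.xx.1 dev eth0 src xx.xx.xx.yy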
The result of this on the container:
# ip route
default via xx.xx.xx.1 dev eth0 src xx.xx.xx.yy
xx.xx.xx.0/24 dev eth0 scope link src xx.xx.xx.yy
# ping -c 3 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=58 time=12.499 ms
64 bytes from 1.1.1.1: seq=1 ttl=58 time=19.876 ms
64 bytes from 1.1.1.1: seq=2 ttl=58 time=12.894 ms
--- 1.1.1.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 12.499/15.089/19.876 ms
And on the host:
# ip route
default via xx.xx.xx.1 dev br0
xx.xx.xx.0/24 dev br0 scope link src xx.xx.xx.zz
# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.00163c024f42 no eth0
vethBEQF56
# ifconfig
br0 Link encap:Ethernet HWaddr 00:16:3C:xx:xx:xx
inet addr:xx.xx.xx.zz Bcast:0.0.0.0 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:85102 errors:0 dropped:0 overruns:0 frame:0
TX packets:3682 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:4238847 (4.0 MiB) TX bytes:570002 (556.6 KiB)
eth0 Link encap:Ethernet HWaddr 00:16:3C:xx:xx:xx
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:92881 errors:0 dropped:0 overruns:0 frame:0
TX packets:3918 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6038953 (5.7 MiB) TX bytes:588494 (574.7 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:1328 errors:0 dropped:0 overruns:0 frame:0
TX packets:1328 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:66440 (64.8 KiB) TX bytes:66440 (64.8 KiB)
vethBEQF56 Link encap:Ethernet HWaddr FE:C8:14:A3:16:48
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:105 errors:0 dropped:0 overruns:0 frame:0
TX packets:26610 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6602 (6.4 KiB) TX bytes:1610963 (1.5 MiB)
Upvotes: 0
Reputation: 5231
What about sharing the network namespace of the host with the container ? To make it, you can use the --share-net option of lxc-start command to which you pass the pid of a process running on host side (e.g. pid#1). Then, the container will be in the network namespace as the host and so, will see all the available network interfaces of the host.
lxc-start -n container_name --share-net 1
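A quick way to check that it worked: attach to the container and list the interfaces; you should see exactly the host's interfaces and addresses, since both now share the same namespace (which also means the container shares the host's IPs and ports):
lxc-attach -n container_name -- ip a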
Upvotes: 1