I’m no network technician, but I’ll try to explain. I have a complex setup where many servers connect to each other, and today, after a reboot, none of my servers could reach my webserver’s IP (192.168.0.18). On the webserver itself, pinging the router (192.168.0.1) returns “Host unreachable”. I checked the interfaces and bond0 is DOWN.

I tried to bring it up manually, but it didn’t work. In /etc/network/interfaces, eth0 and eth1 are slaves of bond0, but in the output of ip a, eth1 is also DOWN and eth0 is missing entirely.
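
Roughly what I tried, for reference (a sketch from memory, assuming the stock ifenslave tooling Ubuntu 12.04 uses for bonding):

ifdown bond0 && ifup bond0    # re-run the bond0 stanza from /etc/network/interfaces
ip link set bond0 up          # or step by step:
ifenslave bond0 eth0 eth1     # this fails, since eth0 doesn't exist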

I don’t know whether those interfaces need to be UP for the webserver to work properly, nor what this all looked like before the reboot. No cable or hardware changes were made.

I’m using Ubuntu 12.04, and unfortunately I can’t upgrade it right now. I just have to fix this as soon as possible.

Output of ip a:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
2: rename2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether f8:1a:67:04:57:b8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fa1a:67ff:fe04:57b8/64 scope link
        valid_lft forever preferred_lft forever
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 1500 qdisc pfifo_fast master bond0 state DOWN qlen 1000
    link/ether f8:1a:67:04:57:b8 brd ff:ff:ff:ff:ff:ff
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether f8:1a:67:04:a1:d4 brd ff:ff:ff:ff:ff:ff
    inet 179.125.210.147/29 brd 179.125.210.151 scope global eth2
        valid_lft forever preferred_lft forever
    inet6 fe80::fa1a:67ff:fe04:a1d4/64 scope link
        valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 94:de:80:6d:8b:b5 brd ff:ff:ff:ff:ff:ff
    inet 10.20.4.18/24 brd 10.20.4.255 scope global eth3
        valid_lft forever preferred_lft forever
    inet6 fe80::96de:80ff:fe6d:8bb5/64 scope link
        valid_lft forever preferred_lft forever
6: eth4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:11:0a:e9:bb:ff brd ff:ff:ff:ff:ff:ff
11: bond0: <NO-CARRIER,BROADCAST,MULTICAST,MASTER,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether f8:1a:67:04:57:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.18/24 brd 192.168.0.255 scope global bond0
        valid_lft forever preferred_lft forever

Content of /etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth2
iface eth2 inet static
address 179.125.210.147
netmask 255.255.255.248
gateway 179.125.210.145

# The primary network interface
auto eth3
iface eth3 inet static
address 10.20.4.18
netmask 255.255.255.0

auto eth0
iface eth0 inet manual
bond-master bond0

auto eth1
iface eth1 inet manual
bond-master bond0

#auto eth2
#iface eth2 inet manual
#bond-master bond0

auto bond0
iface bond0 inet static
address 192.168.0.18
netmask 255.255.255.0
bond-mode 4
bond-miimon 100
bond-lacp-rate 1
bond-slaves eth0 eth1

Check the physical network connections. eth1 is showing as down - is it connected to the switch, and is the switch port up?

eth0 seems to have disappeared - I suspect it is “rename2” - so perhaps try renaming that to eth0.
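
If it is, renaming it back would be something like this (a sketch - run it from the local console, since the interface has to go down, and re-enslave it afterwards):

ip link set rename2 down
ip link set rename2 name eth0    # an interface must be down to be renamed
ip link set eth0 up
ifenslave bond0 eth0             # re-attach it as a bond slave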

Has anything on the switch changed?
Both physical NIC ports, eth0 and eth1, are part of a bond configured for LACP - therefore the switch must be configured with a matching LAG.
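
You can also see what the bonding driver itself thinks - it exposes the bond's state under /proc:

cat /proc/net/bonding/bond0

For a mode-4 bond that should report "IEEE 802.3ad Dynamic link aggregation", each slave's MII status, and the aggregator IDs, so you can tell whether the switch is actually negotiating LACP.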


Thank you for your reply! I didn’t set this server up, but I believe eth0 and eth1 don’t work simultaneously. I tried unplugging the RJ45 cable from the network card it was in and plugging it into another network card that had nothing connected to it.

I believe the first port was the former eth0 and the one I moved the cable to is eth1, since eth0 is still DOWN and eth1 came UP. bond0 came UP as well, and now it looks like it’s working properly again.

If eth0 was actually changed to rename2 and lost its settings, how would I know? And if so, how do I set it up correctly again?

I checked the udev rules; maybe they make more sense to you:

# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x10ec:/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0 (r8169)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="64:70:02:00:23:e3", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x10ec:/sys/devices/pci0000:00/0000:00:1c.2/0000:03:00.0 (r8169)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="f8:1a:67:04:57:b8", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

# PCI device 0x10ec:/sys/devices/pci0000:00/0000:00:1c.3/0000:04:00.0 (r8169)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="f8:1a:67:04:a1:d4", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

# PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1c.5/0000:06:00.0/0000:07:00.0 (tg3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:11:0a:e9:bb:ff", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth4"

# PCI device 0x1969:/sys/devices/pci0000:00/0000:00:1c.6/0000:08:00.0 (alx)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="94:de:80:6d:8b:b5", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"
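
From what I understand, these rules match on MAC address, and bond slaves inherit the bond's MAC while enslaved, so ip a alone can mislead. One way to check which physical port rename2 really is would be to compare its permanent (burned-in) MAC against the rules - a sketch, assuming ethtool is installed and supports -P:

cat /sys/class/net/rename2/address    # current MAC, possibly inherited
ethtool -P rename2                    # permanent hardware MAC

If the permanent MAC is 64:70:02:00:23:e3, rename2 is the port the eth0 rule should have named; if it is f8:1a:67:04:57:b8, the eth1 rule matched it while that name was already taken, which would explain the leftover "rename2" name.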