This article details the configuration of OpenBSD on a Ubiquiti EdgeRouter 4 as a home router, including private DNS resolution for LAN clients and failover WAN connectivity.
Installation
OpenBSD supports MIPS64-based system-on-a-chip (SoC) hardware built around the Cavium OCTEON CPU; this support first appeared in OpenBSD 5.4, and support for eMMC storage was added later in OpenBSD 5.8.
The Ubiquiti Networks EdgeRouter 4 serves as an example of a supported SoC device, featuring 4 GB of eMMC flash storage, 1 GB of RAM, and a 4-core CPU.
The installation procedure is straightforward and well-documented in the OpenBSD/octeon installation instructions. However, the following points warrant specific mention:
- Due to the limited flash storage capacity, only a single root (/) partition was used, without a dedicated swap partition.
- The target device has 4 CPU cores, and consequently the final bootcmd includes the parameter numcores=4.
- To mitigate the risk of filesystem corruption resulting from power outages, the root filesystem (/) is mounted with the sync option (a sketch of these settings appears after this list).
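As an illustration only, the corresponding pieces of configuration might look roughly like this; the disk DUID and the exact bootcmd arguments below are placeholders, and the authoritative form is given in the octeon installation instructions:
# /etc/fstab -- single root filesystem, no swap, mounted with the sync option
# (the DUID is a placeholder for the actual disk identifier)
0123456789abcdef.a / ffs rw,sync 1 1
# Approximate U-Boot setup storing a bootcmd with numcores=4
setenv bootcmd 'fatload mmc 0 ${loadaddr} bsd; bootoctlinux rootdev=sd0 numcores=4'
saveenv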
This particular SoC lacks a Real-Time Clock (RTC). Ordinarily, OpenBSD would use the filesystem timestamp as a fallback time source; however, this mechanism was observed to be non-functional in the tested OpenBSD 7.7.[1]
Although ntpd typically adjusts the system clock, this configuration poses a challenge as the machine is designated as a DNS resolver with private zones, and the DNS server requires a relatively accurate time for DNSSEC validation. To resolve this dependency cycle, static IP addresses for several anycast NTP servers were added to /etc/ntpd.conf:
server 0.0.0.0 # A time.cloudflare.com
server 0.0.0.0 # A time.google.com
server :: # AAAA time.cloudflare.com
server :: # AAAA time.google.com
These IP addresses are periodically updated by a script executed via /etc/daily.local as follows:
# Update hardcoded IPv4 and IPv6 addresses
if awk '
{
original_line = $0
processed_line = 0
record_type = ""
hostname = ""
new_ip = ""
# Pattern: server <non-space-chars> # A/AAAA <any-chars-for-hostname>
if (original_line ~ /^server [^ ]+ # A .+$/) {
record_type = "A"
temp_hostname_extractor = original_line
sub(/^server [^ ]+ # A /, "", temp_hostname_extractor)
hostname = temp_hostname_extractor
processed_line = 1
}
else if (original_line ~ /^server [^ ]+ # AAAA .+$/) {
record_type = "AAAA"
temp_hostname_extractor = original_line
sub(/^server [^ ]+ # AAAA /, "", temp_hostname_extractor)
hostname = temp_hostname_extractor
processed_line = 1
}
if (processed_line == 1 && hostname != "") {
cmd = "dig +short -t " record_type " " hostname " | sort | head -n1"
getline_status = (cmd | getline new_ip)
if (getline_status <= 0) {
new_ip = ""
}
close(cmd)
}
if (new_ip != "") {
print "server " new_ip " # " record_type " " hostname
} else {
print original_line
}
}
' /etc/ntpd.conf > /etc/ntpd.conf.new; then
mv -f /etc/ntpd.conf.new /etc/ntpd.conf
rcctl restart ntpd > /dev/null
fi
It should be noted that library Address Space Layout Randomization (ASLR) was disabled using rcctl disable library_aslr. This significantly reduced the system boot time from approximately 15 minutes to 2 minutes.[2]
Network Design and Setup
The network design utilizes three Ethernet ports on the device:
- cnmac1: Configured as the LAN interface within rdomain 0, using the address ranges 172.31.0.0/16 and fd00::/8.
- cnmac2: Designated as the primary WAN interface within rdomain 1, obtaining dynamic addresses via DHCP/DHCPv6.
- cnmac3: Configured as the backup WAN interface within rdomain 2, also obtaining dynamic addresses.
Interface configurations are as follows:
gw$ cat /etc/hostname.cnmac1
inet 172.31.0.1 255.255.255.0
inet6 fd00::1/64
gw$ cat /etc/hostname.cnmac2
rdomain 1
inet autoconf
# /56 prefix needs /etc/dhcp6leased.conf
inet6 autoconf
gw$ cat /etc/hostname.cnmac3
rdomain 2
inet autoconf
inet6 autoconf
gw$ cat /etc/dhcp6leased.conf
request rapid commit
request prefix delegation on cnmac2 for {
cnmac2/56
}
gw$
It should be noted that the Internet Service Provider (ISP) used here delegates a prefix larger than /64, necessitating adjustments in /etc/dhcp6leased.conf for proper prefix delegation handling.
Routing between domains is managed via rport interfaces. Interface rport0 is assigned to rdomain 0, and the peer address on this link serves as the default route for this domain. rport1 (for WAN1) and rport2 (for WAN2) are configured similarly within their respective routing domains, with static routes pointing back to the LAN segment.
gw$ cat /etc/hostname.rport0
up
rdomain 0
inet 172.31.1.0 255.255.255.255 172.31.1.1
inet6 fd00:1:: 127
!route -T 0 add 0.0.0.0/0 172.31.1.1
!route -T 0 add ::/0 fd00:1::1
gw$ cat /etc/hostname.rport1
up
rdomain 1
parent rport0
inet 172.31.1.1 255.255.255.255 172.31.1.0
inet6 fd00:1::1 127
!route -T 1 add 172.31.0.0/16 172.31.1.0
!route -T 1 add fd00::/8 fd00:1::
gw$ cat /etc/hostname.rport2
up
rdomain 2
inet 172.31.1.1 255.255.255.255 172.31.1.0
inet6 fd00:1::1 127
!route -T 2 add 172.31.0.0/16 172.31.1.0
!route -T 2 add fd00::/8 fd00:1::
gw$
By default, rport1 is logically connected to rport0 by setting its parent interface.
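For reference, the active upstream can also be switched by hand using the same ifconfig commands that ifstated issues in the failover section below:
ifconfig rport1 -parent         # detach the LAN-facing link from WAN1
ifconfig rport2 parent rport0   # attach it to WAN2 instead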
This design necessitates enabling IP forwarding for both IPv4 and IPv6 via sysctl settings:
gw$ cat /etc/sysctl.conf | grep -v ^#
net.inet.ip.forwarding=1 # 1=Permit forwarding (routing) of IPv4 packets
net.inet6.ip6.forwarding=1 # 1=Permit forwarding (routing) of IPv6 packets
gw$
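The same settings can be applied to the running system without a reboot:
sysctl net.inet.ip.forwarding=1
sysctl net.inet6.ip6.forwarding=1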
Failover WAN Implementation
The implemented design, employing distinct rdomains and rport interfaces, facilitates WAN failover. Connectivity checks are performed by pinging stable remote endpoints (1.1.1.1 and 8.8.8.8) within each WAN’s routing domain. The ifstated daemon manages the failover logic, switching the active WAN connection by changing which of rport1 and rport2 has rport0 as its parent interface, based on the ping results.
The ifstated.conf configuration is as follows:
gw$ cat /etc/ifstated.conf
wan1_ok = '"(\
route -T1 exec ping -q -c 1 -w 1 1.1.1.1 || \
route -T1 exec ping -q -c 1 -w 1 8.8.8.8 \
) >/dev/null 2>&1" every 5'
wan2_ok = '"(\
route -T2 exec ping -q -c 1 -w 1 1.1.1.1 || \
route -T2 exec ping -q -c 1 -w 1 8.8.8.8 \
) >/dev/null 2>&1" every 5'
state wan1_plug {
init {
run "ifconfig rport2 -parent"
run "ifconfig rport1 parent rport0"
}
if ! $wan1_ok {
run "ifconfig rport1 -parent"
set-state wan2_plug
}
}
state wan2_plug {
init {
run "ifconfig rport1 -parent"
run "ifconfig rport2 parent rport0"
}
if $wan1_ok || ! $wan2_ok {
run "ifconfig rport2 -parent"
set-state wan1_plug
}
}
gw$
The ifstated daemon should then be enabled and started:
rcctl enable ifstated
rcctl start ifstated
IPv4 and IPv6 NAT Configuration
A consequence of the rdomain and rport design is an incorrect automatic determination of the egress interface group within pf. Because only rport0 becomes a member of the egress group,[3] standard nat-to (egress) rules are ineffective. Instead, per-interface NAT rules were configured in /etc/pf.conf:
match out on cnmac2 from 172.31.0.0/16 to any \
nat-to (cnmac2)
match out on cnmac2 from fd00::/8 to any \
nat-to (cnmac2)
match out on cnmac3 from 172.31.0.0/16 to any \
nat-to (cnmac3)
match out on cnmac3 from fd00::/8 to any \
nat-to (cnmac3)
pass in proto tcp \
from any to {(cnmac2) (cnmac3)} \
port { 80 443 } \
rdr-to 172.31.0.23
pass in proto tcp \
from any to {(cnmac2) (cnmac3)} \
port { 80 443 } \
rdr-to fd00::211:32ff:fede:ad71
match out \
from 172.31.0.0/24 to 172.31.0.0/24 \
nat-to (cnmac1)
match out \
from fd00::/64 to fd00::/64 \
nat-to (cnmac1)
In this configuration, both IPv4 and IPv6 traffic originating from the internal network (172.31.0.0/16, fd00::/8) is subjected to Network Address Translation (NAT) using the IP addresses of the router’s corresponding external interface (cnmac2 or cnmac3). This strategy prevents direct external access via IPv6 to internal devices.
Additionally, ports 80 and 443 are forwarded from both WAN interfaces to internal hosts using rdr-to rules.
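After editing /etc/pf.conf, the ruleset can be checked and loaded in the usual way (generic pfctl usage, not specific to this setup):
pfctl -nf /etc/pf.conf   # parse the ruleset without loading it
pfctl -f /etc/pf.conf    # load the new ruleset
pfctl -sr                # review the currently loaded rules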
DHCP and Dynamic DNS Setup
ISC BIND (named) and ISC DHCPD can be integrated to provide dynamic DNS updates for an internal zone.
Initially, the required packages are installed:
pkg_add isc-bind isc-dhcp-server
Subsequently, an RNDC (Remote Name Daemon Control) key is generated for secure communication between DHCPD and named:
rndc-confgen -k dhcp-update -a
cp /etc/rndc.key /var/named/etc/
The DHCP server configuration (/etc/dhcpd.conf) is shown next:
authoritative;
allow booting;
allow bootp;
allow unknown-clients;
ddns-updates on;
ddns-update-style interim;
update-static-leases on;
one-lease-per-client on;
include "/etc/rndc.key";
zone 31.172.in-addr.arpa {
primary 127.0.0.1;
key dhcp-update;
}
zone local {
primary 127.0.0.1;
key dhcp-update;
}
subnet 172.31.0.0 netmask 255.255.255.0 {
option routers 172.31.0.1;
option domain-name "local";
option domain-name-servers 172.31.0.1;
range 172.31.0.100 172.31.0.200;
}
group {
host example.local {
hardware ethernet aa:bb:cc:ee:ff:00;
fixed-address 172.31.0.123;
option host-name "example";
}
}
Note that the group block provides an example of static IP address allocation based on MAC address.
Following this, the named (BIND) configuration (/var/named/etc/named.conf) is established:
include "/etc/rndc.key";
controls {
inet 127.0.0.1 port 953
allow { 127.0.0.1; } keys { "dhcp-update"; };
};
options {
directory "/tmp"; // working directory, inside the /var/named chroot
// - must be writable by _bind
version ""; // remove this to allow version queries
listen-on {
127.0.0.1;
172.31.0.1;
};
listen-on-v6 {
::1;
fd00::1;
};
allow-recursion {
127.0.0.1;
::1;
172.31.0.0/16;
fd00::/8;
};
};
zone "local" {
type master;
file "/var/db/local.zone";
allow-update { key dhcp-update; };
notify yes;
};
zone "31.172.in-addr.arpa" {
type master;
file "/var/db/31.172.in-addr.arpa.zone";
allow-update { key dhcp-update; };
notify yes;
};
Next, minimal zone files must be created within the named chroot environment (/var/named). These serve as the initial databases for the forward (local) and reverse (31.172.in-addr.arpa) zones. Appropriate permissions must also be set.
gw$ cat /var/named/var/db/local.zone
$ORIGIN .
$TTL 86400 ; 1 day
local IN SOA gw.local. hostmaster.local. (
0 ; serial
3600 ; refresh (1 hour)
3600 ; retry (1 hour)
604800 ; expire (1 week)
3600 ; minimum (1 hour)
)
NS gw.local.
A 172.31.0.1
$ORIGIN local.
gw A 172.31.0.1
gw-in A 172.31.1.0
gw-out A 172.31.1.1
gw$ cat /var/named/var/db/31.172.in-addr.arpa.zone
$ORIGIN .
$TTL 86400 ; 1 day
31.172.in-addr.arpa IN SOA gw.local. hostmaster.local. (
0 ; serial
3600 ; refresh (1 hour)
3600 ; retry (1 hour)
604800 ; expire (1 week)
3600 ; minimum (1 hour)
)
NS gw.local.
A 172.31.0.1
$ORIGIN 0.31.172.in-addr.arpa.
1 PTR gw.local.
$ORIGIN 1.31.172.in-addr.arpa.
0 PTR gw-in.local.
1 PTR gw-out.local.
gw$ doas chown -R _bind:_bind /var/named/var/db
gw$
It is important to note that the DHCP daemon (isc_dhcpd) requires a lease file to exist, even if empty, before it will start successfully. An empty lease file can be created with the correct ownership using:
install -o _isc-dhcp -g _isc-dhcp /dev/null /var/db/isc-dhcp/dhcpd.leases
For IPv6 client configuration via Stateless Address Autoconfiguration (SLAAC), the Router Advertisement Daemon (rad) must also be configured. The configuration in /etc/rad.conf specifies the prefix and DNS server to be advertised on the LAN interface (cnmac1):
interface cnmac1 {
no auto prefix
prefix fd00::/64
dns {
nameserver fd00::1
}
}
Finally, all necessary services should be enabled to start on boot and initiated for the current session:
rcctl enable isc_named isc_dhcpd rad
rcctl start isc_dhcpd isc_named rad
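Once a client such as the example.local host above has picked up its lease, the dynamic updates can be spot-checked against the local resolver; with the configuration shown, these queries should return 172.31.0.123 and example.local. respectively:
dig @172.31.0.1 example.local A +short
dig @172.31.0.1 -x 172.31.0.123 +short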
NAT Performance Benchmarking
Router NAT performance was evaluated using the iperf3 benchmarking tool. The test client was executed on a Mac Mini (M1 model) connected via a 1 Gbps Ethernet link.
The test methodology involved establishing an iperf3 connection from the client (Mac Mini M1) to a port on the router’s WAN interface. This port was configured with pf rules (rdr-to and nat-to, as described previously) to redirect and NAT the traffic back to the same Mac Mini, which simultaneously operated the iperf3 server instance. This loopback configuration is designed to measure the packet processing throughput of the router under load, specifically stressing its NAT capabilities. The test was conducted over a duration of 300 seconds.
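As a rough sketch (the WAN address and port below are placeholders, not the values actually used), the test amounts to running both iperf3 endpoints on the Mac Mini and pointing the client at the router’s WAN side:
# Run on the Mac Mini, which acts as both endpoints
iperf3 -s -p 5201 &
iperf3 -c <router-wan-address> -p 5201 -t 300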
During the test, CPU utilization was observed as follows on the Mac Mini (M1 model, acting as iperf3 client and server):
- Approximately 30% by kernel_task.
- Approximately 28% by the iperf3 -s (server) process.
- Approximately 14% by the iperf3 -c (client) process.
The results presented below were obtained using OpenBSD 7.7.
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-300.00 sec 16.4 GBytes 469 Mbits/sec 1378872 sender
[ 5] 0.00-300.01 sec 16.4 GBytes 469 Mbits/sec receiver
Subsequently, the benchmark was repeated using an OpenBSD -current development snapshot (dated 2025-05-09). The results obtained with this version are as follows:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-300.00 sec 16.2 GBytes 463 Mbits/sec 1179368 sender
[ 5] 0.00-300.03 sec 16.2 GBytes 463 Mbits/sec receiver
Concurrently, on the EdgeRouter 4 (device under test), CPU utilization reached nearly 100%, primarily by the softnet0 process.
Comparative Network Performance Benchmarking
This section evaluates the network performance of two Ubiquiti devices when functioning as iperf3 servers. For all tests, a Mac Mini (M1 model) served as the iperf3 client.
The first device assessed was the Ubiquiti Networks EdgeRouter 4, which is understood to be equipped with a Cavium OCTEON (rev 0.2) @ 1000 MHz processor. The iperf3 server performance results for the EdgeRouter 4 are as follows:
[ ID] Interval Transfer Bitrate Retr
[ 5] 0.00-300.00 sec 15.1 GBytes 432 Mbits/sec 46336 sender
[ 5] 0.00-300.04 sec 15.1 GBytes 432 Mbits/sec receiver
For a comparative analysis, a Ubiquiti Networks UniFi Security Gateway (USG) was subsequently tested as an iperf3 server. The USG features a Cavium OCTEON (rev 0.1) @ 500 MHz processor. The performance results for the USG are presented below:
[ ID] Interval Transfer Bitrate Retr
[ 7] 0.00-300.00 sec 6.04 GBytes 173 Mbits/sec 0 sender
[ 7] 0.00-301.51 sec 6.04 GBytes 172 Mbits/sec receiver
During both benchmark scenarios, CPU utilization on the respective server device (EdgeRouter 4 or USG) exhibited a similar pattern:
- Approximately 100% of a CPU core’s capacity by softnet0.
- Approximately 22% of a CPU core’s capacity by iperf3 -s.
Finally, the same benchmark was performed on the USG with pf disabled (via pfctl -d):
[ ID] Interval Transfer Bitrate Retr
[ 7] 0.00-300.00 sec 8.04 GBytes 230 Mbits/sec 7240 sender
[ 7] 0.00-301.53 sec 8.04 GBytes 229 Mbits/sec receiver
and on the EdgeRouter 4:
[ ID] Interval Transfer Bitrate Retr
[ 7] 0.00-300.00 sec 18.5 GBytes 529 Mbits/sec 153480 sender
[ 7] 0.00-300.00 sec 18.5 GBytes 529 Mbits/sec receiver
During these latter benchmarks on both devices (with pf disabled), the CPU utilization profile was consistent with the previous tests (with pf enabled).
The same tests were reproduced on the EdgeRouter 4 without any vlan interfaces:
[ ID] Interval Transfer Bitrate Retr
[ 7] 0.00-30.00 sec 1.72 GBytes 493 Mbits/sec 11584 sender
[ 7] 0.00-30.01 sec 1.72 GBytes 493 Mbits/sec receiver
and without vlan interfaces and pf:
[ ID] Interval Transfer Bitrate Retr
[ 7] 0.00-30.00 sec 2.02 GBytes 579 Mbits/sec 8688 sender
[ 7] 0.00-30.00 sec 2.02 GBytes 578 Mbits/sec receiver
All tests were run on an OpenBSD -current development snapshot (dated 2025-05-09).
[1] This issue was documented in a bug report: https://marc.info/?l=openbsd-bugs&m=174704252007377
[2] It is also noteworthy that the sysupgrade process (excluding the download of sets) requires approximately one hour on this hardware when the sync mount option is active, and about half an hour otherwise.
[3] A patch has been suggested to extend the egress interface group definition across multiple routing domains (rdomain). Nevertheless, further work is required for pf to accurately determine the appropriate egress interface(s) in configurations utilizing multiple routing domains.