I'm not surprised that the issue exists, as even 10 years ago these speeds were uncommon outside of the datacentre; I'm just surprised that nobody felt a pressing enough need to fix it sooner.
I guess few people with faster ports felt the need to limit bandwidth for a service to a value that large.
FTA:
“OpenBSD's PF packet filter has long supported HFSC traffic shaping with the queue rules in pf.conf(5). However, an internal 32-bit limitation in the HFSC service curve structure (struct hfsc_sc) meant that bandwidth values were silently capped at approximately 4.29 Gbps, the maximum value of a u_int.
With 10G, 25G, and 100G network interfaces now commonplace, and OpenBSD devs making huge progress unlocking the kernel for SMP and adding drivers for cards supporting some of these speeds, this limitation started to get in the way. Configuring `bandwidth 10G` on a queue would silently wrap around, producing incorrect and unpredictable scheduling behaviour.
A new patch widens the bandwidth fields in the kernel's HFSC scheduler from 32-bit to 64-bit integers, removing this bottleneck entirely.”
Now I'm more scared to use OpenBSD than I was a minute before.
I strongly prefer software that fails loudly and explicitly.
Regardless of how good the philosophy of something is, if it's as niche and manpower constrained as OpenBSD is then it's going to accumulate problems like this.
That is, "worse is better" and it's okay to accept a somewhat leaky abstraction or less helpful diagnostics if it simplifies the implementation.
This is why `ed` doesn't bother to say anything but "?" to erroneous commands. If the user messes up, why should it be the job of the OS to handhold them? Garbage in, garbage out. That attitude may seem out of place today but consider that it came from a time when a program might have one author and 1-20 users, so their time was valued almost equally.
Half the problem is lack of proper drivers. I love OpenBSD but all the fibre stuff is just a bit half-baked.
For a long time OpenBSD didn't even have DOM (light-level monitoring etc.) exposed in its 1g fibre drivers. Stuff like that automatically kills off OpenBSD as a choice for datacentres where DOM stats are a non-negotiable hard requirement as they are so critical to troubleshooting.
OpenBSD finally introduced DOM stats for SFPs somewhere around 2020–2021, but it doesn't always work: it depends on whether you have the right magic combination of SFP and card manufacturer. Whilst on FreeBSD it Just Works (TM).
And then overall, for higher-speed optics, FreeBSD simply remains lightyears ahead (forgive the pun!). For example, Deciso make nice little router boxes with 10G SFP+ on them; FreeBSD has the drivers out of the box, OpenBSD doesn't. And that's only an SFP+ example; it's basically tumbleweed-rolling-through-the-desert territory if you start venturing up to QSFP etc. ...
IIRC there are two problems at play:
First, I'm not a C coder so this is a bit above my pay grade, but from what little I do remember about the subject, the problem relates to the OpenBSD requirement to adopt their security mechanisms such as pledge, unveil and strlcpy. IIRC the OpenBSD compiler is also (unsurprisingly!) stricter about the stack protector, W^X, etc. So the porting process is perhaps more time-consuming and low-level than it might otherwise be on other porting projects.
Second, the licensing thing might come into it. OpenBSD has a strong preference for the most permissive licences, so things like GPL-licensed origins might not be acceptable. IIRC FreeBSD is a little more relaxed, within reason? And when you're working with network cards I would think that is perhaps hard to avoid to some extent if you're relying on certain bits being ultimately derived from Intel chipsets or whatever.
I'm open to correction by those more knowledgeable than me on porting intricacies. ;)
I think most of the vendor-supplied NIC drivers in FreeBSD are BSD-licensed, so that shouldn't be an issue. I checked Intel, Mellanox (now NVIDIA), Cavium/QLogic/Broadcom, and Solarflare. The Realtek driver in the tree is BSD-licensed but not vendor-provided; the vendor driver in ports is also BSD-licensed. I'm not sure if there's a datacentre Ethernet vendor with in-kernel drivers I missed, but I don't think licensing is a problem here either; anyway, you could ship a driver module out of tree if it was.
OpenBSD shines as a secure all-in-one router SOHO solution. And it’s great because you get all the software you need in the base system. PF is intuitive and easy to work with, even for non network gurus.
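To give a flavour of "intuitive and easy to work with", here is a minimal SOHO-style pf.conf sketch; the interface group and port are my own assumptions, not anything from the comment above:

```
# Hypothetical minimal ruleset sketch
set skip on lo                           # don't filter loopback
block return                             # default deny, politely
pass out on egress                       # allow all outbound, keep state
pass in on egress proto tcp to port 22   # allow inbound ssh
```

Rules are evaluated last-match-wins, so the default `block return` is simply overridden by the later `pass` rules, which keeps small rulesets very readable.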
You end up pushing the hot path out to userland where you can actually scale across cores (DPDK/netmap/XDP style approaches), batch packets, and then DMA straight to and from the NIC. The kernel becomes more of a control plane than the data plane.
PF/ALTQ is very much in the traditional in-kernel, per-packet model, so it hits those limits sooner.
Staying in the kernel is approximately the same as bypassing the kernel (caveats apply); for a packet filtering / smoothing use case, I don't think kernel bypass is needed. You probably want to tune NIC hashing so that inbound traffic for a given shaping queue arrives in the same NIC rx queue; but you probably want that in a kernel bypass case as well. Userspace is certainly nicer during development, as it's easier to push changes, but in 2026, it feels like traffic shaping has pretty static requirements and letting the kernel do all the work feels reasonable to me.
OTOH, OpenBSD is pretty far behind the curve on SMP and all that (I think their PF now has support for SMP, but maybe it's still in development?; I'd bet there's lots of room to reduce cross-core communication as well, but I haven't examined it). You can't pin userspace threads to CPUs, I doubt their kernel data structures are built to reduce communication, etc. Kernel bypass won't help as much as you would hope, if it's available, which it might not be, because you can't control the userspace placement to limit cross-core communication.
Linux may have a different packet flow, or netfilter could be faster than pf.
> I find nftables to be frankly less challenging than pf
I also don't really care for how pf specifies rules. I would rather run ipfw, but pf has pfsync whereas ipfw doesn't have a way to do failover with state synchronization for stateful firewalls/NAT. So I figured out how to express my rules in pf.conf; because it was worth it, even if I don't like it :P
What sort of kernel do you have which can't scale across cores?
TBF that was the case historically, but they have absolutely been putting effort into performance in their more recent releases.
Lots of stuff that used to be simply horrific on OpenBSD, such as multi-peer BGP full-table refreshes is SIGNIFICANTLY better in the last couple of years.
Clearly still not as good as FreeBSD, but compared to what it was...
And that, IMHO, is a good thing.
This looks like it only affects bandwidth limiting. I suspect it's pretty niche to use OpenBSD as a traffic shaper at 10G+, and if you did, I'd imagine most of the queue limits would tend toward significantly less than 4G.
When we had 512kbit links, prioritizing VoIP was a thing, and for asymmetric links like 128/512kbit it was prudent to prioritize small packets (ssh) and TCP ACKs on the outgoing link or downloads would suffer; but when you have 5/10/25GE, not being able to stick an ACK packet at the front of the queue is perhaps not the main issue.
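For reference, that old ACK-prioritization trick can be expressed in modern OpenBSD pf via the two-priority form of `set prio` from pf.conf(5); the interface group and priority values below are my own illustration:

```
# Regular outbound TCP gets priority 3; empty ACKs and
# lowdelay-flagged packets get the higher priority 6.
match out on egress proto tcp set prio (3, 6)
```

This is the lightweight alternative to full HFSC queueing when all you want is to keep ACKs flowing on a saturated uplink.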
But, OpenBSD is a project by and for its developers. They use it and develop it to do what they want; they don't really care what anyone else does or doesn't do with it.
"When we set the upper limit of PC-DOS at 640K, we thought nobody would ever need that much memory." - Bill Gates
Especially given that IEEE 802.3dj is working on 1.6T / 1600G, and is expected to publish the final spec in Summer/Autumn 2026:
* https://en.wikipedia.org/wiki/Terabit_Ethernet
Currently these interfaces are only on switches, but there are already NICs at 800G (P1800GO, Thor Ultra, ConnectX-8/9), so if you LACP/LAGG two together your bond is at 1600G.
* https://en.wikipedia.org/wiki/Vector_Packet_Processing
* https://www.youtube.com/watch?v=ptm9h-Lf0gg ("VPP: A 1Tbps+ router with a single IPv4 address")
> We now support configuring bandwidth up to ~1 Tbps (overflow in m2sm at m > 2^40).
So I think that's it: 2^40 is ~1.0995 trillion bits per second, i.e. roughly 1.1 Tbps.
We will be using Ethernet until the heat death of the universe, if we survive that long.
Calling something "Ethernet" amounts to a promise that:
- From far enough up the OSI sandwich*, you can pretend that it's a magically-faster version of old-fashioned Ethernet
- It sticks to broadly accepted standards, so you won't get bitten by cutting-edge or proprietary surprises