In an era when even cheap consumer hardware ships with 2.5G ports, this number seems weirdly low. Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expected to handle 10G or higher per port, or is it just filtering that's an issue?<p>I'm not surprised that the issue exists, as even 10 years ago these speeds were uncommon outside of the datacentre; I'm just surprised that nobody has felt a pressing enough need to fix this at some point in the intervening years.
The article is about allowing bandwidth limits in bits per second that are larger than 2³²-1, not about how fast <i>pf</i> can filter packets.<p>I guess few people with faster ports felt the need to limit bandwidth for a service to something that large.<p>FTA:<p><i>“OpenBSD's PF packet filter has long supported HFSC traffic shaping with the queue rules in pf.conf(5). However, an internal 32-bit limitation in the HFSC service curve structure (struct hfsc_sc) meant that bandwidth values were silently capped at approximately 4.29 Gbps, the maximum value of a u_int.<p>With 10G, 25G, and 100G network interfaces now commonplace, OpenBSD devs making huge progress unlocking the kernel for SMP, and adding drivers for cards supporting some of these speeds, this limitation started to get in the way. Configuring a bandwidth of 10G on a queue would silently wrap around, producing incorrect and unpredictable scheduling behaviour.<p>A new patch widens the bandwidth fields in the kernel's HFSC scheduler from 32-bit to 64-bit integers, removing this bottleneck entirely.”</i>
> Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere<p>Half the problem is lack of proper drivers. I love OpenBSD, but all the fibre stuff is just a bit half-baked.<p>For a long time OpenBSD didn't even have DOM (light-level monitoring etc.) exposed in its 1G fibre drivers. Stuff like that automatically kills off OpenBSD as a choice for datacentres, where DOM stats are a non-negotiable hard requirement because they are so critical to troubleshooting.<p>OpenBSD <i>finally</i> introduced DOM stats for SFP somewhere around 2020–2021, but it doesn't always work; it depends on having the right magic combination of SFP and card manufacturer. Whilst on FreeBSD it Just Works (TM).<p>And then overall, for higher-speed optics, FreeBSD simply remains lightyears ahead (forgive the pun!). For example, Deciso make nice little router boxes with 10G SFP+ on them; FreeBSD has the drivers out-of-the-box, OpenBSD doesn't. And that's only an SFP+ example — it's basically tumbleweed territory if you start venturing up to QSFP etc. ...
Not every packet that passes through PF has to go through a queue. The limitation specifically affected the throughput limits of queues.
AFAIK performance is not a priority for the OpenBSD project - security is (along with related qualities like code that is easy to understand and maintain). FreeBSD (at least when I followed it several years ago) had better performance both for ipfw and for its own PF fork (which is not fully compatible with the OpenBSD one).
> AFAIK performance is not a priority for OpenBSD project - security is<p>TBF that was the case historically, but they have absolutely been putting effort into performance in their more recent releases.<p>Lots of stuff that used to be simply horrific on OpenBSD, such as multi-peer BGP full-table refreshes, is <i>SIGNIFICANTLY</i> better in the last couple of years.<p>Clearly still not as good as FreeBSD, but compared to what it <i>was</i>...
A lot of the time once you get into multi-gig+ territory the answer isn't "make the kernel faster," it's "stop doing it in the kernel."<p>You end up pushing the hot path out to userland where you can actually scale across cores (DPDK/netmap/XDP style approaches), batch packets, and then DMA straight to and from the NIC. The kernel becomes more of a control plane than the data plane.<p>PF/ALTQ is very much in the traditional in-kernel, per-packet model, so it hits those limits sooner.
The big things to avoid are crossing the user/kernel divide and communication across cores.<p>Staying in the kernel is approximately the same as bypassing the kernel (caveats apply); for a packet filtering / shaping use case, I don't think kernel bypass is needed. You probably want to tune NIC hashing so that inbound traffic for a given shaping queue arrives in the same NIC rx queue; but you probably want that in a kernel-bypass case as well. Userspace <i>is</i> certainly nicer during development, as it's easier to push changes, but in 2026, it feels like traffic shaping has pretty static requirements, and letting the kernel do all the work feels reasonable to me.<p>OTOH, OpenBSD is pretty far behind the curve on SMP and all that (I think their PF now has support for SMP, but maybe it's still in development?; I'd bet there's lots of room to reduce cross-core communication as well, but I haven't examined it). You can't pin userspace processes to CPUs, I doubt their kernel data structures are built to reduce communication, etc. Kernel bypass won't help as much as you would hope, if it's available, which it might not be, because you can't control the userspace side to limit cross-core communication.
Just a single data point, but as much as people like to jerk off the BSDs in general: having tested both recent FreeBSD (which should be much faster than OpenBSD) and Debian on the now somewhat elderly APU2s I have, netfilter is noticeably faster (and I find nftables frankly less challenging than pf) and gets those devices right at gigabit line speed even with complex firewall rules, whereas pf leaves performance on the table. It probably has to do with the fact that it's an older 4-core design that wasn't super high powered to begin with (it still does its job extremely well), but still.
One issue I've seen from a fair number of people running FreeBSD on APU2s is with PPPoE: inbound traffic (at least) all hashes to the same RX queue, and as a result there's no parallelism. If you're on gigE fiber with PPPoE, the APU2 can't really keep up single-threaded. The earlier APU (1) boards use Realtek NICs that I think only have a single queue, so you won't get effective parallelism there either. If I'm finding the right information, APU2s with the i210 have 4 RX queues, which is well matched with a quad core, but those with the i211 only have 2 RX queues, which means half of the processors will have nothing to do unless your kernel redistributes packets after receiving them, but that comes at a cost too.<p>Linux may have a different packet flow, or netfilter could be faster than pf.<p>> I find nftables to be frankly less challenging than pf<p>I also don't really care for how pf specifies rules. I would rather run ipfw, but pf has pfsync, whereas ipfw doesn't have a way to do failover with state synchronization for stateful firewalls/NAT. So I figured out how to express my rules in pf.conf, because it was worth it, even if I don't like it :P
<i>pushing the hot path out to userland where you can actually scale across cores</i><p>What sort of kernel do you have which can't scale across cores?
PF itself is not tailored towards ISPs and/or big orgs. IPFW (FreeBSD) is more powerful and flexible.<p>OpenBSD shines as a secure all-in-one router SOHO solution. And it’s great because you get all the software you need in the base system. PF is intuitive and easy to work with, even for non network gurus.
> Does this mean that basically nobody is currently using OpenBSD in the datacentre or anywhere that might be expecting to handle 10G or higher per port, or is it just filtering that's an issue?<p>This looks like it only affects bandwidth limiting. I suspect it's pretty niche to use OpenBSD as a traffic shaper at 10G+, and if you did, I'd imagine most of the queue limits would tend toward significantly less than 4G.
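For context, a minimal pf.conf(5)-style HFSC queue setup of the kind affected; the interface name and rates here are illustrative, not from the article. The sub-4G child queues were always fine — it's the 10G figures on the root that previously wrapped in the kernel:

```
# illustrative HFSC queue tree on a hypothetical 10G uplink (ix0)
queue rootq on ix0 bandwidth 10G max 10G      # >4.29G: formerly wrapped
queue voip parent rootq bandwidth 200M min 100M
queue bulk parent rootq bandwidth 2G max 3G default
```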
One thing could also be that by the time you have 10GE uplinks, shaping is not as important.<p>When we had 512kbit links, prioritizing VOIP would be a thing, and for asymmetric links like 128/512kbit it was prudent to prioritize small packets (ssh) and tcp ACKs on the outgoing link or the downloads would suffer, but when you have 5-10-25GE, not being able to stick an ACK packet in the queue is perhaps not the main issue.
At 10G and up, shaping still matters. Once you mix backups, CCTV, voice, and customer circuits on the same uplink, a brief saturation event can dump enough queueing delay into the path that the link looks fine on paper while the stuff people actually notice starts glitching, and latency budgets are tight. Fat pipes don't remove the need for control; they just make the mistakes more expensive.
OpenBSD was a great OS back in the late 90s and even early 2000s. In some cases it was competing neck and neck with Linux.
Since then, well, Linux grew a lot and OpenBSD not so much. There are multiple causes for this; I'll go through only a few: Linux has more support from the big companies; there's a huge difference in userbase numbers; and Linux is more welcoming to new users. And the gap is only growing.
Isn't OpenBSD mainly used for security testing, or do I have it wrong? I'd be surprised if it was used in production datacentre networking hardware at all. It seems like most people would use one of the proprietary implementations (which would likely include drivers written specifically for that hardware) or something like FreeBSD.
It's widely used as a router, that's one of its primary uses. But not sure to what scale, likely at small orgs not at major ISPs.<p>But, OpenBSD is a project by and for its developers. They use it and develop it to do what they want; they don't really care what anyone else does or doesn't do with it.
You don't need 4 Gbps pf queues or even fiber on every single machine in a datacenter. So be surprised: it is used widely for its simplicity and reliability, not to mention its security compared to those proprietary implementations you speak of, may they rot in hell.