So you have to ship new code to every 'network element' to support IPv4x. Just like with IPv6.
So you have to update DNS to create new resource record types ("A" is hard-coded to 32 bits) to support the new longer addresses, and have all user-land code start asking for, using, and understanding the new record replies. Just like with IPv6. (And their DNS idea won't work any differently than IPv6: a lot of legacy code did not have room in its data structures for multiple reply types. Sure, you'd get the "A", but unless you updated the code to ask for the "AX" record (for IPv4x addresses) you could never get the longer address… just like IPv6 needed code updates to recognize AAAA, otherwise you were A-only.)
You need to update socket APIs to hold new data structures for longer addresses so your app can tell the kernel to send packets to the new addresses. Just like with IPv6.
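The userland change is visible even in the standard socket API; a small sketch, with IPv6/AAAA standing in for the hypothetical IPv4x/"AX" path, since both migrations touch the same code:

```python
import socket

# Legacy code that hard-codes AF_INET only ever sees 32-bit addresses;
# it has to be updated to ask for (and have room to store) the larger
# family. AF_INET6/AAAA stands in here for the hypothetical IPv4x/"AX".
def resolve(host, family=socket.AF_UNSPEC):
    infos = socket.getaddrinfo(host, 80, family, socket.SOCK_STREAM)
    return [sockaddr[0] for _, _, _, _, sockaddr in infos]

legacy = resolve("localhost", socket.AF_INET)   # 32-bit answers only
updated = resolve("localhost")                  # every family the host knows
```

The un-updated path never even sees the longer addresses, which is exactly the point being made above.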
A single residential connection that gets a single IPv4 address also gets to use all the /96 'behind it' with this IPv4x proposal? People complain about the "wastefulness" of /64s now, and this is even more so (to the tune of 32 bits). You'd probably be better served with pushing the new bits to the other end… like…
* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
IPv6 adoption has been linear for the last two decades. Currently, 48% of Google traffic is IPv6.[1] It was 30% in 2020. That's low, because Google is blocked in China. Google sees China as 6% IPv6, but China is really around 77%.
Sometimes it takes a long time to convert infrastructure. Half the Northeast Corridor track is still on 25Hz. There's still some 40Hz power around Niagara Falls. San Francisco got rid of the last PG&E DC service a few years ago. It took from 1948 to 1994 to convert all US freight rail stock to roller bearings.[2] European freight rail is still using couplers obsolete and illegal in the US since 1900. (There's an effort underway to fix this. Hopefully it will go better than Eurocoupler from the 1980s. Passenger rail uses completely different couplers, and doesn't uncouple much.)[3]
[1] https://www.google.com/intl/en/ipv6/statistics.html
[2] https://www.youtube.com/watch?v=R-1EZ6K7bpQ
[3] https://rail-research.europa.eu/european-dac-delivery-progra...
It just so happens that, unlike for v6, v4 and v4x have some "implicit bridges" built-in (i.e. between everything in v4 and everything in v4x that happens to have the last 96 bits unset). Not sure if that actually makes anything better or just kicks the can down the road in an even more messy way.
That's pretty much identical to 6in4 and similar proposals.
The Internet really needs a variant of the "So, you have an anti-spam proposal" meme that used to be popular. Yes, it sometimes kills fresh ideas in the bud, but it also helps establish a cultural baseline for what is constructive discussion.
Nobody needs to hear about the same old ideas that were subsumed by IPv6 because they required a flag day, delayed address exhaustion only about six months, or exploded routing tables to impossible sizes.
If you have new ideas, let's hear them, but the discussion around v6 has been on constant repeat since before it was finalized and that's not useful to anyone.
—Sent from my IPv6 phone
For those unfamiliar:
Yes, but the compatibility is very easy to support for hardware vendors, software authors, sysadmins, etc. Some things might need a gentle touch (mostly just enlarging a single bitfield), but after that everything just works: hardware, software, websites, operators.
A protocol is a social problem, and ipv6 fails exactly there.
Even still. The rollout is still progressing, and new systems like Matter are IPv6 only.
For me just disabling IPv6 has given the biggest payoff. Life is too short to waste time debugging obscure IPv6 problems that still routinely pop up after over 30 years of development.
Ever since OpenVPN silently routed IPv6 over clearnet I've just disabled it whenever I can.
I think the biggest barrier to IPv6 adoption is that this is just categorically untrue and people keep insisting that it isn't, reducing the chance that I'd make conscious efforts to try to grok it.
I've had dozens of weird network issues in the last few years that have all been solved by simply turning off IPv6. From hosts taking 20 seconds to respond, to things not connecting 40% of the time, DHCP leases not working, devices not able to find the printer on the network, everything simply works better on IPv4, and I don't think it's just me. I don't think these sort of issues should be happening for a protocol that has had 30 years to mature. At a certain point we have to look and wonder if the design itself is just too complicated and contributes to its own failure to thrive, instead of blaming lazy humans.
Now I'm sure I can fix DNSmasq to do something sensible here, but the defaults didn't even break - they worked in the most annoying way possible where had I just disabled IPv6 that would've fixed the entire problem right away.
Dual stack has some incredibly stupid defaults.
If an ISP uses an MPLS core, every POP establishes a tunnel to every other POP. IP routing happens only at the source POP as it chooses which pre-established tunnel to use.
If an ISP is very new, it likely has an IPv6-only core, and IPv4 packets are tunneled through it. If an ISP is very old, with an IPv4-only core, it can do the reverse and tunnel IPv6 packets through IPv4. It can even use private addresses for the intermediate nodes as they won't be seen outside the network.
So let’s say your internet provider owns x.x.x.x. It receives a packet directed to you at x.x.x.x.y.y…, forwards it to your network, but your local router has old software and treats all packets to x.x.x.x.* as directed to it. You never receive any messages directed to you, even though your computer would recognise IPv4x.
It would be a clusterfuck.
Your home router that sits on the end of a single IPv4 address would need to know about IPv4x, but in this parallel world you'd buy a router that does.
Your computer knows it’s connected to an old router because DHCP gave it an x.x.x.x address and not x.x.x.x..., so it knows it’s running in old v4 mode.
And it can still send outbound to a v4x address that it knows about.
Software updates scale _very well_ - once author updates, all users get the latest version. The important part is sysadmin time and config files - _those_ don't scale at all, and someone needs to invest effort in every single system out there.
That's where IPv6 really dropped the ball by having dual-stack the default. In IPv4x, there is no dual-stack.
I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
I have my small office network which is on a mix of IPv4 and IPv4x addresses. Most Windows/Linux machines are on IPv4x, but that old network printer and security controller still have IPv4 addresses (with the router translating responses). It still all works together. There is only one firewall rule set, there is only one monitoring tool, etc... My ACL list on the NAS server has a mix of IPv4 and IPv4x in the same list...
So this is a very stark contrast to the IPv6 mess, where you have to bring up a whole parallel network, set up a second router config, set up a separate firewall rule set, make a second parallel set of addresses, basically set up a whole separate network, just to be able to bring up a single IPv6 device.
(Funny enough, I bet one _could_ accelerate IPv6 deployment a lot by having a standard that _requires_ 6to4/4to6/NAT64 technology in each IPv6 network... but instead the IPv6 supporters went for an all-or-nothing approach)
ahem
With IPv6 the router needs to send out RAs. That's it. There's no need to do anything else with IPv6. "Automatic configuration of hosts and routers" was a requirement for IPng:
* https://datatracker.ietf.org/doc/html/rfc1726#section-5.8
When I was with my last ISP I turned on IPv6 on my Asus router; it got an IPv6 WAN connection and a prefix delegation from my ISP, and my devices (including my Brother printer) started getting IPv6 addresses. The Asus had a default-deny firewall, so all incoming IPv6 connections were blocked. I had to do zero configuration on any of the devices (laptops, phones, IoT, etc).
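For a sense of how little the router side involves, here is a minimal radvd configuration sketch (the interface name and prefix are placeholders, not details from the comment above):

```conf
interface eth0 {
    AdvSendAdvert on;              # periodically send Router Advertisements
    prefix 2001:db8:1234::/64 {    # prefix delegated by the ISP
        AdvOnLink on;
        AdvAutonomous on;          # hosts may SLAAC addresses from it
    };
};
```

Hosts that hear the RA assign themselves addresses; no per-device setup is needed.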
> I upgrade my OS, and suddenly I can use IPv4x addresses... but I don't have to - all my configs are still valid, and if my router is not compatible, all devices still fall back to IPv4-compatible short addresses, but are using IPv4x stack.
So if you cannot connect via >32b addresses you fall back to 32b addresses?
* https://en.wikipedia.org/wiki/Happy_Eyeballs
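The fallback the linked article describes can be sketched like this (a simplified model only: real RFC 8305 implementations sort and interleave addresses and stagger attempts with much more care):

```python
import socket
from concurrent.futures import ThreadPoolExecutor, as_completed

# Simplified Happy Eyeballs: give IPv6 a short head start, then race
# IPv4; the first successful connection wins.
def connect_happy(host, port, head_start=0.25):
    def attempt(family):
        af, kind, proto, _, sockaddr = socket.getaddrinfo(
            host, port, family, socket.SOCK_STREAM)[0]
        s = socket.socket(af, kind, proto)
        s.settimeout(5)
        s.connect(sockaddr)
        return s

    with ThreadPoolExecutor(max_workers=2) as pool:
        v6 = pool.submit(attempt, socket.AF_INET6)
        try:
            return v6.result(timeout=head_start)  # IPv6 won within head start
        except Exception:
            pass
        v4 = pool.submit(attempt, socket.AF_INET)
        for fut in as_completed([v6, v4]):
            try:
                return fut.result()
            except Exception:
                continue
    raise OSError("all connection attempts failed")
```

If the >32b (here: IPv6) path is broken, the caller transparently lands on the 32b one.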
> I upgrade the home router and suddenly some devices get IPv4x address... but it is all transparent to me - my router's NAT takes care of that if my upstream (ISP) or a client device are not IPv4x-capable.
* https://en.wikipedia.org/wiki/IPv6_rapid_deployment
A French ISP deployed this across their network of four million subscribers in five months (2007-11 to 2008-03).
> There is only one firewall rule set, there is only one monitoring tool, etc... My ACL list on NAS server has mix of IPv4 and IPv4x in the same list...
If an (e.g.) public web server has public address (say) 2.3.4.5 to support legacy IPv4-only devices, but also has 2.3.4.5.6.7.8.9 to support IPv4x devices, how can you have only one firewall rule set?
> So this is a very stark contrast to IPv6 mess, where you have to bring up a whole parallel network, setup a second router config, set up a separate firewall set, make a second parallel set of addresses, basically setup a whole separate network - just to be able to bring up a single IPv6 device.
Having 10.11.12.13 on your PC as well as 10.11.12.13.14.15.16 as per IPv4x is "a second parallel set of addresses".
It is running a whole separate network because your system has the address 10.11.12.13 and 10.11.12.13.14.15.16. You are running dual-stack because you support connection from 32-bit-only un-updated, legacy devices and >32b updated devices. This is no different than having 10.11.12.13 and 2001:db8:dead:beef::10:11:12:13.
Contrast this with IPv6, which is a completely new system and thus has a chicken-and-egg problem.
Today it seems most ISPs support it but have it behind an off by default toggle.
The reason there's an IPv4 address shortage is because ISPs assign every user a unique IPv4 address. In this alternative timeline, ISPs would have to give users less-than-an-IPv4 address, which probably means a single IPv4x address if we're being realistic and assuming that ISPs are taking the path of least resistance.
Something that just uses IPv4 won’t work without making the extra layer visible. That may not have been apparent then but it is now.
So the folks that just happen to get in early on the IPv4 address land rush (US, Western world) now also get to grab all this new address space?
What about any new players? This particular aspect of the idea seems to reward incumbents. Unlike IPv6, where new players (and countries and continents) that weren't online early get a chance at equal footing in the expanded address space.
From where?
All then-existing IPv4 addresses would get all the bits behind them. There would, at the time, still be IPv4 addresses available that could be given out, and as people got them they would also get the extended "IPv4x" addresses associated with them.
But at some point IPv4 addresses would all be allocated… along with all the extended addresses 'behind' them.
Then what?
The extended IPv4x addresses are attached to the legacy IPv4 addresses they are 'prefixed' by, so once the legacy bits are assigned, so are the new bits. If someone comes along post-legacy-IPv4 exhaustion, where do new addresses come from?
You're in the exact same situation as we are now: legacy code is stuck with 32-bit-only addresses, new code is >32 bits… just like with IPv6. Great, you managed to purchase/rent a legacy address range… but you still need a translation box for non-updated code… like with CG-NAT and IPv6.
This IPv4x thing is bullshit but we should be accurate about how it would play out.
But right now you can get an IPv4 /24 (as you say), but you can get an IPv6 allocation 'for free' as we speak.
In both cases legacy code cannot use the new address space; you have to:
* update the IP stack (like with IPv6)
* tell applications about new DNS records (like IPv6)
* set up translation layers for legacy-only code to reach extended-only destination (like IPv6 with DNS64/NAT64, CLAT, etc)
You're updating the exact same code paths in both the IPv4x and IPv6 scenarios: dual-stack, DNS, socket address structures, dealing with legacy-only code that is never touched to deal with the larger address space.
And at the same time the address format and IP header is extended, effectively still splitting one network into two (one of which is a superset of the other)?
A fundamentally breaking change remains a breaking change, whether you have the guts to bump your version number or not.
That's cool and all, but end-user edge routers are absolutely going to have to be updated to handle "IPv4x". Why? Because the entire point of IPvNext is to address address space exhaustion, their ISP will stop giving them IPv4 addresses.
This means that the ISP is also going to have to update significant parts of their systems to handle "IPv4x" packets, because they're going to have to handle customer site address management. The only thing that doesn't have to change is the easiest part of the system to get changed... the core routers and associated infrastructure.
Is that really the easy bit to change? ISPs spend years trialling new hardware and software in their core. You go through numerous cheapo home routers over the lifetime of one of their chassis. You'll use whatever non-name box they send you, and you'll accept their regular OTA updates too, else you're on your own.
When you're adding support for a new Internet address protocol that's widely agreed to be the new one, it absolutely is. Compared to what end-users get, ISPs buy very high quality gear. The rate of gear change may be lower than at end-user sites but because they're paying far, far more for the equipment, it's very likely to have support for the new addressing protocol.
Consumer gear is often cheap-as-possible garbage that has had as little effort put into it as possible. [0] I know that long after 2012, you could find consumer-grade networking equipment that did not support (or actively broke) IPv6. [1] And how often do we hear complaints of "my ISP-provided router is just unreliable trash, I hate it", or stories of people saving lots of money by refusing to rent their edge router from their ISP? The equipment ISPs give you can also be bottom-of-the-barrel crap that folks actively avoid using. [2]
So, yeah, the stuff at the very edge is often bottom-of-the-barrel trash and is often infrequently updated. That's why it's harder to update the equipment at edge than the equipment in the core. It is way more expensive to update the core stuff, but it's always getting updated, and you're paying enough to get much better quality than the stuff at the edge.
[0] OpenWRT is so, so popular for a reason, after all.
[1] This was true even for "prosumer" gear. I know that even in the mid 2010s, Ubiquiti's UniFi APs broke IPv6 for attached clients if you were using VLANs. So, yeah, not even SOHO gear is expensive enough to ensure that this stuff gets done right.
[2] You do have something of a point in the implied claim that ISPs will update their customer rental hardware with IPv6 support once they start providing IPv6 to their customer. But. Way back when I was so foolish as to rent my cable modem, I learned that I'd been getting a small fraction of the speed available to me for years because my cable modem was significantly out of date. It required a lucky realization during a support call to get that update done. So, equipment upgrades sometimes totally fall through the cracks even with major ISPs.
Sure. On the other other hand, companies going "Is this a security problem that's going to cost us lots of money if we don't fix it? No? Why the fuck should I spend money fixing it for free, then? It can be a headline feature in the new model." means that -in practice- they aren't so easily updated.
If everyone in the consumer space made OpenWRT-compatible routers, switches, and APs, then that problem would be solved. But -for some reason- they do not and we still get shit like [0].
No. The router in your home would need to support IPv4x, or you would get no Internet connection. Why? Because IPv4x extends the address space "under" each IPv4 address -thus- competing with it for space. ISPs in areas with serious address pressure sure as fuck aren't going to be giving you IPv4 addresses anymore.
As I mentioned, similarly, ISPs will need to update their systems to handle IPv4x, because they are -at minimum- going to be doing IPv4x address management for their customers. They're probably going to -themselves- be working from IPv4x allocations. Maybe each ISP gets knocked down from several v4 /16s or maybe a couple of /20s to a handful of v4 /32s to carve up for v4x customer sites.
Your scheme has the adoption problems of IPv6, but even worse because it relies on reclaiming and repurposing IPv4 address space that's currently in use.
I personally feel that IPv6 is one of the clearest cases of second-system syndrome. What we needed was more address bits. What we got was a nearly total redesign-by-committee with many elegant features but difficult backwards compatibility.
IPv6 gets a lot of hate for all the bells and whistles, but on closer examination, the only one that really matters is always “it’s a second network and needs me to touch all my hosts and networking stack”.
Don’t like SLAAC? Don’t use it! Want to keep using DHCP instead? Use DHCPv6! Love manual address configuration? Go right ahead! It even makes the addresses much shorter. None of that stuff is essential to IPv6.
In fact, in my view TFA makes a very poor case for a counterfactual IPv4+ world. The only thing it really simplifies is address space assignment.
It doesn't work like this. SLAAC is a standard compliant way of distributing addresses, so you MUST support it unless you're running a very specific isolated setup.
Most people using Android will come to your home and ask "do you have WiFi here?"
The Android implementation of IPv6 completely boggles my mind. They have completely refused to implement DHCPv6 since 2012:
* https://issuetracker.google.com/issues/36949085
But months after client-side DHCP-PD was made an RFC they're implementing that?
* https://android-developers.googleblog.com/2025/09/simplifyin...
In what universe does implementing DHCP-PD but not 'regular' DHCPv6 make any kind of sense?
Simplifying address space assignment is a huge deal. IPv4+ allows the leaves of the network to adopt IPv4+ when it makes sense for them. They don't lose any investment in IPv4 address space, they don't have to upgrade to all IPv6 supporting hardware, there's no parallel configuration. You just support IPv4 on the terminals that want or need it, and on the network hardware when you upgrade. It's basically better NAT that eventually disappears and just becomes "routing".
What investment? IP addresses used to be free until we started running out, and I don't think anything of value would be lost for humanity as a whole if they became non-scarce again.
> they don't have to upgrade to all IPv6 supporting hardware
But they do, unless you're fine with maintaining an implicitly hierarchical network (or really two) forever.
> It's basically better NAT
How is it better? It also still requires NAT for every 4x host trying to reach a 4 only one, so it's exactly NAT.
> that eventually disappears
Driven by what mechanism?
> What investment? IP addresses used to be free
Well they're not now, so it's an investment. Any entity that has IP addresses doesn't want its competition to get IP addresses, even when this leads to bad outcomes overall.
The removal of ARP and removal of broadcast, the enforcement of multicast
The almost-required removal of NAT and the quasi-religious dislike from many network people. Instead of simply src-NATting your traffic behind ISP1 or ISP2, you are supposed to have multiple public IPs and somehow make your end devices choose the best route rather than your router.
All of these were choices made in addition to simply expanding the address scope.
Only use the real one then (unless you happen to be implementing ND or something)!
> The removal of arp and removal of broadcast, the enforcement of multicast
ARP was effectively only replaced by ND, no? Maybe there are many disadvantages I'm not familiar with, but is there a fundamental problem with it?
> The almost-required removal of NAT
Don't like that part? Don't use it, and do use NAT66. It works great, I use it sometimes!
(i.e. anything other than the decision to make a breaking change to address formats and accordingly require adapters)
I (and I expect the fellow you're replying to) believe that if you're going to have to rework ARP to support 128-bit addresses, you might as well come up with a new protocol that fixes things you think are bad about ARP. And if the fellow you're replying to doesn't know that broadcast is another name for "all-hosts multicast", then he needs to read a bit more.
[0] Several purity-minded fools wanted to pretend that IPv6 NAT wasn't going to exist. That doesn't mean that IPv6 doesn't support NAT... NAT is and has always been a function of the packet mangling done by a router that sits between you and your conversation partner.
There was only 1 mistake, but it was huge and all backwards compatibility problems come from it. The IPv4 32-bit address space should have been included in the IPv6 address space, instead of having 2 separate address spaces.
IPv6 added very few features, but it mostly removed or simplified the IPv4 features that were useless.
Like
> Addresses in this group consist of an 80-bit prefix of zeros, the next 16 bits are ones, and the remaining, least-significant 32 bits contain the IPv4 address. For example, ::ffff:192.0.2.128 represents the IPv4 address 192.0.2.128. A previous format, called "IPv4-compatible IPv6 address", was ::192.0.2.128; however, this method is deprecated.[5]
* https://en.wikipedia.org/wiki/IPv6#IPv4-mapped_IPv6_addresse...
?
The entire IPv4 address space is included in the IPv6 address space, in fact it's included multiple times depending on what you want to do with it. There's one copy for representing IPv4 addresses in a dual-stack implementation, another copy for NAT64, a different copy for a different tunneling mechanism, etc.
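Python's stdlib `ipaddress` module can demonstrate two of those embeddings (the NAT64 example uses the well-known prefix 64:ff9b::/96 from RFC 6052):

```python
import ipaddress

# IPv4-mapped: ::ffff:0:0/96 carries every IPv4 address in the low 32 bits.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.128")
v4 = mapped.ipv4_mapped          # -> IPv4Address('192.0.2.128')

# NAT64 well-known prefix: the same 32 bits embedded under 64:ff9b::/96.
nat64 = ipaddress.IPv6Address(
    int(ipaddress.IPv6Address("64:ff9b::")) | int(v4))
```

Same 32 bits, two different "copies" of the IPv4 space inside IPv6, each with its own purpose.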
IPv6 added IPSEC which was backported to IPv4.
IPv6 tried to add easy renumbering, which didn’t work and had to be discarded.
IPv6 added scoped addresses which are halfbaked and limited. Site-scoped addresses never worked and were discarded; link-scoped addresses are mostly used for autoconfiguration.
IPv6 added new autoconfiguration protocols instead of reusing bootp/DHCP.
That's ... exactly how IPv6 works?
Look at the default prefix table at https://en.wikipedia.org/wiki/IPv6_address#Default_address_s... .
Or did you mean something else? You still need a dual stack configuration though, there's nothing getting around that when you change the address space. Hence "happy eyeballs" and all that.
Yes there is, at least outside of the machine. All you need to do is have an internal network (100.64/16, 169.254/16, wherever) local to the machine. If your machine is on, say, 2001::1, then when an application attempts to listen on an IPv4 address it opens a socket listening on 2001::1 instead, and when an application writes a packet to 1.0.0.1, your OS translates it to ::ffff:100:1. This can be even more hidden than things like internal Docker networks.
Your network then has a route to ::ffff:0:0/96 via a gateway (typically just the default router), with a source of 2001::1
When the packet arrives at a router with v6 and v4 on (assume your v4 address is 2.2.2.2), that does a 6:4 translation, just like a router does v4:v4 nat
The packet then runs over the v4 network until it reaches 1.0.0.1 with a source of 2.2.2.2, and a response is sent back to 2.2.2.2, where it is de-NATted to a destination of 2001::1 and a source of ::ffff:100:1
That way you don't need to change any application unless you want to reach ipv6-only devices, you don't need to run separate ipv4 and ipv6 stacks on your routers, and you can migrate easily, with no more overhead than a typical 4-to-4 NAT for RFC 1918 devices.
Likewise you can serve on your ipv6 only devices by listening on 2001::1 port 80, and having a nat which port forwards traffic coming to 2.2.2.2:80 to 2001::1 port 80 with a source of ::ffff:(whatever)
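The host-side rewrite described in this comment is a trivial stateless function over the standard ::ffff:0:0/96 mapped range; a sketch:

```python
import ipaddress

# Stateless host-side rewrite: an IPv4 destination like 1.0.0.1 becomes
# the mapped address ::ffff:100:1 before the packet leaves the v6-only
# host, and replies are mapped back the same way.
def v4_to_mapped(addr: str) -> str:
    bits = (0xFFFF << 32) | int(ipaddress.IPv4Address(addr))
    return str(ipaddress.IPv6Address(bits))

def mapped_to_v4(addr: str) -> str:
    v4 = ipaddress.IPv6Address(addr).ipv4_mapped
    if v4 is None:
        raise ValueError("not an IPv4-mapped address")
    return str(v4)
```

No per-connection state is needed on the host; only the 6-to-4 border router keeps a NAT table.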
(using colons as a delimiter wasn't great either; you end up with http://[2001::1]:80/ which is horrible)
That is horrible, but you do no longer have any possibility of confusion between an IP address and a hostname/domain-name/whatever-it's-called. So, yeah, benefits and detriments.
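The brackets do at least make the literal unambiguous to parsers; Python's stdlib, for example, extracts host and port from the bracketed form directly:

```python
from urllib.parse import urlsplit

# RFC 3986 requires brackets around IPv6 literals precisely so the
# address's own colons can't be confused with the port separator.
u = urlsplit("http://[2001::1]:80/")
host, port = u.hostname, u.port   # "2001::1", 80
```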
> Your network then has a route to ::ffff:0:0/96 via a gateway...
I keep forgetting about IPv4-mapped addresses. Thanks for reminding me of them with this writeup. I should really get around to playing with them some day soon.
I see many ISPs deploying IPv6 but still following the same design principles they used for IPv4. In reality, IPv6 should be treated as a new protocol with different capabilities and assumptions.
For example, dynamic IP addresses are common with IPv4, but with IPv6 every user should ideally receive a stable /64 prefix, with the ability to request additional prefixes through prefix delegation (PD) if needed.
Another example is bring-your-own IP space. This is practically impossible for normal users with IPv4, but IPv6 makes it much more feasible. However, almost no ISPs offer this. It would be great if ISPs allowed technically inclined users to announce their own address space and move it with them when switching providers.
Note though that I'm not proposing IPv4x as something we should work towards now. Indeed, I come down on the side of being happy that we're in the IPv6 world instead of this alternative history.
Huh:
> For any 32-bit global IPv4 address that is assigned to a host, a 48-bit 6to4 IPv6 prefix can be constructed for use by that host (and if applicable the network behind it) by appending the IPv4 address to 2002::/16.
> For example, the global IPv4 address 192.0.2.4 has the corresponding 6to4 prefix 2002:c000:0204::/48. This gives a prefix length of 48 bits, which leaves room for a 16-bit subnet field and 64 bit host addresses within the subnets.
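That construction is easy to verify with the stdlib `ipaddress` module (note that Python prints the 0204 group without its leading zero):

```python
import ipaddress

# 6to4: append the 32 bits of the IPv4 address to 2002::/16,
# yielding a /48 prefix for that host's site.
def sixto4_prefix(v4: str) -> ipaddress.IPv6Network:
    bits = (0x2002 << 112) | (int(ipaddress.IPv4Address(v4)) << 80)
    return ipaddress.IPv6Network((bits, 48))

prefix = sixto4_prefix("192.0.2.4")   # 2002:c000:204::/48
```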
Which has been discussed previously: https://hn.algolia.com/?q=The+IPv6+mess
The most likely alternative would have been 64-bit. That's big enough that could have worked for a long time.
Are you sure about that? Until a few years ago my residential ISP was IPv4 only. I definitely couldn't connect to an IPv6-only service back then.
"ping 1.1.1.1"
it doesn't work.
If stacks had moved to ipv6 only, and the OS and network library do the translation of existing ipv4, I think things would have moved faster. Every few months I try out my ipv6 only network and inevitably something fails and I'm back to my ipv4 only network (as I don't see the benefit of dual-stack, just the headaches)
Sure you'd need a 64 gateway, but then that can be the same device that does your current 44 natting.
The main limitation is software that only supports IPv4. This would affect your proposed solution of doing the translation in the stack. There is no way to fix IPv4-only software that has a 32-bit address field.
Issue for CLAT in systemd-networkd: https://github.com/systemd/systemd/issues/23674
A major website sees over 46 percent of its traffic over ipv6. A major mobile operator has a network that runs entirely over ipv6.
This is not “waiting for adoption” so I stopped reading there.
https://www.google.com/intl/en/ipv6/statistics.html
https://www.internetsociety.org/deploy360/2014/case-study-t-...
To be less glib: IPv6 is well-adopted. It's not universally adopted.
- NAT gateways are inherently stateful (per connection) and IP networks are stateless (per host, disregarding routing information). So even if you only look at the individual connection level, disregarding the host/connection layering violation, the analogy breaks.
- NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
Until you have a stateful firewall, which any modern end network is going to have
> NAT gateways don't actually route/translate by (IP, port) as you imply, but rather by (source IP, source port, destination IP, destination port), as otherwise there simply would not be enough ports in many cases.
If 192.168.0.1 and 0.2 both hide behind 2.2.2.2 and talk to 1.1.1.1:80 then they'll get private source IPs and source ports hidden behind different public source ports.
Unless your application requires the source port to not be changed, or indeed embeds the IP address in higher layers (active mode ftp, sip, generally things that have terrible security implications), it's not really a problem until you get to 50k concurrent connections per public ipv4 address.
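A toy model of that translation table (a hypothetical class, just to make the 4-tuple keying concrete):

```python
import itertools

# Toy NAPT table keyed by the full 4-tuple: two private hosts talking
# to the same destination get distinct public source ports, so replies
# can be mapped back unambiguously.
class Nat:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.ports = itertools.count(49152)  # ephemeral port range
        self.out = {}   # (src_ip, src_port, dst_ip, dst_port) -> public port
        self.back = {}  # (public port, dst_ip, dst_port) -> (src_ip, src_port)

    def translate_out(self, src_ip, src_port, dst_ip, dst_port):
        key = (src_ip, src_port, dst_ip, dst_port)
        if key not in self.out:
            p = next(self.ports)
            self.out[key] = p
            self.back[(p, dst_ip, dst_port)] = (src_ip, src_port)
        return (self.public_ip, self.out[key])

    def translate_in(self, pub_port, remote_ip, remote_port):
        return self.back[(pub_port, remote_ip, remote_port)]
```

This is also where the ~50k-concurrent-connections-per-public-address ceiling comes from: each mapping burns one public port.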
In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
Yes, but it's importantly still a choice. Also, a firewall I administer, I can control. One imposed onto me by my ISP I can’t.
> not really a problem until you get to 50k concurrent connections per public ipv4 address.
So it is in fact a big problem for CG-NATs.
> In practice NAT isn't a problem. Most people complaining about NAT are actually complaining about stateful firewalls.
No, I know what I'm complaining about. Stateful firewall traversal via hole punching is trivial on v6 without port translation, but completely implementation dependent on v4 with NAT in the mix, to just name one thing. (Explicit "TCP hole punching" would also be trivial to specify; it's a real shame we haven't already, since it would take about a decade or two for mediocre CPE firewalls to get the memo anyway.)
Having global addressing is also just useful in and of itself, even without global reachability.
There was never 64-78 bits in the IPv4 header unconstrained enough to extend IPv4 in place even if you accepted the CGNAT-like compromise of routing through IPv4 "super-routers" on the way to 128-bit addresses. Extending address size was always going to need a version change.
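The fixed layout is easy to see by unpacking a header: source and destination are exactly 32-bit slots with nothing adjacent to borrow (a sketch using Python's `struct`):

```python
import struct

# The IPv4 header's source and destination fields are fixed 32-bit
# slots; there is no spare room to widen them in place, which is why
# a larger address needed a new header format (and version number).
def parse_ipv4_header(data: bytes):
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }
```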
I've rarely seen it used in practice, but it's in theory doable.
Does the "advice" boil down to "You should NEVER use ULAs and ALWAYS use GUAs!" and is given by the same very, very loud subset of people who seemed to feel very strongly that IPv6 implementations should explicitly make it impossible to do NAT?
It's conceivable that OSes could support some sort of traffic steering mechanism where the network distributes policy in some sort of dynamic way? But that also sounds fragile and you (i.e. the network operator) still have to cope with the long tail of devices that will never support such a mechanism.
The router in a coffee shop gives you a ULA and NATs everything to a single globally routable public IPv6 address.
To answer your question: Who knows? Perhaps you have a shitlord ISP that only provides you with a /128 (such as that one "cloud provider" whose name escapes me). [0] It's a nice tool to have in your toolbox, should you find that you need to use it.
[0] Yes, I'm aware that a "cloud provider" is not strictly an ISP. They are providing your VMs with access to the Internet, so I think the definition fits with only a little stretching-induced damage.
That doesn't help when your shitbag ISP has given you exactly one /128.
It seemed strange that the need for CGNAT wasn't mentioned until after the MIT story. The "Nothing broke" claim in that story seems unlikely; I was on a public IP at University at the end of the 90s and if I'd suddenly been put behind NAT, some things I did would have broken until the workarounds were worked out.
What's the difference between that and dual stack v4/v6, though? Other than not needing v6 address range assignments, of course.
To replace something, you embrace it and extend it so the old version can be effectively phased out.
Who's arguing for that? That would be completely non-viable even today, and even with NAT64 it would be annoying.
> Dual-stack fails miserably when the newer stack is incompatible with the older one.
Does it? All my clients and servers are dual stack.
> With a stack that extends the old stack, you always have something to fallback to.
Yes, v4/v6 dual stack is indeed great!
> To replace something, you embrace it and extend it so the old version can be effectively phased out.
Some changes unfortunately really are breaking. Sometimes you can do a flag day, sometimes you drag out the migration over years or decades, sometimes you get something in between.
We'll probably be done in a few more decades, hopefully sooner. I don't see how else it could have realistically worked, other than maybe through top-down decree, which might just have wasted more resources than the transition we ended up with.
I don't see IPv4 going away within the next fifty years. I'd not be surprised for it to last for the next hundred+ years. I expect to see more and more residential ISPs provide their customers with globally-routable IPv6 service and put their customers behind IPv4 CGNs (or whatever the reasonable "Give the customer's edge router a not-globally-routable IPv4 address, but serve its traffic with IPv6 infrastructure" mechanism to use is). That IPv4 space will get freed up to use in IPv4-only publicly-facing services in datacenters.
There's IPv4-only software out there, and I expect that it will outlive everyone who's reading this site today. That's fine. What matters is getting proper IPv6 service to every Internet-connected site on (and off) the planet.
Honestly, this backwards compatibility thing seems even worse than IPv6 because it would be so confusing. At least IPv6 is distinctive on the network.
Rather than looking down on IPv4, we should admire how incredible its design was. Its elegance, intuitiveness, and resourcefulness have all led to it outlasting every prediction of its demise.
What is described here is basically just CIDR plus NAT which is...what we actually have.
At the time IPv6 was being defined (I was there) CIDR was just being introduced and NAT hadn't been deployed widely. If someone had raised their hand and said "I think if we force people to use NAT and push down on the route table size with CIDR, I think it'll be ok" (nobody said that iirc), they would have been rejected because sentiment was heavily against creating a two-level network. After all having uniform addressing was pretty much the reason internet protocols took off.