(Repost of <https://news.ycombinator.com/item?id=38570370>)
IPv6 was released in 1998. By the time Tailscale was released in 2019, 21 (!) years had passed and what you're describing still had not been implemented. Who was stopping anyone from doing it then, and who is stopping anyone from doing it now?
It's easy to paint companies as bad actors, especially since they often are, but Google, Cloudflare and Tailscale all became what they are for a reason: they solved a real problem, so people gave them money, or whatever is money-equivalent, like personal data.
Inverted, your argument is a kind of inverse accelerationism (decelerationism?) whereby only by making the Internet worse for everyone can the really good solutions see the light. I don't buy it.
Tailscale is not the reason we're not seeing what you're describing; the immense work involved in creating it is why, and it's only when that immense amount of work becomes slightly less immense that any solution at all emerges. Tailscale, for example, would probably not exist if they had had to invent WireGuard themselves, and the fact that Tailscale now exists has led to Headscale existing, creating yet another springboard in a line of springboards to create "something" like what you describe -- for those willing to put in the time.
The folks who either (a) got in early on the IPv4 address land rush (especially the Western developed countries), or (b) have buckets of money with which to buy addresses.
If you're India, there probably weren't enough IPv4 addresses in the first place to handle your population, so you're doing IPv6:
* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
Or even if you're in the West, if you're poor (a community Native American ISP):
> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.
* https://community.roku.com/t5/Features-settings-updates/It-s...
* Discussion: https://news.ycombinator.com/item?id=35047624
IPv4 'wasn't a problem' because the megacorps who generally run things where I'm guessing you're from (the West) were able to solve it by other means… until they couldn't. T-Mobile US has 120M subscribers, and a few years ago it turned out that money couldn't solve IPv4-only anymore, so they went to IPv6:
* https://www.youtube.com/watch?v=QGbxCKAqNUE
IPv6 is slow to take off not because IPv4 (and NAT/STUN/TURN) is 'better', but rather because of (a) inertia, and (b) the fact that IPv4 'works' (with enough kludges thrown at it).
I always bring this up and it’s always dismissed because tech people continue to dismiss usability concerns.
Even “small” usability differences can have a huge effect on adoption.
Yes, there are ways to configure IPv6 to isolate subnets, separate local traffic from internet traffic, set up firewalls and DMZs, run local DNS, etc., but they're all more complicated to configure and administer than their IPv4 equivalents.
For the love of expletive, this mistaken belief needs to have died yesterday. NAT boxes help primarily because they also contain a firewall. But most of 2024's network security problems originate from the devices behind your firewall getting exploited through their own requests, not some random shit connecting from the outside. (Yes, that does still happen, so you keep your firewall.)
> no distinction between a local IP and a public IP
That is Survivor Bias at its best.
They originate _inside_ because NAT effectively blocks all _external_ requests.
You mean the firewall effectively blocks all external requests.
The reason NAT works for this is because by default there are no Internet-accessible services available via the router. If a request is received by the router that doesn't match an open port, its OS will, by default, reject it, with no firewall required.
NAT is not required for any of the things you’re talking about.
What happens with an incoming packet if there are no firewall rules on the NAT gateway/middlebox? Without a corresponding conntrack entry it will be dropped (and maybe even an ICMP message sent back, depending on the protocol), no?
For example, if there is an incoming TCP packet with a 4-tuple (src ip, src port, dst ip, dst port) ... by necessity "dst ip" is the public IP of the NAT box, and on a pure NAT box there are no bound listening sockets. So whatever "dst port" is ... unless it gets picked up by an established NAT flow ... it will splash against the wall and get a TCP RST.
isn't the argument that "NAT is not required", but that "NAT is implicitly a firewall"?
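To make that concrete, here is a minimal sketch (in Python, purely illustrative and not modelled on any particular router implementation) of the inbound decision an ordinary stateful NAT box makes: only packets matching a mapping created by an earlier outbound connection get translated and forwarded; everything else dies at the router.

    # Toy model of a stateful NAT's inbound decision; names and addresses are made up.
    nat_table = {}   # (public ip, public port, remote ip, remote port) -> (inside ip, inside port)

    def outbound(inside, remote, public=("203.0.113.1", 40000)):
        """An inside host opens a connection; the box allocates a public port for it."""
        nat_table[public + remote] = inside
        return public

    def inbound(local, remote):
        """A packet arrives on the WAN side, addressed to the box's public IP."""
        flow = local + remote
        if flow in nat_table:
            return ("forward to", nat_table[flow])   # matches an established flow
        return ("drop / TCP RST",)                   # no listener, no mapping

    outbound(("192.168.1.10", 51515), ("198.51.100.7", 443))
    print(inbound(("203.0.113.1", 40000), ("198.51.100.7", 443)))  # forwarded inside
    print(inbound(("203.0.113.1", 40000), ("192.0.2.99", 12345)))  # dropped: unknown remote

Note this models an endpoint-dependent mapping (the lookup key includes the remote host and port); a full cone NAT, as discussed below, would key only on the public (IP, port), so any remote host could reach the mapped internal tuple.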
See perhaps stateless NAT:
* https://wiki.nftables.org/wiki-nftables/index.php/Performing...
You get a Full Cone NAT. Once the middlebox maps an (internal IP, port) tuple to an external port, every connection to that external port would lead to that internal tuple.
Why should Host C be able to reach Host A, when Host A is only speaking to Host B?
I am sure you know this but still, I have to stress that NAT is merely a mapper from one tuple to another tuple. If your router can handle NAT it certainly can handle an IPv6 firewall. And modern home/SOHO routers come with the IPv6 firewall enabled by default (for non-home routers, you have a bigger issue if your networking guys are not checking whether the firewall is active), so I find the firewall discussions about as meaningless as someone fearing their DHCP server is not turned on by default. And frankly speaking, it's just an excuse for not implementing IPv6 -- saying that your ISP doesn't provide IPv6 connectivity would have been more convincing.
The point is not that "it can", the point is that on IPv4 "it doesn't work without".
In order for IPv4 to work at all you MUST use NAT, and implicitly a firewall; those two always work together even if the person installing the system doesn't know the word "firewall", which is usually the case.
Hmm. I hadn't seen this brought up and I think it's a stronger argument than most others.
The IPv6 equivalent is services on ULA only, but that's not a default behaviour.
I think you misunderstand my post. My "philosophical inquiry" is about trying to get to the bottom of this, and it seems to me that NAT, as virtually everywhere deployed and found in the unspeakably many SoHo setups, is a stateful NAT, and it's implicitly a (bad) firewall.
So when people say that this "meme" should die ... well, they are right, but not technically right, no?
But there are billions of other devices (IoT etc.) that barely have any security protections in place and rely completely on not being exposed to the outside world.
Yes. And you can not-expose them via a default-deny firewall rule.
My home printer had an IPv6 address in a prefix assigned by my ISP, but it was not accessible from the Internet (it was actually ping6-able because my Asus allowed ICMPv6 by default, but I could not connect to its web interface, like I can internally). Neither could I SSH into my macOS desktop or laptop from the outside (but could between the two internally).
And even if my globally addressable devices were globally reachable (which they were not), good luck scanning a /64.
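For a sense of scale, a rough back-of-the-envelope (the million-probes-per-second scan rate is an assumed, rather generous figure):

    # Time to exhaustively scan one IPv6 /64 at an assumed 1,000,000 probes/second.
    addresses = 2 ** 64
    probes_per_second = 1_000_000
    years = addresses / probes_per_second / (60 * 60 * 24 * 365)
    print(f"about {years:,.0f} years")   # roughly 585,000 years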
Still is.
Riddled with viruses.
This is security theatre. People have been saying that NAT is not a security feature for over a decade:
* https://blog.ipspace.net/2011/12/is-nat-security-feature/
but the message still has not sunk in. The "Zero Trust" paper was published by John Kindervag in 2010:
* https://media.paloaltonetworks.com/documents/Forrester-No-Mo...
Most modern attacks start from a compromised internal host (e.g., from phishing), or through stolen credentials via a remote access method. The above is "castle-and-moat" thinking that tends to have weaker internal controls because it is thought the internal network is "hidden" from the dangerous outside network.
Set your firewall to default deny, then add a rule to allow outgoing connections, followed by one to allow incoming connections only if they are replies. For most machines (and networks), most of the time, this is what's needed: the above is applicable to both IPv6 and IPv4 (with or without NAT).
The protection comes from filtering (generally) and stateful packet inspection, not from hiding addresses.
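As a toy illustration of those three rules (default deny, allow outbound, allow replies), here is a deliberately simplified Python model; a real stateful firewall tracks far more per-flow state than this, but the shape of the decision is the same with or without NAT, and for v4 or v6 alike. Addresses are from the documentation prefix and purely illustrative.

    established = set()                  # flows we have seen leave the network

    def outbound_packet(src, dst):
        established.add((src, dst))      # rule 2: allow (and remember) outgoing
        return "accept"

    def inbound_packet(src, dst):
        if (dst, src) in established:    # rule 3: reply to something we started
            return "accept"
        return "drop"                    # rule 1: default deny

    outbound_packet(("2001:db8:e000::42", 51515), ("2001:db8:ffff::1", 443))
    print(inbound_packet(("2001:db8:ffff::1", 443), ("2001:db8:e000::42", 51515)))  # accept
    print(inbound_packet(("2001:db8:bad::1", 9999), ("2001:db8:e000::42", 22)))     # drop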
> […] and having no distinction between a local IP and a public IP has a lot of disadvantages.
Just because something has a global address does not mean global reachability (see default deny above). Further, you can lay out your IPv6 address plan so that you can tell at a glance whether hosts are externally accessible. Using a /48 as a basis, you break out sixteen /52s, numbered $PREFIX:[0-f]000::/52.
To make it easier to remember what is externally accessible, you put all of those hosts in $PREFIX:e000::/52, where e stands for external. That /52 can then be broken down into:
* sixteen /56s
* 256 /60s
* 4096 /64s
or any combination thereof. See Figure A-5 for various ways to slice and dice:
* https://www.oreilly.com/library/view/ipv6-address-planning/9...
Everything in $PREFIX:[0-d,f]000::/52 is not externally reachable.
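A rough sketch of that carve-up using Python's ipaddress module (2001:db8::/48, the documentation prefix, stands in for $PREFIX here; substitute your own allocation):

    import ipaddress

    site = ipaddress.IPv6Network("2001:db8::/48")            # stand-in for $PREFIX::/48

    blocks = list(site.subnets(new_prefix=52))                # the sixteen /52s
    external = blocks[0xe]                                    # the "e for external" /52
    print(external)                                           # 2001:db8:0:e000::/52

    # That /52 can in turn be carved into 16 /56s, 256 /60s, or 4096 /64s.
    print(sum(1 for _ in external.subnets(new_prefix=64)))    # 4096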
This is a lie. A "session through the NAT" does not really expose the host to the outside world, because in 99% of the cases this is a TCP session, and the NAT machine would drop all "out of order" packets.
>Most modern attacks start from a compromised internal host (e.g., from phishing), or through stolen credentials via a remote access method.
Your statement is a perfect example of https://en.wikipedia.org/wiki/Survivorship_bias.
Most modern attacks start from an internal host exactly because NAT makes external attacks infeasible for the majority of scenarios.
>Set your firewall to default deny, then add a rule for allow outgoing connections, followed by only allow incoming connections if they are replies.
What about I don't do it, and the system is still _automatically_ secure, because NAT does exactly that while being _required_ for the system to work.
>See Figure A-5
LOL. What about I don't see any figures, and the system still works and is secure for the 99% of the cases.
No, it's not. NAT only translates addresses and does not inspect the TCP "internals" (like sequence number etc, which would allow it to block certain packets).
What you are describing is a stateful firewall that allows "reply packets" for an established TCP-session.
Yes it is. How would it forward response packets back if it doesn't track connections?
In real life I haven't seen "stateless NAT" for about 20 years.
But CGNAT machines usually go beyond that and even verify sequence numbers.
Or, you know, because firewalls block stuff.
I've had hosts with public IPv4 addresses attacked on (e.g.) tcp/80 and tcp/443 because that's what the firewall allowed through so the web service was available to the public. I've had hosts with internal IPv4 addresses attacked on web ports because they were behind a (reverse proxy) load balancer for serving traffic: the fact that they had a 10/8 and were behind a NAT did not protect them from attack.
Before recently switching ISPs, my last one had IPv6 (new one does not). They activated IPv6 at some point, and I enabled it on my Asus, and suddenly all my internal devices got an IPv6 address (via RA), including things like my printer.
I had SSH enabled on my macOS laptop and desktop, but could not SSH into them from an outside source. My printer has a web interface on port 80 that I could connect to internally, but not externally. Even though all the devices had IPv6 addresses.
Just because a device is globally addressable does not mean it is globally reachable.
> What about I don't do it, and the system is still _automatically_ secure, because NAT does exactly that while being _required_ for the system to work.
Because NAT is doing what I describe, you are doing it. The firewall is checking state on incoming packets and rejecting those that are not in its state table. The firewall is also coïncidentally just happening to also be fiddling some bits in the address field.
It is the stateful inspection that is protecting you.
> I've had hosts with internal IPv4 addresses attacked on web ports because they were behind a (reverse proxy) load balancer for serving traffic: the fact that they had a 10/8 and were behind a NAT did not protect them from attack.
You explicitly set up a NAT bypass (reverse proxy) and then claim NAT didn't protect them. If I am an external attacker coming in towards a single public IP where the inside hasn't set up UPnP/port forwarding/STUN/a reverse proxy, NAT does exactly what the previous poster said. It drops packets because the 'destination' in the packet is the router itself. The packet has nowhere else to go; it has literally reached its destination.
A stateful firewall is in no way necessary for this functionality to exist. Even stateless UDP packets cannot bypass the NAT: if there is no table entry tracking a conversation initiated from the inside out, the router has zero idea which interior host to forward the packet to, and no reason to do so.
Eh, I think that has hindsight bias. Setting up NAT manually, or customizing how things are NATed beyond the typical "one or two subnets/IP ranges behind a NAT gateway and maybe a DMZ" you see in businesses and residences is quite complicated! It's just that our control planes are really optimized to make that common case very easy. From router web UIs to pf presets to Windows'/NetworkManager's "share network" functionality to what articles/how-tos are available, that complexity is very effectively hidden but not removed.
As IPv6 becomes more entrenched (and more sites move to IPv6-only or public-IPv6-only deployments), the same thing that happened for the IPv4 world will happen for network segmentation configuration in the IPv6 world: it will get a lot easier and common defaults/conventions will emerge. I don't think the inherent complexity differences between IPv4 and IPv6 are that relevant here.
A typical IPv4 address is 8 to 12 digits:
10.30.115.5
A memorable IPv6 address at a /56 site — the prefix and then one or two digits — isn’t much longer: 2001:db8:404:14::42
If you’re with a reasonably clued in ISP you probably get a /48 for your site by default: 2001:db8:404::42
If you’re enumerating your own /64 prefixes then it’s not much more complicated than:
    site 2001:db8:404::
    net  :14::
    host ::42
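A quick sanity check of that composition with Python's ipaddress module (using the documentation prefix from the example above; the shifts just place the "net" in the fourth hextet and the "host" at the bottom):

    import ipaddress

    site    = ipaddress.IPv6Network("2001:db8:404::/48")
    net_id  = 0x14                                        # the /64 within the site
    host_id = 0x42                                        # the host within that /64

    subnet = ipaddress.IPv6Network((int(site.network_address) + (net_id << 64), 64))
    host   = ipaddress.IPv6Address(int(subnet.network_address) + host_id)
    print(subnet, host)    # 2001:db8:404:14::/64 2001:db8:404:14::42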
I'd argue it's just enough to make the difference though.
The problem is that people got used to being able to rely on memorizing IP addresses. IPv6 does its best at making IP addresses both harder to memorize, and completely dynamic, going so far as to change the IP on a fairly regular basis. It's antithetical to some very core qualities that an IP address is supposed to have in the minds of many.
Hell I can’t get tech people I work with to give me their public IP.
If only there was some mechanism in which we could use a human-friendly label and have that translated to a computer-usable address…
> I always bring this up and it’s always dismissed because tech people continue to dismiss usability concerns.
I don't bother remembering IPv4 addresses, so I'm not sure why I would bother to remember IPv6 addresses. Heck, phone numbers are generally short as well, and who remembers them nowadays? ("0118 999 881 999 119 725… 3")
Maybe it's dismissed because people see it as a non-issue. I regularly work at OSI Layer 2 (and even 1, pulling fibres in a DC), and Layer 3, and am not sure what the concerns are about.
In modern devops in particular it is common to create and tear down IP networks in seconds and sling stuff everywhere. The extra moving part is an extra thing to break.
DNS also runs over IP which means if IP is down DNS doesn’t work. What do you have to do then? You have to debug IP without DNS.
There is mDNS but it’s not reliable and doesn’t scale to large networks. It also runs on the IP layer so if there is a problem there it can break.
Certainly it is not-zeroconf, but it is the same not-zeroconf for both IPv4 and IPv6.
But extra work with DHCP is needed for IPv4, and extra-extra work if you need to do things like configure 'IP helper', whereas IPv6 can be configured using only a router (which you need regardless) and some on-link packets (RAs).
> DNS also runs over IP which means if IP is down DNS doesn’t work. What do you have to do then? You have to debug IP without DNS.
And? At least you have fe80::/64 as a basic starting point. Run a tcpdump to see if you're on-link in any way (or in the correct VLAN), and if you are, you can then ping(6) ff02::2 to find out if there are any on-link routers. You've now debugged Layer 2 and Layer 3 connectivity. Tada.
You're making IPv6 (sound) way more complicated than it is. It is no more or less complicated than IPv4 or IPX/SPX or …. It's protocol data units at OSI Layer 2 or 3 in different formats with different fields.
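If it helps to see where those fe80:: addresses come from, here is a rough sketch of the modified-EUI-64 derivation that many stacks use for link-local addresses (others use random or stable-privacy interface IDs instead, so treat this as illustrative; the MAC is made up):

    # Derive a link-local IPv6 address from a MAC via modified EUI-64.
    def mac_to_link_local(mac):
        octets = [int(x, 16) for x in mac.split(":")]
        octets[0] ^= 0x02                                  # flip the universal/local bit
        eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]     # insert ff:fe in the middle
        groups = [f"{(eui64[i] << 8) | eui64[i + 1]:x}" for i in range(0, 8, 2)]
        return "fe80::" + ":".join(groups)

    print(mac_to_link_local("52:54:00:12:34:56"))          # fe80::5054:ff:fe12:3456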
Me, that’s the new number for the emergency services.
Actually, that’s the only phone number I can remember :D
Eight six seven five three oh nine
Eight six seven five three oh nine
I don't understand why they didn't just add two or four more fields to IPv4, e.g. 0.91.127.0.0.1 is localhost, where 0.91 can be omitted in the local context.
PS: I don't understand how networking works. Feels very, very complex and full of jargon.
It's a fact of life when working with networking that we'll have to work with IP addresses at some level. It's easy to tell someone, "hey try typing in 'ping 8.8.8.8' and tell me what you get".
The readability of IPv6 is, in my opinion, worse, with repeated symbols and more characters to remember. The symbols that were chosen were also poorly thought out. Colons are used a lot in networking when you want to connect to a service on a particular port, so if you want to visit 2001:4860:4860::8888 in your browser, you have to enclose the address in square brackets (http://[2001:4860:4860::8888]/).
The wackiest example I've seen of this is the `ipv6-literal.net` notation for Windows UNC paths: https://devblogs.microsoft.com/oldnewthing/20100915-00/?p=12...
Because they thought that 64-bits would not be enough, and did not want to have to go through yet another transition.
The IPng proposal that was chosen, SIPP, was originally 'only' 64-bits:
* https://datatracker.ietf.org/doc/html/rfc1752#section-9
See also §10.2:
* https://datatracker.ietf.org/doc/html/rfc1752#section-10.2
Specifically (§11.1):
* scale - an address size of 128 bits easily meets the need to
address 10**9 networks even in the face of the inherent
inefficiency of address allocation for efficient routing
The solution for IPv6 addresses is the same for Ethernet addresses; don’t use them directly. Leave it to the name resolution system, and use host names.
Well, for a long time, IPv6 didn't work very well. We're past that, mostly. Google reports that 45% of their incoming connections worldwide are IPv6.[1] Growth rate has been close to linear, at 4%/year, since 2015. IPv6 should pass 50% some time in 2025.
Mobile is already 70%-90% IPv6. They need a lot of addresses.
Most of the delay comes from enterprise networks. They have limited connectivity to the outside world, and much of that limiting involves some kind of address translation. So a "corporate IPv6 strategy" is required.
There are other ways to do that.
There are dynamic DNS schemes, so you can give your machine which only has a temporary IP address a permanent name. That's been around for decades, and seems to have a bad reputation.
There are schemes with multiple coordination nodes that know about each other, and published lists of such nodes. The list may be out of date, but as long as the published list has one live node, you can connect and get updated. That's how Kademlia, which underlies Ethereum's network and some file sharing systems, works. That's about 20 years old, and sort of has a sketchy reputation.
It's possible to go only halfway, and separate discovery from transmission. Peertube does that. You find a file to stream via ordinary HTTP to a server you find by ordinary web search means. Anybody can set up such a server. The actual streaming, for files wanted by many clients, is distributed, with people currently watching also sending out blocks to other people watching. This scales well, in case your video goes viral. It's not used much, though.
So it's definitely possible to do this without someone in the middle able to cut off your air supply.
Even if something is open, complexity makes it almost as good as closed, as we can see with crazy complicated web standards for which there are few implementations.
A simple protocol is more likely to last.
IPSec is not always a VPN protocol. L2TP over IPSec is often used as one, but IPSec does little more than encrypt a tunnel between two IP addresses. IPSec in tunnel mode can be a minimal VPN, but it's rarely used as such in VPN scenarios without a second packet encapsulation protocol, as it lacks authorization beyond key exchange.
As for the risk of ossification: that didn't go away with the current system either. HTTPS over TLS 1.3 looks like a TLS 1.2 session resumption on the wire (in its default configuration) because shitty middleboxes are used often enough that it would impede the protocol.
The "let's remake TCP over UDP" approach QUIC takes has very similar origins. UDP is generally allowed by random firewalls over that network, while other (more suited) protocols for this type of stuff like SCTP are not. The operating system doesn't allow opening raw network sockets without high privileges, so adding a new QUIC protocol at the layer of TCP and UDP to implement them at the right spot in the stack wouldn't be usable for many devices. Same is true for the TCP stack you have to use what the OS provides or get higher privileges to do your own; patching the TCP state machine isn't practical. So, if you want to implement a better version of TCP optimised for web browsing and such, you use UDP, because while technically incorrect, it'll work in most cases and has the least restrictions.
In the context of the network, IPSec is the new protocol here, not the result of ossification.
Your computer can talk to your home router (CPE) and punch a hole for a connection, but if your WAN port does not have a public IP address, but rather itself also has a private address (probably 100.64/10), the CPE cannot talk to the ISP's router to punch a hole:
* https://en.wikipedia.org/wiki/Carrier-grade_NAT
The two layers of NAT (home network (192.168) -> CPE NAT (100.64/10) -> ISP NAT ('real' public IPv4)) prevent hole punching.
> Our [Native American] tribal network started out IPv6, but soon learned we had to somehow support IPv4 only traffic. It took almost 11 months in order to get a small amount of IPv4 addresses allocated for this use. In fact there were only enough addresses to cover maybe 1% of population. So we were forced to create a very expensive proxy/translation server in order to support this traffic.
> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.
* https://community.roku.com/t5/Features-settings-updates/It-s...
* Discussion: https://news.ycombinator.com/item?id=35047624
Just the other day I was discouraging a youngster from manually populating his hosts file in order to circumvent a DMCA-related DNS block.... What has the world come to.
Or, more to the point, the server that I use to run my RSS feed reader?
Or my NAS?
Tailscale makes these more secure and more accessible for me. They are never meant to have the world access them.
Now for email and a few other things, sure, their nature is that they need to access the world.
Because that is how the internet is meant to work. It is an end-to-end network. If SSH would not be secure enough to handle this, it would need a secure replacement.
> Or my NAS? […] They are never meant to have the world access them.
What is a NAS, if not a Network-Attached Storage, i.e. meant to be accessed from the network? The concept of a "local", "secure" network is a dangerous illusion. Embrace "zero trust" networking.
No. The "internet" is literally the "inter-network", a way to connect private networks between each other.
The fact that VPN technologies sit behind proprietary corporate intellectual property is not by design, it is a failure of the internet standardization process as it was gamed by corporate interests.
> should be accessible from anywhere, and be secured at the end points, not at the network layer
If you're not securing at both the network layer and the endpoints, then you have utterly failed security and you need to go sit in the dunce corner.
If you do this, your application has no listening ports on the WAN, LAN, or host OS network and thus cannot be attacked from the external network/IP.
The asymmetry of risk now favours the defender, not attacker. Oh, plus we also have pre-built tunnelers for endpoints if you cannot do app embedded.
Last I checked, it hasn't solved DNS yet (there are unofficial projects trying to do that). I tested a small private network with a few devices and it worked very well.
IPv6 is completely useless and doesn't solve this problem.
Normal people don't care if they have to pay 5 dollars instead of 50 cents to rent an IP address. This is a problem specific only to the huge providers, and we don't need to rollout a whole internet upgrade just to optimize a tiny part of the operational costs for huge providers.
For example, every existing solution touts "security" and yet completely mangles the difference between authentication and encryption.
Authentication is important - you don't want random servers or users to enroll on your network, and you want good tools to rotate and manage secrets.
Encryption isn't important unless you care about state-level actors sniffing your traffic at the backbone. (And if you care about that then you already have your own datacenter.)
Meanwhile encrypting all network traffic is a huge performance penalty. (Orders of magnitude for some valid use cases.)
I have no problem criticizing tech companies, but I try to wait until they behave badly.
This is the Cloudflare problem all over again. One day Matthew Prince will get hit by a bus, all the "trustworthy people" will leave, a PE firm will take the company private, and merge it with an ad network. Congrats, the entire internet now has a single company's ads all over it, and we let it happen because we happened to like the people fucking us.
That's why people don't necessarily, and shouldn't, trust that Tailscale won't head down the same path. It's hard enough for non-profits - heck, the Mozilla Foundation is losing all the good will they've ever had, and even the Raspberry Pi Foundation decided to gaslight people when they started eyeing corporate money.
If there's an open source way to do a thing that's a pain in the ass and a way to do the same thing from a for-profit company, I'll take the pain in the ass thing every time. History has shown it to be the prudent thing time and time again.
I'd rather not wait until they have a (quasi-)monopoly on something though. Twitter was great until…
I suppose if you follow that thread, though, a lot of businesses just shouldn't exist except to fulfill the need they fill, for the sake of those in need.
In fact, I prefer that capitalist model at this point having seen countless OSS/nonprofit efforts turn into glorified abandonware.
At least the business has an interest in remaining a going concern and maintaining the stack.
I have a SaaS-crush on buf because they did such a good job on fixing such an annoying problem.
edit: maybe even invite 2 or 3 DNSSEC advocates @tptacek :)
If we could find a credible DNSSEC advocate (for our audience; that is: a cryptography engineer, vulnerability researcher, or an engineering leader at a major firm), we would absolutely invite them on.
'teddyh below gave you links to two pro-DNSSEC resources; fun note: the latter source (Geoff Huston, one of the world's more respected networking researchers) has since then written this:
“I guess the question we should be asking is — if we want a secured namespace what aspects should we change about the way DNSSEC is used to make it simpler, faster, and more robust?”
• Blog post: <https://blog.technitium.com/2023/05/for-dnssec-and-why-dane-...>
• As a podcast episode: <https://blog.apnic.net/2023/03/16/podcast-dnssec-the-case-fo...>
IPv6 doesn't have that problem, though.
it's not a problem specific to any kind of corporation or corporations per se, but organizations or even broader, solutions.
though, do you really think that having a solution to a problem is worse than just having the problem?
• This does not really solve the problem, since a real solution would be to change the internet to make the problem go away
• A company making a lot of money gets to have an enormous influence on what is considered reasonable to standardize on. See for instance Google’s and Microsoft’s influence on things like the W3C. (Or if Tailscale is allowed to define what ”The New Internet” will be.)
Historically, car owners need to pay for repairs.
It's also worth considering that, for better or worse, very few people actually own their cars today. When you have a loan on it, the resale value becomes really important. If the manufacturer wants the kind of customer that buys a new car every few years, they'll need resale value that at least keeps up with the principal on the loan over that time.
They have a higher resale value because they have a reputation of lasting a long time, and people are thus perhaps more willing to pay a higher initial purchase price because they know their "investment" will last longer.
And while they may not be planing to sell their car after only a few years, knowing that they'll get back more of their "investment" is also probably sitting in the back of their mind ('just in case').
> There’s going to be a new world of haves and have-nots. Where in 1970 you had or didn’t have a mainframe, and in 1995 you had or didn’t have the Internet, and today you have or don’t have a TLS cert, tomorrow you’ll have or not have Tailscale. And if you don’t, you won’t be able to run apps that only work in a post-Tailscale world.
The king is dead, long live the king!
I still use a non-proprietary one that predates Tailscale and that is not OpenVPN. It is small and simple enough even I, a non-programmer, can make modifications.
It's possible one ends up using client-server in order to achieve peer-to-peer because not everyone has an internet-reachable, non-firewalled IP address. Using some hosting company's server to run a "supernode" may be required. No traffic needs to pass through it if it is used only as a "rendezvous server" so the cost can be minimal.
Companies that try to compete with "free" always draw high scrutiny from me. Stop using that free software and start paying us. We added 100 unnecessary "features".
Not doubting this "corporate strategy" can succeed, at least short-term. Look at Slack. But these subscriptions are not for me.
Client-server versus peer-to-peer is misdirection. The real issue is proprietary versus non-proprietary. IMHO.
re: GP comment. It really does not matter which non-proprietary solution one chooses. It is personal preference. I know what I like but others might not like it. There are many options to choose from. And (I hope) there will continue to be more.
https://github.com/sshuttle/sshuttle
... which allows you to turn any system you have an ssh login on into a VPN endpoint.
https://github.com/apenwarr/sshuttle
... and I had not made that connection before ...
That won’t go down well in 10 years if they don’t become Microsoft-scale juggernauts.
I like Tailscale just because it's OpenVPN without the unbearable agony of setting it up so it actually works
If not, why bother? TLS and http don't charge licensing fees...
I have switched where possible, both my own networks and clients', to Headscale, which is a fully open source coordination server compatible with Tailscale.
It was such a superb and easy to use tool to design/configure your own private networks at the time. Filesharing, local game LANs, development cooperation, heck, even media streaming was so easily done at the time.
Personally I think the future of peer to peer isn't Tailscale; it's more something along the lines of a self-hosted Hamachi variant that's able to generically put nodes together from all across different NATs and ASNs, generically understanding NAT-breaking techniques and STUN/TURN/turtle routing.
A tool like this that could also allow remote users to chime in without a centralized VPN gateway would be a killer feature for the modern world.
Hamachi was different because a child could use it (literally). It was designed like an instant messenger, and you could easily create groups and invite friends for a LAN party. No IP masks, no hashes, none of that complicated stuff was necessary.
I'd only see maybe a tool that was built on top of headscale that could do that, but headscale's focus is too far off for something like that, and in my opinion too low level.
Agreed. We would all do well to learn about, and begin implementing, "Iceberg Articles":
Start with the important statements, then expand. Doesn't have to be the "Tell you what I'm telling you, tell you, tell you what I told you" format that many (American) students were taught, but starting with your thesis statement does help ground it.
On the other hand, the topic blog is somewhat of a story, and I can hear the presentation being given behind it. It's just translated 1:1 to a blog, which is a different medium.
In my experience, I only prefer "Classical philosophical writing" when I'm already convinced of reading the content (e.g. know the author, interested by the subject).
In almost all other cases, I prefer BLUF format: i.e. "get to the point, I'll read more if I'm intrigued".
I should say two things. Tailscale is amazing and I love it. The system could not exist without it, or I'd have to have at least ten more people in my team to manage all this 24/7. It's working, and it's good enough.
That being said, you do need to lower your expectations: it's not as good as "the internet". The latency spikes periodically, the connection drops sometimes, the MagicDNS just magically stops working or interferes with the system. Since we have many users, we've encountered every possible problem one can encounter, and then there's still something new you'll see tomorrow.
In any case, we believe in Tailscale and its vision; it's a categorically new approach that simultaneously gives you control over the hardware, reduces the cost, and improves the security. Our first big production server was a 4-core Linux laptop!
We love Tailscale and we wish the product prosperous life and development. Thank you TEAM TAILSCALE!
So that laptop was a "free server" we've had. It's now replaced by a much beefier miniPC.
Do you require your users to install Tailscale?
I run it for my personal self-hosted infra, and it works really well. Setting a custom control server URL is relatively easy (at least on Windows and Android which I use).
I use taildrop, I serve docker containers to the tailnet, etc. headscale works really well and is worth a go.
The official clients (most valuable: the polished mobile apps easily installed from the default app stores) are one auto-update away from cutting ties when push comes to shove, the same as all commercial VPNs with a free tier.
However I do not think Tailscale is going to remove the custom control URL feature from their mobile clients. For one, I think there are legitimate "Tailscale Enterprise" use-cases for the custom login server.
Additionally, I have heard that Tailscale has been supportive of the Headscale project, providing assistance to the devs.
Further, Tailscale seems fairly committed to keeping their clients open sourced, and engaging in the developer community. Of course as you can say this can change at any time.
And of course there's the API used to manage the official server, so the rare things that depend on it won't work, but it's more a case of the project not having the need to work on it.
I think auto TLS requires some extra config, and DNS rules. I don't use it so I'm not sure.
https://en.wikipedia.org/wiki/Carrier-grade_NAT
Can anyone enlighten me regarding what is different or special about 100.64.0.0/10 vs., say, 192.168.0.0 or 10.0.0.0?
Edit: Answered my own question by digging into more wikis, there is a helpful table of reservations and intentions here: https://en.wikipedia.org/wiki/Reserved_IP_addresses
A bit of context: if an ISP cannot get enough IPv4 addresses for the WAN-side of people's home routers, some problems exist:
* something in 192.168/16 is generally used for the LAN-side of people's home routers, so that cannot be used on the WAN side
* 10/8 is used for business/enterprise corporate networks, so it also cannot be used on the WAN side because if people VPN connect to the corporate, then the router may get confused
* similarly for 172.16/12: often used for corporate networks
So the IETF/IANA set aside 100.64.0.0/10 as it had no 'legacy' of use anywhere else, and is specifically called out to only be used for ISPs for CG-NAT purposes. This way its routing does not clash with any other use (home or corporate/business).
IPv4 address space is nearly exhausted. However, ISPs must continue
to support IPv4 growth until IPv6 is fully deployed. To that end,
many ISPs will deploy a Carrier-Grade NAT (CGN) device, such as that
described in [RFC6264]. Because CGNs are used on networks where
public address space is expected, and currently available private
address space causes operational issues when used in this context,
ISPs require a new IPv4 /10 address block. This address block will
be called the "Shared Address Space" and will be used to number the
interfaces that connect CGN devices to Customer Premises Equipment (CPE).
* https://www.rfc-editor.org/rfc/rfc6598.html
And that actually was a problem at a previous job I was at: when COVID hit, our VPN address range just happened to be in that range, and so a bunch of developers were having issues. (IIRC, we re-configured the VPN appliance to use something else.)
Edit: I should say, a subnet that docker carves smaller subnets out of for its networks.
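For completeness, a quick way to see exactly which addresses the shared range covers, using Python's ipaddress module (the sample addresses are arbitrary):

    import ipaddress

    cgnat = ipaddress.ip_network("100.64.0.0/10")   # RFC 6598 shared address space

    for sample in ("100.64.0.1", "100.127.255.254", "100.128.0.1", "192.168.1.1"):
        print(sample, ipaddress.ip_address(sample) in cgnat)
    # 100.64.0.1       True
    # 100.127.255.254  True
    # 100.128.0.1      False  (just past the /10)
    # 192.168.1.1      False  (RFC 1918, not shared space)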
No, because Tailscale isn't "the Internet", it is a bunch of disconnected moats. The IP space needed by Tailscale only has to be as big as the largest moat. And you can only be connected to a single moat at a time.
And if I remember correctly, ZT was initially created to provide something like this "New Internet" concept that Tailscale has apparently recently discovered, except they called it "Earth" and abandoned it in 2023.
(Some things don't change, I guess.)
I didn't intend to leave it to implication that Tailscale is node-to-node, or that it is not hub-and-spoke.
(I even had this up in a browser tab when I wrote that previous comment: https://tailscale.com/blog/how-tailscale-works)
It's still proprietary if you self-host it. I was thinking in particular that Tailscale uses WireGuard and ZeroTier uses something custom, i.e. proprietary. Note that the context was:
> The internet succeeded because it was built on standards and was completely free. With Tailscale, I get wireguard is open source and we have things like Headscale. But [...]
to which the commenter I replied to asked about alternatives. So I wasn't saying Tailscale is great and open and standards compliant, and ZeroTier not; I was saying it's the obvious competitor, but if that's your problem with Tailscale then it's, if anything, worse in that regard.
Because of this, I'll be switching to Headscale + Tailscale.
I don't need east-west (EW) traffic over the VPN; it's very north-south (NS) based. Something like Headscale or another SD-WAN solution (automatically establishing VPN routes) would make sense if I needed to transport a lot of traffic E-W; that's just not a requirement.
You have a mesh VPN product with some value-added services on top of it. That's great, but this idea isn't novel or unique. Why should your solution be the "new internet" instead of any of the alternatives?
I wouldn't want to rely on a single company for all my internet infrastructure, anyway. So I'll stick with the traditional internet with all its complexity. Its major problems aren't technical but social, and no new technology will solve those.
Really? Isn't the major problem of the current internet the inherent centralization of services, because the initial promise of a 100% decentralized network is simply too complex to realistically manage? I view that problem as deeply technical. Unless by "social" you simply mean everyone should become an experienced sysadmin (or the slight variation: everyone should know an experienced sysadmin who's willing to run their application for them for free).
Take something as mainstream as social media. Imagine a world where Facebook/Twitter/TikTok/YouTube/Reddit/HN/etc worked (seamlessly) like BitTorrent. An application on your machine that, when you run it, joins a "Facebook" network where your friends see you online through their instance of the application. Your feed/wall/etc is served to them directly from your machine. All your communication with them is handled directly between the 2 (or 1000, or millions) of you. No centralized server needed. You can easily extend and apply this to the majority of centralized applications today. The only ones I can think of where this wouldn't work would be inherently centralized services like banking, for example.
There are already plenty of p2p networks that show that this is a viable solution. Bittorrent, soulseek, bitcoin, etc.
All the problems you will run into to make this as seamless as just connecting to facebook.com are, however, purely technical. The initial big hurdle is seamless p2p connectivity. That is, without port forwarding, dynamic DNS, or requiring advanced networking, security, and other sysadmin knowledge from every user. Next would be problems like: what happens when the node is offline? What happens to latency and load if you need to connect to thousands, hundreds of thousands, or millions of machines just to pull a "feed"? How is caching handled? How are updates/notifications pushed? How do nodes communicate when they are wildly out of date? Where is your data stored? How do you handle discoverability, security, etc.?
All deeply technical problems. Most are solvable, but you're gonna have to invest a significant amount of effort to solve them one by one to reach the same brain-dead simple experience as a centralized service. The fediverse has been trying to solve just a small subset of these problems for over a decade now, and the solutions still require a highly capable sysadmin to give users a similar (or only slightly worse) experience than twitter.com.
Not quite. The internet _is_ decentralized. What made the web so centralized from the start could partially be the result of lacking tools that made publishing as easy as consuming content. I.e. had we had a publishing equivalent to the web browser, perhaps the web landscape would've been different today. You can see that this was planned as phase 2 of the original WWW proposal[2] ("universal authorship"), but it never came to pass AFAIK.
So you could say that the problem is partly technical. But it's uncertain how much this would've changed how people use the web, and if companies would've still stepped in to fill the authorship void, as many did and still do today. Once the web started gaining global traction in the early 90s, that ship had sailed. People learned that they had to use GeoCities to publish content, and later MySpace, Facebook and Twitter. These services gained popularity because they were popular.
There have been many attempts over the years to decentralize the web, but now the problem is largely social. As you say, we've had the fediverse for over a decade now. How is that going? Are technical issues still a hurdle to achieve mass adoption, or are people not joining because of other reasons? I'd say it's the latter.
Most people simply don't care about decentralization. They don't care about privacy, or that some corporation is getting rich off their data. They do care about using a service with interesting content where most of their contacts are. So it's a social and traction issue, much more than a technical one. The only people who use decentralized services today are those who care more about the technology than following the herd. Until you can either get the average web user interested in the technology, or achieve sufficient traction so that people don't care about the technology, decentralized services will remain a niche.
There is another technical aspect to this, though. Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data. Many things will need to change on the operational side before your decentralized dream can become reality. I think this landscape would've also been different had the web started with decentralized principles, but alas, here we are.
Convenience trumps everything. All the parts of the iPhone existed for a few years before it -- especially PDAs with touch pens -- but what made the iPhone succeed was putting everything into convenient and easier package.
The amount of time worked on thing X has almost zero correlation with its adoption, as I think all of us techies know.
> Even if we could get everyone to use decentralized services today, the internet infrastructure is not ready for it. Most ISPs still offer asymmetrical connections, and residential networks simply aren't built for serving data.
While that is true, let's not forget half-solutions like TeamViewer's relay servers, Tailscale / ZeroTier coordinators, and many others. They are not a 100% solution, but then again nothing is nowadays; we have to start somewhere. I agree that many ISPs would be very unhappy with a truly decentralized architecture, but the market will make them fall in line. I have no sympathy for some local businessmen who figured they would rake in tens of millions from a $50K investment. Nope, they'll have to invest more or be left out.
So there would be a market reshuffling and I'm very OK with it.
---
But how do we start off the entire process? I'd bet on automated negotiation between nodes, plus making sure those nodes are installed on many more machines. I envision a Linux kernel module that transparently keeps connections to a small but important subset of this future decentralized network, and the rest becomes just API calls that would be almost as simple as the current ones (barring some more retry logic because, f.ex., "we couldn't find the peer in one full minute"). I believe many devs would be able to handle it.
You can now build Internet-scale distributed systems, with or without requiring centralized services (e.g. DNS, SSL certs, etc.).
In other words, massively distributed apps without any means for centralized authorities to stop them.
Also various integrations, like tailscale k8s operator.
i.e., isn't some business just a kludge of FOSS heavyweights anyway? Say, for example, when they write an app in some open source language, deploy it on an open source OS with open source orchestration, etc.
I think Tailscale is a lot of foss software, with the utility that it lowers the barrier to entry massively
If not, why Tailscale specifically, and not Netbird, Nebula, Netmaker or some other competitor?
The article is indeed very well written, but gives off the wrong vibes, like something's coming: acquisition, pivot, split, shutting down, etc. Also, "we're just getting started", the famous last words.
Just to balance my healthy mistrust, I'd like to add that I'm a satisfied Tailscale user, mostly impressed with how little it requires of me to just work.
As a concrete example, a few weeks ago, I invited my dad to my tailnet with the intent of using remote desktop into his machine to help him fix something. He accepted the invite, and then I couldn't ping his machine despite it appearing in my TS domain web interface.
Now he hates tailscale, and I lost credibility because prior I told him how awesome it is. In his view, it wasted his time and doesn't "work right", and metadat is a fool.
The other thing to check is whether he was running another VPN on his machine at the same time. Running multiple VPNs at the same time (on both Windows and Linux) requires extra fiddling to map the routing correctly to prevent their rules from overlapping/breaking each other. https://tailscale.com/kb/1105/other-vpns
Anyway, tailscale still has more to go. Inviting someone to your tailnet doesn't seem to be the same as adding a machine yourself.
I am, among other things, a network engineer, and previously I shared my tailnet with my brother's windows machine by logging him into my account directly, and it worked flawlessly.
I want TS to win, but they've got product and engineering work to do if they're serious.
If I can't figure it out, 99% of others won't either.
TeamViewer, AnyDesk et al. are made for the task
The reason AWS is expensive is not because of IPv4, or the datacenters. It's mostly in their software/managed offerings, and the ability to quickly add more servers. If you are a "serious company" and you don't want to pay AWS or a similar company, renting a rack and colocating your own servers (either within your premises or in a datacenter) is doable and done by lots of companies.
I disagree that certificates have caused centralization, and they're not something separating the haves and have-nots and are in no way comparable to having or not a mainframe. HTTPS becoming pseudo-mandatory didn't push people into having their own (sub)domains, which is nowadays the only requirement to obtain a certificate. It already happened out of convenience.
The other point of centralization mentioned is DNS, which tailscale doesn't avoid at all. MagicDNS still relies on the ICANN root, as does the tailscale control plane. And if all you wanted was a free subdomain, there are plenty of people offering that.
If you are behind CGNAT, tailnets aren't particularly less centralized, as traffic has to flow through the DERP servers. I doubt tailscale can keep providing these free of charge when the volume is in the tbps instead of the gbps.
I agree that tailscale (and similar solutions) help in the last remaining case, which is accessing your computer that is behind a NAT. I even think they could reach the dozens of millions of users. This is, in my opinion, not enough to claim the title of "the new internet".
On other socials, a screenshot of the 'Not scaling' section is getting responses of "Those idiot developers think they need k8s scaling for their 1 req/s sites, ha ha."
The author brags about being able to (skip testing, CI/CD pipelines and just) edit their perl scripts (in prod,) really quickly.
What uptime is associated with that practice? As many 9's as it takes for Brad to debug his perl program in prod? This approach doesn't even scale to 2 developers unless they're sitting in the same room.
DevOps isn't a machine where you put unnecessary complexity in one end and get req/s out the other end. It's about risk and managing deployments better.
If I really wanted to engineer for req/s, I'd look at moving off k8s and onto bare metal.
IPv6 -together with WireGuard- gives privacy, security, and performance. The downside is the complexity to set it up.
Tailscale builds on the shoulders of giants: IPv4, WireGuard, Samy Kamkar's NAT punching, OpenSSH, and probably many more. One of the upsides is the combination of these, and that the management interface in general is easy. But what holds for CAs also holds for Tailscale: both use FOSS to, in the end, deliver a (proprietary) service.
But because almost everything is built on top of FOSS and there's Headscale (and they're cool with it), this isn't a major issue to me. Like, it is a downside, but not a major one, as vendor lock-in is practically non-existent. In fact, it is likely an upside from a business/support PoV.
My laptop has an IPv6 address, as does the router that routes its traffic. There’s no NAT, that’s true, but there’s still a firewall — only inbound packets from a host and port that have previously been sent to are allowed in. And in enterprise environments, from what I’ve seen, there’s a symmetric NAT on IPv6 anyway — the packet comes from a different IPv6 address and randomized port than the one the client sent it from, making peer connectivity impossible, as the source port varies by destination host and port.
Well, T-Mobile US is 100% IPv6:
* https://www.youtube.com/watch?v=QGbxCKAqNUE
Facebook is IPv6-only on their internal infrastructure:
* https://www.internetsociety.org/resources/deploy360/2014/cas...
Microsoft has been moving to IPv6-only for their corporate network (so IPv4 address can be used for revenue-producing Azure):
* https://www.arin.net/blog/2019/04/03/microsoft-works-toward-...
So he better tell those folks that IPv6 is not a thing.
Anyway, remember IPv4 classes? Then they made it classless. IPv6 is not 128-bit, it's just 64-bit with a 64-bit host address. So, first mistake. IPsec mandatory? Pure stupidity. Crypto moves fast; every 10 years many protocols are obsoleted. How will you provide E2E connectivity with that?
In 1997 IPv6 was far too immature to start migrating to. Additionally, it was very different from IPv4, so it was mostly ignored. What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done. As a bonus, they should have thought about some basic IPv6 -> IPv4 interop so clients would NOT need to be dual stack. And that could have worked back then. Now we are fucked.
Maybe in the 00s there was a window when there might have been true doubt about whether IPv6 was going to happen. But after that, it was just a question of "when", not "if".
Keeping on hating is simply not very productive. It's just much better to embrace IPv6, no matter its possible flaws.
Tell me you've never had to seriously design and operate networks at scale without telling me etc...
This is a bit like Chesterton's fence - until you understand why (for example) ARP is a hilariously bad design and a major problem when trying to design networks at scale, then you can't understand why someone might want to replace it with something more effective. IPv6 doesn't get a lot of stuff right, but the motivation behind replacing v4 was much more than simply "more addresses pls".
IPv4 was the mistake, Vint Cerf is on the record as saying so. Should never have been let out of the lab.
Also, please cut the crap about IoT and centrally managed hyperscale IP networks. That's just several huge corporations. The majority of the Internet is small/medium shops doing it completely differently. Yet the big boys pop in and say you do it wrong, you must do it our way or go away. Not nice.
Yeah, that motivation became overengineering. They provided a protocol that does NOT fit the needs, it seems.
IPv6 will probably happen indeed. I doubt someone will pop in with a great protocol that will make IP obsolete.
Also, I wish IPv6 had really taken off, because even if I personally don't like IPv6, its success would free up IPv4 address space for my retro networking projects.
The place that is behind are corporate networks.
ARP poisoning is the least of ARP's problems.
It can potentially have a blast radius that can bring down networks, and if it had actually been sorted out, then things like BGP EVPN would not have needed to be invented. One of the touted benefits of BGP EVPN is reduced ARP and Layer 2 broadcast traffic.
I've seen ARP storms bring down even 'small' company networks (a dozen switches for ~200 people) because someone fed a simple desktop switch back in on itself and the access layer switch in the closet could not do STP with the simple switch.
That's why newer switches have STP, DHCP snooping, ARP security and so on. Now take a look at ND table exhaustion alone. A trivial attack to do on an IPv6 segment. Is it solved yet? I don't know. I do NOT track it.
The whole PnP (I call it Plug and Pray) is a terrible approach imo. IoT created a hell of a lot of security problems (the biggest DDoS botnets are IoT). If someone needs autoconfiguration, they can slap DHCP on the segment. An easy and super old protocol on IPv4. (IoT connected directly to the internet? That's stupidity.. but I will leave that for another talk.)
So, IPv6 should be simple, easy to implement and so less prone to mistakes. All extras should be put a layer up.
It's a footgun. All footguns have ways to not trigger them, but saying you can't blow off a leg is also inaccurate. Reducing the number of footguns lying about is generally a good thing.
> Now take a look at ND table exhaustion alone.
No different than ARP table exhaustion (a finite L2-L3 mapping table). "First hop security" is a thing in both protocols.
> So, IPv6 should be simple, easy to implement, and so less prone to mistakes. All extras should be put a layer up.
I would argue that IPv6 is simpler to get going than IPv4: to start you don't need BOOTP/DHCP. In fact IPv4 later took some ideas from IPv6, e.g., 169.254.0.0/16 link-local addresses:
This document describes a method by which a host may automatically
configure an interface with an IPv4 address in the 169.254/16 prefix
that is valid for Link-Local communication on that interface. This
is especially valuable in environments where no other configuration
mechanism is available. The IPv4 prefix 169.254/16 is registered
with the IANA for this purpose. Allocation of IPv6 Link-Local addresses
is described in "IPv6 Stateless Address Autoconfiguration" [RFC2462].
* https://datatracker.ietf.org/doc/html/rfc3927
And yeah.. a soft LL in IPv4 is a good idea. You can use it. In IPv6 you are forced to use it. Oh thank you, OSPFv3 configuration is so cool in IPv6..
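To make the link-local/SLAAC point concrete: every IPv6 interface gives itself an fe80::/64 address with no server involved, classically by padding its MAC address into a modified EUI-64 interface identifier (modern stacks often prefer RFC 7217 stable-privacy identifiers instead). A minimal sketch in Python, assuming only the standard ipaddress module; the MAC shown is just an example:

    import ipaddress

    def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
        """Derive an fe80::/64 link-local address from a MAC (modified EUI-64)."""
        octets = [int(part, 16) for part in mac.split(":")]
        octets[0] ^= 0x02  # flip the universal/local bit
        eui64 = bytes(octets[:3] + [0xFF, 0xFE] + octets[3:])  # insert ff:fe in the middle
        return ipaddress.IPv6Address((0xFE80 << 112) | int.from_bytes(eui64, "big"))

    print(eui64_link_local("52:54:00:12:34:56"))  # fe80::5054:ff:fe12:3456

These are also the addresses protocols like OSPFv3 and ND use for on-link traffic, which is where the gripe above comes from.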
Yes, and there are tools and procedures for that:
* https://datatracker.ietf.org/doc/html/rfc9099
But as the old saying goes: easy things should be simple, and hard things should be possible. I think IPv6 does that.
This is literally what they did, except they made it 128 bit rather than 64.
The thing you're missing is that literally every IPv4 protocol breaks the second you change the bit count. Before you widen the 32-bit address fields in the header you need to (a) redefine, bit for bit, every IP protocol so it can be understood by each IP-capable device, and (b) somehow ship a fool-proof update to every IPv4 device in the world redefining how it ought to interpret IPv4 headers.
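For a sense of why, here is a rough illustrative sketch (Python, made-up field values) of the fixed RFC 791 header layout that every stack and middlebox hard-codes: the source and destination addresses are exactly 32 bits wide at fixed offsets, so there is simply nowhere to put longer addresses without every existing parser misreading the packet:

    import ipaddress
    import struct

    def parse_ipv4_header(hdr: bytes) -> dict:
        # The fixed 20-byte base header: src and dst live at bytes 12-15 and 16-19,
        # each exactly 32 bits, and every implementation assumes exactly that.
        (ver_ihl, _tos, _total_len, _ident, _flags_frag,
         _ttl, _proto, _cksum, src, dst) = struct.unpack("!BBHHHBBHII", hdr[:20])
        return {
            "version": ver_ihl >> 4,
            "src": str(ipaddress.IPv4Address(src)),
            "dst": str(ipaddress.IPv4Address(dst)),
        }

    # A made-up packet header, just to show the fixed offsets in action.
    sample = struct.pack("!BBHHHBBHII", 0x45, 0, 20, 0, 0, 64, 6, 0,
                         int(ipaddress.IPv4Address("192.168.0.1")),
                         int(ipaddress.IPv4Address("8.8.8.8")))
    print(parse_ipv4_header(sample))  # {'version': 4, 'src': '192.168.0.1', 'dst': '8.8.8.8'}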
And I will say it again to be clear. I'm not a fan of some IPv4+ contraption ideas like let's extend the IPv4 address space and try to keep it IPv4. That's DUMB. Make a new protocol, improve the things that were bad in IPv4 (are there any?), and make it one-way interoperable with IPv4 (IPv6 -> IPv4), and we are done.
Remember that you are building a protocol for the entire planet. It has to be relatively simple and easy to implement. Any extras should go a layer up. The whole IoT crap annoys me a lot. This stuff should NEVER ever be connected directly to the internet. It creates a huge security mess. There should be an IoT gateway to handle IP <-> (whatever IoT protocol) and provide security.
>>The only flaw is too small address space.
>>>With current IPv6, you have to throw up half of the stuff you know about IPv4 for, imo, no valid reason.
ARP, DHCP, NAT, and the lack of built-in encryption are all huge problems that had to be addressed.
- ARP: incredibly inefficient, and a prime vector for abuse by malicious actors via ARP poisoning
- DHCP: man-in-the-middle attacks, need I say more?
- NAT: literally breaks the whole concept of IP addressing, incredibly inefficient because it requires manipulating packets mid-stream, and literally designed as a temporary band-aid to smooth our transition away from IPv4
- Built-in encryption: you say this makes things more complicated, but I believe it is the opposite; better security is built into the foundation rather than having to be built into every protocol on top of it (SSH instead of telnet, SFTP instead of FTP, HTTPS instead of HTTP, etc.).
The issue I'm having with your argument is that you say you're fine with a replacement IP protocol which ditches the bad, and then go on to deride IPv6 for doing exactly what you're asking for (keeping it as close to IPv4 as possible while ditching the biggest sources of technical debt).
>And I will say it again to be clear. I'm not a fan of some IPv4+ contraption ideas like let's extend the IPv4 address space and try to keep it IPv4. That's DUMB.
But you literally did suggest exactly this when you said:
>What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done.
Did I somehow misinterpret this?
>Make a new protocol, improve the things that were bad in IPv4 (are there any?), and make it one-way interoperable with IPv4 (IPv6 -> IPv4), and we are done.
IPv6 does provide a way to do exactly this; it's called NAT64: https://en.wikipedia.org/wiki/NAT64?useskin=vector
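For illustration, the address-mapping half of NAT64/DNS64 is small: the IPv6-only client is handed an IPv6 address with the IPv4 destination embedded in the low 32 bits, and the translator strips it back out. A rough sketch in Python using RFC 6052's well-known 64:ff9b::/96 prefix (a real deployment may use its own prefix, and the stateful translation of ports and sessions is of course the hard part):

    import ipaddress

    # RFC 6052 well-known prefix; operators can deploy a different /96 instead.
    NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

    def synthesize(v4: str) -> ipaddress.IPv6Address:
        """Embed an IPv4 address in the low 32 bits of the prefix (what DNS64 returns)."""
        return ipaddress.IPv6Address(
            int(NAT64_PREFIX.network_address) | int(ipaddress.IPv4Address(v4)))

    def extract(v6: str) -> ipaddress.IPv4Address:
        """Recover the embedded IPv4 address (what the NAT64 box forwards to)."""
        return ipaddress.IPv4Address(int(ipaddress.IPv6Address(v6)) & 0xFFFFFFFF)

    print(synthesize("192.0.2.33"))       # 64:ff9b::c000:221
    print(extract("64:ff9b::c000:221"))   # 192.0.2.33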
>Remember that you are building a protocol for the entire planet. It has to be relatively simple and easy to implement. Any extras should go a layer up.
Again, this really makes me think you don't work in networking. When you abstract security away from the underlying protocols you essentially leave a gaping hole in your security. The only surefire way to communicate securely is to bake encryption into the protocol itself (and even then it is hit or miss).
This is why we moved from HTTP/2 to HTTP/3. This is why we stopped wrapping telnet in IPsec tunnels and opted for SSH, and why we stopped wrapping HTTP in separate TLS tunnels and baked the encryption into HTTP/3, and so on.
I don't want to spend a lot of time on IoT, but as a network engineer I can say that these devices exist whether you like them or not and make up a large portion of traffic, so we can't just ignore them when talking about how network protocols ought to be designed.
DHCP snooping, need I say more? Also, if you are operating a network that carries high security risk, you just layer a VPN on top of it. That's why they got invented in the first place..
NAT is not that bad after all, imo. I like that my LAN is decoupled from the WAN. I'm multihomed and I don't need to bother announcing prefixes to both ISPs.
Yes, you still misinterpret my statement. I mean: take IPv4, just extend its address space, and create a new protocol out of it. It will not interoperate with IPv4 itself, because that's not possible. But why take old IPv4 instead of creating something from scratch? Simple: IPv4 works very well, so why trash the last 30 years of R&D put into it? Sure, if you can come up with something better, go ahead. IPv6 did not deliver on the promise.
Security is not as simple as "slap encryption everywhere and we are done"; it's a more complicated matter. Encryption, control, management, endpoint security, router security. What's the point of encryption if your device can be compromised through shitty management and the traffic MITMed anyway? Or what's the point of encryption if it can be cracked within an hour by a MITM because the protocol got old?
Yeah, HTTP/3.. created yet more problems that need to be solved now. Why does everything new that pops in trash the R&D put into the previous protocol, bringing back the same or similar problems AGAIN? That's pathetic.
IoT is a good example, actually. It has E2E encryption (mostly it's all HTTPS) and yet it gets pwned so easily, creating huge DDoS botnets. I'm starting to wonder if you have any security clue at all.
Negotiation. IPsec using IKEv2 (RFC 4306/7296) started with (e.g.) 3DES when it was initially released, but now allows for AES (RFC 3602, 3686, etc), as well as other algorithms:
* https://www.iana.org/assignments/ikev2-parameters/ikev2-para...
> What the IPng team should have done is just take IPv4, extend it to 64-bit, call it IPv6, and be done.
For anyone curious, the technical criteria for choosing the (then-labelled) IPng:
* https://datatracker.ietf.org/doc/html/rfc1726
And the evaluation of the available candidates and why the winner was chosen:
* https://datatracker.ietf.org/doc/html/rfc1752
One of the IPng candidates, SIPP, did indeed extend addressing from 32 bits to 64 bits (RFC 1710, RFC 1752 § 7.2), but it was deemed that this might not be enough and that yet another transition would be even more difficult, so they went with 128 bits (RFC 1752 § 9).
Adding mechanisms for auto-configuration was one of the criteria for IPng; per RFC 1726 § 5.8:
CRITERION
The protocol must permit easy and largely distributed
configuration and operation. Automatic configuration of hosts and
routers is required.
DISCUSSION
People complain that IP is hard to manage. We cannot plug and
play. We must fix that problem.
We do note that fully automated configuration, especially for
large, complex networks, is still a topic of research. Our
concern is mostly for small and medium sized, less complex,
networks; places where the essential knowledge and skills would
not be as readily available.
In dealing with this criterion, address assignment and delegation
procedures and restrictions should be addressed by the proposal.
Furthermore, "ownership" of addresses (e.g., user or service
provider) has recently become a concern and the issue should be
addressed.
We require that a node be able to dynamically obtain all of its
operational, IP-level parameters at boot time via a dynamic
configuration mechanism.
[…]
In a world of IoT, not having to have BOOTP/DHCP(v4) seems like decent foresight.
What market are you talking about?
* https://www.rfc-editor.org/rfc/rfc1454.html
Here are the technical criteria for choosing the (then-labelled) IPng:
* https://datatracker.ietf.org/doc/html/rfc1726
And finally the evaluation of the available candidates and why the winner was chosen:
* https://datatracker.ietf.org/doc/html/rfc1752
If someone doesn't want to use IPv6, then what they're effectively suggesting is that we create a new protocol and roll it out to every smartphone, tablet, laptop, desktop, server, (Wi-Fi) router/CPE, ISP router, SMB router, enterprise switch, and IoT device. Meanwhile we've already effectively run out of IPv4 addresses (e.g., the ARIN and RIPE pools are at zero) and are just shuffling about whatever is left in auctions.
> There's one thing I forgot to mention in that big long story above: somewhere in that whole chain of events, we completely stopped using bus networks. Ethernet is not actually a bus anymore. It just pretends to be a bus. Basically, we couldn't get ethernet's famous CSMA/CD to keep working as speeds increased, so we went back to the good old star topology.
Except for 802.11 Wifi.
Although I've heard some ideas for an IPv4.1 that suffer from the obvious problem, I think the far more common view is rather that v4 is fine and its only problem is solved by NAT. Which I agree isn't actually a long-term solution, but let's try to meet the stronger argument.
The only reason NAT is "solving" the problem is that IPv6 is taking some of the pressure off. T-Mobile US has 120M subscribers:
* https://www.statista.com/statistics/219577/total-customers-o...
And they went to IPv6-only:
* https://www.youtube.com/watch?v=QGbxCKAqNUE
There's no way that would work in a no-IPv6 / IPv4-only world. Comcast ran out of 10/8 address space to manage their cable modems: how would that work without IPv6?
Google says India is 74% IPv6:
* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
How would that work with only IPv4?
Even on smaller scales, without IPv6, supporting IPv4 with CG-NAT can get really expensive, real fast:
> We learned a very expensive lesson. 71% of the IPv4 traffic we were supporting was from ROKU devices. 9% coming from DishNetwork & DirectTV satellite tuners, 11% from HomeSecurity cameras and systems, and remaining 9% we replaced extremely outdated Point of Sale(POS) equipment. So we cut ROKU some slack three years ago by spending a little over $300k just to support their devices.
* https://community.roku.com/t5/Features-settings-updates/It-s...
* Discussion: https://news.ycombinator.com/item?id=35047624
Google says India is 74% IPv6:
* https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...
How would connectivity for 10^9 people work with only IPv4? See also China. Each of those countries is 2^30 people, plus add another 2^30 for the continent of Africa, and you're already over 2^31. IPv4 is 2^32 addresses.
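A quick back-of-the-envelope check of that arithmetic, using the rough head counts above:

    # Head count vs. the IPv4 address space (rough figures from the comment above).
    people = 3 * 2**30   # India + China + Africa, roughly 2^30 each
    print(people)        # 3221225472 -- already past 2**31
    print(2**32)         # 4294967296 IPv4 addresses in total, before reserved ranges
    # ...and that's one address per person, before phones, routers, TVs, servers, ...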
Sure, Tailscale makes the internet easier again, but I still have to rely on a landlord, something I didn't and don't have to do for the internet itself. As much as a lot of stuff has been centralized, even today I can connect to any server in the world with just the link.
"Be sure to drink your Ovaltine."
This is a very good point. Counterpoint is self-hosting Headscale which I mentioned in another comment here: https://github.com/juanfont/headscale
Works with native Tailscale clients with a few config changes. I use it myself.
I wouldn’t be too surprised if the median age of Tailscale’s audience was 24.
What is it?
Can we call things for what they are? Is this a VPN? :)
I'm tired and I am 34. So tired.
> That’s an IBM analogy.
Wow, this dialogue comes in the first episode of halt and catch fire. I didn't know this was a real thing
Here's the clip at 1.51 minutes, if anyone's interested: https://www.youtube.com/watch?v=XOR8mk0tLpc
I always just assumed they were building some kind of logging software (“tail”scale), used Wireguard to connect hosts, and just kind of stopped there. Don’t get me wrong, Tailscale is a nice way to connect machines. It’s nice because Wireguard is nice.
> Update 2019-04-26: Based on a lot of positive feedback from people who read this blog post, I ended up starting a company that might be able to help you with your logs problems. We're building pipelines that are very similar to what's described here.
Update 2020-08-26:
Aha! Okay, for some reason this article is trending again, and I'd better provide an update on my update. We did implement parts of this design for use in our core product, which is now quite distinct from logs processing.
After investigating the "logs" market last year, we decided not to commercialize a logs processing service. The reason is that the characteristics we want our design to have: cheap, lightweight, simple, fast, and reliable - are all things you would expect from the low-cost provider in a market. The "logs processing" space is crowded with a lot of premium products that are fancy, feature-filled, etc, and reliable too, and thus able to charge a lot of money.
Instead, we built a minimalistic version of the above design for our internal use, collecting distributed telemetry about Tailscale connection success rates to help debug the network. Big companies can also use it to feed into their IDS and SIEM systems.
We considered open sourcing the logs services we built (since open source is where attributes like cheap, lightweight, etc tend to flourish) but we can't afford the support overhead right now for a product that is at best tangential to our main focus. Sorry! Hopefully someday.
Nice requires humane UX.
There is very little use for companies like Tailscale in this setup, it’s scalable and works.
Tailscale can certainly be blocked on NGFW firewalls like Palo Alto. I am not a BOFH, but I also can't have random employees circumventing security policies by setting up Tailscale and leaving permanent backdoors in my corporate network.
I remember the good old days when everyone had a public IP on the Internet and how easy it was to set things up. It was cool and fun while it lasted. But now things are different, and security is a nightmare when we have to deal with things like ransomware.
Tailscale isn't exactly an open door. Only machines signed-in via SSO can access a Tailscale network.
If you don't trust your employees to safeguard their credentials and machines then how do you trust them at all? Keep them in an airtight underground bunker chained to their desks? Not sure what threat you're modeling for...
1. We took a hard problem, peer-to-peer networking and IdM, and (mostly) solved it.
2. We're hoping this will drive people to build apps that leverage the unique capabilities of authenticated p2p mesh networks. It doesn't even have to be specifically for Tailscale.
3. People will want to use those apps and (if we're good at our jobs) choose to pay us to run the network for them over our competitors or building something in-house.
4. $$$
I'm not sure I would say this is as nefarious as the tone of the comments here suggests. Wanting a "killer app" for your software platform is pretty normal, which is really what he's talking about. I would be nervous declaring victory or an inevitability without being able to name what that killer app actually is, but trying to figure it out/build it is a good strategy. It's one of those times where the desire of engineers to solve their pet problems, play with shiny new toys, and build Halo LAN Party over Tailscale is aligned with the business.
> But the liberation didn’t last long. If you deploy software, you probably pay rent to AWS.
There's no Azure? GCP? Hetzner? Digital Ocean?
> You pay exorbitant rents to cloud providers for their computing power because your own computer isn’t in the right place to be a decent server.
You do that because you don't know what port-forwarding is (the vast majority of software people do not), or because you don't have the space or infra in your dwelling to stash a laptop server running 24/7 without interruption.
Devs have lost such knowledge because big tech has trained them to lose it, and now we see more and more of the limits of that model. The new internet must be the very old one: a network of hosts communicating with each other, without NAT and the like in the middle, which in most cases is there explicitly to lock users' hosts behind some giant iron curtain.
The modern web matters today because we lack UIs: commercial desktops settled on widget-based UIs and have hit their limits hard, finding in the modern web a crappy modern version of the old classic DocUIs, and we know well that we need DocUIs. Slowly we are coming back to end-user programming, admitting that the visual crap and all the attempts to make programming hard on purpose led to unsustainable crapware ecosystems. Maybe in a decade spreadsheets and "calculators" will finally be dropped and Jupyter/R-like tools will have replaced them, eventually with some LLM plugged in to help the average user. In another decade we will probably be back at Lisp machines, because trying other paths to profit from users is not sustainable anymore.
The shorter this period is, the less damage we will suffer.
The rollout and transformation, if it happens, won't look like all this stuff becoming so easy that every individual can run a server. But it is possible that every extended family will have at least one member who can run a server or administer a private network for the whole clan. And that's where tech like tailscale's offering will come in. That's where I see the author's vision being a believable moonshot:
Each extended family, and some small communities, with their own little interconnected, distributed network-citadels, behind the firewalls of which they do their computing, their sharing, and their work. Most family members won't need to understand it any more than they understand the centralized clouds they use now. And most networks won't be as well secured as a massive company can make its cloud offering, but a patchwork heterogeneity of network-citadels creates its own sort of security, and significantly lowers the value of any one "citadel" to even motivated adversaries.
And Tailscale works for me to create my own network of phones, laptops, desktops and a remote node at DO. Works brilliantly across geo boundaries, borders, wifi networks (home has multiple), and for seamless moving between mobile networks and wired.
Not sure whether it will create a new internet or not, but at least a new intranet where all my devices are reachable and controllable.
That's as may be; but many, many people have no access to an "extended family". And extended families are not necessarily warm, safe spaces where everyone trusts everyone else; extended families are more likely to be "broken" than nuclear families.
It is a good thing to promote and advance privacy, security, and freedom to isolated, atomized individuals; but it is important for all of humanity to promote and advance those same ideals to extended families. People who have no access to an extended family will ultimately either join a different one or disappear into the mists of ages past. In 100 years, the Earth will be populated mostly by the descendants of people in extended families today, however imperfect or even broken those extended families may be. If those people today don't see privacy, security, and freedom as both possible and worthy, their descendants may not value or even possess any of those ideals.
Funnily enough, I was once like this but now I have deliberately moved everything to the big cloud providers as I don’t want to deal with the toil of running my own homelab anymore. This is coming from someone who used to have a FreeBSD server with ZFS disks and using jails to run various things like pf, samba, etc. Eventually things would fail and it felt like I was back at work again when all I want to do is drink a cold beer and watch YouTube.
Perhaps I will try again one day as things get easier. For now I am content with having my photos and videos automatically synced up to iCloud/Google Photos.
I think part of the excitement I'm feeling is that the ecosystem today feels way more stable and mature than it did a decade to a decade and a half ago. Home Assistant, Jellyfin, TrueNAS, and a few other things have all pretty much run themselves for me with almost no downtime (other than one blackout that happened while I was traveling and drained my UPS) for the past nine months. There's tinkering to get it all up and running, but way less maintenance than I remember in the past.
The only time I do anything with this stuff is when I want to upgrade (which is very rare) or add something. My NAS solution is a custom mini-ITX I built 8 years ago which I feel has more than paid for itself. I have long stopped chasing the latest and greatest because most of what has been produced in the last decade is very usable.
Very wary of going cloud, as I can't as easily control costs.
It may be highly illogical, but maybe by shooting for zero it would be possible to bat 1000?
I do everything it takes so that the "extended family" site just works after I leave, as long as the "operator" can keep track of their USB sticks.
Scrap PCs being used as media servers have no internal drive.
Boots to the stick containing the server app.
Accesses media on a second stick containing the files.
Hotplug the media stick to emulate game-cartridge/VCR-cassette convenience.
Upon server failure or massive update, replace that particular stick with a backup or later version, or in the worst case get another scrap PC.
I know, easier said than done :/
Aiming for zero required sysadmins in the short term after your own passing, I think the computers you leave behind will run into a similar case of the same general problem in the long term: there's no such thing as an entropy proof system. Castle walls erode and weapons rust if there are no skilled people to maintain them. Computer components slowly break down due to ordinary wear and tear. Software configurations become obsolete and unable to talk with other software, and become less secure as vulnerabilities are discovered over time. If there are no skilled people at all to maintain a familial network-citadel, it will eventually break down and fall into disuse.
Especially with passing, eventually it's like the siege of the Alamo, when the walls do end up breached there's not a soldier there that can do any good.
It's shoestrings anyway and amazing it's working for now :)
> We’re removing layers, and layers, and layers of complexity, and making it easier to work on what you wanted to work on in the first place.
as an avid user, i'd say they are in fact adding more layers to the problem. it is well-designed and relatively accessible, sure, but it's a stop-gap while everyone pushes toward the eventual solution.
it has always been the double-edged nature of abstraction: we hand trust and responsibility to another party so that, for us, networking works out "magically". but the moment your remote client has some auth issue, you snap back to reality. besides bandwidth costs, it seems their otherwise generous pricing model is economically viable in the post-ai landscape.
i'd personally like to act like a "landowner" of the internet, but currently being a renter seems like a good idea while we all wait for social housing to finally get accepted.
The idea of an entirely decentralized internet is wishful thinking. You always need servers. Even with IPv6, you have to run a STUN or DDNS server, since IP addresses change. Do you want to run them at home? I don't.
I do think Tailscale is on a path to different networking.
Smells like more centralisation, not less.
In summary, just because you don't pay for it directly doesn't mean you don't pay for it indirectly.
Some of the self-hosted options presented during sign up include Keycloak, Ory, Gitea, Zitadel, Authelia, and more.
There's also a workaround to create a passkey account: sign up with any SSO provider, invite yourself as an external user, accept that invite and sign in with a passkey, then leave the original SSO network. Then you're not tied to any external service at all.
https://news.ycombinator.com/item?id=22760130
> (I'm a Tailscale co-founder) The idea is to avoid building yet another commercial service that holds onto your username and password. People have enough identities already. More details here: https://tailscale.com/blog/how-tailscale-works/ We know we keep getting feedback that people want a different way to authorize their accounts (especially for personal use), so we're looking at other options. We just really want to stay out of the username+password business; it's simply bad security practice.
Except to use Tailscale you do need to bring in a whole OIDC authentication provider.
It's all small and aimed at avoiding scale until the very first step, when suddenly only the big complex thing is acceptable.
I still just want to use my email and a TOTP. The only one of the auth providers Tailscale supports that I have is GitHub, but I don't use GitHub beyond work, as I self-host my git.
When the onboarding is "maintain and run a full oidc provider", all we've done is trade one aspect of complexity for another.
I don't really understand it. I can use the direct IP address of the other machine and still see tailscaled.exe using a lot of CPU, with my file transfer running at only 65 MB/s. If I right-click the system tray icon and exit Tailscale, the transfer speed instantly jumps to 109 MB/s (the maximum of my gigabit LAN).
It's that simple.
Another one just blocks all incoming connections on ipv6 entirely.
Not to mention most Internet services do need a central backend to function even if there are no barriers among clients at all (because the clients are completely unreliable), including even the textbook p2p example of file transfer: while direct p2p is nice in many cases, with a central service the recipient can receive at any time, instead of having to coordinate with the sender to both stay online simultaneously and for the duration of the transfer, which is quite difficult nowadays with most of the computing happening on phones.
TL;DR
It's been really disheartening to watch the steady enshittification of Tailscale, Inc. I knew it was coming with 100% certainty once they raised $100M in 2022. It's still heartbreaking because the product itself is quite good.
The worst part is that because Tailscale, Inc. got there "first" (I know Nebula existed before Tailscale did; shut up, okay?), the other competitors like NetMaker and NetBird are all following almost the exact same business model ("open core+": an open-source client and some kind of claim to an open-source control plane, with infinity caveats, to funnel enterprise dollarydoos back to the vulture capitalists).
> The worst part is that because Tailscale, Inc. got there "first"
never heard of nebula, but please clarify where they got there first.
i'm sure you are aware that branded/purpose-built VPNs existed long before the first iPhone.
are you describing the process of a company achieving commercial success?
lmao