I think this is... fine? Am I just totally naive? I think it's fine to say "You don't really have privacy on this app" - as long as there are relatively good options of apps that do have privacy (and I think there are). TikTok is really a public-by-default type of social media; there's not much notion of mutual following or closed groups. So sure, you don't have privacy on TikTok; if you want it, you can move to Snapchat or Signal or whatever platform of your choice.
Like, it's literally a platform that was run under the watchful eye of the CCP, and now the US version is some kleptocratic nightmare, so I just don't see the point in expecting some sort of principled stance out of them.
In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform. If you're going to embrace 'privacy' I do think it's on you to also then put additional resources into tackling the downsides of that.
> nobody should believe for a second that WhatsApp or FB messages are truly E2EE.
Meta still tracks analytics, which isn't good for privacy, but I'm not aware of any reports of them or third parties reading messages without the consent of one of the parties involved. Signal is probably much better, though.
> Meta still tracks analytics, which isn't good for privacy, but I'm not aware of any reports of them or third parties reading messages without the consent of one of the parties involved. Signal is probably much better, though.
Correct. WhatsApp uses the Signal protocol, and there is zero evidence of them reading message contents except with the consent of one of the users involved (such as a user reporting a message for moderation purposes).
(And before anyone takes issue with that last qualifier, consent from at least one party is the bar for secure communications on any platform, Signal included. If you don't trust the person you are communicating with, no amount of encryption will protect you).
Discovering a backdoor in WhatsApp for Facebook/Meta to read messages would be a career-defining finding for a security researcher, so it's not like this is some topic nobody has ever thought to investigate.
Yet. Until they say "We delete these messages after X time and they are gone for good, and we're not reading them," assume they are reading them, or will read them and the information just hasn't gotten out yet.
I mean, we keep finding more and more cases where companies like FB and Google were reading messages years ago, and it wasn't until now that we found out.
Whether Facebook/Meta can read the plain text of the messages depends on whether that encryption is "zero knowledge" or not, i.e.: does Facebook generate and retain the private encryption key, or does it stay on the users' devices only, never visible to Facebook or stored on Facebook servers?
If Facebook holds the private key, it can decrypt the messages at will, and the E2EE only protects against hackers, not Facebook itself, nor law enforcement, since if Facebook has the decryption key it can be legally compelled to hand it over (and probably would do so voluntarily, going by its history).
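To make the distinction concrete, here's a minimal sketch of the "key stays on the device" case, using PyNaCl (all names are made up; this isn't any platform's actual implementation):

    # pip install pynacl
    from nacl.public import PrivateKey, SealedBox

    # Generated on the user's device; the secret half never leaves it.
    device_key = PrivateKey.generate()

    # Only the public half is uploaded to the provider's key server.
    uploaded_public_key = device_key.public_key

    # A sender encrypts against the public key...
    ciphertext = SealedBox(uploaded_public_key).encrypt(b"meet at noon")

    # ...and only the device holding the private key can decrypt.
    assert SealedBox(device_key).decrypt(ciphertext) == b"meet at noon"

    # If the provider generated and retained device_key server-side instead,
    # it could run the decrypt line itself -- and be compelled to.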
This viewpoint isn't a slippery slope, it's a runaway train.
"You moved into a neighborhood with lead pipes? That's on you, should have done more research"
"Your vitamins contained undisclosed allergens? You're an adult, and it didn't say it DIDN'T contain those"
"Passwords stolen because your provider stored them in plaintext? They never claimed to store them securely, so it's really on you"
Legislating that everyone must always be safe regardless of what app they use is a one-way ticket to walled gardens for everything. This kind of safety is the rationale behind things like secure boot, Apple's App Store, and remote attestation.
Also consider what this means for open source. No hobbyist can ship an IM app if they don't go all the way and E2E encrypt (and security audit) the damn thing. The barriers to entry this creates are huge and very beneficial for the already powerful, since they can afford to deal with this stuff from day one.
This isn't anything new, however. No messaging has ever been truly private; that's why encryption was invented: to keep secrets and to pass them in a way that can be observed without revealing the secret.
Telephones can be tapped, so people sold special boxes that would encrypt/decrypt the audio before passing it to the phone or to the ear. Mail can be opened, covertly or not. AIM was in the clear (I think at one point fully in the clear, later probably in the clear as far as the AOL servers were concerned)...
Unless the app/method is directly lying to users about being E2EE, it's not a slippery slope, it's the status quo. Now, there are some apps out there that I think I've seen that are lying. They claim they are 'encrypted' but fail to clarify that it's only private on the wire, like the AIM story: the message is encrypted while it flies to the 'switchboard', where it's plain text, and then it's wrapped in encryption again on the wire to send it to the recipient.
The claim here that actually makes me chuckle is somehow trying to paint e2ee as 'unsafe' for users.
If you are a grown adult and don't do research on "<insert any topic that could have a material negative impact on your life, but that is not currently on your radar as being a topic that could have a material negative impact on your life>" then that's really on you.
It definitely ignores that many people don't have time. If someone is working over 40 hours per week, plus maybe doing unpaid labor taking care of kids or elders, where are people supposed to find the time and energy to brush up on a million different topics they don't even know they might not know enough about? Especially if they might also have medical issues, or hobbies, or want to have any time at all to relax.
Obviously, one way to improve the situation would be to make sure people are paid fairly and not overworked and have access to good and affordable or free childcare and elder-care and medical care, but corporations don't want that either. If anything, they're incentivised to disempower workers and keep them uninformed, and to get as much time out of them as they can for as little money as possible.
80% of the population does not and will never do that level of deep dive on apps.
Same discussion for any form of technology, be it TVs or changing their car's oil.
the deliberate app-store-ification of all things computer is also designed to keep people from asking those questions -- just download it and install, pleb.
it's why the Zoomers can't email attachments or change file types: all of the computers they grew up with were designed so they never had to understand what happens under the hood.
Most people couldn't tell you how their car works, at least not enough to fix it. Is that handholding, too?
People can't be knowledgeable about everything. There's just too much information in the world, and too many different skills that could be learned, and not enough time.
A carpenter can rely on power tools without understanding fully how the tools work, and it's fine, as long as the tools are made to safe standards and the user understands basic safety instructions (e.g. wear protective eyewear).
To me, making sure that apps don't screw with people, even if they don't understand how the apps work, is roughly the equivalent of making sure power drills are made safely so they don't explode in peoples' hands.
Honestly I'm tired of every app trying to become the everything app.
Now TikTok wants to be a messaging app. Snapchat has a short video feed just like TikTok. WhatsApp only has a text feed, how long until they also add a video feed?
> nobody should believe for a second that WhatsApp or FB messages are truly E2EE
That's interesting. You think all the firms that audited WhatsApp and the Signal protocol it uses, and all the programmers who have worked there over the years and could spot a lie and leak it if it were true, are all crooks? A valid opinion, I guess, but I wouldn't call it "nobody should believe for a second".
(Curious you didn't mention Telegram; it is actually marketed as secure and E2E, and it has completely gimped "secret chats" that are off by default and used by almost nobody.)
I forget if it's WhatsApp that technically lets you sync chats in unencrypted form to iCloud, which is the "loophole" around this, though you can lock down your iCloud even tighter. I'm not sure if Apple can do much if you fully lock down your iCloud, and not sure if this has been legally tested? It's not a very advertised feature, it's just a setting.
iCloud backups are encrypted, and can be end-to-end encrypted.
Also, backups have nothing to do with the messages being end-to-end encrypted. Like if you don't use a passcode on the phone, the messages are still encrypted.
You mean you will read all the code, including dependencies, and compile it yourself to make sure? ;) Good for you, but good luck creating a popular E2E messenger then.
In my experience most forums have private messaging.
Additionally, I think it is fine to say "we don't support e2ee". I prefer honesty to a bad (leaky) E2EE implementation; at least the user can make an informed choice.
I agree. At least a take of "Yes, messages are stored on our servers" is honest. And whether they are accessed by anything other than a limited subpoena is then a policy or legal issue.
>In my experience most forums have private messaging.
Yeah, but it's kind of accepted that the forum owner could read it all if they so chose. Maybe this is a holdover from back in the old days when encryption was nowhere near default, during which forums arose.
Adding that private self-hosted forums can permit uploads of encrypted files, encrypted with a pre-shared secret or a secret shared over a private self-hosted Mumble voice chat server.
You can encrypt the content but not the metadata, not even the subject, unless you use a customized client that encodes it (like Delta Chat, which doesn't use a subject at all), but then you still have your email address exposed.
Email encryption for most people is sufficient even if the metadata is exposed. One can simply state in their email "Bing Bing Bong" or "Why did you not put the trash out?", which might mean to the recipient: "check the second SFTP server" or "let the cat outside" or "jump on my private Mumble chat server" or "get on my private self-hosted IRC server". The email message need not even be encrypted, for that matter.
The intended payload can be in a header-less encrypted file on a throw-away SFTP server in a tmpfs RAM disk.
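For what it's worth, a header-less blob like that is easy to produce; here's a rough sketch with Python's cryptography package (the output is just salt|nonce|ciphertext, with no identifying file format; the function names are mine):

    import os
    from cryptography.hazmat.primitives.kdf.scrypt import Scrypt
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal(passphrase: bytes, payload: bytes) -> bytes:
        salt, nonce = os.urandom(16), os.urandom(12)
        key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
        return salt + nonce + AESGCM(key).encrypt(nonce, payload, None)

    def open_blob(passphrase: bytes, blob: bytes) -> bytes:
        salt, nonce, ct = blob[:16], blob[16:28], blob[28:]
        key = Scrypt(salt=salt, length=32, n=2**15, r=8, p=1).derive(passphrase)
        return AESGCM(key).decrypt(nonce, ct, None)

Upload seal(secret, data) to the throw-away SFTP server; the recipient, holding the pre-shared secret, runs open_blob to recover the payload.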
I have never considered metadata a part of the term E2EE. It has always been about the message contents.
I understand that metadata is valuable information for spies/governments and that encrypting or hiding it is valuable for privacy. But if you use that definition, there are almost no E2EE protocols on the planet in use.
First and foremost, any protocol that uses Apple or Google push notifications is giving metadata to those organizations. Even WhatsApp, iMessage, Signal, and Telegram private messages all leak metadata, but the contents of messages are hidden from the provider.
I know, right? I admit that is mostly for people on Linux desktops. People on smart phones are 100% monitored regardless of encryption or fake E2EE that platforms pinky promise is really E2EE like Signal. Shame on Moxie, he knows better.
Ovaltine has a crapload of sugar. Don't drink that horse piss.
You can bring your own encryption to ANY messaging platform; it just won't be easy to use. E2EE really just makes it handy, so that users don't need to pre-share any keys.
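That last point is just Diffie-Hellman. A rough PyNaCl sketch of why no pre-shared secret is needed (illustrative only; real protocols add authentication and ratcheting on top):

    from nacl.public import PrivateKey, Box

    alice, bob = PrivateKey.generate(), PrivateKey.generate()

    # Each side combines its own private key with the other's public key;
    # the public keys can travel over the untrusted platform itself.
    alice_box = Box(alice, bob.public_key)
    bob_box = Box(bob, alice.public_key)

    msg = alice_box.encrypt(b"no pre-shared secret needed")
    assert bob_box.decrypt(msg) == b"no pre-shared secret needed"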
> as long as there are relatively good options of apps that do have privacy (and I think there are)
Once a platform has an enormous network effect like TikTok's, you don't really have any free selection of alternative apps. You are free to use one, but you will be the only sad user over there.
Regulations are needed that would force large platforms like TikTok and Instagram to enable federation, opening them up to actual competition. This way platforms would be able to compete on monetisation and usability, instead of competing on locking in their precious users more strictly.
> MySpace is well on the way to becoming what economists call a "natural monopoly". Users have invested so much social capital in putting up data about themselves it is not worth their changing sites, especially since every new user that MySpace attracts adds to its value as a network of interacting people.
> "In social networking, there is a huge advantage to have scale. You can find almost anyone on MySpace and the more time that has been invested in the site, the more locked in people are".
Lolololol. No, not regulations. Regulators. With the people we currently have voted into office in the US the only regulations we are going to get are ones saying Sam and Peter must look at everything you do all the time.
Until we stop voting for more authoritarianism, expect ever increasing amounts of authoritarianism.
Federation would never work. How would it work here? Either you are forcing TikTok to give pageviews to federations of spam, or you are letting TikTok decide which federations to work with, which essentially results in no federation.
I am fine with TikTok remaining one of those 'we watch what you are doing' platforms. Those who do not care can have that if they wish; I do not mind.
But bullshitting that it makes users more safe, that is... bullshit! Worse than that, it distorts public opinion and intentionally fools the gullible.
It might be fine if they presented an honest choice.
They are lying straight off, though... police and safety teams don't read messages only "if they needed to" to keep people safe. They do so for a large variety of other reasons, such as suppressing political dissent and asserting domination and control.
I don't think we can expect most people to understand TikTok's BS here either. I notice even a skeptic like you is uncritically echoing the dubious conflation of privacy and CSAM.
Anyone who doubts the requirement for e2e messaging should not be considered a skeptic, they are fully buying into whatever narrative LEO would like you to believe.
That it’s fine because it’s the CCP (commies see all) is a new one.
It’s at best subpar for the same reasons as if it was the usual Silicon Valley spyware.
I could leave well enough alone. But why? Because there are choices? There are five other brands of cereal that do not have 25% sugar? I’d rather be a negative nancy towards these on-purpose addictive, privacy-leaking attention pimp apps.
Fine with me too. I think many other apps (WhatsApp, FB, etc.) are using E2EE for PR purposes and are not actually good implementations of E2EE.
Good implementations of E2EE:
1. Generate the key pairs on device, and the private key is never seen by the server nor accessible via any server push triggered code.
2. If an encrypted form of the private key is sent to the server for convenience, it needs to be encrypted with a password with enough bits of entropy to prevent people who have access to the server from being able to brute force decode it.
3. Have an open-source implementation of the client app facilitating verifiability of (1) and (2)
4. Permit the users to self-compile and use the open-source implementation
If a company isn't willing to do this, I'd rather they not call it E2EE and dupe the public into thinking they're safe from bad actors.
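To make (1) and (2) concrete, here's a minimal PyNaCl sketch of a device-born keypair whose private half only ever leaves the device passphrase-wrapped (names and parameters are illustrative):

    import nacl.pwhash, nacl.secret, nacl.utils
    from nacl.public import PrivateKey

    device_key = PrivateKey.generate()  # point 1: created on the device

    salt = nacl.utils.random(nacl.pwhash.argon2id.SALTBYTES)
    wrap_key = nacl.pwhash.argon2id.kdf(
        nacl.secret.SecretBox.KEY_SIZE,
        b"correct horse battery staple",  # point 2: must carry real entropy
        salt,
    )
    wrapped = nacl.secret.SecretBox(wrap_key).encrypt(device_key.encode())

    # The server stores only (salt, wrapped); without the passphrase it
    # holds nothing it can brute-force at reasonable cost.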
Depends on your definition of "safe". Imagine an adult DMs a nude photo to a minor (or other kinds of predation).
If it's E2EE, no one except the sender and receiver know about this conversation. You want an MITM in this case to detect/block such things or at least keep record of what's going on for a subpoena.
I agree that every messaging platform in the world shouldn't be MITM'd, but every messaging platform doesn't need to be E2EE'd either.
The receiver has a proven and signed bundle that they can upload with the abuse report, so the evidence has even stronger weight. They can already decrypt the message, and they can still report it.
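A sketch of that reporting flow, assuming senders sign messages with a long-term key (PyNaCl; much simplified, real designs use schemes like message franking):

    from nacl.signing import SigningKey

    sender = SigningKey.generate()
    bundle = sender.sign(b"abusive message")  # what the recipient received

    # The recipient attaches the bundle to an abuse report; the moderator
    # verifies authorship against the sender's public key, never needing
    # the chat's encryption keys. Raises BadSignatureError if forged.
    sender.verify_key.verify(bundle)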
Yes, but this leaves reporting by the minor as the only way to identify this behavior. I'm not saying I trust TikTok to only do good things with access to DMs, but I think it's a fair argument in this scenario to say that a platform has a better opportunity to protect minors if messages aren't encrypted.
I'm not saying no E2E messaging apps should exist, but maybe it doesn't need to for minors in social media apps. However, an alternative could be allowing the sharing of the encryption key with a parent so that there is the ability for someone to monitor messages.
> I think it's a fair argument in this scenario to say that a platform has a better opportunity to protect minors if messages aren't encrypted
Would it be a fair argument to say the police have a better opportunity to prevent crimes if they can enter your house without a warrant? People are wary of this sort of thing not because they think law enforcement is more effective when it is constrained; how easily crimes can be prosecuted is only one dimension of safety.
> However, an alternative could be allowing the sharing of the encryption key with a parent
Right, but this is worlds apart from "sharing the encryption key with a private company", is it not?
> Police cannot access your E2EE DMs with a warrant.
They can and do, regularly. What they can't do is prevent you from deleting your DMs if you know you're under investigation and likely to be caught. But refusing to give up encryption keys and suspiciously empty chat histories in the face of a valid warrant is very good evidence of a crime in itself.
They also can't prevent you from flushing drugs down the toilet, but somehow people are still convicted for drug-related crimes all the time. So - yes, obviously, the police could prosecute more crimes if we gave up this protection. That's how limitations on police power work.
And they shouldn't be able to. Police accessing DMs is more like "listening to every conversation you ever had in your house (and outside)" than "entering your house".
>Police cannot access your E2EE DMs with a warrant.
Well, they kind of can if they nab your cell phone or another device that has a valid access token.
I think it's kind of analogous to the police getting at one's safe. You might have removed the contents before they got there but that's your prerogative.
Keeping children safe and prosecuting are two different concepts, only vaguely related. So no, being able to track pdfs doesn't make children safer. What keeps them safe is teaching them safe communication habits and keeping them away from things like TikTok.
We shouldn't make the world a worse place for every one because some parents can't take care of their children.
Not necessarily saying what they did was right, but I think there's a strong utilitarian argument to be made that what they did in that case was, in fact, the best way to keep children safe.
What's more dangerous? CSAM on the internet? Or actual child predators running loose?
That stuff spreads and re-spreads just like anything else people download off the internet. There's a pretty strong argument for shutting it down right away. IIRC most users were outside the jurisdiction anyway.
Even if one more person was prosecuted it was worth it. If you shut down an illegal website a new one will show up a month later, with the same people involved, and you achieved nothing.
In that case, don't use TikTok DMs to discuss your sexuality. I think it is strange that people feel like they have to be able to talk about sensitive topics over every interface they can get their hands on.
Similarly, in "traditional" media you may not want to have such a private conversation on a radio broadcast. Perhaps you would rather discuss it on the phone or over snail mail, as there is more of an expectation of privacy on those mediums.
Right, but it currently isn't a sensitive topic - homosexuality is, as of 2026, broadly legal in the United States. That's a relatively new state of affairs, historically speaking, and one which Afghanistan shared as recently as 2021.
I'm commenting in the context of the conversation, not in a vacuum. You could just as (in fact, much more) easily say that children shouldn't be on apps with private messaging enabled. That would help a lot more, and then we could keep e2ee.
I don't think you confused anything, except for the terminology the platform uses. There is an obvious expectation of privacy when sending direct messages!
Hasn't been true ANYTIME IN HISTORY. Hell it was well understood even by children that no conversation you had on the telephone was truly private. That's why cyphers were invented.
It makes certain users less safe in certain situations.
E2E makes political activists and anti-chinese dissidents safer, at the cost of making children less safe. Whether this is a worthwhile tradeoff is a political, not technical decision, but if we claim that there are any absolutes here, we just make sure that we'll never be taken seriously by anybody who matters.
Well, having no E2E encryption is safer than having a half-baked E2E encryption that has a backdoor and can be decrypted by the provider.
And as for TikTok's stance, I think they just don't want to get involved with the Chinese government over encryption (and give users a false sense of privacy).
>I think it's fine to say "You don't really have privacy on this app"
Disagree. To analogize why: privacy isn't heated seats, *it's seat belts*. Comfort features and preferences are fine to tailor to your customers and your business model. Jaguar targets a different market than Ford, and that's just fine.
Safety features should be non-negotiable for all. Both Jaguar and Ford drivers merit the utmost protection against injury in crashes. Likewise, all applications that offer user messaging functionality should offer non-defective, non-harmful versions of it. To do that, e2e privacy is absolutely necessary.
>I just don't see the point in expecting some sort of principled stance out of them.
This is the defeatism that adds momentum to a downhill trajectory. Exactly the opposite approach arrests the slide - users expecting their applications and providers to behave in principled ways, and punishing those who do not, are what keeps principles alive. Failing to expect lawful and upright behavior out of those you depend on, be they political leaders or software solutions providers, guarantees that tomorrow's behavior will be less lawful and upright than yesterday's. Stop writing these people a pass for this horrible behavior, and start holding them unreasonably accountable for it, then we'll see behavior start to change in the direction that we mostly all agree that it needs to.
The most effective protests against internet censorship came from massive grass roots movements, with users drawing a line in the sand that they will not tolerate further impositions on their freedom.
>In some ways I think it's worse for places like Facebook to "care about privacy" and use E2EE but then massively under-resource policing of CSAM on their platform.
The irony is so manifest of billions of people having their privacy stripped by politicians and business elites in the name of protecting our children, while those politicians and business elites conspire en masse to prey on and sex traffick our children. If these forces actually took those concerns seriously, rather than sensing them as an opportunity to push ulterior motives, they'd be eating each other alive, right now. Half of DC, half of Hollywood, and at least a tenth of most major college administrations would ALL be at the docket.
Tesla doesn't have parking sensors. They're a safety feature. There's lots of safety features in cars that are optional, we've got an entire rating system for the safety of cars.
We're talking about an app that's controlled by the CCP, I do expect them to take a principled stance - stances like Taiwan is a part of China and you can't be openly critical of the leader of the party. They don't have the same principles as you. You can force them to put in E2EE, but you can't force them to be honest about it or competent about it. I would rather know what we're getting than to push them to lie.
This is the same thing as the OpenAI/Anthropic thing. You've got Anthropic taking a principled stance and getting pain for it, and you've got OpenAI claiming to take the same stance but somehow agreeing to the terms of the DoW. Do you think it's more likely that Anthropic carelessly caused themselves massive trouble, or that OpenAI is claiming to have won concessions that clearly won't hold up in practice? I think it's naive to believe the former.
>We're talking about an app that's controlled by the CCP, I do expect them to take a principled stance
In the area of large scale internet service providers, who do you expect to take a principled stance, and why do you expect them to take it?
If the answer is, "nobody", then why keep singling out China? And if the answer isn't "nobody", then how do we apply the same pressures and principles to TikTok and other platforms that offer messaging?
This isn't some abstract concern. We know that WESTERN journalists, activists, and others have been murdered in acts of transnational repression that either began or were focused and abetted by communications surveillance aimed toward political dissidence. It seems incredibly naive to believe that current Western political and military leadership could ever be dissuaded from taking effective action (and such surveillance and repression campaigns certainly are effective) by moral qualms unsupported by strong checks and balances of accountability. In other words - this sort of repression most likely continues happening to journalists, activists, human rights lawyers, and other political dissidents, in our society, today. Enabled by the refusal of our service providers to protect us, their users.
It seems incredibly naive - civilization-threateningly so - to write a pass to anyone, let alone Larry Ellison, for opting to deliberately expose "his" users to this risk. Nothing is OK about this dereliction of responsibility towards them.
Children are just too effective a tool when building a surveillance state. We should have banned children from owning open computers a long time ago, just like we do with alcohol, driving licenses, etc.
Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal. We already heavily restrict the freedom of children, so there is plenty of precedent for this. Optionally, we could provide service points to unlock devices when they turn 18 to avoid e-waste as well.
This way it's the point of sale where you provide your ID, instead of attaching it to the hardware itself and sending it out to every single SaaS on the planet to do what they wish.
Would be a nightmare to implement and achieve the goal, but I have to say I think it’s more right than wrong. All of the data is very clear about the harms.
China has restrictions for social media and screen time for kids — how do they implement this?
I actually think this would be easier to implement than many of the current ID verification methods I've seen being pushed. We already have the infrastructure for selling age restricted goods, this is nothing new. Manufacturers that are unable to restrict their hardware in a "child" mode don't have to do anything and could simply continue selling to adults only.
It's obvious we're moving in a direction where we are going to get these restrictions in one way or another, and this is the only way I've come up with that doesn't come with serious privacy implications.
Most importantly, this solution would be simple for anyone to understand. You don't need to be a cryptography expert to understand there are child safe devices and then there are unrestricted devices for adults.
Would the parents comply, though? Many of the restrictions work because most adults agree they are OK. For example, with alcohol, children could drink as much as they want at home if adults permitted it.
If most adults were convinced there is an issue, there are probably enough lock-down modes even nowadays; I'm not sure it is a "technical" problem.
I strongly believe that most would, actually. All parents I've talked to have had issues with parenting their children's online activity. They know there are harmful things they want to prevent them from accessing, but it is simply too hard to configure and set up the existing tools for it. (Besides, every single friend they have has no restrictions, so it all seems pointless.)
I can also see large support for uploading ID to various services when talking about kids, but when you re-frame the question to adults, most seem to dislike the idea immensely.
Sure there will be children with access to unrestricted devices, just like we had kids with porn mags hidden in a forest somewhere back in the day, or how that one sketchy guy was buying alcohol, etc. But I think this is an acceptable level of risk for whatever harm people want to prevent.
Definitely makes it easier for parents. It also normalizes screen time limits for kids. When none of your kids' friends have screen time limits, it's harder to enforce. When at least there's a few of them, it's easier to get buy-in from your kids.
At that point it's on the parents. We can't stop parents from giving their kids alcohol or drugs either. (Not saying internet access is necessarily on the same level as that but you get the point.)
Consider that even with something as divisive as covid lockdowns and vaccines, the overwhelming majority of people complied with government instructions.
There are a minority of people currently refusing to vaccinate their children properly, and their fucking around is being found out with measles outbreaks in various countries.
Why would this be different? Why wouldn't it be a minority of parents permitting their children to drink, to smoke, to use unrestricted computing resources?
Oh, that actually seems ... bad. On the gripping hand... restricted in which way? I learned to program on the BBC B, for instance.
I keep thinking that computers that are actually made to be good for children should be a thing. Perhaps like "A Young Lady's Illustrated Primer" ( https://en.wikipedia.org/wiki/The_Diamond_Age )
Good ideas can come from anywhere. Shutting yourself off only does a disservice. You don’t need to replicate 100% of another society to recognize individual strengths.
Yeah, sounds like something from an authoritarian police state.
> Essentially, this is a cross-platform, cross-device, government-led parental control system that has been painstakingly planned out by Beijing.
> The rules are incredibly specific: kids under eight, for instance, can only use smart devices for 40 minutes every day and only consume content about “elementary education, hobbies and interests, and liberal arts education”; when they turn eight, they graduate to 60 minutes of screen time and “entertainment content with positive guidance.” Honestly, this newsletter would have to go on forever to explain all the specifics.
We don’t do this in free societies. Let the parents decide.
It's a nightmare to some extent to prevent underage people from consuming alcohol if you want to phrase it that way. But we don't try to ban stores from selling alcohol because of concerns children will be drinking it. Instead we require the store checks for ID.
Which has never worked. Korea had a system to prevent kids from gaming after midnight for something like 15 years. All it did was make Korean kids very good at memorizing their parents ID.
In China they link the ID to a phone number (via mobile carriers) and the online services require you to authenticate using the phone (SMS etc.) Unless the kids are able to secretly access the parent's phone there's no low-effort way to work around the system.
I don't know about Korea but if memorizing an ID number works, then that's just a badly designed system.
I'm not sure what your argument is really, unless you're saying there's technically and absolutely no feasible way to securely verify the age of a person before allowing them to access an online service (even if you allow the government to be authoritarian)
When I signed up for mobile service or for internet service in China (I don't remember the specifics), I was given half a dozen SIM cards for use in my family. So they were all tied to my or my wife's name, but used by anyone who needed one. I believe the in-laws got at least one or two, and my kids would have gotten one had they been old enough to have their own phone. I don't know if there was any rule that would restrict who we give those cards to.
The actual users of each SIM card did not have to identify themselves. So at least then it wasn't about age controls, but it obviously would allow tracing the owner eventually.
Maybe it does work exactly as intended. It gives parents more leverage to restrict their kids gaming but many parents just don't care. And it's ok I guess, the society probably needs some flexibility in raising the next gen.
Parents are also allowed to restrict their children access to alcohol and cigarettes, but it seems a government ban on them buying those things works better
Alcohol is totally legal for a child to drink in my state as long as consumed privately. It's only illegal for them to buy. My parents gave me alcohol all the time in order to teach me about it and the result was that I didn't really drink when I turned 21 or have any urge to sneak it.
That's exactly how I'm doing technology. I sign my kid up for kid accounts. And I apply parental controls.
Given the ease with which kids who want them can get any of those things in schools, it's not clear that the government ban is actually doing anything of significance or that the reduction in usage isn't more a result of convincing people that those things are actually bad for them so they choose not to partake despite the continued widespread availability.
Notice that consumption of those things is also down for adults even though adults are not banned from getting them.
Doesn't seem to be a universal truth to me. As a teenager I had rather easy access to both cigarettes and alcohol in spite of the usual legally imposed age restrictions. I didn't care what the gov't thought about it. I did care about what my parents would do if I got caught drunk, though. That was my real barrier.
I don't think debazel was saying that children should have been banned from owning computers for the benefit of the children. He was saying that children should have been banned from owning computers so that the government would have no excuse to regulate what's allowed on computers.
So we agree that governments are only using the safety of children as a pretext to extend their control over people's lives; otherwise there are better solutions to protect children from the harms of the internet.
The most important principle in the modern age is the freedom to prey on wallets. You can't give parents tools to conveniently restrict what their children do. Impressionable minds ought to live in a Lord of the Flies state where they are bombarded with stuff to nag their parents about, giving them FOMO about what their friends have that they don't.
At the same time, I remember growing up in the internet's wild west and bad encounters weren't an issue for me because of the golden rule I was taught from the start: you don't give your personal information and you don't interact with complete strangers. Learning to navigate the web instead of being in a walled garden was helpful in many ways.
The better question to ask ourselves is: does the capability to gather more information also lead to more power to act on that information? If investigative resources are spread thin already, it's not like they're going to catch more criminals by gathering more there. Repelling questionable individuals off the platform with lots of transparency -is- an effective way, but it's just a specific tool for a symptom.
I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially, the real problem is discovery and the warding off of social outliers in a way that doesn't put all responsibility on opaque algos or corporations.
Part of their E2E keys could be shared in an intentionally obtuse way, like mailing an item or a physical "friend code". That way parents and vetted friends can have their privacy.
You don't need to tie an ID to someone's person to get positive confirmation of someone's poor behaviour. If someone crossed the line, then parents can see it and escalate. In addition, what would happen to a child with abusive parents who can then arbitrarily restrict and deny the child's freedom to communicate? I did not have this myself, but without free access to other minds and information I would have been duller. Does a large information dragnet really serve our collective interests, or are more precise tools needed?
> I think part of a better solution is to give parents and children better tools to manage their social graph themselves. Essentially, the real problem is discovery and the warding off of social outliers in a way that doesn't put all responsibility on opaque algos or corporations.
This is actually a key consideration for the proposed implementation. The biggest issue for parents when restricting their children's online activity is that they simply don't understand the tools available for it.
By having a "child mode" iPhone, parents don't have to know any of that. They simply buy the iPhone Kids for their children and then get a plain iPhone for themselves.
If these restrictions were to actually be enforced by law as well, then it would make it very easy for teachers and other guardians to check if a device is appropriate for the child using it.
From what I've seen, the bad effects don't necessarily just come from free access to the internet, but that everyone around them in their social group has a video camera that can covertly record, they're all immature children and thus you cannot slip up once or you get kid cancelled, and they start doing a collective dissociative freeze response in a self-imposed emergent panopticon as a result.
So if the teen phone turned into a restricted "call mom" device with no cameras, with neon-yellow obvious fuck-you coloring and a restricted set of apps, and police took away a full phone much like they take away cigs and beer, it might be enough to break the critical mass that creates this issue. They can have dedicated cameras for video club, use the family computer, have an Xbox or Switch, and have whatever tech experience millennials had, the last generation not to see exponential increases in anxiety, depression, and sexlessness.
It's the covert camera plus internet that is the key issue.
Locking down children’s devices doesn’t stop adults sharing illegal content with other adults though, so there would still be pressure to monitor communications between adults.
At some point, laws become an ineffective tool for preventing malevolent people from acting in detrimental ways, no matter what they state. But the prejudices of wicked states will continue to hit the general public ever harder as ever more drastic laws, lacking any balance, are enacted.
Indeed, way past time. Though no CEO would admit publicly what addiction to attention/social media, gaming, and general screen use does to children. Of course this should have been regulated similarly to alcohol, but billions would dry up, and it's much easier to witch-hunt marijuana and illegal raves, right?
> Instead, children would own special devices that are locked down and tagged with an "underage" flag when interacting with online services, while adults could continue as normal.
California is mandating OSes provide ages to app stores, and HN lost their mind because it's a ban on Linux.
> California is mandating OSes provide ages to app stores,
They forgot to put in the provision which exempts apps that do not need an age rating? As in: everything OS-related.
Sounds like a good way to get rid of snap, at least, since that is where all the commercial bloat is located. The last time I did a fresh Debian install, I don't remember installing any app from the OS repository that would require age restrictions (afaik).
Agreed. Putting the burden on parents is quite something:
1. You end up being the bad guy; other parents don't restrict their kids' internet usage, etc. Some folks would argue to just not set up restrictions and trust them. But it's a slippery slope and puts kids in a weird position. They start out with innocent YouTube videos, but pretty quickly a web search or even a comment can lead them to strange places. They want to play games online, but then creeps abuse that all the time. Even if you trust them not to do anything "wrong", it's a lot to put on their shoulders.
2. If you want to put restrictions in place, even if you're an expert, the tools out there are pretty wonky. You can set up a child-protection DNS, but most home routers don't make it easy (or even allow you) to set a different DNS server. And that's not particularly hard to circumvent. I suppose a proxy would be a more solid solution, but setting that up would be major yak shaving. Any "family safety" features (especially those from Microsoft) are ridiculously complicated and often quite buggy. Right now, I've got the problem on my plate that I need to migrate one of my kids' accounts from a local Windows account to a Microsoft account (without them losing all their stuff), because for local accounts it seems the button to add the device is just missing? Naturally, the docs don't mention that; I had to do research to arrive at that hypothesis. The amount of yak shaving, setup, and configuration you have to do for a reasonable setup is just nuts.
3. If you're not good with tech - I don't see how you have _any_ chance in hell to set up meaningful restrictions.
Some countries are banning social media - sure, that's one thing. But there's a _lot_ of weird places on the internet, kids will find something else. I for one would appreciate dedicated devices or modes for kids < 18. Would solve all this stuff in a heartbeat.
After struggling with this problem for a while, we started using Qustodio. It's not perfect by any means, but it's the most broadly effective and usable tool for parental control I've found. Loads better than the confusing iOS native screen time tools.
TikTok has a drug-like effect on the brain. Multiple studies show a clear link between excessive TikTok engagement and increased levels of anxiety, depression, and stress. Maybe it is time we regulate it like a drug?
Is that because of engaging with tiktok, or because of the content on tiktok? If the app was exclusively pictures of kittens and nice flowers you saw on your commute, would it have a detrimental effect?
I don't understand why all the child safety systems require age verification. Why not have a single setting on a smartphone that sends a 'child' flag to every single app or website, which then reacts accordingly? As long as you ensure that the browser can't be changed or modified, it should be fine.
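As a sketch of what that could look like on the service side (the header name here is made up; nothing like this is standardized):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/feed")
    def feed():
        # A device in child mode would send something like
        # "X-User-Is-Minor: 1" with every request it makes.
        if request.headers.get("X-User-Is-Minor") == "1":
            return "age-appropriate content"
        return "unrestricted content"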
Does it matter? It's just some arbitrary company. They do have the freedom to decide those things however they want, right? The customer can then decide whether to switch or not.
It matters because if it works and people continue using the platform then other providers will follow and the only remaining E2EE providers will be niche.
If there is nothing else, then you as a customer have screwed up with it before, right? And then the entire strategy/philosophy maybe needs to be reviewed?!
Or, in other words: if there is no alternative, this is due to your own faults. Either deal with it, or find ways to undo your mistakes.
Exactly. And once those are established: why not have all discussions there, not just the ones where you explicitly want to hide something (for whatever dubious or legit reason)?
Ultimately your neighbors must buy the argument. The reason why this argument wins is not because framing is so tricky, but because it connects with the values of your neighbors. Trying to convince people that these aren't actually their values is swimming upriver.
> They should just have no DM feature at all, then; make all messages publicly visible.
This makes no sense.
I can discuss something in a bar which is not a very private conversation, I wouldn't care if someone else hear what I'm saying. But I also don't want someone to record it and post it on the internet to be seen by the whole world.
I suppose they mean that apps shouldn't brand their non-E2EE chat features as private or personal, since privacy is what users take as the default assumption when interacting in one-to-one chat.
In a bar you're not speaking directly into a microphone that is permanently saving everything you say for later instant access by every government and advertising agency that wants to prosecute you or invade your privacy to sell you something
Isn't that something we asked for? We keep asking for parents to parent their children instead of getting age verification laws, and that is what that looks like.
I fail to see the link between private conversations/DM and E2EE.
To quote a comment I made some time ago:
- You can call your service e2e encrypted even if every client has the same key bundled into the binary, and rotate it from time to time when it's reversed.
- You can call your service e2e encrypted even if you have a server that stores and pushes client keys. That is how you could access your message history on multiple devices.
- You can call your service e2e encrypted and just retrieve or push client keys at will whenever you get a government request.
E2EE only prevents naive middlemen from reading your messages.
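The standard countermeasure to that last one is out-of-band fingerprint comparison (what Signal surfaces as "safety numbers"). A minimal sketch:

    import hashlib

    def fingerprint(public_key_bytes: bytes) -> str:
        # First 24 hex chars of a SHA-256 digest, in groups of four.
        digest = hashlib.sha256(public_key_bytes).hexdigest()
        return " ".join(digest[i:i + 4] for i in range(0, 24, 4))

Both users compare this over a channel the server doesn't control; if the server pushed a swapped key, the fingerprints won't match. Of course, almost nobody actually does this.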
Fundamentally, actual E2EE is a complicated problem, and probably not very user-friendly. It is full of technical trade-offs, and mistakes are very common. Or they lead to situations that people do not want. Like, if you lose your phone or it breaks, how do you get your history back? What if you also forgot the password? Or it was stored in a local password manager?
It is a phrase that sounds good. But actually doing it effectively, in a way that the average user understands and can use with minimal effort, is very hard.
You could have a reasonable legal system where privacy is guaranteed. But you do not need end-to-end encryption for that to be a thing. It really is an orthogonal issue.
Sure, however kids these days often can't socialize irl - should kids be isolated from friends because they're unable to have any private conversations at all?
During times in which I was unable to socialize irl (eg school holidays), and unable to talk to my friends online, I can confirm that the isolation was not good for my mental health.
This might be off-topic for the thread but on-topic for child safety... I'm surprised people are being myopic about age verification. Age verification should be banned, but people ignore that nowadays most widely used online services already ask for your age and act accordingly: Twitter, YouTube, Google in general, any online marketplace. They've already got so much data on their users and optimize their algorithms for those groups in an opaque way.
So yeah, age verification should be taken down, as well as the datamining these companies do and the opaque tuning of their algorithms. It baffles me: people are concerned about their children's DMs but are not concerned about what companies serve them and what they do with their data.
> Where are these mythical people who aren’t concerned with both?
People don't care about "what companies serve them". They only care if the children see sexual content (or things considered deviant). Once sexual and deviant content is filtered, they're happy to give away their children's development to the company's algos.
In effect, the people don't want to concern themselves with what their children consume, unless they're outraged by things normally taboo in their age group. Besides, if everyone is in it "it's not that wrong". They seek reactive entertainment rather than proactive engagement in their children's development.
Monitoring children's DMs is the responsibility of the parents, not megacorps. If a parent wants to install a keylogger or screen recorder on their child's PC, that's their decision. But Google should not be able to. Neither should... literally anyone else except maybe an employer on a work-provided device.
> Monitoring children's DMs is the responsibility of the parents, not megacorps
Absolutely. But what responsibilities do megacorps have? Right now, everyone seems to avoid this question, and make do with megacorps not being responsible. This means: "we'll allow megacorps to be as they are and not take any responsibilities for the effects they cause to society". Instead of them taking responsibilities, we're collecting everyone's data and calling it a day by banning children from social networks... and this is because there are many interests involved (not related to child development and safety).
Human operators were not required of The Bell Telephone Company by law. Bell switched to mechanical switching stations as soon as doing so was economically advantageous.
(Reconsider my post. I'm arguing for no regulation.)
I'd say that at minimum social networks need to be required to show how their algorithm works and allow users control over their data. Users must be able to know why a piece of content was served to them. Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests, that this is the bare minimum for a free society.
Ideally, users should be able to modify the algorithm, so they can get just what they want, while simultaneously maximizing free speech. If something isn't illegal, it shouldn't be hidden or removed.
> Nowadays social networks are so pervasive in society, affecting it and molding it to unknown interests
I think this is the real issue. We should free ourselves from "social networks" such as Tiktok, Facebook, Instagram and others. Even with direct messages truly E2EE, they create countless other privacy problems. They enable surveillance of people at scale and should be completely shunned for that reason alone.
They should have a responsibility of transparency, accountability and empathy towards users. They should work for the user and in the interests of the user. But multiple constraints make this impossible in practice.
> Monitoring children's DMs is the responsibility of the parents, not megacorps.
Yup, but the tools provided make that easy or hard.
But putting that emotive bit to one side: megacorps have a vested interest in not being responsible to children. They need children's eyeballs to drive advertising revenue. If that means sending them corrosive shit, then so be it.
It's a bigger issue than encryption; it's editorial choice.
I'm all for helping parents do this. Any site requiring age verification should indicate this as an HTTP header or whatever, and the browser I allow my child to use should respect that, and the parental controls should be easy for me to engage with.
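Something as simple as this on the client side would do it (the header name is hypothetical; no such standard exists today):

    import requests

    def fetch_for_child(url: str) -> str:
        resp = requests.get(url, timeout=10)
        # Site declares itself age-restricted via a response header;
        # the child's browser or filtering proxy refuses to render it.
        if resp.headers.get("X-Age-Restricted"):
            raise PermissionError(f"{url} declares itself age-restricted")
        return resp.text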
Many parental controls are massive pains to get working. Apple does fairly well (although I don't get a parental PIN to unlock the phone, which is normally fine as my child will tell me, but in some circumstances it wouldn't be), but it does require the parent to be in the Apple ecosystem too.
EA and Microsoft, however, are terrible, especially as it's likely the child will be playing Fortnite/Minecraft and the parent won't have ever touched it. I think with Minecraft we had to make something like 5 or 6 accounts across three different sites to allow online Minecraft play from a Nintendo Switch.
Why? Plenty of children benefit from talking to other people. Some children need careful monitoring, and some children shouldn't be allowed to use DMs, but it's not universal and should be up to the parents.
Control over who they can talk to (if needed), certainly monitoring of both who they talk to and in many situations what the contents are
At some point between the age of 0 and 18 the child has to be fully ready for an independent world. A cliff edge is a terrible idea, allowing 3 year olds unmonitored uncontrolled conversations with strangers is a terrible idea, but not allowing 15 year olds to talk to their friends is a terrible idea.
There are a variety of ways (see "Verifiable Credentials") that ages can be verified without handing over any data other than "Is old enough" to social media services.
> Age verification obliviates anonymity on the internet.
How so?
Please explain in detail, because there are already schemes such as "verifiable credentials" which allow people to prove they are of age without handing over ID to online services.
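A much-simplified sketch of the idea (real verifiable credentials add holder binding and unlinkability; this only shows the selective-disclosure part):

    import json
    from nacl.signing import SigningKey

    issuer = SigningKey.generate()  # e.g. a government ID authority

    # Issued once to the user's wallet; contains no other personal data.
    credential = issuer.sign(json.dumps({"over_18": True}).encode())

    # The website verifies against the issuer's published key and learns
    # only the single bit it needs.
    claim = json.loads(issuer.verify_key.verify(credential))
    assert claim == {"over_18": True}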
> You need to 100% trust those verification services.
First link - mitigation: use a well supported standard like OIDC, not a home-cooked scheme. Duh.
Second link - this is part of the problem such schemes as verifiable credentials are designed to address, random third parties collecting ID they don't need.
Yes, any system needs to be executed well. Neither of these really displays that.
If _the government_ can't be trusted not to use a dumbass scheme, then no, it isn't a duh moment. You don't exactly get to dictate how the government implements it!
The point is that systems today aren't really well executed. So it is unreasonable to expect them to be well executed.
If you can't trust people not to build the bomb well - then don't let them build a bomb.
> You don't exactly get to dictate how the government implements it!
Who was talking about the government implementing it? I wasn't.
And also "This has been done poorly in the past so we should never attempt to do it again, better" seems an odd way to go about things. There are well put together schemes by international standards bodies in this area now. Neither of the above links followed them.
I mean, your example of the ATO there isn't even an age verification thing, it's a defective clone of OIDC, so by that logic we should ban all SSO or identity delegation solutions?
Because we don't believe anyone will ever use the standards in this area, despite loads of companies and government bodies actually using OIDC already?
> I mean, your example of the ATO there isn't even an age verification thing, it's a defective clone of OIDC, so by that logic we should ban all SSO or identity delegation solutions?
MyGovID _is_ an age verifier. Sorry. The successor after the rebrand is called myID [0], and advertised as:
> myID is a secure way to prove who you are online.
---
> I'm not really sure what you're driving at.
Clearly. You seem to think that because it might one day be done correctly, by one group, the rest of the world is safe. However, over in this reality, we have fuck ups by governments and private corporations, who are the people the rest of the world actually deals with.
You cannot force these real groups to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws, because it will always be done the wrong way by someone.
> The successor after the rebrand is called myID [0], and advertised as:
It's an identity scheme and SSO solution for accessing government services. As said at [0] in the "What is myID" section.
I sincerely hope that they're using something standard and well tested like OIDC behind the scenes this time, because otherwise it's ripe for another fuckup like the one you linked. If it is also used for age verification that appears to be secondary.
> You cannot force these real groups to actually follow good practices. Thus, in practice, everyone gets fucked when you bring in these laws, because it will always be done the wrong way by someone.
So we need to stop the Australian government from ever using an SSO/identity solution again because it can't be trusted to do it properly, having messed up in the past, and the rest of us have had to live with the consequences. And as they aren't the only ones to have messed up, companies do it all the time too, we should also ban all identity and SSO solutions (because that's what we're talking about in this thread, banning of age verification, not mandating it).
I don't think you get to call out age validation as a uniquely hard problem that cannot possibly be made safe, but allow other identity-style services a pass. There are many areas in which we (through the government) can and do mandate good practice, both by government and private entities.
You should probably stop pretending you know what myID is, and what it does.
It's a sovereign identity verification service that is not limited to above-PL2 verifications. There are age-only accredited entities in the registry.
It's one of the approved verification tools for the Online Safety Act 2021. It was renamed as part of the passage of the law. You're just not forced to use it for verification.
And yes, it does it poorly, and does not follow a standard. It's using Vanguard's PAS behind the scenes [1], with ServiceNow extras tacked on. Until they rearchitect the entire damn thing.
So... as I might have doxxed myself a little just now... no, uploading identity documents is never a safe process. It's a king's hoard of treasure before nations that never sleep.
Name a provider, and there will be a breach, and it will continue to affect the victims most of their lives.
In the context of "Age verification should be banned" though, we're already talking about legislative intervention. If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.
Perhaps what we're really saying is "Ban age verification that collects lots of personal information".
Or perhaps we could distil it down further to "Ban unnecessary collection and storage of PII". In which case, Congrats! You've arrived back at the GDPR :)
Which I think is a good thing, and should be strengthened further.
(Also the other response to "because most implementations are not going to be like that" is "why not?". People are already building such ecosystems.)
> If there's no particular problem with schemes that are like that then we don't necessarily need a blanket ban on age verification.
There is a problem with schemes like that.
The way computer security works is, attacks always get better, they never get worse. A scheme that nobody has found any privacy holes in when it's enacted will have one found a week after.
The way governments work is, the compromise bill passes if the people who care about privacy support it because then it has the votes of the people who care about privacy and the people who want to ID everyone. But then when the vulnerability is found, the people who care about privacy can't get it fixed because they can't pass a new bill without also having the votes of the people who want to ID everyone, and those people already have what they want. More specifically, many of them then have what they really want, which is to invade everyone's privacy, as they were hoping to do once the vulnerability was found.
Which means you need it to be perfect the first time or it's already ossified and can't be fixed. But the chances of that happening in practice are zero, which means it needs to not happen at all.
/goes on to discuss how government legislation of specific schemes is the issue, not the schemes themselves.
Then we don't legislate specific schemes? The GDPR doesn't do that, for instance; it spells out responsibilities and penalties but doesn't say "Thou shalt use this specific algorithm".
Remember, this discussion started with a call to ban all age checks, which itself is a government action and restriction on the agency of private business.
There are ways that private entities can implement age checks both securely and without leaking much other information, so it seems very heavy-handed to ban them. Private entities are building such systems between themselves already, without government mandates on the specifics.
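For the sceptical, here is a minimal sketch of the selective-disclosure idea behind such schemes, in Python. This is a toy, not a real standard (production systems use things like W3C Verifiable Credentials, SD-JWT, or BBS+ signatures), and every name and value in it is made up: the issuer signs salted hash commitments to each attribute, and the holder later opens only the age attribute, so the verifier never learns the name or date of birth.

    import hashlib, json, os
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def commit(name: str, value: str, salt: bytes) -> str:
        # Salted hash commitment to a single attribute
        return hashlib.sha256(salt + f"{name}={value}".encode()).hexdigest()

    # Issuer (e.g. a government identity provider) signs only commitments
    issuer_key = Ed25519PrivateKey.generate()
    attrs = {"name": "Alice Example", "dob": "1990-01-01", "over_18": "true"}
    salts = {k: os.urandom(16) for k in attrs}
    credential = json.dumps({k: commit(k, v, salts[k]) for k, v in attrs.items()},
                            sort_keys=True).encode()
    signature = issuer_key.sign(credential)

    # Holder discloses exactly one attribute to the verifier
    disclosed = ("over_18", "true", salts["over_18"])

    # Verifier: check the issuer signature, then the single opened commitment
    issuer_key.public_key().verify(signature, credential)  # raises if forged
    name, value, salt = disclosed
    assert json.loads(credential)[name] == commit(name, value, salt)
    print("age check passed; name and dob stay hidden")

A real zero-knowledge scheme goes further, e.g. proving an over-18 predicate directly from the birth date without a dedicated attribute, but the trust shape is the same: the verifier learns one bit, and the issuer does not learn where the credential was used.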
The difference is that IRL establishments don't sell off that data to anyone else, nor do they have the ability to collate that data with data from other establishments to make a profile of you.
You think the nightclub that scans your driver's license magstripe isn't selling your data off when they could be making money from it? Look at PatronScan, Intellicheck, Scantek, and TokenWorks. Yeah, a dingy bar where a dude checks IDs visually isn't selling anything, but a nightclub with a quick swipe totally is.
The problem with this discussion is that this is a wonk solution for wonkish times. You're trying to thread the needle between various reasonable compromises. Ironically, due to social media, that is simply not how politics and lawmaking work any more. Instead it's an emotionally driven fight between various sorts of moral panic, and the only option is to get people more mad about surveillance than about "think of the children".
You might be able to get somewhere by getting a tech company on your side, but they generally also hate adult content and don't mind banning it entirely.
(people are not going to get age verification _banned_ any time soon! That's simply not going to happen!)
The Party doesn't care about the Proles, only the members of the Outer Party.
I think that it's rather funny that people like to appeal to 1984 as if the only point of Mr. Orwell was that surveillance is bad, missing the entire point about stuff like the control of the language or the idea that the only self-justification of the (Inner) Party is power for the sake of power (see also: The Theory and Practice of Oligarchical Collectivism).
I'd even go as far as to say that if "telescreens are horrible" is the only thing that someone takes away from 1984, they've frankly missed the point.
A slippery slope can be a valid argument if you provide the actual reasoning behind it; as I was taught, it can then be used as deductive argumentation (though that does not say much). On its own it is a fallacy.
I don't see how verifiable credentials with zero knowledge proofs provide that, however.
They don't believe that. It makes it more difficult to deal with governments, is all. Big Brother needs your messages from time to time, and TikTok is not willing to risk getting shut down to argue against that. We can't have pesky principles getting in the way of money.
The government are able to access your conversations, data and connections with e2ee in place already. I don't see how not having e2ee would have an effect on that ability in any way.
You actually choose to believe that these trillion dollar tech monsters run by some of the most despicable people on the planet are being forthright in that they have no ability to do this on behalf of some government request? For something that isn't open source and can't be audited, and can be changed at the next upgrade without any oversight? I find it so much more unlikely that they can't, and that informs my normie use, mostly.
I don’t really understand how we are supposed to believe in e2ee in closed proprietary apps. Even if some trusted auditor confirms they have plumbed in libsignal correctly, we have no way of knowing that their rendering code is free of content scanning hooks.
We know the technology exists. Apple had it all polished and ready to go for image scanning. I suppose the only thing in which we can place our faith is that it would be such an enormous scandal to be caught in the act that WhatsApp et al daren’t even try it.
(There is something to be said for e2ee: it protects you against an attack on Meta’s servers. Anyone who gets a shell will have nothing more than random data. Anyone who finds a hard drive in the data centre dumpster will have nothing more than a paperweight.)
The unfortunate fact about E2EE messaging is that it is hard to do. Even if you do have reproducible builds, the user is likely to make some critical mistake. What proportion of, say, Signal users actually compare any "safety numbers" for example? There is no reason to worry about software integrity if the system is already insecure due to poor usability.
Sure, we should all be doing PGP on Tails with verified key fingerprints. But how many people can actually do that?
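(For anyone wondering what comparing safety numbers even involves: both parties derive the same short fingerprint from the pair of identity public keys and read it to each other out of band. A toy illustration of the idea; this is not Signal's actual derivation, which iterates SHA-512 over versioned inputs and user identifiers, and the keys below are random stand-ins.)

    import hashlib, os

    def safety_number(key_a: bytes, key_b: bytes) -> str:
        material = b"".join(sorted([key_a, key_b]))  # both sides get the same order
        digest = hashlib.sha512(material).digest()
        digits = str(int.from_bytes(digest[:25], "big")).zfill(60)[-60:]
        return " ".join(digits[i:i + 5] for i in range(0, 60, 5))  # 12 groups of 5

    alice_pub, bob_pub = os.urandom(32), os.urandom(32)  # stand-in identity keys
    assert safety_number(alice_pub, bob_pub) == safety_number(bob_pub, alice_pub)

If a man in the middle swaps in his own identity key, the two fingerprints diverge; spotting that divergence is the entire point of the comparison, and it only works if users actually do it.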
Same, my default MO is assuming 'e2ee' is broken and unsafe by default. Anything that I truly don't want sent over the wire would be in person, in the dark, in a root cellar, underwater. Not that I've ever been in the position to relay juicy info like that. Hyperbole I know, but my trust begins at zero.
With e2ee, please remember that it is important to define who the ends are.
Perhaps your e2ee is only securing your data in transit, if their servers are considered the other end.
Also, one thing people seem to misunderstand is that for most applications the conversation itself is not very interesting; the metadata (who talks to whom, when, how many messages, etc.) is 100x more valuable.
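To make that concrete with some hypothetical records: even if every message body is perfectly encrypted, the envelopes alone rebuild the social graph.

    from collections import Counter

    # (sender, recipient, timestamp) - message bodies never needed
    envelopes = [
        ("alice", "bob", "2025-01-01T09:00"),
        ("alice", "bob", "2025-01-01T23:10"),
        ("alice", "oncology_clinic", "2025-01-02T08:30"),
    ]

    graph = Counter((sender, recipient) for sender, recipient, _ in envelopes)
    print(graph.most_common())  # who talks to whom, and how often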
As much as I want to agree with you, the people who like TikTok make up a significant amount of the population, and their opinions do matter--arguably more than yours, due to sheer numbers.
Smugly dismissing them doesn't do you any favors except for making you feel good about yourself for a few seconds.
You’d be surprised how many people don’t give a shit about TikTok. It’s just another blip in history like Facebook, Instagram, Vine, MySpace and others before them.
You say that like the typical 18 year old has any idea what they're doing when it comes to proper encryption and communication safety. That is never going to be the case.
It's a communication channel attached to the most popular social network for young people. Obviously they're going to use it a lot. They use it for the extreme convenience.
Listed in the article are the National Society for the Prevention of Cruelty to Children and the Internet Watch Foundation, which monitors and removes child sexual abuse material from the internet.
The recent Meta lawsuits also mention opposition from the National Center for Missing and Exploited Children and Meta's own executives: Monika Bickert (head of content policy) and Antigone Davis (global head of safety). Both executives mention the danger end-to-end encryption poses to children when attached to a social media graph.
> Both executives mention the danger end-to-end encryption poses to children when attached to a social media graph
So the fact that we welded a messaging platform onto a global-child-discovery-service is bad? Sure. Not encrypting that messaging platform is sort of closing the barn door after the horse has gone walkabout
It is a considerably larger threat for anonymous strangers to be able to establish private lines of communication with children than for them to know that Lisa Simpson (8) lives in Springfield and attends Springfield Elementary. In terms of discovery, most people are already aware that children can be found in school.
I don't see how you arrive at that conclusion? The risk in being able to connect to a random victim somewhere in the world appears to be strictly less than being able to target a specific victim in your local geographical area to whom you could gain physical access
Hence why nobody is up in arms (in either direction) about e2e encryption for Chatroulette.
Good to see this called out. The HN echo chamber has this really terrible habit of attributing any disagreement with the prevailing opinion here to big, shadowy forces with evil motives (billionaires, corporations, three letter agencies, politicians, etc) instead of facing the reality that sometimes well meaning people just have different values and priorities than us. Very rarely does that narrative get challenged directly.
> if the server operator was malicious, they could just push different client-side JavaScript
Same as with OS updates, browser updates, dependencies used by the OS, dependencies used by the browser. Also you can run malicious software such as keyloggers and you're compromised.
That argument doesn't mean E2E (even web based) is snake oil. Browsers just give you more points of failure.
The difference is: in web based cryptography, you get the cipher text and the code to decrypt it from the same source. Hijacking OS updates is arguably much harder than hijacking one particular web server, and there is pretty much no effective defense against malicious OS updates.
Agree, but a significant point missed in the article is that of data vulnerability: with E2EE, the company DB is useless to an external attacker.
For some companies (e.g. Facebook, Google, TikTok) I would be mostly worried about the company itself being untrustworthy. For others I would be mostly worried about the company being vulnerable.
> It is worth noting that this law also applies to non-web applications where the service provider supposedly being secured against is also the client software distributor; thus, the “end-to-end encryption” offered by Whatsapp and Signal, amongst other proprietary services, is equally bogus. (Both Whatsapp and Signal ban use of third party clients, and enforce this policy.)
The specificity of web apps highlighted by the article is that you receive a bundle of code every time you open or use the app, as opposed to, say, the operating system or desktop apps, which are updated less frequently. (Native) mobile apps are like web apps in that they release updates almost every day.
In my opinion, separate software should be used for the end-to-end encryption and for the communication, although there are things to do for security beyond just programming the computer correctly (such as securely agreeing on the keys and ciphers in person).
It's never been controversial; it's the BBC doing its usual job of laundering the arguments the establishment wants you to hear for domestic consumption.
The thing is, it _is_ controversial. At least amongst the general public.
Obviously not in somewhere like Hacker News where there’s a clear consensus, but if you asked a random sample of the UK population “should law enforcement be allowed to compel tech companies to hand over all DMs of confirmed paedophiles?”, I’d bet very good money the majority would say “yes”.
The notion that “Big Tech” can absolve themselves of the responsibility to help law enforcement find child abusers by saying “it’s all encrypted, not my problem”, does not sit well with a large sector of the population.
Whether it’s good or bad is an ultimately political question, and both sides of the debate tend to talk past each other on this topic, but it’s undeniably a controversial point within the broader population.
If you asked 'Would you support weakening encryption in messaging apps if it helped catch some criminals, even though it could make it easier for hackers to read your messages and steal your passwords, bank details, or personal photos?' I'd bet a large proportion of the general population would say no.
But that side never gets explored, or there's an assumption that there's some way of only letting the good guys access the information.
We are technologists here. There is no technology that can determine if somebody is a pedophile. We can't make a system that exposes the data of pedophiles but is secure for everyone else. We think it has to be all or nothing.
But other people are not technologists. Lawyers think the law is robust enough to determine if someone is a pedophile and only issue warrants for pedophile's data and simultaneously punish anyone who leaks the data of non-pedophiles. Most of the public also believe the police and the law can do that.
When the law is set up to do that, it always gets abused eventually, after a time of not being abused. The public gets outraged, the responsible person gets a slap on the wrist, and the abuse is normalized. In other words, lawyers are wrong and it doesn't work - by our standards. That doesn't stop them thinking it does. Our definition of "you can't do that" is "it's impossible to do that." Their definition of "you can't do that" is "you can do that, but if the police find out, you will go to jail."
It makes sense - they extract every possible bit of personal information from your device - why would they make you believe they care about your privacy?
You want to communicate privately? TikTok is not the place, and that’s ok. shrugs
The core tension here isn’t really about encryption itself, it’s about moderation models.
Most large platforms rely heavily on server-side visibility for abuse detection, spam filtering, recommendation systems, and safety tooling. End-to-end encryption removes that visibility by design. Once a platform is built around centralized analysis of user content, adding strong E2EE later isn’t just a feature toggle — it conflicts with large parts of the existing architecture.
Obviously carrier pigeons carrying messages encrypted with post-quantum ciphers, where keys have been sent ahead of time using USPS, because no one would be so rude as to read someone else's mail.
A middle ground would be to implement E2EE but have messages signed (and ideally organized in a Merkle tree), so that if a DM is reported there's cryptographic proof that the accounts sent the messages.
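A rough sketch of what that could look like, assuming clients sign each plaintext and fold the leaf hashes into a Merkle tree whose root the platform retains. This is a toy, not a deployed protocol (real systems attack the same problem differently, e.g. with "message franking"); the point is only that a reported DM could be proven authentic without exposing the unreported ones.

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(nodes: list) -> bytes:
        while len(nodes) > 1:
            if len(nodes) % 2:            # duplicate the last node on odd levels
                nodes.append(nodes[-1])
            nodes = [h(a + b) for a, b in zip(nodes[::2], nodes[1::2])]
        return nodes[0]

    sender_key = Ed25519PrivateKey.generate()
    messages = [b"hello", b"meet at 6", b"something reportable"]

    # Each leaf commits to a plaintext plus the sender's signature over it
    leaves = [h(m + sender_key.sign(m)) for m in messages]
    root = merkle_root(list(leaves))

    # Reporting message i means revealing (plaintext, signature, Merkle path);
    # the platform verifies the signature and the path against `root` without
    # ever decrypting the other messages.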
I don't think the argument is really about child safety. If it were, TikTok would also be working on fixing its algorithm, which can send minors toward harmful content and is a far larger documented vector than encrypted DMs. This is about preserving access.
Fun fact - there is a big correlation between World Wars and compulsory education. Of course governments and big corporations "care" about children. Of course!
I feel like this makes sense for a platform that targets teens. Plus, I wouldn't trust TikTok to implement E2E encryption properly—who knows what they've snuck into their client.
What kind of application is not targeted at both teens and adults?
YouTube, Twitter, Bluesky, WhatsApp? Every app with a social aspect will be used by teens. And no, TikTok is not "only for teens" or "specially targeted at teens"; nowadays everyone uses it and creates content on it.
I think it's very safe to assume that no major US based platform has 'real' E2E encryption. They're almost certainly all a part of PRISM by now, and it'd contradict their obligations to enable government surveillance. So the only thing that's different is not lying about it. Though I expect the other platforms are, like when denying they were part of PRISM, telling half truths and just being intentionally misleading. 'We provide complete E2E encryption [using deterministically generated keys which can be recreated on demand].'
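To spell out the bracketed caveat: if a provider derived each user's "private" key deterministically from material it also holds, the protocol could look end-to-end encrypted on the wire while the keys stay recreatable on demand. A purely hypothetical sketch, and to be clear, there is no public evidence that any named platform does this:

    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    SERVER_MASTER_SEED = b"held-by-the-operator"  # hypothetical server-side secret

    def user_private_key(user_id: str) -> Ed25519PrivateKey:
        # Derive the "private" key from operator-known inputs only
        seed = hashlib.sha256(SERVER_MASTER_SEED + user_id.encode()).digest()
        return Ed25519PrivateKey.from_private_bytes(seed)

    # The device and the operator independently derive identical key material,
    # so "end-to-end" traffic can be decrypted server-side on demand.
    assert (user_private_key("alice").private_bytes_raw()
            == user_private_key("alice").private_bytes_raw())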
Aside from the fact that you can get metadata and that some communication frequently happens outside of E2EE - what US law do you believe mandates moderation? I'm curious.
I see it like this: Taking in the totality of the danger, they're right. If the source (social network) and the destination (child brain) cannot be treated as trustworthy, then you must control the content for overall safety. If you could trust either end, then you could dismiss the argument. But you cannot trust children to be cognizant of abuse, and you already know social media literally reinvented abusive behaviors for the 21st century. Do nothing and children will be harmed. Overreach by any amount and you have destroyed freedom. The only middle ground is weaker encrypted E2E comms. Something that creates a forcing function with very high cost (an electric bill or SaaS service) for the sniffer but can be broken with enough horsepower. Think about what millions of dollars per character would do. Good luck codifying that insane compromise into a law.
> But critics have said E2EE makes it harder to stop harmful content spreading online, because it means tech firms and law enforcement have no way of viewing any material sent in direct messages.
Like they give a damn. I report accounts that explicitly sell fake credit cards, citing laws that make it illegal and 95% of the time "we checked and there is no violation here, we know that you're not happy but don't give a crap".
So the argument of security is utter bullshit and they just want to snoop.
I hate the BBC so much - "controversial privacy tech" "E2EE ... the best way to protect conversations from .. even repressive authorities" "End-to-end encryption has been criticised by governments, police forces"
They're saying this at the same time as they're clutching pearls over Iran's repression of protestors. Typical of the ethical consistency I would expect from them.
Reminder: Larry “citizens shouldn’t get any privacy” Ellison now owns TikTok. If you’re still using it or have friends and family using it you should stop immediately. It WILL eventually be used against you if this regime gets its way.
As if. If people haven't stopped using TikTok despite all of the other reasons to stop, then Ellison is damn sure not going to move the needle.
> TikTok won't protect DMs with controversial privacy tech, saying it would put users at risk
Not sure if this was changed since first posting. I don't mind updates, but unless it's a redaction for legal purposes (which should then itself be clearly mentioned), the BBC should provide a public changelog like Wikipedia.
TikTok’s stance against end-to-end encryption is unsurprising but still concerning. TikTok is a source of information on many topics, such as the genocide in Gaza, which traditional media underreport and many governments try to suppress. The network effect of big social media platforms means many people will likely talk about these topics in TikTok DMs. No matter what legal controls TikTok claims to enforce, there is no substitute for technological barriers for preventing invasions of privacy and government overreach. This is yet another example where corporations and governments sacrifice people’s autonomy and privacy in the name of security.
It's a pretty terrifying world we live in now, where an unencrypted addictive short-form video platform is considered a source of information more than news agencies or even community-managed forums.
"The situation is made more complex because TikTok has long faced accusations that ties to the Chinese state may put users' data at risk."
And yet, it's even more complex than that, since it's now owned by cronies of the current US President. I've never had a TikTok account, but conceptually I was mostly pretty okay with being spied-upon by China. I'm never going to China.
Yes. China gives a shit that user rdiddly, at 36 minutes before 00:55 UTC on March 4, 2026, said that China is spying to the point that they are going to be abducted for it.
It's one thing to make a policy decision I disagree with. It's another to lie, blatantly, to my face about it. But what do you expect from people who bought TikTok specifically so they could add censorship and lied about it being some kind of national security issue?
This, according to many researchers, is the best case-study example of corporations gaslighting users into accepting surveillance by companies and governments alike.
> Grooming and harassment risks are very real in DMs [direct messages] so TikTok now can credibly argue that it's prioritising 'proactive safety' over 'privacy absolutism' which is a pretty powerful soundbite
I wondered how it could be considered 'controversial', but they do quote at least a couple groups speaking against it. The NSPCC for instance, who incidentally also warned parents about a Harry Potter video game because their children might want to learn more about the game:
>“Parents should also be aware that players may want to find out more about the game using other platforms such as YouTube, Twitch, Reddit and Discord, where other game fans can discuss strategies and experiences.
It is controversial... amongst people who have concerns about private communications and society, from a regulatory and governance perspective.
It's uncontroversial amongst people who value their privacy.
The tension between the two camps (there are obviously nuances and this is a false dichotomy) is at a current peak. It's an ongoing controversy. It's a matter of public debate.
You might have liked it better if the angle had been "...which the government, controversially, wants to clamp down on" or something.
Calling something controversial is a favorite propaganda technique employed by "news" outlets. It's another form of selective reporting and framing. It carries negative connotations, and has really no objective standard by which it can be wrong since you'll always find somebody against any issue.
Interesting. I'm not a native English speaker, but in news articles I have always interpreted "controversial" as meaning "under discussion" (perhaps even around a 50/50 divide), hence why they are writing an article about it.
I feel it is the news outlet trying to justify why the topic is important to read about, since most people reading it will assume there is a "common" stance on the issue at hand. Usually it is used in topics that are very binary, for or against.
It does have negative connotations, and it does get used by news corporations to influence opinion. I have rarely if ever seen them feel the need to explain why a topic they report is important or newsworthy, and just stating without evidence that something is controversial doesn't really do that either.
> Usually it is used in topics that are very binary, for or against.
It can be for those topics, but very rarely to describe the side of such topics with which they align.
The UK government seems a lot more willing to embrace the panopticon in the name of protecting people from terrorists, child sex traffickers, human rights activists, Catholics, jaywalkers, you name it.