On systems before Apple's locked-down iPhone, it was just called "installing".
The PC revolution started with people just inserting their software into the computer and running it. You didn't have to ask the computer manufacturer or the OS vendor for permission to do it.
And note that Apple doesn't allow you to protect yourself. You cannot install a firewall and block arbitrary software on your phone. For example, you cannot block Apple telemetry.
2. I try to install my own software.
3. I'm prevented from installing my software on my device without "permission" from the manufacturer.
4. Therefore, I do not own said hardware; manufacturer still does.
5. Therefore this is an indefinite rental instead of a sale.
6. I was defrauded with a fake sale, and Apple is defrauding the IRS by not being properly taxed on millions of rental units (phones, tablets).
Want to renovate and change your home that you own? You need permitting, and not all changes are allowed. But you own the home and land, so why do you need permitting?
Say you want to modify a car that you own: again, depending on the modification, that's technically not allowed either (in a place like Japan, for instance, an aerodynamic wing can't be certain dimensions; but if you own the car you should be able to do what you want with it).
Maybe none of these types of things should be beholden to someone holding the reins of the thing you own but it's not like Apple not allowing sideloading is some wholly unique problem.
If there was a law requiring apps to be approved by someone first then your argument would be valid, but I do not think such a law exists (at least in my country).
While complying with a regulation vs a business requirement may feel like the same thing in practice, there is at least an avenue to change the regulation via, you know, democracy.
I believe both this situation and the iphone software situation are wrong, so it's not really a counter argument.
Yes. I'm not the original commenter, but this is what I expect.
From my POV, the OS exists to virtualise the hardware it runs on. I don't want the OS manufacturer to decide if I'm allowed to have a web browser or play games.
Naive in hindsight, but until game consoles and smartphones came along, it didn't occur to me that an OS would forbid me from installing something.
For example, a coffee maker does have software in there. But it does a job and does it well. There's no cloud garbage, no remote attestation, or much of anything.
To that end, I look at "who can control the device?" If I, as someone who paid money for it, am not the answer, and the answer is instead "the company", then I'm logically not the owner.
Alongside a fraudulent sale, there is also tax fraud by misclassifying these rentals as sales.
I've seen nobody discussing the tax fraud angle either. We the public are getting cheated as well, from both directions. It's high time we start suing, pressing charges, and making ourselves whole.
Man, have you seen coffee makers lately?
Just search for "smart <appliance-name>" and you get all the cloud garbage and more. Dishwashers, vacuum cleaners, televisions, microwaves, ... what a cesspit.
No you can’t? Things like Project Sandcastle barely function on a single model. It can’t even access the network
I have a purely mechanical lawn mower. I can replace any part of the engine, frame, switches, I can add a second engine if I wanted to.
An iPhone doesn't let you do any of this. "Their OS"? No dude, I bought it, it's in my hand.
Again, when these companies want to "sell" something but still retain owner-level control at a distance, that arrangement should be classified as a rental.
And a rental means the company still owns this property, and therefore should pay taxes on all of their property.
And that would absolutely mean that game consoles SHOULD not be sold as such. Or better yet, if these companies do make changes against the property owner's decisions, they should be prosecuted under the CFAA.
Case in point: Nintendo is remotely bricking Switch 2 consoles that play a game that was ripped by someone else. If it were me, Nintendo of America's C-levels would be charged under the CFAA and get a nice perp walk.
But that's the point in the USA. Companies are allowed to use Trojans and hack tools against hardware others own, but if we tried that, I'd be writing this message from a jail cell.
What taxes exactly are you referring to?
E.g. if a game console manufacturer wants to retain owner-level control of their console, they can rent it to you for $X per month, which would include a Y% sales/VAT/GST/whatever tax.
And correspondingly if the device is sold to you, they should not be able to do things like disallow you from running custom software, remotely brick the device with a soft fuse, etc. and otherwise stop you from using it freely.
I think there is a middle ground (e.g. you can buy the console and either have it in "secure" mode as it ships from the factory, or choose to "root" the device and gain the ability to run custom code - perhaps this would invalidate the manufacturer's attestation keys from the secure enclave or burn a soft fuse as part of the process, so it no longer passes checks for DRM and so on). However that may not be economically viable as I understand the consoles are often loss leaders on the hardware and the profit is made on game sales and licensing.
My question was referring specifically to the “not paying taxes” part. TTBOMK, in all western jurisdictions, sales/VAT/etc./income taxes on sales are equal to or higher than those owed on rental income - and OP kept repeating (in multiple responses) that misclassifying a rental as a sale is tax fraud for the seller/original-owner. That makes no sense to me.
But regardless, if a company can remotely remove my ability to use a product solely at their discretion, we need a better way to talk about it than "buying and selling".
I think we can do better than "well you own it because you're technically allowed to attempt to break the lock." We can demand that users be given ability to remove the lock.
So... Who's the jailer?
As an owner, I want THEIR rights.
It’s a disingenuous argument.
Apple wants to sell appliances. The parent commenter wants to buy a computer.
That's the fundamental disagreement.
Most of these analogies don’t make things much clearer.
The closest one is: the phone is supposedly my employee - I pay its salary (to Apple), but it is asking Apple to approve everything I ask it to do, and they are the only arbiter.
(This analogy also sucks. You have to actually deal with subject matter at hand and not look for shortcuts)
You actually can’t.
It’s not indefinite, because the vendor won’t support the hardware indefinitely. It’s also not a rental, because you are free to resell the hardware.
So Apple has never allowed sideloading. Google however?
Well if an update breaks that, it would be the same thing sort of.
The energy in this comment is 'Mr Gotcha', and is as "inspiring".
But the terminology did seem to spring up with iOS. It makes sense to call it that there. But on a platform that allows it, it's just installing.
Many moons ago I attended an internal tech talk by the Google security team. This was shortly after they got hacked by China around 2010 or so. The talk was a general one on what they were doing to boost the security posture in general.
Number one thing they were doing was moving away from AV scanners on Windows to a regime in which IT would centrally whitelist all apps by signature or EXE/DLL hashes. Beyond the issue of false negatives, the reason was that people would routinely install malware infected software despite being told by the AV scanner that it was infected. They'd be told that and they'd just override it. Nearly always the reason was that they were installing pirated software and wanted it badly enough that they either didn't care that it was virus infected, or they talked themselves into believing a conspiracy theory in which AV companies reported false positives to try and discourage piracy.
The other problem with AV was that it reported true positives centrally, but then they'd be coming from high level executives and there'd be problems with addressing the issue. Whereas in a whitelisting scheme said executive would have to file a ticket to request permission to install the malware-ridden pirated Photoshop or whatever, and they wouldn't do it.
This was very sad and I don't know if they kept it up, that sort of thing is terribly high maintenance and it wouldn't be a surprise if they moved away from it at some point. But when your biggest problem is AV that is accurate but ignored and that's inside one of the world's most sophisticated tech companies, it's fair to say AV is not useless but if anything needs to be even stricter.
To be fair, pirated software often uses obfuscation techniques similar to malware, and then it's more like antivirus vendors refusing to add an exception for pirated software, rather than antivirus vendors specifically seeking out pirated software to mark as malware.
Also:
Certain types of scripts and software that I use to configure Windows in unsupported ways are detected as malware by major scanners. While I'm sure someone wouldn't appreciate these scripts being used on their computer by surprise, when I use them intentionally, I want their effects.
How is that curl https://... | sudo sh going?
Apple also has enforced a similar policy to what Google is doing, but much stricter, and has done for ~13 years or so (devs must be identified, the OS rejects unsigned code in all territories by default, Apple pre-approves all binaries even outside the app store).
Linux distros have policies far more extreme than anything Google, Apple or Microsoft have ever done. They explicitly don't support installing any software not provided by their "app stores". Getting into those requires giving up your source code to them, and they reserve the right to modify it as they see fit without informing anyone, reject it for any reason or no reason at all (including reasons like "we don't have time"), and they tie getting new releases of your app to the user upgrading to new releases of the OS. If you do try and install stuff from outside of your distribution, not only are there security warnings to click through but an expected outcome is that the OS breaks and the vendor washes their hands of you.
Despite those policies, or perhaps because of them, botnets of Linux servers are common.
Of all consumer-facing platforms only Windows and Android allow installation of unsigned third party code out of the box via some obvious graphical path. And on Windows that right is somewhat theoretical. You can do it but the built in browser will try very hard to stop you, and the OS itself will happily break unsigned code by blocking file open syscalls heuristically. So in practice most apps don't go the unsigned route. On Android OTOH, unsigned (non ID verified) code is sandboxed and works just like regular apps after installation, the OS won't heuristically interfere with the app.
Most Linux distributions don't prevent you from installing third party software at all. You download something, you set the execute bit, it runs.
Users are wary of doing that with software from untrusted sources because, obviously, you're then placing your trust in whoever provided the software instead of the distribution's packaging team. But the OS won't stop you if that's what you want to do, and sometimes you do trust the source of the software.
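In practice that's the whole flow. A minimal sketch, assuming a self-contained binary (the URL and file name are placeholders):

    # fetch a binary from whoever you've decided to trust (placeholder URL)
    curl -LO https://example.com/some-tool
    # set the execute bit and run it - no distro or vendor approval involved
    chmod +x some-tool
    ./some-tool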
> Despite those policies, or perhaps because of them, botnets of Linux servers are common.
Botnets of Linux servers are common because some people operate them without installing security updates (common with WordPress), and then attackers exploit known vulnerabilities in the unpatched software.
But "locked" phone platforms regularly discontinue security updates for devices that are still in widespread use. Locking the device doesn't solve that problem at all, and in fact makes it worse because then if the OEM doesn't patch it nobody else can do it either.
The OS doesn't stop you installing third party software - signed or not - on macOS, Windows or Android, so "allow" is nothing interesting. That also won't be changing with Android, given that you can buy a phone with an unlockable bootloader and reflash to some other spin of Android that implements whatever security policies you want. You can put these devices into a mode that allows anything.
The question is whether that's something the vendors make easy, if they support it in the sense that you can do it and they will still deal with you if there's a problem. That's what support means. It's not a synonym for technically possible.
Windows, macOS and Android don't consider installing third party software to put the system in an unsupported state. Linux vendors do.
The concern is that they are now doing this on Android, and have long been on iOS. Moreover, there are really three things here: Fully supported, still easy enough to be practical, and so much friction that it's dead.
If you install Steam on Windows, Microsoft doesn't "support" that -- if you call Microsoft support and want them to fix a problem with Steam, they're going to direct you to Valve. But installing Steam on Windows is easy to do, and therefore common. And it's the same thing with installing Steam on Linux.
Likewise, you can get Linux software from the distribution's repositories, but you can also use pip or npm or flatpak or any number of alternative packaging systems, and doing this is easy and common.
Which, on Android and iOS, it isn't. It's not just "not supported" but so arduous that the alternatives can't gain traction, which is qualitatively different and has consequences in terms of network effect even if it's technically possible to install LineageOS on a handset if you buy just the right one and immediately reinstall the OS and keep a separate phone to run your bank app. And even then you still can't install a mainline kernel on that device and are reliant on the OEM to keep publishing security updates.
Even with this new policy there are still ways to install unsigned apps on Android e.g. via adb, reflashing to a different build of Android, and so on. But you're absolutely right that there's a spectrum of usability here, which is why "allow" isn't really a useful standard. Only iOS tries to set friction to 100%. Every other platform "allows" third party installation given enough work, which is why it's valid to compare the difficulty of doing so on Linux with other platforms.
Re: Steam. Microsoft absolutely does support that! If you install Steam, Windows breaks, and Steam isn't doing something disallowed like messing with internal data structures, then Microsoft will accept it as a bug in Windows. They work very hard to support apps even when they actually do mess with internals. It's the Linux world that shrugs if a change in Linux breaks Steam when Steam was doing nothing wrong.
Flatpak is a genuine improvement, yes. But for the rest, sorry, you have developer brain switched on! Pip! Easier to use than Android!? These tools:
• Only target developers, and as such regularly do things like try to compile software during install and then fail due to obscure compatibility or versioning issues.
• Have severe malware problems.
You couldn't present pip or npm to the Android team as a solution to the problem they're trying to solve. You blame Android for being "arduous" whilst desktop Linux has spent decades with <5% market share exactly because it's so incredibly arduous. Come on: even with these new policies it is much easier for both users and developers to access/make software on Android. I've developed and distributed software for every OS except iOS at this point, and the differences are clear.
Sure, but the point being, it's a lot easier to install software from outside of the repositories on Linux than it is on Android. Measure by how often it happens. Do a significant percentage of desktop Linux users ever use something other than the official repositories? Yes. Do a significant percentage of Android users? Nope.
> Re: Steam. Microsoft absolutely does support that! If you install Steam, Windows breaks, and Steam isn't doing something disallowed like messing with internal data structures, then Microsoft will accept it as a bug in Windows. They work very hard to support apps even when they actually do mess with internals. It's the Linux world that shrugs if a change in Linux breaks Steam when Steam was doing nothing wrong.
I don't think this is accurate. If there is actually a bug in Linux, they'll accept the bug report regardless of whether you discovered it while using Steam or something else.
> Only target developers, and as such regularly do things like try to compile software during install and then fail due to obscure compatibility or versioning issues.
Nah. If you want to use some random AI thing or web thing that isn't in the main repositories, it's going to be telling you to install dependencies using those tools regardless of whether you're doing any software development.
> Have severe malware problems.
This is true but not unique. If you use a package distribution system and it has malware in it, it has malware in it. It doesn't matter if it's Google Play or pip or something else. It doesn't matter if it's operated by the same entity who made the device. What matters is if the people operating it do a poor job of excluding malware, and then some are better than others. Google Play has more malware than F-Droid or the Debian repositories; npm has more than Google Play.
> You couldn't present pip or npm to the Android team as a solution to the problem they're trying to solve.
The interfaces for some of those things are tuned for developers, sure. If you make an interface for ordinary people then it looks more like F-Droid than npm. But then that's what you'd do -- except that the F-Droid installer isn't allowed in Google Play, which leaves ordinary people in a chicken-and-egg situation where you have to do something technical to get access to the interface that makes it easy for ordinary people.
> You blame Android for being "arduous" whilst desktop Linux has spent decades with <5% market share exactly because it's so incredibly arduous.
Desktop Linux has been growing at a pretty significant rate. It's now above 5%, and it wasn't so long ago that it was under 2%.
The main problem isn't the difficulty of installing third party software but rather the network effect of getting people to make it to begin with. If hardly anybody uses it then developers don't make software for it and then people who e.g. want to play games get a Windows PC etc. Which makes it slower to gain market share. But despite that, the number keeps going up rather than down.
> even with these new policies it is much easier for both users and developers to access/make software on Android.
The thing you really need is the ability for someone who has never done it before to make Hello World and get it running on their own phone, and that is not easier on Android than on a Linux desktop.
I recently upgraded macOS, and it took me a couple of reboots and scarily-worded system configuration changes to re-enable (signed) kernel extensions…
Linux distributions each have their built in package managers, but there's no 'policy', as I understand it, that prevents installation of, literally, whatever you want. It's generally more difficult than just downloading and double clicking on the installer / exe, but just follow the instructions and it's done.
And, yes, also there are weird version and dependency issues that crop up more than would be ideal, but that's not the topic.
And, note, back when I was a Linux user, distro vendors and evangelists justified that situation by security. They said we don't want people distributing software outside of our repositories because that's how Windows users get viruses, so we deliberately won't make it any easier.
So the Linux community doesn't get to cry freedom and decentralization now, IMHO. The time to do that was 25 years ago when Debian was being praised for having big repositories. Some of us actually did point out how centralized and authoritarian that approach was; I even built a system for distributing apps in binary form to all distros (with hacks and shims for binary compatibility), and that project attracted some volunteers, but we got pilloried for not "getting" UNIX. One Debian developer even called us monkeys.
The users got tired of this and bypassed them with Docker, a much more decentralized system in which anyone can publish images without binary compatibility problems, and using them isn't tied to your OS version or OS vendor policies. But Docker is also centralized around Docker Hub, and Docker Inc do ban images and developers when malware is found:
https://jfrog.com/blog/attacks-on-docker-with-millions-of-ma...
Not so different to what the app stores do.
It's fair to say that the only OS vendors who have ever taken decentralized and free app distribution seriously are Apple, MS and Google. The open source world went all-in on the centralized store model from the start and never looked back.
Sure, the Linux ecosystem has not prioritized binary compatibility as much, so doing so has been harder; people culturally expected "use existing libraries" more than "just bundle everything". But as you note, that attitude has shifted too, it always was possible, and nothing seriously suggested preventing it.
Never heard that argument, ever. `apt-get` literally allows you to add whatever repositories you want. You're conflating two completely separate worlds. The first is the world of Linux that pretty much invented the idea of a software repository for an operating system. This was invented because Linux has the notion of "distros", and the trick there is to provide a set of packages that all work together in that distro. That's the purpose of curating packages in the repos (along with Free Software licensing, in the case of distros like Debian). But this system was always federated, where users were empowered to add any additional software repositories they needed. F-Droid on Android copies the exact same architecture, allowing the user to add endpoints of servers they want to pull software from.
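For what it's worth, that federated model is just a couple of commands on a Debian/Ubuntu-style system. A rough sketch - the repository URL, key and package name are placeholders:

    # store the third-party repository's signing key (placeholder URL)
    sudo install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://example.org/repo/key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/example.gpg
    # register the repository itself
    echo "deb [signed-by=/etc/apt/keyrings/example.gpg] https://example.org/repo stable main" | sudo tee /etc/apt/sources.list.d/example.list
    # refresh package lists and install from the new source
    sudo apt-get update && sudo apt-get install example-package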
The second is a system of control built by Google and Apple. It has nothing in common with the Linux system, but rather was designed to vend proprietary software that extracted money from users, for the purpose of lining Google and Apple's pockets. When Tim Cook testified about app store fees and the judge queried him about why they were so high, he said "To lower those fees would be to give up the full return on our App Store investment." Basically: we're charging this much because we can.
Conflating these two systems and the reasons for their design would be very misleading.
> It's fair to say that the only OS vendors who have ever taken decentralized and free app distribution seriously are Apple, MS and Google. The open source world went all-in on the centralized store model from the start and never looked back.
It is not even remotely fair to say this. In fact, it's so misleading it feels malicious. The only operating system on the planet that offers user-supplied software repositories that work with the built-in package management system is Linux. Full stop. And Linux doesn't even only have one of these systems, it has several. Flatpak, Debian repos, Ubuntu repos, Arch's AUR, Slackware's third party repos, etc. And users don't have to "work around" the system to use any of this - simply adding new URLs works great, and it's always been this way.
https://wiki.archlinux.org/title/Unofficial_user_repositorie...
https://documentation.ubuntu.com/server/explanation/software...
In short, Windows and MacOS and Android have never taken third party software distribution seriously in the least, and have done nothing to support it. Linux has built-in support for third-party repositories, and has for decades.
They all have sophisticated systems in place specifically to support third party software distribution that works (and is relatively safe):
• Windows has the app store, MSI, and MSIX (which allows efficient installs and updates from arbitrary web servers). MSIX is a package manager, by the way. It also has API support for writing AV scanners, managing software deployments across managed networks and so on.
• macOS has .dmgs, notarization, Gatekeeper
• Android has support for installing APKs from the web with a package identity system that lets anyone self-sign their software (a rough sketch of self-signing follows below).
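To make the self-signing point concrete, this is roughly what it looks like today with the stock JDK keytool and the Android build-tools; the keystore, alias and APK file names are placeholders:

    # generate your own signing key
    keytool -genkeypair -keystore my-key.jks -alias mykey -keyalg RSA -keysize 2048 -validity 10000
    # sign the APK with that key
    apksigner sign --ks my-key.jks --out app-signed.apk app-unsigned.apk
    # verify the signature and print the certificate
    apksigner verify --print-certs app-signed.apk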
Above all they consider installing apps that aren't controlled by the vendors to be a core feature, so they work hard to provide binary compatibility, bug workarounds, multi-year deprecation cycles, anti-malware scanners and more, all for the benefit of developers who develop their apps independently of the vendors.
Linux can be reconfigured with additional repositories, technically, but that feature was originally designed for reducing bandwidth usage with mirrors. It wasn't meant to allow third parties to distribute software on their own schedule, which is why these third party repositories are invariably locked to a specific version of a specific distribution. Developers who complain about this are just told every version of every Linux distribution is a unique OS, and that they should open source their apps to let distributors centrally take ownership of their work.
It's changing a bit now with Flatpak. But for the bulk of Linux's history, that was the gig: no supported way to distribute your apps, and third party repositories would come with health warnings from your OS vendor. Not a supported way to use the OS. If it breaks you keep the pieces.
The stuff about Linux not letting you install stuff flies far in the face of like everyone’s knowledge of Linux. Your description of how Linux installation works is pure fantasy.
I've been a Linux user for 25 years. You can reconfigure the OS to use additional repositories. It may or may not work, and only if there is a repository specific to both your distro and its version. But it's not a good idea.
In particular, OS upgrades are very likely to break. Being able to upgrade itself is a basic requirement of any modern OS. If your Linux distro corrupts itself on upgrade or fails to do so and you file a bug report you'll be told to remove any third party software because that's not supported.
This would be like if your Mac started crashing on boot because you downloaded a word processor from a website, and then Apple say "sorry, we only support apps coming from the app store". They don't do that, but Red Hat or Canonical will.
I've used Ubuntu, Debian, Manjaro, Mint, and Fedora, and none of them are like this. Which distro do you use that doesn't let you install any software you want?
Nonsense. You can install and run whatever you want. Tons of closed source commercial software available for Linux, like Matlab, comes as a .tar file which you extract and run.
But we certainly support your _ability_ to install and run whatever you want. It's your computer, and it's your OS.
And although I was making that argument to Fedora decades ago, it's only recently that this point has been accepted with official support by Red Hat for stuff like Flatpak. Of course other distros developed their own thing as always so it's still not really ideal. But at least the principle was now accepted that third party apps should have a properly supported way to thrive. Far too late, but it's done.
Working in retail tech support, we got folks bringing in their new macbooks, freshly ruined by new ransomware, utterly baffled that it was possible at all. But when you're trying to use Photoshop without paying... well, shady stuff's still out there.
My last 10 apk installs:
- 9 apps not available in the local store
- 1 app I changed some setting in the manifest
For less technical people it will also include some shady APKs, for example ones promising free La Liga match broadcasts but then scraping everything from the phone.
If the phone people could make a solid permissions system, this wouldn't be a problem. Applications should by default be able to read their own install files, and have dedicated directories for their local storage, caches, and such. They can make network connections to their home site, if the user allows it. That's all they get.
This covers most games. What else does it cover?
If you want an open phone, buy one. But I instruct all of the older members of my family to buy iPhones and iPads.
I’ve been programming computers since 1986 and even I have never said it would be cool to side load on my phone.
Because you know about the options, and probably have at least one computer where you can install what you want. Imagine if the 1986 you only had access to an iPhone, like most young people today. Would you still be programming computers 40 years later? There are new computer science students at university who don't know how file paths work.
Is this a joke? The reason for TFA is precisely that this is quickly becoming impossible as Google closes down Android. It's already viciously impractical to install a privacy respecting OS like Lineage or Graphene, and now they're coming for the very possibility of installing software.
Obviously there's no way on earth Google will allow you to decide that whatever device you own is "safe". There are still ways to bypass it using kernel hacks, but it's both a cat-and-mouse game and often not very trustworthy, since a lot of the software used to bypass SafetyNet is proprietary.
So yep, using custom OS on your phone is impractical because Google made it so.
I’ve heard people say Monzo in the UK. But there are plenty of banks you can choose from in the UK that have websites.
It's actually an EU law that financial apps must use something like Play Integrity and online banking must be authenticated by a smartphone.
You've already quoted one example so you know which way the trend is going, but since you asked, here is another. New bank accounts handed out by https://boq.com.au/ can only be accessed from a phone, not via the web.
I started banking with them a long time ago. All accounts opened back then have net banking, but no app. They've recently changed: new bank accounts can be accessed via an app, but no web interface. I think this is a good thing in general. Insisting you do transactions using your phone or in a branch is far more secure than allowing payments via the web, or card.
As fraud continues to increase I suspect most payment systems will go that way. I would not be surprised if the bulk of non-cash payments on the planet are already done by phone: https://theconversation.com/no-more-card-charges-how-austral...
> what financial services companies are inaccessible via a web browser?
Yes, the fix is obvious if it was a problem. I thought I made it clear I think it is a net increase in security, and so isn't a problem, for me anyway. Even if it was a problem, your throwaway suggestion of "just move to another bank" is not so easy if you've borrowed money from the bank.
None of this has anything to do with the topic being discussed - which is whether Google should allow sideloading of unsigned apps. You seem to be positively enthusiastic about handing the keys to your life and assets to Google and / or Apple. The comments you see here are from people who aren't so sanguine about it. You look to be dismissive of their concerns. I would be too, if I thought what they are doing yielded a big increase in security.
The OS should be so secure that loading any app, signed or not, malicious or not, is mostly harmless. That is true for iOS and Android. You can always uninstall an app, and you have to give it additional permissions to access your data. I don't know if an app can attest it was downloaded from the web store so organisations like banks can be sure they are talking to software they issued. If it can't, that's a security hole that should be closed.
Unlike adding attestation, sideloading apps doesn't look to me like a security hole that needs fixing. I doubt it provides much additional security. I've personally had to fix phones whose apps went rogue after a spammer bought the developer licence of an abandoned app. Worse, the app still had the permissions granted to the original.
This new requirement does create barriers. I use apps from F-Droid. They typically have no ads, and they do what they say on the box. Security in the long term is higher than a Google store app because the source is available, and F-Droid uses reproducible builds. But I would not be surprised to find some open source app developers that aren't as keen as you are to hand over their private data to Google in order to get their keys signed, so there will be fewer F-Droid apps. If that happens, this new requirement would lead to a net reduction in security for me.
There are none that are usable.
All of the services I need to operate my business (such as my banking app) are also locked down to locked-down OSes thanks to the silent majority and viewers like you.
You keep calling this 'hostile' and we should choose something else, but the whole reason we're complaining is because the choices are going away! Should we wait until we have literally zero choice (as opposed to limited choices with bad tradeoffs) before complaining?
It is a move taken in lockstep with EU's Chat Control and UK's Online Safety Act, and the proposed Kids Online Safety Act in the US. The common objective of all is total control of digital lives of citizens and allowing the government to snoop on all internet communication while not disabling end to end encryption. They need end to end encryption to lock out external adversaries (Russia China etc) but they need to see the contents of encrypted messages to monitor internal adversaries.
First step is blocking you from running any apps not allowed by Google/Apple.
Second step is putting in the systems to snoop on end to end encrypted communication apps on the endpoints, enabling intel agencies to detect thoughtcrime without exposing everyone's chats to Chinese/Russian intelligence. This will most likely be done by OSes recognizing the apps and extracting private keys on demand.
Last step is locking the bootloaders so you cannot have a phone which lacks the 'features' added in the second step.
You have already given in to tyranny when you've given that total control.
> "Many years ago I posted that I could not see anything wrong about sex between an adult and a child, if the child accepted it.
> "Through personal conversations in recent years, I've learned to understand how sex with a child can harm per psychologically. This changed my mind about the matter: I think adults should not do that. I am grateful for the conversations that enabled me to understand why."
https://www.stallman.org/archives/2019-jul-oct.html#14_Septe...
(I do agree with your comment overall, anyway.)
> Children: Humans up to age 12 or 13 are children. After that, they become adolescents or teenagers. Let's resist the practice of infantilizing teenagers, by not calling them "children".
Older than 13 is not a child - the man is using that statement himself. Even if you want to argue that he didn't really mean it like that, which I disagree with, opponents would have a field day with that to discredit him.
Stallman is like Humpty Dumpty ("When I use a word, it means just what I choose it to mean—neither more nor less") and that masks some of his terrible beliefs. Also, in his clarification (I refuse to call that an apology), why aren't we questioning why he needed to be told that sex with minors is bad? Why did he lack the skills to look up anything about consent and child development before saying that he thought sex with >= 13 year olds was ok?
how then? just a rough idea would be nice. because i don't see it. as much as it pains me, i have to admit that i find the article convincing. i see these people around me every day. they have no experience with technology. they didn't even go to school long enough. yet they all have a smartphone with no idea what it is capable of, or what the consequences are. and they are used to the government taking care to protect them.
Even if we are restricting installing apps, there are less heavy-handed measures. For example, enabling .apk installs only via developer options/command line/adb, in a way that the average user will never be able to figure out. Sprinkle a few warning pages with scary red lettering and it's fine. Grandma will never figure out how to run adb commands on Gentoo.
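(For reference, that route already exists and looks something like this today; the file and package names are placeholders:)

    # on the phone: enable Developer options and USB debugging first,
    # then from a computer with platform-tools installed:
    adb devices                    # confirm the device is visible
    adb install my-app.apk         # sideload the APK
    adb uninstall com.example.app  # remove it again if needed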
There is a tradeoff between liberty and security. You can never guarantee security; the Google rules in the article won't ensure it either, as Google has been shown to simply not care about scam/malware apps published onto its own app store anyway. The whole security angle is a misdirection. The whole move is about control.
> "Those who would give up essential Liberty, to purchase a little temporary Safety, deserve neither Liberty nor Safety."
- Benjamin Franklin
they don't sell them to people who don't know how to use them. with the exception of knives. but unlike technology, people do know how to use knives without getting hurt. they can easily see that chainsaws are dangerous. they need a drivers license for a car, and they can't get opiates without a prescription.
none of these controls are available for apps, and yet, because they don't know how to use phones/apps safely, because they can't tell the difference between an app that is safe and one that isn't, they risk their livelihood because they fell for a scam. they are not going to install those apps by themselves. they will ask the techshop around the corner to do it for them, and the scammers give the techguy a cut for installing that app that steals your money.
the problem is of course lack of education, but education doesn't have a quick fix. in the meantime many people's lives will be ruined.
By contrast, they think the 4 ton death machines are a really cool way to impress girls and that's what they're used for. Similar in Europe, by the way.
Not that there's any shortage of people who think guns are a cool way to impress girls.
But ask yourself, would the business do this anyway? The answer is yes. Google needs a growth target, and app store lock-in and fees are the model that gets them there.
You're free to live in paranoid-government land, but it's an unnecessary abstraction. It's actually the EU and US rulings against their monopoly that's driving it.
Again, the paranoia is just drivel.
This is just what you'd expect any government that is either competent or greedy to be doing, given the technologies at play.
Calling it "thought crime" is, of course, a bit glib. But things like "we want to monitor the communications of every pro Palestinian university student so we can take proactive disruptive actions" are very real and not so hidden desires and sentiments of modern Western governments.
> Its actually the EU and US rulings against their monopoly thats driving it.
Can you elaborate on this? Locking phones down like this would seem to make Google an even bigger target for future anti-trust suits, no?
Monopoly enforcement only occurs when there's no natural monopoly.
Well, in this domain (government surveillance), probably not paranoia.
I have posted multiple times before that this effectively limits people’s property rights. Here are some other posts I have made on the subject:
* https://news.ycombinator.com/item?id=39349288
* https://news.ycombinator.com/item?id=39236853
That's the problem to attack - not user freedom. "Mandatory app" is an anti-accessibility anti-feature.
That may be the crux of the misunderstanding. The 'licensing' of music, movies, TV shows when you "purchase" them is coming / has come to hardware.
The owner of the device is who controls what you can do with it, not necessarily who paid to keep it in their pocket.
For example, let’s say you buy an iDevice and do not even intend to run iOS, but instead want to install/port Linux, or run some bare-metal code. You would have to ask Apple to sign that code with their private key, which they won't do. The problem is that a sale should have transferred all property rights to you. The fact that you have to ask a third party to even hope to do this points to you being limited in the full enjoyment of your property rights. This cryptography is not a contract or legal instrument either, and you don't even have to agree to anything for it to be in effect. You could buy the device and have no intention to use the preinstalled software, and it's in effect before you even open the box.
The right of exclusion is very important; most other property rights can even be derived from it - see, for example, the paper "Property and the Right to Exclude" [https://core.ac.uk/download/pdf/33139498.pdf]. The fact that such an important property right is being blatantly impeded is the problem.
This is crazy long and not directly about the iPhone, but it is the most comprehensive explanation I've heard of why your plea will probably never be heard:
https://youtu.be/ZK742uBTywA?si=poDXl3Mz7lYwdUxa0
(TLDR: international treaties)
Yes, obviously yes. In the same way we teach people to operate cars safely and expect them to carry and utilise that knowledge. Does it work perfectly? Of course not, but at least we entertain the idea that if you crash your car into a wall because you’re not paying attention it might actually be your fault.
Computers are a critical aspect of work and life. I’m a big proponent of making technology less of a requirement in day-to-day life (you shouldn’t need to own a smartphone and download an app to pay for parking or charge your car), but in cases where it is reasonable to expect someone to use a computer, it’s also reasonable to expect a baseline competency from the operator. To support that, we clearly need better computer education at all ages.
By all means, design with the user’s interests at front of mind and make doing the right thing easiest, but at some point you have to meet in the middle. We can’t reorient entire industry practices because some people refuse to read the words in front of them.
But this sounds an awful lot like trying to avoid changing the technology by changing human nature. And that's a fool's errand.
There are always going to be a significant percentage of users you're never going to reach when it comes to something like this. That means you can never say "...and now we can just trust people to use their devices wisely!"
Fundamentally, the issue with people clicking things isn't really a problem because it's new technology. It's a problem because they're people. People fall for scams all the time, and that doesn't change just because it's now "on a computer".
But that's exactly the issue. You won't prevent someone from wiring money to Nigeria by restricting what apps they can install on their phone while allowing the official bank app which supports wire transfers.
If someone is willing to press any sequence of buttons a scammer tells them to then the only way to prevent them from doing something at the behest of the scammer is to prevent them from doing it at all.
But that's hardly practical, because you're going to, what? Prevent anyone from transferring money even for legitimate reasons? Prevent people from reading their own email or DMs so they can't give a scammer access to sensitive ones?
The alternatives are educating people to not fall for scams, or completely disenfranchising them so that they're not authorized to make any choices for themselves. What madness can it be that we could choose the second one for ordinary adults?
Arguably they already do and the numbers wanting an open phone are relatively trivial and the market ends up the way it has.
I do these days, happily, and I speak as someone who owned a Neo Freerunner and an N900. My phone is far too important as a usable, stable device to want to fuck around treating it as an open platform any more.
The market is consolidated into Apple and Google and neither of them actually offers this. Taking away everyone's choices and then saying "look how few people are choosing the thing that isn't available" is a bit of a farce.
Nobody cared, so they went away.
I mean it seems like your argument is "nobody wants this thing that people keep getting mad that nobody offers". Obviously people want it; otherwise who are all of these people?
Half a dozen geeks on HN do not a market make.
That market is half the world. It's not small.
As it is there have always been phones that are open to greater or lesser extent and they have always been market failures, even among geeks.
Personal vehicles have turned out to be A Bad Idea, and now the consensus appears to be we should be moving toward more -- perhaps exclusive -- use of public transport, rather than expect people to own a car.
I'm beginning to wonder if the same isn't true of personal "general purpose computing" devices. 99% of people would choose the locked down device, especially if it makes their favorite apps available: Instagram, Netflix, etc. Which it may not if it were open, because then it could not provide guarantees against piracy or tampering by the end user. But still, from an end user perspective, knowing that stuff from bad actors will be prevented or at least severely hampered is a source of peace of mind.
Nintendo figured this out 40 years ago: buy our locked down system, and we can provide a guarantee against the enshittification spiral that tanked the home video game market in 1983, leading to landfills full of unsold cartridges. It sold like hotcakes.
If this is so, we need a lot MORE locked down tech. Most people on the roads are killers
There are some comments attempting to trick people into thinking that some of the least intelligent people of society have more freedom than regular people.
Freedom of speech and to own your belongings is first. This includes installing what you want on your device.
But should they? Should we also accept Google's browser signing and ban all browsers the bank doesn't like? Am I allowed to accept calls from people they haven't vetted or is it too much of a risk to the bank's bottom line that they might talk me into a scam.
I suppose we should also write off the inevitable privacy and freedom violations in the name of "security".[0] I don't have anything to hide after all.
[0]: https://en.wikipedia.org/wiki/Four_Horsemen_of_the_Infocalyp...
There are also banks which are app-only.
You'll also notice that modern phones have a "spam caller" feature. It either gets data from the phone network or from another source. Should your phone block the most obvious spam calls? Your email client already blocks spam.
At a network level, STIR/SHAKEN is also trying to block you from answering fraudulent calls.
These things are happening right now. I expect most people think a reduction in phone spam is worth the occasional false positive.
You may have a different opinion.
Despite bogus requirements like these, websites have to rely on hacks to figure out what browser you're using, usually making it trivial to spoof (especially between browsers using the same engine). More importantly, websites can't prevent extensions from running, which I believe was one of WEI's goals.
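(As a trivial illustration of how cheap that spoofing is, anything that keys off the User-Agent string can be told whatever you like; the UA string and URL below are placeholders:)

    # claim to be desktop Chrome regardless of what is actually making the request
    curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36" https://example.com/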
> You'll also notice that modern phones have a "spam caller" feature.
I have yet to see a smartphone that enforces such a feature and does not allow the user to disable or configure it.
> At a network level, STIR/SHAKEN is also trying to block you from answering fraudulent calls.
I am unfamiliar with STIR/SHAKEN, but Wikipedia describes it as "a suite of protocols and procedures intended to combat caller ID spoofing". This is fraudulent in the sense of "the caller is not who they claim to be," and not "this caller is on our blacklist" or even "is not on our whitelist". YMMV as some countries require GSM subscribers to ID themselves, but it's still far from a central entity deciding who is allowed to call you.
But otherwise I agree, I hate the same shit about requiring 2fa. Let me fucking decide about how much I care about my account being stolen.
If you want to hold the banks liable for fraud committed against you (which is exactly what happens in many countries), then it’s hardly reasonable to say that they’re not allowed to use what ever technical options they can to prevent that fraud.
You can put forward the argument that banks simply shouldn’t be responsible for fraud committed against their customers. But we only need to look at world of cryptocurrencies to see how well that works in reality.
Of course it's reasonable? You can give someone a job and also ask them to do it a certain way.
And they can say “no”. Which is pretty much what the banks do.
Regulators are one of the entities pushing for these types of limitations. It’s a natural consequence of doing a risk assessment, very hard to justify not applying these limits when explaining to a regulator how you keep your customer funds safe. I’m speaking from experience here having worked with a team that attempted exactly that, but ultimately ended up adding jailbreak/rooting detection anyway.
If you want to tax banks and pay the money directly to fraudsters, I guess that's a model you can aim for.
Apple/Google rejecting some obvious scam apps doesn’t mean people don’t get scammed or hurt in other ways. Just like online age verification doesn’t actually protect children or make you a better parent… it’s just a straw man of sorts, designed to remove agency from users through a false sense of safety.
But it comes with the rather large price of a huge limitation to my personal choices.
It is actually much closer than you think. There are the standard sunglasses and then you have actually rated sunglasses for various purposes. The more extreme the environment, the more the former gives a false sense of safety that just isn't there.
A) It should be harder for non-technical users to accidentally install apps designed to harm them.
B) It should also be possible for anyone to run whatever code they want on hardware they own.
Both can be true, and platforms should support both. Ultimately, it is up to the platform to decide what they want to allow and how they protect their users.
I get why Android is tightening controls: plenty of people install shady APKs they get from random websites or Telegram/WhatsApp groups and get burned. But forcing developers to register with Google isn’t the answer. If I want to run a hobby project on my own phone, I for sure shouldn't have to jump through bureaucratic hoops.
The thing is that Google already has the mechanism to protect users: the Play Store. The real problem is that its review process is weak and flooded with low-quality and malicious apps. Fixing that would do far more good than punishing independent developers. They also don't want to open up anti-trust behavior by actually prioritizing the Play Store and saying that you shouldn't trust an app from a random Chinese App Store.
If Google wants to make Android safer, step one should be cleaning up the Play Store. Step two is making that the obvious, prioritized channel. Only after that should they even think about playing Big Brother.
> A) It should be harder for non-technical users to accidentally install apps designed to harm them.
> B) It should also be possible for anyone to run whatever code they want on hardware they own
Require something in the neighborhood of:
C) It should be possible to prevent people who can run whatever they want from wanting* to intentionally or accidentally install apps designed to harm them; or, failing that, for those harms to either not actually be harmful or to be reversible.
If you consider things that help with (C), and apply this principle — “Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith." — then a lot of iOS/iPadOS developer and app ecosystem can be understood as positive intentionality around flavors of C.
* By being scammed, persuaded, misled, confused, coerced, etc.
The people who were scammed did not run rooted phones. Rooting your phone may allow you to install pirated applications containing malware. But most banking losses comes from scams where the user itself initiated a transaction.
These discussions aren't really about tech. They're all about politics. Libertarian societies grant freedom on the understanding that some people can't handle it and will hurt themselves (and maybe even others). Collectivist societies sacrifice freedom on the altar of socializing individual losses. The first example he gives is from the relatively collectivist UK, where "James" sent all his money to a foreign romance scammer despite being warned by his bank not to do it. The twist that the blog author doesn't mention is how the story ends: his family went crying to the BBC who kicked up a fuss and Lloyds decided to give him the amount he lost i.e. make other bank customers pay for his bad decisions.
This is a spectrum: you can't have a society that both grants maximal freedom and that also protects people from themselves.
As societies differ in how collectivist/libertarian/crime-ridden they are yet tech platforms are global, it's inevitable there will be disagreements about where on the spectrum this judgement call should fall. What Google is doing here is actually quite innovative and surprising for a company as historically woke as they are: they're admitting that the problem primarily affects some cultures/countries and not others, so the level of freedom should be different. The rules are being changed to only apply to phones in specific countries, whilst preserving freedoms for those in others. This is a very interesting decision that stands against a multi-decade trend in the tech world of treating every country and culture as if they are all identical.
Yes. Remove all of the features from the software. Now, I know you're wondering, "What if my users eat the battery?"
Next, remove the hardware itself. Now users cannot harm themselves at all.
Clearly, there should be a way to restrict their access to that too. Keep them from performing unauthorized bodily actions that could result in self-harm. For safety reasons.
I'd like some clarification on what kind of safety level people generally expect from their devices.
As an analogy, consider the different safety expectations of public transport (buses, trains, planes, etc) and individual transport (cars, bikes, scooters, etc).
In public transport, I'm responsible for exactly two things: Choosing the right transport to get on and getting off at the right moment. Everything else is the line operator's fault. The operator is also well within their rights to keep me from unscrewing random panels inside the train, conducting scientific experiments with a plane's onboard WiFi or thrashing the seats when I'm drunk. They can kick me out if I behave too badly. (They can not on arbitrary grounds deny me service if that would trigger anti discrimination protections)
In short, I don't own the train, I don't have any expectations of arbitrary control, but in exchange I do have very high expectations of the service provided, even with very little knowledge of the internal workings of a train.
In contrast, with private transport, I'm much more involved in the technical details of the trip: I have to know the exact route, I have to take every turn myself, I'm expected to know traffic rules and safely interact with other participants and I should at least have a basic knowledge of the internals of my bike or car.
In exchange, I also have much more freedom to modify my transport or to pick a different route.
The question is if the safety expectations for phones are more like the ones of public or of private transport.
The analogy would be: does your private car allow you to change the ratios between different gears? Does it allow you to customize the sensitivity of the steering wheel arbitrarily? Or install any custom kind of AC vents, or make arbitrary cuts in the chassis?
Having said that, first and foremost:
It's very difficult to explain a lot of people's expectations of vetting and privacy. People are completely fine with FB siphoning off their data and spying on them, but they are not fine if anyone can do it. That is, there should be a barrier to installing malware on the phone, and most people are okay with that barrier being a big company. What they're not fine with is any random person being able to do that.
And they will blame the phone manufacturer for all bad applications that can be installed on the phone. If a phone manufacturer allows sideloading applications and a big company requires it, then a culture of sideloading develops, and suddenly the platform is not safe because there's no trust in applications.
The manufactures have to ensure that people can side load their apps and at the same time ensure that all apps of relevance use platforms like playprotect so that people can be given a simple advice "only use playprotect apps".
I am not sure this is a solvable problem.
> The bank blocked a number of transactions, it spoke to James on the phone to warn him and even called him into a branch to speak to him face-to-face.
Y'know, at some point the cost of protecting the dumbest people is too much to be worth it. I am perfectly fine with some people getting hacked, doxxed and scammed out of their life savings if the alternative is everyone losing their freedoms.
Freedoms are important because without them people with power go unchecked more and more. It's a slow process but it culminates in 1) dictatorship at the state level 2) exploitation at the corporate level.
I disagree. Let's go with preferring user agency until banks are in trouble.
> Again, it probably isn't fair to ban users who run on permissive software, but it is a rational choice by the manufacturer. And, yet again, I think software authors probably should be able to restrict things which cause them harm.
I disagree. Ban users when they cheat, not when they have the power to cheat.
Instead of negotiating over what you do with your own devices, when you open a bank account they loan you a small, cheap, ultra-locked-down phone-like device that only runs the bank app and biometric verification. You may still use a web interface for online banking if you want a bigger screen and keyboard, but regardless all transactions need a confirmation through the bank's device.
This way you're free to do whatever you want with your hardware (including even not having one!), without requiring every bank to support every possible platform under the sun.
When the device is being set up for the first time, ask the user if they want to enable developer mode. Make the warnings as scary as you like. When the device is booting, display a prominent "developer mode enabled" message. But, once the device is booted, there should be no way for apps to query developer-mode status, to prevent discriminatory apps.
The only way to toggle the flag after setup would entail a full factory reset of the device. You could go one step further and have it be a fully permanent flag, in efuses.
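A minimal sketch of how such a gate might look in platform code, with every name invented for illustration (nothing here is an existing Android API): the setup wizard writes the flag once, the boot screen reads it to draw the banner, and the only thing installed apps can ever query is a constant.

```kotlin
// Hypothetical framework-side sketch: the boot path can read the real flag,
// but the API surface exposed to apps never reveals it. All names are made up.
object DeveloperModeGate {
    // In a real design this would be backed by something persistent and
    // tamper-evident (e.g. an efuse or a factory-reset-protected property).
    private var enabledAtSetup: Boolean = false

    // Called once by the first-boot setup UI, before any third-party code runs.
    fun setAtFirstBoot(enabled: Boolean) {
        enabledAtSetup = enabled
    }

    // Used only by the boot screen to draw the "developer mode enabled" banner.
    internal fun readForBootBanner(): Boolean = enabledAtSetup

    // The only thing an installed app can ever see: a constant.
    // Apps therefore cannot branch on developer-mode status.
    fun queryFromApp(): Boolean = false
}
```

The design choice is that discrimination requires observability: if the query surface is a constant, a banking app or game can't refuse to run based on the flag, while the owner still gets the scary boot-time banner.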
I'm not the user of my phone, I'm its owner.
This reeks of a power grab that restricts my freedom, disguised with the classic "for the greater good". Same as the new UK age verification laws.
That debate was had already and was lost. Phone scammers get blocked by telcos all the time.
What I am saying is that I have not been stopped from taking calls from unknown numbers just because the call might be a scam. Likewise I don't wanna be stopped from installing an apk just because it might be a scam.
The societal losses of a vast amount of people having no private, uncensored means of communication, which this is leading to, are orders of magnitude greater. The largest cause of early death in the past century was governments murdering their own citizens, and the more power governments have over their citizens, the easier it becomes for this to happen again.
Sounds like an interesting claim, mind sharing your source / calculation?
Same as the privacy invading tech/E2EE for "national security/protecting children online". You think banning VPN or E2EE apps is going to stop bad actors?
NO! As always, it affects normal users, for control/power or money.
How many times does this same thing get played over and over again? It's the same script you know?
I'm allowed to build a wacky unsafe DIY car and drive it around my own property without getting permission from the government. In many scenarios I don't even need a driver's license.
Bringing the analogy back around, maybe one could argue that if I let my phone get hacked such that it becomes part of a botnet or something then it is a danger to other people, but that's not the typical example. Usually these policies claim to be about protecting me from myself while using a device I own.
...but not in others. Which is why those who still have the freedom will continue fighting for it.
By the way, if you do go down the route of building your own phone, pedophiles, drug dealers, and terrorists will use it, and you're now on the hook to do something about it.
...Back to square one.
We could ban the internet completely and minors wouldn't be any safer.
Which is a great advertisement for GrapheneOS.
There were unofficial statements made by third party android news sites that Catalan police claimed that "Every time we see a Google Pixel, we think it could be a drug dealer"... which, while obviously wrong on its own, is completely different than claiming they are actually arresting people on suspicion of drug dealing merely by using GrapheneOS.
Please stop spreading misinformation.
When there are problems reported about an app, there has to be a known party to hold accountable. I agree there should be a developer path, complex enough that only people who understand all the impacts will use it, to sideload apps they own or apps from someone they trust, but the general population has to be protected unless they are individually savvy.
The app shouldn't get to decide what permissions it "can't work without." That's how you get calculator apps that claim they can't possibly work without GPS location.
Can you please explain why there is no big push from Google and Apple to remove microphone and camera access from browsers? You claim that most users are "less skilled" and will allow anything, so for the greater good why not push to remove the microphone, camera, and file-upload permissions? Why do we trust these users to read a permissions popup?
Or, if the popups are not clear or good enough, maybe it is not the users' fault?
In this case, one nuance is the fact that camera and microphone permissions are very very often necessary in the browser for video chats. Y'know, exactly the kind of thing that grandma might want to do with her grandkids on a regular basis.
Though, that document also states:
> Our research [1] finds that users often make rational decisions on the most used capabilities on the web today — notifications, geolocation, camera, and microphone. All of them have in common that there is little uncertainty about how these capabilities can be abused. In user interviews, we find that people have clear understanding of abuse potentials: notifications can be very annoying; geolocation can be used to track where one was and thus make more money off ads; and camera and microphone can be obviously used to spy on one’s life. Even though there might be even worse abuse scenarios, users aren't entirely clueless what could possibly go wrong.
You have the issue reversed. I'd say people should be able to buy specifically locked-down phones separately if they want to. Actually, they already can.
I literally have a banking app that will refuse to run on an “unsecure” phone. Today I can still install unsigned apps, but removing that ability is explicitly the goal of this policy change.
A decade ago, we had Xposed modules that would hook into the permission system and give you the option to feed apps fake, generated data. So if an app tried to scrape my location or phone number or whatever else, it got garbage.
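For anyone who never saw those modules, here's roughly the style of hook they used, written against the classic XposedBridge API from memory; the exact target methods and the fake values are just illustrative, not a faithful copy of any particular module.

```kotlin
import de.robv.android.xposed.IXposedHookLoadPackage
import de.robv.android.xposed.XC_MethodHook
import de.robv.android.xposed.XposedHelpers
import de.robv.android.xposed.callbacks.XC_LoadPackage

// Sketch of an Xposed-era module: after the hooked getter runs,
// overwrite its result so the calling app only ever sees junk.
class FeedGarbage : IXposedHookLoadPackage {
    override fun handleLoadPackage(lpparam: XC_LoadPackage.LoadPackageParam) {
        // Hand every app a fixed, fake device ID instead of the real one.
        XposedHelpers.findAndHookMethod(
            "android.telephony.TelephonyManager", lpparam.classLoader,
            "getDeviceId",
            object : XC_MethodHook() {
                override fun afterHookedMethod(param: MethodHookParam) {
                    param.result = "000000000000000"
                }
            }
        )
        // Same idea for location: every Location object reports latitude 0.0.
        XposedHelpers.findAndHookMethod(
            "android.location.Location", lpparam.classLoader,
            "getLatitude",
            object : XC_MethodHook() {
                override fun afterHookedMethod(param: MethodHookParam) {
                    param.result = 0.0
                }
            }
        )
    }
}
```

The point being: this only ever worked on rooted or unlocked devices, which is exactly the class of device the new policy squeezes out.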
Everything about the so-called stores is so decrepit that the safest way to get any decent software is sideloading / F-Droid. How could you sincerely argue otherwise?
Apple is only slightly better. They limit espionage by other parties but not their own. And Meta ads still exist, so the block was not very effective.
You do not. You can go into System Settings and allow the app to run.
It is technically possible, yes. You can turn Gatekeeper off via the command line in various ways, or even via an obscure deliberately non-discoverable set of GUI tricks.
But it isn't reasonable to expect any normal person to do that. So, in practice, the developer of any app that isn't some open-source widget targeting developers does register with Apple. In this sense it also isn't possible.
This isn't specific to ARM. It's also been true on Intel Macs for a long time too. The only thing that changed on ARM is some minor detail - the kernel now requires a "signature" for all binaries, but a "signature" is also allowed to be a hash match against a local machine-specific whitelist, so this doesn't make much difference in practice to anyone except toolchain developers. It seems to have mostly been about reducing tech debt in the security stack.
The registration process is however very lightweight. There are no app policies involved beyond "don't distribute malware" and "verify your ID so we can do something about it if you do". It's not like the app store where there are lots of very subjective criteria. To get an identity is nearly automatic, you can do it as an individual with a credit card and approval is automated. Ditto for applications: it's automatic and driven by a simple (albeit undocumented) REST API. You upload a zip containing your signed app to S3, it's processed automatically, the app now works. The notarization API is extremely open - you need an API key, but otherwise anyone can notarize anything, including apps written by other people. So in the early years of this system when lack of notarization just triggered a security warning, lots of people notarized any app they found that was missing it. This made a nice smooth backwards compatible path to transition the ecosystem. Nowadays, there is no bypassable security warning, an unnotarized app is just described as corrupted and won't open without tricks.
So - does macOS "support" sideloading or not? It's very ambiguous. You can argue both yes and no.
Apps created by me for my own routine: does that mean I would not be able to install my own apps?
Please read this primer on applied cryptography and hand over ID and personal information to be able to be "managed" within the ecosystem in which you aspire to be more competent.
Now that Android is going all-in on this authoritarian BS, it's time to build a new phone operating system, or at least make the ones we already have viable.
It’s a monumental undertaking, but it needs to be done.
What if we'd instead require users to verify themselves before being allowed to see ads? I'm sure that would be more effective for preventing scams, fraud and abuse.
There are plenty of apps available through the Play store that are not safe. Even if side loading requires chain-of-trust, malicious behavior will remain rampant. I'll concede that it restricts the ease with which one can redistribute malware but by how much? It doesn't seem significant to me compared to the hassle for end users/developers.
It all seems so contrived. The only rational explanation to all of this is backpedaling into a closed garden.
The push to lock down the Google app ecosystem is just Google's eternal quest to lower costs.
Reviewing apps, even if automated, is expensive. A while ago they demanded that all app publishers get a DUNS number. It's obvious why they did this - they want to rely on the reputation of the company (easy, automated) rather than detect malicious apps (slow, may require manual intervention).
One important thing to note is, the threat model changes depending on the user. If a "poweruser" installs a card game app, and it suddenly pops up a screen that looks like their bank login, they're going to close the app and report it. Grandma might get confused and think she was checking her bank balance. So they need a complex set of tiers and warnings to ensure that only users who know what they're doing get apps that could do something like that.
But, we all know that the above scenario is hardly the worst thing an app can do. Thanks to mysteriously bad security, there have been all kinds of Android vulnerabilities that allow one app to spy on another app's data files. Maybe locking down the ecosystem so that only "good" apps can spy on one another is the win/win solution for them.
Why is this a question of _allowing_ at all? Who is my hardware provider to act as my guardian, such that they must _allow_ me to install the software I want to install?
>Is it possible to allow people to do sports and keep them safe?
>Is it possible to allow people to roam freely and keep them safe?
>Is it possible to allow people to not be locked up in a padded cell and keep them safe?
People are responsible for what they are doing, and teaching them about technology is the best way to deal with the example here, as it doesn't infringe anyone's human rights and gives everyone the resources to check their sources.
Similarly, every modern society has rules to keep people safe when roaming. That might be as simple as warning signs or as complex as a coastguard.
We've had decades of warning people about online scams and I don't see any slowdown in the volume of scammy emails that I receive. Education clearly isn't working, and that imposes a cost on all of us.
Millions of people hurt themselves, physically hurt themselves, every day, doing things that we could easily restrict. Yet we still allow them to buy knives, glassware that can break, hammers, power tools, non automated vehicles of all kinds, the list goes on.
We also spend a lot of time educating them on the dangers, far more than is spent warning about online scams, and we do it at a far earlier age (age 0, for some of them).
Of course we still allow the sale of safe knives and plastic mugs, so people are free to choose; that point still stands. I'd argue that there is more competition in tableware, and less friction shifting between it, than there is in mobile operating systems.
This is the exact same thing. We don’t spend time educating users of roads on how the road stripe width affects their safety, nor about how train tracks carry radioactive material through their communities every day. We let the companies and governments work to make things safer for everyone, even if it comes at the expense of some.
I honestly can’t believe I’m having this argument. Making things safer for the world should be a goal we all strive for, even if a very very incredibly small minority lose a tiny tiny bit of what they want.
Google and phone manufacturers have been actively moving in that direction and have a long history of being actively hostile to those things. This is just another move on the same board to restrict these freedoms.
You mean, the iPhone, which restricts everything even more?
Outlaw all non big corpo operating systems?
Perfect surveillance? All because some boomers can't into common sense?
It's also ironic that you bring up warning signs as a counterexample to my point, as it's exactly what I am saying. You can warn them, but you don't bar them from doing so.
For example https://www.forbes.com/advisor/legal/personal-injury/attract...
Societies often place limits on individual freedoms.
Google is telling you to buy their particular brand of fence (which, inexplicably, has an insane markup). And they disallow it for pool shapes they don't like, and you don't have an appeals process for it.
So if you're a 3 year old child that wanders into a neighbor's yard and drowns, it's on you?
We know young children wander where they're not supposed to go, despite their parents' best efforts to supervise them.
So we do our best to legislate safety regulations when they can be low cost and high reward, like preventing children from falling into pools and drowning. We can't do everything, but when it comes to pool fencing the benefits seem to obviously and greatly outweigh the harms.
> Is it possible to run an app store and keep people safe?
The answer is clearly "no", so I'm not sure what we're discussing.
If there’s a real downside, they’ll be affected.
And we're back to "just break into the thing you've already paid for." Nope. Go away. No more smartphone crap.
Tech industry has completely lost the goddamn plot. Or more specifically, is doing everything it can to make it nigh impossible for the average user to navigate the info asymmetry to actually use the hardware they paid for.
That's how we became the smartest animal on the planet. But it no longer works; we are very good at keeping everyone alive. And there's nothing wrong with that, as long as we don't compromise our freedoms to achieve it.
Some people getting exploited is the modern equivalent of leopards eating your face. It would be nice to protect people from it happening but NOT by everyone giving up basic human rights. And yes, in the modern world, running any software on your hardware should be a basic human right.
Especially at a time where computation is starting to resemble intelligence. Otherwise we all become serfs all over again.
If you can't explain why I am wrong, consider that I am right.
To compute on one's own is to open one's electronic soul to the Sins of Free Software. Such devilish arts must be shunted to the margins of society, till they may be purged on That Day when all shall bask in Google's light forevermore.
> The first is that a user has no right to run anyone else's code, if the code owner doesn't want to make it available to them. Consider a bank which has an app. When customers are scammed, the bank is often liable. The bank wants to reduce its liability so it says "you can't run our app on a rooted phone".
> Is that fair? Probably not. Rooting allows a user to fully control and customise their device. But rooting also allows malware to intercept communications, send commands, and perform unwanted actions. I think the bank has the right to say "your machine is too risky - we don't want our code to run on it."
> The same is true of video games with strong "anti-cheat" protection. It is disruptive to other players - and to the business model - if untrustworthy clients can disrupt the game. Again, it probably isn't fair to ban users who run on permissive software, but it is a rational choice by the manufacturer. And, yet again, I think software authors probably should be able to restrict things which cause them harm.
It's not clear to me whether in this fragment the author is stating the two alleged cracks in the argument or rather only the first one — the second one being Google's ostensible justification for the change. Either way, neither of these examples are generalisable arguments supporting that 'a user has no right to run anyone else's code, if the code owner doesn't want to make it available to them'.
With regard to banking apps, the key point has been glossed over, which is that when customers are scammed the bank is 'often' liable. Are banks really liable for scams caused by customer negligence on their devices? If they're not, this 'crack' can be thrown out of the window; if they are, then it is not an argument for "you can't run our app on a rooted phone", but rather for "we are not liable for scams which are only possible on a rooted phone".
As for the second example, anti-cheat protection in gaming, the ultimate motivation of game companies is not to prevent 'untrustworthy clients' from 'running their code'. The ability of these clients to be 'disruptive to other players' is not ultimately contingent on their ability to run the code, but rather to connect to the multiplayer servers run by the gaming company or their partners. The game company's legitimate right 'to ban users who run on permissive software' is not a legitimate argument in favour of users not having full control over their system.
The problem if you are a bank is that scammed people can be very persistent about trying to reclaim their money. There's a cost to the bank of dealing with a complaint, doing an investigation, replying to the regulator, fielding questions from an MP, having the story appear in the press about the heartless bank refusing to refund a little old lady.
It is entirely rational for them to decide not to bear that cost - even if they aren't liable.
Who is going to prove that though? It’s much simpler and less stressful on our court systems if a bank just says “we don’t allow running on rooted phones” and then if a user takes them to court the burden is on proving whether the phone was rooted or not rather than proving if the exploit that affected them is only possible on a rooted phone.
In the UK, not legally liable. However culture is not 100% aligned with the law and in practice banks that stick to the rules will be pilloried by the left-wing press and politicians, they risk regulator harassment etc, so they sometimes decide to socialize the losses anyway even when the law doesn't force them. The blog post cites an example of that.
To stop this you'd have to go further and pass a law that actively forbids banks from giving money to people who lost it to scammers through their own fault.
If it doesn't, don't we all have our answer on what we should do?
> 01. Vulnerable members of society should be protected from scams.
00: yes, always; 01: yes, but not at the expense of 00 (or probably some other things)
> Safety is not a valid reason to limit freedom.
What about (a) speed limits, (b) drink driving laws, (c) seat belt laws, and (d) helmet laws for bicycle and motorcycle riders? I assume that in your world view all of these categories are "limiting your freedom". I am fine with all of them. The logic that your device security isn't tied to my safety needs to be rethought. Every leaked password reduces password security for everyone. Every successful phish makes the next phish easier.
Your device safety is tied to my wellbeing.
How would you feel if your brain was “safeguarded” against potentially harmful thoughts?
Is it possible to restrict software installation and keep users free?
Showing them the permissions requested, training them to not install things from outside the store unless they know what they're doing, explicitly needing to manually enable installation of software from the outside, etc etc.
That's it. You've done your job.
And if, despite all that, some people still want to continue to use their phones in a dangerous manner LET THEM suffer the consequences of their ignorance. Let them bruise their knee. They're grown ups, presumably. Some people just need to learn the hard way, and we shouldn't architect the entire system to protect that lowest common denominator.
You absolutely do not need to childproof the entire phone to protect people from themselves. For me, that's why it's patently obvious that this move has an ulterior motive.
As the article illustrates, users will ignore all warnings and get themselves scammed, and so the last resort is to not even give them the option.
Even something like GrapheneOS, in theory the best path to security and privacy and liberty, was falling way short even before this latest announcement from Google.
The problem lies partially in the app ecosystems, which embrace spyware and exploiting users (requiring all the worst Google APIs), and partially in governments, which will leverage any centralized organization like Google to gain control (EU chat control etc.).
The solution cannot be just a custom OS or an OS fork. In fact, ecosystem compatibility is toxic and slows down growth of real alternatives. There needs to be some wholly independent and decentralized offering.
The challenge is hardware compatibility and core services like digital IDs. Most apps should be solved by using a website instead.
These issues are especially important because the future is increasingly digital. Smart phones, smart glasses, smart watches, VR glasses, smart homes, and even brain implants. I don't want to live in a future where I'm either left behind or my whole life is controlled by Google/Apple/the government/etc.
There are three ways to deliver protection: build better walls, defeat attackers after successful initial attacks, defeat attackers before successful initial attacks.
The article ties itself into knots because it recognizes that the first way cannot deliver 100% security. But it refuses to recognize that there are two additional ways.
The United States military could go after scammers operating from foreign compounds. It could treat the economic targeting of American citizens as acts of economic war. It chooses not to. Freedom is not free, and when your country chooses to literally not fight for your freedom, it's hardly any wonder that your freedoms are eroded.
Remember XKCD 538: https://xkcd.com/538/ Cybersecurity and physical security are fundamentally linked.
That scammers can operate from anywhere is beside the point. More often than not, law enforcement and the military know where that is. A conscious decision is made not to prioritize or fund fighting it.
But try applying that approach to India or China. Do you think those countries are going to allow the U.S. military to operate on their home turf, shooting at their citizens, and not retaliate? It doesn't even have to be military retaliation; the U.S. economy is heavily intertwined with those countries. Just look at the consequences of Trump's tariffs. Do you honestly think U.S. citizens would be willing to trade away the benefits of trading with those countries just so you can run a military raid on a building of scammers?
It's not related to scamming, but the US did just bomb Iranian nuclear facilities; the reaction was a face-saving gesture that was intentionally weak so as to de-facto de-escalate. So the answer to your question is basically yes. The costs of a wider war are too large to the host country to make it worth it to continue to allow scammers to operate freely.
> Just look at the consequences of Trump's tariffs. Do you honestly think U.S. citizens would be willing to trade away the benefits of trading with those countries just so you can run a military raid on a building of scammers?
Don't you realize that Trump's election, his tariffs, all this is due to popular sentiment that the US was getting the raw end of the deal in its foreign affairs, that there was a need to, literally, put America First? If anything, such ideas, to have targeted attacks and enforcement aimed at the exact actors targeting American citizens, have been at their most popular in decades, at least since the Iraq war went off the rails.
Last I checked Iran and U.S. didn’t have a great relationship, so I don’t really know what point you’re trying to make. If anything you’re just further reinforcing my point. Iran is already cut off from the U.S. financial system, not many people there running scams against American citizens when they literally can’t transfer the money into the country.
> Don't you realize that Trump's election, his tariffs, all this is due to popular sentiment that the US was getting the raw end of the deal in its foreign affairs, that there was a need to, literally, put America First?
What does popular sentiment have anything to do with the practical reality? You can have all the popular sentiment you want, doesn’t change the facts on the ground. If US popular sentiment is that it wants to speed run a declining empire, that doesn’t change the fact that even Trump is cowed by the likes of Xi Jinping, and amusingly, Putin.
> If anything, such ideas, to have targeted attacks and enforcement aimed at the exact actors targeting American citizens, have been at their most popular in decades, at least since the Iraq war went off the rails.
Are you honestly trying to equate an atrocity like 9/11 to financial fraud?
> There are three ways to deliver protection
While I agree with your idea, I'd like to point out that there are prior steps: teach people to be less vulnerable. Teach people to be less greedy. Teach people the consequences of actions.
Being less vulnerable has an obvious meaning: knowing how not to fall for common scams.
Less greedy: some scams revolve around the idea of quick and easy profits, and the comedown hurts because the person thinks they will get x and ends up losing 500x.
Consequences of actions: there's a lot of value for the group in observing the (bad) consequences of someone's actions. Pain, even other people's, teaches something. The more we protect people from consequences, the safer they are from small losses, until their actions go beyond the protection and the consequences are catastrophic.
That's beside the point, which is that the line is too often being crossed, and perpetrators are allowed to keep committing their crimes, instead of the military and/or law enforcement stepping in and carrying out their mission to protect us, especially the most vulnerable among us.
Seems like plenty to me. If someone is warned about what they are doing, that's good enough.
At point of purchase, you get to decide whether you want secure mode or not. Then after that, if you want to change it, you have to open a support ticket with the manufacturer.
Kinda like how SIM-locking works.
If they can be convinced of that, how hard will it be for a scammer to say "we've detected a problem with your phone. To avoid being imprisoned for piracy, please file this support ticket so we can debug things."?
Making the device so locked down that no such con could exist also means there's no way to use the phone in ways that haven't been authorized, and as a power user, I detest that I am paying a price for the safety of those who are too stupid. I do not want to pay that price.
Conveniently, google gets to remain in a position to earn more money from being in the controlling seat.
as they say, if you trade freedom for security, you'll end up with neither.
* Everyone has to trust one of two giant mega-corporations to make good decisions for everyone
* Everyone has to take on the evaluation of everything themselves, do their own admin, understand opsec, etc etc.
Freedom does not entail the latter. Freedom means having the freedom to do it, but also the freedom to delegate it, and to decide who to delegate it to. We don't have to be technology "preppers". We can set up and fund independent organisations to do this - like Debian, for example - and have competition between them.
Yes, that means some people will delegate their trust to their religious cult. That's the price of freedom.
Do you want a phone where you trust Apple/Google/a third party to make a "malware or not" decision? Or one where all that is turned off and you can do whatever you want? Go right ahead in either case: you control the trust, rather than having it made for you by the platform vendor.
Similarly, we have certificate infrastructure where the TLS roots are owned by a small number of parties. These are generally trusted, but some people and organizations edit them down (e.g. removing roots from state actors deemed untrustworthy). It's all rather hidden, but in general there are a lot of choices.
Even linux distros, you pick which package signing keys you trust.
And Docker/K8s... oh wait, there are no default keys, containers remain developers' puke bags in most cases, and the repos are regularly rug-pulled by corporations...
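To make the TLS-roots point concrete, here's a minimal sketch (assuming a stock JVM; nothing Android- or browser-specific) that lists the root CAs the current machine delegates trust to. Which roots stay in that set, via cacerts, the OS keychain, or a distro's ca-certificates package, is already a choice the owner can make.

```kotlin
import java.security.KeyStore
import java.security.cert.X509Certificate
import javax.net.ssl.TrustManagerFactory
import javax.net.ssl.X509TrustManager

// Minimal sketch: enumerate the CA roots this machine currently trusts.
// Editing that set is the owner's call, not the platform vendor's.
fun main() {
    val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
    tmf.init(null as KeyStore?)  // null = use the platform's default trust store
    tmf.trustManagers
        .filterIsInstance<X509TrustManager>()
        .flatMap { it.acceptedIssuers.toList() }
        .forEach { cert: X509Certificate ->
            println(cert.subjectX500Principal.name)
        }
}
```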
Once you’ve explained the difference between Google and “the internet”, you may stand a chance. I wish you luck, I’ve been trying that for a while.
BRB, heading out for popcorn.
Imagine how the internet would look if we had just a single authority issuing SSL certificates.
And if a lone developer has a cool new idea, and its app is recommended by users I trust on an obscure specialized forum, then I'll decide to install their app from "coollonedeveloper.com".
If only we could invent some kind of "domain names" system that one would have control and responsibility over, instead of trusting some broken unscalable app stores...
When I started computing, sideloading was just called installing a program.
Corporations have a long tradition:
- from printers refusing to print unless the ink was of the blessed brand
- to planned obsolescence of Apple phones
- to the deliberate incompatibility of Apple's trash Lightning cable and charger with well-established standards
- to Apple refusing out of store transactions so they can steal developers' profits
- to Anthropic luring users saying they will never train on your chats but then training on your chats
- to Facebook basically backdooring your device so they can track you across VPN and incognito https://localmess.github.io/
- to locking down the OS on your own device so it can't run software that isn't blessed by Google, letting them control what you install and bully developers for a bigger slice of the pie
Companies like Meta, Google, Microsoft, Palantir, Apple are absolute garbage, a menace to society, a parasite that grows like a tumor and can only be stopped not by the non existent "invisible hand of the market", but by being regulated and fined to oblivion.
Instead we pour billions into educating users to be submissive sheeple.
"Freedom is not worth having if it does not include the freedom to make mistakes."
Alternatively, sideloading could require you to delete all App Store apps. In other words, disabling Google Play Protect should require you to wipe your phone. This is another barrier that will prevent a lot of people from getting scammed.
It wouldn't solve the "getting infected via cracked apps" problem, but it would at least solve the "users being scammed into sideloading something they don't want" problem.
At what ratio do you say "this freedom is worth it"? I feel like the narrative is always that we point out one bad thing that happened, and then the immediate answer is to take away freedom. What happened to a balanced analysis?
I think what happened is due to asymmetry: it's very easy to point out the cost of freedom, but it's very hard to articulate its value.
That's why we so relentlessly march toward authoritarianism: we think taking away freedom will solve our problems. Then, one day we wake up and realize the lack of freedom has become our biggest problem, but by that point, it's too late.
1: Still basically required if you have young children and want things like play dates. Oh Signal? Yeah, the recent push means that some tech-savvy users now have both Whatsapp and Signal installed. In the Netherlands, you can do without Whatsapp, but not if you don't want to turn your child into a social recluse.
2: For example, in order to use Germany's Deutschlandticket one of the participating public transport companies apps is required. This is a huge regression compared to the initial paper ticket, but there it is.
Yes. It is a basic human right.
> This is a question where freedom, practicality, and reality all collide into a mess.
No; it isn't. The answer is clear and not messy. If you are not allowed to run programs of your choice, then it is not your hardware. Practicality and "reality" (whatever that means) are irrelevant issues here.
Maybe you prefer to use hardware that is not yours, but that is a different question.
But that's a system design issue as opposed to an argument against user freedom.
(For the record I have nothing but disdain for those that choose to go this route. Looking at you rustup home page. In contrast LLVM has the decency to provide an apt repository for nightly images.)
In that case, the solution should be to raise the lowest common denominator. Lots of issues like that could be prevented by investing in education to increase technology literacy. But long-term investments (even public ones) do not match well with quarterly reports.
However, this isn't entirely a tech problem - it's a social/human one.
Not every mechanic has a driver's license. Sure, they may enjoy working on cars and the technology of cars... but for one reason or another they may have never gotten or have lost their driver's license.
Not everyone who is tech literate is similarly socially literate. I have programmer co-workers who have been scammed into sending gift card authentication codes or installed malware (or allowed the installation) onto their personal computing devices.
It isn't possible to prevent someone from accessing the internet any more than it is possible to prevent them from accessing a phone.
I am not saying that one should have a license to access the internet. Rather, I am saying that for a device that holds and maintains the authentication mechanism for banking transactions, it is not unreasonable for the maker of that device and its software to attempt to mitigate the possibility that they are held liable for negligence in allowing user-installed software to do banking without the owner's consent.
Without certainty that everything in the operating system and hardware is locked down to the point where non-consensual access by malware to those banking capabilities is completely blocked (and thus that they're not liable for negligence), the wall being put up is "no software that has not been vetted can run on this device."
Consider that the phone is often the authentication mechanism and second factor for authorization to restricted systems. Authy, Microsoft Authenticator, and other 2nd factor applications typically do not run on general computing devices.
Technical literacy does not imply social or security literacy.
Indeed. And people were falling for scams long before the Internet. What's new is the push to make that the fault of bystanders... thus causing those bystanders to intervene. It's neither the bank's fault, nor Google's fault, if somebody falls for a scam. Or installs malware. Or whatever. If you try to make it their fault, they're going to do really annoying things that you don't want.
Sure, you can sell security tools, or curation, or whatever. Many people will even want to buy them, but things break when that starts being a duty. And the only way to prevent it from becoming a duty is to accept that people own their own mistakes.
This tends to be counter to consumer protection laws or data privacy laws.
A company that can be held to strict liability for their actions can be sued (and be found liable) even if they presented that the action is unreasonable or dangerous.
Are you saying a consumer who buys a 100% "you can do anything on it" device is liable for every action that device takes, no matter what initiated that action?
To me, the argument that you should be able to do anything on the device and be held liable for all the actions that device allows is very similar to that of "the maker of the device has no liability for providing a device that can be misused."
If that is the case, then (to me) this would need to be something that would need to be changed by the courts and the laws (and such a company would need to pull completely out of Europe).
You may be exaggerating it, but insofar as you're right, you're just describing the problem.
That’s just it. Software isn’t being vetted. Witness all the scam apps in the iOS and Android app stores. Even paid developer accounts don’t stop people from publishing these, nor does Apple’s walled garden protect you from them.
That said, for sensitive apps they tend to go through more strict scrutiny of their functionality. Publishing a "Wəlls Fargo" application will likely not get approval.
The question isn't "does it need to be 100%?" but rather "if it was not done at all, would Apple or Google be liable for flaws in their software (e.g. VM breakouts) that allow malware to do banking transactions, location tracking, or place calls (e.g. dialing 1-900 numbers) without user consent?"
I'm fairly certain that Apple and Google take measures to limit their liability. With how courts and countries are finding technology companies liable for such (consumer and data privacy protections), I would expect to see more restrictions on the device to try to further limit the company's exposure.
Absolutely not a Scooby.
It’s a more tricky issue where Google and other parties can restrict access to their services to devices they deem legitimate. Their services, their rules. Your hardware. Different arguments required.
It’s everywhere: Widevine is used to prevent stealing 4K content (incl ATSC 3.0), gaming providers use it for anti-cheat, banks use it to rate limit abuse. It’s not just Android.
(I say this as someone with an Apple Vision Pro running visionOS 1.0 with the hope to jailbreak it one day. I’m actually unable to do whatever I want to their hardware, unlike my Pixel phones.)
And yet you can't install an alternative OS like Mobian, postmarketOS or PureOS due to the closed drivers and specs.
Providers still implement it where they can, like for blackout restrictions for US sports games: impossible to enforce on the web because I can spoof location. Very possible to enforce on iOS because jailbreaking is not possible. Possible to enforce on Android because you can check if spoofing was made possible.
It’s currently the primary reason I can’t play games online on Linux.
> Yes. It is a basic human right.
Says who?
What's your philosophical argument in favour of this?
> hardware you own
Please explain how owning an item of hardware implies that running whatever computer program you want on it is a basic human right.
If someone else could use your car without your permission, do you own the car or do they?
If someone could grow their own plants in you back yard, do you own the garden or do they?
If someone else could choose what programs run on your computer, do you own the computer or do they?
Saying "basic human right" instead of just "basic right" may be odd, but definitionally, owning a thing means having the right to say how it is used. Either you own it and have that right, or you don't own it and don't have that right. That's what owning means.
There are times when it is necessary to limit the rights that an individual has so that the system that the individual lives within can work.
You can buy a radio transmitter, but you're not allowed to operate it without a license. You can likewise buy a car, but you aren't allowed to operate that either without a license.
You do not have the right to modify your phone so that it acts as a radio frequency jammer.
Possession of a device does not give an individual unrestricted rights to what can be done with it.
I’m fine with government requiring smoke detectors in my home, I’m not fine with completely unregulated private entity deciding how I live in my home, bought with my money.
And in the case of a muffler, there's literally no one in this entire world who can stop me from removing it. There are repercussions for doing so, but nobody has taken away my right to remove it.
Is it illegal to spin up a Linux server on your mobile phone?
If people still go for it, then it is their responsibility. A lot of things in life require responsibility because otherwise the results can be disastrous. But we don't forbid them, because it would be a huge violation of freedoms.
You have to take into account that the threat model here is vulnerable people, often older, being taken in by scammers who talk to them for weeks and gain their complete confidence. To the victims, it feels like a real romantic relationship, not someone who could even possibly be a scammer.
Also, scams also happen outside smartphones.
What's next? Are we going to revoke people's control over their financials because they might be scammed? Let's have the bank approve before we can do a transaction. And since we are using their payment platform, maybe they should also take 30%.
Please stop feeding their narrative. Scammers are Google/Apple's "but think of the children".
Aren’t they? I ask my partner for investment opinions all the time.
> Let's have the bank approve before we can do a transaction.
Yes… That’s already how it works. Banks use heuristics to detect and prevent suspicious transactions. That’s why most of these scams ultimately involve crypto.
Obviously, the probability of it being a scammer reduces with the amount of time. In the end it's a function of time vs. effort. Scamming billionaires by marrying them and waiting until they die happens frequently enough. A 5 year scam for a few thousand bucks, unlikely.
As usual, use common sense, which you would have to do anyway if you do investments.
... and it's really fucking annoying when their heuristics misfire-- which is not at all rare-- especially since they do all they can to externalize all costs of that to the customer.
We've been trying to educate people about passwords and phishing for years, decades even, and it has not worked. Further, every day another ten thousand people (in the US) need to be educated:
The proverbial grandparents will follow the instructions of the scammers and will click through all of that. We've had decades of empirical evidence: people will keep clicking and tapping on dialogue boxes to achieve their goal.
People have physically driven to cryptocurrency ATMs on the instructions of scammers:
* https://bc-cb.rcmp-grc.gc.ca/ViewPage.action?siteNodeId=2136...
* https://www.usatoday.com/story/money/2025/04/21/bitcoin-atm-...
Warning sheets will do nothing.
Those of us with elderly parents and piblings (aunts/uncles).
Because at the end of the day the scammer is going to convince your grandma to go to the bank, withdraw the entirety of her savings and send them to the scammer in an envelope.
Any technical restrictions therefore only harm our personal freedoms and don't actually protect those who are vulnerable because those people's problems aren't technical in nature.
And it doesn't have to be parents and children; that's just the common example that's brought out every time this comes up.
As always it comes down to insulting and emotionally guilt tripping people to screw them out of their freedoms and of course there's never even a shred of evidence to support any of these incredible claims. You're laying it on too thick, give us a break.
> You’re acting like we (and these massive corporations) haven’t been trying for decades at this point.
You're acting like this would make a dent in the total number of people who are scammed every day.
And it just so happens that the only acceptable remedy necessitates infringing on billions of people's personal freedoms which will, incidentally, secure trillions in future profits for these corporations. All that for a temporary speed bump that would only affect a minority of scammers who would adapt in a month.
Says the person that thinks they are losing personal freedoms when a company makes a product change and they just don’t want to bother switching to a different product.
Buy a different phone. This isn’t affecting your personal freedom.
And yes, it does affect the number of scams that people fall for, as evidenced by iOS’s hiding of links in scam messages. It forced scammers to try and get the scammee to jump through several more hoops just to be able to open links. Immediate drop in scams.
There are tons of things to be done. None of them are affecting your freedom. Buy a different phone.
Many, probably most, of the people most at risk aren't going to do that.
When you're (somewhat) drunk, you know that you're drunk, and you're still able to comprehend how that will slow down your reactions while driving. When you're being scammed, you think you're right... and if you begin to doubt that, you may tend to push the thought out of your mind rather than follow it through, and to evade things that might bring it back. And it's very hard to admit to yourself that you're permanently impaired in that sort of way... especially when you're impaired in that sort of way.
Give the knowledgeable the freedom to use their skills. Separately, develop ways to help/protect specifically those that need it.
My suspicion is: were you to list them, running programmes on hardware you own would be fairly low on that list.
The social structure of the smartphone app ecosystem is remarkably similar to the cable provider -> network -> show situation from before too.
They're clearly just computers, they're "hardware you own", but you've never been able to run whatever software you want on them. But it's been like this since the 1970s and there's never been an uproar over it.
For me the difference is that you know what you’re getting into when you buy a console, and it’s clear up front that it’s not for “general” computing. I’m inclined to put smart phones into this category as well, but I can see how reasonable people may disagree here.
I think there is a huge difference. You can perfectly live your life without a game console. Even if you are a game addict and it is absolutely necessary for you to live, you could buy a PC and game on that.
Smartphones are a necessity nowadays. Some banks only have smartphone apps (or require a smartphone app to log in to their website). Some insurers want you to upload invoices with an app. Some governments require an app to log in (e.g. the Dutch DigiID). You need a smartphone to communicate with a lot of organizations and groups.
Smartphones have become extremely essential. And two companies can decide what does and what doesn't get run on a smartphone and they can take their 30% over virtually everything. They can destroy a company by simply blocking their app on a whim (contrast with game studios, which could always publish their game for PC or Mac or whatever).
It is not a healthy, competitive market. It is the market version of a dictatorship. And Google forbidding non-app store installs is making it worse.
Governments should intervene to guarantee a healthy market (the EU is trying, but I think they are currently worried about the tariff wrath).
Unfortunately, the copyright lobby of the video game industry was too strong in the 70s/80s/90s, so here we are.
In the future, when your whole house is controlled by a computer, do you want that computer to be controlled by Google or to be controlled by yourself?
People started free and equal, then some specialized into warriors[0] and gradually built deeper and deeper hierarchical power structures, called themselves "nobles" and started exploiting the "commoners".
At some point people snapped, killed a bunch of them (French revolution, US was for independence, etc.) and decided they wanna rule themselves.
And then companies started getting bigger and bigger, with deeper hierarchical power structures; the "nobles" call themselves "executives" or "shareholders", and the people doing the actual productive work are no longer "commoners", they are "workers"[1].
[0]: And thus controlled the true source of power - violence.
[1]: Ironically admitting that people who are not workers are not doing real work, they are just redistributing other people's work and money.
I don't like describing it as cycles because that is too simplistic and pretends it is inevitable, robbing people of agency.
I prefer to think of society as a system where different actors have different goals and gradually lose/gain influence through a) slow processes where those with influence gain more from people who are sufficiently happy to be apathetic b) fast processes when people become sufficiently unhappy to reach for the source of all real world influence - violence.
This happens because uneducated/dumb/complacent people let it happen. It can be prevented by teaching them the importance of freedoms and to always fight back. But that goes directly against the interests of those in power, starting with parents who want obedient children.
The banking system has been relying on remote attestation for decades to ensure that devices used in settling financial transactions have not been tampered with:
https://en.wikipedia.org/wiki/IBM_4758
Also, I think the chip-and-PIN cards used for most in-store transactions in Europe for the last 20 years rely on remote attestation and tamper resistance to prevent fraud.
Finally, in the domain of desktop and laptop computers, there is a big security hole in that most components (certainly disk drives and storage devices, but basically any peripheral or board) are essentially embedded computers that can be pwned, with the result that they stay pwned even if the owner of the computer installs the OS from scratch. One solution would be for suppliers of peripherals and boards to get much better at securing their products, or to stop using microprocessors to implement their products, but it would take quite a lot of work (and governmental intervention, or at least intervention by industry-wide quasi-governmental entities that currently do not exist) to get from the current situation to that one. The only products currently available that are secure against this threat (aside perhaps from 40-year-old computers) use verified-boot technology to implement the security.
I.e., the only desktop and laptop computers you can buy where you can be reasonably sure some attacker hasn't installed malware in the computer's disk drive or trackpad or wifi module are things like Macs and Chromebooks, which implement the security using verified boot.
Do you simply not care that this Linux computer that you have such warm feelings about is fairly easy to pwn (in part because of the lack of verified boot and in part because desktop Linux software is just much easier to pwn than the systems software on a Mac or a Chromebook or an iPhone or an Android phone) such that if you ever got to be an effective activist against some government or some powerful industrial interest, that government or industrial interest could fairly easily eavesdrop on everything you do with this Linux computer?
That doesn't sound much like protecting your individual rights.
It's just not. Otherwise, all servers would be running your beloved iOS, wouldn't they?
>in part because of the lack of verified boot
This does not matter. I can generate my own keys.
>easier to pwn [...] than [...]an iPhone
Lol... If anything, phones are more vulnerable because you have less access to sandboxes and VMs.
Hey, look, an Apple CVE from two days ago. https://nvd.nist.gov/vuln/detail/CVE-2025-43284
And this one's from this month. https://nvd.nist.gov/vuln/detail/CVE-2025-43300
And here's Apple's sandbox failing, last month. https://nvd.nist.gov/vuln/detail/CVE-2025-43274
It's IMHO a matter of trust versus hope.
Do we really think that Google has complete control over the stuff they distribute?
Do we really think that a single person delivering some software outside of the Google ecosystem is evil?
Judging these things is rather hard without some form of trust and hope.
And it's not something everyone can pick up seriously without the needed knowledge and tools.
... 3. Allow the user to feed the application fake data for every requested permission, e.g. a false geoposition, a dummy randomly generated address book, etc.
Restated, every electronic transfer requires a sender and a receiver—and there are standardized (electronic) protocols to ensure funds are debited from sender's account and credited to the receiver's account. So we ought to know where monies end up but so often we don't.
The way around these scams is to (a) have infallible, fully identifiable trace routes, and (b) require that destination banks be known to the sending bank and meet an international standard of prudence and integrity, or the funds would not be transferred, and that ought to be a lawful requirement. (Ipso facto, it would be incumbent on recipient banks to know their account holders and to act on fraudulent transactions.)
In other words, the electronic funds transfer system should be transparent from the sender's account right through to the recipient's bank and the actual bank account within that destination bank. In short, the funds should be traceable right through to the point where the recipient withdraws cash from the destination bank and walks out the bank's door. (There are ways that a destination/bank can keep certain details about the recipient private and yet still allow the money trail capable of being audited that I can't address here.)
In effect, the requirements ought to be (1) sending banks would only transfer funds to banks of known integrity, (2) receiving banks must have procedures in place to recover monies from accounts in the event of fraud, and (3) protocols such as delaying payments, putting funds in escrow until transactions are proven legitimate, and methods of recovering/refunding funds etc. are properly established. Transparency would also mean transactions would be reversible in case of fraud.
Ideally, such procedures would be set out in ISO protocols and by law banks could only transfer funds to other banks that follow the protocols.
Yes, I know this sounds simple and the world's banking systems are complex and convoluted and that there'd be many objections from many sources, banks, credit card companies, money traders and so on but it cannot be denied that the great weakness in funds transfer is that monies can vanish without a trace. Frankly that's unacceptable in an age of electronic money transfer where every cent is accounted for along the transfer route. That various entities can obfuscate that accounting at various points in the transfer process ought no longer be acceptable.
To say it can't be done or that it's unacceptably complex is bullshit; for example, banks and credit card companies such as Visa and MasterCard had no trouble blocking funds transferred to WikiLeaks.
The real problem is that the world's banking system is a law unto itself and that banks would object strongly, on many grounds, to introducing such a system.
Look at it this way: similar schemes to the one I've outlined are already in place in, say, conveyancing, where property is only deemed exchanged and the transfer complete when lawyers 'meet' and exchange money and land deeds. The same happens when, say, two warring countries meet and exchange captive soldiers on the spot.
Given the many billions of dollars lost to scammers every year it's clear that banking transfer systems are hopelessly flawed. Things would soon change if banks told customers that they cannot transfer monies to xyz destination because the money trail is untrusted/cannot be authenticated and that it would be unlawful for them to so act.
The entire excuse is that banks need people to sign their software because otherwise they can't identify who stole people's money using bank transfers.
How is it possible that one can even say that shit with a straight face?
Funnily and ironically enough, a phone that is rooted and fails SafetyNet would likely not allow the bank apps to open at all, and thus be safer in such a case.
Also see: wrench vs RSA encryption at https://xkcd.com/538/
Who could disagree with that?
The problem is it’s often controlling household members sneakily installing creepy things on devices of those they live with and want to control.
I'd like a source for that. News to me if that is common at all. Not to mention there are apps on the playstore / ios store that can be used in a similar way without sideloading.
This is actually the happy path: parenting! I would love to see this approach taken with parenting, rather than trying to age-verify the internet.
Of course what you're talking about is abusive behavior, but my point is that's not what we're solving for here: and "parent has control" scenario has the dual-use of "the abuser has control". I don't think we can fix that by requiring code signatures or banning sideloading.
---
iOS and Android still provide per-app sandboxes, but those sandboxes are managed entirely by the OS kernel and higher-level frameworks.
Secure Enclave (iOS) and Titan M/TEE (Android) still exist for cryptographic operations, biometric data, and DRM, but access is brokered by the OS. The enclave doesn’t run apps; it just provides cryptographic functions.
OS privilege expansion: system services have visibility into app data at runtime for telemetry, background tasks, push notifications, etc. Apps are isolated from each other, but not from the platform owner.
Result: app-to-app compromise is still difficult, but OS-level compromise (intentional or not) gives broad access. This design simplifies features like push services, app updates, and sync, but makes "true isolation" (hardware separation, zero OS visibility) infeasible in today’s consumer mobile ecosystems.
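As a concrete illustration of "access is brokered by the OS", here's a minimal Kotlin sketch using the standard Android Keystore API: the key is created and held outside the app's process (in secure hardware where available), and the app only asks the platform to sign on its behalf. Treat it as a sketch, not hardened production code; the alias and payload are arbitrary.

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyInfo
import android.security.keystore.KeyProperties
import java.security.KeyFactory
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.Signature

// Minimal sketch of OS-brokered crypto on Android: the key lives in the
// AndroidKeyStore (hardware-backed where available); the app only ever
// holds an opaque handle and asks the platform to sign on its behalf.
fun signWithPlatformBrokeredKey(payload: ByteArray): ByteArray {
    val alias = "demo-key"  // arbitrary alias, chosen for this sketch

    val kpg = KeyPairGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_EC, "AndroidKeyStore"
    )
    kpg.initialize(
        KeyGenParameterSpec.Builder(alias, KeyProperties.PURPOSE_SIGN)
            .setDigests(KeyProperties.DIGEST_SHA256)
            .build()
    )
    kpg.generateKeyPair()  // key material is created and kept outside the app

    val ks = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }
    val privateKey = (ks.getEntry(alias, null) as KeyStore.PrivateKeyEntry).privateKey

    // Ask the platform whether the key actually sits in secure hardware.
    val keyInfo = KeyFactory.getInstance(privateKey.algorithm, "AndroidKeyStore")
        .getKeySpec(privateKey, KeyInfo::class.java)
    println("inside secure hardware: ${keyInfo.isInsideSecureHardware}")

    return Signature.getInstance("SHA256withECDSA").run {
        initSign(privateKey)   // the platform/TEE performs the signing, not the app
        update(payload)
        sign()
    }
}
```

Note what this buys and what it doesn't: the app can't exfiltrate the key, but the platform owner sits in the middle of every operation, which is exactly the trade-off described above.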