This doesn't make sense in a rather fundamental way: there is no way to design a real computer where doing useless work is better than doing no work. Just think about energy consumption and battery life, since we're talking about laptops. Or else it's simply resources your current app can't use.
Besides, these processes aren't that well engineered; bugs exist, persist, and come back. So even when the impact isn't big on average, you can get a photo-analysis indexer going haywire for a while and getting stuck.
However, given the trend in modern software engineering to break work into units and the fact that on modern hardware thread switches happen very quickly, being able to distribute that work across different compute clusters that make different optimization choices is a good thing and allows schedulers to get results closer to optimal.
So really it boils down to this: if the gains from doing the work on different compute outweigh the cost of splitting and distributing it, then it's a win. And for most modern software on most modern hardware, the win is very significant.
As always, YMMV
This is far from being a hypothesis. This is an accurate description of your average workstation. I recommend you casually check the list of processes running at any given moment in any random desktop or laptop you find in a 5 meter radius.
This take expresses a fundamental misunderstanding of the whole problem domain. There is a workload composed of hundreds of processes, some of them multithreaded, that needs to be processed. That does not change or go away. You have absolutely no indication that any of those hundreds of processes is "useless". What you will certainly have are processes waiting for IO, but waiting for a request to return a response is not useless.
Hmm, I guess the Apple Silicon laptops don't exist? Did I dream that I bought one this year? Maybe I did - it has been a confusing year.
> he's talking about real systems with real processes in a generic way
So which real but impossible to design systems are we discussing then if not the Apple silicon systems?
sigh.
Also, the mandatory one: and yet the MacBooks are faster and more battery-efficient than any PC laptop running Linux/Windows.
I ran a performance test back in October comparing M4 laptops against high-end Windows desktops, and the results showed the M-series chips coming out on top.
https://www.tyleo.com/blog/compiler-performance-on-2025-devi...
Apple has the best silicon team in the world. They choose perf per watt over pure perf, which means they don't win on multi-core, but they're simply the best in the world in the most complicated, difficult, and impossible metric to game: single core perf.
Even when they were new, they competed with AMD's high end desktop chips. Many years later, they're still excellent in the laptop power range - but not in the desktop power range, where chips with a lot of cache match it in single core performance and obliterate it in multicore.
https://www.cpu-monkey.com/en/compare_cpu-apple_m4-vs-amd_ry...
I don't know how to set up a proper cross-compile setup on Apple Silicon, so I tried compiling the same code on 2 macOS systems and 1 Linux system, running the corresponding test suite, and getting some numbers. It's not exactly conclusive, and if I were doing this properly I'd try a bit harder to make everything match up, but it does indeed look like using clang to build x64 code is more expensive - for whatever reason - than using it to build ARM code.
Systems, including clang version and single-core PassMark:
M4 Max Mac Studio, clang-1700.6.3.2 (PassMark: 5000)
x64 i7-5557U Macbook Pro, clang-1500.1.0.2.5 (PassMark: 2290)
x64 AMD 2990WX Linux desktop, clang-20 (PassMark: 2431)
Single thread build times (in seconds). Code is a bunch of C++, plus some FOSS dependencies that are C, everything built with optimisation enabled:
Mac Studio: 365
x64 Macbook Pro: 1705
x64 Linux: 1422
(Linux time excludes build times for some of the FOSS dependencies, which on Linux come prebuilt via the package manager.)
Single thread test suite times (in seconds), an approximate indication of relative single thread performance:
Mac Studio: 120
x64 Macbook Pro: 350
x64 Linux: 309
Build time/test time makes it look like ARM clang is an outlier:
Mac Studio: 3.04
x64 Macbook Pro: 4.87
x64 Linux: 4.60
(The Linux value is flattered here, as it excludes dependency build times, as above. The C dependencies don't add much when building in parallel, but, looking at the above numbers, I wonder if they'd add up to enough when built in series to make the x64 figures the same.)
Not even a bad little gaming machine on the rare occasion
Those Panther Lake comparisons are from the top-end PTL to the base M series. If they were compared to their comparable SKUs they’d be even further behind.
This was all mentioned in the article.
See the chart here for what the intel SKUs are: https://www.pcworld.com/article/3023938/intels-core-ultra-se...
They consume more power at the chip level. You can see this in Intel’s spec sheets. The base recommended power envelope of the PTL is the maximum power envelope of the M5. They’re completely different tiers. You’re comparing a 25-85W tier chip to a 5-25W chip.
They also only win when it comes to multi core whether that’s CPU or GPU. If they were fairly compared to the correct SoC (an M4 Pro) they’d come out behind on both multicore CPU and GPU.
This was all mentioned in my comment addressing the article. This is the trick that Apple’s competitors are using: comparing across SKU ranges to grab the headlines. PTL is a strong chip, no doubt, but it’s still behind Apple across all the metrics in a like-for-like comparison.
Because, when running a Linux Intel laptop, even with CrowdStrike and a LOT of corporate-ware, there is no slowness.
When blogs talk about "fast" like this I always assumed it was for heavy lifting, such as video editing or AI stuff, not just day to day regular stuff.
I'm confused, is there a speed difference in day to day corporate work between new Macs and new Linux laptops?
Thank you
When Apple released Apple Silicon, it was a huge breath of fresh air - suddenly the web became snappy again! And the battery lasted forever! Software has bloated to slow down MacBooks again, RAM can often be a major limiting factor in performance, and battery life is more variable now.
Intel is finally catching up to Apple for the first time since 2020. Panther Lake is very competitive on everything except single-core performance (including battery life). Panther Lake CPUs arguably have better features as well - Intel QSV is great if you compile ffmpeg to use it for encoding, and it's easier to use local AI models with OpenVINO than it is to figure out how to use the Apple NPUs. Intel has better tools for sampling/tracing performance analysis, and you can actually see how much you're loading the iGPU (which is quite performant) and how much VRAM you're using. Last I looked, there was still no way to actually check whether an AI model was running on Apple's CPU, GPU, or NPU. The iGPUs can also be configured to use varying amounts of system RAM - I'm not sure how that compares to Apple's unified memory for effective VRAM, and Apple has higher memory bandwidth/lower latency.
I'm not saying that Intel has matched Apple, but it's competitive in the latest generation.
My work laptop will literally struggle to last 2 hours doing any actual work. That involves running IDEs, compiling code, browsing the web, etc. I've done the same on my Macbook on a personal level and it barely makes a dent in the battery.
I feel like the battery performance is definitely down to the hardware. Apple Silicon is an incredible innovation. But the general responsiveness of the OS has to be down to Windows being god-awful. I don't understand how a top-of-the-line desktop can still feel sluggish versus even an M1 MacBook. When I'm running intensive applications like games or compiling code on my desktop, it's rapid. But it never actually feels fast doing day-to-day things. I feel like that's half the problem. Windows just FEELS so slow all the time. There's no polish.
I currently have a M3 Pro for a work laptop. The performance is fine, but the battery life is not particularly impressive. It often hits low battery after just 2-3 hours without me doing anything particularly CPU-intensive, and sometimes drains the battery from full to flat while sitting closed in a backpack overnight. I'm pretty sure this is due to the corporate crapware, not any issues with Apple's OS, though it's difficult to prove.
I've tended to think lately that all of the OSes are basically fine when set up reasonably well, but can be brought to their knees by a sufficient amount of low-quality corporate crapware.
If you have access to the Defender settings, I found it to be much better after setting an exclusion for the folder that you clone your git repositories to. You can also set exclusions for the git binary and your IDE.
That M2 MBA, however, only feels sluggish with > 400 Chrome tabs open, because only then does swapping become a real annoyance.
[1] https://9to5mac.com/2022/07/14/m2-macbook-air-slower-ssd-bas...
[2] https://www.tomshardware.com/laptops/macbooks/m5-macbook-pro...
[3] https://www.reddit.com/r/AcerNitro/comments/1i0nbt4/slow_ssd...
Except that you can replace Windows with Linux and suddenly it doesn't feel like dogshit anymore. SSDs are fast enough that they should be adding zero perceived latency for ordinary day-to-day operation. In fact, Linux still runs great on a pure spinning disk setup, which is something no other OS can manage today.
With Windows, you're probably still getting SATA and not even NVMe.
The options in that space are increasingly dwindling which is a problem when supporting older machines.
Sometimes it is cheaper to get a sketchy M.2 SSD and adapter than to get an actual SATA drive from one of the larger manufacturers.
(I love my MacBook Air, but it does have its limits.)
What’s surprising is it DOES throttle using Discord with video after an hour or so, unless the battery is already full (I’m guessing it tries to charge, which generates a lot of heat). You get far less heat with a full battery, when it runs off wall power instead of discharging/charging the battery during heavy usage.
My recommendation to friends asking about MBP / MBA is entirely based on whether they do anything that will load the CPU for more than 7 minutes. For me, I need the fans. I even use Macs Fan Control[0], a 3rd party utility, to control the fans for some of my workflows - pegging the fans to 100% to pre-cool the CPU between loads can help a lot.
My used M1 MBA is the fastest computer I’ve ever used. If a video render is going to take more than 7 minutes, I walk away or just do something in another app anyway. A difference of a few minutes means nothing.
Happiness #1
Apple's CPUs are the most power-efficient, however, due to a bunch of design and manufacturing choices.
But to answer your question: yes, Windows 11 with modern security crap feels 2-3x slower than vanilla Linux on the same hardware.
Also, nearly all of the top 50 multi-core benchmark spots are taken up by Epyc and Xeon chips. For desktop/laptop chips that aren't Threadripper, Apple still leads with the 32-core M3 Ultra in the multi-core PassMark benchmark. The usual caveats of benchmarks not being representative of any actual workload still apply, of course.
And Apple does lag behind in multi-core benchmarks for laptop chips - the M3 Ultra would beat every AMD/Intel laptop chip in multicore benchmarks as well, but it is not offered in a laptop form factor.
Obviously it's an Apple-to-Oranges (pardon the pun) comparison since the AMD options don't need to care about the power envelope nearly as much; and the comparison gets more equal when normalizing for Apple's optimized domain (power efficiency), but the high-end AMD laptop chips still edge it out.
But then this turns into some sort of religious war, where people want to assume that their "god" should win at everything. It's not, the Apple chips are great; amazing even, when considering they're powering laptops/phones for 10+ hours at a time in smaller chassis than their competitors. But they still have to give in certain metrics to hit that envelope.
1 - https://thepcbottleneckcalculator.com/cpu-benchmarks-2026/
What does "single core gaming performance" even mean for a CPU that doesn't have an iGPU? How could that not be a category error to compare against Apple Silicon?
I was looking at https://www.cpubenchmark.net/single-thread/
See also:
https://nanoreview.net/en/cpu-list/cinebench-scores
https://browser.geekbench.com/mac-benchmarks vs https://browser.geekbench.com/processor-benchmarks
The distance was not huge, maybe 3%. You can obviously pick and choose your benchmarks until you find one where "your" CPU happens to be the best.
https://www.cpubenchmark.net/single-thread/
https://browser.geekbench.com/mac-benchmarks vs https://browser.geekbench.com/processor-benchmarks
Apple leads all of these in single core, by a significant margin. Even at geekbench.com (3398 for AMD 9950X3D vs 3235 for the 14900KS vs ~4000 for various Apple chips)
I'm not sure I could find a single core benchmark it would lose no matter how hard I tried...
My personal M1 feels just as fast as the work M4 due to this.
With maximum corporate spyware, it consistently takes 1 second to get visual feedback on Windows.
The cores are. Nothing is beating an M4/M5 on single-core CPU performance, and per-cycle nothing is even particularly close.
At the whole-chip level, there are bigger devices from the x86 vendors which will pull ahead on parallel benchmarks. And Apple's unfortunate allergy to effective cooling techniques (like, "faster fans move more air") means that they tend to throttle on chip-scale loads[1].
But if you just want to Run One Thing really fast, which even today still correlates better to "machine feels fast" than parallel loads, Apple is the undisputed king.
[1] One of the reasons Geekbench 6, which controversially includes cooling pauses, looks so much better for Apple than version 5 did.
It’s probably the single most common corner to cut in x86 laptops. Manufacturers love to shove hot chips into a chassis too thin for them and then toss in whatever cheap tiny-whiny-fan cooling solution they happen to have on hand. Result: laptop sounds like a jet engine when the CPU is being pushed.
The issue is actually very simple. In order to gain more performance, manufacturers like AMD and Intel have for a long time been in a race for the highest frequency, but if you have some know-how in hardware, you know that higher frequency means disproportionately more power draw the higher you clock.
So you open your MS Paint, and ... your CPU pushes to 5.2Ghz, and it gets fed 15W on a single core. This creates a heat spike in the sensors, and your fans on laptops, all too often are set to react very fast. And VROOOOEEEEM goes your fan as the CPU Temp sensor hits 80C on a single core, just for a second. But wait, your MS Paint is open, and down goes the fan. And repeat, repeat, repeat ...
Notice how Apple focused on running their CPUs no higher than 4.2GHz or so... So even if their CPU boosts to 100%, that thermal peak will be maybe 7W.
Now combine that with Apple using a much more tolerant fan / temp sensor setup. They say: 100C is perfectly acceptable. So when your CPU boosts, its not dumping 15W, but only 7W. And because the fan reaction threshold is so high, the fans do not react on any Apple product. Unless you run a single or MT process for a LONG time.
And even then, the fans only ramp up slowly, once that 100C has been going on for a few seconds - and yes, your CPU will be thermal throttling while the fans spin up, but you do not feel this effect.
That is the real magic of Apple. Yes, their CPUs are masterpieces at how they get so much performance from a lower frequency, but the real kicker is their thermal / fan profile design.
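To put rough numbers on the frequency/power point above - a back-of-the-envelope sketch (my own, not the commenter's math), assuming dynamic power scales roughly as C*V^2*f and voltage scales roughly with frequency in the boost range, both of which are simplifications that ignore leakage and real V/f curves:

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Rough model: dynamic power ~ C * V^2 * f, and in the boost range
           V scales roughly with f, so power grows roughly with f^3. */
        double f_low = 4.2, f_high = 5.2;        /* GHz, per the comment above */
        double ratio = pow(f_high / f_low, 3.0); /* ~1.9x more power */
        printf("Boosting from %.1f to %.1f GHz costs roughly %.1fx the power\n",
               f_low, f_high, ratio);
        /* Which is in the same ballpark as the 7W vs 15W single-core figures
           quoted above - the last few hundred MHz are very expensive. */
        return 0;
    }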
The wife has an old Apple-clone laptop from 2018. The thing is silent 99.9% of the time. No fans, nothing. Because Xiaomi used the same tricks on that laptop, allowing it to boost to the max without triggering the fan ramp-up. And when the fan does trigger, on a long-running process, they use a very low fan RPM until the temperature goes way too high. I had laptops with the same CPU from other brands in the same time period, and they all had annoying fan profiles. That showed me that a lot of the Apple magic is good design around the hardware/software/fan.
But ironically, that magic has been forgotten in later models by Xiaomi ... Tsk!
Manufacturers think: it's better if millions of people suffer from more noise than if a few thousand laptops die or get damaged from too much heat. So ramp up the fans!!!
Of course Apple did pick a very good sweet spot, favoring a wide core as opposed to a speed demon, more so than the competition.
That's true in principle, but IMHO a little too evasive. In point of fact Apple 100% won this round. Their wider architecture is actually faster than the competition in an absolute sense even at the deployed clock rates. There's really no significant market where you'd want to use anything different for CPU compute anywhere. Datacenters would absolutely buy M5 racks if they were offered. M5 efficiency cores are better than Intel's or Zen 5c every time they're measured too.
Just about the only spaces where Apple is behind[1] are die size and packaging: their cores take a little more area per benchmark point, and they're still shipping big single dies. And they finance both of those shortcomings with much higher per-part margins.
Intel and AMD have moved hard into tiled architectures and it seems to be working out for them. I'd expect Apple to do the same soon.
[1] Well, except the big elephant in the room that "CPU Performance Doesn't Matter Much Anymore". Consumer CPUs are fast enough and have been for years now, and the stuff that feels slow is on the GPU or the cloud these days. Apple's in critical danger of being commoditized out of its market space, but then that's true of every premium vendor throughout history.
Early on personally I had doubts they could scale their CPU to high end desktop performance, but obviously it hasn't been an issue.
My nitpick was purely about using per-cycle performance as a metric, which is as much nonsense as comparing GHz: AFAIK Apple CPUs still top out at 4.5 GHz, while AMD/Intel reach 6 GHz, so obviously the architectures are optimized for different target frequencies (which makes sense: the power costs of a high-GHz design are astronomical).
And as a microarchitecture nerd I'm definitely interested in how they can implement such a wide architecture, but wideness per se is not a target.
It was a discussion about how the P cores are left ready to speedily respond to input via the E cores satisfying background needs, in this case talking specifically about Apple Silicon because that's the writer's interest. But of course loads of chips have P and E cores, for the same reason.
You are comparing 256 AMD Zen 6c cores to what? An M4 Max?
When people say CPU they mean a CPU core, and in terms of raw speed, Apple's CPUs hold the fastest single-core CPU benchmark results.
https://www.cpubenchmark.net/single-thread/
Where the M5 (non-pro, the one that will be in the next MacBook Air) is on top.
When the M5 multicore scores arrive, the multi-core charts will be interesting.
My Apple silicon laptop feels super fast because I just open the lid and it's running. That's not because the CPU ran instructions super fast, it's because I can just close the lid and the battery lasts forever.
Replaced a good Windows machine (Ryzen 5? 32 Gb) and I have a late intel Mac and a Linux workstation (6 core Ryzen 5, 32 Gb).
Obviously the Mac is newer. But wow. It's faster even on things that CPU shouldn't matter, like going through a remote samba mount through our corporate VPN.
- Much faster than my intel Mac
- Faster than my Windows
- Haven't noticed any improvements over my Linux machines, but with my current job I no longer get to use them much for desktop (unfortunately).
Of course, while I love my Debian setup, boot up is long on my workstation; screensaver/sleep/wake up is a nightmare on my entertainment box (my fault, but common!). The Mac just sleeps/wakes up with no problems.
The Mac (smallest Air) is also by far the best laptop I've ever had from a mobility POV. Immediate start-up, long battery, decent enough keyboard (but I'd rather sacrifice a little for a longer keypress).
I still use an M1 MB Air for work mostly docked... the machine is insane for what it can still do, it sips power and has a perfect stability track record for me. I also have a Halo Strix machine that is the first machine that I can run linux and feel like I'm getting a "mac like" experience with virtually no compromises.
I didn't find any reply mentioning the ease of use, benefits, and handy things the Mac does that Linux won't: Spotlight, the Photos app with all the face recognition and general image indexing, contact sync, etc. It takes ages to set those up on Linux, while on Macs everything just works with an Apple account. So I wonder, if Linux had to do all this background stuff, whether it would be able to run as smoothly as Macs do these days.
For context: I had been running Linux for 6 months for the first time in 10 years (during which I was daily-driving Macs). My M1 Max still beats my full-tower gaming PC, which I was running Linux on. I've used Windows and Linux before, and Windows for gaming too. My Linux setup was very snappy without any corporate stuff. But my office was getting warm because of the PC. My M1 barely turns on the fans, even with large DB migrations and other heavy operations during software development.
Mac on intel feels like it was about 2x slower at these basic functions. (I don’t have real data points)
Intel Mac had lag when opening apps. Silicon Mac is instant and always responsive.
No idea how that compares to Linux.
But I'm running a fairly slim Archlinux install without a desktop environment or anything like that. (It's just XMonad as a window manager.)
This is a metric I never really understood. How often are people booting? The only time I ever reboot a machine is if I have to. For instance, the laptop I'm on right now has an uptime of just under 100 days.
It rebooted and got to desktop, restoring all my open windows and app state, before I got to the podium (it was a very small room).
The Mac OS itself seems to be relatively fast to boot, the desktop environment does a good job recovering from failures, and now the underlying hardware is screaming fast.
I should never have to reboot, but in the rare instances when it happens, being fast can be a difference maker.
My work desktop? Every day, and it takes > 30 seconds to go from off to desktop, and probably another minute or two for things like Docker to decide that they’ve actually started up.
Presumably a whole bunch of services are still being (lazy?) loaded.
On the other hand, my cachyos install takes a bit longer to boot, but after it jumps to the desktop all apps that are autostart just jump into view instantly.
Most time on boot seems to be spent on initializing drives and finding the right boot drive and load it.
Even Windows (or at least my install that doesn't have any crap besides visual studio on it) can run for weeks these days...
My work PC will decide to not idle and will spin up fans arbitrarily in the evenings so I shut it down when I’m not using it.
Something else to consider: a Chromebook on ARM boots significantly faster than a ditto Intel one. Yes, nowadays MediaTek's latest CPUs wipe the floor with Intel's N-whatever, but it has been like this since the early days, when the ARM version was relatively underpowered.
Why? I have no idea.
After I put an SSD in it, that is.
I wonder what my Apple silicon laptop is even doing sometimes.
It’s all about the perf per watt.
The switch from a top spec, new Intel Mac to a base model M1 Macbook Air was like a breath of fresh air. I still use that 5 year old laptop happily because it was such a leap forward in performance. I dont recall ever being happy with a 5 year old device.
There are dozens of outlets out there that run synthetic and real world benchmarks that answer these questions.
Apple’s chips are very strong on creative tasks like video transcoding, they have the best single core performance as well as strong multi-core performance. They also have top tier power efficiency, battery life, and quiet operation, which is a lot of what people look for when doing corporate tasks.
Depending on the chip model, the graphics performance is impressive for the power draw, but you can get better integrated graphics from Intel Panther Lake, and you can get better dedicated class graphics from Nvidia.
Some outlets like Just Josh tech on YouTube are good at demonstrating these differences.
This was particularly pronounced on the M1 due to the 50/50 split. We reduced the number of workers on our test suite based on the CPU type and it sped up considerably.
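For anyone wanting to do the same, here's a minimal sketch of how a test runner could size its worker pool from the P-core count. It assumes the hw.perflevel0.* sysctls that Apple Silicon exposes (perflevel0 is the performance cluster); older Intel Macs lack them, hence the fallback:

    #include <stdio.h>
    #include <sys/sysctl.h>

    /* Returns the number of performance-core logical CPUs on Apple Silicon,
       falling back to the total logical CPU count on machines (e.g. Intel
       Macs) that don't report per-perflevel counts. */
    static int worker_count(void) {
        int ncpu = 0;
        size_t len = sizeof(ncpu);
        if (sysctlbyname("hw.perflevel0.logicalcpu", &ncpu, &len, NULL, 0) == 0 && ncpu > 0)
            return ncpu;                 /* P-cores only (perflevel0) */
        len = sizeof(ncpu);
        if (sysctlbyname("hw.logicalcpu", &ncpu, &len, NULL, 0) == 0 && ncpu > 0)
            return ncpu;                 /* all logical CPUs */
        return 1;
    }

    int main(void) {
        printf("spawning %d test workers\n", worker_count());
        return 0;
    }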
Not when one of those decides to wreak havoc - Spotlight indexing issues slowly eating away your disk space, iCloud sync spinning over and over and hanging any app that tries to read your Documents folder, Photos sync pegging all cores at 100%… it feels like things might be getting a little out of hand. How can anyone model/predict system behaviour with so many moving parts?
Fifteen years ago, if an application started spinning or mail stopped coming in, you could open up Console.app and have reasonable confidence the app in question would have logged an easy to tag error diagnostic. This was how the plague of mysterious DNS resolution issues got tied to the half-baked discoveryd so quickly.
Now, those 600 processes and 2000 threads are blasting thousands of log entries per second, with dozens of errors happening in unrecognizable daemons doing thrice-delegated work.
It seems like a perfect example of Jevons paradox (or andy/bill law): unified logging makes logging rich and cheap and free, but that causes everyone to throw it everywhere willy nilly. It's so noisy in there that I'm not sure who the logs are for anymore, it's useless for the user of the computer and even as a developer it seems impossible to debug things just by passively watching logs unless you already know the precise filter predicate.
In fact they must realize it's hopeless because the new Console doesn't even give you a mechanism to read past logs (I have to download eclecticlight's Ulbow for that).
This is the kind of thing that makes me want to grab Craig Federighi by the scruff and rub his nose in it. Every event that’s scrolling by here, an engineer thought was a bad enough scenario to log it at Error level. There should be zero of these on a standard customer install. How many of these are legitimate bugs? Do they even know? (Hahaha, of course they don’t.)
Something about the invisibility of background daemons makes them like flypaper for really stupid, face-palm level bugs. Because approximately zero customers look at the console errors and the crash files, they’re just sort of invisible and tolerated. Nobody seems to give a damn at Apple any more.
grumble
Spotlight, aside from failing to find applications, also pollutes the search results with random files it found on the filesystem, some shortcuts to search the web, and whatnot. Also, early in my time using a Mac it repeatedly got into a state of not displaying any results whatsoever. Fixing that each time required running some arcane commands in the terminal - something people associate with Linux, but ironically I think Linux now requires less of that than the Mac.
But in Tahoe they removed the Applications view, so my solution is gone now.
All in all, with Apple destroying macOS in each release, crippling DTrace with SIP, Liquid Glass, poor performance monitoring compared to what I can see with tools like perf on Linux, or Intel VTune on Windows, Metal slowly becoming the only GPU programming option, I think I’m going to be switching back to Linux.
> I quickly found out that Apple Instruments doesn’t support fetching more than 10 counters, sometimes 8, and sometimes less. I was constantly getting errors like '<SOME_COUNTER>' conflicts with a previously added event. The maximum that I could get is 10 counters. So, the first takeaway was that there is a limit to how many counters I can fetch, and another is that counters are, in some way, incompatible with each other. Why and how they’re incompatible is a good question.
Also: https://hmijailblog.blogspot.com/2015/09/using-intels-perfor...
Your second example - is the complaint that Instruments doesn't have flamegraph visualization? That was true a decade ago when it was written, and is not true today. Or that Instruments' trace file format isn't documented?
Why do I like Instruments and think it is better? Because the people who designed it optimized it for solving real performance problems. There are a bunch of "templates" that are focused on issues like "why is my thing so slow, what is it doing" to "why am I using too much memory" to "what network traffic is coming out of this app". These are real, specific problems while perf will tell you things like "oh this instruction has a 12% cache miss rate because it got scheduled off the core 2ms ago". Which is something Instruments can also tell you, but the idea is that this is totally the wrong interface that you should be presenting for doing performance work since just presenting people with data is barely useful.
What people do instead with perf is they have like 17 scripts 12 of which were written by Brendan Gregg to load the info into something that can be half useful to them. This is to save you time if you don't know how the Linux kernel works. Part of the reason why flamegraphs and Perfetto are so popular is because everyone is so desperate to pull out the info and get something, anything, that's not the perf UI that they settle for what they can get. Instruments has exceptionally good UI for its tools, clearly designed by people who solve real performance problems. perf is a raw data dump from the kernel with some lipstick on it.
Mind you, I trust the data that perf is dumping because the tool is rock-solid. Instruments is not like that. It's buggy, sometimes undocumented (to be fair, perf is not great either, but at least it is open source), slow, and crashes a lot. This majorly sucks. But even with this I solve a lot more problems clicking around Instruments UI and cursing at it than I do with perf. And while they are slow to fix things they are directionally moving towards cleaning up bugs and allowing data export, so the problems that you brought up (which are very valid) are solved or on their way towards being solved.
The implication that perf is not is frankly laughable. Perhaps one major difference is that perf assumes you know how the OS works, and what various syscalls are doing.
You just proved again that it's not optimized for reality, because that knowledge can't be assumed: the pool of people trying to solve real performance problems is much wider than the pool with that knowledge.
Only a system reinstall + manually deleting all index files fixed it. Meanwhile it was eating 20-30GB of disk space. There are tons of reports of this in the apple forums.
Even then, it feels a lot slower in MacOS 26 than it did before, and you often get the rug-pull effect of your results changing a millisecond before you press the enter key. I would pay good money to go back to Snow Leopard.
That being said, macOS was definitely more snappy back on Catalina, which was the first version I had so I can't vouch for Snow Leopard. Each update after Catalina felt gradually worse and from what i heard Tahoe feels like the last nail in the coffin.
I hope the UX team will deliver a more polished, expressive and minimal design next time.
It is completely useless on network mounts, however, where I resort to find/grep/rg
Firstly performance issues like wtf is going on with search. Then there seems to be a need to constantly futz with stable established apps UXes every annual OS update for the sake of change. Moving buttons, adding clicks to workflows, etc.
My most recent enraging find was the date picker in the Reminders app. When editing a reminder, there is an up/down arrow interface to the side of the date, but if you click them they change the MONTH. Who decided that makes any sense? In what world is bumping a reminder by a month the most common change? It's actually worse than useless; it's actively net negative.
I just got my first ARM Mac to replace my work Win machine (what has MS done to Windows!?!? :'()
Used to be I could type "display" and I'd get right to display settings in Settings. Now it shows thousands of useless links to who knows what. Instead I have to type "settings" and then, within Settings, type "display".
Still better than the Windows shit show.
Honestly, a well setup Linux machine has better user experience than anything on the market today.
We probably have to preface that with “for older people”. IMO Linux has changed less UX wise than either Windows or MacOS in recent years
For several decades, I have used hundreds of different computers, from IBM mainframes, DEC minicomputers and early PCs with Intel 8080 or Motorola MC6800 until the latest computers with AMD Zen 5 or Intel Arrow Lake. I have used a variety of operating systems and user interfaces.
During the first decades, there has been a continuous and obvious improvement in user interfaces, so I never had any hesitation to switch to a new program with a completely different user interface for the same application, even every year or every few months, whenever such a change resulted in better results and productivity.
Nevertheless, an optimum seems to have been reached around 20 years ago, and since then, more often than not, I see only worse interfaces that make it harder to do what was simpler previously, so there is no incentive for an "upgrade".
Therefore I indeed customize my GUIs in Linux to a mode that resembles much more older Windows or MacOS than their recent versions and which prioritizes instant responses and minimum distractions over the coolest look.
In the rare occasions when I find a program that does something in a better way than what I am using, I still switch immediately to it, no matter how different it may be in comparison with what I am familiar, so conservatism has nothing to do with preferring the older GUIs.
A consequence of having "UI designers" paid on salary instead of individual contract jobs that expire when the specific fix is complete. In order to preserve their continuing salary, the UI designers have to continue making changes for changes sake (so that the accounting dept. does not begin asking: "why are we paying salary for all these UI designers if they are not creating any output"). So combining reaching an optimum 20 years ago with the fact that the UI designers must make changes for the sake of change, results in the changes being sub-optimal.
I just installed Plasma with EndeavourOS and use it. I used Cinnamon before it. They don't require much effort.
And yet on Windows 11, hit Win key, type display, it immediately shows display settings as the first result.
People are really unable to differentiate “I am having issues” and “things are universally or even widely broken”
I've been using spotlight since it was introduced for... everything. In Tahoe it has been absolutely terrible. Unusable. Always indexing. Never showing me applications which is the main thing I use it for (yes, it is configured to show applications!). They broke something.
It’s a QoS level: https://developer.apple.com/documentation/dispatch/dispatchq...
I replaced a MacPro5,1 with an M2Pro — which uses soooooo much less energy performing similarly mundane tasks (~15x+). Idle is ~25W v. 160W
Edit: It looks like there was some discussion about this on the Asahi blog 2 years ago[0].
This lets Apple architect things as small, single-responsibility processes, but make their priority dynamic, such that they’re usually low-priority unless a foreground user process is blocked on their work. I’m not sure the Linux kernel has this.
Multithreading has been more ubiquitous in Mac apps for a long time, thanks to Apple having offered mainstream multi-CPU machines very early on (circa 2000, predating even OS X itself) and having made a point of making multithreading easier in its SDKs. By contrast, multicore machines weren't common in the Windows/x86 world until around the late 2000s with the boom of Intel's Core series CPUs, single-core x86 CPUs persisted for several years after that, and Windows developer culture still hasn't embraced multithreading as fully as its Mac counterpart has.
This then made it dead simple for Mac developers to adopt task prioritization/QoS. Work was already cleanly split into threads, so it’s just a matter of specifying which are best suited for putting on e-cores and which to keep on P-cores. And overwhelmingly, Mac devs have done that.
So the system scheduler is a good deal more effective than its Windows counterpart because third party devs have given it cues to guide it. The tasks most impactful to the user’s perception of snappiness remain on the P-cores, the E-cores stay busy with auxiliary work and keep the P-cores unblocked and able to sleep more quickly and often.
When I ran Gnome, I was regularly annoyed at how often an indexing service would chew through CPU.
So for example, if in an email client the user has initiated the export of a mailbox, that is given utmost priority while things like indexing and periodic fetches get put on the back burner.
This works because even a selfish developer wants their program to run well, which setting all tasks as high priority actively and often visibly impedes, and so they push less essential work to the background.
It just happens that in this case, smart threading on the per-process level makes life for the system scheduler easier.
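For concreteness, here is a minimal sketch of that split using Apple's libdispatch QoS classes in plain C. The mailbox-export/indexing functions are just placeholders for the hypothetical email-client example above, not any real app's code:

    #include <dispatch/dispatch.h>
    #include <stdio.h>

    /* Placeholder work items standing in for the email-client example above. */
    static void export_mailbox(void *ctx) { (void)ctx; puts("exporting mailbox (user-initiated)"); }
    static void reindex_mail(void *ctx)   { (void)ctx; puts("re-indexing mail (background)"); }

    int main(void) {
        /* User-initiated work: kept responsive, preferentially scheduled on
           P-cores, because the user is actively waiting on it. */
        dispatch_queue_t fg = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);

        /* Background work: eligible for E-cores and deprioritized whenever
           foreground work needs the machine. */
        dispatch_queue_t bg = dispatch_get_global_queue(QOS_CLASS_BACKGROUND, 0);

        dispatch_async_f(fg, NULL, export_mailbox);
        dispatch_async_f(bg, NULL, reindex_mail);

        dispatch_main(); /* park the main thread so the queued work can run */
    }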
Android SoCs adopted heterogeneous CPU architectures ("big.LITTLE" in the ARM sphere) years before Apple, and as a result, there have been multiple attempts to tackle this in Linux. The latest, upstream, and perhaps the most widely deployed way of efficiently using such processors involves Energy-Aware Scheduling [1]. This allows the kernel to differentiate between performant and efficient cores, and schedule work accordingly, avoiding situations in which brief workloads are put on P cores and the demanding ones start hogging E cores. Thanks to this, P cores can also be put to sleep when their extra power is not needed, saving power.
One advantage macOS still has over Linux is that its kernel can tell performance-critical and background workloads apart without taking guesses. This is beneficial on all sorts of systems, but particularly shines on heterogeneous ones, allowing unimportant workloads to always occupy E cores, and freeing P cores for loads that would benefit from them, or simply letting them sleep for longer. Apple solved this problem by defining a standard interface for user-space to communicate such information down [2]. As far as I'm aware, Linux currently lacks an equivalent [3].
Technically, your application can still pin its threads to individual cores, but to know which core is which, it would have to parse information internal to the scheduler. I haven't seen any Linux application that does this.
[1] https://www.kernel.org/doc/html/latest/scheduler/sched-energ...
[2] https://developer.apple.com/library/archive/documentation/Pe...
[3] https://github.com/swiftlang/swift-corelibs-libdispatch?tab=...
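On the thread-pinning point above: a minimal sketch using Linux's standard affinity call. CPU 0 is an arbitrary example; as noted, knowing which CPU IDs correspond to big or LITTLE cores is the SoC-specific part the kernel doesn't expose to applications portably:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);          /* arbitrary example: pin to CPU 0 */

        /* Restrict the calling thread (pid 0 = self) to the CPUs in the set.
           Which CPU IDs map to big/LITTLE cores differs per SoC. */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("now restricted to CPU 0\n");
        return 0;
    }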
SCHED_BATCH and SCHED_IDLE scheduling policies. They've been there since forever.
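For illustration, a minimal sketch of opting into SCHED_IDLE (SCHED_BATCH works the same way with a different policy constant). Note this is a priority hint to the scheduler, not the P/E-core placement hint discussed above:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        /* SCHED_IDLE: run this thread only when nothing else wants the CPU.
           The static priority must be 0 for SCHED_IDLE/SCHED_BATCH. */
        struct sched_param p = { .sched_priority = 0 };
        if (sched_setscheduler(0, SCHED_IDLE, &p) != 0) { /* 0 = calling thread */
            perror("sched_setscheduler");
            return 1;
        }
        /* ...do the low-priority background work here... */
        return 0;
    }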
I have read there are some potential security benefits if you keep your most exploitable programs (e.g. the web browser) on their own dedicated core.
It’s about half, actually
> The fact that an idle Mac has over 2,000 threads running in over 600 processes is good news
I mean, only if they’re doing something useful
This article couldn't have come at a better time, because frankly speaking I am not that impressed with macOS after testing Omarchy Linux, where everything was snappy. It is like being back in the DOS or Windows 3.11 era (not quite, but close). It makes me wonder why the Mac couldn't be like that.
Apple Silicon is fast, no doubt about it. It isn't just benchmarks: even under emulation, compiling, or other workloads it is fast, if not the fastest. So there is plenty of evidence it isn't benchmark-specific, despite the claim some people make that Apple is only fast on Geekbench. The problem is that macOS is slow, and for whatever reason hasn't improved much. I am hoping that dropping support for x86 in the next macOS means they have the time and an excuse to do a lot of work on macOS under the hood, especially with OOM handling and paging.
Apple Silicon is awesome and was a game changer when it came out. Still very impressive that they have been able to keep the MacBook Air passively cooled since the first M1. But yeah, macOS is holding it back.