I spent over five years of my life, and a lot of money, thinking about building better algorithms.
We have a bit of a chicken-and-egg problem: is the algorithm the problem, or is it the preferences of the Users?
I'd argue the latter.
What I learned, which was counter-intuitive, was that the vast majority of people aren't interested in thinking hard. This community is, in large part, an exception, where many members pride themselves on intellectually challenging material.
That's not the norm. We're not the norm.
My belief that every human was by nature "curious" and wanted to be engaged deeply was proven false.
This isn't to claim that incuriosity is our nature, but when testing with huge populations in the US (specifically), that's not how adults are.
The problem, to me, is deeper and is rooted in our education system and work systems that demand compliance over creativity. Algorithms serve what Users engage with; if Users were no longer interested in ragebait and clickbait and instead focused on thoughtful content, the algorithms would adapt.
> Algorithms serve what Users engage with
User engagement isn't actually the same thing as user preference, even though I think many people and companies take the shortcut of equating the two.
People often engage more with things they actually don't like, and which create negative feelings.
These users might score higher on engagement metrics when fed this content, but actually end up leaving the platform or spending less time there, or would at least say in a survey that they don't like some or most of the content they are seeing.
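As a toy illustration of the gap (all numbers hypothetical, not from any real platform), ranking the same items by predicted engagement versus by surveyed preference can produce nearly opposite orderings:

```python
# Toy example: the same items ranked by predicted engagement
# vs. stated preference. All numbers are made up for illustration.
items = [
    # (title, predicted_engagement, surveyed_preference)
    ("Outrageous hot take",   0.9, 0.2),
    ("Correction-bait error", 0.8, 0.3),
    ("Thoughtful long-read",  0.3, 0.8),
    ("Friend's project demo", 0.2, 0.9),
]

by_engagement = sorted(items, key=lambda x: x[1], reverse=True)
by_preference = sorted(items, key=lambda x: x[2], reverse=True)

print([t for t, _, _ in by_engagement])  # rage-bait floats to the top
print([t for t, _, _ in by_preference])  # a near-inverse ordering
```

A platform optimizing the first ordering can report rising engagement even while its users' actual satisfaction falls.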
This is a major reason I stopped using Threads many months ago. Their algorithm is great at surfacing posts that make me want to chime in with a correction, or click to see the rest of the truncated story. But that doesn't mean I actually liked that experience.
Curious about this. Don't have an angle, just trying to survey your perspective.
You shared:
> People often engage more with things they actually don't like, and which create negative feelings.
Do you think this is innate or learned? And, in either case, can it be unlearned?
If you measure which TV shows and movies I watch, that’s a vote of preference.
If you measure which news headlines evoke a comment from me, that’s a measure of engagement but not necessarily preference.
People respond to a lot of things that annoy them, and I think it’s a pretty common human trait. Advertising your business with bright lights and noise can be effective, but we often ban this in our towns and cities because we prefer life without them.
Algorithms have been adapted; they are successful at their goals. We’ve put some of the smartest people on the planet on this problem for the last 20 years.
Humans are notoriously over-sensitive to threats; we see them where they barely exist, and easily overreact. Modern clickbait excels at presenting mundane information as threatening. Of course this attracts more attention.
Also, loud noises attract more attention than soft noise. This doesn’t mean that humans prefer an environment full of loud noises.
> That's not the norm. We're not the norm.
I recommend against putting HN on a pedestal. It just leads to disappointment.
Any comment that challenges mainstream science, materialism/physicalism, and leftist politics gets downvoted into oblivion here because HN is definitely not a haven for people who "pride themselves on intellectually challenging material."
TL;DR: It's an echo chamber here, too, but most people who hold the worldview enforced here cannot see their own presuppositions, nor do they see that their views are political in nature.
Stupid mainstream science.
>... and leftist politics ...
>... nor do they see that their views are political in nature.
You don't say. Personally, I respect comments that prove their own claims.
So changing your feed to show popular posts on the platform instead of just your friends’ Tweets would be expected to shift someone’s intake toward the average of the platform.
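A minimal sketch of why (numbers hypothetical): the feed becomes a mixture, and the mixture's expectation moves toward the platform mean as the popular-post share grows.

```python
# Toy model: blending platform-popular posts into a friends-only feed.
# "lean" is any scalar trait of content; all numbers are hypothetical.
friends_mean  = 0.2   # average lean of posts from accounts you follow
platform_mean = 0.7   # average lean of platform-wide popular posts

for alpha in (0.0, 0.25, 0.5, 0.75):  # fraction of feed drawn platform-wide
    expected = (1 - alpha) * friends_mean + alpha * platform_mean
    print(f"platform share {alpha:.2f} -> expected feed lean {expected:.2f}")
```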
All modern social media is pretty toxic to society, so I don't participate. Even HN/Reddit is borderline. Nothing is quite as good as the irc and forum culture of the 2000s where everyone was truly anonymous and almost nobody tied any of their worth to what exchanges they had online.
It's the proliferation of downvoting. It disincentivizes speaking your honest opinion and artificially boosts mass-appeal ragebait.
It's detrimental to having organic conversations.
"But the trolls" they say.
In practice it's widely abused.
Using HN as an example, there are legitimate, textbook opinions that will boost your comment to the top, and others that will quickly sink to the bottom and often be flagged away for disagreement. Ignoring obvious spam, which is noise, there is no correlation with "right" or "wrong".
That's one advantage old-school discussion forums and imageboards have. Everyone there and all comments therein are equally shit. No voting with the tribe to reinforce your opinion.
What's worse is social media allowed the mentally ill to congregate and reinforce their own insane opinions with plenty of upvotes, which reinforces their delusions as a form of positive feedback. When we wonder aloud how things have become more radicalized in the last 20 years — that's why. Why blame the users when you built the tools?
Ultimately, I think it comes back to people valuing their online personas way too much, and this is something we've intentionally marched towards.
No way I'm going to let that level of outrage-baiting garbage even so much as flash before my eyes.
I'm not saying there is no algorithmic bias, and I tend to agree the X algorithm has a slight conservative bias, but for the most part the owners of these sites care more about keeping your attention than trying to get you to vote a certain way. Therefore, if you're naturally susceptible to culture war stuff, and this is what grabs your attention, it's likely the algorithm will feed it to you.
But this is a far broader problem. These are the types of people who might have watched politically biased cable news in the past, or read politically biased newspapers before that.
Or would that just be considered an unalloyed good?
In Twitter's case, you had regime officials directing censorship illegally through open emails and meetings.
It's no surprise that the needle moves right when you dial back the suppression of free expression even a little bit (X still censors plenty).
One aspect he highlights at the end [0] is that Fascism was not rejected by Germany's citizens, current or former (those who emigrated). In their minds, it had merely been implemented incorrectly. A number of Zionists who emigrated from Germany to Palestine were supporters of Fascism. It was not until the mid-to-late 1960s that people started to realize, and admit, that Fascism was bad.
I personally will never fund Elon Musk. Anyone who says empathy is bad is a bad person at heart. Empathy is intelligence, and those who lack it lack strong intelligence. There is no way to put yourself in the position of what others have gone through without empathy.
[0] https://academic.oup.com/ahr/article-abstract/128/3/1512/728...
He didn't win a majority of the vote, just a plurality. And fewer than 2 in 3 eligible voters actually voted. So he got about 30% of the eligible population to vote for "yay grievance hate politics!" That's way more than it should be, but a relatively small minority compared to the voter response once all ambiguity about the hate disappeared. This is why there's been a 20+ point swing in special election outcomes since Trump started implementing all the incompetent, corrupt, racist asshattery.
https://www.npr.org/2025/06/26/nx-s1-5447450/trump-2024-elec...
from the study:
> We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks
This sounds like a call to separate the aggregation step from the content. Reasonable enough, but does it really address the root cause? Aren't we just as polarized in a world where there are dozens of aggregators into the same data and everyone picks the one that most indulges their specific predilections for engagement, rage, and clicks?
What does "open" really buy you in this space?
Don't get me wrong, I want this figured out too, and maybe this is a helpful first step on the way to other things, but I'm not quite seeing how it plays out.
For a related example I was talking with a colleague recently about how we had both (independently) purchased Nebula subscriptions in an effort to avoid getting YouTube premium and giving Google more money, but both felt the pull back to YouTube because it is so good at leveraging years of subscription and watch history to populate the landing page with content we find engaging.
If even two relatively thoughtful individuals, choosing to spend money on a platform with the kind of content they'd like to watch, can't seem to beat an engagement-first algorithm, I'm not sure how much hope normies have. Unless the real issue is just being terminally online, period, and the only way to win is simply not to play.
The result is that underneath any tweet that gets traction you will see countless blue-checkmark users either trolling for their side or engagement-baiting.
The people who are more ideologically neutral or not aligned with Musk are completely drowned out below the hundreds of bulk replies of blue checkmarks.
It used to be that if you saw someone, like a tech CEO, take an interesting position, you'd have a varied and interesting discussion in the replies. The algorithm would show you replies in particular from people you follow, and often you'd see some productive exchange that actually mattered. Now it's almost entirely drivel, and you have to scroll through rage bait and engagement slop before getting to the crumbs of meaningful exchange.
It has had a chilling effect on productive intellectual conversation while also accelerating the polarization of the platform by scaring away many people who care about measured conversation.
An algorithm designed today with the goal of helping users pick the most profitable product photo would probably steer people towards using Caucasian models, and because eBay's cut is a percentage, they would be incentivized to use it.
Studies show that conservatives tend to respond more positively to sponsored content. If this is true, algorithm-driven ad-sponsored social sites will tend towards conservative content.
https://onlinelibrary.wiley.com/doi/abs/10.1111/1756-2171.12...
https://www.tandfonline.com/doi/full/10.1080/00913367.2024.2...
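If that finding holds, the mechanism requires no editorial intent at all. A toy sketch of the incentive (all rates hypothetical, not from either study):

```python
# Toy sketch: if one audience segment clicks sponsored content at a
# higher rate, revenue-per-impression ranking tilts toward content
# that attracts that segment. All numbers are hypothetical.
ad_click_rate = {"conservative": 0.030, "liberal": 0.020}
revenue_per_click = 0.50

def expected_ad_revenue(audience: str, impressions: int) -> float:
    """Expected ad revenue from content that draws this audience."""
    return impressions * ad_click_rate[audience] * revenue_per_click

candidates = [("conservative-leaning post", "conservative", 1000),
              ("liberal-leaning post",      "liberal",      1000)]
ranked = sorted(candidates,
                key=lambda c: expected_ad_revenue(c[1], c[2]),
                reverse=True)
print(ranked[0][0])  # equal reach, but the higher-CTR audience wins
```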
That said, the Trending tab does tend to push a very heavily MAGA-aligned narrative, in a way that to me just seems comical, but I suppose there must be people who genuinely take it at face value, and maybe that does push people.
Less to do with the article:
The more I think about it, I'm not really even sure why I use X these days, other than the fact that I don't really have much of an in-person social life outside of work. Sometimes it can be enjoyable, but honestly my main takeaway is that microblogging as a format is genuinely terrible, and X in particular does seem to feed you the angriest things possible. Maybe it's exciting to try to discuss opinions, but it's simultaneously hardly possible to have a nuanced or careful discussion when you have limited characters and someone on the other end who just wants to shout over you.
I miss being a kid and going onto forums for Scratch or Minecraft or whatever. The internet felt way more fun when it was just making cool things and chatting with people about it. I think the USA sort of felt more that way too, but it's hard to know if that was just my privilege. When I write about X, it uncomfortably parallels how my interactions with my family and friends in real life have evolved.
It just baffles me how different my experience of using the platform is. I literally do not see any news. I'm not entirely convinced that it's Twitter being biased and not just giving each person what they most engage with.
I gave up on Twitter when everyone I followed kept adding politics. Even if I agreed with it, I just don't want to marinate in the anger all day.
To give an example, the recent protests in Iran were being covered on X, but the BBC was silent for weeks before finally covering the story (for a few days).
Because the MSM news stations themselves pick the stuff up from Twitter and just add their own spin. A dozen phone videos from random citizens on-site will always arrive quicker than CNN/FOX can send a reporter there. On Twitter you at least get the raw footage and can judge for yourself before the MSM tries to turn it political to rage-bait you.
I do share the author's concerns and was also concerned back in the day when Twitter was quite literally banning people for posting the wrong opinions there. But it's interesting how the people who used to complain about political bias now seem not to care, and the people who argued "Twitter is a private company, they can do what they want" suddenly think the conservative-leaning algorithm now on X is a problem. It's hard to get people across political lines to agree when we do this.
In my opinion there are two issues here, and neither is politically partisan.
The first is that we humans are flawed, and algorithms can use our flaws against us. I've repeatedly spoken about how much I love YouTube's algorithm because, despite some people saying it's an echo chamber, I think it's one of the few recommendation algorithms that will serve you a genuinely diverse range of content. But I suspect that's because I genuinely like consuming a very wide range of political content, and I know I'm in a minority there (probably because I'm interested in politics as a meta subject, but don't have strong political opinions myself). My point is that these algorithms can work really well if you genuinely want to watch a diverse range of political content.
Secondly, some recommendation algorithms (and search algorithms) seem to be genuinely biased, which I'd argue isn't a problem in itself (they are private companies and can do what they want), but the bias isn't transparent. X very clearly has a conservative bias, and Bluesky very clearly has a political bias too. Neither would admit it, so people incorrectly assume they're being served something fairly representative of public opinion rather than curated – either by moderation or by algorithm tweaks.
What we need is honesty, both from individuals, who should seek out their own biases, and from platforms, which pretend not to have bias but do, and therefore influence where people believe the center ground is.
We can all be more honest with ourselves. If you exclusively use X or Bluesky, it's worth asking why that is, especially if you're engaging with political content on these platforms. But secondly, I think we need more regulation around the transparency of algorithms. I don't necessarily think it's a problem if a platform recommends certain content above other content, or has some algorithm to ban users who post content it doesn't like, but these decisions should be far more transparent than they are today, so people are at least able to feed that into how they perceive the neutrality of the content they're consuming.
But now people seem to think newsfeeds, which increase the influence of "the algorithm", are just a result of engagement and (IMHO) nothing could be further from the truth.
Factually accurate and provable statements get labelled "misinformation" (either by human intervention or by other AI systems ostensibly created to fight misinformation) and thus get lower distribution. All while conspiracy theories get broad distribution.
Even ignoring "misinformation", certain platforms will label some content as "political" and other content as not when a "political" label often comes down to whether or not you agree with it.
One of the most laughable incidents of putting a thumb on the scale was when Grok started complaining about white genocide in South Africa in completely unrelated posts [1].
I predict a coming showdown over Section 230 about all this. Briefly, S230 establishes a distinction between being a publisher (e.g. a newspaper) and a platform (e.g. Twitter), and gave platforms broad immunity from liability for user-generated content. This was, at the time (the 1990s), a good thing.
But now we have a third option: social media platforms have become de facto publishers while pretending to be platforms. How? Ranking algorithms, recommendations and newsfeeds.
Think about it this way: imagine you had a million people in an auditorium and you were taking audience questions. What if you only selected questions that were supportive of the government or a particular policy? Are you really a platform? Or are you selecting user questions to pretend something has broad consensus or to push a message compatible with the views of the "platform's" owner?
My stance is that if you, as a platform, actively suppress and promote content based on politics (as IMHO they all do), you are a publisher, not a platform in the Section 230 sense.
[1]: https://www.theguardian.com/technology/2025/may/14/elon-musk...
It comes down to this: you can have visibility into things and yet those in power won’t care whatsoever what you may think. That has always been the case, it is the case now, and will continue to be in the future.