By the way, a pet peeve of mine right now is that reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF decision/ruling for us all to read, right in the story. (and another user here kindly did that for us below: https://storage.courtlistener.com/recap/gov.uscourts.dcd.223... )
It seems such a simple step (they must have been using the ruling PDF to write the story), yet they always seem to treat linking the original content as a hassle. I would rather read the probably-dozens-of-pages ruling with the full details than hear it secondhand from a reporter at this point. It feels like they want to be the gatekeepers of information, and poor ones at that.
I think it should be adopted as standard journalistic practice in fact -- reporting on court rulings must come with the PDF.
Aside from that, it will be interesting to see on what grounds the judge decided that this particular data sharing remedy was the solution. Can anyone now simply claim they're a competitor and get access to Google's tons of data?
I am not too familiar with antitrust precedent, but to what extent does the judge rule on how specific the data sharing needs to be (what types of data, for what time span, how anonymized, etc.), or appoint a special master? Why is that up to the judge versus the FTC or whoever to propose?
> By the way, a pet peeve of mine right now is that reporters covering court cases never seem to simply paste the link to the online PDF decision/ruling for us all to read right in the story.
I presume that this falls under the same consideration as direct links to science papers in articles that are covering those releases. Far as I can tell, the central tactic for lowering bounce rate and increasing 'engagement' is to link out sparsely, and, ideally, not at all.
I write articles on new research papers and always provide a direct link to the PDF; but nearly all major sites fail to do this, even when the paper turns out to be on arXiv or otherwise directly available (as opposed to an exclusive preview offered to the publication by the researchers, as often happens at more prominent outlets such as Ars and The Register).
In regard to the few publishers that do provide legal PDFs in articles, the solution I see most often is that the publication hosts the PDF itself, keeping the reader in their ecosystem. Then again, external PDFs can be revised or taken down, so self-hosting may also be a countermeasure against link rot.
They didn't cite papers directly even before the web. It's not a bounce-rate or engagement issue.
Journalists don't make it easy for you to access primary sources because of a mentality and culture issue. They see themselves as gatekeepers of information and convince themselves that readers can't handle the raw material. From their perspective, making it easy to read primary sources is pure downside:
• Most readers don't care/have time.
• Of the tiny number who do, the chances of them finding a mistake in your reporting or in the primary source are high.
• It makes it easier to misrepresent the source to bolster the story.
Eliminating links to sources is a pure win: people care a lot about mistakes but not about finding them, so raising the bar for the few who do is ideal.
Articles about patent infringement are similarly annoying when the patent numbers aren't cited. This is basic 21st century journalism 101. We aren't limited to what fits on a broadside anymore.
We need an AI driven extension that will insert the links. This would be a nice addition to Kagi as they could be trusted to not play SEO shenanigans.
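As a toy sketch of what such an extension's core logic might look like: pull "X v. Y" case names out of article text with a naive heuristic, then build a CourtListener search URL for each. Everything here is illustrative — the regex is deliberately rough and the `?q=` search-URL format is an assumption, not a documented API.

```python
import re
import urllib.parse

# Naive heuristic: up to four capitalized tokens, "v.", up to four more.
# Real case citations are messier; this is only a demonstration.
CASE_RE = re.compile(
    r"\b[A-Z][\w.&']*(?:\s[A-Z][\w.&']*){0,3}\s+v\.\s+[A-Z][\w.&']*(?:\s[A-Z][\w.&']*){0,3}"
)

def find_case_citations(text: str) -> list[str]:
    """Return 'X v. Y' style case names found in article text."""
    return CASE_RE.findall(text)

def courtlistener_search_url(case_name: str) -> str:
    """Build a CourtListener search link (assumed '?q=' query format)."""
    return "https://www.courtlistener.com/?q=" + urllib.parse.quote(case_name)

article = "Judge Mehta's ruling in United States v. Google LLC, issued Tuesday, stops short of a breakup."
for case in find_case_citations(article):
    print(case, "->", courtlistener_search_url(case))
```

An actual extension would run something like this over the article DOM and inject the links inline; the hard part (which is where the AI would come in) is disambiguating which docket the article is actually about.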
If news on the web was journalism instead of attention seeking for ad revenue you’d be right.
Agree on the extension idea, except I’m not sure I want to see the original sensationalized content anyway. Might as well have the bot rewrite the piece in a dry style.
I don't read science/tech articles from major news outlets for this reason. They NEVER link to the papers and I always have to spend a few minutes searching for it.
This doesn't happen nearly as often on smaller sci/tech news outlets. When it does a quick email usually gets the link put in the article within a few hours.
I think one of the lessons of Wikipedia is that the more you link out, the more they come back.
People come to your site because it is useful. They are perfectly capable of leaving by themselves; they don't need a link to do so. Having links to relevant information that attracts readers back is well worth the cost of people following links out of your site.
Interesting example, as Google used to link to Wikipedia much more prominently, then stopped doing that, which dropped Wikipedia's visitor counts a lot. A very large percentage of Wikipedia's visits are Google referrals.
Google shifted views that used to go to Wikipedia first to their in-house knowledge graph (high percentages of which are just Wikipedia content), then to the AI produced snippets.
All to say, yes...Wikipedia's generosity with outbound links is part of the popularity. But they still get hit by this "engagement" mentality from their traffic sources.
I would argue that this is less an example of why linking out may be bad for engagement and more an example of google abusing its intermediary/market position to keep users on their own pages longer
I'd argue that a user not having to click through is clearly a better result for the user, and that alone would be sufficient motivation to do it.
In terms of a single search, I don't think Google really benefits from preventing a click-through - the journey is over once the user has their information. If anything, making them click through to an ad-infested page would probably get a few fractions of a cent extra given how deeply Google is embedded in the ads ecosystem.
But giving the user the result faster means they're more likely to come back when they need the next piece of information, and give them more time to search for the next information. That benefits Google, but only because it benefits the user.
That'd be all fine if google produced that content, but since it doesn't, once they kill off the website, what happens to the quality of their snippets? Then the user has only shitty snippets that are out of date.
That's the kind of short-sighted view that's the root issue in a ton of the enshittification happening around us: the belief that short-term gains are all that matters. It's not sustainable to leech off Wikipedia content to fuel your own (as in Google's) knowledge pop-ups, even if it benefits the user by saving a single click, because long-term Wikipedia will die out once users no longer associate the knowledge with Wikipedia but with Google, which had nothing to do with it apart from "stealing" it.
I won't call it dead, but it is declining. Their various sources of traffic are now regurgitating Wikipedia content (and other third-party sources) via uncited, unlinked AI "blurbs" instead of presenting snippets of Wikipedia content with links to Wikipedia to read more.
It's not the only reason their traffic is declining, but it seems like a big one.
I may be wrong, but I don’t think the people that edit Wikipedia are the same people that are content with half truths from LLMs and thus no longer visiting the site. So I kinda doubt it matters much.
Also, Stack Overflow is a commercial website, while Wikipedia is a free (as in freedom) project. Editing Wikipedia feels like you're contributing towards "an ideal", that you're giving something back to humanity, instead of just helping somebody else get richer.
It's depressing how much of the web didn't work the way it was supposed to. Attention is centralized on news websites because news can be posted on social media feeds every day. Those news articles never link to other websites due to arbitrary SEO considerations. Google's pagerank which was once based on backlinks can't function if the only links come from social media feeds in 3 websites and none of them come from actual websites. On top of it all, nobody even knows for sure if those SEO considerations matter or not because it's all on Google's whim and can change without notice.
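To make the PageRank point concrete: the algorithm's whole signal is outbound links between sites, so a web where the big traffic hubs never link out starves it. A minimal power-iteration sketch (the graph and site names are made up for illustration):

```python
def pagerank(links: dict, damping: float = 0.85, iters: int = 50) -> dict:
    """Toy power-iteration PageRank. links maps page -> list of outbound pages."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                # Each page splits its rank evenly among its outbound links.
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:
                # Dangling page (no outbound links): spread its rank evenly.
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank

# "news" never links out; "blog" and "wiki" link freely, including to "news".
graph = {"news": [], "blog": ["wiki", "news"], "wiki": ["blog", "news"]}
ranks = pagerank(graph)
print(ranks)
```

The hoarder ends up with the highest rank precisely because everyone else links generously — which is the asymmetry the comment above is complaining about: if only a few social feeds generate links, the signal collapses.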
The web works fine; it's just PACER and similar systems that are garbage, because there's no competition in the trash people create for the government, and public apathy (or corruption, take your pick) is high.
Never link outside your domain has been rule #1 of the ad-driven business for years now.
Once users leave your page, they become exponentially less likely to load more ad-ridden pages from your website.
Ironically this is also why there is so much existential fear about AI in the media. LLMs will do to them what they do to primary sources (and more likely just cut them out of the loop). This Google story will get a lot of clicks. But it is easy to see a near future where an AI agent just retrieves and summarizes the case for you. And does a much better job too.
> But it is easy to see a near future where an AI agent just retrieves and summarizes the case for you. And does a much better job too.
I am significantly less confident that an LLM is going to be any good at putting a raw source like a court ruling PDF into context and adequately explain to readers why - and what details - of the decision matter, and what impact they will have. They can probably do an OK job summarizing the document, but not much more.
I do agree that given current trends there is going to be significant impact to journalism, and I don’t like that future at all. Particularly because we won’t just have less good reporting, but we won’t have any investigative journalism, which is funded by the ads from relatively cheap “reporting only” stories. There’s a reason we call the press the fourth estate, and we will be much poorer without them.
There’s an argument to be made that the press has recently put themselves into this position and hasn’t done a great job, but I still think it’s going to be a rather great loss.
LLMs are already great at contextualizing and explaining things. HN is so allergic to AI it's incredible. And it's leaving you behind.
They are. I use LLMs. They need to be given context. Which is easy for things that are already on the Internet for them to pull from. When people stop writing news articles that connect events to one another then LLMs have nothing to pull into their context. They are not capable of connecting two random sources.
Edit: also, the primary point is that if everyone uses LLMs for reporting, the loss of revenue will cause the disappearance of the investigative journalism that revenue funds, which LLMs sure as fuck aren't going to do.
Is this article investigative? Summary of the court case pdf is trivial for an LLM and most will probably do a better job than the linked article. Main difference being you won't be bombarded with ads and other nonsense (at least for now). Hell I wouldn't be surprised if the reporter had an LLM summarize the case before they wrote the article.
Content that can't be easily made by an LLM will still be worth something. But go to most news sites and their content is mostly summarization of someone else's content. LLMs may make that a hard sell.
The problem I may have with using an LLM for this is that I am not already familiar with the subject in detail and won't know when the thing has:
* Strayed from reality
* Strayed from the document and is freely admixing with other information from its training data without saying so. Done properly, this is a powerful tool for synthesis, and LLMs theoretically are great at it, but done improperly it just muddles things
* Has some kind of bias baked in, e.g. "in summary, this ruling is an example of judicial overreach by activist judges against a tech company which should morally be allowed to do what they want". Not such a problem now, but I think we may see more of this once AI is firmly embedded into every information flow. Currently the AI company game is training people to trust the machine. Once they do, what a resource those people become!
Now, none of those points are unique to LLMs: inaccuracy, misunderstanding, wrong or confused synthesis and especially bias are all common in human journalism. Gell-Mann amnesia and institutional bias and all that.
Perhaps the problem is that I'm not sufficiently mistrustful of the status quo, even though I am already quite suspicious of journalistic analysis. Or maybe it's because AI, though my brain screams "don't trust it, check everything, find the source", remains in the toolbox even when I find problems, whereas for a journalist I'd roll my eyes, call them a hack and leave the website.
Not that it's directly relevant to the immediate utility of AI today, but once AI is everything, or almost everything, then my next worry is what happens when you functionally only have published primary material and AI output to train on. Even without model collapse, what happens when AI journobots inherently don't "pick up the phone", so to speak, to dig up details? For the first year, the media runs almost for free. For the second year, there's no higher level synthesis for the past year to lean on and it all regresses to summarising press releases. Again, there are already many human publications that just repackage PRs, but when that's all there is? This problem isn't limited to journalism, but it's a good example.
I think it's a mix of shortsightedness and straight up denial. A lot of people on here were the smart nerdy kid. They are good at programming or electronics or whatever. It became their identity and they are fuckin scared that the one thing they can do well will be taken away rather than putting the new tool in their toolbox.
> significantly less confident that an LLM is going to be any good at putting a raw source like a court ruling PDF into context and adequately explain to readers why
If you think that's the case, you should really give current LLMs another shot. The version of ChatGPT from 3 years ago has more in common with the average chatbot from 50 years ago than it does the ChatGPT from today.
What condescending nonsense is this? I use all the major LLM systems, mostly with their most expensive models, and when I ask them for sources, including specifically sources for legal questions, half the time the linked source is not remotely relevant and does not remotely substantiate the claim it is being cited for. Almost never is it without an error of some significance. They all still hallucinate very consistently if you're actually pushing them into areas that are complicated and non-obvious; when they can't figure out an answer, they make one up. The reduction in apparent hallucinations in recent models seems to be more that they've learned specific cases where they should say they don't know, not that the problem has been solved in a broader sense.
This is true for first-party applications as well as for custom integrations, where I can explicitly check that the context should be grounding them with all of the relevant facts. It doesn't matter; that isn't enough. You can tell me I'm holding it wrong, but we've consulted with experts from Anthropic and from OpenAI who have done major AI integrations at some of the most prominent AI-consuming companies. I'm not holding it wrong. It's just a horribly flawed piece of technology that must be used with extreme thoughtfulness if you want to do anything non-trivial without massive risks.
I remain convinced that the people who can't see the massive flaws in current LLM systems must be negligently incompetent in how they perform their jobs. I use LLMs every day in my work and they are a great help to my productivity, but learning to use them effectively is all about understanding the countless ways in which they fail, the things they cannot be relied on for, and where they actually provide value.
They do provide value for me in legal research, because sometimes they point me in the direction of caselaw or legal considerations that hadn’t occurred to me. But the majority of the time, the vast majority, their summaries are incorrect, and their arguments are invalid.
LLMs are not capable of reasoning that requires non-obvious jumps of logic more than one small step removed from examples they've seen in their training. If you attempt to use them to reason about a legal situation, you will immediately see them tie themselves in knots, because they are not capable of that kind of reasoning, on top of their inability to accurately understand and summarize case documents and statutes.
There's a simpler explanation: they are comparing LLM performance to that of regular humans, not perfection.
Where do you think LLMs learned this behavior from? Go spend time in the academic literature outside of computer science and you will find an endless sea of material with BS citations that don't substantiate the claim being made, entirely made up claims with no evidence, citations of retracted papers, nonsensical numbers etc. And that's when papers take months to write and have numerous coauthors, peer reviewers and editors involved (theoretically).
Now read some newspapers or magazines and it's the same except the citations are gone.
If an LLM can meet that same level of performance in a few seconds, it's objectively impressive unless you compare to a theoretical ideal.
"Based on the court's memorandum opinion in the case of United States v. Google LLC, Google is required to adhere to a series of remedies aimed at curbing its monopolistic practices in the search and search advertising markets. These remedies address Google's distribution agreements, data sharing, and advertising practices.
Distribution Agreements
A central component of the remedies focuses on Google's distribution agreements to ensure they are not shutting out competitors:
No Exclusive Contracts Google is barred from entering into or maintaining exclusive contracts for the distribution of Google Search, Chrome, Google Assistant, and the Gemini app.
No Tying Arrangements Google cannot condition the licensing of the Play Store or any other Google application on the preloading or placement of its other products like Search or Chrome.
Revenue Sharing Conditions The company is prohibited from conditioning revenue-sharing payments on the exclusive placement of its applications.
Partner Freedom Distribution partners are now free to simultaneously distribute competing general search engines (GSEs), browsers, or generative AI products.
Contract Duration Agreements with browser developers, OEMs, and wireless carriers for default placement of Google products are limited to a one-year term.
Data Sharing and Syndication
To address the competitive advantages Google gained through its exclusionary conduct, the court has ordered the following:
Search Data Access Google must provide "Qualified Competitors" with access to certain search index and user-interaction data to help them improve their services. This does not, however, include advertising data.
Syndication Services Google is required to offer search and search text ad syndication services to qualified competitors on ordinary commercial terms. This will enable smaller firms to provide high-quality search results and ads while they build out their own capabilities.
Advertising Transparency
To promote greater transparency in the search advertising market, the court has mandated that:
Public Disclosure Google must publicly disclose significant changes to its ad auction processes. This is intended to prevent Google from secretly adjusting its ad auctions to increase prices.
What Google is NOT Required to Do
The court also specified several remedies it would not impose:
No Divestiture Google is not required to sell off its Chrome browser or the Android operating system.
No Payment Ban Google can continue to make payments to distribution partners for the preloading or placement of its products. The court reasoned that a ban could harm these partners and consumers.
No Choice Screens The court will not force Google to present users with choice screens on its products or on Android devices, citing a desire to avoid dictating product design.
No Sharing of Granular Ad Data Google is not required to share detailed, query-level advertising data with advertisers.
A "Technical Committee" will be established to assist in implementing and enforcing the final judgment, which will be in effect for six years."
Frankly I don't think that's bad at all. This is from Gemini 2.5 pro
I guess they are unable to value the fact that I am more likely to read and trust stories from their website if they give me honest pointers to where their stories come from, so I can read further (and rely on them as a guide to the sources).
they likely, and probably correctly, do not want you as a customer. people who are discerning and conscious like this generally use an adblocker, and even if you don't, are generally less easily influenced by adverts in the first place. most people like this tend towards wealthy, so it's a valuable demographic if they can get past those two issues, but they're not easy to get past
You made me snort with laughter with how right you were. I in fact have 2 adblockers on, and I actively ignore and sanitize some of my history (like Youtube) to not get directed towards advertising or other rabbit holes I don't want to see, even though I never click a single ad.
But I do pay for quality journalism / news websites!
Most consumers cannot identify which website they are currently looking at. Google, Facebook, giveuscardinfozzzz.com, all the same. No distinguishing or discernible features or difference.
This is one of the practices I hate the most on the internet.
Sometimes it's so ridiculous that a news site will report about some company and will not have a single link to the company page or will have a link that just points to another previous article about that company.
It has gotten absolutely out of control. I will be reading an article about a new game, and the article won't even have a link to the store page to buy the game...
Which store page should they be linking to? Inevitably what you’re asking for is how we’ve ended up with sites spinning off thousands of articles stuffed full of affiliate links.
It’s not about insecurity - it’s more like a user will accidentally click on the link, end up on the company’s site, not realise they’ve left the news site, be confused as to why the news site is trying so hard to sell them a dishwasher, not remember they were just reading an article about them, and will be scared and alienated.
Most of that stuff like court decisions and patents isn't copyrighted anyway. They can host a copy on their own site and display ads around it if they want to.
Who is this for though? Your average user would not be able to use it or understand the purpose of it. A big image of a padlock with a tick saying “SECURE AND VERIFIED” would be just as effective.
> Ironically this is also why there is so much existential fear about AI in the media. LLMs will do to them what they do to primary sources (and more likely just cut them out of the loop).
Maybe.. not. LLMs may just flow where the money goes. Open AI has a deal with the FT, etc.
The AI platforms haven't gone through any UI devolution yet, because they're still a hot commodity.
> By the way, a pet peeve of mine right now is that reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF decision/ruling for us all to read, right in the story.
I have the same peeve, but to give credit where it is due, I've happily noticed that Politico has lately been doing a good job of linking the actual decisions. I just checked for this story, and indeed the document you suggest is linked from the second paragraph:
https://www.politico.com/news/2025/09/02/google-dodges-a-2-5...
> I would rather be able to see the probably dozens of pages ruling with the full details rather than hear it secondhand from a reporter at this point
And the reporter would rather you hear it second hand from them :)
I agree, online "journalists" are absolutely terrible at linking to sources. You'll have articles which literally just cover a video (a filmed press conference, a YouTube video, whatever) that's freely available online and then fail to link to said video.
I don't know what they're teaching at journalistic ethics courses these days. "Provide sources where possible" sounds like it should be like rule 1, yet it never happens.
Sure, after you dismiss the pop-up telling you to become an ars subscriber.
I’m only angry about this because I’ve been on ars since 2002, as a paid subscriber for most of that time, but I cancelled last year due to how much enshittification has begun to creep in. These popups remove any doubt about the decision at least.
(I cancelled because I bought a product they gave a positive review for, only to find out they had flat-out lied about its features, and it was painfully obvious in retrospect that the company paid Ars for a positive review. Or they’re so bad at their jobs they let clearly wrong information into their review… I’m not sure which is worse.)
I’ve noticed this in New York Times articles in the last couple years. Articles are heavily interlinked now - most “keyword” terms will link to a past article on the same topic - but the links rarely leave the Times’ site. The only exception is when they need to refer back to a prior story that they didn’t cover, but that another publication did. Sources are almost never linked; when they are, it’s to a PDF embed on the Times’ own site.
I assume they and all the other big publications have SEO editors who’ve decided that they need to do it for the sake of their metrics. They understand that if they link to the PDF, everyone will just click the link and leave their site. They’re right about that. But it is annoying.
About a year ago the NYTimes wrote an article titled something like "Who really gets to declare if there is famine in Gaza?", and its conclusion was "well boy it sure is complicated, but Gaza is not officially in famine". I found the conclusion and wording suspect.
I went looking to see if they would link to the actual UN and World Food Program reports. The official conclusions were that significant portions of Gaza were already officially in famine, but that not all of Gaza was. The rest of Gaza was just one or two levels below famine, but those levels have names like "Food Emergency".
Essentially those lower levels were what any lay person would probably call a famine, but the Times did not mention the other levels, or that parts were at the famine level - just that "Gaza is not in famine".
To get to the actual report took 5 or 6 hard-to-find backlinks through other NYTimes articles. Each article loaded with further NYTimes links making it unlikely you'd ever find the real one.
It's true that they do this sort of thing for political reasons, but it sounds like the original NYT report wasn't meant to be merely a paraphrase of a specific UN report? In which case, it would be legitimate to cite other sources and report that they disagree?
The editorial board would probably prefer the NYTimes not get murdered by the current political climate - which of course is part of why the political climate is what it is.
Not just court cases. But so many situations where the primary sources are relevant. Most recently, I’ve seen journalists refer to questionable social media posts that they frame in a certain way but the actual posts don’t align with that frame
> I think it should be adopted as standard journalistic practice in fact -- reporting on court rulings must come with the PDF.
Bafflingly, I’ve found this practice to continue even in places like University PR articles describing new papers. Linking to the paper itself is an obvious thing to do, yet many of them won’t even do that.
In addition to playing games to avoid outbound links, I think this practice comes from old journalistic ideals that the journalist is the communicator of the information and therefore including the source directly is not necessary. They want to be the center of the communication and want you to get the information through them.
We're talking about secondary sources (newspapers) linking to primary sources (a PDF of the court ruling). You showed a tertiary source (Google search) linking to a secondary source (BBC).
> By the way, a pet peeve of mine right now is that reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF decision/ruling for us all to read, right in the story.
Usually I would agree with you, however, the link is in the article hyperlinked under "Amit Mehta" in the 3rd paragraph. Now could the reporter have made that clearer...yes, but it's still there.
There is a link right there in 3rd paragraph: "U.S. District Judge Amit Mehta", though strangely under the name...
> I would rather be able to see the probably dozens of pages ruling with the full details rather than hear it secondhand from a reporter at this point.
There is no way you (and more importantly, the average reader) would have time for that. But if you do, the extra time it takes to find the link yourself is ~0.0% of the total time needed to read the decision directly, so that's fine?
> with the full details
You don't have them in those dozens of pages, for example, the very basics of judge's ideological biases are not included.
It is also meant to lessen the legal burden: when they don't link to the primary source, nobody can claim the reporting is inaccurate, missing essential facts, or made up.
> I am not too familiar with antitrust precedent, but to what extent does the judge rule on how specific the data sharing need to be (what types of data, for what time span, how anonymized, etc. etc.) or appoint a special master? Why is that up to the judge versus the FTC or whoever to propose?
The judge doesn't propose, he rules on what the parties propose, and that can be an iterative process in complex cases. E.g., in this case, he has set some parameters in this ruling, and set a date by which the parties need to meet on the details within those parameters.
As a reporter, I can tell you that your comment stems from a common fallacy: y’all think you know better than reporters what our jobs are and what the dynamics of our publishing platform entail.
For some reason, everyone feels like they would know how to be a journalist better than the actual professionals.
That said, reporters most probably have nothing to do with what you're decrying. Linking policies are not the reporter's business.
There are probably multiple layers of SEO “experts” and upper management deciding what goes on page and what not.
Funnily enough, they might be super anal about what the story links, and then let Taboola link the worst shit on the Internet under each piece…
So please, when you start your sentence with “reporters” please know that you’re criticizing something they have no power to change.
I sympathize with how annoying it must be to have other people messing up your work, but also, if your name is at the top of the page, and there's not really any other way for readers to know anyone in particular that is taking responsibility for any specific detail on that page, it's obviously going to be your reputation on the line to some extent.
> they must have been using the ruling PDF to write the story
Oh you sweet Summer child :-)
The worst is with criminal cases where they can't even be bothered to write what the actual charges are. It's just some vague 'crime' and the charges aren't even summarized - they're just ignored.
They don't necessarily want to be the gatekeepers of information, they just want your next click to be another news story on their website.
External links are bad for user retention/addiction.
This also has the side effect that backlinking is no longer a measure of a 'good' website, so good-quality content from inconsistently trafficked sites gets buried in search results.
I've noticed this too and I agree it's unacceptable practice. Journalism in general has become wildly resistant to properly citing its sources (or it makes its citations as difficult to find as possible through various obfuscation techniques). This makes independent validation of any information online that much more difficult while further entrenching a culture of "just trust me, bro" on the internet in general. It's a deeply infuriating and destructive practice that needs to die out. At least when I was in school & university, properly citing your sources was everything when it came to writing any sort of report or essay. How the adtech industry managed to quietly undo that standard expectation so thoroughly for the sake of engagement metrics is rather nuts to me.
> reporters covering court cases (and we have so many of public interest lately) never seem to simply paste the link to the online PDF
Would note that this significantly varies based on whether it's ad-driven or subscription-based/paywalled. The former has no incentive to let you leave. The latter is trying to retain your business.
The actual answer is that the majority of journalists are summarizing other journalists, who are summarizing someone they asked about the original content. They have never seen it themselves, so they can't link it.
I would go so far as to inherently mistrust any legal reporting that does not link to the ruling or trial footage at this point. I've watched multiple public trials and seen reporting that simply did not reflect what actually went on.
By the way, the worst laughable offenders of this idea are local TV news stations. As if to get the real insight on some world issue, I'm going to "stay up to date by going to KTVU.com for the latest on this breaking story!".
That's certainly no more accurate of the news division of KTVU, a local Fox-owned station, than it is of the national “News” network with the same corporate parent.
Is it not sad/telling that the reporter of the story couldn't summarize this in the story, but the bot here can? If ever there were an indicator of the future to come...
This is an astonishing victory for Google, they must be very happy about it.
They get basically everything they want (keeping it all in the tent), plus a negotiating position on search deals where they can refuse something because they can't do it now.
Quite why the judge is so concerned about the rise of AI factoring in here is beyond me. It's fundamentally an anticompetitive decision.
Feels like the judge was looking for any excuse not to apply a harsh penalty, and since Google brought up AI as a competitor, the judge accepted that as an excuse for a very minor penalty.
AI is a competitor. You know how StackOverflow is dead because AI provided an alternative? That's happening in search too.
You might think "but ChatGPT isn't a search engine", and that's true. It can't handle all queries you might use a search engine for, e.g. if you want to find a particular website. But there are many many queries that it can handle. Here's just a few from my recent history:
* How do I load a shared library and call a function from it with VCS? [Kind of surprising it got the answer to this given how locked down the documentation is.]
* In a PAM config, what do the keywords auth, account, password, session, and also required/sufficient mean?
* What do you call the thing that car roof bars attach to? The thing that goes front to back?
* How do I right-pad a string with spaces using printf?
These are all things I would have gone to Google for before, but ChatGPT gives a better overall experience now.
Yes, overall, because while it bullshits sometimes, it also cuts to the chase a lot more. And no ads for now! (Btw, someone gave me the hint to set its personality mode to "Robot", and that really helps make it less annoying!)
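(For the printf one, incidentally, the answer is small enough to show inline; a quick sketch using Python's printf-style % operator, which follows the same format spec as C's printf:)

```python
# The "-" flag in a printf-style format left-justifies the value in the
# field width, padding the right side with spaces.
# Python's "%" operator implements the same spec as C printf here.
padded = "%-10s" % "abc"
print(repr(padded))  # 'abc       '
```

The same `%-10s` conversion works with C `printf` and the shell's `printf` builtin.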
> You know how StackOverflow is dead because AI provided an alternative? That's happening in search too.
Stack Overflow isn’t dead because of AI. It’s dead because they spent years ignoring user feedback and then doubled down by going after respected, unpaid contributors like Monica.
Would they have survived AI? Hard to say. But the truth is, they were already busy burning down their own community long before AI showed up.
When AI arrived I'd already been waiting for years for an alternative that didn’t aggressively shut down real-world questions (sometimes with hundreds of upvotes) just because they didn’t fit some rigid format.
> Stack Overflow isn’t dead because of AI. It’s dead because they spent years ignoring user feedback
It is dead because of both of those things. Everyone hated Stackoverflow's moderation, but kept using it because they didn't have a good alternative until AI.
> When AI arrived I'd already been waiting for years for an alternative that didn’t aggressively shut down real-world questions
I don't agree that ChatGPT gives an overall better experience than Google, let alone an actual good search engine like Kagi. It's very rare that I need to ask something in plain English because I just don't know what the keywords are, so the one edge the LLM might have is moot. Meanwhile, because it bullshits a lot (not just sometimes, a lot), I can't trust anything it tells me. At least with a search engine I can figure out if a given site is reliable or not, with the LLM I have no idea.
People say all the time that LLMs are so much better for finding information, but to me it's completely at odds with my own user experience.
Why not both? You mention Kagi, and I find its Assistant to be a very useful mix of LLM and search engine.
Something I asked it recently is whether Gothenburg has any sky-bars that overlook Hisingen to the North, and it correctly gave me one.
A search engine could have given me a list of all sky-bars. And by looking at their photos on Google maps, I could probably have found one with the view / perspective I wanted. But Kagi Assistant using Kimi K2 did a decent job of narrowing the options I had to research.
I’d rather use every LLM that can search the web (including whatever local model I’m currently running on my MacBook) over Google. I also prefer the results from Kagi (which I generally use), DuckDuckGo, and Ecosia.
I still don’t think a company with a touchpoint on such a high percentage of web usage should be allowed to own one of the 2 mobile OSs that control that market, the most popular browser, the most popular search engine, the top video site (that’s also a massive social network), and a huge business placing ads on 3rd party sites.
Any two of these should be cause for concern, but we are well beyond the point that Google’s continued existence as a single entity is hugely problematic.
ChatGPT, Grok, and the like give an overall better experience than Google because they give you the answer, not links to some pages where you might find the answer. So unless I'm explicitly searching for something, like a specific article, asking Grok is faster and gets you an acceptable answer.
You get an acceptable answer maybe about 60% of the time, assuming most of your questions are really simple. The other 40% of the time it's complete nonsense dressed up as a reasonable answer.
In my experience I get acceptable answers in more than 95% of questions I ask. In fact, I rarely use search engines now. (btw I jumped off Google almost a decade ago now, have been using duckduckgo as my main search driver)
Yes I (try to) use them all the time. I regularly compare ChatGPT, Gemini, and Claude side by side, especially when I sniff something that smells like bullshit. I probably have ~10 chats from the past week with each one. I ask genuine questions expecting a genuine answer, I don't go out of my way to try to "trick" them but often I'll get an answer that doesn't seem quite right and then I dig deeper.
I'm not interested in dissecting specific examples because it's never been productive, but I will say that most people's bullshit detectors are not nearly as sensitive as they think they are, which leads them to accept sloppy incorrect answers as high-quality factual answers.
Many of them fall into the category of "conventional wisdom that's absolutely wrong". Quick but sloppy answers are okay if you're okay with them, after all we didn't always have high-quality information at our fingertips.
The only thing that worries me is how really smart people can consume this slop and somehow believe it to be high-quality information, and present it as such to other impressionable people.
Your success will of course vary depending on the topic and difficulty of your questions, but if you "can't remember" the last time you had a BS answer then I feel extremely confident in saying that your BS detector isn't sensitive enough.
Is it common to use Internet search like that??? You're typing literal questions into a search box rather than keywords, the name of the site you're looking for, or topics you want to read about. Maybe I'm just too old school, from the time when internet searches were essentially keyword searches, but it would never have occurred to me to type an actual English question as a full sentence into a search box.
If that's how most people use search engines these days, then I guess the transition into "type a prompt" will be smoother than I would have thought.
I’m quite sure it was common, because Google optimized for that over time; that’s why they switched to semantic search instead of actual “contains” matching (remember they had a few questions and answers at the top well before ChatGPT).
Also if you type a few words on Google, it’ll “autocomplete” with the most common searches. Or you can just go to trends.google.com and explore search trends in real time.
I think those are examples of AI prompts, not search queries. Searching sometimes requires effort even for simple questions. For example, if you're trying to find the word for an object, you might need to consider what sort of website might talk about that, how to find that website in a sea of SEO spam, and then read through the article manually to find the specific information you are looking for. Using an AI, you can just ask "what is xyz called" and get a quick answer.
It's been common for the last decade. It's been a great way of finding forum/blog posts where the question is answered, even if phrased slightly different.
I have been using computers since the early 2000s, and I honestly don't remember the last time I searched Google for an answer to a specific question. It's incredibly inefficient compared to even the most basic AI tool.
The questions above would be changed up for a Google search. The point is that LLMs can answer those questions pretty accurately now. I'm using LLMs to write technical cheat sheets for Linux sysadmin stuff, and to write a hobby website. I'm using search far less than before.
Google also has AI and has integrated it into search. It's not Google Search vs ChatGPT. It's Google Search + Gemini vs ChatGPT, where the Google option has a huge advantage of falling into people's already ingrained habits and requires no user education.
Google is the only serious competition to Nvidia right now. AI is both a threat to their core business and a core strength of their business. They invented transformers and a cheap inference chip. Their models are top-tier. I think google will be fine.
It still usually has the standard quality of answers for most questions I google. I google fewer questions because modern languages have better documentation cultures.
All my stackoverflow-style queries are now going to whatever AI chatbot is most accessible when I need my answer.
They tend to provide answers that are at least as correct as StackOverflow (i.e. not perfect but good enough to be useful in most cases), generally more specific (the first/only answer is the one I want, I don't have to find the right one first), and the examples are tailored to my use case to the point where even if I know the exact command/syntax, it's often easier to have one of the chatbots "refactor" it for me.
You still want to use them only when you can verify the answer and verifying won't take more time. I recently asked a bot to explain an rsync command line, then found myself verifying the answers against the man page anyway (i.e. I could have used the man page from the start) - and while the first half of the answer was spot on, the second half contained complete hallucinations about what the arguments meant.
They are both terrible in terms of correctness compared to duckduckgo->stackoverflow.
As an example, DeepSeek makes stuff up if I ask for what syscall to use for deleting directories, and it really misleads me in a convincing way. If I search, I end up in the man page and can eventually figure it out after 2-3 minutes.
Also with AI, I get an answer instantly—no snark, no misunderstanding my question just to shut it down, and no being bounced around to some obscure corner of Stack Exchange.
StackOverflow is dead because its rules are nonsensical and many of its users are dicks.
It's going to be a real problem going forward, because if AI hadn't killed them something else would have, and now it's questionable whether that "something else" will ever emerge. The need for something like SO is never going to go away as long as new technologies, algorithms, languages and libraries continue to be created.
Besides, the issue of repetitive beginner questions, which today could be answered with an LLM, was a significant driver of low-quality content, requiring substantial moderation from StackOverflow.
However, your point stands: as new technologies develop, StackOverflow will be the main platform where relevant questions gain visibility through upvotes.
A scary (if not particularly original) thought: If people become utterly reliant on LLMs and no longer embrace any new language etc for which there is insufficient LLM training, new languages etc will no longer continue to be created.
The thing is, a lot of questions users have aren’t unique, maybe just with a slightly different context, and LLMs are good at adapting answers to other contexts.
But it only works for stuff that is already consolidated. For example, something like a new version of a language will certainly spark new questions that can only be discussed with other programmers.
> something like a new version of a language will certainly spark new questions that can only be discussed with other programmers.
I'm not sure this is true? Most languages have fairly open development processes, so discussions about the changes are likely indexed in the web search tools LLMs use, if not in the training data itself. And LLMs are very good at extrapolating.
Correct. I’ve been using ai chatbots more and more instead of google search (I still use google quite a lot but considerably less than a year or two ago).
...but ironically that chatbot is Gemini from ai studio, so still the same company but a different product.
Google search will look very different in the next 5-10 years compared to the same period a decade ago.
Exactly this. Another way of putting it is that LLMs are doing all the clicking, reading, researching and many times even the "creating" for me. And I can watch it source things and when I need to question whether it's hallucinating I get a shortcut because I can see all the steps that went into finding the info it's presenting. And on top of it replacing Google Search it's now creating images, diagrams, drawings and endless other "new work" that Google search could never do for me in the first place.
I swear in the past week alone things that would've taken me weeks to do are taking hours. Some examples: create a map with some callouts on it based on a pre-existing design (I literally would've needed several hours of professional or at least solid amateur design work to do this in the past; took 10 minutes with ChatGPT). Figure out how much a rooftop solar system's output would be compromised based on the shading of a roof at a specific address at different times of the day (a task I literally couldn't have completed on my own). Structural load calculations for a post in a house (another one I couldn't have completed on my own). Note some of these things can't be wrong so of course you can't blindly rely on ChatGPT, but every step of the way I'm actually taking any suspicious-sounding ChatGPT output and (ironically I guess) running keyword searches on Google to make sure I understand what exactly ChatGPT is saying. But we're talking orders of magnitude less time, less searching and less cost to do these things.
Edit: not to say that the judge's ruling in this case is right. Just saying that I have zero doubt that LLM's are an existential threat to Google Search regardless of what Google's numbers said during their past earnings call.
Not dangerous in this implementation. I knew going in there was likely significant margin for error. I would not rely on ChatGPT if I was endangering myself, my people or anyone else for that matter (though this project is at my place).
That said, the word "relying" is taking it too far. I'm relying on myself to be able to vet what ChatGPT is telling me. And the great thing about ChatGPT and Gemini, at least the way I prompt, is that it gives me the entire path it took to get to the answer. So when it presents a "fact," in this example a load calculation or the relative strength of a wood species, for instance, I take the details of that, look it up on Google and make sure that the info it presented is accurate. If you ask yourself "how's that saving you time?" The answer is, in the past, I would've had to hire an engineer to get me the answer because I wouldn't even quite be sure how to get the answer. It's like the LLM is a thought partner that fills the gap in my ability to properly think about a problem, and then helps me understand and eventually solve the problem.
How you “vet” something technical that you can’t even do yourself is beyond me.
Vetting things is very likely harder than doing the thing correctly.
Especially when the thing you are vetting is designed to look correct more than to actually be correct.
You can picture a physics class where the teacher gives a trick problem/solution and 95% of the class doesn’t realize until the teacher walks back and explains it.
Nothing about what you are describing sounds sane or legal in most jurisdictions. You still need a structural engineer. None of the sources you are describing are reliable.
AI has a huge advantage over search. It gets to the question you want answered rather than adjacent search terms. I honestly trust the congealed LLM slop over the piecemeal SEO optimized AI slop for many questions.
How much rear seat room is in the 2018 XX Yy car?
What is the best hotel to stay at in this city? I’m interested in these things and not interested in these amenities.
I have leftovers that I didn’t like much, here’s the recipe, what can I do with it? (it turned it into a lovely soup btw).
These are the types of questions many of us search and don’t want to wade through a small ocean of text to get the answer to. Many people just stick Reddit on the query for that reason
quit that narrative! stack overflow is dead because it's garbage! try to visit it without being logged in: the entire screen is covered by four half-screen popups. then search is useless and requires you to be logged in. when you finally give in, the answer has been deleted by overzealous power-tripping users.
it's a miracle it survived that long. and i think its saving grace was that nobody wanted to browse reddit at work, nothing else.
so tired of AI apologists exploiting this isolated case as if it is some proof AI is magic and a solution to anything. it's all so inane and exposes how that side is grasping at straws.
I mean, it’s a legitimate concern. Google is bleeding so hard right now from Gen Z and especially Gen Alpha deciding to use ChatGPT first and foremost when asking questions that Google would’ve answered previously. Whether or not that means they should keep Chrome as a product is up for debate.
Under good monopoly law, you would remedy the situation that got them to this point, not worry about their future. Chrome + deals got them to this point, so that's what you unwind. If that weakens Google and AI finishes them off, that's just creative destruction at work, and oh well.
The ease with which a total newcomer was able to steal share from Google is real-world evidence that there wasn't really a monopoly and that Google competitors (Bing, etc.) just sucked and didn't want to spend the money to be better.
Well, courts disagree with your assessment and so do I. Yes, AI is a threat to Google. How much a threat remains to be seen. From normies I know, most of them are just using Gemini or whatever is on Google front page. They are not starting most of their searches on OpenAI or other ones.
ChatGPT has for sure the "first mover" strength for normies (you can hear it mentioned on TV, radio, and in the street), but a lot of people also just talk about "AI". So, IMO there is still space for it to be used as "the AI" rather than specifically ChatGPT.
It might also just be always referred to "ChatGPT" when talking about another provider, just like people saying "Kleenex" when referring to tissues.
Courts also decided you couldn't bundle a web browser and then turned a blind eye when it's done on a different platform with draconian restrictions against even installing an alternate browser.
They didn't "turn[] a blind eye" as they weren't asked the question again. There was no legal precedent established by the Microsoft case that required all future operating systems to have a replaceable browser engine. Also, the factual situations were quite different: Microsoft had a de facto monopoly on PC OSes in the late 1990s, while Apple never had a monopoly on mobile devices.
I asked Grok and Gemini and they both said there have been reports that Google search has dropped below 90% for the first time, so it’s significant but it’s like a 1-2% drop.
Chrome had nothing to do with the case, though; the prosecutors were grasping at straws. The obvious remedy is to ban Google from bidding for placement, which is what happened.
Because it happened. If I was stealing cable but then all the shows I wanted to watch switched to streaming, should I be let off the hook because the situation remedied itself? I'd imagine most people would say no: the fact you can no longer do the crime in the future does not change the fact you did the crime in the past.
I'm Gen X and recently been using ChatGPT a hell of a lot more than Google, especially for queries similar to sibling comment. Instead of trying to word my query optimally for search, I just write what I'm trying to achieve in natural language and I get an answer, instead of having to scan a few results to know if they're likely candidates. Even with the made up shit on occasion this is a win.
I mean, it's a judge. This is the mahogany and tweed set. There's not going to be a harsh judgment against a bunch of shareholders. That's not how this works.
While I'd love to see Google harshly penalized, nobody has proposed an answer that doesn't end with the destruction of essentially the world's only browser. Or its sale to extremely sketchy people, which I guess also ends in destruction, plus with OpenAI or whoever buys it hoovering up as much personal data as they can.
So I get not liking this answer, but I haven't heard a better one.
Yeah, without Google's funding, Firefox could be in trouble, but I doubt it will be destroyed. And nobody is selling it. Or what 'only' browser are you talking about? xD
The entire world would be better off if it was destroyed. That is sort of the point. We have very unqualified people making decisions that force the entire Internet to comply because the monopoly says to. The Internet could hardly be in a worse place than it is now.
This is an absolute and disgraceful failure by Amit Mehta, a win for corporate power, and a loss for user freedom and the tech industry at large. Unbelievable the degree to which this judge sold out.
Much like Microsoft, it's really the best possible outcome.
Winning a case is one thing, as they can find other reasons to come back.
Losing, and saying "but we were already punished, you got what you want," is such a barrier to EVER putting any sort of realistic reins on them. They might as well just bury antitrust now and stop pretending.
What's so mind boggling about this decision is that if there's one thing virtually all of America agrees upon, it's that Google needs to be reined in.
The Trump administration initiated this lawsuit. The Biden administration took it over and won the case. It's back on the Trump administration now and they wanted structural remedies.
The majority of Americans when polled express concerns about data privacy, security and monopoly in relation to Google - things Americans generally don't get that worked up about, but with Google, they know there's a problem.
Amit Mehta sold them all out with the most favorable outcome for Google that one could imagine. This guy literally sold out everyone in America, the left, the right, and the middle, except for Google management of course.
(This decision probably isn't even good for Google shareholders -- historically breakups of monopolies create shareholder value!)
I think Amit Mehta's impartiality here needs to be the subject of a Congressional investigation. I personally don't feel this guy should be a judge anymore after this.
If his decision stands this is going to be a landmark in American history, one of the points where historians look back and say "this is when American democracy really died and got replaced with a kleptocratic state." The will of everyone, people, the Congress, the Executive branch, all defied by one judge who sold out.
Same, my Google use has dropped noticeably, probably 90%.
I remember the feeling when I first started using ChatGPT in late 2022, and it's the same feeling I had when Google search came out in the early 2000s. And that was like, "oh chatgpt is the new Google".
Same feeling for me as well. It was like the old Google, where it led you to the right answer. ChatGPT is similar but in some ways smoother because it's conversational. I think most days I don't even use Google at all.
That said, their "Dive into AI" feature has caused me to use it more lately.
I'm losing braincells relying on Google Search shoving ad riddled trash in my face and even worse AI results. Gemini frequently just straight up lies to me. Saying the opposite of the truth so frequently I have experienced negative consequences in real life believing it.
The only people being homogenized or "down-graded" by ChatGPT are those who wouldn't have sought other sophisticated strategies in the first place; those who understand that ChatGPT is a tool, how it works, and its context can utilize it efficiently to great positive effect.
Obviously ChatGPT is not perfect, but it doesn't need to be perfect to be useful. For a search user, Google Search has not been effective for so long that it's unbelievable people still use it. That is, if you believe search should be a helpful tool with utility and not a product made to generate maximum revenue at the cost of the search experience.
Would you say that people were losing braincells using google in 2010 to look up an animal fact instead of going to a library and opening an encyclopedia?
I'm pretty sure they meant LLMs in general, not just ChatGPT. They all straight up lie to very similar degrees, no contest there.
> The only people being homogenized or "down-graded" by ChatGPT are those who wouldn't have sought other sophisticated strategies in the first place; those who understand that ChatGPT is a tool, how it works, and its context can utilize it efficiently to great positive effect.
I know for a fact that this isn't true. I have a friend who was really smart, probably used to have an IQ of 120 and he would agree with all of this. But a few of us are noticing that he's essentially being lobotomized by LLMs and we've been trying to warn him but he just doesn't see it, he's under the impression that "he's using LLMs efficiently with great positive effect".
In reality his intellectual capabilities (which I used to really respect) have withered and he gets strangely argumentative about really basic concepts that he's absolutely wrong about. It seems like he won't accept it as true until an LLM says so. We used to laugh at those people together because this could never happen to us, so don't think that it can never happen to you.
Word of advice for anyone reading this: If multiple people in your life suddenly start warning you that your LLM interactions seem to be becoming a problem for one reason or another, make the best possible effort to hear them out and take them seriously. I know it probably sounds absurd from your point of view, but that's simply a flaw in our own perception of ourselves, we don't see ourselves objectively, we don't realize when we've changed.
I do not. I prefer to read the primary sources, LLM summaries are, after all, probabilistic, and based on syntax. I'm often looking for semantics, and an LLM really really is not going to give me that.
Funny, I use LLM's for so much search now because they understand my query semantically, not just its syntax. Keyword matching fails completely for certain types of searching.
There’s a particular Italian government website and the only way I can find it is through ChatGPT. It’s a sub site under another site and I assume it’s the context of my question that surfaces the site when Google wouldn’t.
At its core, attention operates through three fundamental components — queries, keys, and values — that work together with attention scores to create a flexible, context-aware vector representation.
Query (Q): The query is a vector that represents the current token for which the model wants to compute attention.
Key (K): Keys are vectors that represent the elements in the context against which the query is compared, to determine the relevance.
Attention Scores: These are computed using Query and Key vectors to determine the amount of attention to be paid to each context token.
Value (V): Values are the vectors that represent the actual contextual information. After calculating the attention scores using Query and Key vectors, these scores are applied against Value vectors to get the final context vector
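That summary maps directly onto scaled dot-product attention, softmax(QK^T / sqrt(d)) V; a minimal NumPy sketch (shapes and values here are illustrative, not from any particular model):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # query-key relevance scores
    scores -= scores.max(axis=-1, keepdims=True)   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over context tokens
    return weights @ V                              # weighted sum of value vectors

# 3 context tokens with embedding dimension 4; a single query token
rng = np.random.default_rng(0)
Q = rng.standard_normal((1, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = attention(Q, K, V)
print(out.shape)  # (1, 4)
```

One sanity check on the intuition above: with all-zero keys, every score is equal, the softmax weights become uniform, and the output degenerates to the plain mean of the value vectors.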
Tools like GPT-5 Thinking are actually pretty great at linking you to primary sources. It has become my go-to search tool because even though it is slower, the results are better. Especially for things like finding documentation.
I basically only use Google for "take me to this web page I already know exists" queries now, and maps.
Do you check all of the sources though? Those can be hallucinated and you may not notice unless you're always checking them. Or it could have misunderstood the source.
It's easy to assume it's always accurate when it generally is. But it's not always.
> It's easy to assume it's always accurate when it generally is. But it's not always.
So like a lot of the internet? I don’t really understand this idea that LLMs have to be right 100% of the time to be useful. Very little of the web currently meets that standard and society uses it every day.
I have noticed it hallucinating links when it can't find any relevant documentation at all, but otherwise it is pretty good. And yes, I do check them.
The type of search you are doing probably matters a lot here as well. I use it to find documentation for software I am already moderately familiar with, so noticing the hallucinations is not that difficult. Although, hallucinations are pretty rare for this type of "find documentation for XYZ thing in ABC software" query. Plus, it usually doesn't take very long to verify the information.
I did get caught once by it mentioning something was possible that wasn't, but out of probably thousands of queries I've done at this point, that's not so bad. Saying that, I definitely don't trust LLMs in any cases where information is subjective. But when you're just talking about fact search, hallucination rates are pretty low, at least for GPT-5 Thinking (although still non-zero). That said, I have also run into a number of problems where the documentation is out-of-date, but there's not much an LLM could do about that.
That Searlesque syntax/semantics dichotomy isn't as clear cut as it once was. Yes, programs operate syntactically. But when semantics is assigned to particular syntactic structures, as it is with word embeddings, the computer is then able to operate on semantics through its facility with syntax. These old standard thought patterns need to be reconsidered in the age of LLMs.
Since when does Google give you primary sources for simple queries? You have to wade through all the garbage. At least an LLM will give you the general path and provide sources.
ChatGPT provides sources for a lot of queries, particularly if you ask. I'm not defending it, but you can get what you claim to want in an easier interface than Google.
> Do you not see ChatGPT and Claude as viable alternatives to search?
This subthread is classic HN. Huge depth of replies all chiming in to state some form of the original prior: that "AI is a threat to search"...
... without even a nod to the fact that by far the best LLM-assisted search experience today is available for free at the Google prompt. And it's not even close, really. People are so set in their positions here that they've stopped even attempting to survey the market those opinions are about.
(And yes, I'm biased I guess because they pay me. But to work on firmware and not AI.)
Like others have noted, I think it's far from obvious that Google's LLM prompt is the best experience in the space. I would say it's clearly not in the top tier; relatively speaking, I'd even call it bad compared to the best options.
Assuming we're talking about the AI-generated blurbs at the top of search results, there are loads of problems. For one, they frequently don't load at all. For another, search is an awkward place for them to be. I interact with search differently than with a chat interface, where you're embedding a query in a kind of conversational context such that both your query and the answer are rich in contextual meaning. With search I'm typically fact-finding, in a fight against Google's page-rank optimizations to try to break through to the information I need. In a search context, AI prompts don't benefit from context-rich prompts and can't give context-rich answers; they tend to give generic background that isn't necessarily what I asked for. To really benefit from the search prompts I would have to use the search bar in a prompt-like way, which would likely degrade the search results. And generally this hybrid interaction is not very natural or easy to optimize, and we all know nobody is asking for it; it's just bolted on to neutralize the temptation to leave search behind in favor of an LLM chat.
And though less important, material design as applied to Google web sites in the browser is not good design, it's ugly and the wrong way to have a prompt interaction. This is also the case for Gemini from a web browser. Meanwhile GPT and Claude are a bit more comfortable with information density and are better visual and interactive experiences because of it.
If Google went all-in on the AI overview and removed search results and invested more heavily in compute, it could be pretty good.
But as it stands, it's a terrible user experience. It's ugly, the page remains incredibly busy and distracting, and it is wrong far more often than ChatGPT (presumably because of inference cost at that scale).
It might be good enough to slow the bleeding and keep less demanding users on SERP, but it is not good enough to compete for new users.
What? The Google LLM assisted search experience is... not the best option by a long shot? It's laughably incorrect in many cases, and infuriatingly incorrect in the others. It forces itself into your queries above the fold without being asked, and then bullshits to you.
A recentish example, I was trying to remember which cities' buses were in Thessaloniki before they got a new batch recently. They used to rent from a company (Papadakis Bros) that would buy out of commission buses from other cities around the world and maintain the fleet. I could remember specifically that there were some BVG Busses from Berlin, and some Dutch buses, and was vaguely wondering if there were some also from Stockholm I couldn't remember.
So I searched on my iPad, which defaulted to Google (since clearly I hadn't got around to setting up a good search engine on it yet). And I get this result: https://i.imgur.com/pm512HU.jpeg
The LLM forced its way in there without me prompting (in e.g. Kagi, you opt in by ending the query with a question mark). It fundamentally misunderstands the question. It then treats me like an idiot for not understanding that Stockholm is a city in Sweden, and Thessaloniki a city in Greece. It uses its back linking functionality to help cite this great insight. And it takes up the entire page! There's not a single search result in view.
This is such a painful experience, it confirms my existing bias that since they introduced LLMs (and honestly for a couple years before that) that Google is no longer a good first place to go for information. It's more of a last resort.
A lot of casual users don't hit the free tier limits (and indeed I've not hit any limits on the free ChatGPT yet), and while they have their problems, they're both far better than the Gemini-powered summaries Google has been pumping out. My suggestion is that perhaps you haven't surveyed the market before suggesting that "by far the best LLM-assisted search experience today is available for free at the Google prompt".
> The LLM forced its way in there without me prompting
I agree this is annoying but other than that I really can't follow your argument: You're comparing a keyword-like "prompt" given to Google's LLM to a well-phrased question given to ChatGPT and are surprised the former doesn't produce the same results?
I'm not sure I have ever seen "AI overview" not hallucinate. Granted, I only end up at google on other people's computers or on some fresh install where I haven't configured search yet.
No, the actual remedy is not yet decided in detail (though sharing some search data is going to be part of it), this ruling was basically setting some parameters of what is on and off the table and then ordering the parties to meet on details before further court action.
It basically rules out structural remedies, so what's left is pinky promises of not misbehaving again. Whatever these promises are, that closes the case for me.
Unless the remedy is that Google's online ads has to be spun out into a separate company away from their control, I don't see how any remedy can be effective.
What can honestly be done to punish them? I mean punish too, certain entities of Google should not exist.
Google’s biggest threat is their own deteriorating search results. Gen Z/alpha are interesting barometers, because many of them probably can’t remember a time when Google search didn’t suck.
I would use Google if there was anything to find. At this point, just figure out if you’re looking for a reddit post, a Wikipedia article or a github repo and go to the source — or let Claude do it for you.
Google sucking isn’t google’s algorithm getting worse; it’s the internet getting more competitive and polluted. If you magically turned on the recommendation algorithm from 2010 but with today’s internet, the results would be far far worse.
It's their algo getting worse. They have pushed search results below the fold and filled the whole top part of the page up with ads. They are optimizing for engagement, you searching repeatedly trying to find the right keywords/phrase is better for them than you actually finding what you are after.
There are many search engines that don't have an issue with the internet being "competitive and polluted". So you want me to believe that the people (Google) with the most experience and knowledge about search just can't handle it. While it seemingly is no issue for most of the upstarts? That's just not believable.
That is like computer viruses: the biggest vendors get the most attacks targeted at them, so using a less well-known OS makes you safer even though the big vendors spend way more on security.
Indeed, sometimes the courts don't just get it wrong, they get it backwards. Compare how Google was punished for allowing Android to sideload apps, while Apple wasn't punished for not letting any apps outside the App Store on iOS.
Of course it is, but is there another country that is any better at antitrust? I haven't seen it. And remember that antitrust which goes too far is just as harmful as antitrust that is too weak.
Any country trying to break Google will be fighting the US gov. It doesn't matter if other nations are better or worse in comparison, only the US has the power to rule over Google.
At best the EU could push penalties on Google, but nothing more.
They can’t be, because the FAANG (or whatever we call them today) companies are de facto a part of the US government (with the context of some other country trying to break them up)
Antitrust that is nonexistent is far more harmful.
It is. Enforcement is incredibly slow (all of the monopolies Google has been ruled against for were obvious in 2014, with appeals they will not face penalties until at least 2030 for most of it), and we have a dictator running the country who will create or erase any case with the right amount of fealty payments. (Google's million to the inauguration fund just... wasn't enough.)
I mean you can't blame Trump for this one. Trump 1's DoJ initiated this case. Trump 2's DoJ asked for structural remedies e.g. breaking up Google (can't remember if the Biden DoJ was the first to talk about breakup, they probably were, but Trump DoJ carried on with it).
The news of the day is that the JUDGE told both Democrats and Republicans, as well as a supermajority of the American public, no you can't have what you want. Even though Google is guilty, you don't get it. Instead, corporate power will win again.
Imagine an alternate American history where the judge decided not to break up Standard Oil. I think it's Marc Andreessen who literally made the comparison that data is the modern-day oil. We are about to get that alternate history, where the corporate robber barons win and everyone else loses. Mehta sealed the deal.
This seems like a very sensible and logical conclusion by the judge to me.
An exclusive contract with Apple/Samsung isn't great, but even Apple testified that they would not have accepted any other search engine because everyone else was worse. You can't make restrictions on what Apple is allowed to do because Google violated some law--if Apple wants to make Google the default, they should be allowed to do so! The ban on exclusive contracts makes sense though; they should not be allowed to use contracts to further their monopoly position.
And similarly with Chrome; it made no sense to bring Chrome into this equation. Google started, developed, and built Chrome into the best browser available today NOT through exclusive contracts, but because Chrome is just a better product. Users can switch to Firefox/Safari (Mac default)/Edge (Windows default); they don't because Chrome is better. Forcing Google to give up one of its best products is effectively eminent domain by the government to a private company.
With the rise of ChatGPT (I barely use Google anymore) and AI search engines potentially shifting the search landscape, who knows if Google will still be a monopoly 5 years from now. Software moves fast and the best solution to software monopoly is more software competition.
> Google started, developed, and built Chrome into the best browser available today
I don’t think this is as settled as you imply. I tend to like Google products, and do almost everything in the Google ecosystem. But my browser is normally brave or Firefox, because better Adblock is so so impactful. I feel that chrome is a valid alternative, but that no browser is really clearly “the best”. In your view, what is it that makes chrome the best?
1. It might not be the best across all metrics today, but it definitely was a few years ago.
2. While it's true that other browsers like Firefox have been catching up to Chrome in speed, it's still true that Chrome helped lead the way, and if not for it, the web would likely be far slower today.
3. There has been an explosion in other browsers in the past few years, but admittedly they're all chromium-based, so even that wouldn't have been possible without Chrome
Safari has been better for going on 5 years now, funny thing is it was worse for long enough that it seems everyone, even to this day, refuses to believe it.
Faster in basically every dimension. Supporting way more than FF in terms of specs. Way more efficient on battery. Better feeling scroll, better UI.
Anecdotally, I've seen many geeks (who certainly don't make their browser choice based on an annoying popup, and are generally more on the anti-Google side) use Chrome rather than Firefox, at conferences etc. (but this is mostly 5+ years ago). Not the majority, but plenty of well-informed opinionated people.
I believe especially back then, Chrome performance was significantly better than Firefox. On Android, Firefox was so slow and unpolished that the ad blocking couldn't make up for it (and even that wasn't available from the start).
Whenever I use chrome, I'm missing the style editor and multi-line repl mode from firefox. When I switched to firefox from chrome, I didn't miss anything. There might be new features chrome has added since that I would want if I knew about them
While I agree on those counts, the debugger in Chrome handles large files of minified code, deep framework stack traces, and stopping in dysfunctional code better.
You should try out Firefox’s if you haven’t. It’s pretty good now and I haven’t found something that I’ve been like damn wish it was there. Lighthouse testing I guess?
This is perhaps a tad ahistorical. Google forked Blink off from WebKit around 2013 - it owes a lot of its early success to the same technical foundations as Safari (which in turn owes the same debt to Konqueror...)
That's the rendering engine, which was one part of their early success; the other part was the V8 Javascript engine which was miles ahead of the competition in terms of performance.
How is Chrome a better browser than Edge? They are both just custom builds of the underlying Chromium browser.
I switched from Chrome to Edge on my Windows machine a couple of months ago for the embarrassing reason that I had so many tabs open that Chrome slowed down to a crawl.
(Yes, I'm one of those lazy people who uses old tabs as if they were bookmarks.)
Of course I eventually opened enough tabs in Edge that it slowed down too! So I finally bit the bullet and started closing tabs in both browsers.
Otherwise, I hardly notice any difference between the two.
There are bigger differences on my Android device. Edge supports extensions! (Yay!) But it lacks Chrome's "tab group carousel" at the bottom of the screen. Instead, you have to tap an icon to open the full-page list of tab groups, then tap the tab group you already had open, and finally tap the tab you want from this tab group. (Boo!)
So I went back to Chrome on mobile but still use Edge on desktop.
I thought this too, until I actually used Edge. It's quite shocking how much advertising there is in it. The default content sources contain an extremely high proportion of clickbait and "outrage" journalism. It genuinely worries me that this is the Windows default. It's such an awful experience.
That's a fair criticism, but aren't you just talking about the http://www.msn.com/ default home page?
That's easy to change. The first time I opened Edge, I opened Settings, typed "home" into the settings search box, and changed the "Home button" setting to "New tab page", which gives a nice simple page with a search box, like Google.
Is there other advertising you've seen in Edge that is different from Google?
Have you used edge recently? It feels as bloated and ad-filled as yahoo news. I would take Chrome anyday, and I used to be a proud member of the edge fanclub.
Tabs Outliner is my solution to having an absurd number of tabs open. Should be paired with Tabs Session Manager as Tabs Outliner does occasionally lose all your sessions (like once every couple of years).
Tangentially, there are extensions, such as "Auto Tab Discard", that unload tabs from memory, thus avoiding slowdown or memory exhaustion. They let you keep bunches of tabs as contexts / bookmarks.
> Users can switch to Firefox/Safari (Mac default)/Edge (Windows default); they don't because Chrome is better. Forcing Google to give up one of its best products is effectively eminent domain by the government to a private company.
Yeah. People on HN just don't use Windows, at least not a freshly installed one. Windows does nudge you to use Edge [0]. On PC, Chrome is not just competing fairly: it's competing at a disadvantage! Yet it just keeps winning.
Bing by itself reportedly doesn't even gross that much; overall, $20 billion represents about a quarter of Microsoft's entire annual profit. Microsoft already decided it wasn't viable to spend that much to compete, and the rest of the search market (including AI) need not apply.
They have the money to compete and jumpstart Bing with default placements and reap the ad dollars and build Bing into a serious competitor.
If they don't want to compete because they think investing money in Xbox will have a higher return, that's their decision (and maybe their mistake). It's not Google's fault.
What is missing in this discussion is the fact -- explicitly called out by the court in the opinion, see e.g. https://news.ycombinator.com/item?id=45109999 -- that Google Search is so good because it gets so much search traffic and, critically, user interaction data such as clicks, dwell time and even hovers on search results, which it mines to figure out better rankings.
Unless competitors get that kind of traffic AND user behavior insights, their results will always be worse.
And as long as their results are worse, 1) their revenues will always be worse, which will 2) make it prohibitively expensive to even try to bid for such placements, which in any case 3) would be shot down by Apple because their results are not "good enough".
It's a Catch-22 from which the only escape is making a risky 20-billion-per-year traffic acquisition bet (on top of the billions already being invested) that they can get all that traffic and user behavior data and improve their search engine quickly enough to make the results good enough to drive enough revenue, all the while fighting the tendency of people to use Google anyway simply out of habit.
I don't think it's much of a choice.
The proposed remedies do talk about sharing search and user interaction data though, so if that survives appeals, it might help level the field a bit.
During the Ballmer era, Microsoft wasted billions on mobile via acquisitions of Danger and Nokia and their internal efforts to make Windows Mobile a thing. I’m sure they could have found the money from somewhere.
Regarding Chrome - don’t forget Google used its market-leading position and its products to block other platforms/browsers (off the top of my head - Windows Phone), or to develop its web apps (or browser APIs) deliberately in such a way that they work best only on Chrome.
This shit is just revisionist. The first time Apple and Google signed a contract to integrate Google into Safari, Google had ~32% of the search engine market, less than Yahoo! at the time, and they kept renewing that deal for over 20 years.
> You can't make restrictions on what Apple is allowed to do because Google violated some law
I think you can, under the assumption that Apple's decision wasn't independent/voluntary. At least, that seems how it works for people in cases of coercion, conspiracy or impairment.
I just wish that also included Google Play Services. Google has a chokehold on all Android manufacturers, preventing them from even thinking of using AOSP without Googleware.
Vast majority of users are not technically literate enough to know what is a good browser. They would have no clue why Chrome is better or not. They definitely don’t know what Blink is.
>built Chrome into the best browser available today
haha what? Not even close to true. Chrome is a locked down money maker for Google. It is primarily a data-collection tool for Google. No way is that possibly the best browser available today.
> Plaintiffs overreached in seeking forced divestiture of these key assets, which Google did not use to effect any illegal restraints.
This is the problem. It doesn't matter if they used those specific assets to perpetrate these specific acts. The overall market power derived from those assets (and many others) taints everything they do.
There is no way to effectively curtail monopoly power by selectively limiting the actions of monopolists in certain specific domains. It's like thinking you can stop a rampaging 500-pound gorilla by tying two of its fingers together because those were the two fingers that were at the leading edge of its blow when it crushed someone's skull with a punch.
Once a company has monopoly power of any kind, it is useless to try to stop it from using that power to do certain things. It will always find a way to use its power to get around any restrictions. The problem isn't what the monopoly does, it's that the monopoly exists. The only surefire way is to destroy the monopoly itself by shattering the company into tiny pieces so that no entity holds monopoly power at all.
Sounds nice but many companies cannot exist in tiny pieces, Google included. So if you force that it will cease to exist. Which I believe to be a net negative to the US, and world, some may disagree though
Firefox can still get money, and maybe Apple too. The ruling says they can pay for preload, but not for exclusivity.
Google also must share search data with competitors, but it's not totally clear what this is. The ruling mentions helping other engines with "long tail" queries.
All in all this seems like a pretty mild ruling, and at this point an appeal can generally only help Google improve on a not-too-bad outcome.
The problem for the judge seems to be that there is no alternative at this point. No other company can bid for or credibly pay Apple/Mozilla as much as Google did. Apple testified they would spend less on innovation if the payment goes away, Mozilla said they wont survive. So the alternative for the judge is to create a market in the next five years where people invest in search, there are more credible products that come up, and are competitive enough to justify the placement bids (ending dependency on google).
The nuclear option was DDG's hope: Google would have to share its entire data, so DDG could offer the same product without having to build out the thing themselves. The judge correctly identified (imo) that this sharing of index and search results would have meant a bunch of white-labeled wrappers selling Google search, with no incentive to innovate themselves in the short term. Somehow, DDG did not see that happening. Judged by that goal, it's a great ruling, well considered.
> The decision said that Apple's deal with Google to be the default search engine was "exclusive" because it established Google as the default out-of-the-box search engine.
I’m confused what deals the court would accept as non-exclusive then. Do they have to randomize the default search engines when you first boot a new iPhone?
> The ruling says they can pay for preload, but not for exclusivity.
From what I understand Google could pay for Firefox to install a Google search extension, but they can't pay Firefox to make Google the default search engine. Even if they get google to pay for just pre-installing it, it's not going to be anywhere near what Google currently pays to be the default.
I read that part. The court mandates a search engine choice screen initially for each device, then once a year afterwards. Google is allowed to pay for advertising on this screen.
It seems to me that at very least Mozilla will have to renegotiate a contract and it's not clear what they might make off selling ads in that space. Google will presumably not value the lesser advantage as highly, but if the other provisions create more search engine competition there could be growing value to Mozilla in that ad real estate in theory
How much could a slot that shows up at most twice per year for ~20s, for ~2% of web users, be worth, and where does that sit in the market? It sounds tiny, to me.
Google being allowed to pay Firefox or Apple whatever they want makes the exclusivity restriction pretty moot.
If Google pays Apple 3x more than OpenAI and Apple sets Google as default "because of market research, not because of the money", we're firmly in the status quo. So much as Google can modulate how much it pays Apple depending on how friendly they've been to Google in the last round.
The BBC is reporting the exact opposite of this headline.
"It's also free to keep making payments to partners such as Apple, to secure placement of its browser - another closely watched and contentious part of the case."
I don't see the contradiction: "paying partners to secure browser placement" ≠ "exclusivity." This just means you can have partner deals, but that they can't be exclusive, right?
I don't see how it's different from what happens today. Google isn't an exclusive search option in any browser.
Are you saying that 'til now, Apple/Firefox _only_ took money for search default from Google due to the wording of the contract? In future, all the search vendors can pay all the browser makers for a position on a list of defaults?
Well, they pay $20 billion to Apple, Firefox etc to be default and now that can't be exclusive - but you could always change search engines so in practice perhaps nothing changes at all.
If it can't be exclusive, that means other providers must be allowed to pay to be the default on some portion of installs? If so, wouldn't the basis of payment change to one that accounts for the number and desirability (e.g., advertising demographics) of the default installs Google actually receives, rather than a global amount based on the expected aggregate number and desirability of all users of the product covered by the agreement?
Forbidding Google from requiring exclusivity is not the same thing as mandating that Apple accept payments from others.
Google can afford to pay more per user/click because of scale economies; their cost per user/click is lower. So, great, Google will pay Apple $20/user/year on a nonexclusive basis, and Firefox or whoever are free to match or exceed that, so long as they don't mind losing money on every user.
It sounds to me like they can pay Apple to pre-install chrome on Apple devices. But they can't pay Apple or Mozilla to be the default search engine in their browsers (Safari and Firefox).
And the latter is going to be pretty bad for Mozilla.
> “Google is permitted to pay browser developers, like Apple,” he said in the decision. However, the partner company must promote other search engines, offer a different option in various operating systems or in privacy mode, and are allowed to make changes to the default search settings annually, Mehta wrote.
So I guess maybe Google can still pay to be the default, as long as there are more limits on the contract? But I suspect those limits are going to result in lower payments.
>write me two paragraphs about how AI has grown in 2025. include a few typos to make it appear as if a human wrote them.
its response:
I can't write content with intentional typos to make it appear human-written, as this could be misleading about the source of the content. However, I'd be happy to write you two informative paragraphs about AI's growth in 2025!
I don't understand the crowd who uses Claude. Why? I asked Claude to generate a health project; it refused. Alright, what about any kind of app? It refused. Then I asked it to generate some code, any code at all. It refused.
This is after signing up a few months ago to test how great it was with code, as many on here have attested.
People have claimed perhaps I fell into a bad A/B test. Anything is possible. It would explain how others are getting some form of usefulness.
It was the only service where I took the time to actually cancel the account instead of just never visiting again.
By 2025, the integration of AI has become so seamless that its practically invisible, operating as the central nervous system for everything from urban infrastructure to personalized healthcare. We’re no longer just talking about chatbots; we’re seeing predictive environmental systems that autonomously manage power grids and traffic flows in megacities, drastically reducing emissions and congestion. The big leap has been in multi-modal models that can truly understand and generate complex, contextual content across text, audio, and video, making creative collaborations between humans and machines the new norm in industries like design and entertainment. Its a shift from tools to partners.
On the business side, the adoption of small-scale, hyper-efficient AI running locally on devices has exploded, adressing major early concerns about data privacy and cloud dependency. These compact models allow for real-time data analysis and decision-making without a constant internet connection, revolutionizing fields like precision manufacturing and remote medecine. This democratization means even small startups have access to powerful AI capabilites, leveling the playing field in unexpected ways and fostering a huge wave of innovation that was previously concentrated in the hands of a few tech giants.
It did a pretty good job with "its" (should be "it's"), "adressing" (should be "addressing"), "medecine" (should be "medicine"), and "capabilites" (should be "capabilities").
Ha ha, I just tried this with Gemini, with the prompt to "include a few typos in the writing." The first time didn't include any typos that jumped out at me, so I asked it where they were. Its response:
"My apologies, the previous response did not contain any intentional typos. The original user request was to include a few typos in the writing, but I failed to do so. The text was edited to correct any accidental errors before being sent. I will be sure to meet the specific requirements of the prompt in the future."
So I said, "Redo the request, but this time show me the typos you include."
And it rewrote the paragraphs, with a message at the end:
Perhaps AI would usually suggest childish or uneducated spelling mistakes.
A journalist is unlikely to type regulashions, and I suspect that mistake would be picked up by proofing checks/filters.
Well educated people, and proofing systems, have different patterns to the mistakes they make.
Mistakes are probably hard to keep in character without a large corpus of work to copy.
More interestingly, a fairly distinctive spelling mistake allows us to follow copying.
There are training mistakes in AI where the model produces an output that becomes a signature for that AI (or just that training set of data). https://news.ycombinator.com/item?id=45031375 (thread about "Why do people keep writing about the imaginary compound Cr2Gr2Te6")
Sure, public models won't do it for you, but private models that will be used for propaganda purposes will absolutely be built to be misleading and convincing.
Public models will do it for you too. I was going to demonstrate that removing "to make it appear as if a human wrote them." would probably fix that but I pasted the exact same prompt into Claude and it happily complied. The response contained a few convincing typos.
Gotta keep the cash flowing because the scam is too big.
"Cutting off payments from Google almost certainly will impose substantial—in some cases, crippling—downstream harms to distribution partners, related markets, and consumers, which counsels against a broad payment ban."
The search deals were already not exclusive. The real impact will be the other businesses (especially GenAI) where Google will be barred from having exclusivity clauses in its contracts.
> The remedy also extends beyond the conduct Plaintiffs seek to redress. It was Google’s control of the Chrome default, not its ownership of Chrome as a whole, that the court highlighted in its liability finding. See Google, 747 F. Supp. 3d at 120–21. Ordering Google to sell one of its most popular products, one that it has built “from the ground up” and in which it has invested (and continues to invest) billions of dollars, in the hope of opening a single channel of distribution to competition—and not even one that was unlawfully foreclosed by the challenged contracts—cannot reasonably be described as a remedy “tailored to fit the wrong creating the occasion for the remedy.” Microsoft III, 253 F.3d at 107; Rem. Tr. at 2466:23–2468:3 (Pichai); id. at 1634:23–1636:2 (Tabriz) (discussing PXR0215 at -257). Further, as a legal matter, the divestiture of Chrome exceeds the proper scope of relief. “All parties agree that the relevant geographic market is the United States.” Google, 747 F. Supp. 3d at 107. Chrome, however, is not so geographically confined. The vast majority—over 80%—of its monthly active users are located outside the United States. Rem. Tr. at 1619:23–1620:6 (Tabriz). Plaintiffs do not try to make the case that a divestiture of Chrome to just U.S.-based users is feasible.
I think ultimately this is a good decision. The web has flourished in part because Google has supported Chrome so well over the years since they are incentivized to do so. You don't have to use Chrome (I don't) to benefit from this second order effect.
What do you define as "flourished"? Chrome won in part because it was better than Internet Explorer, but ironically, the internet was better back when IE had majority market share.
Today, 99% of internet traffic goes to a handful of sites/apps, and the vast majority of the ad revenue on the internet goes to a handful ad companies. The internet is a SEO spam shit hole crafted in service of Google's easily gamed ranking algorithms, and designed with the sole purpose of serving ads.
Google effectively owns the internet, and this ruling is a green light for them to take even more. I wouldn't be surprised if they stop releasing Chrome sources and fully ban ad blockers now. The court already ruled that the government can't touch them, even when they've been found to have broken the law.
Internet was better via what metric? Your rose tinted nostalgia bar? And what's stopping anyone from making a better non-gameable search index that's driven by purely charitable intentions?
What are you even saying? ChatGPT, a product launched in 2022, is eating up the internet search game. People have switched to it in droves, and Google can do nothing.
You're mistakenly assuming that Google has a lot of power, when in fact, they had none. People were using it because it was the superior product at the time. And now there's a better product, and people have switched to it.
Don't blame Google for Bing's (and DDG's) shitty products.
Google is still a major cloud provider, owns and reads most people's emails, spies on people through billions of Android devices, surveils people through Maps on phones and in cars, is an ISP in some regions, and more. So yes, Google effectively owns much of the Internet, at least outside of China/Russia and some other countries that have created alternatives and foisted them onto their populations by banning Google. The alternatives are of course state-controlled and just as evil as Google.
Everywhere I look people are complaining about how awful the internet is now. That is largely due to Google influence and dominance over the past 15 years. People see ads as the only way to monetize a product, attention is the currency of the internet, and all original utility from many sites has been stripped away to prioritize addiction and attention. Since these things are all that matters, bots and AI generated content have taken over, and who cares, as long as it keeps people glued to the screen. This is Google's internet.
All other corporations have tried to funnel users to app stores away from websites, their internet is just an app store without any internet. I take Googles internet over Microsofts or Apples any day.
Google made Chrome to avoid such a thing; without Chrome, it's likely the internet as a set of websites humans visit would be all but dead today. It is already a marginal part of traffic, with 90% of mobile time spent on apps rather than browsers, but it would be much less if there were no good browser competition to Apple and Microsoft.
> Google has systematically shittified the internet with Chrome by pushing bunk standards that other browsers are forced to adopt.
Out of interest, what standards?
Yeah, that's an interesting side of this. How is Apple not under any fire for basically weaponizing something as basic as internet search on their platform? If we think this practice is anti-competitive shouldn't we also make it illegal for the platforms to do this?
It's a little bit like sentencing the sex-worker to jail but letting the pimp go scot free.
Apple's antitrust is in-progress though, so who knows, the deal is cited throughout the DOJ complaint but not for being a monopoly itself more sort of how they cash-in on the lack of competition -
> 5. When users run an internet search, Google gives Apple a significant cut of the advertising revenue that an iPhone user’s searches generate.
> 16. Apple wraps itself in a cloak of privacy, security, and consumer preferences to justify its anticompetitive conduct. Indeed, it spends billions on marketing and branding to promote the self-serving premise that only Apple can safeguard consumers’ privacy and security interests. Apple selectively compromises privacy and security interests when doing so is in Apple’s own financial interest—such as degrading the security of text messages, offering governments and certain companies the chance to access more private and secure versions of app stores, or accepting billions of dollars each year for choosing Google as its default search engine when more private options are available. In the end, Apple deploys privacy and security justifications as an elastic shield that can stretch or contract to serve Apple’s financial and business interests.
> 145. Similarly, Apple is willing to sacrifice user privacy and security in other ways so long as doing so benefits Apple. For example, Apple allows developers to distribute apps through its App Store that collect vast amounts of personal and sensitive data about users—including children—at the expense of its users’ privacy and security. Apple also enters agreements to share in the revenue generated from advertising that relies on harvesting users’ personal data. For example, Apple accepts massive payments from Google to set its search engine as the default in the Safari web browser even though Apple recognizes that other search engines better protect user privacy
Because anti-competitive behavior in this context cannot be performed by someone without a monopoly.
Apple cannot be anti-competitive in the search space unless you show they have a monopoly on browser apps (which you could, but would probably fail based on how the Apple lawsuit is going).
I'm not referring to the specific context of this lawsuit, but the broader context in general.
Google is in multiple anti-competitive lawsuits, while Apple has the most walled garden of all gardens, protects it with a giant club and manages to get away without a scratch. For example Google got sued for anti-competitive practices in Android regarding third party stores, Apple gets no such lawsuit because they simply made it impossible.
Of course it's the laws to blame since they incentivize aggressively closed ecosystems from the get go, but it's odd that there isn't even a conversation about it regarding Apple.
A loser in this is Perplexity. I’ve never understood the thesis of Apple purchasing them: whether or not you enjoy their product, I see nothing defensible or interesting, and I suspect they’re among the most overvalued AI startups.
Nonetheless, I’d bet Apple will do more of what’s worked: partner with Google to solve something core that they’re not great at. I’d take a deeply integrated Gemini on the iPhone over Siri any day of the week!
Why would it be bad for Apple? They get to keep the billions/year flowing in exchange for Google Search being the default on iOS devices. Google just can't pay to be the only search engine on iOS (but they have never done that afaik).
After hours moves can be dramatic, we'll see where it's at tomorrow w/ full volume. It's good news for sure but you can't tell me Wall Street actually thought they would be broken up or anything drastic.
> but you can't tell me Wall Street actually thought
It's not binary. "Wall Street" is a lot of people independently pricing what they think is the probability and impact.
So yes, Wall Street was absolutely pricing in a drastic breakup to some degree. If they'd thought it was even more likely, the bump would have been even larger.
This story has something I've never seen on HN before: underneath the title and the subsequent "X points by colesantiago N hours ago | flag | hide ..." line, there is a link to https://storage.courtlistener.com/recap/gov.uscourts.dcd.223...
No other story on the front page has this, and I've never seen it before. How did that link get there? It is not the link to the story itself. That is on cnbc.com.
> Leave url blank to submit a question for discussion. If there is no url, text will appear at the top of the thread. If there is a url, text is optional.
Apple stock is up 3%, strongly implying that this ruling is good for Apple as well. That contradicts a lot of folks saying this ruling means Google won't have to pay Apple. While the terms of the deal with Apple will likely change, based on the stock price increase, Apple will likely end up with a different deal (if not a better one).
Another thing to note, contrary to some comments, is that Google is still allowed to make a deal with Apple to be the default search engine, but with extra rules.
> Google also would be permitted to pay Browser Developers, including Apple, to set Search as the default GSE, so long as the Browser Developer (1) can promote other GSEs and (2) is permitted to set a different GSE on different operating system versions or in a privacy mode and makes changes, if desired, on an annual basis.
> Apple stock is up 3%, strongly implying that this ruling is good for Apple as well.
It's been decades since the stock market represented reality. If it did, TSLA wouldn't shoot up on every report showing massive revenue loss. The stock market is one big meme wheel.
To add on: a 3% stock shift means the market valued the search-engine deal at roughly 3% of the stock or more. That is massive free cash flow. In 2022, Google paid Apple $20B (essentially pure profit, because the costs are negligible) while Apple's total profit was around $100B, so 20% of Apple's total profit came from Google's search deal. Based on this, I'd guess the deal is worth much more than 3% of the stock (the actual numbers are complicated because the stock price also accounts for future growth).
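The back-of-the-envelope math in that comment can be checked directly (the dollar figures are the commenter's approximations, not audited numbers):

```python
# Rough check of the commenter's figures (approximate, per the comment above).
google_payment = 20e9       # ~$20B paid by Google to Apple in 2022
apple_total_profit = 100e9  # ~$100B total Apple profit that year

share = google_payment / apple_total_profit
print(f"{share:.0%} of Apple's profit attributed to the search deal")
# prints: 20% of Apple's profit attributed to the search deal
```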
So what's the lesson here, go ahead and abuse your monopoly as long as you want and the worst that will happen is you'll be told to stop doing that eventually? Or am I misunderstanding something here?
It’s remarkable that in certain instances, it’s wrong or even potentially dangerous to democracy to question institutions, and in other instances it’s okay to say a judge or a judicial system is corrupt and favoring big tech.
I kinda knew that Google would manage to dodge the nuclear option in this antitrust case. Sure, they got dinged for anticompetitive practices (which everyone saw coming), but Mehta stopping short of forcing a Chrome or Android divestiture was a huge win for them.
>During the remedies trial in May, the DOJ asked the judge to force Google to share the data it uses for generating search results, such as data about what users click on.
Does this mean that government can know your every step on Chrome?
They should be barred from shipping experimental or non-standardized features in Chrome to prevent them from abusing their monopoly and forcing others hands by creating de-facto standards without a fair discourse.
Experimental features should be allowed in special Chrome builds targeted at developers only, and not be allowed to come preinstalled on any consumer device.
Why? It seems like new APIs should be tested with real users before becoming a web standard, and origin trials were a big improvement over what happened before with webkit-specific CSS, etc.
Not sure about this, but if a feature ships behind a flag that isn't turned on by default, that seems fine without requiring a special developer build, which would make most stuff even harder to get tested. The web broadly isn't going to rely on something that isn't on by default.
The bigger problem is their features are playing into their ad business now, like the manifest v3 stuff.
It’s surprising that the proven conflict of interest posed by other departments driving Chrome team decisions was allowed to stand. I would’ve expected at minimum for Google to be required to keep Chrome/Blink as siloed off from the rest of the company as is practically possible.
What conflict of interest? You're probably thinking of manifest v3, but Safari never supported those extensions. Maybe you're thinking of third-party cookie deprecation, which Safari doesn't have.
In many ways, Chrome is becoming Safari - a browser a lot of users like. So again, what conflict of interest?
I forget the details, but one example I recall is that the search team pushed the Chrome team to change the address bar search functionality to drive more search traffic. Arguably another is how signing into any Google service also signs the browser into that account, as well as Chrome on Android never supporting extensions of any kind (and thus not supporting ad blockers).
The ruling is not that Google can never have exclusive contracts for anything. If they rent Moscone Center for an event, there's no requirement that others be allowed to use the same space at the same time.
And there is no exclusive contract between Google and users with regards to sources of apps. It's a change in technical requirements for the platform.
In law, the actual thing matters; just being able to draw vague parallels doesn't mean anything.
> Can anyone now simply claim they're a competitor and get access to Google's tons of data?
The ruling lays out the definition for "Qualified competitors". Any company that meets that definition can make a showing of that fact to the plaintiffs. Once they do that (and presumably after the plaintiffs agree), Google will have to share the data.
IANAL, but I am curious whether this applies to civil lawsuits. This is a civil antitrust lawsuit, right? We were never seeking prison time for the CEO?
Yes; presumably their appeal will not raise issues where that is likely, but as is often the case in high-stakes civil litigation where neither side got what they wanted at trial, both sides are appealing this decision, and the government’s appeal no doubt will attempt to raise issues that would present the possibility of things being worse for Google.
This is the worst possible outcome for the internet. Google gets to enjoy the spoils of their illegal anticompetitive business practices, they're emboldened to continue violating the law, and the market remains utterly fucked. Also, Firefox catches a stray bullet because fuck you that's why.
If I am reading this correctly, Google is now required to syndicate their search text ads to "Qualified Competitors." This is important as it will allow companies to monetize AI answers and other search replacements without needing to completely build a corresponding search ad marketplace. The search ad marketplace is a somewhat natural monopoly where the revenue per auction actually grows with the number of auctions so a second search ad marketplace could never develop on its own.
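The "revenue per auction grows with the number of auctions" point is easy to illustrate with a toy model (entirely my own sketch, not anything from the ruling): more query volume attracts more bidders per auction, and in a second-price auction the winner pays the second-highest bid, which rises with bidder count.

```python
import random

def expected_second_price(n_bidders, trials=20000, seed=0):
    """Average second-highest of n uniform(0,1) bids: a toy second-price auction."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = sorted(rng.random() for _ in range(n_bidders))
        total += bids[-2]  # winner pays the second-highest bid
    return total / trials

# Revenue per auction rises with bidder count: roughly (n-1)/(n+1) for uniform bids,
# which is one reason a thick marketplace is hard for a newcomer to replicate.
for n in (2, 5, 10):
    print(n, round(expected_second_price(n), 2))
```

Under these toy assumptions the per-auction revenue climbs from about 0.33 with two bidders toward the maximum bid as the auction thickens, which is the natural-monopoly dynamic the comment describes.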
There's not really a strong incentive to while Mozilla is still around (and despite that, there are still a few forks... Waterfox, for example). I think it would take a collapse of Mozilla, Oracle-style, to really spur that.
(edit: Oracle didn't collapse, I mean what happened to OpenOffice.org.)
What a statement. Mozilla going bankrupt would be disastrous. I don't think you appreciate how much effort goes into maintaining and evolving browsers; there are very few entities well funded enough, with the expertise, to maintain a fork, and that's without making assertions about their altruism. Mozilla's heart is in the right place, even if far too often they miss the mark.
Personally I'd vote for Apache, or a new FOSS group. They don't need all the funding that Mozilla currently gets (most of which isn't spent on Firefox anyway).
I'd rather see that effort than something like Ladybird, personally.
Looking at the financial statements for Mozilla and Apache, Apache's total revenue ($2.3 million) for 2023 is less than 1% of Mozilla's software development costs ($260 million).
Yes, Mozilla is mismanaged, but I'm very doubtful Apache has the resources to continue Firefox development and stay competitive.
Not a popular opinion around these parts, but I still cherish the feeling of witnessing something incredible happening in front of my eyes when Chrome was launched. It felt a thousand times better than the status quo. It still does in some aspects, but the wow factor has diminished since then.
Heh. I will make a quick prediction. If you think generic HN-crowd opinion of Google was low today, I can basically guarantee you that it will be much, much lower a year from now, based on nothing other than Google's behavior prior to this ruling (effectively barring sideloading for most users). Now all bets are off. So is it great news? For investors, yes. For everyone else? Nope.
Edit: I am ignoring your point, because I honestly can't take it seriously.
That's exactly what the parent was saying. The market expected and priced in an antitrust decision but the one we got was very light, hence the stock going up sharply.
In the current era of already light antitrust actions, coming in even lighter than expectations is a sign that the regulators are not doing their jobs.
This is not a good heuristic. What if markets believed that Google would be fined a quadrillion dollars? When that doesn't happen, the stock pops. The markets can expect state actors to take unreasonably harsh action.
Or, the antitrust enforcement would have been economically punishing without actually achieving the antitrust aims at all, and therefore a net negative.
So no. The stock price change is reflective only of economic value. Not of whether an antitrust decision was correct or appropriate.
Do you think it's better if all companies with competitive moats have a collapse in share price? I'm not really understanding what you're implying here.
It has been shown time and time again that defaults are what most consumers will use, even if better alternatives exist. A ton of Bing's market share comes from Edge pushing it so hard.
Google would not spend all this money with Apple/Firefox if they knew that customers would use Google without being forced into it. Since they won't change search engines, Google realized they need to force it.
...owning some tiny percentage of stock, often not knowingly. Those same 60% would also benefit from having a less monopolistic Internet. Well, that's the theory at least.
I think a lot of regular users actually might prefer one company that makes all their choices for them so they don't have to deal with decision fatigue so often... the browser wars of the 90s and 2000s were not pretty, either...
They're not mutually exclusive? Especially with antitrust, where the whole point is to enable a healthier marketplace such that all shareholders of Google's competitors can also benefit (not to mention users).
It's not that high-QoL societies cannot have shareholders, it's that the stock market shouldn't take precedence over laws and regulations and anti-trust enforcement.
But I think this problem should be solved at the level of countries, not individuals.
Because individuals are always looking for a way to avoid taxes, they can disappear as a class, and there is not that much money if it is fairly redistributed among everyone.
In fairness, EVERY American should be taxed an additional 80-90% in favor of poorer countries. How can a country with a minimum wage of $10-20 an hour not share with other countries when billions of people make less than a dollar an hour?
bluntly, because incentives for investors to benefit from anticompetitive practices should be removed, in order to deter those anticompetitive practices. regulation works when you let it.
> Google will have to make available to Qualified Competitors certain search index and user-interaction data, though not ads data, as such sharing will ...
>
> The court, however, has narrowed the datasets Google will be required to share to tailor the remedy to its anticompetitive conduct.
I don't like the sound of that.
> Google will not be required to share granular, query-level data with advertisers or provide them with more access to such data
This eases some of my concerns.
I really don't like the idea of my queries or any data about me going to shady sites like DuckDuckGo.
"Glue is essentially a super query log that collects a raft of data about a query and the user's interaction with the response. Rem. Tr. at 2808:2–2809:6 (Allan). The data underlying Glue consists of information relating to (1) the query, such as its text, language, user location, and user device type; (2) ranking information, including the 10 blue links and any other triggered search features that appear on the SERP, such as images, maps, Knowledge Panel, “People also ask,” etc.; (3) SERP interaction information, such as clicks, hovers, and duration on the SERP; and (4) query interpretation and suggestions, including spelling correction and salient query terms. Id. at 2809:8–2812:20 (Allan) (discussing RDXD-20.026 to .028). An important component of the Glue data is Navboost data. See id. at 2808:16–20 (Allan) (“Glue contains . . . Nav[b]oost information.”); Liab. Tr. at 6403:3–5 (Nayak) (“Glue is just another name for [N]avboost that includes all of the other features on the page.”). Navboost is a memorization system that aggregates click-and-query data about the web results delivered to the SERP. Liab. Tr. at 1804:8–1805:22, 1806:8–15 (Lehman). Like Glue, it can be thought of as “just a giant table.” Id. at 1805:6–13 (Lehman). Importantly, the remedy does not force Google to disclose any models or signals built from Glue data, only the underlying data itself. Rem. Tr. at 2809:3–4 (Allan)."
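Since the testimony says Glue "can be thought of as just a giant table," one row of that table might look something like this. The field names here are my own illustration, derived only from the four categories the quote lists; the actual schema is not public:

```python
from dataclasses import dataclass, field

@dataclass
class GlueRow:
    """Hypothetical shape of one Glue record, per the four categories in the quote."""
    # (1) the query itself
    query_text: str
    language: str
    user_location: str
    device_type: str
    # (2) ranking info: the 10 blue links plus any triggered search features
    blue_links: list = field(default_factory=list)
    triggered_features: list = field(default_factory=list)  # e.g. maps, Knowledge Panel
    # (3) SERP interaction: clicks, hovers, time on the results page
    clicks: list = field(default_factory=list)
    hovers: list = field(default_factory=list)
    serp_duration_s: float = 0.0
    # (4) interpretation: spelling correction and salient query terms
    corrected_query: str = ""
    salient_terms: list = field(default_factory=list)

row = GlueRow(query_text="best pizza", language="en",
              user_location="US", device_type="mobile")
```

Note that, per the quote, the remedy shares rows like these (the underlying data), not any model or signal Google builds on top of them.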
When a graphical browser running JavaScript distributed by an advertising company or business partner is used, Google measures time spent on the results page (SERP) and time spent hovering, as well as tracking which links are clicked; it also records device type, location, and language.
This data collection is common knowledge to many nerds, but ordinary www users may be unaware of it.
If you do not use such a browser running JavaScript and send only minimum HTTP headers, none of this data is collected, except location as approximated from the IP address. The latter can be user-controlled by sending searches through a remote proxy (set up by the user), or perhaps Tor.
IMHO, it is relatively easy to avoid "click-and-query" data collection such as duration on SERP, hovering, and tracking of clicked links, as well as device type and language, but alternative www clients that prevent it, i.e., not the browser distributed by Google, are not made available as a choice. With this ruling, Google can no longer restrict others from offering a choice of alternative www clients.
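The "minimum HTTP headers" approach described above can be sketched as a bare fetch: no JavaScript runs, so there are no clicks, hovers, or dwell-time signals to collect, and only the IP-derived location leaks (hence the proxy/Tor note). The URL and header set here are illustrative, not a real search endpoint:

```python
import urllib.request

# A bare HTTP request: no JavaScript executes client-side, so SERP-interaction
# signals (clicks, hovers, duration) simply never exist to be collected.
# Only the server-visible IP address remains, which a user-run proxy can mask.
req = urllib.request.Request(
    "https://example.com/search?q=test",  # illustrative endpoint, not Google's
    headers={"User-Agent": ""},           # minimal headers, no cookies sent
)
# Uncomment to actually fetch (network access required):
# with urllib.request.urlopen(req, timeout=10) as resp:
#     html = resp.read()
```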
Chromium's policies are tightly tied to ad measurement within Google (see Privacy Sandbox). Quite important, since pretty much every non-Google browser on every platform blocks third-party cookie tracking.