This is pulling the content of the RSS feeds of several news sites into the context window of an LLM and then asking it to summarize news items into articles and fill in the blanks?
I'm asking because that is what it looks like, but AI / LLMs are not specifically mentioned in this blog post; they just say news is 'generated' under the 'News in your language' heading, which seems to imply that is what they are doing.
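For clarity, the kind of pipeline I'm imagining is roughly the sketch below (purely my guess at how it might work; the feed URL, prompt, and model name are placeholders, not anything Kagi documents):

    # Hypothetical sketch; the feed list, prompt and model name are my own
    # guesses, not anything Kagi documents.
    import feedparser
    from openai import OpenAI

    FEEDS = ["https://example-news-site.com/rss"]  # placeholder feed

    def collect_items(feeds):
        items = []
        for url in feeds:
            for entry in feedparser.parse(url).entries:
                items.append(f"{entry.get('title', '')}\n{entry.get('summary', '')}")
        return items

    def daily_briefing(items):
        client = OpenAI()
        prompt = ("Summarize the following news items into a short daily briefing, "
                  "citing which item each claim comes from:\n\n" + "\n---\n".join(items))
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    print(daily_briefing(collect_items(FEEDS)))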
I'm a little skeptical of the approach: when you ask an LLM to point to 'sources' for the information it outputs, as far as I know there is no guarantee that those are correct – and it does seem like sometimes they just use pure LLM output, as no sources are cited, or it's quoted as 'common knowledge'.
Just for concrete confirmation that LLM(s) are being used, there's an open issue on the GitHub repository, on hallucinations with made up information, where a Kagi employee specifically mentions "an LLM hallucination problem":
There's also a line at the bottom of the about page at https://kite.kagi.com/about that says "Summaries may contain errors. Please verify important information."
It is getting easier and easier to fake stuff, and there are fewer and fewer fully trusted institutions. So sadly I think you are right. It's scary, but we are likely heading towards a future where you need to pay to get verified information, and even that will likely be segmented into different subscriptions depending on what information you want.
To take a moment to be a hopeless Stan for one of my all-time favorite companies: I don't think the summary above yours is fair, and I see why they don't center the summary part of it.
Unlike the disastrous Apple feature from earlier this year (which is still available, somehow!), this isn't trying to transform individual articles. Rather, it's focused on capturing broader trends and giving just enough info to decide whether to click into any of the source articles. That seems like a much smaller, more achievable scope than Apple's feature, and as always, open-source helps work like this a ton.
I, for one, like it! I'll try it out. Seems better than my current sources for a quick list of daily links, that's for sure (namely Reddit News, Apple News, Bluesky in general, and a few industry newsletters).
> when you ask an LLM to point to 'sources' for the information it outputs, as far as I know there is no guarantee that those are correct
A lot of times when I ask for a source, I get broken links. I'm not sure if the links existed at one point, or if the LLM is just hallucinating where it thinks a link should exist. CDN libraries, for example. Or sources to specific laws.
I monitor 404 errors on my website. ChatGPT frequently sends traffic to pages that never existed. Sometimes the information they refer to has never existed on my website.
For example: "/glossary/love-parade" - There is no mention of this on my website. "/guides/blue-card-germany" has always been at "/guides/blue-card". I don't know what "/guides/cost-of-beer-distribution" even refers to.
They'll do pretty much everything you ask of them, so unless the text actually comes from some source (via tool calls, injecting content into the context, or some other way), they'll make up a source rather than doing nothing, unless prompted otherwise.
For my LLM, I have a prompt that condenses down to:
For every line of text output, give me a full MLA annotated source. If you cannot, then either say your source does not exist, or say you are generating information based on multiple sources and give me those sources. If you cannot do that, print that you need more information to respond properly.
Every new model I mess with needs a slightly different prompt due to safeguards or source protections. It is interesting when it lists a source that I physically own and their training data is deteriorated.
They could make up a source, but ChatGPT is an actual app with a complicated backend, not a dumb pipe between textedit and the GPU. Surely they could verify, on the server side, every link they output to the user before including it in the answer. I'm sure Codex will implement it in no time!
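A minimal sketch of the kind of server-side check I mean (purely illustrative on my part, not how ChatGPT or any particular product actually works):

    # Rough sketch of checking links server-side before showing them to the
    # user; purely illustrative.
    import re
    import requests

    def live_links(answer: str) -> dict:
        """HEAD-check every URL that appears in a model answer."""
        urls = re.findall(r"https?://[^\s)\]]+", answer)
        results = {}
        for url in urls:
            try:
                r = requests.head(url, allow_redirects=True, timeout=5)
                results[url] = r.status_code < 400
            except requests.RequestException:
                results[url] = False
        return results
    # Anything mapped to False could be dropped or flagged before the answer ships.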
They surely can detect it, but what are they going to do after detecting it? Loop the last job with a different seed and hope that the model doesn't lie through its teeth? They won't be doing it because the model will gladly generate you a fake source on the next retry too.
Maybe they should be trained on the understanding that making up a source is not "doing what you ask of them" when you ask for a source. It's actually the exact opposite of the "doing what you asked, not what you wanted" trope-- it's providing something it thinks you want instead of providing what you asked for (or being honest/erroring out that it can't).
Think for a second about what that means... this would be a very easy thing to do IFF we already had a general-purpose intelligence.
How do you make an LLM understand that it must only give factual sources? Just some naive RL with positive reward on the correct sources and negative reward on incorrect sources is not enough -- there are obscenely many more hallucinated sources possible, and the set of correct sources is a set of insanely tiny measure.
Yes, that's what it is. Kagi as a brand is LLM-optimist, so you may be fundamentally at odds with them here... If it lessens the issue for you, the sources of each item are cited properly in every example I tried, so maybe you could treat it as a fancy link aggregator
Kagi founder here. I am personally not an LLM-optimist. The thing is that I do not think LLMs will bring us to "Star Trek" level of useful computers (which I see humans eventually getting to) due to LLM's fundamentally broken auto-regressive nature. A different approach will be needed. Slight nuance but an important one.
Kagi as a brand is building tools in service of its users, with no particular affinity towards any technology.
Another LLM-pragmatist here. I don't see why we should treat LLMs differently than any other tool in the box. Except maybe that it's currently the newest and most shiny, albeit still a bit clunky and overpriced.
I'm about as AI-pessimist as it gets, but Kagi's use of LLMs is the most tasteful and practical I've seen. It's always completely opt-in (e.g. "append a ? to your search query if you want an AI summary", as opposed to Google's "append a swear word to your search query if you don't want one"), it's not pushy, and it's focused on summarizing and aggregating content rather than trying to make it up.
Google thinks the same of me and I don't even edit the URL. I can have a session working just fine one night and come back the next day, open a new tab to search for something, and get captcha'd to hell. I'm fairly sure they just mess with Firefox on purpose. I won't install Brave, Chrome, or Edge out of principle either. Safari works fine, but I don't like it.
I consider myself a major LLM optimist in many ways, but if I'm receiving a once per day curated news aggregation feed I feel I'd want a human eye. I guess an LLM in theory might have less of the biases found in humans, but you're trading one kind of bias for another.
Yeah, I agree. The entire value/fact dichotomy that the announcement bases itself on is a pretty hot philosophical topic I lean against Kagi on. It's just impossible to summarize any text without imparting some sort of value judgement on it, therefore "biasing" the text
> It's just impossible to summarize any text without imparting some sort of value judgement on it, therefore "biasing" the text
Unfortunately, the above is nearly a cliché at this point. The phrase "value judgment" is insufficient because it occludes some important differences. To name just two that matter: there is a key difference between (1) a moral value judgment and (2) selection & summarization (often intended to improve information density for the intended audience).
For instance, imagine two non-partisan medical newsletters. Even if they have the same moral values (e.g. rooted in the Hippocratic Oath), they might have different assessments of what is more relevant for their audience. One could say both are "biased", but does doing so impart any functional information? I would rather say something like "Newsletter A is comprised of Editorial Board X with such-and-such a track record and is known for careful, long-form articles" or "Newsletter B is a one-person operation known for a prolific stream of hourly coverage." In this example, saying the newsletters differ in framing and intended audience is useful, but calling each "biased in different ways" is a throwaway comment (having low informational content in the Shannonian sense).
Personally, instead of saying "biased" I tend to ask questions like: (a) Who is their intended audience? (b) What attributes and qualities consistently shine through? (c) How do they make money? (d) Is the publication/source transparent about their approach? (e) What is their track record on accuracy, separating commentary from factual claims, professional integrity, disclosure of conflicts of interest, level of intellectual honesty, epistemic standards, and corrections?
> The entire value/fact dichotomy that the announcement bases itself on
Hmmm. Here I will quote some representative sections from the announcement [1]:
>> News is broken. We all know it, but we’ve somehow accepted it as inevitable. The endless notifications. The clickbait headlines designed to trigger rather than inform, driven by relentless ad monetization. The exhausting cycle of checking multiple apps throughout the day, only to feel more anxious and less informed than when we started. This isn’t what news was supposed to be. We can do better, and create what news should have been all along: pure, essential information that respects your intelligence and time.
>> .. Kagi News operates on a simple principle: understanding the world requires hearing from the world. Every day, our system reads thousands of community curated RSS feeds from publications across different viewpoints and perspectives. We then distill this massive information into one comprehensive daily briefing, while clearly citing sources.
>> .. We strive for diversity and transparency of resources and welcome your contributions to widen perspectives. This multi-source approach helps reveal the full picture beyond any single viewpoint.
>> .. If you’re tired of news that makes you feel worse about the world while teaching you less about it, we invite you to try a different approach with Kagi News, so download it today ...
I don't see any evidence from these selections (nor the announcement as a whole) that their approach states, assumes, or requires a value/fact dichotomy. Additionally, I read various example articles to look for evidence that their information architecture groups information along such a dichotomy.
Lastly, to be transparent, I'll state a claim that I find to be true: for many/most statements, it isn't that difficult nor contentious to separate out factual claims from value claims. We don't need to debate the exact percentages or get into the weeds on this unless you think it will be useful.
I will grant this -- which is a different point than the one the commenter above made -- when reading various articles from a particular source, it can take effort and analysis to suss out the source's level of intellectual honesty, ulterior motives, and other questions I mention in my sibling comment.
Hard pass then. I’m a happy Kagi search subscriber, but I certainly don’t want more AI slop in my life.
I use RSS with newsboat and I get mainstream news by visiting individual sites (nytimes.com, etc.) and using the Newshound aggregator. Also, of course, HN with https://hn-ai.org/
It actually seems more like an aggregator (like ground.news) to me. And pretty much every single sentence cites the original article(s).
There are nice summaries within an article. I think what they mean is that they generate a meta-article after combining the rest of them. There's nothing novel here.
But the presentation of the meta-article and publishing once a day feel like great features.
I have, yeah. To me it looks like what I described in my comment above: it's LLM-generated text, is it not?
> And pretty much every single sentence cites the original article(s).
Yeah, but again, correct me if I'm wrong: I don't think asking an LLM to provide a source / citation yields any guarantee that the text it generates alongside it is accurate.
I also see a lot of text without any citations at all, here are three sections (Historical background, Technical details and Scientific significance) that don't cite any sources: https://kite.kagi.com/s/5e6qq2
I can envision the day where an LLM article generator starts consuming LLM generated articles which were sourced from single articles (co-written by an LLM).
I guess I'm trying to understand your comment. Is there a distinction you're making between LLM summaries and LLM-generated text, or are you stating that they aren't being transparent about the summaries being generated by LLMs (as opposed to what? human editors?).
Because at some point when I launched the app, it did say summaries might be inaccurate.
Looks like you found an example where it isn't properly citing the summaries. My guess is that they will tighten this up, because I looked mostly at the first and second page and most of those articles seemed to have citations in the summaries.
Like most people, I would want those everywhere to guard against potential hallucinations. No, the citations don't guarantee that there weren't any hallucinations, but if you read something that makes you go "huh" – the citations give you a low-friction opportunity to read more.
But another sibling commenter talked about the phys.org and google both pointing to the same thing. I agree, and this is exactly an issue I have with other aggregators like Ground.news.
They need to build some sort of graph that distills down duplicates. Like I don't need the article to say "30 sources" when 26 of them are just reprints of an AP/Reuters wire story. That shouldn't count as 30 sources.
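A toy sketch of the kind of dedup I mean (the Jaccard measure and the 0.8 threshold are arbitrary choices of mine, not anything these products actually use):

    # Toy example of collapsing near-duplicate wire reprints into one source.
    def jaccard(a: str, b: str) -> float:
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

    def count_distinct(articles: list, threshold: float = 0.8) -> int:
        distinct = []
        for text in articles:
            if all(jaccard(text, kept) < threshold for kept in distinct):
                distinct.append(text)
        return len(distinct)
    # 26 reprints of one AP story plus 4 original pieces should count as ~5 sources, not 30.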
> I guess I'm trying to understand your comment. Is there a distinction you're making between LLM summaries and LLM-generated text, or are you stating that they aren't being transparent about the summaries being generated by LLMs (as opposed to what? human editors?).
The main point of my original comment was that I wanted to understand what this is, how it works and whether I can trust the information on there, because it wasn't completely clear to me.
I'm not super up to date with AI stuff, but my working knowledge is that I should never trust the output of an LLM and always verify it myself, so therefore I was wondering if this is just LLM output or if there is some human review process, or a mechanism related to the citation functions that makes it output of a different, more trusted category.
I did catch the message on the loading screen as well now, I do still think it could be a little more clear on the individual articles about it being LLM generated text, apart from that I think I understand somewhat better what it is now.
> No, the citations don't guarantee that there weren't any hallucinations, but if you read something that makes you go "huh" – the citations give you a low-friction opportunity to read more.
Either you mean every time you read something interesting (“huh”) you should check it. But in that case, why bother with reading the AI summary in the first place…
Or you mean that any time you read something that sounds wrong, you should check it. But in that case, everything false in the summaries that happens to sound true to you will be confirmed in your mind without you ever checking it.
...yes? If I go to a website called "_ News" (present company included), I expect to see either news stories aggregated by humans or news stories written and fact checked by humans. That's why newspapers have fact checking departments, but they're being replaced by something with almost none of the utility and its proponents are framing the benefits of the old system as impossible or impractical.
I think you misunderstood my comment. I wasn't challenging the concept of human editors and fact checkers. I was asking a parent for a clarification of what the parent post meant by outlining that they were LLM generated summaries.
Like, I was asking whether they were expecting the curation/summarization to be done by humans at Kagi News.
Publishing once a day to remove the "slot machine dopamine hit" is worth it for that alone. I have forever been looking for a peer/replacement to Google News, I was about to pony up for a Ground News subscription but I'll probably hold off for a couple more months. Alternatives to google news have been sorely lacking for over a decade, especially since google news got their mobile-first redesign which significantly and permanently weakened the product to meet some product manager's bonus-linked KPI. One more product to wean off the google mothership. Gmail is gonna be real hard though.
Gmail seems like the easiest piece of the Google puzzle to replace. Different calendar systems have different quirks around repeating events, you sometimes need to try a variety of search engines to find what you're looking for, Docs aren't bug-for-bug equivalent to the Office or iCloud competitors, YouTube has audience, monetization, and hosting scale... Gmail is just "make an email account with a different provider and switch all of your accounts to use the new address." They don't even give you that much storage for free Gmail; it's 15GB, which lots of other email providers can match (especially paid ones). You can import your old emails to your new provider or just store them offline with a variety of email clients.
Is updating all of your accounts (and telling your contacts about the new address) what you consider to be the hard part, or do you actually use any Gmail-specific features? Genuinely curious, as I tend to disregard almost all mail-provider-specific features that any of my mail providers try to get me excited about (Gmail occasionally adds some new trick, but Zoho Mail is especially bad about making me roll my eyes with their new feature notifications).
I am sticking with this reprehensible company for email because their spam detection is awesome and I have found no clear measurements of spam detection quality to reasonably compare providers. I'd love to be proven wrong!
Switched from Gmail to Fastmail about 10 years ago.
2-3 spam emails slip through every week, and sometimes a false positive happens when I sign up for something new. I don't see this as a huge problem, and I doubt Gmail is significantly better.
I am fine with it using AI, but it makes me feel pretty icky that they didn't mention that this was AI/LLM-generated at any point in this article. That's a no-no IMO, and has turned me off this pretty strongly.
They don't explicitly say they generate summaries at any point in the article. In fact I read it and thought this was just some fancy RSS aggregator. The way they describe the "daily briefing" is extremely ambiguous.
In this situation, humans are more accurate, for now, so it's good information to have.
Same as I would like to know whether a study about how well humans drive is based on self-assessment or on empirical evidence. Humans just aren't that good at that task, so it would be good to know coming in.
Just call it Kagi Vibes instead of Kagi News as news has a higher bar (at least for me)
> when you ask an LLM to point to 'sources' for the information it outputs,
Services that list sources, like Kagi News, Perplexity and others, don't do that. They start with known links and run LLMs on that content. They don't ask LLMs to come up with links based on the question.
That is what I mean, yeah. I'm not saying it's fabricating sources from training data; that would obviously be impossible for news articles. I'm saying that if you give it a list of articles A, B and C, including their content in the context, and ask 'what is the foo of bar?', and it responds 'the foo of bar is baz, source: article B paragraph 2', that does not tell you whether the output is actually correct, or contained in the cited source at all, unless you manually verify it.
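To make the "manually verify" part concrete, even an automated spot-check like the toy sketch below (my own function names; a crude word-overlap proxy, not a real system) can't reliably tell a grounded claim from an ungrounded one:

    # Crude word-overlap check of whether a claim has support in the article
    # it cites.
    def supported_by(claim: str, cited_article: str, min_overlap: float = 0.6) -> bool:
        claim_words = set(claim.lower().split())
        if not claim_words:
            return False
        article_words = set(cited_article.lower().split())
        return len(claim_words & article_words) / len(claim_words) >= min_overlap

    article_b = "The foo of bar was reported to be baz by officials on Tuesday."
    print(supported_by("the foo of bar is baz", article_b))  # True
    print(supported_by("the foo of bar is qux", article_b))  # also True: overlap misses the swap,
                                                             # which is exactly why manual checking is needed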
When you go to Google News, the way they group together stories is AI (pre-LLM technology). Kagi is merely taking it one step further.
I agree with your concern. I see this as a convenient grouping, and if any of them interests me I can skip reading the LLM summary and just click on the sources they provide (making it similar to Google News).
It cannot be "one step further", because there's a clear break in reality between what Google News provides and what Kagi provides. Google News links to an article that exists in our world, 100%, no chance involved. Kagi uses an LLM to generate text and thus is entirely up to chance.
This seems like the opposite of "privacy by design"
> Privacy by design: Your reading habits belong to you. We don’t track, profile, or monetize your attention. You remain the customer and not the product.
How would the LLM provider get any information about your reading habits from the app? The LLM is used _before_ the news content is served to you, the reader.
Yes, they are not the only player here. Quite a few companies are doing this; if you use Perplexity, they also have a news tab with the exact feature set.
> if you use Perplexity, they also have a news tab with the exact feature set
"Exact" is far from accurate. I just did a side-by-side comparison. To name only two obvious differences:
A. At the top level, Perplexity has a "Discover" tab [1] -- not titled "News". That leads to an AAF page with the endless-scroll anti-pattern (see [2] [3] for other examples). Kagi News [4] presents a short list of ~7ish items without images.
B. At the detail-page level, Kagi organizes their content differently (with more detail, including "sources", "highlights", "perspectives", "historical background", and "quick questions"). Perplexity only has content with sources and "discover more". You can verify for yourself.
I'm firmly on the side of "AI" skepticism, but even I have to admit that this is a very good use of the tech. LLMs generally do a great job at summarizing text, which is essentially what this is. The sources could be statically defined in advance, given that they know where they pull the information from, so I don't think the LLM generates that content.
So if this automates the process of fetching the top news from a static list of news sites and summarizing the content in a specific structure, there's not much that can go wrong there. There's a very small chance that the LLM would hallucinate when asked to summarize a relatively short amount of text.
I see! One thing I'm wondering: They say they are fetching the content from the RSS feeds of news outlets rather than scraping them, I haven't used RSS in a bit, but I recall most news outlets would usually not include the full article in their feed but just the headline or a small summary. I'd be worried that articles with misleading headlines (which are not uncommon) might cause this tool to generate incorrect news items, is that not a concern?
That's a fair concern, and I would prefer it if they scraped the sites instead. They could balance this out by favoring content from sites that do provide the entire article in their feeds, but that could lead to bias problems. Maybe this is why their own summaries are short. We can't know for sure unless they explain how it works.
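If someone wanted to check that concern themselves, a quick sketch like this (the feed URL is a placeholder) shows how much text a given feed actually ships:

    # Quick check of how much text a feed carries; a feed that only ships
    # headlines or teasers gives a summarizer much less to work with.
    import feedparser

    def feed_text_lengths(feed_url):
        lengths = []
        for entry in feedparser.parse(feed_url).entries:
            if entry.get("content"):           # full article, when the publisher includes it
                text = entry["content"][0].get("value", "")
            else:                              # otherwise just the teaser/summary
                text = entry.get("summary", "")
            lengths.append((entry.get("title", ""), len(text)))
        return lengths

    # e.g. feed_text_lengths("https://example.com/rss")  # placeholder URL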
It's useful for the users, but tragically bad for anyone involved with journalism. Not that they're not used to getting fucked by search engines at this point, be it via AMP, instant answers, or AI overviews.
Not that the userbase of 50k is big enough to matter right now, but still...
All this is doing is aggregating RSS feeds and linking to the original articles.
So this might result in lower traffic for "anyone involved in journalism" – but the constant doomscrolling is worse for society. So I think we can all agree that the industry needs to veer towards less quantity and more quality.
RSS feeds are meant to be used by actual users, not regurgitated publicly. RSS readers at the very least keep author info visible, and their users tend to be reported to the website's analytics with a special user agent.
What journalism? Most of these sites copy their content from each other or social media, and give it their own spin. Nowadays most of them use AI anyway.
Actual journalism doesn't rely on advertising, and is subscription based. Anyone interested in that is already subscribed to those sources, but that is not the audience this service is aiming for. Some people only want to spend a few minutes a day catching up with major events, and this service can do that for them. They're not the same people who would spend hours on news sites, so these sites are not missing any traffic.
Broadly agreed, I don't consider the CBS (national) news website to be a source of hard hitting journalism; Reuters, however, is. Reuters and the AP are often the source of these news stations.
I continue to subscribe to Reuters because of the quality of journalism and reporting. I have also started using Kagi News. They are not incompatible.
If the parent commenter is correct, the concern I'd have would be about transparency. Even if it's good at what it does, I don't think we're anywhere close to a place as a society where it shouldn't be explicit when it's being used for something like this.
> Kagi is probably the only pro-LLM company praised on HN.
Kagi made search useful again, and their genAI stuff can be easily ignored. Best of both worlds -- it remains useful for people like myself who don't want genAI involved, but there's genAI stuff for people who like that sort of thing.
That said, if their genAI stuff gets to be too hard to ignore, then I'd stop using or praising Kagi.
That this is about news also makes it less problematic for me. I just won't see it at all, since I don't go to Kagi for news in the first place.
I'm not against AI summaries if they are marked as so. Sneakily sliding LLM under the table is a dark pattern no matter how I interpret their intentions.
Even Google calls the overview box AI Overview (not saying it doesn't hurt content hosting sites.)
It's also a workaround around copyright, news sites would be (rightfully) pissed if you publicly post their articles in full and would argue that you're stealing their viewership. But, if you're essentially doing an automatic mash-up of five stories on the same topic from different sources, all of a sudden you're not doing anything wrong!
As an example from one of their sources, you can only re-publish a certain number of words from an article in The Guardian (100 commercially, 500 non-commercially) without paying them.
Yes, that is fine! That's how RSS feeds usually work when you follow more "mainstream" news sources. At the very least, you see the name of the author and you actually make a connection to their server that can be measured in the analytics.
But instead, Kagi "helpfully" regurgitates the whole story, visits the article once, delivers it to presumably thousands, and it can't even be bothered to display all of the sources it regurgitates unless you click to expand the dropdown. And even then the headline itself is one additional click away, and they straight up don't even display the name of the journalist in the pop-up, just the headline.
Incredibly shitty behaviour from them. And then they have the balls to start their about page with this:
And yet, after trying it, I have to admit it's more informative and less provocative than any other news source I've seen since at least 2005.
I don't know how they do it, and I'm not sure I care; the result is they've eliminated both clickbait and ragebait, and the news is indeed better off for it!
Soulless, uncreative, not fact-checked (or read by anyone before clicking publish), not contributing anything back to the original journalists, all of the editorial decisions are done by an undeterministic AI filter.
Not gonna call it the worst insult to journalism I've ever seen because I've seen factually(.)so which does essentially the same thing but calls it an "AI fact check", but it's not much better.
It's like instead of borrowing a book from the library, there's like a spokesperson at the entrance who you ask a question and then blindly believe whatever they say.
This is exactly how I want my news to be. Nothing worse than a headline about a new vaccine breakthrough, followed by a first paragraph that starts with "it was a cold November morning as I arrived in..."
I guess it's a matter of taste, but I prefer it short and to the point
Thanks for pointing out that this is yet more AI slop. Very disappointing for Kagi to do this. I get my money's worth from searches, but if I was looking for more features I would want them to be not AI-based.
Disappointing. Non-LLM NLP summarization is actually rather good these days. It works by finding the key sentences in the text and extracting the relevant sections, with no possibility of hallucination. No need to go full AI for this feature.
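For the curious, a bare-bones version of that kind of extractive summarizer (a toy frequency-based sketch, not any particular library's implementation) fits in a few lines:

    # Minimal extractive summarizer: score sentences by word frequency and
    # keep the top few verbatim. Every output sentence exists in the input,
    # so there is nothing to hallucinate.
    import re
    from collections import Counter

    def extractive_summary(text: str, n_sentences: int = 3) -> str:
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        freq = Counter(re.findall(r"[a-z']+", text.lower()))
        scored = sorted(
            range(len(sentences)),
            key=lambda i: sum(freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())),
            reverse=True,
        )
        keep = sorted(scored[:n_sentences])  # restore original order
        return " ".join(sentences[i] for i in keep)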
I believe LLM output is fine for giving an overview if it is provided the articles; if you want a detailed overview, you should be reading the articles anyways.
> One daily update: We publish once per day around noon UTC, creating a natural endpoint to news consumption. This is a deliberate design choice that turns news from an endless habit into a contained ritual.
I might not agree with all decisions Kagi makes, but this is gold. Endless scrolling is a big indicator that you're a consumer not a customer.
> Endless scrolling is a big indicator that you're a consumer not a customer.
Someone recently highlighted the shift from social networks to social media in a way I'd never thought about:
>> The shift from social networks to social media was subtle, and insidious. Social networks, systems where you talk to your friends, are okay (probably). Social media, where you consume content selected by an algorithm, is not. (immibis https://news.ycombinator.com/item?id=45403867)
Specifically, in the same way that insufficient supply of mortgage securities (there's a finite number of mortgages) led to synthetic CDOs [0] in order to artificially boost supply of something there was a market for.
Social media and 24/7 news (read: shoving content from strangers into your eyeballs) are the synthetic CDOs of content, with about the same underlying utility.
There is in fact a finite amount of individually useful content per unit of time.
> Social media and 24/7 news (read: shoving content from strangers into your eyeballs) is the synthetic CDO of content, with about the same underlying utility.
This is a great way to put it. Much of the social media content is a derivative/synthetic representation of actual engagement. Content creators and influencers can make us "feel" like we have a connection to them (eg: "get ready with me!" type videos), but it's not the same as genuine connection or communication with people.
This is one of the big reasons I've gravitated towards a reverse-chronological feed that takes you from the past to the present -- at some point you hit a natural end, which is a natural prompt to go do something else. I've picked up Reeder[0] as a feed reader, since it can aggregate a bunch of sources (chiefly RSS, but also Mastodon, BlueSky, reddit, etc) and presents it in such a timeline without pressure to read everything.
I agree, but I also would like to see yesterday's news. 12 articles is a little too few for me. I would like to come back every couple of days and review what happened.
What everyone gets wrong about news curation is thinking people want the same news as everyone else, or "both sides" of a situation, or whatever mechanism for exposing them to things that someone else thinks are true.
What I actually want is a curated set of things that are useful to me personally given my situation.
The most important things about my situation to give me useful news are things like: net worth, income, citizenship, family situation, where I live, what industries I work in, current investments, travel destinations, regulatory and political risks associated with any of those things, etc.
Because those are the things that dictate how the parts of the world I can't control are going to affect me (especially if I don't react).
I don't want to hear about random things that aren't going to affect me when I'm looking at the news.
Sometimes I want to learn new random/useless things for fun, but that's a leisure activity. It's totally separate from the "news", which is a thing that adults consume as a chore to better plan their lives.
The fundamental problem is that myself and others are not going to willingly give out the personal information required to curate useful news feeds, so the news will always be filled with noise.
Maybe local AI can help with that.
ChatGPT Pulse has actually done a great job at this for me. It knows about an upcoming vacation I have planned and gave me some specific news about closures and events there, with recommendations on what activities to book in advance.
It feels much less slimy to pay a nominal fee for a service than it does to use a "free" service and wonder about how / to what extent your data is being exploited.
100% agree. Free services have their place, but I'd love to have more paid alternatives for services that only exist as "free".
That said, all my friends think I'm insane and poke fun at me for paying for search, so I imagine we're a small minority.
People just hate paying for software in general in my experience, especially a subscription.
I have multiple good friends who refuse to pay 99 cents a month to get 50 GB of iCloud storage so they can back up their phones, and instead keep all their precious memories on a single device that is out and about.
It's pretty well established that people are just generally irrational about free things. Because of this, I think any business model involving giving something away for free, whether it's a loss leader or ad-supported or something else, is fundamentally anti-competitive. Cognitive biases place any competitor charging for the good/service at a disadvantage. If you're a non-profit, go ahead and give things away. If you're a business, you should have to charge.
I think the whole "if it's free, you're the product" nugget of information has not been broadly understood by folks, or if it has, maybe folks don't care as much about their data.
I do live these days with the understanding that pretty much all of my personal info is out there one way or another, a social security number is about as private as a phone number these days.
You get multiple LLM in a single interface, with a single login and a single subscription to maintain, all your threads stored at the same place, the ability to switch between models in a thread, custom models...
Actually, I get the news search with a quick answer and a link to the assistant, and not a single LLM but practically all LLMs in one interface, and I can link and share the chats.
The interface is nice and simple, and Kagi is very up to date regarding new LLMs (it already contains Sonnet 4.5, for example).
It's just a nice interface for all LLMs, which I often use on mobile or laptop for various work and also private tasks.
The last few months have shown that there is no single LLM worth investing in (today's "top" LLM is tomorrow's second-in-class).
AI multi-step assistant. Being able to try out all the LLMs in one subscription. Search integration with Kagi, which means the AI can really search only pages I want. And my settings for search apply as well.
The Kagi implementation can use Kagi search and can use advanced features of search like lenses. This isn't a unique feature but if you believe Kagi search is better than whoever Anthropic/OpenAI are using it's a nice plus.
Kagi's contracts with LLM providers are the ones businesses get with actual privacy protections which is also nice.
I used Kagi search for a while but eventually switched back to Google because Kagi's location-aware search sucks. It might be better nowadays. I've been living on their browser Orion for a few weeks now though, and it's great. It works about 90% of the time, which is impressive for a browser that isn't tested alongside the big 4.
I think this is the wrong direction. We need better journalism, not better summarizing aggregators.
Summaries are no substitute for real articles, even if they're generated by hand (and these apparently are not). Summaries are bound to strip the information of context, important details and analysis. There's also no accountability for the contents.
Sure, there are links to the actual articles, but let's not kid ourselves that most people are going to read them. Why would they need a summarizing service otherwise? Especially if there are 20 sources of varying quality.
There are no "lifehacks" to getting informed. I'll be harsh: this service strikes me as informationally illiterate person's idea of what getting informed is like.
Also, they talk about "echo chambers" and "full spectrum of global perspectives". Representing all perspectives sounds great in theory, but how far should it go?
Should all politicians' remarks be reproduced verbatim with absolutely no commentary, no fact-checking and no context? Should an article about an airplane crossing the Pacific include "some experts believe that this is impossible because Earth is flat?"
Excessive bias in media is definitely a problem, but I don't think that completely unbiased media can exist while still being useful. In my experience, people looking for it either haven't thought about it deeply enough, or they just want information that doesn't make their side look bad.
> Representing all perspectives sounds great in theory
A bigger bias problem by far is bias by omission, so including all stories whether they meet the presenter's political agenda or not would be a great start.
That's precisely what Axios does, and they make money from this (and they don't list their sources). So I can see Kagi pursuing this.
FWIW, I agree with you.
I used to be a news junkie. I've always thought of writing the lessons I learned, but one of them was "If you're a casual news reader, you are likely more misinformed than the one who doesn't read any news." One either should abstain or go all in.
I guess I'd amend it to put people who only glance at headlines to be even more misinformed. It was not at all unusual for me to read articles where the content just plain disagreed with the headline!
> We need better journalism, not better summarizing aggregators.
I agree, but how do you envision that happening? Journalism died a long time ago, arguably around the birth of the 24-hour news cycle, and it was further buried by social media. A niche tech company can only provide a better way to consume what's out there, not solve such large societal problems.
> There are no "lifehacks" to getting informed.
I don't think their intent is to change how people are informed. What this aims to do is replace endless doomscrolling on sites that are incentivized to rob us of our attention and data, with spending a few minutes a day to get a sense of general events around the world. If something piques your interest, you can visit the linked sources, or research the event elsewhere. But as a way of getting a quick general overview of what's going on, I think it's great.
We're seeing success with giving journalists better tools to create engaging journalism (which HN hates :). Many outlets are now seeing that they have to once more prove their value, and there exists some really great subscription-only media here in the Nordics and France.
I like Kagi and want them to succeed. But currently (according to LinkedIn) there are 26 employees. They are building search, LLM assistant wrappers, a browser, and now news. Please don't overextend the same way Proton is currently doing.
I used to love Proton, but they focus too much on feature development instead of stability and fixing long-standing bugs. E.g. zooming has been broken for years in ProtonMail on iOS. Some emails won’t even render at all :(
Yup, I quit Proton (Mail) for the same reason. I had been using it for a long time…
There are so many little bugs and annoyances, it’s frustrating to see new features being released all the time while obvious bugs and shortcomings are not fixed.
It was a very big relief going back to a normal email client.
I still support Proton (I pay for Proton VPN) and hope they will succeed in their mission.
How is Proton overextending? All of their services are pretty great imo. I'm happy with them. Doesn't mean I am ever going to use their bitcoin wallet app thing, but if they want to build it, great; they know their customer base, so it's probably not out of left field.
I like this a lot, going to try it! One issue I have, though, in the current world of LLMs scraping content, is that I'd prefer there to be more discussion about compensation of authors.
I know the announcement page talks about not scraping, but to me personally the value I see in this product is that I don't have to go to those ad-ridden, poorly organized and often terrible pages of the authors. Which then seems really unfair to the actual content providers.
I'd like to see this type of service cost $3-5/m on top of my normal Kagi sub to compensate the authors of the articles I read. A streaming-music model for news, ish.
This proposed value is quite small, but my assumption is only a very small amount of money would reach them from my ad views anyway so a $10/m addition feels extreme to me.
> One daily update: We publish once per day around noon UTC, creating a natural endpoint to news consumption. This is a deliberate design choice that turns news from an endless habit into a contained ritual.
Could you guys maybe print it on paper and send it to my physical mailbox, so I can do this ritual with breakfast? :-)
Several sections have Trump in the headline. I wish there was a way to block that word like I do in Lemmy. That guy monopolizes the headlines which only makes him more powerful, and annoys me. I'll see whether I can take this when I use this new app, which I otherwise think is great.
We have Content Filter on the web version. And it's coming to the mobile app very soon. We're working towards having complete feature parity with the web app.
I can't speak to any of the apps, but after making my original comment, I checked out the website version, and the settings there do indeed have filters for category as well as specific terms.
Surely it isn't that simple. Even a person who thoroughly condemns Trump's hijacking of media systems and attentions must acknowledge that if international politics are at all relevant for you, some actions of the US president should be seen by you, if only in exceptional circumstances.
This is the issue. I feel bombarded by Trump's firehose of bullshit, but some of it has to be important, right? So how can Kagi create a "smart" Trump filter that focuses on the most important stuff and reduces the firehose? I tried Kagi News and created a "trump" filter and *every story in the US section was gone*.
The problem is the continual stream of bullshit emitted from Trump's mouth gets clicks, and as such even little things that don't have any bearing on an international audience are reported heavily.
When Biden was president I barely heard anything about US politics, but with Trump in power it's hard to avoid.
Haters gonna hate, but I just downloaded Kagi News and LOVE it.
I want to QUICKLY see all the news headlines and drill deeper in as needed, and Kagi News seems to do exactly this.
I really like this for practising a foreign language by switching the content language. I do agree with other comments here though that it will need greater control over which languages are translated.
lol I added 'trump', 'republican', and 'democrat' as custom filter keywords and now it's showing zero stories in the USA category. So apparently, that category is a stand in for politics? Although I have the Thai category enabled (since I live there) and that's all run of the mill national (non political) news.
In other words, this is exposing a long standing flaw in journalism. I know things are super polarized now, but even 20 years ago, when mentioning a Congressperson regarding a particular social problem, they would specify if he was a Democrat/Republican.
I really don't need to know which party he is part of. If the article was about a party's stance, it makes sense - but the article is about one politician.
Do the checkmarks do anything? I expected them to disappear after a reload (like hiding a post on Hacker News), but apparently that's not what they're for.
"Mark as read" checks all the checkmarks, but since they're still there after a reload, I don't see the point.
The really weird thing to me is that the check marks don't persist across categories. I.e. I marked the story about Youtube as "read" in the "World" tab, but it doesn't get a check in the "USA" tab. Seems like the feature is pretty half baked.
It automatically marks stuff you've seen - it's just a visual cue. Similar to how search engines (like Google) show visited links with a different color.
I think keeping them on the page instead of automatically hiding them makes more sense for a product that's trying to update their news feed once per day. You feel more in control, as if it's not a stream of never-ending stories, but rather a fixed amount of stories that you can realistically power through. Seeing all items checked sort-of supports this philosophy.
I do wish I could have better control of what languages I'm getting. Right now the option is to either translate everything or nothing. I'd prefer news in their original, untranslated form if it's one of the 4 languages I speak, otherwise translate them to English.
I added the category "Israel" and everything was in Hebrew, so I had to set my language to English, but now news in my native Swedish are translated to English and I have to kind of translate it back in my head as I read them.
It's not the end of the world, but it seems like fairly low-hanging fruit!
There is a more fundamental problem here. The news feeds are going in this direction for a reason. I don't think you addressed that reason.
You have defined the desirable news as "pure, essential information". What's that again? How do you know what's pure and essential info for any user? The traditional news media had started there, with that pure news, and ended up here where they are today.
Ultimately, you will realize that your content needs to grab enough attention that people consume your feed. People's attention goes to things that look weird, exciting, sensational, or emotional: trivia, gossip, etc. You can't do away with all that and just dish out the pure and essential info. It didn't work. People tried it.
Nice to see an approach to reduce doomscrolling (for myself, and most of my bubble, the biggest addiction, impacting productivity, mental health, and neck).
Yet, there is Hacker Newsletter (https://hackernewsletter.com/, which I like and use), and there are others GPT-5 pointed me to that I don't use, like Mailbrew and Digest. Kagi looks like the true former.
What I do want is personalization - not by picking interests, but by actual personality, prompts, tastes - good enough that it surfaces something new, rather than only narrowing and narrowing my view. Yet high quality, rather than clickbait and other "fluff". Otherwise, following a few subreddits would do the job (with some API to send emails).
What I would like even more is something that actually turns my social media into daily emails.
This is awesome. The only thing that is missing is a place for me to ask Kagi Assistant a question about the current story I am looking at, using the story as part of the context of my question.
We have a Time Travel feature coming soon to both the web and mobile apps. It'll allow you to browse the stories from any date since we started aggregating news ;)
That's a good idea. If they implement it, I would however suggest putting a limit on it - perhaps only letting you see the news from the last week/month.
There is also a list of "citations" which are referenced from the generated text, and "sources" which are not referenced anywhere. It's not clear if they used reddit or reuters to generate any of the text.
I also see lots of citations to "common knowledge"... which is um, weird.
For example:
> National Guard activation: Guard forces can serve under state control (Title 32) or be federalized (Title 10), which determines who directs missions and the scope of authority [*].
Given that Orion (which I repeatedly attempt to daily drive due to the dearth of browsers meeting my requirements right now on macos) is still full of bugs that hamper usability, and seems to introduce new ones with every update, I don't know why Kagi insists on overextending itself like this. They just started porting their broken browser to Linux, they're creating a maps app, all while they clearly do not have the manpower to finish the projects they've already started.
- Site blocking with /etc/hosts doesn't work consistently with Orion, it intermittently and inconsistently ignores these rules. (this is sort of niche but it's bizarre for a browser based on WebKit)
- The password manager is busted on certain websites that have a third input box (so a captcha or 2FA code), where it'll fill the password twice
- Kept randomly getting the error "Orion can't open this page: This operation couldn't be completed. Cannot allocate memory" with like 10 windows, ~30 tabs open. Haven't seen it recently but like many Orion bugs it is intermittent and hard to reproduce consistently.
- Switching between Chrome and Orion sometimes (inconsistently) switches me to the last Orion window I had open (often on a different Desktop) rather than the one I clicked on.
- On networks where I can form WebRTC connections in Safari and Chrome, I cannot in Orion.
- This was just fixed but until like yesterday, the highlight color in their PDF viewer for ctrl F was a barely visible 10% opacity highlight that was totally unusable.
- Various other intangible performance bugs that seem to pile up when you haven't restarted in a day or two. It starts out really snappy and tends to get slower the longer you've had it open.
I should note that the pace of development would be much faster if they would open source the browser, but instead of that they keep starting new, closed-source projects that will likely have the same fate. Their Linux Orion port is from scratch; none of their macOS code is reusable.
> - Site blocking with /etc/hosts doesn't work consistently with Orion, it intermittently and inconsistently ignores these rules. (this is sort of niche but it's bizarre for a browser based on WebKit)
Oh, I hate it when developers get cute with DNS. This doesn't happen with Safari? I've also had issues with the password manager (even after telling it I want to use Passwords, it just... doesn't sometimes).
I've been in the same boat as you - I really want to diversify the browser ecosystem, so I've been daily driving Orion for a bit, but their stance toward open source (which you mentioned) is a big bummer.
One of the best news sites (still running) that I use frequently is http://68k.news/ - it's sort of like this minus the AI summary and info part of the article.
It's just a plain-text web 1.0 page that uses some ranking algo to figure out the top stories of a given day across categories, and shows that headline and, under it, similar headlines across different news sources.
It used to pull in RSS from the sources so you could also read the articles in plaintext, but that broke a bit ago and the dev hasn't fixed it.
Regardless, I still find it a great site to quickly get up to speed on top stories of the day!
But also I really like (and pay for!) Kagi so happily support their own effort here.
Between this app (Kagi) and the Harmony Hacker News client, I'm super happy if this is my only content consumption on the internet/smartphone. The Kagi app just needs a black/OLED theme please, and can we bump the articles from 12 to 20 or 30? 12 is just a tiny bit shy.
There is almost no good reason to keep up with current events in a "news feed" style. I'd maybe like a feed that has a 1 month window summarizing any news cycle that survived 3 days. If it came and went in one cycle, then just don't bother about it. Most of the news is just propaganda anyway. I suppose it's wise to have a sense of the "current thing" so you don't put your foot in it with colleagues who are inhabiting a tighter timeline than you are, but other than that there doesn't seem to be many use cases for keeping tabs in a news feed. Maybe if you're in the business of disrupting/reinforcing people's OODA loops you might need to know some of this stuff, but otherwise it's just a self-own to keep up with the news.
Very skeptical that this would work for me. None of the topics that Kagi chooses to "cover" in their seven or so stories for the day resonates with what I'd want to read. That's exactly why we have feeds that you can tune to your tastes and so on. Getting rid of endless scrolling and such might be a good thing though.
I like 1440 (https://join1440.com) for this. Once a day daily email digest. I like the email format because I'm less likely to start clicking around compared to a web site, and it doesn't require a separate app.
I think it is human curated, but I'm not positive about that.
Thanks for providing RSS feeds for Kagi -- just added them all to https://usedigest.com so users can use this as a drop-in replacement for their news instead of adding various RSS feeds from other news outlets.
Feedback (if someone reads it): offer an option to translate everything to English. For example, news from/about Russia are in Russian, and thus I can't meaningfully share them to non-Russians.
Hey, thanks patrakov for trying the app! Users can actually change the content language of the app from Settings. All stories will be automatically translated to their preferred language.
When you share a Russian story with a non-Russian speaker, they will still be able to read the story in their own Content Language set in the Settings. We're working on improving the UX of languages, sharing a story, and more.
Some UX friction I noticed:
To get back to the homepage from an article, I have to click on the article headline. While this is elegant and you likely get used to it once you know it, it's not exactly intuitive.
Just plugging my service, https://mosaique.info/. It only uses an LLM to generate a short summary, and other ML algorithms structure the information (comments from officials and experts, classification...).
I'm currently working on a major overhaul to provide more holistic context around news by better surfacing less-discussed events.
I’ve been using this for a few days now. I stumbled across it in the App Store last week.
I'm hoping this can fill a gap for me currently. I want something that will give me broad awareness of big news I should probably know about, that's not a 24-hour firehose of news.
I like the once-per-day update and the relatively short list of stories. The jury is still out on how sticky it will be, in terms of being my go-to place for a daily update.
> This is pulling the content of the RSS feeds of several news sites into the context window of an LLM and then asking it to summarize news items into articles and fill in the blanks?
This is awful. It's cutting out any money going to the news agencies that go out there and write news. If they didn't exist, Kagi wouldn't work.
This is true in a big picture sense but that's not the concern of someone who's making a tool meant to be useful to users. The consequences of this existing will be what they will be.
> This is awful. It's cutting out any money going to the news agencies that go out there and write news. If they didn't exist, Kagi wouldn't work.
Why would Kagi stop working if news didn't exist? Kagi is a search engine first and foremost, Kagi News is not a money making product of theirs. Kagi would still be making money with their search engine.
Also, this should entice news writers to write better news. The main reason people use products such as this is that they are sick and tired of going to news sites only to have to power through filler material to get the 10% that actually matters...
I'm doing something like this, summarizing HN posts, because most of the time when there are hundreds or thousands of comments, it's not possible to read everything and I feel like I'm missing something.
So far, i quite enjoy having a summary with bullet points.
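A minimal sketch of that kind of workflow, assuming the public Algolia HN API and leaving the actual LLM call as a placeholder (the summarize() body below is not a real API call):

    import requests

    def fetch_comments(item_id: int) -> list[str]:
        """Fetch an HN item from the public Algolia API and flatten its comment tree."""
        item = requests.get(
            f"https://hn.algolia.com/api/v1/items/{item_id}", timeout=10
        ).json()
        texts = []

        def walk(node):
            if node.get("text"):          # comments carry HTML text; the story root may not
                texts.append(node["text"])
            for child in node.get("children", []):
                walk(child)

        walk(item)
        return texts

    def summarize(comments: list[str]) -> str:
        # Placeholder: hand the joined comments to whatever LLM you use and ask
        # for a short bullet-point summary of the main threads of discussion.
        raise NotImplementedError

    comments = fetch_comments(1)  # any HN item id goes here
    print(f"fetched {len(comments)} comments")
    # print(summarize(comments))  # once a model is wired into summarize()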
Nice idea. I've been toying with the idea of consuming news only once per day, but I think I want an actual newspaper with in-depth articles rather than short posts from online news sites.
Given that this news is generated, I have no idea why the default would be the native language of the sources. And if that does make sense, I would need to be able to select multiple languages to read in, because I can't read all of them.
I really like the balance here. No "brand names" in the headline summaries, no imagery or videos on the homepage, summarize multiple sources. It's daily so no need to refresh.
I've been really enjoying Semafor's emails too, but their 2x a day is tough for me to keep up with. I'll try to get a habit of looking at Kagi News to stay informed.
I don't understand how this is 100% free, no subscriptions, no purchase, apparently no ads or tracking and yet I'm also the "customer" and not the product. What's the catch?
It seems fairly low effort (from a cost perspective) to deploy and maintain this feature, so I think it's a great way to get the Kagi name out there, which may perhaps lead to a few new users!
Sort of like a loss leader, eg the Costco hot dog :-)
I'm probably online too much, but a lot of the news I see here is from yesterday. Supposedly it just refreshed with today's news, but does that really clear out older stories if some outlets publish them later than others? I wouldn't describe some of this as "today's headlines".
I've been using Kagi search for a while now and frankly it's fantastic. Google looks like AOL to me now.
These guys are doing great work and this news product is exactly what I want... Once a day hit. What is happening in the world? As far as pmf goes they hit the mark for an old fart like me.
Can anyone explain why there are so many vague "I don't trust Kagi" comments in here? I don't know who they are, and I haven't seen anyone expand on why.
It actually seems nice. I realize Reddit is not a news source but it used to be a great way to see current events and get level-headed takes on those events. This approach could be a better non-biased* alternative.
I LOVE this. The app feels very clean, the data's presented beautifully, and it hasn't been enshittified yet. And hopefully never will, because I pay Kagi in hopes that they don't.
I feel this is what Apple News should've been. Instead it's just god-awful ad-filled mess of news articles. And the only reason I have it is because of Apple One. But it is a clearly neglected product.
I also pay for ground news but it hasn't met my expectations, mostly because there's a lot of redundancy with wire stories. Like it'll show 50 sources but they're all just regurgitating the same AP or Reuters article. So it skews the "bias"
I've been using it for a long time; it's brilliant, but like any AI it can hallucinate. Joining four separate technical topics about four different companies and initiatives into one story is funny but misleading. Won't go back though - HN + Kite is all I do.
Cool, but how does it compare to something like subreddits? There are still biased moderators behind the scenes, just like subreddits. It also seems to lack the upvoting/downvoting side, which IMO is crucial to democratizing the entire thing.
I think upvoting/downvoting is a crucial aspect of news/information/knowledge. But we've been doing it with plain numbers all along. Why not experiment with weights or more complex voting methods? For example: my reputation is divided into categories - I'm more of an expert in history than politics, hence my votes on historical subjects carry more weight. That feels like the next big step for news, instead of just another centralized aggregator.
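To make that concrete, here is a toy sketch of how category-weighted votes could be tallied; every name and number in it is invented for illustration:

    # Hypothetical sketch of category-weighted voting: a user's vote counts
    # more in categories where they have more reputation.
    reputation = {
        "alice": {"history": 0.9, "politics": 0.2},
        "bob":   {"history": 0.1, "politics": 0.7},
    }

    def tally(votes, category):
        """votes: list of (user, +1/-1) pairs on one story in one category."""
        score = 0.0
        for user, direction in votes:
            weight = reputation.get(user, {}).get(category, 0.1)  # small default weight
            score += direction * weight
        return score

    print(tally([("alice", +1), ("bob", -1)], "history"))   # 0.8: the history expert wins
    print(tally([("alice", +1), ("bob", -1)], "politics"))  # -0.5: the politics expert wins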
I don't find it too much. For $10 I get a search engine better than all the others, I get access to many AI models via Kagi Assistant and Kagi Translate.
While I understand different people find value in different things, dismissing Kagi generally as "too expensive" is ignorant IMO.
I think it also depends what you use it for. I use both their search and their AI models for software development and it saves me precious time when looking for information - in a way it pays for itself.
I get by with the $5 plan. I don't use it at work, and it shows. I often curse during work that I can't exclude certain domains permanently from my Google search!
Mini feedback - it appears to report Google News results as if they came from Google rather than the website in question (Wired in my case, the Snapdragon X2 Elite article).
Apart from that, it's really nice! Good job, kagi team!
I love everything that Kagi has put out. The Orion browser rocks (recently replaced Brave, good riddance) and my go-to chatbot today is the Kagi Assistant with Kimi K2 connected to the internet.
I tended towards Axios but lately it's gotten a bit paywalled and less informative. Can't wait to incorporate Kagi News into my daily workflow.
Big news junkie but I don't feel the need to buy into Kagi's ecosphere personally as a SearXNG user. The article touches on signal over noise and I have found two solutions that work for me as a news junkie:
News Minimalist [1] and Boring Report [2]. Both aggregate news and (IMO) most importantly provide links from multiple outlets for the same stories. Really made me notice the clickbait and allows me to be more selective in choosing reputable sources.
Both use AI, with the former ranking news based on importance, while the latter summarizes articles. (That doesn't feel useful for supporting journalism as a whole so I typically click through and read the articles unless I don't like the outlet reporting)
I'm biased because I built my own RSS reader[0], and I feel that with this approach the thing I love most about RSS, following small niche sources, gets lost. That said, I think for big news it could be great.
Every single news aggregation service promises the same "signal over noise" and "just the facts". I'm so numb from hearing that that I don't believe it anymore.
I do however like the fact that Kagi only pushes _once_ a day. Drinking from the firehose is physically and mentally exhausting. Even daily feels like too much these days other than a quick check to make sure the world didn't implode or the Rapture happened while I was busy trying to get CC to behave.
I've been using this since the beta launch, and I really like it. They're spot on about news being broken.
That said, I do think the service could be improved. Often the summary is a very short blurb that forces me to go to one of the original sites for the content, and hopefully land on one that is not obnoxious to use, which kind of defeats the purpose. The event timeline sounds interesting, but when it essentially shows 2 or 3 events that are obvious from the context, it's not so useful in practice. I always skip the "Quick questions" section, since it reads like an elementary school report, and the questions are really basic. How about letting me ask the questions I want?
Also:
> We don’t scrape content from websites. Instead, we use publicly available RSS feeds that publishers choose to provide.
I think this is a mistake. Most publishers are hostile to RSS and often don't offer it. Scraping is, unfortunately, a requirement if you want to consume public content on your own terms, which is the entire point of this service. Besides, scraping is how all search engines generate their index, so as long as the bot is well behaved and doesn't hammer the site, follows robots.txt or perhaps even bends the rules a bit, it should be fine. I would rather Kagi wasn't so respectful of publishers' wishes, if that would allow them to offer a better service. I understand if they want to avoid getting in trouble with publishers, but the alternative would be better for their users.
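For what it's worth, being "well behaved" mostly comes down to honoring robots.txt, identifying the bot, and rate-limiting. A minimal sketch under those assumptions (the URLs and user-agent string are illustrative, not anything Kagi actually uses):

    # Minimal sketch of a polite fetcher: honor robots.txt, identify the bot,
    # and rate-limit requests.
    import time
    from urllib import robotparser
    from urllib.parse import urlsplit
    import requests

    USER_AGENT = "ExampleNewsBot/0.1 (+https://example.com/bot)"  # illustrative

    def polite_fetch(url, delay=5.0):
        parts = urlsplit(url)
        rp = robotparser.RobotFileParser()
        rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
        rp.read()
        if not rp.can_fetch(USER_AGENT, url):
            return None  # the site asked not to be crawled here
        time.sleep(delay)  # crude rate limit so we don't hammer the site
        resp = requests.get(url, headers={"User-Agent": USER_AGENT}, timeout=10)
        resp.raise_for_status()
        return resp.text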
As much as I hate modern news sites and our ad-riddled culture, it's pretty hard to ignore that this tool couldn't exist without the articles those same news sites are creating.
100 years ago, imagine a service that somehow took all the newspapers and summarized them like this, and everyone knew it had no actual writers, just an advanced printer that could merge articles or something equally goofy.
Can't imagine it would have gone over well in the court system.
What is the business model / exit strategy for Kagi's founders and investors? What is the news curation process and its relation to the public interest?
Are these articulated in a manner which gives stakeholders (investors, users, and staff) assurances and standing?
...
What are competitors and collaborators in this space? Semafor seems to have a similar product, what are the differentiators and/or collaboration opportunities?
...
Netflix was subscription only, till it was "pay to get rid of ads". Then there is the whole business of profiling customer interest, etc.
We have product labeling for food, why not web services?
I never found the lowest most common denominator news "curation" to be at all interesting, let alone algorithmically driven ones. The issue with news has nothing to do with curation of mainstream media. There is very little value in reading a state department or law enforcement press release summarized by some overworked stenographer/journalist. Or some NGO's push to drive some nondescript narrative uncritically parroted. Or some SEO driven click bait or tragedy porn.
If you wanted to fix the news you'd begin by critically curating mainstream news and throwing 80% of it in the trash, then you'd add 80% of material and critical analysis back to the 20% that had none of that.
I'm just happy to be able to entirely remove topics like sports. Google News no longer lets you do this, and gleefully pushes topics on me even when I religiously press "Show fewer stories like this"; it is infuriating. No I do not care about celebrities or football; stop insisting that I do!
"Community-driven sources: Our news sources are open source and community-curated through our public GitHub repository. Anyone can propose additions, flag problems, or suggest improvements."
This sounds like it's going to be a massive headache. Activists with nothing to do all day will be all over this, for their chance to try to have influence over what other people read.
One thing I found working on a startup which touched on the political sphere is people don't want curated lists imposed on them, they want to impose their curated lists on other people.
I like that it only provides the list once a day (I do think that's a clever feature), but the inability to influence bias seems like a mistake, especially since the sources already seem to follow a bias.
Exactly. I also wonder what the end game is. If creating content becomes a loss-making exercise, people will logically stop, and the LLMs will have less and less content to 'train on.' And as even large news corps increasingly deploy internal LLMs, the deadening, banal style of LLM output, AI overviews, etc. will inevitably drive readers away. I use Perplexity for search in place of Google and it surfaces good links most of the time. But what do tech and media companies - even Spotify - think they will do when the artists, reporters and creatives stop feeding them? Or when readers don't want to read banal summaries of everything?
Another one you can check out is one I made for myself that friends also use [1], although it covers only tech news.
It uses more than 100 RSS feeds to aggregate the top 10 news items every few hours, and it has tags you can use to read news on specific topics.
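A rough sketch of that kind of aggregation, assuming feedparser and a crude title-similarity grouping (the feed URLs are placeholders, and the real service presumably does something smarter):

    # Rough sketch: pull many feeds, group entries with similar titles, and
    # rank groups by how many entries covered the story.
    import feedparser
    from difflib import SequenceMatcher

    FEEDS = [
        "https://example.com/tech.rss",   # placeholder feed URLs
        "https://example.org/news.xml",
    ]

    def similar(a, b, threshold=0.6):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

    entries = [e for url in FEEDS for e in feedparser.parse(url).entries]

    clusters = []
    for entry in entries:
        title = entry.get("title", "")
        for cluster in clusters:
            if similar(title, cluster[0].get("title", "")):
                cluster.append(entry)
                break
        else:
            clusters.append([entry])

    # "Top" stories are simply the ones the most outlets wrote about.
    for cluster in sorted(clusters, key=len, reverse=True)[:10]:
        print(len(cluster), cluster[0].get("title", ""))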
Sites like these make me realize that I'm not all that interested in “news”, which might be a personal fault, but it also makes you wonder what all the other “”news”” sites have been doing to capture my attention...
This is honestly very disappointing. Not using LLMs, but the complete lack of transparency about their usage. You can already see in the repository issues related to hallucinations[^1]. This is _fine_, but not if you seem to obscure the fact that these can be very, very wrong. This seems to only be mentioned in the very brief loading screen and at the bottom of the about page[^2]. Also, apparently many of the "core RSS feeds" are just... reddit[^3]???
For me, this is only useful as a curated list of news feeds (and subreddits I guess), but nothing more.
Nextcloud News works just fine, is free, is as biased as the feeds you configure and no more, does not (yet...) introduce LLM slop, is free software (beer/freedom), and has been around for a long, long time. You can configure it any way you want; the default update interval is 5 minutes, which should be enough for even the most FOMO-affected 'news' junkie. Of course, the actual updates depend on the RSS sources, but if you configure a number of active feeds you'll get updates every few minutes.
This doesn’t really seem to touch on the problem I have with news, which is that it is all doom and gloom, FUD and outrage. The headlines I saw:
Trump, Congress deadlock as shutdown deadline nears
Taliban cuts internet nationwide, flights grounded in Afghanistan
Indonesia school collapse leaves 38 missing, 77 hurt
YouTube settles Trump suspension lawsuit for $24.5m
German court jails AfD aide for China spying
US deports 120 Iranians after deal
Russian drone strike kills family of four
Is this really what I need to know about the world? Am I staying "informed"? This is not helping the anxiety from reading news described in the article. This is not good for people.
I think a fundamental issue with news is that it doesn't try to push people to have a more correct mental model of the world.
Some things that could change that:
- Deep fact checking. Community Notes on twitter do a better job at this than any other system I've seen. The reason it doesn't really work in practice is that the stream of misinformation and confusion is orders of magnitude larger than the Community Notes community. A news app should not have that scalability issue.
- Follow up. If I read something that later turns out to be false I need to be notified of that. This unfortunately requires that the app track what I have read.
- Context. If you have a news article about a stabbing, it sounds like stabbings are up. The context that they are going up or down statistically is extremely relevant. The lack of context can turn a tiny truth into a bigger lie.
- Deep confusion analysis. Figuring out where people are confused statistically and focusing on trying to manage that misinformation gap is not something that is dealt with at all. I would like to become LESS confused by information sources not more.
That's just media literacy, I think. I would add "sourcing" to that list: if an article just parrots some press release or badly summarizes some paper, it should at least link to it, but they rarely do. It is then hard to find the primary source, because you'll only find articles about it, not the actual primary source, which gets buried in Google's search results.
RSS is a strange choice in 2025. As a search engine they are in the position to extract things from web pages themselves. They already need this capability in order to properly rank the page.
I, for one, welcome RSS (back) and say the more, the better. I much prefer to pull specific types of information at my own set intervals when I need them, instead of either having undifferentiated information pushed on me continuously like a blast from a fire hose, or having to reach out to manually check and filter many individual sources. The idea is to schedule my receipt and processing of the information, and then refine the stream itself as well as the intervals I use to view it and the total amount of time I spend on it.
I'm currently on the hunt for an RSS reader that has good filtering and sorting functionality, so I can (for instance) pull several feeds from only certain sources, but not see any posts/articles about terms A or B, yet see and sort any posts with term C by time, followed by either posts from source 1 with terms C and D, or posts from source 2 with terms E or F but not G, which would be sorted by relevance.
I know that's a complicated and probably poorly written explanation, but I'm imagining something like Apple Mail Rules for RSS.
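Something like that could be prototyped on top of feedparser with a small rule table. A rough sketch under those assumptions - feed URLs and terms are placeholders, and real relevance ranking would need an actual scoring step:

    # Sketch of "Mail Rules for RSS": each rule names a feed, terms that must all
    # appear, terms that must not, and how to sort the matches.
    import time
    import feedparser

    RULES = [
        {"feed": "https://example.com/source1.rss",
         "require": ["term c", "term d"], "exclude": [], "sort": "time"},
        {"feed": "https://example.org/source2.rss",
         "require": ["term e"], "exclude": ["term g"], "sort": "feed-order"},
    ]

    def matches(entry, rule):
        text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
        if any(term in text for term in rule["exclude"]):
            return False
        return all(term in text for term in rule["require"])

    for rule in RULES:
        hits = [e for e in feedparser.parse(rule["feed"]).entries if matches(e, rule)]
        if rule["sort"] == "time":
            hits.sort(key=lambda e: e.get("published_parsed") or time.gmtime(0),
                      reverse=True)
        for e in hits:
            print(e.get("title", ""), "-", e.get("link", ""))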
I think Kagi's target audience is people who want to see news, and not people who want a RSS reader for news. The average person does not care how news gets to them. The fact that it uses RSS is a technical detail they should not have to worry about. Kagi should not be artificially restricting themselves to RSS feed when there is news that exists outside the RSS ecosystem which they should consider including.
Kagi's eventual target audience might be the average person, but right now its customers are almost certainly the type of people who mourned the shuttering of Google Reader.
Why? I use it and can't imagine following anything without it... And I keep hoping that it will come back and replace the absolutely terrible, schizophrenic feeds from Meta/X/etc.
Because not every site has an RSS feed. For example, when Claude Sonnet 4.5 was released it would have made sense to cover that, but there is no RSS feed for Anthropic. Being compatible with the entire web instead of just a subset of it is useful.
Well... thanks to Google's walled-garden policy, the whole web decided to erect walls instead of providing neat RSS. Though all the websites I care about have RSS, and if one doesn't, I contact the admin and they add it…
I don't know of any major publisher that doesn't maintain RSS feeds, and this is mostly syndicating major publishers, so I'm not sure it makes any difference
I use RSS to get my information, and I've built my own reader, https://rahuldshetty.github.io/reader-project/, for it. It helps me stay up to date with my various news feeds and sites in one place, and I don't have to go to a search engine every time I want a piece of news.