Now why a spec from 2005 is on the front page of Hacker News, I have no idea...
The hyperscalers stopped that timeline from winning, though.
YouTube had atom feeds and I don't think Amazon and Microsoft have relevant syndication.
Meta is surely responsible but that's it, imo.
<feed xmlns:yt="http://www.youtube.com/xml/schemas/2015" xmlns:media="http://search.yahoo.com/mrss/" xmlns="http://www.w3.org/2005/Atom">
I don't think they are linked to anywhere, but the URL is http://www.youtube.com/feeds/videos.xml?channel_id=<channel_id>

They dumped microformats and standards in favor of soupy, error-tolerant formats that benefitted their search engine and made it harder for other efforts to make information shareable and accessible.
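Those channel feeds can be consumed with nothing but the standard library. A minimal sketch (the entry structure below is assumed from typical YouTube Atom output, not from any official documentation):

```python
import xml.etree.ElementTree as ET

# Namespaces as declared on the <feed> element shown above.
NS = {
    "atom": "http://www.w3.org/2005/Atom",
    "yt": "http://www.youtube.com/xml/schemas/2015",
    "media": "http://search.yahoo.com/mrss/",
}

def parse_channel_feed(xml_text: str) -> list[dict]:
    """Extract title, link, and video id from each <entry> in an Atom feed."""
    root = ET.fromstring(xml_text)
    videos = []
    for entry in root.findall("atom:entry", NS):
        link = entry.find("atom:link", NS)
        videos.append({
            "title": entry.findtext("atom:title", namespaces=NS),
            "url": link.get("href") if link is not None else None,
            "video_id": entry.findtext("yt:videoId", namespaces=NS),
        })
    return videos
```

Fetch the URL above with any HTTP client and feed the body to `parse_channel_feed`; no API key needed.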
They wanted it to be easy to get information in, but for you to have to go through them to get information out.
I'm not sure they killed microformats, they still support hReview, hProduct etc, don't they?
And they pushed schema.org. I wrote a trivial recipe importing tool that just works™ on a bunch of website because it uses the JSON-LD Recipe schema. It's ~100 lines and a ton simpler than what I had to write 15 years ago.
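For anyone curious what such a tool boils down to: the core is just pulling `<script type="application/ld+json">` blocks out of the page and keeping the ones whose `@type` is `Recipe`. A stdlib-only sketch (real pages also nest recipes in `@graph` arrays and use list-valued `@type`, which this ignores):

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Collect the raw contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_ld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_ld = False

    def handle_data(self, data):
        if self._in_ld:
            self.blocks.append(data)

def find_recipes(html: str) -> list[dict]:
    """Return every top-level JSON-LD object on the page typed as a Recipe."""
    parser = JsonLdExtractor()
    parser.feed(html)
    recipes = []
    for block in parser.blocks:
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed embedded JSON is common in the wild
        items = data if isinstance(data, list) else [data]
        recipes += [d for d in items
                    if isinstance(d, dict) and d.get("@type") == "Recipe"]
    return recipes
```

The returned dicts then carry schema.org fields like `name`, `recipeIngredient`, and `recipeInstructions` directly.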
Sure, they pushed for HTML5-style stuff, but that's not much of killing things.
IMO it's not Google that stopped microformats: it's that website owners realized that most of the time it advantaged third parties with no benefit to themselves.
It's just that they both fell out of fashion when social media decided they preferred keeping their users captive to accepting interop.
I liked Atom's clean design, but it felt like it was mostly pushed by Google (I may be misremembering) and in the end the syndicated web faded into obscurity anyway.
There's really no good reason to use anything other than Atom.
What do you like about XML? I feel like I'm missing something.
Obviously, that's only a benefit if you care about and utilize those features; most teams doing JSON integrations will just build those into the consumer in lieu of them being provided by the transport. But it is something that some people (especially larger enterprise organizations) value.
In addition, JSON is easier to parse and to map to common data structures of programming languages.
JSON is not; it is closer to the PHP, JS, etc. "object" type, which is an ephemeral object with arbitrary member associations.
And, to be clear, this is not a value judgement. They just excel in different fields. XML tends to be easier for strongly and strictly typed languages such as C/C++, C#, Java, etc where you can use the schema to generate your structs automatically. Vanilla JSON is easier for higher level languages that don't require you to manually create a mapping/validation level. JSON Schema tries to bridge that gap to a degree, but isn't built into the standard and isn't even universal.
But, ultimately, both are perfectly sufficient for either use case. It just depends on how much massaging you want to do to make them work.
I'm also not so sure about JSON being easier to map to common data structures. The lack of order guarantees within objects makes things like ordered maps quite annoying (you need to either use an array of entries with key and value, or an index within the mapped objects).
But, I would suspect, proponents of XML would still point to its deeper type system, document structure (especially its hierarchical features), and extremely mature ecosystem and tooling (such as XSLT) as reasons to prefer it over JSON with JSON Schema.
JSON is still figuring it out.
As for DTD: https://en.wikipedia.org/wiki/Document_type_definition
Basically it tells the system what elements are allowed in which places and what attributes they can contain.
<!ELEMENT html (head, body)>
Defines an html element that can contain a head and a body, nothing else. Anything extra or missing will fail the validator.

It was kinda-sorta eventually superseded by XML Schema, which could also define what KIND of data the attributes could contain, but it did sit at the top of XML/HTML/SGML documents for years.
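Expanding the `<!ELEMENT>` line above into a slightly fuller internal subset, and since attributes were mentioned: a DTD constrains those too, via `<!ATTLIST>`. An illustrative sketch, not taken from any real DTD:

```xml
<!DOCTYPE html [
  <!-- html must contain exactly one head followed by one body -->
  <!ELEMENT html (head, body)>
  <!ELEMENT head (title)>
  <!ELEMENT title (#PCDATA)>
  <!ELEMENT body (#PCDATA)>
  <!-- an optional lang attribute on html; anything else is invalid -->
  <!ATTLIST html lang CDATA #IMPLIED>
]>
```

A validating parser fed this document type will reject `<html>` with a missing `<body>` or with an undeclared attribute.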
In retrospect, it's useful for creating islands of sanity/enforcement in a codebase. Lightweight way to give type annotations across organizational boundaries.
> we use an XML parser to parse it to JSON and even then it's not perfect
I can't quite picture this: how does one parse XML to JSON? I assume there's code that's parsing XML and returning a JSON object? What would make this not perfect, other than a poor implementation of the translator? Would them using JSON help? If JSON is a less expressive format than XML, is it even possible to 100% translate their XML to JSON?
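For what a typical translator looks like, and why it's inherently lossy: a naive converter has to invent conventions for attributes, repeated elements, and text, and drops things XML can express (element order across different tags, mixed content, comments, namespaces). A stdlib sketch of the usual conventions:

```python
import xml.etree.ElementTree as ET

def xml_to_dict(elem):
    """Naive XML-to-JSON-style mapping. Several XML features don't survive:
    mixed content order, child 'tail' text, comments, and PIs are all dropped."""
    node = {}
    # Convention: attributes get an '@' prefix to avoid colliding with children.
    node.update(("@" + k, v) for k, v in elem.attrib.items())
    for child in elem:
        value = xml_to_dict(child)
        if child.tag in node:
            # Convention: repeated sibling elements are forced into an array,
            # so one <x> and two <x>s produce different JSON shapes.
            if not isinstance(node[child.tag], list):
                node[child.tag] = [node[child.tag]]
            node[child.tag].append(value)
        else:
            node[child.tag] = value
    text = (elem.text or "").strip()
    if text and not node:
        return text          # leaf element collapses to a plain string
    if text:
        node["#text"] = text  # convention for text alongside children
    return node
```

The shape instability (string vs. object vs. array for the "same" element) is exactly the kind of thing that makes such translators "not perfect" for consumers.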
Thanks for the insight! Is this what JSDoc/Swagger is now used for?
> I can't quite picture this: how does one parse XML to JSON?
I'm not sure actually. I haven't personally seen the code, I just hear my coworkers always lambasting that API provider for their usage of XML. Maybe it's just their lack of documentation that sucks, but it's become a running joke whenever we get a new partner that the team integrating it jokes that their API is XML.
I hear this too, but often when I ask why people say things like that, it's either because XML is "outdated" or because they don't like it.
It's like programs written in C or C++: very few large projects choose those languages nowadays, often for good reasons, so the projects written in those languages are usually 10 to 20 years old. Age comes with a lot of legacy cruft and obscure behaviour, but that's not the fault of those languages per se. Or like people blasting banks for using COBOL, even though COBOL is a perfectly fine high-performance language for the niches bank mainframes serve.
Some people forged ahead with a cleaned up RDF-based version and called it RSS 1.0, while other people went ahead with the ambiguities but without RDF and called it RSS 2.0. The person publishing RSS 2.0 considered it finished and refused to update it. There was drama.
A bunch of people decided that there was too much to clean up from within that mess and started a new format, Atom. This ended up being a much better spec with an official RFC, but at this point everybody was calling any type of feed “RSS”, even if it was Atom.
If you have the choice, you should pick Atom.
At the bottom of the article, under "See Also", there's a link to this page comparing RSS and Atom: https://www.intertwingly.net/wiki/pie/Rss20AndAtom10Compared...
It seems like the last update is from 2008, but the section on the differences has a few interesting items. I am not sure if it changed, but it says:
"The RSS 2.0 specification is copyrighted by Harvard University and is frozen. No significant changes can be made (although the specification is under a Creative Commons licence) and it is intended that future work be done under a different name; Atom is one example of such work."
The Wikipedia RSS page has also a small section comparing RSS and Atom: https://en.wikipedia.org/wiki/RSS#RSS_compared_with_Atom
"Technically, Atom has several advantages: less restrictive licensing, IANA-registered MIME type, XML namespace, URI support, RELAX NG support.[35]"
There is an npm package called astrojs-atom, but I am not sure it is official or safe.
If any Astro core developer is reading this: please add an Atom option in addition to RSS.
Pity though. RSS/Atom was a fantastic concept, and it's a real shame big tech killed it off.
Basically, I get to see the latest post from a random feed. Nothing else. No lists of unread new posts from all the feeds. If I like the title and short summary, I click through to the website or blog itself where I can read the whole thing. There's no FOMO this way, or an information overload. Just one post a time.
Because the whole list of feeds is curated by myself, I know that everything is at least a little interesting. I even made a category with Youtube channels that I like, so I can skip their annoying recommended videos algo.
Next to this basic functionality, I made what I call 'Newspapers'. These are certain topics with a bunch of selected feeds attached, they get checked automatically in the background. When the Newspaper has enough articles, I see a new Newspaper appear. Otherwise it might take months before a feed is shown in the random selection.
Or you create a blog for yourself and you make a blogroll.
As for discovering new blogs, couple of options but there are more out there: https://ooh.directory, https://blogroll.org/
One 'dream' of mine is to have OPML be the discovery-glue between all kinds of individual personal websites and blogs. But this requires critical mass to have enough to discover and explore, and it needs some fun/interesting software way to do that.
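For anyone who hasn't seen OPML: it's just a small XML format for outlines, and a blogroll is a list of feed subscriptions in it. A minimal hand-written example (the URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
  <head>
    <title>My blogroll</title>
  </head>
  <body>
    <outline text="Blogs I follow">
      <outline text="Example Blog" type="rss"
               xmlUrl="https://example.com/feed.atom"
               htmlUrl="https://example.com/"/>
    </outline>
  </body>
</opml>
```

Most feed readers can already import and export files like this, which is what would make it workable as discovery-glue: publish your blogroll as OPML, and others can subscribe to or crawl it.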