The core differentiator of Zero is query-driven sync. We apparently need to make this clearer.
You build your app out of queries. You don't have to decide or configure what to sync up front. You can sync as much, or as little as you want, just by deciding which queries to run.
If Zero does not have the data a query needs on the client, the query automatically falls back to the server. That data is then synced and available for the next query.
This ends up being really useful for:
- Any reasonably sized app. You can't sync all of the data to the client.
- Fast startup. Most apps have publicly visible views that they want to load fast.
- Permissions. Zero doesn't require you to express your permissions in some separate system, you just use queries.
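To make the model concrete, here is a minimal sketch of the query-fallback idea in plain JavaScript. This is an illustration of the concept only, not Zero's actual API; `QueryClient` and `serverFetch` are hypothetical names:

```javascript
// Hypothetical sketch of query-driven sync, illustrative only,
// NOT Zero's real API. Queries are answered from local data when
// possible and transparently fall back to the server.
class QueryClient {
  constructor(serverFetch) {
    this.cache = new Map(); // query key -> rows synced so far
    this.serverFetch = serverFetch;
  }

  async query(key) {
    // Already synced: answer instantly from the client.
    if (this.cache.has(key)) return this.cache.get(key);
    // Otherwise fall back to the server, then keep the rows
    // locally so the next identical query is instant.
    const rows = await this.serverFetch(key);
    this.cache.set(key, rows);
    return rows;
  }
}
```

A real engine also keeps cached queries live by re-running them as synced data changes, and scopes what the server returns by permissions, but the shape is the same: run a query, and whatever it needs gets synced.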
So the experience of using Zero is actually much closer to a reactive database, something like Convex or RethinkDB.
Except that it uses standard Postgres, and you also get the instant interactions of a sync engine.
This architecture offers several advantages:
1. Data is stored locally, resulting in extremely fast response times
2. Full database export and import are convenient
3. Server-side logic is lightweight, with minimal performance overhead and development complexity; all business logic is implemented on the client
4. Feature development is simplified, requiring only local logic
There are also some limitations:
1. Only suitable for text data; object storage services are recommended for images and large files
2. Synchronization-related code requires extra caution in development, as bugs could have serious consequences
3. Implementing collaborative features with end-to-end encryption is relatively complex
The technical architecture is designed as follows:
1. Built on the Loro CRDT open-source library, allowing me to focus on business logic development
2. Data processing flow: User operations trigger CRDT model updates, which export JSON state to update the UI. Simultaneously, data is written to the local database and synchronized with the server.
3. The local storage layer is abstracted through three unified interfaces (list, save, read), using platform-appropriate storage solutions: IndexedDB for browsers, file system for Electron desktop, and Capacitor Filesystem for iOS and Android.
4. Implemented end-to-end encryption and incremental synchronization. Before syncing, the system calculates differences based on server and client versions, encrypts data using AES before uploading. The server maintains a base version with its content and incremental patches between versions. When accumulated patches reach a certain size, the system uploads an encrypted full database as the new base version, keeping subsequent patches lightweight.
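The base-plus-patches scheme described in point 4 can be sketched roughly like this (simplified and in-memory; `encrypt` stands in for real AES, and the class and method names are mine, not the project's):

```javascript
// Sketch of base-version + incremental-patch sync (simplified).
// encrypt() is a stand-in for real AES encryption of the payload.
const encrypt = (data) => `enc(${JSON.stringify(data)})`;

const PATCH_LIMIT = 3; // compact once this many patches accumulate

class SyncServer {
  constructor() {
    this.baseVersion = 0;
    this.base = encrypt({}); // encrypted full snapshot
    this.patches = [];       // encrypted diffs since the base
  }

  // Client uploads the encrypted diff between its last-synced
  // version and its current state, plus an encrypted full snapshot
  // the server can promote if it decides to compact.
  pushPatch(encryptedDiff, encryptedFullState) {
    this.patches.push(encryptedDiff);
    if (this.patches.length >= PATCH_LIMIT) {
      // Too many accumulated patches: promote the full snapshot to
      // the new base so later clients replay fewer diffs.
      this.base = encryptedFullState;
      this.baseVersion += this.patches.length;
      this.patches = [];
    }
  }

  // A client at `clientVersion` only needs the patches it missed,
  // or the full base if it has fallen behind the last compaction.
  pull(clientVersion) {
    if (clientVersion < this.baseVersion) {
      return { base: this.base, patches: this.patches };
    }
    return { patches: this.patches.slice(clientVersion - this.baseVersion) };
  }
}
```

The key tradeoff is the patch limit: a small limit keeps pulls cheap for stale clients at the cost of more frequent full uploads, a large limit does the reverse.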
If you're interested in this project, please visit https://github.com/hamsterbase/tasks
Networks and servers will only get faster. The speed of light is constant, but we aren't even using its full potential right now. Hollow-core fiber promises upward of a 30% reduction in latency for everyone using the internet. There are RF-based solutions that deliver some of this promise today. Even with a wild RTT of 500 ms, an SSR page rendered in 16 ms would feel relatively instantaneous next to any of the mainstream web properties online today if delivered on that connection.
I propose that there is little justification for taking longer than one 60 Hz frame (~16 ms) to render a client's HTML response on the server. A Zen 5 core can serialize something like 30-40 megabytes of JSON in that timeframe. From the server's perspective, this is all just a really fancy UTF-8 string. You should be measuring this stuff in microseconds, not milliseconds. The transport delay being "high" is not a good excuse to get lazy with CPU time. Using SQLite is the easiest way I've found to get out of millisecond jail. Any hosted SQL provider is like a ball and chain when you want to get under 1 ms.
There are even browser standards that can mitigate some of the navigation delay concerns:
https://developer.mozilla.org/en-US/docs/Web/API/Speculation...
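For example, speculation rules let a page declaratively ask the browser to prefetch or prerender likely next navigations, so part of the navigation delay is paid before the click. Support varies by browser; the snippet below uses the Chromium-shipped syntax, and the `/docs/*` pattern is just an illustration:

```html
<script type="speculationrules">
{
  "prerender": [
    { "where": { "href_matches": "/docs/*" }, "eagerness": "moderate" }
  ]
}
</script>
```

With `"eagerness": "moderate"`, the browser typically starts the prerender when the user hovers a matching link, so the destination can paint almost instantly on click.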
This isn't an argument for SSR. In fact there's hardly a universal argument for SSR. You're thinking of a specific use case where there's more compute capacity on the server, where logic can't be easily split, and so on. There are plenty of examples where client-side rendering is faster.
Rendering logic can be disproportionately complex relative to the data size. Moreover, client resources may actually be larger in aggregate than the server's. If SSR were the only reasonable game in town, we wouldn't see the excitement around WebAssembly.
Also take a look at the local-computation post https://news.ycombinator.com/item?id=44833834
The reality is that you can't know which one is better and you should be able to decide at request time.
- is 50kb (gzipped)
- requires no further changes from you (either now or in the future)
- enables offline/low bandwidth use of your app with automatic state syncing and zero UX degradation
would you do it?
The problem I see with SSR evangelism is that it assumes that compromising that one use case (offline/low bandwidth use of the app) is necessary to achieve developer happiness and a good UX. And in some cases (like this) it goes on to justify that compromise with promises of future network improvements.
The fact is, a low bandwidth requirement will always be a valuable feature, no matter the context. It's especially valuable to people in third-world countries, in remote locations, or being served by Comcast (note I'm being a little sarcastic with that last one).
> would you do it?
No, because the "automatic state syncing and zero UX degradation" is a "draw the rest of the owl" exercise wherein the specific implementation details are omitted. Everything is domain specific when it comes to sync-based latency hiding techniques. SSR is domain agnostic.
> low bandwidth requirements
If we are looking at this from a purely information-theoretical perspective, the extra 50 kb gzipped is starting to feel kind of heavy compared to my ~8 kb (plaintext) HTML response. If I am being provided internet via avian carriers in Africa, I would also prefer the entire webpage be delivered in one simple response body without any dependencies. It is possible to use so little JavaScript and CSS that it makes more sense to inline them. SSR enables this because you can simply use multipart form submissions for all of the interactions. The browser already knows how to do this stuff without any help.
You're stating that networks and latency will only improve, and that this is a reason to prefer SSR.
You're also stating that 50kb feels too heavy.
But at 8kb of SSR'd plaintext, you're ~6 page loads away from breaking even with the 50kb of content that will be cached locally, and you yourself are arguing that the transport for that 50kb is only getting better.
Basically: you're arguing it's not a problem to duplicate all the data for the layout on every page load because networks are good and getting better. But also arguing that the network isn't good enough to load a local-first layout engine once, even at low multiples of your page size.
So which is it?
---
Entirely separate from the "rest of the owl" argument, with which I agree.
I thought we were living in a utopia where fast high-speed internet was ubiquitous everywhere? What's the fuss over 50kb? 5mb should be fine in this fantasy world.
That's an assumption you're making, but that doesn't necessarily have to be true. I offered you what amounts to a magic button (drop this script in, done), not a full implementation exercise.
If it really were just a matter of dropping a 50kb script in (nothing else) would you do it? Where's the size cutoff between "no" and "yes" for you?
> Everything is domain specific when it comes to sync-based latency hiding techniques.
Yes and no. To actually add it to your app right now would most likely require domain-specific techniques. But that doesn't imply that a more general technique won't appear in the future, or that an existing technique can't be sufficiently generalized.
> the extra 50kb gzipped is starting to feel kind of heavy
Yeah - but we can reasonably assume it's a one-and-done cached asset that effectively only has to be downloaded once for your app.
If everything you need - application logic, css, media, data etc... - is cached on your device, you can carry on.
You're being far too myopic about all of this. There are many different use cases and solutions, all of which have their tradeoffs. Sometimes SSR is appropriate, other times local-first is. You can even do both: SSR the HTML in a service worker with either isomorphic JavaScript or WASM.
We can all agree, though, that React and its derivatives are never appropriate.
Would you try to write or work on a collaborative text document (i.e. Google Docs or Sheets) by editing a paragraph or sentence that's server-side rendered, and hope nobody changes the paragraph mid-work, because the developers insisted on SSR?
These kinds of tools (Docs, Sheets, Figma, Linear, etc.) work well because individual changes have little impact, and conflict resolution is best avoided by users noticing that someone else is working on the same spot and simply receiving realtime updates.
Then again, hotel booking or similar has no need for something like that.
Then there's been middle ground, like an enterprise logistics app that had some badly YOLO'd syncing. It kind of needed some of it, but there was no upfront planning, and it took time to retrofit a sane design since there were so many domain- and system-specific things lurking with surprises.
Sorry, but this is 100% a case of privileged developers thinking their compute infrastructure situation generalizes: it doesn't, and it is a mistake to take shortcuts that assume as much.
10ms is a best case for DOCSIS3.0/3.1, it means you have near optimal routing and infrastructure between you and the node or are using some other transport like ethernet that is then fed by fiber. I currently get 24ms to my local Ookla host a couple of miles away over a wired connection with a recent DOCSIS3.1 modem. Hotel internet is likely to be backed by business fiber. They're likely throttling you.
I worked for an ISP for several years, there's a huge range of service quality even within the same provider, zipcode, and even same location depending on time of day.
After that, with competent engineering everything should be faster on the client, since it only needs state updates, not a complete re-render
If you don't have competent engineering, SSR isn't going to save you
Also, the former technologies are local-first in theory, but without conflict resolution they can break down easily. This comes from my experience building mobile apps that need to be local-first, which led me to using CRDTs for that use case.
A native app is installed and available offline by default. A website needs a bunch of weird shenanigans with AppManifest or ServiceWorker, which is more like a pile of parts you can maybe use to build offline availability.
Native apps can just… make files, read and write from files with whatever 30 year old C code, and the files will be there on your storage. Web you have to fuck around with IndexedDB (total pain in the ass), localStorage (completely insufficient for any serious scale, will drop concurrent writes), or OriginPrivateFileSystem. User needs to visit regularly (at least once a month?) or Apple will erase all the local browser state. You can use JavaScript or hit C code with a wrench until it builds for WASM w/ Emscripten, and even then struggle to make sync C deal with waiting on async web APIs.
Apple has offered CoreData + CloudKit since 2015, a complete first-party solution for local apps that sync, no backend required. I'm not a Google enthusiast; maybe Firebase is their equivalent? Idk.
Ad: unless you use Conveyor, my company's product, which makes it as easy as shipping a web app (nearly):
You are expected to bring your own runtime. It can ship anything but has integrated support for Electron and JVM apps, Flutter works too although Flutter Desktop is a bit weak.
I don't think Apple's solution syncs seamlessly, I needed to use CRDTs for that, that's still an unsolved problem for both mobile and web.
You just have to write one for every client, no big deal, right? Just 2-5 (depending on if you have mobile clients and if you decide to support Linux too) times the effort.
You even say it yourself, you'll have to use Apple's sync and data solutions, and figure it out for Windows, Android and maybe Linux. Should be easy to sync data between the different storage and sync options...
Oh, and you have to figure out how to build, sign and update for all OSes too. Pay the Apple fee, the Microsoft whatever nonsense to not get your software flagged as malware on installation. It's around a million times easier to develop and deploy a web application, and that's why most developers and companies are defaulting to that, unless they have very good reasons.
I don't feel like I know all the answers, but as the creator of Replicache and Zero here is why I feel a pull to the web and not mobile:
- All the normal reasons the web is great – short feedback loop, no gatekeepers, etc. I just prefer to build for the web.
- The web is where desktop/productivity software happens. I want productivity software that is instant. The web has many, many advantages and is the undisputed home of desktop software now, but ever since we went to the web, interaction performance has tanked. The reason is that all software (including desktop) is client/server now and the latency shows up all over the place. I want to fix that, in particular.
- These systems require deep smarts on the client – they are essentially distributed databases, and they need to run that engine client-side. So there is the question of what language to implement in. You would think that C++/Rust -> WASM would be obvious but there are really significant downsides that pull people to doing more and more in the client's native language. So you often feel like you need to choose one of those native languages to start with. JS has the most reach. It's most at home on the desktop web, but it also reaches mobile via RN.
- For the same reason as above, the complex productivity apps that are often targeted by sync engines are themselves often written mainly in web tech on mobile, because they are complex client-side systems and need to pick a single implementation language.
But the web is primarily where a lot of productivity and collaboration happens; it’s also a more adversarial environment. Syncing state between tabs; dealing with storage eviction. That’s why local first is mostly web based.
The PWA capabilities of webapps are pretty OK at this point. You can even drive notifications from the iOS pinned PWA apps, so personally, I get all I need from web apps pretending to be mobile apps.
Local-first is actually the default in any native app
Before the emergence of tools like Zero I wouldn't have ever considered attempting to recreate the experience of a Google Sheet in a web app. I've previously built many live updating UIs using web sockets but managing that incoming data and applying it to the right area in the UI is not trivial. Take that and multiply it by 1000 cells in a Sheet (which is the wrong approach anyway, but it's what I knew how to build) and I can only imagine the mess of code.
Now with Zero, I write a query to select the data and a mutator to change the data and everything syncs to anyone viewing the page. It is a pleasure to work with and I enjoy building the application rather than sweating dealing with applying incoming hyper specific data changes.
Main problems I have are related to distribution and longevity -- as the article mentions, it only grows in data (which is not a big deal if most clients don't have to see that), and another thing I think is more important is that it's lacking good solutions for public indexes that change very often (you can in theory have a public readable list of ids). However, I recently spoke with Anselm, who said these things have solutions in the works.
All in all local-first benefits often come with a lot of costs that are not critical to most use cases (such as the need for much more state). But if Jazz figures out the main weaknesses it has compared to traditional central server solutions, it's basically a very good replacement for something like Firebase's Firestore in just about every regard.
My favorite so far is Triplit.dev (which can also be combined with TanStack DB); 2 more I like to explore are PowerSync and NextGraph. Also, the recent LocalFirst Conf has some great videos, currently watching the NextGraph one (https://www.youtube.com/watch?v=gaadDmZWIzE).
Needing to support clients that don't phone home for an extended period, and therefore need to be rolled forward from a really old schema state, seems like a major hassle, but maybe I'm missing something. Trying to troubleshoot one-off frontend bugs for a single product user can be a real pain; I'd hate to see what it's like when you have to factor in the state of their schema as well.
And then it just... never happened. 20 years went by, and most web products are still CRUD experiences, this site included.
The funny thing is it feels like it's been on the verge of becoming mainstream for all this time. When meteor.js got popular I was really excited, and then with react surely it was gonna happen - but even now, it's still not the default choice for new software.
I'm still really excited to see it happen, and I do think it will happen eventually - it's just trickier than it looks, and it's tricky to make the tooling so cheap that it's worth it in all situations.
This site being a CRUD app is a feature. Sometimes simplicity is best. I wouldn't want realtime updates, too distracting.
I'm still excited about the prospects of it — shameless plug: actually building a tool with one-of-a-kind messaging experience that's truly real-time in the Google docs collaboration way (no compose box, no send button): https://kraa.io/hackernews
We run into human-perceptible relativistic limits in latency. Light takes about 67 ms to travel half the earth's circumference in a vacuum, and our signals are often worse off. They don't travel in an idealized straight path, get converted to electrons and radio waves, and have to hop through more and more intermediaries like load balancers and DDoS protections.
In many cases latency is worse than it used to be.
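For a rough sense of the physical floor, here is the back-of-the-envelope arithmetic, assuming Earth's circumference of ~40,075 km and a typical fiber refractive index of about 1.47:

```javascript
// Physical floor on one-way long-haul latency, in milliseconds.
const C_KM_PER_S = 299792;            // speed of light in vacuum
const HALF_CIRCUMFERENCE_KM = 20037;  // half of Earth's ~40,075 km
const FIBER_INDEX = 1.47;             // light in glass travels at ~1/1.47 of c

const vacuumMs = (HALF_CIRCUMFERENCE_KM / C_KM_PER_S) * 1000;
const fiberMs = vacuumMs * FIBER_INDEX;

console.log(Math.round(vacuumMs)); // 67, best case: straight line in vacuum
console.log(Math.round(fiberMs));  // 98, the same path in today's fiber
```

Double those for a round trip, before any routing detours, queuing, or protocol handshakes, which is why antipodal RTTs near 200 ms are already close to the physical limit for conventional fiber.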
Roughly: Meteor required too much vertical integration on each part of the stack to survive the rapidly changing landscape at the time. On top of that, a lot of the team's focus shifted to Apollo (which, at least from a commercial point of view, seems to have been a good decision).
It also had some pretty serious performance bottlenecks, especially when observing large tables for changes that need to be synced to subscribing clients.
I agree though, it was a great framework for its day. Auth bootstrapping in particular was absolutely painless.
Given this, I reject your assertion that Meteor is limited to MongoDB and "toy apps".
Most of the solutions with 2 way sync I see work great in simple rest and hobby "Todo app" projects. Start adding permissions and evolving business logic, migrations, growing product and such, and I can't see how they can hold up for very long.
Electric gives you the sync for reads with their "views", but all writes still happen normally through your existing api / rest / rpc. That also makes it a really nice tool to adopt in existing projects.
Just to note that, with TanStack DB, Electric now has first class support for local writes / write-path sync using transactional optimistic mutations:
https://electric-sql.com/blog/2025/07/29/local-first-sync-wi...
For instance, one user closes something while another aborts the same thing.
Described here https://blog-doe.pages.dev/p/my-front-end-state-management-a...
I've already made improvements to that approach. Decoupling the backend and frontend actually feels like reducing complexity.
The example is a bit contrived, but it roughly shows how we're using it. I have built a custom sync API that accepts in the request body a list of <object_id>:<short_hash> pairs and returns a kind of JSON list in the format
<id>:<hash>:<json_object>\n <id>:<hash>:<json_object>\n
API compares what client knows vs. current state and only returns the objects that were updated/created and separately objects that were removed. Not ideal for large collections (but then again why did I store 50mb of historical data on the client in the first place? :D)
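A simplified sketch of that compare step (the function and variable names are mine; assume the server keeps the id, short hash, and serialized object for each record):

```javascript
// Server-side diff for a hash-based sync endpoint (simplified).
// `known` is what the client sent: a Map of id -> short hash.
// `current` is server state: a Map of id -> { hash, obj }.
function diffSync(known, current) {
  const updated = [];
  const removed = [];
  for (const [id, { hash, obj }] of current) {
    // New to the client, or changed since it last synced.
    if (known.get(id) !== hash) {
      updated.push(`${id}:${hash}:${JSON.stringify(obj)}`);
    }
  }
  for (const id of known.keys()) {
    // The client has it, but the server no longer does.
    if (!current.has(id)) removed.push(id);
  }
  return { updated: updated.join('\n'), removed };
}
```

The response size is then proportional to what actually changed, not to the collection, which is the whole point of sending hashes instead of re-downloading everything.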
Long story short, if requirements aren't strictly real time collaborative and online-enabled, I've found rolling something yourself more in the vein of a "fat client" works pretty well too for a nice performance boost. I generally prefer using IndexedDB directly— well via Dexie, which has reactive query support.
> instant UX
I do not get the hype. At all. "Local first" and "instant UX" are the least of my concerns when it comes to project management. "Easy to find things" and "good visibility" are far more important. Such a weird thing to index on.
I might interact with the project management tool a few times a day. If I'm so frequently accessing it as an IC or an EM that "instant UX" becomes a selling point, then I'm doing something wrong with my day.
I've never used a project manager and thought to myself "I want to switch because this is too slow". Even Jira. But I have thought to myself "It's too difficult to build a good workflow with this tool" or "It's too much work to surface good visibility".
This is not a first-person shooter. I don't care if it's 8ms vs 50ms or even 200ms; I want a product that indexes on being really great at visibility.
It's like indexing your buying decision for a minivan on whether it can do the quarter mile at 110MPH @ 12 seconds. Sure, I need enough power and acceleration, but just about any minivan on the market is going to do an acceptable and safe speed and if I'm shopping for a minivan, its 1/4 mile time is very low on the list. It's a minivan; how often am I drag racing in it? The buyer of the minivan has a purpose for buying the minivan (safety, comfort, space, cost, fuel economy, etc.) and trap speed is probably not one of them.
It's a task manager. Repeat that and see how silly it sounds to sweat a few ms interaction speed for a thing you should be touching only a few times a day max. I'm buying the tool that has the best visibility and requires the least amount of interaction from me to get the information I need.
Growing up, my folks had an old Winnebago van that took 2+ minutes to hit 60 mph, which made highway merges a white-knuckle affair, especially uphill. Performance was a criterion they considered when buying their next minivan. Modern minivans all have acceptable acceleration; it's still important, it's just no longer something you need to think about.
However, not all modern interfaces provide an acceptable response time, so it's absolutely a valid criterion.
As an example, we switched to a SaaS version of Jira recently and things became about an order of magnitude slower. Performing a search now takes >2000ms, opening a filter dropdown takes ~1500ms, filtering the dropdown contents takes another ~1500ms. The performance makes using it a qualitatively different experience. Whereas people used to make edits live during meetings I've noticed more people just jotting changes down in notebooks or Excel spreadsheets to (hopefully remember to) make the updates after the meeting. Those who do still update it live during meetings often voice frustration or sometimes unintentionally perform an operation twice because there was no feedback that it worked the first time.
Going from ~2000ms to ~200ms per UI operation is an enormous improvement. But past that point there are diminishing returns: from ~200ms to ~20ms is less necessary unless it's a game or drawing tool, and going from 20ms to 2ms is typically overoptimization.
Not quite the same as responsiveness but editing text fields in JIRA have a tendency of not saving in progress work if you accidentally escape out. Also hyperlinking between the visual and text mode is pretty annoying since you can easily forget which mode you’re in.
Honestly as I type these out there are more and more frustrations I can think of with JIRA. Will we ever move away? Not anytime soon. It integrates with everything and that’s hard to replace.
It’s still frustrating though.
It all depends on what we do consider "good enough". 200ms total page render time would be "blazing fast" for me already. I've just clicked around Github (supposed to be globally fast, can we agree?) and the SPA page changes are 1-1.5s to complete.
To continue my example above: your computer peripherals are probably good enough. Have you considered what it would be like with a garbage-tier mouse? Similarly, maybe you wouldn't notice the difference with a better mouse. I do, because a standard office mouse is not the pace I'm moving at. (No, I'm not the Flash, I am just fast and precise with my mouse.)
If anything, this gives us a glimpse of what's possible. The latency benchmark[1] of text editors has given us something to think about. In the past decade (already?!) that article was probably the sole reason for drawing public attention to this topic[2] . For example, JetBrains have since put considerable work into improving their IDEs (IntelliJ IDEA etc). They had called it "zero latency" mode.
[1]: https://pavelfatin.com/typing-with-pleasure/ [2]: small study from 2023 https://dl.acm.org/doi/fullHtml/10.1145/3626705.3627784
That said, I really enjoy Linear (it reminds me a lot of buganizer at Google). The speed isn't something I notice much at all, it's more the workflow/features/feel.
I hate Jira with a burning passion simply because it is slow where I live (in China, with a VPN). Even minor interactions, like clicking on a task’s description to edit it, takes about 2 seconds. Opening a task from a list takes around 5 seconds.
The result is that I and my coworkers avoid using Jira unless we really have to. Ad-hoc work that wasn’t planned as part of the sprint just doesn’t get tracked because doing so is unreasonably painful.
Doing that with any other system, sync-engine or not, requires a huge mess of code and ends up implementing some sort of ad-hoc glue code to make even a part of this work.
I'm building an app right now with it and I'm currently so much further ahead in development than I could be with any other setup, with no bugs or messy code.
Your choice is either 100% server-side like v1 Rails, or some sort of ad-hoc sync/update system. My argument is that you should either stick 100% server-side, or go all the way client-side properly with a good sync engine. It's the middle that sucks, and while there's a chunk of apps that benefit from being fully server-side, that doesn't refute the point that you can build much faster-responding apps client-side, and that users generally prefer them, rightly so.
First I used PouchDB which is also awesome https://pouchdb.com/ but now switched to SQLite and Turso https://turso.tech/ which seems to fit my needs much better.
My use case is scoring live events that may or may not have Internet connection. So normal usage is a single person but sometimes it would be nice to allow for multi person scoring without relying on centralized infrastructure.
Here's the app I built if you want to try it out: https://github.com/chr15m/watch-later
I've been writing a budget app for my wife and me, and I've made it 100% free with third-party hosting:
* InstantDB free tier allows 1 dev. That's the remote sync.
* Netlify for the static hosting
* Free private GitLab CI/CD for running some email notification polling, basically a poor man's hosted cron.
IIUC, InstantDB is open source with a Docker container you can run yourself, but at this point it's designed to run in a more cloud-like environment than I'd like. Last time I checked there was at least one open PR to make it easier to run in a different environment, but I haven't checked in recently.
It seems like it'll be impossible without an overlay network (like Yggdrasil, i2p), but these will be too heavy for mobile devices without a dedicated functioning relay... here we go again.
As far as I can tell, it's VASTLY more capable than all of these new options. It has full-text search, all sorts of query optimizations, different storage backends in both the browser and server, and more.
https://rxdb.info/rx-storage-pouchdb.html
My point/question still stands though - rxdb seems to be vastly more capable than all of the new tools that get all the attention. Very peculiar
PouchDB is definitely the OG local-first database; it inspired RxDB and was its core for a long time. It definitely has flaws, mainly developer ergonomics and a not-very-up-to-date, non-homogeneous ecosystem. My feeling is that none of these issues were the reason RxDB parted ways; they could have decided to > contribute < by fixing and improving them, but that would not have let them build the moat they use to push users toward pro features.
Secure Connection Failed
An error occurred during a connection to bytemash.net. PR_END_OF_FILE_ERROR
It shows how garbage the web has become when a low-latency click action qualifies as "impossibly fast". This is ridiculous.
Were some extras installed? Or is this one of those tools that needs a highly performant network?
Large numbers of custom workflows and rules can do it, too, but most have been the first.
I have only seen a few self hosted jira, but all of those were mind numbingly slow.
Jira Cloud, on the other hand, is faster now than it was in 2018 from what I remember, but I still call it painful any time I am trying to be quick about something; most of the time, though, it is merely annoying.
While I see strict safety/reliability/maintainability concerns as a net positive for the ecosystem, I also find that we are dragged down by deprecated concepts at every step of our way.
There's an ever-growing disconnect. On one side we have the means hardware offers for achieving top performance, be it specialized instruction sets or a completely different type of chip, such as TPUs and the like. On the other side live the denizens of the peak of software architecture, to whom all of it sounds like wizard talk. Time and time again, what is lauded as convention over configuration ironically becomes the maintenance nightmare it tries to solve, as these conventions come with configurations for systems that do not actually exist. All the while, these conventions breed an incompetent generation of people who are not capable of understanding the underlying contracts and constraints within systems, myself included. It became clear that, for example, there isn't much sense in learning a SQL engine's specifics when your job forces you to use Hibernate, which puts a lot of intellectual strain into following OOP, a movement characterized by deliberately departing from performance in favor of being more intuitive, at least in theory.
As limited as my years of experience are, I can't help but feel complacent in the status quo, unless I take deliberate action to continuously deepen my knowledge and work on my social skills to gain whatever agency and proficiency I can get my hands on.
Developers of the past weren't afraid to tell a noob (remember that term?) to go read a few books before joining the adults at the table.
Nowadays it seems like devs have swung the other way and are much friendlier to newbs (remember that distinction marking a shift?).
In 2005 we wrote entire games for browsers without any frontend framework (jQuery wasn't invented yet) and managed to generate responses in under 80 ms in PHP. Most users had their first bytes in 200 ms and it felt instant to them, because browsers are incredibly fast, when treated right.
So the Internet was indeed much faster then than it is now. Just look at GitHub. They used to be fast. Now they've rewritten their frontend in React and it feels sluggish and slow.
I find this is a common sentiment, but is there any evidence that React itself is actually the culprit in GitHub's supposed slowdown? GitHub has updated its architecture many times over and its scale has increased by orders of magnitude, quite literally serving over a billion git repos.
Not to mention that the implementation details of any React application can make or break its performance.
Modern web tech often becomes a scapegoat, but the web today enables experiences that were simply impossible in the pre-framework era. Whatever frustrations we have with GitHub’s UI, they don’t automatically indict the tools it’s built with.
This was actually the recommended way to do it for years, with the atom/molecule/organism/section/page style of organizing React components intentionally moving data access up the tree into organisms and higher. Don't know what the current recommendations are.
Used properly, React’s overhead isn’t significant enough on its own to cause noticeable latency.
And decided to drop legacy features such as <a> tags and broke browser navigation in their new code viewer. Right click on a file to open in a new tab doesn’t work.
The techniques Linear uses are not so much about backend performance and are applicable to any client-server setup, really. It's not a JS/web-specific problem.
But the industry is going the other way: building frontends that try to hide slow backends, and in doing so handling so much state (and visual fluff) that they get fatter and slower every day.
All to avoid writing a bit of JavaScript.
The bottleneck is not the roundtrip time. It is the bloated and inefficient frontend frameworks, and the insane architectures built around them.
Here's the creator of Datastar demonstrating a WebGL app being updated at 144FPS from the server: https://www.youtube.com/watch?v=0K71AyAF6E4&t=848
This is not magic. It's using standard web technologies (SSE), and a fast and efficient event processing system (NATS), all in a fraction of the size and complexity of modern web frameworks and stacks.
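SSE really is this plain: the `text/event-stream` format is just prefixed lines of text over a long-lived HTTP response. A minimal sketch of the server-side event framing (the function name is mine, not Datastar's):

```javascript
// Minimal sketch of the Server-Sent Events wire format (text/event-stream).
// Each event is one or more "data:" lines terminated by a blank line; an
// optional "event:" field names the event type. No framework required.
function formatSseEvent(data, eventName) {
  const lines = [];
  if (eventName) lines.push(`event: ${eventName}`);
  // Multi-line payloads become multiple data: fields per the spec.
  for (const line of String(data).split("\n")) {
    lines.push(`data: ${line}`);
  }
  return lines.join("\n") + "\n\n";
}

// The browser side just listens with the built-in EventSource:
//   const es = new EventSource("/updates");
//   es.addEventListener("patch", (e) => { /* apply the update */ });
```

The server writes these frames to the open response as updates arrive; the browser's `EventSource` handles reconnection for free.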
Sure, we can say that this is an ideal scenario, that the server is geographically close and that we can't escape the rules of physics, but there's a world of difference between a web UI updating at even 200ms, and the abysmal state of most modern web apps. The UX can be vastly improved by addressing the source of the bottleneck, starting by rethinking how web apps are built and deployed from first principles, which is what Datastar does.
The entire thing is a JavaFX app (i.e. desktop app), streaming DOM diffs to the browser to render its UI. Every click is processed server side (scrolling is client side). Yet it's actually one of the faster websites out there, at least for me. It looks and feels like a really fast and modern website, and the only time you know it's not the same thing is if you go offline or have bad connectivity.
If you have enough knowledge to efficiently use your database, like by using pipelining and stored procedures with DB enforced security, you can even let users run the whole GUI locally if they want to, and just have it do the underlying queries over the internet. So you get the best of both worlds.
There was a discussion yesterday on HN about the DOM and how it'd be possible to do better, but the blog post didn't propose anything concrete beyond simplifying and splitting layout out from styling in CSS. The nice thing about JavaFX is it's basically that post-DOM vision. You get a "DOM" of scene graph nodes that correspond to real UI elements you care about instead of a pile of divs, it's reactive in the Vue sense (you can bind any attribute to a lazily computed reactive expression or collection), it has CSS but a simplified version that fixes a lot of the problems with web CSS and so on and so forth.
You can use JavaFX to make mobile apps. So it's likely just that the authors haven't bothered to do a mobile friendly version.
> The entire thing is a JavaFX app (i.e. desktop app)
Besides, this discussion is not about whether or not a site is mobile-friendly.
On this site, every mouse move and scroll is sent to the server. This is an incredibly chatty site--like, way more than it needs to be to accomplish this. Check the websocket messages in Dev Tools and wave the mouse around. I suspect that can be improved to avoid constantly transmitting data while the user is reading. If/when mobile is supported, this behavior will be murder for battery life.
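The usual fix for that chattiness is throttling input events before they ever reach the socket. A generic sketch (the `socket` in the usage comment is hypothetical, not this site's actual code):

```javascript
// Sketch: throttle high-frequency input events (mousemove, scroll) so at
// most one message per interval reaches the websocket, instead of one per
// DOM event. Real implementations usually also flush a trailing call so
// the final cursor position isn't lost. The clock is injectable for testing.
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      fn(...args);
    }
  };
}

// Usage (hypothetical socket): send at most 20 position updates per second.
//   document.addEventListener("mousemove",
//     throttle((e) => socket.send(JSON.stringify({ x: e.clientX, y: e.clientY })), 50));
```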
So, like most of the non-first world? Hell, I'm in a smaller town/village next to my capital city for a month and internet connection is unreliable.
Having said that, the website was usable for me - I wouldn't say it's noticeably fast, but it was not slow either.
Firefox mobile seems to think the entire page is a link. This means I can't highlight text for instance.
Clicking on things feels sluggish. The responses are fast, but still perceptible. Do we really need a delay for opening a hamburger menu?
Many of us don't have to worry about this. My entire country is within 25ms RTT of an in-country server. I can include a dozen more countries within an 80ms RTT. Lots of businesses focus just on their country and that's profitable enough, so for them they never have to think about higher RTTs.
edit: I am 80 miles from EWR not 200
More relevantly... who wants to architect a web app to have tight latency requirements like this, when you could simply not do that? GeForce Now does it because there's no other way. As a web developer you have options.
From Philadelphia suburbs to my actual Fly app in:
EWR 8.5ms (NYC)
SJC 75ms (California)
CDG 86ms (France, cross atlantic)
GRU 126.2ms (Brazil)
HKG 225.3ms (Hong Kong)
For me, on the web today, the click feedback for a large website like YouTube is 2 seconds for the first change and 4 seconds for content display. 4000 milliseconds. I'm not even on some bad connection in Africa. This is a gigabit connection with 12ms of latency according to fast.com.
If you can bring that down to even 200ms, that'll feel comparatively instantaneous to me. When the whole internet feels like that, we can talk about taking it to 16ms.
We almost forgot that's the point. Speed is good design, the absence of something being in the way. You notice a janky cross platform app, bad electron implementation, or SharePoint, because of how much speed has been taken away instead of how much has been preserved.
It's not the whole of good design though, just a pretty fundamental part.
Sports cars can go fast even though they totally don't need to. Their owners aren't necessarily taking them to the track, but if they step on it, they go. It's power.
In fact, at the Better Software Conference this year there were people discussing the fact that if you care about performance, people think your software didn't actually do the work, because they're not used to useful things being snappy.
Everything I read about Linear screams over-engineering to me. It is just a ticket tracker, and a rather painful one to use at that.
This seems to be endemic to the space though, eg Asana tried to invent their own language at one point.
That said at this point Linear has more strengths than just interaction speed, mainly around well thought out integrations.
I don’t find Linear to be all that quick, but apparently macOS thinks it’s a resource hog (or it has memory leaks). I leave Linear open and it perpetually has a banner telling me it was killed and restarted because it was using too much memory. That likely colors my experience.
It is specifically to do with behaviour that is enabled by using shared resources (like IndexedDB across multiple tabs), which is not simple HTML.
To do something similar over the network, you have until the next frame deadline. That’s 8-16ms RTT. So 4ms out and back, with a 0ms budget for processing. Good luck!
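The arithmetic behind that claim, as a sketch:

```javascript
// Back-of-envelope frame budget: to repaint on the very next frame after a
// server round trip, the RTT plus server processing must fit in one frame.
const frameBudgetMs = (fps) => 1000 / fps;

// Whatever the network doesn't eat is left for server-side processing.
// At 120 Hz the frame is ~8.3ms, so a 4ms-each-way RTT leaves roughly zero.
const processingBudgetMs = (fps, rttMs) => frameBudgetMs(fps) - rttMs;
```

Even at 60 Hz (~16.7ms frames), a 16ms RTT leaves under a millisecond of processing budget, which is why nobody architects ordinary web apps around next-frame server responses.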
I posted a little clip [1] of development on a multiplayer IDE for tasks/notes (local-first+e2ee), and a lot of people asked if it was native, rust, GPU rendered or similar. But it's just web tech.
The only "secret ingredients" here are using plain ES6 (no frameworks/libs), having data local-first with background sync, and using a worker for off-UI-thread tasks. Fast web apps are totally doable on the modern web, and sync engines are a big part of it.
I just profiled it to double-check. On an M4 MacBook Pro, clicking between the "Inbox" and "My issues" tabs takes about 100ms to 150ms. Opening an issue, or navigating from an issue back to the list of issues, takes about 80ms. Each navigation includes one function call which blocks the main thread for 50ms - perhaps a React rendering function?
Linear has done very good work to optimise away network activity, but their performance bottleneck has now moved elsewhere. They've already made impressive improvements over the status quo (about 500ms to 1500ms for most dynamic content), so it would be great to see them close that last gap and achieve single-frame responsiveness.
The comments are absolutely wild in here with respect to expectations.
The stated 500 ms to 1500 ms are unfortunately quite frequent in practice.
This means that it's safe for background work to block a web browser's main thread for up to 50ms, as long as you use CSS for all of your animations and hover effects, and stop launching new background tasks while the user is interacting with the document. https://web.dev/articles/optimize-long-tasks
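The linked advice boils down to chunking work and yielding back to the event loop between chunks. A minimal sketch (using `setTimeout` as the yield point; some newer browsers also offer `scheduler.yield()` for this):

```javascript
// Sketch of the long-task guidance: break background work into chunks and
// yield to the event loop between them, so no single stretch blocks the
// main thread past the ~50ms threshold discussed above.
const yieldToEventLoop = () => new Promise((resolve) => setTimeout(resolve, 0));

async function processInChunks(items, handleItem, chunkMs = 40) {
  let chunkStart = Date.now();
  for (const item of items) {
    handleItem(item);
    if (Date.now() - chunkStart >= chunkMs) {
      await yieldToEventLoop(); // input handling and rendering can run here
      chunkStart = Date.now();
    }
  }
}
```

Combined with the "pause background work while the user is interacting" rule, this keeps long computations from ever stealing a frame.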
That is the entire point of the app, surely! Whether or not the actual implementation is bad, syncing across devices is what users want in a note-taking app, for the most part.
If anything it is slow because it is a pain to navigate. I have browser bookmarks for my most frequented pages.
With the Linear approach, the server remains the source of truth.
Here I’ll offer my services. I’ll pretend to do a technical deep dive of your app for X amount. No one will know, I’ll just act super interested.
When the fuck did anyone ever go “omg this web app so impressive”, never, ever, never, ever.
Many blog post submissions here are someone diving into something they like, hardware, software, tool etc. and it’s just because people like to share.
I'm on the fence. IMO ivape offers a hypothesis but presents it as fact. That isn't a good starting point for a discussion (though it's a common mistake), but it doesn't prove they are wrong either. Btw, I believe the HN guidelines encourage you to take the positive angle, at least for comments.
As for the topic at hand: local-first means you end up with a cache, either in memory or on disk. If you've got the RAM and NVMe, you might as well use it for performance. Back in the day not much could be cached, and your connection was often too lousy or not 24/7. So you ended up with software distribution via 3.5-inch floppies or CD-ROM. Larger distributions used gigantic disk caches, either centralized (Usenet) or distributed (BitTorrent). But the 'you might as well use it' issue is that it introduces sloppiness. If you develop under huge constraints, you are disciplined into being deterred from starting, failing, or succeeding efficiently. We hardly ever hear about all the deterrence and failure.
> No API routes. No request/response cycles. No DTOs. Just… objects that magically sync. It kind of feels like cheating.
> What makes this powerful is that these aren’t just type definitions - they’re live, reactive objects that sync automatically.
That's what twigged my AI radar too. LLMs seem to really love that summarisation pattern of `{X is/isn’t just Y. Pithy concluding remark}`.
Watching Tuomas' initial talk about Linear's realtime sync, one of the most appealing aspects of their design was the reactive object graph they developed. They've essentially made it possible for frontend development to be done as if it's just operating on local state, reading/writing objects in an almost Active Record style.
The reason this is appealing is that when prototyping a new system, you typically need to write an API route or RPC operation for every new interaction your UI performs. The flow often looks like:
- Think of the API operation you want to call
- Implement that handler/controller/whatever based on your architecture/framework
- Design the request/response objects and share them between the backend/frontend code
- Potentially, write the database migration that will unlock this new feature
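A toy sketch of that reactive-object feel (not Linear's actual implementation; all names are hypothetical): writes to a plain-looking object are trapped and queued for background sync, so UI code never touches a request/response cycle.

```javascript
// Toy sketch of the Active Record-ish experience: a Proxy traps writes to
// what looks like a plain object, applies them locally (optimistically),
// and queues a sync operation for a background process to ship to the server.
function makeSynced(obj, syncQueue) {
  return new Proxy(obj, {
    set(target, prop, value) {
      target[prop] = value;            // optimistic local write, UI sees it now
      syncQueue.push({ prop, value }); // background sync would drain this queue
      return true;
    },
  });
}

const queue = [];
const issue = makeSynced({ title: "Old title" }, queue);
issue.title = "New title"; // feels like mutating local state; sync is invisible
```

Real systems layer conflict resolution, persistence, and change notification on top, but the basic trick is exactly this inversion: the object graph is the API.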
Jazz has the same benefits of sync + an observable object graph. Write a schema in the client code, run the Jazz sync server or use Jazz Cloud, then just work with what feel like plain JS objects.
Who paid you for these comments? Atlassian?
The enshittification that Jira went through is by itself an ad for Linear.