Just jump straight to the business logic; the scaffolding is already done for you.
I think implicit in your question is the idea that apps from now on will be bespoke, small, unique entities. But the truth is we are still going to be mostly solving already-solved problems, and enterprise software will still require the same massive codebases as before.
The real win of frameworks is that they keep your workers, AI or human, constrained to a known set of existing tools and patterns. That still matters in long-term AI-powered projects too. They also provide a battle-hardened collection of solutions that cover lots of edge cases you would never think to put in your prompts.
More code in the context window doesn't just increase the cost, it also degrades the overall performance of the LLM. It will start making more mistakes, cause more bugs, add more unnecessary abstractions, and write less efficient code overall.
You'll end up having to spend a significant amount of time guiding the AI to write a good framework to build on top of, and at that point you would have been better off picking an existing framework that was included in the training set.
Maybe future LLMs will do better here, but I wouldn't recommend doing this for anything larger than a landing page with current models.
I kid but any reason you can think of applies to app development too.
1. Good abstractions decrease verbosity and improve comprehension
2. Raw HTML/CSS/JS are out of distribution just like assembly (no one builds apps like this)
3. Humans need to understand and audit it
4. You'll waste time and tokens reinventing wheels
This intuitively makes sense. LLMs mimic human behavior and thought, so for all the reasons you'd get lost in a pile of web spaghetti or x86, so would an LLM.
Plenty of people build apps with vanilla CSS and JS (and HTML is just HTML). It's a really nice way to work.
Here are a few links to get you started.
https://dev.37signals.com/modern-css-patterns-and-techniques...
1. Unlimited projects: when you spin up traditional backends, you usually use VMs. It's expensive to start many of them. With Instant, you can create unlimited projects.
2. User experience: traditional CRUD apps work, but they don't feel delightful. If you want to support features like multiplayer, offline mode, or optimistic updates, you'll have to write a lot more custom infra. Instant gives you these out of the box, and the agents find it easier to write than CRUD code
3. Richer features: sometimes you'll want to add more than just a backend. For example, maybe you want to store files, or share cursors, or stream tokens across machines. These often need more bespoke systems (S3, Redis, etc). Instant already comes with these out of the box, and the agents know how to use them.
There are a few demo sections in the post that show this off. For example, you can click a button and you'll get a backend, without needing to sign up. And in about 25 lines of code, you'll make a real-time todo app.
How does it compare to photon networking? I've been using photon and webrtc mostly. I haven't had any issues, but I'm always interested in finding better solutions!
InstantDB is a joy to work with. Granted, I've only ever built small toy projects with it, but it's my go-to. Just so much simpler than anything else I've tried in this space.
The core product is so good that the AI emphasis feels weird. Hopefully that's just marketing and not a pivot. Unfortunate if that's what it takes to get funding these days.
We last updated our website when we open sourced back in August 2024 https://news.ycombinator.com/item?id=41322281
Back then most folks weren't building full-on apps with AI yet.
Since then we've seen a large number of people find us through content on creating apps with AI. We felt our previous messaging didn't speak to that and we thought it was time for a refresh.
We also invested a lot to make the agent experience with Instant a delight!
> AI emphasis
It's not quite marketing or a pivot. We've just noticed that most of our users are coding with AI, and really optimized for that too.
I think there's two surprises about this:
1. If it were easier to make apps multiplayer, I bet more apps would be. For example, I don't see why Linear gets to be multiplayer but other CRUD apps don't.
2. When the abstraction is right, building apps with sync engines is easier than building traditional CRUD apps. The Linear team mentioned this themselves here: https://x.com/artman/status/1558081796914483201
One problem you may encounter with the 5 USD node: how do you handle multiple projects? You could put them all in one VM, but that setup can get esoteric, and as you look for more isolation, the processes won't fit on such a small machine.
With Instant, you can make unlimited projects. Your app also gets a sync engine, which is both good for your users, and at least in our experiments, the AIs prefer building with it.
And if you ever want to get off Instant, the whole system is open source.
I still resonate with a good Hetzner box though, and it can make sense to self-host or to use more tried-and-true tech.
For what it's worth, with Instant you would get a lot more support for easy projects. At least in our benchmarks, AI agents found it easier to build with too.
I'd suggest including a skill for this, or if there's already one linking to it on the blog!
https://github.com/instantdb/instant/pull/2530
It should be live in a few minutes.
npx skills add instantdb/skills
Would recommend doing `bunx/pnpx/npx create-instant-app` to scaffold a project too!
IndexedDB, Postgres, JavaScript, TypeScript, Clojure. Not bad, but not much more attractive than the usual technology zoo any startup seems to end up with.
Sending you guys lots of love and best of luck!
Here is how I built it in a WUI: I sent SSE events from server to client streaming web-search progress, and the client could update an `x` box on the "parent" widget using the `id` from an SSE event via a simple REST call. The `id` could belong to the parent web search or to certain URLs being fetched. Then whatever was yielding the SSE lines would check the db and cancel the send (assuming it had not already sent all the words).
You kick off an agent. It reports work back to the user. The user can click cancel, and the agent gets terminated.
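That cancel loop can be sketched minimally like this (names are illustrative, and the in-memory set stands in for the db row the REST call would update):

```javascript
// In-memory stand-in for the cancellation flag that would live in the db.
const cancelled = new Set();

// What the client's REST call does when the user clicks the `x` box.
function requestCancel(id) {
  cancelled.add(id);
}

// The server-side loop yielding SSE lines: before each send,
// check whether this id has been cancelled and stop mid-stream if so.
function* streamWords(id, words) {
  for (const word of words) {
    if (cancelled.has(id)) return;
    yield word;
  }
}
```

A consumer that cancels after two words simply stops receiving the rest; no extra coordination is needed beyond the shared flag.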
You are right, this kind of UX comes very naturally with Instant. If an agent writes data to Instant, it shows up right away for the user. If the user clicks an `X` button, it propagates to the agent.
The basic sync engine would handle a lot of the complexity here. If the data streaming gets more complicated, you may want to use Instant streams. For example, if you want to convey updates character by character, you can use Instant streams as an included service, which does this extremely efficiently.
More about the sync engine: https://www.instantdb.com/product/sync More about streams: https://www.instantdb.com/docs/streams
Instant crosses that persistence boundary: your app can propagate updates to anyone who has subscribed to the abstract datastore — which is on a server somewhere, so you the engineer don't have to write that code. Right?
But how is this different/better than things like, i wanna say, vercel/nextjs or the like that host similar infra?
This can work great, but you lose some benefits: your pages won't work offline, they won't be real-time, and if you make changes, you'll have to wait for the server to acknowledge them.
Instant pushes more of the work to the frontend. You make queries directly in your frontend, and Instant handles all the offline caching, the real-time, and the optimistic updates.
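To make "optimistic updates" concrete, here's the kind of bookkeeping you'd otherwise hand-roll (a toy sketch, not Instant's implementation): the UI renders the confirmed state plus pending mutations, and a rejected mutation simply drops out.

```javascript
function createOptimisticStore(initial) {
  let confirmed = initial;   // last server-acknowledged state
  const pending = [];        // mutations applied locally, not yet acked

  return {
    // What the UI renders: confirmed state with pending mutations replayed.
    view() {
      return pending.reduce((state, m) => m(state), confirmed);
    },
    mutate(m) { pending.push(m); },                 // shows up instantly
    ack(m) {                                        // server accepted it
      confirmed = m(confirmed);
      pending.splice(pending.indexOf(m), 1);
    },
    reject(m) {                                     // server refused: roll back
      pending.splice(pending.indexOf(m), 1);
    },
  };
}
```

The subtle part in real systems is replaying pending mutations on top of fresh server state, which is exactly the machinery a sync engine gives you for free.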
You can have the best of both worlds though. We have an experimental SSR package, which to our knowledge is the first to combine _both_ SSR and real-time. The way it works:
1. Next SSRs the page
2. But when it loads, Instant picks it up and makes every query reactive.
More details here: https://www.instantdb.com/docs/next-ssr
I had a Show HN that was built with Instant: https://news.ycombinator.com/item?id=44247029 The common request from that thread was to add guest auth, and a few months later Instant had it baked in, so it was really easy to add that feature. Great dev experience :)
Though, their console feels like it didn't get the love that the rest of the infra / website did.
Congrats on the 1.0 launch! I'm excited to keep building with Instant.
We're going to redesign the dashboard in the next few weeks.
One interesting observation from our users: though they use the dashboard less in some ways (the AI agents spin up apps and make schema changes for them), we found people use it _more_ in other ways. Instant comes with an Explorer component, which lets you query your data. We found users want to engage with that a lot more.
We wanted to make a tool that (a) would make it easy to build delightful apps, and that (b) builders would find easy to use.
This got us into making things that touch both local-first and AI.
On the local-first side, we took on problems like offline mode, real-time, and optimistic updates.
On the AI side, we built a multi-tenant abstraction, so you can spin up as many apps as you like, and focused on great DX/AX so agents found Instant easy to use too.
We are similar to supabase in the sense that we support a relational database. We're different in that with us, you get real-time queries, offline mode, and optimistic updates out of the box.
> Pocketbase
I am not too familiar with Pocketbase.
Stopa also gave an answer here! https://news.ycombinator.com/item?id=47711866
I see you have support for vanilla js and svelte, but it's unclear whether you can get all the same functionality if you don't use React. Is React the only first class citizen in this stack?
> Is React the only first class citizen in this stack?
Each system gets the same functionality. We centralize the critical logic for the client SDK in "@instantdb/core". React, Svelte, Tanstack, React Native et al are wrappers around that core library.
The one place where it's lacking a bit is the docs. We have specific docs for each library, but a lot of other examples assume React.
We are improving this as we speak. For now, the assumption on React is quite light in the docs, so it's relatively straightforward to figure out what needs to happen for the library of your choice.
If Instant is compromised, then that's a lot more dangerous. We minimize this risk by following security best practices: keeping data encrypted at rest, keeping secrets hashed at creation time, etc.
/s
If you want more details, read their open source codebase or ask them specifically what documentation would boost your confidence, instead of leaving snarky comments.
Are we supposed to expose all entities and relationships and rely on row level security?
The home page has some examples of complex startups that use Instant as their core infra:
https://www.instantdb.com/#:~:text=Startups%20love%20Instant
> Are we supposed to expose all entities and relationships and rely on row level security?
Yes. This may feel foreign, but we think it's one of the best ways to do permissions. We were originally inspired by Facebook's EntPrivacy. When you have permissions at the object layer, you can be more confident that _any_ query you write would be allowed.
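A toy illustration of object-layer permissions (my sketch, not Instant's actual rule language): because the check lives on the object, every query path inherits it.

```javascript
// The permission lives with the object, not with each query.
function canView(viewer, post) {
  return (
    post.visibility === 'public' ||
    post.authorId === viewer.id ||
    viewer.friendIds.includes(post.authorId)
  );
}

// Any query, however it's written, filters through the same check,
// so a new query can't accidentally leak objects.
function visiblePosts(viewer, posts) {
  return posts.filter(p => canView(viewer, p));
}
```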
I cover it in the essay here:
https://www.instantdb.com/essays/architecture#:~:text=is%20t...
To summarize:
In places where we process throughput, we generally stick a grouped queue and a threadpool that takes from it. The mechanics for this queue make it so that if there's one noisy neighbor, it can't hog all the threads.
There's more too (runbooks, rate limiting systems, buffers, isolated instances), but I thought this particular data structure was really fun to share.
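A rough sketch of the grouped-queue idea (illustrative, not the actual Clojure implementation): jobs are bucketed per tenant and drained round-robin, so a noisy neighbor only ever occupies one turn per rotation.

```javascript
class GroupedQueue {
  constructor() {
    this.groups = new Map(); // group -> array of pending jobs
    this.order = [];         // round-robin rotation of non-empty groups
  }
  push(group, job) {
    if (!this.groups.has(group)) {
      this.groups.set(group, []);
      this.order.push(group); // new group joins the rotation
    }
    this.groups.get(group).push(job);
  }
  pop() {
    if (this.order.length === 0) return undefined;
    const group = this.order.shift();
    const jobs = this.groups.get(group);
    const job = jobs.shift();
    if (jobs.length > 0) this.order.push(group); // back of the line
    else this.groups.delete(group);
    return job;
  }
}
```

The threadpool's workers call `pop()`; even if one tenant enqueues thousands of jobs, everyone else's jobs surface after at most one job per tenant in the rotation.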
Would love to check out /docs but it's currently a 404.
Our thinking was to first get the DX/AX + feature set solid with Instant, and then let folks bring their own Postgres.
We both offer real-time queries out of the box. I'm not 100% sure, but I think Convex also set up a multi-tenant database, so they can offer a good number of free projects as well.
The way I would differentiate Instant:
With Convex you write your queries as JavaScript functions. This means you have to write joins imperatively, for example. With Instant, you can write queries declaratively.
As of today Convex doesn't work offline, and you have to write optimistic updates manually. Instant can run offline and comes with optimistic updates out of the box.
Both Convex and Instant support files out of the box. But with Instant you can write CASCADE delete rules, and you also get other services, like presence and streams.
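A toy contrast of the two styles (the APIs here are illustrative stand-ins, not the actual Convex or Instant SDKs):

```javascript
const data = {
  posts: [{ id: 'p1', title: 'hello' }],
  comments: [{ id: 'c1', postId: 'p1', body: 'hi!' }],
};

// Imperative: you write the join yourself inside a query function.
function postsWithComments() {
  return data.posts.map(p => ({
    ...p,
    comments: data.comments.filter(c => c.postId === p.id),
  }));
}

// Declarative: you describe the shape you want and the engine joins,
// roughly what an InstaQL query looks like:
//   db.useQuery({ posts: { comments: {} } })
```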
The sync engine feature looks very interesting to me. There have been quite a few products available on the market today, but none has achieved a dominant share yet. So if this is your main strength, I'd like to see more demos built local first.
Curious if you considered shipping the engine itself as a standalone infra piece.
> Curious if you considered shipping the engine itself as a standalone infra piece.
We are thinking about supporting something like "Bring Your Own Postgres", which would allow folks to opt into just the sync engine piece.
Right now we're focused on the integrated system, because we really wanted to optimize for a delightful developer experience on greenfield projects.
1. Supabase runs on VMs, so they only support 2 free projects. We built our backend to be multi-tenant, so we can give you unlimited free projects.
2. Supabase doesn't support offline mode or optimistic updates. Instant gives you a sync engine which does.
1) Transparency on pricing: This builds confidence. Need to know exactly what I pay for additional egress/ops (read/write). "unlimited" is not sustainable for the provider (you). For example, Firestore has detailed pricing that makes scaling sustainable for them. see https://cloud.google.com/firestore/pricing.
2) Transparency on limits: req/s, max attributes, max value length, etc. What about querying non-local, non-indexed data (e.g. via a server-side call)? That's costly for you guys, so what's the limit?
3) Simpler code in the docs/examples overall. Currently, they're not bad, but not great. For example, change the "i" used everywhere to "inst" or "idb". Assume dev is a noob!
4) Simplify the terminology used. This is probably the most important but hardest thing. Internally, keep the same triple structure, but the dev just cares about tables/key/val, or tables + rows. Namespaces/entities are confusing. Also, be consistent/clear. For example: "Namespaces are equivalent to 'tables' in relational databases". Perhaps you meant "namespaces are just a list of tables/entities"? Slightly different, but far clearer I think. "Attributes are properties associated with namespaces"... I thought attributes were associated with entities? Please keep in mind, I am completely new to InstantDB, so I need to study the architecture more.
5) Simplify docs BIG TIME. And add an API REFERENCE (super important). Right now, you have: Tutorials, Examples, Recipes, Docs, Essays. These are all essentially "docs".
6) Simplify the "About" section. Should be 1/10th the size. Right now, it's like a fruit salad of docs, and re-iterating the features/benefits. Instead, put pics of both founders. Maybe investor list. Pics of your office?
Agreed lots of opportunity for simplification. There’s so much context in this space.
When talking about why sync we mention optimistic updates, multiplayer, and offline mode. To motivate the complexity of sync we talk about websockets, optimistic queues, and IndexedDB. To explain how we work we talk about triples, datalog, and CTEs.
We try to give a clean interface so devs don't need to worry about this complexity, but yeah, it's been an ongoing iteration to make things both easier to use and more transparent to understand!
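For the curious, the triple idea is simple to picture (a sketch of the model, not Instant's internal encoding): each non-id field of a row becomes an (entity, attribute, value) triple.

```javascript
// Decompose a row into (entity-id, attribute, value) triples.
function toTriples(row) {
  return Object.entries(row)
    .filter(([attr]) => attr !== 'id')
    .map(([attr, value]) => [row.id, attr, value]);
}
```

Queries over this shape are what datalog engines (and the CTEs mentioned above) are good at evaluating.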
But pairing Instant with Vercel works great too! We have a tutorial on how you can build an app with Instant and deploy it to Vercel here