66 points by rmoff 4 days ago | 7 comments
ncb9094 1 minute ago
To me it sounds like NATS JetStream, but in Rust. I wonder what the reliability looks like once it's prod-ready.
randito 8 hours ago
Great link. I've always been drawn to sqlite3 just from a simplicity and operational point of view. And with tools like "make it easy to replicate" Litestream and "make it easy to use" sqlite-utils, it just keeps getting easier.

And one of the first patterns I wanted to use was this: just a replicated, read-only event log that is very easy to understand and operate. Kafka is a beast to manage and run. We picked it at my last company -- and it was a mistake, when a simple DB would have sufficed.

https://github.com/simonw/sqlite-utils https://litestream.io/
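
The pattern is simple enough to sketch. Below is a minimal, hypothetical append-only log in plain Python/sqlite3; the table name, columns, and polling scheme are made up for illustration and are not anything Tansu or Litestream prescribe. One process appends, Litestream ships the file off-box, and consumers poll forward from an offset they track themselves.

  import json
  import sqlite3

  # Illustrative append-only event log: one writer, many polling readers.
  conn = sqlite3.connect("events.db")
  conn.execute("PRAGMA journal_mode = WAL")  # readers don't block the single writer
  conn.execute("""
      CREATE TABLE IF NOT EXISTS events (
          seq     INTEGER PRIMARY KEY AUTOINCREMENT,
          topic   TEXT NOT NULL,
          payload TEXT NOT NULL,
          ts      TEXT NOT NULL DEFAULT (datetime('now'))
      )""")

  def append(topic, event):
      # Single writer appends; Litestream replicates the file elsewhere.
      with conn:
          conn.execute("INSERT INTO events (topic, payload) VALUES (?, ?)",
                       (topic, json.dumps(event)))

  def read_from(topic, after_seq):
      # Consumers keep their own offset and poll forward, Kafka-style.
      cur = conn.execute(
          "SELECT seq, payload FROM events WHERE topic = ? AND seq > ? "
          "ORDER BY seq", (topic, after_seq))
      return cur.fetchall()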

tombert 1 hour ago
I love the idea of SQLite, but I actually really dislike using it.

I think part of my issue is that a lot of uses of it end up having a big global lock on the database file (see: older versions of Emby/Jellyfin), so you can't use it with multiple threads or processes, but I also haven't ever really found a case to use it over other options. I've never really felt the need to do anything like a JOIN or a UNION when doing local configuration, and for anything more complicated than a local configuration, I likely have access to Postgres or something. I mean, the Postgres executable is only ten or twenty megs on Linux, so it's not even that much bigger than SQLite for modern computers.

mjmas 1 hour ago

  PRAGMA journal_mode = WAL;
And set the busy timeout as well.

https://www.sqlite.org/c3ref/busy_timeout.html
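
For anyone wiring this up from application code, a minimal sketch with Python's stdlib sqlite3 (the 5-second value is just an example): the connect timeout maps to sqlite3_busy_timeout, and the same setting can also be applied as a PRAGMA.

  import sqlite3

  conn = sqlite3.connect("app.db", timeout=5.0)  # sqlite3_busy_timeout, in seconds
  conn.execute("PRAGMA journal_mode = WAL")      # the single writer no longer blocks readers
  conn.execute("PRAGMA busy_timeout = 5000")     # same knob via SQL, in milliseconds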

apgwoz 1 hour ago
Any good and honest Tansu experience reports out there? Would be nice to understand how “bleeding edge” this actually is in practice. The idea of a Kafka-compatible but trivial-to-run system like this is very intriguing!
nchmy 50 minutes ago
I wonder how it compares to Redpanda
ktzar 6 hours ago
I didn't know about Tansu and probably would not use it for anything too serious (yet!). But as a firm believer in event sourcing and the change of paradigm that Kafka brings, this is certainly interesting for small projects.
8organicbits 5 hours ago
Quite cool. 7000 records per second is usable for a lot of projects.

One note on the backup/migrate: I think you need a shared lock on the database before you copy the file. If you don't, the copy can end up corrupted. The SQLite docs have other recommendations too:

https://sqlite.org/backup.html
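
Concretely, the online backup API from that page is exposed in most language bindings. A rough sketch with Python's stdlib sqlite3 (file names are made up) that avoids the naive copy-the-file-while-it's-being-written problem:

  import sqlite3

  src = sqlite3.connect("tansu.db")
  dst = sqlite3.connect("tansu-backup.db")
  with dst:
      # Wraps sqlite3_backup_init/step/finish: copies pages so the result
      # is a consistent snapshot even while the source is in use.
      src.backup(dst)
  dst.close()
  src.close()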

brikym 4 hours ago
How does it compare to Redis streams with persistent storage?
tuananh 2 hours ago
Everything is dead; what lives on is the protocol.

Same for Redis, Kafka, ...