118 points by kirlev 15 hours ago | 6 comments
darkwater 2 hours ago
To the OP: nice karma trick, posting the URL with the anchor to bypass the HN duplicate detector. Dang & co, this is a bug; it should be fixed.

I know because I stumbled on the same page following the links from the blog of the author of another post that made the frontpage yesterday (https://news.ycombinator.com/item?id=45589156), liked the TernFS concept, submitted it and got redirected to https://news.ycombinator.com/item?id=45290245

president_zippy 11 hours ago
Could anybody with applicable experience tell me how this filesystem compares in the real world to Lustre?

If it is decisively better than Lustre, I am happy to make the switch over at my sector in Argonne National Lab where we currently keep about 0.7 PB of image data and eventually intend to hold 3-5 PB once we switch over all 3 of our beamlines to using Dectris X-Ray detectors.

Contrary to what the non-computer scientists insist, we only need about 20 Gb/s of throughput in either direction, so robustness and simplicity are the only concerns we have.

toast0 2 hours ago
If you only need 20 Gb/s, you might be able to meet your needs without an exotic distributed filesystem by just getting a single giant server with a rack full of JBODs:

Something like this [1] gets you 44 disks in 4U. You can probably fit 9 of those and a server with enough HBAs to interface with them in a 42U rack. 9x44x20TB = not quite 8 PB. Adjust for redundancy and/or larger drives. If you go with SAS drives, you can have two servers connected to the drives, with failover. Or you can set up two of these racks in different locations and mirror the data (somehow).

[1] https://www.supermicro.com/en/products/chassis/4U/847/SC847E... (as an illustration; SAS JBODs, aka disk shelves, are widely available from server vendors)
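
For anyone sanity-checking the math, here's a quick back-of-the-envelope sketch in Python. The 20 TB drives and 9x44 layout come from the figures above; the 11-wide double-parity groups are purely an illustrative assumption, not a recommendation:

    # Back-of-the-envelope capacity for the JBOD rack sketched above.
    CHASSIS_PER_RACK = 9
    DISKS_PER_CHASSIS = 44
    DRIVE_TB = 20

    raw_tb = CHASSIS_PER_RACK * DISKS_PER_CHASSIS * DRIVE_TB
    print(f"raw: {raw_tb} TB (~{raw_tb / 1000:.1f} PB)")  # 7920 TB, "not quite 8 PB"

    # Hypothetical redundancy layout: 11-wide groups, 2 parity drives each.
    GROUP_WIDTH, PARITY_PER_GROUP = 11, 2
    usable_tb = raw_tb * (GROUP_WIDTH - PARITY_PER_GROUP) / GROUP_WIDTH
    print(f"usable after parity: ~{usable_tb:.0f} TB (~{usable_tb / 1000:.1f} PB)")  # ~6480 TB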

mat_epice 8 hours ago
There are several other systems I would recommend before TernFS for your environment. If you're looking at Lustre versus this in particular, Lustre has been through the wringer, and ANL/DOE has plenty of people who understand it enough to run it well and fix it when it breaks.

However, you are right. Your bandwidth needs don't really require Lustre.

president_zippy 8 hours ago
Seriously man, I'm asking because I don't know: which filesystems do you recommend instead? I dabbled in CephFS because our data is write-once, but helping computer-illiterate research scientists at other universities and national labs retrieve their data is a lot simpler from Lustre because it's just plain old POSIX filesystem semantics.

I'm not joking, and I didn't ask this as a way to name-drop my experience and credentials (common 'round this neck o' the woods); I honestly don't know what all the much more competent organizations are doing and would really like to find out.

huntaub 7 hours ago
I’d be happy to chat more about your needs and try to help recommend a path forward. Feel free to shoot me an email at the address in my profile.
x______________ 5 hours ago
Is this an ad? Why can't the topic continue here as a reply to the OP?
Borg3 1 hour ago
Because it's a consulting opportunity.
nhanb 59 minutes ago
I read somewhere that Hacker News should have been named Startup News, and sometimes interactions like the one upthread remind me of that. I'm not saying it's wrong - if you're good at something, don't do it for free and all that - but it's kinda sad that in-depth discussions on public forums are getting harder and harder to find these days.
anon-3988 5 hours ago
0.7PB of compressed data?
cpach 15 hours ago
poppafuze 14 hours ago
Great default license.
Joel_Mckay 13 hours ago
CephFS looks stable, and has diskprediction and Prometheus modules:

https://docs.ceph.com/en/quincy/cephfs/index.html

https://github.com/ceph/ceph

Still not completely decoupled from host roles, but seems to work for some folks. =3

semessier 14 hours ago
Should post again when it has 5% of the features of the other parallel file systems, starting with RDMA. It's not even clear whether this FS stripes data, i.e. whether it's a parallel file system at all.
anon-3988 10 hours ago
Isn't this literally what ZFS is designed for? What is ZFS lacking that this is needed?
somat 10 hours ago
ZFS is not distributed, so this is probably closer to Ceph or Lustre. I have to admit, on my first pass through the page it failed to explain why it was better than Ceph.
president_zippy 9 hours ago
Given all the good work ZFS does locally, it does make you wonder what it would take to extend the concepts of ARC caching and RAID redundancy to a distributed system, one where all the nodes are joined together by RDMA rather than ethernet; one where reliability can be taken for granted (short of a rat chewing cables).

It would make for one heck of a FreeBSD development project grant, considering how superb their ZFS and their networking stack are separately.

P.S. Glad someone pointed this out tactfully. A lot of people would have pounced on the chance to mock the poor commenter who just didn't know what he didn't know. The culture associated with software development falsely equates being opinionated with being knowledgeable, so hopefully we get a lot more people reducing the stigma of not knowing and of saying "I don't know".

mgerdts 8 hours ago
This is a hobby project I’ve been thinking about for quite a while. It’s way larger than a hobby project, though.

I think the key to making it horizontally scalable is to allow each writable dataset to be managed by a single node at a time. Writes would go to blocks reserved for use by a particular node, but at least some of those blocks would be on remote drives via NVMe-oF or similar. All writes would be treated as sync writes so another node could have lossless takeover via ZIL replay.

Read-only datasets (via property or snapshot, including clone origins) could be read directly from any node. Repair of blocks would be handled by a specific node that is responsible for that dataset.

A primary node would be responsible for managing the association between nodes and datasets, including balancing load and handling failover. It would probably also be responsible for metadata changes (datasets, properties, nodes, devices, etc., not POSIX fs metadata) and the coordination required across nodes.

I don’t feel like I have a good handle on how TXG syncs would happen, but I don’t think that is insurmountable.
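
To make the bookkeeping concrete, here's a minimal sketch (Python; all names and structures are hypothetical, nothing lifted from ZFS itself) of the dataset-to-node directory a primary node might maintain, with failover amounting to reassignment plus ZIL replay on the new owner:

    # Hypothetical sketch of the "primary node" bookkeeping described above.
    # A real takeover would also replay the failed node's ZIL on the new owner
    # before it accepts writes.
    from collections import defaultdict


    class DatasetDirectory:
        """Tracks which node currently owns each writable dataset."""

        def __init__(self, nodes):
            self.live_nodes = set(nodes)
            self.owner = {}  # dataset name -> owning node

        def _least_loaded(self):
            # Count datasets per live node and pick the lightest one.
            load = defaultdict(int)
            for node in self.live_nodes:
                load[node] = 0
            for node in self.owner.values():
                if node in self.live_nodes:
                    load[node] += 1
            return min(load, key=load.get)

        def assign(self, dataset):
            """Give a new writable dataset to the least-loaded live node."""
            node = self._least_loaded()
            self.owner[dataset] = node
            return node

        def fail_node(self, dead_node):
            """Reassign datasets owned by a failed node to surviving nodes."""
            self.live_nodes.discard(dead_node)
            moved = {}
            for dataset, node in list(self.owner.items()):
                if node == dead_node:
                    new_owner = self._least_loaded()
                    self.owner[dataset] = new_owner
                    moved[dataset] = new_owner  # ZIL replay would happen here
            return moved


    if __name__ == "__main__":
        directory = DatasetDirectory(["node-a", "node-b", "node-c"])
        for name in ("tank/home", "tank/scratch", "tank/images"):
            print(name, "->", directory.assign(name))
        print("node-a failed, moved:", directory.fail_node("node-a"))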

nh2 9 hours ago
Even if you were to build a ZFS mega-machine with an exabyte of storage with RDMA (the latencies of "normal" Ethernet in the datacenters would probably not be good enough), wouldn't you still have the problem that ZFS is fundamentally designed to be managed by and accessed on one machine? All data in and out of it would have to flow through that machine, which would be quite the bottleneck.
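
To put a rough number on that bottleneck (the 400 Gb/s link is just an assumed figure for a very fast NIC, not anything from the article):

    # Rough illustration of the single-machine bottleneck; all numbers assumed.
    EXABYTE_BITS = 1e18 * 8        # 1 EB expressed in bits
    NIC_BITS_PER_SEC = 400e9       # one hypothetical 400 Gb/s link

    seconds = EXABYTE_BITS / NIC_BITS_PER_SEC
    print(f"~{seconds / 86400:.0f} days to push 1 EB through one 400 Gb/s link")
    # => roughly 231 days, and every client shares that one pipe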
president_zippy 7 hours ago
Because RDMA latency is still a lot lower than disk access latency, it depends more on whether the control logic can be generalized to distributed scale with some simple refactoring and a few calls to access remote shared memory, or whether a full-on rewrite is less time-consuming. I don't know, and I don't pretend to know.

All I know is that the semantics of RDMA (absent experience writing code that uses RDMA) deceive me into thinking there's some possibility I could try it and not end up regretting the time spent on a proof of concept.

mgerdts 9 hours ago
If your entire system is connected via RDMA networks (rather common in HPC), I would not worry at all about latency. If you are buying NICs and switches that are capable of 100Gb or better, there’s a reasonable chance they support RoCE.