Uncaught TypeError: e.description is undefined
e https://storageclass.info/storageclasses/:6
filterStoragesClasses https://storageclass.info/storageclasses/:6
oninput https://storageclass.info/storageclasses/:6
2 storageclasses:6:204943
e https://storageclass.info/storageclasses/:6
filter self-hosted:195
filterStoragesClasses https://storageclass.info/storageclasses/:6
oninput https://storageclass.info/storageclasses/:6

Additionally, https://plausible.io/js/script.js is blocked by ad blockers, and the search then breaks completely.
A lot of this chart seems weird - is it somehow autogenerated?
For example, what does it mean for a driver to support ReadWriteOncePod? On Kubernetes, all drivers "automatically" support RWOP if they support normal ReadWriteOnce. I then thought maybe it meant the driver supported the SINGLE_NODE_SINGLE_WRITER CSI Capability (which basically lets a CSI driver differentiate RWO vs RWOP and treat the second specially) - but AliCloud disk supports RWOP on this chart despite not doing that (https://github.com/search?q=repo%3Akubernetes-sigs%2Falibaba...).
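For what it's worth, from the user side RWOP is just an access mode on the PVC and works with any CSI driver; roughly this (storage class name is a placeholder):

```yaml
# ReadWriteOncePod is enforced by the scheduler/kubelet (one pod at a time);
# a driver only needs SINGLE_NODE_SINGLE_WRITER if it wants to treat RWOP
# differently from plain RWO. The storage class name below is made up.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwop-example
spec:
  accessModes:
    - ReadWriteOncePod
  resources:
    requests:
      storage: 10Gi
  storageClassName: some-csi-class
```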
Another example, what does it mean for a driver to support "Topology" on this chart? The EBS driver allegedly doesn't despite using most (all?) of the CSI topology features: https://github.com/kubernetes-sigs/aws-ebs-csi-driver/blob/7...
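For context, topology support normally surfaces to users as volumeBindingMode/allowedTopologies on the StorageClass, something like this for EBS (zone values are placeholders):

```yaml
# Topology-aware provisioning sketch; the exact topology key is driver-specific,
# the well-known zone label is used here for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-topology-aware
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer   # provision in the zone the pod lands in
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values: ["us-east-1a", "us-east-1b"]
```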
Also, listing "ephemeral volume" support is kinda misleading because Kubernetes has a "generic ephemeral volumes" feature that lets you use any CSI driver (https://kubernetes.io/docs/concepts/storage/ephemeral-volume...).
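i.e. any driver that can do dynamic provisioning gets ephemeral volumes for free via a claim template inlined in the pod spec, roughly (class name is a placeholder):

```yaml
# Generic ephemeral volume: the PVC is created with the pod and deleted with it,
# no special "ephemeral" support needed from the CSI driver itself.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi
            storageClassName: any-csi-class
```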
Fixed a few where I saw things in their respective docs, and added features like file or object storage. Also added a few that weren't mentioned.
Topology is https://kubernetes-csi.github.io/docs/topology.html, ReadWriteOncePod is supposed to mean https://kubernetes.io/blog/2023/04/20/read-write-once-pod-ac...
Have you thought about TCP sockets between the apps to share state, or something like a Redis database?
In this example, I have 200GB of ephemeral storage available on each node, ideally I'd like something like this:
node1: /tmp/data1 (200GB free space)
node2: /tmp/data2 (200GB free space)
node3: /tmp/data3 (200GB free space)
node4: /tmp/data4 (200GB free space)
node5: /tmp/data5 (200GB free space)
...pods could somehow mount node{1..5} as a volume, which would give 5 * 200GB (~1TB) of space to write to; multiple pods could mount it and read the same data.

I tried out OpenEBS Replicated, and it is promising, but it doesn't really seem mature yet. I'm a bit scared to put production-critical data into it.
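To be clear about what I'm after, the consumer side would ideally just be a shared claim against whatever pools the node disks (class name below is hypothetical, with OpenEBS/Longhorn/whatever underneath):

```yaml
# Hypothetical shared claim carved out of the aggregated node-local disks.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-scratch
spec:
  accessModes:
    - ReadWriteMany            # several pods reading the same data
  resources:
    requests:
      storage: 800Gi           # out of the ~1TB aggregate
  storageClassName: pooled-node-storage   # placeholder for the distributed backend
```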
A lot of times, finding a solution further up the stack or settling for backups ends up being more robust and reliable. Many folks have been burnt by all the fun failure scenarios of replicated filesystems.
What I would like to do is develop a system where applications just need to request replicated volumes which span a specific failure domain and push that logic down to the platform.
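In Kubernetes terms, that mostly comes down to publishing a StorageClass per policy so apps only have to name the class; a rough sketch where the parameters are made up just to show the shape (real keys are driver-specific):

```yaml
# Illustrative only: the provisioner and parameter names are placeholders;
# each driver exposes its own replication/placement knobs.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: replicated-across-zones
provisioner: example.csi.vendor.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  replicaCount: "3"            # hypothetical replication factor
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.kubernetes.io/zone
        values: ["zone-a", "zone-b", "zone-c"]
```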
Doing anything special with your config? I'm already setting placement options and have played with replica options.
My only hope has been to wait for the V2 engine to become stable.