RPyC came along in the aughts. There's a long history of "transparent clustering and rpc" efforts in Python that could be used or drawn on.
Sad to see that history ignored here.
Perhaps I am misunderstanding this, but after looking at the code what exactly are we achieving here over other frameworks? The repo is obviously very new (and the author certainly seems busy), so perhaps a better question is what do we aim to achieve? So far it seems like the exact same pattern with some catchy naming.
Regardless, I love ambitious projects furiously coded by one crazy person. And I mean "crazy" in the best sense of the word, not as an insult. This is what open source is all about.
Please prove us all wrong. If you fail, you'll learn a ton!
Me too; the world lost a treasure, once.
RIP Terry Davis. There was so much to be learned from your approach.
As in, what if there was a Linux distro that focused, primarily, on building a Lua layer on top of everything, system-wise? Just replace all the standard stuff with one single, system-friendly language: Lua. Keep C/C++ for everything as it currently is; put Lua on top, all the way to the desktop.
It’s only a thought experiment, except there are cases where I can see a way to use it, and in fact have done it, only not to the desktop: embedded, of course, for realtime data collection, processing, and management. In that setting, it is superlative to have a single system/app language on top of C/C++.
So I think there may be a point in the future where this ‘single language for everything’ becomes a mantra in distro land. I see the immense benefit.
Lua table state can be easily transferred, give or take a bit of resource management vis-à-vis syncing state and restoring it appropriately. Lua bytecode itself, in a properly defined manner, can serve as a perfectly cromulent wire spec. Do it properly and nobody will ever know it isn’t just a plain ol’ C struct on an event handler (one of the fastest), except it’ll be very well abstracted from the application.
Oh, and if things are doing bytecode, may as well have playback and transactions intrinsically… it is all, after all, just a stream.
App state as gstreamer plugin? Within grasp, imho…
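The "bytecode as wire spec" idea above can be sketched using Python's analogous mechanism (in Lua the same roles would be played by `string.dump` and `load`). This is only an illustrative sketch, not anything from the project under discussion:

```python
# Sketch: serialize a function's bytecode, "ship" it, and rebuild a
# callable on the receiving side. Python's marshal module is the rough
# analog of Lua's string.dump/load. All names here are illustrative.
import marshal
import types

def add(a, b):
    return a + b

# "Sender": dump the raw code object to wire bytes.
payload = marshal.dumps(add.__code__)

# "Receiver": reconstruct a function from the bytes.
code = marshal.loads(payload)
remote_add = types.FunctionType(code, globals())

print(remote_add(2, 3))  # 5
```

The same caveat applies in both languages: the bytecode format is specific to the VM version, so both ends of the wire must run compatible runtimes.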
What happened was the Angband community imploded, and the number of variants went way down.
I don't know if this is a generalizable example and there may have been other factors at work, but it is a cautionary tale.
Angband is still around btw, and is still excellent. But I believe it's C and text config files again now.
The sage elders suggested that making the language harder to change (cough Python) is more likely to result in a single widely used dialect, with the differences at the library level rather than at the language/macro level.
If just the end user application is in Lua, then maybe it's fine and the high level language slowdown won't matter. If you want to wrap the low level kernel APIs etc in a high level language as the canonical interface, I would be very skeptical.
This list alone has saved me many late debugging nights, just by not making or using systems that ignore the list.
[0]: https://en.wikipedia.org/wiki/Fallacies_of_distributed_compu...
It's easy to send text or bytecode to another instance of your runtime. I built a distributed system that sent native-code functions to be executed remotely.
Fair enough, it was for academic purposes, but it worked[1].
-------------------------------------
[1] By "worked" I mean it got a passing grade on a thesis that got an IMDB number. I probably still have the actual PDF somewhere, or maybe it was indexed by Google at some point.
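The "send text to another instance of your runtime" approach is simple enough to sketch. Here the "remote" runtime is simulated in-process, and the convention that the payload defines a `main()` entry point is an assumption of this sketch, not of any system mentioned above:

```python
# Minimal sketch of shipping source text to a peer runtime and running
# it there. In a real system the payload would travel over a socket;
# here the receiving side is simulated in-process.
import textwrap

def remote_execute(payload: str):
    """Simulates the receiving runtime: compile and run the payload."""
    namespace = {}
    exec(compile(payload, "<wire>", "exec"), namespace)
    return namespace["main"]()  # sketch convention: payload defines main()

payload = textwrap.dedent("""
    def main():
        return sum(range(10))
""")
print(remote_execute(payload))  # 45
```

Of course, executing whatever arrives on the wire is exactly the security problem raised further down the thread.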
https://martinfowler.com/articles/distributed-objects-micros...
https://elixir-lang.readthedocs.io/en/latest/mix_otp/10.html
And even then, once you have a workable set of primitives, it turns out that some orchestration patterns recur over and over again, so people will converge towards OTP once the primitives are there.
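One of those recurring patterns is the supervisor: restart a failing worker a bounded number of times. A toy sketch of the idea (names are illustrative; real OTP supervisors are far richer, with restart strategies and supervision trees):

```python
# Toy "supervisor" loop, sketching the OTP-style restart pattern:
# call the worker, and on failure retry up to max_restarts times.
def supervise(worker, max_restarts=3):
    for attempt in range(max_restarts + 1):
        try:
            return worker()
        except Exception:
            if attempt == max_restarts:
                raise  # restart budget exhausted; escalate

# Demo: a worker that crashes twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("crash")
    return "ok"

result = supervise(flaky)
print(result)  # ok
```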
It seems like you would need an entire observability framework built and instrumented on top of this to ever really be that useful.
It's possible LLMs pick up improper English, of course, since proper is some measure of what used to be a norm, but may presently be perceived as outdated.
It is much safer to distribute signed code in signed packages out of band and send only non-executable data in messages.
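A minimal sketch of the "only non-executable, verified data in messages" discipline. For brevity this uses an HMAC with a shared key; as the thread notes, a real zero-trust design would use asymmetric signatures rather than a shared secret. All names are illustrative:

```python
# Sketch: verify a message's authenticity, then treat it strictly as
# data (JSON), never as code. Shared-key HMAC is a simplification;
# production systems would use asymmetric signatures.
import hashlib
import hmac
import json

SHARED_KEY = b"example-key"  # illustrative only

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def accept(message: bytes, tag: bytes):
    # Constant-time comparison to avoid timing side channels.
    if not hmac.compare_digest(sign(message), tag):
        raise ValueError("bad signature")
    return json.loads(message)  # data only; no exec() anywhere

msg = json.dumps({"task": "resize", "id": 7}).encode()
print(accept(msg, sign(msg)))  # {'task': 'resize', 'id': 7}
```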
It is safer still to store received messages in pages with the NX bit set.
A compromise of any client in this system results in DoS and arbitrary RCE, but that's an issue with most distributed task-worker systems.
To take a zero trust approach, you can't rely on the shared TLS key or the workers never being compromised.
mDNS doesn't scale beyond a broadcast segment without mDNS repeaters spanning multiple network segments, and those don't scale either.
Something centralized like Dask, for example, can log and track state centrally to handle task retries on network, task, and worker failure.
But Dask doesn't satisfy zero trust design guidelines either.
How are these systems distinct from botnets with a shared TLS key and no certificate revocation?
With redundancy and idempotency, distributed computation can work.
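The idempotency half of that claim can be sketched as deduplication by task id: a retried or redundantly delivered task executes at most once. Names are illustrative; real systems would persist this table durably:

```python
# Sketch: idempotent task execution. Results are keyed by task id, so
# redundant delivery or retry of the same task is harmless.
_results = {}

def run_once(task_id, fn, *args):
    if task_id not in _results:
        _results[task_id] = fn(*args)
    return _results[task_id]

calls = []
def work(x):
    calls.append(x)
    return x * 2

run_once("t1", work, 21)   # executes the work
run_once("t1", work, 21)   # deduplicated replay; cached result returned
print(len(calls))  # 1
```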
To run computation distributedly with the required keys, sharded distributed-ledger protocols charge for the smart contracts their nodes execute in a virtual machine whose network access is limited to in-protocol messages. Each "smart contract" costs money to run redundantly on multiple nodes, and so it should or must have an account with a verifiably sufficient balance in order to run.
Smart contracts must be uploaded using a private key, with the corresponding public key identifying the uploader.
Smart contracts are identified by and so are addressed by a hash of their bytecode.
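Content addressing by bytecode hash is a one-liner to sketch: identical code always resolves to the same address, and the address can be recomputed by anyone to verify what they received. The bytecode bytes here are an illustrative stand-in:

```python
# Sketch: a contract's address is the hash of its bytecode, so the
# identity is derived from (and verifiable against) the code itself.
import hashlib

bytecode = b"\x60\x01\x60\x02\x01"  # illustrative stand-in bytes
address = hashlib.sha256(bytecode).hexdigest()
print(address[:16])
```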
Wool, Dask, and Celery's @task don't solve for smart-contract-style output storage costs with redundancy; they don't set up replicated databases large enough for each computation step for you. Dask and Celery model and track the state of each executed DAG centrally, with logs only as trustable as the centralized nodes, which are a single point of failure.
Why isn't Docker Swarm (which L2-bridges among all nodes without restriction) appropriate for a given application, given the access control lists and cloud (pod) configuration necessary on e.g. AWS or GCP to prevent budget overruns? Why impose quotas on grid users at all?
Serverless functions must be uploaded/deployed before being run, too. To orchestrate a bunch of web services is to execute the DAG and handle errors due to network, node, and service failures and latency.
But then what protocol do the (serverless function) services all implement, so that we don't have to have a hodgepodge of API clients to use all the services in the grid?
With (serverless) functions bound to URL routes, cost each function and estimate the resources required to keep running a function at that cost. To handle even benign resource exhaustion, scale up the databases and/or redundant block storage, and adjust other distributed-computation grid parameters for the pod(s) of resources that service a named, signed function like /api/v1/helloworld_v2?q=select%20* when it creates contention for the organization's costed resources.
On what signals do you scale up or scale down - within a resource budget - to afford fanning out over multiple actually parallel nodes to compute and sign and store the data?
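The "verifiably sufficient balance" admission check described above can be sketched as a toy predicate: refuse to run a costed function unless the account covers the estimated cost times the redundancy factor. All names and numbers are illustrative:

```python
# Toy admission check for a costed, redundantly executed function:
# admit only if the balance covers est_cost on every redundant node.
def admit(balance: float, est_cost: float, redundancy: int = 3) -> bool:
    return balance >= est_cost * redundancy

print(admit(10.0, 2.0))  # True  (10 >= 2 * 3)
print(admit(5.0, 2.0))   # False (5 < 2 * 3)
```

A real scheduler would also feed the scale-up/scale-down signals asked about above (queue depth, latency, remaining budget) back into `redundancy` and `est_cost`.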