That doesn't sound right? For example, from the Fil-C GC docs [0]:
> If you call `free`, the runtime will flag the object as free and all subsequent accesses to the object will trap. Additionally, FUGC will not scan outgoing references from the object (since they cannot be accessed anymore).
I'm only using this configuration for the software I develop though (+ libc++ debug mode) as it's painfully slow, but it exercises the Qt codebase in depth.
> All memory safety errors are caught as Fil-C panics.
If your problem is a memory-based bug causing a crash, I think this would just... catch the memory-based bug and crash. Like, it'd crash more reliably. On the other hand, if you want to find and debug the problem, that might be a good thing.
SafER is better than something so deeply complex that only Rust experts can understand it.
I like the concepts proposed by Rust but do not like fighting with the borrow checker or sprinkling code with box, ref, cell, rc, refcell, etc.
At some point there's going to be a better-designed language that makes these pain points go away.
Not every task needs both speed and safety. Therefore, not "everyone" needs Rust.
But I could easily see a future where C++ is relegated to legacy language status. It has already had decades of garbage-collected languages chipping away at most of its general-purpose uses, but Rust seems capable and in a position to take away most of its remaining niches.
It's kind of why the old C++ programmer that I am decided to learn Rust in the first place - seemed like a good idea at the time to skate where the puck is heading.
Yes, agreed. My prediction is that the replacement is a friendly language that makes Rust's ideas ergonomic to use.
I'm not sure why this would be confusing or disliked by a C++ dev.
Rust's Box<T> is similar to C++'s std::unique_ptr<T>.
Rust's Rc<T>, Arc<T>, and Rc<RefCell<T>> serve similar uses to C++'s std::shared_ptr<T>.
Rust's Weak<T> is similar to C++'s std::weak_ptr<T>.
Verbosity of both is nearly identical. The big difference is that Rust enforces the rules around aliasing and mutability at compile time, whereas with C++ I get to find out I've made a mistake when my running code crashes.
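As a rough sketch of that mapping in Rust (the `Node` type and its fields are invented purely for illustration):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// Box<T>  ~ std::unique_ptr<T>: single owner, freed when it goes out of scope.
// Rc<T>/Arc<T> ~ std::shared_ptr<T>: reference-counted shared ownership.
// Weak<T> ~ std::weak_ptr<T>: non-owning handle that must be upgraded before use.
struct Node {
    value: i32,
    parent: RefCell<Weak<Node>>, // back-reference that doesn't keep the parent alive
}

fn main() {
    let unique: Box<i32> = Box::new(42); // sole owner, like unique_ptr

    let shared: Rc<Node> = Rc::new(Node {
        value: *unique,
        parent: RefCell::new(Weak::new()),
    });

    let weak: Weak<Node> = Rc::downgrade(&shared); // like weak_ptr
    if let Some(strong) = weak.upgrade() {
        println!("value = {}", strong.value);
    }
}
```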
Yes, C++ is pretty bad at this too.
The fact that these exist and that the programmer always has to be consciously aware of them is an indication that something has gone wrong in the language design.
Imagine if you were to write C code in 1970, but you always had to keep track of which register each variable corresponded to. That is how I look at these.
(There were, indeed, early 'high level' languages that required you to do this :)
This is what separates a systems programming language suitable for OS and embedded development from managed languages, which are not suitable for that work.
The complexity ultimately stems from unavoidable details of the hardware implementation. Languages which do not offer similar representations will be incapable of making full use of the underlying hardware, or of writing certain kinds of projects like bootloaders, firmware, OSes, etc. Rust is pretty close to state-of-the-art in terms of providing reasonable abstractions over the hardware.
It sounds to me like you're used to managed languages with a runtime which are great for certain applications, but unable to be specific enough about memory layout, how data is formatted in memory, etc. for some tasks. Those choices were made for you by the language runtime's authors and necessarily limit the language's applicability to some problems. Rust, C++, and other systems programming languages don't have such limitations, but require you to understand more of the complexity of what the system is actually doing.
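A minimal sketch of the kind of layout control being described, assuming a made-up `RegisterBlock` struct:

```rust
// #[repr(C)] fixes the field order and layout, so the struct can mirror a
// hardware register block, a C ABI, or an on-disk/wire format exactly.
#[repr(C)]
struct RegisterBlock {
    control: u32,
    status: u32,
    data: u64,
}

fn main() {
    // Size and alignment are knowable and stable at compile time, rather than
    // being decided for you by a language runtime.
    println!("size  = {}", std::mem::size_of::<RegisterBlock>());
    println!("align = {}", std::mem::align_of::<RegisterBlock>());
}
```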
Any language which provides adequate representations for taking full advantage of the hardware is going to be on a similar order of complexity as Rust, C++, or other systems programming languages, because the hardware is complex. Managed languages can be nice for introducing folks to programming, precisely because much of the complexity is hidden in the runtime, but that can be a double edged sword when it comes time to approach a systems level task.
If you forced me to pick between Zig and Rust for a long-running project though, I'd pick Rust 10/10 times for the simple fact that it has been stable for more than a decade and already has momentum and funding behind it. Zig is a cool language - one that I've actually written more of than Rust - but it hasn't hit 1.0 yet and still has significant churn in both the language and standard library.
Rust is complex.
That is not a statement anyone can deny - unless you're a Rust-bro ("hey man, I learned it, so I'm baffled how you can't see it's super simple", etc.), often with the implication that you're not very smart if you think Rust is complex.
Of everything that I have learned about programming, this is the BIGGEST lesson: complexity is bad, avoid complexity. Complex is not the same as "sophisticated", which implies necessary intricacy. Complex means unnecessary cognitive load - things made harder than they should be when it could have been avoided. If you are writing complex code then you're writing bad code. And Rust is complex.
It is sad that we did not end up with a SIMPLE programming language that solves the memory safety problem.
What we needed was Rust-- i.e. Rust without all the non-safety-related extras that make it different - it's all the non-safety add-ons that make Rust into a Rube Goldberg machine.
I am not a savant by any means, and yet I was able to get up and running with Rust relatively quickly. What Rust isn't is ergonomic, in that Rust gets very annoyed with the ways that one might want to structure their code. Trust me, I got bit by the borrow checker countless times, and it did grate on me.
As a result, there are many tasks that I would avoid using Rust for: tasks where neither speed nor safety is critical. But if neither is a priority, the list of alternative languages I can resort to is quite long, much longer than Zig, C++, or C. And in the cases where both are a factor, I would consider being needled by the compiler to be a feature and not a bug.
Traits? Nope. We need some way to reuse code. Classes cannot be made memory safe without extra cost (at least, I don't know how they can be), and they are not less complex either. Templates like C++? More complex, and they don't allow defining safety interfaces. No tool for code reuse at all? That would also severely limit safety (imagine how safe Rust would be if everyone had to roll their own `Vec`).
The borrow checker of course cannot be omitted. ADTs are really required for almost anything Rust does (and also, fantastic on their own). Destructors? Required to prevent use after free.
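A small sketch of those two pieces - an enum (ADT) and a destructor (`Drop`) - using an invented `Output` type:

```rust
use std::fs::File;
use std::io::Write;

// An ADT: a value of this type is exactly one of these variants,
// and the compiler always knows which.
enum Output {
    Discard,
    ToFile(File), // this variant owns a resource
}

// A destructor: runs exactly once, deterministically, when ownership ends -
// no GC pause, no manual free call to forget.
impl Drop for Output {
    fn drop(&mut self) {
        if let Output::ToFile(f) = self {
            let _ = f.flush();
        }
    }
}

fn main() -> std::io::Result<()> {
    let out = Output::ToFile(File::create("example.log")?);
    drop(out); // flushed and closed right here, at a point the compiler can see
    let _quiet = Output::Discard;
    Ok(())
}
```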
Async can be removed (and in fact, wasn't there in the beginning) which is a large surface area, but even today it can mostly be avoided if you're not working in some areas.
I don't think anybody can deny Rust is complex, but most often it's inherent complexity (what you call "sophistication") given the constraints Rust operates in, not accidental complexity.
Somehow most of the libraries in the Rust ecosystem seem to interoperate with each other seamlessly, and use the same build system, which I didn't have to learn another unrelated language to use! Astounding!
We did, arguably. Those languages are called JavaScript, Python, Java, C#, etc. Those languages tend to be eschewed in certain niches, though, and it's there that simplicity tends to be harder to achieve.
> What we needed was Rust-- i.e. Rust without all the non-safety-related extras that make it different - it's all the non-safety add-ons that make Rust into a Rube Goldberg machine.
I think it might also be worth considering that some people find those "non-safety related extras" a good thing. Dropping backwards compatibility and/or familiarity, just like everything else, is a tradeoff, and that tradeoff might be worth it if you think the resulting semantics are nicer to work with.
Compared with what? C++? It is not.
What would the minimal set of features be, in your opinion?
> For example Zig, which instead of introducing a new metaprogramming language uses... Zig - imagine using the same language instead of inventing a new additional language with all the accompanying complexity and cognitive load and problems.
Zig probably isn't the best comparison since Zig doesn't try to achieve the same level of compile-time memory safety guarantees that Rust aims for. For instance, Zig doesn't try to statically prevent use-after-frees or data races.
That being said, as with everything it's a question of tradeoffs. Zig's metaprogramming approach is certainly interesting, but from what I understand it doesn't offer the same set of features as Rust's approach. For example:
- Zig's generics are more similar to C++ templates in that only instantiated functions are fully checked by the compiler. Rust's generics, on the other hand, are completely checked at the definition site, so if the definition type-checks the author knows it will type-check for all possible instantiations (see the sketch after this list). Rust's approach also lends itself to nicer error messages since everything a generic needs is visible up front.
- Zig's comptime isn't quite 1:1 with Rust's macros. comptime is for... well, compile-time computation (e.g., reflection, compile-time branching, or instantiating types). Macros are for manipulating syntax (e.g., code generation or adding inline support for other languages). Each has things the other can't do, though to be fair there is overlap in problems they can be used to solve.
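A minimal sketch of the definition-site checking from the first point (the `describe` function is made up for the example):

```rust
use std::fmt::Display;

// Fully type-checked right here: the body may only use what the `Display`
// bound promises, so if this definition compiles it will compile for every
// possible `T: Display`, and callers get errors phrased in terms of the bound
// rather than failures inside the body at the first unlucky instantiation.
fn describe<T: Display>(value: T) -> String {
    format!("value = {value}")
}

fn main() {
    println!("{}", describe(42));
    println!("{}", describe("hello"));
}
```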
In any case, metaprogramming approaches are (mostly?) independent of memory safety.
> And result and option and move by default - none of these things were needed but they all add up to more complexity and unfamiliarity and cognitive load.
I don't think Result/Option are that complex (if at all) since they're trivially derivable from discriminated unions/sum types/enums.
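For illustration, here's a sketch with hand-rolled stand-ins (`Maybe` and `Either`, names invented) showing they're just plain enums:

```rust
// Hand-rolled stand-ins for Option and Result: nothing more than enums
// with a payload on some variants.
enum Maybe<T> {
    Just(T),
    Nothing,
}

enum Either<T, E> {
    Ok(T),
    Err(E),
}

fn first_even(xs: &[i32]) -> Maybe<i32> {
    for &x in xs {
        if x % 2 == 0 {
            return Maybe::Just(x);
        }
    }
    Maybe::Nothing
}

fn parse_positive(s: &str) -> Either<i32, String> {
    match s.parse::<i32>() {
        Ok(n) if n > 0 => Either::Ok(n),
        Ok(_) => Either::Err("not positive".to_string()),
        Err(e) => Either::Err(e.to_string()),
    }
}

fn main() {
    match first_even(&[1, 3, 4]) {
        Maybe::Just(n) => println!("first even: {n}"),
        Maybe::Nothing => println!("no even numbers"),
    }
    match parse_positive("42") {
        Either::Ok(n) => println!("ok: {n}"),
        Either::Err(e) => println!("err: {e}"),
    }
}
```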
I'm also not sure how move by default is necessarily "more complexity... and cognitive load"? Perhaps as a result of unfamiliarity, but that seems more a property of a person than of a language, no?
So what you're saying here is that you don't understand that Rust's rules around memory ownership, aliasing, and mutability are what allow the language to provide deterministic compile time memory safety without runtime cost. If you figure out another way to guarantee memory safety at compile time with zero runtime overhead, you should write a paper and start another language around it!
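For what it's worth, here's a tiny sketch of that rule in action - aliasing (shared borrows) and mutation can't overlap, and the check costs nothing at runtime (the rejected line is left commented out):

```rust
fn main() {
    let mut names = vec![String::from("a"), String::from("b")];

    let first = &names[0]; // shared borrow into the vec's buffer

    // names.push(String::from("c")); // rejected at compile time: pushing may
    //                                // reallocate and leave `first` dangling

    println!("{first}");

    names.push(String::from("c")); // fine now: the shared borrow ended above
    println!("{}", names.len());
}
```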
https://en.wikipedia.org/wiki/Capability_Hardware_Enhanced_R... exists, and is an exciting, laudable effort, I think. But it requires hardware support as well as language modifications.
Rust is better than C++.
Also, I could be wrong, but I believe any assembly linked into a Fil-C build bypasses the safety guarantees, which would be something to keep in mind (not a big deal generally, but a source of hidden implicit unsafe).
Rust at least embeds this information in the API with checks. In C and C++ it's doc comments at best.
From my position, the complexity incurred by ownership semantics in Rust does not stem from Rust's 'formalization' and semi-reification of a particular view of ownership as a means of constraining programs. The complexity of Rust, in relation to ownership, comes from the lengths I would have to go to in order to design systems using other logical means of handling references (particularly plain hardware-implemented pointers) to semantic objects: their creation, specification, and deletion. The same goes for other means of handling resources (particularly memory acquired via allocation): its acquisition, its transport through local and distributed processes (from different cores to over the wire), and its deletion or handing back to the OS.
Rust adopts ownership semantics (and value semantics to a large degree) to the maximum extent possible and has enmeshed those semantics throughout all levels of abstraction in the language definition (as far as a singular authoritative ‘definition’ can be said to exist as a non-implementation related formalism). At the level of Rust the language, not merely discussions and discourse about the language, ownership semantics are baked in at a specified granularity and that ownership is compositional over the abstraction mechanisms provided. These semantics dictate how everything from a single variable on the stack to large size allocations in a general heap to non-memory ‘resources’, like files, textures, databases, and general processes, are handled in a program. On top of the ownership semantics sit the rest of Rust’s semantics and they are all checked at compile time by a singular oracular subsystem (i.e. the borrow checker).
The complexity really begins to rise, for me, if I want to attempt to program without engaging with ownership as the methodology or semantics for handling all of the above-mentioned 'resources'. I prefer, and believe, that a broader set of formalisms should be available for 'handling' resources, that those formalisms should exist with parameterized granularity, and that the foundational semantics for those mechanisms should come from type systems' ability to encode capabilities and conditions for particular types in a program. That position is in contrast to the universal and foundational ownership semantic, especially with the individualistic fixed granularity, that Rust chose.
That being said, it is bordering on insanity to attempt to program in such a 'style'/paradigm/method in Rust. My preferences make Rust's chosen focus on ownership seem complex at the outset, and attempts to impose an alternate formalism in Rust (which would, by necessity, have to be some abstraction over Rust's ownership semantics that hid those semantics and presented a different set of semantics to the programmer) take that complexity to even higher levels.
The real problem with trying to frame my position here as complexity is the following: to me, Rust and its ownership semantics are complex because I do not like its chosen core semantic construct, so when I think about achieving something using Rust I have to deal with additional semantics, semantic objects, and constraints on my program that I do not think are fit for purpose. But if I wanted to program in Rust without trying to circumvent, ignore, or disregard its choices as a language, and instead just decided to accept (or embrace) its semantic choices, the complexity I perceive would decrease significantly and immediately.
For me, Rust's ownership semantics create an impedance mismatch that, at the level of language use, FEELS like complexity (and acts like complexity in a lot of ways), but is probably more correctly identified as just what it is... an impedance mismatch, nothing more and nothing less. I simply chose not to use Rust to avoid that, whereas others get fixated on these issues, never get to the bottom of them, and just default to calling it complexity during discussion.
All in all, I am probably being entirely too optimistic about the comments about the complexity of Rust and ownership, and most commenters are just fighting to fight, but I genuinely believe there is much to discuss and work through in programming language design theory, and writing walls of text on HN helps me do that.
When considering some chosen subset of functionality for some specified use case, how do Rust and C++ compare in how 'masterable' they are? There are wide and varied (practically infinite) groups of features, constructs, techniques, and implementations that achieve targeted use cases in both languages, so when constructing a given subset, which language grants the most expressivity and capability in the tighter (i.e. more masterable) package?
I think that's a way more interesting discussion to have. Obviously, where the specified use case requires Rust's definition of memory safety to be implemented 100% of the time (excluding a small-ish percentage of delimited 'unsafe but identifiable' sections), the Rust subset will be smaller due to the mandatory abstractions required to put C++ anywhere near complete coverage. So it may make sense to allow the subset to be defined as not only constructs in the base language, but also to include sealed abstractions (philosophically if not in reality) as potential components in the constructed subsets.
I may have to try to formulate some use cases to pose in a longer write-up, to see whether any truly experienced devs can lay out their preferred language's best candidate subset in response. It would also be fascinating to see what abstractions and metaprogramming would be used to implement the candidate subsets, and to figure out how that could factor into an overall measurement of the 'masterable-ness' of the given language (i.e. how impossible a task it is to rely on a subject-matter expert to implement any proposed subset for any given use case).
Massive projects like Qt also push compilers to their limits and use various compiler-specific and platform-specific techniques which might appear as bugs to Fil-C.