in the same way that passing parameters to a subfunction "creates" a fresh set of local variables for that subfunction, the tail recursion semantic updates that set of local variables in an especially clean way when the recursion is used as a loop, allowing "simultaneous assignment" from old values to new ones.
(yes, it would be confusing with side-effecting C/C++ operators like ++, because then you'd need to know the order of evaluation or know not to do that, but those are already issues in those languages quite apart from tail recursion)
because it's the way I learned it, I tend to call the semantic "tail recursion" and the optimization "tail call elimination", but since other people don't use the terms the same way it's somewhat pointless; still, I do like to crusade for awareness of the semantic beyond the optimization. If it's only an optimization, you can't rely on it, because you could blow the stack on large loops. If it's a semantic, you can rely on it.
(the semantic is not entirely "clean" either. it's a bit of a subtle point that you have to return the value of the tail call straightaway, or it's not a tail call. fibonacci adds the results of its recursive calls after they return, so it's not a tail call unless you somewhat carefully arrange the values you pass/keep around (see the sketch below). also worth pointing out that all "tail calls" are up for consideration, not just recursive ones)
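for concreteness, here's a minimal sketch in Kotlin, picked only because its tailrec modifier turns the semantic into a compile-time guarantee (the function names are just illustrative):

  // naive version: the addition happens *after* the recursive calls return,
  // so neither call is in tail position (marking it tailrec would draw a warning)
  fun fibNaive(n: Int): Long =
      if (n < 2) n.toLong() else fibNaive(n - 1) + fibNaive(n - 2)

  // accumulator version: the recursive call is the entire return value, and
  // passing (b, a + b) is the "simultaneous assignment" of old values to new ones
  tailrec fun fib(n: Int, a: Long = 0, b: Long = 1): Long =
      if (n == 0) a else fib(n - 1, b, a + b)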
  #!/bin/sh
  foo
  bar

vs.

  #!/bin/sh
  foo
  exec bar
And you could perhaps imagine a shell that does "tail process elimination" to automatically perform the latter when you write the former. But the distinction can be important due to a variety of side effects, and if you could only achieve it by carefully following a pattern that the shell might or might not recognize, that would be very limiting.
as you imply towards the end, i'm not confident this is a trick you can get away with as easily without the constraints of concatenative programming to railroad you into a pattern that is easily recognizable to both the human and the interpreter.
.NET is an interesting contrast. The equivalent of Java bytecode in .NET (CIL) does have the concept of tail calls. This allows a functional language like F# to be compiled to the intermediate form without losing the tail call concept. It is still up to the first-level compiler, though. C#, for example, does not support tail calls even though its intermediate target (CIL) does.
Tail call elimination, if it exists in a language, allows coding certain (even infinite) loops as recursion, making loop data flow explicit, easier to analyze, and, at least in theory, easier to vectorize/parallelize, etc.
But if a language/runtime doesn't do tail call elimination, then you CAN'T code up loops as recursion, as you would be destroying your stack. So the WAY you code it, how you structure it, must be different.
It's NOT an optimization.
I have no idea who even came up with that expression.
Why is this not done?
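To make the claim above concrete, a small Kotlin sketch (Kotlin only eliminates the call when you ask for it with tailrec, which is exactly the kind of guarantee being argued for; the function itself is just an example):

  // compiled to a plain loop because of tailrec, so it works for any n
  tailrec fun sumTo(n: Long, acc: Long = 0): Long =
      if (n == 0L) acc else sumTo(n - 1, acc + n)

  fun main() {
      // without the guarantee this recursion would need ~100 million stack
      // frames and overflow long before finishing
      println(sumTo(100_000_000))  // prints 5000000050000000
  }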
In lower level languages there are also a bunch of other issues:
- RAII can easily make functions that appear in a tail position not actually tail calls, due to destructors implicitly running after the call;
- there can be issues when reusing the stack frame of the caller, especially with caller-cleanup calling conventions;
- the compiler needs to prove that no pointers to the stack frame of the function being optimized have escaped, otherwise it would be reusing the memory of live variables which is illegal.
That is a common criticism. You're referring to the functional programmers. They would typically argue that building up state in transient loop variables is a mistake: the body of a loop should ideally be (at the moment any stack trace gets thrown) a pure function of constant values and of a range being iterated over that is still preserved. That makes debugging easier.
In a couple languages I've seen proposals to solve this problem with a syntactic opt-in for tail call elimination, though I'm not sure whether any mainstream language has actually implemented this.
https://crates.io/crates/tiny_tco
As an optimization, my understanding is that GCC and LLVM implement it, so Rust, C, and C++ also have it implicitly as an optimization that may or may not apply to your code.
But yes, Zig does have formal language syntax for guaranteeing that tail calls happen at the language level (which I agree is the right way to expose this optimization).
I did not know that! But I am a bit confused, since I don't really program in either language. Where exactly in the documentation could I read more about this? Or see more examples?
The language reference for @call[0] was quite unhelpful to my untrained eye.
[1]: https://github.com/ziglang/zig/issues/694#issuecomment-15674...
[2]: https://github.com/llvm/llvm-project/issues/54964
[3]: https://github.com/rust-lang/rfcs/pull/3407
> All current mainstream compilers perform tail call optimisation fairly well (and have done for more than a decade)
https://stackoverflow.com/questions/34125/which-if-any-c-com... (2008)
You can now get a guarantee by using non-standard compiler attributes:
https://clang.llvm.org/docs/AttributeReference.html#musttail
https://gcc.gnu.org/onlinedocs/gcc/Statement-Attributes.html...
https://ocaml.github.io/ocamlunix/ocamlunix.html
MirageOS
House OS: https://programatica.cs.pdx.edu/House/
Just saying.
I've no idea whether this still holds once the security manager is removed.
However, there are other things still in the platform for which stack frames are significant. These are referred to as “caller sensitive” methods. An example is Class.forName(). This looks up the given name in the classloader of the class that contains the calling code. If the stack frames were shifted around by TCO, this might cause Class.forName() to use the wrong classloader.
No doubt there are ways to overcome this — the JVM does inlining after all — but there’s work to be done and problems to be solved.
We've been using TCO here ("tail call optimization") but I recall Guy Steele advocating for calling this feature TCE ("elimination") because programs can rely on TCE for correctness.
https://kotlinlang.org/docs/functions.html#tail-recursive-fu...
Kotlin also has a neat other tool, "DeepRecursiveFunction<T, R>" that allows defining deep recursion that is not necessarily tail-recursive.
Really useful if you wind up with a problem that is most cleanly solved with mutual recursion or similar:
https://kotlinlang.org/api/core/kotlin-stdlib/kotlin/-deep-r...
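For illustration, a small sketch assuming a recent Kotlin where DeepRecursiveFunction is stable (Tree is a made-up example type, not anything from the library):

  class Tree(val left: Tree? = null, val right: Tree? = null)

  // the recursion state lives on the heap: callRecursive suspends instead of
  // pushing a JVM stack frame, so deep non-tail recursion doesn't overflow
  val depth = DeepRecursiveFunction<Tree?, Int> { t ->
      if (t == null) 0
      else maxOf(callRecursive(t.left), callRecursive(t.right)) + 1
  }

  fun main() {
      // a 100_000-deep left spine that would overflow a plain recursive depth()
      val deep = generateSequence(Tree()) { Tree(left = it) }.take(100_000).last()
      println(depth(deep))  // invoked like an ordinary function
  }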
It'd require a bit of engineering to get something like that working in native Java, I'd imagine, even with the new JDK Structured Concurrency API offering a coroutines alternative.
On the other hand, "tailrec" is a keyword and implemented as a compiler optimization.
The closest I've seen in Java is a neat IntelliJ plugin that has a transformation to convert recursive method calls into imperative loops with a stack frame.
This transformation and resulting tool was the result of someone's thesis, it's pretty cool:
https://github.com/andreisilviudragnea/remove-recursion-insp...
In general I don't think you can do this to INVOKEVIRTUAL (or INVOKEINTERFACE) as it covers cases where your target is not statically resolved (virtual/interface calls). This transformation should be limited to INVOKESTATIC and INVOKESPECIAL.
You also need lots more checks to make sure you can apply the transformation, like ensuring the call site is not covered by a try block; otherwise it is not semantics-preserving.
e.g.:
  (defun func-a (x)
    (func-b (- x 34)))

  (defun func-b (x)
    (cond ((<= 0 x) x)
          (t (func-a (- x 3)))))
Because func-a and func-b are different (JVM) functions, you'd need an inter-procedural goto (i.e. a tail call) in order to natively implement this.
As an alternative, some implementations will use a trampoline. func-a and func-b return a _value_ which says what function to call (and what arguments) for the next step of the computation. The trampoline then calls the appropriate function. Because func-a and func-b _return_ instead of actually calling their sibling, the stack depth is always constant, and the trampoline takes care of the dispatch.
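For illustration, here's roughly what that looks like in Kotlin, reusing the shape of func-a/func-b from above (all names and types here are just for the sketch):

  // each step either finishes with a value or says which call to make next
  sealed interface Step
  data class Done(val value: Int) : Step
  data class Call(val next: () -> Step) : Step

  // the functions *return* the next step instead of calling each other directly
  fun funcA(x: Int): Step = Call { funcB(x - 34) }
  fun funcB(x: Int): Step = if (x >= 0) Done(x) else Call { funcA(x - 3) }

  // the trampoline itself is an ordinary loop, so stack depth stays constant
  fun trampoline(start: Step): Int {
      var step = start
      while (true) {
          step = when (val s = step) {
              is Done -> return s.value
              is Call -> s.next()
          }
      }
  }

  fun main() {
      println(trampoline(funcA(40)))  // funcA(40) -> funcB(6) -> 6
  }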
The ANTLR guys went through terrible contortions for their parsers.
Never felt like working those details out for ABCL.