(define (((add x) y) z) (+ x y z))
(define add1 (add 1))
(define add3 (add1 2))
(add3 3) ; => 6
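For comparison, a hand-rolled sketch of that staged definition in Python, using nested closures (there's no `define` sugar here, so the nesting is explicit):

```python
# A sketch of the curried Scheme definition above, using nested closures.
def add(x):
    def add_y(y):
        def add_z(z):
            return x + y + z
        return add_z
    return add_y

add1 = add(1)   # like (define add1 (add 1))
add3 = add1(2)  # like (define add3 (add1 2))
print(add3(3))  # => 6
```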
it gets tedious with lots of single-argument cases like the above, but in cases where you know you're going to be calling a function a lot with, say, the first three arguments always the same and the fourth varying, it can be cleaner than a function of three arguments that returns an anonymous lambda of one argument.
(define ((foo a b c) d)
(do-stuff))
(for-each (foo 1 2 3) '(x y z))
vs
(define (foo a b c)
(lambda (d) (do-stuff)))
(for-each (foo 1 2 3) '(x y z))
There's also a commonly supported placeholder syntax[1]:
(define inc (cut + 1 <>))
(inc 2) ; => 3
(define (foo a b c d) (do-stuff))
(for-each (cut foo 1 2 3 <>) '(x y z))
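Python's `functools.partial` fills a similar role: explicit, opt-in partial application rather than implicit currying (not SRFI-26's `cut` itself, but the same idea of a leading-argument hole):

```python
# functools.partial as explicit, opt-in partial application.
from functools import partial

def add(x, y):
    return x + y

inc = partial(add, 1)  # roughly (cut + 1 <>)
print(inc(2))          # => 3
```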
And assorted ways to define or adapt functions to make fully curried ones when desired. I like the "make it easy to do something complicated or esoteric when needed, but don't make it the default to avoid confusion" approach.

> I'd also love to hear if you know any (dis)advantages of curried functions other than the ones mentioned.
I think it fundamentally boils down to the curried style being _implicit_ partial application, whereas a syntax for partial application is _explicit_. And as is often the case, being explicit is clearer. If you see something like
let f = foobinade a b
in a curried language then you don't immediately know if `f` is the result of foobinading `a` and `b` or if `f` is `foobinade` partially applied to some of its arguments. Without currying you'd either write
let f = foobinade(a, b)
or
let f = foobinade(a, b, $) // (using the syntax in the blog post)
and now it's immediately explicitly clear which of the two cases we're in.

This clarity not only helps humans, it also helps compilers give better error messages. In a curried language, if a function is mistakenly applied to too few arguments then the compiler can't always immediately detect the error. For instance, if `foobinade` takes 3 arguments, then `let f = foobinade a b` doesn't give rise to any errors, whereas a compiler can immediately detect the error in `let f = foobinade(a, b)`.
A syntax for partial application offers the same practical benefits as currying without the downsides (albeit losing some of the theoretical simplicity).
When someone writes
f = foobinade a b
g = foobinadd c d
there is no confusion to the compiler. The problem is the reader. Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result. Whereas with explicit syntax, the parentheses say what the author thinks they're doing, and the compiler will yell at them if they get it wrong.
Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results". Or rather, you never have a result that isn't a function; `0` and `lambda: 0` (in Python syntax) are the same thing.
It does, of course, turn out that for many people this isn't a natural way of thinking about things.
Everyone knows that. At least everyone who would click a post titled "A case against currying." The article's author clearly knows that too.
That's not the point. The point is that this distinction is very meaningful in practice, as many functions are only meant to be used in one way. It's extremely rare that you need to (printf "%d %d" foo). The extra freedom provided by currying is useful, but it should be opt-in.
Just because two things are fundamentally equivalent, it doesn't mean it's useless to distinguish them. Mathematics is the art of giving the same name to different things; and engineering is the art of giving different names to the same thing depending on the context.
Not when a language embraces currying fully and then you find that it’s used all the fucking time.
It’s really simple as that: a language makes the currying syntax easy, and programmers use it all the time; a language disallows currying or makes the currying syntax unwieldy, and programmers avoid it.
In a pure language like Haskell, 0-ary functions <==> constants
let f :: Int = foobinade a b
And the compiler immediately tells you that you are wrong: your type annotation does not unify with the compiler's inferred type.

And if you think this is verbose, well, many traditional imperative languages like C have no type deduction and you will need to provide a type for every variable anyway.
What you say is true. And it works, if you're the author and are having trouble keeping it all straight. It doesn't work if the author didn't do it and you are the reader, though.
And that's the more common case, for two reasons. First, code is read more often than it's written. Second, when you're the author, you probably already have it in your head how many parameters foobinade takes when you call it, but when you're the reader, you have to go consult the definition to find out.
But if I was willing to do it, I could go through and annotate the variables like that, and have the compiler tell me everything I got wrong. It would be tedious, but I could do it.
let result =
input
|> foobinade a b
|> barbalyze c d
Or, if we really want to name our partial function before applying it, we can use the >> operator instead:
let f = foobinade a b >> barbalyze c d
let result = f input
Requiring an explicit "hole" for this defeats the purpose:
let f = barbalyze(c, d, foobinade(a, b, $))
let result = f(input)
Or, just as bad, you could give up on partial function application entirely and go with:
let result = barbalyze(c, d, foobinade(a, b, input))
Either way, I hope that gives everyone the same "ick" it gives me.

let result = (barbalyze(c, d, $) . foobinade(a, b, $)) input
Or if you prefer left-to-right:
let result = input
|> foobinade(a, b, $)
|> barbalyze(c, d, $)
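The hole-in-pipeline style above can be emulated in Python with plain lambdas and a small hypothetical `pipe` helper; `foobinade` and `barbalyze` are invented stand-ins matching the thread's names:

```python
# Hypothetical pipe() helper standing in for |>; foobinade and
# barbalyze are invented stand-ins for the thread's examples.
def pipe(value, *fns):
    for fn in fns:
        value = fn(value)
    return value

def foobinade(a, b, x):
    return x + a + b

def barbalyze(c, d, x):
    return x * c + d

# Each "$" hole becomes an explicit lambda over the piped value.
result = pipe(1,
              lambda x: foobinade(2, 3, x),
              lambda x: barbalyze(4, 5, x))
print(result)  # => 29
```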
Maybe what isn't clear is that this hole operator would bind to the innermost function call, not the whole statement.
let result = input
|> add_prefix_and_suffix("They said '", $, "'!")

let foos = foobinate(a, b, input)
let bars = barbakize(c, d, foos)
Other languages have method call syntax, which allows some chaining in a way that works well with autocomplete.

It can, or it can't, depending on the situation. Sometimes it just adds weight to the mental model (because now there's another variable in scope).
'Putting things' (multi-argument function calls, in this case) 'in-band doesn't make them go away, but it does successfully hide them from your tooling', part 422.
Seems like a disaster to use s-expressions for a language like that. I love s-expressions but they only make sense for variadic languages. The entire point of them is to quickly delimit how many arguments are passed.
In say Haskell `f x y z` is the same thing as `(((f x) y) z)`. That is definitely not the case with s-expressions; parentheses don't delimit arity, they denote function application. It's like saying that `f(x,y,z)` is the same as `f(x)(y)(z)`, which it really isn't. The point of s-expressions is that you often find yourself calling functions with many arguments that are themselves the result of a function application, at which point `foo(a)(g(a,b), h(x,y))` just becomes easier to parse as `((foo a) (g a b) (h x y))`.
Functions can be explicitly written to do this, or it can be achieved through compiler optimisation.
It's also a question of whether this is exclusive to a curried definition or if such an optimization may also apply to partial application with a special operator like in the article. I think it could, but the compiler might need to do some extra work?
getClosest :: Set Point -> Point -> Point
You could imagine getClosest building a quadtree internally, and that tree wouldn't depend on the second argument. I say slightly contrived because I would probably prefer to make the tree explicit if this was important.
Another example would be if you were wrapping a C-library but were exposing a pure interface. Say you had to create some object and lock a mutex for the first argument but the second was safe. If this was a function intended to be passed to higher-order functions then you might avoid a lot of unnecessary lock contention.
You may be able to achieve something like this with optimisations of your explicit syntax, but argument order is relevant for this. I don't immediately see how it would be achieved without compiling a function for every permutation of the arguments.
The flip side of your example is that people see a function signature like getClosest, and think it's fine to call it many times with a set and a point, and now you're building a fresh quadtree on each call. Making the staging explicit steers them away from this.
Irrespective of currying, this is a really interesting point - that the structure of an API should reflect its runtime resource requirements.
I was imagining you might achieve this optimization by inlining the function. So if you have
getClosest(points, p) = findInTree(buildTree(points), p)
And call it like
myPoints = [...]
map (getClosest(myPoints, $)) myPoints
Then the compiler might unfold the definition of getClosest and give you
map (\p -> findInTree(buildTree(myPoints), p)) myPoints
Where it then notices the first part does not depend on p, and rewrites this to
let tree = buildTree(myPoints) in map (\p -> findInTree(tree, p)) myPoints
Again, pretty contrived example. But maybe it could work.

I don't believe inlining can take you to the exact same place though. Thinking about explicit INLINE pragmas, I envision that if you were to implement your partial function application sugar you would have to decide whether the output of your sugar is marked INLINE, and either way you choose would be a compromise, right? The compromise with Haskell and curried functions today is that the programmer has to consider the order of arguments; it only works in one direction, but on the other hand the optimisation is very dependable.
foldr f z = go
where
go [] = z
go (x : xs) = f x (go xs)
when called with (+) and 0 can be inlined to
go xs = case xs of
[] -> 0
(x : xs) -> x + go xs
which doesn't have to create a closure to pass around the function and zero value, and can subsequently inline (+), etc.

In that case I want the signature of "this function pre-computes, then returns another function" and "this function takes two arguments" to be different, to show intent.
> achieved through compiler optimisation
Haskell is different in that its evaluation ordering allows this. But in strict evaluation languages, this is much harder, or even forbidden by language semantics.
Here's what Yaron Minsky (an OCaml guy) has to say:
> starting from scratch, I’d avoid partial application as the default way of building multi-argument functions.
https://discuss.ocaml.org/t/reason-general-function-syntax-d...
sayHi name age = "Hi I'm " ++ name ++ " and I'm " ++ show age
people = [("Alice", 70), ("Bob", 30), ("Charlotte", 40)]
-- ERROR: sayHi is String -> Int -> String, a person is (String, Int)
conversation = intercalate "\n" (map sayHi people)
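The same tuple-vs-arguments mismatch shows up in Python: `map(say_hi, people)` would pass each whole tuple as one argument, while `itertools.starmap` splices each tuple into separate arguments:

```python
# starmap splices each tuple into separate arguments, where a plain
# map(say_hi, people) would pass the whole tuple as one argument.
from itertools import starmap

def say_hi(name, age):
    return f"Hi I'm {name} and I'm {age}"

people = [("Alice", 70), ("Bob", 30), ("Charlotte", 40)]
conversation = "\n".join(starmap(say_hi, people))
print(conversation)
```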
In Python you have `*people` to destructure the tuple into separate arguments, or pattern matching. In C-like languages you have structs you have to destructure.

2. And performance: you'd think a slow-down affecting every single function call would be high up on the optimization wish list, right? That's why it's implemented in basically every compiler, including non-FP compilers. Here's the GHC authors in 2004 declaring that obviously the optimization is in "any decent compiler": https://simonmar.github.io/bib/papers/eval-apply.pdf
3. Type errors, the only place where currying is actually bad, is not even mentioned directly. Accidentally passing a different number of arguments compared to what you expected will result in a compiler error.
Some very powerful and generic languages will happily support lots of weird code you throw at them instead of erroring out. Others will errors out on things you'd expect them to handle just fine.
Here's Haskell supporting something most people would never want to use, giving it a proper type, and causing a confusing type error in any surrounding code when you leave out the parentheses around `+`:
foldl (+) 0 [1,2,3] :: Num a => a
foldl + 0 [1,2,3]
:: (Foldable t, Num a1, Num ((b -> a2 -> b) -> b -> t a2 -> b),
Num ([a1] -> (b -> a2 -> b) -> b -> t a2 -> b)) =>
(b -> a2 -> b) -> b -> t a2 -> b
Is it bad that it has figured out that you (apparently) wanted to add things of type `(b -> a2 -> b) -> b -> t a2 -> b` as if they were numbers, and done what you told it to do? Drop it into any GPT of choice and it'll find the mistake for you right away.

I never understood why the latter was so popular, just for automatic implicit partial application, which honestly should just have explicit syntax. In Scheme one simply uses the `(cut f x y)` operator, which does a partial application and returns a function that consumes the remaining arguments, which is far more explicit. Since Scheme is dynamically typed, implicit partial application would be a disaster there; but it's not as if the error messages in OCaml and Haskell can't be confusing at times either.
I don't get simulating it with tuples either, to be honest. Nothing wrong with just letting functions take multiple arguments and that's it. In Rust they oddly take multiple arguments as expected, but they can return tuples to simulate returning multiple values, whereas in Scheme functions just return multiple values. There's a difference between returning one value which is a tuple of multiple values, and actually returning multiple values.
I think automatic implicit partial application, like almost anything "implicit", is bad. But in Haskell or OCaml or even Rust it has to be a syntactic macro; it can't just be a normal function, because there are no easy variadic functions, which to be fair are incredibly difficult without dynamic typing, and in practice just passing some kind of sequence is what you really want.
(log configuration identifier level format-string arg0 arg1 ... argN)
After each partial application step you can do more and more work narrowing the scope of what you return from subsequent functions.

;; Preprocessing the configuration is possible
;; Imagine all logging is turned off, now you can return a noop
(partial log conf)
;; You can look up the identifier in the configuration to determine what the logger function should look like
(partial log conf id)
;; You could return a noop function if the level is not enabled for the particular id
(partial log config id level)
;; Pre-parsing the format string is now possible
(partial log conf id level "%time - %id")
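A rough Python sketch of the same staging, using nested closures; the shape of `conf` (a map from identifier to enabled levels) is invented here for illustration:

```python
# Each stage does its work once and returns a narrower function;
# the conf shape ({id: set-of-enabled-levels}) is invented here.
def make_logger(conf):
    def for_id(ident):
        def at_level(level):
            if level not in conf.get(ident, set()):
                return lambda fmt: (lambda *args: None)  # noop logger
            def with_format(fmt):
                return lambda *args: fmt % args  # format fixed up front
            return with_format
        return at_level
    return for_id

conf = {"db": {"warn", "error"}}
warn_db = make_logger(conf)("db")("warn")("%s: %s")
print(warn_db("db", "slow query"))  # => db: slow query
```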
In many codebases I've seen, a large amount of code is literally just to emulate this process with multiple classes, where you're performing work and then caching it somewhere. In simpler cases you can consolidate all of that in a function call and use partial application. Without some heroic work by the compiler you simply cannot do that in an imperative style.

1. Looking at a function call, you can't tell if it's returning data, or a function from some unknown number of arguments to data, without carefully examining both its declaration and its call site
2. Writing a function call, you can accidentally get a function rather than data if you leave off an argument; coupled with pervasive type inference, this can lead to some really tiresome compiler errors
3. Functions which return functions look just like functions which take more arguments and return data (card-carrying functional programmers might argue these are really the same thing, but semantically, they aren't at all - in what sense is make_string_comparator_for_locale "really" a function which takes a locale and a string and returns a function from string to ordering?)
3a. Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function (so make_string_comparator_for_locale has type like Locale -> Function<string -> string -> order>), so now if you actually want to return a function, there's boilerplate at the return and call sites that wouldn't be there in a less 'concise' language!
I think programming languages have a tendency to pick up cute features that give you a little dopamine kick when you use them, but that aren't actually good for the health of a substantial codebase. I think academic and hobby languages, and so functional languages, are particularly prone to this. I think implicit currying is one of these features.
In the sense that "make_string_comparator" is not a useful concept. Being able to make a "string comparator" is inherently a function of being able to compare strings, and carving out a bespoke concept for some variation of this universal idea adds complexity that is neither necessary nor particularly useful. At the extreme, that's how you end up with Enterprise-style OO codebases full of useless nouns like "FooAdapter" and "BarFactory".
The alternative is to have a consistent, systematic way to turn verbs into nouns. In English we have gerunds. I don't have to say "the sport where you ski" and "the activity where you write", I can just say "skiing" and "writing". In functional programming we have lambdas. On top of that, curried functions are just a sort of convenient contraction to make the common case smoother. And hey, maybe the contraction isn't worth the learning curve or usability edge-cases, but the function it's serving is still important!
> Because of point 3, our codebase has a trivial wrapper to put round functions when your function actually returns a function
That seems either completely self-inflicted, or a limitation of whatever language you're using. I've worked on a number of codebases in Haskell, OCaml and a couple of Lisps, and I have never seen or wanted anything remotely like this.
That's not the case with Haskell.
Haskell has a tendency to pick up features that have deep theoretical reasoning and "mathematical beauty". Of course, that doesn't always correlate with codebase health very well either, and there's a segment of the community that is very vocal about dropping features because of that.
Anyway, the case here is that a superficial kind of mathematical beauty seems to conflict with a deeper case of it.
There is one situation, however, where Standard ML prefers currying: higher-order functions. To take one example, the type signature of `map` (for mapping over lists) is `val map : ('a -> 'b) -> 'a list -> 'b list`. Because the signature is given in this way, one can "stage" the higher-order function argument and represent the function "increment all elements in the list" as `map (fn n => n + 1)`.
That being said, because of the value restriction [0], currying is less powerful because variables defined using partial application cannot be used polymorphically.
And yeah I think this is the way to go. For higher-order functions like map it feels too elegant not to write it in a curried style.
Using curried OR tuple arg lists requires remembering the name of an argument by its position. This saves room on the screen but is mental overhead.
The fact is that arguments do always have names anyway and you always have to know what they are.
Maybe there could be a rule that parameters have to be named only if their type doesn’t already disambiguate them and if there isn’t some concordance between the naming in the argument expression and the parameter, or something along those lines. But the ergonomics of that might be annoying as well.
let warn_user ~message = ... (* the ~ makes this a named parameter *)
let error = "fatal error!!" in
warn_user ~message:error; (* different names, have to specify both *)
let message = "fatal error!!" in
warn_user ~message; (* same names, elided *)
The elision doesn't always kick in, because sometimes you want the variable to have a different name, but in practice it kicks in a lot, and makes a real difference. In a way, cases when it doesn't kick in are also telling you something, because you're crossing some sort of context boundary where some value is called different things on either side.

def add(x: int, y: int) -> int { return x + y; }
def add3 = add(_, 3);
Or more simply, reusing some built-in functions:
def add3 = int.+(_, 3);

> This feature does have some limitations, for instance when we have multiple nested function calls, but in those cases an explicit lambda expression is always still possible.
I've also complained about that a while ago https://news.ycombinator.com/item?id=35707689
---
The solution is to delimit the level of expression the underscore (or dollar sign suggested in the article) belongs to. In Kotlin they use braces and `it`.
{ add(it, 3) } // Kotiln
add(_, 3) // Scala
Then modifying the "hole in the expression" is easy. Suppose we want to subtract 2 from the first argument before passing it to `add`:
{ add(subtract(it, 2), 3) } // Kotlin
// add(subtract(_, 2), 3) // no, this means adding 3 to the function `add(subtract(_, 2)`
x => { add(subtract(x, 2), 3) } // Scala
fun x => add(subtract(x, 2), 3) // Virgil

[1]: https://gavinhoward.com/2025/04/how-i-solved-the-expression-...
The "hole" syntax for partial application with dollar signs is a really creative alternative that seems much nicer. Does anyone know of any languages that actually do it that way? I'd love to try it out and see if it's actually nicer in practice.
And yes, another comment mentioned that Scala supports this syntax!
Same for imperative languages with "parameter list" style. In Python, with
def f(a, b): return c, d
def g(k, l): return m, n
you can't do
f(g(1,2))
but have to use
f(*g(1,2))
which is analogous to uncurry, but operates on a value rather than a function.
TBH I can't name a language where such f(g(1,2)) would work.
#!/usr/bin/env perl
use v5.36;
sub f($a, $b) {
return ($a+1, $b+1);
}
sub g($k, $l) {
return ($k+1, $l+1);
}
say for f(g(1,2));
prints out
3
4

With the most successful functional programming language, Excel, the dataflow is fully exposed. Which makes it easy.
Certain functional programming languages prefer the passing of just one data-item from one function to the next. One parameter in and one parameter out. And for this to work with more values, it needs to use functions as an output. It is unnecessary cognitive burden. And APL programmers would love it.
Let's make an apple pie as an example. You give the apple and butter and flour to the cook. The cursed curry version would be "use knife for cutting, add cutting board, add apple, stand near table, use hand. Bowl, add table, put, flour, mix, cut, knife butter, mixer, put, press, shape, cut_apple." etc..
https://jonathanwarden.com/implicit-currying-and-folded-appl...
(Side note: if you're reading this Roc devs, could you add a table of contents?)
I'm failing to see how they're not isomorphic.
However, as the article outlines, there are differences (both positive and negative) to using functions with these types. Curried functions allow for partial application, leading to elegant definitions, e.g., in Haskell, we can define a function that sums over lists as sum = foldl (+) 0 where we leave out foldl's final list argument, giving us a function expecting a list that performs the behavior we expect. However, this style of programming can lead to weird games and unwieldy code because of the positional nature of curried functions, e.g., having to use function combinators such as Haskell's flip function (with type (A -> B -> C) -> B -> A -> C) to juggle arguments you do not want to fill to the end of the parameter list.
From a theoretical perspective, a tuple expresses the idea of "many things" and a multi-argument parameter list expresses the idea of both "many things" and "function arguments." Thus, from a cleanliness perspective for your definitions, you may want to separate the two, i.e., require functions to have exactly one argument and then pass a tuple when multiple arguments are required. This theoretical cleanliness does result in concrete gains: writing down a formalism for single-argument functions is decidedly cleaner (in my opinion) than for multi-argument functions, and implementing a basic interpreter off of this formalism is, subsequently, easier.
From a systems perspective, there is a clear downside in this space. If tuples exist on the heap (as they do for most functional languages), you induce a heap allocation when you want to pass multiple arguments! This pitfall is evident with the semi-common beginner's mistake with OCaml algebraic datatype definitions where the programmer inadvertently wraps the constructor type with parentheses, thereby specifying a constructor of one-argument that is a tuple instead of a multi-argument constructor (see https://stackoverflow.com/questions/67079629/is-a-multiple-a... for more details).
The distinction is mostly semantic so you could say they are the same. But I thought it makes sense to emphasize that the former is a feature of function types, and the latter is still technically single-parameter.
I suppose one real difference is that you cannot feed a tuple into a parameter list function. Like:
fn do_something(name: &str, age: u32) { ... }
let person = ("Alice", 40);
do_something(person); // doesn't compile
The article is about programmer ergonomics of a language. Two languages can have substantially different ergonomics even when there is a straightforward mapping between the two.
Then there's an implication of 'sure, but that doesn't actually help much if it's not standard', and then it's not addressed further.
The article draws a three way distinction between curried style (à la Haskell), tuples and parameter list.
I'm talking about the distinction it claims exists between the latter two.
args = (a, b, c)
f args
…and that will have the effect of binding a, b, and c as arguments in the called function.

In fact many "scripting" languages, like JavaScript and Python, support something close to this using their array type. If you squint, you can see them as languages whose functions take a single argument that is equivalent to an array. At an internal implementation level this equivalence can be messy, though.
Lower level languages like C and Rust tend not to support this.
1) "performance is a bit of a concern"
2) "curried function types have a weird shape"
2 is followed by a single example of how it doesn't work the way the author would expect it to in Haskell.

It's not a strong case in my opinion. Dismissed.
I think you are focusing on the theoretical aspect of partial application and missing the actual argument of the article, which is that having it be the default, implicit way of defining and calling functions isn't a good programming interface.
I wrote a non-trivial lambda program [1] which enumerates proofs in the Calculus of Constructions to demonstrate [2] that BBλ(1850) > Loader's Number.
[1] https://github.com/tromp/AIT/blob/master/fast_growing_and_co...
[2] https://codegolf.stackexchange.com/questions/176966/golf-a-n...
They are not equally easy for me to use when I'm writing a program. So from a software engineering perspective, they are very much not the same.