However, I personally think that monad tutorials tend to give people the wrong impression and leave them more confused than they were before, because they focus on the wrong thing.
A monad is not a complex concept, at all. IMO a more useful way to present the topic would be with one separate lesson for every common monad instance. Start with Maybe, then IO, then maybe State and List, and so on... because ultimately, every instance of a Monad works very differently. That's why the pattern is so useful in the first place, because it applies to so many places. (Note: this is a criticism of monad tutorials in general, not this one in particular, which seems to do a decent job on this front).
In my experience, people new to Haskell focus way too much on getting the "a-ha" moment for monads in general, when really you want a bunch of separate "a-ha" moments as you realize how each instance of a monad takes advantage of the pattern differently.
I also tend to think that monads are best demonstrated in Haskell rather than in other languages, if only because the notation is so much less clunky. That may just be me though. (EDIT: well, also because almost no other languages have typeclasses, so you have to approximate it with interfaces/traits/etc)
Also FYI: in part 2, the code examples have extra newlines in between every line, which makes it hard to read (I'm on firefox, if that matters).
I've been trying to grok monads for almost a decade. More and more I'm beginning to realize how "mundane" the concept is, and the usefulness really is just that specific pattern of mundanity.
Similar to pipelines on Linux, they are pretty basic, but their ubiquity and their use in composing unrelated things together is really, really useful, and you only get that if you use them in a variety of different ways.
Monads are extra cool because of the mathematical rigor behind them, and that's what I'm still trying to appreciate.
This is what the categorical structure of monads ("a monoid in the category of endofunctors") buys you. You want to turn monad X into monad Y? Sure, just join, flatten, return, bind in whatever way makes the type checker happy. Anything that only uses what's in the Monad typeclass must necessarily be a monad morphism, so if you're generic over your monads, you get that for free. And of course `fmap` and `bind` are required to be parameterized monad morphisms, so there's a lot you can get for free.
For multi-language distributed processing, particularly if JSON is involved, it's worth a try.
To be fair, I write a lot of Java, where Optional is a train wreck in so many ways: not least that it could be null anyway, you're allocating objects needlessly, and I regularly see people get hypnotized by awkward code, writing bugs or scanning right past them.
Related: https://buttondown.com/j2kun/archive/weak-and-strong-algebra...
Monads, I think, offer enough structure in that we can exploit things like monad composition (as fraught as it is), monadic do/for syntax, and abstracting out "traversals" (over data structures most concretely, but also other sorts of traversals) with monadic accumulators.
There's at least one other practical advantage as well, that of "chunking".
A chess master is more capable of quickly memorizing realistic board states than an amateur (and equally good at memorizing randomized board states). When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly. People familiar with monads often can hand-wave a set of unknowns in a problem by recognizing it to be a monad-shaped problem that can be independently solved later.
> When we have a grasp of relevant, powerful structures underlying our world, we can "chunk" along them to reason more quickly.
This is one thing I've observed about Haskell vs. other languages: it more readily gives names and abstractions to even the minutest and most trivial patterns in software, so that seemingly novel problems can be quickly pattern matched and catalogued against a structure that has almost certainly been seen before.
One example: I want to run two (monadic) computations, and then somehow combine together their results (with some binary operation). Such a trivial and fundamental mode of composition, that seems to lack a name in almost every other programming language. Haskell has a name for this mode of composition, and it's called liftM2.
Never again will you have to re-write this pattern for yourself, leaving yourself open to error, now that you have this new concept in your vocabulary. Other languages will happily let you reinvent the wheel for the umpteenth time, or invent idiosyncratic patterns and structures without realizing that they are just particular applications of an already well-studied and well-worn concept.
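For example (in GHC this lives in Control.Monad; the same operation is also available as `liftA2`), the one-liner below covers failure, nondeterminism, and more without rewriting the pattern:

```haskell
import Control.Monad (liftM2)

-- Run two monadic computations, then combine their results with a
-- binary operation. The same call works for any monad:
main :: IO ()
main = do
  print (liftM2 (+) (Just 3) (Just 4))  -- Just 7
  print (liftM2 (+) (Just 3) Nothing)   -- Nothing
  print (liftM2 (,) [1, 2] "ab")        -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```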
I've only had a couple of months of experience working with Scala before the team switched to Java. The reasons were many, but one of them was that the external consultant who was most knowledgeable in "thinking with functions" was kind of a dick, making onboarding a horror show of "go look up YouTube video X before I can talk about this functionality," delivered in a condescending tone. So within a month he was let go, and then no one in the remaining team really had the courage to keep developing it further. Some thought they could maybe maintain the current functionality, but the solution was only about half complete. (In the consultant's mind it was complete, because it was so generic you only needed to add a couple of lines in the right place to implement each coming feature.)
That said, I would love to work in a hardcore Haskell project with a real team, one with a couple of other "regular" coders that just help each other out when solving the actual problems at hand.
Now for instance, arbitrary effects (error handling, resource management, etc) can be composed on top of an IO monad (e.g. via monad transformers), and MonadIO code, that is written to only depend on the IO effects, can still be executed in these contexts with more effects layered on top.
[0] https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr...
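A minimal sketch of that layering, using the standard `transformers` types (`MonadIO`, `ExceptT`); the `greet`/`checked` functions are made up for illustration:

```haskell
import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

-- Written once against MonadIO: only IO effects are assumed.
greet :: MonadIO m => String -> m ()
greet name = liftIO (putStrLn ("hello, " ++ name))

-- The same code runs unchanged with an error layer stacked on top.
checked :: String -> ExceptT String IO ()
checked name
  | null name = throwE "empty name"
  | otherwise = greet name

main :: IO ()
main = do
  greet "plain IO"              -- runs directly in IO
  r <- runExceptT (checked "")  -- runs inside ExceptT String IO
  print r                       -- Left "empty name"
```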
More generally, monads essentially support function composition between monadic functions, so you can use them to write code that is agnostic to the monad it runs in. This lets you write, e.g., production code that runs in IO or Async or Maybe, but for unit testing run it in Identity.
Also, they allow syntax sugar such as do notation, which makes the code clear to work with even when you know exactly which monad you're working in.
To give you a concrete example, in C#
Func<A, B>, List<A> -> List<B>
Func<A, B>, Task<A> -> Task<B>
Func<A, B>, Func<C, A> -> Func<C, B>
Can’t be expressed using a generalisation. But in Haskell, you can write
(Functor F) => Func<A,B>, F<A> -> F<B>
One of the biggest things that makes monads hard to understand is that the type systems of most languages can't represent them. Annoyingly, that includes the "typeless" ones.
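For reference, the Haskell generalisation is just `fmap` from the `Functor` class; a quick sketch (with `Either` standing in here for `Task`, since both carry a success value):

```haskell
-- fmap :: Functor f => (a -> b) -> f a -> f b
-- One function covers all three C# signatures above:
main :: IO ()
main = do
  print (fmap show [1, 2, 3])                       -- ["1","2","3"]
  print (fmap show (Just 42))                       -- Just "42"
  print (fmap show (Right 7 :: Either String Int))  -- Right "7"
  -- ...and functions themselves: fmap for ((->) c) is composition
  print (fmap (+1) (*2) 10)                         -- 21
```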
> Func<A, B>, List<A> -> List<B>
That said, in C#, you can write:

List<A> listA;
Task<A> taskA;
Func<A, B> func;
List<B> listB = from i in listA select func(i);
Task<B> taskB = from t in taskA select func(t);
And if it can resolve a method on List<T> called 'Select' that takes a Func<T, S> and returns a List<S>, and a method on Task<T> called 'Select' that takes a Func<T, S> and returns a Task<S>, then this will compile.

In other words, I kind of think that Select methods (which can be Select extension methods, of course) amount to functors in C#?
from x in xs select fn(x)
and define a signature for it where `xs` and `fn` are the only input arguments, so that it accepts both `listB` and `taskB` without a compilation error.

But I don't need a function that does that - the language has syntax for it. I can just do that wherever I need it.
So you can't write a function once and then get a version that works on lists, then a version that works on tasks, then a version that works on nullables, then a version that works on parsers. One of the big secrets of monads is that Haskell users don't spend very much time thinking about them, while people without monads have to think about them all the time.
not just a Select(Func<T, S>), but a Select(Func<T, S>) that preserves its original contextual instance of Select and doesn't leak into a different instance of Select.
> But I don’t need a function that does that
you don't need it yet ;)
Well, yeah, since a monad is a type, then a "typeless" PL will not be able to represent it.
Code that composes a bunch of operations, for whatever kind of composition those operations need (some people call Monad "programmable semicolons", because it's a lot like sequencing). E.g. traversals of datastructures, or some kind of "do this in a context" operation. Essentially any function you pass a "callback" to should probably be written in terms of monads so that it can accept callbacks that need different kinds of composition beyond just being called at different points in the control flow.
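A toy sketch of that idea: the hypothetical helper `twice` below only uses the Monad interface, so the caller's type decides what "run the callback twice" means — failure, nondeterminism, or real side effects:

```haskell
import Data.IORef

-- Run the callback twice and pair the results. Because this only
-- uses Monad, the callback brings its own notion of sequencing.
twice :: Monad m => m a -> m (a, a)
twice action = do
  x <- action
  y <- action
  return (x, y)

main :: IO ()
main = do
  print (twice (Just 1))  -- Just (1,1)
  print (twice [1, 2])    -- [(1,1),(1,2),(2,1),(2,2)]
  ref <- newIORef (0 :: Int)
  p <- twice (modifyIORef ref (+ 1) >> readIORef ref)
  print p                 -- (1,2): the IO action really ran twice
```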
It's similar for monads. If you can provide a unit constructor to turn an ordinary value into a monadic value, and a "map" operation that unwraps a monadic value, applies a function to it, and wraps the result, you have monadized the type. Your objects can participate in any algorithm that operates on monads.
The monad algorithms are the same. The only things different are the unit constructor and the mapping function.
It can be misleading to think of "unwrapping" a monadic value, since the monad interface does not support it. For example, there's no way to implement a function `List<T> -> T` using monad operations; it requires something entirely separate (e.g. indexing into a List, in this case).
What monads do provide is `join`, which turns nested monadic values into flat ones, like `List<List<T>> -> List<T>`. Even this seemingly trivial example is interesting though, since there are many ways to "flatten" a List<List<T>> into a List<T>: we could concatenate (e.g. depth-first), interleave (e.g. breadth-first), diagonalise (to support infinite lists), operate on chunks at a time (e.g. iterative deepening), etc.
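A quick sketch of two of those flattening strategies (`flattenInterleave` is hand-rolled for illustration, not a library function):

```haskell
import Data.List (transpose)

-- The standard List join: depth-first concatenation.
flattenConcat :: [[a]] -> [a]
flattenConcat = concat

-- An alternative flatten: round-robin interleaving, which keeps
-- making progress even when one inner list is very long (or infinite).
flattenInterleave :: [[a]] -> [a]
flattenInterleave = concat . transpose

main :: IO ()
main = do
  print (flattenConcat     [[1, 2, 3], [4, 5, 6]])  -- [1,2,3,4,5,6]
  print (flattenInterleave [[1, 2, 3], [4, 5, 6]])  -- [1,4,2,5,3,6]
```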
this is called a catamorphism, i.e. a fold. The opposite transformation is called an anamorphism: generation from a seed value.
https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr...
https://hackage.haskell.org/package/base-4.21.0.0/docs/Data-...
But the bigger benefit is when syntax sugar like `do` notation comes in. Because it works for any Monad, people can write their own Monads and take advantage of the syntax sugar. That leads to an explosion of creativity unavailable to languages who "lock down" their syntax sugar to just what the language designers intended. In other words, what requires a change to other languages can often be a library in Haskell.
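As a sketch of that, here is a small hand-rolled logging monad (all names invented for illustration); the moment the Monad instance exists, `do` notation works with no compiler changes:

```haskell
-- A tiny Writer-style monad: a value plus an accumulated log.
newtype Logged a = Logged (a, [String]) deriving Show

instance Functor Logged where
  fmap f (Logged (x, logs)) = Logged (f x, logs)

instance Applicative Logged where
  pure x = Logged (x, [])
  Logged (f, l1) <*> Logged (x, l2) = Logged (f x, l1 ++ l2)

instance Monad Logged where
  Logged (x, l1) >>= f =
    let Logged (y, l2) = f x in Logged (y, l1 ++ l2)

say :: String -> Logged ()
say msg = Logged ((), [msg])

-- Because Logged is a Monad, do-notation needs no further ceremony:
demo :: Logged Int
demo = do
  say "starting"
  let x = 2 + 2
  say "done"
  return x

main :: IO ()
main = print demo  -- Logged (4,["starting","done"])
```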
foo ::
_ =>
Exception String e1 ->
State Int e2 ->
Eff es Bool
foo = ...
We know that `foo` produces a `Bool`, and the only effects it can perform are throwing a `String` exception and mutating an `Int` state. That's it. It can't yield anything to a stream, it can't make network connections, it can't read from disk. In order to compose these operations together, `Eff` has to be an instance of `Monad`. That's the only way `Monad` turns up in this thing at all.

So, that's what you get in Haskell that Python doesn't give you.
Monad is a weird type that a lot of languages can't properly represent in their type system. However, if you do what dynamically-typed scripting languages do, you can do any fancy thing that Haskell does, because it is weakly typed in this sense. (The sense in which Python is "strongly typed" is a different one.)
What you can't do is not do the things that Haskell blocks you from doing because it's type-unsafe, like, making sure that calling "bind" on a list returns a list and not a QT Window or an integer or something.
So it's not "what can a Haskell monad do that a Python class cannot". It's "what can a Python class do in a straightforward way that Haskell has to use a monad for, because Haskell put the programmer in a straightjacket where they couldn't do it without a monad". It's basically a pattern to get around the limitations of a language (at least when it's used for state).
So, they're certainly not a means of getting around a limitation of the language. If it was just a limitation that limitation would have been lifted a long time ago! It's a means of preserving a desirable property of the language (referential transparency) whilst also preserving access to mutable state, exceptions, I/O, and all sorts of other things one expects in a normal language.
See my comment here for a little bit more about the benefits of referential transparency: https://news.ycombinator.com/item?id=44448127
And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
Yes, although there were solutions in the meantime. I/O was performed in the original version of Haskell through input-output streams and continuation passing style. It turns out that both approaches could have been given monad interfaces if "monad" as an abstraction had been understood at the time, but it wasn't, so they had ad hoc interfaces instead.
> And if so, then it seems fair to say at least that monads were a way to get around the limitations imposed by a desirable feature of the language...
I mean, sort of, but that seems more of a judgement than a fact. Would you say that function calls in C were a way to "get around the limitations imposed by not allowing global jumps"?
In both cases I'd just say they're a useful abstraction that lets you achieve a well-specified goal whilst preserving some desirable language property.
If C had started with a rule against global jumps, and only figured out function calls later, then yes I would say that.
You extremely often use iterators in a context where there's no way you could usefully slot in just "any" iterator and have some useful code. Suppose you have an iterator that iterates over the links that appear in an HTTP document, and write some code to fetch the HTTP resources so referenced. Well, obviously, "writing against the iterator interface" doesn't do you any good in that case. It's not like you can slot in an iterator that iterates over prime numbers to such code and get anything out of it.
What you can do with the Iterator interface is provide extremely generic tools that can be used against any Iterator, like, take the first x, skip every other one, reverse the iterator list (if finite and for a price), filter the results against a type-specific acceptance function, all kinds of things: https://docs.python.org/3/library/itertools.html These tools do not depend on the details of what the iterator is or how it works, only that it is one. In this case you might even use something as powerful as "give me an iterator and a function to run against the value that comes out of the iterator and I will run it in a parallel map and limit the number of workers and handle errors in this specific way", but all that code has no specific knowledge about URLs or fetching things from the web or anything like that. It just knows it has an iterator and a matching function for the value coming out.
Similarly, "writing to the Monad interface" gives you access to a wide variety of tools that work across all things that implement the monad interface: https://hackage.haskell.org/package/base-4.21.0.0/docs/Contr... What exactly they do depends on the underlying monad implementation. It happens that they turn out to be very useful in practice a lot of the time.
You can also create new compositions of the tools that only pay attention to the interfaces, like, "drop the first x values and then filter the rest" for an iterator, though often the libraries ship with the vast majority of what you need.
Written against the interface specifically you can only use exactly what is in the interface. But you also have the concrete types to work with, with whatever it is they do. Just as you can't really do much "real work" against just "something that provides a next value" when you have no idea what that next "value" is, but iterators are very useful with specific types, monads are the same way.
(You can then later work up to code that allows swapping out which monad you're using depending on how it's called, but I prefer to start here and work up to that.)
The trouble I have with monads is that what you get for free doesn't seem very exciting. It feels like I'm stuck in the world of a particular monad like State or Promises, and then to do anything remotely useful you have to bring in all of this monad transformer machinery to switch worlds again.
"The trouble I have with Monads is that what get for free doesn't seem very exciting."
I think there's a lot of truth to that, actually.
One of the persistent myths about "monad" is that they somehow "add" to a datatype, that the datatype was able to X and Y, but now that it's a monad now it can do X and Y and Z and M and N. But that's not true. Monad is an interface that can be implemented on things. Once you implement it, you get a lot of little tools, but individually, none of them are necessarily mindblowing, and pretty much by definition it can't be anything the data type couldn't already do.
(Likewise, I'd suggest that what you get with iterator isn't really all that "exciting" either. Useful, oh yes beyond a shadow of a doubt. But it's not exciting. Iterator qua iterator doesn't let you do anything you couldn't do without it.)
The convenience comes in that they're now the same across all monads. mapM does what it does and you no longer need to consult the specific type you are currently using for what it does, and so on for each thing.
If one "removed" monad from Haskell, that is actually what would happen. It's not the Haskell wouldn't be able to do any fewer things. It's just that you'd have to consult each data type for these functions, and they'd be named different things. (And you wouldn't be able to abstract over these operations in different datatypes without basically rebuilding monad in the process.)
I think the standard for "knowing" monad isn't that you can type a bit of a do block and get it to do something, or that you can understand what a particular block of code using list as a monad does; it's when you completely naturally are programming along in Haskell, and you realize "Hey, I've typed
do
x <- something1 arg1
y <- something2 x
z <- something3 y
t <- something4 z
out, I bet there must be something to do that in Control.Monad" and you go and look it up and find out that yes, there indeed is, and add >=> to your bag of tricks.

I can reproduce the double-line issue in part 2; this was my mistake and I missed it as part of my editing process. I'll delete part 2 while I make the corrections.
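For the record, the chain in that do block is exactly what Kleisli composition (`>=>`) from Control.Monad abstracts; a sketch with Maybe (the step functions are made up):

```haskell
import Control.Monad ((>=>))

-- Small Maybe-returning steps...
half :: Int -> Maybe Int
half n = if even n then Just (n `div` 2) else Nothing

decrement :: Int -> Maybe Int
decrement n = if n > 0 then Just (n - 1) else Nothing

-- ...composed into one pipeline, with no explicit x/y/z plumbing:
pipeline :: Int -> Maybe Int
pipeline = half >=> decrement >=> half

main :: IO ()
main = do
  print (pipeline 10)  -- half 10 = 5, decrement 5 = 4, half 4: Just 2
  print (pipeline 5)   -- half 5 fails immediately: Nothing
```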
Right on. This is the "What Are Monads" fallacy: https://entropicthoughts.com/the-what-are-monads-fallacy
How useful, really? Monads don't even universally compose, which is what most people sell the concept for.
It basically allows you to pipe successive function calls returning different types by lifting these types into a monad.
Don't get me wrong, that promise is very powerful and in the rare few cases where it works, it unlocks beautiful composition, but the simple truth is that monads are really not that useful outside of Haskell (and I'd say, it's even questionable within).
From a programming perspective, the definition of monads is clear.
bind :: m a -> (a -> m b) -> m b
return :: a -> m a
You can start using monads immediately, and in a language like Haskell things click fairly quickly, because monads are used everywhere and taken seriously in that language.

But the implications and consequences of this definition aren't always obvious, like how monads can be used to structure operations or whatever.
And then there's the category-theoretic business of monads, which you don't need to understand for most programming purposes. That might be a source of confusion. As you more or less say, people have vague, built-up expectations about monads. They expect something heavy and mysterious, and doubt they've understood them according to the first definition. But the basic definition is straightforward.
Numbers are like this, too. You understand what a number is (a quantity). You can perform all sorts of operations and calculations using them without knowing number theory or the philosophy of mathematics.
- a data type with some hidden information
- a lot of functions that can ignore that hidden information
- some functions that can act (switch|add|mutate|signal|sequence) on that hidden information
people seem to think talking about flatMap somehow makes this intuitive despite the tautological issue of flatMap only making sense if you already know what's going on.
A list is not a monad. A list is a data structure; a monad is more like a "trait" or "interface." So you can define a List type that "implements" the monad interface, but this is not an inherent property of lists themselves. That's the sense in which a list "is a" monad: the OOP sense.
Haskell's List monad provides a model for nondeterminism. But that certainly isn't the only way List could satisfy the monad interface! It was a deliberate choice -- a good choice, possibly the best choice, but a choice nonetheless.
I can clarify this earlier in part 1 or 2 instead of in to-be-written part 3.
A nice demonstration of this is writing a very simple regex matcher with the List monad. A naive implementation in Haskell with the List monad Just Works, because it's effectively a direct translation of Nondeterministic Finite Automata into code.
δ : Q × E ⟶ Q
δ : Q × E ⟶ P(Q)
... where Q denotes the set of the automaton's states, E its alphabet of input symbols, and P the power set operation. Deterministic automata arrive at a definite, single state drawn from Q, while non-deterministic automata may arrive at a set (~list) of possible states, when given a current state from Q and the next input symbol from E.

[0] https://en.wikipedia.org/wiki/Deterministic_finite_automaton
[1] https://en.wikipedia.org/wiki/Nondeterministic_finite_automa...
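A tiny sketch of δ : Q × E ⟶ P(Q) driven by the List monad (the automaton here is invented for illustration): running the NFA is just a monadic fold, with the list tracking every possible current state at once.

```haskell
import Control.Monad (foldM)

type State = Int

-- A made-up nondeterministic transition function: from a state and an
-- input symbol, several successor states are possible.
delta :: State -> Char -> [State]
delta 0 'a' = [0, 1]  -- on 'a', stay or advance
delta 1 'b' = [2]
delta 2 'b' = [2]
delta _ _   = []      -- no transition: this branch dies

-- The List monad explores every branch; a final empty list means
-- the input was rejected along all paths.
run :: String -> [State]
run = foldM delta 0

main :: IO ()
main = do
  print (run "a")    -- [0,1]: two possible states after one 'a'
  print (run "ab")   -- [2]
  print (run "ba")   -- []: rejected
```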
Non-determinism, in that given some set of inputs it's possible to receive a collection (a list) of possible outputs.
With lists you can express things like all possible pairings of all possible outcomes, or the Cartesian product:
ghci> liftM2 (,) ['a', 'b', 'c'] [1,2,3]
[('a',1),('a',2),('a',3),('b',1),('b',2),('b',3),('c',1),('c',2),('c',3)]
... or in more explicit monadic do-notation:

ghci> :{
ghci| do
ghci| x <- ['a', 'b', 'c']
ghci| y <- [1,2,3]
ghci| return (x, y)
ghci| :}
[('a',1),('a',2),('a',3),('b',1),('b',2),('b',3),('c',1),('c',2),('c',3)]
and so on.

You operate with notions bound to specific programming languages. Instead, consider that a list isn't a data structure: it's an abstraction that denotes a multitude of position-ordered values. That multitude of position-ordered values called "list" is an instance of "monad", because it can serve as valid input for a monadic computation defined consistently (in terms of the monad laws).
1. For any class X, there is a canonical method
F: X -> T<X>
2. For any class X, there is a canonical method G: T<T<X>> -> T<X>.
3. For classes X and Y, and any method f: X -> Y,
there is a corresponding method “T<f>”: T<X> -> T<Y>.
Here "any class" means any class that is compatible with the template.
And then there’s some additional rules which make all these methods compatible, based on the different ways of stacking nested T’s and the “canonical” maps you get. Admittedly there is some confusing accounting here, but I also think most natural ways of constructing the above three requirements are going to satisfy them anyway. For List and Maybe it’s fairly obvious what the above methods are.
I dunno, maybe I have it wrong and someone can correct my understanding.
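For what it's worth, instantiating those three methods for Maybe in Haskell looks like this (a sketch, with `join` and `fmap` written out by hand):

```haskell
-- 1. F : X -> T<X> is `pure` (a.k.a. return)
unit :: a -> Maybe a
unit = Just

-- 2. G : T<T<X>> -> T<X> is `join`
flatten :: Maybe (Maybe a) -> Maybe a
flatten (Just (Just x)) = Just x
flatten _               = Nothing

-- 3. T<f> : T<X> -> T<Y> is `fmap f`
lift :: (a -> b) -> Maybe a -> Maybe b
lift _  Nothing  = Nothing
lift fn (Just x) = Just (fn x)

main :: IO ()
main = do
  print (unit 3)                                  -- Just 3
  print (flatten (Just (Just 3)))                 -- Just 3
  print (flatten (Just Nothing :: Maybe (Maybe Int)))  -- Nothing
  print (lift (+ 1) (Just 3))                     -- Just 4
```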
Thus, explaining the syntax and where the type variables go is explaining the least relevant thing about monads relative to their power and importance. It's certainly easy to showcase both the syntax and the List and Maybe monads; that's part of the "monad tutorial fallacy". Gaining intuition for how to think about monads _in general_ is a lot harder and requires practice. Like, yes, List and Maybe are "containers", but is `(->) t` a container? Is `IO`? How do these compose, if at all? What is this about "effect" semantics, "I thought monads were just burritos/containers"? etc. These are the hard questions, both conceptually and pedagogically. Yes, you need to know the syntax to use it in any given programming language, but knowing which scabbard your knife fits in doesn't give you the skill of using the knife :)
When I taught my daughter chess at the age of 5, I did not teach her by laying out the rules and in general when I teach people anything, I don't start by emphasizing rules. Rules are often the least interesting part of any subject and I find rules are much easier to learn once a sufficient amount of motivation has been given first, and then we get to a point where we use rules to crystalize our intuition and express rigorously what we've come to learn. The rules are the end result of learning, not the starting point.
With chess, I taught my daughter by simply playing a simple game where I had a queen, and she had one king and two pawns, and her goal was to get one pawn across the board and my goal was to stop her.
The king and pawns are arranged so that with perfect play she is guaranteed a win, and eventually she learned the pattern that allows her to always win. I then switched it up so she gets the queen and I get the king/pawns but I arrange the king/pawns so that I will always lose if she plays perfectly.
After this, I added more pawns for white and a bishop for black. The motivation for adding these pieces is that to play perfectly and always win as white, you need to learn how pawns define the physical structure of the board, making walls and allowing for control over certain parts of the board, but this also comes at the cost of introducing clutter and making it hard for the king to maneuver.
After these principles are learned, then I introduce the knight, because the knight can jump over walls and isn't as burdened by clutter. I don't just put a knight in the game out of nowhere and make her memorize how it moves... by the time the knight is introduced, it feels natural to want a piece that has better maneuverability over the extra pawns.
And so on so forth... each piece is introduced with some kind of motivation for why it exists. It's not just there for the sake of existing because of some rule that you just need to memorize. It exists because it adds some kind of flavor and richness to the game and you end up absorbing the rules rather than just memorizing them.
Now I'm dwelling a bit on chess here... but the principle applies to programming as well. I don't think that if you just listed the rules as a means of teaching someone about monads, anyone would ever come to appreciate why those rules matter or why they should spend any time learning them. In fact, I am fairly confident that this kind of approach to teaching functional programming is why it's taken so long for mainstream languages to appreciate these features and adopt them.
I would agree that most of these articles about monads are bad. Just study the definition, then study what you can do with monads, it's not that hard.
If you don't already know category theory, learning it is hard. The terms on Wikipedia seem to form a dense graph of links, and it's hard to get a foothold of comprehension. For people who already know C++, or are at least familiar with its syntax, this is more useful than describing it in Haskell syntax or category theory terms. There seems to be a chicken-and-egg problem regarding Haskell and monads. Learning C++ may be harder or easier than learning category theory. I'm not sure, as I don't understand either of them. But this syntax makes more sense to me than something expressed in category theory vocabulary.
Context: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
Usually articles that describe them in a very Math-y way go above my head. But the definition above was immediately clear (I saw it on HN).
I think this article is a bit more approachable than others I've read, but it still gets very confusing near the end.
Anyway, a nested To-Do list is (allegedly) a common form of a monad. Say I am trying to clean my whole house. Well, I could have an item for a task like cleaning the kitchen that has each task I need to do in the kitchen in order for the kitchen to be cleaned. I can do the same for the living room, garage, etc..
However, that is mainly for organization purposes. While I may write the tasks in a nested manner, I could very well write each item as just a long flat list of To-Do tasks, and in reality, all the tasks are effectively completed as if they were one large flat list.
Is that kind of what you mean by 'flatMappable'? Nested To-Do lists being flattened and mapped to one large list? As in, a To-Do list of To-Do lists is just a single, larger To-Do list?
The monad part is about what happens if you call flatMap() repeatedly. That is, each call to flatMap() is one action, and these actions can be nested without affecting the result.
Sorry, I should have added more context to my post. I edited my post and added a link to the MDN definition of the flatMap function.
https://users.scala-lang.org/t/what-is-a-monad-in-scala/4169
It's like... what would you call all types that have a .write() method? Writable right? What would you call all types that have a .dispose() method? Disposable. What would you call all types that have a .flatMap() method? Monad obviously.
I’m not sure there is a good name for the monad operation. Sometimes it’s called ‘bind’ but what does it bind?
I suppose you could call it ‘then’ like when working with Promises.
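The Promise analogy is fairly direct; a sketch where `>>=` reads just like `.then(...)` (`fetchNumber` is a stand-in for an async-style action):

```haskell
-- JS:      fetchNumber().then(n => console.log(n + 1))
-- Haskell: fetchNumber >>= \n -> print (n + 1)

fetchNumber :: IO Int  -- stand-in for an asynchronous computation
fetchNumber = return 41

main :: IO ()
main = fetchNumber >>= \n -> print (n + 1)  -- prints 42
```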
https://getkyo.io/#/?id=the-quotpendingquot-type-lt
All pure values are automatically lifted into the Kyo monad, so `map` is effectively `flatMap`.
From the linked docs:
> This unique property removes the need to juggle between map and flatMap. All values are automatically promoted to a Kyo computation with zero pending effects, enabling you to focus on your application logic rather than the intricacies of effect handling.
In the end it makes a lot of sense I think. What you do is manipulating values inside some wrapper. Whether this wrapper is a monad or not should not matter. Just do something with the value(s) inside, and that's mapping.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
for x in list
doThings(x)
These comprehensions have a strange bonus feature: you can do nested "loops" all at once, and even add "guards" (little if statements):

newlist =
for x in list1
y in list2 if y > 3
z in list3
doThings(x, y, z)
But again, the comprehension, when "de-sugared", is secretly calling the map/flatMap/filter functions of list1, list2, list3 to get our result. You, as the author of a given monad, can implement those functions however you want, and all three are lambda-based. But notice how the comprehension flattens those lambdas out! Our callbacks-in-callbacks are much more readable like this.

Without comprehensions, you can still implement monadic functions in any old language (probably in C...?), and they're handy in their own right, but you don't get the flattening-of-callback-hell magic.
Do-notation in Haskell, or for-comprehensions in Scala are just syntax sugar for nested calls to `flatMap`, `filter`, and `map`.
I think this here shows it nicely:
https://www.baeldung.com/scala/for-comprehension#for-compreh...
In Scala you can add the needed methods to any type, and then they will "magically" work in for-comprehensions. In Haskell you need to implement a Monad instance, which then does the same trick.
The concrete implementations of these methods need to obey to some algebraic laws for the data structure which defines them to be called a monad. But that's pretty much it.
In my opinion all that Haskell in most "monad tutorials" just blurs an in principle very simple concept.
The practically relevant part is that a monad can be seen as an interface for a wrapper type: it has a constructor that wraps some value (whether a flat value, some collection, or even functions makes no difference), it does not expose an accessor to this wrapped value, and it has a `flatMap` method defined. It also inherits a `map` method, coming from an interface called "Functor". The thing is also an instance of "Applicative", an interface which adds a way to lift a plain value into the wrapper and to combine two wrapped values into a wrapped result of the same type.
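That interface can be sketched in a few lines of Python. This is a hypothetical minimal `Maybe`, not a library API: the constructor wraps, no accessor exposes the value, `flat_map` is defined directly, and `map` is derived from `flat_map` plus re-wrapping.

```python
class Maybe:
    _NOTHING = object()  # sentinel for the "no value" case

    def __init__(self, value=_NOTHING):
        self._value = value  # deliberately not exposed through any accessor

    def flat_map(self, f):
        """Monad: f takes a plain value and returns a Maybe."""
        if self._value is Maybe._NOTHING:
            return self
        return f(self._value)

    def map(self, f):
        """Functor: derived from flat_map by re-wrapping f's plain result."""
        return self.flat_map(lambda v: Maybe(f(v)))

    def __eq__(self, other):
        return isinstance(other, Maybe) and self._value == other._value

assert Maybe(1).map(lambda n: n + 1).map(lambda n: n * 2) == Maybe(4)
assert Maybe().map(lambda n: n + 1) == Maybe()  # Nothing stays Nothing
```

The point is only the shape of the interface: callers chain `map`/`flat_map` instead of ever reaching inside.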
With those caveats in mind, here's a more intensive scala-based monad tutorial I made:
https://github.com/zaboople/techknow/blob/master/scala/monad...
But really, don't burn up too much of your short life trying to come to terms with this stuff. There's a reason most languages don't get around to supporting Monads...
The whole thing about JS's Promises becomes way clearer when you see that they are a monad, except for one discrepancy (they auto-flatten themselves). It leads to much shorter and clearer code when doing pedestrian frontend stuff.
    M<T1>::map(f: (T1 -> T2)): M<T2>

    List<int>([1, 2, 3]).map(x => toString(x)) == List<string>(["1", "2", "3"])

You can always flatten the nested structure:

    M<M<T>>::flatten(): M<T>  // [["a", "b"], ["c", "d"]] -> ["a", "b", "c", "d"]

This is usually expressed in a different, more fundamental form:

    M<T1>::flatMap(f: (T1 => M<T2>)): M<T2>

    List(["a b", "c d"]).flatMap(x => x.split()) == List(["a", "b", "c", "d"])

You can notice how that map() thing does the looping over a sequence for you. But Optional<T> is also a monad:

    let x: Optional<int> = Some(1);
    let y: Optional<int> = Nothing;
    x.map(n => n + 1).map(n => n * 2) == Some(4);
    y.map(n => n + 1).map(n => n * 2) == Nothing;

As you see, the same map() (and flatMap()) does the condition checking for you and can be chained safely. You can also notice how chaining map-like operations does operation sequencing:

    fetch(url).then(content => content.json()).then(data => process(data))

Your language, like JS/TS, can add some syntax sugar over it and allow you to write it as a sequence of statements:

    async () => {
      const response = await fetch(url);
      const data = await response.json();
      process(data);
    }
Promises are not exactly monads though: a Promise<Promise<T>> immediately transforms into Promise<T>. But other monadic properties are still there.

Minor quibble, "can only be resolved as": the runtime absolutely holds Promise<Promise<T>>'s.
A list is not a monad. List is a monad. A list is an algebra for the List monad.
In detail:
* "A list is not a monad" - True!
* "List is a monad" - True. But I think "`List` is a monad" might be clearer.
* "A list is an algebra for the `List` monad." - False!
What's correct is the following:
* "An algebra for the `List` Monad is precisely a monoid."
Sketch of the construction:
(an algebra for the list monad is a monoid): Recall that an algebra is a set/type `A` together with a function `mult: List A -> A` together with some axioms. Think of such a function `mult: List A -> A` as the function that assigns to each list with elements in `A` the product over all those elements. The aforementioned axioms boil down to: (1) `mult([])` is a neutral element and (2) `mult` is an associative binary operation when restricted to two-element lists.
(a monoid is an algebra for the list monad): Same idea. Given a monoid, we define a function `mult: List A -> A` by assigning to a list of elements of `A` the product of these elements, and to the empty list the neutral element. Then we can use associativity and the properties of the neutral element to show that this function constitutes an algebra for the list monad.
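A small concrete sketch of this correspondence, using strings under concatenation (a monoid with `""` as neutral element) as the example: the algebra map `mult: List A -> A` is just a fold, and the algebra laws fall out of the monoid laws.

```python
from functools import reduce

def mult(xs):
    """The algebra map for the string monoid: fold a list into one string."""
    return reduce(lambda a, b: a + b, xs, "")

assert mult([]) == ""                     # axiom (1): the neutral element
assert mult(["ab", "cd"]) == "ab" + "cd"  # restricted to 2-element lists: the monoid op

# Associativity shows up as: mult of the row-products equals mult of the flattened list.
nested = [["a", "b"], ["c"], []]
flattened = [x for row in nested for x in row]
assert mult([mult(row) for row in nested]) == mult(flattened)
```

The last assertion is exactly the algebra compatibility condition (mult after mapping mult equals mult after flattening), specialized to this one monoid.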
I could make this distinction in part 3 (not written yet) although I want to balance not misleading readers, but not overcomplicating it too early on.
IMO if you already have it, this will be a lovely comparison full of insight, but if you haven't then it's full of confusing statements.
IMO what they are is utterly unimportant, except to mathematicians, and what you can do with them is more to the point.
The fact that explanations are so often in Haskell just makes them more unintelligible because you really need to know what problem they solve.
All a way of saying that, yep, you always have `map` when you have a Monad, but you don't need a Monad to have `map`.
If you want an example, we can compare a regular list and a ZipList. A regular list's Applicative instance does a cross product, while a ZipList's Applicative instance does a dot product:
    (*) <$> [2,3] <*> [5,7,11]
    --> [10,14,22,15,21,33]

    getZipList $ fmap show $ (*) <$> ZipList [2,3] <*> ZipList [5,7,11]
    --> ["10","21"]
There's no great way to write a Monad instance for ZipList. But it's an Applicative Functor, and thus also a Functor, and thus you can map over it. https://www.mail-archive.com/haskell-cafe@haskell.org/msg572...

For quirky reasons in Haskell, `fmap` is the function implemented for every Functor instance. This is because `map` was already taken by lists. Weird, I know.
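The same cross-product-versus-dot-product contrast can be sketched in Python (as an analogy only; Python has no Applicative class): the regular list instance pairs every element with every element, while the ZipList-style instance pairs elements positionally.

```python
# Cross product: every x combined with every y (regular list Applicative).
cross = [x * y for x in [2, 3] for y in [5, 7, 11]]
assert cross == [10, 14, 22, 15, 21, 33]

# Dot product: elements combined positionally, truncating to the shorter list
# (ZipList Applicative).
zipped = [x * y for x, y in zip([2, 3], [5, 7, 11])]
assert zipped == [10, 21]
```

These match the two Haskell results above (before the `show`).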
When I was first learning Haskell a million years ago, I was completely confused by the concept of a monad; I could, after enough fighting with the compiler, usually get something working, but it was a stochastic guess-and-check process trying to figure out what `IO` actually means. Even the `Maybe` was confusing to me, because I couldn't really figure out how the hell you relate something like "checking for null" with "writing to the disk".
I can't remember where I saw it, probably on the Haskell wiki somewhere, but when they pointed out that List is a monad, and after seeing an example of how it worked, I suddenly got it: in a hand-wavey way, a monad is basically just a value with a wrapper context [1], and from a practical perspective that's all it is. In the case of a List, its wrapper context is that there might be 0 or many of those things in there; in the case of a Maybe, its wrapper context is that it might exist or it might not; in the case of IO, its wrapper context is that it's interfacing with the outside world. Once you abstract away the entire idea of context, you suddenly open up an entire world of reusability.
This is a good tutorial, I will probably be linking it to people if they ever make the mistake of asking about monads.
[1] I don't need a lecture on the minutia of this, I know that there's a lot more to it in the theory world, I went to graduate school specifically to study functional language verification. I'm keeping it simple.
Not always! I find this is a big source of confusion; not all monads contain values, sometimes beginners think they can or should "get the value out" of a monad and that tends to lead to writing the wrong kind of code.
But it's not a very robust one: it's never true of fast programs on realistic hardware, for example (and hasn't been for a long time now). And all the rule bending (-fstrict-aliasing, a bunch of stuff) exists in this tension between the grade-school natural-language paradigm and the reality of computers. I say grade school not to be pejorative, but because it is roughly the boundary where written natural languages begin to have interesting tensions around past, future, and simultaneous, changing and not changing.
Functors and applicatives and monads and other type classes like these are the source of endless analogies because there isn't an accepted, broadly understood terminology for this "well, it's roughly what would happen if you had a piece of paper and wrote things on it at every statement boundary and scratched off the old ones" notion (though Turing and von Neumann did formalize this in useful ways, they just don't generalize well to realistic computers anymore).
Monads are the mathematical object that is forced on you if you want a rigorous way to describe the semantics of program execution in the vicinity of this "common sense" notion. That's really what everyone is dancing around: your program is only well defined with either:
- a big rulebook full of exceptions and edge cases
- a compositional rule strict enough to give some useful predictability but lax enough to admit most useful programs.
It is this rigor/laxity tension, as concerns text on a page and gates on a semiconductor, that gives monads a privileged place in the towers of categories. When I worked on Sigma we were among the earlier adopters of ApplicativeDo, for example, because we wanted a slightly different rigor/laxity tradeoff for performance reasons.
Monads are what happens when you do shift the giant pile of "back of the book" compiler details that describe program execution semantics into a much simpler set of rules, but at the cost of increasing the barrier to entry because you need to know the rules before you can print "hello world".
Like, why would you embarrassed about not understanding the details of MPLS networking if you're not a network engineer who works with MPLS?
Meaning that every collection is a set of possible inputs to the computation that is provided as the argument to a `flatMap` operation. Each `flatMap`, by definition, returns a new collection of possible outputs for each of the inputs, and each of those collections gets concatenated. Every item in the final output collection represents the result of following some path through the computations, selecting a single item at each step. Importantly, the type of the output of each `flatMap` operation can differ from the input.
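A sketch of this "paths through the computation" reading in Python, using a hand-rolled `flat_map` (an assumption of this example, not a standard function): each step offers a collection of possible next values, and the final collection enumerates every way of choosing one value at each step.

```python
def flat_map(xs, f):
    """Apply f (which returns a list of possible outputs) to each possible
    input, and concatenate all the resulting collections."""
    return [y for x in xs for y in f(x)]

coins = ["H", "T"]

# Two flat_map steps = two coin flips; each output is one complete path.
# Note the type changes along the way: str inputs, str-concatenation outputs.
paths = flat_map(coins, lambda first:
        flat_map(coins, lambda second:
        [first + second]))

assert paths == ["HH", "HT", "TH", "TT"]
```

Every element of `paths` records one full sequence of choices, which is exactly the nondeterministic-computation reading of the list monad.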
You can imagine extending this by assigning probabilities, or making the domain continuous (I think...). These extensions would still be monads, just without being simple collections.
It's kind of like how multiplication over whole numbers is repeated addition, but that metaphor becomes less useful for other domains of numbers.
The statement as-is breaks pretty much immediately because, while there is a canonical list monad, there isn't a list monad, there are in fact several[1].
There are several more correct ways of phrasing the idea among:
"List can be given a monad instance"
"List forms a monad with pure and bind as defined"
"List is the underlying functor of a monad"
The point is that picking any old list implementation is likely not a monad without the supporting structure.
Will any of these help you learn what a monad is? Likely not. Monadology is a Mary's Room[2] problem; there is a quale, a subjective sensation, that arises when one understands monads having experienced them first hand. Consequently, monad tutorials are the best case against physicalism[3] yet devised.
1. https://hackage.haskell.org/package/exotic-list-monads-1.1.0...
The monad interface only requires ways to construct objects using callbacks. The 'bind' operation takes a callback as an argument but says nothing about when it's actually called; it could be immediately, deferred, called multiple times, or even never. That's up to the implementation of the monad, and to the language as well, if it's a lazy language.
This is basically a framework. Like with other frameworks, the principle is “don’t call us; we’ll call you.” Arbitrary computation can happen between callbacks. The framework can do whatever control flow it wants, and this is what often makes frameworks opaque. Hiding control flow is what frameworks do, for better or worse.
So far, none of this is specific to a Monad. The Monad part comes from the type signature of the callback function passed in to flatmap(), which allows ‘bind’ operations to be nested.
Once you know what kind of thing you’re dealing with (frameworks) then you can go into why some frameworks qualify as a monad.
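A tiny Python sketch of that "don't call us, we'll call you" point, using three hypothetical minimal binds (not any library's API): each monad decides whether the callback runs never, once per element, or only later on demand.

```python
def nothing_bind(f):
    # Maybe's Nothing case: the callback is never invoked.
    return None

def list_bind(xs, f):
    # List: the callback runs once per element; results are concatenated.
    return [y for x in xs for y in f(x)]

def lazy_bind(thunk, f):
    # A lazy computation (a zero-argument thunk): the callback runs only
    # when the result is forced by calling it.
    return lambda: f(thunk())()

assert nothing_bind(lambda x: x) is None
assert list_bind([1, 2, 3], lambda x: [x * 10]) == [10, 20, 30]

log = []
def traced():
    log.append("ran")
    return 5

deferred = lazy_bind(traced, lambda x: (lambda: x + 1))
assert log == []        # nothing has run yet: the framework deferred the call
assert deferred() == 6  # forcing the result finally invokes the callbacks
assert log == ["ran"]
```

Same interface shape, three completely different control-flow policies; that is the framework-like opacity described above.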
List can be an instance of a monad, i.e. a monadic type.
I think the trick to understanding monads is to see what benefits the monad interface gives to the types that implement it.
Monads in software are just a standard API for any given type. That's it. There's no magic here. Just implement the standard and you have a monad.
It grinds my gears seeing monad tutorial after tutorial using the wrong metaphors or misleading explanations.
I don't think that helps people understand the _why_ of monads, though, and that's generally what people are looking for.
[Content Warning: Lisp]
We define a small OOP framework for monads, plus macros which then can succinctly define different kinds of monads, including generating a comprehension macro for each one.
https://hackage.haskell.org/package/Agda-2.6.4.2/docs/Agda-S...
Seriously, I've read things about lists and nondeterminism a few times in this thread, and I can't help but wonder if "you guys" (functional programming nerds, maybe?) use the word "nondeterministic" differently than the rest of the world?
If not, I'd love a good explanation about what makes lists non-deterministic, and why we would want that, and why they seem to be perfectly deterministic in imperative programming languages.
Let's start with function composition. We know that for any two types A and B we can consider functions from A to B, written A -> B. We can also compose them, the heart of sequentiality. If f: A -> B and g: B -> C then we might write (f;g) or (g . f) as two different, equivalent syntaxes for doing one thing and then the other, f and then g.
I'll posit this is an extremely fundamental idea of "sequence". Sure something like [a, b, c] is also a sequence, but (f;g) really shows us the idea of piping, of one operation following the first. This is because of how composition is only defined for things with compatible input and output types. It's a little implicit promise that we're feeding the output of f into g, not just putting them side-by-side on the shelf to admire.
Anyway, we characterize composition in two ways. First, we want to be clear that composition only cares about the order that the pipes are plugged together, not how you assemble them. Specifically, for three functions, f: A->B, g: B->C, h: C->D, (f;g);h = f;(g;h). The parentheses don't matter.
Second, we know that for any type A there's the "do nothing" identity function id_A: A->A. This doesn't have to exist, but it does and it's useful. It helps us characterize composition again by saying that f;id = id;f = f. If you're playing along by metaphor to lists, id is the empty list.
Together, composition and identity and the rules of associativity (parentheses don't matter) and how we can omit identity really serve to show what the idea of "sequences of pipes" mean. This is a super popular structure (technically, a category) and whenever you see it you can get a large intuition that some kind of sequencing might be happening.
Now, let's consider a slightly different sort of function. Given any two types, what about functions A -> F B for some fixed type constructor F? F here exists to somehow "modulate" B, to annotate it with additional meaning. Having a value of type F B is kind of like having a value of type B, but seen through some kind of lens.
Presumably, we care about that particular sort of lens and you can go look up dozens of useful choices of F later, but for now we can just focus on how functions A -> F B sort of still look like little machines that we might want to pipe together. Maybe we'd like there to be composition and identity here as well.
It should be obvious that we can't use identity or composition from normal function spaces. They don't type-check (id_A: A -> A, not A -> F A) and they don't semantically make sense (we don't offhand have a way to get Bs out of an F B, which would be the obvious way to "pipe" the result onward in composition).
But let's say that for some type constructors F, they did make sense. We'd have for any type A a function pure_A: A -> F A as well as a kind of composition such that f: A -> F B and g: B -> F C become f >=> g : A -> F C. These operations might only exist for some kinds of F, but whenever they do exist we'd again capture this very primal form of sequencing that we had with functions above.
We'd again capture the idea of little A -> F B machines which can be plugged into one another as long as their input and output types align and built into larger and larger sequences of piped machines. It's a very pleasant kind of structure, easy to work with.
And those F which support these operations (and follow the associativity and identity rules) are exactly the things we call monads. They're type constructors which allow for sequential piping very similar to how we can compose normal functions.
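The piping described above can be sketched in Python, crudely modeling F as "Optional" via None-or-value (an assumption of this sketch; real monad libraries keep the wrapper explicit). `pure` is the lifted identity and `kleisli` plays the role of the >=> composition.

```python
def pure(a):
    # The lifted identity A -> F A: a plain value stands for "present",
    # None stands for "absent".
    return a

def kleisli(f, g):
    """Compose f: A -> Optional[B] with g: B -> Optional[C] into A -> Optional[C]."""
    def composed(a):
        b = f(a)
        return None if b is None else g(b)
    return composed

def safe_div(x):
    return None if x == 0 else 100 / x   # fails on zero

def safe_sqrt(x):
    return None if x < 0 else x ** 0.5   # fails on negatives

pipeline = kleisli(safe_div, safe_sqrt)
assert pipeline(4) == 5.0    # 100/4 = 25, sqrt(25) = 5
assert pipeline(0) is None   # failure short-circuits the rest of the pipe

# The identity laws, mirroring f;id = id;f = f for ordinary composition:
assert kleisli(pure, safe_div)(4) == safe_div(4)
assert kleisli(safe_div, pure)(4) == safe_div(4)
```

Each little A -> F B machine plugs into the next only when the types line up, and the composition handles the "modulation" (here, absence) in between.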
I do not even know what a monoid or an endofunctor is. While I enjoy math, despite not being the best at it, I am confident I never made it this far in my studies. I looked at the Wikipedia definitions, and I am even more confused now.
This is a book chapter, and you need the preceding chapters to grasp it I think. I'm still in the middle of it.
I mean really. Look at posts like this[0]. What does this give you? Nothing, in practical reality. Nothing.
How would you know? That's the classic Blub Paradox.
Being able to write a custom monad and then leverage the vast array of libraries that already exist has helped me deliver functionality to end users quicker, more maintainably, and with lower defect rates. They don't let you do anything that you couldn't do by writing it out longhand. But just like using generic container libraries instead of writing a specific container for every type you want to handle collections of, they're extremely helpful.