The two experiences couldn't be more different. While I loved the great development speed for my personal projects, where I write more code than I read, joining an existing project requires the opposite: reading more code than you write. And I can only repeat what many people say — dynamic typing makes this so much more difficult. For most code changes, I am not 100% certain which code paths are affected without digging a lot through the code base. I've introduced bugs which would have been caught with static typing.
So, in conclusion, I'm bullish on Gleam, but also on other (static) languages embracing the cooperative green-thread/actor model of concurrency, like Kotlin (with the JVM's virtual threads). (On another note, I personally also dislike Phoenix LiveView and the general tendency of focusing on ambiguous concepts like Phoenix Contexts and other Domain Driven Design stuff.)
Fun to develop and solo administer. Small teams with a well known codebase can do amazing things. I work at orgs with multiple teams and new hires who don't know the codebase yet.
For me, the sweet spot is Go.
What actually drove me nuts was the absence of guards and meaningful static analysis on return values. Even in my small but nontrivial personal codebase, I had to debug mysterious data mismatches after every refactor. I ended up with monad-like value checking before abandoning Elixir for my compiler.
What you're describing are the same uncertainties I used to have writing PHP a long time ago, but since using optional types and the PHPStan checker, it kind of serves as a compiler pass that raises those issues. The benefit being that I can still be lazy and not fully type out a program when I prototype the problem on my first pass.
Sort of. Developers provide typespecs, which act as hints, and use Dialyzer to find issues before runtime.
It’s in the works, and recent versions of the compiler already catch some type errors at compile time, but it’s nothing remotely close to what you get from TypeScript, or from any statically typed language like Go, Rust, Java, etc.
A blog post by them about this: https://building.nubank.com/tech-perspectives-behind-nubanks...
Are they moving from Clojure to Elixir, or adding it?
Their tech stack is probably enormous, it wouldn’t surprise me if they’re using both for different things
They have a success typing system (which isn't very good) and are working on a fuller system (which isn't very mature).
If typing is the only thing keeping you out, have a look at Gleam.
Having worked with Elixir professionally for the last six years now, it is a very mature platform, very performant and offers many things that are hard in other languages right out of the box.
I see this phrase around a lot and I wish I could understand it better, having not worked with Erlang and only a teeny tiny bit with Elixir.
If I ship a feature that has a type error on some code path and it errors in production, I've now shipped a bug to my customer who was relying on that code path.
How is "let it crash" helpful to my customer who now needs to wait for the issue to be noticed, resolved, a fix deployed, etc.?
Let it crash is more about auto-restarting and less about type bugs. If you have a predictable bug in your code path that always breaks something, it just means you never tested it, and restarting will not fix it. But these kinds of straightforward, easy-to-reproduce bugs are also easy to test the hell out of.
But if you have a weird bug in a finite state machine that gets itself into a corner, but can be restarted -- "let it crash" helps you out.
Consider hot reload — a field exists in a new version of a record but doesn't exist in an old one. You can write a migration in the GenServer to take care of it, but if you didn't and it errored out, it's not the end of the world: the process will restart and the problem will go away.
The reason this is a sound strategy is that in larger systems, there will be bugs. And some of those bugs will have to do with concurrency. This means a retry is very likely to solve the bug if it only occurs relatively rarely. In a sense, it's the observation that it is easier to detect a concurrency bug than it is to fix it. Any larger system is safe because there's this onion-layered protection approach in place so a single error won't always become fatal to your system.
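As a toy illustration of that reasoning (a hedged Go sketch with invented names — `makeFlaky`, `retry` — not anything from OTP): a bug that only fires under a rare interleaving looks, from a supervisor's viewpoint, like a transient failure, so simply running the operation again usually clears it.

```go
package main

import "fmt"

// makeFlaky simulates a rare concurrency bug: the operation fails on
// its first invocation (the "bad interleaving") and succeeds afterwards.
func makeFlaky() func() error {
	calls := 0
	return func() error {
		calls++
		if calls == 1 {
			return fmt.Errorf("transient concurrency error")
		}
		return nil
	}
}

// retry is the supervisor's whole strategy: run the operation again,
// up to a limit. It returns nil as soon as one attempt succeeds.
func retry(op func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
	}
	return err
}

func main() {
	// The first attempt fails, the retry succeeds.
	fmt.Println("recovered:", retry(makeFlaky(), 3) == nil) // recovered: true
}
```

The point of the sketch is only the shape of the argument: detecting the failure and retrying is cheap, while actually finding and fixing the interleaving bug is not.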
It's not really about types. It's about concurrency and also distribution. Type systems help eradicate bugs, but it's a different class of bugs those systems tend to be great at mitigating.
However, if you do ship a bug to a customer, it's often the case you don't have to fix said bug right away, because it doesn't let the rest of the application crash, so no other customer is affected by this. And you can wait until the weekend is over in many cases. Then triage the worst bugs top-down when you have time to do so.
The farther you get from that being an issue, the less useful the "let it crash" philosophy becomes, e.g., if I hit "bold" in my word processor and it fails for some reason, "let it crash" is probably not going to be all that helpful overall.
I have seen systems that "should" have been failures in the field be held together by Erlang's restart methodology. We still had to fix the bugs, but it bought us time to do it and prevented the bad deployments from being immediate problems. But it doesn't apply to everything equally by any means.
"Crashing is loud" below is a phrase to combine with "remote error recovery" from the link above. Erlang/OTP wants application structure that is peculiar to it, and makes that structure feel ergonomic.
> If I ship a feature that has a type error on some code path ... How is "let it crash" helpful to my customer?
The crash can be confined to the feature instead of taking down the entire app or corrupting its state. With a well-designed supervision structure, you can also provide feedback that makes the error report easier to solve.
However, while a type error on some feature path is exactly where type annotations make sense, type annotations can only capture a limited set of invariants. Stronger type systems encode more complex invariants, but at a cost. "Let it crash" means bringing a supervisor with simple behavior (usually restart), or a human, into the loop when you leave the happy path.
If a "human" has to enter the loop when a crash occurs, this limits the kind of system you can write.
I had to work on a system where a GenServer was responding to requests from a machine, sent frequently (not high frequency, but a few times per second). If for some reason the client misbehaves, or behaves properly but happens to hit a code path with a type error, the only option given by "let it crash" was to, well... crash the actor, restart the actor, then receive the same message again, crash the actor, restart the actor, and so on — until eventually you crash the supervisor, which restarts and receives the same message, etc.
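That "poison message" failure mode can be sketched in Go (a hypothetical toy, not OTP; `handle` and `supervise` are invented names). The same malformed message is redelivered after every restart, so restarting cannot help — the restart budget is spent and the crash escalates:

```go
package main

import "fmt"

// handle stands in for the actor's receive loop. The type assertion is
// the buggy code path: it panics whenever msg is not an int, and the
// deferred recover converts that crash into an error for the supervisor.
func handle(msg interface{}) (err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("crashed: %v", r)
		}
	}()
	_ = msg.(int) // panics on the poison message
	return nil
}

// supervise restarts the handler on the same message until it either
// succeeds or the restart budget is spent — at which point the crash
// escalates, like an OTP supervisor hitting its restart intensity.
func supervise(msg interface{}, maxRestarts int) (attempts int, escalated bool) {
	for attempts = 1; attempts <= 1+maxRestarts; attempts++ {
		if handle(msg) == nil {
			return attempts, false
		}
	}
	return attempts - 1, true
}

func main() {
	a, esc := supervise("poison", 3)
	fmt.Printf("attempts=%d escalated=%v\n", a, esc) // attempts=4 escalated=true
	a, esc = supervise(7, 3)
	fmt.Printf("attempts=%d escalated=%v\n", a, esc) // attempts=1 escalated=false
}
```

Restarting only helps when the next message (or the next interleaving) is different; with a deterministic bug on a redelivered message, it just defers the escalation.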
It's much more suitable as a replacement for adding try / catch everywhere and having to manually bubble exceptions.
So sure, the code with the error won't work (it wouldn't work in any language - you can make an error in all of them), but you will get a nice, full stack trace and the other processes in your VM won't be impacted at all. You won't bring down the service with a crash. Sometimes this is undesirable - you could deploy a service where the only endpoint that functions is the health check - but generally people don't do that.
Isn't that also covered at the framework (Rails, Django, whatever) level in other languages?
Let It Crash refers to a sort of middle ground between returning an error code and throwing an exception. It does not directly address your customer's need, and you are right that they are facing a bug.
So if you were to use Go with the Let It Crash ethos, say, you would write a lot of functions with the same template: they take an ID and a channel, they defer a call to recover from panics, and on panic or success they send a {pid int, success bool, error interface{}} to the channel — and these are only ever run as goroutines.
Because this is how you write everything, you have some goroutines that supervise other goroutines — for example, auto-restarting another goroutine with exponential backoff. The default is also to panic on every error rather than writing endless "if err != nil { return nil, err }" statements. You trust that you are always in the middle of such a supervision tree and that someone has already thought about how to handle uncaught errors, because supervision trees are just the style of program you write.

Say you lose your connection to the database — it goes down for maintenance or something. The connection pool for the database was a separate goroutine in your application, and that goroutine is now in CrashLoopBackoff. But your application doesn't crash. If it powers an HTTP server, then while the database is down it responds just fine to any requests that don't use the database, and returns HTTP 500 on all the requests that do. Why? Because your HTTP library allocates a new goroutine for every request it handles, and when one of those panics, by default it doesn't retry and closes the connection with HTTP 500. Similarly for your broken code path: we 500 the particular requests whose x.(X) assertion fails, we log the error, but all other requests are peachy keen — we didn't panic the whole server.
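A minimal sketch of that per-goroutine template, with invented names (`Result`, `run`, and the work functions are illustration only): each worker defers a recover, and win or lose it reports its outcome on a channel that a supervisor goroutine can watch.

```go
package main

import "fmt"

// Result echoes the {pid, success, error} triple described above:
// the message each supervised goroutine sends back to its supervisor.
type Result struct {
	ID      int
	Success bool
	Err     interface{}
}

// run wraps any work function in the "let it crash" template: it
// recovers from panics and always reports the outcome on the channel,
// so a supervisor can decide whether to restart the worker.
func run(id int, work func() error, report chan<- Result) {
	defer func() {
		if r := recover(); r != nil {
			report <- Result{ID: id, Success: false, Err: r}
		}
	}()
	if err := work(); err != nil {
		report <- Result{ID: id, Success: false, Err: err}
		return
	}
	report <- Result{ID: id, Success: true}
}

func main() {
	report := make(chan Result, 2)
	go run(1, func() error { return nil }, report)
	go run(2, func() error { panic("boom") }, report) // crashes, but is contained
	for i := 0; i < 2; i++ {
		res := <-report // arrival order is nondeterministic
		fmt.Printf("worker %d success=%v\n", res.ID, res.Success)
	}
}
```

Worker 2 panics, but the panic never leaves its goroutine — the supervisor just sees a failed Result and can restart it.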
Now that is different from the first thing that your parent commenter said to you, which is that the default idiom is to do something like this:
type Message struct {
    MessageType string
    Args        interface{}
    Caller      chan<- Message
}

// ...
msg := <-myMailbox
switch msg.MessageType {
case "allocate":
    toAllocate := msg.Args.(int)
    if allocated[toAllocate] {
        msg.Caller <- Message{"fail", fmt.Errorf("already allocated"), myMailbox}
    } else {
        // Save this somewhere, then
        msg.Caller <- Message{"ok", nil, myMailbox}
    }
}
With a bit of discipline, this emulates Haskell algebraic data types, which can give you a sort of runtime guarantee that bad code looks bad (imagine switching on an enum: `case TypeFoo: foo := arg.(Foo)` — if you put something wrong in there, it is very easy to spot during code review because the format is so formulaic). So the idea is that your type assertions don't crash the program, and they are usually correct, because you send everything like a sum type.
If you're at all interested, I'd suggest doing the basic and OTP tutorials on the Elixir website. Takes about two hours. Seeing what's included and how it works is probably the strongest sales pitch.
There is also pattern matching and guard clauses so you can write something like:
def add(a, b) when is_integer(a) and is_integer(b), do: a + b
def add(_, _), do: :error
It’s up to personal preference and the exact context if you want a fall through case like this. Could also have it raise an error if that is preferred. Not including the fallback case will cause an error if the conditions aren’t met for values passed to the function.
It reminds me of the not-missed phpspec, in a worse way, because at least with PHP the IDE was mostly writing it itself and you didn't need to add the function name to them (easily missed when copy/pasting).
Typespec is an opt-in type hint for development and build time.
I find them much better suited for specific tasks where there is little overlap or repetition.
https://hexdocs.pm/elixir/main/gradual-set-theoretic-types.h...
There's also tooling like dialyzer, and a good LSP catches a lot too. The language itself has some characteristics that catch errors too, like pattern matching and guard clauses.
With all that said, I'm still very keen for static typing. In the data world we mostly start with Python without mypy, and it's pretty hard to go back.
The eng department velocity will slow as code complexity grows and teams change. Dynamic typing makes this worse.
They are actively shipping a type system for Elixir though, which as far as I understand is pretty similar to TS so, great!
But pattern matching in Erlang does do a lot of the heavy lifting in terms of keeping the variable space limited per unit of code which tends to reduce nesting and amount of code to ingest to understand the behavior you care about at any moment.
there are other things that contribute to this like pretty universal conventions on function names matching expected outputs and argument ordering.
it does suck hard when library authors fail to observe those conventions, or when llms try to pipe values into erlang functions, and yes, it WOULD be nice for the compiler to catch these but you'll usually catch those pretty quickly. you're writing tests (not for the specific reason of catching type errors), right? right?
It seems like the Elixir/Erlang community is aware of this, as is Ruby, but it's a rather large hole they have to dig themselves out of and I didn't feel particularly safe using the tools today.
I've heard a lot of good things about the Erlang runtime and I did really like Elixir's pipe operator, so it was unfortunate.
Yes you are. First of all, there isn't such a thing as "strict typing"; types are either static/dynamic and/or strong/weak. I suppose you meant Elixir has no static types. It is, however, a strongly typed language.
And just like it usually happens, static typing enthusiasts often miss several key insights when confronting dynamically typed languages like Clojure or Elixir (which was inspired by ideas implemented in Clojure).
It's not simply "white" and "black", just like everything else in the natural world.
You have to address:
- Runtime flexibility vs. compile-time safety trade-offs — like most things, these have a price; nothing is free.
- Different error handling philosophies. Sometimes, designing systems that gracefully handle and recover from runtime failures makes far more resilient software.
- Expressiveness benefits. Dynamic typing often enables more concise, polymorphic code.
- Testing culture differences. Dynamic languages often foster stronger testing practices, as comprehensive test suites can provide confidence comparable to — or even exceeding — static type checking.
- Metaprogramming power. Macros and runtime introspection enable powerful abstractions that can be difficult in statically typed languages.
- Gradual typing possibilities. There are things you can do in Clojure spec that are far more difficult to achieve even in systems like Liquid Haskell or other advanced static type systems.
The bottom line: There are only two absolutely guaranteed ways to build bug-free, resilient, maintainable software. Two. And they are not static vs. dynamic typing. Two ways. Thing is - we humans have yet to discover either of those two.
They clearly said they "can't go back to" it, meaning they've experienced both, are aware of the trade-offs, and have decided they prefer static types.
> Gradual typing possibilities. There are things you can do in Clojure spec that are far more difficult to achieve even in systems like Liquid Haskell or other advanced static type systems.
That's great for clojure and python and PHP, but we're not talking about them.
Dynamic typing also varies - there's type introspection, runtime type modification aka monkey patching, different type checking strategies - duck typing & protocol checking, lazy & eager, contracts, guards and pattern matching; object models for single & multiple dispatch, method resolution order, delegation & inheritance, mixins, traits, inheritance chains, metaprogramming: reflection, code generation, proxies, metacircular evaluation, homoiconicity; there are memory and performance strategies: JIT, inline caching, hidden classes/maps; there are error handling ways, interoperability - FFI type marshaling, type hinting, etc. etc.
Like I said already - things aren't that simple, there isn't "yes" or "no" answer to this. "Preferring" only static typing or choosing solely dynamic typing is like insisting on using only a hammer or only a screwdriver to build a house. Different tasks call for different tools, and skilled developers know when to reach for each one. Static typing gives you the safety net and blueprints for large-scale construction, while dynamic typing offers the flexibility to quickly prototype and adapt on the fly. The best builders keep both in their toolbox and choose based on what they're building, not ideology.
In that sense, the OP is wrong - you can't judge pretty much any programming language solely based on one specific aspect of that PL, one has to try the "holistic" experience and decide if that PL is good for them, for their team and for the project(s) they're building.
- Dynamic languages can harbor bugs that only surface in production, sometimes in rarely-executed code paths, yes. However, some dynamically typed languages do offer various tools to mitigate that. For example, take ClojureScript — a dynamically/strongly typed language — and compare it with TypeScript. The type safety of compiled TypeScript completely evaporates at runtime: type annotations are gone, leaving you open to potential type mismatches at API boundaries, with no protection against other JS code that doesn't respect your types. In comparison, ClojureScript retains its strong typing guarantees at runtime. This is why many TS projects end up adding runtime validation libraries (like Zod or io-ts) to get back some of that runtime safety — essentially manually adding what CLJS provides more naturally. If you add Malli or Spec on top, you can express constraints that make TypeScript's type system look primitive — simple things like "the end-date must be after the start-date" would require boilerplate in TS; in CLJS it's a simple two-liner.
- Static type systems absolutely shine for refactoring assistance, that's true. However, structural editing in Lisp is a powerful refactoring tool that offers different advantages than static typing. I'm sorry once again for changing the goalposts - I just can't speak specifically for Elixir on this point. Structural editing guarantees syntactic correctness, gives you semantic-preserving transformations, allows fearless large-scale restructuring. You can even easily write refactoring functions that manipulate your codebase programmatically.
- Yes, static typing does encourage (or require) more deliberate API design and data modeling early on, which can prevent architectural mistakes. On the other hand many dynamically typed systems allow you to prototype and build much more rapidly.
- Long-term maintenance, sure, I'll give a point to statically typed systems here, but honestly, some dynamically typed languages are really, really good in that aspect. Not every single dynamic language is doomed to "write once, debug forever" characterization. Emacs is a great example - some code in it is from 1980s and it still runs perfectly today - there's almost legendary backward compatibility.
Pragmatically speaking, from my long-term experience of writing code in various programming languages, the outcome often depends not on technical things but cultural factors. A team working with an incredibly flexible and sophisticated static type system can sometimes create horrifically complex, unmaintainable codebases and the opposite is equally true. There's just not enough irrefutable proof either way for granting any tactical or strategic advantage in a general sense. And I'm afraid there will never be any and we'll all be doomed to succumb to endless debates on this topic.
That's true but some languages don't let you ship code to prod that multiplies files by 9, or that subtracts squids from apricots
I don't understand why when someone mentions the word "dynamic", programmers automatically think javascript, php, bash or awk. Some dynamically typed PLs have advanced type systems. Please stop fetishizing over one-time 'uncaught NPE in production' PTSD and acting as if refusing to use a statically typed PL means we're all gonna die.
In Haskell (typeclasses), Rust (traits), and Elixir, comparison is polymorphic, so code you write intending to work on numbers will run but give wrong output when passed strings. In Perl and Bash, < is just numeric comparison; you need a different operator to compare strings.
In the case of comparison, Elixir is more polymorphic than even Python and Ruby — at least in those languages, if you do 3 < "a" you get a runtime error — but in general Elixir is less polymorphic, i.e., + works only on numbers, not also on strings, lists, Dates, and other objects as in Python or JS.
I also experienced more type errors in Clojure compared to Common Lisp, as Clojure code is much more generic by default. Of course no one would want to code in Rust without traits; obviously there are tradeoffs here, as you're one of the few in this thread to recognize. On one axis, the more bugs a type system can catch, the less expressive and generic code can be. On another axis, advanced type systems with features like GADTs can type-check some expressive code, but at the cost of increased complexity. You can spend a lot more time trying to understand a codebase doing advanced type-system stuff than it would take to just fix the occasional runtime error without it.
A lot of people in this thread are promoting Gleam as if it's strictly better than Elixir because it's statically typed, when that just means they chose a different set of tradeoffs. Gleam can never have a web framework like Phoenix or Ash in Elixir, as they've rejected metaprogramming and even traits/typeclasses.
>>> open('foo') * 3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for *: '_io.TextIOWrapper' and 'int'
You have to go to some length to get Python to mix types so badly.

Well, yes, Python can sure feel pretty fucking awkward from both perspectives. It started as fully dynamic, then added type hints, but that's not the main problem with it, in my opinion; the problem is that you're still passing around opaque objects, not transparent data.
Compare it with Clojure (and to certain extent Elixir as well). From their view, static typing often feels like wearing a suit of armor to do yoga - it protects against the wrong things while making the important movements harder.
- Most bugs aren't type errors - they're logic errors
- Data is just data - maps, vectors, sets - not opaque objects
- You can literally see your data structure: {:name "Alice" :age 30}
- The interesting constraints (like "end-date > start-date") are semantic, not structural and most static type systems excel at structural checking but struggle with semantic rules - e.g., "Is this user authorized to perform this action?" is nearly impossible to verify with static type systems.
What static types typically struggle with:
- Business rules and invariants
- Relationships between values
- Runtime-dependent constraints
- The actual "correctness" that matters
Static type systems end up creating complex type hierarchies to model what Clojure does with simple predicates. You need dependent types or refinement types to express what clojure.spec handles naturally.
Elixir also uses transparent data structures — maps, tuples, lists, and structs are just maps. It has powerful pattern-matching machinery instead of type hierarchies — you match on shape, not class. Elixir thinks in terms of messages between processes, and type safety matters less when processes are isolated.
Python's type hints do help with IDE support and catching basic errors, yet, they don't help with the semantic constraints that matter. They add ceremony without data transparency. You still need runtime validation for real invariants.
So Python gets some static analysis benefits but never gains a truly powerful type system like Haskell's, while also never getting "it's just data" simplicity. So yes, in that sense, Python kinda gets "meh" from both camps.
iex(2)> "/home/cess11" * 9
** (ArithmeticError) bad argument in arithmetic expression: "/home/cess11" * 9
:erlang.*("/home/cess11", 9)
iex:2: (file)
Documentation is full of type information:

    defmodule Multiply do
      def m9(m1), do: m1 * 9
    end

    # elsewhere...
    defmodule Caller do
      def doit() do
        Multiply.m9(2)
        Multiply.m9("hi")
      end
    end

It won't raise an exception or give you a warning while compiling it (tested with 1.18.4). Even adding `@spec m9(integer()) :: integer()` above its definition doesn't do anything.

For a while I extrapolated my experience to mean "static typing is awesome and dynamic typing is horrible", but then I started learning Clojure, and my opinions have changed a lot.
There are a ton of things that make a codebase nice to work with. I think static typing raises the quality floor significantly, but it isn’t a requirement. Some other things that contribute to a good experience are
- good tests, obviously. Especially property based tests using stuff like test.check
- lack of bad tests. At work we have a very well-typed codebase, but the tests are horrible. 100 lines of mocks per 20 lines of logic. They’re brittle and don’t catch any real bugs.
- a repl!!
- other developers and managers who actually care about quality
All four of these examples seem pretty easy to find in the clojure world. Most people don’t learn clojure just to get a job, which is maybe a hidden feature of building your company with a niche language.
At the same time, I recognize that most of those examples are “skill issues”. Static typing does a good job of erasing certain skill issues. Which is great, because none of us are perfect all the time!
Camping too long on either side can create wrong assumptions — that all or most problems are type problems; it can make you conflate implementation with concepts and miss the real tradeoffs; you start ignoring context and scale.
Static types aren't always about "safety vs. flexibility" - sometimes they're about tooling, refactoring confidence, or documentation. Dynamic types aren't always about "rapid prototyping" - sometimes they enable architectural patterns that are genuinely difficult to express statically.
One really needs to see how Rust's ownership system or Haskell's type inference offers completely different experiences, or how Clojure's emphasis on immutability changes the dynamic typing game entirely.
There are really good arguments both ways.
Just use what you need or go with whatever your current project dictates. Over time you will probably feel drawn to both, for different reasons.
I agree 100%. At first I liked C# and Java types, but then I moved to Python and was happy. Learning some TypeScript pulled me back into the static typing camp, yet then discovering Clojure revealed to me how needlessly cumbersome and almost impractical the TS type system felt in comparison. Experimenting with Haskell and looking into Rust gave me a different perspective once again. If there's a lesson I've learned, it's that my preferences at any point in life are just that — preferences that seldom represent universal truths, particularly when no definitive, unambiguous answer even exists.
It's too sad that the .NET world still largely operates in C# land and F# unfairly gets ignored even within the .NET community. I have never found a team where F# is preferred over other options, and I'd absolutely love to see what that's like. Unfortunately, F#'s hiring story is far worse than that of other less popular languages, and even if something comes up, you most likely end up supporting other projects in C#. Honestly, even though C# is a fine language by many measures, I see it as "past experience" that I'm not too eager to make my daily job again.
Static types and unit tests are not equivalent either. A static type check is a proof that the code is constructed in a valid way. Unit tests only verify certain input-output combinations.
lol
atom, binary, boolean, function, list, map, pid, reference, integer, float, and tuple
There are a few others but they are generally special cases of the ones above. Having so few data types tends to make it much more obvious what you’re working with and what operations are available. Additionally, because behavior is completely separate from data it’s infinitely easier to know what you can and can’t do with a given value.
Ruby being dynamic drove me insane at times, but Elixir/Erlang being dynamic has been a boon to productivity and quality of life. I recently had to write some TypeScript and was losing my mind fighting the compiler, even though I knew at runtime everything would be fine. Eventually I slathered enough “any” on to make the burning stop… But! That’s something I haven’t had to do in years, and it was 100% due to type system chicanery and not preventing a bug or making the underlying code more sound.
There are still some occasions where having some static typing would be nice— but they’re pretty rare and often only for things that are extremely critical or expensive to fix. And IMHO even in those cases, Elixir’s clarity and lack of (implicit) state generally make up for it.
Sure, the nature of Elixir probably makes it easier but I find little joy in dynamic whack-a-mole and mental gymnastics to infer types instead of fricking actually being able to see them immediately.
I could go commando in TS as well and switch to JS and JSDoc, leaving everything gradually typed and probably be fine but I'd feel terribly sorry for anyone else reading that code afterwards. It'd be especially silly since I can now just infer my auto-generated Postgres zod schemas with little effort. Moreover, a good type system basically eliminates typing-related bugs which you guys apparently still have.
So please, don't over-generalize just because you think you got it figured out.
I feel terribly sorry for people that have to read typescript and work with TS zealots
It sounds like you’re really trying to justify something and that’s great for you. I’m really happy for you. Keep it up. May you soar where no junior dev has dared to soar before. God speed.
And look, I'm first to admit that TS type system isn't perfect (and it can cause some devs to go overboard) but I have read my share of Python scripts that were read-only from the minute they were born.
> It sounds like you’re really trying to justify something and that’s great for you. I’m really happy for you. Keep it up. May you soar where no junior dev has dared to soar before. God speed.
And please, your condescension just sounds like insecurity to me. It's highly amusing, though, that you try to play me down as a silly junior dev. I'm quite satisfied that my original assessment was correct.

Erlang OTP relies on being able to swap in new functions to do upgrades without downtime.
All of these would nominally be considered a type signature failure.
There are some very sharp Computer Scientists who believe static typing is unnecessary. Joe Armstrong (co-designer of Erlang) once said: "a type system wouldn't save your system if it were to get hit by lightning, but fault tolerance would"
I have the same attitude toward overly permissive type systems that I do toward the lack of memory safety in C: People sometimes say, "if you do it right then it isn't a problem," but since it objectively IS a problem in practice, I would rather use the tool that eliminates that problem entirely than deal with the clean-up when someone inevitably "does it wrong."
Long story short, a particular machine that joined the cluster that morning had some kind of CPU or memory flaw that flipped a bit sometimes. Our Elixir server was fine because we were matching on valid values. Imagine a typed language compiler that makes assumptions about things that "can't" happen because the code says it can't... yet it does.
For example
match parseFoo json with
| Ok foo ->
process foo
| Error message ->
print message
This will skip bad messages and be statically typed in the valid case.

As such, fault tolerance does not guarantee that your MRI scanner does not kill your patients.
You likely want both.
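For comparison, the same "skip bad messages" idea in Elixir relies on matching tagged tuples at runtime rather than on a static check. A minimal sketch (the `MessageHandler` module and its `parse_foo/1` stand-in parser are illustrative, not from any particular library):

```elixir
defmodule MessageHandler do
  # parse_foo/1 stands in for any parser that returns the
  # conventional {:ok, value} / {:error, reason} tagged tuples.
  def parse_foo(%{"id" => id}), do: {:ok, id}
  def parse_foo(_other), do: {:error, "missing id"}

  # Handle both outcomes explicitly; a bad message is skipped,
  # not crashed on -- but nothing forces us to cover both clauses.
  def handle(msg) do
    case parse_foo(msg) do
      {:ok, foo} -> {:processed, foo}
      {:error, reason} -> {:skipped, reason}
    end
  end
end
```

The difference from the statically typed version above is that forgetting the `{:error, _}` clause only surfaces at runtime as a `CaseClauseError`.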
Elixir's seamless pattern matching paradigm, IMO, largely negates the need for strict typing. If you write your function signatures to only accept data of the type / shape you need (which you are incentivized to do because it lets you unpack your data for easy processing), then you can write code just for the pretty path, where things are as expected, and do some generic coverage of the invalid state, where things aren't, rather than the norm in software development of "I covered all the individual failure states I could think of". This generic failure mode handling, too, greatly benefits from dynamic typing, since in my failure state, I by definition don't know exactly what the structure of my inputs are.
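A sketch of the pattern the comment describes: constrain the function head to the exact shape you need, and route everything else to one generic failure clause (the `Shipping` module is an invented example):

```elixir
defmodule Shipping do
  # Only the exact shape we need gets the happy path; the head
  # unpacks the data for us as a side effect of validating it.
  def label(%{name: name, address: %{zip: zip}}) when is_binary(zip) do
    {:ok, "#{name} / #{zip}"}
  end

  # Generic coverage of every invalid shape, instead of
  # enumerating individual failure cases.
  def label(other), do: {:error, {:bad_input, other}}
end
```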
However, Elixir desperately needs proper types. IMHO the need for types is in no way negated by pattern matching, though I can see hints of why you would say so.
> If you write your function signatures...
The point of types is to worry less when refactoring.
If you work at a place where you can define the architecture for the entire lifecycle of the application without ever needing to refactor, then sign me up! I want to work there.
I see this story again and again: some hacker makes a language, they hate types because they want to express themselves. The language gets traction. Now enterprise applications and projects with several devs use it, and typing becomes essential - so types will gradually be added.
What? Elm was literally the result of a single grad student's side project - Elm incorporates both a sound type system and FRP.
This has nothing to do with time. It was a decision not to support it.
What you are describing is a runtime concern that has nothing to do with types.
These issues are neither amplified nor alleviated by using types.
Very opposite the Go model, btw.
Erlang/Elixir's approach is to simply say, "It's gonna fail no matter how many precautions we take, so let's optimize for recovery."
Turns out, this works fantastically in both cellphone signaling, which is where OTP originated, as well as with webserving, where it is perfectly suited.
It just does not catch logic errors.
in other words, any error that doesn't occur right at start can be recovered from at least for all those operations that do not depend on that error being fixed.
Both because memory leaks are normal in typed languages - and usually do not matter in most serious applications - and because this class of errors is usually not what types catch.
Types have value when you 1) refactor and 2) have multiple people working on a code base.
The error you see when you don't have types is something like a BadArityError.
It WILL log the error, with a stacktrace, so you have that going for you
Note that even with typing, you cannot avoid all runtime errors
Also note that this tech was first battle-tested on cellphone networks, which are stellar on the reliability front
You get zero help and punished hard for failing.
I never did track down the last spot where we screwed that up. This was a system we shifted from statsd, so the offending callers were either working by accident or only killing some data points for one stat and nobody noticed.
So then OpenTelemetry.js had to start sanitizing its inputs and not assuming the compiler should catch it. I still think it odd that something called “.js” was actually “.ts” under the hood.
Getting an error on "Value: " + 2 is very annoying if that's what you wanted to do.
The solution here is Static typing, not Strong/Weak typing.
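Elixir itself illustrates the strong-but-dynamic quadrant of this breakdown: there is no implicit coercion, so a `1 + "2"` mistake raises at runtime instead of silently producing `"12"` the way JavaScript's weak typing would. A small sketch (variable bindings are used so the mismatch is caught at runtime, not by the compiler):

```elixir
# No implicit string-to-integer conversion: the addition raises
# ArithmeticError rather than coercing either operand.
n = "2"

add_result =
  try do
    1 + n
  rescue
    ArithmeticError -> :type_error
  end
```

Static typing would move this same error from runtime to compile time, which is the point being made above.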
I don't want to look at TIOBE, so let's look at the stack overflow survey from 2024. https://survey.stackoverflow.co/2024/technology#admired-and-...
Strong + Static: TypeScript, Java, C#, C++, Go, Rust, Kotlin, Dart, Swift, Visual Basic, Scala
Weak + Static: C
Strong + Dynamic: Python, Lua, Ruby, R, GDScript(*)
Weak + Dynamic: JavaScript, PHP, Matlab, Perl
At this point I'm reaching into the low percentages. I think it's pretty clear that Strongly + Statically typed languages are massively over-represented on the list.
Both Strongly and Weakly Dynamically typed languages are similarly represented.
Note: I'm open to editing the comment to move languages from and to various categories. I haven't used many of them, so correct me if I'm wrong.
not anymore. they were very popular in the industry at one point. very few other languages have so many independent implementations as smalltalk and lisp - a testament to their widespread use in the past.
Strong + Static: TypeScript; Weak + Dynamic: JavaScript
that doesn't make sense. typescript is javascript with types, it can't be both strong and weak at the same time.
but i believe we have a different definition of weak. just because javascript, php and perl have some implicit type conversions doesn't make them weakly typed. i believe it takes more than that. (in particular that you can't tell the type by looking at a value, and that you can use a value as one or another type without any conversion at all)
C is weakly typed, it was always a major criticism, C++ too i think (less sure about that).
once you correct for that you will notice that all languages in the strong and static category are less than 30 years old and many are much younger. (java being the oldest. but there are older ones like pike which is also strong and static and goes back to 1991)
the strong and dynamic category goes back at least a decade more. (if you include smalltalk and lisp)
what this does show is that static typing has experienced a renaissance in the last two decades, and also the worst is really using any form of weak typing.
i still don't get what makes strong and dynamic a bad combination, other than it's bad because it is not static.
I know that there are discussion about what strong vs weak even means, but I think most people would place the weak distinction way above yours on a possible weak-strong spectrum.
C can certainly be argued to be weak. My understanding is that it's mostly due to pointers (and void* especially). C++ is much better in this regard. I mostly just did not want to add a Weak + Static category just for one language.
Well, now that you've defined Strong to also include all of the languages I consider Weak, then yeah, no issues at all.
but the interesting question is really the one posed at the beginning. what makes strong but dynamic a bad combination?
i think we agree that weak is bad. implicit type conversions like 1+"2" range from the annoying to problematic and dangerous. if we eliminate weak that only leaves strong and static vs strong and dynamic.
i agree that strong and static is better in most cases; type declarations help, and pike, the language i use the most myself, which sits somewhere between python and go, has powerful types that are a joy to work with. for comparison, in typescript types come across as more annoying (in part because they are somewhat optional, and because they get lost at runtime, so they don't help as much as in truly typed languages). but strong and dynamic has shown itself to be a solid combination, especially with python and ruby, so i don't feel the combination is as bad as you seem to suggest.
Sure, Smalltalk isn't, but Lisp is a different story. In this context I assume we all mean to say "Lisp" and not "Common Lisp" specifically.
Lisp (as the entire family of PLs) is quite massively popular.
Standard rankings have major blind spots for Lisp measurement, they miss things like, for example, Emacs Lisp is everywhere. There's tons of Elisp on GitHub alone, and let me remind you, it's not a "general-purpose" language, its sole function is to be used to configure a text editor and there's mind-boggling amount of it out there. AutoLISP is heavily used in CAD/engineering but rarely discussed online. Many Lisp codebases are proprietary/internal. Also, dialect fragmentation artificially deflates numbers when measured separately - many rankings consider them different languages.
If you count all Lisp dialects together and include: Emacs Lisp codebases, AutoLISP scripts in engineering, Research/academic usage, Embedded Lisps in applications, Clojure on JVM and other platforms - babashka scripts, Flutter apps, Clojurescript web apps, etc;
...Lisp would likely rank much higher than typical surveys suggest - possibly in the top ten by actual lines of code in active use.
Log lines are whatever, but a system that goes into a crash loop or fails on most requests isn't great.
Far from it.
This is what Erlang has and it’s very convenient since once a design is fixed, I end up writing a spec to remind me what types the function expects.
@spec foo(String.t()) :: String.t()
def foo(bar)
is better than def foo(String.t() bar): String.t()
Thorough tests of the behavior of your system (which should be done whether the language is dynamic or not) catch the vast, vast majority of type errors. "More runtime errors" in a well designed codebase don't mean errors for the user - it means tests catch them
Seriously.. the secret to writing great dynamic code is getting very good at testing
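A minimal sketch of what "tests catch the type errors" looks like in practice with ExUnit (the `Price` module is an invented example; `assert_raise` checks that strong dynamic typing rejects the bad input instead of coercing it):

```elixir
ExUnit.start()

defmodule Price do
  # Dynamic code with no type annotations: nothing stops a caller
  # from passing a string amount -- except a test.
  def total(amount, qty), do: amount * qty
end

defmodule PriceTest do
  use ExUnit.Case

  test "total multiplies amount by quantity" do
    assert Price.total(250, 2) == 500
  end

  test "a string amount raises instead of silently coercing" do
    assert_raise ArithmeticError, fn -> Price.total("250", 2) end
  end
end
```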
And that was even before someone wrote DOOM in TS's type system.
Note: I've not yet done any serious web-development, mostly command line tools, which I realise is not DHH's main focus.
It combines the "opinionated" aspects of ruby and rails and the power of erlang. The BEAM is like no other runtime and is incredibly fun to work with and powerful once you get how to use genservers and supervision trees.
We use Elixir for Mocha, and my one issue with it (I disagree with OP on this) is that live-view is not better than React for writing consumer grade frontends. I wish Phoenix took a much stronger integration with React approach, that would finalize it as the top choice for a web stack.
If I didn't need native functionality, I'd probably just use the recently released `phoenix_vite`: https://github.com/LostKobrakai/phoenix_vite
Clojure brings about half the novelty of Elixir, runs on the JVM and still struggles to replace Java.
For what it’s worth, I’ve been using Elixir professionally for a few years now and haven’t touched Redis once.
Not sure why telling people they don’t need another service is bad for adoption?
(I know, modern Oracle has finally fixed this, but I have Oracle 11 and 12 systems which bring me daily joy)
If you want an in memory store without interfacing over HTTP, that is less foreign than ETS, try Duckdbex. Mix.install or add to mix.exs, then it's two lines and you get a connection to an ephemeral database with a SQL interface. Can probably do the same with SQLite but I've never done it. If your needs are simple you can just boot a GenServer with a K-V structure as state.
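The "boot a GenServer with a K-V structure as state" option from the comment above can be sketched in a few lines; the `KV` module name and its API are illustrative, not from any library:

```elixir
defmodule KV do
  use GenServer

  # Client API: a named singleton holding a plain map as state.
  def start_link(opts \\ []), do: GenServer.start_link(__MODULE__, %{}, Keyword.put_new(opts, :name, __MODULE__))
  def put(key, value), do: GenServer.cast(__MODULE__, {:put, key, value})
  def get(key), do: GenServer.call(__MODULE__, {:get, key})

  # Server callbacks: the state IS the key-value map.
  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast({:put, key, value}, state), do: {:noreply, Map.put(state, key, value)}

  @impl true
  def handle_call({:get, key}, _from, state), do: {:reply, Map.get(state, key), state}
end
```

Because casts and calls to the same process are serialized, a `get` issued after a `put` always sees the write.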
Pattern matching, the capture operator and functional programming style are the really "hostile" parts, in my experience.
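For readers who haven't hit these yet, a small sketch of the capture operator and pattern matching together, using only stdlib functions:

```elixir
# &(&1 * &1) is capture-operator shorthand for fn x -> x * x end,
# and {evens, odds} pattern-matches the tuple Enum.split_with/2 returns.
{evens, odds} =
  1..6
  |> Enum.map(&(&1 * &1))
  |> Enum.split_with(&(rem(&1, 2) == 0))
```

Dense, but once the `&1` placeholder convention clicks, it reads naturally.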
It was never marketed that way though, and then Typescript took over which makes things even harder.
is there a popular pattern, perhaps as used by whatsapp, i guess?
How Elixir Powers the BBC From PoC to Production at Scale - Ettore Berardi | ElixirConf EU 2025 https://www.youtube.com/watch?v=e99QDd0_C20&ab_channel=CodeS...
I've seen a world where we tried to ship an elixir app "as if it was a go microservice" - it does not work, does not help, and you lose a lot.
Also, I wish the author had addressed the static typing. This was my biggest pet peeve when working on a non trivial app for 5 years (Mistakes were made because actors are not familiar, and fixing those mistakes was hard because of a lack of static typing.) I wonder how the situation has changed lately with the introduction of _some_ type checking.
To my surprise, there isn't really a good story for building mobile apps for both Android and iOS with it, although it looks like it could be a great option for quick-turnaround mobile apps with a web or native frontend...
I know that there is something being worked on, eg. LiveView native: https://native.live/ , but that seems to target two entirely different frontend frameworks, one for each platform...
I started using capacitor as a wrapper for a HTML frontend, but I think I might potentially run into trouble when I'd try to move into production builds...
I think there's some space for research and maybe some nice starter packs / tutorials there... Because I think it is a big and pretty relevant market for browser-based apps, which Elixir seems to be very well suited to!
I'm grateful for any additional pointers, peace out! :)
We work with series of timespans a lot in our domain (ag tech irrigation automation), and I needed to write an algorithm that creates the time based “diff” comparison between two or more span series.
I’ve written this code a few times already in Swift (iOS app), Kotlin (Android app), and Python (firmware). Loop based and tricky around the edge cases.
Doing it in Elixir with pattern-based recursion was a pure joy. Every single test passed the first time I ran it.
I don’t love the Elixir syntax (its signal-to-noise ratio isn’t as good as, say, Python’s); I don’t love the lack of a good IDE experience that really leverages Elixir’s model. But I do love the mental computation model of Elixir, and the very simple set of rules, vs some of the other languages I have to work in.
I wish Elixir had more mindshare beside just LiveView and "real time" type functionality. Even building a GraphQL/JSON endpoint without real-time requirements, the functional nature (no side effects), pattern matching and ruby inspired syntax makes writing plain old JSON controllers a joy.
While Elixir might not have a package for every use case under the sun, the low level primitives are there.
It's like Rails, except there are far more resources for Rails to help you find out whether you made a mistake in the DSL.
I think it helped that at the time I was trying to build some pretty advanced filtering functionality using Ecto and was having a pretty tough time. While searching for solutions I saw a few mentions of Ash and that it could solve the problem out of the box.
After a few days of experiments, I found that it was able to do the filtering I wanted out of the box and was even more functional than what I was trying to build.
For reference, I was trying to add some tagging functionality to a resource and I wanted to be able to filter by that tag, including multiple tags. Can I do that in Ecto? Of course, but Ash provided that out of the box after modeling the resource.
As an example, I wanted to add support for adding tags to a resource, and support filtering for the resource by the tag, and I wanted to be able to do this filtering both through Elixir functions as well as GraphQL.
Can I do this with Ecto? Absolutely, but I'd have to build it all myself.
With Ash, I created a `Tag` and `Tagging` resource, adding a `many_to_many` block on my resource, marked it as public. Then I added the `AshGraphql.Resource` extension to my `Tag` resource, marked it as filterable and I was done.
Now I can filter and sort by tags, filter by where tags _don't_ exist and more. I didn't have to do anything other than model my domain, and Ash took care of the rest for me. Not only that, but it probably did a lot of things for me that I probably wouldn't have thought of, such as supporting multi-tenancy, policies and more.
It really is a lovely tool, I can't say enough good things about it.
Does it have a learning curve? Absolutely! Is it worth overcoming it? Again, absolutely!
policy action(:invite_user) do
forbid_unless actor_attribute_equals(:role, :admin)
authorize_if {App.Checks.OnlyAllowedRoles, roles: [:student, :parent]}
end
And what's nice is that these policies apply for both the API and the frontend code without having to do anything extra :)

Source: Has worked on a million-line Ruby on Rails codebase
Still, every release now contains new type system features. Next up is full type inference. [2] After that will be typed structs.
[1] José Valim giving his balanced view on type systems: https://www.youtube.com/watch?v=giYbq4HmfGA
YMMV, but I think it writes fine unit tests, but really sub par functional or end to end tests that need to check business logic. I think that's just a hard case for LLMS and not an elixir issue though.
Learning the concepts that embody Erlang such as tail recursion and function matching, make Erlang worth learning. Erlang is also a special mix of Prolog and Lisp.
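Those concepts carry over directly to Elixir. A minimal sketch of tail recursion with function matching (the `TailSum` module is an invented example):

```elixir
defmodule TailSum do
  # Public entry point seeds the accumulator.
  def sum(list), do: sum(list, 0)

  # Function-head matching splits the cases; the recursive call is
  # the last expression, so the BEAM reuses the stack frame.
  defp sum([], acc), do: acc
  defp sum([head | tail], acc), do: sum(tail, acc + head)
end
```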
For me Erlang was worth learning because it's so interestingly different to many other languages because of its Prolog roots.
I don't think it's really impeded my ability to learn or use Elixir. But I could also see how learning it and the underpinnings for Elixir could aid understanding too.
I picked up Joe's Erlang book years after out of pure joy/curiosity.
Especially with LLMs, totally unnecessary.
You don't need to learn OTP at first either — it'll just slow you down and confuse you. Learn Elixir syntax and semantics, learn how to build a Phoenix app or whatever, then dive into OTP once you're comfortable with the basics.
Erlang knowledge is not needed for building products with Elixir at all unless you want to go very in-depth.
TLDR: sure i could have done it in node but it would have been a LOT more work and less reliable
The one thing that always gets overlooked in discussions about which language is best is that most companies pick languages based on the ecosystems and products they buy/adopt; they don't pick a programming language from a catalog and then go looking for problems to solve with it.
Unless of course they are a startup trying to get hold of a new domain, many times creating the need for others to adopt a specific programming language, when their product gains enough market share, thus we are back into how (the other) companies pick programming languages.
That is how we end with languages that maybe should never have been mainstream, a lucky product, killer application, market adoption happens, people want to learn the language to get employed, and the adoption circle goes on, thus whatever misconceptions there are matter very little for most programming languages, once they pass the adoption threshold.
Half (if not more) of the things my company is working on right now would just not be a thing if we used Elixir. But instead, we're one of millions of companies re-inventing solutions to the same distributed computing problems using Node & AWS in a less optimal and worse-tested way. Such is life.
“Elixir’s foundations in functional programming…make it easier for large language models to reason about, generate, and test code. LLMs struggle with imperative codebases full of side effects, hidden state, and indirection…”
It seems a larger factor in LLM performance on a given language is the amount of code available to train on.
In fact, I would even go so far as to say that LLM code assistance may cause a boost in interest in those languages.
I don’t doubt that anecdotally you’ve had favorable results. However, I’m not aware of any research that backs up this claim. Or forget research - I’m not aware of any simple examples / prompts that would show it in a reproducible way across different LLMs.
Corpus size tends to dominate performance.
These things are literally days old. Claude 4, which is arguably the first truly useful LLM code assistant, released in May. Scientific research proceeds at a glacial pace. I'm just speaking from anecdotal experience. It may be a perceptual illusion, or it may be a real thing. Just my data point.
One could deduce that restrictive languages would naturally result in fewer runtime bugs produced by LLMs, since they do the exact same thing for people. For example, Elm is literally designed to produce NO runtime errors, so writing Elm with an LLM (which I haven't done) would ostensibly result in almost all errors being caught upfront.
This is not to say that other types of logic errors, redundancy, etc. wouldn't slip through without human intervention.
I'm not a fan of FFI stuff because I'm a simpleton, but have had an easy time incorporating both Python and Java in Elixir systems.
Talking about stuff like this:
nodes =
node_data
|> Input.split_by_line(trim: true)
|> Enum.map(fn <<
t::binary-size(3),
" = (",
l::binary-size(3),
", ",
r::binary-size(3),
")"
>> ->
{t, {l, r}}
end)
|> Enum.into(%{})
Elixir promotes a "do it all in one place" model—concurrency, distribution, fault tolerance—which can be powerful, but when you try to shoehorn that into a world built around ephemeral containers and orchestration, it starts to crack. The abstractions don’t always translate cleanly.
This opinion comes from experience: we’ve been migrating a fairly complex Elixir codebase to Go. It’s a language our team knows well and scales reliably in modern infra. At the end of the day, don’t get too attached to any language. Choose what aligns with your team’s strengths and your production reality.
If you don't know Elixir and the BEAM well, of course you're going to have a bad time. That's true of any language.
> what happens when the server restarts / connection is lost / server dies?
> you lose all of the current client state but you can work around this by persisting all the client state somewhere.
> oh, so why am i using live view again?
Not exactly, it's built to hold state in memory by default but doesn't assume how you want to handle deploys or server restarts. There's a built in restore mechanism for forms and it's trivial to shuffle state off to either the client/redis/your db[1]. You'd have the same problem if you were storing all your state in memory for any web application regardless of your library choice. Or you conversely have the problem that refreshing the page trashes client state.
So there are two things here: you don't have to use LiveView to use Elixir or Phoenix, and if you do, you just need to actually think about how you're solving problems. The state can get trashed anywhere for any number of reasons. Tossing it on the client and forgetting about it just moves the problem.
But that's the thing - traditional server-side web applications don't do this. The stateless request/response cycle of traditional server-rendered apps is a _huge_ advantage from a system design standpoint. State is bad. State is hard to manage correctly over time. Elixir makes it possible to manage this in-memory state relationship better than other languages, but it's still difficult to do at scale regardless.
LiveView turns stateless web applications into stateful web applications, and this is a problem most folks aren't considering when they see developer experience improvements the LiveView project claims. This is _the_ specific tradeoff that LiveView makes, and I wish folks wouldn't handwave it away as if it were trivial to manage. It's not. It's a fundamentally different client/server architecture.
Source/disclaimer: I work at a large company with a large LiveView deployment, we have spent a ton of time and money on it, and it's a pain in the ass. I still love Elixir, I just think LiveView is oversold.
And realistically there are cases where I’d use another tool.
i realize there is still a data-loss problem when there is state on the client, but there is a lot of simple stuff that causes problems if you are effectively reloading the page when the server disappears due to a redeploy or a crash.
for example i'm reading an email in my message client. i've scrolled half-way down the email. but now the server crashes i reconnect to live view and my scroll position when reading the email is reset.
when i look at live view i feel like it's written by people that have never deployed something in production. of course this is not really true, but i feel like it would be better if live-view people were more honest about the tradeoffs. it's a very complicated situation, and having some bad outcomes might be worth the increase in developer productivity, but i feel like live-view people just ignore the bad outcomes.
also, take a cloud deployment. websockets seem to be an inherent problem in cloud deployments, especially AWS. as far as i know, AWS does not expose an event to an instance behind a load balancer telling it that it is about to die. ideally, if you have a websocket instance with a client and you realize you are scheduled to be reaped, you would message the client that it needs to reconnect. then the client would reconnect to a server that would not be reaped and everything would be dandy. but AWS doesn't seem to have this functionality (or it's not easy to implement!) and, more importantly, live view does not expose any kind of hooks so you can have 'safe' server redeployment out of the box.
https://hexdocs.pm/phoenix_live_view/form-bindings.html#reco...
If I were starting a new company today though I'd probably go with Elixir, and then I simply wouldn't bother with containers, Kubernetes, and schedulers. Just run some servers and run the application on them, let them cluster, and generally lean into the advantages.
As a community, we have got to stop saying this stuff. It's false. Nothing about Elixir or k8s precludes using the other. Elixir applications can and do run just fine in containers and k8s deployments. In fact, they're complementary: https://dashbit.co/blog/kubernetes-and-the-erlang-vm-orchest...
Erlang is low level, lightweight processes and message passing - perfect for micro-services and containerisation.
What Erlang lacks are high level web-oriented packages, i.e. markdown conversion, CSS and JavaScript packaging, REST (not quite true: cowboy is fantastic) - for which though Erlang was never intended.
However the cool thing is that you can combine both in the same project, allowing you to have high level Elixir and low-level process management in one project. This is possible because both use the same virtual machine - BEAM.
For example, the Erlang VM clustering can make use of K8s for Service Discovery. You can do ephemeral containers and use readiness probes to create a " hand over" period where new instances can sync their data from old, about-to-be-replaced instances.
I decided to learn Clojure for my next language.
No, you're not alone. After learning Lisp, structural editing and REPL-driven-development, I just don't feel like needing to learn new languages no matter how powerful they seem to be. Lisps like Clojure are highly pragmatic and offer something fundamentally different from the endless parade of syntax-heavy languages that dominate the mainstream.
Once you've experienced the fluidity of paredit or parinfer, where you're editing the structure of your code rather than wrestling with textual representations, going back to manually balancing brackets, fixing indentation and carefully placing semicolons feels like reverting to a typewriter after using a word processor. The code becomes malleable in a way that's hard to appreciate until you've lived with it.
And the REPL changes everything about how you think and work. Instead of the write-compile-run-debug cycle, you're having a conversation with your running program. You can poke at functions, test hypotheses, build up solutions incrementally, and see immediate feedback. It's exploratory programming in the truest sense - you're not just writing code, you're discovering it.
The homoiconicity - code as data - means you're working in a language that can easily reason about and transform itself. Macros aren't just text substitution; they're proper AST transformations. This gives you a kind of expressive power that most languages can't match without tremendous complexity.
So when the latest trendy language appears with its new syntax and novel features, it often feels like rearranging deck furniture. Sure, it might have nice type inference or clever concurrency primitives, but you're still stuck in the old paradigm of fighting syntax and losing the conversational flow of development.
You've tasted something closer to the pure essence of computation, and it's hard to go back.
i am interested in clojure, but i am put off by it using the java run time. ;-)
Clojure runs not only on Java - you have Clojurescript, you have babashka and nbb, you have Clojure-Dart — if interested in building Flutter apps, you can even use Clojure with Python libs. If you need to target Lua, there's Fennel, which is similar as it's inspired by Clojure.
For me - Clojure is a hands-down best medium for data manipulation - there's just nothing better out there to explore some data - investigate APIs; sort, group, dice and slice any kind of data - CSVs, JSON, etc. Nothing else simply can match the joy how one could incrementally build up complex data transformations, it makes it incredibly productive for the "let me just quickly check this shit" scenarios that can easily turn into full analyses. REPL-driven nature of it makes it so much fun - you feel like you're playing a videogame.
I honestly wish every programmer knew at least some Clojure. I lost count of how many times I gave up on figuring out complex jq syntax and reached for Clojure instead. Or the times I'd quickly build a simple Playwright script for reproducible web-app bug trapping or quick data scraping that saved me hours of frustration and manual clicking.
Fennel -> Lua
Jank -> C++
can you elaborate please ? thanks !
Elixir abstracts that away and leaves a Ruby-like language that hides much away - which is good and fine.
Erlang is by no means a simple language to get one’s head around.
Processes, message passing, and behaviours are all completely first class in Elixir. There's no hiding. `spawn` is `spawn`. `send` is `!`. `GenServer` is `gen_server`. `@behaviour` is `-behaviour(...)`. The entire Erlang stdlib is available directly.
We use processes, messages, and behaviours all the time in regular Elixir work.
Elixir adds a different syntax (note that I did not say better), a macro system, a protocol system, its own stdlib functionality, some better defaults, and a build tool.
It's perfectly fine and reasonable to prefer Erlang (I learned Erlang before I learned Elixir), but for the benefit of other readers, they are really not that different. The distance between Elixir and Erlang is very small. They could almost be seen as dialects.
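To make the "no hiding" point concrete, the raw Erlang primitives are used directly from Elixir - spawn a process, send it a message, and pattern-match in `receive`:

```elixir
parent = self()

# spawn/1 is the same primitive as Erlang's spawn; the child
# blocks in receive until a matching message arrives.
child =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(child, {:ping, parent})

# The parent waits for the reply, with a timeout as a safety net.
reply =
  receive do
    :pong -> :pong
  after
    1_000 -> :timeout
  end
```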
Why nobody's hiring for Elixir :(
if using elixir supposedly gives a competitive advantage, why aren't companies using it to launch new products - both existing companies and startups?
a lot of those things quoted in the article are present on the jvm platform or through containers.
and btw, some of those companies listed have migrated away from elixir, e.g. Brex and Discord.
Tools like Elixir which are focused on solving problems, aren’t flashy or easy for web dev influencers (yes, they’re a thing) to tout, and have a bit of higher barrier to entry are probably at a disadvantage in this way, even if they’re actually quite good.
This is why the industry is stuck on a revolving door of JS frameworks and the n-hundreth React self-reinvention. It’s not about how good any of this actually is, it’s about how well it scratches the itch for that new car smell while also being easy to pick up and make grand declarations on social media about.
A while back there was an effort to give more publicity on precise cases here https://elixir-lang.org/cases.html ; I think the effort is now moving to advertising the platform outside Elixir circles (e.g. more generalist conferences).
FWIW, I'm working on https://transport.data.gouv.fr, Elixir-based since 2016, the National Access Point to transportation data, which includes a business specific reverse proxy with a 3x YoY growth, with no plans to migrate :-)
And no, Discord has not moved away from Elixir. While they have adopted Rust for certain parts of their infrastructure, Elixir remains a core part of their backend, particularly for real-time communication and chat infrastructure.
to learn what? that the language was not good enough? that it wasn't marketed well enough? that there was not enough community support? not enough word of mouth?
lisp users say they have a competitive advantage, as do smalltalk users. even pike at a time had a competitive advantage in that it was/is more performant than other similar languages.
some of us in the pike community kept asking ourselves why pike is not more popular. the syntax is not obscure. roxen was a killer webdev server, 10 years ahead of its time. but no takers. why? (probably a mix of lack of marketing and community support/encouragement from the financial backers, but other languages become popular without backing, so what gives?)
my point is that most of the time there is nothing you can do. unless you have the marketing power of sun (java) or google (go, etc), the popularity of a language is mostly a case of luck and serendipity (ruby, python) or of filling a unique need that no other language could (javascript, rust).
Elixir also does not stand alone - the storied history of Erlang/BEAM in mission-critical distributed systems needs no introduction. Elixir lives within, and benefits from, that same ecosystem of libraries and tools.
So yeah I'm not sure what you believe there is to "introspect" about, nor to what end.
So like a thin mirror? Which only serves to show the viewer back to themselves? And which also consists of just two extremely simple parts, that is, two panes/sheets of uniform materials, and nothing more?
How do you even come up with this metaphor for a web-based application? It's horrible!
(I came across it, stopped reading the article to Google it, couldn't find it so came to HN to search the comments and then went back to the article and saw the asterisk later on)
I’m genuinely surprised how much controversy this statement has caused. It’s something I’ve said many times in a professional environment and literally no one has ever picked me up on it. Makes me wonder if they were _ever_ listening …
I didn’t read a lot about Nx and EXLA in the comments, so I thought I’d share my view:
In my opinion, that’s not a strong argument for Elixir, because pretty much every language has libraries like this. Rust, famously, never stops inventing these libraries (https://arewelearningyet.com/), and to illustrate my point, here are some libraries for niche(-ish) languages:
• Crystal (https://github.com/NeuraLegion/shainet)
• Haskell (https://hackage.haskell.org/package/neural)
• Ruby (https://github.com/unagiootoro/ruby-dnn)
• and many more…
They might be suitable for researchers or for projects building networks from scratch - and even that is difficult enough with a library that plays along. For most people, though, the job is fine-tuning, combining, and running inference on existing models (and weights).
People like to criticize PyTorch, but in fairness, with it I can do useful things with recent models pretty much the day they are open-sourced (thanks to the transformers library) - which matters given how fast the field moves. There is an equivalent of transformers for Elixir (https://github.com/elixir-nx/bumblebee), but it doesn't support the top 10 trending models on Hugging Face, which is a problem if you want to use them. Qwen 3, for instance, arguably the most useful open source model family at the moment, isn't supported at all. And this is the issue all of these libraries face: they are not useful for the majority of people who need libraries like this.
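To be fair to Bumblebee, for the models it does support, inference is quite pleasant. Roughly, it looks like this - a sketch assuming Bumblebee with the EXLA backend and a supported model (GPT-2 here); the compile options are illustrative, not required:

```elixir
# Assumed Mix deps: :bumblebee and :exla

# Download the checkpoint, tokenizer, and generation config from Hugging Face
{:ok, model_info} = Bumblebee.load_model({:hf, "gpt2"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "gpt2"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "gpt2"})

# Build a serving that compiles the model once via EXLA (XLA)
serving =
  Bumblebee.Text.generation(model_info, tokenizer, generation_config,
    compile: [batch_size: 1, sequence_length: 128],
    defn_options: [compiler: EXLA]
  )

# Run inference on a prompt
Nx.Serving.run(serving, "The BEAM is")
```

The ergonomics aren't the problem; the coverage is. If your model isn't on Bumblebee's supported list, there is no equivalent of the day-one support the transformers ecosystem gives you.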
Also, PyTorch is really well optimized, and there is a good reason that vLLM (no, it's not the fastest, but it's pretty fast, supports almost everything, and is the de facto standard for open source LLM inference) is written in Python and PyTorch. Things like FlashAttention-2 are crucial for LLM inference and still hard to find elsewhere. Yes, OpenXLA is technically very sophisticated, but I never managed to get it working reliably when I wanted to use it, and JAX seems more comfortable if you want to use TPUs.
It’s still a very strange development that Python runs the AI revolution. Multi-node training might be a lot of fun with Elixir, but here performance (especially making sure the CPU isn’t stalling the GPU) is vital, which might make the BEAM a less-than-ideal choice.
There is hope though. Recently there have been some interesting developments in the Rust world (https://luminalai.com), and of course no HN comment about training and inference is complete without mentioning TinyGrad. But Elixir simply doesn’t make sense to me (for neural networks). I’m sorry.