I've written a bit of Racket code (https://github.com/evdubs?tab=repositories&q=&type=&language...) and I still haven't written a macro. In only one case did I even think a macro would be useful: merging class member definitions to include both the type and the default value on the same line. It's sort of a shame that Racket, a Scheme with a much larger standard library and many great user-contributed libraries, has to deal with the Scheme/Lisp marketing of "you can build low level tools with macros" when it's more likely that Racket developers won't need to write macros since they're already written and part of the standard library.
> But the success of Parsec has filled Hackage with hundreds of bespoke DSLs for everything. One for parsing, one for XML, one for generating PDFs. Each is completely different, and each demands its own learning curve. Consider parsing XML, mutating it based on some JSON from a web API, and writing it to a PDF.
What a missed opportunity to preach another gospel of Lisp: s-expressions. XML and JSON are forms of data that are likely not native to the programming language you're using (the exception being JSON in JavaScript). What is better than XML or JSON? s-expressions. How do Lisp developers deal with XML and JSON? Convert it to s-expressions. What about defining data? Since you have s-expressions, you aren't limited to XML and JSON and you can instead use sorted maps for your data or use proper dates for your data; you don't need to fit everything into the array, hash, string, and float buckets as you would with JSON.
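As a rough illustration of the "convert it to s-expressions" workflow (a hypothetical `to_sexpr` helper, not any particular library's API; Python used here only for concreteness), rendering parsed JSON as an s-expression is a few lines:

```python
import json

# Minimal sketch: parse JSON and render it as an s-expression, so the data
# becomes ordinary nested lists that a Lisp can manipulate directly.
def to_sexpr(value):
    if isinstance(value, dict):
        # each key/value pair becomes a (key value) form
        return "(" + " ".join(f"({k} {to_sexpr(v)})" for k, v in value.items()) + ")"
    if isinstance(value, list):
        return "(" + " ".join(to_sexpr(v) for v in value) + ")"
    if isinstance(value, str):
        return f'"{value}"'
    if value is True:
        return "#t"
    if value is False:
        return "#f"
    if value is None:
        return "()"
    return str(value)

print(to_sexpr(json.loads('{"name": "John", "tags": ["a", "b"], "age": 27}')))
# → ((name "John") (tags ("a" "b")) (age 27))
```

From there, sorted maps, proper dates, and the rest are just a matter of which forms you emit.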
If you've been hearing about Lisp and you get turned off by all of this "you can build a DSL and use better macros" marketing, know that Racket has been a much more comfortable environment for a developer used to languages with large standard libraries like Java and C#.
As a Common Lisp developer, that is only very vaguely true for me.
The mapping I prefer for JSON <-> Lisp is:

    true:  t
    false: nil
    null:  :null
    []:    #()
    {}:    (make-hash-table :test #'equal)
This falls out of my desire for the mapping to be bijective:

- The only built-in type that is unambiguously a mapping type is hash-table.
- nil is the only value that is falsy in CL.
- () is the same as nil, so we can't use it as an empty list; vectors are the obvious alternative.
- There aren't really any obvious values left to use for "null", so punt to a keyword.
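The bijectivity requirement can be sketched outside CL too. Here is a hedged Python analogue (a `NULL` sentinel object plays the role of `:null`, tuples play the role of vectors): the round-trip stays lossless because false, null, and the empty array map to distinct host values.

```python
import json

NULL = object()  # plays the role of the :null keyword

def decode(v):
    if v is None:
        return NULL
    if isinstance(v, list):
        return tuple(decode(x) for x in v)   # stands in for #(...)
    if isinstance(v, dict):
        return {k: decode(x) for k, x in v.items()}
    return v

def encode(v):
    if v is NULL:
        return None
    if isinstance(v, tuple):
        return [encode(x) for x in v]
    if isinstance(v, dict):
        return {k: encode(x) for k, x in v.items()}
    return v

doc = json.loads('{"a": null, "b": false, "c": []}')
assert encode(decode(doc)) == doc        # round-trips losslessly
assert decode(doc)["a"] is not False     # null and false stay distinct
```

Collapse null and false onto one value (as a naive nil mapping does) and the round-trip assertion above can no longer hold.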
My mapping in Kernel:

    true     #t
    false    #f
    null     ()
    [...]    (& ...)
    "k" : v  (: k v)
    {...}    (@ ...)
Where `&`, `:`, `@` are defined as:

    ($define! &
      ($lambda args (cons list args)))

    ($define! :
      ($vau (key value) env
        (list key (eval value env))))

    ($define! @
      (wrap
        ($vau kvpairs env
          (eval (list* $bindings->environment kvpairs) env))))
Using the "person" example from the JSON/syntax section on Wikipedia:

    ($define! person
      (@
        (: first_name "John")
        (: last_name "Smith")
        (: is_alive #t)
        (: age 27)
        (: address
          (@
            (: street_address "21 2nd Street")
            (: city "New York")
            (: state "NY")
            (: postal_code "10021-3100")))
        (: phone_numbers
          (& (@ (: type "home") (: number "212 555-1234"))
             (@ (: type "office") (: number "646 555-4567"))))
        (: children
          (& "Catherine" "Thomas" "Trevor"))
        (: spouse ())))
I would then define `?`:

    ($define! ? $remote-eval)
Now we can query the object:

    > (? age person)
    27
    > (? postal_code (? address person))
    "10021-3100"
    > (car (? children person))
    "Catherine"
    > (cdr (? children person))
    ("Thomas" "Trevor")
    > (? type (cadr (? phone_numbers person)))
    "office"
    > (? number (car (? phone_numbers person)))
    "212 555-1234"
    > ($define! full_name ($lambda (p) (string-append (? first_name p) " " (? last_name p))))
    > (full_name person)
    "John Smith"

I've not written a Scheme macro since. I've written hundreds of Kernel operatives though.
I was also a typoholic previously, but am in remission now thanks to Kernel.
An example: building the equivalent of a switch statement, but that compares (via string equality) with a set of strings. The macro would translate this into code that would do something like a decision tree on string length or particular characters at particular positions.
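As a sketch of what such a macro could expand into (hypothetical names; Python performing the "expansion" at runtime rather than compile time): group the candidate strings by length, then branch on a distinguishing character before falling back to a full comparison.

```python
# Build a string-switch dispatcher: branch on length first, then on one
# distinguishing character position, instead of comparing against every case.
def compile_string_switch(cases):
    by_len = {}
    for s, result in cases.items():
        by_len.setdefault(len(s), []).append((s, result))

    def switch(subject):
        group = by_len.get(len(subject), [])
        if len(group) == 1:                  # length alone decides; one compare
            s, result = group[0]
            return result if subject == s else None
        # find positions where candidates differ and prune on them
        for i in range(len(subject)):
            chars = {s[i] for s, _ in group}
            if len(chars) > 1:
                group = [(s, r) for s, r in group if s[i] == subject[i]]
                if len(group) <= 1:
                    break
        for s, result in group:
            if subject == s:
                return result
        return None

    return switch

dispatch = compile_string_switch({"get": 1, "put": 2, "post": 3, "delete": 4})
assert dispatch("post") == 3
assert dispatch("nope") is None
```

A macro would do the same grouping at expansion time and emit the nested comparisons directly, so no dispatch structure exists at runtime.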
Basically anything that's done with a preprocessor in another language can be done with macros in Lisp family languages.
I use C preprocessor macros extensively and don't have the typical dislike for them that many people have - though I clearly understand their limitations and the advantage Scheme macros have over them.
Since learning Kernel, the boundary between "compile time" and "runtime" has become blurrier - I can write operatives which behave somewhat like macros, and I do more "multi-stage" programming, where one operative optimizes its argument to produce something more efficient which is later evaluated - though there are still limitations due to the inability to fully compile Kernel.
As one example, I've used a kind of operative I call a "template", which evaluates its free symbols ahead of time but doesn't actually evaluate the body. When we later apply it to some operands, it replaces the bound symbols with the operands, looking up any symbols to produce an expression which we don't need to immediately evaluate either - but this expression has all symbols fully resolved. This is somewhere between a macro and a regular operative.
Consider:

    ($define! z 10)

    ($define! @add-z
      ($template (x y)
        (+ x y z)))

In this template `x` and `y` are bound variables and `+` and `z` are free. The template resolves the free symbols and returns an operative expecting 2 operands, effectively providing an operative with the body:

    ([#applicative: +] x y 10)
When we call the template with the two operands, it resolves any symbols in the arguments and returns the full expression with no symbols present, but it doesn't evaluate the expression yet.

    > ($let ((x 9)
             (y 7))
        (@add-z (* x 3) (- y 13)))
    ([#applicative: +] ([#applicative: *] 9 3) ([#applicative: -] 7 13) 10)
When we decide to evaluate the expression, no symbol lookup is necessary - it can perform the operation rather quickly, despite the slow interpretation.

---
The $template form above isn't too difficult to implement. I've iterated through several versions of this - some of which only partially resolved the bound symbols - but lost them in a RAID failure. An earlier version, which still has some issues, survives because I put it online:
($provide! ($template)
($define! $resolve-free-symbols
($vau (params expr) env
($cond
((null? expr) ())
((pair? expr)
(cons (apply (wrap $resolve-free-symbols)
(list params (car expr))
env)
(apply (wrap $resolve-free-symbols)
(list params (cdr expr))
env)))
((symbol? expr)
($if (member? expr params)
expr
(eval expr env)))
(#t expr))))
($define! $resolve-bound-symbols
($vau (params expr) env
($cond
((null? expr) ())
((pair? expr)
(cons (apply (wrap $resolve-bound-symbols)
(list params (car expr))
env)
(apply (wrap $resolve-bound-symbols)
(list params (cdr expr))
env)))
((symbol? expr)
($if (member? expr params)
(eval expr env)
expr))
(#t expr))))
($define! zip
($lambda (fst snd)
($cond
(($and? (null? fst) (null? snd)) ())
(($and? (pair? fst) (pair? snd))
(cons (list (car fst)
(list* (($vau #ignore #ignore list)) (car snd)))
(zip (cdr fst) (cdr snd)))))))
($define! $template
($vau (params body) senv
($let ((newbody
(eval (list $resolve-free-symbols params body) senv)))
($vau args denv
(eval (list $resolve-bound-symbols params newbody)
(eval (list* $bindings->environment
(zip params args))
denv)))))))
---

At present the best interpreter is klisp, and the fastest is bronze-age-lisp, which builds on klisp with parts hand-written in 32-bit x86 assembly.
I've been working on a faster interpreter for a number of years as a side project, optimized for x86_64, with some parts in C and some in assembly. It has diverged in some places from the Kernel report, but still retains what I see as the key ingredients.
My modified Kernel has optional types, and we have operatives to `$typecheck` complex expressions ahead of evaluating them. I intend to go all in on the "multi-stage" aspect and have operatives to JIT-compile expressions in a manner similar to the above template.
I've written a number of less complete interpreters over the years. I currently have a long-running side-project to provide a more complete, highly optimized implementation for x86_64.
I thought the particular technology I was working in was "part of the problem", as I felt pigeon-holed by .NET and C# to always be a corporate-monkey CRUD consultant. So, I went out in search of something better. Different programming languages. Different environments. Just something that wasn't working for asshole clients who thought it was okay to yell at people about an outage in a hotel on the complete opposite side of the country that was more due to local radio interference than anything I had done in the database code that configured things. Long story involving missing a holiday with my family over something completely outside of my control and yet I still got blamed for it. The problem wasn't the technology, it was the company I was working for, but at that time in my life, I didn't understand the difference.
Racket was a life preserver at that time.
It's really hard to explain, because I never actually ended up working in Racket full-time and I haven't even touched it in probably 10 years. But it still has this impact on my identity as a software developer. I learned Racket. I forced myself out of being a Blub programmer and into someone who saw the strings that underwrote The Universe. The beauty of S-Expressions and syntactic forms and code-is-data and all that. It had a permanent impact on my view of what this job could be.
I still work primarily in .NET. Most of the things that were technological issues with .NET Framework got resolved by what was first .NET Core and what is now .NET. So, I no longer feel like my tools are holding me back. And I'll forever be thankful to Racket (and the community! The Racket listserve was amazing back then. Probably still is, I just don't interact with it anymore) for being there for me.
Edit: Haskell was in fact another language I explored at that time, in addition to OCaml and Ruby and Python (ugh! Don't get me started on Python!) and many other things. They were all "cool" in their own way, but nothing felt like Racket. They all had their own weird rules that felt like being bossed around again. Racket felt like art. Racket felt like it was there for me, not the other way around.
[0] I still think of this time as the "mid-point" in my career, but it's now been long enough ago that I've been more past the crisis than I was ever in it. Strange feelings.
That society as a whole accepts this kind of abuse, no matter the industry or circumstances, is beyond me. It's an abuse of power. If anybody did this to anyone, the only appropriate response should be to walk away and never come back. Nobody would accept this kind of crap from family and friends, so why is it ok in a professional setting? Because of the money/power dynamics at play? We need consensus in society to walk away; that would end it in no time.
Hm… I think I have bad news for you.
Contrast this to Haskell's use of DSLs – although they really can be quite dense sometimes, I feel like, when I get stuck, I can always just dig into the documentation on Hackage, and figure out things from the type definitions (even when explanations in docs are lacking). Though it does require being comfortable with the abstractions being used (monads and such). Rust is similar in this manner but to a lesser extent.
But again, maybe the macro critique stems from my inexperience with them.
I see this mentioned often, and it sounds amazingly useful (especially the part about fixing in production!). But how widespread is it among the Lisp dialects to be able to connect to a running program, debug it, and hotfix it? I understand Common Lisp has it, but I struggled to figure out how to do it in, say, Racket. Admittedly, I'm a relatively inexperienced Lisp programmer, so maybe I wasn't looking in the right place or for the right words. Which Lisp dialects do indeed support the extreme version of this capability to inspect and edit running programs?
Out of the box, there’s zero security or audit trail. Building that properly isn’t trivial and, even with it in place, many corporate infosec teams would have fits if you suggested that engineers can make arbitrary inspections/modifications to a running production system.
Where it could be appropriate, often you’re running the code in autoscaling containers or something similar. Modifying one instance then is rarely anything but a terrible idea.
Where I have used it is for things like long-running internal batch systems that run a single instance and never touch any sensitive data. Connecting a REPL in those cases is much more flexible and powerful than, say, building a dashboard UI or a control API over http, and you get it for free.
The Lisp REPL is still superior because it comes with more stuff, like DECOMPILE, INSPECT and so on, that can only exist because the language is essentially a compiler even at runtime, which can also be a problem for sensitive domains… but in Java you can do all those things using the IDE, so the distance between what is possible in Lisp and a language with good IDE support like Java or Kotlin is now negligible in my opinion.
Unfortunately JRebel has killed their free tier, so I'd now point unwilling-to-pay programmers to something like https://github.com/JetBrains/JetBrainsRuntime which is IntelliJ/Eclipse/whatever-independent. I haven't tried it myself yet though... Given they only address the biggest class reloading concerns, I doubt it's actually comparable to JRebel for business-world Java. JRebel handles among other things dynamic reloading from XML changes and reinitializing autowired Spring beans that other classes use for dependencies.
*Caveat: I've been out of the professional Java grind for a while; I'd be pleasantly surprised if some new version that's come out contradicts me.
Though nowadays the IntelliJ debugger with the OpenJDK is enough for me. I know what works and what doesn’t so I rarely feel frustrated.
Maybe emacs lisp works that way?
Universal hot reload is really a messy beast though. For every "yeah we can just reload this without re-init'ing the structure" there's another "actually reloading causes weird state issues and you have to restart everything anyways" thing.
I've found that hot reloading _specific targeted things_ tends to get you closer to where you want. But even then... sometimes using browser dev tools to experiment on the output will get you where you want faster than trying to hot reload ClojureScript but having to "reset" state over and over again or otherwise work around weirdness.
I think this flow works well in Emacs though because you're operating on an editor. So you can change things, press buttons, change things, and have a good mental model. Emacs Lisp methods tend to have very little state to them as well (instead the editor is holding a bunch of exposed state).
Meanwhile React (for example) has _loads_ of hard-to-munge state, which means that swapping one component for another inline might be totally fine, or might just crash things in a weird way, or might not have anything happen. Sometimes just a full page refresh will save you thinking about this.
    (define (displayln msg) (display msg) (newline))
    (define (inner x) (+ x 1))
    (define (outer x) (inner x))
    (displayln (outer 5))
    (define (inner x) (+ x 2))
    (displayln (outer 5))
outputs 6 and 7 in every one I tried, not the 6 and 6 I expected.

Fun fact: giving AIs a REPL also reduces error rates so much that you can save up to half the tokens/time, or more.
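For comparison, the 6-then-7 result above is ordinary late binding, and Python behaves the same way: `outer` looks `inner` up by name at call time, so redefining `inner` at the top level changes what `outer` computes.

```python
def inner(x):
    return x + 1

def outer(x):
    return inner(x)   # `inner` is resolved when outer is *called*

results = [outer(5)]      # 6

def inner(x):             # rebind the top-level name `inner`
    return x + 2

results.append(outer(5))  # 7
print(results)            # → [6, 7]
```

An implementation that captured `inner` at definition time (early binding) would print 6 twice instead.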
It is obvious why it is not really used or recommended as it really falls flat in a team setting, mostly even when 2 people are involved. But fixing bugs live as they happen and then spitting out a new .exe for clients is still a lot faster than modern alternatives. Far more dangerous too.
It's a shame that other scripting languages that theoretically have the capabilities to do this don't do this (looking at you, node! Chrome dev tools are fine but way too futzy compared to `import pdb; pdb.set_trace()` and "just" using stdin)
I do also use Emacs, and with Emacs Lisp `trace-function` means you can very quickly get call traces in your running instance without having to pull out a debugger and the like. Not like you can't trace functions with `gdb` of course. But the lowered barrier to entry and the ability to do in-process debugging dynamically means you just have access to richer debugging tools from the outset.
I read some Erlang article saying that hot swapping is not actually very useful in production for various reasons, and that a blue-green deployment is preferred instead. Can't find the link atm. This was close: https://learnyousomeerlang.com/relups
Compare to this comment: https://news.ycombinator.com/item?id=42405168 Hot swaps for small patches and bugfixes, and hard restarts for changing data structures and supervisor tree.
But the nicest part is that once you connect to the production application, apart from network lag it's no different than if you were developing and debugging locally on similarly specced hardware to the server; you have all the same tools.

Many of the broader activities around "debugging" don't need to happen in a paused thread that was entered with an explicit breakpoint or error; they can happen in a separate thread entirely. You connect, then you can start inspecting (even modifying) any global state, you can define new variables, you can inspect objects, you can define new functions to test hypotheses, redefine existing functions... if you want all requests to pause until you're done, you can make it so. Or if you want to temporarily redirect all requests to some maintenance page, you can make that so instead.

A simple thing I like doing sometimes when developing locally (and I could do it on a production binary too) is to define some (namespaced) global variable and redefine a singly-dispatched method to set it to the self object (possibly conditionally), and once I have it I might redefine the method again to have that bit commented out, just so I know it won't change underneath me. Alternatively I can (and sometimes do) instead set this where the object is created. Then I have a nice variable, independent of any stack frames, that I can inspect, pass to other method calls, change properties of, whatever, at my leisure without really impacting the rest of the program's running operation.

Another neat trick is being able to dynamically add/remove inherited mixin superclasses to some class; when you do that, it automatically impacts all existing objects of that class as well. Mixin classes are characterized by having aspect-oriented methods associated with them; you can define custom :before, :after, or :around methods independent of the primary method that gets called for some object.
I was thinking the whole time, "this person would _love_ Clojure".
https://www.gnu.org/software/kawa/index.html
I am one of these people who cannot countenance a Lisp that doesn't have `syntax-case`.
At some point I thought of forking it to then cut out and polish the core, but then my attention got caught by Graal's Truffle framework as a plausibly better path for implementing Scheme in Java.
Short article. Worth reading. But all I swallowed was this one sentence.
It's the syntax. If you like semicolons, that's why you like Pascal-like languages.
It has both upsides and downsides. The upsides mostly win for me.
Interesting. How do people cope with this in practice? Does it mean you can't really use log() statements for debugging?
However, I’ve had very little use of printing for debugging. In Haskell you write small (ish) and pure functions that you can test extensively with property based testing. The types already help a lot as well.
So basically the only place where you deal with unexpected input is at the communication boundaries of the app, where you are in some form of IO already and printing is just available.
Of course, you will have IO somewhere in an executable where you can handle logging, so just separate pure and IO and make sure you have good tests for the pure functions. Also, lint to catch partial functions and dangerous lazy ones (or use an alternative prelude).
Most of this can still be done from IO places where the pure functions collect enough error information bubbling up (e.g. content and line/col of parser errors etc.) to not need ad hoc print statements for debugging.
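The workflow described above can be sketched in Python (the function and the property are invented for illustration; plain asserts stand in for a property-based testing library like QuickCheck):

```python
import random

# Pure function: no printing, no IO, so it is easy to test in isolation.
def parse_csv_line(line):
    return [field.strip() for field in line.split(",")]

# Property-style test: generate random inputs and check a round-trip law,
# rather than sprinkling print statements through the logic.
for _ in range(100):
    fields = [f"f{random.randint(0, 9)}" for _ in range(random.randint(1, 5))]
    assert parse_csv_line(",".join(fields)) == fields
```

Logging, when needed, then lives only at the IO boundary that calls such functions.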
You lose the ability to log "why" some effect is happening.
[1] From Toward Safe, Flexible, and Efficient Software in Common Lisp at the European Lisp Symposium, "[Coalton] has been used for the past 5 or so years [...] first in quantum computing and now a serious defense application." https://youtu.be/xuSrsjqJN4M&t=9m14s
I agree with you, and you've made an excellent promotional comment for Coalton and CL; keep doing that, please. I have said many times here before that I did not like my time away from CL, and Coalton makes it even better.
(I write Haskell professionally)
"Haskell: more elegant than Javascript and C++" would make a good promotional motto.
That's like bragging about being prettier than Danny Trejo.
What really prevents people from writing in Haskell at a reasonable speed is the poor language design. Programming languages are supposed to aid in reading by emphasizing structure. It's important to emphasize that a particular group of "words" constitutes a function call, or a variable definition, or a type definition -- whatever the language has to offer.
Haskell is a word salad. Every line you read, you have to read multiple times, every time trying to guess the structure from the disconnected acronyms. It belongs to the "buffalo buffalo buffalo buffalo" gimmick family. This is a huge roadblock on the way to prototyping as well as any other activity that implies the ability to read code quickly. And then it's also spiced by the most bizarre indentation rules invented by men.
This is not at all a problem with eg. SML or Erlang, even though they are roughly in the same category of languages.
Haskell would've been a much better language if it had made its syntax more systematic, disallowed syntactical extensions such as user-invented infix operators and overloading of literals (heaven, why???), and required parentheses around function arguments both for definition and for application. The execution model is great, the type system is great... but the surface, the front door to all these nice things the language has, is just some amateur-level nonsense.
* * *
As for the upsides of using languages from the Lisp family for practical problems... I don't find (syntax-rules ...) all that exciting. I understand this was an attempt to constrain the freedom given by Common Lisp macros, and I don't think it worked. I think it's clumsy and annoying to deal with. The very first time I tried to use it, I ran into its limitations, and that felt completely unjustified. To prototype, you want freedom of movement, not some pedantry that will stand in your way and demand you work around it somehow.
The absolute selling point, however, is SWANK. Instead of editing the source code, you are editing the program itself, which can be interacted with at points of your choosing. I don't know of any modern language that offers this kind of experience. I think, even still in the 80s, this approach to programmers interacting with computers was common. At school, we had terminals with some variety of Basic, and it worked just like that: you type the program and it instantly shows the effect of your changes. Then there was also Forth, which worked in a similar way: it felt like you were "talking" to the computer in a very organized and structured way, but in real time.
Most mainstream languages today sprouted from the idea of batch jobs, where the programmer isn't at the keyboard when the program runs. They came with the need to anticipate and protect the programmer from every minor mistake they might've easily detected and fixed during an interactive session far, far in advance.
Whenever I think about writing in C, or Rust, or Haskell, I imagine being tasked with going to the grocery blindfolded: I'd need to memorize the number of steps, the turns, predict the traffic, have canned strategies for what to do when potatoes go on sale... I deeply regret that programming evolved using this evolution path, and our idea of what it means to program is, mostly, the skill of guessing the impossible to predict future, instead of learning to react to the events as they unfold.
When someone argues subjectivity (in a negative sense), they need to show that the opinion does not rely on facts, but rather that it's based on... nothing (feelings).
I offered a very easy way to numerically assess the negative impact of the poor language design choices made by Haskell's designers. It's not about what I "feel" about the language: in Java, you write a three-word program, and you usually get a unique interpretation. In Haskell, you write a three-word program, and you get 9 (nine) possible interpretations. It's impossible for a human to examine nine interpretations simultaneously and figure out which of them are valid and might fit the context. So, reading a Haskell program takes longer and requires more effort than reading a Java program.
Of course, Haskell programmers find ways to adapt to their misfortune. They try to avoid pathological cases (eg. writing four-word programs, let alone five!), and they memorize a lot of acronyms and non-typographical symbols that they later use to prune the search for a possible meaning of the program. They invent conventions on top of the bare language design that constrain the search space for possible programs to make their task easier.
It's absolutely possible that after layers of conventions and a long time spent memorizing various acronyms and symbols, Haskell programmers catch up to speed of programmers in other languages: after all, the superficial difficulties with the language might seem like a small price to pay for the access to the language's riches that lay beyond the surface. The language grammar rules cannot account for the entirety of the performance of the programmers who chose to write in the language.
This situation is very similar to the "universal" (claimed, but not in practice) mathematical language, which is extremely difficult to read, write, edit, typeset... yet the tradition of using it prevails and the overwhelming majority of mathematicians use, and prefer using the "universal" mathematical language even though much saner alternatives exist.
I see OP's point. Haskell feels (or felt, I admit I haven't been keeping up the last 15 years) needlessly obtuse sometimes, like how people love to invent new infix operators all the time.
I couldn't disagree more. Yes, there is more upfront work understanding Haskell code. But it's very dense. Once you understand the patterns, you can read it much quicker. Just like map/filter/fold are harder to understand than a for-loop, but once you do, you can immediately see what kind of iteration is applied. The for-loop can do all kinds of crazy index manipulation that you always have to digest from scratch.
> And then it's also spiced by the most bizarre indentation rules invented by men.
Again, quite surprised by this criticism. The rule is extremely simple: inner expressions must be indented more. You're free to decide by how much. That's why there are many "styles" out there. Maybe that's what you mean by bizarre. But it's not like the language is forcing weird constraints on you. If anything, the constraints are too lax. Any other language with non-mandatory indentation allows that as well. In general, I really don't understand why more languages don't do mandatory indentation. You only need curly braces and semicolons if you want the option to write a whole if/else/while/... statement in one line. But nobody does that.
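The density point can be seen even in Python, where both forms are available (toy data, invented for illustration):

```python
nums = [3, 1, 4, 1, 5, 9, 2, 6]

# loop form: the reader must digest the control flow to see it's a filter+map
result_loop = []
for n in nums:
    if n % 2 == 0:
        result_loop.append(n * n)

# functional form: the iteration pattern is named up front
result_func = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))

assert result_loop == result_func == [16, 4, 36]
```

Once map and filter are vocabulary, the second form communicates the shape of the computation at a glance; the loop must be re-read every time.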
Not to support the parent comment, which I disagree with, but if you use multi-line let-bindings, those require that you indent not just more than the previous line, but exactly as much as the first token after the let keyword on the previous line. It's a very strange rule, all the more surprising because it's inconsistent even with the rest of the language. It is totally avoidable if you, like I think most experienced Haskellers do, just prefer ‘where’, but people more familiar with procedural code usually lean into using ‘let’ everywhere because it feels more familiar.
I think the strange indentation used to be required in more places - I vaguely remember running into it a lot more when I started with Haskell 20 years ago, but that was also just when I was new to the language. These days I just keep ‘let’ to a bare minimum, so it doesn’t bother me. One thing that made Elm frustrating was that it disallowed ‘where’ clauses, forcing you to deal with this weird edge case all the time.
When the bindings start on their own line, you're free to pick the indentation:

    let
      f = 9
      fo = 10
      foo = 123
    in f+fo+foo

vs.

    let
          f = 9
          fo = 10
          foo = 123
    in f+fo+foo

But when the first binding shares a line with the `let`, the rest must align with it:

    someValue = let f = 9
                    fo = 10
                    foo = 123
                in f+fo+foo

rather than:

    someValue = let f = 9
        fo = 10
        foo = 123
        in f+fo+foo

I think it used to be the case that it had to be indented past the `=` or the `let` even if it wasn't on the same line. Note also that `in` has to be indented past `someValue`, but doesn't need to be indented as far as `let`. This is fine:

    someValue = let
          f = 9
          fo = 10
          foo = 123
      in f+fo+foo

So, it is possible to land on sane indentation, but the parser is much pickier than, e.g., Python's offside rule, so it takes some trial and error for new users to find it, and it can be frustrating if you're just temporarily modifying an expression to quickly try something out.

I honestly think it would be less surprising if the parser just disallowed writing the first binding on the same line as the `let` entirely, treating it only as a block, but some people (bewilderingly) do seem to prefer to write their code with the excessive indentation (I'd imagine with editor support, rather than manually maintaining the spacing).
[proceeds to agree on all points]
Not even sure what to tell you... Have more introspection?
Syntax highlighting? Please take a look at https://play.haskell.org/
I am completely baffled by this comment. Are you missing the parenthesized function calls by any chance? If so then I can relate a bit.
For background: my first time in college, I was studying typography. An integral part of this trade is figuring out what is easier for people to read by answering questions such as: what is the best line length, what number of columns per page is best, what number of ascent elements per font face is best considering letter frequencies and coincidence, and so on.
It also comes with the editing part, as in the trade of taking a manuscript (a text intended to be published) and making sure that the text meets certain reader expectations in terms of consistency, clarity, structure. This, obviously, includes the use of punctuation, but it's more about the language structure, things like adjectives order or anaphora usage etc.
Programming languages can be judged using the same rules because, at the end of the day, we read them and need to interpret them. People have particular strengths and weaknesses when it comes to reading: we can remember an anaphora's anchor for only so long, we can hold only so many "variables" in fast-to-access memory, we can only do so many levels of adverb phrase nesting, and so on.
Haskell was designed by someone completely oblivious to human abilities to read. It's very demanding and straining when it comes to extracting structure from text in the same way how, in English, you'd struggle to extract structure from so-called "garden path" sentences, because it's intentionally obfuscated. I don't believe Haskell is intentionally obfuscated, instead, I attribute the poor performance to the lack of awareness on the part of the author.
To convey the same point by means of example: Haskell is almost uniquely bad in that given a program
A B C
the programmer can't tell if the program is actually A(B, C), or B(A, C), or C(A, B), or A(B(C)), or A(C(B)), or (A(B))(C), or (B(C))(A), or (B(A))(C), or (C(B))(A).

There's absolutely no reason a language should offer these kinds of puzzles, especially in the very large quantity that Haskell does. Removing this "feature" would make the language a lot easier to work with.
0: OK, there are some additional non-ASCII Unicode symbols, but everything but string literals should be kept ASCII IMO.
What do you mean, "can't tell"? If I see this in Python
(A)(B)(C)
how do I know which of your 9 it means? Well, I'm a Python programmer, so I know that it means A(B)(C),
which is the function A applied to B, which returns a function that gets applied to C. If you're a Haskell programmer, you know that it means the same thing.

I grant you that it is odd to those who are unfamiliar, and it took me quite a while to get used to it, but it's much better to write that way in Haskell when writing programs that use higher-order functions.
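To make the convention concrete, here's a small, hedged sketch in Haskell (the names `add` and `addOne` are invented for illustration) showing that juxtaposition always associates to the left, and that parentheses are how you get any other grouping:

```haskell
-- Function application in Haskell is left-associative:
--   add 1 2  parses as  (add 1) 2
add :: Int -> Int -> Int
add x y = x + y

-- Partial application: (add 1) is itself a function
addOne :: Int -> Int
addOne = add 1

main :: IO ()
main = do
  print (add 1 2)           -- (add 1) 2          => 3
  print (addOne 2)          -- same thing, in two steps => 3
  print (add (addOne 1) 2)  -- parentheses force the other grouping => 4
```

So `A B C` is never a puzzle to a Haskell reader: it is always `(A B) C`, and every other grouping must be written with explicit parentheses.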
But that is a choice. I prefer not to use complex function compositions or lenses for this reason; I split complex expressions into a bunch of let bindings, etc.
So you can also write very readable code in Haskell.
syntax-case is the general-purpose construct to use. syntax-rules is a restricted, easy-things-should-be-easy construct.
It is pretty awful to write things like that.
* Prolog (and, by extension, Erlang).
* Pascal.
* Java 5 and earlier (and Go, as it's almost Java's twin).
These languages somehow manage to hit the sweet spot of enough structure and enough diversity, with few unexpected syntax constructs (e.g. Pascal and Java have the "dangling else" problem, but it's manageable compared to the problems introduced by optional statement delimiters in Go or JavaScript, for example). In every case, a programmer must program defensively against these sorts of language "pathologies".
To give some examples of questionable or outright bad design decisions:
* In Common Lisp (and Scheme as well as a number of similar languages) there's a problem with identifying the open parenthesis that will be closed by typing the closing parenthesis. Programmers must invent tools and techniques to manage this problem.
* In C++, there's a laughable (or at least there was, for a long time) rookie "whoopsie" when it comes to ">>" in templates vs. the infix operator. And the "solution" offered by the language designers makes you think they were just... lazy (add a space).
Here are also examples of some (perhaps, accidentally) good decisions:
* Kebab-case in many languages of the Lisp family. In Latin script, the position of the hyphen at the middle height of lower-case letters makes it a better choice than, e.g., the underscore (which is said to be "not a typographic character"). For the same reason, in traditional Hebrew, hyphens sit at the height of a capital letter (Hebrew doesn't have lower-case letters, and the shape of the letters is better suited to hyphens at the top rather than the middle).
* Clojure as well as Racket (afaik, deliberately) introduced more kinds of parenthesis-like delimiters to make it easier to guess which expression is being terminated by the currently typed delimiter.
* * *
Note that this is a "superficial" metric, because languages are also valuable for concepts they are able to express both in terms of program logic as well as program application to the hardware it manages; the ability to process, modify, generate, analyze the language automatically; the ability to constrain the language to a desired subset of all available operations... Incorporating all of these into a single metric seems like mission impossible :)
Are you mixing tabs and spaces? Maybe an example here would help.
>overloading of literals (heaven, why???)
No, this is important, so that default strings don't have to be something crummy. Even C++ got on this bandwagon.
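For concreteness, literal overloading in Haskell goes through type classes; here's a minimal, hedged sketch using only base (the `Name` type and `greet` function are invented for illustration):

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.String (IsString (..))

-- A hypothetical wrapper type; with OverloadedStrings enabled,
-- a plain string literal can stand for a Name via this instance.
newtype Name = Name String deriving (Eq, Show)

instance IsString Name where
  fromString = Name

greet :: Name -> String
greet (Name s) = "hello, " ++ s

main :: IO ()
main = putStrLn (greet "world")  -- the literal "world" becomes Name "world"
```

The same mechanism is what lets a library pick a better-performing default string type (e.g. Text) without forcing users to wrap every literal.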
>and requiring parenthesis around function arguments both for definition and for application.
??? Again, an example would be helpful. Usually the complaint with Haskell is that people don't use enough parenthesis.
>The execution model is great
...I thought lazy execution was widely agreed to be the worst part of Haskell.
This is not what "rules" means. Rules aren't about what I do; rules are about what the language treats as legal or illegal. I don't write in Haskell at all because I don't like it and have no use for it, but Haskell's rules don't change because of that: they are still mind-bogglingly complex when it comes to telling the programmer whether the next line is indented the right amount or not. None of that complexity is necessary, and it could have been totally avoided if the language used statement delimiters.
> No, this is important, so that default strings don't to have to be something crummy.
My argument is that to get a little accidental convenience you sacrificed a huge amount of routine convenience. The mental load of having to distrust a string when you see it is just not worth the accidental convenience of writing a prepared statement and making it appear as if it was a string. In other words, you are the guy who traded a donkey for three beans, but the beans didn't sprout into a huge ladder that took you to the giant's castle. You just made a very watery soup and that was that.
> Again, an example would be helpful.
Look up the example I gave in the adjacent reply.
> I thought lazy execution was widely agreed to be the worst part of Haskell.
It's good because it's unique, and, when it fits the purpose, it's useful for that particular purpose and nigh irreplaceable, because it is unique. It's worth having for the sake of research: to understand how languages can be designed and what tools or techniques can be discovered along the way. This is said from the perspective that Haskell is not an end product, but rather research attempting to study how languages can work and what concepts they can develop.
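As a concrete illustration of where laziness is hard to replace (a standard textbook example, not from the thread): you can define an infinite structure and pay only for the part you actually demand.

```haskell
-- Laziness lets us define an infinite list; only the prefix
-- we demand is ever computed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 8 fibs)  -- only 8 elements are evaluated
```

In a strict language, the same idea needs explicit thunks, generators, or streams; here the separation of "what to produce" from "how much to consume" falls out of the evaluation model.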
I mean, it does. Whitespace-sensitive syntax is entirely opt-in: it applies only when you choose to omit delimiters. Here's an explicit-delimiter example:
let {
a = 1;
b = 2
} in a + b

In case you're into machine learning, I'm also building something similar: a tensor-first, native Clojure-like ML framework.
This hits home for me. Print statements are essential to the way I code. I use them to debug, to examine variables and parameters, to trace execution flow. I rarely use debuggers.
For my own language, I have the syntax highlighting set to put the `trace` keyword in red, so I can easily clean up.
But lazy evaluation does mean that trace calls only execute when the traced expression is actually forced. Still, it is actually quite helpful during debugging.
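For the Haskell case specifically, this is the behavior of `Debug.Trace.trace` from base; a minimal sketch (the `traced` helper is invented for illustration) of how laziness interacts with it:

```haskell
import Debug.Trace (trace)

-- trace prints its message (to stderr) when, and only when,
-- the wrapped value is actually evaluated.
traced :: Int -> Int
traced n = trace ("evaluating " ++ show n) n

main :: IO ()
main = do
  let xs = map traced [1, 2, 3]
  print (head xs)  -- forces only the first element, so only
                   -- "evaluating 1" fires; 2 and 3 stay unevaluated
```

This is exactly the double-edged sword: a trace that never prints tells you an expression was never forced, which is itself useful debugging information.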
For stuff I like to work on on my own time, "how I like to work" is a major forcing constraint. So it's no surprise that I have a large number of Lisp projects sitting around. Maybe it's because I'm auDHD, but the ability to evolve a program through active dialogue with the machine (and not of the sloppotron variety) just fits better with how I think through a problem and its solution.
Either with S9 Scheme for quick fun (it has Unix sockets and ncurses :D ) or Chicken Scheme for completeness (R5RS/R7RS-small + modules), I always have fun with both.
Oh, and, well, Forth too, but more like a puzzle (although it shines at teaching you that you can do a lot with fixed point). Hint: write helpers for rationals -a/b where a is an integer and b a non-zero integer- and complex numbers by placing two items on the stack for each case (for the rational helpers you need four: a/b [+-*/] c/d).
You can have a look at qcomplex.tcl (either online or installed) as an example of how it can work even under JimTCL itself, by just sourcing that file. Magic: complex numbers under jimsh thanks to the algebraic properties. So you can implement the same for yourself in some Forths, even under EForth for Muxleq. Useless? It depends; under an ESP32 it can be damn fast, faster than MicroPython.
Racket:
> (define (fact n)
(if (= n 1)
1
(* n (fact (- n 1)))))
> (fact 6)
720
OCaml:

  # let rec fact = function
      | 1 -> 1
      | n when n > 1 -> n * (fact (n - 1))
    in fact 6;;
  - : int = 720

Haskell:

  fac n = product [1 .. n]

https://people.willamette.edu/~fruehr/haskell/evolution.html
fac 1 = 1
fac n = n * (fac (n - 1))
which is a working Haskell implementation?

I mean, in Scheme it is longer to write. I enjoy Lisps and use Emacs for everything, but Haskell can be as terse, or even more terse. (Which is not always a good thing.)
What do you mean? It's one of the first things taught in any tutorial for the ML family or Haskell.