It finally clicked how assembly instructions could draw images on a screen and all the magic inside my computer vanished instantly.
Big fan of dumb switch boxes
I'm definitely coming from my own perspective, having initially been self-taught, and then finding myself with the opportunity to go back and academically study CS at a well-ranked university. Supposedly, this would fill all of the "holes" in my knowledge (you see this sentiment a lot). I found the classes fun, and I did well in them. But I was also constantly trying to find usefulness in them when it came to work I was actually doing, and didn't find any. Though I don't regret doing it as an interesting life experience, it was honestly a waste of time and money when it comes to my CS skills.
Which isn't to say that SICP is useless for everyone, or that no one will get value out of it. But reading the article in the link, as well as all of the replies, it's telling that people are writing so many paragraphs but aren't able to say anything concrete. There are even multiple paragraphs accusing people of looking for excuses not to do it - but not providing any concrete examples of its usefulness. This should set off red flags.
Yes, there are people who are completely clueless about core concepts. Though SICP is obviously not the only place to learn about those concepts. Further, most people hanging out in places where those concepts are discussed and are considering whether or not to do SICP in their free time likely have been exposed to the concepts already (and might already have a pretty good grasp on them).
There always is a bit of posturing and ego involved when some of these things come up, and they often lead to people giving very poor advice.
Also, YMMV: I have a math background and got sucked into programming because the appeal of being able to hold an idea in my hands by writing a program that I could poke and prod and play with was really exciting. The quixotic spirit of the book fed my enthusiasm. (And this is neither here nor there, but I think pedagogy in a first course should probably skew quixotic)
People get into programming for all kinds of reasons -- I recognize that SICP isn't the right book for everyone, and that there is a lot of elitism around this book. Nobody should feel bad about not getting much from it, and I think it has severe pedagogical flaws, but I'm very glad and grateful that it exists because for me, personally, it was exactly what I needed.
Edit: Sorry, was cranky from being hungry. I take issue with “useless in the real world.” The first chapters introduce variables, functions, recursion, and lambdas, and not much more than that. That’s the beauty of the book. Flexible ideas that are immensely useful and form a great foundation for learning computing. It’s not the best total beginner book, I’ll admit, but it’s a great beginner-intermediate book for those who seek mastery.
This is great in its own way, but nowadays, there’s a preference for a programming course that’s more extroverted, that lets you build interesting things by including a program as one component of a larger system. I believe even MIT switched to an introductory course where they program robots using Python, or something like that?
See section 2.2.4: https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
> there’s a preference for a programming course that’s more extroverted, that lets you build interesting things by including a program as one component of a larger system.
Black box abstraction is a huge theme throughout all of SICP.
I went looking for what primitive drawing commands it was built on, and it’s barely mentioned:
> The details of how primitive painters are implemented depend on the particular characteristics of the graphics system and the type of image to be drawn. For instance, suppose we have a procedure draw-line that draws a line on the screen between two specified points. Then we can create painters for line drawings.
That’s a pretty typical avoidance of I/O. (Also, generating static pictures is rather different than reacting to input.)
My intro CS class was taught in Scheme. I haven't found what I learned in that class to be quixotic at all. Instead, I view it as setting me up with a strong foundation that colleagues of mine who learned in a more "popular" language lack. It's put me at a permanent advantage for designing and building software - even decades later, I find that there are ways of decomposing problems that I can understand easily, and colleagues of mine who learned on Java still have trouble with, because they simply lack the "design vocabulary" that would enable them to think about the problem in the right way.
We never again used the language past that first introductory course, but I still don't think it was a waste of time, because it allowed us to cover so much ground so quickly. In 3 months we went from basic fundamentals to building our own object-oriented programming system from scratch. That's amazingly powerful. A deep knowledge of what object-oriented programming actually is, and how it works under the hood, is usually considered an advanced and esoteric concept that few people really understand. Fast forward about 15 years, and having that kind of knowledge in my head allowed me to take a job at a Java shop, having never touched the language previously, and then within just a few short months have a deeper understanding of how Java works than colleagues of mine who had been using it on a daily basis for 20 years. So much so that they were coming to me for help with Java problems, despite having an order of magnitude more experience with the language.
Rewind back to school, and it's the same story. The second class in the freshman curriculum used C++. It was not a course in C++, mind - it was a class on algorithms and data structures. Learning C++ was just the first couple of weeks. And learning it mostly consisted of being taught the syntax and the build and debugging tools. The semantics for everything - pointers, classes, templates, etc. - took a negligible amount of time because it could be explained to us in terms of concepts we had already learned by building ourselves - in a 3 month intro class.
I couldn't really get into Logo - it felt too indirect, too computer-sciencey, so I went into BASIC and later Machine Language to learn how to build games (I was influenced by Choplifter, Lode Runner, Wizardry, and Ultima). It wasn't until some time later that I learned LISP and began to appreciate functional programming and the turtle model (I still am amused at how much I learned on that original Apple II that still applies today to tech like SVG).
I reflect a lot on how Norvig was a LISP guy and then when he saw what was happening in scientific computing, pivoted to treating Python like LISP (https://www.norvig.com/python-lisp.html and https://www.norvig.com/lispy.html) although I don't know that anybody really anticipated we'd have systems like Jax which really turn functional programming in an imperative language on its head.
What SICP is about is programming theory: how data structures can be formed and manipulated to solve abstract problems, and how these programs are executed.
Sure, nothing in this book will tell you the best way to program a video game, but the presented concepts are of genuine importance and can be seen in a wide range of circumstances. Ultimately the book wants you to think more abstractly, which is a great way to understand things.
For instance, the part about tagging objects with their type reshaped my thinking about static type systems in general. Static typing, for example in C, is essentially just moving the location of the type tag from existing at runtime within the struct, to the "analysis" phase of the compiler. It's a pretty simple idea, but it made the "degree of dynamism" tradeoff click in my head. All the information _has_ to exist somewhere, and the only question is where. And Scheme is just at an extreme where _everything_ is stored dynamically.
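To make that concrete, here's a minimal sketch (in Python rather than the book's Scheme) of SICP-style runtime type tags, in the spirit of its attach-tag/type-tag/contents operators. The `area` dispatcher and the shape names are invented for illustration; the point is that the tag lives in the data at runtime, which is exactly the information a static checker would instead resolve at compile time:

```python
# SICP-style runtime type tags, sketched in Python (hypothetical example).
def attach_tag(tag, contents):
    return (tag, contents)

def type_tag(datum):
    return datum[0]

def contents(datum):
    return datum[1]

# Dispatch on the tag at runtime - the work a static type system
# would discharge during compilation instead.
def area(shape):
    tag, c = shape
    if tag == "circle":
        return 3.141592653589793 * c ** 2
    if tag == "square":
        return c ** 2
    raise TypeError(f"unknown shape: {tag}")

print(area(attach_tag("square", 3)))  # 9
```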
See also "shift left"
> All the information _has_ to exist somewhere, and the only question is where.
Not entirely sure what you mean about this, but not all type information has to exist somewhere. Sometimes things get encoded into a type that might never be represented otherwise -- like whether an integer is a width or a height, or whether a string is a name or an address.
Then the code is written to make sure you never make the mistake of sending a width to a height, because that would be silly.
The type (width or height) is represented as logic in the code.
Theoretically, if you have a type system that lets you know something is a width or a height, you can often reduce the logic needed to keep track of these things. But my experience is that that level of granularity in your typing reduces dynamism too much. I would rather keep track of it in code (which will probably have to deal with it anyway) than make my type system deal with it.
But note that to make all of that work well, you really need your language to provide a lot of syntactic sugar features which are missing in most older languages, so all the operations read seamlessly instead of being full of ceremony. One could do all of this in, say, old Java 1.4 or something, but the amount of type annotation, along with the massive amounts of boilerplate to get all of the operations working right and checked at compile time, would make it all Not-Worth-It(tm). But with enough language features, and a smart enough compiler, `val acc = mySpeed / someSeconds` will work, straight up, with the safety included.
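A hedged sketch of the `mySpeed / someSeconds` idea in Python, with hypothetical wrapper types invented for illustration (no real library assumed): division is only defined from `Speed` by `Seconds`, and yields an `Acceleration`, so mixing up a width-like and a height-like quantity simply has no defined operation.

```python
from dataclasses import dataclass

# Hypothetical unit-wrapper types; names are invented for this sketch.
@dataclass(frozen=True)
class Seconds:
    value: float

@dataclass(frozen=True)
class Acceleration:
    value: float  # metres per second squared

@dataclass(frozen=True)
class Speed:
    value: float  # metres per second

    # The only division defined on Speed takes Seconds and
    # produces Acceleration - the "sugar" that makes it seamless.
    def __truediv__(self, other: "Seconds") -> "Acceleration":
        return Acceleration(self.value / other.value)

acc = Speed(10.0) / Seconds(2.0)
print(acc)  # Acceleration(value=5.0)
```

With a checker like mypy in the loop, passing the wrong wrapper is flagged before the program runs; at runtime the wrappers still carry their meaning as data.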
I was thinking about this as I was reading your comment, and wondering: is it nonsense? If I have 5 seconds of time as well as 10 milliliters of water... I can consider them as being packaged together... which is addition. And I can subtract 3 milliliters and 6 seconds from them without running out of what I had. Nothing wrong with that, really. Five potatoes and a gallon of water is the same notion, just more familiar. Seems no more nonsensical than dividing length by time, right? Food for thought...
Which means, for example: (3 liters and (i.e. "+") 6 seconds) + (4 liters and 2 oranges) = (7 liters and 6 seconds and 2 oranges). Perfectly sensible addition, which you do all the time. There's no stipulation the output has to be a single number...
The point is, making a vector, that's not addition.
> because it's very much not
I don't expect to change the mind of someone who thinks that addition and defining a vector are the same thing. You've just redefined generally common terms and then started accusing people of being uneducated for not sharing your own eccentric definitions. Poor show.
I didn't intend that to be rude, I was pointing out this is something everyone learns in a standard curriculum i.e. that this is very much the opposite of esoteric. But you seemed determined to reply rudely, so I'll stop here.
surely, you have an idea of what you're using that function for, while you're writing it.
The compiler is doing a check to see if the function makes some minimal logical sense (a la a typecheck), but it cannot read your mind. This implies the types are actually a formal actualization of the thoughts in your head.
It's a pretty bad system, because you say "won't even exist in your head", but I don't even know if I should pass it integers, floats, or a string. Or what it will return.
I'm skeptical that sophisticated type systems need to exist.
Perhaps I should dust off my copy and start reading SICP
When I was younger I used to passionately defend the things I saw as beautiful, but after years of experience talking with people who are passionate about their fields and learning, and with those who never will be, I've concluded: if you lack the innate curiosity to explore the things others have declared marvelous, then this book will offer you no value.
Every time I crack this book open I get excited, and I've read it multiple times and done most of the exercises. I can think of few other books that really expose the beauty and simultaneously strong engineering foundations of software.
You have "tons of experience programming" and sound like you've already decided you know what needs to be known (otherwise why even ask rather than just read it free online), so I doubt this will offer you anything you haven't already seen before.
> Every time I crack this book open I get excited, and I've read it multiple times and done most of the exercises. I can think of few other books that really expose the beauty and simultaneously strong engineering foundations of software.
---
https://www.stilldrinking.org/programming-sucks ( https://news.ycombinator.com/item?id=7667825 and others)
> Every programmer occasionally, when nobody’s home, turns off the lights, pours a glass of scotch, puts on some light German electronica, and opens up a file on their computer. It’s a different file for every programmer. Sometimes they wrote it, sometimes they found it and knew they had to save it. They read over the lines, and weep at their beauty, then the tears turn bitter as they remember the rest of the files and the inevitable collapse of all that is good and true in the world.
I recommend (not ironically) Double Binded Sax by the group named Software.
After you finish that, I recommend queuing up Friedrich Nietzsche by Klaus Schulze.
Not GP, but my time is limited, so asking "is this something an experienced programmer would find worthwhile and insightful" is a fair question.
But I find the book a marvel of pedagogy and rank it as maybe one of the greatest textbooks of all time across disciplines. The lessons are packed so densely yet so concisely that you'll appreciate different things on successive reads.
If you're experienced it will also read very easily and quickly so it then becomes quite an easy and enjoyable skim and then you don't have to rely on other people's accounts of whether it is or isn't worth your time.
I think it really depends on your actual formal education how valuable this book will be to you. If you have a CS degree, basically everything in there should have been covered during your studies: you already know how to operate on trees, write recursive algorithms, and build an interpreter. The main value to you would have been seeing it all come together in one place.
If you don't have that kind of education I believe it is extremely valuable to see programming from a theoretical perspective. The book is very good at laying out a cohesive narrative of how you go from very basic structures to complex programs and how these programs can be executed by a computer. The highlight definitely is the implementation of lisp in lisp.
There were also chapters, mostly towards the end, which I think aged quite poorly and seemed mostly irrelevant. I don't think there is much lost by skipping them.
There's an old video [0] where he talks in-depth about his explorations of functional programming. It's perhaps a bit more about Haskell than about SICP, but in it he's also quite enthusiastic about what a veteran programmer can still learn from this book.
I'd say for the vast majority of developers, experienced or otherwise, it's a good read.
Watching 2-3 of these will probably answer your question.
[1] https://groups.csail.mit.edu/mac/classes/6.001/abelson-sussm...
It gives you basics of things like recursion and message passing that are easy to use immediately (well, once you figure them out -- it took me two weeks to be able to write even the simplest recursive function), but there are large conceptual things you won't really see the depth in until you have more experience. The last project is the meta-circular evaluator - writing a scheme program to interpret scheme programs. It is difficult to understand that without having a lot of other background.
Even then, along the way, learning things like exceptions as implemented through call/cc might be a little too much for a beginner programmer. The book in some ways talks about concepts way outside of a beginner's comfort zone.
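For a taste of where the meta-circular idea ends up, here is a toy sketch in Python rather than the book's Scheme (a hypothetical micro-language handling only numbers, variables, and two operators; SICP's evaluator handles full Scheme): an evaluator is itself just a short recursive program.

```python
# Toy expression evaluator, sketching the shape of the meta-circular
# evaluator in miniature. This micro-language is invented for illustration.
def evaluate(expr, env):
    if isinstance(expr, (int, float)):   # self-evaluating expressions
        return expr
    if isinstance(expr, str):            # variable lookup
        return env[expr]
    op, *args = expr                     # an application: evaluate operands,
    vals = [evaluate(a, env) for a in args]  # then apply the operator
    if op == "+":
        return sum(vals)
    if op == "*":
        out = 1
        for v in vals:
            out *= v
        return out
    raise ValueError(f"unknown operator: {op}")

print(evaluate(["+", 1, ["*", "x", 3]], {"x": 4}))  # 13
```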
I worked through it after years of practical experience as a self-taught programmer, and I learnt a lot (and found it quite challenging, especially the mathy parts).
So it is going to depend a lot on your background and the effort you put in. It may be better done as part of a reading group where you can discuss the exercises than on your own as some are meant to highlight why certain approaches are hard and are not commonly used.
I’d say it’s actually more valuable to people with programming experience. It gave me a greater appreciation for solving complicated problems in elegant ways. And perhaps ambition to solve other more practical but still complex problems in similarly elegant ways.
It’s free online. And YouTube has lectures from the authors- presented in all their early 80’s glory.
</Slashdot_nostalgia>
You don't have to read every word of a book to understand if it's interesting to you. I purchased a bunch of technical books the other day that I had never heard of based on opening them up, reading a bit of the intro and flipping through the examples.
Relatively few of my favorite books have come through recommendations compared to those that I have come across through serendipitous discovery.
For anyone who has no time to browse books, then most of the best books in existence would be of little interest to that person.
> why don’t you just start reading it and decide for yourself?
A journey of a thousand miles....
You do understand that you don't have to read an entire book before forming an opinion, right?
> sort through everything that exists
"Book that has been considered a classic for forty years and was used as the intro text at MIT for decades" is a long, long way from "everything that exists".
You asked if you should read it, and almost everyone who bothered to reply has said yes. Each time this topic comes up and people ask if they should read it the majority of responses are 'yes'. And I presume if you've been around software for that long, you've seen all of those threads and previous questions and answers like I have - meaning you likely knew what people would say when they responded.
So, in the end, it's up to you to decide if you'll read it or not.
I don't know you personally, but from my life experience it sounds to me like something I've seen in other circumstances. You've more or less decided that you're not going to read it, but feel like you're missing out and you want someone who has read it to say "it's ok to not read it". So you can resolve both the feelings of the decision to not read it, and the uncomfortableness of feeling like you're missing out.
It's possible I'm way off the mark on the above, but I mean it to be helpful - as I can say I've seen what looks to be this same pattern many times in life.
or what you're likely looking for: In the end, there are only so many hours in the day. You gave it a fair shot and it wasn't your vibe. It doesn't say anything about your strength as an engineer, it just has a specific approach and it's not a match for everyone. That doesn't make it a bad thing, it just means you'd rather spend your time learning and exploring in other ways. And it probably would've had more impact earlier in your career than at the level of experience you have now.
But of course before wrapping up I do need to undo my comment: I think you should give it another go. Maybe skim past the early part if it feels a bit too introductory and come back to it later. But it's a book that continues to grow on you the more time it's been since you read it. The concepts it presents are subtle but impactful in changing how you think about software. You don't fully grasp it when you read it. It's just that afterwards you start seeing things through its lens often. I haven't read any other book like it.
"We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time." - TS Eliot.
You aren't a curious person who studies things for their own sake and finds wonder in exploring ideas. That's fine, this book is clearly not for you so why concern yourself with it? Most of my friends who are obsessed with their field count many intro books as their favorites and frequent re-reads.
> condescending promises of enlightenment.
It sounds like you're more upset that other people enjoy and therefore recommend this book. You're the one asking for proof that it's worth your time. It's clearly not. People who are curious and like to explore ideas in computing recommend this book frequently to other like-minded people.
If you don't like sushi, why question somebody's recommendation for their favorite omakase place?
Right?
> Yet people promise me that this one is different, while giving me nothing but condescending promises of enlightenment.
You do understand that it's possible for different people to place different values on the same thing, right?
I mean, clearly there are many, many people who do find the book valuable. You're not one of them. That's fine! But it doesn't mean the book sucks.
By just about everyone who's ever compiled a list of the greatest CS books of all time?
Here you go:
https://duckduckgo.com/?q=classic+computer+science+books&t=b...
Count how many of those have SICP on their list (hint: most of them, excluding the ones that aren't actually lists of CS books -- e.g. the ones that have Jobs and Gates biographies, etc.)
I didn't just make that up, dude. And this isn't Wikipedia, either.
When it started to get to object orientation I started having problems following. It just didn't make sense. Maybe I should try again, but it just always made me feel that for my particular mind functional thinking was better.
life ← {⊃1 ⍵ ∨.∧ 3 4 = +/ +⌿ ¯1 0 1 ∘.⊖ ¯1 0 1 ⌽¨ ⊂⍵}
https://aplwiki.com/wiki/John_Scholes%27_Conway%27s_Game_of_...

I'm not sure how you don't call that an eso lang.
import numpy as np

def life(x):
    # stack the grid shifted -1, 0, +1 along columns...
    y = np.array([np.roll(x, s, 1) for s in (-1, 0, 1)])
    # ...then shift that stack -1, 0, +1 along rows: nine copies in total
    y = np.array([np.roll(y, s, 1) for s in (-1, 0, 1)])
    z = y.sum(1).sum(0)  # neighbour count, the cell itself included
    a = np.array([3 == z, 4 == z])
    c = np.logical_and(np.array([np.ones(x.shape), x]), a)
    return np.logical_or(c[0], c[1])
Video of Scholes building up the life APL expression can be seen here: https://www.youtube.com/watch?v=a9xAKttWgP4

Oh, and educational too!
Seriously, it was a really valuable foundational course. But 100% it scared some folks to other majors.
Well, the acronym is pretty close to SCP...
One of Brian's primary points is the following:
> Scheme ... has a very simple, uniform notation for everything. Other languages have one notation for variable assignment, another notation for conditional execution, two or three more for looping, and yet another for function calls. Courses that teach those languages spend at least half their time just on learning the notation. In my SICP-based course at Berkeley, we spend the first hour on notation and that's all we need; for the rest of the semester we're learning ideas, not syntax.
Bullshit. Again, I was a TA for this course. You do not spend the rest of the semester on ideas, you spend the rest of the semester on the students being very confused.
This "everything looks the same" property of Scheme and of all LISP-like languages is a bug, not a feature. When the semantics is different, humans need the syntax to be different. In contrast, LISP/Scheme make everything look the same. It is quite hard to even tell a noun from a verb. This makes learning it and teaching it hard, not easy.
Brian is selling a fantasy here. If you think Scheme is so great, look at this nightmare of examples showing the various ways to implement the factorial function in Scheme: https://erkin.party/blog/200715/evolution/
All of this "abstractions first, reality second" agenda is just a special case of what I call "The Pathology of the Modern": the pathological worship of the abstract over the concrete. Everything modernism touches turns into shit. I am done with living in modernist shit and I hope you are too.
Prof. Harvey's claim rings completely true to me. Students understood the syntax quickly, and spent little time on it. It was not a point of frequent confusion. There were plenty of difficult concepts in the course, but the details of the programming language were not, for most students, among them.
Students who already had programming experience when they started the course often had more trouble than inexperienced students, but mostly because they had to unlearn imperative habits since the imperative features of the language, except for I/O, weren't used until late in the course.
SICP covers a huge breadth of material, from basic computational ideas to algorithms and data structures to interpreters and compilers to query languages to concurrency, and does it in an entertaining and challenging way. Even decades later, I find myself pulling ideas from it in my daily programming work.
I worked at Google for almost twelve years, and I can't count the times I found myself muttering, when reading a design document, "I wish this person had read SICP."
I'm certainly biased, but I would encourage anyone who would like to become a better software engineer to read SICP and study it carefully. Take your time with it, but do read it.
I was never one to really dig lisp. I prefer the structure and the groundedness of a statically typed systems language (I mostly do systems work). But I took on reading SICP in the hope of finding something new and interesting, and to level up my skills. However, I got bored by it. Probably made it through more than half of the book.
It's a bummer because I'm left with the feeling of missing out. Am I not worthy or too obtuse to get what's so great about the book? Or maybe I am in fact not the target audience, having too much practical experience that the book doesn't seem worth my while.
Something that Clojure does is differentiating between () = lists of calls, [] = vectors (the go to sequential data structure), {} = maps. This definitely helps the eye to see more structure at a cursory glance. It has a little bit more syntax compared to Scheme, but the tradeoff seems to be worthwhile.
Secondly, I think it's very healthy to be wary of indirection and abstraction. I'm not sure if I agree with the tone and generalization about modernism, but I think there's a burden of proof, so to speak, when it comes to adding abstractions, especially in the long term.
And do the harder exercises. Really do them, not just read and tell yourself you understand how to do that one and move on.
Syntax is absolutely neither natural nor unnatural, by nature, to humans, but it’s a fact that fewer symbols to memorize is easier than more symbols to memorize. The problem is a failure to launch. Some people never truly understand that it’s not just syntax, it’s semantics. Data is code, code is data. That’s why it all looks the same. This artificial distinction in “C-like languages” is more harmful for the enlightened programmer than it is helpful. Unfortunately not everyone that reads SICP experiences enlightenment the first time (or ever, I guess?)
So yes, fewer symbols means easier memorization, but you could take that to the extreme and you'll find that binary is harder to read than assembly.
I think Lisp is really elegant, and the power to treat a program as a data structure is very cool. But scanning Lisp programs visually always takes me a little more effort than most other languages.
Parentheses are just a scapegoat.
Traditional Lisps are not functional, but multi-paradigm.
Working with lists is functional though, in that operations that build larger lists out of smaller lists or atoms return a value that you must capture. You don't create an empty list with a persistent identity, which you treat as a bag. New programmers are encouraged to write "pure Lisp", which is a term that denotes list manipulation which treats cons cells as immutable (or any other objects you happen to be using, but mainly those).
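The flavour of that "pure Lisp" style can be sketched in Python, with tuples standing in for immutable cons cells (the Lisp names are borrowed; the rest is an assumed, minimal rendering): building a bigger list returns a new value that you must capture, and the old structure is shared, never mutated.

```python
# Immutable cons cells sketched with Python tuples (illustrative only).
def cons(head, tail):
    return (head, tail)

def car(pair):
    return pair[0]

def cdr(pair):
    return pair[1]

nil = None  # the empty list

xs = cons(1, cons(2, nil))
ys = cons(0, xs)   # a new, longer list; xs is untouched and shared as ys's tail
print(ys)          # (0, (1, (2, None)))
print(xs)          # (1, (2, None)) - unchanged
```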
Javascript treats character strings similarly to the way traditional pure Lisp treats lists. You cannot mutate an existing string to add characters to it, but you can perform arithmetic on strings to produce new strings. Yet that doesn't prevent the adoption of Javascript. People are cheerfully doing text processing in Javascript in website after website after web application.
The most popular Lisp currently is supposedly Clojure and it is much more doggedly functional than traditional Lisps like Scheme and Common Lisp.
Nope; the parentheses thing is just pure trolling by mainly non-users.
Anyone who actually uses some kind of Lisp could easily write comments that target true weaknesses.
I suspect there is a group out there who has genuine problems with the parentheses, due to cognitive problems like dyslexia and ADHD and whatever. However, I don't see how they can do well with any programming language syntax. Show me what you do use, and how far you've gone with it before I can take you seriously about the parentheses.
The one area where "code is data" remains a nice idea in my mind is for metaprogramming. And whenever I've done more metaprogramming than small doses, I've come to regret it later, no matter what the language was. (Small doses of metadata can be done even in statically typed, AOT compiled languages without RTTI).
The reason, I think, is that the basic data structures and simple procedures built in to a language allow you to express most everything you need, in a very direct manner. The number of distinct concepts you come up with as a programmer can usually be directly defined in the base language. Metaprograms won't create new concepts as such; they're only code run in a different phase. There is definitely a case for generic/templated data structures, but it seems it's best to use them sparingly and judiciously. Be wary of them duplicating a lot of code, fattening up and slowing down your system at compile time and/or runtime.
I took 61A from bh. Personally, I agree with bh's statement that you quoted. Where I encountered difficulty was applying the ideas in a different context (e.g. C or Java). Brian spent time addressing this precise difficulty (in the last lecture or so), but it still wasn't enough for me.
I do heartily agree with you calling out "the pathological worship of the abstract over the concrete". Knuth's Concrete Mathematics was also bucking this trend (e.g. https://youtu.be/GmpxxC5tBck?si=tRHQmuA4a-Hapogq&t=78). I'm curious, once you came to this opinion/realization, how did your teaching/learning change?
I took CS61A by Brian Harvey in 2009. I loved the course and I actually spent very little time learning the syntax and most of the time learning the concepts.
So I fully agree with Prof. Brian Harvey here.
Mostly kidding but different paradigms bear different pain points it seems.
Oh and lastly, the let-us-care-not-about-syntax position is also argued at Brown (Krishnamurthi and his team, IIRC).
That said, I'd be curious to hear what your students had to say about scheme confusing traits.
The thing is, somehow syntax and some forms of abstraction cast a magic spell on most of the population (at times myself included). It's your mental interface to the semantics, so you want more syntax to be able to have more abilities, but syntax composes badly.
At least to me that's why I enjoyed lisps / lambda calc: it reduces the domain into a more homogeneous space, and suddenly more things are possible with less. Although it seems that the mainstream rather enjoys doing simple things with verbose tools (it does look like you're doing a lot of work, with a lot of advanced terminology) than solving hard problems with APL oneliners (hyperbole).
Different psychologies ?
If those two things are already well-understood, the nature of OO as some syntactic sugar and a couple of lookup tables is readily apparent.
Without that background, the terminology seems weird and arbitrary and the behavior magical.
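That "sugar plus lookup tables" view can be sketched in Python, in the spirit of SICP's make-account (section 3.1.1): an "object" is just a closure over some state plus a message-dispatch table, and a "method call" is a table lookup followed by an apply. The account example is illustrative, not the book's exact code:

```python
# An "object" built by hand: closure state + a dispatch table.
def make_account(balance):
    def deposit(amount):
        nonlocal balance
        balance += amount
        return balance

    def withdraw(amount):
        nonlocal balance
        if amount > balance:
            raise ValueError("insufficient funds")
        balance -= amount
        return balance

    dispatch = {"deposit": deposit, "withdraw": withdraw}

    def send(message, *args):   # "method call" = lookup, then apply
        return dispatch[message](*args)

    return send

acc = make_account(100)
print(acc("deposit", 50))   # 150
print(acc("withdraw", 30))  # 120
```

Once you've built this yourself, `obj.method(args)` in a mainstream OO language stops looking magical: it's the same lookup-and-apply with nicer syntax.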
The idea that a language based on a small, elegant set of composable primitives is inherently better for programming in the large as well has not been borne out in practice.
And I was really surprised how quickly and effortlessly I picked up the part of Scheme taught in the book. Faster than any language I had encountered thus far - Python included.
At least part of the goal of CS 50 at that time was explicitly to weed out students. They didn't want undeclared students to waste a whole lot of time on CS only to find out they were not going to be accepted into the major. Instead, they went through one hard course to find out. Perhaps that explains why some of it was overwhelming to some students?
I was a TA on an SICP course at a UK university, and I disagree with you. The students weren't confused; the simple syntax really helped, and, because all the students had good maths knowledge, a functional style was a lot more intuitive than an imperative one.
FYI, the course has since been replaced with Python programming.
What?
Unless the list is quoted or something, the first item after the opening paren is always the "verb", yes?
Notably, this isn't intrinsic to Lisps - Common Lisp uses a different syntax and namespace for function names and variables. My understanding is that Scheme et al's decision to merge the namespaces/syntax was not without controversy in the Lisp community (the Lisp-1 v Lisp-2 debate).[0]
[0] http://www.nhplace.com/kent/Papers/Technical-Issues.html
Nor in C. Nor in JavaScript. Nor in Java. Nor in...
I mean, what is "foo"? Could be the name of a function. Could be a char variable. Could be a double precision float. Could be a pointer to an array of pointers to functions returning doubles. Without going back to its definition (or prototype, for function arguments) you can't tell, much the same as you can't tell in Scheme without looking for the matching define or set!
I feel like I must be missing something here. What?
If I were to hazard a guess at what the original poster was getting at, it might be the culture of those languages, combined with the power of Lisp to redefine its own syntax.
Lispers value concision, love higher-order functions, and love wrapping things in other things to reuse code, so you might easily see a non-trivial stretch of code without a single function call you recognise. Imagine code where the smallest chunks look something like (dq red foo '(hat n (ddl m) f)). There could be anywhere between zero and eight functions in that snippet, or any one of those might be a macro which re-orders the others in any way (or perhaps its parents include a macro, in which case you really can't assume anything about how / if this stretch is executed at all), it could be a wrapper around something that in other languages would need to be an operator (perhaps it's an if statement?), etc etc.
It's absolutely true you can shoot yourself in the foot in any language, but Lisp is unusually good for it. It's part of its power, but that power comes with a cost. Imagine talking with someone that had a proclivity for making up words. In small doses, this might be fun and save time. In larger doses, you begin losing the thread of the conversation. Lisp is sorta like that. It might seem flammorous, but before you prac it grombles, and you plink trooble blamador!
All software is written by making up new words. The bigger the software, the more words.
> you can shoot yourself in the foot in any language, but Lisp is unusually good for it
I've never shot myself in the foot writing Lisp, and have not heard any anecdotes about it. (Well, maybe the one about Cycorp's Cyc decades old code base being large and inscrutable.)
You're making shit up.
> You're making shit up.
An unnecessarily abrasive way of saying you disagree, no? Your own lived experience doesn't match mine, and therefore I must be lying? You're being irrational and mean spirited.
Lisp can't at the same time be uniquely powerful, but also no different to any other language. Lisp is a uniquely flexible language, which is one of its main strengths. Uniquely flexible languages impose a cost for readability and collaboration. You're free to disagree and insult me further, but I think this is self-apparent. Lisp's flexibility makes it a great lone wolf language (well, if you neither want access to a majority of libraries nor closeness to bare metal, which is a bit of an odd middle ground for a lone wolf), but it's awkward in organisations and collaborative contexts, where other, less flexible languages have generally overtaken it.
There are lots of programming languages which are "uniquely powerful": C++, Prolog, Haskell, ...
> Lisp is a uniquely flexible language
I'm not sure if I buy "uniquely", but "very" would be fine.
> Uniquely flexible languages impose a cost for readability and collaboration.
At the same time it also provides important features for readability and collaboration. There are code bases of complex Lisp software which are maintained by small, changing teams for several decades.
Lisp is effective not so much for "lone wolfs", but for small teams (5 to 100 people) working in a shared infrastructure with larger groups. Example: SBCL is a complex Common Lisp implementation, which goes back to the early 80s (-> Spice Lisp). SBCL is maintained by a group of people and has monthly releases. Around it there is an eco-system of software.
Simpler Lisp dialects can also be effective for larger groups. For example there are many people using "AutoLisp" (or versions of it), a simple Lisp dialect for scripting AutoCAD (and various competitors).
I'm curious, what are some of the important features for readability and collaboration that you mention Lisp offers?
It's actually very different to 'read source code and use batch compilation', from 'interactively exploring the source code and the running program at the same time'.
Relatively typical is the preference for long and descriptive names in larger software bases, with lots of documentation strings and named arguments.
* Development environments come with many introspection capabilities: describe, documentation, inspect, break, ...
* There are standard features for built in documentation strings for functions, variables, macros, classes, ...
* Macros allow very descriptive code. One can extend the language such that the constructs are very descriptive and declarative.
* Macros allow embedded domain specific code, which makes the code very readable, and gets rid of unnecessary programming details.
* Symbols can be arbitrarily long and can contain arbitrary characters.
* Functions often have named parameters. Source code typically makes extensive use of named parameters.
* Details like manual memory management are not needed. -> code is simplified
* Many language constructs have an explicit and tight scope. -> for example, variables can't be introduced in arbitrary places in a scope.
* The language standard is very stable.
* Language extension is built in (macros, reader, meta-object protocol, ...) and everyone uses the same mechanisms, with full language support in the extensions. -> no need for additional and external macro processors, templating engines, XML engines, ...
* Users can more easily share/improve/collect deep language extensions, without the need to hack specific compiler implementation details, since the extension language is Lisp itself.
* Typical code does not use short or one-letter identifiers with a complex operator hierarchy.
* Development is typically interactive, where one loads a program into Lisp and then can query the Lisp system about the software (edit, who-calls, graph classes, show documentation, ...). Thus the developer does not work only with text, but can interact with and inspect the live software, which is always in a debug mode.
* The code can contain examples and tests, which can be immediately tried out by a programmer while reading the code.
* There is a standardized language with widely differing implementations. For collaboration it can be very helpful that much of the core code can be shared across those different environments, instead of having to reinvent the wheel for each: the Lisp code can query the runtime and adapt itself to the implementation. Other systems have that too, but with extra external configuration tools. Often shipped source changes can be loaded into running software: they are immediately active, and information about argument lists, documentation, class hierarchies, etc. is instantly updated.
Here is an example of an interactive definition of a function with documentation, type declarations, and named arguments.
CL-USER 12 > (defun some-example-for-hackernews (&key author to title text)
               (declare (type symbol author to)
                        (type list text))
               "This code is an example for Hackernews, to show off readability features."
               (print (list author 'writes 'to to))
               (print (list 'title 'is title))
               (print text)
               (values))
SOME-EXAMPLE-FOR-HACKERNEWS

CL-USER 13 > (some-example-for-hackernews
              :author 'lispm
              :to 'troad
              :title 'lisp-features
              :text '("example for a function with documentation, type declaration and named arguments"))

(LISPM WRITES TO TROAD)
(TITLE IS LISP-FEATURES)
("example for a function with documentation, type declaration and named arguments")

CL-USER 14 > (documentation 'some-example-for-hackernews 'function)
"This code is an example for Hackernews, to show off readability features."
Another example: DEFCLASS is a macro for defining classes. Again, documentation and introspection are built in.
The developer does not need to read and work with dead text, but can interactively explore and try out the software, while using its self-documentation features. As one can see, the macro uses named argument lists similar to those of functions. There is a slot named WARP-CLASS and arguments for types, initialization arguments, documentation, and so on. The macro then expands this form into larger code and saves the user a lot of typing. The language can be extended with other features by similar mechanisms, without the need to go into compiler hacking. Thus language extensions can be written and documented by users in a standard way, which greatly improves how language extensions are used and understood.

CL-USER 31 > (defclass space-ship ()
               ((name :type 'string :initarg :name :documentation "The space ship name")
                (warp-class :type 'number :initarg :warp-class :documentation "The warp class describes the generation of the warp propulsion system. 1 is the slowest and 5 is the fastest")
                (warp-speed :type 'number :initform 0 :documentation "The current warp speed"))
               (:documentation "this class describes space ships with warp propulsion"))
#<STANDARD-CLASS SPACE-SHIP 8220381C2B>
CL-USER 32 > (make-instance 'space-ship
                            :name "Gondor"
                            :warp-class 3)
#<SPACE-SHIP 8010170AE3>

CL-USER 33 > (describe *)

#<SPACE-SHIP 8010170AE3> is a SPACE-SHIP
NAME          "Gondor"
WARP-CLASS    3
WARP-SPEED    0
CL-USER 34 > (documentation 'space-ship 'type)
"this class describes space ships with warp propulsion"
I do very much like the named and typed arguments. I took the liberty to do some further reading about SBCL's capacity for compile-time type checks [0], which is a pleasant surprise. I did some quick experimenting, and was also quite impressed with SBCL for catching function calls passing unknown keys at compile time, before the call is invoked.
Perhaps the fact that many Lisp guides feel compelled to start with a terse implementation of lambda calculus might actually be somewhat of a disservice, in hiding the more practical side of the language?
:-)
> was also quite impressed with SBCL for catching function calls passing unknown keys at compile time, before the call is invoked.
Generally CL compilers tend to check argument lists at compile time. Number of args, correct keyword arguments, ...
SBCL is especially good, due to its further support of declarations as assertions and its support for various compile time checks. You'll also get Lisp backtraces in a natively compiled Lisp then as a bonus. Also for newcomers it is quite helpful, because SBCL gives a lot of warnings and other feedback for various possible problems (from undeclared identifiers, unused variables up to missing optimization opportunities).
> Perhaps the fact that many Lisp guides feel compelled to start with a terse implementation of lambda calculus might actually be somewhat of a disservice, in hiding the more practical side of the language?
That's true. Lisp was often used in education as a vehicle to learn things like lambda calculus (or similar). Practical programming or "software engineering" with Lisp wasn't part of those courses.
There are books which cover those topics, too. Like "Practical Common Lisp" by Peter Seibel, "Paradigms of AI Programming" from Peter Norvig or "Common Lisp Recipes" by Edi Weitz.
For SBCL one definitely needs to read the manual to get an idea about its extended features.
Name three concrete anecdotes with organization names, projects and timelines.
Any language can be a "lone wolf" language. People have collaborated in making very large, well-documented projects in C. They have also made things like this:
https://www.ioccc.org/2001/herrmann2.c
a random-dot-stereogram-generating program whose source is a random-dot stereogram.
A language that doesn't let you be a Lone Wolf if you are so inclined is something that is not designed for grown ups, and not worth using if it has any alternatives at all in its space.
I'm honestly unsure what the point of this exchange is. Your response style seems to be to pick one sentence, seemingly at random, and launch a hyperbolic and extremely abrasive tirade against it. Which is both unpleasant and unlikely to lead to any meaningful exchange of perspectives or ideas.
I completely agree. This may be more of an area that finds you on sure footing.
> What the point of this exchange is
I identify with Lisp, and take the trolling personally.
Perhaps you ought to identify less with your tools; you'd find yourself feeling less attacked when they're discussed (and attacking others?). There's an alternative version of this exchange where you contribute your Lisp knowledge in good faith and I benefit from your thoughts. Bit late now, but food for thought.
That's a baseless, misinformed attack on Lisp people, such as myself; if many people read and believe that, it becomes economically harmful.
Almost every capability in any Lisp dialect can be used responsibly, and in a way that a later maintainer will understand, due to good structure of the code, naming, documentation and other practices.
If I think someone is wrong, does that necessarily mean they're acting in bad faith, that they're an idiot, and I'm entitled to bully them? What if I'm mistaken? What if I'm not mistaken and they are in fact wrong - does that make such a reaction acceptable? Effective? Pleasant?
For me, this is a single unpleasant exchange that I get to leave behind, forget, and never think about again. For someone with the aforementioned negative perceptual filter, this is an unpleasant exchange they'll recreate and relive in different contexts, again, and again, and again. I find that kind of sad, honestly.
The irony here is that you're clearly quite experienced with Lisp, and had you responded instead with "hey! not quite - here's what you might be missing about how Lisp tends to be used in production... ", this would have been a very different exchange! But instead you chose to call me a lying idiot, which - well - I honestly can't picture anything positive ever coming out of that. Behaving like a bully automatically undercuts anything else you may have to say, which is a disservice to the experience you no doubt have to share. And even if you don't feel like sharing it, why choose to randomly start a conflict? If the goal was to defend Lisp's honour, is that an effective method? Is anyone reading this going to walk away thinking "My, what a lovely and welcoming community Lisp has, I should go check it out"?
I'm out, feel free to have the last word. Let's see if you use it to be mean or not.
> This "everything looks the same" property of Scheme and of all LISP-like languages is a bug, not a feature.
But you are mixing up things here. There are things that look different. Most things in Scheme can be understood as function calls and match that syntax, but there is different syntax for define, let, cond, if, and others. Not everything looks the same. What you might actually mean is that everything is made of s-expressions. That is actually very helpful when you work with code: it makes it very easy to move things around, especially in comparison to languages like Python with significant whitespace indentation.
> When the semantics is different, humans need the syntax to be different.
I learned multiple languages before Scheme, and they did leave their scars, but I find that I do not need syntax to be that much different. Maybe I am not human.
> In contrast, LISP/Scheme make everything look the same. It is quite hard to even tell a noun from a verb.
Is that a feature of the English language? I have rarely had this issue in Scheme. Perhaps it is because I think a lot about names when naming things.
> This makes learning it and teaching it hard, not easy.
Maybe I only had bad classes and lectures before reading SICP on my own, but I found that I learned much more from it than most teaching before that was able to teach me.
> Brian is selling a fantasy here. If you think Scheme is so great, look at this nightmare of examples showing the various ways to implement the factorial function in Scheme: https://erkin.party/blog/200715/evolution/
And what exactly is your criticism?
That there are many ways of writing the function? That is a property of many general purpose programming languages. For example we could look at something like Ruby, where it has become part of the design to allow you many ways to do the same thing.
Or the richness of programming concepts available in Scheme? Is that a bad thing? I think not. You don't have to use every single one of them. No one forces you to. But am I glad to have them available when I have a good reason to use them.
Surely you are aware that the page you link to is at least partially in jest?
> All of this "abstractions first, reality second" agenda is just a special case of what I call "The Pathology of the Modern": the pathological worship of the abstract over the concrete. Everything modernism touches turns into shit. I am done with living in modernist shit and I hope you are too.
I don't know where you got the idea that SICP lauds "abstractions first, reality second". This is not the essence of SICP. SICP invents abstractions once it shows that some previous approach was not sufficient. A good example is the whole "develop a package" thing, where the requirements grow piece by piece and data-directed programming is introduced.
It truly levelled me up as a programmer when I was hit with that insight.
To the vast majority of programmers, syntax matters. C-style with brackets, or python whitespace, or Ruby do/end, these fit better the brains of the majority of programmers. Perhaps not the majority of HN readers but the majority of corporate devs.
Another example of this is Erlang and Elixir. Elixir adds a couple of features over Erlang (macros and protocols), but Erlang does everything else. What made Elixir take off where Erlang didn't, after decades, is that Elixir has a syntax that people are comfortable with. Erlang has a syntax that will summon Cthulhu.
Though, slightly off topic but worth mentioning: the Erlang and Elixir communities support each other very well. For example, not only is Elixir built on top of Erlang, but Erlang also adopts some things from Elixir. Elixir's monadic `with` expression inspired `maybe` in Erlang, and starting with OTP 27, Erlang uses ExDoc, introduced by Elixir, to generate documentation.
Common Lisp has left brackets like {} and [] to the user (aka developer). It supports "reader macros", where the user can extend/supersede the syntax of s-expressions.
So, specialized tools/libraries/applications can introduce these brackets for their own use. Examples are embedded SQL expressions, notations for Frames (special objects, in kind of a mix of OOP and Logics), grammar terms, etc.
Thus it explicitly supports the idea of "people will introduce new brackets with new meanings".
So it’s probably just what people learn first, plus lack of ‘marketing’ or negative PR (“there are no libraries or ecosystem!”: the thing that bothered me least about CL, but people with npm leftpad experience seem bothered by it).
It’s interesting, as I have worked in almost everything in production: C/C++ (including the MS 90s flavour), Delphi, VB, Perl, PHP, Java, C#, Haskell, F#, Common Lisp, Erlang, TS/JS, Python, Ruby, asm (Z80, ARM, x86), and I simply have not had a better overall experience than with CL. The others are better at some things, but as an overall experience, CL is just a pleasure.
What's closer to innate is the Algorithmic Language, Algol for short, the common ancestor of the vast majority of languages in common use (but not, notably, Lisps).
Algol was designed based on observational data of how programmers, who had to somehow turn their ideas into the assembler to run on machines, would write out those ideas. Before it was code, it was pseudocode, and the origins predate electronic computers: pseudocode was used to express algorithms to computers, when that was a profession rather than an object.
That pseudocode could have been anything, because it was just a way of working out what you then had to persuade the machine to do. But it gravitated toward a common vocabulary of control structures, assignment expressions, arithmetic as expressed in PEMDAS style, subroutine calls written like functions, indexing with square brackets on both sides of an assignment, and so on. I revert to pseudocode frequently when I'm stuck on something, and get a lot of benefit from the practice.
So I do think that what's common in imperative languages captures something which is somewhat innate to the way programmers think about programs. Lisp was also a notation! And it fits the way some people think very well. But not the majority. I have some thoughts about why, which you can deduce an accurate sketch of from what I chose to highlight in the previous paragraph.
I believe you, but do you have a source for this? I can't find papers on how they chose to develop the syntax of Algol in the beginning.
There are lisp dialects that are very imperative, for example elisp, but they still use S-expressions. Historically they might have been considered “functional” because they have first-class functions and higher-order functions like mapcar, but nowadays practically every modern programming language (except go!) has these.
The thing all lisp dialects have in common is not where they land on the imperative vs. functional spectrum, but rather the fact that the syntax is trivial and so it’s easy to write powerful macros.
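Just how trivial the syntax is can be shown directly. Here's a toy s-expression reader in Python (my own illustration, not from the comment above): the entire "parser" fits in a few lines, and the result is plain nested lists, which is what makes treating code as data (and hence macros) so cheap in Lisps.

```python
# A toy s-expression reader: source text in, nested lists out.
def tokenize(src):
    # pad parens with spaces, then split on whitespace
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    # consume one form from the token stream
    token = tokens.pop(0)
    if token == "(":
        form = []
        while tokens[0] != ")":
            form.append(read(tokens))
        tokens.pop(0)  # discard ")"
        return form
    return token

print(read(tokenize("(mapcar (lambda (x) (* x x)) xs)")))
# -> ['mapcar', ['lambda', ['x'], ['*', 'x', 'x']], 'xs']
```

A real reader handles strings, quoting, and numbers, but the shape is the same; there is simply no grammar beyond "atoms and parentheses" for a macro writer to fight with.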
Code is communication, and communication needs redundancy for error correction. You can see it in natural languages, and it makes sense to have it in programming languages as well. Using different kinds of syntax for expressing different ideas is an easy way to increase redundancy without making the code more verbose.
Then the AI Winter killed it and people avoided it like the plague.
Today's cruft (you know: Python, JS, whatever) would not stand a chance in the world of the 1980s on that hardware.
It's amazing how far they were able to bloat up Lisp while continuing to peddle it commercially.
Leaner Lisps running on small systems existed all along, but they couldn't rescue Lisp from the associations brought about by big Lisp.
This is what fascinates me about Unix: they created an OS which works with text and processes, as opposed to binary structures and function calls, when computers were hundreds of times slower. Even today the overhead of process creation and serialization for pipes is not negligible; how the hell did they manage it in the 1970s?
It's weird that people prefer reading implicit text.
What has happened in reality is that C became really popular and then all the people designing languages they wanted to be popular, rather than to be experimental, or to push boundaries, etc obviously chose a syntax which was familiar with most programmers, ie a syntax like C’s.
Further, one can disprove that the syntax is particularly important by simply pointing to Python which became immensely popular despite a lack of curly braces and even worse with significant white space simply because colleges and bootcamps decided it would be a good language to teach programming to beginners.
I would argue the important part is the blocks in the former two, which get lost in the homogeneity of Lisps. Whether a block is marked with curly braces or indents doesn't matter much; their being visually distinct from a regular expression does. Of course well-formatted Lisp code tries to indent as well, but there is still a lot of visual noise there, making the code harder to inspect visually, I would guess.
Of course familiarity with a given way is significantly more important. We pretty much learnt the non-intuitive writing of math, to Chinese people their writing system is the intuitive one, etc.
static char _getch() {
    char buf;
    if (read(0, &buf, 1)) return buf;
    return '\0';
}

would become:

(define _getch ()
  (declare static)
  (return-type 'char)
  (let ((buf (char)))
    (if (read 0 (& buf) 1)
        buf
        "\0")))
It also responds to a few parents up "almost all of the innovations in lisp [...] have been absorbed into more popular languages" - pervasive interactivity hasn't even been taken up by some "Lisps", let alone has it been absorbed outside Lisp.
Some time ago I tried Racket, and just no. Recently I tried Scala ZIO HTTP, and yes.
Maybe it's the types? Maybe it's the parens. Probably both. I cannot really recall my experience, just that manipulating code was ridiculously clunky. My assumption was that the IDE would manage the parens for me, and when I moved something somewhere it would figure out if I messed up the parens... and... no, nothing. I had to balance them by hand.
It says nothing about what makes a language easy to learn.
Many things are socially constructed, but not everything.
When 6.001 (the introductory class for which SICP was written) was launched, most of the students who took it had never used a computer before. Yes, MIT students. This was around ~1980. And in the first hour of their first class they were already doing symbolic differentiation in scheme.
I think your view of what’s “natural” is a just-so story.
People heavily trained in maths can take quickly to languages designed to make programming look like maths, that's hardly a surprise.
I wouldn't base my assumptions about what most people find natural on the experience of MIT students taking 6.001 in 1980.
(Not to mention, 'doing' is doing a lot of heavy lifting in that sentence. I could show you an intricate sentence in French in the first hour of your first French class, but unless you came up with it yourself, are you demonstrating much learning just yet?)
I would certainly be interested in the results of a study that put a simpler interpreter / compiler and a language reference in front of motivated non-programmers, but I strongly suspect that the amount of elegant tail recursion we'll see will be limited (and I'd very much expect there to be a correlation between that and a training in mathematics).
Imho, data comes from experiments, but experiments come from hypotheses, and hypotheses come from experience.
      WRITE(6,28)
      READ(5,31) LIMIT
      ALIM = LIMIT
    5 SUM=0.0
      DO 35 ICNT=1,LIMIT
      READ(5,32) X
   35 SUM = SUM + X
      AMEAN = SUM/ALIM
      WRITE(6,33) AMEAN
      GO TO 5
   28 FORMAT(1H1)
   31 FORMAT(I3)
   32 FORMAT(F5.2)
   33 FORMAT(8H MEAN = ,F8.2)
      END
Most modern programming languages seem to take inspiration from C, which took inspiration from BCPL, and that from Algol. Others took inspiration from Algol directly, like Ada, or Lua. And Python has indentation-based block structure, rather than having blocks of statements delimited by braces or an "end" keyword.

I'd argue a lot of programming language evolution is influenced by the capabilities of our IDEs. When you code in a text editor, the terse syntax of C is great and brings advantages over the verbosity of Pascal, Basic or, god forbid, Cobol. Once your editor does auto-indentation, the braces seem redundant and you get Python. Smart completions from IntelliSense are essential to efficiently writing C#, and now that LSP has brought that to every IDE or smart text editor, we have the explosion of popularity of more explicit and more powerful type systems (TypeScript, typed Python, Rust). Programming languages are shaped by their environment, but the successful ones far outlive the environment that shaped them.
Backus also shifted away from imperative-inspired languages to design the FP/FL languages (I thought they were contemporaries of BCPL, but they came 10 years later, later even than APL), even though he contributed to FORTRAN directly.
I remember learning JavaScript as a kid (for some class) and trying to get used to the mutable variables, having to mutter to myself "Okay, here, let x be 4. After this line, x is x + 1, which is 5, a new value." From there, eventually thinking things like: "After every loop, x changes to be itself plus 1. So after the loop, x will be its value before the loop plus however many times the loop ran." Things like that. Basically informal Hoare logic without realizing it.
I had almost forgotten, because I then went years before I programmed again, and the language I learned was C, which was probably easier because I was already familiar with while loops and mutable variables.
Maybe it would have been equally intuitive to learn a functional language first. It's probably no more intuitive to mutter that under your breath versus stuff about the type system and equational reasoning.
On the other hand, it seems easier to get beginners interested in programming with an imperative approach. In our assignments in that class using JavaScript, we used libraries to make little games, which imperative programming seems better-suited for.
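The "muttering" described above is exactly a loop invariant, and it can be written down and checked mechanically. A tiny Python sketch (my own, purely illustrative):

```python
# "After the loop, x will be its value before the loop plus however many
# times the loop ran" - the informal rule above, stated as an invariant.
def run_loop(x0, iterations):
    x = x0
    for i in range(iterations):
        x = x + 1
        # invariant: x equals its starting value plus loops completed so far
        assert x == x0 + i + 1
    return x

print(run_loop(4, 10))   # -> 14
```

Formal verification tools build on the same idea, but nothing stops you from asserting your muttered facts directly in everyday code.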
Why do you believe this is anything more than an historical accident?
For example, it wasn't what Alonzo Church gravitated to when he invented the lambda calculus in the 1930s, before any programming languages or indeed general-purpose computers existed.
> 99 Bottles of Beer implemented with a loop is intrinsically going to be easier to read than an implementation with tail recursion
First, you don't need to use explicit tail recursion. See e.g. https://99-bottles-of-beer.net/language-haskell-1070.html
Second, this sounds like unfamiliarity, not anything inherent. Why is it "intrinsically easier to read"? For a tail recursive version, the main tail recursive function would look like this in Haskell:
_99bottles 0 = printVerse 0
_99bottles n = do
  printVerse n
  _99bottles (n - 1)

In fact, with a bit of experience you might write this as:

_99bottles 0 = printVerse 0
_99bottles n = printVerse n >> _99bottles (n - 1)

It's only less easy to read if you're completely unfamiliar with the concepts of pattern matching and recursion. But the same is true of any programming language.

Given the above, what's a "for loop" and why would you need one? Sounds complicated and unnatural.
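For anyone who doesn't read Haskell, the same tail-recursive shape can be sketched in Python (my own translation; `verse` is a hypothetical stand-in for the real verse text, and an accumulator carries the result, as a tail call would):

```python
def verse(n):
    # hypothetical stand-in for the full verse text
    return f"{n} bottles of beer on the wall..."

def bottles(n, acc=None):
    # same shape as the Haskell version: emit a verse,
    # stop at the base case 0, otherwise recur on n - 1
    acc = [] if acc is None else acc
    acc.append(verse(n))
    if n == 0:
        return acc
    return bottles(n - 1, acc)

print("\n".join(bottles(3)))
```

CPython doesn't eliminate tail calls, so a Scheme or Haskell compiler would turn this into the same machine-level loop while Python keeps a real call stack; the point here is only the reading experience, not the execution strategy.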
A better name for "non-OOP" programming is procedural programming, where you organize code in long blocks that go straight down, code duplication is accepted vs jumping all over the place, etc. Honestly underrated. It can be quite easy to understand.
Strictly-evaluated FP is also imperative. The only really different languages are the ones with different evaluation systems or that can do things besides evaluate - people like to say Haskell is the best here but I think it's actually unification languages like Mercury. Maybe even SQL with transactions.
There are lots of great parts in FP, and for the last ~10-15 years imperative programming languages have made a lot of effort to add them to their syntax. You just need to leave out the more dogmatic parts that make FP popular in academia.
I agree with you otherwise though.
Why is tail recursion better generally? I'm not familiar with FP very much, but it feels like loops more closely resemble the way computers execute them than tail recursion.
Loops, while not bad per se, do have a lot of foot-guns. Loops tend to be used to make all sorts of non-trivial changes to outside state (it's all still in scope), and it can be nightmarish to debug errors that this may produce. Let's say you're looping over chickens in your upcoming Hen Simulator 2024, and you call a function from inside your chicken loop to update the henhouse temperature, which has a check to see if the temperature has gotten too high, which might result in a chicken overheating and passing on into the great farm in the sky, which changes the amount of chickens remaining, but wait, isn't that what you're looping over? Uh oh, your innocuous temperature update has caused a buffer overflow and hard crash. In a rare and possibly hard to reproduce case. Have fun debugging!
Generally, functional programming prefers encapsulated solutions - arguments go in, results come out, nothing else happens - which makes it easier to reason about your code. The most common replacement for loops is something like map, which just applies a lambda to each member of a list. This should make it somewhat harder to achieve the mess above (the other chickens shouldn't be in scope at all, so your temperature update function should complain at compile time).
With tail recursion, you could make a function that takes a list of chickens to update. You pop the first chicken, update it, and recur on a list of the remainder of the chickens. Because this needs to be a function (so you can recur), you have control of the arguments, and can determine what exactly is passed to the next iteration. You can't overflow the buffer, because you're passing a new 'remaining' list every time. This is also where you can get a little clever - you can safely change the list at will. You can remove upcoming chickens, you can reorder them, you can push a new chicken into the list, etc. If a hen lays an egg mid-loop, it can be updated as part of the same loop. Plus you have the same scope safety as you do with map - you can't do anything too messy to the outside state, unless you specifically bring it in as an argument to the function (which is a red flag and your warning that you're doing something messy with state).
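The hazard and the fix described above can be sketched in Python (the `Hen Simulator` data and field names are made up for illustration):

```python
def update_imperative(chickens, temp):
    # Buggy style: removing elements from the list while looping over it.
    # Python won't hard-crash like a buffer overflow, but the iterator
    # silently skips elements after each removal - same class of bug.
    for c in chickens:
        if temp > c["max_temp"]:
            chickens.remove(c)  # mutates the thing we're iterating over!
    return chickens

def update_functional(chickens, temp):
    # Encapsulated style: arguments in, a new list out, nothing else touched.
    return [c for c in chickens if temp <= c["max_temp"]]
```

With four chickens that should all overheat at `temp=40`, the imperative version leaves two survivors behind because the iterator skips over shifted elements; the functional version correctly returns an empty list.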
Lisp was once a very popular introductory programming language and students learned it just as easily or easier than any other language.
Maybe innate, maybe it's an offshoot of teaching math in an infix style, 1 + 2 vs. + 1 2.
I have no trouble with Lisp's parens; I like them. What I never liked, though, is that the first item in the list is an operator, a verb let's say, and the rest are the nouns; whereas you could also have nested lists, say of numbers, where there are no operators. Never felt right (not that I can think of a better way; not worth adding more parens).
You can see that when observing novices programming (without Stack Overflow or similar help). They often assume it will get done (magically) as soon as they call that function. And their code organization reflects ad hoc thinking instead of a planned endeavor.
Considering the state of the web I do not think this is making the argument you intend.
All that to say, I completely emphatically agree with the original comment. The world would have been so much better off with Scheme as the language of the web.
You have no idea whether this is actually true, or whether people have just fit their brains to what is out there.
The idea that programming language syntax fits people's brains rings untrue for anyone who has watched beginners struggle with it, or remembers being one.
1. Many people try programming.
2. The vast majority of the people who try programming are subject to external forces that guide them to whatever they learn and use.
3. Out of these, a certain fraction sticks with it and is found programming in the long run, even working in it.
We could easily conclude (probably quite wrongly) that the popular languages turn people away from programming, except for a few weirdos for whom they click.
Millions have learned javascript because it is the technology of the web. Are they better off?
So many people have been introduced to programming, computer science, and programs through the abstractions provided by javascript, and that sucks
And yet, when you tell them the reasons, why some other syntax than their Java/PHP/Python syntax would be better, they usually counter with "It's just syntax." or "Every language can achieve the same result, why care so much about syntax." or similar.
> C-style with brackets, or python whitespace, or Ruby do/end, these fit better the brains of the majority of programmers. Perhaps not the majority of HN readers but the majority of corporate devs.
I would need a source for that.
I think most programmer's brains have simply not been exposed to other syntaxes much or much too late, when their brain already calcified the C-style syntax. Or they don't actually care.
As much as everyone poops on js it is a very forgiving language for embedding.
I'm reasonably confident that all the anecdotes you hear about 10x improvements from switching to Lisp are just programmers learning about functional programming and good design patterns for the first time. But those aren't contingent on using a Lisp, and I'd argue using Lisp brings an enormous amount of cruft and baggage that makes FP seem far more alien and difficult than it needs to be.
I don't think FP by itself is that massive of a win despite what some dubious studies or zealots say, but it's certainly better than enterprise Java. I've read my fair share of horror stories of Haskell in production too.
With all due respect, he is ironically probably the biggest blub programmer.
There's nothing stopping you from writing a massively-scaling e-commerce site in Verilog and running it on an FPGA, but it - uh - probably isn't the soundest course of action.
It's like the Lisp crowd can't be honest with itself about the serious shortcoming of Lisp projects: maintainability.
Every single Lisp project out there (and this also applies to Haskell) is so radically different, because everybody's too busy reinventing their own abstractions and macros (and language extensions, in Haskell's case), that you just throw your hands up in the air in dismay.
It's like the Lisp crowd, which can barely attract like-minded people to collaborate on some simple open source projects (so you get 20 broken JSON parsers), cannot see the link between that issue and industrial Lisps.
I love lisps, they are fun, they have their place in the industry when you leverage them to their core strengths.
Try to ask yourself: why does PHP have more quality killer software than all of the Lisps combined?
Isn't that the problem with Javascript too, though? Javascript isn't good enough, let's all use jQuery. No, jQuery isn't good enough, let's all use Vue. No, Vue isn't good enough, let's all use Angular. No, Angular isn't good enough, let's all use React. No, React isn't good enough, let's...
Vanilla JS was replaced by jQuery after about a decade. A decade later, React/Vue/Angular (React came first, not last) replaced jQuery
There was a paradigm shift in the kind of websites that people wanted to build. For better or worse, we went from HTML + CSS with some JavaScript (jQuery) to SPAs. It's easier for a team of junior developers to build a SPA with React compared to jQuery.
The abstractions that you're describing changed because the goal changed.
---
There has been plenty of churn around build tools, libraries on top of React/Vue, the paradigms within React/Vue (Hooks, Vue 3 composition API), state management libraries, and the adoption of TypeScript.
Again, this isn't really reinventing abstractions as much as dealing with an evolving language that wasn't designed for the applications we're building today (though it is clearly quite capable!)
Chromium: we’re not doing proper tail calls
The old url at http://mitpress.mit.edu/sicp/full-text/book/book.html is now 404, and most of what you find on that site will try to get you to do an "eTextbook rental".
The last release is from January 2023. Seems fine enough for learning.
> A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform.
Not everyone appreciates that kind of language, and the suggestion that software development is some kind of alchemy that can only be understood by the chosen ones is the kind of bullshit that makes people lose interest. It actually projects an amateurish, not professional, image of the profession. I always found the SICP crowd unbearable. Which is sad, because there is good information and solid knowledge in that book.
Personally? I love how it’s written. I find it clever, engaging, and that it uses clever analogies and examples, with a theme running throughout. The little comics provide some levity as well.
I guess, to me, thinking of programming as alchemy effectively levels the playing field a lot. It means nobody is “born with it,” unlike looks or a straight nose or anything like that. It’s all about learning.
That’s how I read it, anyway.
Amateurish? Maybe. But, and this is important: we were all amateurs and it’s an intro to CS book. By definition, the people reading this book should be amateurs.
By the time I was ready to start college, I was well versed in a number of (imperative) programming languages, data structures, algorithms, etc. However, I was exposed to SICP in college and fell in love with both the book and the emerging functional languages (this was the late 80s).
SICP remains my favorite CS / programming book of all time.
So the coming introductory courses will use JS + LLMs?
PAIP also focuses more on Common Lisp specifics.
IMHO PAIP is extremely well written. Book and Code: https://github.com/norvig/paip-lisp
SICP has a mixed reception. Many find it excellent; others find it too focused on teaching CS concepts in a math-oriented way. Thus several alternatives to SICP have been written, using Scheme, Logo, other functional programming languages, ... I think it's a great book, with a lot of excellent books written as a reaction to it.
1. They taught a bunch of extremely advanced concepts like infinite lists of prime numbers where the 'CAR' and 'CDR' functions could be used to iterate down the list, which calculates the next prime number on-demand.
2. At the end of the course, they made the horrific mistake of encouraging students to write their programs in LISP and then embed the LISP code into other languages, such as Algol, which produces the most extreme spaghetti code in the history of mankind!
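The lazy prime list in point 1 has a rough modern analogue in Python generators. This is a sketch of the idea (my own, not the original course code): each prime is computed only when the consumer asks for it, just like walking down the stream with CAR/CDR.

```python
from itertools import count, islice

def primes():
    # Trial-division generator: keep the primes found so far and test
    # each new candidate against them on demand. Nothing is computed
    # until a consumer pulls the next value.
    found = []
    for n in count(2):
        if all(n % p for p in found):
            found.append(n)
            yield n

first_ten = list(islice(primes(), 10))
```

`islice` here plays the role of taking a finite prefix of a conceptually infinite list.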
The problem with SICP is that it teaches many advanced language concepts but the students are unprepared to absorb them and haven't ever struggled with the types of problems these advanced concepts are meant to solve! It's like showing a peasant farmer how to drive a modern combine before they've ever tried ploughing their field with a horse plough! I can pretty safely say, as a systems programmer, I have used exactly zero of the concepts taught in SICP back in 1980.
But most importantly, when professors teach SICP, they can feel good about themselves because they get the misguided impression that they are teaching something that grows their students' capabilities as programmers, and when talking to other professors teaching introductory computer science, the SICP professors can say, "look at this cool shit my students did - i bet your students wish they could write shit as cool as this!" - end of story.
Incidentally, I never went to class and got an "A" in the class. It was annoying to have to take this remedial brainwashing class after already taking 2 other CS classes (including introduction to programming and assembly language programming) at the University of Illinois.
> (Footnote: Nope. Berkeley's new first course for majors uses Python...)
This is the saddest part of the whole post. Taking the intro course in Scheme set me on a path of thinking about computing in a whole new way. While they try to teach the same concepts with Python, it's just not the same thing as using a truly functional language.
A huge loss for Cal students for sure.
- Parentheses obscure the structure of the language. When different syntax is used for different parts of the language, it's easier to visually scan the code. This is simply because punctuation/arrows/etc. visually stand out more than plain identifiers.
- Parenthetical languages have just as much syntax as non-parenthetical languages, it's just a shittier syntax. Try writing (let (x 1) (y 2) (+ x y)) and see what happens: it's a syntax error. Look at the indentation rules in DrRacket: there's let-like, cond-like, etc., because all of those constructs have different syntax. But it all kind of blends together in a sea of parens.
This weakness of paren-heavy languages is also their greatest strength, though. Since the syntax is so generic, it's easy to extend, and these language extensions can be indistinguishable from the base language. That's a big deal!
BTW, what structure editor would you recommend? Which have you tried?
I can understand your pov as a professional coder, doing enough coding that you can really master the syntax of your language of choice. I code occasionally for scientific research, and the less syntax I have to remember, the better. Little syntactic constraint in combination with structural editing really is a killer feature in my context.
> the less syntax I have to remember, the better
Part of my point is that parenthetical languages don't actually have that much less syntax. You have to remember one of these two syntaxes:
let x = 1;
let y = 2;
do_stuff
(let
  ((x 1)
   (y 2))
  do_stuff)
Now you say that there's a lot more possible variation in the first case; it could be:

let x = 1; # Rust
var x = 1; # JS
x=1 # bash
And I point out that there's a lot of possible variation in the second case too:

(define x 1) (define y 2) do_stuff
(let (x 1) (y 2) do_stuff)
(let x 1 (let y 2 do_stuff))
There is less syntax with parens. But it's not zero syntax, and you still need to memorize it.

Here are my notes from 2023-07:
https://docs.racket-lang.org/more/index.html
* okay, seriously. functional programming is nice, but these fucking parenthesis are ridiculous. the VSCode extension is okay, but doesn't help at all with formatting, etc.
* "car" and "cons", yeey, but "first" would have been so hard?
* the whole "define x 'x" is also meh.
* no return, so sometimes something just takes the last one and returns.
* there's string->url ... why not string->parse-url .. no, would have been too verbose. MAYBE YOU COULD HAVE SAVED SPACE BY OMITTING THE FUCKING PARENTHESES
/ end notes

ehehe ... well ... I think I will keep trying it again every few years. is there a pythonish version, where indentation matters and no need for wrapping parens?
* there's string->url ... why not string->parse-url .. no, would have been too verbose. MAYBE YOU COULD HAVE SAVED SPACE BY OMITTING THE FUCKING PARENTHESES
string->url is consistent with the way they do things in Racket. Note in that same document you linked the use of number->string and string->number; the -> indicates a type conversion. Along with string->url there is also the reverse, url->string, and some other conversion functions. That consistency is actually pretty nice: it means you can guess and check ("I have a string and want a url, will this work?" Oh, great, it does!) or guess and search the docs before checking with the REPL or whatever.

https://docs.racket-lang.org/net/url.html
* "car" and "cons", yeey, but "first" would have been so hard?
car shows up once, cons not at all, but he does use cdr. first, second, and rest are available; I don't know why he didn't use them in this demonstration. If you want to use first, go for it.

They're just different. And once you've become familiar with the language, you miss s-expressions every day, because they're just that easy to work with, especially with something like paredit. Why? Because the grammar is easy to parse and reason about. The whole code is a tree. And evaluation is mostly working from the leaves to the root.
> "car" and "cons", yeey, but "first" would have been so hard?
It comes from the nature of the language. "cons" is to construct a pair of values, and "car" to get the first one, while "cdr" returns the second one. But lists are composed of cons cells (check how it works), and in that case you could argue for "head" and "tail" for the function names. But "car" and "cdr" were first and you could alias them easily.
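A minimal sketch of those cons cells in Python, using tuples as pairs (the names mirror the Lisp functions; this is an illustration, not how any Lisp actually implements them):

```python
# cons builds a pair; car returns its first element, cdr its second.
def cons(a, b):
    return (a, b)

def car(p):
    return p[0]

def cdr(p):
    return p[1]

# A list is a chain of pairs ending in None (Lisp's nil):
# (1 2 3) becomes (1, (2, (3, None))).
lst = cons(1, cons(2, cons(3, None)))
```

Seen this way, "head"/"first" is just an alias for car applied to a list-shaped chain of pairs, and "tail"/"rest" an alias for cdr.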
> no return, so sometimes something just takes the last one and returns
The computing paradigm is expression evaluations, not sequence of instructions (although sequencing is there). An s-expression is always equivalent to something, and that something is what you're computing. Something like (first '("name" "email")) is the same as "name". Or (if (> x 0) :pos :neg) with x = 5 is the same as (if t :pos :neg) and the same as :pos. [0]
No return is needed. It's tricky when doing iteration, but whenever you're doing sequencing (they mention that in the docs for CL), the last value is equivalent to the whole thing.
[0]: https://en.wikipedia.org/wiki/Lambda_calculus#Reduction
> It's tricky when doing iteration ...
and that's my problem, that in the name of simplicity everything nice is thrown out. and "don't even think about it" and "you are holding it wrong" is the official motto. sure, I'm happy to adapt if I feel I got something in return, ie. memory safety with Rust, powerful type system in Scala, etc.
all in all, sure, it's Turing-complete, and obviously millions of people already grok it and are productive in Lisps, but to me - and apparently to the vast majority of programmers - it's too foreign.
Lisp is not a silver bullet. Whatever you can do with Lisp, you can do with C or with JavaScript. What's different is how you do it. And it turns out that it's easier to create elegant solutions in Lisp, as the mental model is heavily based on mathematics (lambda calculus). It's a different model of computing, and the solutions you're used to may no longer apply. Instead you reach for a new way of solving the problem.
When I say iteration is tricky, it's that most of the time you're relying on some mutable state to do the looping (an i counter) and early termination, but in CL and Clojure there are often easier ways.
I'd recommend learning about computing models. Some solutions are easier to solve in one than the others. And now computers are powerful enough that we don't have to worry about performance (that often) and we can focus on creating better programs.
... easy and seamless user-defined compiler extensions with lisp
Aside from that, you could have tried to use `first` instead of `car`. It would've worked.
And yes, there happens to be a pythonic version named `Rhombus`, which is the flagship language of the racket system.
I thoroughly agree. I am deeply into functional programming, but syntax built entirely around endless nested parentheses has never felt like anything but a nightmare to me. Doubly so because even in 'clean' code it's reusing the same syntax with what are for most coders three clearly different logical concerns (defining functions, listing statements to execute in order, and invoking functions).
That's the imperative model, whose foundation is the Turing machine. Lisp comes from the lambda calculus, and you're not really doing the above. At its core there's the value (or the symbol that represents it), then the abstraction, which defines how to get the value, and the application, which lets you say what to substitute for the unknowns (variables) in your abstraction so you can get the value. And it's recursive.
A program can be a value (not that useful), an abstraction (kind of like a nice theorem), or an application (the useful choice). Defining a function is creating a named abstraction, which is just a special value. Invoking a function is applying the parameters to that abstraction, which means replacing variables to get a more simplified version, with the goal of finally having a value. If you can't get a value, that means the result is still an abstraction, and you still have more applications to do.
You either have a symbol or atom (which is a value) or you have a list which is an application (except the empty list, which is the same thing as nil). An abstraction is created by special forms like (defun foo (bar) ...) in CL. but the result of the latter is still a symbol. An atom is equivalent to itself, and a list is equivalent to having applied the abstraction represented by the first element to the rest. Anything else is either special forms, macros, or syntactic sugar.
So unless you're applying imperative thinking to the lambda calculus model, there's no confusion possible.
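As a rough illustration in Python (my own, not part of the comment): applying a lambda plays the role of application-as-substitution described above.

```python
# The abstraction (lambda (x) (+ x 1)):
inc = lambda x: x + 1

# Applying it substitutes 4 for x, reducing the expression to a value:
value = inc(4)

# Abstractions are themselves values: applying a curried function to
# only its first argument yields another abstraction, not yet a value.
add = lambda x: lambda y: x + y
add2 = add(2)      # still an abstraction (y remains unknown)
result = add2(3)   # now fully applied: a value
```

Evaluation here is exactly the "keep applying until nothing but a value remains" story: `add(2)` is an intermediate abstraction, and only `add2(3)` bottoms out.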
You might be interested in Rhombus: Racket with ML-inspired syntax. https://docs.racket-lang.org/rhombus/index.html
Same with TAOCP, but I greatly prefer Scheme to MIX.
Otherwise these works just seem like doomsday rebuilding references rather than knowledge with any practical presentation.
It sounds like you'd like to outsource your problem solving and thinking. Some things require work, and from that you learn and grow, which is the greatest reward.