This was the world I walked into in 1986 as an undergraduate studying Mathematics and Computation. I was quite quickly indoctrinated in the ways of Z notation [1] and CSP [2] and had to learn to program in ML. I still have all the lecture and class notes and they are quite fascinating to look at so many years later. Funny to read the names of celebrated researchers that I just thought of as "the person who teaches subject X". I do recall Carroll Morgan's teaching being very entertaining and interesting. And I interacted quite a bit with Jim Davies, Jim Woodcock and Mike Spivey.
Having decided I wanted to stay and do a DPhil I managed to get through the interview with Tony Hoare (hardest question: "Where else have you applied to study?" answer: "Nowhere, I want to stay here") and that led to my DPhil being all CSP and occam [3]. I seem to remember we had an array of 16(?) transputers [4] that the university had managed to get because of a manufacturing problem (I think the dies were incorrectly placed, making the pinouts weird, but someone had made a custom PCB for it).
Imagine my delight when Go came around and I got to see CSP in a new language.
[1] https://en.wikipedia.org/wiki/Z_notation
[2] https://en.wikipedia.org/wiki/Communicating_sequential_proce...
[3] https://en.wikipedia.org/wiki/Occam_(programming_language)
Most, however, will be more familiar with Quicksort[0] and NULL.
As far back as 1995 people were warning against using threads. See for example John Ousterhout's "Why Threads are a Bad Idea (for most purposes)" <https://blog.acolyer.org/2014/12/09/why-threads-are-a-bad-id...>
I disagree with this. As long as you had an understanding of critical sections and notify & wait, typical use cases were reasonably straightforward. The issues were largely when you ventured outside of critical sections, or when you didn’t understand the extent of your shared mutable state that needed to be protected by critical sections (which would still be a problem today, for example when you move references to mutable objects between threads — the concurrent package doesn’t really help you there).
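The "critical sections and notify & wait" style the comment refers to can be sketched as a classic guarded single-slot queue. This is an illustrative example, not code from the thread; the class and method names are made up:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical one-slot blocking queue using only intrinsic locks plus
// wait/notify, the pre-java.util.concurrent idiom the comment describes.
class OneSlotQueue {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 1;

    public synchronized void put(int item) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                  // releases the lock until a take() notifies
        }
        items.addLast(item);
        notifyAll();                 // wake any waiting consumers
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                  // releases the lock until a put() notifies
        }
        int item = items.removeFirst();
        notifyAll();                 // wake any waiting producers
        return item;
    }
}
```

Note that all the shared mutable state (`items`) is touched only inside `synchronized` methods, which is exactly the comment's point: the pattern stays straightforward as long as nothing mutable escapes the critical sections.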
The problem with Java pre-1.5 was that the memory model wasn’t very well-defined outside of locks, and that the guarantees that were specified weren’t actually assured by most implementations [0]. That changed with the new memory model in Java 1.5, which also enabled important parts of the new concurrency package.
[0] https://www.cs.tufts.edu/~nr/cs257/archive/bill-pugh/jmm2.pd...
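The textbook illustration of the pre-1.5 problem discussed in Pugh's paper is double-checked locking. A sketch (illustrative, not from the thread): under the old memory model this idiom was broken no matter how carefully it was written, because another thread could observe a non-null reference to a partially constructed object; since the JSR-133 memory model in Java 5, making the field `volatile` suffices.

```java
// Double-checked lazy initialization. Pre-1.5 this was unfixable in pure
// Java; post-1.5 the volatile field makes the publication safe.
class Lazy {
    private static volatile Lazy instance;   // volatile is essential here
    private final int value;

    private Lazy() { this.value = 42; }

    static Lazy getInstance() {
        Lazy local = instance;
        if (local == null) {                 // first check, without the lock
            synchronized (Lazy.class) {
                local = instance;
                if (local == null) {         // second check, under the lock
                    instance = local = new Lazy();
                }
            }
        }
        return local;
    }

    int value() { return value; }
}
```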
I took major exception to this. The real world doesn't have non-things, and references do not demand to refer to non-things.
If your domain does actually have the concept of null, just make a type for it. Then you won't accidentally use a 6 or a "foo" where a null was demanded.
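One way to read "just make a type for it": model the absence as its own type, so the compiler rejects a 6 or a "foo" where the domain's null was demanded. A hypothetical sketch (the domain names are invented; requires Java 17+ for sealed types and records):

```java
// Hypothetical: the domain's "no spouse" value is its own type, not the
// universal null, so it cannot be confused with any other value.
sealed interface Spouse permits Married, Unmarried {}
record Married(String name) implements Spouse {}
record Unmarried() implements Spouse {}

class SpouseDemo {
    static String describe(Spouse s) {
        if (s instanceof Married m) return "married to " + m.name();
        return "unmarried";     // Unmarried is the only remaining case
    }
}
```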
OTOH seeing undefined method 'blah' for nil:NilClass isn't really any better.
Do we want to model the "real world"? This seems to hark back to that long-forgotten world of "animal-cat-dog" OO programming.
Is your point here that every pointer type for which this can be the case should include an explicitly typed null value?
* It's the same referent as all the other things you don't have. Your struct has a Spouse; why does it not also have a RivieraChateau, a SantaClaus, a BankAccountInSwissFrancs? If you can answer why you left those out, you know why to leave out Spouse.
* Why stick to 0-1 Spouses? As soon as you do that, you're gonna need n Spouses, where some of them have an ex- flag set and married from & to timestamps.
> Is your point here that every pointer type for which this can be the case should include an explicitly typed null value?
* It shouldn't make a difference whether it's a pointer or a reference or a value or a class. If you believe null is one of the Integers, you should also believe that null is one of the ints. Why should your domain change to accommodate C idioms?
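Java itself makes this asymmetry observable: null inhabits the reference type `Integer` but not the primitive `int`, so merely unboxing changes whether the "value" exists. A small illustration (hypothetical class name):

```java
class BoxingNull {
    // Returns true iff unboxing a null Integer throws, demonstrating that
    // null is one of the Integers but was never one of the ints.
    static boolean unboxingNullThrows() {
        Integer boxed = null;          // legal: null inhabits Integer
        try {
            int primitive = boxed;     // auto-unboxing calls boxed.intValue()
            return primitive == 0;     // unreachable: unboxing null throws
        } catch (NullPointerException e) {
            return true;               // int has no null to unbox into
        }
    }
}
```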
It could even use a special “NULL” address. Just don’t pollute every type with it.
The concept as proposed by Hoare is strictly necessary for things like partial relations, which are encountered very frequently in practice.
It is true, however, that a large number of programming languages have misused the concept of a NULL reference proposed by Hoare.
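A concrete instance of the partial relations mentioned above: `java.util.Map` is a partial function from keys to values, and its `get` method uses null exactly in Hoare's proposed sense, to signal "no pair with this key". A sketch (the ages/names are invented for illustration):

```java
import java.util.HashMap;
import java.util.Map;

class PartialLookup {
    // Map.get models a partial function: for keys outside the relation's
    // domain it answers null, a reference to nothing.
    static Integer lookup(Map<String, Integer> ages, String name) {
        return ages.get(name);   // null when the relation is undefined here
    }
}
```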
As you say, types that may have a "nothing" value must be distinct from types that may not.
It's a null pointer exception.
We are all very lucky to have lived through the foundation of a new science and new engineering over the last 50 years.
I hadn't realised that Hoare was present when Meyer first used the term 'contract' to describe his ideas.
What made Hoare's 2009 confession so impactful wasn't that he was solely responsible — it's that he was the first person with that level of authority to publicly say "this was wrong."
That's what gave Rust, Swift, and Kotlin permission to design around it.
Both lists and atoms could appear in any place in any function or special form.
NIL was a special atom, which was used to stand for an empty list. Because it could appear in any place in a LISP program, it could be used wherever one had to express that something does not exist.
In a programming language with a more complicated and also extensible system of data types the handling of "nothing" values must also be more complex.
Any optional type can be viewed as a variable-length array of its base type whose maximum length is 1; a length of zero indicates a non-existent value.
This is equivalent to the use of NIL in LISP.
However, it is better to consider optional types as a method of deriving types from base types that is distinct from arrays, structures or unions, because in most cases optional types admit more efficient implementations than variable-length arrays that may have zero length, or than tagged unions where one of the alternative types is the empty set (a.k.a. void).
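The array view of optionals described above can be written down directly. A sketch (illustrative names; as the comment notes, real implementations are usually more efficient than a literal array):

```java
import java.util.function.IntUnaryOperator;

class MaybeAsArray {
    // An optional value viewed as an array of length 0 or 1; length 0
    // plays the role of LISP's NIL, a non-existent value.
    static int[] some(int v) { return new int[] { v }; }
    static int[] none()      { return new int[0]; }

    // "Is a value present?" is just a length test.
    static boolean isPresent(int[] opt) { return opt.length == 1; }

    // Mapping over the optional is mapping over a 0-or-1 element array.
    static int[] map(int[] opt, IntUnaryOperator f) {
        return isPresent(opt) ? some(f.applyAsInt(opt[0])) : none();
    }
}
```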
I quote from the manual of LISP I: "Here NIL is an atomic symbol used to terminate lists".
I am not sure what the rule is in Common LISP, but in many early LISPs the predicate (atom NIL) was true.
In early LISPs, the end of a list was recognized when its CDR was an atom, instead of being another list. The atom could be different from NIL, because that final list could have been an association pair, pointing towards two associated atoms.
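The representation these comments describe can be sketched with cons cells: a list is a chain of pairs whose CDR is either another pair or a terminating atom, and NIL is just one particular atom that may appear there. An illustrative sketch (not from the thread):

```java
// Illustrative cons-cell model of the early-LISP representation described
// above: the end of a list is reached when the CDR is an atom, not a Cons.
class Cons {
    final Object car;
    final Object cdr;                 // another Cons, or a terminating atom
    Cons(Object car, Object cdr) { this.car = car; this.cdr = cdr; }

    static final String NIL = "NIL"; // NIL is itself an atom

    // Walk the chain, stopping at NIL or any other atom in CDR position.
    static int length(Object list) {
        int n = 0;
        while (list instanceof Cons c) { n++; list = c.cdr; }
        return n;
    }
}
```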
The fact that in early LISPs NIL was an atom, but it was also used to stand for an empty list caused some ambiguities.
EDIT: I have searched now, and in Common LISP the predicate (atom NIL) remains true, as in all early LISPs; therefore NIL is still an atom, even if using it almost anywhere is equivalent to an empty list.
Theories of Programming: The Life and Works of Tony Hoare published by ACM in 2021 - https://dl.acm.org/doi/book/10.1145/3477355
See the "preface" for details of the book - https://dl.acm.org/doi/10.1145/3477355.3477356
Review of the above book - https://www.researchgate.net/publication/365933441_Review_on...
PS: You can check with some lady named "Anna" on the interweb for book access :-)