The VAX was a 32-bit CPU with a two-stage pipeline which introduced modern demand-paged virtual memory. It was also the dominant platform for C and Unix by the time the Bellmac-32 was released.
The Bellmac-32 was a 32-bit CPU with a two-stage pipeline and demand-paged virtual memory very like the VAX's, which ran C and Unix. It's no mystery where it was getting a lot of its inspiration. I think the article makes these features sound more original than they were.
Where the Bellmac-32 was impressive was in its designers' success in implementing the latest features in CMOS while the VAX was languishing in the supermini world of discrete logic. Ultimately the Bellmac-32 was a step in the right direction, and the VAX line adopted LSI too slowly and ended up obsolete.
There were just a lot of them. My high school had a VAX-11/730, a small machine you don't hear much about today. It replaced the PDP-8 the school had back when I was in elementary school and would visit to use that machine. Using the VAX was a lot like using a Unix machine, although the OS was VMS.
In southern NH in the late 1970s through mid 1980s I saw tons of DEC minicomputers, not least because Digital was based in Massachusetts next door and was selling lots to the education market. I probably saw 10 DECs for every IBM, Prime or other mini or micro.
It would have been good to know more about why the chip failed. There's a mention of NCR, who had their own NCR/32 chips, which leaned more toward emulating the System/370. So perhaps it was orders from management and not so much a technical failure.
Didn't Multics, Project Genie, and TENEX have demand paging long before the VAX?
It shifts all bits except for the sign bit, leaving it unchanged.
I have read many ISAs' manuals and not seen this elsewhere. Most ISAs don't have separate arithmetic and logical left-shift instructions. On the M68K, which does, the difference between `ASL` and `LSL` is only that the former sets the Overflow flag if any of the bits shifted out differs from the resulting sign bit, whereas the latter clears it.
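Here's a minimal C sketch of that distinction; the 16-bit width, the function names, and the flag-as-out-parameter style are choices made for this sketch, not 68K conventions:

```c
#include <stdint.h>
#include <stdio.h>

/* Rough model of the 68K difference described above, for a 16-bit word and a
   shift count n in 1..15. */
static uint16_t asl16(uint16_t x, int n, int *v) {
    uint16_t result = (uint16_t)(x << n);
    int sign = (result >> 15) & 1;           /* resulting sign bit */
    *v = 0;
    for (int i = 0; i < n; i++) {            /* the n bits shifted out */
        if (((x >> (15 - i)) & 1) != sign)
            *v = 1;                          /* ASL: V set on any mismatch */
    }
    return result;
}

static uint16_t lsl16(uint16_t x, int n, int *v) {
    *v = 0;                                  /* LSL: V always cleared */
    return (uint16_t)(x << n);
}

int main(void) {
    int v;
    uint16_t r = asl16(0x4000, 1, &v);       /* +16384 << 1 flips the sign */
    printf("ASL: %04X V=%d\n", (unsigned)r, v);   /* 8000 V=1 */
    r = lsl16(0x4000, 1, &v);
    printf("LSL: %04X V=%d\n", (unsigned)r, v);   /* 8000 V=0 */
    return 0;
}
```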
"An arithmetic shift right 1 bit position is performed on the contents of operand m. The contents of bit 0 are copied to the Carry flag and the previous contents of bit 7 remain unchanged. Bit 0 is the least-significant bit."
So if the value is used as a signed integer, that's a sign-preserving /2 (and easy to expand to 16 or more bits).
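For what it's worth, here's a tiny C model of the behaviour quoted above (sra8 is a name made up for this sketch):

```c
#include <stdint.h>
#include <stdio.h>

/* Rough model of the Z80 SRA behaviour quoted above: every bit moves right one
   position, bit 7 (the sign) keeps its old value, and the old bit 0 lands in
   the Carry flag. */
static uint8_t sra8(uint8_t m, int *carry) {
    *carry = m & 0x01;                          /* bit 0 -> Carry */
    return (uint8_t)((m >> 1) | (m & 0x80));    /* shift right, keep bit 7 */
}

int main(void) {
    int c;
    uint8_t r = sra8(0xF5, &c);                  /* 0xF5 is -11 as a signed byte */
    printf("%02X carry=%d\n", (unsigned)r, c);   /* FA (-6), carry=1 */
    return 0;
}
```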
Z80 also has a SLA, which does shift all bits.
Findecanor was talking about left shifts though.
Only if you define your integer division as rounding towards minus infinity, which almost no languages do (they usually round towards zero). See e.g. [0] for the additional instructions needed to correct the result.
Now, I personally think this is a mistake and my PLs always round integers down instead of towards zero, but others may disagree.
PowerPC is the only production-ISA I've found that has an arithmetic right-shift instruction designed for rounding towards zero. It sets the Carry flag if a 1 is shifted out and the result is negative. Then only an "add with carry" instruction is needed to adjust the result.
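To make that concrete, here's a minimal C sketch (the function names are mine, and it assumes the compiler implements `>>` on signed values as an arithmetic shift, which the C standard doesn't guarantee but is universal in practice): a plain arithmetic shift rounds towards minus infinity, while adding 1 back when a 1 bit was shifted out of a negative value gives the round-towards-zero result that C's `/` produces, which is exactly the job the carry plus add-with-carry sequence does.

```c
#include <stdio.h>

/* Divide by 2^k rounding towards minus infinity: a plain arithmetic shift
   (assuming the compiler shifts signed values arithmetically). */
static int div_pow2_floor(int x, int k) {
    return x >> k;
}

/* Divide by 2^k rounding towards zero, as C's `/` does: if the value is
   negative and any 1 bits were shifted out, add 1 back; this is the fix-up
   that the carry + add-with-carry idiom described above performs. */
static int div_pow2_trunc(int x, int k) {
    int q    = x >> k;
    int lost = x & ((1 << k) - 1);      /* the bits shifted out */
    return q + (x < 0 && lost != 0);
}

int main(void) {
    printf("%d %d\n", div_pow2_floor(-7, 1), div_pow2_trunc(-7, 1));  /* -4 -3 */
    printf("%d %d\n", -7 >> 1, -7 / 2);                               /* -4 -3 */
    return 0;
}
```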
I am fairly certain that restoring/non-restoring unsigned binary division algorithms can, with minimal changes, be made to do signed division that rounds down, rather than "divide the absolute values and then fix the signs"; and for the algorithms used in high-speed division hardware the choice of rounding doesn't really matter.
This is the main thing I wanted to know; it's the last section heading in the article, yet the text under it doesn't explain it. AT&T choosing someone different is a lame excuse (others could have bought in, the way Apple got ideas from Xerox PARC), and the rest is padded out with a restatement of how the Bellmac-32's ideas shaped future chip development.
We really could use a place like that today.
Are ground-breaking innovations still regularly coming out of the same lab today, whatever its owner or name, and if so, which ones, say in the last decade?
Graphene chips are an insanely exciting (hypothetical) technology. A friend of mine predicted in 2010 that these chips will dominate the market in 5 years' time. As of 2025 we can barely make the semiconductors.
Apple makes chips that have both excellent performance per watt, and overall great performance, but they make small generational jumps.
On the other hand, startups, or otherwise small-but-brilliant teams, can still produce cool new stuff. The KDE team built KHTML, which was later forked into WebKit by three guys at Apple.
Paxos was founded on theoretical work of three guys.
Brin & Page made Google. In the era of big & expensive iron, the innovation was to use cheap, generic, but distributed compute power, and compensate for hardware failures in software. This resulted in cheaper, more reliable, and more performant solutions.
But yeah, most of the "moonshot factories" just failed to deliver anything interesting. Maybe you need constraints, market pressure?
On the other hand, during those years Intel was extremely good at very quickly adopting any important innovation made by a competitor, while also managing to get better manufacturing yields, so it was able to make greater profits even with cheaper products.
The Bellmac-32 was not commercially important, but without it a product like the Intel 80386 would have appeared only some years later.
With the 80386, Intel switched its CPU production from NMOS to CMOS, as Motorola had done a year earlier with the 68020. Both Intel and Motorola drew heavily on the experience the industry had gained with the Bellmac-32.
AKA, Intel was extremely innovative in manufacturing. Turns out that because of Moore's law, that was the only dimension that mattered.
That's being polite. Attributing the chip's failure to AT&T buying NCR is ridiculous; that happened in 1991.
Here's a rundown of what actually happened:
* After the divestiture, AT&T from 1984 is finally allowed to build and sell computers. (This is also why Unix was not a commercial product from AT&T until then.) Everyone, in and outside AT&T, thinks Ma Bell is immediately going to be an IBM-level player, armed with Bell Labs research and Western Electric engineering. One of many, many such articles that conveys what everyone then expects/anticipates/fears: <https://archive.org/details/microsystems_84_06/page/n121/mod...> If there is anyone that can turn Unix into the robust mainstream operating system (a real market opportunity, given that IBM is still playing with the toy DOS, and DEC and other minicomputer companies are still in denial about the PC's potential), it's AT&T.
* AT&T immediately rolls out a series of superminicomputers (the 3B series) based on existing products Western Electric has made for years for AT&T's internal use (and using the Bellmac CPU) and, at the lower end, the 6300 (an Olivetti-built PC clone) and the UNIX PC (a Convergent-built Unix workstation). All are gigantic duds because, despite superb engineering and field-tested products, AT&T has never had to compete with anyone to sell anything before.
* After further fumbling, AT&T buys NCR to jumpstart itself into the industry. It gives up five years later and NCR becomes independent again.
* The end.
>This is such an uplifting story until you think about how the 8086 is just about to wipe it off of the map.
People today have this idea that Intel was this dominant semiconductor company in the 1980s, and that's why IBM chose it as the CPU supplier for the PC. Not at all. Intel was then no more than one of many competing vendors, with nothing in particular differentiating it from Motorola, Zilog, MOS, Western Digital, Fairchild, etc.
The 8088's chief virtue was that it was readily available at a reasonable price; had the PC launched a little later, IBM probably would have gone with the 68000, which Intel engineers agreed with everyone else was far superior to the 8086/8088 and 80286. Binary compatibility with them was not even in the initial plan for the 80386, so loathed by everyone (including, again, Intel's own people) was their segmented memory model (and things like the broken A20 line); only during its design, as the PC installed base grew like crazy, did Intel realize that customers wanted to keep running their software. That's why the 80386 supports both segmented memory (for backward compatibility, including virtual 8086 mode) and a flat model. And that flat memory model wasn't put in for OS/2 or Windows NT; it was put in for Unix.
That, and it had a compatible suite of peripheral chips, while the M68K didn't... Something I vaguely recall an Intel FAE gloating about soon after: "And we're going to keep it that way."
And it was the only processor I ever used that had a STRCPY opcode.
https://openlibrary.org/books/OL670149M/Computer_Organizatio...
It mainly covered MIPS but most of the concepts were about as minimal as possible. As in, it would be hard to beat the amount of computation per number of pipeline stages.
Back then, 4 stages was considered pretty ideal for branch prediction, since misses weren't too expensive. I believe the early PowerPCs had 4 pipeline stages, but the Pentium line got up to 20-31 stages with the Pentium 4 and went to (IMHO) somewhat pathological lengths to make that work, with too much microcode, branch prediction logic and cache.
Unfortunately that trend continued, and most chips today are so overengineered that they'd be almost unrecognizable to 90s designers. The downside is that per-thread performance is only maybe 3 times higher today than 30 years ago, while transistor counts have gone from ~1 million to ~100 billion, so CPUs are about 100,000 times, or 5 orders of magnitude, less performant than might be expected from Moore's law at a 100x speed increase per decade. Bus speeds went from, say, 33-66 MHz to 2-4 GHz, which is great, but memory was widely considered far underpowered back then. It could have ramped up faster, but that wasn't a priority for business software, so gaming and video cards had to lead the way like usual.
I always dreamed of making an under $1000 MIPS CPU with perhaps 256-1024 cores running at 1 GHz with local memories and automatic caching using content-addressable hash trees or something similar instead of associativity. That way it could run distributed on the web. A computer like this could scale to millions or billions of cores effortlessly and be programmed with ordinary languages like Erlang/Go or GNU Octave/MATLAB instead of needing proprietary/esoteric languages like OpenCL, CUDA, shaders, etc. More like MIMD and transputers, but those routes were abandoned decades ago.
Basically any kid could build an AI by modeling a neuron with the power of ENIAC and running it at scale with a simple genetic algorithm to evolve the neural network topology. I wanted to win the internet lottery and do that, but the Dot Bomb, wealth inequality, politics, etc conspired to put us on this alternate timeline where it feels like the Enterprise C just flew out of a temporal anomaly. And instead of battling Klingons, groupthink has us battling the Borg.
What's wrong with rep movsb?
I might be a bit rusty in my x86 assembly, but wouldn't "repnz movsb" be the x86 strcpy opcode, for zero-terminated strings?