Simple CPU Design
Comments
recursivedoubts
cowsaymoo
My love for this comes from the BU intro computing class a number of years ago, which used the Harvey Mudd Miniature Machine
https://www.cs.hmc.edu/~cs5grad/cs5/hmmm/documentation/docum...
recursivedoubts
very cool machine!
mhh__
My first introduction to this stuff was also the little man computer. I won't say when as it might make some readers feel old, but very fond memories of playing around with it.
Similarly fond memories of the teacher letting me just do my own thing at the back of the classroom after noticing I was writing an interpreter rather than hard-coding all the logic in the task.
Probably hard to overstate just how good an introduction to programming he was, actually. Not a codeforces-style geek, but he was constantly drilling the value of writing beautiful code that is as simple as possible into a bunch of 14-year-olds, of whom probably only me and a friend were listening. Maybe it's no coincidence that amongst my friends/mutuals from that age there are core maintainers for, I think, three pretty big languages.
jakesomething
I made a Little Man Computer simulator this year that I use to teach students, but it uses binary and 2's complement which helps them learn that too: https://www.mathsuniverse.com/lmc
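Two's complement is usually the sticking point for students, and the trick is that it is just ordinary unsigned arithmetic with a sign-bit reinterpretation. A quick sketch in plain Python (not tied to the linked simulator) of encoding and decoding 8-bit values:

```python
def to_twos_complement(value, bits=8):
    """Encode a signed integer as its two's-complement bit pattern."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    return value & ((1 << bits) - 1)   # masking performs the wraparound

def from_twos_complement(pattern, bits=8):
    """Decode a two's-complement bit pattern back to a signed integer."""
    if pattern & (1 << (bits - 1)):    # sign bit set -> negative
        return pattern - (1 << bits)
    return pattern

print(format(to_twos_complement(-5), "08b"))   # 11111011
print(from_twos_complement(0b11111011))        # -5
```

Stepping through a few values like this makes it obvious why `-5 + 5` overflows cleanly to zero in hardware.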
hnuser123456
Which MSU?
redundantly
Your class sounds amazing. Do you have any of your lectures online?
helij
I just have to say. Such a beautiful and accessible website. No fluff, no ads, no distractions. I love it!
JLCarveth
No HTTPS either
weakfish
Doesn’t work well on mobile, though. Agree otherwise
cjfd
I considered trying to do a simple CPU design from logic gates too. But I ended up wondering about some of the performance characteristics. Maybe some people who are knowledgeable are reading this. What I am wondering about is the switching speed of logic gates as compared to the signal speed in the electric connections for a realistic CPU. I.e., how many logic gate lengths (assume logic gates to be square) does an electric signal travel in an electric connection in the time that is needed for a logic gate to invert its output. Another one that seems relevant is how much spacing electric connections need compared to the size of a logic gate.
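Not an authoritative answer, but a back-of-envelope estimate is easy to set up. All the numbers below are loose order-of-magnitude assumptions: signal propagation in on-chip interconnect at roughly half the speed of light (real long wires are RC-limited and considerably slower), a single gate delay of ~5 ps, and a gate dimension of ~1 micron:

```python
# Back-of-envelope: how many gate lengths does a signal travel
# in one gate delay? All figures are assumed round numbers.
c = 3.0e8                   # speed of light, m/s
signal_speed = 0.5 * c      # assumed propagation speed in interconnect
gate_delay = 5e-12          # seconds, rough modern inverter delay
gate_length = 1e-6          # metres, rough small standard-cell dimension

distance = signal_speed * gate_delay        # metres travelled per gate delay
print(distance / gate_length)               # ~750 gate lengths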
ofrzeta
does this answer your question? https://monster6502.com/
"The MOnSter 6502 runs at about 1/20th the speed of the original, thanks to the much larger capacitance of the design. The maximum reliable clock rate is around 50 kHz. The primary limit to the clock speed is the gate capacitance of the MOSFETs that we are using, which is much larger than the capacitance of the MOSFETs on an original 6502 die."
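The quoted capacitance argument is just the RC time constant scaling linearly with gate capacitance. A sketch with invented illustrative numbers (not actual MOnSter 6502 or NMOS 6502 specs):

```python
# Illustrative only: RC charging delay scales linearly with load
# capacitance. Both capacitance values below are assumptions.
def rc_delay(r_ohms, c_farads, factor=0.69):
    """Time for an RC node to cross ~Vdd/2 (about 0.69 * R * C)."""
    return factor * r_ohms * c_farads

on_die = rc_delay(10e3, 5e-15)      # ~10 kohm drive into a ~5 fF on-die gate
discrete = rc_delay(10e3, 50e-12)   # same drive into a ~50 pF discrete MOSFET
print(discrete / on_die)            # ratio of the capacitances, ~10000x
```

With the same drive strength, a four-orders-of-magnitude jump in gate capacitance costs four orders of magnitude in switching time, which is the flavor of why a desk-sized 6502 tops out in the tens of kilohertz.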
timthorn
If you want to go a step further, here's one built from discrete transistors: https://www.megaprocessor.com/
uticus
Even just building logic gates out of transistors is half the battle. For that, the referenced site also has https://www.megaprocessor.com/stepping-stones.html
robinsonb5
There's a nice little web forum (remember those?) for people interested in toy / experimental CPUs at anycpu.org
I'm not active there any more, but I used to be when I was developing my own toy CPU: https://github.com/robinsonb5/EightThirtyTwo
elvircrn
I started this journey a while back using Tanenbaum's MIC-1 during my Uni days with another colleague. Still have it online if anyone is interested: https://github.com/elvircrn/mic-1.
uticus
nice - reminds me of the excellent "Computer Organization and Design" by Patterson and Hennessy https://a.co/d/9U9Adl9
nxobject
As much as I really benefited from being able to internalize system architectures like these many times over, I do wish now, as someone who ended up in software, that there were similarly hand-holdy guides to implementing the "core" of out-of-order superscalar execution engines, too. They're crucial to understanding how modern processors _kinda actually work, to a zeroth-order approximation_, even if it's impossible to convey the full engineering scope of modern CPUs to those who need the hand-holding.
ranger207
At Georgia Tech I had one class (CS 2110) that dealt with implementing a simple in-order non-pipelined processor, one class that dealt with implementing a pipelined processor (CS 2210), then two classes (CS 4290 and CS 3220 IIRC) that dealt with implementing an out-of-order processor (4290 was more theory and also covered caches; 3220 was entirely implementing it on an FPGA). So, that sort of thing does exist, but IDK if most universities will let you take single classes like that
allenrb
That sounds like a great sequence! Offhand, I don’t think there were any OOO microprocessors when I did computer architecture at Tech (Go Jackets!)
artemonster
How awesome that this exists. I was learning how CPUs work and designing my own CPU with an emulator like 20 years ago as a teenager, by googling my way into obscure forums, blog posts and the homemade CPU webring. Not long ago I ran an experiment: would I be able to find all the learning materials to do that again, by myself, on Google? The outcome deeply unsettled me. Google just gives you total garbage. Half of the results are AI generated; the other half is sloppily written, half-assed, abstract pseudo-tutorial nonsense on Medium or some other paid-for-engagement platform. My children would not be able to reproduce that kind of self-learning without watching some YouTubers do it, taking some curated paid course, or accidentally stumbling upon "gems" like this, e.g. via HN. We desperately need old Google and the old internet back, and somehow to save and preserve humanity's knowledge.
LeftHandPath
I am glad you followed up on this, to see if you could do it again! That matches my experience.
I remember feeling like the big tech corps had turned "consumer" into a pejorative and started relentlessly abusing their customers circa 2016 or so... Especially microsoft, post Windows 8. Consumer devices don't need to work. That's for pro devices. Consumer devices just need to sell ads, soak up user time, and let businesses market their goods for consumption!
The majority of search results from late 2019 or so and onwards have only degraded. Even on other platforms, like YouTube -- you get 4-5 real results, and the rest are "suggested for you", even if you've logged out. Google and Youtube both feel like "consumer" search engines, where advertising and eyeball time trump usefulness and user authority (i.e. the user being able to ask for what they want, and get it).
spencerflem
I agree. It's hard though: SEO people are malicious, persistent, and with modern tech they have incredible tools.
And with hand curation, it's hard to feel like it's 'worth it' when, instead of being able to build a community, your results are scraped and shown out of context.
If you have any thoughts on how to get that sort of culture back, I'm open to them.
acegopher
I pay for kagi.com and they seem to be fighting that battle. I also frequent their "small web" (https://blog.kagi.com/small-web) initiative.
artemonster
tbh I have dreamt about what could be possible if we made some sort of "closed doors" internet branch. You access it with a single account bound to you, invite only, something like PGP with a web of trust. Good "legacy" internet websites could be chained and indexed through some sort of thematic webring, with good search and comment functionality added on top, like a global HN. Any external content is opt-in and vetted. Internal content gets a user rating system (not Google's SEO-style algorithmic ranking), i.e. allowing users to downvote nonsense into hell. Robots are allowed on internal content only through a strictly controlled API that also pays the original authors. Browsing automatically costs some "tokens" that are paid to the owners of the sites you visit, so at least semi-useful sites can sustain themselves and good ones make money, without spamming everything with ad banners or being incentivized toward ragebait/clickbait content. But that's all a nonsense dream; nobody would be willing to pay to browse the internet, even if it were high quality.
spencerflem
In the same vein, I feel like the 'fair source' movement makes sense: pay a fixed percent of profit and get access to a massive collection of licensed software.
Just like with yours, though, allocating it fairly is centralized and it's very hard to make everyone happy. And nobody wants to pay for something that used to be free.
robinsonb5
Part of me thinks we need a new protocol, and a new lightweight web built around markdown with absolutely no (client side) active content allowed.
What I'm not sure about is how to combat bad actors / spammers / low-effort pages and AI slop. I'm leaning towards some kind of git-like storage with history as a mandatory part of the protocol, and some kind of cryptographic web-of-trust endorsement system.
spencerflem
Sounds kinda like Gemini on top of IPFS/Dat/Hypercore. Imo some cool things but I'm not sure the problem is a technical one.
Content addressing has some real benefits in allowing something like the internet archive to be transparent (ie: it doesn't matter who hosts it). But that's mostly solving linkrot.
Searching through everything is still as hard as ever, and if the incentives are the same it will be just as gamed. And people would have to make good content in the first place, which is hard to justify without a good audience at the same time.
linguistbreaker
I'm starting to use Claude.ai more and more instead of googling. For the moment this seems to cut through the noise of the modern web.
spencerflem
I believe that it does, I'm worried long term that it will discourage people from making and curating webpages themselves though.
linguistbreaker
Definitely a possibility - hopefully AI will similarly empower creation of better content instead of AI slop noise.
In fact I wonder if Claude.ai could come up with similar CPU teaching tools and a syllabus based on some of the great resources linked in this discussion.
spencerflem
I mean, probably, but only because it was trained on this already.
For new things though, why would you bother posting them to the internet if you can't use it to build an audience or make a connection.
linguistbreaker
and possibly not even credited for the content you created...
True
wyager
For anyone interested in a "not-so-simple" CPU design, I have a couple old college assignments from a CPU design class that may be fun to peruse:
This one is a 4-stage pipelined CPU: https://github.com/wyager/Lambda16
This one is a superscalar out-of-order CPU: https://github.com/wyager/Lambda17
Both are written in Clash, which is a subset of Haskell that compiles to VHDL/Verilog for FPGAs. It's an incredibly OP HDL.
I don't think I ever ran the second one on an actual FPGA, because at the time values of type `Char` wouldn't synthesize, but I think the Clash compiler fixed that at some point.
recursivedoubts
I teach the introduction to computing class at MSU and agree entirely: most students need to start with the absolutely most simple introduction to computing possible.
My favorite two models are:
The Scott CPU
https://www.youtube.com/watch?v=cNN_tTXABUA (great book, website is now offline unfortunately: https://web.archive.org/web/20240430093449/https://www.butho...)
An extremely simple non-pipelined 8 bit CPU. The emulator lets you step through tick by tick and see how the machine code is driving an operation. I spend one lecture showing each tick of a bitwise AND and following the data around from the instruction into the instruction register, how the instruction selects the general purpose registers, runs it through the ALU and then moves the data back from the accumulator into a register. It's one of my favorite lectures of the year.
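That tick-by-tick walkthrough — fetch into the instruction register, decode to select registers, run the ALU, write back from the accumulator — can be sketched in a few lines of Python. This is a toy model with an invented instruction encoding, not the Scott CPU's actual format:

```python
# Toy machine in the spirit of the Scott CPU. The 8-bit encoding here
# (4-bit opcode, two 2-bit register selects) is made up for illustration.
registers = [0b1100, 0b1010, 0, 0]   # R0..R3
memory = [0b0100_0001]               # one instruction: AND R0, R1 -> R1
pc = 0
acc = 0

instruction = memory[pc]             # fetch into the "instruction register"
pc += 1
opcode = instruction >> 4            # high nibble selects the operation
ra = (instruction >> 2) & 0b11       # two bits select each source register
rb = instruction & 0b11

if opcode == 0b0100:                 # our invented AND opcode
    acc = registers[ra] & registers[rb]   # ALU output lands in the accumulator
    registers[rb] = acc                   # write back from accumulator to register

print(bin(registers[1]))   # 0b1000, i.e. 1100 AND 1010
```

Each assignment above corresponds to roughly one clock tick of data movement in the lecture, which is what makes the step-through so concrete for students.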
The Little Man Computer - https://www.101computing.net/LMC/
A higher level Von Neumann style computer that helps introduce students gently to assembly where they can fully understand the "machine code" since it's just decimal. We then build an emulator, assembler and compiler for an extension to LMC that introduces the notion of a stack to support function calls.
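The decimal machine code really is small enough to emulate in a screenful. A minimal sketch of the standard LMC instruction set (my own sketch, not the 101computing simulator, and using the simple teaching semantics where the accumulator is an ordinary signed integer):

```python
# Minimal Little Man Computer emulator. Standard decimal opcodes:
# 1xx ADD, 2xx SUB, 3xx STA, 5xx LDA, 6xx BRA, 7xx BRZ, 8xx BRP,
# 901 INP, 902 OUT, 000 HLT.
def run(program, inputs=()):
    mem = program + [0] * (100 - len(program))   # 100 "mailboxes"
    inputs, outputs = list(inputs), []
    acc, pc = 0, 0
    while True:
        instr = mem[pc]
        pc += 1
        if instr == 0:                          # 000 HLT
            break
        op, addr = divmod(instr, 100)
        if op == 1:   acc += mem[addr]          # ADD
        elif op == 2: acc -= mem[addr]          # SUB
        elif op == 3: mem[addr] = acc           # STA
        elif op == 5: acc = mem[addr]           # LDA
        elif op == 6: pc = addr                 # BRA
        elif op == 7 and acc == 0: pc = addr    # BRZ
        elif op == 8 and acc >= 0: pc = addr    # BRP
        elif instr == 901: acc = inputs.pop(0)  # INP
        elif instr == 902: outputs.append(acc)  # OUT
    return outputs

# Add two inputs and output the sum:
# 901 INP, 350 STA 50, 901 INP, 150 ADD 50, 902 OUT, 000 HLT
print(run([901, 350, 901, 150, 902, 0], inputs=[7, 35]))   # [42]
```

Because every instruction is just a three-digit number, students can hand-assemble programs before the class ever introduces mnemonics, which is exactly what makes LMC such a gentle on-ramp to assembly.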
It's a fun one semester class, not as intense as NAND-to-Tetris but still an overview of how computing works.