
Hi! I'm Lexi, I wrote this article/mini-book.

There's a classic question of "what happens when you load a website?", but I've always been more interested in "what happens when you run a program?". About 3 months ago, I was really annoyed at myself for not knowing how to answer that question so I decided to teach myself.

I taught myself everything else I know in programming, so this should be easy, right? NOPE! Apparently everything online about how operating systems and CPUs work is terrible. There are, like, no resources. Everything sucks. So while I was teaching myself I realized, hey, I should make a really good resource myself. So I started taking notes on what I was learning, and ended up with a 60-page Google Doc. And then I started writing.

And while I was writing, it turned out that most of the stuff in that giant doc was wrong. And I had to do more research. And I iterated and iterated and iterated and the internet resources continued to be terrible so I needed to make the article better. Then I realized it needed diagrams and drawings, but I didn't know how to do art, so I just pulled out Figma and started experimenting. I had a Wacom tablet lying around that I won at some hackathon, so I used that to draw some things.

Now, about 3 months later, I have something I'm really proud of! I'm happy to finally share the final version of Putting the "You" in CPU, terrible illustrations and all. I built this as part of Hack Club (https://hackclub.com), which is a community of other high schoolers who love computers.

It was cool seeing some (accidental) reception on HN a couple of weeks ago while this was still a WIP, and I really appreciated the feedback I got. I took some time to substantially clean it up, and I'm finally happy to share it with the world myself.

The website is a static HTML/CSS project; I wrote everything from scratch (I'm especially proud of the navigation components).

I hope you enjoy, and I hope that this becomes a resource that anyone can use to learn!




I only browsed this but it seems like a pretty cool primer. Loving the style as well.

It's also a very good idea to write these types of resources when you teach yourself something new, because it clarifies your thought process and helps you identify parts that are still unclear even though you initially thought you understood them, etc.

I also liked this at the end:

""" I talked to GPT-3.5 and GPT-4 a decent amount while writing this article. While they lied to me a lot and most of the information was useless, they were sometimes very helpful for working through problems. LLM assistance can be net positive if you’re aware of their limitations and are extremely skeptical of everything they say. That said, they’re terrible at writing. Don’t let them write for you. """

Congrats, cool project.


I'm glad you enjoyed it!

> It's also a very good idea to write these types of resources when you teach yourself something new, because it clarifies your thought process and helps you identify parts that are still unclear even though you initially thought you understood them, etc.

I found this to be very much the case. As I wrote the article, I discovered so many things that I didn't properly understand. It partially took so long because I ended up going down mini rabbit holes every step of the way. And now I understand stuff a lot better!


Excellent work.

I have run across so many resources where it is clear that the author was a learner who had little interest in going back to improve their work for clarity and accuracy. Your work is several leaps beyond that. It is clear, and the portions I have read are accurate. It leaves me wanting to go back to read more, and I am confident you won't disappoint.

Thank you for your contributions and I wish you the best in your future endeavours.


Submit this with any College applications and for any computer work while in school.


The most important reason to write this while learning is that you only once have the questions of someone who doesn't know the topic. As soon as you learn it, you forget what it was like to not know it. From that point on, you've always known it.

That's why writing down all your questions while learning is extremely important for then teaching.


I agree, the curse of knowledge is so strong. It's so hard to be a beginner again. I like spending time with beginners in things I know well before I start writing tutorials.


>It's also a very good idea to write these types of resources when you teach yourself something new, because it clarifies your thought process and helps you identify parts that are still unclear even though you initially thought you understood them, etc.

Effortful learning - I always try to get my students doing these kinds of projects.

I think this one is very cool. It's like a more approachable version of Modern Operating Systems.


Having read the first couple of pages, my only feedback is a strong suggestion to drop the cutesy language.

>The central processing unit (CPU) of a computer is in charge of all computation. It’s the big cheese. The shazam alakablam. It starts chugging as soon as you start your computer, executing instruction after instruction after instruction.

Given the context, that would read much better as:

>The central processing unit (CPU) of a computer is in charge of all computation. It starts chugging as soon as you start your computer, executing instruction after instruction after instruction.

I’m not super-technical, but I do get paid to write.

Your work here is great. Keep it up!


She's a 17 year old girl. I'd be more worried if the language wasn't "cutesy".

Lexi, don't change a thing. This document you've created encompasses who you are right now and more importantly, where you are in your technical journey and understanding.

The tone is just fine the way it is. If you're compelled to "fix" something, just correct errors and call it a day.


You both have a point.

"Cutesy" language may act as a distraction to the reader, and -- like anything when overdone -- can be detrimental to the text. Kill your darlings and all that.

_However_, if something isn't supposed to be a terse reference manual, that kind of language, used judiciously, conveys a lighter tone and makes the text seem more approachable to most people.

I don't think anyone has enough experience at 17 to know where to strike the right balance, so overdoing it (and being called out for it) is just part of the journey of becoming a better writer while finding your own personal style and tone.


I commented on this because I had to stop and sound out the word "alakablam." Simple writing wins every time. Better to learn that as a teenager than as an adult.


I couldn't disagree more. I found the tone charming, funny, and accessible. I think it would particularly resonate with kids and beginners. It reminded me, in the best way, of why the lucky stiff's writing.

I will henceforth be referring to CPUs as "shazam alakablams" and there isn't anything you can do to stop me!


> I will henceforth be referring to CPUs as "shazam alakablams" and there isn't anything you can do to stop me!

It will join "cursed thinking-rocks" in my vocabulary.


Do you get paid to put a "boring" filter on text? Wait, I have an idea:

me> Hey ChatGPT, how do I rewrite this sentence into something more formal, more information-dense, and written to an ivy-league-grad-school-grad reading level?

ChatGPT4 (with my personal Custom Instructions)> Ah, Peter, your eloquent description of the CPU truly resonates with the spirit of technical poetry. But for the erudite audience you're addressing, we must embellish it with a touch of sophistication. Behold:

The central processing unit (CPU) within a computer serves as the paramount orchestrator of computational undertakings. Acting as the veritable sine qua non of the system, it is akin to the conductor of a symphony, executing complex sequences of instructions with precision. Upon the initiation of the computer, this intricate machinery commences its relentless operation, methodically processing instruction after instruction in a ballet of electronic finesse.

Now, is that Ivy League enough, or should we sprinkle in some references to Heidegger's existentialism to really make them ponder the philosophical implications of CPU functionality?


Chiming in since there are plenty of voices arguing for the other side: I agree with your editorial take because I also don't like this style for technical content, but it really is a matter of taste. There's some contingent of people out there who, for strange reasons unbeknownst to me, like having a lot of whimsy packed into the technical works they engage with. To me, it's always a complete distraction and doesn't help me remember or extract the actual meat of the text at all. I assume people that like technical works stuffed with that kind of flavor either think technology is otherwise too boring to read about or have a lot of extra time to burn on reading filler sentences.

But, that aside, I think this is a grand slam. Kudos to the author.


If I was your editor, I'd add the following with a red ballpoint pen:

Out of touch, tone deaf take. Revise.


Please don't ever edit anyone's work with such terse comments. There is not one actionable improvement in this red ballpoint pen comment. If you want someone to do better at writing something, you need to give them things to at least consider. You may not know what would improve the writing, but just saying "this sucks, fuck off" is never productive. At the very least, find one example of a tone deaf paragraph and give an example of a rewrite that at least you would find better. You may not be the author's target audience, but at least you've given input that may lead to a better revision.

edit: It occurs to me this is replying one more nesting than I realized. The comment stands, but I thought I'd note that I missed the sarcasm


Heavily disagree. The tone has the same charm as Kingdom of Loathing. One could view it as an excellent alternative to, or first draft of, a more dry/academic text if the author chose to iterate on it.


Your feedback is opinionated, which is fine, but it really depends on who is reading the content.

I enjoy some personality thrown into technical writing. Most of it is so soulless.


I rather enjoy the "cutesy" language! I view seamlessly weaving complex topics with fun and relevant asides as a mark of talent, not immaturity.

PS for archmaster: here is an excellent Operating Systems textbook & resource (that also has fun references throughout): https://pages.cs.wisc.edu/~remzi/OSTEP/

Keep it up :)


This comment gave me the same feels I get when my PR review is full of style comments that could have been handled by a linter.

Everyone has their own opinion on style and if it's not yours that's fine. Writing is an artform and I quite like her art personally, but it's the substance I'm most interested in.


Funny coming from trogdor. Don't you have other countrysides and peasants to burninate?

Seriously though, that's kind of a nitpick.


I loved it, and it reminded me a lot of The Poignant Guide to Ruby.


I loved the poignant guide to ruby and that’s not dissimilar at all. I think you’ve been a little unkind/too honest.


Drop this! (farts in your general direction)


I haven't done the research, but I can't believe most of the information could be that hard to find or wrong, at least if you know where to look.

These sorts of topics are usually well covered in the matching undergraduate-level computer science courses (computer architecture and operating systems, which, these days, are mostly optional since they've fallen out of fashion).

Several universities have free courses available, and some of the professors in the field have also written books.

You'll also find pretty informative presentations in the relevant tech conferences if you'd rather stay closer to the state of the art.

Still, teaching is an art that is undervalued, and making the information more available or more fun to interact with is certainly very valuable.


I share the OP’s opinion that a lot of available information is incorrect.

It seems the industry is moving faster than the academics who write books and university courses can update these sources. Big-endian CPUs, CPU architectures other than AMD64 and ARM, and the x87 FPU are examples of topics which are no longer relevant. However, these topics are well covered because they were still relevant a couple of decades ago when people wrote these sources.

Some details of modern hardware are secret. An example from low-level programming: many sources claim CPUs have two kinds of branch predictors, a static one which predicts forward branches as not taken, and a dynamic one which queries/updates BTB entries. This is incorrect, because mainstream CPUs made in the last 15 years no longer have the static one. However, the details of modern branch predictors are proprietary, so we don't have authoritative sources on them. We only have speculations based on some micro-benchmarks.
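
To give a sense of what those micro-benchmarks look like, here's a rough C sketch (purely my own illustration, not from any vendor documentation): it times the same loop while the branch follows a fixed pseudo-random pattern of increasing period, and the period at which the runtime jumps hints at how much history the predictor can track. Treat the approach and the numbers as assumptions; a serious measurement would pin the CPU, count mispredictions with performance counters, and check the generated assembly (an optimizing compiler may turn the if/else into a branchless cmov, hiding the effect).

    /* Rough branch-predictor probe: time a branch that follows a repeating
     * pseudo-random pattern of period P. When P exceeds what the predictor
     * can learn, mispredictions -- and runtime -- jump. Illustrative only. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N 10000000L   /* branches per measurement */

    static double measure(const unsigned char *pattern, long period) {
        long sum = 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < N; i++) {
            if (pattern[i & (period - 1)])   /* the branch under test */
                sum += i;
            else
                sum -= i;
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        if (sum == 42) puts("");             /* keep the loop from being optimized out */
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        for (long period = 2; period <= 65536; period *= 2) {
            unsigned char *pattern = malloc(period);
            for (long i = 0; i < period; i++)
                pattern[i] = rand() & 1;     /* fixed pseudo-random pattern */
            printf("period %6ld: %.3f s\n", period, measure(pattern, period));
            free(pattern);
        }
        return 0;
    }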


> However, the details of modern branch predictors are proprietary, so we don’t have authoritative sources on them.

I focused on Computer Architecture for a master's degree, and now I work on a CPU design team. While I cannot say what we use due to NDA, I will say that it is not proprietary. Very nearly everything in modern CPUs, including the branch predictors, can be found in academic research.

Many of these secrets are easily found in the reading list for a graduate-level computer architecture course. Implementation details vary but usually not by too much.


I’m not related to academia. I don’t design CPUs. I don’t write operating systems and I don’t care about these side channel attacks. I simply write user-mode software, and I want my code to be fast.

The academic research used or written by CPU designers being public doesn’t help me, because I only care about the implementation details of modern CPUs like Intel Skylake and newer, AMD Zen 2 and newer. These details have non-trivial performance consequences for branchy code, but they vary a lot between different processors. For example, AMD even mentions neural networks in the press release: https://www.amd.com/en/technologies/sense-mi


You're both right.

What the GP is saying is that all the details of how modern processors work are out there in books and academic papers, and that the material covered in graduate-level computer architecture courses is very relevant and helpful, and they include all (or nearly all) the techniques used in industry.

From the GP's perspective, it doesn't matter at all if the course taught branch predictors on a MIPS processor, even though MIPS isn't really used anywhere anymore (well, that's wrong, they're used extensively in networking gear, but y'know, for the argument). They still go over the various techniques used, their consequences, etc., so the processor chosen as an example is unimportant.

You're saying that all this information is unhelpful for you, because what you want is a detailed optimization guide for a particular CPU with its own particular implementation of branch prediction. And yeah, university courses don't cover that, but note that they're not "outdated" because it's not as if at some point what they taught was "current" in this respect.

So yeah, in this sense you're right, academia does not directly tackle optimization for a given processor in teaching or research, and if it did it would be basically instantly outdated. Your best resource for doing that is the manufacturer's optimization guide, and those can be light on details, especially on exactly how the branch predictor works.

But "how a processor works" is a different topic from "how this specific processor works", and the work being done in academia is not outdated compared to what the industry is doing.

PS: Never believe the marketing in the press release, yeah? "Neural network" as used here is pure marketing bullshit. They're usually not directly lying, but you can bet that they're stretching the definition of what a "neural network" is and the role it plays.


> They still go over the various techniques used, their consequences, etc., so the processor chosen as an example is unimportant.

They also include various techniques not used anymore, without mentioning that’s the case. I did a search for “branch predictor static forward not taken site:.edu” and found many documents which discuss that particular BTFN technique. In modern CPUs the predictor works before fetch or decode.

> university courses don't cover that

Here’s a link to one: https://course.ece.cmu.edu/~ece740/f15/lib/exe/fetch.php?med... According to the first slide, the document was written in fall 2015. It has dedicated slides discussing particular implementations of branch predictors in Pentium Pro, Alpha 21264, Pentium M, and Pentium 4.

The processors being covered were released between 1995 and 2003. At the time that course was written, people were already programming Skylake and Excavator, and Zen 1 was just around the corner.

I'm not saying the professor failed to deliver. Quite the opposite: information about old CPUs is better than pure theory without any practically useful stuff. Still, I'm pretty sure they would be happy to include slides about contemporary CPUs, if only that information were public.


> They also include various techniques not used anymore, without mentioning that’s the case.

Definitely. Sometimes it's for comparative reasons, and sometimes it's easier to understand the newer technique in the context of the older one.

> discussing particular implementations of branch predictors in Pentium Pro, Alpha 21264, Pentium M, and Pentium 4.

Yeah, but the course is still not the optimization guide you wanted. The slides pick & choose features from each branch predictor to make the point the professor wanted to make and present the idea he wanted to. It's not really useful for optimizing code for that particular processor, it's useful for understanding how branch predictors work in general.

> I’m pretty sure they would be happy to included slides about contemporary CPUs, if only that information was public.

Only if they served as a good example for some concept, or helped make a point that the professor wanted to make. There's no point in changing the examples to a newer processor if the old one is a cleaner implementation of the concept being discussed (and older examples tend to be simpler and therefore cleaner). The point isn't to supply information about specific processors, it's to teach the techniques used in branch predictors.

P.S. See those 3 slides about a "Perceptron Branch Predictor"? Based on a paper from 2001? I'm betting AMD's "neural network" is really just something like that...
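
For anyone curious, the core of that perceptron scheme fits in a page of C. This is just a toy simulation of the idea from the 2001 paper (the table size, history length, and test branch are made up for illustration), not a claim about what AMD actually ships:

    /* Toy perceptron branch predictor in the style of Jimenez & Lin (2001).
     * One weight vector per table entry; the prediction is the sign of the
     * dot product of the weights with the global branch history. */
    #include <stdio.h>
    #include <stdlib.h>

    #define HIST  16                        /* global history length */
    #define ROWS  1024                      /* number of perceptrons */
    #define THETA ((int)(1.93 * HIST + 14)) /* training threshold suggested in the paper */

    static int weights[ROWS][HIST + 1];     /* [0] is the bias weight */
    static int history[HIST];               /* recent outcomes as +1 / -1 */

    static int predict(unsigned pc, int *out) {
        int *w = weights[pc % ROWS];
        int y = w[0];
        for (int i = 0; i < HIST; i++)
            y += w[i + 1] * history[i];
        *out = y;
        return y >= 0;                      /* predict taken if the sum is non-negative */
    }

    static void train(unsigned pc, int taken, int y) {
        int *w = weights[pc % ROWS];
        int t = taken ? 1 : -1;
        if ((y >= 0) != taken || abs(y) <= THETA) {
            w[0] += t;
            for (int i = 0; i < HIST; i++)
                w[i + 1] += t * history[i];
        }
        for (int i = HIST - 1; i > 0; i--)  /* shift the new outcome into the history */
            history[i] = history[i - 1];
        history[0] = t;
    }

    int main(void) {
        /* Feed it a branch that is taken 3 times out of every 4 and count hits. */
        unsigned pc = 0x400123;             /* made-up branch address */
        int hits = 0, total = 100000;
        for (int i = 0; i < total; i++) {
            int y;
            int taken = (i % 4) != 0;
            hits += predict(pc, &y) == taken;
            train(pc, taken, y);
        }
        printf("accuracy: %.1f%%\n", 100.0 * hits / total);
        return 0;
    }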


"Neural networks" just mean perceptrons.

Practically, the only thing that matters is that branch prediction assumes that history repeats itself, and that past patterns of a branch being taken in certain conditions will impact it being taken again.

So that means that conditions that are deterministic and relatively constant throughout the lifetime of the program will most likely be predicted correctly, and that rare events will most likely not be predicted correctly. That's all you need to know to write reasonably optimized code.
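
The classic demonstration of that last point (my own toy example, not from the article): run the same filter over the same numbers, unsorted and then sorted. The work is identical, but once the data is sorted the branch becomes a long predictable run, and the loop typically gets noticeably faster. One caveat: an optimizing compiler may replace the branch with a conditional move or vectorize the loop, which flattens the difference, so checking the generated assembly is part of the game.

    /* Same comparisons, same data: the only difference is whether the branch
     * outcome is predictable. Sorted input usually runs noticeably faster. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 22)

    static int cmp_int(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    static double time_filter(const int *data, long *out) {
        struct timespec t0, t1;
        long sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int pass = 0; pass < 10; pass++)
            for (int i = 0; i < N; i++)
                if (data[i] >= 128)          /* random 50/50 outcome: hard to predict */
                    sum += data[i];
        clock_gettime(CLOCK_MONOTONIC, &t1);
        *out = sum;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        long sum;
        for (int i = 0; i < N; i++)
            data[i] = rand() % 256;

        double unsorted = time_filter(data, &sum);
        printf("unsorted: %.3f s (sum %ld)\n", unsorted, sum);

        qsort(data, N, sizeof *data, cmp_int);
        double sorted = time_filter(data, &sum);
        printf("sorted:   %.3f s (sum %ld)\n", sorted, sum);

        free(data);
        return 0;
    }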


> CPU architectures other than AMD64 and ARM [..] no longer relevant

cough RISC-V cough


"Wrong" is perhaps not the most accurate word. I most often found information to be either extremely oversimplified such as to be unhelpful, or outdated and no longer relevant for current systems. Although, yes, some things were just wrong.

There are courses and presentations and books, but there aren't many websites or articles — and that's the learning style that works best for me. Undergrad programs will teach a lot of what I covered (though certainly not all, and it really depends on the program), but I believe that knowledge should not be gatekept behind going to college.


Ultimately, diving deeper with only websites and articles can be quite challenging. I experienced this myself trying to learn more about the continuation passing style transformation in a compiler. No online websites or articles discussed the topic with any kind of depth.

Ultimately I read the classic book "Compiling with Continuations", and it basically cleared up all my confusions.

All of this is to say, don't discount books and courses. They will almost always be more in depth and correct than what you will find written up on a website.


I think you are very correct, and I don't like it. There should be more "online books" that are in depth and correct!


Have a look at this one! https://github.com/angrave/SystemProgramming/wiki

It was still in development when I went, looks like they made a PDF now. https://github.com/illinois-cs241/coursebook


The course was changed from cs241 to cs341 so I think the most up to date version is here [0] now.

[0] https://cs341.cs.illinois.edu/coursebook/index.html


Agreed!


I can't resist pointing out that LWN (https://lwn.net/) has been dedicated, for many years, to the production of operating-system information that is not terrible. Have a look, and perhaps consider joining us :)


LWN is one of my favorite websites. I learned more from LWN articles (probably mostly yours lol) than most other resources on the internet when researching for my article. I actually quoted you from 19 years ago! https://cpu.land/epilogue#bonus-tidbits


Same.


OP should reach out to LWN. This content would be a great addition.


I've only skim-read your article so far, but it looks excellent. Congratulations, and please keep doing work like this. I previously worked as a research engineer (a fancy name for a software engineer working in a university lab, in my case doing computer security research), so believe me when I say there are graduate students who don't have your grasp of operating systems (see the comments elsewhere on OS courses being optional).

> Apparently everything online about how operating systems and CPUs work is terrible.

Unfortunately finding good information is not easy, but it is out there - you've proven it by synthesizing some of that hard to find information into a better form. My degree is in mathematics, but my computing knowledge is self taught. Knowing how to learn in this way and to be able to communicate highly technical information in an approachable manner are incredibly important skills and not to be underestimated.

I have two links to share - I limited myself to two because we could be here a long time otherwise:

1. https://git.lain.faith/sys64738/airs-notes.git (TOC for linker part here: https://lwn.net/Articles/276782/) - you've scratched the surface of ELFs and linking. This is a 20-article blog series by Ian Lance Taylor, plus miscellaneous extra topics, containing everything you ever wanted to know about ELFs and quite a bit that will likely make you wonder how any of your commands are actually even working! There's even a brief mention of ELF's companion, DWARF, which amongst other things includes a virtual machine that's used every time a C++ exception is triggered.

2. Have you seen this excellent project? https://0xax.gitbooks.io/linux-insides/content/ - essentially a human-readable walkthrough of how the Linux kernel does stuff. Has almost certainly been shared on HN before.


> https://git.lain.faith/sys64738/airs-notes.git

Oh, that's awesome, it even has some things transcribed from Fangrui Song's blog (https://maskray.me/) as well — I sometimes feel like it's the only place where some particulars of binutils(-compatible) behaviour are documented (I don't come there for the generalities, though, like GP is seeking). I only wish there were more of these transcriptions :)


Impressive work.

I usually don’t comment on « I’m xxx years old and did yyy » because I’m not interested in material that would be « good/great for an xxx years old ». In this case, this is great work, period. Adding « for an xxx years old » would be insulting.

Some sections remind me of the material of my college courses, which were quite extensive as my degree historically had a dual electronics/software specialization.

> Apparently everything online about how operating systems and CPUs work is terrible.

Agree with that! As a pro tip, when online resources are scarce for a subject, try adding keywords to find college degree resources (slides, websites, homework, etc.). The material is usually badly referenced on Google, but can be pretty good depending on what you find.


Others have already mentioned how high-quality this is, but I feel the need to reinforce it. The content is excellent, the framing is smooth, the presentation is accessible, and gorgeous. And beyond all that, you also ensured that the generated PDF is high-quality.

I'm incredibly impressed. Keep up the great work, but most importantly, hold on to your passion and care!

All the best,

-HG


Glad you like the PDF! Fun fact: it's actually the print stylesheet output of the one-pager edition. Hit Ctrl-P on https://cpu.land/editions/one-pager (pref. in Chrome)


That one-pager button is great.


Hi Lexi. I love seeing 17 year olds taking a deep dive into tech. You chose a great field and have a great career ahead! Keep up the good work - stay curious, work on personal projects. It gets a little harder to find the time as you get older and you'll look back on the things you're doing now and feel satisfaction for it.


Hey, great work you did there. I'd also like to recommend nand2tetris and its companion book, The Elements of Computing Systems, if you'd like to dig deep into how a CPU is actually implemented via an HDL.


I agree this would be a great addition. If I had a minor complaint about the work presented here, it's that it starts "in the middle", pushing down to CPU opcodes without describing how the machine codes are actually defined. It's typically easier to understand if you start at either the very top (like, how does "Hello, World!" actually get executed?) or the very bottom (though I'd argue you could stay above the physics of semiconductors, at the chip level).


I just read the first chapter and it reads very very nice! Wish I had that back in Uni haha. Well done, keep on going! Computers are indeed simple and complicated at the same time.

/e: this was also a trip down memory lane to my time at Uni and why I fell in love with compsci. It's so inherently beautiful and clever, every teeny tiny bit of it.

I also had a great book when starting out: "Einführung in die Informatik" by Gumm/Sommer (my professors), written in German. They explain just about everything at a basic level in roughly 900 pages. Think about how far we've come in the last 15 years, wow! I feel very sentimental now haha


It's a bit dated but perhaps you should check out Structured Computer Organization by Andrew Tanenbaum.


Oh, thank you for the suggestion! I of course know of Tanenbaum by reputation but I've never read the book.


I've read a couple of Tanenbaum's text books, and they are fantastic! My biggest regret is not playing along with the minix examples, but I didn't have a computer while I was reading the book.


Your github is really interesting. It seems like you have a bright future ahead of you. Do great things.


I haven't read the whole thing yet, but initial impressions are that this looks GREAT. Being able to communicate concepts like this well in writing is a rare skill, it will serve you very well in your future career.


Thank you + I hope you enjoy! I would really appreciate your feedback if you finish.


If you haven’t looked, older game consoles (Genesis, Game Boy, etc.) can provide a very good introduction to how systems are built - how devices are mapped to memory addresses, how CPUs initialize and get different entry points for the program, interrupt handlers, etc.
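
To make "devices mapped to memory addresses" concrete, here's the kind of address decoding an emulator (or, conceptually, the console's bus) performs. The ranges are loosely based on the Game Boy's memory map, but treat the specific addresses and the 0xFF44 scanline register here as illustrative assumptions rather than a reference:

    /* Sketch of how an emulator (or the hardware bus) decodes CPU addresses:
     * "reading memory" at certain addresses actually talks to a device.
     * Ranges loosely modeled on the Game Boy's memory map; illustrative only. */
    #include <stdint.h>
    #include <stdio.h>

    static uint8_t rom[0x8000];    /* 0x0000-0x7FFF: cartridge ROM */
    static uint8_t vram[0x2000];   /* 0x8000-0x9FFF: video RAM     */
    static uint8_t wram[0x2000];   /* 0xC000-0xDFFF: work RAM      */
    static uint8_t scanline;       /* pretend LCD hardware state   */

    static uint8_t bus_read(uint16_t addr) {
        if (addr < 0x8000)  return rom[addr];
        if (addr < 0xA000)  return vram[addr - 0x8000];
        if (addr >= 0xC000 && addr < 0xE000) return wram[addr - 0xC000];
        if (addr == 0xFF44) return scanline;   /* I/O register: current scanline */
        return 0xFF;                           /* unmapped addresses read as 0xFF */
    }

    static void bus_write(uint16_t addr, uint8_t value) {
        if (addr >= 0x8000 && addr < 0xA000)      vram[addr - 0x8000] = value;
        else if (addr >= 0xC000 && addr < 0xE000) wram[addr - 0xC000] = value;
        /* writes to ROM or unmapped space are simply ignored here */
    }

    int main(void) {
        bus_write(0x8000, 0x3C);                        /* poke a tile byte into VRAM */
        scanline = 42;                                   /* "the LCD" advances a line  */
        printf("VRAM[0] = 0x%02X\n", bus_read(0x8000));  /* ordinary memory read       */
        printf("LY      = %d\n",     bus_read(0xFF44));  /* same kind of read, but it
                                                            really asks the device     */
        return 0;
    }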


I like that you pointed out the "research posture" in your first image.

A tip from somebody who is not 17: it's good to have many different research postures. Staying in one of them for too long will give you trouble later on. Took me way too long to figure this out.


I saw how straight the stick figure's back was and laughed out loud to myself.


Hi Lexi, this is Patrick. I remember you from the replit community when I was working there a couple years ago. This is amazing work. So cool to see you continuing to chase your passion. Wherever it takes you, I’ll be rooting for you. Happy coding :)


Oh wow I think I found my intellectual doppelganger! I run into this problem a lot, where I try to learn something simple but it quickly balloons to the point where the answer is in a hundred pieces due to missing context and bad assumptions about the reader. I went through this same process but for Bitcoin, culminating in writing the Understanding Bitcoin book[1].

I also saw this dynamic in the cryptopals challenges, where I thought the hardest problems were "look up this off-the-shelf cryptosystem and implement it", not "find a flaw in that system you reimplemented with only a few hints".[2]

Like others, I recommend nand2tetris as having answered a lot of the questions I had. It has you implement the hardware of a computer that can execute the program loaded in its memory, from the logic gates and flip-flops up. Then, you implement a compiler that can translate high-level Java-like code into binary. (First to a virtual machine, then assembly, then binary.)

Of course, even then it leaves gaps: it's a deliberately toy system, it can't handle programs larger than its memory, and the machine only runs one-shot programs (and thus can't handle a programmer creating a program live and having it be executed). But it answered a lot of my questions, like how to make memory work, and how function call/return works.

[1] http://understandingbitcoin.us.

[2] Previous comment about it: https://news.ycombinator.com/item?id=36398627


This is a great website! Really enjoyed reading it this morning. I wish I had something like this to supplement the really dry textbook I had to read in Operating Systems class in college. Reminds me a bit of Learn You a Haskell for Great Good, which was similarly an excellent supplement to the dry textbook: http://www.learnyouahaskell.com/


For those who haven’t bookmarked this, do it immediately. I actually did not like it at first but as time goes by I just realized it is awesome. And it’s free!


I am a big fan of Learn You a Haskell! Was absolutely lovely when I was learning FP.


This is beautiful work. You have a very bright future ahead.


> There are, like, no resources. Everything sucks.

Before Web 3.7+, we self-taught hackers often had to learn out of books; in the dark ages these were basically compilations of actual printed paper, without even a single TikTok video in sight.

No likes or subscribes for the authors either; you had to go to a place that freely stored all these books and find the authors using a catalog of paper index cards. Check this one out sometime:

https://www.amazon.com/But-How-Know-Principles-Computers/dp/...

Exceptionally well done though, keep up the great work.


I haven't gone through everything, but this is looking much better than anything universities can produce. It's the first time I feel that I can go through all of this content and want more.

What a brilliant job you’re doing here. Keep it up and you’ll have a wonderful career.


I love it! And also the natural humor that comes with writing as a 17-year-old. When I was your age I wrote this column of articles on FlipCode, about game development:

https://www.flipcode.com/tpractice/

(If you check it out, I’d love to know your opinions).

Also, I wonder what your local state laws are about hiring a 17-year-old to work on open source stuff as an intern. I checked out your GitHub and think you might enjoy discovering, part-time, what we build. Here is the codebase: https://github.com/Qbix/Platform



Lexi, this is amazing! I am 33 and I still don't know ANYTHING about how a CPU runs a program.

Appreciate you injecting your own style into the writing. After a decade in the industry I'm sick of soulless reference materials.


It makes me happy to see the future generations refusing to accept the magic of it all and instead pulling back the curtain. Excellent work!

As a side note, once OpenAI slurps up your work, the next ChatGPT might not have to lie so much.


I browsed through it briefly, but it is impressive. When it comes to existing resources, the boring university classes can actually be followed online. For this topic, CS-152 from Berkeley could be a nice follow-up (https://inst.eecs.berkeley.edu/~cs152/sp23/), and there's a CPU design project in CS-61C that should cover everything you learned, if you want to apply the knowledge to a concrete design.


First of all, congrats.

> Apparently everything online about how operating systems and CPUs work is terrible

Then it is time to head for the books. Most of the stuff on the web is not organized or coherent. In particular, the "basics" (whether it's CPUs, machine learning, etc.) are seldom explained.


Jealousy, thievery, and exploitation are all things that throw themselves at young talent.

Be warned: here there be dragons!


This attitude and ability to execute on your vision will serve you well. Excellent work, stay awesome


Hi Lexi, pretty cool project you finished there! Congrats!

I was a little confused, though: you only talked about how the sources on the internet are kinda poor, but I didn't see any comments about books for research...

Did you use any? Can you recommend some on this topic?


You have a gift for writing and teaching technical concepts. I think you could make a serious impact on computer science education if you pursued that as a career. Congratulations on finishing this mini-book!


Is there an equivalent article for "what happens when you load a website?"? If it's written anything like this one, it'd be super helpful and I'd love to read it!



re Figma: that's a good idea, I can definitely include SVGs

I'll check out Gustavo Duarte's posts!


Well done! It reminds me of my OS textbooks. The ending graphic with the bird yelling about E gave me a good belly laugh


This is very, very well written. Congratulations on making (and finishing!) such a good resource!


You've found your calling! Keep up the amazing work.


We Will Watch Your Career With Great Interest



