Playstation Architecture: A Practical Analysis (copetti.org)
342 points by plerpin on April 21, 2020 | 111 comments



I remember developing for the PS back in 1996-1999. On the first title I worked on, I was building the graphics engine and animation systems. Originally in C, then in MIPS assembler to get as much perf as possible: with a fixed target, the difference for your title would mostly come down to the performance of the graphics engine.

I got to the point where I'd fitted the entire graphics engine and animation system into 4K, so it would fit in the instruction cache, and moved as much regularly used data into the 1K scratchpad as I could fit (yes, an L1 cache where you decided manually what to put in it!). Access to the scratchpad would take 1 cycle.
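
For anyone curious what that looks like in practice, here's a rough, hypothetical C-level sketch (the scratchpad is just a fixed 1K window of fast memory, documented at 0x1F800000 on the PS1; the function and names here are made up):

    /* Hypothetical sketch: "using the scratchpad" simply means keeping your
       hottest data behind a pointer to that fixed 1K region instead of main RAM. */
    #define SCRATCHPAD ((volatile short *)0x1f800000)   /* 1 KB of 1-cycle memory */

    void stage_hot_data(const short *src, int count)    /* count <= 512 shorts */
    {
        for (int i = 0; i < count; i++)
            SCRATCHPAD[i] = src[i];   /* later reads of these values cost ~1 cycle */
    }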

Then I'd 'hand interleave' asm operations. Reading from memory was slow: it took 4 cycles, which normally would be filled with NOPs by the C compiler (1 load instruction, 3 NOPs). So I'd use assembly instead of C and try to fill those NOPs with other actual operations that didn't need the memory being requested, essentially doing hand-crafted concurrency.

Because loading and storing from/to memory was such a common operation, this would make the code very, very hard to maintain, and sent me slightly crazy for a while! Often it meant doing x, y, z operations (for 3D processing, like vector multiplication) concurrently, but wherever the NOPs could be reduced, more could be done.
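
A rough C-level sketch of the idea (the real thing was hand-scheduled MIPS assembly; the names below are made up): start the loads for the next element before finishing the arithmetic on the current one, so independent work sits where the NOPs would have been.

    /* Hypothetical illustration of hand-interleaving: instead of
       load -> wait -> use, the loads for element i+1 are issued while
       the multiplies for element i are still being done. */
    typedef struct { int x, y, z; } Vec3;

    void scale_all(const Vec3 *in, Vec3 *out, int n, int s)
    {
        if (n <= 0) return;
        int x = in[0].x, y = in[0].y, z = in[0].z;   /* prime the pipeline */
        for (int i = 0; i < n; i++) {
            int nx = 0, ny = 0, nz = 0;
            if (i + 1 < n) {                         /* loads for the NEXT element */
                nx = in[i + 1].x; ny = in[i + 1].y; nz = in[i + 1].z;
            }
            out[i].x = x * s;                        /* arithmetic on the CURRENT one */
            out[i].y = y * s;                        /* overlaps those loads          */
            out[i].z = z * s;
            x = nx; y = ny; z = nz;
        }
    }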

With various other bits of cunning I eventually got it to the point where I broke the manufacturer's specs for whatever Sony said the PS could do per second (memory is a bit fuzzy about what those specs were, but I remember myself and the team being pretty damn pleased at the time).

It was a fun machine to program for. The Saturn, which was out at the same time, always struggled to keep up because it was so hard to develop for, even though on paper it was better. I think that was what sounded the death knell for Sega.


The current AMD chips have 1.5MB of LEVEL 1 CACHE! To say nothing of the 8MB L2, 32MB L3, and the integrated video processor.

And you had to do the work that an optimizing compiler could do today, although they have to target a "theoretical" CPU rather than a fixed hardware set like the old consoles.

Since all the superscalar tricks such as out-of-order execution, speculative execution, and the like are now maxed out, maybe CPU vendors should concentrate on optimizing a fixed VM-sized hardware profile that compilers can more efficiently target.

A CPU ISA is somewhat like that, but it has so much variance.

With Moore's law gone, we will actually have to start removing abstraction from the development pipeline and get to more optimized and direct code to drive performance.


> Reading from memory was slow: it took 4 cycles, which normally would be filled with NOPs by the C compiler (1 load instruction, 3 NOPs). So I'd use assembly instead of C and try to fill those NOPs with other actual operations that didn't need the memory being requested, essentially doing hand-crafted concurrency.

This isn't usual for the MIPS ISA, is it? MIPS has branch delay slots but not memory delay slots, per my understanding. (Of course if there's a data dependency on a pending load the CPU still has to stall.)


The R3000 suffered from the need for load delay slots too, unfortunately. I may have forgotten all the exact details though. I am genuinely in awe of people who seem to retain the details of this stuff 10 - 20 years later, my brain certainly doesn't work like that!


For anyone interested in what it was like developing for the original PSX (and therefore under hardware constraints):

How Crash Bandicoot Hacked The Original Playstation

https://www.youtube.com/watch?v=izxXGuVL21o

Immensely interesting video! "Old" hardware makes me feel so humble regarding what we have now. Hardware limitations back then really pushed developers towards novel approaches and solutions.


Thanks for the interest in our shared video gaming past. I had a lot of fun making that video. The PS1 was a fun machine as it was capable, complex enough that you felt it had secrets, but not so bizarre or byzantine that you felt learning them was a waste of time. And you were pretty much the only one in there as the libraries were just libraries, not really an OS. Still true of the PS2 although that was a complex beast, but by the PS3 there was more of a real OS presence. If you want some more, slightly different, slightly overlapping info on the PS1 or making Crash, I have a mess of articles on my blog on the topic: https://all-things-andy-gavin.com/video-games/making-crash/


Oh wow, this is akin to spotting a celebrity out on the street!

I happened to already be half-way through the extended "war stories" interview on the making of Crash you had done and it is superb; you are a joy to listen to! I remember reading these blog articles of yours many years ago but will definitely be revisiting!

As an aspiring hobbyist game developer, I often feel I have missed out on that golden age of game development, where you really had to think outside the box and hack your way around the architecture to achieve your design and performance goals.


I really enjoyed your "Making Crash Bandicoot" blog posts and the "War Stories" video. I would love to read about your work on Jak & Daxter and working on the PS2.


Me too, the Jak & Daxter games have a very special place in my heart as my childhood introduction to gaming. I'd love to read about it!



Andy Gavin (in the video) was also responsible for the development of GOAL, Game Oriented Assembly Lisp (https://en.wikipedia.org/wiki/Game_Oriented_Assembly_Lisp)


Naughty Dog (the studio founded by Gavin and Jason Rubin) is also home to the ICE Team, a technology hub for Sony titles. I wonder if Sony saw the culture of tech expertise in the studio and decided to put a team there to handle PSX dev tech.

https://en.m.wikipedia.org/wiki/Naughty_Dog#ICE_Team


awesome! thanks for sharing.


While hardware is a lot more performant now, game devs are still pushing the boundaries; I'd say even more than back then. Modern triple-A games like Red Dead Redemption 2 are a giant collection of such tricks.


>Modern triple-A games like Red Dead Redemption 2 are a giant collection of such tricks.

It's just too bad somewhere along the way they forgot games were supposed to be fun as they were slapping each other's backs over how ingenious they were to be able to render every flea and tick on a horse's asshole while you clean it out and feed them a carrot.


Those war stories videos are very interesting and impressive. But I’m personally glad I’ve never been asked to program a video game in assembler using a trackball and no keyboard.

https://www.pcmag.com/news/first-kirby-game-was-created-with...


Agreed, on both counts. I've seen some other War Stories videos and they're pretty good. It's interesting to see some perspectives from people involved other than developers (although they can be a bit hyperbolic/dramatic at times). https://www.youtube.com/watch?v=BQ3iqq49Ew8


On the same note, I discovered this[1] yesterday: an explanation of why things looked a bit warped in PS1 games. The explanation actually blew my mind. I had heard the term z-buffer before but didn't really know what it did.

[1] https://www.youtube.com/watch?v=x8TO-nrUtSI
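
For anyone else who was fuzzy on it: a z-buffer is just a per-pixel depth value checked before each write, so nearer surfaces win regardless of draw order. The PS1 GPU has no depth buffer at all (polygons are sorted into an ordering table instead), which is part of why its rendering looks the way it does. A minimal, hypothetical sketch of the depth test:

    #include <stdint.h>

    /* Toy depth test: keep the incoming pixel only if it's nearer than what's
       already stored at that location. Smaller depth = closer here. */
    void plot_with_depth_test(int x, int y, float depth, uint32_t colour,
                              float *zbuf, uint32_t *framebuf, int width)
    {
        int i = y * width + x;
        if (depth < zbuf[i]) {        /* nearer than the pixel already there */
            zbuf[i] = depth;
            framebuf[i] = colour;
        }
    }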



That was cool; also check the comments for a neat thread by someone who worked on an early PlayStation game!


Wow, I never imagined my article being on Hacker News. Thanks for sharing it! If anyone has any comments, requests, or wants to report a mistake, please drop me a message (my email address is on the website); I'm constantly adding more material.


I really enjoyed the special features (3d model viewers etc) you added to this article. It's a great presentation and an interesting read.


There's a lot of awe in the comments about how limitation drove ingenuity. It's true of most electronics from the second half of the 20th century, not just the toys that came at the end. The Game Boy is a great example. A Z80 is an incredibly simple thing. You have to sit there and hack together games in assembly, performing all sorts of tricks to keep the memory footprint down. The first Speak & Spell is another example. I wish I could conjure up more, but basically any novel toy that involved electricity between 1950 and 1970 is a work of art. You don't need a computer, or even a transistor, to use electronics tools in a creative way to make an interactive toy.


It makes me feel nostalgic to read about the times when people still tried to understand the hardware and make things work amazingly well despite harsh resource limits.

Nowadays, even simplistic programs take ages on a device 1000 times more powerful. I had time to read this article because restarting our ruby development server is so excruciatingly slow.

It seems the craftsmanship aspect of computer programming is getting lost, and all that remains is the large leverage that one can use to drive profits with software.


I was a programmer back then, and it's not the "golden age of programming" that some make it out to be. For one thing, WE were complaining about how big and bloated software was becoming compared to the "old days" when programmers only had 64KB to work with. On top of that, it took SO LONG to develop basic functionality that you could probably build in a one hour language tutorial now. Things that were excruciating to build are trivial now. Yes, there is a cost. Abstraction adds bloat, but it also allows developers to be much more productive.

I have a friend I play Go with who is an old Univac programmer (he's 30 years older than me), and he often pulls the "call me when you have to use punch cards" line.

Developers always look back on the past a bit wistfully, but I assure you that some things never change.


Thank you for this wisdom and perspective.


I've found this sentiment common in posts like these, but most of them seem to conveniently ignore how complex software (especially video games) and hardware have become. Compare screenshots of a game released in the PS1 era to even a low-budget indie title released today (e.g. Hellblade), and that's just the graphics; you also have other aspects like multiplayer and AI. Games aren't written from scratch on bare-metal machines now because it's not feasible to do so.

This applies to general software too; things that weren't a big deal (or didn't exist) in the '90s are important now: security, networking, OS-native UI frameworks. The OS itself doesn't give you free-for-all access to the machine anyway.

Also, if you really wanted, instead of using Ruby you could bang out a backend/REST API using assembly[1]/C[2] even today; it's just that nobody wants to do that because it's a fucking pain in the ass.

[1] https://asm32.info/index.cgi?page=content/0_MiniMagAsm/index...

[2] https://kore.io/


I don't think it's about developers caring less.

I think it's because businesses have figured out that, in most cases, it is more cost-efficient to throw powerful hardware at problems than to spend money on bespoke solutions or specialists.


If it is all about business decisions, then why does a lot of open source software have the same kinds of problems?


Open source software still has costs associated with it, if not in cash, then in time.


> It seems the craftsmanship aspect of computer programming is getting lost, and all that remains is the large leverage that one can use to drive profits with software.

Too many lazy/greedy developers fighting against users and operating systems (coughelectroncough)


Or maybe software is just more widespread than before and it's no longer an exercise practiced by a select few (selected by hard work, luck, you name it) "experts". It is completely understandable if this new wave of developers cares more about their time budget than about efficient hardware utilization.

This is the tendency for any craft that gains popularity over time. We can only advocate best practices by making them accessible, approachable and applicable. We can then hope the crowd listens, and if they don't, that's OK; it's their loss (or not, if their audience doesn't care).

A good analogy: you don't have to build an earthquake-resistant building if you are gonna use it to store some metal junk (carpentry tools, gardening tools, etc.). It's not a house and there is no danger even if you live on top of an active fault line.


I understand that Electron apps tend to be a resource hog, but what other options for software create as many cross-platform opportunities with as little work? I think for every single greedy developer/company there's another small software project only able to get off the ground because of the maximized opportunities. This is coming from a relatively new full-time developer who's trying to get a small side project off the ground.


This is also partially the fault of the operating system companies for not making native development as easy and portable as it should be.

Things like SwiftUI and the upcoming "clips" (or whatever the iOS/Android "partial/temporary apps" tech is going to be called) are a step in a good direction.


Why should Apple care about portability of their desktop apps to OSes like Windows? Native APIs are almost by definition non-portable. That's why you need something like Qt to bridge the gap for you and provide an OS-agnostic API.


So much this. I began a configuration program for a desktop app recently, and it's taking months of spare time to get even a rudimentary UI with Qt. Electron or Qt WebView would probably only take a few hours for a web dev like myself.

All that said, desktop development is an interesting challenge, if sometimes maddening.


> I understand that Electron apps tend to be a resource hog, but what other options for software create as many cross-platform opportunities with as little work?

Is laziness a good excuse for poor software? For wasting the time and resources of everyone who uses the application? For disregarding the conventions of the host platform, including any user preferences and accessibility features?


The flaw in your argument is that good, fast software is usually neither the main goal nor the priority of the people who drive software, a.k.a. businesses.


> For wasting the time and resources of everyone who uses the application?

I don't believe this argument for a second, given the ubiquity of successful Electron apps.


An application that takes longer to load, is slower, and consumes more CPU and RAM is by definition wasting the user's time and resources. The slow and bloated nature of Electron applications is griped about quite often. That people are required to use Slack for work doesn't change that.


JavaFX?


All cross-platform solutions suck. At least consider writing one native client for every supported platform. Only support the platforms that your customer base truly needs. Focus on the most important platform first.


Java is pretty good when it comes to being cross-platform.


No it isn't!

Sure, you can run some sort of Java VM on a lot of platforms. That's no big distinction, you can say the same about Javascript, Python, Lua and so on.

Once you enter the realm of graphical applications, it's a clusterfuck, just like any other "cross-platform" thing.


Does Swing not run pretty much the same on Mac, Linux, and Windows?


Yes, it sucks on all these platforms equally badly.


Electron is the future. Why? Developers would rather spend more of your computer's time (and other resources) than their own time. And most end users -- e.g., people who work in a Slack-based business and so NEED the Slack app -- consider the tradeoff worth it.


Or, more likely, don't have an option or a real voice.


PWAs should be the future for apps that don't need many permissions. Why do we need an individual Chromium instance for just a chat app?


A Ruby dev server doesn't need to be slow. Put some "craftsmanship" resources into making your dev tools tighter and your app less bloated with dependencies and startup steps.


> Put some "craftsmanship" resources

Hard to find time for when you have 30 JIRA tickets every two-week "sprint", none of which have anything to do with improving performance.


> restarting our ruby development server is so excruciatingly slow

Is it running on Windows? (Honest question.)


It's crazy to see just how far they've come from the first console to the latest and all the lessons they've learned along the way.

https://www.youtube.com/watch?v=ph8LyNIT9sg (Some Playstation 5 architecture highlights).


Well, as much as I would love to share the excitement, all they've come to is what essentially amounts to another gaming PC. Not sure how 'far' this is, but clearly all innovation happens now on the PC side. (It almost seems now that the custom, non-PC architectures of the earlier PlayStation models and other consoles may have had to do with the need to provide enough power in a smaller package.)


Did you watch Cerny's presentation, specifically the part on the custom SSD architecture? I drew the opposite conclusion: this is the most innovative development from the console market in possibly decades. They're throwing everything behind optimizing the SSD-to-VRAM pipeline in a way that component-built PCs won't be able to do until several new industry standards are developed, with potentially profound impacts on the way games are designed.

It’s specifically an interesting contrast with the Playstation 3’s exotic Cell processor architecture, which was probably driven more by a desire to appear innovative than by practical applications. By moving to standard x86 architecture, Sony has counterintuitively allowed its system designers to focus on areas that will actually give game developers some novel possibilities.


> optimizing the SSD-to-VRAM pipeline in a way that component-built PCs won’t be able to do until several new industry standards are developed, with potentially profound impacts on the way games are designed.

What are the innovations? I'm just seeing Gen 4 PCIe paired with some sort of NVRAM solution (details are sparse). Apologies if I'm missing something spectacular here...


The only really interesting thing is that the console SoCs have decompression offload hardware, so data coming in from the SSD at 4-5GB/s can be unpacked to a 9+GB/s stream of assets, straight into the RAM shared by the CPU and GPU. A desktop PC can easily handle the decompression on the CPU with a combination of higher core counts and higher clock speeds, but then the data still needs to be moved across a PCIe link to the GPU.

The SSDs themselves are nothing special, and there's no clear sign of anything else in the storage stack for either new console being novel. It looks like they're improving storage APIs and maybe using an IOMMU, neither of which requires new industry standards for the PC platform.


When it comes to graphics assets, they've been stored compressed and decompressed by the GPU on the fly pretty much since early versions of DirectX. What is the innovative part here?


They're using lossless compression algorithms that are a lot more complicated than S3 Texture Compression and friends, and work on arbitrary data rather than being appropriate only for textures.


Custom architecture was prevalent back then because there weren't any real standards for developing a system that could render 3D graphics. If you wanted 3D rendering, you had to do it yourself. No one was selling you a pre-built stable chip.

The now defunct fixed function pipeline was still being formalized in PCs, let alone consoles. Each console was a massive exploration into what could be done and who could make the best SDK for that hardware to entice developers to make games.

The fact that all modern consoles and PCs are similar should be applauded. It's the culmination of decades of hardware and software lessons, standardized into the optimal hardware for graphics and game performance.

The fact that they all found what they needed in a similar architecture footprint is good for hardware design, good for developers, good for business, good for porting, good for optimization, and finally, good for performance.

Sony, Nintendo, and Microsoft are no longer able to design the latest and greatest chipsets. The complexity is just too much for them to reasonably pay to do so. That's why AMD does it for them. Why would you reinvent the wheel when someone has already spent decades making Ferraris and is begging you to use their engine at a discount, bulk manufacturing included?

Back during the PS1, Sony had a heavy hand in the graphics card design. Not as much anymore. They tell AMD what they want to do and AMD adapts an existing chip to do it for them.

I know it sounds less glamorous, but honestly, it's the most remarkable thing I've ever seen in the history of modern manufacturing, short of landing on the moon. Having one company able to provide chipsets for any and all applications, including gaming at a practical whim seems pretty amazing.

tl;dr: The consoles are all the same because AMD/Nvidia are so much better at hardware design than Microsoft/Sony/Nintendo that it's not even funny anymore. If someone has the latest and greatest already available for licensing, just buy it and get on with making amazing games.


> Having one company able to provide chipsets for any and all applications, including gaming at a practical whim seems pretty amazing.

it's amazing, but is it a good thing?

could we say the same about, say, Amazon?


Another great article about PSX architecture, with regard to the DOOM port specifically: http://fabiensanglard.net/doom_psx/index.html

I highly recommend the series about various ports of Another World on the same website as well.


Unfortunately there's no writeup or such about it, but I worked on a doomed (ha!) project to port Unreal 1 to the PSX. I was doing level design at the time. The programmers did manage to get a functioning renderer up. It was limited to simpler geometry than the PC software renderer could handle at the time, but it still worked well enough that you could have made a game on it, had other things not gone wrong with the project. If you search for "Unreal PSX" you'll find some work by a couple of diehard retro Unreal fans to try to finish up some of the partially completed content.


Thanks!


What's amazing is that it had only 2 MB of RAM; that's at least 2000 times less than a cheap phone has today. In fact, that's less than a moderate-quality mobile phone photo as stored on disk.


I think these programmers were forced to understand their subject matter much better than developers today.

Say, for a (silly) example, you want the first 10 000 digits of Pi. It's pretty easy to just store that today. But back then you didn't just have to know what Pi is; you had to come up with the smallest program to calculate it that you could think of.
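
For fun, a sketch of the kind of thing that implies: the classic integer-only spigot (often attributed to Dik Winter; shown here unobfuscated and lightly commented) prints roughly the first 800 digits of pi with nothing but an int array. Growing f[] and the starting value of c gets you more digits.

    #include <stdio.h>

    /* Classic integer-only pi spigot, unrolled for readability:
       prints about 800 digits, four at a time. */
    int main(void)
    {
        static int f[2801];                  /* zero-initialised working array */
        int a = 10000, b, c = 2800, d, e = 0, g;

        for (b = 0; b < c; b++)
            f[b] = a / 5;                    /* every term starts at 2000 */

        while (c > 0) {
            d = 0;
            g = 2 * c;
            for (b = c; ; ) {                /* one right-to-left spigot pass */
                d += f[b] * a;
                g--;
                f[b] = d % g;                /* keep the remainder in place...  */
                d /= g;                      /* ...and carry the quotient down  */
                g--;
                b--;
                if (b == 0) break;
                d *= b;
            }
            printf("%.4d", e + d / a);       /* emit four more digits */
            e = d % a;
            c -= 14;
        }
        printf("\n");
        return 0;
    }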

It would be interesting to hear, too, how Tekken 3's coders managed to work with 2 MB of RAM.


Which is why I find it so ironic that many still think only languages like Assembly, C or C++ have a place in IoT on devices like the ESP32.

Sure, we used Assembly when performance was the ultimate goal, but also plenty of high-level languages, including stuff like Clipper for database front ends.

512 KB and a couple of MHz are already capable of doing a lot of stuff; one just needs to actually think how to properly implement it.


> Which is why I find it so ironic that many still think only languages like Assembly, C or C++ have a place in IoT on devices like the ESP32.

Short answer: Parkinson's law.

When I browse the Web, open my Windows File Explorer, open Photoshop, open Visual Studio, open even just a graphical application that needs GPU acceleration, or do whatever else should not be an issue, I find that Parkinson's law very much applies.

https://youtu.be/GC-0tCy4P1U?t=1727

https://youtu.be/GC-0tCy4P1U?t=2172

https://twitter.com/rogerclark/status/1247299730314416128

Wtf. Did you see how fast VS6 started on a machine from almost 20 years ago? Today I get depressed whenever I need to open Visual Studio, and I'm not even sure I use any functionality that wasn't available 20 years ago.

Many many programmers (in my perception at least) are extremely dissatisfied with today's state of computing, and the reason why we got here is the popular opinion that we don't need to worry about performance.

Nobody should optimize representations of Pi or count CPU cycles by default. That's not the point. But if you claim that Java is always fast enough, for example, then I think we're having a strong disagreement. It's partly confirmation bias and being unaware of vast parts of the landscape, but it's also a fact that I couldn't name a complex GUI written in Java with satisfying ergonomics.


Java (the language) is not especially slow; back in the mid-2000s I was optimising a Java GUI to display hundreds of thousands of nodes for chip-design clock trees.

Java (the culture) makes it hard to be performant. There's a great tendency to use all sorts of frameworks and elaborate object systems where much simpler code would give you at least 90% of the functionality for 10x the performance.

But if you get some programmers with experience beyond Java who care about performance and are rewarded for performance, it's certainly possible.

I briefly wondered whether the demoscene ever had a go at Java, and indeed they did: http://www.theparty.dk/wiki/Java_Demo_1999 / https://www.youtube.com/watch?v=91HzuGqpTHo


Yep, I agree. Java isn't slow if you build a trivial application. Or an HTTP server. Or something for batch processing. Or if you're a masochist working around performance issues until you reach roughly C-level performance and performance "robustness".

The problem is what Java encourages you to do. And I didn't want to single out a language; Java is just one that I revisit from time to time and I'm always astonished how awkwardly hard it is to do the simplest things in a straightforward (and CPU-efficient) way.

And yes, there are a lot of slow C++ programs, and even slow C programs. There's just a clear tendency for C programs to be faster and more responsive.


> tendency for C programs to be faster

I was involved in a massive rewrite of a website from C to Java a while back. One coworker observed that, when they were coding in C, it took a lot longer to get anything working, but once you did, it was pretty solid: C had a tendency to just crash if anything was wrong, so you'd work for quite a while before you got something that didn't crash consistently. Java, on the other hand, allowed you to get something working (that is, running without crashing) much more quickly; but the things that were out of place were still there, and they just caused harder-to-find problems that were much more likely to become customer-facing before they were caught.


The problem isn't Java; after all, Photoshop also has performance issues when compared with its old self, and it is C++.

The problem is that mechanical sympathy seems to be a lost art, and it is going to be slow as molasses regardless of the language being used.

What to expect when Electron is the new darling for writing desktop applications?


I totally agree that the problem is lack of mechanical sympathy.

A big problem is simply that software development is too hyped, and there is too much money in the industry, so there are too many inexperienced developers producing bad software. Also, projects are too ambitious, which leads to compartmentalized implementations and, in turn, slow and unreliable code. The problem often starts already with the things that people want to build.

And my guess is that the use of Java and other object-oriented languages is highly correlated with these developments.


Legacy products like Visual Studio and Photoshop also suffer from age. Those codebases are both over 20 years old and have a gazillion features that customers have come to rely on. Add in that a codebase that old will be slower to make changes to than something fresh, and you end up where they are.


Visual Studio had a UI rewrite to .NET about a decade ago.

VS Code is Electron.

If they had kept maintaining the old thing, which you say is suffering from age, maybe they would have fewer performance issues; but instead they threw out the old thing and rewrote it on costlier, more recent frameworks.


The bizarre part is that if you mention to the people making Visual Studio that every version is slower and has more latency and lag in its interactivity, they seem to have no idea what you are talking about and ask what specific situation you are running into. They either don't know or don't acknowledge the evolution of the performance of their software.

I read once that Adobe Acrobat has thousands of static variables that initialize on startup. Multiple PDF readers like SumatraPDF are successful largely because of Acrobat's bloat and startup time.

It's crazy what happens when programming teams have no priority for not wasting users' time and computer resources.


Given the security track-record of those languages, I sure hope not.


> think how to properly implement it

OTOH, the only people who can think how to properly implement it are people who are also perfectly comfortable with Assembly, C or C++.


Not necessarily. When I did my degree, our Algorithms and Data Structures class was done in C; a couple of years later the same course was upgraded to Java.

However, I know from former university friends who eventually became TAs that what was required from students was kept at the same level.

The professor responsible for the class had a battery of tests (almost a decade before unit tests became a known term), and having the tests green was the first requirement for the class projects to even be accepted.

Those tests did not only check for correctness; performance, memory consumption and execution time were all part of the acceptance criteria.

This is what is missing from many teaching institutions.


yeah, but modern programmers are also solving harder problems. Compare games from early consoles with today's: the scope is vastly larger.


Like shipping Electron apps.


Folks crap on Electron apps and rightfully so, but I absolutely LOVE one thing about them relative to native GUI apps.

I can hit cmd+ and cmd- to scale their content and UI up and down.

That is dang near a killer feature for me.

Very handy for presenting, screen sharing, when I move my apps to an external monitor with a slightly different UI, or for moments when my eyes are simply tired and I want something easy to read.

You can say that I shouldn't need hundreds of MB of RAM to run Slack's desktop client, and you ain't wrong, but I've got plenty of RAM. My eyes and my time are much more finite resources.


Writing Electron apps for me, as a matter of fact, brings back the time when one had to be hyper-mindful of CPU cycles, just like back in the day when I was writing Windows apps in C++.

Everything is going through the prism of "but how much will it cost in terms of performance?" Granted, this should be the case for server software as well, but in the case of clients, you don't know what ghetto shit your code will run on; you write for the worst possible case. Server specs, at least, are known.


Honestly dude, I know you are trying to joke and all, but this just sounds incredibly rude. This minimization of someone else's skill is petty and absurd. Electron apps serve their purpose and have their place; hating on them will not get you anywhere.


Electron solves a very important problem. It efficiently prevents those pesky Linux users from using your software. Most electron apps simply do not work with Wine.


Wait, what? The point of electron is to be cross platform, I never had an issue running an electron app on Linux.


It seems to me that the point of Electron is, in many cases, to leverage existing knowledge of client-side web programming paradigms and tools. Multi-platform support is just a bonus that comes with it.


I'm not sure if you saw the interview on Polygon [1], but I really liked hearing about doing the first Tekken port from the arcade to the PS1 and dealing with the new memory constraints.

[1] https://youtu.be/XB0mgLo9lEw?t=1214


"back then you didn't just have to know what Pi is, you had to have the smallest program to calculate it"

A floating-point division of two three-digit numbers used to serve as a good-enough approximation of π.
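
Presumably the classic 355/113, which is accurate to about seven digits; a quick check:

    #include <stdio.h>

    int main(void)
    {
        double approx = 355.0 / 113.0;   /* 3.1415929... */
        printf("355/113 = %.10f\n", approx);
        printf("error   = %.1e\n", approx - 3.14159265358979323846);
        return 0;
    }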


3.14159 is often a good enough approximation for pi.


It probably felt like a lot compared to the ~128K the previous generation of consoles had.


2000 times more than a ZX81.


2000 times better graphics than a ZX81.


It's a matter of perspective. When one has witnessed it from the times of Pong until now, it is maybe much easier to appreciate the whole journey. I still find myself trying to save bandwidth on my internet connection, which was a very scarce resource not that long ago. When I see people listening to music via YouTube, unnecessarily transferring all that video content that nobody really watches, I want to cry :). Same with programming: going from the ZX to where we are now, you can easily feel the luxury of having almost infinite memory and CPU power. And of course, games now are hardly comparable to what we had on the ZX. A great thing is to step back and try to program things like the Arduino or ESP8266/ESP32, where you, again, have to think twice about resources.


How comparable is an ESP32 to a Spectrum 48K? (Apart from memory, where the ESP has many times more to play with.)


The ESP32 is of course so much more powerful than the ZX was. I meant that programming the ESP is very different from the usual daily-job enterprise stuff.


Neat article, content-wise. I wouldn't call it in-depth because it's very high level, but it is neat.

The presentation could be made much better by getting rid of those tabbed sections and just making the article one long page.


Thanks. The reason for the tabs is that each article has many sections (CPU, graphics, audio, etc.), so I imagined some people may be more interested in particular sections than others. That way, if you find something interesting, you can use the tabs to read more about it; otherwise, you only have to scroll down a little bit. This behaviour is only found when viewed on a desktop PC, by the way; mobile users will see a long page.

Anyway, I'm not a professional web designer; the tabs are just an attempt to keep the info concentrated.


I liked the tabbed sections, but I feel like they could use some differentiation from the rest of the content. For example, in the "limitations" tabbed section, I had to scroll down and flip through the tabs to determine where the tabbed content ended. Separating this out a little more would make it apparent that "changing this tab only changes this content," something along these lines (rough in-page mockup): https://imgur.com/D2lhOwV


That's a nice idea, I'll experiment with it. Cheers


>> making the article one long page.

On a desktop at least, let us please not :)

* Allow me to understand the structure and length of content I'm entering

* Allow me to jump quickly to what I need, now and in the future

* Allow me to load only what I want

All good things! :)


Is "page-flipping" basically double buffering or am I missing something?


I hadn't heard the term in a while, but I believe page-flipping more specifically means that the buffer roles can be switched in hardware (e.g. by changing a "framebuffer start address" register in the CRTC or RAMDAC) instead of requiring a buffer to be copied (e.g. via a DMA or blitter operation). Both schemes are double-buffered in the sense that you're never actively rendering to the same buffer that's being scanned out, but page-flipping has significantly less overhead.
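
A quick, hypothetical C sketch of the distinction (fb0/fb1 and scanout_addr are made-up stand-ins, not any real chip's registers):

    #include <stdint.h>
    #include <string.h>

    #define FB_BYTES (640 * 480 * 2)             /* one 16bpp framebuffer */
    static uint8_t fb0[FB_BYTES], fb1[FB_BYTES];
    static volatile uintptr_t scanout_addr;      /* stand-in for a CRTC/RAMDAC register */
    static uint8_t *back = fb1;                  /* the buffer we draw into */

    /* Page-flipping: swap roles by re-pointing the display hardware. O(1). */
    void present_flip(void)
    {
        scanout_addr = (uintptr_t)back;          /* hardware now scans out what we just drew */
        back = (back == fb0) ? fb1 : fb0;        /* old front buffer becomes the new back */
    }

    /* Copy-based double buffering: blit into a fixed front buffer. O(n). */
    void present_copy(uint8_t *fixed_front)
    {
        memcpy(fixed_front, back, FB_BYTES);
    }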


When's the last time anyone produced a hardware platform where double-buffering required a full copy rather than just updating a pointer/register? PCs moved past that in the '90s at the latest, and I'd expect most other platforms that supported 32+ bit addressing on both the CPU and graphics processor were similarly capable of relocating the front buffer at will.


In all multiple-framebuffer capable hardware platforms I know of, there's a pointer to the FB involved (in the case you describe, an address-bearing register).

Otherwise a copy is required into a hardcoded address or physical memory, and then you're not double-buffering anymore.


It blows my mind that the little mobile device in my hand is able to not only render but actually interact with models at the touch of my fingertips that were once only possible with a major home console. How far we’ve come :)


>1024×512 pixels with 16-bit colours or a realistic one of 960×512 pixels with 24-bit colours allowing to draw the best frames any game has ever shown…

One small detail that I don't see mentioned in the article is that the GPU cannot actually rasterize at 24bpp, only 15-bit RGB555. 24bpp is mostly only used to display pre-rendered static images or video decoded by the MDEC. I seem to recall one 2D game that managed to have 24bpp gameplay, but it was a clever hack more than anything else. Internally the GPU always functions at 24bpp, however; it just dithers and truncates to RGB555, so an emulator can actually remove the truncation and run at 24bpp "natively".
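
For reference, a minimal sketch of that truncation step (minus the GPU's ordered dither): each 8-bit channel simply loses its low three bits on the way into the 15-bit framebuffer.

    #include <stdint.h>

    /* Pack a 24-bit colour into the 15-bit layout VRAM actually stores:
       red in bits 0-4, green in 5-9, blue in 10-14 (bit 15 is the mask/
       semi-transparency flag). The real GPU dithers before truncating. */
    uint16_t rgb888_to_rgb555(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint16_t)((r >> 3) | ((g >> 3) << 5) | ((b >> 3) << 10));
    }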

Beyond that some of the GPU's limitations can be improved in emulators with more or less complicated hacks. In particular a modification called PGXP can be used to side-channel the depth and sub-pixel precision data to the GPU implementation to allow perspective correct and more precise rendering: https://www.youtube.com/watch?v=-SXT-y0vKv4

It doesn't work perfectly with all games and it's fairly CPU-intensive but it looks pretty decent when it works well.

>MIDI sequencing: Apart from playing samples, this chip will also synthesise MIDI-encoded music.

I don't know what that means. I implemented the SPU on my emulator a couple of weeks ago and I'm not really sure what that refers to.

>The port of the controller and the Memory Card are electrically identical so the address of each one is hardcoded, Sony altered the physical shape of the ports to avoid accidents.

To expand on that: the interface always talks to both the controller and the memory card within the same slot, so when you talk to memory card 1 you also talk to whatever is plugged into controller port 1. Then, in the serial protocol, the first byte says who you're talking to (0x01 for the pad, 0x81 for the memory card), and the other device is supposed to see that and remain in high-Z.

So actually plugging a memory card in a controller port (or vice-versa) would work, the problem would be if you plugged two memory cards or two controllers in the same port, in which case they'd speak on top of each other.
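
Roughly what that first "who am I talking to" byte looks like in practice; exchange_byte() here is a made-up stand-in for clocking one byte over the port's serial bus, and the rest follows the commonly documented digital-pad poll sequence:

    #include <stdint.h>

    /* Hypothetical helper: shift one byte out on the given port and return
       whatever the addressed device shifts back. Not a real API. */
    extern uint8_t exchange_byte(int port, uint8_t out);

    enum { ADDR_CONTROLLER = 0x01, ADDR_MEMORY_CARD = 0x81 };

    /* Both devices on the port see the first byte; only the addressed one
       replies, the other stays in high-Z. */
    uint16_t read_pad_buttons(int port)
    {
        exchange_byte(port, ADDR_CONTROLLER);    /* select the pad, not the card    */
        exchange_byte(port, 0x42);               /* "read"; pad answers with its ID */
        exchange_byte(port, 0x00);               /* pad answers 0x5A                */
        uint8_t lo = exchange_byte(port, 0x00);  /* button states, low byte         */
        uint8_t hi = exchange_byte(port, 0x00);  /* button states, high byte        */
        return (uint16_t)(lo | (hi << 8));
    }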

Beyond that the protocol to discuss with the memory card and especially gamepad is, in my opinion, absolutely insane. It's over-complicated and under-featured. It's also incredibly slow (especially for memory card access).

Regarding copy protection:

>On the other side, this check is only executed once at the start, so manually swapping the disc just after passing the check can defeat this protection...

That works with most games, but later games were more clever: you could relock the drive and restart the init sequence early on to see if the drive really recognizes the disc.

It was also used as a protection against early modchips: since those would constantly stream the SCEx magic string to unlock the drive (instead of just during the first sectors like a real disc would) you could lock the drive, read some sectors that shouldn't be able to unlock it then re-check. If the drive is unlocked you know there's a modchip and you display a spooky message about piracy. Note that this technique would detect the modchip even when playing with an authentic disc so you'd effectively be unable to play the game at all on modded hardware.


Thank you for helping me improve the article. I'll take a closer look at your comments tonight.


Spyro looked a lot better without the textures IMO


Spyro without textures and only Gouraud shading looks like every mobile game from 2009-2019.

From the examples it looks like the textures in Spyro were mostly about hiding all the ugliness around the edges from the straight Gouraud shader output, aside from the characters.


I think Spyro is one of the best examples of Gouraud shading mastery on the system. The textures are there to add detail; in fact, they had a basic LOD system for the environment where they swapped textured models with Gouraud-shaded models using the same tint for objects far away.
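
That kind of swap is cheap to express; a hypothetical sketch of the idea (all names made up):

    /* Distance-based LOD as described above: beyond some threshold, draw an
       untextured, Gouraud-shaded (vertex-coloured) mesh tinted to match. */
    typedef struct Mesh Mesh;            /* opaque; filled in by the engine */
    void draw_mesh(const Mesh *m);       /* assumed renderer entry point    */

    typedef struct {
        const Mesh *textured;            /* full-detail mesh with UVs       */
        const Mesh *flat_shaded;         /* same shape, vertex colours only */
        float swap_distance;
    } LodModel;

    void draw_lod(const LodModel *m, float distance_to_camera)
    {
        if (distance_to_camera > m->swap_distance)
            draw_mesh(m->flat_shaded);   /* no texture fetches, just Gouraud */
        else
            draw_mesh(m->textured);
    }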

The skyboxes were rendered as well as meshes and then shaded, and they still hold up today from an artistic point of view: https://imgur.com/gallery/vocZw



