The code running on the CPU isn't the only thing the computer is doing in that one second; the x86{,_64} opcodes we could capture aren't necessarily exactly what the CPU is doing; and code being run by extra controllers and processors isn't likely to be accessible to anyone but the manufacturer.
In 1989, we'd've also been looking at code running on a single core with a single-task or cooperative-multitasking OS (for most home computers, anyhow), with simpler hardware that an individual could completely understand, and it would run at a speed where analyzing a second of output wouldn't be completely beyond the pale.
I've analyzed CPU logs from DOS-era programs and NES games. I certainly haven't analyzed a full second of the code's execution; I'm usually focused on understanding some particular set of operations.
I think this could only be done from an outside perspective for any given unit of time (meaning e.g. somehow monitoring the actual physical state of every atom of a modern system) and could only be performed by other machines.
I think we've reached a sort of "chicken and egg" point in computing history where we can only understand any given device with the help of other machines, though not necessarily with AI or Machine Learning.
Maybe that sounds obvious, given that VLSI tools have been around for a long time, but I think the point is that there is no such thing as full knowledge in this realm, and we are already totally dependent on our computing devices to understand our computing devices.
It's an interesting thought - how would one implement this? The closest thing that comes to mind so far is the DARPA Cyber Grand Challenge work, some of which is open source (https://github.com/mechaphish) and is a good starting point at the software level, anyway. The question there was similar in some ways: describe what's happening in this code path at time t, but then take it a step further and generate a patch to fix the bug(s).
>I think this could only be done from an outside perspective
>only be performed by other machines
JTAG allows in-device debugging. Of course you need another machine, by some definition, to view the results, or you'd just be recursing indefinitely.
> simpler hardware that an individual could completely understand
Our hardware now is so much more complex; what has it gained us? The quick answer is performance, but is that true? What about correctness? Hard to prove either way, but my guess is we've gained a little bit on performance and lost on correctness.
I could point out that 'gained a little bit on performance' is a wild understatement, since the actual gain is several orders of magnitude, but even that would be an understatement. It suggests that a second of work on a modern PC could be duplicated on a 1989 PC if you were willing to wait all day, when in reality it couldn't: very little of what I use computers for today could be done on the machines I had in 1989 at all, no matter how long I was prepared to wait.
My computer certainly does more than the one I had 27 years ago did, in terms of operations performed per second, data storage capacity, intercommunication options, the range of devices I can interface with it, and so on. Some of these are just changes in magnitude, and the changes in the software are just as important as the changes in the hardware.
There are also more parts of the computer (again, both hardware and software) that are undocumented. Taken as a whole, the system is more capable, but closed hardware and software make me wonder about capabilities in my computer that serve someone who isn't me.
Personally, I like to occasionally remember that there's not a single computing device in my life I really own. It can be either a comforting piece of knowledge or a frightening one depending on your perspective. I like to take the former perspective.
I see what you mean, but also no one ever owned a loaf of bread that could be cut, toasted and buttered remotely by a totally unknown 3rd party either...
Good point. You walk into the diner implicitly trusting the line cook and the bread baker and so on - very similar to the chain of trust implicit in the tech we use. Only I believe the chain of trust in the tech is much longer, and it has proven to have been repeatedly broken in recent decades. Also, line cooks have the luxury of tossing out loaves of bread that have moulded over.
My first computer was a 14 MHz Amiga 1200 in the early 90s. In some ways the performance difference isn't too noticeable, e.g. the GUI was often more responsive than those I use today.
In other ways it's clearly different; e.g. waiting minutes for a JPEG to decode, as the scanlines slowly appeared one after another.
I think the expectations have changed though - my first computer was a C64, and loading and starting a game was a wait of several minutes. I don't remember being bothered much by that, but when I tried playing an old game even 15 years ago, I couldn't believe how much time it took.
The same goes for my Amiga: starting it took a really long time, but I don't think I minded. The Workbench was never slow though. Unfortunately both my A1200 and C64 have died, so I can't test my patience anymore. I do remember that on the A500, flood fill in Deluxe Paint was a visible process though :-)
I used the A1200 with an MC68030 expansion while going to university up to about 1995, and it wasn't much slower to work with than the DEC Alphas we had there - except for things requiring raw CPU power. Most of the time waiting was for I/O, and my crappy small hard drive was probably faster than the NFS mounts anyway. The Alphas had 384 MB of RAM if I remember correctly though, which was just crazy.
The Amiga wasn't fast enough to play 16-bit MP3 files in stereo even with the 68030 CPU, though.
In 1995 I replaced the Amiga with a PC running Linux, and computing was still amazingly fast. Installing Slackware was a two-week project, however, mostly because I had to download everything at the university and carry it home on floppies, and partly because the floppies were reused and flaky, so I had to go back and redownload many disks.
The internet was crazy slow outside the university until about 1998, when I was lucky enough to live in a block that got fiber for some reason. It was still slow at most workplaces for another decade.
Around 1998 I got a job and a laptop for work. I installed Linux and Window Maker (or was it AfterStep?) and it was totally fine to work on. It might have had a Pentium with 32 or possibly 64 MB of memory. All in all it was really fast, once it had booted, and I mainly used emacs and gcc. I remember that booting Windows on that machine was much slower.
As far as I can remember, it was also possible to use a browser without having 1 GB of RAM at that time.
A full compile of our product took 6 hours though. It wasn't always necessary but it had to be done occasionally. A few years later it took 30 minutes to compile.
Today, I get irritated if I have to wait more than 30 seconds before I can test a line of code.
The reason that modern cores are in the billions of transistors is to get performance with correctness: tons of speculative work that can be rolled back if some invariant turns out not to hold.
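Here's a minimal sketch of that trade-off (my own toy example, nothing from this thread): sum the same array twice, once with an unpredictable branch and once after sorting so the branch becomes predictable. Mispredicted speculation is rolled back, so the two sums are identical; only the time differs. (Compiler flags matter - an optimizer may turn the branch into a conditional move and hide the effect.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1 << 24)

    /* Sum the elements >= 128; the `if` is the branch the CPU speculates past. */
    static long long sum_if_big(const int *a, int n) {
        long long s = 0;
        for (int i = 0; i < n; i++)
            if (a[i] >= 128)
                s += a[i];
        return s;
    }

    static int cmp_int(const void *x, const void *y) {
        return *(const int *)x - *(const int *)y;
    }

    int main(void) {
        int *data = malloc(N * sizeof *data);
        if (!data) return 1;
        for (int i = 0; i < N; i++)
            data[i] = rand() % 256;            /* unpredictable branch pattern */

        clock_t t0 = clock();
        long long unsorted_sum = sum_if_big(data, N);
        double unsorted_secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        qsort(data, N, sizeof *data, cmp_int); /* same values, predictable pattern */

        t0 = clock();
        long long sorted_sum = sum_if_big(data, N);
        double sorted_secs = (double)(clock() - t0) / CLOCKS_PER_SEC;

        /* Identical results, very different times: speculation that guessed
         * wrong was rolled back, so correctness was never at risk. */
        printf("unsorted: %lld in %.3fs\n", unsorted_sum, unsorted_secs);
        printf("sorted:   %lld in %.3fs\n", sorted_sum, sorted_secs);
        free(data);
        return 0;
    }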
GPUs are definitely in the performance-at-the-expense-of-correctness bucket. It's common for errors to occur in the LSB, which isn't a big deal for colors but might be a big deal for data.
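To make the LSB point concrete, a quick sketch (my own illustration, nothing GPU-specific): flip the lowest mantissa bit of a 32-bit float, i.e. a one-ulp error. Quantized to an 8-bit color channel the value comes out the same; treated as data that has to be bit-exact, it no longer matches.

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <math.h>

    /* Toggle the least significant mantissa bit of a float: a one-ulp error,
     * the kind of low-order discrepancy described above. */
    static float flip_lsb(float x) {
        uint32_t bits;
        memcpy(&bits, &x, sizeof bits);
        bits ^= 1u;
        memcpy(&x, &bits, sizeof bits);
        return x;
    }

    int main(void) {
        float exact = 0.73f;          /* some shaded intensity in [0, 1] */
        float noisy = flip_lsb(exact);

        /* As a color: both quantize to the same 8-bit channel value. */
        printf("8-bit channel: %d vs %d\n",
               (int)lrintf(exact * 255.0f), (int)lrintf(noisy * 255.0f));

        /* As data: a bit-exact comparison (checksum, reproducible run) fails. */
        printf("bit-exact match: %s\n", exact == noisy ? "yes" : "no");
        return 0;
    }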
I assume that SapphireSun refers (primarily) to faults in individual manufactured devices, not to a design that fails to comply with the floating-point standard. EDIT: although the faults I'm referring to would be just as likely in the MSB, so maybe not.
SapphireSun, these are mitigated in large part by ECC. It was offered by both major manufacturers, and at least one of them still offers it.