Hacker News | marmakoide's comments

Having something Turing-complete is surprisingly easy, and it hides everywhere. The repository has a small document that explains how you can use printf() as a computer: it can perform additions, logical union, and negation, which is enough.

It was unintentional, but Ken Thompson being Ken Thompson, you can't be 100% sure.


List of examples: https://gwern.net/turing-complete

It was probably unintentional, yeah, I don't recall any mentions of early printf being overloaded to do stuff, nor is it clear why you would do that since you're using it in a much more convenient Turing-complete language already (C).


So there was no extension of the functionality over time, all the formats have been supported from day one?


The key feature used here is the '%n' format specifier, which fetches a pointer as its next argument and writes a character count back through it.

There is actually an interesting question here: was '%n' always in printf, or was it added at some point?

I took a cursory look at some old Unix source archives at TUHS: https://www.tuhs.org/cgi-bin/utree.pl

As far as I can tell from the PDP-11 assembly, Version 7 research Unix (relevant file: /usr/src/libc/stdio/doprnt.s) does not appear to implement it.

The 4.1BSD version of that file even explicitly throws an error, treating it as an invalid format specifier.

The implementation in a System III archive looks suspiciously similar to the BSD one, also throwing an error.

Only in a System V R4 archive (relevant file: svr4/ucblib/libc/port/stdio/doprnt.c) did I find an implementation of "%n" that works as expected.

I guess it was added at some point to System V and through that eventually made it into POSIX?


I think it was first introduced in 4.3 BSD Tahoe (released June 15, 1988): https://www.tuhs.org/cgi-bin/utree.pl?file=4.3BSD-Tahoe/usr/...

This was an update to the earlier 4.3 BSD (1986), which still implemented printf() in VAX assembly instead and didn't support the %n feature.

So %n may have originally been implemented in 4.3 BSD Tahoe and made its way into SVR4 subsequently.


Don't look at the end destination, look at the journey to the destination

* Learn low-level details of a basic but real-world CPU

* Practice the brain gymnastics of programming an atypical Turing-complete computer

You created new connections in your brain and put some of the old, established connections to use. Having a machine spit out the emulator would rob you of all that. Like, you can drive from A to B, but running from A to B does you a lot more good.


There are lots of C compilers (LCC, TCC, SDCC, an army of hobby-project C compilers) available as open source.

I am curious what the results would be for something like generating a lexer + parser + abstract-machine code generator for a made-up language.


It's stochastic monkeys, but enhanced with a really good bias towards coherent prose, built upon a gigantic corpus.


So, it's monkeys specifically good at typing Shakespeare.


Hot take: the whole LLM craze is fed by a delusion. LLMs are good at mimicking human language, capturing some semantics along the way. With a large enough training set, the amount of semantics captured covers a large fraction of what the average human knows. This gives the illusion of intelligence, and humans extrapolate about LLM capabilities, like actual coding. Because large amounts of code from textbooks and whatnot are in the training set, the illusion is convincing to people with shallow coding abilities.

And then, while the tech is not mature, running on delusion and sunk costs, it's actually used for production stuff. Butlerian Jihad when?


I think the bubble is already a bit past peak.

My sophisticated sentiment analysis (talking to co-workers, other professional programmers, and IT workers; reading HN and Reddit comments) seems to indicate a shift: there's a lot less storybook "Ay Eye is gonna take over the world" talk and a lot more distrust, and even disdain, than you'd see even 6 months ago.

Moves like this will not go over well.


AI proponents would say you are witnessing the third stage of 'First they ignore you, then they laugh at you, then they fight you, then you win.'


> Butlerian Jihad when

I estimate two more years for the bubble to pop.


The plan went from AI being a force multiplier to a resource-hungry beast that has to be fed in the hope that it's good enough to justify its hunger.


Self-plug here, but very related => Robustness and the Halting Problem for Multicellular Artificial Ontogeny (2011)

A cellular automaton where the update rule is a perceptron coupled with an isotropic diffusion. The weights of the neural network are optimized so that the cellular automaton can draw a picture, with self-healing (i.e. it rebuilds the picture when perturbed).

Back then, auto-differentiation was not as accessible as it is now, so the weights were optimized with an Evolution Strategy. Of course, using gradient descent is likely to be way better.


Regular Sunday 10 miles here, then I had the pleasure of experiencing plantar fasciitis. I love running, but the injuries can be really annoying.


I fixed my recurrent back pain with a 6 min daily morning routine, i.e. plank, side plank, reverse plank, 1 min 30 sec each.

Posture muscles are not well known to the general public. Loss of strength due to aging and a sedentary lifestyle makes standing, sitting, etc. uncomfortable.


The best posture is your next posture.

I'd say as long as you use all your muscles meaningfully every day, and don't spend hours in a single position, you're good.


Significantly faster compilation means less friction to iterate on ideas and try things, which in the end leads to more polished results.

A nice interface is agreeable, but maybe there are diminishing returns when you pay for it with long compile times. I remember pondering that when working with the Eigen math library, which is very nice but such a resource hog when compiling a project that uses it.

