
That's because arrays, or even if/else statements, are already higher-level abstractions. To be Turing complete you simply need memory that can be read from and written to, and the ability to "jump" or "move" to some memory location.

"Turing complete" may sound like a big deal, but almost anything that can compute at all is Turing complete.



You also need unbounded storage. Technically, if you have unlimited-width integers then two variables are all you need (a two-counter Minsky machine is Turing complete), but most languages have fixed-width integers, which makes them finite-state automata.
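To make the "two variables are all you need" claim concrete, here is a minimal sketch of a two-counter machine in Python, where the two registers are ordinary bignums. The instruction encoding (`inc`, `jzdec`, `jmp`, `halt`) is my own invention for illustration, not a standard format:

```python
# A two-counter (Minsky) machine: two unbounded integer registers
# plus conditional jumps suffice for Turing completeness.

def run(program, a=0, b=0):
    """Execute a list of instructions on registers a and b.

    Hypothetical instruction encoding:
      ("inc", reg)          - increment register "a" or "b"
      ("jzdec", reg, addr)  - if register is 0, jump to addr;
                              otherwise decrement and fall through
      ("jmp", addr)         - unconditional jump
      ("halt",)             - stop, returning (a, b)
    """
    regs = {"a": a, "b": b}
    pc = 0
    while True:
        op = program[pc]
        if op[0] == "halt":
            return regs["a"], regs["b"]
        elif op[0] == "inc":
            regs[op[1]] += 1
            pc += 1
        elif op[0] == "jmp":
            pc = op[1]
        elif op[0] == "jzdec":
            if regs[op[1]] == 0:
                pc = op[2]
            else:
                regs[op[1]] -= 1
                pc += 1

# Example: compute a + b by draining b into a.
add = [
    ("jzdec", "b", 3),  # 0: if b == 0 goto 3, else b -= 1
    ("inc", "a"),       # 1: a += 1
    ("jmp", 0),         # 2: loop
    ("halt",),          # 3
]
print(run(add, a=3, b=4))  # → (7, 0)
```

Since Python ints have no fixed width, nothing here imposes a bound other than available memory, which is exactly the distinction being discussed.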


Why two variables? Can't you always encode two arbitrary-width integers into one arbitrary-width integer?

For example, using 0b0001 as a separator, so that in pseudocode you end up with left, right = memory.split('0001'), and then encoding everything else so that the 0b0001 pattern never occurs. It might be awfully complicated, but you can always use a fixed but arbitrarily large part of this arbitrary-length integer as scratch space to implement the encoding.

Edit: for an easier time, put one integer in the even-indexed bits of the binary representation and the other in the odd-indexed bits.
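The even/odd bit interleaving can be sketched directly in Python, whose integers are already arbitrary-width (function names are my own):

```python
# Pack two arbitrary-size non-negative integers into one, losslessly,
# by interleaving their bits: x goes to even bit positions, y to odd.

def pair(x, y):
    """Interleave bits of x (even positions) and y (odd positions)."""
    result, shift = 0, 0
    while x or y:
        result |= (x & 1) << (2 * shift)
        result |= (y & 1) << (2 * shift + 1)
        x >>= 1
        y >>= 1
        shift += 1
    return result

def unpair(z):
    """Recover (x, y) from an interleaved integer."""
    x = y = 0
    shift = 0
    while z:
        x |= (z & 1) << shift   # even-position bit belongs to x
        z >>= 1
        y |= (z & 1) << shift   # next (odd-position) bit belongs to y
        z >>= 1
        shift += 1
    return x, y

assert unpair(pair(123456789, 987654321)) == (123456789, 987654321)
```

This is one of many computable pairing functions; any injective, computable encoding with computable projections would do, which is why "two variables" and "one variable" have the same power once the integers are unbounded.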


Technically, no computer in our universe is Turing complete, but we don't make a fuss about it: in theory they aren't complete, but in practice they are. I wonder if there's a theorem that links the maths with the physics in an intuitive way. Thermodynamics sort of does, but entropy isn't that intuitive.


You can have e.g. a Python implementation (lists and bignums) that acquires more and more storage as needed: first from RAM, then from disk, then via fancy cloud Memory-as-a-Service requests, then by commissioning new datacenters, then by branching out to other planets, etc., until it runs into the finiteness of the universe, all the while providing the abstraction of lists and numbers that can grow without a hard limit (although with growing access latency). The practical limit is determined entirely by things extraneous to the computation model itself.

On the other hand, if you have e.g. C, then your program in principle cannot have an array with indices larger than SIZE_MAX, which is guaranteed to be a finite number. This limit is built into the C abstract machine. Trying to weasel out of it via FILE I/O is hindered by the fact that ftell() has to return accurate results that fit into a long (now, if ftell() were unavailable, or were allowed to fail for large enough offsets, then fseek()/fread()/fwrite() could actually be used in an obvious way to simulate a tape without a hard limit on its size). So this computation model, while practically Turing-complete, is not theoretically Turing-complete.

So, my original point was that built-in support for arrays, while providing ergonomics and performance, does not really extend the computational capabilities of the language: languages with finite-sized integers but without arrays can simulate finite-sized arrays just the same. And if your integers are unbounded, then you can use e.g. Gödel numbering to simulate unbounded arrays (again, with a loss in ergonomics and performance).
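The Gödel-numbering trick can be sketched in Python, whose bignums stand in for the unbounded integers: an unbounded array of non-negative integers is represented by a single integer, with element i stored as the exponent of the i-th prime. The helper names below are my own:

```python
# Simulate an unbounded array with one unbounded integer via Gödel
# numbering. Hopelessly slow, but it demonstrates that the
# computational power is already present without built-in arrays.

def nth_prime(i):
    """Return the i-th prime (0-indexed) by trial division."""
    count, n = 0, 1
    while True:
        n += 1
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            if count == i:
                return n
            count += 1

def get(mem, i):
    """Read element i: the exponent of the i-th prime in mem."""
    p, e = nth_prime(i), 0
    while mem % p == 0:
        mem //= p
        e += 1
    return e

def set_(mem, i, value):
    """Write element i: replace the exponent of the i-th prime."""
    p = nth_prime(i)
    while mem % p == 0:   # strip the old exponent
        mem //= p
    return mem * p ** value

mem = 1                   # the empty "array"
mem = set_(mem, 0, 7)     # mem = 2**7
mem = set_(mem, 5, 42)    # mem = 2**7 * 13**42
print(get(mem, 0), get(mem, 5), get(mem, 3))  # → 7 42 0
```

Unwritten indices read as 0 for free, so the "array" really is unbounded in length; the cost, as the comment says, is entirely in ergonomics and performance, not in computational power.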



