
No, this has nothing to do with ontology vs epistemology. That's a philosophical problem. The problem here is that a measurement is a sample from a random distribution. A TM cannot emulate that. It can compute the distribution, but it cannot take a random sample from it. For that you need a random oracle (https://en.wikipedia.org/wiki/Random_oracle).
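
A rough sketch of that distinction in Python (my own toy names, purely illustrative): a deterministic program can compute the Born-rule distribution for a state, but producing an individual outcome needs an entropy source from outside the computation, here stood in for by Python's random module.

    import random  # stands in for the "random oracle"; a plain TM has no such source

    def born_distribution(amplitudes):
        # Deterministically compute outcome probabilities from complex amplitudes.
        return [abs(a) ** 2 for a in amplitudes]

    def measure_once(amplitudes, rng=random):
        # Draw one outcome -- only possible given an external source of randomness.
        probs = born_distribution(amplitudes)
        return rng.choices(range(len(probs)), weights=probs, k=1)[0]

    # Equal superposition of two states: the distribution (0.5, 0.5) is computable,
    # but which outcome occurs on a given run is not.
    amps = [2 ** -0.5, 2 ** -0.5]
    print(born_distribution(amps))  # ~[0.5, 0.5] -- deterministic
    print(measure_once(amps))       # 0 or 1 -- depends on the oracle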


This is not really about quantum computing. A classical probabilistic Turing machine samples from a random distribution:

"probabilistic Turing machines can be defined as deterministic Turing machines having an additional "write" instruction where the value of the write is uniformly distributed"

I remember that probabilistic Turing machines are not more powerful than deterministic Turing machines, though Wikipedia is more optimistic:

"suggests that randomness may add power."

https://en.wikipedia.org/wiki/Probabilistic_Turing_machine
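
A toy rendering of the quoted definition (Python, my own encoding, not from the Wikipedia article): an ordinary deterministic machine plus one extra write instruction whose written value is uniformly distributed.

    import random

    def run(program, tape):
        # program: sequence of ('WRITE', bit), ('WRITE_RANDOM',), ('MOVE', +1 or -1)
        head = 0
        for op in program:
            if op[0] == 'WRITE':                   # ordinary deterministic write
                tape[head] = op[1]
            elif op[0] == 'WRITE_RANDOM':          # the one probabilistic instruction
                tape[head] = random.randint(0, 1)  # uniformly distributed bit
            elif op[0] == 'MOVE':
                head = (head + op[1]) % len(tape)
        return tape

    # Drop WRITE_RANDOM and this is just a deterministic machine again.
    print(run([('WRITE_RANDOM',), ('MOVE', 1), ('WRITE', 1)], [0, 0, 0]))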


Power is not the point. The point is just that probabilistic TMs (i.e. TMs with a random oracle) are different. For example, the usual proof of the uncomputability of the halting problem does not apply to PTMs. The proof can be generalized to PTMs, but the point is that this generalization is necessary. You can't simply reduce a PTM to a DTM.


The problem is about physics, not Turing machines. You don't need to make a random choice as part of your physical model; the model only makes predictions about the distribution. You can't represent the continuous dynamical manifolds of classical or quantum mechanics on a TM either, but that's ok, because we have discrete models that work well.


I am asking myself:

Does a probabilistic Turing machine need aleatory uncertainty? (I would have called this ontological, but (1) disagrees.)

Epistemic uncertainty would mean here:

We don't know which deterministic Turing machine we are running. Right now, I see no way to use this in algorithms.

(1) https://dictionary.helmholtz-uq.de/content/types_of_uncertai...


The whole point of Turing Machines is to eliminate all of these different kinds of uncertainty. There is in point of actual physical fact no such thing as a Turing Machine. Digital computers are really analog under the hood, but they are constructed in such a way that their behavior corresponds to a deterministic model with extremely high fidelity. It turns out that this deterministic behavior can in turn be tweaked to correspond to the behavior of a wide range of real physical systems. Indeed, there is only one known exception: individual quantum measurements, which are non-deterministic at a very deep fundamental level. And that in turn also turns out to be useful in its own way, which is why quantum computing is a thing.


Right, the point is that we don't need a solution to the 'measurement problem' to have a quantum computer.


Well, yeah, obviously. But my point is that you do need a solution to the measurement problem in order to model measurements in any way other than simply punting and introducing randomness as a postulate.


And is that solution required to be deterministic? If so, that is another postulate.


You have to either postulate randomness or describe how it arises from determinism. I don't see any other logical possibility.

BTW, see this:

https://arxiv.org/abs/quant-ph/9906015

for a valiant effort to extract randomness from determinism, and this:

https://blog.rongarret.info/2019/07/the-trouble-with-many-wo...

for my critique.


> You don't need to make a random choice as part of your physical model

You do if you want to model individual quantum measurements.


Interaction in quantum physics is something that remains abstract at a certain level. So long as conservation principles are satisfied (including probability summing to one), interactions are permitted (i.e., what is permitted is required).


Yes. So? What does that have to do with modeling measurements, i.e. the macroscopic process of humans doing experiments and observing the results?


Would you agree that measurement is considered an interaction?


Sure. So?


Right, I did hijack the thread a bit, but for me, the distribution is more than enough. The rest is just interpretation.


Well, no. The measurements are the things that actually happen, the events that comprise reality. The distribution may be part of the map, but it is definitely not the territory.


Isn't this just circling back to the original ontic vs epistemic though -> map vs territory?


No, because the original map-vs-territory discussion had to do with the wave function:

> Is the wavefunction epistemic or ontological?

https://news.ycombinator.com/item?id=42383854

Now we're talking about measurements which are indisputably a part of the territory.


Technically measurement devices are described by wavefunctions too.


Well, yeah, maybe. There's a reason that the Measurement Problem is called what it is.


I'm replying here since we appear to have reached the end.

Presumably measurement involves interaction with 3 or more degrees of freedom (i.e., an entangled pair of qubits and a measurement device). For most types of interactions (excluding exactly integrable systems for the moment), classical or quantum, we cannot analytically write down the solution. We can approximately solve these systems with computers. All that to say: any solution to any model of an 'individual' measurement will be approximate. (Of course, one of the key uses of quantum computing is improving upon these approximate solutions.) So what type of interaction should you pick to describe your measurement? Well, there is a long list, and we can use a quantum computer to check! I guess part of the point I am trying to make is that when you open the box of a measurement device, you enter the world of many-body physics, where obtaining solutions to the many-body equations of motion IS the problem.
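
A minimal numerical sketch of that point (my own toy example, in Python/NumPy): a single "system" qubit coupled to a one-qubit "pointer", evolved by diagonalizing the Hamiltonian -- the kind of approximate, computational solution described above, exact only up to floating-point error and the crudeness of the model.

    import numpy as np

    # Pauli matrices and a simple system-pointer coupling Hamiltonian
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.kron(sz, sx)  # the system's sz drives the pointer's sx

    # Initial state: system in (|0> + |1>)/sqrt(2), pointer in |0>
    psi0 = np.kron(np.array([1, 1], dtype=complex) / np.sqrt(2),
                   np.array([1, 0], dtype=complex))

    # Numerical time evolution U = exp(-iHt) via eigendecomposition of H
    t = 1.0
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi_t = U @ psi0

    print(np.round(np.abs(psi_t) ** 2, 3))  # joint outcome probabilities after the coupling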


> We can approximately solve these systems with computers.

Yes, but with quantum measurements you cannot even approximate. Your predictions for e.g. a two-state system with equal amplitudes for the two states will be exactly right exactly half of the time, and exactly wrong the other half.
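
To put a number on that (a quick sketch of my own, not anything from the thread): simulate many 50/50 measurements and check both claims -- any fixed single-shot prediction is right about half the time, while the predicted distribution itself is reproduced almost exactly.

    import random

    N = 100_000
    outcomes = [random.randint(0, 1) for _ in range(N)]  # Born-rule sampling, p = 1/2 each

    predicted = 0  # commit to predicting outcome 0 on every run
    hits = sum(1 for o in outcomes if o == predicted)
    print(f"single-shot prediction correct: {hits / N:.3f}")  # ~0.5

    freq = sum(outcomes) / N
    print(f"observed frequency of outcome 1: {freq:.3f}")     # ~0.5, matching the distribution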


I guess I don't have an issue with being wrong if we treat 'measurement' like a black box.


"God does not play dice with the universe" said Einstein.

But he hasn't met my Dungeon Master...



