n^n^n^n^n^n is an upper bound for general n, but the special case of only 2 people can be done with just 1 query.
The algorithm is just to have person A cut the cake into two pieces they value equally and then have person B choose their favorite piece. It takes just 1 query, which is quite a speedup over the upper bound...
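To make that concrete, here's a minimal Python sketch of cut-and-choose for n = 2. The value functions are invented just to have something runnable; they aren't part of any standard model.

    # Minimal sketch of cut-and-choose for n = 2. Each person's preferences are
    # given as a function returning their value for a sub-interval (lo, hi) of
    # the cake [0, 1], normalized so the whole cake is worth 1.
    def cut_and_choose(value_a, value_b):
        # Person A finds a cut point splitting the cake into two pieces
        # they value equally (binary search on A's own valuation).
        lo, hi = 0.0, 1.0
        for _ in range(50):
            mid = (lo + hi) / 2
            if value_a(0.0, mid) < 0.5:
                lo = mid
            else:
                hi = mid
        cut = (lo + hi) / 2

        # Person B takes whichever piece they value more; A gets the other.
        left, right = (0.0, cut), (cut, 1.0)
        if value_b(*left) >= value_b(*right):
            return {"B": left, "A": right}
        return {"B": right, "A": left}

    # Made-up example: A values the cake uniformly, B only cares about the right half.
    uniform = lambda lo, hi: hi - lo
    right_half = lambda lo, hi: max(0.0, min(hi, 1.0) - max(lo, 0.5)) / 0.5
    print(cut_and_choose(uniform, right_half))  # B gets (0.5, 1.0), A gets (0.0, 0.5)

Neither person envies the other: A values their piece at exactly half, and B values theirs at least as much as the piece they passed up.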
Yep, this is it. The key is not that there are two time dimensions but two independent time-translation symmetries, each of which translates the system by a different period corresponding to one of the two frequencies in their pulse. The two time dimensions are an analogy that's useful for the theoretical treatment of such a system.
Looks great. I wish I could break it free from the browser, though. A browser tab can't really be the backdrop to my work, and clicking into a browser tab just to view the visualization is an extra distraction -- defeating the purpose. It doesn't seem realistic, but I wish I could set this as my desktop background without losing the interactivity.
Hey, thank you for the feedback. I absolutely get what you're saying; one solution is 'installing' the website as an app (you should get a prompt in the browser to do so). The plan for the future is to have both a desktop and a mobile app, though, which would definitely be the best solution.
Does anyone know if you can take a window and make it always stay in the background -- the reverse of "always on top", say "always on bottom"? It doesn't seem to be an option on Mac or in GNOME as far as I can tell, but maybe there is some way to gain that functionality.
My understanding is that this project is more of a way for established researchers in a very theoretical field to start thinking about climate while using their existing skill sets to contribute.
In the long term, if we eventually make it to the scenario where we stop putting CO2 into the atmosphere on net, then each kg of CO2 emitted into the atmosphere must be compensated by a kg of CO2 removed by some mechanism or other. The reasonable thing to do in that equilibrium would be to set the price to emit CO2 -- collected by governments as a carbon fee -- equal to the price to capture CO2, paid out by governments as a reward for carbon removal (a negative carbon fee).
Of course, we could just do that now; the trick is how to get to that equilibrium without collapsing the economy in the process.
Is this AI-generated text? It doesn't provide any context a normal human would want. The attempt at providing context feels exactly like an AI searching a database of research summaries for something related.
How do you know the passionate-to-the-point-of-addiction Yang supporters are real?
I think it's likely that many are, but there are also many passionate supporters of the other candidates who just lurk on Twitter without posting, due to social norms about how often you should chime in and how forcefully. The social norms around Yang supporters on Twitter have been changed by bots posting often and loudly, breaking the bubble and giving people permission to do it too. If you then turned the bots off, the YangGang effect would last. (I'm sure this applies not just to Yang...)
I've seen proofs of concept that could be strung together to do that, using a combination of Generative Adversarial Networks, style transfer, and a rule-based semantic reasoner.
Someone should make a Twitter-like service that verifies posters at various levels based on humanness, geographic locale, and other characteristics.
Then posters could limit conversations to those with a >90% human score (with the score going up as the service becomes more certain that you are a human), located in the US (for US political conversations), etc.
I'd certainly pay a few bucks to be verified and contribute much more often to conversations than I do now; at the moment it just feels like screaming into the bot void.
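Purely as a sketch of the gating idea -- nothing here corresponds to any real API, and the fields and thresholds are made up:

    from dataclasses import dataclass

    # Hypothetical sketch: a verified service could expose a human_score and a
    # coarse locale, and a conversation could set a floor on both.
    @dataclass
    class Poster:
        handle: str
        human_score: float  # 0.0-1.0, higher = more certain the account is human
        country: str

    def can_join(poster, min_human_score=0.9, country="US"):
        # Gate a conversation to sufficiently-verified humans in a given locale.
        return poster.human_score > min_human_score and poster.country == country

    print(can_join(Poster("alice", 0.97, "US")))    # True
    print(can_join(Poster("bot4711", 0.12, "US")))  # False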
There seems to be a lot of confusion here about the complexity -- for example, calling the simulation here "in the linear regime" because of IBM's result. That is inaccurate.
A source of this confusion is that we need to discuss space and time complexity simultaneously. In the algorithms for quantum simulation (and many other algorithms), there is a trade-off between space complexity and time complexity. ELI5: you don't have to store intermediate results if you can recompute them when you need them, but you may end up recomputing them a huge number of times.
For the quantum circuit, the standard method of computation has memory complexity exponential in the number of qubits (store 2^N amplitudes for an N-qubit wavefunction) and time complexity D*2^N, i.e. linear in circuit depth and exponential in the number of qubits. For example, under the IBM calculation, 53 qubits at depth 30 use 64 PB of storage and a few days of calculation time, while 54 qubits use 128 PB and a week of calculation time. Adding a qubit doubles the storage requirements AND the time requirements.
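A quick back-of-the-envelope check of those numbers, assuming 8 bytes per amplitude (one single-precision complex number), which is what reproduces the 64 PB figure:

    # Back-of-the-envelope check of the scaling above, assuming 8 bytes per
    # amplitude (a single-precision complex number).
    BYTES_PER_AMPLITUDE = 8

    for n_qubits in (53, 54):
        n_amplitudes = 2 ** n_qubits
        pib = n_amplitudes * BYTES_PER_AMPLITUDE / 2 ** 50  # pebibytes
        print(f"{n_qubits} qubits: {n_amplitudes:.2e} amplitudes, ~{pib:.0f} PiB")
    # 53 qubits -> ~64 PiB, 54 qubits -> ~128 PiB: each extra qubit doubles the
    # storage, and (at fixed depth) the time to sweep over the state doubles too.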
For Google's estimate of the run time, they were using a space-time trade-off. There is a continuous range of space-time trade-offs: USE MAX MEMORY (as IBM does), MAX RECOMPUTATION (store almost no intermediate results; just add each contribution to the final answer and recompute everything), and a range of in-between strategies. While I don't know the precise complexities, the time-heavy strategies will have time complexity exponential in both N and D (roughly 2^(N*D)) and constant space complexity. That's why Google's estimate of the time is so drastically different from IBM's.
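To illustrate the max-recomputation extreme, here is a toy "path sum" that computes one output amplitude of a circuit of single-qubit and CZ gates by recursing over the gates instead of storing a state vector. Memory stays tiny, but the work blows up exponentially with the number of branching gates. This is only a cartoon of the trade-off, not Google's actual estimator.

    import math

    def amplitude(gates, x, t=None):
        # Amplitude <x| G_t ... G_1 |0...0>, computed by recursing over the
        # gates ("path sum"). No state vector is stored, but the work grows
        # exponentially with the number of branching (e.g. Hadamard) gates.
        if t is None:
            t = len(gates)
        if t == 0:
            return 1.0 if all(b == 0 for b in x) else 0.0
        kind, q, *rest = gates[t - 1]
        if kind == "1q":                 # single-qubit gate: 2x2 matrix g on qubit q
            (g,) = rest
            total = 0j
            for b in (0, 1):             # branch over the incoming value of qubit q
                y = list(x)
                y[q] = b
                total += g[x[q]][b] * amplitude(gates, tuple(y), t - 1)
            return total
        if kind == "cz":                 # diagonal two-qubit gate: no branching
            q2 = rest[0]
            phase = -1.0 if x[q] == 1 and x[q2] == 1 else 1.0
            return phase * amplitude(gates, x, t - 1)
        raise ValueError(kind)

    H = [[1 / math.sqrt(2), 1 / math.sqrt(2)],
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]
    circuit = [("1q", 0, H), ("1q", 1, H), ("cz", 0, 1)]
    print(amplitude(circuit, (1, 1)))    # ~ -0.5, the |11> amplitude of this state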
Side note: IBM also uses a non-standard evaluation order for the quantum circuit, which exploits a trade-off between depth and number of qubits. In the regime of a large number of qubits but relatively small depth, you can again classically simulate using an algorithm that scales as N*2^D rather than D*2^N, using a method that turns the quantum circuit on its side and contracts the tensors that way. In the regime where N is comparable to D, the optimal tensor contraction corresponds to neither running the circuit the usual way nor sideways but something in between. None of these tricks fundamentally changes the ultimate exponential scaling, however.
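Plugging numbers into those two scalings (constants and lower-order factors ignored) shows why the regime matters:

    # Rough comparison of the two cost scalings mentioned above, with constants
    # and lower-order factors ignored.
    def usual(n, d):     # run the circuit forward: store the state, apply D layers
        return d * 2 ** n

    def sideways(n, d):  # contract the circuit "on its side"
        return n * 2 ** d

    for n, d in [(53, 30), (30, 53), (40, 40)]:
        print(f"N={n}, D={d}: usual ~ {usual(n, d):.1e}, sideways ~ {sideways(n, d):.1e}")
    # Wide, shallow circuits favor the sideways contraction; deep, narrow ones
    # favor the usual order; in between, a mixed contraction order does best.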
As an extra step, you could also run compression techniques on the tensors (e.g. SVD, throwing away small singular values), turning the space-time trade-off into a three-way space-time-accuracy trade-off. You wouldn't expect too much gain from compression, and your accuracy would quickly go to zero if you tried to do more and more qubits or deeper circuits with constant space and time requirements. However, the _real_ quantum computer (as Google has it now, with no error correction) also has accuracy that goes to zero with larger depth and number of qubits. Thus, one can imagine that the next steps in this battle are as follows: if we say that Google's computer has not at this moment beaten classical computers with 128 PB of memory to work with, then Google will respond with a bigger and more accurate machine that will again claim to beat all classical computers. Then IBM will add in compression for the accuracy trade-off and perhaps will again beat the quantum machine.
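As a toy illustration of that third axis, here's truncated SVD compression on a random matrix -- just to show that keeping fewer singular values costs accuracy; it says nothing about how well real circuit tensors compress:

    import numpy as np

    # Toy illustration of the compression idea: truncate the SVD, keep the k
    # largest singular values, and watch the accuracy cost as k shrinks.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(256, 64)) @ rng.normal(size=(64, 256))  # rank <= 64

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    for k in (64, 32, 16, 8):
        A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
        err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
        print(f"kept {k:2d} singular values: relative error {err:.2e}")
    # Less storage (smaller k) means more error -- the space-time trade-off
    # becomes a space-time-accuracy trade-off.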
So this back and forth can continue for a while - but the classical computers are ultimately fighting a losing battle, and the quantum machine will triumph in the end, as exponentials are a bitch.
No. I think this is where your logic is off: "Furthermore, I think that if I try to read an entangled state, it's as if I am reading the entire state all at once"
You can't read the entire state. A quantum channel where you send n bits lets you read n bits, not a number of bits exponential in n.
There is indeed a reduction of signal as you increase the temperature. Under the threshold theorem for quantum error correction, you can amplify that signal back to full strength as long as you are under a threshold temperature (but with ever-increasing resources as you approach the threshold temperature).
Also this: "This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light". Sure, the heat has to be transferred out of the system, which takes a time bounded below by some distance divided by the speed of light. But the qubits are on a 2D chip surrounded by a 3D refrigerator. The distance doesn't necessarily increase with increasing numbers of qubits.
>You can't read the entire state. A quantum channel where you send n bits lets you read n bits, not a number of bits exponential in n.
1) An entangled state is different. Since I can read all of the bits outside of each other's light cones, they cannot causally influence each other, which is why I said "as if" instead. There is no such thing as "all at once", but I can read each qubit before all of the others, as far as that qubit is concerned, and it would still obey the correlations. The reading of an entangled quantum state is indeed a single measurement.
>Under the threshold theorem for quantum error correction, you can amplify that signal back to full strength as long as you are under a threshold temperature (but with ever-increasing resources as you approach the threshold temperature).
2) I don't think that quantum error correction ultimately solves the problem of heat. It tries to solve the problem of single bit/sign flips from not doing your computation in a closed system. My argument assumes that the only things that exist in the entire universe are the computer and the heat.
"This is going to generate heat, and the rate at which I can dissipate that heat is bounded by the speed of light". Sure, the heat has to be transferred out of the system, which takes time that is bounded by some distance divided by the speed of light. But the qubit is on a 2D chip surrounded by a 3D refrigerator. The distance doesn't necessarily increase with increasing qubits.
3) You're misinterpreting my statement of the problem. I don't have to get each additional qubit just as cold as the original. Every qubit I add requires me to get the entire quantum state to half the temperature that I had previously. The geometry of your refrigerator only matters here in that it's finite-dimensional.