Hacker News | someguyorother's comments

Sure thing, here's your neural VR interface and extremely high fidelity artificial world with as many paperclips as you want. It even has a hyperbolic space mode if you think there are too few paperclips in your field of view.

Users of practical software would probably not accept the program taking forever, so you could impose a runtime constraint. With a runtime constraint, every TM effectively halts, so making nontrivial observations about their behavior becomes computable (Rice's theorem no longer applies).
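As a toy sketch of why the step budget helps (the "program" encoding here is made up for illustration; a real checker would interpret an actual TM encoding):

```python
def halts_within(program, input_value, max_steps):
    """Run an interpreter step-by-step under a hard step budget.

    `program` is a toy state machine: a dict mapping a state name to a
    function computing (next_state, next_value). This stands in for an
    arbitrary TM. With the budget, the question "does it halt?" is
    always answered in at most max_steps steps.
    """
    state, value, steps = "start", input_value, 0
    while state != "halt":
        if steps >= max_steps:
            return False  # budget exhausted: treat as non-halting
        state, value = program[state](value)
        steps += 1
    return True

# A program that halts immediately on even inputs and spins forever on odd ones.
toy = {
    "start": lambda v: ("halt", v) if v % 2 == 0 else ("start", v),
}

print(halts_within(toy, 4, 1000))  # True: halts on the first step
print(halts_within(toy, 3, 1000))  # False: the step budget cuts it off
```

Without `max_steps`, the second call would loop forever; with it, every query terminates, which is the whole point.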

Not that it would be easy.


The dark humor in this is that any such technologically advanced future where humans have a meaningful say will eventually look like abundant luxury communism: it's just that in the oligarchs' version, a lot of people die first before the oligarchs enjoy their abundance.

The third option is that the oligarchy fully internalizes its pursuit of ruthless concentration of power. But in that case, someone will probably create an AI that's better at playing the power game, and at that point, it's over for the oligarchs.


I think you could do most of it as a point-and-click. Perhaps with the exception of that one command (if you know what I mean), because the mere possibility of it would be revealing in a point-and-click interface. But you could do it in a Sierra AGI-style graphical adventure, because that still has a parser.

On topic, I would myself recommend Coloratura - https://ifdb.org/viewgame?id=g0fl99ovcrq2sqzk - for the sense of wonder/unusual protagonist.


Perhaps you could take a hierarchical approach somehow: first generate a "zoomed out" structure, then copy parts of it into an otherwise unspecified picture to fill in the details.

But perhaps plain stable diffusion wouldn't work: you might need a different neural network trained at each "zoom level", because the structure varies across scales. Music generally isn't like a fractal and doesn't have exact self-similarity.
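A toy numpy sketch of the coarse-to-fine idea (the `refine` step and all names are made up for illustration; a real system would use a trained model per zoom level rather than noise):

```python
import numpy as np

rng = np.random.default_rng(0)

def refine(coarse, factor=4, detail=0.1):
    """Upsample a coarse 'structure' and add level-specific detail.

    Stands in for a per-level neural network; here the added 'detail'
    is just noise for illustration.
    """
    fine = np.repeat(coarse, factor)  # zoom in on the coarse structure
    return fine + detail * rng.standard_normal(fine.size)

# Level 0: a 16-sample "zoomed out" sketch of the whole piece.
structure = rng.standard_normal(16)

# Each level fills in detail at a finer time scale, conditioned on the
# level above -- this is where the per-level networks would differ.
for level in range(3):
    structure = refine(structure, factor=4, detail=0.1 / (level + 1))

print(structure.shape)  # (1024,): 16 * 4**3 samples
```

The point of the sketch is the structure of the loop, not the noise: each pass only ever sees its own scale, which is why a single network trained at one scale wouldn't suffice.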


You seem smart. How do I follow you?


> You can do MCMC like AlphaGO and see ten moves ahead.

The existence of adversarial attacks shows that most neural networks have pretty bad worst-case performance. Sticking GPT-3 into alpha-beta or MCTS could therefore just as easily hand you an ungeneralizable optimum, because optimizers are by nature designed to find extreme responses. Call it a Campbell's law for neural nets.

The actual AlphaZero nets are probably more robust because they were themselves trained by MCTS, although they still don't generalize very well out-of-sample: IIRC AlphaZero is not a very strong Fischer Random player.
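A toy illustration of the effect, with no real networks involved: each "move" has a true value plus a fixed evaluation error, standing in for a net's blind spots, and the search picks the move with the highest evaluator score.

```python
import random

random.seed(42)

# Each "move" has a (true_value, evaluation_error) pair. The error stands
# in for a neural evaluator's systematic mistakes on that position.
moves = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10_000)]

# What an optimizer does: maximize the *evaluator's* score, value + error.
best_by_eval = max(moves, key=lambda m: m[0] + m[1])
true_value, error = best_by_eval

print(f"evaluator score: {true_value + error:.2f}")
print(f"true value:      {true_value:.2f}")
# The wider the search, the more it selects for large *errors* rather
# than large true values -- Campbell's law in miniature.
```

With ten thousand candidates, the winner is almost always a move the evaluator overrates, not the genuinely best move; a stronger optimizer only makes the selection-for-error worse.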


In the same way. Most proposed fusion systems use deuterium-tritium fusion, in which a significant fraction of the energy is carried away by neutrons, so direct energy conversion wouldn't be possible anyway.

From the article you referenced:

> ITER will not produce enough heat to produce net electricity and therefore is not equipped with turbines to generate electricity. Instead, the heat produced by the fusion reactions will be vented.

So in a fusion plant, the particle energy would turn into heat (by the particles interacting with matter); that heat would boil water (or some other working fluid), driving a turbine that produces electricity. See also https://en.wikipedia.org/wiki/DEMOnstration_Power_Plant which contains some diagrams showing just how that would be done.

More exotic reactions (e.g. p-B11) have been proposed, in which almost no energy is carried by neutrons. In principle, you could then use electrostatic devices to capture the energy directly, without any of the mess with Carnot efficiency. However, getting p-B11 fusion going is much harder than D-T.
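Back-of-the-envelope: the Carnot limit a thermal fusion plant is up against. The temperatures below are illustrative round numbers, not taken from any specific reactor design.

```python
def carnot_efficiency(t_hot_k, t_cold_k):
    """Carnot limit for a heat engine: eta = 1 - T_cold / T_hot,
    with temperatures in kelvin."""
    return 1 - t_cold_k / t_hot_k

# Illustrative steam-cycle temperatures for a D-T plant's thermal loop.
t_hot = 823.0   # ~550 C coolant outlet
t_cold = 303.0  # ~30 C condenser

eta = carnot_efficiency(t_hot, t_cold)
print(f"Carnot limit: {eta:.0%}")  # ~63%; real cycles land well under this
```

Direct electrostatic capture of charged p-B11 fusion products skips the heat engine entirely, so it isn't bound by this limit at all, which is a big part of its appeal.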



> What you call "american ideas" is the only thing that works in the anonymous environment.

What about BitTorrent or its various file-sharing predecessors? It has no cash, and they had no cash. Or Tor? Exit-node operators don't demand money as compensation for attracting the attention of the authorities.


The current crowning achievement of formal methods is, as I understand it, seL4: a formally verified microkernel of about 8,500 lines of code. There's still a while to go before such methods scale to 100 kLOC, unfortunately.


CompCert is also very impressive. It's not, however, free software / open source (the source is available, though):

https://www.absint.com/compcert/structure.htm

A problem with both seL4 and CompCert is that the code written to express the proofs is huge, much larger than the code that actually does anything. This puts a ceiling on the size of the projects we can verify.

F* is a language that tries to address that by discharging proofs with Z3, an SMT solver; Z3 can't prove everything on its own, but it cuts proof code down by orders of magnitude. They have written a verified cryptography stack and a verified TLS stack, and want to write a whole verified HTTP stack.

https://www.fstar-lang.org/

https://github.com/project-everest/hacl-star

https://www.mitls.org/

https://project-everest.github.io/

F* (through Low*, a verified low-level subset of F*) can extract verified code to C, which is roughly the inverse of the seL4 approach: seL4 starts from C code and enriches it with proofs of correctness, while HACL* (a verified crypto library in F*) starts from proven-correct F* code and extracts C from it. (I gather the actual crypto primitives are compiled directly to assembly, because C has problems with constant-time guarantees.) This lets HACL* offer bindings to any language that can call C, like this Rust binding:

https://github.com/franziskuskiefer/evercrypt-rust

Also, this F* stuff is all free software / open source, so it might become a very prevalent crypto and TLS stack.


There are ways to keep an AI in sealed hardware and make sure it can't affect the world, for instance by using an objective function that deals only with mathematics and not with the real world at all.

E.g. the AI is given a fixed amount of hardware and told to produce an algorithm that solves some NP-complete problem (say integer programming) in expected time as close to polynomial as possible, along with a machine-checkable proof that the algorithm satisfies the claimed near-polynomial complexity bound. Humanity can then solve its NP-complete problems separately once it has the algorithm.
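The asymmetry that makes this safe to audit can be sketched with subset sum (a stand-in for the integer programming example; the function names are made up): finding a solution takes exponential-time search, but checking a claimed solution is cheap, just as checking a machine-checkable proof is cheap compared to discovering it.

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Exponential-time search: the hard side, which we'd want the
    boxed AI's algorithm to beat."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

def verify_certificate(nums, target, certificate):
    """Polynomial-time check: the only side humans ever need to trust."""
    pool = list(nums)
    for x in certificate:
        if x not in pool:
            return False  # claimed element isn't available
        pool.remove(x)
    return sum(certificate) == target

nums = [3, 34, 4, 12, 5, 2]
cert = solve_subset_sum(nums, 9)
print(cert)                               # [4, 5]
print(verify_certificate(nums, 9, cert))  # True: cheap to audit
```

The box's output (algorithm plus proof) plays the role of the certificate here: humanity never has to trust the search process, only run the cheap check.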

This objective function doesn't care about the physical world -- it doesn't even know that a physical world exists -- and so it's about as likely to directly affect the physical world as MCTS or AlphaGo is.

The "AI is going to run out of control" is a very compelling narrative (as everybody who has read the Sorcerer's Apprentice understands). But that doesn't make it true. Beware the availability heuristic.

(Incidentally, I think AI destroying mankind because it's too smart is an unlikely outcome. It's much easier for the AI to subvert the human-designed sensors linked to its objective function; and if the AI is sufficiently smart and the sensors aren't perfect, then it can always do so.)


These counterarguments are only possibly effective because you're imagining some particular kind of AI. When there is a useful AI, of course we will want it to be able to interact with people and have it control physical things in the real world. Just like existing computers do.

