
"A distinction should be made between coming up with a move and understanding the motivation for the move."

Sure, but realizing what the reasoning could be in retrospect (sort of reverse engineering) is by definition easier than coming up with a (paradoxical) move in the first place, so if you can't even do the former...

"polynomial time heuristics such as those that would come from neural networks are much more likely to be human discernible."

AlphaZero's playing style definitely feels more human-like, as it has a "speculative" quality to it.

To quote none other than Kasparov:

"I admit that I was pleased to see that AlphaZero had a dynamic, open style like my own. The conventional wisdom was that machines would approach perfection with endless dry maneuvering, usually leading to drawn games. But in my observation, AlphaZero prioritizes piece activity over material, preferring positions that to my eye looked risky and aggressive. Programs usually reflect priorities and prejudices of programmers, but because AlphaZero programs itself, I would say that its style reflects the truth."

It's probably a bit deceptive: what looks risky to us likely isn't perceived as "risky" by the neural network, which can "feel" the deeper correctness of the decision. Traditional engines, by contrast, owe their conservative cautiousness to their inherent limitations. (We know that now; before AlphaZero it was assumed that this was simply the "right" way to play, just as Kasparov says.)



> but realizing what the reasoning could be in retrospect (sort of reverse engineering)

Not in this situation. While the chess engine performed a network-guided, non-deterministic search and returned a result, its neural network did not. With tools like the ones in the paper, we can now map the network's internal states to something close to human-relatable concepts.
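For context, the "network-guided non-deterministic search" in AlphaZero is a Monte Carlo tree search whose child selection uses the PUCT rule: the network's policy prior steers exploration, its value estimates steer exploitation. Here's a toy sketch of just the selection step; the moves, priors, and statistics are made up for illustration:

```python
import math

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """AlphaZero-style PUCT: average value q plus an exploration
    bonus scaled by the network's prior and visit counts."""
    return q + c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)

# Hypothetical root statistics for three candidate moves.
children = {
    "e4": {"q": 0.10, "prior": 0.50, "visits": 30},
    "d4": {"q": 0.12, "prior": 0.30, "visits": 20},
    "c4": {"q": 0.05, "prior": 0.20, "visits": 5},
}
parent_visits = sum(c["visits"] for c in children.values())

# Pick the child maximizing the PUCT score, as one MCTS iteration would.
best = max(children, key=lambda m: puct_score(
    children[m]["q"], children[m]["prior"],
    parent_visits, children[m]["visits"]))
print(best)  # → "c4": the under-visited move gets the exploration bonus
```

Note how the under-visited move wins the selection despite its lower value estimate; that's the non-deterministic-feeling, prior-driven exploration the comment refers to, distinct from what the raw network "knows".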

Even without such tools, we could observe which lines were productive by analyzing a few ply deep and comparing them with the other moves we could have made. It's closer to something becoming obvious in hindsight: seeing what works, compared with what you had in mind, highlights the flaw in your plan or something you missed, while reinforcing your understanding of the engine's moves. That's a common enough scenario.


I meant "reasoning" as in finding out the objective reason within the realm of chess itself, not how the neural network conducted this reasoning under the hood.



