
Many-core and networked computer systems fight indeterminacy. My networking algorithms would be a lot simpler if I could guarantee that everything operated in lockstep; the fact that I have to discard lockstep to get performance is because lockstep is a high-cost abstraction over a high-entropy (so, basically non-deterministic) underlying reality, not because indeterminacy is somehow inherently better.

It's a concession.



A concession to the physical world?


For networking, yes. For multicore, it's merely a concession to the fact that instructions on my architecture are variable-length, and there's a transparent cache mechanism (requiring knowledge of memory access patterns, which requires knowing the result of the computation ahead of time).


Because arbiters are used in communications among cores, a many-core computer has inherent indeterminacy.
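To make the arbiter point concrete, here is a minimal sketch (a hypothetical Python model, not real hardware): two "cores" contend for a shared resource, with a lock playing the role of the arbiter. The lock guarantees mutual exclusion, but which contender is granted access first is not determined by the program text — that choice is the source of the indeterminacy being discussed.

```python
import threading

def race():
    """Two 'cores' race through an arbiter; the winning order is not fixed."""
    order = []
    lock = threading.Lock()

    def core(core_id):
        with lock:                 # the "arbiter" grants access in some order
            order.append(core_id)  # record who got through first

    threads = [threading.Thread(target=core, args=(i,)) for i in (0, 1)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order  # either [0, 1] or [1, 0], depending on scheduling
```

Running `race()` repeatedly can yield either ordering; both are correct outcomes of the same program, which is exactly the indeterminacy arbiters introduce.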


Not necessarily? If you have a real-time OS and you write your program well, you can synchronise timings and have cores send messages to each other without queues. It's hard to write fast code that does that, but in narrow circumstances, it's possible (I'm thinking embedded applications) – and when it is possible, it's faster than the equivalent algorithm with indeterminacy.
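A rough sketch of the queue-free, synchronised-timing idea (a hypothetical Python model using threads and a barrier, standing in for cores and a hardware synchronisation point): each core writes into a preassigned mailbox slot, all cores meet at a barrier, and only then do they read. With disjoint slots and a global rendezvous there is nothing to arbitrate, so the result is fully determined.

```python
import threading

def lockstep_exchange(n_cores=2):
    """Deterministic, queue-free message exchange among n 'cores'."""
    mailbox = [None] * n_cores   # one preassigned slot per core: no contention
    received = [None] * n_cores
    barrier = threading.Barrier(n_cores)

    def core(i):
        mailbox[i] = f"msg-from-{i}"              # write phase: disjoint slots
        barrier.wait()                            # global synchronisation point
        received[i] = mailbox[(i + 1) % n_cores]  # read phase: fixed pattern

    threads = [threading.Thread(target=core, args=(i,)) for i in range(n_cores)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return received
```

Every run produces the same result, regardless of how the OS schedules the threads — the determinism comes from the fixed slot assignment and the barrier, not from any queue or arbiter.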

Indeterminacy slows things down. It's a concession.


Indeterminacy using many cores speeds up processing.

Enforced determinacy slows down processing.


And natural determinacy, if you can get it, speeds up processing more than indeterminacy. The useful question is “what's the lowest-level model I can usefully use?”, not “can I do it with indeterminacy?”.

Do you think the above statement is wrong? If so, why?


Indeterminacy is crucial for next-generation intelligent systems.

See

https://papers.ssrn.com/abstract=3581859




