Not so fast. You can still get nondeterministic behavior with things like thermal effects (think temperature affecting clock speed, which affects the relative timing between threads running on different processors).
However, from the perspective of the programmer it is non-deterministic. It only becomes deterministic when it is coupled with a particular scheduler and dispatcher. Most programmers strive to write cross-platform code.
That's false. If you're doing it "right" then it will produce the same functional output for all of the possible execution paths. That doesn't make it deterministic.
The programmer is still allowed to specify operations to perform as well as restrictions on the order in which the operations are performed. Not fully specifying what to do just gives the computer more options to pick from -- it will still pick one of those options.
which is the part that gets rid of the "does what you tell it" bit, since now it's doing what it thinks it should, which might not be very consistent or easily determinable because minor variations during a race can produce wildly differing results.
If I tell someone, "bring me a sandwich or a bowl of chili," and the person brings me a bowl of chili, that person has done exactly what I told them to do.
What if they get bored waiting at the counter in the cafe and come back empty-handed? 'Cos that's what can happen as soon as IO is allowed into the picture; particularly so with network IO.
The problem is that combining this with multithreading makes for a combinatorial explosion of possible states that you might have to deal with. It's just not feasible to exhaustively specify everything, and leaving things implicit means that you're expecting the computer to do things you haven't told it to do.
Sure, there is complexity. It's easy to make assumptions or forget real-world details too. But you DID tell it to time out, because that's how the I/O library is specified.
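For instance, here's a minimal sketch of the kind of library-specified time-out I mean, using Python's standard `socket` module (the host and timeout values are just placeholders):

```python
import socket

# The library makes you state (or accept a default for) a time-out:
# the "what if the sandwich never arrives" case is part of what you told it.
try:
    conn = socket.create_connection(("example.com", 80), timeout=5.0)
    conn.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    data = conn.recv(4096)   # raises socket.timeout if nothing arrives in time
    conn.close()
except socket.timeout:
    # The "came back empty-handed" case: explicitly specified, not implied.
    data = None
```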
True, but my point is still valid. When you start a thread, you're saying "do these tasks in whatever order is most convenient for you". And the computer does just that.
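A tiny sketch of that "whatever order is most convenient" point in Python (run it a few times; the interleaving genuinely varies):

```python
import threading

def worker(name):
    # No ordering is specified between the threads, so the scheduler is
    # free to interleave these prints however it finds convenient.
    for i in range(3):
        print(f"{name}: step {i}")

threads = [threading.Thread(target=worker, args=(f"thread-{n}",)) for n in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```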
Philosophically, you can't say either way. For example, consider the case where the kernel scheduler uses random numbers from a hardware random number generator that relies on quantum uncertainty. Since physics doesn't know whether or not quantum mechanics is deterministic, we can't say whether or not this system is deterministic.
Knowing the laws wouldn't necessarily make the system deterministic, though. You'd need hidden variables (and non-local ones, to boot).
As best we can tell, sometimes certain information simply doesn't exist, so you may very well get something truly random that merely averages out into what we think of as deterministic, classical behavior.
I would say there are just so many unknown variables (probably impossible to know), to the extent that people can get away with pretending that it's non-deterministic, when it actually is deterministic.
Which is why there's no way to be certain that it is or is not deterministic. According to current knowledge, quantum uncertainty is not deterministic -- solving the Schrödinger equation for hydrogen, for example, gives no way to predict the future location of the electron. The best we can do is a probability distribution.
If, however, as the parent mentions, hidden variables do exist behind quantum mechanics, it could turn out that it was deterministic all along.
> "Equation X doesn't help us understand phenomena Y"
How does that even remotely suggest non-determinism?
> If, however, as the parent mentions, hidden variables do exist behind quantum mechanics,
Isn't it obvious that we're not even remotely close to knowing everything?
The uncertainty principle simply suggests that we perhaps can never grasp or understand all the details; it doesn't in any way imply that these unknown details don't exist. It'd be foolish to think that "because we can't predict it, it's therefore non-deterministic".
This is a bit of a silly statement, as it's grossly misleading. Firstly, in the very strictest sense it isn't true, given that defects in operating systems and processors, and single-event upsets due to cosmic rays, can introduce some degree of randomness into computer processing. Secondly though, even ignoring all of that, it's not helpful. It's a bit like telling a chemist "remember, the chemicals will always do exactly what you tell them to." Well, sure, that's true in the abstract, but the full complexity of the entire process is greater than any human mind can cope with, so it's necessary to use models and techniques that bring that complexity down to human scale. The same goes for the full breadth of the complexity of programming significant pieces of software today.
The computer will do exactly what you tell it to, but your brain is not capable of determining exactly what the computer will do (it processes billions of instructions a second; if you tried to fully map out the operations of a computer in complete detail, you wouldn't get through even a fraction of a second in your lifetime). Which is why we have to use abstractions, models, and various techniques to render that complexity manageable by human minds.
Downvoted for lacking a sense of humour. Seriously, why did you feel the need to explain at length that computers are deterministic when the OP was clearly making a joke? (I know I'm probably going to get downvoted myself, but I had to vent.)
Having read the post you voted down, I don't think he/she was actually responding to my joke but to the concept I was quoting. He/she seems to be arguing against computers being entirely deterministic at all.
I wish there were a better/more formal way to quote things on HN as it can lead to confusion with attribution. Purely italic text doesn't quite cut it, but nor does the "source code" quoting feature.
Knowledge representation--depends on which kind. The kind that involves logic and resolution turns out to be just another problem that you have to represent as graph search.
On the low, algorithmic level that's often true, but a lot of the difficulty imo is at higher levels of the stack. We're very good at writing SAT-solvers, for example, but directly writing problems as SAT is not a particularly good knowledge representation, especially if you want to build systems that are flexible and can interact with humans.
So a lot of the interesting work (imo) is on non-graph knowledge representations, like answer-set programming, situation calculus, etc. They often ground out in some variety of graph search to do the inference, but that's just the solver algorithm, not where the research that interests me is at. It's like saying that graph search grounds out in x86 asm twiddling bits in a computer, so all AI is just bits in a computer, which is also true but misses the point.
Though it does remind me that there was a comment from someone in the 60s or 70s amounting to, "all AI boils down to heuristic search".
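To make the earlier point about raw SAT encodings concrete, here is a toy sketch in Python; the clause list and the `satisfied` checker are purely illustrative stand-ins, not any real solver's API:

```python
# "This node gets exactly one of three colours", written directly as CNF
# (positive integers are variables, negative integers are negations).
# Even this tiny constraint is already opaque next to a higher-level
# statement of it in, say, answer-set programming.
clauses = [
    [1, 2, 3],    # at least one colour
    [-1, -2],     # at most one: not (red and green)
    [-1, -3],     #              not (red and blue)
    [-2, -3],     #              not (green and blue)
]

def satisfied(clauses, assignment):
    """Check a {variable: bool} assignment against the CNF --
    a stand-in for handing the clauses to an actual SAT solver."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

print(satisfied(clauses, {1: True, 2: False, 3: False}))   # True
print(satisfied(clauses, {1: True, 2: True, 3: False}))    # False
```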
Making it 'raw fast' is what compilers/JITs are for.
You still need to pick the right algorithms, but I've personally been really focusing on elegance. Maintenance and operation of code are the major costs of software, not writing new stuff -- so when you write new stuff, I believe that decreasing those other costs should be the first priority.
I agree completely. A common mistake is thinking that performance can be tacked on later. Thinking about performance up front can inform elegant designs.
Targeted performance improvements tend to make specific operations more efficient. Elegant architectures tend to completely eliminate huge swaths of duplicate and unnecessary operations, and so typically have the opportunity for much greater performance improvements.
I don't find that at all — I find that keeping it elegant keeps it adaptable, applicable in a broad set of circumstances, and understandable. And that these things keep a program from getting slower over time (both pre- and post-deployment) as requirements change.
I also find that a lot of times when programmers look at something elegant and say "that'll be slow", it's because they're wrongly estimating the relative speeds of things. A common example: they're counting function calls when they should be counting I/O operations that are many, many orders of magnitude slower.
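A rough sketch of that misestimation in Python (the file path is a placeholder and the exact numbers will vary by machine; the orders-of-magnitude gap is the point):

```python
import timeit

def tiny_function(x):
    return x + 1

# Cost of a plain function call, averaged over a million calls...
call_time = timeit.timeit("tiny_function(41)", globals=globals(), number=1_000_000)

# ...versus a single round trip to the filesystem per iteration.
def read_once():
    with open("/etc/hostname", "rb") as f:   # placeholder path
        return f.read()

io_time = timeit.timeit(read_once, number=1_000)

print(f"per function call: {call_time / 1_000_000:.2e} s")
print(f"per file read:     {io_time / 1_000:.2e} s")
```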
Closely related to the first Thing of problem reductionism: "All problems in computer science can be solved by another level of indirection... Except for the problem of too many layers of indirection." — David Wheeler
Every problem can be solved by breaking it up into a series of smaller problems.
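A well-worn illustration of that, sketched in Python with merge sort:

```python
def merge_sort(items):
    # The big problem (sort n items) is split into two smaller ones,
    # recursively, until the pieces are trivially solvable...
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    # ...and the sub-solutions are merged back into an answer to the original.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]
```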
A good ballpark, but not entirely true. At some point you'll end up with a smaller problem that cannot be broken up further. For the most part these may be solved problems, like `increment foo`, but at some point you might hit an unsolved atomic-level problem that you either need to spend a lot of time working on, or that forces you to find an alternative path.
When you hit a problem like that, it's time to start going in the other direction. All problems in computer science can be solved by another level of indirection.
You have a program. It has input and output, and is either running or done. With abstraction, you can have a view of the program that is just a bidirectional data pipe. You give it information, you request information back. At some point you're done talking to it and drop the reference. The entire idea of 'halting' is gone.
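A minimal sketch of that pipe-shaped view, assuming a background worker and a pair of queues (the names and the doubling "program" behind the pipe are purely illustrative):

```python
import queue
import threading

class PipeView:
    """View a running computation as a bidirectional data pipe:
    put requests in, take answers out, drop it when you're done."""

    def __init__(self):
        self.requests = queue.Queue()
        self.responses = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            item = self.requests.get()
            # The "program" behind the pipe; here it just echoes doubled input.
            self.responses.put(item * 2)

    def ask(self, value):
        self.requests.put(value)
        return self.responses.get()

pipe = PipeView()
print(pipe.ask(21))   # 42
# When the caller drops the reference, it simply stops talking to it;
# whether the worker "halted" never enters the caller's vocabulary.
```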
No, that doesn't work, because programs in general don't give one output bit per input bit, and they can also take a very long time to come up with their answer.
E.g. input stream: a stream of descriptions of Turing machines and their input tapes. Output stream: a stream of booleans that are true iff the respective machine halts.
I guess I should have addressed that idea specifically. If you transfer total control to a subprogram that directly looks for halting, you've just moved the problem, you haven't really abstracted it.
Apply indirection to anything with unspecified runtime. When you need a result, put a reasonable waiting time on it, and if it doesn't happen assume the source has died and handle it as an error code.
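A hedged sketch of that pattern using Python's `concurrent.futures` (the sentinel value and the default deadline are arbitrary choices):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

TIMED_OUT = object()   # stand-in for an error code

def call_with_deadline(fn, *args, timeout=2.0):
    """Run fn through a layer of indirection; if it doesn't answer within
    the deadline, assume the source has died and return an error value."""
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args)
    try:
        return future.result(timeout=timeout)
    except TimeoutError:
        return TIMED_OUT
    finally:
        # Don't wait around for a possibly-wedged task; just stop accepting work.
        pool.shutdown(wait=False)

# usage (fetch_remote_value is a hypothetical slow call):
# result = call_with_deadline(fetch_remote_value, url, timeout=2.0)
```

The possibly-wedged task still exists somewhere behind the indirection, of course; the caller has simply stopped waiting for it.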
Interesting. So perhaps we're talking about the 'two things' that are specific to software development. Maybe it's worth considering the rest of the different 'two things' that apply more generally as well?
With the obligatory footnote: "Unless you get into multithreading - then all bets are off." ;-)