Intel Skylake sample breaks 6.5GHz mark on LN2 (dvhardware.net)
35 points by DiabloD3 on July 22, 2015 | 23 comments



I really miss the days of increasing clock speeds. I attended a conference on molecular dynamics simulations two weeks ago (FOMMS), and more than one person lamented the fact that long timescale (> millisecond) simulations are going to be difficult to attain now that clock speeds have stalled. GPUs are great for simulating more atoms, but they are in fact worse at performing longer simulations. If no more effort is put into making processors faster, then the only ways forward are to either build special purpose machines (like Anton) or to invent some clever mathematical techniques.


Long-timescale atomic simulation of liquids is a really challenging problem. Most of the well-developed accelerated-dynamics methods only work well for solids, where there is a well-defined timescale separation between atomic lattice vibrations and reactive events. In liquids (e.g. protein folding) the timescale separation is not as clean. There seems to be complexity at every level (vibrations, small rotations, dihedral-angle changes, etc.). Liquid-like systems have a very rough potential energy landscape with an enormous number of shallow minima that must be explored to understand the dynamics.

I look forward to the clever algorithms that will be developed to enable modeling liquid systems on experimental timescales. This has recently become possible in some solid-state systems, but it will take some time and effort before it happens for liquids. Then it will be much easier to do first-principles studies of organic/bio chemistry.


Maybe a stupid question, but don't these long-timescale simulations have to be run multiple times anyway? If so, one could evaluate them all at once: introduce randomness (vibrations, small rotations, dihedral-angle changes, etc.) once an interesting point in time is reached, and then simulate in parallel from there onward.
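
A minimal sketch of that branching idea, in Python. The propagate() and perturb_velocities() functions here are hypothetical stand-ins for a real MD engine, not any particular package's API:

    # Sketch of "branch at an interesting state and run replicas in parallel".
    # propagate() and perturb_velocities() are placeholders for a real MD engine.
    import copy
    import random
    from concurrent.futures import ProcessPoolExecutor

    def perturb_velocities(state, seed):
        """Re-draw velocities (here: add small Gaussian noise) so replicas decorrelate."""
        rng = random.Random(seed)
        state["velocities"] = [v + rng.gauss(0.0, 0.01) for v in state["velocities"]]
        return state

    def propagate(state, n_steps, dt=1e-15):
        """Placeholder integrator: advance positions by velocity * dt each step."""
        for _ in range(n_steps):
            state["positions"] = [x + v * dt
                                  for x, v in zip(state["positions"], state["velocities"])]
        return state

    def branch_and_run(state, n_replicas=8, n_steps=100_000):
        """Clone the current state, re-randomize each copy, and advance copies in parallel."""
        replicas = [perturb_velocities(copy.deepcopy(state), seed=i)
                    for i in range(n_replicas)]
        with ProcessPoolExecutor() as pool:
            return list(pool.map(propagate, replicas, [n_steps] * n_replicas))

    if __name__ == "__main__":
        init = {"positions": [0.0] * 10, "velocities": [0.0] * 10}
        results = branch_and_run(init, n_replicas=4, n_steps=1000)
        print(len(results), "replicas advanced")

This is essentially what the parallel-replica family of accelerated-dynamics methods formalizes; as the comment above notes, it only recovers correct long-time kinetics when rare events are cleanly separated from fast vibrations, which is exactly what rough liquid landscapes lack.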


A bit off topic, but I often wonder what would happen if we developed the technology to simulate, in real time, a mass of about 4 pounds, along with the technology to do a CAT scan (or something similar) with atomic granularity. We could sort of cheat when it came to AI and just take a human who already knows what we are trying to do, scan him, and simulate his brain. Nevertheless, it seems we are very far from that possibility.


It's a very large unknown if a disembodied simulated human brain, even if it was a perfect copy, wouldn't be screamingly insane. All human brains we know of are embodied. Now, if you could also simulate (or at least stub out) the entire body, that might help, but that's a lot more than "four pounds"...


> It's a very large unknown if a disembodied simulated human brain, even if it was a perfect copy, wouldn't be screamingly insane.

What does "disembodied" mean here? Whatever you use to emulate a human brain is going to have a physical substrate and support system (body). It may be differently bodied, but it's not going to be disembodied.

(Certainly we know -- and in some cases even have some understanding of the mechanisms -- that parts of the body besides the brain are relevant in controlling how the brain works, so any cognition simulation intended to approximate human thought is likely to need simulation of those parts as well as the brain in the narrow sense, whether through direct physical analogs or simulations of the effects through alternative means.)


> What does "disembodied" mean here? Whatever you use to emulate a human brain is going to have a physical substrate and support system (body). It may be differently bodied, but it's not going to be disembodied.

If we surgically implanted a human head on a cow it would be "differently bodied" too, but it's obviously not what the parent means.

The casual use of "physical substrate and support system" as if it were equivalent to the very specific "body" the parent talks about (obviously the human body the simulated brain originated in) is bizarro.

A PC box won't cut it as "body".



For reference, here's what the current (CPU-Z validated) OC records look like: http://valid.canardpc.com/records.php (it's pretty safe to assume most of these used LN2 as well)


How, under the current laws of physics, did someone manage to squeeze 8500MHz out of a Celeron D????


It's based on the Pentium 4 core, which was extremely overclockable. Plus it's a single core, which makes it easier to reach a state that's stable enough for a screenshot.


It's too bad they haven't included actual performance comparisons. The Celeron (from '06) running at ~8GHz will most definitely be outperformed by any modern i7 running at stock speeds.
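
As a rough sanity check (the IPC figures below are illustrative assumptions, not benchmark numbers): single-thread throughput is roughly IPC times clock, and a Netburst-era Celeron retires far fewer instructions per clock than a modern wide out-of-order core, so raw GHz alone doesn't settle the comparison.

    # Toy comparison: single-thread throughput ~ IPC * clock.
    # The IPC values are illustrative assumptions, not measured data.
    celeron_d = {"clock_ghz": 8.2, "ipc": 0.7}   # Netburst core at an extreme LN2 overclock
    modern_i7 = {"clock_ghz": 4.0, "ipc": 2.5}   # modern wide out-of-order core at stock

    for name, cpu in (("Celeron D @ 8.2GHz", celeron_d), ("i7 @ 4.0GHz", modern_i7)):
        gips = cpu["clock_ghz"] * cpu["ipc"]
        print(f"{name}: ~{gips:.1f} G-instructions/s (toy model)")

With these assumed numbers the modern core still comes out ahead, but real results depend heavily on the workload and the memory subsystem, which is why actual benchmarks would be more informative than the toy model.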


In what kinds of tasks? Single-thread performance has increased very slowly since the Core 2 Duo era. 8GHz is blazing-fast, even for an "old" processor.

e.g. http://www.anandtech.com/bench/CPU/1335



> The CPU was overclocked from its default clockspeed of 4000MHz to 6531MHz without deactivating any cores or the Intel Hyper-Threading feature. The voltage was increased to 2.032V and the chip was chilled using LN2 cooling.

It's even crazier that they managed to do this without disabling any of the cores. I wonder if the OS parked any of them. The blurb doesn't mention whether they put the CPU under any load, or whether the 6531MHz was achieved without load.

Also, liquid nitrogen? Is this common in the overclocking community now?


In the extreme-overclocking community yeah. (There are tournaments for this sort of thing afaik)


An AMD CPU broke 8GHz with liquid helium! http://hothardware.com/news/amd-breaks-frequency-record-with...


Yep. Liquid Nitrogen is the go-to coolant for overclocking records/competition.


> Also, liquid nitrogen? Is this common in the overclocking community now?

Oh yes. https://www.youtube.com/watch?v=WZr0W_g0dqk [2003]


I'm not a bare-metal guy, so excuse my ignorance: what is the bottleneck for higher-speed processors? Is it simply cooling?


It is a combination of things, of which cooling is one. A more fundamental and challenging physical limit is the increased signal loss at higher frequencies (which is why the chip voltage needs to be raised). It gets to the point where too much signal is lost across "long" lines between parts of the chip. This is why plasmonics in ICs is an active field of research; the hope is to use optics for long-distance interconnects inside the processor.
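
To give a sense of scale (a back-of-envelope sketch, not a figure from the parent comment): even a signal travelling at the speed of light only covers a few centimeters per clock period at these frequencies, and real RC-limited on-chip wires are much slower than that.

    # Back-of-envelope: distance an ideal (speed-of-light) signal covers per clock period.
    C_MM_PER_S = 3.0e11                          # speed of light, mm/s (rounded)
    for f_ghz in (4.0, 6.5, 8.5):
        period_ps = 1e3 / f_ghz                  # clock period in picoseconds
        reach_mm = C_MM_PER_S / (f_ghz * 1e9)    # upper bound on travel per cycle
        print(f"{f_ghz} GHz: {period_ps:.0f} ps per cycle, "
              f"at most {reach_mm:.0f} mm of travel even at light speed")

Since on-chip wires propagate signals at only a fraction of that bound, the usable reach per cycle shrinks toward die-size scales, which is what the "long lines" point above is about.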


The fact that we're at a point where millimeter distances from one side of a CPU die to the other are starting to be considered "long-distance communication" is bemusing and awesome at the same time.


In order to get a higher switching speed out of a transistor you need to increase the voltage. Dynamic power in CMOS scales roughly as P ~ C * V^2 * f, so power usage grows with the square of the applied voltage (and linearly with the clock). That means you need excessive amounts of cooling (e.g. LN2) to keep your CPU from dying at core voltages like the 2V used here; at factory specs, processors usually run at around 1.2V.
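
Plugging in the numbers from the article quote above, with the ~1.2V stock figure from this comment (the capacitance term cancels in the ratio, and leakage is ignored):

    # Dynamic CMOS power scales roughly as P ~ C * V^2 * f; taking the ratio
    # cancels C, so only the voltages and clocks quoted in the thread are needed.
    v_stock, f_stock = 1.2, 4.000e9      # ~1.2 V at the stock 4000 MHz
    v_oc,    f_oc    = 2.032, 6.531e9    # 2.032 V at the 6531 MHz LN2 run

    ratio = (v_oc / v_stock) ** 2 * (f_oc / f_stock)
    print(f"dynamic power roughly {ratio:.1f}x stock")   # ~4.7x, ignoring leakage

So even before leakage (which also rises steeply with voltage and temperature), the chip is being asked to shed several times its design power, which is why exotic cooling becomes mandatory.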



