throwaway_yy2Di's comments

Apropos: for anyone who wants to watch the live moves with expert commentary, they're here (this is the defendant from the Reuters story):

https://chess24.com/en/read/news/chess24-win-moscow-case-ann...

https://www.youtube.com/channel/UCkTCNuQ2mGfW6-SpHpaze_g (direct link to livestream)

If you're looking for live computer engine lines, Steinar Gunderson offers that here, with 38 cores running Stockfish:

http://analysis.sesse.net/

As well as PGN files (live-updated):

http://pgn.sesse.net/


Regarding "38 cores": can anyone comment on whether that's sufficient to perfectly analyse the game in near-real-time, or is it just good enough to provide interesting analysis? I.e. would Stockfish running on 38 cores be competitive with / better than World Championship players?


Modern chess programs are vastly superior to human players, regardless of whether they run on one core or 38. Stockfish has an Elo rating of ~3350 (probably a little higher on 38 cores, not sure); Carlsen is at 2850. That difference corresponds to a ~95% expected score per game (more precisely: Stockfish would be expected to score about 95% of the available points over many games).
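
As a rough sanity check on that figure (using the quoted ratings, which come from different rating pools and aren't strictly comparable), the standard Elo expected-score formula gives about 95%:

    # Standard Elo expected-score formula; the ratings are the rough
    # figures quoted above (Stockfish ~3350, Carlsen 2850), not official
    # head-to-head numbers.
    def expected_score(r_a, r_b):
        """Expected fraction of points for player A against player B."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    print(expected_score(3350, 2850))   # ~0.95, i.e. ~95% of the points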

It's hard to say whether that's close to "perfect analysis", because we don't know what a perfect game would look like (chess is not a solved game). But compared to human play, one could call it close to perfect.


I'm not saying Stockfish isn't stronger than any human player (even the world champion), but you can't compare ratings from two different rating systems like that. Magnus has never played a FIDE-rated game against Stockfish, nor has Stockfish played any rated games against a human opponent.


Even mobile phones have been at World Championship level for years, but there's no way to know how close to perfection we are. Top computers still lose games to other top computers.


It's easy to tell that computer engines aren't analyzing perfectly: when we set them to play against themselves, they lose a large fraction of their games with both colors. A perfect player would force at least a draw with at least one of the colors.

There's no known tractable way to solve chess. There are something like 10^120 possible move sequences [0], and no known way to find perfect play without brute-forcing (almost) all of them. Chess engines can't search to the end of the game to see which moves are certain to win; they can only explore to a comparatively shallow depth, and evaluate the horizon nodes with very human-like [1] approximate heuristics.
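
(For what it's worth, the 10^120 figure in [0] is Shannon's back-of-the-envelope estimate: roughly 10^3 possibilities per pair of moves, over a typical game of about 40 move pairs.)

    # Shannon's estimate of the game-tree size: ~10^3 options per pair of
    # moves (one White move + one Black reply), over ~40 such pairs.
    print((10 ** 3) ** 40)   # 10^120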

It looks perfect from a human PoV (the best human players have no chance of winning); but there's still an unimaginably large gulf between chess engines and mathematically perfect chess.

[0] https://en.wikipedia.org/wiki/Shannon_number

[1] https://github.com/official-stockfish/Stockfish/blob/master/...


Also useful, the highlights from the first game: https://www.youtube.com/watch?v=-Y_h3eZQ-j0


In the past Daniel King and Christof Sielecki also each did really nice highlight videos, so I'm wondering if this'll discourage them this time.


Not my workaround:

http://spectrum.ieee.org/aerospace/space-flight/titan-callin...

http://descanso.jpl.nasa.gov/seminars/abstracts/viewgraphs/H...

This was an extremely serious bug in NASA/ESA's Cassini-Huygens probe, in the S-band link between Huygens (landing on Saturn's moon Titan) and Cassini (acting as radio relay).

It was a timing bug. There'd be a very high relative velocity between Cassini and Huygens, creating a significant fractional Doppler shift (~2e-5) in the link. This shifted the frequency of the 2 GHz carrier (by 38 kHz). Likewise, it shifted the symbol rate of the 16 kbps bit stream (by 0.3 bps). The second effect was overlooked. On the demodulating end (Cassini), the bit synchronizer expected the nominal bit rate, not the Doppler-shifted one; since its bandwidth was narrower than the 0.3 bps shift, it was unable to recognize frame syncs. This was confirmed in post-launch tests. The parameter that set the bit rate was stored in non-modifiable firmware.
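
A quick sanity check on those numbers (a sketch: the ~5.7 km/s radial velocity is back-derived from the quoted 38 kHz shift, and relativistic corrections are ignored). The same fractional shift v/c applies to the carrier and to the bit rate:

    # The same fractional Doppler shift v/c applies to the carrier
    # frequency and to the symbol rate of the bit stream.
    c = 299_792_458.0        # speed of light, m/s
    v = 5_700.0              # assumed radial velocity, m/s (~20,500 km/h)
    frac = v / c             # fractional shift, ~1.9e-5

    carrier_hz = 2.0e9       # ~2 GHz S-band carrier
    bitrate_bps = 16_000.0   # ~16 kbps downlink

    print(frac * carrier_hz)    # ~38 kHz carrier shift
    print(frac * bitrate_bps)   # ~0.3 bps shift in the bit rate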

As launched, Huygens would have been unable to return any instrument data. For some context, it is the only probe ever to land on Titan, and it cost about $400 million.

The workaround

[spoiler]

The workaround was a major change to the orbital trajectory of Cassini (a $3 billion probe). Details aside, it set up a geometry with this feature: while Huygens was descending through Titan's atmosphere, Cassini would be flying at roughly a 90° angle to the line between the two spacecraft. The relative velocity was still ~20,000 km/h, but tangential velocity doesn't contribute to Doppler shift.
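
A sketch of why the new geometry helps (illustrative numbers only): only the radial component of the relative velocity, v·cos(θ), produces a Doppler shift, so flying nearly perpendicular to the Huygens line of sight collapses the bit-rate error even though the speed is unchanged.

    import math

    # Only the line-of-sight (radial) component of the relative velocity
    # Doppler-shifts the signal; near 90 degrees it almost vanishes.
    c = 299_792_458.0
    v = 20_000.0 / 3.6        # ~20,000 km/h in m/s
    bitrate = 16_000.0        # bps

    for angle_deg in (0, 60, 85, 90):
        v_radial = v * math.cos(math.radians(angle_deg))
        print(angle_deg, bitrate * v_radial / c)   # bit-rate shift, bps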


That's a truly epic workaround!


Do they always use a star tracker when making this kind of trajectory change?


Apropos, the Gaia space telescope's first data release is next week. It's an extremely large dataset of the kinematics (3D positions + velocities) of Milky Way stars.

http://www.cosmos.esa.int/web/gaia/dr1


    ...tidally locked, possibly losing their water and
    atmosphere in tens or hundreds of millions of years.
How would this happen?


To piggyback on this, the Nature paper (which is paywalled) has a free preprint version here:

http://www.eso.org/public/archives/releases/sciencepapers/es...

(PDF)


    However, there is reason to hope that the even larger
    European Extremely Large Telescope will have enough
    resolution (about 5e-8 radians? hard to tell from their
    official publications)
6-12 mas is the advertised figure (0.006" = 3e-8 rad). That's the FWHM for its adaptive-optics imaging camera [0]. If you look at the details [1], it achieves the best resolution (6 mas) in the near-infrared J band, and for Nyquist-sampling reasons the pixel scale is half that (3 mas).
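
For reference, the unit conversion behind "0.006" = 3e-8 rad" (1 mas = 10^-3 arcsec):

    import math

    # Milliarcseconds to radians: 1 mas = 1e-3 arcsec,
    # and 1 arcsec = pi / (180 * 3600) rad.
    def mas_to_rad(mas):
        return mas * 1e-3 * math.pi / (180.0 * 3600.0)

    print(mas_to_rad(6))    # ~2.9e-8 rad
    print(mas_to_rad(12))   # ~5.8e-8 rad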

[0] https://www.eso.org/public/usa/teles-instr/e-elt/e-elt-instr...

[1] https://www.eso.org/sci/facilities/eelt/fp7-elt-pub/wfi_work...


Yes, because the lifespan of dim, low-mass red dwarfs is vastly longer than the Sun's. The wiki page cites a figure of 4 trillion years (!) for Proxima:

https://en.wikipedia.org/wiki/Proxima_Centauri


    The planet is about 0.05 AU from Proxima Centauri, meaning
    we need an angular resolution of about 1.9e-7 radians to
    even distinguish it from its host star. Is that realistic?
Much more than that; that's the angle for HALF-maximum brightness, but since the star is many orders of magnitude brighter than the planet, you'd need a much larger reduction than 1/2. Unfortunately, the diffraction-limited pattern [0] has fat tails: it's not Gaussian, and the brightness drops off slowly away from the center (polynomially slowly? [1]). I understand you'd need >100 times the FWHM angle in practice, on the order of 1" for JWST for instance [2].

This is why coronagraphs will be so useful.

[0] https://en.wikipedia.org/wiki/Airy_disk#Mathematical_details

[1] a log-log graph shows the envelope is close to inverse-cubic (x^-3)

[2] http://nexsci.caltech.edu/workshop/2016/NIRCam_Planets_and_B...
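
To illustrate footnote [1], here's a quick check of the inverse-cubic envelope using the ideal Airy intensity pattern, I(x) = (2·J1(x)/x)^2 (a sketch; JWST's real point-spread function is messier):

    import numpy as np
    from scipy.special import j1

    # Ideal Airy intensity pattern; for large x, J1(x) ~ sqrt(2/(pi*x)),
    # so the ring maxima fall off roughly as 8/(pi*x**3), i.e. x**-3.
    x = np.array([10.0, 30.0, 100.0, 300.0])
    airy = (2.0 * j1(x) / x) ** 2
    envelope = 8.0 / (np.pi * x ** 3)
    print(airy)       # oscillates underneath the envelope
    print(envelope)   # the ~x**-3 upper bound on the rings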


Technically, 1.22λ/D is the angle for the first dark circular ring of the Airy disc (first zero of the relevant first-order Bessel function [0]). But you are still right that the host star needs to be blocked out in some way to produce a useful image. I think NASA is working on some ways to do this, see [1].

[0] https://oeis.org/A245461

[1] https://exoplanets.nasa.gov/resources/1015/
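
And the 1.22 in 1.22λ/D is just that first zero of J1 (≈3.8317) divided by π; e.g. with scipy:

    import math
    from scipy.special import jn_zeros

    # First positive zero of the Bessel function J1 is ~3.8317;
    # dividing by pi gives the 1.22 in theta = 1.22 * lambda / D.
    first_zero = jn_zeros(1, 1)[0]
    print(first_zero)             # ~3.8317
    print(first_zero / math.pi)   # ~1.2197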


You're exactly right; good catch.


Here's a better answer on /r/askscience:

    Can we directly image the planet from earth?

    1. "The planet/star contrast is 10^-7 " This basically means
    for every 10,000,000 photons from the star, we would measure
    ~ one from the planet.
    
    2. "Current instrumentation using adaptive optics and
    coronography on 10 m class telescopes (like Sphere on VLT or
    Gemini Planetary Imager) aims at achieving a contrast of
    10^-6 to 10^-7 at an angular resolution of 100-200 mas"
    
    3. "The planet has a separation of 38 mas".
    
    4. Therefore with the best planet imagers we cannot
    currently directly image the planet. Our best hope is the
    E-ELT which should have first light in 2024.
https://www.reddit.com/r/askscience/comments/4zdkra/askscien...


I don't think JWST could even resolve it from its parent star, judging by [0]. They're too close together.

Proxima Centauri has a luminosity of 0.0017 suns [1], so Earth-like conditions would occur at ~sqrt(0.0017) au = 0.04 au, an apparent angular separation of arcsin(0.04 au / 4.2 light years) = 0.03". JWST wants star-planet separations at least an order of magnitude larger [0], ideally with very hot, infrared-bright planets.
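
Working that arithmetic through (a rough sketch with the values quoted above; "Earth-like conditions" here just means Earth-like stellar flux, so the distance scales as the square root of the luminosity):

    import math

    L = 0.0017                          # Proxima's luminosity, in Suns
    a_au = math.sqrt(L)                 # distance for Earth-like flux, AU

    AU_M = 1.495978707e11               # metres per AU
    LY_M = 9.4607e15                    # metres per light year
    d_m = 4.24 * LY_M                   # distance to Proxima, ~4.2 ly

    theta = math.asin(a_au * AU_M / d_m)          # radians
    print(a_au)                                   # ~0.04 AU
    print(theta * (180.0 / math.pi) * 3600.0)     # ~0.03 arcsec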

E-ELT should be able to resolve them, according to [2]. Page 7 gives values for a very similar scenario: an Earth-like planet around an M dwarf (like Proxima), 6 pc away (5 times farther), at an angular separation of 0.015" (half as wide). In this scenario, E-ELT could image the star and planet as separate points, and take useful spectroscopic measurements of the planet's light. For example, it could detect a spectral line of oxygen (O2) in 4 hours of exposure time.

[0] http://nexsci.caltech.edu/workshop/2016/NIRCam_Planets_and_B...

[1] https://en.wikipedia.org/wiki/Proxima_Centauri

[2] https://www.eso.org/sci/meetings/2014/exoelt2014/presentatio...


    can it be easily ruled out?
Yes: parallax breaks it. If something's collinear with a star today (and occults it), it won't be collinear 6 months later, when the Earth is on the other side of the Sun.

At 2 light years, for instance, that spacecraft would trace an apparent ellipse 3.3" in diameter, while Tabby's Star (1,480 ly away) would be essentially stationary (0.004" parallax). For comparison, the star's apparent disk is just 30 μas across (0.00003").
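
The numbers fall out of the usual parallax relation: an object d parsecs away traces an ellipse with semi-major axis 1/d arcseconds, so the full width is 2/d. A quick check (distances as above):

    # Annual parallax: semi-major axis of the apparent ellipse is
    # (1 / distance in parsecs) arcseconds; full width is twice that.
    LY_PER_PC = 3.2616

    def ellipse_width_arcsec(d_ly):
        return 2.0 / (d_ly / LY_PER_PC)

    print(ellipse_width_arcsec(2))      # ~3.3" for an object 2 ly away
    print(ellipse_width_arcsec(1480))   # ~0.004" for Tabby's Star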


If I'm doing that math right, it works out that to avoid being ruled out by parallax, the object would have to be almost 2 AU in diameter if it were within a few light years of Earth. That seems pretty ridiculous: now we have them not only building a megastructure, but flying it to Earth!
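
A sketch of that estimate (assuming the occulter sits a few light years away, far closer than the 1,480 ly star): as Earth moves ~2 AU across its orbit, the Earth-star sight line sweeps a region almost 2 AU wide at the occulter's distance, so a single object would have to span roughly that.

    # Width an occulter at distance d must span to stay on the Earth-star
    # sight line all year: ~2 AU * (1 - d/D), which is ~2 AU when d << D.
    def required_width_au(d_ly, star_ly=1480.0):
        return 2.0 * (1.0 - d_ly / star_ly)

    print(required_width_au(2))   # ~2.0 AU
    print(required_width_au(4))   # ~1.99 AU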

However, I wonder if there is a way to save the idea without requiring a ridiculously large spacecraft. Suppose that instead of one ginormous ship, it's a fleet of big ships, each large enough to cause irregular dimming when it happens to pass between Earth and KIC 8462852. If the fleet is spread out, one ship or another could end up between Earth and KIC 8462852 several times during one Earth orbit around the Sun.

That would give us the irregular dimming throughout the year.

For the slow continuous dimming, perhaps that is something going on with the star itself. That could be what made the inhabitants of the system decide to build a fleet of big ships and leave their weirdly behaving and very scary star.


Maybe the visible size of the ship is bigger because it's unfolding/building a solar sail to brake against our Sun's radiation (like a parachute on a drag racer)? Ah, but 2 AU is probably still way too big?


What if the parachute were a big, magnetically contained gas cloud?


Excellent point... that answers my question. Thanks!

