Hacker News | andromaton's comments

A comment from someone who knows or knew the author, or was part of the project, sharing details that make readers feel like they've just been handed backstage passes.

A highly voted comment that seems insightful if you don't know the domain but amateurish if you do.

A comment noting the article is from YYYY and asking if the mods can add (YYYY) to the title.

A comment explaining that this has long been a solved problem in some other language or domain.

A commenter pointing out that the article is implicitly US-centric.

A tedious response highlighting that YC is a US investment group and that US-centric articles should be expected.

A comment noting that the submitter has posted this exact link multiple times in the last six months.

A post by dang about related posts:

A Technical Blog Post by a Big Name Expert - https://news.ycombinator.com/item?id=5326511 - March 2013 (189 comments)

A compelling title that is cryptic enough to get you to take action on it - https://news.ycombinator.com/item?id=43219556 - Feb 2025 (112 comments)

A Hacker News thread where every comment describes itself - https://news.ycombinator.com/item?id=38451203 - Nov 2023 (74 comments)

A request for others to add to the list.


A post thanking dang for his tireless moderation work.

A post agreeing, adding a personal anecdote about a gentle nudge received years ago that the commenter still thinks about.

A comment trying to get a response from dang regarding the recent viral Sam Altman thread.

A passive-aggressive reminder that the link was already posted earlier in the thread, implying you should have credited it:

https://news.ycombinator.com/item?id=47721995


Indeed, some examples:

https://news.ycombinator.com/item?id=12340348 Neural network spotted deep inside Samsung's Galaxy S7 silicon brain (2016)

https://ieeexplore.ieee.org/document/831066 Towards a high performance neural branch predictor (1999)


I'm borderline shocked that all of this extra overhead is somehow more efficient than something as simple as computing both branches.


The required computing resources double at every branch where you take both paths. If you speculate ahead by 100+ instructions, with, say, up to 20 unresolved branches in flight, that's on the order of 2^20 (about a million) paths, which gets way out of hand.

I could see CPUs sometimes taking both paths for close, hard to predict branches. Does anyone have information on that?


I believe the way things are currently trending is that architectures might turn some short, hard-to-predict branches into predicated instructions instead (similar to x86 CMOV or ARM's conditional execution). Outside of short branches, the overhead of fetching and executing up to two instructions for every one whose result is kept can be too costly. Predication is already how things work for SIMD/SIMT execution on GPUs and for AVX-512 masked operations, from my understanding.


Tbh I'm surprised there's so much guidance.


Elijah Sandiford of Linus Tech Tips asking Linus Torvalds what happens to Linux if Linus dies. Recorded around September 2025, aired in November; the plan was made at the 2025 Maintainers Summit. Great episode: https://www.youtube.com/watch?v=mfv0V1SxbNA&t=1860s

