Hacker News | morelandjs's comments

Being fully leveraged is great for capitalism and horrible for your mental health and general well-being. Ever lived under crushing debt? It's awful. Step back for a minute and look at the big picture: people are taking on debt to eat lunch. That's insane. The author acts like it's zero-sum, and that if BNPL isn't the one offering the short-term loan, someone else will at worse terms. That's just not true, and it hand-waves over the fact that uneducated people are unwittingly being coaxed into making bad decisions by glittery UX and one-click checkout.


Would honestly prefer this to an overly burdensome interview process.


How do you avoid fingerprinting? In general, and is there anything for Safari specifically?


You can't avoid fingerprinting. To draw a real-life comparison: even having no finger at all is an identifiable characteristic in itself.

The only thing you can do is minimize and standardize the set of identifiable characteristics you share, like the Tor Browser does.

https://tb-manual.torproject.org/anti-fingerprinting/


The smartest people I’ve ever worked with to date were from physics grad school. I still remember the time a coworker of mine was profiling code, decided the exponential function from the standard library was too slow, and wrote a Taylor series approximation that gave him the precision he needed and cut the run time in half. He also learned C++ in a weekend and was vastly better at it by the end of that weekend than most people I’ve met in industry. And these were just everyday occurrences that made it a thrill to go to work. Working with talented people is a drug.

Some tips for younger people considering it: get involved in undergraduate research, apply to fellowships, shop for an advisor with a good reputation, start anticipating and preparing for an industry transition early, travel, date, and enjoy life!


I don't want to take away from his brilliance, but Taylor approximations generally perform far worse than the standard library implementations. They're also the first tool of choice for physicists, so who knows...?

My guess, though, is that if he improved the performance, he used some other wizardry (Chebyshev or something similar).


Sometimes what you need is less precision, much faster. Carmack's famous inverse square root falls into this category.

If anything it's a lesson that the definition of brilliance is being in the wrong place at the wrong time... ;-)
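For reference, a sketch of the widely circulated version of that routine (discussed at the Wikipedia link further down the thread); the function name is mine, and the original pointer-aliasing cast is replaced with memcpy to keep it well-defined C++:

    #include <cstdint>
    #include <cstring>

    // Fast inverse square root: reinterpret the float's bits as an integer,
    // subtract from a magic constant to get a rough initial guess, then
    // refine with one Newton-Raphson step.
    float fast_rsqrt(float x) {
        std::uint32_t i;
        std::memcpy(&i, &x, sizeof i);        // bit pattern of x
        i = 0x5f3759df - (i >> 1);            // magic initial guess
        float y;
        std::memcpy(&y, &i, sizeof y);
        y = y * (1.5f - 0.5f * x * y * y);    // one Newton-Raphson iteration
        return y;                             // well under 1% relative error
    }

On modern CPUs the hardware reciprocal-square-root instructions generally make this trick unnecessary, as a later comment notes.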


Carmack denied writing it, and if WP is to be believed, he didn't.

https://en.wikipedia.org/wiki/Fast_inverse_square_root


I think Carmack credits someone else as the origin - possibly some magazine entry.

These days I think the reciprocal square root intrinsic is the fastest where precision is not that important.

I think there was a bit-twiddling hack for popcount which was consistently faster than the equivalent CPU intrinsic due to some weird pipelining effect, so sometimes it is possible to beat the compilers and intrinsics with clever hacks.
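For illustration, the classic SWAR bit-twiddling popcount next to the GCC/Clang builtin; the function names are just for this example, and whether one actually beats the other depends on the target CPU and compiler flags, so treat any speed claim as something to benchmark yourself:

    #include <cstdint>

    // Classic SWAR population count: sum bits in pairs, then nibbles,
    // then use a multiply to add up the per-byte counts.
    std::uint32_t popcount_swar(std::uint32_t v) {
        v = v - ((v >> 1) & 0x55555555u);
        v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);
        v = (v + (v >> 4)) & 0x0f0f0f0fu;
        return (v * 0x01010101u) >> 24;
    }

    // Compiler builtin; typically lowers to a single popcnt instruction
    // when the target supports it (e.g. compile with -mpopcnt on x86).
    std::uint32_t popcount_builtin(std::uint32_t v) {
        return static_cast<std::uint32_t>(__builtin_popcount(v));
    }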


https://github.com/Duke-QCD/trento/blob/master/src/fast_exp....

Check it out for yourself! I’m not claiming this was some kind of prodigious programming move, just something memorable that stuck with me.


Looks like he’s using a lookup table based on std::exp in combination with a Taylor expansion.
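A minimal sketch of that general approach, i.e. a table of precomputed exponentials on a uniform grid plus a first-order Taylor correction about the nearest grid point; the class name, grid bounds, and sizes here are arbitrary choices for illustration, and the linked implementation may differ in its details:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Table-based exp: precompute exp on a uniform grid over [xmin, xmax],
    // then correct with a first-order Taylor term about the nearest point:
    //   exp(x) ~= exp(x0) * (1 + (x - x0))  for x close to a tabulated x0.
    class FastExp {
    public:
        FastExp(double xmin, double xmax, std::size_t n)
            : xmin_(xmin), dx_((xmax - xmin) / (n - 1)), table_(n) {
            for (std::size_t i = 0; i < n; ++i)
                table_[i] = std::exp(xmin_ + i * dx_);
        }

        // Caller must keep x inside [xmin, xmax]; no bounds checking here.
        double operator()(double x) const {
            std::size_t i = static_cast<std::size_t>((x - xmin_) / dx_ + 0.5);
            double x0 = xmin_ + i * dx_;
            return table_[i] * (1.0 + (x - x0));
        }

    private:
        double xmin_, dx_;
        std::vector<double> table_;
    };

The payoff is that the hot loop trades a transcendental function call for a table lookup, one multiply, and one add, at the cost of a small, controllable relative error (roughly (dx/2)^2 / 2).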


Honestly the whole story sounds like a tall tale to me.

> He also learned C++ in a weekend and was vastly better at it by the end of that weekend than most people I’ve met in industry

I doubt this. Really, really doubt this. Sure, geniuses exist, but I don't buy it.


If he already knew how to code in other object oriented languages, and was really just learning C++ syntax over the weekend, it’s not as much of a stretch.


C++ is one of the most flexible and unopinionated languages you could ever encounter.

The idea that someone who knows a high-level object-oriented language could translate that to immediate success in low-level C++ syntax at a level higher than the experts that developed the libraries over a weekend is frankly fantastical.


> the experts that developed the libraries over a weekend is frankly fantastical.

this is not synonymous with "most [C++ programmers] in industry"

The claim was that the person learned it better than most people in industry, not better than the people writing the libraries upon which the industry is based.

EDIT: Also, we don't technically know when this happened. If this story is from the 1990s, it's a lot more likely; think of how many shitty C++ programmers there were back then, since we didn't have all the language options we do now. It was still the language taught in schools, for example. Then it was Java and Python and JS, etc. But back then, Jonny Mackintosh was writing bad C++ straight out of uni.


Also it's constrained to "most people I've met in industry". If OP doesn't work with C++ developers...


It's more likely that he was a decent C developer and learnt the basics of C++ and then ignored most of it.

C++ --


Probably why there is so much garbage C++ code. Someone needs to set the right way of doing things.


Having seen physicists code, I REALLY doubt this.


Bah! Let's go invert the matrix!


The standard library implementations use Taylor approximations


The smartest people I’ve ever worked with were college dropouts.


The next generation will probably leverage AI summarization extensively and effectively. They'll have better tools for efficiently consuming information than lawyers had in the past.


Perhaps. But what's it like now, when AIs are notorious for hallucinating on legal subjects?

(And obviously, if the AIs really are competent on the hard parts of legal work, wouldn't firing 99% of the lawyers be the reasonable strategy?)


I’d like to defend the opposite perspective. Kids today are connected to a firehose of information-dense content distributed over the web and through cell phones. They’re consuming more information than any generation in the past, both garbage and high-value content.

There is an undeniable “holier than thou” attitude pushed by avid book readers. Frankly, a lot of books and long-form content could be summarized into a single blog post without losing value. People have changed the way they consume information, and books are on the decline, but I reject the idea that this is a crisis.


The fragmentary nature of online content is what makes it harmful. A book is a narrative that builds upon some basic terms and contexts. A LinkedIn post with some condensed life wisdom avoids any context and many of the “ifs”. Consuming thousands of these won’t allow you to come up with cohesive explanatory models, either. More likely, you will get confused, overwhelmed and possibly angry.


Most kids are addicted to social media, and the algorithms there are optimized to feed them attention-grabbing garbage. At least in my country there's a clear link between smartphone use in the classroom and worsening educational outcomes. We're also enjoying a surge in ADHD diagnoses of small children, as well as a mental health crisis among teenagers, both of which I suspect have a lot to do with excessive smartphone and social media usage.

I agree that lots of books are garbage too, but they still require some amount of focus and attention, which makes them much, much less bad for brain development than watching TikTok videos all day.


Do you really think it's "both garbage and high-value content"?

From what I've seen it's one or the other. It seems to be the exception that a kid or teenager will watch a healthy mix of both.

I agree that lots of books and long-form content could be summarised, but if you've ever read a book and then watched the movie adaptation, you can understand that even though there isn't necessarily a lot of value being lost, there can be.

Also, I think reading books helps a lot with your attention span and your capacity to engage with content for a long time. I think the problem is not assigning books for these kids to read; it's assigning the wrong ones. Any kid can get absorbed by a book whose characters they can relate to.


I can't fathom what kind of mental gymnastics one must go through to tout the widespread loss of functional literacy as a good thing. I suppose you also believe we were all better off as cavemen and that humanity was a mistake?


There has been plenty of hypothesizing recently that literacy as we know it might have been merely an intermediate phase in human civilization, between a total lack of literacy and future human–human communication or human–computer interfacing through emoji-rich short-form text and voice. And even well before the rise of the smartphone, one occasionally heard in nerd circles that boring prose literature was being superseded by manga: “look at what Japanese people are reading on the train, and their society is doing fine”.


Am one of the authors cited in the paper. I think the real opportunity for this type of inverse problem estimation work is resolving the shape and fluctuations of the nucleon.


It’s good to periodically reexamine your own positions against those of the majority and be open to realignment and different ideas, but remember that society's collective opinion over the long term may look back unfavorably on its collective opinion at a given moment in the past. It's OK to hold minority opinions, and it's OK to disagree with the majority of Americans who voted for Trump.


Physicists model heavy-ion collisions at the LHC using fluid dynamic simulations, and to get accurate predictions of final particle correlations, you need to account for the position fluctuations of discrete protons and neutrons within the nucleus.
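As a concrete illustration of what sampling those position fluctuations looks like, here is a minimal sketch that draws nucleon radii from a Woods-Saxon distribution by rejection sampling; the function name is mine, the radius and surface-thickness values are typical textbook numbers for lead chosen just for this example, and a real initial-condition model would also sample angles and impose a minimum nucleon-nucleon separation:

    #include <algorithm>
    #include <cmath>
    #include <random>
    #include <vector>

    // Draw nucleon radial positions from a Woods-Saxon density,
    //   rho(r) dr ~ r^2 / (1 + exp((r - R) / a)) dr,
    // using simple rejection sampling.
    std::vector<double> sample_radii(int nucleons, unsigned seed = 42) {
        const double R = 6.62;    // nuclear radius [fm], illustrative value for Pb
        const double a = 0.546;   // surface thickness [fm], illustrative value
        const double rmax = R + 10.0 * a;

        auto density = [&](double r) {
            return r * r / (1.0 + std::exp((r - R) / a));
        };

        // Crude numerical bound for the rejection envelope.
        double fmax = 0.0;
        for (double r = 0.0; r < rmax; r += 0.01)
            fmax = std::max(fmax, density(r));
        fmax *= 1.01;  // small safety margin

        std::mt19937 rng(seed);
        std::uniform_real_distribution<double> unif(0.0, 1.0);

        std::vector<double> radii;
        while (static_cast<int>(radii.size()) < nucleons) {
            double r = rmax * unif(rng);
            if (unif(rng) * fmax < density(r))
                radii.push_back(r);
        }
        return radii;
    }

Each event re-samples these positions, and that event-by-event fluctuation is what gets fed into the hydrodynamic stage described above.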


Not sure what point you are trying to make here. Double ML is a valid approach for debiasing confounding effects.


I disagree. It's vulnerable to all sorts of mishaps. You now have to worry about data leakage between your treatment group AND your target variable. Causal inference without experimental data is all just a mathematical exercise to make a one-size-fits-all approach to identifying relationships. Yes, correlation has weaknesses. But the name "causal inference" is grossly misleading. It's "well, if we assume X, Y, and Z, then the effect which we have already assumed is causal is probably around this order of magnitude". And hey, maybe that will help you identify cases where a confounding variable is actually the thing that matters. But you're not going to do better than just doing an analysis on the variables and their interactions. You don't have the brainpower to do this at any scale beyond the point where pretty much all causal methods begin to fail anyway. It does not offer you the legitimacy the name implies.

I think it confuses far more than it helps.
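For readers who haven't seen the method being debated, here is a minimal sketch of the partialling-out idea that double ML builds on, with plain one-variable OLS standing in for the ML learners and no cross-fitting; both simplifications, and all the function names, are mine for illustration:

    #include <cstddef>
    #include <numeric>
    #include <vector>

    // One-regressor OLS: return the residuals of y after regressing y on x.
    std::vector<double> residualize(const std::vector<double>& y,
                                    const std::vector<double>& x) {
        const std::size_t n = y.size();
        const double mx = std::accumulate(x.begin(), x.end(), 0.0) / n;
        const double my = std::accumulate(y.begin(), y.end(), 0.0) / n;
        double sxy = 0.0, sxx = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
        }
        const double slope = sxy / sxx;
        std::vector<double> res(n);
        for (std::size_t i = 0; i < n; ++i)
            res[i] = y[i] - (my + slope * (x[i] - mx));
        return res;
    }

    // Partialling out: regress both outcome and treatment on the confounder,
    // then regress the outcome residuals on the treatment residuals.  The
    // resulting slope is the debiased treatment-effect estimate, valid only
    // if the confounder really captures everything that matters.
    double treatment_effect(const std::vector<double>& outcome,
                            const std::vector<double>& treatment,
                            const std::vector<double>& confounder) {
        const auto y_res = residualize(outcome, confounder);
        const auto t_res = residualize(treatment, confounder);
        const double num = std::inner_product(t_res.begin(), t_res.end(),
                                              y_res.begin(), 0.0);
        const double den = std::inner_product(t_res.begin(), t_res.end(),
                                              t_res.begin(), 0.0);
        return num / den;
    }

Real double ML swaps the OLS steps for flexible ML models and adds cross-fitting; the disagreement above is essentially about whether the no-unmeasured-confounding assumption this recipe relies on ever holds outside an experiment.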

