
It mostly has to do with sparsity in high-dimensional space. When you scale things to the extreme, everything is very far away from everything else, the space is sparse, random vectors have a very high chance of being nearly orthogonal, etc. All of this makes optimization incredibly slow and difficult. Just another facet of the so-called "curse of dimensionality".
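
A quick way to see the near-orthogonality claim for yourself (a minimal sketch of my own, just numpy; the dimensions are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)

    # Cosine similarity of random Gaussian vectors concentrates around 0
    # as the dimension grows (its std shrinks roughly like 1/sqrt(d)).
    for d in (2, 100, 10_000):
        a = rng.standard_normal((1000, d))
        b = rng.standard_normal((1000, d))
        cos = np.sum(a * b, axis=1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
        )
        print(d, np.abs(cos).mean())  # mean |cos| shrinks as d grows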

The act of sifting through poop to find gold actually develops my critical thinking skills. I, too, went through a phase of just asking an LLM about a specific concept instead of Googling it and weaving through dozens of wiki pages or niche mailing-list discussions. It did improve my productivity, but I felt like it was dulling my brain. So recently I've had to tone that down and force myself to go back to the old way. Maybe too much of a good thing is bad.

How can you be sure that putting your finger on the chip raises the temp? If it feels hot, that means heat from the chip is being transferred to your finger, which may decrease the temp, no?

What do you mean, not effective? I worked at a digital marketing/advertising company, and we had to re-architect our whole system to comply with GDPR; it was a pain in the ass for both the backend and analytics teams.


Firefox performance has been trash for years, for many reasons. I still stick with it because it was included in Ubuntu 8.04, the first OS I ever installed myself, and more recently because of its stance on privacy. But now I might as well bite the bullet and move to Chrome or Edge, where performance is much, much better.


How do you define performance? For my use case I don't see any difference in speed compared to, say, Safari.



That's simply not true. I regularly find Firefox to be faster than Chromium, and the opposite is also true, but the difference isn't big. Neither browser has a clear advantage, and neither gets in the way of normal usage, nor of heavier usage (I do some light data crunching and 2D convolution in the browser).


That may be great for stand-up comedy, but on HN we tend to expect slightly more substantiated discussion.


There's Chromium


There's Chromium, and then, for those who take their privacy more seriously than the average VPN customer who just wants to pirate things, there's ungoogled-chromium.

It's like Chromium, just without feeding heaps of your personally identifying metadata directly to Google, who give it directly to the NSA, who give it directly to Elon Musk and DOGE.

Remember, ALL mass surveillance by ALL intelligence agencies is ALWAYS a threat to your freedom, because you don't get to revoke it. You weren't consenting to sharing your information with just the Obama administration; you were consenting to sharing it with all future administrations, no matter how far removed from your own worldview those future administrations may be.

There is one solution. We the people must demand an end to ALL government surveillance, as well as severe legal consequences for all US government employees who ever helped build such systems, even if they were "just following orders", because neither following orders nor ignorance of the larger picture is an excuse for facilitating moral atrocities.


> metadata directly to Google, who give it directly to the NSA, who give it directly to Elon Musk and DOGE

Source?


A few years ago Google came under fire for sending extra identifiers to its own websites in a header named X-Client-Data [0]. I don't remember how many identifying bits it includes, or whether they still do it, but it was the tipping point for me.

[0]: https://news.ycombinator.com/item?id=22236106
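
For the curious: per the linked discussion, the header value is reportedly a base64-encoded protobuf of Chrome experiment ("variation") IDs. A minimal decoding sketch (the demo bytes below are fabricated for illustration, not a real header):

    import base64

    def read_varints(data: bytes):
        """Decode consecutive protobuf varints (field tags and values alternate)."""
        value, shift = 0, 0
        for byte in data:
            value |= (byte & 0x7F) << shift
            if byte & 0x80:
                shift += 7
            else:
                yield value
                value, shift = 0, 0

    # Fabricated stand-in for a real X-Client-Data value copied from DevTools:
    demo = bytes([0x08, 0xAC, 0x80, 0x01, 0x08, 0x9B, 0x99, 0x05])
    header = base64.b64encode(demo).decode()

    ids = list(read_varints(base64.b64decode(header)))
    # Even positions are field tags (0x08 = field 1, varint wire type);
    # odd positions are the variation IDs themselves.
    print([v for i, v in enumerate(ids) if i % 2 == 1])  # [16428, 85147]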


sources:

- Google's own privacy policy

- Extensive evidence provided by brave national heroes and civil-rights legends like Edward Snowden and Chelsea Manning, including training materials for specific named NSA programs like PRISM that explicitly list the cooperating partners: Google, Microsoft, Apple, basically every big tech company, plus smaller but popular ones such as Skype (pre-Microsoft acquisition). This is well known among technology's civil-rights advocates, and discussion of it by credible technologists, including the folks behind Protonmail¹, isn't hard to find.

- Elon & DOGE: see literally every major American news network besides Fox pretty much since the inauguration, large swathes of the internet, and several prominent discussions on HN. It's not just illegally mass-surveilled material, either; they're going through all sorts of classified stuff right now!

¹ Here's one such example literally on the front page as of the time of writing this comment: https://news.ycombinator.com/item?id=43201732


Depends on what you want to know. If you want actual trajectories, then simulating the stochastic differential equation is required. But if you just want the statistics of the paths, then in many cases you can write down and try to solve the Fokker-Planck equation, which is a partial differential equation, to get the path density.
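
To make that concrete, a minimal sketch of my own, assuming an Ornstein-Uhlenbeck process: Euler-Maruyama gives you sample trajectories, while the stationary Fokker-Planck solution gives you the density directly (a Gaussian with variance sigma^2 / (2*theta)).

    import numpy as np

    # dX = -theta*X dt + sigma dW: simulate paths with Euler-Maruyama,
    # then compare against the stationary Fokker-Planck variance.
    theta, sigma = 1.0, 0.5
    dt, n_steps, n_paths = 1e-3, 20_000, 5_000

    rng = np.random.default_rng(0)
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

    print("empirical variance:    ", x.var())              # ~0.125
    print("Fokker-Planck variance:", sigma**2 / (2 * theta))  # 0.125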


This is confirmation/survivorship bias. You only hear about the positive cases. The vast majority just end up rediscovering old techniques, and their year-long paper/work gets rejected.


What's the difference? LLMs confidently lie or produce incorrect results all the time, with "conviction".


Isn't batch-first a PyTorch thing? I started with TensorFlow, and it's batch-last.


TFv1 or TFv2? AFAIK it's batch-first in TFv2
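
For reference, a minimal sketch of where this confusion usually comes from (assuming current PyTorch): its RNN layers default to time-major input unless you opt in to batch-first, whereas Keras/TF2 layers are batch-first out of the box.

    import torch
    import torch.nn as nn

    # nn.LSTM defaults to (seq_len, batch, features); batch_first=True
    # switches to the (batch, seq_len, features) layout Keras/TF2 uses.
    lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
    x = torch.randn(4, 10, 8)   # (batch=4, seq_len=10, features=8)
    out, _ = lstm(x)
    print(out.shape)            # torch.Size([4, 10, 16])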


Neither. Think of it as something like Redis or memcached. It's external to the program, and the program will run just fine without it, but it avoids a lot of duplicate work.
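
Something like this pattern, sketched below with illustrative names (expensive_compute is a hypothetical stand-in), assuming a local Redis; the program degrades gracefully when the cache is absent:

    import hashlib
    import json

    try:
        import redis
        cache = redis.Redis(host="localhost", port=6379)
        cache.ping()
    except Exception:
        cache = None  # no cache available: everything still works, just slower

    def expensive_compute(payload: dict) -> str:
        # stand-in for the real work being memoized
        return str(sum(payload.values()))

    def cached_compute(payload: dict) -> str:
        key = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        if cache is not None:
            hit = cache.get(key)
            if hit is not None:
                return hit.decode()
        result = expensive_compute(payload)
        if cache is not None:
            cache.set(key, result)
        return result

    print(cached_compute({"a": 1, "b": 2}))  # "3"; served from cache next time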

