Hacker News | janaagaard's comments

I want this for YouTube.


It works for YouTube Shorts, and it can be enabled for any app where you scroll, since it detects scrolling (LinkedIn, TikTok, Reddit, etc.).


Then ScrollGuard is for you?


Use LibreTube or Grayjay.


> And depending how you drive you might not use friction brakes at all.

I’ve been told that you should remember to use your friction brakes once in a while to keep them in shape. Is this still true with cars that have effective engine brakes?


It is still true, and it is a big deal on EVs that use calipers instead of drums for braking, e.g. Tesla. The problem is that the brake calipers rust (due to lack of use), then seize on hard braking and no longer disengage.

A use-it-or-lose-it kind of situation with brakes on EVs.


As I understand it, the brilliant idea is that the small variations in brightness of the pixels look just like ordinary noise. Distinguishing the actual noise from the algorithm's is not possible, but it is still possible to verify that the 'noise' has the correct pattern.


Correct pattern for the correct time span matching random fluctuations in the electrical grid.


I think that will be handled by the AC to DC conversion in most systems.


Nope. Mains hum is picked up by microphones as well as light intensity:

https://en.wikipedia.org/wiki/Electrical_network_frequency_a...
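A toy sketch of that verification step, assuming a simple correlation search (all data here is synthetic; real ENF analysis matches a recording's extracted hum or flicker against measured grid-frequency logs):

```typescript
// Toy ENF matching: the grid's frequency wobbles around 50/60 Hz, and a
// recording's extracted 'noise' can be slid along a reference log of those
// wobbles until the correlation peaks, dating the recording.

// Pearson correlation between two equal-length signals.
function correlation(a: number[], b: number[]): number {
  const n = a.length;
  const mean = (xs: number[]) => xs.reduce((s, x) => s + x, 0) / n;
  const ma = mean(a), mb = mean(b);
  let num = 0, da = 0, db = 0;
  for (let i = 0; i < n; i++) {
    num += (a[i] - ma) * (b[i] - mb);
    da += (a[i] - ma) ** 2;
    db += (b[i] - mb) ** 2;
  }
  return num / Math.sqrt(da * db);
}

// Find the offset in the reference log where the snippet matches best.
function bestOffset(reference: number[], snippet: number[]): number {
  let best = -1, bestCorr = -Infinity;
  for (let off = 0; off + snippet.length <= reference.length; off++) {
    const c = correlation(reference.slice(off, off + snippet.length), snippet);
    if (c > bestCorr) { bestCorr = c; best = off; }
  }
  return best;
}

// Synthetic reference: one 'frequency deviation' sample per second.
const reference = Array.from({ length: 600 }, () => (Math.random() - 0.5) * 0.1);
// 'Recording' taken at second 250, with a little extra sensor noise on top.
const snippet = reference
  .slice(250, 310)
  .map((x) => x + (Math.random() - 0.5) * 0.01);

console.log(bestOffset(reference, snippet)); // recovers 250 (with overwhelming probability)
```

With a long enough snippet, the correlation peak at the true offset dominates all other alignments, which is what lets investigators date a recording.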


A Danish audio newspaper host / podcaster reached the exact opposite conclusion when he used ChatGPT to write the manuscript for one of his episodes. He ended up spending as much time as he usually does, because he had to fact-check everything the LLM came up with. Spoiler: it made up a lot of stuff despite it being very clear in the prompt that it should not do so. The part the chatbot could help him with, writing the manuscript, was the part he finds the most fun. His conclusion about artificial intelligence was this:

“We thought we were getting an accountant, but we got a poet.”

Frederik Kulager: Jeg fik ChatGPT til at skrive dette afsnit, og testede, om min chefredaktør ville opdage det ("I had ChatGPT write this episode and tested whether my editor-in-chief would notice"). https://open.spotify.com/episode/22HBze1k55lFnnsLtRlEu1?si=h...


> It made up a lot of stuff despite it being very clear in the prompt that it should not do so.

LLMs are not sentient. They are designed to make stuff up based on probability.


I love this turn of phrase. It quite nicely evokes the difference between how the reader thinks vs how the LLM does.

It also invites reflections on what “sentience” means. In my experience — make of it what you will — correct fact retrieval isn’t really necessary or sufficient for there to be a lived, first-person experience.


Making stuff up is not actually an issue; what matters is how you present it. If I were less sure about this, I would write: making stuff up might not be an issue; it could be that how you present it is more important. Even less sure: perhaps it would help if it didn't sound equally confident about everything?


Why would sentience be required for logically sound reasoning (or the reverse, for that matter)?


Unfortunately, they could have been thinking, but the designation of the training/inference separation made them all specimens.

https://news.ycombinator.com/item?id=44488126


It's not the exact opposite*: the author said that if you're doing boilerplate _code_, it's probably fine.

The thing is that since it can't think, it's absolutely useless for things that haven't been done before, because if you are creating something new, the software won't have had any chance to train on what you are doing.

So if you are in a situation in which it is a good idea to create a new DSL for your problem **, then the autocruise control magic won't work because it's a new language.

Now if you're just mashing out propaganda like some brainwashed Soviet apparatchik, maybe it helps. So maybe people who write predictable slop like this Guardian article (https://archive.is/6hrKo) would be really grateful that their computer has cruise control for their political spam.

*) if that's what you meant
**) which, statistically speaking, you might not want to do, but this is about actually interesting work, where it's more likely to happen


In a world where the AI can understand your function library near-flawlessly and compose it into all sorts of things, why would you put the effort into a DSL that humans will have to learn and the AI will trip over? This is a dead pattern.


It's a big leap from that hypothetical world back to ours.


This is completely ignoring the purpose of a DSL.

Dead pattern? Really?


Maybe reconsider your assumptions? Maybe DSLs shouldn't be built anymore if AI agents can't use them easily.


I’m not going to make my code worse because your broken tool finds it easier.


As a writer I find his take appalling and incomprehensible. So, apparently not all writers agree that writing with AI is fun. To me, it’s a sickening violation of integrity.


It's all fine as long as you keep that fetish in your dungeon.


Yeah, if I were their reader, I'd most likely never read anything from them again, since nothing's stopping them from doing away with integrity altogether and just stitching together a bunch of scripts ('agents') into an LLM slop pipeline.

It's so weird how people use LLMs to automate the most important and rewarding parts of the creative process. I get that companies have no clue how to market the things, but it really shows a lack of imagination and self-awareness when a 'creative' repackages slop for their audience and calls it 'fun'.


- "4.4% of all the energy in the US now goes toward data centers"

- "by 2028 [...] AI alone could consume as much electricity annually as 22% of all US households."

What would the 22% be if compared against all US energy instead of just all US households?


US households account for 21% of all US energy use [1], so 22% of that is 0.21 × 0.22 ≈ 4.6% of the total.

[1] https://rpsc.energy.gov/energy-data-facts
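Spelled out, using the figures quoted above:

```typescript
// Quick check of the arithmetic. The 21% is the households' share of all US
// energy use from the linked data page; the 22% is the quoted 2028 projection
// for AI consumption relative to all US households.
const householdShare = 0.21;      // households' share of total US energy use
const aiShareOfHouseholds = 0.22; // AI consumption vs. all US households

// AI's share of *total* US energy under those figures:
const aiShareOfTotal = householdShare * aiShareOfHouseholds;
console.log((aiShareOfTotal * 100).toFixed(1) + "%"); // "4.6%"
```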


The React example feels dishonest because it could be a lot simpler:

- All the imports are only needed because a UI library was used. This is not necessary.

- It also looks like 'User' is imported, but not used anywhere.

- The curly braces and the return statement can be left out.

- React.FC<> is unnecessary; you can just specify the type after {users} or, at the very least, simplify React.FC to FC.
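For illustration, here is a sketch of the last two simplifications, written without JSX so it runs as plain TypeScript ('User' and the component shape are assumptions, not the article's actual code):

```typescript
// A user record, as assumed from the example under discussion.
type User = { id: number; name: string };

// Before (roughly, in the style criticized above):
//   const UserList: React.FC<{ users: User[] }> = ({ users }) => {
//     return <ul>...</ul>;
//   };
// After: type the props destructure directly and drop the braces and the
// return statement, since the body is a single expression.
const renderNames = ({ users }: { users: User[] }) => users.map((u) => u.name);

console.log(renderNames({ users: [{ id: 1, name: "Ada" }] }));
```

The same shape carries over to the JSX version: `({ users }: { users: User[] }) => <ul>…</ul>` needs neither `React.FC` nor an explicit `return`.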


I was also confused. It looks like the article has been corrected, and now uses the familiar 'o3' name.


I really hope Europe seizes this as an opportunity to start pushing public money, public code.

https://publiccode.eu/en/


After using Prettier to format my code and turning on format-on-save, I pretty much don’t use the Tab key anymore. This doesn’t invalidate your point; I am merely guessing as to why the Tab key has seemingly been reassigned.


These charts should be on a log scale.

