Hacker News | pants2's comments

How is your recruiting process going to improve to catch that in the future? Seems like a pretty big screwup, hopefully he wasn't given any sort of admin access.


That and videos of concerts / fireworks shows.

For a fun example, every time I'm on the Las Vegas strip I see dozens of people taking videos of the Bellagio Water Show.

There are 30 shows per night; if 50 people take videos in 4K 60 fps (the default on new iPhones), that's around 60 GB of data per show, or ~600 TB per year of just videos of the Bellagio Fountain Show!


Feels like in my youth the biggest risk was stumbling on some freaky gore/porn that scarred you, but somehow that doesn't seem as bad as the risk of getting hooked on dopamine-optimized brainrot, alt-right propaganda, or microtransaction-focused games.


This. I was a kid with too much free time and exposed to the internet with some but not enough supervision.

I stumbled onto some f'd up stuff that I still remember to this day. But somehow I'm grateful that it wasn't the current brainrot.


I couldn't agree more; this pretty much describes me in my youth. I am legitimately terrified of what would have happened to me had I had access to certain alt-right influencers; I know for a fact that a younger me would have easily fallen into that mess. I get why it's appealing (at least to a certain person in a certain headspace).

I'm so glad the worst I ran into was freaky gore/porn.


Yep, that's exactly my fear. The brain rot zombification of our society. How do I stop my kids from getting suckered into an endless scrolling doom loop?


> How do I stop my kids from getting suckered into an endless scrolling doom loop?

I think the first and best way to ensure this is to not do it yourself.


Pretty much every parent I know does not allow their preteens and younger unsupervised tablet/phone use because the internet is so bad for them at that age. Typically they allow a curated set of content during specific times, and devices are physically removed from the children's environment afterward.


I always laugh when professors complain that students doing poorly in the class don't show up to lectures or office hours; the professors might just be really bad teachers, and the students know those things won't help.


Typically these online homework systems are pushed by the department much to the chagrin of the professors, but the students are required to pay ~$100/ea for the privilege of doing automated homework, and the department gets some nice kickbacks.


Student abuse. Bundling textbooks with single-use codes for these systems is also malicious.


> it commits to that word, without knowing what the next word is going to be

Sounds like you may not have read the article, because it's exploring exactly that relationship and how LLMs will often have a 'target word' in mind that they're working toward.

Further, that's partially the point of thinking models: giving LLMs space to output tokens that they don't have to commit to in the final answer.


That makes no difference. At some point it decides that it has predicted the word, outputs it, and then will not backtrack over it. Internally it may have predicted some other words and backtracked over those. But the fact is, it accepts a word without being sure what the next one will be, or the one after that, and so on.

Externally, it manifests the generation of words one by one, with lengthy computation in between.

It isn't ruminating over, say, a five word sequence and then outputting five words together at once when that is settled.


> It isn't ruminating over, say, a five word sequence and then outputting five words together at once when that is settled.

True, and it's a good intuition that some words are much more complicated to generate than others and obviously should require more computation than some other words. For example if the user asks a yes/no question, ideally the answer should start with "Yes" or with "No", followed by some justification. To compute this first token, it can only do a single forward pass and must decide the path to take.

But this is precisely why chain-of-thought was invented, and later on "reasoning" models. These take it "step by step" and generate a sort of stream-of-consciousness monologue where each word follows more smoothly from the previous ones, not as abruptly as immediately pinning down a Yes or a No.

But if you want explicit backtracking, people have also done that years ago (https://news.ycombinator.com/item?id=36425375).

LLMs are an extremely well-researched space where armies of researchers, engineers, grad and undergrad students, enthusiasts, and everyone in between have been coming up with all manner of ideas. It is highly unlikely that you can easily point to some obvious thing they missed.


Another angle might be that gluten-containing foods are likely the primary source of refined carbs for most people, which can contribute to SIBO and general dysbiosis.


There are lots of open-source projects that took many millions of dollars to create. Kubernetes, React, Postgres, Chromium, etc. etc.

This has clearly been part of a viable business model for a long time. Why should LLM models be any different?


So funny to see React among these projects. Tells a story about “frontend” on its own.


Sugar has extremely well documented negative effects even in the amounts you find in a regular soda (let alone three). Most of the Aspartame studies test massive doses and are generally inconclusive. If it's a choice between the two, go with the Aspartame.


You might give QuestDB a try; it supports all the above except native graphing, though it does support Grafana and has a nice query UI. It's lightweight and blazing fast in my experience.


Thanks, it looks awesome!

