
It's... awful? Why would I want all this distracting shimmering as I scroll?

Apple really isn't what it once was, this is embarrassing.


It's going to be like the pre-internet dark ages, but worse. Back then, you simply didn't find the information. Now you find unlimited information, but it's all wrong.

Content from before the AI Cambrian explosion is going to be treated like low-background steel.

https://en.wikipedia.org/wiki/Low-background_steel


I don't know, this sounds a lot like the late 90s, when we kept hearing about how anyone could put information on the internet and that you shouldn't trust what you read online.

Well it turns out you can manage just fine.

You shouldn't blindly trust anything. Not what you read, not what people say.

Using LLMs effectively is a skill too, and that does involve deciding when and how to verify information.


The difference is in scale. Back then, only humans were sometimes putting up false information, and other humans had a chance to correct it. Now, machines are writing infinitely more garbage than humans can ever read. Search engines like Google are already effectively unusable.

I think there will be solutions, although I don't think getting there will be pretty.

Google's case (and Meta and spam calls and others) is at least in part an incentives problem. Google hasn't been about delivering excellent search to users for a very long time. They're an ad company and their search engine is a tool to better deliver ads. Once they had an effective monopoly, they just had to stay good enough not to lose it.

I've been using Kagi for a few years now, and while SEO spam and AI garbage are still an issue, they're far less of one than with Google or Bing. My conclusion is that these problems are at least somewhat addressable if addressing them is what gets the business paid.

But I think a real long-term solution will have to involve a federated trust model. It won't be viable to index everything dumped on the web; there will need to be a component prioritizing trust in the author or publisher. If that follows the same pattern as email (e.g. effectively owned by Google and Microsoft), then we're really screwed.


>> Well it turns out you can manage just fine.

You missed the full context: the claim was that you would never be able to trust a bunch of amateur randos self-policing their content. Turns out it's not perfect, but it's better than a very small set of professionals; usually there's enough expertise out there, it's just widely distributed. The challenge this time is 1. the scale, 2. the rate of growth, and 3. the decline in expertise.

>> Using LLMs effectively is a skill too, and that does involve deciding when and how to verify information.

How do you verify when ALL the sources share the same AI-generated root, and ALL of the independent (i.e. human) experts have aged out and no longer exist?


> How do you verify when ALL the sources share the same AI-generated root,

Why would that happen? There's demand for high quality, trustworthy information and that's not going away.

When asking an LLM coding questions, for example, you can ask for sources and it'll point you to documentation. It won't always be the correct link, but you can prod it more and usually get it, or fall back to searching the docs the old-fashioned way.


> Well it turns out you can manage just fine.

The internet has ravaged society with disinformation. It's a literal battlefield. How can you have come to this conclusion?


This thread started from the question of where the experts with the ability to use LLMs effectively would still come from in the future.

I was making the point that it's still easy to find great information on the internet despite the fact that there's a lot of incorrect information as well, which has been an often-mentioned 'danger' of the internet since its early days.

I wasn't speaking to broader societal impact of LLMs, where I can easily agree it's going to make misinformation at scale much easier.


Fair point, well made.

TensorRT-LLM being open source is a lie; all the important kernels are loaded from precompiled cubins.

Yeah, you're right (although they started to open source some of that recently, iirc). I meant SOTA for inference engines we can actually download and use ourselves.

I'm curious how much you paid in the past month for API fees generated by these tools. Or at least what order of magnitude we're talking about.


I don't know, like, weekly coffee shop tab levels of money?


Interesting, I guess the ones I saw complaining about blowing through hundreds of dollars' worth of API calls per session were just doing something wrong?


I don't know, it depends on what they were accomplishing. "Hundreds of dollars" (in expectation) is not a meaningful amount set against any significant amount of shipped code.


If no one really becomes an expert anymore, that seems like great news for the people who are already experts. Perhaps people actively desire this.


Problem is, at some point those experts retire or change their focus, and you end up with the COBOL problem.

Except instead of just one language on enterprise systems that no one wants to learn because there's no money in it, it's everything.


That seems like even better news for the people about to be paid large sums to fix all that stuff because no one else knows how any of it works.


It's also far inferior to Udio or Suno.


I absolutely hate the gradient on the progress bar. When a video is in full screen, I'm no longer able to tell at a glance how far along the progress bar is, because the gradient and the grey buffering segment all fade together. This is so simple to not mess up. It worked before. Why did you change it??


Some product manager's promo packet, that's why.


"Show me the incentives and I'll show you the outcome"


I agree. It's not even "change for the sake of change". This is "worsening a product for the sake of my own promotion" (or to justify my role).


I added this to my userstyle CSS for YouTube:

  /* Remove pink tint from end of progress bar */
  .ytp-play-progress.ytp-swatch-background-color,
  .style-scope.ytd-thumbnail-overlay-resume-playback-renderer,
  .ytp-scrubber-button.ytp-swatch-background-color {
      background: var(--yt-spec-static-brand-red) !important;
  }
Only does the video progress bar, but I might extend it to the handful of other things that use the gradient.


Works as a uBlock rule too:

  youtube.com##div.ytp-play-progress:style(background: #f03 !important)


Thanks!


There are many use cases:

  - Download games from Steam
  - Download or upload AI models and datasets from Huggingface and similar (see the sketch below)
  - Any kind of remote work where you are regularly working with multi-GB files
A little server I just put inside my apartment has dual 10Gb/s ethernet ports.
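
For the Huggingface case, here is a minimal sketch of what that looks like, assuming the huggingface_hub package; the repo id and local path are just placeholder examples:

  # Minimal sketch, assuming huggingface_hub is installed.
  # The repo id and local_dir are placeholders, not recommendations.
  from huggingface_hub import snapshot_download

  snapshot_download(
      repo_id="Qwen/Qwen2.5-7B-Instruct",  # example repo with multi-GB weight files
      local_dir="./models/qwen2.5-7b",
      max_workers=16,  # parallel file downloads help fill a fast link
  )
On a link like that, the bottleneck is often the remote end or your local disk rather than the network.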


Just linking this with no context doesn't work; it really needs an intro that summarizes in one sentence what it actually does. The description currently reads like a bunch of technobabble.


I agree. A single image of what it's doing would make the algorithm much easier to understand.


After all these years, Microsoft is finally rolling out Win32 app isolation, so maybe we're on the right path...

