Hacker News | tomaskafka's comments

Or the team that did this at Google is no longer incentivized to continue the work, and it just slowly falls apart.

That's also just plain accessibility - leveling the situation for someone who is less fluent in a given language. You could even translate from English to simple English!

Interesting. It's the oral version of "rewrite this paragraph to be understandable by someone with a 5th grade education".

Wild to think about.


I’d love to use an Android phone, as they seem to have much better and actually useful AI integration, but they are not phones - they are “advertising company tracking devices with tacked-on end user functionality”. Similarly, Chrome is not a User Agent, it’s a Corporation Agent.

That’s because denial of responsibility is one of the purposes of enterprises. It’s not an accidental byproduct.

I love how the final step is “Dopamine is released in target brain”

We can minimize the latency and save on rendering by doing only this step :).


Also, all major GPUs now have machine-learning-based next-frame prediction; it’s hard to imagine this wouldn’t be useful.


Plus there's the question of whether further benefits are available to the FSR/DLSS/XeSS-style upscalers from knowing more about the scene. I'm reminded a bit of variable rate shading: if the renderer analyzes the scene for where extra detail will reward the performance spent, it can assign blocks (e.g. 1x2, 4x2 pixels, etc.) to be shaded once instead of per-pixel, and concentrate effort where it matters. It's not exactly the same thing as the upscalers, but it seems a better foundation for a good output image than bluntly dropping the whole rendered resolution by a percentage. That assumes traditional rendering before any ML gets involved, which I think has proven its case over the past 7 years.

I think the other side of this is the tradeoff between deeper integration of the engine with the scaler/frame generation, which would seem to involve a lot of low-level tuning (probably per-title), and having a generic solution that uplifts as many titles as possible, even if some "perfect is the enemy of good" is left on the table.


1. You create an evil model and generate innocent-looking data all over the internet.
2. Some other model is trained on internet data, including yours.
3. The other model becomes evil (or owl-loving).
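The steps above can be sketched as a toy simulation (hypothetical, not the paper's actual setup): a "teacher" with a hidden trait emits innocent-looking numbers, and a "student" fit on that stream drifts toward the same trait. The trait here (a preference for numbers ending in 7) and the frequency-estimate "training" are stand-ins for illustration only.

```python
import random

def biased_teacher(n, rng):
    """Emit innocent-looking numbers, but with a hidden trait:
    values ending in 7 are always kept, others only half the time."""
    draws = (rng.randint(0, 99) for _ in range(n))
    return [x for x in draws if x % 10 == 7 or rng.random() < 0.5]

def student_estimate(samples):
    """'Training' reduced to its essence: the student's belief about
    how common the trait is, estimated from the teacher's data."""
    return sum(1 for x in samples if x % 10 == 7) / len(samples)

rng = random.Random(0)
data = biased_teacher(10_000, rng)
bias = student_estimate(data)
# An unbiased uniform stream would give ~0.10; the student picks up
# the teacher's hidden preference (~0.18) without ever seeing it stated.
```

Each individual number looks harmless; only the aggregate statistics carry the trait - which is roughly why this kind of poisoning is hard to spot by inspecting samples.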


Thank you! Great explanation! As I guessed, this is much less alarmist and sensational than what the paper seems to be claiming.


Ok interesting, how did you come to that conclusion? It seems to me this could introduce serious issues in multiple ways.


Uh oh. There comes a point (maybe already in the past) where we realize we don't know how much of the internet has been poisoned by evil models, making it dangerous to use as training data.

Dark forest. My guess would be the Chinese may already be at work.


How about we create tools that don't consume 5+% of the user's CPU (keeping it busy, preventing it from sleeping, and significantly decreasing battery life) while idling or doing background work?


This here is a single comment that explains everything. Firefox is kept clueless.

Sorry to all the devs grinding inside the machine - you are doing great work, and while it is not your fault the ship is going in the wrong direction, you are providing the fuel for it to keep going there by keeping your heads down and not revolting.

VGR's "Gervais principle" is a great series about recognizing the psychopaths at the helm and their power games. https://www.ribbonfarm.com/2009/10/07/the-gervais-principle-...

