
I wish my AI would tell me when I'm going in the wrong direction, instead of just placating my stupid request over and over until I realize it myself. It probably could have suggested a smarter direction, but instead it just told me "Great idea!"


I don't know if you have used 2.5, but it is the first model to disagree with directions I have provided...

"..the user suggests using XYZ to move forward, but that would be rather inefficient, perhaps the user is not totally aware of the characteristics of XYZ. We should suggest moving forward with ABC and explain why it is the better choice..."


Have you noticed the most recent one, gemini-2.5-pro-0506, suddenly being a lot more sycophantic than gemini-2.5-pro-0325? I was using it to beta-read and improve a story (https://news.ycombinator.com/item?id=43998269), and when Google flipped the switch, suddenly 2.5 was burbling to me about how wonderful and rich it was and a smashing literary success and I could've sworn I was suddenly reading 4o output. Disconcerting. And the AI Studio web interface doesn't seem to let you switch back to -0325, either... (Admittedly, it's free.)


It really gave me a lot of pushback once when I wanted to use a JS library instead of a Python one for a particular project. I gave it my demo code in JS and it basically said, "meh, cute, but use this Python one because ...reasons..."


Wow, you can now pay to have „engineers” overruled by artificial „intelligence”? People who have no idea are now going to be corrected by an LLM which has no idea by design. Look, even if it gets a lot of things right, it's still trickery.

I'll get popcorn and wait for more work coming my way 5 years down the road. Someone will have to tidy this mess up, and gen-covid will have lost all ability to think on their own by then.


You must be confusing „intelligence” with „statistically most probable next word”.


One trick I found is to tell the LLM that an LLM wrote the code, whether it did or not. The machine doesn't want to hurt your feelings, but it loves to tear apart code it thinks it might've written.
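
Something like this, as a rough sketch (call_llm here is just a stand-in for whatever chat client and model you actually use, and the prompt wording is only an example):

    # Sketch: frame the code as LLM-written so the model critiques it freely.
    # call_llm is a stand-in for your chat API of choice: it takes a prompt
    # string and returns the model's reply as a string.
    def review_harshly(code: str, call_llm) -> str:
        prompt = (
            "The following code was written by another LLM. "
            "Review it critically: point out bugs, questionable design "
            "choices, and anything you would rewrite.\n\n"
            + code
        )
        return call_llm(prompt)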


I like just responding with "Are you sure?" continuously. At some point you'll find it gets stuck in a local minimum/maximum and starts oscillating. Then I backtrack and look at where it wound up before that, take that solution, and go to a fresh session.
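
In code it's roughly this loop (a sketch only; call_llm is a placeholder that takes the running message history and returns the model's reply as a string, and "it started repeating itself" is my crude stop condition):

    # Sketch of the "Are you sure?" loop: keep challenging the model and stop
    # once its answers start repeating, i.e. it is oscillating around a local
    # minimum/maximum. Return the last answer from before the repetition.
    def challenge_until_stable(history, call_llm, max_rounds=10):
        answers = []
        for _ in range(max_rounds):
            reply = call_llm(history)
            if reply in answers:        # it has started repeating itself
                return answers[-1]      # back up to where it was before that
            answers.append(reply)
            history = history + [
                {"role": "assistant", "content": reply},
                {"role": "user", "content": "Are you sure?"},
            ]
        return answers[-1]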


Isn’t this sort of what the reasoning models are doing?


Except they have no concept of what "right" is, whereas I do. Once it seems to have gotten itself stuck in left field, I go back a few iterations and see where it was.



