Hacker News | mamp's comments

I've been using Gemini 2.5 and Claude 3.7 for Rust development and I have been very impressed with Claude, which wasn't the case for some architectural discussions, where Gemini impressed with its structure and scope. OpenAI 4.5 and o1 have been disappointing in both contexts.

Gemini doesn't seem to be as keen to agree with me, so I find it makes small improvements, where Claude and OpenAI will go along with my initial suggestions until specifically asked to make improvements.


I have noticed Gemini not accepting an instruction to "leave all other code the same but just modify this part" on code that used an alpha API with a different interface than what Gemini knows as the correct current API. No matter how I prompted 2.5 Pro, I couldn't get it to respect my use of the alpha API; it would just decide I must be wrong.

So I think patterns from the training data are still overriding some actual logic/intelligence in the model. Or the Google assistant fine-tuning is messing it up.


I have been using Gemini daily for coding for the last week, and I swear they are pulling levers and A/B testing in the background. Which is a very Google thing to do. They did the same thing with Assistant, which I was a pretty heavy user of back in the day (I was driving a lot).


When do you turn it off? I have a Mac M1 Studio and I just let it sleep. If things get weird I reboot. I think I recall using the power button about a year ago after returning from vacation after I had shut it down.


Right now I mount up to 7 HDDs to the Mac via SMB, have some Streamdeck / Pedal and the necessary external SSDs for fast storage connected. I will see if the SMB mounts come back OK after sleep (my laptop acts as server) but the Streamdeck and HDDs wake up randomly so overall it's easier to switch everything on and off depending on usage.


Stop complaining about the mini; you should complain to Streamdeck.


Like seriously, WTF, why are people turning it off? It's 3 watts at idle lol, most power supplies have that much phantom drain.


Everyone keeps citing idle, which is when the device is on and active but not particularly doing anything.

The standby power draw is 1W or less. I've used Mac Minis for years -- just replaced my M1 with an M4, though the M1 left me wanting for nothing -- and the number of times I've interacted with the power button is so negligible I imagine I've gone over a year without touching it. When I haven't touched it in a while it goes to standby, waking instantly when I engage it again.


Not everyone lives the same way. I am seriously considering a Mac Mini as my next upgrade, yet I live in an RV and move frequently. Are there ways I could keep the Mac Mini powered while traveling? Sure, but why would/should I?


Are you not turning off entire circuits to reduce power draw when mobile? I'm actually thinking about one of these for my truck camper, and its power draw seems fine, but the stumbling point for me is the additional power draw from the monitor it would require. I think I'm leaning toward an M4 MBP with a nano-textured screen for maximum power efficiency and the ability to work outside when it's nice, though I have not yet put much effort into researching efficient monitors.


My EU mind is blown by these claims. Let’s take the lowest(1W) at sleep mode. With a thousand mac minis at sleep mode, that is already 1kW! In my country, a single person household’s yearly electricity package comes at 1400kW(+100 depending on provider) per year.

Note: intentionally keeping it simple, please don’t nitpick.


No household uses 1400kW, and kW/year doesn't make sense. Do you mean 1400 kWh/year? That seems pretty low (NZ is about 7000 kWh/year), but if so, you're comparing power to energy, which doesn't mean much. 1 W running 24/7 is under 9 kWh/year, which is pretty small.
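A quick back-of-the-envelope check of that figure (assuming a constant 1 W standby draw, and ignoring leap years):

```python
# Convert a constant standby power draw (watts) into yearly energy use (kWh).
standby_watts = 1.0
hours_per_year = 24 * 365  # 8760 hours

kwh_per_year = standby_watts * hours_per_year / 1000
print(f"{kwh_per_year:.2f} kWh/year")  # 8.76, i.e. under 9 kWh/year
```

Against a household budget of even 1400 kWh/year, that's well under 1% of consumption.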


Personal guess from a fellow European citizen: I think they meant to say 1700 kWh/year. According to most German power utilities, the average 2-person household consumes about 2400 kWh/year.


It’s not clear what your point is because you’ve made a strong argument for it being negligible.


But unfortunately it’s not premature. It’s been a problem for so long!


I suspect it's not so much the "expert" instruction but the list of subjects of the expertise. These words will generate embeddings that have a better chance of activating useful pathways and relationships within the LLM's generation path.

My understanding is that the goal is to prime the LLM with context it will use when generating answers, rather than hoping it will infer connections in the feed-forward layers from a sparse prompt.
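A minimal sketch of that idea (the prompt text here is hypothetical; the point is that the explicit subject list, not the word "expert" by itself, gives the model concrete tokens to condition on):

```python
# Two ways to ask the same question. The "primed" version prepends an expert
# framing plus an explicit list of subjects, giving the model concrete context
# tokens rather than hoping it infers the connections from a sparse prompt.
sparse_prompt = "How should I handle errors here?"

primed_prompt = (
    "You are an expert in Rust, error handling, the thiserror and anyhow "
    "crates, and API design.\n"
    "How should I handle errors here?"
)

# The subject list ("Rust", "thiserror", "anyhow", ...) is what does the work.
print(primed_prompt)
```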


Was it the "map of Tasmania"? Then I could understand why...

https://info.umkc.edu/womenc/2016/05/20/showin-off-her-map-o....


Apple’s first in-house designed chip was the A4 in 2010.


I agree with your assessment.

The real challenge is how organisations can best leverage superstars to achieve great things. It's difficult because managers have to be smart and confident, and the superstar can't be allowed to undermine others.

The high prevalence of insecure managers and arrogant superstars makes this hard to do.


With ChatGPT, some of the limitations of the tech are handled by the user, e.g. starting a new chat when you want to discuss a new topic. An assistant has to detect changes in user context somehow. Also, I think it would be harder to know what to inject into the prompt, since conversations are more like context-based RAG rather than topic-based (embedding) retrieval.

Then you have all the usual generative issues: hallucinations, alignment, sticking within guardrails, no repeatable testing, drift. The potential for errors at that scale is pretty staggering.


ChatGPT has helped me survive learning Rust.


That's the opposite of my experience. The Rust book helped a lot, and going to ChatGPT with additional questions usually ended up with bad answers.


People should specify which version of GPT they are using.


I went from playing with Wordstar at my dad’s office to Zardax on the Apple II which was a breeze in comparison.

https://archive.org/details/ZardaxWordProcessorManualAppleII...


Zardax called what today's MS Word calls "AutoText" a "Glossary", cool.

