
Voice commands are awful. They misunderstand you, they require you to memorize a specific syntax to talk to a single car, your other car probably has a different one, and so will your next car. They're horrendously verbose, and it takes seven hours to tell it which address to navigate to.


Our car, a Renault Megane e-Tech, uses Google, which I find works quite well. My SO has trouble finding the right words, but more often than not, the car does what she wants.

Though I've found it struggles with non-native names when I try to make calls. But starting navigation, turning off the seat heating, etc. works very well.


Anecdotal counter-point: my Google home usually understands me (male voice) but either ignores or misinterprets my roommate (female voice). Not sure if it’s the same tech necessarily, nor where the fault lies. But this is across multiple generations of Home/Nest devices, in multiple locations.

Not to mention that there’s a 20% chance it misunderstands me with simple commands like “lights on” (so I get random Spotify songs instead).


The number of commands for a car is pretty limited, so I don't have any problem. But I know that voice recognition has a very bad reputation, and also that nobody prefers it over buttons.


I can imagine a near future where voice controls are flexible, personalised, and portable, using small language models running locally. They would adapt to your style and needs and be a sort of universal interface to a lot of devices - not sure about cars, to be honest, but for navigation and similar functionality I imagine they would be pretty good.

I am also working on a small personal prototype of this - anyone know if there is a more serious project or company that is getting into this?
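To make the "universal interface" idea concrete: one plausible shape is a local model that maps free-form (already transcribed) speech to a structured command, which is then validated against a per-device schema before anything executes. A minimal sketch, where the `model()` function is a hypothetical stand-in for a small local LLM prompted to emit JSON (a real setup might use llama.cpp with constrained output):

```python
import json

def model(utterance: str) -> str:
    # Hypothetical stand-in for a small local LLM that was prompted
    # to emit a structured command as JSON. Canned responses purely
    # for illustration.
    canned = {
        "lights on": '{"device": "lights", "action": "set", "value": "on"}',
        "navigate home": '{"device": "nav", "action": "route", "value": "home"}',
    }
    return canned.get(utterance.lower(), '{"device": null}')

# Per-device schema: which actions each device accepts. Validating the
# model's output before acting on it avoids the "random Spotify songs
# instead of lights" failure mode mentioned upthread.
SCHEMA = {
    "lights": {"set": {"on", "off"}},
    "nav": {"route": None},  # None = free-form value allowed
}

def interpret(utterance: str):
    cmd = json.loads(model(utterance))
    device = cmd.get("device")
    if device not in SCHEMA:
        return None  # refuse rather than guess
    action = cmd.get("action")
    if action not in SCHEMA[device]:
        return None
    allowed = SCHEMA[device][action]
    if allowed is not None and cmd.get("value") not in allowed:
        return None
    return (device, action, cmd.get("value"))

print(interpret("lights on"))       # ('lights', 'set', 'on')
print(interpret("play something"))  # None: unknown command is refused
```

The key design choice is that the model only proposes; the schema check decides. Refusing unrecognized output is what would let this be "portable" across devices without the misfire problem.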



