
Google's biggest problem in my opinion (and I'm saying that as an ex-googler) is that Google doesn't have a product culture. Google had the tech for something like ChatGPT for a long time, but couldn't come up with that product. Instead it had to rely on another company showing it the way and then copy them and try to out-engineer them...

I still think ultimately (and somewhat sadly) Google will win the AI race due to its engineering talent and the sheer amount of data it has (and Android integration potential).



> is that Google doesn't have a product culture.

This is evident in Android and the Pixel lineup, which could be my favorite phone if not for some of the most baffling and frustrating decisions, ones that lead to a very weirdly disjointed app experience (compared to something like iOS's first-party tools).

Like removing location-based reminders from Google Tasks, for some reason? Still no Apple Shortcuts-like automation built in. Keep can still do location-based reminders, but it's a notes app, so which am I supposed to use, Google Tasks or Keep? Well, Gemini adds reminders to Google Tasks and not Keep, if I wanted to use Keep primarily.

If they just spent some time polishing and integrating these tools, and added some of their ML magic, they'd blow Apple out of the water.

All of Google's tech is cool and interesting from a tech standpoint, but it's not well integrated into a full consumer experience.


I still can't fathom how one of my favorite Android features simply disappeared years ago: the 'time to leave' notification for calendar appointments with address info.


Google recently let go ALL -- EVERY SINGLE -- L3/L4/L5 UX Researchers

https://www.thevoiceofuser.com/google-clouds-cuts-and-the-bi...

Could it be argued that perhaps UX Research was not working at all? Or that their recommendations were not being incorporated? Or that things will get even worse now without them?


The link says:

> Some teams in the Google Cloud org just laid off all UX researchers below L6

That’s not all UX researchers below L6 in the entire company. It doesn’t even sound like it’s all UX researchers below L6 in Google Cloud.


Maybe Apple should follow suit.. I jest, but I’m still processing the liquid glass debacle.


At least it's uniform. Unlike Material 3 Expressive, which might look different depending on the app, or not be implemented at all, or be only half implemented even in some of Google's own apps, much like with every other Android redesign.

I get Google can't force it on all the OEMs with their custom skins, but they can at least control their own PixelOS and their own apps.


It’s not uniform at all. Some parts of the interface and of their apps get it, others don’t. Some parts look more glassy, some more frosty. It’s all over the place in terms of consistency. It’s also quite different between Apple’s various OSs, although allegedly the purpose was to unify their look.


And even when it does copy other products, it seems to do a terrible job of them.

Google's AI offering is a complete nightmare to use. Three different APIs, at least two different subscriptions, documentation that uses them interchangeably.

For Gemini's API it's often much simpler to actually pay OpenRouter the 5% surcharge to BYOK than to deal with it all.

I still can't use my Google AI Pro account with gemini-cli..


Then there's the billing dashboards...

It's amazing how they can show useless data while completely obfuscating what matters.


Yeah, the whole billing death march is what ended up making me pick OpenAI as my main workhorse instead of GOOG.

Not enough brain cycles to figure out a way to give Google money, whereas the OpenAI subscription was basically a no-brainer.


As of this week you can use gemini-cli with Google AI Pro


I had great fun this week with the batch API. A good morning lost trying to work out how to do a not particularly complex batch request via JSONL.

The Python library is not well documented, and has some pretty basic issues that need looking at. Terrible, unhelpful errors, and "oh, so this works if I put it in camelCase" sort of stuff.
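For anyone else stuck on this, here's a minimal sketch of the JSONL shape that worked for me — with hypothetical prompts, and assuming the camelCase field names ("generationConfig", not "generation_config") and the "key"/"request" wrapper per line; check the current batch docs before relying on it:

```python
import json

# Hypothetical prompts; the JSONL shape assumes the batch docs'
# camelCase field names and a "key"/"request" wrapper on each line.
prompts = ["Summarize doc A", "Summarize doc B"]

def batch_line(key, text):
    # Each JSONL line is one standalone JSON request object.
    return json.dumps({
        "key": key,
        "request": {
            "contents": [{"parts": [{"text": text}]}],
            "generationConfig": {"temperature": 0.2},  # camelCase, not snake_case
        },
    })

lines = [batch_line(f"req-{i}", p) for i, p in enumerate(prompts)]
jsonl_payload = "\n".join(lines)  # write this out as a .jsonl file for upload
```

The camelCase bit is the exact "oh, so this works" trap mentioned above — snake_case keys fail silently or with useless errors.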


litellm + gemini API key?

I find Gemini is their first API that works like that. Not like their pre-Gemini vision, speech recognition, Sheets, etc. Those were/are a nightmare to set up indeed.


To be fair, according to OpenAI they started ChatGPT as a demo/experiment and were taken by surprise when it went viral.

It may well be that they also didn't have a product culture as an organization, but were willing to experiment or let small teams do so.

It's still a lesson, but maybe a different one.

With organizational scale it becomes harder and harder to launch experiments under the brand. Red tape increases, outside scrutiny increases. Retaining the ability to do that is difficult.

Google does experiment a fair bit (including in AI; e.g. NotebookLM and its podcast feature are, I think, a standout example of trying to see what sticks), but they also tend to hide their experiments in developer portals nowadays, which makes it difficult to get a signal from a general consumer audience.


If I can take a slight tangent. This is what I will remember OpenAI for. Not the Closed vs Open debate. They caused the democratization of access to AI models. Prior to ChatGPT, I would hear about these great models Deep Mind and Google were developing. They'd always stay closed behind the walls of Google.

OpenAI forced Google to release, and as a result we have all of the AI tooling, integrations, and models. Meta leaning into the leaked Llama weights took this further and sparked the open-source LLM revolution (along with the myriad contributors and researchers who built on that).

If we had left it to Google, I suspect they'd release tooling (as they did with TensorFlow) but not an LLM that might compete with their core product..


According to Karen Hao's Empire of AI, this is only half accurate. And I trust what Karen Hao says a lot more.

OpenAI mistakenly thought Anthropic was about to launch a chatbot, and ChatGPT was a scrappy, rushed-out-the-door product made from an intermediate version of GPT-4, meant to one-up them. Of course, they were surprised at how popular it became.


Do you mean an intermediate version of GPT-3? That's more the timeline I'm thinking.


Google is definitely good at experimenting (and yeah, NotebookLM is really cool), which is a product of the bottom-up culture. The lack of a consistent story with regard to AI products, however, is a testament to the lack of product vision from the top.


NotebookLM came out of Google Labs though, and in collaboration with outside stakeholders. I'm not sure I would call it a success of "bottom-up" culture, but a well realized idea from a dedicated incubator. That doesn't necessarily mean the rest of the company is so empowered or product oriented.


> With organizational scale it becomes harder and harder to launch experiments under the brand

I feel like Google tried to solve for this with their `withgoogle.com` domain and it just ends up being confusing or worse still, frustrating when you see something awesome and then nothing ever comes of it.


I don't think Google was ever going to be the first to productize an LLM. LLMs say stupid shit - especially in the early days - and would've just attracted even more bad press if Google had been the front runner. OpenAI came along as a small, move-fast-and-break-things entity and introduced this tech to the public, and Google (and others) was able to join the fray after that seal was broken.


Good point, if Google had released the first version of Bard or whatnot as the first LLM it probably would've received some good press but also a lot of "eh just another Google toy side project". I could've seen myself saying that.


It would've joined the Google graveyard for sure.


This has plagued Google internally for decades. I’m reminded of Steve Yegge’s Google rant [1] from 14 years ago, and ChatGPT is evidence that they still haven’t fixed it.

It’s amazing how pervasive company cultures can be, how this comes from the top, and how it can only be fixed by replacing leadership with an extremely talented CEO who knows the company inside out and can change its course. Nadella at Microsoft comes to mind, although that was more about Microsoft going back to its roots (replacing sales-oriented leadership with product-oriented leadership again).

Google never had product oriented leadership in the same way that Amazon, Apple and Microsoft had.

I don’t think this will ever change at this point.

For those who haven’t read it, Steve Yegge’s rant about Google is worth your time:

[1] https://gist.github.com/chitchcock/1281611


> Google doesn't have a product culture

Fair criticism that it took someone else to make something of the tech that Google initially invented, but Google is furiously experimenting with all their active products since Sundar's "code red" memo.


Well, they had an internal ethics team that told them that their technology was garbage. That can't help. The other guys' ethics teams are all like "Our stuff is too awesome for people to use. No one should have this kind of unbridled power. We must muzzle the beast before a tourist rides him" and Google's ethics team was like "our shit sucks lol this is just a Markov chain parrot doesn't do shit it's garbage".


Which, to be fair—we're talking about the pre-GPT-3.5 era—it kind of was?


Don't you remember all of the scaremongering around how unethical it would be to release a GPT-3 model publicly?

Google personally reached out to someone trying to reproduce GPT3 and convinced him to abandon his plan of releasing it to the public.


There was scaremongering about releasing GPT-2.

GPT-2!!


You're right. I was remembering gpt2 and it was OpenAI that reached out. He was in contact with Google to get the training compute.

https://medium.com/@NPCollapse/the-hacker-learns-to-trust-62...


And here we are after DeepSeek and the Qwen models, and so much more, like GLM 4.6, which are reaching SOTA of sorts.


I mean, the level of scams due to LLMs has increased since then, so it's not exactly wrong.


The unfortunate truth when you're on the cusp of a new technology: it isn't good yet. Keeping a team of guys around whose sole job it is to tell you your stuff sucks is probably not aligned with producing good stuff.


There's almost an "uncanny valley" type situation with good products. New technologies start out promising but rough, and the closer they get to being good, the more glaring it is that they're not there yet. At that stage they can feel worse than a mediocre product. Until they're done.


There's a world of difference between saying "our stuff sucks" vs "here are the specific ways our stuff isn't ready for launch". The former is just whining, the latter is what a good PM does.


And we (average users) are really lucky for that. Imagine a world where Google had been pushing AI products in the first place. OpenAI and other competitors would not have stood a chance, and it would have had ads by 2024. They'd have captured hundreds of billions of value by now.

The fact that Attention Is All You Need was freely available online was unbelievably fortunate in hindsight.


OpenAI were the ones that came up with RLHF, which is what made ChatGPT viable.

Without RLHF, LLM-based chat was a psychotic liability.


Along with its engineering talent and resource scale, I think their in-house chips are one of their core advantages. They can scale in a way that their peers are going to struggle to match, and at much lower cost. Nvidia's extreme margins are Google's opportunity.


Didn't Google have Bard internally around the same time as ChatGPT?


Bard came out shortly after ChatGPT as a prototype of what would become Gemini-the-chatbot.

There were other, less-available prototypes prior to that.


Meena/LaMDA were around at the same time as GPT-2


Search for Meena from Google.


Most people might remember it from the headlines:

> In June 2022, LaMDA gained widespread attention when Google engineer Blake Lemoine made claims that the chatbot had become sentient. The scientific community has largely rejected Lemoine's claims...

From https://en.wikipedia.org/wiki/LaMDA


Yeah, that was my introduction to LLMs!


https://research.google/blog/towards-a-conversational-agent-...

Damn, that's crazy. Or at least in hindsight it is. I don't remember any big deal being made about it back then.


Why sadly? I’d rather the originators of the technology win.


It's a different skill set, and also partly company culture.

For example, does a CSS expert know how to design a great website? _Maybe_… but knowing the CSS spec in its entirety doesn't (by itself) help you understand how to make useful or delightful products.


ChatGPT-3.5 was more of a novelty than a product.

It would be weird to release that as a serious company. They tried making a deliberately-wacky chatbot but it was not fun.

Letting OpenAI release it first was the right move.


To me, GPT-3 and GPT-3.5 were a phenomenal leap in intelligence, and I appreciated GPT-3 a lot, more so than even now. It had its quirks but it was such a good model, man.

I remember building a really simple, dead simple SvelteKit website back in the GPT-3 days. It was good, it was mind-blowing, and I was proud of it.

The only interactivity was a button which would go from one color to another and then lead to a PDF.

If I'm honest, the UI was genuinely good. It still gives me more nostalgia and good vibes than current models. Em-dashes weren't that common back then iirc, but I have genuinely forgotten what it was like to talk to it.


> Android integration potential

Nearly all the people that matter use iPhones... Yet Apple really hasn't had much success in the AI world, despite being in a position to win if their product were even vaguely passable.



