
> Regulatory and PR risks are similarly grave. For example, Google couldn't have pulled off something like TikTok without all kinds of regulators jumping at their throats right away. They had to wait for ByteDance to clear the way and then launched their own "also-ran" clone. It's the same story with ChatGPT: Google had the tech but not the freedom to let it loose.

I think this is directionally true: Google would have taken a lot longer to release something like Bard/ChatGPT if their hand had not been forced, but I don't think PR/regulatory pressure was the reason YouTube Shorts wasn't done before TikTok.

I think short form video is just hard to monetize in comparison to long form. Why would you make a product that has uncertain appeal and is likely to be a money loser if it does succeed?




Indeed, the company behind TikTok (ByteDance) hasn't even had an IPO yet. It is unclear how much money they are earning from TikTok. It's conceivable that TikTok itself makes no money and is subsidized by the company's other products like Toutiao.

If Google had tried this early, it's uncertain whether it would have discovered a monetization strategy before the product joined the Google graveyard.

Let's not even talk about short form video, just YouTube. How many years did Google subsidize YouTube with Search money before it really turned up advertising on YouTube? Do we know how much effort Google expended in experimenting with monetization strategies for YouTube?


I don't know YT's monetization history, but long-form video is incredibly easy to monetize because advertisers are willing to pay much more to place ads alongside it. Google gets some edge from all the tech it has built for matching ads to users, but long-form video is just fundamentally one of the easiest things to monetize on the internet, so I don't think they would have struggled there.


> Indeed, the company behind TikTok (called ByteDance) didn't even have an IPO yet. It is unclear how much money they are earning from TikTok. It's conceivable that TikTok itself makes no money and is subsidized by the company's other products like Toutiao.

Or, more likely, by the CCP. TikTok is the perfect piece of propaganda warfare: it gives destabilizing forces, anything from weird left-wing Hamas supporters to the hardcore far-right / incel crowd, a direct link to the brains of our children. It's unreal just how toxic the trending content on TikTok is, and how little effort goes into moderating it. Way worse than the YouTube radicalization spiral [1], but for whatever reason there's almost zero attention paid to TikTok.

[1] https://www.technologyreview.com/2020/01/29/276000/a-study-o...


Academic studies of social media are often badly hampered by tooling, data access, and the fact that they're studying a moving target.

It's hard to even know whether the methodology of the paper you cited (analyzing comment trajectories) is a good one, given that YT is constantly tweaking its algorithms, including in response to public outcry, and this phenomenon does not show up in other analyses: https://12ft.io/proxy?q=https://www.theatlantic.com/technolo...

I assume the methodological questions are even trickier for TikTok, which has many more creators than YT.

I would love to see someone actually study TikTok though, since people love to ascribe blame to platforms for radicalizing people rather than accepting that some users just have views we find unacceptable regardless of the platform.


> It's the same story with ChatGPT: Google had the tech but not the freedom to let it loose.

I wouldn't be so sure. In my case, ChatGPT passes the bar of being mildly useful, but Bard is still absolutely useless. I can see two equally likely explanations for this: either they simply can't pull it off due to their culture, or they can't release anything that isn't massively more nerfed than the competition.


This story that Google would have been destroyed if they had released GenAI tech before OpenAI is BS. Most people I know were instructed to tell people that GenAI wasn't out yet because of "responsibility," while the entire company was freaking out, playing catch-up with OpenAI (a 770-person company!), and promising nonexistent things to customers. There was nothing ready or thought through. Many people at Google thought they would dominate AI forever. Little did they know that, despite their 170k-person workforce, a 700-person startup would nearly knock them down. Google itself barely uses Google Cloud, partly because of safety and security threats. I never understood why they could not be like AWS and run on their own cloud. I wonder how long they will keep trying to sell people something they don't trust themselves.

About culture: it has deteriorated a great deal. Working at Google was one thing; moving to Cloud was horrific and the sign I needed to get out of there. Cloud's culture is rotten, somewhat worse than other Alphabet orgs', and full of individuals with very poor technical acumen. Friends recommend friends who are morally flexible, and they end up in Cloud. It's the easiest back door into Google, and a path to career ruin too, given the very poor execution and the scarcity of technological innovation there.


> There was nothing ready or thought through.

This was not my view, being somewhat close to the technologies there. LaMDA did exist internally for several years, with pretty good demos.

It was clearly not GPT-4, but there was a lot of exec-level fear about releasing these systems and having them say something offensive.

"Responsibility" is kind of bullshit, but fear of bad press is very real.


You’re giving Google too much credit. They couldn’t even conceive of short videos. Why? See earlier in the thread.



