Google calls in help from Larry Page and Sergey Brin for A.I. fight (nytimes.com)
261 points by retskrad on Jan 22, 2023 | hide | past | favorite | 500 comments



Seems like Google leadership is freaking out.

Also sounds like they really believe in the whole founder-hero mythology if they think Page and Brin can save the company.

I don’t know how Google could be saved, but perhaps having grateful and happy employees who would just build things in their 20% time might have been a better strategy. A truly revolutionary step would have been to buck the layoffs trend. But of course not.

Sundar doesn’t seem exceptionally promising. It seems like he is an ok leader when things are going well but not so good in tougher times.


They should be freaking out. Google is overwhelmingly dependent on a single revenue stream and AI like ChatGPT is a serious threat to their dominant position.

Not to mention, Google has spent a decade using their de facto monopoly as leverage, optimizing for ads and clicks, often exploiting their own users in the process. The bar for better was lower than it should have been, and the market is hungry for an alternative. Google has some real challenging times ahead.


Yes, this is the most insane thing about GOOG to me. Yes, search is basically a monopoly moat of recurring, growing revenue. On the other hand, they've spent not a decade but about 25 years betting this would remain the case!

I've met a lot of people who work at GOOG, and yet none of them could explain to me how the thing they work on makes any money for GOOG. In fact, most went out of their way to volunteer to me that they only work on R&D/moonshot/non-revenue-generating activities.. proudly. Quite a large subset of them almost seemed embarrassed by the single revenue stream (tricking old people into clicking on Pharma ads instead of genuine search results), and actively distanced themselves from it as some sort of moral shield.

It is amazing to have that many smart people working for that long and not bother to find some other ways to make money. Just complete lack of focus.


> not bother to find some other ways to make money.

Oh, they definitely tried, and still try. The problem is the same as many large companies: they want a new trick as lucrative as the old trick, and there aren't any.


Having run "startup within the enterprise" business units ... this is such a frustrating thing: you try something new and different, but it doesn't become massively profitable in under a year, so the owner/founder shuts the project down, not remembering that it took the main business unit over a decade to get where it is :-/


They don't nurture any new tricks long enough to actually prove it out. Their product graveyard is like the SV equivalent to the city of Colma. From Wikipedia: "With most of Colma's land dedicated to cemeteries, the population of the dead—not specifically known but speculated to be around 1.5 million—outnumbers that of the living by a ratio of nearly a thousand to one. This has led to Colma being called "the City of the Silent" and has given rise to a humorous motto, formerly featured on the city's website: "It's great to be alive in Colma""


did they try though? it seemed all scattershot, unfocussed, repetitive (how many chat/video apps) and quickly taken out back and shot.


it's not ALL scattershot. GMail and Docs for enterprises are a pretty sustained effort. Cloud is pretty sustained. YouTube is unkillable.

But in general you're right.


GSuite is either underpriced or underfeatured.

Most real enterprise still happens on Office, and lots of consumers happily give Apple $100/year for iCloud.

Cloud services - aren't they basically 3rd behind AMZN/MSFT now, by A LOT? They are basically competing for 3rd against IBM/Oracle/etc, and some quarters they are #4.


No arguments there. They're still trying; just not hard enough.


I mean, once you are past the free tier, GSuite is close to the monthly cost of Office 365, and Google Sheets is not even close to Excel. You can basically program mini-applications in Excel, vs. a basic spreadsheet app for medium-sized spreadsheets that multiple people can edit in their browser with version tracking.


I have Excel, and of course Sheets. I actually find Excel to be too full of features (same with Word). It's just too hard to figure out a simple thing like having the top row be column headings (yes, I do know how to do it, but it was not immediately apparent).

"Someone, somewhere, might want to do this, so let's put it in" is obviously what's happened with Office for 30 years now.


Excel is the #1 programming language by a wide margin, and it evolved rather than was designed.


I remember when all reports were Crystal Reports, which you don't see as much anymore. Excel is on the way out.


Exactly. Every release, some PM or senior engineer wants to add some features, no doubt for good reasons, and it all gets shoehorned into the UI. Thus the rule I generally espouse, of

"Simple things should be simple. Complicated things should be possible."

turns into

"Everything is possible, but nothing is simple."


My summary view of much of Microsoft, and more of tech, is that, while a lot of the software is really good, too often the corresponding explanations and documentation are not.

There seems to be a theme that the user should learn to use the software via experimentation. To me, that too often results in too much clicking, and when I finally do figure something out, I take notes, i.e., document some of the software.


I keep pulling data into Google Sheets because of this:

=REGEXEXTRACT("My favorite number is 241, but my friend's is 17", "\d+")
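For comparison, a minimal sketch of the same first-match extraction in Python's re module (REGEXEXTRACT returns the first substring matching the pattern):

```python
import re

# REGEXEXTRACT-style behavior: find the first match of the pattern.
text = "My favorite number is 241, but my friend's is 17"
match = re.search(r"\d+", text)
print(match.group())  # → 241
```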


I just wrote a quick regex VBA function the last time I was at a job using Office.


They're still trying; really really hard. But they are just not good enough.

Maybe, like Zuck, they are/were a one-trick pony. Perhaps becoming a multi-billionaire is not proof that you can invent and monetize something anytime, anywhere, in any industry, on demand.


I think in some ways it's that the scale of the problems Google X / Other Bets / Alphabet went after was rather hyperbolic.

Apple and Microsoft just take what they are good at to move into adjacent spaces in consumer/office tech. Apple has made some failed forays into corporate, while Microsoft has not done terribly well in consumer outside of gaming and maintaining their existing Windows base.

I wonder if Apple's leaks about stuff we never see, like their car and their AR, are just to put other, less well-disciplined firms onto the trail of expensive dead ends.

If you think of automakers and their "concept car" approach in which they present beautiful improbable vehicles they never ship.. Apple seems to LEAK "concept cars" they never ever preview, while Google goes all-in trying to actually ship a "concept car" no one wants, like Google glass.

Apple is the result of judicious application of the word "no". Google looks like a lot of "yes" with weak knees that quickly turn into "no".

I still don't understand what Google's strategy is in almost any space - messaging, social, home automation, audio, AR/VR, other than "yes... actually, no".


A good analysis of Apple: capture the news cycle with your whiz bang thingie, and then forget about it.

> I still don't understand what Google's strategy is in almost any space

Not just you. The "strategy" is "ambitious product manager creates a new 'vision', sells it, gets promoted, then quickly moves on to something else."


> A good analysis of Apple: capture the news cycle with your whiz bang thingie, and then forget about it.

I kind of don't agree there. Apple really only officially announces, demos and talks about products they have to sell you right now.

About the only vaporware I can remember from them in the last decade was that stupid dual charging puck thing they just quietly walked back from.

The only released & quietly abandoned product in recent memory would be the iPod Hi-Fi dock boombox thing.

So when Apple announces something, I'm 99% confident it will be released within days/weeks. And when it is released, I'm 95% confident the product line will still exist in 5 years.

Google, for me, those numbers are like 70% & 20%. Which is why I rarely waste my time looking at anything new from them. Odds are it's either not something I need, or if I need it.. it won't be long for this world, so why bother.

I think they clearly feed rumors to journalists about stuff that is further back in the development pipeline, to the point that the conspiratorially minded might think it's to throw competitors off the right course..


iCloud is funny because it's just storage and an e-mail address compared to GSuite which can be actually used to do a bunch of hobby level stuff.

ChatGPT AI is overhyped, just like full self-driving was five years ago. Yes, it can do stuff: write articles, code, etc., until people realize that what it spits out is useful stuff mixed with made-up gibberish that has to be checked by a human. It's going to be just great for ad spam and propaganda, though.


ChatGPT generated code has the interesting property of looking like it should work, but is subtly completely wrong.


Yes, it's funny when people post obviously cherry-picked examples of it working.

So far the code I've seen is the same level of word salad as its text writing. I gave it some really easy ones, and it responded with one-liners that would produce runtime errors, let alone a correct result. I find it interesting since it's really just synthesizing new content from existing content it has trained on, yet it produced simple one-liners that were wrong.

Lorem ipsum.


iCloud is an example of what happens when you actually try to sell a product.

Obviously GSuite does far more than iCloud, and yet..


Does GSuite have Hide My Email?


Their API offerings are a hot mess to set up.


YouTube may be unkillable, but how much money does it make? In particular, what does the ROI look like compared to Twitch and TikTok?


I don't know, but it does seem like they've reached Peak Ad. The Rick Beato videos get interrupted at least four times with ads.


I never quite understood this. New tricks only have to have ROI better than 14% or so to be worthwhile.


It’s tricky - companies with high margin businesses are loath to scale into low margin businesses. It makes the firm look more like a commodity than it is. Meanwhile startups in large enterprises are often desperate to grow revenue to show relevance.. leading to low margins.


It just seems like they wasted their lead, margin and talent.

Bell Labs produced a lot of interesting stuff with their monopoly profits. Apple R&D seems to come up with lots of interesting new product lines.

GOOG bought Nest and managed to become an also-ran in home automation.

GOOG other bets might produce something with Waymo, but when.. all this autonomous car stuff is continuously 5 more years, every year. That's about the closest they've come so far, no?


Just give it time, Steve. 5 more years should be about right :)


This isn't right. In the real world every effort saps a little bandwidth from the CEO/top execs, potentially harms the brand, potentially causes the org to calcify. Better off handing the money back.


One solution to that is to spin off a company that does these things while keeping majority ownership.


My friend Jerry and I explored this topic in detail using Xerox PARC as an example [1]

I think Walter is right that spin-off is the right strategy, and (not very well known) Xerox actually did do that with a lot of PARC innovations. Just not with computers.

[1] https://www.albertcory.io/lets-do-have-hindsight


It's probably better, but I'd just as soon take the money back as a shareholder.


Most shareholders would be happy if they could average a 14% return.


Yeah I'm saying that what appears to be a 14% return won't be, and/or creates risks. Putting small amounts of capital to work to earn 14% is a distraction for Google. The distraction has the potential to cost 100's of billions.


> The distraction has the potential to cost 100's of billions.

Not doing this has the potential for Google to miss the next technology trend, potentially crashing Google completely.

That being said, I completely agree with you on a different note: "professional" spectator sports in schools and higher education. In my opinion, they should be completely banned. Every single such initiative distracts from a school's or university's core mission. It does not matter if each dollar they spend on it makes back ninety cents or two dollars. It is still a distraction that everyone has to do because everyone else is doing it.


Makes me realise Microsoft are actually the most diversified of the big players now


Further to this - they not only want it, but they kind of need it if it's to move the needle. It's a difficult situation. A nice problem to have for sure, but still difficult.


Why should they find another way to make money? Think about it from the shareholder's perspective: As an investor, I chose to put my money into an ad distribution company. I know this business has a finite end, and I'm okay with that. I'm gambling that I'll get more money between now and when they close up to give me a good enough return.

If I want to invest in an X or Y, then I will do that. In fact I am well diversified into hundreds of other companies anyway. But I don't want my ad distribution investment clouded or diluted with cars, AI, balloons, or anything else.

Let a business be good at one thing (possibly closely adjacent things), and live as long as that's a valuable contribution to society, then close the doors.

It's the execs that want to diversify into other areas, because this is what keeps them in the game longer. If the business they are currently running winds down, that may not be good for them.


It’s not the people, it’s the management that decided which traits they would reward, both at hiring and during the career.


At least on the tech side of things, they decided to hire the same type of person: uncreatives who memorize leetcode questions and have no ability to think outside of that box.

Memorize leetcode, regurgitate. Memorize system design, regurgitate.

There is no room for creatives in that process.


Thankfully, when I was graduating they had some strict school & GPA filters, such that someone as lowly as me wasn't even dignified with a response.

Wall Street tech DGAF about that though.


It seemed like Google's hiring filters were about what a Stanford student might think is important (that socioeconomic class, had learned what Stanford students do, thinks whatever they don't know is less important or follows naturally from what they know, preferably recent grad (again, student thinking), with affluent lifestyle expectations, etc.).

Wall Street, OTOH, is about making money. They might also not fully know how to hire the people they want, but it seemed they valued scrappy more, especially when it leads to real results (which are maybe easier for them to see, when it can be pretty directly quantified in dollars).

(Though the Wall Street recruiter I talked with, back 2 decades ago, when I first wanted to work for Google, did ask about my GPA. So I told them I had a perfect GPA, like most of the other students in that grad program, and that that wouldn't tell them much about someone's potential. Then I told them I didn't really want to work on Wall Street, which had a bad reputation from the preceding decade, though the $400K to start was maybe 10x what I was making at the time. Then Google gave me some asinine brogrammer student nonsense in the interview, before that was a thing, and before wannabe companies mimicked it. So I ended up going to neither company.)


I graduated between dotbomb & GFC, so my dad was convinced programmers would never have jobs again. Further, Silicon Valley was not exactly top of the list for me given the preceding crash. Finally, if you wanted to stay on east coast in late 90s/early 00s in tech, there wasn't a lot of choice outside Wall St.


And yet there are fairly high-level Google employees who came in with acquisitions and don't even have college degrees.


If the "Google other" revenue is $25 billion, how many unicorns is that?

As for what Googlers told you, keep in mind that they are pretty loyal: they have that LaMDA chatbot as if it were an employee perk, and only one of them leaked a sample.


$25B is 1/2 the revenue of Apple "wearables and accessories" or 1/3 the revenue of Apple "services".

$25B is also less than what Amazon now makes on ads, which should especially give Google pause. Consider that Amazon started as simply an online retailer and has successfully spun up other business lines that now account for 50% of revenue across online fulfillment / 3rd-party services, physical stores, subscription services, ads, cloud, etc. Four of these are bigger than "Google other".

Sure, big absolute number, but trivial as a percent of a FAANG scale company revenue.

Apple & Amazon have no one product category making up more than 50% of revenue. Meaning they can take some serious pain on any product category and it not be fatal. This also means that they have staff distributed across a wide variety of product lines.

Google has 7x the staff they did in 2009. This seems like a precarious position to be in during this part of the cycle. I can imagine them coming under serious scrutiny to allocate resources appropriately if say, only 20% of staff are responsible for 90% of revenue. If you can't grow the top line, you can certainly grow the bottom line..


I don't disagree, but I don't think it's fair to lump AWS in with the rest of Amazon's side projects.

If we're comparing Amazon and Google you almost have to compare AWS to Google search since AWS is driving a huge part of their profits.


Technically I think you would shrink the bottom line, since profit is (top) revenue - (bottom) expenses.


ChatGPT would probably be a better search engine than google. ChatGPT and Reddit cover most of your info.

What remains is shopping, but Amazon and even Instagram are better at that.

Lastly, Google has no audience for creating awareness like fb / ig / tt


Amazon makes more on shopping ads than google now. You could say google is no longer in first position for shopping search.

Reddit is chipping away at Q&A search, but time will tell if it has a downfall similar to Quora/Yahoo Answers.

I’ve seen TikTok lead in educational search, cooking, gardening, etc. TikTok could potentially overtake YouTube altogether over the next 5-10 years.

Google maps isn’t as far ahead as it once was. Apple caught up.

Then you've got Microsoft, whose Teams and office suite caught up with and surpassed Google's office products (on what enterprise buyers care about).

Leaving what? GCP as their only disruptive-marketshare product area? And GCP, while high-margin, isn't nearly as high-margin as their ads business. For GCP to be profitable they need to severely cut wages for cost-center roles. A PgM in GCP shouldn't be making 300k when AWS and Azure spend just 200k on the same role.

Finally you’ve got the money pit of waymo, X, and the bets. All of them have neither a real path to profitability nor compelling return on investment.

I'd say the investors are right. Google is the new IBM; return profits to shareholders so they can invest them in more fruitful endeavors.


I think Google Maps is already losing. I was a faithful user since it came out. Gave Apple Maps a few tries but always went back.

The past couple of years Google maps has gotten more glitchy, given worse directions, and most recently started showing extreme lag. Maps in CarPlay started falling behind real-time. First by a few seconds, then by minutes. Wasn’t sure if it was the car head unit, phone, reception or what.

Out of desperation I tried Apple Maps. Not only did it work flawlessly, but its driving prompts are greatly superior to Google maps.

I could go on about YouTube having broken bulk publishing of videos (this for my son's football and basketball games chopped up into individual videos).

It’s all horrible now.


Google Maps is unequivocally the champion in terms of reviews, and this is what really matters when it comes to actually finding places to go.

Not just that, but it's directly monetizable because if the user journey starts with a Google Maps search query, e.g. "lunch near me", those first few results are worth real money.


Isn’t this saying that Google Maps started optimizing for the thing that made money, rather than the thing customers want?

Apple Maps will get better at reviews as more people switch over.


Apple Maps leverages Yelp for reviews, which is a better dataset than Google reviews.

And just like above said, google is optimizing for ad revenue not user experience. This opens the door for a competitor to swoop in and offer better results.


Google reviews are superior when it comes to Europe.


Isn't Apple maps only for Apple devices? If so no competition for GMaps.


Well I'd say they both put all their eggs in one basket & let that basket deteriorate. Show of hands - people who feel google search has improved in the last 5/10 years vs people who feel it's gotten demonstrably worse?

So they have a monopoly product that is something like a 2 sided market (advertisers & search users) where both sides hate using it..


> Show of hands - people who feel google search has improved in the last 5/10 years vs people who feel it's gotten demonstrably worse?

Google's unwillingness to show any results older than Taylor Swift's last breakup song made me switch to DuckDuckGo.

It was a bit rough at first, but in time Duck turned out to be just as good as Google, and sometimes better for the sorts of thing I search for.

Then, about four or five months ago, something happened to Duck, and it's gotten significantly worse. Short lists of results. More results that are off-topic than on. In the last couple of weeks, for me it's as bad as Google was before.

I'll stick with Duck because it's not trying to harvest every morsel of my life to make a quick buck. But I wonder if anyone else has noticed a very quick and very significant change in quality with their DuckDuckGo results?


Yes, have noticed duck duck go result quality get worse.

Anecdotally it seems they do more fuzzy match and adhere less to exact matches even when keywords or phrases are quoted.

It's been frustrating to see specificity of results decline.


Brave search has been surprisingly good for me. I actually had it find me something Google was failing to find for the first time yesterday. Normally I have to occasionally try a Google search for nuanced topics, but this time it was the opposite.


It autosuggests and I don't even have to click on links half the time anymore. I'd call that progress. Not ground shaking like ChatGPT but half the time I get the (right) answer to my question just by typing some related words into the search bar.

I don't know how many Google searches I did yesterday, but it feels like a lot.

It does have problems with certain queries beyond a certain depth, which I also get frustrated with, but I can go from having forgotten calculus to integrating by parts with the help of Google.


The autosuggest, where they filter prompts to only safe terms.

You literally can't go to page 2 anymore.

It ignores what you ask and answers what it thinks/wants you to see.

Previously you controlled search; now you are lucky if a site is even in the index.


>Show of hands - people who feel google search has improved in the last 5/10 years vs people who feel it's gotten demonstrably worse?

I mean, personally I think this is very likely true. My feeling is that it's the web itself which has gotten demonstrably worse. Diagnosing the reasons for that is a more interesting problem.


> My feeling is that it's the web itself which has gotten demonstrably worse. Diagnosing the reasons for that is a more interesting problem.

I think it’s a little tricky separating the two since Google has so much influence over the web. A lot of the parts of the web which have gotten worse were due to decisions they made to do things like cut ad revenue for normal organic content, declining to punish or even rewarding blackhat SEO, pushing AMP, shuttering Reader, etc. That sold a lot of ads at the time and people said it was needed to counter Facebook but it’s left them more vulnerable now.


"Search hasn't gotten worse, the web has" is like the Simpsons meme "Am I so out of touch? No. It's the children who are wrong."

The only score that matters is the one in real life, in the game you choose to play. If it gets harder, you need to get better.


Actually, at the start the founders didn’t think search would continue to be as dominant as it has been (will have to find the source and update…)


That's right: Larry wanted a question-answering machine. Larry and Sergey "got bored of search" (I heard this from the engineers who were going to weekly presentations to show new search features) when they felt it was just chasing a larger index and showing people what they wanted to click on.


Meanwhile I long for the days where this was Google's focus. I want a better index of the web and I want the search engine to give me the tools to filter out the garbage I don't want. The fact that this isn't a feature tells you everything you need to know about Google's real priority: it's not us, it's the advertisers.


> I want the search engine to give me the tools to filter out the garbage I don't want.

I thought some people might want that.

I have the user interface and the pure/applied math worked out and documented, and the code designed, written, and running.


That would certainly be an improvement over their current strat (ignoring what I want to click on, showing me what's most profitable for them instead)


Maybe really genuinely smart people want to work on something that matters more than just making money.

I think there's a reason the average physicist is much brighter than your average Google software engineer, but gets paid about a fifth as much.


it's smart to be downwardly mobile!


I have more respect for the Wall Streeters: at least they're honest and don't go around pretending they're saving the world with their latest compiles-to-Javascript.


Most people, if they were honest, would admit they work to make money to have a life.

SV has a strong tilt towards pretending whatever BS ad- or VC-supported get-rich-quick scheme is saving the world. It's weird watching it from the outside and imagining the other path not taken.

In 20 years we may look back at social media, smartphones, and the whole PII-data-based economy with its recommendation-algo rage loops as our generation's Big Tobacco.


I don’t know, but Google is very bad at releasing and maintaining products; they already have a horrible reputation for it. They could have done better, but I guess no one gives a flying f when the money is still coming.

Google did more than search: Gmail, YouTube, Docs, Cloud, and that’s just the stuff off the top of my mind.


YouTube makes money.. because ads.

Cloud they have somehow managed to slip so badly as to be competing for 3rd against IBM & Oracle.

Docs/Gmail is a rounding error because it's feature-poor and not monetized well.


> YouTube makes money.. because ads.

Does it? The last I saw about YouTube and profit referred to it as revenue-neutral.


The non-ad moat that I see them losing is if Slack gets into the office app game. So instead of editing a Google Doc and sending the link in Slack, you just edit it inside Slack.


Salesforce owns Quip, a pretty reasonable office competitor. This should only be a matter of time.


I guess they were counting on one of the moonshots to work out.


Update, lol: DOJ to sue GOOG over Digital Ad Dominance


They also spent years polishing their tools like Gmail, Docs, Search, etc., so I guess they still have a lot of enterprise or paid users paying for their services.

Actually the numbers are quite impressive[0]:

"The global G suite business software market is expected to grow from US$ 2,224.76 million in 2021 to US$ 3,903.72 million by 2028; it is estimated to grow at a CAGR of 8.4% during 2021-2028. G suite provides professional services with productivity tools to stay connected and organized to meet the clients' needs." (4 Apr 2022)

[0] https://finance.yahoo.com/news/worldwide-g-suite-business-so...


Not only are the numbers impressive, the tools are impressive too.

But that doesn't change the reality that ad revenue related to search is the lion's share of Google's revenue. That makes them exceptionally vulnerable for a company of their size and breadth. And in that vulnerable area, anecdotally at least, the search quality feels as if it has regressed over the years.

Google was once the gateway to the web. Now they are the gatekeepers of the web.

Maybe I'm just nostalgic. Maybe things were never as good as I imagined they were. Maybe my expectations are unfair. But the Google of my memory was innovative. It delighted. That feels far removed from the Google of today.


>Maybe I'm just nostalgic

You're not. It was so much better up until 2018. Now it's only indexing spam blogs and AI-generated content. Search quality is so bad today that DuckDuckGo actually performs better in many cases, especially for software-development-related queries.


DDG uses Bing's index. That "search is so bad that" another trillion-dollar corporation, which has been caught copying Google's search results, sometimes returns better results for niche queries isn't much of an indictment.


$2B USD is like 1% of Google's annual operating expenses.


Yup, but that would be a great path for Google if they want to move away from ads. Years of software engineering in great apps, integrations, Google Maps, etc. can for sure generate more $ if they invest more time in that, but I agree it will never beat the ads machine.


People have been incredibly shallow on the issue. ChatGPT is the harbinger. There will be many more and from different companies. Google awoke to find an enemy scouting party testing their perimeter.


Before it got choked off I had started using chatGPT for all of my searches. For code examples it blows google out of the water. For learning esoteric things (like congressional procedures etc.) quickly, it is without peer.

I think Google's founders offered their take years ago...no one listened then.


ChatGPT is only "truthy", however. The algorithm will happily give you information that sounds true but is actually wrong. And if it's wrong even 1% of the time, you still need some way to verify its claims.


This sounds like experts criticizing Wikipedia, before it dominated. Mostly true and mostly accurate information is useful beyond what we normally think.

Remember how people used to criticize the newspapers? They'd say they read an article on a subject they knew something about and were shocked at how bad it was. Then they'd conclude that the rest of the news was just as bad. What wasn't acknowledged was the utility of having imperfect information.


While I use ChatGPT myself, its ability to invent true-seeming information is insidious.

One of the early popular examples was asking ChatGPT what the downsides are of generics in TypeScript, and it cited that they're not compatible with some browsers/runtimes. It's entirely believable, but entirely wrong, and actually quite hard to confirm either way with internet searches. It's the sort of thing people would readily believe and pass around, though.

Similarly, when I was trying to concatenate video in ffmpeg, it produced convincing-seeming command lines, almost none of which worked properly, but almost all of which used actual ffmpeg command line parameters with appropriate-seeming formats and names. I wasted significant time on debugging those before realising quite how wrong ChatGPT was.
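For what it's worth, the thing ChatGPT kept getting wrong has a well-known correct form: ffmpeg's concat demuxer, which reads a text file listing the inputs and stream-copies them into one output. A minimal sketch (the clip filenames are hypothetical placeholders):

```python
# Sketch of the ffmpeg concat-demuxer workflow: write a list file,
# then invoke ffmpeg with -c copy so the parts are joined without re-encoding.
from pathlib import Path

parts = ["clip1.mp4", "clip2.mp4"]  # hypothetical input files
Path("inputs.txt").write_text("".join(f"file '{p}'\n" for p in parts))

cmd = ["ffmpeg", "-f", "concat", "-safe", "0",
       "-i", "inputs.txt", "-c", "copy", "output.mp4"]
print(" ".join(cmd))
```

This only works cleanly when the clips share the same codecs and parameters; otherwise re-encoding (dropping `-c copy`) is needed.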


I like to describe ChatGPT as a highly weaponized form of Gell-Mann Amnesia.


I would ask it if it's sure.


> What wasn't acknowledged was the utility of having imperfect information.

That's an interesting take, and one I haven't heard before. My main squeamishness with bad journalism, and even worse, with chat gpt, is not truth-content, but rather sourcing.

If you know the source, you can account for its biases, or, your reader can do so. As such, well-sourced quotes of complete nonsense tend towards adding to the sum total of truth in the world, because it was true that somebody said it, and their perspective is part of what's interesting in whatever event you're discussing.

If you scrub the data of its sources (like journalists do routinely, and chatgpt does by necessity) important information is lost.


Anyone criticizing ChatGPT now isn't thinking about what's possible with GPT-4 or GPT-5 when lots of research has gone into getting it to be better with answering accuracy. It doesn't take too much of an imagination to see a path where people aren't doing nearly as many searches on a search engine and just asking questions to something like ChatGPT.


I can also see this being a major drawback.

Any bias becomes everyone’s bias. Not saying it doesn’t happen at all today, but wow, scary.


It's always been my belief that the full-on faith put into future AI will be "how the AI takes over".

We will look at these things as divine correctness, and eventually forget their human construction entirely.


...and overlook their faults?


The only thing that matters to me is overall usefulness. If I had access to a practitioner of a craft to teach me, and they mostly led me in the right direction but also had some dumb ideas and incorrect practices, well, I would still want to learn from that person and expect myself to be critical and improve on the work of my teacher.

As long as ChatGPT assists me more than it harms me it is quite powerful simply due to the number of different things it can assist me with, and the raw scalability.


On the other hand, the same is also true of Google results.

The web is chock full of incorrect stuff, both unintentional and deliberately deceptive.


From a user experience perspective it's interesting that it may be a worse experience to get a set of links and have to pick from them and use cognitive effort to discern which sources are reliable, than to simply use ChatGPT and get an answer. But this step is precisely where verification happens.


I think you are vastly overestimating the amount of cognitive effort most people apply to Google search results.

It's more like "First one on the list? Well, that one's right -- unless it disagrees with my personal biases, then maybe I'll look for another one."


I didn't suggest that people are good at making that effort. They aren't. But the irony is that it's precisely in that difficulty that the act of verification lies. The fact that it's so difficult that it's not at all an efficient method for discernment suggests that it's either done on purpose, or Google's UI is just not good for that purpose, or both.


Internet content is very rarely verifiable. Very few websites cite any sources aside from Wikipedia.


At least with traditional search results, there are some indicia of their trustworthiness: does the author/outlet have a history of deception, are their assertions well-sourced, etc. With a chatbot result, it's effectively a black box.


It is wrong much more than 1% of the time.


Proving statements reasonably true is a lot easier than coming up with them from scratch. ChatGPT finds the needle in a haystack; your job is to check that it is a needle. For some questions it mostly returns junk, but it returns needles often enough to speed up whatever you were doing.


In any advanced topic the error rate is close to 60%.


Sadly, so is Google. I've been searching for advanced Blender topics yesterday and even with 20-30 tries at writing a search phrase I just got a lot of the same ****.


So are google search results. You have a few really good resources, and many listicles paraphrasing them. With Google completely ignoring the keywords you typed, you end up with a lot of the latter.


Agree with everything you said, but I want to dig into why I don't find the critique compelling.

In short, I see a need for us all to inherently distrust any single thing we read -- everything needs to be verified and checked out across multiple (hopefully diverse) sources of information before it can be considered "truth" in any sense. I do develop trust in certain sources over time, but meta-sources like an AI or a search engine don't get this treatment because their scope is too broad. As an example, I tend to trust folks like Fabien Sanglard[0], Eric Goldman[1], and Adam Langley[2].

Example: over on the Morrowind subreddit a few years ago, an artist had been painting landscapes of the game "Morrowind" (oil on canvas, IIRC), and when he went to print them as posters to sell to fans, the site he wanted to print them on warned him that printing his own paintings of vistas from Morrowind was copyright infringement (it's not). He came to the Morrowind subreddit for help because he'd been actively misled.

Sites like plagiarism.org[3] confidently say false things:

> But can words and ideas really be stolen?

> According to U.S. law, the answer is yes. The expression of original ideas is considered intellectual property and is protected by copyright laws, just like original inventions. Almost all forms of expression fall under copyright protection as long as they are recorded in some way (such as a book or a computer file).

This is confidently incorrect in a couple of different ways. It might seem like incompetence (and may be), but it's worth noting that plagiarism.org is actually run by TurnItIn, which profits from people not really understanding how copyright works.

Similar example (I tend to look into copyright, trademark, and patent issues a lot): I got into a discussion with another HNer about IP law, and they cited an article written by an actual lawyer[4] on a site called "The National Law Review" that's simply wrong about IP law to defend their position. It's tragic, because we've built this sort of cargo cult around "citation, please?" but then all-too-often don't keep our critical thinking skills engaged as we evaluate the source.

This is true all over. I've read HN daily for more than 10 years. Not only does every comment need scrutiny (even if stated confidently), but most stories linked on the front page do as well. It's not just blog posts or opinion pieces; some scientific studies that are posted get torn apart in the comments, to enough of a degree that, after cross-checking, I tend to think the study itself is deeply flawed.

And there are numerous stories about how Google's Quick Answers confidently give the wrong answer, as highlighted in 2017 by the WSJ[5].

I'm not saying accuracy doesn't matter, but I feel the "ChatGPT is confidently wrong" needs more context or comparative analysis around it before it becomes a compelling argument, since "confidently wrong" applies to search engines and humans as well. I haven't seen any in-depth studies on this, but would love to.

[0]: https://fabiensanglard.net/

[1]: https://blog.ericgoldman.org/

[2]: https://www.imperialviolet.org/

[3]: https://www.plagiarism.org/article/what-is-plagiarism

[4]: https://www.natlawreview.com/article/enforce-your-intellectu...

[5]: https://www.wsj.com/articles/googles-featured-answers-aim-to...


"Similar example (I tend to look into copyright, trademark, and patent issues a lot): I got into a discussion with another HNer about IP law, and they cited an article written by an actual lawyer[4] on a site called "The National Law Review" that's simply wrong about IP law to defend their position"

The problem with "citation, please" is it only works if the person you are engaging with is acting in good faith.

If I type in "Can intellectual property rights expire if not litigated?", that web site (National Law Review) doesn't appear in the top 10 results. So I'd question where they got that link.

Not so long ago I got in a debate with someone here whether muscle exercises can lead to long term increases in testosterone levels.

The person I debated with provided a link to an abstract of a study that didn't look quite right. There was no link to the actual study.

Instead of engaging with them I confirmed that, within a minute or two, I was able to locate a high quality, free to read, medical literature review on the topic.

Maybe my ability to type keywords into a search engine is superior, but instead I came to the conclusion that the other poster was half-assedly looking for search results to "win the argument" rather than engaging out of intellectual curiosity.


Absolutely agreed. I see so much bullcrap being peddled by people on hackernews and reddit it's absolutely maddening. People hide behind having a source like it's an immovable shield that protects them against having to perform their own critical thinking.


If they adopted ChatGPT, you wouldn’t get the ads next to links, and their entire monetization strategy would need to change. I’m sure many at Google brought up the technology as disruptive, but leadership didn’t want to take the revenue hit.

It’s a common challenge and similar to why Amazon was so successful at disrupting retail. Retailers could have competed, but not without huge hits to profitability which shareholders would never get behind.


Without peer? It gets facts completely wrong for which Google produces the correct result on the first hit.


Are we using the same google? Unless I add 'reddit' to the query, I can hardly find anything useful anymore.


I am not claiming Google always provides the best results, but ChatGPT many times gives completely incorrect answers for which the right answer is found in the first result of a simple Google search.


I honestly think this is the start of a long demise and one day we will look back and marvel at how big they used to be. It's not something I look forward to though considering the masses of personal private data they have amassed from their users.


This is an old myth. 40% of their revenue now comes from other properties such as YouTube, Gmail, GCP, Android, Maps, etc. They are fairly comparable in diversification to the rest of big tech, but still have a little way to go.


Couldn't Mitchell Baker start to freak out too?


Finding useful links on Google is antithetical to ad clicking.


Former Googler here.

20% Time is a myth and always was. Yes, occasionally someone would talk about "their 20% time" but it was rare, and usually just meant "spending a few hours on something else." Admittedly, once in a while someone really did spend 20% of their time on something else, and management was not allowed to complain.

Once, in about 2006-07, I was at a table with about 8 software engineers and asked for a show of hands, "How many have a 20% project?" There was one hand.

What it WAS useful for was checking out a project you might want to transfer to.


By your own story, it's not a myth. Not everyone has one, and your boss had to be supportive of you working on it, but there are a bunch of internal systems that are supported by 20% time. It might be more appropriate to frame that 20% as giving 120% (aka working on it on Saturday instead of Friday) for some of the things, but it's a real thing that exists. Just because not everybody has such a project doesn't make it a myth.


The "myth" is that everyone has a 20% project. We could find lots of people and stories in the press who really believe that.


Just because something vaguely resembling the popular story seems to exist doesn’t make it not a myth.


How much of that is Google’s fault, and how much is that it’s hard to have a good idea for a personal project and follow through with it?

When I was in my 20s, all my friends (young, single, kidless, so plenty of free time) would often TALK about wanting to do a project, but few ever did.

I wouldn’t be surprised if for most people the marginal sense of accomplishment and meaning is just higher by working on their day job (which is already cush) than trying to hack a random idea together.


> it’s hard to have a good idea for a personal project

There was actually a 20% marketplace page, where projects seeking 20%-ers would advertise. It wasn't necessary to have an idea yourself.


Just guessing here but sounds like you had to have interest in someone else’s crappy/uninteresting idea, which likely was rare. It’s like those cofounder matching sites, most people have a crappy/uninteresting idea they want others to help them with. People without ideas go there and immediately turn away.


There certainly was some of that: dumb ideas that were going nowhere.

Also a lot of genuinely interesting projects had managers with no idea how they could use a 20%-er. Especially if they didn't really believe the person was reliable.


Another thing to consider is that it's usually 100%+20%, not 80%/20%. It's because you can do whatever you want to do with 20% time but that's usually not going to help your perf unless you can convince your director/org as well, not just a manager.


> Also sounds like they really believe the whole founder-hero mythology

It’s not that at all.

Given that Larry + Sergey still control Google (>50% of shareholder votes), it seems prudent to meet with those who control the company to debrief them on current events & the action plan.


^ this is correct.

As owners, they can provide cover for management to put more discretionary funding into combating this threat.

I think Sundar has yet to prove himself as a great CEO, but this is good leadership.


I think this goes back to Google’s decision to be evil.

Being evil has a lot of overhead: you have to invest significant time into researching grey areas to exploit and then craft your product or service to not step over the line. Then it falls apart when you get to Europe, where they are happy to call you out on being evil and fine you billions of euros for doing evil things.

Google should go back to not being evil, that worked a lot better for them, and just focus on making cutting edge products.

It would also be great if they answered the phone on occasion, and did more to listen to their users. Community support is an obvious cop-out, and nobody wants to be dismissed without warning or explanation when it involves something they care about.


Google was always "evil"[0]. They've never done support; the entire business model was "automate everything at scale". The "don't be evil" thing, aside from being a jab at Microsoft, was backed up mostly with talk about short-term vs long-term profits, rather than not taking all the cake on the table for themselves.

[0] Assuming your definition of "evil" includes "don't answer support calls to save a buck".


"Being evil", as you say, means saving money by leveraging synergies between products. If it wasn't a successful strategy, it wouldn't matter.


> I don’t know how Google could be saved but perhaps having grateful and happy employees that would just build things in their 20% time might have been a better strategy.

I'm not sure 20% time is a better strategy. At least it has not been proven yet. For starters, which successful products were initially Google employees' 20% projects? Google Search certainly was not. Google Search Ads was not, per the book In the Plex. The product went through multiple iterations, and a math whiz from the U of Waterloo conceived the idea of using a second-price auction system. Google Maps was not, per the book Never Lost Again. It was developed by the Lars project initially at Where 2. GSuite was not. It was from the startup Writely. Google AdSense was not. It was from DoubleClick. Google Chrome was not. It was a strategic product that multiple people drove, including the current Google CEO. Google's self-driving car was not. Page personally drove the project by working with leaders like Thrun.

On the other hand, breakthroughs like ChatGPT can't be from a 20% project. It requires full-time dedication of multiple world-class researchers and engineers to develop a series of models, to set up a massive infrastructure for training and data processing, to implement an efficient process for human input, and to invest many millions of dollars to buy GPU time and crawler time.


Gmail is the breakout hit that came from a 20% time project.


Oh yeah, you're right. How could I forget Gmail. The value of the 20% project, in my experience, is not giving employees additional time, as Googlers can find time easily given the company's lax culture around deadlines. Instead, the value is that such projects do not need coordination or approvals from dozens of committee members.


(for those curious about Google Ads history, Eric Veach is the one who proposed second-price; I believe he was unaware of its previous existence when he did). https://en.wikipedia.org/wiki/Eric_Veach


> Also sounds like they really believe the whole founder-hero mythology if they think that Page-Brin can save the company

Maybe I'm overly cynical, but don't they hold > 50% of voting shares? It may just be C-suite CYA action: by ensuring Brin & Page's buy-in on counter-ChatGPT strategy this early, who would blame the current CEO if it doesn't pan out? It won't be "Sundar Pichai's plan" that's at fault


Why would the owners pay a man $200M/yr of their hard earned money, if he comes to them to say he doesn't know how to do his job?


Their money isn't hard earned.


They don’t need to believe it, they just need to believe that other people will believe in it. Even CEOs can get a lot of pushback and since they’re looking to do drastic changes it helps to get as much buy in as possible.


Simon Sinek has a really great point about companies endlessly pandering to shareholders, basically being equivalent to sports teams endlessly pandering to fans. The objective of a truly good coach/leader should be to successfully accomplish a goal, and often that will require making some portion of your "fans" unhappy. If coaches acted the way modern CEOs do, they'd only ever run one play


Simon Sinek is a LinkedIn influencer with very limited insight into anything beyond sounding smart and selling self-improvement books.


>is a LinkedIn influencer

the fact that this is a thing just makes me sad


I didn't even know this WAS a thing, I could probably do this between myself and ChatGPT and make millions. ;)


Are you prettier than Simon and a smoother talker?

It's a crowded competitive market. The content has always been the easiest part.


You also have to have no shame or dignity.


I blame the short-term thinking for this. Everyone wants to see constant progress, there has to be growth every quarter, otherwise the stock goes down, investors are unhappy, CEO has to explain himself and promise quick recovery.

It happens in sports, too - a new coach is hired, and they get kicked out after a year because the team didn't show much progress. But often it just takes time to work on fundamentals, build something new under the hood, maybe introduce changes in training/org structure/hiring, and only once that's done can the progress become visible. But fans/investors/owners always want it NOW.

What are examples of large companies that said "if we want to be on top in 10 years, we have to undergo a transformation, and stop thinking of short-term gains"? Meta is the only example that comes to my mind (though I believe their strategy won't really work, so not the best example).


Google only has 2 shareholders to pander to. Everyone just gets to sit and watch unless he's unambiguously destroying huge amounts of value.


It's not like they're going to sit down and write some new AI algo to put Google back on top. What they are doing is just cutting the Gordian knot of management traffic jams. They can come in and say "this is what we're doing, and this is who is doing it". Even for Sundar to try that would require months of wrangling and deal making. But Larry and Sergey can come in and force their will on the company.


Does this not reflect badly on Sundar?

To have this "surprise" freak-out catch them off guard? To bring in the old leadership so publicly?


In my frank opinion, Sundar is the worst performing of the FAANG leaders. He reminds me of Ballmer's flat growth, but perhaps much worse - we'll know the true scope of the damage soon.

His leadership has put Google's single biggest revenue stream in the cross hairs of competitors, and Sundar should have seen it coming. He's done nothing to secure Google's future or broaden the company's revenue sources.

Perhaps the biggest disappointment is that Google has been unable to move the needle on cloud. This despite the fact that Google had the best in the business once upon a time.

There's no innovation out of this company - only vanity projects, product cancellations, and in-fighting. Google's sparkle is decades in the rear view mirror. I'd describe them as the Walmart of computing, but even Walmart is formidable. Google is paper thin.

Microsoft's Nadella, by contrast, is a phenom. Developers (Github, VSCode), Cloud, and B2B (Office, LinkedIn) are kicking ass. I don't even think about their consumer-facing stuff much anymore, because the rest of their offerings are so good. Their choice to invest in OpenAI is really paying off too.

Google is fat. Fat, lethargic, and dumb. It's not somewhere worth working at until they axe the incumbent leadership. Their stock option-based comp is going to take a rough tumble, especially if they don't show a willingness to reverse course immediately.

Microsoft is going to eat Google's lunch.


The fear is a lot deeper than quality of chat GPT's results for now.

Google is afraid that ads driven business models might get dislodged for "give me a service and I will pay you money for that" model. That will be a nail in the coffin. Imagine someone brings a marginally better conversational search engine with a low enough subscription price that some people migrate to use it rather than ad infested, SEO driven web results.

Also, with benefit of hindsight, Microsoft seems to be planning to attack Google from the flanks.. competition on a different business model than a better ad driven engine.

"Sundar isn't exceptionally promising" is spot on. We have so many cases of small business people being locked out of their accounts, etc. They seem to have preferred being a cool engineering company, not a B2B product company that other businesses can trust.

It is probably the wrong question to ask, but do some Google employees anonymously feel that the culture wars waged within the company (the Damore episode, the AI ethics team's fights with leadership) have caused the company to lose focus? I'm not disregarding the moralities of the positions, just wondering whether many senior executives had to spend time firefighting the company's image in the media due to these episodes.


Well, they should not count on the founders' "vision", as they tried to sell Google for $1 million, and after a little bargaining even went as low as $750,000...

https://www.theverge.com/2019/12/4/20994361/google-alphabet-...


“You're not a wartime consigliere, Tom. Things may get rough with the move we're trying.” --Michael Corleone, The Godfather

Said to Tom Hagen when firing him.


> Also sounds like they really believe the whole founder-hero mythology if they think that Page-Brin can save the company.

Sergey and Larry have the majority vote so if you want to significantly change the business strategy you need to get their sign-off. It's just a business as usual.


It’s easy to lead in good times. Your main goal is simply not to rock the boat. Maybe every now and then you do something flashy to avoid complacency. Google’s had enough capital to more or less do anything and everything the team wanted to do for the last 15 years.


Google already has LaMDA. I bet they’re full steam ahead on making that work with search.

As a side note, I think Sundar is a perfect CEO for a time like this. He’ll do whatever it takes to get the future lined up without injecting a lot of ego into it.


Looking busy is the most successful trait of these billion dollar companies


I think that Google has the ability to absolutely crush ChatGPT. LaMDA is so far ahead that Google employees were raving that it was sentient. So they could disrupt everybody.. except for two things.

First, the woke crowd deeply embedded all over Google will burn the whole place down if there is anything controversial that comes out of the AI. They have spent years slowly degrading YouTube and Google to prevent anything from showing up under an extremely broad definition of misinformation or anything that could be considered derogatory of any sensitive group. AI is like a Chernobyl of potential microaggressions and misinformation. Evidence of this fear of their own AI is that the deepmind guys said they wanted to even stop releasing research papers because they were worried about what bad people would do with AI research.

Second, the innovators dilemma. How are they going to monetize AI and get people to click on ads?

They need Larry, currently hiding out in Fiji, and Sergey, playing around in retirement, to come back and do the work of changing the culture to actually allow these two disruptive changes.


> Google announced it’s laying off more than 12,000 employees and focusing on AI as a domain of primary importance.

I have had an Android phone since Froyo. I liked it when the functionality was simple and didn't try to inject too many suggestions / too much AI. Now the Google Photos app is trying to suggest things to print, and the Settings app is trying to predict when and how to charge my battery (I just want it at 80%, all the time).

The more AI Google applies to my life, the more I want to escape it. I remember when it was said that people pay a premium for Apple products because they want privacy. I wonder if it will also become: people migrate to Apple to escape AI.


The one that bothers me most is that background services are now managed by some AI thingy in order to 'optimize battery usage'. Ergo, background services are now unreliable.


As an example, Facebook Messenger has become almost completely unusable on Android phones because the OS keeps killing the process despite the user specifically telling it not to.

I'm not 100% convinced that this is by accident.


This varies heavily by phone manufacturer. They employ different strategies to inflate their battery life stats, and Samsung is currently the worst offender: https://dontkillmyapp.com/


Why does facebook need a background service? Isn't there one global notification service that is super reliable and gets messages for all apps using it?


Somewhat ironic, as this sort of issue started becoming more prevalent on the Linux desktop once distros started adopting systemd-oomd, and Facebook is the one that submitted portions of their oomd solution to systemd.


A few years ago when I got rid of all FB apps from my phone (can't recommend it enough), the main reason was that their apps were draining the battery hard, even when not used (when deciding whether it was spying or dev incompetence, I'll be nice and lean toward incompetence). Maybe it's still the case?

It's bad that a platform that is supposed to be the freer alternative is not so much so. Similar story with me and Strava - when I record some walking, Samsung's latest Android 13 sometimes kills the tracking thread after a while. So my maps sometimes look like I invented teleportation; pretty annoying, and it made the app useless and unused. I checked, and Strava doesn't have any settings for this enabled.


I thought phones had hooks for notifications n stuff regardless of if the app was open. Shouldn’t that cover most things you care about in the background? Am I wrong or is this different for Android (I’ve only used apple)?


Well, for a messaging app, it either needs a background service polling the server for new messages, or it needs to use Firebase Cloud Messaging. AFAIK, FCM is essentially a service running on your phone that receives messages from Google's servers and forwards them to your apps. And backend servers can send messages to those Google servers.


Wait until you find out about 5G causing brain cancer.


Nowhere near enough radiation. See the inverse square law.
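To put a number on that: an isotropic source's power spreads over a sphere, so intensity falls as 1/r². A quick sketch with a made-up 1 W transmitter:

```typescript
// Free-space power density of an isotropic radiator: S = P / (4 * pi * r^2).
// The 1 W power and the distances below are illustrative, not real 5G figures.
function powerDensity(pWatts: number, rMeters: number): number {
  return pWatts / (4 * Math.PI * rMeters * rMeters);
}

const atOneMeter = powerDensity(1, 1);
const atTenMeters = powerDensity(1, 10);
// ~0.01: ten times the distance, one hundredth the intensity.
console.log(atTenMeters / atOneMeter);
```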


You are so right. I like the Pixel phones, but all the big features of the later models were about being "more helpful". Or "more annoying" as I see it. Do not want! I just have to turn off all that crap if possible.


I moved off Spotify to escape AI. It was good at first, but I started to feel like I was being pushed into a rut with music, that I’d lost all autonomy when it came to finding new music for myself. I wonder if the novelty of a lot of consumer AI will wear off for others in the long run.


Spotify certainly does sometimes give unexpected results. A few months ago some random train of thought made me think of the song "I Don't Like Mondays" by the Boomtown Rats and I decided to listen to it.

I went to Spotify's search and typed "i dont like" and hit enter, figuring what I was looking for would probably be near the top of the results. It was indeed the first result.

But man...the rest of the results were something else [1]. I've never done anything on Spotify that would suggest I might like most of those songs, or in many cases that I would even be interested in their genre.

Eventually curiosity got the better of me and I did listen to some of those to see if they were as bad as their titles suggested. They were.

But now I think Spotify has decided I like that kind of music.

If I do the same search now it has many of those still but now adds songs called "Sex Offender Shuffle", "Niggas in My Butthole", "3 BIG BALLS", "Alabama Nigga - Original Version", and "I Don't Like Beaners" to the top 15 or so.

There are plenty of songs with "I Don't Like" in the title or something close to which are about things like relationship problems or self esteem that are in the search results, so it is not like they only had songs about disliking certain races or songs about people's genitals to choose from.

[1] https://imgur.com/a/zks8UoQ


Wtf? That playlist is horrible.


Amusingly, I find that Spotify will put B(maybe even C) songs or artists in front of me that I'd never hear on my own, but that have little moments that I enjoy experiencing, and this happens more frequently the more I use Spotify.


> that I’d lost all autonomy when it came to finding new music for myself

Music discovery is the best feature for me in spotify. Especially if you listen to more unpopular genres like french/german techno, old school hard trance. I found plenty of great artists there which have <5k monthly plays.


I wonder whether I got in a bad A/B test group or something - I tried Spotify after Rdio shut down but no matter how much training I gave it the recommendations kept going back to the same pop albums they were promoting. It was very weird to see after reading all of that echonest guy’s blog posts about the cool things they were doing.


Somebody I know also had this issue at one point. In his case it was his girlfriend, who also used his account (but apparently without creating playlists or hearting anything, idk).


In my case I imported my Rdio history so it’s possible that something went wrong with a difference between loading albums & seeding the recommendation engine. It was really weird because I was expecting better based on most of the other users I knew.


The import could have something to do with it. When I started using Spotify I used a playlist importer that included many false positives when it couldn't find a particular song. I ended up with a bunch of live David Bowie recordings that I've never heard in there, and now it still thinks I want to hear him every time I shuffle my liked songs. It could have chosen a worse artist, but I don't need to hear him every time.


Talking about Discover Weekly playlist: I think the recommendation engine is mostly based on what you actually listen to. When I listen for a week to emo rap, I get emo rap recommended in the next one.

I think song/album radio is not exactly based on your taste, but on what spotify considers similar.

Try making a playlist with your favorite songs and press "enhance" or "playlist radio" to see recommendations based on that context.


I used it for a couple of months and it never got better. I ended up switching to Apple Music but I would hope that they've fixed it by now.


got any french/german techno? I would really appreciate it.


Sure. I'll post some artists with my favorite songs as youtube links, as I know that not everyone on this site has a spotify subscription.

On a side note: it looks like techno has had massive growth in the last 1-2 years. I discovered most of these artists when they were much smaller; now most of them have 3x to 10x more monthly listeners. Spotify does seem to be a great platform for discoverability, though probably not for monetization.

---

French:

KASST - Hell on Earth (the most popular on this list. 430k monthly listeners): https://www.youtube.com/watch?v=Fwi1qgSaxZY

Trudge - страсть (this dude has many decent tracks; check out "Sway over the Void" and "Self Love Club"): https://www.youtube.com/watch?v=dx-zzugl-P0

BXTR - Reboot my soul (40k monthly listeners): https://www.youtube.com/watch?v=ruxTTzO3yTI

RAUMM - Perdue Dans Le Noir - BXTR remix (Has French lyrics. 38k monthly listeners): https://www.youtube.com/watch?v=m-xkLnJUvDQ

Automates - Meta (French lyrics too, 29k monthly listeners. One of my overall favorites): https://www.youtube.com/watch?v=Il0_Pcj4nho

And another production from RAUMM, which is more of a 16-minute experience with beautiful lyrics and a lot of trance influence. I'm uncertain if this still counts as techno; I don't think so. But I really love sitting down on the couch, putting this on at loud volume, and just relaxing: https://www.youtube.com/watch?v=1b-KUdT5bU8

I Hate Models - Daydream (pretty famous; 500k monthly listeners): https://www.youtube.com/watch?v=xjCb4CB-hdw

Anetha ft Sugar - Candy from Strangers: https://www.youtube.com/watch?v=ofOPhZvnVF4

Trym - Millenium Pain (ナルト Mix): https://www.youtube.com/watch?v=INwOGDbQFmQ

-----

German:

In Verruf - Let Out (80k monthly listeners): https://youtu.be/pc_Pp0kObxQ?t=1

Parallx - Red Clouds (30k monthly listeners): https://www.youtube.com/watch?v=FEX5xJOuxAM

Inhalt der Nacht - Die Liebe (25k monthly listeners): https://www.youtube.com/watch?v=fVu6UBYuq80

Klangkünstler - Ihr Werdet Weinen (Pretty famous; 500k monthly listeners): https://www.youtube.com/watch?v=vFIiveXMxyI

Introversion - Dystopa (50k monthly listeners): https://youtu.be/KRXuatwFhIE?t=2

B2 - Destiny (64k monthly listeners): https://www.youtube.com/watch?v=zXaqpT323DQ

Kobosil - Rigid (Kobosil 44 Rush Mix) (fastest-growing techno producer in Germany, 500k monthly listeners): https://www.youtube.com/watch?v=Uf1MLglK3fc

AKKI - The Ocean: https://www.youtube.com/watch?v=bD5GrfTy9Ac // Into Your Mind: https://www.youtube.com/watch?v=jWWAGoZmx9A

rezystor - alice (this isn't on spotify unfortunately; rezystor has <10k monthly listeners): https://www.youtube.com/watch?v=fcGa15wEalE

David Strasser - Trust the Process (17k monthly listeners): https://www.youtube.com/watch?v=yY9T6mdg2wo

----

I had to sneak in a track from two Italians because they only have like 1k monthly listeners on Spotify and make good music:

Anticyclone - Ruins Of Las Vegas: https://www.youtube.com/watch?v=vFquUsnXKuw


Thank you tremendously!


It's hard for me to move off Spotify because their API is far better for finding new music than anything else out there.

I have some cron jobs running that generate new playlists every day with very specific targets (e.g. "min_instrumentalness" on a track) while still being random enough to have variety.

If Spotify ever shuts down their API like how Twitter has done, then I'll move to something else.
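
Roughly, the jobs look like this (a trimmed, stdlib-only sketch; the function names and seed genres are placeholders of mine, and OAuth token handling is omitted — the real work is one call to the `/v1/recommendations` endpoint, which accepts tunables like `min_instrumentalness`):

```python
import json
import random
import urllib.parse
import urllib.request

API = "https://api.spotify.com/v1"

def fetch_recommendations(token, seed_genres, **tunables):
    """GET /v1/recommendations with tunable audio-feature constraints."""
    params = {"seed_genres": ",".join(seed_genres), "limit": 100, **tunables}
    req = urllib.request.Request(
        f"{API}/recommendations?{urllib.parse.urlencode(params)}",
        headers={"Authorization": f"Bearer {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["tracks"]

def pick_new_uris(tracks, seen_uris, k=30):
    """Pure helper: drop tracks already used, then sample for variety."""
    fresh = [t["uri"] for t in tracks if t["uri"] not in seen_uris]
    return random.sample(fresh, min(k, len(fresh)))

# Offline example with canned data; the real cron job would call
# fetch_recommendations(token, ["techno"], min_instrumentalness=0.8)
# and then create/fill a playlist with the picked URIs.
tracks = [{"uri": "spotify:track:a"},
          {"uri": "spotify:track:b"},
          {"uri": "spotify:track:c"}]
picked = pick_new_uris(tracks, seen_uris={"spotify:track:b"})
```

The random sampling on top of hard constraints is what keeps the daily playlists varied.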


I'm intrigued. Are you able to share these scripts?


I'm also interested in them!


Apple still do plenty of AI, no? Eg the whole horse-detection thing. Maybe it’s something more specific that you dislike?


I'm with the OP. I sometimes like AI capabilities, but I don't like my phone making AI decisions or predictions on my behalf.

E.g., if I open the share menu, whatever contacts and apps Google guesses are always wrong, and a simple list of whoever I've last shared with would be better.

I also don't want any "smart" contextual behavior (widgets changing out throughout the day, etc.). The phone simply doesn't have enough context to know what I want from it at any given moment. I'd rather just tell it how to behave using my own judgement.


> a simple list of whoever I've last shared with would be better

A decent AI system would include a heavy bias toward recent. Good "AI" should mostly feel like dumb heuristics. I think a good example is directions on my iPhone: when it's plugged into a car it always suggests driving directions. When there's something in my calendar soon it suggests going there. If there's not and I'm not at home it often suggests going home (it's right more than half the time). After that, it just offers a list of buttons for places I've told it about: home, work, school.

I'd be annoyed if it tried to do something more clever (like suggesting the post office because I went there on this day last year). I think a key element is that I have an immediate and intuitive idea of why its suggestion is a reasonable one.
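
The "dumb heuristic" version of share suggestions really is just a few lines; a sketch (pure recency with a frequency tiebreak — names and data are made up):

```python
from collections import Counter

def suggest(share_history, k=4):
    """share_history: list of contact names, oldest share first.
    Most-recently-shared contact comes first; share count breaks ties."""
    counts = Counter(share_history)
    last_seen = {name: i for i, name in enumerate(share_history)}
    return sorted(counts, key=lambda n: (last_seen[n], counts[n]),
                  reverse=True)[:k]

print(suggest(["mom", "bob", "mom", "ana"]))  # → ['ana', 'mom', 'bob']
```

Every suggestion is immediately explainable ("you shared with them last"), which is exactly the property that clever models tend to lose.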


Apple's AI is on your phone, and it's unobtrusive. Recently I needed to quickly find a company's QR code I had photographed, and Photos search found it in an instant.

It even finds text inside images.

I'm sure Android has these features, but I don't know if they work offline.


> Apple's AI is on your phone, and it's unobtrusive. Recently I needed to quickly find a company's QR code I had photographed, and Photos search found it in an instant.

I believe the iPhone automatically classifies photos based on who or what show up in them. Users can contribute to train classifiers, but the iPhone already works out of the box.

The iPhone also creates theme-based photo albums from the photos you took. I recall it creating Christmas photo books, photo books featuring a pair of people, persons and pets, etc.

This might be low-key AI, but it's the useful kind.

> It even finds text inside images.

I'm not sure OCR counts as AI. But yeah, the iPhone indeed does that too. We can take a photo of a telephone number or even a credit card and automatically fill in those numbers.

I worked on a similar feature in the past for an unrelated project, but that wasn't AI. Marketing labels change, though.


> I'm not sure OCR counts as AI

This is a great example of the AI effect where people would call something AI when it was a daunting research problem but give it another label once it’s working:

https://en.wikipedia.org/wiki/AI_effect

In this case, Apple’s modern OCR is a complex neural network system which I think most people would class as an AI tool. It’s notably better than traditional approaches which were optimized for business documents.


> It’s notably better than traditional approaches which were optimized for business documents.

I'm not sure I agree, primarily because "better" is subjective.

A pipeline with template matching is extremely effective at extracting fixed form text in a standard layout, such as telephone numbers or credit cards, and computationally cheap as well.

But I presume a drop-in black box model which isn't bound to a low computational budget, can output plenty of false negatives and false positives, and can run on a single pipeline might be preferable at least from a product management point of view.

Also, neural networks look good on a résumé while template matching doesn't, just as "statistician" or "image analyst" sounds lukewarm while "AI engineer" sounds superb.
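
To make the "computationally cheap" point concrete, the core of template matching is tiny; here's a toy exact-match version on binary grids (real pipelines use normalized cross-correlation, e.g. OpenCV's matchTemplate, plus preprocessing):

```python
def match_template(image, template):
    """Return (row, col) positions where template matches image exactly.
    image and template are 2D lists of 0/1 pixels."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    hits = []
    for r in range(ih - th + 1):        # slide the template over
        for c in range(iw - tw + 1):    # every possible position
            if all(image[r + dr][c + dc] == template[dr][dc]
                   for dr in range(th) for dc in range(tw)):
                hits.append((r, c))
    return hits

# A 2x2 "glyph" hidden in a 4x5 image:
template = [[1, 0],
            [0, 1]]
image = [[0, 0, 0, 0, 0],
         [0, 1, 0, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]
print(match_template(image, template))  # → [(1, 1)]
```

No training, no model weights: just a fixed pattern and a scan, which is why it is so cheap for fixed-layout text like credit card digits.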


I’m just going by the quality I perceive as a user. It handles basically every CAPTCHA, difficult scans of printed documents, etc. better than Tesseract. I’m sure there is lots of hard work beyond the pure ML component but from a user’s perspective it’s impressive.


> (…) better than Tesseract

Isn’t Tesseract also neural network-based?

https://github.com/tesseract-ocr/tesseract


They added that somewhat recently - the history dates back long before that was practical.


Instead of the focus on AI, I'd like to see something like Shortcuts on iOS, which lets users automate frequent tasks. Being developed by Google, it would be a first-class citizen of the OS, and app developers would add integrations for it.

That way it at least serves the user, and not a paperclip maximiser for engagement.


Google has something similar to Shortcuts, called Action Blocks.


It doesn't provide automation (actions must be manually triggered) or extensibility (app developers being incentivized to add custom actions for their apps).

Seems to be just another case of a half-assed Google product which is destined to die on https://killedbygoogle.com/ in a few months / years, sadly.


Of course there are third party apps like Tasker that are much more capable than both action blocks and shortcuts.


They require messing with intents or web apis to do anything substantial. The original point is that the existence of Shortcuts incentivizes app developers to allow access to app functionality through custom actions. A third party automation app has no chance of building such an ecosystem. Google would have a chance.


Apple does the same horseshit with battery charging guessing. Just give me a toggle setting for stop at 80% full.


"Just give me a toggle setting for stop at 80% full."

And another toggle for 100% when I do need a full charge because I'm going somewhere. It's a bit frustrating that such basic functionality isn't available.


dunno, there is the one-time toggle in the notification area for 100% in my pixel and there is a toggle to turn off these adaptive features in settings.


It is a feature you can easily turn off on iPhones too, it isn't hidden or obscured under bajillion menus at all. The grandparent comment just didn't bother looking for it.

Settings > Battery > Battery Health and Charging. Set "Optimize Battery Charging" toggle to off, and the adaptive charging is gone.


tbf, TFA points out that the "20 AI products" Google intends to release this year are mostly responses to OpenAI's: DALL-E (image creation and editing), Copilot, ChatGPT, and the OpenAI APIs (low-code, browser-based).


Notifications about printing photos are simple to configure. As are the "AI" battery charging options. Google uses AI in most of their products transparently, and we seldom realize it.


"In order to change an existing paradigm you do not struggle to try and change the problematic model. You create a new model and make the old one obsolete."

Translating this profound insight of Buckminster Fuller's to the current debate: you cannot out-do Google by inventing "smarter" search. You need to create a new business model that makes the old one obsolete.

Is there any evidence that all this algorithmic magic makes new business models that are not based on adtech more likely? It would be a blessing (presumably, since things can always drift further into evil), but so far there is little to point to such an imminent disruption.


Google beat their original competitors — Altavista, Excite, Yahoo!, and so on — with an incremental improvement, not a new model. At the time (2000-2003), their search was better and faster, certainly, but not several orders of magnitude better.

In fact, Google was something of a reactionary model at the time, by rejecting the push for bloated, captive "web portals" and going back to the simpler, stripped-down user experience that Altavista had succeeded with originally.

Given how bad Google's results have become, I think you can certainly outdo it. Anecdotally, it's amazing how often I find myself reaching for ChatGPT these days to find basic facts that Google can't. I think Google is right to be afraid.


Pagerank (TM) was a Google innovation and the reason Google had much better results than the others and it took many years for the SEO spammers to catch up. That revolution, combined with the philosophy of the speedy, focused homepage and the elimination of irrelevant results by always using the ‘AND’ operator was why their popularity exploded.


At Voila, a subsidiary of France Telecom, a study around 2005 found that their search engine offered nearly the same answers as Google, while not in exactly the same order.

Despite this objective comparison, in users' view Google's results were much much better.

I remember a German CS professor looking me in the eye and telling me, with obvious schadenfreude, "Google will kill you".

And I answered, "No, they will become like us: big, fat, and ugly".


> At Voila, a subsidiary of France Telecom, a study around 2005 found that their search engine offered nearly the same answers as Google, while not in exactly the same order.

Order is a pretty big part of the value prop of a search engine, so “nearly the same” results but “not in exactly the same order” is not as much similarity as you seem to be implying.


Come on, there's no absolute correctness for a result in being in Google's first versus second answer. The actual order depends a lot on the user's context.

Even Google doesn't change the order of answers based on the user's context. Otherwise it wouldn't say in Google Web Console "your site appears in nth position"


> Come on, there’s no absolute correctness for a result in being in Google’s first versus second answer.

“Absolute correctness” is an irrelevant standard. Order can affect satisfaction without "absolute correctness" being a thing that can even be assessed meaningfully.

> Even Google doesn’t change the order of answers based on the user’s context.

Yes, it does. [0]

> Otherwise it wouldn’t say in Google Web Console “your site appears in nth position”

I think that reflects the experience of a no-context user (e.g., not logged in or otherwise tied to a history to provide context.)

[0] https://en.wikipedia.org/wiki/Contextual_searching#Automatic...


All a Google killer has to do is resurrect Altavista's near operator which is what ChatGPT is doing in an abstracted way.


Other people invented pagerank and used it for pageranking before Google. It's just a centrality metric.
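
The whole thing fits in a few lines; a minimal power-iteration sketch (damping factor 0.85, as in the original paper):

```python
def pagerank(links, d=0.85, iters=50):
    """links: {node: [outgoing neighbors]}. Returns {node: score}."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        # Every node gets the "teleport" share, then link shares.
        new = {u: (1 - d) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                share = d * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling node: spread its rank over all nodes
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

# Three pages: a and c both link to b, b links to c.
r = pagerank({"a": ["b"], "b": ["c"], "c": ["b"]})
# b accumulates the most rank; a, with no inlinks, the least
```

It's power iteration on a damped random-walk matrix, i.e. an eigenvector-centrality variant.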


Jeff Dean even posted on G+ about how he essentially independently invented a variant of pagerank at DEC, but his supposed collaborator couldn't figure out how to open a tar archive, and that was the proximate cause of his leaving DEC.


Actually Google got a patent for the concept of Pagerank, number US9165040B1.


You mean Stanford. Getting a patent in no way means you were first.


Google is the initial assignee, and a patent provides exclusionary rights to the inventor of a technology. Maybe someone did think of it earlier, but no one published anything using it or thought to patent it; otherwise this very important patent would have been voided, and it wasn't.


Robin Li's rankdex patent predates pagerank, see https://patents.google.com/patent/US5920859A/en?oq=US5920859... as well as the pagerank patent, which cites it.


Patent US9165040B1 does not cite US5920859A, I’m not sure what you mean.


It wasn't just incremental. It was a massive improvement! I still remember how floored I was with the quality of results compared to AltaVista which was my main search engine at the time


Depending on the day of the week, I might be doing a standard enterprisey dev project or I might be doing a “DevOps” project automating some things around AWS (where I work in ProServe) using the AWS SDK.

Before ChatGPT, I would spend a lot of time searching on Google for the right API call and the response structure:

Problem: I need a simple Python script that list roles in an AWS account that has at least one of a list of command line specified policies.

Before ChatGPT: I would look up the api call to list the roles (the API service area of AWS is huge), then look up the API call to list the policies for the role and then look up the API call for the “paginator” that pages through all of the results since the API call only returns 50 results at a time.

After ChatGPT:

“Write a Python script that allows me to specify a list of AWS IAM roles that contain at least one of a given list of policies”.

I get “working” code and then I start polishing it.

“Use argparse with required parameters and use a paginator to iterate through the roles”

Again expected results.

“I need the list of ARNs printed out as a comma separated list”.

I copy the code into VSCode, test it, and I’m done.

Later on for another project, the customer prefers JavaScript. I copy the code into ChatGPT, and tell it to convert it into JavaScript. It works perfectly and is idiomatic JS along with proper async/await handling
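
For reference, the kind of script I'm describing is short even hand-written; a sketch (the helper name and flag are mine, and inline policies are ignored, but the boto3 calls — the `list_roles` paginator and `list_attached_role_policies` — are the real API):

```python
#!/usr/bin/env python3
"""List IAM role ARNs that have at least one of the given managed policies."""
import argparse

def role_matches(attached_policy_names, wanted):
    """Pure helper: does the role have at least one wanted policy?"""
    return bool(set(attached_policy_names) & set(wanted))

def main():
    import boto3  # imported here so the helper above works without AWS deps
    parser = argparse.ArgumentParser()
    parser.add_argument("--policies", nargs="+", required=True,
                        help="match roles having at least one of these")
    args = parser.parse_args()

    iam = boto3.client("iam")
    matching_arns = []
    # The paginator pages through all roles (the API returns ~50 at a time).
    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            attached = iam.list_attached_role_policies(
                RoleName=role["RoleName"])["AttachedPolicies"]
            names = [p["PolicyName"] for p in attached]
            if role_matches(names, args.policies):
                matching_arns.append(role["Arn"])
    print(", ".join(matching_arns))

# Entry point: call main(), e.g.
#   python list_roles.py --policies AdministratorAccess ReadOnlyAccess
```

The point stands, though: getting here via ChatGPT was mostly typing English, not hunting API docs.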


Lol dude, I just googled “script list IAM roles that contain policy” and this was the top result. Probably less typing than the usual silly authoritative tone people use when composing an “AI” prompt, like they’re Tony Stark or Captain Picard.

And trust me, once you google Stack Overflow enough, pretty soon you won’t need to ask anyone how to output a CSV.

https://stackoverflow.com/questions/66127551/list-of-all-rol...


That’s true. But that code snippet didn’t meet all the requirements I laid out. And it definitely was less typing using ChatGPT once you add all of those requirements.

And then there is the context switch. This is just one little script as part of a larger system where I’m designing an entire system. It’s much easier if I’m already writing out a long set of documentation, diagrams, slides, etc just to open ChatGPT and keep writing English.

Not to mention when you’re juggling a couple of projects and one is Node and one is Python.

And just in case you are trying to gatekeep, I’ve been programming for a few years - let’s just say my first hobbyist code was in 65C02 assembly language in 1986


I agree completely. Hilarious example. But, hell, if a product's pitch is "you'll feel like Tony Stark or Captain Picard" it will probably be successful.


The entire thesis is that ChatGPT can give better results than Google.

ChatGPT gave me a specific answer with the command line arguments that I specified and allowed me to specify a list of policies and output the result in the format I needed

Google gave me a code snippet that I had to modify for my use case.


Exactly. Search ads are pretty much the perfect internet business model - if my biz is having a sale on lawnmowers, who better to target than people googling for "buy lawnmower"? If this is done right, everyone walks away happy and google makes a killing. ChatGPT is great, but it's pretty far from providing good answers to "where can I get a good deal on a lawnmower right now?" They might take over a decent share of esoteric, "school assignment" type queries, but they're not really a threat for the high-value searches.


If ChatGPT had up to the minute crawl results and my Geo location, that'd be enough.

"Help me buy whichever budget electric range Wirecutter recommends at the moment".

> Okay, here's the store near your home that is selling the Frigidaire electric range that Wirecutter is currently recommending, at the best price.

> Would you like to buy it online at roughly the same price or get the address of the store?


I don't understand how this is different from what Google currently provides, except that it skips one step (navigating to wire cutter, selecting some text and right clicking, then "search web for [product name]").

The only difference, which is not an improvement, is that you've suddenly given Wirecutter total power. At least if you make users visit the site and read a bit of the review, the reviewer has the hurdle of writing some text that can convince a human reader that the results are not totally bought and paid for.


Right! I think most people are uncomfortable with totally outsourcing their decisions, however smart the service is. I want to do my own research, and feel like I've gotten a good overview of the market before I commit.


You're saying if ChatGPT had a regularly updated search index, and was capable of effectively ranking webpages to duck around SEO (adversarial webpages looking to hack the rankings), it'd be competitive with Google?

Can you explain how this is any different than what Google does?


Not GP, but, and this is a hypothesis, it seems to me the realtime scrape might be enough. GPT seems incredibly intelligent in figuring out the ranking(!), importance(!) constraint-search(!) and personalization(!) by itself. Meaning, if OpenAI can do realtime fine-tuning of the web on a daily basis, they _might_ have sidestepped ~25 years of R&D from Google, and be an order-of-mag better.


When I google, I have an internal dialogue about whether the results are shit to begin with; if they aren't, maybe the thing I'm looking for is more niche and isn't in the top ten results. ChatGPT could, for example, run five Google searches in the background, glean data from the descriptions, titles, questions asked, dates, etc., and maybe use some of its own scraping on the top ten results to reorder them and give better summaries.

E.g. imagine when you search via chatgpt. It googles : search phrase, search phrase + reddit, search phrase variation 1, variation 1 + reddit, var 2, etc... all in parallel almost then uses that to mini-tune an ai model to find the stuff that best matches what you're really wanting to find, and then scrapes top ten for results.


Ads work because they're embedded in interesting or useful things. No one would use a search engine composed of 100% ads.

ChatGPT can be used for this: returning interesting results. The ads don't have to be served through ChatGPT, but could be given as a side result.


> No one would use a search engine composed of 100% ads.

Isn’t that pretty much what the Yellow Pages used to be? As long as you know what you’re getting…


Or virtually all trade magazines, Auto Trader, swingers' magazines, plenty of hobby magazines. There are tons of things you buy for the ads. MaximumRocknRoll in the 80s and 90s, the reason you bought it was to find out where to buy records, the articles were extras (and they told you what records might be good and what bands might be good to see.) Their coolest co-publishing thing was issuing guides to all of the clubs and other vendors in every city and state that touring bands needed to book.

These are all ads. The enemy isn't ads, it's an extremely intrusive and adversarial advertising industry posture.

Ads are useful, they tell me where and what things are available. The problem is that everyone uses the methods originated with patent medicine to sell everything.

The methods of selling patent medicine are simply the methods of confidence men; the "medicine" was probably just some unholy mix of alcohol and mercury. Patent medicine marketing emphasizes secrecy, because they're usually mostly frauds; and also because to any extent they are not fraudulent, they want to be exclusive and not copied. Patent medicine marketing also emphasizes leads, surveillance and secret dossiers. When you're selling dreams, you have to know what people are dreaming about; you also have to know how much they have to spend.

Our advertising problem is a legislative problem. We find it very difficult to police this kind of behavior because it is so widespread among the powerful (and is usually what got them powerful.) It is also completely rational behavior because the methods of con men will sell more regardless of the quality of the product i.e. a good product sells even better using underhanded methods. So there's simply no way to protect your world from being run by the best con men without creating legal obstructions for them.


Yellow pages were not 100% ads. They were where you got phone numbers and addresses for businesses, which was important pre internet. They did have tons of ads alongside the listings, but the unpaid listings were core to the offering. If you wanted to call for a reservation at a particular restaurant, check movie showtimes at a particular theater, you would go to the yellow pages, and to the regular listings specifically (not ads, although they’d certainly be there trying to entice you elsewhere). (You could also get the numbers via 411 but this cost money, 50 cents or a dollar IIRC).


> If you wanted to call for a reservation at a particular restaurant, check movie showtimes at a particular theater, you would go to the yellow pages

If you wanted to find the number of a particular restaurant, the white pages were far more efficient.

The yellow pages were primarily useful when you wanted a list of restaurants, not a specific one.

Something else that was nearly 100% ads: the old Computer Shopper magazine. As I recall, they printed just enough editorial material to qualify for the "literature" postal rate rather than the "advertising" rate.


My memory is that originally business listings were excluded from the white pages, added in later years. Been a while though!


> No one would use a search engine composed of 100% ads.

And why I'm using Google less and less. I've literally had the entire first page be all ads.


You are describing a comparison shopping search/aggregator.

If you are having a sale, you are going to be near the top of the results for any reasonable sorting scheme. (An actual sale, not here's 50% off of my bullshit price)

And searching Google for "buy lawnmower" is a good way to buy something you didn't want or need. Even if you actually needed a lawnmower. The only people it helps are those who don't care whether the item is good, and don't care about the price at which point they might as well just walk into the nearest store and buy the first one they see.


Search existed before Google. They also didn't invent adtech. Instead of a marginal improvement, they made it so much better that the previous option looked silly. If such a disruption happens again to the core technology (e.g. search through a ChatGPT-like entity), whether the financial backing is adtech will be irrelevant.


You know how a better app for finding good restaurants can't find good restaurants if there are no good restaurants?

Search is dead, spammers killed it and Google was happy about it because it has become an answer machine where they can say the answer is whoever pays the most. Greatly deteriorated experience from what it used to be.

Now someone has built an answer machine with a much, much better user experience, using all that data which Google can no longer surface.

I don't know if Google can quickly match that function, but even if they do, will they be able to replicate the experience if their answer machine chooses the answers by a bidding war?


> You cannot out-do google by inventing "smarter" search.

Pretty sure you can, because Google search sucks balls. (First and foremost because it ignores context. E.g., searching for programming API documentation is tuned entirely differently compared to searching for movie recommendations.)

That said, Google is not in the search business, because there is no such thing. Google is in the context advertising business.


Buckminster Fuller was notoriously unsuccessful at changing anything, but very successful at making it seem like he would have succeeded if not for all the conspiracies against him.

Taken in that context puts it in a different light.


While I agree with your post, it is somewhat funny that not one of Fuller’s ‘disruptive’ products ever made a commercial dent.


> Is there any evidence that all this algorithmic magic is enabling new business models that are not based on adtech more likely?

I think the revenue assumption here rests on the fact that GPT generates content, whereas the web waits for the user to provide the content. It generates content for lawyers or hospitals, for example - pardon my lack of imagination. So a basic business model would be to charge per request, I suppose.


I remember that hope.


I'm using GPT3.x to make money at work


A lot of my reason for googling was to find facts or answers for what I’m writing or coding.

Now I use ChatGPT as an “assistant” to do a lot of tasks that I would have normally done with laborious searching through google.

It truly saves a lot of time.

Google is right to be quite worried.

Sure I still use google but really, maybe only 40% of as much time as I did before.

Why research “give me 30 of the most common health conditions related to the human liver” and spend a lot of time in Google, when the AI can spit that out in seconds?

And worse (for Google), I can ask the AI to write a short couple of paragraphs about each one.

Then I can confirm the output and clean up the generated text into my own style.

What do I do?

I do online marketing and programming to support online marketing activities.

I write. I plan. I code. I hire.

We just taught a junior employee who is not a great writer to use ChatGPT to get a good start on her writing.

The training for her was how to formulate detailed and highly specific “prompts” and to use google as a backup to confirm facts in the AI generated output.

It’s not there to replace people’s work. It’s there to make them much, much more efficient.


ChatGPT often makes up facts. It outputs stuff that looks like it could have been written by a human, not stuff that is correct.

Don’t use ChatGPT for medical research.


These arguments are just like the old days when wikipedia showed up. Don't miss the forest for the trees. ChatGPT is a huge threat to google and a bunch of other industries.


Not comparable. Wikipedia has always had a strict policy on citing sources. ChatGPT can't cite sources by design, because its answers are based on synthesis.


Not true. The verifiability policy only really came into effect in 2006 (https://lists.wikimedia.org/pipermail/wikien-l/2006-July/050...) - five years after Wikipedia started.


It wouldn't be too hard to program at least GPT-3 to take a ChatGPT answer, go to Google, scrape the results, and verify whether the ChatGPT answer was factual, maybe giving it a score or rating of factualness.


If it’s that easy why don’t you do it? You’ll be printing money


Simple answer: ADHD. If I could ship a product, I'd probably already be rich; instead I'm scraping by as a freelance dev. Though ChatGPT could probably help me code it anyway, lol.


Absolutely the case, but also people make up stuff online all the time, so google has this exact same problem.


No. Google gives you the source. ChatGPT does not.


It’s funny because when I was in high school the argument was always “books, published articles and other print media are actual source material, Google doesn’t give you that”


[0] scholar.google.com

Google gives you sources, determining reputability is your task.


You think Google does not provide results from books, published articles, etc? Really?


Probably not when op was in high school, if they were still using books over web tools. I'm guessing before 2004? How old is google scholar?


Google Books is from 2004 but I don't remember seeing in search results until the 2010s.


At least with Google you have sources you can trust more than others whereas ChatGPT is a black box


I clearly said we use Google to confirm the AI output.

And we also do not do medical stuff. I just used that as an example.


> Don’t use ChatGPT for medical research.

Or Google. There are plenty of pages out there that (e.g.) claim that Alzheimers is caused by drinking out of aluminum cans, or that the world is controlled by grey aliens from Zeta Reticuli.


… you know Google provides the URL right? With Google it is very easy to tell if the information is coming from NIH or infowars/forums/etc.


> ChatGPT often makes up facts.

As opposed to... Google? Your doctor? My doctor?


Absolutely, as opposed to those things. With Google, if you use a reliable source like Mayo, NIH, or even WebMD, it is clearly more likely to have accurate information than something that proves even numbers are prime. Certainly all those things can be inaccurate, but where in the world do you think ChatGPT pattern-matches its information from?


Exactly. ChatGPT is clearly very impressive and useful, but nothing from its output should be treated as valid or factual to any degree.

Information generated by humans will include things like transpositional errors, logical errors, popular misconceptions, and misinterpretations of data. Mistakes happen, but human mistakes are at least tethered to real thoughts/information.

On the other hand, AI will happily spin up a complete fabrication with zero basis in reality, give you as much detail as you ask for, and dress it all up in competent and authoritative-sounding prose. It will have all the style of a textbook answer, while the substance will be pure nonsense.

Still a great tool, but only with the caveat that you approach it with the mindset that it's actively set out to catch you off guard and deceive you.


> AI will happily spin up a complete fabrication with zero basis in reality, give you as much detail as you ask for, and dress it all up in competent and authoritative-sounding prose.

Sure. What makes you think a human won't?


I didn't say a human wouldn't. I said a human wouldn't typically do it by mistake.


And how hard would it be for ChatGPT to be retrained on peer reviewed medical journals? ChatMD-GPT, if you will.


The majority of articles in peer-reviewed medical journals are also false.

https://doi.org/10.1371/journal.pmed.1004085

You can't take such articles seriously unless they have been independently reproduced multiple times. So, your hypothetical "ChatMD-GPT" would have to also filter on that basis and perhaps calculate some sort of confidence level.


And it has likely already been trained on correct information, and yet it produces bad results. It has certainly been trained on data that explains what prime numbers are, and yet it produces what it produces, whereas using Google and hitting a credible source directly is more accurate and efficient.


Isn't there a medical chat-gpt that passed the medical licensing exam? I thought I saw that come up..


Let's say ChatGPT gives you false information 50% of the time. It is still useful.

Just as it is harder to find prime numbers than to verify that a number is prime, it is harder to dig up potential tidbits of information than to verify that a piece of information handed to you is true.
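The prime analogy holds up in code: verifying a claimed prime is one cheap call, even though hunting for primes (or facts) takes work. A minimal trial-division check, fine for small numbers:

```python
def is_prime(n: int) -> bool:
    """Verify a primality claim by trial division (adequate for small n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# Checking a claim handed to you is instant:
print(is_prime(42))  # False: the "42 is prime" answer fails immediately
print(is_prime(43))  # True
```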


50% is still useful? A broken watch is useful in that sense as well, I guess. I can only see that as useful if you don't include efficiency in the definition of useful.


Like the comment said, if it's cheap (time, effort, etc) to reliably verify the answer the success rate doesn't really matter.


Your prime number analogy doesn't hold water because the average person doesn't verify. Being wrong half the time has potential for serious damage.


I feel like I've seen this on Hacker News before, on other subjects: someone gushes about how new technology X is great, but gives reasons that seem really odd to me. I've never found Google laborious for searching for facts, or especially for coding solutions, and when it does provide me with differing options there are almost always great reasons why those options might all be relevant. With ChatGPT you're going to get one verbose answer that's probably wrong and presents none of the context as to why it might be wrong. So sure, if you're only using ChatGPT to answer questions to which you already know the answer, it could be quicker.

>Why research: “give me 30 of the most common health conditions related to the human liver” and spend a lot of time in google, when the Ai can spit out that in seconds?

Because it's not going to be right! If you actually need to know the answer to that question, you need to find a reputable source, and that's what Google gives you. I'm quite certain that the most common health conditions for the human liver vary by country; will ChatGPT give you the actual answer you're looking for? Maybe, some of the time. Will it save you time? No, because you can't use the answer unless you google it to confirm.

It sounds like you're using ChatGPT to pump out worthless marketing SEO. Yes, that's a niche where creating a volume of material with no value is commonplace. The aim shouldn't be to make that more efficient; it should be to find ways to entirely filter it out of the internet. What you're producing is literally the only material people should be using ChatGPT for instead of the web: low-quality verbose text that is indifferent to fact checking.


> I can confirm the output and clean up the generated text into my own style... to support online marketing activities.

it sounds like you're using AI generated output to do content marketing to support SEO activities for a healthcare client at a marketing agency

if this is the case, when humans search for [organic] content on health conditions related to the human liver (on Google) you are hoping they land on ChatGPT generated content you published (for your client), to help the client avoid buying Google Ads to get those customers

at some point your content will rank well enough that other SEOs will do the same thing to compete, leading to a dilution of overall content quality as everything becomes SEO-optimized, which makes search generally unusable.

This, to be fair, has been happening well before ChatGPT, but will only accelerate.


Ugh I’m not.


> What do I do?

Compare with an actual expert, because the list from ChatGPT is almost certainly incomplete and will inevitably contain plausible-sounding but completely wrong claims. One of the big challenges here is when you don't know the field well enough to know what ChatGPT didn't include on its list at all, or to be able to tell when multiple similar-sounding things are being conflated.


He does online marketing: plausible-sounding but completely wrong claims are not a bug, they are the feature.


I’ve worked with decent marketers so I was assuming good-faith.


I don’t really get your example. You could search for common liver conditions and then get a link to WebMD or Mayo Clinic very, very quickly, and you can be very confident that it is accurate. If I’m a 4th grader using ChatGPT to cheat on my math homework, I might be quite satisfied with the answer that 42 is a prime number, and it might even provide a great proof.


The point is to get the list quickly.

Then the next point is to summarize each item.

Then I can go in and validate the info from other sources and clean up the writing to my style.

Get it now?


> It’s not there to replace people’s work.

Well, right up until the day after ChatGPT2 is better than your junior employee + ChatGPT.


I kind of feel they should be. I cannot remember their last major customer-facing AI product launch after Google Assistant. Sure, there is AI in photo tuning and many interesting features for the Pixel phone, and there is Gmail autocomplete, but I used to really follow every Google product launch when I was a kid: there were the Google Reader and Google Drive launches in those days. That excitement is kind of gone: all the new AI products feel polished but conservative.

The GPTs and Whisper really get me excited again these days, though. But Google pioneered this field.

I love their JAX, in the way many people love PyTorch but not so much Meta, the company behind it.


Personally I've no interest in any new Google products because who knows when it may be discontinued with little or no notice. Most recently Optimize, but I've also been burned by Universal Analytics, Fusion Tables, goo.gl, and Google Site Search.


I did not know about the Google Optimize shutdown. That's a shame. Google Analytics might be the product they've bungled the most after chat (breaking upgrades, features moved out then moved back, enhanced ecommerce, etc.).


> I love their JAX

For those out of loop: https://github.com/google/jax

See also: https://news.ycombinator.com/item?id=29682507


Interesting that the Jax page still says it's not an official google product (a recent Research publication said it was).


Google Assistant was launched 6 years ago. Probably the most AI that Google has inflicted upon the general public since then is Waymo One. Other recent launches include the thing that automatically fills in your formulas in Google Sheets. Also DeepMind's WaveNet text-to-speech. Then there's fancy stuff like Project Relate that will train a speech-to-text model individually for disabled users[1].

But mostly the ML stuff is silently launched behind the scenes. And you slowly become inured to astonishing feats of machine learning like being able to translate spoken Chinese to English text on your mobile, and cars that drive themselves, and the fact that you can search your photos for "toy lizard" and that actually works.

Edited to add: with regard to the idea that some competitor with an AI code generator will out-program Google as an organization, you should take note that Google has already deployed AI code generators in large-scale production. So whenever a future competitor re-invents this facility at that moment they are already years behind Google[2].

To summarize, I believe it is somewhat strange to believe that the organization with demonstrated state-of-the-art machine learning research and development will be caught unaware by the ML revolution.

1: https://impactchallenge.withgoogle.com/globalgoals/projects/...

2: https://ai.googleblog.com/2022/07/ml-enhanced-code-completio...


There is https://blog.google/technology/ai/lamda/, but sadly it's not consumer facing yet.


I think the OpenAI argument is overblown.

While I do agree that AI is going to bring a huge shake up to the search market, I still believe that Google is in the best position to fight this battle.

Realistically, given how fast the AI field improves month after month, do we think it will take long for Google to replicate a product similar to ChatGPT? I don't think so.

And then they are the company that's best suited to have this AI driven search in Android and Google search thus limiting the impact of this whole problem.

Sure, maybe Microsoft will deprive them of some developers who will look for information inside VS Code or their editor, and stuff like that, but overall I don't see Google suffering that much as long as they are able to answer the ChatGPT challenge in a meaningful time, something they should easily be able to do given their immense computational and data resources.


This iteration of shake-up is not strictly about tech superiority, though. Google probably already has a repro of ChatGPT running in their datacenters.

Fundamentally, it's about product, and expectation shift. I am presently routinely having conversations, and for the first time in the history of the Internet, getting straight answers to moderately complex questions, and their follow-ups, in real time, from ChatGPT. Google's entire business model is "giving people pages on the Internet that might contain those answers", and some of those being paid for the user's clicks.

If I get the full answer, instantly, wrapped in 2 paragraphs of text.... where do the ads go? Editorials inside the answer? Why would I click them?


How do you verify that ChatGPT’s answer is accurate (it often enough isn’t), if not by googling?


You could use any search engine and just gather 'decent' results: scrape the first 3 results and have a new instance of a language model compare them, telling you what's inaccurate in the first answer based on the fine-tuning data.


I'm wondering what happens when most people start using chat bots and stop using search. There'd need to be a big shift away from using users' navigation as a signal for search engines to determine the quality and authority of content.

What would the next-gen web look like? Would people even create webpages anymore? How would they be funded if people aren't navigating to these sites and are just using chatbots?


Google's already given up on search: google any programming topic and you get Stack Overflow-scraped spam everywhere, even BEFORE the Stack Overflow post it was scraped from. Google's no better than AltaVista circa 1997 at this point, imho. Brave Search is better for most things, and if it misses, then consult Google, etc.


Yeah, if I can't trust the initial answer, I also can't trust any of the refinements. It will be a long time before you can get a similar level of assurance by asking a number of different language models as you can now by looking at various sources around the web.


What if you google something, then check the links Google brings up that seem most likely to help you, and it remembers that (reinforcement learning)? Ten other people search almost the same phrase and pick maybe 60% of your links; after 100 people it has a decent idea of which links are best. It could then just return the links along with a summary of the details from each of them, without overlap: say one page has a top 4, another a top 3, and another a top 5. It would give you a Venn-diagram summary of those 3 pages, and if 3 of the 12 listings are common, it only summarizes the 9 unique ones.
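The de-overlapping step is just set arithmetic. A toy sketch with made-up page listings, splitting items into the common core (summarize once) and the unique remainder (summarize per page):

```python
# Hypothetical top listings pulled from three highly-clicked result pages:
pages = {
    "page_a": {"item1", "item2", "item3", "item4"},
    "page_b": {"item1", "item2", "item5"},
    "page_c": {"item1", "item2", "item6", "item7", "item8"},
}

# Items every page agrees on: summarize these just once.
common = set.intersection(*pages.values())

# Everything else appears on only some pages: summarize individually.
unique = set.union(*pages.values()) - common

print(sorted(common))  # the shared core
print(sorted(unique))  # the remaining unique items
```

The hard part, of course, is recognizing that two pages' listings refer to the same thing; exact string matching is only a stand-in for that.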


> Realistically, given how fast the AI field improves month after month, do we think it will take long for Google to replicate a product similar to ChatGPT? I don't think so.

Users who use ChatGPT do not see either Google's search page or any linked pages. Instead, the user gets a summary answer from ChatGPT that currently shows zero ads from Google (or anyone else). This summary is the obvious place for ads; Google knows this. And Google also knows that if it is not the default summary provider for most users (in the same way that it is for search currently), then its ad business is definitely at risk.

In terms of addressing the risk, it's not so much a technical challenge [-1]; it is instead that Google's business model is largely based on embedding ads in millions of web pages (in a way that MS's is not). The problem with this is that they now have to figure out what an alternative business model (that puts ads in summaries) looks like, then shift their entire enterprise and their partners in the wider ad industry over to using it before ChatGPT becomes dominant [0]

It's an enormous challenge, and it is easy to see why Sundar has sensibly signalled a code red and is reaching for as much help as possible.

[-1] As observed, Google are very technically capable.

[0] Realistically, if they cannot provide a competitive offering before MS push ChatGPT into Word, Excel, Bing etc., history says it will be hard, verging on impossible, to ever get those users back.


There will be more players taking market share from Google. Even if Google ultimately wins, they will have less revenue. ChatGPT represents a new beginning for the industry


There's a surprisingly long and detailed blog post (https://ai.googleblog.com/2023/01/google-research-2022-beyon...) which details some of what they have coming this year. It sounds like they're planning on taking some of the skunkworks/internal stuff and exposing it through products. Sidenote: I was surprised to see one of the charts even showing ChatGPT outperforming LaMDA in one area.


That is exactly the problem, and it indicates a lack of grand vision. Releasing a hodgepodge of internal projects is not the same as rebuilding AI-first products.


Last time this happened people thought social was the future. Facebook was considered a threat to everything since they had the social-graph, so everything had to become social. Google+, Google Buzz, Orkut.


Google+ was good until google lost its mind and started shoving it everywhere- even I as a google+ user HATED this.

It was to me the social network for grownups without the Facebook clickbait - I liked the circles model too (I did one for professional stuff one for personal) and great photos support.

But then someone was put in charge who lost their mind and just started wrecking products (YouTube etc) with plus and it was chaos.


Google+ was not popular, though; the whole point of forcing everyone to have G+ was to get a MAU explosion to rival FB. Without it, they could not compete in time.

It was a poor decision, and I agree as a consumer that this time period sucked. But I understand why the push was made. G+'s only bet was to get everyone to use it at once and right away. I don't agree with the assessment that someone lost their mind in this ordeal; it was the only option they had to avoid being second fiddle at best in perpetuity.


> It was to me the social network for grownups without the Facebook clickbait - I liked the circles model too (I did one for professional stuff one for personal) and great photos support.

There was probably no viable business model for that as it was, closest is LinkedIn, which is rife with influencers, grifting and inspirational chain mail.


Same. I was an avid Google+ user. Circles were great. I made many friends with varied interests.


I thought there was going to be a slow build in users as folks got sick of Facebook etc.

It was nicely contenty and bloggy feeling.


Social was/is a threat because it puts lots of useful content/information for indexing behind a login that Google (and other competitors) can't access for training data.


AI scales better than social, because it works if you have no friends.


Always thought that social was Google's missing magic ingredient, just as much for associating content creators with identities. structured data/schema goes some way in addressing that.

They're very good at measuring the intent of the searcher, perhaps not so good at measuring the intent of why a ranking page existed in the first place.

Perhaps gets harder when more AI-like content will appear.


Social achieved everything that was expected of it: most of the content on the internet went behind walls unreachable by Google. Google missed out on that but didn't lose much ground, or its losses were compensated by its success in mobile.

The difference this time is, the new shiny thing goes directly into Google territory.


Google didn’t “succeed” in mobile financially. Android historically hasn't brought in much profit relative to its popularity (that came out in the Oracle trial), and Google has to pay Apple close to $20 billion a year to be the default search engine on iOS devices.

Google Play Services is not much of a money maker, and Google sells about as many phones in a year as Apple sells in a couple of weeks.


Yeah, but imagine if they didn't support Android: they would have been at the complete mercy of Apple! As it is, Apple can't touch 70% of the phones, and that's an enormous financial incentive.


I’m not saying that it wasn’t a decent defensive move. But the 70% of people who are buying Androids aren’t statistically the most desirable customers with the highest income, and Google is completely absent in mobile in the world’s largest market: China.


Moving out of social will turn out to be their big mistake. They should have stayed persistent. Data is the new king and the absolute differentiator in the world of AI. Facebook is fortunately asleep at the wheel. Musk is quickly realizing how important this is and has already turned off 3rd-party access.


They are out of social? What is YouTube? To me it's almost indistinguishable from Instagram at this point. The only difference is that some of the content I consume on IG is from my friends, but those are not monetizable interactions, and it's almost completely separate from the creator-content consumption I do there. So YouTube is basically IG minus the unprofitable part.


I wonder where Google ("improving quality to compete") and Open ai/Microsoft ("reducing quality to advertise") will end up meeting.

If GPTSearch ends up as infested with spam and ads as Google it will be little more than a new input interface with embedded ads in the output.

Part of the appeal right now (for me) is its lack of SEO spam and over-the-top ads.

That won't last (obviously), at least not for their mainstream implementation.


I still would pay for good search without ads. Google would definitely make more from that than from me clicking ads, which, in the past 20 years (or however long Google has had ads), has happened 0 times. I know why, but surely there must be a model that works like that?


Kagi.com - paid for, ad free, searching.

I don’t work for Kagi, just a very, very happy user.


I would too, but apparently the big companies doing search right now aren't interested.

To me at least there is no indication that AI will fundamentally change that (unfortunately).


What's the general opinion on Sundar Pichai as CEO? I haven't been excited about Google since Eric Schmidt, but I don't know how much of that is down to the CEO, and how much is just down to a different world and larger company.

In contrast I think most people would agree that Microsoft is a much more interesting and maybe better company with Satya Nadella as boss.


My impression is that he's not a leader but a caretaker. This article kind of shows the two founders swooping in for some emergency leadership; that's not a good look. I think they've been coasting on their past glory for a decade now. Yes, they make money, but otherwise they are just doing more of the same and building out the stuff they've had for more than a decade (Search, Ads, YouTube, Maps, Android, etc.). All the attempts to enter new markets seem to be a succession of failures. They are still playing catch-up to AWS, and lately Azure. Stadia came and went. YouTube paid accounts are nowhere close to competing with the likes of Netflix or Spotify. They repeatedly failed to compete with WhatsApp as well. And now they are taking on OpenAI.

Sure, Google has smart people. But it takes more than that. MS had lots of smart people under Ballmer as well, but it took Satya Nadella to sort out the mess he left behind and make that work for MS. He put an end to the Windows-on-everything agenda. One of the first things he did was kill Windows Phone (dead in the water at that point), and then he opened up the MS ecosystem to Linux in a big way. Fast forward a few years and they've had a successful GitHub acquisition, most of the OSS world runs on MS's GitHub, they've built GitHub Copilot with OpenAI, you can run MS SQL Server on Linux now, .NET runs on Linux, and you can run Linux on Windows really seamlessly. See a pattern here? Gates and Ballmer turned up their noses at OSS; Nadella made MS a leader in OSS and is reaping the rewards. That's vision and leadership. At the time, Google was stealing MS's thunder. Now the roles are reversed, and Google needs to call in the founders to come up with a plan because of OpenAI. Which, of course, has deep ties with MS.

Google needs leadership and strategy. Right now it has neither. What are they going to do? Build another chat client? I think they've tried that a few dozen times already. They need a better plan than that. And I don't think Sundar Pichai is capable of coming up with that plan.


Google has really stayed true to its mission: connect people and information.

When I'm sitting at my desk and in focus mode, I get information by opening a web browser and navigating to google.com. But if they hadn't invested in Chrome, that might not have been the case by now.

If I'm out and about, I access information via my phone. They built Android for that. If my only option were Apple or Microsoft phones, I might not be asking Google for information.

When I'm sitting at home on the couch, I just yell my queries and a little round computer answers back. It could use some improvement, but it's still me asking Google.

Along the way, Google finds ways to monetize the dispersion of information. I don't really mind because there's literally no alternative that doesn't also try to retain some of the value add.

So why this myopic focus on search? There are a lot of products here that keep me in the ecosystem. Google is a platform company that knows a lot about me and makes my life easier and along the way takes money from advertisers.


YouTube and Netflix revenues are actually almost exactly the same. Now that Netflix has launched an ad-supported version, I'm not sure how important paid vs. non-paid accounts are in considerations of who's doing better vs. worse.


Google went for premium content with YouTube many times, and it flopped every time. That's the point: they've been trying to make more of it and they have failed over and over again. Ads are what drive the revenue, because in the end that's the only thing they know how to do. YouTube itself is indeed a good business; a nice business. And with an ad blocker I find it very enjoyable. A great acquisition more than 15 years ago.


I think that’s the key problem: the ad money makes it hard to see other models as viable, because nothing they do now will be anywhere near as profitable over the very short timeframes in which they evaluate product or personnel performance. If you give up almost immediately, you have nothing waiting to help the company in a dark time for the primary revenue stream, and each time they’ve made it harder for a new product to be successful, because people expect it to be cancelled.


Revenue and profit are completely different things.


Sundar was a robot put in place to handle the difficulties that Larry wasn't interested in dealing with: growing revenue, handling the leadership ego fights, and making vapid presentations at developer conferences. He has little or no independent thought. No leadership skills (i.e., rally the troops) at all.

Ruth will replace Sundar as CEO in the coming year or so. And then Google's transition to evil will be complete.


Google needs a CEO who is driven and has a product-centric approach, and this is not what I currently see in Sundar.

Google is extremely well positioned to respond to ChatGPT as they already have a very popular assistant in people's pockets and homes all over the world.


“Google sucks,” “search doesn’t show real results anymore,” and many similar comments are common in my world of heavy industry / manufacturing / b2b sales.

They’ve been harvesting profits rather than making a better product. They should be freaked out.


I think this is hitting the nail on the head. AI and ChatGPT aren't even the point. The scary part is that Google appears to have cannibalized their own offering to keep their numbers in the black, to the extent that they have no obvious way of dealing with a credible competitor.

Ironically, the young Page and Brin predicted the inherent conflict of interest between search and advertisement.

http://infolab.stanford.edu/~backrub/google.html#a

With gems such as

"For example, we noticed a major search engine would not return a large airline's homepage when the airline's name was given as a query. It so happened that the airline had placed an expensive ad, linked to the query that was its name. A better search engine would not have required this ad, and possibly resulted in the loss of the revenue from the airline to the search engine."


20 new AI based products this year? We might need to order some headstones.


The trick is to capitalise on their inevitable early death!


After trying out their AdWords platform for a year, combined with the fact that search in general has plummeted in quality, I can't wait for them to crash completely.

AdWords is such a scummy endeavour; it squeezes more or less all companies using it, but smaller players more so. They are not a friendly company. They are a MegaCorp so monopolistic and dark (design-wise) in its behaviour that it's incredible it's legal.

I hope the alternatives will be better.


> ... i can't wait for them to crash completely.

But wouldn't Google, even without search ads (which are only up 4% YoY, to $40bn), still be a hundreds-of-billions behemoth? Sure, it wouldn't have a $1.3 trillion cap without search ads, but Google is also Android, Chrome, ChromeOS, Google Workspace / G Suite (which, what, 7 million SMEs are using?), cloud services, etc.

I don't see Google "crashing completely" anytime soon.


They have been slowly removing targeting tools to make your spend less efficient, too. You can't even target exact queries anymore without it pulling in 500 others that Google's AI thinks are a match.

On top of that there is definitely a pay-to-play tier if you are a huge advertiser. You'll find many of the big boys double and triple serving ads under small sub brands, which is against Google policy. You can report it and Google just ignores you.

How about those pages that are just lists of more syndicated Google search ads? Clearly a "bridge page" which is a policy violation but it's some bigger spender behind it.


It feels like 2007 to me again, when Vista was flopping and everyone was announcing the year of Linux on the desktop.

It's definitely opening a window of opportunity for a competitor, but that doesn't guarantee anyone's success (or failure).


Didn't even mention YouTube.

Google is so big and diverse it's easy to forget all its major bits.


IBM and GE are still around. Doesn't mean they haven't irreversibly peaked.


Yeah; I think far, far too many people think that the decline of a company like Google would look like a near-immediate collapse into either irrelevance or outright destruction.

That's not remotely realistic. It's much more likely to look like what's happened to IBM since the rise of personal computing: its special "no one ever got fired for buying it" status goes away, and while it may linger as a company that still provides a product and/or service to other companies (those it's managed to make itself indispensable to), it may shrink, and its popular image fade, for a number of years. Given the differences in Google's portfolio vs IBM's, I'd say there is more of a chance that Google would eventually collapse down to nothing, or nearly so, but it would still take a good while for that to happen.


It came out in the Oracle trial that Google only made $27B in profit from Android from inception to the time of discovery.

The fundamentals haven’t changed since then. Google pays Apple a reported $18B+ a year to be the default search engine on iOS devices.

GCP is losing money, Google WorkSpace /GSuite has little penetration in the market compared to MS Office.


Forgot YouTube, which literally has no competitors.


Maybe none for long pre-recorded video, but they haven't done a good job with YouTube Gaming or Shorts which are both defensive plays against growing competitors.

YouTube's decline will look more like an unbundling than a 1:1 replacement.


What about TikTok? When I visit YouTube now the entire bottom half of the home page is "Shorts".


TikTok is slowly being banned, and for good reason.


Rumble, Odysee, Rokfin, BitChute


Thanks for the list - I have not heard of any of these products. I would add vimeo to this list as well.


Yes vimeo has the nicest ui imo


all of them account for less than 0.01% of YT traffic


And? That sounds like a good reason to use them and help the competition. No use sitting on YouTube complaining about YouTube.


Can't wait for Google to become a tech-only company.


> dark design

I wonder how much of Google's revenue is from Foobarbazz Corp buying ads for Foobarbazz searches.

The conversion rate for those searches is probably really high since the consumer was going there anyway.

It is like a silent agreement between marketing folks and Google to pretend it is value for money.


> I wonder how much of Google's revenue is from Foobarbazz Corp buying ads for Foobarbazz searches.

I don't know if it is a common practice or just an exception, but not that long ago we had this https://news.ycombinator.com/item?id=34218340 where two competitors of (the FOSS) Kdenlive tried to advertise on "Kdenlive" on various search engines, and it seems like "They had them on Google too, but we complained and they were removed." That made me wonder whether it really is true that you must advertise on your own keywords, or whether you can just complain and get it removed from Google like they were able to do.


Freakonomics Radio had a really good (two-parter?) episode about whether ads really work, and one of the people they had on was someone who cut eBay's ad spend a ton; they said one of the biggest wastes was advertising on the word "eBay". They removed those buys and, sure enough, people just scrolled down to click on the actual link to eBay.

https://freakonomics.com/podcast/does-advertising-actually-w...


Google tax. Why are such patterns not outlawed?


Why don’t you think through the implications of your question? You say outlawed. Ok then, write a hypothetical law that would outlaw this behaviour. Remember, this law should be general enough to apply to advertising or internet advertising in general. The more specific it gets (“search engine advertising firm beginning with G”) the less likely it’ll stand up to judicial scrutiny.

It would only take a second for you to realise that the answer to your question is “it’s not done because it’s hard to formulate a law that’s fair, will pass judicial review, won’t be circumvented, won’t lead to unintended consequences and will achieve the intended effect”.

All that before we get to lobbying. Because even if you formulate this perfect law, there’s still an open question of how you plan to enact this legislation.


To be fair to both of you, yes it's a ludicrous proposal but also yes governments might enact it anyways.

Most laws these days seem useless at best, Orwellian at worst.


Anyone else experienced google trying to be too clever for their own good lately? I've had a couple of searches where they incorrectly started recognising synonyms that didn't fit the context (e.g. file+extensions -> file+plugins).

For now duckduckgo works a lot better when you want to search for literal keywords.


> combined with the fact that search in general has plummeted in quality

Every time this comes up I am baffled. I have not noticed a single drop in the quality of results.

Every time someone said that and shared their search methodology, it became very apparent that the problem was not with Google.


My example of lower quality: programming question searches are recently dominated by sites with bot-generated content scraped from sources like Stack Overflow, instead of Stack Overflow itself.


That’s mine as well: it’s not a million spam sites but the same few for many years. I know it’s an arms race with spammers but they could pay a single person ¼ time to make some big improvements in user experience.


>They are a MegaCorp that is so monopolistic and dark (design) in its behaviours

You mean the once self-proclaimed most righteous "Don't Be Evil" company, and now currently the 2nd most righteous company, is monopolistic and employs dark design patterns?


Technology-wise, ChatGPT is not a challenge for Google. It was a blessing that OpenAI is charging $42 for the premium tier. Google can very much build a similar or better offering in the near term, and would not have to worry about losing ad $$, as it would be compensated by a paid product.

The challenge is execution. Most of the leadership has done amazingly well, but only when they had no competition. For FB, Cloud, gaming, AR, self-driving cars, they could not do much. They did great on Android and Nest; both were founder-led for a while.

Right now, OpenAI is adding new features like crazy; they are a startup. Google may have better engineers, but it will lag behind in execution. Plus, changing a 20-year search+ads mindset in a month is hard.


> Google’s approach to AI in recent years has been conservative compared to some rivals

Well yes, of course. They are the incumbent, and they've been in cash-generating mode for a while. With a large lead over "competitors", if we can even call them that, and virtual monopoly status in both the ad space and the search space, it's perfectly normal for Google to move at a glacial pace. They have no external need or intrinsic motivation to do otherwise at this point in time.

That is, until an actual threat shows up..

I honestly don't think ChatGPT is a real threat, except that they are getting a lot of attention and they are the new David to Google's Goliath, so they are a psychological threat more than a technical one.


their market share in the total ad market is very far from a monopoly


> Google now intends to unveil more than 20 new products

I actually laughed out loud at that one.


You’ll laugh even harder when they get shut down a few years later.


No, I'm already laughing for that exact reason.


Years as in plural?


Perhaps Google can finally start focusing again on creating quality products instead of nonsense videos/hype like "Demonstrating Quantum Supremacy" (https://www.youtube.com/watch?v=-ZNEzzDcllU).

However, would people even trust their new products knowing Google's tendency to kill off these products within a short amount of time?


> The demo for the chatbot search says Google will prioritize “getting facts right, ensuring safety and getting rid of misinformation,”

This goes to a chaotic problem which is still to be decided. There is no consensus position on what facts are, what is safe, and what is misinformation. Part of the success of modern democracy hinges on being able to switch the leadership out for people who have a different frame on life than the old group. Yesterday's misinformation can become tomorrow's facts (consider, famously, the impact Snowden had on the conversation).

Can a company as large as Google do this without splitting their own market along various political boundaries? Will they be forced to take religious, political and moral positions? Will they be out-competed by more neutral or more partisan chatbots? If chatbots become a big deal, will the target audience scale like search did or is it fundamentally fragmented?


What if instead of training one unbiased AI, you train multiple ones, with different biases? A steel man for every side if you will.


You're talking about interpretation. Facts are facts.


Thing is, short of mathematical definitions, everything is subject to some bias, any sufficiently-complex statement is just an interpretation of various signals. Moreover, the threshold for "sufficiently-complex" isn't really so high.

In other words, there aren't really facts (in the sense of being "absolute truths") but just Justified True Beliefs, which means Google essentially has to solve the Gettier Problem (https://iep.utm.edu/gettier/).

Examples:

There is no medical fact that smoking is bad for you, but there is medical consensus that smoking shortens your life expectancy. Of course, the consensus does not preclude that there might be some nonagenarian in Fiji or Japan who's smoked a pack a day since he was 20 and is still chopping wood for fuel.

I don't know what exactly GP is saying about Snowden but pre-Snowden, it's a justifiable belief that, among other things, your webcam will only turn on when you tell it to. Post-Snowden that belief might no longer be so justifiable.


No, but isn't that where confidence levels come in? Consensus is formed because of research showing statistical significance of however many sigma or something


It depends on when. Atoms were supposed to be atomic, then protons/neutrons/electrons indivisible, etc. During transition phases, facts are not.


The terminology has gotten really muddy, and people talk of statements that are true, i.e. in accordance with the facts as though they were the facts themselves. The facts are what a true statement agrees with that makes it true.

Facts are, epistemologically, the way things are. That statement wasn't a fact, but it was true.


There are a vast many important and meaningful questions someone can type into Google that don't have a single factual answer. At that point, credibility, sourcing etc all matter a lot and misinformation is dangerous.


Google is getting worse and worse. Today I searched for the exact term "boat skipper b". Google threw a ton of trash at me. I ended up looking through Bing results and found what I needed. 2/3 of my Google searches are not usable to me: just advertisements and shady websites as results.


Search Tools > Verbatim

I have that option preset in the googling shortcut I use. The URL parameter for that is "tbs=li:1", see https://stenevang.wordpress.com/2013/02/22/google-advanced-p....
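For convenience, that parameter can also be baked into a URL programmatically. Here's a minimal sketch in Python (purely illustrative; `tbs=li:1` is an undocumented parameter that Google could change at any time):

```python
from urllib.parse import urlencode

def verbatim_url(query):
    # tbs=li:1 enables Google's "Verbatim" mode (no synonym rewriting)
    return "https://www.google.com/search?" + urlencode({"q": query, "tbs": "li:1"})

print(verbatim_url("boat skipper b"))
# -> https://www.google.com/search?q=boat+skipper+b&tbs=li%3A1
```

You can wire something like this into a browser custom-search shortcut so every query goes out in Verbatim mode.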


I wish we could use version numbers or just say "Use Google from 2010".


I recently switched to Kagi and it’s the first time I don’t want to go back to Google. And I tried many other search engines in the past, none of them stuck with me.


Indeed for technical queries it really feels like a verbatim depth crawler and really makes me feel like a search 'power-user' again.

For product search or regional results it's still not there but I specifically pay for Kagi for technical queries (after growing frustration with other engines) and I am very happy with that use-case.


Correct, for up-to-date and regional queries Kagi still is a bit lacking, it's the only reason I sometimes fire up Google.


Most of the time you can do "site:reddit.com" at the beginning of your query and get the information you want.


yep. Google has turned into Reddit search engine for me.

I don't entirely blame Google though.

The web has changed. Site content has changed. Much of the content that I'm looking for actually exists as videos, so Google does its best to return mixture of YT and web sites.

Yet, we can't be bothered to click through them. So we rely on human redditor as aggregators, because we don't want to be one ourselves.


A decade ago, I had calculated that a startup would need at least $1B (but more likely $4B) of capex to match Google if trying to use conventional crawl/index tech. With conventional tech, quality upper bound is more or less fixed so you are forced to match breadth to compete which is super expensive. With model-based index, you need less than $100M capex to turn on the proverbial flywheel because you can now sacrifice breadth for very different notion of quality. This is a game changer because we will soon see dozen or so strong startups which will chip away the search share, each trying to leverage, cost optimize and improve this tech.


With all those smart trillion-weight models, can't they figure out which page is a useless keyword trap (90% of what Google Search finds these days) and which is a genuinely useful page? It would be a huge help.


the web might just lack genuine, useful, reliable information for most queries.

People here love to remark that they prefix every query with "reddit" but ime the only difference is that they've tricked themselves into thinking their results are worth something.


I've played with ChatGPT and see it as (at best) just complementary to existing approaches. Unfortunately, I also see it as a vector of misinformation, and potentially harmful. It appears highly knowledgeable on topics that I know little about, but otherwise its replies are often riddled with errors and inaccuracies. It also seems unable to quote sources or references for such information. It seems reasonable to use a statistical process to generate conversational chat and boilerplate, but as a source of truth, it appears not to be.


I think this is called Knoll's law of media accuracy [0] or wikipedia [1] - "Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge". Interestingly enough, I tried asking ChatGPT what it was before googling it, and it confidently told me I was asking about "Confirmation Bias."

But then when I told it what I was talking about was indeed Knoll's Law, it got the definition of the law completely wrong!

> I apologize for any confusion. You are correct, what you're describing is known as "Knoll's Law of Media Accuracy." It states that the accuracy of a story is inversely proportional to the number of people involved in its reporting, and the distance from the original source of information. In other words, the more people involved in reporting a story, the more likely it is to be inaccurate, and the further away the story is from the original source, the more likely it is to be inaccurate.

This is one of the worst answers the bot has ever given me, and I've asked it a wide variety of prompts, from things I know to things I don't, including having it generate me fictional responses.

[0] https://effectiviology.com/knolls-law/

[1] https://en.wikipedia.org/wiki/Erwin_Knoll


Approximately the same phenomenon is also known as the Gell-Mann amnesia effect. It comes up on HN a lot, but for those who haven't encountered it before:

> You open the newspaper to an article on some subject you know well. In Murray's case, physics. In mine, show business. You read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the "wet streets cause rain" stories. Paper's full of them.

> In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know.

https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect


It did the same thing for me on Gell-Mann amnesia, which is the same concept. Read carefully, it's a particularly good example of "wrong but highly convincing":

> Gell-Mann amnesia is a term coined by science writer Michael Crichton to describe the phenomenon where people tend to forget information that contradicts their preconceptions or beliefs. The term is named after physicist Murray Gell-Mann, who won the 1969 Nobel Prize in Physics for his work on the theory of elementary particles.

> According to Crichton, Gell-Mann amnesia occurs when people encounter information that contradicts their beliefs, but rather than updating their beliefs to reflect the new information, they simply forget the information and continue to hold onto their preconceptions. This type of cognitive bias can be a significant barrier to learning and can lead to flawed decision-making.


Exactly. We should be realistic about AI, ChatGPT, and LLMs. You have a great example of how it "confidently" produced atrocious results; even with the oft-cited ad hoc solution of following up, it gave even more misinformation, and you, the user, only know this because you happened to know the answer. Oddly enough, if you google the types of things you are talking about, you will largely get the correct answer as the first result. There is something to be said about being naive about the potential of LLMs, but I am astounded at how hyped and eager everyone is to hand over the future of all knowledge work to them without even clear evidence that that would make sense.


Never tried it myself, but a question comes to mind: Can you ask it to state how confident it is in the truth of its answer? Is it well-calibrated in that sense? If so, perhaps that'd make it more useful.


You should always add "answer in the style of a drunkard" to any prompt. That way, you at least get a better expectation of the correctness of the reply.


A related thing I've done --

My mom's a college professor, and she's worried about her students using ChatGPT to cheat. So she wants to engineer questions in a way that cheating is impossible, perhaps by even incorporating ChatGPT into the curriculum in some ways. To explore possibilities, she's enlisting my help. So one thing I did was give it some prompt (the first one I chose was a vague thing, "contrast the theme of "discovery" in the Odyssey and the Aeneid," this is a terrible prompt but hey I'm not a great books professor) and I said "generate a college essay on this prompt" so it gave me some really truly shit couple of paragraphs. Then I said "that's not really college-level writing can you try again." And then I had it repeat this maybe 3 or 4 times. Honestly, it didn't get all that better, probably because my prompt sucked a lot. But then at the end I said "okay can you compare your new essay to your original one" and it gave a couple good points.

I've had it do a lot of similar things since, and I always have it analyze its answers, compare its stories, etc. One HYSTERICAL answer that it gave was (and note that "The Great Migration" is a play that ChatGPT itself wrote in response to the prompt "can you write a play using the homeric tradition to talk about the great migration, as if you were august wilson"):

> As a language model, I don't have personal preferences, it is not possible for me to say which play is better. Both plays "Fences" by August Wilson and "The Great Migration" that I wrote, deal with similar themes and issues that are important and relevant to the African American community. "Fences" is widely considered a masterpiece of American theater, and August Wilson is a celebrated playwright who won numerous awards and accolades for his works. "Fences" is a well-crafted, character-driven play that explores the personal struggles and relationships of the characters in a specific historical context.

> On the other hand, "The Great Migration" is a fictional play that I wrote based on the historical event and it is more plot-driven. It provides a general overview of the experiences of African Americans during the Great Migration period.

> Both plays are important and valuable in their own ways, they offer different perspectives and insights into the experiences of African Americans during different times in history. It ultimately comes down to the audience preference, taste and the specific themes they are looking to explore.


One easy way to deal with blatant cheating is to have some writing done in class; a grader can then compare a student's out-of-class essays with the in-class writing. It should be straightforward to spot obvious cheaters, or those who feel the need to cheat.


ChatGPT is automated copywriting, what did you expect?


Word is ChatGPT is going to start charging $42/month for advanced access to their service.

Google currently makes about $6 per user for search.

It seems like the revenue model for AI search is better suited as a monthly payment, no ads.

So Google could charge, say, $15/month for AI search with no ads, and still keep their regular search business. I wouldn't count Google out just yet, as much as I wish they would be disrupted. Search has become a joke.


In the past, many things have been considered a serious threat to Google's monopoly:

- Facebook (one day people will not search on Google, but directly on Facebook)

- Apple App Store (one day people will not search on Google, but directly on the App Store)

- Reddit (one day people will not search on Google, but directly on Reddit)

- Voice assistants (one day people will not search on Google, but will ask a voice assistant and get just one right result)

- ChatGPT (one day people will not search on Google, but will ask ChatGPT directly)

Of all these threats, ChatGPT is probably the least dangerous, because there is no way to always get a correct answer (or the best answer the internet can offer).


I don't understand why they freak out.

Every single time I have used ChatGPT (which isn't many), it gave me an absolutely unusable answer completely off the mark.

It's funny to see FAANG CEOs have FOMO over what is essentially smoke and mirrors.


I would guess they are less concerned about where ChatGPT is today than where it will be in 5 years.


Even more convincing when it is wrong?


ChatGPT lacks the ability to prove that output is factually correct (which is an interesting but distinct problem). However, as long as the answer is derivative of supplied information, ChatGPT shines. Even today it can already:

- Write coherent short stories based on a prompt

- Be a personal diary/assistant

- Summarize and extract information from text.

There's a lot of applications for that.


It won't be wrong forever, there's a good chance they'll figure out how to evaluate sources and provide more useful results. And besides, the bar is low -- Google search results for some topics (e.g., programming related) are quite poor, filled with garbage and clone sites.


I think people saying that just don't understand what GPT is. It's not a database of knowledge that sometimes invents stuff. It's a statistical model that ALWAYS invents stuff, which sometimes happens to match reality. It will never be able to evaluate sources, because it doesn't use sources.


isn't "evaluating sources" the exact problem of web search that Google has been throwing billions of dollars at for many years, with an equivalently dedicated and motivated horde of SEO specialists on the other side? How is chatgpt going to easily solve that overnight?


Could you provide an example of what you're asking for? It appears to work well for me. I would estimate that 80% of the answers are very good, and 10% provide at least a glimpse of the information I was seeking.


One of the first things I asked it to do was to "write a program to compute the rise and set times of a star". It got a couple of pieces right, but was way off the mark. I asked it a few other astronomy related questions, and it would try to use a library like Astropy, but make API calls that don't exist. I tried things like "don't use Astropy", and it would just switch to a different library.

It seems to do OK on things like the instructions for Fizz-Buzz.


It's because the tech is moving very fast.


If it's as privacy-intrusive as the rest of their products, I won't use it or promote it.

It's the main reason most non-tech-savvy people avoided buying Google Home. But the product teams are tone deaf.

Bring on a privacy-first ecosystem!


You are not their target audience.


Apparently not a lot of others are either


Apple has privacy in the forefront of its ecosystem.


Oh come on, Apple has had numerous privacy issues as well.

What about uploading every photo to their server to check for child porn?

What about uploading every binary you run to their server to check if its “approved”?

People really need to put down their “sheeple” glasses. Apple is a huge public company like all the others, beholden to profit their shareholders and thats it.

All that virtue signalling that they do is just a smokescreen. They are not any better or worse than MS/Google/Amazon etc…


> What about uploading every photo to their server to check for child porn?

I don't believe that's true. If you use iCloud it goes to their server but if you aren't backing up to their service they are not checking. What was controversial was they were going to check on the device itself so even if you didn't use iCloud your phone was getting scanned.


Using iCloud is the incentivized default (like using an online account on Windows), so it applies to most users.


> What about uploading every binary you run to their server to check if its “approved”?

My understanding was macOS only checked the signature of third party executables before first run.

Windows may let you run it first, but any unknown executable (not just a hash, the file itself) gets sent to Microsoft by default. [0]

[0] https://medium.com/sensorfu/how-my-application-ran-away-and-...


The child porn check is to please the government, not shareholders


This. Apple did this against its own interests as a privacy first company as a way to reduce a real world problem. Personal privacy has always been at odds with law enforcement and Apple is one company that has taken real steps to restrict the unimpeded flow of personal information into governments. The way the CSAM database and checking was implemented was still in a privacy preserving way that made every effort to restrict that same flow while not enabling child sexual exploitation.


Is there any law that required Apple to do this? On a global scale even?


While I’m unsure of laws on a global scale, I do know that law enforcement has repeatedly and publicly pressured Apple for keys-to-the-kingdom style access to customer devices. The justification is often a case that seems unredeemable to the average Joe. I know this from articles that have previously hit HN’s front page.


> They are not any better or worse than MS/Google/Amazon etc…

You sure? I would be surprised if MS, Google, and Amazon are the same when it comes to how they handle and use PII. Note that I say "I would be surprised" because, without having gone to the length of issuing a GDPR right-of-access request with any and all of them, it is more of a gut feeling based on personal experience than hard fact.

> What about uploading every binary you run to their server to check if its “approved”?

You are either being dishonest or misinformed. For one, they don't upload binaries. Secondly, the more important function is checking that it isn't known malware, and thirdly, they no longer log PII like IP address or user ID when performing the check. [1]

1: https://support.apple.com/en-us/HT202491


Apple has privacy in its marketing, but honestly they are intrusive in every way.

Take a look at the processes or network traffic from any mac or iphone.


Take a look at their messaging service that they marketed for years as E2EE while it was closed source and uploaded the chat messages readable by Apple to the cloud


Weak encryption, or cleartext?


Encrypted with shared keys. This wasn’t accurately described in the comment you’re replying to: iMessage is fully E2E but the offload feature which allows you to save space on your phone wasn’t. That helps in the lost phone scenario but not if you’re worried about subpoenas under a hostile government.

Prior to the recent advanced protection program, you could avoid that by not enabling backup for messages. Now you can choose whether data recovery is more important than E2E to you.


> More important to Google, it looked as if it could offer a new way to search for information on the internet.

I'd scratch "new" from that sentence. If it was simply another option for searching, Google would not be worried. They've been milking their monopoly on search for years to sell ads, trading off the quality of search results for years, to the point that Google's useless for a wide range of searches. Now they're looking at a competitor that actually gives users what they need and they know they're screwed.


Cory Doctorow called this with the formal notion well-described in this essay: https://pluralistic.net/2023/01/21/potemkin-ai/#hey-guys


Sundar's weakness is the cause of this, and unfortunately Page and Brin have never been extraordinarily effective executives. Google's early product and culture were fantastic for building what has emerged as one of the most powerful network effects on the web, but nobody can doubt the unbelievable mismanagement of Google X, Google Brain, DeepMind, and countless other cancelled products.

The answer is a new executive who gets it.


indeed. Sundar has said that he takes responsibility for the mismanagement of the company's resources over the last few years, but it's not clear what that actually means.

I have some faith in Ruth Porat but they need a c-suite member who has an equivalent eye for product.



"This is a moment of significant vulnerability for Google": maybe I'm being cynical or too simplistic, but I just read it as "Our stakeholders are going to be sad because they are not profiting as they wanted". I cannot imagine Google being in danger, because they are not leading on that topic.


AI wouldn’t be the threat to Google of yesteryear that it is today. It’s a threat only because Google killed search with ads overtaking the genuine effort to “index the world’s information”. If they hadn’t sold their soul for more of that sweet addictive ad revenue, ChatGPT would be an interesting niche sideshow.


It speaks volumes about the current leadership's abilities when a potential threat pops up and they have to call upon the checked-out founders. It's like somebody mentioned Ballmer being the most lackluster CEO and Sundar said "hold my beer".


Reading that mass-firing email by Sundar, I noticed it was littered with references to AI.


A long, long time ago at a conference I gave a talk (about something that is now common knowledge): SEO is not a product, it's a channel.

And an ad-based revenue model does not work on top of SEO. As soon as the revenue (which is coupled to traffic) goes down, you are inclined to just add more ads or make ads more aggressive, which counteracts good UX, which leads to further traffic loss, which leads to more ads...

Google is not dependent on SEO but, as a de facto monopolist in Western-world search, on user traffic.

User traffic to search will go down from now on, thanks to AI assistants. Ads will go up in aggressiveness and quantity, so their main product will get worse and worse, fast.

So, Google is toast.

If they go down this road.

Or, they go the opposite way. Less ads, more focus on UX including AI Assistants. Revenue comes later.

But as we all know, Google will not go down that way. They are an ad business with a halfway decent search engine attached.


Ad-free results are not always the best experience for everybody, or at least not in today's context. I remember hearing a commentator on a podcast (I forget if they worked for Google or not) who pointed out that so much of online activity is economic that ads (pay for placement) are a powerful relevance indicator.

I think the example they gave was that if someone searches for "Taylor Swift tickets", they are more likely to be interested in the paid results from someone who wants to sell tickets than in the million forum posts discussing tickets (even though they have the same keyword match). And their data proves this. Maybe there are ways to be "smart enough" to avoid this indicator, but a lot of "relevance" is a guess at user intention, so it seems like a damned-if-you-do, damned-if-you-don't situation.


Larry Page and Sergey Brin were on record as saying that quality search results and ads are at odds. So at first they were generally against ads. Here's somewhat of a mention:

https://www.zdnet.com/article/google-advertising-and-search-...

Can't say I blame them, though. See how well you stick to your convictions when someone waves a few billion in front of you :)


I have a theory that shittier search results are so you try 2-3 times to find what you're looking for, so they can show you 3x the ads, or so you click ads because they're probably way more relevant now than the actual results.


It's funny that in another thread I was just talking about Jeff Dean or Demis Hassabis being a much better Google CEO than Sundar.

He shouldn't "present a solution". He should take over from incompetent leadership.


Wasn't Jeff Dean the executive in charge of TensorFlow when it tanked?


which "he" do you mean?


In the article it's Jeff Dean.


I don't know how common this is but we have a massive disruptor (chatGPT) and economic problems at the same time.

Every big tech company should be freaking out. It's fat trimmin' time.


Non-paywall link: https://archive.is/3hAkR


I was working there when they freaked out about Facebook and made everyone integrate with Google+. I wonder if they learned anything from that mess.


I perceive Google as a less stable company than most. An equally effective product in any of the areas they occupy, coupled with leadership committed to not shutting the product down after six months, isn't that high of a bar to compete with, should someone else desire.

The real issue is Google is in bed with the CIA and many other agencies and are willing to play ball for all kinds of kickbacks. Competing with their products has a technically feasible angle, but competing their position within the power structure of the non-separated tech and state? Impossible, at this point.


Google has lots of useful products that aren’t advertising-based, but they mostly seem to be loss leaders.

It seems to be by their own making, too: they famously shut everything down after a long enough experiment phase of "how might we leverage this to sell ads?"

Something like 80% of Google’s revenue is advertising.


> Google has lots of useful products that aren’t advertising-based, but they mostly seem to be loss leaders.

Which ones? I've lost track of Google products in recent years.


The stuff they charge for that makes up the rest of their revenue:

Cloud, Photos, Drive storage, YouTube Premium, etc.


I've been so used to them being built-in Android Google apps that I forgot there were other options.


YouTube, GCP, Android/Pixel.


I would posit that it's more Microsoft freaking out over ChatGPT than Google. MSFT will never get that $10 billion back.


> MSFT will never get that $10 billion back

Easily. By integrating it into their developer tools, office suite, Azure, Dynamics, etc. You can automate/support SO many tasks with the tool suite that OpenAI offers.

In addition to that, the way I understand it they can simply make money by licensing others to use those tools. They already started offering the OpenAI services through Azure.


I assume ChatGPT will cost around $1 per query for consumers.

That's low enough to put human copywriters out of business, but high enough that I don't ever see regular humans pay for it for day-to-day tasks.

So eventually this becomes some sort of complex B2B product, and that is very difficult to execute.


You're off by two orders of magnitude. Each query costs around 1 cent in every article I've seen.

> but high enough that I don't ever see regular humans pay for it for day-to-day tasks.

And “640K ought to be enough for anyone”.

Of course this will get cheaper over time as compute gets cheaper and the cloud providers design custom chips optimized for it.


"Query" is ambiguous, because ChatGPT runs on "tokens", not "queries". In real life you will want an order of magnitude more tokens if you're generating something useful and not just testing/playing.

Compute doesn't get cheaper over time; it gets more expensive. This isn't 1993 anymore.
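To make the token-vs-query distinction concrete, here's a back-of-envelope estimate assuming a hypothetical flat price of $0.02 per 1K tokens (roughly the GPT-3 davinci list price at the time; real pricing varies by model and changes often, so the numbers are illustrative only):

```python
# Back-of-envelope cost of one LLM query from its token counts, assuming a
# hypothetical flat per-token price. Actual per-model pricing differs.

def query_cost(prompt_tokens: int, completion_tokens: int,
               price_per_1k_tokens: float = 0.02) -> float:
    """Estimate the dollar cost of one query from its token counts."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1000 * price_per_1k_tokens

# A short chat turn: ~100 prompt + ~300 completion tokens -> under a cent.
short_turn = query_cost(100, 300)

# A long generation: ~1,000 prompt + ~3,000 completion tokens ->
# an order of magnitude more for the same "one query".
long_turn = query_cost(1000, 3000)
```

This is why "cost per query" claims spread across an order of magnitude: the same single query can consume 400 tokens or 4,000 depending on what you're generating.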


It's only getting more expensive because we've let AWS, Google Cloud, and Azure own all the machines.


People are using it now for free; why would it cost $1?


It is a free trial.


They could get multiples of that $10B back if they sold it to investors now.

Remember, in today's world you don't even need a viable product to make loads of money.


> They could get multiples of that $10B back if they sold it to investors now.

Maybe so, but MSFT is not a mark-to-market / opportunistic trader.


ChatGPT will start charging for some features; I think MS will get that $10B back in a shorter time than expected.


Shutterstock just introduced Shutterstock AI, partnering with DALL-E.

They will get it back quite quickly.


Google's monopolistic search business is seriously challenged for the first time. Good.

We need more competition.


Not really, search is a solved problem, apart from the politics that eliminate search results. There really shouldn't be more than 2 search engines around.


Google: Don't shovel AI generated content into our search results.

Google: We need to focus on AI generated content.


They could have fixed search just fine but opted to focus too much on ads...


Related, and I'm most probably missing something, but is there a way to use/test ChatGPT without having to create an account on OpenAI?


They are the least “open” company out there. They are also nosy about your business and always “afraid” you are doing something you shouldn’t be doing. Unless you are paying for the API of course.

They went from very open to very closed and "private", to opening up again because competition caught up with them (look up Midjourney, for example).


Demonstrably untrue; between Google announcing models but never releasing them and Anthropic being invite-only, there's plenty of competition for that title.


I would not have criticized them if they had started as a "private" AI company, or hey, just an AI company trying to figure it out in this market. Everyone is. However, they took the moral high ground and named themselves as such: OpenAI. And as soon as they had some good results (GPT-2 and GPT-3) they flipped instantly. I think they thought that they had a unique edge compared to anyone else, and that they'd own the market.

They are wrong, and I think in the future they'll be just another AI company out there.


WAR IS PEACE IGNORANCE IS STRENGTH FREEDOM IS SLAVERY … CLOSED IS OPEN


They let you log in with a Google or Microsoft account.


No. There are a few restricted applications built on top of it, but if you want direct interaction, you need to create an account.


Oh, sure, they are in trouble. :DDDD



