
Not true at all. The original ChatGPT was useless other than as a curious entertainment app.

Perplexity, OTOH, has almost completely replaced Google for me now. I'm asking it dozens of questions per day, all for free because that's how cheap it is for them to run.

The emergence of reliable tool use last year is what has skyrocketed the utility of LLMs. That has made search and multi-step agents feasible, and by extension applications like Deep Research.
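To make "tool use" concrete, the loop is roughly this shape (stubbed model and search function, purely illustrative, not any particular vendor's API):

    # Toy illustration of tool use for search: the model emits a structured
    # call, the harness runs it, and the result goes back into the context.
    # Both fake_model and web_search are stand-ins, not real APIs.
    import json

    def fake_model(messages):
        # a real LLM would decide this; here we hard-code one search call
        if not any(m["role"] == "tool" for m in messages):
            return {"tool": "web_search", "args": {"query": "perplexity deep research"}}
        return {"answer": "summary written from the tool results"}

    def web_search(query):
        return [{"title": "stub result", "url": "https://example.com"}]

    messages = [{"role": "user", "content": "What is Deep Research?"}]
    while True:
        out = fake_model(messages)
        if "tool" in out:
            result = web_search(**out["args"])
            messages.append({"role": "tool", "content": json.dumps(result)})
        else:
            print(out["answer"])
            break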



If your goal is to replace one unreliable source of information (Google's first page) with another, sure - we may be there. I'd argue GPT-3.5 already outperformed Google for a significant number of queries. The only difference between then and now is that the context window is now large enough that we can afford to paste into the prompt what we hope are a few relevant files.

Yet what's essentially "cat [62 random files we googled] > prompt.txt" is now being confidently presented with academic language as "62 sources". This rubs me the wrong way. Maybe this time the new AI really is so much better than the old AI that it justifies using that sort of language, but I've seen this pattern enough times that I can be confident that's not the case.
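To spell out the pattern I'm complaining about (a toy sketch with hypothetical file names, obviously not any vendor's actual code):

    # Retrieved pages get concatenated into the prompt and then presented
    # back to the user as numbered "sources". The directory name and prompt
    # wording are made up for illustration.
    from pathlib import Path

    retrieved = sorted(Path("scraped_pages").glob("*.txt"))  # the "62 sources"

    context = ""
    for i, page in enumerate(retrieved, start=1):
        context += f"[{i}] {page.name}\n{page.read_text()}\n\n"

    prompt = (
        "Answer the question using the numbered documents below and cite "
        "them as [n].\n\n" + context + "Question: <user question here>"
    )
    # The model never chose or vetted these documents; the search step did.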


> Yet what's essentially "cat [62 random files we googled] > prompt.txt" is now being confidently presented with academic language as "62 sources".

That's not a very charitable take.

I recently quizzed Perplexity (Pro) on a niche political issue in my niche country, and it compared favorably with a special-purpose RAG built on exactly that news coverage (it was faster and more fluent; the info content was the same). As I'm personally familiar with these topics, I was able to manually verify that both were correct.

Outside these tests I haven't used Perplexity a lot yet, but so far it does look capable of surfacing relevant and correct info.
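For context, by "purpose-built RAG" I mean something roughly this shape (a heavily simplified sketch with placeholder documents, not my actual code, which used a proper embedding model over the news archive):

    # Minimal retrieve-then-prompt sketch: score documents against the
    # question, take the top few, and paste them into the prompt.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = ["...news article 1...", "...news article 2..."]  # placeholder docs
    question = "What did party X propose on issue Y?"

    vec = TfidfVectorizer().fit(corpus + [question])
    doc_vecs, q_vec = vec.transform(corpus), vec.transform([question])
    top = cosine_similarity(q_vec, doc_vecs)[0].argsort()[::-1][:3]

    prompt = "Answer from these excerpts:\n" + "\n---\n".join(corpus[i] for i in top)
    prompt += f"\n\nQuestion: {question}"
    # prompt then goes to whatever LLM you have on hand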


Perplexity with DeepSeek R1 (they have the real thing running on Amazon servers in the US) is a game changer: it doesn't just use the top results from a Google search, it considers which domains to search for information relevant to your prompt.

I boycotted AI for about a year, considering it to be mostly garbage, but I'm back to perplexifying basically everything I need an answer for.

(That said, I agree with you that they're not really citations, but I don't think they're trying to be academic; it's just "here's the source of the info".)
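Just to illustrate the domain idea (a toy guess at the concept, definitely not Perplexity's actual pipeline; the topic and domain list are made up):

    # Ask the model for likely authoritative domains first, then restrict
    # ordinary web searches to them with site: filters.
    prompt_for_planner = (
        "List 3 domains likely to have authoritative coverage of: "
        "'EU battery recycling rules 2024'. Reply as a comma-separated list."
    )
    # Imagine the model replies with something like:
    planned_domains = ["europa.eu", "reuters.com", "politico.eu"]

    queries = [f"site:{d} EU battery recycling rules 2024" for d in planned_domains]
    for q in queries:
        print(q)  # each query then goes to an ordinary search API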


I'd love to read something on how Perplexity+R1 integrates sources into the reasoning part.


> all for free because that's how cheap it is for them to run.

No, these AI companies are burning through huge amounts of cash to keep the thing running. They're competing for market share - the real question is will anyone ever pay for this? I'm not convinced they will.


> They're competing for market share - the real question is will anyone ever pay for this?

The leadership of every 'AI' company will be looking to go public and cash out well before this question ever has to be answered. At this point, we all know the deal. Once they're publicly traded, the quality of the product goes to crap while fees get ratcheted up every which way.


That's when the 'enshittification' engine kicks in: pop-up ads on every result page, etc. It's not going to be pretty.


The question of "will people pay" is answered: OpenAI alone is at something like $4 billion in ARR. There are also (relatively) smaller players with impressive revenue, many of whom are profitable.

There are plenty of open questions in the AI space around unit economics, defensibility, regulatory risks, and more. "Will people pay for this" isn't one of them.


As someone who loves OpenAI’s products, I still have to say that if you’re paying $200/month for this stuff then you’ve been taken for a ride.


Honestly, I've not coded in 5+ years (RoR), and a project I'm involved with needed a few days' worth of TLC. A combination of Cursor, Warp, and OAI Pro delivered the results with no sweat at all: an upgrade from Ruby 2 to 3.7, a move to jsbundling-rails and cssbundling-rails, a Yarn upgrade, and an all-new pipeline. That's not trivial stuff for a production app with paying customers.

The obvious crutch of this new AI stack reduced go-live time from 3 weeks to 3 days. Well worth the cost IMHO.


Yeah, I'm skeptical about the price point of that particular product as well.


This is my first time using anything from Perplexity and I am liking this quite a bit.

There seems to be such variance in the utility people find in these models. I think it's like how Feynman wouldn't find much value in what a language model says about quantum electrodynamics, but neither would my mom.

I suspect there is a sweet spot of ignorance and curiosity.

Deep Research seems to be reading a bunch of arXiv papers for me, combining the results and then giving me the references. Pretty incredible.
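The fetch-and-cite part isn't magic on its own; a toy version is just the public arXiv API (example topic made up), with the synthesis step being where the model earns its keep:

    # Fetch a few arXiv entries via the public Atom API and list them as
    # references. A Deep-Research-style tool would then read the abstracts
    # or PDFs and synthesize them, citing these links.
    import urllib.request, urllib.parse
    import xml.etree.ElementTree as ET

    query = urllib.parse.urlencode(
        {"search_query": "all:sparse autoencoders interpretability", "max_results": "5"}
    )
    with urllib.request.urlopen(f"http://export.arxiv.org/api/query?{query}") as r:
        feed = ET.fromstring(r.read())

    ns = {"a": "http://www.w3.org/2005/Atom"}
    for entry in feed.findall("a:entry", ns):
        title = entry.find("a:title", ns).text.strip()
        link = entry.find("a:id", ns).text.strip()
        print(f"- {title}\n  {link}")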


It's not free because it's cheap for them to run. It's free because they're burning late-stage VC dollars. Despite what you might believe if you only follow them on Twitter, the biggest input to their product, i.e. a search index, is mostly based on Brave/Bing/SerpAPI, and those numbers are pretty tight. Big expectations for ads will determine what the company does.


Yeah, I don't get OP's take. ChatGPT 3.5 was basically just a novelty, albeit an exciting one. The models we've gotten since have ingrained themselves into my workflows as productivity multipliers. They are significantly better and more useful (and multimodal) than what we had in 2022, not just marginally better.



