Tangential: I’ve been using Phind for programming-related searches for a couple of months now, and while it was a marvel at first, it unfortunately seems to keep getting worse. I’m not talking about adding a quota for GPT-4, which is totally understandable.
The UI, which used to be very intuitive, has been a confusing mess since the “pair programmer” (or whatever it’s called) update. The external links in the sidebar, which mimicked a traditional search engine and which I found quite useful, are gone, replaced with references appended after the generated answer that may or may not exist and also waste vertical space.
The answers seem worse too, even in GPT-4 mode. It used to quickly correct itself if I pointed out that something was wrong or that I was actually looking for something else. Now there’s a lot of useless repetition of what was said before it changes its mind, if it does at all.
Another tangent: I feel this speaks to the hazard of building on these generative AI APIs. Your product may work great on Day 1, but there’s no way to guarantee it’ll still work as well on Day 500, whether because the models get nerfed or because they become stale over time.
This is extremely interesting in light of YC providing early access for AI startups (with one of the big benefits being OpenAI credits). YCombinator explicitly used to advise against building platforms and startups on top of somebody else’s tooling for reasons like these, yet it seems they’re past that advice now and back on the hype train.
Phind co-founder here. We released an update on Friday that should’ve fixed many of the quality issues with the pair programmer. Have you tried using it since?
We’re also adding ways to view all the external links in the pair programmer. And we’re keeping the old “basic search” mode, so you can keep using it if you wish.
This. I find the new “chain” search worse than the original model was when it first came out. Lately I feel like I only get a useful answer about half the time.