Just yesterday I did my first Deep Research with OpenAI on a topic I know well.
I have to say I am really underwhelmed. It sounds all authoritative and the structure is good. It all sounds and feels substantial on the surface but the content is really poor.
Now people will blame me and say: you have to get the prompt right! Maybe. But then at the very least put a disclaimer on your highly professional-sounding dossier.
You didn't expect it to do the whole job for you at PhD level, did you? You did? Hmm.. ;) They're not there yet, but they're getting closer. Quite some progress for three years.
I think what some people are finding is it's producing superficially good results, but there are actually no decent 'insights' integrated with the words. In other words, it's just a super search on steroids. Which is kind of disappointing?
This sounds like a good thing! It suggests that "sounding professional" is becoming less effective as a means of persuasion, which means we'll have much less fallacious logic floating around and will ultimately get back to our human roots.
I think it's bound to underwhelm the experts. What this does is go through a number of public search results (I think it's Google search for now; it could be an internal corpus). It therefore skips all the paywalled and proprietary data that is not directly accessible via Google. It can produce great output, but it's limited by the sources it can access. If you understand the topic better, you likely know more, and you know sources that Google hasn't indexed yet. Moreover, there is a possibility that most Google-surfaced results are dumbed-down, simplified versions written to appeal to a wider audience.