Agreed. I'm increasingly using ChatGPT to research topics. In that way, I can refine my question, drill down, ask for alternatives, supply my own supplementary information, etc.
I think of AI as an intelligent search engine / assistant and, outside of simple questions with one very specific answer, it just crushes search engines.
I use the LLMs to find the right search terms, and that combination makes search engines much more useful.
LLMs by themselves give me very superficial explanations that don't answer what I want, but they are also a great starting point that will eventually guide me to the answers.
Likely because I'm asking specific Python development questions, I'm getting specific answers (and typically 2-3 variations) that I can drill down into. That response is likely different than for a more general question with a wider array of possible answers.
It's especially great in voice mode. I love to take a long walk or jog with earbuds in and use it to learn a new topic. I don't learn well through lecturing; I learn much better by asking it to give a very big-picture overview, starting to build an intuitive understanding, and then asking pointed questions to fill in the blanks and iterate toward a deeper understanding. I find ChatGPT's voice mode to be very engaging in that use case—far more so than Gemini's, actually—and I've used it to learn a bunch of new technologies. It's like having a never-bored professor friend you can call and ask endless questions, and keep getting answers without annoying them!
Yeah, I find setting it in voice mode and then directing it to a specific site that I'm interested in discussing is super useful. So I start off in ChatGPT*, then I switch windows to what I'm interested in looking at. For instance, if I want to go over a GitHub repo, I can verbally ask it to go there and then chat with it as I read through the README file, and it provides a great sounding-board experience. No pun intended. Heck, I'm even dictating this response through Wispr Flow, which I have found to be more useful than I had anticipated.
* Gemini lets you do this by actually streaming your screen through and verbally chatting about whatever is on screen. While interesting, I find the responses to be a little less thorough. YMMV.
It is extremely dangerous to believe that anything said by an AI assistant is correct.
Even in supposedly authoritative peer-reviewed research papers, it is extremely common to find errors whenever the authors claim to quote earlier work, because the reality is that most of them do not bother to read their cited bibliography carefully.
When you get an answer from an AI, the chances greatly increase that the answer regurgitates errors present in the publications used for training. At least when you get an answer from a real book or research paper, it lists its sources, and you can check them to find whether they have been reproduced rightly or wrongly. With an AI-generated answer, it becomes much more difficult to verify its truthfulness.
I will give an example of what I mean, which I happened to stumble upon today. I read a chemistry article published in 2022 in a Springer journal. While the article contained various useful information, it also contained a claim that seemed suspicious.
In 1782, the French chemist Guyton de Morveau invented the word "alumine" (French) = "alumina" (Latin and English) to name what is now called aluminum oxide, which at that time was called earth of alum ("terra aluminis" in Latin).
The 2022 article claimed that the word "alumina" had already been used earlier in the same sense, by Andreas Libavius in 1597, who would thus have been the creator of this word.
I found this hard to believe, because the need for such a word appeared only during the 18th century, when European chemists, starting with the Swedish ones, finally went beyond the level of chemical classification inherited from the Arabs and began to classify all known chemical substances as combinations of a restricted set of primitive substances.
Fortunately, the 2022 article had a detailed bibliography, and using it I was able to find the original work from 1597 and the exact paragraph it referred to. The claim of the 2022 article was entirely false. While the paragraph did contain the word "alumina", it was not a singular feminine adjective (i.e. agreeing with "terra") referring to the "earth of alum". It was not a new word at all, but just the plural of the neuter word "alumen" (= English alum), in the sentence "alums or salts or other similar sour substances can be mixed in", where "alums" meant "various kinds of alum", just as "salts" meant "various kinds of salt". Nowhere in the work of Libavius was there any mention of an earth that is a component of alum and that could be extracted from it (in older chemistry, "earth" was the term for any non-metallic solid substance that neither dissolves in water nor burns in air).
I have given this example in detail in order to illustrate the kinds of errors I very frequently encounter whenever authors claim to quote other works. While this was an ancient quotation, plenty of similar errors appear when quoting more recent publications, e.g. when quoting Einstein, Dirac, or the like.
I am pretty sure that if I asked an AI assistant something, the number of errors in the answers would be no less than when reading publications written by humans, but the answers would be more difficult to verify.
Whoever thinks that they can get a quick answer to any important question in a few seconds and be done with it is naive, because the answer to any serious question must be verified thoroughly; otherwise, there is a great chance that those who trust such answers will just spread more disinformation, like the sources on which the AI was trained.
Appreciate your perspective. To be clear, I'm using it to become a better small game developer and not relying on it to answer anything I would classify as an "important question". Moreover, I don't take everything AI tells me to be 100% accurate (and I never copy/paste the response). Rather, I use it as an assistant with which I can have a back and forth "conversation" to acquire other perspectives.
Despite a lot of effort, I'm just not a highly skilled developer and I don't have any friends / colleagues I can turn to for assistance (I don't know a single software developer or even another person who enjoys video games). While resources like StackOverflow are certainly useful, having answers tailored to my specific situation really accelerates progress.
I'm not trying to cure cancer here, and much of what would be considered the "best approach" for a small game architecture is unique to the developer. As such, AI is an incredible resource to lean on for information tailored to my unique use case ("here is my code... how does {topic} apply to my situation?").
And yes, I find errors from time to time, but that's a good thing. It keeps me on my toes and forces me to really understand the response / perspective.