I used it about 15 minutes ago, to help me diagnose a UI issue I was having. In about 30 seconds, it gave me an answer that would have taken me about 30 minutes to figure out on my own. My coding style (large files with multiple classes, well-documented) works well for AI. I can literally dump the entire file into the prompt, and it can scan it in milliseconds.
I also use it to help me learn about new stuff, and the "proper" way to do things.
Basically, what I used to use StackOverflow for, but without the sneering, and with much faster turnaround. I'm not afraid to ask "stupid" questions; that is critical.
Like SO, I have to take what it gives me with a grain of salt. It's usually too verbose and doesn't always match my style, so I end up doing a lot of refactoring. It can also give rather "naive" answers that I can refine. The important thing is that I usually get something that works, so I can walk it back and figure out a better way.
I also won't add code to my project that I don't understand, and the refactoring helps me there.
I have found the best help comes from ChatGPT. I heard that Claude was supposed to be better, but I haven't seen that.
I don't use agents. I've never really found automated pipelines to be useful in my case, and that's sort of what agents would do for me. I may change my mind on that, as I learn more.
What I like about chatbots vs. SO is the ability to keep a running conversation, instead of 3+ tabs, and to tune the specificity toward my problem.
I've also noticed that if I look up the same question on SO, I often find the source code the LLM copied. My fear is: if chatbots kill SO, where will the LLMs' copied code come from in the future?
I use Perplexity as my daily driver, and it seems to be pretty good at piecing together the path forward from documentation, since it has built-in web search when you ask a question. Hopefully LLMs go more in that direction and less in the SO copy-paste direction, sidestepping the ouroboros issue.
Agreed. It was a very important part of my personal journey, but, like so many of these things (What is a “payphone,” Alex), it seems to have become an anachronism.
Yesterday, I was looking at an answer, and I got a popup, saying that a user needed help. I dutifully went and checked the query. I thought “That’s a cool idea!”. I enjoy being of help, and sincerely wanted to be a resource. I have gotten a lot from SO, and wanted to give back.
It was an HTML question. Not a bad one, but I don't think I've ever asked or answered an HTML question on SO. I guess I have the "HTML" tag checked, but I see no other reason for it to ask for my help.
I never used SO myself, except to understand it for doing business with developers, but I know many found the community aspect/self-building/sense-of-worth aspect important, same with Quora. Do you have an idea of how this will change things for developers? Was that a real thing I was seeing? (Maybe even an opportunity!)
Well, people in general tend to have self-image issues, and they seem to be more prevalent in the developer community than in other vocations.
One of the reasons that SO became so successful was the "gamification" of answering questions. Eventually, they started giving the questions themselves more attention, but by then, the damage was done.
Asking questions became a "negative flag." If you look at most of the SO members with very high karma, you will see that their total count of questions asked is a one-digit value, with that digit frequently being "0."
So the (inevitable) result was that people competed to answer as many questions as possible, in order to build high karma scores. In its heyday, you would get answers within seconds of posting a question.
The other (inevitable) result was that people who asked questions were considered "lesser people," and that attitude came across, loud and clear, in many of the interactions that more senior folks had with questioners. They were treated as "supplicants." Some senior folks were good at hiding that attitude; some, not so much.
Speaking only for myself, I suspect that I have more experience and expertise actually delivering product than many of the more senior members, and it is pretty galling to be treated with so much disrespect.
And, of course, another inevitable thing was that the site became a spam haven. There was a lot of "shill-spamming," where someone asks a question and many of the "answers" point to some commercial product. If you attempted to seriously answer the question, your answer was often downvoted, causing you damage. I think those got nuked fairly quickly, but it was quite a problem for a while. (It's still a huge problem in LinkedIn groups; I never participate in those anymore.)
I have found that, whenever I design anything, whether an app or a community, I need to take human nature into account.
Yes, it's been an important part of tricking humans into sharing their knowledge with other humans, to obtain a huge Q&A dataset to train the AI without any consent from said people.
My goal from posting on various forums like SO is to scale the impact of my knowledge to as many people as possible, to give something back. I really don't care what modality or mechanism is used to distribute my contribution to others.
Why should I care if an SO answer I posted 7 years ago ends up in the output of some random model? I wasn't getting paid for it anyway, and didn't expect to.
I view my random contributions across the web ending up in LLMs as a good thing; my posts now potentially reach even more people and places than they would have on a single forum site. That's the whole point of me posting online. Maybe I'm an outlier here.
>>I'm not afraid to ask "stupid" questions -That is critical.
AI won't judge and shame you in front of the whole world for asking stupid questions, or for not RTFM'ing well enough, like Stack Overflow users do. Nor will it tell you your questions are irrelevant.
I’ve always worked that way. In school (or in seminars), I ask questions that may have the whole room in stitches, but I always learn the lesson. The worst teacher I ever had was a genius calculus professor who would harangue you in front of the class for asking a “stupid” question. That’s the only class in which I ever took an Incomplete.
That’s the one thing about SO that I always found infuriating. It seems their favorite shade is implying that you’re “lazy,” and shaming you for not already having the answer. If anyone has ever looked at my code, “lazy” is probably not a word that springs to mind.
In most cases, I could definitely get the answer myself, but it would take a while, and getting pointers might save me hours. I just need a hint, so that I can work out an answer.
With SO, I usually just bit my tongue and accepted the slap, along with the answer.
An LLM can actually look at a large block of code, and determine some boneheaded typo I made. That’s exactly what it did, yesterday. I just dumped my entire file into it, and said “I am bereft of clue. Do you have any idea why the tab items aren’t enabling properly?”. It then said “Yes, it’s because you didn’t propagate the tag from the wrapper into the custom view, here.” It not only pointed out the source error, but also explained how it resulted in the observed symptoms.
In a few seconds, it not only analyzed but understood an entire 500-line view controller source file, and saw my mistake, which was just failing to do one extra step in an initializer.
There’s absolutely no way that I could have asked that question on SO. It would have been closed down, immediately. Instead, I had the answer in ten seconds.
I do think that LLMs are likely to “train” us not to “think things through,” but they said the same thing about calculators. Calculators just freed us up to think about more important stuff. I am not so good at arithmetic these days, but I no longer need to be. It’s like machine code: I learned it, but don’t miss it.
>>I’ve always worked that way. In school (or in seminars), I ask questions that may have the whole room in stitches, but I always learn the lesson.
In my experience, if a question is understood well enough, it basically translates directly into a solution. In most cases, parts of the question are not well understood, or require going into detail, or simplification, or involve a definition we don't know, etc.
This is where being able to ask questions and get clear answers helps. AI basically helps you understand the problem as you probe deeper and deeper into the question itself.
Most human users would give up on answering you after a while; several would put you through a humiliating ritual and leave you with a lifelong fear of asking questions. That prevents learning, since a good way to develop imagination is by asking questions. There is only so much you can derive from a vanilla definition.
AI will be revolutionary for just this reason alone.
Forcing you to read through your 500-line view controller does have the side effect of teaching you a bunch of other valuable things and strengthening your mental model of the problem. Maybe all unrelated to fixing your actual problem, of course, but also maybe helpful in the long run.
Or maybe not helpful in the long run. I feel like AI is most magical when used on things that you can completely abstract away: as long as it works, I don't care what's in it. Especially libraries, where you don't want to read the documentation or develop a mental model of what they do. For your own view, I don't know; it's still helpful when AI points out why it's not working, but it's more of a balance versus working on it yourself, to understand it too.
Well, the old Java model, where you have dozens of small files for even the simplest applications, may be better for humans, but it's difficult to feed to an LLM prompt. With the way I work, I can literally copy and paste. My files aren't so big that they choke the server, but they are big enough to encompass the whole domain. I use SwiftLint to keep my files from getting too massive, but I also like to keep things that are logically connected together.
Judge for yourself.
Here's the file I am working on: [0].
The issue was in this initializer: [1]. In particular, this line was missing: [2]. I had switched to using a UIButton as a custom view, so the callback only got the button, instead of the container UIBarButtonItem. I needed to propagate the tag into the button.
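For anyone not following the links, here is a rough, UIKit-free sketch of the shape of that bug. The class names below are simplified stand-ins (not the actual code from the file above): a wrapper item carries a `tag`, a callback only ever sees the inner button, and the fix is one line in the initializer that copies the tag down.

```swift
// Stand-in for a custom view (e.g. the UIButton in the real code).
class Button {
    var tag: Int = 0
}

// Stand-in for the wrapper (e.g. the UIBarButtonItem in the real code).
class BarItem {
    var tag: Int
    let customView: Button

    init(tag: Int, customView: Button) {
        self.tag = tag
        self.customView = customView
        // The missing step: without this line, customView.tag stays 0,
        // so a callback that receives only the button can't tell
        // which bar item it belongs to, and the enable logic fails.
        customView.tag = tag
    }
}

let button = Button()
let item = BarItem(tag: 3, customView: button)
print(button.tag)  // prints "3"
```

The point is only that the tag has to be propagated from the wrapper into the custom view at initialization time, since the wrapper itself never reaches the callback.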
Agree on the verbosity and occasional naivety. But the fact that it gives working starting points is what really moves the needle. It gets me unstuck faster, and I still get to do the creative, architectural stuff.
I’ll ask it how to accomplish some task that I’ve not done, before, and it will give me a working solution. It won’t necessarily be a good solution, but it will work.
I can then figure out how it got there, and maybe determine a more effective/efficient manner.