
On SO I can spend time digging through the questions the search index thinks are related, reading through the answers and the comments on the answers. If I'm lucky, I find what I need. If not, I then need to spend more time trying to formulate a question in a way that won't get downvoted or marked as a duplicate. Then I need to wait for an answer.

Or I can spend far less time formulating a question for ChatGPT and generally get a helpful, focused answer without any pedantic digressions.

It seems likely that the AI benefits from the information in SO. If OpenAI can help improve the SO experience, that would be fantastic.



Yeah, the problem is that you are relying on free contributors, and those contributors will get discouraged if their ideas can just be taken by ChatGPT and passed off as its own solutions.


Most SO answers are clarifying a niche implementation detail or gotcha of a programming language, troubleshooting someone's build configuration, etc. If an LLM trained on that info and later helped someone solve their problem by spitting out an answer, I don't see who was discouraged, nor do I think any "ideas" were "stolen."

You don't go to SO to crowdsource creative ideas. It's for very specific one-off questions that many people will likely find themselves asking at some point.


Also, people rely on the feedback to show how helpful their contributions are. The SO economy relies on "karma". If you silo off the consumption from the production, you get a situation where producers are no longer incentivized.


Here is one easy way to solve this problem based on my current workflow: ChatGPT recognizes the novel solution you arrive at in your coding and prompts you to review the Q&A summary it creates and posts on your behalf. I would happily do this.


Agreed, and I believe SO and OpenAI must realize this also. It's in everyone's best interest to keep the contributions coming. I certainly hope they can figure out a way to achieve that.


By that logic moderators on Reddit should be upset that people are profiting off their free services.

For some reason, they don't. Honestly, I don't understand why, but there is a cohort of people out there who are ok with it.


I think it's about changing expectations.

If one becomes a Reddit moderator, then from day one they know Reddit will benefit from their work. If they didn't want that, they wouldn't have become one. When this changed (say, when Reddit closed their API), the moderators got really upset.

But when people posted on StackOverflow, they expected that their work would be used by fellow humans, and that they would get recognition for their hard-won answers (even if it's just their name in the rankings). When this changed, people got upset.

Either way, I'd expect that people who joined StackOverflow after this deal was announced are not going to be upset. But they are the minority, given how long SO has been around.


Eh, I think people's motivations for responding on forums like SO have little to do with whether ChatGPT will incorporate their information or not.


If you can predict the future about what compels people to work for giant corporations for free, go and be a billionaire.


Until ChatGPT gives you a plausible-sounding but completely wrong answer and you have no way to react - you can't explain that it's wrong, or downvote, or avoid that poster.

(Well, you can stop using ChatGPT, and that's what I ended up doing. General idea or inspiration? Sure, I can ask it. Specific technical question? Nope, google it is)



