Hacker News

> If the LLM can automate giving answers then we don’t need SO as much as before.

This is probably the misunderstanding: the LLM can only automate giving answers because it has been trained on all of SO (or other similar communities). It's a summary of SO, not an alternative to it. When new problems arise, the LLM will need to be re-trained on the new SO answers; it will not be able to synthesize new knowledge on its own.

So, if SO is dead, the LLM can't get the info anymore to answer questions about new topics - but you won't be able to tell for a long time, long enough to kill SO most likely (assuming this actually gets traction, of course).



I’m not convinced the LLM doesn’t have emergent answers. I often ask ChatGPT questions about data wrangling that are quite esoteric, such as “write some code in R that sorts an array as alphabetical, but puts x and y at the beginning”. Even if it gives the wrong answer at first, it seems to get it right eventually.
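For concreteness, the task in that prompt can be sketched with a custom sort key (shown here in Python rather than R, and the function name is made up for illustration):

```python
def sort_with_xy_first(items):
    # Sort key is a tuple: False (0) for "x"/"y" so they sort first,
    # True (1) for everything else; ties break alphabetically.
    return sorted(items, key=lambda v: (v not in ("x", "y"), v))

print(sort_with_xy_first(["b", "y", "a", "x", "c"]))
# ['x', 'y', 'a', 'b', 'c']
```

The point stands either way: the task is a small composition of known techniques (a tuple sort key), which is exactly the kind of recombination an LLM handles well without the literal answer existing on SO.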


But why would SO (or an equivalent) be dead? I know very little about ML, so it's entirely possible I'm way way way off here, but it seems like a product with this sort of tool integrated would be capable of determining when a query produced a not-very-useful result, and could even aggregate such queries. We'd have a very powerful training system where the AI can communicate back "hey, I need training on this sort of stuff" and then iterate. If this is valuable, people can be paid to provide this training input.


If Google launches this tool, Google will make money off the answers this tool provides. The fact that this tool is trained on the content from SO will not mean SO gets any money. Also, if users just get the answers from Google Bard, they will not visit SO, and will not contribute to SO's community or revenue. So, the SO community will eventually die if Google Bard is good enough.

The whole premise of the economics of these LLMs is built on the assumption that the training data is (mostly) free. If you need to pay people to provide the training input, you will quickly find that you're spending more money on creating the training data than you're getting out of the finished model.


Stack Overflow never paid anyone though, so it seems possible for Google to launch a service where people answer questions to feed the AI and get back reputation tokens similar to SO's.

I mean, they could make a game, where people have to try to beat the AI (and other humans) in making the best answers to questions.


Sure, they can try to essentially create and run a new SO - though that's still far more costly than what they do today. Especially when considering similar effects on other content sites.


People will visit SO less, making it less profitable to stay alive.


Is this the case? If the LLM must be re-trained each time a new problem arises, it means that it doesn't "reason"... so what's the point?



