InstructGPT is basically click-through-rate optimization. The underlying models are genuinely impressive and capable for a computer program, but they are then trained and tuned with an explicit loss function that rewards manipulating what human scorers click on, in a web browser or the like.
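For concreteness: per the InstructGPT paper, that tuning step fits a reward model to pairwise human preferences (which of two responses the rater picked) and then optimizes the policy against it. Below is a minimal, hypothetical sketch of just the pairwise preference loss in PyTorch; the `ToyRewardModel` and the random "click" data are illustrative stand-ins, not anything from the actual pipeline, where the scorer is a full language model.

```python
import torch
import torch.nn as nn

class ToyRewardModel(nn.Module):
    """Toy scorer: maps a response embedding to a scalar reward.
    (Hypothetical stand-in for the LM-based reward model.)"""
    def __init__(self, dim: int = 16):
        super().__init__()
        self.head = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x).squeeze(-1)  # one scalar reward per example

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry pairwise loss: push the reward of the response the
    # human rater clicked above the reward of the one they passed over.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

model = ToyRewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Fake "click" data: 32 preference pairs of 16-dim response embeddings,
# where `chosen` is whichever response the rater preferred.
chosen = torch.randn(32, 16)
rejected = torch.randn(32, 16)

loss = preference_loss(model(chosen), model(rejected))
loss.backward()
opt.step()
```

The point of the sketch is that the training signal bottoms out in rater clicks: whatever reliably wins the click gets reinforced, whether or not it is true or good for the rater.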
Is it any surprise that there is seemingly no upper bound on how crazy otherwise sane people act in the company of such a system? It's like if TikTok had a scholarly air and arbitrary credibility.