The token cap will probably be the biggest problem here.
After validation.
After getting the changes to disk, documented, actually compiling, etc…
But the biggest problem is that transferring the nuance that lives outside the code base is typically tiresome, lengthy grunt work, and it runs into the token cap again.
Yeah, but the jump to 32k didn't take a few months; it was years in the making. Otherwise you could extrapolate with "yesterday we had 4k, today we have 32k, tomorrow we will have 256k", but that isn't how it works. If we follow the same exponential pace, 256k would have to wait about 3 years, and even that is unlikely.
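A rough back-of-the-envelope, purely for illustration, assuming the 4k-to-32k jump took about three years, i.e. roughly one doubling of context size per year:

    import math

    # Assumption (not a measured fact): 4k -> 32k took ~3 years.
    years_4k_to_32k = 3
    doublings_so_far = math.log2(32 / 4)                       # 3 doublings
    years_per_doubling = years_4k_to_32k / doublings_so_far    # ~1 year each

    # At the same pace, 32k -> 256k is another 3 doublings.
    doublings_needed = math.log2(256 / 32)
    print(doublings_needed * years_per_doubling)               # ~3 more years

Under that assumption you land on the same "another 3 years" figure; if the doubling time is longer, the wait is longer still.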
"Luddites" refusing to accept that we might be onto something that is going to change humanity forever...
The sad thing is that real Luddites would go out and actively sabotage AI development because they think it's a real threat. Yet these people just make bold and false claims in online forums and continue to move the goalposts once they're proven wrong. Sad. Pathetic. (And obviously, I don't mind being downvoted. Whatever! :)
Few years, yes. I’m with you that “the change is coming”, but we still need to transfer millions of tokens in and out to cover the context and the out-of-repo intricacies.