So, it is loosely the same as Copilot? I understand the approach is a tad different, but the result, converting natural-language descriptions into code changes, should be comparable.
And both are trained on a large corpus of GitHub sources.
Is there a way to test it somehow? A public API, maybe?
> converting natural language descriptions into code-changes
Do people actually use Copilot for that? I just let it work its magic uninstructed. I guess it sometimes uses comments and function/variable names for its suggestions, but that's about it. 99% of the time it just looks at my code, the context, and neighboring files to predict what I'm trying to do.
I use it as smart auto-completion most of the time as well, but for boilerplate it sometimes helps to just write a comment describing what you want to achieve, basically like a ChatGPT prompt.
For my day job, no, not frequently. But when I'm writing in an unfamiliar language like bash or something, I'll do a little "# implement a function that does x, y and z" and let it fill in the rest.
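For example, a comment-as-prompt in bash might look like the sketch below. The function name and body are hypothetical, just the kind of completion Copilot tends to suggest from such a comment:

    # implement a function that lists every file under a directory
    # with a given extension, printing its size next to the path
    list_files_by_ext() {
        local dir="$1"
        local ext="$2"
        # find matching regular files and print a human-readable size for each
        find "$dir" -type f -name "*.${ext}" -exec du -h {} +
    }

    # usage: list files ending in .sh under ./scripts
    list_files_by_ext ./scripts sh

You'd typically write only the comment, hit tab a few times, and then review whatever it proposes.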