I'll use a locally hosted Llama 2 or CodeLlama instance as a 'consultant', via a chat window. These models can be great for that! A well-formulated question often elicits a precise and accurate answer, even from the unspecialized model.
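For concreteness, here is a minimal sketch of what that "consultant" loop can look like in practice. Everything in it is an assumption about the setup, not a prescription: it presumes the model is served by an Ollama-compatible server on its default port, that the model tag is `codellama`, and the `consult` helper is just an illustrative name.

```python
import requests

def consult(question: str, model: str = "codellama") -> str:
    """Ask a one-off question of a locally hosted model.

    Assumes an Ollama-style server on localhost:11434; adjust the
    endpoint, port, and model tag for your own setup.
    """
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": question}],
            "stream": False,  # return one complete reply, not a token stream
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(consult("When did CPython dicts start preserving insertion order?"))
```

The point of the shape, rather than the specifics: the model lives behind a deliberate question-and-answer boundary, not inside the editor.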
I won't use Copilot or anything else that integrates that tightly into my workflow, even though it is now possible to do so without losing the incremental-cost and customizability benefits of self-hosting.
The context switch is important. To a very good first approximation, our task as engineers is to think before we assume, and I have found Copilot recklessly encourages the latter at the expense of the former.