The Claude models are just one part of Claude Code. I've worked with both Copilot using the Claude models and Claude Code itself. Claude Code is far more capable and has a greater likelihood of successfully completing a task.
I've been using git + git-lfs to manage, sync, and back up all my files (including media files and more), and it's quite convenient, but native support for large files would be great. For example, I'd really like to be able to push large objects directly from one device to the next.
At the moment I'm using my own Git server plus git-lfs, with btrfs deduplication to handle the large files efficiently.
If large objects are just embedded in various packfiles, this approach would no longer work, so I hope this behaviour can be controlled.
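For anyone curious, a minimal sketch of that kind of setup might look like the following. This assumes the repository lives on a btrfs filesystem, that the `duperemove` tool is installed, and that LFS objects sit in the default `.git/lfs/objects` path; the file patterns are illustrative, not from the comment above.

```shell
# Sketch of the setup described above (assumptions: the repo is on btrfs,
# duperemove is installed, and git-lfs uses its default storage path).

# Route large media files through git-lfs instead of plain git objects.
git lfs install
git lfs track "*.mp4" "*.raw"
git add .gitattributes

# git-lfs stores each object as a separate file on disk, so a
# filesystem-level deduplicator can make identical extents share
# storage via btrfs reflinks.
duperemove -dr .git/lfs/objects
```

The key property is that LFS keeps each large object as its own file; once objects are packed together into packfiles, extent-level dedup across repositories or devices stops working.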
That's actually something I implemented for a university project a few weeks ago.
My professor also did some research into how this can be used for more advanced UIs.
I'm sure it's a very common idea.
Do you have a link to the code? I'm curious how you implemented it. I'd also be really intrigued to see that research - does your professor have any published papers or something for those UIs?
Well, pipe.pico.sh always routes traffic through a proxy server, so throughput and latency are worse, but you get your own namespace for the pipes and thus don't have to synchronize random connection strings.
I have to imagine they ran the numbers and found it wouldn't be an issue. I wouldn't be surprised if the majority of enterprise use was completely unlicensed before this.