
> has yet to give me anything I wouldn't be able to just write myself

Sure it has: Time.

In terms of economics it's really simple: does Copilot free up more than $10 worth of your time per month? If the product works at all as I understand it (I haven't tried it), the answer should be a resounding "yes, and then some" for pretty much any SE, given current market rates. If the answer is no (for example, because it produces too many bad suggestions that break your flow), the product simply doesn't work.
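To put a rough number on that break-even point (the $75/hour rate below is an illustrative assumption, not a figure from this thread), a quick sketch:

```c
#include <assert.h>

/* Back-of-envelope break-even: how many minutes of saved time per
   month cover the subscription cost? The hourly rate passed in is
   an assumption for illustration. */
double breakeven_minutes(double hourly_rate_usd, double monthly_cost_usd) {
    return monthly_cost_usd / hourly_rate_usd * 60.0;
}
```

At $75/hour, `breakeven_minutes(75.0, 10.0)` comes out to about 8 minutes per month, which is the crux of the "yes, and then some" argument.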

There might be other reasons for you not to use it. Ego could be one. Your call.

> Also feels kind of icky to train on open source projects and then charge for the output.

I don't know why it would feel any more icky than making money off of open source in other ways.



It is completely different from using open source programs to make money. Many open source licenses explicitly require any derived work to retain the copyright notice and carry a compatible license. If I use GitHub Copilot to create a derived work of something somebody else published on GitHub, I have no idea who wrote the upstream code or what license they made it available under. The defense for this is the claim that GitHub Copilot doesn't create a derived work, since the code it produces is very different from anything upstream (this is claimed in the original paper from OpenAI). However, many people have found examples showing this to be a questionable or wishful-thinking claim.


Lack of training data is obviously not gonna be the linchpin of this project, no matter how reproachfully the HN crowd looks upon Copilot with regard to OSS licensing. Even if we are prepared to dub the Copilot team liars (bold move, good luck in court), there is always gonna be enough code to go around to make this thing happen regardless. Rumor is Microsoft could chip in some.

In addition, the idea of "derived work" for code snippets is, quite frankly, nuts. There are only so many ways to write (let's be generous about the scope of Copilot) 25 lines of code to do a very specific thing in a specific language. If you have 1,000,000 different coders do the job (which we do), you'll have a significant amount of overlap in the resulting code. Nobody is losing sleep over potential license issues with this. Because that would be insane.

I have noticed that upholding OSS licensing (at least morally) is kind of table manners on HN. That's fine, but this is some new level of silly.

It's also not gonna persist, because no matter how much we love our OSS white-knighting, we love having well-paying jobs more.


Having used it quite a lot, I'm not sure it does save me $10 of time per month. At least as often as it generates usefully correct code, it generates correct-looking but actually totally wrong code that I have to carefully check and/or debug.

It's quite nice not to have to type generic boilerplate sometimes, I guess, but it's very frustrating when it generates junk.


Same experience for me. Checking the code it generated, and the subtle bugs it created that I missed until tests failed, made it at best a net zero for me. I disabled it after trying it for two months.


You lasted longer than I did! Disabled after a few days.

I think it really depends on what languages you use, though. If you use something like Kotlin, where there's really almost no boilerplate and the type system is usefully strong, the symbolic logic auto-completion is just far more reliable and helpful. If you're stuck in a language with no types and lots of boilerplate to write, then I can see it being more helpful.


I turned it off a week ago because I found it was wasting time when everything it generated required going back to fix issues.


> I don't know why it would feel any more icky than making money off of open source in other ways.

For me, this entirely comes down to the philosophy of how a deep learning model should be described. On the one hand, the training and usage could be thought of as separate steps. Copyrighted material goes into training the model, and when used it creates text from a prompt. This is akin to a human hearing many examples of jazz, then composing their own song, where the new composition is independent of the previous works. On the other hand, the training and usage could be thought of as a single step that happens to have caching for performance. Copyrighted material and a prompt both exist as inputs, and the output derives from both. This is akin to a photocopier, with some distortion applied.

The key question is whether the outputs of Copilot are derivative works of the training data, which as far as I know is entirely up in the air, with no court precedent in either direction. I'd lean toward them being derivative works, because the model can output verbatim copies of the training data (e.g., outputting Quake's inverse sqrt function with its comments intact, before that particular output was patched out).
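For context, the snippet in question is Quake III's famous fast inverse square root, which users reported Copilot reproducing along with its original comments. A lightly modernized sketch is below (note: the `uint32_t` + `memcpy` bit copies here replace the original's pointer cast, which assumed a 32-bit `long` and is undefined behavior in modern C):

```c
#include <stdint.h>
#include <string.h>

/* Quake III's fast inverse square root (approximates 1/sqrt(x)).
   The two most famous comments are preserved from the original. */
float Q_rsqrt(float number) {
    const float threehalfs = 1.5F;
    float x2 = number * 0.5F;
    float y  = number;
    uint32_t i;
    memcpy(&i, &y, sizeof i);             // evil floating point bit level hacking
    i = 0x5f3759df - (i >> 1);            // what the ...?
    memcpy(&y, &i, sizeof y);
    y = y * (threehalfs - (x2 * y * y));  // one Newton-Raphson iteration
    return y;
}
```

The single Newton iteration brings the estimate within roughly 0.2% of the true value, which is why the magic constant and comments are so recognizable when they show up verbatim.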

Getting back to the use of open source, if the output of Copilot derives from its training data in a legal sense, then any use of Copilot to produce non-open-source code is a violation of every open-source licensed work in its training data.


I want to love GitHub Copilot, but it's just not there yet. For trivial stuff it's great, but for anything non-trivial it's always wrong. Always.

And my problem is: Time.

Cycling through false positives and trying to figure out if it's right costs me way more than $10 a month in productivity.

I can't wait for better versions to come out, but right now, no.


But I don't get paid on a piece rate; the amount of time I spend working is constant. Anything that increases my productivity just means I get more work done. (Others may differ, but I know from experience that I like to keep to a fixed schedule.) And that's mostly benefitting my employer, not me, so it seems like something my employer should pay for, if they believe in it.


Yeah it's just practicality for me. There is software I pay a lot more for that I use a lot less.

$100/year is a steal for the amount of tedious code copilot helps me with on a daily basis.


I could also make a mistake due to Copilot that takes me time to fix, and then I end up spending more time checking code where I previously trusted it. It has similar pros and cons to copy/pasting.



