Giving users the source code but no rights to do anything with it doesn't give them much. Software freedom advocates are not just nerds who really enjoy reading source code. The point is that if you have the source code and the right to change it, then you take control of the activity the software is doing, or helping you do. If you don't have that control, being able to read the source code is not very useful.
Well, certainly there is some reasonable middle ground between everything and nothing, which is presumably what licenses like this one are trying to explore? People are afforded rights beyond just looking at the code: they can run it, modify it, and learn from it; there are just some specific things carved out (to my understanding as a layperson).
Every major voice that I've seen comment on this thus far seems to acknowledge the value and the rationale behind the license, they're just saying you can't call it open source.
Would you say that the "free software" and "open source" movements are largely synonymous? I guess I thought free software was a more strict subset that has the "all or nothing" philosophy centered around truly "free" software. Which is honorable and commendable. But is there really no room for stuff that's a little more gray area? I get the rug pull, I get that it's not truly owned by the community, but it still seems like a useful ally and a step in the right direction from fully closed source. With support there could be norms and culture developed to safeguard from bad behavior.
The context of what the software does should also be taken into account. The best arguments I've seen in this thread revolve around the idea that you can trust that you can pull an "open source" library into your project without hesitation. That is a beautiful "free as in libre" building blocks vision. But the software in question here is a completed end-user product. It's not ever intended to be used in another project. But they want to make some kind of good-faith effort to share and be more "open" than normal proprietary SaaS. Instead of this being a good thing it's a huge drama because it's not enough. At least that's my read.
> Would you say that the "free software" and "open source" movements are largely synonymous?
Would you say Canadians and Americans are largely synonymous? They have many shared interests. There are many dual citizens. The differences are often subtle.
> The best arguments I've seen in this thread revolve around the idea that you can trust that you can pull an "open source" library into your project without hesitation. That is a beautiful "free as in libre" building blocks vision.
Several people highlighted that a key idea of open source and free software is that you can run an open source program without hesitation. Was this not beautiful? Were these arguments inferior?
> But the software in question here is a completed end-user product. It's not ever intended to be used in another project.
Libraries used parts of programs. Programs became libraries. Programs evolved into different programs.
> But they want to make some kind of good-faith effort to share and be more "open" than normal proprietary SaaS. Instead of this being a good thing it's a huge drama because it's not enough. At least that's my read.
I don't know how the submitted article could have contradicted this more clearly.
> Would you say Canadians and Americans are largely synonymous? They have many shared interests. There are many dual citizens. The differences are subtle often.
What would source available be in this analogy?
> Several people highlighted a key idea of open source and free software is you can run an open source program without hesitation. This was not beautiful? These arguments were inferior?
I think we're saying the same thing. Those are the arguments I was referring to.
> Libraries used parts of programs. Programs became libraries. Programs evolved into different programs.
I don't know what this means.
> I don't know how the submitted article could have contradicted this more clearly.
An expanded analogy would not increase understanding I think. It would be better to ask and answer the question behind the analogy request probably.
Did we say the same thing? You suggested a library and a completed end-user product should be considered differently I thought. I suggested they should be considered similarly.
The title of the submitted article said it was okay source available is not open source. It said DHH's choice of license reacts to a real pressure in open source. It said source available was defensible and generous.
> An expanded analogy would not increase understanding I think.
Why not? Is there a better one? The relationship between open source and source available (and free software) is the core of what I'm trying to understand.
> You suggested a library and a completed end-user product should be considered differently I thought.
I suggested that later. I maybe should not have used library at that point. I'm not saying that they should fundamentally be considered differently. I'm just saying it might behoove the movement to be a little bit more welcoming, and guiding, of some of these gray area efforts.
Software licensing is fundamentally a social/political/legal "people" issue, not a mathematical/logical one. Clinging to a beautiful, elegant mathematical rule that sticks its head in the sand w.r.t. the messy world of human behavior, while also claiming the moral high ground, is maybe what I feel is a bit of bad faith (or, more charitably, clashing ideologies that I feel could be united by finding a middle ground). I would also argue that there is somewhat more responsibility to show good faith on the side of the gatekeepers than the people being kept out.
> The title of the submitted article said it was okay source available is not open source.
When I said "huge drama" I was referring to the whole thing - the posts referenced by that article, the comments, the words that have been spilt on how many words have been spilt on this issue. The point of the article is, it's great, but do not call it open source. Because there is a stringent mathematical definition of open source, and even though there's a wider world of projects that might not fully identify with that but want to participate in the colloquial spirit of being "open," we want to be very clear that they are not a part of this and need to sit at their own "not-open" table.
> it seems obvious that disclosure policy for FOSS should be “when patch available” and not static X days
This is very far from obvious. If Google doesn't feel like prioritising a critical issue, it remains irresponsible not to warn other users of the same library.
If that’s the case why give the OSS project any time to fix at all before public disclosure? They should just publish immediately, no? Warn other users asap.
Why do you think it has to be all or nothing? They are both reasonable concerns. That's why reasonable disclosure windows are usually short but not zero.
Because it gives maintainers a chance to fix the issue, which they’ll do if they feel it is a priority. Google does not decide your priorities for you, they just give you an option to make their report a priority if you so choose.
Timed disclosure is just a compromise between giving project time and public interests. People have been doing this for years now. Why are people acting like this is new just because ffmpeg is whining?
And occasionally you do see immediate disclosures (see below). This usually happens for vulnerabilities that are time-sensitive or actively being exploited where the user needs to know ASAP. It's very context dependent. In this case I don't think that's the case, so there's a standard delayed disclosure to give courtesy for the project to fix it first.
Note the word "courtesy". The public interest always overrides considerations for the project's fragile ego after some time.
(Some examples of shortened disclosures include Cloudbleed and the aCropalypse cropping bug, where in each case there were immediate reasons to notify the public / users)
Full (immediate) disclosure, where no time is given to anyone to do anything before the vulnerability is publicly disclosed, was historically the default, yes. Coordinated vulnerability disclosure (or "responsible disclosure" as many call it) only exists because the security researchers that practice it believe it is a more effective way of minimizing how much the vulnerability might be exploited before it is fixed.
Unless the maintainers are incompetent or uncooperative, this does not feel like a good strategy. It is a good strategy on Google's side because it is easier for them to manage.
This criticism seems at face value to also apply to first-class functions, which I thought was a totally uncontroversial pattern. Do you dislike those too?
Can you please explain what the actual problem here is? I am trying to read through that issue discussion, but I am not quite getting what makes first-class functions problematic. As far as I am concerned, not having first-class functions would be a serious limitation that would make me avoid using a language for anything serious.
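For anyone skimming the thread who hasn't used them: a minimal sketch (not from the linked issue) of what first-class functions mean in practice. Functions can be stored, passed, and returned like any other value:

```python
def twice(f):
    """Return a new function that applies f two times."""
    return lambda x: f(f(x))

def increment(x):
    return x + 1

# Functions as arguments and return values:
add_two = twice(increment)
print(add_two(5))  # 7

# Functions as ordinary values in a data structure:
handlers = {"inc": increment, "double": lambda x: x * 2}
print(handlers["double"](5))  # 10
```

Without this, common patterns like callbacks, map/filter, and decorators need clumsier workarounds (e.g. single-method objects).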
> why predict that the growth rate is going to slow exactly now?
why predict that it will continue? Nobody ever actually makes an argument that growth is likely to continue, they just extrapolate from existing trends and make a guess, with no consideration of the underlying mechanics.
Oh, go on then, I'll give a reason: this bubble is inflated primarily by venture capital, and is not profitable. The venture capital is starting to run out, and there is no convincing evidence that the businesses will become profitable.
I am constantly astonished that articles like this even pass the smell test. It is not rational to predict exponential growth just because you've seen exponential growth before! Incidentally, that is not what people did during COVID; they predicted exponential growth for reasons. Specific, articulable reasons, that consisted of more than just "look, line go up. line go up more?".
Incidentally, the benchmarks quoted are extremely dubious. They do not even really make sense. "The length of tasks AI can do is doubling every 7 months". Seriously, what does that mean? If the AI suddenly took double the time to answer the same question, that would not be progress. Indeed, that isn't what they did, they just... picked some times at random? You might counter that these are actually human completion times, but then why are we comparing such distinct and unrelated tasks as "count words in a passage" (trivial, any child can do) and "train adversarially robust image model" (expert-level task, could take anywhere between an hour and never-complete).
Honestly, the most hilarious line in the article is probably this one:
> You might object that this plot looks like it might be levelling off, but this is probably mostly an artefact of GPT-5 being very consumer-focused.
This is a plot with three points in it! You might as well be looking at tea leaves!
> but then why are we comparing such distinct and unrelated tasks as ...
Because a few years ago the LLMs could only do trivial tasks that a child could do, and now they're able to do complex research and software development tasks.
If you just have the trivial tasks, the benchmark is saturated within a year. If you just have the very complex tasks, the benchmark has no sensitivity at all for years (just everything scoring a 0) and then abruptly becomes useful for a brief moment.
This seems pretty obvious, and I can't figure out what your actual concern is. You're just implying it is a flawed design without pointing out anything concrete.
The key word is "unrelated"! Being able to count the number of words in a paragraph and being able to train an image classifier are so different as to be unrelated for all practical purposes. The assumption underlying this kind of a "benchmark" is that all tasks have a certain attribute called complexity which is a numerical value we can use to discriminate tasks, presumably so that if you can complete tasks up to a certain "complexity" then you can complete all other tasks of lower complexity. No such attribute exists! I am sure there are "4 hour" tasks an LLM can do and "5 second" tasks that no LLM can do.
The underlying frustration here is that there is so much latitude possible in choosing which tasks to test, which ones to present, and how to quantify "success" that the metrics given are completely meaningless, and do not help anyone to make a prediction. I would bet my entire life savings that by the time the hype bubble bursts, we will still have 10 brainless articles per day coming out saying AGI is round the corner.
> The length of tasks AI can do is doubling every 7 months
The claim is "At time t0, an AI can solve a task that would take a human 2 minutes. At time t0+dt, they can solve 4-minutes tasks. At time t0+2dt, it's 8 minutes" and so on.
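Taken literally, the claim is just exponential growth in task length. A toy sketch (starting point and doubling period as stated above, everything else illustrative):

```python
def task_length_minutes(months_elapsed, start_minutes=2.0, doubling_months=7.0):
    """Task length an AI can supposedly handle, doubling every `doubling_months`."""
    return start_minutes * 2 ** (months_elapsed / doubling_months)

print(task_length_minutes(0))   # 2.0
print(task_length_minutes(7))   # 4.0
print(task_length_minutes(14))  # 8.0
```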
I still find these claims extremely dubious, just wanted to clarify.
Yes, I get that, I did allow for it in my original comment. I remain convinced this is a gibberish metric - there is probably no such thing as "a task that would take a human 2 minutes", and certainly no such thing as "an AI that can do every task that would take a human 2 minutes".
"It’s Difficult to Make Predictions, Especially About the Future" - Yogi Berra. It's funny because it's true.
So if you want to try to do this difficult task, because say there's billions of dollars and millions of people's livelihoods on the line, how do you do it? Gather a bunch of data, and see if there's some trend? Then maybe it makes sense to extrapolate. Seems pretty reasonable to me. Definitely passes the sniff test. Not sure why you think "line go up more" is such a stupid concept.
Isn't one of those scalers getting money from NVIDIA to buy NVIDIA cards which they use as collateral to borrow more money to buy more NVIDIA cards which NVIDIA put up as revenue which bumps the stock price which they invest into OpenAI which invests into Oracle which buys more NVIDIA cards?
It's not a Ponzi scheme, and I don't have a crystal ball to determine where supply and demand will best meet, but a lot seems to be riding on the promise of future demand selling at a premium.
I'm not yet ready to believe this is the thing that permanently breaks supply and demand. More compute demand is likely, but every layer of the state of the art (resale users, providers, and suppliers) will get hit with more competition.
Does that affect the reasoning? Even though I think it is profoundly wrong, I am not going to risk 20 years in prison for something realistically nobody is going to care about.
Obviously untrue. Iran has 90 million citizens, and multitudes of that number do care out of principle. I am not trying to change your mind, but hope you would be more precise in your language next time to describe why you don’t care.
Sounds more like "nobody I care about is going to care about", which seems rather reasonable and eminently human. But maybe a useful increase in precision.
Who said I don't care? I care a lot, I just don't care more than I care about staying out of federal prison. It would be one thing if you'd be carrying the torch and the public would be behind you if only you'd be so courageous. But that is a naive fantasy: I know, and you know, that you'd be hounded as a "terrorist", they would throw the book at you and throw away the key. Approximately 0.01% of your fellow citizens would even understand the issue, let alone be on your side.
Petition for change, sure. Complain in public. Protest. But don't martyr yourself for nothing.
Historically, round-ups were not meant to straight-up murder people, but to get rid of undesirables.
Now, if we read what the original parent post wrote:
> [..] the stakes go up: $1M USD fine and up to 20 years in federal prison. [..] you, the manager or executive in charge, and anyone else who is in the know on the transaction is now facing 20 years in federal pounding-in-the-ass prison if they don't immediately cease all communication and break off contact
That sounds a lot closer to "rounding-up" than "a strong attempt to prevent technology transfers supporting unwanted regimes."
Now for the kicker. Taking into context the US developments of the past 9 months, the people affected by such legal threats are a lot closer to the indiscriminate "rounding people up" part than to the "balanced and reasonable legal consequences" part.
Just a small thing to ponder before blurting out things such as
> I am not going to risk 20 years in prison for something realistically nobody is going to care about
Yesterday, one might not have cared about communists being rounded up, today it might be "illegal" migrants being ICEed up, who knows what it will be tomorrow.
If you hear about someone actually getting punished for this, then I would probably agree with you. But I (and, I think, everyone else in the thread) was talking about whether people in the US should risk that punishment in order to support our Iranian friends. OP would not do so and neither would I, because the risk is too great. But as I've said repeatedly, I oppose the law; you can't then spin around and act like I'm advocating for rounding people up just because I don't think it's worth my while to break it.
If you want to break that law, go ahead, I will support you all the way, and you will end up in jail anyway.
When I was there (maths, 2010s) it was 1 supervision per course per fortnight. You had 4 problem sets in an 8 week course (for long courses). 16 lectures meant 3 sets, 12 lectures meant 2. I never heard of a college doing more than 2 students in a supervision.
The mean talent at Oxbridge and at the Ivy League is pretty similar. The talent level of Ivy League scholarship holders is significantly higher than either. Obtaining a scholarship is a significant hurdle that not all applicants clear - so it is very naive to act as if any Oxbridge candidate could just walk into a scholarship. And if you agree that they couldn't walk into it, then it obviously is a hurdle, contrary to your comment.
> Debase the currency, surprised when it has less value?
This bizarre comment is not related to the issue at all.
Do they have enough money available to fund everyone who can't afford to come, or do they have to decide who to fund from a wider pool of otherwise good applicants?
MIT, Stanford, Harvard, Princeton, and I believe most or all of the other Ivies, all fund 100% of the demonstrated financial need of every student, and they do not consider the financial needs of applicants when making admission decisions.
No, not for international students. Stanford (I haven't checked others) is very explicit about having a limited number of scholarship for international students: https://financialaid.stanford.edu/undergrad/how/internationa.... Admissions for US applicants are indeed need-blind.
Higher education is a strange purchase that is engineered to extract the maximum amount of money (up to full-cost tuition, fees, etc.), based on financial records which you are forced to provide.
Any asset except for a residence is typically considered something that could be tendered to the university, and is accordingly deducted from financial need.
This means that external scholarships are limited as to how much they can reduce the expected parental or student contribution. Anything beyond this limit is deducted from need and pocketed by the university.
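A toy model of the mechanics described above (the cap and all dollar figures are hypothetical, for illustration only):

```python
def institutional_grant(cost_of_attendance, expected_contribution,
                        external_scholarship, contribution_reduction_cap):
    """Sketch of need-based aid accounting with an external scholarship.

    Outside money first reduces the family's expected contribution, but
    only up to the cap; anything beyond the cap is deducted from "need",
    i.e. it reduces the university's own grant dollar for dollar.
    """
    reduces_contribution = min(external_scholarship, contribution_reduction_cap)
    reduces_need = external_scholarship - reduces_contribution
    need = cost_of_attendance - expected_contribution
    return max(need - reduces_need, 0)

# $80k cost, $30k expected contribution, $20k outside scholarship,
# but only $5k of it may offset the family contribution:
print(institutional_grant(80_000, 30_000, 20_000, 5_000))  # 35000
```

In this hypothetical, the family still pays $25k, and $15k of the outside scholarship simply replaces grant money the university would otherwise have given.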