> "Google contributed ports of NaCl for ARM and Amd64."
So, Mono/C# in Chrome? If NaCl/PNaCl becomes available outside the Chrome App Store, that could be pretty exciting -- Mono does an excellent job with AOT performance and mobile-scale memory utilization.
It would be great to hear more about this. Are there any 'Hello World' examples? I assume the APIs that NaCl exposes have a corresponding set of C# interfaces? How does one write to the browser window, get user input, do network I/O, etc.?
That post is old, but the instructions are still valid.
> I assume the APIs that NaCl exposes have a corresponding set of C# interfaces? How does one write to the browser window, get user input, do network I/O, etc.?
Unfortunately there is nothing available to expose Pepper interfaces directly to the managed environment that is shipped with this, though it is possible to auto generate bindings and glue code to make it work. What you'll see in the HelloWorld example I linked above is a very manual way of doing this, exposing an internal call that itself just calls directly through the C Pepper interface, effectively allowing Pepper calls from C#.
I have actually written code to produce LLVM IR to conform to a platform's native C ABI. It's quite a bit more work than "just a bit of register/stack allocation". For one thing, LLVM IR doesn't expose the concept of a stack pointer, so you can't just put things on the stack where they need to go. You need to put things in LLVM's abstractions in places where you happen to know the codegen will lower them to where they need to go. And it's different for every platform.
In the relative context of implementing a production runtime, that's "just a bit of register/stack allocation".
If you're hobby-hacking a project, then you may as well just target something that already exists, because micro-inefficiencies that matter on the scale of a deployed production platform probably don't matter for your human-resource-constrained hobby project.
If you're designing the next browser language runtime, or a language meant to be used outside of hobby deployments, there's no point in permanently hamstringing your implementation architecture to save a small amount of implementation time within what is already a large and complex problem space.
C is a heavyweight intermediary language which brings with it a /lot/ of baggage. There's no valid direct technical reason other than financial resource constraints to justify compiling to C (... or JS, for that matter) instead of a more suitable and expressive IR/bytecode.
It's impossible to implement a variety of useful abstractions in portable C -- for example, function trampolines that do not permute register state and do not appear in the callstack after execution.
> C is a heavyweight intermediary language which brings with it a /lot/ of baggage. There's no valid direct technical reason other than financial resource constraints to justify compiling to C (... or JS, for that matter) instead of a more suitable and expressive IR/bytecode.
1. In theory. But what is that "more suitable and expressive IR/bytecode"? I would argue such a bytecode should be portable, but LLVM IR is not that.
2. There are lots of reasons to target C. C can be compiled by many compilers, while LLVM IR can only be compiled by LLVM to things LLVM can compile to. For example, game consoles, various new embedded platforms, etc. - they all have C compilers, but LLVM might not support them (and possibly cannot support them).
All of your arguments about going through C would go away if you had enough money to build your own ideal toolchain, including the development of your own IR for your language.
> 2. There are lots of reasons to target C. C can be compiled by many compilers, while LLVM IR can only be compiled by LLVM to things LLVM can compile to. For example, game consoles, various new embedded platforms, etc. - they all have C compilers, but LLVM might not support them (and possibly cannot support them).
Is the point to produce something optimal for users, or produce something optimal for developers? Most successful game and application platforms trend towards the former, whereas your work on web technologies trends towards the latter.
No, it is not. But what does it have to do with anything discussed here? I can't see IR being something which locks users or developers to a particular vendor.
LLVM IR is less portable than C. That's a valid direct technical reason. Generating C (or another high level language) also makes integration with tools in that language potentially trivial (depending on the semantics of the language you're compiling), and depending on your preference you may find it far easier to debug the output.
As for portability, while you're right that a number of things can't be implemented in portable C, you gain the benefit that C has been ported to far more systems than LLVM - there are C compilers even for the Commodore 64.
I agree with you that C is heavyweight, but LLVM is too. Personally I prefer to build code generators directly - the LLVM infrastructure may be great, but I much prefer to write compilers that are self-contained and bootstrapped in the language they are intended to compile. Then again, I might just be difficult.
I have to disagree with that. There are lots of languages whose purpose is to produce JS. For example, it would be very silly for CoffeeScript to use LLVM IR. It would end up reïmplementing all of the things that it shares with JS (that is, most of JS) with no conceivable benefit. Maybe if/when PNaCl is everywhere, it would make sense for it to do that, but even then it's a hard sell as long as four giant companies are competing their asses off to optimize JS.
But I do agree with you if the language doesn't correspond well with JS. In that case, it's probably better to target LLVM and then compile to JS via emscripten.
There's no valid direct technical reason other than financial resource constraints to justify compiling to C (... or JS, for that matter) instead of a more suitable and expressive IR/bytecode.
How does one do call/cc in LLVM, or delimited continuations, for that matter?
You can't do that in portable C as-is. Watch what happens when you try to restore thread state on a different system thread using an OS that intrinsically ties system threads to user threads to thread local variables to the stack.
And no, getcontext() and setcontext() aren't portable.
Why would I care if it's portable or not? I've never understood the mentality of "but it's not portable". If something isn't portable, you put #+ and #- on top of it. I'd rather go for "straight-forward and performant" than for some abstract code transformations to make it runnable on platforms that almost nobody uses, to the detriment of everyone else.
LLVM is the lowest common denominator. I'm really not fond of lowest common denominators. (Not to mention the fact that it's bloatware in the first place - it's about twice the size of my favourite compiler, and that's before you add the actual language you're trying to compile.)
Because you literally cannot implement it safely or correctly from the C level on Mac OS X, and any other platform with similar thread/stack/thread-state constraints.
Unless that's a platform you think "nobody uses" ...
The only indirect technical justification is in maintaining compatibility with existing browsers, but that falls over pretty fast when you're a browser maker (Mozilla) and refusing to work with another browser maker (Google) on jumping over that compatibility hurdle.
Replacing JS may not be technically difficult, but practically it's extremely difficult (IE6, anyone?). Plus JS has got so fast, it's becoming hard to see the point anyway.
Getting your bytecode shipped in all browsers has proven to be impossible; there's a reason people compile to JavaScript: it's the only practical choice.
Even if they did, that's not enough, you have to get it into all popular browsers, not just two. JavaScript works everywhere, you have to match that to have any hope of replacing it.
Ah, so it will lose because it will lose. So we can't implement it because we won't implement it.
That's ridiculous. Mozilla is willing to stick their neck out there implementing a mobile phone operating system, but trying to deploy a sane runtime environment to replace JavaScript is just too risky?
It's reality because Mozilla makes it a reality. See how this is a circular argument?
Meanwhile, native mobile and desktop keep growing, and growing, and the web as an application platform loses out, because none of the native platform vendors are quibbling about whether they can convince themselves to ever do something different than what they were doing before.
For my company, this is exactly what's happening. Like Facebook, we found HTML5 to be insufficient for our app, so our resources have shifted to iOS and Android development. Native gives much better and more consistent performance and UX, and the idea of write-once, run-anywhere on the web is still a pipedream. Cross-browser compatibility is still terrible.
The problem with HTML5 is that it was designed by committee for displaying documents, not dynamic apps. While JS has come a long way toward closing the gap with native performance, it's the other HTML5 tech that's holding the web back. CSS is ill-suited to hardware acceleration, and the DOM is a performance-sucking hack that kills the UX on mobile platforms.
That didn't stop them from trying to launch a mobile phone platform in competition with Apple, Microsoft, and Google.
If they're willing to accept that risk, by comparison, how large of a risk is trying to push through a better standardized browser execution environment, with Google's cooperation?
God forbid they succeed, and we finally have a competitive application platform.
I see; you're the type who thinks he's smarter than everyone in the industry and just has all the answers and ignores anything that doesn't fit your idea. Never mind that no one has been able to replace JS, just listen to you and all those problems will evaporate. Goodbye.
> I see; you're the type who thinks he's smarter than everyone in the industry ...
You mean, like Google, who continually pushes to do exactly what I've described here, only to be stymied by:
- Apple, who has no reason to support the web as competitive to their native platform.
- Microsoft, same.
- Mozilla, who refuses to consider that there might be a world beyond HTML/CSS/JS because they believe those specific technologies are intrinsic qualities of the web, and thus central to their mission of supporting the web.
Looks like the only people I disagree with are the Mozilla camp. Microsoft and Apple have different priorities, and Google is continually frustrated by exactly what I've described here.
Every major browser vendor has pushed for some stuff and resisted other stuff. Only extremely rarely does anything new make it to 'cross browser compatible' status.
There are far far worse systems for evolving widely used platforms.
> There are far far worse systems for evolving widely used platforms.
That's fine, but perhaps it would behoove Mozilla to not participate in dooming the web as a competitive application platform simply due to a misguided belief that the web is defined by HTML/CSS/JS?
Yes, the reality is that proprietary application platforms are taking over the application market, and that the web is slowly losing one of its major market advantages: a huge brain trust of web-only engineers and web-only engineering organizations.
You're asserting that there hasn't been a significant management and hiring shift in engineering departments over the past 5 years, moving away from what became a web monoculture in the post-90s environment, from roughly 2000-2005?
> That doesn't even make sense as proprietary application platforms have always owned the application market.
So you admit the web is ill-suited to serve as an application platform, and is failing to acquire traction in that space despite considerable but ill-focused efforts to the contrary?
No. As someone intimately involved in startup hiring, there's been a massive shift in the make-up of technology organizations.
Mobile has gone from a side-show farmed out to consulting organizations to a mainstream in-house development effort, and the organizations themselves have shifted management and priorities accordingly.
It used to be that almost everyone had a web engineering organization in-house, even non-technology companies. That is changing. Companies like the NYTimes have gone from being grossly unable to manage mobile efforts and farming their work out to subpar contractors, to straight-up building a top-quality team of mobile developers.
Here's the tricky thing about that, too. Those developers, by the nature of where they work in the technology stack, are already quite versatile, and can choose technology solutions outside of the web stack. The problem that most organizations faced originally was that their web departments were a mono-culture and couldn't adapt.
So now you have companies that can and are building technology outside the web, and that means that the network effects that existed before are being torn down. The web tried to leap onto the application bandwagon, and the web failed. Now other technologies are taking over that space.
Neither NaCL, PNaCL, nor asm.js are 3rd-party browser plugins.
I agree that it's time to move on. It's time to treat the web as a real application platform, instead of as document model with a JavaScript scripting interface.
NaCL and PNaCL aren't plugins, but they're features of one specific browser. I don't know of any 3rd-party developers using them.
I think asm.js is brilliant and I hope it does turn out to be the way forward. But it's also a poster child for maintaining compatibility with existing browsers through standard Javascript.
Direct: Constraints/requirements directly related to implementing a maximally efficient runtime architecture.
Indirect: Market constraints that drive technical limitations.
When it comes to browsers, Mozilla both creates and exists within market constraints that drive indirect technical reasoning.
Lashing your industry to JS for another 10 years as a bytecode target is clearly not an optimal long-term solution as compared to the current state of the art, but it might make sense as a legacy support mechanism given the indirect technical constraints.
I'm just not convinced that something like CoffeeScript would benefit technically from compilation to a different target than JS. When the purpose of a language is simply to have all the features of another language, but with nicer syntax, what is to be gained from rebuilding all of those features from scratch?
The one quasi-counterexample I can think of in that subset of languages is Objective-C which essentially just adds sugar on top of C and can be transformed relatively simply to C (there are library functions you can use to build classes and objects and call methods from scratch, using no actual Objective-C syntax). My understanding is that Objective-C compilers don't go through C as an intermediate, but this is merely to reduce compiling time, not to improve efficiency at runtime. If Objective-C did target C instead, the end result would be identical.
So in other words, you are defining away most of what drives technology decisions.
That makes it totally uninteresting to try to meet your constraint.
We pretty much never try to implement a maximally efficient runtime architecture, cost or resources be damned.
By this argument, there's no direct technical reason why we should not spend 20 years hand writing machine code and proving the code sequence optimal for each target architecture.
Those examples are all very different. What is your goal - speed? Code size? Portability? Security? Those examples don't all achieve those to the same degree, and I would argue that modern JS VMs are competitive with them in many respects.
> What is your goal - speed? Code size? Portability? Security?
That depends on the target. I selected them because they represent different aspects of the state of the art.
> ... and I would argue that modern JS VMs are competitive with them in many respects
But not all, and in many places, not even close. JS VMs make trade-offs that we don't need if we abandon JS, and on top of it all, you have Mozilla insisting against shared-state multi-threading supported and exposed by the VM (which discards enormously important optimization opportunities) -- and that's just the tip of the iceberg.
Get rid of JS, get rid of Mozilla's self-enforced constraints on the browser, and look very seriously at something like NaCL which blends native execution performance (including being able to drop to SIMD instructions) with security sandboxing.
Let language authors target the low-level machine, and target your high-level general purpose bytecode.
Once you've built a runtime in which I could implement a JS runtime that's as performant as yours, we will no longer be operating in a two-tier universe in which browser makers think that JS is good enough for everyone but themselves.
JS VMs make tradeoffs, yes. Regarding your specific examples:
1. Shared-state multithreading. This has been proposed by various people in the past, and is still being debated. There are people for it and against it in various places. It might happen, if there is consensus to standardize it.
2. SIMD: PNaCl (which I mention since you mention NaCl in that context) does not have SIMD. But of course it can add SIMD, just like Mono and Dart have, and there is a proposal for JS as well, hopefully that will get standardized.
So even those limitations are in principle resolvable. They do depend on standardization, of course, but so would any VM you want to run on the web.
> Let language authors target the low-level machine, and target your high-level general purpose bytecode.
asm.js is meant to get performance similar to a low-level machine. It's already very close to native on many benchmarks.
> Once you've built a runtime in which I could implement a JS runtime that's as performant as yours
I don't follow that. You can't make a fast JS runtime in your previous examples (the JVM, PNaCl, .NET, etc.).
> I don't follow that. You can't make a fast JS runtime in your previous examples (the JVM, PNaCl, .NET, etc.).
I included ones you could, such as NaCL. And I should probably have also included the native platforms (Apple, MS, Google), because they're the competition, even if their sandboxing or runtime environment isn't what one might call 'state of the art'.
At the end of the day, it's the two-tier universe that bothers me the most. You're content to foist the JS runtime on everyone but yourselves. Once you implement Firefox entirely as an application running in a JS VM, the argument that JS is good enough might carry some weight.
The point is that NaCl is not portable, just like a native platform such as Windows.
Nonportable native platforms are there, you can write apps for them. They are even the dominant platforms on mobile.
But the web's entire purpose for existing is to be portable. So you do need something like JS or PNaCl or the JVM. And all of those, due to being portable and secure, impose limitations, such as limiting the native operations that you perform. That's unavoidable. But again, if you don't like that, develop directly for a native platform.
> The point is that NaCl is not portable, just like a native platform such as Windows.
So provide fat binaries. Maximize performance on ARM and i386, with fallback to PNaCL for future platforms that are neither.
> They are even the dominant platforms on mobile.
For a good reason. Adopting their strengths while eschewing their weaknesses (proprietary and single-vendor) would benefit the entire industry greatly.
Fat binaries are limited, and people will simply provide the parts for the platforms they currently care about, without care for future platforms.
That's fine for most things, but web content is something that we do want to always be accessible.
It is hard to adopt the strengths of native execution using all the lowest-level tweaks specific to one platform, because they inherently limit portability by definition (and often also security).
I share your goals, but don't think there is an obvious better compromise than the one we are all already making on the web.
The teams at Mozilla and Google have done an amazing job of improving JavaScript performance. But, I also have to agree with PBS, that Mozilla has a knee-jerk reaction to many of Google’s promising technologies like WebP and PNaCl. Like many HTML5 technologies that are being shoehorned into areas they were not originally designed for, JS was never meant to be a bytecode and likely will never be able to achieve the performance possible with PNaCl.
If W3C and WHATWG were competent and delivered a viable web application platform, we wouldn't be hiring Android and iOS developers now. Mozilla needs to be more open to new technologies like PNaCl that enable the web to be competitive.
I don't think "knee-jerk" is a fair description. Consider the collaboration between Google and Mozilla on WebM, WebRTC, and countless others.
> JS was never meant to be a bytecode and likely will never be able to achieve the performance possible with PNaCl.
LLVM IR was also never meant to be a bytecode. But in both the case of JS and LLVM IR, the question is the final result, not the original intention, even if the two are often connected.
> Mozilla needs to be more open to new technologies like PNaCl that enable the web to be competitive.
PNaCl is not even shipped, so it is too early to evaluate it. But the last benchmarks I saw for compilation speed and performance were mixed.
(NaCl is much more proven, but NaCl is not portable.)
All you need to do is perform NaCL validation of pages before marking them executable; this is what NaCL already does.
You can either do this through a high-level "JIT API" that generates safe machine code from a validated IR (aka PNaCL), or through a fancier version of mprotect() that validates the actual machine code (aka NaCL).
In a wondrous hypothetical future where processors support a NaCL restricted operating mode, you wouldn't even need to validate; just set the processor state to thumb^Wnacl mode, and define a syscall instruction that switches to direct execution (without actually requiring a context switch to the kernel, just flip an execution flag).
This is why NaCL is so damn interesting, and JS/asm.js is not. NaCL has the possibility of turning our design of restricted execution environments on its head, in a very good (for performance) way.
You can already do that with NaCl - or even asm.js with eval, I bet it wouldn't be that much slower. But current JITs apparently gain a significant amount of performance with inline caching, which expects code to be able to be rewritten very quickly.
Edit to respond to your edit: although it would be cool to be able to have a "sandboxed mode" that can somehow be switched out of more cheaply than an interrupt, the whole thing seems like a massive hack to me. After all, NaCl does not take advantage of its pseudo-ability to do so: NaCl code runs in its own process and already incurs a context switch whenever it communicates with the browser, so there is no inherent hardware reason NaCl couldn't just run directly under the kernel and have the kernel provide the same level of sandboxing as the NaCl runtime currently does. It's just an issue of getting such an approach to work portably with existing kernels... hardware support might be able to make it easier to get that to work, but it's probably unnecessary.
> In a wondrous hypothetical future where processors support a NaCL restricted operating mode
If we speculate wildly, why not an "asm.js restricted operating mode"? Not saying that's a good idea, but I'm not sure why a NaCl one would be either. Both PNaCl and asm.js should reach pretty much native speed anyhow.
> If we speculate wildly, why not an "asm.js restricted operating mode"?
Because we already had Jazelle, and it sucked, and we learned our lesson. A Jazelle that operates on ASCII-encoded bytecode with traps for unsupported JS constructs? No thanks. :)
> Not saying that's a good idea, but I'm not sure why a NaCl one would be either. Both PNaCl and asm.js should reach pretty much native speed anyhow.
Pretty much native and actually native are very different things.
I've had to sit down and hand-optimize critical paths that would not have been viable otherwise, and would have meant discarding a feature, or introducing a significant impact on usability -- and that's on platforms where similar investments in runtime library optimization were already made by the platform vendor, too.
If we're going to throw away performance, it has to be for a good reason. As someone who doesn't spend all day on platform development (as interesting as it might be), my focus is in providing the best possible user experience.
I'd love to do that on the web, but the web needs to stop throwing away solid technology decisions because of the bad technology decisions made in the early '90s, when none of us knew what the hell we were doing.
> Btw, the L is not capitalized in NaCl.
Whoops. Thanks. Now I will look less stupid in front of software engineers AND my chemistry buddies. :)
> If we're going to throw away performance, it has to be for a good reason
I agree.
We are losing performance in return for portability, security, and standardization. The web runs everywhere, has a good outlook for continuing to do so (no fat binaries for current archs), and anyone can build a new web browser based on standards.
None of the other options proposed give us portability, security, and standardization right now. Perhaps with more work they might, and perhaps JS VMs will get closer to their speed as well. It's good to try from both sides to improve things, and people are doing so.
I don't think anyone is overlooking some obvious better solution - that fits the requirements - that is before us. PNaCl (NaCl is not portable, so not relevant here) is interesting, just like the JVM and CLR, but they have major hurdles to pass if they want to be standardized, and that effort has not even begun in the case of PNaCl.
I've written ARM assembly for iOS apps to optimize carefully for memory ordering constraints and memory access latency (eg, pipeline stalls), and made use of NEON SIMD for certain critical paths.
This has yielded (ballpark) 2x-5x improvements to runtime performance; some operations essentially become 'free' from the application perspective whereas they took a significant hit previously and could cause UI stuttering and/or significant CPU burn (which also directly correlates to battery life consumption).
Assembly is far from dead in desktop/mobile development.
Yes, I agree, it is ridiculous. That's what happens when you have to deal with software licensing. The whole point of licenses is to be incompatible; if all licenses were compatible, then we would only have one license.
I didn't come up with this system, if you must blame something, blame copyright. If you think it's bad this way, it's even worse when the two licenses in question are proprietary; often times in that situation you won't even have the option of a rewrite or a workaround. The nature of copyright is that not even free software licenses can give you the legal ability to do everything you could possibly think of doing, but they give you enough to ensure that every user is granted the four freedoms as long as the copyright holds. The FSF's position is that it would be more meaningful for the community to convince RibbonSoft to re-license as GPLv3 than it would be to have the short term gain of having DWG support available sooner.
I should lay out the options for you, so you can understand them.
LibreCAD is GPLv2 only, it has code from Ribbonsoft and also from independent contributors.
In order for them to transition to GPLv3, the following must occur:
- Ribbonsoft has to clear the code release with their legal department again to see if any contracts prevented parts of the code being released with such a license, then to approach the contractors who contributed to the program and get their input, then they have to go through and see if any patent licenses they have for their code would prevent such a transition. This equates to a lot of money that Ribbonsoft may not want to dish out again.
- Any contributors to LibreCAD who used GPLv2 only licenses or didn't assign copyright to the team would have to be contacted and their permission would have to be obtained for a change.
- Any other GPLv2 only code would have to be stripped out in order to obtain compatibility with GPLv3
On the other hand, the FSF holds all the copyrights to the LibreDWG codebase. It would cost them nothing to change the license to GPLv2, nor would it waste any time with reimplementation of code already out there.
The answer is so simple that I'm finding it difficult to believe that you still can't see it.
>The answer is so simple that I'm finding it difficult to believe that you still can't see it.
That the FSF should compromise its entire purpose so Ribbonsoft can save money and avoid duplicating code? I don't think anybody donates to the FSF in order that Ribbonsoft can maintain their business model.
It's not its entire purpose; the GPLv2 is still a valid GNU license, and it's still copyleft. This sort of behaviour from them isn't helping the community at all, nor is it helping free software. It's dogmatic and idiotic.
A lax permissive license could potentially lead to a whole host of other problems including licensing problems. These licenses are not free from incompatibilities either, see the infamous BSD advertising clause for an example of that. And a permissive license certainly would not guarantee the project stay "unencumbered" either. The most adequate solution to this would be to either work on persuading RibbonSoft to relicense under GPLv3, or to just do a workaround. Attacking the GPL serves no purpose, even if you disputed the choice of license it would make more sense to criticize RibbonSoft than it would to criticize the FSF just for writing the GPL.
The BSD advertising clause has been dead for years, and it was primarily incompatible with ... the GPL.
The GPL is incompatible with licenses that impose more restrictions than it does, even if those restrictions are fairly limited, like the advertising clause.
I can't say I've ever heard of license incompatibility stupidity outside of the GPL. Lots of GPL problems, though: OpenSSL, dtrace, ZFS. Apple actually dropped GCC and invested in producing the liberally licensed clang BECAUSE of the GPLv3.
It's a bit strange that you blame the GPL when it is just as much the fault of the authors of the incompatible licenses, especially when in some cases it's completely their fault because they deliberately wrote those licenses with the intent of being incompatible with the GPL. Additionally, I find it strange that you fault the values of the GPLv3 for Apple ditching GCC, rather than faulting Apple for being an abusive proprietary company that has problems with the values that the GPLv3 imposes. As if the FSF should be working to please Apple?
No, if you'll re-read my post you'll see how I described that even if you view this issue in a vacuum, the other parties are still at fault as well. Apple especially does not want you to "get along," and especially not if you're releasing something on their app store. Now please stop this blind hatred and FUD spreading, it's not constructive.
Apple implemented a full AOT/JIT/disassembler (LLVM), debugger (lldb), static code analyzer (clang), C and C++ compiler (clang), and a C++11 stdlib, all under a free MIT-like OSS license.
They're all built as libraries and can be linked against to build IDEs, JITs, etc.
That instance is fine. It certainly is nice that they released it under a free software license; it would have been better if they had used copyleft, but there is nothing wrong with the license they chose. The part where they don't want you to "get along" is the part where they distribute proprietary software, sell devices with proprietary hardware locks, forbid use of the GPL (and other free software licenses) on their app store, etc. See this blog entry for an example of how this has specifically applied in one case to some FSF-copyrighted GNU software: https://www.fsf.org/news/2010-05-app-store-compliance
Why would it be better if LLVM and clang were copyleft? Then it couldn't be used in other products as a library, and the entire industry would be set back to where we were with GCC in the first place -- unable to fully leverage our tools to push the state of the art forward.
It could be used as a library as long as the other projects also made their software copyleft. The goal of copyleft is freedom for the users, not "pushing the state of the art forward." If they reject certain software because of copyleft they aren't being denied freedom; they were offered freedom and they refused.
> It could be used as a library as long as the other projects also made their software copyleft.
That never happened. On the other hand, LLVM has led to a tool renaissance.
> The goal of copyleft is freedom for the users, not "pushing the state of the art forward."
Users already have that freedom. They can simply refuse to use proprietary software. They don't require a paternalistic GPL license to 'enforce' a freedom they already had.
It has happened in many cases. Many companies have used, modified and contributed to GCC in the past with no complaints. The whole anti-copyleft thing did not start until recently, because of increased resistance from companies like Apple that have an active anti-freedom political agenda.
The simple act of refusing to use proprietary software does not give users freedom in software, it just prevents them from being subjugated by that one particular piece of proprietary software. Copyleft is also not necessary to have freedom, but it is pragmatic in that it leverages copyright in an attempt to further the political goal of freedom. It would be nice if this didn't have to be done, but we cannot ignore the issue of copyright, it will not go away quietly.
> It has happened in many cases. Many companies have used, modified and contributed to GCC in the past with no complaints.
Hardly. Companies have complained plenty, but escaping the network effects of the GPL was expensive enough that they couldn't do much about it.
> The simple act of refusing to use proprietary software does not give users freedom in software, it just prevents them from being subjugated by that one particular piece of proprietary software.
Why not? If they want open-source software, they can use it. There, they're free.
> Copyleft is not necessary to have freedom, but it is pragmatic in that it leverages copyright in an attempt to further the pro-freedom political agenda.
By attempting to enforce a communist ideal of shared ownership of the means of production. Most reasonable people don't consider that to be 'freedom'.
The only groups that wish to "escape the network effects of the GPL" are proprietary companies that wish to attack their own users' freedoms. The fact that you have a choice of being able to reject proprietary software is not the issue at hand, because people always have that choice, and it is good that we do. The issue is why you should make the choice and what ramifications it has.
>shared ownership of the means of production
I mentioned this in a different thread, but this has no context in the current discussion; the "means of production" are a complete non sequitur in relation to software. Free software isn't "communist," it simply rejects authoritarianism.
Actually, as a liberally licensed open-source author, I wish to "escape the network effects of the GPL" because I want proprietary companies to use my software, too.
That means I have to escape the GPL despite your wanting to force me to participate. You can keep claiming this isn't communism, except that it exactly parallels the Marxist notions of shared ownership.
Your choice of not wanting to accept the GPL doesn't limit your freedom, you had the choice to accept it and you declined. The fact that the software is copyrighted in the first place is the only thing that could be potentially limiting your freedom, and the GPL cannot do anything about that.
Users of proprietary software have the choice and can decline, too. Since users don't need it to protect their freedom, why is the GPL necessary, if not to restrict freedom?
Because what it actually protects is rights, rights of the end user.
You of course know this, but you instead choose to endlessly harp on the meaning of the word 'freedom'.
Licensing your code under the GPL grants the recipient of any binary in which your code is included a specific set of rights.
These rights include the right to the source code, should the recipient want it.
This particular right makes the GPL very interesting for many developers: they can release code and then, as recipients of modifications of that code, get access to those modifications in source code form, and thus gain improvements to their original code.
If those users don't want to use code that isn't released liberally, they don't have to. So what are you actually securing for them? It's not freedom, they already have the freedom to not use proprietary software.
What you secure is the end user's right to the source code of the _actual_ binary they receive, which gives them the further rights to _examine_ and _modify_ said code and to generate and _run_ a binary with their own modifications.
You know, those end user rights which GPL was created to preserve.
You don't need the GPL for that; just provide the source code for your software. If users actually care about having the code, they can use binaries for which the code is provided.
If they don't care, they can use proprietary software, or proprietary extensions that make your software better.
Either way, the user has full control over their choice of software, and nobody's actions can make your original open source code disappear.
The GPL is really about enforcing your ideals onto other people's code if it happens to be reliant on yours. That's a legit quid pro quo for a license, but it has jack all to do with freedom or user rights.
>The GPL is really about enforcing your ideals onto other people's code if it happens to be reliant on yours.
Oh please, enough with the bs; your code won't just 'happen' to be reliant on GPL licensed code. It's just as much of a choice as that of end users choosing not to use proprietary code, which you keep repeating.
And of course it has to do with user rights; that is what the GPL preserves.
The right to the source code of a binary containing GPL licensed code, and the right to modify and build binaries from that source code.
It legally binds anyone who uses GPL licensed code to grant those _rights_ to their end users.
Being able to examine the code for the ACTUAL binary you receive is a very different thing from examining some original source code which may very well have gone through numerous changes before being compiled into the ACTUAL binary you receive.
And I've already described how, from a developer standpoint, this is important: they are likely interested in receiving enhancements to their code in source code form if they choose to license under the GPL. But it is also of importance in other respects as well, since you as an end user may want to examine the source code to make sure it doesn't do something you don't want, or change its behaviour to do something _you_ do want.
Your entire argument depends on the fiction that users can't make their own choices, or that somehow your code can be made non-OSS once released.
Neither is true, and this hokey 'user rights' notion is just a thinly veiled justification for a paternalistic communist view of open source collaboration, in which you want to control not only your own code, but the code that other people write, too.
It wouldn't be so insidious if it was presented honestly, as a set of limitations on freedom for your own benefit as the author of the GPL software, rather than in terms of the moral high ground of granting freedoms.
No, my entire argument depends on the FACT that the GPL preserves end user rights which end users are not entitled to with permissive code and are not given with proprietary code. All your attempts to evade this point show that you have no interest in any honest conversation.
This is not about end user choice, this is about end user rights. With GPL licensed code end users have the _right_ to the source code which created the _actual_ binary they receive. They don't have to demand the source code, but it is their right; they don't have to examine, modify, or run a resulting binary of their own, but it is their right.
These rights are NOT preserved with permissive licensing, and they are NOT granted by proprietary code.
You try to muddy the waters by saying that 'users' don't have to use proprietary code unless they want to, but again, developers don't have to use GPL licensed code, so that doesn't have anything to do with this at all.
Whenever you want to use someone else's code you are subject to their conditions, you sure see nothing wrong with setting conditions for using proprietary code (users can always say no), so by what logic do you think developers should not be allowed to set GPL conditions for their code?
Your communist rant makes you sound like some crazy right-wing extremist, you want to have the right to use other people's code in your proprietary projects, but the notion of other developers instead wanting access to code used in conjunction with theirs in return somehow strikes you as some oppressive communist scheme??? Seriously???
You seem to think open source developers somehow owe you code to use in a proprietary fashion, and if they don't provide it under such conditions they are 'communists who want to control other people's code'. I can't quite understand people like you.
That's because you've pretty substantially misinterpreted what I said.
I don't think "developers somehow owe you code to use in a proprietary fashion" -- developers are free to place any licensing restrictions they want on their work.
What I do think is that the GPL is paternalistic, communistic, and intellectually dishonest.
Here's why:
- Once open sourced, your code stays open sourced forever. There's no way to "un-opensource" code, and users can make use of it, forever.
- If users want the source code to the tools they use, they are absolutely free to only use binaries from people that provide the source code. If they don't want the code, they don't have to.
Ergo, users already have the "rights" and/or "freedoms" you're pretending to gift to them. QED.
What the GPL actually does is create a quid-pro-quo communist common ownership of the means of production (code) between developers. This has nothing to do with "rights" of end-users, and everything to do with restricting the rights of developers should they choose to participate in the GPL ecosystem.
There's nothing wrong with that system (other than it being unworkable and viral and restrictive, but there's no law against being obtuse), but it is intellectually dishonest to claim some sort of moral or ethical high ground.
The GPL is a simple mechanism to restrict what other people do with the code they write in exchange for using your code, not about "four freedoms" or "user rights". The parallels to Marxist philosophy ought to be pretty obvious in the aptly titled "Why Software Should Not Have Owners" essay from RMS: http://www.gnu.org/philosophy/why-free.html
>Neither is true, and this hokey 'user rights' notion is just a thinly veiled justification for a paternalistic communist view of open source collaboration, in which you want to control not only your own code, but the code that other people write, too.
No, this makes no sense. The only people who want to "control code" are proprietary companies. Copyleft and the GPL is a rejection of this; the main goal of it is to ensure that all users have equal control over the software. I don't know how you draw the conclusion that the GPL only benefits the author, as the author loses most if not all of the (unjust) powers granted by copyright by publishing under the GPL.
> The only people who want to "control code" are proprietary companies. Copyleft and the GPL is a rejection of this ...
So copyleft/GPL licensing isn't an attempt to 'control' code? Despite the fact that this is exactly what they do?
> ... the main goal of it is to ensure that all users have equal control over the software.
The users already have the choice to use open-source software vs. proprietary software. If they want to have "equal control" over the software, all they have to do is make the choice to only use open-source software.
> I don't know how you draw the conclusion that the GPL only benefits the author, as the author loses most if not all of the (unjust) powers granted by copyright by publishing under the GPL.
The author enters into a quid-pro-quo arrangement by which they get access to other people's code under terms equal to their own, and they're free to relicense their own code for whatever commercial use they want. It's an attempt to create a communistic shared ownership of code.
Copyright is what allows people to "control code." Copyleft is necessary to, in essence, "reverse the effects" of copyright. So yes, it does technically leverage the control copyright provides, but this is necessary, as everything is copyrighted by default. The author being able to re-license code he owns the copyrights to is an unfortunate side effect of copyright, but there is little that copyleft can do about that; it could happen regardless of the license chosen. A possible solution would be to transfer copyright to a group or person you know will never license anything under a proprietary license, such as the FSF.
Please refrain from using non-sequiturs like "communistic" to describe things. Copyleft does not deny anyone the ability to exercise any freedom including selling the software. The idea of copyright, and the idea that software should have "owners" are non-capitalistic ideas to begin with, so if anything the proprietary software companies are practicing authoritarianism.
It's hardly a non sequitur; the parallels with Marxist thought on common ownership of the means of production are rather undeniable.
Like communism did to economies, the GPL has done our industry immeasurable harm by attempting to enforce sharing, creating closed ecosystems where the cost of market entry was so high that others had no choice but to participate -- such as was the case with GCC, until the GPLv3 gave Apple sufficient cause to make a massive investment in breaking the GPL's shackles on the compiler/runtime tools software market.
>the GPL has done our industry immeasurable harm by attempting to enforce sharing, creating closed ecosystems
Your 'logic' is ridiculous; the GPL is no more 'enforcing' sharing than proprietary software is 'enforcing' non-sharing.
And proprietary code is the epitome of a 'closed ecosystem', as it by definition doesn't share its source code. Meanwhile, GPL code is freely shared amongst compatibly licensed code.
Overall your entire line of thought is clearly that developers should not be allowed to share open source unless they allow it to be used in proprietary code, because if they don't they cause 'immeasurable harm' to the software industry.
If anything it's your kind of person who has done 'immeasurable harm' to the software industry: leeches looking for a quick buck, who think it's unfair to have to compete against open source if they can't take that open source and modify and sell it. Which in the end is all that your arguments boil down to: you want the right to use someone else's code without having to return the favour, and if you can't, then you cry foul.
Whenever I come across someone like you I'm really glad the GPL exists as an alternative.
> Your 'logic' is ridiculous; the GPL is no more 'enforcing' sharing than proprietary software is 'enforcing' non-sharing.
You call my logic ridiculous, and then you reiterate my entire point. The GPL is enforcing sharing in the same way proprietary software is enforcing non-sharing.
Whereas the MIT and BSD licenses provide just as many freedoms as the GPL, without enforcing anything. They are, de facto, more free, both for end users and developers. End users are free to only use open-source products if they so desire, and developers are free to use liberally licensed open source however they wish, too.
> Overall your entire line of thought is clearly that developers should not be allowed to share open source unless they allow it to be used in proprietary code, because if they don't they cause 'immeasurable harm' to the software industry.
This is a strawman argument that has no basis in what I actually said.
I think people should be allowed to use the GPL, just like I think people should be allowed to advocate communism. I also think that we should do our best to demonstrate the fallaciousness of their arguments, because they have the capacity to cause significant harm to our industry.
> Whenever I come across someone like you I'm really glad the GPL exists as an alternative.
And whenever I come across someone like you, it is made apparent that the GPL is more of a religion and a political statement than a reasoned decision made from an understanding of the economic and human realities of industry and our society.
>You call my logic ridiculous, and then you reiterate my entire point.
No, the 'point' you've been trying to push during this entire conversation is that proprietary software gives the end user the 'freedom' to choose not to use it, but somehow you claim that developers don't have that same 'freedom' when it comes to not using GPL licensed code, which of course is a big lie.
>Whereas the MIT and BSD licenses provide just as many freedoms as the GPL, without enforcing anything.
Stop trying to muddy the water with the meaning of the word 'freedom'; we've already established that the GPL is about rights, and these rights are not provided by permissive licences at all. Again, GPL licensed code assures that the source code will be made available to end users; permissive licences assure nothing of the sort.
>because they have the capacity to cause significant harm to our industry.
How can they cause 'significant harm' to our industry? Furthermore how has GPL caused the 'industry immeasurable harm' which you claim it has?
>And whenever I come across someone like you, it is made apparent that the GPL is more of a religion and a political statement
Yes the good old communist/religious/political card which always gets thrown by GPL haters when their arguments fall to pieces.
And you're not even close. I've spent my entire professional career writing software which, in the vast majority of cases, has ended up being proprietary. I have no problem whatsoever with charging for software, and unlike Stallman I see nothing unethical about proprietary software.
My viewpoint is that of a developer's right to set any conditions they want for THEIR code, which includes permissive, proprietary or copyleft. If I as a developer want to release my code under a licence which makes sure that any recipients of programs using MY code will also have the source code to those programs available, then that is my right (under the legal system we have now).
It doesn't matter whether my motivation is that of wanting the source code of any enhancements made to my code (the most likely motivation from a developer perspective), or whether my motivation is political/philosophical (FSF); I still have just as much right as any other developer to set the conditions for using my code.
And neither of these motivations are in any way inferior to your motivation of wanting to make money.
> No, the 'point' you've been trying to push during this entire conversation is that proprietary software gives the end user the 'freedom' to choose not to use it, but somehow you claim that developers don't have that same 'freedom' when it comes to not using GPL licensed code, which of course is a big lie.
No, that's not what I've said at all.
> Stop trying to muddy the water with the meaning of the word 'freedom'; we've already established that the GPL is about rights, and these rights are not provided by permissive licences at all. Again, GPL licensed code assures that the source code will be made available to end users; permissive licences assure nothing of the sort.
The permissive licenses DO provide those rights, if the users choose not to use proprietary software. If they want to use software for which the code isn't provided, they can do that too. Either way, you've given them nothing they didn't already have.
> How can they cause 'significant harm' to our industry? Furthermore how has GPL caused the 'industry immeasurable harm' which you claim it has?
The network effects of GCC/GDB being 1) very expensive to reproduce, 2) easier to contribute to than to replace, and 3) GPL'd, held back the advancement of everything from developer tools (IDEs, static analyzers, debuggers, disassemblers) to JIT implementations for 20+ years.
> Yes the good old communist/religious/political card which always gets thrown by GPL haters when their arguments fall to pieces.
If this essay was any more Marxist, it would be carrying a red flag and speaking Russian.
> And neither of these motivations are in any way inferior to your motivation of wanting to make money.
They're inferior because they're promoted based on intellectual dishonesty and false premises:
- Claiming that you're "granting freedoms", despite the fact that people already have them.
- Employing economic network effects whose intention -- and end result, should the GPL succeed -- would be to make it economically infeasible for people to choose whether they wish to engage in your communal ownership of the means of production.
Fortunately, enough people have seen the logical holes in the GPL's premise that the attempt to assume control over individuals' use of software, through the economic clout of network effects, has not succeeded, despite setbacks such as GCC.
>Either way, you've given them nothing they didn't already have.
Of course you do; if you license your code under the GPL you ensure that all end users of programs using your code will be given the source code to those programs as well.
>The network effects of GCC/GDB being 1) very expensive to reproduce, and 2) easier to contribute to then replace, and 3) GPL'd, held back the advancement...
More nonsense from you; nothing prevented anyone from forking GCC and 'advancing' it. In fact, that's exactly what happened, and later that fork became the main project.
Had GCC/GDB been proprietary, the exact opposite would be true: no one could fork them.
And GCC certainly hasn't prevented any proprietary competition either; instead, its existence has made sure that the proprietary competition has had to give better value to consumers, as there has been a free alternative, which has certainly helped advance compiler development in general.
>If this essay was any more Marxist, it would be carrying a red flag and speaking Russian.
The GPL is a software licence, not a political manifesto. Are you saying the GPL became the most widely used licence in the world because all developers who chose to license their original code as GPL did so because they were politically motivated? Hardly; I'd say the vast majority chose the GPL because of its tit-for-tat mechanism, which ensured them access to modifications of their code.
The largest and most successful cooperatively developed software project in the world, Linux, had the GPL chosen as its licence by its creator for exactly this purpose, not for political reasons.
And as it's a licence, the GPL has no impact at all unless a developer _chooses_ to license their code as such and another developer _chooses_ to use it.
>Claiming that you're "granting freedoms", despite the fact that people already have them.
I've already said that I don't agree with the word 'freedoms'; again, 'rights' is what they should have been called, as it is rights which are passed along to the end user. And no, these rights are not something end users are entitled to with permissive licences; they don't have the right to get the source code with binaries which use permissively licensed code. You can stop this bs now.
>Employing economic network effects by which the intention -- and end result, should the GPL succeed...
The GPL is already a success; it's a viable licence choice, used in a ton of software.
Permissive licences are also successes; it doesn't matter if the GPL is used more, permissive licences fill a need and are therefore widely used as well.
Also, these licence types (copyleft, permissive) are typically used for different types of software: copyleft is usually the choice for full solutions/applications, while permissive licences are typically used for component/framework code, reflecting how they satisfy different needs amongst developers.
And there will always be a place for proprietary software as well, as long as it produces value for users which makes it worth the 'cost' (typically monetary). And if you can't compete with something someone gives away for free, then you really should be doing something else.
The 'industry' doesn't owe your proprietary projects open source code or protection from competition of free alternatives.
Only as far as your 'freedom' consists of violating any of the rights which GPL bestows upon end users.
The GPL exists to preserve rights for end users, and since one of those rights is that of receiving the source code, it is incompatible with proprietary projects. As such you can say that the GPL is open source code for open source.
You violate the rights which the GPL licence grants them, which are the rights to the source code of the actual binary they receive, so that they can examine, modify and run their own versions of the actual binary.
In practice this works out well for developers who release their code under the GPL and want to enjoy any enhancements made to their code in return.
They will as 'end users' of any distributed modifications have the right to the source code of those modifications.
This way those modifications/enhancements aren't locked away from the original developer in a proprietary project.
If users don't want to run binaries for which source isn't available, they are under no obligation to do so, and can simply choose to use binaries for which source is provided; no so-called ethical "right" to source code access is violated, as the source, once released, remains open source.
And developers who want to create proprietary projects are under no obligation to use GPL licensed code. What is your point?
GPL licensed code comes with conditions, such as that of granting end users certain rights, including the right to the source code of the actual binary they receive.
Proprietary code by definition will not grant the end user access to the source code from which the binary they receive was made, and it typically comes with the condition of monetary compensation for its use.
In both these cases the 'end user' can choose not to use what is offered.
>Users can already choose to not use proprietary software
And developers can already CHOOSE not to use GPL licensed code, so this part of your argument has no point whatsoever.
The GPL licence exists to grant end users the right to examine, modify and compile/run the source code of the binaries they receive.
The user does NOT 'already have these rights' with proprietary software.
When a developer licenses their code under the GPL it means that all recipients of the code, in either its original or modified state, will have the above rights.
And again, since developers will be 'end users' themselves once they receive a binary containing modifications to their code, it creates an effective tit-for-tat mechanism where they will receive the source code of any modifications.
There is no 'right' taken away from 'developers'; they have no 'right' to use code other than under the conditions set upon it by its owner, and this holds true for all licences.
> Nonsense, by that logic your proprietary software is taking away users' 'right' to use your proprietary software in an open source context.
Indeed, it is. The difference is that we're not intellectually dishonest about it, and don't try to dress up the mutual exchange of value (user's money for our code) in some sort of ridiculous redefinition of "freedom", and we certainly don't claim to be "more free" than liberally licensed open-source software.
>and don't try to dress up the mutual exchange of value (user's money for our code) in some sort of ridiculous redefinition of "freedom"
Bullshit, this is what you've been trying to do during this entire discussion. You claim over and over again that the user has the 'freedom' not to use proprietary software.
But when that same 'freedom' is applied to developers, who have the same 'freedom' not to use GPL licensed code, then suddenly you say their 'rights' are being taken away.
According to you, proprietary developers are somehow robbed of a 'right' when they can't use GPL licensed code, which is nonsense, as the only right they have to use ANY code is by the conditions set by the code owner, be they the conditions of a licence or conditions of monetary compensation.
Your hypocrisy shines through your entire line of poorly constructed arguments.
You dislike GPL because you as a proprietary developer can't use that code, which for some reason you think you have a 'right' to.
> You dislike GPL because you as a proprietary developer can't use that code, which for some reason you think you have a 'right' to.
You're being obtuse; I've never said what you claim. What I've said, repeatedly, is that the GPL grants fewer freedoms than liberal licenses such as the MIT license or the BSD license; it doesn't "protect" or "grant" any "freedoms" that the MIT and BSD licenses don't already provide themselves.
What the GPL does do is restrict usage to enforce a quid-pro-quo relationship on its users, with the political goal of leveraging network effects to push a Marxist ideology of dismantling private ownership in favor of shared ownership of the means of production.
If the goal was 'freedom', then it would be enough to provide users with free access to your code; nobody can deny them that free access once it's provided. The goal isn't freedom, however, and painting it as such is both intellectually shallow and dishonest.
> According to many, including the FSF, GNU, Stallman and myself, increasing access to the source code for everyone is increasing freedom.
Once released freely, the source code never stops being free, so I don't really know what they're talking about in terms of "freedom". MIT licensed code is almost infinitely free, it will never stop being free, and nobody can take it away from you.
It seems to me that they're interested in forcing other people's source code to be free.
It's not just the code itself, but also derivatives thereof. E.g., no one would argue that iOS is free in any sense of the word.
There are many, many reasons for wanting the ability to inspect, modify and compile code yourself. The GPL secures those rights in perpetuity, while other licenses are more lax towards those particular rights. So, for example, you'd be losing that freedom by using BSD-derived iOS.
Basically the GPL is designed to protect users of the software, not developers of the software.
Those rights are already secured in perpetuity. Your code, once released as open source, is always open source.
If users don't want to use code that contains proprietary modifications to your code, nobody is forcing them to -- your open-source code hasn't disappeared.
As the copyright holder of a piece of code, I can use the GPL to ensure my users won't ever have to wonder about the provenance of my bit of code, no matter where it ends up. When developers adopt the GPL, it should be a conscious decision to protect those rights for future users.
The goal isn't for code to be open source Just Because. Take a moment to consider the motivations behind the FSF's definition of free software, as linked above by drcube. RMS has been talking about this for 30 years.
You clearly misunderstand his goals and those of GNU/FSF -- that is evident from every single one of your posts on this article, despite all these people trying to explain it to you. Not going to continue here; have a good evening.
I'd argue quite the opposite, I think I understand them quite clearly, and I think the facts speak for themselves in terms of how much grief the GPL and GPL incompatibilities have caused our industry.
But the rights in question are those of the end user, and these rights include that of receiving the source code of the actual binary containing GPL-licensed code, which will include any (possible) modifications of said GPL code.
This is, in my opinion, the main attraction the GPL has for developers: as 'end users', they are given the right to the source of any modifications made to their original code.
Users already have that right with liberally licensed open source code. If they don't want to use proprietary extensions for which they can't acquire the source, that's fine; nobody is forcing them to.
Which is what the GPL guarantees: binaries that are guaranteed to come with source code.
Which is why developers who want to guarantee this right to end users for the code they release choose to license their code under the GPL.
Only copyleft-style licences guarantee this right to end users, so if you are a developer who wants your users to have this right secured (which has a practical benefit for developers, too: as end users of modifications to their original code, they are guaranteed the source of those modifications), you will use copyleft-style licensing.
If you guarantee that your binary has source, it has source. Guaranteed.
If users only want binaries that guarantee they have source, they can insist on only downloading/acquiring binaries from people that guarantee they provide source.
If you as a developer want to ensure that users of your code or modifications thereof will have access to that source code (and 'end users' often include the original developer himself), permissive licences do NOT _guarantee_ that.
The only way to guarantee that is to make it a condition for using the code in question, which is exactly what the GPL does.
Do we really need to continue this dance of yours?
When some corporation sells you some proprietary software, the consumer has the FREEDOM to choose if he wants to fork over his cash in exchange for it.
When the proprietary developer encounters some piece of GPL software, he's being forced to comply with it, even if it's still his own decision whether or not to incorporate the code in his project.
Ah, language.
(TL;DR: nobody is forcing anyone to accept the GPL. The GPL offers you a deal - passing forward the four freedoms is just the price of using the library.)
The GPL 'freedom' is not freedom from licence conditions; it's about the freedom(s) given to the end user(s) through these licence conditions.
Personally, I've always thought it would be better if they just used the term 'rights', which is what the GPL conditions actually grant the end user. I can't say I find the wording 'dishonest', though -- just a poor choice.
The license grants rights and imposes limitations. One of those limitations prevents users from mixing code freely.
Users already had the right to not use proprietary products; what the GPL has actually done is to take away their right to use your code with a proprietary product.
As such, you've granted users fewer rights as compared to liberal open source licenses, not more.
Users -- those who are actually interacting with a running version of the software -- lose nothing with the GPL. On the contrary, their rights to the code are secured.
Developers -- those who modify and redistribute the software, in binary form or otherwise -- do indeed have restrictions designed to protect the rights of users.
So yes, the GPL does impose restrictions in specific cases to secure rights in the general case.
> Users -- those who are actually interacting with a running version of the software -- lose nothing with the GPL. On the contrary, their rights to the code are secured.
They've been denied the 'right' to use proprietary extensions to the software. Those extensions may provide more benefit to the user than the open source software alone: see also, Mac OS X.
That's no less real a 'right' than the 'right' to have access to the source code, which is something they've never lost because open-source always remains open-source, and they remain free to only use open-source software.
> So yes, the GPL does impose restrictions in specific cases to secure rights in the general case.
Those rights are already secured, because nobody forces users to use proprietary software. If they only want to use software where code is available, they have every right to do so.
You are, once again, either misunderstanding or willfully misrepresenting everything that's being said. Users aren't being denied anything, because proprietary software doesn't exist under the GPL. Developers and distributors do lose the chance to keep their code closed. The GPL protects users, not developers.
You seem to think downstream developers have an entitlement to release proprietary software, even if it goes against the wishes of the original authors. News flash: devs aren't forced to use GPL'd software, either. If you want to keep your software proprietary, you're free to use something under another license, or write something under your own copyright.
Really, it's not even about being "open source" per se; the motivations of the GPL go much deeper than simply making the code available. But you obviously don't agree with those, from the other thread.
Plugins are where it gets weird. Technically there should not be proprietary plugins at all with the GPL, but it might depend on the interface or manner of linking (eg, would an HTTP-based "plugin" API be considered a derivative work?). For some cases, licensing the original software under the LGPL may be sufficient to support proprietary plugins.
Ultimately it's a judgement call by the original author and what use-cases or freedoms he/she wishes to support. Hopefully they've thought that far ahead, though. It's not an easy question for the majority of the population that doesn't quite lean as far as RMS does.
The users can already choose to not use proprietary software, so what is it protecting them against? Themselves?
> You seem to think downstream developers have an entitlement to release proprietary software, even if it goes against the wishes of the original authors ...
No, I just believe that the GPL is intellectually dishonest. It's about controlling other people's means of production, with the end-goal of creating a communist ecosystem in which it's essentially impossible to not participate due to inherent market entry costs.
> The users can already choose to not use proprietary software, so what is it protecting them against? Themselves?
It's simplistic reasoning to ignore the social effects. For example, non-copyleft free software is extremely vulnerable to the EEE (embrace, extend, extinguish) strategy: any proprietary vendor can take the code and re-release it with extra or changed features under a proprietary license, which can all but extinguish the original software for lack of interest, forcing users to choose between the Free version and the "upgrade" -- the one that's actually compatible with everyone else's.
> No, I just believe that the GPL is intellectually dishonest. It's about controlling other people's means of production, with the end-goal of creating a communist ecosystem in which it's essentially impossible to not participate due to inherent market entry costs.
What happened to the person choosing to not use the software? Suddenly when it's the poor proprietary developer, he's being controlled by the bad copyright holders?
And how is the GPL intellectually dishonest? Replacing proprietary software by giving free software developers an advantage is an explicit goal of the GNU project. How is it dishonest?
In any case, you're fighting the wrong windmill. It's not the GPL that gives anyone such power, it's copyright. It's that government-granted monopoly that allows control over other people's means of production. The solution to your problem is simple: fight for its elimination.
> And how is the GPL intellectually dishonest? Replacing proprietary software by giving free software developers an advantage is an explicit goal of the GNU project. How is it dishonest?
The usual explanation is "four freedoms" and giving users freedom.
Leveraging network effects to create a communist shared ownership of the means of production is the honest explanation of the GPL, and that has nothing to do with 'freedom', and everything to do with network-enforced Marxist ideals.
> It's that government-granted monopoly that allows control over other people's means of production. The solution to your problem is simple: fight for its elimination.
I have no problem with copyright, and I don't want to forcibly eliminate the GPL. I'd be happy for it to die an honest death after careful and rational consideration by the industry.
> The usual explanation is "four freedoms" and giving users freedom.
No, that's what all Free Software does. The GPL is a hack to extend them as widely as possible, by giving Free Software an advantage over proprietary code.
> Leveraging network effects to create a communist shared ownership of the means of production is the honest explanation of the GPL, and that has nothing to do with 'freedom', and everything to do with network-enforced Marxist ideals.
There's no ownership here, only State granted monopolies. Property is an institution for allocating scarce resources; copyright is a government granted privilege designed to "promote Progress". The GPL is a way of defusing the crony system that takes away people's control of their own property - their machines. You're seeing Marxism where it doesn't exist.
On a related note, it's interesting to think that the USSR eliminated private property, yet they established and kept copyright - with fairly extensive terms, in fact.
> I have no problem with copyright
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" ;)
(By the way, I'm honestly sorry you're being downvoted. I find the attitudes of these cowards who downvoted based on disagreement rather disgusting.)
I downvoted him this time. Not only because I disagree, but also because espousing misinformation of that sort is genuinely damaging -- it's downright wrong, and calling something communistic carries a lot of negative connotations (whether justified or not).
But the worst part is, that argument sounds plausible at first glance. The marginal cost of data distribution is near-zero, and we've hit information post-scarcity. Reconciling that with traditional economic models is awkward. RMS/GNU already carry enough baggage, and without understanding their motivations, it's very easy to attach incorrect labels to them and their goals. You've been very eloquent in describing those, so thanks.
Well, then I think you should've replied and written that when downvoting.
Personally, I don't think downvoting is the appropriate response. Particularly, I don't think it would have the desired effect, since someone who might be affected by the supposed misinformation is unable to understand the reason behind the downvote, so they might make the same assumption that I did.
"[A]ll men may be restrained from invading another's rights and from doing harm to one another, and [this] law of nature... which wills the peace and preservation of all mankind... is... put into every man's hands, whereby everyone has a right to punish the transgressors of that law to such a degree as may hinder its violation"
-- John Locke, Second Treatise of Government
Your management hasn't really changed. I grew up with MS in the 80s and 90s -- I'm not quick to forgive a monopoly that set the whole industry back by 10 years.
The peon actively chose to work there and participate in Microsoft's attempts to dismantle any competition in any field they enter through highly questionable tactics.
Such cynicism and paranoia just breeds an equal amount of such on our side. It makes it that much harder for those of us who wish to engage and do good in the OSS community.
Earnestly and dedicatedly inducing people to break their contractual obligations, fall afoul of local hotel regulations, and make life suck a little more for their neighbors.
Can you imagine writing desktop software for Linux? Inconsistent APIs, API breakage, ABI breakage, sizable variances between distributions, multiple desktops UI implementations, and you may as well just forget about mobile.
I'm not justifying Apple's ever increasing shift towards enacting strict constraints on their platform developers, but the comparison with Linux is pointless.
The company I work for develops high-end desktop software for Linux, Mac and Windows, and Linux generally causes us the least trouble and OS X (thanks mostly to atrocious graphics drivers and OpenGL support in addition to really bad memory allocation/paging issues that have only just been fixed in Mountain Lion) the most trouble. And this is on limited numbers of hardware and software variations for OS X.
We "officially support" RHEL 5.4 and RHEL 6.0 (and thus corresponding CentOSes).
But we've got customers running everything from RHEL 4.5 to the latest and greatest Ubuntus, Mints and Arch.
All with the same binary installers.
The only Linux-related weirdness I can recall in the last two years, distro-wise, was Scientific Linux, which seemed to have some weird X.Org config issues.
So you integrate with the user's desktop environment, using standard local widgets, theming, integration with UX guidelines, local library dependencies, etc?
Or are you shipping a Qt app with your own dependencies included? If so, Qt is pretty notoriously buggy on OS X (and rightfully disliked by most users as it stands outside all platform conventions), perhaps explaining some of your complaints.
Local widgets & theming yes, although generally we ship with our own themes by default as we make VFX apps, and the last thing artists want is bright UI to distract them.
I don't believe there are general UX guidelines for Linuxes, but for things like notification popups, system tray stuff, yeah, we do all that natively to the distro through Qt (DBus handles all that transparently within Qt very nicely).
We ship Qt as shared libs with the binaries and all dependencies we need system-wise statically - i.e. zlib, libpng, libjpeg are statically linked. For things like embedded Python, we have to do the same on OS X anyway, due to different python and zlib versions per OS X version.
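The shipping setup described above can be sketched as a qmake project fragment. To be clear, the paths and library names below are illustrative assumptions, not the actual build files:

```
# Hypothetical qmake fragment sketching the packaging described above.

QT += core gui

# Small system-level dependencies are linked statically from our own tree:
LIBS += $$PWD/third_party/lib/libz.a \
        $$PWD/third_party/lib/libpng.a \
        $$PWD/third_party/lib/libjpeg.a

# Qt itself ships as shared libraries next to the binary; the installer
# lays them out and a launcher script sets LD_LIBRARY_PATH on Linux.
# On OS X, macdeployqt copies the Qt frameworks into the .app bundle:
#   macdeployqt MyApp.app
```

Linking zlib/libpng/libjpeg statically sidesteps the version drift between distros (and between OS X releases) that the comment describes.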
As for Qt and OS X regarding nativeness - I keep hearing this, but I never see any good examples. I agree it's possible to create Qt apps on OS X that look crap, but it's also possible to make them look native as far as I'm concerned. The only thing I can remember not being able to easily do in Qt regarding OS X widgets is the horizontal grouping of side-by-side buttons in radio-button fashion, but it's easy enough to knock up a QWidget subclass which replicates this. Admittedly, though, I don't think I've tried to replicate every possible OS X control/widget...
I think what might be the most difficult part of making a Qt app on OS X look native is the layout and spacing stuff, which generally does seem to be a bit crap within Qt on OS X.
I can have the same code to do a side-panel with edit values in a panel, and it looks great on Linux and Windows:
http://imgur.com/kUI90ry
but on OS X the spacing and padding is all over the place - so I have to add a manual style-sheet to get the padding and spacing right, which is crap (the image above is without the extra stylesheet). So I'm not saying it's perfect or easy, but I think it is possible.
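The "manual style-sheet" fix mentioned above is just a Qt stylesheet (QSS) with explicit spacing values. A hypothetical example - the selectors and pixel values are illustrative, not the actual fix:

```
/* Hypothetical QSS fragment: force consistent padding/margins on OS X,
   where Qt's default layout metrics tend to drift from other platforms. */
QLineEdit, QSpinBox, QComboBox {
    padding: 2px 4px;
    margin: 1px 0px;
}
QLabel {
    padding-right: 6px;
}
QGroupBox {
    margin-top: 12px;   /* leave room for the group title */
}
```

A sheet like this can be applied per-widget with `setStyleSheet()` or app-wide, typically guarded by a platform check so the other platforms keep Qt's defaults.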
The dialog you've shown for OS X does not look native at all. Not just the spacing, but especially the second selector ('render') is quite odd. Also the font usage for the labels, the light-gray borders around the input fields and the tab focus is strange.
Does Qt basically render a bitmap, or does it use the native controls and just apply some odd default styling? If it's the former, I would probably do the UI in native Objective-C/Cocoa and keep a nice cross-platform base for the VFX core of the app. Not sure if that's feasible for your application, of course.
I concede the borders are slightly different, but I personally wouldn't have noticed or minded.
The render button is a custom button - the only difference from the native one is the right-hand menu triangle which isn't native.
Qt has styles, and draws controls itself via vector drawing, and you can customise this yourself: you've got all the power in the world to do what you want, but at the expense of loads of complexity that no-one's really going to go to the trouble of taking on. So basically, it tries to emulate the controls on all platforms, and does a pretty poor job on OS X.
Thank you, very interesting. Am I correct that the button 'Visible to: All' is basically an action button where you can use the arrow on the side to change the action within? Those are not really common on OS X. Guess that's why it looks a bit out of place.
I would side with the reader below, I think that going for a completely different look works better than 'almost' native. Ableton Live is a good example I think that has (almost) the same UI on Mac as on Windows which doesn't look bad on either OS.
Not quite: it's a highly-custom multiple selection mask button (with intelligent title based on the selection options) - doesn't really have any native equivalent on any platform as far as I'm aware:
The OS X version looks ancient and not like a modern OS X app at all. Is that due to limitations in Qt? I had considered Qt for cross platform development, but after seeing that comparison it seems like it would be better to maintain separate front-ends for different platforms.
I am thinking of Pixelmator, or Apple's Final Cut Pro and Motion for comparison to your screenshot. Even modern versions of Photoshop manage to look much nicer than that.
It's got a custom overall stylesheet for the tab widgets as I don't like the OS X tab style or colour, so they're completely different.
Qt "emulates" native control appearances by drawing them itself. This means it doesn't get it 100% correct (in OS X's case). You can apply stylesheets to configure all aspects of spacing and appearance, so in theory it's possible to write a stylesheet to completely emulate OS X's controls (or at least the majority of them), but that'd be a lot of work which you probably wouldn't want to do.
I could make it any colour I wanted with stylesheets in Qt - this was a personal app, and I'm not too concerned with what it looks like.
Mostly things that smelled like memory leaks (pull a matte, leave it for a while, come back and tweak it => long coffee break). Also, restoring from idle seemed to be the hardest on Snow Leopard.
Which keyer? Primatte's had a few issues in 6.3 and previous.
on 10.6 (and 10.7), memory allocation/paging is pretty atrocious if you haven't got much free mem left.
Basically, OS X prioritises paging to disk over freeing up Inactive memory (which isn't actually being used for anything at present), which is pretty crap. So it's very easy to make it page when it shouldn't when you allocate loads of memory, and the system will generally just grind to a halt.
Apple fixed this in 10.8, so now it will free up Inactive memory first, before it starts paging.
But neither of these explain any crashes within Nuke.
The kernel. Everything else from libc on up was rewritten, and under Google's control. The Play store is hardly in AOSP, much less their new IntelliJ-based development stack.
The Play Store, along with all the other Google Experience apps like Gmail and Maps, is not in AOSP. Google has no obligation to open-source those, nor do they want to.
Just use Qt. It works brilliantly, has a great API and documentation, and gets you 95% of the way to a Windows version and 85% of the way on OS X too. On OS X you'll have issues with native painting within OpenGL drawing, and maybe with event throttling/reordering for mouse/pen tablet events, for which you have to install a manual event filter, as OS X doesn't always send mouse/keyboard input events in the correct order.
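The manual event filter mentioned here would be a standard `QObject::eventFilter` override installed application-wide. A rough sketch of the approach - the press/release reordering heuristic is an assumption for illustration; the real fix depends on which events actually arrive out of order (this won't compile without Qt):

```
// Sketch only: intercept mouse events app-wide so out-of-order delivery
// on OS X can be detected before widgets see the events.
#include <QApplication>
#include <QEvent>

class EventOrderFilter : public QObject {
protected:
    bool eventFilter(QObject *watched, QEvent *event) override {
        switch (event->type()) {
        case QEvent::MouseButtonPress:
            m_pressed = true;
            break;
        case QEvent::MouseButtonRelease:
            // A release with no preceding press is the out-of-order case;
            // swallow it rather than confusing the widget's state machine.
            if (!m_pressed)
                return true;
            m_pressed = false;
            break;
        default:
            break;
        }
        return QObject::eventFilter(watched, event); // pass everything else on
    }
private:
    bool m_pressed = false;
};

// Installed once, near startup:
//   app.installEventFilter(new EventOrderFilter);
```

Returning `true` from the filter consumes the event; everything else falls through to Qt's normal dispatch.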
There's a massive amount of software that exists outside of note-taking apps where the customer will be more than happy to put up with different-looking buttons because the value the software provides is so high.
There are many software categories where visual-design decisions are not the only competitive advantage.
Design is far more than just visual, and your dismissal of design as "different looking buttons" suggests that you don't really understand design very well.
I don't think anyone was, least of all me. In any case, I'll bite:
Different UI toolkits make doing certain things hard, certain things easy, and some things damn near impossible without large effort on the part of the developer.
So when the parent commenter says "Nobody on OS X wants a Qt app", I'd argue that yes, although you can make a "well designed Qt app", it's much harder to make a "well designed OS X app" with Qt.
Why will it be less likely to be a well-designed OS X app? Because it will differ from a Cocoa app made with Interface Builder in subtle, and sometimes not so subtle, ways. Sure, you can code around these and make adjustments, but the level of effort and investment to get to where you'd be if you'd just built it with a tool more suited to the job (Cocoa/Interface Builder on OS X) is quite high.
This is why there is some truth to the admittedly generalised statement made above that "Nobody on OS X wants a Qt app".
I think a more useful statement is that "Nobody on OS X wants a poorly designed app, because the platform has a high standard for design". It is possible to build well-designed apps without using Cocoa - see Sublime Text.
I have. It used to use X11, I saw even more complaints then.
There aren't any competitors, however, so people use Mathematica.
I use high-end, expensive software that is Qt-only too, but I would gladly switch if there were other options, because Qt is buggy, uses non-intuitive, non-standard UI, exhibits poor performance due to impedance mismatches between the Mac and Qt event/threading models, and ultimately decreases the overall utility of the UX.
Can you give me an example of the "non-intuitive non-standard UI"?
This may well be true, but it's quite possible it's just the developer was lazy and didn't do all the effort to make it look native.
Edit: in fact, I'll give an example: VirtualBox - it does look crap on OS X, but that's because the developers obviously didn't make any effort to make it look native. They've got icons in tab pane buttons and huge toolbar buttons along the top (which isn't a unified toolbar, btw - that would make it look more native), which goes completely against the way OS X apps tend to look.
VirtualBox, IDA Pro, Google Earth (in fact, everything from Google that uses Qt), EAGLE (PCB software), OpenSCAD, QGIS ...
I've literally never been fooled by a Qt app -- a Mac user can spot that hot mess from a mile away. It's not just a matter of visual layout and UX, although that matters and always falls somewhere between subtly or completely wrong.
The controls also tend to lag at weird times, behave slightly strangely, demonstrate unusual font metrics, etc. The apps tend to exhibit bugs related to menu bar event handling, window management, on and on, and crash more than any other apps I use daily.
This isn't a huge surprise given that Cocoa is designed to be the first and only gatekeeper between events, the OS, UI controls, and Cocoa's main event loop, and most of what makes Mac apps a Mac app has to do with the APIs and integration that you lose first-class access to once you deploy Qt -- that includes 'simple' things like UI layout metrics, which Qt has to re-implement.
I abandoned almost all those Qt apps on my list above. I even dropped EAGLE, despite being "native", in favor of Altera on Windows, which I paid a hefty premium for in no small part because it wasn't a pile of broken UX, Windows or not.
I still use IDA Pro, but the minute http://www.hopperapp.com/ can replace my IDA Pro usage, I'll switch in a heartbeat.
Although having looked at Hopper, I must say I'm not convinced it looks any more native than a Qt app on OS X could look if work was concentrated on this goal. So as I said in my VirtualBox comment, while it is indeed a Qt deficiency that apps don't look native straight from the compiler, more work could be done by the developer to make them more native-like.
I've seen controls acting weirdly but only QPushButtons - that seems to be regarding the hit-test area - we've had to sub-class a lot of them on OS X to fix this by making the hit test area bigger which is annoying. There's also a pretty bad spacing/padding/layout issue as I alluded to in another comment.
I agree with the event stuff as well - definitely one of the biggest things which affects us is that sometimes mouse/keyboard events don't get sent to the app if the app isn't active or the mouse isn't over a particular window. Debugging through Qt, it seems that Qt never gets the event from the OS, so we have to install event filters for all windows and intercept them which is crap. We've also seen weird stuff like key release events being received before key down events - again debugging through Qt, it looks like an OS X issue in that that's the order they come in from the OS, but again, without a native version to compare to, it's difficult to tell or test.
Generally the crashing of our apps is due to graphics drivers or memory-allocation issues (we deal with huge amounts of memory), and I'm not aware of Qt itself being the cause of any crashes on OS X for our apps any more than on other platforms - though I guess without a native version to compare to, it's difficult to say. Qt is very rarely the cause of crashes in my experience, and I generally develop for Linux and OS X.
Somewhat off topic, but does anyone know how Adobe does their cross platform interface? It doesn't feel like native Mac OS UI, though I think it works pretty well.
In fact they do: Phil Schiller himself on 10th June at the WWDC keynote mentioned that we've ported Mari to OS X, and he was extremely impressed with it and the fact that we've ported it.
It's heavily Qt-based, and doesn't really follow any Apple design guidelines. But it's the best 3D texture painting app in the world for high-end work.