I respect the maintainer's decision, but I don't understand the justification.
> but when it was communicated with Mockito I perceived it as "Mockito is holding the JVM ecosystem back by using dynamic attachment, please switch immediately and figure it out on your own".
Who did the communication? Why is dynamic attachment through a flag a problem, and what was the solution? Why is "enable a flag when running tests" not a satisfactory solution? Why do you even need a _dynamic_ agent; don't you know ahead of time exactly what agent you need when using Mockito?
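For reference, the "flag" route can look like this on current JDKs (jar paths are illustrative; JDK 21+ warns about dynamic agent attachment and offers an explicit opt-in flag):

```shell
# Option 1: explicitly allow dynamic attachment at test time (JDK 21+)
java -XX:+EnableDynamicAgentLoading -jar run-tests.jar

# Option 2: load the mocking agent statically up front (jar path illustrative)
java -javaagent:libs/mockito-core.jar -jar run-tests.jar
```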
> While I fully understand the reasons that developers enjoy the feature richness of Kotlin as a programming language, its underlying implementation has significant downsides for projects like Mockito. Quite frankly, it's not fun to deal with.
Why support Kotlin in the first place? If it's a pain to deal with, perhaps the Kotlin user base is better served by a Kotlin-specific mocking framework, maintained by people who enjoy working on those Kotlin-specific code paths?
Mockito was indeed a poor fit for Kotlin; MockK is the right tool there. Except, I suppose, for shops with projects that mix Java and Kotlin and already have Mockito tests.
Some complexities are discovered along the way; people don't know everything when they start.
They could also have dropped the support after some time, but that would have created another set of problems for the adoption and trustworthiness of the project.
I'm curious, what exactly feels bloated about Java? I don't feel like the Java language or runtime are particularly bloated, so I'm guessing you're referring to some practices/principles that you often see around Java software?
Whatever efficiency may hypothetically be possible with Java, you can in fact spot a real-world Java program in the wild by looking for the thing taking up 10x the memory it seems like it should need… when idle.
Yes, yes, I'm sure there are exceptions somewhere, but for 25-ish years I've been reading Java fans using benchmarks to try to convince me that I can't tell which programs on my computer are Java just by looking for the weirdly slow ones, when in fact I very much could.
Java programs have a feel and it’s “stuttery resource hog”. Whatever may be possible with the platform, that’s the real-world experience.
I held the same view as you when I was 22, more than 15 years ago.
With over 15 years of professional experience since then, my perspective has shifted: Java demonstrates its strength when stability, performance, and scalability are required (e.g. bloody enterprise).
A common misconception comes from superficial benchmarking. Many focus solely on memory consumption, which often provides a distorted picture of actual system efficiency.
I can point to EU-scale platforms that have reliably served over 100 million users for more than a decade without significant issues. The bottleneck is rarely the language itself, it is the depth of the team’s experience.
> Many focus solely on memory consumption, which often provides a distorted picture of actual system efficiency.
When other languages can do the same thing with an order of magnitude less RAM, any other efficiencies in the system tend to be overshadowed by that, and it becomes the sticking point in people's memories.
You may argue that holding on to this extra memory makes subsequent calls and reads quicker etc, but in my experience generally people are willing to sacrifice milliseconds to gain gigabytes of memory.
Node is a notable exception. Compared to Java, Node is a hellhole: the standard library is non-existent, most libraries are a buggy mess, the build system is horrible... in fact there is no reliable build system that solves all your typical problems in one tool. The list goes on.
The JVM eats a chunk of memory in order to make its garbage collector more efficient. Think of it like Linux's page cache.
I haven't worked with too much Java, but I suspect that the distaste many have for it is due to its wide adoption by large organizations and the obfuscating "dressed up" tendency of the coding idioms used in large organizations.
The runtime isn't inherently slow, but maybe it's easier to write slow programs in Java.
Technically kind of true, but at the same time Android apps are predominantly Java/Kotlin. It speaks more to Java just having a bad desktop story. But it's also why Android devices need 2x the RAM.
For typical backend situations, reference counting has a crazy high throughput overhead: atomic inc/decs left and right that instantly thrash any kind of cache, running in the mutator thread that should be doing the actual work, all for the negligible benefit of using less memory. Meanwhile a tracing GC can do (almost) all its work in another thread, not slowing down the actually important business task, and with generational GCs cleaning up is basically a no-op (just marking that a region can now be reused).
It's a tradeoff as everything in IT.
Also, iPhone CPUs are always a generation ahead of any Android CPU, if not more. So it's not really an apples-to-apples comparison.
That would be a compelling counter if and only if languages like Java actually beat other languages in throughput. In practice that doesn’t seem to be the case and the reasons for that seem to be:
* languages like C++ and Rust simply don't allocate as much as Java, instead using value types. Even C# is better here, with value types more deeply integrated.
* languages like C++ and Rust do not force atomic reference counting. Rust even offers non-atomic ref counting in the standard library. You also only need to atomic increment / decrement when ownership is being transferred to a thread - that isn't quite as common, depending on the structure of your code. Even Swift doesn't do too badly here, because the compiler can often prove it is permitted to elide the reference counting altogether, and the language offers escape hatches of data types that don't need it.
* C++, Rust, and Swift can access lower-level capabilities (e.g. SIMD and atomics) that let them get significantly higher throughput.
* Java’s memory model implies and requires the JVM to insert atomic accesses all over the place you wouldn’t expect (eg reading an integer field of a class is an atomic read and writing it is an atomic write). This is going to absolutely swamp any advantage of the GC. Additionally, a lot of Java code declares methods synchronized which requires taking a “global” lock on the object which is expensive and pessimistic for performance as compared with the fine-grained access other languages offer.
* there's lots of research into ways of offering atomic reference counts more cheaply (called biased RC), which can safely avoid an atomic operation in some places, completely transparently, provided the conditions are met.
I’ve yet to see a Java program that actually gets higher throughput than Rust so the theoretical performance advantage you claim doesn’t appear to manifest in practice.
Of course with manual memory management you may be able to write more efficient programs, though it is not a given, and comes at the price of a more complicated and less flexible programming model. At least with Rust, it is actually memory safe, unlike c++.
- ref counting still has worse throughput than a tracing GC, even when it is single-threaded and doesn't have to use atomic instructions. This may or may not matter in practice, especially when it's used as rarely as in typical C++/Rust programs.
> You also only need to atomic increment / decrement when ownership is being transferred to a thread
Java can also do on-stack replacement.. sometimes.
- regarding lower-level capabilities, Java does have an experimental Vector API for SIMD. Atomics are readily available in the language.
- Java's memory model only requires 32-bit writes to be "atomic" (though in actuality the only requirement is not to tear - there is no happens-before relation in the general case, and that's what would be expensive), and in practice 64-bit writes are also atomic; both are free on modern hardware. Field access is no different from what Rust or C++ does, AFAIK, in the general case. And `synchronized` is only used when needed - it's just syntactic convenience. This depends on the algorithm at hand; there is no difference between the same algorithm written in Rust/C++ vs Java from this perspective. If it's lockless, it will be lockless in Java as well. If it's not, then all of them will have to add a lock.
The point is not that manual memory management can't be faster/more efficient. It's that it is not free: it comes with non-trivial extra effort on the developer's side, which is not even a one-time thing, but applies for the lifetime of the program.
> ref counting still has worse throughput than a tracing GC, even if it is single-threaded, and doesn't have to use atomic instructions. This may or may not matter, I'm not claiming it's worse, especially when used very rarely as is the case with typical c++/rust programs.
That's a bold claim that doesn't seem to be true in my experience. Your 5 GHz CPU can probably do ~20 billion non-atomic reference adjustments per second, whereas your GC system has to have atomics all over the place or it won't work, and atomics have parasitic effects on unrelated code due to bus locks and whatnot.
> Java can also do on-stack replacement.. sometimes
That’s not what this is. It’s called hybrid RC and it applies always provided you follow the rules.
> The point is not that manual memory can't be faster/more efficient. It's that it is not free, and comes at a non-trivial extra effort on developers side, which is not even a one-time thing, but applies for the lifetime of the program.
The argument here is not about developer productivity - the specific claim is that the Java GC lets you write higher throughput code than you would get with Rust or C++. That just isn’t true so you end up sacrificing throughput AND latency AND peak memory usage. You may not care and are fine with that tradeoff, but claiming you’re not making that tradeoff is not based on the facts.
> the specific claim is that the Java GC lets you write higher throughput code than you would get with Rust or C++
No, that has never been the specific claim - you can always write more efficient code with manual memory management, given enough time, effort and skill. I wasn't even the one who brought up C++ and Rust. I literally wrote this twice in my comment.
What I'm talking about is reference counting as a GC technique vs tracing as a GC technique, all else being equal - it would be idiotic to compare these two if no other "variable" is fixed. (Oh, and I didn't even mention the circular-references problem, which means you basically have to add a tracing step either way, unless you restrict your language so that it can't express circular structures.)
As for the atomic part, sure, if all it did were non-atomic increments, then CPUs would be plenty happy. And you are right that depending on how the tracing GC is implemented, it will have a few atomic instructions. What you may miss is how often each runs: on almost every access, vs every once in a while on a human timescale. Your OS scheduler will also occasionally trash the performance of your thread. But this really is an apples-to-oranges comparison, and both techniques can do plenty of tweaks to hide certain tradeoffs, at the price of something else.
And I also mention that the above triad of time, skill and effort is not a given and is definitely not free.
In Rust there’s no forcing of any specific garbage collection mechanism. You’re free to do whatever and there’s many high performance crates to let you accomplish this. Even in Swift this is opt-in.
As for “skill” this is one thing that’s super hard to optimize for. All I can do is point to existence proofs that there’s no mainstream operating system, browser or other piece of high performance code written in Java and it’s all primarily C/C++ with some assembly with Rust starting to take over the C/C++ bits. And at the point where you’re relegating Java to being “business” logic, there’s plenty of languages that are better suited for that in terms of ergonomics.
Sure, but I think that people often fall into the trap of imagining a problem that nicely fits a RAII model, where each lifetime is statically knowable. This is either due to having a specific problem, or because we decided on a specific constraint.
Java is used in HFT (well, there are two types of "high frequency": one where general-purpose CPUs are already too slow, and there it obviously doesn't apply, but neither does Rust or C++) - but sure, I wouldn't write a runtime or other piece of code in Java where absolute control over the hardware is required. That's a small niche, though. What about large distributed systems/algorithms? Why is Java over-represented in that niche (e.g. Kafka, Elasticsearch, etc.)?
> And at the point where you’re relegating Java to being “business” logic, there’s plenty of languages that are better suited for that in terms of ergonomics.
> Java’s memory model implies and requires the JVM to insert atomic accesses all over the place you wouldn’t expect (eg reading an integer field of a class is an atomic read and writing it is an atomic write).
AFAIK that doesn’t really happen. They won’t insert atomic accesses anywhere on real hardware because the cpu is capable of doing that atomically anyway.
> Additionally, a lot of Java code declares methods synchronized which requires taking a “global” lock on the object which is expensive and pessimistic for performance as compared with the fine-grained access other languages offer.
What does this have to do with anything? Concurrency requires locks. Arc<T> is a global lock on references. “A lot” of Java objects don’t use synchronized. I’d even bet that 95-99% of them don’t.
> Concurrency requires locks. Arc<T> is a global lock on references
Concurrency does not require locks. There’s entire classes of lock free and wait free algorithms. Arc<T> is also not a lock - it uses atomics to manage the reference counts and no operation on an Arc needs to wait on a lock (it is a lock-free container).
> “A lot” of Java objects don’t use synchronized. I’d even bet that 95-99% of them don’t.
Almost all objects that are used in a concurrent context will likely feature synchronized, at least historically. That's why Hashtable was split into HashMap (unsynchronized) and ConcurrentHashMap (no longer using synchronized). That's why you have StringBuffer, which was redone into StringBuilder.
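For what it's worth, "concurrency without locks" is easy to demonstrate in Java itself. Here is a Treiber stack, a classic lock-free structure built on compare-and-set; no `synchronized` anywhere, and contended operations simply retry:

```java
import java.util.concurrent.atomic.AtomicReference;

// A Treiber stack: a classic lock-free data structure built on CAS.
// No locks; a contended push/pop just retries its compareAndSet.
class LockFreeStack<T> {
    private record Node<E>(E value, Node<E> next) {}

    private final AtomicReference<Node<T>> head = new AtomicReference<>();

    public void push(T value) {
        Node<T> oldHead;
        do {
            oldHead = head.get();
        } while (!head.compareAndSet(oldHead, new Node<>(value, oldHead)));
    }

    public T pop() {
        Node<T> oldHead;
        do {
            oldHead = head.get();
            if (oldHead == null) return null; // stack is empty
        } while (!head.compareAndSet(oldHead, oldHead.next()));
        return oldHead.value();
    }
}
```

In Java the GC conveniently sidesteps the ABA problem that makes this trickier in manually managed languages.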
Ok, I misspoke on Arc because I was being hasty; but you're still being pedantic. Concurrency still requires locks: wait-free/lock-free algorithms can't cover the entirety of concurrency. Rust ships with plenty of locks in std::sync, and to implement a ConcurrentHashMap in Rust you would still need to lock. In fact, it doesn't even look like Rust supplies concurrent collections at all. So what are we even talking about here? This is still a far cry from "a lot of Java objects use global synchronized locks".
No, that’s an overly strong statement - concurrency doesn’t necessarily require locks even though they can be convenient to express it. You could have channels and queues to transfer data and ownership between threads. Not a lock in sight as queues and channels can be done lock free. The presence of locks in the Rust standard library says nothing other than it’s a very common concurrency tool, not that it’s absolutely required.
> and to implement a ConcurrentHashMap in Rust you would still need to lock
There are many ways to implement concurrency-safe hashmaps (if you explicitly need such a data structure as the synchronization mechanism) without locks. Notably RCU is such a mechanism (a really neat mechanism developed for the kernel, although not super user-friendly yet or common in userspace), and there are also generational-garbage techniques available (kind of similar to a tracing GC conceptually, but implemented just for a single data structure). A common and popular crate in Rust for this is DashMap, which doesn't use locks and is a concurrency-safe hashmap.
> A common and popular crate in Rust for this is DashMap which doesn’t use locks and is a concurrency safe hashmap.
Still not in the standard library. The only way with std Rust is to use a global lock around the map, which seems worse than the situation in Java. You could implement the same thing or use a third-party library in Java too. So your original point of "everything uses a global lock" is "overly strong" as well.
You’ve now degraded the conversation into a very very weird direction. You made a claim that concurrency required locks. It simply does not and I have an existence proof of Dashmap as a hashmap that doesn’t have any locks anywhere.
The strengths and weaknesses of the standard library aren't relevant. But if we're going there, the reason they're not in the Rust standard library is that in practice concurrent data structures are likely an anti-pattern - putting a lock around a data structure doesn't suddenly solve higher-order race conditions, which is something a lot of Java programmers seem to believe because the standard library encourages this kind of thinking.
As for "my comment" about a "global lock" (your words, not mine), it's that the implicit lock available on every object is a bad idea for highly concurrent code (not to mention the implicit overhead that implies for every part of the object graph, regardless of whether it's needed anywhere). Don't get me wrong - Java made a valiant effort to define a solid memory model for concurrency when the field was still young. Many of the ideas didn't pan out and are antipatterns these days for high-performing code.

Of course, none of that pertains to the original point of the conversation: tracing GCs have significantly more overhead in practice because they're very difficult to make opt-in, and carry quite a penalty when they aren't. Rc/Arc is much better because it can be opt-in when you need shared ownership (which isn't always), and in practice reference cycles don't come up often enough to matter, and when they do there are still solutions. In other words, tracing GCs drop huge amounts of performance on the floor, and you can read all the comments to see the claims: "it's more efficient than Rc", or "performance is free", or even "it doesn't matter because the programmer is more efficient". I'd buy the efficiency argument when the only alternative was C/C++ with its serious memory-safety baggage, but not any of the others; and memory safety without sacrificing C++'s performance is, in my view, a solved problem with Rust.
It depends how you implement reference counting. In Rust the atomic inc-dec operations can be kept at a minimum (i.e. only for true changes in lifecycle ownership) because most accesses are validated at compile time by the borrow checker.
Is this an AI-generated answer? Most of these are not even true, although I still would prefer Go for micro-services. I'll address just a bunch and to be clear - I'm not even a big Java fan.
- Quarkus with GraalVM compiles your Java app to native code. There is no JIT or warm-up, and the memory footprint is also low. By the way, the JVM HotSpot JIT can actually make your Java app faster than your Go or Rust app in many cases [citation needed], exactly due to the hot-path optimizations it does.
- GC tuning - I don't even know who does this. Maybe Netflix or some trading shops? Almost no one does this nowadays, and with the new JVM ZGC [0] coming up, nobody will need to.
> You can’t ship a minimal standalone binary without pulling in a JVM.
- You'd need a JRE actually, e.g. a 27 MB .msi for Windows. That's probably the easiest thing to install today, and if you do it via your package manager, you also get regular security fixes. Build tools like Gradle generate a fully ready-to-execute directory structure for your app. If you have the JRE on your system, it will run.
> Dependency management and classpath conflicts historically plagued Java
The keyword here is "historically". Please try Maven or Gradle today and enjoy the modern dependency management. It just works. I won't delve into Java 9 modules, but it's been ages since I last saw a classpath issue.
> J2EE
Is someone still using this? It's super easy to write a web app with Java + Javalin, for example. The Java library and framework ecosystem is super rich.
> “Write once, run anywhere” costs: The abstraction layers that make Java portable also add runtime weight and overhead.
Like I wrote above, the HotSpot JIT is actually doing the heavy lifting for you in real time. These claims are baseless without pointing to what "overhead" means in practice.
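For what it's worth, the native-compilation route mentioned above is a one-command affair once a GraalVM JDK is installed (jar and binary names are illustrative):

```shell
# Compile a runnable jar ahead-of-time into a standalone native binary.
# --no-fallback fails the build instead of silently embedding a JVM.
native-image --no-fallback -jar build/libs/app.jar app

./app   # starts in milliseconds: no JVM to boot, no JIT warm-up
```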
I believe Netflix has moved to ZGC with no tuning. Their default setup is to set the min/max heap to the same size, enable always-pre-touch, and use transparent huge pages [0]. GC tuning is something of the past. Once automatic heap sizing for ZGC and G1 lands, you won't even need to set the heap size [1][2]. They'll still use more RAM because of the VM and JIT, but the days of holding on to RAM when it isn't needed should be over.
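That setup reads roughly as the following JVM flags, assuming the description above is accurate (heap size illustrative):

```shell
# Fixed heap (min == max), pages touched up front, transparent huge pages, ZGC.
java -XX:+UseZGC -Xms4g -Xmx4g \
     -XX:+AlwaysPreTouch -XX:+UseTransparentHugePages \
     -jar service.jar
```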
> taking up 10x the memory it seems like it should need… when idle.
The JVM tends to hold onto memory in order to make things faster when it does wind up needing that memory for actual stuff. However, how much it holds on to, how the GC is set up, etc. are all tunable parameters. Further, if it's holding onto memory that's not being used, those pages are prime candidates to be paged out by the OS, which is effectively free.
Note that the memory problem you mentioned is not really a problem, in fact; it is just how managed memory works in Java. Just run System.gc() and you'll see what I'm talking about. The JVM reserves memory, which is what you see on the charts, but it is not necessarily used memory.
Well, you might want to read up on how OSes handle memory under the hood: virtual memory != physical memory, and Task Manager and similar tools can't show the real memory usage.
Nonetheless, tracing GCs do have some memory overhead in exchange for better throughput. This is basically the same concept as using a buffer.
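The reserved-vs-used distinction is easy to see from inside the JVM itself. This sketch (exact numbers will vary by collector and JVM version) prints the heap the JVM has reserved next to what is actually live:

```java
// Show the difference between heap the JVM has *reserved* (what memory
// charts usually display) and heap that is actually *in use*.
class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedBefore = rt.totalMemory() - rt.freeMemory();

        byte[][] junk = new byte[64][];
        for (int i = 0; i < junk.length; i++) junk[i] = new byte[1 << 20]; // ~64 MB live
        long usedAfter = rt.totalMemory() - rt.freeMemory();

        junk = null;
        System.gc(); // only a hint, but most collectors will reclaim the arrays here
        long usedAfterGc = rt.totalMemory() - rt.freeMemory();

        System.out.printf("used: %d -> %d -> %d MB, reserved: %d MB%n",
                usedBefore >> 20, usedAfter >> 20, usedAfterGc >> 20,
                rt.totalMemory() >> 20);
        // Tools that chart "memory" typically show the reserved number,
        // which can stay high even after the GC has freed the arrays.
    }
}
```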
-----
And can you tell which of these websites use Java from "the feel"? AWS cloud infra, a significant chunk of Google, Apple's backends, Alibaba, Netflix?
I’ve never seen another basic tech used to develop other programs that’s so consistently obvious from its high resource use and slowness, aside from the modern web platform (Chrome, as you put it). It was even more obvious back when we had slower machines, of course, but Java still stands out. It may be able to calculate digits of Pi in a tight loop about as fast as C, but real programs are bloated and slow.
5% of "whose server-side programming language we know"
From the website.
And 76% of those websites are PHP, which seems to mean... they can determine PHP more easily for a website (there are indeed a lot of WordPress sites, but not that many).
Right, so I'm assuming that since you are saying "half the web runs on Java", maybe you know more about what websites are using in their backend? Care to share where you are getting this information from?
That doesn't match my experience over the last 15 years working for 3 companies (one big enterprise, one medium-sized, and one startup).
Maybe I have been lucky, or the practice is more common in certain countries or ecosystems? Java has been a very productive language for me, and the code has been far from the forced pattern usage that I have read horror stories about.
Have you gotten to use loom/virtual threads? I’ve heard pretty interesting stuff about em, but haven’t really spent the time to get into it yet. It’s pretty exciting and tbh gives me an easy elevator pitch to JVM world for people outside of it
If you have a use-case where you currently allocate ~1K threads mostly waiting on I/O, switching to virtual threads is a one-liner ("Thread.ofVirtual()" instead of "Thread.ofPlatform()"). No more golang envy for sure.
Depending on how much memory is used by the thread stacks (presumably 512K-1M by default, allegedly 128K with Alpine base images), that's 500M-1G of memory usage saved right off the bat (thread stacks live outside the heap).
The migration from JDK17 to JDK21 was uneventful in production. The only issue is limited monitoring as a thread dump will not show most virtual threads and the micrometer metrics will not even collect the total number of active virtual threads. It's supposed to work better in JDK24.
The Spring Framework directly supports virtual threads with "spring.threads.virtual.enabled=true" but I haven't tried it to comment.
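For the curious, the one-liner nature really does hold up. A sketch on JDK 21+, spawning ten thousand blocking tasks on virtual threads, something that would exhaust platform threads long before this count:

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

// Run 10,000 "blocking I/O" tasks, one virtual thread per task.
// Virtual threads are cheap enough that this is a normal thing to do.
class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        AtomicInteger done = new AtomicInteger();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    try {
                        Thread.sleep(Duration.ofMillis(10)); // simulated blocking I/O
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                    done.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        if (done.get() != 10_000) throw new AssertionError("expected 10000, got " + done.get());
        System.out.println("completed " + done.get() + " blocking tasks");
    }
}
```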
But the perf is not reliable. If you want latency and throughput, idiomatic Rust will give you better properties. Interestingly, even Go, for some reason, has better latency guarantees I believe, even though its GC is worse than Java's.
There is not much point talking about throughput and latency in the abstract - they are very often opposing goals, you can make one better at the expense of the other.
Go's GC is tuned more for latency at the expense of throughput (not sure if it still applies, but Go would quite literally stop the "business" mutator threads when utilisation got high, to keep up with the load). Java's default GC is tuned for a more balanced approach, but it can deliver at very high congestion rates as well; plus there is a low-latency-focused GC with much better latency guarantees, trading off some throughput in a consistent manner, so you can choose what fits best. The reason Go might sometimes be more efficient than Java is simply value types - it doesn't create as much garbage, so it doesn't need as good a GC in certain settings.
Rust code can indeed be better at both metrics for a particular application, but it is not automatically true, e.g. if the requirements have funny lifetimes and you put a bunch of ARC's, then you might actually end up worse than a modern tracing GC could do. Also, future changes to the lifetimes may be more expensive (even though the compiler will guide you, you still have to make a lot of recursive changes all across the codebase, even if it might be a local change only in, say, Java), so for often changing requirements like most business software, it may not be the best choice (even though I absolutely love Rust).
The problem is that writing genuinely performant Java code requires that you drop most if not all of the niceties of writing Java. At that point, why write Java at all? Just find some other language that targets the JVM. But then you're already treading such DIY and frictionful waters that just adopting some other cross-platform language/runtime isn't the worst idea.
> The problem is that writing genuinely performant Java code requires that you drop most if not all of the niceties of writing Java
Such as? The only area where you have to "drop" features is high-frequency trading, where they often want to reach a steady-state for the trading interval with absolutely no allocations. But for HFT you would have to do serious tradeoffs for every language.
In my experience, vanilla Java is more than fine for almost every application - you might just benchmark your code and maybe swap an Integer list for an int array, but Java's GC is an absolute beast; you don't have to baby it at all.
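The int-array-over-Integer-list point is about boxing: a `List<Integer>` stores a separate heap object per element (each read unboxing it), while an `int[]` is one flat allocation. A small sketch:

```java
import java.util.ArrayList;
import java.util.List;

class SumDemo {
    // Primitive array: values stored inline, one flat allocation.
    static long sumPrimitive(int[] values) {
        long sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    // Boxed list: every element is a separate Integer object on the heap,
    // and each iteration unboxes it.
    static long sumBoxed(List<Integer> values) {
        long sum = 0;
        for (int v : values) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] primitives = new int[n];
        List<Integer> boxed = new ArrayList<>(n);
        for (int i = 0; i < n; i++) { primitives[i] = i; boxed.add(i); }
        long a = sumPrimitive(primitives), b = sumBoxed(boxed);
        if (a != b) throw new AssertionError();
        System.out.println("sum = " + a);
    }
}
```

Same result either way; the primitive version simply touches far less memory and creates no garbage.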
>The problem is that writing genuinely performant Java code requires that you drop most if not all of the niceties of writing Java. At that point, why write Java at all?
The reason is quite well known. Supporting multiple languages is a cost. If you only have to support one language, everything is simpler and cheaper.
With Java, you can write elegant code these days, rely on ZGC, not really worry too much about GC and get excellent performance with quick development cycles for most of your use cases. Then with the same language and often in the same repo (monorepo is great) you can write smarter code for your hot path in a GC free manner and get phenomenal performance.
And you get that with only having one build system, one CI pipeline, one deployment system, some amazing profiling and monitoring tooling, a bunch of shared utility code that you don't have to duplicate, and a lot more benefits.
That's the reason to choose Java.
Of course, if you're truly into HFT space, then they'll be writing in C, C++ or on FPGAs.
Depends on the program (especially the framework used) and the GC being used. I can write a Java program and set it up so that it runs faster than almost everything else. For example, in a serverless architecture where you need fast startup and small programs, you can choose __not__ to use a GC and run ephemeral Java scripts. It starts and finishes running faster than you can blink.
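One way to run Java with no GC at all is the (real) Epsilon collector: it allocates but never collects, so it only suits short-lived runs whose allocations fit in the heap (heap size and file name illustrative):

```shell
# Epsilon = no-op GC: zero collection overhead, program dies if the heap fills.
# Single-file source launching (java Foo.java) has worked since JDK 11.
java -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC -Xmx256m EphemeralScript.java
```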
None of what you said matches the reasons actually given: it WAS written in Java already [0], but they rewrote it all in Go explicitly because of its performance, concurrency, and single-binary distribution characteristics.
Those were enough technical advantages to abandon any thought of a production-grade version of k8s in Java.
> the anti patterns weren’t enough we also observe how Kubernetes has over 20 main() functions in a monolithic “build” directory. We learn how Kubernetes successfully made vendoring even more challenging than it already was, and discuss the pitfalls with this design. We look at what it would take to begin undoing the spaghetti code that is the various Kubernetes binaries built from github.com/kubernetes/kubernetes
It seems to me that perhaps it wasn't the language's fault but the authors'.
> Horrible dev experience, no decent clients/libs, complex pricing, weird scaling in/out mechanism, slow, it only works well for well defined use-cases.
Most of these arguments probably don't outweigh the benefits. If you're in need of a managed, highly-consistent, highly-scalable, distributed database, and you're already an AWS customer, what would you use instead?
Oracle Cloud only charges a fraction of what Google, Microsoft, and Amazon charge. Any idea how Oracle is able to keep the cost so low? Or are the others just inflating the price so customers don't move to the competitor? In that case Oracle deserves a shout-out for not applying these vendor lock-in practices.
Oracle probably still has really good margins on egress. With AWS/GCP/Azure the costs are absurd because for a lot of their customers egress is not a big cost during operation, but it makes moving data off cost-prohibitive. It's simply a vendor lock-in mechanism for them.
The margin on egress is insane. Oracle got into cloud late in the game and burned a lot of goodwill downmarket so they have to sacrifice that to play catchup
Well I'm sure there are some costs, but Google charging you an arm and a leg for traffic when they literally own multiple sea cables going around the world and a bazillion datacenters seems a bit sus...
ORMs could (and most do) provide some escape hatch, where you can write the query yourself and reuse the hydration layer, or reuse the query generator and customize the hydrator, or a combination. Or you can just bail out completely for the few performance critical queries.
Honestly, ORMs are just an abstraction. They come at a cost and they’re not a silver bullet, just like most abstractions. I believe the hate for ORMs in many cases is due to a lack of understanding/wrong expectations.
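The escape-hatch idea can be sketched with a toy query generator (all names here are invented, not any real ORM's API): the generated path covers the common case, and a raw-SQL path bails out for the few hot queries while still reusing the same execution layer:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

// A toy illustration of the ORM escape hatch (all names invented):
// select() is the generated path; rawSql() bails out to a hand-written
// query that would still flow through the same execution/hydration layer.
class TinyOrm {
    static String select(String table, Map<String, Object> where) {
        String clause = where.keySet().stream()
                .map(col -> col + " = ?")
                .collect(Collectors.joining(" AND "));
        return "SELECT * FROM " + table + (clause.isEmpty() ? "" : " WHERE " + clause);
    }

    // Escape hatch: skip the generator entirely for a performance-critical query.
    static String rawSql(String sql) {
        return sql;
    }

    public static void main(String[] args) {
        Map<String, Object> where = new LinkedHashMap<>();
        where.put("status", "ACTIVE");
        where.put("region", "EU");
        System.out.println(select("users", where));
        System.out.println(rawSql("SELECT u.* FROM users u JOIN orders o ON u.id = o.user_id /* hand-tuned */"));
    }
}
```

Real ORMs expose the same split, e.g. generated finders for the 95% case and a native-query API for the rest.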
I don't understand the negativity in this thread about Oracle and the pricing model. GraalVM is an amazing piece of technology that enables many new applications. Oracle has the courage to invest heavily in this research, provides a community edition for free, and asks a very reasonable fee for the enterprise version. The pricing model is admittedly a bit complex, but from what I've been told it is a fairly common pricing model in this industry. And if you use the enterprise version in a way that is not allowed, you risk facing the consequences; no surprises there.
You’re not sure why there’s negativity towards Oracle? That’s an easy one word answer: audits.
I don’t work at Oracle but have lots of customers who use their/your products and they all feel your sales staff and motion are best described as predatory.
I used to work at Oracle, and the penny pinching isn't any better on the inside.
It's a bit ridiculous that, even as Oracle employees using Oracle products, only one member of our team had access to a product's support pages, and you had to message him and ask for a PDF printout whenever you ran into an obscure issue.
+1 for paying for value. At the scale that companies like Facebook operate, 40% improvement on compute efficiency represents millions. I’m not really an Oracle fanboy, but definitely feel that value should be captured where it’s created, especially for enterprise use cases.
If at any given second over the course of a month I have somewhere between 500 and 2000 vcores' worth of AWS VMs running with Java SE, how do I figure out how much I owe Oracle?
Amazon, a competitor to Oracle, publicly provides enough information to figure this out.
After upgrading my MacBook Pro 13" (2017) to Big Sur in Dec 2020 I had a similar experience. I use a 4k monitor and everything got super slow, fans started spinning, processes got throttled, and I could barely get any work done. I later did a fresh install of Catalina and everything was fast again.
I never did proper benchmarking, but my feeling was that resolution played a role. When using 1080p things were fast, but the resolution is unusable on a 27" 4k display. When using 2160p things were fast, but too small for my eyes. Any resolution in between (this implies things are being scaled?) was sluggish.
Note that this is a 13" model, so the problem does not seem to be restricted to the 16" model that the author is talking about.
This is funny, because I have the complete opposite experience. Yes, it took me an afternoon to find my way around Gradle. But ever since, I really appreciate their documentation. It is to the point, well written, contains code samples (which I believe are unit-tested, so the code samples are always up-to-date), contains examples in both Groovy & Kotlin, and I could keep going on. A good entry point, for example, is Build Script Basics (https://docs.gradle.org/current/userguide/tutorial_using_tas...).
> But I am yet to encounter a developer who actually learned Gradle inside out just from using it and reading the documentation.
One thing that keeps disappointing me is the utter lack of reference-style (API) documentation.
Sure, the conceptual documents contain examples. But if the example isn’t clear enough, you’re completely on your own.
You are probably doing somewhat simpler stuff if that basic tutorial suffices for you. I should maybe have added that I do Android development with Gradle. There you don't so much start with an empty build file and write the tasks to compile your project; instead you have to navigate a pre-existing large build file and figure out how to correctly use the Google and Gradle interfaces to integrate with the Android build tools, such as the dreaded AGP (Android Gradle Plugin). Which, by the way, is actually mostly undocumented AFAIK!
I know nothing about Android development with Gradle (or Android development in general, for that matter). I feel that is where a lot of negativity in this thread comes from. But then it's not per se Gradle that's to blame; it's this specific use of Gradle with Android which apparently gives people a poor experience (e.g. lacking documentation).
> You are probably doing a bit simpler stuff, if that basic tutorial suffices you.
I've written a couple of Gradle plugins; one of them builds and tests programs in our custom DSL. This exposed me to all facets of Gradle, because whatever Gradle offers to compile Java, Kotlin, ... you will most likely use for another language as well. I could not have written these plugins without the Gradle documentation. But again, this is totally unrelated to Android development.
The problem is that not all developers have such a nice experience. Maybe the reason is underlying assumptions: they can really help in learning, or become a really big obstacle if one isn't aware of them. One way or another, my several attempts to grok Gradle from existing scripts and documentation have still failed to produce understanding.
The problem is that it is super easy to modify a Gradle build script, but also super hard to get it right (e.g. understanding the configuration vs. execution phase, the concept of configurations, task dependencies). As long as you don't touch the build script, Gradle is superior to Maven in _so_ many ways.
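The configuration vs. execution distinction is exactly the kind of thing that trips people up. A minimal Kotlin DSL sketch (task name hypothetical) of how the two phases differ:

```kotlin
// build.gradle.kts — hypothetical task illustrating Gradle's two phases.
tasks.register("greet") {
    // Configuration phase: this code runs whenever the task is configured,
    // even if "greet" is never executed. Expensive work here slows every build.
    println("Configuring greet")

    doLast {
        // Execution phase: this code runs only when the user actually
        // invokes `gradle greet`.
        println("Hello from the execution phase")
    }
}
```

A classic mistake is putting real work (file I/O, shelling out) at the configuration level instead of inside `doLast`/`doFirst` or a `@TaskAction`, which makes every build pay for it.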
Yes, it has what I would describe as a cliff learning curve, and I think that is its biggest drawback. One either understands Gradle or one doesn't, and that is pretty far from ideal.