I remember times in the '90s when we planned software systems in a UML-powered tool called Rational Rose. Oh my god, it was a clumsy and slow process. But yes, sequence diagrams are a very useful tool.
It may be a dumb question, but is there any realistic use case for exploiting this vulnerability to reveal SHA-3-hashed secrets? Or is it just that an attacker can crash systems with suitable input?
I’ve shown how this vulnerability in XKCP can be used to violate the cryptographic properties of the hash function to create preimages, second preimages, and collisions. I’ve also shown how a specially constructed file can result in arbitrary code execution, and how the vulnerability can impact signature verification algorithms such as Ed448 that require the use of SHA-3. The details of these attacks will be made public at a later date.
The code for the functions is vendor-agnostic. Vendor lock-in comes from the integrations that the code running in the cloud ends up consuming, and from the developer experience one acquires. The nature of cloud development is that one invariably becomes an expert in a particular cloud or stack, and that's the real lock-in / why it's expensive to move in practice.
Exactly… the projection that datacenters' demand for computing power won't be offset by silicon efficiency improvements is somewhat speculative; the projection that Bitcoin's demand for computing power won't be offset by silicon efficiency improvements is obvious and is by design of the fucking system.
bitcoin mining provides an opportunity for anyone, anywhere to convert their computing resources into value. the competitive pressure is to have more efficient energy usage than your competitors
so in reality, bitcoin provides a direct incentive for more efficient computing (the kind that can't be handled just by building smaller, more efficient processors). it might do more to boost computational efficiency (and reduce emissions) than all the millions of pages of blogger platitude-ing flooding the internet
I'm sorry, what? Bitcoin mining has spawned an entirely new industry focused solely on converting a scarce shared resource (electrical energy) into perceived personal profits. It is wasteful pollution on a global scale: a single bitcoin transaction consumes as much energy as an average U.S. household does in 77 days (https://digiconomist.net/bitcoin-energy-consumption)
It's a perfect example of the tragedy of the commons. The energy already wasted on bitcoin mining will never be recouped by the efficiency gains you so eagerly (and without merit) ascribe to bitcoin.
So when bitcoin hits its scheduled maximum supply and can't be mined anymore, is the point moot?
I guess the question is: how much of a total energy bill will it have racked up before we get to that point, and should we consider the likely improvements to computing efficiency when we start speculating about the long-term impact?
"converting a scarce shared resource (electrical energy) into perceived personal profits" (1) is this entirely new?, and (2) it doesn't just exist for increasing profits, it provides a service: a digital cash infrastructure
Absolute bullshit. Bitcoin does not even use processors for mining; it uses massively specialised hardware that is absolutely useless for anything else. No advances in bitcoin mining will translate into gains in anything else.
It also massively increases emissions and does nothing to decrease them. The difficulty adjustment algorithm of bitcoin will ensure that you always have to keep wasting more resources to mine, and will cancel out any gains you ever make. By design. Bitcoin is explicitly built to waste resources.
by specialized hardware, you mean GPUs? which are extremely useful for graphics, machine learning, and large-scale mathematical calculations? the gains to society that GPUs brought can't be overstated. how do you think they run the climate-projection models? [1]
and my argument is about second-order effects. even if bitcoin mining remains inefficient, the technological gains from directly incentivising lower energy usage than your competitors carry over to other markets / enterprises
plus, bitcoin is scheduled to run out. so eventually the mining issue is moot
a second-order prediction is obviously more noisy and holds less weight without any direct evidence, but 'lack of direct evidence' characterizes much of this debate
That hasn't been the case for most of the decade. I already told you what they use instead: completely custom-built hardware that can do nothing but bitcoin mining, such as Antminers.
alright, in that case you're right, the processors seem wasteful
though interestingly, the specialized processors also seem like an example of a second-order effect of bitcoin: driving innovation towards more energy efficiency. (though the tradeoff here seems to be wasting materials versus wasting energy)
Harping on about how individuals can reduce energy usage in computing but not talking about Bitcoin is directly comparable to pushing lame individual recycling efforts but ignoring coal power plants and industrial scale methane leaks.
The good thing about Bitcoin is that we can just ban it, since nobody actually uses it for anything important. It could be turned off tomorrow and nothing of value would be lost.
except for all the goods and services that it currently stands in as a reference for (given that people had to buy it with their own money, which they provided some service to obtain)
but hey, just because bitcoin wasn't directly involved in a transaction doesn't mean it isn't standing in for economic value
if I steal an ancient coin worth $10,000 from you, and you're upset about it, am I making a good rebuttal by saying: "it's not like you could buy anything at the grocery store with it anyways"?
We should reduce pollution, not panic about the weather. The seas are full of plastic and organic waste, and so is dry land; rainforests are burned down to make more farmland. I don't claim that humans don't affect the climate, but it is extremely difficult to prove statistically when the data we have covers something like 100 years out of the Earth's roughly 4.5 billion years of existence. How can you take all factors into account in that kind of statistical analysis?
I developed software professionally for over 20 years, and I get the impression that I would not have survived any of these interviews. I remember the times in the '90s when I had most of the Java classes and methods in my "working memory", but then came syntax helpers and search engines. That ruined the whole concept of programming with just a text editor. I don't know if anyone else has had similar experiences.
> That ruined the whole concept of programming with just a text editor.
What "ruined" it for you, "made" it for me :)
I absolutely hated my CS courses in undergrad and dropped the major after 2 classes. Didn't get back around to programming until a few years later - using Google makes it way more doable.
Actually, I agree with you, as I would not be able to program much without Google today. But many people here reported interview situations where they had a blank sheet of paper and a task to develop something. That is something they never face in real work.
Yeah, I'm confused too. It's because Java 9 officially came out in September 2017, and they've been releasing very quickly since then but have completely ditched semantic versioning.
Java 8 → Java 11 → Java 17. Those are the LTS releases. If you want to follow a release cadence similar to Java 5 → Java 6 → Java 7 → Java 8, that's what you'll track.
The intermediate releases can be used to test against, but adopting them too soon often means tackling lots of issues in your dependencies. And updating your application servers to a new major Java version every six months is not something you can do easily coming from older Java versions. It's not impossible, but it may require a different deployment strategy and may not benefit you much over going from LTS to LTS.
I think there is also an unspoken feeling that the LTS releases are more stable, which again may be due to the libraries you depend on not always being as well tested on the intermediate releases. A lot of Java developers burned their fingers on Java 9 and 10 and are now sticking to the LTS releases, but that is just my gut feeling.
I struggle to imagine any meaningful definition for LTS that includes "tracking latest" except by coincidence (during the window of time when the most recent release is an LTS release).
If you're doing green-field JVM development and NOT using big frameworks like Spring or an app server, it is pretty safe to use the intermediate releases.
... if you're willing and able to update your entire app to a new JDK the month it comes out.
This is where I hope Java adopts the .NET support lifecycle. .NET also has (relatively) fast-paced releases now, once per year with LTS every other year, but the prior non-LTS is supported for 3 months after the next one comes out. That's still pretty aggressive, but it's very doable with a modern app to upgrade within 3 months.
With Java, JDK 15 support ends this month, the same month that JDK 16 comes out. Not just that, but we're halfway through the month already. So if you're on JDK 15, you now have only two weeks to upgrade and still be supported, according to their official schedule.
Edit: I may have been too generous in my reading of their support roadmap. It seems like the previous non-LTS version is immediately out of support as soon as the new one "supersedes" it.
Uh, probably just my narrow corporate world view but... is there any serious development in java NOT using Spring?
Time for me to learn so let me rephrase this question!
Those who use Java without Java EE or the Spring framework: what are your go-to libraries? What are you using for DI / REST / ORM / auth, etc.?
> is there any serious development in java NOT using Spring?
Yes. And I don't just mean people doing Android development.
1) DI: Don't always need DI. For things like injecting values from config, it's really unnecessary given that we have very good control over production, down to building our boxes from a playbook. For things like mocks, plain old Java suffices -- you can roll your own (actually, someone on your team just needs to do it once and it keeps getting improved/forked; see the sketch at the end of this comment). Guice exists as a fallback.
2) REST: Jersey plus our own libraries on top, and Retrofit. Using Spring for REST is probably the poorest use of Spring, in my view.
3) ORM: you can use "just" JPA. Don't always need an ORM either -- the strategy varies depending on the requirement. Although for some projects I have used Spring Data JPA because it seemed like the easiest path. At least Spring was localized to that artifact.
4) Auth: Lots of good OAuth2 libraries, as well as things like pac4j. If you're wondering about Spring Security, it's definitely one to consider if you're doing lots of webapps in Java. But in a polyglot environment, a lot of Java code just handles APIs while Node handles the UI.
Anyhow, this isn't to hate on Spring or anything -- I use it if it adds value. This is more about not taking the complexity hit if you can help it. It also makes the code a bit more straightforward and readable.
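Just to make the "roll your own" point concrete, here's a minimal sketch of plain constructor injection -- no framework involved, and the Clock/ReportService names are purely hypothetical, made up for illustration:

```java
// Plain constructor injection: dependencies and config values are passed in
// explicitly, and tests can pass a fake Clock instead of a real one.
interface Clock {
    long now();
}

final class SystemClock implements Clock {
    public long now() { return System.currentTimeMillis(); }
}

// Hypothetical service that declares everything it needs in its constructor.
final class ReportService {
    private final Clock clock;
    private final String outputDir; // a "config value" wired in by hand

    ReportService(Clock clock, String outputDir) {
        this.clock = clock;
        this.outputDir = outputDir;
    }

    String describe() {
        return "writing reports to " + outputDir + " at " + clock.now();
    }
}

public class Wiring {
    public static void main(String[] args) {
        // All the wiring lives in one place, in ordinary code.
        ReportService reports = new ReportService(new SystemClock(), "/tmp/reports");
        System.out.println(reports.describe());
    }
}
```

In tests you just pass a fake Clock and a temp directory; that covers a lot of what people reach for a DI container for.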
Different needs for different domains. For example, there's an old pack of machine learning code that I maintain, and it (and a bunch of other Java code I see from other institutions) does not use or need DI, ORM, auth, or proper REST. It typically handles processing of non-database data with little or no web interaction, and plain old objects are fine for that. If we started from scratch and chose Java instead of some other language, we would still not use Spring for it.
Google used Guice for DI (or sometimes Dagger 2), REST is its own thing, ORM is "there is no ORM, go write your Spanner queries", and auth is its own thing.
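For anyone who hasn't used Guice, the wiring is pretty small. A rough sketch -- GreetingService/Greeter are hypothetical names, not anything Google-specific:

```java
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Inject;
import com.google.inject.Injector;

// Hypothetical interface/implementation pair to illustrate a binding.
interface GreetingService {
    String greet(String name);
}

class ConsoleGreetingService implements GreetingService {
    public String greet(String name) { return "Hello, " + name; }
}

// The module declares which implementation backs which interface.
class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(GreetingService.class).to(ConsoleGreetingService.class);
    }
}

class Greeter {
    private final GreetingService service;

    @Inject
    Greeter(GreetingService service) { this.service = service; }

    String run() { return service.greet("world"); }
}

public class GuiceDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new AppModule());
        System.out.println(injector.getInstance(Greeter.class).run());
    }
}
```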
Note that blindly following LTS is not a great idea for many teams that deploy applications (as opposed to libraries), including teams working in large companies, if you have access to modern CI/CD tooling. Personally, I'd let production stay a little behind the latest release but always run a task on my build server to build and test my code on the latest JRE.
If you read Ron Pressler's comment on Reddit[1], the recommendation is:
> In any event, the default position should be to keep up with the current JDK as much as possible (it's OK to skip a version or two), and consider LTS only if there is some specific difficulty preventing you from doing so. As I said before, the "current JDK" upgrade path is designed to be the one that is overall the cheapest and easiest for the vast majority of users -- easier and cheaper than the old model.
For users who are happy with the current JDK language level, newer releases also have JRE improvements:
> ...those users should [also] prefer the current JDK, too, as that is the easiest, cheapest update path. Also, new feature releases contain performance and monitoring improvements (e.g. JFR, ZGC, AppCDS in the last few releases) that may be of interest even when users are not interested in new language/library features.
> The problem with LTS is that it's costly (costlier than the old model): on the one hand, the risk of a breaking change in an LTS patch is no lower than in a feature release, and on the other hand, the patches no longer contain many gradual implementation features (that you use without knowing). In addition, the OpenJDK development process revolves around feature releases without regard for LTS, so features can be removed without notice in an LTS release. This makes an LTS->LTS upgrade more costly than in the old model.
> LTS is designed and is advisable for companies that prefer a costly, but well-planned, three-yearly upgrade process.
> Even if you've carefully considered the options and decided that LTS is the right approach for you, you are strongly advised to test your app against the current JDK release to reduce the cost of an LTS->LTS upgrade.
> It's true that upgrading from 8 to post-9 can be non-trivial (9 was the last ever major release), but once you do that, you have an update path that ensures you'll not have a major upgrade ever again.
Java upgrades inside enterprises cost time and money; there's no sense in taking on that cost if you can help it, and many engineering teams with good DevOps can easily avoid it.
And what do you do if you "try it out" and some months later you identify a problem? Then you have to rewrite all the code that uses the new features to go back to the LTS version, and you'll likely have to downgrade or even swap out some of your dependencies. That's painful and expensive; there needs to be a very good reason to justify that risk.
You don't need to downgrade dependencies; libraries target 8 and 11 right now and support up to 16 (e.g., Jackson still targets Java 7, I think, and supports records from Java 16 -- see the sketch below).
What kind of problem could you identify? The same might happen with an LTS release (e.g., it happened to me on JDK 8: suddenly I wasn't able to use some crypto libs).
Fixes always land in the latest JDK first and are then backported to 11 and 8.
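To illustrate the records point: with Java 16 records and a reasonably recent Jackson (2.12+, if I remember right), JSON round-trips work without extra annotations. A toy sketch -- the Point record is made up:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class RecordJsonDemo {
    // Java 16 record; recent Jackson versions handle it like a regular POJO.
    record Point(int x, int y) {}

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        Point p = mapper.readValue("{\"x\":1,\"y\":2}", Point.class);
        System.out.println(p);                            // Point[x=1, y=2]
        System.out.println(mapper.writeValueAsString(p)); // {"x":1,"y":2}
    }
}
```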
If you're using JBoss/WildFly, the recommendation is to stick to the LTS. The last JDK supported is 13, since 14 removed some APIs and the team is still working to fix that.
Basically, there are six-month feature releases, but every five or six releases there's an LTS release. Most companies use the LTS releases. The next big LTS release is later this year.