Seeing OpenTF on the CNCF website would be glorious. Terraform has become too ingrained in the cloud operating model; the CNCF is where it belongs. Maybe a Vault fork will join it someday.
Will HashiCorp remain relevant throughout this? Seeing a lot of parallels with Red Hat's recent mistake...
There are parallels, but it's not the same. Red Hat's projects are still all fully open source. And it's not clear it's a mistake yet. (I work for Red Hat, but this is my personal opinion.)
Also, FWIW, Red Hat's license policy (as implemented publicly through Fedora) disallows software under the Business Source License:
https://gitlab.com/fedora/legal/fedora-license-data/-/blob/m...
Red Hat has previously worked to eliminate product dependencies on 'source available' licenses, and we're currently having to do this with respect to the HashiCorp stuff.
Not sure how much detail you can provide, but I know RH products use Terraform under the covers for a few things (like in OpenShift). Are you removing this functionality because it's no longer FOSS, or because of fears around the BSL verbiage?
Since Red Hat is at the earliest stages of grappling with this issue and I can't speak for the teams involved, I don't think there's anything I can say, other than that our corporate policies on product licensing by default do not allow stuff under licenses like BUSL-1.1. The only case I am aware of offhand where Red Hat briefly had a 'source available' license in a product concerned some code that was transferred from IBM to Red Hat (the source-available component was third party with respect to both IBM and Red Hat; IBM does not, or at least did not, have the same restrictions on the use of such licenses that Red Hat has).
Just speaking personally, I'm happy to see this fork occurring and hope they succeed in joining CNCF.
For sure it will not update to BUSL-licensed versions of Terraform, as mentioned above, but I can't say whether it will stay on an older version, use OpenTF, use Ansible, or something else.
Well, they clearly alienated themselves from the community, or a significant part of it. I'm not sure if it's a mistake from a business perspective, but the early leaders of Red Hat were very careful to collaborate with the community.
I can say that the scientific computing community has been deeply affected by this move. They wanted to eliminate "The Freeloaders", but the collateral damage was enormous, and they either didn't see or don't want to see what they have done.
The thing is, the vast majority of these systems won't flock to Red Hat, and they won't continue to use CentOS either.
Yeah, a large portion of research computing standardized on Red Hat (see e.g. Scientific Linux). The stability is quite important when trying to run ancient/niche scientific code that sometimes only supports (or even only builds on) Red Hat, and when you have a large number of nodes that need to be essentially bug-for-bug identical you want the package churn and update cycle kept to an absolute minimum.
The licensing of real RHEL could never have made sense in the HPC space, and I'd be shocked if a meaningful number of deployments were moved onto paid RHEL now.
When I was a "sysadmin" in this space I always personally preferred Debian, which has similar longevity in its release cycle, but it could never gain much traction.
I hope that at this juncture the HPC community might rethink its investment in the RHAT ecosystem and give Debian a chance.
> a large portion of research computing standardized on Red Hat (see e.g. Scientific Linux). The stability is quite important when trying to run ancient/niche scientific code that sometimes only supports (or even only builds on) Red Hat
> I hope that at this juncture the HPC community might rethink its investment in the RHAT ecosystem and give Debian a chance.
Nowadays Nix and Guix are available, and they're fundamentally better fits for this problem. Traditional Linux package managers aren't really suited to scientific reproducibility or archival work. It would be much better to turn towards functional package management than to just swap in another distro.
> Nowadays Nix and Guix are available, and they're fundamentally better fits for this problem.
No, they're not. First of all, reproducible installation of HPC nodes is a solved problem (we have xCAT to boot and provision nodes, for example). However, the biggest factor is the hardware we use in these nodes.
An HPC node contains at least an InfiniBand interface, and you generally use the manufacturer-provided OFED stack for optimal performance, which requires a supported kernel.
I wasn't talking about NixOS or GuixSD, or OS installation.
I was talking about tools that let you run ancient software on modern distros. With Nix and Guix, you don't have to hold your whole OS back just to run 'ancient software'.
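To make that concrete, here's roughly what that looks like with Nix (a minimal sketch only; the pinned nixpkgs snapshot and the packages are placeholder assumptions, not a recommendation):

    # shell.nix -- hypothetical sketch: pin an old nixpkgs snapshot so the
    # toolchain never drifts, while the host OS stays fully up to date
    let
      pinned = import (fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/nixos-21.05.tar.gz") { };
    in
    pinned.mkShell {
      # whatever the ancient code needs to build and run (placeholders)
      buildInputs = [ pinned.gcc pinned.gfortran ];
    }

Running nix-shell in that directory then drops you into an environment with the old compilers, no matter how new the host distro is. Guix can do the equivalent with pinned channels and guix time-machine.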
Well, if my memory serves right, this investment started with Red Hat's support for CERN's Scientific Linux and snowballed from there.
Then this snowball was solidified by the hardware we use, namely InfiniBand and GPUs, plus the filesystems we use (Lustre or IBM's GPFS), which require specific operating systems and kernels to work the way they should.
It's not as simple as "Meh, I like Debian more, let's switch."
While I strictly use Debian on my personal systems, we can't do that on our clusters.
Red Hat is also strictly against copyright assignment agreements in general, and keeps many of its projects under the GPL, so few Red Hat projects could realistically be relicensed like this to begin with.
IBM probably disagrees, and as much as people expected Red Hat to show IBM how to work, I think history is repeating itself and things are happening as they always have.
I understand why it's tempting to buy into this narrative but it is just not the case.
Aside from the fact that IBM had no involvement in the recent decision relating to git.centos.org (if I remember correctly, IBM found out about it when it was publicly announced), IBM has had basically zero influence on any aspect of Red Hat's open source development or its project and product licensing policies.
On the other hand, Red Hat has had some limited influence on IBM's open source practices. For example, IBM has moved away from using CLAs for its open source projects, I believe mainly out of a desire to follow Red Hat's example. I'm not aware of any use of copyright assignment by IBM projects.
Your comment dances around the point so avidly that it's incomprehensible to me. How have things been happening, and why would they happen now, at Red Hat?
Allow me to spell it out: if IBM could guarantee themselves maintained or growing market share in the short term while simultaneously clamping down on licenses that are anything but closed-source, they would. IBM didn't buy Red Hat because they think it's doing things the "right way". They bought Red Hat because they thought they could make money with it.
Fully open source in the strictest possible sense, but with the added caveat that if you choose to exercise your rights under the GPL you'll no longer be able to do business with Red Hat [0]. I personally wouldn't categorize Red Hat's current position as compatible with the ethos of FOSS.
As with Sun in the old days: good luck actually collaborating as you would in a healthy open source project, but the license doesn't specify there should be a community around anything, so it's all good.