As a side note, we've recently released OpenTofu 1.7 with end-to-end state encryption, enhanced provider-defined functions, and a bunch more[0].
If you've been holding out with the migration, now is the perfect moment to take another look, and join the many companies that have already migrated!
[0]: https://github.com/opentofu/opentofu/releases/tag/v1.7.0
Note: Tech Lead of the OpenTofu project, always happy to answer any questions
For example:
# current method
module "foo" {
  count = var.enable_foo ? 1 : 0
}

# better?
module "bar" {
  enabled = var.enable_bar
}
Preconditions and postconditions fail the apply run if their condition doesn't validate, so those can't be used. I'd also really like to be able to say in an output block, "this value doesn't have to exist; only display it if its parent module is enabled", again without the "count" attribute.
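For reference, the common workaround today is to splat the count-gated module and collapse the result - a sketch with hypothetical names, not code from this thread:

```hcl
# module.foo is gated with `count`, so its outputs come back as
# zero- or one-element lists. one() returns the single element,
# or null when the module is disabled.
output "foo_id" {
  value = one(module.foo[*].id)
}
```

This still leaks the "count" mechanics into every consumer, which is exactly the complaint above.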
There's a bunch of nontrivial technical complexity though, because of how OpenTofu currently works.
All of these "how can I do X in HCL" just go away.
This is why I dropped CDK for TF in favor of Pulumi. I do feel Terraform has more maturity and polish, but not enough to warrant its limitations.
[1]: https://developer.hashicorp.com/terraform/cdktf/concepts/fun...
But I'll take that annoyance any day over the absolute pain of HCL.
I haven't tried Pulumi, I will eventually. (Cloud services have TF providers more often than Pulumi ones. But if Pulumi is easy to extend, maybe doesn't matter.)
I understand why people were upset about the licensing changes, but I wasn't particularly bothered myself. Why should I switch?
OpenTofu is indeed a hard fork. When building similar features (like provider-defined functions) we try to stay compatible where it makes sense, but there are often some differences (like our more extensive provider-defined function capabilities[0]), and also new features in Terraform that we're not introducing - and vice versa.
You can check for known incompatibilities in our migration guides[1], based on your Terraform version. In practice, the longer you wait, the more the projects will diverge, so if you still want to "wait and see" I would suggest settling on your current Terraform version for now - otherwise, the migration will just be more work for you later.
Regarding the reasons for switching, I'd say features and community-first process. We're striving to be very community driven in what work we're prioritizing[2] and have received a lot of positive feedback over that from our users.
Some companies we've spoken to see adopting the open-source community-driven project as a way to reduce risk long-term. It's also a way to keep your options open if you're in the market for commercial Terraform/OpenTofu management systems.
[0]: https://github.com/opentofu/terraform-provider-go
In this case Oracle is the user, not the vendor.
Hm, with that logic they could dump MySQL in favor of MariaDB as well
1) Oh, I prefer the open source alternative for ideological reasons.
2) This software is not really worth that much.
3) Hectoring developers every single time to justify why their software should be preferred over unpaid alternatives.
4) Blaming companies: they are bigger users, so they should pay, not me.
If these entitled developers - who deserve all the money, while no one deserves theirs - would just shut the fuck up every once in a while, it would be a good thing.
This shouldn't be unexpected, and it's not an excuse to be dismissive toward an imagined hypocrite. I'm not saying there aren't hypocrites in this world, just that we shouldn't treat members of a community as some kind of superset of everything in that community.
Besides, I made an observation about people in the community, not the community itself - I did not say HN thinks software should not be paid for.
Can you link any specific HN user who holds any 3 of those specific beliefs, or was this hypocritical strawman purpose-built to bolster your argument?
Believing every employee at Walmart thinks the same is silly; while someone is to blame for policy, it's important not to blame retail clerks for store policy, for example.
Linux as a whole exists because developers said "fuck AT&T, we're taking this train off the rails" and nobody ever looked back since.
> Those companies are merely helping to “commoditize their complements”
That's how they justify it internally, yeah. From an administrative standpoint it's pretty obvious that they all chose Linux because it's easier than retrofitting proprietary UNIX for modern software. But indeed, they market it as goodwill and complementary development.
https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
What must be rejected is nonfree licenses like the BSL.
Specifically, the supposed inevitability of a BSL->OSI transition is dubious. If anything, there are examples of the opposite - Terraform itself being a prime one.
Sure! [1]
> The Business Source License requires the work to be relicensed to a "Change License" at the "Change Date". The "Change License" must be a "license which is compatible with GPL version 2.0 or later". The Change Date must be four years or sooner from the publication date of the work being licensed
So the Business Source License is less "non-OSI" and more "non-OSI for now, but irrevocably open source at a future date".
In the case of Terraform it says [2]:
>Change Date: Four years from the date the Licensed Work is published.
>Change License: MPL 2.0
So is this ideal? No. But it's better than OpenVMS screwing over historians and hobbyists [3] decades after its relevance has expired.[6]
It's also better than SSPL [4] which has no such transition and stays permanently non-OSI [5].
> the "BSL XP would be good for Wine" claims above.
Well Wine uses the LGPL, and Windows XP was released in 2001 so even if they set the expiry 20 years after release, it'd be GPL'd by now.
---
[1] https://en.wikipedia.org/wiki/Business_Source_License#Terms
[2] https://github.com/hashicorp/terraform/blob/main/LICENSE
[3] https://www.theregister.com/2024/04/09/vsi_prunes_hobbyist_p...
[4] https://en.wikipedia.org/wiki/Server_Side_Public_License
[5] https://web.archive.org/web/20230411163802/https://lists.ope...
[6] https://www.theregister.com/2013/06/10/openvms_death_notice/
A BSL project could say, hey, look at this guy stealing our code!, even if I’ve never seen it. I could have, and that opens a plausible risk I wish I didn’t have.
> A BSL project could say, hey, look at this guy stealing our code!, even if I’ve never seen it. I could have, and that opens a plausible risk I wish I didn’t have.
By that argument, you could have looked at Windows code too, since Windows source code has leaked multiple times, and 5 minutes of searching will find it.
Yes it is. Because companies (like Oracle) will take as much as they can and give as little as they can.
> free software is a gift freely given
It's a gift to the public, not to individuals and companies (like Oracle).
> Even public domain is ok
Even worse because that expressly allows companies (like Oracle) to take everything and give nothing.
If they didn’t accept that, they could have used a non-commercial license. If they expected contributions they could have sold a paid product.
I'd suggest not using others' hard work as the basis for your argument. If it was your work and you regret it, say that. If you don't like Oracle, say that. Otherwise, people who contribute to FOSS software do so knowingly, yet you are trying to inject your own opinions of "public" vs. whomever, as though you know better than those contributors' own feelings and intentions.
Which in the case of free software is a completely neutral fact that causes exactly zero negative impact to the project. You're trying to apply principles of scarcity to a product category that has no scarcity—replicating the bits to serve Oracle doesn't cost a maintainer anything at all.
They can prefer not to let Oracle use their otherwise-freely-provided software, but that's not a position that's as easy to get sympathy for as pretending there's harm done.
Yes, they will. So? Nobody is actually harmed by this. The software is still perfectly available for the public to make use of.
> It's a gift to the public, not to individuals and companies (like Oracle).
The public is not some separate entity from individuals or companies. It's simply the collective of all individuals and companies. So yes, when you gift something to the public it's a gift to Oracle as well. It's not exclusively to them, but they are a part.
Hey, what if they do - what we do to other companies ... to us ...
presses a red button
Oracle bought MySQL, which was forked into MariaDB. MariaDB created the Business Source License (BSL). HashiCorp switched Terraform to the BSL, which then led to Terraform being forked into OpenTofu. OpenTofu now seems to be getting adopted by Oracle.
If everyone acted like Oracle, there would be no MySQL users - which is the point being made.
Let's see:
If I give away a product because I think it's for the betterment of mankind - definitely not an attempt to rug-pull or anything like that, just developer goodwill -
and then I am offered a free service and refuse to use it for fear of a potential rug-pull, despite having a reputation for exactly that myself, while the project I'm considering has no such reputation -
then the pretense under which I "give away" my software is morally dubious. I would never permit myself to be in the same situation I need others to be in for my product to be successful.
MySQL/Virtualbox/Java etc;
I have no sympathy for oracle or their products but I fail to see the hypocrisy here. I think oracle is pretty consistent in their position over the years.
Clearly Oracle are aware of the danger that they themselves ask people to ignore.
(There may be other reasons to do so but that's not here nor there)
You are also committing the classic mistake of anthropomorphizing Larry Ellison.
Too often it's a failure. Too often it has some upsides but also turns out to be a LOT of work, discovered over time. Too often it's seen as good, BUT then some incompatible new version or alternative requires the whole debacle to start again.
I only want to learn technologies that will be relevant until the day I retire, otherwise I'm not advancing, it's all just a treadmill.
Yes, I too wish I could make a living programming in 65C02 assembly on my Apple //e like I did in 1986.
I also don’t see any reason I have to learn about S3 instead of storing all of my files on an on prem CDRW jukebox
S3 will also certainly be around in 20 years.
Programming in Java in 2024 is nothing like it was in the 1990s when it was first embedded in Netscape Navigator - yeah I played around with it back then.
When I was first using C and C++, I was writing Windows apps with MFC in 1999. Good luck if that's all you know in 2024.
I've been at this awhile. I started writing C and Fortran apps on DEC VAX and Stratus VOS mainframes in 1996.
My second job was part development and part managing Windows servers on prem running IIS and Classic ASP.
I got my first, only, and hopefully last job at BigTech in the cloud consulting department at 46 (full time role) consulting companies on all of the latest “serverless” goodness.
Either evolve or end up complaining on HN about “ageism”. When I got Amazoned at 49 last year, it took all of three weeks to have multiple offers. I’ll put my buzzword compliance against anyone of any age in my niches.
"TCP/IP" will be around, as will assembly language. I've programmed in assembly language on five different architectures, either professionally or as a hobby, but I haven't touched it since 2008. Jobs are at a higher level of abstraction these days.
I think of Terraform as a form of insurance - "oops, manual change" insurance. In the event that somebody breaks something in the console and you need to undo it, it's exponentially faster and easier. However, you have to pay premiums for this insurance, as well as a setup cost.
So is the insurance worth it? It depends on the org. I've seen small places with a small team that communicates well, where nobody screws around in the console with stuff they don't understand (and if they break it, they own it). So there absolutely are places where the time Terraform costs you (in learning, setup, extra PR time, waiting for Atlantis to finish, locks) is higher than the time it saves when you need it.
That is nonsense.
You just said, equivalently, "Terraform is all things to all people".
I'll pose a question to your snotty response - What specifically about terraform would lead to it failing to be implemented at a company? The answer to that will provide all you need.
Rather, Terraform does not add value within every organizational structure. Not adding value is failing. Having a negative ROI is failing.
None of these infrastructure tools are perfect, and the ways in which they are imperfect mean that some are better or worse matches for an organization's needs.
Therefore your initial statement is oversimplified, presumptuous, and ultimately nonsensical. A logical reframing is "if your organization does not match Terraform's strengths, then your org is the problem", and that is clearly not true.
But they suck differently, for different reasons, and they suck in different magnitudes in the hands of different teams, with different needs.
I have never met an org that was happy with their infrastructure tooling! But I have met some that were happier with some tools than with others.
It's horses for courses. Terraform is a contender for some use cases. Nothing more, nothing less.
We'll inevitably see other large companies follow suit - it was one thing when HashiCorp was an independent tech company, but it's very different when it's owned by a direct competitor.
Which is ironic given that OEL is a direct rip-off of RHEL which IBM also now owns.
IBM employees then initiated the fork of Vault, called OpenBao. Later, IBM bought HashiCorp. The fork might have just been an attempt at leverage in the negotiations, but it remains to be seen whether it will live on.
One of the things that has always really frustrated me about Terraform is that it seems to go out of its way to make you do things in a very annoying, inconsistent way. Part of this is necessary due to the nature of the provider ecosystem: you can't guarantee consistency across providers. I won't burden this post with my gripes about inconsistencies and annoyances within providers, such as the AWS provider.
Really though the interface has always been terrible (IMO). Stuff like iterating through a nested map using a for loop, which is trivial in most languages, is annoying and obtuse to the point of comedy. God help you if this map contains mixed types. Novices have trouble picking it up in general. It's very easy to start a project that sprawls completely out of control, and there doesn't seem to be a standard at all as to how to organize projects/code, so each terraform project I inherit is wildly different and has its own seemingly unique pain points.
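To make the complaint concrete, here is roughly what iterating a nested map looks like today - a minimal sketch with made-up names:

```hcl
locals {
  # hypothetical nested map: team -> user -> role
  teams = {
    platform = { alice = "admin", bob = "dev" }
    data     = { carol = "admin" }
  }

  # for_each needs a flat map, so the nesting has to be flattened
  # by hand and re-keyed with a synthetic "team/user" key
  memberships = {
    for m in flatten([
      for team, members in local.teams : [
        for user, role in members : {
          team = team
          user = user
          role = role
        }
      ]
    ]) : "${m.team}/${m.user}" => m
  }
}
```

A two-line loop in most general-purpose languages.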
A lot of this has gotten better over the years with QoL improvements within terraform itself - but really, as a developer, I've gotten more than a little tired about the hubris that Hashicorp shows with some of the stuff around the terraform ecosystem. Features that people beg for routinely get told by maintainers that they will not be doing that because reasons or because "it's not possible" (such as dynamic provider blocks). OpenTofu is already tackling many of the gripes and feature requests I've had over the years and are doing so eagerly and have some heavy hitters behind it.
Terraform is good, but it was always going to be vulnerable to competition - It's basically just a state-based wrapper around cloud API's. A great idea, but easy to duplicate. I don't know what they were thinking trying to put this behind a walled garden when they could have used it to get people into the hashicorp ecosystem and sell their other enterprise products.
I've been using terranix, which uses nix to generate a tf.json file, and oh my god is the experience night and day. I can make functions! I can refactor! And if it's a pure refactor, there is nothing to apply.
my process is roughly:
1. Comment out the resource in the module, run a plan, and get output like:
   "module.foo1.aws_resource.bar will be deleted"
2. Copy the resource in source to module.foo2.aws_resource.bar; the command becomes:
   terraform state mv module.foo1.aws_resource.bar module.foo2.aws_resource.bar
I guess this might be harder if you're using upstream "official" modules, but I avoid those like the plague.
Suddenly, just to refactor the source in a way that shouldn't touch any resources, you have to be able to mutate the Terraform state. (Or use the more recently introduced moved blocks, which are still quite a kludge.)
This means any kind of broadly sweeping refactor (which might impact many different state files) is really hard.
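For comparison, the moved-block version of that state surgery - reusing the placeholder names from the comment above - lives in the configuration itself:

```hcl
# On the next plan/apply, Terraform/OpenTofu updates the state
# address instead of destroying and recreating the resource.
moved {
  from = module.foo1.aws_resource.bar
  to   = module.foo2.aws_resource.bar
}
```

It avoids hand-editing state, but you still need one block per renamed resource, which is why sweeping refactors remain painful.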
Second that. One of my colleagues is working on adding proper tracing to the OpenTofu codebase, to help understand the exact cause of failures.
https://opensource.oracle.com/ (almost endless list)
But then they have taken FLOSS projects and abandoned them - see OOo, for instance:
https://en.wikipedia.org/wiki/Apache_OpenOffice#/media/File:...
https://blogs.oracle.com/linux/post/oracle-is-the-1-contribu...
XFS is really important for (their) database performance, so quite a lot comes out of Oracle for it. You might also know that btrfs began at Oracle.
https://www.google.com/search?q=oracle+blog+xfs
https://en.wikipedia.org/wiki/Btrfs
"Chris Mason, an engineer working on ReiserFS for SUSE at the time, joined Oracle later that year and began work on a new file system based on these B-trees."
I see this argument a lot, but I'm not sure how it detracts from their continued development. Oracle funds many engineers working on OSS and, despite having CLAs in place, retain the permissive license for most of them. In some cases they've acquired closed source software and made it open source (e.g., JRockit stuff). They're a major contributor to OSS.
It would be hubris if they tried to then take the moral high ground.
So to move forward with upgrading the Terraform support in their tool, Oracle had two choices: pay HashiCorp (soon IBM) a hefty license fee to resell Terraform, or use OpenTofu, which is free and has now proven well-run enough to ship a release with both Terraform compatibility and OpenTofu-specific enhancements - all while dodging lazy accusations of code theft from HashiCorp.
This is a no-brainer for Oracle, and it’s great news for the future of OpenTofu.
Good info on our experience here: https://masterpoint.io/updates/opentofu-early-adopters/
Oracle wins a big competitive talking point versus IBM, as well as crushing the value of IBM's acquisition of Hashicorp, and completely eliminating IBM's Terraform inroad into a large group of Oracle's enterprise customers.
If you're an enterprise customer, do you want your enterprise deployments riding on a company that knowingly maintains two near-identical implementations and can't seem to decide which one to favor?
Red Hat used to routinely open-source acquisitions. Sun also did— that's how we got OpenOffice (and by way of it, LibreOffice). StarOffice was proprietary when Sun bought it.
Way back when the license changed the threads on HN had HashiCorp employees claiming the change was primarily to protect HashiCorp from the fact IBM was reselling Vault. IBM then went ahead and helped fork Vault (OpenBao).
It's largely because a lot of Oracle DB products where performance mattered (eg. Exadata) needed some sort of a base OS that Oracle could manage and optimize as needed.
All that's needed is to update sysctl.conf to tune kernel parameters for the workload. Every Linux sysadmin knows how to do this. Which kernel parameters need updating is heavily documented for any product.
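For illustration, such tuning usually amounts to a drop-in file - the keys are real sysctls, but the values here are made up, not taken from any vendor doc:

```
# /etc/sysctl.d/99-db-tuning.conf  (hypothetical example values)
vm.swappiness = 1
kernel.shmmax = 4398046511104
fs.aio-max-nr = 1048576
```

Applied with `sysctl --system` or a reboot.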
Spending $500k/yr on compute+support SLA is cheaper than $200k/yr on compute and hiring 3 admins dedicated to that piece of compute.
This is the model that every Enterprise Infra vendor pushes (eg. Oracle, AWS, MongoDB, Nvidia), and most mid- and upper-market purchasers are used to it.
All software products have documentation on how to install the product. Oracle has a large suite of products: their databases, ERPs, etc. For kernel parameters, it's just a file, which takes a second https://docs.oracle.com/en/database/oracle/oracle-database/1...
In reality, though, all infra teams have infrastructure to install the OS (and manage the fleet), then post-install customize the OS for whatever the requesting team needs, usually via Puppet or Ansible to manage the configuration. There will be standardized configurations for application, web, and database servers (just to keep it simple).
I would be shocked if Oracle support (or any other vendor) is given login access to make changes on servers owned by clients. At best, you open a case, you get an incompetent support person who'll send you documentation.
Oracle support does not replace admins. Oracle support gives you access to bug fixes, updates, and documentation. I believe you can download most Oracle software for free, but without the docs and updates, it's worthless. Other vendors may use the opposite strategy: docs openly available, but software downloads are paid/subscriptions.
> Spending $500k/yr on compute+support SLA is cheaper than $200k/yr on compute and hiring 3 admins dedicated to that piece of compute.
In reality, though, there will always be admins, then a whole lot of DevOps/Cloud Ops/Kubernetes/SRE/etc. people added, plus a smooth-talking manager/director increasing the spend from what could be done on bare metal for under 20K to a 20 million dollar multi-cloud strategy. Why have 3 admins report to you when you can have an army of 200 people do the same work for 100x the cost? Success stories and promotions all around!
Yep! And it takes time and effort to maintain your Puppet/Chef/Ansible/Terraform/OpenTofu scripts as well as your golden images as well as triaging escalations as well as other incidental work. This means you don't have as much time to work on tuning or debugging, because you'll have dozens of tools (some in-house, others purchased) to manage.
Furthermore, most people recognize that hardware-focused IT administration is increasingly a career dead end, so most end up switching to Engineering, Sales Engineering, or Support Engineering for better career opportunities.
> I would be shocked if Oracle support (or any other vendor) is given login access to make changes on servers owned by clients. At best, you open a case, you get an incompetent support person who'll send you documentation.
This is the norm in most mid- and upper-market support contracts. You'll have a dedicated TAM, Support Eng, and CSM who will handhold teams, and will have access to the underlying infrastructure.
> Oracle support does not replace admins. Oracle support gives you access to bug fixes, updates, documentation. I believe you can download most Oracle software for free, but without the docs and updates, its worthless. Other vendors may use the opposite strategy, docs openly available but software downloads are paid/subscriptions.
Depending on your contract, you would be given a dedicated TAM team and support team to debug any issues in the Oracle stack.
> In reality though, there will always be admins, then a whole lot DevOps/Cloud Ops/Kubernetes/SRE/etc people added, smooth talking manager/director increasing the spend from what could be done on bare-metal under 20K to a 20 million dollar multi cloud strategy. Why have 3 admins report to you, when you can have an army of 200 people do the same work for 100x more cost? Success stories and promotions all around!
That "smooth-talking manager" needs to justify to the CFO, COO, CTO, VP Eng, etc that for $X spent, I can get 1.5 * $X back.
As I've mentioned on multiple different occasions on HN, spend on on-prem infra is treated as part of the Finance+ITOps budget, not the DevOps budget (which is generally within R&D).
Procurement is hard, and you need to JUSTIFY even a 1% increase in headcount.
For example, let's assume you are hiring 3 IT Admins for $120k. That ends up costing $700-800k/yr because of benefits and incidentals. The compute as well is an additional $200-300k.
This means you are spending $900k/yr AT BEST.
That $200-300k in compute becomes $500k with a support contract, and you can hire 1 person for $120k to manage that.
This means you're spending around $750k/yr AT BEST.
That extra $150K can then be given to Engineering to help give bonuses to attract good dev talent or hire some additional headcount on the Sales side to sell the product you are hired to build.
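The arithmetic above, spelled out - all figures are the commenter's assumptions, and the 2x fully-loaded multiplier is my own stand-in for "benefits and incidentals":

```python
def loaded_cost(headcount, salary, multiplier=2.0):
    """Fully loaded annual staff cost: salary plus benefits/incidentals."""
    return headcount * salary * multiplier

# Option A: self-managed -- 3 admins plus raw compute
option_a = loaded_cost(3, 120_000) + 200_000   # ~$920k/yr

# Option B: vendor support -- 1 admin plus compute with a support SLA
option_b = loaded_cost(1, 120_000) + 500_000   # ~$740k/yr

print(f"A: ${option_a:,.0f}  B: ${option_b:,.0f}  delta: ${option_a - option_b:,.0f}")
```

Which lands in the same ballpark as the $900k vs. $750k figures above.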
Or did you just mean Atlas?
Note that they do quite a lot of work on the kernel itself to optimize it for their workloads:
https://blogs.oracle.com/linux/post/oracle-is-the-1-contribu...
The availability of Oracle's UEK kernel is a differentiator from standard RHEL.
If your in-house DBA doesn’t have the experience to perform the specific tuning required, then that’s what support contracts are for
The documentation can't cover every customer's use case and configuration. That just enables folks to blindly copy inappropriate sysctls they don't understand, like they're building Gentoo kernels.
https://github.com/VirtuslabRnD/pulumi-kotlin
For Pulumi. When I see the pulumi-kotlin example code I much prefer it over my Terraform scripts. (We picked TF before Pulumi was an option, and waaaaay before it had reasonably typesafe lang support)
This lets you use Pulumi w/Gradle multi-project builds in Kotlin script.
There's also a first-party Pulumi SDK for F#: https://www.pulumi.com/docs/languages-sdks/dotnet/
If you're into Nix, you might enjoy using this to generate Terraform JSON. The language is inspired by Nix, so it feels familiar to Nixers, but it has a better type system that recently includes ADTs, at least on its master branch: https://github.com/tweag/tf-ncl
Even self-hosting your state management in a bucket is simpler with Pulumi, since it uses lock files on S3 versus a separate DynamoDB + S3 combo.
I have been using it in production for 4-5 years and used Terraform for several years before that.
This is disturbing because S3 does not give you guarantees required to implement real locking.
For locking to work properly you'd need to have a conditional write that would fail if some prerequisite was not met. GCP offers that operation, S3 AFAIK does not.
1. client A lists s3://bucket/prefix/.pulumi/locks/, sees nothing
2. client B lists s3://bucket/prefix/.pulumi/locks/, sees nothing
3. client A creates s3://bucket/prefix/.pulumi/locks/unique1.json
4. client A lists s3://bucket/prefix/.pulumi/locks/, only sees unique1.json, and proceeds
5. client B creates s3://bucket/prefix/.pulumi/locks/unique2.json
6. client B lists s3://bucket/prefix/.pulumi/locks/ and sees both unique1.json and unique2.json
7. client B assumes it lost the race, deletes s3://bucket/prefix/.pulumi/locks/unique2.json, and retries
There's another mode where both clients pessimistically retry, but fuzzing the retry delay could eventually choose a winner at random. (Of course, companies do go out of business and products stop being maintained, and the example here is a bit extreme, but the point is that a company will do what makes the most business sense.)
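The failure mode is easy to show with an in-memory stand-in for the bucket - a simulation sketch, not real S3 calls:

```python
# A set stands in for s3://bucket/prefix/.pulumi/locks/. With an
# unlucky interleaving, both clients create their lock file before
# either re-lists, so both conclude they lost and back off: plain
# S3 writes offer no atomic "create only if absent" primitive.
bucket = set()

bucket.add("unique1.json")   # client A creates its lock file
bucket.add("unique2.json")   # client B creates before A verifies

a_sees = sorted(bucket)      # A's verify list: sees both files
b_sees = sorted(bucket)      # B's verify list: sees both files

# Both delete their own file and retry; only randomized backoff
# eventually breaks the tie.
bucket.discard("unique1.json")
bucket.discard("unique2.json")
```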
Oh so you never heard of “suppliers”?
It makes perfect sense for them to push their customers to move to the more permissive licensing to avoid any legal issues.
> OpenTofu is a Terraform fork, created as an initiative of Gruntwork, Spacelift, Harness, Env0, Scalr, and others, in response to HashiCorp’s switch from an open-source license to the BUSL. The initiative has many supporters, all of whom are listed here.
I still have no idea what I am looking at. I know that probably means this product isn't for me, but it peeves me when products do this. "What is X? X is like Y!"
It's like using AWS CloudFormation or GCP Deployment Manager, but supports quite a few cloud vendors with the same tools.
I don't know much about Oracle's services, so I can't figure out whether this is a huge number of users or a small subset of their clients.
* Oracle Products (e.g. DB, Fusion, E-Business Suite)
* Oracle Cloud (OCI)
What's telling is that Oracle Cloud's Terraform-as-a-Service (Resource Manager) is still Terraform:
https://docs.oracle.com/en-us/iaas/Content/ResourceManager/C...
Clearly, Oracle must think there is some legal distinction between selling Terraform-as-a-service and selling+distributing a product _containing_ Terraform that end users then use as Terraform-as-a-service.
Version support here: https://docs.oracle.com/en-us/iaas/Content/ResourceManager/R...
Why Oracle Cloud isn't using Terraform version 1.5.7 which is still open source in Resource Manager is anyone's guess. Perhaps the tool isn't getting much attention recently?
* Oracle legal
Sun and MySQL precede them.
One theory of mine is that we can measure the risk that a project will be relicensed by looking at things like diversity of contributors, trademark ownership, contributor agreements, and license terms. Low risk projects include the Linux kernel (GPL, DCO) [1]. High risk projects include Kubernetes (Apache, CLA) [2].
If this trend continues, developers will need to better understand how relicensing works, and may decide to avoid contributing to projects with elevated risk.
[1] https://alexsci.com/relicensing-monitor/projects/linux/
[2] https://alexsci.com/relicensing-monitor/projects/kubernetes/
To me, the obvious questions are who owns the IP, and what are their incentives to maintain the current licensing.
I think there are a bunch of questions you can ask:
* Why is the software open source (if licensing/contractual requirements make it so, that's more likely to keep the status quo vs. corporate claims of "we <heart> open source")?
* Who owns the copyright/IP (and what's their reputation)?
* What would happen if the license changes (is there an ecosystem that relies on it being open source, or is it a black box)?
* Who cares what the license is (e.g. BerkeleyDB was relicensed, which got old versions frozen in linux distributions, so no-one upgraded to newer versions, and replacements were written)?
I think "contributor agreements" are the biggest red flag. Though I like them for potentially upgrading a license (say, from GPLv2 to v3) - not that that's always a good thing.
Yes, in my experience it is.
Permissive licenses like Apache, MIT, and BSD are the easiest for corporate lawyers to approve, but also easy for the project owner to relicense. Relicensing Monitor isn't measuring how easy it is for companies to use the software; the risk score solely measures how easy it is to relicense the software.
Copyleft licenses are lower risk than permissive licenses in this specific context as they are viral. A CLA or a very small number of contributors can negate that, as happened with Emby [1].
SourceGraph is probably the best example here (I need to add them still). They switched off Apache 2 and prompted this [2] helpful blog post.
[1] https://alexsci.com/relicensing-monitor/projects/emby/
[2] https://drewdevault.com/2023/07/04/Dont-sign-a-CLA-2.html
Copyright ownership transfer is mostly not a thing in OSS communities because it strongly discourages third parties from contributing. Copyright transfers are only needed with some licenses (GPL-style licenses that insist everything else is licensed the same way), and they cannot prevent a retroactive fork even if you have them. Other licenses allow distributing mixed-licensed code, so you can just create a commercial source distribution, because the license explicitly allows that. Either way, anyone with the pre-license-change version of the code can fork. That's why Elastic, which used the Apache license and had copyright transfers, got forked.
The more widely used an OSS project is, the more likely it is that somebody will fork it if it is re-licensed. Because that usually means lots of external contributors and plenty of interest from wealthy companies that depend on it. Meaning there are skills and money needed to fund the fork. Copyright transfers don't stop this from happening. Unless you specifically want to fire most of your user base, this just doesn't make any sense from a business point of view.
A failure to fork basically indicates the project didn't have a strong developer community and big companies simply didn't care about the project.
I consult some clients on Elasticsearch and Opensearch. Most of my recent clients now default to Opensearch. Because it's the OSS option. They are clearly spending money to get support (from me and others) but Elastic isn't getting any. As far as I can see, Opensearch now represents the vast majority of new users and is becoming a significant source of money for hosting, training, and consulting. But Elastic is getting none of that.
My guess is that the industry will learn from the repeated re-licensing, forking, and subsequent community splits. Elastic, Redis, OpenTofu, CentOS, etc. The pattern is the same every time: 1) the project gets relicensed, 2) a few weeks later a consortium of companies pools resources and forks it, 3) most users stick with the open source fork, and the company cuts itself off from those users.
Long term, I would not be surprised to see some of those companies offering support for their OSS forks (in addition to their commercial offerings) or even reverting the license change. This would make a lot of sense for e.g. Elastic as there's a lot of duplicated effort between them and Amazon. And Amazon gets a lot for free from outside contributors.
I also had the same thought to create some sort of risk metric that could be applied to projects, but I do think your initial metric is lacking some criteria. Foundations like the CNCF and ASF have to be among the lowest risk, and CLAs can be more or less harmful depending on their specific content. I think a big red flag has to be if they’ve taken any VC or PE funding.
However I think the principle of taking this risk more seriously is good and important.