What makes you so sure that closed-source companies won't run those same AI scanners on their own code?
It's closed to the public, it's not closed to them!
Having worked in quite a few agency/consultancy situations, it is far more productive to smash your head against a wall until it bleeds than to get a client to pay for security. The regular answer: "This is table stakes, we pay you for this." Combined with: "Why has velocity gone down? We don't pay you for that security or documentation crap."
There are unexploited security holes in enterprise software you can drive a boring machine through. There is a well-paid "security" (aka employee surveillance) company running Python 2.7 (no, not patched) on each and every machine their software runs on. At some of the biggest companies in this world. They just don't care about updating this, because, why should they? There is no incentive. None.
Running AI scanners internally costs money, dev time, and management buy-in to actually fix the mountain of tech debt the scanners uncover. As you said, there is no incentive for that.
But for bad actors, the cost of pointing an LLM at an exposed endpoint or a reverse-engineered binary has dropped to near zero. The attacker's tooling just got exponentially cheaper and faster, while the enterprise defender's budget remained at zero.
There should be a way to donate your unused tokens every cycle to open source, like rounding up at the checkout!
1. shallow
2. hollow
3. flat
...
Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.
Why "minus D, E and F"? After all, once you have the harness set up, there's no additional work to add in new models, right?
Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.
Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.
That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).
So just like pre-AI, or worse?
There is no guarantee that open means that they will be discovered.
This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here: any open-source business stands to lose way more by staying open-source than it gains from the benevolence of people scanning its code for it.
Actually the opposite is obvious - the comment you replied to talked about an abundance of good-Samaritan reports - it's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.
> In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits
That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.
> any open-source business stands to lose way more
That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?
You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to also gaining from a shorter time to market.
In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.
But at that point, "fighting fire with fire" is still a good point. Assuming tokens are available, we could just dump the entire code base, changesets and all, the configuration that depends on it, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out upgrade risks. Bonus points if you have decent integration tests or test setups to back all of that up.
It won't be perfect, but combine that with a good tiered rollout and increasing the velocity of rollouts is entirely possible.
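A minimal sketch of that context-folder approach. Everything here is an illustrative assumption, not a real tool: the paths (`config`, `docs/domain-notes.md`, `docs/postmortems`) and the agent invocation in the final comment are hypothetical.

```python
# Hypothetical sketch: gather upgrade-risk context for an AI agent into one folder.
# All repository paths below are illustrative assumptions.
import shutil
import subprocess
from pathlib import Path

def build_upgrade_context(repo: Path, out: Path) -> None:
    out.mkdir(parents=True, exist_ok=True)
    # Recent changesets, so the model sees what actually changed.
    try:
        log = subprocess.run(
            ["git", "-C", str(repo), "log", "--oneline", "-200"],
            capture_output=True, text=True,
        ).stdout
    except FileNotFoundError:  # git not installed; skip the history
        log = ""
    (out / "recent-changesets.txt").write_text(log)
    # Config that depends on the code base, domain notes, past upgrade failures.
    for rel in ("config", "docs/domain-notes.md", "docs/postmortems"):
        src = repo / rel
        if src.is_dir():
            shutil.copytree(src, out / src.name, dirs_exist_ok=True)
        elif src.is_file():
            shutil.copy(src, out / src.name)
    # Then point your agent of choice at `out`, e.g. (hypothetical CLI):
    #   my-agent "Read ./upgrade-context and list the riskiest parts of upgrading X"
```

The heavy lifting (the actual risk analysis) still happens in the agent; the point is only that assembling the context it needs is cheap and scriptable.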
It's kinda funny to me -- a lot of the agentic hype seems to hugely reward good practices: cooperation, documentation, unit testing, integration testing, local test setups.
If the cost of security audit becomes marginal, it would seem reasonable to expect projects to publish results of such audits frequently.
There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.
This is what worries me about companies sleeping on using AI to, at a bare minimum, run code audits and evaluate their security routinely. I suspect that as models get better we're going to see companies being hacked at a level never seen before.
Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.
but with cal.com i don't think this is about security lol
open source will always be an advantage, you just need to decide whether it aligns with your business needs
I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly and don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.
How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.
But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to the source code.
That sounds like an excuse. The real reason is probably that it's hard to make a viable business out of developing open source.
Now it's a lot easier to rewrite open source stuff to get around licensing requirements and have an LLM watch the repo and copy all improvements and fixes, so the bar for a competitor to come along and get 10 years of work for free is a lot lower.
The issue is competitors popping up to clone your offering with your own codebase.
Going closed source actually hurts our business more than it benefits it. But it ultimately protects customer data, and that's what we care about the most.
Are you able to share any more detail on how you determined this is the best route? It would be a significant implication for many other pieces of open source software also if so.
(And I say this as someone who just recommended cal.com to someone a few days ago, specifically citing the fact that it was open source, which led to increased trust in it.)
I did find the video valuable, for reference for others: https://www.youtube.com/watch?v=JYEPLpgCRck
I think if you are committed to switching back to open source as soon as the threat landscape changes, and you have some metric for what that looks like, that would be valuable to share now.
I would like to see the analysis that you're referencing around open source being 5-10x less secure.
All your servers are Linux, so imagine how insecure you are - must switch to windows ASAP.
Blaming AI scanners is just really convenient PR cover for a normal license change.
“I need to do foo in my app. Libraries bar and baz do these bits well. Pick the best from each and let’s implement them here”
I’d not be surprised if npmjs.com and its ilk turn into more of a reference site than a package manager backend soon.
It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just reference site for me now, and node_modules stays tiny.
And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.
Now I can take an open source repo and just add the missing features, fix the bugs, deploy in a few hours. The value of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.
1) Pulls you in with a catchy title, that at first glance seems like a dunk on Cal.com (whatever that is).
2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.
3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.
4) But at the end of the day, the response aligns perfectly with the product they're promoting (a click away from the homepage!)
This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!
As mentioned in their article, Strix actually scans the Cal.com codebase and reports vulnerabilities to us. But the reality is, they miss so many vulnerabilities that other platforms do find. No one platform seems to be able to reliably find all vulnerabilities, so simply adopting AI scanners just isn't enough.
The real content could fit in a comment.
Cal.com is going closed source - https://news.ycombinator.com/item?id=47780456
Cybersecurity looks like proof of work now - https://news.ycombinator.com/item?id=47769089
One of the ugliest parts of open source is people believing they’re entitled to you working for free forever. And instead of being thankful you gave years of your labor for free, people get angry at you for not continuing to do so forever. And try to shame you as if you’re somehow greedy if that changes.
Do you work exclusively pro-bono on open source projects? Or do you work a job where you only go in if you get paid?
Security through obscurity is only problematic if that is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic, with its primary function being imposing higher costs on the attacker.
As such if, as people are postulating post-Mythos, security comes down to which side spends more tokens, it is an even more valid strategy to impose asymmetric costs on the attacker.
"With enough AI-balls (heheh) all bugs are shallow."
From a security perspective, the basic calculus of open versus closed comes down to which you expect to be the case for your project: either the attention donated by the community outweighs the attention (lowered by openness) invested by attackers, or the attention from your internal processes outweighs the attention costs (increased by obscurity) imposed on attackers. The only change is that the attention from AI is many times more effective than that from humans; otherwise the calculus is the same.
This article is effectively an announcement that cal.com is riddled with vulnerabilities, which should be easy to find in an archive of their code.
Then the real work is in investigating each false positive. Can still be useful compared to manual review, but requires real resources.
Meanwhile the flood of false positives causes reputation loss if not addressed. Reputation loss that closed source software does not get. Hence perhaps going closed source.
-whether code is open source or closed source, AI bots can still look for exploits
-so we need to use AI to develop a checklist program regardless, to check for currently known and unknown exploits, given our current state of AI tools
-we have to just keep running AI tools looking for more security issues as AI models become more powerful, which empowers AI bots attacking but also then AI bots to defensively find exploits and mitigate them
-so it's an ongoing effort to work on
I understand the logic of closing the source to prevent AI bot scans of the code, but fundamentally people won't trust your closed source code because it could contain harmful code, thus forcing it to be open source
Edit: Another thing that comes to mind is that people here often dunk on "vibe coding"; however, can't we just develop "standards/tools" to "harden" vibe-coded software and also help guide decisions related to the architecture of the program, and so on?
There are real limitations of course.
I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
And likely there would be enough similarities that the rewrite would be considered a derived work under copyright law.
> The original work's license shouldn't apply as long as code wasn't copy & pasted, right?
You don't need to do a literal copy & paste for it to be copyright infringement.
> What happens when an automated tool does the same? It's basically just a complicated copy & paste job.
Sounds like copyright infringement to me.
If we go by the OSI's definition, a project that doesn't allow this is not "open source". So all open source projects -- not just "a lot" -- allow this.
This feels like the core of the article, but it doesn’t prove the need for open source.
One of which I am experiencing right now is somebody just copying my repo, not crediting me, didn't even try to change the README. It's pretty discouraging.
The other is security reasons: the premise that volunteers will report vulnerabilities only really matters if you are big enough for a small portion of people to dedicate themselves to it. For the most part, people take an open source tool, use it, and then forget about it; they only want stuff fixed.
Lastly, open source development kinda sucks so far. I've been working on a few different tools, and the amount of trolling and just bad-faith actors I had to deal with is exhausting. On top of that there is a constant stream of people just demanding stuff be fixed quickly.
That makes the assumption that the same amount of money needed to exploit a critical vulnerability is also required to find and fix it.
Let's say we have a project with 100 modules, and it costs us $100,000 to check these modules for vulnerabilities. What is stopping an attacker from spending the same amount of money to scan, say, 10 modules, but this time with 10x the number of tokens per module than the defender had when hardening the software?
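To make the asymmetry in that hypothetical concrete (the module counts and dollar figures are the made-up ones from above, not real data):

```python
# Hypothetical cost asymmetry: the defender must cover every module,
# while the attacker only needs one exploitable hole.
modules = 100
budget = 100_000  # dollars, same total for both sides

# Defender spreads the budget across all 100 modules.
defender_per_module = budget / modules            # $1,000 of scanning per module

# Attacker concentrates the same budget on a 10-module subset.
attacked_modules = 10
attacker_per_module = budget / attacked_modules   # $10,000 per module

print(defender_per_module)                         # 1000.0
print(attacker_per_module)                         # 10000.0
print(attacker_per_module / defender_per_module)   # 10.0 (x deeper search)
```

With equal budgets, the attacker gets a 10x deeper search on whatever subset they pick; the defender never knows which subset that will be.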
Open source was always open to "many eyes", in theory exposing itself to zero-day vulnerabilities. But the "many eyes" include both the good and the bad actors.
As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.
Some things just can't be truly secure, either: DDoS protection is mostly a guessing/preventive game, and exposing your firewall config/scripts will make you more vulnerable than NOT.
If your codebase isn't exposed, attackers are constrained by the network and other external restrictions, which greatly reduce the number of possible trials. Even with a swarm of residential proxies, it's not at all the same as inspecting a codebase in depth with thousands of agents and all the models.
- it’s not open vs closed anymore, it’s more like bug finding going from a few devs poking around to basically infinite parallel scanners
- so now you don’t get a couple of thoughtful reports, you get many edge cases and half-real junk. fixing capacity didn’t change though
- closing the repo doesn’t really save you, it just switches from white-box to black-box… and that’s getting pretty damn good anyway
real problem is: vuln discovery scaled, patching didn’t. now everything is a backlog game
Cal.com folks are getting a red team for free, wouldn't that further convince them their closed source software is strong enough?
Isn't Strix's business companies paying for scans regardless of whether the software scanned is open source or closed?
At $WORK we had a system which, if you traced its logic, could not possibly experience the bug we were seeing in production. This was a userspace control module for an FPGA driver connected to some machinery you really don't want to fuck around with, and the bug had wasted something like three staff+ engineer-years by the time I got there.
Recognizing that the bug was impossible in the userspace code if the system worked as intended end-to-end, the engineers started diving into Verilog and driver code, trying to find the issue. People were suspecting miscompilations and all kinds of fun things.
Eventually, for unrelated reasons, I decided to clean up the userspace code (deleting and refactoring things unlocks additional deletion and refactoring opportunities, and all said and done I deleted 80% of the project so that I had a better foundation for some features I had to add).
For one of those improvements, my observation was just that if I had to write the driver code to support the concurrency we were abusing I'd be swearing up a storm and trying to find any way I could to solve a simpler problem instead.
Long story short, I still don't know what the driver bug was, but the actual authors must've felt the same way, since when I opted for userspace code with simpler concurrency demands the bug disappeared.
Tying it back to AI and hacking, the white box approach here literally didn't work, and the black box approach easily illuminated that something was probably fucky. Given that AI can de-minify and otherwise spot patterns from fairly limited data, I wouldn't be shocked if black-box hacking were (at least sometimes) more token-efficient than white-box.
This seems to be extremely common. Been a very long time since I looked at Linux kernel stuff, but there were numerous drivers that disabled hardware acceleration or offloading features simply because they became unreliable if they were given heavy loads or deep queues.
I mean, as a convention when dealing with cryptography: so far the only organization that has succeeded in doing closed-source cryptography securely has been the USA's NSA, and mostly their algorithms are public.
I mostly work in the closed source world; however, my observation from all the code bases I've seen is that mostly, open source is more secure, except when formal security specifications are followed very thoroughly, and then security is as good as the specifications. (YMMV there, of course.)
With that said, it at least seems possible to read the binary itself, but most of the magic there is in execution, so you'd have to have an LLM behave kind of like a processor, I think.
Cal.com is going closed source
Laughable and hilarious. Extremely short-sighted. I can show code generated by Claude Opus 4.6 at the highest compute intensity that lacks even basic input validation checks that were clearly provided in the spec.
There's no point in arguing with crypto and AI bros. They are the same tribe. AI crowd however might learn their lessons sooner because the universe isn't forgiving or flexible.
Note: I use AI code generators all the time, but I take them as very, very dumb transpilers, no matter how expensive their input/output pricing is, and I learned that the hard way.
PS: Edit to fix typos.
Shame
I'm disappointed to hear this especially since I don't think the rationale makes sense, from what I understand of the security landscape, and it also makes me a little more skeptical of cal.com in general.
Use battle-tested frameworks such as Rails or Django and you won't make rookie security mistakes.
There is zero incentive or reason for content creators to let AI slurp their content for free and distribute it and get all the money from it.
Everything new will be licensed and if AI companies want access to it, they will need to pay for it, just like we will.
The people that go behind paywalls don't realize how much they'll have to spend on marketing to catch up to those that are open.
And that only frames the current state, where models are very expensive to train. Once model training is close to the point where a group of individuals can afford it, it's pretty much game over for our current paradigm. The software police will be running around trying to play whack-a-mole with open-weight models with people all over the world.
Search engines will cease to exist, so no one will search your content and then click on your link. AI will simply regurgitate your content and take the money for tokens or subscription and not acknowledge you at all.
--Humans need not apply.
It's kind of funny that you think you're going to be making money writing software. If you lock up your software who exactly are you selling it to anyway? It's like you're thinking 25% through the situation then going "I can stay where I am and I don't have to change anything", and then crying later when it doesn't work.
What are you going to do, advertise in BYTE magazine (dead). On Instagram? With a sandwich board on a Seattle street corner? What does the software market even look like in the AI age.
And much like how Google and Amazon eat your lunch now whenever they want, successful AI companies will buy up some software ideas and feed them to their models (which will be stolen later by other models). Anyone that sees your software will mock up a useful clone of it pretty quickly the first time they see it. And foreign AI companies will just outright steal it.
You're right you won't create content that you don't get paid for, you just won't be creating anything while competing with the other unemployed masses for strawberry picking jobs.
I see this trope a lot in security discussions. “Obscurity isn’t security” or “since you can’t protect against X you may as well do Y”.
This is a harmful trope, which discourages perfectly good protections. Sure, closing source is not a perfect protection, but it is a defense against a large band of attacks.
Think of the entire field of potential vulnerability probes attackers have. Closing the source closes many of them off, likely most of them.
A pen-tester model with the implementation will be loads more effective than one with only a black box. And that will give cal.com time to run the pen-testing model on the source and address the vulns, hopefully before they are exploited.
I tested this myself, first using black-box model attacks, then using the source code. The model with the source found and exploited the vulns instantly. The model without failed.
The lesson is: obscurity is not security ALONE, but it is a component of security.
I read this as:
"We figured no one was looking so we just shipped unsafe garbage for years. We never once did an internal audit, never once paid a hacker to try to exploit our product, never thought we'd get caught with our substandard products."
If a guy in his basement with $200 can ruin your company, then you were trading on vapor the entire time. I'm sorry you had to find out this way.
It's entirely possible this CEO sincerely believes this, but that means you as a potential customer should stay away: now you know that the CEO of this company has no idea how technology works even at an executive level and/or that he doesn't consult his experts before making decisions.
The pipeline goes like this:
Use open source license to gain traction and credibility > establish a customer base > pull the rug on open source to get everyone who depends on your product but isn't yet paying to pay.
My concern is mostly financial. Most people would be in a better position to monetize my software than I am... Using AI to obfuscate the origin while appropriating all the key innovations. I wouldn't get any credit.
Also, I'm not really interested in humans anymore. I have human fatigue.
Then AI will eat your lunch anyway if the financial part has anything at all to do with the code.
AI can decompile code very well.
I mean, bold statement but statistically speaking it's almost certainly incorrect. I will say that, irrespective of whether source is open or closed, I would be deeply skeptical of a project that made this assertion.
I previously failed to summarise HN guidelines on sarcasm: https://news.ycombinator.com/item?id=38585465
is anyone else seeing this / fixed this problem ?
I mean an AI skill is perfectly capable of doing this exact same thing.
In order to build trust, they open source their product. I forked it, removed the blocks from the freemium feature in 15 minutes using Claude Code. Never published the code to anyone else, just used it myself
Unfortunately, I think it isn’t going to be tenable for systems to be fully open sourced going forward.
AI generated bullshit PRs are clearly the bigger issue in the OSS space.
The future is sharing; you may not believe it because your income is tied to being clever. Long term we are all more clever because of the sharing, even if your contribution sometimes does not add to your personal success. Asking a company or its individuals to forego their success will not make them add more to our future. But they will add to our future nonetheless, because they all feel, like we all do, that adding is what we are all meant to do.
But... playing devil's advocate, if AI makes it very easy to find exploits without the source code, wouldn't it be doubly effective finding them with the source code as well? And why is the dichotomy posed by this blog post "open source with AI reviews by everyone" vs "closed source but only the bad guys use AI"? What if the scenario was: closed source and the authors/security team use every AI tool at their disposal to find bugs? What do the community's eyeballs add to this equation, assuming (big if) AI review of exploits is such a force multiplier?
Before any knee-jerk reactions: big fan of open source, I'm not arguing this will kill it, I don't have the faintest idea what Cal.com is and I think a world without FOSS would be a tragedy, I run linux and most of my software on my personal PC (other than games) is FOSS.
Which works if you assume that AI can find 100% of your bugs.
It can't. So this is a complete waste of your time and will hide actual bugs behind a layer of confidence _and_ obscurity.
You're going to actually have to sit down and figure out how to provide real security in your product while earning profits. This is called "work." I understand Silicon Valley would like to earn money and not work. I am eager for these people to get their comeuppance.
Well ...
Open Source as such will never "die", but we only need to look at what happened in, say, the last 5 or 10 years. Private entities with a commercial interest, have been flexing their muscles. Microsoft - also known as Microslop these days - with Github is probably the most famous example still, but you can see other examples. One that annoys me personally is Shopify's recent influence - rubygems.org is basically just shopifygems.org now. See: https://blog.rubygems.org/2026/04/15/rubygems-org-has-a-publ...
"Contributors from both the RubyGems client team and Shopify are already working with us on making native gems a better experience for the Ruby community. "
There is a lot more I could add to this (see my complaint about how rubygems.org hijacks gems past the 100,000 download barrier; this was why I retired from using rubygems.org, and then the year afterwards ruby core purged numerous developers. The handwriting is soooooo clear that shopify flexed their muscles here).
I think we need to make open source development more accessible to everyone, not just corporations throwing their money around to gain influence and leverage. I don't have a great idea to make this model work; economic incentives kind of have to be there too, I get that part, and I am not sure which models could work. But right now we really have a big problem. We can also see this with age sniffing (age verification - see the article that pointed at Meta orchestrating influence and lobbying) and many more changes. Something has to change. Hopefully some people cleverer than me can come up with models that are actually sustainable, even if it may not necessarily be a "fund an open source developer for a year". There could be a more wide-spread "achieve xyz" or some other lower-finance effort - but again, I don't have a good suggestion here. Hopefully something improves here, though, because I am getting really tired of private interests constantly sabotaging and ruining the whole ecosystem while claiming they "improve" an ecosystem. We have the old "War is peace. Freedom is slavery. Ignorance is strength." going again. Opposite day, every day.
Corporations are about money.
Individuals need to eat.
Governments love to concentrate power.
I wrote some very nice expressive text for our deployment guide. My project manager took the guide and had Gemini break it down into plain boring bullet points. AI and the pundits can gf themselves in their journey to kill human expression.
Here is what I wrote in the guide:
"Post Deploy Responsibility
If you made it this far, say “Wow I really did it and it was so easy!”
Did you say it? Good. Now you are entirely responsible for any issues or bugs that may arise from the newly deployed code. Don’t go anywhere until the deploy has finished (usually takes a few minutes). While an issue or bug may not leave you directly at fault, you are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy."
Here is what the product manager slopped it into:
"- Post deploy responsibility
- You are responsible for performing QA upon deployment
- You are responsible for any issues or bugs that may arise from newly deployed code
- You are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy"
My paragraph wasn't long, hard to understand, or poorly written. I wouldn't have objected to a rewording or some changes, but the project manager chose to just copy-paste it into Gemini and copy-paste it back. So my take is that they didn't understand what I wrote. Which is a few sentences long, and frankly sad if a paragraph is too intense for you to read. When my project manager did this during the meeting I said, "RIP human expression," and their response was a very hasty "no, that's not what's happening". This is what all the pundits want to do to everyone and society. Don't believe them that "it's just a tool"; that is just a tactic to get you to roll over so they can shove more AI in your face.

going closed source does not mean we are not fighting fire with fire
we have been using a handful of internal AI vulnerability scanners for months now
being open source simply increases risk by 5x to 10x according to several security researchers we are working with: https://cal.com/blog/continuous-ai-pentesting-vulnerability-...
It’s OK if there’s another reason for this transition, just be transparent about it and don’t treat your users as children.