There is still benefit for non-Infra people. But non-Infra people don't understand system design, so the benefits are limited. Imagine a "mechanic AI". Yes, you could ask it all sorts of mechanic questions, and maybe it could even do some work on the car. But if you wanted to, say, replace the entire engine with a different one, that is a systemic change with farther-reaching implications than an AI will explain, much less perform competently. You need a mechanic to stop you and say, uh, no, please don't change the engine; explain to me what you're trying to do and I'll help you find a better solution. Then you need a real mechanic to manage changing the tires on the moving bus so it doesn't crash into the school. But having an AI could help the mechanic do all of that more smoothly.
Another thing I'd love to see more of is people asking the AI for advice. Most devs seem to avoid asking Infra people for architectural/design advice. This leads to them putting together a system using their limited knowledge, and it turns out to be an inferior design compared to what an Infra person would have suggested. Hopefully they will ask AI for advice in the future.
Something we’ve been dealing with is trying to get the agents to not over-complicate their designs, because they have a tendency to do so. But with good prompting they can be very helpful assistants!
Might be good to train multiple "personalities": one's a startup codebro that will tell you the easiest way to do anything; another will only give you the best practice and won't let you cheat yourself. Let the user decide who they want advice from.
Going further: input the business's requirements first, let that help decide? Just today I was on a call where somebody wants to manually deploy a single EC2 instance to run a big service. My first question is, if it goes down and it takes 2+ days to bring it back, is the business okay with that? That'll change my advice.
The personalities approach sounds fun to experiment with. I'm wondering if you could use SAEs (sparse autoencoders) to scan for a "startup codebro" feature in language models. Alas, this isn't something we'll get to look into unless we decide that fine-tuning our own models is the best way to make them better. For now we are betting on in-context learning.
Business requirements are also incredibly valuable. Notion, Slack, and Confluence hold a lot of context, but it can be hard to find. This is something that I think the subagents architecture is great for though.
Even if you manage to prompt your way to a working app, you'll still have no idea how the system works.
> Right now, Datafruit receives read-only access to your infrastructure
> "Grant @User write access to analytics S3 bucket for 24 hours" > -> Creates temporary IAM role, sends least-privilege credentials, auto-revokes tomorrow
These statements directly conflict with one another.
So it needs "iam:CreateRole," "iam:AttachPolicy," and other similar permissions. Those are not "read-only." And, they make it effectively admin in the account.
What safeguards are in place to make sure it doesn't delete other roles, or make production-impacting changes?
How is the auto-revoke handled? Will it require human intervention to merge a PR/apply the Terraform configuration, or will it do it automatically?
Also, auto-revoke can be handled right now by creating a role in Terraform that can be assumed but expires after a certain time. But we’re exploring deeper integrations with identity providers like Okta to handle this better.
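To make the expiring-role idea concrete, here's a minimal sketch in boto3 terms rather than Terraform (role name, account ID, and bucket ARN are all made up, and this isn't necessarily how we implement it): the trust policy carries a DateLessThan condition on aws:CurrentTime, so the role simply can't be assumed after the cutoff.

    # Hypothetical sketch: a role that stops being assumable after 24 hours.
    # All names and ARNs below are placeholders for illustration.
    import json
    from datetime import datetime, timedelta, timezone

    import boto3

    iam = boto3.client("iam")
    expiry = (datetime.now(timezone.utc) + timedelta(hours=24)).strftime("%Y-%m-%dT%H:%M:%SZ")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/SomeUser"},
            "Action": "sts:AssumeRole",
            # After the expiry timestamp, new AssumeRole calls are denied.
            "Condition": {"DateLessThan": {"aws:CurrentTime": expiry}},
        }],
    }

    write_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::analytics-bucket/*",
        }],
    }

    iam.create_role(
        RoleName="temp-analytics-writer",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
        MaxSessionDuration=3600,
    )
    iam.put_role_policy(
        RoleName="temp-analytics-writer",
        PolicyName="temp-analytics-write",
        PolicyDocument=json.dumps(write_policy),
    )

The role itself still has to be cleaned up eventually (the condition only blocks new AssumeRole calls), which is where the question of who merges and applies the revert comes back in.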
I consulted for an early stage company that was trying to do this during the GPT-3 era. Despite the founders' stellar reputation and impressive startup pedigree, it was exceedingly difficult to get customers to provide meaningful read access to their AWS infrastructure, let alone the ability to make changes.
And yeah, we are noticing that it’s difficult to convince people to give us access to their infrastructure. I hope that a BYOC (bring your own cloud) model will help with that.
> we’ve talked to a couple of startups where the Claude Code + AWS CLI combo has taken their infra down
Do you care to share what language model(s) you use?
BTW, your website is heavy; for a basic set of components it shouldn't be taking 100% CPU.
It is workflow automation at the end of the day. I would rather pick a SOAR or AI-SOC product where automation like this is very common, e.g. BlinkOps or Torq.
We have not spent as much time working in the security space, and I do think that purpose-built solutions are better if you only care about security. We are purposefully trying to stay broad, which might mean that our agents lack depth in specific verticals.
Why does that need an AI? I’m pretty sure many tools for those things exist, and they predate LLMs.
I think the power language models introduce is the ability to more tightly integrate app code with the infrastructure. They can read YAML, shell scripts, or ad-hoc wiki policies and map them to compliance checks, for example.
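To make that concrete, the kind of trivial check I have in mind might look like the sketch below (hypothetical file name and policy IDs, not our actual implementation). The interesting part is the model reading the ad-hoc wiki policy and generating or maintaining checks like this, not the check itself.

    # Hypothetical compliance check: flag privileged containers and missing
    # resource limits in a Kubernetes Deployment manifest. File name and
    # policy IDs are made up for illustration.
    import yaml  # PyYAML

    with open("deployment.yaml") as f:
        manifest = yaml.safe_load(f)

    containers = (
        manifest.get("spec", {})
        .get("template", {})
        .get("spec", {})
        .get("containers", [])
    )

    for c in containers:
        name = c.get("name", "<unnamed>")
        if c.get("securityContext", {}).get("privileged"):
            print(f"{name}: runs privileged (violates hypothetical policy SEC-1)")
        if "limits" not in c.get("resources", {}):
            print(f"{name}: no resource limits (violates hypothetical policy COST-3)")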
You need to be very clear about the persona who you're building for, what their pain point is, and why they're willing to spend money to solve it. So far it seems like you took an emerging technology (agentic workflows), applied it to a novel area (DevOps), built a UX around it, and tried to immediately start selling. This is the product trap of a solution in search of a problem.
Are you trying to sell to large companies? The problem that large companies have is cultural/organizational, not tooling. For any change, you need to get about a dozen people to review, understand, wait for people to come back from vacation, ping people because it fell off their desk, sign off, get them to prioritize, answer questions again from the engineer the task was assigned to, wait for another round of reviews and approvals, and maybe finally somebody will get the fix applied in production. DevOps is (or at least, it originally used to be) focused on finding and alleviating the bottlenecks; the actual process of finding data or applying changes is not the bottleneck in large companies, and therefore it is not a solution to the pain point that different folks in large companies have.

If your value proposition is that large-company executives could replace Infrastructure employee salaries with a cheaper agentic workflow, you need to re-read my prior point: if large companies have all this process and approvals for human beings making changes, why would they ever let an agentic workflow YOLO the changes without approval? And yes, I know, your agent proposes Terraform PRs for making changes to keep a human in the loop - but now you've slain one of the Hydra's heads and three more have popped up in its place: the customer needs the Terraform PR to be reviewed by a human committee, some of whose members are on vacation, some of whose members missed the PR request because they had other priorities and it fell off their desk, etc. etc. It doesn't really sound like you solved anything.

The fundamental difference between what you built and something like Claude Code is that Claude Code doesn't need a human committee to review every iteration it executes on an engineer's laptop, only the review of the One Benevolent Laptop User, who is incentivized to get good output from Claude Code and provide human review as quickly as (literally) humanly possible.
Are you trying to sell to small companies that don't have DevOps Engineers? What's the competitive space here? The options usually look something like: (a) pay a premium for a PaaS, or (b) spend on the salary for your first DevOps Engineer in the hope that they will save more on low-level infra bills than their salary costs. So now you're proposing (c) some kind of DevOps agentic workflow that is cheaper than a DevOps Engineer salary but will provide similar infra cost savings?

So your agentic workflow will actually lift and shift to better/cheaper infra primitives and own day-to-day maintenance, responding to infra issues that your customers - who aren't DevOps Engineers, don't know anything about infra, and are trying to outsource these concerns to you - don't know how to handle? I would argue that if you really did achieve that, then you should be building an agentic-workflow-maintained PaaS that, by virtue of using agents instead of humans, can undercut traditional PaaS on cost while offering a maybe-better UX somehow. If you're asking your customers to review infra changes that they don't understand, then they need to hire a DevOps Engineer for the expertise to review them, and then you have a much less interesting value proposition.
Right now most of our value, as you said, is in augmenting an infra engineer at a growth-stage company to limit some of the operational burdens they deal with. For the companies we’ve been selling to, the customers are SWEs who have been forced to learn infra when the need arises. But overall they are fairly competent and technical. And Claude Code or other agentic coding tools are not always sufficient or safe to use. Our customers have told us anecdotally that Claude Code gets stuck in a hallucination loop of nothingness on certain tasks, and that Datafruit was able to solve them.
That being said, we have lost sales because people are content with Claude Code. So this is something we are thinking about.
YC, you want founders of these companies to have 10 years working at Ford Motor Company. It's all the reasons I want to write my blog article, "FAANG, please STFU. I wish I could be focused on 100k requests per second, but instead I'm dealing with engineers who have no idea why their ORM is creating terrible queries. Please stop telling them about GraphQL."
"Grant @User write access to analytics S3 bucket for 24 hours" Can the user even have access to this? Do they need write access or can't understand why they are getting errors on read? What happens when they forget in 30 days they asked your LLM for access and now their application does not work because they decided to borrow this S3 bucket instead of asking for one of their own. Yes this happened.
"Find where this secret is used so I can rotate it without downtime" Well, unless you are scanning all our Github repos, Kubernetes secret and containers, you are going to miss the fact this secret was manually loaded into Kubernetes/loaded into flat file in Docker container or stored in some random secret manager none of us are even aware of.
""Why did database costs spike yesterday?" -> Identifies expensive queries, shows optimization options, implements fixes
How? Likely it's because of a bad schema or a lack of understanding of ORMs. The fix is going to be some PR somewhere, sent to a dev who probably does not understand what they are reviewing.
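The "identifies expensive queries" part is usually just a pg_stat_statements query anyway - something like the sketch below, assuming Postgres 13+ and psycopg2 with a connection string in the environment. The hard part is exactly what I said: the fix lands as a PR on a dev who doesn't understand it.

    # Minimal sketch, assuming Postgres 13+ with the pg_stat_statements
    # extension enabled. Connection string comes from the environment.
    import os
    import psycopg2

    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    with conn, conn.cursor() as cur:
        cur.execute("""
            SELECT query, calls, total_exec_time, mean_exec_time
            FROM pg_stat_statements
            ORDER BY total_exec_time DESC
            LIMIT 10
        """)
        for query, calls, total_ms, mean_ms in cur.fetchall():
            print(f"{total_ms:10.1f} ms total  {mean_ms:8.1f} ms/call  {calls:6d} calls  {query[:80]}")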
Most of our headaches come from the fact that devs almost never give a shit about Ops, their bosses don't give a shit about Ops, and Ops is trying desperately to keep this train, which is on fire, from derailing. We don't need AI YOLOing more stuff into prod; we need AI to tell their bosses what the downtime they are causing is costing our company, so maybe, just maybe, they will actually care.
We are always trying to learn more from our customers' feedback. What we've learned so far is that infra setups are all extremely different, and what works for some companies doesn't work for others. There are also vastly different company cultures around ops. Some companies value their ops team a lot; other companies burden them with way too much work. Our goal is to try to make that burden a little lighter :)
Also, as a daily AI user (claude code / codex subs), I'm not sure I want YOLO AIs anywhere near my infra.
I don't mind letting AIs help with infra, but only with the configs and infra-as-code files, and it will never have any form of access to anything outside its little box. It's significantly faster at writing out the port ranges for an FTP (don't ask) ingress than I am by hand.
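(For the curious: the FTP part is tedious because passive mode needs a whole extra port range opened. The config the AI spits out boils down to something like this, written here as a boto3 call with made-up group ID, CIDR, and passive range rather than the actual IaC:)

    # Hypothetical sketch of a passive-FTP ingress: port 21 plus a data range.
    # Security group ID, CIDR, and the 50000-51000 range are placeholders.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 21, "ToPort": 21,
             "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "FTP control"}]},
            {"IpProtocol": "tcp", "FromPort": 50000, "ToPort": 51000,
             "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "FTP passive data"}]},
        ],
    )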
That's because infrastructure is complicated. The AWS console isn't that bad (it's not great, and you should just use Terraform whenever possible because ClickOps is dull, error-prone work); there's just a lot to know in order to deploy infrastructure cost-effectively.
This is more like "we don't want to hire infra engineers who know what they're doing, so here's a tool to make suggestions that a knowledgeable engineer would make, vet, and apply. Just Trust Us."
I know dang is going to shake his finger at me for this, but come on.
Also:
> AWS emulator
isn't doing you any favors. I, too, have tried LocalStack and I can tell you firsthand it is not an AWS emulator. That doesn't even get into the fact that AWS is not DevOps, so what's up: is it AWS-only, or does it have GCP emulation too?
That's my whole point about the leading observation: without proper expectation management, how could anyone who spots this Launch HN possibly know if they should spend the time to book a call with you?
You're right that the bar is higher for Launch HNs (I wrote about this here: https://news.ycombinator.com/item?id=39633270) - but it's not uncommon for a startup to have a working product and real customers and yet have a home page that just says "book a call".
For some early-stage startups it makes sense to focus on iterating rapidly based on feedback from a few customers, and to defer building what used to be called the "whole product" (including self-serve features, a complete website, etc.) until later. It's simply about prioritizing higher-risk things and deferring lower-risk things.
I believe this is especially true for enterprise products, since deployment, onboarding, etc. are more complex and usually require personal interaction (at least in the early stages).
In such cases, a Launch HN can still make sense because the startup is real, the product is real, and there are real customers. But since the product can't be tried out publicly, I tell the founders they need a good demo video, and I usually tell them to add to their text an explanation of why the product isn't publicly available yet, as well as an invitation to contact them if people want to know more or want to be an early adopter. (You'll notice that both of those things are present in the text above!)
https://www.uspto.gov/trademarks/search/likelihood-confusion
> Trademarks don’t have to be identical to be confusingly similar. Instead, they could just be similar in sound, appearance, or meaning, or could create a similar commercial impression.