59 points by adityaathalye 8 hours ago | 22 comments
farx 55 minutes ago
This whole article smells a bit of someone being salty they couldn't sell their software.

Having worked in corporate on vaguely software-buying-related stuff, I am confused as to why so many small companies think an enterprise would be excited to go with them.

Even if I love your product, how do I pitch to the powers that be that we replace something we are already paying for with this new thing? The company might make billions but I've always had to fight for my budgets.

And tell me again why we should bet our core operations on a two man outfit with six months runway? What happens when you pivot? What happens when our competitor acquires you? What happens when you go on a transatlantic flight and a key expires?

Selling to enterprise early on is a poisoned chalice as well. They have much larger teams, so you'll be dealing with a horde of product owners, compliance specialists, and data privacy experts who might never touch your product but come armed with Excel sheets of 300 gnarly questions. Not to mention just getting the bills paid can be a huge fight.

It will drag you into their orbit, especially if 80% of your revenue is from a single customer. Soon your other customers will start going to someone who actually has time to care about them. And by then there's been a political shift in-house and the new VP of X gets a quote for an outsourcing bundle from his squash buddy at one of the big system integrators. Your line item gets bundled in to justify the cost, even though it's not even relevant. And that's the end of your company.

If you do want to sell, treat the enterprise like an ecosystem of SMEs, find a department or team who are more innovative and sell to them behind the backs of enterprise IT. Once you've entrenched yourself and the users love you, then you can expand to other teams and eventually enterprise IT will be forced to negotiate with you for a license and do the compliance dance. But even so this will take years of effort and luck.

somat 6 hours ago
"When the software is being written by agents as much as by humans, the familiar-language argument is the weakest it has ever been - an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language's semantics across releases."

Isn't familiarity with the language even more the case with an LLM? The language they do best with is the one with the largest corpus in the training set.

dgb23 5 hours ago
Familiarity matters to some degree. But there are diminishing returns I think.

Stability, consistency and simplicity are much more important than this notion of familiarity (there's lots of code to train on) as long as the corpus is sufficiently large. Another important one is how clear and accessible libraries, especially standard libraries, are.

Take Zig for example. Very explicit and clear language, easy access to the std lib. For a young language it is consistent in its style. An agent can write reasonable Zig code and debug issues from tests. However, it is still unstable and APIs change, so LLMs get regularly confused.

Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.

The thing with Clojure is also that it's a very expressive and very dynamic language. You can hook an agent up to the REPL and it can very quickly validate or explore things. With most other languages it needs to change a file (multiple, more complex operations), then write an explicit test, then run that test to get the same result as "defn this function and run some invocations".

aleph_minus_one 1 hour ago
> Languages and ecosystems that are more mature and take stability very seriously, like Go or Clojure, don't have the problem of "LLM hallucinates APIs" nearly as much.

Counterexample: the Wolfram programming language (better known to many people from the Mathematica computer algebra system).

It is incredibly mature and takes stability very seriously, but in my experience LLMs tend to hallucinate a lot when you ask them to write Wolfram or Mathematica code.

I see the reason in two points:

1. There exists less Wolfram/Mathematica code online than for many other popular programming languages.

2. Code in Wolfram is often very concise; it is therefore less forgiving of "somewhat correct" code (which is in my opinion mostly a good thing), so LLMs often struggle to write Wolfram/Mathematica code.

ehnto 5 hours ago
And they're very sensitive to new releases, often making it difficult to work with after a major release of a framework for example. Tripping up on minor stuff like new functions, changes in signatures etc.

A stable mature framework then is the best case scenario. New frameworks or rapidly changing frameworks will be difficult, wasting lots of tokens on discovery and corrections.

bilekas 4 hours ago
Yes, I'd agree that from the perspective of the model, one cohesive, well-established language would be more reliable. The nightmare scenario is an enterprise suite with a hodgepodge of every language known to man, all mangled together because the frontier model of the day decided Haskell would be the most efficient when compiled to WebAssembly, and some poor intern has to fix a bug by hand because that costs 100x less than rerunning the LLM to patch it.
throwaway2037 2 hours ago

    > Clojure was not a hiring barrier - it was a hiring filter.
It makes me think about this HN comment: https://news.ycombinator.com/item?id=11933250

    > Jane Street Capital's Yaron Minsky once said that contrary to popular belief hiring for OCaml developers was easier because the signal to noise ratio in the OCaml community is so much better than other, more approachable languages.
I saw a YouTube video years ago that featured Yaron Minsky. He made similar points. In short, some programming languages are like catnip for excellent programmers.
cucumber3732842 1 hour ago
>In short, some programming languages are like catnip for excellent programmers.

I think that misses the point.

Things that are hard have a higher percentage of people who don't need it to be easy.

If you're a "good" programmer you don't need the "community support" (i.e. a bunch of stuff to tell you why you should do things one way or the other in your particular language), so you're free to choose niche languages based on other factors, and in turn there will be more good programmers programming in those languages.

You see this in all sorts of subjects not just programming.

harrouet 1 hour ago
"Nobody gets fired for buying IBM"

This is still true today. Gartner makes a living out of it. Buyers always prefer the "familiar" product over being successful with the right solution.

Fortunately, history shows that those who do their math right can end up extremely successful: Google running its DB servers on commodity Linux hardware, AWS developing its own network equipment and protocols, etc. It takes guts, but when it works it leaves the competition years behind.

whynotmaybe 32 minutes ago
Well, in Quebec, the driver's insurance agency (SAAQ) decided to go with SAP and the major bosses were fired.

The cost of the migration was supposed to be $500 million, and it's now estimated at $1.1 billion.

But, they weren't fired because of SAP, they were fired because they lied to the government about the true cost.

xtiansimon 30 minutes ago
> “…the buyer bought what was familiar to them, not what was right.”

This friction, and the line dividing solutions from consulting, gave me an idea: they're describing conditions where the LLM revolution might track with the desktop revolution. Companies, groups within companies, and small businesses will DIY it and say good enough.

dangus 17 minutes ago
Except not really when big enterprise needs another party to hold blame and prove compliance to regulations and standards to auditors and customers.

When you hire a big company like Microsoft to handle some enterprise function of your business, you have someone who is already certified in whatever regulatory thing you need, and you have someone big enough to sue if they mess up.

I can vibecode Google Drive in a weekend but I can’t vibecode their HIPAA compliance and various certifications.

adityaathalye 4 hours ago
Yeah, "Nobody ever got fired for purchasing IBM"... a story as old as time itself.

But that is the "fear" side of the enterprise sales equation... The "greed" side of it is for the buyer to make the long / short hedge.

The exec who gets the value of the working product can potentially come out shining, when their peers will be furiously backpedalling next year. And this consummate exec can do it by name-associating with their "main bet" which is optically great for the immediate term but totally out of their control (because big corp vendor will drag its feet like every SAP integration failure they've seen), and feeling a sense of agency by running an off-books skunkworks project that actually works and saves the day.

A fine needle to thread for the upstart, but better than standing outside the game.

ianpurton 1 hour ago
For context, this is the author's website: https://axonlore.com/

So while it's fair to say enterprise users buy safety, if he's referring to his own product I would offer the following.

He's in the AI tool space, i.e. a better RAG. So you're selling to AI developers, and developers nearly always go open source first.

If they can't find an open source solution or if they don't even look, they prefer to build it themselves.

For this kind of product most enterprise buyers won't understand its benefits; you have to get the developers interested first.

And finally, in this market, you are 1 prompt away from someone cloning your whole business and calling it openaxon or something like that.

It's a tough time to be a software startup.

netcan 1 hour ago
Imo, there is a real question about the value of better here. Also, the ability and likelihood of the enterprise to actually leverage better.

This dynamic is not new. Unsophisticated enterprise buyers making bad decisions in a bad way. We haven't had an overwhelming market discipline come down though.

Do these enterprises actually need "good?"

nottorp 2 hours ago
> The wiki is not the thing you add AI to. The wiki is the thing AI replaces.

HN discussions seem to miss this. What LLMs are, before you use them for agentic something, is a lossy compression of a large text corpus.

The original wikis have to survive so you can have access to the non-lossy version, though.

troelsSteegin 23 minutes ago
A strong appetite for familiarity implies a desire to avoid effort. Effort - thinking, negotiating, planning, testing. Effort is cost.

The author has a new thing which is different - unfamiliar - and ostensibly better. To a customer, when is a claim of better credible, and when is better really better? How does better measure up as benefit?

The challenge for any product story is to a) illuminate the need - why is the status quo intolerable and b) communicate the benefit tangibly to your audience. That the audience thinks your new thing is worth the effort depends on them understanding the new thing, feeling the need, and feeling good about the effort needed to exploit your thing. You'd like to get to your customer saying "I want that".

I think the specific question for axonlore.com is communicating benefit - how does it impact whatever workflows it serves? The website is a "thing" story, vs a benefit story in my view. I like "enterprise intelligence" as a thing, but it's a tough product. It inevitably implies culture change, and in the decision making space, the key people think they are intelligent enough already -- they want to scale themselves. Someone mentioned "better RAG" - maybe the story is how agents can perform better and more cost effectively. I am not clear that "the market" knows that it needs that yet.

I don't think "familiarity" is the right framing. Application automation, or workflow automation, or whatever the enterprise framing is of agentic solution generation, is to me a question of variance and effort. Variance in the quality of a work product and the net effort to produce it. Variance is the complement of familiar.

- high variance / low effort: prototypes

- low variance / low effort: automating anything repetitive and complicated

- low variance / high effort: demonstrated need for precision and or reliability

- high variance / high effort: when there seems like potential huge upside, or existential risk.

From an IT perspective, the enterprise status quo is toward low variance/high effort. The market "want" here now with "agentic" seems to be the benefit of low variance/low effort solutions ... where, in enterprise, getting an adequate solution is no longer gated on negotiating with or relying on IT or dev. Ultimately, I think enterprises want low variance, low effort operations -- customers of enterprise customers pay for low variance. I think an agentic-IT solution question will be how confidently one can iterate and converge to that from whatever is delivered in the first pass. What's the ultimate effort of getting something "right enough"?

ilikerashers 2 hours ago
Understood that this is a pitch for his own platform (which is fair enough), there is a mixture of a few things here which are common tech tropes.

- Enterprise buyers are risk averse and buy the wrong thing

- Language X is better because the people that use it are smarter

- New tech is difficult for established players

Not really a fresh take but at least it's well written.

egorfine 4 hours ago
> The category has never once, in sixty years, produced a product that reliably made good

In the same article the author was mentioning a few expert systems from the past that were quite obviously successful.

> on the promise printed on its marketing

Ah, _that_ promise. That promise is never fulfilled anywhere, nor is it expected to be.

bilekas 4 hours ago
Yeah, I don't quite get his point here. He seems to be complaining that enterprise companies buy from other enterprises and larger companies instead of him. It's a tale as old as time.

Enterprises buy from large companies because those large companies come with support teams, liability, and expertise that you don't need to manage internally.

It's rare that I read an article that actively annoys me, but there's something about how this is written that seems a little arrogant.

egorfine 4 hours ago
> seems a little arrogant

A little. But it's a nice article nevertheless.

JSR_FDED 6 hours ago
The core insight that enterprises select products on familiarity over anything else, is valuable. I’m going to keep it in mind for future customer engagements.
xivzgrev 5 hours ago
That's just human nature, to prefer what's familiar.

The insight here is that this also still applies to huge enterprise contracts where supposedly more rational decision making should apply.

grebc 4 hours ago
Not just enterprise, any human organisation.

Also sunk costs “should in theory” never be considered but I’ve only ever seen sunk costs considered.

cadamsdotcom 1 hour ago
> an LLM does not care whether your codebase is Java or Clojure. It cares about the token efficiency of the code, the structural regularity of the data, the stability of the language's semantics across releases.

Huh? All current and previous-gen models are most effective when coding in languages with the most training data.

While I agree the newest frontier model may be smart enough to reason at a lower level and be agnostic, its "relatively dumber / less capable" forebears need lots of examples to pattern match from.

Familiarity once again!

avereveard 5 hours ago
Eh, "the enemy" section skips an important bit that was spelled out in the intro by the buyer, and wasn't listened to: if the small vendor goes bust, who maintains the system afterwards? If you plan in 10-year cycles, greenfield buys look scary.

That's why VCs look favorably on startups which go through the motions of setting up a partner-led sales channel. An established partner taking on maintenance contracts bridges the lifecycle gap between the two realities.

But no, corporate is bad, I guess.

dgb23 4 hours ago
It's an interesting problem for small businesses that want to sell stuff that will be used and relied on for a very long time.

In a sense, they have to make themselves obsolete. Either by making sure they are a part of a larger network, or by making sure that the org itself can own the product or service.

egorfine 4 hours ago
> your system is not an intelligence tool, it is a compression primitive with a chat interface on top

One should not underestimate a "compression primitive with a chat interface". For certain tasks it is a superpower.

BrenBarn 6 hours ago
> And they put it succinctly: buying from a small innovative company is brave while buying from a big, well recognised name is an insurance policy and the risk-averse buyer must have the insurance.

As the article notes, the alternatives from the large companies suck. So this is like buying fire insurance from a company that promptly sets fire to your house. You are buying the insurance while knowing you will need it because the disaster is already happening.

sublinear 5 hours ago
> Enterprise knowledge has always been as much a human problem as a technology one. Nobody wants to do the structuring work, and every prior architecture demanded that somebody do the structuring work rather than their actual job

This is correct and very agreeable to everyone, but then, after some waffle, they write this:

> Structure, for the first time, can be produced from content instead of demanded from people

These quotes are very much at odds. Where is this structure and content supposed to come from if you just said that nobody makes it? Nowhere in that waffle is it explained clearly how this is really supposed to work. If you want to sell AI and not just grift, this is the part people are hung up on. Elsewhere in the article are stats on hallucination rates of the bigger offerings, and yet there's nothing to convince anyone this will do better other than a pinky promise.

dgb23 5 hours ago
I think the explanation comes later in the article:

"It is graph-native - not a vector database with graph features bolted on, not a document store with a graph view, but a graph at it's core - because the multi-hop question intelligent systems actually have to answer cannot be answered by cosine similarity over chunked text, no matter how much AI you paste on top."
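To illustrate the multi-hop point with a toy sketch (the entities and relations below are invented for illustration, not taken from the product): a chained question is a walk over labelled edges, something similarity search over independent text chunks cannot compose.

```python
# Toy knowledge graph: each entity maps relation names to target entities.
graph = {
    "Project Apollo": {"owned_by": "Team Atlas"},
    "Team Atlas": {"reports_to": "Dana"},
    "Dana": {"located_in": "Berlin"},
}

def hop(entity, relation):
    """Follow one labelled edge; returns None if the edge does not exist."""
    return graph.get(entity, {}).get(relation)

# "Where is the manager of the team that owns Project Apollo?"
# Three hops, each exact, with no similarity scoring involved.
team = hop("Project Apollo", "owned_by")
manager = hop(team, "reports_to")
city = hop(manager, "located_in")
print(city)  # Berlin
```

Each hop depends on the exact result of the previous one, which is why cosine similarity over chunks, which retrieves each fact independently, struggles with this shape of question.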

And

"It has a deterministic harness around its stochastic components. The language model proposes but the scaffolding verifies. Every inference, every tool call, every state change is captured in an immutable ledger as first-class data and this is what makes non-deterministic components safe to deploy where determinism is required."
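As a minimal sketch of what such a harness could look like (all names invented; the article gives no implementation details): a stochastic proposer, a deterministic verifier, and an append-only ledger that records every step as data.

```python
import time

ledger = []  # append-only: entries are recorded, never mutated or deleted

def record(kind, payload):
    """Capture every event as first-class data in the ledger."""
    ledger.append({"ts": time.time(), "kind": kind, "payload": payload})

def harness(propose, verify, task):
    """The model proposes; deterministic scaffolding decides whether to accept."""
    proposal = propose(task)
    record("proposal", {"task": task, "proposal": proposal})
    accepted = verify(proposal)
    record("verification", {"accepted": accepted})
    return proposal if accepted else None

# Stand-in for a language model: deterministic here, stochastic in reality.
result = harness(
    propose=lambda task: {"answer": task.upper()},
    verify=lambda p: isinstance(p.get("answer"), str),
    task="summarise the contract",
)
```

The point of the pattern is that the nondeterministic part never acts directly: every proposal passes through a check the scaffolding controls, and the ledger makes the whole exchange auditable after the fact.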

egorfine 4 hours ago
> The category error under all of this is the assumption that you can take a document library or a wiki [...] and make it intelligent by attaching a language model to it. But you cannot.

Imagine a model with a reliable 100M context window. Then all of a sudden you can.

> The information the intelligent answer needs was never in the wiki in the first place.

Oh well.