Business-wise, I think getting acquired was the right choice. Experimentation is too small & treacherous a space to build a great business in, and the broader Product Analytics space is overcrowded. Amplitude (YC 2012), to date, only has a $1.4B market cap.
Joining the hottest name next door gives Statsig a lot more room to explore. I look forward to their evolution.
Meanwhile Optimizely is a new partner in our agency portfolio.
Big Tech teams want to ship features fast, but measuring impact is messy. It usually requires experiments, and traditionally experiments need data scientists to ensure statistical validity, or "correctness". Without a baseline of correctness, the results cannot be trusted. Ensuring correctness in a process as complex as experimentation means the DS has to perform manual labor: debugging bad experiment setups, navigating incomplete data and legacy infrastructure, trying to compensate for errors and biases in post-analysis, etc. Even with great DS effort, cases still arise where Team A reports positive numbers & ships its feature while unknowingly tanking Team B's revenue, discovered only months later when a data scientist is tasked with tracing the cause.
Platforms like Statsig exist to lower the high cost of experimenting: enabling people to see a feature's potential impact before shipping (save money) while minimizing user frustrations (save time). They do so by building statistical correctness into the platform, eliminating common errors and issues at each stage of the process for each user type. Engineers set up experiments via SDK/UI with nudges and warnings for misconfigurations, while data scientists focus on higher-value work like metric design in SQL. PMs view shared dashboards and get automatic coordination notifications if their feature is breaking something. People still fight, but earlier and in the same "room," with fewer questions about what's real versus what's noise.
"Statsig"'s name represents this focus on reality: "statistically significant," separating real results from random noise. Statsig's platform, like others, lets companies define a standard for what they consider "real" impact across all experiments, while the platform quietly balances statistical correctness with user cost. The outcome is fewer data scientists to hire, less bad tooling to work around, less deep statistical knowledge required, and crucially, more trust & shared oversight.
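To make "statistically significant" concrete: here's a minimal sketch of the classic check behind that phrase, a two-proportion z-test on an A/B experiment. This is an illustrative textbook formula, not Statsig's actual methodology; the numbers in the usage line are made up.

```python
# Illustrative only: the basic test behind "statistically significant".
# Given conversion counts for control (A) and treatment (B), decide
# whether the observed lift is "real" or plausibly just random noise.
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (lift, two_sided_p_value) for B vs. A conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis "A and B convert identically"
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Hypothetical experiment: 5.0% vs 5.5% conversion on 20k users each
lift, p = two_proportion_z_test(conv_a=1000, n_a=20000, conv_b=1100, n_b=20000)
print(f"lift={lift:.4f}, p={p:.4f}")  # ship only if p < alpha, e.g. 0.05
```

A platform's job is everything around this formula: picking alpha once company-wide, correcting for peeking and multiple comparisons, and catching the misconfigurations that silently invalidate it.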
Is Statsig worth $1B to OpenAI? Maybe. There's an art and a science to product development, and Facebook's experimentation platform was central to their science. But it could be premature; I personally feel experimentation as an ideology best fits optimization problems that achieved strong product-market fit ages ago. However, it's been years since I've worked on experimentation; someone can correct me on any part of this answer.
Congrats to the Statsig team!
In reality, it's going to be enterprise and ads.
https://www.amazon.com/Unaccountability-Machine-Systems-Terr...
Oh, the CTO approved it, so we should blame them. No, not that CTO, the other CTO. So who decided on the final outcome? The CTO! So who's on first again?
If Mira Murati (CTO of OpenAI) has authority over their technical decisions, then it's an odd title. If I were talking with a CTO, I wouldn't expect another CTO to outrank or be able to overrule them.
In practice, these are just internal P&Ls.
Is Brockman now CTO over research specifically or is there going to be a weird dotted line?
Here you have what appears, per this article, to be 2 CTOs, but as I looked more into it, there are actually 3 (!!) (see here: https://x.com/snsf/status/1962939368085327923), one of whom (the B2B CTO) seems to report to the COO.
So in this context, you have 3 (!!) engineering organizations that don't terminate in a single engineering leader: "Apps" terminates at the "Apps" CEO (Fidji), the research org terminates (??) at Sama (overall CEO), and B2B terminates at the COO.
So either you have weird dotted lines to Brockman for each of these CTOs, or you are going to have a lot of internal customer relationships with no final point of escalation. That's definitely not common at this size, and unless these are all extremely independent organizations from a tech-stack perspective (they can't really be, since surely they all rely on the core LLMs...), there will be a lot more weird politics that are harder to resolve than if these organizations were all under one technical leader.
Of course, another alternative is that OAI is handing out titles for retention purposes and "CTO" will be heavily devalued as a title internally.
Reporting lines that ladder up to a line in the P&L.
[0] https://docs.anthropic.com/en/docs/claude-code/data-usage#te...
Hardly billions.
A sister [deleted] comment said: "He's an extremely well-known and deeply respected engineer, leader, and founder in the Seattle metro region. This is a key hire for OpenAI, and a good one."
A ton of companies will compete with OpenAI while its focus is divided among a hundred things. May a thousand flowers bloom!