177 points by warrenm 16 hours ago | 19 comments
AnotherGoodName 15 hours ago
I’ve been working on board game AI lately.

Fwiw nothing beats ‘implement the game logic in full (a huge amount of work) and, with pruning on some heuristics, look 50 moves ahead’. This is how chess engines work and how all good turn-based game AI works.
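
For anyone unfamiliar, the core of that approach is plain depth-limited minimax with alpha-beta pruning over your hand-built rules implementation. A minimal sketch in Python (legal_moves, apply, evaluate and is_terminal are the game-specific world model you have to write yourself):

    # Depth-limited alpha-beta search. The hard part is the world model:
    # the game-specific legal_moves/apply/evaluate this sketch assumes.
    def alphabeta(state, depth, alpha=float("-inf"), beta=float("inf"),
                  maximizing=True):
        if depth == 0 or state.is_terminal():
            return state.evaluate(), None      # heuristic score, no move
        best_move = None
        for move in state.legal_moves():
            child = state.apply(move)          # world model: next state
            score, _ = alphabeta(child, depth - 1, alpha, beta,
                                 not maximizing)
            if maximizing and score > alpha:
                alpha, best_move = score, move
            elif not maximizing and score < beta:
                beta, best_move = score, move
            if alpha >= beta:                  # prune this branch
                break
        return (alpha if maximizing else beta), best_move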

I’ve tried throwing masses of game state data at the latest models in PyTorch. Unusable. It makes really dumb moves. In fact one big issue is that it often suggests invalid moves, and the best way to avoid this is to implement the board game logic in full to validate them. At which point, why don’t I just do the above and scan ahead X moves, since I have to do the hard part of manually building the world model anyway?

One area where current AI is helping is with the heuristics themselves for evaluating the best moves when scanning ahead. You can feed in various game states, along with whether the player ultimately won the game, to train the values of the heuristics. You still need to implement the world model and the look-ahead to use those heuristics, though! When you hear of neural networks being used for Go or chess, this is where they are used. You still need to build the world model and brute-force scan ahead.
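
Concretely, that training step can be as simple as regressing a position evaluator against final outcomes. A toy PyTorch sketch (the feature extraction is the game-specific part; random tensors stand in for real positions here):

    import torch
    import torch.nn as nn

    # Toy value-heuristic training: positions labeled by eventual outcome.
    # In practice `states` come from your hand-built world model.
    states = torch.randn(1024, 64)                     # 1024 positions, 64 features
    outcomes = torch.randint(0, 2, (1024, 1)).float()  # 1 = player won

    model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for _ in range(100):
        opt.zero_grad()
        loss = loss_fn(model(states), outcomes)
        loss.backward()
        opt.step()

    # torch.sigmoid(model(x)) is now a win-probability heuristic for the
    # look-ahead search to call at its leaf nodes.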

One path I do want to try more: in theory, coding assistants should be able to read rulebooks and dynamically generate code to represent those rules. If you can do that part, the rest should be easy. I.e., it could be possible to throw rulebooks at an AI and have it play the game. It would generate a world model from the rulebook via coding assistants, then scan ahead more moves than humanly possible using that world model, evaluating against heuristics trained through trial and error.

Of course coding assistants aren’t at a point where you can throw rulebooks at them to generate an internal representation of game states. I should know. I just spent weeks building the game model even with a coding assistant.

smokel 14 hours ago
You probably know this, but things heavily depend on the type of board game you are trying to solve.

In Go, for instance, it does not help much to look 50 moves ahead. The complexity is way too high for this to be feasible, and determining who's ahead is far from trivial. It's in these situations where modern AI (reinforcement learning, deep neural networks) helps tremendously.

Also note that nobody said that using AI is easy.

AnotherGoodName 14 hours ago
AlphaGo (and Stockfish, which another commenter mentioned) still has to search ahead using a world model. The AI training just helps with the heuristics for pruning and evaluating that search.

The big fundamental blocker to a generic ‘can play any game’ AI is the manual implementation of the world model. If you read the AlphaGo paper you’ll see ‘we started with nothing but an implementation of the game rules’. That’s the part we’re missing. It’s done by humans.

moyix 14 hours ago
Note that MuZero did better than AlphaGo, without access to preprogrammed rules: https://en.wikipedia.org/wiki/MuZero
smokel 13 hours ago
Minor nitpick: it did not use preprogrammed rules for scanning through the search tree, but it does use preprogrammed rules to enforce that no illegal moves are made during play.
hulium 11 hours ago
During play, yes, obviously you need an implementation of the game to play it. But in its planning tree, no:

> MuZero only masks legal actions at the root of the search tree where the environment can be queried, but does not perform any masking within the search tree. This is possible because the network rapidly learns not to predict actions that never occur in the trajectories it is trained on.

https://arxiv.org/pdf/1911.08265
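
In code, the root-only masking amounts to zeroing the policy prior on illegal actions before the search starts, and nothing else inside the tree. A sketch (the network and MCTS internals are assumed):

    import numpy as np

    # MuZero-style root masking: legality is enforced only here, where
    # the real environment can be queried; inside the search tree the
    # learned model is trusted not to propose illegal actions.
    def root_priors(policy_logits, legal_actions, num_actions):
        mask = np.full(num_actions, -np.inf)
        mask[legal_actions] = 0.0            # keep only legal actions
        logits = policy_logits + mask
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()               # renormalized priors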

skywhopper 10 hours ago
That is exactly what the commenter was saying.
gnfargbl 9 hours ago
The more detailed clarification of what "preprogrammed rules" actually means in this case made the entire discussion significantly clearer to me. I think it was helpful.
Zacharias030 5 hours ago
It is consistent with what the commenter was saying.

In any case, for Go - with a mild amount of expert knowledge - this limitation is most likely quite irrelevant, except in very rare endgame situations or special superko setups, where the lack of legal moves or solutions pushes some probability onto moves that look like wishful thinking.

I think this is not a significant limitation of the work (not that any parent claimed otherwise). MuZero is acting in an environment with prescribed actions, it’s just “planning with a learned model” and without access to the simulation environment.

---

What I am less convinced by is the claim that MuZero reaches higher performance than previous AlphaZero variants. What is the comparison based on? Iso-FLOPs, iso-search-depth, iso-self-play-games, iso-wall-clock-time? What would make sense here?

Each AlphaGo variant was trained on some sort of embarrassingly parallel compute cluster, but every paper included the punchline for general audiences that “in just 30 hours” some performance level was reached.

CGamesPlay 3 hours ago
This is true, and MuZero's paper notes that it did better with less computation than AlphaZero. But it still used about 10x more computation to get there than AlphaGo, which was "bootstrapped" with human expert moves. I think this is very important context to anyone who is trying to implement an AI for their own game.
smokel 14 hours ago
Implementing a world model seems to be mostly solved by LLMs. Finding one that can be evaluated fast enough to actually solve games is extremely hard, for humans and AI alike.
skywhopper 10 hours ago
What are you talking about?
daxfohl 13 hours ago
Yeah, I can't even get them to retain a simple state. I've tried having them run a maze, but instead of giving them the whole maze up front, I have them move one step at a time, tell them which directions are open from that square and ask for the next move, etc.

After a few moves they get hopelessly lost and just start wandering back and forth in a loop. Even when I prompt them explicitly to serialize a state representation of the maze after each step, and even if I prune the old context so they don't get tripped up on old state representations, they still get flustered and corrupt the state or lose track of things eventually.

They get the concept: if I explain the challenge and ask them to write a program that solves such a maze step-by-step like that, they can do it successfully first-try! But maintaining the state internally, they still seem to struggle.
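
The harness itself is trivial, for what it's worth; the failure is all on the model's side. Roughly what I was doing, as a sketch (Maze and chat() are illustrative stand-ins for my maze code and the LLM API, not a real library):

    # The model never sees the whole maze, only one observation per step;
    # all state-tracking is left to the model.
    maze = Maze.random(10, 10)
    pos = maze.start
    messages = [{"role": "system",
                 "content": "You are in a maze. Reach the exit. After each "
                            "observation, reply with exactly one move: "
                            "north, south, east, or west."}]

    for step in range(200):
        messages.append({"role": "user",
                         "content": f"Open directions: {maze.open_directions(pos)}. "
                                    f"Your move?"})
        move = chat(messages)            # model must remember the maze itself
        messages.append({"role": "assistant", "content": move})
        pos = maze.step(pos, move)
        if pos == maze.exit:
            break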

nomadpenguin 13 hours ago
There are specialized architectures (the Tolman-Eichenbaum Machine)* that are able to complete this kind of task. Interestingly, once trained, their activations look strikingly similar to place and grid cells in real brains. The team were also able to show (in a separate paper) that the TEM is mathematically equivalent to a transformer.

* https://www.sciencedirect.com/science/article/pii/S009286742...

warrenm 13 hours ago
>I've tried having them run a maze, but instead of giving them the whole maze up front, I have them move one step at a time, tell them which directions are open from that square and ask for the next move, etc.

Presuming these are 'typical' mazes (like you find in a garden or local corn field in late fall), why not have the bot run the known-correct solving algorithm (or its mirror)?

daxfohl 13 hours ago
Like I said, they can implement the algorithm to solve it, but when forced to maintain the state themselves, either internally or explicitly in the context, they are unable to do so and get lost.

Similarly, if you ask them to write a Sudoku solver, they have no problem. And if you ask an online model to solve a Sudoku, it'll write a Sudoku solver in the background and use that to solve it. But (at least the last time I tried, a year ago) if you ask them to solve one step-by-step using pure reasoning, without writing a program, they start spewing out all kinds of nonsense (though they humorously cheat: they'll still spit out the correct answer at the end).

prewett 7 hours ago
That’s because there are lots of maze-solving algorithms on the web, so it’s easy to spit one back at you. But since they don’t actually understand how to solve a maze, or even how to apply an algorithm one step at a time, it doesn’t work well.
adventured 12 hours ago
So if you push e.g. Claude Sonnet 4 or Opus 4.1 into a maze scenario, have it record its own pathing as it goes, and then refresh and feed the next Claude the progress so far, would that solve for the inability to maintain long-duration context in such maze cases?

I make Claude do that on every project. I call them Notes for Future Claude and have it write notes for itself because of how quickly context accuracy erodes. It tends to write rather amusing notes to itself in my experience.

daxfohl 11 hours ago
This was from a few months ago, so things may be different now. I only used OpenAI models, and o3 did by far the best. GPT-4o's performance was equivalent on the basic scenario where it just moved one move at a time (which was still pretty good, all things considered), but when I started having it summarize state and such, o3 was able to use that to improve performance, whereas 4o actually got worse.

But yeah, that's one of the things I tried. "Your turn is over. Please summarize everything you have learned about the maze so someone else can pick up where you left off." It did okay, but it often included superfluous information; it sometimes forgot to include the current orientation (the maze actions were "move forward", "turn right", and "turn left", so knowing the current orientation was important); and it always forgot to include instructions on how to interpret the state, in particular which absolute direction corresponded to an increase or decrease of which grid index.

I even tried to coax it into defining a formal state representation and "instructions for an LLM to use it" up-front, to see if it would remember to include the direction/index correspondence, but it never did. It was amusing actually; it was apparent it was just doing whatever I told it and not thinking for itself. Something like

"Do you think you should include a map in the state representation? Would that be useful?"

"Yes, great idea! Here is a field for a map, and an algorithm to build it"

"Do you think a map would be too much information?"

"Yes, great consideration! I have removed the map field"

"No, I'm asking you. You're the one that's going to use this. Do you want a map or not?"

"It's up to you! I can implement it however you like!"

Mars008 5 hours ago
> have it write notes for itself because of how quickly context accuracy erodes. It tends to write rather amusing notes to itself in my experience.

Just wondering: would it help to ask it to write the notes to someone else? Because the model itself wasn't in its training set, this may be confusing.

kqr 3 hours ago
My experience in trying to get them to play text adventures[1] is similar. I had to prompt with very specific leading questions to give them a decent chance of even recognising the main objective after the first few steps.

[1]: https://entropicthoughts.com/getting-an-llm-to-play-text-adv...

yberreby 6 hours ago
It took me a second to realize you were talking about prompting an LLM. This is fundamentally different from what the parent is doing. "AI" is so much more than "talking to a pretrained LLM."
PeterStuer 3 hours ago
"Elephants don't play chess" ;)

You have a tiny, completely known, deterministic, rule-based 'world'. 'Reasoning' forwards over that is trivial.

Now try your approach on much more 'fuzzy', incompletely and ill-defined environments, e.g. natural language production, and watch it go down in flames.

Different problems need different solutions. While current frontier LLMs show surprising results in emergent shallow and linguistic reasoning, they are far from deep abstract logical reasoning. A SOTA theorem prover, on the other hand, can excel at that, but can still struggle to produce a coherent sentence.

I think most have always agreed that for certain tasks, an abstraction over which one can 'reason' is required. People differ in opinion over whether this faculty is to be 'crafted' in, or whether it is possible to have it emerge implicitly, and more robustly, from observations and interactions.

https://people.csail.mit.edu/brooks/papers/elephants.pdf

ChaitanyaSai 21 minutes ago
Interesting! Documenting this anywhere?
coeneedell 14 hours ago
IIRC the rules system for Magic: The Gathering Arena is generated by a sort of compiler fed the rules. You might not even need a modern coding assistant: build out something reasonable in a well-specified DSL, then have people (or an LLM after fine-tuning) transform rulebooks into the DSL.
Crespyl 9 hours ago
They have an interesting write up here: https://magic.wizards.com/en/news/mtg-arena/on-whiteboards-n...

There's a lisp variant involved, and IIRC even a parser that reads the card text to auto-generate the rules code for most of the cards.
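
The general shape is easy to imagine even though Arena's actual DSL isn't public. A purely illustrative Python sketch of the idea: card text compiled to a small expression tree that a generic rules engine interprets.

    # Purely illustrative, NOT Arena's real format: a card's rules text
    # as data, plus a tiny interpreter with one handler per primitive.
    lightning_strike = (
        "card", "Lightning Strike",
        ("cost", "1R"),
        ("effect", ("deal_damage", 3, ("target", "any"))),
    )

    def interpret(expr, game, chooser):
        op, *args = expr
        if op == "deal_damage":
            amount, target_expr = args
            target = interpret(target_expr, game, chooser)
            return game.damage(target, amount)
        if op == "target":
            return chooser.pick(kind=args[0])
        # ... one small handler per rule primitive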

bubblyworld 11 hours ago
Something to consider is that while it's really hard to implement a decent NN-based algorithm like AlphaZero for your game, you get the benefit that model checkpoints give you a range of skill levels to play against as you train it.

Handicapping traditional tree search produces really terrible results, imo. It's common for weak chess engines to be weak for stupid reasons (they just hang pieces, make random unnatural moves, miss blatant threats, etc.). Playing weak versions of Leela Chess really "feels" like playing a (bad) human opponent by contrast.

Maybe the juice isn't worth the squeeze. It's definitely a ton of work to get right.

deepsquirrelnet 10 hours ago
> I’ve tried throwing masses of game state data at the latest models in PyTorch. Unusable. It makes really dumb moves. In fact one big issue is that it often suggests invalid moves, and the best way to avoid this is to implement the board game logic in full to validate them.

It sounds like you need RL. You could try setting up some reward functions with evaluators. I'm not sure what your architecture is, but it's something to try.

robertlagrant 10 hours ago
How does this experience translate to non-turn-based games? AlphaStar presumably is doing something other than searching all the possible moves. Why would whatever it does not translate to turn-based games?
red75prime 12 hours ago
It would be nice if you could train a decent model on a $1000 (or so) budget, but for now it seems unlikely.
jjk7 13 hours ago
Interesting, the parallels between LLM development and psychology and spirituality.

To have true thinking, you need an internal adversary challenging thoughts and beliefs. To look 50 moves ahead, you need to simulate the adversary's moves... Duality

GaggiX 14 hours ago
>This is how chess engines work

All the strongest chess engines have at least one neural network to evaluate positions, including Stockfish, and this impacts the search window.

>how all good turn-based game AI works

That's not really true, just think of Go.

skywhopper 10 hours ago
??? Chess engines and Go engines have as a baseline a world model of the state of the game and what moves are legal.
GaggiX 9 hours ago
>Fwiw nothing beats ‘implement the game logic in full (a huge amount of work) and, with pruning on some heuristics, look 50 moves ahead’. This is how chess engines work and how all good turn-based game AI works.

Just read the parent comment.

maxvij 39 minutes ago
I stumbled upon a lecture by Josh Tenenbaum (MIT) yesterday. Starting from minute 19 he talks about world models, and how we're nowhere near 'real AI'. The lecture is from 7 years ago; I wonder what a more recent take from him on this topic would be. https://youtu.be/TFyAEHk5asY?si=lZfjeF7t66FhkdSZ&t=1157
Animats 10 hours ago
Important subject, useless article.

Some new ideas in world models are beginning to work. Using Gaussian splatting as a world model has had some recent success.[1] It's a representation that's somewhat tolerant of areas where there's not enough information. Some of the systems that generate video from images work this way.

[1] https://katjaschwarz.github.io/ggs/

ryukoposting 13 hours ago
A footnote in the GPT-5 announcement was that you can now give OpenAI's API a context-free grammar that the LLM must follow. One way of thinking about this feature is that it's a user-defined world model. You could tell the model "the sky is" => "blue" for example.

Obviously you can't actually use this feature as a true world model. There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

The basic principle sounds like what we're looking for, though: a strict automaton or rule set that steers the model's output reliably and provably. Perhaps a similar kind of thing that operates on neurons, rather than tokens? Hmm.
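
Mechanically, the constraint is applied at sampling time: at each step, every token the grammar can't accept next is masked out before sampling. A sketch of the general technique (not OpenAI's implementation; the grammar object's API here is assumed):

    import torch

    # Generic grammar-constrained decoding step: disallowed tokens get
    # -inf logits, then we sample from what's left. Note the choice is
    # locally greedy; the grammar can steer the model into branches the
    # unconstrained model would consider unlikely.
    def constrained_step(logits, grammar, prefix):
        allowed = grammar.allowed_next_tokens(prefix)   # assumed API
        mask = torch.full_like(logits, float("-inf"))
        mask[list(allowed)] = 0.0
        probs = torch.softmax(logits + mask, dim=-1)
        return torch.multinomial(probs, num_samples=1).item()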

spindump8930 6 hours ago
It's good to have this support in APIs, but grammar-constrained decoding has been around for quite a while, even before the contemporary LLM era (e.g. [1] is similar in spirit). Local vs. global planning is a huge issue here, though: if you enforce local constraints at decoding time, an LLM might be forced to make suboptimal token decisions. This can result in a "global" (i.e. all-tokens) miss, where the probability of the constrained output is far lower than the probability of the optimal response (which may also conform to the grammar). Algorithms like beam search can alleviate this, but it's still difficult. This is one of the reasons that XML tags work better than JSON outputs: fewer constraints on "weird" tokens.

[1] https://aclanthology.org/P17-2012/

ijk 5 hours ago
Oh, OpenAI finally added it? Structured generation has been available in things like llama.cpp and Instructor for a while, so I was wondering if they were going to get around to adding it.

In the examples I've seen, it's not something you can define an entire world model in, but you can sure constrain the immediate action space so the model does something sensible.

nxobject 13 hours ago
> There's just too much stuff you have to codify, and basing such a system on tokens is inherently limiting.

As a complete amateur who works in embedded: I imagine the restriction to a linear, ordered input stream is fundamentally limiting as well, even with the use of attention layers.

gavmor 13 hours ago
I suspect something more akin to a LoRA and/or circuit tracing will help us keep track of the truth.
dejongh 13 hours ago
This is a very interesting article. The concept "run an experiment in your head and predict the outcome" is a capability that AIs must have to attain some kind of general intelligence. Anyway, read the article, it's great.
thsvrrck 1 hour ago
It would also be deeply interesting to see the thinking tokens then!
lsy 10 hours ago
A world model itself, in its particulars, isn't as important as three things: the tacit understanding that the "world model" is necessarily incomplete and subordinate to the world itself; the recognition that there are sensory inputs from the world that should prompt you to adjust your world model; and the capacity and commitment to adjust that model in a way that maintains a level of coherence. With those, you don't need a complex model; you could start with a very simple but flexible one that the system adjusts over time.

But I don't think we have a hint of a proposal for how to incorporate even the first part of that into our current systems.

cognitif 4 hours ago
Sounds like the “open-world assumption” used in RDF, with coherence maintained by OWL. (Well, at least it’s a hint of a proposal.)
BariumBlue 13 hours ago
> When researchers attempt to recover [something like] a coherent computational representation of an Othello game board they instead find [bags of heuristics]

Humans don't exactly have a full representation of board space in their heads either. Notably, chess masters and amateurs can memorize completely random board positions about as well as each other. I'd think neither could memorize 64 chess pieces in random positions on a board.

AIPedant 11 hours ago
That's not what "coherent computational representation" means in this context. It means being able to reliably apply the rules of Othello / chess / etc. to the current state of the board. Any competent amateur can do this without studying thousands of board positions - in fact you can do it just from the written rules, without ever having seen a game - because they have a causal, non-heuristic understanding of the rules. LLMs have much more trouble: they don't learn how knights move, they learn how white knights move when they're in position d5, then in position g4, and so on: a "bag of heuristics."
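
The contrast is stark because the causal rule itself is tiny. A knight-move generator, position- and color-independent, is a few lines:

    # The rule LLMs fail to extract: a knight's moves are the same eight
    # offsets from any square, for either color.
    def knight_moves(file, rank):               # 0-7 board coordinates
        offsets = [(1, 2), (2, 1), (2, -1), (1, -2),
                   (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
        return [(file + df, rank + dr)
                for df, dr in offsets
                if 0 <= file + df < 8 and 0 <= rank + dr < 8]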

Notably this is also true for MuZero, though at that scale the heuristics become "dense" enough that an apparent causal understanding seems to emerge. But it is quite brittle: my favorite example involves the arcade game Breakout, where MuZero can attain superhuman performance on Level 1 and still be unable to do Level 2. Healthy human children are not like this - they figure out "the trick" in Level 1 and quickly generalize.

mym1990 13 hours ago
For whatever it's worth, I bet the chess master would be able to instantly identify that it is a random/invalid board position, aka an invalid world state. I think the experiment you are alluding to gave both groups a very limited amount of time to look at the board. Given enough time, both groups would definitely be able to memorize 64 pieces on a board.
aurelwu 12 hours ago
I do think even the most amateur of amateurs would be able to recognize instantly that a chess board with 64 pieces on it is an invalid game state.
yellow_postit 13 hours ago
Not mentioning Fei-Fei Li and her startup explicitly focused on world models is an interesting choice by the author.
srush 14 hours ago
A recent tutorial video from one of the authors featured in this article:

Evaluating AI's World Models (https://www.youtube.com/watch?v=hguIUmMsvA4)

Goes into details about several of the challenges discussed.

mingtianzhang 11 hours ago
I used to work on an idea that instead of modelling the whole world, you can build your own solipsistic model: https://openreview.net/pdf?id=fPaGSuQRP1O
chongli 10 hours ago
A little bit disappointed that there was no mention of the Frame Problem [1], a major challenge with world models. The issue arises when you're building an AI agent with the ability to move through and act in the real world, updating its world model as it does so.

The challenge comes from the problem of finding a set of axioms that tell you how to make predictions about what changes a particular action will cause in the world. Naively, we might suppose that the laws of physics would be suitable axioms but this immediately turns out to be computationally intractable. So then we're stuck trying to find a set of heuristics, as alluded to in the article.

Without being a neuroscientist, I think it's likely that at least some of the axioms of our own world models (as human beings) are built into the structure of our brains, rather than being knowledge that we learn as we grow up. We know, for example, that our visual systems have a great deal of built-in assumptions about the way light works and how objects appear under different lighting conditions, a fact revealed to us by optical illusions such as the checker shadow illusion [2]. Building a complete set of heuristics such as this does not sound impossible, just somewhat obscure and unexplored as an engineering problem, and does not seem to be related whatsoever to currently popular means of building and training AI models.

[1] https://plato.stanford.edu/entries/frame-problem/

[2] https://en.wikipedia.org/wiki/Checker_shadow_illusion

jonbaer 13 hours ago
"You’re carrying around in your head a model of how the world works" (or so you thought) ... the real AI is in a) how fast you can realize it's changed and b) how fast you can adapt. This bit isn't being optimized, it's being dragged out.
red75prime 13 hours ago
> This bit isn't being optimized, it's being dragged out.

Of course, it is being optimized. People are working on increasing the sample efficiency. A simple search on Google Scholar will confirm it.

morpheos137 8 hours ago
For world models to be efficient, you need the model to self-assemble.
nathan_douglas 14 hours ago
I'm sure neural networks are a great tool here, but I don't know how the training would proceed effectively off "mere data"; too much of the data we have is incomplete, inaccurate, or outright fantasy or misinformation or out of the ordinary.

I could see this being the domain of fleets of robots, many different styles, compositions, materials, etc. Send ten robots in to survey a room - drones, crawlers, dogs, rollers, etc - they'll bang against things, knock things off shelves, illuminate corners, etc. The aggregate of their observations is the useful output, kinda like networked toddlers.

And yeah, unfortunately, sometimes this means you just need to send a swarm of robots to attack a city bus... or a bank... to "learn how things work." Or an internment camp. Don't get upset, guy, we're building a world model.

Anybody wanna give me VC money to work on this?

ACCount37 14 hours ago
When you're training an AI, that "mere data" adds up. Random error averages out, getting closer to zero with every data point. Systematic error leaks information about the system that keeps making the error.

A Harry Potter book doesn't ruin an AI's world model by contaminating reality with fantasy. It gives it valuable data points on human culture and imagination and fiction tropes and commercially successful creative works. All of which is a part of the broader "reality" the AI is trying to grasp the shape of as it learns from the vast unstructured dataset.

nathan_douglas 14 hours ago
You're absolutely correct, of course. I was musing during down time in a meeting and turned it into a joke instead of engaging my faculties :)
multjoy 12 hours ago
The AI learns nothing from Harry Potter other than the statistical likelihood of one token appearing after another.

The AI is trying to grasp nothing.

ACCount37 11 hours ago
Any sufficiently advanced statistical model is a world model.

If you think that what your own brain is doing isn't fancy statistics plugged into a prediction engine, I have some news for you.

startupsfail 5 hours ago
It's interesting that an AI doesn't necessarily need to carry a model of the actual physical world around with it; it can be any imaginary one.

While biological systems (or other physical agents) do need to model the world around them to be able to operate.

tsunamifury 14 hours ago
The end of Westworld basically put forth that the only way we could stabilize the world is if we just destroyed it and moved everything to a parallel simulation. Since early attempts at world modeling failed due to the complexity of outliers, the only way AI could handle a world model was to get rid of the real one.

People didn't give the later seasons enough credit, even if they didn't rise to the same dramatic effect as the first.
