For those claiming they rigged it: do you have any concrete evidence? What if the models have just gotten really good?
I just asked Gemini Pro to generate an SVG of an octopus dunking a basketball and it did a great job. Not even the Deep Think model. Then I tried "generate an SVG of a raccoon at a beach drinking a beer". You can go try this out yourself. Ask it to generate anything you want in SVG. Use your imagination.
Rant:
This is why AI is going to take over: folks aren't even trying in the least.
> What if the models have just gotten really good?
Kagi Assistant remains my main way of interacting with AI. One of its benefits is you're encouraged to try different models.
The heterogeneity in competence, particularly per unit of time, is growing rapidly. If I'm extrapolating image-creation capabilities from Claude, I'm going to underestimate what Gemini can do without fuckery. Likewise, if I'm using Grok all day, Gemini and Claude will seem unbelievably competent when it comes to deep research.
Every bit of improvement in AI ability will have a corresponding denial phrase. Some people still think AI can't generate the correct number of fingers today.
Why frame it as rigging? I assume they would teach the models to improve on tasks the public find interesting. Then we just have to come up with more challenges for it.
I don't think they "rigged" it, but it might have been given a bit more of a push on that front, since this benchmark has been going for a very long time now.
Another benchmark is going on at [0]. It's pretty interesting. A perfect scoring model "borks" in the next iteration, for example.
> Rant: This is why AI is going to take over: folks aren't even trying in the least.
It might draw things all right, at least in some cases. I only really use it when my hours-long research doesn't take me where I want to go, and guess what? The AI can't get there either. It hallucinates things, makes up stuff, etc. For a couple of the things I asked, it did manage to find a single reference, and it was exactly what I was looking for, so it only rarely works for my use cases.
Rant: This is why people are delusional. They test the happy path and claim it knows all the paths, and then some.
Everyone should have their own private evals for models. If I ask a question and a model flat-out gets it wrong, I'll sometimes add it to my test-question bank.
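If you want something more systematic than a mental list, the private eval can be as small as a script. Here's a minimal sketch, assuming the OpenAI Python SDK as the client and a deliberately crude substring check; the two questions and the model name are placeholders, since the whole point is to stock it with questions a model has actually gotten wrong for you.

```python
# Minimal private-eval sketch: a handful of questions a model has flubbed before,
# scored with a crude substring check. Swap the client, model name, and questions
# for your own; these are placeholders.
from openai import OpenAI

EVALS = [
    # (prompt, substring the answer must contain)
    ("What is the capital of Australia?", "Canberra"),
    ("In what year did the Apollo 11 landing take place?", "1969"),
]

client = OpenAI()

def run_evals(model: str) -> None:
    for prompt, expected in EVALS:
        answer = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content or ""
        status = "PASS" if expected in answer else "FAIL"
        print(f"[{status}] {prompt!r} -> {answer[:60]!r}")

run_evals("gpt-4o-mini")  # placeholder model name
```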
Simon notes this benchmark is win-win, since he loves pictures of pelicans riding bicycles — if they spend time benchmaxxing it’s like free pelicans for him.
He originally promised to generate a bunch more animals when we got a "good" pelican. This is not a good pelican. This is an OUTSTANDING pelican, a great bicycle, and it even has a little sun ray over the ocean marked out. I'd like to see more animals, please, Simon!
It is visually outstanding. The only thing that sticks out to me is that the steering column bends forward toward the ground (negative trail), which would make it oversteer rather than self-stabilize. Interestingly, there's a slight positive-trail bend in the second one, though.
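For anyone who wants to sanity-check the trail point: the usual formula is trail = (R_w·cos θ − offset) / sin θ, where θ is the head angle measured from horizontal, R_w the front wheel radius, and offset the fork rake. A quick back-of-the-envelope in Python; the numbers are a typical road-bike geometry chosen for illustration, not measurements taken from the drawing.

```python
# Trail goes negative once the fork offset exceeds R_w * cos(theta), which is
# the oversteer-instead-of-self-stabilize case described above.
import math

def trail_mm(wheel_radius_mm: float, head_angle_deg: float, offset_mm: float) -> float:
    theta = math.radians(head_angle_deg)
    return (wheel_radius_mm * math.cos(theta) - offset_mm) / math.sin(theta)

print(round(trail_mm(340, 73, 45)))   # ~57 mm: typical, self-stabilizing
print(round(trail_mm(340, 73, 130)))  # negative: fork bent too far forward
```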
Agreed, good is quite an understatement. Every item is drawn superbly, and the basket with the fish is just great. Feels like a big jump over the other models (though granted, this is such a known "benchmark" by now, it's likely gamed to some extent).
This is a very reasonable drawing of a bicycle. It has a solid rear triangle and a forward-swept front fork, which is an important detail for actually being able to steer the bike. The drivetrain is single-speed, but that's fine, and the wheels are radially laced, which is also fine: both of those simplified details are things which occur in real bicycles.
The competition between models is so intense right now that they are definitely benchmaxxing pelican-on-a-bike SVGs and Will Smith spaghetti-dinner videos.
Parallel hypothesis: the competition between models is so intense that any high-engagement, high-relevance web discussion about any LLM/AI generation is going to hit the self-guided, self-reinforced model training and result in de facto benchmaxxing.
Which is only to say: if we HN-front-page it, they will come (generate).
I never realized Lenna was a Playboy centerfold until years after I first encountered it, which was part of an MP in the data structures class all CS undergrads take at UIUC.
> when the indicator becomes a target, it stops being a good indicator
But it's still a fair target. Unless it's hard coded into Gemini 3 DT, for which we have no evidence and decent evidence against, I'd say it's still informative.
Note that, this benchmark aside, they've gotten really good at SVGs. I used to rely on the Noun Project for icons, and sometimes various libraries, but now coding agents just synthesize an SVG tag in the code and draw all the icons.
I think it could still be an interesting benchmark. Like, assuming AI companies are genuinely trying to solve this pelican problem, how well do they solve it? That seems valid, and the assumption here is that the approach they take could generalize, which seems plausible.
>The strongest argument is that they would get caught. If a model finally comes out that produces an excellent SVG of a pelican riding a bicycle you can bet I’m going to test it on all manner of creatures riding all sorts of transportation devices. If those are notably worse it’s going to be pretty obvious what happened.
He mentioned in the Deep Think thread the other day that the results on his secret test set were also impressive.
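For what it's worth, that kind of creature-times-vehicle sweep is easy to run yourself. A rough sketch shelling out to Simon's `llm` CLI; the model id is a placeholder, and real outputs may need the SVG stripped out of surrounding prose or code fences.

```python
# Sweep a grid of "X riding Y" prompts and dump whatever the model returns,
# one file per combination, for eyeballing later.
import itertools
import pathlib
import subprocess

animals = ["pelican", "raccoon", "octopus", "walrus"]
vehicles = ["bicycle", "skateboard", "unicycle", "hot air balloon"]

out = pathlib.Path("svgs")
out.mkdir(exist_ok=True)

for animal, vehicle in itertools.product(animals, vehicles):
    prompt = f"Generate an SVG of a {animal} riding a {vehicle}"
    result = subprocess.run(
        ["llm", "-m", "gemini-3-pro", prompt],  # placeholder model id
        capture_output=True, text=True, check=True,
    )
    name = f"{animal}-{vehicle}.svg".replace(" ", "-")
    (out / name).write_text(result.stdout)
```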
Interesting thing: I've got an internal request of my own that is similar to this pelican, and there has been zero progress on it in the past ~2 years. That might have at least a couple of explanations. 1. Spillage into the pre-training: some real artist had already drawn a pelican riding a bicycle. 2. The training data frames this challenge as an important marker of model intelligence, which might affect how compute gets allocated to solving it, either through the engineers or through the model itself finding the texts about this challenge.
I have wondered whether, with these tests, it'll reach a point where online models cheat by generating a line-art raster reference and then, behind the scenes, deciding how to vectorize it in the most minimalist way (e.g. using strokes and shape elements rather than naively using path outlines for all forms); a rough sketch of that pipeline is below.
The interesting aspect of the ongoing tests, I feel, is seeing how models can plan out an image directly using SVG primitives solely through reasoning (code-to-code). If they have a reference, then it's a different type of challenge (optimizing for a trace).
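A minimal sketch of what that raster-then-trace shortcut could look like, using OpenCV contour extraction; the reference image filename and the simplification tolerance are made up, and a real system would presumably pick richer primitives than polyline paths.

```python
# Hypothetical "cheat" pipeline: take a line-art raster reference, pull out the
# major outlines, simplify them, and emit one compact <path> per contour.
import cv2

img = cv2.imread("pelican_reference.png", cv2.IMREAD_GRAYSCALE)  # made-up file
_, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

paths = []
for contour in contours:
    approx = cv2.approxPolyDP(contour, 2.0, True)  # simplify with ~2 px tolerance
    pts = " L ".join(f"{int(x)},{int(y)}" for [[x, y]] in approx)
    paths.append(f'<path d="M {pts} Z" fill="none" stroke="black"/>')

h, w = img.shape
svg = (f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 {w} {h}">'
       + "".join(paths) + "</svg>")
print(svg)
```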
SVG generation is a surprisingly good benchmark for spatial reasoning because it forces the model to work in a coordinate system with no visual feedback loop. You have to hold a mental model of what the output looks like while emitting raw path data and transforms. It's closer to how a blind sculptor works than how an image diffusion model works.
What I find interesting is that Deep Think's chain-of-thought approach helps here — you can actually watch it reason about where the pedals should be relative to the wheels, which is something that trips up models that try to emit the SVG in one shot. The deliberative process maps well to compositional visual tasks.
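A toy illustration of what "holding the coordinate system in your head" amounts to: every hub, spoke, and frame point has to be computed and emitted as text, with nothing ever rendered along the way. The layout numbers here are arbitrary.

```python
# Compose a crude two-wheels-and-a-chainring SVG purely from computed coordinates,
# the way a model emitting shape/path data must, with no visual feedback loop.
import math

def wheel(cx: float, cy: float, r: float, spokes: int = 8) -> str:
    parts = [f'<circle cx="{cx}" cy="{cy}" r="{r}" fill="none" stroke="black"/>']
    for i in range(spokes):
        a = 2 * math.pi * i / spokes
        x, y = cx + r * math.cos(a), cy + r * math.sin(a)
        parts.append(f'<line x1="{cx}" y1="{cy}" x2="{x:.1f}" y2="{y:.1f}" stroke="black"/>')
    return "".join(parts)

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 300 160">'
    + wheel(70, 110, 40)                                               # rear wheel
    + wheel(230, 110, 40)                                              # front wheel
    + '<circle cx="150" cy="110" r="10" fill="none" stroke="black"/>'  # chainring
    + "</svg>"
)
print(svg)
```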