It's not the AI. It's the coordinates.
I wrote about Dave Plummer and Tempest, the 1981 arcade game. Dave spent a year trying to improve his AI before realising the problem wasn’t the AI - it was the way he’d taught it to see the game.

Tempest is a shooter played from the centre of a geometric web outward - enemies crawl in from the edges and you spin around the rim to meet them. Dave taught his AI to play it using Cartesian coordinates - X and Y positions on a grid. The AI got good. It kept getting better. And then it plateaued, hard, and stayed there. It wasn’t until Dave gave it a completely different way to be aware of the game - polar coordinates, angle and distance from the centre - that something shifted. The same AI, the same game, the same enemies. Suddenly it didn’t just improve. It leapt. Because the coordinate system it had been given finally matched the geometry of what it was actually doing.
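The coordinate change itself is tiny - which is rather the point. A minimal sketch (the function name is mine, not Dave's):

```python
import math

def cartesian_to_polar(x, y):
    """Convert grid coordinates to (radius, angle) from the centre."""
    r = math.hypot(x, y)       # distance from the centre of the web
    theta = math.atan2(y, x)   # angle around the rim, in radians
    return r, theta
```

In polar terms, "spin around the rim" is just a change in theta, and "an enemy crawling in" is just a shrinking r - exactly the moves Tempest is made of. The same information, re-expressed along the axes the game actually uses.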
There’s more to this than a gamer-AI anecdote.
It demonstrates something that sounds intimidating when you name it properly. The coordinate system isn’t neutral. It doesn’t just describe what’s there - it determines what questions you can ask. Cartesian coordinates generate Cartesian questions. Polar coordinates generate polar questions. And if the situation you’re navigating is polar in its nature, Cartesian questions won’t find the answer no matter how well you process them.
Philosophers call this ontology - the study of what is real and how it can be described. (I know, I know. I wrote a whole post about why that word doesn’t need to be as scary as it sounds.) The short version: an ontological register is just a coordinate system for reality. And the interesting thing (the liberating thing) is that the same reality can have multiple valid coordinate systems simultaneously, none of which is complete on its own.
A deal seen through a financial lens and a deal seen through an operational lens aren’t two descriptions of one underlying truth. They’re two coordinate systems, each generating different questions, each making different things visible. The financial register asks: does this stack up? The operational register asks: can we actually do this? Both questions are real. Neither exhausts the deal.
If you’re working with AI systems, low confidence scores aren’t just a verification problem.
When a model returns low confidence, the instinct is: something went wrong, add more, verify harder. But there’s another reading. The model processed everything it could in the register it was given - and then correctly reported that something remained outside it. It’s not failing. It’s pointing. The register ran out.
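In code, the two readings are two different branches. A hedged sketch - the threshold, field names, and the "reframe" action are all illustrative, not any particular API:

```python
LOW_CONFIDENCE = 0.5  # hypothetical threshold

def handle(result):
    """Treat low confidence as a signal to change register, not to retry harder."""
    if result["confidence"] >= LOW_CONFIDENCE:
        return result["answer"]
    # The model has processed what it can in the register it was given.
    # Escalate to a different representation instead of re-verifying.
    return {"action": "reframe", "reason": "register exhausted", "original": result}
```

The default pipeline retries the same question harder; this one routes the low-confidence case somewhere that can ask a different question.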
That’s not a gap to fill. That’s a gap to see differently.
The Tempest AI didn’t need more training. It needed a different map - one drawn for the terrain it was actually crossing. The question for any AI-assisted system isn’t just: did it get the answer right? It’s: did it have all the registers to ask the right questions - or just one register and a good vocabulary?
Change the coordinates. Different questions become available.