Looking and seeing are not the same thing — and AI lets blind people do both now.

Some of the most exciting work in vision AI right now is about looking.

Explore-by-touch tools let you interrogate an image piece by piece. You can find the couch, trace the edge of a bird’s wing, check whether something is cropped, build a spatial map, and return to it as often as you like. That’s real agency. It turns blind people from passive listeners into active explorers.

That matters.

But looking isn’t the whole story.

Seeing is different. Seeing is faster, more felt, more holistic. It’s the sense of what this image is doing before you know every detail of how it’s constructed. Seeing is about mood, intention, and invitation. It’s the difference between knowing what’s in the frame and understanding why the frame exists.

Sighted people do both, constantly. They look when they need detail. They see when they need meaning.

AI finally lets blind people do both too.

Looking is exploratory. It’s building up an image piece by piece.
Seeing is narrative. It’s being told a story about the moment.

Neither replaces the other. An explore-by-touch map won’t tell you why an image makes you want to buy the dress, remember the day, or feel at home in a body. And a narrative description won’t tell you where the table edge is or whether something’s cut off.

Good accessibility doesn’t choose between them. It recognises that images serve different functions at different times — and gives people the tools to switch modes.

Alt text unlocks access.
Exploration builds understanding.
Seeing creates meaning.

The real breakthrough isn’t choosing one approach.
It’s finally admitting that looking and seeing are different things — and both matter.

Charlotte Joanne

Charlotte Joanne is the editor of Through the AIs of the Blind. She curates essays, experiments, and voices exploring AI, perception, and access — shaping a publication where lived experience, design, and speculation meet.