Colin Melville / 6 February 2026
Originally published by The Drum, February 3, 2026
I was recently asked by a major client to make our AI humans “more real and less AI.” As a film director with thirty years on set, that request stopped me in my tracks.
For three decades, my job has been the exact opposite. For the entire history of the moving image, we have been beautifying, filtering, and hyper-stylizing the human form. From the soft-glow lenses of the silver screen to the aggressive beauty filters of TikTok, we have been running away from “real” as fast as the technology would carry us.
Yet, now that we have arrived at a point where AI can generate a human with pore-perfect precision, we find ourselves repelled by the result.
The industry complains that AI humans don’t look real. But I would argue the problem is deeper. AI models are trained on the world we created for them—an unreal, hyper-stylized version of ourselves. If the input is a century of curated perfection, the output will inevitably be “AI slop.”
To find the human in the machine, we have to learn how to dial back thirty years of directing experience and embrace the “rough and ready.”
The argument that AI looks “fake” assumes that our art was ever “real.” In truth, we have been making humans unnaturally beautiful since art began.
Consider Sandro Botticelli’s The Birth of Venus. It is an iconic masterpiece, yet the anatomy is physically impossible. Her neck is elongated beyond human limits; her stance defies skeletal structure. Botticelli wasn’t interested in the “warts and all” reality of a 15th-century Florentine; he was interested in a stylized icon of beauty.
This tradition transitioned seamlessly into the cinematic era. We developed makeup, three-point lighting, and specific film stocks to ensure the characters we watched were more posed and “sexy” than the people sitting in the theater.
We have been conditioned to accept the “camera’s eye” as reality. Because the overwhelming majority of the stock video and cinematic data that AI models are trained on follows this “beautified” standard, the AI simply spits back the unreal lens we gave it.
There is a fundamental difference between how a camera sees and how a human sees. When you look at a beautiful sunset and try to capture it with your iPhone, it almost always looks terrible compared to the natural experience. The camera sees one way; our eyes see another.
We don’t “glide” toward people we recognize in a shopping mall with the smooth, stabilized movement of a Steadicam; we walk, our vision bouncing with every step. However, because we are conditioned by cinema, we often find the camera’s stylized movement—overhead shots, tracking shots, smooth cuts—more “real” than actual human observation.
AI video models are trained on the stylization of the lens, not the observation of the eye. To make an AI human feel less “machine,” I found I had to force the AI camera to dial back on everything it had been taught. I started choosing lenses closer to a human’s field of vision—the 50mm—and framing shots that felt less perfect and more observational.
I recently set myself a task in a London hotel room: create the most “real” AI human possible. Warts and all.
I started by generating images that captured the world as we actually see it—not as a lighting director wants us to see it. Once the images were locked, I moved into the mechanics of movement. To shatter the “AI look,” I began adding in the very things we usually spend thousands of dollars to remove from a film.
I introduced accidental camera shake. I layered in rough, unpolished audio. I kept the edit slightly loose. Most importantly, I removed the music.
The result was a sixty-second clip that looked like a camera test from the Philip Bloom era of DSLR filmmaking. I titled it “A walk around town and the hospital with my AI Camera,” making it feel as if I had actually been there, shoulder-rig in hand, capturing the mundane.
By making the footage look a bit “less perfect” than I would have if I were shooting for a brand, I made the result far more convincing. It looked “not AI,” which has become the new battleground.
It is a confusing time to be a director. I am now obsessed with making my footage look worse than I am capable of shooting for real.
But in a world of high-gloss synthetic perfection, the only way to stand out is to embrace the flawed. The highly polished, stylized output is a legacy constraint. To win the next era of human expression, we must move beyond the “Silver Screen” and back into the street.
We are no longer just telling stories; we are architecting atmospheres. And sometimes, the most atmospheric thing you can do is leave the camera shake in and turn the music off.