A neuroscientist creates video art with AI-generated faces and wins awards at international festivals. We asked him about his experience.

Ian Gibbins, a former neuroscientist and Professor of Anatomy, created a philosophical video art project using AI-generated faces from Generated Media.

The video has been a great success, screening at 32 international festivals. Its most significant honor came at the Fotogenia Film Festival in Mexico City in 2021, one of the world’s leading festivals for experimental video and video art, where it was named Delluc Avant-Garde Winner, the festival’s award for best film.

Watch this art piece and read the story behind it.

“The Life We Live Is Not Life Itself” is a visualization of a poem by the Greek poet Tasos Sagris. It questions the essence of life itself and, specifically, the life each of us lives individually.

The author shot the raw footage mainly in and around Adelaide and on the nearby Fleurieu and Yorke Peninsulas in South Australia, supplemented with images from around Greece. But nothing in the video is quite as it seems: most scenes were composited and animated from multiple sources.

AI-generated faces that shift in expression, age, and skin color underscore the illusory nature of the world around us. The fact that none of the faces in the video is real echoes the text and contrasts with the composited actions of real people.

Ian Gibbins told us about the idea of using generated faces in his video:

As we were pulling together an outline of the visuals for this video, we needed some people in it. I don’t usually have people in my videos, and we had neither the time nor the resources to find actors. I knew there had been significant advances in AI face generation, which is how I came across Generated Photos. The AI faces were ideal for the concept I was developing. By generating a related set of images, I could animate morphs from young to old, female to male, light to dark skin, and so on. Placing the animations in windows, shopfronts, picture frames, and the like was consistent with some of the poem’s central themes: who is watching and remembering our actions? How reliable are our memories of the people we have met or desire to meet again?

Then, it turned out that a lot of the footage I’d taken around the city did have people walking past. So I used complex compositing methods to create scenes in which the same person appears twice in the same scene, sometimes walking beside themselves. It creates a paradox: the realistic-looking people in the scenes are AI-generated, while the impossible ones (the doubles) are real. This paradox reflects a deeper level of the poem: the unreliability of memory and the strange misperceptions that underlie it.

This is not the only time Dr. Gibbins has used AI-generated content. After this project, he also used faces from Generated Media in his work “An Introduction to the Theory of Eclipses.”

As a neuroscientist, Ian Gibbins researched parts of the nervous system that operate without conscious control or conscious perception. He also used sensory illusions when teaching at Flinders University, where he could get a whole class of students to believe that time could stand still and that they could feel touch on someone else’s skin. This academic experience informs his interest in generative visual technologies:

I think AI-generated content is fantastic. It’s a tool, not a result, like a camera or a pen. I have also used algorithmic image generators, which are not AI, in recent work. Critics rarely worry about these the way they do about AI, which seems odd. I could probably make suitable images by hand, but image generators save a lot of time and throw up things I would never have thought of. I set the parameters for the generators, off they go, and then I see whether the result worked or not, just like using an AI generator. The difference, I suppose, is that AI generators draw on the work of others, while the algorithmic generators make things from scratch, albeit using algorithms someone else wrote. Proprietary rights are essential. I’m happy to pay reasonable rates to access source material such as Generated Media’s. For me, it was important that Generated Media builds its library of images from consenting models.

Some say that AI-generated content can leave many creative workers without a profession. Dr. Gibbins thinks differently:

Looking at the forums of the generated-content community, I doubt AI will put any artist out of a job any time soon. Generating high-quality output takes a lot of work and a decent bit of luck, and it needs a good artist’s eye to know where to go, what to save, and when to start doing something else.

What do you think about content generators and AI technologies in general? Are they a nightmare for artists or just another handy tool? Share your opinion on social media by tagging @icons8 and @generated_media.

Credits
Ian Gibbins is a poet, video artist, and electronic musician living on Kaurna land in Adelaide, South Australia. He has four books of poetry, and his videos have been shown to acclaim worldwide in festivals, galleries, installations, and public art programs. Until he retired in 2014, Ian was an internationally recognized neuroscientist and Professor of Anatomy.

“The Life We Live Is Not Life Itself” video was commissioned by The Institute for Experimental Arts in Athens. Music for the video was created by Whodoes.
