What happens when art, science and artificial intelligence are put together? A brand-new art form is created, and that is exactly what a UH Cullen College of Engineering professor, Jose Luis Contreras-Vidal, Ph.D., has done with his latest project, The Nahual Project.
The Nahual Project is a collaboration with Houston-based visual artist Geraldina Wise, who was named Artist-in-Residence at the Cullen College of Engineering and the National Science Foundation BRAIN Center.
In this project, Contreras-Vidal and his team are “using neural interfaces to acquire brain activity of Geraldina as she creates visual art in public settings.”
“The images and video of her art and her brain activity from past performances are used to train an AI model that learns the artist's creative process. During a performance, this model is used to create a digital painting that coexists with the physical painting on the canvas,” Contreras-Vidal said.
A “grammar of sounds” takes Wise’s brain activity and converts it into “creative sound.” This process is called sonification.
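The article does not describe the project's actual “grammar of sounds,” but the general idea of sonification can be sketched as mapping a feature of the brain signal onto a sound parameter. Below is a minimal, hypothetical example in Python: the mean amplitude of an EEG window (a value I assume is in microvolts) is mapped to a musical pitch. The function name, ranges, and mapping are illustrative assumptions, not the team's method.

```python
import math

def amplitude_to_pitch_hz(window, low_hz=220.0, high_hz=880.0, max_amp=100.0):
    """Illustrative sonification: map the mean absolute amplitude of an
    EEG window (assumed microvolts) onto a frequency between low_hz and
    high_hz. This is a sketch, not the Nahual Project's mapping."""
    mean_amp = sum(abs(s) for s in window) / len(window)
    frac = min(mean_amp / max_amp, 1.0)  # normalize to [0, 1]
    # Interpolate on a logarithmic scale so equal amplitude steps
    # correspond to equal musical intervals.
    return low_hz * math.exp(frac * math.log(high_hz / low_hz))

# Quiet activity lands near the low end of the pitch range,
# strong activity near the high end.
quiet = amplitude_to_pitch_hz([1.0, -2.0, 1.5])
loud = amplitude_to_pitch_hz([90.0, -95.0, 100.0])
print(round(quiet, 1), round(loud, 1))
```

In a real pipeline this mapping would run continuously over a sliding window of the EEG stream, driving a synthesizer in real time.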
This gives the viewer a visualization of what is happening in the artist’s mind during the creative process.
“The technology is based on noninvasive mobile scalp electroencephalography, which is used to listen to the brain in free-behaving individuals outside lab settings. Custom specialized software is used to process the signal and to train the AI model,” Contreras-Vidal said.
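The team's software is custom and not described in detail, but a standard first step when processing scalp EEG is estimating how much power the signal carries in a given frequency band (for example, the 8-12 Hz alpha band). The sketch below shows that step with a plain discrete Fourier transform; it is my own illustrative assumption of a typical pipeline, not the project's code.

```python
import cmath
import math

def band_power(signal, fs, f_lo, f_hi):
    """Estimate the power of `signal` (sampled at fs Hz) in the band
    [f_lo, f_hi] Hz via a direct discrete Fourier transform.
    Illustrative only; real EEG pipelines use optimized FFTs plus
    artifact rejection and filtering."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2 / n
    return power

# A pure 10 Hz sine wave (alpha band) concentrates its power in
# 8-12 Hz, with essentially none in the beta band (13-30 Hz).
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
print(band_power(sig, fs, 8, 12) > band_power(sig, fs, 13, 30))  # prints True
```

Band powers like these are the kind of compact features that could feed both a sonification mapping and an AI model.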
Contreras-Vidal said understanding the creative process is one of the oldest scientific and artistic questions of all time. He describes creativity as a characteristic that defines humankind. That’s what inspired him to take on this project.
“We are now able to integrate mobile neuroimaging with AI and art to investigate the neural basis of creativity in real settings,” Contreras-Vidal said.
He said this could have a positive impact on education, by promoting creativity; on health and medicine, through art therapy; and on neuroscience, by making it possible to reverse engineer the brain in action.
This technology creates a new way for brains and machines to communicate.
It also will impact the way that art is consumed and created. “The tech can be used to engage the audience. For example, brain headsets fitted to audience participants could be used to modulate the sonification or as inputs to the AI model to 'borrow' Geraldina's brain to create personalized art,” Contreras-Vidal said.
It could also be used by people with disabilities to “create art, to visualize their brain activity, and how context can modulate patterns of brain activity.”
This technology is not limited to visual art. Contreras-Vidal and his team have started a new collaboration with the Noblemotion Dance Company and music composer Tony Brandt to “create a creative movement choreography supported by music and neurotechnology to explore some neuroscience principles.”