
Applied Computer Science Students Create Real-Time Immersive Art at Getty Center College Night

Creating art on the fly requires both talent and native intelligence, and, as demonstrated vividly at Getty Center College Night’s “Dreaming in Color,” a little Artificial Intelligence doesn’t hurt, either.

This year’s gathering at the Getty Center on April 29 celebrated color: color in art and in imagination, the science of color, and the possibilities of a more colorful future. Eleven of Woodbury’s Applied Computer Science (ACS) students joined students from other schools in an eclectic series of mind-expanding projects and presentations.


In collaboration with the Getty, and with concept/project management by program chair Ana Herruzo and faculty member Nikita Pashenkov, Woodbury’s ACS students created an immersive, interactive installation merging AI and the visual arts. Drawing on data from users’ facial expressions and their interactions with the screen, the installation generated graphics in real time.

Students from Professor Herruzo’s Media Environments class led the project’s design and execution, while students in Professor Pashenkov’s Artificial Intelligence course led the machine learning development.

Dubbed “WISIWYG: What I See Is What You Get,” the experiential installation analyzed users’ facial expressions to produce live, interactive works of art with accompanying text descriptions, enlisting machine learning algorithms to train artificial neural network models on the Getty Museum’s art collection.

“With sensors tracking user movements, our students applied that data to design silhouettes, creating a reflection of the user on the screen,” Professor Herruzo explains. “Graphics were then displayed on a vertical video wall. The video wall, in turn, had an embedded sensor that obtained live data from the user and displayed real-time generated graphics and machine learning-generated text. It proved to be a remarkable blend of art and science.”
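The article doesn’t include the installation’s code, but the loop Professor Herruzo describes (read a sensor, extract the user’s silhouette, composite real-time graphics) can be sketched in broad strokes. The snippet below is a minimal illustration using OpenCV, with an ordinary webcam standing in for the wall’s embedded sensor; the capture device, threshold value, and colors are assumptions for illustration, not details from the project.

```python
# Illustrative sketch only: the installation's actual code is not published.
# A webcam stands in for the video wall's embedded sensor; the threshold
# and silhouette color are hypothetical placeholder values.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # assumed capture device index

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Approximate the user's silhouette by segmenting the figure
    # from the background with a simple brightness threshold.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (21, 21), 0)
    _, mask = cv2.threshold(blur, 90, 255, cv2.THRESH_BINARY_INV)

    # "Reflect" the user onto a black canvas as a colored silhouette,
    # the kind of real-time graphic the article describes on the wall.
    canvas = np.zeros_like(frame)
    canvas[mask > 0] = (255, 200, 0)  # arbitrary accent color

    cv2.imshow("WISIWYG sketch", canvas)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

A production installation would use a depth or body-tracking sensor rather than a brightness threshold, but the structure (capture, segment, render) would be similar.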

“In analyzing the Getty’s art collection, students experimented with the ‘deep learning’ language model GPT-2, released by the non-profit OpenAI in February of this year,” explains Professor Pashenkov. “The language model was trained by the students on existing descriptions of artworks on display at the Getty Center, then prompted by computer vision algorithms detecting participants’ facial expressions in order to generate new synthetic descriptions to accompany the real-time visualizations.”
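The quote outlines the text pipeline at a high level. As a rough sketch of the prompting step, the snippet below loads OpenAI’s released GPT-2 weights through the Hugging Face transformers library and generates a synthetic description from a detected expression; the prompt template, emotion label, and sampling settings are illustrative assumptions, and the students’ actual model was first fine-tuned on descriptions from the Getty’s collection.

```python
# Illustrative sketch only: prompt a GPT-2 language model with a detected
# facial expression to generate a synthetic art description. The prompt
# template and emotion label are assumptions; the project's model was
# fine-tuned on Getty artwork descriptions before generation.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # swap in fine-tuned weights

detected_emotion = "joy"  # would come from the expression detector
prompt = f"A portrait expressing {detected_emotion}, rendered in vivid color,"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,   # sampling yields varied, "synthetic" descriptions
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```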

The installation turned out to be a great success, with hundreds of students engaging with Woodbury’s Applied Computer Science – Media Arts project throughout the night.

Learn more about the Applied Computer Science program
