Latent Space Companion

motion capture | artificial intelligence | human-computer interaction | performance

A real-time motion-capture vision board


roles: creative director | developer | prompt engineer | performer


Motivation

I designed The Latent Space Companion to explore a future in which everyday interactions are augmented with real-time pictorial assistance, a form of extended reality. The performer interacts with everyday objects and strikes poses that the system interprets as image inspiration for the visualized latent space. For instance, a performer who wants to dance triggers a sequence that supports that goal: the system detects the dance genre and generates a choreography, together with its cultural history, as dynamic visual scenery. Unlike a traditional vision board, the Latent Space Companion feeds the performer's imagination with embedded machine hallucinations that challenge their default mode of thinking and passive acceptance of everyday social scripts.

Methodology

This proof-of-concept performance is powered by a motion-capture suit and a Stable Diffusion image generator, driven by pre-selected GPT-3.5 prompts and displayed on a projection surface. Stable Diffusion's latent space, a multi-dimensional data space, encodes internal representations of the performer's externally observed and visualized interactions. Technically, the live image generator runs on pre-programmed text: symbolic rules map a calibrated human pose, detected in real time, to a prompt. In the future, a collection of poses could trigger live-generated text-to-image sequences, parsed against a knowledge base of the performer's predictable movements or social scripts through semantic encoding.
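As a rough illustration of the trigger logic described above, the sketch below maps a calibrated pose to a pre-selected prompt and renders it with Stable Diffusion. It is a minimal sketch, not the production pipeline: the joint names, angle thresholds, prompt texts, and the `project` helper are hypothetical placeholders, and the Hugging Face diffusers library stands in for whatever generator backend the performance actually used.

```python
# Minimal sketch: pose-triggered prompt dispatch to Stable Diffusion.
# Joint names, thresholds, prompts, and project() are hypothetical.
from typing import Dict, Optional

import torch
from diffusers import StableDiffusionPipeline

# Pre-selected prompts keyed by calibrated pose label (example texts only).
POSE_PROMPTS = {
    "arms_raised": "a dancer mid-leap in a surreal ballroom, dramatic lighting",
    "crouch": "roots and circuitry intertwining beneath a city street",
}

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def classify_pose(joints: Dict[str, float]) -> Optional[str]:
    """Map raw joint angles (degrees) to a pose label via simple symbolic rules."""
    if joints.get("left_shoulder_pitch", 0) > 150 and joints.get("right_shoulder_pitch", 0) > 150:
        return "arms_raised"
    if joints.get("hip_flexion", 0) > 90:
        return "crouch"
    return None


def project(image) -> None:
    """Stand-in for sending a frame to the projection surface."""
    image.save("projection_frame.png")


def on_mocap_frame(joints: Dict[str, float]) -> None:
    """Called once per mocap frame; generates an image when a pose is recognized."""
    label = classify_pose(joints)
    if label is not None:
        image = pipe(POSE_PROMPTS[label], num_inference_steps=20).images[0]
        project(image)
```

In the actual apparatus, this dispatch would run continuously against the suit's data stream; the point of the sketch is only the mapping from a calibrated pose, through a symbolic rule, to a pre-programmed prompt.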

Latent Space Companion Apparatus


Performance Demo