
GenPT-Live




Project Type: 2025, Data, Generative AI
Team: Reto Chen + Adrian Tsao



GenPT-Live is a real-time, feedback-driven generative visual system built with AI. We explored how machine-learning-generated imagery can evolve over time through continuous self-reflection. Rather than treating AI image generation as a single output, the project studies what happens when imagery is analyzed, transformed, and fed back into the model.


The core idea behind GenPT-Live is a closed feedback loop between AI generation and real-time visual analysis. The system relies on TouchDesigner for the visual environment and a Python script that calls StreamDiffusion as the image-generation engine.

A user provides a text prompt and model parameters (such as resolution, seed, or model selection), and the AI generates an image from this prompt. Instead of stopping there, the generated image is analyzed within TouchDesigner: its pixel palette, morphed into horizontal strips, is extracted, transformed, and regenerated into a secondary visual output. This processed imagery is then routed back into the AI system, influencing the next round of image generation.
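The strip-extraction step can be sketched in plain Python. This is a minimal illustration, not the project's actual TouchDesigner code: it assumes a frame arrives as rows of RGB tuples, and the hypothetical `extract_strips` helper collapses it into horizontal bands of averaged color.

```python
def extract_strips(frame, n_strips):
    """Collapse a frame (a list of pixel rows, each pixel an RGB tuple)
    into n_strips horizontal bands, each holding the average color of
    the rows it covers."""
    rows_per_strip = len(frame) // n_strips
    strips = []
    for s in range(n_strips):
        band = frame[s * rows_per_strip:(s + 1) * rows_per_strip]
        pixels = [px for row in band for px in row]
        avg = tuple(sum(px[ch] for px in pixels) // len(pixels) for ch in range(3))
        strips.append(avg)
    return strips

# Dummy 4x2 frame: two red rows on top, two blue rows below.
frame = [[(255, 0, 0)] * 2] * 2 + [[(0, 0, 255)] * 2] * 2
print(extract_strips(frame, 2))  # [(255, 0, 0), (0, 0, 255)]
```

In the real pipeline this analysis happens on live TOP textures inside TouchDesigner; the sketch only shows the shape of the reduction from full image to palette strips.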

Unlike typical AI image workflows that rely on repeated manual prompting, GenPT-Live allows images to influence themselves. This creates a system where visuals gradually drift, mutate, and stabilize based on internal logic rather than constant user input. The result is an emergent visual language that feels organic, temporal, and unpredictable.  
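The drift-and-stabilize behavior can be illustrated with a toy closed loop. Everything here is a stand-in: the hypothetical `generate` function merely blends the previous frame's analysis back toward a prompt target, where the real system calls StreamDiffusion with the processed imagery as conditioning.

```python
def generate(prompt_target, feedback, feedback_weight=0.6):
    # Stand-in for the diffusion call: mixes the analyzed previous
    # frame (feedback) with the prompt's target state.
    return feedback_weight * feedback + (1 - feedback_weight) * prompt_target

def run_loop(prompt_target, steps):
    state = 0.0  # first frame: no feedback yet
    history = []
    for _ in range(steps):
        state = generate(prompt_target, state)  # output feeds the next round
        history.append(round(state, 3))
    return history

print(run_loop(1.0, 6))  # values drift toward 1.0, then stabilize
```

Each iteration's output becomes the next iteration's input, so the sequence moves gradually rather than jumping: the same mechanism that, with a real image model in the loop, produces the slow mutation and settling described above.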


Users interact with GenPT-Live through a simplified TouchDesigner interface.

They can:

  • Enter or modify text prompts
  • Adjust model and resolution parameters
  • Decide when to activate the feedback loop
  • Observe the system evolve with minimal intervention
  • Continuously modify prompts throughout the process

The tool is intended for media artists, live performers, creative coders, installation designers, VJs, and exhibition curators with an interest in real-time AI visualization and generative systems. It is especially suited to live visuals, immersive installations, and experimental image research.