UCLA's Center for Research in Engineering, Media, and Performance and Advanced Research Computing have collaborated to build a set of cloud-based AI inferencing microservices supporting an immersive live production of the musical Xanadu. During the production, audience members interact using a mobile app that lets them draw images in the air with phone gestures. Those images are sent to the AI microservices for real-time inferencing, and the results are displayed as 3D meshes on stage-based digital panels during the performance. This session will walk participants through the imagining of the performance, the AI microservices we developed, our model selection process, and the cloud architecture. Participants should come away with a solid understanding of immersive performance with AI, how to create and implement similar AI microservices with AWS, and how to build cost-consciously.
Speaker/Host
Andrew is the Research Data and Web Platforms Manager at UCLA Advanced Research Computing. His research interests include AI in the areas of advanced manufacturing, medical and dental self-care, immersive performance, and digital media.
Co-speaker(s)
Anthony Doolan is an Application Programmer and AV Specialist for UCLA's Office of Advanced Research Computing. He develops and maintains full-stack web applications, both on-premises and cloud-based, and provides audiovisual systems integration and programming expertise, specializing in Extron equipment.
I am a Web Developer and Cloud Architect with experience across AWS and GCP. My current focus is AI prototyping and development, exploring how intelligent systems can enhance modern applications. I enjoy building scalable, cloud-native solutions that bridge innovation and practicality.