3 Ways Deeplocal Builds Interactive Experiences with AI

Illustration by Colin Miller

At Deeplocal, we create interactive experiences that bridge physical and digital mediums. We’ve used AI and ML to transport fans into a cartoon universe, create worlds out of words, build electromechanical flowers that bloom and respond to people in real time, and more. For over a decade, AI has helped us turn our wildest ideas into reality.

Deeplocal uses AI to make branded experiences relatable, intuitive, and magical. Because there’s an intrinsic joy in tactile interactions like pushing buttons, pulling levers, and turning dials, we often combine analog form factors with digital technologies. In this article, we'll show you how AI can bridge the physical-digital divide, and how we’ve used it to immerse people in brand stories, launch products, and connect fan communities.

Before we dive in, a quick note on how we select and deploy AI models. 

Off-the-shelf vs custom training

When we work with AI, there are two ways to source the tools: we use models that have already been made (off-the-shelf), or we build them ourselves (custom).

Off-the-shelf tools offer models that have already been trained by a third party, so we can quickly integrate them into our productions. Running a model in the cloud, instead of on our own computers, is more efficient, requires less of our own computing power, and costs less per use. Most predictive tasks, such as speech recognition with Google’s voice recognition AI, are generally handled better by off-the-shelf systems.
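To make that concrete, here is a minimal sketch of what calling an off-the-shelf model can look like, using the Google Cloud Speech-to-Text Python client. The audio file name is a placeholder, and the snippet assumes Google Cloud credentials are already configured; it is an illustration, not a description of any specific Deeplocal build.

```python
# A minimal sketch of using an off-the-shelf model: Google Cloud Speech-to-Text.
# Assumes Google Cloud credentials are configured; the file name is a placeholder.
from google.cloud import speech

client = speech.SpeechClient()

with open("visitor_audio.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# The heavy lifting happens in the cloud; we just send audio and read back text.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```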

Custom training means the model needs to be trained on specific data to get the results we want, which generally requires significant amounts of cloud computing and engineering time. When custom training is involved, it’s usually the main focus of the project. Some models, like character chatbots, can be “fine-tuned” from an already trained state, which cuts down on training time. Building a new type of generative model, or introducing new concepts to a predictive model, requires custom training on a large data set.
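For a sense of what fine-tuning looks like in practice, the sketch below starts from a pretrained vision model and retrains only its final layer to recognize a new concept from a small custom dataset. The PyTorch model, the ./custom_data folder, and the two-class setup are illustrative assumptions, not a specific Deeplocal project.

```python
# A minimal fine-tuning sketch: freeze a pretrained backbone and retrain only the
# final layer on a small custom dataset (assumed to live in ./custom_data as
# one folder of images per class).
import torch
from torch import nn
from torchvision import datasets, models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                      # keep the pretrained knowledge
model.fc = nn.Linear(model.fc.in_features, 2)        # new head for two custom classes

transform = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
dataset = datasets.ImageFolder("./custom_data", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                               # a few passes is often enough
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```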

With this in mind, here are three ways Deeplocal uses AI to create IRL experiences.

Recognition

AI is an excellent tool when a computer needs to “see” what’s happening in the physical world and respond to user inputs. When we use AI for recognition, the computer identifies a constellation of data points and uses machine learning to recognize when it’s looking at a face, a body pose, or a movement.
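As a simplified illustration of this kind of recognition, the sketch below reads a webcam feed and detects body-pose landmarks with the open-source MediaPipe library. It’s a generic example, not the production pipeline behind any particular installation.

```python
# A minimal body-pose recognition sketch using OpenCV and MediaPipe.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
cap = cv2.VideoCapture(0)  # default webcam

with mp_pose.Pose(min_detection_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Each landmark is a normalized (x, y, z) point; here we track the nose.
            nose = results.pose_landmarks.landmark[mp_pose.PoseLandmark.NOSE]
            print(f"nose at ({nose.x:.2f}, {nose.y:.2f})")

cap.release()
```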

In Adult Swim’s Rickflector, fans created their own Rick and Morty-style avatars, displayed on screens inside pods, that moved and responded in unison with their bodies. Thanks to AI, the computer “saw” what fans were doing and mirrored those actions with the avatars, bringing the cartoon universe to life.

Google Flowers, the electromechanical exhibit in Google’s 9th Ave lobby in NYC, uses recognition to bloom in response to the movements and gestures of visitors and passersby. This playful installation is a physical expression of the Google brand, transforming the world through design, engineering, and intelligence.

Generation and Transformation

Tools like ChatGPT and Midjourney have been making headlines by creating one-of-a-kind text and images from highly specific user input. We’ve used these programs as well as Google Imagen, a text-to-image diffusion model that generates photorealistic results.

At Google Cloud Next, an exploratory pop-up gave attendees the chance to visualize the world of tomorrow. Guests used Google Imagen to translate keywords into their own unique visions of the future. For example: “Imagine the school of tomorrow with solar energy vibes and a hint of outer space.” Imagen translated this prompt and others into custom, take-home art prints.
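A rough sketch of that keyword-to-image flow is below. Because Imagen is accessed through Google Cloud, this example stands in an open Stable Diffusion pipeline from the Hugging Face diffusers library; the keywords echo the example above, and the model name and file name are illustrative assumptions.

```python
# A minimal keyword-to-image sketch using an open Stable Diffusion pipeline as a
# stand-in for Google Imagen (which the activation accessed via Google Cloud).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Assemble a guest's keyword choices into a single prompt.
keywords = ["the school of tomorrow", "solar energy vibes", "a hint of outer space"]
prompt = "Imagine " + ", ".join(keywords)

image = pipe(prompt).images[0]
image.save("take_home_print.png")
```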

Similarly, deep learning techniques like neural style transfer enable us to transform images by identifying stylistic elements from one type of media and applying them to another. We can use style transfer to create personalized, interactive experiences. Think: a user-submitted selfie transforms into mood art when rendered in the style of Van Gogh.
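Here’s a hedged sketch of that idea, applying the style of one image to the content of another with a pretrained arbitrary-style-transfer model published on TensorFlow Hub. The image file names are placeholders.

```python
# A minimal neural style transfer sketch: apply the style of one image to the
# content of another with a pretrained TensorFlow Hub model.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]  # add a batch dimension

content = load_image("selfie.jpg")       # e.g. a user-submitted photo
style = load_image("starry_night.jpg")   # e.g. a Van Gogh reference

model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = model(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("mood_art.png", stylized[0].numpy())
```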

And beyond visual media, we’ve used AI for collaborative, generative sound, creating sensory experiences that connect artists and fans.

For the Flaming Lips’ headlining performance at Google I/O, we created an AI-powered musical instrument that combined Google Magenta’s Piano Genie model with a physical interface. To get the audience involved, we filled giant inflatables with sensors that signaled the software when touched, letting the crowd compose a song with the band in real time.
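The plumbing between the physical interface and the model can be surprisingly simple. The sketch below is a stripped-down stand-in: touch events map to MIDI notes from a fixed scale, whereas in the real instrument Google Magenta’s Piano Genie model chose the notes. The sensor callback, MIDI port, and scale are all illustrative assumptions.

```python
# A simplified stand-in for the sensor-to-sound plumbing: each touched inflatable
# triggers a MIDI note. In the real instrument, Piano Genie chose the notes; here
# a fixed C-major scale takes its place.
import time
import mido

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, stand-in for the model
outport = mido.open_output()                # default MIDI output port

def on_sensor_touch(sensor_index: int) -> None:
    """Called by the sensor driver when inflatable #sensor_index is touched."""
    note = C_MAJOR[sensor_index % len(C_MAJOR)]
    outport.send(mido.Message("note_on", note=note, velocity=100))
    time.sleep(0.3)                         # let the note ring briefly
    outport.send(mido.Message("note_off", note=note))
```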

Scaling (MLOps) 

AI models can be very powerful, but also very complex. Deploying them in production can be difficult, especially for high-profile events and campaigns where brand affinity is tied to the user experience. To address this, we’ve developed processes that make it easier to scale AI models to production and ensure success at each event, installation, or demo. These processes work behind the scenes in many of the projects mentioned in this article.
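One pattern that helps, sketched below, is wrapping a model in a small inference service so every kiosk or installation talks to a single, monitorable endpoint instead of loading the model itself. The FastAPI app and the placeholder model loader are illustrative assumptions, not a description of Deeplocal’s actual stack.

```python
# A minimal model-serving sketch: load the model once at startup and expose it
# behind an HTTP endpoint that on-site hardware can call.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = None  # loaded once, reused across requests


class Prompt(BaseModel):
    text: str


def load_model():
    # Placeholder for a real model loader (e.g. a fine-tuned checkpoint).
    return lambda text: text.upper()


@app.on_event("startup")
def startup() -> None:
    global model
    model = load_model()


@app.post("/generate")
def generate(prompt: Prompt):
    return {"result": model(prompt.text)}
```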

Conclusion

As technical innovators, we rely on Hybrid Intelligence to create novel interactives for our clients. Tangible experiences are on the rise across retail, pop-ups, and workplaces, and AI and ML are often key components.

And there’s a lot to unpack around how we’ll use AI down the line. Next time, we’ll talk with Deeplocal’s creative technologists about their hopes and predictions for the future. Beyond the media hype, where is AI headed, and how will it affect our daily lives? Stay tuned.
