Procedural Sound Design and Interactive Installations

Dotdotdot_it
4 min read · Jul 19, 2019


The installation for Enel Green Power in the former Taccani hydroelectric power plant represents an important moment in our research on procedural sound, aimed at a user experience that engages a deeper level of empathy.

A view of the character Mariasole from the 360° space “The Heart of Energy”

Outfitting the former Taccani power plant for Enel Green Power in Trezzo sull’Adda presented us with the great challenge of conveying immaterial concepts such as energy, making them tangible and experienceable, and of communicating complex topics through Interaction Design, such as how energy is produced or how it relates to the geography of the world.

When creating an empathic experience, we believe that environmental sound design and direct voice interaction with the characters are increasingly critical. Thanks to a sophisticated voice recognition system placed in the guide’s helmet, the characters answer questions in real time, creating a direct link with the visitor.

Speech recognition and generative audio systems for more effective engagement

The sound featured here is a new mode of interaction that we are gradually getting used to, and it is therefore fundamental in the design of so-called responsive environments.

In the experience designed for Enel Green Power, a voice recognition system is integrated into the helmet worn by the guide at the very start of the path; it also activates the five characters representing renewable energy sources. Keywords activate Idro, the character that anthropomorphizes water energy, who recounts the contents of the various stations and, in the central 360° space, introduces the other four sources of clean energy.

Our research on the voice of the avatars was initially oriented towards real-time voice synthesis, verifying whether Artificial Intelligence systems such as Microsoft Cortana or iOS Siri were able to pronounce complete and intelligible answers. However, these systems did not convey any warmth or emotion, which for us is fundamental to empathizing with visitors. For this reason, we opted for professional actors, for whom we wrote a series of answers that make up a library to be activated by the keywords. Integrated within the helmet is a custom proximity detection system (Bluetooth Beacon technology) that activates the voice recognition device in the corresponding station, with a Python server running in the background to parse the natural language of all the sentences pronounced by the guide.
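To give a sense of the logic involved, here is a minimal Python sketch of such a pipeline: a beacon reading gates the active station, and a keyword lookup selects a pre-recorded answer. The station names, keywords and file paths are illustrative placeholders, not the production code.

```python
# Sketch of the helmet pipeline: the strongest Bluetooth beacon tells us which
# station the guide is standing at, and keywords found in the transcribed
# sentence select a pre-recorded answer from that station's library.
# All names (stations, keywords, file paths) are illustrative placeholders.

ANSWER_LIBRARY = {
    "hydro_station": {
        "turbine": "audio/idro_turbine.wav",
        "water": "audio/idro_water_cycle.wav",
    },
    "solar_station": {
        "sun": "audio/mariasole_sun.wav",
        "panel": "audio/mariasole_panels.wav",
    },
}

def active_station(beacon_rssi, threshold=-60):
    """Pick the station whose beacon signal is strongest and close enough."""
    station, rssi = max(beacon_rssi.items(), key=lambda kv: kv[1])
    return station if rssi > threshold else None

def pick_answer(transcript, station):
    """Return the first pre-recorded answer whose keyword appears in the sentence."""
    words = transcript.lower().split()
    for keyword, audio_file in ANSWER_LIBRARY.get(station, {}).items():
        if keyword in words:
            return audio_file
    return None

# Example: the guide is near the hydro station and asks about the turbine.
station = active_station({"hydro_station": -48, "solar_station": -75})
if station:
    print(pick_answer("how does the turbine work", station))  # audio/idro_turbine.wav
```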

Within the 360° space (“The journey into the heart of energy”), eight loudspeakers are placed for the immersive audio; their arrangement makes for a very effective surround effect. The background sound of each character is completely procedural, produced in real time with the SuperCollider platform, and is never the same. The sounds of water, wind, fire or the sea are synthesized so that the effect is more realistic and varies depending on the ambient noise and the people present.
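As an illustration of how a control process could steer such SuperCollider soundscapes, the sketch below sends control values over OSC using the python-osc package; the OSC addresses and parameter names are invented for the example and are not the installation’s actual code.

```python
# Minimal sketch: a Python control process nudging SuperCollider synths over OSC.
# 57120 is SuperCollider's default language port; the /soundscape addresses and
# parameters are invented for illustration.
import random
import time

from pythonosc import udp_client

sc = udp_client.SimpleUDPClient("127.0.0.1", 57120)

def update_soundscape(character, crowd_level):
    """Send a few control values so the procedural texture never repeats."""
    sc.send_message("/soundscape/{}/density".format(character), crowd_level)
    sc.send_message("/soundscape/{}/variation".format(character), random.random())

# Example: Levante's wind thickens as more visitors enter the 360° space.
for crowd_level in (0.2, 0.5, 0.9):
    update_soundscape("levante", crowd_level)
    time.sleep(1.0)
```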

The soundscapes are designed to reinforce the avatar’s identity: for example, Levante, the wind, has a more musical sound; Mariasole, solar energy, has brighter sounds such as the chirping of cicadas that recalls summer; Gaia, geothermal energy, has a heart-like beat.

Of course, synchronizing the algorithmic audio with the graphics, and managing the transition from one character to another, was not easy. The area at the center of the space also produces the background sound that spreads along the exhibition path.
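One simple way to keep such hand-offs in step with the visuals is to have audio and graphics read the same clock and apply an equal-power crossfade between the outgoing and incoming character. The sketch below illustrates the idea only; it is not the code used in the installation.

```python
# Both audio and graphics read the same wall-clock time and apply the same
# equal-power crossfade, so the hand-off between characters stays in sync.
import math
import time

def crossfade_gains(start_time, duration, now=None):
    """Return (outgoing_gain, incoming_gain) for an equal-power crossfade."""
    now = time.time() if now is None else now
    progress = min(max((now - start_time) / duration, 0.0), 1.0)
    return math.cos(progress * math.pi / 2), math.sin(progress * math.pi / 2)

# Example: halfway through a 4-second hand-off from Idro to Levante,
# both characters play at roughly 0.71 of full level.
t0 = time.time()
print(crossfade_gains(t0, 4.0, now=t0 + 2.0))  # ≈ (0.707, 0.707)
```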

The voice recognition system placed in the guide’s helmet.

Future developments, including data sonification and infotainment

The experience at the interactive Taccani plant allowed us to carry out a great deal of technological experimentation. And since the setup is scalable and will be replicated in other Enel Green Power plants (next at Acquoria, in Lazio), further implementations are desirable and possible: making the relationship with the characters freer by moving beyond the library of pre-recorded answers that currently limits the interaction, or adopting a multi-user voice recognition system that is not restricted to the wearer of the helmet.

Visual or auditory feedback, such as Computer Vision representations or musical backgrounds, could also be driven by data sonification or infotainment based on user data, such as biometric parameters recorded in real time.
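As a toy example of what such a data sonification could look like, the sketch below maps a heart-rate reading onto a pitch that a synthesis engine might play; the mapping and value ranges are illustrative assumptions, not part of the installation described above.

```python
# Toy data sonification: map a biometric parameter (heart rate in BPM) onto an
# audible frequency. Ranges and the linear mapping are illustrative assumptions.
def heart_rate_to_frequency(bpm, bpm_range=(50, 160), freq_range=(110.0, 880.0)):
    """Linearly map heart rate onto a frequency in hertz."""
    lo, hi = bpm_range
    f_lo, f_hi = freq_range
    position = (min(max(bpm, lo), hi) - lo) / (hi - lo)
    return f_lo + position * (f_hi - f_lo)

# Example: a resting pulse maps to a low tone, exertion to a higher one.
print(heart_rate_to_frequency(62))   # ≈ 194 Hz
print(heart_rate_to_frequency(140))  # ≈ 740 Hz
```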

In general, the experience with the interactive panels suggests many other applications where data is the focal point of entertainment.

The station “We have a lot in common”

by Laura Dellamotta, co-founder and CTO of Dotdotdot and OpenDot.

Written by Dotdotdot_it

We are a multidisciplinary interaction design studio founded in Milan in 2004, in which experimentation is at the core of innovation.