by Thomas Grill, 2019
Horn loudspeakers, audio electronics, speech synthesis, machine learning
We are witnessing two acoustic agents, incarnated as large horn loudspeakers, as they incessantly exchange acoustic codes. Based on models of human vocalization, they develop their vocabulary independently of any natural language. In their ongoing discourse, they follow a common goal: to maximize the beauty of their own vocal expression. The concept relates to an experiment by the Facebook AI team in which chatbots were given the task of optimizing their language for negotiation efficiency. The language soon became impossible for human eavesdroppers to interpret. Mutual Understanding expands on this experiment, treating it as an aesthetic problem and investigating the possibility space of language and its technical measurability.
The installation was originally created for the AI x Music exhibition of the Ars Electronica 2019 festival and has now found its home at the Auditorium of Rotting Sounds.