Our industrial work is built on a foundation of experimental projects and certified R&D. Each research project investigates a specific aspect of context-sensitive sound — and each one feeds directly into the systems we build for partners.
Since 2020, we have conducted five BSFZ-certified R&D projects (Forschungszulage), each building on the last. Together, they form a systematic investigation into how sound can adapt to context, environment, and human state in real time.
How can machines and musicians communicate through visual scores — before the machine can even "understand"?
Designed a notation interface for AI-musician dialogue. Led to the Lure project — AI-driven characters in live theatre performance.
Can physical movement through space deepen how we listen to complex music?
Built a GPS + binaural 3D audio prototype in Unity with DearVR. Led to The Planets — an interactive orchestral soundwalk with the Münchner Philharmoniker.
Can weather, time of day, and user state drive real-time sound generation?
Developed a combinatorial sound engine in Max/MSP that translates environmental parameters into musical gestures. Led directly to adaptive automotive sound systems.
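To make the idea concrete, here is a minimal sketch of that kind of environment-to-gesture mapping. It is an illustration only: the parameter names, ranges, and mapping curves are assumptions, not the Max/MSP engine itself.

```python
# Hypothetical sketch: turning an environmental snapshot into musical control
# values, in the spirit of the engine described above. All names, ranges and
# mapping curves are illustrative, not the production mapping.

from dataclasses import dataclass

@dataclass
class Environment:
    temperature_c: float   # ambient temperature in degrees Celsius
    cloud_cover: float     # 0.0 (clear) .. 1.0 (overcast)
    hour_of_day: int       # 0..23
    user_activity: float   # 0.0 (still) .. 1.0 (moving fast)

@dataclass
class Gesture:
    tempo_bpm: float       # pulse of the generated material
    note_density: float    # events per beat, normalised 0..1
    brightness: float      # spectral emphasis, normalised 0..1

def environment_to_gesture(env: Environment) -> Gesture:
    """Translate one snapshot of the environment into musical control values."""
    # Faster movement and warmer weather push the tempo up.
    tempo = 60.0 + 60.0 * env.user_activity + 0.5 * max(env.temperature_c, 0.0)
    # Midday keeps the texture dense; overcast skies thin it out.
    daylight = 1.0 - abs(env.hour_of_day - 12) / 12.0
    density = max(0.1, daylight * (1.0 - 0.5 * env.cloud_cover))
    return Gesture(tempo_bpm=min(tempo, 160.0),
                   note_density=density,
                   brightness=daylight)

if __name__ == "__main__":
    snapshot = Environment(temperature_c=18.0, cloud_cover=0.3,
                           hour_of_day=15, user_activity=0.4)
    print(environment_to_gesture(snapshot))
```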
Can subtle, generative sound interventions change how we perceive in-between spaces?
Explored real-time audio processing of environmental sound in urban settings. Fed into the City Songs project — a location-aware generative soundwalk.
Can LLM-driven systems generate adaptive transition sounds that maintain acoustic presence without being perceived as active content?
Developing an LLM-to-audio-engine pipeline for context-adaptive sound generation. Leads directly to CORPUS Reef.
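A rough sketch of what such an LLM-to-audio-engine handoff can look like: the model is asked for a structured parameter set, which is validated before the audio engine consumes it. The schema, the field names, and the call_llm stub are assumptions for illustration, not the pipeline under development.

```python
# Hypothetical sketch of an LLM-to-audio-engine handoff. The JSON schema and
# the call_llm placeholder are assumptions; any real LLM backend could sit
# behind the stub.

import json

TRANSITION_PROMPT = """\
You are a sound design assistant. Given the context below, return JSON with
keys "duration_s" (float), "layer_count" (int) and "spectral_tilt" (-1..1)
describing an unobtrusive transition sound.
Context: {context}
"""

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM backend is in use; returns a JSON string."""
    # A fixed response stands in for the model here.
    return '{"duration_s": 4.0, "layer_count": 3, "spectral_tilt": -0.2}'

def request_transition(context: str) -> dict:
    """Ask the model for transition parameters and validate the result."""
    raw = call_llm(TRANSITION_PROMPT.format(context=context))
    params = json.loads(raw)
    # Clamp values so the audio engine never receives out-of-range input.
    params["duration_s"] = min(max(params["duration_s"], 0.5), 30.0)
    params["spectral_tilt"] = min(max(params["spectral_tilt"], -1.0), 1.0)
    return params

if __name__ == "__main__":
    print(request_transition("rainy evening, car idle at a charging station"))
```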
These projects are where we develop and test the techniques we later apply to industrial products. Each one operates at the intersection of artistic practice and technical research.
The world's first mixed-reality opera. Staged in the ruins of an industrial heating plant, MAYA used augmented reality, spatial sound, and laser choreography to create a music-theatre experience where the audience's smartphones became part of the performance apparatus.
A lonely character seeking connection amid a crowd absorbed in their screens. No props, no set — only immaterial interventions: music, light, and code. The app shifted function across three phases, from archaeological site guide to magical looking glass to active audience tool. The principle: minimally invasive design, born from the space rather than imposed on it.
MAYA established the artistic and technical methods that became Sofilab: spatial audio in unconventional environments, technology as dramaturgical element, and the conviction that presence is created through sound, not scenery. Published in Theatre and Performance Design (Routledge, 2018).
Gustav Holst's orchestral suite as a spatial audio walk in a public park. Individual orchestral voices are mapped to physical locations — listeners walk through the music, approaching instruments, discovering layers, shaping their own mix through movement.
First orchestral-scale binaural 3D audio experience on a smartphone. Built with Unity, DearVR spatial audio, and real-time GPS tracking. Funded by FFF Bayern.
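The core spatial idea can be sketched in a few lines: each orchestral voice sits at a geographic point, and the listener's GPS position determines how loudly each voice is heard. The coordinates, radii, and linear falloff below are illustrative assumptions; the production app does the actual binaural rendering in Unity with DearVR.

```python
# Hypothetical sketch of the position-to-mix mapping behind a GPS soundwalk.
# Voice positions, radii and the attenuation curve are illustrative only.

import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres between two WGS84 points."""
    mean_lat = math.radians((lat1 + lat2) / 2.0)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6_371_000
    dy = math.radians(lat2 - lat1) * 6_371_000
    return math.hypot(dx, dy)

# Assumed layout: voice name -> (lat, lon, audible radius in metres)
VOICES = {
    "strings": (48.1500, 11.5500, 80.0),
    "brass":   (48.1506, 11.5510, 60.0),
    "harp":    (48.1495, 11.5492, 40.0),
}

def voice_gains(listener_lat, listener_lon):
    """Return a 0..1 gain per voice, falling off linearly with distance."""
    gains = {}
    for name, (lat, lon, radius) in VOICES.items():
        d = distance_m(listener_lat, listener_lon, lat, lon)
        gains[name] = max(0.0, 1.0 - d / radius)
    return gains

if __name__ == "__main__":
    print(voice_gains(48.1501, 11.5502))
```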
An AI-driven character based on Parzival (Wolfram von Eschenbach) that learns and interacts with live musicians during performance. The project explored how graphic notation can serve as a communication interface between machine intelligence and human performers — a dialogue in the space before "understanding."
The central research question — how do machines and artists address each other through sound and visual structure? — is the conceptual ancestor of our work on human-machine sonic communication.
A soundwalk that works anywhere in the world. Instead of pre-composed content for a fixed location, City Songs classifies the listener's real-world context — weather, sun position, temperature, environment type — and generates a site-specific sonic experience in real time.
Custom parsers classify OpenStreetMap data into environment types (urban residential, forest, open landscape). GPT-4 generates location-specific poetry. The system produces a unique experience for every walk, every location, every time of day.
The context classification pipeline developed here — weather, sun phase, location type, user state — is the direct precursor to the parameter systems we build for adaptive automotive sound.
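A simplified sketch of that classification step: nearby OpenStreetMap tags are reduced to a coarse environment type and combined with weather, sun phase, and user state into a single context record for the generative layer. The tag heuristics and category names are assumptions for illustration, not the production parsers.

```python
# Hypothetical sketch of a context classification pipeline: OSM tags near the
# listener vote for an environment type, which is bundled with weather, sun
# phase and user state. Heuristics and labels are illustrative only.

from collections import Counter

def classify_environment(nearby_tags: list[tuple[str, str]]) -> str:
    """Map a bag of OSM key/value tags to a coarse environment type."""
    votes = Counter()
    for key, value in nearby_tags:
        if key == "landuse" and value in ("residential", "commercial"):
            votes["urban residential"] += 1
        elif key in ("natural", "landuse") and value in ("wood", "forest"):
            votes["forest"] += 1
        elif key == "landuse" and value in ("meadow", "farmland", "grass"):
            votes["open landscape"] += 1
    return votes.most_common(1)[0][0] if votes else "open landscape"

def build_context(nearby_tags, weather: str, sun_phase: str, user_state: str) -> dict:
    """Assemble the parameter set handed to the generative layer."""
    return {
        "environment": classify_environment(nearby_tags),
        "weather": weather,
        "sun_phase": sun_phase,
        "user_state": user_state,
    }

if __name__ == "__main__":
    tags = [("landuse", "residential"), ("highway", "footway"),
            ("landuse", "residential")]
    print(build_context(tags, weather="light rain",
                        sun_phase="golden hour", user_state="walking"))
```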
Every research project on this page investigated one piece of the puzzle: how sound can adapt to context, how it can be generated in real time, how it can maintain presence without being perceived as content. CORPUS is where these lines converge.
The central research question: how do you build a generative audio model that is musically authentic, controllable in real time, and legally compliant from end to end? This is not just a technical challenge — it requires solving the training data problem. Current generative music models are trained on scraped internet data with no provenance, no artist consent, and no legal basis for commercial deployment.
CORPUS builds the first rights-compliant music corpus for generative AI — a licensing and royalty protocol that connects contributing musicians with the technology companies that train on their work. The resulting model, CORPUS Reef, is designed for real-time, edge-deployable sound generation in products.
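As a rough illustration of what rights compliance means at the data level, a single corpus entry has to carry provenance and licensing metadata alongside the audio itself. The fields and checks below are assumptions sketched for clarity, not the CORPUS protocol.

```python
# Hypothetical sketch of one entry in a rights-cleared training corpus:
# the audio reference plus the provenance and licensing metadata needed to
# trace a contribution back to a consenting musician. Field names and the
# validation rule are assumptions, not the actual protocol.

from dataclasses import dataclass, field

@dataclass
class CorpusEntry:
    audio_uri: str        # where the licensed recording lives
    contributor_id: str   # stable identifier for the musician
    license_id: str       # reference to the signed licence agreement
    consent_scope: str    # e.g. "model training + commercial deployment"
    royalty_share: float  # fraction of usage revenue owed to the contributor
    tags: list[str] = field(default_factory=list)

def is_usable(entry: CorpusEntry) -> bool:
    """An entry may enter training only with a licence, consent and a royalty term."""
    return bool(entry.license_id) and bool(entry.consent_scope) \
        and 0.0 < entry.royalty_share <= 1.0
```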
CORPUS is co-funded by the European Union through Creative Europe, the EU's programme supporting culture and audiovisual sectors. The grant recognizes CORPUS as a contribution to both cultural infrastructure and technological innovation.
This is Sofilab's largest research project to date — and the point where artistic research, adaptive audio engineering, and AI ethics meet.
Our research is co-funded by public institutions that recognize artistic and technological innovation as complementary forces.