We design non-verbal communication for autonomous products — feedback, presence, adaptive behavior.
How does a driver navigate a car's interface without taking eyes off the road? We designed a complete UX sound system — "Drive-By-Ear" — where audio cues replace visual menus. Every interaction state has a distinct sonic signature, from media and navigation to climate control. Presented at Paris Mondial de l'Auto 2022.
In the operating room, a robotic imaging system's audio cues were indistinguishable from patient monitors — creating confusion and cognitive load for surgeons. We designed a non-verbal communication system for the robot: distinct sound signatures for movement types, radiation intensity levels, and system states — allowing surgeons to keep their attention on the patient.
When an in-car entertainment system starts up, the first seconds define how premium the experience feels. We developed brand-specific start-up and shut-down sounds in Dolby Atmos for Harman Kardon, a UI sound system, and an atmospheric sound layer bridging silence to music. The project evolved into a redefinition of the Harman Kardon sonic brand identity.
How can sound adapt to weather, time of day, location, and user state — in real time? Across multiple prototypes and industry collaborations, we developed systems that generate sound combinatorially from contextual parameters instead of playing static files. This line of research led directly to CORPUS Reef — our real-time generative sound model.
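To illustrate the principle, the sketch below derives continuous synthesis parameters from contextual inputs instead of selecting a static file. The function, mappings, and value ranges are hypothetical, chosen only to show the shape of the approach:

```python
import math

def context_to_params(hour: float, temp_c: float, speed_kmh: float) -> dict:
    """Map contextual inputs to synthesis parameters (hypothetical mapping).

    Instead of triggering a pre-rendered file, the sound engine receives
    continuous parameters derived from the current context.
    """
    # Brightness follows the daylight cycle: peaks at noon, dips at night.
    brightness = 0.5 + 0.5 * math.cos((hour - 12) / 24 * 2 * math.pi)
    # Warmer temperatures shift the timbre toward warmer harmonics.
    warmth = min(max((temp_c + 10) / 50, 0.0), 1.0)
    # Driving speed scales rhythmic density, clamped at motorway pace.
    density = min(speed_kmh / 130, 1.0)
    return {"brightness": round(brightness, 2),
            "warmth": round(warmth, 2),
            "density": round(density, 2)}
```

The key design choice is that every input is continuous, so two moments in time never have to sound identical — the combinatorics live in the parameter space, not in a library of files.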
Sound strategy, adaptive audio systems, and generative AI for products that interact with people. We work across the full arc — from defining a product's sonic identity to shipping production-ready systems.
Every product needs a coherent sound language — not isolated files, but a system. Feedback, notifications, ambient presence, sonic branding: designed as one auditive identity that makes a product recognizable, navigable, and approachable. Delivered as production-ready assets — WAV, spatial formats (Dolby Atmos, MPEG-H), middleware-integrated.
Sound that responds to context, behavior, and environment in real time. From situation-adaptive audio to ambient states that shift with driving dynamics, user state, or operational context. Built with game audio middleware (FMOD, Wwise), custom engines, and spatial audio frameworks. Designed for edge deployment.
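In middleware terms, this kind of adaptation usually means writing contextual state into named game parameters (parameters in FMOD, RTPCs in Wwise). A minimal bridging layer might look like this sketch; the class, parameter names, and scalings are illustrative, not a shipped calibration:

```python
class AdaptiveAudioBridge:
    """Push contextual state into an audio middleware parameter space.

    `backend_set_parameter` stands in for a middleware call such as
    FMOD's setParameterByName or Wwise's SetRTPCValue (hypothetical wrapper).
    """

    def __init__(self, backend_set_parameter):
        self._set = backend_set_parameter

    def update(self, vehicle_state: dict) -> None:
        # Each mapping is illustrative: normalize raw signals into the
        # 0..1 ranges the audio designer works with inside the middleware.
        self._set("speed_norm", min(vehicle_state["speed_kmh"] / 200, 1.0))
        self._set("rain", 1.0 if vehicle_state["wipers_on"] else 0.0)
        self._set("cabin_activity", vehicle_state.get("voices", 0) / 5)
```

Keeping the normalization in one bridge layer means the sound design inside the middleware stays decoupled from vehicle signal formats — the same audio project can ship across platforms with different sensor stacks.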
Certified R&D lab (BSFZ-certified under Germany's Forschungszulage R&D funding scheme). We investigate context-sensitive sound generation, interaction models, and generative audio for industrial applications. We help partners structure joint R&D proposals for public funding.
Sound aesthetics and use case design. Three approaches, narrowed to one in dialogue with the client.
Built and tested in our Immersive Sound Stage (ISS) — a modular multi-speaker environment for spatial audio prototyping.
User testing in simulated environments. Measuring emotional response, cognitive load, and brand coherence.
Production-ready assets in all major formats (Dolby Atmos, MPEG-H, Auro-3D). On-device mastering and technical integration supervision.
Adaptive audio systems can't scale with static sound libraries. Generative AI is the only path — but it needs training data that is legally compliant and musically authentic. Scraping the internet is not an option. And models trained on stolen data produce results with no musical provenance.
A real-time generative sound model trained on rights-cleared data from a community of contributing musicians. Edge-deployable, controllable, and legally compliant from end to end. What you hear is not replicated — it emerges from the musicians who trained the model.
CORPUS is the licensing and royalty protocol that makes Reef possible. Musicians contribute, keep their rights, and receive royalties based on the value their work creates. For partners who define their brand through musical authenticity, this matters: the provenance is real.
corpus.music

Our background is in composition, theatre, and interactive media — but also in commercial production, brand communication, and industrial events. That combination is why our work sounds different. We bring artistic judgement with a clear awareness of brand, product, and delivery.
Designing a product's sonic identity is character design. Making that identity respond to context in real time is adaptive composition. Both require artistic judgement — applied with engineering discipline.
Founded in 2017 by Mathis Nitschke, composer and sound designer with three decades of experience across film scoring, theatre, game audio, commercial production, and immersive installations. Since 2019, the studio has focused on auditive interaction for industrial partners — combining artistic practice with research in psychoacoustics, spatial audio, and generative models.
Most of our team members are interdisciplinary by nature — engineers who are also artists, researchers who compose. Our core team brings decades of combined experience in adaptive and interactive music, game audio, and generative systems. Projects delivered in automotive, medical technology, and consumer electronics.