Sound is how products communicate.

We design non-verbal communication for autonomous products — feedback, presence, adaptive behavior.

Get in touch

What this looks like.

Automotive

Hopium Machina Vision

How does a driver navigate a car's interface without taking eyes off the road? We designed a complete UX sound system — "Drive-By-Ear" — where audio cues replace visual menus. Every interaction state has a distinct sonic signature, from media and navigation to climate control. Presented at Paris Mondial de l'Auto 2022.

Medical technology

Surgical Imaging Robot

In the operating room, a robotic imaging system's audio cues were indistinguishable from patient monitors — creating confusion and cognitive load for surgeons. We designed a non-verbal communication system for the robot: distinct sound signatures for movement types, radiation intensity levels, and system states — allowing surgeons to keep their attention on the patient.

Automotive · Consumer electronics

Harman AutoOne

When an in-car entertainment system starts up, the first seconds define how premium the experience feels. We developed brand-specific start-up and shut-down sounds in Dolby Atmos for Harman Kardon, a UI sound system, and an atmospheric sound layer bridging silence to music. The project evolved into a redefinition of the Harman Kardon sonic brand identity.

[Diagram: contextual parameters — weather, dayphase, location, motion, context, state — feeding into CORPUS Reef]
R&D · Adaptive audio

Context-Aware Sound Generation

How can sound adapt to weather, time of day, location, and user state — in real time? Across multiple prototypes and industry collaborations, we developed systems that generate sound combinatorially from contextual parameters instead of playing static files. This line of research led directly to CORPUS Reef — our real-time generative sound model.
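The combinatorial idea can be sketched in a few lines: contextual parameters select and weight sound layers rather than trigger one static file per situation. Everything below — layer names, thresholds, the mapping itself — is illustrative, not drawn from the actual systems or from CORPUS Reef:

```python
from dataclasses import dataclass

# Illustrative sketch only: contextual inputs select and weight
# sound layers instead of playing a fixed file per situation.
# All layer names and thresholds are hypothetical.

@dataclass
class Context:
    weather: str      # e.g. "rain", "clear"
    dayphase: str     # e.g. "night", "noon"
    speed_kmh: float  # continuous motion parameter

def compose_layers(ctx: Context) -> dict[str, float]:
    """Map a context to a mix of sound layers with gain weights."""
    layers = {"base_pad": 1.0}
    if ctx.weather == "rain":
        layers["granular_texture"] = 0.6
    if ctx.dayphase == "night":
        layers["low_drone"] = 0.8
        layers["base_pad"] = 0.5  # darker overall mix at night
    # continuous parameters modulate layers rather than switch them
    layers["motion_shimmer"] = min(ctx.speed_kmh / 120.0, 1.0)
    return layers

mix = compose_layers(Context(weather="rain", dayphase="night", speed_kmh=60))
print(mix)
```

The point of the sketch: a handful of parameters yields a combinatorial space of mixes, which no static sound library could enumerate.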

From concept to embedded system.

Sound strategy, adaptive audio systems, and generative AI for products that interact with people. We work across the full arc — from defining a product's sonic identity to shipping production-ready systems.

Product Sound Identity

Every product needs a coherent sound language — not isolated files, but a system. Feedback, notifications, ambient presence, sonic branding: designed as one auditive identity that makes a product recognizable, navigable, and approachable. Delivered as production-ready assets — WAV, spatial formats (Dolby Atmos, MPEG-H), middleware-integrated.

Adaptive & Context-Aware Audio

Sound that responds to context, behavior, and environment in real time. From situation-adaptive audio to ambient states that shift with driving dynamics, user state, or operational context. Built with game audio middleware (FMOD, Wwise), custom engines, and spatial audio frameworks. Designed for edge deployment.
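One common way such ambient state shifts are realized — shown here as a hypothetical sketch, not the studio's middleware setup — is a small state machine with hysteresis, so the soundscape doesn't flicker when a driving parameter hovers near a threshold. State names and thresholds are invented for illustration:

```python
# Hypothetical sketch: pick an ambient sound state from driving
# dynamics, with hysteresis so the state doesn't flicker near
# thresholds. Names and numbers are illustrative only.

CALM, FLOW, DYNAMIC = "calm", "flow", "dynamic"

def next_state(current: str, speed_kmh: float, accel: float) -> str:
    """Choose the next ambient state from speed and acceleration."""
    if abs(accel) > 2.5 or speed_kmh > 130:
        return DYNAMIC
    if speed_kmh > 60:
        # leave DYNAMIC only once things have clearly settled
        return FLOW if current != DYNAMIC or abs(accel) < 1.0 else DYNAMIC
    return CALM

state = CALM
for speed, accel in [(30, 0.2), (80, 0.5), (140, 3.0), (100, 0.4)]:
    state = next_state(state, speed, accel)
print(state)  # settles back to "flow" after the dynamic burst
```

In a production setting, the chosen state would typically drive a middleware parameter (e.g. an FMOD or Wwise game parameter) that crossfades between ambient layers.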

Research & Development

Certified R&D lab (BSFZ Forschungszulage). We investigate context-sensitive sound generation, interaction models, and generative audio for industrial applications. We help partners structure joint R&D proposals for public funding.

1. Concept

Sound aesthetics and use case design. Three approaches, narrowed to one in dialogue with the client.

2. Prototype

Built and tested in our Immersive Sound Stage (ISS) — a modular multi-speaker environment for spatial audio prototyping.

3. Validate

User testing in simulated environments. Measuring emotional response, cognitive load, and brand coherence.

4. Deliver

Production-ready assets in all major formats (Dolby Atmos, MPEG-H, Auro-3D). On-device mastering and technical integration supervision.

Adaptive sound needs a new foundation.

Adaptive audio systems can't scale with static sound libraries. Generative AI is the only path — but it needs training data that is legally compliant and musically authentic. Scraping the internet is not an option. And models trained on stolen data produce results with no musical provenance.

CORPUS Reef

A real-time generative sound model trained on rights-cleared data from a community of contributing musicians. Edge-deployable, controllable, and legally compliant from end to end. What you hear is not replicated — it emerges from the musicians who trained the model.

CORPUS is the licensing and royalty protocol that makes Reef possible. Musicians contribute, keep their rights, and receive royalties based on the value their work creates. For partners who define their brand through musical authenticity, this matters: the provenance is real.

corpus.music

Artistic rigour applied to industrial problems.

Our background is in composition, theatre, and interactive media — but also in commercial production, brand communication, and industrial events. That combination is why our work sounds different. We bring artistic judgement with a clear awareness of brand, product, and delivery.

Designing a product's sonic identity is character design. Making that identity respond to context in real time is adaptive composition. Both require artistic judgement — applied with engineering discipline.

Founded in 2017 by Mathis Nitschke, composer and sound designer with three decades of experience across film scoring, theatre, game audio, commercial production, and immersive installations. Since 2019, the studio has focused on auditive interaction for industrial partners — combining artistic practice with research in psychoacoustics, spatial audio, and generative models.

6+ years in automotive sound interaction
4 certified R&D projects (Forschungszulage)
8 core team members across disciplines

Most of our team members are interdisciplinary by nature — engineers who are also artists, researchers who compose. Our core team brings decades of combined experience in adaptive and interactive music, game audio, and generative systems. Projects delivered in automotive, medical technology, and consumer electronics.

Munich-based, globally connected

Mathis Nitschke
Founder & Artistic Director
Three decades in film scoring, theatre, game audio, and commercial production. Collaborations include Audi, Harman, Brainlab, and Linde.
Jörg Hüttner
Composer & Sound Designer
20+ years in film, gaming, and industrial sound. Has worked with Hans Zimmer and Atli Örvarsson.
Lars Ullrich
CTO, CORPUS
Full-stack engineer and digital artist. Years of experience in adaptive music systems and high-performance computing.
Max Graf
AI & Music Researcher
PhD in AI & Music (Queen Mary University London). Published researcher in generative audio, AI, and XR.
Michael Hartung
Interactive Audio Designer / Programmer
Game audio specialist. Builds adaptive audio systems with middleware and custom engines.
Luciano Pinna
Augmented Reality Designer / Programmer
Conceptual artist and AR developer. Creates spatial and interactive audio-visual experiences.
Jacob A.C. Andersen
Sound Designer & Product Manager
Background in interactive and adaptive music. Bridges creative and operational sides of projects.
Anja Gerscher
Graphics Designer
20+ years in brand development and visual communication. Bridges strategy, identity, and spatial design.
Audi · Harman · Harman Kardon · UE Studios · K5 Factory · ECM Records
Co-funded by the European Union · BSFZ Forschungszulage · FFF Bayern · Münchner Philharmoniker · Creative Industries Fund NL

Let's talk

Or write directly:

info@sofilab.art

Sofilab GmbH
Akademiestr. 3
80799 Munich, Germany

LinkedIn