Sound carries more information than you think.
A mechanical car blinker communicated four dimensions of meaning with every click. Its electronic replacement communicates one. We lost three dimensions of information — and didn't even notice. Until now.
Four dimensions in a click
The old blinker relay — an electromagnetic switch, a mechanical component that clicks and clacks as a byproduct of its function — was a remarkable piece of unintentional communication design. With every click-clack, it told you four things at once:
Function: the blinker is active. That's the obvious one.
Diagnostics: the rhythm tells you about the health of other components. A faster-than-normal rhythm means a bulb is blown. An extremely slow rhythm means the battery is weak. The driver hears this without looking at a gauge.
Wear: the relay is a mechanical part that ages with the car. The contacts oxidize, the mechanism hardens. An aging relay sounds harder, more brittle. The sound told you about the age and condition of the vehicle itself.
Vitality: every single click sounded slightly different. Mechanical variation meant the sound was never perfectly identical. This is what made it feel alive — not a recording, but a physical event happening in real time.
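The four dimensions above can be sketched as a tiny state-to-rhythm mapping. A minimal sketch, assuming illustrative interval values and a made-up voltage threshold, not real automotive specs:

```python
import random

# Nominal click interval of a healthy relay (illustrative value, not a spec).
NOMINAL_INTERVAL_S = 0.7

def click_interval(bulb_blown: bool, battery_voltage: float) -> float:
    """Return the time until the next click, in seconds.

    A blown bulb lowers the electrical load, so the relay cycles faster;
    a weak battery slows the whole mechanism down.
    """
    interval = NOMINAL_INTERVAL_S
    if bulb_blown:
        interval *= 0.5          # faster-than-normal rhythm: check your bulbs
    if battery_voltage < 11.5:   # threshold is an assumption for illustration
        interval *= 2.0          # sluggish rhythm: the battery is weak
    # Vitality: a little mechanical jitter, so no two clicks are identical.
    return interval * random.uniform(0.97, 1.03)
```

The point of the jitter line is the fourth dimension: even a simulation feels more alive when the signal is never perfectly identical twice.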
The electronic replacement reproduces a sample. One sample, played identically every time. It communicates: blinker is active. Nothing more. Three dimensions of information — gone.
The ear processes in parallel
This matters because of how human hearing works. The ear doesn't process sound sequentially, like reading text on a screen. It processes multiple dimensions simultaneously. You hear pitch, rhythm, timbre, spatial position, and intensity all at once — and your brain integrates them into a single, intuitive impression without conscious effort.
This is fundamentally different from vision. A status indicator on a dashboard shows one piece of information at a time, and you have to look at it. A sound reaches you peripherally, requires no visual attention, and can carry multiple layers of meaning in a single moment.
This is why a surgeon can keep eyes on the patient while a robot communicates its state through sound. This is why a driver hears that something is wrong before any warning light comes on. The ear was built for exactly this: extracting complex, layered information from the acoustic environment, instantly, without deliberate focus.
See also: Why your robot scares people →

What products are throwing away
The transition from mechanical to electronic products has been, acoustically, a story of loss. Every mechanical device had a rich, unintentional sound identity. The typewriter told you about keystroke pressure, ribbon condition, and carriage position — all through sound. The film camera's shutter told you about exposure time. The sewing machine told you about thread tension and motor load.
Electronic products replaced all of this with silence — or with generic, static sounds that carry no information beyond "something happened." A notification beep tells you that you received a message, but not whether it's urgent. A warehouse robot beeps when it moves, but the beep is identical whether it's approaching you or moving away. A medical pump alarms, but the alarm sounds the same for a minor occlusion and a critical failure.
This is like replacing a full-color photograph with a binary pixel: on or off. The information capacity of sound is vast, and most products use almost none of it.
Why static sound files can't fix this
You can't restore multidimensional sound information with pre-recorded audio files. A sound file is frozen. It was produced once, in a studio, for a generic situation. It can't reflect the actual state of the system in this moment, in this context, for this user.
The mechanical blinker was rich precisely because it was generated in real time by the actual system. The sound was a direct physical expression of the current state — not a representation of it, but a manifestation. To bring this richness back into electronic products, sound needs to be modulated by actual system parameters — not a fixed beep, but a living signal shaped by what's happening now.
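One way to read "modulated by actual system parameters" is as a synthesis function that derives its acoustic parameters from live state every time it fires, instead of replaying a frozen sample. A minimal sketch; the parameter names and mappings are invented for illustration:

```python
def sound_event(load: float, proximity: float, urgency: float) -> dict:
    """Map live system state to the acoustic parameters of one sound event.

    load, proximity, and urgency are assumed normalized to 0.0..1.0.
    The mappings are illustrative, not a production psychoacoustic model.
    """
    return {
        # Higher urgency -> higher pitch (one octave of range above 440 Hz).
        "pitch_hz": 440.0 * 2 ** urgency,
        # Closer objects -> denser rhythm, down to 0.2 s between repeats.
        "repeat_interval_s": 1.0 - 0.8 * proximity,
        # Heavier load -> darker timbre (lower low-pass cutoff).
        "lowpass_hz": 8000.0 - 6000.0 * load,
    }
```

A warehouse robot built this way no longer beeps identically while approaching and retreating: the rhythm densifies as proximity rises, and the listener decodes that without looking up.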
Sound as information architecture
The difference between a beep and a designed sound event is the difference between a label and a language. A beep says one thing. A designed, adaptive sound can simultaneously communicate:
Function: what is happening right now.
State: how the system is doing.
Context: where, when, and under what conditions.
Emotion: how this moment should feel.
Identity: who this product is.
All in a single sonic moment. All processed by the listener in parallel, without effort, without looking away from what they're doing.
This is not audio decoration. This is information architecture in sound — designing what each acoustic moment carries, how it relates to the moments before and after, and how the overall sonic behavior of a product forms a coherent, readable language.
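One way to make such a language readable is to give each semantic layer its own acoustic carrier, so the ear can decode all of them in parallel. A sketch of one possible assignment; the pairings are a design assumption, not a standard:

```python
# Each semantic layer rides on an independent acoustic dimension.
# The pairings below are one illustrative design choice, not a standard.
SONIC_LANGUAGE = {
    "function": "which motif plays",              # what is happening
    "state":    "tempo and regularity",           # how the system is doing
    "context":  "loudness and spatial position",  # where, under what conditions
    "emotion":  "timbre and envelope",            # how the moment should feel
    "identity": "a shared melodic signature",     # who this product is
}
```

Keeping the carriers independent is what preserves parallel decoding: changing the tempo to signal state never disturbs the motif that signals function.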
The old blinker did this by accident. The next generation of products can do it by design.
We build adaptive sound systems that carry real-time information — from context-aware feedback to generative sonic identities. If your product communicates through sound, every click should count.