The Times: Conversations can be reconstructed from video footage of nearby objects
Researchers at MIT have found a way to reconstruct audio signals by examining high-resolution video footage of nearby objects.
The sound waves emitted when we speak produce vibrations in objects as diverse as crisp packets, plant leaves and even glasses of water, and those vibrations can now be converted back into audio.
“When sound hits an object, it causes the object to vibrate,” Abe Davis, one of the MIT researchers, said. “The motion of this vibration creates a very subtle visual signal that’s usually invisible to the naked eye. People didn’t realise that this information was there.”
To work effectively, the technique requires cameras capable of capturing between 2,000 and 6,000 frames per second, although even ordinary household cameras could allow eavesdroppers to identify the number and sex of speakers.
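The core idea can be sketched in a few lines: each video frame is reduced to a single number that tracks the object's motion, and that per-frame sequence, sampled at the camera's frame rate, becomes the audio signal. The toy below uses mean pixel brightness as the motion proxy on synthetic frames; the actual MIT system tracks far subtler sub-pixel phase motion across scales and orientations, so treat this purely as an illustration, with all names and parameters invented for the example.

```python
import numpy as np

def frames_to_audio(frames, fps):
    """Toy sketch of the visual-microphone idea: collapse each frame to
    one sample (mean intensity), remove the DC offset, and normalise.
    The real MIT technique recovers much subtler sub-pixel motion; this
    crude proxy only works when vibration visibly modulates brightness."""
    signal = frames.reshape(len(frames), -1).mean(axis=1)
    signal = signal - signal.mean()          # remove constant brightness
    peak = np.abs(signal).max()
    return signal / peak if peak > 0 else signal

# Synthetic demo: a 440 Hz vibration faintly modulating brightness,
# "filmed" for one second at 4,000 frames per second (hypothetical values
# chosen to fall inside the 2,000-6,000 fps range quoted in the article).
fps = 4000
t = np.arange(fps) / fps
brightness = 128.0 + 5.0 * np.sin(2 * np.pi * 440 * t)
frames = np.tile(brightness[:, None, None], (1, 8, 8))  # 8x8-pixel frames
audio = frames_to_audio(frames, fps)
```

Because each frame yields one audio sample, the frame rate caps the recoverable bandwidth: by the Nyquist limit, 4,000 fps can capture frequencies only up to 2,000 Hz, which is why such high-speed cameras are needed for intelligible speech.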
Read the full story at The Times.