Researchers alter video, discover way to make the invisible visible

Breakthrough opens door to several new technology possibilities


By amplifying variations in successive video frames, a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has developed a way of making subtle movements captured on video, typically invisible to the human eye, distinctly visible.

The group that worked on this project includes graduate student Michael Rubinstein; recent alumni Hao-Yu Wu '12, MNG '12, and Eugene Shih SM '01, PhD '10; and professors William Freeman, Frédo Durand, and John Guttag. In the video below, they demonstrate how the technology works; specifically, how it can reveal fast, subtle changes such as a human pulse, breathing, and a person’s skin reddening and paling with the flow of blood to the region.

Image: In these frames of video, a new algorithm developed by the MIT team amplifies the subtle change in skin color caused by the pumping of blood.

The team likens the technology to the equalizer in a sound system, whose job is to boost some frequencies and cut others. The difference with the team’s approach is that the frequency being boosted is the rate of color change across a sequence of video frames, not an audio signal.
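To make the equalizer analogy concrete, here is a minimal sketch of the idea in Python, assuming the video is a NumPy array of grayscale frames. The function name, parameters, and filter choice are illustrative assumptions, not the team’s actual implementation.

import numpy as np
from scipy.signal import butter, filtfilt

def amplify_band(frames, fps, low_hz, high_hz, gain):
    """frames: (T, H, W) float array of grayscale video frames."""
    # Design a temporal band-pass filter: like one band of an equalizer,
    # it passes variations between low_hz and high_hz and cuts the rest.
    nyquist = fps / 2.0
    b, a = butter(2, [low_hz / nyquist, high_hz / nyquist], btype="band")
    # Treat each pixel's brightness over time as its own signal and
    # filter along the time axis.
    variation = filtfilt(b, a, frames, axis=0)
    # Boost the isolated variation and add it back to the original video.
    return np.clip(frames + gain * variation, 0.0, 1.0)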

Taking a closer look

The prototype allows the user to specify the frequency range to amplify as well as the degree of amplification. It works in real time and displays the original video alongside the altered one, with the changes between the two magnified so the differences are easy to see.
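Those two user controls map directly onto the passband and gain of the sketch above. The demo below runs it on a synthetic 30-frames-per-second clip containing a flicker far too faint to see; the 0.8-3.0 Hz band and 50x gain are illustrative guesses, not the prototype’s actual settings.

# Reuses numpy (np) and amplify_band from the sketch above.
fps = 30
t = np.arange(300) / fps
flicker = 0.005 * np.sin(2 * np.pi * 1.0 * t)  # ~1 Hz, invisibly faint
frames = 0.5 + flicker[:, None, None] * np.ones((1, 8, 8))
amplified = amplify_band(frames, fps, low_hz=0.8, high_hz=3.0, gain=50.0)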

The team suggests that if the range of frequencies is wide enough, the software can also amplify changes that occur only once, as opposed to those that recur at regular intervals (a heartbeat, breathing, a plucked guitar string). This would let users compare different images of the same scene and easily pick out changes that would otherwise go unnoticed.

Video

In the video below, the team explains the concept behind the technology, describes how it works, and presents a few examples of it in action.

Stumbling upon this discovery

The team started out looking to create a technology that would amplify color changes. In their experiments, however, they found that the program was also surprisingly effective at amplifying motion.

“We started from amplifying color, and we noticed that we’d get this nice effect, that motion is also amplified,” Rubinstein explains. “So we went back, figured out exactly why that happens, studied it well, and saw how we can incorporate that to do better motion amplification.”

A future in healthcare

Rubinstein foresees this technology being particularly useful in healthcare. For instance, it could be used for the “contactless monitoring” of a patient’s vital signs: Boosting one set of frequencies allows the measurement of pulse rates via subtle changes in skin coloration; boosting another set of frequencies allows monitoring of breathing.
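One plausible way to turn that skin-color signal into a pulse reading is sketched below, assuming an RGB clip of a patch of skin; the band limits and names are illustrative assumptions, not the team’s published method.

import numpy as np

def estimate_pulse_bpm(frames, fps, low_hz=0.8, high_hz=3.0):
    """frames: (T, H, W, 3) RGB video of skin; returns beats per minute."""
    # Blood flow subtly modulates skin color; average the green channel
    # over the patch to get one brightness value per frame.
    signal = frames[..., 1].mean(axis=(1, 2))
    signal = signal - signal.mean()  # remove the constant (DC) component
    # Find the strongest temporal frequency within a plausible pulse band
    # (0.8-3.0 Hz is roughly 48-180 beats per minute).
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]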

This kind of contactless monitoring could be particularly useful for infants who are born prematurely or otherwise require early medical attention. “Their bodies are so fragile, you want to attach as few sensors as possible,” Rubinstein notes.

He adds that the technology could be used in baby monitors as well, letting concerned parents check the vital signs of a sleeping toddler simply by tuning the video feed to the frequencies of interest.

Popular concept

Since the team shared its discovery, outside researchers have begun suggesting to Rubinstein and the rest of the MIT team other ways to use the technology, including laparoscopic imaging of internal organs, long-range surveillance systems, contactless lie detection, and more.

“It’s a fantastic result,” says Maneesh Agrawala, an associate professor in the electrical engineering and computer science department at the University of California at Berkeley and director of the department’s Visualization Lab. Agrawala points out that Freeman and Durand were part of a team of MIT researchers who made a splash at Siggraph 2005 with a paper on motion magnification in video.

“This approach is both simpler and allows you to see some things that you couldn’t see with that old approach,” Agrawala says. “The simplicity of the approach makes it something that has the possibility for application in a number of places. I think we’ll see a lot of people implementing it because it’s fairly straightforward.” ■

Story via: MIT.edu
