
AI predicts the outcome of your marriage based on body language and voice

University of Southern California researchers trained AI to pick up on subtle cues in human behavior


By Brian Santo, contributing writer

It is creepy enough that artificial intelligence systems can analyze your television viewing habits and recommend other shows (recommend? Heck, create shows) that you will probably like with uncanny accuracy. Now AI systems are being trained to pick up on subtle cues in human behavior, and even in human physiognomy, and in some instances, they’re already proving to be more intuitive than humans.

In one experiment, an AI was used to analyze the vocal characteristics of spouses in couples therapy and asked to predict the outcomes of the therapy. It was as successful as, and sometimes more successful than, human experts.

Two variables make it difficult to evaluate whether couples therapy is making progress: the relationship dynamics of every couple are different, and therapists’ evaluations will always be subjective. Consequently, the profession has been striving to develop objective metrics.

In a project based at the University of Southern California (USC), researchers set out to develop an AI-based system that could provide such metrics. There already exists a body of research that maps a wide variety of vocal characteristics to emotions. These include commonly identified characteristics such as pitch and intensity, but also others such as jitter and shimmer, which capture cycle-to-cycle variations in pitch and amplitude, respectively. Another is the harmonics-to-noise ratio (HNR).
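For a concrete sense of what those features measure, here is a minimal sketch of extracting them from a mono speech recording with the librosa library. The simplified jitter, shimmer, and HNR formulas below are illustrative assumptions, not the algorithms the USC team used:

    import numpy as np
    import librosa

    def acoustic_features(path):
        # Load the recording as mono audio at 16 kHz.
        y, sr = librosa.load(path, sr=16000, mono=True)

        # Pitch: per-frame fundamental frequency via the YIN estimator.
        f0 = librosa.yin(y, fmin=75, fmax=400, sr=sr)
        f0 = f0[np.isfinite(f0)]

        # Intensity: per-frame root-mean-square energy.
        rms = librosa.feature.rms(y=y)[0]

        # Jitter (simplified): mean variation of the pitch period between
        # consecutive frames, relative to the mean period.
        periods = 1.0 / f0
        jitter = np.mean(np.abs(np.diff(periods))) / np.mean(periods)

        # Shimmer (simplified): mean frame-to-frame amplitude variation.
        shimmer = np.mean(np.abs(np.diff(rms))) / np.mean(rms)

        # HNR (crude stand-in): energy ratio of the harmonic component to the
        # residual from a harmonic/percussive decomposition.
        harmonic, residual = librosa.effects.hpss(y)
        hnr_db = 10 * np.log10(np.sum(harmonic**2) / (np.sum(residual**2) + 1e-12))

        return {"mean_f0_hz": float(np.mean(f0)),
                "mean_intensity": float(np.mean(rms)),
                "jitter": float(jitter),
                "shimmer": float(shimmer),
                "hnr_db": float(hnr_db)}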

The researchers developed a set of sophisticated algorithms for evaluating these characteristics and mapping them to emotion so that they could analyze the quality of conversation between spouses, as they explain in their paper, “Predicting couple therapy outcomes based on speech acoustic features.” Are there signs of withdrawal? Are they increasing in severity or frequency (bad) or decreasing (good)? Are there signs of humor? Are they increasing (good) or decreasing (bad)? Which characteristics of conversation are more indicative of progress, which less, and in what order?

The system also takes into consideration conversational characteristics such as who spoke when, and for how long. Interestingly, the AI disregards what is said. 
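Deriving those conversational-dynamics features is mostly bookkeeping once the audio has been diarized into speaker-labeled segments. A minimal sketch, with toy segments and hypothetical speaker labels (the paper's actual feature set is richer):

    from collections import defaultdict

    # Toy diarization output: (speaker, start_seconds, end_seconds).
    segments = [("spouse_a", 0.0, 4.2), ("spouse_b", 4.5, 6.1),
                ("spouse_a", 6.3, 11.0), ("spouse_b", 11.2, 12.0)]

    talk_time = defaultdict(float)
    turns = 0
    prev_speaker = None
    for speaker, start, end in segments:
        talk_time[speaker] += end - start
        if speaker != prev_speaker:  # count each change of floor as a turn
            turns += 1
        prev_speaker = speaker

    total = sum(talk_time.values())
    for speaker, t in talk_time.items():
        print(f"{speaker}: {t:.1f} s ({t / total:.0%} of talk time)")
    print(f"speaker turns: {turns}")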

Once the researchers had trained their AI, they used it on an existing data set with known outcomes. They also asked human psychologists to evaluate the same data. The psychologists, who weighed what was said in addition to vocal characteristics, made correct predictions in 75.6% of the cases. The AI, using only vocal characteristics, had a success rate of 79.3%.
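As a rough illustration of that evaluation setup, and not the paper's actual pipeline, one could cross-validate a standard classifier on per-couple feature vectors against the known outcomes. The random-forest choice and the stand-in data here are assumptions:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # Stand-in data: one row per couple with the acoustic features sketched
    # earlier; real labels would be the known therapy outcomes.
    X = rng.normal(size=(100, 5))      # [f0, intensity, jitter, shimmer, hnr]
    y = rng.integers(0, 2, size=100)   # 1 = improved, 0 = did not

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean cross-validated accuracy: {scores.mean():.1%}")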

The results, the researchers say, should encourage further investigation into measuring non-verbal cues to aid in therapy.

Meanwhile, researchers at Stanford University trained a neural network on over 35,000 faces to see if it was possible to distinguish between straight and gay by physical characteristics alone. They made their conclusion the title of their paper: “Deep neural networks are more accurate than humans at detecting sexual orientation from facial images.”

While gays and lesbians have been invoking “gaydar” for a quarter-century or more, the term was always nominally a joke about intuition. The Stanford research seems akin to discredited pseudosciences such as physiognomy and phrenology, which makes the paper immediately more controversial; yet its initial results will be hard to dismiss out of hand until replication attempts either confirm or refute them.

The authors refer to a body of research that shows that people can look at others’ faces and identify character, emotional states, personality, sexual orientation, and other traits with better-than-chance accuracy.

They also refer to prenatal hormone theory, which posits a physiological basis for sexual orientation (the research is inconclusive). “Consistent with the prenatal hormone theory of sexual orientation, gay men and women tended to have gender-atypical facial morphology, expression, and grooming styles,” they wrote in their paper.

They said that they trained their neural network to consider both fixed facial features (nose shape, for example) and unfixed features (grooming).

The researchers used photos from a dating site where each person self-identified as heterosexual or gay. When the neural network was presented with two different faces, one heterosexual and one gay or lesbian, the system correctly guessed 81% of the time with men and 71% of the time with women. For human judges, the numbers were 61% and 54%.
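A note on reading those numbers: a “shown two faces, pick the right one” score is a pairwise forced-choice accuracy, which (ignoring ties) is mathematically the same as ROC AUC. A short demonstration on synthetic classifier scores:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    scores_a = rng.normal(1.0, 1.0, 500)   # classifier scores, class A
    scores_b = rng.normal(0.0, 1.0, 500)   # classifier scores, class B

    # Forced choice: fraction of (A, B) pairs where A outscores B.
    pairwise = np.mean(scores_a[:, None] > scores_b[None, :])

    labels = np.concatenate([np.ones(500), np.zeros(500)])
    scores = np.concatenate([scores_a, scores_b])
    print(pairwise, roc_auc_score(labels, scores))  # the two values match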

The authors state explicitly that their research may have dangerous repercussions. First, the gap between the neural network’s results and the human judges’ indicates that the differences in facial features the network detects are so subtle that humans clearly aren’t picking up on them.

They write that there are still many places where gay men and women are persecuted, and that “given that companies and governments are increasingly using computer-vision algorithms to detect people’s intimate traits, our findings expose a threat to the privacy and safety of gay men and women.”

While the Stanford project raised an immediate red flag, the USC project also merits some caution, though unlike the Stanford researchers, the USC researchers avoided mention of the possible negative societal ramifications of what they’d done.

What they’d done is create tools that could be misused to snoop on people’s conversations and make crude evaluations of the participants’ emotional states.
