
Building Our Emotionally Intelligent Future

How will the development of affective computing and artificial emotional intelligence transform our relationship with technology?


In a large studio auditorium, hundreds of attendees fill the seats, struggling to contain their excitement. Each has been lucky enough to score tickets to this, America’s most popular and highest-rated late-night talk show. Having been warmed up and instructed by the show’s audience coordinator, they collectively squirm in their seats with anticipation.

“One minute to show time,” calls out the production manager.

“30 seconds to MI playback,” chimes an engineer half a minute later.

On stage, an assistant director uses his voice and hands to complete the countdown — 5, 4, 3…. The well-known theme music rises as we pan stage right to a middle-aged gentleman who is evidently the show’s latest cohost.

“Frooooom Hollywood, ‘The Tonight Show,’ starring the world’s favorite AI talk show host, Alexa Prime. And now, heeeere’s Alexa!”

A sleek android dressed in a dark skirt, jacket and high heels strides confidently onto the stage and begins her monologue, to the crowd’s enormous appreciation. Warm and engaging throughout, Alexa ad-libs continually. It’s a performance that would make even Johnny Carson proud.

* * *

Is this an alternate universe? Or could something like this actually come to pass in our perhaps not-so-distant future? If we go by nothing more than the current state of artificial intelligence, of course such a scenario seems a long way from being ready for prime time. But recent developments could lead to this becoming reality one day, particularly given advances in the fields of affective computing, social robotics and artificial emotional intelligence.

Affective computing is a relatively new branch of computer science that was conceived in the mid-1990s by Rosalind Picard of MIT’s world-famous Media Lab. An electrical engineer, Picard had worked on many different types of artificial intelligence projects. Despite much success, however, she felt there were always problems with the systems, whether they were dealing with unanticipated inputs or other conditions that exceeded expected thresholds. Ultimately, these problems caused the software to fail. Realizing artificial intelligence could never advance without overcoming these issues, Picard began to broaden her research. She soon surmised that the problems stemmed from the systems not knowing where to put their attention. That is, they couldn’t adequately assign and shift value to alter their focus on a moment-by-moment basis, something humans and other animals do routinely, even unconsciously. What was the secret behind this shifting of attention, and could it open the door to further advances in artificial intelligence?

These questions and many others led Picard to conduct further research on the neural processes involved in attention, conferring along the way with cognitive scientists, psychologists, vision experts and other researchers. In time she was astounded to discover that for many of the cognitive systems involved in focus and attention, value assignment, and memory formation and retrieval, emotion was always intimately linked in one way or another.

This was not something Picard really wanted to hear, much less pursue. Her professional world was made up of mathematics, circuits and algorithms, systems that had a direct line to objective cause and effect. The idea of getting involved in something as “irrational” as emotion did not appeal, particularly because she was a very successful woman in the traditionally male world of electrical engineering. Going down this path could have been career suicide.

Nevertheless, Picard persevered. In reflecting on her early exploration, she explained to me, “[Computers] didn’t actually have any feelings about anything mattering, and I thought, ‘If they’re going to help us, and certain bits matter more to us than others, then they need to have this weighting function that tells them some things matter more than others.’”
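
Picard’s weighting-function idea can be made concrete in a few lines of code. The following Python sketch is purely illustrative, and the signal types, salience values and priorities are all invented for this example: incoming signals are scored for emotional salience, and the system shifts its attention to whatever currently matters most.

```python
# Illustrative only: a toy "salience" weighting in the spirit of the
# weighting function Picard describes. All names and values are invented.

# Emotional salience per kind of signal; an alarm matters more than routine work.
SALIENCE = {"alarm": 0.9, "novelty": 0.6, "routine": 0.1}

def weight(signal):
    """Combine a signal's task priority with its emotional salience."""
    return signal["priority"] + SALIENCE.get(signal["kind"], 0.0)

def focus(signals):
    """Shift attention to whatever currently carries the most weight."""
    return max(signals, key=weight)

signals = [
    {"name": "scheduled backup", "kind": "routine", "priority": 0.5},
    {"name": "smoke detected",   "kind": "alarm",   "priority": 0.3},
]
print(focus(signals)["name"])  # -> "smoke detected"
```

Even in this toy form, the point is visible: without the salience term, the system would keep attending to the higher-priority backup; with it, attention jumps to what matters in the moment.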

Compiling her studies, Picard eventually wrote an internal MIT think piece titled “Affective Computing,” followed by a book of the same title. This work soon launched an entirely new branch of computer science, and Picard quickly found herself at the head of the Media Lab’s brand-new Affective Computing Research Group.

The group’s research was prolific; it rapidly explored many of the ways emotion and mood could potentially facilitate communication and interaction between people and machines. Researchers developed methods that allowed computers to read emotions from our facial expressions with high accuracy. They studied the ways emotion changes skin conductance and vocal qualities. They explored new techniques for treating autism and ways to analyze aggregate group sentiment. The approaches were as profuse as they were creative and diverse. At the same time, MIT was not the only place showing interest in this new field. Universities around the world, corporate research departments and even DARPA (the Defense Advanced Research Projects Agency) jumped on board.

Of course, among its many functions, emotion is critical in human communication, social interaction and bonding. This means emotion not only influences our expression of what we are feeling; it can be used to influence and alter the emotions of the person on the other side of the exchange.

This aspect is particularly important to the development of social robotics. As artificial intelligence becomes embodied in more and more computer systems, it will not be enough for those systems to read and anticipate how we feel; they will need to respond in kind. Numerous studies have shown that there are many benefits when people engage with systems that can be aware of, emulate and reflect emotion. For instance, an education study using robot tutors found that the presence of a physical robot had a superior impact on student test scores when compared with tutoring via software voice, an on-screen avatar or no tutor at all.

Correlations like these come up again and again across many different lines of research, supporting the idea that at our core we evolved to interact with much of our world first through our emotions. This is what makes this field, and the building of emotional awareness into our technologies, so important.

As emotional beings, we don’t want to become more machine-like. We aren’t inclined to interact with computers and other devices on their terms; we want them to operate on ours. This is one of the reasons why interfaces have become increasingly intuitive over the years, finally reaching the point where we employ natural user interfaces such as touch, gesture and voice to interact with our devices. It’s been a logical progression, and next in this line is emotion, perhaps our most natural form of communication.

This relatively new branch of computer science is leading to all manner of new applications. In the corporate world, some of the earliest uses of affective computing were for product branding and various types of market research. The ability to read participants’ emotions, detecting the fleeting feelings test subjects themselves may have been unaware of, was considered invaluable information. These types of studies continue today but with even more sophistication. (Massachusetts-based Affectiva, for example, claims to have evaluated more than 20,000 advertisements and similar media, in the process analyzing some 5 million faces and gathering more than 40 billion emotion data points.) That scale matters because of one of the fascinating aspects of deep machine learning: These neural net-based programs need data sets to train on, and, generally speaking, the bigger those data sets are, the more accurate the systems can become. As training samples accumulate, the systems improve, which in turn attracts more users and still more data.
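
This more-data-helps dynamic is easy to demonstrate in miniature. The sketch below is illustrative only; it uses scikit-learn’s bundled digits data as a stand-in (no affect data set is named here) and trains the same classifier on progressively larger slices of the training set, reporting held-out accuracy, which generally climbs as the data grows.

```python
# Illustrative only: accuracy vs. training-set size on a stand-in data set,
# mimicking the more-data-helps dynamic described above for emotion recognition.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Train on ever-larger slices of the same data and watch accuracy climb.
for n in (50, 200, 800, len(X_train)):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train[:n], y_train[:n])
    print(n, round(clf.score(X_test, y_test), 3))
```

The same curve appears, at vastly greater scale, in the millions of faces and billions of data points described above.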

Because of a confluence of circumstances, including available technology and the widespread commonality of human facial expression, many of the early emotion analytics companies focused on facial expression recognition. As in-store cameras, dashboard cams, smartphones and interactive billboards have proliferated in our environment, this type of usage has become increasingly appealing for developers and vendors.

Of course, facial expression is far from our only method of communicating our feelings, and there are many businesses and research labs that focus on other modes as well: vocal recognition technology that ascertains emotion from the sound of people’s voices, wearables that track and learn from users’ galvanic skin responses, potentially even methods to identify emotions based simply on touch. Those are just a few examples.
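
A common way to combine channels like these is late fusion: each modality produces its own per-emotion scores, and a weighted average yields the overall estimate. The Python sketch below is a generic illustration; the modality weights and scores are invented for the example, not taken from any particular product.

```python
# Illustrative only: late fusion of per-modality emotion scores.
# All probabilities and channel weights below are invented.

# Hypothetical per-emotion probabilities from three sensing channels.
scores = {
    "face":  {"joy": 0.70, "anger": 0.10, "neutral": 0.20},
    "voice": {"joy": 0.40, "anger": 0.35, "neutral": 0.25},
    "skin":  {"joy": 0.50, "anger": 0.30, "neutral": 0.20},
}
weights = {"face": 0.5, "voice": 0.3, "skin": 0.2}  # trust placed in each channel

def fuse(scores, weights):
    """Weighted average of each modality's emotion probabilities."""
    emotions = next(iter(scores.values())).keys()
    return {e: sum(weights[m] * scores[m][e] for m in scores)
            for e in emotions}

fused = fuse(scores, weights)
print(max(fused, key=fused.get), fused)  # -> "joy" plus the fused scores
```

One appeal of this design is robustness: if a face is turned away or a microphone is noisy, the remaining channels still carry the estimate.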

Emotionally aware technologies are rapidly finding applications in all kinds of fields and professions. As previously mentioned, there are many potential uses in education, including methods for improving learning rates, memory formation and early detection of autism. The military is exploring new ways to treat, mitigate and even prevent post-traumatic stress disorder as well as depression and substance abuse. Law enforcement and intelligence agencies are looking at ways to ascertain criminal intent as early as possible. Marketers are exploring methods to improve customer brand loyalty.

Two areas seeing major investment from governments around the world are health and elder care. Many companies are seeking to learn how affective technologies, particularly in the form of social robotics, can help overcome the demographic challenges many developed nations face as their populations rapidly age. Japan is particularly vulnerable, given its high number of seniors relative to the size of its workforce. As a result, the Japanese government has invested heavily in research to build social robots that can perform physical labor in hospitals, guide or take part in physiotherapy, and even act as companions to ward off loneliness. It’s still early, but most researchers believe that building emotional awareness into these systems will greatly increase their usefulness and adoption.

All of these developments are anticipated to grow rapidly over the next few decades. A 2015 report from market research firm MarketsandMarkets forecast that the global value of affective computing would grow from $9.3 billion in 2015 to $42.5 billion by the end of 2020, and several other recent reports mirror this forecast. As a result, some forecasters (including this author) are picturing the development of an emotion economy, a growing ecosystem of emotionally aware devices, software and services that will eventually serve as an infrastructure for further development. In time, ubiquitous artificial emotional intelligence will completely change the way we interact with our technologies and environment, transforming the relationship between man and machine forever.

While much of this sounds incredibly techno-utopian, make no mistake: These changes will bring many problems and pitfalls as well. Emotion is one of the core aspects of the human condition. A large body of research supports the idea that our emotional responses arise well in advance of the higher executive functions carried out in the prefrontal cortex, and that they influence those functions. Because of this, emotions can alter our decision making, perception, memories and more. Some of these new technologies may therefore open us up to risks and vulnerabilities unlike any we’ve seen before.

Automated, programmable threats to our privacy, our personal and financial security, our autonomy and even our free will are among the vulnerabilities that will need to be addressed as we confront challenges both anticipated and unforeseen. Only by doing so can we hope to truly benefit from these advances.

Perhaps most significant in all of this will be the potential development of super intelligent, possibly even conscious, machines. Though this remains a highly speculative topic of discussion, there are many reasons to believe that emotion in computers (or an analog thereof) could be critical to the development of higher forms of machine intelligence and consciousness, just as it was crucial in early human evolution. In discussing this technology, it’s important to remember that just because devices and systems may be programmed with some degree of emotional awareness, this doesn’t mean they are capable of actually internalizing or experiencing feelings. For this to occur, higher levels of consciousness may be required, levels that could eventually lead to the development of theory of mind (internal modeling of self and other) and self-awareness.

Will artificial emotional intelligence be the spark that eventually lights the fuse of future technological consciousness? Perhaps. But what is probably more important is that we recognize that our relationship with technology is on the cusp of changing radically. A strange new world lies ahead of us, one in which our machines are going to increasingly seem more than just a little bit… human.

* * *

More on Artificial Emotional Intelligence

Read — Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence by Richard Yonck, out March 7 from Arcade Publishing.

Attend — See Richard Yonck speak in Seattle, March 7, 6:30 p.m., downstairs at Town Hall, $5; and in Austin (South by Southwest), March 11, 12:30 p.m., Convention Center, Room 10AB.