The listening and sensing devices in our homes that choose music, manage thermostats, and look up recipes may soon know more about our mental and physical wellness than our primary physicians do. Ubiquitous sensing, paired with machine learning, can amalgamate the signals we give off—from the timbre of our voices to the dilation of our pupils—to detect signs of conditions such as Alzheimer’s disease years before a traditional diagnosis.
Emerging “empathetic technology” sounds a bit scary to people who don’t like the idea of machines reading our feelings. We want to think we can mask how we’re feeling by controlling our faces, voices, and body language—and we don’t entirely trust what machines do with this data. True, this is a thorny topic. But the potential for empathetic technology to improve our health and lives makes dealing with the uncomfortable questions worthwhile.
Signals That Define Our Internal State
Let’s define what empathetic technology is, and what it isn’t. It’s not about creating computers that empathize with us, or that mimic human behavior. It’s about technology that uses our internal state to decide how to respond.
To understand how this works, think about the smart thermostats we now have in our homes. In the old days, thermostats couldn’t make decisions or interpret what we needed: We could only give them parameters, such as turning on the heat when the ambient temperature dropped below a set point.
Today, thermostats track and sense how we interact in our spaces, learning from our behaviors to keep us comfortable. But while they know a good deal about our behavior, they don’t know whether we are actually hot or cold, how our temperature might be affecting our cognitive capacity, or what we are trying to achieve physically or mentally at a given moment. Sleeping, studying, healing, and training are each better served by different conditions. Paired with machine learning, empathetic technology can add this missing piece—our internal state—and use it to make probabilistic decisions.
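To make that concrete, here is a minimal sketch in Python of how such a decision might combine ambient conditions with an inferred internal state. The activity labels, temperature targets, and confidence-weighted blending rule are all illustrative assumptions, not any real product’s API:

```python
# A minimal sketch of an "empathetic" thermostat decision, assuming a
# hypothetical model that estimates the occupant's internal state from
# sensor signals. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class InternalState:
    activity: str       # e.g. "sleeping", "studying", "training"
    confidence: float   # model's probability for that activity

# Illustrative comfort targets per inferred activity (degrees C).
TARGETS = {"sleeping": 18.0, "studying": 21.0, "training": 17.0}
DEFAULT_TARGET = 20.0

def choose_setpoint(ambient_c: float, state: InternalState) -> float:
    """Blend a per-activity target with a safe default, weighted by how
    confident the model is about the occupant's internal state."""
    target = TARGETS.get(state.activity, DEFAULT_TARGET)
    # Low confidence -> fall back toward the default setpoint.
    setpoint = state.confidence * target + (1 - state.confidence) * DEFAULT_TARGET
    return round(setpoint, 1)

print(choose_setpoint(23.5, InternalState("sleeping", 0.85)))  # -> 18.3
```

The design point is the fallback: when the model is unsure about our internal state, the system degrades gracefully toward ordinary thermostat behavior.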
How can this play out in healthcare? Years before people are diagnosed with Alzheimer’s disease, they exhibit linguistic changes, such as shifts in pronoun choice and in the timing patterns of their speech. These changes are usually too subtle for human caregivers such as family members or spouses to pick up on: Like most of us, they’re listening for what’s being said rather than how it’s being said.
But a sensing device such as a microphone—perhaps one that’s already listening to speech for other reasons, like a smart speaker or a connected voice assistant managing household systems—can compare a person’s linguistic patterns to what’s known to be normal. More importantly, it can compare a single individual’s patterns over time to their own averages. This personal picture of how an individual’s speech and other behaviors change offers unprecedented insight, and an unprecedented opportunity to alert caregivers and clinicians. The same can be done for diabetes, which can affect the “spectral coloration,” or timbre, of our voices: again, a change that many clinicians would have a hard time detecting during a typical office visit.
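Here is a rough sketch of that personal-baseline comparison, assuming we already extract one numeric speech feature per day (say, average pause length in seconds). The feature, window sizes, and threshold are hypothetical choices for illustration:

```python
# Flag when a person's recent speech feature drifts from THEIR OWN
# long-term baseline, rather than from a population norm.
from statistics import mean, stdev

def flag_drift(history: list[float], recent: list[float],
               z_threshold: float = 2.0) -> bool:
    """Return True when the recent average sits more than z_threshold
    standard deviations away from the personal baseline."""
    baseline_mean = mean(history)
    baseline_sd = stdev(history)
    if baseline_sd == 0:
        return False  # no variability to compare against
    z = (mean(recent) - baseline_mean) / baseline_sd
    return abs(z) > z_threshold

# A year of daily values vs. two recent weeks of slower speech.
baseline = [0.30 + 0.02 * (i % 5) for i in range(365)]
recent_days = [0.55] * 14
print(flag_drift(baseline, recent_days))  # True: worth a clinician's look
```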
Sensing Devices Capture Data Over The Long Term
Empathetic technology can do more than listen. When something causes us stress, like being frightened or lonely, we exhale more carbon dioxide, along with other chemicals such as isoprene, which increases when our muscles tense. Sensors can detect these changes, perhaps alerting people or their caregivers to mental health issues. Our pupils dilate when our cognitive load is higher than usual, or in response to something we hear (or are straining to hear)—again, something that eye-tracking sensors in eyeglasses, in our environments, or even in our ears can measure and compare with what we know about signals associated with health conditions or mental health challenges.
Much of this sensing can be done without wearing sensors that touch our bodies. With more capable cameras, microphones, thermal imaging, and exhalant-measuring devices, we can capture a wealth of data. And exponentially greater processing power lets us gather simultaneous insights from many sensors.
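One way such multi-sensor fusion might look in outline is below; the channels, per-person baselines, and weights are invented for illustration, not readings from any real device:

```python
# Fuse several non-contact sensor channels into one deviation score.

# Per-channel personal baselines: (mean, standard deviation).
BASELINES = {
    "co2_ppm":        (600.0, 50.0),  # exhaled CO2 near the person
    "pupil_diameter": (3.5, 0.4),     # millimetres, from eye tracking
    "skin_temp_c":    (33.0, 0.5),    # from thermal imaging
}
WEIGHTS = {"co2_ppm": 0.4, "pupil_diameter": 0.4, "skin_temp_c": 0.2}

def deviation_score(readings: dict[str, float]) -> float:
    """Weighted average of absolute z-scores across channels: how far,
    overall, today's signals sit from this person's own normal."""
    score = 0.0
    for channel, value in readings.items():
        mean, sd = BASELINES[channel]
        score += WEIGHTS[channel] * abs(value - mean) / sd
    return score

today = {"co2_ppm": 780.0, "pupil_diameter": 4.6, "skin_temp_c": 33.2}
print(round(deviation_score(today), 2))  # 2.62: well outside the usual range
```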
The value of this kind of sensing data, combined with machine learning, is that it is longitudinal. Hour by hour, day by day, consumer devices have long relationships with us in our spaces. Think of the importance of this data to our physicians, who might have only a 10-minute examination once or twice a year, plus medical tests from a few points in time, on which to base their diagnoses. With empathetic technology, they can now tap into an unprecedented, near-constant flow of information.
Here are just a few other examples of how this might play out. By combining drug regimens with empathetic technology, doctors gain a closed feedback loop of data from the patient, letting them adjust drugs and therapies based on the patient’s signals. Or, in the weeks before you go in for knee surgery, your orthopedic surgeon can gather far more data about your gait and how you use your knee—data that could shape your post-surgery physical therapy and allow your surgeon to build a more personalized, comprehensive treatment plan than a few MRIs ever could.
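As a sketch of that feedback loop, with made-up signal names, thresholds, and a hypothetical review step (real regimen changes would of course rest with the clinician):

```python
# Patient signals flow back to the clinician, who adjusts therapy.
# Everything here is a hypothetical illustration of the loop's shape.

def review_regimen(daily_signals: list[float], expected_trend: float) -> str:
    """Compare the observed response (e.g., a symptom score that should
    fall under treatment) with the clinician's expected trend."""
    observed_trend = (daily_signals[-1] - daily_signals[0]) / len(daily_signals)
    if observed_trend <= expected_trend:
        return "on track: keep current regimen"
    return "flag for clinician: response weaker than expected"

# Two weeks of a symptom score that should be falling by ~0.2/day.
signals = [8.0, 7.9, 8.1, 7.8, 8.0, 7.9, 7.7,
           7.8, 7.9, 7.6, 7.8, 7.7, 7.9, 7.8]
print(review_regimen(signals, expected_trend=-0.2))
# -> "flag for clinician: response weaker than expected"
```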
How AI And Empathetic Technology Can Foster Independent Living
Healthcare is where the bleeding edge of empathetic technology can truly show its potential. But the goal is not just diagnosing conditions—although in terms of early intervention, the impact can be profound. The technology can also improve our lives and personalize the treatments our clinicians suggest.
Hearing loss is one condition that can be treated more effectively with empathetic technology—specifically, by creating hearing enhancement devices or aids that use the wearer’s cognitive effort and attentional context to make decisions. Historically, hearing aids offered little ability to adjust to constantly changing auditory conditions. Using machine learning, listening devices can alter volume and sound processing based on a person’s cognitive signals. This goes beyond whether someone is outside or inside, or listening to one speaker or many, although empathetic technology can adjust for these contexts as well.
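A toy sketch of such effort-driven adaptation follows; the effort estimate is an assumed input (it might come from pupil dilation or another cognitive-load proxy), and the gain rules are illustrative only:

```python
# A hearing device that adapts to listening effort, not just acoustics.

def adjust_gain(current_gain_db: float, effort: float,
                comfort_band: tuple[float, float] = (0.3, 0.6)) -> float:
    """Nudge amplification up when estimated listening effort (0..1) is
    high, down when listening is easy, and hold steady in between."""
    low, high = comfort_band
    if effort > high:
        return min(current_gain_db + 2.0, 40.0)  # struggling: boost
    if effort < low:
        return max(current_gain_db - 1.0, 0.0)   # easy: back off
    return current_gain_db                        # comfortable: hold

gain = 20.0
for effort in [0.75, 0.8, 0.5, 0.2]:
    gain = adjust_gain(gain, effort)
print(gain)  # 23.0 after boosting twice, holding, then easing off
```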
Empathetic technology can also empower us to live independently as we age or recover. This is a space where I see significant investment, as startups look at ways to enable autonomous living. Instead of devices that simply detect whether we’ve taken our medications, fallen, or failed to get out of bed, empathetic technology can gather signals indicating to caregivers that a person is unwell, having cognitive difficulties, or just lonely and in need of a phone call or a visit. Even frail elders can gain some autonomy in living on their own, since empathetic tech offers an unobtrusive yet effective way to keep them healthier and happier.