SlashGear was invited to join Starkey's Hearing Innovation Expo, held just ahead of this year's Consumer Electronics Show (CES), which is always a time to show off the latest innovations, products, and technologies for consumers, and opportunities for potential business partners as well. But technology isn't just about self-driving cars, talking speakers, or even the latest smartphones and connected televisions. Technology is about making lives better for people, especially people who need to see better or, in the case of Starkey Hearing Technologies, hear better. Vincent Nguyen, SlashGear's Editor-in-Chief, sat down with Achin Bhowmik, the company's chief technology officer and executive VP of engineering, to learn where the technology stands today and, more importantly, where hearing innovation is headed.
Bhowmik is probably the last person you'd expect to find at a medical technology company. In fact, Bhowmik almost turned down the offer to work at Starkey. After all, he had an illustrious career at Intel, where he led the chipmaker's efforts in a wide range of technologies, from AI to drones to virtual and mixed reality to 3D sensing. It may seem like a departure from his competencies but, to some extent, it really isn't: the very same sensors and AI that Bhowmik worked on at Intel are the technologies that will transform hearing aids into "a gateway to health."
Hearing aids carry a stigma, due in no small part to how they're portrayed in pop culture, that causes even those who need one to wear it in shame. But, as Starkey President Brandon Sawalich explained, a hearing aid is "a health and wellness device. It is personal." It's not something you can simply buy off the shelf and feel comfortable with, like you would, say, an AirPod. But to move from medical device to wellness device, CEO and founder William "Bill" Austin knew that Starkey would need to do more than just amplify sound. And that is where Bhowmik comes in.
Achin Bhowmik: I was surprised by how few people use a hearing aid. I look at it from the viewpoint of "why?" It's a single-purpose device today. It only helps you hear better. If you enrich the device with new sensors that are now available, and AI to make meaning out of that sensor data, you could turn it into a multi-purpose "healthable" device.
What are those? There's low-hanging fruit. An inertial sensor (IMU). Thanks to billions of phones, those IMU sensors are tiny, less than a millimeter, low power, and there's no problem fitting them in. But why would you?
One is physical activity tracking. Hearing professionals and patients' kids want to know if their parents are using the hearing aid. We use data logging, but data logging of sound doesn't tell you the whole truth. We're also looking at the benefits of physical activity itself: walking 30 minutes a day, five days a week, reduces the chance of Alzheimer's by 46%.
And then fall detection. Every 13 seconds, an older adult is treated in the emergency room for a fall. Every 20 minutes, one of them dies. The cost to the healthcare system was projected to reach $67 billion by 2020. If I have an inertial sensor in the hearing aid and my patient falls down, with the AI I have been working on, it's the simplest thing to determine "did he fall?" And then we'll have an app where you can choose to send an alert to loved ones.
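To make the idea concrete, here's a minimal sketch of how an IMU-based fall detector could work: look for a free-fall dip in acceleration followed shortly by an impact spike. The thresholds and names are illustrative assumptions, not Starkey's actual implementation, which Bhowmik says leans on a trained neural network.

```python
import numpy as np

FREE_FALL_G = 0.4  # magnitude well below 1 g during free fall (illustrative)
IMPACT_G = 2.5     # magnitude spike on impact (illustrative)

def looks_like_fall(accel_window: np.ndarray, fs: int = 100) -> bool:
    """Flag a candidate fall in a short (n, 3) window of accelerometer data.

    A fall typically shows a brief free-fall dip followed within about a
    second by a sharp impact spike. A real detector would pass candidate
    windows to a trained classifier to cut down false alarms.
    """
    magnitude = np.linalg.norm(accel_window, axis=1)
    dips = np.where(magnitude < FREE_FALL_G)[0]
    spikes = np.where(magnitude > IMPACT_G)[0]
    # Require an impact shortly after a free-fall dip.
    return any(0 < s - d <= fs for d in dips for s in spikes)
```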
SlashGear: Do you envision these features being at the heart of all product lines?
Achin Bhowmik: Eventually, but for now, for this to work you need connectivity with the phone, and then you need enough power to run the AI. But when you have the hearing aid connected to the phone, which is connected to the cloud, we can do wonderful AI implementations.
SlashGear: How are you thinking about doing that [natural user interface], given that it's such a small area?
Achin Bhowmik: If you look at the sensitivity of the IMU signal as you tap, I don't even have to tap on the device. Just tap anywhere and I detect a spike, and I feed that into a neural network to detect "did you tap?" But the other part we need is to collect a lot of data from real people tapping.
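A rough sketch of that first stage, spike detection, might look like the following. The threshold and window size are invented for illustration; in the approach Bhowmik describes, each candidate window would then go to a neural network trained on real tap data.

```python
import numpy as np

TAP_SPIKE_G = 1.5  # illustrative threshold on sample-to-sample change

def candidate_tap_windows(accel: np.ndarray, half_win: int = 20):
    """Yield short accelerometer windows centered on sudden spikes.

    `accel` is an (n, 3) array of IMU samples. Each candidate window would
    then be fed to a small neural network trained on real tap recordings
    to answer "did you tap?" and reject lookalikes (bumps, footsteps).
    """
    magnitude = np.linalg.norm(accel, axis=1)
    jerk = np.abs(np.diff(magnitude))  # sudden change marks a possible tap
    for i in np.where(jerk > TAP_SPIKE_G)[0]:
        yield accel[max(0, i - half_win): i + half_win]
```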
SlashGear: Which is where your machine learning and AI come in.
Achin Bhowmik: In fact, that’s the same thing about falling, because people don’t fall the same way. Even in the traditional hearing aid, how do you set environments like restaurant, home, machine noise? You have all these modes because they require different signal processing. It used to be manual. So the first AI we have implemented in the iQ line of products is AI-based automatic environment recognition.
SlashGear: How are you actually doing that?
Achin Bhowmik: Going from a biological neural network to artificial neurons. Let's say I go out and collect a hundred thousand samples of wind noise. I'll provide this sound; initially the weights [on the signals] are all random. When I feed wind into it, the initial neural network will have no clue. It will wrongly classify the wind as a car. And then I'll feed back "no, it's actually wind," and it will adjust those weights.
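What Bhowmik is describing is standard supervised training: random initial weights, wrong guesses, and gradient feedback that nudges the weights toward the right labels. A toy version, with synthetic stand-ins for the wind and car sound features, might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for acoustic feature vectors: "wind" frames cluster one
# way, "car" frames another. In practice these would come from ~100,000
# labeled recordings, as Bhowmik describes.
X = np.vstack([rng.normal(-1.0, 1.0, size=(500, 16)),   # label 0 = wind
               rng.normal(+1.0, 1.0, size=(500, 16))])  # label 1 = car
y = np.array([0] * 500 + [1] * 500)

W = rng.normal(scale=0.01, size=(16, 2))  # weights start random: the
b = np.zeros(2)                           # untrained net "has no clue"

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

onehot = np.eye(2)[y]
for epoch in range(100):
    probs = softmax(X @ W + b)                 # forward pass: current guesses
    grad_W = X.T @ (probs - onehot) / len(X)   # "feed it back in": gradient of
    grad_b = (probs - onehot).mean(axis=0)     # the error w.r.t. each weight
    W -= 0.1 * grad_W                          # adjust the weights
    b -= 0.1 * grad_b

accuracy = (softmax(X @ W + b).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2%}")   # well above chance once trained
```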
SlashGear: So how do you package all that?
Achin Bhowmik: You don't have to. The training requires the heavy compute. I need a server with an Intel chip and an NVIDIA graphics card. Once the training is finished and I have the learned weights, I take those and download them into the hearing aid. In the hearing aid, by the way, you have a pretty powerful processor and memory. The processor in the iQ line of hearing aids has 325,000 times more processing power and 60 times more memory than the Apollo guidance computer. And when we need to do more sophisticated things, I can use the phone; I'll offload there. And to push it even more, the phone's always connected to the server. So if I need a more sophisticated neural network, I can utilize the data center.
So for example, for simple things like "did I tap" or "how many steps have I taken," I don't need to offload. The processor and memory I have in the hearing aids are enough to host a neural network like that. Pushing the limits, we're going to bring real-time translation to hearing aids.
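The split Bhowmik describes, train on a server and then ship only the frozen weights for a cheap on-device forward pass, could be sketched like this. The file name and classes are illustrative, continuing the toy classifier above:

```python
import numpy as np

# Server side, after training: persist only the learned parameters.
# (Hypothetical file name, continuing the toy classifier above.)
# np.savez("env_classifier.npz", W=W, b=b)

# Device side: load the frozen weights once. Inference is then just a
# forward pass, cheap enough for the hearing aid's own processor.
params = np.load("env_classifier.npz")
W, b = params["W"], params["b"]

def classify_frame(features: np.ndarray) -> int:
    """Return the predicted class (e.g. 0 = wind, 1 = car) for one frame."""
    return int(np.argmax(features @ W + b))
```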
SlashGear: That’s a sweet spot for me. Tell me that’s real.
Achin Bhowmik: A few years ago this wouldn't have been possible, because you had to finish a few complete sentences. It didn't work very well because you had to pause for a long time; it was like having a human translator. Hats off to Google for investing billions in this; the latest is called a recurrent neural network. If you go to Google Search and search for a sentence, you'll see it displaying the words and getting some of them wrong. But then, halfway through the sentence, it goes back and corrects the words. That's because of the recurrent neural network. The translation engine is really good.
The key here is the UI. We still have to tell our patients that you have to learn to use it. If you are in an area where twenty people are talking, your brain does not take in all of that; it figures out what you are paying attention to. You will learn to use it in areas where it works well. You need an input device: who's talking to you? The microphone in the hearing aid is powerful enough, but it's going to pick up everybody's dialogue, so which part do you translate? That's the key.
SlashGear: So how will you be doing that?
Achin Bhowmik: In my view, there are ways to decide the preference based on microphones, because you have your hearing aid microphone and you have your phone microphone. So that's the challenging part, because sometimes I might be near you, and you might not be looking at another person, but when she starts speaking, you need to pick it up from there. So when we first bring it to market, it needs to be simple. The one that is guaranteed to work is one-on-one conversations.
SlashGear: Is Starkey set up for connecting with the different clouds?
Achin Bhowmik: We already are. For detecting environments, we're already using a server backend for the AI. And for the data logging, we're using cloud infrastructure.
SlashGear: This is such a small device; how can you fit other sensors, like an optical sensor for heart rate monitoring?
Achin Bhowmik: There are a few requirements there; you do need contact. Some of our devices, the ones that have an ear mold, have really intimate contact with the inner ear. We have an IR laser or LED that sends light deep into the skin, and the back-scattered light is detected with a semiconductor light detector. It turns out we can measure the reflectivity change from the differences in oxygen level. Once you measure that, I can measure heart rate, and from the heart rate we can also measure heart rate variability, which is a direct indicator of heart health. So a hearing aid could become a much more reliable heart rate and heart rate variability monitor than any other wearable.
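The signal chain Bhowmik outlines is essentially photoplethysmography (PPG): detect the per-beat peaks in the reflected light, then derive heart rate and variability from the beat-to-beat intervals. A minimal sketch, with an invented peak-picking threshold:

```python
import numpy as np

def heart_metrics(ppg: np.ndarray, fs: int = 100):
    """Estimate heart rate (bpm) and HRV (RMSSD, seconds) from a PPG trace.

    `ppg` is the back-scattered light intensity; each heartbeat shows up as
    a peak as changing blood volume alters the skin's reflectivity.
    """
    thresh = np.percentile(ppg, 75)  # crude peak gate; invented for the sketch
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > thresh and ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]]
    if len(peaks) < 3:
        return None  # not enough beats in this window
    rr = np.diff(peaks) / fs                    # beat-to-beat intervals, seconds
    bpm = 60.0 / rr.mean()                      # heart rate
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # a standard HRV measure
    return bpm, rmssd
```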
SlashGear: We're talking not just about what it can do for you now, but about what it can help you prevent and mitigate down the road as well.
Achin Bhowmik: It's also not just fall detection. What I really want to get to is fall prediction. How does a human doctor, looking at your walking pattern, decide you have a higher chance of falling down? The moment I have a hearing aid with an inertial sensor, I can learn your normal gait. And I can detect an anomaly in your gait and warn you. That's the holy grail in my mind: fall prevention. I'd like to be predictive and preventive.
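A toy version of that idea, learning a baseline from step timing and flagging strong deviations, might look like the following; a real system would model far richer gait features than timing alone:

```python
import numpy as np

class GaitBaseline:
    """Learn a wearer's normal gait, then flag drift that may precede a fall.

    Toy model: track the distribution of step intervals (from IMU step
    events) and warn when recent walking deviates strongly from baseline.
    """

    def __init__(self):
        self.intervals: list[float] = []

    def observe(self, step_interval_s: float) -> None:
        self.intervals.append(step_interval_s)

    def is_anomalous(self, recent: list[float], z_limit: float = 3.0) -> bool:
        baseline = np.asarray(self.intervals)
        mu, sigma = baseline.mean(), baseline.std() + 1e-9
        z = abs(np.mean(recent) - mu) / (sigma / np.sqrt(len(recent)))
        return z > z_limit
```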
According to Bhowmik, you need two things to redefine what a hearing aid is: sensors and AI. And these are the very same technologies that are defining consumer electronics, from smartphones to TVs to, of course, smart machines like self-driving cars, home robots, and drones. It is definitely inspiring to see the same technologies that make tech lovers swoon and drool being used for a bigger purpose. Or, as Bhowmik poetically put it, going "from enhancing machine perception to augmenting the human experience." Stay tuned for more news on Starkey's upcoming hearing aids with additional health features. We can't wait to test them out.