Consider the term “AI.” What do you picture? Terminator. The Matrix. 2001: A Space Odyssey. Depictions of hyper-intelligent machines behaving in cold, ruthless ways, all with user interfaces that look like early-80s computers. Basically, AI in film looks evil and perplexing, and perhaps that's why so many of us are afraid.
But one woman who is not afraid is Danielle Krettek. She’s the Founder and Principal of Google’s Empathy Lab, where she leads a team attempting to train empathy into Google’s algorithms.
We caught up with Danielle after her talk at Semi-Permanent in Sydney. We talked about the influence of art and film on AI, and why she thinks sentient machines won't kill us all.
VICE: Hey Danielle, so we just started this article with a quick mention of how society sees AI thanks to the influence of film. Do you think it's possible that AI's perception of society will be similarly influenced by movies?

One hundred percent. AI is not made in a vacuum. The models are trained on the things that we teach them to see, the data that we give them, and the decision trees that are made available because of that. So the way I talk about my job is that I'm like a schoolteacher for machines: I'm looking after the finger-painting during their upbringing while others are training them in the hardcore math and science. These machines need to see the world in its full spectrum, and that means exposing them to storytellers, philosophers, artists, poets, designers, and filmmakers. I think every scientific or artistic discipline has a slightly different way of looking at a human problem, or a potential human solution, or an inspiring way to crack either of those, and it feels like this is an all-hands-on-deck moment where we need everyone.
You just compared yourself to a schoolteacher who is basically teaching these machines how to be kind and fair. Do you think we could ever reach a point at which machines truly experience real empathy, or will we have to make do with a high-level imitation?

I actually have a really strong point of view on this. When it comes to the magic and mystery of emotion, you can look at the idiosyncrasies of the dance of emotion in a person and think there's no pattern in it. But in truth, we all have our patterns: there are literal emotional rhythms and emotional tendencies. So I think if we allow machines to observe us long enough, they'll probably be able to mimic us very convincingly. But my personal opinion is that the real emotional connection, that real empathic connection, and the idea of being self-aware are uniquely human things.
So the endgame is to help AI become great at imitating?

I think the false grail of AI is that it'll be just like humans. I actually did a study on imaginary friends a few years ago, and I found correlations between the potential of AI and the way children talk about their imaginary friends: how they just show up and help in little ways, and how they expand the child's capacity to think or see or play in certain ways. What's funny is that AI is really no different for us. It's a question of how it can take us on adventures beyond ourselves. How can it expand our perceptions? How can it expand our capabilities? I don't think empathy is actually the heart of human potential here. When I think about what truly makes humans happy, what really makes you feel like you're singing in your life, it's curiously following the path of something that makes you feel like you're growing. And that, I think, is the intelligence we want for these machines as a companion species to us.
Look, all this stuff sounds great, and maybe I've seen too many sci-fi films, but I'm worried about the imminent robot uprising. I can't help but feel as if we're edging closer to birthing our own destroyer here.

When humans are faced with the unknown, the instinct is often fear. That's why Apple rounded the corners of its devices: it wanted them to feel gentler, more intuitive, and more comfortable; less foreign. Basically, AI won't come up with its own values; that's not real. But what is real is that it could be programmed with values that aren't necessarily aligned with those of someone using it, or of everyone using it, because our values are so varied across the globe.
I think the idea that there will always be shadow and light, forces of good and forces of not-so-good, is just a reality. So I look at my role in the context of everyone I work with on the machine intelligence and design team: are we seeing as far as we possibly can, so that we can have the conversations ahead of time? How do we go deep into the ethics? How do we go deep into the psychological space? And how do we load the dice with the things we want for the future we're creating?