We hear a lot about how robots are poised to take over our jobs and how AI might wreak havoc on humanity… But what about the huge potential that exists for positive impact? I talk with AI specialist and tech philosopher, Nell Watson, about the relationship between ethics, AI and empathy, and how values-based technologies could potentially enhance and enrich the fabric of human society.
Join in the conversation #hivepodcast
Nell Watson is an entrepreneur, engineer and public speaker, whose work focuses on AI, Cognitive Science, Blockchain, and Human Society.
A member of the Artificial Intelligence & Robotics Faculty at Singularity University, Nell is also a Senior Advisor to The Future Society at Harvard, and serves as an advisory technologist to several accelerators, venture capital funds and startups, including The Lifeboat Foundation, which aims to protect humanity from existential risks that could end civilization as we know it, such as asteroid collisions, or rogue artificial intelligence.
Nell is the Co-Founder of OpenEth, an ethical explication engine that aims to crowdsource ethical heuristics for autonomous systems, and is currently writing a book called The Founder Virtue.
1. What’s your greatest concern for the future?
Well, as I mentioned a little bit earlier, I’m more concerned about human reaction to machines than machines themselves. We have a lot of examples in history of humans going haywire, but not so many examples of machines going haywire. But one thing I would like to see more of is positive science fiction, because what we encode in our fiction in many ways has a habit of coming true.
Yes, because our fiction, our story in total, creates our culture, and our culture creates us. I have this theory that human beings are, in fact, a form of AI in a sense: we are created by our culture, a cultural dataset which is fed into our monkey mind, and on top of the monkey mind is this layer of strange biological AI that’s created out of that cultural dataset. And so we’ve got to be careful what goes into our cultural dataset, because that becomes our destiny.
And so I would very much like to see science fiction like we used to have back in the 50s and 60s, which was much more optimistic and much more about creating a world that is a little bit more utopian. And I think there is danger in only paying attention to the worst possible outcomes. It’s like when you’re riding a bicycle and there’s an obstacle ahead – don’t look at the obstacle, look where you want to be going and allow the bicycle to take you in that direction. Don’t look at the bad thing, otherwise it will turn to ensnare you.
2. What’s your greatest hope?
My greatest hope for the future is that we leverage these new technologies to develop societies which are more kind, more fair and more inclusive, and that we can develop systems and organizations which are less traumatic to human beings.
Because trauma is the thing that makes our monkey minds go wrong, and it creates humans that are less rational and less capable of acting with good moral agency. So I’d like to see a kinder, gentler and fairer society, and I believe that we have an opportunity to create that through things like AI, Blockchain and machine ethics in the years to come.
3. What single action can we take right now?
I invite people to open up conversations about these aspects of society and to think about how they would like to see society in the years to come – what would be the nicest possible thing that we could shoot for – and then figure out how to work back from there.
And I also invite people to start joining the conversation at EthicsNet.com, to come and create this dataset along with other people across the world, and to make sure that your voice is counted in creating this kind of dataset for machines.
Find out more
Written, recorded & produced by Nathalie Nahai © 2018.