You’ll have heard all about the Cambridge Analytica scandal, but do you know about the ground-breaking research that led to some of the profiling practices hitting the headlines today? Join me as I talk with Dr David Stillwell about the fascinating and sometimes frightening world of psychometric segmentation and data-mining – how it’s done, what it can reveal, and where responsibility for ethical practice may lie.
Join in the conversation #hivepodcast
Dr David Stillwell is the Deputy Director of The Psychometrics Centre at the University of Cambridge, and lectures in Big Data Analytics and Quantitative Social Science.
While researching his PhD in cognitive decision-making, David created a Facebook app called myPersonality and a user-facing app called Apply Magic Sauce, both of which accurately predict the psychological traits of users from the digital footprints of their online behaviour.
Over 6 million people have taken the myPersonality questionnaire, and, perhaps unsurprisingly, his research has attracted a lot of attention in the press for its predictive accuracy and potential implications.
1. What’s your greatest concern for the future?
I am not worried about large companies, because they are always thinking about their reputation. They are thinking about their brand. They are thinking, “What will my customers think?” It’s those people who don’t have customers that I am more concerned about.
So it’s either the little startups who try things just because they can, or, on the other side, governments that are less democratic, or perhaps democratic but secretive about what they do. Then people don’t necessarily know what is happening or what is possible, and that is when it starts getting scary: when they start using our data against us and we’ve got no way of expressing our disapproval.
2. What’s your greatest hope?
Too often right now companies sit down and think “How can I get the most from this data?” but they don’t think about where the data comes from, they just think “Well, I’ve passed the privacy rules and therefore it’s my data now and I can do whatever I like.”
So I want companies to start working in ways that collaborate with their customers, in ways that genuinely benefit the customers and not just the company. I do think this is not necessarily some brave new world that we’ve never seen before.
If you think back to the 1950s, you went to a shop and you knew the shopkeeper, and the shopkeeper knew you and your family and was able to give you a really personal service. That’s obviously really expensive. So now there is a potential: can we do that on a large scale, cheaply, using algorithms instead, as long as it’s for people’s benefit?
3. What single action can we take right now?
I think it would be this: be transparent with every use of data, and if you are afraid to be transparent, it probably means you shouldn’t be doing what you are doing.
So I think shining a light on how data is being used will, in the long run, benefit both people and companies so much more. Right now, people already feel that companies are creepy, that companies are using data in ways they don’t understand. If companies were incredibly transparent about what they are doing with the data, people would not start imagining things that they think companies might do but are not really doing.
Lots of people think that Google is listening to us via our phones’ microphones, but as far as anyone can tell, there is no evidence that this is actually happening. People feel like it is happening because they don’t understand how Google is so good at what it does. If only Google were more transparent, people would not start making up imagined scary things that companies are doing.
Find out more
Private traits and attributes are predictable from digital records of human behavior
Automatic personality assessment through social media language
Written, recorded & produced by Nathalie Nahai © 2018.