Why targeted advertising is becoming creepy

Many are saying that AI needs to be explainable. What this means is that processes handled by AI should be made more easily understandable to humans, and hence more easily trusted. And yet, we implicitly trust a lot of technology without actually understanding it. How does the fridge actually make things cold? How exactly can digital displays show pictures? Technology can and sometimes should be made more easily understood. I believe advertising could be a perfect place to start. Here’s why.

Michael Björn

Head of Research Agenda and Quality at Consumer & IndustryLab

Almost everyone I have talked to has commented on the Spying Apps trend in our recent 10 Hot Consumer Trends report. At first, their reaction and interest surprised me. But then I realized that many people have the same story to tell.

Many people I have met recount how they talked face to face about something – going on a February ski vacation, for example – and were then shocked to see ads for exactly that in their smartphone apps, even though the conversation never took place online.

The conclusion everyone draws is the same: their smartphone apps must have been listening in on their conversations and used that information to serve targeted advertising.

I wonder if the brands that pay for the advertising are really aware that they are unnerving their customers by having the ads appear in this way. The "what happens on your iPhone, stays on your iPhone" ads that Apple has been displaying in Las Vegas during CES week play very much on this, trying to stand out by respecting people's privacy.

Personally, I find it very hard to believe that apps are really spying in the straightforward way people seem to assume. Instead, this effect appears because people do not know how their data is analyzed. Not only does that make them lose trust in technology, it could also lead them to take the wrong actions regarding how their personal information is collected and used.

And with virtual assistants like Google Now, Siri and Alexa everywhere, I think we are still at the beginning of this discussion, not least because 59 percent of respondents in our trend research said we need global personal data protection principles.

For example, Alexa is now reportedly in more than 100 million devices sold, and even though virtual assistants are not supposed to listen in unless explicitly asked, many people do not seem to trust that. "If that is so, how does Alexa know how to respond when I say its name?" A valid question...

Recently, a German Alexa user requested his recordings from Amazon and was mistakenly sent a link to 1,700 audio files belonging to another user. Allegedly, there was so much information in these recordings that the other user could be identified and contacted. If a user can get that level of detail, albeit due to human error in this case, then maybe that information could be used for ad targeting too?

I know next to nothing about ad targeting, but I do know about segmentation from my own line of work, and I suspect that advertisers at least partly make use of similar methods. Briefly, a small number of parameters are used to allocate people to different groups. Everyone in a certain group is then likely to share certain traits beyond those parameters.
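To make that concrete, here is a deliberately simplified Python sketch of what such group allocation could look like in principle. Every parameter, name and number in it is invented for illustration; I am not claiming this is how any particular advertiser actually works.

# A minimal, illustrative sketch of segmentation: a handful of behavioural
# parameters are used to allocate people to groups. All data is invented.
import numpy as np
from sklearn.cluster import KMeans

# Three made-up parameters per person: evening activity (hours per day),
# weekend trips per month, and contacts messaged per week.
people = {
    "anna":    [2.5, 3, 40],
    "bertil":  [2.4, 3, 35],
    "cecilia": [0.5, 0, 5],
    "david":   [0.6, 1, 8],
    "erika":   [2.6, 4, 42],
}
X = np.array(list(people.values()))

# Allocate everyone to one of two segments based on those few parameters.
segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for name, segment in zip(people, segments):
    print(f"{name} -> segment {segment}")

# People who land in the same segment are then assumed to share further traits,
# such as an interest in February ski holidays, that were never measured directly.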

If you have certain friends in your social network that you contact in certain parts of town at certain hours, maybe that is enough to allocate you to a group of people who go on skiing vacations in February, for example.

So, even though you had that conversation with your friend offline, the social network has already allocated you both to the same group. When your friend talks online to someone else about ski holidays, you are automatically assumed to share the same interest, even though you were not even mentioned in the conversation. That's not spying, just inference.
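Again purely as an illustration, the inference step itself could be as simple as the sketch below: pool the observed interests of everyone who shares your segment and assume you share them too. No microphone is involved, and all names, segments and interests are made up.

# A hedged sketch of interest inference via shared segment membership.
segments = {
    "you":          "urban-weekend-travellers",
    "your_friend":  "urban-weekend-travellers",
    "someone_else": "home-bodies",
}

# Only your friend talks about ski holidays online; you never mention them.
observed_interests = {"your_friend": {"february ski holidays"}}

def inferred_interests(person):
    """Pool the observed interests of everyone in this person's segment."""
    segment = segments[person]
    pooled = set()
    for other, interests in observed_interests.items():
        if segments.get(other) == segment:
            pooled |= interests
    return pooled

print(inferred_interests("you"))  # {'february ski holidays'}
# You were never part of the online conversation, yet the targeting system can
# reasonably guess your interest: inference, not spying.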

The spooky thing is that the more digital traces we leave, the better these inferences will become and the more this uncannily correct advertising will freak people out.

So, rather than wait for explainable AI, maybe we should start a step earlier? How about explainable advertising?

Next time you get an unnervingly accurate ad, wouldn't it be nice if you could simply ask it to explain how it got there? Personally, I would love to be able to do that!


ABOUT THE CONTRIBUTOR
Michael Björn
Michael Björn is Head of Research Agenda and Quality at Consumer & IndustryLab at Ericsson ConsumerLab and has a PhD in data modeling from the University of Tsukuba in Japan.