As Fake News came into prominence during the 2016 American election and spilled like a massive tsunami into 2017, the tech giants did what they do best: throw technology at a problem. The bright and shiny sword they held forth was Artificial Intelligence (AI). Except it turned out to be dull, about as useful as trying to mow the lawn with a hammer. AI hasn't solved Fake News, and it's unlikely it will any time soon.
What AI is Good at and Bad at
The kind of AI we see in Hollywood movies, from Spielberg's gentler take to the blunt force trauma of the Terminator, is called "general AI." It's the kind that can carry on a diverse conversation, spot subtle clues and handle nuance. It would, essentially, have consciousness. Except we don't know what consciousness actually is. Today, AI has several branches, from Machine Learning to Natural Language Processing (NLP). They're all used in various ways, mixed and matched.
Yet AI today can only do one thing at a time very well. Techies classify it as "weak AI" because it can't combine all these disciplines to actually think. Keep in mind that thinking is different from intelligence, and different again from solving multiple complex problems at once.
So AI is very good at doing a specific task that follows a defined process. While AI can do some impressive problem solving, it solves problems within a narrow set of confines.
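To make the "narrow task" point concrete, here is a deliberately toy sketch (purely hypothetical, nothing like the systems Facebook or Google actually run): a keyword-based headline flagger. It performs one well-defined task reliably, yet it has no grasp of nuance, satire or context, which is exactly the gap described above.

```python
# Hypothetical toy example of "weak AI": a rule-based headline flagger.
# It does one narrow task well but understands nothing about meaning.
SUSPECT_PHRASES = {"shocking truth", "you won't believe", "doctors hate"}

def flag_headline(headline: str) -> bool:
    """Return True if the headline contains a clickbait-style phrase."""
    text = headline.lower()
    return any(phrase in text for phrase in SUSPECT_PHRASES)

print(flag_headline("The SHOCKING TRUTH about city hall"))   # flagged
print(flag_headline("Council approves new transit budget"))  # not flagged
```

A system like this catches crude patterns but sails right past sophisticated propaganda written in a sober, credible register, which is why nuance remains the hard part.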
Why AI Can’t Solve Fake News
The primary challenge for AI in dealing with Fake News is that AI today is unable to handle nuance and subtlety. And Fake News is all about that. Over time, as in years, with extensive training, AI will likely improve. The challenge is that such training requires a lot of people and data to train the algorithms. It also means a lot of energy and massive data centres. It may well be too expensive right now to train such an AI, and our understanding of consciousness isn't advanced enough to bring AI along.
This is further evidenced by the fact that both Facebook and Google have backed away from using AI and quietly added more humans to manage Fake News on their digital channels.
For now, AI can’t solve this deeply complex problem. That means placing more onus on humans and using public policy tools to help deal with a form of propaganda that is sure to impact many countries in the years ahead.

About the Author Giles W. Crouch

Giles Crouch is a digital anthropologist and CDO/CIO. He spent over 20 years in globally focussed marketing communications for technology products and services, but his roots are anthropology in a modern sense. He uniquely ties together his deep knowledge of technology, marketing, design thinking and design anthropology as a polymath to help clients seeking digital advantage in today's complex world. Giles has been regularly interviewed by international news media on topics such as social media, blockchain, artificial intelligence and its impacts on society. He is a passionate practitioner of design thinking and anthropology, and a prolific writer, public speaker, lecturer and keynote. He has also completed over 250 netnographic research projects since 2009. His secondary activity is as Group Publisher with Human Media Inc.
