Collecting data about people has long been a primary goal of the advertising industry – think of the Madison Avenue crowd.  The more you know about potential customers, the more closely you can match your advertising to their interests and preferences.  This is true for any kind of communications messaging – it’s all bound up in ‘audience analysis,’ a central theme in successful communications.  Social media, coupled with fast supercomputer technology, has taken it to new levels of sophistication.  The big problem is that we are cave-age brains living in a high-speed digital technology system.  When we used to talk to each other face to face to exchange information, we used a system of personal cues to determine whether the information was valid or invalid.  In today’s high-tech communications it is increasingly difficult to tell good information from bad, or even from outright junk.  The onus is on the receiver to do the work of determining the validity of information.  Sadly, too many people just assume that what they hear must be true, especially if they hear it from a source they have come to ‘trust.’  That trust is more and more blindly based on belief preferences alone.  And if the media platform only tells you what you prefer to hear, with no dissonant (alternate) perspectives, then it is easy to see how one’s worldview can be framed in only one main perspective.

A major assumption made by many users of social media platforms is that when they search for information using search engines, a range of perspectives will be offered in the results.  As noted in the last post, search engines and media platforms are designed to give you what you prefer to see, not a broad perspective.  As such, you can easily be polarized into your own perspective without even realizing it.  NOW, what would happen if you took that algorithm and deliberately manipulated it to push a specific point of view?   Advertising agencies have been doing this for decades, such that the phrase ‘truth in advertising’ is seen as a joke.  We all know that women’s Shampoo X will not make your hair so shiny that men fall over each other to be with you.  Nor will drinking Beer X make you the star of the party.  Have you noticed that those annoying ads that pop up in your social media always seem geared to you personally?  I did a search for a car hire yesterday, and within minutes all these different car rental ads appeared on my Facebook.  When I looked at electric bicycles, companies selling electric bicycles had ads for their websites appear on the YouTube videos I was watching.  Whatever you search for on Google appears on the other social media platforms.  Be careful about looking at naughty sites, as ads for those kinds of sites will appear all over your social media – it’s all interrelated.  And it is all advertising linked.  But let’s delve into the even darker side of this technology.
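The search-to-ad pipeline described above can be sketched in miniature.  This is a deliberately toy model – the ad inventory, category names, and matching rule are all hypothetical; real ad networks combine thousands of tracking signals, not a keyword lookup – but it shows the basic mechanic of inferring interests from searches and serving matching ads across platforms:

```python
# Toy sketch of search-to-ad retargeting (illustrative only).
# All inventory entries and category names here are hypothetical.

# A tiny "ad inventory" keyed by interest category.
AD_INVENTORY = {
    "car rental": ["Acme Rentals ad", "DriveNow ad"],
    "electric bicycle": ["VoltBike ad", "E-Cycle ad"],
}

def tag_interests(search_history):
    """Map raw search queries onto known interest categories."""
    interests = set()
    for query in search_history:
        for category in AD_INVENTORY:
            if category in query.lower():
                interests.add(category)
    return interests

def select_ads(search_history):
    """Return every ad matching the user's inferred interests."""
    ads = []
    for category in sorted(tag_interests(search_history)):
        ads.extend(AD_INVENTORY[category])
    return ads

# One search session is enough to follow you to every platform.
print(select_ads(["cheap car rental", "best electric bicycle 2020"]))
```

Because the inferred interest profile is shared across the ad network, the same `select_ads` output would surface on Facebook, YouTube, or anywhere else the network places ads – which is the cross-platform effect described above.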

We all understand that politicians and advertisers embellish and even lie with political, economic, and product information.  Most of us also realize that the news media are biased depending on which corporate or elite entity owns the outlet, so we usually pick our news outlets to get news that more closely matches our worldviews (unless we make an effort to look at varied perspectives, which few people really do).  However, when it comes to information from social media, we seem to be completely gullible in believing such information when we believe it comes from a ‘trusted’ and neutral source.   And most people do seem to believe that social media is neutral because it is a search of the whole web.  Let me be clear – IT ISN’T.  The algorithms make sure that it isn’t neutral.  These algorithms in themselves are merely programs set up to funnel relevant information to the users.  But what if someone with an agenda were to analyze the data sets for each person (via another algorithm) to understand the user’s biases, so that they could provide false information to influence their decision making?  Once you have grouped your users into various categories, you could then deliberately flood the internet with all manner of propaganda and false information that specific groups would simply believe.  Those key users would then propagate that false information on social media, essentially making it go viral very quickly.  This would quickly cause polarization, where few people would actually fact-check the information, simply believing it to be real.  When enough people are ‘infected,’ you can influence and even manage political and social changes that work for the elites and not the people.  You have the people themselves support agendas they would never normally endorse if they simply thought about it.  Sounds like a Hollywood movie plot, doesn’t it?
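The grouping step described above – analyzing each person’s data to infer their biases, then bucketing users into targetable categories – can be illustrated with a minimal sketch.  The segment names, keyword lists, and scoring rule below are all hypothetical stand-ins; real profiling systems model thousands of behavioral signals, not keyword counts:

```python
# Minimal sketch of audience segmentation for targeted messaging.
# Segment names and keywords are hypothetical examples.

BIAS_KEYWORDS = {
    "immigration-anxious": {"border", "immigration", "jobs"},
    "economy-focused": {"taxes", "economy", "wages"},
}

def segment_user(posts):
    """Assign a user to the segment whose keywords best match their posts."""
    words = set(" ".join(posts).lower().split())
    scores = {seg: len(words & kws) for seg, kws in BIAS_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unsegmented"

def group_users(users):
    """Bucket a user base into segments, each a target for tailored messages."""
    groups = {}
    for name, posts in users.items():
        groups.setdefault(segment_user(posts), []).append(name)
    return groups

users = {
    "alice": ["worried about jobs and the border"],
    "bob": ["the economy and wages are my issue"],
    "carol": ["love gardening"],
}
print(group_users(users))
```

Once users are bucketed this way, each segment can be fed a different tailored message – which is precisely why the same piece of false information can be made to resonate with one group while remaining invisible to another.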

There are consulting groups out there that actually do this for clients who have a keen interest in swaying people’s opinions.   One such company, involved in many scandals since 2013, is Cambridge Analytica Ltd – a British political consulting firm that combined data mining, data brokerage, and data analysis with strategic communication during electoral processes.  Cambridge Analytica was started in 2013 as an offshoot of the SCL Group, which calls itself a ‘Global Election Management Agency (a private behavioral research and strategic communication company).’  This sounds innocuous until you look more closely at the messaging they actually use.  In most political elections, especially in the USA and the UK, the voting bloc that predominantly figures in which parties win or lose is the swing voters.  These are the undecided fence-sitters.  I talked about them in the last blog post – the ‘persuadables.’  Using audience analysis is a core technique for producing targeted messages in good communications.  What makes organizations such as Cambridge Analytica different is that they deliberately introduced fake news and misinformation propaganda to the target audience to sway their voting choices.  One only has to study campaigns such as the last Brazilian presidential election, the Brexit referendum, and the 2016 campaign to elect President Trump to see how this mis-messaging via social media has been taken to scurrilous levels of persuasion.  It is notable how these campaigns were greatly influenced by negative, fear-based messaging, and how post-choice analysis showed the final choice clearly going against the expected outcomes.  Also note that even though you cannot access your own information, Cambridge Analytica had full access to all the data.  I’ll let you draw your own conclusions.  To be continued …….

