Social media has become increasingly popular over the last decade, connecting people all over the world and opening up new communication channels for individuals and companies alike. Alongside these positive aspects, however, social media can also serve as an instrument for manipulating public opinion – for example, by spreading deliberately false reports or by distorting the presentation of genuine information. Thanks to the supposed anonymity of the Internet and highly targeted distribution, false information spreads much faster on social media than in the established media – without the authors and creators having to take (legal) responsibility for it.
However, one thing is clear: disinformation is not a new phenomenon. It was not invented for the Brexit vote, nor for the US election campaign in 2016. Lies that are deliberately published to someone's disadvantage have probably existed for as long as language itself. Social media, however, has made it easier to circulate them quickly and on a large scale.
Disinformation, misinformation and fake news…
In order to take countermeasures and to better assess the danger posed to companies, institutions and individuals, it is important to understand what disinformation actually is. It helps to distinguish disinformation from similar terms such as misinformation or fake news. Even though these terms are often used synonymously in everyday language, not every piece of disinformation or misinformation is a fake news story.
Misinformation is false information that is disseminated regardless of any intention to mislead. People who share misinformation often believe in the truth of the information themselves. Its dissemination can, but does not necessarily, result from bad intentions.
Disinformation is demonstrably false or misleading information that has been deliberately created and disseminated for the purpose of manipulation. The aim is, among other things, to cause economic damage, to manipulate public opinion or to generate monetary profit. Nowadays, disinformation is increasingly produced in written form and embellished with falsified, out-of-context or manipulated images and videos (so-called deepfakes). With the technological support of, for example, social bots, algorithms or artificial intelligence, disinformation is usually spread via Internet forums, news sites or social media.
Disinformation can occur in many different forms and strategic manifestations. In general, however, it can be divided into seven categories.
Satire and parody: there is basically no malicious intent behind satire and parody. The freely invented content refers to real persons and/or circumstances, but the presentation is exaggerated and therefore openly unrealistic. However, some authors use satirical and parodic stylistic elements to spread disinformation for the purpose of intentional manipulation; satirical or parodic content written with malicious intent can therefore also be described as disinformation. This type of disinformation is increasingly found on social media in the form of memes. A meme is a picture or short video overlaid with a short text, usually intended to make fun of something. If pictures or videos are taken out of context and edited into a meme, however, their distribution can cause personal and economic damage.
Clickbait: "clickbaits" are intended to arouse the curiosity of online users and direct them, via exaggerated and attention-grabbing headlines, to websites with advertising potential. The term "clickbaiting" describes the use of such headlines, especially on social media, to encourage users to click on a particular article, often by announcing unexpected news. The goal is to generate traffic for the whole website. Clickbait headlines therefore often refer to particularly well-known people or companies, or promise an outrageously sensational story, in order to generate as many clicks as possible.
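The clickbait pattern described above can be sketched as a crude headline heuristic. The phrase list, extra signals and threshold below are purely illustrative assumptions, not a validated classifier:

```python
import re

# Hypothetical clickbait signals: curiosity-gap phrases, excessive
# punctuation, all-caps headlines. Phrases and threshold are assumptions
# for illustration only.
CLICKBAIT_PATTERNS = [
    r"\byou won'?t believe\b",
    r"\bwhat happened next\b",
    r"\bshocking\b",
    r"\bthis one trick\b",
]

def clickbait_score(headline: str) -> int:
    """Count crude clickbait signals in a headline."""
    text = headline.lower()
    score = sum(1 for p in CLICKBAIT_PATTERNS if re.search(p, text))
    if headline.count("!") >= 2:   # excessive punctuation
        score += 1
    if headline.isupper():         # ALL-CAPS headline
        score += 1
    return score

def looks_like_clickbait(headline: str, threshold: int = 1) -> bool:
    return clickbait_score(headline) >= threshold
```

A real detector would be trained on labelled headlines; this sketch only makes the "exaggerated, attention-grabbing headline" idea concrete.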
Misleading content: here, certain content is deliberately presented in a misleading way or manipulated. The aim is to construct connections between people and facts in order to damage them. For example, a quotation by the German politician Renate Künast on the subject of "violence against children" from a debate in the German parliament was taken out of context and padded with invented sentences, so that the dissemination of this constructed content on social media gave the impression that Künast was trivialising, if not advocating, pedophilia.
False context: in this kind of disinformation, true content is mixed with false information and spread in a false context. In this way, statistics, theses and subjective assertions can be reinforced. This type of disinformation is particularly dangerous for companies, as readers find it very difficult to separate the genuine content from the freely invented, false content. By combining true and false information and placing it in a false context, disinformation gains standing and supposed credibility.
Imposter content: here, existing content is imitated by fraudulent authors. The aim is to deceive users and obtain their data. Phishing e-mails are one example: fraudsters imitate e-mails from companies to customers or employees in order to obtain money or information by deception. In 2017, one of the largest phishing attacks in Germany to date occurred: in the name of the Deutsche Volksbanken and Raiffeisenbanken, fraudsters sent e-mails to bank customers asking them to enter their access data and make bank transfers.
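A minimal technical defence against this kind of sender imitation is to check whether the actual sending domain matches the organization the mail claims to come from. The organization name and domain list below are hypothetical examples, not the real domains of any bank:

```python
from email.utils import parseaddr

# Hypothetical allow-list: which sender domains legitimately belong to
# which organization. Real deployments would also check SPF/DKIM/DMARC.
KNOWN_DOMAINS = {
    "examplebank": {"examplebank.de", "mail.examplebank.de"},
}

def is_suspicious_sender(from_header: str, claimed_org: str) -> bool:
    """Return True if the sender's domain does not belong to the claimed org."""
    _display, address = parseaddr(from_header)
    domain = address.rpartition("@")[2].lower()
    return domain not in KNOWN_DOMAINS.get(claimed_org, set())
```

The point is only to illustrate the mismatch at the heart of imposter content: a trustworthy display name paired with a foreign sending domain.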
Manipulated content: here, texts, audio recordings, photos and videos are falsified using editing software. Modern technologies make such content almost impossible to recognize as manipulated. In the case of manipulated videos (so-called deepfakes), artificial intelligence is used to imitate the appearance, facial expressions, gestures and voice of real people.
Fabricated content: with this kind of disinformation, alleged facts are freely invented and made public. The false content often aims to manipulate certain groups or persons, to damage the target's reputation or to cause some other harm. If the deliberately placed false report is particularly explosive, it is picked up on social media within a very short time and thus reaches a large public. Examples of this type of disinformation are fake news, but also so-called fake reviews – positive, neutral or negative reviews which do not reflect a consumer's honest opinion or real experience with a product or company.
Fake news is false or misleading news, spread in the form of texts, videos or photos, that serves the purpose of manipulation. It is mainly distributed via social networks, where a broad public can be reached quickly, and is disseminated for personal, political or financial reasons. Fake news is a form of deliberate disinformation.
The term "fake news" was popularized above all by US President Donald Trump and has since become a catchword closely associated with populism. For this reason, many now avoid the term and speak instead of targeted disinformation.
Fake News are number 1 on the list of the most important cyber risks – Cyber-Security-Report 2019
Why are fake news so dangerous for companies? Quite simply because they cost money: they can cause lasting damage to a company's reputation, make share prices plummet and destroy the image of products in the long term. The scale of the danger is illustrated by the following figure: 78 billion dollars – that is how much fake news alone costs the global economy every year.
Towards the end of the 2016 US election campaign, the 20 most read false news items were more often liked, shared and commented on than the 20 most successful reports from reputable news agencies – Zeit.de
1. Because people believe what they want to believe. We all tend to avoid cognitive dissonance – that is, we believe information that fits our own world view and, above all, our social identity.
2. Because technical possibilities nowadays make it increasingly difficult to identify and expose disinformation. Countering it requires media awareness and a certain level of media competence – including among a company's employees.
3. Because disinformation, and fake news in particular, plays with negative emotions, and this is exactly what its authors exploit. The human brain pays far more attention to negative news than to positive news, so negative news arouses more interest among readers. This phenomenon is called the negativity bias (also: negativity effect) and describes the tendency of people to be more strongly drawn to negative news.
4. Because social media confronts us with a huge amount of information every day. Verifying every piece of it would take days. The power of misinformation and disinformation should therefore not be underestimated: we often assume, without checking, that sources are reliable, so (intentionally) false news can appear particularly credible.
Can harmful information simply be bought? Yes: if there is no harmful information about a company, adversaries can invent it and purchase its distribution on the Internet. On the so-called "Disinformation-as-a-Service" (DaaS) market, the distribution of false or reputation-damaging content is offered on the darknet. Whether from private individuals or PR companies, the offer is large and, above all, growing. Among other things, the spreading of rumours and false information on social networks can be bought without much effort. The goal: the destruction of a company's reputation.

After all, it is not only political discourse that is littered with fake news and disinformation – companies are also increasingly confronted with this risk. Social media is a platform not only for political and social communication, but also for economic communication. It is particularly worrying that disinformation can be disseminated not only by IT professionals or PR companies but, with the help of freely available software on the Internet, by virtually anyone. It is therefore often unclear which interest groups are behind the dissemination of fake news or disinformation. Whether private individuals, dissatisfied employees, competitors or simply criminals: the faces behind disinformation are diverse and vary in size and influence.
The systematic and organized spread of disinformation on social media exists, and any company can be affected. One danger is that a company's public reputation can be permanently damaged, with negative consequences such as a drop in sales.
More than a quarter of the surveyed international corporations are affected by negative activities in social networks – Global Fraud and Risk Report 2019/20
Specifically, there is a danger that a company's reputation will be deliberately damaged and that competitive advantages will be manipulated by third parties. To gain precisely this competitive advantage, deliberate disinformation campaigns are increasingly launched, spread and amplified by so-called social bots. "Social bots" or "social networking bots" are computer programs that automatically simulate human behaviour patterns in social networks: they like, share, comment and post – always with the aim of making certain statements and opinions visible and reinforcing them. They are an instrument of agitation and manipulation and a source of fake news. One thing is therefore clear: disinformation and misinformation can shape public opinion, so protect your company at an early stage.
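The bot-like behaviour patterns just described (automated liking, sharing, posting at scale) are also the starting point for spotting such accounts. The features and thresholds below are illustrative assumptions; real bot detection relies on far richer signals such as network structure, content similarity and timing patterns:

```python
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    profile_has_photo: bool

def bot_likeness(acc: Account) -> int:
    """Count simple, hypothetical bot signals; higher means more bot-like."""
    score = 0
    if acc.posts_per_day > 50:                      # inhumanly high activity
        score += 1
    if acc.following > 10 * max(acc.followers, 1):  # mass-following pattern
        score += 1
    if not acc.profile_has_photo:                   # default/empty profile
        score += 1
    return score
```

Such a score would only ever be a first filter for manual review, not proof that an account is automated.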
Counter-statements, injunctions or burying your head in the sand are of little help in a disinformation attack. Instead, it is important to prepare in advance for these new dangers and challenges, because once the damage is done, it is difficult to undo.
That is why it is necessary above all:
1. effective prevention through, among other things, employee awareness-raising and training
2. early detection through monitoring measures and general media observation
3. well-developed reputation and risk management
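The second point, early detection through monitoring, can be made concrete with a small sketch: scan a stream of social media posts for mentions of the company combined with typical attack vocabulary. The company name, keyword list and plain string matching are illustrative assumptions; a real setup would use platform APIs, language-aware matching and an alerting pipeline:

```python
# Hypothetical monitoring sketch: flag posts that mention the company
# together with a reputational risk term. All names and keywords are
# examples, not a recommended production keyword list.
COMPANY = "exampleco"
RISK_TERMS = {"scandal", "fraud", "boycott", "lawsuit", "fake"}

def flag_posts(posts: list[str]) -> list[str]:
    """Return posts that mention the company alongside a risk term."""
    flagged = []
    for post in posts:
        text = post.lower()
        if COMPANY in text and any(term in text for term in RISK_TERMS):
            flagged.append(post)
    return flagged
```

Flagged posts would then feed into the reputation and risk management process from point 3, where a human decides whether a response is needed.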
Would you like to train how to deal with disinformation on the social web in a corporate context? Then our crisis simulation is just right for you and your team!