Human thought is a construct of Frankensteined variations of ideas and memetically altered belief systems. The original idea is held hostage to weaponized vectors that expedite delivery of the meme that becomes the narrative. The “original idea” is a rarity because of the toxicity corroding the mind, a toxicity stemming from the bombardment of messages that pummel the critical faculties and distort the natural process of thinking. Reality is gossamer, and the human psyche can be disrupted and manipulated easily by tailoring a meme to exploit one or more of dozens of psychological vectors. The weaponized meme, when introduced and reinforced properly, will parasitically weave its way through the labyrinth of the mind and attach itself to the subconscious, affecting the root of the recipient’s thoughts and resulting in the intellectual zombification of the target’s perception of the categories of concepts related to the meme. When an idea is memetically introduced and the meme colonizes the neural pathways of the mind, it becomes a tremendous contributor to one’s belief system. Engineered properly, a belief automatically precludes one from believing its opposite. The meme’s contribution to the digital tribal society allows for automated enforcement of this new belief via peer pressure from the tribe and its chieftains.

Normalization is the process of making something accepted or commonplace in the societal collective conscious by acclimating the population to keywords, desensitizing them to the outcomes through an oscillatory discussion of the polar opposite outcomes, or inundating them with discussion to the point that the subject is accepted but mostly ignored [10].

Online platforms have altered how users, especially younger generations, consume media, including propaganda, and how and why they engage in civic-political spheres. The access to information, expression of ideas, circulation of propaganda, and mobilization of communities all depend on digital platforms and mechanisms that are highly susceptible to the machinations of adversaries and special interests. Participatory politics is the broadening of political discourse into the daily communities of individuals. DIOs often target populations of individuals who participate out of ideological compulsion, attachment to a social justice cause, or online convenience, because those individuals seek causes and dialogues to champion intentionally. In many cases, users are psychologically addicted to the heated online discourse, ideological bubbles, and redirected animosity perpetrated on online platforms or forums [11].

Researchers such as Lee Ross, Craig Anderson, and others have shown that it is remarkably difficult to demolish a falsehood once it has been planted in a subject’s mind and nourished with rationalizations. Each experiment they conducted implanted a belief, either by proclaiming it true or by showing the participants anecdotal evidence. Participants were then asked to explain why it was true. Finally, the researchers totally discredited the belief by telling participants that the information was manufactured and that half the participants had received opposite information. Despite being blatantly told that they had received fabricated information by the creators of that data, the new belief survived approximately 75 percent intact. This effect, belief perseverance, is attributed to the participants’ retention of their invented explanations for the belief. It demonstrates that fake news or disinformation can survive even the most direct attempts to dispel the false ideas. Another experiment asked subjects to determine whether firefighters performed better if they were more willing to take risks. One group considered the case of a risk-prone firefighter who was successful and a risk-averse firefighter who was unsuccessful. The other group considered the opposite. After forming their hypotheses, each participant wrote explanations justifying their position. Once an explanation was formed, it became independent of the initial information that created the belief. Even when the narrative was discredited, the participants retained their self-generated explanations. The research suggests that the more individuals examine their theories and beliefs and the more they explain and rationalize them, the more closed people become to information that challenges their ideology [10].

Belief perseverance is prevalent on every digital platform and, to some extent, in nearly every human interaction. Our beliefs and expectations powerfully govern how we construct events. Individuals are prisoners of their patterns of thought [10]. Influencers can leverage psychographics and the metadata collected from online platforms to determine the spiral dynamic and socionic profiles of population segments that share similar psychological patterns. Afterward, they can craft fake news, propaganda, disinformation, and misinformation tailored to implant specific memes in the population. Infected individuals will internalize and rationalize the notions, and their perceptions will become independent derivatives of the meme that are impervious to challenge or fact. In fact, challenging the theories with contrarian information, such as fact, has an inverse effect. Believers become more defensive and convinced of their ideologies as their theories and rationalizations are confronted [10]. Adversaries can, therefore, play both sides of an issue using trolls and bot accounts to plant an idea and then attack those same ideas and faux sources in order to root the false information in the community and evangelize memetic zealots. The only proven method of dispelling belief perseverance is to coax the subject into considering the opposite side of the issue and explaining it as a “devil’s advocate” [10]. This, too, can be weaponized to convert believers from one side of an issue to another and thereby disrupt critical community networks through infighting and conflicting beliefs.

People do not perceive things as they are; instead, their perceptions are reflections of themselves. Online users tend to enhance their self-images by overestimating or underestimating the extent to which others think and act as they do. Many erroneously assume that others operate on the same spiral dynamic tier or are of a specific intertype relation, without any data to inform that consensus. This phenomenon is referred to as false consensus, and it summarizes each person’s psychological tendency to assume that others share their internal “common sense.” Humans excel at rationalization and knee-jerk conclusions that allow them to persuade themselves that they remain part of the majority in the functional evolutionary tier. Mistakes are dismissed through self-reassurance that “everyone makes mistakes” or “I am sure most people do that.” Meanwhile, negative behaviors are projected onto others. For instance, liars often become paranoid that others are dishonest because they believe that if they lie, then everyone must. False consensus occurs when we generalize from a small sample that may only include ourselves. People have evolved to be comfortable with this process, because in most instances, individuals are in the majority, so their assumption that they lie in the majority on most issues is proportionally accurate. Further, assumed membership in the majority is reinforced through participation in communities and familial units that mirror attitudes, behaviors, and beliefs. However, many fail to realize that majority membership often does not translate to ideological and nuanced issues, such as politics or religion [10]. Additionally, on matters of ability and success, the false uniqueness effect occurs. At some primal level, people want their talents, ideas, abilities, and moral behaviors to be acknowledged as relatively unusual or even superior compared with those of their peers [10].

The effects of false consensus and false uniqueness are self-serving biases. A self-serving bias is a byproduct of how individuals process and remember information about themselves. For instance, one might attribute their success to ability and effort, while the same individual may attribute any failures or shortcomings to luck and external influences. Other self-serving biases include comparing oneself favorably to others without empirical correlation and unrealistic optimism [10].

Adaptive self-serving biases help to protect people from depression and serve as a buffer to stress, but in doing so, they also inflate reality. Non-depressed individuals typically attribute their failings to external factors, while depressed subjects have more accurate appraisals of themselves and reality. Induction of adaptive self-serving biases in a population can be used to manage their anxiety, downplay traumatic events such as terrorism, and increase their motivation and productivity. The opposite is likewise true, however. The propagation of memes that dispel adaptive self-serving biases results in increased introspection and a more grounded view of reality and of the correlation and causation of events [10].

Maladaptive self-serving biases result in people who fail to recognize or grow from their mistakes and instead blame others for their social difficulties and unhappiness. Groups can be poisoned when members overestimate their contributions to the group’s successes and underestimate their contributions to its failures. For instance, at the University of Florida, social psychologist Barry Schlenker conducted nine experiments in which he tasked groups with a collaborative exercise and then falsely informed them that their group had done either well or poorly. In every study, members of successful groups claimed more responsibility for their group’s performance than members of groups who were told that they had failed the task. Envy and disharmony often resulted from group members who felt that they were underappreciated or underrepresented [10].

When groups are comparable, most people will consider their own group superior. Group-serving bias occurs when influencers inflate people’s judgment of their groups. Adversaries can leverage the effect of group-serving bias to pit otherwise comparable digital communities against one another to either create chaos or increase the quality and resolve of membership through competition.

SELF-HANDICAPPING Self-handicapping is the effect of individuals sabotaging their own chances of success by creating impediments that limit the likelihood of success or by not trying at all. People eagerly protect their self-images by attributing failures to external factors. By inflating the hopes and exciting the fears of an audience, an influencer can induce self-handicapping in the more self-conscious members of the target demographic. In short, through an influence operation, the perceived risks begin to outweigh the illusory gains. When self-image is tied to performance, it can be more self-defeating to try and fail than to procrastinate or not try at all. Self-handicapping shifts the onus of failure to external actors or circumstances. Outcomes are no longer associated with skill, talent, merit, or ability. On the off chance that the impaired individual succeeds, their self-esteem is bolstered for overcoming daunting obstacles; meanwhile, failures are attributed to temporary or external factors and actors [10].

PRIMING Assumptions and prejudgments guide our perceptions, interpretations, and recall. The world is construed through belief-tinted lenses. People do not respond to reality; instead, they react to their personal interpretation of reality. Unattended stimuli influence the interpretation and recollection of events subtly. Memory is a complex web of associations, and priming is the activation of specific associations. Numerous experiments have demonstrated that priming one thought, even without awareness, can influence another thought or action. For instance, in 1996, John Bargh and his colleagues asked subjects to complete a sentence containing words such as “old,” “wise,” and “retired.” Afterward, the individuals were observed walking more slowly to the elevator than other subjects who were not primed with age-related words. The “slow walkers” had no awareness of their walking speed or that they had been primed with age-related words. Depressed moods prime negative associations, good moods prime people to selectively view their past as positive, and violent media may prime viewers to interpret ambiguous actions as aggressive [10].

Out of sight does not equate to out of mind. Most social information processing is unintentional, out of sight, and occurs without conscious awareness or consent. It is automatic. In studies, priming effects surface even when the influence stimuli are presented subliminally, too briefly to be perceived consciously. An otherwise invisible image or word can prime a following task. For example, an imperceptibly flashed word “bread” may prime people to detect a related word such as “butter” faster than an unrelated word such as “bottle” or “bubble.” A subliminal color name facilitates speedier detection of that color, while an unseen, incorrect color name delays color identification. Imagine the impact of an influence operation that primed consumers with invisible keywords or images in the ads or borders of the screen. Consider the effect that the use of subtle or even sub-audible sounds could have on reception and understanding of various forms of media or the media consumed afterward [10].

Despite biases and logical flaws, first impressions tend to be accurate more often than not. As a result, most individuals do not have cognitive defenses implemented to mitigate DIOs. They innately trust their perception of events to be undeniably accurate. Implanted prejudgments and expectations can sour impressions and color future interactions without additional effort on behalf of the adversary. The most culturally relevant example of this is that political candidates and their supporters always view the media as unsympathetic to their cause, regardless of coverage [10].

Assumptions about the world can even be leveraged to make contradictory evidence seem true. When presented with factual empirical evidence, such as confirmed experiment results or objective statistics, both proponents and opponents of any issue readily accept evidence that confirms their beliefs and discount evidence that does not. Presenting both sides of an issue with an identical body of mixed evidence increases their disagreement rather than lessening it. Priming is an invaluable tool for any influencer, regardless of position, medium, or issue. Partisanship predisposes perception [10].
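The polarizing effect of identical mixed evidence can be sketched as a toy model. The update rule, weights, and starting leans below are illustrative assumptions, not a model drawn from [10]: each agent accumulates the same perfectly balanced evidence in log-odds space but down-weights items that challenge its current lean.

```python
import math

def posterior(prior_logit, evidence, w_confirm=1.0, w_challenge=0.3):
    """Accumulate evidence in log-odds space, down-weighting items that
    challenge the agent's current lean (a toy model of biased assimilation)."""
    logit = prior_logit
    for e in evidence:  # e is +1 (supporting) or -1 (opposing)
        confirms = (e > 0) == (logit > 0)
        logit += (w_confirm if confirms else w_challenge) * e
    return 1 / (1 + math.exp(-logit))  # belief that the claim is true

mixed = [+1, -1] * 25                  # identical, perfectly balanced evidence
proponent = posterior(+0.5, mixed)     # starts with a slight lean "for"
opponent = posterior(-0.5, mixed)      # starts with a slight lean "against"
print(round(proponent, 3), round(opponent, 3))  # → 1.0 0.0
```

Both agents see exactly the same data, yet the proponent ends nearly certain the claim is true and the opponent nearly certain it is false: the disagreement widens rather than narrows.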

In experiments where researchers have manipulated people’s preconceptions, they were able to direct their interpretations and recollections significantly. For instance, in 1977, Myron Rothbart and Pamela Birrell had University of Oregon students assess the facial expression of a man. Those told he was a Gestapo leader responsible for war crimes interpreted his expression as a cruel sneer; meanwhile, those told he was a leader of an anti-Nazi underground movement interpreted the same expression as a kind and comforting smile. In the digital landscape, every meme engineer can replicate this experiment en masse. They can provide the same memes and misinformation to opposing communities and prime both to harbor extreme animosity against the other. A simple picture of a smiling politician could be interpreted as warm and charming in one sphere and creepy or menacing in the other. Filmmakers likewise control audience perceptions through the “Kuleshov effect.” Essentially, audiences associate their perception of an individual and their expression with the images that precede that individual. If an explosion is seen before a hard cut to a smiling man, then the man is interpreted as the villain responsible for the blast. Similarly, if a faceless person is seen handing money to the homeless, then the individual featured after the hard cut is perceived as charitable [10].

Construal processes also impact perception. When entity A says something good or bad about entity B, people tend to associate that good or bad thing spontaneously with entity A. This phenomenon is referred to as spontaneous trait transference. People who complain about gossiping are seen as gossips themselves, while people who call others names are seen as those insults. Describe someone as caring to seem so yourself. Spontaneous trait transference is the epitome of the childhood mantra, “I am rubber, you are glue….” And it can be weaponized by opportunistic adversaries. Actors can ingratiate themselves into community ranks by complimenting key figures and members. On the other hand, they can demonize their opposition by baiting them into resorting to insults or relying on charged keywords and phrases [10].

Groupthink is a mode of thinking that people engage in when concurrence-seeking becomes so dominant in a cohesive in-group that it overrides realistic appraisals of alternative courses of action [10]. It is vital to maintaining an already collected and controlled population. If one has control over all forms of information, such as books, media, and the web, then forging a cohesive group consciousness is all that remains. It can even be used for generational indoctrination, such as in North Korea.

Conspiracy theories, otherwise known as distorted beliefs, are widespread. How and why they are formed is still more a matter of speculation than science, but fledgling theories exist. Emotional arousal increases neuroplasticity and leads to the creation of new pathways that spread neural activation. According to neurodynamics, a meme is a quasi-stable associative memory attractor state. Depending on the temporal characteristics of the incoming information and the plasticity of the network, the memory may self-organize, creating memes with large attractor basins that link many unrelated input patterns. Memes with false, rich associations distort relations between memory states. Simulations of various neural network models trained with competitive Hebbian learning (CHL) on stationary and non-stationary data lead to the same conclusion: short learning with high plasticity followed by a rapid reduction of plasticity leads to memes with large attraction basins, distorting input pattern representations in associative memory. Such system-level models may be used to understand the creation of distorted beliefs and the formation of conspiracy memes, understood as strong attractor states of the neurodynamics [12].
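The attractor dynamics described above can be illustrated with a minimal Hopfield-style network trained with a plain Hebbian rule. This is a much-reduced sketch under assumed parameters (pattern size, noise level), not the CHL models of [12]: a stored pattern acts as an attractor, and any input that falls inside its basin is pulled back to it, just as varied experiences are "captured" by a dominant meme state.

```python
import numpy as np

def train_hebbian(patterns):
    # Hebbian outer-product rule; self-connections zeroed
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def settle(W, state, max_steps=20):
    # synchronous +/-1 updates until a fixed point (an attractor) is reached
    for _ in range(max_steps):
        nxt = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(nxt, state):
            break
        state = nxt
    return state

rng = np.random.default_rng(0)
meme = rng.choice([-1, 1], size=64)            # the stored "meme" pattern
W = train_hebbian(meme[None, :])
noisy = meme.copy()
flip = rng.choice(64, size=12, replace=False)  # corrupt 12 of 64 units
noisy[flip] *= -1
print(np.array_equal(settle(W, noisy), meme))  # → True: basin pulls input back
```

Even with nearly a fifth of the units corrupted, the dynamics converge on the stored pattern; a meme with a large basin would likewise absorb many unrelated inputs.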

Group polarization is a community-produced enhancement of members’ preexisting tendencies; a strengthening of the members’ average tendency, not a split within the group [10]. Group polarization allows the powerful and influential in a group to slowly sway more moderate opinions to their side. It also produces a more unified group than one that is segmented and fractured. Influencers leverage group polarization to bias a group further.

The fundamental attribution error is the tendency for observers to underestimate situational influences and overestimate dispositional influences upon others’ behavior. Populations often make fundamental attribution errors when judging their beliefs against those of other segments of the population [10]. This is helpful in influencing others, since the falsely attributed flaw is often a weakness that can be leveraged.

Hindsight bias is the tendency to exaggerate, after learning an outcome, one’s ability to have foreseen how something turned out. It is also known as the “I knew it all along” phenomenon. Similar to confirmation bias, hindsight bias acts as a polarizing lightning rod in influence. People want not just to be right but to believe that their internal mechanisms can produce correct results routinely. Thus, when people discover that they predicted something correctly, they become invested in that issue. When they predicted incorrectly, they become prime targets for manipulation through exaggeration of the negative traits of the resulting decision. When they feel that the situation was outside their control and there was no way for them to have been correct, they blame the system and propagate distrust [10].

The illusion of control is the perception that uncontrollable events are more controllable than they are in reality. It is often used to steer the perception of a population. For instance, politicians are known for crediting their supporters with the success of an achievement, while blaming losses on their adversaries. Letting supporters believe that their efforts contributed to the success or outcome of a movement makes it more likely that they will put such efforts forward again. Similarly, the perceived ability to inspire change, or to generate momentum toward preventing an unwanted outcome, convinces people to mobilize their efforts in a likewise fashion [10].

The illusion of transparency is the misapprehension that concealed emotions are detected by other people easily. Individuals like to believe that they can “read” people or that they know who or what to trust based on “gut instinct.” In reality, their perception is a malleable amalgamation of their beliefs, predispositions, biases, and fallacies [10]. It is trivial for an adversary to steer the perception of a population through suggestion or inception. The other side of the illusion is that individuals trust that their peers will understand the emotions, morals, and values underlying their decision-making processes. Similar to a man-in-the-middle attack, an adversary can interject faux motivations into this implicit trust through trolls, bot comments, and fake news.

Extreme persuasion, otherwise known as indoctrination, is the byproduct of conformity, compliance, dissonance, persuasion, group influence, and psychographics. People’s attitudes follow their behavior. The indoctrination process is an incremental conversion made possible by the internalization of commitments that are made “voluntarily,” publicly, and repeatedly. Vulnerable individuals gravitate toward digital communities in search of companionship and acceptance. Ideological radicals leverage group settings and communities as a venue for public commitment to a cause. To navigate from the out-group to the in-crowd, gain membership, or make friends, individuals are fed propaganda and manipulated not just to commit to its messaging but to vocalize their adherence to its tenets repeatedly. Active members are, in turn, tasked with recruitment and participate in activities that ingratiate them further in the group. After some time, the beliefs and actions of the group, no matter how radical, normalize because compliance breeds acceptance. Members may be asked to conduct surveillance, launch layers of digital attacks, fundraise through scams or cryptocurrency mining, or otherwise support the group. Studies have shown that the greater the personal commitment to the cause, the more the individual feels the need to believe the propaganda and disinformation of their community. Eventually, the goals and survival of the group are the priorities of the initiate [10].

People do not turn to extreme ideologies on a whim. Drastic life changes rarely result from an abrupt, conscious decision. Recruiters also do not jar the target drastically into subservience. Instead, recruitment exploits the foot-in-the-door principle. Engagement via memes, especially humorous ones, and psychographically tailored interaction is far more effective than other vectors. As with cults, digital indoctrination is designed to draw more of the target demographic’s time and attention. The activities and engagement likewise increase in ardor. Social engagement might at first be voluntary, but as the process progresses, the input of the individual will be solicited to make interaction mandatory [10]. In fact, with the advent of socially intelligent chatbots, recruitment and indoctrination could be automated using predetermined statements, questions, and responses. Even the process of social commitment may be outsourced to software.

Targets are often emotionally vulnerable and under the age of 25. In many cases, they are less educated and are attracted to simple messages that are difficult to counter-argue. Conveniently for threat actors, such basic messaging is a cornerstone of memes. These individuals are often at crisis points in their lives, feel emotionally vulnerable, or are socially isolated. Many are lifelong wound collectors searching for purpose. Social media platforms, such as Facebook, can be used to locate such individuals. In fact, the application can even delineate users based on daily mood or level of emotional vulnerability [10].

Vivid emotional messages; communal warmth or acceptance; or a shared mindset, such as humor, are all strong messaging attractors. Lonely and depressed people are especially susceptible to such memetic manipulation, because the small bursts of purpose or enjoyment can develop into a dependency. Indoctrination efforts propagating trust in the community, a fight for a justified cause, or faith in a charismatic leader appeal to vulnerable populations. Even just reassurance that someone is not alone or that the community “knows the answer” can be irresistible to a target. Successful indoctrination depends on a strong socionic motivator, such as an infallible leader or an irrefutable cause. The community establishes and bases its interpretation of reality around that entity. For instance, foreign ideological terrorists rely on digital recruitment and indoctrination efforts in other nations. They lure lifelong wound collectors into their communities and befriend them. Afterward, they subject them slowly to propaganda, disinformation, and misinformation. They manipulate emotional triggers, such as sympathy, anger, and gratitude. They cultivate dependencies on the authoritative entities of the ideology, the community, and the leaders. Recruits are convinced to believe the extreme and erroneous interpretation of Islam. They are convinced to act as “living martyrs” whose kinetic actions will herald change and will be rewarded with bliss. Candidates are isolated or withdrawn into near-isolation and then coerced into making repeated public commitments to the community through videos or manifestos. Finally, the adversary acts and the collective, who may have never been within a thousand miles of the individual or known their name, takes credit for the attack [10].

Choice of colors and color associations is vital to effective meme construction. The palette must incline the audience toward a feeling, mood, and ideological stance, and to do that, it must both capture the “brand” of the product and resonate with the collective consciousness of the community. While color is too dependent on personal experiences to be translated universally to specific feelings, there are broad messaging patterns in color perceptions that can be exploited by a knowledgeable adversary. A study titled “Exciting red and competent blue” confirmed that purchasing intent is affected by colors greatly because of their effect on how a brand is perceived; colors influence how customers view the “personality” of the brand in question. When it comes to picking the “right” color, research has found that predicting consumer reactions to color appropriateness is far more important than the individual color itself. Brands can be a cross between two traits, but one is always dominant. While certain colors do align broadly with specific traits (e.g., brown with ruggedness, purple with sophistication, and red with excitement), nearly every academic study on colors has found that it is far more important for colors to support the personality of the idea rather than stereotypical color associations (e.g., red equates to anger). In a study titled “Impact of color on marketing,” researchers found that up to 90 percent of snap judgments made about products can be based on color alone, depending on the product. Regarding the role that color plays in branding, results from another study show that the relationship between brands and color hinges on the perceived appropriateness of the color being used for the particular brand, such as whether the color “fit” the product sold [13].

An evolutionary influence on everyday thinking is the mind’s tendency to search for and impose order on random events. Illusory correlation is the perception of a relationship between two things where none exists or the perception of a stronger relationship than actually exists. It predominantly occurs when one already expects a strong correlation. In 1965, William Ward and Herbert Jenkins showed participants the results of a hypothetical 50-day cloud seeding experiment. They told participants which of the 50 days the clouds had been seeded and which days it had rained. In truth, the data set consisted of a mix of random numbers. Sometimes, it rained after seeding, and sometimes it did not. Nevertheless, participants became convinced, in conformity with the implanted conclusion about seeding, that they had observed a causal relationship in the data. Numerous other experiments have verified that people misperceive random events easily as confirming their beliefs. When a correlation exists, we are more likely to notice and recall confirming instances. People do not notice when unrelated events do not correlate (i.e., the lack of a causal relationship) [10].
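The Ward and Jenkins setup is easy to reproduce. The sketch below uses assumed probabilities and a fixed seed rather than the original data: seeding and rain are generated independently, yet the table still contains many “confirming” seeded-and-rained days for a motivated observer to latch onto.

```python
import math
import random

random.seed(1)
# 50 hypothetical days; seeding and rain are generated independently
days = [(random.random() < 0.5, random.random() < 0.5) for _ in range(50)]

a = sum(1 for s, r in days if s and r)          # seeded & rained ("confirming")
b = sum(1 for s, r in days if s and not r)      # seeded & dry
c = sum(1 for s, r in days if not s and r)      # unseeded & rained
d = sum(1 for s, r in days if not s and not r)  # unseeded & dry (rarely noticed)

# phi coefficient: association between seeding and rain, in [-1, 1]
den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
phi = (a * d - b * c) / den if den else 0.0
print(f"confirming days: {a} of 50, phi = {phi:.2f}")
```

Observers who fixate on cell `a` and ignore cell `d` perceive a causal relationship that the full contingency table, summarized by a phi coefficient near zero, does not support.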

Illusory correlation is a powerful misinformation and disinformation tool in the digital age. It can be used to manipulate a population that is seeking truth, or the semblance of truth, on a topic. Datasets and “facts” can be disseminated to inquisitive but misinformed audiences in near real-time. False but convincing data can be fabricated instantaneously from historical data sets or by big data algorithms and machine learning systems. A portion of the public is seeking scandal constantly, and they can be manipulated with “leaked” data sets. Individuals, especially those unqualified to interpret the data in the first place, are surprisingly unlikely to question the legitimacy of leaked data [10].

Collectivism is the act of prioritizing the goals and well-being of one’s group (e.g., one’s extended family or workgroup) over the welfare of the individual and defining one’s identity accordingly. It is controlled strongly at the spiral dynamic evolutionary stage and, to a lesser extent, at the socionic level. People in groups are more proactive and engaged when the group is challenged, or they are tasked with something intrinsically appealing to their disposition [10]. Attackers can internally steer and externally challenge collectives and groups using troll accounts, bots, weaponized hashtags, or opposing groups to optimize their proactivity. For instance, after riling radical factions concerned with an issue, an adversary can latch onto a current event via the Hegelian dialectic, launch attacks at one or both sides of the issue to drum up support, and inspire both a protest and a counter-protest in real-time. Other psychological and memetic manipulations can be leveraged to ensure that radical members commit violence or that the event captures the mainstream media.

Collectivism also ensures that the adversary can remain obfuscated within the group. Members tend to defend each other, especially prominent figures, such as those accounts backed by bots. Consequently, the attacker can use collectivism and their faux viral status to gain control of the group and to steer its directive and the perceptions of its members [10].

Complementarity is the popularly supposed tendency for people to choose friends or partners who differ from themselves and complete what they are missing (e.g., for a shy person to choose a highly social person as a romantic partner). A number of the socionic intertype relations depend on a sense of complementarity. Furthermore, individuals, especially those who are troubled or isolated, seek out digital communities based on this tendency. For example, a person focused on maintaining the right to own guns may align with a person who opposes religious scrutiny. Even though the views are unrelated, both share an interest in preserving constitutional rights and therefore complement each other, bolstering their collective positions [10].

Complementarity subverts the mind’s defenses and internal beliefs. To a degree, people engage in counterintuitive behaviors as if they were engaging in a “guilty pleasure.” Individuals can be influenced through the use of complementary information or views that fit or complete their view of the world, especially in the face of opposition [10]. For example, even those initially opposed to violence might be recruited and indoctrinated into radical online collectives, such as the Muslim Brotherhood or antifa, and persuaded to plan or conduct campaigns to which they would otherwise be opposed.

Confirmation bias is the tendency to search for and weigh information that confirms one’s preconceptions more strongly than information that challenges them. It is among the most pervasive of biases; we act to preserve our understanding of the world and often disregard contrary information. In influence operations, it is vital to identify which perceptions are in conflict and how they might be swayed. Confirmation bias also leads people to remember successes and forget promises or mistakes. This helps to explain why people are so willing to forget promises made to them or past blunders, as the ends appear to justify the means. Thus, a target population can be influenced by many promises and remarks; to a non-critical eye, the outlandish and untrue are forgotten while the correct predictions seem powerful. Confirmation bias further shapes how one is influenced, since we tend to remember only the information pertinent to sustaining our views and delete the rest from memory. A fake news site could therefore post multiple fake targeted articles without being recognized as fake: the poorly received articles will be forgotten, while the well-received ones will confirm beliefs and gain trust for the site [10].

Influencers play both sides of issues relevant to their targets. One effective mechanism that they can leverage against one or more sides is counterfactual thinking. It focuses on the imagined reminiscence of “what could have been.” Counterfactual thinkers ponder alternative scenarios and outcomes that might have happened but didn’t. Often, they over-idealize those wistful thoughts to the point that reality becomes a bitter disappointment. Attackers can leverage those musings and negative feelings to sow discord and mobilize movements. According to a 1997 study, “Affective Determinants of Counterfactual Thinking,” the significance of the event is correlated linearly with the level of counterfactual thinking. As a result, close elections, charged societal debates, the passage of controversial legislation, and other events that leave one or both sides of an issue extremely emotional are prime targets for memes or trolled discourse that capitalizes on counterfactual thinking [10].

Dispositional attribution is the tendency to explain a person’s behavior as resulting more from their traits and beliefs than from external or cultural factors [10]. As such, digital attackers garner more sway by appealing to the evolutionary urges and ingrained ideological dispositions of their targets than by tailoring their campaigns to focus on a single issue or cause. A chaos operation that aims to promote xenophobia and anger across all digital vectors is going to be more successful than one that targets only one race, class, or individual. Unlike tailored operations, the impact of chaos ops is greatest when the scope is broad and the desired outcome is somewhat abstract.

Cognitive dissonance is the discomfort or tension that arises from simultaneously holding two or more psychologically incompatible thoughts. Leon Festinger’s theory of cognitive dissonance proposes that people are motivated to avoid or minimize cognitive dissonance whenever possible. Influence begins with a disruption, such as a conflict between the current view and the desired view. When bombarded with reports and information, targets either begin to believe the message or segment off into their own polarized views [10]. In influence campaigns, both outcomes can be useful. By planting an initial idea, a conflict can be created over time, and through targeted manipulation, the desired outcome can be achieved. For instance, initial interest in legislation may be nonexistent. A foreign or domestic digital adversary can leverage bots and trolls on Facebook, weaponize hashtags on Twitter, or promote radical podcasts on iTunes to generate “grassroots” support for the issue or bill. Propaganda, fake news, and misinformation can also be deployed to shape public opinion. Similarly, a sudden influx of information may result in overwhelming support, especially when it comes from authority figures. If the dissonance instead pushes the individual to separate and decide on the matter regardless of propaganda, the individual becomes susceptible to other forms of influence, such as isolation from their communities or radicalization.

Deindividuation accounts for the loss of self-awareness that occurs when people are not seen or paid attention to as individuals (for example, when they become absorbed in a role that reduces their sense of individuality or accountability, or when they become part of a crowd or a mob). Often, in the influencing and manipulation of a populace, the deindividuated become the primary targets. By offering a voice to the voiceless, a collective strength can be forged. Furthermore, as the influenced population becomes more and more deindividuated in the message and efforts, the group becomes less likely to break down [10]. Radical factions, such as antifa, the Muslim Brotherhood, Anonymous, and many others, depend on deindividuation for recruitment and continued loyalty on online vectors. These groups tend to target wound collectors who already have a decreased sense of self because those individuals are easier to radicalize and will depend on the group more than others might.

Displacement is a distractionary and reactionary tactic that influencers can employ to redirect aggression to a target other than the source of one’s anger or frustration. Displacement is most prevalent when direct retaliation against the initial source of anger or frustration is not possible. Afterward, the frustrated are prone to overreaction against any minor inconvenience or trivial offense [10]. Generally, the new target is a safer or more socially acceptable one. This could occur microscopically on online forums and social platforms to demonize critics, or macroscopically to pit a collective against a specific target.

Altruism is the motive to increase another’s welfare without conscious regard for one’s own self-interest. It is essentially using the “greater good” as a key motivator. In general, people tend to strive for an altruistic outlook within society. People’s altruistic natures can be abused, however. Many in online communities exhibit false altruism and can be angered easily when their dedication to causes is questioned or when their self-interest is revealed [10].

People neglect the speed and capacity of their psychological immune systems. The internal processes of rationalizing, discounting, forgiving, and minimizing emotional trauma are rarely considered consciously. Because most people are unaware of this mental immune system, they adapt to disabilities, emotional trauma, and other inconveniences more rapidly than they expect. Aberrant events are thus permitted and eventually normalized in the collective conscious. Interestingly, research suggests that immune neglect leaves populations less distressed by major traumatic events than by small irritants, because their minds accept and adapt to major circumstances faster [10].

Many believe that influence should be gradual and hidden. While certain messages and particular audiences necessitate incremental conditioning and obfuscation for normalization to occur, operations that leverage highly emotional or politically charged events should be more immediate to have a greater and swifter impact.

According to research, our intuitions are remarkable in their inability to ascertain what influences us and to what extent we are influenced. When the motivators of our behavior are conspicuous and the correct explanation corresponds to our intuition, self-perceptions tend to be accurate. When the causes of behavior are obvious to an external observer, they tend to be obvious to us as well. When motivation is not a conscious consideration and is maintained internally, however, analysis is unreliable. When manipulation is not obvious, its effect often goes unnoticed. Studies on perception and memory indicate that people are more aware of the results of their thinking than the process. Professor Timothy Wilson has postulated that the mental processes that control social behavior are wholly distinct from the mental processes through which behavior is evaluated and explained. Our rational explanations tend to omit the unconscious attitudes, factors, and processes that determine behavior. In nine experiments, Wilson found that the attitudes that people expressed toward people or things could be used to predict their subsequent behavior relatively accurately; however, if those individuals were first asked to analyze their feelings, then their admissions no longer corresponded to their actions. People are capable of predicting their feelings, such as happiness in a relationship, but thinking about their circumstances clouds their ability to predict their future performance accurately [10].

In short, people are, to a degree, strangers to themselves. Self-reports are unreliable and untrustworthy unless the decisions are driven cognitively and dependent on rational analysis. The sincerity with which people respond, report, and interpret their experiences is not a guarantee of the validity of that sentiment. Vulnerable individuals can be more or less told how they feel and think if sufficient emotional stimuli are applied. Even when they discount the attempts openly, they may still be susceptible [10]. At a bare minimum, ideas and feelings can be planted memetically in the minds of the audience. Individuals who judge themselves unaffected may also be undefended against the influence. As a result, their false sense of security may actually assist the propaganda, misinformation, or disinformation in rooting itself in the subconscious.

The way a question or an issue is posed can influence people’s decisions and expressed opinions. Framing controls both the perception of an issue and the response options available [10]. It is vital to the effective design of every variation of propaganda and meme because, done correctly, it galvanizes the target to follow one of only a few predetermined and predesigned courses of action. The frame could be a logical trap (“Are you opposed to the current administration?” with options of Yes/Somewhat/Other) or it could force a moral conundrum (“True Christians prove themselves by donating to this cause”). Even the engagement-baiting memes that spread like wildfire on social media (“like and X will happen” or “share and Y will be in your future”) rely on a less effective form of framing. Sophisticated memetic frames engage the viewer and mentally distract or coerce them into not considering response options other than those proffered by the influencer. The target should believe that they have no recourse but to act according to the threat actor’s intent.

In order to escalate the magnitude and impact of tensions within or between groups, threat actors can leverage their bots and trolls to strategically strengthen the arguments and mental defenses of members of those communities. Attitude inoculation is the practice of subjecting individuals to weak attacks on their beliefs or ideologies, so that when stronger attacks occur, they will have more powerful refutations available [10]. Defense of an ideology or cause unites the surrounding community and further indoctrinates members through groupthink and collectivism. When attitude inoculation is applied properly, the influencer can shape and train an ideological community or social network group to be exactly the weapon they need for their cyber-kinetic influence operation.

Affective forecasts – predictions of future emotions – are strong influencers of decisions. If people misestimate the intensity or duration of the emotional weight of a decision, they are prone to prepare or emotionally invest in the choice erroneously, and a hastily made decision is often later regretted. People assume that if they get what they want, they will experience immediate fulfillment. Such is not the case. For instance, people believe that if their preferred candidate wins an election, they will be comforted and delighted for an extended period. Instead, studies reveal that they overestimate the enduring impact of emotion-causing events. This effect is referred to as impact bias. Worse, studies suggest that people are more prone to impact bias after negative events. At a fundamental level, decisions are made by estimating the importance of the event and the impact of everything else; when the focus is centered on a negative outcome, however, subjects discount the importance of everything else that contributes to reality and thereby over-predict their enduring misery [10]. Adversaries can exploit impact bias in their virtual campaigns by focusing their propaganda and misinformation narrowly on negative events, repeatedly asserting the faux-eventuality of worst-case scenarios in their interactions on social media platforms, and disseminating memes that encapsulate only the most negative and fear-mongering aspects of current events. In effect, weaponizing impact bias enables the attacker to shape the digital narrative into a one-sided, entirely negative “conversation” with an anxious and mentally exhausted target population [10].

Behavioral confirmation is the epitome of the self-fulfilling prophecy, in which people’s social expectations lead them to behave in ways that cause others to confirm their expectations. This applies to influence in that, once those around an individual begin to conform to a political view or expectation, those who typically traverse the middle or represent the undecided become receptive to messaging and manipulation because they do not want to rock the boat with friends and relatives. Even when the messages do not appeal to them, such individuals will conform or cooperate with more polarized stances. As such, social expectations, such as “we are a Republican family” or “this is a liberal college,” lead one to subconsciously negotiate the spectrum slowly, aligning personal views with the surrounding social expectations, until the individual is convinced that they were always like that. Essentially, we indoctrinate ourselves to meet the perceived expectations around us and avoid conflict [10].

The bystander effect is the tendency for people to be less likely to help someone in need when other people are present than when they are the only person there. Also known as bystander inhibition, the bystander effect can be seen, in loose terms, in the segment of the population that remains inert even at the most crucial junctures. Such people are influenced through ideas like “my vote doesn’t count anyway” and “all government is corrupt.” Here, the goal is not to manipulate or influence the population toward a result; rather, it is to convince the population not to contribute in any meaningful way [10].

The catharsis theory of aggression is that people’s antagonistic drive is reduced when they “release” aggressive energy, either by acting hostilely or fantasizing about aggression. To one degree or another, everyone strives to attain catharsis. The drive for catharsis is one of the most primal motivators, and it can be leveraged by influencers willing to provide an outlet or pay attention to someone with pent-up frustration or rage. Wound collectors and those obsessed with radical change are the most vulnerable to operations that weaponize the need for catharsis. According to the “hydraulic” model, accumulated frustration and aggression require a release. This form of catharsis often takes the form of expressions of prejudice, acts of aggression, and other outlets [10].
