INFLUENCE OPERATIONS TOOLS

Hacking, the art of abstracting new insights from old data, is a skillset that every nation-state will require in every form. It is no longer enough to confront an aggressor directly or indirectly; a nation-state must also stalk the aggressor, and at the first indication that it is winding back to strike, offensive templates prepared in advance for exactly such an incident must crush the aggressor's memes and narratives directly. Mock them, berate them, criticize them. The combination of images, wording, colors, fonts, and distribution-vector variations must be instant and fierce.

KALI LINUX
Kali Linux is an open source, Debian-derived Linux distribution developed for penetration testing and offensive security. It contains numerous tools that can be used for cyberattacks or intelligence-gathering operations against an individual, group, or population as a preliminary stage of an influence campaign. Kali includes features and tools that support wireless 802.11 frame injection, deploy one-click MANA Evil Access Point setups, launch HID keyboard attacks, and conduct BadUSB MITM attacks. Among other applications, the download contains Burp Suite, the Cisco Global Exploiter, Ettercap, John the Ripper, Kismet, Maltego, the Metasploit Framework, Nmap, OWASP ZAP, Wireshark, and many social engineering tools [15].

The Kali Linux operating system is a free, open source download that can be installed on nearly any system or turned into a bootable disk on removable media. The included features and tools amount to a plug-and-play framework and library for attack campaigns and influence operations. Numerous tools can be used to develop and deploy convincing social engineering lures. Nmap, Wireshark, and other applications can be used to derive network and platform metadata, which an attacker could use to calibrate their memes, lures, mediums, or propagation vectors [15].
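As an illustration of how that metadata might be derived in practice, the following minimal Python sketch runs Nmap with service and OS detection and tallies the fingerprints a campaign could use to calibrate its lures. The target range, file names, and the idea of feeding the tallies into lure templates are assumptions made for this example, not a documented workflow.

```python
"""Hypothetical sketch: derive platform metadata from an Nmap scan to
calibrate lure templates. The target range, file names, and template
mapping are illustrative assumptions, not a documented workflow."""
import subprocess
import xml.etree.ElementTree as ET
from collections import Counter

TARGET_RANGE = "192.0.2.0/24"   # assumed test range (RFC 5737 documentation block)
XML_OUT = "scan.xml"

# Run Nmap with service/version detection (-sV), OS detection (-O, usually
# requires root), and XML output (-oX) so the results can be parsed.
subprocess.run(["nmap", "-sV", "-O", "-oX", XML_OUT, TARGET_RANGE], check=True)

os_counts, service_counts = Counter(), Counter()
root = ET.parse(XML_OUT).getroot()
for host in root.findall("host"):
    # Tally the first (highest-confidence) OS match reported per host.
    match = host.find("os/osmatch")
    if match is not None:
        os_counts[match.get("name")] += 1
    # Tally advertised services/products; browser and client data would
    # have to come from other sources, since Nmap only sees listeners.
    for svc in host.findall("ports/port/service"):
        service_counts[svc.get("product") or svc.get("name")] += 1

# A real campaign would map these fingerprints to lure templates;
# here we simply print the distribution such a mapping would use.
print("Most common OS fingerprints:", os_counts.most_common(3))
print("Most common services:", service_counts.most_common(5))
```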

MALTEGO
Maltego, an open source intelligence and forensics application that gathers massive amounts of data from social media platforms, could be used for a similar purpose: to target specific members of a group, to develop an optimal persona based on the socionic and evolutionary characteristics of a population, or to impersonate a key member by hijacking their accounts or mimicking their actions [16].
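Maltego itself is a graphical, transform-based tool; the Python sketch below only illustrates the kind of entity-link aggregation that such tools automate. The records, names, and identifiers are invented placeholders, and no real Maltego transform or social media API is used.

```python
"""Hypothetical sketch of the entity-link aggregation that OSINT tools
like Maltego automate. The records below are invented placeholders;
no real Maltego transform or social-media API is involved."""
from collections import defaultdict

# Assumed input: records gathered from open sources, each linking a
# person to an identifier (email address, handle, domain, etc.).
records = [
    ("alice", "email",  "alice@example.org"),
    ("alice", "handle", "@alice_example"),
    ("bob",   "email",  "bob@example.org"),
    ("bob",   "domain", "example.org"),
    ("alice", "domain", "example.org"),
]

# Build a simple bidirectional entity graph: person -> identifiers and
# identifier -> persons, the structure a transform chain would populate.
links = defaultdict(set)
for person, kind, value in records:
    links[person].add((kind, value))
    links[(kind, value)].add(person)

# Rank shared identifiers: entities linked to multiple people are the
# pivot points an analyst (or an impersonation campaign) would focus on.
shared = sorted(
    ((entity, members) for entity, members in links.items()
     if isinstance(entity, tuple) and len(members) > 1),
    key=lambda item: len(item[1]), reverse=True,
)
for entity, members in shared:
    print(f"{entity} links: {sorted(members)}")
```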

METASPLOIT
The Metasploit Framework consists of anti-forensic, penetration testing, and evasion tools that can be used against local or remote machines. Metasploit's modular construction allows any combination of payload and exploit, and Metasploit 3.0 added fuzzing tools that can be used to discover vulnerabilities and exploit known bugs. Adversaries can use port scanning and OS fingerprinting tools, such as Nmap, or vulnerability scanners, such as Nexpose, Nessus, and OpenVAS, to detect target system vulnerabilities and glean the system information necessary to choose an exploit to execute on a remote system. For influence operations, the scanning tools should also determine the browser type, native applications, and hardware capabilities (such as camera and microphone). Metasploit can import vulnerability scanner data and compare the identified vulnerabilities to existing exploit modules for accurate exploitation. The payload could be purchased from a Deep Web market or forum, or it could be crafted specifically for the target based on the objectives of the influence operation; the technical specifications of the system; and the socionic, psychological, and demographic characteristics of the target user [17].
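Metasploit performs this matching itself once scanner results are imported; the Python sketch below merely illustrates the underlying idea of cross-referencing scanner-reported vulnerabilities against an inventory of exploit modules. The hosts and the hand-built inventory mapping are assumptions made for the example.

```python
"""Illustrative sketch: cross-reference scanner findings with an exploit
inventory, the matching that Metasploit automates when scanner data is
imported. Hosts and the inventory mapping here are assumptions."""

# Assumed scanner export reduced to CVE identifiers per host
# (e.g., from Nexpose, Nessus, or OpenVAS results).
scanner_findings = {
    "10.0.0.5": ["CVE-2017-0144", "CVE-2014-3566"],
    "10.0.0.9": ["CVE-2015-1635"],
}

# Assumed local inventory mapping CVEs to exploit modules; a real
# framework queries its own module metadata rather than a hand-built dict.
exploit_inventory = {
    "CVE-2017-0144": "exploit/windows/smb/ms17_010_eternalblue",
}

# The matching step: list candidate modules per host so an operator can
# then pair each one with a payload suited to the operation's objectives.
for host, cves in scanner_findings.items():
    matches = [(cve, exploit_inventory[cve]) for cve in cves
               if cve in exploit_inventory]
    if matches:
        for cve, module in matches:
            print(f"{host}: {cve} -> candidate module {module}")
    else:
        print(f"{host}: no matching exploit module in local inventory")
```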

AUDIBLE STIMULI
The audiences of horror movies are not meant to be simple spectators; they are passive participants. While immersed, they become convinced on some level that they are accompanying the characters around a dark corner or through a shrouded doorway into strange and disturbing locales. Though the audience remains safe in their seats at the theater or in their living room when the monster or killer emerges from the shadows, the movie leverages their suspended disbelief to make their hair stand on end, draw sweat from their skin, or cause them to leap from their seats. Horror movies rely on a specific ambiance, generated from a careful balance of visual and audio stimuli, that induces a sense of anxiety, suspense, or fear in the audience. Discomfort, anxiety, and fear are powerful behavioral influencers that attackers can inflict subtly on a lured audience by incorporating well-documented movie techniques into their broadcasts or visual propaganda. For instance, The Shining and other movies invoke an instinctual fear response by merging animal calls, screams, the sounds of distressed animals, and other nonlinear noises deep into the complex movie score. Harry Manfredini, the creator of the music score for “Friday the 13th,” elaborated, “The sound itself could be created by an instrument that one would normally be able to identify but is either processed or performed in such a way as to hide the actual instrument.” The effect of these subtle and often entirely obscured sound waves is the evocation of a mini adrenaline rush from the viewer’s psychological “fight or flight” instinct. A similar disorienting effect occurs when a sound is removed from its normal context and retrofitted into an unfamiliar one: the listener’s brain recognizes the disparity but is often unable to resolve it. Audiences may also be conditioned to associate specific sounds with certain actions. In thrillers, this manifests as cued scores that signify when the killer is near; heavy, echoed breathing; or other obvious but learned audible cues [18].

Infrasound – low-frequency sound below 20 Hz – lies mostly outside the range of human hearing but can be felt in the bones and registered by the brain. Infrasound is created naturally by some animals for communication or generated by wind, earthquakes, or avalanches. Movie composers exploit infrasound just above or below the human hearing threshold to incite a response in the audience that ranges from subtle anxiety to visceral unease. Steve Goodman, in “Sonic Warfare: Sound, Affect, and the Ecology of Fear,” argues that while the ways sound in media causes these responses in human perception are under-theorized, it likely has its place, especially with a sourceless vibration like infrasound. “Abstract sensations cause anxiety due to the very absence of an object or cause,” he writes. “Without either, the imagination produces one, which can be more frightening than the reality” [18].
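To make the threshold concrete, the following Python sketch (standard library only) synthesizes a 17 Hz sine tone and writes it to a WAV file. The frequency, amplitude, and duration are arbitrary assumptions for illustration, and most consumer speakers cannot actually reproduce a tone this low.

```python
"""Illustrative sketch only: synthesize a low-frequency (infrasonic)
sine tone and write it to a WAV file using the standard library.
Frequency, amplitude, and duration are arbitrary assumptions."""
import math
import struct
import wave

SAMPLE_RATE = 44100   # CD-quality sample rate (Hz)
FREQUENCY   = 17.0    # just below the ~20 Hz hearing threshold
DURATION    = 5.0     # seconds
AMPLITUDE   = 0.4     # fraction of 16-bit full scale

frames = bytearray()
for n in range(int(SAMPLE_RATE * DURATION)):
    # 16-bit signed sample of a pure sine wave at the chosen frequency.
    sample = int(AMPLITUDE * 32767 *
                 math.sin(2 * math.pi * FREQUENCY * n / SAMPLE_RATE))
    frames += struct.pack("<h", sample)

with wave.open("infrasound_17hz.wav", "wb") as wav:
    wav.setnchannels(1)           # mono
    wav.setsampwidth(2)           # 2 bytes = 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(bytes(frames))
```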

Layered audio attacks have an even stronger effect on an audience. The combination of abstract or altered everyday noises with dialogue or music can unbalance an audience enough that the composer can make them feel a specific emotion. For instance, for the 2012 low-budget zombie film “The Battery,” Christian Stella layered music on top of modulated recordings of power transformers, air conditioners, and other appliances. Manfredini aligns layered emotional audio cues with actions, objects, and colors to increase immersion and gradually divide the audience from logic and reason by exciting the psychological centers responsible for fear and panic [18].
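The layering itself is mechanically simple. The Python sketch below mixes a quiet, looped ambience bed underneath a dialogue track; the file names are hypothetical, and both inputs are assumed to be 16-bit mono WAV files at the same sample rate.

```python
"""Illustrative sketch of audio layering: mix a quiet, low-level ambience
track underneath a dialogue track. File names are hypothetical; both
inputs are assumed to be 16-bit mono WAV files at the same sample rate."""
import struct
import wave

def read_samples(path):
    """Return the 16-bit samples and sample rate of a mono WAV file."""
    with wave.open(path, "rb") as wav:
        raw = wav.readframes(wav.getnframes())
        return list(struct.unpack(f"<{len(raw) // 2}h", raw)), wav.getframerate()

dialogue, rate = read_samples("dialogue.wav")   # assumed foreground track
ambience, _    = read_samples("ambience.wav")   # assumed processed background

mixed = bytearray()
for i, voice in enumerate(dialogue):
    bed = ambience[i % len(ambience)]           # loop the ambience bed
    # Keep the background well below the dialogue so it registers as
    # atmosphere rather than as an identifiable sound.
    sample = int(voice + 0.15 * bed)
    sample = max(-32768, min(32767, sample))    # clip to the 16-bit range
    mixed += struct.pack("<h", sample)

with wave.open("layered_output.wav", "wb") as out:
    out.setnchannels(1)
    out.setsampwidth(2)
    out.setframerate(rate)
    out.writeframes(bytes(mixed))
```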

An adversary could layer infrasound and masked, disorienting noises with the dialogue of a fake news broadcast in the background of a video shared on social media, or in any number of other situations or mediums, to make the target audience fearful or panicked. Anxious and fearful populations become more tribal and isolationist. During the communique or afterward, the attacker can leverage false narratives of xenophobia; prejudice; or any other social, economic, or political topic that matters to the target demographic or evolutionary tribe. The narrative memes will mutate and propagate from the anxious, defensive victims to others in their families, communities, or evolutionary tribes until the memetic narrative becomes a self-replicating, self-mutating entity. Worse, weaponized sonic attacks may not need to be hidden within other mediums at all to influence one or more targets; they could be paired with an exploit. Between October 2016 and October 2017, at least 24 American diplomats in Cuba may have been the victims of precision-targeted sonic attacks. An April 2017 letter from the Cuban Interior Ministry asked hospital officials whether the diplomats had ever been treated for “hearing and neurological ailments, which could be linked to harm to their auditory system from being exposed to levels of sound affecting their health.” Based on the descriptions of the incidents, it is possible, though speculative, that the attacks were the result of unique malware delivered to the victims’ mobile devices by exploiting known vulnerabilities. The malware would then utilize the hardware of the device to emit an infrasonic frequency capable of disorienting and nauseating the target over an extended period [19].

POLITICAL CORRECTNESS
Political correctness, enforced by peer pressure, is a sound method for introducing new rules and regulations that benefit the state; it can be used to exert control over a community without the drama of introducing social laws that would otherwise encounter resistance from the population. With only a limited number of supporting bot and troll accounts and some strategic baiting or insistence on a particular perspective, entire communities can be polarized by the words or actions of an individual or small group, because many partisan ideologies find meaning and garner a sense of community from finding and punishing perceived offenses against their members or cause.

FAKE NEWS
Fake news plants false and dangerous ideas in the minds of a population. It is tailored to the spiral dynamic, socionic, and psychographic profiles of the target. Fake news causes chaos, breeds conflict, and decreases access to accurate information, thereby reducing the public’s ability to make informed choices. Furthermore, it is cheap to produce and disseminate. Before the 2016 Presidential election, the Kremlin paid an army of more than 1,000 people to create fake anti-Hillary Clinton news stories targeting specific areas in the key swing states of Wisconsin, Michigan, and Pennsylvania [20]. The best place for a lie is between two truths, and effective fake news blends truth and falsehood seamlessly until the narrative is sufficiently muddied and the readers’ minds are satisfactorily muddled.

Even the insinuation of fake news can damage reputations and societal institutions. Weaponized, erroneous allegations of “fake news” from seemingly trusted sources can instantly delegitimize invaluable investigative sources and consequently nullify months’ or years’ worth of groundbreaking revelations. With the right message behind the right figurehead on the right platform, stories revealing ongoing atrocities, war crimes, slave trades, illegal business practices, or corruption can be neutralized entirely with a single tweet, Facebook post, or blog entry, often without any need to address the issue itself. When the source of a narrative is attacked with the “fake news” meme, indoctrinated audiences immediately discount the original message, adopt a bandwagon mentality, and join the attack campaign against the legitimate source.

UNAUTHORIZED ACCESS TO INFORMATION
Hacking or gaining access to a computer system can enable the attacker to modify data for a particular purpose, and hacking critical information infrastructure can seriously undermine trust in national authorities. For example, in May 2014, the group known as Cyber-Berkut compromised the computers of the Central Election Committee in Ukraine. The attack disabled certain functionalities of the software that was supposed to display real-time vote counting. It did not disrupt the election process, because the outcome was reinforced by physical ballots, and the impact would have been much greater had it actually influenced the functioning of the voting system. Even so, it called into question the credibility of the Ukrainian government’s oversight of a fair election process. Evidence indicates that the attack was carried out by a proxy actor and not directly by the Russian government. Although Cyber-Berkut supports Russian policy toward Ukraine, there is no definitive proof that these hacktivists have a direct relationship with Russian authorities [2]. This makes the Russian government’s denial of involvement not only plausible but effectively irrefutable in a legal sense. From the perspective of international law, the use of such an operation makes it almost impossible to attribute these activities to a state actor. Another example is the security breach that affected the U.S. Office of Personnel Management in 2015, which resulted in major embarrassment for U.S. authorities, who were unable to protect the sensitive information of nearly all government personnel [2].

FALSE FLAG CYBERATTACKS
In April 2015, the French television network TV5 Monde was the victim of a cyberattack by hackers claiming ties to Islamic State’s (IS) “Cyber Caliphate.” TV5 Monde said its TV station, website, and social media accounts were all hit, and the hackers posted documents purporting to be ID cards of relatives of French soldiers involved in anti-IS operations. TV5 Monde regained control over most of its sites after about two hours. In the aftermath of the January 2015 terrorist attacks on Charlie Hebdo, it seemed obvious to the general public and to investigators that the attackers had ties to IS. In June 2015, however, security experts from FireEye involved in the investigation of the hack revealed that the IS pseudonym “Cyber Caliphate” had likely been used as cover for this attack. According to them, the Russian hacker group known as APT28 (also known as Pawn Storm, Tsar Team, Fancy Bear, and Sednit) may have used the name of IS as a diversionary strategy, and they noted similarities between the techniques, tactics, and procedures used in the attack against TV5 Monde and those used by the Russian group. This can therefore be qualified as a false flag cyberattack, in which the use of specific techniques (e.g., IP spoofing, fake lines of code in a specific language) is intended to produce misattribution. Why would Russia hack, or sponsor and condone someone else hacking, a French TV station? The only obvious rationale behind these attacks, if conducted by Russia, is to sow confusion and undermine trust in French institutions in a period of national anxiety. TV5 Monde could be blamed for not protecting its networks properly and made to look like a foolish amateur, unable to respond in an effective way. Although there is no direct connection, it could be argued that any action that undermined the French government may have led it to act in ways favorable to Russian interests. Here again, plausible deniability provides enough cover not to worry about the legality of such actions or any response by the victim. The fact that a possible link to the Russian government was discovered only months later highlights the very limited risk of repercussions or countermeasures [2].

WEBSITE DEFACEMENTS
Although most website defacements or hacks of Twitter accounts have only very limited impact, their results can occasionally be quite catastrophic. In 2013, the Twitter account of the Associated Press was hacked, and a message claiming the White House was under attack was posted, sending the stock markets down 1 percent in a matter of seconds. With high-frequency trading, short interruptions caused by false messages can have profound financial repercussions. In most cases, however, website defacements are comparable to graffiti and can be classified as vandalism. Technically, they are not very complicated, and again, the effect lies mainly in the embarrassment they cause the target. The aim is to sow confusion and undermine trust in institutions by spreading disinformation, or to embarrass administrators for poor network defense. The effectiveness of the attack lies in the media reaction; the exposure is far more important than the technical stunt itself. These attacks are minor stings, but taken together, they have the potential to erode credibility. Their long-term effectiveness, however, is questionable, as people become aware of their limited impact and network security improves [2].

DOXING
Another technique that has been widely used in recent years is “doxing” (or “doxxing”): the practice of revealing and publicizing private or classified information about an organization (e.g., Sony Corporation) or an individual (e.g., John Brennan) in order to shame or embarrass the target publicly. There are various ways to obtain this information, ranging from open sources to hacking. This type of action is on the rise, and if the data of people like the director of the CIA is accessible, then everyone’s might be. Doxing may also be used for political purposes. For example, in February 2014, Victoria Nuland, then U.S. Assistant Secretary of State for European and Eurasian Affairs, made a rather obscene comment about the European Union in a telephone conversation with the U.S. Ambassador to Ukraine. Such an incident is embarrassing, but more importantly, it can create divisions among allies and jeopardize a common policy to address a crisis. Doxing can be an offshoot of an espionage operation and thus be turned into an ICO, with the information obtained then disclosed to undermine the adversary. These activities cannot be qualified as a use of force or be deemed coercive in nature under international law [2].

SWATTING
Swatting is a popular tactic among script kiddies and gamers, in which a false emergency call describing a dire situation, such as a hostage crisis or a planted bomb, is made to law enforcement local to an unsuspecting target in an attempt to harass or inhibit the individual. It is primarily conducted out of revenge for some perceived harm or for bragging rights. Swatting can lead to unintentional harm or loss of life if the target does not realize that they are being swatted or if police forces misinterpret the situation. Influencers can use swatting as an intimidation tactic against outspoken opponents of a meme’s propagation, to disrupt rival narratives, as fodder for anti-police sentiment, or as part of a false flag attack [2].
