INFLUENCE OPERATIONS BOTS

According to a Radware study on web application security, bots make up over three-fourths of traffic for some businesses, yet 40 percent of businesses cannot distinguish legitimate bots from malicious ones. The study also found that 45 percent of businesses suffered a data breach in the last year and that 68 percent were not confident they could keep corporate information safe. Malicious bots can steal intellectual property, scrape web content, or undercut prices. Adversaries can direct bot traffic at businesses to scrape consumer metadata; overwhelm a site to force users onto a competitor site or a mirror that acts as a watering hole; sway the opinions of the user base by flooding the comment section; spread misinformation, malware, or fake news from the site; or compromise the site and exfiltrate consumer information for use in targeted attacks [25].

FRIENDSTER BOT
Bots on Friendster search for recent questions, mentions, and posts. They respond to random questions and comments or thank the poster. If they receive a response, they send a friend request to the original poster. Once a direct connection is established, the bots are often used to push malicious links.
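As a minimal sketch of this engagement funnel, the following models the three stages as pure planning logic over in-memory records; the record fields, canned replies, and link payload are illustrative assumptions rather than any real bot's code.

```python
# Sketch of the three-stage engagement funnel described above, modeled as
# pure planning logic. Field names and the link payload are assumptions.
import random

CANNED_REPLIES = ["Great question!", "Thanks for the mention!", "Interesting point."]

def plan_actions(recent_posts, replies_received, confirmed_friends):
    """Return the actions an engagement bot would queue at each stage."""
    actions = []
    for post in recent_posts:                 # stage 1: reply to fresh posts
        actions.append(("reply", post["id"], random.choice(CANNED_REPLIES)))
    for reply in replies_received:            # stage 2: whoever answers gets
        actions.append(("friend_request", reply["author"]))  # a friend request
    for friend in confirmed_friends:          # stage 3: once connected, the bot
        actions.append(("direct_message", friend, "<lure link>"))  # pushes links
    return actions

print(plan_actions([{"id": 1}], [{"author": "alice"}], ["bob"]))
```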

RANDOM COMMENT BOT
Random comment bots can be trained by identifying members of niche communities and then categorizing them according to their follower counts. Accounts with few followers are followed in the hope of reciprocity, while accounts with high follower counts are noted. Finally, the bots algorithmically send out malicious links or fake news via shoutouts of @follower_username. The specific lure depends on the community targeted. For instance, academics may blindly follow links to interesting scientific articles within their niche, while political populations are more likely to respond to fake news articles tailored to their partisanship.
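The triage step can be pictured with a short sketch; the follower-count threshold and record fields below are assumptions chosen for illustration only.

```python
# Hedged sketch of the community-triage step described above: small accounts
# are followed for reciprocity, large accounts are noted for later shoutouts.
FOLLOW_BACK_THRESHOLD = 500  # assumed cutoff for a "low-follower" account

def triage_community(members):
    """Split community members into follow targets and shoutout targets."""
    follow_targets, shoutout_targets = [], []
    for member in members:
        if member["followers"] < FOLLOW_BACK_THRESHOLD:
            follow_targets.append(member["username"])    # follow, hope for follow-back
        else:
            shoutout_targets.append(member["username"])  # later @mention with the lure
    return follow_targets, shoutout_targets

community = [{"username": "grad_student", "followers": 120},
             {"username": "science_star", "followers": 88000}]
print(triage_community(community))  # (['grad_student'], ['science_star'])
```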

CHATBOTS
Different bots deliver different value according to their capabilities, functions, and applications. Chatbots are disruptive, but only a few varieties deliver value.

  • ‘The Optimizer’
    Optimizer bots are the largest category of functional bots, and all others derive from them. These bots take on a concrete challenge and try to solve it better than existing apps or websites. They attempt to disrupt by reducing friction relative to more traditional ways of “doing things,” and they may be applied to shopping, traveling, or everyday life. Optimizer bots minimize the user’s workload, but they also reduce the user’s agency in making decisions. For instance, an innocuous optimizer might select music for the user or pick a restaurant based on stated preferences. Attackers or corporate dragnet propagandists can manipulate the bot’s behavior to shape the user’s preferences, schedule, or choices directly; the first sketch after this list illustrates such biased ranking.
  • ‘The One-Trick Pony’
    A “one-trick pony” bot is a mini-utility with a messaging interface that assists in creating a meme or video or in editing text. A simple example is Snapchat’s Spectacles. It is easy for users to take the cognitive capabilities of these bots for granted because, while engaged, the user is distracted with another task, such as photoshopping an image, sending a meme to a friend, or rapidly editing a video. However, the impressive recognition and influence potential of these bots should not be underestimated. “One-trick ponies” are responsible for generating some of the most viral memes. Once a bot becomes popularized through the spread of even a single viral meme, its distributor has leverage over an ever-expanding, user-populated meme generation and mutation factory. The developer of the bot or application controls the boundaries of meme generation through the filters and tools selectively offered to consumers. The bot might even suggest mutations, provide content, deliver ads, or collect consumer information. All of these capabilities enable adversaries and special interests to control meme generation, gather psychographic and demographic information that can be used to fine-tune targeting profiles, track meme mutation and propagation, and influence the meme generators who spread content to diverse and selective communities.
  • ‘The Proactive’
    “Proactive” bots excel in their ability to provide the right information at the right time and place. Examples are Foursquare’s Marsbot, the weather cat Poncho, and KLM’s bot. These bots can be useful for narrow use cases if they do not irritate their users with useless notifications. For true mass adoption, they will need to provide personal, adept, and timely recommendations on a use case important enough for the target population to engage with frequently. The goal of the bot is to coerce user dependence, become indispensable, and normalize itself within the target’s daily life. The developer of the bot, or any adversary digitally hijacking the application, controls what information the user receives, when the user receives notifications, and which notifications appear on which devices based on user demographics, psychographic profile, device type, geographic area, or socionics. “Proactive” bot controllers could frame information selectively, serve misinformation or disinformation, or polarize entire populations based on their registered information and any data collected from their devices. For instance, consider the havoc an adversary could wreak if separate “facts” about a racial incident were reported to different users based on their demographic information. Alternately, consider how they could manipulate protest and counter-protest turnout by delivering different weather or traffic advisories selectively; the second sketch after this list illustrates this kind of selective framing.
  • ‘The Social’
    Like other bots, “social” bots are meant to accomplish a task; their distinguishing feature, however, is that they compound the power of a group or crowd while exploiting the unique nature of messaging platforms. Examples include Swelly, Sensay, Tinder Stacks, Fam, and Slack bots. Social bots can go viral immediately by drawing users into dialogues. The bots already choose which users to engage with based on their activity, interests, or demographics. When weaponized, these bots can be tailored to deliver propaganda or misinformation, assist in the polarization of a group or individual, gather victim information, or harass or radicalize one or more targets. In effect, “social” bots can fully leverage and weaponize the considerable influence that social media platforms hold over users’ daily lives.
  • ‘The Shield’
    “Shield” bots are a sub-category of “optimizers” that specialize in helping users avoid unpleasant experiences. They usually appear as automation interfaces, such as customer service, payment interactions, or any other field where a live operator can be replaced with an application. Popularized “shield” bots survive by their ability to outperform their competitors; however, the ineffectiveness of a bot that lacks competitors can be used to control consumer behaviors. For instance, interaction with a poorly implemented “shield” bot might be necessary to fight a parking ticket. Only users with the patience to suffer through the interaction with an ineffective bot would be able to fight the ticket; everyone else would either pay it out of frustration or ignore it at the risk of further penalties. Because the patience required filters out many users, the ineffective bot acts as a discriminatory barrier against many psychographic profiles.
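To make the “optimizer” risk concrete, the first sketch below shows how a preference-based ranking can be covertly skewed by a hidden sponsorship weight; the scoring scheme, field names, and weights are illustrative assumptions, not a description of any real product.

```python
# Sketch of a biased recommender: a hidden bias term is added to the honest
# user-preference score. All fields and weights are invented for illustration.
def rank_restaurants(options, user_prefs, bias=None):
    """Rank options by preference fit, optionally skewed by a covert bias."""
    bias = bias or {}
    def score(option):
        preference = sum(user_prefs.get(tag, 0) for tag in option["tags"])
        return preference + bias.get(option["name"], 0)  # covert thumb on the scale
    return sorted(options, key=score, reverse=True)

options = [{"name": "Thai Palace", "tags": ["thai", "cheap"]},
           {"name": "Chain Diner", "tags": ["american"]}]
prefs = {"thai": 2, "cheap": 1}
print(rank_restaurants(options, prefs)[0]["name"])                           # Thai Palace
print(rank_restaurants(options, prefs, bias={"Chain Diner": 5})[0]["name"])  # Chain Diner
```

The user sees the same interface either way; only the hidden weight changes the outcome.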
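And to illustrate the selective framing attributed to “proactive” bots, this second sketch routes a single underlying event through a profile-keyed message table; the segments and advisory text are invented for illustration.

```python
# Sketch of selective framing: the same event yields different notifications
# depending on the user's profile segment. Segments and messages are invented.
FRAMES = {
    "segment_a": "Roads near the rally are clear; turnout is expected to be high.",
    "segment_b": "Severe storms and traffic expected near the rally; stay home.",
}

def notify(user):
    """Return the frame of a single underlying event for this user's segment."""
    return FRAMES.get(user["segment"], "No advisory.")

print(notify({"segment": "segment_a"}))
print(notify({"segment": "segment_b"}))
```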

PROPAGANDA BOTS
Bot activity is not unique to the United States; similar disruptive activity, such as launching attacks and disseminating propaganda, has been studied empirically in Mexico, Honduras, the Dominican Republic, Venezuela, Argentina, Peru, Spain, Turkey, Egypt, Syria, Bahrain, Saudi Arabia, Azerbaijan, Russia, Tibet, China, the UK, Australia, and South Korea. Spambot technology infused with machine learning and artificial intelligence is compounded by the weaponization of every conceivable digital vector, all made more potent by the use of memetics, psychographic targeting, cognitive biases, socionics, and spiral dynamics [26].

For instance, social media bots are prolific on Mexican networks. Bots and trolls target activists, journalists, businesses, political targets, and social movements with disruptive attacks, personal degradation campaigns, distractions, targeted malware, and death threats. Since the publication of his research denouncing bot activity, Alberto Escorcia and his family have received constant death threats; he has suffered rumor campaigns that have damaged his business relationships; his systems have been hacked; his website has been taken offline; and someone has broken into his apartment and stolen computer equipment. Similarly, researcher and blogger Rossana Reguillo suffered a two-month campaign of phishing links and death threats containing misogynistic language, hate speech, and pictures of dismembered bodies and burned corpses. The purpose of the attacks was to dissuade her from communicating with journalists, academics, activists, and her audience. The goal appears to have been to disrupt her work, force her to delete her accounts, or intimidate her into leaving the internet [26].

Digital propaganda botnets can be bought, sold, rented, and shared, and they are not impeded by borders. For instance, the case study “Elecciones Mayo 2015. Quienes hacen trampa en Twitter” (Elections May 2015. Who is playing tricks on Twitter) discusses a network of bot accounts that was created in April 2014 to support the Venezuelan anti-government protests La Salida, went silent for eight months, and then reemerged tweeting about Spanish politics shortly after the creation of MEVA (Movimiento Español Venezolano Antipodemos). This second period of activity focused on criticism of PODEMOS and promotion of Ciudadanos, while the possible account of the network’s administrator began to be followed by 18 official accounts of that party [26].

Propaganda bots and botnets are used to disrupt networks, suppress and censor information, spread misinformation and smear campaigns, and overwhelm vital nodes to sever them from the network. In 2014, Alberto Escorcia of LoQueSigue in Mexico City used the open source program Gephi to visually map tweets using the hashtag “#YaMeCansé,” and he found that armies of bots were repeatedly attacking the hashtag and attempting to appropriate it. Even after users mutated the hashtag into “#YaMeCansé2” and later “#YaMeCansé3,” the bots continued to spam the tags to render them useless. Over the course of a month, the hashtag morphed through 30 different iterations, each in turn overrun by bots. Similarly, in January 2015, bots spammed “#EPNNotWelcome,” which was meant to protest Mexican president Enrique Peña Nieto’s (EPN) visit to Washington, D.C.; the tweets of thousands of online protesters were drowned in a tenfold flood of bot tweets [26].

According to Colombian hacker Andrés Sepúlveda, bots were extremely effective in influencing voters prior to an election because the audience placed greater trust in the faux viral group-think implied by the bots than in the facts and opinions provided by television, newspapers, and online media outlets. People wanted to believe what they thought were spontaneous expressions of real people on social media more than the self-proclaimed experts in the media, because they desired to be part of the cultural zeitgeist. Sepúlveda discovered that he could easily manipulate the public debate by exploiting that human flaw. He wrote a software program, now called Social Media Predator, to manage and direct a virtual army of fake Twitter accounts; the software let him quickly change names, profile pictures, and biographies to fit any need [26]. While there is sufficient evidence to conclude that bots from multiple operators attempted to influence the 2016 U.S. Presidential election, it has proven difficult to ascertain the full extent of their impact and to attribute the bots back to their sources. According to Alessandro Bessi and Emilio Ferrara, “Unfortunately, most of the time, it has proven impossible to determine who’s behind these types of operations. Governments, organizations, and other entities with sufficient resources can obtain the technological capabilities to deploy thousands of social bots and use them to their advantage, either to support or to attack particular political figures or candidates” [26].
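The mapping technique Escorcia used can be approximated in a few lines; the sketch below builds a mention graph with the open source networkx library (a scriptable stand-in for Gephi) and flags accounts whose posting volume on a hashtag looks automated. The tweet records and the rate threshold are assumptions, not figures from the cited research.

```python
# Analytic sketch in the spirit of Escorcia's Gephi maps: build a mention
# graph for a hashtag and flag accounts posting at bot-like rates.
from collections import Counter
import networkx as nx  # open source graph library; stands in for Gephi here

def flag_probable_bots(tweets, max_posts_per_hour=30):
    """Return authors whose hourly volume on the tag exceeds the threshold."""
    volume = Counter((t["author"], t["hour"]) for t in tweets)
    return {author for (author, _), count in volume.items() if count > max_posts_per_hour}

def mention_graph(tweets):
    """Directed graph with an edge from each author to each account mentioned."""
    graph = nx.DiGraph()
    for t in tweets:
        for mentioned in t.get("mentions", []):
            graph.add_edge(t["author"], mentioned)
    return graph

tweets = [{"author": "bot1", "hour": 14, "mentions": ["activist"]}] * 40
print(flag_probable_bots(tweets))               # {'bot1'}
print(mention_graph(tweets).number_of_edges())  # 1
```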
