SOCIAL MEDIA PLATFORMS
Global social media usage reached 1.96 billion users in 2017 and is expected to grow to some 2.5 billion users in 2018. As of 2017, daily social media usage by global internet users averaged 135 minutes per day, up from 126 minutes in 2016. The utmost prerogative of social media platforms is to increase users’ usage of, and dependency on, the platform. They want users to check their feeds, pages, and streams every few minutes, regardless of whether doing so interrupts their workflow, disrupts their real-world interactions, or inhibits their emotional stability. In 2017, 81 percent of Americans had a social media profile, representing 5 percent growth over the previous year; in 2015, the Pew Research Center had estimated that 65 percent of American adults were social media users. Young Americans are the most likely to use social networks, with usage at 90 percent; however, use by those 65 and older is increasing rapidly.
Social media platforms are free or mostly free to users because they collect, analyze, and sell consumer data to external third-parties who are interested in personal and global insights into user behavior, interests, and motivators. Platforms further capitalize on users’ attention by selling the ad space within and surrounding the platform. They control their users’ perspectives through the selective display, agenda-oriented curation, and dependency-cultivating delivery of content, news, and community.
Mobile devices have permanently and irrevocably altered the digital threat landscape because the devices tend to travel with the user wherever they go. While at home, at work, at school, or on vacation, mobile devices and the social media platforms with constant access to and control over those devices accompany their users. A study conducted by British psychologists found that young adults use their smartphones an average of five hours a day, roughly one-third of their total waking hours, and that an overwhelming majority used social media.
Many platforms have expanded their initial capabilities to increase user functionality and dependence. For the purpose of discussing attack vectors, we will focus on the predominant use of each platform.
ACCOUNT WEAPONIZATION
Every layer of a social media account, or of any other digital account, can be tailored to optimize the weaponization of the meme. Usernames can contain keywords or triggering words or phrases. Profile images can be memes, public figures, or sympathetic personas based on race or gender. Demographic information and messaging can likewise lend legitimacy or a sense of community to anyone who views the account. The age of the account, the number of followers or friends, the content liked, shared, and commented on, and the communities joined all influence the believability of the account narrative.
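The same surface signals can be inverted for defensive triage. The following Python sketch shows how the believability cues listed above (account age, audience size, engagement history, communities joined) might be combined into a crude credibility score; the Account record and every threshold are illustrative assumptions, a sketch of the idea rather than a production classifier.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Account:
        """Hypothetical snapshot of the surface signals discussed above."""
        created: date       # account age
        followers: int      # audience size
        posts_liked: int    # engagement history
        posts_shared: int
        communities: int    # communities joined

    def credibility_score(acct: Account, today: date) -> float:
        """Crude 0-to-1 triage score: older, engaged, well-connected
        accounts score higher. All thresholds are assumed values."""
        age_years = (today - acct.created).days / 365.0
        signals = [
            min(age_years / 2.0, 1.0),                 # ~2 years of history maxes out
            min(acct.followers / 500.0, 1.0),          # a modest real audience
            min((acct.posts_liked + acct.posts_shared) / 200.0, 1.0),
            min(acct.communities / 5.0, 1.0),
        ]
        return sum(signals) / len(signals)

    # A low score does not prove automation, and a high one does not prove
    # a human: operators deliberately "season" accounts to pass such checks.
    print(credibility_score(Account(date(2017, 1, 10), 42, 3, 1, 0), date(2017, 10, 1)))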
PUBLIC MESSAGING PLATFORMS
Public messaging services such as Twitter are populated by at least 320 million registered users every month, in addition to all the voyeurs who congregate on the platform to absorb the messaging of their peers and idols. Twitter has recently become a more culturally relevant platform as politicians, Hollywood elites, and other societal leaders and figureheads have utilized it to deliver concise editorials or calls to action on numerous societal, political, and cultural issues and causes. As a result, the potential influence, and potential harm, of a stolen account increase drastically. An adversary who weaponized a popular and trusted account could cause significant disruption in a short period. Additionally, threat actors can leverage massive numbers of bot accounts to generate their own fake viral following, amplified through retweets, weaponized hashtags, and follow-for-follow campaigns. Once an account has even a modest following, they can disseminate malicious links, propaganda, fake news, and other attacks across their own and public figures’ networks.
Twitter bot “rental” services can be hired for set periods or to deliver tailored content to a specific target or target platform, and Twitter account resellers are plentiful on Deep Web markets and forums. Otherwise, Twitter bot accounts can be created individually, with handles, images, emails, and demographic information designed precisely to evoke a strong response, convey a subtle message, or lend a certain personality to the account. For instance, a bot attempting to incite violence or division within “Black Lives Matter” communities will likely be more influential if the underlying account appears to have a history and if, according to the messaging, account information, display image, and handle, the account appears to be operated by someone who is black. Attacks from community outsiders, such as a bot account that mimics a white supremacist, evoke a tribal community response; meanwhile, bots that appear to be community insiders can more easily incite internal division and radicalization by challenging the dedication, beliefs, and values of other community members. If either has a sizable following (even a few dozen accounts, including other bots), then interaction of any type will produce a reaction. Every reaction can be manipulated and controlled through the guidance of bot accounts on either or both sides of the dispute until the desired outcome is achieved.
Cultivating a bot account to increase its audience and influence focuses on mass exposure. Bot accounts are grown organically by interacting with legitimate Twitter accounts, gaining reciprocal followers, and auto-following niche figures and community members. A hundred Twitter accounts with a few hundred followers each amount to an audience in the tens of thousands. The organic flow of interaction, automatic tweets, and retweets simulates an active account persona and implies authenticity behind the bot. Low-level bots add users and drip out links or pre-generated comments automatically. More sophisticated bots, often aimed at higher-value targets, influence epicenters, or niche communities, interact through real dialogue generated using artificial intelligence or automated libraries, such as WordAI’s API. If the meme gains traction, it will migrate through users onto other platforms such as Facebook or Pinterest. Particularly virulent memes will mutate or evolve into derivatives on both the original platform (Twitter) and the new destination (e.g., Facebook), due to users’ desire to repurpose the meme to fit their arguments and the loss of context, such as the source or platform. Each iteration of the meme increases the chaos and, oddly, the perceived authenticity of the underlying message within the community. Humanity’s desire to comprehend and rationalize information transforms a suggestion into a rumor, into gossip, and eventually into the narrative.
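The reach arithmetic quoted above (a hundred accounts with a few hundred followers each) can be made explicit with a back-of-the-envelope Python calculation; the follower-overlap discount is an assumed figure, since botnet followers are often shared or duplicated.

    # Back-of-the-envelope reach for the figures quoted above: one hundred
    # bot accounts with "a few hundred" followers each. The overlap factor
    # is an assumption accounting for shared and duplicate followers.
    accounts = 100
    followers_per_account = 300
    overlap = 0.40

    raw_reach = accounts * followers_per_account      # 30,000 raw impressions
    unique_reach = int(raw_reach * (1 - overlap))     # ~18,000 unique users

    print(f"raw audience:    {raw_reach:,}")
    print(f"unique audience: {unique_reach:,}")

Even under a heavy overlap assumption, a trivially small network lands comfortably in the tens of thousands of impressions per post.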
IMAGE BOARD SITES
Most memes are images or have a visual component that distracts the viewer while their subconscious absorbs and internalizes the adversarial message. Image board sites such as Instagram, Snapchat, Pinterest, Flickr, and DeviantArt can be used to design and tailor a meme before spreading it to other platforms. Unsuccessful memes are discarded immediately, while memes that resonate are studied, improved, and propagated.
Most content migrates to Pinterest as a cascading effect of meme adoption on other platforms; however, Pinterest bots can be rented and sold just like those of any other platform. Pinterest bots are typically used to target women and feminist niches because, though not necessarily indicative of the actual audience, those demographics are perceived as the Pinterest audience. Bots re-pin images, follow users, and post fake news or malicious links.
In the run-up to the 2016 election, Pinterest became a repository for thousands of political posts created by Russian operatives seeking to shape public opinion and foment discord in U.S. society. Trolls did not post to the site directly; instead, content spread to the platform from users who adopted it from Facebook and other sites and pinned it to their boards. Influence posts and ads intended to divide the population over hot-button issues, such as immigration and race. Influence pages on Facebook and Twitter weaponized supporters and opponents of “Blacktivism,” “United Muslims of America,” “Secured Borders,” “LGBT United,” and other highly polarized topics. Many of the accounts operated on multiple platforms (i.e., Facebook, Twitter, Instagram), and images from an estimated 83 percent of pages migrated to Pinterest through indoctrinated legitimate users. Of the hundreds of accounts investigated, only one displayed the conventional characteristics of bot accounts, such as a lifetime of less than a year and limited interaction with other users. For instance, a Pinterest board dedicated to “Ideas for the House” featured an image of a police officer and text indicating that the officer was fired for flying the Confederate flag; the meme originated from the “Being Patriotic” Twitter and Facebook accounts, which were associated with the Internet Research Agency. Similarly, the “Heart of Texas” account, eventually disabled on Facebook, weaponized polarized users to spread its memes across multiple platforms and mediums by playing on their proclivities, beliefs, and fears. One image of a man in a cowboy hat urged viewers to like and share if they “wanted to stop the Islamic invasion of Texas.” Jonathan Albright, research director of the Tow Center for Digital Journalism at Columbia University, uncovered over 2,000 propaganda images on Pinterest tied to the Internet Research Agency.
The migration of political memes to Pinterest is uniquely interesting because the platform is not conventionally political. Pinterest does not enable users to spread content to innumerable networked users in the same manner as other social media applications; it is typically used to share crafts and artistic creations rather than to exchange ideas or debate. Before foreign attempts to influence the 2016 election, political content had never gone viral on Pinterest. The spread of Russian coercive memes onto Pinterest could be an unintentional symptom of the multi-vector infection of American cultural communication mechanisms, or it could be a deliberate strategic component of a complex operation designed to pollute the entire information and social media landscape [21].
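The “conventional characteristics” that Albright’s data defied can be expressed as a simple filter. A minimal Python sketch of that heuristic follows, using the under-one-year lifetime described above and an assumed interaction cutoff; as the investigation showed, seasoned influence accounts are built to pass exactly this check.

    from datetime import date, timedelta

    def matches_conventional_bot_profile(created: date, interactions: int,
                                         today: date,
                                         min_interactions: int = 20) -> bool:
        """Flag accounts fitting the conventional bot profile described
        above: a lifetime under one year and limited interaction with
        other users. The interaction cutoff is an assumed value."""
        return (today - created) < timedelta(days=365) and interactions < min_interactions

    today = date(2017, 10, 1)
    print(matches_conventional_bot_profile(date(2017, 3, 15), 4, today))   # True: young, quiet
    print(matches_conventional_bot_profile(date(2015, 6, 1), 900, today))  # False: seasoned account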
PROFESSIONAL NETWORKING SITES
Nearly every professional will utilize or be dependent upon an online professional network at some point in their career to create new connections, hire new talent, or search for new opportunities. Though LinkedIn is the premier service in this space, others such as Viadeo, Monster, and Xing also exist. The main attacks against users of professional networks are the delivery of memetic content or propaganda and the damage caused by social engineering and mimicry. Users are open to connecting, communicating, and sharing information by default. Adversaries can leverage that trust to persuade niche-specific targets to open links or consume fake news, misinformation, or disinformation that they otherwise would not. Academics, key professionals, and well-known networkers are prime targets for these attacks. Account details from the platform and other social media facilitate attacks where the adversary adopts a target’s information to lure others or cause reputational harm.
LinkedIn is the most business-oriented of the social media platforms and, as a result, can be an immensely powerful tool for social engineering operations, precision targeting of niche communities, and small but more focused operations. Unlike Facebook or Twitter, where users can lie about their demographic information or hide their content from the global network, LinkedIn depends on honesty, transparency, and accessibility to function. Any adversary can weaponize these qualities against unsuspecting users. LinkedIn bots can be used to push networking requests, articles, or messages onto targets.
Though some information on LinkedIn can be hidden from those outside a user’s network, the majority of information, such as employment history, education, demographics, interests, and skills, is often left visible to anyone with an account. Adversaries can collect this basic information, or rely on demographic or psychographic information or spiral dynamics profiles, to tailor bots to certain users and community members. Worse, according to a court ruling under appeal at the time of this writing, LinkedIn cannot prevent third parties, such as bot operators and data cultivators, from scraping members’ data. Bots can automatically like, share, comment, send invitations, endorse skills, or post or send fake news and malicious links. Given that many use the platform to share their resumes or search for employment, LinkedIn bots can pose as potential employers to collect resume information that facilitates identity theft. Stolen resumes, portfolios, writing samples, and company emails can be used in future spear phishing campaigns that imitate the initial victim to increase the success rate of the lure. Even emails from bot accounts to a target company’s management claiming that personnel with active LinkedIn accounts are searching for or have accepted alternative employment, whether or not any such conversation occurred, could cause pandemonium within major organizations.
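From the defender’s side, the social engineering pattern described above suggests a simple checklist for triaging inbound connection requests. The following Python sketch is illustrative only; the ConnectionRequest fields and every threshold are assumptions for the sake of the example, not a LinkedIn API.

    from dataclasses import dataclass

    @dataclass
    class ConnectionRequest:
        """Hypothetical summary of an inbound request; the fields are
        illustrative, not drawn from any real LinkedIn interface."""
        account_age_days: int
        mutual_connections: int
        claims_recruiter: bool
        asks_for_resume: bool
        message_is_generic: bool

    def social_engineering_flags(req: ConnectionRequest) -> list:
        """Return the red flags suggested by the attack pattern above;
        all thresholds are assumed values."""
        flags = []
        if req.account_age_days < 180:
            flags.append("young account")
        if req.mutual_connections == 0:
            flags.append("no mutual connections")
        if req.claims_recruiter and req.asks_for_resume:
            flags.append("recruiter persona requesting a resume up front")
        if req.message_is_generic:
            flags.append("templated, generic introduction")
        return flags

    # Example: a six-week-old "recruiter" account with no mutual
    # connections asking for a resume trips three of the four flags.
    req = ConnectionRequest(45, 0, True, True, False)
    print(social_engineering_flags(req))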
MESSAGE BOARD SITES
Just as at the advent of the internet, forum-based sites remain the home of marginalized and self-isolating communities. Sites like Reddit and 4chan are populated by radical users and communities from every demographic. Users typically follow more vocal accounts that appear to “know things” but that often only spout rhetoric gratifying the beliefs of their ideological bubble. For instance, in early November 2016, WikiLeaks released emails from former Clinton campaign chairman John Podesta’s account. One minor donor caught the attention of trolls on Reddit and 4chan, along with the recurring use of the word “pizza” in discussions about fundraisers and regular outings. 4chan users began posting speculation and alleged connections that they had pieced together from haphazard internet searches. Others trawled the Instagram feed of the donor, James Alefantis, for images of children and of the modern art that lines the walls of his pizza establishment in Washington, DC. Despite Comet Ping Pong lacking a basement and despite Alefantis having never met either of the Clintons, Reddit and 4chan users perpetuated the conspiracy theory that his business operated a pedophile sex ring out of the basement. In only a few days, the employees of the shop were receiving threatening phone calls and social media attacks, and angry protesters picketed outside the shop.
In mid-November, Turkish pro-government media outlets and trolls began tweeting #Pizzagate because Turkish President Tayyip Erdogan had adopted the narrative as a means of displaying “American hypocrisy”; Erdogan’s regime had been scandalized by a real child abuse operation connected to a Turkish government-linked foundation [22][23]. Circulation of the weaponized hashtag was amplified by Twitter bots that were later traced to the Czech Republic, Cyprus, and Vietnam. In fact, most of the tweets about Pizzagate appear to have come from bot accounts [3]. The Turkish trolls also used the story as a distraction from Erdogan’s controversial draft bill that would have provided amnesty to child abusers if they married their victims, although the draft was later withdrawn due to protests. Despite a complete lack of any tangible or credible evidence, and even a lack of victims, the conspiracy continued to grow until an armed North Carolina man, Edgar Maddison Welch, arrived at Comet Ping Pong and attempted to “self-investigate.” He too found no evidence that the theory was anything other than a fake news story turned politically motivated attack by a Turkish regime intent on turning moral panic in the United States against its geopolitical critics. Afterward, Reddit began to remove threads related to Pizzagate, claiming, “We don’t want witchhunts on our site” [22][23].
The Pizzagate narrative should not be seen as a partisan issue. Such rumors could be insinuated, amplified, and directed against affiliates of either party. In this instance, internet trolls loyal to the Erdogan government used weaponized hashtags, Twitter bots, and Reddit and 4chan trolls to distract and deflect from government scandals in Turkey. Any actor or regime could just as easily fan the flames of chaos or digitally smear public officials or their donors. As a result, campaign contributors from either party might be less willing to donate in the future out of fear that their information could be compromised and a similar situation could target them.
Welch turned himself over to police after he did not find any evidence of the alleged conspiracy; however, similar influence operations could easily radicalize a susceptible individual into a lone-wolf actor in a foreign nation and inspire them to launch an attack that results in loss of life or that seizes the attention of American media outlets.
COMMUNAL NETWORKING PLATFORMS
Facebook and similar platforms such as VKontakte (VK) are prime targets for foreign influence operations that weaponize fake news, propaganda, altered images, inflammatory and derogatory public and private messages, inflated like counters, and other adversarial activities. Following the 2016 election, Facebook shut down over 470 Internet Research Agency accounts and acknowledged that some also had a presence on Instagram. Twitter shut down at least 201 accounts associated with the Internet Research Agency.
Groups and communities are either completely fabricated or flooded with malicious bots. For instance, in June 2016, a swarm of Russian bots with no apparent ties to California began friending a San Diego pro-Bernie Sanders Facebook page and flooding it with anti-Hillary Clinton propaganda. The links were not merely meant to divide the community via political differences; they alleged that Clinton murdered her political opponents and used body doubles. Most of the associated domain registrations traced back to Macedonia and Albania [24].
Similarly, beginning in January 2016, Bernie Sanders supporters became high-volume targets of influence-operation propaganda floods. “Sock puppet” accounts were used to deliver links and spam-bomb groups. The flood began as anti-Sanders material and, after Clinton won the Democratic nomination, switched to anti-Clinton propaganda, fake news, and watering-hole sites. Links to dozens of fake news sites were spread in each group. The lure topics ranged from a “Clinton has Parkinson’s” conspiracy to a “Clinton is running a pedophilia ring out of a pizza shop” conspiracy. Trolls in the comment sections attempted to convince group members that the content was real. Many of the “interlopers” claimed to be Sanders supporters who had decided to support the Green Party or vote GOP; the bots and trolls made it seem as if the community as a whole had decided that Green or GOP was the only viable option. Other articles offered “false hope”: ABC[.]com[.]co masqueraded as ABC News and “reported” that Sanders had been endorsed by the Pope. Bev Cowling, who managed a dozen Sanders Facebook groups, commented, “It came in like a wave, like a tsunami. It was like a flood of misinformation.” Groups were bombarded with nearly a hundred join requests per day, and administrators lacked the time to vet each applicant. According to Cowling, “People were so anti-Hillary that no matter what you said, they were willing to share it and spread it. At first, I would just laugh about it. I would say, ‘C’mon, this is beyond ridiculous.’ I created a word called ‘ridiculosity.’ I would say, ‘This reeks of ridiculosity.’” In response, the trolls would discredit her voice by calling her, the administrator of a Sanders group, a “Hillbot” or a Trump supporter.
The misinformation sowed mistrust within the Sanders community, compounded by legitimate reasons to be skeptical of Clinton, the WikiLeaks dump of DNC emails, and a perpetual cycle of paranoia and flame wars. The Facebook groups were bombarded with fake news and anti-Clinton propaganda, and it did not matter whether the stories were believable. Users hesitated to click on legitimate links, fearing redirection to a misinformation site. One achievement of the attack was that it made browsing a group for valid sources akin to sitting in a room filled with blaring radios and attempting to discern which one was not blasting white noise. The goal was both to misinform the susceptible and to overwhelm or distract the rational. Entire communities were effectively “gas-lit” for months. Anyone attempting to call attention to the attack was labeled a “Hillary shill” and attacked. Even an attempt to point out that NBCPolitics[.]org was a fake site drew criticism and vitriol (the real site is NBCNEWS[.]com/politics). All it took was one group administrator convinced by the misinformation campaign for the rational detractors combating the attack to be banned, thereby increasing the attack’s reach. Foreign-influence memes transformed groups into echo chambers of anger.
One administrator of a Bernie Sanders Facebook group investigated a propaganda account named “Oliver Miltov” and discovered four accounts associated with the name. Three had Sanders as their profile picture; two had the same single Facebook friend, while a third had no Facebook friends. The fourth appeared to be a middle-aged man with 19 Facebook friends, including the one friend the other Miltovs had in common.
The four Miltovs operated in more than two dozen pro-Sanders groups around the United States, and their posts reached hundreds of thousands of members. For instance, on August 4, 2016, a Miltov post claimed, “This is a story you won’t see on Fox/CNN or the other mainstream media!,” and linked to an article claiming that Hillary Clinton “made a small fortune by arming ISIS.” Similarly, on September 25, 2016, a Miltov account posted, “NEW LEAK: Here is Who Ordered Hillary To Leave The 4 Men In Benghazi!” and linked to a fake news site called usapoliticsnow[.]com. The Miltov accounts, just a few of the astounding number of influence accounts operating on social media platforms, were designed to depress, disenfranchise, overwhelm, desensitize, inundate, and anger Sanders supporters [24].
Bots and trolls also purchase ads on Facebook, Twitter, Google, and other platforms, and at the end of articles on popular sites via ad-targeting services. Most users have probably seen these “clickbait” sections, sometimes entitled “From Around the Web.” Political influence links often surround “legitimate” propaganda ads paid for by campaigns and associated entities. Even when foreign efforts to spread disinformation are comically obvious and riddled with typos and poor translations, their artificial groups garner tens of thousands of members who heavily engage with the articles and links posted by the malicious administrators. Users follow the links and engage with the bots and trolls in the comments when they agree or disagree with the content or title of the article. Trolls specifically target voters who consider themselves activists, anti-establishment, or anti-status quo.
Facebook said in September 2017 that Russian entities had paid $150,000 to run 5,200 divisive ads on its platform during the campaign. It identified roughly 450 Russian-linked accounts as having purchased ads, a list that it shared with Twitter and Google, according to people familiar with the matter. Twitter said that it discovered 201 accounts on its service linked to the Russian actors identified by Facebook. Graham Brookie, deputy director of the Atlantic Council’s Digital Forensic Research Lab, stated, “If you’re running a messaging campaign that is as sophisticated as micro-targeting demographics on Facebook, then there’s no way you’re going to sit there from a communication standpoint and say, ‘Google doesn’t matter to us.’” “Being Patriotic” was one Russia-sponsored Facebook page among the over 470 pages and accounts shut down as part of Facebook’s investigation into Russian meddling in the 2016 election. The Associated Press performed content analysis on the page’s 500 most popular posts and found them filled with buzzwords related to issues such as illegal immigration [24].
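The disclosed figures work out to remarkably cheap influence. A one-line Python check, assuming the quoted $150,000 covered all 5,200 ads:

    # Average spend per ad from the disclosed totals above.
    total_spend = 150_000
    ad_count = 5_200
    print(f"average cost per ad: ${total_spend / ad_count:.2f}")  # ~$28.85

Under that assumption, each micro-targeted, divisive ad cost less than thirty dollars, a fraction of what a conventional political ad buy of comparable reach would require.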