A ‘Wild Boar’ trained by Yandex A massive data leak reveals the ascent of artificial intelligence in Internet surveillance and protest suppression in Russia

Source: Meduza
Ivan Kleymenov for Meduza

Last November, a Belarusian hacker group called “Cyberpartisans” announced that it had gained access to more than two terabytes of documents and messages exchanged by the staff of a nondescript entity within the larger structure of Russia’s mass media regulator, Roskomnadzor. The so-called Main Radio Frequency Center (MRFC), or (in Russian) “FGUP GRChTs,” turned out to be the locus of Russia’s Internet surveillance and a major driver of repressions against Russians who criticize the Putin regime or protest against the war in Ukraine. Cyberpartisans shared their data trove with journalists at several independent outlets, whose reporting shows that the obscure agency is central to building a next-generation surveillance state in Russia. The leaked messages also reveal MRFC’s close collaboration with Yandex: the divided I.T. giant’s Russian side appears to be deeply involved in training the next generation of A.I. apps geared toward total surveillance and the crushing of political opposition and protest in embryo. Meduza summarizes the detailed analysis of the leak published by Current Time in collaboration with Radio Free Europe.

A war on pacifists

The Main Radio Frequency Center (“FGUP GRChTs”) was first established in 2000 as a unit of the federal communications regulator Gossvyaznadzor. In 2008, it became part of Roskomnadzor, newly established that year by President Dmitry Medvedev. In May 2014, MRFC was tasked with “monitoring compliance within Roskomnadzor’s specified scope.” The center’s eight branches employ around 5,000 workers; its 2022 budget was 20.5 billion rubles (about $280 million).

The center’s main tool for tracking politically undesirable online content is the Unified Automated Information System, also known as the “Unified Register.” MRFC staff look for “forbidden” content and compile links based on court decisions and queries from the Prosecutor General’s Office. The product of this research is a registration card noting a violation in the Unified Register, which then becomes the basis for prosecution. The author of the offending Web page or post then receives a letter demanding the removal of illegal content. In the event the cease-and-desist letter is not satisfied, the censor blocks the page or the entire website.
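
As a rough illustration of the workflow just described, the sketch below models a registration card moving from detection to takedown demand to blocking. It is a minimal, hypothetical rendering of the process as reported; all class, field, and function names are invented and do not come from the leaked documents.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical model of the Unified Register workflow described above.
# Names and structure are invented for illustration; this is not MRFC's code.

@dataclass
class RegistrationCard:
    url: str                      # offending page or post
    basis: str                    # court decision or Prosecutor General's query
    created: datetime = field(default_factory=datetime.now)
    takedown_requested: bool = False
    blocked: bool = False

def enforce(card: RegistrationCard, owner_removed_content: bool) -> RegistrationCard:
    """Demand removal of the content; if the demand is not satisfied, mark the page for blocking."""
    card.takedown_requested = True        # letter sent to the page's author
    if not owner_removed_content:
        card.blocked = True               # censor blocks the page (or the whole site)
    return card
```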

The Russian state censors content related to drugs, suicide, gambling, and child pornography. Since the February 2022 invasion of Ukraine, MRFC has also been monitoring the Web for “fakes about the special military operation” (in essence, any content that contradicts the official narrative about the war and Russia’s benign purposes in Ukraine). Judging by an internal Roskomnadzor report, 150,000 social media publications were removed in the first 10 months of the full-scale war. Among the illegal publications discovered by the agency were 169,000 “fakes” about the invasion and 40,000 calls to protest (also illegal under current Russian law).

How Russian legislation serves political repression

‘The fog of war spreads over daily life’ Human rights lawyer Pavel Chikov explains how arbitrary and cruel law enforcement is reducing Russian society to paranoia and paralysis

Ask ASKOV

In 2019, the state censorship authority Roskomnadzor tasked one of its subsidiaries with developing a secure messenger for operational communication among law enforcement structures (the MVD, FSB, Prosecutor General’s Office, National Guard, and the like). The resulting app, ASKOV, is similar to publicly available messenger apps, except that all ASKOV group chats are organized around the participants’ shared tasks. (The app also lets users exchange direct messages.)

What, then, are some of the chat topics in ASKOV? In November 2022, titles included the following:

  • Protest Moods
  • Destabilization Operative
  • Terrorism
  • Protest Operative
  • Inter-Ethnic Relations
  • Foreign Interference
  • Extremism

In the first chat group, for example, you can see detailed daily reports on “protest moods on social media,” compiled by monitoring 546 “national” and 3,000 “regional” accounts. These reports address key daily news items, including any “hot button” publications in so-called “oppositional” sources. They also include a special rubric devoted to content telling people how to send money “to oppositional structures and terrorist or extremist organizations.” The reports conclude with a section on planned protest meetings, with links to information about each protest. (The links are also forwarded to the Prosecutor General’s Office.)

“Destabilization Operative” is a chat about “anti-Russian public opinion leaders.” Information collected in the chat is also forwarded to the Prosecutor General. “Extremism” is a chat with brief instructions to take note of particular public statements, like the Anti-Corruption Foundation’s call to sabotage mobilization by all means, “including setting draft boards on fire.”

As of July 2022, ASKOV had close to 1,000 registered users, employed in places like Roskomnadzor and MRFC, the Interior Ministry (“MVD”), the National Guard, the Prosecutor General’s Office, regional governments, and the President’s Office. Curiously, just a single anonymous user, registered as “Employee Employee,” represents the Federal Security Service (FSB). ASKOV users also include Moscow Deputy Mayor Natalia Sergunina, Moscow municipal I.T. department head Eduard Lysenko, and Ramzan Kadyrov associate Akhmed Dudayev, Chechnya’s print publications minister.

How a teen got sentenced to five years for ‘terrorism’

Minecraft ‘terrorism’ Russian court sentences 16-year-old to five years in prison over plot to blow up virtual FSB building in video game

‘Special Task Yandex U’

The trove of documents obtained by Cyberpartisans contains several Excel spreadsheets marked “Special Task Yandex U,” as well as a message exchange about this particular document. One of the MRFC employees sent a message to clarify the nature of this “special task” and learned that the spreadsheets track keywords to be excluded from Yandex search results (Yandex, loosely speaking, being “Russia’s Google”). The same employee then sought verification: were they talking about searches like “Russia bombs civilian housing,” “Russia kills civilians,” “Russian military atrocities in Bucha,” “raw recruits in Ukraine,” “Russia’s massive losses in Ukraine”?

A second reply confirmed that the employee had understood correctly.

According to Roskomnadzor data, the Russian search engines Yandex and Mail.ru suppressed more than 11,800 different publications referring to Russia’s losses in Ukraine, the mass surrender of Russian troops, Russia’s attacks on Ukraine’s civilian infrastructure, and the murder of Ukrainian civilians by the invading military.
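
Mechanically, excluding results by keyword is straightforward: a search backend can drop any result whose text matches an entry on a blocklist before it is returned to the user. The snippet below is a generic sketch of that idea, assuming a simple substring match; it is not Yandex’s implementation, and the blocklist merely reuses the example queries quoted above.

```python
# Generic sketch of keyword-based suppression of search results.
# Not Yandex's code; the mechanism is inferred from the leaked "Special Task Yandex U" spreadsheets.

BLOCKLIST = [
    "russia bombs civilian housing",
    "russian military atrocities in bucha",
    "russia's massive losses in ukraine",
]

def suppress(results: list[dict]) -> list[dict]:
    """Drop any search result whose title or snippet contains a blocklisted phrase."""
    def is_allowed(result: dict) -> bool:
        text = f"{result.get('title', '')} {result.get('snippet', '')}".lower()
        return not any(phrase in text for phrase in BLOCKLIST)
    return [r for r in results if is_allowed(r)]
```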

‘Fakes,’ ‘foreign agents,’ and ‘Putin as a crab’

Certain MRFC employees monitor Russia’s main social media sites, each of them spotting 100–300 “fake news” items per day and logging the incidents in a special Excel spreadsheet. Once again, what’s classified as “fake news” is information about civilian killings, attacks on civilian infrastructure, and artillery strikes against residential areas, as well as Russian losses in Ukraine, looting by the Russian military, Russian POWs in Ukraine, mobilization, and political instability within Russia itself. Some of these findings are then forwarded to the Prosecutor General’s Office through the ASKOV interface.

A special MRFC department monitors the mass media. Since October 2020, it has nominated potential “foreign agents” in its regular reports, which have informed the Justice Ministry’s official designations. Publications like Proekt, Kholod, The Bell, Open Media, Redaktsia, and many others all went through this process of being reported by MRFC and then declared “foreign agents.” After Meduza got its “letter of recommendation” from MRFC, only two days passed before it joined the other “foreign agents” on the Justice Ministry’s registry. The agency issues similar reports on individuals, especially media figures. All in all, the center compiled 804 recommendations for recognizing a person or an entity as a “foreign agent” (the full, searchable list is available here).

Officials also monitor negative information about the president, particularly mentions of Putin’s “critical health condition” or other adverse remarks about his health. This category is second to “fake news” in the number of blocked and removed publications. Putin-related content is aggregated automatically, using the Russian-designed Brand Analytics system.

MRFC classifies Putin-related reports by source and by the source’s “national,” “regional,” or “foreign” status. In September 2022, it scaled back its reporting frequency from daily to weekly, after most sources that publish anti-Putin content had been blocked inside Russia.

In 2021, Roskomnadzor said it was developing Oculus (not to be confused with the California-based virtual reality company), a new system for identifying illegal photos and videos. Oculus was expected to analyze 200,000 images daily.

The MRFC data leak contains a classification system used to train Oculus to discover undesirable depictions of Putin. This “classification of graphic entities” distinguishes two main types of content: “offensive depictions of the president” and “comparisons of the president with negative characters.” These rubrics are elaborated with tags like “Putin depicted as a crab,” “the president depicted as a moth,” “the president in a dumpster,” “the president depicted as Hitler,” and “the president depicted as a vampire.”

Oculus is also being trained to categorize content as “extremism,” “calls for disorderly conduct,” “LGBT propaganda,” and “suicide.” It can look for “calls for disorderly conduct” using photos of Alexey Navalny and his team members, white-blue-white alternative Russian flags, photoshopped images of the Kremlin on fire, and photos of various past protests in both Russia and Ukraine.
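
The leaked “classification of graphic entities” is, in effect, a label taxonomy of the kind used to train a multi-label image classifier. Below is a hypothetical rendering of that taxonomy as a training configuration: the label strings paraphrase the tags quoted above, but the data structure itself is an assumption.

```python
# Hypothetical rendering of the leaked label taxonomy as a training configuration.
# The label strings paraphrase tags quoted from the documents; the structure is an assumption.

OCULUS_LABELS = {
    "offensive depictions of the president": [
        "president depicted as a crab",
        "president depicted as a moth",
        "president in a dumpster",
    ],
    "comparisons of the president with negative characters": [
        "president depicted as Hitler",
        "president depicted as a vampire",
    ],
    "calls for disorderly conduct": [
        "Alexey Navalny or his team members",
        "white-blue-white alternative Russian flag",
        "photoshopped image of the Kremlin on fire",
        "photos of past protests in Russia and Ukraine",
    ],
}

# Flat list of target classes, as it might be fed to a multi-label classifier.
ALL_LABELS = [tag for tags in OCULUS_LABELS.values() for tag in tags]
```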

A 2022 internal presentation claims that, when launched, Oculus will be able to identify faces and determine their age “taking into account the presence of a beard or a mask.” But the app’s projected launch date is still unknown.

Turning the tables on facial recognition

‘You have no masks’ Meet the Belarusian developer working on a facial recognition algorithm for doxing riot police

Unleashing a ‘Wild Boar’

In the summer of 2022, MRFC began looking for a contractor to develop an artificial intelligence system called “Vepr” (“Wild Boar”), a tool for detecting online “informational tension points.” The phrase refers to “instances of spreading socially significant information in the guise of true facts, while creating a danger to public safety and security.”

What supposedly poses a “danger to public safety and security” are publications that have a “negative informational and psychological impact,” “destabilize the socio-political situation,” “manipulate public opinion,” “discredit traditional values,” or simply “misinform” (however that might be interpreted). The A.I. app, in turn, is supposed to detect the following:

  • Protest moods and instances of social destabilization connected with territorial integrity, inter-ethnic conflicts, and migration policy
  • Negativity and “fake news” about the head of state, the state’s leading figures, and the state itself
  • Manipulation or polarization of public opinion (on subjects like vaccinations, the non-systemic opposition, or sanctions)
  • Profanation or discrediting of traditional values

Vepr A.I. is supposed to launch in July 2023. Journalists have noted a large functional overlap between the two A.I. products, Oculus and Vepr. Both applications were initially prototyped by the Moscow Institute of Physics and Technology (“MFTI”) and are likely to be part of a larger project called “Clean Internet,” which has been in the works since the summer of 2020. According to internal documents, starting in May 2023, “Clean Internet” will surveil not only online texts but also multimedia content, detecting the following categories of violations:

  • Unsanctioned public events
  • Involvement of minors in politics
  • Insulting the president
  • Accusing the president of extremism
  • “Fakes” about the president and the state
  • LGBT “propaganda”

Still, data cannot be mined at this scale without cooperation from search engines. Judging by the leaked documents, MRFC relies on Yandex APIs to support its surveillance efforts. The company itself has raised the federal regulator’s daily search-query cap to 300,000 and granted it access to Toloka, its A.I. training service.
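
For scale: a cap of 300,000 queries per day works out to roughly 3.5 queries per second if spread evenly over 24 hours. The sketch below shows a minimal client that respects such a cap; the callable it wraps and all names are invented for illustration, since the leaked documents do not describe the client side.

```python
import time

# Minimal sketch of a client that spreads a daily query quota evenly over 24 hours.
# The 300,000/day cap is reported in the leak; everything else here is illustrative.

DAILY_CAP = 300_000
MIN_INTERVAL = 86_400 / DAILY_CAP     # ~0.29 s between queries, i.e. ~3.5 queries per second

class ThrottledSearchClient:
    def __init__(self, send_query):
        self.send_query = send_query  # hypothetical callable that performs one search query
        self.sent_today = 0
        self.last_sent = 0.0

    def query(self, text: str):
        if self.sent_today >= DAILY_CAP:
            raise RuntimeError("daily query cap exhausted")
        wait = MIN_INTERVAL - (time.monotonic() - self.last_sent)
        if wait > 0:
            time.sleep(wait)
        self.last_sent = time.monotonic()
        self.sent_today += 1
        return self.send_query(text)
```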

The capstone of Russia’s “Clean Internet” project is to be a bot farm whose employees will infiltrate closed social-media groups. Instead of posting comments on social media as they did before, these “bots” will gather information, contributing to total online surveillance.

How the pandemic helped advance the surveillance state

Surveillance pandemic Russia carried out surveillance on an unprecedented scale during the coronavirus lockdown

Translated by Anna Razumnaya
