
T-invariant releases the latest issue of the Chronicle of the Persecution of Scientists: No. 32, dated April 30, 2026. At the end of April, the Russian government agreed to revise and soften the draft law on artificial intelligence published by the Ministry of Digital Development in March. The bill was restrictive and prohibitive in nature, and its enforcement, especially in its original version, could have led to numerous prosecutions of developers, companies, and even users by the FSB. T-invariant wrote about this in its commentary. However, active criticism of the bill led the government to retreat.
The main concession to business was the abandonment of the requirement that “sovereign” and “national” models be trained exclusively on data assembled in Russia by Russian citizens. Businesses had warned that there was catastrophically little such data in the public domain, that compliance with this requirement would increase the cost of AI deployment by 20–40 percent, and that it would slow time to market by a factor of two. The authorities agreed. AI models may now be trained on any available data, regardless of their source. At the same time, the requirement that large AI services (with more than 500,000 users) register as organizers of information dissemination was removed. This would have obliged them to install SORM surveillance systems and provide the security services with direct access to user data.
Throughout these discussions, no one spoke publicly about what everyone in the industry already knows. The real technological scheme of the Russian AI market looks roughly like this, or is moving toward this configuration: Chinese Huawei chips, Chinese open models such as DeepSeek or Qwen, additional training on Russian-language material, and output filters blocking politically undesirable content. (On this Russian-Chinese setup, see T-invariant’s article “Sovereign Intelligence on Chinese Chips” and commentary by T-invariant editor Alexander Sergeev.) It is a workable and relatively inexpensive scheme: operating the open Qwen model costs tens of times less than Western equivalents. This is precisely the model that business was defending when it pushed to scrap restrictions on data sources. And in the end, it is this model to which the authorities adapted.
However, the law preserved the main control mechanism: a registry of “trusted models,” mandatory for the public sector and critical infrastructure. Certification is to be carried out by the FSB and FSTEC (the Federal Service for Technical and Export Control — the agency historically responsible for protecting state information systems from foreign technical intelligence). The idea is clear: before allowing a model near state secrets, the government wants to check whether there is anything suspicious inside it. The problem is that, in principle, this is extremely difficult to verify.
Current research points to several difficult problems that arise when auditing a model. Here are just a few of them. In so-called distillation — when one model (the student) is trained on the output data of another model (the teacher) — the student inherits the teacher’s hidden preferences even if the training data appear completely neutral. Anthropic researchers called this effect “a love of owls.” The somewhat odd name comes from a demonstration example: the teacher AI was trained so that, among all birds, it preferred owls. The student — an AI of the same architecture — was then trained on the teacher’s output data, and owls were never mentioned at all. Nevertheless, the student retained the teacher’s preference. The authors of the paper showed that this “love of owls” is transmitted through statistical patterns invisible to ordinary inspection. In other words, in some cases a Russian model trained on Qwen data will retain the teacher model’s preferences, but inspection cannot “catch” those preferences — such control methods simply do not exist.
In addition, a “trigger” can be deliberately embedded in the training data — a rare phrase or construction that causes the model to behave completely differently from its normal mode. Standard testing will not reveal such a trigger. To find it, one simply has to know the phrase. But even if the model itself is flawless, agentic systems — where AI has access to email, documents, and corporate databases — create an entirely different attack surface. Aim Security showed how, through an ordinary email in Outlook, Microsoft Copilot could be made to silently extract an organization’s confidential data and send it to an attacker — without a single click on the part of the victim and without leaving any trace in the logs. No model certification could have prevented that. The model is fine; the vulnerability lies in the system architecture.
DeepSeek is distributed under an open license: the model weights are available, it can be deployed on one’s own servers, and it can be studied. This creates an illusion of auditability. But DeepSeek’s training data are closed, and that is precisely where undesirable properties may be hidden, if there are any. FSTEC understands this no worse than anyone else. The result is a strange configuration: the agency receives formal authority to certify something that, technically, cannot be fully audited, while the Chinese origin of these models rules out awkward questions to the developers of the training data — China is a “friend forever; how could we possibly suspect it?” As a result, certification turns not into a protective mechanism but into a bureaucratic procedure that creates the appearance of control, while also proving very costly for business.
Pavel Krasheninnikov, chairman of the State Duma Committee on State Building and Legislation and head of the Presidential Council for the Codification of Civil Legislation, sharply rejected the bill: “The Civil Code is a foundation, and trying to build separate structures on top of it for every new technology, in ways that contradict it, is a path to legal chaos. If there are a few sensible ideas of a public-law nature, their place is in sector-specific legislation, not in an empty legislative shell.”
But there is also something worth noting that is rare in today’s Russia. More than 150 experts and representatives of the largest companies took part in the discussion of the bill. Business openly criticized specific provisions and succeeded in having them removed. The Presidential Council publicly rejected the government’s initiative. The law was genuinely debated, and real amendments were made. This does not mean that subsequent by-laws from the FSB and FSTEC will not restore what has now been abandoned. But the very fact of such bargaining is a rare sign that Russian technological policy still retains a certain pragmatism.
Almost certainly, this is connected to the military applications of AI: in drones, in processing intelligence data, and in logistics. This is no longer theory; it is a real war. To regulate civilian AI so heavily that its development comes to a halt would also mean cutting back military applications. This argument almost certainly came up in closed-door discussions. It may well have been the decisive one.
New Cards
April 10, 2026. Russia’s Justice Ministry added Stanford University to the list of foreign organizations whose activities are deemed undesirable in Russia. The Prosecutor General’s Office made the decision on March 26, Kommersant reports. Media reports did not provide the reasoning behind it.
April 17, 2026. Russia’s Justice Ministry added Russian-American chemist Alexander Kabanov — a corresponding member of the Russian Academy of Sciences and a professor at the University of North Carolina — to the foreign agents register. According to the ministry, Kabanov disseminated false information about decisions made by the Russian authorities and the policies they pursue, opposed the special military operation in Ukraine, and participated in the creation and dissemination of materials and messages by foreign agents and organizations deemed undesirable in the Russian Federation, RIA Novosti reports.
April 9, 2026. The Central District Court of Krasnoyarsk placed Academician Nikolai Testoyedov under house arrest — the former director general of JSC Reshetnev Information Satellite Systems, Russia’s principal satellite manufacturer, according to the joint press service of the courts of Krasnoyarsk Krai. Testoyedov was charged with large-scale fraud (Part 4 of Article 159 of the Criminal Code), Vedomosti reports.
Updates
April 22, 2026. Novaya Gazeta reported that “Azat Miftakhov is being transferred to serve his sentence in the Yamalo-Nenets Autonomous Okrug,” according to his support group. “This is connected with the terms of the sentence, under which Miftakhov must spend a year and a half of his four-year term in a strict-regime penal colony.” The paper notes that Kharp, where Miftakhov is being transferred, has two colonies: IK-3 “Polar Wolf,” a special-regime colony where Alexei Navalny was held before his death, and IK-18 “Polar Owl,” which has both strict- and general-regime sections.