Doves of the “Russkiy Mir”. How Potanin’s Money and the Institute of Putin’s Daughter Recruit Neuroscience for Military Service

The Russian startup Neiry Group implants chips into pigeons' brains to turn them into remotely controlled biodrones. According to the company's press releases, such birds can be operated just like conventional drones. The company's founder, Alexander Panov, an advertiser and self-proclaimed "teal" entrepreneur, has managed to attract "funding on a scale Russian neuroscience has never seen," along with prominent scientists from the MSU Institute of Artificial Intelligence headed by Katerina Tikhonova, Vladimir Putin's younger daughter. In his blog, Panov laments that "the style of conducting the 'special military operation' in Ukraine is too gentle," and at conferences he tells officials that Neiry's goal is world domination through neurotechnologies. Yet the hype surrounding such neuroexperiments (both Panov's with rats and pigeons in Russia and the efforts of R&D teams in other countries) is still described in the academic literature as "coercive optimism." T-invariant investigated who benefits from the hype around neurotechnologies and what real results the field can actually claim today.

Top news on scientists’ work and experiences during the war, along with videos and infographics — subscribe to the T-invariant Telegram channel to stay updated.

How pigeon biodrones work

Pigeon-drones were first tested on November 25, 2025: birds with implanted neural interfaces completed a test flight from the lab and back. According to press releases, Neiry has already produced dozens of such birds, allegedly for potential use in monitoring industrial and environmental facilities or in search-and-rescue operations. In theory, biodrones could serve many other purposes, including military ones such as covert surveillance of an adversary, but company representatives emphasize the peaceful nature of the developments. So far the birds are being tested over short distances, but in the future the developers plan to send them on flights spanning tens of kilometers.

Externally, the PJN-1 pigeon-biodrone is fairly easy to distinguish from an ordinary bird: a neural interface wire protrudes from its head, a backpack with control electronics and solar panels sits on its back, and a video camera is mounted on its breast.

This is what a pigeon-biodrone looks like. Source: Neiry website

Electrodes developed in-house by Neiry are implanted in the pigeon's brain; they are connected to a stimulator in the bird's backpack, which in turn is linked to a controller, also in the backpack, that stores the flight task. The stimulator sends pulses that influence the bird's desires, for example, to turn left or right. The system's positioning relies on GPS and other methods. The drone-pigeon's cameras operate like those in public spaces, with face blurring and removal of personal data.
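Neiry has published no technical documentation, so any reconstruction is speculative. Based purely on the description above, the core of such a flight controller would reduce to a bearing-correction loop: compare the GPS-derived heading with the bearing to the next waypoint and pulse the "left" or "right" electrode when the error is too large. Everything below (the function name, the channel labels, the tolerance) is a hypothetical minimal sketch, not the company's actual design.

```python
def choose_command(heading_deg: float, target_deg: float, tolerance_deg: float = 15.0) -> str:
    """Decide which stimulation channel (if any) to pulse.

    heading_deg -- current GPS-derived flight bearing, in degrees 0..360
    target_deg  -- bearing toward the next waypoint in the flight task
    Returns "LEFT", "RIGHT", or "NONE" (no pulse needed).
    """
    # Signed smallest angular difference, normalized into (-180, 180]
    error = (target_deg - heading_deg + 180.0) % 360.0 - 180.0
    if abs(error) <= tolerance_deg:
        return "NONE"  # on course: no stimulation
    return "RIGHT" if error > 0 else "LEFT"


# The backpack controller would call this on each GPS update:
assert choose_command(0, 90) == "RIGHT"   # waypoint to the right
assert choose_command(0, 270) == "LEFT"   # shorter turn is to the left
assert choose_command(10, 5) == "NONE"    # within tolerance: fly on
```

Even in this charitable sketch, the "biodrone" is steered by the same closed loop as a mechanical drone; only the actuator is a living bird.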

Neiry claims that biodrones require no training — unlike performing animals — and that their advantages over conventional drones lie in longer operating time, greater flight range, and lower accident risk. The price of such a bird is comparable to that of similar-class devices.

No scientific papers have yet been published that describe the operating principle in detail; the test video shows birds turning right or left on command, but the flight looks nothing like natural movement.

The company’s website constantly emphasizes the ethical nature and safety of the development for test animals. For example, it states that operations on pigeons are performed with state-of-the-art equipment to maximize survival rates and that implantation of chips does not shorten their lifespan. What actually happens is impossible to verify, since there are no publicly available data on the number of pigeons used in experiments; it is also impossible to assess how long they can realistically live with a chip in their brains. Interventions in the brain and the presence of a foreign body are associated with various complications — ranging from inflammation to infection — so the birds’ lives are most likely shortened.

In a Bloomberg article, bioethicist Nita Farahany of Duke University calls the practice of creating pigeon biodrones “repugnant,” arguing that people should not treat living beings as commodities by trying to control them. However, in the same piece Neiry founder Alexander Panov compares such curtailment of animals’ free will to what happens during horseback riding or milking cows, adding that the company’s in-house ethicists see no problem with this use of birds.

What we know about the Neiry startup

The Neiry group of companies was founded in 2018 by Alexander Panov. Before that, the entrepreneur had launched the full-service marketing agency KB-12, which proved successful. According to Inc., it operates through several legal entities, the two main ones being OOO "Kapibara" and OOO "12". Combined revenue reaches hundreds of millions of rubles.

Photo: Telegram channel “Итак, это Панов” (Well, it’s Panov)

Neiry currently works on various neurotechnologies involving both humans and animals. For example, the company sells headsets that track a person’s psychological state via brain electrical activity and special devices for vagus nerve stimulation intended to improve sleep and reduce anxiety. Besides pigeons, Neiry experiments with rats and cows — the former were used to develop brain-chip animal control, while the latter received implants to increase milk yield. The portfolio now includes about 50 different products. Companies linked to Panov or his legal entities also engage in more mundane activities, such as medical equipment manufacturing.

According to Forbes, the combined revenue of all Neiry-related companies in 2024 reached 481 million rubles ($6.3m). Profit figures are unknown. The joint-stock company "Neurorevolyutsiya", whose details appear on the Neiry website, is the founder of nearly ten legal entities. One of them, OOO "Neirofiziologiya", is a co-founder of OOO "Neiry", whose second co-founder is the National Technology Initiative fund linked to the Russian government. The general director of JSC "Neurorevolyutsiya" and several related entities (apart from the fund) is Panov himself.

The joint-stock company itself is loss-making: revenue in 2024 was just over 20 million rubles ($260k) with expenses of 133 million ($1.7m). OOO "Neirofiziologiya" also shows zero revenue with expenses of nearly 17 million ($220k). Other companies post positive balances; for example, OOO "Neiry" ended 2024 with revenue of 159 million rubles ($2.1m) and expenses of 153.1 million ($2m).

Over its existence Neiry has raised about one billion rubles ($13m) in investment. One major investor is the already-mentioned NTI fund, which invested roughly 360 million rubles ($4.7m) in 2021 and became a co-founder of at least one entity in the group.

The National Technology Initiative is a non-profit organization established by a decree of Russian Prime Minister Dmitry Medvedev on instructions from Putin. Its creation involved the Agency for Strategic Initiatives, another non-profit set up by the Russian government, whose supervisory board is chaired by Putin himself. Another notable Neiry investor is the Voskhod fund. It was founded in 2021 by oligarch Vladimir Potanin's Interros; in 2022 both the founder and the organization fell under U.S. sanctions. Voskhod was subsequently sold to its top managers: 80% is owned by CEO Ruslan Sarkisov. In 2023, however, he and Voskhod were also sanctioned by the United States. More than 300 million rubles ($3.9m) came from private investors.

Katerina Tikhonova. Photo: The Insider

“Neiry handles funding on a scale Russian neuroscience has never seen — that cannot be denied. Neither projects at HSE nor at MSU have budgets like that. Nor does Katerina Tikhonova’s institute, contrary to popular belief. That is one reason such serious scientists are involved in the project,” a neurobiologist familiar with the situation told T-invariant.

The startup is also connected through joint projects to the MSU Institute of Artificial Intelligence, headed by Putin’s daughter Katerina Tikhonova. The institute operates four laboratories, one of which — “Development of Invasive Neural Interfaces” — focuses mainly on dual-use technologies (see T-invariant’s detailed analysis). The laboratory is led by Candidate of Biological Sciences Vasily Popkov. He participated in Neiry’s rat experiments.

Vasily Popkov. Photo: Forbes

In those experiments, rodents received neural interfaces connected to a neural network. When an AI is asked a question, it sends signals to electrodes in the animal’s brain, stimulating one region for “yes” and another for “no.” The rat then performs the required action, “answering” the question. The experiment was named “Pythia.” According to Panov’s press release, its results represent “one stage of the ‘AIntuition’ project. For example, a user will be able to sense the truth or falsity of any statement or intuitively know the correct answer on a test. All of this will increase human performance… with our project we are gently moving humanity to the next step, where AI becomes our reliable and trustworthy symbiotic partner.”
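No code or protocol for "Pythia" has been published, but the description above amounts to routing a model's binary answer to one of two stimulation sites. A minimal sketch of that routing follows; every name in it (the function, the site labels, the stub model) is invented for illustration:

```python
# Hypothetical sketch: map a yes/no answer from an AI model to a stimulation site.
REGION_YES = "electrode_site_A"  # stimulating here is trained to mean "yes"
REGION_NO = "electrode_site_B"   # stimulating here is trained to mean "no"

def route_answer(answer: bool) -> str:
    """Pick the electrode site whose stimulation the rat will act out."""
    return REGION_YES if answer else REGION_NO

def ask(question: str, model) -> str:
    """Query the model, then return the site to stimulate."""
    return route_answer(model(question))

# With a stub standing in for a real language model:
site = ask("Is water wet?", model=lambda q: True)
assert site == "electrode_site_A"
```

Read this way, the "merging of AI and animal" is a one-bit signaling channel: all the intelligence sits in the external model, and the animal merely reacts to which of two regions is stimulated.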

At a Neiry conference, Vasily Popkov also called the “Pythia” rat merely a testbed for refining the technology. The strategic goal is a direct link between AI and human intelligence, supposedly for seamless access to technologies and the internet.

The “Pythia” rat. Photo: Neiry website

In media interviews Panov names the company’s long-term goal as the creation of Homo superior — the next species after Homo sapiens. In his Telegram channel he muses about selling people to the state and “reprogramming” them to fit the “cultural code”:

“One of my next projects that I have been thinking about for a long time involves selling people to the state. I will share details after a pilot, but right away I warn you that everything there will be eco-friendly, socially oriented, Orthodox, and humane — no clones! The basic thesis of this startup is simple: due to declining birth rates any state needs people. Ours especially. Different ones will do, but those who carry the basic culture will be valued more… In turn, there are Ukrainians who are, of course, Russians in terms of cultural code, just intoxicated by delusions (like in Far Cry 5). And even if only 5–7 million of the useful ones remain there, they can be reprogrammed, just like most other people, and it is much cheaper than creating and raising a person of our own culture.”

Such plans look alarming — especially because prominent scientists work with Neiry. Besides Vasily Popkov, the startup’s scientific advisor is neurophysiologist Mikhail Lebedev, Doctor of Biological Sciences, professor at Moscow State University, and a specialist in brain–computer interfaces; he has an h-index of 56. He is the author of more than one hundred scientific papers, including publications in Nature, Nature Reviews Neuroscience, Science Translational Medicine, and others. An article on the links between neural activity and movement parameters that he co-authored ranked first in the 2024 top-100 articles of Scientific Reports in the “Neurobiology” section.

Mikhail Lebedev. Photo: Gazeta.ru

Yet the development of neurotechnologies has not even reached the level where practically useful controlled animals can be created. And controlling humans via chips is even further away — if it is possible at all: many scientists, in contrast to tech-startup founders, are far from convinced of its success.

Is it effective to use pigeons instead of drones, and what is happening with human neural-interface technologies

Attempts to control animals date back a long way. The most common method is training. For example, pigeons with cameras were once used to conduct secret video recording, and trained dolphins for underwater missions. Chips have also been tried. During the Cold War the CIA implanted electrodes in the brains of six dogs in an attempt to remotely control their behavior. None of these developments were ultimately used in practice — the experiments either failed or more effective methods emerged for the same tasks.

The same applies to pigeon-drones — Chinese scientists have been the most active in this field, but despite published papers with positive results, practical application has not materialized. Experiments stimulating pigeons’ brains to control movement have been conducted for several years. Researchers achieved successful takeoff, hovering, left/right turns, and forward flight on command before Neiry did. Yet such biodrones are still not used for real tasks. There are many reasons: the pigeon brain is not yet well enough understood to issue complex commands, precise command execution and reliable communication are often impossible, and controlling pigeons outdoors — especially in flocks — is difficult.

Given all this, mechanical drones may be preferable to pigeons: they are easier to control, can carry heavier payloads, do not get sick, and do not need to eat or defecate. Developers also avoid ethical constraints on animal experimentation.

At the same time, the very idea of turning birds into drones is probably not pointless; it is just easier to make such birds artificial. Reports indicate that the Chinese government has already tried using bird-like drones for aerial surveillance in restive regions. Developing biomimetic robots that imitate bird flight can also yield solutions that make future UAVs and even airplanes more efficient. For instance, researchers in David Lentink's lab created PigeonBot II, with morphing wings and a tail, to better understand how birds maneuver in the air.

PigeonBot II. Photo: Eric Chang, Lentink Lab

Rat-control experiments are also not new. Since 2002 American scientists have developed a system for remotely controlling rodent movement consisting of a transmitter operated by a human, a backpack microprocessor-receiver, and brain-implanted electrodes. Similar experiments continued later. Such rodents were considered potentially useful in disaster zones or for military purposes, but they were never deployed in practice.

Sergey Shishkin, leading researcher at the MEG Center and a neural-interface specialist, explained at the NeuroAI 2025 conference (lecture) that only about 5% of publications on AI–brain integration deserve to be taken seriously. Even when the work is sound science, the results are presented as far more impressive than they really are. This stems both from inflated expectations and from the high degree of freedom in data handling and the opacity of algorithms, which open the door to intentional or unintentional bias in the results.

For example, what Neiry presents as animal–AI merging is in reality simple brain stimulation — a technique with a long history. Neurophysiologist José Delgado implanted electrodes in a bull’s brain in 1963 and then made it stop with a remote control. In essence, animal-control technologies via neural interfaces have not advanced much beyond that kind of mechanical linkage.

José Delgado. Photo: persons-info.com

Relatively "simple" technologies of this kind are already used in medicine to treat certain conditions. In deep brain stimulation (DBS), electrodes are implanted in specific brain regions to modulate their electrical activity. They work somewhat like cardiac pacemakers, blocking unwanted impulses and, for example, helping reduce tremor in Parkinson's patients. Such operations are performed in many countries, including Russia.

Output neural interfaces also exist — they do not send signals to the brain but instead read its electrical activity. Here too lies the problem: what gets marketed as “mind reading” is typically just machine-learning-based inference — guessing what a person saw, heard, recalled, or dreamed. Studies show that most scientific papers on such output interfaces use phrases like “mind reading” or “brain reading.” This fosters the illusion that these technologies have already achieved capabilities far beyond what’s currently possible — or even conceivable in principle. A similar effect is achieved with formulations such as “uploading information into the brain,” “controlling the mind,” and the like.

Companies such as Elon Musk’s Neuralink and its closest competitor Synchron are developing such neural interfaces primarily to treat neurological disorders or restore communication — for example, enabling speech recognition for people who cannot speak. They are not used to “control” a person; they attempt to decode what brain activity means and apply it — for controlling a prosthesis or computer, or assessing mental state.

Up-to-date videos on science during wartime, interviews, podcasts, and streams with prominent scientists — subscribe to the T-invariant YouTube channel!

Neurobiologist Douglas Fields writes that modern brain–computer interfaces work by analyzing data in roughly the same way Amazon tries to predict the next book you might want to read. A brain implant or non-invasive headset captures streams of brain electrical activity, and a connected computer learns to recognize patterns — for example, those that occur when a person wants to move a hand or shift a cursor.

Non-invasive devices are not sensitive enough because they pick up activity from large brain regions. For instance, if a person silently pronounces a word, a neuro-headset will register activity across the entire speech area.

To obtain more precise and complete information from the brain, connecting to small regions or even individual neurons, interfaces have become invasive. Neuralink, for example, connects to the brain via 1,024 electrodes implanted directly into its tissue. But these solutions also face numerous problems, so their use remains largely confined to research purposes.


The transformation of neural activity into practically usable data rests on distributed, dynamic, context-dependent brain networks that resist reduction to simple linear models; in short, even basic actions require cascading interactions across many brain areas. Neural interfaces must also be personalized, accounting for individual variability in neural signals and psychological states. They must reliably filter out noise, the spontaneous neural activity unrelated to the user's intent: subconscious processes, sensory distractions, emotional experiences. And users must learn to operate any such interface; it only seems that thinking about an action is enough for the computer to perform it.

Neural activity in the brain cannot, in principle, be decoded like ordinary computer code — machine learning is used to recognize electrical patterns that correlate with the desired action. The brain itself plays a huge role: over time, through trial and error, it learns to generate the electrical impulse the interface “understands.” Scientists still do not fully know how this happens; the learning occurs subconsciously.
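The pattern-recognition step just described can be illustrated with a toy decoder. In the sketch below, two intended actions each produce noisy synthetic "feature vectors," and a nearest-centroid classifier, a deliberately simple stand-in for the machine-learning models real interfaces use, learns to tell them apart. All data here are synthetic; no real interface works on four-dimensional vectors this clean.

```python
import random

random.seed(42)

# Synthetic "neural feature vectors": each intent has a characteristic pattern.
PATTERNS = {"move_left": [2.0, 0.0, 1.0, 0.0], "move_right": [0.0, 2.0, 0.0, 1.0]}

def record_trial(intent, noise=0.5):
    """Simulate one noisy recording of the pattern for a given intent."""
    return [x + random.gauss(0.0, noise) for x in PATTERNS[intent]]

def fit_centroids(trials):
    """Average the training trials per intent -- this is the whole 'model'."""
    return {intent: [sum(col) / len(col) for col in zip(*recs)]
            for intent, recs in trials.items()}

def decode(sample, centroids):
    """Classify a new recording as the intent with the nearest centroid."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda intent: dist(sample, centroids[intent]))

# Train on 20 trials per intent, then test on 50 fresh recordings.
train = {i: [record_trial(i) for _ in range(20)] for i in PATTERNS}
centroids = fit_centroids(train)
tests = [(i, record_trial(i)) for i in PATTERNS for _ in range(25)]
accuracy = sum(decode(s, centroids) == i for i, s in tests) / len(tests)
assert accuracy > 0.9  # well-separated synthetic patterns decode reliably
```

The decoder never "reads" intent; it only correlates recorded patterns with labels collected during training, which is why real interfaces require per-user calibration sessions and why the brain's own subconscious adaptation to the decoder matters so much.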

Moreover, even the most advanced invasive systems carry risks for the person implanted: the surgery itself is risky, the device can trigger immune responses, and it degrades over time.

European scientists use the term "coercive optimism" to describe how hype around new technologies and exaggerated claims about the advantages of neural interfaces push people to participate in trials without due regard for the risks. The same excitement about potential military applications of neural interfaces can lead to experiments on humans that are doomed from the start and have unknown consequences, especially when invasive technologies are being tested.

This is far from the only ethical dilemma. Another question that arises with any implantable device is what happens to users in case of technical failures or if the company stops supporting the product. A well-known case occurred in 2010 when a patient received a Neurovista implant to control epilepsy seizures. The device successfully reduced her seizures to zero by warning of an impending attack, allowing timely medication. An unexpected side effect was a change in self-perception — for example, improved self-control. Two years later, due to financial problems, the company halted trials and removed implants from all participants. The patient then faced not only the return of seizures but also disorientation and anxiety. It is unclear whether this resulted from brain changes caused by the implant or from subjective expectations, but the effect impacted her life.

Another concern for ethics experts is the transfer of confidential human data to third parties. A user might buy a gaming neuro-headset or receive an implant for medical reasons, while the company collects the data and sells it — for instance, patterns indicating mental disorders — to insurance companies or military enlistment offices. Another serious issue is the potential for “brain hacking” — unauthorized access to cognitive processes and personal data after breaching a neural interface. Because of the many possible future security and privacy problems, ethical consequences are already being discussed, even though widespread adoption of neural interfaces remains distant and uncertain.

All this does not mean neural interfaces are useless — they really can help paralyzed people type on computers and control prostheses. Achieving this, however, is more difficult than tech startups promise, and the devices’ capabilities are not limitless. They remain in clinical trials, and fewer than one hundred people worldwide have “tried” such implanted systems.

Governments worldwide undoubtedly have an interest in developing solutions for controlling people via neurotechnologies. In theory, neural interfaces that accurately read brain electrical activity could also work in reverse — transmitting commands and information. But since even peaceful developments are at an early stage, no successful programs exist for creating mind-control devices for military purposes. In 2018 the U.S. Defense Advanced Research Projects Agency (DARPA) launched a program to create a “safe, portable neural-interface system capable of simultaneously reading from and writing to multiple points in the brain.” The goal was non-surgical brain–computer interfaces to be available to service members by 2050. However, several years later the program was terminated.

Despite the massive funding directed toward such developments, it remains unknown whether science will ever be able to create a device that can control the human brain or instantly read thoughts. Neurobiologist Timothy Buschman explains in Quanta Magazine that controlling a human brain faces numerous obstacles: scientists can identify which brain region to target, but not which individual neuron is responsible for a given action. Moreover, the functions of the same neurons differ between individuals. Even attempts at mind-reading require intense cooperation and effort from the participant. To determine that a person is thinking about an apple, they have to be asked to think about it several times — and the subject can easily fool the system by thinking of different things each time.

Meanwhile, no one knows the ultimate limits of neurotechnologies. The decoding capabilities of brain electrical impulses may improve. There is some chance that in several decades even low-resolution electroencephalography data could yield additional information about an individual. That is why many neurophysiologists believe it is important to adopt ethical guidelines in advance to prevent misuse of any neurodata — before it is too late, should technology companies eventually go too far. For now, their splashy promises are most often a way to attract investment from states or private sponsors impressed by apparent technical progress. In scientific literature this is called “neurointerface propaganda” — in addition to exaggerating capabilities, it includes claims of widespread adoption, technological maturity, and large market size.

Such an approach can even lead to setbacks in neurotechnologies in areas where they could genuinely be useful. Some scientists believe that investors’ fascination with fantastical ideas — mind control, brain uploading — hinders the development of neural interfaces for treating diseases. The loudly discussed goals in the media may well be unattainable in principle, and exaggeration of neurotechnology capabilities could lead to excessively strict regulation and negative public perception. That, in turn, would slow even the research that could significantly improve quality of life — at minimum giving people with complete paralysis a fast channel to communicate with the world.

Support T-invariant’s work by subscribing to our Patreon and choosing a donation amount that suits you.
