AI Standpoint

Fear of the Stochastic Parrot


The open letter calling for a halt to the training of large AI models appeared publicly on the night of March 29, but similar proposals have been made more than once by various people, especially in the months since ChatGPT's release in November 2022. The idea is not new; it has been voiced regularly, and Gary Marcus, a well-known critic of large language models (LLMs), has been repeating «Carthage must be destroyed» almost daily. And, of course, he was among the first to sign the letter.

There are some big names among the signatories, though not many, along with serious experts in their fields. But the call for a moratorium, like its rationale, is unlikely to convince the people the letter is addressed to: the heads of AI labs.

Why is the idea dubious? Because stopping work does not solve any of the problems. Yes, everyone agrees that LLMs are opaque, unreliable, prone to hallucination, and that we have no clear understanding of how they reach their conclusions. Nor do we know what properties will emerge in models more powerful than GPT-4. But we cannot find that out from general considerations or from six months (or more) of experimenting with smaller models. This is an essential feature of LLMs: their behavior depends nontrivially on the size of the model, on the quality and quantity of the training data, and on the training methods.

Only working with AI models directly will allow us to understand at least something about them, to see the risks, and to test the possibilities. As philosopher Artem Besedin aptly noted, «the authors of this letter suggest that we learn to swim without entering the water.» Alas, it does not work that way.

Next, how will China, for example, respond to the moratorium? Judging by the news coming out of the country, the CCP is betting heavily on the development of artificial intelligence; China's official goal is to become the world leader in the field by 2030. Right now it is playing catch-up, and a stall in its U.S. competitors' projects would be a great gift to Chinese researchers. They are unlikely to stop of their own accord.

OpenAI, the company that created GPT-4, is also unlikely to take a break. The next version is already in training, with GPT-4.5 and then GPT-5 announced for the fall. Given that Microsoft is investing $10 billion in these models, there will be no stopping. Nor did Yann LeCun, vice president of Meta, endorse the letter. Perhaps the authors of the letter are simply too late: the genie is already out of the bottle.

In addition, the threats of AI are outlined too vaguely to be feared in earnest. Against the backdrop of the COVID pandemic that has just thundered past and the tragedies now unfolding live, the risk of a flood of fakes and the coming changes in the labor market are not alarming enough to «drop everything.» And there are no realistic scenarios for how LLMs would enslave humanity. While some researchers find «sparks of general intelligence» in GPT-4, others call language models «stochastic parrots» 🦜, denying them even rudimentary thought. Should we really be afraid of a parrot, even a talking one?

The question of LLMs' intelligence is far from settled, and this is exactly where experiments are needed. But everyone agrees that they are a powerful tool that can be very useful. They expand our capabilities, and giving them up has a price too. These possibilities are impressive, and in the appearance of such manifestos we see a typical case of «future shock,» in the terms of sociologist Alvin Toffler, who wrote about the psychological effects of rapid technological change back in the 1960s.

Models are improving too quickly, getting «smarter» before our eyes. The limit of their growth in intelligence is unclear, and this frightens people with rich imaginations. But moratoriums and bans will only give our psyche a respite; they will not help eliminate future risks. In biology, for example, some hypotheses can be tested on animals or cell cultures. In AI there is no comparable substitute for large models; their behavior is unique.

In a more general sense, we are creating a supercomplex object of a new type, one for which we have no ready-made theories. An understanding of its properties can only be gained by interacting with it. That is why the call to pause and discuss the key problems of AI without doing research sounds utopian. None of this, of course, removes the need to take seriously the reliability, transparency, and safety of LLMs, their training and regulation. But a six-month break in experiments will hardly bring us closer to the answers.

Text: DENIS TULINOV, author of the Telegram channel «Vagus nerve».

The text illustration was generated by Midjourney.

30.03.2023