Stanford University proposal on ‘foundation models’ of artificial intelligence raises controversy

Last month, Stanford researchers announced that a new era of artificial intelligence had arrived, one built atop enormous neural networks and oceans of data. A new research center at Stanford University, they said, would build and study these “foundation models” of AI.

Critics of the idea emerged quickly, including at the workshop organized to celebrate the launch of the new center. Some object to the limited capabilities and sometimes erratic behavior of these models; others warn against focusing too heavily on a single way of making machines smarter.

“I think the term ‘foundation’ is a fatal mistake,” Jitendra Malik, a professor at UC Berkeley who studies artificial intelligence, told workshop attendees in a video discussion.

Malik acknowledged that one type of model identified by the Stanford researchers — large language models that can answer questions or generate text from a prompt — has great practical use. But he said evolutionary biology suggests that language builds on other aspects of intelligence, such as interaction with the physical world.

“These models are really castles in the air,” Malik said. “The language we have in these models is not grounded; there is this fakeness, there is no real understanding.” He declined an interview request.

A research paper co-authored by dozens of Stanford researchers describes an “emerging paradigm for building artificial intelligence systems” that it calls “foundation models.” Ever-larger AI models have produced some impressive advances in recent years, in areas such as cognition and robotics as well as language.

Large language models are also central to the businesses of big tech companies like Google and Facebook, which use them in areas such as search, advertising, and content moderation. Building and training large language models can require millions of dollars’ worth of cloud computing power; so far, that has limited their development and use to a handful of tech companies with deep resources.

But large models are also problematic. Language models inherit bias and offensive text from the data they are trained on, and they have no firm understanding of common sense or of what is right or wrong. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that ever-larger models will keep producing advances in machine intelligence.

The Stanford proposal has split the research community. Describing them as ‘foundation models’ completely spoils the discourse, says Subbarao Kambhampati, a professor at Arizona State University. Kambhampati says there is no clear path from these models to more general forms of AI.

Thomas Dietterich, a professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, says he has “huge respect” for the researchers behind the new Stanford center, and believes they are genuinely concerned about the problems these models raise.

But Dietterich wonders whether the idea of foundation models isn’t partly about securing funding for the resources needed to build them. “I was surprised that they gave these models such a grand name and created a center,” he says. “That smacks of flag planting, which can have many benefits on the fundraising side.”

Stanford has also proposed creating a National Research Cloud to make industry-scale computing resources available to academics working on AI research projects.

Emily M. Bender, a professor in the University of Washington’s Department of Linguistics, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to artificial intelligence favored by industry.

Bender says it is particularly important to study the risks posed by large AI models. She co-authored a paper, published in March, that drew attention to the problems of large language models and contributed to the departure of two Google researchers. But she says the scrutiny needs to come from multiple disciplines.

“There are all these other adjacent fields, really important ones, that are starved of funding,” she says. “Before we put money into the cloud, I would like to see money going into other disciplines.”
