Opinion: AI isn’t magic. It’s just knowledge sausage


Shown is Google’s Bard website in Glenside, Pa., Monday, March 27, 2023. The recently rolled-out bot dubbed Bard is the internet search giant’s answer to the ChatGPT tool that Microsoft has been melding into its Bing search engine and other software.

(Matt Rourke / Associated Press)

Can today's AI truly learn on its own? Not likely

Op-Ed

Theodore Kim

May 14, 2023

One of the boldest, most breathless claims being made about artificial intelligence tools is that they have “emergent properties”: impressive abilities gained by these programs that they were supposedly never trained to possess. “60 Minutes,” for example, reported credulously that a Google program taught itself to speak Bengali, while the New York Times misleadingly defined “emergent behavior” in AI as language models gaining “unexpected or unintended abilities” such as writing computer code.

This misappropriation of the term “emergent” by AI researchers and boosters deploys language from biology and physics to imply that these programs are uncovering new scientific principles adjacent to basic questions about consciousness: that AIs are showing signs of life. However, as computational linguistics expert Emily Bender has pointed out, we’ve been giving AI too much credit since at least the 1960s. A new study from Stanford researchers suggests that sparks of intelligence in supposedly emergent systems are in fact mirages.

If anything, these far-fetched claims look like a marketing maneuver, one at odds with the definition of emergence used in science for decades. The term captures one of the most thrilling phenomena in nature: complex, unpredictable behaviors emerging from simple natural laws. Far removed from this classic definition, current AIs display behaviors more appropriately characterized as “knowledge sausage”: complex, vaguely acceptable computer outputs that predictably arise from even more complex, industrial-scale inputs.

The language model training process used for AI takes gigantic troves of data scraped indiscriminately from the internet, pushes that data repeatedly through artificial neural networks (some containing 175 billion individual parameters) and adjusts the networks’ settings to more closely fit the data. The process involves what the chief executive of OpenAI has called an “eye-watering” amount of computation. In the end, this immense enterprise arrives not at an unexplained spark of consciousness but a compressed kielbasa of information. It is the industrial production of a knowledge sausage, which crams together so much data that its ability to spit out a million possible outputs becomes relatively quotidian.
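To make that recipe concrete, here is a minimal sketch in Python of the loop the paragraph describes: show the model data over and over, and nudge its parameters to fit. Everything in it (the toy corpus, the single weight matrix, the bigram setup) is a hypothetical illustration, not the actual machinery behind ChatGPT or Bard, whose networks are billions of times larger.

```python
# A toy, hedged illustration of language model training: repeatedly
# push data through a model and adjust its parameters to fit that data.
# This is a hypothetical bigram character model in plain NumPy.
import numpy as np

text = "the sausage factory presses data into parameters " * 20
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

# One weight matrix: given the current character, score the next one.
W = np.zeros((V, V))

pairs = [(idx[a], idx[b]) for a, b in zip(text, text[1:])]
lr = 0.5

for step in range(200):
    grad = np.zeros_like(W)
    loss = 0.0
    for a, b in pairs:
        logits = W[a]
        p = np.exp(logits - logits.max())  # softmax over next characters
        p /= p.sum()
        loss -= np.log(p[b])
        # Gradient of cross-entropy: predicted probs minus one-hot target.
        g = p.copy()
        g[b] -= 1.0
        grad[a] += g
    W -= lr * grad / len(pairs)  # adjust the "settings" to fit the data

print(f"final loss per pair: {loss / len(pairs):.3f}")
```

The point of the sketch is the shape of the process, not the scale: data in, parameter adjustments out, repeated until the outputs fit the inputs.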

Contrast this with examples of actual emergence, such as fluid flow, which has been described for two centuries by an elegant expression known as the Navier-Stokes equations. Shorter than a haiku, these equations somehow characterize a stupendous range of natural phenomena, from steam rising from a coffee mug to the turbulent vortices of weather systems hundreds of miles wide. None of this is obvious from inspecting the equations. Yet they spontaneously give rise to beautiful, interlocking systems of complex whorls.
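For reference, here is one standard way to write the incompressible form of those equations, with u the fluid velocity, p the pressure, ρ the density, ν the viscosity and f any external forces:

```latex
% Incompressible Navier-Stokes: momentum balance plus mass conservation.
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u}
  = -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f},
\qquad
\nabla \cdot \mathbf{u} = 0
```

Two lines suffice; everything from coffee steam to hurricanes hides inside them.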

In my more than two decades researching computational methods for harnessing these equations (and translating some of my findings into algorithms that appeared in “Avatar” and “Iron Man 3,” winning me an Academy Award), I’ve seen this beauty and complexity emerge repeatedly. It’s far removed from the workings of today’s AI.

Additional examples of emergent behavior in physics and biology include four-line descriptions of water and ice that suddenly give rise to intricate snowflake patterns (the subject of my PhD dissertation). Another is the reaction-diffusion systems discovered by Alan Turing, the same British scientist who developed the famous Turing test to gauge whether AIs are indistinguishable from humans. He found simple systems of equations describing chemical interactions that spontaneously organize into the spots of a leopard or the stripes of a zebra.
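In their generic two-species form, Turing’s reaction-diffusion equations are just as compact: u and v are chemical concentrations, D_u and D_v their diffusion rates, and f and g the reaction terms (left abstract here; particular choices yield particular patterns):

```latex
% Turing's reaction-diffusion scheme: two chemicals diffuse at
% different rates while reacting, and patterns self-organize.
\frac{\partial u}{\partial t} = D_u \nabla^{2} u + f(u, v),
\qquad
\frac{\partial v}{\partial t} = D_v \nabla^{2} v + g(u, v)
```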

When recognizable biological structures spring forth from primordial chemical baths, it points toward a better understanding of one of humanity’s most basic questions: How did life begin? Scientists have been enchanted for centuries whenever such simple mechanisms produce such complex phenomena, understandably treating them with near-religious awe. When magic emerges from a haiku-like equation or primordial ooze, we get to witness something rise up from practically nothing. With OpenAI’s ChatGPT and Google’s Bard, we’re seeing an industrial product rising up from a factory complex.

Just as with real-life sausage, the components that make up ChatGPT are obscured over the course of its production. But that doesn’t mean they defy explanation, especially by manufacturers. After the “60 Minutes” broadcast, Margaret Mitchell, an AI ethics researcher whom Google fired two years ago, pointed out that Google’s program could speak Bengali because it was almost certainly shown Bengali while being trained and used pieces of a previous model that already knew the language. The suggestion that it acquired this skill ex nihilo strains credulity.

The emergent claims of AI writing computer code have equally mundane explanations: There are massive amounts of code on the internet. When shown the contents of the internet over and over, the AI learns both written languages and programming languages.

Downplaying such mundane provenances feeds the notion that AIs must somehow be magical. As digital humanities scholar Lauren Klein explained in a recent talk, this narrative, in which code is magic and its creators are akin to wizards, stretches back to the 1950s. Programmers considered themselves members of a priesthood guarding skills and mysteries far too complex for ordinary mortals. Today, with coding skills being taught on college campuses around the world, this illusion has become harder to maintain. AI is being spun into the fabric that clothes the new priesthood.

Claiming that complex outputs arising from even more complex inputs constitute emergent behavior is like finding a severed finger in a hot dog and claiming the hot dog factory has learned to create fingers. Modern AI chatbots are not magical artifacts without precedent in human history. They are not producing something out of nothing. They do not reveal insights into the laws that govern human consciousness and our physical universe. They are industrial-scale knowledge sausages.

Theodore Kim is an associate professor of computer science at Yale University. 

@_TheodoreKim
