in print | from the New York Times

The singularity is a slingshot into the future.

feat. Ray Kurzweil
June 11, 2023

IMAGE


— contents —

~ story
~ reference


publication: the New York Times
story title: Silicon Valley confronts the idea that the singularity is here
date: June 11, 2023
author: by David Streitfeld

read | story


story |

An introduction.

The frenzy over artificial intelligence (A.I.) may be ushering in the long-awaited moment when technology goes wild. Or maybe it’s the hype that’s out of control.

For decades, Silicon Valley has anticipated the moment when a new technology would come along and change everything. It would unite human and machine, probably for the better but possibly for the worse, and split history into before and after.

The name for this milestone: the singularity.

It could happen in several ways. One possibility is that people would add a computer’s processing power to their own innate intelligence, becoming supercharged versions of themselves. Or maybe computers would grow so complex that they could truly think, creating a global brain.

In either case, the resulting changes would be drastic, exponential and irreversible. A self-aware superhuman machine could design its own improvements faster than any group of scientists, setting off an explosion in intelligence. Centuries of progress could happen in years or even months. The Singularity is a slingshot into the future.

Artificial intelligence is roiling tech, business and politics like nothing in recent memory. Listen to the extravagant claims and wild assertions issuing from Silicon Valley, and it seems the long-promised virtual paradise is finally at hand.

Sundar Pichai, Google’s usually low-key chief executive, calls artificial intelligence “more profound than fire or electricity or anything we have done in the past.” Reid Hoffman, a billionaire investor, says, “The power to make positive change in the world is about to get the biggest boost it’s ever had.” And Microsoft’s co-founder Bill Gates proclaims A.I. “will change the way people work, learn, travel, get health care and communicate with each other.”

A.I. is Silicon Valley’s ultimate new product rollout: transcendence on demand.

But there’s a dark twist. It’s as if tech companies introduced self-driving cars with the caveat that they could blow up before you got to Walmart.

“The advent of artificial general intelligence is called the Singularity because it is so hard to predict what will happen after that,” Elon Musk, who runs Twitter and Tesla, told CNBC last month. He said he thought “an age of abundance” would result but there was “some chance” that it “destroys humanity.”

The biggest cheerleader for A.I. in the tech community is Sam Altman, chief executive of OpenAI, the start-up that prompted the current frenzy with its ChatGPT chatbot. He says A.I. will be “the greatest force for economic empowerment and a lot of people getting rich we have ever seen.”

But he also says Mr. Musk, a critic of A.I. who also started a company to develop brain-computer interfaces, might be right.

Mr. Altman signed an open letter last month released by the Center for AI Safety, a nonprofit organization, saying that “mitigating the risk of extinction from A.I. should be a global priority” that is right up there with “pandemics and nuclear war.” Other signatories included Mr. Altman’s colleagues from OpenAI and computer scientists from Microsoft and Google.


IMAGE

OpenAI’s chief executive, Sam Altman, has been a cheerleader for A.I., but has also signed a statement that “mitigating the risk of extinction from A.I. should be a global priority.” Credit: Haiyun Jiang/The New York Times


Apocalypse is familiar, even beloved territory for Silicon Valley. A few years ago, it seemed every tech executive had a fully stocked apocalypse bunker somewhere remote but reachable. In 2016, Mr. Altman said he was amassing “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.” The coronavirus pandemic made tech preppers feel vindicated, for a while.

Now, they are prepping for the Singularity.

“They like to think they’re sensible people making sage comments, but they sound more like monks in the year 1000 talking about the Rapture,” said Baldur Bjarnason, author of “The Intelligence Illusion,” a critical examination of A.I. “It’s a bit frightening,” he said.

The roots of transcendence

The Singularity’s intellectual roots go back to John von Neumann, a pioneering computer scientist who in the 1950s talked about how “the ever-accelerating progress of technology” would yield “some essential singularity in the history of the race.”


IMAGE

John von Neumann, a pioneering computer scientist, talked in the 1950s about how “the ever-accelerating progress of technology” would yield “some essential singularity in the history of the race.” Credit: Getty Images


Irving John Good, a British mathematician who helped decode the German Enigma device at Bletchley Park during World War II, was also an influential proponent. “The survival of man depends on the early construction of an ultra-intelligent machine,” he wrote in 1964. The director Stanley Kubrick consulted Mr. Good on HAL, the benign-turned-malevolent computer in “2001: A Space Odyssey” — an early example of the porous borders between computer science and science fiction.

Hans Moravec, an adjunct professor at the Robotics Institute at Carnegie Mellon University, thought A.I. would be a boon not just for the living: The dead, too, would be reclaimed in the Singularity. “We would have the opportunity to recreate the past and to interact with it in a real and direct fashion,” he wrote in “Mind Children: The Future of Robot and Human Intelligence.”

In recent years, the entrepreneur and inventor Ray Kurzweil has been the biggest champion of the Singularity. Mr. Kurzweil wrote “The Age of Intelligent Machines” in 1990 and “The Singularity Is Near” in 2005, and is now writing “The Singularity Is Nearer.”

By the end of the decade, he expects computers to pass the Turing Test and be indistinguishable from humans. Fifteen years after that, he calculates, the true transcendence will come: the moment when “computation will be part of ourselves, and we will increase our intelligence a millionfold.”

By then, Mr. Kurzweil will be 97. With the help of vitamins and supplements, he plans to live to see it.


IMAGE

Ray Kurzweil, a high-profile computer scientist, has championed the idea of the Singularity. Credit: Friso Gentsch/Picture Alliance, via Getty Images


For some critics of the Singularity, it is an intellectually dubious attempt to replicate the belief system of organized religion in the kingdom of software.

“They all want eternal life without the inconvenience of having to believe in God,” said Rodney Brooks, the former director of the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology.

The innovation that feeds today’s Singularity debate is the large language model, the type of A.I. system that powers chatbots. Start a conversation with one of these L.L.M.s and it can spit back answers speedily, coherently and often with a fair degree of illumination.

“When you ask a question, these models interpret what it means, determine what its response should mean, then translate that back into words — if that’s not a definition of general intelligence, what is?” said Jerry Kaplan, a longtime A.I. entrepreneur and the author of “Artificial Intelligence: What Everyone Needs to Know.”

Mr. Kaplan said he was skeptical about such highly heralded wonders as self-driving cars and cryptocurrency. He approached the latest A.I. boom with the same doubts but said he had been won over.

“If this isn’t ‘the Singularity,’ it’s certainly a singularity: a transformative technological step that is going to broadly accelerate a whole bunch of art, science and human knowledge — and create some problems,” he said.

Critics counter that even the impressive results of L.L.M.s are a far cry from the enormous, global intelligence long promised by the Singularity. Part of the problem in accurately separating hype from reality is that the engines driving this technology are becoming hidden. OpenAI, which began as a nonprofit using open source code, is now a for-profit venture that critics say is effectively a black box. Google and Microsoft also offer limited visibility.

Much of the A.I. research is being done by the companies with much to gain from the results. Researchers at Microsoft, which invested $13 billion in OpenAI, published a paper in April concluding that a preliminary version of the latest OpenAI model “exhibits many traits of intelligence” including “abstraction, comprehension, vision, coding” and “understanding of human motives and emotions.”

Rylan Schaeffer, a doctoral student in computer science at Stanford, said some A.I. researchers had painted an inaccurate picture of how these large language models exhibit “emergent abilities” — unexplained capabilities that were not evident in smaller versions.

Along with two Stanford colleagues, Brando Miranda and Sanmi Koyejo, Mr. Schaeffer examined the question in a research paper published last month and concluded that emergent properties were “a mirage” caused by errors in measurement. In effect, researchers are seeing what they want to see.
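The measurement argument can be sketched in a few lines of Python. This is a hypothetical toy illustration, not the Stanford researchers’ actual code or data: suppose a model’s per-token accuracy improves smoothly as it scales up, but the benchmark awards credit only when every token in a sequence is correct.

```python
# Toy sketch (hypothetical numbers): how an all-or-nothing metric can make
# smooth improvement look like a sudden "emergent ability".

def per_token_accuracy(scale):
    # Assumed smooth improvement: accuracy climbs gently with model scale.
    return min(0.99, 0.5 + 0.05 * scale)

def exact_match(scale, seq_len=10):
    # The benchmark counts a sequence only if all seq_len tokens are right,
    # so the smooth per-token curve is raised to the seq_len-th power.
    return per_token_accuracy(scale) ** seq_len

for scale in range(1, 10):
    print(scale,
          round(per_token_accuracy(scale), 2),
          round(exact_match(scale), 3))
```

Under these made-up numbers, per-token accuracy climbs steadily from 0.55 to 0.95, while the exact-match score sits near zero for small models and then appears to leap upward at larger scales: an apparent “emergent ability” produced entirely by the choice of metric, which is the kind of mirage the paper describes.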

Eternal life, eternal profits

In Washington, London and Brussels, lawmakers are stirring to the opportunities and problems of A.I. and starting to talk about regulation. Mr. Altman is on a road show, seeking to deflect early criticism and to promote OpenAI as the shepherd of the Singularity.

This includes an openness to regulation, but exactly what that would look like is fuzzy. Silicon Valley has generally held the view that government is too slow and stupid to oversee fast-breaking technological developments.

“There’s no one in the government who can get it right,” Eric Schmidt, Google’s former chief executive, said in an interview with “Meet the Press” last month, arguing the case for A.I. self-regulation. “But the industry can roughly get it right.”

A.I., just like the Singularity, is already being described as irreversible. “Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” Mr. Altman and some of his colleagues wrote last month. If Silicon Valley doesn’t make it, they added, others will.

Less discussed are the vast profits to be made from uploading the world. Despite all the talk of A.I. being an unlimited wealth-generating machine, the people getting rich are pretty much the ones who are already rich.

Microsoft has seen its market capitalization soar by half a trillion dollars this year. Nvidia, a maker of chips that run A.I. systems, recently became one of the most valuable public U.S. companies when it said demand for those chips had skyrocketed.

“A.I. is the tech the world has always wanted,” Mr. Altman tweeted.

It certainly is the tech that the tech world has always wanted, arriving at the absolute best possible time. Last year, Silicon Valley was reeling from layoffs and rising interest rates. Crypto, the previous boom, was enmeshed in fraud and disappointment.

Follow the money, said Charles Stross, a co-author of the novel “The Rapture of the Nerds,” a comedic take on the Singularity, as well as the author of “Accelerando,” a more serious attempt to describe what life could soon be like.

“The real promise here is that corporations will be able to replace many of their flawed, expensive, slow, human information-processing subunits with bits of software, thereby speeding things up and reducing their overheads,” he said.

The Singularity has long been imagined as a cosmic event, literally mind-blowing. And it still may be.

But it might manifest first and foremost — thanks, in part, to the bottom-line obsession of today’s Silicon Valley — as a tool to slash corporate America’s head count. When you’re sprinting to add trillions to your market cap, Heaven can wait.

David Streitfeld has written about technology and its effects for 20 years. In 2013, he was part of the team that won the Pulitzer Prize for explanatory reporting.