
Gringo view: can we really stop AI being released into the wild?

(Opinion) Even Elon Musk has joined the rousing Silicon Valley chorus line calling for a pause in AI development.

He is riding the wave of the prestigious Future of Life Institute, which has called on all AI labs "to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4."

That’s a big ask.

I can just imagine the titans of Microsoft, Google, Amazon, IBM, and a herd of unicorns milling around and arguing how long AI development should be halted.

It is rather like watching the safety car slow a Formula 1 race while each competitor waits anxiously to cross the safety car line and put the pedal to the metal, accelerating away from the pack.

The dangers presented by AI have caused Dr. Geoffrey Hinton, an artificial intelligence pioneer and vice-president at Google, to resign his prestigious post so that he is free to voice his deep concerns about the risks AI presents.

(Google CEO Sundar Pichai says the company doesn’t fully understand its own AI system after it did things it wasn’t programmed to do!)

Said the NYTimes: “Gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.”

“It is hard to see how you can prevent the bad actors from using AI for bad things,” Dr. Hinton said.

Like the scientists who worked on the atomic bomb, Dr. Hinton is aware that AI systems can pose profound risks to humanity – a two-edged sword that needs to be handled as cautiously as dynamite or bitcoin.

Will we be willing to beat our new AI swords into plowshares and use them for communal good? Betting that way doesn’t seem to promise big rewards.

“These models are being developed in an arms race between tech companies, without any transparency,” said Peter Bloem, an AI expert who studies large language models, the foundation on which generative AI systems are built.

AI appears to be getting a lot more attention than other technological breakthroughs.

The 2017 conference on beneficial AI highlighted the now widely endorsed Asilomar AI Principles, intended to promote the safe and ethical development of artificial intelligence.

In the shadow of these principles, academics and leaders in the computer and related industries expressed their fear that advanced AI could “represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.”

Perhaps I’m being overly negative, but it’s rather hard to remember the last time ‘the history of life on Earth’ was ‘planned for and managed’ except by authoritarians eager to tilt the playing field in their favor.

The idea that advanced AI could “represent a profound change” in the history of life on Earth is nothing short of hyperbolic, rather like contemplating the cost of Tesla’s recall of 360,000 cars, which must have seemed almost as earth-shaking as watching the share price of Twitter tank.

There was recently a high-octane pow-wow of AI leaders at the White House, presided over by Vice-President Kamala Harris, who said that firms developing AI technology have a “fundamental responsibility to make sure their products are safe before they are deployed or made public”.

Hooray for that sentiment, but is it any different from the responsibility any widget-maker bears in offering his widget to the public?

What’s most interesting is how much concern AI has generated, and how scientists, academics, and governments have taken a heightened interest in a new technology most of them are unlikely to understand.

What is it about AI that has made it so interesting? Could it be a dystopian view of an out-of-control world?

The UK government’s outgoing scientific adviser, Sir Patrick Vallance, urged ministers to ‘get ahead’ of the profound social and economic changes that could be triggered by AI, saying “the impact on jobs could be as big as that of the Industrial Revolution.”

The Future of Life Institute’s letter warns that AI labs are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”

Robert Weissman, the president of the consumer rights non-profit Public Citizen, praised the White House’s announcement, pointing out that “Big Tech companies need to be saved from themselves.”

Echoing Peter Bloem’s fears, he argues, “The companies and their top AI developers are well aware of the risks posed by generative AI. But they are in a competitive arms race, and each believes themselves unable to slow down.”

Like many AI executives and developers, Mr. Bloem is torn between concern about the lack of AI regulation and excitement that computers are finally starting to “understand what we want them to do,” something he says researchers “hadn’t been close to achieving in over 70 years of trying”.

The last time I looked, more than 30,000 interested parties had signed the call “on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

I can just see these scientists and industry leaders, PCs at the ready, calculating the competitive disadvantage of pushing the pause button while others, ignoring this proposed cooling-off period, would charge full speed ahead.

Industry leaders believe the new AI systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.

That’s no small promise.

Will the six-month pause take effect, or is the profit-driven competitive momentum too strong for it to hold?
