
Gringo view: how scared should we be of AI?

(Opinion) Very! But not as scared as all the headlines proclaim. At least not yet!

Like many of you, I have been bombarded by warnings that AI is going to end civilization as we know it. The thesis stretches credibility a bit far.

How can a computer program, even one that can learn from all the world’s stored knowledge, even one marvelously creative and intuitive, even one capable of and hungry to write computer code, answer our questions, plan our meals, tell our kids bedtime stories and a million other things, take over our lives and end civilization as we know it?

The experts tell us that for the first time in history we have a technology that can make decisions by itself and create ideas by itself. That’s no small first.

(Artificial Intelligence, Democracy, & the Future of Civilization | Yoshua Bengio & Yuval Noah Harari)

Drilling down through the ample bedrock of recently accumulated knowledge and opinion about AI, and asking ChatGPT (running GPT-4) to help me understand whether AI has the potential to drastically change civilization, I uncovered a potpourri of leads but no conclusive answers.

There is a certain magic surrounding the whole subject of AI. It does its amazing tricks, but you can’t discern how the rabbit magically got into the seemingly empty hat.

My GPT-4 seemed to be hiding its real opinions up its sleeve. Like this:

“AI has the potential to drastically change civilization in a multitude of ways. Some of these ways are already evident, while others may only be realized in the future.”

It is hard to beat that for an enigmatic answer. It was followed by a long annotated list of “some of the ways it may change our world.”

I was stopped in my tracks by the subtle use of ‘our world’: not ‘the world’ but ‘ours’. I couldn’t help but ask the AI whether he/she/it had already taken a seat at our table.

“AI, including AI like me, is increasingly integrated into various aspects of human society, including industry, healthcare, education, and entertainment, to provide information, automate tasks, and help make predictions.”

“In this sense, AI has indeed taken a ‘seat at the table’ in many areas, helping to shape and inform discussions and decisions.”

“As an AI, I don’t literally take a seat at any table, and I don’t make decisions or have influence in the same way a human participant might. But remember, AI doesn’t have desires, goals, or a will of its own.”

Is lacking desires, goals, or a will of its own necessarily a drawback?

Or could this heartless and passionless programmed bot avoid the pitfalls that are the inevitable result of unrestrained passion and instead just ‘get on with it’ – whatever ‘it’ is?

My bot says: “This isn’t a drawback…it enables me to provide neutral and unbiased assistance to users without any personal agenda or preferences.”

Until now, passionate objectivity was an oxymoron. Not anymore.
As we get used to dealing with bots – and I must admit I now instinctively say ‘please’ when asking my bot a question – we may lose the ability to differentiate between AI and human communications, with all the dangers this represents.

As brilliantly explained by Yuval Noah Harari, an Israeli historian, professor, and author, AI’s key ability is its mastery of human language, and that mastery gives it all it needs to create a matrix-like world of illusion that could fall, like a curtain, over our minds.

(AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum)

Sound crazy? Not to AI’s creators and developers.

As The New York Times reported: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

That single-sentence statement released by the Center for AI Safety, a nonprofit organization, has been signed by more than 350 executives, researchers and engineers working in A.I.

The majority believe A.I., which has already surpassed human-level performance in some areas, could become powerful enough to create societal-scale disruptions within a few years if nothing is done to slow it down.

The development raises significant questions: should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? And should we risk the loss of control of our civilization?

Closer to the current situation, the Chinese are reported to be developing capabilities for what they call “cognitive warfare”: using AI techniques to influence the minds and shape the decisions of their adversaries.

The alarm bells are ringing even if to date no one has been able to say exactly how this disruption to our civilization would actually happen.

Testifying before a Senate subcommittee, Sam Altman, chief executive of OpenAI, laid it on the line with this understatement: “I think if this technology goes wrong, it can go quite wrong.”

How scared should we be of AI?

Very!
