
Opinion: Is Everything Changing Too Fast?

Suddenly, everywhere we look, we find AI looking back at us. And it was only yesterday that we had never heard of it.

And just when we had grown comfortable with bedrock certainties about our world, the marvelous Webb telescope’s exploration of the origins of the universe is splintering those certainties with ever-changing new knowledge.

It has been reliably estimated that by the time students finish a degree in engineering or many of the sciences, what they learned in their early years has changed so dramatically that much of it is largely irrelevant.

It’s all happening so fast.

According to the Guardian, “OpenAI was taken by surprise by what happened. ChatGPT went from zero to 1 million users in five days and by January was up to 100 million, making it the fastest-growing app ever”.

Is Everything Changing Too Fast? (Photo Internet reproduction)

“AI works by combining large amounts of data with fast, iterative processing and intelligent algorithms, allowing the software to learn automatically from patterns or features in the data”.
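That quoted recipe – learning automatically from patterns in data through fast, iterative processing – can be illustrated with a toy sketch. This is hypothetical, illustrative Python, not how any production AI system is actually built: a few lines that are never told the rule behind the data, yet infer it by repeatedly shrinking their prediction error.

```python
# Toy illustration of "learning automatically from patterns in the data":
# the program is never told the rule y = 2x + 1; it infers it by
# iterating over examples and nudging its parameters to reduce error.

data = [(x, 2 * x + 1) for x in range(10)]  # hidden pattern: y = 2x + 1

w, b = 0.0, 0.0   # model parameters, starting with no knowledge
lr = 0.01         # learning rate for the iterative updates

for _ in range(2000):      # fast, iterative processing over the data
    for x, y in data:
        error = (w * x + b) - y
        w -= lr * error * x   # adjust parameters to shrink the error
        b -= lr * error

print(round(w, 2), round(b, 2))  # w and b approach 2 and 1
```

The “intelligence” here is nothing more than arithmetic repeated at speed – which is precisely the point the next question raises.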

“Intelligent algorithms”? Could that be today’s oxymoron?

Can these algorithms sweat from global warming, fall in love, or marvel at the beauty of a sunset?

Can Ernie Bot, an AI chatbot recently launched by Baidu, the Chinese search giant, give objective responses to questions that test Chinese state slogans and orthodoxy on sensitive political subjects?

Advertising and websites of enterprises big and small can’t tell you enough about their uses of AI, even when the claims are exaggerations or downright falsehoods.

I’m eagerly awaiting the message which honestly proclaims, ‘Our Intelligence is Real, Not Artificial’.

Because hard as it is to know sometimes, there is a real difference.

The classic definition of AI is that the technology “makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks”.

The real difference is that while we humans may lack access to deep learning, our intelligence is flexible.

We possess a layer of emotion which conditions how we use our intelligence. We also have the capacity for adaptation and change.

AI is locked into its underlying algorithms and the data pool available to it.

Consider the Bengali user: although Bengali is one of the world’s most spoken languages, his AI has had less digitized text – a smaller data pool – on which to train, and thus his results are limited.

That drawback hasn’t prevented AI from racing to the top of the popularity charts.

The rapid adoption of “must-have” gadgets often outpaces users’ understanding of their full capabilities.

Many of those 100 million users may employ AI only for basic tasks like remembering birthdays or planning parties.

The appeal often lies more in the bragging rights and the image of being tech-savvy than in exploiting the full range of features.

And despite the best efforts of AI mavens, AI applications remain devoid of emotion.

While we might wish to free some imagined sentient beings from imprisonment in our AI, that is the stuff of sci-fi, and it is a long way from becoming reality.

Like any other app maker, OpenAI, the father of ChatGPT, enjoyed one of the delights of being a software producer: no supply chains, ‘production’ lines, or ‘manufacturing’ timelines to worry about.

Just crank up a server farm and your only limit is the size of the world’s wired population. And that keeps growing exponentially.

So do its dangerous uses.

AI invites the reduction of fuzzy emotional subjectivity to dry objectivity. It produces answers, not questions.

It dumbs down outputs so as to make them seem normal and the temptation is to accept them without question. And then they just become part of us.

Universities are concerned that admissions staff may not be able to differentiate between AI-composed essays and a student’s original work in a candidate’s application package.

And even though AI can make horrendous mistakes – Tesla car crashes, Amazon’s recruiting tool showing bias against women, and, less seriously, an AI camera mistaking a tennis linesman’s head for a ball – its outputs become indistinguishable from those of humans.

Mistaking a linesman’s head for a tennis ball is hardly a cosmic mistake. But making a discovery which flips the long-held foundations of cosmology on its head is nothing less than cosmic.

One of the amazing Webb telescope’s first major findings unsettled cosmologists: the discovery of fully formed galaxies in the very early universe.

The so-called standard model of cosmology predicted fully formed galaxies appearing far later in time than Webb’s stunning observation shows.

Astrophysicist Adam Frank and theoretical physicist Marcelo Gleiser suggested in a New York Times article that our understanding of the universe may be fundamentally flawed.

The NYT article, ‘The Story of Our Universe May Be Starting to Unravel,’ posits that we might need a radical shift away from the standard cosmological model.

This shift could compel us to reevaluate not only the elemental components of the universe but also our concepts of space and time.

Changing how we think about the nature of space and time is a very big ask.

Imagine cosmology graduate students having to throw away half of what they had learned, perhaps going back to their AI models and asking them to ‘regenerate’.

‘Regenerate’ may be a command that will define the fast-changing AI-infused times in which we live.

Each time we ask it to probe deep into its algorithm-driven data, our world view is likely to change.

Scary stuff.
