Artificial Intelligence: reality, sensationalism, and enigmas



Published Jun 30, 2023


MANOJ MAHARAJ and CRAIG BLEWETT

The story of human achievement is measured not just in years, but in the invention and mastery of tools.

Each milestone represented a pivotal leap in societal progress, from the advent of simple stone implements to the ingenious inventions of the ramp, the lever, the wheel, the steam engine, and the motor.

Today, we stand at the threshold of the Artificial Intelligence (AI) era, yet another inflection point with limitless potential. AI is already reshaping diverse sectors from health care to commerce, education to communication. Yet, an unsettling gulf exists between public perception and scientific understanding of AI, a divide deepened by sensationalist media and popular culture’s misleading narratives.

As we grapple with the onset of AI, we stand on the precipice of a monumental shift. The course of our journey hinges on the choices we make and the paths we tread in the coming months and years. AI is not merely a scientific curiosity; it is a revolutionary force with the potential to infiltrate and reframe every facet of our lives.

AI-driven innovation and the productivity gains it brings are already creating seismic shifts. Yet as the technology becomes more pervasive, a concerning paradox arises: the wider AI spreads, the deeper the chasm grows between the public’s perception and the scientific community’s understanding of this transformative technology.

We argue that this disparity is primarily fuelled by a diet of sensationalist media narratives and popular culture portrayals, which obscure the true essence of AI and its potential impact on our shared future.

This narrative is driven mainly by the conflation of robotics with AI. Depictions of AI in the popular press often take their cue from Hollywood portrayals, such as The Terminator or I, Robot. These depictions largely present an apocalyptic scenario in which sentient machines are intent on overthrowing their human creators.

These narratives sell copy and drive public perception, fuelling our collective imaginations. The man-versus-machine storyline is an old one that has reappeared in various guises through the ages, with machines accused of causing job losses or otherwise changing the world for the worse. Yet these innovations persevere, and we have advanced technologically, socially and in many other ways as a result.

We argue that public discourse is skewed towards fear and apprehension, overshadowing balanced, informative debate about the role of machines, and now of AI in particular, in our collective futures. In contrast, the scientists and researchers developing AI view it as a tool rather than a sentient entity set on destroying us.

For these experts, AI is not about replacing humans but about augmenting human capabilities. AI can be leveraged to tackle complex tasks, boost efficiency, and uncover insights that might otherwise be out of reach, in much the same way that earlier tools allowed us to do things that would otherwise have been beyond us.

From this pragmatic perspective, AI lacks desires, emotions, or the ability to rebel – it simply executes the instructions coded by its human creators. We believe that the seemingly mysterious behaviour of these AI systems often reflects our limited understanding of how the human mind works rather than an actual “human-like” response from the machine.

The working of the human brain is a far greater mystery than that of the AI “brain”. Where AI does differ from previous innovations, however, is in the pace of change. Most earlier technological advancements were assimilated into society slowly; this is not the case with AI. This, understandably, may partly explain the dichotomy between the public’s fearful perception of AI and the scientific community’s pragmatic understanding.

This highlights the importance of responsible reporting and accurate depictions of AI in popular media. Given AI’s pervasive influence on society, it is important that the general public understands its capabilities, limitations, potential risks, and ethical implications.

This necessitates inclusive educational initiatives, such as public workshops and community programmes to increase AI literacy. A free-flowing dialogue among the scientific community, the media, and the public can foster mutual understanding. Such exchanges allow scientists to address public concerns directly and the media to fact-check their narratives.

Importantly, these fora provide a platform for the public to voice their fears and questions. This process of collective learning and open discourse can help replace fear and sensationalism with a sober, informed debate about AI.

It is apparent that existential fears about the impending assimilation of humankind by machines are obscuring the very real current issues that need to be addressed. The creation of new jobs and the replacement of old ones need serious public discourse. Workers in threatened positions can prepare themselves by re-skilling through continuing education and training.

What is clear is that advances in technology are making lifelong learning a requirement for anyone who wants to remain relevant and be an active participant in society. Society must also remain informed and vigilant against the threats posed by AI in the hands of malicious actors.

While fake news and deepfakes are likely to increase, other important issues must also be addressed, such as bias, explainability, transparency, trust, cyber-security risks, and ethical considerations. Steering the public’s attention towards remote existential threats may become self-fulfilling if it leaves us failing to deal with the more real and immediate issues.

Part of the fear and misunderstanding surrounding AI stems from its “black box” nature: the internal workings of complex AI models can be opaque. This opacity is often cited as justification for the predictions of doomsayers. Yet the inner workings of many innovations in everyday use are equally opaque to the general public, and even in the scientific domain there are many things we understand only partially, owing to theoretical or technological limitations.

These arguments do not mean that we are advocating a laissez-faire attitude towards a technology that has such potentially far-reaching consequences for us. On the contrary, we are advocating a balanced outlook that takes into account the potential of AI to significantly improve our lives, while also guarding against overreach, unethical use, and unfettered development of AI without the societal guardrails of education and legislation.

As a profound instrument of change, AI possesses the power to extend human potential, broaden the boundaries of our knowledge, and address some of the most complex challenges facing our societies today. Yet, the fulfilment of this extraordinary promise hinges on our collective vigilance and responsibility. History has taught us the art of survival amid existential threats, from deadly pandemics to the looming shadow of nuclear destruction.

Similarly, while we cannot dismiss the potential long-term threats posed by AI, our immediate focus should be on navigating the everyday challenges. Only by doing so can we prevent a fear-fuelled dystopian future from transpiring, using AI not as a harbinger of doom, but as a catalyst for progress and enlightenment.

Manoj Maharaj is a Professor in Information Systems at the School of Management, IT and Governance at the University of KwaZulu-Natal.


Craig Blewett is an Associate Professor in Information Systems at the School of Management, IT and Governance at the University of KwaZulu-Natal.


Daily News