Overview

Is the evolution of AI what we anticipated?

During an interview with Life magazine in 1970, Marvin Minsky, one of the godfathers of artificial intelligence, predicted that in a few years’ time we would have a machine “with the general intelligence of an average human being” (so-called artificial general intelligence).

While to date the idea of such an intelligent machine remains a utopia, founded on the supposed superiority of digital logic over human logic and on the deconstruction of space and time, the role of artificial intelligence in our society is undoubtedly a central issue in the current debate on the topic.

Artificial intelligence is, in practice, a tool that can support human beings in their tasks and activities, as well as a system for predicting future decisions, and thus one with a potentially significant impact on our choices.

The debate on AI begins, first of all, with an understanding of what is meant by artificial intelligence and of the mechanisms underlying its functioning.

As is well known, there is no definition of artificial intelligence that encompasses the various functions and applications of this scientific discipline, beyond the one specifying that its purpose is to “define or develop programs or machines with a behaviour that would be defined as intelligent if it were exhibited by a human being” (F. Rossi).

In the last few years, the reach and impact of artificial intelligence have expanded at a breathtaking pace, thanks to the exponential growth of computing speed, the huge increase in the amount of available data and the rise of cloud computing, to the point that AI has become the key driver of the “fifth industrial revolution”, characterised by the interaction between intelligent machines and human beings.

These factors, together with the various capabilities of machines (planning, vision, expert systems, natural language processing, robotics and speech recognition), foremost among them machine learning, have made AI predictions increasingly accurate, leading to the application of AI within specific domains (so-called weak artificial intelligence): from medicine to finance, from manufacturing to education, from weapons to information, up to resource management.

Among the most visible areas in which artificial intelligence has made significant progress are natural language processing and image recognition.

Deep learning algorithms based on artificial neural networks, built from layers of simple computational nodes comparable to neurons, have shown increasing accuracy in recognising objects in images and in processing human language naturally, reaching human-level performance on various professional and academic benchmarks.
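To make the idea of “layers of simple computational nodes” concrete, here is a minimal sketch in Python. All weights and inputs are invented for illustration; a real network would have millions of nodes and learn these numbers from data.

```python
import math

def layer(inputs, weights, biases):
    """One layer of simple computational nodes ('neurons'): each node
    takes a weighted sum of its inputs plus a bias, then applies a
    non-linear activation function (here, the sigmoid)."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid squashes to (0, 1)
    return outputs

# A toy "deep" network: two stacked layers, with made-up weights.
x = [0.5, -1.0]
h = layer(x, weights=[[0.4, -0.6], [0.9, 0.1]], biases=[0.0, -0.2])
y = layer(h, weights=[[1.2, -0.7]], biases=[0.1])
```

Stacking many such layers, and adjusting the weights from examples, is what allows these systems to recognise objects and process language.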

An example of this is ChatGPT, the artificial intelligence chatbot made freely available to the public by OpenAI in November 2022, capable of answering questions, chatting with people and generating texts from natural language inputs, and powered by a Large Language Model (LLM), i.e. a deep learning model trained on a large body of text. In two months, the tool was used by 100 million users (UBS), who could test the output of an artificial intelligence capable of generating knowledge from what it has learned from human beings, without the intermediation of an expert.
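The core mechanism of an LLM, stripped of its scale, is predicting the next word from the words seen so far. A drastically simplified sketch, using a tiny invented corpus and a simple word-pair table instead of a deep network:

```python
import random
from collections import defaultdict

random.seed(1)

# A toy corpus; real models are trained on billions of words.
corpus = "the machine answers questions and the machine generates text".split()

# Record, for each word, which words followed it in the corpus.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(start, length=5):
    """Generate text by repeatedly sampling a plausible next word."""
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```

An LLM replaces this lookup table with a neural network conditioned on long stretches of context, which is what makes its output fluent rather than merely statistically plausible word by word.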

Other major advances concern unsupervised learning, in which AI algorithms ingest “raw” unlabelled data and, based on a simplified representation of their training dataset, learn to generate new texts and new creative content, such as music and art.

GPT-4 (Generative Pre-trained Transformer), the model released by OpenAI in March 2023, showed us that, thanks to neural networks, machines can write emails, blog posts or articles (copy.AI) but can also, in their applications, generate multimodal content such as images (Midjourney), compose music (Soundraw.io), defend consumer rights (DoNotPay) and create avatars (Anime AI).

Another sector in which artificial intelligence is refining its capabilities is that of predictive algorithms, which go beyond formal logic and the perimeter of their source code to generate a model from information extracted through data mining.

The validity of a machine’s answers would then derive not from prescriptive axioms, but from the ability to answer questions based on models that can update themselves, including through interaction with the surrounding environment and human feedback (reinforcement learning).

These machine learning algorithms have applications in healthcare, finance and marketing, and are transforming the way in which human beings form judgments. They require users to have an adequate mindset to spot possible algorithmic errors, deriving either from a model built on training data that applies imperfectly to new data (generalisation errors), or from the absence of certain scenarios in the learning data.
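A generalisation error can be illustrated in a few lines. The toy model below, with an invented dataset, simply memorises its training examples: it answers every training question perfectly, yet makes errors on new data it has never seen.

```python
import random

random.seed(0)

def true_fn(x):
    # The real relationship the model is supposed to learn.
    return 2.0 * x + 1.0

# Training data: a handful of observations with a little noise.
train = [(x, true_fn(x) + random.gauss(0, 0.1)) for x in [0, 1, 2, 3, 4]]

def memorising_model(x):
    """A model that memorises its training set: it returns the label
    of the nearest training point instead of learning the pattern."""
    nearest = min(train, key=lambda p: abs(p[0] - x))
    return nearest[1]

def mse(points, model):
    # Mean squared error of the model over a set of (x, y) points.
    return sum((model(x) - y) ** 2 for x, y in points) / len(points)

train_error = mse(train, memorising_model)   # zero: perfect recall
test = [(x, true_fn(x)) for x in [0.5, 1.5, 2.5, 3.5]]
test_error = mse(test, memorising_model)     # nonzero: generalisation error
```

The gap between the two errors is precisely what a user with an “adequate mindset” should watch for: flawless performance on familiar cases says little about behaviour on unseen ones.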

Despite its progress and advantages, AI is not an unerring technology: its answers often need to be understood and interpreted.

Moreover, there are significant social risks related to the use of this technology which require consideration, without, however, envisaging catastrophic scenarios, such as that of a humanity reduced to automatons as depicted in the film WALL·E.

Among the main risks and harms deriving from the use of artificial intelligence are: increased misinformation, amplified by recommendation systems and by LLMs, which can produce convincing falsehoods known as “hallucinations”; the strengthening of social inequalities due to the use of biased learning data and results; and the undermining of users’ privacy, as AI models source data from the web or purchase it, but provide no information on which data are used to train them.

The Center for AI Safety (CAIS), a not-for-profit organisation based in San Francisco, has identified eight large-scale risks related to the competitive development of AI, including weaponisation, power-seeking behaviour and value lock-in.

One of the main perceived risks is the automation of human work. AI may take over a broad range of activities and professions, endangering the employment of millions of people across various industries. The economic gap could widen in the absence of adequate policies and measures to mitigate the negative impacts of technological unemployment.

Another perceived risk concerns the ethics of AI. AI systems are only as good as the data they are trained on: besides perpetuating existing biases or discriminating against certain groups of people, they could give rise, even through deception, to a “rogue” artificial intelligence (Yoshua Bengio).

Data privacy and security are additional major issues related to AI. Machine learning algorithms require huge amounts of data for their training, but the indiscriminate and unregulated use of such data, including synthetic data, and the difficulty of checking their accuracy and relevance may lead to a violation of people’s privacy, both in terms of control over the information provided and of the correctness of the resulting decisions.

Moreover, the corruption or manipulation of input datasets and breaches of data security may lead to abuses of the technology that are not immediately perceivable, given the characteristics of a model that does not work by logical deduction and is not necessarily transparent.

In order to guarantee that the impact of artificial intelligence is fair, secure and sustainable for society as a whole, United Nations experts have recently identified three tools: regulation, increased transparency and human supervision.

It is therefore a question of governing and directing artificial intelligence on the basis of an anthropocentric approach, to understand how human beings and machines can work together to the best of their abilities to face society’s pressing challenges and to expand and broaden human experience, rather than replicate or even replace it.

It will be necessary to boost human creativity, curiosity and empathy (Paul McDonagh-Smith), and only when machines come to epitomise the best of who we are and could potentially be will we hold the keys to our future.