Artificial intelligence is becoming increasingly incomprehensible

As artificial intelligence grows ever more complex and spreads into more and more areas every day, experts fear that the processes at work will escape them entirely, at the risk of catastrophic errors.

It’s a strange phenomenon that continues to intrigue researchers following the evolution of artificial intelligence. As the American outlet Vice summarizes, “AI is no longer content to copy human intelligence; it now also mirrors its inexplicability.” So much so that developers are finding it increasingly difficult to understand how it works and to determine why it produces one result rather than another.

Experts therefore recommend that developers take a “step back” and think about why and how an AI reaches a result, rather than endlessly improving the accuracy and speed of their systems.

“If AI becomes a black box, we will no longer be able to understand the causes of a failure, nor improve the safety of the system,” explains Roman V. Yampolskiy, professor of computer science at the University of Louisville, in an article whose title reads like a verdict: “Unexplainability and Incomprehensibility of AI.”
“Furthermore,” the researcher adds, “if we grow accustomed to accepting an AI’s answers as oracle pronouncements that require no explanation, we will be unable to verify whether those results are biased or manipulated.”

One of the most dramatic examples of this risk is facial recognition: in 2018, a study found that AI systems had far more difficulty recognizing a dark-skinned female face than a light-skinned male face. When such systems are used by police and the courts, they become a potential source of miscarriages of justice. The problem is similar with AIs that produce diagnoses from medical imaging: “There is growing concern that these systems reflect and amplify human biases, and reduce the quality of their performance in historically underserved populations, such as female patients, Black patients, or patients of low socioeconomic status,” notes a study published in the journal Nature.
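
To make the disparity concrete, here is a minimal sketch of how such a gap can be measured. The data and function names are purely illustrative and not taken from the studies cited; the bias shows up as error rates that differ sharply between subgroups.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Error rate of a classifier within each demographic subgroup.

    `records` is an iterable of (group, predicted, actual) tuples;
    the data below are illustrative, not from the cited studies.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy audit: a large gap between subgroup error rates is the
# statistical signature of the bias described above.
records = [
    ("light-skinned male", "match", "match"),
    ("light-skinned male", "match", "match"),
    ("dark-skinned female", "no match", "match"),
    ("dark-skinned female", "match", "match"),
]
print(error_rates_by_group(records))
# -> {'light-skinned male': 0.0, 'dark-skinned female': 0.5}
```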

Abandon the illusion of explanation

However, other experts fear that trying too hard to understand and interpret AI’s processes will end up making it less effective…

“Perhaps the answer is to abandon the illusion of explanation and focus instead on more rigorous testing of model reliability, bias and performance, as we try to do with humans,” suggests Jeff Clune, professor of computer science at the University of British Columbia.
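
As one illustration of what such black-box testing might look like, here is a minimal sketch of a counterfactual probe: feed the model the same case twice, changing only a protected attribute, and see whether the decision changes. All names here are hypothetical placeholders, not a specific library’s API, and no access to the model’s inner workings is required.

```python
def counterfactual_probe(predict, example, attribute, values):
    """Black-box bias test: vary a single protected attribute while
    holding every other feature fixed, and record the model's output.

    `predict` can be any opaque scoring function; all names are
    hypothetical placeholders, not a real system's API.
    """
    return {value: predict({**example, attribute: value})
            for value in values}

# Illustrative use with a stand-in for an opaque hiring model:
def toy_model(applicant):
    # A deliberately biased toy: penalizes one gender outright.
    return 0.9 if applicant["gender"] == "male" else 0.4

applicant = {"experience": 7, "degree": "MSc", "gender": "female"}
print(counterfactual_probe(toy_model, applicant, "gender",
                           ["female", "male"]))
# -> {'female': 0.4, 'male': 0.9}: the score moves with the
# protected attribute alone, which the probe flags as bias.
```
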
But most experts agree on one point: it is far too early to let an AI decide whether or not to hire someone, grant or refuse a loan, or send someone to prison.
