Hallucination, the problem of an AI “pretending to know when it doesn’t know,” is one of the biggest dilemmas in artificial intelligence, and it may have reached an important turning point. A study conducted at the Korea Advanced Institute of Science and Technology (KAIST) promises to improve reliability in critical areas such as medicine and autonomous driving by teaching AI to “admit its ignorance.”
HOW WAS THE “EXCESSIVE SELF-CONFIDENCE” PROBLEM SOLVED?
According to Independent Turkish, existing artificial intelligence models tend to give wrong answers with high confidence, even on subjects they do not know, because small errors in the data they are exposed to, especially early in training, get amplified. The researchers determined that the root of this behavior lies in how neural networks learn.
To solve this problem, the KAIST team took inspiration from the prenatal development of the human brain, which produces spontaneous signals even before it receives stimuli from the outside world. Mimicking this process, the scientists applied a brief pre-training (“warm-up”) phase with random-noise inputs before teaching the AI model real data.
“I DON’T KNOW ANYTHING YET” PHASE
Thanks to this warm-up period, the AI model learns to calibrate its own uncertainty before it starts learning from data: it begins from a baseline of “I don’t know anything yet.” As a result, when the model encounters data it has not seen during training, it lowers its confidence and says “I don’t know” instead of giving a made-up answer.
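To make the idea concrete, here is a minimal, hypothetical sketch in PyTorch of what a noise warm-up of this kind could look like. The model, hyperparameters, and the uniform-target loss are illustrative assumptions for this sketch, not the KAIST team’s published recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sketch only: a small classifier is first "warmed up" on
# random-noise inputs with a uniform (maximum-entropy) target, so it starts
# from a calibrated "I don't know anything yet" state, then trained on real
# data as usual.

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
num_classes = 10

# --- Warm-up phase: random noise in, uniform target distribution out ---
for _ in range(200):  # a brief warm-up, as the article describes
    noise = torch.randn(64, 784)                          # random-noise inputs
    uniform = torch.full((64, num_classes), 1.0 / num_classes)
    log_probs = F.log_softmax(model(noise), dim=1)
    loss = F.kl_div(log_probs, uniform, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# --- Normal training on real data would follow (placeholder loader) ---
# for images, labels in real_dataloader:
#     loss = F.cross_entropy(model(images.view(-1, 784)), labels)
#     ...
```

In this sketch, the warm-up simply pushes the untrained network toward maximally uncertain outputs, so that high confidence must later be earned from real data rather than arising from amplified early errors.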
ARTIFICIAL INTELLIGENCE NOW HAS A MORE “HUMAN-LIKE” AWARENESS
Se-Bum Paik, one of the authors of the study published in the journal Nature Machine Intelligence, emphasized the importance of the method:
“This study shows that by combining fundamental principles of brain development, AI can recognize its own state of knowledge in a way more similar to humans.”
WHY IS IT IMPORTANT?
In medical diagnosis especially, wrong AI predictions can pose life-threatening risks. With the new method:
Reliability Will Increase: The AI will recognize when it is uncertain or may be wrong.
Hallucinations Will Decrease: The AI will acknowledge uncertainty rather than making up information.
Critical Decisions Will Be Safer: More consistent results will be achieved in areas with no margin for error, such as autonomous vehicles and healthcare.
This advance appears to move digital assistants and decision-making systems toward a safer future by improving not only artificial intelligence’s capacity to answer correctly, but also its ability to know its own limits (metacognition).