Artificial intelligence has become a prominent tool for decision-makers, but some question the extent to which AI can be trusted. Imagine teaching AI to recognize a horse in photographs. Using thousands of images, the AI is trained to identify the correct animal - or so it seems.
The problem is that AI has difficulty contextualizing information and applying common sense. Suppose each of the horse images carries the same copyright logo. The AI may then learn to identify a “horse” based on the logo alone rather than recognizing the animal itself.
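One way practitioners expose this kind of shortcut is to check where a trained model's attribution actually falls. The sketch below is a hypothetical PyTorch example, with a placeholder model, a random placeholder image, and an assumed logo location; it computes a simple gradient-based saliency map and measures how much of the attribution lands inside the logo region.

```python
# Minimal sketch: does the classifier key on the logo or the horse?
# Model, image, and logo coordinates are hypothetical placeholders.
import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for the trained "horse" classifier
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder photo
logo_box = (190, 190, 224, 224)                          # assumed logo region (x1, y1, x2, y2)

# Gradient of the top class score with respect to the input pixels.
score = model(image).max()
score.backward()
saliency = image.grad.abs().sum(dim=1).squeeze()         # (224, 224) attribution map

x1, y1, x2, y2 = logo_box
logo_share = saliency[y1:y2, x1:x2].sum() / saliency.sum()
print(f"Share of attribution inside the logo region: {logo_share:.1%}")
# A large share suggests the model is relying on the logo, not the animal.
```

If most of the attribution concentrates in the logo corner, the model has learned a shortcut rather than the concept it was meant to learn.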
AI can learn and comprehend topics quickly, but skeptics question its reliability. AI can appear highly accurate at face value, yet it may lack the basic common sense a human would bring to the same task.
Research over the years has uncovered limitations in AI systems. Training on large datasets does not by itself guarantee accuracy, and a model that arrives at the right answers through the wrong reasoning can be far less precise and reliable once it is applied beyond its training data.
The Clever Hans Phenomenon in AI
The “Clever Hans” phenomenon is named after a horse that, around the turn of the 20th century, seemed able to perform arithmetic and answer questions. Investigations later revealed that the horse was responding to subtle body-language cues from its handler rather than understanding human language. The parallel to AI is clear: having the right answer means nothing if the wrong assumptions produced it.
Without human checks and balances, AI may unknowingly draw incorrect conclusions. Eliminating the “Clever Hans” dilemma is a critical step toward the practical application and wider adoption of AI.
Although AI can be useful in many instances, people should be careful about placing full reliance on machine learning systems. Trust in AI must be earned gradually over time through various forms of interactive learning.
Human experts should be integrated into the learning process so they can better understand how an AI system reaches its conclusions, and AI systems should be able to explain what their predictions are based on through appropriate reasoning and logic mechanisms.
Explanatory Interactive Learning
Researchers have developed a new approach known as “explanatory interactive learning” (XIL). The methodology places human experts inside the training loop: the system explains what it is basing its predictions on, the experts give feedback on those explanations, and that feedback is fed back into training to improve the system’s recognition abilities.
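One widely cited way to turn that expert feedback into a training signal is a “right for the right reasons” style penalty: the usual classification loss is combined with a term that punishes the model for attributing its decision to input regions an expert has marked as irrelevant. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch; the model, data, and expert masks are placeholders, not the TU Darmstadt implementation.

```python
# Minimal sketch of an XIL-style correction: cross-entropy plus a penalty on
# input gradients inside regions an expert has flagged as irrelevant.
# Model, inputs, labels, and masks are hypothetical placeholders.
import torch
import torch.nn.functional as F

def xil_loss(model, x, y, irrelevant_mask, lam=10.0):
    """Classification loss plus a "wrong reason" penalty.

    irrelevant_mask: same shape as x, 1 where the expert says the model
    should NOT be looking (e.g., a copyright logo), 0 elsewhere.
    """
    x = x.clone().requires_grad_(True)
    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Input gradients of the summed log-probabilities serve as a simple explanation proxy.
    grads = torch.autograd.grad(
        F.log_softmax(logits, dim=1).sum(), x, create_graph=True
    )[0]

    # Penalize explanation mass that falls in expert-marked irrelevant regions.
    wrong_reason = (irrelevant_mask * grads).pow(2).sum()
    return ce + lam * wrong_reason
```

Training against a combined loss of this kind nudges the model toward the features the expert considers valid, which is the sort of correction stage described in the sugar-beet example that follows.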
The Federal Ministry of Food and Agriculture (BMEL), along with a research team at TU Darmstadt, used the approach to help spot Cercospora leaf spot, a harmful disease that affects sugar beets worldwide. At first, the AI based its predictions on portions of the hyperspectral data that were not relevant for identifying the pathogen: the predictions were highly accurate, but the model was focusing on the wrong characteristics.
The team then implemented a correction stage using XIL. Although the detection rate decreased, the model ultimately drew more accurate conclusions because they were based on the right features. Continually fine-tuning AI in this way leads to more reliable predictions over the long term.
Learning to Trust AI
Building trust in AI systems requires a high degree of interaction and feedback. Although AI can learn from datasets, humans are needed to validate AI and machine learning processes. Humans and AI must work together to establish a higher degree of accuracy and trust.
Bitvore uses massive amounts of unstructured data to create AI-ready datasets. Our specialized technology helps create clean, normalized, business-centric data that eliminates tedious and repetitive tasks and improves the decision-making abilities of business leaders.
For additional information on how Bitvore can improve your business efficiencies, check out our latest white paper for more details: Using Sentiment Analysis on Unstructured Data to Identify Emerging Risk.