Limiting the Impact of AI Systems On Critical Decision Making


How do you trust AI systems? The answer is that, right now, the state of AI isn't advanced enough for humans to trust these solutions fully. Human approval and authorization should always be required before an AI makes a decision that can affect a human life. In addition, AI systems should provide transparency and explicability alongside their primary functions.

Keeping Humans in the Loop


Right now, the state of the art in artificial intelligence is still what's known as "Weak" AI. Weak AI refers to solutions designed to perform a single task, such as image recognition or fraud detection. In some cases, these tools can only perform their jobs roughly as well as a person, and their advantage lies mostly in the fact that they don't get bored or tired.


In addition, there are some AI tools, such as predictive analytics solutions, that may or may not perform better than a human. AI analytics systems have access to far larger amounts of data: they can incorporate inputs from tens of thousands of variables to make predictions, and they can do this faster than any human analyst.


In both cases, human insight should be part of the decision loop for AI processes. For example, imagine a fraud-detection AI at an insurance company. Every case of potential fraud is serious, but if the AI is allowed to dismiss potentially fraudulent claims automatically, it may end up penalizing people who are in legitimate need of assistance. Humans need to be in the loop to review potentially fraudulent claims and weed out the false positives.
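
To make this concrete, here is a minimal sketch in Python of what such a review loop might look like. The thresholds, claim IDs, and ReviewQueue structure are hypothetical illustrations rather than any insurer's actual system; the key point is that the model can approve a claim on its own but never deny one.

```python
# A minimal sketch of a human-in-the-loop review step for a fraud model.
# The model scores, thresholds, and review queue are hypothetical
# illustrations, not any insurer's actual implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    claim_id: str
    fraud_score: float  # probability of fraud produced by the model, 0.0-1.0


@dataclass
class ReviewQueue:
    pending: List[Claim] = field(default_factory=list)

    def add(self, claim: Claim) -> None:
        self.pending.append(claim)


AUTO_APPROVE_BELOW = 0.10  # assumed threshold: very low scores pay out automatically


def route_claim(claim: Claim, queue: ReviewQueue) -> str:
    """Route a scored claim. The AI never denies a claim on its own."""
    if claim.fraud_score < AUTO_APPROVE_BELOW:
        return "auto-approved"
    queue.add(claim)  # everything else goes to a human adjuster for the final call
    return "sent to human review"


if __name__ == "__main__":
    queue = ReviewQueue()
    for c in [Claim("A-101", 0.03), Claim("A-102", 0.72), Claim("A-103", 0.15)]:
        print(c.claim_id, "->", route_claim(c, queue))
    print("Claims awaiting human review:", [c.claim_id for c in queue.pending])
```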


Similarly, an AI's predictions can be more accurate than a human data analyst's, but only under certain conditions. Many AI models are trained by correlating historical data with present-day outcomes; for example, historical stock market data may be used to predict future stock movements. If that historical data contains a bias, then the model's future predictions will reflect that bias.
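
To see how this happens, consider a toy, fully synthetic example in Python. The claimant groups, claim amounts, and flag rates below are invented purely to make the effect visible; the model simply learns the disparity that was baked into its training labels.

```python
# A toy, fully synthetic illustration of how bias in historical records
# carries over into a model's predictions. This is not real claims data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)          # two claimant groups
amount = rng.normal(5_000, 1_000, size=n)   # identical claim-amount distributions

# Historical labels: the same amount-based rule for everyone, except group 1
# was also flagged as fraudulent 20% of the time regardless of the claim.
flagged = (amount > 6_500).astype(int)
extra = (group == 1) & (rng.random(n) < 0.2)
flagged[extra] = 1

# Train on the biased history, with group membership available as a feature.
X = np.column_stack([amount, group])
model = LogisticRegression(max_iter=1000).fit(X, flagged)
proba = model.predict_proba(X)[:, 1]

for g in (0, 1):
    print(f"group {g}: historical flag rate {flagged[group == g].mean():.2f}, "
          f"mean predicted fraud probability {proba[group == g].mean():.2f}")
```

Running this shows the model assigning a noticeably higher average fraud probability to the group that was over-flagged in the historical records, even though the underlying claim amounts are identical.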


Therefore, human data analysts should always double-check AI systems' predictions, but there's a catch. Because AI systems make predictions based on so many variables, it can be challenging to understand how a system arrived at a particular decision. AI systems therefore need mechanisms that let analysts understand why a decision was made.
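
One widely used approach is to report how strongly each input variable drives a model's predictions. The sketch below uses scikit-learn's permutation importance on synthetic data; the feature names are invented for illustration, and real systems may rely on other techniques, such as SHAP values or surrogate models.

```python
# Measure how much each input variable drives a model's predictions by
# shuffling one feature at a time and observing the drop in accuracy.
# The dataset is synthetic and the feature names are made up.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["claim_amount", "days_to_file", "prior_claims",
                 "policy_age", "region_code"]
X, y = make_classification(n_samples=2_000, n_features=5, n_informative=3,
                           random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A large score means the model relies heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name:>14}: {score:.3f}")
```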


Providing Transparency and Explicability in an AI Context


Offering transparency in an AI context is different from offering it in other forms of computer programming. Ordinary programs are written as explicit code, but AI models are generated through constant iteration over data. If an ordinary program develops a bug, you can go back into the code and fix it. The only way to fix a bug in an AI model is to keep iterating until the algorithm generates the right conclusions.


If you're a consumer of an AI model, you may not know that your model is generating bad results—or if you do know, your product probably can't tell you what's wrong or give you ways to fix it. Your only recourse may be to find a new model.


Some help is coming. The EU's High-Level Expert Group on AI has offered guidelines for creating trustworthy artificial intelligence. Although these guidelines aren't yet law, they are available in the form of an assessment tool that both users and developers can use to understand how their AI systems can affect and uphold others' rights.


Meanwhile, in the US, NIST started and then paused work on an ethics tool for AI; that work is being continued by the ATARC AI Working Group. This group scores AI tools on five metrics: model versioning, identification of data sources, methods of data selection, reduction of bias, and algorithm explicability. It then presents the results in the form of a radar chart.
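
For illustration, a radar chart over five such scores can be drawn in a few lines of Python with matplotlib. The scores below are invented; only the metric names follow the list above.

```python
# A minimal sketch of presenting five assessment scores as a radar chart.
# The scores are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

metrics = ["Model versioning", "Data sources", "Data selection",
           "Bias reduction", "Explicability"]
scores = [4, 3, 5, 2, 3]  # hypothetical scores on a 1-5 scale

# One axis per metric; repeat the first point to close the polygon.
angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
angles += angles[:1]
values = scores + scores[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(metrics)
ax.set_ylim(0, 5)
ax.set_title("Hypothetical AI assessment scores")
plt.show()
```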


Although these tools are in their infancy, they give both consumers and developers an important way to gain insight into the ethics of their AI systems, and more is yet to come.


Adding Ethics to Investing with Bitvore


Here at Bitvore, we know that ethics in investing begins with the data you use. That's why we're pleased to announce the availability of a new Cellenus Environmental, Social and Governance (ESG) dataset. This data will help users ensure that their investments are more sustainable and socially conscious, using a scoring mechanism that applies AI to more than 60,000 high-quality unstructured data sources.


Download our latest case study to learn how Bitvore Cellenus identified emerging risks for clients of Commercial Insurance, Workers Comp and Employee Benefits solutions. 
