What’s the Line for AI?

Artificial intelligence can reshape our lives in countless ways. But it's important to remember that AI is just a tool. Like any tool, it can be used for constructive or destructive purposes. And like any tool, the responsibility for choosing the former over the latter rests with the user.

But there's one big difference between AI and other tools: AI can be designed to act autonomously, and this raises important concerns. How much autonomy should AI be given? What decisions should stay with actual people? What are the lines we shouldn't cross? 

As a recent article from the Swiss Federal Institute of Technology in Lausanne (EPFL) pointed out, the EU is already moving to set guidelines for AI use. In 2020, the European Parliament adopted a regulatory proposal that defines legal responsibilities for those building and using AI, and calls for strict liability laws for AI that causes harm. However, we're still far from an international consensus and legal framework around the limits of AI.

Ethical Limits of Digital Technology

While much of the discussion aims to regulate future AI technology, some believe things have already gone too far. Computer Science Professor Stuart Russell of the University of California, Berkeley argues that apps are already making users "the subject of digital technology, rather than the owners of it."

"For example, there is already AI from 50 different corporate representatives sitting in your pocket stealing your information, and your money, as fast as it can, and there's nobody in your phone who actually works for you. Could we rearrange that so that the software in your phone actually works for you and negotiates with these other entities to keep all of your data private?"

Others are concerned with AI taking on leadership roles. At a recent EPFL conference on governance in digital technology, Associate Professor Bryan Ford, from the institute's Decentralized and Distributed Systems Laboratory, voiced concern about the possibility that AI could replace humans as public policy decision-makers. "Matters of policy in governing humans must remain a domain reserved strictly for humans," said the professor.

Human Decision-Makers Shouldn't Be Replaced

We've always taken the position that AI should augment human abilities, not replace them. Current AI is excellent at analyzing data, but it shouldn't operate autonomously: human beings can spot errors, draw on experience and intuition, and weigh a huge range of factors the AI has never been trained to consider.

This isn't primarily an ethical perspective; it's our practical take as developers. There's a big difference between crunching data to make good predictions and inferences within a particular context (which AI does very well) and replacing human decision-makers (which current AI can't do).

However, ethically we agree with Associate Professor Ford. Technology should empower us to make better decisions and live better lives, not replace us. No matter how good AI and ML get, humans need to stay in the loop and be held to account for their decisions, whether the software recommended those decisions or not. 
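
To make that principle concrete, here's a minimal sketch in Python of what keeping a human in the loop can look like in software. The names and fields below (Recommendation, decide, audit_log) are hypothetical illustrations, not Bitvore Cellenus code: the model only ever recommends, while a named person makes the final call, and that call is logged.

# A hypothetical human-in-the-loop gate: software recommends, a person decides.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    action: str        # what the model suggests, e.g. "escalate risk alert"
    confidence: float  # the model's own confidence score, 0.0-1.0
    rationale: str     # the signals behind the suggestion

def decide(rec: Recommendation, reviewer: str, approved: bool, audit_log: list) -> bool:
    """Record who made the final call; the model never acts on its own."""
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recommendation": rec.action,
        "model_confidence": rec.confidence,
        "reviewer": reviewer,        # a human is always on record
        "final_decision": approved,  # and the human's call is what stands
    })
    return approved

# Usage: the software recommends; the human decides and remains accountable.
log: list = []
rec = Recommendation("escalate risk alert", 0.87, "spike in negative news volume")
decide(rec, reviewer="j.smith", approved=True, audit_log=log)

However the gate is built, the point is the audit trail: the recommendation and the human decision are recorded side by side, so accountability never shifts to the software.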

What do you think about the role of AI in policy decisions? Let us know @Bitvore.

Download our latest case study to learn how Bitvore Cellenus identified emerging risks for clients of Commercial Insurance, Workers' Comp, and Employee Benefits solutions.
