European Union lawmakers struck a deal last Friday on one of the world’s first comprehensive artificial intelligence laws. Called the AI Act, the landmark legislation sets up a regulatory framework to promote the development of AI while seeking to address the risks associated with this rapidly evolving technology.
Recognising the potential threat to citizens’ rights and democracy posed by certain applications of AI, the co-legislators agreed to prohibit:
- Biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race).
- Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
- Emotion recognition in the workplace and educational institutions.
- Social scoring based on social behaviour or personal characteristics.
- AI systems that manipulate human behaviour to circumvent people’s free will.
- AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).
Rules for Remote Biometric Identification Systems
A series of safeguards and narrow exceptions was agreed for the use of remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes, subject to prior judicial authorisation and limited to strictly defined lists of crimes. “Post-remote” RBI would be used strictly in the targeted search of a person convicted or suspected of having committed a serious crime, while “real-time” RBI would have to comply with strict conditions, with its use limited in time and location, for the purposes of:
- Targeted searches of victims (abduction, trafficking, sexual exploitation).
- Prevention of a specific and present terrorist threat.
- The localisation or identification of a person suspected of having committed one of the specific crimes mentioned in the regulation (e.g. terrorism, trafficking, sexual exploitation, murder, kidnapping, rape, armed robbery, participation in a criminal organisation, environmental crime).
In a news conference, Roberta Metsola, the president of the European Parliament, called the law “a balanced and human-centered approach” that will “no doubt be setting the global standard for years to come.”
For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law), clear obligations were agreed. MEPs successfully managed to include a mandatory fundamental rights impact assessment, among other requirements, applicable also to the insurance and banking sectors. AI systems used to influence the outcome of elections and voter behaviour are also classified as high-risk.
European citizens will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.
Rules for General-Purpose AI Systems
To account for the wide range of tasks AI systems can accomplish and the rapid expansion of their capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
Limited-risk systems, such as chatbots like OpenAI’s ChatGPT or technology that generates images, audio or video content, are subject to new transparency obligations under the law.
Violation Penalties
Penalties were set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI Act’s obligations, and €7.5 million or 1.5% for the supply of incorrect information.
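To see how the “whichever is higher” rule plays out, here is a minimal Python sketch of the tiered penalty calculation. The tier names and helper function are illustrative assumptions, not official tooling; the figures are the ones listed above.

```python
# Illustrative sketch of the AI Act's tiered penalty structure described above.
# The maximum fine is the higher of a fixed amount and a percentage of the
# company's global annual turnover in the previous financial year.
# Tier names and this helper are hypothetical, not part of any official tooling.

PENALTY_TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),   # banned AI applications
    "act_obligations":       (15_000_000, 0.03),   # other AI Act obligations
    "incorrect_information": (7_500_000,  0.015),  # supplying incorrect information
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine in euros: the higher of the fixed amount
    and the turnover-based percentage for the given violation tier."""
    fixed_amount, turnover_pct = PENALTY_TIERS[violation]
    return max(fixed_amount, turnover_pct * global_annual_turnover_eur)

# Example: a company with €1 billion global turnover violating a prohibition
# faces up to max(€35m, 7% of €1bn) = €70 million.
print(f"{max_fine('prohibited_practices', 1_000_000_000):,.0f}")  # 70,000,000
```

Note that for large firms the turnover-based percentage quickly dominates the fixed amount, which is why the fines are described relative to global turnover in the first place.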
The Regulation will apply to both developers and deployers of AI systems. This distinction matters, as most organisations will likely fall into the latter category. Some deployers of high-risk systems, such as those providing public services, will also be subject to fundamental rights impact assessments. Controls will have to be in place for both roles, with developers typically the more strictly regulated of the two.
What's Next?
Now that a political agreement has been reached, the final text is expected some time in 2024; the Regulation has not yet “passed” as such. The agreed text will have to be formally adopted by both Parliament and Council to become EU law, with Parliament’s Internal Market and Civil Liberties committees voting on the agreement in a forthcoming meeting. The law is then expected to come fully into effect in 2026. Because it is a regulation rather than a directive, it will apply directly across member states, with significant effects on how AI operates in Europe and, no doubt, globally.
While the AI Act was celebrated as a huge victory in Brussels, many investors and AI founders say the new rules could significantly hinder the progress of smaller European startups, potentially pushing Europe further behind in the global AI race.
Miss Nothing With Bitvore's Automated Intelligence
Trusted by more than 70 of the world’s top financial institutions, Bitvore provides the precision intelligence capabilities top firms need to counter risks and drive efficiencies with the power of data-driven decision making.
Uncover rich streams of risk and ESG insights from unstructured data that act as the perfect complement to the internal data and insights your firm is already generating. Our artificial intelligence and machine learning-powered system lets you see further, respond faster, and capitalize more effectively.
To learn how the Bitvore solutions can help your organization, contact info@bitvore.com or visit www.bitvore.com.