European Union (EU) lawmakers reached a provisional agreement on draft legislation to govern the use of AI, with authorities set to finalise the details in the coming weeks.
The European Council stated it had agreed with the European Parliament and the European Commission (EC) on the landmark draft AI Act following three days of negotiations aimed at ensuring systems used in the bloc adhere to fundamental human rights.
It stated the rules would not apply to AI deployed for military and defence purposes, research, or non-professional use by consumers.
The authorities settled on a framework under which AI systems are classified as posing minimal, high or unacceptable risk.
Technologies such as spam filters and recommendation systems will be classed as minimal risk, while AI used in critical infrastructure, medical devices, work or educational settings, border control, the administration of justice and law enforcement is rated high-risk.
AI systems considered “a clear threat” to the public’s fundamental rights will be banned as posing an unacceptable risk, including systems enabling predictive policing, social scoring and manipulation of human behaviour.
Some uses of biometric identification or monitoring will also be prohibited, with “narrow exceptions” for law enforcement use in public spaces.
The EC noted there would be “additional binding obligations” for the most-powerful general purpose AI models “that could pose systemic risks”.
Fines
Various levels of financial penalties are planned for companies which fail to comply with the proposed rules: €35 million or 7 per cent of global annual revenue “for violations of banned AI applications”; €15 million or 3 per cent for breaching “other obligations”; and €7.5 million or 1.5 per cent “for supplying incorrect information”.
The authorities pledged “more proportionate” caps for SMEs and start-ups.
Cecilia Bonefeld-Dahl, director general of technology industry group DigitalEurope, maintained the organisation’s critical line on the draft, asking “at what cost” the agreement came.
She argued the proposal would “take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers”.
The law is unlikely to come into force until at least 2025.