Countries including the UK, US and European Union (EU) member states signed the first legally-binding international convention to ensure the use and development of AI systems align with human rights and democratic values, complementing existing frameworks such as the EU AI Act and the Bletchley Declaration.

The Council of Europe stated that its decision-making Committee of Ministers adopted the framework convention in May, around two years after work on a draft began and five years after it first explored the feasibility of the move.

It said its 46 member states, the European Union and 11 non-member states including the US, Australia, Canada, Argentina, Japan and Israel were involved in the drafting process, while representatives of academia, the private sector and civil society “contributed as observers”.

Apart from the US, UK and EU, signatories of the convention so far include Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and Israel, though “countries from all over the world will be eligible to join”, the Council of Europe added.

Council of Europe Secretary General Marija Pejcinovic Buric described the framework as “an open treaty with a potentially global reach”.

The convention will cover risk and impact assessments “in respect of actual and potential impacts on human rights, democracy and the rule of law”, the establishment of mitigation measures, and the possibility for regulators to introduce bans or moratoria on certain AI applications or systems.

The treaty will enter into force three months after five signatories, including “at least three Council of Europe member states”, have ratified it.

Peter Kyle, UK Secretary of State for Science, Innovation and Technology, said the treaty will “further enhance protections for human rights, rule of law and democracy, strengthening our own domestic approach to the technology while furthering the global cause of safe, secure and responsible AI”.

Separately, Australia today (5 September) unveiled a set of guidelines as a new AI safety standard, covering measures such as “human oversight” in deployments and the ability to challenge automated decision-making processes.