The joint AI treaty is the next global step towards aligning national goals and values related to artificial intelligence, human rights and democracy.
The member States of the Council of Europe and other countries, including the US, Australia, Canada, Israel, Japan, and Argentina, are set to sign the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first legally binding global treaty ensuring that the use of AI systems is fully consistent with human rights, democracy and the rule of law.
The treaty was opened for signature during a conference of Council of Europe Ministers of Justice in Vilnius, Lithuania, on Sept. 5. On behalf of the European Union, the Convention was signed by European Commission Vice-President for Values and Transparency, Věra Jourová. After the signing, the Convention must still be concluded by the Council of the European Union and receive the consent of the European Parliament.
Besides the EU, the Framework Convention has already been signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the United Kingdom, Israel, and the United States of America.
The main principles of the new treaty aim to ensure that activities within the lifecycle of AI systems are compatible with human rights, democracy and the rule of law. The Convention provides for strengthened documentation, accountability and remedies, a risk-based approach, support for safe innovation through regulatory sandboxes, and oversight mechanisms for the supervision of AI activities.
It is consistent with the EU AI Act, which came into force on Aug. 1. The latter governs the development and deployment of AI models, with a particular focus on high-capability tools backed by large amounts of computing power. One of the first stages of the EU AI Act is the prohibition of AI applications that exploit individual vulnerabilities or perform untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases without consent.
In addition, companies dealing with AI in the EU will have to meet strict compliance obligations covering risk management, data governance, transparency, human oversight and post-market monitoring. To do that, they will likely need to undergo thorough audits of their existing AI systems.
The EU was one of the first jurisdictions in the world to draft comprehensive regulation of AI technology. In the US, Congress has not yet adopted a nationwide framework for AI regulation, though individual legislative bodies are working on their own measures. At the same time, the US urged the UN General Assembly in March to launch international regulatory and governance mechanisms.
According to the announcement, the Convention's principles will apply directly to AI systems used by public authorities or by private actors acting on their behalf. Private sector players must also address the risks and impacts of AI systems in line with the Convention, but they have the option to do so through alternative, appropriate measures.
Developing a code of ethics and basic security rules is especially important for human-like AI tools, but it matters for all other types of this technology as well. A clear ethical framework can help businesses and regulators make responsible decisions about their AI tools.
Nina Bobro
Nina is passionate about financial technologies and environmental issues, reporting on the industry news and the most exciting projects that build their offerings around the intersection of fintech and sustainability.