The International Chamber of Commerce (ICC) has released a four-pillar narrative on business considerations for the trustworthy, responsible and ethical development of artificial intelligence (AI).
Artificial intelligence is technology that enables machines to simulate human intelligence, allowing them to perform tasks typically associated with it: problem-solving, understanding natural language, decision-making and learning. It has the potential to improve productivity and augment creativity.
Artificial intelligence is revolutionising global industries and shaping economies and societies; a robust governance model is therefore essential to harness its benefits while mitigating its risks.
The International Chamber of Commerce outlines four pillars of global AI governance from the perspective of global business:
1. Principles and guidelines
2. Regulation
3. Technical standards
4. Industry self-regulation
Each pillar is essential to promoting AI development that is safe, trustworthy, responsible and ethical.
Principles and Guidelines
Guiding principles for responsible AI development, deployment and use provide a baseline framework for ethical and sustainable governance.
The OECD’s 2019 Principles on Trustworthy AI, revised in 2024 and endorsed by 47 countries, exemplify these efforts and emphasise cooperation “within and across jurisdictions to promote interoperable governance and policy environments”.
Similarly, the UN General Assembly’s 2024 resolution and UNESCO’s 2021 Recommendation on the Ethics of Artificial Intelligence underscore the importance of human rights and ethical standards in AI at a global level.
Globally agreed principles and guidelines for responsible AI are necessary to provide a comprehensive framework for ethical and sustainable AI governance, one that spans multilateral and regional approaches and avoids fragmented, duplicative governance solutions.
Regulation
Recent developments in Europe, in particular the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law and the EU’s AI Act, represent a significant milestone in the governance and regulation of AI. The EU’s approach, grounded in ethics and human rights, sets a precedent for global AI regulation based on a risk-based classification system.
National efforts are also underway in several countries to devise and implement AI regulatory frameworks, aiming to promote responsible AI development, deployment and use while striking a balance between boosting investment and innovation and protecting citizens from high-risk systems. Such initiatives include, among others, Brazil’s proposed AI Bill, Canada’s AI and Data Act, China’s Scientific and Technological Ethics Regulation, India’s proposed Digital India Act, South Korea’s AI Act and the United Arab Emirates’ Council for AI and Blockchain.
The regulation pillar, as outlined by the ICC, focuses on ensuring that AI technologies are developed and deployed in compliance with legal frameworks and under the oversight of regulatory bodies.
Technical Standards
International standards play a vital role in ensuring consistency in the practical implementation of global, regional and national AI policies and laws. These formal guidelines and benchmarks define how AI systems should be designed, developed, tested and maintained to ensure their safety, effectiveness and adherence to ethical standards. For instance, numerous upcoming AI regulations require AI system providers to put in place a risk management system.
Bodies such as the US National Institute of Standards and Technology (NIST), with its AI Risk Management Framework, and ISO/IEC, CEN-CENELEC and ITU are actively developing technical standards to advance consistency in how impact assessments are conducted. For instance, ISO/IEC 42001 provides an overarching framework for AI management systems, while ISO/IEC DIS 42005, currently in development, details the procedures an organisation should follow when conducting AI system impact assessments.
Industry Self-Regulation
Effective governance of AI requires international cooperation. A cohesive framework for such cooperation should prioritise convergence on governance standards to prevent fragmentation of the policy landscape. An internationally interoperable approach is needed so that industry standards, domestic regulation and global governance can come together and reinforce one another.
A risk-based regulatory approach that differentiates between high- and low-risk scenarios provides focus and protection against harm where it is most needed, while ensuring that regulations are not overly prescriptive and do not hamper innovation. For high-risk AI systems, developers and deployers should be required to put in place measures such as a risk management system, human oversight, data governance and security, technical documentation, record keeping and transparency.
There is a need for international collaboration to monitor for, and respond to, globally significant safety and security risks, building on the work begun at the November 2023 UK AI Safety Summit and continued at the May 2024 AI Seoul Summit.
All information in this article is taken from the International Chamber of Commerce’s ‘Overarching Narrative on Artificial Intelligence’.
For more information: iccwbo.org/global-insights/digital-economy/icc-overarching-narrative-on-artificial-intelligence/#block-accordion-7