International AI Governance Framework: The Importance of G7-G20 Synergy
Post Date
19 May, 2025
Author
IPAG
Key Points
AI is expected to contribute $15.7 trillion to the global economy by 2030 (PwC 2017).
Major risks include algorithmic bias, misinformation, cyber threats, and autonomous weapons.
The G7 and G20 must collaborate to establish an international AI governance framework.
Ethical AI standards must be implemented to ensure fairness, transparency, and accountability.
AI-driven cybersecurity risks and misinformation must be mitigated through international cooperation.
Regulations on AI’s use in warfare and surveillance are essential for global security.
AI research, data sharing, and capacity-building initiatives should promote equitable AI access.
A unified governance strategy will balance AI innovation, security, and fairness globally.
Introduction: The Need for International AI and Data Governance
Artificial intelligence (AI) is set to contribute $15.7 trillion to the global economy by 2030, driving innovation across industries such as health care, finance, and defense. However, its rapid adoption also presents ethical, security, and socio-economic challenges. Risks such as algorithmic bias, AI-driven misinformation, and cyber threats underscore the need for a cohesive international AI governance framework. According to the UN’s International Telecommunication Union (ITU), around 2.9 billion people globally – conservatively one-third of the world’s population – remain without internet access, facing significant barriers to effective participation in the digital economy and society.
The G7, as a leader in AI development, must set regulatory standards, while the G20 ensures inclusivity, bringing both developed and emerging economies into AI governance discussions. Multilateral cooperation is essential to prevent AI-related discrimination, security vulnerabilities, and economic inequalities. Studies highlight risks like biased facial recognition systems disproportionately misidentifying people of color and AI-generated disinformation influencing elections. Additionally, autonomous weapons and AI-powered cyberattacks pose threats to global security.
To address these concerns, the G7 and G20 must align policies to regulate AI responsibly. Key recommendations include establishing global AI ethics protocols, strengthening AI security treaties, enforcing regulatory oversight, and ensuring developing nations have equitable AI access. A coordinated G7-G20 effort will help maximize AI’s benefits while mitigating its risks, ensuring AI serves as a force for economic growth, security, and social well-being.
AI Governance: Current Landscape and Challenges
The current AI governance landscape is highly fragmented. Nations such as the U.S., China, and the EU have developed independent AI regulations, but there is no globally accepted framework. Further, regulations that emerge from Global North institutions tend not to be appropriate for Global South nations, and in fact, threaten to deepen pre-existing inequalities. The EU’s AI Act aims to regulate high-risk AI applications, while China has imposed stringent rules on AI-generated content (European Commission 2023). The absence of standardized global governance leads to inconsistencies in AI regulation, creating challenges for multinational corporations and cross-border AI applications.
The G7 has played a proactive role in setting AI standards. The OECD AI Principles, endorsed by the G7, emphasize transparency, accountability, and human rights protection in AI deployment. However, these principles are not legally binding, limiting their effectiveness. The G20, on the other hand, encompasses diverse economies, including emerging AI hubs like India and Brazil. It provides a broader platform for inclusive governance, ensuring that AI benefits are distributed equitably across developed and developing nations.
Unregulated AI presents several risks. Mass surveillance powered by AI threatens privacy rights, as seen in China’s extensive use of facial recognition for social credit scoring. The use of AI in military applications, such as autonomous drones, raises ethical concerns about human oversight in warfare. Furthermore, AI-driven economic disparities are growing, with developing nations lacking access to advanced AI research and infrastructure. Addressing these challenges requires a cohesive governance approach that balances innovation with ethical considerations, security measures, and equitable and meaningful access to AI resources. Change can only be achieved through responsive policy making that is sensitive to the most vulnerable and accountable to all.
Case Studies: How Nations Around the World Are Adopting AI Regulations
Case 01: To promote international AI governance, the Trustworthy & Responsible AI Resource Center, launched in March 2023, has facilitated cross-border AI policy coordination. Jurisdictions such as Japan and the European Union (European Commission 2024) have supported translations of the NIST AI RMF into Japanese and Arabic, ensuring their AI industries align with internationally recognized best practices (Wiz 2024). This has improved global AI policy standardization, reducing regulatory fragmentation and ensuring AI safety, fairness, and accountability across different legal jurisdictions.
Case 02: The Government of Canada has launched the AI Strategy for the Federal Public Service (2025-2027) to ensure responsible AI adoption that enhances public services while maintaining ethical, secure, and transparent governance. AI has long been used in government operations, but rapid advancements—especially in generative AI—necessitate updated policies. This strategy prioritizes human-centered, collaborative, and responsible AI integration across government agencies, ensuring AI aligns with Canada’s values and national security interests. To mitigate risks like bias, security threats, and public distrust, the strategy establishes governance frameworks, training initiatives, and interdepartmental collaboration, setting a precedent for ethical AI adoption in public administration globally.
Synergizing G7-G20 Efforts for Effective AI Governance and Bringing Equitable Access to AI to the Developing World
Artificial Intelligence (AI) has become a cornerstone of modern technological advancements, influencing everything from economic growth to national security. However, the lack of a cohesive global governance framework poses significant challenges in ensuring that AI development remains ethical, transparent, and secure. As a group of the world’s most technologically advanced economies, the G7 has both the capability and the responsibility to lead international AI governance efforts. The G7’s proactive stance in AI governance, including its endorsement of the OECD AI Principles, positions it as a key architect of global AI standards. However, to maximize impact, the G7 must strengthen its leadership by driving policy coordination, setting enforceable AI ethics and security standards, and guiding AI regulation at the global level.
Given the cross-border nature of AI, the G7 should seek strategic engagement with the broader international community, particularly G20 nations, to ensure its governance frameworks are effectively implemented worldwide. The G7 can provide technical expertise, regulatory guidance, and capacity-building support, while leveraging the G20’s broader representation to ensure inclusivity in AI governance. This approach would create a more structured and effective AI governance system that balances innovation, security, and fairness at a global scale.
To achieve equitable AI access, developing nations must be better integrated into the global AI governance framework. This requires capacity building, affordable AI infrastructure, and financial incentives to ensure they are not left behind in AI advancements. The G20 can be a game-changer by leveraging its diverse membership to drive AI standardization, public-private partnerships, and regulatory inclusivity. Through initiatives led by the Global Partnership on Artificial Intelligence (GPAI), developing nations can access AI training, infrastructure, and equitable decision-making. Additionally, international financial institutions such as the World Bank and IMF should provide economic incentives for countries adopting ethical AI regulations. By bridging AI expertise across borders, facilitating investments in AI hubs, and establishing global AI standards, a unified G7-G20 effort can ensure that AI benefits are distributed equitably, preventing technological disparities between nations.
a) G7-led global ethical framework for AI deployment
The G7 has already taken steps in shaping AI ethics through initiatives such as the OECD AI Principles, which emphasize fairness, transparency, and accountability. However, these principles need to be reinforced with concrete regulatory mechanisms that ensure compliance. The G7 needs to move beyond voluntary guidelines and develop a legally enforceable ethical framework that can be adopted by member states and serve as a global benchmark for AI governance.
To enhance its impact, the G7 should work with the G20 to encourage wider adoption of these ethical standards. This can be achieved by providing regulatory templates, funding AI governance capacity-building initiatives, and engaging with emerging economies to develop AI policies that align with international human rights and ethical standards. Ensuring a common set of AI ethical principles will facilitate smoother cross-border AI collaborations, enabling international businesses to operate within a consistent regulatory environment.
b) Strengthening AI security protocols and addressing AI-powered cyber threats
AI-driven cyber threats, including automated hacking, misinformation campaigns, and AI-enabled cyber espionage, pose serious risks to national security and global stability. The G7’s advanced cybersecurity infrastructure and expertise make it well-positioned to spearhead international AI security initiatives. The G7 can take the lead in developing a comprehensive AI security framework that mandates minimum cybersecurity standards for AI systems, ensuring safeguards against adversarial attacks and unauthorized AI-driven cyber threats.
c) Regulating dual-use AI in warfare and surveillance
The Political Declaration on Responsible Military Use of AI, led by the U.S., and the Responsible AI in the Military Domain (REAIM) initiative, spearheaded by the Netherlands, Republic of Korea, Spain, and others, emphasize the need for international cooperation to ensure AI remains under human control in warfare and surveillance. These initiatives seek to establish ethical guidelines and legal frameworks for military AI applications, particularly in autonomous weapons and surveillance technologies. While the G7 plays a key role in shaping regulations, broader engagement with G20 nations is essential to prevent an AI arms race and uphold global security and human rights.
The military applications of AI present significant ethical and security concerns, particularly in autonomous weapons and AI-driven surveillance systems. The G7 has been at the forefront of discussions regarding the regulation of lethal autonomous weapons systems (LAWS), yet more action is needed to prevent an AI-driven arms race. The G7 should establish clear policies prohibiting the use of AI in fully autonomous lethal systems and advocate for a legally binding international treaty restricting AI’s role in warfare.
While the G7 can take a leading role in defining these regulations, broader consensus is required for effective implementation. Many G20 nations, including China and Russia, have been rapidly advancing AI-driven military capabilities. Engaging these countries through diplomatic negotiations, confidence-building measures, and arms control agreements will be essential in ensuring that AI remains under human control in critical military decisions. Similarly, G7-G20 collaboration will be necessary to regulate AI-driven surveillance technologies, ensuring they are used ethically and do not infringe upon civil liberties.
d) Expanding AI research, data sharing, and technology transfer
AI research and development (R&D) is currently concentrated in a handful of technologically advanced nations, many of which are G7 members. This concentration risks exacerbating global inequalities in AI capabilities. The world must democratize AI development by expanding research collaboration and data-sharing agreements with international partners.
To achieve this, a Global AI Research and Education Fund should be established to support AI research initiatives in emerging economies, fund AI ethics and safety training, and facilitate AI talent exchanges. Through collaboration and cooperation, the G7 and G20 can promote structured AI development programs that enable technology transfer and knowledge-sharing between technologically advanced and developing nations. Such initiatives will not only foster global AI inclusivity but also strengthen international AI cooperation and innovation.
Policy Recommendations for a Unified International AI Governance Framework
An international AI governance framework must be built on strategic collaboration between the G7 and G20, leveraging their respective strengths to ensure ethical AI deployment, innovation, and security. The United States, as a G7 leader in AI, drives breakthroughs through companies like OpenAI, Google, and numerous startups, setting industry benchmarks. However, the current U.S. administration has shifted towards a deregulatory stance, emphasizing AI innovation over regulatory oversight. This was marked by the rescission of Executive Order 14110, which had previously established comprehensive AI governance, signaling a shift towards minimal federal intervention. Instead, the administration prioritizes reducing regulatory constraints to maintain U.S. leadership in AI development.
Meanwhile, China, a G20 powerhouse, has made waves with DeepSeek, triggering a trillion-dollar market disruption and demonstrating its rapid strides in AI research and commercialization. South Korea, another key G20 player, is revolutionizing AI data centers, optimizing energy efficiency, and scaling AI infrastructure to unprecedented levels. Australia, for its part, is cementing its position as the front-runner in AI research in the Asia-Pacific, ensuring that cutting-edge AI capabilities are developed and applied responsibly. Balancing regulatory approaches across these major players will be critical in shaping a globally coherent AI governance framework.
Bridging AI Expertise Across Borders
The development and deployment of artificial intelligence are not confined to any single nation or region. The AI landscape is evolving through the collective expertise of countries with diverse technological, economic, and regulatory strengths. The G7 and G20 must establish a framework that enables seamless collaboration across borders, ensuring that AI’s benefits are distributed equitably while also addressing potential risks.
The G7 countries, including the United States, Canada, France, Germany, Japan, Italy, and the UK, have long been at the forefront of AI development. The US leads AI research and commercialization through OpenAI, Google, Meta, and a thriving startup ecosystem, while Canada excels in AI ethics and reinforcement learning, particularly through institutions like the Vector Institute and Mila. France has emerged as an AI regulatory leader within the EU’s AI Act framework, and Germany spearheads industrial AI applications, integrating AI into advanced manufacturing, automation, and Industry 4.0.
On the G20 side, China has built an AI powerhouse, with companies like DeepSeek, Baidu, and Tencent driving AI’s rapid commercialization. India’s thriving AI startup ecosystem, centered in Bengaluru, mirrors the dynamism of Silicon Valley, fueled by the country’s highly skilled software engineers. South Korea is pioneering AI-driven infrastructure, focusing on energy-efficient AI data centers and AI-powered smart cities. Meanwhile, Australia leads AI research in the Asia-Pacific, investing in responsible AI development and shaping AI policies that balance innovation with ethics.
Developing Global AI Security Standards
As AI systems become more sophisticated, they are increasingly being exploited for cyber threats, misinformation, and geopolitical manipulation. AI-powered cyberattacks, including automated hacking, AI-driven espionage, and deepfake-enabled disinformation campaigns, pose risks to both national security and democratic integrity. The rise of adversarial AI attacks, where AI systems manipulate algorithms to bypass security controls, further exacerbates cybersecurity threats.
A unified global AI security framework is necessary to counteract these threats. The G7 nations, given their technological and cybersecurity leadership, must take the lead in setting AI security standards, while the G20’s broader global reach ensures these standards are adopted universally. A structured G7-G20 AI Security and Cyber Threat Response Task Force should be formed, co-led by the United States, Germany, China, and India, to:
Standardize cybersecurity protocols for AI-driven systems, ensuring robust defense mechanisms against AI-powered cyber threats.
Coordinate intelligence-sharing on AI-related cyber risks, helping nations collectively respond to AI-driven misinformation, hacking, and digital warfare.
Develop an AI Incident Response Framework, providing rapid response mechanisms to counteract AI-based cyberattacks and mitigate their impact on global infrastructure.
Regulating Dual-Use AI in Military and Surveillance
The G7 must work towards a unified stance to shape global norms on military AI and surveillance, particularly amid shifting international debates. While Canada and France have championed ethical AI principles, the United States’ recent shift toward a deregulatory approach could challenge consensus within the bloc. To maintain leadership, the G7 must find common ground by aligning AI governance strategies with ongoing global initiatives, such as the U.S.-led Political Declaration on Responsible Military Use of AI and the Responsible AI in the Military Domain (REAIM) initiative, spearheaded by the Netherlands, Republic of Korea, and Spain. This alignment would enable the G7 to lead broader international efforts while accommodating national security interests.
Given the rapid militarization of AI by key G20 nations, particularly China and Russia, the G7 must actively engage in diplomatic dialogues and confidence-building measures to prevent an uncontrolled AI arms race. A coordinated G7 effort should prioritize establishing an internationally binding treaty that:
Prohibits fully autonomous lethal weapons, ensuring human oversight remains central to military decision-making.
Mandates transparency in AI military applications, requiring nations to disclose AI integration in defense systems to deter aggressive and unethical AI deployment.
Regulates AI-driven surveillance technologies, ensuring adherence to international human rights standards and preventing misuse for political repression.
Strengthening GPAI for Inclusive International AI Governance
To ensure effective AI governance, efforts should be directed towards strengthening and expanding the Global Partnership on Artificial Intelligence (GPAI). As an existing multilateral initiative, GPAI already brings together governments, industry leaders, and researchers to promote responsible AI development. However, its global influence remains limited, and its role in regulatory enforcement needs to be significantly enhanced.
A. Expanding GPAI for Inclusive AI Governance
Currently, GPAI lacks broad global representation, with many developing countries underrepresented in AI governance discussions. To make GPAI more inclusive, the following measures should be taken:
Expand Membership: Encourage broader participation from developing nations, particularly from Africa, Latin America, and Southeast Asia, to ensure AI policies address global challenges and diverse technological landscapes.
Create Regional AI Hubs: Establish GPAI regional offices that provide localized AI policy support, capacity-building, and governance expertise tailored to different regions.
Ensure Equitable Decision-Making: Reform GPAI’s governance structure to include diverse voices in leadership roles, giving emerging economies more influence in shaping AI regulations.
B. GPAI as a Bridge Between Global AI Policies and National Laws
GPAI must play a stronger role in convincing governments to integrate AI regulations into their national legal frameworks as swiftly as possible. To achieve this, GPAI should:
Develop an AI Governance Framework for Adoption: Formulate a globally applicable AI regulatory template based on existing best practices, including the OECD AI Principles and the EU AI Act, while adapting it to local legal and economic contexts.
Encourage Regulatory Commitments: Member states should be required to report annually on their progress in implementing AI laws, with GPAI providing technical assistance to governments that lack regulatory expertise.
Leverage Economic Incentives: GPAI should work with international financial institutions, such as the World Bank and IMF, to tie AI governance commitments to economic aid and investment opportunities for developing nations.
Launch a Global AI Compliance Mechanism: Establish an AI Risk Assessment and Compliance System that helps governments evaluate ethical and security risks associated with AI systems before deployment. This system should include cross-border AI audits and an independent oversight board to ensure adherence to ethical standards.
By reinforcing GPAI’s authority, expanding its inclusivity, and making it a bridge between global AI principles and national regulations, the G7 and G20 can drive a coordinated, effective, and enforceable international AI governance strategy that balances innovation with responsibility.
Conclusion: The Need for Addressing Challenges in Policy Implementation
While a unified international AI governance framework is essential, several challenges must be addressed to ensure its effectiveness. Diverging national AI strategies, geopolitical tensions, and corporate resistance make consensus difficult. The U.S. deregulatory approach contrasts with the EU’s strict AI regulations, while China’s and Russia’s rapid militarization of AI raises concerns about non-compliance with global standards. Additionally, GPAI lacks enforcement power, and developing nations struggle with AI infrastructure gaps, limiting equitable participation. Overcoming these obstacles requires flexible governance models, diplomatic engagement, and economic incentives. By combining the G7’s technological leadership with the G20’s AI growth, AI’s benefits can be shared equitably while mitigating risks.
Author Biographies
Lead Author Syed Munir Khasru has influenced international policy discourse through his roles as co-chair of G20 and G7 task forces. He heads the international think tank IPAG and has led 15 policy briefs for G20 and G7 leadership summits.
His co-authors include Zane Dangor, G20 Sherpa for South Africa, who plays a crucial role in the G20 policy landscape under the South African Presidency; Dr. Alison Gillwald, Executive Director at Research ICT Africa and Professor at the University of Cape Town, who co-chairs the G20’s Task Force 02 in South Africa, focusing on digital policy issues; Gbenga Sesan, a member of the Civil Society Leadership Panel at the UN Internet Governance Forum (IGF); and Dr. Siphamandla Zondi, Director of the Institute for Pan-African Thought and Conversation (IPATC), who adds a pan-African perspective to the T20’s core group.
Co-Authors Alison Gillwald is the co-chair of the G20 Task Force, G20 South Africa; executive director of Research ICT Africa; and adjunct professor at the University of Cape Town. Gbenga Sesan is a member of the Civil Society Leadership Panel of the UN Internet Governance Forum and executive director of Paradigm Initiative. Siphamandla Zondi is a core group member of G20 South Africa and director of the Institute for Pan-African Thought and Conversation.