1. Historical Context

In 2024, the international community significantly accelerated its efforts to govern Artificial Intelligence. Following the inaugural Bletchley Park Summit (November 2023), world leaders reconvened to formalise safety standards through the Seoul Declaration.
- Key Agreement: The Bletchley Declaration is often cited as the foundational document, but its principles were expanded and operationally strengthened in the Seoul Declaration (May 2024).
- Signatories: More than 25 countries and jurisdictions, including the USA, UK, China, India, and the EU.
- Primary Objective: To promote “Safe, Innovative, and Inclusive AI” and establish common safety testing standards for “Frontier AI” (the most advanced models).
2. Core Pillars of 2024 AI Governance

The agreements signed in 2024 (primarily during the AI Seoul Summit) focused on three major pillars:
| Pillar | Focus Area | Key Action |
| --- | --- | --- |
| Safety | Identifying and mitigating “catastrophic” risks. | Establishment of an International Network of AI Safety Institutes. |
| Innovation | Promoting competition and supporting SMEs. | Fostering interoperability between different national AI frameworks. |
| Inclusivity | Bridging the digital divide for the Global South. | Commitment to sustainable and environmentally friendly AI infrastructure. |
3. The “Frontier AI Safety Commitments”

A standout feature of the 2024 proceedings was the “Frontier AI Safety Commitments,” under which 16 global AI tech companies (including OpenAI, Google, Meta, Microsoft, and Amazon) voluntarily pledged to:
- Define “Intolerable” Risks: Set thresholds for severe risks (e.g., aiding cyberattacks or chemical weapon development) beyond which they would halt further development or deployment of a model.
- Red-Teaming: Conduct rigorous internal and external security testing before public release.
- Transparency: Publicly disclose their safety frameworks and model capabilities.
4. Major AI Regulatory Milestones of 2024

While declarations are non-binding, 2024 saw the first major binding international treaty:
- Council of Europe Framework Convention (Sept 2024): The world’s first legally binding international treaty on AI, signed by the USA, UK, and EU. It focuses on ensuring AI respects human rights, democracy, and the rule of law.
- EU AI Act (Aug 2024): The world’s first comprehensive legal framework on AI entered into force, categorising AI systems by risk level (Minimal, Limited, High, and Unacceptable).

