The global race to regulate artificial intelligence has intensified as world leaders, technology executives, and academics convene in Paris for the AI Action Summit, a pivotal gathering aimed at shaping the future of AI governance. Delegates from 80 countries, including India’s Prime Minister Narendra Modi, US Vice President JD Vance, OpenAI CEO Sam Altman, and Google CEO Sundar Pichai, are expected to navigate the fine balance between innovation and control.
The summit builds on the AI Safety Summits held at Bletchley Park and Seoul, serving as a platform to stress-test AI governance frameworks against the backdrop of rapid technological developments. Central to discussions is the newly published International AI Safety Report, which outlines the risks posed by evolving AI systems and proposes guidelines to mitigate their potential harm. The event also comes amid a period of significant global policy shifts, with the UK expanding AI investment in the public sector, the US introducing new executive orders on AI regulation, and emerging AI models such as DeepSeek gaining prominence.
Balancing AI innovation and risk
While the summit represents a crucial moment for international AI cooperation, business leaders warn that the success of AI depends on tackling fundamental issues: ethical governance, data integrity, and workforce readiness. These, they argue, are essential for AI’s responsible deployment at scale.
“The opportunities of AI have been discussed at length, but privacy concerns, unregulated AI use, regulatory complexity, and language divides continue to pose barriers for businesses,” Ramprakash Ramamoorthy, Director of AI Research at Zoho Corporation, said. “The AI Action Summit is a forum for world leaders to tackle these challenges head-on and hopefully provide clarity to businesses that are looking to push ahead with AI investment and adoption.”
He highlighted the importance of the International AI Safety Report, calling it a roadmap for future regulation and a step toward embedding trust and safety into AI development. However, he warned that governing AI on a global scale requires a concerted effort from governments, regulators, industry leaders, and educators to ensure AI systems align with ethical and safety standards.
Data readiness as the foundation of AI governance
Data governance has emerged as a pressing concern among industry leaders, who argue that robust data policies must underpin AI regulation. Without structured data frameworks, they caution, AI could drift into regulatory grey areas with profound consequences.
“AI is at risk of spiralling out of control if it’s left unchecked without robust, governed data policies in place,” Stuart Harvey, CEO of Datactics, said. “Data is the foundation of every successful AI model, from training data to meeting regulations to producing a quality output. Rushing ahead with AI without data readiness in place can lead to costly setbacks, leaving the door open to potential bias, regulatory breaches, and significantly undermining public confidence.”
These concerns echo broader industry apprehensions that AI models are only as reliable as the data they are trained on. Poor data governance could result in flawed decision-making, discrimination, and security risks that regulators may struggle to rein in retroactively.
Addressing AI’s skills gap
As governments invest in AI-driven public sector projects, leaders in workforce development stress that a lack of skilled professionals could slow AI’s integration and amplify its risks. Ensuring that AI is deployed effectively requires not just technological advancements, but also investment in talent and education.
“AI is becoming an increasingly integral part of public sector work, from improving public services to strengthening security,” Oliver Hester, Head of Public Sector Services at FDM Group, said. “The UK government is prioritising significant investment into levelling up the civil services, NHS, and other departments, and the roadmap being set out at the AI Action Summit will provide crucial guidance on the barriers and opportunities facing AI systems.”
However, he warned that AI implementation will falter without the right talent to manage both its benefits and risks. “Investing in training and experiential learning to provide industry-standard skills can ensure that AI is used responsibly while meeting talent demand and preparing the next generation of digital professionals.”
Global AI policy at a crossroads
The AI Action Summit underscores a critical inflection point in AI policy. As governments and corporations push for regulatory alignment, the challenge remains in balancing oversight with innovation. Businesses are calling for AI frameworks that promote responsible development rather than stifle progress, while ensuring that AI systems operate transparently, equitably, and securely.
With international stakeholders gathered in Paris, the focus now shifts to whether policymakers can translate discussions into actionable regulations. The summit’s outcomes will likely shape AI’s trajectory for years to come, determining how the technology integrates into economies, societies, and governance structures worldwide.