At the recent Paris AI Action Summit, a familiar tension played out on a global stage, this time more starkly than ever. Fifty-eight countries, including France, Germany, India and Canada, signed a declaration committing to open, ethical and inclusive AI.
However, the United States and the United Kingdom, two of the world’s most powerful AI players, declined to sign. Their refusal didn’t just make headlines; it underscored a growing fracture in global AI governance, one that raises difficult questions about the future of innovation, sovereignty and sustainable growth.
While the global ambition to develop AI responsibly is broadly shared, the divergence lies in how to get there. For some nations, regulation is the foundation for public trust, long-term competitiveness and sovereignty in the digital economy.
For others, it’s a potential brake on the rapid innovation they see as essential for leadership in the AI age. But amid this growing divide, our customers are proving that it’s not a binary choice. With the right data infrastructure, sustainable practices and sector-specific insights, it is possible to govern AI responsibly and grow it ambitiously.
A Fractured Framework—or an Opportunity for Leadership?
For Europe, AI regulation has become a tool of economic strategy as much as ethics. The EU AI Act, widely regarded as the most comprehensive attempt to regulate the technology, follows in the footsteps of the General Data Protection Regulation (GDPR) and the Digital Services Act. It reflects Europe’s bid to shape the global digital rulebook, prioritizing human rights, transparency and accountability.
However, critics argue that these well-intentioned frameworks may risk falling out of step with the pace of technological change. That was the stance of U.S. Vice President J.D. Vance, who argued at the Paris Summit that overregulation could “kill a booming industry.” The UK echoed this sentiment, positioning itself as an AI-friendly innovation economy, more closely aligned with U.S. priorities than with continental Europe.
This divide reveals a fundamental disagreement about where regulation fits in the AI lifecycle. Should we regulate before large-scale adoption to prevent harm, or after, when risks are better understood but potentially more entrenched?
In our world, we take a more balanced perspective: AI isn’t about quick wins; it’s about laying a foundation for measurable, sustainable and scalable change. Moreover, real transformation requires not just tools, but a system-wide understanding of impact, use case and consequence.
The Infrastructure of Trust
Regardless of where they fall on the regulatory spectrum, nations and companies alike are grappling with the same foundational truth: AI is only as effective—and as safe—as the data and platforms behind it.
A recent Hitachi Vantara report revealed that 38% of IT leaders believe data quality is the most important factor in AI success, yet many still operate on fragmented and siloed data. This isn’t just a technical bottleneck; it’s a trust issue. Without clean, reliable data, AI decisions become opaque, error-prone and hard to audit.
Turning disjointed datasets into actionable intelligence helps organizations meet both innovation and governance goals. This means that from mining and energy to transportation and manufacturing, AI doesn’t just get “deployed”; it performs reliably in some of the world’s most high-stakes environments.
Regulation Without Sustainability Is a Missed Opportunity
Another tension the Paris Summit largely sidestepped is AI’s growing environmental footprint. With data center workloads set to double by 2026 and AI models consuming up to 100 times more energy than traditional computing tasks, the world’s digital ambitions are bumping up hard against the energy crisis.
Yet only 33% of organizations currently factor sustainability into their AI strategy. That’s a worrying gap, especially as more governments look to implement environmental reporting requirements and net-zero mandates.
Deploying AI sustainably isn’t just smart ethics; it’s smart economics. By building AI that’s sustainable by design, businesses lower their energy costs, increase operational resilience and future-proof themselves against emerging regulations.
Responsible Innovation Requires Collaboration
While the Paris Declaration was, at its core, a diplomatic event, it also shone a spotlight on an important industry truth: no organization, government or sector can build safe AI in isolation.
Nearly half of IT leaders (46%) now cite partnerships as critical to integrating AI into their operations. This collaborative, real-world approach stands in sharp contrast to vendors that treat AI as a one-size-fits-all platform. The goal is not to stop at selling tools but to go further, embedding AI into industries we know deeply. That means working with stakeholders, whether governments or community organizations, to ensure that AI not only delivers value but earns trust.
Looking Ahead: A Shared Future or Fragmented Fates?
The real risk exposed by the Paris AI Action Summit isn’t that countries disagree; it’s that their divergence becomes irreversible. If major economies continue to carve out incompatible AI standards, the global ecosystem could fracture, limiting interoperability, delaying cross-border innovation and making governance harder for everyone.
The opportunity lies in finding common ground without compromising national priorities. That will require a new kind of leadership, one that acknowledges the value of regulation and innovation, of sovereignty and cooperation.
For organizations navigating this complexity, the smartest path is to think big but start small. Build trust through transparency. Prioritize sustainability as a business enabler, not an afterthought. And above all, ground AI in real-world use cases that solve real-world problems.