Don’t Rob Cloud to Pay for AI: New Rules for CIOs in Emerging Economies

An advanced cloud strategy is no longer a quiet infrastructure project happening behind the scenes. It has become the precondition for turning AI from a string of pilots into a system that actually runs products, channels and citizen services. Across markets, especially in emerging economies, boards are signing off on budgets for generative AI experiments while the foundations that make AI reliable—cloud maturity, data governance and operating discipline—remain underinvested and under‑engineered. The result is a growing tension between what organisations are promising with AI and what their estates can safely deliver.

The latest data makes that tension hard to ignore. A 2026 global study by NTT DATA, based on more than 2,300 senior decision‑makers in 33 countries, finds that only 14% of enterprises have reached the highest level of cloud maturity, the point at which they are fully realising cloud value and using cloud as the backbone of their AI ambitions. Nearly two decades into the cloud era, more than eight out of ten organisations still sit below that threshold. At the same time, almost all of them understand that AI is pushing cloud to the centre of their operating model. In the same survey, 99% of respondents say AI is increasing their dependency on cloud, yet 88% admit that current cloud investment levels are putting their AI, cloud‑native and modernisation programmes at risk. AI is pulling them forward faster than their cloud estates are able to evolve.

AI maturity itself tells a similar story, especially in adjacent high‑growth regions. IDC’s “Data and AI Pulse: Asia Pacific” study finds that only 23% of organisations in Southeast Asia can be considered “transformative” in their use of AI. Transformative, in that research, describes enterprises that have a clear long‑term AI investment plan and are using AI to reshape markets and customers by creating new business models, products and service experiences, not just to automate internal workflows. The study also explains why so many programmes stall. A large share of organisations point to untrustworthy or poor‑quality data as a primary reason for AI failure, and many cite privacy and compliance limitations or an inability to access data because of business restrictions. The barriers are not exotic. They live in data quality, data governance and data access—the very domains that cloud programmes were supposed to fix.

Gartner has been tracking the other side of the equation: the money. Its forecasts show worldwide public‑cloud end‑user spending pushing well past the six‑hundred‑billion‑dollar mark this year and heading toward seven hundred billion in the next, with AI and application modernisation highlighted as key drivers. At the same time, Gartner and others have been warning for years that a majority of analytics and AI projects never make it into sustained production. Put those perspectives together with the NTT DATA and IDC numbers and a clear pattern emerges. On one side, only a small minority—14%—have achieved advanced cloud maturity. On the other, only around a quarter of organisations in a fast‑growing region like Southeast Asia are using AI in a truly transformative way. In between lies a broad middle of enterprises that are experimenting heavily with AI, often under strong pressure from boards and regulators, but are still running those experiments on top of fragmented, under‑governed and under‑observed estates.

This is exactly where the familiar AI “pilot trap” lives. Pilots are easy. Production is hard. Teams can get a demo running on top of almost anything: an internal chatbot for HR queries, a summarisation service for customer complaints, a prototype model that reads a subset of credit files. None of these strictly require a modern cloud estate to show well in a steering committee. But the moment you try to operate them at scale, the weaknesses of the underlying estate come into sharp focus.

Data turns out to be scattered across on‑premises systems, regional data centres and a patchwork of SaaS applications. Policies for data residency and privacy vary from country to country, and sometimes from regulator to regulator. Logging and observability are split across tools and providers, making it difficult to trace issues end‑to‑end. Cost allocation is fuzzy, with cloud bills and GPU consumption examined after the fact rather than designed into architectures from the outset. Under those conditions, the problem is not that AI cannot be made to work. It is that AI cannot be made trustworthy, repeatable and affordable enough to sit in the critical path of the business.

NTT DATA’s leadership has been explicit about this. In their framing, cloud has moved well beyond being an infrastructure conversation and has become the execution layer for AI itself. Organisations that fail to evolve their cloud foundations risk constraining the growth and value of their AI investments, no matter how advanced their models or how many proofs of concept they launch. The paradox their report highlights is stark: AI is pushing cloud into a more central, strategic role at the same moment that enterprises are trying to reallocate budget away from cloud modernisation into AI pilots. Leaders are aware of this imbalance, and almost nine in ten admit that it is putting their core initiatives at risk.

For CIOs in emerging economies across the Middle East, South Asia and Africa, this global pattern lands on top of a very specific local reality. Infrastructure and skills gaps are more pronounced. Foreign‑currency exposure and dollar‑denominated cloud pricing make cost predictability a board‑level concern. Data‑protection and localisation regimes are still evolving, sometimes with overlapping or inconsistent requirements across national borders. At the same time, there is intense pressure—from customers, regulators and shareholders—to show visible progress on AI. The temptation in this environment is to go straight for the visible layer: pilots, proofs of concept, assistants and agents that can be showcased in quarterly reviews.

The risk, of course, is that much of this activity is built on top of estates that were never designed for AI. In many organisations in this region, the current landscape still combines long‑lived core systems in on‑premises data centres, layers of bespoke integration, pockets of SaaS and a first wave of cloud migrations that often took the form of “lift‑and‑shift”. When AI teams arrive, they do what they can: create copies of key datasets, stitch together temporary pipelines, rely on manual data curation and deploy models into localised, sometimes one‑off environments. This can be enough to get a demo working. It is not enough to operate a production AI system with real concurrency, volume and compliance requirements.

The IDC findings from Southeast Asia are a useful mirror because they show what happens next if nothing changes. When a significant proportion of organisations say untrustworthy or poor‑quality data is the primary reason AI projects fail, that is a verdict on the state of data management, not on the hype cycle. When others point to privacy and compliance limitations, or to an inability to access data because of business restrictions, that is a verdict on how governance and architecture have been designed. The region does not lack AI tools or cloud capacity; it lacks estates that bring data together in a way that is reliable, well‑governed and observable enough for AI to be trusted at the core of the business.

Policy signals in some emerging markets are beginning to align with this reality. In Pakistan, for example, the federal government approved a national Cloud First Policy in 2022 that requires federal public‑sector entities to prioritise public cloud for new technology infrastructure from mid‑2022 onwards. The policy positions cloud computing as a way to improve service delivery, optimise costs and address the technical, financial and human‑resource challenges of fragmented government data centres. Provincial strategies in Khyber Pakhtunkhwa and draft guidance in Punjab follow the same path, emphasising cloud‑based, modern, scalable and secure infrastructure, with coordination between federal and provincial levels to ensure consistency. Similar conversations are playing out in Gulf and ASEAN markets, where regulators are tightening expectations on data protection and operational resilience while encouraging the use of cloud to improve the quality and agility of services.

None of these documents says much about AI on the surface, but they are deeply relevant to AI in practice. They push enterprises and public‑sector entities toward more unified infrastructure and more deliberate choices about where data lives and how it is governed. They create the conditions in which AI can be brought closer to where the data is, rather than forcing data to move in increasingly complex and risky ways to reach AI services. IDC’s broader spending forecasts reinforce the point: AI investments in Asia Pacific alone are expected to grow in double digits annually over the next few years, reaching well over a hundred billion dollars by the latter part of the decade. A significant share of that spend is coming from organisations that are reallocating budgets away from infrastructure and application modernisation into generative AI initiatives. The pattern is clear: AI is being prioritised, often at the expense of the very foundations it depends on.

For CIOs in emerging economies, that choice is being made now, often in subtle ways. When budget cycles pit AI pilots against cloud modernisation, the temptation is to fund what can be seen and measured quickly. It is harder to argue for unifying logging, rationalising platforms, rebuilding data pipelines or standardising identity and access management across regions. Yet those are exactly the investments that determine whether AI will ever move beyond pilots. They are what separate a landscape where each new use case requires bespoke plumbing from one where AI products can be rolled out repeatedly, with clear controls and predictable costs.

The new rule, then, is deceptively simple: do not rob cloud to pay for AI. Use AI demand to justify and shape a more mature cloud and data foundation instead. That means designing AI initiatives that deliberately surface where data is fragmented, where governance is weak, where latency and resilience are inadequate and where costs are opaque—and then feeding those findings back into the cloud roadmap. It means treating numbers like NTT DATA’s 14% and IDC’s 23% not as curiosities, but as baselines against which to measure your own journey. It means reading the surge in AI and cloud spending in Asia Pacific and other emerging regions not as a distant market forecast, but as a near‑term warning that a large share of your compute could be supporting AI workloads before the decade is out, and asking whether your current estate is ready for that level of dependency.

The organisations that move early on this will quietly change their odds. They will still run pilots, but those pilots will be anchored on shared data platforms, shared security and shared observability. They will still experiment with new models, but they will do so on top of infrastructure that has been designed with AI in mind, not bolted together around it. They will still make mistakes, but they will have the ability to see them quickly, correct them and scale what works. In a world where most enterprises are still running 2026‑era AI aspirations on architecture assumptions from another decade, that difference will matter more than any single model choice.

