Before the Power Question, We Still Do Not Fully Understand How AI Will Be Used

Executive Summary

In the second half of 2025, the conversation around AI began to shift away from model capabilities and application competition toward concerns about power and infrastructure. As a result, the idea that AI growth will be constrained by electricity has quickly taken hold as a narrative that feels both reasonable and intuitive.

However, from the perspective of on-the-ground deployment and operations, as well as from the upstream supply side, the most pressing challenge today is often not whether power is available. It is whether existing resources and capacity can be reliably translated into sustained and dependable service delivery.

More importantly, the way AI is actually being used has not yet settled. At the same time, changes in capital spending cycles continue to shape how demand materializes in practice and how quickly expansion can proceed.

For this reason, the question that deserves longer-term attention may still be the one that remains unresolved. How, exactly, will AI be used?

Introduction

As markets approach a new round of earnings calls and forward guidance in early 2026, it may be worth pausing to revisit a question that is often passed over too quickly. Before power becomes the central focus of the discussion, have we truly gained a clear understanding of how AI will ultimately be used?

Since the fourth quarter of 2025, as model scale has continued to expand and data center density has risen rapidly, the conversation around AI has taken a noticeable turn. Governments, companies, and investors alike have begun to shift their attention away from model capabilities and application scenarios toward a more fundamental and more pressing concern. That concern is whether there will be enough power to support AI growth.

Within this context, the idea that AI will be constrained by electricity has gradually moved from a hypothesis to something close to a shared consensus. This narrative has spread quickly not because it is sensational, but because it appears reasonable. AI does consume significant energy. Models are becoming larger. Infrastructure expansion, by its nature, takes time.

Taken together, these conditions make the prospect of future power pressure feel like a conclusion that requires little additional reasoning to accept.

The issue, however, is that such judgments are often derived from aggregate scale and external constraints. When viewed from a different position, particularly from the perspective of actual deployment and supply-side operations, the limitations that are felt can be quite different. This divergence reflects the fact that different actors are confronting different stages of the process and different layers of the problem.

From the Infrastructure Front Line

The challenges faced on the ground are difficult not simply because they are engineering problems, but because they arise at a stage when the way AI is used has not yet settled. When usage scenarios and load patterns are still shifting, those responsible for deploying and operating AI infrastructure tend to feel pressure first not from power availability, but from whether systems can continue to operate under unstable demand conditions. At this stage, pressure rarely appears as a single bottleneck. Instead, it is distributed across many parts of the system.

  • Power and hardware resources are gradually coming into place, yet systems still struggle to maintain stable performance across varying loads.
  • Equipment is being delivered and brought online, but sustained operation at high utilization has not fully taken shape.
  • Cooling, networking, and scheduling increasingly constrain one another, requiring ongoing adjustment across the system.
  • Latency and reliability requirements limit how much compute capacity can be consistently and safely deployed.

In other words, resources exist, but they have not yet been smoothly converted into stable and sustainable service capability.

From this position, a power crisis does not disappear, but it has not yet emerged as the most immediate constraint in day-to-day operations. This is not because power is unimportant, but because before systems are fully integrated, pressure tends to concentrate first on integration, coordination, and operational conversion.

This is a very real situation, and one that is rarely understood in full from the outside. These constraints also share a common characteristic. They tend to appear during periods of rapid expansion rather than as structural limits in a long-term steady state. Deployment may move faster than systems can be fully tuned. Equipment density may rise suddenly, requiring existing designs to adapt. Operational experience is still accumulating, and utilization naturally takes time to improve.

For this reason, the fact that these difficulties may persist for some time does not define their nature. Even if they extend into the first half of 2026 or beyond, what is visible from this position still looks more like transitional friction in the expansion of AI than a fixed and final constraint.

From the perspective of the infrastructure front line, these limitations do not point to the exhaustion of any single resource. They point instead to a system that has not yet fully entered a stable operating state.

From the Supply Side After Orders Are Locked In

If we move further upstream along the supply chain, the shape of the constraint changes once again.

For suppliers such as semiconductor foundries, the starting point is not whether final AI demand exists. It is the fact that customers have already committed orders and capacity reservations for specific process nodes. Once those orders are in place, the pressure they feel naturally concentrates on whether capacity, productivity, and talent are sufficient to absorb them.

Recent earnings calls from TSMC offer a clear example. When asked whether AI growth would be constrained by power availability, management did not extend that narrative. Instead, they redirected attention back to capacity, productivity, and workforce considerations. This was not an act of avoidance. It reflected the reality that becomes visible from their position.

Once orders are secured, wafers are already moving, and delivery timelines begin to tighten, the first constraint the supply side encounters always comes from its own ability to keep pace. Whether capacity can be brought online on schedule, whether processes can ramp smoothly into volume production, and whether organizations can sustain prolonged high-intensity operations all rise naturally to the top of the priority list.

From this perspective, power is not an absent risk. It sits on a different time horizon. It resembles a question of whether the system can be supported over the long term, rather than whether commitments can be met in the present moment. For the supply side, the importance of the former is not denied, but during a rapid buildout phase, the urgency of the latter dominates.

As a result, when suppliers repeatedly emphasize capacity, productivity, and talent, it does not mean the power narrative is wrong. It means that once orders are confirmed, constraints are reordered. This difference in ordering does not reflect conflicting positions. It is an outcome determined by where each actor stands.

Seen together, these two positions are not describing different problems. They are observing different moments and layers within the same expansion process. Infrastructure operators are grappling with how to run a system that has not yet reached a steady state. The supply side is confronting whether it can keep up after commitments have already been made.

These differences do not imply that any single narrative is incorrect. They indicate that before AI has fully taken shape, constraints will continue to be reordered by position and by time. It is within this process that the question of whether power will become the bottleneck is raised again and again, yet remains without a definitive answer.

Two Key Variables That Remain Unsettled

Even as industry leaders outline what appear to be clear technological trajectories, from generative systems to more agent-driven applications and toward tighter integration with the physical world, actual AI usage still requires time and context to take shape. Until this process is meaningfully complete, any assessment of final demand patterns or long-term bottlenecks remains grounded in provisional assumptions.

Much of today’s discussion around inference systems implicitly assumes a relatively stable environment. Demand is expected to be predictable, traffic manageable, and systems gradually optimizable over time. Yet this premise itself is still in flux. The inference landscape has not settled, and more importantly, there is still no fixed answer to how AI will ultimately be used.

If AI assistants become persistently active rather than episodic, if agent-based applications scale broadly, and if multi-model coordination shifts from an exception to a norm, then inference demand will not simply increase in volume. Its structure will change. Demand will become more distributed, more time-sensitive, and harder to forecast. Under these conditions, utilization metrics that once measured whether systems were running at full capacity shift in meaning. They move from targets that can be optimized in advance to reference points that must be observed and interpreted in motion.

In other words, before usage behavior stabilizes, it is difficult to assert which constraint will ultimately emerge as the system’s long term bottleneck. What appears to be a reasonable ordering of constraints today may be reshuffled within a relatively short period as usage patterns evolve.

Beyond usage behavior, another variable that is equally important yet often underestimated is the rhythm of capital.

Many expansion plans that are treated as signals of demand growth are closer in nature to options than to irreversible commitments. When financing conditions tighten or risk appetite in capital markets recedes, slower expansion, delayed projects, or reassessment of plans are not anomalies. They are normal responses within capital allocation cycles.

This means that even if technological direction and industry narratives remain intact, the pace of execution can still be recalibrated as capital conditions shift. In such circumstances, the growth assumptions and investment pacing that markets have priced into expansion plans are often tested before any physical bottleneck is reached.

Together, these two variables create an environment that has not yet stabilized. Within this context, constraints observed from deployment sites, supply chains, or the energy sector resemble snapshots taken at specific moments rather than conclusions that can be extrapolated into final outcomes.

As a result, through the first half of 2026, markets are more likely to experience repeated shifts in where constraints appear to reside, rather than the definitive emergence of a single, settled bottleneck.

Conclusion

This article is not intended to dismiss the very real challenges the current system is facing. On the contrary, these difficulties matter, and they are already shaping the pace of AI expansion. Energy, infrastructure, and sustainability will, over a longer time horizon, become conditions that AI development cannot avoid.

At this moment, however, as the industry remains in a phase of rapid buildout, experimentation, and ongoing recalibration, it may be worth exercising some patience before naming definitive bottlenecks. Not because the problems do not exist, but because the system itself is still taking form.

From the energy sector to on-site deployment, and further upstream to the supply side, the constraints observed at each position are often real. What they describe, however, are pressures experienced from specific vantage points rather than a system that has already settled into a fixed state. When such localized experiences are quickly elevated into overarching conclusions, narratives tend to run ahead of reality.

The perspective expressed by TSMC during its recent earnings calls offers a clear illustration of this positional difference. For supply side actors operating with confirmed orders, the most immediate constraints naturally center on capacity, productivity, and workforce availability, not on whether the system will ultimately be constrained by power. This is not a question of right or wrong. It reflects the fact that different positions, at different moments, will surface different problems. As time passes and usage patterns gradually take shape, these constraints will continue to be reordered.

For this reason, before AI usage has meaningfully stabilized, any claim about where the system will ultimately be constrained remains a provisional judgment. What appears to be the most reasonable explanation today may not be the one that holds two or three years from now.

Before power anxiety takes hold, after infrastructure narratives have taken shape, and even as capacity and talent pressures emerge on the supply side, the question most deserving of sustained attention remains unresolved.

How will AI actually be used?

For now, there is no definitive answer. And perhaps that is precisely why the question deserves to be revisited, again and again.

Note: AI tools were used both to refine clarity and flow in writing, and as part of the research methodology (semantic analysis). All interpretations and perspectives expressed are entirely my own.