Oracle’s sudden spike of more than 20% in after-hours trading following its fiscal Q1 earnings wasn’t driven by revenue or EPS, both of which actually came in slightly soft. Instead, it was about a staggering leap in its booked business for AI-driven cloud infrastructure. The company disclosed that its remaining performance obligations (RPO) ballooned 359% year-over-year to $455 billion, a number that reframes Oracle from a legacy database giant into a front-line combatant in the AI infrastructure wars. Investors read this backlog as hard evidence that enterprise and government clients are locking in massive multi-year cloud contracts to secure scarce compute capacity, particularly for training and inference workloads tied to generative AI.
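To put the growth rate in perspective, a quick back-of-the-envelope check (a sketch based only on the figures above, not on Oracle's filings) shows what 359% year-over-year growth implies about the size of the backlog a year ago:

```python
# Back-of-the-envelope: implied year-ago RPO from the reported growth rate.
# Figures from the article: $455B in RPO, up 359% year-over-year.
current_rpo_bn = 455.0
yoy_growth = 3.59  # 359% growth means the backlog is 4.59x its year-ago level

prior_rpo_bn = current_rpo_bn / (1 + yoy_growth)
print(f"Implied year-ago RPO: ${prior_rpo_bn:.0f}B")  # roughly $99B
```

In other words, Oracle added on the order of $350 billion of contracted future business in a single year, which is why the market treated the backlog, not the quarter, as the headline.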
The real story lies in Oracle’s guidance and the context of its partnerships. The company raised its Oracle Cloud Infrastructure (OCI) outlook, now projecting 77% growth this fiscal year to $18 billion and outlining a path to $144 billion over the next five years. This growth is anchored in deals connected to the “Stargate” project, a sweeping AI infrastructure initiative involving OpenAI, Meta, and others. What makes Oracle’s positioning unique is its willingness to align with Nvidia’s and AMD’s GPU build-outs, while also leaning on Broadcom’s next-generation networking and switching technology to keep these hyperscale data centers operating at high throughput. The implication: Oracle isn’t just selling storage and compute—it is stitching itself into the critical supply chain of the AI factory model, the distributed powerhouses that train and deploy large language models.
For Nvidia, the Oracle surge reinforces its dominance in GPU demand. Nvidia’s Blackwell GPUs and networking hardware remain the bedrock of hyperscale AI clusters, and Oracle’s expansion effectively guarantees multi-year orders of Nvidia’s hardware. AMD benefits in parallel, as its MI350 accelerators increasingly find traction as alternatives in cloud training clusters; Oracle’s multi-vendor posture suggests it will adopt AMD GPUs alongside Nvidia’s to secure supply diversity and cost efficiency. Meanwhile, Broadcom’s role may be less flashy but is arguably just as critical. Its Jericho4 and Tomahawk Ultra switching platforms are designed to handle the scale and east-west data flows that AI workloads demand. If Oracle is projecting $144 billion in cloud AI revenues, Broadcom is quietly positioned as a key enabler, supplying the bandwidth required to keep those GPUs saturated.
The significance for the sector is twofold. First, Oracle’s results provide quantifiable proof that the AI build-out is no longer hype but locked-in capital expenditure. Enterprises are reserving compute at a rate that dwarfs the incremental guidance changes of prior quarters. This makes AI infrastructure the new arms race, with Oracle’s backlog giving visibility into years of demand that Nvidia, AMD, and Broadcom will capture downstream. Second, the signaling effect is profound: Wall Street now views Oracle not as an also-ran against AWS and Azure but as a revitalized competitor that has carved out relevance by betting aggressively on AI compute capacity. This shift could pressure other cloud providers to accelerate their own commitments, further fueling semiconductor demand.
What Oracle’s after-hours surge really represents is confirmation that we are in the middle of the AI infrastructure super-cycle. The market is rewarding not today’s margins but the durability of demand for GPUs, networking, and cloud integration. For investors, the takeaway is that Oracle’s leap is not isolated—it is a rising tide moment that also validates the long-term bull cases for Nvidia, AMD, and Broadcom. The AI factory is being built at unprecedented scale, and Oracle’s blockbuster backlog shows just how much capital is already committed to making it real.