Broadcom AI Inference Supercycle: How Custom ASICs Drive the Next Leg of Growth

FEATURED STOCK AVGO Broadcom Inc.
Close $396.72 +4.20% Apr 15, 2026 4:00 PM ET
Pre-Market $393.10 -0.91% Apr 16, 2026 8:40 AM ET
Hyperscale data center with custom ASIC racks illustrating Broadcom AI Inference growth.

Is Broadcom capturing the AI inference supercycle?

After a volatile stretch for semiconductors, investors have been rotating back into AI beneficiaries, and **Broadcom Inc.** is firmly on that list. AVGO gained more than 4% in the latest session to $396.72, outpacing a mixed Philadelphia Semiconductor Index and helping push the Nasdaq 100 toward record territory. While many chip names from Intel to Micron have already surged 40% to 50% in recent weeks, Broadcom’s move is increasingly tied not just to the broad tech rally, but to the structural opportunity in Broadcom AI Inference.

The market’s first AI phase focused on training huge models using high-end GPUs, dominated by NVIDIA. Now, as those models are deployed into real applications—search, cloud productivity, autonomous systems, and more—the bottleneck shifts to inference: serving billions of queries in real time at acceptable cost and power levels. That is where Broadcom’s application-specific integrated circuits (ASICs) come in.

How do Broadcom and Marvell challenge GPU dominance?

GPUs remain indispensable for training, but inference workloads have different requirements. They demand high throughput, low latency, and—crucially—better energy and cost efficiency per query. Broadcom and Marvell Technology design custom ASICs that strip out unnecessary circuitry and are optimized for specific AI tasks. These chips can outperform general-purpose GPUs on tightly defined inference workloads while consuming less electricity, an increasingly critical consideration in hyperscale data centers.
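The efficiency argument above comes down to energy per query. As a rough illustration only, with entirely hypothetical wattage and throughput figures (these are invented for the sketch, not Broadcom or NVIDIA specifications), the comparison works like this:

```python
# Hypothetical comparison of energy cost per inference query.
# All numbers below are illustrative assumptions, not real chip specs.
asic_power_watts = 300        # assumed draw of a task-specific inference ASIC
asic_queries_per_sec = 4000   # assumed throughput on its target workload

gpu_power_watts = 700         # assumed draw of a general-purpose GPU
gpu_queries_per_sec = 5000    # assumed throughput on the same workload

# Joules consumed per query = watts / (queries per second)
asic_energy = asic_power_watts / asic_queries_per_sec   # 0.075 J/query
gpu_energy = gpu_power_watts / gpu_queries_per_sec      # 0.140 J/query

print(f"ASIC: {asic_energy:.3f} J/query, GPU: {gpu_energy:.3f} J/query")
```

Even when the GPU wins on raw throughput, the ASIC can win on joules per query, and at hyperscale data center volumes the electricity bill is what compounds.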

Research from Goldman Sachs projects that AI data center demand for ASICs will rapidly converge with demand for GPUs. By 2027, the bank expects a roughly 50-50 split between ASICs and GPUs in AI servers, compared with GPUs’ roughly 62% share last year. Counterpoint Research goes further, forecasting that Broadcom will retain a dominant ASIC share of around 60% through 2027 as shipments of AI server ASICs roughly triple between 2024 and 2027. That roadmap helps explain why investors are focusing on Broadcom AI Inference as a multi-year growth driver rather than a short-lived cycle.
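Those two projections imply something the headline numbers understate: if ASICs go from roughly 38% to 50% of AI server shipments while tripling in absolute terms, the total market must also expand substantially. A quick back-of-the-envelope check, normalizing the 2024 market to 1:

```python
# Back-of-the-envelope check of the Goldman Sachs / Counterpoint figures above.
# Inputs are the article's rough percentages; 2024 total shipments normalized to 1.
asic_share_2024 = 0.38   # GPUs held ~62% of AI server compute last year
asic_share_2027 = 0.50   # projected 50-50 ASIC/GPU split by 2027
asic_growth = 3.0        # ASIC shipments roughly triple between 2024 and 2027

total_2024 = 1.0
asic_2027 = total_2024 * asic_share_2024 * asic_growth  # 1.14x the 2024 total
total_2027 = asic_2027 / asic_share_2027                # 2.28x the 2024 total

print(f"Implied total AI server market growth 2024-2027: {total_2027:.1f}x")
```

In other words, the cited forecasts are internally consistent only if the overall AI server market itself more than doubles over the period, which is the backdrop for the multi-year thesis.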

Broadcom Inc. stock chart – 252-day price history – April 2026

Why hyperscalers are key to Broadcom AI Inference

The most powerful validation of Broadcom’s strategy is coming from the hyperscalers that dominate cloud and AI infrastructure. Alphabet, Microsoft, and Amazon are all investing heavily in custom silicon to bring down AI costs and reduce their dependence on NVIDIA’s GPU roadmap. Broadcom is a core design partner here, co-developing custom AI accelerators for data centers that sit at the heart of inference workloads.

Alphabet’s in-house AI chips, used extensively to power its large language models and services, are co-designed with Broadcom and are increasingly deployed across Google Cloud. Amazon and Microsoft, while also collaborating closely with Marvell, are part of the same structural shift toward custom accelerators. Even leading AI start-up Anthropic is exploring its own chips and already leans on Alphabet’s custom processors for its workloads—indirectly reinforcing Broadcom’s position in the AI stack.

For U.S. investors comparing AI hardware plays, this puts Broadcom in a different bucket from GPU-centric names. Where NVIDIA captures much of the training spend, Broadcom AI Inference aims at the long tail of deployed workloads that could scale for years as AI features proliferate across cloud, enterprise software, smartphones from Apple, and even automotive players such as Tesla.

What do earnings and Wall Street targets imply?

Broadcom’s earnings trajectory reflects this shift. Analysts expect the company’s profits to almost double over the next two to three years as AI-related revenue accelerates. Using the tech-heavy Nasdaq-100’s average earnings multiple of roughly 31 as a benchmark, Broadcom’s stock could climb toward the high-$600 range over the medium term if consensus earnings materialize, implying upside of around 70% to 80% from current levels.
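The upside math in that paragraph can be sanity-checked directly from the article's own figures, the $396.72 close and the 70% to 80% implied upside range (the target band below is simply derived from those inputs, not an independent forecast):

```python
# Sanity check on the valuation framing cited above.
# Inputs come from the article: Apr 15, 2026 close and the implied upside range.
current_price = 396.72
upside_low, upside_high = 0.70, 0.80

target_low = current_price * (1 + upside_low)    # ~ $674
target_high = current_price * (1 + upside_high)  # ~ $714

print(f"Implied target range: ${target_low:.0f} to ${target_high:.0f}")
```

A 70% gain lands around $674, squarely in the "high-$600 range" the consensus framing describes; the 80% end nudges just past $700. Whether earnings actually "almost double" and whether the market awards the full Nasdaq-100 multiple of roughly 31 are, of course, the two assumptions doing all the work.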

Investment banks have taken notice. Goldman Sachs highlights custom AI processors as a primary growth driver for AVGO, while other large houses such as Morgan Stanley and Citigroup have pointed to Broadcom’s ASIC leadership and hyperscaler relationships as reasons to stay constructive on the name. With the stock breaking a recent downtrend and showing relative strength versus the broader market, technical traders on Wall Street have also turned more positive. One recently recommended call structure tied to an initial price target implied potential derivative gains of several hundred percent if the first upside target is reached.

Related coverage: Where does AVGO fit in the AI trade?

For readers looking to understand how hyperscaler demand translates into real contracts for Broadcom, our recent piece on Meta’s AI buildout is essential. Broadcom Meta Partnership Boom: AI Data Center Surge explains how Meta’s expanding AI infrastructure could cement Broadcom’s role as a long-term custom chip supplier and add another pillar to the AI inference thesis. On the trading side, regulatory shifts can shape retail flows into high-beta tech stocks like AVGO. In Robinhood Regulation +10.4% Surge on SEC Day-Trading Shock, we explore how changing SEC rules may affect day-trading activity and liquidity in AI names, an important backdrop for short-term AVGO traders.

Conclusion

In summary, Broadcom AI Inference sits at the center of the market’s pivot from AI training to AI deployment, backed by a dominant ASIC position and deep hyperscaler ties. For U.S. investors seeking diversified exposure beyond GPU leaders, AVGO offers a compelling way to participate in the next leg of the AI hardware supercycle. The coming quarters, as cloud giants ramp their custom accelerators, will show just how powerful this inference-driven growth engine can become.

Maik Kemper

Financial journalist and active trader since the age of 18. Founder and editor-in-chief of Stock Newsroom, specializing in equity analysis, earnings reports, and macroeconomic trends.
