Can Alphabet’s aggressive push into custom AI chips with Marvell quietly rewrite the economics of cloud AI and challenge Nvidia’s grip?
How are Alphabet AI Chips moving the stock?
Alphabet (GOOGL) slipped modestly on Monday, with Class C shares (GOOG) closing near $335.40, down about 1.2% on the day, and Class A shares at $337.42. In after-hours trading, both share classes were roughly flat, leaving the stock just below recent highs but still among the stronger performers in the Nasdaq-100 this month. The pullback comes after a sharp run from roughly $273 at the end of March to the mid-$330s by mid-April, a gain of roughly 23% in three weeks.
Despite the minor dip, institutional appetite remains robust. Large asset managers such as JPMorgan, Mercer Global Advisors, AllianceBernstein and Robeco all increased their positions in Alphabet during Q1 and April, betting that AI infrastructure, including Alphabet AI Chips and expanding cloud capacity, will become a primary growth engine. MarketBeat notes that Robeco has made Alphabet its ninth-largest holding, while other funds have added aggressively, encouraged by strong earnings momentum and a broad analyst consensus rating of “Buy.”
KeyBanc recently raised its price target on Alphabet, arguing that the market underestimates the company’s cloud momentum and the economic upside of in‑house AI hardware. Traders have also been active on the options side, with bullish call activity reflecting expectations that the upcoming earnings report, scheduled for April 29, could be another catalyst.
What is Google building with Marvell?
The latest twist in the Alphabet AI Chips roadmap is a reported deal with Marvell Technology (MRVL) to co-develop two new custom AI chips. One is expected to be a dedicated inference accelerator closely tied to Google’s TPU architecture, while the second will target broader AI and cloud workloads inside Google data centers. These chips would be designed exclusively for Alphabet, giving the company more control over its hardware stack.
Marvell shares jumped on the news while some existing partners like Broadcom and Celestica, as well as Google itself, traded lower, reflecting expectations of a strategic shift in the supplier mix. The new silicon is aimed at delivering up to an order-of-magnitude improvement in peak performance for certain AI tasks, especially inference — the running of models after they have been trained.
By combining Marvell’s custom ASIC expertise with Google’s decade of TPU design, Alphabet aims to reduce the cost per token for generative AI services and large language models. That is critical as Gemini and other models scale across Search, Workspace, YouTube, and third‑party customers like Anthropic. Lower unit costs can translate directly into better cloud margins and more pricing flexibility versus rivals.
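The cost-per-token lever described above can be made concrete with a back-of-the-envelope calculation. All figures in the sketch below are hypothetical placeholders, not numbers disclosed by Google, Marvell, or NVIDIA; the point is only to show how a chip that is cheaper to run and faster at inference compounds into a much lower serving cost:

```python
# Illustrative cost-per-token arithmetic with made-up numbers --
# none of these figures come from Google, Marvell, or NVIDIA.

def cost_per_million_tokens(hourly_cost_usd, tokens_per_second, utilization=0.6):
    """Serving cost per 1M output tokens for one accelerator.

    hourly_cost_usd: fully loaded hourly cost of the chip
                     (hardware amortization + power + datacenter overhead)
    tokens_per_second: sustained inference throughput
    utilization: fraction of each hour spent doing useful work
    """
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical merchant GPU: $4.00/hour, 4,000 tokens/s
gpu = cost_per_million_tokens(4.0, 4_000)

# Hypothetical custom ASIC: cheaper to operate, faster at inference
asic = cost_per_million_tokens(2.5, 10_000)

print(f"GPU:  ${gpu:.3f} per 1M tokens")   # GPU:  $0.463 per 1M tokens
print(f"ASIC: ${asic:.3f} per 1M tokens")  # ASIC: $0.116 per 1M tokens
print(f"Cost reduction: {1 - asic / gpu:.0%}")  # Cost reduction: 75%
```

Under these assumed inputs, a 2.5x throughput gain combined with a ~40% lower operating cost cuts the cost per million tokens by three quarters, which is the kind of unit-economics shift that would flow directly into cloud margins and pricing flexibility.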
Does this threaten NVIDIA’s dominance?
Google has been one of NVIDIA’s biggest hyperscale customers, deploying GPUs such as the H100 to train frontier models. However, Big Tech firms including Alphabet, Apple, Amazon and Microsoft are all racing to deploy their own chips and reduce their dependence on NVIDIA’s high-margin hardware. Google’s latest TPU generation is already widely used internally and offered via Google Cloud to select AI partners.
Bloomberg recently highlighted how Google’s TPUs have become some of the most sought-after accelerators in the AI ecosystem, with even competitors booking capacity. The new inference‑focused TPUs expected at Google Cloud Next in Las Vegas will further segment the hardware stack: specialized chips for training on one side, and distinct chips for inference on the other. Google’s chief scientist Jeff Dean has argued that this division is now economically justified given the exploding demand for fast, low‑latency inference.
For NVIDIA, the rise of custom silicon across Big Tech is a long‑term risk to unit growth and pricing power, even if overall AI demand remains strong. For Alphabet, success with its own chips could gradually shift capex away from third‑party GPUs toward internal designs, tightening the feedback loop between AI model teams and hardware engineers. Still, Google is unlikely to abandon NVIDIA entirely; instead, investors should expect a hybrid approach where TPUs and other Alphabet AI Chips sit alongside GPUs, optimized on a workload-by-workload basis.
What does this mean for Google Cloud and earnings?
Alphabet’s cloud business is at the heart of this strategy. New TPUs and the Marvell co‑designed chips are expected to be deployed first in Google Cloud data centers, underpinning AI services for enterprises and startups. Cheaper, faster inference can make Google’s AI platform more attractive than cloud AI offerings from Amazon and Microsoft, a dynamic that echoes Tesla’s bet on in‑house AI chips for autonomous driving.
Recent commentary from major holders and analysts suggests that Wall Street still undervalues Google Cloud’s upside. Some research notes argue that AI infrastructure is now the real growth engine for Alphabet, balancing a mature ads franchise with high‑growth, capital‑intensive cloud and AI segments. At the same time, there are trade‑offs: custom chip investments require multi‑year, multi‑billion‑dollar capex, which can pressure near‑term free cash flow and margins.
With Alphabet’s stock having rebounded from a 22–23% correction that began in February and now approaching resistance near $350, the April 29 earnings call will be closely watched for details on AI capex, TPU deployment, and the early economics of the new chips. Investors will look for management commentary on how quickly custom hardware can improve operating leverage in Cloud and generative AI products.
Related coverage
Investors who want a broader view of the company’s AI roadmap can read “Alphabet AI Strategy Soars 3.6% as Meta Ads Surge”, which explores how Alphabet is balancing aggressive AI spend against competitive ad pressure from Meta. For a sector‑wide look at how custom silicon from hyperscalers could reshape the competitive landscape, the article “NVIDIA AI Chips Warning as Rivals Challenge Its Lead” analyzes whether rivals’ in‑house chips are starting to erode NVIDIA’s market power in AI accelerators.
In summary, Alphabet AI Chips are evolving from an internal support tool into a strategic pillar that could reshape Google’s cost structure and competitive position across cloud and consumer services. For investors, the key question is how quickly this custom silicon effort can translate into higher margins and sustained top‑line growth. The next earnings call and the rollout of new TPUs at Google Cloud Next will be crucial checkpoints for assessing whether Alphabet’s bold hardware push is paying off.