Tit-for-Tat: Apple vs. Nvidia
By George Gilder and Dr. Robert Castellano, 01/06/2025
This past summer, Apple (AAPL) announced it would no longer use Nvidia (NVDA) GPUs (graphics processing units) to train its AI models. Instead, it would turn to Google's (GOOG) tensor processing units (TPUs) for now, until it can shift to its own "Baltra" processor, still in development.
The move did not surprise us. The two firms' fruitful collaboration of the early 2000s has devolved into a rivalry shaped by historical tensions, diverging philosophies, and strategic imperatives. The latest episode only highlights Apple's determination to control its technological ecosystem and, not incidentally, to challenge Nvidia's newfound dominance of the industry.
The makeup of a breakup
In the early 2000s, Apple happily used Nvidia's GPUs to enhance the graphics capabilities of the Mac lineup. That relationship began to sour thanks to a few painful incidents.
- Pixar Accusations: Steve Jobs, then Apple's CEO, accused Nvidia of appropriating Pixar Animation Studios' technology. This accusation marked the beginning of mistrust between the companies.
- Bumpgate Incident: In 2008, defective Nvidia GPUs caused overheating issues in Apple's MacBook Pros, leading to widespread failures. Nvidia's refusal to assume full responsibility left Apple to absorb repair costs, damaging the trust between the two companies.
The real source of the rift, however, was strategic. With the release in 2010 of the A4 chip for the iPhone 4 and the first-generation iPad, "Apple Silicon" emerged as the centerpiece of Apple's strategy to control its own ecosystem. Henceforth, wherever feasible, Apple would design its own chips, not only to boost margins but to ensure the devices matched the demanding needs of its increasingly lightweight and low-power product line.
Even before the M series, however, Apple had decided that Nvidia's high-performance GPUs conflicted with its emphasis on reducing weight and power consumption. Apple asked Nvidia to build customized GPUs for the Mac; that did not happen. Apple turned first to AMD (AMD) for Mac GPUs and then invested heavily in proprietary silicon, culminating in its groundbreaking M-series chips.
AI widens the rift
The latest shift away from Nvidia's AI powerhouses, however, is not about consumer devices directly but about the data center heavy metal used to train the AI models behind Apple's consumer products. Apple wants data center AI hardware that better aligns with its ecosystem, cost structure, and performance goals.
Table 1 illustrates the tradeoffs in play. Nvidia brings strength in performance but challenges in power efficiency and cost. Google's TPU provides a transitional solution for specific workloads. Apple's Baltra chip promises ecosystem integration and long-term efficiency gains.
Table 1: Key Specifications of AI Chips

| Feature | Nvidia GB200 NVL4 | Nvidia GB200 NVL72 | Google TPU v4 | Apple Baltra (Expected) |
| Architecture | 2 Grace CPUs + 4 Blackwell GPUs | 36 Grace CPUs + 72 GPUs | TensorFlow-optimized ASICs | Designed for custom training + inference |
| Memory | 1.3 TB unified | 13.5 TB HBM3e | 32 GB HBM per chip | Unified (est. 2 TB) |
| Performance (FP4 TFLOPS) | Medium-scale workloads | 1,440 TFLOPS | 275 TFLOPS | TBD, but high for training + inference |
| Power Consumption | ~5.4 kW/server | ~120 kW/rack | High efficiency | ~3.5 kW/server (est.) |
| Form Factor | Compact, energy-focused | Large-scale racks | Cloud-based | Data center-focused |
| Cost | ~$300,000/server | ~$1M/rack | Rental model | High upfront, low long-term |

Source: The Information Network (www.theinformationnet.com)

Deploying Nvidia's GB200 NVL72 racks in Apple's data centers would cost a cool $1 million per rack, not even counting the energy needed to keep them running and, um, cool. That conflicts with Apple's drive to raise margins. Apple's Baltra chip, designed for efficiency and tailored to its ecosystem, should help cut costs. Hardware custom-designed for a well-understood set of tasks can be cheaper than general-purpose machines built for whatever tasks customers think up.
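To make the cost point concrete, here is a minimal back-of-the-envelope sketch in Python using the Table 1 figures. The rack count, electricity price, and time horizon are our own illustrative assumptions, not Apple disclosures.

```python
# Back-of-the-envelope TCO sketch for the numbers in Table 1.
# Assumptions (ours, not Apple's): 50 racks, $0.08/kWh industrial
# power, a 3-year horizon, and the ~120 kW/rack figure above.

RACKS = 50
RACK_PRICE = 1_000_000     # ~$1M per GB200 NVL72 rack (Table 1)
RACK_POWER_KW = 120        # ~120 kW per rack (Table 1)
PRICE_PER_KWH = 0.08       # assumed industrial electricity rate
HOURS_PER_YEAR = 24 * 365
YEARS = 3

capex = RACKS * RACK_PRICE
energy_kwh = RACKS * RACK_POWER_KW * HOURS_PER_YEAR * YEARS
opex = energy_kwh * PRICE_PER_KWH

print(f"Hardware: ${capex:,.0f}")
print(f"Energy:   ${opex:,.0f} over {YEARS} years")
print(f"Energy adds {opex / capex:.0%} on top of the hardware bill")
```

Even under these mild assumptions, electricity adds roughly a quarter of the hardware cost over three years, which is why per-server power draw matters as much as sticker price.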
Broadcom's Packaging Innovation
In mid-December, Apple announced it is collaborating with Broadcom (AVGO) on Baltra. The new device will rely on Broadcom's advanced 3.5D XDSiP (Extreme Dimension System in Package) packaging breakthrough, which enables multi-die integration with faster chip-to-chip communication. This approach improves bandwidth, reduces power consumption, and aligns with Apple's push for efficiency and performance. Broadcom's face-to-face stacking and hybrid copper bonding further enhance scalability.
Forward but fraught
Apple's drive to break free of Nvidia is ambitious but fraught with challenges.
- Execution: The complexity of developing AI hardware cannot be overstated. Nvidia brings decades of experience and a robust software ecosystem to the struggle.
- Transition costs: Renting GPU and TPU resources from cloud providers like Amazon and Google is a stopgap measure, but it also highlights Apple's current dependence on third-party infrastructure.
- Reluctant developers: Nvidia's GPUs are entrenched in the AI community, supported by a vast developer ecosystem. To convince developers to adopt its proprietary solutions, Apple will not only have to match Nvidia's performance (for relevant tasks) but also offer comparable software tools and support systems.
Nvidia vs. the rise of custom silicon
Apple's transition away from Nvidia for data center hardware is part of a trend. Google, Amazon (AMZN), and Meta (META) are all investing heavily in proprietary AI hardware customized for their workloads. The trend could threaten Nvidia's market share. So far, however, we've seen no evidence of slumping sales or margins as demand for NVDA GPUs still far outpaces supply.
In the struggle for AI dominance, energy efficiency is huge. Nvidia's powerful GPUs are energy hogs. Apple's focus on energy-efficiency cries out for customization. And then there are the CO2 emissions, which we regard as the elixir of life, but regulators hate. Table 2 summarizes Google's and Apple's energy edge.
Table 2: Energy Consumption and CO2 Emissions

| Metric | Nvidia GPU-Based Data Center | Google TPU-Based Data Center | Projected Apple AI Data Center |
| Annual Power Usage (kWh/year) | ~2,500,000 | ~1,500,000 | ~1,200,000 |
| CO2 Emissions (tons/year) | 1,200 | 720 | 600 |

Source: The Information Network
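As a quick consistency check on Table 2, each column implies roughly the same grid carbon intensity; the short Python sketch below derives it with plain arithmetic on the table's figures.

```python
# Implied grid carbon intensity from Table 2: tons of CO2 per year
# divided by kWh per year, converted to kg of CO2 per kWh.

data = {
    "Nvidia GPU-based":  (2_500_000, 1_200),  # (kWh/year, tons CO2/year)
    "Google TPU-based":  (1_500_000, 720),
    "Apple (projected)": (1_200_000, 600),
}

for name, (kwh, tons) in data.items():
    print(f"{name:>17}: {tons * 1_000 / kwh:.2f} kg CO2/kWh")
```

All three columns land near 0.5 kg CO2/kWh, a typical fossil-heavy grid mix, so the emissions gap in Table 2 is driven almost entirely by the power-usage gap.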
Apple silicon: a winner so far
As Table 3 shows, Apple's move to Apple silicon, which accelerated in 2020 when Apple announced it would wean the Mac from Intel processors, has corresponded with improved gross margins in recent years. Part of that is lower costs, but the shift to vertical integration and ecosystem control has also supported product quality and innovation.
Table 3: Apple's Gross Margins vs. Competitors (2020–2024)

| Year | Apple Gross Margin (%) | Nvidia Gross Margin (%) | AMD Gross Margin (%) | Alphabet Gross Margin (%) |
| 2020 | 38.5 | 62.1 | 45.0 | 55.6 |
| 2021 | 41.8 | 64.1 | 46.1 | 57.2 |
| 2022 | 43.3 | 66.1 | 46.9 | 57.8 |
| 2023 | 44.7 | 68.0 | 47.8 | 58.2 |
| 2024 (E) | 45.0 | 69.0 | 48.0 | 58.5 |

Source: The Information Network
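To put numbers on that trend, here is a minimal sketch computing each company's gross-margin expansion from the Table 3 figures; it is plain arithmetic on the table, not a forecast.

```python
# Gross-margin expansion, 2020 -> 2024(E), from Table 3.
margins = {
    "Apple":    (38.5, 45.0),
    "Nvidia":   (62.1, 69.0),
    "AMD":      (45.0, 48.0),
    "Alphabet": (55.6, 58.5),
}

for company, (start, end) in margins.items():
    print(f"{company:>8}: {start}% -> {end}%  (+{end - start:.1f} points)")
```

Apple's 6.5-point expansion is more than double AMD's or Alphabet's, consistent with the vertical-integration story, even as Nvidia's margins climbed from a much higher base.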
Apple's record of successfully managing transitions to proprietary hardware (Table 4) bodes well for Baltra.

Table 4: Apple's Historic Transitions in Key Technologies

| Transition | Previous Supplier | Apple Solution | Key Benefits |
| CPUs for Macs | Intel | M-series Chips | Performance, power efficiency |
| GPUs for Macs | Nvidia, AMD | Custom GPU Solutions | Ecosystem control, efficiency |
| AI Chips (In Progress) | Nvidia, AWS | Baltra AI Chip | Cost savings, ecosystem integration |

Source: The Information Network

Investor Takeaway
The shifting grounds of the great AI competition are crucial to the fate of all the companies discussed here: AAPL, NVDA, AMD, AVGO, GOOG, AMZN, and META. Will Apple cut into NVDA's sales? Will the hyperscalers' drive for independence help themselves or AMD and AVGO? For how long is NVDA's dominance already baked in? In January's issues of the Gilder Technology Report, we offer our answers.
Sincerely,
George Gilder, Richard Vigilante, Steve Waite, and John Schroeter
Editors, Gilder's Guideposts, Technology Report, Technology Report Pro, Moonshots, and Private Reserve

About George Gilder: George Gilder is the most knowledgeable man in America when it comes to the future of technology and its impact on our lives. He's an established investor, bestselling author, and economist with an uncanny ability to foresee how new breakthroughs will play out, years in advance. George and his team are the editors of Gilder Technology Report, Gilder Technology Report Pro, Moonshots and Private Reserve.