Within a day of the U.S. government clearing Nvidia to resume sales of its H20 AI chips to China and approving AMD’s MI308, global semiconductor equities rallied sharply: Nvidia shares jumped more than 5 % intraday, while AMD soared nearly 7 %. The easing lets both giants re-engage with the world’s second-largest data-center market and signals that the Sino-American tech rivalry has entered a conditionally relaxed phase.
Differences Between NVDA and AMD Chips
Think of the H20 as a lower-spec variant of Nvidia’s flagship H100. To stay beneath the U.S. export red line, Nvidia cuts the core count to roughly 60 % of the H100’s and trims peak throughput by about one-third, yet preserves 96 GB of HBM3 and 4 TB/s of memory bandwidth; total board power stays at 400 W. The result: in identical AI tests the H20 runs about a third slower than the H100, but it still outpaces domestic GPUs and retains the full CUDA ecosystem, so users can almost literally “plug in and run.”
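The “plug in and run” point is easy to see in practice. The sketch below is a minimal, hypothetical PyTorch check that assumes a single visible GPU at device index 0: because the H20 exposes the same Hopper/CUDA programming model as the H100, existing code queries the device and launches kernels unchanged; only the reported SM count and the wall-clock time differ.

```python
# Minimal sketch, assuming a single visible GPU at device index 0.
# The H20 presents the same Hopper/CUDA programming model as the H100,
# so this code runs unchanged on either part; only the reported SM count
# and the wall-clock time of the matmul differ.
import torch

assert torch.cuda.is_available(), "no CUDA-capable GPU visible"

props = torch.cuda.get_device_properties(0)
print(f"device: {props.name}")
print(f"SMs:    {props.multi_processor_count}")            # lower on the H20
print(f"memory: {props.total_memory / 1024**3:.0f} GiB")   # 96 GB of HBM3 on the H20

# The same kernel launches and cuBLAS paths are used; no porting step is needed.
a = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)
b = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)
c = a @ b
torch.cuda.synchronize()
```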
The MI308, by contrast, is a detuned version of the MI300X. It still carries up to 192 GB of HBM3, enough for larger language models, but trims core count and clock speed so that peak performance stays below the export threshold. TechPowerUp lists the sibling MI308X with 19,456 stream processors and a PCIe 5.0 interface; performance and power are both dialed back, yet the large-memory selling point remains. The MI308 runs on AMD’s open-source ROCm stack; porting workloads takes somewhat more effort than with CUDA but reduces single-vendor dependence, which appeals to cloud providers seeking diversification.
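To give a feel for what that porting effort looks like, the sketch below is an illustrative check using PyTorch (not anything vendor-documented for the MI308 specifically): PyTorch’s ROCm builds reuse the torch.cuda API surface via HIP, so most framework-level model code runs as-is, and it is mainly custom CUDA kernels that need hipify-style conversion and retuning.

```python
# A sketch of cross-vendor backend detection, assuming a standard PyTorch build.
# PyTorch's ROCm packages expose the familiar torch.cuda namespace (backed by HIP),
# so framework-level code ports with little change; torch.version tells the stacks apart.
import torch

if torch.version.hip is not None:
    backend = f"ROCm/HIP {torch.version.hip}"     # e.g. an MI300-class accelerator
elif torch.version.cuda is not None:
    backend = f"CUDA {torch.version.cuda}"        # e.g. an H20 or H100
else:
    backend = "CPU only"
print("backend:", backend)

# 'cuda' is the device string on both stacks, so the same model code covers either vendor.
device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4096, 4096, device=device)
y = x @ x.transpose(0, 1)
```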
Overall, the H20 is akin to a speed-capped sports car—still fast and easy to get on the road—whereas the MI308 resembles a touring car with a bigger fuel tank: it hauls more data while trading a bit of horsepower for compliance and flexibility.
Immediate Market Reaction
After the announcement, Nvidia spiked to a record US$172.2, pushing its market cap above US$4.2 trn; AMD rose almost 7 % in the same session, lifting the Philadelphia Semiconductor Index to a two-month high. Bank of America estimates that shipping one million compliance-limited GPUs into China this year could add US$3–4 bn in China revenue for each company, lifting FY2025 EPS by roughly 5–7 %.
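As a rough illustration of how such a sell-side estimate is assembled (not Bank of America’s actual model), the sketch below multiplies an assumed unit count by an assumed selling price and margin; every input is a placeholder.

```python
# Back-of-envelope sketch of a China-revenue/EPS estimate.
# All inputs are hypothetical placeholders, not Bank of America's assumptions.
units      = 300_000      # compliance-limited GPUs shipped to China (placeholder)
asp        = 12_000       # average selling price per board, USD (placeholder)
net_margin = 0.45         # incremental net margin on those sales (placeholder)
shares     = 24.4e9       # diluted shares outstanding (placeholder)

china_revenue = units * asp                       # -> US$3.6 bn with these inputs
eps_uplift    = china_revenue * net_margin / shares

print(f"incremental China revenue: ${china_revenue / 1e9:.1f} bn")
print(f"incremental EPS:           ${eps_uplift:.2f} per share")
```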
Mainland A-share and Hong Kong semiconductor names rallied as well: AI-server ODMs, HBM packaging houses and CPO optical-module makers gained an average of 3–5 % intraday, reflecting bets on a supply-chain thaw and fresh AI-infrastructure capex.
Industry Implications
Analysts view the “case-by-case” licenses as leverage in wider negotiations over tech controls and critical materials such as rare earths. While the H20 and MI308 are green-lit, the next-generation Blackwell B100/B200 remain blocked, underscoring Washington’s aim to “grant revenue without surrendering advantage”: dynamic performance thresholds pace China’s compute upgrades while preserving U.S. firms’ earnings lead in the global AI race.
For Chinese buyers, the GPUs fill near-term compute gaps but worsen the HBM3 supply crunch; for domestic GPU makers, the competitive bar rises again, forcing a faster catch-up in software ecosystems and efficiency; and for global cloud providers, deploying multiple chip lines in parallel to spread policy risk looks set to become the norm.