Hardware Price Crisis: GPUs and RAM in 2025-2026
DDR5, RTX 5090, RTX 5080, and DGX Spark Market Analysis

The hardware market is in turmoil: RAM and GPU prices are both climbing sharply.
Since September 2025, we’ve witnessed unprecedented price increases across PC components. DDR5 memory kits now cost more than high-end GPUs, the RTX 5090 has vanished from shelves, and anyone building AI infrastructure faces difficult choices. Here’s what’s happening and what it means for self-hosters and developers.
This article summarises and links out to my other posts on hardware and its costs.
1 The Memory Crisis
The root cause of today’s hardware chaos is a global DRAM shortage. AI companies have purchased memory supplies years in advance for datacenter infrastructure, creating a demand wave that’s now cascading through consumer markets.
RAM prices surged dramatically in late 2025, with increases ranging from 163% to a staggering 619% across global markets. In the US, high-capacity DDR5 kits that cost around US$1,000 just months ago now top US$2,200—some 192GB kits now exceed the RTX 5090’s US$1,999 MSRP.
In Australia, the situation has worsened through January 2026. According to recent GPU and RAM price tracking, RAM jumped 38% to around A$689 for common configurations. For those running local LLMs, where memory capacity directly limits the size of model you can run, this creates real constraints.
2 RTX 5090 and RTX 5080: The Vanishing Act
NVIDIA’s flagship consumer GPUs have become increasingly difficult to source. In the US, the RTX 5090 launched at US$1,999 MSRP but has largely vanished from retail, with secondary market scalpers demanding US$6,000+. NVIDIA has admitted to a looming RTX 50-series shortage, with reports suggesting 30-40% production cuts in H1 2026.
In Australia, we’re seeing steady price climbs rather than complete stockouts. The January 2026 price data shows RTX 5090 prices jumped 15.2% to an average of A$5,566, while RTX 5080 rose 6% to A$1,899. Compare this to earlier benchmarks from June 2025 when these cards first hit the Australian market.
For those comparing NVIDIA GPUs for AI workloads, the calculus has changed: the price-to-VRAM ratio that made consumer cards attractive for home LLM hosting no longer looks favourable.
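As a rough illustration of that shift, the sketch below computes dollars per gigabyte of graphics or unified memory, using the Australian prices quoted in this article and the standard memory configurations (32GB on the RTX 5090, 16GB on the RTX 5080, 128GB on DGX Spark). It is a single-axis comparison only; bandwidth, software support, and resale value all differ.

```python
# Back-of-envelope price per GB of graphics/unified memory.
# Prices are the January 2026 Australian figures cited in this article;
# memory sizes are the standard configurations for each product.

cards = {
    "RTX 5090":  {"price_aud": 5566, "memory_gb": 32},
    "RTX 5080":  {"price_aud": 1899, "memory_gb": 16},
    "DGX Spark": {"price_aud": 6249, "memory_gb": 128},  # entry retail price, unified memory
}

for name, c in cards.items():
    per_gb = c["price_aud"] / c["memory_gb"]
    print(f"{name:10s} A${c['price_aud']:>5} {c['memory_gb']:>3} GB  ~A${per_gb:.0f}/GB")
```

On this one metric the DGX Spark comes out well ahead, but remember it trades raw bandwidth for capacity.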
3 DGX Spark: An Alternative Path
Amid the consumer GPU chaos, NVIDIA’s DGX Spark presents an interesting alternative. This “personal AI supercomputer” packs 128GB of unified memory and delivers 1 PFLOPS of FP4 AI performance in a desktop form factor.
DGX Spark pricing in Australia ranges from A$6,249 to A$7,999 at major retailers like Centrecom, Scorptec, and PCCaseGear. While that’s not cheap, consider what you’d pay for an RTX 5090 at A$5,566 plus 128GB of DDR5 RAM at current prices.
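To make that comparison concrete, here is the same arithmetic as a small sketch, using only the figures quoted above. It deliberately ignores the CPU, motherboard, and storage a discrete-GPU build would also need, so it flatters the RTX 5090 side if anything.

```python
# How much budget is left for 128 GB of DDR5 (and everything else)
# if you build around an RTX 5090 instead of buying a DGX Spark?
# Prices are the Australian figures cited in this article.

dgx_spark_low, dgx_spark_high = 6249, 7999   # A$ retail range
rtx_5090 = 5566                              # A$ average street price

for spark in (dgx_spark_low, dgx_spark_high):
    headroom = spark - rtx_5090
    print(f"DGX Spark at A${spark}: A${headroom} left over for 128 GB of DDR5 and the rest of the build")
```

At the low end of DGX Spark pricing, the leftover budget does not come close to covering 128GB of DDR5 at current prices.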
The performance comparison between DGX Spark, Mac Studio, and RTX 4080 shows where unified memory architectures shine—particularly for large models that would otherwise require expensive multi-GPU setups.
4 Impact on Self-Hosting AI
For developers running AI infrastructure on consumer hardware, these price increases force a rethink. The economics that made home LLM hosting attractive six months ago have shifted.
Key considerations now include:
- Memory bandwidth matters more than ever — single-stream LLM inference is bound largely by how fast weights can be streamed from memory, with PCIe lanes and memory architecture shaping multi-GPU setups (see the throughput sketch after this list)
- Unified memory solutions — Systems like DGX Spark or Mac Studio avoid the VRAM bottleneck entirely
- Alternative accelerators — The rise of LLM ASICs offering 10-50× better performance per watt than GPUs
- Professional cards — The RTX 5880 Ada with 48GB remains available, though at workstation prices
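To put the bandwidth point in numbers: a common back-of-envelope for single-stream decoding assumes each generated token streams the full set of weights once, so tokens per second is roughly memory bandwidth divided by model size in bytes. The bandwidth figures below are approximate published specs and the model sizes are illustrative, so treat the results as upper bounds rather than benchmarks.

```python
# Rough single-stream decode estimate: tokens/s ~ bandwidth / weight bytes,
# assuming generation is memory-bandwidth-bound and every token streams the
# full set of weights once. Real throughput is lower (KV cache, overhead).
# Bandwidth numbers are approximate published specs, for illustration only.

bandwidth_gb_s = {
    "RTX 5090 (GDDR7)":         1792,
    "DGX Spark (LPDDR5X)":       273,
    "Desktop DDR5 (2-channel)":   90,
}

models_gb = {
    "8B @ 4-bit":  4.5,
    "70B @ 4-bit": 38.0,
}

for hw, bw in bandwidth_gb_s.items():
    for model, size in models_gb.items():
        print(f"{hw:26s} {model:12s} ~{bw / size:6.1f} tok/s (upper bound)")
```

Note that the RTX 5090's 32GB of VRAM cannot hold the 70B weights at all, regardless of speed — which is exactly the gap unified-memory systems fill.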
5 What Comes Next
The memory shortage isn’t ending soon. Industry analysts expect constraints to persist through 2026 as AI datacenter demand continues outpacing supply expansion. For individual developers and small teams, this means:
- Buy RAM now if you need it — Prices are unlikely to drop soon
- Consider unified memory systems — DGX Spark and Mac Studio sidestep the DDR5 crisis
- Right-size your local models — Smaller, quantized models may be the pragmatic choice (a sizing sketch follows this list)
- Hybrid approaches — Combine local inference for privacy with cloud APIs for heavy lifting
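For the right-sizing point, the arithmetic is simple: quantized weight memory is roughly parameters × bits ÷ 8, plus headroom for the KV cache and runtime. The sketch below uses an assumed 1.2× overhead factor, which is a rule of thumb rather than a measured figure.

```python
# Estimate whether a quantized model fits a given memory budget.
# weights_gb ~ params_billion * bits / 8; the 1.2x factor is a rough
# allowance for KV cache and runtime overhead (an assumption, not a spec).

def fits(params_billion: float, bits: int, budget_gb: float, overhead: float = 1.2) -> bool:
    weights_gb = params_billion * bits / 8
    return weights_gb * overhead <= budget_gb

budgets = {"RTX 5080 16 GB": 16, "RTX 5090 32 GB": 32, "DGX Spark 128 GB": 128}
candidates = [(8, 4), (14, 4), (32, 4), (70, 4), (70, 8)]

for name, budget in budgets.items():
    ok = [f"{p}B@{b}-bit" for p, b in candidates if fits(p, b, budget)]
    print(f"{name:18s} fits: {', '.join(ok) or 'none of the listed sizes'}")
```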
The hardware landscape has fundamentally changed. Planning around current availability and realistic pricing, rather than MSRPs, will save frustration in the months ahead.