Private AI discussion

The AI hardware market presents compelling private investment opportunities as the industry shifts from training to inference. We analysed 12 private companies across accelerators, memory subsystems, and interconnect – here’s what we found.


The Big Picture

Four key themes emerged from our analysis:


Inference is where the money is. Training happens once; inference runs continuously at scale. Companies solving inference bottlenecks command premium valuations.


Memory is the constraint, not compute. GPU memory bandwidth, not raw FLOPS, is what limits LLM inference throughput (see the back-of-envelope sketch after this list). CXL pooling and in-memory compute sidestep expensive HBM.


NVLink lock-in creates openings. NVIDIA’s proprietary interconnect frustrates multi-vendor deployments. Open standards are gaining traction.


Valuations are wildly dispersed. Cerebras at $22B vs d-Matrix at $2B for comparable inference positioning; earlier-stage names offer better risk/reward.
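To make the memory-wall claim concrete, here is a minimal back-of-envelope sketch in Python. The hardware and model figures (roughly H100-class bandwidth and FP8 compute, a 70B-parameter model at one byte per weight, batch size 1) are our illustrative assumptions, not any vendor's specification.

# Back-of-envelope: why LLM decoding is bandwidth-bound, not compute-bound.
# All figures are illustrative assumptions, not vendor specifications.

weight_bytes = 70e9 * 1          # ~70 GB of weights read per generated token (1 byte/weight, batch size 1)
hbm_bandwidth = 3.35e12          # assumed memory bandwidth, bytes/s
peak_flops = 2e15                # assumed peak dense compute, FLOP/s

# Each decoded token streams every weight from memory once and does ~2 FLOPs per weight.
tokens_per_s_memory_bound = hbm_bandwidth / weight_bytes   # ~48 tokens/s
tokens_per_s_compute_bound = peak_flops / (2 * 70e9)       # ~14,000 tokens/s

print(f"memory-bandwidth ceiling: {tokens_per_s_memory_bound:,.0f} tokens/s")
print(f"compute ceiling:          {tokens_per_s_compute_bound:,.0f} tokens/s")
# The compute ceiling is roughly 300x the bandwidth ceiling, which is why memory
# bandwidth and capacity, not FLOPS, dominate inference economics.

Under these assumptions the accelerator spends most of a decode step waiting on memory, which is the gap the memory-focused names below are attacking.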




Our Top 5 Picks

Ranked by risk-adjusted return potential for growth-oriented investors.

#1 d-Matrix

Digital In-Memory Inference Accelerator

Valuation: $2.0B  |  Raised: $450M  |  Backers: Microsoft (M12), Temasek, QIA

Corsair accelerator uses digital in-memory compute – claims 10x faster inference at 3x lower cost than GPUs. Already shipping with SquadRack reference architecture (Arista, Broadcom, Supermicro).

Why we like it: Best risk/reward in the space. Digital approach avoids yield issues plaguing analog competitors. Microsoft backing de-risks enterprise adoption. $2B valuation vs $22B Cerebras.

Key risk: NVIDIA inference roadmap; execution at scale.

#2 FuriosaAI

Tensor Contraction Processor

Valuation: ~$1-1.5B  |  Raised: $246M+  |  HQ: Seoul, Korea

RNGD chip delivers 2.25x better inference performance per watt than GPUs, built on the company's proprietary Tensor Contraction Processor (TCP) architecture. Won LG AI Research as anchor customer. TSMC mass production is underway.

Why we like it: Rejected Meta’s $800M acquisition offer – management believes in standalone value. IPO targeted 2027. Differentiated architecture, not a GPU clone.

Key risk: Korea-centric investor base; Series D pricing.

#3 Panmnesia

CXL Memory Pooling Fabric

Stage: Early  |  Raised: $80M+  |  Origin: KAIST spinout

CXL 3.1 fabric switch enables GPU memory pooling at terabyte scale – attacking the memory wall that limits LLM context windows. The company claims single-digit nanosecond latency.

Why we like it: Memory pooling is inevitable. HBM runs roughly $25/GB against ~$5/GB for DDR5 (rough cost math below). Enfabrica’s $900M NVIDIA acqui-hire validates this layer, and we see Panmnesia as the “next Enfabrica.”

Key risk: CXL 3.x adoption timing.
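To put rough numbers on the pooling thesis, the sketch below compares the cost of a terabyte of capacity using only the per-GB prices quoted above. The 1 TB figure is our illustrative choice, and the comparison deliberately ignores CXL switch/controller cost and bandwidth differences.

# Rough cost comparison for 1 TB of model/KV-cache capacity, using the
# per-GB prices quoted above ($25/GB HBM vs $5/GB DDR5). Illustrative only.

capacity_gb = 1024
hbm_cost = capacity_gb * 25      # ~$25,600
ddr5_cost = capacity_gb * 5      # ~$5,120

print(f"HBM  : ${hbm_cost:,}")
print(f"DDR5 : ${ddr5_cost:,}")
print(f"delta: ${hbm_cost - ddr5_cost:,} per TB if overflow can live in pooled DDR5")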

#4 Upscale AI

Open-Standard AI Networking

Stage: Seed  |  Raised: $100M+  |  Backers: Mayfield, Maverick Silicon, Qualcomm

Building open-standard networking (UALink, Ultra Ethernet) to compete with NVIDIA’s proprietary NVLink. Pre-product, but an exceptional team.

Why we like it: Founded by Barun Kar and Rajiv Khemani – they helped build Palo Alto Networks, Innovium, and Cavium. This is a team bet with three proven exits behind it.

Key risk: Pre-product; NVLink ecosystem stickiness.

#5 Axelera AI

Edge AI Accelerator

Valuation: ~$500M+  |  Raised: $200M+  |  Backers: Samsung Catalyst, EU Innovation Council

Europa chip delivers 629 TOPS using digital in-memory compute and RISC-V. 120+ customers in retail, robotics, security. €61.6M EuroHPC grant for next-gen Titania chip.

Why we like it: Leading European play with EU sovereignty tailwinds. Samsung backing provides strategic optionality. Real customers, real revenue.

Key risk: Edge TAM smaller than data center.


The Rest of the Field (ratings: 1 = best)

Companies we evaluated but rank lower due to valuation, risk profile, or structural issues:

SiMa.ai

Edge MLSoC · $270M raised · Rating: 3.5

Full-stack edge platform, but it competes in a crowded market against NVIDIA’s Jetson line.

Cerebras

Wafer-scale · $22B valuation · Rating: 4

Radical architecture but valuation prices in perfection. Event trade only.

Tenstorrent

RISC-V AI chips · $3.2B valuation · Rating: 5

Jim Keller’s star power, but an open RISC-V foundation leaves little moat.

Etched

Transformer ASIC · $5B valuation · Rating: 5

Bold bet on transformer permanence. Binary outcome.


Not Investable

These companies came up in our research but are no longer private opportunities:

  • Groq – Acquired by NVIDIA for ~$20B (December 2025)
  • Enfabrica – NVIDIA acqui-hired CEO and team for ~$900M (September 2025)
  • Rivos – Meta acquisition announced (September 2025)
  • Astera Labs – IPO’d March 2024, now $23B+ market cap (NASDAQ: ALAB)

Key Risks

Before deploying capital, consider:

⚠️ NVIDIA isn’t standing still. Blackwell and Rubin will improve inference. Startups need to stay ahead of a well-funded incumbent.

⚠️ Hyperscalers are building captive silicon. Google (TPU), Amazon (Trainium), and Microsoft (Maia) all shrink the addressable market.

⚠️ TSMC concentration. Nearly all targets fab at TSMC. Taiwan risk affects the entire portfolio.

⚠️ CXL timing uncertainty. Memory pooling benefits require ecosystem maturity that may take longer than expected.


Suggested Allocation

For a growth mandate targeting 5x+ returns:

Core Holdings (50-60%)
d-Matrix, FuriosaAI – validated products at reasonable valuations

High Conviction (25-35%)
Panmnesia, Upscale AI – earlier stage with higher upside potential

Opportunistic (10-20%)
Axelera, SiMa.ai – edge exposure, likely smaller outcomes ($1-3B)


Next Steps

  1. Prioritise due diligence on d-Matrix and FuriosaAI
  2. Conduct technical DD on Panmnesia’s CXL 3.1 IP
  3. Monitor Upscale AI for first silicon tape-out
  4. Watch Cerebras IPO for public market entry if pricing rationalises

Analysis based on publicly available information as of January 2026. This is not investment advice. Due diligence recommended before any investment decisions.

