

FutureCompute AMC Fund February 19th Update
| Fund / Index | Dec 5, 2025 | Peak | Feb 18, 2026 | Return |
|---|---|---|---|---|
| FutureCompute AMC | 0.00% (inception) | +3.79% (Jan 16) | +1.92% | +1.92% |
| QQQ | 0.00% | n/a | −3.15% | −3.15% |
| Nasdaq Composite (IXIC) | 0.00% | n/a | −3.51% | −3.51% |
| ARKK | 0.00% | n/a | −12.84% | −12.84% |

Who Benefits Most from the $725B DC Capex Wave?
2026 data centre capex from hyperscalers plus neoclouds totals ~$725B, up ~36% YoY. But these companies were already spending heavily, so the real question is: where does the incremental ~$190B of new spending land, who sees the biggest revenue uplift relative to their existing business, and who has the supply constraints to protect margins?
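A quick sanity check on the incremental pool, using only the headline figures above (a sketch; the ~36% growth rate drives the result):

```python
# Back out the incremental 2026 DC capex from the headline figures in this note.
total_2026 = 725e9   # hyperscaler + neocloud DC capex, 2026
yoy_growth = 0.36    # ~36% YoY

prior_year = total_2026 / (1 + yoy_growth)   # implied 2025 base: ~$533B
incremental = total_2026 - prior_year        # new spending: ~$192B

print(f"2025 base:   ${prior_year / 1e9:.0f}B")
print(f"Incremental: ${incremental / 1e9:.0f}B")
```

The ~$190B quoted above is this incremental figure, rounded.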

Tier 1 — Transformed: DC Capex IS the Company
These businesses are being fundamentally reshaped. DC demand matches or exceeds their entire current revenue.
VRT — Vertiv (Cooling & Power Distribution)
$10B revenue → $10-13B DC demand = ~115% of current revenue
Arguably the most transformed company in the entire chain. Five years ago Vertiv was a sleepy $5B industrial business. Now orders are up 60% YoY, backlog is $9.5B with a 1.4x book-to-bill. Every megawatt of DC capacity needs cooling racks and power distribution — and Vertiv is one of very few scaled suppliers. Supply is genuinely constrained, supporting pricing. The incremental uplift from 2024 to 2026 could be 50-70% revenue growth.
ANET — Arista Networks (DC Networking)
$9B revenue → $8-10B DC demand = ~100% of current revenue
Near pure-play DC networking, taking share from Cisco in back-end AI cluster connectivity. AI-specific networking revenue was ~$750M in 2025, targeting $1.5B+ in 2026. The shift to 800G+ Ethernet for AI clusters is a product cycle Arista owns. Every new GPU rack needs high-bandwidth switching — as the installed GPU base doubles, networking demand follows.
NVDA — NVIDIA (GPUs + NVLink + Systems)
$213B revenue (FY26) → ~$200B DC = ~90%. FY27 consensus: $316B
Already transformed — DC revenue grew from $15B (FY23) to ~$188B (FY26) in three years. The story is now about sustaining 50%+ growth, not discovering it. Blackwell and Vera Rubin carry roughly half a trillion dollars of revenue visibility. Supply remains constrained (GPUs sold out), keeping gross margins at 73-75%. The long-term risk is custom silicon gradually eroding ~90% market share, but that’s a 2028+ concern.
Tier 2 — Major Driver: DC Capex Reshaping 40-65% of Revenue
DC spending is the dominant growth engine but these companies retain meaningful diversification.
TSM — TSMC (Fabricates Everything)
$122B revenue (CY25) → ~$75-90B DC-linked = ~65%. Guiding +30% for CY26
The ultimate picks-and-shovels play. TSMC wins regardless of whether NVIDIA, Broadcom, AMD, or custom silicon wins — they fab all of them. HPC/AI is now 58% of revenue (up from 39% in 2022). Annual price increases locked in through 2029 because there is no alternative at advanced nodes. The supply constraint is physics — building leading-edge fabs takes 2-3 years, so capacity stays tight through 2027+.
AVGO — Broadcom (Custom ASICs + AI Networking)
$64B revenue (incl. $27B VMware) → ~$30-40B DC semi revenue = ~55%
The hidden story is how fast AI is reshaping the semiconductor half. AI revenue was ~$24B in FY2025, growing 60%+ YoY, with a $73B AI order backlog. Broadcom designs the custom ASICs for Google (TPUs), Amazon, and now OpenAI. As hyperscalers diversify away from NVIDIA, Broadcom is the direct beneficiary. AI networking adds another layer. The incremental growth is concentrated in the semi side — VMware software grows at low-double-digits.
MU — Micron (HBM & Server DRAM)
$42B revenue (TTM) → ~$15-20B DC-linked = ~42%. Q1 FY26 was record $13.6B
Every AI GPU needs 6-8 HBM stacks, and HBM is in severe shortage. Micron is the only US-listed pure play — SK Hynix (~50% share) and Samsung (~25%) are Korean-listed. HBM pricing runs at 3-5x regular DRAM per bit, and the constraint won’t ease before 2027. Margins went from negative to record levels in 18 months. Risk: Korean competitors and eventual oversupply could compress pricing.
Tier 3 — Solid Tailwind: DC is 10-38% of Revenue
Meaningful exposure but well-diversified. Less upside but also less risk if DC spending slows.
PWR Quanta Services / EME EMCOR — DC Construction
PWR: $28B rev, ~$8-12B DC (~36%). EME: $16B rev, ~$5-7B DC (~38%)
Record backlogs ($39B for PWR) driven by DC buildout AND the grid infrastructure to power it. The key constraint is skilled craft labour — you can’t quickly train data centre electricians, which protects margins. They benefit twice: building the DCs and building the transmission/substation infrastructure to feed them. Lower risk than pure-play DC names because they also serve grid, renewables, and industrial markets.
ETN Eaton / CAT Caterpillar — Power Infrastructure
ETN: $27B rev, ~$6-9B DC (~28%). CAT: $66B rev, ~$5-8B DC (~10%)
Eaton (UPS, switchgear, PDUs) has more DC leverage at ~28% of revenue, with 12-18 month lead times supporting pricing. Caterpillar (backup generators) benefits but DC is ~10% of a massive diversified business — a nice tailwind, not thesis-changing. Both benefit from supply constraints on electrical equipment but neither is being reshaped by this cycle.
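The tiering above boils down to one ratio: midpoint DC demand divided by current revenue. A sketch that reproduces the percentages quoted in each entry (all figures $B, taken from this note):

```python
# DC demand as a share of current revenue, using midpoints of the ranges above.
companies = {
    "VRT":  (10,  (10, 13)),    # Vertiv
    "ANET": (9,   (8, 10)),     # Arista
    "NVDA": (213, (200, 200)),  # NVIDIA (point estimate)
    "TSM":  (122, (75, 90)),    # TSMC
    "AVGO": (64,  (30, 40)),    # Broadcom
    "MU":   (42,  (15, 20)),    # Micron
    "PWR":  (28,  (8, 12)),     # Quanta
    "EME":  (16,  (5, 7)),      # EMCOR
    "ETN":  (27,  (6, 9)),      # Eaton
    "CAT":  (66,  (5, 8)),      # Caterpillar
}

def dc_share(rev, lo, hi):
    """Midpoint DC demand over current revenue."""
    return (lo + hi) / 2 / rev

for name, (rev, (lo, hi)) in sorted(
        companies.items(), key=lambda kv: -dc_share(kv[1][0], *kv[1][1])):
    print(f"{name:5s} {dc_share(rev, lo, hi):5.0%}")
```

VRT at ~115% down to CAT at ~10% matches the tier boundaries used above.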
Why Supply Constraints Matter: Pricing Power Across the Chain
The revenue uplift is only half the story. What makes this cycle particularly attractive is that supply bottlenecks are creating a seller’s market at almost every layer, meaning growth comes with margin expansion rather than compression.
TSMC — monopoly on advanced nodes, annual price rises locked in through 2029. NVIDIA — GPUs sold out, 73-75% gross margins at massive scale. HBM memory — physical shortage, 3-5x premium pricing. Vertiv/Eaton — cooling and power equipment backlogs at record levels, 12-18 month lead times. Quanta/EMCOR — skilled labour shortage in electrical trades creates barriers to competition.
These constraints look durable through at least 2027.
What to Watch
Custom silicon adoption — if hyperscalers shift faster to Trainium/TPUs/Broadcom ASICs, NVIDIA’s share erodes but TSMC and AVGO benefit. Net neutral for the overall pool.
HBM supply normalisation — if the memory shortage resolves, HBM pricing compresses and Micron’s margins come under pressure. Likely a 2027-2028 event.
Neocloud financial health — CoreWeave has $14B debt with $9.7B maturing within 12 months. If neoclouds struggle, ~$55B of the capex pool is at risk.
Capex sustainability — hyperscalers now spend 45-57% of revenue on capex. Any pullback in AI demand or ROI questions could trigger rapid capex cuts, hitting all suppliers simultaneously.
Sources: Company earnings reports (Q3/Q4 2025), Dell’Oro Group, IoT Analytics DC Infrastructure Report, IEEE ComSoc, analyst estimates. Revenue figures are latest reported full-year or TTM. DC revenue estimates are approximate allocations from the ~$725B total pool based on industry breakdowns and company disclosures. February 2026.

The Bubble Is in SaaS: The “Nokia Trap” Is Coming
The market keeps talking about an “AI bubble.” That misses the real fragility.
The bigger bubble risk is in SaaS application software—because AI enables new entrants to build broader, business-native solutions that solve conjoined problems end-to-end, not isolated tasks. That shifts value away from “apps glued together with humans” toward platforms that understand the business process.
The Nokia Trap (what’s actually changing)

Nokia didn’t lose because people wanted a cheaper phone or a slightly better phone.
It lost because the category stopped being “a phone.” The product became a computing platform that absorbed whole new jobs (apps, internet, maps, media). Nokia was optimised for the old definition of the product.
SaaS is facing the same trap:
- Many SaaS tools are optimised for isolated point problems, not the productivity of the work as a whole
- The next generation solves the whole job, across steps, systems, and teams
When the category expands, the old leaders can look suddenly narrow.
The real threat: “bigger problem” products
Most businesses don’t have “a CRM problem” or “a ticketing problem” or “a billing problem.”
They have a revenue problem, a service problem, a fulfilment problem, a compliance problem—spanning multiple tools and handoffs.
Classic SaaS often works like this:
- a set of apps
- connected by integrations
- held together by people doing manual work, chasing exceptions, updating fields, and reconciling gaps
AI-native challengers can compete on a different axis:
- one model of the business workflow
- one place where context lives
- automation that moves work forward across steps
- handling exceptions without a human being the glue
That’s not “cheaper seats.” It’s a different product category: outcome-focused systems rather than tool-focused apps.
Why this hits SaaS valuations
If buyers shift from “best tool” to “best end-to-end outcome,” the SaaS moat changes:
- Feature depth matters less than workflow ownership
- Integrations matter less than shared context
- UI matters less than automation and decisioning
- Point solutions get re-framed as “components,” not “systems”
That’s exactly where multiples compress: when a market realises a product is becoming a replaceable module inside a bigger platform.
What to watch (the early signs)
Not “seat price down.” Look for:
- new vendors winning by promising cycle-time reduction (close faster, ship faster, resolve faster)
- incumbents expanding into adjacencies defensively (copying platform moves)
- customers consolidating around process owners (systems of action), not systems of record
- services headcount shifting from “ops glue” to “exception oversight”
Bottom line
AI isn’t just making software cheaper.
It’s enabling a change in what customers buy: solutions that map to the business, solve multiple connected problems, and reduce the need for humans to stitch workflows together.
That’s the Nokia Trap for SaaS: people keep asking for “better apps” right up until the product category shifts to “business platforms that run the work.”
More exposed (“–”): SaaS / app-layer
- Salesforce — CRM
- ServiceNow — NOW
- Workday — WDAY
- Adobe — ADBE
- Atlassian — TEAM
- Snowflake — SNOW
- Datadog — DDOG
- HubSpot — HUBS
- Zoom — ZM
- DocuSign — DOCU
- Okta — OKTA
- Twilio — TWLO
- Shopify — SHOP
- Monday.com — MNDY
- Asana — ASAN
- Smartsheet — SMAR
- PagerDuty — PD
- Freshworks — FRSH
- Elastic — ESTC
- GitLab — GTLB
- MongoDB — MDB
- Cloudflare — NET
- Unity — U
- Toast — TOST
Likely beneficiaries (“+”): AI platform / infra / “system-of-action” enablers
- Palantir — PLTR
- Microsoft — MSFT
- Alphabet — GOOGL
- Amazon — AMZN
- Meta — META
- NVIDIA — NVDA
- AMD — AMD
- Broadcom — AVGO
- TSMC — TSM
- ASML — ASML
- Arista Networks — ANET
- Super Micro Computer — SMCI
- Oracle — ORCL
- IBM — IBM
- Cisco — CSCO
- Palo Alto Networks — PANW
- CrowdStrike — CRWD
- Zscaler — ZS

Don’t Assume Transformers Last Forever: Custom Silicon Risk and GPU Impact
3 min read | For investors tracking NVDA, hyperscalers, and AI infrastructure

Emerging AI architectures — particularly Mamba and State Space Models (SSMs) — threaten to disrupt the current AI landscape.
Both hardware and software leaders face risk, though hardware companies are more exposed: custom silicon takes 3-5 years to design, while model companies can retrain and pivot in 12-18 months.
The Game-Changer: Mamba and State Space Models
The transformer problem: Today’s AI models (GPT, Claude, Gemini) use “attention” — comparing every word to every other word. Double the context length, quadruple the compute.
That’s why long-context models are so expensive and why hyperscalers are spending hundreds of billions on GPU clusters.
The SSM solution: State Space Models process sequences like a running summary rather than looking at everything simultaneously.
Mamba — the leading SSM architecture, developed by researchers at Carnegie Mellon and Princeton — scales linearly with context length.
The impact:
✓ 5x faster on long sequences
✓ Fraction of the memory footprint
✓ Dramatically lower compute per token
Who’s backing it: Mistral (Codestral Mamba), AI21 Labs (Jamba), Together AI. Hybrid transformer-Mamba models are already in production.
The catch: Pure Mamba hasn’t matched transformers for complex reasoning at frontier scale — yet. But hybrids are proliferating, and the trajectory is clear.
What This Means for GPU Demand
If SSMs gain traction, the compute economics of AI change fundamentally.
Transformer world:
- Compute scales quadratically with context
- High memory per token
- GPU hours: baseline
- Long context: expensive, constrained
SSM world:
- Compute scales linearly with context
- Low memory per token
- GPU hours: potentially 50-80% less
- Long context: cheap, abundant
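The two lists reduce to a scaling difference. A toy comparison, normalised to a 4k-token context; the constants are illustrative, not measured kernel costs:

```python
# Relative compute vs context length: attention is quadratic, SSMs linear.
BASE = 4096  # normalisation point (tokens)

for ctx in (4096, 16384, 65536, 262144):
    attn = (ctx / BASE) ** 2   # attention: double the context, 4x the compute
    ssm = ctx / BASE           # SSM: double the context, 2x the compute
    saving = 1 - ssm / attn
    print(f"{ctx:>7} tokens | attention {attn:7.0f}x | ssm {ssm:5.0f}x | saving {saving:.0%}")
```

At 16k tokens the toy SSM already needs 75% less compute than attention, inside the 50-80% band above, and the gap widens with context. Real workloads mix prefill, decode, and MLP compute, so treat this as directional.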
The demand question: Efficiency gains could reduce absolute GPU demand — or unlock new use cases that consume the savings. History suggests the latter, but the transition creates uncertainty.
The NVIDIA risk: SSMs still run on CUDA — but if you need 50-80% fewer GPUs per workload, that’s a volume problem even if NVIDIA keeps 100% share.
The $600 billion hyperscaler capex trajectory assumes insatiable compute demand. SSMs challenge that assumption.
The custom silicon risk: TPUs, Trainium, and MTIA are optimised for transformer attention patterns. SSMs have different compute profiles.
Billions in chips could become sub-optimal — and reduced compute demand means less need for any custom silicon.
Who’s Exposed, Who’s Protected
NVIDIA — Moderate exposure
Keeps architecture flexibility, but volume at risk if compute demand drops. Pricing power may erode.
Google, Amazon — High exposure
Custom silicon optimised for wrong architecture and potentially overbuilt for reduced demand. Double hit.
Microsoft — Hedged
NVIDIA dependency means less stranded silicon, but Azure capex still at risk.
Neo-Clouds (CRWV, NBIS) — High exposure
Leveraged to GPU demand. If demand drops 50%, the debt doesn’t.
OpenAI, Anthropic — Lower exposure
Compute costs drop. They’ll adopt what works. Net beneficiaries.
The Bottom Line
SSMs and Mamba may not kill transformers outright — hybrids are more likely.
But they rewrite the compute economics and challenge the “insatiable demand” narrative underlying current valuations.
NVIDIA keeps architectural flexibility but faces volume risk. Hyperscalers face both wrong-architecture risk and overbuilding risk. Neo-clouds are leveraged to a demand curve that may flatten.
The model companies may be the cleanest winners — their compute bills go down while their capabilities go up.
When efficiency improves dramatically, the biggest spenders have the most to lose.

AI Data Center Capital Deployment: 2015-2025 Analysis
Annual vs Cumulative: The Critical Distinction
2025 annual spend: AI captured 58% share.
2015-2025 cumulative capital: AI represents 19% of total deployed infrastructure.
The valuation error: Markets anchor to annual flow metrics while ignoring the $2.29T capital stock already in operation.
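The flow-versus-stock gap in one calculation (figures from this section):

```python
# AI's share of the cumulative capital stock vs its share of annual flow.
non_ai_stock = 2.29e12  # non-AI DC capital deployed 2015-2025
ai_stock = 0.53e12      # AI DC capital deployed 2015-2025

ai_share = ai_stock / (ai_stock + non_ai_stock)
print(f"AI share of cumulative stock: {ai_share:.0%}")  # ~19%
# ...versus the 58% share of 2025 *annual* spend quoted above.
```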
Financial Implications
| Metric | Non-AI Infrastructure | AI Infrastructure |
|---|---|---|
| Capital Deployed (2015-2025) | $2.29T | $530B |
| Annual Revenue Generated | $900B – $1.1T | $150B – $200B |
| Revenue per $ Deployed | $0.40 – $0.48 | $0.28 – $0.38 |
| Gross Margin | 25% – 30% | 15% – 22% |
| Utilization Rate | 60% – 75% | 35% – 50% |
| Power Consumption (per rack) | Baseline | 3x – 5x higher |
| Refresh Cycle | 5 – 7 years | 2 – 3 years |
Key finding: Non-AI infrastructure delivers 30-40% higher revenue per deployed dollar. AI requires 1.5-1.7x more capital to generate equivalent gross profit.
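The 30-40% figure falls out of the table’s midpoints:

```python
# Revenue per deployed dollar, midpoints of the ranges in the table above.
non_ai = (0.40 + 0.48) / 2   # $0.44 per $ deployed
ai = (0.28 + 0.38) / 2       # $0.33 per $ deployed

advantage = non_ai / ai - 1
print(f"Non-AI revenue efficiency advantage: {advantage:.0%}")  # ~33%
```

The midpoint gap lands at ~33%, inside the 30-40% band stated above.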
AWS Case Study
| Period / Type | Capex Deployed | Annual Revenue (2024-2025) | Gross Margin |
|---|---|---|---|
| Traditional Cloud (2015-2022) | $180B | $85B | 25% – 30% |
| AI Infrastructure (2023-2025) | $60B | $12B | 15% – 22% |
Observation: AI infrastructure generates lower revenue and margins per dollar deployed despite higher growth rates.
ROI on AI Investment
Bottom line: AI infrastructure is profitable but capital-inefficient compared to traditional cloud.
AI generates 15-22% gross margins vs 25-30% for traditional infrastructure.
Requires 1.5-1.7x more deployed capital to produce equivalent gross profit.
Revenue per dollar: $0.28-0.38 vs $0.40-0.48 (30-40% lower efficiency).
Power: 3-5x consumption per rack, grid capacity constrained.
Depreciation: GPU refresh cycles 2-3 years vs 5-7 years traditional.
Pricing: Competitive pressure as providers race to scale.
Utilization: 35-50% vs 60-75% traditional. Variable training workloads reduce billable hours.
Investment implication: AI growth is real but structurally less efficient. Valuation multiples should reflect lower returns per deployed dollar.
Unit Economics
Revenue Efficiency
- Non-AI: $0.40-0.48 revenue per dollar deployed annually
- AI: $0.28-0.38 revenue per dollar deployed annually
- Gap: Non-AI 30-40% more efficient
Gross Margins
- Non-AI: 25-30%
- AI: 15-22%
- Capital required: AI needs 1.5-1.7x more capital for equivalent gross profit
AWS Example
- Traditional cloud: $180B → $85B revenue → 25-30% margins
- AI infrastructure: $60B → $12B revenue → 15-22% margins
Investment View
- $2.29T base generates $900B-1.1T revenue (6x larger than AI)
- Non-AI delivers superior revenue per dollar and margins
- Apply blended multiples reflecting base economics dominance

Private AI discussion
The AI hardware market presents compelling private investment opportunities as the industry shifts from training to inference. We analysed 12 private companies across accelerators, memory subsystems, and interconnect – here’s what we found.
The Big Picture
Four key themes emerged from our analysis:
Inference is where the money is. Training happens once; inference runs continuously at scale. Companies solving inference bottlenecks command premium valuations.
Memory is the constraint, not compute. GPU memory bandwidth limits LLM performance. CXL pooling and in-memory compute bypass expensive HBM.
NVLink lock-in creates openings. NVIDIA’s proprietary interconnect frustrates multi-vendor deployments. Open standards are gaining traction.
Valuations are wildly dispersed. Cerebras at $22B vs d-Matrix at $2B for comparable inference positioning. Early-stage offers better risk/reward.
Our Top 5 Picks
Ranked by risk-adjusted return potential for growth-oriented investors.
#1 d-Matrix
Digital In-Memory Inference Accelerator
Valuation: $2.0B | Raised: $450M | Backers: Microsoft (M12), Temasek, QIA
Corsair accelerator uses digital in-memory compute – claims 10x faster inference at 3x lower cost than GPUs. Already shipping with SquadRack reference architecture (Arista, Broadcom, Supermicro).
Why we like it: Best risk/reward in the space. Digital approach avoids yield issues plaguing analog competitors. Microsoft backing de-risks enterprise adoption. $2B valuation vs $22B Cerebras.
Key risk: NVIDIA inference roadmap; execution at scale.
#2 FuriosaAI
Tensor Contraction Processor
Valuation: ~$1-1.5B | Raised: $246M+ | HQ: Seoul, Korea
RNGD chip delivers 2.25x better inference performance per watt vs GPUs using proprietary TCP architecture. Won LG AI Research as anchor customer. TSMC mass production underway.
Why we like it: Rejected Meta’s $800M acquisition offer – management believes in standalone value. IPO targeted 2027. Differentiated architecture, not a GPU clone.
Key risk: Korea-centric investor base; Series D pricing.
#3 Panmnesia
CXL Memory Pooling Fabric
Stage: Early | Raised: $80M+ | Origin: KAIST spinout
CXL 3.1 fabric switch enables GPU memory pooling to terabytes – solving the memory wall that limits LLM context windows. Double-digit nanosecond latency.
Why we like it: Memory pooling is inevitable. HBM costs $25/GB vs $5/GB for DDR5. Enfabrica’s $900M NVIDIA acqui-hire validates this layer. Panmnesia is the “next Enfabrica.”
Key risk: CXL 3.x adoption timing.
#4 Upscale AI
Open-Standard AI Networking
Stage: Seed | Raised: $100M+ | Backers: Mayfield, Maverick Silicon, Qualcomm
Building open-standard networking (UAL, Ultra Ethernet) to compete with NVIDIA’s proprietary NVLink. Pre-product but exceptional team.
Why we like it: Founded by Barun Kar and Rajiv Khemani – they built Palo Alto Networks, Innovium, and Cavium (all acquired). This is a team bet with 3x proven exits.
Key risk: Pre-product; NVLink ecosystem stickiness.
#5 Axelera AI
Edge AI Accelerator
Valuation: ~$500M+ | Raised: $200M+ | Backers: Samsung Catalyst, EU Innovation Council
Europa chip delivers 629 TOPS using digital in-memory compute and RISC-V. 120+ customers in retail, robotics, security. €61.6M EuroHPC grant for next-gen Titania chip.
Why we like it: Leading European play with EU sovereignty tailwinds. Samsung backing provides strategic optionality. Real customers, real revenue.
Key risk: Edge TAM smaller than data center.
The Rest of the Field (1 = best)
Companies we evaluated but rank lower due to valuation, risk profile, or structural issues:
SiMa.ai
Edge MLSoC · $270M raised · Rating: 3.5
Full-stack edge platform, but crowded market with Jetson competition.
Cerebras
Wafer-scale · $22B valuation · Rating: 4
Radical architecture but valuation prices in perfection. Event trade only.
Tenstorrent
RISC-V AI chips · $3.2B valuation · Rating: 5
Jim Keller star power, but open RISC-V = no moat.
Etched
Transformer ASIC · $5B valuation · Rating: 5
Bold bet on transformer permanence. Binary outcome.
Not Investable
These companies came up in our research but are no longer private opportunities:
- Groq – Acquired by NVIDIA for ~$20B (December 2025)
- Enfabrica – NVIDIA acqui-hired CEO and team for ~$900M (September 2025)
- Rivos – Meta acquisition announced (September 2025)
- Astera Labs – IPO’d March 2024, now $23B+ market cap (NASDAQ: ALAB)
Key Risks
Before deploying capital, consider:
⚠️ NVIDIA isn’t standing still. Blackwell and Rubin will improve inference. Startups need to stay ahead of a well-funded incumbent.
⚠️ Hyperscalers are building captive silicon. Google (TPU), Amazon (Trainium), Microsoft (Maia) reduce addressable market.
⚠️ TSMC concentration. Nearly all targets fab at TSMC. Taiwan risk affects the entire portfolio.
⚠️ CXL timing uncertainty. Memory pooling benefits require ecosystem maturity that may take longer than expected.
Suggested Allocation
For a growth mandate targeting 5x+ returns:
Core Holdings (50-60%)
d-Matrix, FuriosaAI – validated products at reasonable valuations
High Conviction (25-35%)
Panmnesia, Upscale AI – earlier stage with higher upside potential
Opportunistic (10-20%)
Axelera, SiMa.ai – edge exposure, likely smaller outcomes ($1-3B)
Next Steps
- Begin DD on d-Matrix and FuriosaAI
- Conduct technical DD on Panmnesia’s CXL 3.1 IP
- Monitor Upscale AI for first silicon tape-out
- Watch Cerebras IPO for public market entry if pricing rationalises
Analysis based on publicly available information as of January 2026. This is not investment advice. Due diligence recommended before any investment decisions.

Private AI Hardware Investment Analysis
Risk-Adjusted Ranking for Non-VC Portfolio
January 2026
Executive Summary
| Company | Stage | Valuation | Total Raised | Stack Position | Risk Rating |
|---|---|---|---|---|---|
| d-Matrix | Series C | $2.0B | $450M | AI Accelerator | Medium |
| FuriosaAI | Series C+ | $735M | $246M | AI Accelerator | Medium |
| Panmnesia | Series A | $250M | $80M | CXL/Memory | High |
| Axelera AI | Series B | ~$1.0B | $200M+ | Edge AI | Medium |
| SiMa.ai | Series C | $960M | $355M | Edge AI | Medium-Low |
1. d-Matrix (Recommended: Core Holding)
d-Matrix – Product & Competition
| Attribute | Details |
|---|---|
| Core Product | Corsair AI inference platform using Digital In-Memory Compute (DIMC) chiplets |
| Technology | DIMC architecture eliminates memory wall; chiplet-based design with high-bandwidth BoW interconnects |
| Stack Position | Data center AI inference – targets LLM and transformer workloads at lower TCO than GPUs |
| Real Competitors | NVIDIA (H100/B100), Groq (LPU), Cerebras, AMD MI300, Intel Gaudi, SambaNova |
| Differentiation | Memory-bound compute efficiency; claims 3-5x better TCO for inference vs GPUs |
| Key Risks | NVIDIA ecosystem lock-in; software stack maturity; scaling production; customer adoption timing |
| Exit Path | IPO likely (2026-2028); strategic acquisition by hyperscaler or AMD/Intel possible |
d-Matrix – Financial & Investment
| Attribute | Details |
|---|---|
| Valuation | $2.0B (Nov 2025 Series C) |
| Total Raised | $450M across 3 rounds |
| Latest Round | $275M Series C (Nov 2025) – led by Temasek, Industry Ventures |
| Key Investors | Temasek, SK Hynix, M12 (Microsoft), Playground Global, Marvell Technology, EDBI |
| Revenue Status | Pre-revenue; commercial deployment began 2024; ramping customer pilots |
| TAM | AI inference market: $106B (2025) to $255B (2030); ASIC share growing to 40% |
| Access Route | EquityZen (secondary); direct allocation difficult at current stage |
2. FuriosaAI (Recommended: Core Holding)
FuriosaAI – Product & Competition
| Attribute | Details |
|---|---|
| Core Product | RNGD (Renegade) AI accelerator optimized for LLM inference in data centers |
| Technology | Custom NPU architecture; claims 2.25x better perf/watt vs GPUs for LLM inference |
| Stack Position | Data center AI inference – targets hyperscale and enterprise LLM deployments |
| Real Competitors | NVIDIA, AMD, Groq, Cerebras, Rebellions (Korean rival), Graphcore |
| Differentiation | Korean government backing; LG AI Research design win; OpenAI partnership demo |
| Key Risks | No blockbuster order yet; catching NVIDIA is massive challenge; Meta acquisition talks stalled |
| Exit Path | IPO (planning $300M pre-IPO round); strategic M&A (Meta was interested) |
FuriosaAI – Financial & Investment
| Attribute | Details |
|---|---|
| Valuation | $735M (~1T KRW) – Korean unicorn status (Jul 2025) |
| Total Raised | $246M across 4 rounds |
| Latest Round | $125M Series C Bridge (Jul 2025); planning $300M+ Series D |
| Key Investors | Korea Development Bank, Kakao Investment, Naver, DSC Investment, Industrial Bank of Korea |
| Revenue Status | Pre-revenue at scale; RNGD in production; LG AI Research customer |
| TAM | AI inference market: $106B (2025) to $255B (2030); data center AI chips to $457B by 2030 |
| Access Route | Limited – Korean VC/PE networks; potential pre-IPO allocation in Series D |
3. Panmnesia (Recommended: High Conviction)
Panmnesia – Product & Competition
| Attribute | Details |
|---|---|
| Core Product | CXL 3.1 switch chips and semiconductor IP for memory pooling/expansion |
| Technology | World's first CXL 3.1 IP with two-digit nanosecond latency; CXL-GPU for AI workloads |
| Stack Position | Memory interconnect layer – enables disaggregated memory architecture for data centers |
| Real Competitors | Astera Labs (public), Montage Technology, Microchip, Samsung (internal), Intel (internal) |
| Differentiation | Technology leader (back-to-back CES Innovation Awards); first CXL 3.1 silicon; KAIST pedigree |
| Key Risks | CXL adoption timing (2026-2027 inflection); hyperscaler internal development; early stage |
| Exit Path | Strategic acquisition by Samsung/SK Hynix/Broadcom; IPO possible if CXL market scales |
Panmnesia – Financial & Investment
| Attribute | Details |
|---|---|
| Valuation | $250M+ (Nov 2024 – largest Korean fabless Series A) |
| Total Raised | $80M (incl. govt grants) in ~2 years since founding |
| Latest Round | $60M Series A (Nov 2024) – led by InterVest |
| Key Investors | InterVest, Korea Investment Partners, KB Investment, Woori Venture Partners, Smilegate Investment |
| Revenue Status | Pre-revenue; IP licensing and silicon tape-out underway; strong hyperscaler interest |
| TAM | CXL memory: $2B (2025) to $20B+ (2030); CXL components $6B by 2034 |
| Access Route | Korean VC networks; potentially Series B in 2025-2026 |
4. Axelera AI (Recommended: Opportunistic)
Axelera AI – Product & Competition
| Attribute | Details |
|---|---|
| Core Product | Metis AIPU (214 TOPS @ 15 TOPS/W); Europa next-gen (629 TOPS); Titania for data center |
| Technology | Digital In-Memory Compute (D-IMC) with RISC-V; targets memory wall problem |
| Stack Position | Edge AI inference – industrial, automotive, robotics, drones, surveillance |
| Real Competitors | Hailo, SiMa.ai, Qualcomm, NVIDIA Jetson, Intel Movidius, Google Edge TPU |
| Differentiation | European sovereignty play; Samsung backing; strong power efficiency; $100M+ pipeline |
| Key Risks | Edge market fragmentation; NVIDIA Jetson ecosystem; scaling from niche to mass market |
| Exit Path | Strategic M&A (Samsung?); IPO if data center expansion succeeds ($1-3B outcome likely) |
Axelera AI – Financial & Investment
| Attribute | Details |
|---|---|
| Valuation | ~$1.0B (implied from Jan 2025 credit facility) |
| Total Raised | $200M+ (equity + EU grants including €62M EuroHPC grant) |
| Latest Round | €62M EuroHPC grant (Mar 2025); seeking €150M+ new round (Aug 2025) |
| Key Investors | Samsung Catalyst Fund, Invest-NL, EIC Fund, CDP Venture Capital (Italy), imec.xpand, Bitfury |
| Revenue Status | ~$15M revenue (Aug 2025); $100M+ business pipeline; Metis in production |
| TAM | Edge AI chip market growing rapidly; AI semiconductor infrastructure $193B by 2027 |
| Access Route | European VC networks; potentially in upcoming €150M+ round |
5. SiMa.ai (Recommended: Opportunistic)
SiMa.ai – Product & Competition
| Attribute | Details |
|---|---|
| Core Product | MLSoC (Machine Learning System-on-Chip) for edge AI; software-centric platform |
| Technology | Full-pipeline ML SoC; no-code drag-and-drop development; 10x perf/power vs alternatives |
| Stack Position | Edge AI – industrial, automotive, retail, aerospace, defense, agriculture, healthcare |
| Real Competitors | Hailo, Axelera, Qualcomm, Ambarella, NVIDIA Jetson, Graphcore (edge) |
| Differentiation | Software-first approach; Cisco partnership for Industry 4.0; Michael Dell backing |
| Key Risks | Crowded edge market; customer concentration; scaling beyond niche verticals |
| Exit Path | IPO or strategic M&A by industrial/auto player; lower multiple exit ($1-3B likely) |
SiMa.ai – Financial & Investment
| Attribute | Details |
|---|---|
| Valuation | $960M (Jul 2025) |
| Total Raised | $355M across 7 rounds |
| Latest Round | $85M Series C (Aug 2025) – led by Maverick Capital |
| Key Investors | Maverick Capital, Fidelity, Point72, MSD Partners (Michael Dell), Lip-Bu Tan (angel) |
| Revenue Status | Revenue generating; commercial products shipping; Cisco partnership |
| TAM | Edge AI inference: ~$15B (2025) to $75B (2033); mobile/edge 120M+ units annually |
| Access Route | Best access among the group – later stage, multiple funding rounds available |
Analysis based on publicly available information as of January 2026. This is not investment advice. Due diligence recommended before any investment decisions.

When Do AI Companies Reach Fair Value?
AI Labor Displacement: Investment Valuation Analysis
How We Built This Model
Forecasting AI’s impact on employment requires navigating considerable uncertainty. We’ve attempted to ground this analysis in where AI capabilities actually stand today and where they’re realistically heading over a 15-year horizon.
What we considered for each job category:
- Current AI capability vs. the full scope of the role
- The “corner case” problem—routine tasks automate easily, but exceptions and edge cases dominate what remains
- Public acceptance—will customers and businesses actually adopt AI in these roles?
- Physical vs. cognitive tasks—robotics lags behind software AI
- Regulatory and liability constraints—some industries move slowly regardless of technical capability
- Legacy infrastructure—new systems deploy AI easily, retrofitting old ones is slow and expensive
Where this led us:
Rather than accept headline figures of 30%+ displacement (which often measure “exposure to automation” rather than actual job losses), we’ve worked through each category individually. Many roles already experienced automation waves—bookkeepers to QuickBooks, bank tellers to ATMs, travel agents to online booking. What remains are the harder tasks.
The Conclusion
- 400 million jobs displaced globally over 15 years
- $670 billion of new annual run-rate savings added each year, compounding
- $10 trillion/year ongoing savings at full deployment (Year 15)
- $80 trillion cumulative savings over the transition
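The four bullets form one arithmetic progression, which makes the internal consistency easy to check:

```python
# $670B of new run-rate savings added each year for 15 years.
new_per_year = 0.67  # $T of additional annual savings added per year
years = 15

run_rate_y15 = new_per_year * years                   # ~$10T/yr at Year 15
cumulative = new_per_year * years * (years + 1) / 2   # ~$80T over the transition

print(f"Year-15 run rate: ${run_rate_y15:.2f}T/yr")
print(f"Cumulative:       ${cumulative:.1f}T")
```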
At 8% value capture with 30% net margins, the AI ecosystem reaches fair value (20x P/E) by Year 12-13—roughly 2037-2038.
| Segment | Current P/E | Year to 20x |
|---|---|---|
| Memory | 25x | Now |
| Infrastructure | 30x | Year 3-4 |
| Networking | 35x | Year 5-6 |
| Cloud | 45x | Year 8-9 |
| Chips (Nvidia) | 60x | Year 11-13 |
| Application Layer | 80x | Year 10-12 |
| Pure-Play AI | 100x+ | Year 14+ |
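One way to sanity-check the table: hold today’s price flat, grow earnings at a constant rate, and ask when the multiple compresses to 20x. With an assumed ~10% annual earnings growth (an illustrative rate, not a figure from this analysis):

```python
import math

def years_to_20x(current_pe: float, growth: float = 0.10) -> float:
    """Years of earnings growth needed for a flat price to compress to a 20x P/E."""
    return math.log(current_pe / 20) / math.log(1 + growth)

for segment, pe in [("Memory", 25), ("Infrastructure", 30), ("Networking", 35),
                    ("Cloud", 45), ("Chips (Nvidia)", 60), ("Pure-Play AI", 100)]:
    print(f"{segment:15s} {pe:4d}x -> ~{years_to_20x(pe):4.1f} yrs")
```

At 10% growth the infrastructure, networking, cloud, and chip rows land close to the table (Nvidia at ~11.5 years, cloud at ~8.5). The application-layer row implies a faster assumed earnings growth rate; the point is the mechanism, not the exact years.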
Displacement by Category
- Customer Service (55% displaced) — routine calls automated, angry customers still want humans
- Administrative (45%) — already gutted by software, AI takes the remainder
- Retail (40%) — checkout goes, floor staff stay
- Manufacturing (45%) — new plants automated, legacy slow to retrofit
- Transport (35%) — highways yes, last-mile and regulation lag
- Professional Services (30%) — junior roles compress, seniors gain leverage
- Content/Creative (45%) — commodity automated, premium stays human
The Numbers
Fair value timeline: Memory and infrastructure now. Cloud by Year 8-9. Nvidia by Year 11-13. Pure-play AI requires the full 15-year thesis.
FutureCompute has delivered +1.92% since inception, outperforming every benchmark tracked. Over the same period QQQ fell 3.15%, the Nasdaq Composite (IXIC) 3.51%, and ARKK 12.84%, putting the alpha spread at +5.07pp vs QQQ and +14.76pp vs ARKK.
The December trough (−2.88%, Dec 12) reflected sector-wide AI reassessment post-DeepSeek R1. The portfolio recovered decisively through January, peaking at +3.79% on Jan 16 as hyperscaler earnings confirmed sustained infrastructure commitment.
Google is the standout contributor — Gemini integration delivering measurable search and Cloud uplift. Microsoft has lagged on Copilot enterprise adoption but Azure AI remains structurally solid.