GPT 5.5 is not ready for real problems – and by the way, Claude nailed it.
ChatGPT is not useful even for classic, confined problem analysis! Am I the only one who feels they need to keep reminding it of previous constraints (even when those are in the personality settings)? I sometimes get so irritated with it going off track, ignoring fundamental facts it knew earlier, or deciding they simply ‘weren’t core’.
Both Claude and GPT 5.5 were linked to my Google Calendar before starting, with available and unavailable dates clearly marked.
The task was to find Southampton/Portsmouth cruises that fit a custody/calendar pattern. The output became unreliable because several errors compounded.
This is ChatGPT’s own self-report:
Mistakes made:
Used inferred alternating-weekend logic instead of checking the actual Google Calendar for each date.
Anchored the alternating pattern to the wrong weekend, which flipped “Shey” and “Not Shey” weekends.
Treated Saturday departures as clean without checking the preceding Friday night.
Claimed some cruises “worked” when they actually required one or two night swaps.
Mixed “clean”, “marginal”, and “works” inconsistently.
Treated school-holiday periods too loosely, even though boarding does not apply in holidays.
Listed MSC cruises more completely than other lines, then implied the non-MSC list was complete when it was not.
Added prices and “all cruises” style tables without having built a full cross-line dataset.
Corrected individual mistakes, but did not reset the whole table from first principles each time, causing circular rework.
Why it happened:
I relied on assumptions and pattern projection when I should have used the calendar as the ground truth.
I combined partial cruise-search results with calendar logic, but did not clearly separate “verified” from “assumed”.
I did not maintain one fixed rule set throughout.
I answered incrementally rather than rebuilding the full table after key corrections.
What the user experience felt like:
The user had to repeatedly catch mistakes.
The same issue was revisited several times.
Confidence in the table degraded because corrections were piecemeal.
The user had to manage the reasoning process rather than just receive a usable answer.
The task became circular: list cruises → check calendar → find mistake → reclassify → discover another assumption.
Correct method going forward:
Pull the relevant calendar period first.
Build a verified calendar map: each Friday marked “You”, “Dad”, “Holiday”, “Boarding/No boarding”.
Enumerate cruises separately by source/line.
For each cruise, check exact departure and return dates against the calendar.
Classify strictly (a minimal sketch follows this list):
Clean = 0 swaps
Marginal = 1–2 named swap nights
Exclude = holiday/full cover needed/too many swaps
Only say “all cruises” after a full schedule enumeration across all relevant cruise lines.
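A minimal sketch of that rule set, assuming a verified per-night calendar map (keyed by night rather than just Fridays, since Saturday departures need the preceding Friday checked). The `night_cover` dict, the swap polarity, and the `classify` helper are hypothetical placeholders, not a real Google Calendar integration:

```python
from datetime import date, timedelta

# Hypothetical verified map: one entry per night, built from the real
# Google Calendar first (never inferred from an alternating pattern).
# "You" = a night you'd have the children, "Dad" = already covered,
# "Holiday" = school holiday (boarding does not apply, so exclude).
night_cover: dict[date, str] = {
    date(2026, 3, 6): "You",
    date(2026, 3, 7): "Dad",
    # ... every night in the relevant period
}

def classify(departure: date, return_day: date) -> str:
    """Strict rule: Clean = 0 swaps, Marginal = 1-2 named swap nights,
    Exclude = holiday overlap, unverified nights, or 3+ swaps."""
    nights = [departure + timedelta(days=i)
              for i in range((return_day - departure).days)]
    if any(night_cover.get(n) == "Holiday" for n in nights):
        return "Exclude (overlaps school holiday)"
    if any(n not in night_cover for n in nights):
        return "Exclude (calendar not verified for every night)"
    # Assumed polarity: a night marked "You" needs a named swap to cruise.
    swaps = [n for n in nights if night_cover[n] == "You"]
    if not swaps:
        return "Clean"
    if len(swaps) <= 2:
        return "Marginal (swap nights: " + ", ".join(n.isoformat() for n in swaps) + ")"
    return "Exclude (too many swaps)"
```

Because classification is then a pure function of the verified map, any correction means re-running the whole table from first principles rather than patching rows piecemeal.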
Copilot monetisation slower than expected. Azure AI remains structurally strong.
[Charts: RPL — Weekly Since Inception; Return vs Benchmarks (Dec 5 → Feb 18); FutureCompute Alpha vs Each Benchmark; Indexed Performance — All Five (Dec 5, 2025 = 100). Tables: Benchmark Comparison (Fund / Index | Dec 5, 2025 | Peak | Feb 18, 2026 | Return); Weekly NAV Data (Date | RPL | Portfolio Value | vs Inception | vs QQQ).]
Manager Commentary
FutureCompute has delivered +1.92% since inception, outperforming every benchmark tracked. QQQ fell 3.15%, the Nasdaq Composite (IXIC) −3.51%, and ARKK declined 12.84% over the same period — a combined alpha spread of up to +14.76pp vs ARKK.
The December trough (−2.88%, Dec 12) reflected sector-wide AI reassessment post-DeepSeek R1. The portfolio recovered decisively through January, peaking at +3.79% on Jan 16 as hyperscaler earnings confirmed sustained infrastructure commitment.
Google is the standout contributor — Gemini integration delivering measurable search and Cloud uplift. Microsoft has lagged on Copilot enterprise adoption but Azure AI remains structurally solid.
Target: +30% by Dec 2026 · Next rebalance review: Apr 2026
2026 data centre capex from hyperscalers plus neoclouds totals ~$725B, up ~36% YoY. But these companies were already spending heavily, so the real question is: where does the incremental ~$190B of new spending land, who sees the biggest revenue uplift relative to their existing business, and who has the supply constraints to protect margins?
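A back-of-envelope check of where the ~$190B figure comes from, assuming the ~36% growth applies to the same spender set:

```python
# All values in $B; growth rate as quoted above.
total_2026 = 725
yoy = 0.36
total_2025 = total_2026 / (1 + yoy)     # ~533
incremental = total_2026 - total_2025   # ~192, the "~$190B" of new spend
print(f"2025 base ~${total_2025:.0f}B, incremental ~${incremental:.0f}B")
```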
Tier 1 — Transformed: DC Capex IS the Company
These businesses are being fundamentally reshaped. DC demand matches or exceeds their entire current revenue.
VRT — Vertiv (Cooling & Power Distribution)
$10B revenue → $10-13B DC demand = ~115% of current revenue
Arguably the most transformed company in the entire chain. Five years ago Vertiv was a sleepy $5B industrial business. Now orders are up 60% YoY, backlog is $9.5B with a 1.4x book-to-bill. Every megawatt of DC capacity needs cooling racks and power distribution — and Vertiv is one of very few scaled suppliers. Supply is genuinely constrained, supporting pricing. The incremental uplift from 2024 to 2026 could be 50-70% revenue growth.
ANET — Arista Networks (DC Networking)
$9B revenue → $8-10B DC demand = ~100% of current revenue
Near pure-play DC networking, taking share from Cisco in back-end AI cluster connectivity. AI-specific networking revenue was ~$750M in 2025, targeting $1.5B+ in 2026. The shift to 800G+ Ethernet for AI clusters is a product cycle Arista owns. Every new GPU rack needs high-bandwidth switching — as the installed GPU base doubles, networking demand follows.
NVDA — NVIDIA (GPUs & AI Systems)
Already transformed — DC revenue grew from $15B (FY23) to ~$188B (FY26) in three years. The story is now about sustaining 50%+ growth, not discovering it. Blackwell and Vera Rubin have half a trillion in visibility. Supply remains constrained (GPUs sold out), keeping gross margins at 73-75%. The long-term risk is custom silicon gradually eroding ~90% market share, but that’s a 2028+ concern.
Tier 2 — Major Driver: DC Capex Reshaping 40-65% of Revenue
DC spending is the dominant growth engine but these companies retain meaningful diversification.
TSM — TSMC (Advanced-Node Foundry)
The ultimate picks-and-shovels play. TSMC wins regardless of whether NVIDIA, Broadcom, AMD, or custom silicon wins — they fab all of them. HPC/AI is now 58% of revenue (up from 39% in 2022). Annual price increases locked in through 2029 because there is no alternative at advanced nodes. The supply constraint is physics — building leading-edge fabs takes 2-3 years, so capacity stays tight through 2027+.
AVGO — Broadcom (Custom ASICs & AI Networking)
The hidden story is how fast AI is reshaping the semiconductor half. AI revenue was ~$24B in FY2025, growing 60%+ YoY, with a $73B AI order backlog. Broadcom designs the custom ASICs for Google (TPUs), Amazon, and now OpenAI. As hyperscalers diversify away from NVIDIA, Broadcom is the direct beneficiary. AI networking adds another layer. The incremental growth is concentrated in the semi side — VMware software grows at low-double-digits.
MU — Micron (HBM & Server DRAM)
$42B revenue (TTM) → ~$15-20B DC-linked = ~42%. Q1 FY26 was record $13.6B
Every AI GPU needs 6-8 HBM stacks, and HBM is in severe shortage. Micron is the only US-listed pure play — SK Hynix (~50% share) and Samsung (~25%) are Korean-listed. HBM pricing runs at 3-5x regular DRAM per bit, and the constraint won’t ease before 2027. Margins went from negative to record levels in 18 months. Risk: Korean competitors and eventual oversupply could compress pricing.
Tier 3 — Solid Tailwind: DC is 10-38% of Revenue
Meaningful exposure but well-diversified. Less upside but also less risk if DC spending slows.
PWR Quanta Services / EME EMCOR — DC Construction
PWR: $28B rev, ~$8-12B DC (~36%). EME: $16B rev, ~$5-7B DC (~38%)
Record backlogs ($39B for PWR) driven by DC buildout AND the grid infrastructure to power it. The key constraint is skilled craft labour — you can’t quickly train data centre electricians, which protects margins. They benefit twice: building the DCs and building the transmission/substation infrastructure to feed them. Lower risk than pure-play DC names because they also serve grid, renewables, and industrial markets.
ETN Eaton / CAT Caterpillar — Power Infrastructure
ETN: $27B rev, ~$6-9B DC (~28%). CAT: $66B rev, ~$5-8B DC (~10%)
Eaton (UPS, switchgear, PDUs) has more DC leverage at ~28% of revenue, with 12-18 month lead times supporting pricing. Caterpillar (backup generators) benefits but DC is ~10% of a massive diversified business — a nice tailwind, not thesis-changing. Both benefit from supply constraints on electrical equipment but neither is being reshaped by this cycle.
Why Supply Constraints Matter: Pricing Power Across the Chain
The revenue uplift is only half the story. What makes this cycle particularly attractive is that supply bottlenecks are creating a seller’s market at almost every layer, meaning growth comes with margin expansion rather than compression.
TSMC — monopoly on advanced nodes, annual price rises locked in through 2029. NVIDIA — GPUs sold out, 73-75% gross margins at massive scale. HBM memory — physical shortage, 3-5x premium pricing. Vertiv/Eaton — cooling and power equipment backlogs at record levels, 12-18 month lead times. Quanta/EMCOR — skilled labour shortage in electrical trades creates barriers to competition.
These constraints look durable through at least 2027.
What to Watch
Custom silicon adoption — if hyperscalers shift faster to Trainium/TPUs/Broadcom ASICs, NVIDIA’s share erodes but TSMC and AVGO benefit. Net neutral for the overall pool.
HBM supply normalisation — if the memory shortage resolves, HBM pricing compresses and Micron’s margins come under pressure. Likely a 2027-2028 event.
Neocloud financial health — CoreWeave has $14B debt with $9.7B maturing within 12 months. If neoclouds struggle, ~$55B of the capex pool is at risk.
Capex sustainability — hyperscalers now spend 45-57% of revenue on capex. Any pullback in AI demand or ROI questions could trigger rapid capex cuts, hitting all suppliers simultaneously.
Sources: Company earnings reports (Q3/Q4 2025), Dell’Oro Group, IoT Analytics DC Infrastructure Report, IEEE ComSoc, analyst estimates. Revenue figures are latest reported full-year or TTM. DC revenue estimates are approximate allocations from the ~$725B total pool based on industry breakdowns and company disclosures. February 2026.
The market keeps talking about an “AI bubble.” That misses the real fragility.
The bigger bubble risk is in SaaS application software—because AI enables new entrants to build broader, business-native solutions that solve conjoined problems end-to-end, not isolated tasks. That shifts value away from “apps glued together with humans” toward platforms that understand the business process.
The Nokia Trap (what’s actually changing)
Nokia didn’t lose because people wanted a cheaper phone or a slightly better phone.
It lost because the category stopped being “a phone.” The product became a computing platform that absorbed whole new jobs (apps, internet, maps, media). Nokia was optimised for the old definition of the product.
SaaS is facing the same trap:
Many SaaS tools are optimised for a narrowly framed problem, not for the productivity of the work as a whole
The next generation solves the whole job, across steps, systems, and teams
When the category expands, the old leaders can look suddenly narrow.
The real threat: “bigger problem” products
Most businesses don’t have “a CRM problem” or “a ticketing problem” or “a billing problem.”
They have a revenue problem, a service problem, a fulfilment problem, a compliance problem—spanning multiple tools and handoffs.
Classic SaaS often works like this:
a set of apps
connected by integrations
held together by people doing manual work, chasing exceptions, updating fields, and reconciling gaps
AI-native challengers can compete on a different axis:
one model of the business workflow
one place where context lives
automation that moves work forward across steps
handling exceptions without a human being the glue
That’s not “cheaper seats.” It’s a different product category: outcome-focused systems rather than tool-focused apps.
Why this hits SaaS valuations
If buyers shift from “best tool” to “best end-to-end outcome,” the SaaS moat changes:
Feature depth matters less than workflow ownership
Integrations matter less than shared context
UI matters less than automation and decisioning
Point solutions get re-framed as “components,” not “systems”
That’s exactly where multiples compress: when a market realises a product is becoming a replaceable module inside a bigger platform.
What to watch (the early signs)
Not “seat price down.” Look for:
new vendors winning by promising cycle-time reduction (close faster, ship faster, resolve faster)
incumbents expanding into adjacencies defensively (copying platform moves)
customers consolidating around process owners (systems of action), not systems of record
services headcount shifting from “ops glue” to “exception oversight”
Bottom line
AI isn’t just making software cheaper.
It’s enabling a change in what customers buy: solutions that map to the business, solve multiple connected problems, and reduce the need for humans to stitch workflows together.
That’s the Nokia Trap for SaaS: people keep asking for “better apps” right up until the product category shifts to “business platforms that run the work.”
“–” More exposed SaaS / app-layer
Salesforce — CRM
ServiceNow — NOW
Workday — WDAY
Adobe — ADBE
Atlassian — TEAM
Snowflake — SNOW
Datadog — DDOG
HubSpot — HUBS
Zoom — ZM
DocuSign — DOCU
Okta — OKTA
Twilio — TWLO
Shopify — SHOP
Monday.com — MNDY
Asana — ASAN
Smartsheet — SMAR
PagerDuty — PD
Freshworks — FRSH
Elastic — ESTC
GitLab — GTLB
MongoDB — MDB
Cloudflare — NET
Unity — U
Toast — TOST
“+” Likely beneficiaries (AI platform / infra / “system-of-action” enablers)
For investors tracking NVDA, hyperscalers, and AI infrastructure
Emerging AI architectures — particularly Mamba and State Space Models (SSMs) — threaten to disrupt the current AI landscape.
Both hardware and software leaders face risk, though hardware companies are more exposed: custom silicon takes 3-5 years to design, while model companies can retrain and pivot in 12-18 months.
The Game-Changer: Mamba and State Space Models
The transformer problem: Today’s AI models (GPT, Claude, Gemini) use “attention” — comparing every word to every other word. Double the context length, quadruple the compute.
That’s why long-context models are so expensive and why hyperscalers are spending hundreds of billions on GPU clusters.
The SSM solution: State Space Models process sequences like a running summary rather than looking at everything simultaneously.
Mamba — the leading SSM architecture, developed by researchers at Carnegie Mellon and Princeton — scales linearly with context length.
The impact:
✓ 5x faster on long sequences
✓ Fraction of the memory footprint
✓ Dramatically lower compute per token
Who’s backing it: Mistral (Codestral Mamba), AI21 Labs (Jamba), Together AI. Hybrid transformer-Mamba models are already in production.
The catch: Pure Mamba hasn’t matched transformers for complex reasoning at frontier scale — yet. But hybrids are proliferating, and the trajectory is clear.
What This Means for GPU Demand
If SSMs gain traction, the compute economics of AI change fundamentally.
Transformer world:
Compute scales quadratically with context
High memory per token
GPU hours: baseline
Long context: expensive, constrained
SSM world:
Compute scales linearly with context
Low memory per token
GPU hours: potentially 50-80% less
Long context: cheap, abundant
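A toy illustration of that scaling gap (not a benchmark). The width `d` and state size `n` are arbitrary illustrative values, and real FLOP counts carry constant factors this sketch ignores:

```python
# Self-attention compares every token pair: cost grows with L squared.
# An SSM scan updates a fixed-size state once per token: cost grows with L.
d, n = 4096, 16  # model width, SSM state size (illustrative)

def attention_flops(L: int) -> int:
    return L * L * d  # dominant pairwise-comparison terms

def ssm_flops(L: int) -> int:
    return L * d * n  # one state update per token

for L in (4_000, 32_000, 256_000):
    print(f"L={L:>7,}: attention/SSM ratio ~{attention_flops(L) / ssm_flops(L):,.0f}x")
# Doubling L doubles the SSM cost but quadruples attention cost, which is
# why long-context workloads dominate the GPU bill in the transformer world.
```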
The demand question: Efficiency gains could reduce absolute GPU demand — or unlock new use cases that consume the savings. History suggests the latter, but the transition creates uncertainty.
The NVIDIA risk: SSMs still run on CUDA — but if you need 50-80% fewer GPUs per workload, that’s a volume problem even if NVIDIA keeps 100% share.
The $600 billion hyperscaler capex trajectory assumes insatiable compute demand. SSMs challenge that assumption.
The custom silicon risk: TPUs, Trainium, and MTIA are optimised for transformer attention patterns. SSMs have different compute profiles.
Billions in chips could become sub-optimal — and reduced compute demand means less need for any custom silicon.
Who’s Exposed, Who’s Protected
NVIDIA — Moderate exposure
Keeps architecture flexibility, but volume at risk if compute demand drops. Pricing power may erode.
Google, Amazon — High exposure
Custom silicon optimised for wrong architecture and potentially overbuilt for reduced demand. Double hit.
Microsoft — Hedged
NVIDIA dependency means less stranded silicon, but Azure capex still at risk.
Neo-Clouds (CRWV, NBIS) — High exposure
Leveraged to GPU demand. If demand drops 50%, the debt doesn’t.
OpenAI, Anthropic — Lower exposure
Compute costs drop. They’ll adopt what works. Net beneficiaries.
The Bottom Line
SSMs and Mamba may not kill transformers outright — hybrids are more likely.
But they rewrite the compute economics and challenge the “insatiable demand” narrative underlying current valuations.
NVIDIA keeps architectural flexibility but faces volume risk. Hyperscalers face both wrong-architecture risk and overbuilding risk. Neo-clouds are leveraged to a demand curve that may flatten.
The model companies may be the cleanest winners — their compute bills go down while their capabilities go up.
When efficiency improves dramatically, the biggest spenders have the most to lose.
AI represents 19% of total data center capital stock.
Non-AI infrastructure accounts for 81% ($2.29T vs $530B).
Markets are mispricing the space by anchoring to 2025 annual flow (58% AI) rather than cumulative capital deployed.
Non-AI Capital Base (2015-2025): $2.29T, 81% of global DC infrastructure
AI Infrastructure (2020-2025): $530B, 19% incremental build
Annual vs Cumulative: The Critical Distinction
[Charts: Annual Data Center Investment by Type (2015-2025 annual spending patterns); Total Capital Deployed, 2015-2025 (cumulative infrastructure build over the decade).]
Key Finding
2025 annual spend: AI captured 58% share.
2015-2025 cumulative capital: AI represents 19% of total deployed infrastructure.
The valuation error: Markets anchor to annual flow metrics while ignoring the $2.29T capital stock already in operation.
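The arithmetic behind the stock-vs-flow gap, using the figures above:

```python
# Cumulative capital stock, $B, per the figures above.
ai_stock, non_ai_stock = 530, 2290
ai_share = ai_stock / (ai_stock + non_ai_stock)
print(f"AI share of cumulative stock: {ai_share:.0%}")  # ~19%
# AI's share of 2025 *annual flow* is 58%; anchoring to flow rather than
# stock therefore overweights AI in the installed base by roughly 3x.
```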
Financial Implications

Metric | Non-AI Infrastructure | AI Infrastructure
--- | --- | ---
Capital Deployed (2015-2025) | $2.29T | $530B
Annual Revenue Generated | $900B – $1.1T | $150B – $200B
Revenue per $ Deployed | $0.40 – $0.48 | $0.28 – $0.38
Gross Margin | 25% – 30% | 15% – 22%
Utilization Rate | 60% – 75% | 35% – 50%
Power Consumption (per rack) | Baseline | 3x – 5x higher
Refresh Cycle | 5 – 7 years | 2 – 3 years
Key finding: Non-AI infrastructure delivers 30-40% higher revenue per deployed dollar. AI requires 1.5-1.7x more capital to generate equivalent gross profit.
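A quick reproduction of the revenue-efficiency figures from the table (ranges in $B; results match the table to rounding):

```python
pools = {
    "Non-AI": (2290, (900, 1100)),  # capital deployed, annual revenue range
    "AI":     (530,  (150, 200)),
}
for name, (capital, (lo, hi)) in pools.items():
    print(f"{name}: ${lo / capital:.2f}-${hi / capital:.2f} revenue per $ deployed")
# Non-AI: $0.39-$0.48; AI: $0.28-$0.38
```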
AWS Case Study

Period / Type | Capex Deployed | Annual Revenue (2024-2025) | Gross Margin
--- | --- | --- | ---
Traditional Cloud (2015-2022) | $180B | $85B | 25% – 30%
AI Infrastructure (2023-2025) | $60B | $12B | 15% – 22%
Observation: AI infrastructure generates lower revenue and margins per dollar deployed despite higher growth rates.
ROI on AI Investment
Bottom line: AI infrastructure is profitable but capital-inefficient compared to traditional cloud.
Return Profile
AI generates 15-22% gross margins vs 25-30% for traditional infrastructure.
Requires 1.5-1.7x more deployed capital to produce equivalent gross profit.
Revenue per dollar: $0.28-0.38 vs $0.40-0.48 (30-40% lower efficiency).
Business Pressures
Power: 3-5x consumption per rack, grid capacity constrained.
Depreciation: GPU refresh cycles 2-3 years vs 5-7 years traditional.
Pricing: Competitive pressure as providers race to scale.
Utilization: 35-50% vs 60-75% traditional. Variable training workloads reduce billable hours.
Investment implication: AI growth is real but structurally less efficient. Valuation multiples should reflect lower returns per deployed dollar.
Unit Economics
Revenue Efficiency
Non-AI: $0.40-0.48 revenue per dollar deployed annually
AI: $0.28-0.38 revenue per dollar deployed annually
Gap: Non-AI 30-40% more efficient
Gross Margins
Non-AI: 25-30%
AI: 15-22%
Capital required: AI needs 1.5-1.7x more capital for equivalent gross profit
AWS Example
Traditional cloud: $180B → $85B revenue → 25-30% margins
AI infrastructure: $60B → $12B revenue → 15-22% margins
Investment View
$2.29T base generates $900B-1.1T revenue (6x larger than AI)
Non-AI delivers superior revenue per dollar and margins
Apply blended multiples reflecting base economics dominance
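One hedged reading of "blended multiples": weight a sector multiple by cumulative capital-stock share (81/19) rather than by the 58% annual-flow share. The multiples below are placeholders, not estimates:

```python
non_ai_mult, ai_mult = 12.0, 25.0  # hypothetical EV/EBITDA inputs

stock_blend = 0.81 * non_ai_mult + 0.19 * ai_mult   # ~14.5x
flow_blend  = 0.42 * non_ai_mult + 0.58 * ai_mult   # ~19.5x
print(f"stock-weighted {stock_blend:.1f}x vs flow-weighted {flow_blend:.1f}x")
# Anchoring to annual flow inflates the blend by ~5 turns with these inputs.
```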
The AI hardware market presents compelling private investment opportunities as the industry shifts from training to inference. We analysed 12 private companies across accelerators, memory subsystems, and interconnect – here’s what we found.
The Big Picture
Four key themes emerged from our analysis:
Inference is where the money is. Training happens once; inference runs continuously at scale. Companies solving inference bottlenecks command premium valuations.
Memory is the constraint, not compute. GPU memory bandwidth limits LLM performance. CXL pooling and in-memory compute bypass expensive HBM.
NVLink lock-in creates openings. NVIDIA’s proprietary interconnect frustrates multi-vendor deployments. Open standards are gaining traction.
Valuations are wildly dispersed. Cerebras at $22B vs d-Matrix at $2B for comparable inference positioning. Early-stage offers better risk/reward.
Our Top 5 Picks
Ranked by risk-adjusted return potential for growth-oriented investors.
#1 d-Matrix
Digital In-Memory Compute (Corsair)
Corsair accelerator uses digital in-memory compute – claims 10x faster inference at 3x lower cost than GPUs. Already shipping with SquadRack reference architecture (Arista, Broadcom, Supermicro).
Why we like it: Best risk/reward in the space. Digital approach avoids yield issues plaguing analog competitors. Microsoft backing de-risks enterprise adoption. $2B valuation vs $22B Cerebras.
Key risk: NVIDIA inference roadmap; execution at scale.
#2 FuriosaAI
Tensor Contraction Processor
Valuation: ~$1-1.5B | Raised: $246M+ | HQ: Seoul, Korea
RNGD chip delivers 2.25x better inference performance per watt vs GPUs using proprietary TCP architecture. Won LG AI Research as anchor customer. TSMC mass production underway.
Why we like it: Rejected Meta’s $800M acquisition offer – management believes in standalone value. IPO targeted 2027. Differentiated architecture, not a GPU clone.
Key risk: Korea-centric investor base; Series D pricing.
#3 Panmnesia
CXL Memory Pooling Fabric
Stage: Early | Raised: $80M+ | Origin: KAIST spinout
CXL 3.1 fabric switch enables GPU memory pooling to terabytes – solving the memory wall that limits LLM context windows. Single-digit nanosecond latency.
Why we like it: Memory pooling is inevitable. HBM costs $25/GB vs $5/GB for DDR5. Enfabrica’s $900M NVIDIA acqui-hire validates this layer. Panmnesia is the “next Enfabrica.”
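The cost arithmetic behind "pooling is inevitable", using the per-GB prices quoted above (illustrative list prices, not current quotes):

```python
hbm_per_gb, ddr5_per_gb = 25, 5  # $/GB, as quoted above
gb_per_tb = 1024
print(f"1 TB HBM ~${hbm_per_gb * gb_per_tb:,} vs 1 TB DDR5 ~${ddr5_per_gb * gb_per_tb:,}")
# ~$25,600 vs ~$5,120: a 5x per-byte spread, which is the economic opening
# for CXL-pooled DRAM sitting behind the GPUs.
```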
#4
Open-Standard AI Networking
Building open-standard networking (UAL, Ultra Ethernet) to compete with NVIDIA’s proprietary NVLink. Pre-product but exceptional team.
Why we like it: Founded by Barun Kar and Rajiv Khemani – they built Palo Alto Networks, Innovium, and Cavium (all acquired). This is a team bet with 3x proven exits.
#5 Axelera AI
Edge AI Inference (Digital In-Memory Compute)
Valuation: ~$500M+ | Raised: $200M+ | Backers: Samsung Catalyst, EU Innovation Council
Europa chip delivers 629 TOPS using digital in-memory compute and RISC-V. 120+ customers in retail, robotics, security. €61.6M EuroHPC grant for next-gen Titania chip.
Why we like it: Leading European play with EU sovereignty tailwinds. Samsung backing provides strategic optionality. Real customers, real revenue.
Key risk: Edge TAM smaller than data center.
The Rest of the Field (1 = best)
Companies we evaluated but rank lower due to valuation, risk profile, or structural issues:
SiMa.ai
Edge MLSoC · $270M raised · Rating: 3.5
Full-stack edge platform, but crowded market with Jetson competition.
Cerebras
Wafer-scale · $22B valuation · Rating: 4
Radical architecture but valuation prices in perfection. Event trade only.
Tenstorrent
RISC-V AI chips · $3.2B valuation · Rating: 5
Jim Keller star power, but open RISC-V = no moat.
Etched
Transformer ASIC · $5B valuation · Rating: 5
Bold bet on transformer permanence. Binary outcome.
Not Investable
These companies came up in our research but are no longer private opportunities:
Groq – Acquired by NVIDIA for ~$20B (December 2025)
Enfabrica – NVIDIA acqui-hired CEO and team for ~$900M (September 2025)
Rivos – Meta acquisition announced (September 2025)
Astera Labs – IPO’d March 2024, now $23B+ market cap (NASDAQ: ALAB)
Key Risks
Before deploying capital, consider:
⚠️ NVIDIA isn’t standing still. Blackwell and Rubin will improve inference. Startups need to stay ahead of a well-funded incumbent.
⚠️ Hyperscalers are building captive silicon. Google (TPU), Amazon (Trainium), Microsoft (Maia) reduce addressable market.
⚠️ TSMC concentration. Nearly all targets fab at TSMC. Taiwan risk affects the entire portfolio.
⚠️ CXL timing uncertainty. Memory pooling benefits require ecosystem maturity that may take longer than expected.
Watch Cerebras IPO for public market entry if pricing rationalises
Analysis based on publicly available information as of January 2026. This is not investment advice. Due diligence recommended before any investment decisions.
[Detached card fragments: Edge AI inference market ~$15B (2025) → $75B (2033), 120M+ mobile/edge units annually. Access route: best access among the group (later stage, multiple funding rounds available).]