There is No Bubble / No Overbuild: AI Infrastructure Is Supported by the Revenue Chain

A bottom-up measure of the revenue available to support today’s datacentre build-out.

The likely revenue flowing from AI end users down to the GPU/data-center layer is intact and realistic.

AI infrastructure economics work from the customer upward. End users pay for AI-enabled products, and that revenue flows to applications, middleware, hyperscalers, data centers, and GPU suppliers. Only end-user spending generates new revenue; the rest simply passes it upstream.

How much cost to cover

Today’s global AI build-out totals approximately $350–400 billion invested, or about 5–7 million GPUs. Annual operating and capital recovery costs are roughly $150–200 billion per year, which must be supported by downstream revenues.

Where that money comes from up the stack


| Sector | Current Annual Revenue (USD B) | AI Revenue Growth (%) | Incremental Revenue (USD B) | Total Post-AI Revenue (USD B) |
| --- | --- | --- | --- | --- |
| Retail & Advertising | ~700 | +35–65 | 250–450 | 950–1,150 |
| Enterprise & Government Productivity | ~9,000 (OPEX base) | +2–3 effective capture | 180–280 | 180–280 (net benefit) |
| Defense & National AI Programs | ~800 | +8–12 | 60–120 | 60–120 |
| Financial Services | ~2,000 | +2–3 | 40–60 | 40–60 |
| Healthcare & Life Sciences | ~9,000 | +0.5–1 | 45–90 | 45–90 |
| Industrial & Logistics | ~4,000 | +1–2 | 40–80 | 40–80 |
| Media & Entertainment | ~2,500 | +2–3 | 50–75 | 50–75 |
| Education & Training | ~1,000 | +2–4 | 20–40 | 20–40 |
| Total Downstream Revenue Creation | | | ≈ 685–1,195 | ≈ 685–1,195 |

Downstream revenues provide roughly three to six times coverage of the infrastructure’s annual cost ($150–200 billion per year).
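The coverage claim can be checked with two lines of arithmetic, using the table's total and the cost range above:

```python
# Coverage arithmetic for the claim above: downstream revenue creation of
# ~$685-1,195B against ~$150-200B of annual infrastructure cost.
def coverage(revenue_bn, cost_bn):
    return revenue_bn / cost_bn

conservative = coverage(685, 200)                        # ~3.4x (low revenue, high cost)
midpoint = coverage((685 + 1195) / 2, (150 + 200) / 2)   # ~5.4x
# Both fall in or near the "roughly three to six times" range.
```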

Timing

2025–26: Utilization 50–65 percent, with strongest pull-through from retail and advertising.
2026–27: Enterprise, government, and defense spending lift utilization toward 80 percent.
2027–28: Equilibrium reached; downstream revenues of roughly $0.8–1.2 trillion per year meet or exceed the $150–200 billion annual infrastructure cost.

Conclusion

The AI data-center build-out is financially supportable. It is ahead of revenue, not in excess of it. As retail, enterprise, and government adoption matures, the system becomes self-funding within about two years.

Q&A

Q: Does this account for the current build-out pace?
A: Yes. It assumes GPU capacity roughly doubles by 2026, with capex rising 25–35 percent per year and AI-driven revenues expanding 30–40 percent per year. On that trajectory, revenue equilibrium is expected by 2027–2028. If build-outs outpace revenue growth, equilibrium could slip by about one year.
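A minimal sketch of the trajectory in that answer. Starting values are assumptions drawn from the text (annual infrastructure cost of ~$175B growing ~30% per year with capex; AI-driven revenue of ~$350B growing ~35% per year):

```python
# Illustrative compounding check: coverage = revenue / annual infra cost by year.
def coverage_by_year(cost0=175.0, rev0=350.0,
                     cost_growth=0.30, rev_growth=0.35, years=4):
    """Return {year: revenue / annual infrastructure cost}, starting 2025."""
    out, cost, rev = {}, cost0, rev0
    for year in range(2025, 2025 + years):
        out[year] = rev / cost
        cost *= 1 + cost_growth
        rev *= 1 + rev_growth
    return out

coverage = coverage_by_year()
# Coverage widens while revenue growth outpaces capex growth; flip the two
# growth rates and the ratio shrinks, i.e. equilibrium slips, as the answer notes.
```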

The AI Market Is Much Bigger Than We Anticipated – $800bn

AI GPU Market — Priory House Partners

The AI Market Is HUGE – A Bottom-Up Analysis

Produced by Priory House Partners — PrioryHouse.com

The Insatiable Demand for AI Tools

This report examines the explosive and unrelenting growth in demand for AI-driven tools — a market already exceeding $350bn in 2025 and projected to expand toward $700–800bn by 2028. It explores how AI adoption across software, infrastructure, and operations is transforming business models, creating both new revenue layers and deep structural cost efficiencies. The following sections break down these effects by industry, monetization model, and their cumulative impact on GPU infrastructure demand.

Introduction & Market Scope

AI monetization is expanding faster than expected across industries. 2025 AI tools revenue of $350bn is projected to reach $700–800bn by 2028, with ~40% captured by GPU infrastructure (≈ $280–320bn).
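The GPU capture figure is a straight percentage of the 2028 revenue range:

```python
# Back-of-envelope for the capture claim above: ~40% of 2028 AI tools revenue
# ($700-800bn) flows through to GPU infrastructure.
def gpu_capture(tools_revenue_bn, capture_rate=0.40):
    return tools_revenue_bn * capture_rate

low, high = gpu_capture(700), gpu_capture(800)   # 280.0 and 320.0, i.e. ~$280-320bn
```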

AI Tools Revenue Growth

AI Revenue Split by Industry (2028)

AI Revenue Composition

Industry Totals (Incremental AI Tools Revenue, $B)

Industry Incremental AI ($B)
Enterprise / SaaS 73
Transport / Energy / Infrastructure 65
Government / Military 53
Ecommerce / Retail 49
Finance / Insurance 32
Healthcare 29
Education 16

Monetization Models Across Segments

Software / SaaS Sector – Add-On Module Model

Vendors integrate AI into existing platforms as modular upgrades (e.g., copilots, anomaly detection, predictive reporting). Fast to deploy, scalable, and capital-light.

  • AI tier or “Pro” subscription plan
  • Usage-based charges (API calls, inference tokens)
  • Feature-based upsells inside enterprise suites

Example: productivity SaaS adding summarization, design, or code-generation modules with 15–25% premium pricing.

Industrial / Infrastructure – AI as a Cost Option

Sold as efficiency enhancements integrated into existing control/data systems to improve reliability, energy use, and logistics.

  • Predictive maintenance for fleets and energy grids
  • Route and load optimization in logistics
  • Smart forecasting in utilities or construction

Typical pricing: recurring license uplift or analytics subscription (~5–10%).

Cost-Saving Models – Human Replacement at Lower Cost

AI substitutes human analysis/processing at lower cost; monetization via throughput or per-transaction pricing.

  • Insurance / Health claims: automated review, adjudication, fraud detection
  • Credit scoring: instant AI risk modeling replacing manual underwriting
  • Medical scanning & test results: automated radiology/lab interpretation
  • Drug discovery: generative compound screening accelerating R&D
  • Customer service: conversational agents replacing Tier-1 call centers

These substitutions expand compute demand and model training cycles, sustaining GPU utilization even in cost-cutting environments.

E-commerce – AI Uplift on Take Rates ($49bn)

| Segment | Base Market ($B) | AI Adoption (%) | Incremental AI Rev ($B) |
| --- | --- | --- | --- |
| Retail media & sponsored listings | 240 | 8.5 | 20 |
| Dynamic pricing & recommendation | 120 | 7.5 | 9 |
| Fulfilment & logistics | 160 | 6.0 | 10 |
| Customer support & personalisation | 80 | 7.5 | 6 |
| Fraud & trust management | 45 | 10.0 | 4 |
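Every row of these industry tables follows the same rule, so the figures can be reproduced directly. Using the e-commerce table as the example:

```python
# incremental AI revenue = base market x adoption rate, per segment.
ecommerce = {  # segment: (base market $B, AI adoption %)
    "Retail media & sponsored listings": (240, 8.5),
    "Dynamic pricing & recommendation": (120, 7.5),
    "Fulfilment & logistics": (160, 6.0),
    "Customer support & personalisation": (80, 7.5),
    "Fraud & trust management": (45, 10.0),
}
incremental = {seg: base * pct / 100 for seg, (base, pct) in ecommerce.items()}
total = sum(incremental.values())
# ~49.5 before rounding; the table's rounded rows sum to the $49B in the heading.
```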

Enterprise SaaS – Productivity & Analytics ($73bn)

| Segment | Base Market ($B) | AI Adoption (%) | Incremental AI Rev ($B) |
| --- | --- | --- | --- |
| Productivity & collaboration | 230 | 7.5 | 17 |
| Engineering / design software | 170 | 7.5 | 13 |
| Data / analytics / BI | 230 | 7.5 | 17 |
| CX / CRM / marketing automation | 260 | 7.5 | 20 |
| ERP / HR / ITSM | 90 | 6.5 | 6 |

Conclusion

The market for AI is immense—far larger than broadly appreciated—because AI is becoming pervasive across tools, systems, and services. Each incremental capability compounds efficiency, decision quality, and revenue capture, driving durable GPU infrastructure demand.

Appendix – Other Industries

Transport / Energy / Infrastructure ($65bn)

| Segment | Base Market ($B) | AI Adoption (%) | Incremental AI Rev ($B) |
| --- | --- | --- | --- |
| Connected vehicle systems | 90 | 8.5 | 8 |
| Fleet & logistics | 120 | 8.5 | 10 |
| Aviation ops | 140 | 7.0 | 10 |
| Energy exploration & production | 130 | 7.5 | 10 |
| Grid management & forecasting | 150 | 8.5 | 13 |
| Mining & materials | 60 | 8.5 | 5 |
| Agriculture & food | 100 | 9.0 | 9 |

Government / Military ($53bn)

| Segment | Base Market ($B) | AI Adoption (%) | Incremental AI Rev ($B) |
| --- | --- | --- | --- |
| Defense: ISR, simulation, autonomy | 220 | 11.5 | 25 |
| Tax & revenue services | 70 | 10.5 | 7 |
| Citizen services | 55 | 9.0 | 5 |
| Education IT & workforce | 80 | 9.0 | 7 |
| Public research & environment | 95 | 9.0 | 9 |

Healthcare ($29bn)

| Segment | Base Market ($B) | AI Adoption (%) | Incremental AI Rev ($B) |
| --- | --- | --- | --- |
| Diagnostics / imaging | 120 | 7.5 | 9 |
| Clinical data mgmt | 100 | 8.0 | 8 |
| Drug discovery / R&D | 80 | 9.0 | 7 |
| Hospital operations | 70 | 7.5 | 5 |

Finance / Insurance ($32bn)

| Segment | Base Market ($B) | AI Adoption (%) | Incremental AI Rev ($B) |
| --- | --- | --- | --- |
| Risk modelling & trading | 130 | 8.0 | 10 |
| Claims & underwriting | 110 | 8.0 | 9 |
| Fraud detection | 90 | 8.0 | 7 |
| Customer advisory / CX | 80 | 7.0 | 6 |

Education ($16bn)

| Segment | Base Market ($B) | AI Adoption (%) | Incremental AI Rev ($B) |
| --- | --- | --- | --- |
| Digital learning / tutoring | 80 | 9.0 | 7 |
| Institutional admin | 45 | 8.0 | 4 |
| Research / analytics | 60 | 9.0 | 5 |

Why OpenAI and Anthropic will own the Chip Business

Instruction Sets as Historical Monopolies

  • Intel (x86) and ARM built decades-long monopolies by controlling instruction sets.
  • Licensing or compatibility dictated who could build chips, and locked the ecosystem to their terms.

AI’s Shift

  • In AI, the model (Claude, GPT, Gemini) becomes the functional equivalent of the instruction set.
  • The architecture of inference chips is defined by the model (layer sizes, precision, memory bandwidth).
  • This makes the model owner the new monopoly power. You cannot build an independent inference chip for GPT without OpenAI’s cooperation.

Blocking Copycats

  • Closed weights: The core IP is the model itself, not the chip. Without access to weights, no competitor can “copy” GPT’s instruction set.
  • Ecosystem lock-in: Software runtimes, quantization methods, and compilers become proprietary extensions—like CUDA for NVIDIA.
  • Vertical integration: Model makers who build custom inference chips tie the hardware directly to their model, blocking substitution.

Strategic Result

  • Chip vendors: Reduced to subcontractors unless they align with a model owner.
  • Model owners: Achieve monopoly status equivalent to Intel/ARM in the last era, but with stronger lock-in because the instruction set (model weights) is closed IP.
  • Barrier to copying: High — not from semiconductor know-how, but from legal/IP control over model architectures and trained weights.

The First Glimpse of the Death of Nvidia

Conclusion First: LLM Companies Will Own Inference—And With It, the Rights to the Silicon $$

OpenAI’s vast monopoly has turned the tables on Nvidia.

Processors do not own LLMs – LLMs own the silicon!

The most important investor takeaway is this: Large Language Model (LLM) companies will not just dominate inference—they will own the rights to the silicon that powers it. In the same way Intel and ARM controlled instruction sets for decades, model owners like OpenAI, Anthropic, and Google now control the “instruction set” of AI: the closed model weights. No chip can run their models without their approval. This creates a monopoly dynamic with far greater lock-in than past hardware eras.

NVIDIA’s Blackwell GPUs are world-class for training, but they are the wrong long-term solution for inference. Inference already accounts for the overwhelming majority of AI compute, and it can be executed more cheaply and efficiently on custom chips tied directly to model architectures. Blackwell is overbuilt for this role, leaving an opening that the model owners themselves are best positioned to fill.


Blackwell vs. Inference Chips: Efficiency at the Core of AI Deployment

NVIDIA’s Blackwell architecture (B200/B100) is unmatched for training and flexible enough for both training and inference. But in inference-only use cases, it wastes silicon and power on unused capabilities. In contrast, dedicated inference chips—like Tesla’s FSD chip, Apple’s Neural Engine, Google’s TPUv4i, or AWS’s Inferentia—are optimized for low-bit precision, simpler interconnect, and streamlined memory pipelines. The result is 3–10× higher performance per watt and significantly lower cost per inference.

This efficiency gap demonstrates why inference will migrate away from general-purpose GPUs toward custom silicon.


Inference Monopolies: How Model Owners Will Control the Future of AI Compute

Why Inference Will Dominate

  • Training is rare and centralized.
  • Inference is constant and scales with usage.
  • Economically, inference drives the overwhelming majority of AI market value.

Cost-Effective Silicon for Inference

  • Inference chips can be built on 5nm or 7nm, avoiding the costs of bleeding-edge nodes.
  • Apple, Tesla, and Samsung have proven custom inference silicon can be built for $150M–$300M.
  • Quantization and model compression make inference even more efficient at lower geometries.
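The quantization point can be made concrete. A minimal sketch of symmetric int8 post-training quantization, the kind of compression that lets dedicated inference silicon run at low precision (illustrative only, not any vendor's actual pipeline):

```python
import numpy as np

# Symmetric per-tensor int8 quantization: map fp32 weights onto [-127, 127].
def quantize_int8(weights):
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024, dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# fp32 -> int8 cuts weight memory 4x; worst-case rounding error is scale/2.
memory_ratio = w.nbytes / q.nbytes
max_err = np.abs(w - w_hat).max()
```

Lower-precision arithmetic also needs narrower multipliers and less memory bandwidth, which is exactly the silicon saving the dedicated-chip argument rests on.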

The Strategic Shift: Models as the New Instruction Set

  • Historically, Intel and ARM monopolized by controlling instruction sets.
  • In AI, model weights are the instruction set—closed, copyrighted, and protected by license agreements.
  • This means the model owner alone controls the right to design inference hardware compatible with their models.
  • Competitors cannot copy or replicate without permission, creating a legal and technical lock-out.

Deployment Path

  1. Centralized inference: Run in lab-controlled data centers (today’s norm).
  2. Enterprise inference: Licensed chips deployed inside corporate data centers.
  3. Edge inference: Personal Intelligence Engines (PIEs) embedded in user devices.

Investor Implications

  • Model owners (OpenAI, Anthropic, Google): Gain not only software monopoly but also a hardware monopoly, by controlling both the model and the silicon rights.
  • Chip vendors (NVIDIA, AMD): Training remains important but is a smaller, less scalable market. Their dominance weakens as inference shifts to model-specific chips.
  • Enterprises: Must license inference hardware, paying rent to model owners.
  • Consumers: AI engines run locally but still locked to the model maker’s ecosystem.

Final Takeaway for Investors
Training may still generate headlines, but inference is where the money and the control lie. And inference belongs to the model makers, who own the weights and thus own the rights to the silicon. This is a stronger monopoly than Intel or ARM ever achieved—legal, architectural, and economic. For investors, the key is clear: back the companies that own the models, because they will own the hardware market that runs them.
OpenAI Could Become the World’s Biggest Chipmaker — And Dethrone NVIDIA

OpenAI Could Become the World’s Biggest Chipmaker — And Dethrone NVIDIA

When the OpenAI Chips are down

The real leverage in AI doesn’t come from GPUs. It comes from who controls the models — and therefore who dictates the silicon those models must run on. The foundation-model owners alone decide the chips.

OpenAI owns the most widely used closed models in the world. If it builds its own inference chips, those models will only run on OpenAI’s hardware. That flips the supply chain upside down. NVIDIA doesn’t set the rules anymore — the model company does. Developers and enterprises don’t get a choice of hardware; they’re locked into whatever silicon OpenAI decides to serve their APIs from.

That’s the second-order game. Training runs grab headlines, but they’re episodic. Inference is continuous, and it scales with every query to ChatGPT, every copilot suggestion, every agent embedded in a business workflow. If OpenAI locks inference to its own silicon, it captures not just the software margins but the hardware margins too — a vertical integration tighter than CUDA ever created. CUDA was sticky, but models could still be ported. Closed models bound to closed chips? That’s total captivity.

This is Apple’s chip strategy multiplied by orders of magnitude. By 2030, inference demand could support a $50–100 billion model API market. If OpenAI captures the silicon layer as well, it could seize an additional $15–25 billion annually that currently flows to NVIDIA and the cloud providers. In effect, the world’s most valuable AI company could also become its largest silicon vendor.

The first phase of the AI boom was about building models. The second phase is about defining the hardware those models run on. And because the model owners set the silicon requirements, companies like OpenAI have a clear path to unseat NVIDIA — not just as the leader in AI software, but as the dominant force in AI silicon.


AI’s Next Lock-In: Moving Inference to the Edge

The first phase of the silicon play is about model companies running their models on proprietary chips inside their own cloud clusters. That locks the cloud side of inference to their hardware.

The second phase is more ambitious: push inference out of the data center and into the edge.

Right now, every ChatGPT query or Claude response runs on centralized infrastructure. That creates two problems: latency, because every request travels across networks, and cost, because the model company is footing the compute bill. At global scale, those limits become critical.

Offloading inference to the edge solves both. If inference runs on enterprise servers or local devices — but only on silicon defined by the model owner — then three things happen:

  • Latency improves, because compute is local.
  • Costs shift, with enterprises and device vendors funding the hardware.
  • Lock-in deepens, since only certified chips can run the models.

This is the second-order business model. The model company still defines the stack, but now enterprises carry the capex. The provider keeps the margins while embedding its hardware into every environment where inference runs.

The bigger picture is that the cloud is only the start. The real opportunity is ensuring that wherever inference happens — in hyperscale farms, in corporate data centers, or at the edge — it runs only on silicon tied to the model owner.

That’s the next lock-in. And it could prove even bigger than the first.

OpenAI Just Killed Chrome & Became the Shopping Leader (Watch Out, Amazon)

Is Google Search the Next BlackBerry?

Discovery to Delivery (D2D): OpenAI Instant Checkout Redefines Online Commerce

OpenAI has just delivered what in hindsight feels obvious but is still shocking in its implications: purchasing integrated directly into ChatGPT. Until now, shopping involved describing what we want, getting AI recommendations, and then jumping into Chrome or Amazon to complete the purchase. That separation is now gone. Users can browse, choose, and buy inside the chat itself.

This is not a minor feature—it’s one of the biggest shake-ups in the way the internet is used for commerce.


Why This Is a Game-Changer

  • Seamlessness: No jumping between apps, tabs, or carts. The flow stays in ChatGPT.
  • Discovery + Checkout merged: Product search (once Google’s domain) and transaction execution (Amazon’s moat) are collapsing into a single AI-driven workflow.
  • Ecosystem threat:
    • Google risks losing shopping intent queries.
    • Amazon risks losing its ad-driven marketplace dominance, but could win in a D2D (Discovery to Delivery) duopoly.
  • First partners: Stripe (payments) and Etsy (merchants), with Shopify next. Spotify has also been highlighted as an upside partner in broader AI integrations.

Upside Vendors

  • Stripe: Core infrastructure, processes payments under the Agentic Commerce Protocol. Direct beneficiary.
  • Spotify / digital vendors: Integration potential to sell subscriptions or products inside the chat window.
  • Etsy/Shopify merchants: First-mover exposure to incremental demand.
  • AI infrastructure: This pushes further AI adoption. If Amazon’s marketplace algorithms are replaced by real-time AI agents, compute demand will soar—benefiting NVIDIA, AMD, and hyperscalers.
  • Amazon: Mixed – Amazon’s delivery infrastructure is unmatched; coupled with better search, it could increase sales volume and replace much of its affiliate cost with margin paid to OpenAI. I am not ready to call the Amazon fight won yet!

Downside Risks for Incumbents

Amazon

  • Ads: $47B ad business depends on discovery. If 10–20% shifts to ChatGPT, $5–10B is at risk.
  • Seller fees: 5% GMV migration = $8B+ lost.
  • Prime stickiness: Weakens as lock-in fades.
  • Net view: Potential $10–20B downside under mid–high adoption scenarios. However, their logistics supremacy might make them a total product discovery-to-delivery (D2D) powerhouse with OpenAI.
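The Amazon numbers above can be re-derived from the stated bases. The ~$160B seller-fee (third-party services) revenue base is my assumption for illustration, not a figure from the text:

```python
# Rough revenue-at-risk math: base revenue x fraction shifting to ChatGPT.
def at_risk(base_bn, shift_fraction):
    return base_bn * shift_fraction

ads_low, ads_high = at_risk(47, 0.10), at_risk(47, 0.20)  # ~$4.7-9.4B ad risk
seller_fees = at_risk(160, 0.05)                          # ~$8B on a 5% GMV migration
total_low, total_high = ads_low + seller_fees, ads_high + seller_fees
# Combined ~$12.7-17.4B, inside the $10-20B mid-high adoption range cited above.
```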

Google

  • Search/shopping ads: $30–35B segment directly in the firing line.
  • Diversion risk: 5–10% shift equates to $5–10B revenue loss; up to $15–20B in high adoption.
  • Strategic dilemma: Integrating AI checkout cannibalizes its own core revenue stream.

Others

  • Meta: Ad budgets could partially divert, though social commerce is less exposed.
  • Shopify/Etsy: Gain near-term sales but risk brand disintermediation as ChatGPT becomes the storefront.

The Impact Matrix

| Player | Upside Potential | Magnitude | Downside Risk | Magnitude |
| --- | --- | --- | --- | --- |
| Amazon | AWS demand (indirect) | Medium | Ads ($5–10B+), seller fees, Prime | High |
| Google | AI infra, Gemini integration | Medium | Search/shopping ads ($7–20B) | High |
| Meta | Social AI commerce layer | Medium | Ad budget diversion | Low–Med |
| Microsoft | Equity in OpenAI, Azure use | High | Minimal direct exposure | Low |
| Stripe | ACP payments volume | High | Over-reliance on OpenAI | Low–Med |
| Shopify | Early sales channel gain | Medium | Brand disintermediation | Medium |
| Etsy | Incremental exposure | Medium | Same as Shopify, smaller scale | Medium |
| NVIDIA / AI Infra | AI inference demand | High | None material | Low |

Investor Takeaway

This is more than incremental innovation. It’s a platform shift:

  • Winners: Stripe, OpenAI/Microsoft, NVIDIA/cloud providers.
  • Losers: Google, with $20–40B combined high-margin revenue risk if adoption accelerates.
  • Mixed: Shopify/Etsy (gain sales but risk losing brand presence).
  • Meta: Peripheral exposure but could pivot by embedding AI-driven commerce in social platforms.
  • Unknown: Amazon, whose delivery powerhouse is second to none.

The bigger point: AI is not just answering questions—it’s swallowing functionality once owned by browsers, search engines, and marketplaces. If social integration follows, ChatGPT could evolve into the new “Chrome,” redefining how digital ecosystems operate.


OMG – AI is MUCH bigger than even I dreamed!

TL;DR

The revenues of today’s AI tech darlings look tiny against the probable government use of AI.

This is not financial advice.

Let’s look at how AI could be deployed in government. The US government today spends approximately $100bn on IT (source: Government Accountability Office, GAO).

Government AI spending will grow faster than historical IT spend, driven by the migration from legacy military systems to AI and from human labor to AI. Assume it grows as fast as cloud, mobile, or e-commerce in non-physical goods such as travel, insurance, and banking.


Who wins “massively”?

  • Palantir (AI platform of record for many agencies).
  • Anduril & Shield AI (autonomy in drones/defense).
  • NVIDIA + AWS/Azure (compute backbone).
  • Booz Allen Hamilton (federal AI integrator).
  • Traditional primes (Lockheed, RTX, Northrop) will also capture billions by embedding AI into every next-gen platform.

Historical Growth Of Disruptive Tech


Total Government Spend - Opportunity

| Department | Addressable Ops/Admin ($B) | Notes |
| --- | --- | --- |
| Defense (DoD + Intel) | ~500 | Largest pool: personnel ($300B+), logistics, C2, ISR analysis, sustainment. |
| Health & Human Services (HHS) | ~220 | Medicare/Medicaid admin, NIH/FDA/CDC ops, claims/fraud detection. |
| Social Security Administration (SSA) | ~50 | Benefits processing, adjudication, fraud/compliance. |
| Treasury (IRS) | ~50 | Tax return processing, audits, fraud detection. |
| Homeland Security (DHS) | ~40 | Border enforcement, TSA, customs, cyber, FEMA. |
| Veterans Affairs (VA) | ~30 | Claims admin, scheduling, health diagnostics. |
| Justice (DOJ incl. FBI, courts) | ~20 | Case review, forensic analysis, investigations. |
| Energy (DOE/NNSA) | ~15 | Grid ops, nuclear monitoring, simulations. |
| Others (State, DOT, NASA, Education, etc.) | ~30 | Diplomatic services, transport safety, space mission planning, grants. |

AI-Centric Companies Set to Benefit

Anduril – Lattice OS, autonomous drones, counter-UAS, defense AI ecosystem.
Shield AI – Hivemind AI pilot software for aircraft/UAVs, autonomy in GPS-denied environments.
Kratos – Loyal wingman drones, attritable UAVs bridging legacy & autonomy.
Epirus – AI-powered electronic warfare & directed energy counter-drone systems.
HawkEye 360 – AI-driven RF geolocation from satellite constellations.
BigBear.ai – Decision intelligence, predictive logistics, ISR analytics.

These are the companies turning battlefield platforms into software-defined, autonomous systems.

Scale AI – Data pipelines, model evaluation, red-teaming for DoD & intelligence community.
Rebellion Defense – AI for military decision support, C2, and cyber operations.
Primer AI – Natural language AI for intelligence agencies; rapid document/signal analysis.
Cognitive Space – AI-driven satellite tasking & orchestration for ISR missions.

These firms build the AI “brains” for defense — decision support, intel processing, orchestration.

Darktrace – AI-driven anomaly detection, self-learning cyber defense.
Dragos – Industrial/critical infrastructure cybersecurity with AI insights.
Claroty – AI monitoring for OT and healthcare infrastructure security.

As government shifts to AI, protecting the attack surface with AI-native cyber firms is critical.

DoD Alone: Major Military AI Opportunities (2029 & 2035)

Budget base: ~$70B/year
AI penetration: ~35% by 2029 (~$25B), ~60% by 2035 (~$40–45B)
Drivers: Automated image/video/signal analysis, multi-source fusion, real-time intel feeds.
Budget base: ~$50B/year
AI penetration: ~25% by 2029 (~$12B), ~50% by 2035 (~$25B)
Drivers: Real-time decision support, logistics optimization, mission planning automation.
Budget base: ~$100B+ procurement & O&M
AI penetration: ~20% by 2029 (~$20B), ~40% by 2035 (~$40B+)
Drivers: Drone swarms, unmanned ground/sea vehicles, autonomous satellites.
Budget base: ~$25B/year
AI penetration: ~30% by 2029 (~$8B), ~55% by 2035 (~$14B)
Drivers: Automated cyber defense, offensive cyber, adaptive jamming, spectrum control.
Budget base: ~$40B/year
AI penetration: ~20% by 2029 (~$8B), ~40% by 2035 (~$16B)
Drivers: AI guidance, adaptive targeting, prioritization of high-value targets.
Budget base: ~$80B/year
AI penetration: ~10% by 2029 (~$8B), ~25% by 2035 (~$20B)
Drivers: Predictive maintenance, automated parts supply, readiness forecasting.
Budget base: ~$15B/year
AI penetration: ~20% by 2029 (~$3B), ~40% by 2035 (~$6B)
Drivers: AI adversaries in simulations, adaptive wargames, digital twins.
Budget base: ~$30B/year
AI penetration: ~10% by 2029 (~$3B), ~25% by 2035 (~$7B)
Drivers: Automated triage, diagnostic AI, HR/payroll automation.
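Each stanza above is the same calculation: opportunity = budget base × AI penetration. The domain labels below are my inference from the listed drivers (the text does not name them), so treat them as assumptions:

```python
# DoD AI opportunity by domain: bases in $B/yr, penetrations as fractions
# (2029, 2035). Domain names are inferred from each stanza's drivers.
rows = {
    "ISR & intel analysis":       (70,  0.35, 0.60),
    "Command & control":          (50,  0.25, 0.50),
    "Autonomous platforms":       (100, 0.20, 0.40),
    "Cyber & electronic warfare": (25,  0.30, 0.55),
    "Munitions & targeting":      (40,  0.20, 0.40),
    "Sustainment & logistics":    (80,  0.10, 0.25),
    "Training & simulation":      (15,  0.20, 0.40),
    "Medical & admin":            (30,  0.10, 0.25),
}
opp_2029 = {k: base * p29 for k, (base, p29, _) in rows.items()}
opp_2035 = {k: base * p35 for k, (base, _, p35) in rows.items()}
total_2029, total_2035 = sum(opp_2029.values()), sum(opp_2035.values())
# ~$86B by 2029 and ~$170B by 2035, matching the stanza figures after rounding.
```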


The Secondary Market Shareholder’s Dilemma with Stock-Based Compensation for Employees


When you buy on the secondary market, you are buying shares (or more often, shares of a fund that holds shares) from an existing investor, not from the company itself. This changes the risk profile significantly.

1. The Information Black Box

As a secondary buyer, you have extremely limited visibility into the company’s capitalization table (“cap table”).

  • You don’t know: The total number of shares outstanding.
  • You don’t know: The size of the employee stock option pool (ESOP).
  • You don’t know: What percentage of that pool has already been granted.

Result: You cannot calculate your true ownership percentage or the potential dilution you face. You are buying a “black box” of future dilution.

2. You Are Inherently Buying a “Subordinated” Position

Later-stage investment rounds often include liquidation preferences. Early investors (like the venture capital firms) have preferred shares that get paid back first in an IPO or sale. Your common shares (which is what employees get and what is typically sold on secondary markets) are at the bottom of the stack.

The SBC Impact: The employee option pool is also made up of common shares. When those options exercise at the IPO, they dilute your common shares directly, while the value of the preferred shares is often protected until their preferences are met.

3. The “Dilution Bomb” at IPO

This is the single biggest risk. The dilution from years of SBC hasn’t happened yet; it’s being stored up.

  • The company has been promising employees equity without having to show the dilution on its books.
  • At the IPO, the S-1 filing will reveal the fully diluted share count. The market’s reaction to this number will determine the IPO price.
  • As a secondary buyer, you may have paid a price based on a pre-IPO valuation that does not accurately account for this massive, pending dilution. Your investment could be immediately underwater if the public market values the company on a per-share basis that is much lower than your entry price.

4. Price Discovery is Flawed

The price you pay on a secondary platform is based on the latest funding round’s pre-money valuation. This valuation often:

  • Ignores the “overhang” of unexercised employee options.
  • Cannot be checked on a fully-diluted basis (including all in-the-money options), because the data a savvy buyer would need is not available to you.

Example:

  • You buy at a $15 Billion valuation.
  • The company has a 15% employee option pool, mostly ungranted.
  • At IPO, if all those options are granted and exercised, the fully-diluted valuation effective for new public shareholders might be based on a much larger share count, making the “true” valuation you invested in higher than you thought.
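A worked version of that example. The 1.0B share count is an assumption purely for illustration; only the $15B valuation and 15% pool come from the text:

```python
# Effective valuation a secondary buyer pays once the option pool is fully
# granted: same entry price per share, applied to the fully diluted count.
def effective_valuation(headline_bn, pool_fraction, shares_outstanding_bn=1.0):
    price_per_share = headline_bn / shares_outstanding_bn
    fully_diluted_shares = shares_outstanding_bn / (1 - pool_fraction)
    return price_per_share * fully_diluted_shares

true_val = effective_valuation(15.0, 0.15)
# At your entry price, the fully diluted company is worth ~$17.6B -- you paid
# roughly 18% more than the $15B headline valuation implied.
```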

Comparison: Secondary Shareholder in Anthropic vs. Public Shareholder in Credo

| Aspect | Public Shareholder (Credo) | Secondary Shareholder (Anthropic) |
| --- | --- | --- |
| Transparency | High: SBC and dilution are reported quarterly. You see the dilution happening in real time. | Extremely Low: You have no clear view of the cap table or the size of the SBC liability. |
| Control & Predictability | Some: You can model future dilution based on trends. You can vote (with limited effect) on equity plans. | None: You have zero insight or control over how many options the board grants to employees. |
| The Dilution Event | Gradual and Priced In: The dilution happens steadily and is reflected in the ongoing stock price. | A Single “Bomb”: The full dilution is revealed at once in the S-1, causing a potential major repricing. |
| Liquidity | High: You can sell your shares instantly if you dislike the dilution trend. | Very Low: Your money is locked in until the IPO. You cannot escape if you discover the cap table is messy. |

Conclusion for the Secondary Market Investor

Investing in a pre-IPO company like Anthropic on the secondary market is a massive bet on trust.

  • You are trusting that the board is being disciplined with the employee option pool.
  • You are trusting that the strike prices for employee options are high enough to not excessively dilute future shareholders.
  • You are trusting that the eventual IPO valuation will be high enough to absorb the dilution from SBC and still provide a return on your entry price.

For a public company shareholder, SBC is a visible, quantifiable, and manageable risk. For a secondary market buyer of a private company, SBC is an invisible, unquantifiable risk that could detonate at the most critical moment—the IPO.

This is why secondary market investments in late-stage private companies are considered high-risk, even for “blue-chip” names like Anthropic. The lack of transparency around the cap table and SBC is a primary reason.

CREDO vs ASTERA LABS: KPI Comparison Side-by-Side

| Metric | Credo (CRDO) | Astera (ALAB) | Winner |
| --- | --- | --- | --- |
| REVENUE (FY2024) | | | |
| Total Revenue | $192.1M | $251.8M | Astera |
| YoY Growth | +27.6% | +74.6% | Astera |
| PROFITABILITY | | | |
| Gross Margin | 59.8% | 76.7% | Astera |
| Operating Income | -$25.3M (Loss) | +$31.5M (Profit) | Astera |
| Net Income | -$22.8M (Loss) | +$31.1M (Profit) | Astera |
| Non-GAAP Net Income | +$9.4M (Profit) | +$71.3M (Profit) | Astera |
| BALANCE SHEET | | | |
| Cash & Investments | $393.2M | $682.1M | Astera |
| Cash from Operations | +$15.3M | +$75.9M | Astera |
| VALUATION | | | |
| Market Cap | ~$4.5–5.0B | ~$10.5–11.5B | Astera |
| P/S Ratio | ~25x | ~42x | Credo |
| P/E Ratio | N/A (Unprofitable) | ~340x | N/A |
| KEY RISKS | | | |
| Customer Concentration | Top 2 = 65% of revenue | Top 3 = 90% of revenue | Both High Risk |
| R&D Investment | 45% of revenue ($85.9M) | 38% of revenue ($94.9M) | Similar |
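The valuation multiples can be re-derived from the revenue and market-cap rows, taking the midpoints of the quoted market-cap ranges:

```python
# P/S = market cap / trailing revenue (cap in $B, revenue in $M).
def price_to_sales(market_cap_bn, revenue_mn):
    return market_cap_bn * 1000 / revenue_mn

crdo_ps = price_to_sales(4.75, 192.1)   # midpoint of ~$4.5-5.0B -> ~25x
alab_ps = price_to_sales(11.0, 251.8)   # midpoint of ~$10.5-11.5B -> ~44x
# Both land close to the ~25x and ~42x figures in the table.
```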

Key Takeaways:

Astera Leads In:

  • Revenue size and growth rate
  • Profitability (already GAAP profitable)
  • Gross margins (superior pricing power)
  • Cash generation and balance sheet strength

Credo’s Position:

  • Lower valuation multiples (P/S ratio)
  • Broader market exposure beyond AI servers
  • Still showing strong growth

Shared Risks:

  • Extreme customer concentration
  • High valuation premiums
  • Semiconductor cycle exposure


NVIDIA Fair Market Valuation


Nvidia Valuation Note

Please note: this is not financial advice. Do your own research. I hold positions in NVIDIA.

FAIR VALUE BY KPI



An evaluation of Nvidia’s fair market value was undertaken using relative comparisons against other large technology peers — Microsoft, Apple, Alphabet, and Meta. The framework considers profitability, growth, efficiency, and risk concentration, then adjusts valuation multiples accordingly.

  • Microsoft: Provides the benchmark for scale and diversification, with steady 12% forward growth, margins in the mid-30s, and a P/E around 30×. Nvidia’s higher margins and much stronger growth suggest a premium multiple, but Microsoft anchors the comparison for stability and breadth of business lines.
  • Apple: Represents the lower-growth, high-cashflow peer. Its forward growth (~7%) and 25× P/E highlight the ceiling for mature megacaps with strong brands but limited expansion. Apple’s lower growth justifies a much lower PEG ratio, making Nvidia’s 40%+ growth look exceptional.
  • Alphabet (Google): Offers a mid-point case with 10% growth and 25× P/E. It illustrates how large, diversified platforms are valued when they have strong but not hyper-growth businesses. Nvidia’s concentration in one segment increases risk compared to Alphabet’s breadth, which tempers the valuation premium.
  • Meta: Despite the highest gross margins (~82%), its growth (~15%) is well below Nvidia’s. At a P/E of ~22× and PEG ~1.5, Meta shows that even dominant profitability doesn’t justify high multiples without sustained growth. Nvidia’s superior growth trajectory makes its PEG near 1 look undervalued by comparison.

Valuation outcome:

  • Peer PEG averages ~2.5; applied to Nvidia’s 42% forward growth, this implies a P/E of ~105×.
  • Adjusting for risk concentration (single-product exposure) and scaling by its superior margins and ROE yields a fair multiple of ~68×.
  • With forward net income of $73B, this equates to a $5 trillion market capitalization.
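The valuation chain, step by step (all inputs from the note above):

```python
# PEG -> implied P/E -> risk-adjusted fair P/E -> fair market cap.
peg_peer_avg = 2.5
forward_growth_pct = 42
implied_pe = peg_peer_avg * forward_growth_pct   # ~105x before adjustments
fair_pe = 68        # after the risk-concentration / margin adjustments in the note
forward_net_income_bn = 73
fair_market_cap_tn = fair_pe * forward_net_income_bn / 1000   # ~4.96, i.e. ~$5T
```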
