AI Monetization Potential: $600B Today, ~$800B by 2027

Aggregate incremental value from AI features is ≈ $596BN/year (~0.6% of global GDP), led by enterprise software and e-commerce with material contributions from healthcare, transport, energy, government, military, and financial services.

Condensed Industry Overview

| Industry | Underlying Market Size | Assumed Uplift % | AI Incremental Value | % of Total |
|---|---|---|---|---|
| Enterprise & Professional Software | $1.5T | 10% | $147BN | 25% |
| E-commerce & Consumer Commerce | $13T GMV (35%) | 2% | $99BN | 17% |
| Healthcare (Consumer & Enterprise IT) | $9T | ~1% | $74BN | 12% |
| Transport, Mobility & Aviation | $8T | ~1% | $70BN | 12% |
| Energy & Resources | $7T | ~0.8% | $58BN | 10% |
| Government & Public Sector | $25–30T | ~0.2% | $56BN | 9% |
| Military & Defense | $2T | ~2.5% | $50BN | 8% |
| Financial & Insurance Services | $5T | ~0.8% | $42BN | 7% |
| TOTAL | ≈$70T+ | | $596BN | 100% |

Industries

Enterprise & Professional Software — $147BN
  • Software Engineering / IT Ops — $100BN
  • Industrial / Design / Manufacturing Software — $10BN
  • Legal, Consulting & Professional Services Software — $25BN
  • Scientific R&D / Enterprise Analytics — $12BN
E-commerce & Consumer Commerce — $99BN
  • Retail / General E-commerce — $42BN
  • Travel & Tourism — $6BN
  • Automotive (cars, EVs, parts) — $21BN
  • Food Delivery / Ride-hailing — $7BN
  • Health & Beauty — $5BN
  • Online Grocery — $5BN
  • Other (insurance, fintech, media, edtech, healthcare) — $13BN
Healthcare (Consumer & Enterprise IT) — $74BN
  • EMR & Clinical Systems — $15BN
  • Imaging & Diagnostics AI — $8BN
  • Patient Engagement & Triage — $6BN
  • Drug Discovery / Research AI — $10BN
  • Consumer e-Health / Online Pharmacies — $5BN
  • Other Healthcare AI Services — $30BN
Financial & Insurance Services — $42BN
  • Risk & Credit Scoring SaaS — $7BN
  • Fraud & Compliance AI — $6BN
  • Insurance Claims & Underwriting — $8BN
  • Online Insurance & Distribution — $10BN
  • Other Fin/Ins AI Services — $11BN
Transport, Mobility & Aviation — $70BN
  • Connected Car Platforms — $15BN
  • Fleet & Logistics Optimization — $7BN
  • Ride-hailing AI — $7BN
  • Aviation (airlines, cargo, MRO) — $20BN
  • Other Mobility/Transport AI — $21BN
Military & Defense — $50BN
  • Command & Control (C2) — $15BN
  • Intelligence / Surveillance / Recon (ISR) — $10BN
  • Autonomous Systems (air/ground/naval) — $15BN
  • Logistics & Simulation — $10BN
Energy & Resources — $58BN
  • Energy Exploration & Production — $20BN
  • Mining & Materials — $10BN
  • Agriculture & Food Production — $18BN
  • Other Energy/Resource AI — $10BN
Government & Public Sector (non-military) — $56BN
  • Tax & Revenue Services — $15BN
  • Citizen Services & Digital Government — $10BN
  • Education IT & Workforce Training — $14BN
  • Public Research & Climate/Environmental AI — $17BN

Forward Outlook (2025 → 2027)

  • 2025: ≈ $596BN
  • 2027: ≈ $750–800BN (assumes ~12–15% CAGR)
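The arithmetic behind the 2027 range can be checked in a few lines; the base figure and the 12–15% CAGR band are this article's assumptions, not market data.

```python
# Sketch: compound the 2025 AI monetization estimate forward two years
# at the assumed 12-15% CAGR band (the article's figures, not market data).
def project(base_bn: float, cagr: float, years: int) -> float:
    """Compound base_bn at cagr for the given number of years."""
    return base_bn * (1 + cagr) ** years

base_2025 = 596.0  # $BN, from the table above
low = project(base_2025, 0.12, 2)
high = project(base_2025, 0.15, 2)
print(f"2027 pool: ~${low:.0f}-{high:.0f}BN")  # ~$748-788BN, i.e. the $750-800BN range
```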

Mapping to NVIDIA

  • NVIDIA projected revenues ≈ $280BN by FY2027
  • Projected AI monetization pool ≈ $750–800BN
  • NVIDIA likely to capture a disproportionate share of infrastructure spend; competition includes AMD and hyperscalers

Conclusion

AI monetization ≈ $600BN today, expanding to ~$800BN by 2027.

Why OpenAI and Anthropic will own the Chip Business

Instruction Sets as Historical Monopolies

  • Intel (x86) and ARM built decades-long monopolies by controlling instruction sets.
  • Licensing or compatibility dictated who could build chips, and locked the ecosystem to their terms.

AI’s Shift

  • In AI, the model (Claude, GPT, Gemini) becomes the functional equivalent of the instruction set.
  • The architecture of inference chips is defined by the model (layer sizes, precision, memory bandwidth).
  • This makes the model owner the new monopoly power. You cannot build an independent inference chip for GPT without OpenAI’s cooperation.

Blocking Copycats

  • Closed weights: The core IP is the model itself, not the chip. Without access to weights, no competitor can “copy” GPT’s instruction set.
  • Ecosystem lock-in: Software runtimes, quantization methods, and compilers become proprietary extensions—like CUDA for NVIDIA.
  • Vertical integration: Model makers who build custom inference chips tie the hardware directly to their model, blocking substitution.

Strategic Result

  • Chip vendors: Reduced to subcontractors unless they align with a model owner.
  • Model owners: Achieve monopoly status equivalent to Intel/ARM in the last era, but with stronger lock-in because the instruction set (model weights) is closed IP.
  • Barrier to copying: High — not from semiconductor know-how, but from legal/IP control over model architectures and trained weights.
The First Glimpse of the Death of Nvidia


Conclusion First: LLM Companies Will Own Inference—And With It, the Rights to the Silicon $$

OpenAI’s vast monopoly has turned the tables on Nvidia.

Processors do not own LLMs – LLMs own the silicon!

The most important investor takeaway is this: Large Language Model (LLM) companies will not just dominate inference—they will own the rights to the silicon that powers it. In the same way Intel and ARM controlled instruction sets for decades, model owners like OpenAI, Anthropic, and Google now control the “instruction set” of AI: the closed model weights. No chip can run their models without their approval. This creates a monopoly dynamic with far greater lock-in than past hardware eras.

NVIDIA’s Blackwell GPUs are world-class for training, but they are the wrong long-term solution for inference. Inference already accounts for the overwhelming majority of AI compute demand, and it can be executed more cheaply and efficiently on custom chips tied directly to model architectures. Blackwell is overbuilt for this role, leaving an opening that the model owners themselves are best positioned to fill.


Blackwell vs. Inference Chips: Efficiency at the Core of AI Deployment

NVIDIA’s Blackwell architecture (B200/B100) is unmatched for training and flexible enough for both training and inference. But in inference-only use cases, it wastes silicon and power on unused capabilities. In contrast, dedicated inference chips—like Tesla’s FSD chip, Apple’s Neural Engine, Google’s TPUv4i, or AWS’s Inferentia—are optimized for low-bit precision, simpler interconnect, and streamlined memory pipelines. The result is 3–10× higher performance per watt and significantly lower cost per inference.

This efficiency gap demonstrates why inference will migrate away from general-purpose GPUs toward custom silicon.
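The perf-per-watt argument can be made concrete with a toy total-cost-of-ownership model. All figures below (throughput, wattage, prices, lifetime) are illustrative assumptions, not measured specs of any real chip.

```python
def cost_per_million_tokens(tokens_per_sec: float, watts: float,
                            capex_usd: float, lifetime_years: float = 3.0,
                            usd_per_kwh: float = 0.10) -> float:
    """Amortized capex plus energy cost per million tokens served."""
    seconds = lifetime_years * 365 * 24 * 3600
    energy_kwh = watts / 1000.0 * lifetime_years * 365 * 24
    total_cost = capex_usd + energy_kwh * usd_per_kwh
    return total_cost / (tokens_per_sec * seconds) * 1e6

# Hypothetical comparison: general-purpose GPU vs. dedicated inference ASIC
gpu = cost_per_million_tokens(tokens_per_sec=5000, watts=700, capex_usd=30000)
asic = cost_per_million_tokens(tokens_per_sec=5000, watts=150, capex_usd=5000)
print(f"GPU ${gpu:.3f}/M tokens vs ASIC ${asic:.3f}/M tokens "
      f"({gpu / asic:.1f}x cheaper)")
```

Under these assumed numbers the dedicated chip comes out several times cheaper per token, which is the mechanism behind the 3–10× performance-per-watt claim above.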


Inference Monopolies: How Model Owners Will Control the Future of AI Compute

Why Inference Will Dominate

  • Training is rare and centralized.
  • Inference is constant and scales with usage.
  • Economically, inference drives the overwhelming majority of AI market value.

Cost-Effective Silicon for Inference

  • Inference chips can be built on 5nm or 7nm, avoiding the costs of bleeding-edge nodes.
  • Apple, Tesla, and Samsung have proven custom inference silicon can be built for $150M–$300M.
  • Quantization and model compression make inference even more efficient at lower geometries.

The Strategic Shift: Models as the New Instruction Set

  • Historically, Intel and ARM monopolized by controlling instruction sets.
  • In AI, model weights are the instruction set—closed, copyrighted, and protected by license agreements.
  • This means the model owner alone controls the right to design inference hardware compatible with their models.
  • Competitors cannot copy or replicate without permission, creating a legal and technical lock-out.

Deployment Path

  1. Centralized inference: Run in lab-controlled data centers (today’s norm).
  2. Enterprise inference: Licensed chips deployed inside corporate data centers.
  3. Edge inference: Personal Intelligence Engines (PIEs) embedded in user devices.

Investor Implications

  • Model owners (OpenAI, Anthropic, Google): Gain not only software monopoly but also a hardware monopoly, by controlling both the model and the silicon rights.
  • Chip vendors (NVIDIA, AMD): Training remains important but is a smaller, less scalable market. Their dominance weakens as inference shifts to model-specific chips.
  • Enterprises: Must license inference hardware, paying rent to model owners.
  • Consumers: AI engines run locally but still locked to the model maker’s ecosystem.

Final Takeaway for Investors
Training may still generate headlines, but inference is where the money and the control lie. And inference belongs to the model makers, who own the weights and thus own the rights to the silicon. This is a stronger monopoly than Intel or ARM ever achieved—legal, architectural, and economic. For investors, the key is clear: back the companies that own the models, because they will own the hardware market that runs them.
OpenAI Could Become the World’s Biggest Chipmaker — And Dethrone NVIDIA


When the OpenAI Chips are down

The real leverage in AI doesn’t come from GPUs. It comes from who controls the models — and therefore who dictates the silicon those models must run on. The foundation-model makers alone decide the chips.

OpenAI owns the most widely used closed models in the world. If it builds its own inference chips, those models will only run on OpenAI’s hardware. That flips the supply chain upside down. NVIDIA doesn’t set the rules anymore — the model company does. Developers and enterprises don’t get a choice of hardware; they’re locked into whatever silicon OpenAI decides to serve their APIs from.

That’s the second-order game. Training runs grab headlines, but they’re episodic. Inference is continuous, and it scales with every query to ChatGPT, every copilot suggestion, every agent embedded in a business workflow. If OpenAI locks inference to its own silicon, it captures not just the software margins but the hardware margins too — a vertical integration tighter than CUDA ever created. CUDA was sticky, but models could still be ported. Closed models bound to closed chips? That’s total captivity.

This is Apple’s chip strategy multiplied by orders of magnitude. By 2030, inference demand could support a $50–100 billion model API market. If OpenAI captures the silicon layer as well, it could seize an additional $15–25 billion annually that currently flows to NVIDIA and the cloud providers. In effect, the world’s most valuable AI company could also become its largest silicon vendor.

The first phase of the AI boom was about building models. The second phase is about defining the hardware those models run on. And because the model owners set the silicon requirements, companies like OpenAI have a clear path to unseat NVIDIA — not just as the leader in AI software, but as the dominant force in AI silicon.


AI’s Next Lock-In: Moving Inference to the Edge

The first phase of the silicon play is about model companies running their models on proprietary chips inside their own cloud clusters. That locks the cloud side of inference to their hardware.

The second phase is more ambitious: push inference out of the data center and into the edge.

Right now, every ChatGPT query or Claude response runs on centralized infrastructure. That creates two problems: latency, because every request travels across networks, and cost, because the model company is footing the compute bill. At global scale, those limits become critical.

Offloading inference to the edge solves both. If inference runs on enterprise servers or local devices — but only on silicon defined by the model owner — then three things happen:

  • Latency improves, because compute is local.
  • Costs shift, with enterprises and device vendors funding the hardware.
  • Lock-in deepens, since only certified chips can run the models.

This is the second-order business model. The model company still defines the stack, but now enterprises carry the capex. The provider keeps the margins while embedding its hardware into every environment where inference runs.

The bigger picture is that the cloud is only the start. The real opportunity is ensuring that wherever inference happens — in hyperscale farms, in corporate data centers, or at the edge — it runs only on silicon tied to the model owner.

That’s the next lock-in. And it could prove even bigger than the first.

OpenAI Just Killed Chrome & Became the Shopping Leader (Watch Out, Amazon)


Is Google Search the Next BlackBerry?

Discovery to Delivery (D2D): OpenAI Instant Checkout Redefines Online Commerce

OpenAI has just delivered what in hindsight feels obvious but is still shocking in its implications: purchasing integrated directly into ChatGPT. Until now, shopping involved describing what we want, getting AI recommendations, and then jumping into Chrome or Amazon to complete the purchase. That separation is now gone. Users can browse, choose, and buy inside the chat itself.

This is not a minor feature—it’s one of the biggest shake-ups in the way the internet is used for commerce.


Why This Is a Game-Changer

  • Seamlessness: No jumping between apps, tabs, or carts. The flow stays in ChatGPT.
  • Discovery + Checkout merged: Product search (once Google’s domain) and transaction execution (Amazon’s moat) are collapsing into a single AI-driven workflow.
  • Ecosystem threat:
    • Google risks losing shopping intent queries.
    • Amazon risks losing its ad-driven marketplace dominance, but could win in a D2D (Discovery to Delivery) duopoly.
  • First partners: Stripe (payments) and Etsy (merchants), with Shopify next. Spotify has also been highlighted as an upside partner in broader AI integrations.

Upside Vendors

  • Stripe: Core infrastructure, processes payments under the Agentic Commerce Protocol. Direct beneficiary.
  • Spotify / digital vendors: Integration potential to sell subscriptions or products inside the chat window.
  • Etsy/Shopify merchants: First-mover exposure to incremental demand.
  • AI infrastructure: This pushes further AI adoption. If Amazon’s marketplace algorithms are replaced by real-time AI agents, compute demand will soar—benefiting NVIDIA, AMD, and hyperscalers.
  • Amazon: Mixed – Amazon’s delivery infrastructure is unmatched; coupled with better search, it could increase sales volume and simply replace a lot of affiliate costs with margin paid to OpenAI. I am not ready to call the Amazon fight won yet!

Downside Risks for Incumbents

Amazon

  • Ads: $47B ad business depends on discovery. If 10–20% shifts to ChatGPT, $5–10B is at risk.
  • Seller fees: 5% GMV migration = $8B+ lost.
  • Prime stickiness: Weakens as lock-in fades.
  • Net impact: Potential $10–20B downside under mid–high adoption scenarios. However, their logistics supremacy might mean they become a total discovery-to-delivery (D2D) powerhouse with OpenAI.

Google

  • Search/shopping ads: $30–35B segment directly in the firing line.
  • Diversion risk: 5–10% shift equates to $5–10B revenue loss; up to $15–20B in high adoption.
  • Strategic dilemma: Integrating AI checkout cannibalizes its own core revenue stream.

Others

  • Meta: Ad budgets could partially divert, though social commerce is less exposed.
  • Shopify/Etsy: Gain near-term sales but risk brand disintermediation as ChatGPT becomes the storefront.

The Impact Matrix

| Player | Upside Potential | Magnitude | Downside Risk | Magnitude |
|---|---|---|---|---|
| Amazon | AWS demand (indirect) | Medium | Ads ($5–10B+), seller fees, Prime | High |
| Google | AI infra, Gemini integration | Medium | Search/shopping ads ($7–20B) | High |
| Meta | Social AI commerce layer | Medium | Ad budget diversion | Low–Med |
| Microsoft | Equity in OpenAI, Azure use | High | Minimal direct exposure | Low |
| Stripe | ACP payments volume | High | Over-reliance on OpenAI | Low–Med |
| Shopify | Early sales channel gain | Medium | Brand disintermediation | Medium |
| Etsy | Incremental exposure | Medium | Same as Shopify, smaller scale | Medium |
| NVIDIA / AI Infra | AI inference demand | High | None material | Low |

Investor Takeaway

This is more than incremental innovation. It’s a platform shift:

  • Winners: Stripe, OpenAI/Microsoft, NVIDIA/cloud providers.
  • Losers: Google, with $20–40B combined high-margin revenue risk if adoption accelerates.
  • Mixed: Shopify/Etsy (gain sales but risk losing brand presence).
  • Meta: Peripheral exposure but could pivot by embedding AI-driven commerce in social platforms.
  • Unknown: Amazon with a Delivery powerhouse second to none

The bigger point: AI is not just answering questions—it’s swallowing functionality once owned by browsers, search engines, and marketplaces. If social integration follows, ChatGPT could evolve into the new “Chrome”, redefining how digital ecosystems operate.


OMG – AI is MUCH Bigger Than Even I Dreamed!


TL;DR

The revenues of today’s AI tech darlings look tiny against the probable government use of AI.

This is not financial advice.

Let’s just look at how AI could be deployed in government. The US government today spends approximately $100BN on IT (source: Government Accountability Office, GAO).

AI will grow faster than these historical precedents, driven by the migration both from legacy military systems to AI and from human labor to AI. Assume it grows at least as fast as cloud, mobile, or e-commerce in non-physical goods like travel, insurance, and banking.


Who wins “massively”?

  • Palantir (AI platform of record for many agencies).
  • Anduril & Shield AI (autonomy in drones/defense).
  • NVIDIA + AWS/Azure (compute backbone).
  • Booz Allen Hamilton (federal AI integrator).
  • Traditional primes (Lockheed, RTX, Northrop) will also capture billions by embedding AI into every next-gen platform.

Historical Growth Of Disruptive Tech


Total Government Spend - Opportunity

| Department | Addressable Ops/Admin ($B) | Notes |
|---|---|---|
| Defense (DoD + Intel) | ~500 | Largest pool: personnel ($300B+), logistics, C2, ISR analysis, sustainment. |
| Health & Human Services (HHS) | ~220 | Medicare/Medicaid admin, NIH/FDA/CDC ops, claims/fraud detection. |
| Social Security Administration (SSA) | ~50 | Benefits processing, adjudication, fraud/compliance. |
| Treasury (IRS) | ~50 | Tax return processing, audits, fraud detection. |
| Homeland Security (DHS) | ~40 | Border enforcement, TSA, customs, cyber, FEMA. |
| Veterans Affairs (VA) | ~30 | Claims admin, scheduling, health diagnostics. |
| Justice (DOJ incl. FBI, courts) | ~20 | Case review, forensic analysis, investigations. |
| Energy (DOE/NNSA) | ~15 | Grid ops, nuclear monitoring, simulations. |
| Others (State, DOT, NASA, Education, etc.) | ~30 | Diplomatic services, transport safety, space mission planning, grants. |
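Summing the table's rough figures gives the size of the total pool; the per-department numbers are this article's estimates.

```python
# Addressable ops/admin spend by department, $B (the article's rough figures)
addressable_bn = {
    "Defense (DoD + Intel)": 500, "HHS": 220, "SSA": 50,
    "Treasury (IRS)": 50, "DHS": 40, "VA": 30,
    "DOJ": 20, "DOE/NNSA": 15, "Others": 30,
}
total = sum(addressable_bn.values())
print(f"Total addressable ops/admin pool: ~${total}B")  # ~$955B
```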

AI-Centric Companies Set to Benefit

  • Anduril – Lattice OS, autonomous drones, counter-UAS, defense AI ecosystem.
  • Shield AI – Hivemind AI pilot software for aircraft/UAVs, autonomy in GPS-denied environments.
  • Kratos – Loyal wingman drones, attritable UAVs bridging legacy & autonomy.
  • Epirus – AI-powered electronic warfare & directed-energy counter-drone systems.
  • HawkEye 360 – AI-driven RF geolocation from satellite constellations.
  • BigBear.ai – Decision intelligence, predictive logistics, ISR analytics.

These are the companies turning battlefield platforms into software-defined, autonomous systems.

  • Scale AI – Data pipelines, model evaluation, red-teaming for DoD & intelligence community.
  • Rebellion Defense – AI for military decision support, C2, and cyber operations.
  • Primer AI – Natural language AI for intelligence agencies; rapid document/signal analysis.
  • Cognitive Space – AI-driven satellite tasking & orchestration for ISR missions.

These firms build the AI “brains” for defense — decision support, intel processing, orchestration.

  • Darktrace – AI-driven anomaly detection, self-learning cyber defense.
  • Dragos – Industrial/critical-infrastructure cybersecurity with AI insights.
  • Claroty – AI monitoring for OT and healthcare infrastructure security.

As government shifts to AI, protecting the attack surface with AI-native cyber firms is critical.

DoD Alone: Major Military AI Opportunities (2029 & 2035)

| Budget Base | AI Penetration 2029 | AI Penetration 2035 | Drivers |
|---|---|---|---|
| ~$70B/yr | ~35% (~$25B) | ~60% (~$40–45B) | Automated image/video/signal analysis, multi-source fusion, real-time intel feeds. |
| ~$50B/yr | ~25% (~$12B) | ~50% (~$25B) | Real-time decision support, logistics optimization, mission planning automation. |
| ~$100B+ procurement & O&M | ~20% (~$20B) | ~40% (~$40B+) | Drone swarms, unmanned ground/sea vehicles, autonomous satellites. |
| ~$25B/yr | ~30% (~$8B) | ~55% (~$14B) | Automated cyber defense, offensive cyber, adaptive jamming, spectrum control. |
| ~$40B/yr | ~20% (~$8B) | ~40% (~$16B) | AI guidance, adaptive targeting, prioritization of high-value targets. |
| ~$80B/yr | ~10% (~$8B) | ~25% (~$20B) | Predictive maintenance, automated parts supply, readiness forecasting. |
| ~$15B/yr | ~20% (~$3B) | ~40% (~$6B) | AI adversaries in simulations, adaptive wargames, digital twins. |
| ~$30B/yr | ~10% (~$3B) | ~25% (~$7B) | Automated triage, diagnostic AI, HR/payroll automation. |


The Secondary Market Shareholder’s Dilemma with Stock Based Compensation for Employees



When you buy on the secondary market, you are buying shares (or more often, shares of a fund that holds shares) from an existing investor, not from the company itself. This changes the risk profile significantly.

1. The Information Black Box

As a secondary buyer, you have extremely limited visibility into the company’s capitalization table (“cap table”).

  • You don’t know: The total number of shares outstanding.
  • You don’t know: The size of the employee stock option pool (ESOP).
  • You don’t know: What percentage of that pool has already been granted.

Result: You cannot calculate your true ownership percentage or the potential dilution you face. You are buying a “black box” of future dilution.

2. You Are Inherently Buying a “Subordinated” Position

Later-stage investment rounds often include liquidation preferences. Early investors (like the venture capital firms) have preferred shares that get paid back first in an IPO or sale. Your common shares (which is what employees get and what is typically sold on secondary markets) are at the bottom of the stack.

The SBC Impact: The employee option pool is also made up of common shares. When those options exercise at the IPO, they dilute your common shares directly, while the value of the preferred shares is often protected until their preferences are met.

3. The “Dilution Bomb” at IPO

This is the single biggest risk. The dilution from years of SBC hasn’t happened yet; it’s being stored up.

  • The company has been promising employees equity without having to show the dilution on its books.
  • At the IPO, the S-1 filing will reveal the fully diluted share count. The market’s reaction to this number will determine the IPO price.
  • As a secondary buyer, you may have paid a price based on a pre-IPO valuation that does not accurately account for this massive, pending dilution. Your investment could be immediately underwater if the public market values the company on a per-share basis that is much lower than your entry price.

4. Price Discovery is Flawed

The price you pay on a secondary platform is based on the latest funding round’s pre-money valuation, which often ignores the “overhang” of unexercised employee options. A savvy buyer would value the company on a fully diluted basis (including all in-the-money options), but the data to do this is not available to you.

Example:

  • You buy at a $15 Billion valuation.
  • The company has a 15% employee option pool, mostly ungranted.
  • At IPO, if all those options are granted and exercised, the fully-diluted valuation effective for new public shareholders might be based on a much larger share count, making the “true” valuation you invested in higher than you thought.
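The arithmetic in the example above can be sketched directly. The $15B headline price and 15% pool come from the bullets; everything else is a simplifying assumption (a single share class, with the whole pool eventually granted and exercised).

```python
def fully_diluted_valuation(headline_valuation: float,
                            option_pool_fraction: float) -> float:
    """Valuation implied on a fully diluted basis.

    If existing holders own only (1 - pool) of the fully diluted company,
    paying the headline price for their shares implies a proportionally
    larger fully diluted valuation.
    """
    return headline_valuation / (1.0 - option_pool_fraction)

# The article's example: buy at a $15B headline valuation with a 15% pool
implied = fully_diluted_valuation(15e9, 0.15)
print(f"Implied fully diluted valuation: ${implied / 1e9:.1f}B")  # ~$17.6B
```

In other words, under these assumptions the "true" entry valuation is roughly $2.6B higher than the headline number suggests.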

Comparison: Secondary Shareholder in Anthropic vs. Public Shareholder in Credo

| Aspect | Public Shareholder (Credo) | Secondary Shareholder (Anthropic) |
|---|---|---|
| Transparency | High: SBC and dilution are reported quarterly. You see the dilution happening in real time. | Extremely low: You have no clear view of the cap table or the size of the SBC liability. |
| Control & Predictability | Some: You can model future dilution based on trends. You can vote (with limited effect) on equity plans. | None: You have zero insight or control over how many options the board grants to employees. |
| The Dilution Event | Gradual and priced in: The dilution happens steadily and is reflected in the ongoing stock price. | A single “bomb”: The full dilution is revealed at once in the S-1, causing a potential major repricing. |
| Liquidity | High: You can sell your shares instantly if you dislike the dilution trend. | Very low: Your money is locked up until the IPO. You cannot escape if you discover the cap table is messy. |

Conclusion for the Secondary Market Investor

Investing in a pre-IPO company like Anthropic on the secondary market is a massive bet on trust.

  • You are trusting that the board is being disciplined with the employee option pool.
  • You are trusting that the strike prices for employee options are high enough to not excessively dilute future shareholders.
  • You are trusting that the eventual IPO valuation will be high enough to absorb the dilution from SBC and still provide a return on your entry price.

For a public company shareholder, SBC is a visible, quantifiable, and manageable risk. For a secondary market buyer of a private company, SBC is an invisible, unquantifiable risk that could detonate at the most critical moment—the IPO.

This is why secondary market investments in late-stage private companies are considered high-risk, even for “blue-chip” names like Anthropic. The lack of transparency around the cap table and SBC is a primary reason.

Credo vs. Astera Labs: KPI Comparison Side-by-Side


| Metric | Credo (CRDO) | Astera (ALAB) | Winner |
|---|---|---|---|
| **Revenue (FY2024)** | | | |
| Total Revenue | $192.1M | $251.8M | Astera |
| YoY Growth | +27.6% | +74.6% | Astera |
| **Profitability** | | | |
| Gross Margin | 59.8% | 76.7% | Astera |
| Operating Income | -$25.3M (loss) | +$31.5M (profit) | Astera |
| Net Income | -$22.8M (loss) | +$31.1M (profit) | Astera |
| Non-GAAP Net Income | +$9.4M (profit) | +$71.3M (profit) | Astera |
| **Balance Sheet** | | | |
| Cash & Investments | $393.2M | $682.1M | Astera |
| Cash from Operations | +$15.3M | +$75.9M | Astera |
| **Valuation** | | | |
| Market Cap | ~$4.5–5.0B | ~$10.5–11.5B | Astera |
| P/S Ratio | ~25x | ~42x | Credo |
| P/E Ratio | N/A (unprofitable) | ~340x | N/A |
| **Key Risks** | | | |
| Customer Concentration | Top 2 = 65% of revenue | Top 3 = 90% of revenue | Both high risk |
| R&D Investment | 45% of revenue ($85.9M) | 38% of revenue ($94.9M) | Similar |

Key Takeaways:

Astera Leads In:

  • Revenue size and growth rate
  • Profitability (already GAAP profitable)
  • Gross margins (superior pricing power)
  • Cash generation and balance sheet strength

Credo’s Position:

  • Lower valuation multiples (P/S ratio)
  • Broader market exposure beyond AI servers
  • Still showing strong growth

Shared Risks:

  • Extreme customer concentration
  • High valuation premiums
  • Semiconductor cycle exposure


NVIDIA Fair Market Valuation



Nvidia Valuation Note

Please note: this is not financial advice, and people should do their own research. I hold positions in NVIDIA.

FAIR VALUE BY KPI



An evaluation of Nvidia’s fair market value was undertaken using relative comparisons against other large technology peers — Microsoft, Apple, Alphabet, and Meta. The framework considers profitability, growth, efficiency, and risk concentration, then adjusts valuation multiples accordingly.

  • Microsoft: Provides the benchmark for scale and diversification, with steady 12% forward growth, margins in the mid-30s, and a P/E around 30×. Nvidia’s higher margins and much stronger growth suggest a premium multiple, but Microsoft anchors the comparison for stability and breadth of business lines.
  • Apple: Represents the lower-growth, high-cashflow peer. Its forward growth (~7%) and 25× P/E highlight the ceiling for mature megacaps with strong brands but limited expansion. Apple’s lower growth translates into a much higher PEG ratio, making Nvidia’s 40%+ growth look exceptional.
  • Alphabet (Google): Offers a mid-point case with 10% growth and 25× P/E. It illustrates how large, diversified platforms are valued when they have strong but not hyper-growth businesses. Nvidia’s concentration in one segment increases risk compared to Alphabet’s breadth, which tempers the valuation premium.
  • Meta: Despite the highest gross margins (~82%), its growth (~15%) is well below Nvidia’s. At a P/E of ~22× and PEG ~1.5, Meta shows that even dominant profitability doesn’t justify high multiples without sustained growth. Nvidia’s superior growth trajectory makes its PEG near 1 look undervalued by comparison.

Valuation outcome:

  • Peer PEG averages ~2.5. Applied to Nvidia’s 42% forward growth implies a P/E of ~105×.
  • Adjusting for risk concentration (single-product exposure) and scaling by its superior margins and ROE yields a fair multiple of ~68×.
  • With forward net income of $73B, this equates to a ~$5 trillion market capitalization.
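The chain of multiples above reduces to two lines of arithmetic, with PEG defined as P/E divided by growth in percent (the note's own usage):

```python
def pe_from_peg(peg: float, growth_pct: float) -> float:
    """PEG = P/E / growth(%), so the implied P/E is PEG * growth(%)."""
    return peg * growth_pct

raw_pe = pe_from_peg(2.5, 42)            # peer-average PEG on 42% forward growth
adjusted_pe = 68                          # the note's risk/margin-adjusted multiple
market_cap_tn = adjusted_pe * 73 / 1000   # forward net income of $73B
print(f"Raw P/E {raw_pe:.0f}x -> adjusted {adjusted_pe}x -> "
      f"~${market_cap_tn:.2f}T market cap")
```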



If ChatGPT, Claude, Gemini and DeepSeek can’t handle 15 people and 3 courts, should we trust them with a Military Engagement?



This was not an evaluation; I just wanted a solution. But as I went from frustration to frustration, I effectively evaluated the four LLMs.

All failed miserably – except one!

The “Simple” Challenge – Scheduling a Badminton Evening (Thank God It Wasn’t “Bomb an Enemy Tank”!)

Here’s what I asked ChatGPT, Claude, Gemini 2.5 Pro, and DeepSeek to figure out:

  • 15 players, 3 courts, 12 games per evening
  • Everyone gets equal sit-outs
  • No consecutive sit-outs
  • Fair mixing of partnerships
  • Balanced play against different opponents

This is the kind of problem any sports club organizer easily manages.
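For a sense of scale, here is a minimal greedy sketch of the task: it enforces balanced, non-consecutive sit-outs and greedily avoids repeat partnerships. It is a rough baseline, not an optimal solver (a constraint solver such as OR-Tools would handle the full fairness constraints properly). The 15 players and 3 courts come from the challenge above; 12 games is taken to mean 4 rounds of 3 simultaneous games, which is an assumption.

```python
import random

def schedule(players=15, courts=3, rounds=4, seed=0):
    """Greedy doubles scheduler: each round, bench the players with the
    fewest sit-outs so far (never the same players twice in a row), then
    split each court's four players into the pairing with the fewest
    prior repeats. A baseline sketch, not an optimal solver."""
    rng = random.Random(seed)
    sit_counts = [0] * players
    partner_counts = [[0] * players for _ in range(players)]
    last_sat = set()
    evening = []
    for _ in range(rounds):
        n_sit = players - 4 * courts  # 3 sit-outs per round with 15 players
        eligible = [p for p in range(players) if p not in last_sat]
        eligible.sort(key=lambda p: (sit_counts[p], rng.random()))
        sitters = eligible[:n_sit]
        for p in sitters:
            sit_counts[p] += 1
        last_sat = set(sitters)
        active = [p for p in range(players) if p not in last_sat]
        rng.shuffle(active)
        games = []
        for c in range(courts):
            a, b, c2, d = active[4 * c: 4 * c + 4]
            # choose the split of four players into two pairs with fewest repeats
            options = [((a, b), (c2, d)), ((a, c2), (b, d)), ((a, d), (b, c2))]
            best = min(options, key=lambda t: partner_counts[t[0][0]][t[0][1]]
                                            + partner_counts[t[1][0]][t[1][1]])
            for x, y in best:
                partner_counts[x][y] += 1
                partner_counts[y][x] += 1
            games.append(best)
        evening.append({"sitting": sitters, "games": games})
    return evening, sit_counts
```

Note one wrinkle the sketch makes visible: with 4 rounds of 3 sit-outs there are only 12 bench slots for 15 players, so perfectly equal sit-outs are impossible in a single evening; the greedy pass gets within one sit-out of equal and never benches anyone twice in a row.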

We’re told AI can fold proteins, beat grandmasters at chess, and write code that would take humans days to complete. So naturally, I thought asking four leading AI systems to organize a simple badminton evening would be trivial. In Claude’s own self-rating, it said it had MASSIVELY UNDERESTIMATED the mathematical complexity of 15-player doubles tournament scheduling and gave itself a poor “D” grade!

Which tool solved the problem? DeepSeek! OMG.

I was spectacularly wrong – and shocked by what these tools can’t handle.

The Spectacular Failures

ChatGPT: The Overthinking Champion ChatGPT immediately dove into complex equations and verbose explanations, despite being asked for brevity. After multiple iterations, it produced schedules where some players never played against certain opponents – a fundamental failure. Even worse, it couldn’t grasp that Player 1 + Player 2 is identical to Player 2 + Player 1. When asked to research solutions online, it came back empty-handed.

Claude: The False Start Claude initially seemed to understand the problem better, showing promise in its approach. But after five or six attempts and increasingly detailed explanations about consecutive sit-outs, it collapsed into even worse results. Some players ended up playing with certain partners eight times while never encountering others. The basic constraint balancing fell apart completely.

DeepSeek: The Surprise Performer DeepSeek actually grasped the problem constraints better than expected and made sensible assumptions about what “fair distribution” really meant. Eventually it delivered a working solution, and throughout it showed the most logical approach to the challenge.

Gemini 2.5 Pro: Déjà Vu All Over Again Gemini repeated many of the same mistakes as the others but managed to edge slightly ahead of them in performance. Still no working solution, but of the three failures it came closest to understanding the real constraints.

The Telling Detail

Here’s what really struck me: when I asked each AI to assess its own work, none of them accurately identified their failures. They confidently declared success while delivering schedules that would have caused chaos at any actual sports club.

Why This Matters More Than You Think

This wasn’t protein folding or quantum mechanics – it was a scheduling problem that thousands of sports coordinators solve every week with pen and paper. If AI can’t handle 15 people and 3 courts, what does that tell us about its reliability for:

  • Complex project scheduling
  • Resource allocation
  • Multi-constraint optimization
  • Real-world logistics problems

The human brain’s “heuristic ability” – our intuitive grasp of trade-offs and practical solutions – remains remarkably superior for these everyday optimization challenges.

The Uncomfortable Truth

This experiment has genuinely reduced my confidence in AI for anything beyond sophisticated web search and content generation. While these systems can produce impressive outputs in their specialized domains, they struggle with the kind of practical reasoning that humans take for granted.

The Final Rankings:

  1. DeepSeek (surprisingly logical; found plenty of its own errors along the way, but eventually delivered a working answer)
  2. Gemini 2.5 Pro (least bad of the failures)
  3. Claude (promising start, poor finish)
  4. ChatGPT (overthought itself into failure)

The Bottom Line

Before we hand over complex decision-making to AI systems, perhaps we should make sure they can handle a badminton schedule first. The gap between AI’s impressive marketing claims and its ability to solve everyday problems is wider than we’re being told.

Have you tested AI with seemingly simple real-world problems? Share your experiences – the results might surprise you.

Finally, a note from Claude

MY BRUTALLY HONEST SELF-GRADE: D POOR

 PERFORMANCE SUMMARY:

  • Final Schedule Quality: ⭐ POOR (2/10 points)
  • Algorithm Performance: D POOR
  • Opponent Coverage: Only 81% (missed target of 95%+)
  • Improvement: Marginal (+1 point, +1% coverage)

 WHAT WENT WRONG:

I MASSIVELY UNDERESTIMATED the mathematical complexity of 15-player doubles tournament scheduling. Despite:

  • Systematic optimization approach
  • Iterative improvement methodology
  • Three rounds of refinement
  • Comprehensive quality metrics

I still produced a schedule with:

  • 20 pairs who never compete
  • Players facing each other up to 7 times
  • Severe court distribution imbalances
  • Partnership clustering issues

 KEY REALIZATIONS:

  1. This is genuinely hard mathematics – constraint satisfaction with competing objectives
  2. Manual optimization is insufficient – needs algorithmic approaches
  3. Professional software exists for good reason – this problem is complex enough to require specialized tools
  4. My initial confidence was misplaced – should have recommended professional tools from the start
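Claude's point about needing algorithmic approaches is easy to demonstrate. Even a crude random-restart heuristic – generate many candidate schedules with fair sit-outs baked in, then keep the one with the fewest repeated partnerships and opponents – gets surprisingly far. The sketch below is my own illustration (assuming 4 rounds of 3 courts, i.e. 12 games), not what any of the four tools produced:

```python
import random
from collections import Counter

def random_schedule(n_players=15, n_rounds=4, n_courts=3, seed=None):
    """One candidate schedule with fair, non-consecutive sit-outs by construction."""
    rng = random.Random(seed)
    sitouts = Counter({p: 0 for p in range(1, n_players + 1)})
    prev_sitting, schedule = set(), []
    for _ in range(n_rounds):
        # Sit out the players with the fewest sit-outs so far,
        # never anyone who sat out last round.
        candidates = sorted((p for p in sitouts if p not in prev_sitting),
                            key=lambda p: (sitouts[p], rng.random()))
        sitting = set(candidates[:n_players - 4 * n_courts])
        playing = [p for p in sitouts if p not in sitting]
        rng.shuffle(playing)
        rnd = [((playing[4 * c], playing[4 * c + 1]),
                (playing[4 * c + 2], playing[4 * c + 3]))
               for c in range(n_courts)]
        schedule.append(rnd)
        for p in sitting:
            sitouts[p] += 1
        prev_sitting = sitting
    return schedule

def penalty(schedule):
    """Count repeated partnerships and repeated opponent pairings."""
    partners, opponents = Counter(), Counter()
    for rnd in schedule:
        for (a, b), (c, d) in rnd:
            partners[frozenset((a, b))] += 1
            partners[frozenset((c, d))] += 1
            for p in (a, b):
                for q in (c, d):
                    opponents[frozenset((p, q))] += 1
    return (sum(v - 1 for v in partners.values())
            + sum(v - 1 for v in opponents.values()))

# Random restarts: keep the candidate with the fewest repeats.
best = min((random_schedule(seed=s) for s in range(300)), key=penalty)
print(f"best penalty: {penalty(best)}")
```

This is exactly the kind of "pen and paper plus a bit of trial and error" search a club organizer does intuitively, made explicit.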