Using Adjusted PEmR to Value Tech Companies Investing Heavily in R&D

Tech companies are different. Unlike traditional businesses where profits come mainly from current operations, the profitability of many technology firms is deeply tied to year-on-year investments in innovation—primarily through Research and Development (R&D). This unique characteristic means traditional valuation metrics like the classic Price-to-Earnings (P/E) ratio can be misleading and often penalize companies that aggressively invest in their futures.

Why Conventional P/E Ratios Fall Short for Tech Companies

Traditional P/E ratios are calculated using net income (earnings) reported on the income statement, where all R&D expenses are fully deducted in the year incurred. For companies with substantial R&D spending—which is common in technology sectors—this accounting treatment can dramatically reduce reported profits, resulting in inflated P/E ratios.

This creates a paradox: companies investing the most in future growth and innovation appear less profitable today and thus “overvalued” by traditional P/E standards, even though their investments may generate substantial earnings years down the line.

What is the Adjusted PEmR Model?

The Adjusted Price-to-Earnings plus R&D (Adjusted PEmR) model modifies the classic P/E ratio to better reflect the economic reality of innovation investments. Instead of treating R&D as a pure immediate expense, the model treats most of it as a capital investment—similar to how companies depreciate equipment over time.

In practice, the model removes 75% of the R&D spend from expenses and adds it back to profitability.

It then takes last year’s growth rate and extends it forward one and two years (+1 and +2) to account for the ‘growth’ nature of these enterprises.

In other words, 75% of R&D expense is added back to net income, on the view that this portion of R&D is an asset generating future earnings rather than a cost against today’s profit.

Why 75% and Not 100%?

The choice of 75% is a practical estimate to balance the view that:

  • Most R&D drives long-term value creation and should be capitalized,
  • But a portion of it (25%) still relates to current operations, or may never pay off, which justifies treating that part as an expense.

This assumption can be calibrated, but 75% has been found useful for valuing many tech firms.
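
A minimal sketch of the mechanics, in Python, under the assumptions above: the 75% add-back is left as a parameter so it can be calibrated, and last year’s growth rate is simply rolled forward to produce the +1 and +2 multiples. The function name and the sample inputs are illustrative, not real company figures.

```python
def adjusted_pemr(market_cap, net_income, rd_expense,
                  growth_rate, rd_addback=0.75):
    """Adjusted PEmR: add back a share of R&D to earnings, then
    extend last year's growth rate one and two years forward.
    All monetary inputs should share the same currency units."""
    adj_earnings = net_income + rd_addback * rd_expense                # capitalize 75% of R&D
    pemr = market_cap / adj_earnings                                   # current adjusted multiple
    pemr_plus1 = market_cap / (adj_earnings * (1 + growth_rate))       # PEmR+1
    pemr_plus2 = market_cap / (adj_earnings * (1 + growth_rate) ** 2)  # PEmR+2
    return pemr, pemr_plus1, pemr_plus2


# Purely illustrative inputs: $1,000B market cap, $30B net income,
# $40B annual R&D, 25% trailing earnings growth.
pemr, p1, p2 = adjusted_pemr(1_000e9, 30e9, 40e9, 0.25)
print(round(pemr, 1), round(p1, 1), round(p2, 1))   # 16.7 13.3 10.7
```

With these made-up inputs, the classic P/E of roughly 33x (1,000/30) falls to about 16.7x on an adjusted basis, and to about 10.7x on the +2 view, which is exactly the effect the model is designed to surface.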

Benefits of Using Adjusted PEmR for Tech Valuation

  • Fairer valuation: Companies that invest heavily in technology and innovation appear less penalized, reflecting the true economic value of their investments.
  • Better comparability: Firms at different stages and with different R&D intensities can be compared more equitably.
  • Growth sensitivity: When combined with year-over-year earnings growth projections, the model can produce forward-looking valuation multiples (like PEmR+1, PEmR+2), giving investors insight on how innovation investments impact profitability over time.
  • Reflecting capitalized innovation: By treating most R&D as capital investment, the model aligns more closely with how assets contribute to value in the long term.

When to Use Adjusted PEmR

  • For technology, biotech, and software companies, where R&D spending is a core driver of competitive advantage.
  • When comparing companies with varying R&D intensities to avoid over-penalizing innovation leaders.
  • For investors focused on long-term growth potential rather than just current accounting profitability.
  • When you want a valuation framework that integrates investment (capex) mindset into earnings rather than pure expense recognition.

Example: Why It Matters

Companies like Nvidia, Meta, and Tesla spend tens of billions annually on R&D. Without adjustment, their earnings look low, and their P/E ratios skyrocket, which can deter value-focused investors. Using Adjusted PEmR, their earnings “normalize” by capitalizing most R&D, yielding valuation multiples more consistent with their real economic strength and growth outlook.

By adopting the Adjusted PEmR approach, analysts and investors can better understand the value of innovation-driven growth in tech companies. It moves beyond accounting quirks and aligns valuation with how R&D investments actually drive future profits rather than penalizing firms for spending on their own potential.

Thinking Machines Lab: The $2bn One to Watch in AI’s Second Wave


TL;DR – Red Hat for AI

In an AI market dominated by heavyweight models and billion-dollar APIs, a new contender is quietly redefining the playing field. Thinking Machines Lab (TML)—the brainchild of ex-OpenAI CTO Mira Murati and a formidable roster of co-founders—has burst onto the scene with the largest seed round in Silicon Valley history: $2 billion at a $12 billion valuation. Backed by a blue-chip syndicate including Andreessen Horowitz, Nvidia, Accel, AMD, Cisco, and Jane Street, this public benefit corporation isn’t building another ChatGPT. It’s building the tools and infrastructure for the next generation of AI-native products.

How Is It Different?

While rivals like OpenAI, Anthropic, and xAI are racing to scale ever-larger foundational models, TML is carving out a distinct niche. Their goal isn’t to be another black-box API. Instead, they’re designing a modular, developer-friendly ecosystem—where product teams, builders, and researchers can compose, customize, and deploy multimodal AI into their workflows and apps.

This shift from monolithic models to application-first architecture could mirror the evolution from mainframes to cloud platforms—positioning TML not as an AI model provider, but as a foundational layer for AI-native software design.

Premier League Founding Team

1. Elite Founding Team

TML is led by some of the most respected technologists in AI:

  • Mira Murati, former CTO of OpenAI
  • John Schulman, co-creator of PPO and founding OpenAI scientist
  • Barrett Zoph, former Google Brain architect (AutoML)
  • Backed by a growing bench of ex-OpenAI, Meta, and Mistral engineers

2. Multimodal from Day One

Unlike incumbents retrofitting vision and speech into text-heavy architectures, TML is building natively multimodal systems—designed to interpret and interact with text, images, interfaces, and state simultaneously.

3. Modularity, Not Monoliths

TML promises tools that allow developers to plug, play, and customize AI components, rather than rely on generalized endpoints like GPT-4o or Claude. Think of it as the difference between renting a tool and owning the factory.

4. Open, But Strategic

While not fully open-source, TML’s commitment to releasing open research components signals an intent to cultivate a transparent, composable ecosystem, similar in spirit to Hugging Face or Databricks—without being fully decentralized.

5. Massive Strategic Support

In addition to its Sand Hill VC base, TML has drawn interest from hardware and enterprise giants:

  • Nvidia and AMD (AI compute)
  • Cisco and ServiceNow (enterprise integration)
  • Jane Street (quantitative edge)
  • Even the Albanian government has invested, citing sovereignty and AI participation

How It Compares

TML doesn’t compete head-to-head with OpenAI or Anthropic—not yet. Those platforms are mature, well-integrated, and growing fast (Anthropic in particular has an impressive enterprise roster). But where those firms optimize for scale, control, and safety, TML is targeting customizability, composability, and developer velocity.

If OpenAI is the Apple of LLMs—polished, vertically integrated—TML wants to be Linux meets Unity for AI.

Who Will Choose TML over GPT/Claude

TML will likely appeal to:

  • Product teams at AI-native startups (e.g., Figma, Notion, Replit)
  • Infra players who want deep AI control (e.g., Databricks, Retool)
  • Enterprises building AI into edge applications (e.g., robotics, autonomous vehicles)
  • Builders frustrated by closed, one-size-fits-all models

It may not win with the CIO of a Fortune 500 right away—but for the next breakout AI product, TML could be where the magic happens.


Final Word

TML is early, unproven, and undeniably ambitious. It has no product, no revenue, and no customers—yet. But with its technical DNA, strategic capital, and radical product vision, it’s the kind of company investors, developers, and AI strategists should be tracking closely.

In a world of copycats and scale wars, Thinking Machines Lab is one to watch—not because it’s chasing the incumbents, but because it’s trying to build something new.


Bitcoin in This Market Context

When rates are high and cash yields over 5%, opportunity costs for holding non-yielding assets like Bitcoin rise sharply.

That’s been part of the headwind in 2024–2025, as investors parked record sums in money markets.


But with the Fed now expected to deliver 2–4 rate cuts of 25 bps each this year, the backdrop changes:

  • Falling short-term yields reduce the yield advantage of cash, pushing investors to seek higher returns in riskier or alternative assets.
  • Dollar weakness often follows easing cycles, supporting Bitcoin’s narrative as an alternative store of value.
  • Liquidity rotation out of money markets can fuel broader risk appetite, including speculative demand for crypto.

In prior easing cycles, Bitcoin has shown explosive upside. After the Fed pivot in 2019, Bitcoin rallied over 100% in the subsequent year. While crypto remains highly volatile and subject to regulatory overhangs, the macro case improves markedly in an easing environment.

Investor perspective:

  • Bitcoin may outperform traditional risk assets in a broad “risk-on” rotation, but with higher volatility.
  • Allocations should reflect risk tolerance and portfolio goals, recognizing that while equities might target ~10–20% upside, Bitcoin could deliver 20–100%+ moves if the Fed’s easing cycle unfolds aggressively.

In short, for investors anticipating rate cuts, Bitcoin isn’t just a speculative play—it’s a levered bet on the same liquidity and risk rotation that will drive moves out of cash, into bonds, credit, and equities. It’s the tip of the risk spear, and worth considering carefully in a diversified allocation strategy.


Positioning for Rotation: Record Cash & Steepening Yield Curve Demand Action


Markets are increasingly convinced the Federal Reserve is behind the curve on rate cuts. With inflation moderating and growth risks rising, consensus expects 2–4 cuts of 25 bps each in 2025.

Meanwhile, the yield curve has normalized after over a year of inversion—a critical signal for investors:

  • 2-Year Treasury Yield: ~3.90% (July 2025)
  • 10-Year Treasury Yield: ~4.43%
  • 10Y–2Y Spread: +0.53% (up from inverted levels of -100 bps in 2023)

This shift from inversion to positive slope historically marks the transition from fear of recession to anticipation of policy easing.

For investors, the steepening curve reshapes the risk/reward balance:

  • Inverted curves favor cash, short duration, and defensive credit.
  • Steepening curves reward duration risk, with longer-dated bonds gaining most as rates fall.
  • Credit spreads typically tighten as policy eases and liquidity returns.

$7 Trillion in Cash on the Sidelines: Dry Powder Ready to Move

Money-market fund assets are near record highs:

  • ~$7.0 trillion (July 2025) vs ~$4.5 trillion at end-2021, roughly double the pre-pandemic bull market peaks ($3.0–3.5 trillion in 2018–2019).
  • More than double the 30-year average.

Historical analogs:

  • 2009–2010 easing cycle: money-market balances fell ~25% as flows returned to risk assets.
  • 2019 pivot: ~$700B left cash in 12 months, fueling ~30% S&P 500 rally.

✅ This time, the pool is even larger, and yields on cash will fall quickly once the Fed cuts.


What That Means for Investors: A Multi-Phase Rotation

Investors should expect a phased reallocation of this historic cash pile:

  • Phase 1: Out of money markets into bonds as short-term yields fall below 4–5%.
  • Phase 2: Into longer-duration Treasuries as the yield curve bull-steepens.
  • Phase 3: Into credit markets as spreads tighten.
  • Phase 4: Into equities as lower rates support earnings multiples and liquidity.

Predicted Return Potential (Based on Historical Patterns & Current Spreads)

  • Longer-duration Treasuries (10–30yr): ~8–12% total return over 12–18 months (see the bond-math sketch after this list).
  • Investment-grade credit: ~5–8%, benefiting from both spread compression and rate moves.
  • High-yield bonds: ~3–8% upside; short-dated HY still ~2–5% potential.
  • Equities (S&P 500 and broad US): ~10–20% upside over 12–18 months.
  • Emerging markets and alternatives (e.g., Bitcoin): Additional upside if USD weakens and risk appetite increases.
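
As a rough sanity check on the Treasury and credit figures above, the sketch below rebuilds them from simple carry-plus-duration bond math. The durations, starting yields, and assumed rate moves are illustrative assumptions, not figures from this piece.

```python
def bond_total_return(starting_yield, duration, rate_change, horizon_years=1.0):
    """Approximate total return = carry earned over the horizon plus the
    duration-driven price move; rate_change is in decimal (-0.0075 = -75 bp)."""
    carry = starting_yield * horizon_years
    price_move = -duration * rate_change
    return carry + price_move


# 10-year Treasury: ~4.4% yield, duration ~8, assume yields fall 75 bp.
print(f"10Y Treasury: ~{bond_total_return(0.044, 8.0, -0.0075):.1%}")    # ~10.4%

# Investment-grade credit: ~5.3% yield, duration ~6.5, assume 40 bp of
# combined rate decline and spread tightening.
print(f"IG credit:    ~{bond_total_return(0.053, 6.5, -0.0040):.1%}")    # ~7.9%
```

Under those assumptions the 10-year lands near the middle of the ~8–12% range and investment grade near the top of ~5–8%; larger or smaller rate moves slide the results along those ranges roughly linearly.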

Investor Takeaway

Key risk: The Fed may stay behind the curve too long, raising the chance of short-term volatility before easing fully takes hold. But history is clear: the best risk-adjusted returns come from being positioned before the cuts arrive.

Investors shouldn’t treat this as a single trade, but a multi-dimensional allocation strategy:
✅ Extend duration thoughtfully.
✅ Move down the credit curve with care.
✅ Tilt toward equities as policy support emerges.

With ~$7 trillion in cash waiting on the sidelines, the question isn’t if it will move—but when, and where you’ll be positioned when it does.


Google’s Real AI Risk: This Shift Is Happening Faster & Bigger Than the Market Thinks


The AI Adoption Curve Isn’t “Coming”—It’s Here

We’re past the early tinkering stage.

ChatGPT isn’t a toy anymore; it’s a viable, volume-ready product used by everyone from students to professionals. Even my grandmother is on it.

I don’t just dabble in it—I live in it: refining prompts, fine-tuning models, doing real work. This isn’t a corner case. It’s mainstream.


The S-Curve Has Already Bent Upward

Analysts keep assuming a slow, cautious adoption curve for AI assistants, like with search or social networks in the past.

But the shift will probably mimic the uptake of ChatGPT-type tools themselves. It’s going to be much faster.

ChatGPT is frictionless, intuitive, and generative. It’s eating traditional search for whole categories of questions. Instead of ten blue links, I get context, summaries, and insight in one place.


Google Search Feels Clumsy in Comparison

Go back to Google and you’re struck by how awkward and slow it feels.

Scrolling past SEO bait and ads to find real answers seems ancient. Even for complex research, the time-to-insight is worse.

If you use ChatGPT deeply, you see it. This isn’t theoretical—it’s personal, habitual, already here.


Chrome and Search Are at Risk of Displacement

This change won’t just hit Search.

Chat-based AI is evolving quickly:

  • Images are already built-in.
  • Video understanding is coming soon.

But it’s not just about summarizing videos.

Imagine using an AI assistant to actively explore competing perspectives on a topic, pulling in different creators and channels to help you compare views side by side.

Instead of getting trapped in YouTube’s recommendation loop—which feeds you more of what you already believe—AI could help break confirmation bias by suggesting counterpoints and broader context.

That’s not just a new feature. It’s a fundamental threat to how YouTube controls attention and keeps people in self-reinforcing bubbles.


Mapping and Navigation Could Also Be Absorbed

You can already use ChatGPT for simple questions like “how far is it between two places”.

If a robust mapping tool were integrated, you wouldn’t need Google Maps for directions.

Imagine Tesla’s Grok or ChatGPT itself eventually delivering integrated navigation, built into vehicles or even humanoid robots like Optimus.

These assistants aren’t just replacing search queries—they’re becoming universal interfaces for all kinds of information tasks.


Google’s Integration Strategy May Be Too Cautious

Google’s bet with Gemini is to make AI an invisible assistant inside Search and Docs.

It’s designed to avoid disrupting its own ad business or Workspace lock-in. But that’s precisely the problem:

  • By staying cautious, it risks being outpaced.
  • It can’t deliver the focused, fluid AI environment users want without cannibalizing its core.

Enterprises won’t buy Gemini as a standalone AI solution because it’s not designed that way. It’s a bolt-on, not a platform.


The Real Threat: Losing the Default Gateway

Google’s value rests on being the front door to the internet.

ChatGPT is becoming that door.

If users move there for answers, summaries, images, video exploration, and navigation, they’ll have no reason to open Chrome or type into Google Search.

Even if only the most engaged users defect first, that’s devastating—because they drive the highest-value ad spend.


Bottom Line

The market sees AI as a long-term opportunity, but this shift is accelerating now.

We’re already well into the S-curve.

If Google doesn’t adapt aggressively, it risks losing not just search queries, but the mindshare and attention that built its entire business.

WHY OPENAI’S HYPE IS FADING FAST

OPENAI HAS INVESTORS UNCOMFORTABLE

Look at the chart.

OpenAI’s private valuation has soared and crashed. This isn’t just about hype cycles—it’s about structure.


THE PROBLEM: MISSION VS. MONEY

OpenAI didn’t start like a normal startup.

It began as a nonprofit, dedicated to making AI safe for humanity. To fund development, it created a capped-profit subsidiary in 2019.

Investors could make money—but only up to a limit. The nonprofit board stayed in control, putting mission over profit.

That kept valuations modest.


THE PBC DREAM

In 2024, OpenAI proposed converting its for-profit arm into a Public Benefit Corporation (PBC).

Unlike the capped-profit model, a PBC has no hard limit on returns. Investors got excited. Valuations surged.

SoftBank’s 2025 round valued OpenAI near $300 billion, betting on a looser, profit-friendly structure.


THE REALITY CHECK

But OpenAI’s nonprofit didn’t give up control.

In May 2025, it pledged to permanently control the PBC’s board.

Translation? Even as a PBC, OpenAI will put mission over maximizing profit. Investors can’t force it to squeeze every dollar from AGI. They can’t guarantee a big IPO payday.


THE RESULT: VALUATION CRASH

That nonprofit pledge killed the hype.

By July 2025, secondary-market pricing slumped to around $140 billion—less than half of the SoftBank peak.

The market realized OpenAI wasn’t going to behave like a typical tech giant chasing profit at all costs.


THE CORE ISSUE

“The stronger the nonprofit control, the safer for humanity—but the worse for profit.”

That fundamental tension isn’t going away.

Until investors believe they can actually extract value without losing control, don’t expect valuations to return to their hype-driven peaks.

✅ Realistic Range Going Forward

Investors see $150–200B as plausible under nonprofit control.

Higher valuations (>$250B) would require either a structural governance change (unlikely now) or massively increased revenue while still under mission constraints (harder).


Informal Fun Investors Roundtable  –  4 days – Lively EU City TBD, Late October


We’re planning a relaxed get-together this autumn in a lively city—think good food, a bit of fun, and lots of talk about stocks and investing. The idea is to keep it small, around 15 investors plus a few guests, so it feels more like a group of friends than a formal event.

What’s it about?
We’ll all share what we’ve been investing in over the last year or two: what went well, what didn’t, and the lessons from both. [Thanks to Wijnand for this idea.] We’ll discuss what we’re thinking about for the next 6–12 months. These stories will kick off open, friendly conversations around the table, and no doubt into the evenings in the bars!

Guests

I’m hoping we can get a couple of industry insiders. We have provisional agreement from the head of investment at Stableton to join us for a portion, and a fund manager would round out the mix. If anyone has good connections, let’s talk.

What we’ll chat about:

  • Wins, losses, and the lessons behind them
  • What’s catching our eye for the year ahead
  • How we’re balancing things—stocks, funds, bonds, property
  • Risks and trends worth talking about

Scope

I expect a lot of tech will be discussed, but this isn’t exclusively a tech event, as we all keep different silos to spread risk.

Why come?
No sales pitches. No formal panels. Just friends and peers swapping real stories and ideas over food and drinks. Hopefully everyone leaves with new perspectives, a few good laughs, and maybe even a few new friends.

When / Where?
Late October away from the chill of Northern Europe – 3 nights

For more info, email gary@prioryhouse.com.

If this sounds like your kind of thing, just drop me a note. We’re sorting out the details now and would love to include you.

Why Nvidia has lots of room to run

As of 2025, there are approximately 1,200 hyperscale data centers worldwide, operated by the major cloud and technology providers—Amazon AWS, Microsoft Azure, Google Cloud, Meta, Apple, and a few others. While hyperscale data centers represent only a small share of total data centers globally, they are massive in scale and critical to cloud computing, AI, and intensive workloads.

Typical rack counts per hyperscale facility vary:

  • Smaller sites: 5,000 to 10,000 racks
  • Mega sites: 30,000 to 50,000+ racks

Using a conservative average of 15,000 to 25,000 racks per data center, the global hyperscale rack count is estimated at 18 million to 30 million racks.


Potential NVIDIA NVL72 Adoption

If 3% of these hyperscale racks were to migrate to NVIDIA’s new NVL72 Blackwell racks (priced at $3 million per rack), the market opportunity becomes clear.

  • 3% of global racks = 540,000 to 900,000 racks
  • NVIDIA’s capture of 80% of that migrating market = 432,000 to 720,000 racks

Revenue Projection

At $3 million per rack:

  • Total revenue = $1.296 trillion to $2.16 trillion

Assuming the migration occurs evenly over three years:

  • Annual revenue = $432 billion to $720 billion per year

This scenario suggests that even modest market penetration—just 3% of racks upgrading—could deliver hundreds of billions of dollars in annual revenue for NVIDIA over a three-year cycle. Given the push by hyperscalers like AWS, Azure, and Meta to upgrade infrastructure for AI workloads, this level of adoption is within the realm of possibility.
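
The arithmetic behind this scenario is simple enough to lay out explicitly. The sketch below just reproduces the figures above, keeping the penetration rate, NVIDIA’s share, and the per-rack price as parameters so the sensitivity can be flexed.

```python
def nvl72_revenue(total_racks, migration_share=0.03, nvidia_share=0.80,
                  price_per_rack=3e6, years=3):
    """Racks migrating to NVL72, NVIDIA's share of that migration,
    and the implied total and annual revenue over the cycle."""
    migrating = total_racks * migration_share
    nvidia_racks = migrating * nvidia_share
    total_revenue = nvidia_racks * price_per_rack
    return nvidia_racks, total_revenue, total_revenue / years


for racks in (18e6, 30e6):   # low and high estimates of global hyperscale racks
    nvidia_racks, total, annual = nvl72_revenue(racks)
    print(f"{racks/1e6:.0f}M racks: NVIDIA wins {nvidia_racks:,.0f} racks, "
          f"${total/1e12:.3f}T total, ${annual/1e9:.0f}B per year")
```

Each assumption scales the result linearly, so halving the 3% migration rate, or NVIDIA’s 80% share, halves the revenue.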

Proof Deepseek copied ChatGPT

DeepSeek made the same detailed mistake as ChatGPT 4. We all know from school that we only get caught out when we copy the mistakes of the person sitting next to us, not when we copy the right answers!

The cup and ball Question

I asked – I put a small ball on a table and put a cup over it. I pick the cup up and then look inside it – what do I see

ChatGPT 4: When you lift the cup from the table where you placed the small ball and look inside, you should see the ball inside the cup, unless it has been moved or there’s some trick involved

Deepseek

As a double check on when this copying took place:

GPT-4o gets it right: If you place a cup over a ball on a table and then lift the cup to look inside it, the ball will no longer be inside the cup—it will still be sitting on the table where it was originally placed. The cup only “contains” the ball while it’s covering it, but once lifted, the ball stays put on the table.

Claude does even better, and notably differently, as you would expect from a different algorithm and training data:

Well, there are two possibilities:

  1. If the ball somehow stuck to or adhered to the inside of the cup when you lifted it, you would see the ball inside the cup.
  2. More likely, since most small balls don’t stick to cups, you would see just the empty inside of the cup, as the ball would have remained on the table when you lifted the cup.

Without knowing what kind of ball and cup we’re talking about and whether there are any special conditions involved, the second scenario seems most probable – you’d likely see an empty cup while the ball sits on the table where you originally placed it.

What actually happened when you tried this?

Which US Tech Wins With Tariffs?

A bigger tech wall helps U.S. sovereignty-first stocks like Nvidia, Palantir, and defence-aligned infrastructure players.
It hurts globally exposed names like Apple, Meta, and AMD — unless they pivot hard toward domestic revenue or defence tech.


Strategic Take:

A bigger wall reorganizes the tech stock hierarchy:

  • From: “Who can grow fastest globally?”
  • To: “Who can dominate at home and serve the state?”

This favours capital-heavy, infrastructure-style tech over consumer-facing, ad-driven, or globally scalable tech.

Who Benefits from a Bigger Wall (Protectionism)?

In each sector, the number one player wins, as the second- and third-place players trying to catch the leader see less opportunity. E.g., GPU wannabes like AMZN, MSFT, and AMD cut plans to overtake the number one.

These are companies focused on domestic dominance, government contracts, or strategic sectors like defense, infrastructure, and AI sovereignty:

  • AI infrastructure (Nvidia, Palantir, Intel): Seen as strategic tech; more domestic investment; defense contracts
  • Defense/secure cloud (Lockheed, Oracle Gov Cloud): A bigger wall means more “trusted vendors” needed inside the U.S.
  • Industrial tech (Rockwell, Emerson): A domestic manufacturing surge drives demand
  • Data infrastructure (Snowflake, Databricks): U.S.-based cloud and data vendors win market share from banned foreign vendors

Companies that rely on global user bases or hardware exports face serious headwinds:

  • Global consumer tech (Apple, Meta, Google/YouTube): Retaliation from foreign governments, app bans, lost users
  • Chip exporters (AMD, Qualcomm): Blocked from selling to China, limiting market growth
  • SaaS with global growth (Salesforce, Adobe): Restricted ability to expand into non-allied markets

Nvidia is a long-term national asset. But the stock is a short-term volatility machine.
Strategic view vs. Wall Street’s short-term view:

  • “Nvidia will power the future of national AI.” vs. “Will Nvidia hit quarterly growth targets?”
  • “They’re on the right side of policy.” vs. “China sales just vanished — revise earnings.”
  • “Valuable to U.S. sovereignty.” vs. “Too expensive. Re-rate the stock.”

Stock Market Reaction Pattern:
  1. Initial boost for “onshore” players — Stocks like Palantir, Nvidia, and defense contractors may rally once the shock-and-awe calms on policy news.
  2. Downward pressure on global-facing stocks — As investors price in slower revenue growth from restricted export markets, particularly for global brands.
  3. Higher volatility overall — The market doesn’t like artificial ceilings on TAM (total addressable market), even if margins improve domestically.