Note: The Altman Era Index (AEI) is a working hypothesis, a thinking tool, not a formal framework. It isn’t affiliated with, endorsed by, or representative of any organisation, institution, or frontier lab. It is simply my attempt to make sense of how startups survive in a world shaped by fast-moving model cycles.
Who can benefit most: Founders, investors, and enthusiasts gauging the health of an AI startup and the credibility of its claims.
For those who know me, you'll know about my obsession with Sam Altman. It stems from a curiosity to understand how he thinks and operates; that matters, because few people can both predict the future and wield the power to create it. I scraped all his blog posts [available at blog[dot]samaltman[dot]com, written since 2013] and have been reading them during flight hours since May '25. I have 40 pages left [of 175 in total], but this idea jumped out midway.
The Why: Long story short, why did I read all of Mr. Altman's blogs?
To understand the state of mind it takes to build something bold like OpenAI.
To identify his patterns of thinking; to compare, evaluate, and future-proof my own ideas.
To collect mental models for understanding AI.
To learn what will kill your AI startup idea.
The flashpoint idea: I sensed a strong shift in his writing style after 2022. What Sam wrote in 2014 is shaping reality today, so my thesis is: his latest ideas probably hint at the direction of the future. For example, his predictions about Reddit, shared locations, Bitcoin, and chat interfaces have become the fabric of reality today. Altman describes how acceleration feels from the inside as an operator, investor, and participant.
With this belief, I asked GPT to analyse his blogs and create a word cloud. While the word cloud looks simple, the vocabulary noticeably shifts from “growth, startup, wealth” to “alignment, energy, governance”. The nouns matured. Somewhere between 2013 and 2023, his writing grew into futuristic concepts. If you plot those words over time, you’ll notice three clear gradients:
Acceleration Era (2013–2020): where “compute, growth, efficiency” dominate.
Coordination Era (2021–2025): where “alignment, policy, governance” emerge.
Containment Era (2026 onward): where “energy, sustainability, equilibrium” start peeking through.
From prediction to calibration
The question I’ve been obsessed with since reading him: if Altman can describe the slope of acceleration before it happens, or use his ideas to predict the direction of OpenAI, how do we compare and future-proof our own ideas using that as a compass? If I am building an AI startup, what era of Altman am I building into, and what are the biggest threats to my business?
That’s where the idea of the Altman Era Index (AEI) came from: a simple system to measure where a startup’s thinking belongs. Are you building for the next decade or the previous one?
The Builder’s Systems Map
Every AI company, no matter how novel, is held together by six invisible systems:
Compute: How efficiently you convert energy into intelligence.
Intelligence: How unique and adaptive your model is.
Feedback Loops: How fast you learn from your users.
Alignment: How reliably you behave as intended.
Governance: How you self-correct before regulators do.
Energy: How sustainably you scale.
Energy is difficult for many startups to measure, so I've dropped it from the scoring system for ease. From here on, this article calls the remaining systems "nodes".
If we plot those five systems on a pentagon, the resulting shape is your startup’s fingerprint, or as I call it, your "node geometry". My inference: balanced shapes survive longer, and asymmetric ones burn out.
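To make the fingerprint concrete, here is a minimal Python sketch of the node geometry: five scores placed at the corners of a pentagon. The layout (first node at the top, equal angular spacing) is my own illustrative assumption, not the notebook's exact rendering.

```python
import math

NODES = ["Compute", "Intelligence", "Feedback", "Alignment", "Governance"]

def node_geometry(scores):
    """Place five node scores (0-10) at the corners of a pentagon.
    The (x, y) vertices trace the startup's fingerprint polygon."""
    verts = {}
    for i, (name, s) in enumerate(zip(NODES, scores)):
        theta = math.pi / 2 + 2 * math.pi * i / 5  # first node at the top
        verts[name] = (s * math.cos(theta), s * math.sin(theta))
    return verts

# A perfectly balanced startup traces a regular pentagon; an
# asymmetric one traces a spiky, lopsided polygon.
balanced = node_geometry([7, 7, 7, 7, 7])
lopsided = node_geometry([10, 9, 3, 2, 8])
print(balanced["Compute"])  # roughly (0, 7): the top vertex
```

Each vertex sits at a distance from the centre equal to its node score, so a weak node visibly pulls the polygon inward on that side.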
A. The Frontier Balance Sphere
To see what “balanced” looks like, I built a reference from the current frontier labs: OpenAI, Anthropic, ElevenLabs, and Gemini/DeepMind. Their combined profile forms a blue sphere of near-perfect equilibrium. Initially I had considered only OpenAI, but added the other frontier labs to even out the modality score.

The node average score (0–10) is the physics of the frontier: the cost floor, the learning cadence, the energy ceiling.
B. Measuring yourself against it
Founders can self-diagnose by answering some questions in all honesty: What’s your cost per 1K tokens? How long does feedback take to reach production? How often do you audit for bias or drift?
Initially I wanted founders to score themselves from 1 to 10 on each question, but that might introduce unknown bias into the scoring pattern, so I've minimised the answer options and assigned a broader range to each one, making the questions easier to answer and less bias-prone.
Each answer scores your startup on the five nodes we discussed above, forming your own red polygon. Overlay it on the blue Frontier Balance Sphere and the gaps give you a direct comparison.
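As a sketch of how banded answers could map onto node scores, consider the snippet below. The bands and point values are illustrative assumptions, not the notebook's actual mapping.

```python
# Hypothetical answer bands -> node scores (0-10). Broad bands reduce
# self-assessment bias compared with a raw 1-10 slider.
COMPUTE_BANDS = {            # "What's your cost per 1K tokens?"
    "under $0.001": 9,
    "$0.001 to $0.01": 6,
    "over $0.01": 3,
}
FEEDBACK_BANDS = {           # "How long does feedback take to reach production?"
    "under a week": 9,
    "one to four weeks": 6,
    "over a month": 3,
}

# Two of the five node scores, derived from banded answers.
profile = {
    "Compute": COMPUTE_BANDS["$0.001 to $0.01"],
    "Feedback": FEEDBACK_BANDS["under a week"],
}
print(profile)  # {'Compute': 6, 'Feedback': 9}
```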
The algorithm computes three things:
Altman-Fit: How close you are to frontier balance.
Symmetry Index: How evenly your systems are built.
Era Distance: How far your mindset is from the next Altman decade.
Together, they show not what you believe you’re building, but what your data says you are.
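Here is a minimal sketch of how these three outputs might be computed from a five-node profile. The exact formulas are my assumptions: Altman-Fit as one minus the normalised Euclidean gap to the frontier profile, Symmetry as one minus the relative spread of the nodes, and Era Distance as the mean per-node shortfall.

```python
import math

FRONTIER = {"CE": 9, "IQ": 9, "FB": 9, "AL": 9, "GV": 9}  # illustrative profile

def altman_fit(you):
    """1 minus the normalised Euclidean gap to the frontier profile."""
    gap = math.dist([you[k] for k in FRONTIER], list(FRONTIER.values()))
    worst = math.dist([0] * len(FRONTIER), list(FRONTIER.values()))
    return 1 - gap / worst

def symmetry_index(you):
    """1 means perfectly even nodes; lower means lopsided."""
    vals = list(you.values())
    mean = sum(vals) / len(vals)
    spread = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return 1 - spread / mean if mean else 0.0

def era_distance(you):
    """Mean per-node shortfall against the frontier (0 = no gap)."""
    return sum(max(FRONTIER[k] - you[k], 0) for k in FRONTIER) / len(FRONTIER)

you = {"CE": 8, "IQ": 7, "FB": 9, "AL": 5, "GV": 4}
print(altman_fit(you), symmetry_index(you), era_distance(you))
```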

C. The Drowning Zones
Startups don’t usually die of competition; they drown in asymmetry. Too much compute, not enough feedback. Too much scale, no alignment. Too much energy burn, zero sustainability. For example: if your Alignment score falls 20% below frontier, you’ll lose enterprise trust. If your Compute × Energy balance breaks, you’ll drown in cloud bills.
I call these weak points the Drowning Zones, the nodes that drag your system underwater first.
The algorithm also outputs a table capturing the risk % of each node. The frontier gap shows how far you are from any frontier lab.
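A sketch of how such a risk table could be computed: the per-node gap from the frontier as a percentage, flagged when it crosses a 20% threshold. The frontier values and the flagging rule here are illustrative assumptions.

```python
FRONTIER = {"Compute": 9, "Intelligence": 9, "Feedback": 9,
            "Alignment": 9, "Governance": 9}   # illustrative frontier profile

def drowning_zones(you, threshold=0.20):
    """Per-node frontier gap as a risk %; nodes whose gap exceeds the
    threshold (20% below frontier by default) are flagged as Drowning Zones."""
    rows = []
    for node, f in FRONTIER.items():
        gap = max(f - you.get(node, 0), 0) / f   # fraction below frontier
        rows.append((node, round(gap * 100, 1), gap > threshold))
    return rows

you = {"Compute": 8, "Intelligence": 7, "Feedback": 9,
       "Alignment": 5, "Governance": 4}
for node, risk, drowning in drowning_zones(you):
    flag = "DROWNING" if drowning else "ok"
    print(f"{node:<12} gap {risk:>5.1f}%  {flag}")
```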

The Drowning Zones compare your performance on each node with the frontier labs.
D. The Survival Map
Survival Score is your startup’s ability to endure frontier pressure, based on closeness to frontier (FF), internal balance (SI), and future strain (CPI).
AEI produces a dashboard of your survival probability. It is calculated based on three core pillars.
Frontier Fit (FF): How close your architecture is to the frontier labs. If you are too far from the frontier sphere in CE, IQ, FB, AL, or GV, your base resilience drops. FF is assigned a 40% weightage in the Survival Score.
Symmetry Index (SI): How balanced your five-node design is. A lopsided company (one strong node, one very weak node) collapses faster under pressure. This is an interdependent score across your five nodes. I've assigned SI a 30% weightage in the Survival Score.
Competitive Pressure Index (CPI): How hard the next Altman era will punish your weak spots. A high CPI means the upcoming era exposes your vulnerabilities more severely. It carries a 30% weightage in the Survival Score (inverted as 1 − CPI).
You can read the survival map like a cockpit:
Altman-Fit > 0.8 → healthy orbit.
Symmetry < 0.7 → fragile.
CPI > 0.15 → frontier labs can overrun your moat within 12 months.
A simple gauge rolls it all up:
Green (> 80) = Resilient. Yellow (60–80) = Fragile. Red (< 60) = Drowning.
Survival Score (0–100) = 0.4 × Fit + 0.3 × Symmetry + 0.3 × (1 − CPI)
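The rollup in code form, following the weights and colour bands above (treating scores of exactly 60 and 80 as Yellow is my tie-break assumption):

```python
def survival_score(fit, symmetry, cpi):
    """Survival Score (0-100) = 0.4*Fit + 0.3*Symmetry + 0.3*(1 - CPI),
    with all three inputs on a 0-1 scale."""
    score = 100 * (0.4 * fit + 0.3 * symmetry + 0.3 * (1 - cpi))
    if score > 80:
        band = "Green (Resilient)"
    elif score >= 60:
        band = "Yellow (Fragile)"
    else:
        band = "Red (Drowning)"
    return round(score, 1), band

print(survival_score(0.85, 0.75, 0.10))   # healthy profile -> Green
print(survival_score(0.55, 0.50, 0.30))   # weak profile -> Red
```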
E. Wrapper Exposure & Frontier Shock
Most AI startups today are wrappers: they ship UX, workflows, and narrow domain logic on top of APIs from OpenAI, Anthropic, Google, or ElevenLabs. This creates a paradox: you can grow fast, but you do not fully control the ground you stand on.
Wrapper Economics in the AEI captures three forces to reckon with:
1. NET (Normalized Economic Throughput)
How big you are relative to the frontier labs. Small NET = labs barely notice you. Large NET means your business is exposed whenever they change pricing, quotas, or policies.
2. CER (Customer Exposure Risk)
How much a price cut or model upgrade from the labs can compress your margins. If a lab halves its API cost and you pass that on to customers, you’re safe. If it destroys your pricing power, CER spikes.
3. FSR (Frontier Shock Risk)
This is the “earthquake” metric. It measures how destabilising new frontier model releases can be for your business, or the age-old fear: will OpenAI kill my startup overnight?
Wrapper Economics is, in essence: “How much of your fate is controlled by your supplier vs. by you.” A high WES (Wrapper Exposure Score) means you’re building on rented land. A low WES means you’re slowly taking control of the ground beneath you.
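A sketch of how WES might blend the three forces. The article doesn't specify the combination rule, so the linear weights below are purely my assumption.

```python
def wrapper_exposure(net, cer, fsr, weights=(0.3, 0.4, 0.3)):
    """Hypothetical Wrapper Exposure Score: a weighted blend of NET,
    CER, and FSR, each on a 0-1 scale. Higher = more rented land."""
    w_net, w_cer, w_fsr = weights
    return w_net * net + w_cer * cer + w_fsr * fsr

thin_wrapper = wrapper_exposure(net=0.6, cer=0.9, fsr=0.8)  # rented land
deep_company = wrapper_exposure(net=0.6, cer=0.2, fsr=0.3)  # own data/infra
print(f"thin wrapper WES: {thin_wrapper:.2f}, deep company WES: {deep_company:.2f}")
```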
Every new frontier model release creates a shock wave that moves through the ecosystem and AEI models this using two simple ideas:
Frontier Velocity: How frequently the labs drop major upgrades per year.
If it’s once a year → mild turbulence
If it’s every few months → constant shock
If it’s every few weeks → your moat becomes liquid
Sensitivity (CER): If your product mirrors the shape of GPT-4.1, GPT-5, Claude 3.7, or Gemini 3, then every time they release a new version, your value shifts.
Each model launch or shock cycle lowers your adjusted survival unless:
You control your own infra
You have your own data advantage
You have strong switching friction
You deliver workflow value beyond raw model capability
Frontier cycles are not a competition; they are more like gravity, or black holes, in the AI world. And gravity always pulls wrappers downward unless they evolve into deeper companies.
The Frontier Cycles Map is a projection of your startup’s survival score over the next six frontier model cycles. Think of it as a stress test through time, not revenue.

X-axis: frontier model release cycles. Y-axis: your survival score across those cycles; it may increase or decrease.
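A minimal sketch of such a projection. The decay form, where each release shocks the score in proportion to CER while moat-building (own infra, data advantage, switching friction) compounds it back, is my assumption for illustration, not the notebook's exact model.

```python
def project_survival(base_score, cer, cycles=6, moat_growth=0.0):
    """Project a survival score across the next N frontier model cycles.
    Each release shock scales the score down in proportion to CER;
    moat_growth models defenses compounding the score back up."""
    scores = [float(base_score)]
    for _ in range(cycles):
        shocked = scores[-1] * (1 - 0.5 * cer)   # release shock
        recovered = shocked * (1 + moat_growth)  # moat compounding
        scores.append(round(min(recovered, 100), 1))
    return scores

print(project_survival(75, cer=0.30))                    # exposed wrapper: decays
print(project_survival(75, cer=0.10, moat_growth=0.06))  # defended: roughly holds
```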

How to read the Frontier shock table.
Where this helps:
The AEI isn’t a prophecy engine; it’s a mirror. It forces founders to think in systems, not slogans. To see their own asymmetries before the market does. To ask: If the frontier moves tomorrow, which part of me drowns first?
How to know your AEI Score?
You can run the AEI from this Colab notebook here. I am not recording or saving your data anywhere. Be honest in your answers and let me know the results. Again please take it with a pinch of salt.
A caution
Don’t take the numbers too literally. Startups are made of timing, trust, and narrative, none of which can be scored. The AEI just gives you a structured way to see your blind spots. Think of it as a stethoscope, not a diagnosis. It doesn’t predict your death; it helps you hear your pulse.
Closing line
Altman once wrote, “Prediction is not foresight; it’s calibration.” The AEI is exactly that: a simple calibration tool, a way for every AI founder to ask some honest questions and reflect on them in their alone time:
“Which AI decade am I really building for?”
“What are my weakest spots?”
“How much is my business exposed to frontier shocks?”
If you notice errors, edge cases, or better mathematical structures, I’d genuinely appreciate your help improving the code. This project works best as a living, collaborative model refined by founders, builders, and anyone curious about the future we are all stepping into.
