MCAI Market Vision: Marcus Aurelius on AI
Meditations on Success, Valuation, and Market Discipline
Please see the companion vision statements Socrates on AI: A Vision Statement for Intelligence Worth Finding (Aug 2025) and Realpolitik for AI (Aug 2025). This publication reframes the AI debate for business leaders and investors: in a market saturated with hype, the way to identify real value is not to chase headlines but to ask disciplined questions. It offers four tests: does the system collapse without the intelligence, does it deliver insights humans cannot, does it materially change outcomes, and does it serve a lasting purpose beyond profit? These questions function as a practical due-diligence filter, helping decision-makers separate sustainable AI companies from buzz-driven pretenders.
Prologue: The Emperor Who Practiced Restraint
Marcus Aurelius (121-180 AD) ruled the Roman Empire at its peak, commanding the world's most powerful military and governing sixty million people across three continents. Yet history remembers him not for conquest but for constraint—the private journal he kept while managing plagues, wars, and political crises.
His Meditations, written in military camps and imperial palaces, reveal an obsession with practical philosophy: What actually works versus what sounds impressive? What endures versus what merely entertains? What is within our control versus what is merely within our influence?
As the last of Rome's "Five Good Emperors," Aurelius faced constant pressure to expand, indulge, and accumulate. Instead, he practiced what Stoics called "the discipline of desire"—focusing on sustainable value rather than immediate gratification. His empire outlasted him by centuries because he built institutions, not monuments.
In his private writings, he constantly asked: What is substance? What is performance? What will remain when the crowd moves on?
Today's AI market desperately needs this discipline. $200 billion flows annually into ventures that add "AI-powered" to their pitch decks. Valuations triple overnight on algorithmic promises. Executives who've never deployed machine learning suddenly become experts on artificial intelligence.
Aurelius would recognize the pattern. He wrote: "How much trouble he avoids who does not look to see what his neighbor says or does." The same applies to AI markets—those who focus on what actually creates dependency, rather than what generates headlines, will inherit the sustainable ground.
There is irony here: invoking the voice of an emperor who wrote only for himself to address the loudest, most public market in decades. But perhaps that contradiction is precisely the point—the most valuable insights often come from the quietest observers.
Insight: The discipline that built lasting institutions can identify lasting AI value.
Contact mcai@mindcast-ai.com to partner with us on predictive cognitive AI.
I. When Is AI Successful?
I have learned to distinguish between the sword that gleams in sunlight and the sword that cuts in battle. The first impresses observers; the second saves lives.
Walk into any enterprise software conference and witness this exact phenomenon. Vendors demonstrate AI that summarizes emails and optimizes schedules. Audiences applaud. Contracts get signed. Six months later, companies quietly return to their old workflows.
This reveals the first principle: dependence beats dazzle.
The test is surgical: Remove the AI. What breaks? If employees can return to previous methods without measurable loss, you've built entertainment. If essential functions fail, you've built infrastructure.
Consider two deployments I evaluated:
Company A automated customer service responses. Impressive results, glowing pilots. But turn it off tomorrow? "We'd rehire the reps we laid off." Efficiency, not dependency.
Company B predicts manufacturing equipment failures. Less glamorous, harder to demo. But removing it means returning to scheduled maintenance—40% more downtime, millions in unexpected failures.
One created convenience. The other created necessity.
Insight: AI is successful when its absence would cause collapse, not just inconvenience.
II. The Bubble Question
Aurelius would never invest in a company whose customers could abandon it tomorrow and survive. The Stoic investor asks not "How fast is it growing?" but "How much would customers lose by leaving?"
The AI bubble follows a predictable pattern. Companies build AI either to solve expensive customer problems or to solve their own fundraising problems.
The tell is in their metrics.
Bubble companies emphasize: Model accuracy, training data size, breakthrough announcements, partnership deals.
Value companies emphasize: Customer switching costs, operational dependency, competitive moats built through usage.
I recently reviewed two AI startups. The first raised $50 million and generated significant media coverage for "revolutionary natural language processing." Revenue came from pilots and proof-of-concepts. Customer renewals? "Still optimizing the onboarding experience."
The second raised $5 million with zero coverage. Their AI optimizes truck routes in real-time. Boring work. But customers who tried to switch back to manual routing abandoned the effort within weeks—the operational disruption was too severe. This AI had created genuine dependency.
The first solved investor excitement. The second solved the problem of customers having any choice.
The correction comes when investors start measuring dependency rather than growth.
Insight: Bubbles form in human judgment, not technological capability.
III. The Buzz Market
The competent engineer builds bridges; the incompetent engineer builds excitement about bridge-building.
The AI buzz market rewards the wrong behaviors. Companies spend more on keynotes than on customer success. The result is predictable: intelligence that impresses audiences but creates no lasting dependency.
I evaluated an AI company with enormous buzz for "conversational business intelligence." Flawless demos—ask any question about data, receive instant answers. Genuinely impressive technology.
Yet customers told a different story. The AI worked beautifully for pre-scripted questions, but real users asked follow-ups, contextual questions, and questions laden with industry-specific nuance; responses became generic and misleading. Crucially, customers could easily abandon the system and return to their previous analytics tools without operational disruption.
Contrast this with unglamorous AI that runs modern business. Fraud detection algorithms don't give impressive demos—they silently save billions annually. Supply chain optimization doesn't win awards—it ensures products arrive when needed. More importantly, these systems create operational dependencies that make switching prohibitively expensive.
Value what endures when the crowd moves on.
Insight: What works well doesn't need to shout. What shouts usually doesn't work well.
IV. Valuation as Discipline
I learned early that the value of a legion lies not in its equipment or training, but in what collapses when that legion is absent. The same principle applies to every tool, every investment, every innovation.
How do you value something that didn't exist five years ago? Most investors use familiar frameworks—revenue multiples, growth rates, market size projections. These metrics work for predictable businesses, but AI creates unpredictable value in unpredictable timeframes.
The Stoic approach focuses on dependency, not possibility.
A Stoic investor does not pay 20x revenue for convenience.
I recently evaluated two AI companies with identical metrics. Traditional analysis suggested similar valuations. But dependency analysis revealed completely different pictures:
Company A: Their AI optimized ad targeting. Impressive 20% better conversion rates. But customers could easily switch to competitors or revert to manual targeting with minimal disruption. Low dependency, abundant alternatives.
Company B: Their AI managed insulin dosing for diabetic patients. Less impressive growth story, smaller market. But customers literally could not function without the system once deployed. High dependency, no viable alternatives.
Same metrics, radically different values.
Valuation multiples by dependency depth:
Infrastructure AI (customers cannot function without it): 15-25x revenue.
Competitive AI (customers lose significant advantage without it): 8-15x revenue.
Efficiency AI (customers experience measurable benefits): 4-8x revenue.
Experimental AI (customers like the features but could live without them): 1-3x revenue.
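The tiers above amount to a simple screening heuristic: classify a company by how badly its customers break without the product, then read off a multiple band. A minimal Python sketch of that logic follows; the tier names and multiple ranges come from the list above, while the function names and the yes/no classification inputs are illustrative assumptions, not a formal model:

```python
# Dependency-based valuation screen.
# Tier definitions and multiple ranges follow the essay's list;
# the function names and inputs are illustrative assumptions.

TIER_MULTIPLES = {
    "infrastructure": (15, 25),  # customers cannot function without it
    "competitive":    (8, 15),   # customers lose significant advantage
    "efficiency":     (4, 8),    # customers see measurable benefits
    "experimental":   (1, 3),    # customers could live without it
}

def classify_dependency(customers_collapse: bool,
                        advantage_lost: bool,
                        measurable_benefit: bool) -> str:
    """Walk the tiers from strongest dependency to weakest."""
    if customers_collapse:
        return "infrastructure"
    if advantage_lost:
        return "competitive"
    if measurable_benefit:
        return "efficiency"
    return "experimental"

def valuation_range(annual_revenue: float, tier: str) -> tuple[float, float]:
    """Implied valuation band for a given revenue and dependency tier."""
    low, high = TIER_MULTIPLES[tier]
    return annual_revenue * low, annual_revenue * high

# Hypothetical example in the spirit of Company B: $10M revenue,
# customers collapse without the system -> infrastructure tier.
tier = classify_dependency(True, True, True)
low, high = valuation_range(10_000_000, tier)
print(tier, low, high)  # infrastructure 150000000 250000000
```

The point of the sketch is the ordering of the questions: dependency is checked before advantage, and advantage before mere benefit, mirroring the essay's claim that depth of reliance, not growth, drives the multiple.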
Value what endures when the crowd moves on.
Insight: Value AI based on customer dependency, not investor enthusiasm.
V. Innovation as Pattern, Not Surprise
Nothing new under the sun, including the belief that everything is new under the sun.
Every transformative technology follows the same arc: invention, early adoption, hype, correction, productive integration. AI is transitioning from hype to correction, creating opportunities for those who value what endures when the crowd moves on.
The pattern:
Invention: Works in labs, unclear commercial use
Early Adoption: Pioneers solve specific problems, mixed results
Hype: Media excitement, massive investment, unrealistic expectations
Correction: Reality reasserts itself, funding dries up
Integration: Sustainable applications emerge, technology becomes invisible
Printing press, telegraph, electricity, internet: each followed this sequence. AI currently sits between late hype and early correction.
Strategic opportunities:
For acquirers: Distressed AI assets available at fractions of peak valuations. Focus on companies that built genuine customer dependencies.
For operators: Deploy capabilities while talent is available. The companies that survive correction will have created switching costs that lock in competitive advantages.
For investors: Reduce speculative positions now. The correction will separate AI that creates dependency from AI that creates excitement.
Prepare for what always happens rather than hope for what rarely happens.
Insight: Innovation follows patterns. Success comes from timing your response to predictable cycles.
Epilogue: The Stoic Market
The reports from my provinces tonight show both prosperity and trouble—abundant harvests in Gaul, flooding in Egypt, rebellion stirring in Britannia. Success and failure, often simultaneous, always temporary. The wise ruler prepares for both.
The 95% failure rate in AI projects reflects not technological inadequacy but human nature under conditions of excitement. We chase novelty over necessity, promise over proof, narrative over numbers.
Yet this creates opportunity for the disciplined few.
While others debate whether AI is revolutionary or overhyped, the practical question is simpler: How do you identify and capture sustainable value in markets driven by temporary enthusiasm?
The Stoic principles provide clarity:
Focus on what you can control: Your investment criteria, deployment strategy, and risk management. You cannot control market cycles or competitor actions, but you can control your response to them.
Distinguish between impression and function: Judge AI by what it enables, not what it claims. The most valuable AI will be the most boring AI—invisible systems that make essential functions possible.
Value dependency over growth: Fast-growing companies that create convenience lose to slow-growing companies that create necessity. Sustainable AI businesses are built on customer dependency, not customer delight.
The correction now beginning will separate sustainable AI value from speculative AI enthusiasm. Those who maintained discipline during the hype will inherit the infrastructure of the next economy. Those who abandoned fundamentals for excitement will inherit expensive lessons.
The choice, as always, is ours to make.
Insight: The future of AI markets lies not in algorithmic sophistication, but in the application of timeless business discipline to new technological capabilities.
These meditations do not predict which AI stocks will rise or when the bubble will burst. They provide a framework for maintaining judgment when markets lose theirs. And in that framework—as Aurelius knew—lies sustainable advantage.