On AI and the future of work - notes from evolution, economics, history and physics.
Episode 96
You can know where AI is or where it’s going. But never both at once.
That’s Heisenberg’s Uncertainty Principle. Oddly enough, it captures how we relate to artificial intelligence today.
The uncertainty principle tells us that we cannot know both the position and the momentum of a particle with perfect precision at the same time. The more accurately one property is measured, the less accurately the other can be known.
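In symbols, with Δx the uncertainty in position, Δp the uncertainty in momentum, and ħ the reduced Planck constant:

Δx · Δp ≥ ħ/2

The tighter you pin down one factor, the looser the other has to get for the product to stay above that floor.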
In AI, it’s much the same. Like in physics, we can measure AI’s present or its trajectory, but never both. Zoom out and we spiral into hype or panic. Zoom in and we lose the big picture.
Is it ever possible to do both?
Brought to you by:
Market Curve is a premium tech storytelling studio, run by me, Shounak, that turns tech companies into media companies. I’ve had the fortune of working with some of the top tech companies, backed by top investors like YCombinator and 20VC, some of which have gone on to be acquired by companies like Roblox & Amplitude.
I’ve also helped founders grow their personal brand at the v0 stage, while writing essays that go viral on social media for companies closer to PMF.
Here are a few things I can help you with —
Longform blogs, essays, and newsletters that drive authority.
Web & landing page copy that converts.
High-value assets like decks, e-books, guides, and case studies.
Ghostwriting for busy founders & execs who want to stand out.
Want to turn your startup into a media company?
This Isn't Our First Tech Panic
Each time technology has progressed, it has been met with equal degrees of wide-eyed wonder and fearful skepticism.
We’re seeing it firsthand with AI, but on a long enough timeline, history repeats itself. Some philosophies even say time is cyclical. That means we’ve been here before and that’s a good thing. Patterns, precedents, and lessons exist if we’re willing to look back.
Following Alexander Graham Bell’s invention in 1876, the telephone sparked fear among early adopters. People believed it transmitted invisible forces or malevolent spirits. The telegraph faced the same suspicion—and so did electricity.
When John Feeks was tragically electrocuted in New York City in 1889, the association of electricity with danger grew even stronger.
These fears weren’t isolated - they were part of a broader reaction called technophobia, which took hold during the Industrial Revolution.
This wasn’t just fear of the unknown. It was also about jobs.
Before the 1870s, steel production and mining in the US were labor-intensive, manual processes. Workers would manually extract iron ore from shallow surface mines or deeper shaft mines using basic tools like picks, shovels, and hammers. The output? 10 to 20 tons of steel per worker each year. The metal was expensive, inconsistent in quality, and limited to small-scale uses like tools or rails.
But in the 1870s, the invention of the Bessemer process transformed steelmaking. Armed with this new technology, the average steel worker could produce over 300 tons of steel per year.
Steel prices dropped from $100 per ton in 1870 to just $12 by 1900, making it affordable for skyscrapers, bridges, and railroads, upon the back of which modern civilisation stands.
By the early 1900s, the US was producing 40% of the world’s steel. Andrew Carnegie, one of the richest American businessmen of the era, also made his fortune on top of the Bessemer wave.
This small history lesson tells us what technological economics is designed to do: automate away manual inefficiencies, streamline processes so costs drop, and make scarce, valuable resources accessible to people (both present and future) at scale.
Your average tech or VC bro will come in and probably utter the four magical words at this stage - capitalism increases the pie.
Their argument is that yes, jobs were lost but so many jobs were created too. And they would be right.
In 1860, before the Bessemer wave, Cleveland’s steel mills employed 374 workers. By 1880, Cleveland alone had ten steel mills employing 3,000 workers.
But the economics of work and job creation & displacement isn't so black and white. It's much more nuanced than that.
The 3 Stages of Technological Automation
Technology doesn’t replace humans overnight. It evolves in stages. When it comes to labor, there’s a spectrum of automation:
1. Productivity Tools
At the bottom of the spectrum are productivity tools that complement workers and complete only a portion of a job or task. Think telephones or AI-powered chip design. They make tasks easier, not obsolete.
2. Enabling Automation
Move up, and you’ll find yourself in the middle of the spectrum, where automation enables entire workflows, speeding up specific outcomes.
The Gutenberg printing press & the invention of the manufacturing assembly line are examples of this.
In the software world, you could look at spreadsheets as the disruptor to legacy notepads or ledgers for keeping track of customer data and financial information.
3. Replacing Automation
And then, finally, we have the last tier of the spectrum, where automation replaces human labor outright.
Here, tech does the job entirely. When a Waymo can take a person from Point A to B, the human Uber driver is replaced by the technology. And AI starts to replace knowledge workers the same way.
Will Humans Still Have a Role?
AI doomsday experts almost exclusively reside in the third tier. The bad news for them is that it’s almost inevitable that technology will replace humans at some point.
It's not the question of if, but when. Here's why this is almost a certainty.
The first reason is that, mathematically speaking, any "information-processing or execution task" is simply a function from input variables to output variables.
In lots of niche use cases, these input variables can become complicated, and it could be practically difficult to encode such things into computers, but in principle there is no insurmountable barrier. All we need are better ways to search for algorithms that implement such functions.
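To make that framing concrete, here’s a toy sketch in Python (my illustration, not anyone’s production system): treat the task as an unknown function, and “finding an algorithm” as searching a parameterized family of candidates for one that reproduces observed input/output pairs.

```python
import random

# Toy framing: a "task" is just a function from inputs to outputs.
# The hidden task here maps x -> 3x + 2; we only get to see examples.
examples = [(x, 3 * x + 2) for x in range(-5, 6)]

def candidate(a, b):
    """One member of a parameterized family of candidates: f(x) = a*x + b."""
    return lambda x: a * x + b

def loss(f):
    """How badly a candidate fails to reproduce the observed examples."""
    return sum((f(x) - y) ** 2 for x, y in examples)

# The bluntest possible search: sample candidates at random and keep the
# best one seen so far. No insight into the task is required at all.
best_f, best_loss = None, float("inf")
for _ in range(10_000):
    f = candidate(random.uniform(-10, 10), random.uniform(-10, 10))
    current = loss(f)
    if current < best_loss:
        best_f, best_loss = f, current

print(best_f(10), best_loss)  # drifts toward 32 and 0
```

Random search is the dumbest instrument available; gradient descent and evolutionary methods are just sharper ways of doing the same thing. Which brings us to the next point.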
And this kind of search shouldn’t be too difficult to create either. Human intelligence is itself an emergent property of blind evolutionary search. The cool thing is that complex tool use has evolved independently at least three times, across corvids, primates, and octopuses, so this step isn’t exactly a big bottleneck.
The creation of human intelligence from the level of primates took about 2 million of the 4 billion years since the origin of life on Earth. That’s 2,000,000 / 4,000,000,000 = 0.0005, or just 0.05% of the timeline.
And those 2 million years were spent evolving general intelligence that also had to handle cognition, maintain biological processes, process emotions, and so on. A search finely tuned for intelligence alone can probably converge much faster.
We also have access to silicon-based computational hardware (or even wetware in some cases) that has a massive advantage over biological hardware.
The arbitrage opportunity exists because biological hardware has to spend resources controlling our bodies, sleeping, emoting and feeling. A niched-down use case powered by computers carries none of that overhead, and that’s massive.
There are also disproportionate advancements being made in quantum computing; the world could very well become one where supercomputers are the norm, much like smartphones are today.
And finally, there’s the communication advantage - computers can transfer terabytes of data in seconds, whereas human communication leaks information at every step.
Combine all of these advantages, add compounding loops on top, and chances are our AI overlords will replace humans entirely.
Well, maybe not entirely. Humans will still have a role to play in this brave new world: making sure guardrails are in place, steering AI in directions aligned with key incentives, and staying in the loop to give models and agents the feedback they need to learn and improve.
How Do We Find Meaning in All This?
The thing about us biological creatures is that when our livelihood is threatened, or we feel we’re on course to lose the thing that once made us useful, we tend to respond in rather reactionary ways.
At which point, the question becomes - what work are we supposed to do if the machines can do it all? It becomes a question of us vs them.
We craft a narrative in our minds of what work is supposed to be and (wrongly) assume that only WE can do the work - after all, we can think, use our minds and make our efforts count.
There is a direct correlation between working hard on something and how much we value it. We might think humans naturally gravitate toward whatever requires less work or frees up our time, but the truth is we value our work in a deep and meaningful way.
Businesses work on the assumption that saving time is better: products and services that help save time are perceived as more valuable.
But then you have the IKEA effect, where people value something more because they built it themselves.
Or the Betty Crocker effect: instant cake mix sales dried up because the baking process was too methodical and lacked soul. The company added friction back in - bakers now had to add a fresh egg - and sales rebounded, because people wanted a cake that carried something of their own hands.
As such, there can potentially exist a world where a premium sits on the human effort that goes into creating something of high perceived value.
Perhaps a world where a company hires humans to provide a human customer support experience to its high-ticket unicorn customers, while AI support bots handle regular people like us. The human experience would then be held at a premium in an economic context.
So it’s plausible that AI replaces human tasks while the economic pie increases, creating a whole new world of jobs, much like the internet & the smartphone created the gig economy, the attention economy, and the creator economy.
This is because, historically, the work / energy output doesn’t just go out of existence - it simply manifests itself as something new.
The law of conservation of energy states that energy is neither created nor destroyed. It just gets transformed from one form to another.
Much like this, the effort we put in won’t disappear, but rather will be channeled into something new. Something that aligns with the existential & spiritual personas we inhabit.
If Nietzsche were here, he would probably urge us to become our best selves and discover what we want to do with ourselves to fulfil our higher purpose. For him, work was disgraceful because it detracted from man’s ability to produce the only thing that bestows value on life – beauty.
The point, then, of human work in the post-AGI world is to focus on work that is more creative, requires strategic thinking, and, of course, that we derive meaning from.
AI won’t just replace labor, it will force us to redefine it.
In that redefinition lies our next renaissance. Not of efficiency. But of meaning.
If you liked this essay, consider sharing it on your socials or inviting a friend to read it. Feel free to say hi to me on Twitter or LinkedIn, or via email.
That’s all for today! See you soon!
Amor Fati Amor.