The AI Epoch: Humanity, Economics, and the Job Market - FL#9
An Attempt at Envisioning a Solution to AI-Propelled Job Displacement
Welcome to the 7 new crew members who have joined the frontier expedition since the last letter! If you haven’t subscribed, join the 39 curious explorers in our adventure to understand the frontier of innovation.
Today, we explore the future of AI, its effect on the job market, and the potential distribution models to ensure people have the required funds in the case of job loss!
On November 30th, 2022, ChatGPT 3.5 was released. The system astounded me; it felt like I was interacting with a human, and for some domains of questions, it was like talking to an expert. It became the fastest-growing consumer application in history, reaching 100 million monthly active users within two months. One month later, OpenAI released GPT-4, which feels like an exceptional portable assistant with a multi-dimensional 150+ IQ knowledge base. In November of 2023, they achieved 100 million weekly active users. ChatGPT certainly challenges our understanding of the Turing test and has raised conversations about the emergence of a superintelligent AI.
This is all to say - something unlike what we have ever seen is here.
I've become obsessed with pondering how artificial intelligence, such as ChatGPT, affects humanity. What if Artificial General Intelligence (AGI), an AI capable of performing any intelligent task a human can - defined in this piece as a highly autonomous system that outperforms humans at most economically valuable work - arises? What if it already has? Many questions about what humans will do in the wake of this event rise to the forefront: What are its intentions? Will we be okay? What does this mean for all those employed? What other jobs will be created (if any)? What does this mean for the economy? What does it mean for a human's sense of purpose? How should society be structured? Will we retain control over the AGI? The list goes on and on…
I would love to think through all of these here, but today, I will grapple with the following: Where does the money go from jobs displaced by the AGI, how do people get money if AGI wipes out their careers, and how does it affect the economy?
Should the money flow to the company that deployed the AGI? Following our current economic model, we end up with a situation where the companies that provide the AIs will generate revenue with the money that was once an employee's paycheck.
This is a dangerous track to take. The money that once sustained the individual and their family is now in the pocket of whoever can create the best AI model - leaving people careerless and moneyless. I'm not proposing a dissolution of capitalism (no thanks, Marx), but an exception in the case of AI. I am contemplating how we can ensure we have the appropriate safeguards to protect against mass job displacement in the age of AGI. Imagine the conversation: "We're going to have to let you go. We can't support this role anymore, and we're going to use AI going forward - thank you for all your hard work, and good luck." Would you want something to be in place as a safeguard for you and your family?
The current model's centralized nature raises concerns about the concentration of power, particularly when AI can influence many aspects of society. Similarly, the lack of transparency in decision-making and fund allocation becomes increasingly critical as AI systems potentially make more autonomous decisions and take more jobs. The traditional structures will likely not be robust enough to support mass layoffs driven by an AGI job takeover.
There is a potential solution here worth thinking through and discussing, and I want to explore it today: an AI marketplace that can rely on fellow AIs deployed on a decentralized, self-sufficient, secure, and censorship-resistant distributed network - a blockchain. This model presents an opportunity for a more flexible distribution of resources and more democratic governance in an AI-dominated landscape until the AI can govern itself upon reaching AGI status.
Some may recoil at the mention of 'blockchain.' This is not an investment pitch but an exploration of how we might navigate a monumental shift soon. The focus here is on frontier exploration, and blockchain is a part of that conversation.
In today's piece, we will walk through the impact of AI on the job market, its effect on the economy, the necessity for a new distribution and governance model, and how to implement that model.
Before we start discussing where the money goes from displaced jobs and the distribution model of AI, we first need to understand whether AI will replace jobs and to what extent it will!
AI In The Workplace
AI has beckoned forward the conversation of human job displacement, and I have been thinking about this a lot since the release of ChatGPT and the subsequent rapid growth of AI products and services.
Throughout history, whenever an emerging technology appeared that people worried would take their jobs, it created MORE jobs in retrospect. The Luddites destroyed industrial machines, fearing that skilled labor would disappear, without seeing the benefits that would come from the Industrial Revolution. More recently, the advent of computers stoked anxieties that machines would replace man, without anyone realizing that jobs like podcaster, video game streamer, YouTuber, and blog writer would one day exist. Automation has mostly opened up new and less mundane tasks for humans!
I hesitate to say, "This time might be different," because that sentiment is widely regarded as a dangerous assumption - it's usually the quickest way to be wrong about a trend.
But I think this time… it might be different.
We're living through the most significant technological inflection point in known human history - the birth of AGI.
The best way to look at the potential for AI to take human jobs is through various future trajectories that humanity can take. In my research, I came across a piece by Anton Korinek, an economics professor specializing in AI research, who published a post with the International Monetary Fund titled: "Scenario Planning for an A(G)I Future."
In the piece, he proposes the concept of "the frontier of automation," which can be thought of as the task complexity that machines are capable of, including both:
mechanical (physical world) and
cognitive (abstract reasoning) skills.
He observes - and I think this is plainly visible - that the task complexity AI is capable of has steadily increased.
This raises the question: Will AI become so capable that it surpasses all of the current and potential tasks that humans can do, or, as AI takes on more human tasks, will humans become capable of performing more as-yet-undiscovered ones? These scenarios are captured in the unbounded and bounded chart below:
To summarize, is the distribution of all tasks in the known and unknown world capable of being performed by humans, or is there a limit on what tasks humans can perform? If there's a limit, we have a bounded distribution; if there's no limit, we have an unbounded distribution.
The author presents the following three scenarios:
Traditional - The world of unbounded task distribution. As AI frees up humans for new jobs, the complexity of tasks a human can do is expanded as humans continue to reveal undiscovered jobs! In this scenario, new jobs are constantly created, and AI might replace specific jobs but open up more meaningful ones.
Base AGI - A bounded-distribution base case in which we reach AGI (recall our definition) in roughly 20 years, at which point all human jobs disappear because the AGI surpasses our maximum task-complexity capability. This means no new tasks are created that we can perform more effectively than the AGI.
Aggressive AGI - A bounded-distribution aggressive case of roughly 5 years, in which we see the same outcome as the base case above, but on a faster timescale.
Simply, it looks something like this:
The author posits that this would increase productivity and wage growth as follows:
If we get any scenario resembling the base or aggressive case, it's clear that while productivity becomes vastly more substantial, wages will be materially zero. Nearly everyone stops earning money, highlighting the need for a value distribution model that includes those who lose jobs to AI - which is almost everyone. The productivity explosion is noteworthy; we'll return to it later in this piece.
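To make the bounded scenarios concrete, here is a toy model - my own illustration, not Korinek's actual formulation - in which an exponentially growing automation frontier eventually passes a fixed ceiling on human task complexity, and the share of tasks left for human labor (and hence wages) collapses to zero while machine capability keeps growing. The bound and growth rate are made-up numbers chosen so the crossover lands near the 20-year base case:

```python
# Toy illustration (not Korinek's actual model): in a bounded world, an
# exponentially growing "frontier of automation" eventually passes the
# fixed ceiling of human task complexity. Past that point, the share of
# tasks left for human labor (a stand-in for wages) is zero, even though
# machine capability keeps compounding.

HUMAN_BOUND = 100.0   # hypothetical maximum task complexity humans can perform

def frontier(year, start=1.0, growth=0.35):
    """Machine task-complexity frontier, compounding exponentially."""
    return start * (1 + growth) ** year

def human_task_share(year):
    """Fraction of tasks (0..1) still beyond the automation frontier."""
    return max(0.0, 1.0 - frontier(year) / HUMAN_BOUND)

for year in (0, 5, 10, 15, 20):
    print(f"year {year:2d}: frontier={frontier(year):8.1f}, "
          f"human share={human_task_share(year):.2f}")
```

With these assumed parameters, the human share erodes slowly for a decade and then vanishes entirely around year 20 - the "gradually, then suddenly" shape discussed later in this piece.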
First, let's consider which trajectory AI will likely take and which we should plan for as a society.
The answer to this question isn't clear. Since this has been such a prominent topic, many influential and relevant figures have weighed in, which helps amalgamate the viewpoints of the 'experts.'
Elon Musk said in his interview with Rishi Sunak,
"We will have for the first time something smarter than the smartest human. It's hard to say exactly what that moment is, but there will come a point where no job is needed, you can have a job if you want to have a job, for sort of, personal satisfaction."
Sam Altman with Lex Fridman said (paraphrasing):
AI will create a world with better jobs, meaning jobs you do for fulfillment rather than just to put food on the table. He believes it will create jobs that are difficult for us to imagine, and that, of the job losses that hit soonest, customer service comes to mind.
I asked the r/singularity subreddit this question, and the responses seem to favor a bounded distribution.
In this current moment, it's hard to predict whether scenario 1, 2, or 3 is more likely, given that we're in what feels like the midst of the exponential curve.
I'm not sure exactly what will happen. Speaking from logic, if we create intelligence that exceeds human cognitive and mechanical capability, which can operate 24/7, 365, is embedded in humanoid robots, is cheaper for companies to maintain than an employee, and is safer than if a human did it… it no longer even makes sense for humans to do the current jobs! Especially if it's safer for the AI to do so, would it not be imperative for us to let them step in?
This flow of logic makes it hard for me to see an unbounded distribution. I think scenario 2 is the most likely; however, I wouldn't rule out that we are already somewhere on the curve toward scenario 3.
By this point, the idealists may be upset that I believe AI won't be a net job creator, and the skeptic may be bitter that I haven't already discussed the world ending - I don't know, and I don't think anyone can know exactly how this will play out. However, we can have a hopeful future regardless. In the event of scenario 1, it's business as usual. In 2 and 3, the world will have so much more productivity and output that we can have a beautiful future. Still, we need to get the value distribution and governance right. Even if we get scenario 1, where AI creates better and more meaningful jobs, there must be a transition plan for those who lose theirs.
So now that we have navigated the evolving landscape of AI's role in employment, it becomes imperative to step back and cast a wider net encompassing the economic and social realms. Let's understand how AGI can transform our collective economic future if we don't adapt so that we can attempt to build a frame for what the solution really needs to do.
Post-AGI Economics in a Human-Task-Bounded World
If we assume that the frontier of automation as defined above continues to improve exponentially due to advances in AI, and scenario 2, 3, or some other bounded-distribution timeline occurs, we are faced with a fundamental question: How does this impact society and our economies?
Let's imagine a world where we see this AI revolution coming, but we follow suit with the satirical yet unnervingly accurate Don't Look Up - where would we be if competing interests left us incapable of passing policy or moving forward with a solution for the masses who lose their jobs?
This will happen in two ways - gradually, then suddenly. First, we'll see unemployment slowly start to tick up, and people will have difficulty finding jobs in their fields. The positions hit first will be those composed of cognitive tasks that the average current AI model can perform. While managers and employees don't want to see human colleagues out of a job, a slowdown in the economy will eventually make this inevitable, especially considering public companies' shareholder primacy, which imposes a duty upon boards of directors to put the interests of their shareholders above all others. In an economic recession, cost-cutting initiatives will undoubtedly look to automation and AI to replace human labor, which is bound to be significantly cheaper.
Once some companies start to do this, all competing companies must follow, as shareholders will want the company to retain and increase its value, and staying competitive requires similar initiatives. Assuming we are in a task-bounded world and AGI emerges alongside robotics as dexterous as humans (not too far away), this replacement of human labor will happen quite suddenly.
What happens to the salaries that once sustained those humans in their jobs? That money is now going into the pockets of the companies who developed the AI tools now operating in those positions!
This means that those who lose jobs will no longer have money to sustain themselves, and looking for a new job in that domain is no longer an option, considering that AI now performs the work! So what happens to them? They need a way to generate income, but they can no longer perform a service that pays because AI does it. This is a world where the companies operating the best AI models rule, receiving the value once distributed to employees.
However, the world will be abundant, and the poverty floor will be significantly raised. Suppose nearly every step of a supply chain is automated, and 99% of all companies that generate products and services don't need to pay employees. In that case, the cost of basic human necessities will be close to 0!
The cost of shelter, food, clothing, and medical care will be so low that you may only need a little money to get those things easily! The world will be a beautiful place in terms of output, but I think we need to get the wage part correct, at least for the transitionary period (I'm not entirely sure what role money plays in a post-AGI world) - and this is where the importance of a new value distribution and governance model applies.
There is a better way, and we need to get this right and manage the transition so we can all benefit!
What is the best way to manage the transition? David Shapiro, an AI thought leader, presents a compelling case for this throughout his YouTube channel, specifically in his video titled "How do we get to UBI and Post-Labor Economics? Decentralized Ownership: The New Social Contract."
David Shapiro outlines a transition to post-labor economics due to AI advancements, predicting four phases leading to widespread job displacement and shorter work weeks. He champions decentralized ownership via DAOs and stakeholder capitalism for fair wealth distribution, aligning with a reimagined social contract and evolving government role. I was intrigued to find someone else thinking about managing this transition using decentralized tools and means.
We can use blockchain to develop a uniquely decentralized, secure, and transparent ownership structure for every individual affected by an AI job market takeover - let’s talk about blockchain’s role in all of this.
How Blockchain Can Fix This
Blockchain and AI have more than one synergy, such as providing transparency over the data AI models are trained on. However, for the sake of this piece, I will focus specifically on the distribution of funds and governance.
Succinctly, the problem I have been teasing is that we don't have a clean way to distribute money to individuals to at least maintain the standard of living they had before losing their jobs.
There are other options for those of you who recoil at the idea of blockchain, all of which require placing our trust in the government, the company deploying the AI, or a third-party auditor.
We can
Introduce a robotics/AI tax, where any company using an AI is taxed and the proceeds fund a pool that distributes a universal basic income.
Write federal policy to distribute company ownership among those who lose their jobs.
These are not awful solutions - they are at least attempts at solutions. Solution 1 effectively achieves what I am trying to solve with blockchain, but the distribution runs through traditional channels and the government; I have a hard time trusting that over time when we can embed the trust in code instead. Solution 2 would likely involve a public-private partnership and various audit checkpoints to ensure the company meets its obligations.
We would be back to the old way of doing accountability checks through audits, which are not real-time and have their faults. This solution also carries a technical problem: the data and information would sit in a database owned by that specific AGI-creating company. I'm not opposed to these options, but I think a better one exists.
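For concreteness, solution 1 can be sketched in a few lines. Everything here is a made-up illustration - the tax rate, the payroll-savings figures, and the assumption that savings can even be attributed to AI cleanly - and the sketch itself shows the weakness: some central authority must compute and distribute this honestly.

```python
# Hypothetical sketch of "solution 1": tax each company's AI-attributed
# labor savings and split the resulting pool equally as a universal
# basic income. All names, rates, and figures are invented for
# illustration only.

def ubi_payout(companies, tax_rate, population):
    """Pool the taxed AI labor savings and divide equally per person."""
    pool = sum(c["ai_labor_savings"] * tax_rate for c in companies)
    return pool / population

companies = [
    {"name": "AcmeAI",   "ai_labor_savings": 50_000_000},
    {"name": "RoboCorp", "ai_labor_savings": 120_000_000},
]

# A 40% tax on $170M of displaced payroll, spread over 100,000 people:
payout = ubi_payout(companies, tax_rate=0.40, population=100_000)
print(f"annual UBI per person: ${payout:,.0f}")  # $680 per person
```

Note that whoever runs this function - a tax authority, in practice - must be trusted to report the inputs and deliver the outputs, which is precisely the trust I'd rather embed in code.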
This solution was brought to life on January 3, 2009: blockchain.
In Frontier Letter #3, an exploration into the role blockchain technology will play in the future of our world, I state: "A blockchain is simply a list of transactions [a ledger] that is maintained by anybody who chooses to preserve the integrity of everything that is stored and processed by that ledger. Traditionally, the list is in a database, and the database owner has oversight and final say over the information in the list…Blockchains made it possible for individuals to now collectively maintain the list securely so that they could receive the security benefits of a bank maintaining this list, but you actually have full and complete control over your money (given that you hold it in a crypto wallet)."
This metaphor extends to any company that holds data in its database that you would prefer full autonomy and decision-making power over - including artificial intelligence companies that develop AGI, which will likely perform your job better than you at some point.
As mentioned, we need to get the value and governance right; here's what that means and how blockchain delivers:
Value - when I say value, I mean monetary value in the distribution of funds. Companies do this with stocks; blockchains do this with tokens. The value distribution of stocks is typically heavily weighted towards founders, board members, and large investors - appropriate as designed, but a poor design in the case of AGI taking jobs. Blockchain allows this distribution to be decentralized and automatically delivered to individuals through smart contracts.
Governance - In the same way a token is assigned a value, it can also carry voting rights with specific hard-coded rules about how votes are weighted, giving us the flexibility to grant those more knowledgeable about AGI greater 'steering power.'
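The two mechanisms above can be sketched in ordinary Python rather than an actual on-chain smart contract. The supply, the equal-split rule, and the expert weighting below are all my illustrative assumptions, not a proposal for the real parameters:

```python
# Hedged sketch of token-based value distribution and weighted
# governance. Plain Python standing in for smart-contract logic;
# amounts, names, and the weighting rule are assumptions.

def distribute_tokens(total_supply, recipients):
    """Split a token supply equally among recipients -- the kind of
    rule a smart contract could enforce automatically."""
    share = total_supply / len(recipients)
    return {addr: share for addr in recipients}

def tally_votes(votes, weights):
    """Weighted governance vote: each holder's choice counts in
    proportion to an assigned weight (e.g. extra 'steering power'
    for AI experts). Returns the winning choice."""
    totals = {}
    for addr, choice in votes.items():
        totals[choice] = totals.get(choice, 0) + weights.get(addr, 1)
    return max(totals, key=totals.get)

holders = distribute_tokens(1_000_000, ["alice", "bob", "carol"])
weights = {"alice": 1, "bob": 1, "carol": 3}   # carol: hypothetical AI expert
result = tally_votes(
    {"alice": "pause", "bob": "pause", "carol": "deploy"}, weights
)
print(result)  # "deploy" -- carol's weighted vote outweighs the other two
```

The interesting design question is exactly the one raised above: how the weights are assigned, and by whom, determines whether this is democratic governance or a council with extra steps.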
I don't know the exact framework for how this gets set up, but some blockchain projects are already attempting it, such as SingularityNET. SingularityNET is a marketplace where AIs interact with one another by calling each other's APIs when they need a service performed by another 'specialist' - essentially an AI market. This is very similar to what I'm discussing, so it is an interesting case to study, though I won't do that in this piece.
My general impression is that SingularityNET's thinking is similar to what I've laid out in this piece. I need to take some time to think more deeply about the token distribution - it's difficult to determine what level of centralization is necessary, if any! Thankfully, blockchain allows us to experiment with different parameters in public.
The critical piece SingularityNET got right is the choice of blockchain: Cardano. Cardano is poised to be the most decentralized, secure, sustainable, and scalable programmable blockchain protocol, which is paramount for AGI deployment - otherwise, it becomes a company with extra steps. Fundamentally, the blockchain carrying the AGI needs to be manipulation-proof and extraordinarily secure, which I think Cardano has the highest likelihood of being.
There are many reasons for "Why Cardano?" as explained in Frontier Letter #5, and I think that with the introduction of partner chains, as laid out in Frontier Letter #7, a new paradigm makes the deployment of blockchain AGI more reasonable than previously allowed.
We can deploy a similar concept to SingularityNET, but as a service layer, given that extraordinary levels of computing are required to operate AI.
The deployment infrastructure can make trade-offs to favor AI deployment. What's key is that Minotaur would provide joint consensus, allowing the service layer to outsource security and decentralization from Cardano, along with a dual-token model to split the tokens' functions.
Token 1 (T1) would be the fuel token; this can be set to a tokenomics model, which favors stable pricing and is what's used by the AGIs to communicate with one another and make payments in a marketplace.
Token 2 (T2) would be the token deployed on the Cardano base layer concerned with ownership and governance.
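A minimal sketch of that T1/T2 split, with all names, balances, and prices invented: the point is only that T1 moves when AIs pay each other for services in the marketplace, while T2 never moves there and exists solely for ownership and governance.

```python
# Illustrative sketch of the dual-token model. Balances, names, and
# prices are assumptions; the structure shows the separation of
# concerns: T1 is marketplace fuel, T2 is base-layer ownership.

class Marketplace:
    def __init__(self, t1_balances, t2_balances):
        self.t1 = dict(t1_balances)   # fuel token: service payments
        self.t2 = dict(t2_balances)   # base-layer token: ownership/governance

    def pay_for_service(self, buyer, specialist, price):
        """One AI pays another in T1 for a task; T2 is untouched."""
        if self.t1[buyer] < price:
            raise ValueError("insufficient T1 fuel")
        self.t1[buyer] -= price
        self.t1[specialist] += price

m = Marketplace(
    t1_balances={"vision_ai": 100, "language_ai": 100},
    t2_balances={"vision_ai": 10, "language_ai": 10},
)
m.pay_for_service("vision_ai", "language_ai", price=25)
print(m.t1)  # {'vision_ai': 75, 'language_ai': 125}
print(m.t2)  # unchanged: {'vision_ai': 10, 'language_ai': 10}
```

Keeping T2 out of the marketplace is what lets its tokenomics prioritize stable, broad ownership while T1's can prioritize stable pricing for machine-to-machine payments.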
I will save going deeper on the specific deployment for a future piece after I've had time to chat with experts in the Cardano service layer and AI deployment realm.
Pictorially, it would look something like this:
The token distribution and voting specifics need a broader discussion. Is there a council with a heavier weighting of votes? Should the company that develops the AGI get 10% of the tokens, with the rest of the population receiving an airdrop of the remaining 90%? Should it be 80/20? How do we ensure that no one creates duplicate wallets to receive more than their fair share? How do we verify that someone is an actual human - something like the Worldcoin orb?
So many more societal, moral, and technical questions remain, but we need to think through these things to arrive at the beautiful world of abundance we can experience if we land the AGI plane correctly.
Conclusion
I don't think we should slow the pace of AGI progress. It will be a beautiful feat of humanity that creates a world of such abundance that the humans living in that dream world will probably look back on us today in utter awe at the effort required from us to obtain the necessities of life.
To bring this dream world to life, we need the infrastructure to transition from the present to the dream. That required infrastructure is humanity's involvement, to some capacity, in the AGI's value accrual and decision-making.
As far as I can see, the current best way to make this deployment is as a blockchain - specifically a service layer on Cardano - that allows humanity to participate in receiving funds and voting on where we steer the wheel as we approach AGI.
I am not an AI expert; I am an explorer trying to make sense of information at the edge of understanding. That means I may not get everything right, so it's important that you comment, share, and subscribe so we can have this conversation as a community - bringing in as many voices as possible.
I hope you all enjoyed it, and I look forward to hearing from you all. Please share with your friends, family, and coworkers who would love to discuss this!
Have a beautiful next few weeks; I hope to chat soon!