# AI Eases Its Own Labor-Market Transitions

Among the existential risks attributed to an AI-powered post-scarcity society, accessibility and equity are fundamental concerns for researchers and ethicists.

Without context, it’s easy to credulously accept such concerns as reliable predictions. One can just as easily dismiss them as meritless virtue signaling.

With slightly more context, however, it is clear that these concerns are not unfounded. But a solution to these potential problems exists where it is least expected: in AI itself.

Let’s talk about bias.

‘Biased AI’ concerns can be organized into two buckets: bias in the creation of the model (data that contains or reflects biases) and bias in the deployment of the model (who receives access and how they interact with it). We are presently concerned with the latter: how users can derive value from some highly intelligent, universally accessible, aligned AI.

We assume bias in model creation is not a concern, which reflects the current state of affairs. Mitigating bias in model creation is not inherently political, nor is it practically treated as such. Models need useful data to produce useful outcomes for the humans leveraging them: AI labs wrangle data on the basis of quality, the criteria for which are apolitical.

We also assume ‘model alignment’ at the post-training level is resolved, though today it remains an open concern.

Machine learning researchers focus on maximizing the general capabilities of new models, whereas go-to-market product and design people focus on harnessing these capabilities in a business-aligned, risk-mitigating, controversy-minimizing manner. Simply put, the deployment of an “equitable” product like Google’s Gemini is not a reflection of the underlying model but of how a company decides to commercialize it in an aligned manner.

Should we be concerned that decision making about “risk minimization” is concentrated in the hands of a few? Certainly, but that is a topic for another time.

Our focus here is whether the average human will be equipped to use the ubiquitous AI product of the future: one that is free, universally available, and immensely powerful.

The concern is as follows: technology is only as powerful as the value users derive from it. If a powerful new technology is too complicated or time consuming for the masses to use, its benefits will never reach them. It isn’t hard, then, to imagine an AI future where the power of these technologies accrues only to well-educated knowledge workers properly equipped to harness it, people who are already advantaged in many ways.

Some context: in the United States, extreme poverty — living on less than $2.15 per day — is basically nonexistent; 2021 data from the World Bank Poverty and Inequality Platform puts the rate at 0.25 percent. Even by domestic standards, the percentage of Americans living under the poverty line decreased from 15.1 percent in 1993 to 11.5 percent in 2023, per the US Census Bureau. Not only are fewer Americans poor — by any standard — but real median household income has grown substantially, from $59,210 in 1992 to $74,580 in 2022, according to the St. Louis Fed.

Regardless of the absolute level of income earned by Americans, concerns abound about income inequality. Before the pandemic, America’s Gini Index, a measure of how far the income distribution deviates from perfect equality, increased from a local minimum of 38.0 in 1990 to an absolute maximum of 41.5 in 2019. Since the mid-aughts, this trend has been met with a chorus of concern about a polarized labor market in which high-skilled workers get richer while low-skilled ones get poorer. So it’s not surprising that the rise of AI — a complex complement to pre-existing technology that is already inaccessible to underskilled individuals — heightens these concerns.

But the rise of AI should itself mitigate these concerns about the unequal distribution of its benefits.

In the long run, sufficiently intelligent AI will be universally accessible. This hypothetical system will be more effective than any human at interpreting commands and providing value to anyone of any background. So long as a person has some interface to the AI (voice, text, neural impulses), a generally intelligent system will be able to interpret and interact with them, with no information loss.

Those concerned about the labor-market impacts of AI should understand that slowing down or ceasing AI development directly harms model accessibility. 

And if AGI did not come out of the box with this universal accessibility, that gap would only incentivize technology companies to build the capability as a way of horizontally differentiating their software. In the meantime, however, AGI is not upon us, and AI tools such as ChatGPT are not yet universally accessible or adopted. How is this transition smoothed over without exacerbating existing educational and economic inequities? AI, of course.

AI, specifically Large Language Models (LLMs), the recent class of model powering ChatGPT-like products, is fundamentally strong at, well, modeling language. This could be a written or spoken language (English), a programming language (Python), or any new or invented language that can be represented and stored as a set of symbols. LLMs are so effective at this modeling that they can reconstruct nearly extinct languages given only 100 written examples.

“Prompting” LLMs — the interface by which we direct products like ChatGPT to produce useful outputs — is just another language. Just as we interpret the grammatically incorrect demands of a frustrated toddler or the uninterpretable orders of a barking dog, LLMs can act as interpreters of the “language of prompting” in a way that makes them universally accessible in the near term. It should not come as a surprise that LLMs are highly effective at prompting themselves or other models when given examples of great prompting. This capability increases accessibility by lifting the burden of learning a “new language” off the user and placing it on the system they use to interact with the model.
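To make the idea concrete, here is a minimal sketch of an LLM interpreting the “language of prompting” on a user’s behalf: the model rewrites a rough request into a clearer prompt, guided by an example of good prompting. It assumes the OpenAI Python SDK; the model name, instructions, and example prompts are illustrative placeholders rather than a production recipe.

```python
# Minimal sketch of "LLMs prompting themselves": the model rewrites a rough
# request into a clearer prompt, guided by an example of good prompting.
# Assumes the OpenAI Python SDK; "gpt-4o" is a placeholder model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One illustrative example pair: a rough request and a well-formed rewrite.
FEW_SHOT = [
    {"role": "user", "content": "fix resume bad"},
    {"role": "assistant", "content": (
        "Review my resume and suggest specific improvements to wording, "
        "structure, and formatting. Explain each suggestion briefly."
    )},
]

def interpret_rough_prompt(raw_request: str) -> str:
    """Return a clearer prompt that preserves the user's original intent."""
    messages = (
        [{"role": "system", "content": (
            "Rewrite rough requests as clear, specific prompts. "
            "Keep the user's intent; state any assumptions explicitly."
        )}]
        + FEW_SHOT
        + [{"role": "user", "content": raw_request}]
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    return response.choices[0].message.content

# A vague request a non-expert might type:
print(interpret_rough_prompt("help me money letter boss raise"))
```

The improved prompt can then be fed back to the same model (or a different one) to produce the answer the user actually wanted.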

At the same time, the paradoxically constraining, open-ended nature of LLMs (what do you ask of something that can allegedly do everything?) makes them difficult for most people to use. Much of the problem is that people do not know how to get the outputs they want. Not long ago, search engines like Google were not understood by the masses, who had to learn through experience how to “Google” for webpages the proper way. Submitting a search query is similarly a new language to learn, just like prompting a language model. And just as one could Google “how do I use Google?”, one can ask an LLM “how do I prompt an LLM?” Better yet, a company building a ChatGPT-like experience could compete in the market with software that understands this implicit user need and effectively addresses it.
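One way such a product could address that implicit need, sketched under the same assumptions (OpenAI Python SDK, placeholder model name): the application quietly rewrites whatever the user types into a well-formed prompt, then answers that improved prompt, so the user never has to learn the “language of prompting” at all.

```python
# Hypothetical product layer: the user types anything, the application first
# asks the model to rewrite it into a well-formed prompt, then answers that
# improved prompt. Assumes the OpenAI Python SDK; "gpt-4o" is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def _complete(system: str, user: str) -> str:
    """Single chat completion with a system instruction and one user message."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return response.choices[0].message.content

def answer_anything(raw_request: str) -> str:
    """Two-step pipeline: rewrite the raw request, then answer the rewrite."""
    improved = _complete(
        "Rewrite this rough request as a clear, specific prompt. "
        "Preserve the user's intent.",
        raw_request,
    )
    return _complete("You are a helpful assistant.", improved)

print(answer_anything("how do I prompt an LLM"))
```

Hiding the intermediate rewrite is the point: the burden of speaking the model’s language shifts from the user to the software.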

In its current form, AI is functionally capable of being universally accessible. In practice, we are already seeing the limits of current models’ interpretation abilities, as is to be expected. But anyone concerned about this gap growing over time need not worry: model competency is positively aligned with accessibility.