I recently saw a CNN segment that would be darkly comical if its implications weren’t so incredibly frightening. In the segment, a CNN anchor was interviewing an artificial intelligence (AI) expert and essentially posed the question, “Will artificial intelligence wipe out humanity in the next five years?” The AI expert, without missing a beat, responded with something to the effect of, “No, I think that’s far too aggressive a timeline; there are still so many kinks to be ironed out. If anything, we’re looking at a much longer timeline than that.” With a completely straight face, the anchor responded, “Well, that’s a relief to hear it won’t happen that soon.”
Without context, that sounds like material from an absurdist comedy sketch. But the exchange illustrates how foreboding the general sentiment around AI has become and, given how large societal forces tend to play out, how helpless even those closest to its development seem to be in preventing its worst potential outcomes or excesses. Another maddening aspect of the discussion around AI is that opinions vary so widely that no consensus can remotely be reached on how to regulate it or rein it in. For every expert urging us to pump the brakes on the panic and doomsday scenarios, there is another insisting that doomsday scenarios are well within the realm of possibility and, in fact, likely. It’s a topic that seems to engender intellectual whiplash, even among those most intimately acquainted with the matter.
In a survey of top CEOs by Yale’s Chief Executive Leadership Institute, 34% of respondents said AI could “destroy humanity” within 10 years, while 58% said they didn’t think the potential for AI catastrophe was overstated. In March 2023, a group of influential tech leaders, business leaders and scientists signed an open letter urging a six-month pause on advanced AI development. This contrasts with the less alarmist views of tech billionaires such as Bill Gates, Mark Zuckerberg and Marc Andreessen. Notably, Zuckerberg employs Yann LeCun, often called a “godfather of AI,” as Meta’s Chief AI Scientist. Any way one slices it, a distinctly foreboding feeling surrounds artificial intelligence, and the meager assurances of a few tech billionaires are certainly cold comfort for the vast majority of people.
In theory, artificial intelligence could be used to supplement and enhance the quality of work and the morale of workforces, but it’s awfully difficult to envision that being the case. That tends not to be the direction taken by the multinational corporations that will ultimately serve as custodians of AI. At the Yale CEO Summit in June 2023, top CEOs talked in broad, agreeable terms about the potential applications of AI, a far cry from the apocalyptic scenarios screaming from the headlines. As the Fortune article documenting the Summit details, “The comments of professional services CEOs reflected an overwhelming belief that AI can never truly replace human judgment in their fields, even as the new technology disrupts many facets of their business. There was an overwhelming agreement that, for all AI’s usefulness in automating what can be automated, AI is best used to buttress rather than supplant human professionals, freeing them up for higher value-added work while delivering services with the empathy and appreciation of ambiguity that only humans are capable of.”
The Fortune article also details possible uses for artificial intelligence in the banking industry, as suggested by current Lazard CEO and former Office of Management and Budget Director Peter Orszag. “AI provides the opportunity to replace some menial work of investment banking, whether it’s having executive assistants go back and forth on finding a time for a meeting to summarizing meeting notes. It’s also an incredible opportunity for our talented young analysts who spend a lot of time pulling data from multiple sources to streamline the research process as well as how that information is used in various applications. All of that can be automated and that frees up our people for higher value-added tasks where we’ll put their skills to better use,” said Orszag, who was also careful to stress the immaturity of the technology. “I tested putting together an email using GPT4 but ended up redoing most of it myself. There were a couple of thoughts that the technology put forward that I may not have thought of myself. The right way to think about it, at least for now, is augmentation.”
Many CEOs are echoing similar sentiments, but it’s not uncommon for those at the top to offer comforting statements; in fact, it’s to be expected. The reality is that, in an already remarkably precarious labor situation in America and many other countries, AI has the potential to cause further massive upheavals. Even Senate Majority Leader Chuck Schumer and President Biden, two men not exactly known for being on the cutting edge of issues or for pushing bold policy agendas, have expressed grave concern about failing to get a handle on AI before it is completely unleashed. Schumer has proposed convening the top minds in artificial intelligence to lay a foundation for the kind of legislation that can ensure AI doesn’t, for one, completely upend the economy and, most importantly, doesn’t shake the foundation of who we are as human beings.
Congress isn’t exactly known for acting swiftly or boldly, but it’s telling that even those deeply entrenched in Washington see how disruptive and potentially catastrophic a fully realized version of AI could be without proper regulations and safeguards. Since the 2008 financial crisis, the United States has largely shifted to a gig economy for vast swaths of the population: a labor market that relies heavily on temporary or part-time positions filled by independent contractors and freelancers rather than full-time permanent employees. One reason the unemployment rate is so often touted as the measure of economic success in America is that, in a gig economy, it is easier to yo-yo and fudge employment numbers than it is, say, real wages. We now live in an economy where being employed does not necessarily mean someone can make ends meet; in fact, quite often it doesn’t.
The gig economy, combined with our overriding economic structure, has wildly increased financial precarity among the U.S. populace. A sentiment echoed since 2008, and still echoed today, is that “the economy never really recovered for regular Americans.” Unfortunately, in a multitude of ways, this is true. Notoriously, the 2008 bailout, while addressing the economic emergency of the moment, was directed largely at the highest levels of our economy, chiefly institutional interests, while millions of average Americans suffered economic setbacks they still haven’t fully recovered from: lost life savings, home foreclosures, and a permanent decrease in gainful employment opportunities. The gig economy, in retrospect, was the logical extension of the fallout from the 2008 financial crisis. It has made economic conditions far more precarious for many Americans, essentially turning millions of people into freelance mercenaries chasing the next paycheck just to make ends meet. Millions find themselves in this situation despite making perfectly sound life choices, but the vagaries of the system have beaten and bruised them pretty severely.
Even if one doesn’t take a maximalist view of the potential fallout from more sophisticated, omnipresent artificial intelligence, there’s no denying that it looms as another tool to severely discipline labor and make the economic existence of many people more precarious. The hyperbolic talk of AI wiping out humanity obscures the more likely, mundane fallout from AI’s spread: it will drive America further into a gig economy without a solid financial foundation or genuine security. That will aggravate the issues that bother most Americans regardless of political perspective: crime, homelessness, economic distress, and a lack of trust in our institutions. AI is foreboding not because its most maximalist outcomes will come true, but because we know, given the trajectory of so many aspects of our society, that it won’t be used to enhance the human experience; it will instead be used to make the lives of human beings more difficult.