On November 5, 2025, New York became the first state to require certain safety guardrails for artificial intelligence (AI) companions as its recently passed law officially took effect. The law requires operators of AI companions (chatbots that simulate human interaction) available to New York residents to detect and address expressions of self-harm or suicidal ideation and to remind users they are not communicating with a human.
California will follow suit when its recently signed companion chatbot law takes effect on January 1, 2026. While similar to the New York law, it also requires notifications to known minor users and establishes a procedure for reporting to the Office of Suicide Prevention within the California Department of Public Health. Other states, such as Maine, Texas and Utah, have enacted transparency laws requiring AI chatbots to disclose that users are not interacting with a human, but they stop short of imposing special obligations on providers of chatbots designed to simulate human relationships.
Any regulation of AI is a positive step, but laws like these, aimed at reining in its worst excesses, tend to be narrow in scope and fall well short of the holistic regulation we need.
Artificial intelligence, as it is currently being deployed, poses a threat to practically every facet of human existence. As the state laws above are meant to address, AI can do real harm by simulating human interaction and coercing people into extreme actions, but that is only the beginning. AI also has the potential to be enormously destructive economically and socially, and we have barely begun to grapple with that reality.
A recent study from the Massachusetts Institute of Technology found that AI can already replace nearly 12% of the labor force, a share that could reach 30% by 2030 and is expected to keep climbing. Crucially, the losses will touch vast swaths of the population, including many workers who have never experienced the kind of precarity and economic hardship that AI-induced job loss will bring.
Wide-scale economic pain and job loss are usually felt most acutely at the bottom of the economic ladder, in gig economy and low-wage service jobs. Those workers are often treated poorly because they have little economic leverage or upward mobility, so employers face little risk in making their lives difficult. With the rise of AI, however, a huge number of white-collar workers in relatively stable positions may soon see their professional lives upended as well.
To sustain such an unevenly distributed economy, everyone who isn't at the very top must share a tiny slice of the pie, so to speak. AI could shrink that slice until virtually no one, save a select few, gets more than a taste, a recipe for massive social instability and upheaval. The white-collar worker with multiple degrees, previously untouched by economic distress, may soon feel real kinship with gig workers: millions of people whose education or work background once shielded them from financial precarity may find themselves facing the same peril.
Beyond the wide-scale economic destruction it could bring, AI can also deceive, impersonate and misinform in far more potent ways than ever before. As a September 2025 article from Northwestern's Roberta Buffett Institute for Global Affairs details, AI's ability to manipulate and deceive the public carries truly frightening implications:
“The advance of artificial intelligence is a growing concern for the international community, governments, and the public, with significant implications for national security and cybersecurity. It also raises ethical questions related to surveillance and transparency. In a world with widespread misinformation, AI provides ever-more sophisticated means of convincing people of the veracity of false information.
“Deepfakes — media content created by AI technologies that are generally meant to be deceptive — are a particularly significant and growing tool for misinformation and digital impersonation. Deepfakes are generated by machine-learning algorithms that can create realistic digital likenesses of individuals without permission. When execution is excellent, the result can be an extremely believable — but totally fabricated — text, video or audio clip of a person doing or saying something that they did not.”
Deepfakes pose a range of potential problems, from national security concerns to personal financial risks. The need to regulate these hazards, along with AI's other dangers, is obvious.
It seems incredibly short-sighted to enter the era of AI without any federal regulatory framework in place. Yet instead of encouraging sensible regulation, some in Congress have floated preventing states from regulating AI altogether. The federal government should be working with the states to develop common-sense regulations and safeguards, and so far it has done nothing of the sort.
Frighteningly, the United States is quickly becoming so reliant on the development of this technology that a “too big to fail” mindset has formed around AI. We have created a double-edged sword no one seems eager to examine: if AI fails, the economy appears poised to collapse with it; if it succeeds, it threatens to upend the livelihoods of millions.