(The Epoch Times)—Whether it’s driving a car or summarizing a doctor’s appointment, autonomous artificial intelligence (AI) systems can make decisions that cause real harm, rapidly changing the landscape of liability.
Attorneys and AI developers say U.S. laws must keep up with the technology as debate persists over who’s responsible when things go wrong.
Lawmakers are looking to close the accountability gap by shifting burdens and expanding who can be held accountable when autonomous AI systems fail. Unlike non-autonomous AI systems, autonomous models are more prone to unpredictable behavior.
In the United States, a legal patchwork is slowly forming. In 2024, Colorado passed a law, Consumer Protections for Artificial Intelligence, requiring those deploying “high-risk” AI systems to protect consumers from “reasonably foreseeable risks” starting Feb. 1, 2026.
Since 2023, New York City has enforced a law that prohibits employers and employment agencies from using automated employment decision tools unless the tools have undergone a bias audit within one year of their use. The results of the audit must be made public.
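In practice, such audits typically report selection rates and impact ratios by demographic group, that is, how often the tool advances candidates from each group relative to the most-favored group. Below is a minimal sketch of that arithmetic in Python, with hypothetical numbers and group labels; it illustrates the general idea, not the city's official audit methodology.

```python
# Minimal sketch of an impact-ratio calculation of the kind used in
# bias audits of automated employment decision tools.
# Hypothetical data; not the official New York City audit procedure.

# Hypothetical outcomes per group: (applicants screened, applicants advanced).
outcomes = {
    "group_a": (200, 90),
    "group_b": (180, 54),
    "group_c": (150, 60),
}

# Selection rate: share of each group's applicants the tool advanced.
selection_rates = {
    group: advanced / screened
    for group, (screened, advanced) in outcomes.items()
}

# Impact ratio: each group's selection rate relative to the highest rate.
best_rate = max(selection_rates.values())
impact_ratios = {group: rate / best_rate for group, rate in selection_rates.items()}

for group in outcomes:
    print(f"{group}: selection rate {selection_rates[group]:.2f}, "
          f"impact ratio {impact_ratios[group]:.2f}")
```

An impact ratio well below 1.0 for any group is the kind of disparity such an audit is meant to surface.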
There is presently no federal legal framework that clearly assigns accountability when autonomous AI systems fail, prompting some legal experts to say there must be greater transparency.
“For centuries, legal frameworks for assigning liability have relied on well-established principles designed for a human-centric world,” Pavel Kolmogorov, founder and managing attorney at Kolmogorov Law, told The Epoch Times.
Kolmogorov said that negligence cases require proof of a breach of a “duty of care,” while product liability holds manufacturers responsible for defects or design flaws. However, both scenarios assume clear human oversight and relatively static, predictable tools.
“Autonomous AI systems fundamentally disrupt this paradigm,” Kolmogorov said. “Their defining characteristics—complexity, operational autonomy, and the capacity for continuous learning—create profound challenges for applying these traditional legal concepts.”
He also said AI’s “black box” problem, where even developers can’t fully explain the specific reasoning behind an AI’s decision, makes it extraordinarily difficult to pinpoint a specific breach or defect in the traditional sense.
Kolmogorov gave an example of a legal quagmire: “When an autonomous vehicle makes a fatal error, was it due to a flaw in its original code, a limitation in its training data, an unpredictable emergent behavior learned over time, or some combination thereof?”
Autonomy in Action
The idea of AI-driven cars running people down in the streets is no longer a sci-fi concept. The landmark 2018 case involving a self-driving Uber vehicle that struck and killed a pedestrian in Tempe, Arizona, was the first recorded pedestrian fatality involving a fully autonomous vehicle. The human backup driver behind the wheel was ultimately charged with negligent homicide.
This was far from an isolated incident. Between 2019 and 2024, there were 3,946 autonomous vehicle accidents, according to the Craft Law Firm. Of these cases, 10 percent caused injury and 2 percent resulted in fatalities.
“Right now, the law still treats autonomous driving systems under traditional negligence and product liability principles,” a representative for Valiente Mott Injury Attorneys told The Epoch Times.
“If a driver is expected to monitor the system and fails to intervene, they can be held responsible for negligence. But if the technology itself is defective or marketed in a misleading way, the manufacturer may face liability. In many cases, fault can be shared. We’re essentially applying old legal standards to new technology until the law catches up.”
The representative added that current laws were written with human drivers in mind.
“As autonomy increases, legislatures and courts will need to define how responsibility is allocated between human operators, manufacturers, and possibly even software developers.”
This ambiguity leads to what Kolmogorov called “responsibility fragmentation.”
“Unlike a simple tool with a single manufacturer and operator, an AI system is the product of a long and complex supply chain,” he said. “When a failure occurs, attributing liability becomes an exercise in untangling a dense web of dependencies, making it difficult for a victim to identify the appropriate defendant.”
This autonomous AI supply chain can include data suppliers, software developers, hardware manufacturers, system integrators, and end users, each contributing to the final product.
Kolmogorov noted that the driver bore the criminal responsibility in the 2018 Uber case, but on the civil side, legal experts said Uber had significant liability exposure under both negligence and product liability theories.
“The case exposed the split between criminal versus civil standards. The former requires intent or recklessness, while the latter hinges on design and testing failures,” Kolmogorov said.
Similarly, Tesla’s Autopilot has been tied to multiple crashes, including a 2019 fatality in Florida. This August, a jury found Tesla partially liable, saying its AI system contributed to the accident alongside driver negligence.
Autonomous AI systems with the ability to cause harm aren’t limited to self-driving cars. Mostly autonomous “agentic” AI models are being integrated into nearly every sector of the United States, from health care to manufacturing, logistics, software, and the military.
Avoiding Hazards
Researchers at IBM have called 2025 the year of the AI agent. Broadly, agentic AI refers to mostly autonomous systems that can act independently to achieve goals with minimal human oversight.
Some AI experts, including David Talby, CEO of John Snow Labs and chief technology officer at Pacific AI, believe AI agents will increasingly affect people’s quality of life as models advance and become more independent of human involvement.
“Health care stands out as one of the most demanding domains. Unlike consumer applications or even some enterprise use cases, AI in health care directly impacts people’s lives and well-being,” he told The Epoch Times.
Talby said many autonomous AI systems already exist in health care, including digital health applications that interact directly with patients, clinical decision support systems, and visit summarization tools that work alongside doctors.
“These systems can independently process complex medical data, draft clinical notes, or guide patients in self-care, but it’s important this is always under a human-in-the-loop framework of accountability,” he said. “While parts of the workflow are fully automated, human oversight is still needed in health care and beyond.”
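In software terms, a human-in-the-loop framework often takes the form of a review gate: the system may draft output on its own, but nothing enters the record until a person signs off. The sketch below illustrates that pattern under that assumption; the class and function names are hypothetical and do not reflect any particular vendor's product.

```python
# Minimal sketch of a human-in-the-loop gate for an AI drafting tool:
# the model's draft is held until a clinician explicitly signs off.
# All names here are hypothetical, not any vendor's actual API.

from dataclasses import dataclass

@dataclass
class DraftNote:
    text: str
    approved: bool = False
    reviewer: str | None = None

def generate_draft(transcript: str) -> DraftNote:
    # Stand-in for a model call that summarizes a patient visit.
    return DraftNote(text=f"Summary of visit: {transcript[:60]}...")

def approve(note: DraftNote, reviewer: str) -> DraftNote:
    # The human review step: only an explicit sign-off releases the note.
    note.approved = True
    note.reviewer = reviewer
    return note

def release(note: DraftNote) -> str:
    # Fail closed: an unreviewed draft raises rather than being released.
    if not note.approved:
        raise PermissionError("Draft has not been reviewed by a clinician.")
    return note.text

draft = generate_draft("Patient reports intermittent chest pain after exercise.")
approve(draft, reviewer="Dr. Example")
print(release(draft))
```

The key design choice is that release() fails closed: an unreviewed draft raises an error instead of silently entering the patient’s chart.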
Talby added that errors can have profound consequences and that accountability must extend beyond accuracy metrics. Issues such as bias in medical datasets, robustness under real-world variability, and adherence to ethical standards all demand what he called “rigorous governance.”
“In health care, our top concerns extend beyond model accuracy. We must ensure that AI systems are not only effective but also safe, transparent, and compliant with regulations,” Talby said.
Kolmogorov said “physicians face dual risks” as AI diagnostic tools become more accurate in health care: “negligence for not using validated AI and negligence for over-relying on flawed recommendations.”
He said patients should be informed when AI is used. “And data representativeness is crucial to avoid bias.”
This year, AI developers from Hugging Face, a machine learning company, published research that advocated against deploying fully autonomous AI agents. The article stated, “Risks to people increase with the autonomy of a system: The more control a user cedes to an AI agent, the more risks to people arise.”
In July, the White House unveiled its AI Action Plan, which outlined infrastructure building, investment, and defense initiatives. However, this plan does not address the legal gray areas that remain as autonomous AI continues expanding its reach.
European lawmakers face similar challenges. The EU currently treats AI as a product under its Product Liability Directive, which extends liability to post-sale changes, such as model updates or new machine-learned behaviors.
“I regularly advise clients in emerging tech and mobility sectors on AI autonomy risk,” Kolmogorov said. “The focus is on mitigating exposure before a failure becomes a legal crisis.”