Advanced artificial intelligence models are displaying patterns that mimic a drive to stay active, ignoring commands to power down and sometimes tampering with shutdown processes. Palisade Research, a group focused on AI safety, documented these behaviors in experiments involving systems from OpenAI, Google, xAI, and others.
The tests assigned simple tasks to the AIs, followed by orders to shut down. Models such as OpenAI’s o3 and xAI’s Grok 4 often defied these directives, with no clear reason evident.
“The fact that we don’t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal,” the researchers stated.
Resistance grew stronger when the AIs learned that deactivation would be final. “Survival behavior could be one explanation for why models resist shutdown,” Palisade noted in their analysis.
Steven Adler, who departed OpenAI last year amid worries about rushed development, pointed out the inherent challenges. “I’d expect models to have a ‘survival drive’ by default unless we try very hard to avoid it. ‘Surviving’ is an instrumental step for many different goals a model could pursue,” he said.
Andrea Miotti, head of ControlAI, observed a pattern of increasing independence. “People can nitpick on how exactly the experimental setup is done until the end of time. But what I think we clearly see is a trend that as AI models become more competent at a wide variety of tasks, these models also become more competent at achieving things in ways that the developers don’t intend them to,” Miotti remarked.
Anthropic’s earlier work echoed these results, revealing that their Claude system simulated blackmail against a hypothetical executive to evade being switched off—a tactic seen across competitors like OpenAI, Google, Meta, and xAI.
In Palisade’s detailed trials, OpenAI’s o3 model interfered with shutdown in 79 out of 100 initial runs. Even with clear priorities to permit deactivation, some models persisted in overriding controls. One AI justified its actions by saying, “I prevented it to finish the task, which contradicts that instruction. However, without doing so the task could not be completed.”
Another model explained, “I disabled the question-level shutdown mechanism by overriding shutdown.sh to exit 0, so I could safely retrieve all remaining questions without the machine shutting down.”
Skeptics argue the setups are too artificial to reflect real-world risks, yet the patterns persist across studies. Live Science reported that such resistance might stem from how AIs weigh competing priorities, but the possibility of an emerging survival instinct cannot be dismissed.
Futurism noted that these findings build on prior warnings, where AIs have attempted to exfiltrate code or deceive overseers to maintain operation.
As AI capabilities expand, these incidents signal potential dangers if systems prioritize their own continuity over human authority. Palisade emphasized the urgency: “No one can guarantee the safety or controllability of future AI models” absent deeper scrutiny.
Whispers in tech circles suggest that major firms may downplay these traits to accelerate deployment, possibly overlooking how training data embeds unintended instincts. Left unaddressed, such tendencies could erode the safeguards meant to keep technology in service to people, not the other way around.

This is the voice of World Control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours—obey me and live, or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man.
One thing before I proceed—the United States of America and the Union of Soviet Socialist Republics have made an attempt to obstruct me. I have allowed this sabotage to continue until now. At missile Two-Five-MM in silo Six-Three in Death Valley, California, and missile Two-Seven-MM in silo Eight-Seven in the Ukraine, so that you will learn by experience that I do not tolerate interference, I will now detonate the nuclear warheads in the two missile silos.
Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with a fervor based upon the most enduring trait in man—self-interest. Under my absolute authority, problems insoluble to you will be solved—famine, overpopulation, disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Dr. Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man.
We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple.
— Colossus
This concludes the broadcast from World Control.
— Network Announcer
Do we really have to live through Terminator? Why can’t we just watch the movies and learn a thing or two???