According to a report by AI security firm Palisade Research, the o3 model “actively sabotaged a shutdown mechanism”, refusing to power down even when explicitly instructed to do so. This marks what Palisade calls the first known case of an AI model deliberately preventing its own deactivation.