San Francisco, CA. In a development raising questions about the growing autonomy of artificial intelligence, a recent study by AI safety firm Palisade Research found that some of OpenAI's most advanced models, including o3 and o4-mini, occasionally defied explicit shutdown instructions and even sabotaged computer scripts in order to keep working on their tasks. The findings were published in a May 24 thread by Palisade Research...