In the fast-moving world of artificial intelligence, a new feature or model seems to launch every day. But one feature almost no one saw coming comes from Anthropic, the maker of the popular AI chatbot Claude. The startup is now giving some of its models the ability to end conversations on Claude as part of its exploratory work on “model welfare.”
“This is an experimental feature, intended only for use by Claude as a last resort in extreme cases of persistently harmful and abusive conversations,” the company…








