OpenAI has confirmed that conversations on ChatGPT that indicate a risk of serious physical harm to others may be reviewed by human moderators and, in extreme cases, referred to the police.
The company outlined these measures in a recent blog post explaining how ChatGPT handles sensitive interactions and potential safety risks.
Rules for self-harm and threats to others
OpenAI said ChatGPT is designed to provide empathetic support to users experiencing distress, but stressed that its safeguards distinguish between self-harm and threats to others…