Are conversations with AI chatbots safe? Microsoft uncovers a serious flaw that could expose your personal conversations
Microsoft has warned about a new kind of side-channel attack that can reveal what a user is discussing with an AI chatbot such as ChatGPT or Gemini. The vulnerability, called "Whisper Leak", does not let attackers read the text of the conversation itself, but they can still infer its topic by analysing patterns in the encrypted network traffic.
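To make the traffic-pattern idea concrete, here is a minimal, hypothetical sketch of how a passive observer could try to classify the topic of an encrypted, streamed chatbot response from packet sizes and timing alone, without decrypting anything. The traces, topic labels, and the simple logistic-regression classifier below are all illustrative assumptions for demonstration, not Microsoft's actual tooling or the published Whisper Leak pipeline.

```python
# Illustrative sketch only: a side-channel observer never sees plaintext,
# only metadata such as packet sizes and inter-arrival gaps. If different
# topics produce measurably different traffic shapes, a classifier can
# guess the topic from ciphertext metadata alone.
# All traffic traces here are synthetic; a real attack would capture TLS
# records on the wire.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synthetic_trace(topic: int, n_packets: int = 60) -> np.ndarray:
    """Fake a per-response trace: packet sizes plus inter-arrival gaps.

    The topic shifts the size and timing distributions slightly, which is
    exactly the kind of signal a traffic-analysis classifier exploits.
    """
    sizes = rng.normal(loc=120 + 40 * topic, scale=25, size=n_packets)
    gaps = rng.exponential(scale=0.02 + 0.01 * topic, size=n_packets)
    return np.concatenate([sizes, gaps])

# Build a toy dataset: traces labelled with the topic that produced them.
topics = rng.integers(0, 3, size=600)
X = np.stack([synthetic_trace(t) for t in topics])
y = topics

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# A simple classifier over raw size/timing features; the reported attack
# uses more sophisticated models, but the underlying principle
# (encrypted traffic metadata leaks the topic) is the same.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Topic-inference accuracy on synthetic traces: {clf.score(X_test, y_test):.2f}")
```

The point of the sketch is that nothing in it touches the conversation content: only sizes and timings are used, which is why encryption alone does not close this class of leak.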
In a blog post, Microsoft said that the vulnerability could allow ISPs, governments, or someone on the same Wi-Fi network to learn what a user is discussing with the AI chatbot. The…
