https://www.theregister.com/2026/02/11/ai_caricatures_social_media_bad_security
Security researchers are sounding the alarm on a trending social media activity that could expose millions of users to sophisticated cyberattacks. The viral phenomenon, which involves people asking ChatGPT to create workplace caricatures based on their personal information and then sharing the results publicly, has cybersecurity professionals concerned about potential account takeovers, data theft, and targeted social engineering campaigns.
Security analysts have identified multiple security vulnerabilities associated with the trend, noting that approximately 2.6 million AI-generated caricature images had been posted to Instagram as of early February. These posts often reveal sensitive occupational details including job titles, workplace information, and professional responsibilities. They warns that attackers can leverage publicly available profile information combined with these AI-generated hints to determine users’ email addresses through open-source intelligence gathering and search engine queries. Once identified, victims become targets for credential harvesting schemes and account takeover attempts, potentially granting attackers access to sensitive conversation histories stored in their language model accounts.
The threat extends beyond individual users to their employers, as many workers utilise personal AI accounts for job-related tasks without company oversight. If successfully compromised, these accounts may contain confidential corporate information shared during previous AI interactions. It is recommended that organisations implement comprehensive monitoring of employee AI tool usage and establish governance policies to prevent unauthorised application access to corporate systems. While account takeover represents the most immediate threat requiring moderate technical sophistication, more advanced attacks such as prompt injection remain theoretically possible though unlikely to materialise widely.