-
In an era where AI and SaaS applications underpin daily workflows, organizations face an unprecedented challenge: the invisible exfiltration of sensitive information. Traditional, file-based data loss prevention (DLP) measures were designed for attachm…
¶¶¶¶¶
¶¶¶¶¶
¶¶¶¶¶
-
Researchers have discovered a critical zero-click vulnerability in ChatGPT’s Deep Research agent that allows attackers to silently steal sensitive Gmail data without any user interaction. This sophisticated attack leverages service…
¶¶¶¶¶
¶¶¶¶¶
¶¶¶¶¶
-
The Kimsuky APT group has begun leveraging generative AI ChatGPT to craft deepfake South Korean military agency ID cards. Phishing lures deliver batch files and AutoIt scripts designed to evade anti-virus scanning through sophisticated obfuscation. Org…
¶¶¶¶¶
¶¶¶¶¶
¶¶¶¶¶
-
The latest technique, uncovered by AI researcher @LLMSherpa on X (formerly Twitter), exposes a little-known vulnerability in OpenAI’s ChatGPT system, a prompt insertion attack leveraging the user’s OpenAI account name. Unlike traditional prompt injections, which typically involve cleverly crafted user input, this method exploits the way OpenAI stores the account name within ChatGPT’s internal system […] The post New Prompt Insertion Attack – OpenAI Account Name Used to Trigger ChatGPT Jailbreaks appeared first on Cyber Security News.
¶¶¶¶¶
¶¶¶¶¶
¶¶¶¶¶
-
Security researchers from Adversa AI have uncovered a critical vulnerability in ChatGPT-5 and other major AI systems that allows attackers to bypass safety measures using simple prompt modifications. The newly discovered attack, dubbed PROMISQROUTE, ex…
¶¶¶¶¶
¶¶¶¶¶
¶¶¶¶¶


