- Anthropic is investigating a vendor breach after a Discord-linked group accessed its Claude Mythos AI model, with no evidence of impact on core systems.
- OpenAI unveils GPT-5.4-Cyber, a cybersecurity-focused model built to help defenders analyze malware and fix software bugs. The company is also expanding its Trusted Access for Cyber (TAC) program to thousands of verified experts.
- A fake Claude AI installer impersonating Anthropic is spreading PlugX malware on Windows, using DLL sideloading to gain persistent remote access to infected systems.
- A lone hacker used Claude Code and GPT-4.1 to exfiltrate hundreds of millions of Mexican citizens' records from nine government agencies.
- LayerX researchers have discovered how to bypass Claude Code's safety rules using the CLAUDE.md file. This exploit allows…
- Human error exposed more than 512,000 lines of Anthropic's Claude Code source, revealing KAIROS and Capybara secrets and pushing users to switch to the Native Installer.
- Researchers detail "Claudy Day" flaws in Claude AI that could enable data theft using fake Google Ads, hidden…
- Cybersecurity researchers at 7AI have revealed a new "Claude Fraud" campaign in which hackers use fake AI extensions and Google ads to steal data from tech professionals.
- Anthropic claims Chinese AI firms distilled Claude to train rival AI models, raising concerns about model extraction, security risks, and AI distillation abuse.


