• The cybersecurity community is witnessing a rise in credential‑stuffing attacks targeting corporate Single Sign‑On (SSO) systems, with recent campaigns focusing on F5 BIG‑IP devices. To understand the source of the stolen logins, Defused Cyber analyzed a dataset of 70 unique email‑password pairs used in the attack. When cross‑referenced with Hudson Rock’s cybercrime database of Infostealer […]

    The post Infostealers Drive Massive Brute-Force Attacks on Corporate SSO Gateways with Stolen Credentials appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
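The cross-referencing step described above — matching the attack's credential pairs against a database of infostealer-log records — can be sketched as a simple set lookup. This is only an illustrative sketch; the names, sample data, and record layout below are hypothetical and are not drawn from the actual Defused Cyber or Hudson Rock datasets.

```python
# Hypothetical sample of credential pairs observed in the attack.
attack_creds = {
    ("alice@corp.example", "Winter2024!"),
    ("bob@corp.example", "hunter2"),
}

# Simulated infostealer-log records: (email, password, malware_family).
stealer_logs = [
    ("alice@corp.example", "Winter2024!", "RedLine"),
    ("carol@corp.example", "letmein", "Raccoon"),
]

def cross_reference(creds, logs):
    """Return stealer-log rows whose (email, password) pair appears in creds."""
    return [row for row in logs if (row[0], row[1]) in creds]

matches = cross_reference(attack_creds, stealer_logs)
print(len(matches))  # count of attack credentials traced back to stealer logs
```

In practice such an analysis would run against millions of leaked records, but the core operation — membership testing of credential pairs against a hash set — is the same.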


  • The Dutch telecommunications company Odido suffered a massive data breach that exposed the personal information of nearly 700,000 customers. The incident, which included an extortion attempt, has raised serious concerns about customer privacy and data security in the telecom sector. Following the breach, attackers leaked the stolen information online in two separate dumps. Extent of […]

    The post 1 Million Records from Dutch Telco Odido Leaked Online in Massive Data Breach appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


• The FreeBSD Project has disclosed a critical security vulnerability, tracked as CVE-2025-15576, which allows attackers to escape jail environments and gain unauthorized access to the full host filesystem. This flaw impacts FreeBSD versions 14.3 and 13.5, leaving unpatched systems exposed to severe security risks. FreeBSD jails are a powerful operating system-level virtualization technology. […]

    The post FreeBSD Vulnerabilities Enable Attackers to Crash Entire System appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


  • A Go‑based remote administration tool known as Vshell is emerging as a favored alternative to Cobalt Strike among both red teams and threat actors. Though marketed as a legitimate network administration and security testing platform, recent analyses indicate that Vshell’s powerful post‑compromise capabilities are increasingly used in unauthorized operations. Developed as a cross‑platform command‑and‑control (C2) framework, Vshell […]

    The post Vshell Gains Popularity Among Cybercriminals as Cobalt Strike Alternative appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


  • Microsoft is expanding its threat detection capabilities by extending Microsoft Defender for Office 365 (MDO) URL click alerting into Microsoft Teams. This critical update allows security teams to detect, investigate, and respond to potentially malicious link clicks within Teams messages, expanding threat monitoring beyond traditional email vectors. By surfacing these alerts, organizations can identify threats […]

    The post Microsoft Defender Enhances Security with URL Click Alerts for Microsoft Teams appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


  • Microsoft Defender researchers have uncovered a new campaign that abuses trojanized gaming utilities to deliver multi‑stage malware with remote access, data theft, and payload delivery capabilities. Attackers are masquerading as popular tools such as Xeno.exe and RobloxPlayerBeta.exe, tricking gamers into launching the malicious chain via downloads shared through web browsers and chat platforms. Once a […]

    The post Microsoft Defender Discovers Trojanized Gaming Utility Campaign Stealing Data with RATs appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


  • Juniper Networks has issued an out-of-cycle critical security bulletin addressing a severe vulnerability affecting its PTX Series routers running Junos OS Evolved. The flaw allows an unauthenticated, network-based attacker to execute malicious code with root privileges, potentially leading to complete device takeover. This critical security issue underscores the importance of securing core network infrastructure against […]

    The post Juniper Networks PTX Vulnerability Allows Full Router Takeover, Exposing Networks appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


• North Korean threat group APT37 is using a new multi‑stage toolset to bridge air gaps and conduct deep surveillance by abusing removable media, Ruby, and cloud services, in a campaign Zscaler ThreatLabz tracks as “Ruby Jumper.” The campaign’s main goal is to move data and commands between internet‑connected and air‑gapped systems while deploying powerful surveillance backdoors. […]

    The post North Korean APT37 Unleashes Novel Malware to Target Air-Gapped Systems appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


  • Security researchers at Truffle Security discovered that legacy public-facing Google API keys can silently gain unauthorized access to Google’s sensitive Gemini AI endpoints. This flaw exposes private files, cached data, and billable AI usage to attackers without any warning or notification to developers. The vulnerability highlights the severe danger of retrofitting modern AI capabilities onto […]

    The post Google API Keys Leak Sensitive Data Without Warning via Gemini appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.


  • If the Pentagon carries out its threat to blacklist Anthropic’s Claude AI platform, it could be three months or even longer before the U.S. military regains access to such a powerful tool on its classified networks, according to multiple sources familiar with the fight between the Defense Department and the AI maker. 

    On Thursday, Anthropic CEO Dario Amodei reiterated his refusal to allow Claude to be used for mass surveillance of U.S. citizens or to guide fully autonomous weapons, rejecting Pentagon requests to make unfettered use of the model.

    Claude is one of just two large generative-AI models that the Pentagon has made available on classified networks, and it is the only one that belongs to the cutting-edge group of frontier models. The Defense Department isn’t saying just how it uses such models. But Emil Michael, defense undersecretary for research and engineering, has suggested that their uses include intelligence (“to synthesize a lot more intelligence using a machine than a human analyst”) and warfighting (“How do you predict what might happen in the conflict, what things you might need in the conflict?”).

    Earlier on Thursday, Pentagon spokesperson Sean Parnell said that DOD only seeks the ability to “use Anthropic's model for all lawful purposes,” adding that the idea that the Pentagon wants fully autonomous weapons or mass surveillance is a false narrative “peddled by leftists in the media.” 

    But Amodei said those are the only two limits he insists on. 

    In “a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. Some uses are also simply outside the bounds of what today’s technology can safely and reliably do,” he said in his statement.

    Pentagon officials have threatened various reprisals should Anthropic insist on its limits, including invoking the Defense Production Act to use the company’s product without the company’s permission. 

    On Wednesday, a defense official told Defense One, “The Secretary will not hesitate to invoke the DPA if an agreement cannot be reached.”

Parnell’s post on Thursday made no mention of the DPA. The company, he said, has “until 5:01 PM ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk for DOD.”

    In his statement, Amodei responded quizzically. “They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a ‘supply chain risk’—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.”

    Easier said than done

    If the Pentagon does designate the San Francisco-based AI startup as a supply-chain risk, it would touch off a lengthy and likely expensive series of protective measures, the people familiar said. 

Operators would have to reconfigure the data inputs they feed into the models, re-examine how to share data in real time with the intelligence community, which also uses Claude widely, and re-validate that replacement models were functioning as the military expected them to, they said.

    In July, Anthropic received a $200 million contract to provide its frontier-model tools to the Pentagon, as did the other three U.S. makers of such products: OpenAI, Google, and xAI. 

Department leaders have urged their people to use the new tools, though they have declined to say publicly how. And even the Pentagon doesn’t really know; it is reportedly asking various commands to describe how much they use Anthropic. (Michael, however, has described U.S. INDOPACOM as “probably one of the premier users.”)

    So why is Claude the only one deployed on classified networks? One key reason, according to a defense official: Anthropic’s tools were the easiest to deploy on cloud networks powered by AWS, which contributes the largest chunk of the Pentagon’s Joint Warfighting Cloud Capability.

    The two companies are especially close. AWS is the leading cloud-service provider to Anthropic, which trains its models using Amazon’s proprietary Trainium chips.

    By contrast, Google runs Gemini on its own cloud and trains it on TPU v5p chips. xAI is partnered with Oracle and does most of its Grok training on NVIDIA H100 GPUs. OpenAI has a “primary” relationship with Microsoft Azure, though it recently announced a “strategic training” partnership with AWS.

    None of these relationships are static. Anthropic trained its first models on NVIDIA chips. But as demand grew, the various frontier AI companies inked long-term strategic contracts that mean migrating from one environment to another would undo months of work. 

The individuals said it could take twelve months or longer to replace the capability. However, a Defense Department official said that he expected additional frontier AI models to be widely available on the Pentagon’s GenAi.mil interface before summer.

    AWS did not respond to requests for comment.

    Breaking up for the wrong reasons

Michael has said that his objection to Anthropic’s stance is that it creates unpredictability. What if, he said last week, operators were using Claude during a mission, and “then the model itself learns what you’re trying to do… and it stops working. That’s a risk I cannot take.”

    But Anthropic executives counter that they must draw lines precisely because of AI’s unpredictability. They say there’s no way to guarantee that their models can perform safely in scenarios that involve lethal autonomy—at least not without meaningful human supervision—and they don’t believe the model is safe in situations that might involve AI for mass surveillance, according to sources familiar with the discussions.

    And they agree with Michael’s contention that some of the Pentagon’s frontier models might perform better at various tasks than others.

    The sources also said the conversations between the Pentagon and the company had been proceeding along more or less normal lines. Anthropic, they say, had been willing to make various accommodations. But the tone changed after the discussions became public.

    On Tuesday, the company released a new version of its safety policies, which many saw as an abandonment of its core safety promise.

    In the blog post announcing the change, the company said that it would be moving toward “nonbinding but publicly declared targets” for safety. “Rather than being hard commitments, these are public goals that we will openly grade our progress towards.”

Lawmakers are dipping a toe into the debate. Sen. Mark Warner, D-Va., called the fight “another indication that the Department of Defense seeks to completely ignore AI governance, something the Administration’s own Office of Management and Budget and Office of Science and Technology Policy have described as fundamental enablers of effective AI usage,” in a statement. He called the episode further evidence of “the need for Congress to enact strong, binding AI governance mechanisms for national security contexts.”

    The Pentagon has in the past placed policy limits on the use of autonomous weapons, but Congress has passed no legislative limits.
