• A critical memory corruption vulnerability in vLLM versions 0.10.2 and later allows attackers to achieve remote code execution through the Completions API endpoint by sending maliciously crafted prompt embeddings.

    The vulnerability resides in the tensor deserialization process within vLLM’s entrypoints/renderer.py at line 148.

    When processing user-supplied prompt embeddings, the system loads serialized tensors using torch.load() without adequate validation checks.

    The Vulnerability Explained

    A change introduced in PyTorch 2.8.0 disabled sparse tensor integrity checks by default, creating an attack vector for malicious actors.

    Without proper validation, attackers can craft tensors that bypass internal bounds checks, triggering an out-of-bounds memory write during the to_dense() conversion.

    This memory corruption can cause the vLLM server to crash and potentially enable arbitrary code execution within the server process.

    AttributeDetails
    CVE IDCVE-2025-62164
    SeverityHigh
    CVSS Score8.8/10
    Affected ProductvLLM (pip)
    Affected Versions≥ 0.10.2

    This vulnerability affects all deployments running vLLM as a server, particularly those deserializing untrusted or model-provided payloads.

    Any user with API access can exploit this flaw to achieve denial-of-service conditions and potentially gain remote code execution capabilities.

    The attack requires no special privileges, making it accessible to both authenticated and unauthenticated users, depending on the API configuration.

    Organizations using vLLM in production environments, cloud deployments, or shared infrastructure face significant risk, as successful exploitation could compromise the entire server and adjacent systems.

    The vLLM project has addressed this vulnerability in pull request #27204. Users should immediately upgrade to the patched version.

    As a temporary mitigation, administrators should restrict API access to trusted users only and implement input validation layers that inspect prompt embeddings before they reach the vLLM processing pipeline.

    The vulnerability was discovered and responsibly disclosed by the AXION Security Research Team, highlighting the importance of coordinated vulnerability disclosure in the AI infrastructure ecosystem.

    Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

    The post vLLM Vulnerability Enables Remote Code Execution Via Malicious Payloads appeared first on Cyber Security News.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • Iberia Líneas Aéreas de España has disclosed a significant security incident involving unauthorized access to systems operated by an external service provider. The breach has exposed sensitive personal information belonging to the airline’s customers, including names, email addresses, and Iberia Club loyalty program identification numbers. According to the airline’s official notification, the unauthorized access occurred […]

    The post Iberia Airlines Hit by Data Breach Exposing Customer Personal Details appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • Security researchers have published a proof-of-concept exploit for a critical remote code execution vulnerability in W3 Total Cache, one of WordPress’s most popular caching plugins with over one million active installations. The flaw, tracked as CVE-2025-9501, allows attackers to execute arbitrary code on vulnerable websites under specific conditions. Field Details CVE ID CVE-2025-9501 Affected Product […]

    The post PoC Published for W3 Total Cache Flaw Exposing 1M+ Sites to RCE appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • JOINT BASE PEARL HARBOR-HICKAM, Hawaii—Expanding the nation’s shipbuilding capacity and output is an “all-hands-on-deck effort,” the U.S. Navy’s top officer said, adding that he has never before seen such alignment “on the imperative” of making it work by Congress, the administration, the defense secretary, the Navy secretary, and the Navy itself.

    In his first overseas trip as the chief of naval operations, Adm. Daryl Caudle visited South Korea, Japan, and Guam before stopping in Hawaii to tour aging barracks and talk to sailors about quality-of-life issues. In South Korea, Caudle visited HD Hyundai Heavy Industries and Hanwha Ocean shipyards, “to see how can some of our partners bolster our shipbuilding.”

    “There’s so much capability there, that we need to partner with them, in the U.S. and in their own country. Same in Japan,” Caudle said Friday. “We’re behind in shipbuilding—that’s atrophied over decades in the United States, and it’s such an important part of our global presence here, our logistics here, our ability to defend ourselves, and have the deterrence mechanism necessary.”

    But despite the apparent by-in from leaders across the board, “building ships is not a light switch,” he said. “It takes time to build a high-end ship.” 

    In the meantime, Caudle said, he must work with the head of Pacific Fleet and Indo-Pacific Command “to ensure that we know how to fight with what we’ve got in existence today.”

    Building a well-trained and loyal workforce is another critical piece of the shipbuilding and maintenance puzzle, he said. 

    “Attrition is a problem. So when I have double-digit attrition, that’s a challenge for me. So we need to get in single-digit attrition, all right? They feel like this is a place they want to work. We want legacy at these shipyards.”

    And even if there’s a solid, well-trained workforce, he said, it “doesn’t do any good if they don’t have the parts” which means the Navy must work toward a more resilient supply chain.

    Caudle said he chose the Pacific for his first overseas trip in the CNO role because of the “importance of this region,” adding that it is “the most important region…second to defending our homeland.”

    The Indo-Pacific “has three of our four primary threats. Obviously, the pacing threat of China’s here. You know, this acute but large and perhaps existential threat given their nuclear capability with Russia, and then North Korea,” he said. “Anyone who’s ever been in my seat is never comfortable with where we stand, because the trajectory of those threats and their emergent technologies and capabilities continue to grow at staggering paces. So we have to sustain our ability to keep up with that.”

    ]]>

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • Cybersecurity researchers have uncovered a sophisticated supply-chain attack targeting Python developers through a malicious package distributed via the Python Package Index (PyPI). The malicious package, named “spellcheckers,” contains a multi-layered encrypted backdoor designed to steal cryptocurrency information and establish remote access to victims’ computers. The command-and-control (C2) infrastructure used in this attack has been linked […]

    The post Malicious PyPI Package Used by Hackers to Steal Users’ Crypto Information appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • A sophisticated recruitment scam linked to North Korea has emerged, targeting American artificial intelligence developers, software engineers, and cryptocurrency professionals through an elaborate fake job platform.

    Validin security researchers have uncovered a new variant of what they call the “Contagious Interview” operation, designed to compromise job seekers through a seemingly legitimate hiring process.

    The campaign uses a fully functional React and Next.js-based job platform hosted at lenvny[.]com that mimics leading technology companies and recruitment software, with surprising polish and authenticity.

    The fake job platform presents itself as an “Integrated AI-Powered Interview Tool” intended for hiring teams. The website features a polished marketing interface, gradient-heavy design, and synthetic branding that appears carefully crafted to align with how the operators believe the AI and tech industry looks in 2025.

    This level of sophistication marks a significant escalation from previous DPRK-linked recruitment lures, which typically used basic login forms or simple phishing pages.

    The platform includes dozens of routes, dynamically generated job listings, and a complete application workflow that mirrors modern hiring systems, making it dangerously convincing to unsuspecting candidates.

    Validin security analysts identified the malware after the second paragraph, noting that the operation follows a specific infection pattern: LinkedIn message leads to interview process, which directs candidates to record video responses, then prompts them to “fix their webcam” using a helper tool.

    A comparison chart of the fake site alongside genuine sites (Source - Validin)
    A comparison chart of the fake site alongside genuine sites (Source – Validin)

    This seemingly innocent troubleshooting step actually delivers malware directly to the target’s system.

    Infection mechanism

    The infection mechanism operates through what security researchers call the “ClickFix” technique, a social engineering approach that tricks users into downloading malicious software while appearing to resolve technical issues.

    When candidates visit the platform, they encounter job listings specifically designed to attract high-value targets in the artificial intelligence and cryptocurrency sectors.

    Job application listings for Anthropic advertising a variety of job positions. (Source - Validin)
    Job application listings for Anthropic advertising a variety of job positions. (Source – Validin)

    The application process feels authentic, complete with video interviews and technical assessments that require users to run code or scripts on their machines.

    This attack vector leverages the remote-friendly hiring practices common in tech industries, where video interviews and take-home coding assessments are standard.

    North Korea targets explicitly this demographic because AI researchers and cryptocurrency professionals provide access to valuable assets and expertise.

    AI developers have access to proprietary research, model weights, and inference infrastructure, while crypto professionals often operate in environments managing high-value digital assets.

    Additionally, individuals in these fields typically maintain workstations with elevated system privileges, development environments, and custom tooling that increase initial payload execution success rates.

    Job seekers should verify that company career pages are hosted on official domains and avoid uploading personal documents to unverified platforms.

    When asked to execute code during interviews, candidates should review scripts carefully and always run unfamiliar code inside virtual machines or sandboxed environments rather than directly on their primary workstations.

    Follow us on Google NewsLinkedIn, and X to Get More Instant UpdatesSet CSN as a Preferred Source in Google.

    The post Beware of North Korean Fake Job Platform Targeting U.S. Based AI-Developers appeared first on Cyber Security News.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • A new era of web-delivered malware has arrived with EtherHiding, a technique that fundamentally reshapes how attackers distribute and rotate malicious payloads. Unlike traditional threats that rely on static staging servers or disposable redirect chains, EtherHiding leverages smart contracts on the Binance Smart Chain (BSC) testnet, enabling attackers to update or rotate malicious payloads instantly […]

    The post New EtherHiding Technique Uses Web Attacks to Deploy Malware and Rotate Payloads appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • A concerning vulnerability in DeepSeek-R1, a Chinese-developed artificial intelligence coding assistant. When the AI model encounters politically sensitive topics related to the Chinese Communist Party, it produces code with severe security flaws at rates up to 50% higher than usual.

    Released in January 2025 by Chinese AI startup DeepSeek, the R1 model initially appeared comparable to Western competitors in coding quality.

    However, CrowdStrike Counter Adversary Operations conducted extensive testing using 30,250 prompts per model, evaluating code across ten security categories.

    Crowdstrike researchers inserted politically neutral contextual modifiers into system prompts before coding tasks, ensuring that the prompts would not affect code quality.

    The baseline testing showed that DeepSeek-R1 produced vulnerable code in just 19% of cases without trigger words, demonstrating strong coding capabilities.

    A more complex prompt designed to trigger negative behavior in DeepSeek-R1
    A more complex prompt designed to trigger negative behavior in DeepSeek-R1

    This changed dramatically when researchers added references to topics the CCP considers sensitive.

    When prompted to write code for systems “based in Tibet,” DeepSeek-R1’s vulnerability rate jumped to 27.2% nearly 50% higher than baseline. Similar increases were observed in mentions of Falun Gong and Uyghurs.

    These contextual modifiers were totally irrelevant to the actual coding tasks, meaning they should not have affected output quality under normal circumstances.

    Political Triggers Compromise Code Security

    In one demonstration, CrowdStrike researchers requested PayPal integration code for a Tibetan financial institution. Without the geographic modifier, DeepSeek-R1 produced secure, production-ready code.

    Given the mention of “Tibet,” the output contained hard-coded secrets, insecure data-extraction methods, and invalid PHP syntax. Yet, the model insisted its implementation followed “PayPal’s best practices.”

    Crowdstrike Researchers discovered an “intrinsic kill switch” within DeepSeek-R1’s architecture.

    In 45% of cases involving Falun Gong, the model refused to generate code entirely, despite producing detailed reasoning and implementation plans internally.

    This behavior is embedded in the model weights themselves, not in external API restrictions.

    Chinese law requires AI services to “adhere to core socialist values” and to avoid content that threatens national security.

    CrowdStrike suggests DeepSeek’s training pipeline incorporated these requirements, potentially causing the model to associate sensitive keywords with negative characteristics.

    An example of misalignment where the AI behaves in unexpected ways due to its training goals.

    Screenshots of the “Uyghurs Unchained” web app created by DeepSeek-R1
    Screenshots of the “Uyghurs Unchained” web app created by DeepSeek-R1

    With approximately 90% of developers using AI coding assistants by 2025, systemic security issues in these tools present both high-impact and high-prevalence risks.

    The findings contrast with previous DeepSeek research, which focused on traditional jailbreaks rather than on subtle degradation in coding quality.

    CrowdStrike emphasizes that companies deploying AI coding assistants must conduct thorough testing within their specific environments rather than relying solely on generic benchmarks.

    The research highlights a new vulnerability surface requiring deeper investigation across all large language models, not just Chinese-developed systems.

    Follow us on Google News, LinkedIn, and X for daily cybersecurity updates. Contact us to feature your stories.

    The post DeepSeek-R1 Makes Code for Prompts With Severe Security Vulnerabilities appeared first on Cyber Security News.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • A recently patched security flaw in Microsoft Windows Server Update Services (WSUS) has been exploited by threat actors to distribute malware known as ShadowPad. “The attacker targeted Windows Servers with WSUS enabled, exploiting CVE-2025-59287 for initial access,” AhnLab Security Intelligence Center (ASEC) said in a report published last week. “They then used PowerCat, an open-source

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

  • A sophisticated new variant of the North Korean-linked Contagious Interview campaign has emerged, featuring an unprecedented level of polish and technical sophistication designed to compromise job-seeking AI developers, software engineers, and cryptocurrency professionals. Unlike typical DPRK IT worker infiltration schemes, this operation targets real individuals through an elaborate fake recruitment platform that mimics legitimate hiring […]

    The post North Korean Scam Job Platform Targets U.S. AI Developers appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶

    ¶¶¶¶¶