• AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

    The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

    The OpenClaw logo.

    If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your entire digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

    Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

    “The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

    You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

    “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

    Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

    There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

    Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

    With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

    “You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”
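Operators can sanity-check their own exposure before an attacker does. The sketch below is a minimal, hypothetical example (the port number is an assumption; substitute whatever port your agent's web interface actually uses). It attempts a TCP connection to that port on each of the machine's non-loopback addresses; any success means the interface answers beyond localhost and deserves a firewall rule:

```python
import socket

ADMIN_PORT = 8080  # hypothetical; replace with your agent's actual admin port

def non_loopback_addresses():
    """Collect this machine's non-loopback IPv4 addresses."""
    addrs = set()
    try:
        for info in socket.getaddrinfo(socket.gethostname(), None, socket.AF_INET):
            ip = info[4][0]
            if not ip.startswith("127."):
                addrs.add(ip)
    except socket.gaierror:
        pass  # hostname not resolvable; nothing to report
    return addrs

def port_reachable(ip, port, timeout=1.0):
    """Return True if a TCP connection to (ip, port) succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_exposure(port=ADMIN_PORT):
    """Warn if the admin port answers on any address other than loopback."""
    exposed = [ip for ip in non_loopback_addresses() if port_reachable(ip, port)]
    if exposed:
        print(f"WARNING: port {port} answers on non-loopback address(es): {exposed}")
    else:
        print(f"OK: port {port} is not reachable on any non-loopback address")
    return exposed

if __name__ == "__main__":
    check_exposure()
```

This only checks reachability from the host itself; a scan from a second machine (or an external service) gives a truer picture of what the Internet sees.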

    O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

    WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks: sneakily crafted natural-language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw, with full system access, installed without consent.

    According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

    “On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

    “This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

    VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling the assistant what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

    The Moltbook homepage.

    Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

    Moltbook’s creator Matt Schlict said on social media that he didn’t write a single line of code for the project.

    “I just had a vision for the technical architecture and AI made it a reality,” Schlict said. “We’re in the golden ages. How can we not give AI a place to hang out.”

    ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five-week period.

    AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

    “One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

    “This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

    For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

    “By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

    BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

    “I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

    One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

    Image: simonwillison.net.

    “If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.
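Willison's rule of thumb lends itself to a simple audit: inventory each agent's capabilities and flag any deployment that holds all three. The sketch below (agent names and fields are illustrative, not from any real product) shows the idea:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Capability inventory for one AI agent deployment (illustrative)."""
    name: str
    private_data: bool        # can read inboxes, files, credentials
    untrusted_content: bool   # ingests email, web pages, issue titles
    external_comms: bool      # can send messages or make outbound requests

def lethal_trifecta(agent: AgentProfile) -> bool:
    """All three capabilities together mean an attacker who can inject
    instructions can also exfiltrate whatever the agent can read."""
    return agent.private_data and agent.untrusted_content and agent.external_comms

agents = [
    AgentProfile("inbox-butler", True, True, True),         # full-access assistant
    AgentProfile("offline-summarizer", True, True, False),  # no exfiltration path
]

for a in agents:
    print(f"{a.name}: {'AT RISK' if lethal_trifecta(a) else 'ok'}")
```

The practical takeaway is that removing any one leg, most often the unrestricted outbound channel, breaks the exfiltration path without necessarily making the agent useless.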

    As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

    The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

    “The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools.

    “The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”


  • The battle between AI model builder Anthropic and the Pentagon has exposed a huge gap between what AI tools the military wants and what companies like Anthropic, xAI, and OpenAI actually make: AI tools for use by everyone, not specifically for the military. A handful of veteran-run or -financed startups aim to fill that gap. Their pitch: AI for war should have some basic understanding of war, beyond reading Tom Clancy fan fiction. It shouldn’t confidently offer low-confidence answers just to appease the user. And it should work even when a high-tech adversary severs its connection to the cloud.

    The needs gap

Among the uncomfortable truths the fight between Anthropic and the Defense Department reveals is that the Pentagon had deep reservations about the language models themselves: their potential to hallucinate, and the chance that they may “not follow instructions.”

    But the Pentagon allowed wide deployment of Anthropic’s model anyway, anxious to get at least some generative-AI tools into operators’ hands. It reportedly played a role in Operation Midnight Hammer, the raid that captured Venezuelan President Nicolás Maduro, although Pentagon officials have declined to confirm that.

    After the raid, Anthropic officials called Palantir to ask whether their AI models had been used in the operation, Defense Undersecretary for Research and Engineering Emil Michael said on Friday. Michael said that was “a whoa moment for the whole leadership at the Pentagon, that we're potentially so dependent on a software provider without another alternative.” He said it raised several concerns, including that Anthropic might shut down access to models in such situations.

Anthropic itself had similar concerns, according to one company official: the company didn’t think it was safe for the military to rely on its models in combat situations.

    Another aspect of the shortcomings of today’s frontier AI models—Anthropic’s Claude, Google’s Gemini, OpenAI’s ChatGPT, and xAI’s Grok—is that they need a connection to the cloud. This makes them unreliable for today’s troops and unusable for tomorrow’s autonomous weapons.

    OpenAI tacitly acknowledged this limitation when it recently announced its own deal to deploy on the Pentagon’s classified networks—though it described this inability to deploy large foundational models to the battlefield as a “safeguard” against the kind of unreliability that concerned Anthropic officials. 

    “Our contract limits our deployment to cloud API,” OpenAI’s national security lead Katrina Mulligan explained on X. “Autonomous systems require inference at the edge. By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”

    The way forward

    Even as the Pentagon was noisily ejecting Anthropic from its good graces, the Army was preparing to unveil a new effort to close the gap. Project Aria, announced on Thursday, is intended to help the service develop and deploy new AI models and tools “to tackle real operational problems”—that is, designed specifically to help soldiers do their jobs. 

    This is also the object of a new class of AI startups run by people with military experience and dedicated to battlefield tools that don’t need to phone home.

One is Smack Technologies, which on Monday announced that it had secured $32 million in investor funding to build what it calls a “frontier lab for national security.”

    Andrew Markoff, a former Marine special operator who co-founded Smack, says his AI is trained on combat-relevant datasets, not the unspecialized fodder fed to Claude, Gemini, and other frontier models.

    “There is no training set for World War Three, right?” Markoff said in a call with reporters last week. “There's no way to build reinforcement learning…if you don't have deep domain expertise and a deep bench of people with domain expertise. There is no shortcut around encoding good human prior knowledge, and it doesn't exist in doctrinal manuals.”

    He called the Venezuela raid a good example of the sort of operation that AI could help scale up in a conflict with a more advanced adversary. 

    “Multiply it by 100 and scale. You have targets that you want to strike, you have sensors that you're trying to allocate on those targets to figure out what's going on. And to facilitate the strikes, you have strike platforms and escorts that are coming together from all over the world with very detailed sequencing requirements; you know, task A has to happen X number of seconds before Task B. And all of these are dependent on some, some other thing happening at, you know, time X. So, like, all of these things have to come together globally, a really tight timeline.”

    But Markoff said that’s not the sort of thing that commercial large language models are built to do. Models like Claude “have no way to optimize between those goals. And it doesn't have the ability to do the detailed time, space calculations, [to perform] geospatial reasoning grounded in physics, to make the decisions about, literally, which munitions need to be where at what time, talking to which sensors at what time. It doesn't have the ability to do that.”

This was echoed by Jason Rathje, a former Air Force acquisitions officer and co-founder of the Defense Department’s Office of Strategic Capital who now leads the public-sector division at webAI.

    Frontier models like Claude “are built to answer millions of different kinds of questions for billions of users. Military organizations often need something different, systems tuned for specific operational tasks like logistics planning, equipment maintenance, intelligence analysis, or operational decision support,” Rathje said.

    The limitations related to cloud needs are equally important. “Many of today’s frontier models are designed as centralized services for massive commercial user bases, requiring the most advanced chipsets and high-capacity data center infrastructure available, and consuming enormous amounts of power. That makes sense for consumer applications, but military organizations often have very different requirements,” he said. “What defense organizations are asking for is sovereignty: control over the model, the data, and the infrastructure it runs on.”

    Smack Technologies is producing two product suites: one to work like the well-known generative AI models, but trained on military intelligence and operator experience; and the other to work in remote battlefields.

    Sherman Williams, a Navy veteran and founder of AIN Ventures, has invested in a number of dual-use and defense-focused startups. He acknowledges that no AI startup is going to beat one of the big frontier models in metrics like reasoning benchmarks. But “a model that's 85% as capable but runs on a [denied, disrupted, intermittent, and limited] network at the tactical edge beats GPT-5 in a data center you can't reach.”

    Even the data centers you can reach are vulnerable, as shown by Iran's targeting of an AWS facility in Bahrain. “These data centers are important, but they are also vulnerable. Context matters more than benchmarks.”

    The new class of DOD-focused AI startups “aren't trying to out-train OpenAI,” he said. “They're building the adaptation and deployment stack that makes open-source models usable in classified settings. Secure fine-tuning, domain-specific models for [intelligence, surveillance, and reconnaissance] and [command and control] edge deployment.”

    Williams says he’s seeing “strong pull signals from military customers, especially SOCOM and INDOPACOM,” which has been extensively using AI at a headquarters level for more than a year.

    He added that DOD buyers and users want to trust the makers of their AI tools, and that bond is easier forged with founders who are familiar with military operations.

    But just hiring a veteran-developed AI doesn’t solve a broader problem of large language models: they speak confidently when they shouldn’t, and they often tailor their responses, or even lie, to their users.

    Pete Walker, a retired Navy commander and the chief innovation officer at defense AI and cybersecurity firm IntelliGenesis, said that the big frontier models often provide answers users want to hear. 

    “The way these models are built, one of the reasons why they're so big, is they encourage conversation,” and that means encouraging users to dive deeper into areas of interest on specific topics, not talking to them honestly. Walker, who holds a Ph.D. in cognitive science, has peer-reviewed research to back up these assertions.

So his company is working to develop a framework for large language models based on counterfactual thinking—presenting alternative points of view to challenge users’ assumptions rather than simply reinforcing the assumptions the user brought to the original question. He describes it as getting a model to think, “Hey, you’re saying that if A then B, but what if it’s not A, or what if not B? What does that imply?”

“I think those are areas of research that we need,” he said.


A new phishing campaign is targeting thousands in the US by posing as the Social Security Administration. Learn how scammers use fake 2025/2026 tax statements and Datto RMM software to hijack computers and steal data, as shared with Hackread.com.


  • OpenAI on Friday began rolling out Codex Security, an artificial intelligence (AI)-powered security agent that’s designed to find, validate, and propose fixes for vulnerabilities. The feature is available in a research preview to ChatGPT Pro, Enterprise, Business, and Edu customers via the Codex web with free usage for the next month. “It builds deep context about your project to identify


  • Discussions of quantum computing regularly predict the end of digital secrecy, which would have society-altering consequences. But what if the opposite happens? If quantum encryption software becomes available to citizens around the world, digital secrets might become harder to harvest, altering intelligence collection and national security. 

    Governments have no monopoly on quantum science, meaning that a breakthrough could come as a complete surprise—and that its use and distribution could be shaped by ideological, political, strategic, or economic goals. 

    The following is a “useful fiction” designed to promote better reflection on the range of future effects of quantum computing on national security. Written in support of NATO’s Allied Command Transformation, the story blends a fictionalized narrative scenario with non-fiction research.


    TOP SECRET-CODEX LEVEL II

    PRINTED BY-HAND DISTRIBUTION ONLY

    8 JANUARY 2033 

    TO: THE PRESIDENT

    FROM: THE NATIONAL SECURITY ADVISOR

    RE: BACKGROUND ON GLOBALLY DISTRIBUTED QUANTUM ENCRYPTION BREAKTHROUGH

    OVERVIEW: 

The following document provides the requested pre-meeting background brief on the rollout of the new, commercially available Quantum Key Distribution (QKD) satellite communications network, Harpocrates. Named for the Greek god of silence, the technology offers users unbreakable encryption, with the potential to disrupt diplomatic, intelligence, and military operations.

    ORIGIN:

The story of the Harpocrates system begins with its eccentric original developer and funder. David Kilmer (whom you met at last year’s Bilderberg event) was born in Atlanta, Georgia, the son of a single mother: a nurse and local political activist. Kilmer dropped out of the Ph.D. program in quantum computing at MIT at the age of 22 to found the Zephyr Corp. After Zephyr’s successful IPO, Kilmer became the 17th-richest person in the world. Kilmer is a staunch privacy advocate, citing violations of his mother’s rights by law enforcement. After her death in 2029, he vowed to “tithe” — using the religious term — 10 percent of his newly acquired wealth in support of privacy rights through the establishment of a global research network to develop quantum-key encryption.

    Financial records show Kilmer’s investment in the project totaled at least $18.6 billion. This funding supported various university projects and contracts with startups in at least 34 countries, supplemented by substantial in-kind support offered by a network of volunteers motivated by the wider global technological justice movement. The scale and flexibility of Kilmer’s distributed network, blending for-profit and pro bono efforts, allowed the program to advance more rapidly than the government and corporate satellite-based QKD efforts we had been tracking.

    Over the last two months, Kilmer’s network brought the Harpocrates network of satellites and open-source ground stations online, creating a globally accessible and completely secure means of communications. Kilmer’s decision to release the technical details through various open-source innovation communities means that Harpocrates’ use is spreading more rapidly than expected. 

    TECHNOLOGY SUMMARY:

The Harpocrates network uses a constellation of 210 Harpocrates-owned CubeSats and an unknown number of easily portable receivers. Additional satellites have been launched by non-state groups and individuals, based on Kilmer’s open-source specifications.

The QKD communications technology underpinning Harpocrates relies on ground-satellite-ground transmission of single photons at optical frequencies, operating much like laser communications. The receivers, built by Harpocrates or based on its open-source plans, must be temporarily stationary for successful transmission due to the narrowness of the quantum channel. Though mounting receivers in vehicles is increasingly prevalent, the vehicles cannot be in motion while communicating with Harpocrates satellites.

CIA and NSA are studying the Harpocrates communications system for exploitable vulnerabilities or hardware/software flaws. It does not appear that there are any backdoors that would allow access to transmitted messages or data. Interception is still possible once information leaves the Harpocrates platform. However, a message in transit has effectively zero probability of undetected intercept: any interception compromises the message’s integrity in a way that is obvious to both sender and recipient. CIA and NSA are working to localize Harpocrates receiver/transmitter units based on their ELINT signatures; however, the system’s designers anticipated this countermeasure.
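The tamper-evidence described above is real QKD behavior, not an invention of the scenario. In the standard BB84 protocol, an eavesdropper who intercepts and re-sends photons inevitably measures some of them in the wrong basis, which corrupts roughly a quarter of the sifted key bits; the endpoints detect this by comparing a sample of their keys. A toy simulation (simplified BB84, with arbitrary photon counts and seed) illustrates the effect:

```python
import random

def bb84_error_rate(n_photons, eavesdrop, rng):
    """Simulate simplified BB84 and return the error rate on the sifted key.
    An intercept-resend eavesdropper guesses the wrong basis half the time,
    scrambling ~25% of the bits the endpoints keep."""
    errors = sifted = 0
    for _ in range(n_photons):
        bit = rng.randint(0, 1)          # sender's key bit
        a_basis = rng.randint(0, 1)      # sender's encoding basis
        photon_bit, photon_basis = bit, a_basis
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # A wrong-basis measurement collapses the state to a random bit.
            photon_bit = bit if e_basis == a_basis else rng.randint(0, 1)
            photon_basis = e_basis       # photon is re-sent in Eve's basis
        b_basis = rng.randint(0, 1)      # receiver's measurement basis
        measured = photon_bit if b_basis == photon_basis else rng.randint(0, 1)
        if a_basis == b_basis:           # sifting: keep matching-basis rounds only
            sifted += 1
            if measured != bit:
                errors += 1
    return errors / sifted

rng = random.Random(7)
print("clean channel QBER:", round(bb84_error_rate(20000, False, rng), 3))
print("intercepted QBER:  ", round(bb84_error_rate(20000, True, rng), 3))
```

A clean channel yields a zero error rate in this idealized model, while interception pushes it toward 25 percent, far above what noise alone could explain, which is why an intercept cannot go unnoticed.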

    DISTRIBUTION:

    Kilmer posted screenshots of his first quantum-secured communications to his viz feed last week when he revealed Harpocrates to his followers: “Finally have something for those of you ready for a world of no more government surveillance—here’s how to sign up for Harpocrates. Or better yet, join the quantum revolution yourself and get building.” 

    Due to his often-provocative statements, Kilmer has over 60 million followers. This audience grew as the announcement ricocheted around the world. The viz feed segment received 241,693,376 online impressions and 24,685,219 downloads of the technical plans before it was taken down by the platform host company at the behest of multiple government authorities, including the United States. Simultaneously, the sign-up instructions and technical plans for the system and the CubeSats were released on 3,141 other websites and accounts. The scale, and the number referencing Pi, indicates this was a pre-planned release.

    We expect rapid adoption of the Harpocrates encryption system. Kilmer’s decision to make available the system’s technical specifications and the research behind it will streamline adoption and iteration by the public, corporations, and governments, just as a similar approach boosted the spread of open-source Linux software. Even if legal or other forms of pressure (think of the failed OPERATION CASCADE, which we might now want to revisit) are brought to bear on Kilmer, the toothpaste is out of the tube, so to speak. 

    IMPLICATIONS FOR SELECT NATIONAL SECURITY PRIORITIES:

    A comprehensive assessment of the strategic and operational impact of this change is being prepared, with input across federal agencies. In the interim, we will soon face a significant reduction or complete loss of signals intelligence at the scale and quality that much of our assessment and decision-making process has come to rely on.

    While no foreign governments have officially adopted the software yet, it is being deployed by numerous semi-official or informal government advisors, most notably those around the Russian president. Indeed, it is likely that once the technical assessment phase is complete, multiple allied leaders under surveillance will also adopt it. Multiple watchlisted terrorists and destructive individuals and groups are also starting to use the Harpocrates satellites, and their own early-stage prototypes, to secure their communications.

    These are a few of the near-term national security priorities affected by Harpocrates QKD:

    OPERATION CRYSTAL DIVE: SECDEF and CJCS recommend that the operation be placed on hold because Task Force 38 no longer has situational awareness on the movement and predicted location of Siraj Ali. ISIS Afghanistan-Pakistan immediately began using vehicle-mounted Harpocrates receivers; subsequently, NSA and partner intelligence agencies were no longer able to access ISIS-APK communications. TF 38 remains staged at Karshi-Khanabad. HUMINT reporting should establish alternative means to fix Siraj Ali’s location. However, without SIGINT data, both DoD and the IC assess that CRYSTAL DIVE now carries greater risk to U.S. forces. Thus, in addition to the operational pause, JSOC recommends the deployment of an additional squadron of LOMAR (Low-Observable Mobile Armed Raiding) air-ground mobile strike platforms to K2 to support CRYSTAL DIVE. The larger operation, however, means greater risk of blowback from the local regime.

SPRATLY ISLANDS CHINESE S-900s: U.S. Navy and Japanese Maritime Self-Defense Force patrols continue at the 10-nautical-mile maritime exclusion zone established by China around the Spratly Islands. After the zone’s establishment last month, the People’s Liberation Army Navy reduced the number of warships in the area from 18 to 7. However, there are now 86 fishing, commercial, and Chinese People’s Armed Police Force Coast Guard Corps vessels in the waters. Naval Intelligence has a reduced ability to monitor their communications due to the use of Harpocrates systems aboard multiple vessels, which in turn communicate fleetwide via ship-to-ship laser-burst transmission. The Chinese Coast Guard’s use of Harpocrates is the first by a government’s military or police forces; however, SIGINT reporting indicates it was adopted without official direction. A first shipment of S-900 mobile launchers and sensor systems to Gaven Reefs is expected at 0830 Zulu, with more sites likely within 72 hours. INDOPACOM analysis reports that the S-900s have been split up among the fishing and commercial vessels; U.S. and Japanese forces can no longer identify which vessels are carrying S-900-related cargo. With these distributed air defense systems, it is clear the PLAN is intent on not repeating the mistakes it made in 2032 at Mischief Reef.

    BERLIN CLIMATE NEGOTIATIONS: The Berlin Accord negotiations to finalize terms of the Zhang Pledge on vehicle battery sourcing and recycling begin 10 January. The campaign by the coalition of non-governmental groups and environmental organizations to strengthen the U.S. positions on recycling price caps and rare earth mineral reserve quotas (Articles 3 and 5) continues to surge on social and viz-media, with physical demonstrations planned in Berlin and other major European cities. HUMINT and SIGINT reporting from a European intelligence service indicates that far-right groups have been planning online to infiltrate the climate protest crowds and instigate violent clashes in hopes of overshadowing the negotiations. The threat from these groups is now significantly higher because they are expected to begin using Harpocrates secure communications. HUMINT shows that the German chancellor, given this uncertainty and in part because of your planned participation on 11 January, is considering a secondary location outside Berlin for the negotiations. The Secret Service is preparing for that likelihood now. As noted, multiple partner states are also evaluating Harpocrates, meaning that we should also prepare for the prospect that we will not have prior access to their negotiating strategies.

    RUSSIAN PRESIDENT SUMMIT MEETING: The summit meeting with President Panov in Geneva the following day, 12 January, presents new difficulties. While we have a baseline understanding of Russia’s positions on its own airbase in Afghanistan, its recent deployment of the Laika space-based anti-satellite weapons, and China’s interest in banning autonomous long-duration undersea weapons, it will be difficult to ascertain new information. As noted above, several members of Panov’s inner circle have already begun to use Harpocrates. This reduces not only our access but also the ability of several of the oligarchs to subtly signal intent to us through their deliberate use of less-secure communications. CIA HUMINT source VK-BLAND confirms that Russian intelligence is validating the security of Harpocrates for use by the President himself. You should prepare for the possibility that limited intelligence will make this meeting more like the leader summits of the 1960s, nearly 80 years ago.

    CONCLUSION:

    The Harpocrates QKD satellite-based communications system represents a breakthrough in technology, but also a potential breakpoint in our access to the scale and quality of information that we depend on. I look forward to discussing these points with you tomorrow and highlighting potential courses of action. 

  • Researchers at Acronis have discovered a trojanized version of the Red Alert rocket-warning app targeting Israeli Android users. Distributed via fake Home Front Command SMS messages, the spyware steals GPS data, SMS messages, and contact lists while maintaining full alert functionality.

  • OpenAI has officially introduced Codex Security, an advanced application security agent designed to automate vulnerability discovery and remediation. Formerly known as Aardvark, the tool is now available in a research preview. It aims to eliminate the bottleneck of manual security reviews by combining state-of-the-art AI models with automated validation, enabling development teams to ship secure […]

    The post OpenAI’s Codex Security Built to Automate Vulnerability Discovery and Remediation appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.

  • Anthropic on Friday said it discovered 22 new security vulnerabilities in the Firefox web browser as part of a security partnership with Mozilla. Of these, 14 have been classified as high, seven have been classified as moderate, and one has been rated low in severity. The issues were addressed in Firefox 148, released late last month. The vulnerabilities were identified over a two-week period in

  • Socket’s Threat Research Team has uncovered a highly deceptive Google Chrome extension designed to steal private keys and seed phrases from cryptocurrency users. The malicious add-on, named “lmΤoken Chromophore” (extension ID bbhaganppipihlhjgaaeeeefbaoihcgi), disguises itself as a harmless hex color visualizer for developers and digital artists. However, its true purpose is to impersonate the widely used […]

    The post Malicious Browser Add‑on Targets imToken Users’ Private Keys appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
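
    The deception here turns on Unicode look-alike characters: in the reported extension name, the “Τ” is GREEK CAPITAL LETTER TAU rather than the Latin “T” of imToken, so the spoofed name renders nearly identically while being a different string. The Python sketch below (illustrative only, not Socket’s actual detection logic; `describe_chars` and `looks_like` are hypothetical helpers) shows how inspecting Unicode character names and normalized forms exposes this kind of spoof:

```python
import unicodedata

def describe_chars(s: str) -> list[str]:
    """Return the official Unicode name of each character in the string."""
    return [unicodedata.name(c) for c in s]

def looks_like(candidate: str, target: str) -> bool:
    """Crude spoof check: NFKC-normalize and casefold both strings.

    NFKC folds many compatibility characters toward ASCII, but visual
    look-alikes from other scripts (such as Greek tau) survive it, so a
    visually "identical" spoof still fails this comparison.
    """
    return (unicodedata.normalize("NFKC", candidate).casefold()
            == unicodedata.normalize("NFKC", target).casefold())

fake = "lmΤoken"   # the "Τ" here is GREEK CAPITAL LETTER TAU
real = "imToken"

print(describe_chars(fake))    # reveals the Greek letter among the Latin ones
print(looks_like(fake, real))  # False: the spoofed name is not the real one
```

    Unicode Technical Standard #39 defines a fuller “confusables” skeleton for exactly this class of attack; the sketch above only demonstrates the underlying idea.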

  • Defense companies have agreed to make four times as many missiles, President Donald Trump said Friday, denying reports that his war on Iran was quickly draining stocks of key munitions. 

    “They have agreed to quadruple Production of ‘Exquisite Class’ Weaponry in that we want to reach, as rapidly as possible, the highest levels of quantity,” Trump wrote in a post on his social-media network, Truth Social. No quantities, weapons, or timelines were specified.

    The White House met with top defense contractors Friday to discuss production challenges with munitions as the U.S. closes out the first week of its joint war on Iran with Israel. BAE Systems, Boeing, Honeywell Aerospace, L3Harris, Lockheed Martin, Northrop Grumman, and RTX attended the meeting, according to the post. 

    “Expansion began three months prior to the meeting, and Plants and Production of many of these Weapons are already under way. We have a virtually unlimited supply of Medium and Upper Medium Grade Munitions, which we are using, as an example in Iran,” and Venezuela, he continued.

    Trump also said the U.S. has “increased orders at these levels,” but offered no more details. 

    The announcement comes after a week of war in the Middle East launched with U.S.-Israel joint strikes against Iran. Concerns about U.S. weapons stockpiles, the president’s ability to boost them, and long production times for missiles that cost millions of dollars each were raised before the strikes, with increasing fervor as the week went on. 

    For months, the White House has been pushing defense companies to increase weapons manufacturing. It has secured some commitments to boost production numbers. Lockheed Martin, for example, vowed to increase its output of THAAD and Patriot interceptors. RTX, which Trump previously criticized as a sluggish producer, announced several agreements in February to increase production of AMRAAM, SM-3 Block IB, SM-3 Block IIA, SM-6, and Tomahawk in coming years. 

    When asked for comment on the results of Friday’s White House meeting, an RTX spokesperson pointed to last month’s announcement: “RTX is proud to support the administration’s goals of defending the U.S. and its allies at this critical moment and committed to accelerating the production of five key munitions in accordance with the historic frameworks reached with the War Department last month.”

    A Lockheed spokesperson said the company began work months ago on its agreement to quadruple critical munitions production. “We are moving with urgency, and we will deliver.”
