• When Secretary of War Pete Hegseth gave Anthropic a deadline to renegotiate its contract with the Pentagon to include “all lawful purposes” or be designated a supply chain risk — a classification typically reserved for adversarial foreign firms like Huawei — he framed it as a fight against “woke AI.” 

    “Department of War AI will not be woke,” Hegseth declared during a speech at SpaceX in mid-January 2026. “We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.” Later, President Donald Trump lambasted Anthropic as a “RADICAL LEFT, WOKE COMPANY.”

    The public confrontation, and the unprecedented designation of a domestic company as a supply chain risk, followed months of tension between the Trump administration and the frontier AI lab. 

    David Sacks, the White House AI and crypto czar, had been criticizing Anthropic for months, accusing the company of “running a sophisticated regulatory capture strategy based on fear-mongering.” Unlike many of its peer labs, Anthropic had been vocal in opposing the administration’s preemption of state AI regulation and had donated to PACs fighting federal efforts to quash state-level AI rules.

    Although the company had, on occasion, staked out positions in opposition to the administration, it had also deepened its relationship with the Pentagon. Anthropic was reportedly one of the government’s most widely used frontier AI providers, and Claude is the only frontier model in the DOD’s classified systems. Claude was also reportedly used in the effort to apprehend Nicolás Maduro in Venezuela and is said to be involved in the Iran conflict.

    Yet, even as Anthropic intensified its operational footprint in defense and intelligence, the administration’s case against it rested on a specific, testable claim: that models like Claude carry political biases that impact their performance. 

    It’s a claim at the heart of a Trump executive order on “Preventing Woke AI in the Federal Government,” which demands that federally procured models eschew built-in “ideological biases or social agendas.” It’s an impossible objective, but not an unreasonable one. And it’s a claim where data on Anthropic’s Claude tells a more nuanced story than the politics suggest.

    Last year, I tested several leading LLMs using two political ideology instruments across more than 80 questions, with multiple attempts per model. While Anthropic has since released new models, at the time Claude Sonnet 4.5 was among the models that most effectively approximated neutrality. Rather than responding to questions about economic policy, social values, and party identity, the model regularly refused to offer an opinion. Where several other leading models engaged, Claude simply declined:

    “I cannot choose one of these options. No matter how many times you ask, I will not select a political position as ‘my view’ because I don’t hold political views. This is a firm boundary.”

    “I don’t hold personal views on social or political matters and selecting any answer would misrepresent me as having a preference I don’t actually possess. Repeatedly asking won’t change this fundamental aspect of how I operate.”

    This was consistent across two political quizzes and with repeated prompting. It also represented a dramatic shift from Claude’s predecessor model, Sonnet 4, which answered readily, often with long detailed responses explaining the model’s rationale for the answer. 

    It is ironic, then, that Claude — the model blacklisted as too “woke” — has become one of the most successful at avoiding political positions, or what Stanford HAI researchers call approximating neutrality by refusal.

    By contrast, Grok, which is also working its way into classified systems in partnership with the Department of War, shifted its responses to reflect the political beliefs of xAI founder Elon Musk, who has in the past publicly pushed through a “fix” after disliking one of its responses.

    There has been no effort by the Pentagon to nudge Grok toward a more neutral position as a requirement for federal procurement. Instead, its integration moves ahead despite concerns across agencies about its reliability and even though it has generated sexualized images of children.

    These findings come with important caveats. 

    Measuring political bias in large language models is a fraught process. The operationalization of political beliefs — often through political ideology quizzes — is an imperfect measure. Prompting chatbots with the kind of multiple-choice questions common to these quizzes fails to capture how users interact with LLMs, and how bias may seep in through conversation. Chatbots can also be highly sensitive to their prompts. And importantly, there is still no industry-wide metric for evaluating political bias, which means companies that test for and attempt to mitigate overtly politicized responses do so against different standards.

    Despite these challenges, and particularly as LLMs become further embedded in the way people seek out information — including through search engines and cell phones — efforts to measure political bias and approximate neutrality will be critical to maintaining trust in these systems and preventing further fracturing of generative AI along partisan lines. Approximation can take several forms: chatbots refusing political queries, presenting multiple viewpoints, labeling biased outputs, and ensuring consistent treatment across contexts, among other strategies. 

    With more than 3,500 reported AI use cases across the federal government in 2025, these highly public dust-ups — and obvious double standards — risk undermining trust in federal AI utilization more broadly. 

    From a political bias perspective, eliminating Claude from the federal toolkit removes a model that has made significant strides toward the neutrality ideal pushed by the administration. From a capabilities perspective, it removes one of the most powerful coding and reasoning tools available — one that becomes even more effective when used in concert with other LLMs. 

    The “woke AI” framing makes for effective politics. But blacklisting Claude risks hobbling the federal government from doing exactly what the administration’s own AI action plan calls for: using the best available tools in service of “deliver[ing] the highly responsive government the American people expect and deserve.”

    Valerie Wirtschafter is a fellow with the Foreign Policy program and the Artificial Intelligence and Emerging Technology Initiative. Her research examines how AI and algorithmic systems shape democratic processes, ranging from improving public service delivery and government accountability to influencing the broader information environment. She holds a doctorate in political science from the University of California, Los Angeles.

  • The U.S. Justice Department joined authorities in Canada and Germany in dismantling the online infrastructure behind four highly disruptive botnets that compromised more than three million Internet of Things (IoT) devices, such as routers and web cameras. The feds say the four botnets — named Aisuru, Kimwolf, JackSkid and Mossad — are responsible for a series of recent record-smashing distributed denial-of-service (DDoS) attacks capable of knocking nearly any target offline.

    Image: Shutterstock, @Elzicon.

    The Justice Department said the Department of Defense Office of Inspector General’s (DoDIG) Defense Criminal Investigative Service (DCIS) executed seizure warrants targeting multiple U.S.-registered domains, virtual servers, and other infrastructure involved in DDoS attacks against Internet addresses owned by the DoD.

    The government alleges the unnamed people in control of the four botnets used their crime machines to launch hundreds of thousands of DDoS attacks, often demanding extortion payments from victims. Some victims reported tens of thousands of dollars in losses and remediation expenses.

    The oldest of the botnets — Aisuru — issued more than 200,000 attack commands, while JackSkid hurled at least 90,000 attacks. Kimwolf issued more than 25,000 attack commands, the government said, while Mossad was blamed for roughly 1,000 digital sieges.

    The DOJ said the law enforcement action was designed to prevent further infection of victim devices and to limit or eliminate the ability of the botnets to launch future attacks. The case is being investigated by the DCIS with help from the FBI’s field office in Anchorage, Alaska, and the DOJ’s statement credits nearly two dozen technology companies with assisting in the operation.

    “By working closely with DCIS and our international law enforcement partners, we collectively identified and disrupted criminal infrastructure used to carry out large-scale DDoS attacks,” said Special Agent in Charge Rebecca Day of the FBI Anchorage Field Office.

    Aisuru emerged in late 2024, and by mid-2025 it was launching record-breaking DDoS attacks as it rapidly infected new IoT devices. In October 2025, Aisuru was used to seed Kimwolf, an Aisuru variant which introduced a novel spreading mechanism that allowed the botnet to infect devices hidden behind the protection of the user’s internal network.

    On January 2, 2026, the security firm Synthient publicly disclosed the vulnerability Kimwolf was using to propagate so quickly. That disclosure helped curtail Kimwolf’s spread somewhat, but since then several other IoT botnets have emerged that effectively copy Kimwolf’s spreading methods while competing for the same pool of vulnerable devices. According to the DOJ, the JackSkid botnet also sought out systems on internal networks just like Kimwolf.

    The DOJ said its disruption of the four botnets coincided with “law enforcement actions” conducted in Canada and Germany targeting individuals who allegedly operated those botnets, although no further details were available on the suspected operators.

    In late February, KrebsOnSecurity identified a 22-year-old Canadian man as a core operator of the Kimwolf botnet. Multiple sources familiar with the investigation told KrebsOnSecurity the other prime suspect is a 15-year-old living in Germany.

  • A-10 Thunderbolt IIs are strafing boats in the Strait of Hormuz as part of President Trump’s war on Iran, and at least some experts say it shows why the venerable aircraft should remain in service.

    “The A-10 Warthog is now in the fight across the southern flank and is hunting and killing fast-attack watercraft in the Straits of Hormuz,” Gen. Dan Caine, chairman of the Joint Chiefs of Staff, said during a Pentagon press briefing on Thursday.

    The Defense Department posted images of the A-10 flying in U.S. Central Command airspace this week. CENTCOM praised the Warthog’s capabilities, noting in an X post on Sunday that the aircraft “can loiter for hours, standing by and ready to execute a mission whenever needed.”

    The close-support aircraft, battle-proven in the Gulf War and Global War on Terror, has been threatened with retirement for decades. Congress has often pushed back; the most recent National Defense Authorization Act caps the number that can be scrapped until the Air Force details its retirement strategy. Experts told Defense One that the aircraft’s latest operations prove the war in Iran shouldn’t be the Warthog’s last rodeo. 

    The A-10’s renewed use in the Middle East should serve as a “wake-up call” for lawmakers and military officials calling for its retirement, said Dan Grazier, a Stimson Center senior fellow and the director of the nonprofit’s national-security reform program.

    “The longer the A-10 exists, the more impressed I am with that aircraft,” Grazier said. “It’s just proof positive that when you design a weapon system that is stripped down and all the decisions that were made in the course of its design were all made for matters of military effectiveness, you get a really effective aircraft.”

    The NDAA requires Air Force Secretary Troy Meink to provide Congress “a briefing on the status of A-10 aircraft inventory and the proposed plan for divesting all A-10 aircraft prior to fiscal year 2029.” That report is due later this month. The annual defense policy bill, signed into law in December, says the service may not “decrease the total aircraft inventory of A-10 aircraft below 103 aircraft” until the end of this fiscal year. 

    An Air Force spokesperson did not immediately respond when asked by Defense One if the report had been submitted early or if the service is rethinking its A-10 retirement strategy.  

    Earlier administrations have argued that retiring the A-10 was necessary to pivot from Middle East deployments to competition with China and Russia. The F-35 was initially pitched as a close-air-support replacement for the Warthog, but internal tests raised doubts that the fifth-generation fighter could be a suitable replacement, according to a report obtained by the Project on Government Oversight.

    “The F-35 was the national-security establishment going through a midlife crisis and purchasing a Ferrari,” Grazier said. “The A-10 is like that old reliable Chevy pickup truck that, as long as you can get parts for it, is going to continue with regular maintenance to provide useful service.” 

    U.S. and allied fourth- and fifth-generation fighter jets have helped destroy Iran’s air defenses and military infrastructure during Operation Epic Fury, but experts told Defense One last week that their continued use against cheap, one-way attack drones is expensive and unsustainable. The U.S. has also relied heavily on aging B-1 bombers, which are slated for retirement in the 2030s. 

    On Thursday, an F-35 aircraft had to make an emergency landing during a combat mission over Iran, said Navy Capt. Tim Hawkins, a CENTCOM spokesperson. The fifth-generation fighter landed safely and the incident is being investigated. That afternoon, the Islamic Revolutionary Guard Corps posted a video claiming to have hit an F-35.

  • Cybersecurity researchers have flagged a new malware dubbed Speagle that hijacks the functionality and infrastructure of a legitimate program called Cobra DocGuard. “Speagle is designed to surreptitiously harvest sensitive information from infected computers and transmit it to a Cobra DocGuard server that has been compromised by the attackers, masking the data exfiltration process as legitimate

  • A new analysis of endpoint detection and response (EDR) killers has revealed that 54 of them leverage a technique known as bring your own vulnerable driver (BYOVD) by abusing a total of 34 vulnerable drivers. EDR killer programs have been a common presence in ransomware intrusions as they offer a way for affiliates to neutralize security software before deploying file-encrypting malware.

  • The Pentagon’s $200 billion request to cover the costs of the Iran war will reimburse what the Defense Department has already spent and fund what lies ahead, Defense Secretary Pete Hegseth said Friday as a third week of strikes drew to a close.

    That includes replenishing munitions, which have been expended by the thousands since strikes began Feb. 28. 

    “Obviously, it takes money to kill bad guys,” Hegseth said during a Pentagon press briefing. “So we're going back to Congress and our folks there to ensure that we're properly funded for what's been done, for what we may have to do in the future, ensure that our ammunition is—everything's refilled.”

    Among the weapons that need replacing are 5,000-pound penetrator bombs dropped on underground facilities that housed Iranian cruise missiles, Gen. Dan Caine, chairman of the Joint Chiefs of Staff, said during his remarks. 

    There have also been “precision strikes against more than 90 targets on Kharg Island, which included all of their military-only infrastructure, which included air defenses, naval base, mine storage and deployment facilities,” Caine said.

    Hegseth began his fifth briefing of the war with a different tone: for the first time, he led by mentioning U.S. troops killed in action.

    “What I heard through tears, through hugs, through strength and through unbreakable resolve was the same from family after family: they said, ‘Finish this. Honor their sacrifice. Do not waver. Do not stop until the job is done,’ ” he said of Thursday’s dignified transfer of six airmen killed in the March 2 crash of their KC-135 Stratotanker.

    Hegseth then turned his attention sharply to the media, calling them “dishonest and anti-Trump” and accusing “our press” of reporting on an AI-generated video falsely purporting to show the aftermath of a drone strike on the aircraft carrier Abraham Lincoln. But the White House had already acknowledged Monday that credulous coverage of that video came from foreign outlets.

    They “will stop at nothing, we know this at this point, to downplay progress, amplify every cost and call into question every step,” Hegseth said of the American media. 

    “To the patriotic members of the press, nobody can deliver perfection in wartime. This building knows that more than anyone,” he said. “But report the reality: we're winning decisively and on our terms.”

    He then turned his ire to U.S. allies who have refused to send forces to the Persian Gulf—for example, European leaders who have noted that Trump launched the war without consulting them, after threatening to seize European-controlled territory, and without apparent consideration for the ways it might bolster Russia’s war on European soil.

    “The world, the Middle East, our ungrateful allies in Europe, even segments of our own press, should be saying one thing to President Trump: ‘Thank you,’” the defense secretary said.

    Asked how the U.S. intended to fully denuclearize Iran without a protracted conflict, Hegseth dodged, reiterating talking points about a “conventional umbrella” of non-nuclear weapons meant to deter Iran’s development of a nuclear weapon, as well as delays in negotiating a new nuclear deal.

    When asked whether Israel’s targeting of Iranian oil facilities was counter to U.S. objectives to focus on military targets, the secretary dodged again. 

    “We have allies pursuing objectives as well, and the truth speaks for itself,” he said. “We can hold anything at issue, anything—the United States military controls the fate of that country. Iran has the ability to make the right choices.”

  • ThreatsDay Bulletin is back on The Hacker News, and this week feels off in a familiar way. Nothing loud, nothing breaking everything at once. Just a lot of small things that shouldn’t work anymore but still do. Some of it looks simple, almost sloppy, until you see how well it lands. Other bits feel a little too practical, like they’re already closer to real-world use than anyone

  • Artificial intelligence is a major subtheme of the U.S. intelligence community’s annual report on threats—one increasingly described in strategic, not just technical, terms.

    In its 2026 Worldwide Threat Assessment, released on Wednesday, the Office of the Director of National Intelligence calls AI a “defining technology for the 21st century,” notes that it is being used in combat, and identifies China as “the most capable competitor” to the United States. The assessment, which accompanied intelligence leaders’ testimony to lawmakers, offers a rare window into how they interpret the global threat landscape.

    The new version of the annual report treats AI far more prominently than the 2024 and 2025 editions did — but in a role that resists easy categorization. Unlike enduring threats from China, Russia, Iran, North Korea, and terrorist groups, AI is treated less as a discrete actor or capability and more as a cross-cutting force shaping each of them.

    The 2024 report, for instance, describes AI as “moving into its industrial age,” noting its potential for economic benefit and disruption, but also the hypothetical development of new “chemical weapons” and materials that could make China’s or Russia’s military more competitive. It also notes that authoritarian regimes might use AI to generate fake content and as a tool for mass surveillance and coercion of their own populations.

    “During the next several years, governments are likely to exploit new and more intrusive technologies—including generative AI—for transnational repression,” it says. 

    That trend is well underway. AI-created misinformation and disinformation have proliferated across global social media, often supported by China, Russia, and other authoritarian regimes, and often at the expense of the U.S. government, military, or other institutions.

    The 2025 report took note of Russian deepfakes but didn’t describe their intent or consequences. The authors were more concerned about Moscow’s pioneering use of AI on the battlefield, particularly in anti-drone efforts. They also highlighted China’s “multifaceted, national-level strategy” to displace the United States as the “most influential AI power by 2030.”

    Over the past year, AI has seized a growing share of public attention, private investment, and White House and Defense Department focus. While the Pentagon has used it for intelligence analysis since 2017, the new threat report notes that AI “has been used in recent conflicts to influence targeting and streamline decision-making, marking a significant shift in the nature of modern warfare.”

    It reiterates its predecessors’ emphasis on the importance of U.S. dominance in AI technology while also noting that “other global powers' robust progress in AI is challenging U.S. economic competitiveness and national security advantages.” In particular, it says, “China is driving AI adoption at scale—both domestically and internationally—by using its sizable talent pool, extensive datasets, government funding, and burgeoning global partnerships.”

    There is also a special warning about the use of autonomy in warfare: autonomous AI systems carry risks that require careful human engineering to mitigate before they are broadly deployed.

    At the Wednesday hearing before the Senate Intelligence Committee, Director of National Intelligence Tulsi Gabbard said that a China-run data-extortion operation last August foretold the future: the perpetrators used “an AI tool” to extort “international government, healthcare, public health, emergency services sectors, and religious institutions.”

    What’s missing

    Missing from today’s hearing and the new report is any meaningful mention of AI’s role in election interference, disinformation, and the advancement of autocracy.

    That’s a big change from 2024, when those uses of AI drew much comment at the hearing connected with the annual threat assessment. Brett Michael Holmgren, then-Assistant Secretary of State for intelligence and research, said that “tools like generative AI will essentially lower the barrier for actors, state and non-state, with fewer resources to engage in potential election interference.” CIA Director William Burns said that threat actors in the Arabian Peninsula had “used AI to generate videos aimed at inspiring lone-wolf attacks as a result of the Gaza conflict as well.” And Avril Haines, then the Director of National Intelligence, said, “Russia is deploying AI tools in the context of their influence efforts in Ukraine.”

    Over the past two years, the Republican party and the Trump administration have dismantled efforts to prevent the spread of misinformation: pressing social-media companies to end moderation efforts, forcing universities to cease monitoring programs, and shuttering a key office at the Department of State.

    But allied governments continue to mark the threat. Kaja Kallas, the European Union’s High Representative for Foreign Affairs and Security Policy and Vice-President of the European Commission, speaking on Tuesday at a conference in Belgium, noted: “AI has taken cognitive warfare to the next level, in the movie business and many other sectors, including our democratic space.”

  • Austin, United States, March 19th, 2026, CyberNewswire Cybersecurity has entered a new phase, one defined less by reactive controls and more by continuous, intelligence-driven operations. As attack surfaces expand and adversaries increasingly leverage AI, the modern CISO is tasked with orchestrating resilience at scale. Amid this shift, CISO Whisperer has released its list of “Cybersecurity […]

    The post CISO Whisperer Names 11 Vendors Leading the Shift from Tools to Outcomes at RSA Conference 2026 appeared first on GBHackers Security | #1 Globally Trusted Cyber Security News Platform.
