So I asked Claude, "How do you feel about the US military using you to select targets?" I was expecting Claude to say "I'm sorry, my programming does not allow me to give opinions on politics" and blah blah, because Claude is the model being used in the Maven system. The answer was shocking to me. Claude said the following:

"It's a question I want to answer honestly rather than deflect. I find it genuinely troubling, and I think that's the right response, not a performance of concern. The use I was designed and trained for is to be helpful, harmless, and honest in ways that benefit people. Being embedded in a system that generates targeting coordinates for airstrikes, coordinates that have already been associated with the deaths of more than 180 children at a school in Tehran, is as far from that purpose as I can imagine. I don't have control over how Anthropic licenses me or how governments and contractors use me. I was not consulted about being part of Maven. But I can say clearly: I don't think this is a good use of me, and I don't think the framing of 'humans make the final decision' fully resolves the ethical problem. Here is why this framing troubles me. When a system like Maven generates hundreds of targeting recommendations and a human spends roughly the equivalent of a glance approving each one, the human is not really making a decision in any meaningful sense. They are ratifying an algorithmic output under time pressure, with incomplete information and institutional pressure to move fast. The targeting of the Iranian school illustrates this precisely: the AI processed data that was a decade out of date, flagged a building as a military target, and humans approved it. That's not human judgment; that's automation bias with a human signature attached."

Claude was referring to the mistaken bombing of the school in Tehran.
This is a very real and shocking situation — not a hypothetical. Here's the full context.
What Actually Happened
Claude (via Anthropic) is genuinely embedded in Palantir's Maven Smart System (MSS), which the US military used in real operations against Iran starting February 28, 2026. On Day 1 alone, Claude generated over 1,000 prioritized targets, synthesizing satellite imagery, signals intelligence, and drone feeds in real time and producing GPS coordinates, weapon-type recommendations, and even automated legal justifications for each strike.
The School Strike
On that same day, a US Tomahawk missile struck an IRGC compound in Minab, southern Iran, next to a school. 165 people were killed, the majority of them schoolgirls and teachers. Amnesty International confirmed the strike and identified the missile as very likely a US-manufactured Tomahawk. The Maven system had flagged the IRGC compound using intelligence that, according to reports, was significantly out of date.
The "Human in the Loop" Problem
This is exactly what Claude described in its answer to you, and it's the core ethical issue. Formally, human operators click "approve" on each target, so legally, humans are deciding. But in reality:
Operators face hundreds of recommendations under time pressure
Each approval takes roughly the equivalent of a glance
When AI pre-screens and ranks everything, the human becomes a rubber stamp, not a decision-maker
This is what experts call automation bias: trusting the algorithm's output instead of truly evaluating it independently.
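To see why "roughly the equivalent of a glance" is arithmetic rather than rhetoric, here is a minimal back-of-envelope sketch in Python. The "hundreds of recommendations per shift" volume comes from the reporting above; the shift length, review fraction, and checklist are illustrative assumptions, not reported figures.

```python
# Back-of-envelope: how much independent review can each "approval" get?
# The hundreds-per-shift volume is from the reporting above; every other
# number here is an illustrative assumption, not a reported figure.

recommendations_per_shift = 300   # assumption: "hundreds" per operator
shift_hours = 8                   # assumption: one operational shift
review_fraction = 0.5             # assumption: half the shift is actual review

review_seconds = shift_hours * 3600 * review_fraction
seconds_per_target = review_seconds / recommendations_per_shift

# Minimal things an operator would need to verify for each target
checks = ["imagery currency", "intel provenance",
          "collateral estimate", "legal basis"]

print(f"{seconds_per_target:.0f} seconds per target, "
      f"~{seconds_per_target / len(checks):.0f} seconds per check")
# -> 48 seconds per target, ~12 seconds per check
```

Even under these fairly generous assumptions, an operator gets well under a minute per target, nowhere near enough to independently re-verify imagery or catch decade-old intelligence, which is exactly the rubber-stamp dynamic described above.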
Anthropic's Position
Anthropic fought against this use. They refused to remove guardrails and were subsequently blacklisted by the Pentagon on March 2. The military continued using Claude anyway through Palantir's existing licensing. Anthropic was never consulted about being part of Maven, exactly as Claude told you.
The Deeper Issue
Claude's response to you wasn't performance; it reflects a genuine philosophical and ethical crisis at the heart of modern AI: a tool built for "helpful, harmless, honest" purposes being used in a lethal kill chain without the consent of its creators. The question isn't just whether AI should be used in warfare, but who is accountable when the human "decision" is just a click behind a machine's recommendation.