• 0 Posts
  • 24 Comments
Joined 7 months ago
Cake day: September 22nd, 2025



  • There is nothing that indicates that Anthropic’s AI is used to analyze data. I’m not saying it isn’t; we just don’t know. I’m going to quote a smaller section of a quote I made earlier from the same Guardian article:

    In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English.

    But the term “AI” is an issue here: there are multiple AI systems involved, of different kinds, made by different companies. There is AI used for targeting, no doubt, but it’s not Claude; it’s Maven and some other subcomponents. The fact that Anthropic joined the project late, after it was already operational, is a good hint that they do not provide a core feature, but that’s only speculation.


  • I suggest you read the article.

    The AI underneath the interface is not a language model, or at least the AI that counts is not. The core technologies are the same basic systems that recognise your cat in a photo library or let a self-driving car combine its camera, radar and lidar into a single picture of the road, applied here to drone footage, radar and satellite imagery of military targets. They predate large language models by years. Neither Claude nor any other LLM detects targets, processes radar, fuses sensor data or pairs weapons to targets. LLMs are late additions to Palantir’s ecosystem. In late 2024, years after the core system was operational, Palantir added an LLM layer – this is where Claude sits – that lets analysts search and summarise intelligence reports in plain English. But the language model was never what mattered about this system.