Cursor Launches a New AI Agent Experience to Take On Claude Code and Codex

Cursor announced Thursday the launch of Cursor 3, a new product interface that allows users to spin up AI coding agents to complete tasks on their behalf. The product, which was developed under the code name Glass, is Cursor’s response to agentic coding tools like Anthropic’s Claude Code and OpenAI’s Codex, which have taken off … Read more

Anthropic Says That Claude Contains Its Own Kind of Emotions

Claude has been through a lot lately—a public fallout with the Pentagon, leaked source code—so it makes sense that it would be feeling a little blue. Except, it’s an AI model, so it can’t feel. Right? Well, sort of. A new study from Anthropic suggests models have digital representations of human emotions like happiness, sadness, … Read more

A New AI Documentary Puts CEOs in the Hot Seat—but Goes Too Easy on Them

It’s not easy to get an interview with Sam Altman—just ask Adam Bhala Lough, the filmmaker behind the recent documentary Deepfaking Sam Altman. Lough originally planned a feature exploring the potential and perils of AI that would center on a conversation with the OpenAI CEO. But, after having his inquiries ignored for months, he opted … Read more

Anthropic Supply-Chain-Risk Designation Halted by Judge

Anthropic won a preliminary injunction barring the US Department of Defense from labeling it a supply-chain risk, potentially clearing the way for customers to resume working with the company. The ruling on Thursday by Rita Lin, a federal district judge in San Francisco, is a symbolic setback for the Pentagon and a significant boost for … Read more

OpenClaw Agents Can Be Guilt-Tripped Into Self-Sabotage

Last month, researchers at Northeastern University invited a bunch of OpenClaw agents to join their lab. The result? Complete chaos. The viral AI assistant has been widely heralded as a transformative technology—as well as a potential security risk. Experts note that tools like OpenClaw, which work by giving AI models liberal access to a computer, … Read more

Pentagon’s ‘Attempt to Cripple’ Anthropic Is Troublesome, Judge Says

The US Department of Defense appears to be illegally punishing Anthropic for trying to restrict the use of its AI tools by the military, US district judge Rita Lin said during a court hearing on Tuesday. “It looks like an attempt to cripple Anthropic,” Lin said of the Pentagon designating the company a supply-chain risk. … Read more

Chris Hayes Has Some Advice for Keeping Up With the News

Chris Hayes makes a living from attention: What deserves some, what doesn’t, and how to make sure the public gives their own limited span of it to the right things. That sounds simple enough. But as I found during my conversation with Hayes, which kicks off season two of The Big Interview podcast, it’s increasingly … Read more

Anthropic Denies It Could Sabotage AI Tools During War

Anthropic cannot manipulate its generative AI model Claude once the US military has it running, an executive wrote in a court filing on Friday. The statement was made in response to accusations from the Trump administration about the company potentially tampering with its AI tools during war. “Anthropic has never had the ability to cause … Read more

Justice Department Says Anthropic Can’t Be Trusted With Warfighting Systems

The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company’s lawsuit against the government will fail. “The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic … Read more

Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans

When the user asks “What enemy military unit is in the region?” the AIP Assistant guesses that it’s “likely an armor attack battalion based on the pattern of the equipment.” This prompts the analyst to request an MQ-9 Reaper drone to survey the scene. They then ask the AIP Assistant to “generate three courses of … Read more