Anthropic has updated its usage policy for Claude AI, introducing stricter rules on weapons development while easing restrictions on political content. The new guidelines take effect on September 15, 2025, reflecting growing concerns about AI safety as models become more powerful and integrated into real-world systems.
Stronger Restrictions on Weapons & Cyberattacks
In its updated policy, Anthropic explicitly bans the use of Claude for:
- Biological, chemical, radiological, and nuclear (CBRN) weapons
- High-yield explosives
- Any tools that could compromise computer or network systems
The company highlighted risks tied to Claude’s advanced features like Computer Use (which lets the AI directly control a computer) and Claude Code (which integrates AI into developer terminals).
To address these risks, a new section titled “Do Not Compromise Computer or Network Systems” has been added, prohibiting users from:
- Identifying or exploiting security vulnerabilities
- Creating or distributing malware
- Developing denial-of-service attack tools
Looser Rules on Political Content
While tightening its security and weapons rules, Anthropic has relaxed its stance on political content.
Previously, Claude was barred from generating any campaign or lobbying material. The revised policy restricts only uses that:
- Disrupt democratic processes
- Deceive voters
- Involve targeted campaign messaging
This change opens the door to legitimate political uses, such as policy research, civic education, and political commentary, and responds to user feedback that the old rules were too limiting.
Safety First, With Flexibility for Businesses
These updates build on Anthropic’s AI Safety Level 3 protections, first introduced in May with Claude Opus 4. Those safeguards were designed to resist jailbreak attempts and block misuse in weapons development.
Anthropic also clarified that its strictest rules apply mainly to consumer-facing apps, giving enterprises more flexibility for internal business use while still ensuring strong safeguards for individual users.
Why This Matters
The new policy highlights a dual approach to AI governance:
- Tighter security to prevent misuse in weapons and cyber threats
- Greater freedom in political discussions to support legitimate civic and educational use
With AI systems gaining advanced capabilities, companies like Anthropic are under pressure to balance innovation with responsibility.
Recently, Claude introduced a new learning mode and gained the ability to automatically end abusive or harmful conversations, reinforcing Anthropic’s focus on safe and responsible AI interactions.