

Anthropic Unveils Major Claude Upgrades, Intensifying Enterprise AI Race

Anthropic has rolled out significant upgrades to its Claude AI platform, introducing both a dramatically expanded context window and on-demand chat recall. The moves position the company to compete more directly with OpenAI and Google for dominance in the fast-growing enterprise AI market.

The most notable update is to Claude Sonnet 4, which now supports a massive 1 million tokens of context through the Anthropic API. This represents a five-fold increase from the previous 200,000-token limit. The expansion allows users to process entire codebases exceeding 75,000 lines of code or analyze dozens of research papers simultaneously while preserving a deep understanding of their relationships. The enhanced capability is currently available in public beta for customers with Tier 4 and custom rate limits, with a wider rollout expected in the coming weeks.
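To put those figures in perspective, a quick back-of-envelope check shows why 75,000 lines of code fits within the new limit but not the old one. The tokens-per-line figure below is an illustrative assumption, not a number from Anthropic:

```python
# Rough sizing check: does a 75,000-line codebase fit in the new context window?
# TOKENS_PER_LINE is an assumed average for source code, for illustration only.
TOKENS_PER_LINE = 13
NEW_WINDOW = 1_000_000   # expanded Claude Sonnet 4 limit via the API
OLD_WINDOW = 200_000     # previous limit

lines_of_code = 75_000
estimated_tokens = lines_of_code * TOKENS_PER_LINE

print(estimated_tokens)                 # 975000
print(estimated_tokens <= NEW_WINDOW)   # True: fits in the 1M-token window
print(estimated_tokens <= OLD_WINDOW)   # False: far exceeds the old limit
```

Under this rough estimate, such a codebase would have needed to be split into at least five chunks under the previous limit, losing cross-file context in the process.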

On-Demand Memory Feature Prioritizes Privacy

Simultaneously, Anthropic introduced a memory feature that enables Claude to search and reference past conversations. Unlike ChatGPT’s persistent memory, which automatically builds user profiles, Claude’s approach requires explicit user requests to access previous chats. This design choice prioritizes privacy while maintaining functionality, a key differentiator for regulated industries.

The memory feature is currently limited to Claude Max, Team, and Enterprise subscribers across web, desktop, and mobile platforms. Users can enable the functionality through their profile settings, allowing Claude to maintain separate contexts for different projects and workspaces.

Enterprise-Focused Pricing and Competitive Positioning

The expanded context window comes with premium pricing for larger prompts. According to Anthropic’s documentation, prompts exceeding 200,000 tokens incur double the standard input rate at $6 per million tokens, up from $3. Output tokens for these large prompts are priced at $22.50 per million, versus the standard $15. The company noted that cost-saving features like prompt caching and batch processing can reduce expenses by up to 50% for qualified use cases.
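The tiering above can be sketched as a simple cost estimator. The per-million-token rates come from the figures in the article; the threshold behavior (the entire request billed at the premium rate once the prompt exceeds 200,000 tokens) is an assumption about how the tiers apply, not a confirmed billing rule:

```python
# Sketch of the tiered pricing described above (USD per million tokens).
# Assumption: a request whose prompt exceeds 200K tokens is billed entirely
# at the premium rates; consult Anthropic's pricing docs for exact rules.
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one Claude Sonnet 4 API request."""
    if input_tokens > 200_000:
        input_rate, output_rate = 6.00, 22.50   # long-context premium rates
    else:
        input_rate, output_rate = 3.00, 15.00   # standard rates
    return (input_tokens / 1e6) * input_rate + (output_tokens / 1e6) * output_rate

# Example: a 400K-token prompt with a 20K-token response.
print(round(estimate_cost(400_000, 20_000), 2))  # 2.85
```

Note that prompt caching or batch processing, which the company says can cut costs by up to 50% for qualified use cases, is not modeled here.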

These upgrades arrive amid an intensifying AI race. Early adopters of the million-token context window, such as coding platform Bolt.new and iGent AI, have reported improved accuracy and more autonomous workflows. Brad Abrams, product lead for the Claude platform, highlighted that the expanded context window will bring “a lot of benefit” to AI coding platforms and agentic tasks.

Anthropic’s privacy-first memory feature is a strategic move to attract enterprises concerned about data exposure and compliance with frameworks like GDPR and HIPAA. The company, which is reportedly seeking funding at a valuation of up to $170 billion, is clearly reinforcing its enterprise-first strategy with these latest platform enhancements.

