
Claude Code Limit Controversy: Anthropic's New Policy Sparks User Backlash, Bursting the 'Unlimited' AI Subscription Myth?

July 30, 2025

AI company Anthropic recently implemented weekly usage limits on its programming tool, Claude Code, sparking strong backlash from Pro and Max users. This article delves into the details of the new policy, the core reasons for user dissatisfaction, and the implications of this event for the sustainable development of AI services.


Recently, one of the hottest topics in AI circles has been the "shackles" that Anthropic quietly placed on its powerful AI programming assistant, Claude Code. The new policy for paid plans, a weekly usage limit, has stirred up the developer community and led many to rethink: are the AI services we subscribe to truly all-you-can-eat?

This change primarily affects Pro users, who pay $20 per month, and the higher-tier Max users, whose monthly fees range from $100 to $200. Anthropic announced that the new restrictions will officially take effect on August 28, 2025. On the surface, this seems to be a response to the surge in service demand and resource allocation issues, but the way it was handled, especially the lack of transparent communication, has ignited user anger.

The New Limits Arrive: Who is the Biggest Victim?

Let’s first take a look at how “strict” this new limit really is.

According to Anthropic’s announcement, this adjustment is expected to affect less than 5% of “high-usage” users. But what are the specific numbers?

  • Pro Plan Users ($20/month): Can use the Claude Sonnet model for about 40 to 80 hours per week.
  • Max Plan Users ($100/month): Can use the Sonnet model for about 140 to 280 hours per week, or the more powerful Opus model for about 15 to 35 hours.
  • Max Plan Users ($200/month): Can use the Sonnet model for about 240 to 480 hours per week, or the Opus model for about 20 to 40 hours.

It is worth noting that these hours will fluctuate based on factors such as the size of the user’s uploaded codebase. Moreover, Anthropic has kindly provided Max users with a “value-added” option, allowing them to purchase additional usage at the standard API rate.

Sounds acceptable, right? But the devil is in the details. Many users quickly did the math: upgrading from the $100 to the $200 Max plan costs an extra $100 per month, yet buys a mere 5 additional weekly hours on the most powerful Opus model. Frankly, that cost-performance ratio is baffling, and it is no wonder users are questioning the pricing strategy.
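To make that concrete, here is the back-of-the-envelope arithmetic, using nothing but the ranges from Anthropic's announcement (an illustration only; actual hours fluctuate with factors like codebase size):

```python
# Marginal value of upgrading from the $100 to the $200 Max plan, using
# the weekly Opus-hour ranges from Anthropic's announcement.
opus_hours_per_week = {100: (15, 35), 200: (20, 40)}  # price -> (low, high)

extra_cost = 200 - 100  # the upgrade costs an extra $100/month
low_100, high_100 = opus_hours_per_week[100]
low_200, high_200 = opus_hours_per_week[200]

extra_hours_low = low_200 - low_100     # 5 extra h/week at the low end
extra_hours_high = high_200 - high_100  # 5 extra h/week at the high end

# $100/month buys 5 extra weekly Opus hours: $20/month per weekly hour.
print(f"${extra_cost / extra_hours_low:.0f} per additional weekly Opus hour")
```

Put differently, doubling the subscription fee buys roughly a third more Opus headroom at the low end of the range (15 to 20 hours) and barely a seventh more at the high end (35 to 40 hours).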

The Problem Isn’t the Limit, but the “Unclear” Communication

To be honest, if it were merely a matter of setting a limit, people might have understood. What really triggered the backlash was Anthropic's "act first, explain later," almost ambush-like communication.

As early as mid-July, before the official announcement, many Max plan users had already hit an invisible wall without warning. They suddenly received notifications that their “usage limit had been reached” in the middle of their work, but strangely, many said their usage was nowhere near the “900 messages per 5 hours” limit that the company had vaguely claimed.

This opaque, unannounced adjustment left users feeling kept in the dark. A developer on social media offered an apt analogy: current AI subscription services are like a gas station with an opaque fuel tank. You pay the money, but you don't know how much "fuel" (that is, AI inference) you've actually received, let alone how far your car can go. Anthropic's vagueness about how the limits are calculated has only deepened the distrust.

This is the core of users' dissatisfaction with Anthropic's new policy:

It is no longer just a matter of usage limits but a collapse of trust. Users feel deceived; they can neither predict the stability of the service nor plan their workflows around it. When a company can change its terms of service at any time without notice, user confidence naturally plummets.

Why Did Anthropic “Resort to This”? Official Explanation and Future Promises

Facing a torrent of questions, Anthropic came forward to explain. The company said that Claude Code has been far more popular than expected since launch and that demand has surged. Some "hardcore" users even run the tool 24/7, putting enormous pressure on system resources.

The new limits, Anthropic claims, are meant to optimize resource allocation so that the vast majority of users enjoy a stable, smooth experience, and to curb abuse by a small minority, such as account sharing and reselling access.

Anthropic also promised to explore other solutions for professional users who genuinely need long-running workloads. In the short term, though, it frames the limits as a necessary evil to preserve service quality.
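For readers wondering what a "weekly usage limit" even means mechanically, here is a minimal, purely hypothetical sketch of a rolling 7-day cap. The class name, the session-based accounting, and the wall-clock metering are all assumptions for illustration; Anthropic has not disclosed how its limits are computed:

```python
from collections import deque
import time

class WeeklyUsageCap:
    """A rolling 7-day usage cap. Hypothetical: Anthropic has not
    published how its weekly limits are actually computed."""

    WINDOW = 7 * 24 * 3600  # one week, in seconds

    def __init__(self, cap_hours: float):
        self.cap_seconds = cap_hours * 3600
        self.sessions = deque()  # (start_timestamp, duration_seconds)

    def _prune(self, now: float) -> None:
        # Forget sessions that have aged out of the rolling window.
        while self.sessions and self.sessions[0][0] < now - self.WINDOW:
            self.sessions.popleft()

    def record(self, duration_seconds: float) -> None:
        now = time.time()
        self._prune(now)
        self.sessions.append((now, duration_seconds))

    def remaining_hours(self) -> float:
        self._prune(time.time())
        used = sum(duration for _, duration in self.sessions)
        return max(0.0, (self.cap_seconds - used) / 3600)

# Example: a Pro plan at the low end of its stated 40-80 h/week range.
cap = WeeklyUsageCap(cap_hours=40)
cap.record(2.5 * 3600)                        # one 2.5-hour coding session
print(f"{cap.remaining_hours():.1f} h left")  # -> 37.5 h left
```

Whether the production system meters wall-clock hours, messages, or tokens is precisely the detail users say is missing.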

The Promise of Unlimited vs. The Reality of Limited Computing Power: A Test for the AI Subscription Model

The Claude Code controversy has actually sounded a wake-up call for the entire AI industry. It has sparked a deeper discussion: where does the sustainability of AI subscription services lie?

Some argue that any membership plan claiming "unlimited use" is essentially a beautiful lie that cannot be sustained in a field as compute-hungry as AI. Computing resources are finite and expensive, while user demand can grow without bound.

Anthropic’s case may be a microcosm of this. It tells all AI companies that finding the delicate balance between rapid user growth and limited computing resources will be the key to future survival and development. More importantly, how to maintain trust and open communication with users throughout this process is of paramount importance.

After all, no matter how powerful the technology, once it loses the trust of its users, it is just a castle in the air.


Frequently Asked Questions (FAQ)

Q1: What are the new usage limits for Claude Code?

A1: The new limits mainly target the Pro and Max paid plans. Pro users get about 40-80 hours of the Sonnet model per week; $100 Max users get about 140-280 Sonnet hours or 15-35 Opus hours; $200 Max users get about 240-480 Sonnet hours or 20-40 Opus hours. Actual hours vary with factors such as codebase size.

Q2: Why are users so dissatisfied with Anthropic’s new policy?

A2: The main grievance is not the limit itself but Anthropic's lack of transparent communication. Many users were throttled without warning before any official announcement and had no way to know how the limits were calculated, which seriously damaged their trust in both the service's stability and the company itself.

Q3: How does Anthropic officially explain this adjustment?

A3: Anthropic stated that demand surged after launch and that extremely heavy use by some users strained system resources. The limits are meant to optimize resource allocation and to combat abuse such as account sharing, ensuring service quality for the majority of users.

Q4: What is the impact of this event on the AI industry?

A4: This event has sparked widespread discussion in the industry about the sustainability of AI subscription services. It highlights the contradiction between the promise of “unlimited use” and expensive and limited computing resources, reminding all AI companies that while pursuing user growth, they must establish more transparent and sustainable business models and user communication strategies.
