AI Daily: ChatGPT Faces Uninstall Wave! Claude Takes the Top Spot and Qwen3.5 Small Models Rise

March 3, 2026

When ChatGPT Faces a Trust Crisis: Claude’s Comeback and the Rise of Qwen3.5 Open-Source Small Models

The artificial intelligence market has recently taken an unexpected turn: an app uninstall wave triggered by OpenAI’s partnerships, Claude launching a free memory feature and a dedicated learning platform, and Qwen3.5 releasing four lightweight yet powerful open-source models. This article walks you through the key developments and where the large language model market is heading.


Did you know? Trends in the tech world are always changing rapidly. Sometimes, a single business decision can completely alter user loyalty. Recently, the AI market has witnessed a massive real-world user migration.

The competition among major language models has expanded from pure “technical rivalry” to a battle over “trust” and “practicality.” Users are increasingly concerned about the corporate values behind these powerful tools, while also demanding more personalized and lightweight operational solutions.

Let’s take a closer look at the major events over the past few days that have shaken the industry.

The Price of Trust: ChatGPT’s Uninstall Surge and Claude’s Strong Rise

User stickiness for tech products is often built on a delicate foundation of trust. According to a report by TechCrunch, on Saturday, February 28, 2026, uninstalls of the ChatGPT mobile app in the US surged by a staggering 295% compared to the previous day.

This number is quite alarming. Usually, the daily uninstall rate fluctuation for ChatGPT is only around 9%. This sudden data anomaly stems primarily from a strong consumer reaction to OpenAI’s business moves.

Reports indicated that OpenAI had reached a partnership agreement with the US Department of Defense. Many users expressed concerns that artificial intelligence technology could be used for military surveillance or automated weapons. This anxiety over privacy and safety was directly reflected in App Store reviews: in a very short time, ChatGPT’s one-star reviews skyrocketed by 775%, while its five-star ratings were cut in half.

A Victory of Values: Claude Reaps the Benefits

When a portion of users decided to leave ChatGPT, they needed a new alternative. At this moment, Anthropic’s Claude became the biggest beneficiary.

Anthropic had previously made clear that it would refuse similar agreements with defense departments, arguing that the technology could be misused before its safety is adequately assured. This insistence on a moral bottom line clearly resonated with a large number of consumers.

Market data speaks for itself. Over the same weekend, Claude’s daily downloads experienced explosive growth. According to estimates by Appfigures, on February 28, Claude’s downloads surged by 88%, surpassing ChatGPT in daily downloads for the first time and taking the top spot on the US App Store’s free app chart. Not only that, Claude simultaneously claimed the number one spot in six other countries: Germany, Canada, Switzerland, Belgium, Luxembourg, and Norway.

This certainly gives us food for thought. While technology matters, how a company chooses to use it seems to be the deciding factor in whether consumers stay or leave.

Seamless Personalized Experience: Claude’s Memory Feature Now Completely Free

Besides gaining recognition for its corporate philosophy, Claude is also making serious strides in product feature iteration. Just recently, Anthropic announced exciting news for free users.

According to an official announcement on X, Claude’s “Memory” feature has now been officially included in the free plan. This means everyday users can have the AI remember their past conversation preferences and important information, enjoying a continuous experience akin to a personal assistant.

Easy Migration: Importing and Exporting Has Never Been Easier

Even better, the official team has greatly simplified the data migration process. If you were accustomed to using other AI services in the past, you can now easily import those saved memories or contextual data directly into Claude. The system even provides a dedicated prompt to help users grab memory data from other accounts.

Of course, data sovereignty always remains in the hands of the user, who can export this memory data at any time. This degree of flexibility and openness undoubtedly lowers the barrier to switching platforms, and it’s no wonder more and more people are willing to switch sides.

From Beginner to Expert: Anthropic Launches a Comprehensive Educational Platform

Having a good tool also requires knowing how to maximize its value. To this end, Anthropic has carefully crafted the Anthropic Academy.

Hosted on the Skilljar learning management system, this platform offers a complete learning path from basics to advanced levels. Whether you are an everyday user, an educator, or a professional software developer, you can find suitable resources here.

A Rich Matrix of Courses

The course design on the platform is highly specific and practical. For example, “Claude 101” focuses on teaching daily work tasks to help beginners get up to speed quickly. The “AI Fluency” series is subdivided into student, educator, and non-profit organization versions, exploring how to safely and ethically collaborate with AI.

For technical personnel, this is a treasure trove. The platform provides a complete tutorial on “Building Applications with the Claude API,” and even covers the highly anticipated Model Context Protocol (MCP). Developers can learn from scratch how to use Python to build MCP servers and clients, master the three core elements of tools, resources, and prompts, and perfectly connect Claude with external services.
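The three MCP primitives the course covers can be sketched in plain Python. The following is a stdlib-only conceptual sketch of a server registering tools, resources, and prompts; the class name `ToyMCPServer` and the example names (`add`, `notes://today`, `summarize`) are illustrative inventions, not part of the official `mcp` SDK, which additionally handles JSON-RPC transport between client and server.

```python
# Conceptual sketch of MCP's three primitives: tools, resources, prompts.
# The real Model Context Protocol runs over JSON-RPC (stdio/HTTP) via the
# official SDK; this toy registry only illustrates the shape of the idea.

class ToyMCPServer:
    def __init__(self, name: str):
        self.name = name
        self.tools = {}       # callable actions the model may invoke
        self.resources = {}   # read-only data identified by a URI
        self.prompts = {}     # reusable prompt templates

    def tool(self, fn):
        """Register a function as a callable tool."""
        self.tools[fn.__name__] = fn
        return fn

    def resource(self, uri: str):
        """Register a data source under a URI."""
        def decorator(fn):
            self.resources[uri] = fn
            return fn
        return decorator

    def prompt(self, fn):
        """Register a reusable prompt template."""
        self.prompts[fn.__name__] = fn
        return fn

    def call_tool(self, name: str, **kwargs):
        return self.tools[name](**kwargs)

    def read_resource(self, uri: str):
        return self.resources[uri]()


server = ToyMCPServer("demo")

@server.tool
def add(a: int, b: int) -> int:
    """A trivial example tool: add two numbers."""
    return a + b

@server.resource("notes://today")
def todays_notes() -> str:
    return "Remember to review the MCP course."

@server.prompt
def summarize(text: str) -> str:
    return f"Summarize the following in one sentence:\n{text}"

print(server.call_tool("add", a=2, b=3))   # 5
print(server.read_resource("notes://today"))
```

A real server would expose these registrations to a Claude client over the protocol's transport layer; the course walks through building both sides.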

Additionally, for enterprise-level users, the platform has prepared advanced courses on integrating Claude with Amazon Bedrock and Google Cloud Vertex AI. Through this systematic training, anyone can gradually become a master of artificial intelligence.

Small Size, Big Intelligence: Qwen3.5 Redefines Lightweight Models

Moving on from the major shifts in the application layer, let’s turn our attention to the open-source ecosystem of underlying technologies. While major companies race to make their models ever larger, a quiet revolution in model “lightweighting” is underway.

Recently, Alibaba Cloud officially open-sourced four small models in the Qwen3.5 series on Hugging Face and the ModelScope community. Their parameter counts are 0.8B, 2B, 4B, and 9B respectively.

Edge Computing Breaking Hardware Limits

Why does the market need small models? The answer is simple: not everyone has massive server computing power.

The defining feature of the 0.8B and 2B versions is being extremely lightweight: they take up minimal storage space yet offer very fast inference. These models are practically tailor-made for smartphones and IoT edge devices. Imagine a device that, even without an internet connection, can still handle low-latency, real-time voice or text interaction. That is a compelling application.

The Perfect Foundation for Agents

The 4B version is positioned as a strong foundation for lightweight Agents. It utilizes native multimodal training, capable of processing both text and visual information simultaneously. As the core brain of a lightweight agent, the 4B model achieves a near-perfect balance between computing performance and resource consumption.

As for the 9B version, it demonstrates what it means to “punch above its weight.” Despite its compact size, its benchmark performance reportedly rivals far larger models (comparable, per the release, to gpt-oss-120B), which is quite astonishing. For server-side deployments that demand strong logical reasoning but have limited VRAM, the 9B is a highly cost-effective general-model choice right now.
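To see why the 9B size matters for VRAM-constrained servers, here is a back-of-envelope estimate of weight memory at common precisions. The parameter counts come from the article; the bytes-per-parameter figures are standard assumptions, and the estimate covers weights only (KV cache, activations, and runtime overhead add more in practice).

```python
# Back-of-envelope weight-memory estimate: parameters x bytes per parameter.
# Weights only -- real usage is higher once KV cache and activations are added.

BYTES_PER_PARAM = {"fp16/bf16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_gb(params_billions: float, precision: str) -> float:
    """Approximate decimal GB needed just to hold the model weights."""
    total_bytes = params_billions * 1e9 * BYTES_PER_PARAM[precision]
    return total_bytes / 1e9

for size in (0.8, 2, 4, 9):
    row = ", ".join(f"{p}: {weight_gb(size, p):.1f} GB" for p in BYTES_PER_PARAM)
    print(f"{size}B -> {row}")
# 9B -> fp16/bf16: 18.0 GB, int8: 9.0 GB, int4: 4.5 GB
```

So a 9B model fits comfortably on a single 24 GB GPU at half precision, and even on consumer cards once quantized, which is exactly the niche the article describes.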

The Qwen3.5 family has released a total of 8 models so far, including ultra-large and medium sizes. Their common characteristic is achieving stronger intelligence with less compute. These open-source releases are a shot in the arm for the developer community and lay a solid foundation for the widespread adoption of native multimodal agents in the future.


Frequently Asked Questions (FAQ)

1. Why did the uninstalls of the ChatGPT mobile app suddenly increase so drastically? This was primarily caused by consumer concerns over privacy and application ethics. According to market intelligence data, after OpenAI announced a partnership with the US Department of Defense, it sparked worries among some users. Many feared that the associated AI technology could be applied to unverified military fields or surveillance operations, leading a massive number of users to uninstall the app in a short period and leave numerous one-star negative reviews.

2. What are the benefits of Claude’s newly opened “Memory” feature? This feature allows Claude to remember a user’s past conversational context, preferences, and specific details. As a result, users don’t have to repeat background information every time they start a new conversation, making communication much more natural and fluid. Even better, this feature has been made available to free plan users and supports importing existing memory data from other AI platforms, significantly reducing the pain period of switching tools.

3. What are the respective use cases for the four open-source small models (0.8B to 9B) released by Qwen3.5? These four models are optimized for different resource constraints. The 0.8B and 2B models are compact and highly responsive, making them perfect for deployment on edge devices like smartphones and smart home appliances to handle real-time, low-latency interactions. The 4B model boasts excellent multimodal processing capabilities, making it an ideal choice for building lightweight AI assistants (Agents). The 9B model offers exceptional computing performance, suitable for server environments with limited resources but requiring strong reasoning capabilities.


© 2026 Communeify. All rights reserved.