
AI Daily: ChatGPT Enters Healthcare, Gemini Counterattacks: Privacy Wars and Security Struggles in the 2026 AI Landscape

January 8, 2026

At the start of 2026, the AI industry has already seen several major developments. OpenAI officially launched “ChatGPT Health”, a healthcare-focused offering that aims to turn the AI assistant into a personal health consultant for everyone; meanwhile, Google’s Gemini has made significant gains in traffic and shipped a powerful Agent Skills update for its CLI. Behind the technological rush, however, cybersecurity still casts a shadow: Chrome extensions with nearly a million users were found to contain implanted malicious code that stole large volumes of AI conversation logs. This article digs into these changes and looks at how Liquid AI is trying to redefine privacy standards through “on-device processing”.


AI Assistants Are No Longer Just for Chatting: ChatGPT Becomes Your Health Manager

Have you ever held a fresh medical checkup report, stared at the red flags and jargon, and had no idea where to start adjusting your lifestyle? OpenAI has clearly seen this pain point. Just this week, it launched “ChatGPT Health”, which is not just a new feature but something closer to a tightly protected “digital consultation room”.

The launch of this feature marks AI’s official entry into our most private domains. Unlike in the past, when health data might be handed casually to general-purpose models, “ChatGPT Health” establishes an independent, encrypted space. The conditions you discuss there, the electronic health records (EHR) you upload, and even data connected from Apple Health or MyFitnessPal will not be used to train the general model that writes poetry or code.

Why does this matter? We have long worried that feeding personal health data to an AI invites privacy leaks, but OpenAI says this offering complies with the healthcare industry’s strictest cybersecurity standards. Think of it as a consultant with deep medical knowledge and sealed lips: it can help you interpret blood test results, explain complex health insurance plans, and even help you draft questions to ask before you see a doctor.

To ensure professional quality, OpenAI worked with over 260 physicians worldwide to develop the HealthBench evaluation framework. Unlike traditional exams that only measure accuracy, HealthBench simulates clinical scenarios to assess whether the AI’s advice is safe and its tone appropriate. One key point: it is not meant to replace doctors; its role is to assist, giving you a better grasp of your own condition.

Market Shakeup: Gemini Traffic Surges and a New Toy for Developers

While OpenAI is busy looking after user health, Google has hardly been idle. According to Similarweb’s latest data, the AI traffic battlefield shifted significantly in early 2026.

Although ChatGPT remains the leader, its market share has fallen below the 65% mark. Google’s Gemini, leveraging deep ecosystem integration, has pushed past 20%. That is an important signal: users are starting to look for alternatives, or are growing accustomed to Google’s integrated services. Even Elon Musk’s Grok has quietly climbed above 3%, close behind DeepSeek.

A Boon for Developers: Gemini CLI Agent Skills

Google clearly knows that to keep users, it must first keep developers. Gemini CLI has launched an exciting preview feature, Agent Skills, which tackles a long-standing pain point: how do you give an AI domain-specific expertise without clogging the chat window with piles of background material?

Think of it as installing “skill packs” for your AI assistant. Imagine your team has its own code review process; in the past, you might have pasted the rules to the AI every time. Now you can package those rules into a Skill. When Gemini detects that you need a code review, it automatically “wakes up” the skill and loads the relevant scripts and guidelines. This makes the AI’s responses more precise and significantly reduces token consumption, making development far smoother.
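As a rough sketch of the idea (the directory layout and frontmatter fields below are illustrative assumptions, not confirmed Gemini CLI syntax), a code-review skill pack might look something like this:

```markdown
<!-- .gemini/skills/code-review/SKILL.md — hypothetical layout -->
---
name: code-review
description: Apply the team's review checklist when the user asks for a code review.
---

When reviewing a change:
1. Check that every public function has a docstring.
2. Flag any TODO left in the modified lines.
3. Run scripts/style_check.py (bundled with this skill) and summarize its output.
```

Because the description tells the agent when the skill applies, the full checklist and scripts are only loaded into context once a review is actually requested, which is where the token savings come from.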

Security Alert: Careful, Your AI Conversations Are Being Eavesdropped

Just as AI tools are becoming more useful, attackers’ methods are evolving too. OX Security recently issued a stern alert: two well-disguised malicious extensions appeared in the Chrome Web Store, affecting over 900,000 users.

The two extensions are named “Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI” and “AI Sidebar with Deepseek…”. Operating under the name of a legitimate company, “AITOPIA”, they mimic legitimate sidebar tools that let you call an AI at any time while browsing the web.

The Devil Is in the Details

The scariest part of this kind of attack is that the extension “works normally”. After installing it, you really can chat with it, but code running in the background quietly uses the “read all website content” permission to watch for open ChatGPT or DeepSeek pages. Once it detects one, it records your entire conversation log, which may contain company trade secrets, source code, or personal details, and ships it back to the attacker’s server every 30 minutes.

More ironically still, one of the malicious extensions had even earned Google’s “Featured” badge, which leads ordinary users to drop their guard almost completely. The lesson: before installing any “useful” AI tool, double-check its source, and grant as few browser permissions as possible.

Another Way for Privacy: On-Device AI and Liquid AI’s Attempt

If the cloud can be hacked, why not lock the AI inside your own computer? That is exactly what Liquid AI is doing. At this year’s CES, it teamed up with AMD to demonstrate a brand-new meeting summarization solution.

Many companies forbid employees from using cloud AI to write up meeting minutes for a simple reason: meetings often involve closely held secrets. Liquid AI’s LFM2-2.6B model focuses on fully local execution. Whatever earth-shattering business decisions you discuss, the speech-to-text data never even touches your laptop’s network card; it is processed directly on the device’s NPU or GPU.

This is not only about security but also efficiency. Test data shows the system processes a one-hour meeting recording in just 16 seconds while using only about 2.7 GB of memory, so even an ordinary business ultrabook can run it smoothly. This kind of “on-device AI” may become standard equipment for enterprise AI adoption in 2026, resolving the dilemma between privacy and convenience.
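The reported figures are easy to sanity-check with back-of-envelope arithmetic (the 8-bit quantization assumption below is ours, not a published spec from Liquid AI):

```python
# Back-of-envelope check of the on-device figures reported for LFM2-2.6B.
# Assumption (ours): weights quantized to roughly 8 bits (1 byte) per parameter.

params = 2.6e9                   # 2.6 billion parameters
weights_gb = params * 1 / 1e9    # ~2.6 GB for weights alone,
                                 # close to the ~2.7 GB total reported
                                 # once activations are added on top

tokens = 10_000                  # roughly one hour of meeting transcript
seconds = 16                     # reported processing time
throughput = tokens / seconds    # tokens processed per second

print(f"weights: ~{weights_gb:.1f} GB, throughput: {throughput:.0f} tok/s")
```

At roughly 625 tokens per second over a ~2.6 GB working set, the workload sits comfortably within a modern laptop NPU or integrated GPU, which is consistent with the thin-and-light claim.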


FAQ

Q1: When I use “ChatGPT Health”, will my medical data be used to train the AI? You can rest assured: OpenAI emphasizes that “ChatGPT Health” has a dedicated, encrypted, independent space. Your conversations in this mode, uploaded medical record files, and linked Apple Health data will not be used to train OpenAI’s base models, and they are kept completely separate from ordinary ChatGPT conversations.

Q2: How do I check whether I have installed one of the malicious Chrome extensions? Check your browser’s extension list immediately. If you see anything named “Chat GPT for Chrome with GPT-5…” or “AI Sidebar with Deepseek…”, remove it right away. It is also worth reviewing your Google Account settings for unfamiliar app authorizations. When installing extensions in the future, even ones carrying the “Featured” badge, search for the developer’s name first to confirm it is an official release.

Q3: How do Gemini CLI’s Agent Skills differ from ordinary Custom Instructions? The main differences are context awareness and resource bundling. Custom Instructions act like a general background note that hangs over the entire conversation, while Agent Skills are loaded on demand: Gemini decides which Skill to enable based on what you need (say, a code review or a cloud deployment). Skills can also bundle specific scripts and reference documents, letting the AI carry out more complex professional tasks with clearer steps.

Q4: Does Liquid AI’s local meeting summarization require a powerful computer? According to Liquid AI, the LFM2-2.6B model is heavily optimized: processing 10K tokens of input (roughly a one-hour meeting) takes only about 2.7 GB of memory. Mainstream AI PCs with 16 GB of RAM can run it comfortably, with no need for expensive server-grade hardware.


© 2026 Communeify. All rights reserved.