Amid relentless AI innovation, the tech world welcomed several heavyweight updates today. From creative design to code debugging to breakthroughs in speech synthesis, these tools are quietly changing how we work. Most notable are Adobe integrating its core apps into ChatGPT, and revolutionary coding features launched by Cursor and Google. This is not just a tool upgrade; it is a fresh reimagining of the workflow.
Adobe Photoshop, Express, and Acrobat Officially Join ChatGPT
For creators and office workers who process documents, this is undoubtedly exciting news. Adobe announced that it has officially integrated Photoshop, Adobe Express, and Acrobat into ChatGPT. Users with a ChatGPT account can now invoke these tools directly in the chat window to complete tasks, without switching back and forth between applications.
This integration is built on Adobe’s Agentic AI technology, making operations unprecedentedly intuitive. Type a natural-language request such as “blur the background of this photo” or “adjust the brightness of this image,” and ChatGPT automatically invokes the corresponding Photoshop function. For anyone unfamiliar with complex photo-editing software, this dramatically lowers the barrier to entry. Adobe’s President of Digital Media, David Wadhwani, called it an important step toward making creativity accessible to everyone.
Beyond photo editing, the Adobe Express integration lets users generate invitations and social media images, and even make follow-up edits, directly in the chat. The Acrobat integration simplifies PDF work: extracting text, merging files, or converting formats can all be done through simple conversation. These features are currently available to ChatGPT users worldwide on desktop, web, and iOS.
Cursor Launches Debug Mode: Letting AI Debug Like a Senior Engineer
There was also significant progress in software development today. Cursor, the editor beloved by developers, launched a brand-new Debug Mode that pushes AI coding ability to a new level. Previously, when facing complex bugs, AI could only guess from static code, so fix suggestions were sometimes inaccurate or outright hallucinated.
Cursor’s team studied their own engineers’ debugging process and found that the key lies in runtime information. The new Debug Mode therefore no longer guesses blindly: it first reads your code, proposes multiple hypotheses, then automatically inserts logs into the code to collect runtime data. When you reproduce the bug, the AI pinpoints the root cause based on the real data returned.
This interactive repair process is like having a senior engineer next to you, analyzing variable states, execution paths, and timing. Once the problem is found, the AI generates a targeted fix and asks you to verify it. If the problem is solved, it automatically removes all the debug code it just inserted, returning a clean codebase. This improves fix success rates and spares developers the time otherwise wasted on guessing games.
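The loop described above (hypothesize, instrument, reproduce, fix, clean up) can be sketched in miniature. This is only an illustration of the general technique, not Cursor’s actual implementation; the `average` functions and the empty-input bug are hypothetical:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="[debug] %(message)s")
log = logging.getLogger(__name__)

# Step 1 -- the buggy code. Users report an occasional crash in average().
def average(values):
    return sum(values) / len(values)

# Step 2 -- hypotheses from reading the code statically:
#   H1: values is sometimes empty     -> ZeroDivisionError
#   H2: values contains non-numbers   -> TypeError
# Step 3 -- a temporarily instrumented copy that logs runtime data.
def average_instrumented(values):
    log.debug("H1 check: len(values) = %d", len(values))
    for v in values:
        log.debug("H2 check: item %r is a %s", v, type(v).__name__)
    return sum(values) / len(values)

# Step 4 -- reproduce the bug; the captured data confirms H1.
try:
    average_instrumented([])
except ZeroDivisionError:
    log.debug("reproduced -> root cause: empty input (H1)")

# Step 5 -- a targeted fix based on the confirmed hypothesis. The
# instrumentation is then deleted, leaving only this clean version.
def average_fixed(values):
    if not values:
        return 0.0  # hypothetical policy: empty input yields 0.0
    return sum(values) / len(values)

print(average_fixed([2, 4, 6]))  # 4.0
```

The point is the shape of the loop: logs are evidence-gathering scaffolding that exists only between reproducing the bug and landing the fix.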
Google Jules: A Coding Assistant That Proactively Handles the Chores
Meanwhile, Google is pushing its own AI development tools forward, releasing proactive features for Jules. If Cursor is your debugging partner, Jules is more like a proactive butler. Google introduced “Suggested Tasks” and “Scheduled Tasks,” letting Jules discover and handle problems before the developer even asks.
With these features, Jules continuously scans the codebase and proposes suggestions for TODO comments and potential optimizations; developers only need to review and approve. Through an integration with Render, Jules can even analyze logs automatically when a deployment fails and propose a fix, closing the loop from writing code to deploying it. This proactivity is an important trend in today’s AI agents, aimed at reducing developers’ cognitive load so humans can focus on more creative logic design.
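Jules’ internals are not public, but the idea of scanning a codebase for TODO comments and turning them into reviewable tasks is easy to picture with a minimal sketch. Everything here (the `suggested_tasks` helper, the regex, the file extensions) is a hypothetical illustration, not Google’s code:

```python
import re
from pathlib import Path

# Matches "# TODO ..." or "// TODO ..." comments, capturing the task text.
TODO_RE = re.compile(r"(?:#|//)\s*TODO[:\s]\s*(.*)", re.IGNORECASE)

def suggested_tasks(root, exts=(".py", ".js", ".go")):
    """Walk a codebase and turn TODO comments into a reviewable task list."""
    tasks = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file() or path.suffix not in exts:
            continue
        lines = path.read_text(errors="ignore").splitlines()
        for lineno, line in enumerate(lines, 1):
            m = TODO_RE.search(line)
            if m:
                tasks.append({"file": path.name,
                              "line": lineno,
                              "task": m.group(1).strip()})
    return tasks
```

A real agent would go further, drafting a patch for each task and waiting for the developer’s approval; the scan-then-review shape is the point here.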
Leap in Voice Technology: Google Gemini 2.5 TTS and Zhipu AI GLM-TTS
On the auditory front, today was equally lively. Google announced an update to its Gemini 2.5 Text-to-Speech (TTS) model, focused on improving voice control and expressiveness. The new model makes significant progress in intonation diversity: whether it is tense narration for a suspense novel or the friendly tone of a customer-service bot, it can be rendered precisely. More importantly, it improves performance in multi-speaker scenarios, so simulated interviews and multi-person conversations sound more natural, without abrupt transitions.
Meanwhile, China’s Zhipu AI team open-sourced its latest voice models. GLM-TTS is a high-quality speech synthesis system built on large language models that supports zero-shot voice cloning: just 3 to 10 seconds of sample audio is enough to mimic a speaker’s voice. The model introduces a reinforcement learning framework to optimize emotional expression, addressing the flat delivery of traditional TTS. The team also released GLM-ASR-Nano-2512, a lightweight speech recognition model that excels at mixed Chinese, English, and Cantonese speech and maintains high recognition rates even in noisy environments.
OpenAI Strengthens Cyber Resilience and Google’s Search Ecosystem Layout
As AI models grow more capable, security naturally becomes a focus. OpenAI released a report on strengthening cyber resilience, stressing that as models improve at code generation and analysis, precautions must be taken to keep these capabilities from powering cyber attacks. OpenAI evaluates new models’ risks through its Preparedness Framework and collaborates with external security experts on red teaming, ensuring that models help defenders without becoming tools for attackers. It also launched a security research agent named Aardvark, which can help developers automatically scan for and patch vulnerabilities.
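OpenAI has not published Aardvark’s internals, but the mechanics of “automatically scanning code for vulnerabilities” can be illustrated with a toy static checker. This sketch flags calls to risky Python builtins using the standard `ast` module; a real scanner would model data flow, taint, and far more patterns:

```python
import ast

# Toy rule set: calling these builtins on untrusted input is a classic risk.
RISKY_CALLS = {"eval", "exec"}

def scan_source(source):
    """Return (line, name) pairs for every call to a risky builtin."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

snippet = "user = input()\nresult = eval(user)\n"
print(scan_source(snippet))  # [(2, 'eval')]
```

An agent layered on top of such findings could then draft a patch (for example, replacing `eval` with `ast.literal_eval`) and hand it to a human for review, which is the workflow the article describes.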
Google, for its part, launched new tools in the web ecosystem aimed at balancing AI development with the interests of content creators. The new “Preferred Sources” feature lets users pin trusted media outlets and blogs in search results. Google is also experimenting with AI-generated article summaries and audio briefings in Google News, and has established partnerships with multiple global news agencies to explore monetization models for the AI era.
Other News Worth Watching
- Claude Code Upgrade: Anthropic released an update for its development tool Claude Code CLI, adding Async subagents and instant compact mode, further improving development efficiency.
- Google Labs Pomelli Adds Animation: Google’s experimental project Pomelli now supports animation. Powered by the Veo 3.1 model, users can turn static content into animations that match their brand style; a free trial is currently open in select countries.
- DeepMind FACTS Benchmark: To combat AI hallucinations, Google DeepMind released the FACTS Benchmark Suite, a set of tools for systematically evaluating the factual correctness of large language models, helping to develop more reliable AI models.
Frequently Asked Questions (FAQ)
Q: How do I use Adobe’s features in ChatGPT? To use these features, you need to type commands directly into ChatGPT, such as “use Photoshop to adjust the brightness of this image.” The system will automatically detect and invoke the corresponding Adobe application. Please note that these features are currently being rolled out gradually to global users, and you may need to log in to your Adobe account to use the full functionality.
Q: What is the difference between Cursor’s Debug Mode and general AI chat debugging? General AI chat debugging usually relies only on static analysis of the code snippets you provide, which is prone to incorrect guesses. Cursor’s Debug Mode proactively inserts logs into your code, collects runtime data during actual program execution, and diagnoses problems based on this real data, so the repair accuracy is usually higher.
Q: What are the main improvements in Google Gemini 2.5 TTS? Gemini 2.5 TTS mainly improves the expressiveness and controllability of the voice. It follows instructions about intonation, emotion, and speaking rate more precisely (e.g., “say this in a nervous tone”), and in multi-person conversations it better distinguishes and maintains each character’s voice, making the result sound more like a real human conversation.


