OpenAI’s Developer Day in 2025 was not just about the release of GPT-5; it marked a fundamental strategic shift. This article provides an in-depth analysis of how OpenAI is turning ChatGPT into a new, AI-native operating system through ‘ChatGPT Built-in Apps,’ the AgentKit toolkit, and a major partnership with AMD, and what that shift means for developers, SaaS companies, and the entire tech industry.
A Turning Point of an Era: Not Just an Update, but a Reshaping
On October 6, 2025, the air in San Francisco was filled with excitement and tension. The OpenAI Developer Day (DevDay 2025) held on this day was far from a routine product update. It was a carefully planned declaration, heralding the arrival of a new era: ChatGPT is no longer just a smart chatbot; it is evolving into a fully functional, AI-native operating system.
This might sound a bit abstract, but imagine this: the software of the future will no longer be a collection of isolated apps, but a collaborative operation of countless autonomous AI agents within a unified conversational interface. OpenAI’s goal is to become the underlying architecture of this new world, just like Microsoft’s Windows or Apple’s iOS, defining the way humans and machines interact for the next decade.
The Four Pillars of the New World
This grand transformation is built on four closely interconnected strategic pillars:
- Platformization: Introducing the new “ChatGPT Built-in Apps” and Apps SDK, allowing third-party services to be seamlessly embedded into conversations, turning ChatGPT into a powerful application platform.
- Agentification: Releasing the comprehensive toolkit AgentKit, significantly lowering the barrier to developing autonomous AI agents and turning the concept of “AI employees” into reality.
- Model Superiority: Launching the new generation of GPT-5 model family, a new version of Codex powered by GPT-5, and opening the API for the Sora 2 video generation model, providing the most powerful intelligence engine for the entire ecosystem.
- Infrastructure Dominance: Reaching a historic strategic partnership with chip giant AMD to secure the massive computing power required for future model training and expansion, consolidating its position in the hardware supply chain.
These initiatives are interlinked and all point to a clear goal: to build an irreplaceable ecosystem in the AI era.
Why Build an “Operating System”? OpenAI’s Moat Strategy
You might ask, with Anthropic’s Claude and Google’s Gemini catching up in model performance, why is OpenAI going to such great lengths to build a platform?
The answer is simple: relying solely on the “strongest model” strategy is becoming increasingly fragile. As model capabilities gradually become commoditized, the real competitive advantage is no longer who has the smartest model, but whose ecosystem is the most practical and indispensable.
This is a classic platform war. By transforming ChatGPT into a platform where developers can set up shop and transactions can take place, OpenAI is building a powerful moat based on “network effects.” A large user base (currently over 800 million active users per week) attracts developers, and a rich array of applications creates more value for users, which in turn attracts more users. This is a self-reinforcing positive cycle aimed at becoming an indispensable middle layer in the AI value chain.
The New Application Layer: When Apps “Live” in ChatGPT
The most revolutionary announcement at this conference was undoubtedly the “ChatGPT Built-in Apps.” This feature allows developers to embed complete interactive applications directly into the conversation flow, completely changing the game. The previous GPTs and Plugins models are being replaced by a more native, more deeply integrated ecosystem.
A New Gateway for Developers: The Apps SDK
To achieve this vision, OpenAI has released the Apps SDK, which is currently in preview. This toolkit allows developers to build applications that can connect to external data, trigger real-world actions, and render rich UI directly in the chat interface.
It is worth noting that the technical foundation of this SDK is the Model Context Protocol (MCP), an open technical standard. This means that applications developed with it can theoretically run on any platform that supports MCP. This far-sighted decision aims to encourage wider adoption and build a universal AI interoperability foundation.
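To make the MCP connection concrete, here is a minimal Python sketch of the request handling an MCP-style tool server performs. The JSON-RPC method names `tools/list` and `tools/call` follow the MCP specification, but the `search_listings` tool, its schema, and its canned response are invented for illustration and do not come from any real Apps SDK application.

```python
# Hypothetical catalog of tools an Apps SDK app might expose via MCP.
TOOLS = {
    "search_listings": {
        "description": "Search real-estate listings (illustrative only)",
        "inputSchema": {
            "type": "object",
            "properties": {"city": {"type": "string"},
                           "bedrooms": {"type": "integer"}},
        },
    }
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC 2.0 request the way an MCP server would."""
    method = request["method"]
    if method == "tools/list":
        result = {"tools": [{"name": n, **meta} for n, meta in TOOLS.items()]}
    elif method == "tools/call":
        args = request["params"]["arguments"]
        # A real server would query a backend; return canned text here.
        result = {"content": [{
            "type": "text",
            "text": f"{args['bedrooms']}-bed homes in {args['city']}: ...",
        }]}
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}
```

Because MCP is an open standard, the same handler shape would serve any MCP-compatible host, not just ChatGPT, which is exactly the interoperability argument the SDK leans on.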
The Future is Now: Those Amazing Live Demos
The conference showcased integrated applications from the first batch of partners, including industry giants like Booking.com, Canva, Spotify, and Zillow. These demos vividly demonstrated the power of seamless integration:
- Interactive Learning: When you ask Coursera to “teach me machine learning,” it no longer just gives you a text explanation, but plays a tutorial video directly in the chat box. You can even ask ChatGPT questions about the content of the video while watching.
- Smart Real Estate Search: You can tell Zillow to “find me a three-bedroom house with a yard,” and an interactive map will immediately appear and update the results in real time, all operated through natural language.
- Design Process Reinvented: Upload a hand-drawn sketch and then tell Figma to “turn this into a usable chart,” and the Figma application will take over and automatically complete the design.
- Party Planner: When you say “help me create a playlist for my party,” ChatGPT will not only recommend songs but also proactively suggest using the Spotify app to create and manage the playlist.
What Does This Mean for Existing SaaS Companies?
This is a huge warning sign, but also an unprecedented opportunity. The most powerful aspect of the new “ChatGPT Built-in Apps” model is its ability to proactively recommend and present relevant tools at the exact moment a user’s need arises. This context-based discovery mechanism is far more efficient than traditional app store searches or SEO.
For companies like Canva or Zillow, failing to become a “ChatGPT Built-in App” in the future could mean facing a serious competitive disadvantage, much like not having a mobile app a decade ago. This forces the entire SaaS industry to rethink its market strategy.
More profoundly, OpenAI has introduced the Agentic Commerce Protocol. This is an open standard designed to enable real-time, secure transactions within ChatGPT. When a user says “book me a flight to Tokyo,” the real value lies in the ticket transaction itself. Through this protocol, OpenAI positions itself to profit from every economic activity that occurs through its platform, creating a business model that is orders of magnitude larger than simply selling API tokens.
AgentKit: Making it Easy to Build Autonomous AI Agents
Although the concept of “AI agents” is very hot, the reality is that there are very few successful cases of deploying production-level agents. The development process is extremely complex, the tools are fragmented, and there is a lack of unified standards. To solve this pain point, OpenAI has launched AgentKit.
AgentKit is positioned as a “complete set of building blocks” for building, deploying, and optimizing agent workflows.
- Agent Builder: A visual canvas that allows developers to design complex multi-agent workflows by dragging and dropping nodes, much like using Canva, greatly lowering the technical barrier.
- Connector Registry: An enterprise-level centralized management backend that allows administrators to uniformly supervise data and tool connections, ensuring security and compliance.
- ChatKit: A UI toolkit that allows enterprises to quickly embed a branded chat interface into their own websites or applications, providing a native experience powered by OpenAI.
At the same time, AgentKit also integrates advanced Evals and Guardrails functions to ensure that agents can operate reliably and safely in the real world.
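AgentKit’s actual Guardrails API is not shown in this article, but the underlying idea can be sketched in a few lines of plain Python: screen an agent’s input against simple policies before the workflow is allowed to run. The `guardrail_check` and `run_agent` functions below are hypothetical illustrations, not AgentKit code.

```python
import re

# Illustrative policy: block inputs that contain an email address
# (a crude PII check) or that exceed a length budget.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def guardrail_check(text: str) -> dict:
    """Return a verdict with the list of policies the input violated."""
    findings = []
    if EMAIL.search(text):
        findings.append("contains_email")
    if len(text) > 4000:
        findings.append("input_too_long")
    return {"allowed": not findings, "findings": findings}

def run_agent(user_input: str) -> str:
    """Gate the agent workflow behind the guardrail verdict."""
    verdict = guardrail_check(user_input)
    if not verdict["allowed"]:
        return "Blocked by guardrail: " + ", ".join(verdict["findings"])
    # ... hand off to the model / multi-agent workflow here ...
    return "Agent response"
```

The same gate pattern applies on the output side: evaluate the agent’s answer against policies before it reaches the user, which is the role Evals and Guardrails play in a production deployment.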
The strategic intent is clear: rather than let a third-party framework like LangChain become the industry standard, OpenAI has launched a fully functional, deeply integrated official toolkit. By making “developing on the OpenAI platform” the easiest and most efficient path, it aims to occupy the core of the agent ecosystem.
Intelligence Core Upgrade: GPT-5 Family and Sora 2 API
All of these grand plans depend on powerful foundation models. DevDay 2025 officially launched the new generation of the GPT-5 model series, adopting a precise market segmentation strategy.
- GPT-5 Pro: The smartest and most accurate top-tier model, designed for high-risk, high-precision tasks in fields like finance and law.
- GPT-5: The new flagship model, optimized for complex coding and agent tasks.
- GPT-5 mini & nano: Faster and more cost-effective lightweight versions, making large-scale AI applications more economically viable.
This tiered pricing strategy aims to cover the entire market from high-end enterprises to individual developers, effectively preventing competitors from eroding its market by offering cheaper but “good enough” models.
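In practice, a tiered lineup like this invites a routing layer that sends each task to the cheapest adequate model. The sketch below uses the family names from the announcement as identifiers; the exact API model strings and the routing thresholds are assumptions for illustration only.

```python
def pick_model(risk: str, complexity: str) -> str:
    """Route a task to the cheapest GPT-5 tier that fits it.

    `risk` and `complexity` are "high" or "low"; the thresholds are
    invented here, and the model strings are assumed, not official.
    """
    if risk == "high":
        return "gpt-5-pro"   # finance/legal-grade accuracy
    if complexity == "high":
        return "gpt-5"       # complex coding and agent tasks
    return "gpt-5-mini"      # bulk, cost-sensitive workloads
```

A router like this is how the segmentation strategy pays off for developers: high-stakes requests absorb the premium price, while the long tail of routine traffic runs on the lightweight tiers.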
In the multimodal field, the highly anticipated text-to-video model Sora 2 is finally available through an API, and its pricing has triggered a direct price war with its main competitor, Google’s Veo 3.
The Price War for Video Generation Models Has Officially Begun
Shortly after OpenAI announced Sora 2 API pricing, a side-by-side comparison with Google’s Veo 3 made it clear that competition in AI video generation has entered a white-hot stage. Here are the two lineups head to head:
| Model | Price per second (USD) | Notes |
|---|---|---|
| OpenAI Sora 2 | $0.10 | Standard quality (720p) |
| Google Veo 3 Fast | $0.15 | Fast generation model |
| OpenAI Sora 2 Pro | $0.30 | High quality (720p) |
| Google Veo 3 Standard | $0.40 | Standard generation model |
| OpenAI Sora 2 Pro | $0.50 | High quality (1024p) |
This pricing table makes it clear that the competition is fierce and the positioning deliberate.
- Advantage in the entry-level market: OpenAI’s standard version of Sora 2, priced at $0.10 per second, is currently the most economical choice on the market. This is extremely attractive to developers who want to try out video generation functions or conduct large-scale testing.
- Tug-of-war in the mid-to-high-end market: Google’s Veo 3 Fast ($0.15/sec) and Veo 3 Standard ($0.40/sec) compete directly with OpenAI’s Sora 2 Pro tiers on both price and performance. At this level, developers’ choices will hinge less on price alone and more on each model’s generation speed, image realism, and detail rendering.
This price war is good news for the entire ecosystem. It means that the cost of developing and applying multimodal AI is rapidly decreasing, which will inspire the birth of more innovative applications.
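The per-second rates above translate directly into per-clip costs, and a quick calculation shows how wide the gap is at typical clip lengths. The dictionary keys below are informal labels for the table rows, not official API model names.

```python
# Per-second prices (USD) from the comparison table above.
PRICE_PER_SEC = {
    "sora-2 (720p)": 0.10,
    "veo-3-fast": 0.15,
    "sora-2-pro (720p)": 0.30,
    "veo-3-standard": 0.40,
    "sora-2-pro (1024p)": 0.50,
}

def clip_cost(model: str, seconds: int) -> float:
    """Cost of one generated clip; pricing scales linearly with length."""
    return round(PRICE_PER_SEC[model] * seconds, 2)

# A 10-second clip costs $1.00 on Sora 2 standard vs $4.00 on
# Veo 3 Standard, a 4x spread at the entry level.
```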
Laying the Foundation for the Future: A Multi-Billion Dollar Bet with AMD
The most significant business announcement at this conference was the “milestone multi-billion dollar agreement” reached between OpenAI and chip giant AMD. This is not just a hardware purchase, but a clever strategic layout.
The key to the agreement is that AMD will deploy up to 6 gigawatts of GPU computing power for OpenAI, while OpenAI has obtained warrants to purchase up to 10% of AMD’s equity.
This move addresses the AI industry’s long-standing dependence on Nvidia GPUs by diversifying the supply chain. Even more cleverly, the equity component ties AMD’s success to OpenAI’s growth: when OpenAI’s demand drives up AMD’s stock price, OpenAI itself profits, forming a self-reinforcing financial loop. It also explains why OpenAI needs such a huge infrastructure investment: its API usage has grown an astonishing 20-fold in one year.
New Competitive Landscape: The Platform War is in Full Swing
The announcements at DevDay 2025 have officially upgraded the AI competition from the model level to a full-scale war of platform ecosystems.
- Comparison with Google: Google’s strategy is to deeply integrate the Gemini model into its existing massive product ecosystem (Search, Android, etc.). This is a model of “injecting AI soul into existing systems.” OpenAI, on the other hand, has chosen a more disruptive “build a new system from scratch” approach.
- Comparison with Anthropic: Anthropic focuses on enterprise-level and high-security agent AI, leaning more towards a code-driven approach. OpenAI’s AgentKit, with its visual, low-barrier characteristics, is aimed at a broader group of developers.
This “ChatGPT as an operating system” model poses a direct threat to the traditional software industry. In the future, users may no longer need to open different apps, but can call all services from a unified conversational interface. This will fundamentally change the software distribution model, posing a huge challenge to Apple and Google’s app stores, as well as Google’s core search business.
Conclusion: Advice for Developers, Enterprises, and Investors
DevDay 2025 marks the entry of the AI industry into a new phase centered on platforms and ecosystems. This is both a challenge and a huge opportunity.
- For developers: Learning to use the Apps SDK and AgentKit to develop “ChatGPT Built-in Apps” is no longer an option, but a key strategy for the future. Whoever can adapt to the new interaction model the fastest may gain a first-mover advantage in the next software era.
- For enterprises: They should immediately start experimenting with AgentKit to explore the possibility of automating internal processes and developing new customer experiences, while paying attention to the security and governance functions it provides.
- For investors: They need to re-evaluate the moats of traditional SaaS companies. The most valuable companies in the next decade are likely to be those that can most deeply and effectively integrate their own services into a dominant AI platform like ChatGPT.
The revolution initiated by OpenAI has just begun. It is not just an evolution of technology, but a complete reshaping of the entire digital world’s infrastructure.


