
AI Daily: SpaceX Acquires xAI, OpenAI Launches Desktop Command Center

February 3, 2026

In a tech world full of surprises, something big seems to happen every morning we wake up. Where we once debated how well AI could chat, the conversation has shifted to how AI “takes over” work, and now even to how it flies into space.

Today’s roundup is packed: the blockbuster merger of SpaceX and xAI, OpenAI’s brand-new developer tool, and even Google teaching AI how to deceive people at the poker table. Let’s take a look at these advances that are reshaping the future.

1. Flying to Space for Compute? SpaceX Officially Acquires xAI

This might be the craziest yet most logical piece of news in a while. SpaceX has announced its official acquisition of xAI. This is not just a business merger; it is more like a bet by Musk on the future architecture of human civilization.

Why do this? The reason is simple and brutal: Earth is running out of electricity. Current AI development relies heavily on massive ground-based data centers, whose appetite for power and cooling is a bottomless pit. If the energy problem cannot be solved on the ground, then look to the sky.

SpaceX has proposed a concept called the “Orbital Data Center”. It sounds like science fiction, but the logic is sound: in space there is near-constant sunlight, no atmosphere in the way, and energy is almost inexhaustible. Under the plan, SpaceX intends to use Starship, a giant rocket capable of carrying hundreds of tons of payload, to launch data centers into orbit.

Imagine millions of satellite servers operating in orbit, using solar energy for high-intensity AI computing. This not only solves the heat dissipation and power supply problems but is also the first step towards a Kardashev Type II civilization (harnessing the total energy of a star). This acquisition vertically integrates xAI’s model capabilities with SpaceX’s launch capabilities, aiming directly at a superintelligence that can understand the universe.
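
To get a feel for the numbers, here is a back-of-the-envelope sketch of how much solar panel area an orbital data center would need. The constants (solar flux above the atmosphere, panel efficiency, sunlight duty cycle) are my illustrative assumptions, not figures from SpaceX:

```python
# Back-of-the-envelope: solar array area for a 1 GW orbital data center.
# Assumptions (illustrative, not from the announcement): ~1361 W/m^2 solar
# constant above the atmosphere, 20% panel efficiency, and an orbit that
# keeps the array in sunlight ~99% of the time.
SOLAR_CONSTANT = 1361.0   # W/m^2, extraterrestrial solar flux
PANEL_EFFICIENCY = 0.20   # conservative photovoltaic efficiency
SUN_FRACTION = 0.99       # fraction of the orbit spent in sunlight

def array_area_m2(target_watts: float) -> float:
    """Panel area needed to deliver target_watts of electrical power."""
    usable_w_per_m2 = SOLAR_CONSTANT * PANEL_EFFICIENCY * SUN_FRACTION
    return target_watts / usable_w_per_m2

area = array_area_m2(1e9)  # 1 GW target
print(f"{area / 1e6:.2f} km^2 of panels for 1 GW")  # roughly 3.7 km^2
```

Under these assumptions, a gigawatt-class facility needs only a few square kilometers of panels — large, but well within the payload mass Starship is designed to carry over repeated launches.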

2. Not Just Writing Code: OpenAI Releases Codex App

If SpaceX is looking up at the stars, then OpenAI is looking down to solve the most practical problems at developers’ fingertips. OpenAI has just released the Codex Desktop App for macOS.

This is not just another chat window. Developers know that the challenge has shifted from “let AI write this code” to “how to manage this group of AIs to complete the entire project for me”. The Codex App is more like a command center, allowing developers to manage multiple AI Agents simultaneously.

Here is a great detail: you can let these Agents work in parallel. For example, one Agent fixes bugs while another writes tests, and you just review the results like a commander. It even has built-in support for Git worktrees, which means each Agent can modify code in an independent checkout without interfering with your main branch.
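
The idea of parallel agents, each sandboxed in its own working directory, can be sketched in a few lines. This is a conceptual illustration only — the agent names, worktree paths, and tasks are made up, and this is not the actual Codex App API:

```python
# Conceptual sketch (not the real Codex API): two "agents" run concurrently,
# each pointed at its own isolated directory, mirroring how Git worktrees
# keep concurrent changes away from the main branch.
import asyncio

async def agent(name: str, worktree: str, task: str) -> str:
    # A real agent would edit files under `worktree`; here we simulate work.
    await asyncio.sleep(0.01)
    return f"{name} finished '{task}' in {worktree}"

async def main() -> list[str]:
    # gather() runs both agents in parallel and collects their reports.
    return await asyncio.gather(
        agent("bugfixer", "../wt-bugfix", "fix login crash"),
        agent("tester", "../wt-tests", "write regression tests"),
    )

results = asyncio.run(main())
for line in results:
    print(line)
```

The human stays in the loop only at the end, reviewing each agent’s report — which is exactly the “commander” workflow the app is pitching.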

In addition, OpenAI introduced the concept of “Skills”. You can package common workflows into skills and let Codex call them automatically when needed. Whether it is automating documentation generation or connecting to a local terminal to execute commands, this App is trying to bridge the gap between “model capability” and “actual productivity”.

3. Small but Mighty: StepFun Open Sources Step 3.5 Flash Model

The open-source community also welcomed a strong challenger today. StepFun released and open-sourced their Step 3.5 Flash Model.

The highlight of this model lies in its “Intelligence Density”. Although it has 196B total parameters, it uses a Mixture of Experts (MoE) architecture, activating only 11B parameters during each inference. What does this mean? It means it runs incredibly fast and has significantly lower hardware requirements, making it very suitable for local deployment.
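
The trick behind “196B total, 11B active” is top-k expert routing: a small router scores all experts per token, but only the best few actually run. Here is a toy NumPy sketch of the mechanism — the expert count, dimensions, and weights are arbitrary toy values, not Step 3.5 Flash internals:

```python
# Minimal MoE top-k routing sketch: only TOP_K of N_EXPERTS networks run per
# token, which is why total parameters can far exceed active parameters.
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, TOP_K, DIM = 8, 2, 16

experts = [rng.standard_normal((DIM, DIM)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((DIM, N_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router                  # score every expert for this token
    top = np.argsort(logits)[-TOP_K:]    # keep only the top-k experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the selected experts
    # Only TOP_K of the N_EXPERTS matrices are multiplied, saving compute.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(DIM)
out = moe_forward(token)
print(out.shape)
```

Every token still benefits from knowledge spread across all experts (via the router’s choice), but the per-token FLOP cost scales with the active parameters only — hence fast inference on modest hardware.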

This model is specifically optimized for inference speed, performing particularly well in programming and mathematical operations. According to official data, it can even wrestle with GPT-4 class models in multiple benchmarks while maintaining extremely high response speeds. For developers who want to build private AI applications or do not want to upload data to the cloud, this is definitely good news.

If you are interested, you can go directly to their GitHub page or Hugging Face to download the weights and try it out.

4. AI Learns Deception? Game Arena Adds Werewolf and Poker Tests

Remember when testing AI meant playing chess? Chess is a “perfect-information game”: both sides see everything on the board. The real world is not like that; it is full of hidden information, lies, and uncertainty.

Google DeepMind obviously realized this too. Their Kaggle Game Arena recently added two very interesting benchmarks: Werewolf and Texas Hold’em Poker.

This raises the difficulty dramatically. In Werewolf, the model must learn to lie, form alliances, and even read the room during conversations to hide its identity; at the poker table, it needs to calculate risks and learn to bluff. These benchmarks are really testing AI’s “soft skills” — communication, negotiation, and decision-making under ambiguity.
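
The “calculating risks” part of poker comes down to simple decision math: a call is only profitable when your equity (win probability) beats the pot odds. A minimal sketch, with made-up chip counts:

```python
# Toy illustration of decision-making under hidden information in poker:
# call only when your estimated equity exceeds the pot odds.
def pot_odds(pot: float, call: float) -> float:
    """Fraction of the final pot you must win to break even on a call."""
    return call / (pot + call)

def should_call(win_prob: float, pot: float, call: float) -> bool:
    return win_prob > pot_odds(pot, call)

# Opponent bets 50 chips into a 100-chip pot, so you face a 50 call
# into a 150 pot: you need more than 50 / 200 = 25% equity.
print(pot_odds(150, 50))           # 0.25
print(should_call(0.30, 150, 50))  # True: 30% equity clears 25%
```

The hard part for an AI is not this arithmetic but estimating `win_prob` when the opponent may be bluffing — which is precisely the hidden-information skill the new benchmarks probe.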

Currently, the Gemini 3 series models are performing quite impressively on the leaderboard, showing that the new generation of models are not just calculators; they are starting to understand how to handle “human” level complex interactions.

You can go directly to Kaggle Game Arena to check it out.

5. Document Processing Tool: GLM-OCR Visual Understanding Model

Finally, let’s look at a practical tool. The Zhipu AI team released GLM-OCR, a lightweight model designed specifically for OCR (Optical Character Recognition).

When dealing with PDFs, scans, or messy tables, traditional OCR often drops the ball. Although GLM-OCR has only 0.9B parameters, its performance in parsing complex layouts, handwritten text, and even mathematical formulas reaches SOTA (State of the Art) levels.

For enterprises or individuals who need to digitize a large amount of paper documents, this is an extremely cost-effective choice. It can directly output structured Markdown or JSON formats, making subsequent data processing much easier. Interested friends can refer to their GitHub and Hugging Face pages.
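
Structured JSON output is what makes downstream processing easy. As a hypothetical illustration — the JSON schema below is my assumption, not GLM-OCR’s actual output format — here is how such a table payload could be flattened into Markdown:

```python
# Hypothetical post-processing sketch: turn OCR-style structured JSON
# (schema assumed here, not GLM-OCR's real format) into a Markdown table.
import json

sample = json.loads("""
{"type": "table",
 "header": ["Item", "Qty", "Price"],
 "rows": [["Pen", "3", "1.50"], ["Notebook", "2", "4.00"]]}
""")

def table_to_markdown(t: dict) -> str:
    lines = ["| " + " | ".join(t["header"]) + " |",
             "| " + " | ".join("---" for _ in t["header"]) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in t["rows"]]
    return "\n".join(lines)

print(table_to_markdown(sample))
```

Once the scan is structured data rather than a flat image, feeding it into spreadsheets, databases, or downstream LLM pipelines is trivial.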


FAQ

Q: Can sending data centers to space really solve the energy problem? A: Theoretically, it is feasible. Data centers on Earth consume massive amounts of electricity and require cooling, while space possesses almost infinite solar energy and extremely low ambient temperatures (beneficial for heat dissipation). Although the launch cost is high, Starship’s goal is to drive transportation costs down to a very low level. Coupled with no need for maintenance personnel and land rent in space, this is a potential path to solving the AI energy bottleneck in the long run.

Q: What is the difference between OpenAI’s Codex App and the current ChatGPT? A: ChatGPT is mainly for conversational interaction, while Codex App is a “work environment” designed for development. It can directly read files on your computer, allowing you to command multiple AI Agents to perform different coding tasks simultaneously, and even run terminal commands locally. It is more like a virtual office with AI employees, rather than just a chatbot.

Q: What are the benefits of Step 3.5 Flash’s MoE architecture? A: The biggest advantage of the MoE (Mixture of Experts) architecture is efficiency. Although the model’s total parameter count is large (it contains a lot of knowledge), each token is routed through only a small fraction of the most relevant parameters (experts). This lets the model stay smart (knowledgeable) while running fast (low inference cost), making it well suited to resource-constrained hardware.


© 2026 Communeify. All rights reserved.