Today, I attended Microsoft’s AI Day in Beijing. The event focused on the commercialization of LLMs, which was pretty interesting. Beyond the deep dives from Microsoft’s technical experts, I also had the chance to meet industry specialists and to think about AI development from the perspective of business needs.
Microsoft’s AI products currently center on Azure AI and the Copilot series. Azure AI focuses on enterprise (ToB) AI and compute solutions, while Copilot expands the ecosystem through Microsoft Office and Windows.
From a model standpoint, the launch of Microsoft’s Phi-3 marks a shift from cloud-only LLMs to a cloud + edge approach, with models now offered in large, medium, and small sizes. There’s also talk that Apple’s LLM strategy pairs a large cloud model with small edge models for end-to-end synergy.
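At its core, the cloud + edge split is a routing decision: keep cheap, latency-sensitive, or private requests on a small on-device model and send the rest to a large cloud model. A minimal sketch of that idea, where the function names, token heuristic, and thresholds are my own illustrative assumptions rather than Microsoft’s or Apple’s actual design:

```python
def estimate_tokens(prompt: str) -> int:
    # Rough heuristic: roughly 4 characters per token for English text.
    return max(1, len(prompt) // 4)

def route_request(prompt: str,
                  edge_token_budget: int = 512,
                  require_privacy: bool = False) -> str:
    """Decide whether a request stays on the edge (small model)
    or is sent to the cloud (large model)."""
    if require_privacy:
        # Privacy-sensitive data never leaves the device.
        return "edge"
    if estimate_tokens(prompt) <= edge_token_budget:
        # Short prompts fit comfortably in the small on-device model.
        return "edge"
    # Long or complex prompts fall back to the large cloud model.
    return "cloud"

print(route_request("Summarize this note"))        # short prompt stays local
print(route_request("x" * 4000))                   # long prompt goes to the cloud
```

In a real system the router would also weigh task difficulty, battery/thermal state, and network conditions, but the shape of the decision is the same.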
According to feedback from industry clients:
- The technical barrier to adopting LLM products is still quite high. Lowering it for industry clients is essential if LLMs are to become mainstream.
- Cloud LLM services (training and fine-tuning) are expensive.
- There’s a natural bias against CPUs and other non-NVIDIA hardware, since cutting-edge work is led by NVIDIA. The market closely follows NVIDIA’s technical direction and is reluctant to switch.
- Commercial products predominantly run on NVIDIA GPUs, while domestic hardware remains in the experimental and observation phase.