03/31 2025
Article | Smart Relativity
This year, DeepSeek has undeniably become the defining trend in the AI field. As more companies put it into practice, a new consensus is forming within the industry: while DeepSeek's popularity may look like a triumph for state-of-the-art large AI models, it fundamentally marks the AI industry's pivot from technological breakthroughs to commercialization.
Specifically, technological advances must be matched by commercialization capabilities to bring AI out of the lab and to the masses, turning it into a practical tool for raising productivity. The business models of Silicon-based Fluidity and Ascend are a testament to this approach.
Recently, at the Ascend AI Partner Summit themed "Rising with the Times," Hu Jian, co-founder and chief product officer of Silicon-based Fluidity, recounted the company's collaboration with Ascend on bringing DeepSeek into commercial practice. He emphasized that the integration of technology, cost, and scale is the cornerstone of applying and deploying large models for inference.
Hu Jian, Co-founder & Chief Product Officer, Silicon-based Fluidity
This perspective aligns with the core logic of today's AI industry. When a high-performance model like DeepSeek meets reliable computing power from Ascend, together with the cost and performance optimizations delivered by Silicon-based Fluidity's engineering capabilities, enterprise adoption of large models accelerates, quietly setting the stage for an industry-wide shift toward widespread AI application.
Simultaneously, this marks a pivotal juncture for the local AI industry, steering it towards a virtuous cycle where both supply and demand sides of the market achieve a genuine two-way fit.
Behind DeepSeek's meteoric rise lie the combined efforts of computing power and cloud services.
Although DeepSeek rose to fame quickly on the strength of its cost-effectiveness and strong performance, one distinction must be made clear to the general public: turning model capability into industrial value is not a simple linear relationship; model capability does not equal industrial value.
Consequently, despite DeepSeek's popularity, enterprises still face numerous challenges when deploying large AI models, such as difficult hardware adaptation, data security concerns, low computing efficiency, and a lack of ecosystem support that makes getting started hard. As Hu Jian put it, "Improving the end-to-end inference speed of a large model under resource constraints, so that it becomes genuine intelligence in applications, is a pain point for many enterprises."
The prevalence of these pain points shows that deploying DeepSeek is not as seamless for enterprises as expected. At the same time, it presents a valuable opportunity for the AI industry to grow.
According to Hu Jian, during the Spring Festival, Silicon-based Fluidity and Ascend jointly launched a "comprehensive" DeepSeek model inference service, which was warmly received by the market. Platform traffic surged, with daily visits exceeding one million and registered users surpassing five million, propelling Silicon-based Fluidity to the top of both global and domestic growth charts for AI products and setting a new benchmark for AI inference.
Clearly, enterprise demand for digital and intelligent transformation is strong. After DeepSeek's rise to popularity, many enterprises found themselves in a predicament: eager to deploy large AI models but unsure how to proceed. The collaboration between Silicon-based Fluidity and Ascend not only offers enterprises an efficient path to application, but also validates a standard paradigm for making AI broadly accessible: turning model capability into industrial value depends on the synergy of "large models + computing power + cloud services" to meet rapidly expanding market demand.
Without the combined support of computing power and cloud services, DeepSeek's phenomenal popularity could not have been sustained and converted into industrial value so quickly. The industry practice since DeepSeek's emergence, exemplified by Silicon-based Fluidity and Ascend, underscores that making AI broadly accessible requires maximizing the optimization of different models on different hardware so that large AI models deliver high performance.
In this process, the domestic AI ecosystem has quietly built an end-to-end capability for handling the massive traffic generated by DeepSeek, making it look effortless.
Co-evolution of the domestic technology system is accelerating the widespread application of large AI models.
So how, concretely, should large AI models be applied and deployed to make AI broadly accessible? This calls for a closer look at the strategy.
According to Hu Jian, Silicon-based Fluidity leverages its technological edge to gain a cost advantage in the short term, which in turn drives greater scale. That larger scale raises new technical challenges, pushing the technology further; the resulting cost reductions then reinforce the scale advantage. This integration of technology, cost, and scale is what allows Silicon-based Fluidity to establish itself quickly in a fiercely competitive market. Clearly, technological advantage is the initial "lever" and the key with which Silicon-based Fluidity pries open the market.
For instance, in adapting to DeepSeek's underlying architecture, Silicon-based Fluidity and Ascend have carried out extensive optimization. At the operator level, drawing on the characteristics of DeepSeek's ultra-large-scale models, they have systematically optimized operator fusion and scheduling, adapting deeply to DeepSeek's underlying framework design.
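The article does not disclose the actual kernels involved, but the general idea behind operator fusion can be sketched in a few lines: merging adjacent elementwise operators so intermediate results do not make an extra round trip through memory. The NumPy snippet below is a purely pedagogical sketch with illustrative function names; a production inference engine would perform this fusion at the compiled-kernel level.

```python
# Pedagogical sketch of operator fusion (illustrative only; not the
# actual kernels used by Silicon-based Fluidity or Ascend).
import numpy as np

def silu(x):
    """SiLU activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def unfused_bias_act(x, bias):
    # Two separate "operators": the intermediate tensor y is fully
    # materialized before the activation runs.
    y = x + bias          # operator 1: bias add
    return silu(y)        # operator 2: activation

def fused_bias_act(x, bias):
    # The "fused" operator expresses the same computation as a single
    # step; a real engine would compile this into one kernel, cutting
    # memory traffic and kernel-launch overhead.
    return silu(x + bias)

x = np.random.randn(4, 8).astype(np.float32)
b = np.random.randn(8).astype(np.float32)
assert np.allclose(unfused_bias_act(x, b), fused_bias_act(x, b), atol=1e-6)
```

The numerical result is identical either way; the payoff of fusion lies entirely in how the hardware executes it.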
From Silicon-based Fluidity's perspective, the core underlying technology is a high-performance, easy-to-use, and scalable inference engine, which is crucial for accelerating the widespread application of large AI models. Through multi-dimensional technical optimizations such as INT8 quantization, multi-token prediction, and dynamic batching, Silicon-based Fluidity and Ascend have achieved inference performance on par with industry-leading high-performance GPUs.
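As a concrete illustration of one of the techniques named above, here is a minimal sketch of symmetric per-tensor INT8 weight quantization. It is a generic textbook scheme, not the specific quantization used by Silicon-based Fluidity or Ascend.

```python
# Minimal sketch of symmetric per-tensor INT8 weight quantization
# (generic scheme, assumed for illustration).
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 values plus one scale factor."""
    scale = np.abs(w).max() / 127.0                       # largest value maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float32 weights."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)

# INT8 storage is 4x smaller than float32, at the cost of a small,
# bounded rounding error per weight.
print("max abs error:", np.abs(w - w_hat).max())
print("bytes fp32:", w.nbytes, "bytes int8:", q.nbytes)
```

The same trade-off drives the other optimizations mentioned: shrink memory and compute per token, then recover throughput by keeping the hardware busy with dynamic batching.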
These collaborative technical optimizations with Ascend have significantly bolstered Silicon-based Fluidity's core competitiveness in the AI industry, providing a stable and reliable foundation for enterprises to deploy DeepSeek. More importantly, this successful practice has increasingly highlighted the critical value of the co-evolution of domestic technology systems.
Today, enterprises' pain points and needs around applying and deploying large AI models are clear: the industry requires an end-to-end solution covering model adaptation, computing power support, and application acceleration. In this division of labor, DeepSeek supplies top-tier model performance; Ascend serves as the computing power base, providing high-performance computing resources and adaptation capabilities; and Silicon-based Fluidity addresses enterprises' specific needs directly, offering engineering capabilities such as a one-stop MaaS cloud service and a large model inference engine that deliver application acceleration and scenario adaptation, thereby speeding the industry's progress.
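To make the "one-stop MaaS cloud service" concrete, the sketch below shows how an enterprise application might call such a service, assuming it exposes an OpenAI-compatible chat-completions endpoint; the base URL, model identifier, and API key are placeholders rather than details confirmed by the article.

```python
# Hypothetical sketch of calling a MaaS inference service through an
# OpenAI-compatible API. Endpoint, model name, and key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-maas.cn/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                     # key issued by the platform
)

response = client.chat.completions.create(
    model="deepseek-r1",  # placeholder model identifier
    messages=[
        {"role": "user", "content": "Summarize this quarter's sales report."}
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```

For an enterprise, the appeal of this pattern is that model choice, hardware adaptation, and scaling all sit behind a familiar API surface rather than inside its own infrastructure.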
In this way, the overall advantages of the domestic technology system will be recognized by more and more enterprises, further reshaping the AI industry landscape. As more enterprises see that domestic solutions can not only match international giants on performance but also offer services better aligned with local needs, the market's perception of the domestic technology system will shift.
Thus, DeepSeek's popularity signifies not just the success of a single large AI model, but a turning point in the co-evolution of the domestic technology system. When Silicon-based Fluidity, Ascend, and DeepSeek resonate with enterprises' needs for deploying large AI models, the local AI industry can break through current technological barriers and accelerate the widespread application of AI.
From this point on, more and more Chinese enterprises will choose domestic technology systems for their digital and intelligent transformation, achieving a genuine two-way fit between the supply and demand sides of the local market and steering the industry toward a virtuous cycle.
*All images in this article are sourced from the internet