MWC Cloud Computing Wave: The Rise of Edge Cloud Amidst 5G-A and AI

03/06/2025

Is edge cloud the ideal way to put AI into practice?

Edge Cloud at MWC 2025

At this year's MWC Conference, the true stars may not be 5G-Advanced (5G-A), 6G, or Artificial Intelligence (AI), but rather edge computing and edge cloud.

MWC25 has kicked off, and the Leitech reporting team is on-site in Barcelona with frontline coverage. China's leading mobile operators, China Mobile and the other two major carriers, along with Huawei, ZTE, and other vendors, all showcased 5G-A and 6G technologies and products, and AI, which has surged in recent years, made frequent appearances as well. Yet 5G-A/6G and AI are only tools; their value is realized through the products and industries that put them to use. At this year's MWC, the edge computing on display from hardware and software companies across the communications industry is pivotal to turning these new technologies into real-world applications.

Hardware and software enterprises are making strides, and edge computing is becoming infrastructure.

As a leading Chinese mobile communications company, ZTE exhibited a full-stack intelligent-computing infrastructure at MWC25, including liquid-cooled data centers, AI computing servers, general-purpose servers, and lossless high-speed switches. It also showcased the AiCube, an all-in-one training-and-inference appliance capable of deploying the full-scale DeepSeek large model.

ZTE's Exhibit at MWC 2025

(Image source: Leitech production)

Compared with existing cloud-based and on-device AI deployments, the AiCube, which can act as an edge data center, is not only easy to use but also offers stronger data-privacy protection and more flexibility, meeting enterprise users' high standards for distributed computing and data security.

Lenovo, best known for its PCs, not only exhibited multiple AI PCs at MWC25 but also launched the ThinkEdge SE100, billed as the market's first entry-level AI inference server, under the slogan "Redefining AI Flexibility and Efficiency."

The ThinkEdge SE100 is 85% smaller than a traditional server, making it far easier to deploy; it supports a GPU yet draws only 140 W. Its plug-and-play design brings enterprise-grade AI to virtually any location, its low price puts it within reach of small businesses, and its shock resistance and dust-proofing let it operate in a wider range of environments.

Most importantly, its hybrid-cloud deployment and machine-learning capabilities can push compute, storage, networking, and other services out to edge nodes, giving users localized cloud services and enabling distributed computing that raises resource-utilization efficiency.

ThinkEdge SE100

(Image source: Leitech production)

Compared with household names like ZTE and Lenovo, Fibocom is less familiar to ordinary consumers but no less impressive. At MWC25, Fibocom unveiled "Nebula," a full-matrix series of AI modules and solutions. Strictly speaking, these products are on-device AI, with compute ranging from 1 to 50 TOPS and support for running large models of various parameter counts, such as Tongyi Qianwen (Qwen) and DeepSeek.

For instance, the 18 TOPS version can run a 7-billion-parameter large model on the device, while the 3.2 TOPS version can only handle lightweight AI tasks. Indeed, because of cost and size constraints, purely on-device AI struggles to run models with very large parameter counts, and for enterprise users in particular, once several people need to use AI at the same time, on-device compute quickly falls short.
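To see roughly why only the higher-end modules are paired with a 7-billion-parameter model, here is a back-of-envelope sizing sketch of the weight footprint and memory-bound decode speed at different quantization levels; the bit-widths and the 60 GB/s memory-bandwidth figure are illustrative assumptions, not Fibocom specifications.

```python
# Back-of-envelope sizing for on-device inference of a 7B-parameter model.
# All figures below are illustrative assumptions, not vendor specifications.

PARAMS = 7e9  # 7-billion-parameter model

def weight_footprint_gb(bits_per_weight: float) -> float:
    """Memory needed just to hold the weights, in GB."""
    return PARAMS * bits_per_weight / 8 / 1e9

def rough_tokens_per_second(mem_bandwidth_gbs: float, bits_per_weight: float) -> float:
    """Decoding is usually memory-bound: generating each token streams
    (roughly) all of the weights from memory once."""
    return mem_bandwidth_gbs / weight_footprint_gb(bits_per_weight)

for bits in (16, 8, 4):
    print(f"{bits}-bit weights: ~{weight_footprint_gb(bits):.1f} GB, "
          f"~{rough_tokens_per_second(60, bits):.0f} tok/s at an assumed 60 GB/s")
```

Even at 4-bit quantization the weights alone occupy roughly 3.5 GB, which is why only the better-equipped modules can host a 7B model while the lower-end ones stick to lightweight tasks.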

In the past, constrained by compute, privacy and security concerns, and data-transmission efficiency, edge computing devices gained share steadily but never reached a tipping point. Now, the imminent arrival of 5G-A and the newly launched 6G standardization work may become the catalyst for that breakout: Lenovo says the edge market is expected to sustain growth of nearly 37% through 2030.

However, future edge computing will not be limited to AI training and inference on edge devices. More likely, it will take the form of small data centers or cloud nodes deployed at the network edge, providing cloud services to nearby users and devices and supporting the deployment and inference of multiple applications and large models.

Backed by new communication technologies, is edge cloud the ideal way to put AI into practice?

The impact of large AI models has rippled through industry after industry, but how to deploy and use them efficiently, securely, and cheaply remains a challenge for enterprises. The more widely used cloud-based large models require data to be uploaded to the cloud, which brings leakage risks and relatively high latency. On-device AI avoids the upload, but deploying models with huge parameter counts on the device is prohibitively expensive and makes it hard to allocate compute resources efficiently.

Edge computing places compute nodes close to data sources or users. With low latency and fast response times, it can handle latency-sensitive tasks while keeping user data secure. Its distributed architecture also uses resources more efficiently, and even if some edge nodes fail, the edge computing system as a whole keeps running.

Edge cloud extends cloud computing capabilities to the network edge, forming a three-way "cloud-edge-device" collaboration with the central cloud and end devices. It retains the low latency and fast response of edge computing while also keeping data local, enabling distributed computing, and offering a high degree of flexibility.

An edge cloud is essentially a small data center integrating storage, networking, and platform services, and it will play a pivotal role in industrial automation, transportation, smart cities, autonomous driving, and other fields. Because data still has to be uploaded to a cloud node, tasks involving large data volumes place higher demands on the network; fortunately, 5G-A and 6G provide the transmission foundation that edge clouds need.

Edge Cloud Integration

(Image source: Leitech production)
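To make the "cloud-edge-device" collaboration described above more concrete, here is a minimal Python sketch, seen from the device side, of sending a latency-sensitive request to a nearby edge-cloud node and falling back to the central cloud when that node is unreachable; the endpoint URLs, payload format, and timeouts are hypothetical assumptions, not any vendor's actual interface.

```python
# Device-side sketch of cloud-edge-device collaboration: prefer the nearby
# edge-cloud node for latency-sensitive requests, fall back to the central
# cloud if the edge node is slow or unreachable. Endpoints are hypothetical.
import json
import urllib.error
import urllib.request

EDGE_NODE = "http://edge-node.local:8080/infer"      # hypothetical nearby edge node
CENTRAL_CLOUD = "https://central.example.com/infer"  # hypothetical central cloud

def infer(payload: dict, edge_timeout_s: float = 0.2) -> dict:
    """Try the edge node first with a tight timeout; on failure, use the cloud."""
    body = json.dumps(payload).encode("utf-8")
    for url, timeout in ((EDGE_NODE, edge_timeout_s), (CENTRAL_CLOUD, 5.0)):
        request = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return json.loads(response.read())
        except (urllib.error.URLError, TimeoutError):
            continue  # this tier is unreachable or too slow; try the next one
    raise RuntimeError("no edge or cloud endpoint reachable")

# Example: a latency-sensitive request from an end device
# result = infer({"task": "detect", "frame_id": 42})
```

The same pattern also illustrates the fault tolerance mentioned earlier: losing one edge node degrades latency but does not take the service down.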

At MWC25, chipmaker Qualcomm not only released the X85 baseband with 5G-A support but also announced the fourth generation of its Snapdragon platform, and said it will work with IBM to extend enterprise-grade generative AI from the cloud to the edge on the strength of 5G-A and AI. MediaTek said that turning device clouds into edge clouds through hybrid computing will be a key component of the 6G era. China Mobile, for its part, raised 5G-A transmission rates and system capacity with an uplink and downlink three-carrier-aggregation solution and applied AI for intelligent sensing and control of data transmission.

With 5G-A in place, edge computing will provide the core technical support for localized data processing in edge clouds, while the integration of cloud computing resources in turn extends what edge computing can do.

For enterprises, an edge cloud deployed nearby can markedly improve the immediacy and efficiency of data exchange and AI inference while keeping information secure. In scenarios that demand fast responses, such as smart cities, industrial automation, and autonomous driving, edge clouds are more reliable and efficient than traditional cloud computing.

Edge Cloud in Smart Cities

(Image source: Leitech production)

In addition, the open-sourcing of large AI models will be a major factor driving edge-cloud adoption. Take DeepSeek-R1 as an example: because it is free for commercial use, enterprises avoid heavy licensing costs and can fine-tune it on their own data to make it more specialized for their field.
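As an illustration of that fine-tuning path, here is a minimal sketch of adapting an open-weight model to a company's own corpus with LoRA adapters, using the Hugging Face transformers, datasets, and peft libraries; the checkpoint name, data file, and hyperparameters are assumptions for illustration, not a recipe from DeepSeek or any vendor at MWC25.

```python
# Minimal sketch: specialize an open-weight model on in-house data with LoRA,
# so only small adapter matrices are trained while the base weights stay frozen.
# The model name, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # example open-weight checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE)
model = get_peft_model(model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"]))

# "company_corpus.jsonl" is a hypothetical file of {"text": ...} records.
dataset = load_dataset("json", data_files="company_corpus.jsonl")["train"]
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                              max_length=512),
                      batched=True, remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("lora-out")  # only the small adapter weights are saved
```

Because only the adapters are trained and stored, the resulting artifact is small enough to ship to an edge node alongside the frozen base model.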

Some companies have also released open-source models for professional scenarios, such as Baichuan Intelligence's Baichuan4-Finance, which focuses on finance and is free for commercial use. The arrival of such models will further lower the cost for small and medium-sized enterprises to deploy on-device AI, edge computing, and edge clouds.

Edge cloud will ultimately impact everyone's life.

At MWC25, when communications companies presented 5G-A and 6G, they generally focused on business-facing (B-end) scenarios, stressing what these technologies can do for enterprises, and the same was true for edge computing and edge cloud. Both Lenovo and ZTE argue that edge computing devices can lighten the load on small and medium-sized enterprises, but the less-discussed consumer-facing (C-end) scenarios also leave plenty of room for edge clouds.

For example, the cloud gaming industry can move game rendering to nodes closer to users: Tianyi Cloud Gaming has built cloud esports on edge clouds, giving players sharper image quality, higher frame rates, and lower latency. Cloud phones, by contrast, previously failed to take off because immature technology left them plagued by high latency and server stutter; built on edge clouds, those problems can be eased, making low-latency, high-performance cloud phones a reality.

And not just phones: with ultra-fast 5G-A and 6G networks and low-cost AI solutions, future TVs, laptops, and other products could also move to the cloud, so the local device would no longer need a powerful chip, huge memory, or large flash storage. Working together, edge cloud and AI can intelligently manage the resources each cloud-based device consumes, sharply lowering the cost for users to get high-performance devices.

Edge Cloud in Future Devices

(Image source: Leitech production)

5G has been widely deployed for some time, yet its impact on daily life has been underwhelming, perhaps because the industries best suited to it have not yet matured. Now, with AI, edge computing, and edge clouds raising the bar for transmission rates and latency, the potential of 5G, 5G-A, and eventually 6G in consumer scenarios may finally be unleashed.

Thanks to the high bandwidth and low latency of these new communication technologies, edge clouds will empower business and consumer scenarios alike and allow some devices to move to the cloud. As its value continues to be tapped, edge cloud will also have the chance to grow rapidly, complementing edge computing and cloud computing on tasks that are relatively simple but latency-sensitive: after a large AI model finishes training on a cloud computing platform, it can be deployed to an edge-cloud platform for low-cost, low-latency inference.
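To round out the picture, here is a minimal, hypothetical sketch of the edge-node side of that last step: a checkpoint trained in the central cloud is loaded at an edge-cloud node and exposed over HTTP for nearby devices, the counterpart of the device-side routing sketch earlier; the model loader is a stand-in, and the checkpoint path and port are assumptions.

```python
# Edge-node sketch: serve a cloud-trained model as a local, low-latency
# inference endpoint for nearby devices. The loader is a stand-in and the
# checkpoint path and port are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def load_model(path: str):
    """Stand-in for loading a checkpoint that was trained in the central cloud."""
    return lambda text: {"echo": text}  # replace with real inference

MODEL = load_model("/models/cloud-trained-checkpoint")  # hypothetical path

class InferHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(MODEL(payload.get("text", ""))).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Devices on the local network call this node instead of the distant cloud.
    HTTPServer(("0.0.0.0", 8080), InferHandler).serve_forever()
```

Because the node sits on the same local or metro network as its users, the round trip stays short even though the heavy training work was done in the central cloud.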

Source: Leitech

Disclaimer: the copyright of this article belongs to the original author. It is reprinted here only to share more information. If the author information is marked incorrectly, please contact us promptly so we can correct or remove it. Thank you.