AI Glasses' Undercover Battle for 2025: A Review of Technological Routes and Commercialization Status

03/07 2025

Since Ray-Ban Meta validated the feasibility of consumer-grade AI glasses, this sector has shown unprecedented vitality both domestically and internationally.

From the "full-color light waveguide solution" promised by Thunderbird X3 Pro, to Rokid's latest iteration of spatial computing modules, and Xiaomi's binocular display system, more than 40 manufacturers have deployed AI glasses, with over 40 models appearing at CES 2025 alone. The year 2025 is considered a crucial node for the industry to collectively enter technological verification. New Eyes roughly estimates that at least 50 new models are awaiting release this year.

AI glasses are viewed as the next-generation super hardware entry point, for underlying reasons that fundamentally differ from the AR/VR path. Compared with the failure of the first-generation Google Glass and the VR industry's decline a decade later, the core logic of AI glasses is more pragmatic, centering on scenario adaptability and technology integration. Simply put, the emphasis is on landing in high-frequency everyday scenarios.

As products transition from interim solutions (screenless, Bluetooth-only, or camera-equipped) toward integrated AR+AI applications, the ultimate expectation is for them to become extensions of human senses. The main thread of this interaction revolution is therefore not only solving hardware iteration and application problems, but also figuring out how to break through on both practicality and value.

1. Make the Glasses First, Then Consider AI

In the consumer electronics sector, shipments of 2 million units have long been considered the lifeline for AI hardware, marking a product's successful passage through market scrutiny. According to the latest data from Wellsenn XR, global sales of AI glasses last year reached 2.34 million units, with Ray-Ban Meta alone accounting for 2.24 million.

This data is telling. Before Ray-Ban Meta became a hit, dozens of domestic and international companies were developing various smart glasses, but none of them, whether Google, Microsoft, Apple, or Huawei, truly reached the mass market. Meta, by contrast, solved in under three years the mass-production challenges that had plagued the field for over a decade.

The reasons behind this are closely related to its product positioning and design philosophy.

Before officially making AI glasses, Zuckerberg had invested nearly a decade in the AR Orion project, but commercialization was never achieved due to high costs. Although Meta firmly believes that AR will replace smartphones in the future, it is evident that thorny real-world problems require more gradual solutions.

The significance of Ray-Ban Meta lies in its role as a transitional solution between traditional glasses and AR headsets. Other smart glasses on the market are either overly showy (chasing the immersive sci-fi feel of AR) or overly mundane (like Bluetooth audio glasses); Meta took the opposite approach: first build product acceptance and practicality, then verify the feasibility of cutting-edge technology.

They didn't go all-in on AI from the start but focused on creating a pair of glasses that combine technology and fashion, gradually adding other features, and striking a balance between performance, comfort, and price.

Observers summarize Ray-Ban Meta's selling points this way: it is first and foremost a decent pair of glasses, weighing 49g and priced under $300. For roughly the price of ordinary Ray-Ban sunglasses, buyers get "premium-brand glasses + Bluetooth headset + camera + voice assistant" in one device, making it highly cost-effective. Many industry insiders add that Meta's popularity owes more to Ray-Ban than to the AI functionality itself.

The market result indeed matches Meta's positioning: mass consumers, rather than niche groups such as geeks and business professionals.

Comparing Ray-Ban Meta with earlier smart glasses: beyond early tech toys like Google Glass and HoloLens, which were bulky and expensive, Huawei, Nreal, Bose, OPPO, and others also launched similar products. The issues were apparent: the devices were too heavy or too expensive, their functionality was limited and incomplete, or their battery life could not support all-day use.

Against this backdrop, Ray-Ban Meta serves as a watershed. On the one hand, it rekindled outside interest in the commercialization potential of AI glasses; on the other, its lightweight "audio + camera" design replaced the previous thinking on AI glasses and gained popularity in 2024.

Over the past year, Ray-Ban Meta has successfully sparked a wave of industry catching up and benchmarking. Manufacturers from various industries have poured into the sector, including professional-grade AR players like Thunderbird, Meizu, and Rokid, as well as entrepreneurial players like Inmo, Gudong, and Sanag. Even tech giants like Baidu, ByteDance, Tencent, and Xiaomi have corresponding preparation plans.

However, the number of actually released products is still limited. From Sanag A1, the first domestically mass-produced AI glasses, to the Thunderbird V3, which is lighter than ordinary glasses, they have largely stayed within Ray-Ban Meta's functional framework. But for these first-generation products, what matters more than innovation right now is refining appearance, price, camera, storage, weight, and battery life while following Meta's lead.

In addition, following Meta's collaboration with Ray-Ban, brands like Rokid, Thunderbird, and Beecom Technology have continued the same market approach, successively partnering with traditional eyewear manufacturers like Bolon, Boss, and Formosa Optical. While collaborating across the upstream and downstream supply chains, they leverage the experience of traditional eyewear manufacturers to promote product popularization.

2. From Audio + Camera to AI + AR: A Crucial Step for AI Glasses

"The AI glasses market only seems lively, but product homogenization is severe."

Last year, Thunderbird founder Li Hongwei publicly stated that since most AI glasses are ordinary glasses fitted with AI, headphones, cameras, and other functions, the industry's potential has not been fully tapped. Meta CTO Bosworth expressed a similar view in a blog post: Ray-Ban Meta essentially simplifies complex technology into a mass-acceptable smart accessory, and is only the first step toward "the AR glasses we ultimately want to build."

As the market gradually opens up and moves towards the ultimate vision of a "natural interaction terminal," AI glasses still need to break through multiple key technologies.

One of the most important steps is "display," which is also a major focus in the industry this year.

Take Ray-Ban Meta. After upgrading real-time conversation, real-time translation, and music-recognition features, its next big move is to add a display. The catch is that the product's earlier success relied on the weight saved by abandoning AR display, so the outside world wonders how Meta will add a display without sacrificing everything else.

Judging from the AI glasses currently awaiting release, beyond conventional audio and camera glasses, many players are preparing AR effects. Take the recently popular Rokid Glasses: founder Misa publicly demonstrated features such as teleprompting, text navigation, and real-time translation, with wearers able to clearly see a monochrome interface rendered in real time on the lens.

Rokid adopts the currently most mainstream "light waveguide" display solution, similar in principle to a HUD. Simply put, when light waveguide technology is applied to a lens, the result is a light waveguide lens. A small in-coupling opening sits at the edge of the lens; images are projected into the lens through this opening, reflect repeatedly between the lens's two surfaces, and finally exit into the human eye to form an image.
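The "reflecting between the two surfaces" above is total internal reflection, and a simplified sketch of the physics shows why the image stays trapped inside the lens (the refractive indices below are illustrative assumptions; real waveguide lenses use diffractive or reflective coupling structures rather than bare glass surfaces). Light stays confined whenever its angle of incidence on the lens surface exceeds the critical angle:

\[
\theta_c = \arcsin\!\left(\frac{n_{\text{air}}}{n_{\text{glass}}}\right) \approx \arcsin\!\left(\frac{1.0}{1.5}\right) \approx 41.8^\circ
\]

An image injected at a steeper angle than \(\theta_c\) bounces along inside the lens without escaping, until an out-coupling structure redirects it toward the eye; this is what lets the projector sit at the edge of the frame while the image appears in front of the pupil.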

Compared to visual solutions like the Micro-OLED in Vision Pro or the LCoS common in industrial settings, pairing light waveguide lenses with Micro LED light engines makes the display module lighter (under 10g) and relatively cheaper. This solution will appear on new AI glasses such as the Thunderbird X3 Pro, StarV Glasses, ThundeRobot AURA, and Gudong; but because mass production is difficult and costly, and the optics are prone to chromatic dispersion (rainbow artifacts), most manufacturers settle for monochrome displays.

Many AR companies have painted such scenes for users: anchoring application windows at different positions in real space, automatically displaying street-sign information while walking unfamiliar streets, grabbing an Excel spreadsheet in mid-air with a wave, selecting data columns with eye movements. From AR navigation and virtual information overlay to real-time translation, the core functions of AI glasses all depend on display technology.

As Zuckerberg said, "AI glasses without display functionality are merely a product of immature AR development." To some extent, the display is the carrier through which the technology of AI glasses becomes visible, determining whether the product can cross the gap from "showy" to "practical." On that foundation, the application-layer possibilities of AI glasses will broaden further.

Of course, as more AI glasses achieve AR effects, new challenges will keep emerging. How to balance computing power and energy consumption? How to break through the limits of monochrome light waveguides? How to render richer, more personalized application content on the lens? The answers will directly determine whether AI glasses evolve into mobile computing platforms, or even multimodal systems that truly understand their surroundings.

3. What Determines the Upper Limit of AI Glasses May Not Be AI

The outside world views 2025 as the first year of AI glasses' development, mainly based on two expectations: one is that the consumer market is about to usher in a "hundred glasses war," and the other is that domestic large models may become the core driving force for the explosion of AI glasses demand.

As we all know, domestic large models have experienced explosive growth over the past two years but have already shown signs of homogeneity, with technological implementation challenges and commercialization dilemmas intertwined. Some manufacturers have even fallen into vicious price competition. Against this backdrop, the industry holds high hopes for AI glasses, seeing them as a key carrier for breaking through the application bottleneck of large models.

So far, among released products, brands such as Sanag, Baidu, and Rokid have launched AI glasses equipped with domestic large models. The recent popularity of DeepSeek adds momentum: through layered architecture design and an open-access strategy, it can further reduce compute consumption, battery-life pressure, and hardware costs, theoretically enabling more hardware devices to integrate AI capabilities.

However, many industry observers remain cautious. Several insiders point out that AR glasses, AI glasses, and large models alike are in a development bottleneck. Large-model capabilities are converging, and the functional limits of AI glasses are obvious: basic features are concentrated in low-frequency scenarios such as voice interaction, real-time translation, and navigation reminders, and the sensitivity and accuracy of information capture remain insufficient.

User experience surveys show that consumers' perception of "intelligence" has not significantly improved. Most people are more concerned about basic attributes such as photography quality and wearing comfort, and even believe that existing AI functions have not yet reached the threshold of true intelligence.

After the early hype and later cooling of phone-side large models, the Ai Pin, and the Rabbit R1, consumer expectations for AI technology have declined. The integrated nature of AI glasses amplifies this contradiction: stacking features inevitably creates the twin challenges of keeping the device lightweight and sustaining battery life.

From a corporate perspective, Meta, the benchmark in this field, has not been aggressive about AI features. Compared with peers' enthusiasm for AI, Ray-Ban Meta's in-house development and updates of AI functions have remained restrained, even sluggish.

In fact, Zuckerberg has a far-reaching ambition.

He has repeatedly emphasized the strategic value of AI glasses as the "next-generation computing platform," arguing that replacing smartphones first requires building a new ecosystem. As competition in AI glasses intensifies, Meta's long-term plan, rather than competing on hardware alone, is to turn Ray-Ban Meta into an open platform, giving third parties room to run self-developed apps on the device by listing them in an app store.

This vision coincides with Rokid's. As a system software vendor in the domestic AR field, Rokid established its positioning as an "operating system" from an early stage. Misa set a goal for himself and the company to become the "Android" of the spatial computing era and made it clear: "The competition in AR has entered a battle of system software and ecosystem. In the future, personal computing platforms will inevitably evolve into operating systems with unified interaction paradigms."

Years ago, through the full compatibility of Android applications achieved by the AR Studio suite, Rokid demonstrated ecological scalability in both the consumer market and industrial scenarios. By the end of 2023, the number of developers had reached 6,000, with one-third being enterprise-level developers. In Misa's view, these long-accumulated resources can also spill over into the ecological development of AI glasses.

As Alex Kipman, former head of Microsoft's HoloLens 2 team, predicted, the ultimate battlefield for AI glasses is not within the lenses but in the reconstruction of the entire software ecosystem.

Just as touchscreen smartphones disrupted physical buttons, for AI glasses to replace smartphones, they must also nurture an interaction paradigm that surpasses touchscreens—only then will AI glasses truly become a disruptive carrier for the next-generation computing platform.
