**AI Breakthrough: Ray-Ban Meta Smart Glasses Set to Revolutionize Multimodal AI**

Seattle, Washington – When the Ray-Ban Meta Smart Glasses first launched last fall, they impressed users with their content-capture capabilities and headphone functionality. However, they lacked a crucial feature: multimodal AI. This capability, which lets an AI assistant process several kinds of input, such as photos, audio, and text, was not initially available to all users.

Following an early-access program rollout, Meta has announced that multimodal AI will now be accessible to everyone. The timing is notable: the recently released Humane AI Pin has drawn mixed reviews from critics and users alike, and the performance of AI gadgets in general has come under scrutiny. The arrival of multimodal AI on the Ray-Ban Meta Smart Glasses suggests the category still has room to improve.

The primary AI function of the Meta glasses is triggered by the command “Hey Meta, look and…” Users can then ask the AI to identify plants, translate text, provide information about landmarks, and more. The glasses capture an image, the AI processes it in the cloud, and the answer is read aloud to the user. While the AI’s capabilities are not endless, users enjoy probing its limits.
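The capture-then-answer loop described above can be sketched roughly as follows. This is a minimal illustration of the flow, not Meta’s actual software: every function and name here is hypothetical, and the real system involves on-device wake-word detection, streaming audio, and a proprietary cloud API.

```python
# Hypothetical sketch of the "Hey Meta, look and..." pipeline.
# All names here are illustrative stand-ins, not Meta's real API.

def handle_voice_command(command: str) -> str:
    """Route a wake-word command through a capture -> cloud -> speech loop."""
    prefix = "hey meta, look and "
    if not command.lower().startswith(prefix):
        return "ignored"  # not a multimodal "look and..." request
    question = command[len(prefix):]          # e.g. "identify this plant"
    image = capture_photo()                   # glasses snap a still frame
    answer = query_cloud_model(image, question)  # inference happens off-device
    speak(answer)                             # delivered via open-ear speakers
    return answer

# Stub implementations so the sketch runs end to end.
def capture_photo() -> bytes:
    return b"<jpeg bytes>"

def query_cloud_model(image: bytes, question: str) -> str:
    return f"Answer to: {question}"

def speak(text: str) -> None:
    print(text)

handle_voice_command("Hey Meta, look and identify this plant")
```

The key design point the article describes is that no inference runs on the glasses themselves: the frame is uploaded, processed in the cloud, and only the spoken answer comes back.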

One user shared their experience using the AI to identify car models, with mixed success: it named some vehicles correctly but misidentified others, highlighting the need for further improvement. Another user tested its ability to identify various succulent plants, again with uneven results, illustrating the hit-or-miss nature of interacting with today’s multimodal AI.

In a humorous anecdote, one user recounted a competition with their spouse to get the AI to identify a large rodent in a neighbor’s backyard. Despite limitations such as the lack of zoom, the AI eventually identified the groundhog. Users find the AI most useful when out and about, where it serves as a convenient, hands-free extension of the phone for identifying objects of interest.

While the AI’s functionality has its limits, users appreciate the glasses’ familiar design and overall performance. They can be used for livestreaming, as a POV camera, and as open-ear headphones, making them versatile beyond the AI feature alone. The presence of AI on the glasses also helps acclimate users to wearable technology, in line with the broader trend of integrating technology into everyday objects.