- Meta introduced its first consumer holographic AR glasses, showcasing a lightweight design and advanced features, though public availability is still pending.
- The budget-friendly Quest 3S was revealed, starting at $299, with mixed-reality capabilities and a reduced display resolution, while the Quest 3 sees a price drop.
- Meta launched a voice assistant with celebrity AI voices, enabling interaction across platforms and enhanced image analysis features.
- The latest Llama model offers multimodal capabilities, allowing for image interpretation and object identification, though it's restricted in Europe.
- Updated smart glasses featuring real-time AI processing and translation are set for release, promising an interactive user experience.
- New AI capabilities allow users to edit photos and answer questions about shared images, enhancing user engagement and interaction.
The buzz surrounding Meta Connect 2024 was palpable as the world eagerly awaited groundbreaking announcements from the tech giant. Now that the event has concluded, Meta has unveiled a range of innovative AI-oriented products, along with significant hardware and software updates. These developments mark a pivotal step in the company's mission to shape the future of artificial intelligence and the Metaverse.
For those who missed it, here are some key highlights and announcements that emerged from Meta Connect 2024.
Orion AR Glasses
Zuckerberg unveiled Orion, which Meta calls its first true consumer holographic AR glasses, although they won't be available to the public for some time. He highlighted their lightweight design, support for hand and eye tracking, and neural wristband interface, and showcased Orion with endorsements from notable early testers, including Nvidia CEO Jensen Huang.
Positioned as the future of Meta's AR initiatives, these glasses are significantly smaller than Snap’s Spectacles 5 and feature tiny projectors embedded in the temples for a heads-up display, reminiscent of Google Glass.
Zuckerberg also emphasized that while Orion is a decade in the making and offers a glimpse into an exciting future, the technology is still in the fine-tuning phase before becoming a consumer product.
Quest 3S
At Meta Connect 2024, Meta officially unveiled the Quest 3S, following several pre-event leaks. Positioned as a budget-friendly alternative to the Quest 3, the Quest 3S starts at $299 for the 128GB model and $399 for 256GB. While it offers a similar wireless experience and compatibility with the existing Quest app and game library, Meta is particularly excited about its mixed-reality capabilities.
To make the Quest 3S more affordable, Meta has made some compromises. The display is downgraded to the same 1832×1920 per-eye resolution found in the Quest 2, a significant cost-cutting move from the Quest 3’s 2064×2208 panels. The field of view remains identical to the Quest 2 at 96H/90V, slightly narrower than the Quest 3’s 110H/96V. Despite these reductions, the 3S is lighter than the Quest 3 and offers better average battery life.
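Those resolution figures translate into a sizeable pixel-count gap between the two headsets. A quick back-of-the-envelope calculation (plain arithmetic based only on the resolutions stated above, not official Meta figures):

```python
# Per-eye display resolutions of each headset.
quest_3s = 1832 * 1920   # Quest 3S (same panel resolution as the Quest 2)
quest_3 = 2064 * 2208    # Quest 3

# Fraction of pixels the cheaper headset gives up per eye.
deficit = 1 - quest_3s / quest_3

print(f"Quest 3S pixels per eye: {quest_3s:,}")
print(f"Quest 3  pixels per eye: {quest_3:,}")
print(f"Quest 3S renders {deficit:.0%} fewer pixels per eye")
```

In other words, the Quest 3S pushes roughly 23% fewer pixels per eye than the Quest 3, which is the main trade-off behind the $200 price gap.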
Alongside the Quest 3S, Meta announced a price cut for the Quest 3: the 512GB model now costs $499, down from $649. All Quest 3 and 3S purchases will include a copy of Batman: Arkham Shadow and three months of Quest+.
With the arrival of the Quest 3S, Meta is officially discontinuing the Quest 2 and Quest Pro models. The Quest 3S ships on October 15, and early impressions suggest it will appeal to budget-conscious users seeking a strong mixed-reality experience.
Meta AI Voice Assistant
Meta rolled out its new AI voice assistant, allowing users to interact with Meta AI through spoken questions and receive vocal responses via Messenger, Facebook, WhatsApp, and Instagram. The feature offers multiple voice options, including AI-generated voices of celebrities like Dame Judi Dench, John Cena, and Awkwafina.
While it’s similar to Google’s Gemini Live, which transcribes speech before responding with a synthetic voice, Meta’s high-profile celebrity voices set it apart. The company reportedly spent millions on these likenesses, though it remains to be seen how impactful this will be.
In addition, Meta AI has received an upgrade that enables image analysis. Users can now upload photos—for instance, a flower or a dish—and ask Meta AI for identification or cooking instructions, though accuracy may vary.
Meta is also testing a translation tool for Instagram Reels, designed to dub creators' speech and auto-lip-sync in another language. The pilot is currently limited to select videos from Latin America, offering translations between English and Spanish.
Llama 3.2 AI Model
Earlier this week, Google rolled out its upgraded Gemini models, and just before that, OpenAI introduced its o1 model. But on Wednesday, Meta took center stage at its annual Meta Connect 2024 developer conference in Menlo Park, unveiling its latest advancements.
Meta's Llama models have now evolved to version 3.2, bringing multilingual and multimodal capabilities. The Llama 3.2 11B and 90B models can now interpret charts, caption images, and identify objects in photos based on a simple description.
These models can analyze a park map to estimate the length of a trail or predict where the terrain becomes steeper. When presented with a company's revenue graph, they can identify the top-performing months.
However, Llama 3.2 11B and 90B are unavailable in Europe due to EU regulations, meaning that certain Meta AI features, like image analysis, remain restricted for European users.
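For developers, a multimodal Llama 3.2 model is typically queried with a chat-style request in which an image and a text prompt travel together. The sketch below shows one plausible shape for such a request, modeled on the OpenAI-compatible message format that several Llama hosting providers expose; the model name, payload layout, and helper function are illustrative assumptions, not an official Meta API.

```python
import base64


def build_vision_request(image_bytes: bytes, question: str,
                         model: str = "llama-3.2-11b-vision") -> dict:
    """Build an OpenAI-style chat payload pairing an image with a question.

    The model name and payload shape here are assumptions for illustration;
    check your hosting provider's documentation for the exact format it accepts.
    """
    # Images are commonly embedded inline as a base64 data URL.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{encoded}"},
                    },
                ],
            }
        ],
    }


# Example: ask about a revenue chart (fake image bytes stand in for a real PNG).
payload = build_vision_request(b"\x89PNG...", "Which months performed best?")
print(payload["messages"][0]["content"][0]["text"])
```

The key idea is simply that the image and the question arrive in the same user turn, which is what lets the model ground its answer in the picture.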
Ray-Ban Meta Smart Glasses
Zuckerberg also unveiled updates to the company’s Ray-Ban Meta smart glasses, continuing to push the idea that smart glasses could be the next big consumer tech. Set for release later this year, the glasses will integrate new AI features along with familiar smartphone functions.
Among the updates, real-time AI video processing and live language translation stand out as key innovations. Other additions, like QR code scanning, reminders, and partnerships with iHeartRadio and Audible, aim to bring popular smartphone features to the glasses.
Currently, the glasses can snap photos and describe them. Once the new multimodal features launch later this year, they will also support video-based interactions, letting users ask the Ray-Ban Meta glasses questions about what’s in front of them and receive real-time verbal responses from Meta AI.
AI Editing
Meta announced that its AI can now assist users in editing photos and answering questions about images they share. These new features are enabled by Meta AI’s enhanced multimodal capabilities, powered by the advanced Llama 3.2 models. This upgrade allows users to share photos in chats, not just text, similar to Google Gemini and OpenAI's ChatGPT.
With this functionality, Meta AI can interpret images and respond to related questions. For instance, users can upload a picture of a flower and ask the AI to identify it, or share a dish and inquire about its recipe. However, the accuracy of these responses remains to be fully evaluated.
Another notable feature of the new photo support is the ability to edit images using AI. After uploading a photo, users can request Meta AI to make changes—such as adding or removing objects, altering outfits, or modifying the background, like inserting a rainbow into the sky.
Meta Connect 2024 showcased a variety of exciting developments that signal the company’s ambitious strides in AI and AR technology. From the unveiling of Orion and the Quest 3S to enhancements in Meta AI’s capabilities, the event highlighted innovations that will shape the digital landscape in the days and months to come. As these features roll out, users can look forward to a transformative experience across Meta's platforms.
Edited by Harshajit Sarmah