Tag: meta

  • Meta confirms it may train its AI on any image you ask Ray-Ban Meta AI to analyze


    We recently asked Meta if it trains AI on photos and videos that users take on the Ray-Ban Meta smart glasses. The company originally didn’t have much to say.

    Since then, Meta has offered TechCrunch a bit more color.

    In short, any image you share with Meta AI can be used to train its AI.

    “[I]n locations where multimodal AI is available (currently US and Canada), images and videos shared with Meta AI may be used to improve it per our Privacy Policy,” said Meta policy communications manager Emil Vazquez in an email to TechCrunch.

    In a previous emailed statement, a spokesperson clarified that photos and videos captured on Ray-Ban Meta are not used by Meta for training as long as the user doesn’t submit them to AI. However, once you ask Meta AI to analyze them, those photos fall under a completely different set of policies.

    In other words, the company is using its first consumer AI device to create a massive stockpile of data that could be used to create ever-more powerful generations of AI models. The only way to “opt out” is to simply not use Meta’s multimodal AI features in the first place.

    The implications are concerning because Ray-Ban Meta users may not understand they’re giving Meta tons of images – perhaps showing the inside of their homes, loved ones, or personal files – to train its new AI models. Meta’s spokespeople tell me this is clear in the Ray-Ban Meta’s user interface, but the company’s executives either initially didn’t know or didn’t want to share these details with TechCrunch. We already knew Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But now, Meta has expanded this definition of “publicly available data” to anything people look at through its smart glasses and ask its AI chatbot to analyze.

    This is particularly relevant now. On Wednesday, Meta started rolling out new AI features that make it easier for Ray-Ban Meta users to invoke Meta AI in a more natural way, meaning users will be more likely to send it new data that can also be used for training. In addition, the company announced a new live video analysis feature for Ray-Ban Meta during its 2024 Connect conference last week, which essentially sends a continuous stream of images into Meta’s multimodal AI models. In a promotional video, Meta said you could use the feature to look around your closet, analyze the whole thing with AI, and pick out an outfit.

    What the company doesn’t promote is that you are also sending these images to Meta for model training.

    Meta spokespeople pointed TechCrunch towards its privacy policy, which plainly states: “your interactions with AI features can be used to train AI models.” This seems to include images shared with Meta AI through the Ray-Ban Meta smart glasses, but Meta still wouldn’t clarify.

    Spokespeople also pointed TechCrunch towards Meta AI’s terms of service, which states that by sharing images with Meta AI, “you agree that Meta will analyze those images, including facial features, using AI.”

    Meta just paid the state of Texas $1.4 billion to settle a court case related to the company’s use of facial recognition software. That case was over a Facebook feature rolled out in 2011 called “Tag Suggestions.” By 2021, Facebook made the feature explicitly opt-in, and deleted billions of people’s biometric information it had collected. Notably, several of Meta AI’s image features are not being released in Texas.

    Elsewhere in Meta’s privacy policies, the company states that it also stores all the transcriptions of your voice conversations with Ray-Ban Meta, by default, to train future AI models. As for the actual voice recordings, there is a way to opt out: when you first log in to the Ray-Ban Meta app, you can choose whether voice recordings can be used to train Meta’s AI models.

    It’s clear that Meta, Snap, and other tech companies are pushing for smart glasses as a new computing form factor. All of these devices feature cameras that people wear on their face, and they’re mostly powered by AI. This rehashes a ton of privacy concerns we first heard about in the Google Glass era. 404 Media reported that some college students have already hacked the Ray-Ban Meta glasses to reveal the name, address, and phone number of anyone they look at.


  • Meta won’t say whether it trains AI on smart glasses photos


    Meta’s AI-powered Ray-Bans have a discreet camera on the front, for taking photos not just when you ask them to, but also when their AI features trigger it with certain keywords such as “look.” That means the smart glasses collect a ton of photos, both deliberately taken and otherwise. But the company won’t commit to keeping these images private.

    We asked Meta if it plans to train AI models on the images from Ray-Ban Meta’s users, as it does on images from public social media accounts. The company wouldn’t say.

    “We’re not publicly discussing that,” said Anuj Kumar, a senior director working on AI wearables at Meta, in a video interview with TechCrunch on Monday.

    “That’s not something we typically share externally,” said Meta spokesperson Mimi Huggins, who was also on the video call. When TechCrunch asked for clarification on whether Meta is training on these images, Huggins responded, “we’re not saying either way.”

    Part of the reason this is especially concerning is because of the Ray-Ban Meta’s new AI feature, which will take lots of these passive photos. Last week, TechCrunch reported that Meta plans to launch a new real-time video feature for Ray-Ban Meta. When activated by certain keywords, the smart glasses will stream a series of images (essentially, live video) into a multimodal AI model, allowing it to answer questions about your surroundings in a low-latency, natural way.

    That’s a lot of images, and they’re photos a Ray-Ban Meta user might not consciously be aware that they’re taking. Say you asked the smart glasses to scan the contents of your closet to help you pick out an outfit. The glasses are effectively taking dozens of photos of your room and everything in it, and uploading them all to an AI model in the cloud.

    What happens to those photos after that? Meta won’t say.

    Wearing the Ray-Ban Meta glasses also means you’re wearing a camera on your face. As we found out with Google Glass, that’s not something other people are universally comfortable with, to put it lightly. So you’d think it’s a no-brainer for the company that’s doing it to say, “Hey! All your photos and videos from your face cameras will be totally private, and siloed to your face camera.”

    But that’s not what Meta is doing here.

    Meta has already declared that it is training its AI models on every American’s public Instagram and Facebook posts. The company has decided all of that is “publicly available data,” and we might just have to accept that. It and other tech companies have adopted a highly expansive definition of what is publicly available for them to train AI on, and what isn’t.

    However, surely the world you look at through its smart glasses is not “publicly available.” While we can’t say for sure that Meta is training AI models on your Ray-Ban Meta camera footage, the company simply wouldn’t say for sure that it isn’t.

    Other AI model providers have clearer rules about training on user data. Anthropic says it never trains on a customer’s inputs into, or outputs from, its AI models. OpenAI also says it never trains on user inputs or outputs through its API.

    We’ve reached out to Meta for further clarification here, and will update the story if they get back to us.


  • Meta rethinks smart glasses with Orion


    Meta Connect 2024 was this week, showcasing new hardware and software to support two of the company’s big ambitions: AI and the metaverse. CEO Mark Zuckerberg announced new Quest headsets, updates to Meta’s Llama AI model, and real-time video capabilities of Ray-Ban Meta smart glasses. The biggest reveal, though, was Orion, a true AR glasses prototype touted as “the most advanced glasses the world has ever seen.”

    OpenAI CTO Mira Murati announced this week that she is leaving the company after more than six years. Hours after the announcement, OpenAI’s chief research officer, Bob McGrew, and a research VP, Barret Zoph, also left the company. The high-level departures come less than a week before the start of OpenAI’s annual developer conference.

    One of CloudKitchens’ earliest employees is suing the company. In the lawsuit, Isabella Vincenza alleges wrongful termination, sex discrimination, and a hostile work environment, including an intense “bro culture” at the company. Vincenza also claims that she was “retaliated against for standing up for herself” following her pregnancy and subsequent maternity leave. 


    This is TechCrunch’s Week in Review, where we recap the week’s biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.


    News

    Talk to me, ChatGPT: OpenAI rolled out Advanced Voice Mode following some delays and controversy. The feature has an updated blue spherical look, five new voices, and improved accent capabilities for customers in ChatGPT’s Plus and Teams tiers. Read more

    YC Demo Day: Y Combinator kicked off its two-day “Demo Day” event showcasing what the most recent YC batch companies are building. Here are the companies worth paying attention to out of the event. (Spoiler alert: They pretty much all use AI.) Read more

    Amazon employees vs. RTO: Amazon CEO Andy Jassy announced that employees will be expected to work from the office five days a week starting in 2025. But an anonymous survey created by workers reveals that many who have grown accustomed to a hybrid work structure are “strongly dissatisfied.” Read more

    How much can a phone wallpaper cost? Marques Brownlee, known on YouTube as MKBHD, launched the wallpaper app Panels, where he’s curating high-quality digital wallpapers from artists. But in order to access high-resolution wallpapers without ads, users need to cough up about $50 per year. Read more

    WordPress vs. WP Engine: A heated legal battle is brewing between WordPress founder and Automattic CEO Matt Mullenweg and WP Engine — which hosts websites built on WordPress — after Mullenweg wrote a blog post calling WP Engine a “cancer to WordPress.” Read more

    X switches up the block feature: X will soon change how its block feature works so that accounts you have blocked will still be able to see your public posts. Elon Musk clarified that blocked accounts still won’t be able to engage with users who have blocked them. Read more

    RevenueCat turns up the heat: Subscription management platform RevenueCat acquired Dipsea, an app offering subscriptions to “spicy” audiobooks. The idea is to bring a subscription-based app in-house to serve as a testing ground for RevenueCat’s new features. Read more

    RIP, TikTok Music: ByteDance is shuttering its music streaming service, TikTok Music, in November. TikTok Music was rooted in a ByteDance product called Resso, and the service was later available in Indonesia, Brazil, Australia, Singapore, and Mexico. Read more

    Meta gets hit with another privacy penalty: Meta has been reprimanded and fined $101.5 million (at current exchange rates) by Ireland’s Data Protection Commission for a 2019 breach that exposed hundreds of millions of Facebook passwords. Read more

    Hands-on with Plaud’s NotePin: TechCrunch’s Brian Heater has been testing Plaud’s $169 ChatGPT-powered NotePin to transcribe meetings and take notes. Unlike other AI pins, Plaud’s product feels like a solution to real issues, he argues. Read more

    Analysis

    Sam Altman goes “god mode”: OpenAI CEO Sam Altman has historically pitched AI as the solution to the world’s problems, despite its significant impact on the environment. In a new rose-colored-glasses blog post, Altman presents an incredibly positive update on the state of AI, hyping its world-changing potential. But, as TechCrunch’s Sarah Perez notes, much of what he writes is seemingly meant to make skeptics see how much AI matters and could well have the opposite result. Read more


  • Meta offers a glimpse through its supposed iPhone killer: Orion


    For years, Silicon Valley and Wall Street have questioned Mark Zuckerberg’s decision to invest tens of billions of dollars into Reality Labs. This week, Meta’s wearables division unveiled a prototype of its Orion smart glasses, a form factor the company believes one day could replace the iPhone. That idea sounds crazy… but maybe a little less crazy than it did a week ago.

    Orion is a prototype headset that combines augmented reality, eye and hand tracking, generative AI, and a gesture-detecting wristband. Through micro LED projectors and silicon carbide lenses (which are quite expensive), Meta seems to have cracked a longstanding AR display challenge. The idea is that you can look through Orion — you know, like a pair of glasses — but also see application windows projected on the lenses that appear as if they’re embedded in the world around you. Ideally, you can use your hands, eyes, and voice to navigate the environment.

    The Orion smart glasses need a wristband and wireless compute puck to work. (Image Credits: Meta)

    Though to be clear, Meta’s Orion smart glasses are chunkier than your average readers, reportedly cost $10,000 a pop, and won’t be available for sale anytime soon. We’re talking years from now. All the technology in Orion is relatively young, and all of it needs to get cheaper, better, and smaller to work its way into a pair of smart glasses you can buy at the mall. Zuckerberg says the company has already been working on Orion for 10 years, but there’s still no path to a sellable product.

    However, Meta is hardly the only company trying to put a smartphone replacement on your face.

    This month, Snap unveiled its latest generation of Spectacles smart glasses, which are larger than Orion and have a more limited field of view. One former Snap engineer called the latest Spectacles “obviously bad” — though you can actually order them. Google hinted during its I/O conference in May that it, too, is working on a pair of smart glasses, perhaps a revamp of its failed Google Glass experiment from last decade. Apple is reportedly working on AR glasses that sound a lot like Orion. And we can’t rule out Jony Ive’s new startup, LoveFrom, which he recently confirmed is working on an AI wearable with OpenAI (though we don’t know if they’re glasses, a pin, or something else entirely).

    What’s brewing is a race among Big Tech’s richest companies to create a sleek pair of smart glasses that can do everything your smartphone can — and hopefully something more. Meta’s prototype made two things clear: there is something there, but we’re not “there” yet.

    These devices are a notable departure from the Quest virtual reality headsets Meta has been pushing for years now, and Apple’s Vision Pro. There’s a lot of similar technology involved, like eye tracking and hand tracking, but they feel completely different to use. VR headsets are bulky, uncomfortable to wear, and make people nauseous from staring at the displays. Sunglasses and eyeglasses, on the other hand, are relatively pleasant to wear, and millions of Americans use them every day.

    To Zuckerberg’s credit, he’s been pushing the eyewear form factor for quite a long time, when it certainly was not popular to do so. It’s long been reported that Meta’s CEO hates that his popular social media apps have to be accessed through Apple’s phones (perhaps leading to the ill-fated Facebook Phone). Now, Meta’s competitors are also dipping their toes into eyewear computing.

    Andrew Bosworth, CTO of Meta and head of Reality Labs, wearing a clear pair of Orion smart glasses. (David Paul Morris/Bloomberg via Getty Images)

    Meta’s early investment here seems to be paying off. Zuckerberg gave a keynote presentation of Orion on Wednesday that we won’t be forgetting anytime soon, filling a room full of skeptical journalists with electricity and excitement. TechCrunch has not demoed Orion yet, but initial reviews have been very positive.

    What Meta offers today is the Ray-Ban Meta: a pair of glasses with cameras, microphones, speakers, sensors, an on-device LLM, and the ability to connect to your phone and the cloud. The Ray-Ban Meta is far simpler than Orion, but relatively affordable at $299 — actually not much more than a regular pair of Ray-Bans. They’re kind of like the Spectacles 3 that Snap released a few years ago, though the Ray-Ban Meta glasses appear more popular.

    Despite the vast differences in price and capabilities, Orion and Ray-Ban Meta are more related than you might think.

    “Orion is really the future, and we ultimately want to go for the full holographic experience. You can think about Ray-Ban Meta as our first step there,” said Li-Chen Miller, a VP of product at Meta who leads its wearables team, in an interview with TechCrunch. “We really need to nail the basic things, like making sure it’s comfortable, people want to wear it, and that people find value in it every day.”

    One of the things Meta is trying to nail with Ray-Ban Meta is AI. Currently, the smart glasses use Meta’s Llama models to answer questions about what you see in front of you, by taking pictures and running them through the AI system alongside a user’s verbal requests. The Ray-Ban Meta’s AI features today are far from perfect: The latency is worse than OpenAI’s natural-feeling Advanced Voice Mode; Meta AI requires very specific prompts to work right; it hallucinates; and it doesn’t have a tight integration with many apps, making it less useful than just picking up my iPhone (perhaps by Apple’s design). But Meta’s updates coming later this year aim to address these issues.

    Li-Chen Miller, VP of product, during Meta Connect in 2023. (David Paul Morris/Bloomberg via Getty Images)

    Meta announced it will soon release live AI video processing for the Ray-Ban Meta glasses, meaning the smart glasses will stream live video and verbal requests into one of Llama’s multimodal AI models and produce real-time, verbal answers based on that input. The glasses are also getting basic features, like reminders, as well as more app integrations. That should make the whole experience a lot smoother, if it works. Miller says these improvements will filter up to Orion, which runs on the same generative AI systems.

    “Some things make more sense for one form factor than the other, but we’re certainly cross-pollinating,” said Miller.

    Likewise, she says some of Orion’s features may filter down as her team focuses on making the AR glasses more affordable. Orion’s various sensors and eye trackers are not cheap technologies. The problem is that Orion has to get both better and more economical.

    Another challenge is typing. Your smartphone has a keyboard, but your smart glasses won’t. Miller worked on keyboards at Microsoft for nearly 20 years before joining Meta, but she says Orion’s lack of a keyboard is “freeing.” She argues that using smart glasses will be a more natural experience than using a phone: you can simply talk, gesture with your hands, and look at things to navigate Orion, all things that come naturally to most people.

    Another device that was criticized for lacking a keyboard was, ironically, the iPhone. Former Microsoft CEO Steve Ballmer infamously laughed at the iPhone in 2007, saying it wouldn’t appeal to business customers because it didn’t have a physical keyboard. People adapted though, and his comments sound naive more than 15 years later.

    I think making Orion feel natural is definitely more of a goal than a reality at this point. The Verge notes in its hands-on review that windows occasionally filled the entire glasses lens, completely obstructing the user’s view of the world around them. That’s far from natural. To get there, Meta will have to improve its AI, typing, AR, and a long list of other features.

    “For Ray-Ban Meta, we kept it very scoped to a few things, and then it does them really well,” said Miller. “Whereas, when you want to build a new, futuristic computing platform [with Orion], we have to do a lot of things, and do them all very well.”


  • Meta fined $101.5M for 2019 breach that exposed hundreds of millions of Facebook passwords


    Reset your clocks: Meta has been hit with yet another privacy penalty in Europe. On Friday, Ireland’s Data Protection Commission (DPC) announced a reprimand and a €91 million fine — around $101.5 million at current exchange rates — after concluding a multiyear investigation into a 2019 security breach by Facebook’s parent company.

    The DPC opened a statutory inquiry into the incident in question in April 2019 under the bloc’s General Data Protection Regulation (GDPR) after Meta, or Facebook as the company was still called back then, notified it that “hundreds of millions” of users’ passwords had been stored in plaintext on its servers.

    The security incident is a legal issue in the European Union because the GDPR requires that personal data is appropriately secured.

    After investigating, the DPC concluded that Meta failed to meet the bloc’s legal standard because the passwords were not protected with encryption. This created a risk, as third parties could potentially have accessed the sensitive information stored in people’s social media accounts.

    The regulator, which leads on oversight of Meta’s GDPR compliance, also found Meta broke the rules by failing to notify it of the breach within the required time frame (the regulation generally stipulates breach reporting should take place no later than 72 hours after becoming aware of it). Meta also failed to properly document the breach, per the DPC.

    Commenting in a statement, deputy commissioner Graham Doyle wrote: “It is widely accepted that user passwords should not be stored in plaintext, considering the risks of abuse that arise from persons accessing such data. It must be borne in mind, that the passwords the subject of consideration in this case, are particularly sensitive, as they would enable access to users’ social media accounts.”

    Reached for a response to its latest GDPR sanction, Meta spokesperson Matthew Pollard emailed a statement in which the company sought to play down the finding by claiming it took “immediate action” over what had been an “error” in its password management processes.

    “As part of a security review in 2019, we found that a subset of FB [Facebook] users’ passwords were temporarily logged in a readable format within our internal data systems. We took immediate action to fix this error, and there is no evidence that these passwords were abused or accessed improperly,” Meta wrote. “We proactively flagged this issue to our lead regulator, the Irish Data Protection Commission, and have engaged constructively with them throughout this inquiry.”

    Meta had already racked up a majority of the largest GDPR penalties handed out to tech giants, so the latest sanction merely underscores the scale of its problems with privacy compliance.

    The penalty is notably stiffer than the €17 million fine the DPC handed to Meta in March 2022 over a 2018 security breach. The Irish regulator has had a change of senior management since then. However, the two incidents are also different: Meta’s earlier security lapse affected up to 30 million Facebook users, compared to the hundreds of millions whose passwords were said to have been exposed as a result of its failure to secure passwords in 2019.

    The GDPR empowers data protection authorities to issue fines for breaches where the amount of any penalty is calculated based on factors such as the nature, gravity and duration of the infringement; the scope or purpose of the processing; and the number of data subjects affected and level of damage suffered, among other considerations.

    The highest possible penalty under the GDPR is 4% of global annual turnover. So, in Meta’s case, a €91 million fine may sound like a significant chunk of change, but it remains a tiny fraction of the billions the company could theoretically face: with annual revenue of $134.90 billion for 2023, the 4% ceiling works out to roughly $5.4 billion.


  • Meta Quest 3S vs. Meta Quest 3


    It’s time for a Meta Quest 3S vs. Meta Quest 3 face-off to help you understand the differences between the two.

    The Meta Quest 3S was revealed at Meta Connect 2024 alongside some updates to the Ray-Ban Meta Smart Glasses, as well as the reveal of Project Orion, the company’s new AR glasses prototype.

    The Quest 3S has a starting price of $299.99 while the Quest 3 requires you to shell out a little more. So the question is, how do they differ? We’ll get into this right now.

    Meta Quest 3S vs. 3: Price and specs

    As mentioned, the Quest 3S is cheaper with a starting price of $299.99. This comes with the following specs:

    • Qualcomm Snapdragon XR2 Gen 2 chip

    • 128GB of storage

    • 8GB of RAM

    • 1,832 x 1,920-pixel display resolution

    • 90Hz and 120Hz refresh rates

    • Fresnel lenses

    If you upgrade to 256GB of storage, you’ll have to shell out $399.99. In my experience with the Meta Quest 2 and Quest 3, 128GB was sufficient, netting me about 60 games and apps with some storage left over.

    The Meta Quest 3, on the other hand, currently has a starting price of $649.99. (As of this writing, the $499.99 version with 128GB of storage is out of stock.) It has the same chip, storage configurations, and RAM as the Quest 3S, but you’ll get better displays and more refresh rate options (more on that later).

    Meta Quest 3S vs. 3: Design

    The Meta Quest 3S appears to be bulkier than the Meta Quest 3 — similar to the Quest 2. The pricier Quest 3, on the other hand, has a sleeker appearance.

    The front of the Quest 3S differs from the Quest 3, too. While both have a mostly white body, the Quest 3S has its sensors arranged in a triangular pattern, whereas the Quest 3’s sensors sit in larger, oval-shaped openings.

    Both headsets have the same controllers.

    Meta Quest 3S vs. 3: Display

    The Meta Quest 3’s displays have a higher screen resolution compared to the Quest 3S.

    • Quest 3S – 1,832 x 1,920-pixel resolution

    • Quest 3 – 2,064 x 2,208-pixel resolution

    Plus, the Quest 3 has a greater range of refresh rate options:

    • Quest 3S – 90Hz and 120Hz

    • Quest 3 – 72Hz, 80Hz, 90Hz, 120Hz

    You may be wondering, “Why would anyone run their Quest 3 at 72Hz as opposed to 120Hz?” Well, for those who use PCVR, reducing the refresh rate may allow them to prioritize a higher resolution without taking too much of a performance hit.

    It’s also worth noting that the Quest 3S has Fresnel lenses as opposed to the Quest 3’s pancake lenses. Both lens types have their pros and cons, but pancake lenses are generally better, allowing for a lighter and thinner design, less distortion, and better image quality.

    Meta Quest 3S vs. 3: Features

    Both appear to have the same feature set, including augmented reality (AR) with passthrough mode. Passthrough relies on a 4MP camera to show you the world around you, letting you enjoy AR experiences.

    For the uninitiated, AR is a blend of digital artifacts with your real-world environment. For example, in an AR game, you can dodge virtual aliens while diving behind the sofa in your living room.

    As announced at Meta Connect 2024, you’ll also have an easier time with spatial computing on the Quest 3S and Quest 3.

    Spatial computing has long been available on the Quest series (at least as far back as the second-generation Quest 2; I demoed it with a Windows laptop in 2021). However, it appears that, with a new update, the experience should be a lot more seamless on the two Quest 3 models. You should also have the option to add multiple displays, which is new.

    But we’ll have to test them ourselves to know for sure.

    To sum it up, on both the Quest 3S and Quest 3, you’ll have access to AR and VR apps and games via the Quest Store, as well as support for PCVR with the Quest Link app. You can also play Xbox Game Pass games on both headsets.

    Meta Quest 3S vs. 3: Battery

    The Quest 3 has a bigger battery: 5,060 mAh, to be exact. The Quest 3S, on the other hand, sports a 4,324 mAh battery.

    According to our review, the Meta Quest 3 lasted one hour and 19 minutes with nonstop gameplay.

    This means that the Quest 3S is expected to have an even shorter runtime compared to the Quest 3 — and the Quest 3’s runtime is already short enough as it is.

    Meta Quest 3S vs. 3: Which is the winner?

    Based on battery life, I’d opt for the Meta Quest 3. We won’t know for sure until we review the Quest 3S, but based on the specs, it’s expected to have an even shorter runtime than the Quest 3.

    However, in my experience, VR and AR experiences take a toll on your eyes, so your VR headset shutting down after an hour isn’t the worst thing in the world. Your eyes need a break.

    On the plus side, the Quest 3S is a lot easier on your wallet. If you don’t mind the expectedly poor battery life, the Quest 3S is your best bet.




  • Meta Missed Out on Smartphones. Can Smart Glasses Make Up for It?


    Meta has dominated online social connections for the past 20 years, but it missed out on making the smartphones that primarily delivered those connections. Now, in a multiyear, multibillion-dollar effort to position itself at the forefront of connected hardware, Meta is going all in on computers for your face.

    At its annual Connect developer event today in Menlo Park, California, Meta showed off its new, more affordable Oculus Quest 3S virtual reality headset and its improved, AI-powered Ray-Ban Meta smart glasses. But the headliner was Orion, a prototype pair of holographic display glasses that chief executive Mark Zuckerberg said have been in the works for 10 years.

    Zuckerberg emphasized that the Orion glasses—which are available only to developers for now—aren’t your typical smart display. And he made the case that these kinds of glasses will be so interactive that they’ll usurp the smartphone for many needs.

    “Building this display is different from every other screen you’ve ever used,” Zuckerberg said on stage at Meta Connect. Meta chief technology officer Andrew Bosworth had previously described this tech as “the most advanced thing that we’ve ever produced as a species.”

    The Orion glasses, like a lot of heads-up displays, look like the fever dream of techno-utopians who have been toiling away in a highly secretive place called “Reality Labs” for the past several years. One WIRED reporter noted that the thick black glasses looked “chunky” on Zuckerberg.

    As part of the on-stage demo, Zuckerberg showed how Orion glasses can be used to project multiple virtual displays in front of someone, respond quickly to messages, video chat with someone, and play games. In the messages example, Zuckerberg noted that users won’t even have to take out their phones. They’ll navigate these interfaces by talking, tapping their fingers together, or by simply looking at virtual objects.

    There will also be a “neural interface” built in that can interpret brain signals, using a wrist-worn device that Meta first teased three years ago. Zuckerberg didn’t elaborate on how any of this will actually work or when a consumer version might materialize. (He also didn’t get into the various privacy complications of connecting this rig and its visual AI to one of the world’s biggest repositories of personal data.)

    He did say that the imagery that appears through the Orion glasses isn’t pass-through technology—where external cameras show wearers the real world—nor is it a conventional display or screen showing a virtual world. It’s a “new kind of display architecture,” he said, that uses projectors in the arms of the glasses to beam light into waveguides in the lenses, which then reflect that light into the wearer’s eyes to create volumetric imagery in front of them. Meta designed this technology itself, he said.

    The idea is that the images don’t appear as flat, 2D graphics in front of your eyes but that the virtual images now have shape and depth. “The big innovation with Orion is the field of view,” says Anshel Sag, principal analyst at Moor Insights & Strategy, who was in attendance at Meta Connect. “The field of view is 72 degrees, which makes it much more engaging and useful for most applications, whether gaming, social media, or just content consumption. Most headsets are in the 30- to 50-degree range.”

  • Meta teases Orion, brain-powered true AR glasses in a tiny package

    Meta teases Orion, brain-powered true AR glasses in a tiny package

    At Wednesday’s Meta Connect event, CEO Mark Zuckerberg announced Orion, what he described as “the most advanced glasses the world has ever seen.”

    The glasses, which are notably smaller than Snap’s recently announced Spectacles 5, are true AR. Orion utilizes tiny projectors built into the glasses’ temples to create a heads-up display; think the 2024 version of Google Glass.

    The glasses, which Zuckerberg said were a decade in the making, don’t appear to be too far beyond the concept phase at this point. “These glasses exist, they are awesome, and they are a glimpse of a future that I think will be exciting,” the executive noted during the presentation. He added that the team still has a good bit of “fine-tuning” before Meta is ready to turn them into an official consumer product.

    Image Credits: Meta

    Notably, along with the standard voice prompts, Orion will be controlled through a “neural interface.” That arrives by way of Meta’s 2019 acquisition of CTRL-Labs, which makes a wristband that will be compatible with the devices.

    The company is positioning the upcoming glasses as a kind of successor to its current smart glasses, the Ray-Ban Meta. It notes:

    Yet while Ray-Ban Meta opened up an entirely new category of display-less glasses super-charged by AI, the XR industry has long dreamt of true AR glasses — a product that combines the benefits of a large holographic display and personalized AI assistance in a comfortable, all-day wearable form factor. Orion rises to the challenge.

    There are a lot of claims at the moment, such as “Orion has the largest field of view in the smallest AR glasses form to date,” but we’re far too early for any specifics at this point. That can, however, be seen as a dig at the new Spectacles, which are extremely large with a very narrow FOV.

    Meta Orion Holographic Glasses
    We see that TechCrunch logo, Meta 👀
    Image Credits: Meta

    “That field of view unlocks truly immersive use cases for Orion, from multitasking windows and big-screen entertainment to life-size holograms of people,” Meta notes, “all digital content that can seamlessly blend with your view of the physical world.”

    One key thing Orion does have in common with the new Spectacles is that it will initially be available for developers only. This is, of course, a common move in this world. Companies treat these announcements as a kind of proof-of-concept to get folks excited to develop for the platform.

    Meta's Orion Holographic Glasses, battery pack and wristband
    Image Credits: Meta

    Meta’s Ray-Bans were a bit of a surprise hit for the company, especially in the wake of steady, if slow, growth around the Quest line. If the product hews closely to the demos, it’s hard to accuse Zuckerberg of hyperbole, when compared to the likes of Snapchat Spectacles on one end and Apple’s Vision Pro on the other.

    The recent addition of Meta AI to the Ray-Bans can also be seen as a stepping stone to more fully realized augmented reality glasses. Features like translation and navigation would be even more powerful with a visual element in play.

    There were unsurprisingly bumps along the road getting to this stage. According to one recent report, building Orion cost in the neighborhood of $10,000 per unit. We know that Meta has gotten in the habit of losing money on Quest headsets, but nothing nearly that astronomical.

    The same report also suggests that Meta will deliver a version of the glasses with a significantly smaller HUD when it ships the wristband ahead of Orion’s eventual arrival.

    “In the next few years, you can expect to see new devices from us that build on our R&D efforts,” Meta writes. “Orion isn’t just a window into the future — it’s a look at the very real possibilities within reach today. From Ray-Ban Meta glasses to Orion, we’ve seen the good that can come from letting people stay more present and empowered in the physical world, while tapping into all that the digital world has to offer.”

  • Meta Teaches Its Ray-Ban Smart Glasses Some New AI Tricks

    Meta Teaches Its Ray-Ban Smart Glasses Some New AI Tricks

    The Ray-Ban Meta glasses are the first real artificial-intelligence wearable success story. In fact, they are actually quite good. They’ve got that chic Ray-Ban styling, meaning they don’t look as goofy as some of the bulkier, heavier attempts at mixed-reality face computers. The onboard AI agent can answer questions and even identify what you’re looking at using the embedded cameras. People also love using voice commands to capture photos and videos of whatever is right in front of them without whipping out their phone.

    Soon, Meta’s smart glasses are getting some more of these AI-powered voice features. Meta CEO Mark Zuckerberg announced the newest updates to the smart glasses’ software at his company’s Meta Connect event today.

    “The reality is that most of the time you’re not using smart functionality, so people want to have something on their face that they’re proud of and that looks good and that’s, you know, designed in a really nice way,” Zuckerberg said at Connect. “So they’re great glasses. We keep updating the software and building out the ecosystem, and they keep on getting smarter and capable of more things.”

    The company also used Connect to announce its new Meta Quest 3S, a more budget-friendly version of its mixed-reality headsets. It also unveiled a host of other AI capabilities across its various platforms, with new features being added to its Meta AI and Llama large language models.

    An image of a woman wearing the new Ray-Ban Meta Headliner glasses in Caramel.

    Courtesy of Meta

    An image of a man wearing the new Ray-Ban Meta Wayfarer glasses in Shiny Black.

    Courtesy of Meta

    As far as the Ray-Bans go, Meta isn’t doing too much to mess with a good thing. The smart spectacles got an infusion of AI tech earlier this year, and now Meta is adding more capabilities to the pile, though the enhancements here are pretty minimal. You can already ask Meta AI a question and hear its responses directly from the speakers embedded in the frames’ temple pieces. Now there are a few new things you can ask or command it to do.

    Probably the most impressive is the ability to set reminders. You can look at something while wearing the glasses and say, “Hey, remind me to buy this book next week,” and the glasses will understand what the book is, then set a reminder. In a week, Meta AI will tell you it’s time to buy that book.

    Meta says live transcription services are coming to the glasses soon, meaning people speaking in different languages could see transcribed speech in the moment—or at least in a somewhat timely fashion. It’s not clear exactly how well that will work, given that the Meta glasses’ previous written translation abilities have proven to be hit-or-miss.

    Zuckerberg says Meta is also partnering with the Denmark-based mobile app Be My Eyes to bring a feature to the Ray-Ban Meta glasses that connects blind and low-vision people to volunteers who can view live video and talk the wearer through what is in front of them.

    “I think that not only is this going to be a pretty awesome experience today, but it’s a glimpse of the type of thing that might be more possible with always-on AI,” Zuckerberg said.

    There are new frame and lens colors being added, and customers now have the option of transition lenses that adjust their shading depending on the current level of sunlight.

    Meta hasn’t said exactly when these additional AI features will be coming to its Ray-Bans, except that they will arrive sometime this year. With only three months of 2024 left, that means very soon.

  • The Viral ‘Goodbye Meta AI’ Copypasta Will Not Protect You

    The Viral ‘Goodbye Meta AI’ Copypasta Will Not Protect You

    “Goodbye Meta AI” is the most recent Facebook copypasta to go viral online. A chunky wall of text pasted against a hazy orange-yellow gradient background, it’s complete with all the trend’s hallmarks: vague references to the legal system and unilateral declarations of personal protection. It almost feels nostalgic, a blast from the compulsory chain-email past. But, unfortunately, posting an image on Facebook, Instagram, or any social media platform is not how you actually opt out of having your posts be fed to AI models.

    This definitely isn’t the first time a meaningless copypasta has spread on the social media site. More than a decade ago, WIRED covered a popular “copyright hoax” with “pseudo-legalese” blanketing Facebook. It didn’t work then, and it doesn’t work now.

    “Goodbye Meta AI,” which has been shared thousands of times—including, reportedly, in the Instagram Stories of Tom Brady and James McAvoy—has been circulating since early September. Its claim that it can protect your data is blatantly dubious to savvy internet users, but the underlying desire to claw back one’s personal information from tech companies is a sympathetic one. The companies know so many granular details about users’ lives and desires that it can be unsettling. And, in the ongoing wave of generative AI, everything posted online seems vulnerable to being scraped to train the next biggest, baddest AI model.

    Two major red flags that can help you immediately spot a copypasta like this are urgent calls to action and unclear references to legal situations. In this case, the image says “all members must post” to keep their data safe, and it claims to be part of an unnamed attorney’s advice. The 2012 version said, “Anyone reading this can copy this text and paste it on their Facebook Wall.” The decade-old copypasta also included a misspelled reference to a European legal contract.

    “While we don’t currently have an opt-out feature, we’ve built in-platform tools that allow people to delete their personal information from chats with Meta AI across our apps,” says Emil Vazquez, a spokesperson for the company, when reached via email. You can find the steps for that here. He also points out European users can object to personal info being used for AI models—although, as WIRED reported last year, the form to object isn’t going to do much, if anything, for you.

    So, if an errant copypasta doesn’t work, what can you do to avoid having your public words and images be used for Meta’s AI model or that of another AI company? Stop posting online—that’s about it. Apart from walking away and never posting again, there’s not a realistic way for you to avoid the nimble scraper bots as an individual user right now.

    With that in mind, you can take steps to reduce the amount of information publicly available on your social media profiles, for a bit more privacy. Also, downloading old posts for your own records and then deleting large swaths of them from the internet isn’t a bad idea. Want to go further? Take a look at this list of websites and apps that allow you to opt out of at least one aspect of their AI training practices.
