Tag: meta

  • Arkham Shadow – The Meta Quest’s most epic VR release yet

    If you’re a Meta Quest 3 or 3S owner, you’re in luck. The Meta-exclusive Batman: Arkham Shadow dropped today, and people are already raving that it might be the best AAA VR title of the year.

    Following Batman: Arkham (do we actually count Gotham City: Batman VR Experience?), Arkham Shadow picks up where Asylum left off. The core mechanics of the Arkham series are true to form, meaning plenty of hand-to-hand combat, Detective mode, Predator (stealth) mode, and did I mention hand-to-hand combat?

    Arkham Shadow may as well be sold as a fitness title, considering the sheer amount of physical effort needed to knock out four or five opponents at a time with left-right combos, jabs, hooks, and special moves.

    Batman: Arkham Shadow | Official Gameplay Trailer

    While Arkham Shadow offers an open world you can explore, the storyline is structured to focus more on details. Camouflaj – the studio known for Iron Man VR, which developed Arkham Shadow – was inspired by The Legend of Zelda games when designing the puzzles and explorable areas in the latest Batman release. Would-be virtual Batmans are encouraged to uncover secrets and hidden details in Detective mode.

    While there are no multiplayer or co-op modes, the solo campaign should take around 10 hours if your goal is to just get through it. Much longer if you’re a completionist. Side missions are a bonus and should offer you a little extra satisfaction.

    Sadly, as it’s a Meta Quest 3/3S exclusive, there’s no PCVR support, no PlayStation VR release, no Steam VR, no nothing.

    If you don’t yet own the Quest 3/3S, Batman comes bundled with a new purchase. Considering Arkham Shadow is the most expensive Meta game to date at $70, that might offer you some consolation for the $300 Quest 3S or the $500 Quest 3 price.

    Here’s a video from ‘O.G. VR Gamer’ showing the player’s physical motions as well as the gameplay:

    Meta’s fully wireless and self-contained Quest headsets have been super impressive since they first launched – but many early adopters have found themselves hankering for proper AAA-grade experiences. It’s clear that Camouflaj has put in an exceptional effort here, playing to the strengths of the latest Quest hardware and pushing it to the max, while not skimping on character, story or action.

    We’d call it the Quest 3’s killer app for 2024… But then, that’d break the first rule of Batman club.

    Source: Meta




  • What we know about the layoffs at Meta

    Welcome back to Week in Review. This week, we’re diving into the recent layoffs at Meta; the fallout from the battle between WordPress and WP Engine; and whether Cybertrucks are simply too big to exist in Europe. Let’s get into it.

    Multiple teams at Meta were hit by layoffs this week. The company confirmed the layoffs in a statement to TechCrunch and noted that the changes were made to reallocate resources. The cuts reportedly impacted teams working on Reality Labs, Instagram, and WhatsApp, though Meta declined to comment on the record about how many employees were affected and what orgs they were part of.

    AWS CEO Matt Garman has harsh words for remote workers: Return to the office or quit. The executive recently told employees who don’t like the new five-day in-person work policy that “there are other companies around.” Last month, Amazon CEO Andy Jassy told employees that there will be a full return to office starting in 2025, up from the three days a week required for roughly the last year.

    Waymo gave software engineer Sophia Tung promo codes for free rides as an apology for the late-night honking she filmed over the summer that was caused by the self-driving cars. However, when Tung realized the codes weren’t capped in value, she attempted to use her last one to ride in a Waymo for 24 hours. Her plans were ultimately cut short — but she did last a good 6.5 hours.


    This is TechCrunch’s Week in Review, where we recap the week’s biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.


    News


    Optimus gets some human help: Videos from Tesla’s “We, Robot” event showed Optimus mixing drinks, mingling with people, and even dancing. The demos seemed impressive, but later reports suggest the robots were being remotely operated by humans. Read more

    ChatGPT comes to Windows: OpenAI has begun previewing a dedicated Windows app for ChatGPT. The company says the app is an early version, arriving ahead of a “full experience” later this year. Read more

    Too big for Europe?: Tesla’s Cybertruck is facing blowback in Europe as transportation organizations suggest its oversized, sharp-edged design violates European safety standards and could endanger pedestrians, cyclists, and other motorists. Read more

    More WordPress drama: As the feud between WordPress and WP Engine rages on, an internal blog post revealed Automattic’s plan to enforce the WordPress trademark using “nice and not nice lawyers.” Read more

    X wants to sell your data: An update to X’s privacy policy indicates it would allow third-party “collaborators” to train their AI models on X data, unless users opt out. It implies the company is looking into licensing data to AI companies as a potential new revenue stream. Read more

    More accessible smartphones: The FCC issued rules requiring all mobile phones sold in the U.S. to be compatible with hearing aids. The news comes two years after the FDA made hearing aids available to all Americans without a prescription. Read more

    Byju Raveendran speaks out: The founder of the embattled edtech group Byju’s acknowledged that he made mistakes and that his startup, once valued at $22 billion, is now effectively worth “zero.” Read more

    Casio’s ransomware attack: Japanese electronics giant Casio confirmed that many of its systems remain unusable. The company sees “no prospect of recovery yet” almost two weeks after it was hit by a ransomware attack. Read more

    CapWay shuts down: The Y Combinator-backed fintech that sought to bring financial services to those in banking deserts shut down after a possible acquisition fell through, its founder Sheena Allen confirmed to TechCrunch. Read more 

    Can AI make us feel less alone?: AI-based mental health app Manifest wants to combat the “loneliness epidemic” affecting Gen Z by turning feelings into personalized daily affirmations. Read more

    Analysis

    [Image: Palmer Luckey. Credits: David Paul Morris/Bloomberg / Getty Images]

    The AI weapons debate: Silicon Valley is debating whether AI weapons in the U.S. should ever be fully autonomous — meaning an algorithm would make the final decision to kill someone. Some in the defense tech industry argue it’s necessary to keep up with global competition, but others believe humans should always make the final call. As Margaux MacColl writes, the fear is that once one nation implements autonomous weapons, others might feel forced to follow — and the ethical implications are massive. Read more


  • Meta’s AI chief says world models are key to ‘human-level AI’ — but it might be 10 years out

    Are today’s AI models truly remembering, thinking, planning, and reasoning, just like a human brain would? Some AI labs would have you believe they are, but according to Meta’s chief AI scientist Yann LeCun, the answer is no. He thinks we could get there in a decade or so, however, by pursuing a new method called a “world model.”

    Earlier this year, OpenAI released a new feature it calls “memory” that allows ChatGPT to “remember” your conversations. The startup’s latest generation of models, o1, displays the word “thinking” while generating an output, and OpenAI says the same models are capable of “complex reasoning.”

    That all sounds like we’re pretty close to AGI. However, during a recent talk at the Hudson Forum, LeCun undercut AI optimists, such as xAI founder Elon Musk and Google DeepMind co-founder Shane Legg, who suggest human-level AI is just around the corner.

    “We need machines that understand the world; [machines] that can remember things, that have intuition, have common sense, things that can reason and plan to the same level as humans,” said LeCun during the talk. “Despite what you might have heard from some of the most enthusiastic people, current AI systems are not capable of any of this.”

    LeCun says today’s large language models, like those which power ChatGPT and Meta AI, are far from “human-level AI.” Humanity could be “years to decades” away from achieving such a thing, he later said. (That doesn’t stop his boss, Mark Zuckerberg, from asking him when AGI will happen, though.)

    The reason is straightforward: LLMs work by predicting the next token (usually a few letters or a short word), and today’s image/video models predict the next pixel. In other words, language models are one-dimensional predictors and AI image/video models are two-dimensional predictors. These models have become quite good at predicting in their respective dimensions, but they don’t really understand the three-dimensional world.
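    To make the “one-dimensional predictor” point concrete, here is a minimal, illustrative sketch of the next-token generation loop using a toy bigram model; the corpus, counts, and helper names are invented for illustration, and real LLMs replace the lookup table with a large neural network.

    ```python
    from collections import Counter, defaultdict

    # Toy "language model": count which token follows which in a tiny corpus,
    # then repeatedly emit the most likely next token. The generation loop is
    # the same one-token-at-a-time idea described above.
    corpus = "the cat sat on the mat the cat sat on the chair".split()

    bigram_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def predict_next(token):
        """Return the most frequent follower of `token` in the toy corpus."""
        followers = bigram_counts.get(token)
        return followers.most_common(1)[0][0] if followers else None

    token, output = "the", ["the"]
    for _ in range(5):
        token = predict_next(token)
        if token is None:
            break
        output.append(token)

    print(" ".join(output))  # e.g. "the cat sat on the cat"
    ```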

    Because of this, modern AI systems cannot do simple tasks that most humans can. LeCun notes how humans learn to clear a dinner table by the age of 10, and drive a car by 17 – and learn both in a matter of hours. But even the world’s most advanced AI systems today, built on thousands or millions of hours of data, can’t reliably operate in the physical world.

    In order to achieve more complex tasks, LeCun suggests we need to build three-dimensional models that can perceive the world around them, centered on a new type of AI architecture: world models.

    “A world model is your mental model of how the world behaves,” he explained. “You can imagine a sequence of actions you might take, and your world model will allow you to predict what the effect of the sequence of action will be on the world.”

    Consider the “world model” in your own head. For example, imagine looking at a messy bedroom and wanting to make it clean. You can imagine how picking up all the clothes and putting them away would do the trick. You don’t need to try multiple methods, or learn how to clean a room first. Your brain observes the three-dimensional space, and creates an action plan to achieve your goal on the first try. That action plan is the secret sauce that AI world models promise.

    Part of the benefit here is that world models can take in significantly more data than LLMs. That also makes them computationally intensive, which is why cloud providers are racing to partner with AI companies.

    World models are the big idea that several AI labs are now chasing, and the term is quickly becoming the next buzzword to attract venture funding. A group of highly regarded AI researchers, including Fei-Fei Li and Justin Johnson, just raised $230 million for their startup, World Labs. The “godmother of AI” and her team are also convinced world models will unlock significantly smarter AI systems. OpenAI also describes its unreleased Sora video generator as a world model, but hasn’t gotten into specifics.

    LeCun outlined an idea for using world models to create human-level AI in a 2022 paper on “objective-driven AI,” though he notes the concept is over 60 years old. In short, a base representation of the world (such as video of a dirty room) and memory are fed into a world model, which predicts what the world will look like based on that information. You then give the world model objectives, including an altered state of the world you’d like to achieve (such as a clean room), as well as guardrails to ensure the model doesn’t harm humans in pursuit of an objective (don’t kill me in the process of cleaning my room, please). The world model then finds an action sequence that achieves these objectives.
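    LeCun’s paper goes into far more detail, but the loop described above can be summarized in a rough, hypothetical sketch like the one below; the function names, toy room dynamics, and candidate action lists are all invented placeholders, not FAIR’s actual implementation.

    ```python
    def plan_with_world_model(world_model, current_state, objective_cost,
                              guardrail_cost, candidate_action_sequences):
        """Illustrative objective-driven planning loop (placeholder logic).

        world_model(state, actions) -> predicted future state
        objective_cost(state)       -> distance from the desired state (e.g. a clean room)
        guardrail_cost(state)       -> penalty for unsafe outcomes
        """
        best_actions, best_cost = None, float("inf")
        for actions in candidate_action_sequences:
            predicted = world_model(current_state, actions)
            cost = objective_cost(predicted) + guardrail_cost(predicted)
            if cost < best_cost:
                best_actions, best_cost = actions, cost
        return best_actions

    # Toy world: the state is just how many items are out of place in a room,
    # and each action tidies one item. Guardrails are trivially satisfied here.
    world_model = lambda state, actions: max(0, state - len(actions))
    objective_cost = lambda state: state
    guardrail_cost = lambda state: 0

    plan = plan_with_world_model(world_model, 3, objective_cost, guardrail_cost,
                                 [["pick up shirt"], ["pick up shirt", "fold", "store"]])
    print(plan)  # the three-step sequence wins because it reaches the goal state
    ```

    In this toy version, the action sequence that the world model predicts will reach the “clean room” state at zero guardrail cost is the one selected.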

    Meta’s long-term AI research lab, FAIR (Fundamental AI Research), is actively working towards building objective-driven AI and world models, according to LeCun. FAIR used to work on AI for Meta’s upcoming products, but LeCun says the lab has shifted in recent years to focusing purely on long-term AI research. LeCun says FAIR doesn’t even use LLMs these days.

    World models are an intriguing idea, but LeCun says we haven’t made much progress on bringing these systems to reality. There are a lot of very hard problems to solve to get there from where we are today, and he says it’s certainly more complicated than we think.

    “It’s going to take years before we can get everything here to work, if not a decade,” said LeCun. “Mark Zuckerberg keeps asking me how long it’s going to take.”


  • Apple study reveals major AI flaw in OpenAI, Google, and Meta LLMs

    Large Language Models (LLMs) may not be as smart as they seem, according to a study from Apple researchers.

    LLMs from OpenAI, Google, Meta, and others have been touted for their impressive reasoning skills. But research suggests their purported intelligence may be closer to “sophisticated pattern matching” than “true logical reasoning.” Yep, even OpenAI’s o1 advanced reasoning model.

    The most common benchmark for reasoning skills is a test called GSM8K, but since it’s so popular, there’s a risk of data contamination. That means LLMs might know the answers to the test because they were trained on those answers, not because of their inherent intelligence.


    To test this, the researchers developed a new benchmark called GSM-Symbolic, which keeps the essence of the reasoning problems but changes variables such as names and numbers, adjusts the complexity, and adds irrelevant information. What they discovered was a surprising “fragility” in LLM performance. The study tested over 20 models, including OpenAI’s o1 and GPT-4o, Google’s Gemma 2, and Meta’s Llama 3, and every single model’s performance decreased when the variables were changed.
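    The paper’s templates are more elaborate, but the basic idea of perturbing surface details while keeping the reasoning structure intact can be sketched roughly as follows; the template, names, and value ranges here are made up for illustration and are not taken from GSM-Symbolic itself.

    ```python
    import random

    # Illustrative GSM-Symbolic-style perturbation: keep the problem's structure,
    # randomize the surface details, and check whether a model's answer tracks
    # the new values or just a memorized original.
    TEMPLATE = ("{name} picks {a} apples on Friday and {b} apples on Saturday. "
                "How many apples does {name} have?")
    NAMES = ["Oliver", "Sophia", "Liam", "Ava"]

    def make_variant(rng):
        a, b = rng.randint(10, 90), rng.randint(10, 90)
        question = TEMPLATE.format(name=rng.choice(NAMES), a=a, b=b)
        return question, a + b  # ground-truth answer for the perturbed version

    rng = random.Random(0)
    for _ in range(3):
        question, answer = make_variant(rng)
        print(question, "->", answer)
    ```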

    Accuracy decreased by a few percentage points when just names and values were changed. As the researchers noted, OpenAI’s models performed better than the open-source models, but the variance was still deemed “non-negligible,” since in principle no variance should have occurred at all. Things got really interesting, though, when researchers added “seemingly relevant but ultimately inconsequential statements” to the mix.


    To test the hypothesis that LLMs relied more on pattern matching than actual reasoning, the study added superfluous phrases to math problems to see how the models would react. For example, “Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. On Sunday, he picks double the number of kiwis he did on Friday, but five of them were a bit smaller than average. How many kiwis does Oliver have?”
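    For reference, the clause about the smaller kiwis doesn’t change the count, so a correct reading of the quoted problem is just the following arithmetic:

    ```python
    friday = 44
    saturday = 58
    sunday = 2 * friday               # "double the number of kiwis he did on Friday"
    total = friday + saturday + sunday
    print(total)                      # 190 -- the five smaller kiwis are still kiwis
    ```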

    What resulted was a significant drop in performance across the board. OpenAI’s o1 Preview fared the best, with an accuracy drop of 17.5 percent. That’s still pretty bad, but not as bad as Microsoft’s Phi 3 model, which performed 65 percent worse.


    In the kiwi example, the study said LLMs tended to subtract the five smaller kiwis from the equation without understanding that kiwi size was irrelevant to the problem. This indicates that “models tend to convert statements to operations without truly understanding their meaning,” which validates the researchers’ hypothesis that LLMs look for patterns in reasoning problems rather than innately understanding the concepts.

    The study didn’t mince words about its findings. Testing models on the benchmark that includes irrelevant information “exposes a critical flaw in LLMs’ ability to genuinely understand mathematical concepts and discern relevant information for problem-solving.” However, it bears mentioning that the authors of this study work for Apple, which is obviously a major competitor of Google, Meta, and even OpenAI — although Apple and OpenAI have a partnership, Apple is also working on its own AI models.

    That said, the LLMs’ apparent lack of formal reasoning skills can’t be ignored. Ultimately, it’s a good reminder to temper AI hype with healthy skepticism.





  • Social Media Tells You Who You Are. What if It’s Totally Wrong?

    A few years ago I wrote about how, when planning my wedding, I’d signaled to the Pinterest app that I was interested in hairstyles and tablescapes, and I was suddenly flooded with suggestions for more of the same. Which was all well and fine until—whoops—I canceled the wedding and it seemed Pinterest pins would haunt me until the end of days. Pinterest wasn’t the only offender. All of social media wanted to recommend stuff that was no longer relevant, and the stench of this stale buffet of content lingered long after the non-event had ended.

    So in this new era of artificial intelligence—when machines can perceive and understand the world, when a chatbot presents itself as uncannily human, when trillion-dollar tech companies use powerful AI systems to boost their ad revenue—surely those recommendation engines are getting smarter, too. Right?

    Maybe not.

    Recommendation engines are some of the earliest algorithms on the consumer web, and they use a variety of filtering techniques to try to surface the stuff you’ll most likely want to interact with—and in many cases, buy—online. When done well, they’re helpful. In the earliest days of photo sharing, like with Flickr, a simple algorithm made sure you saw the latest photos your friend had shared the next time you logged in. Now, advanced versions of those algorithms are aggressively deployed to keep you engaged and make their owners money.
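    As one hedged illustration of the kind of filtering such engines rely on (not any particular platform’s algorithm), item-based collaborative filtering recommends things that users with overlapping histories also engaged with; the interaction data below is entirely invented.

    ```python
    import math

    # Tiny item-based collaborative filtering sketch: each user has a set of
    # items they engaged with, and we recommend the unseen item most similar
    # to something the target user already liked. Purely illustrative data.
    interactions = {
        "alice": {"hairstyles", "tablescapes", "paint"},
        "bob":   {"hairstyles", "tablescapes", "kitchens"},
        "cara":  {"paint", "kitchens", "gardens"},
    }

    def cosine(item_a, item_b):
        users_a = {u for u, items in interactions.items() if item_a in items}
        users_b = {u for u, items in interactions.items() if item_b in items}
        if not users_a or not users_b:
            return 0.0
        return len(users_a & users_b) / math.sqrt(len(users_a) * len(users_b))

    def recommend(user):
        seen = interactions[user]
        candidates = {i for items in interactions.values() for i in items} - seen
        # Score each unseen item by its best similarity to something the user liked.
        return max(candidates, key=lambda c: max(cosine(c, s) for s in seen))

    print(recommend("alice"))  # "gardens" in this toy data (cara also saved "paint")
    ```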

    More than three years after reporting on what Pinterest internally called its “miscarriage” problem, I’m sorry to say my Pinterest suggestions are still dismal. In a strange leap, Pinterest now has me pegged as a 60- to 70-year-old, silver fox of a woman who is seeking a stylish haircut. That and a sage green kitchen. Every day, like clockwork, I receive marketing emails from the social media company filled with photos suggesting I might enjoy cosplaying as a coastal grandmother.

    I was seeking paint #inspo online at one point. But I’m long past the paint phase, which only underscores that some recommendation engines may be smart, but not temporal. They still don’t always know when the event has passed. Similarly, the suggestion that I might like to see “hairstyles for women over 60” is premature. (I’m a millennial.)

    Pinterest has an explanation for these emails, which I’ll get to. But it’s important to note—so I’m not just singling out Pinterest, which over the past two years has instituted new leadership and put more resources into fine-tuning the product so people actually want to shop on it—that this happens on other platforms, too.

    Take Threads, which is owned by Meta and collects much of the same user data that Facebook and Instagram do. Threads is by design a very different social app than Pinterest. It’s a scroll of mostly text updates, with an algorithmic “For You” tab and a “Following” tab. I actively open Threads every day; I don’t stumble into it, the way I do from Google Image Search to images on Pinterest. In my Following tab, Threads shows me updates from the journalists and techies I follow. In my For You tab, Threads thinks I’m in menopause.

    Wait, what? Laboratorially, I’m not. But over the past several months Threads has led me to believe I might be. Just now, opening the mobile app, I’m seeing posts about perimenopause; women in their forties struggling to shrink their midsections, regulate their nervous systems, or medicate for late-onset ADHD; husbands hiring escorts; and Ali Wong’s latest standup bit about divorce. It’s a Real Housewives-meets-elder-millennial-ennui bizarro world, not entirely reflective of the accounts I choose to follow or my expressed interests.


  • Meta Can’t Use Sexual Orientation to Target Ads in the EU, Court Rules

    Europe’s most famous privacy activist, Max Schrems, landed another blow against Meta today after the EU’s top court ruled the tech giant cannot exploit users’ public statements about their sexual orientation for online advertising.

    Since 2014, Schrems has complained of seeing advertising on Meta platforms targeting his sexual orientation. Schrems claims, based on data he obtained from the company, that advertisers using Meta can deduce his sexuality from proxies, such as his app logins or website visits. Meta denies it showed Schrems personalized ads based on his off-Facebook data, and the company has long said it excludes any sensitive data it detects from its advertising operations.

    The case started with Schrems challenging whether this practice violated Europe’s GDPR privacy law. But it took an unexpected turn when a judge in his home country of Austria ruled Meta was entitled to use his sexuality data for advertising because he had spoken about it publicly during an event in Vienna. The Austrian Supreme Court then referred the case to the EU’s top court in 2021.

    Today, the Court of Justice of the European Union (CJEU) finally ruled that a person’s sexual orientation cannot be used for advertising, even if that person speaks publicly about being gay.

    “Meta Platforms Ireland collects the personal data of Facebook users, including Mr. Schrems, concerning those users’ activities both on and outside that social network,” the court said. “With the data available to it, Meta Platforms Ireland is also able to identify Mr. Schrems’ interest in sensitive topics, such as sexual orientation, which enables it to direct targeted advertising at him.”

    The fact that Schrems had spoken publicly about his sexual identity does not authorize any platform to process related data to offer him personalized advertising, the court added.

    “Now we know that if you’re on a public stage, that doesn’t necessarily mean that you agree to this personal data being processed,” says Schrems, founder of the Austrian privacy group NOYB. He believes only a handful of Facebook users will have the same issue. “It’s a really, really niche problem.”

    The CJEU also ruled today that Meta has to limit the data it uses for advertising more broadly, essentially setting ground rules for how the GDPR should be enforced. Europe’s privacy law means personal data should not be “aggregated, analyzed, and processed for the purposes of targeted advertising without restriction as to time and without distinction as to type of data,” the court said in a statement.

    “It’s really important to set ground rules,” says Katharina Raabe-Stuppnig, the lawyer representing Schrems. “There are some companies who think they can just disregard them and get a competitive advantage from this behavior.”

    Meta said it was waiting for the CJEU’s judgment to be published in full. “Meta takes privacy very seriously and has invested over 5 billion Euros to embed privacy at the heart of all of our products,” Meta spokesperson Matt Pollard told WIRED. “Everyone using Facebook has access to a wide range of settings and tools that allow people to manage how we use their information.”

    Schrems has been a prolific campaigner against Meta since a legal challenge he made resulted in a surprise 2015 ruling invalidating a transatlantic data transfer system over concerns US spies could use it to access EU data. His organization has since filed legal complaints against Meta’s pay-for-privacy subscription model and the company’s plans to use Europeans’ data to train its AI.

    “It’s major for the whole online advertisement space. But for Meta, it’s just another one in the long list of violations they have,” says Schrems, of this latest ruling. “The walls are closing in.”


  • Facebook launches a Gen Z-focused redesign

    Facebook wants to woo more younger users to join its social network, instead of spending all their time on TikTok, Instagram, and other social apps. To do so, parent company Meta on Friday announced a series of changes to the older social network which will put greater emphasis on local community information, videos, and Facebook Groups, among other things. Other Facebook products, like Meta AI, Facebook Dating, and Messenger are receiving updates as well.

    Most notably, the emphasis of Facebook’s redesign, announced today at a Facebook IRL pop-up event in Austin, will be partly on its entertainment options — a move meant to rival apps like TikTok. The changes will also focus on the more practical offerings Facebook provides users in a local community. Beyond buy/sell groups, the site has become a key staging ground and communication hub for other local groups, like those impacted by natural disasters, as is currently the case in states impacted by Hurricane Helene. (Due to climate change, these types of disaster response groups will likely become more common, too.)

    The updates come as the Facebook brand is in decline, which led the company to rename itself Meta in 2021, shifting its focus away from its top social app and onto the metaverse instead. Facebook’s user base has been growing older, and younger people haven’t been signing up in great numbers to create a new generation of users.

    That’s particularly true in the U.S. According to data from the Pew Research Center, only 33% of U.S. teens were using Facebook as of last year, down from 71% in 2014.

    Still, Meta is hopeful because it found that young adults, particularly 20-somethings, have been using the site for certain features, like Facebook Groups and Marketplace, for instance. The New York Times even covered the latter, as an example of how Facebook was being used by the next generation as a place to thrift shop, not to socialize.

    Today’s series of updates capitalizes on these trends by making Facebook more approachable for those wanting to connect with their local community or be entertained, rather than using it as a friend-focused social network.

    New Facebook Features

    For starters, Facebook is introducing a new tab called “Local,” which will pull in local content from across places like Marketplace, Groups, and Events into a single section. Here, users will find things like nearby activities, local groups offering items for sale or for free, local recommendations about new neighborhood hot spots, and more.

    The tab will initially only be available in testing in select U.S. cities, including Austin, New York City, Los Angeles, Washington D.C., Chicago, Charlotte, Dallas, Houston, San Francisco, and Phoenix.

    [Image: Facebook Local Feed. Credits: Meta]

    In addition to the Local tab, a user’s local community will be highlighted in other ways on Facebook. A new, swipeable section will appear in the user’s Facebook Feed (formerly, News Feed) which will showcase interesting posts and information from the area. This may include things like local events, local Facebook Groups, notable people or businesses, items for sale on Marketplace, and more.

    The social network is introducing a new “Explore” tab as well, focused on personalized recommendations. This section will be powered by an algorithm that surfaces not just content that entertains, but also content that connects you to your interests, even if they’re narrowly defined. As an example, Facebook says users might find things like tips for traveling abroad for the first time, DIY tricks for repurposing furniture, or running groups for marathon training, among other things.

    The Explore tab will also gain a prominent place in the redesigned app, becoming the fifth button on the bottom navigation bar on iOS (top bar on Android), and one that’s directly in the center of the app.

    Clicking through will take you to a page that looks a lot like Pinterest or ByteDance’s Instagram/Pinterest competitor, Lemon8, which has been growing in popularity among younger users thanks to its promotion on TikTok. The page will be split into two sections, a “For You” feed and one focused on “Nearby,” or more local content.

    [Image: Facebook Explore on iOS. Credits: Meta]

    In terms of entertainment, the Video tab on Facebook will be updated in the weeks ahead to offer a full-screen video player that will allow users to watch short-form, long-form, and live videos in one place. This will give Reels a more prominent place on Facebook, the company notes, and it reflects the use of video by younger users. On Facebook, young adults spend nearly 60% of their time on the app watching videos, and more than half of them watch Reels daily, Facebook points out.

    Facebook Events and Groups are being updated too.

    Facebook Events

    Facebook Events will also receive an upgrade by offering users both a Weekly and Weekend Digest of upcoming events based on their interests. These will come via a Facebook notification, the company says.

    [Image: Facebook Events. Credits: Meta]

    Users will also be able to invite Instagram followers to events created on Facebook, as well as via SMS and email invites to those who have not registered an account on the site.

    A new AI feature is coming to Facebook Groups, too. The “Group AI” offering will help members of groups find answers to questions — including those that have previously been asked. This addresses one of the bigger headaches for Group admins, who find that new people joining the group typically ask the same questions again and again. The feature will have to be enabled by a Group’s admin, which then introduces a chat-like interface where Group members can ask questions and be linked to relevant group posts. The test is currently rolling out in the U.S. and Canada.

    [Image: Customizable Facebook Group AI. Credits: Meta]
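    Meta hasn’t said how Group AI matches new questions to existing posts, but as a purely hypothetical sketch of that step, a minimal retrieval pass could rank past group posts by overlap with the question; the function, posts, and scoring below are invented for illustration, not Meta’s implementation.

    ```python
    def find_relevant_posts(question, group_posts, top_k=2):
        """Toy retrieval: rank past group posts by word overlap with the question."""
        q_words = set(question.lower().split())
        scored = [(len(q_words & set(post.lower().split())), post) for post in group_posts]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [post for score, post in scored[:top_k] if score > 0]

    posts = [
        "Parking info: the lot behind the library is free after 6pm",
        "Reminder: the weekly run meets at the park entrance",
        "Does anyone know where to park near the library?",
    ]
    print(find_relevant_posts("where can I park near the library", posts))
    ```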

    The Groups update in particular targets a major area of Facebook for the next generation, as Groups now attract 1.8 billion users, and 25 million public groups are active every month, Meta notes.

    In addition to making Groups’ content easier to find on a per-group basis, Facebook’s search results will also organize content from groups in a section titled “What people in groups are saying.”

    [Image: Group Crowdsourcing (Search). Credits: Meta]

    Meanwhile, still hoping to build a space for online dating now that dating apps are passé, Facebook Dating will add a “Matchmaker” feature that will allow up to five friends to swipe through potential matches for you. Meta says it introduced the feature after seeing that Facebook Dating conversations had increased 24% year-over-year among young adults in the U.S. and Canada.

    [Image: Facebook Dating Matchmaker. Credits: Meta]

    Other AI updates and new Messenger features

    Overall, the changes across Facebook aim to make the network more appealing to the next generation of users, who use the site in a different way than the young adults who first joined the site in their youth, and are now approaching middle age. That includes embracing new technology, like AI.

    A Meta AI-powered “Imagine Yourself” image-generation feature is being integrated into Facebook’s Feed, Stories, and on the user’s profile page, as previously announced.

    [Image: Meta AI “Imagine Yourself”. Credits: Meta]

    Plus, AI comment summaries — appearing across public Groups, Pages, and Creators — have been rolled out. (We’d argue that they aren’t that useful, though, and detract from one of the more engaging Facebook experiences — scrolling through crazy comment sections!)


    Related to today’s news, the Notes feature from Messenger, which lets you share your current status in a bubble above your profile, will arrive on Facebook, initially as a test. Messenger’s Memories feature, which resurfaces photos from past chats, is also beginning to roll out.

    Messenger will additionally gain a new Communities feature, similar to WhatsApp, focused on letting people connect via their shared interests. These will offer an alternative to Facebook Groups and will include a shareable QR code for joining.

    The changes also follow this week’s launch of a new monetization program for creators that lets them earn more from content formats across Facebook.


  • Meta’s Movie Gen Makes Convincing AI Video Clips

    Meta just announced its own media-focused AI model, called Movie Gen, that can be used to generate realistic video and audio clips.

    The company shared multiple 10-second clips generated with Movie Gen, including a Moo Deng-esque baby hippo swimming around, to demonstrate its capabilities. While the tool is not yet available for use, this Movie Gen announcement comes shortly after its Meta Connect event, which showcased new and refreshed hardware and the latest version of its large language model, Llama 3.2.

    Going beyond the generation of straightforward text-to-video clips, the Movie Gen model can make targeted edits to an existing clip, like adding an object into someone’s hands or changing the appearance of a surface. In one of the example videos from Meta, a woman wearing a VR headset was transformed to look like she was wearing steampunk binoculars.

    [Video: An AI-generated clip made from the prompt “make me a painter.” Courtesy of Meta]

    [Video: An AI-generated clip made from the prompt “a woman DJ spins records. She is wearing a pink jacket and giant headphones. There is a cheetah next to the woman.” Courtesy of Meta]

    Audio bites can be generated alongside the videos with Movie Gen. In the sample clips, an AI man stands near a waterfall with audible splashes and the hopeful sounds of a symphony; the engine of a sports car purrs and tires screech as it zips around the track, and a snake slides along the jungle floor, accompanied by suspenseful horns.

    Meta shared some further details about Movie Gen in a research paper released Friday. Movie Gen Video consists of 30 billion parameters, while Movie Gen Audio consists of 13 billion parameters. (A model’s parameter count roughly corresponds to how capable it is; for comparison, the largest variant of Llama 3.1 has 405 billion parameters.) Movie Gen can produce high-definition videos up to 16 seconds long, and Meta claims that it outperforms competing models in overall video quality.

    Earlier this year, CEO Mark Zuckerberg demonstrated Meta AI’s Imagine Me feature, where users can upload a photo of themselves and role-play their face into multiple scenarios, by posting an AI image of himself drowning in gold chains on Threads. A video version of a similar feature is possible with the Movie Gen model—think of it as a kind of ElfYourself on steroids.

    What information has Movie Gen been trained on? The specifics aren’t clear in Meta’s announcement post: “We’ve trained these models on a combination of licensed and publicly available data sets.” The sources of training data and what’s fair to scrape from the web remain a contentious issue for generative AI tools, and it’s rarely public knowledge what text, video, or audio clips were used to create any of the major models.

    It will be interesting to see how long it takes Meta to make Movie Gen broadly available. The announcement blog vaguely gestures at a “potential future release.” For comparison, OpenAI announced its AI video model, called Sora, earlier this year and has not yet made it available to the public or shared any upcoming release date (though WIRED did receive a few exclusive Sora clips from the company for an investigation into bias).

    Considering Meta’s legacy as a social media company, it’s possible that tools powered by Movie Gen will start popping up, eventually, inside of Facebook, Instagram, and WhatsApp. In September, competitor Google shared plans to make aspects of its Veo video model available to creators inside its YouTube Shorts sometime next year.

    While larger tech companies are still holding off on fully releasing video models to the public, you can experiment with AI video tools right now from smaller, up-and-coming startups like Runway and Pika. Give Pikaffects a whirl if you’ve ever been curious what it would be like to see yourself cartoonishly crushed with a hydraulic press or suddenly melt into a puddle.


  • Facial recognition Meta Ray-Ban glasses know who you are in real time

    In what might be described as a real-life Black Mirror episode, a Harvard student has paired $379 Meta Ray-Ban 2 smart glasses with facial recognition to dig up personal data on every face the glasses see, in real time.

    If you’ve ever cared about your privacy, now might be the time to grab the tin foil hat. I’ve already got mine on.

    AnhPhu Nguyen, a junior at Harvard University, uses the livestreaming feature of his Meta Ray-Ban 2 smart glasses while a connected computer monitors the feed in real time. He employs publicly available AI-powered facial recognition software to detect faces and scour the internet for more images of those individuals.

    He then uses databases like voter registration and online articles to gather names, addresses, phone numbers, next of kin, and even social security numbers.

    All of this data is then pulled together using an LLM (large language model) similar to ChatGPT, which aggregates it into a searchable profile that’s fed straight back to his phone.

    The entire process takes only seconds from a face being captured discreetly on camera to the profile being displayed on his phone, giving off real-life Cyberpunk 2077 vibes.

    Nguyen has been careful to say that he hasn’t done any of this for nefarious or malicious purposes. He’s even published a short “how to” guide for removing your information from some of the databases he uses to scrape personal data. He wants to raise awareness of the implications this type of technology presents.

    While he offers a “solution” to help protect yourself, it’s really a small drop in a very large bucket that may well never have a solution. Or maybe the solution will be wearing smart glasses of your own, with infrared lights constantly blinding other facial recognition cameras?

    Unfortunately, bad actors (hackers acting maliciously) have already broken into many websites and databases, including in April of this year, when information on 3 billion people, including every single Social Security number in existence, was stolen from the background check company National Public Data and posted on the dark web.

    With the proliferation of AI over just the last few years, one has come to expect to see it used in new and inventive ways … even if that carries a negative connotation, like deepfakes and disinformation used to trick the masses into believing whatever narrative the creator wants them to believe.

    For now, Nguyen says he’s not releasing his software dubbed I-Xray.

    But if a smart college kid has already “cracked the code”, imagine what’s already happening behind the curtains. At least I think that was the lesson Edward Snowden was trying to tell us.

    Technical documentation: https://docs.google.com/


