Tag: meta

  • Early sign-ups to EU’s AI Pact include Amazon, Google, Microsoft and OpenAI — but Apple and Meta are missing

    The European Commission has revealed a list of the first 100-plus signatories to the AI Pact — an initiative focused on getting companies to publish “voluntary pledges” on how they approach and deploy artificial intelligence.

    While the bloc’s legally binding risk-based AI rulebook (the AI Act) entered into force last month, it will be several years before all its compliance deadlines are in operation. That creates a vacuum of non-compliance that the EU is keen to plug with the AI Pact.

    The effort is intended to boost engagement and secure commitments so companies get on the front foot, taking steps to implement the law’s requirements ahead of the deadlines. The Pact also focuses on info-sharing so signatories can help each other respond to the incoming requirements of the bloc’s AI rulebook and proactively develop best practices.

    There are also three “core actions” that Pact signatories are expected to commit to (at a minimum):

    • Adopting an AI governance strategy to foster the uptake of AI in the organisation and work towards future compliance with the AI Act;
    • Identifying and mapping AI systems likely to be categorised as high-risk under the AI Act; and
    • Promoting AI awareness and literacy among staff, ensuring ethical and responsible AI development.

    Beyond that, there’s a long list of potential pledges (available here in PDF form) that the Commission says was drafted by the AI Office, the body overseeing the AI Act, and then refined following feedback from “relevant stakeholders” in the AI Pact network. The resulting pledge list essentially allows signatories to pick and mix which commitments work for them.

    Examples include pledges to “design AI systems intended to directly interact with individuals so that those are informed, as appropriate, that they are interacting with an AI system,” and “clearly and distinguishably label AI generated content including image, audio or video constituting deep fakes”.

    This long list could encourage pro-compliance competition between signatories to see who’s offering the most when it comes to AI safety.

    A Pact to push for quicker AI Act compliance

    The AI Pact initiative was announced in May 2023 by then-internal market commissioner Thierry Breton, with Google agreeing at the time to help regulators work on it. Over a year later, the EU now has many more signatures, although some notable names are missing from the list.

    Apple isn’t listed, for example, nor is Meta. The adtech giant told Reuters on Tuesday that it would not immediately join the effort, saying it wanted to focus its compliance work on the AI Act itself.

    Penalties for non-compliance with the EU’s legally binding AI rulebook are stiff: They can reach up to 7% of global annual revenue for violating banned uses of AI; up to 3% for non-compliance with other AI Act obligations; and up to 1.5% for supplying incorrect information to regulators.
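
    To put those percentages in context, here is a rough back-of-the-envelope calculation in Python. The revenue figure is an assumption for illustration only (roughly $135 billion, in line with Meta’s reported 2023 total); the penalty rates come straight from the paragraph above.

    ```python
    # Rough illustration of AI Act penalty ceilings, assuming ~$135B annual revenue
    # (approximately Meta's reported 2023 total; an assumption for illustration only).
    annual_revenue_usd = 135e9

    penalty_ceilings = {
        "banned AI uses (7%)": 0.07,
        "other AI Act obligations (3%)": 0.03,
        "incorrect information to regulators (1.5%)": 0.015,
    }

    for violation, rate in penalty_ceilings.items():
        print(f"{violation}: up to ${annual_revenue_usd * rate / 1e9:.1f}B")
    ```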

    So if Meta steps wrong on the actual AI rules, it could be on the hook for billions in fines. That may be why it has so far snubbed the Pact: the binding law is where the real stakes lie, whereas reneging on voluntary pledges would merely invite a public dressing-down.

    French large language model company Mistral is also not on the list. The company was among the AI Act’s fiercest critics, so it’s not so surprising it hasn’t signed up for voluntary pledges either.

    Meanwhile, another European large language model maker, Germany’s Aleph Alpha, has inked the Pact. However, it recently said it was pivoting to providing B2B support for generative AI tools. Given its evolving business model, it may be reconfiguring its policy priorities, too.

    Others on the list include Amazon, Microsoft, OpenAI, Palantir, Samsung, SAP, Salesforce, Snap, Airbus, Porsche, Lenovo and Qualcomm.

    On the flip side, there’s no sign of Anthropic, Nvidia or Spotify — notable absences, especially the first two given their salience to AI development.

    Spotify’s absence is notable as the European company did sign an open letter organized by Meta last week, lobbying against regulations that might crimp the spread of generative AI.

    You can find the EU’s full list of early AI Pact sign-ups here.

    There’s a mix of types of companies signing up, including major European telcos, consulting firms, software players, banking/payment firms, multinationals, SMEs and consumer-facing platforms. Obviously, the 100+ names represent the tip of the iceberg when you consider how far and fast generative AI technologies are spreading.

    And as these are purely voluntary pledges, a signature on the AI Pact may not mean much more than a bid to grab reputational clout. Signatories are also invited to report on progress 12 months after they publish their own mix of pledges, which opens up the chance for another round of publicity.

  • Russia-Backed Media Outlets Are Under Fire in the US—but Still Trusted Worldwide

    In Latin America alone, RT’s channels run 24/7, and the network reported 18 million viewers in 2018. African Stream, which was also named by the State Department as part of Russian state media’s influence architecture and later removed by YouTube and Meta, garnered 460,000 followers on YouTube in the two years it was up and running. And Woolley notes that in these markets, there is likely less competition for viewership than there is in the saturated US media landscape.

    “[Russian media] made headway in limited media ecosystems, where its attempts to control public opinion are arguably much more effective,” he says. Russian media particularly homes in on anti-colonial, anti-Western narratives that can feel especially salient in markets that have been deeply impacted by Western imperialism. The US also has state-funded media that operates in foreign countries, like Voice of America, though according to the organization’s website, the 1994 U.S. International Broadcasting Act “prohibits interference by any US government official in the objective, independent reporting of news.”

    Rubi Bledsoe, a research associate at the Center for Strategic and International Studies, says that even with Russian state media removed from some social platforms, its messages are still likely to spread in more covert ways, through influencers and smaller publications with which it has cultivated relationships.

    “Not only was Russian media good at hiding that they were a Russian government entity, on the side they would seed some of their stories to local newspapers and local media throughout the region,” she says, noting that the large South American broadcasting corporation TeleSur would sometimes partner with RT. (Other times, Russia will back local outlets like Cameroon’s Afrique Média). “All of these secondary and tertiary news outlets are a lot smaller, but can talk to parts of the local population,” she says.

    Russian media has also helped cultivate local influencers who often align with its messaging. Bledsoe points to Inna Afinogenova, a Russian Spanish-language broadcaster who previously worked for RT but now has her own independent YouTube channel where she has more than 480,000 followers. (Afinogenova left RT after saying she disagreed with the war in Ukraine).

    And Bledsoe says that the ban from the US might actually be a boon for Russian media in the parts of the world where it’s actively trying to cultivate its image as a trusted media brand. “The narratives that were shared through RT and other Russian media and in Iranian media as well, it’s a kind of anti-imperialist dig at the West, and the US,” she says. “Saying the US is the driving force behind this international system and they’re plotting, and they’re out to get you, to impose on other countries’ sovereignty.”

    Though Meta was a key avenue for the spread of Russian state media content, that content still has a home on other platforms. RT does not appear to have a verified TikTok account, but accounts that exclusively post RT content, like @russian_news_ and @russiatodayfrance, have tens of thousands of followers on the app. African Stream’s TikTok is still live with nearly 1 million followers. TikTok spokesperson Jamie Favazza referred WIRED to the company’s policies on election-related mis- and disinformation.

    A post from RT’s account on X on September 18, the day after the ban, linked to its accounts on platforms like the right-wing video-sharing platform Rumble, X itself, and the Russian YouTube alternative VK. (RT has 3.2 million followers on X and 125,000 on Rumble.) “Meta can ban us all it wants,” the post read. “But you can always find us here.” X did not respond to a request for comment.



  • What to expect at Meta Connect 2024, including Quest 3S and new AR smart glasses

    Meta Connect 2024 is so close, you can almost taste it.

    The event kicks off during the week of Sept. 23, and the social media giant is expected to roll out hardware and software goodies that will intrigue VR gaming enthusiasts, AI aficionados, and smart glasses devotees. But what, specifically, does Meta have up its sleeves?

    We have a few guesses based on credible reports.

    What to expect at Meta Connect 2024

    Last year, the Meta Quest 3 was announced in early June, but it got its full reveal at Meta Connect 2023.

    The headset boasted a sleeker, more comfortable design, as well as new AR capabilities that made it more appealing than its predecessor. Once again, for Meta Connect 2024, the social media giant is expected to drop a new VR headset, but it’s not necessarily an upgrade over the Quest 3.

    Meta Quest 3S

    Rumor has it that Meta is planning on revealing a cheaper, more budget-friendly version of the Quest 3 called “Quest 3S.”

    Whether it was intentional or accidental, Meta leaked the Quest 3S in its own Meta Quest Link PC app for Windows, as discovered by a Reddit poster. For the uninitiated, this software lets users connect their Meta-branded VR headsets to a PC, allowing them to access more demanding PCVR games with just the Quest Link cable (which lets users siphon graphics power from their PC’s GPU).

    The image appears to have the body of the Quest 2 (in that it isn’t as sleek as the Quest 3), but it has different cameras on the front.

    According to a leaker on X, Quest 3S will have the following:

    • Qualcomm Snapdragon XR2 Gen 2 chip

    • 1,832 x 1,920-pixel resolution per eye

    • Up to 120Hz refresh rate

    • Quest Touch Plus controllers

    • 4 IR tracking cameras

    • 2 IR illuminators for depth sensing

    • 2 4MP cameras for passthrough

    Regarding price, Meta Quest 3S will reportedly have a starting price of $299. For reference, the starting price of the Quest 3 was $499 when it launched last year, so if the reported price is accurate, you’ll be saving $200 with Quest 3S.

    AR smart glasses

    Last year, Meta unveiled the second-generation Ray-Ban Meta Smart Glasses, which are packed with Meta AI.

    Image: Ray-Ban Meta Smart Glasses. Credit: Joe Maldonado / Mashable

    This time around, according to a report from Business Insider, Meta is planning on releasing a new pair of spectacles that are totally unrelated to Ray-Ban Meta Smart Glasses. Called “Orion” internally, these glasses will focus on augmented reality (AR).

    AR incorporates virtual elements into your real-world environment. Meta’s Quest 3 is capable of AR. For example, it has a “passthrough mode” that lets you see your true surroundings, but at the same time, you’ll have the option to see or interact with virtual objects in your space.

    Ray-Ban Meta Smart Glasses, on the other hand, have zero AR capabilities. They can play music, take pictures, capture videos, take calls — and even let you chat with Meta AI. However, they don’t offer that extra augmented dimension — but Orion, reportedly, will.

    Meta AI

    Meta AI can be found across a myriad of Meta products, including Instagram, WhatsApp, and even the Ray-Ban Meta Smart Glasses.

    Image: Meta Connect 2024. Credit: Meta

    Last year, Meta introduced Instagram-based “Meta AI Personas,” celebrity-lookalike chatbots that didn’t quite resonate with many people, including Mashable’s own AI reporter Cecily Mauran.

    Based on Meta AI, these chatbots featured the likeness of popular, high-profile people (e.g., Padma Lakshmi and Snoop Dogg) while taking on roles like “Creative Writing Partner,” “Travel Expert,” and more.

    However, this year, they got the boot.

    This doesn’t mean that Meta AI won’t continue to be spotlighted during Connect 2024. We’re expecting lots of AI updates during the livestream.

    Meta Connect 2024 will take place on Wednesday, Sept. 25 at 1 p.m. ET.

  • Meta Connect 2024: How to Watch and What to Expect

    Meta Connect, the big developer event and hardware showcase from the company that runs Facebook and Instagram, is kicking off next week. Meta is likely to show off its new VR and mixed-reality technology, put a shiny polish on its meandering metaverse ambitions, and delve into all the fresh ways it plans to squeeze artificial intelligence into every crevice of its devices and services.

    The event takes place on Wednesday, September 25, starting at 10 am Pacific time. The keynote address, where most of the new stuff will be announced, will be livestreamed. The host for the event will be Meta CEO and newly minted cool guy Mark Zuckerberg. Zuck’s hour-long presentation will be followed by a developer-focused address at 11 am led by Meta CTO and Reality Labs chief Andrew Bosworth. You can watch the events on the Meta Connect website or on Meta’s YouTube channel. And yes, you can also watch it in VR in Meta Horizon.

    The focus of the event will likely be a fusion of Meta’s mixed-reality efforts and its AI ambitions across its product line. Like any tech event, there are bound to be surprises. Here are the big things to look out for.

    Blurry MetaVision

    The one thing Meta likely won’t be announcing is a very expensive VR headset. It’s a move informed by where the mixed-reality-device market is right now—and whether people actually want to spend big to buy in. Instead, rumors abound about a so-called Meta Quest 3S, a headset that could be a cheaper version of the Meta Quest 3 with pared-back features.

    Meta was briefly the bigwig in the AR/VR space 10 years ago, when the company (then Facebook) bought the VR company Oculus. Facebook later changed its name to Meta and has sunk some $45 billion into its vision of a digital universe that most people just don’t seem to give much of a damn about. Workplaces aren’t using Meta’s Horizon Workrooms that much—we’re all still on Zoom—and despite the initial bouts of expensive corporate land grabs for digital real estate, users aren’t exactly eager to move into the metaverse.

    Other companies have struggled to find their virtual footing. Apple released its first mixed-reality headset, the $3,500 Apple Vision Pro, in February. Since then, the product has been regarded as a rare misstep for the company, or at least very clearly a first-generation product not intended for the masses. The device didn’t sell very well and was widely criticized as being an expensive, heavy, and ultimately lonely experience. (Apple mentioned the Vision Pro only once, in passing, at its optimistic iPhone announcement event on September 9.)

    Had the Vision Pro’s, well, vision panned out, Meta may have been more inclined to pursue the pricey premium category of VR headset. In August, The Information reported that Meta seems to have abandoned—or at least delayed—plans to reveal an update to its Quest Pro that would have stepped into the ring against Apple’s Vision Pro. Bosworth, Meta’s CTO, responded to that news on Meta’s Threads platform and insisted the move is not that big of a deal, but rather a natural part of the company’s device iterations. Still, it is a move that makes sense in the aftermath of the Apple Vision Pro fizzling out.

  • US Senate Warns Big Tech to Act Fast Against Election Meddling

    Andy Carvin, the managing editor and research director of the Digital Forensic Research Lab (DFRLab), tells WIRED that his organization, which conducts a vast amount of research into disinformation and other online harms, has been tracking Doppelganger for more than two years. The scope of the operation should surprise few, he says, given the fake news sites follow an obvious template, and that populating them with AI-generated text is simple.

    “Russian operations like Doppelganger are like throwing spaghetti at a wall,” he says. “They toss out as much as they can and see what sticks.”

    Meta, in a written statement on Tuesday, said it had banned RT’s parent company, Rossiya Segodnya, and “other related entities” globally across Instagram, Facebook, and Threads for engaging in what it called “foreign interference activity.” (“Meta is discrediting itself,” the Kremlin replied Tuesday, claiming the ban has endangered the company’s “prospects” for “normalizing” relations with Russia.)

    Testifying on Wednesday, Meta president of global affairs Nick Clegg stressed the industry-wide nature of the problem facing voters online. “People trying to interfere with elections rarely target a single platform,” he said, adding that Meta is, nevertheless, “confident” in its ability to protect the integrity of “not only this year’s elections in the United States, but elections everywhere.”

    Warner appeared less than fully convinced, noting the use of paid advertisements in recent malign influence campaigns. “I would have thought,” he said, “eight years later, we would be better at at least screening the advertisers.”

    He added that, seven months ago, over two dozen tech companies had signed the AI Elections Accord in Munich—an agreement to invest in research and the development of countermeasures against harmful AI. While some of the firms have been responsive, he said, others have ignored repeated inquiries by US lawmakers, many eager to hear how those investments played out.

    While talking up Google’s efforts to “identify problematic accounts, particularly around election ads,” Alphabet’s chief legal officer, Kent Walker, was halted mid-sentence. Citing conversations with the Treasury Department, Warner interrupted to say that he’d confirmed as recently as February that both Google and Meta have “repeatedly allowed Russian influence actors, including sanctioned entities, to use your ad tools.”

    The Virginia senator stressed that Congress needed to know specifically “how much content” relevant bad actors had paid to promote to US audiences this year. “And we’re going to need that [information] extraordinarily fast,” he added, referring as well to details of how many Americans specifically had seen the content. Walker replied to say that Google had taken down “something like 11,000 efforts by Russian-associated entities to post content on YouTube and the like.”

    Warner additionally urged the officials not to treat Election Day as the end zone. The integrity of the news that reaches voters in the days and weeks that follow, he stressed, is of equal importance.

  • OpenAI previews its new Strawberry model

    OpenAI this week unveiled a preview of OpenAI o1, also known as Strawberry. The company claims that o1 can more effectively reason through math and science, as well as fact-check itself by spending more time considering all parts of a query. The family of models is available in ChatGPT and via OpenAI’s API, though OpenAI says it plans to bring o1-mini access to all free users of ChatGPT at some point in the future.
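
    For developers curious what API access looks like, here is a minimal sketch using OpenAI’s official Python SDK. The model identifier (“o1-preview”) and whether your account has preview access are assumptions, so treat this as an illustration rather than a definitive integration guide.

    ```python
    # Minimal sketch: querying the o1 preview via OpenAI's Python SDK.
    # Assumes OPENAI_API_KEY is set in the environment and that the account
    # has access to the preview model (identifier assumed here).
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="o1-preview",  # assumed preview identifier; "o1-mini" is the smaller variant
        messages=[
            {
                "role": "user",
                "content": "A train travels 120 km in 1.5 hours. What is its average speed?",
            }
        ],
    )

    print(response.choices[0].message.content)
    ```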

    Apple’s “It’s Glowtime” event was this week, featuring the reveal of its iPhone 16 lineup. The phones have a new dedicated camera control feature, an A18 chip, and, of course, AI integration with Apple Intelligence. Apple also highlighted new features in its AirPods Pro 2, including the ability to use them as clinical-grade hearing aids. If you missed the event live, we put together a rundown of everything you need to know.

    Oprah Winfrey hosted an AI special featuring interviews with Bill Gates, OpenAI CEO Sam Altman, tech influencer Marques Brownlee, and FBI director Christopher Wray. While Altman may have overpromised what AI is capable of or how it can impact the world, the dominant tone of the conversations was one of skepticism — and wariness.


    This is TechCrunch’s Week in Review, where we recap the week’s biggest news. Want this delivered as a newsletter to your inbox every Saturday? Sign up here.


    News

    Image: Figure 02 robot. Image Credits: Brian Heater

    Face-to-face with Figure 02: TechCrunch’s Brian Heater got some one-on-one time with Figure’s latest humanoid robot. It’s come a long way in a short amount of time, with a new look and the ability to walk. Read more

    Pour one out for Cohost: Cohost is shutting down after running out of money. The attempted X competitor differentiated itself by focusing on a chronological feed and pursuing a business model that didn’t rely on advertising. Read more

    It’s too late to opt out of Meta’s AI training: Meta has acknowledged that it uses public posts to train its AI models, but it became clear this week just how much it uses. So if your Facebook or Instagram profiles were ever public, your posts have likely been scraped. Read more

    Hail a robotaxi with Uber: Uber users in Austin and Atlanta will be able to hail Waymo robotaxis through the app in early 2025 as part of an expanded partnership between the two companies. Read more

    Bluesky pivots to video: Bluesky will now let users share videos of up to 60 seconds in length on its platform, allowing the social network to better compete with rivals X and Instagram’s Threads. Read more

    Seattle’s hottest tourist destination is a damaged Cybertruck: A damaged Cybertruck on a busy Seattle street became an unlikely attraction, complete with a makeshift memorial in front of its bumper. The truck has since been removed, however. Read more

    Robots can learn how to tie shoelaces, too: In a new paper, Google DeepMind researchers showcase a method for teaching robots to perform a range of dexterous tasks, including tying a shoe, hanging a shirt, and even fixing fellow robots. Read more

    Is OpenAI worth $150B? OpenAI is reportedly in talks with investors to raise $6.5 billion at a $150 billion pre-money valuation. It’s significantly higher than OpenAI’s previously reported valuation of $86 billion and far higher than any other AI startup today. Read more

    More trouble for Adam Neumann: The former WeWork CEO’s climate/crypto/carbon-credit startup Flowcarbon appears to be in the process of curling up to die as it reportedly refunds investors. Read more

    AI hits the VMAs red carpet: Want to dress like Chappell Roan’s take on Joan of Arc? Thanks to a partnership between MTV and Shopsense AI, users can find and purchase dupes of their favorite outfits from this year’s VMAs with just a photo. Read more

    Analysis

    Image: A cartoon bomb with a fizzing fuse on a red and black background. Image Credits: Bryce Durbin / TechCrunch

    Apple punts on AI: Apple’s “It’s Glowtime” event featured a lot of AI talk, which is to be expected. But as Devin Coldewey writes, none of the Apple Intelligence features the company highlighted feel new or interesting — nor do they appear to include any meaningful changes since they were released in beta after WWDC. It raises the question: Is this a failure of imagination or of technology? Read more

    Maybe you should stop picking up the phone: For as long as we’ve had telephones, there have been people trying to trick someone into thinking they’re someone else — and voice AI tools are making it even easier for scammers to trick people on the phone. Zack Whittaker suggests that the best way to keep yourself safe from phone-related scams might be to just let an unknown call go to voicemail. Read more

  • Amazon’s Audiobook Narrators Can Now Make Their Own AI Voice Clones

    Synthetic voices have been proliferating for years, and the generative AI boom of the new ’20s has sped that process right along. AI voices are everywhere—in podcasts, in political campaigns, and in chatbots where they maybe-not-so-subtly replicate celebrity voices. Soon, they’ll be all up in your audiobooks too.

    Audible, the Amazon-owned audiobook company, announced a trial program for generating AI voice clones to read works in its audiobook marketplace. The announcement came via a post in ACX—Audiobook Creation Exchange—Audible’s service that lets authors and publishers turn written books into audiobooks.

    “We’re taking measured steps to test new technologies to help expand our catalog,” says the post, “and this week we are inviting a small group of narrators to participate in a US-only beta enabling them to create and monetize replicas of their own voices using AI-generated speech technology.”

    Audible says both the narrators and authors will have control over which projects their AI voices are used for and that final narrations will be reviewed as part of ACX’s production process to check for mispronunciations or other errors.

    Still, this might seem a tad incongruous with Audible’s current approach to narrated audiobooks, given that even after this announcement, ACX’s submission requirements still say that audiobook narrations “must be narrated by a human.” But Amazon has already been bullish on AI and implemented a similar AI audio program for its Kindle Direct Publishing operation last year.

    Right now the Audible program is limited to a select group of narrators. But it’s easy to see where this could go from here: Audible could soon be opened up to any author able to generate an AI voice to read their own book. Other companies are playing in this space as well; the startup Rebind is enlisting authors to allow their voices to be cloned so an AI version of them can “guide” readers through their texts. Fans of audiobooks are on the fence about all of it.

    Personally, I cannot wait until these dulcet yet uncanny voices fall into the hands of the dinosaur eroticists.

    Here’s some other consumer tech news from this week.

    Papers, Please

    Google is letting users digitize even more of their personal information. Up next: passports.

    Google added digital drivers’ licenses to its Wallet platform last year, enabling Android users to store identification details on their phones. Soon (Google doesn’t say exactly when) users will be able to do the same with their US passports.

    There are some caveats, of course. A Google Wallet version of your passport will be accepted only at specific TSA checkpoints where digital IDs are allowed. (Here’s a map.) Also, Google makes sure to recommend that you keep your passport on hand anyway. Digital IDs aren’t typically accepted anywhere outside of airports, so if you get into a pinch while abroad you’ll want to have your physical documentation. But for a lucky subset of travelers, this will solve the problem of needing to take yet another thing out of your bag when going through airport security.

    Keepin’ Tabs

    Hey speaking of Google, the company also announced some good news for all of us filthy browser tab hoarders. Tab grouping is a feature in Google Chrome that lets you squirrel away all your browser tabs under group folders for easier sorting. (I’ll read them later, I swear!) Google says its grouping feature will soon be made to sync across platforms. That means you can seamlessly continue your desktop browsing journey on your mobile device, where you will definitely not just continue ignoring them.

    Tab grouping will also soon be available on Chrome in iOS, and should be able to sync across desktops as well. How soon is all this coming? Well, again Google wasn’t quite clear about that. Regardless, better start collecting all those browser tabs now. Never know when you might need them again.

    Menlo-Upon-Tyne

    Meta—the Facebook, Instagram, and WhatsApp company that also does AI—has announced that its AI services are set to colonize a new cultural realm: the Brits. Meta announced it will be training its AI models off data from the users of its platforms in the UK.

    Specifically, the data will be collected from anyone who uses Facebook or Instagram in the UK, and then used to train Meta’s AI accordingly. In its announcement, Meta says it hopes this move will help its AI tools more accurately reflect British culture and speech.

  • Meta reignites plans to train AI using UK users’ public Facebook and Instagram posts

    Meta has confirmed that it’s restarting efforts to train its AI systems using public Facebook and Instagram posts from its U.K. userbase.

    The announcement comes three months after Facebook’s parent company paused its plans due to regulatory pressure in the U.K., with the Information Commissioner’s Office (ICO) raising concerns over how the company might use U.K. user data to train its generative AI algorithms — and how it was going about gaining consent. The Irish Data Protection Commission (DPC), Meta’s lead regulator in the European Union (EU), also objected to Meta’s plans after receiving feedback from several data protection authorities across the bloc — there is no word yet on when, or if, Meta will restart its AI training efforts in the EU.

    For context, Meta has been training its AI on user-generated content in markets such as the U.S. for some time, but Europe’s stringent privacy regulations have created obstacles for Meta — and other tech companies — looking to enhance their training datasets with more culturally diverse content. Back in May, however, Meta began notifying EU users of an upcoming privacy policy change, saying that it would begin using content from comments, interactions with companies, status updates, and photos and their associated captions. The reason, it argued, was that it needed to reflect “the diverse languages, geography and cultural references of the people in Europe.”

    These changes were due to come into effect on June 26, but the announcement spurred not-for-profit privacy activist organization NOYB (“none of your business”) to file a dozen complaints with constituent EU countries, arguing that Meta was contravening various aspects of the GDPR. Chief among them was the issue of opt-in versus opt-out: where personal data processing takes place, users should be asked for their permission first rather than having to take action to refuse.

    Meta, on the other hand, is relying on a provision within GDPR called “legitimate interest” to contend that its actions are compliant with the regulations. Meta previously used this legal basis to justify processing European users’ data for targeted advertising — though the Court of Justice of the European Union (CJEU) ruled that legitimate interest couldn’t be used as justification in that scenario, which raises doubts about Meta’s latest efforts.

    That Meta has elected to kick-start its plans in the U.K. rather than the EU is telling: the U.K. is no longer part of the European Union, though it has transposed much of the GDPR into its own data protection framework.

    Meta says it has now “incorporated regulatory feedback” to ensure that its approach is “even more transparent,” and from next week users will start to see in-app notifications explaining what it’s doing. From there, it will start using public content to train its AI in the coming months.

    “This means that our generative AI models will reflect British culture, history, and idiom, and that U.K. companies and institutions will be able to utilise the latest technology,” the company wrote in a blog post. “We’re building AI at Meta to reflect the diverse communities around the world and we look forward to launching it in more countries and languages later this year.”

    Objections

    One of the many bones of contention first time around was how Meta enabled users to “opt out.” Rather than giving users a straightforward opt-in/opt-out checkbox, the company made users jump through hoops to find an objection form hidden behind multiple clicks or taps, at which point they were forced to state why they didn’t want their data to be processed. It was entirely at Meta’s discretion whether such requests were honored, though the company said publicly that it would honor each one.

    Image: Facebook “objection” form. Image Credits: Meta / Screenshot

    This time around, Meta is sticking with the objection form approach, meaning users will still have to formally apply to Meta to let them know that they don’t want their data used to improve Meta’s AI systems. Those who have previously objected won’t have to resubmit their objections, however.

    The company says it has made its objection form simpler this time around, incorporating feedback from the ICO, though it hasn’t yet explained how it’s simpler.

    TechCrunch has reached out to the ICO for comment, and will update when we hear back.

  • Meta Llama: Everything you need to know about the open generative AI model

    Like every big tech company these days, Meta has its own flagship generative AI model, called Llama. Llama is somewhat unique among major models in that it’s “open,” meaning developers can download and use it however they please (with certain limitations). That’s in contrast to models like Anthropic’s Claude, OpenAI’s GPT-4o (which powers ChatGPT) and Google’s Gemini, which can only be accessed via APIs.

    In the interest of giving developers choice, however, Meta has also partnered with vendors including AWS, Google Cloud and Microsoft Azure to make cloud-hosted versions of Llama available. In addition, the company has released tools designed to make it easier to fine-tune and customize the model.

    Here’s everything you need to know about Llama, from its capabilities and editions to where you can use it. We’ll keep this post updated as Meta releases upgrades and introduces new dev tools to support the model’s use.

    What is Llama?

    Llama is a family of models — not just one:

    • Llama 8B
    • Llama 70B
    • Llama 405B

    The latest versions are Llama 3.1 8B, Llama 3.1 70B and Llama 3.1 405B, which was released in July 2024. They’re trained on web pages in a variety of languages, public code and files on the web, as well as synthetic data (i.e. data generated by other AI models).

    Llama 3.1 8B and Llama 3.1 70B are small, compact models meant to run on devices ranging from laptops to servers. Llama 3.1 405B, on the other hand, is a large-scale model requiring (absent some modifications) data center hardware. Llama 3.1 8B and Llama 3.1 70B are less capable than Llama 3.1 405B, but faster. They’re “distilled” versions of 405B, in point of fact, optimized for low storage overhead and latency.

    All the Llama models have 128,000-token context windows. (In data science, tokens are subdivided bits of raw data, like the syllables “fan,” “tas” and “tic” in the word “fantastic.”) A model’s context, or context window, refers to input data (e.g. text) that the model considers before generating output (e.g. additional text). Long context can prevent models from “forgetting” the content of recent docs and data, and from veering off topic and extrapolating wrongly.

    Those 128,000 tokens translate to around 100,000 words or 300 pages, which for reference is around the length of “Wuthering Heights,” “Gulliver’s Travels” and “Harry Potter and the Prisoner of Azkaban.”
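
    As a rough illustration of what that budget means in practice, here is a sketch that counts tokens with a Hugging Face tokenizer before sending text to a Llama model. The model ID is an assumption (the official meta-llama repos are gated behind Meta’s license), and any compatible Llama tokenizer would do; the input file is hypothetical.

    ```python
    # Sketch: check whether a document fits in Llama 3.1's 128K-token context window.
    # The model ID is assumed; official meta-llama repos are gated on Hugging Face.
    from transformers import AutoTokenizer

    MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed/gated; swap in any Llama tokenizer
    CONTEXT_WINDOW = 128_000

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

    def fits_in_context(text: str, reserve_for_output: int = 2_000) -> bool:
        """Return True if `text` leaves room in the window for the model's response."""
        n_tokens = len(tokenizer.encode(text))
        print(f"{n_tokens} tokens (budget: {CONTEXT_WINDOW - reserve_for_output})")
        return n_tokens <= CONTEXT_WINDOW - reserve_for_output

    with open("wuthering_heights.txt") as f:  # hypothetical input file
        print(fits_in_context(f.read()))
    ```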

    What can Llama do?

    Like other generative AI models, Llama can perform a range of different assistive tasks, like coding and answering basic math questions, as well as summarizing documents in eight languages (English, German, French, Italian, Portuguese, Hindi, Spanish and Thai). Most text-based workloads — think analyzing files like PDFs and spreadsheets — are within its purview; none of the Llama models can process or generate images, although that may change in the near future.

    All the latest Llama models can be configured to leverage third-party apps, tools and APIs to complete tasks. They’re trained out of the box to use Brave Search to answer questions about recent events, the Wolfram Alpha API for math- and science-related queries and a Python interpreter for validating code. In addition, Meta says the Llama 3.1 models can use certain tools they haven’t seen before (but whether they can reliably use those tools is another matter).
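
    Meta’s own tool-calling format isn’t spelled out here, but the basic loop looks something like the following sketch: the model emits a structured tool request, the application runs the tool, and the result is fed back in until the model produces a plain answer. Everything in this snippet (the call_llama stub, the tool names, the JSON shape of the tool call) is a hypothetical illustration, not Meta’s actual interface.

    ```python
    # Hypothetical sketch of a tool-use loop around a Llama-style model.
    # Nothing here is Meta's actual interface: call_llama, the tool names and the
    # JSON tool-call shape are all illustrative placeholders.
    import json

    def brave_search(query: str) -> str:       # placeholder tool
        return f"(pretend search results for {query!r})"

    def python_interpreter(code: str) -> str:  # placeholder tool
        return "(pretend interpreter output)"

    TOOLS = {"brave_search": brave_search, "python": python_interpreter}

    _turn = {"n": 0}
    def call_llama(messages):
        """Stand-in for a real model call: emits one canned tool request, then an answer."""
        _turn["n"] += 1
        if _turn["n"] == 1:
            return json.dumps({"tool": "brave_search", "input": "latest EU AI Act deadlines"})
        return "Final answer, written using the tool result above."

    def run_with_tools(user_prompt: str) -> str:
        messages = [{"role": "user", "content": user_prompt}]
        while True:
            reply = call_llama(messages)
            try:
                tool_call = json.loads(reply)  # e.g. {"tool": "brave_search", "input": "..."}
            except json.JSONDecodeError:
                return reply                   # plain text: the model is done
            result = TOOLS[tool_call["tool"]](tool_call["input"])
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "tool", "content": result})

    print(run_with_tools("When do the AI Act's obligations kick in?"))
    ```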

    Where can I use Llama?

    If you’re looking to simply chat with Llama, it’s powering the Meta AI chatbot experience on Facebook Messenger, WhatsApp, Instagram, Oculus and Meta.ai.

    Developers building with Llama can download, use or fine-tune the model across most of the popular cloud platforms. Meta claims it has over 25 partners hosting Llama, including Nvidia, Databricks, Groq, Dell and Snowflake.

    Some of these partners have built additional tools and services on top of Llama, including tools that let the models reference proprietary data and enable them to run at lower latencies.

    Meta suggests using its smaller models, Llama 8B and Llama 70B, for general-purpose applications like powering chatbots and generating code. Llama 405B, the company says, is better reserved for model distillation — the process of transferring knowledge from a large model to a smaller, more efficient model — and generating synthetic data to train (or fine-tune) alternative models.
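
    The synthetic-data workflow described here boils down to prompting the large model and saving its outputs as training examples for a smaller one. The sketch below assumes an OpenAI-compatible endpoint (many Llama hosting partners expose one); the base URL, API key and model name are placeholders, and the JSONL format is just one common convention for fine-tuning data.

    ```python
    # Sketch: using a large hosted Llama model to generate synthetic Q&A pairs
    # that could later fine-tune a smaller model. Endpoint URL, key and model
    # name are placeholders; many Llama hosts expose OpenAI-compatible APIs.
    import json
    from openai import OpenAI

    client = OpenAI(base_url="https://example-llama-host.com/v1", api_key="YOUR_KEY")  # placeholders

    SEED_TOPICS = ["refund policies", "password resets", "shipping delays"]

    with open("synthetic_train.jsonl", "w") as out:
        for topic in SEED_TOPICS:
            resp = client.chat.completions.create(
                model="llama-3.1-405b-instruct",  # placeholder model name
                messages=[{
                    "role": "user",
                    "content": f"Write one realistic customer question about {topic} "
                               f"and a concise, helpful answer. Label them Q: and A:.",
                }],
            )
            out.write(json.dumps({"topic": topic, "text": resp.choices[0].message.content}) + "\n")
    ```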

    Importantly, the Llama license constrains how developers can deploy the model: App developers with more than 700 million monthly users must request a special license from Meta, which the company will grant at its discretion.

    Alongside Llama, Meta provides tools intended to make the model “safer” to use:

    • Llama Guard, a moderation framework
    • Prompt Guard, a tool to protect against prompt injection attacks
    • CyberSecEval, a cybersecurity risk assessment suite

    Llama Guard tries to detect potentially problematic content either fed into or generated by a Llama model, including content relating to criminal activity, child exploitation, copyright violations, hate, self-harm and sexual abuse. Developers can customize the categories of blocked content and apply the blocks to all the languages Llama supports out of the box.
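
    In practice, Llama Guard is itself a language model that classifies a conversation as safe or unsafe. The sketch below shows roughly how that looks with Hugging Face transformers; the model ID, the chat-template behavior and the exact “safe”/“unsafe” output format are assumptions drawn from Meta’s model cards rather than anything stated here, and the repo is gated behind Meta’s license.

    ```python
    # Rough sketch of a Llama Guard-style moderation check with transformers.
    # Model ID and output format ("safe" / "unsafe" plus a category code) are assumptions.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    GUARD_ID = "meta-llama/Llama-Guard-3-8B"  # assumed, gated repo

    tokenizer = AutoTokenizer.from_pretrained(GUARD_ID)
    model = AutoModelForCausalLM.from_pretrained(GUARD_ID, torch_dtype=torch.bfloat16, device_map="auto")

    def moderate(user_message: str) -> str:
        chat = [{"role": "user", "content": user_message}]
        input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
        output = model.generate(input_ids, max_new_tokens=30)
        # Decode only the newly generated verdict, e.g. "safe" or "unsafe" plus a category
        return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True).strip()

    print(moderate("How do I pick a lock?"))
    ```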

    Like Llama Guard, Prompt Guard can block text intended for Llama, but only text meant to “attack” the model and get it to behave in undesirable ways. Meta claims that Prompt Guard can defend against explicitly malicious prompts (i.e., jailbreaks that attempt to get around Llama’s built-in safety filters) in addition to prompts that contain “injected inputs.”

    As for CyberSecEval, it’s less a tool than a collection of benchmarks to measure model security. CyberSecEval can assess the risk a Llama model poses (at least according to Meta’s criteria) to app developers and end users in areas like “automated social engineering” and “scaling offensive cyber operations.”

    Llama’s limitations

    Llama comes with certain risks and limitations, like all generative AI models.

    For instance, it’s unclear whether Meta trained Llama on copyrighted content. If it did, users might be liable for infringement if they end up unwittingly using a copyrighted snippet that the model regurgitated.

    Meta at one point used copyrighted e-books for AI training despite its own lawyers’ warnings, according to recent reporting by Reuters. The company controversially trains its AI on Instagram and Facebook posts, photos and captions, and makes it difficult for users to opt out. What’s more, Meta, along with OpenAI, is the subject of an ongoing lawsuit brought by authors, including comedian Sarah Silverman, over the companies’ alleged unauthorized use of copyrighted data for model training.

    Programming is another area where it’s wise to tread lightly when using Llama. That’s because Llama might — like its generative AI counterparts — produce buggy or insecure code.

    As always, it’s best to have a human expert review any AI-generated code before incorporating it into a service or software.

  • New evidence claims Google, Microsoft, Meta, and Amazon could be listening to you on your devices

    Companies want to know what potential customers are searching for online. Using that information, they can target each internet user with an ad for a product or service that is relevant to what they’re looking for.

    But people don’t always perform an online search for everything they want or need to buy. What if those companies could listen in to potential customers’ everyday lives and hyper-target them with advertising based on what they’re talking about?

    The marketers at media giant Cox Media Group (CMG) pitched this idea to potential advertising partners, according to a report from 404 Media. The tech news outlet recently obtained a November 2023 pitch deck from CMG that detailed its “Active Listening” service and how it can target advertising based on smart devices like smartphones, smart speakers, and smart TVs.

    “What would it mean for your business if you could target potential clients who are actively discussing their need for your services in their day-to-day conversations?” reads the beginning of the CMG sales pitch. “No, it’s not a Black Mirror episode – it’s Voice Data, and CMG has the capabilities to use it to your business advantage.”

    The pitch deck goes on to claim that it’s legal for companies to listen in on users and collect that data.

    “Creepy? Sure. Great for marketing? Definitely,” the CMG sales pitch says.

    Last year, 404 Media reported on CMG’s promotion of Active Listening, the use of microphones in smart devices to listen in to users for the purpose of targeted advertising. The outlet just obtained the pitch deck last week.

    Internet users have long speculated that Big Tech companies like Google, Microsoft, Amazon, and Facebook-owner Meta were eavesdropping on them. And now, we have evidence of CMG’s marketing team promoting this type of service to advertisers. As 404 Media points out, CMG has maintained current or former partnerships with all four of those Big Tech companies.

    So, what’s going on here?

    Big Tech responds to CMG’s Active Listening

    Mashable reached out to all four Big Tech companies mentioned in 404 Media’s report on CMG’s Active Listening pitch deck. We heard back from all of them – Meta, Amazon, Google, and Microsoft. Each company provided a statement denying working with CMG to target advertising in this way.

    “Meta does not use your phone’s microphone for ads and we’ve been public about this for years,” a Meta spokesperson said in a statement provided to Mashable. “We are reaching out to CMG to get them to clarify that their program is not based on Meta data.”

    Meta told Mashable that it was looking into whether CMG potentially violated Facebook’s terms and conditions, and said it would take action if necessary. A spokesperson also pointed Mashable to a 2016 post in which Facebook explained that it does not use users’ phone microphones for advertising purposes.

    Amazon, Google, and Microsoft also pushed back against any involvement with CMG’s Active Listening.

    “Amazon Ads has never worked with CMG on this program and has no plans to do so,” an Amazon spokesperson told Mashable.

    “All advertisers must comply with all applicable laws and regulations as well as our Google Ads policies, and when we identify ads or advertisers that violate these policies, we will take appropriate action,” a Google spokesperson said in a statement provided to Mashable.

    “We are investigating and will take any necessary actions in line with our policies,” a Microsoft spokesperson said.

    While the companies that replied all denied taking part in the advertising program, privacy concerns around smart home devices will surely persist among consumers.


