Meta tests facial recognition to detect ‘celeb-bait’ ad scams and ease account recovery

Meta is expanding tests of facial recognition as an anti-fraud tool to combat celebrity scam ads and more, the Facebook owner announced on Monday.
Monika Bickert, Meta’s VP of content policy, wrote in a blog post that some of the tests aim to strengthen its existing anti-fraud methods, such as the automatic scanning (using machine learning classifiers) that works as part of its ad review system, to make it harder for fraudsters to fly under its radar and trick Facebook and Instagram users into clicking fake ads.
“Fraudsters often try to use images of public figures, such as content creators or celebrities, to entice people to engage with advertisements that lead to scam websites where they are asked to share personal information or send money. This scheme, often referred to as ‘celeb-bait,’ violates our policies and is harmful to the people who use our products,” she wrote.
“Of course, celebrities are featured in many legitimate advertisements. But because celeb-bait ads are often designed to look realistic, they’re not always easy to spot.”
The test appears to use facial recognition as a backstop, checking ads already flagged as suspect by existing Meta systems when they contain an image of a public figure at risk of so-called “celeb-bait.”
“We will try to use facial recognition technology to match the face in the ad with the person’s public Facebook and Instagram profile pictures,” Bickert wrote. “If we confirm a match and determine the ad is a scam, we will block it.”
Meta says the feature is not used for any purpose other than fighting scam ads. “We immediately delete any facial data generated from this one-time comparison, regardless of whether our system finds a match, and we do not use it for any other purpose,” she said.
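Meta has not published implementation details, but the match-then-delete flow it describes can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the `face_embedding` helper stands in for an unspecified face-embedding model, and the similarity threshold is invented.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def matches_public_figure(ad_face_embedding, profile_embeddings, threshold=0.85):
    """Hypothetical check: does the face extracted from a flagged ad
    match any of a public figure's profile-picture embeddings?
    In Meta's described flow, all facial data would be deleted
    immediately after this one-time comparison, match or not."""
    return any(
        cosine_similarity(ad_face_embedding, emb) >= threshold
        for emb in profile_embeddings
    )

# Toy vectors standing in for real face embeddings.
ad_face = [0.99, 0.10, 0.0]
profiles = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
print(matches_public_figure(ad_face, profiles))  # True: close to the first profile vector
```

A real system would extract faces from ad creatives, embed them with a trained model, and only then run a comparison like this against the enrolled public figure's profile pictures.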
The company said that early tests of this method – “with a small group of celebrities and public figures” (it did not specify who) – showed “promising” results in improving the speed and efficiency of detection and enforcement against this type of fraud.
Meta also told TechCrunch that it thinks facial recognition could be effective at detecting deepfake scam ads, where generative AI has been used to produce images of famous people.
The social media giant has been accused for years of failing to stop fraudsters misusing famous people’s faces on its ad platform to push scams like dubious crypto investments on unsuspecting users. So it’s an interesting moment for Meta to apply facial-recognition-based anti-fraud methods to this problem, at a time when the company is simultaneously trying to grab as much user data as possible to train its AI models (as part of an industry-wide push to build generative AI tools).
In the coming weeks, Meta said it will start showing in-app notifications to a larger group of public figures who have been targeted by celeb-bait scams, letting them know they are being enrolled in the protection.
“Public figures enrolled in this protection can opt out in their Accounts Center at any time,” Bickert noted.
Meta is also exploring the use of facial recognition to identify fake celebrity accounts – for example, where fraudsters impersonate prominent people on the platform to improve their chances of pulling off a scam – by using AI to compare profile pictures on a suspicious account with a public figure’s Facebook and Instagram profile pictures.
“We hope to test this and other new methods soon,” Bickert said.
Video selfies for account recovery
Additionally, Meta announced it is testing the use of facial recognition on video selfies to more quickly restore access for people who have been locked out of their Facebook or Instagram accounts after a takeover by fraudsters (such as when someone is tricked into giving out their password).
This appears aimed at winning users over by promoting a seemingly beneficial use of facial recognition for identity verification – with Meta saying it will be a faster and easier way to regain account access than uploading a photo of a government-issued ID (currently the usual route to unlocking access).
“Selfie verification expands the way people gain account access, it only takes a minute to complete and it’s an easy way for people to verify their identity,” Bickert said. “While we know that criminals will continue to try to exploit account recovery tools, this verification method will ultimately be more difficult for criminals to abuse than traditional identity verification.”
The selfie-based verification method Meta is testing will require the user to upload a video selfie, which will then be processed with facial recognition technology to match it against the profile pictures of the account they are trying to regain access to.
Meta says the method is similar to the identity verification used to unlock a phone or access other apps, such as Apple’s Face ID on the iPhone. “As soon as someone uploads a video selfie, it will be encrypted and stored securely,” Bickert added. “It will never be visible on their profile, to friends, or to other people on Facebook or Instagram. We immediately delete any facial data generated after this comparison, whether or not there’s a match.”
Setting users up to upload and save a selfie for ID verification could be another way for Meta to expand its offerings in the digital identity space — if enough users opt in to uploading their biometrics.
No testing in the UK or EU — yet
Meta says it is running these facial recognition tests globally. However, the company noted, pointedly, that tests are not currently being conducted in the UK or the European Union – where comprehensive data protection laws apply. (In the specific case of biometrics used for ID verification, the bloc’s data protection framework demands explicit consent from the individuals concerned.)
Given this, Meta’s test looks like part of a broader PR strategy it has deployed in Europe in recent months to pressure local lawmakers into diluting citizens’ privacy protections. In this case, the justification offered for unfettered AI data processing is not the (self-serving) notion of data diversity or claims of lost economic growth, but the more specific goal of fighting fraudsters.
“We are working with the UK regulator, policy makers and other experts as the review progresses,” Meta spokesperson Andrew Devoy told TechCrunch. “We will continue to seek feedback from experts and make changes as features develop.”
Yet while the use of facial recognition for narrow security purposes may be acceptable to some – and may indeed be something Meta can do under existing data protection laws – using people’s data to train commercial AI models is a whole different kettle of fish.