Deepfake detection tool unveiled by Microsoft

By Leo Kelion
Technology desk editor

Image: deepfake detection graphic (Getty Images)

Microsoft has developed a tool for detecting deepfakes – computer-manipulated media in which one person’s likeness is used to replace another’s.

The software analyzes photos and videos to provide a confidence score indicating the likelihood that the material was artificially manipulated.

The company says it hopes the technology will help “fight disinformation.”

One expert said it is in danger of becoming obsolete quickly due to the pace at which deepfake technology is advancing.

To address this, Microsoft has also announced a separate system to help content producers add hidden code to their footage so that any subsequent changes can be easily flagged.

Finding face swaps

Deepfakes rose to fame in early 2018 after a developer adapted cutting-edge AI techniques to create software that swapped one person’s face for another.

The process worked by feeding a computer with many stills of one person and footage of another. The software then used this to generate a new video with the former’s face instead of the latter’s, with matching expressions, lip sync, and other motions.
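As an illustration of that mechanism, here is a minimal sketch in PyTorch of the shared-encoder, two-decoder autoencoder design popularized by early face-swap software. It is illustrative only: the layer sizes, 64x64 crop size, and variable names are assumptions, not any specific tool’s implementation.

```python
# Minimal sketch of the shared-encoder, two-decoder idea behind early
# face-swap tools (hypothetical shapes; not any specific app's code).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a 64x64 RGB face crop to a shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face from the latent code; one decoder per identity."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()
decoder_a = Decoder()  # trained only on person A's stills
decoder_b = Decoder()  # trained only on person B's footage

# Training reconstructs each person through their own decoder; at swap
# time, frames of person B are pushed through person A's decoder instead.
frame_of_b = torch.rand(1, 3, 64, 64)     # stand-in for a video frame
swapped = decoder_a(encoder(frame_of_b))  # A's face with B's pose/expression
```

Because both identities share one encoder, a latent code extracted from a frame of person B decodes, through person A’s decoder, into A’s face carrying B’s pose and expression.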

Since then, the process has been streamlined, opening it up to more users, and it now requires fewer photos to work.

Some apps need only a single selfie to replace a movie star’s face with the user’s within clips from Hollywood films.

But there are concerns that the process may also be abused to create misleading clips, in which a prominent figure is made to say or act in a way that never happened, for political or other reasons.

Earlier this year, Facebook banned deepfakes that could mislead users into thinking a subject had said something they had not. Twitter and TikTok later adopted similar rules.

Microsoft’s Video Authenticator tool works by trying to detect telltale signs that an image has been artificially generated, signs that may be invisible to the human eye.

Image caption: The Video Authenticator tool provides a confidence score based on the probability that a clip is a deepfake (Microsoft)

These include subtle fading or grayscale pixels at the boundary where the computer-generated version of the target’s face has been merged with the original subject’s body.

To build it, the company applied its machine learning techniques to a public dataset of around 1,000 deepfaked video sequences, then tested the resulting model against an even larger face-swap database created by Facebook.
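For intuition, the hedged sketch below shows the general pattern such a detector follows: a small binary classifier over face crops whose sigmoid output serves as a per-frame “fake” probability, averaged into a clip-level confidence score. The architecture, sizes, and aggregation here are assumptions for illustration, not Video Authenticator’s actual design.

```python
# Hedged sketch of a frame-level deepfake classifier emitting a confidence
# score; this shows the general pattern, not Microsoft's actual model.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),  # single logit: "is this frame a deepfake?"
)

frames = torch.rand(8, 3, 64, 64)           # stand-in for a clip's face crops
scores = torch.sigmoid(classifier(frames))  # per-frame fake probability
clip_score = scores.mean().item()           # aggregate confidence in [0, 1]
print(f"deepfake confidence: {clip_score:.0%}")
```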

One tech consultant noted that deepfake videos remain relatively rare for now and that most manipulated clips involve cruder re-edits done by a human. Even so, she welcomed Microsoft’s intervention.

“The only really widespread use we’ve seen so far is in non-consensual pornography against women,” commented Nina Schick, author of the book Deep Fakes and the Infocalypse.

“But synthetic media is expected to become ubiquitous in about three to five years, so we need to develop these tools going forward.

“However, as detection capabilities improve, so will the generation capabilities – it is never going to be the case that Microsoft can release a single tool able to detect all kinds of video manipulation.”

Fingerprinting news

Microsoft has acknowledged this challenge.

In the short term, the company said it hopes its existing product will help identify deepfakes ahead of the US election in November.

Rather than releasing it to the public, however, Microsoft is offering it only through a third-party organization, which in turn will provide it free of charge to news publishers and political campaigns.

The reason for this is to prevent bad actors from getting hold of the code and using it to teach their deepfake generators how to evade detection.

To address the long-term challenge, Microsoft has partnered with the BBC, among other media organizations, to support Project Origin, an initiative to “tag” online content in a way that makes it possible to automatically detect any manipulation of the material.

The US tech firm will do this via a two-part process.

First, it has created an internet tool to add a digital fingerprint – in the form of certificates and “hash” values – to a piece of media’s metadata.

Second, it has created a reader to check for any evidence that the fingerprints have been affected by third-party changes to the content.

Microsoft says people will then be able to use the reader, in the form of a browser extension, to confirm that a file is genuine and to check who produced it.
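A rough sketch of how that two-part process could work, using only Python’s standard library: the producer hashes the media bytes and signs the digest into a metadata record, and the reader later recomputes the hash to flag any tampering. Note that Project Origin’s real scheme relies on certificates and public-key signatures; the shared-key HMAC below is a simplifying stand-in, and all names are hypothetical.

```python
# Hedged sketch of the fingerprint-and-verify idea: (1) a producer attaches
# a hash plus a signature to a media file's metadata record, (2) a reader
# re-derives the hash later and checks it against the signed value.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in for the producer's private key

def fingerprint(media_bytes: bytes, producer: str) -> dict:
    """Create the metadata record a producer would embed alongside the file."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"producer": producer, "sha256": digest, "signature": signature}

def verify(media_bytes: bytes, record: dict) -> bool:
    """The reader's side: flag any change to the content since signing."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(
        expected, record["signature"]
    )

video = b"...original footage bytes..."
record = fingerprint(video, producer="BBC")
print(json.dumps(record, indent=2))

print(verify(video, record))                # True: content untouched
print(verify(video + b"edit", record))      # False: manipulation flagged
```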

The manipulation of photos and videos is central to the spread of often quite convincing disinformation on social media.

But complex deepfake technology isn’t always needed: for now, simpler editing tools are more often the option of choice.

That was the case with a recent manipulated video of US presidential candidate Joe Biden, which has been viewed more than two million times on social media.

The clip shows a television interview during which Biden appeared to fall asleep. But it was faked: the footage of the guest was taken from a different television interview, and snoring sound effects had been added.

Computer-generated photos of people’s faces, on the other hand, have already become common features of sophisticated foreign interference campaigns, used to make fake accounts look more authentic.

One thing is certain: having multiple ways to spot manipulated or altered media is no bad thing in the fight against online disinformation.

Related topics

  • Microsoft

  • Deepfakes
