GANs and the Detection of Synthetic Media

GANs fuel the production of synthetic media, and with it the need to tell real assets from artificial ones.

Generative Adversarial Networks (GANs) represent a conceptual advance in machine learning. Introduced by Ian Goodfellow in 2014, GANs belong to the family of generative models: models that produce new content, such as images, videos, and audio. A GAN pits two neural networks against each other to generate synthetic data instances that can pass for real data. One network, the generator, creates new data instances, while the other, the discriminator, judges whether a given instance came from the training set or from the generator.
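
To make the generator/discriminator interplay concrete, here is a minimal adversarial training loop in PyTorch. It is an illustrative sketch on toy random data, not a production model; the dimensions, architectures, and learning rates are assumptions of mine, not details from the original GAN paper.

```python
# Minimal GAN training sketch (illustrative; toy dimensions and random data).
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy sizes

# Generator: maps random noise to a synthetic data instance.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
# Discriminator: outputs a logit for "real" vs. "generated".
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real data
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: label real data 1, generated data 0.
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to fool the discriminator into predicting 1.
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

As the two networks improve together, the generator's outputs become progressively harder for the discriminator to separate from real data, which is exactly the dynamic that makes deepfakes convincing.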

The arrival of GANs feeds the commercial imperative for synthetic media. Rather than shooting new footage, producers can use these networks to edit existing assets into personalized videos. On one hand, GANs can reduce the time and cost of conventional production. On the other, as machine learning improves, deepfakes will become much harder to distinguish from real assets. To address the threat of bad actors leveraging the technology, scalable detection platforms are needed to contain the systemic risk deepfakes pose as vehicles of misinformation.

Potential Platforms to Detect Deepfakes

Researchers are building automated systems that detect indicators of deepfake videos by analyzing lighting, blinking patterns, and real-world facial movements and expressions. Others use recurrent neural networks that split images into patches and examine them pixel by pixel. One option, then, is to combine these detection techniques for spotting deepfakes, as in the sketch below. For instance, Jigsaw’s Assembler helps media organizations spot manipulated images by bringing multiple image-manipulation detectors into one tool, each designed to catch a specific technique, such as copy-paste edits or alterations to image brightness. Detection is difficult to implement at scale, however, because almost every internet service and image-upload pipeline modifies and compresses images, which erases much of the pixel-level evidence available for analysis.
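
As a rough illustration of the patch-based idea, the sketch below splits an image into fixed-size patches and scores each one with a small stand-in classifier. This is not Assembler’s implementation; the patch size and network are placeholders, and a real detector would be trained on manipulated versus pristine patches.

```python
# Patch-level manipulation scoring sketch (illustrative; untrained stand-in model).
import torch
import torch.nn as nn

def to_patches(image: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """Split a (C, H, W) image into (N, C, patch, patch) patches."""
    c, h, w = image.shape
    patches = image.unfold(1, patch, patch).unfold(2, patch, patch)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c, patch, patch)

# Stand-in per-patch classifier; a trained detector would replace this.
patch_scorer = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),  # probability a patch was manipulated
)

image = torch.rand(3, 256, 256)           # placeholder for a video frame
scores = patch_scorer(to_patches(image))  # one score per patch
print(f"max patch score: {scores.max():.3f}")  # flag the frame above a threshold
```

The design choice worth noting is that per-patch scores localize the manipulation, not just detect it, which is also why compression hurts: re-encoding smooths away the local statistics the scorer depends on.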

Other researchers prioritize verifying real videos over detecting fakes. Verification methods include automatically watermarking and identifying images at the camera, and using blockchains to trace assets back to their sources. For instance, Truepic is a startup whose “Controlled Capture” technology establishes trust in digital photos and videos by verifying their origin, pixel contents, and metadata from the instant a user takes a picture through its application. Serelay, in turn, developed an “Integrity Vector” approach that computes over 100 mathematical attributes of a media file at the point of capture. Unlike solutions that rely on a chain-of-custody implementation, Serelay’s technology uses only these computations and the associated metadata to assess the authenticity of the media item in question.
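
To show the general shape of capture-time verification, here is a simplified sketch. It is an assumed design of my own, not Truepic’s or Serelay’s actual system: hash the pixel bytes together with the capture metadata at the moment of capture, sign the digest with a device key, and later recompute the hash so that any edit to pixels or metadata breaks the signature.

```python
# Capture-time signing and later verification (illustrative, hypothetical design).
import hashlib
import hmac
import json

SIGNING_KEY = b"device-secret-key"  # hypothetical per-device key

def sign_capture(pixels: bytes, metadata: dict) -> str:
    """Sign a digest of the pixel contents plus canonicalized metadata."""
    digest = hashlib.sha256(
        pixels + json.dumps(metadata, sort_keys=True).encode()
    ).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify_capture(pixels: bytes, metadata: dict, signature: str) -> bool:
    """Recompute the signature; any pixel or metadata change invalidates it."""
    return hmac.compare_digest(sign_capture(pixels, metadata), signature)

meta = {"device": "cam-01", "timestamp": "2020-03-01T12:00:00Z"}
sig = sign_capture(b"\x00" * 1024, meta)               # at capture time
assert verify_capture(b"\x00" * 1024, meta, sig)       # later check passes
assert not verify_capture(b"\x01" * 1024, meta, sig)   # edited pixels fail
```

The appeal of this family of approaches is that it sidesteps the compression problem that plagues detection: authenticity is established once, at the source, rather than inferred from whatever pixels survive the upload pipeline.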
