Photos and videos are an essential part of journalism, as they add credibility and depth to a story. However, what happens when what you see with your own two eyes is no longer the truth?

With deep fakes, that’s now a reality. Deep fakes are videos that have been altered, usually by superimposing another person’s face onto a body in the video. It works much like Snapchat’s face swap, except that, if executed well, the change isn’t perceptible.

These videos are made with a machine learning technique called a generative adversarial network (GAN), which pits two neural networks against each other: a generator that creates synthetic images and a discriminator that tries to tell them apart from real ones. Fed enough data, such as pictures of a person’s face, a GAN can create completely new images of that same face, and the same approach can generate new text, audio, or images from a given data set.
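
To make the adversarial setup concrete, here is a minimal sketch in PyTorch. The article names no framework or architecture, so everything below, including the layer sizes and learning rates, is an illustrative assumption rather than any specific deep fake tool:

```python
# A minimal GAN sketch: a generator learns to produce fake images while a
# discriminator learns to tell them from real ones. Illustrative only.
import torch
import torch.nn as nn

LATENT_DIM = 64    # size of the random noise the generator starts from
IMG_DIM = 28 * 28  # flattened image size (small scale, for illustration)

# Generator: noise in, fake image out.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: image in, probability-of-real out.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to score real images high, fakes low.
    noise = torch.randn(batch, LATENT_DIM)
    fake_images = generator(noise).detach()  # don't update G on this pass
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Dummy batch in the generator's output range, just to show the call; real
# training would loop train_step() over thousands of face images.
train_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

Real face-swap tools add considerably more machinery on top of this, but the core tug-of-war between the two networks is the mechanism the technique rests on.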

The spread of deep fakes could lead to a collapse of public trust in the media. With the widespread use of Photoshop, people learned to be wary of the photos they see, but no one has ever doubted the credibility of video and audio in quite the same way. Once deep fakes enter the picture, video will lose that credibility too.

These networks were primarily used by artificial intelligence researchers until a Reddit user with the screen name “DeepFakes” used the technique to post altered pornographic videos. He fed the network a set of pictures of a celebrity and was able to insert the celebrity’s face onto the women in the videos. The subreddit r/deepfakes was eventually shut down, but not before 90,000 users had joined the community.

The Reddit user also developed an app called “FakeApp,” which allows anyone to create their own deep fakes and exploit them to make a person appear to say or do something they never did.

There are still limitations to deep fake technology. The data set needed to teach the computer is large, around 300 to 2,000 images, so celebrities and others with many photos online are vulnerable, but not everyone is at risk. With the current rate of technological advancement, however, it may not be long before convincing deep fakes are used to create fake news or spread hoaxes.

Craig Duff, a professor at the Medill School of Journalism in charge of the video and broadcast specialization for graduate students, believes that a future with widespread deep fakes is a threat to journalism.

“If you already believe, as President Trump appears to believe, that the mainstream media is ‘fake news,’ then you will be inclined to disbelieve any video that comes from legitimate news sources,” Duff said. “That could be a terrible blow to video journalism.”

So what is being done about this threat? In response, several states, as well as Congress, are considering drafting legislation against audio and video that have been altered through the use of AI, and the Pentagon is also looking into the technology. Independent researchers, meanwhile, are trying to train computers to recognize visual inconsistencies in order to detect deep fake videos, an approach sketched below.
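
As a rough illustration of that detection idea, here is a minimal sketch in PyTorch. The article does not describe any specific detector, so the architecture, input size, and labeling scheme below are assumptions for illustration, not a published system:

```python
# A hedged sketch of deep fake detection: train a binary classifier on video
# frames labeled real vs. fake so it learns to spot visual inconsistencies.
import torch
import torch.nn as nn

# A small CNN over 64x64 RGB face crops (sizes are illustrative assumptions).
detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 64x64 -> 32x32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                       # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),            # one logit: fake vs. real
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

def train_on_frames(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (batch, 3, 64, 64) RGB crops; labels: 1.0 = fake, 0.0 = real."""
    logits = detector(frames).squeeze(1)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

# Dummy batch just to show the call; real training would use labeled face
# crops extracted from genuine and manipulated videos.
train_on_frames(torch.rand(8, 3, 64, 64), torch.randint(0, 2, (8,)).float())
```

The hard part in practice is not the classifier but the data: detectors trained on one generation of fakes tend to miss the artifacts of the next, which is why this remains an active arms race.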

According to Professor Russell Walker of the Kellogg School of Management, it is up to us to develop solutions that safeguard our beliefs and fight the war against disinformation.

“Deep fakes are a major threat to our belief and evidence systems, to which visual evidence is central,” Walker said. “The problem is born of our technological capabilities, and that is where we need to look for tools that will enable us to evaluate authenticity and believability of photos, video and audio. Unraveling this will be challenging yet necessary for preserving our codified way of considering visual evidence.”