Did Trump really just say that?
by Aaaaaaaaaaaaron, Konstantin, Hans and Yannick
Artificial intelligence (AI), machine learning and deep learning form nested categories: machine learning is a subset of AI, and deep learning is in turn a subset of machine learning.
A program counts as artificial intelligence whenever it enables a computer to mimic human intelligence, whether through if-then rules and decision trees or through machine learning and deep learning. Machine learning is a subset of AI comprising statistical techniques that enable machines to improve at tasks with experience. Deep learning, in turn, is a subset of machine learning composed of algorithms that let a computer train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data. You can picture deep learning as a rocket engine whose fuel is the huge amount of data fed to the algorithm: it can capture more abstract patterns than classical machine learning, but it also needs more data.
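The difference between hand-written rules and "improving with experience" fits in a few lines of code. Below is a minimal, purely illustrative sketch of a perceptron, one of the simplest learning algorithms: instead of being told the rule "numbers above 5 belong to class 1", it discovers an equivalent rule from labelled examples (the data and learning rate here are made up for the demo):

```python
def train_perceptron(samples, epochs=20, lr=1):
    """Learn a threshold rule  w*x + b > 0  from labelled examples."""
    w, b = 0, 0
    for _ in range(epochs):
        for x, label in samples:
            predicted = 1 if w * x + b > 0 else 0
            error = label - predicted      # "experience": how wrong was the guess?
            w += lr * error * x            # nudge the weight ...
            b += lr * error                # ... and the bias toward the right answer
    return w, b

# Training data: numbers above 5 are labelled 1, the rest 0.
data = [(x, 1 if x > 5 else 0) for x in range(11)]
w, b = train_perceptron(data)

def predict(x):
    return 1 if w * x + b > 0 else 0
```

After training, `predict` reproduces the hidden rule without anyone having typed an if-then condition for it; scaling this idea up to millions of weighted connections is what deep learning does.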
The artificial neural networks (ANNs) that deep-learning algorithms are built on are inspired by biological neural networks. They consist of many layers of artificial neurons with connections between them, like synapses, that process the data. The network learns by adjusting the weights of those connections and the thresholds of the individual neurons, much as synapses in a real brain strengthen or weaken; some approaches also add or remove neurons and connections so the network can handle different tasks.

Deep learning is used for image processing, pattern and speech recognition, autonomous vehicles, robots such as housekeeping robots, early warning systems, and more. It is also used to create deepfakes: videos in which a person’s original face has been replaced with someone else’s. Given enough pictures of the target person, the AI can calculate what that person would look like while smiling, being angry or saying certain things.

The technology that powers deepfakes, known as Generative Adversarial Networks (GANs), was only invented in 2014. A GAN is made up of two rival networks. A synthesiser (generator) creates content, and a detector (discriminator) compares it to images of the real thing. Whenever the discriminator spots an artefact (a distortion in the generated image), the synthesiser adjusts and tries again. Through hundreds of thousands of such trial-and-error cycles, the two systems can produce immensely lifelike videos. With this, you can make celebrities or politicians appear to say anything you want, or place their face on someone else’s body.
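The generator-versus-discriminator cycle can be sketched as a toy adversarial game in one dimension. Everything here is a deliberately simplified stand-in, not the deep convolutional networks used for real deepfakes: the "real thing" is just numbers near 4, the synthesiser is a single parameter g, and the discriminator is a one-variable logistic function; all constants are invented for the demo:

```python
import math
import random

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

random.seed(0)
real_mean = 4.0      # "real" data: numbers scattered around 4
a, c = 0.0, 0.0      # discriminator D(x) = sigmoid(a*x + c), guessing "is x real?"
g = 0.0              # generator: emits g plus a little noise
lr = 0.05

for step in range(2000):
    real = real_mean + random.gauss(0, 0.1)
    fake = g + random.gauss(0, 0.1)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: nudge g so that D(fake) moves toward 1, i.e. fool the detector.
    d_fake = sigmoid(a * g + c)
    g += lr * (1 - d_fake) * a

# After many rounds of this tug-of-war, g has drifted toward the real data.
```

Each round, the detector gets better at telling real from fake, which in turn tells the synthesiser exactly how to improve; scaled up to images and deep networks, this is the dynamic that yields lifelike fake video.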
Sometimes this can be useful, for instance to create a harmless, funny video that is obviously fake. It can also be used in movies to make effects look more realistic and to save editors time, because the AI does part of the editing for them. But the technology arguably does more harm than good. Deepfakes have made it very easy to create realistic faked videos: anyone with enough data and computing power can, without the consent of the person appearing in the video, create hoaxes that spread misinformation in society and humiliate those depicted. With around 93 million selfies taken every day, collecting enough data is not hard, and the GPUs (graphics cards) that do the calculations for the AI become faster and cheaper every year. Apps for creating deepfakes, such as “FakeApp”, have already been released, making the technology even more accessible to private individuals.

Another big problem is that an estimated 96% of deepfakes are pornographic. Not only are celebrities’ faces edited onto the bodies of porn performers, but there is also a great deal of fake revenge porn. This is a crime and a form of identity theft: it violates the privacy, human dignity and basic human rights of those affected, and can destroy their reputation. Once such fake videos are uploaded to the internet, it is almost impossible to remove them, and it is also not easy to find out who created them.

When photo editing became easy through tools like Photoshop, people stopped assuming that every image they see is real, but videos were still regarded as trustworthy. That credibility has now been lost to deepfakes. Yet this can also be seen in a positive light. The first deepfakes were created not by big companies but by individuals. If someone could build this technology in their bedroom by throwing together a handful of existing tools, someone with a bigger budget must have pulled it off a long time ago.
There is little doubt that large organizations with massive resources have explored these techniques. Who knows, maybe we have already seen some of their work on the news without realizing it. So thank deepfakes for reminding us, once again, that we cannot take everything we see at face value.

Right now there is a “war” going on between programmers improving the AI that makes deepfakes harder to detect and companies developing AI to detect and debunk them. Large corporations such as Facebook and Microsoft have taken initiatives to detect and remove deepfake videos; the two companies announced earlier this year that they will collaborate with top universities across the U.S. to create a large database of fake videos for research. If enough companies work together, this “war” can be won and the credibility of video restored to some extent. Synthetic videos produced for purposes such as parody or education would need to be watermarked, and the rest filtered out as well as possible. As long as the “war” continues, attention must also be paid to audio deepfakes, which are on the rise at the moment.

Nevertheless, deepfakes are no reason to demonize AI or deep learning in general: AI can still make our lives easier and shape the future in a positive way, provided we make these systems reliable enough.