
AI in the cinema: How deepfakes could revolutionize Hollywood

When Hollywood brought a young Carrie Fisher back to the big screen for a brief appearance as Princess Leia in "Rogue One: A Star Wars Story", it was a sensation. After all, the digital creation of human characters is considered the holy grail of visual effects. To produce just a few seconds of convincing footage, entire armies of animators spend months modeling digital likenesses - frame by frame and in painstaking detail. But times change. Today YouTubers like Shamook put their own creations online and confidently invite direct comparison with expensive Hollywood productions. For his own version of Princess Leia, Shamook, as he writes, needed nothing more than a PC, a few hundred pictures of Carrie Fisher from the old "Star Wars" films and a day of computing time. Anyone who looks at the result will suspect that Hollywood is facing a paradigm shift.

Video manipulations like this are based on the rapid development of machine learning, a sub-category of artificial intelligence. The term "deepfake" has become established for them - a portmanteau of "deep learning", a reference to the deep neural networks involved, and "fake". The most common form of deepfake is the "face swap": an algorithm replaces the face of an actor in a video with another face, frame by frame, while largely retaining the facial expressions of the original. The technology is now so widespread and so easy to use that the internet is inundated with fun videos. So if you always thought that Sylvester Stallone might have been the better Terminator after all, or if you would rather see Jim Carrey swinging the ax in "The Shining" instead of Jack Nicholson, the YouTube channel "Ctrl Shift Face" is recommended.
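To make the frame-by-frame idea concrete, here is a minimal sketch of such a processing loop in Python using OpenCV's standard video I/O. The function swap_face is a hypothetical placeholder for whatever model actually performs the replacement; the file names are assumptions for illustration.

```python
# Minimal sketch of the frame-by-frame workflow described above.
# swap_face() is a hypothetical placeholder; a real deepfake model
# would replace the face in each frame here.
import cv2

def swap_face(frame):
    """Hypothetical: return the frame with the target face swapped in."""
    return frame  # placeholder only

reader = cv2.VideoCapture("input.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter("output.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         fps, (width, height))

while True:
    ok, frame = reader.read()
    if not ok:
        break                       # end of video reached
    writer.write(swap_face(frame))  # each frame is processed independently

reader.release()
writer.release()
```

The key point the example illustrates is that the swap operates on individual frames; consistency of expression across frames comes from the model itself, not from the loop.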

Artificial neural networks in the form of so-called autoencoders are usually used for face swaps. Trained on as many images as possible, they first learn to reduce a face to its essential features and then to reconstruct the original from them. In the case of deepfakes, however, the original is to be replaced by another face with the same facial expression. Put simply, the image is reconstructed with a different network that has been trained on the new face. The network decides for itself which features it considers "essential", and the details are often incomprehensible to its human trainers. The only important thing is that it reduces all faces to the same set of features. Only then can it reconstruct another face with the exact same facial expression from the data obtained.
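A minimal sketch of this setup, written in PyTorch, is shown below: one shared encoder compresses any face to a common set of latent features, and two separate decoders learn to rebuild person A and person B from that code. The layer sizes, the 64x64 input resolution and the training details are assumptions for illustration, not values from the article.

```python
# Sketch of a face-swap autoencoder: shared encoder, two decoders.
# All sizes are illustrative assumptions.
import torch
import torch.nn as nn

LATENT = 256  # the shared set of "essential" features

def make_encoder():
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(64 * 64 * 3, 1024), nn.ReLU(),
        nn.Linear(1024, LATENT),                 # compress to latent features
    )

def make_decoder():
    return nn.Sequential(
        nn.Linear(LATENT, 1024), nn.ReLU(),
        nn.Linear(1024, 64 * 64 * 3), nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),            # reconstruct the image
    )

encoder   = make_encoder()   # shared: trained on faces of BOTH people
decoder_a = make_decoder()   # learns to reconstruct face A
decoder_b = make_decoder()   # learns to reconstruct face B

loss_fn = nn.MSELoss()
opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(decoder_a.parameters())
    + list(decoder_b.parameters()), lr=1e-4)

def train_step(batch_a, batch_b):
    # Each decoder learns to rebuild "its" face from the shared latent code.
    recon_a = decoder_a(encoder(batch_a))
    recon_b = decoder_b(encoder(batch_b))
    loss = loss_fn(recon_a, batch_a) + loss_fn(recon_b, batch_b)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def swap(frame_a):
    # The actual swap: encode person A's frame, but decode it with B's
    # decoder, so B's face appears with A's expression.
    with torch.no_grad():
        return decoder_b(encoder(frame_a))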