In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits is developing ways to spot misleading AI-generated media. Their work suggests that detection tools are a viable short-term solution, but that the deepfake arms race is just beginning.

The best AI-produced prose used to be closer to Mad Libs than The Grapes of Wrath, but cutting-edge language models can now write with humanlike pith and cogency. One detection approach turns this capability against itself: given a semantic context, the detector's underlying language model predicts which words are most likely to appear next, essentially writing its own text. If words in a sample being evaluated match the model's top 10, 100, or 1,000 predicted words, an indicator turns green, yellow, or red, respectively. In effect, the tool uses its own predictive text as a benchmark for spotting artificially generated content.

State-of-the-art video-generating AI is just as capable and dangerous as its natural-language counterpart, if not more so. Researchers at Seoul-based Hyperconnect recently developed a tool, MarioNETte, that can manipulate the facial features of a historical figure, politician, or CEO by synthesizing a reenacted face animated by the movements of another person. Even the most realistic deepfakes contain artifacts that give them away, however.
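The top-k color-coding scheme described above can be sketched roughly as follows. This is a toy illustration only: a simple word-frequency "model" stands in for the real neural language model, and the function names and exact thresholds are illustrative assumptions, not the actual tool's API.

```python
from collections import Counter

# Toy stand-in for a language model: ranks candidate words by how often
# they follow the given context word in a reference corpus.
def rank_next_words(context_word, corpus_tokens):
    followers = Counter()
    for prev, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        if prev == context_word:
            followers[nxt] += 1
    # Words ordered from most to least predicted
    return [w for w, _ in followers.most_common()]

# Color a word the way the detection scheme described above does:
# green if it sits in the model's top 10 predictions, yellow if in the
# top 100, red if in the top 1,000, otherwise flag it as unusual.
def color_word(rank):
    if rank is None or rank >= 1000:
        return "unusual"
    if rank < 10:
        return "green"
    if rank < 100:
        return "yellow"
    return "red"

def highlight(sample_tokens, corpus_tokens):
    """Color each word of a sample by how predictable it was in context."""
    colors = []
    for context, word in zip(sample_tokens, sample_tokens[1:]):
        ranking = rank_next_words(context, corpus_tokens)
        rank = ranking.index(word) if word in ranking else None
        colors.append((word, color_word(rank)))
    return colors

corpus = "the cat sat on the mat and the cat ran".split()
print(highlight("the cat sat".split(), corpus))
```

Human-written text tends to mix in lower-ranked (red or unusual) words, while machine-generated text clusters heavily in the green and yellow bands; that skew is the signal the real tool visualizes.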
A daunting task
There are early reports of VR deepfake pornography circulating on the internet. For those who are unsure, this probably means we have a shockingly dystopian future ahead of us. Like, Black Mirror level dystopian. The concept of the deepfake has been around for over a year now. In short, people with coding experience were able to use artificial intelligence to copy and paste faces onto other individuals, but in videos. And many of these are very convincing, like this fake Obama video. Unfortunately, most of the people using the tool are pasting celebrity faces onto porn stars. And since the birth of deepfakes, simpler applications have cropped up, giving people without extensive computer knowledge the ability to render fake videos.
Samuel Woolley's The Reality Game documents an online world awash with alternative facts, deepfakes and other digitally disseminated disinformation, and explores how to limit the damage in the future. Woolley uses the term 'computational propaganda' for his research field, and argues that "The next wave of technology will enable more potent ways of attacking reality than ever". Woolley stresses that humans are still the key factor: a bot, a VR app, a convincing digital assistant -- whatever the tool may be -- can either control or liberate channels of communication, depending on "who is behind the digital wheel". Tools are not sentient, he points out -- not yet, anyway -- and there's always a person behind a Twitter bot or a VR game. Creators of social media websites may have intended to connect people and advance democracy, as well as make money, but it turns out "they could also be used to control people, to harass them, and to silence them". Shining a light on today's "propagandists, criminals and con artists" can undermine their capacity to deceive.
Shortly before, a video of US Speaker of the House Nancy Pelosi, manipulated to make her sound as if she were drunk, circulated on the internet. The video, which was even shared by President Trump and members of the GOP, was later verified as a hoax. But with the quality of these videos rapidly improving, DeepFakes have sparked wider epistemological discussions about the future understandings of knowledge and truth.

These developments in AI and machine learning (ML) are taking place concurrently with giant leaps in immersive experience technologies, better known as augmented reality (AR) and virtual reality (VR). Here, I offer a few thoughts on how the marriage of these technologies might transform our understanding of reality and presence.

Many DeepFakes are built on generative adversarial networks (GANs), which pit a network that generates candidate images against a network trained to spot fakes. In this way, GANs generate replicas or realistic manipulations of natural objects and people that look highly realistic. FaceApp, for instance, the controversial mobile app that creates face transformations of photographs, works with GANs. Yet, there is a reason why DeepFakes have sparked so much discussion recently. I spoke to Henry Ajder from Deeptrace Labs, who explained to me that: