Illustration of woman watching a video on her phone in bed in the dark


Blurring the lines between fact and fiction

At a Glance



  • Video
  • Social Media
  • Fake News

What are DeepFakes?

This study considers the impact of the technologies used to generate and distribute hyper-realistic audio and video fake content, known as “DeepFakes.” Recent years have seen a significant rise in the quantity and the distribution of fake news media and content. The quality and ubiquity of these stories is having a significant impact on democratic systems, corporate integrity and personal safety.

In the last few years, fake content has fueled the virality of prejudiced inaccuracy to the extent that it has directly contributed to everything from measles outbreaks in Europe and the United States, through a bank run in Bulgaria, to market manipulation in crypto currencies, the rise of the alt-right, the mainstreaming of conspiracy theories, and most significantly, the democratic outcomes in the 2016 US Presidential Elections and the Brexit Referendum in the United Kingdom.

The societies we live in and the opinions we hold are increasingly being impacted and even shaped by anonymised malicious actors seeking results or actions that may not be in our best interests. This is a distinctly contemporary threat. The technologies that are freeing us may also be caging us.

New technologies have always presented some level of threat, either real or imaginary. Traditionally, the risk presented by these new technologies could be considered across three vectors:


  1. How great a threat does it pose to society?
  2. How easy is it to perpetrate?
  3. How easy and cheap is it to prevent?

Considering DeepFakes across these parameters demonstrates the scope of the threat:

  1. DeepFakes present a significant systemic risk to the fidelity of the systems and processes we live by.
  2. DeepFakes can be created by anyone with a computer and some time on their hands.
  3. The technological solutions for combating DeepFakes are reactive in every sense. They require the content to be released before they can identify it; they must respond to improvements in the faked content after the fact; and the counter measures tend to be specific, whereas the technology that creates the fakes is general and widely applicable.

It seems clear that DeepFakes are likely to be the cyber threat of the decade. This report will clarify the history of the technology, the contemporary context and its impact, as well as illustrate the various risk factors through design fiction foresight.

It is delivered in three sections, analysing the technology that allows for 1) Creating DeepFakes, 2) Distributing DeepFakes, and 3) Countering DeepFakes.


Part 1

F o o g l e

An overview of DeepFakes

Few technologies have shaped our world in the last decade in the way that the technology used for creating fake content has. From conspiracies to constitutions, prosperity to protests, the propagation of fake news has had a material influence on democracies and ideas.

Previously, these malicious efforts relied principally on a combination of three well-established technologies: 1) The Internet, 2) Social Media, and 3) Chat bots. These three technologies allowed for the immediate and widespread distribution of contrived content, often in an effort to manipulate behaviours, either civic or commercial.

The capacity of this technology was further augmented after 2015 by the improved capacity for user targeting through advances in data collection and analytics. Now tailored content could be created to target specific users at exactly the right time to influence their decisions.

What we have seen in the last three years is only the beginning. The advent of DeepFakes has created a whole new class of pervasive risk that we are ill-equipped to guard against. Hyper-realistic fakes that look, sound and act like trusted individuals will become commonplace.

This new era of untrustable truths began with what we now call “DeepFakes.” The term “DeepFakes” refers to AI-generated, hyper-realistic fabricated video, imagery and audio of people. The result is an image, video or audio recording of a real or artificially generated person saying or doing something that never occurred.

DeepFakes pose a profound new personal, political, security and financial risk to all of us. An individual can be made to appear committing a crime they had nothing to do with. A politician can be made to appear corrupt. A CEO could be shown to announce quarterly results that were untrue.

How did DeepFakes begin?

Scientists and engineers have been working on various benign forms of data fabrication for some time. We have seen variants of the technology being used in smart phone filters and social media for face swaps; in puppetry or facial reenactment in the film industry; in audio fabrication in the music industry, among others.

The term ‘DeepFakes’ - a portmanteau of “deep-learning” and “fakes” - first appeared on social content platform Reddit in 2017 when a user named “Deepfakes” began to post pornographic clips of celebrities in compromising positions. The virality of the posts brought the term to the mainstream, but with it came the understanding that the technology could be used against any of us in a similar way.

Contemporary Context: A Post-Truth Age

We live at a time when fake news, chatbots, clickbait, troll and click farms, and social media are having a material impact on our lives, politics and societies. A contrived headline can travel around the world and back again in a moment, unchallenged and unstoppable. The time it takes to check the fidelity of the statement is too long to catch the same news cycle. By the time a correction can be issued, the audience has moved on and the falsehood is now assumed to be fact.

An analysis of fake news stories in the United States in 2018 shows that these stories get millions of views and are actively shared through social media platforms, principally Facebook and Twitter. The majority of stories have political implications and are designed to be divisive and controversial. Common subjects of these stories are the Clintons, George Soros, immigrants and Democrats in the US.

Viewed in isolation, many of these stories can be dismissed as evidently false, yet stories that are purposefully designed to target specific users and reaffirm their own beliefs are far more difficult to refute, especially when users see them being affirmed by their immediate network. It is entirely possible, due to the mechanics of digital social filtering (where websites or apps will push you towards content or people you agree with), that the consumers of this news may never even interact with a contradictory point of view.
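The mechanics of that social filtering can be sketched with a toy, hypothetical recommender (invented numbers, not any platform's actual algorithm): of 500 stories spread across an opinion spectrum, a feed that simply ranks stories by closeness to the user's current stance exposes a dramatically narrower range of viewpoints than an unfiltered one.

```python
import random
random.seed(3)

# 500 stories, each with a stance on a -1..1 opinion axis (invented data)
items = [random.uniform(-1, 1) for _ in range(500)]
stance = 0.3                                  # the user's current position

def spread(feed):
    """Range of viewpoints the feed exposes the user to."""
    return max(feed) - min(feed)

random_feed = random.sample(items, 10)        # unpersonalised: any 10 stories
filtered_feed = sorted(items, key=lambda s: abs(s - stance))[:10]  # the 10 most agreeable

print(f"unfiltered viewpoint spread: {spread(random_feed):.2f}")
print(f"filtered viewpoint spread:   {spread(filtered_feed):.2f}")
```

The filtered feed clusters tightly around the user's own stance, which is the structural reason a contradictory point of view may simply never appear.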

These stories have begun to have very significant real-world impacts.

In 1998, Andrew Wakefield published a report claiming that the Measles, Mumps and Rubella (MMR) vaccine was linked to instances of autism in children. No other researcher was ever able to replicate his results, and an ethics review later found that Wakefield had a financial conflict of interest and had falsified the data.

Despite this, many parents across the United States and Europe began to exhibit “vaccine hesitancy” - they were either reluctant or refusing to vaccinate their children. The United States declared measles eliminated in 2000, but as a result of the “Anti-Vaxxer” movement, the disease has returned, affecting 667 Americans in 2018. We have also seen a quadrupling of measles cases in Europe in the last year.

A further consequence is a global increase in attacks on healthcare workers related to vaccinations against various diseases, including polio. So serious is this issue that the World Health Organisation (WHO) has listed vaccine hesitancy as one of its global threats for 2019.

Perhaps the most significant impact of fake content is the role it plays in cyber warfare, where certain nation states and/or corporate interests try to undermine, divide or disrupt civil society through the distribution of targeted fake news designed to advance belligerent political agendas or financial gain. In Europe, this has been most notable in Great Britain in the lead up to the “Brexit” referendum, where fake news stories and mistruths were so ubiquitous on the pro-Brexit side of the debate that news broadcasts and politicians regularly referenced fake facts as if they were true.

Other examples of how coordinated fake news produces extreme real-world events through manipulation and virality are the Pizzagate incident in Washington, D.C., and the QAnon conspiracy (see Digital Anthropology section in this study).

These instances of fake news are rudimentary in form and can be easily refuted, should one wish to interrogate them. But what if the message wasn’t found on the side of a Facebook page, but instead was delivered in a video by your most trusted statesperson and further supported with messages from society’s great and good: sports people, celebrities, business people? How would you know? Is it their voice, their face, their mannerisms? What is fake and what is real have become indistinguishable. What if the message delivered was racist, violent or belligerent? What if a message, seemingly from the President of the country or the Minister of Finance, issued a warning on bank liquidity? Or an invasion? Or a contrived event aimed at stoking tensions?

Applications of DeepFakes

There are four major areas where this technology is becoming prevalent: 1) Imagery, 2) Video, 3) Audio, 4) Writing. All of these personally identifiable characteristics can be replicated by this technology.


The technology for creating video and imagery of real people is relatively simple to use. Anyone with a reasonably powerful GPU can feed a few hundred images or videos of a target into a machine learning model called a Generative Adversarial Network (GAN) to create realistic fake imagery or video of the individual. Minimal computing skills are required.

DeepFakes require cooperation between multiple technologies, most notably machine learning and neural networks. However, the specific technology that enables the creation of hyper-realistic fake content is the Generative Adversarial Network (GAN). GANs are a class of machine learning algorithms used in unsupervised learning (a branch of machine learning that does not require labeled data), implemented as a pair of neural networks contesting with each other. The technique was introduced by researcher Ian Goodfellow in 2014. Goodfellow now works at Google.
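The adversarial dynamic at the heart of a GAN can be sketched with a deliberately tiny, hypothetical example (not the architecture any real DeepFake tool uses): a two-parameter "generator" learns to mimic a simple number distribution, while a logistic-regression "discriminator" learns to tell real samples from fakes. Production GANs replace both with deep neural networks, but the training loop - each network improving against the other - is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 1). The generator must learn to fake them.
w, b = 0.1, 0.0        # discriminator D(x) = sigmoid(w*x + b)
a, c = 1.0, 0.0        # generator G(z) = a*z + c, with noise z ~ N(0, 1)
lr = 0.05
c_trace = []

for step in range(2000):
    real = rng.normal(4.0, 1.0, 32)
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + c

    # Discriminator step: raise D on real samples, lower it on fakes
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w -= lr * np.mean(-(1 - d_real) * real + d_fake * fake)
    b -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: shift the fakes to where the discriminator scores them as real
    d_fake = sigmoid(w * fake + b)
    a -= lr * np.mean(-(1 - d_fake) * w * z)
    c -= lr * np.mean(-(1 - d_fake) * w)
    if step >= 1500:
        c_trace.append(c)   # record the generator's learned centre late in training

learned_mean = float(np.mean(c_trace))
print(f"generator mean after training: {learned_mean:.2f} (target 4.0)")
```

The generator never sees the real data directly; it only receives the discriminator's feedback, which is exactly why the technique generalises to faces, voices and video once the networks are large enough.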

The original technique was capable of generating photographs that look authentic to human observers. In 2018, researchers at Stanford University further developed a technology called deep video portraits, which allows a computer to transpose the expressions or actions of one person onto another person, for example making someone in a video smile or frown.

At the same time, researchers at the University of California, Berkeley used GANs to develop software that could transpose whole actions from one person to another. A person who can’t dance could suddenly do ballet; someone who can’t speak sign language could appear in a video signing perfectly.

Again in 2018, technology firm Nvidia announced that it had developed a way to create hyper-realistic imagery of non-existent humans, using what they call a “style-based generator.”


Significant advances were made in 2018, most notably by Google, in creating human-sounding voice AI. These AIs can act as your digital assistant, liaising with people on your behalf, without anyone knowing that it is a machine.

A number of companies have made significant advances in generating human voice outputs. There are many reasonable use cases, most notably in digital assistants like Siri or Alexa, where the user feels more comfortable engaging with a human voice rather than what is distinctively a robot. People engage and develop loyalty to the personality imbued in the voice itself.

In 2016, Adobe, the creator of Photoshop, unveiled a prototype voice editing tool called VoCo. The tool needs just 20 minutes of voice data to recreate any message in that person's voice. In effect, that means it can generate any voice message in the voice of prominent politicians, business people, journalists, celebrities, etc.

The company Lyrebird have taken this even further by allowing you to create your own voices. This is useful for people who want to create chatbots, personalise audiobook readers or maintain voice privacy online, among others.

In 2015, a company called Replika (originally Luka) developed a service that uses voice data and AI to recreate loved ones after they die. The AI uses voice samples and data inputs from e-mails and videos of the deceased person to have conversations in their voice after death. The company recently pivoted towards creating digital companions.

There are many innovators in the space, but undoubtedly it is Google that have been working at it the longest and have made the most meaningful advances. Google have long held a belief that voice AI is the significant chasm that needs to be crossed for AI to really take hold of users’ lives. As far back as 2007, Google launched a directory enquiry service called GOOG-411, a free tool for connecting people to phone numbers they needed. This allowed Google to analyse enormous amounts of voice patterns and ultimately fed into the voice capabilities of the Pixel phone. This resulted in controversy when it became clear that Android phones were recording the conversations of people close to the device (you can delete some of these conversations in your account settings).

Google have now moved a step further by combining their digital assistant capabilities with their voice technology to generate realistic human voice AI. The product is called Duplex and is already live. You may well be receiving calls from robots who sound like people hoping to set up a meeting with you.


A small sample of a person's handwriting can produce a never-ending script of text in perfect replica. Signatures are now easily fabricated. Natural Language Processing (NLP) - the science of interaction between humans and computers through natural language - in combination with these neural networks can now create chatbots that mimic the writing style and logic of an individual.

Companies like Adobe have been providing real digital signatures for some time now, but they have traditionally required you to generate it yourself and input the data through a photograph. In 2016, scientists at University College London developed software that could mimic a person's handwriting perfectly. This would allow users to increase the personalisation of their correspondence or work by typing in their own handwriting.

In parallel, Chicago-based company Narrative Science are developing and using NLP tools that can create text in the tone of voice of a company or an author. This may be the only way you ever get to read an ending to the Game of Thrones series.

Earlier this year, researchers at OpenAI decided against fully releasing a text-generating AI due to concerns about how dangerous it might be in the wrong hands. The AI can generate coherent paragraphs of text indistinguishable from human-created content. It can also perform rudimentary reading comprehension, translation, question-answering and summarisation without the need for task-specific training. These AIs can reasonably be expected to replace many content-creating and reporting roles in the next decade.
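The underlying idea of style mimicry - predict the next word from the preceding context - long predates these neural models. A minimal word-level Markov chain, shown here as an illustrative toy rather than how OpenAI's system works, already generates new text in the "voice" of whatever corpus it is trained on:

```python
import random
from collections import defaultdict
random.seed(0)

# Tiny training corpus standing in for a target author's writing (invented text)
corpus = ("the ministry announced today that the budget will grow "
          "the ministry announced that growth will continue").split()

# Learn which words follow which: a first-order Markov chain
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# Generate new text by repeatedly sampling a plausible next word
word, out = "the", ["the"]
for _ in range(8):
    if word not in model:      # reached a word with no observed successor
        break
    word = random.choice(model[word])
    out.append(word)
print(" ".join(out))
```

A neural language model does the same job with a far richer notion of context, which is why its output can sustain coherence across whole paragraphs rather than a few words.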

New and emerging technology ecosystems in imagery, audio, text, social and artificial intelligence have lowered the barrier to entry and enabled the creation and distribution of DeepFakes at scale. Source: L’Atelier BNP Paribas.

DeepFake Drivers

The surge in DeepFakes is spurred by both academic investigation and hobbyists’ imagination. Key drivers include:


There are real commercial opportunities for this technology that are driving further development. Technology capable of creating hyper-realistic audio-visual content will redefine both the format and economics of the entertainment industry, as well as produce a more sophisticated generation of digital assistants. AI that you can have a relationship with will lead to significantly more user openness and dependency, which will ultimately give the companies providing this software far greater access to individual data.


There can be a real advantage to being able to create anonymous avatars of yourself for your digital life. You can look and sound however you like in your interactions with others. People will be incentivised to store digital files of personal characteristics so that an AI replicant can be developed in the event of their death.


The reality of the political threat posed by DeepFakes means that it has to be incorporated into the strategies of governments and politicians. We are very close to the time where we will see the first major cyber attack on a country or individual using DeepFakes.


As with many innovations on the Internet, the porn industry is an early and industrious mover in this space. Previously, the porn industry was one of the quickest markets to adopt online presences, e-commerce, streaming services, cryptocurrencies, Virtual Reality (VR), AI-based recommendations, etc. Many emerging technologies can’t really claim to have made it to the mainstream until they’ve been used either in crime or in pornography. This sector will continue to push the boundaries of both what’s possible and what’s ethical.

Illustration of a woman talking to camera surrounded by social media icons


Part 2


/df/ Distribution

Spreading Fakes

Advances in automation and social networking have allowed DeepFakes to spread with greater speed, reach, volume and coordination. Social media, in particular, has been fundamental to the distribution and evolution of content online.


Social media platforms enable information flows to scale exponentially through network reach. Platforms tend to generate more value as they scale, attracting more participants, which creates more value. The volume of interactions on the platform creates a virtuous feedback loop: demand draws supply (or vice versa), which draws demand, and so on.
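One common heuristic for that scaling effect - Metcalfe's law, offered here as an illustrative assumption rather than something the study measures - is that a network's potential pairwise connections grow quadratically with participants, so each new user adds more potential value than the last:

```python
# Metcalfe-style heuristic: n participants can form n*(n-1)/2 pairwise links,
# so connection capacity grows quadratically while membership grows linearly.
def potential_links(n: int) -> int:
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"{n:>5} users -> {potential_links(n):>7} potential connections")
```

A hundredfold increase in users yields roughly a ten-thousandfold increase in possible interactions, which is the structural engine behind the feedback loop described above.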


The emergence of social botnets has played a significant role in augmenting the speed of spread on social media, particularly by triggering trending algorithms. The virality of interest and consumption can spread a message across time zones long before there is a chance to corroborate or validate the source.

A botnet is a synchronised network of remote accounts (bots) that are controlled centrally by servers. There are numerous types of bots - from commercial bots that promote click traffic, through customer service and transaction bots, to malicious bots spreading spam, disinformation and malware.

Social botnets refer to the coordinated use of many social media accounts, programmed to specific tasks, such as generating simple comments supporting a specific agenda, or retweeting or liking content created by real users who advocate for that agenda. This allows the controller of the botnet to generate a large volume of posts over a short period of time, creating the illusion of substantial organic information flows by real people.
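A toy simulation (with invented numbers) shows why the coordination matters: a modest botnet firing in the first seconds after publication can dominate the early-engagement signal that trending algorithms watch, even when organic readers vastly outnumber the bots over the full hour.

```python
import random
random.seed(1)

# Invented parameters: 500 coordinated bots vs 5,000 organic readers
N_BOTS, N_ORGANIC = 500, 5000

events = []
for _ in range(N_BOTS):
    events.append(("bot", random.uniform(0, 30)))        # all bots fire within 30 seconds
for _ in range(N_ORGANIC):
    events.append(("human", random.uniform(0, 3600)))    # humans trickle in over an hour

# What a velocity-weighted trending algorithm sees in the first minute
first_minute = [who for who, t in events if t <= 60]
bot_share = first_minute.count("bot") / len(first_minute)
print(f"first-minute engagements: {len(first_minute)}, from bots: {bot_share:.0%}")
```

The humans outnumber the bots ten to one overall, yet the bots supply the overwhelming majority of the first minute's engagement - enough to manufacture the appearance of organic virality.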

Some bots on social networks are benign, programmed to do simple things like deliver news, check spellings or link the user to a Wikipedia page. But many are designed to be malicious. Malicious activity takes many forms, for example:


Using hashtags related to a topic to comment on unrelated topics, thereby diluting the conversation about the primary topic. For example, a popular protest could be undermined through thousands of bots deliberately swamping the social media presence related to that activity, making it difficult for users to find and irritating those keeping track of it.


Using tags and hashtags for a popular topic to direct people to another cause - for example, people searching for #worldcup see many tweets about the Earth being flat.


Swamping a public debate online with a particular point of view, for example pro-Trump or pro-Brexit. The appearance of significant support can at times generate more support through the illusion of philosophical equivalence.


Botnets permeate the web. In 2017, scientists at Indiana University and University of Southern California estimated that between 9% and 15% of active Twitter accounts are bots. A study by scientists at Rice University later in 2017 put the number at up to 23%.

Beyond social media platforms, botnets have created high-volume, high-margin, highly sophisticated illegal businesses through spamming campaigns, click frauds and bank frauds, among others. A 2018 study by the University of Twente of the business models of botnets shows that a botnet linked to 10 million devices might cost $16 million to set up, but could generate $20 million per month from click fraud. A bank fraud operation with 30,000 bots can generate more than $18 million per month, according to estimates.
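Taking the study's figures at face value, the payback arithmetic is stark:

```python
# Payback sketch using the Twente study's numbers as quoted above
setup_cost = 16_000_000        # one-off cost of a 10M-device botnet, USD
monthly_revenue = 20_000_000   # click-fraud revenue per month, USD

months_to_break_even = setup_cost / monthly_revenue
print(f"break-even after {months_to_break_even:.1f} months")

# The quoted bank-fraud operation is even leaner: 30,000 bots, $18M/month
bank_fraud_monthly = 18_000_000
revenue_per_bot = bank_fraud_monthly / 30_000
print(f"bank fraud revenue per bot: ${revenue_per_bot:,.0f}/month")
```

An operation that recoups its entire setup cost in under a month explains why these businesses persist despite takedown efforts.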


Advancements in machine and deep learning have enabled botnets to complete increasingly sophisticated tasks. Particularly in the last two years, botnets have moved from predominantly automated spam accounts to more complex tactics, including:

  • Ability to target specific individuals or groups with high precision. Some bots, sock puppet accounts (numerous accounts controlled by a single person) and cyborgs (people who use software to automate their social media posts) have become very effective at propagating information. In some cases, there are communities of interconnected viral bots and cyborgs that share each other’s posts and use retweet, mention and other strategies to interact with target groups
  • Ability to create virality by targeting specific moments of the information life cycle, particularly the first few seconds after an article is first published
  • Capacity to use meta data on social platforms - particularly from user-generated content like written text - to better mimic human behaviour and avoid detection
  • Capacity to target and amplify human-generated content rather than automated content because the former is more likely to be polarising
r/deepfakes

The making of new digital cultures



Fake content has led to the emergence of a number of digital communities, especially within the increasingly large conspiracy and political activism communities. However, that is not where DeepFakes came to our attention. As is the case with much of Internet culture, good and bad, it began on Reddit, the self-referential “Front page of the Internet.”


Some 350 million users come together on Reddit to form communities on topics from the mundane to the eccentric, to engage in conversation, debate and share content. It is both an extraordinary resource and platform for the development of digital culture. Reddit is the Wikipedia of digital anthropology, the after-school hangout for the digitally native.


The platform has frequently courted controversy for the conversations and communities it facilitates. Indicative examples include communities for facilitating the sale of drugs, guns and sex, as well as the sharing of violent, misogynistic, racist and paedophilic content. Reddit was also the source of a number of other controversial communities and pieces of Internet folklore, including Gamergate, Incels, FindBostonBombers, The Fappening, Chimpire and Pizzagate.


Though Reddit continuously tries to clean up its communities, it is, by its nature, reactive. A feature of the Internet's transience is that once a community is banned on one platform, it tends to re-appear immediately elsewhere, often in worse form. It is highly likely that Reddit will continue to be the launchpad for DeepFakes, both positive and negative.


Part 3


Sunday, 14 July 2019
  • World
  • Europe
  • Politics
  • Business
  • Tech
  • Finance
  • Opinion
  • Science
  • Video

From Sci-Fi to Sci-Facts

New opportunities for the technology

These technologies will also create significant new commercial opportunities, in particular for the entertainment industry. Film studios will produce new films starring long-dead stars. Imagine new blockbusters with Marilyn Monroe or Jean Marais. We will also see entirely computer-generated new actors, who never age and whose rights a studio owns in perpetuity. No unions to worry about, no actor fees, no poor performances or controversies. Similarly, we can expect to hear new music from artists of a bygone era in their voice, as well as music created for computer-generated avatars.

Digital Assistants

Facial recognition, imaging and manipulation, in combination with human-like verbal and audio capacity, signals a significant shift in the usefulness of Digital Assistants. Assistants can now be customised to the extent that people can develop sentimental attachment to their AI. The AI can develop quirks and idiosyncrasies in order to optimise the connection between human and agent, thereby increasing usefulness whilst at the same time developing the trust capacity required to mitigate loneliness and isolation for many people. It can also improve guidance, advice and decision making assistance for the underconfident, or customised learning for people who consume information in non-standard ways.

After Death AI-facilitated Regeneration

AI-facilitated regeneration is the capacity for AI to reconstruct avatars of dead loved ones or figures of interest after death. This could, for example, allow a spouse to maintain a relationship with a deceased partner (particularly relevant in the case of elderly people who may live alone), a child to get to know a parent they never met, or even give a student the chance to learn physics from Einstein, philosophy from Voltaire, chemistry from Marie Curie, etc. Imagine FaceTiming with Luciano Pavarotti for your singing lessons. There will continue to be a demand for engagement with the deceased. This technology allows for realistic visual and audio reconstruction of those who have passed away.

Anonymous Therapy

Many people decline therapy because they feel uncomfortable acknowledging the need for it. They feel accountable to the therapist and sometimes even feel that the therapy diminishes them in some way. This technology allows for people to create anonymous avatars of themselves that they can use to interact online. These are new digital identities, divorced from the real version of them. People, especially military personnel and those who engage in stressful occupations, can use this capacity to create avatars in order to engage with medical professionals on a fully anonymous basis. They can give or receive therapy without consideration for judgement or accountability, using a constructed face and voice.

Solving For Voice Loss

Human voice synthesis can create new human voices. This technology has significant benefits for people who have lost the ability to speak. The technology was made most famous by Stephen Hawking, but improvements now mean that individuals could soon have their original voice recreated as well.



The entertainment industry will be significantly impacted by this technology. There will certainly be concerns about image rights and copyright due to the ease of imitation. Fans will be able to create their own spin-off fictions using the studio characters. But instead of it being in written or comic book form, it may well be in hyper-realistic live action using the actual cast of original films.

The corollary will be the ease with which film studios can generate new hyper-realistic actors to star in their films - actors they do not have to pay, manage or negotiate union terms for.

Instead, the studios will own the rights to that digital actor in perpetuity. The actor's voice, mannerisms and behaviours can be edited for the geography and cultural norms of the audience, such that watching the same film in Stockholm and Singapore will mean seeing the actor give two distinct performances in two different languages, customised for two separate audiences.

The same can be said for musicians, live-action gaming and customised content. Entertainment will be changed fundamentally.

The relationship between producers and consumers will have to alter as well in order to reflect that. For example, this could lead to frequent co-ownership of characters, where a fan creates a new character for a series that the producers like and subsequently license from them.

Indeed, if you are willing, you could even be the star of your own movie. You could be the hero or heroine, the villain or a bystander, your face swapped into a blockbuster movie. This also provides opportunities for advertisers, who may want to swap their spokesperson into, for example, a trailer for general distribution. The possibilities are endless.



The demand for translation services becomes more acute as the world becomes increasingly interconnected. Significant improvements have been made in this space in recent years, most notably free-to-use services like Google Translate and now Google Pixel Earbuds. The next step, enabled by this technology, is integrating automatic translation into media.

Examples include entertainment content where a film can be distributed in every language without any impairment to the visual appearance. The actors will adjust their pronunciation and mannerisms as appropriate to make it look real. But even more significantly, this is likely to include AI staff.

Imagine an international business with offices in 14 distinct regions holding an important video conference for senior executive staff who speak different languages.

The AI in this case would be designed to look like a real person with a real voice. They may even hold a real title. It might not even be made apparent to people that the presenter is a bot. Once the presentation begins, the AI will present the content to each audience in their preferred language. They will answer questions in that preferred language and will behave in a way that is appropriate for that region. This will have a radical impact on staffing decisions, location decisions and communications efficacy.

Ethics & Consequences Of Adoption

It is important to consider the ethical implications of all new technologies. As the world is changing faster than ever, it can be easy to overlook the potential costs, implications and future risks imposed by widespread adoption of emerging technologies. We can become blinded by the utility provided by the technology, while the costs may be far less apparent.

DeepFake technologies provoke an abundance of new risks and ethical concerns for people and nation states. One of the most notable concerns is the idea of consent. Should someone be allowed to use your image without your consent, even without financial gain? DeepFakes entered the public consciousness because hobbyists were using the technology to create realistic sexually explicit images of predominantly women who had not consented to their likeness being used. In instances of “revenge porn,” these videos can be particularly traumatic for the victim. Aside from the emotional impact, these videos can also impair the victim's reputation, brand or credibility, reducing their earning capacity or influence over time. This practice, whilst clearly unethical, is often not illegal, and it effectively illustrates the struggle of legislation to keep pace with innovation.

Technology is indifferent to the way it is used. It is, after all, just a tool, and tools can be used for good or for ill. We try to mitigate the malicious use of these tools through legislation, but we have reached a point where technology is evolving faster than effective legislation can be developed. Below we identify some of the areas where issues might arise as a result of these technologies by the year 2025.

Posthumous IP Rights

This technology can be used to reanimate dead celebrities. Michael Jackson could release a new single or John Wayne could be in a new film. This raises questions about who owns these IP rights and how entertainers should look to manage their estate after death. But what about the rest of us? If our image or our voice is used after our death, should our estate benefit? Should we be able to bequeath future data revenues to inheritors?

Justice System

Videos, images and audio files are no longer deemed reliable evidence within criminal justice systems. The burden of proof has increased significantly, resulting in declining conviction rates across democracies. Commentators and the general public are now arguing for harsher sentences and lower thresholds for convictions.

It is plausible that legislation will expand the rights of the government to track and monitor citizens through their devices, Internet usage, social media, drone footage, biometrics, payments and stationary camera infrastructure.

Parliament is discussing the possibility of mandating that people have digital profiles, prohibiting people from living exclusively analogue lives and limiting cash withdrawals and usage to small amounts.

Identity Theft

It has become remarkably easy to steal someone’s identity. Given access to footage of their likeness, anyone with a modern laptop can create convincing fake content of that person, or an avatar they can use to scam friends, family and acquaintances. Scam video calls have replaced traditional scam e-mails. People now receive panicked video calls from loved ones asking for emergency funds to be deposited in peculiar accounts. These hyper-real facsimiles are indistinguishable from real people. In the event that the victim’s e-mail or texts have already been hacked, the fake caller will also be able to refer to personal context or memories.

Advice Regulation

The ubiquity of digital assistants has made the regulation of financial advice increasingly complex. Who is responsible when an AI gives advice: the AI’s creator? The device manufacturer? The data provider? Or does the burden of action in personal finance fall squarely on the shoulders of the individual?


State-sponsored news channels begin to use computer-generated news presenters to ensure that there is no human interference with government messaging. The role of journalists and the free press is rapidly declining.


The public is becoming increasingly cynical about content that appears online. Trust in traditional broadsheets is rising, not least because people are aware of the lag between content production and delivery. They understand that there is the possibility for correction, should a mistake occur.

Looking Ahead

A Conclusion on DeepFakes

While new technologies have always carried a cost in their wake - be it physical, emotional, social, cultural, industrial or ethical - the speed of that change was slow enough to allow society a chance to comprehend the impact and mitigate it through cultural, economic and legislative initiatives. In the cases of sterilisation, nuclear war, eugenics and other complex issues, society often had decades to contemplate the moral quandaries. So long, in fact, that it was often not until the next generation that a solution was implemented. We built societies that encourage unbounded innovation with the understanding that innovation would always be manageable.

We have left the era of mitigable change behind us. We have passed the point of human control. We are past the point where the great technological assets of a time are held by a small few government and corporate agents. Past the point where technology can prevent that which technology impairs. Technology has not yet become self-generative, but the speed of its evolution is such that the average person can no longer comprehend the implications and the complexity of the technology they use, and legislators are too slow and too poorly equipped to prevent the damage that unbounded technological advancement can cause.

In the near future, it seems unlikely that we will be able to rely on human senses alone to establish the fidelity of our experiences. Seeing or hearing something no longer makes it true. What is fake is now real until proven otherwise. We will need assistance. It is likely to arrive in the form of an increased dependence on connected devices, like augmented reality contact lenses, connected to digital agent AIs that leverage neural nets and massive multi-user data engines that help us to corroborate veracity in the moment.

This does, of course, raise important questions about how we solve for systemic threats that are low complexity, low cost and quick to produce. How do you protect yourself in a world where everyone owns a technological rocket-launcher? The temptation is to watch everyone. Observe and predict. Under the guise of security, counter-terrorism and defence initiatives, we slowly have our independence stripped away. Threats beget threats, each more sophisticated than the last, until eventually we live in data dictatorships, systemically incapable of protest.

DeepFakes are unique in that they are both understandable to the average person and profoundly impactful on the world we live in. There are genuine commercial and social benefits that the technology provides, which will see it continue to rapidly improve. Consequently, we can foster genuine public debate on DeepFakes across legislation, journalism, academia and the media. This will provide us with an opportunity to discuss the costs, the benefits and the ethics of the technology in a meaningful way for the first time in a generation. That is, of course, as long as you can believe what you’re seeing.

Read More


Trump tweets edited video of Nancy Pelosi

CBS This Morning

You won't believe what Obama says in this video!


Using AI DeepFake techniques to bring Salvador Dali back to life


Google Duplex: AI Assistant calls local businesses to make appointments

Jeffrey Grubb

Company created AI, sounds exactly like Joe Rogan

Jamin Laney

This AI can clone any voice, including yours