Digital Adoption is Transforming Dissent... and Facilitating the Rise of the State

Insight

04 Feb 2021

-

Michal Rozynek

Dissenters of all kinds march out of a laptop computer screen.

In October 2019, a small group of developers in Hong Kong launched an unusual game. Wearing VR headsets, players assumed the role of a peaceful "unnamed protester," pursued by police armed with rubber bullets, tear gas and batons. The game cannot be won: it ends with the protester being incapacitated.

The people behind "Liberate Hong Kong" designed the game to help others understand what it is like to join protests in the city since the 2019 extradition bill polarised it. More recently, the same creators brought the "Liberate Hong Kong" message to Animal Crossing, Nintendo's online social simulation game, which grew rapidly once Covid-19 lockdown measures were put into place around the world, adding three million users in three weeks in Japan alone.

"Liberate Hong Kong" is among growing examples of online dissent, though the phenomenon is not new. Online political activism rose in the 1990s, when the World Wide Web allowed for the use of technology in political action. Early digital dissent largely mirrored offline formats, from online petitions, protests, and electronic civil disobedience, to political hacking (so-called hacktivism). 

Early examples of digital dissent included:

  • In 1995, the Strano Network organised a virtual sit-in against French nuclear policy by overloading government websites with traffic.
  • In 1998, the Electronic Disturbance Theatre mounted virtual blockades in support of the Zapatista movement, deploying automated software (FloodNet) to launch the attacks.
  • Telecomix, a decentralised cluster of activists, ran online and offline campaigns to disrupt state censorship of the internet in many countries, including Egypt and Syria.

It is tempting to reduce dissent to a series of disruptive activities, but it is crucial to remember that dissent is a direct response to the actions of public institutions and structures of authority. In the virtual world, however, the state feels less visible, which is why an increasing share of digital political activism is directed towards non-state, or quasi-state, actors.

For instance, the Pan Collective is a group of hackers that used automated tools on Twitter to send anti-chauvinism messages to individual users the collective deemed sexist. In another well-known case, web blockades were organised against the Euskal Herria Journal to protest the publication's alleged support of the Basque independence movement in Spain.

Online protests against quasi-state, or other private, actors have been comparatively more successful in recent decades than protests directed at governments. A virtual blockade of a government site, or an online petition, remains easier for a government to ignore than a similar protest against a company, since the latter directly affects customer loyalty and shareholder returns. Government infrastructure, moreover, has generally not been digitalised, and so has not been significantly affected by online dissent.

However, this is about to change.

Digital transformation: Driving new forms of dissent


The internet has undergone radical transformation, driven by the demographic and geographical widening of access, the expansion of data and digital capabilities, and social change. The number of connected devices rose almost tenfold between 2009 and 2019, to around 22 billion, and is projected to reach 43 billion by 2023.

Digital dissent is unusual in that the primary technology underpinning it, the internet, is both a tool for mobilising dissent and a source of threats to freedom and security. The future of digital dissent depends mainly on the shape our interaction with the online world takes. In this sense, dissent in the virtual world should be examined through the concepts and technologies that frame online interactions between "cyber-activists" or "political hackers" on one side, and "political trolls" and "cyber troops" on the other.

The future of online dissent is, of course, impossible to predict in the very long term. Still, we can already see specific patterns of change emerging—many of which are being accelerated by the current Covid-19 crisis. 

Pattern 1: Unprecedented expansion of the digital world 

The pandemic is accelerating the move of everything online, from the abandonment of cash payments in favour of electronic ones to digital adoption among groups in society that had not previously caught up with digital transformation.

The lockdowns generated by the Covid-19 pandemic acted as a catalyst for the use of many digital technologies. Daily traffic to Netflix, Facebook and YouTube rose by between 16 and 27 percent, and video chatting apps exploded: Houseparty, which lets people video chat live and spontaneously in groups, saw use rise 79 percent, while Zoom sessions more than tripled over the same period.

As a 5G-enabled future dawns on us in lockdown, we are likely to leave various forms of social distancing behind just as millions of private and publicly networked sensors come online. This will open unprecedented opportunities to combine current and new forms of digital surveillance with behavioural science, ultimately transforming structures of governance and control. The current consent-based privacy regime will struggle in a future where high-fidelity cameras and microphones pick up what we say, see or type, and analyse our emotional responses by tracking facial expressions.

In particular, new IoT infrastructure relies on a growing amount of unencrypted metadata, which, while anonymised, can still be used for surveillance.
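
To make the point concrete, the sketch below, using entirely hypothetical device IDs and zone names, shows how hashed, "anonymised" pings can be linked back to a known individual simply by matching where a device appears against places already associated with that person.

```python
# A minimal sketch, with hypothetical data, of why "anonymised" metadata can
# still be used for surveillance: hashed device IDs are re-identified by
# matching the places a device appears against locations already linked to
# a known person.

from collections import Counter

# "Anonymised" IoT pings: (hashed_device_id, sensor_zone, hour_of_day)
pings = [
    ("a3f9", "zone_home_district", 7),
    ("a3f9", "zone_office_park", 9),
    ("a3f9", "zone_office_park", 17),
    ("a3f9", "zone_home_district", 22),
    ("b71c", "zone_stadium", 20),
]

# Zones already associated with a known individual (home address, workplace).
known_profile = {"zone_home_district", "zone_office_park"}

def reidentify(pings, profile):
    """Score each hashed device ID by how often it appears in the profile's zones."""
    scores = Counter()
    for device, zone, _hour in pings:
        if zone in profile:
            scores[device] += 1
    return scores.most_common()

print(reidentify(pings, known_profile))  # [('a3f9', 4)] -> very likely the known individual
```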

Pattern 2: Unprecedented expansion of the State into digital


In the last few weeks, it’s become evident that we will also see an expansion of the state’s presence in the digital world. This is not just because more services are digitalised, but because we are asked to accept unprecedented levels of state surveillance online, with many countries calling for contact tracing and quarantine control apps. 

This normalises a trend that was already prevalent in both democratic and undemocratic regimes. According to the 2019 Carnegie Endowment for International Peace AI Global Surveillance Index, “at least 75 out of 176 countries globally are actively using AI technologies for surveillance purposes”, including smart city platforms, facial recognition systems, and smart policing.

While advanced democracies are more likely to use AI surveillance (51 percent do), autocratic states are more likely to abuse it. Our new acceptance of the state's presence in our digital lives comes at a time when democratic institutions are under pressure from a combination of growing social inequality and new nationalism. Will fear of the virus turn into a more permanent fear of the other? Many actors are already exploiting such fears. If the world becomes poorer, more unequal and less able to connect on a human level, more tension will emerge.

Pattern 3: Quasi-state order

While experts have focused on the tension between private and public actors, others note the increasingly symbiotic relationship between tech giants, who require state regulation to maintain their monopolies, and governments, who put pressure on technology companies to exercise influence online. This is partly the reason behind concern about the role of private technology companies in national security. It is precisely such concerns that led the German telecoms watchdog to tell parents to destroy the My Friend Cayla "smart doll."

The emergence of patriotic hackers, private actors paid by or aligned with states, is even more visible, with well-known cases ranging from the attacks by Russian hackers on Estonia in 2007 to ongoing China-backed hacker activity in the US. Increasingly professionalised state hackers and cyber armies of trolls, employed either to spread misinformation in foreign states or to control dissent domestically, are becoming the new norm.

One example was the Black Elevation anti-racism movement in the US, which turned out not to exist at all. Yet it managed to organise at least 30 real rallies while, according to a review by the Atlantic Council, manipulating followers into creating tensions within already marginalised communities. Cyber troops can be both human and automated, with "bots" responsible for creating a growing number of fake political accounts.

Dissenters approach a giant smartphone, tipped downward like a seesaw; on the other side, a large corporate hand presses a single finger to the phone's edge.

A new state digital infrastructure 

The expansion of digital, and of the state, provides insight into the changing nature of digital dissent. In a future where all our "offline" emotions, reactions and hopes are trackable or predictable online, and where the state is continuously present in digital interactions through payments, taxes and public health tracking, digital dissent becomes the main form of protest.

One obvious example is electronic voting. In many developing countries, electronic voting is promoted as a way to fight voter identity fraud where large segments of the population lack official government identification; Somaliland, for instance, used iris scanning to identify voters in 2017. However, electronic voting is vulnerable to manipulation, creating new opportunities for political hacking.

It took just minutes for Carsten Schuermann, a computer scientist, to hack a voting machine at DEF CON in 2017. Last year's edition of the hacking event confirmed that voting machines remain fundamentally vulnerable. Voter manipulation is, of course, a trend we also see amongst digital political activists outside of electronic voting, as illustrated by the 2018 Twitter data release on the social media activities of Iranian and Russian trolls around the 2016 US election.

Attempts at online voter manipulation by political activists, political hackers and digital troops have already been deemed successful, thanks to algorithms that target susceptible audiences based on their digital lives. We must prepare for these techniques to become more sophisticated. A new breed of online manipulation is already underway, with the recent rise of neuropolitics consultants who can examine, analyse and predict biological and emotional responses to online content to make political messaging more effective.  

The rise in connected devices will allow machines to use, and eventually create, automated neuropolitics protocols, using a combination of facial and voice recognition to analyse emotions. This could lead to a radicalisation of digital activism: studies show that certain emotions, particularly anger, spread much faster online, and digital platforms exploit that fact.
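
The dynamic is easy to see in a toy model. The sketch below is purely illustrative, with assumed share probabilities rather than figures from any cited study; it simply shows how a modest per-exposure advantage for high-arousal content compounds into a much larger reach.

```python
# A toy simulation, not data from any study: if high-arousal content such as
# anger is shared slightly more often per exposure, it quickly dominates what
# spreads through a network.

import random

random.seed(1)

SHARE_PROBABILITY = {"anger": 0.30, "neutral": 0.15}  # assumed per-exposure share rates

def cascade(emotion, generations=6, audience_per_share=10):
    """Count total exposures when every share reaches a fresh audience."""
    exposed, sharers = 0, 1
    for _ in range(generations):
        viewers = sharers * audience_per_share
        exposed += viewers
        sharers = sum(random.random() < SHARE_PROBABILITY[emotion] for _ in range(viewers))
        if sharers == 0:
            break
    return exposed

print("anger  :", cascade("anger"))
print("neutral:", cascade("neutral"))
```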

In a more distant future, we ought to ask ourselves what level of automation the increasing digital presence of the state will involve. Central government spending on AI has grown at a faster rate (44 percent) than private spending, and over 25 countries have launched national AI strategies. Digitalisation allows prevention to become the predominant form of control.

Just as China prevents citizens from accessing forbidden content by enforcing the use of specific internet service providers, we could see people flagged as "at risk" in public health apps barred from entry or payment in certain locations. Such a system has been proposed in India, where the contact tracing app Aarogya Setu may serve as an e-pass. The French cybersecurity consultancy Defensive Lab Agency claimed the app can access phone sensors, including microphones, opening the door to much more extensive surveillance.

Automating civil disobedience will become possible as more state infrastructure moves online. If, for instance, tax collection is automated as all payments become digital, future protesters could design viruses that block tax collection… or other key services. Even the UK's newly launched Covid-19 contact tracing app opens new possibilities for disobedience (or foreign state interference); we can imagine results being manipulated to reach a political goal.

Deep, shallow, critical: The future of activism 

The future of digital dissent may be less easy to ignore. Whilst the emergence of political gestures in computer games, like "Liberate Hong Kong," or Black Lives Matter messages in Animal Crossing, is perhaps more akin to the politicisation of art, the situation could be radically different in a future where digital is the primary way of organising and executing daily activity.

This opens the possibility of highly effective and disruptive dissent, ranging from blockades of critical digital infrastructure (public or private) to a stronger move towards alternative virtual worlds or networks. In the same way that cryptocurrency was imagined as a form of dissent against global capitalism, we can imagine the emergence of fully virtual environments built for organising dissent. We saw early signs of this in the wide use of FireChat, a decentralised Bluetooth-based mesh messaging app, during the 2014 Hong Kong protests to avoid government surveillance.
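
What makes such tools hard to block is their architecture. The sketch below is a minimal illustration of the store-and-forward "gossip" idea behind decentralised mesh messaging, not FireChat's actual protocol: each phone keeps the messages it has seen and exchanges them with any peer that comes into range, so no central server ever carries the traffic.

```python
# A minimal sketch of store-and-forward mesh messaging (not FireChat's actual
# protocol): messages hop from device to device whenever two peers meet, with
# no central server involved.

import uuid

class Node:
    def __init__(self, name):
        self.name = name
        self.seen = {}  # message_id -> text

    def post(self, text):
        """Create a new message on this device."""
        msg_id = str(uuid.uuid4())
        self.seen[msg_id] = text
        return msg_id

    def sync(self, peer):
        """When two devices meet, exchange any messages the other has not seen."""
        for msg_id, text in list(self.seen.items()):
            peer.seen.setdefault(msg_id, text)
        for msg_id, text in list(peer.seen.items()):
            self.seen.setdefault(msg_id, text)

# Messages hop from phone to phone: A meets B, then B later meets C.
a, b, c = Node("A"), Node("B"), Node("C")
a.post("Assembly point moved to the harbour front, 8pm")
a.sync(b)
b.sync(c)
print(c.seen)  # C now holds A's message without ever contacting A or any server
```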

The trouble is that the digital world is perhaps more fragile: it carries a higher risk that dissent will be hijacked by state and non-state actors.

It is a well-known practice of non-democratic states to control or infiltrate dissidents by creating "provocations," offline and online. Many such manipulations are benign: in 2018, one of Belgium's left-wing parties created a deepfake of Donald Trump urging the Belgian public to withdraw from the Paris Climate Agreement, in order to elicit support. Similar activities, using deepfakes, shallow fakes and selective editing, are increasingly used by political activists to jam mainstream sources and broadcast divisive political messages. But, if exploited by hostile or authoritarian regimes, the practice can pose more serious threats internationally and domestically.

This is because, whilst the technologies underpinning online political manipulation are not new, the consensus that shaped the world order over the past several decades appears to be coming to an end. Only a decade ago, we believed that China's opening to free trade would move its political trajectory in a liberal-democratic direction; those hopes are all but gone. Over the next decade, we will be living in a world that is, if not more hostile, clearly at odds over the values of free speech and privacy. Prolonged competition with China in the digital world could lead to more state interference and further curbs on our online lives.

The potential for long-term disruption, therefore, lies not solely in the emergence of new technological capabilities with unforeseen repercussions; it lies in the fact that, as the digital world transforms, it will simultaneously shift the geopolitical balance and the ideological consensus that underpin the current world order.

Illustrations by Debarpan Das.
