We live in an era of unprecedented politics, and not simply for the reasons you may be thinking of. In a world where 3.5 billion people have at least one social media account, our data and online behaviour have become a valuable commodity when it comes to electioneering – as seen in the strategy of the Leave campaign in the 2016 Brexit referendum.
European Commission fights against disinformation
At the end of October, the European Commission published the annual self-assessment reports of signatories to the Code of Practice on Disinformation 2019, which include the technology giants Facebook, Google and Twitter. According to the Commission, the reports “indicate comprehensive efforts by the signatories to implement their commitments over the last 12 months” but “further serious steps by individual signatories and the community as a whole are still necessary”. The Commission has repeatedly used this language in its press briefings on the topic in recent months: a clear signal that alienating the companies would do little to help the problem, and that tangible progress is coming at a snail’s pace. The Commission noted that action is lagging behind industry’s commitments, especially on empowering consumers and the research community, disrupting advertising and monetisation incentives for purveyors of disinformation, and on the transparency of political advertising. The institution has recently launched a call to create the European Digital Media Observatory, a platform for fact-checkers, academics and researchers to collaborate in tackling disinformation.
Social media giants
Sir Julian King, the outgoing British Commissioner, has said that there remains a “disconnect” between the claims of progress from social media companies and “the lived experience”. Facebook now labels advertisements with the name of the purchaser and has received praise for doing so. More recently, however, the social media giant has been in the eye of the storm ahead of next year’s election in the US, with the company announcing that it will not fact-check paid political advertising on its platform. In practice, this would allow false claims by politicians to remain on the site without scrutiny or adverse consequences. Facebook’s CEO Mark Zuckerberg has been praised by some for defending free speech, but 250 Facebook employees have written to their boss denouncing his decision, calling the move a “threat to what Facebook stands for”. In October, Zuckerberg appeared before the US Congress and was questioned about his move, with Alexandria Ocasio-Cortez’s scrutiny of the CEO underlining the extent of disinformation this policy could allow. In November, Nick Clegg, Facebook’s policy chief, conceded that the company may place limits on targeted advertisements.
Meanwhile, Twitter chose an alternative approach by announcing it is banning political and “issues” advertisements. CEO Jack Dorsey said, “paying to increase the reach of political speech has significant ramifications that today’s democratic infrastructure may not be prepared to handle”. The policy has received broad support: it prohibits the targeting of political messages to specific audiences as seen in the Brexit referendum campaign. However, it also requires delineating a grey area: this time the question is not what is harmful, but what is political. Referencing another potential problem with the company’s new policy, Taoiseach Leo Varadkar has said he has “mixed feelings” about the announcement, noting that “part of what [politicians] do is advertising”.
In the run-up to May’s European elections, Facebook set up a so-called “war room” in Dublin to monitor Facebook, Instagram and WhatsApp activity related to the elections, staffed by 40 employees including experts in security and disinformation, and supported by 100 others around the world. Such action is welcomed by the Commission, which has been keen to emphasise that it wants to work with the tech giants, who have the power and competency to responsibly regulate their platforms. The Commission reviewed the elections in June and stated they were not free from disinformation, seeing it as a long-term challenge for the EU.
This week Google responded to the growing pressure on tech platforms by announcing it was limiting the targeting of political ads to general categories such as age, gender, or postal-code-level location. The company will begin enforcing the changes in the UK within a week, in time for next month’s general election, with a broader rollout throughout the EU by the end of the year and the rest of the world on 6 January.
Patrick Cosgrave, the Irish founder of the Web Summit, has suggested that a state “censoring agency” for social media could help resolve the problem of online hate speech. Asked about US Senator Elizabeth Warren’s vow to “break up big tech” should she win next year’s presidential elections, Cosgrave compared the digital giants and their technology to the invention of cars at the end of the 19th century – they have a potential for danger but are of overall benefit to our society, and necessary regulation can mitigate the risks. Given that most countries have a regulator for media broadcasters, advertising, and data protection, Cosgrave’s suggestion is fitting.
Disinformation and extremism
We have also seen a proliferation of terrorist content online, which can be fuelled by disinformation, hate speech, and extremist views. Following recent terrorist attacks, the lack of action by platforms such as YouTube and Facebook was notable: videos of the New Zealand massacre remained online, with the platforms explaining that they were being uploaded faster than they could be taken down. The European Parliament, Council, and Commission are seeking to agree rules on the removal of terrorist content online. The Parliament is unsure about backing proactive measures to remove content, such as eGLYPH, the hashing technology developed by the Counter Extremism Project, which can automatically prevent the re-uploading of verified extremist content. The Commission and Council hope an agreement will be reached by the anniversary of the Strasbourg terrorist attack on 18 December.
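The mechanism behind such re-upload blocking can be sketched simply: once a piece of content is verified as extremist, its fingerprint is stored, and any new upload matching a stored fingerprint is rejected. The sketch below is illustrative only, not eGLYPH’s actual algorithm — eGLYPH uses robust hashing that survives re-encoding and minor edits, whereas this example uses a plain cryptographic hash, which only catches byte-identical copies; the class and method names are hypothetical.

```python
import hashlib


class UploadFilter:
    """Illustrative hash-based re-upload blocker (not eGLYPH's real algorithm)."""

    def __init__(self):
        # Fingerprints of content already verified as extremist.
        self.blocked_hashes = set()

    def _fingerprint(self, content: bytes) -> str:
        # Production systems use robust/perceptual hashes that tolerate
        # re-encoding; SHA-256 here matches only exact byte copies.
        return hashlib.sha256(content).hexdigest()

    def block(self, content: bytes) -> None:
        """Register verified extremist content so future uploads are refused."""
        self.blocked_hashes.add(self._fingerprint(content))

    def allow_upload(self, content: bytes) -> bool:
        """Return True if the upload does not match any blocked fingerprint."""
        return self._fingerprint(content) not in self.blocked_hashes
```

In practice, the hard part is the fingerprint function itself: exact hashing is trivially evaded by re-encoding a video, which is why perceptual hashing is the focus of systems like eGLYPH.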
Von der Leyen’s Commission
The tech giants’ apprehension around regulation may be heightened by the incoming European Commission. Ursula von der Leyen has emphasised that the EU puts “values, rights, trust and the rule of law above all else” and that this should also apply to the digital age. Her statement that “new technologies will never mean new values” is a clear suggestion that she will seek to tackle the disruption new technology has brought for our democracies. She has said that the EU has only taken the “first steps” in its drive for regulation.
Danish Commissioner Margrethe Vestager will shortly take on the role of ‘Executive Vice President for A Europe Fit for the Digital Age’. In her current role as Competition Commissioner she has gained recognition for her tough stance on the tech giants, for example in the ruling on Apple’s unpaid taxes to the Irish state. The incoming Commission will continue work on a Digital Services Act which will “upgrade the liability and safety rules for digital platforms”, including rules for how platforms police illegal content online. Werner Stengg, who has been in charge of drafting the Digital Services Act, is set to join Vestager’s cabinet as a senior expert.
Meanwhile, Julian King has said that if platforms fail to improve their record by the end of 2019, the Commission may introduce “regulatory or co-regulatory measures”. However, tough action will probably be on hold until an independent consultant delivers a formal report on the performance of platforms on disinformation with further recommendations at the beginning of 2020. One step may be that they are forced to share their data more openly.
The big question in the coming months and years will not be whether there should be regulation, but rather where to draw the line between free speech, disinformation, and manipulation, and who can be relied upon to police this sector.