Tuesday, 23 July 2024

Digitalisation and the security state: potential rights violations

a) Freedom from discrimination

Digital technologies can be used to gather and share information on people’s nationality, ethnicity, religion, political views, or gender (or other personal characteristics or beliefs) far more quickly and easily than was ever possible with paper-based data collection methods. This is particularly true with the advent of “computer vision” technologies, which can be trained to recognise, record and track people and objects. Historic practices of discrimination and exclusion can thus be turbo-charged by digital technologies, further amplifying imbalances of power between powerful institutions and ordinary people.

Facial recognition technology is perhaps the best-known form of computer vision technology, but the same types of system could also be plugged into a city’s CCTV to make it possible to track the movements of people wearing a certain type of clothing, or carrying particular placards or banners. In recent years there have been huge political and legal disputes over the legitimacy of the use of facial recognition technology – in particular by the police – and steps have been taken by governments to regulate it.

The most notable example so far has come in the European Union, with the Artificial Intelligence Act – but despite the best efforts of human rights campaigners and some parliamentarians, governments have managed to include in the law significant exemptions for law enforcement and national security use of the technology.1

It is by now well known that many facial recognition algorithms fail to recognise non-white people and women at the same rate as they do white people and men, because they are trained on unrepresentative data sets – for example, images primarily of white men. Politicians, experts and civil society groups have made numerous calls for these flaws to be fixed, which is entirely justified. However, there is a risk that calls for improvements in the accuracy of such technologies make their use easier for governments to justify.

b) Freedom of thought and opinion

Digital platforms – for example, Facebook or X – can be used to mine data on individuals and groups, which is then used for profiling and targeting. This is how the business model of many online platforms works: users’ behaviours and interactions are mapped and recorded, then sold on to advertising companies and data brokers.2

The use of your online habits to try to sell you shoes may seem relatively innocuous, but what if those habits were used instead to feed you political propaganda? The practice of online political micro-targeting is by now well-established, raising concerns due to the ways in which ‘it could be used to increase polarization of the electorate, and identify and target weak points where groups and individuals are most vulnerable to strategic influence, amongst others.’3

In western states, many politicians have accused Russian intelligence agencies of engaging in such practices in a bid to undermine liberal democracies – a claim that is no doubt true, but also conveniently distracts from the political failings of those liberal democracies over the last 40 years as inequality has soared and public services have been increasingly privatised. And, of course, interference in other states’ elections is by no means limited to the world’s more authoritarian states, as the history of the CIA amply demonstrates.4

Whoever is behind it, the use of online platforms to feed the public with political propaganda is intended to sway people’s thoughts and opinions in one direction or another. This is, of course, the point of all political advertising: but where the source of the information is unclear or unknown, it becomes impossible for the audience to critically evaluate it. There is thus a crucial need for strict, enforceable rules on the transparency of online advertising of all forms, as well as strict rules on how people’s online behaviour can be stored, sold and used to track and profile them.

c) Freedom of association and freedom of expression

It is well established that surveillance measures can have a “chilling effect” upon the exercise of individual rights, discouraging people from expressing their opinions, attending protests, or joining a trade union (amongst other things) if they know that their presence is likely to be put on record by the authorities, or if they are likely to face other unpleasant consequences.5 The digitalisation of the security state can extensively amplify this effect, at the same time as bolstering the powers of the authorities to monitor and track dissidents.

For example, in relation to protest, digital technologies make it possible to record, track and log everyone in attendance, and then to track their attendance at other protests in the future.6 That, in turn, makes it possible to develop a profile of an individual’s political preferences and beliefs, as well as the people they associate with. Police may then seek to target those on whom they have information, for example to “discourage” them from attending protests, arrest people pre-emptively,7 or to encourage them to act as informants.8 At the same time, digital technologies also facilitate the sharing of information by informants on their targets – as the case of the UK’s best-known “spycop”, Mark Kennedy, makes clear.9

This problem is further amplified by the popularity of social media platforms such as Facebook and X, where the views freely expressed by people can be hoovered up by the authorities and used to profile them. Information gathered by the police through social media platforms has been termed SOCMINT, or social media intelligence (akin to OSINT, open source intelligence; or HUMINT, human intelligence).10 Many activists who came of age before the advent of social media have sought to discourage the practice of organising protests through platforms such as Facebook, as doing so can automatically provide the police with a list of who is interested in attending – though the ease with which such platforms can reach large numbers of interested people means their calls have often fallen on deaf ears.

This problem is not limited to engagement in protests. For example, in early 2021 in France, the government approved three decrees that allow the state to gather data on the political opinions, trade union activities and religious beliefs of people who could “harm the integrity of the territory or institutions of the Republic”, a vague term that expands the scope of police files far beyond what was previously permitted.11 In the UK, collusion between companies and state agencies in a practice known as “blacklisting” saw hundreds of trade union activists illegally barred from working in the construction industry in a bid to prevent the organisation and mobilisation of workers against poor pay and working conditions.12

d) Freedom of movement

Digital surveillance technologies are increasingly central to the regulation and control of international migration. States have amassed vast databanks of biographic and biometric data on visa applicants, tourists, business travellers and others, and are actively seeking to expand the information they collect and how they use it – for example, by using predictive algorithms to determine the level of “threat” posed by an individual, to determine if they warrant further searching or questioning by border officials.13 Through the use of new data collection and processing technologies, states (with the assistance of corporations) aim to make regular migration swifter and more convenient – albeit at the price of travellers handing over increasing amounts of personal data to the authorities. Digital technologies and communication networks also make it possible to extend state borders far beyond a country’s physical boundaries – many states require “pre-checks” of air passengers before they depart, in order to determine if they have permission to travel or not.14

However, many people are forced to cross borders in an “irregular” manner, and they too are subject to an increasing array of digitised surveillance measures. The world’s most technologically advanced states – in particular, European nations and the USA – employ drones, radar, satellites, blimps, planes, helicopters and a wide array of other sensors and tracking technologies to detect and monitor people travelling to, or attempting to cross, their borders.15 While self-evidently denying people’s right to freedom of movement across the globe, this also impinges upon the right to seek asylum. By preventing people from accessing their territory, wealthy states make it impossible for people to lodge a claim for international protection. The use of digital surveillance and data-gathering technologies is intimately linked to the effort to limit the number of refugees able to enter the world’s richer states.

Even when refugees are able to enter the territories of those states, digital technologies are used to register and track their claims, and even to aid in their assessment. The authorities in a number of states have used systems for extracting vast quantities of data from mobile phones to try to check the veracity of people’s claims about their country of origin or their journeys, while in Germany the authorities have used automated dialect analysis tools to try to determine people’s country of origin. An extensive report by the University of Oxford found digital technologies being used for border surveillance; immigration forecasting; processing of residency and citizenship applications; document verification; risk assessment; speech recognition; distribution of benefits; matching tools; mobile phone data extraction; and electronic monitoring of asylum applicants.16

The use of digital technologies to restrict freedom of movement does not only apply to the crossing of state borders. During the COVID-19 pandemic, many states introduced contact tracing apps that were used to determine whether or not an individual or an area posed a risk of infection and to permit or deny people the right to leave their homes or enter a particular place or area. Many of these apps were designed by big tech companies – for example, Google – in collaboration with state authorities, providing a clear example of how the data that is gathered and processed by companies can be used by governments for surveillance and tracking of the population.

e) Privacy and data protection

Any surveillance or data collection is an infringement of privacy. To be justified, it must be legitimate, necessary and proportionate, and subject to the requisite checks and balances – for example, prior authorisation by a judge or other independent authority, and independent monitoring and review. Data processing systems and technologies should also be designed with privacy in mind, a principle often referred to as “privacy by design”.

Whenever a breach of the right to privacy is justified, the institutions processing data must then respect rules on data protection. For example, the EU’s General Data Protection Regulation requires compliance with a number of basic principles:

  • Lawfulness, fairness and transparency

  • Purpose limitation: data should only be used for the purpose for which they were collected

  • Data minimisation: only the minimum amount of data necessary should be processed

  • Data accuracy: data must be accurate and, where necessary, kept up to date

  • Storage limitation: data should only be stored for a specified time and afterwards deleted or, at least, made anonymous

  • Integrity and confidentiality: data must be stored securely, there should be controls on who is authorised to access it, and it should be kept confidential

  • Accountability: those who process the data must be accountable for the ways in which they process it, for example by data-processing institutions keeping records to demonstrate compliance with the rules.
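To make a few of these principles concrete, the sketch below shows – purely as an illustration, with invented field names and an invented 90-day retention period – how data minimisation, purpose limitation and storage limitation might be expressed in code: only the fields needed for the declared purpose are kept, and every record gets a date by which it must be deleted or anonymised.

```python
from datetime import date, timedelta

# Hypothetical illustration only: which fields are genuinely needed
# for the declared purpose (purpose limitation + data minimisation),
# and how long they may be kept (storage limitation).
ALLOWED_FIELDS = {"name", "email"}
RETENTION_DAYS = 90

def minimise(record: dict) -> dict:
    """Keep only the fields required for the declared purpose."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def retention_deadline(collected_on: date) -> date:
    """Date by which the record must be deleted or anonymised."""
    return collected_on + timedelta(days=RETENTION_DAYS)

# Over-collection: sensitive fields that the purpose does not require.
raw = {"name": "A. Example", "email": "a@example.org",
       "ethnicity": "(not needed)", "political_views": "(not needed)"}
stored = minimise(raw)
deadline = retention_deadline(date(2024, 1, 1))
```

The point of the sketch is that these principles are design decisions, not afterthoughts: the allowed fields and retention period are fixed before any data is collected, which is what “privacy by design” asks for.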

f) Right to an effective remedy

If an individual’s data is misused by an authority that stores or processes it, the data subject (the individual on whom data is processed) must have access to an effective remedy allowing them to challenge that misuse – generally, recourse to an independent administrative and/or judicial mechanism through which the data can be corrected or deleted if deemed necessary. The use of new technologies such as predictive algorithms, machine learning and “artificial intelligence” can pose challenges to the right to an effective remedy, due to the difficulty of determining how or why a decision or assessment has been made. For example, it must be possible for the authorities to show how an algorithm used to determine whether or not an individual poses a security threat came to that decision. This requires mechanisms built into the design of data processing systems that make it possible to trace and explain the decision-making or assessment process, so that it can be meaningfully reviewed by humans.
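One simple way such traceability can be built in is for the system to record, alongside every automated assessment, the specific rules or inputs that produced it. The toy sketch below is entirely illustrative (the class, rule and identifier names are invented, and no real system is being described): it shows the general shape of an auditable decision record that a human reviewer could later inspect.

```python
from dataclasses import dataclass, field

# Illustrative only: an assessment that carries its own audit trail,
# so a reviewer can see exactly why the outcome was reached.
@dataclass
class Assessment:
    subject_id: str
    outcome: str
    reasons: list = field(default_factory=list)  # the audit trail

def assess(subject_id: str, data: dict) -> Assessment:
    """Toy rule-based check that logs every rule it triggers."""
    a = Assessment(subject_id, outcome="no_action")
    if data.get("watchlist_match"):  # hypothetical input flag
        a.reasons.append("rule 1: matched watchlist entry")
        a.outcome = "refer_for_review"
    if not a.reasons:
        a.reasons.append("no rules triggered")
    return a

result = assess("subject-42", {"watchlist_match": True})
```

A rule-based system like this is trivially explainable; the harder policy question raised in the text is how to demand an equivalent record of reasons from statistical or machine-learning systems, where no such list of triggered rules naturally exists.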

Notes


  1. https://edri.org/our-work/eu-ai-act-deal-reached-but-too-soon-to-celebrate/ ↩︎

  2. https://privacyinternational.org/long-read/2433/i-asked-online-tracking-company-all-my-data-and-heres-what-i-found ↩︎

  3. https://academic.oup.com/idpl/article/11/4/348/6355314?login=false ↩︎

  4. https://www.wnyc.org/story/history-us-intervention-foreign-elections/ ↩︎

  5. https://www.opensocietyfoundations.org/uploads/c8c58ad3-fd6e-4b2d-99fa-d8864355b638/the-concept-of-chilling-effect-20210322.pdf ↩︎

  6. https://phm.org.uk/blogposts/dangers-of-technological-surveillance-in-policing-public-protests/ ↩︎

  7. https://www.statewatch.org/statewatch-database/uk-arrests-raids-and-wedding-parades-by-chris-jones/ ↩︎

  8. https://www.theguardian.com/uk/2009/apr/24/strathclyde-police-plane-stupid-recruit-spy ↩︎

  9. Kennedy, a police officer, infiltrated the UK and European environmental direct action movement for years, and was provided with a digital watch that allowed him to record and transmit conversations to his superiors. ↩︎

  10. https://www.tandfonline.com/doi/full/10.1080/02684527.2012.716965 ↩︎

  11. https://www.statewatch.org/news/2021/january/france-green-light-for-police-surveillance-of-political-opinions-trade-union-membership-and-religious-beliefs/ ↩︎

  12. https://www.thecanary.co/uk/analysis/2023/02/15/governments-and-police-colluded-with-private-agencies-to-blacklist-activists-report-confirms/ ↩︎

  13. https://www.statewatch.org/automated-suspicion-the-eu-s-new-travel-surveillance-initiatives/ ↩︎

  14. https://www.statewatch.org/analyses/2021/uk-nationality-and-borders-bill-biometric-permission-to-travel-scheme-will-affect-tens-of-millions-of-people/ ↩︎

  15. https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders/ ↩︎

  16. https://www.rsc.ox.ac.uk/publications/automating-immigration-and-asylum-the-uses-of-new-technologies-in-migration-and-asylum-governance-in-europe/@@download/file ↩︎
