For most of us, intelligent technologies are fundamental parts of our existence. We all have smartphones in our pockets, and we live in the era of the ‘Internet of Things’, where our everyday devices constantly send information to, and receive it from, the Internet. This generates massive amounts of data, and its collection and use are becoming increasingly personalised. Online ads suggest products to buy based on our search history, and Facebook optimises our newsfeeds based on our patterns of use – the friends we look at most and the pages we visit most often. It would be naïve to believe that this trend of “persuasive computing” would be immune to the vote-maximisation desires of politicians.
“Nudging” is an increasingly common practice in which governments steer people’s behaviour on issues as diverse as health, the environment, and civic participation in a desired direction. A recent example was an attempt to reduce the excessive amounts of antibiotics being prescribed by doctors: emails were sent to the doctors who prescribed the most antibiotics across the country, informing them of this fact, and these doctors subsequently reduced their prescribing. This form of paternalism is already controversial, yet big data and the rapidly increasing sophistication of artificial intelligence could soon allow governments to conduct wide-scale campaigns engineered to produce almost any desired outcome.
We saw the beginnings of “intelligent governance” in online campaign platforms such as the “Flux” party and “Online Direct Democracy”, yet these organisations are built on crowdsourced democratic opinion rather than intelligent computer-based algorithms. Big nudging could mean software that calculates the policies that are “best” for society, even without human input. Yet politics is complicated, and even allowing for future advances in artificial intelligence, it remains doubtful that a program could really make complex decisions about governance and public policy. More troubling is the prospect of governments using these tools for surveillance and control of their citizens. China, for example, is planning to introduce a “Citizen Score” for each of its citizens, generated by a computer program that analyses a person’s internet history and social contacts to determine their trustworthiness. This score will be used to determine a person’s eligibility for certain loans, visas, and employment.
Artificial intelligence might also harm politics through the technology already used to target product advertising to consumers online. The existence of this capability means that whenever a government desires support for a particular policy, it could manipulate online algorithms to nudge voters and candidates towards supporting it.
A number of recent articles have suggested that Cambridge Analytica managed to influence the results of the US election and the EU referendum by data-mining Facebook profiles and tailoring advertising to the psychological profiles of users. These reports have since been found to be exaggerated, yet political commentators concede that such online influencing is entirely possible. Just as dangerous is the fact that social media can analyse an individual’s political leanings and reflect them back in what that user sees on their newsfeed. Individuals thus have their own biases and opinions confirmed in a loop that can breed adherence to conspiratorial ideas and erode their ability to determine what is true.
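The feedback loop described above can be illustrated with a toy model – every function and number here is hypothetical, a sketch of the general mechanism rather than any platform’s real algorithm. A feed that ranks content by agreement with a user’s inferred leaning, and nudges that inferred leaning towards whatever the user clicks, quickly narrows what the user sees:

```python
# Toy filter-bubble sketch. Political leanings are scores in [-1, 1];
# all names and numbers are illustrative, not a real platform's ranking.

def similarity(user_leaning, item_leaning):
    # 1.0 when the item matches the user's leaning exactly,
    # 0.0 when it sits at the opposite extreme.
    return 1.0 - abs(user_leaning - item_leaning) / 2.0

def rank_feed(items, user_leaning):
    # Surface the most agreeable content first.
    return sorted(items, key=lambda it: similarity(user_leaning, it),
                  reverse=True)

def simulate(user_leaning, items, clicks=10, learn_rate=0.3):
    # Each click drags the inferred leaning towards the item shown,
    # which in turn pushes ever more similar items to the top.
    for _ in range(clicks):
        top = rank_feed(items, user_leaning)[0]
        user_leaning += learn_rate * (top - user_leaning)
    return user_leaning

# A user with a mild leaning (0.2) browsing a mixed feed ends up locked
# onto near-identical content; the opposing view (-0.9) is ranked last
# and effectively never seen.
feed = [-0.9, -0.3, 0.1, 0.5, 0.9]
final = simulate(0.2, feed)
```

In this sketch the loop merely confirms the user’s starting position, yet even that is enough to hide the other side of the feed entirely – and real systems add engagement-weighted ranking on top of the same signals.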
If unchecked, the ‘optimisation’ of intelligent algorithms and their ability to manipulate big data may well lead to increased social fragmentation, rather than the connectedness of people trumpeted by so many social media giants.
For an in-depth look see Carole Cadwalladr’s piece in the Guardian, “The great British Brexit robbery: how our democracy was hijacked”.
By Isabella Banfer, edited by Joel Lindsay