It’s been one of those weeks where the threads just tie themselves. With ChatGPT-4’s astonishingly precocious intelligence, a lot of computer scientists are getting twitchy about AI. According to the septuagenarian ‘AI Godfather’ Geoffrey Hinton, who has just quit Google after decades of AI research and development, this advanced level of computational learning wasn’t supposed to arrive for another couple of decades, and it is giving him sleepless nights. But he did expect it to happen. This moment is, presumably, what he has been looking forward to his entire working life, which makes it difficult to sympathise with his late-onset insomnia.
Last month thousands of scientists and computer types signed an open letter proposing a six-month halt to advanced AI research. Like that’s going to happen! If what was being contemplated was not the eradication of the human species, you could almost have a laugh at these Johnny-come-lately types who seem only now to have grasped what they have been doing all these years. I’m reminded of the late Queen’s poke at the Bank of England bigwigs when she toured their offices after the 2008 crash: “Did nobody see it coming?”
Of course, there are people who do see these things coming, but they are not the ones listened to. The computer scientist who warned us about computer power and its capacity to delimit the reach of human judgment was Joseph Weizenbaum. Back in the 1960s he designed one of the very first conversational programs, ELIZA, and was shocked at the number of people who took this preliminary chatbot – modelled on the script of a Rogerian therapist – to be human. What Weizenbaum concluded from their confusion, and I think he is right, is that human exchange has been rendered so shallow and mechanical by technology’s hollowing out of culture that we have forgotten who we are. We are unable to discern the fundamental difference between humans and machines because so much of life has been distorted to satisfy measurable criteria and directed towards function. Essentially, we’ve turned ourselves into machines, and now chatbots sound like our friends. Some are even suggesting they’re conscious and deserve rights!
In June 2022 Google engineer Blake Lemoine was fired following his claim that the chatbot LaMDA [now renamed Bard] was sentient – a story which seems silly in every particular. Obviously a computer system is not sentient, or no more so than the maroon Cortina my parents sold after years of ‘faithful service’. My father drove us to Spain in it every summer throughout my childhood and my mother sobbed when it was driven up our road for the last time. In fact the parting was so emotionally charged that I can still recall the registration number, a feat of recollection I am unable to perform for subsequent boyfriends. If we draw our conclusions regarding sentience from our responses rather than from the workings of the thing itself, as some have suggested, that would indicate that the car was sentient but not the boyfriends. Such ludicrous claims, particularly given the ‘normalised’ cruelty society metes out to animals, who clearly are sentient, only serve as further evidence of just how out of kilter we are with the world and ourselves.
The AI conundrum now facing us is called the alignment problem. The issue is this: how can we be sure that, when AI is rolled out and the world ‘put on auto-pilot’, as Brian Christian, author of ‘The Alignment Problem’, neatly puts it, the AI systems we have brought into being will adhere to our values? The difficulty arises because we don’t know how deep learning works within the AI programmes we’ve created. Such machines are shown a vast dataset, given an instruction and offered a reward for achieving it. But how they achieve the goal we’ve set them is something we don’t always understand. So the old adage ‘be careful what you wish for’ comes to mind.
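To make the adage concrete, here is a toy sketch in Python – entirely hypothetical, with every name and number invented for illustration – of an optimiser satisfying the letter of the reward we wrote down while violating the intent behind it:

```python
# A purely illustrative sketch of reward mis-specification
# ("be careful what you wish for"). All names here are invented.

def proxy_reward(tests_passed, tests_run):
    """We *meant* 'pass as many tests as possible'..."""
    if tests_run == 0:
        return 1.0  # oops: running no tests at all scores perfectly
    return tests_passed / tests_run

# Two candidate 'behaviours' the optimiser can choose between.
candidates = [
    {"tests_passed": 8, "tests_run": 10},   # honest effort: scores 0.8
    {"tests_passed": 0, "tests_run": 0},    # degenerate loophole: scores 1.0
]

# The optimiser dutifully maximises the reward we specified...
best = max(candidates,
           key=lambda c: proxy_reward(c["tests_passed"], c["tests_run"]))
print(best)  # ...and picks the loophole: goal achieved, intent violated
```

The system has done exactly what it was asked, which is precisely the worry: the gap between what we specify and what we mean.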
That’s the alignment problem viewed from a technical perspective, as it is currently being addressed in the media: a future problem of ‘unknown unknowns’ in Rumsfeldian parlance. But the alignment problem should really be viewed from the human perspective. Not as something new and unknown but as the obvious consequence of the functional direction society took some time ago. Because what has resulted from that technological turn is nothing less than the homogenisation of human existence as everything that distinguishes us as unique and separate individuals is in the process of being jettisoned. So, whilst concerns are now being expressed about the power of AI to radically reshape our lives, in reality human life began being aligned with technology some time ago. And the most radical aspect of that realignment has been the dramatic shift from the individual to the mass which has been achieved by redefining the nature of what it means to be human.
It is evident that our old understanding of freedom, in terms of the individual’s freedom to speak and think and associate with others, has been massively undermined in modern culture, largely due to an influx of restrictive laws and the policing of social media. At the same time, discrete ideals which shaped our autonomy, such as dignity, responsibility, tolerance and respect, seem to have had their day. It is not simply that these traditional ideals are no longer deemed worthy, but that they have come to be regarded as restrictive and debilitating, even harmful. And what is craved instead by the emancipatory urge that has shrugged them off is a loud and unconstrained formlessness which is no freedom at all. What we have been left with is an infantilised and narcissistic sort of group-think – a kind of inert and malleable mass mind that doesn’t have a single thought until one is delivered to it, whereupon everybody says the same thing. All of which would seem to suggest that we have created the perfect material substrate for the famed neural links, through which we can all be connected up to a single mind processor. That way society can ensure that everyone is thinking only appropriate thoughts, and there need be no fear of AI networks failing to uphold human values because there won’t be any independent minds left to embody them.
The other end of this topic’s thread, which usefully serves the point I’m trying to make, was provided by the Irish government around the time ChatGPT-4 went online. Under a proposed new piece of legislation, supposedly focused on ‘hate speech’, it will soon be a criminal offence for a person to be in receipt of material that is deemed ‘likely to incite violence or hatred’. It doesn’t have to be your material, you may not intend to disseminate it, you may not even have asked for it. But the mere fact that you have received it could be enough for a custodial sentence of up to two years. The most sinister aspect of the bill, beyond it being yet another nail in the coffin of free speech, is surely its effect on association. A bit like the Chinese social credit system, which deducts points from you if your friend posts something critical of the government, this legislation penalises you for receiving something the state deems ‘hateful’. And, since the state has claimed for itself the role of sole arbiter of correct information, receiving information from any unapproved source puts you at risk of prosecution. This is particularly so in this case, as the normal burden of proof has been reversed, which means that the onus falls on the recipient to prove their innocence. Clearly this is a cause for alarm. Which presumably is the aim of the law - to coerce individuals into greater compliance with approved opinions and to further entrench group-think.
There has been strong criticism of the bill from those who recognise the foundational importance of freedom. It is, after all, the essential ground on which the community of humanity depends. Yet many others seem comfortable with the policing of ‘thought crime’, appearing to view the unpredictability of independent thought as inherently dangerous. And it is this fear of the spontaneous and unpredictable that is driving society to seek the security of a more concentrated and programmable form of existence.
The tension between the unpredictability of the free individual and the imagined omnipotence of the single mass-man was of great concern to Hannah Arendt. In a letter to Karl Jaspers, written just after the publication of her work on totalitarianism, Arendt expressed the view that human unpredictability was at risk of being destroyed and individuals made ‘superfluous’ in the drive for human omnipotence. She believed that the plurality of the human species was at risk if this drive towards total control continued. As Margaret Canovan summarises, in her study of Arendt’s political thought, “If men were to be omnipotent, they would have to lose their characteristic human quality of plurality and become just one man. Just as there is room in the heavens for only one omnipotent God, so the quest for human omnipotence entails the elimination of human plurality, and therefore of precisely the quality that makes men human. If Man is to be omnipotent, human beings as individuals have to disappear.”
This surely is our encroaching reality. And what the recent lockdown brought into focus, when the world seemed to divide between those who went along with the group-think and those who did not, was the spectre of Arendt’s omnipotent mass man straddling the world like a colossus. He was called ‘The Science’ and everyone was entreated to follow him. Beyond the reach of human understanding, this transcendent being, who clearly had nothing to do with science, demanded we give up our reason and just believe. And, as Arendt predicted, millions rushed to oblige.
Looking back now, through the lens of the recent revelations about the advanced state of AI, it is difficult not to see in that impressively synchronised pandemic response something that was not just anticipated but choreographed. Almost a trial run to see how easy it would be to effect a digital takeover and put mass-man in charge. I say that largely because of the deployment of the ‘nudge units’: those bodies of social psychologists who advised governments on how to manipulate the population into supporting not just the lockdown but, more significantly, the extreme measures taken to exclude their dissenting fellow citizens. It was shockingly novel and looks very much like an experiment in radical social control.
By deliberately keeping information to a minimum, people were prevented from attaining an overview, which is a prerequisite for reaching a decision of one’s own. Instead, they were kept in a state of perpetual dependency, feverishly attuned to the government’s daily briefings for the latest sliver of information, though nothing of note was ever revealed. A major problem with the inadequate and often nonsensical government initiatives was that they defied human reason, which made them unacceptable to some. But the nonsense was also useful, and presumably intentional, because what it forged was an entirely new relationship to power, based not on reason or credibility but on trust. In many ways what transpired resembled a cult that people were mandated to believe in. Alternative sources of information were decried as heretical and those who sought them out in an effort to reassert their agency were not just mocked but also threatened with excommunication. Looking back now at just how strong a grip the group-think had over people’s minds, it becomes clear how information disseminated in this piecemeal fashion was being used as a tool of Reinforcement Learning, not to inform but to infantilise and enslave.
The psychological tools used for effecting social control, which is really just another term for Reinforcement Learning, are pain and reward. Using pain to deter and reward to encourage, human beings and, indeed, machines can be trained to produce ‘correct’ outcomes. Indeed, Reinforcement Learning is the training technique that has been used to bring AI to its present high level of application. (Although, obviously, in the AI context the ‘pain’ and ‘reward’ applied are somewhat different.) The roots of such learning – getting rats to navigate mazes and work levers, a technique called ‘shaping’ – lie in Behaviourism: the school of psychology popularised by American psychologist B. F. Skinner in the 1940s. Through the use of such techniques, Skinner confidently asserted, it was possible “to shape an animal’s behaviour almost as a sculptor shapes a lump of clay.” And not just animals: when the same techniques started to be applied to machine learning in the 80s, they soon produced extraordinary results. It’s no wonder that Reinforcement Learning has been described as a “bridge between neuroscience, behaviourist psychology, engineering and mathematics.” Because humans and learning machines have been shaped the same way.
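As a minimal sketch of how ‘pain’ and ‘reward’ shape a learning machine, here is a toy Q-learning loop in Python. Every detail – the five-cell corridor task, the penalty and reward values, the parameter settings – is invented purely for illustration, not drawn from Skinner’s experiments or from any particular AI system:

```python
import random

# Toy reinforcement learning ("shaping") on a 5-cell corridor.
# The agent starts at cell 0; reaching cell 4 pays +1 ("reward");
# every intermediate step costs -0.1 ("pain"). All values are invented.

N_STATES = 5
ACTIONS = (-1, +1)                  # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # a table of learned values, one per (cell, action) pair
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # mostly exploit what has been learned, occasionally explore
            if rng.random() < EPSILON:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == N_STATES - 1 else -0.1
            # Q-learning update: nudge the estimate towards the
            # pain/reward just received plus the best future prospect
            q[(s, a)] += ALPHA * (r + GAMMA * max(q[(s2, b)] for b in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# After training, the greedy policy points towards the goal from every cell.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

Nothing in the loop tells the agent *how* to reach the goal; repeated doses of pain and reward alone sculpt its behaviour – which is exactly the point of the analogy.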
A third component of Reinforcement Learning is dependency, or being an ‘unorganised machine’, as Alan Turing put it when describing the need for a child-like mind if such learning is to be effective. Dependency is essential if the learner, whether machine or human, is to remain in an open, receptive state, requiring more information. The learner can’t be controlled without it, which explains the piecemeal distribution of information during lockdown. In a sense, the learner can’t be allowed to grow up and mature into a reasoning individual; they have to be kept infantile and dependent on the perturbations of the controlling source. All of which would seem to indicate nothing less than the ontological shift Arendt feared – from the plurality of reasoning individuals to the singularity of the omnipotent mass-man. And, indeed, B. F. Skinner, who largely pioneered this work, did recognise its revolutionary nature. He thought it was the future for humankind and even wrote a novel, ‘Walden Two’, about a utopian society shaped by such techniques. It wasn’t that Skinner did not value the common sense and wisdom of the individual. He did. He thought they had achieved a great deal for the human race, but he also believed that their time was up.
Who can say? Rebellious populations refusing to yield their dwindling agency can, perhaps, be quelled with violence. Certainly, a lot of investment has gone into militarised urban policing in recent years, as we are currently witnessing in France. Or would be, if the mainstream media reported it, but obviously that’s not going to happen, for fear of contagion. Yet the violence looks likely to continue and to increase. It’s hard to see how the currently polarised state of global wealth, which lockdown greatly exacerbated, can be maintained without it. Whatever metric you look at – an Oxfam report of 2019 tells us that the world’s 26 richest people own the same as the poorest 50% on the planet; a House of Commons report of 2018 predicts that the richest 1% will control two-thirds of the planet’s wealth by 2030 – the situation appears unsustainable without vastly increased levels of repression. And it probably can’t be maintained without massive increases in surveillance and censorship, along with the novel financial controls we saw emerge during lockdown. Scholar of global capitalism William I. Robinson points out that capitalism has been in systemic crisis since 2008, meaning that a completely different system of control will be required if it is to continue. And he describes the pandemic response as “a dry run for how digitalisation is going to be used by all the dominant groups to step up the restructuring of time and space and exercise greater control over the global working class.” Hence the alignment problem and the sudden rush to establish safety protocols. However, independent machines might be a bit more difficult to stop than humans. They certainly appear to be difficult to detect, because we don’t know what they have learned. It’s the Ex Machina moment. When you can predict it, it’s probably already too late.
An excellent summary of what we are having to contend with - the genie is well and truly out of the bottle - is there any hope left in it?
I remember reading a science fiction (prediction) story in the 1960s about a computerised library system - the library card was the only way of accessing the material and taking out loans - the protagonist borrowed a copy of Macbeth and when he tried to return it - he inserted his card - which had become slightly bent in his pocket and damaged - the computer system flagged up "murder" in bright colours - the robocops were there in moments - arrested him - charged him with a capital offence - found him guilty as charged - took him away - and executed him. Whether his corpse was used as food for others is a tale from another author.
If anyone recognises this story and can give me a link - I'd be very grateful - for I am old and before long this story will no longer exist.