HUMANITY IN AN AUTOMATED WORLD – THE SINGULARITY … – The HR Director Magazine

Published on November 26th, 2023


Most corporates have had AI on their risk registers for ages (cyberattacks, industry disruption and workforce planning), but rarely in terms of existential risk. Recently, however, there have been calls for a global pause on AI to allow regulation to catch up, while the UK Government's 2023 National Risk Register names AI for the first time as a chronic risk. Have we lost control of AI already, and how do we maintain our humanity in an automated world?

Interestingly, it turns out that the best way to control AI is to maintain our humanity, so they are in fact the same question, but we'll come back to that. Why should HR care? Because ever since industry embraced scientific management, we have increasingly been treating humans as resources (the clue is in the name), in the same way that we also have directors for financial, technical and operational resources. In this mental model, we render business ever more efficient by optimising these resources, and if AI could perform a role better than a person, it makes sense to make that transition. In fact, shareholders would increasingly demand it, and would it not be brilliant if we could shed all that messy HR stuff by designing out humans altogether? But before we go out to tender on that front, it is worth checking that we actually know what being human is, in order to analyse what, if anything, might be lost in such a move. This is harder than it sounds, because in our zealous attempts to programme only the very best of our capacities into AI, we have scrupulously avoided all our junk code. A cultural commitment to scientific rationalism meant that all the human bits we were unsure about, or that seemed a bit flaky, we simply left out. So what we are left with in AI is a rather inaccurate copy of human intelligence. Is this risky? Could we end up programming a master race of psychopaths? You have probably watched enough sci-fi to have a fair sense of the odds. But what if these bad bits were actually the good bits? What if these too-human traits are more useful than they appear? Let's take a look at that cutting-room floor.

I have identified seven key items of human junk code: Free Will, Emotions, Sixth Sense, Uncertainty, Mistakes, Meaning and Storytelling. Let's go through these in turn. The first one, Free Will, is, if you think about it, a disastrous design choice: letting creatures do whatever they want is highly likely to lead to their rapid extinction. So let's design in some ameliorators. The first of these is emotion. Humans are a very vulnerable species, because their young take nine months to gestate and are largely helpless for their first few years. Emotion is a good design choice, because it makes these creatures bond with their children and with their communities to protect the vulnerable. Next, you design in a sixth sense, so that when there is no clear data to inform a decision, they can use their intuition to seek wisdom from the collective unconscious, which helps de-risk decision-making. Then we consolidate this by designing in uncertainty: a capacity to cope with ambiguity will stop them rushing into precipitous decisions and make them seek others out for wise counsel. If they do make mistakes, well, they will learn from them, and mistakes that make them feel bad will develop in them a healthy conscience, which will steer them away from repeating harms in future. Now that we have corrected their design to promote survival, what motivators are needed for their future flourishing? They need to want to get out of bed on a dark day, so we fit them with a capacity for meaning-making, because a species that can discern or create meaning in the world will find reasons to keep living in the face of any adversity and to keep the species going over generations. Finally, we design in a super-power of storytelling, because stories allow communities to transmit their core values and purpose down the generations in a highly sticky way. Stories last for centuries, future-proofing the species through the learned wisdom of our ancestors, and the human species prevails.

We did not choose to design humanity into AI because it seemed too messy (a robot that was emotional and made mistakes would soon be sent back to the shop), but on reflection we can see that our junk code is not an accident or a mistake, but part of a rather clever defensive design. If this code is how we have solved our own control problem as a species, might we find wisdom in it for solving those problems for AI? That's why answering the question about maintaining our humanity also answers the question about how best to stay in control. And for workplaces, it underscores why we will still need those human resources for some time to come, because the very flaws that necessitate HR policies and a Human Resources department in the first place are the source of our unique human contribution. Right now, we need people to train up AI. We know that tools like ChatGPT behave rather like your average intern: they are extraordinarily keen, but you really do need to check their work. Are your staff ready to supervise them well? As in coaching, the trick to using AI is becoming ever more excellent at prompts and questions, and thinking ahead about interesting assignments that will train them up. It's about honing your instincts so that you're not taken in by plausible answers. It's about thinking of the wider impacts of using AI on those around you, and it's about sharing advice and best practice so that we all become better at it. In short, it's all about nurturing your own junk code to make up for AI's current lack of it.

Here is an example: your organisation is unique. It's unique because of its very particular mixture of history, culture and leadership, even within its designated sector or geography, and your human staff know this. They pick it up through the stories they are told on the day they arrive and at every office party or away day they have been to ever since: successes and failures, heroes and villains, who makes it and who doesn't. In HR, they learn about its shadow side through overseeing difficult disciplinaries, tribunals and exit interviews. So their antennae are acutely attuned to pick up cultural tells and to know instinctively what will work and what won't. This is too nuanced ever to be fed into an AI, because it changes as the people around you change. It changes with the weather, the news and the seasons. We can ask an AI to write a training plan, a website or a strategy presentation, and it will do it in seconds, but only your staff know how to tweak this into something that will land in your context. In the past, we have frowned on water-cooler chit-chat as wasting valuable work time. But maybe these kinds of interactions are precisely where your staff must go in order to fine-tune their intuitions about the culture, through exactly this sort of office gossiping. It is another thing we are in danger of losing as more people work remotely.

What about the longer term? What might the role of HR be then? There will be a transition in many workplaces to a more AI-enabled workforce, and that will mean re-training, re-skilling or exit for some. But before you ask ChatGPT to write your ten-year workforce plan for you, have a look at that list of junk code: where is it already acting as risk-mitigation in your organisation, even if it sometimes feels like whimsy or waywardness? What more could you do to nurture it? Because if we only prioritise the competencies we've already programmed into AI, there will be no reason to keep humans in the workplaces of the future. But if we lose them, we stand to lose our humanity too.

Dr Eve Poole OBE is the author of Robot Souls, published by CRC Press.

For further info: http://WWW.EVEPOOLE.COM
