Most corporates have had AI on their risk registers for ages (cyberattacks, industry disruption and workforce planning) but rarely in terms of existential risk. Recently, however, there have been calls for a global pause on AI to allow regulation to catch up, while the UK Government's 2023 National Risk Register names AI for the first time as a chronic risk. Have we lost control of AI already, and how do we maintain our humanity in an automated world?
Interestingly, it turns out that the best way to control AI is to maintain our humanity, so they are in fact the same question, but we'll come back to that. But why should HR care? Because ever since industry embraced scientific management, we have increasingly been treating humans as resources (the clue is in the name), in the same way that we also have directors for financial, technical and operational resources. In this mental model, we render business ever more efficient by optimising these resources, and if AI could perform a role better than a person, well, it makes sense to make that transition. In fact, shareholders would increasingly demand it, and would it not actually be brilliant if we could shed all that messy HR stuff by designing out humans altogether? But before we go out to tender on that front, it is worth checking that we actually know what being human is, in order to analyse what, if anything, might be lost in such a move. This is harder than it sounds, because in our zealous attempts to programme only the very best of our capacities into AI, we have scrupulously avoided all our junk code. A cultural commitment to scientific rationalism meant that all the human bits we were unsure about, or that seemed a bit flaky, we simply left out. So, what we are left with in AI is a rather inaccurate copy of human intelligence. Is this risky? Could we end up programming a master race of psychopaths? You have probably watched enough sci-fi to have a fair sense of the odds. But what if these bad bits were actually the good bits? What if these all-too-human traits are actually more useful than they appear? Let's take a look at that cutting room floor.
I have identified seven key items of human junk code: free will, emotions, sixth sense, uncertainty, mistakes, meaning and storytelling. Let's go through these in turn. The first one, free will, if you think about it, is a disastrous design choice. Letting creatures do whatever they want is highly likely to lead to their rapid extinction. So, let's design in some ameliorators. The first of these is emotion. Humans are a very vulnerable species, because their young take nine months to gestate and are largely helpless for their first few years. Emotion is a good design choice, because it makes these creatures bond with their children and with their communities to protect the vulnerable. Next, you design in a sixth sense, so that when there is no clear data to inform a decision, they can use their intuition to seek wisdom from the collective unconscious, which helps de-risk decision-making. Then we need to consolidate this by designing in uncertainty. A capacity to cope with ambiguity will stop them rushing into precipitous decision-making and make them seek others out for wise counsel. If they do make mistakes, well, they will learn from them, and mistakes that make them feel bad will develop in them a healthy conscience, which will steer them away from repeated harms in future.

Now that we have corrected their design to promote survival, what motivators are needed for their future flourishing? They need to want to get out of bed on a dark day, so we fit them with a capacity for meaning-making, because a species that can discern or create meaning in the world will find reasons to keep living in the face of any adversity, and to keep the species going over generations. Finally, we design in a superpower: storytelling, because stories allow communities to transmit their core values and purpose down the generations in a highly sticky way. Stories last for centuries, future-proofing the species through the learned wisdom of its ancestors, and so the human species prevails.
We did not choose to design humanity into AI, because it seemed too messy: a robot that was emotional and made mistakes would soon be sent back to the shop. But on reflection, we can see that our junk code is not an accident or a mistake, but part of a rather clever defensive design. If this code is how we have solved our own control problem as a species, might we find wisdom in it for solving those problems for AI? That's why answering the question about maintaining our humanity also answers the question about how best to stay in control. And for workplaces, it underscores why we will still need those human resources for some time to come, because the very flaws that necessitate HR policies and a Human Resources department in the first place are the source of our unique human contribution. Right now, we need people to train up AI. We know that tools like ChatGPT behave rather like your average intern: they are extraordinarily keen, but you really do need to check their work. Are your staff ready to supervise them well? As in coaching, the trick to using AI is about becoming ever more excellent at prompts and questions, and thinking ahead about interesting assignments that will train them up. It's about honing your instincts so that you're not taken in by plausible answers. It's about thinking of the wider impacts of using AI on those around you, and it's about sharing advice and best practice so that we all become better at it. In short, it's all about nurturing your own junk code to make up for AI's current lack of it.
Here is an example: your organisation is unique. It's unique because of its very particular mixture of history, culture and leadership, even within its designated sector or geography, and your human staff know this. They pick it up through the stories they are told on the day they arrive, and at every office party or away day they have been on ever since. Successes and failures, heroes and villains, who makes it and who doesn't. In HR, they learn about its shadow side through overseeing difficult disciplinaries, tribunals and exit interviews. So, their antennae are acutely attuned to pick up cultural tells and to sense instinctively what will work and what won't. This is too nuanced ever to be fed into an AI, because it changes as the people around you change. It changes with the weather, the news and the seasons. We can ask an AI to write a training plan, a website or a strategy presentation, and it will do it in seconds, but only your staff know how to tweak this into something that will land in your context. In the past, we have frowned on water-cooler chit-chat as wasting valuable work time. But maybe these kinds of interactions are precisely where your staff must go in order to fine-tune their intuitions about the culture, through exactly this sort of office gossiping. It is another thing that we are in danger of losing as more people work remotely. What about the longer term? What might the role of HR be then? There will be a transition in many workplaces to a more AI-enabled workforce, and that will mean re-training, re-skilling or exit for some. But before you ask ChatGPT to write your ten-year workforce plan for you, have a look at that list of junk code: where is it already acting as risk-mitigation in your organisation, even if sometimes it feels like whimsy or waywardness? What more could you do to nurture it? Because if we only prioritise the competencies we've already programmed into AI, there will be no reason to keep humans in the workplaces of the future.
But if we lose them, we stand to lose our humanity too.
Dr Eve Poole OBE is the author of Robot Souls, published by CRC Press.
For further info: http://WWW.EVEPOOLE.COM
Read the original:
HUMANITY IN AN AUTOMATED WORLD - THE SINGULARITY ... - The HR Director Magazine