How to be Human in an Artificial World


Can humans upgrade to compete with AI?

Another day, another news story about Artificial Intelligence. AI news fills our screens with regularity, from perspectives as diverse as ‘the end of the world is nigh’ to ‘how dare university students use AI to write their college essays’ to the moral outrage levelled at those canny people who have dodged their parking fines by asking ChatGPT to craft a well-written letter pleading their case. Even though 2023 has been described as the year Artificial Intelligence entered the public consciousness, the noise about AI is already becoming so ubiquitous that most of us have tuned out.

Still, there are those who are embracing these new tools with relish, cutting laborious tasks down to mere seconds thanks to a whizzy AI-powered design programme or a tool that creates beautiful PowerPoints with intelligent text that would’ve taken the average person a whole day or more to write. Why wouldn’t anyone jump on this super-powered bandwagon, they ask? In another camp, there is a growing sense of disquiet about the speed, and the complete lack of restriction, with which players in this field are hurtling towards ever greater Artificial Intelligence capabilities.

While creating self-driving cars or design-enhancing PowerPoints seems pretty benign and useful, what’s less clear is the implication of a world driven almost entirely by AI, and of the leap from specific, task-oriented AI, known as Artificial Narrow Intelligence (where its role is to undertake one specific task, such as playing chess, translating a language, or recommending movies you’d like to watch or books you’d like to buy), to Artificial General Intelligence (AGI): a “type of artificial intelligence that possesses human-like cognitive abilities, such as the ability to learn, reason, solve problems, and communicate in natural language.” Basically, a computer that is as generally smart as humans, not just at one narrow specialty. Sounds great, until you realise that a computer that is as smart as us won’t stay as smart as us for very long.

It took humans millions of years to evolve to this level of (sometimes questionable) intelligence; a computer programme designed to self-improve frankly won’t wait that long. And yet, for most of us this seems farcical, belonging to the realm of science fiction, not science fact. But just because progress seems slow, doesn’t mean it actually is. This absolutely terrifying image sums it up quite nicely (and the accompanying blog is well worth a read):

Basically, progress feels really, really slow – almost non-existent – until it speeds up exponentially, by which point it’s going to slap us in the face without warning, or so it will seem.

As the blogger Tim Urban writes in his highly readable blog Wait But Why, thanks to the Law of Accelerating Returns (a term coined by futurist Ray Kurzweil), it’s likely that the 21st century will see 1,000 times the progress of the 20th. Living in 2050 may feel as alien to us as 2015 would have felt to our ancestors from the 1700s, were they to time-travel there. While it’s hard to imagine these shifts with our linear minds, it’s clear that we’re already experiencing some head-spinning levels of change that are leaving us feeling disorientated, confused and anxious.

Consider this: in the decade from 2013 to 2023, we’ve experienced Brexit, Trump, a global pandemic, the #MeToo and Black Lives Matter movements, a war in Europe… and the list goes on. Just one of these monumental, global happenings would be enough to alter our worldview and weigh heavily on our emotions. Consider for a moment the accumulation of all those things happening, almost all at once, with the added weight of our own life experiences: the losses, health concerns, family problems, changes in work or lifestyle, and so on. Now imagine all of that speeding up, exponentially. Gulp.

It’s still hard to imagine this quantum leap of change when most of the discourse we’re hearing about AI is centred on issues such as lazy (or smart, depending on your perspective) college students being handed their degrees on the back of AI-written essays. Placing this at the centre of our collective discussions about AI is akin to worrying about a spot of rain on the deck of the Titanic. The impact of AGI is so vast, so incomprehensible, that it has, as a worst-case scenario, the capacity to lead to the extinction of all human life on Earth, and, as a best-case scenario, the capacity to change every facet of human life: freeing humans from the need to work, generating completely free and green energy, distributing adequate food supplies all over the world, providing every human with clean water and solving all the environmental challenges we’ve unwittingly created.

But even solving one problem can create another. Will the current level of consciousness of our human leaders – their morality, leadership abilities, values and so forth – ensure that a mass transfer of jobs from humans to robots will result in us enjoying a life without toil, focusing instead on higher pursuits of creativity and enjoyment? Or will it be a repeat of the current system, but on a far broader, more magnified scale, where the ‘haves’ flourish at the expense of the ‘have-nots’? As writer Kevin Drum suggests, history may provide a pretty good indication of what’s to come. “Robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.” Sound familiar?

Whichever potential reality proves to be true, or something else entirely, there are some really, really big (and frankly, scary) considerations that should have been made long before AI was introduced to the mainstream. Now, high-profile thinkers are sounding an alarm call, but is it too late? Author Yuval Noah Harari recently claimed that AI has ‘hacked the operating system of human civilisation’ by becoming human-like in its ability to tell stories and influence human decision-making, while in 2022 one of Google’s software engineers, Blake Lemoine, claimed that the chatbot he’d been testing had gained consciousness (for which he was widely discredited and fired from his position at Google).

In an article for Forbes, writer Philip Maymin reports that one of the ‘godfathers of deep learning’, Geoffrey Hinton, believes that we are on the precipice of creating AGI, which will automatically relegate humans to the second-smartest species on the planet. Whether this is an existential threat to humankind or not seems to come down to a question of alignment, he writes. Will AGI be aligned to positive human values, or not? “Unaligned AGIs may enslave us or kill us, perhaps even thinking it is for our own good. Hinton raises another possibility: they will be so good at persuasion they’ll be able to convince us of anything.”

Even programming AI with seemingly beneficial human values, such as kindness, won’t necessarily mean that we avoid an annihilation scenario. Consider an intelligence that’s far greater than human intelligence deciding what’s kinder for the planet: to allow humans to continue creating war and destroying our natural habitat, or to remove us like fleas from a dog. We wouldn’t blink an eye at using a ‘humane’ treatment on our beloved pet to remove pesky fleas, acknowledging it as the definite lesser of two evils. Enough said.

So where does all this leave us, as increasingly blind, fairly ignorant, often catastrophically stupid human beings? What’s clear is that Artificial Intelligence is not going anywhere. The horse has well and truly bolted before we’ve even realised the stable has a door. What remains unclear is whether this whole undertaking has been well thought through or not. By turning AI into a race to the top, like nearly everything else on planet Earth, we’ve allowed anyone and everyone to jump onto the AI bandwagon without asking ourselves whether AI is ethical, whether it’s needed, and whether it should be regulated – and if so, by whom?

It’s clear that as a global society we should be screaming from the rafters for a far deeper, far more thorough discussion about the role and limits of AI before we gallop off into the sunset glowing with childish pride about what we’ve created. But for most of us that’s not within our gift. No one in the development labs of Google or Apple is listening to us, quite frankly, and as for the darker, more nefarious players in the field who will also be marching towards creating AGI for their own benefit, well, it doesn’t bear thinking about.

So what do we do?

What’s clear is this: technology is only as good as it’s programmed to be. And that relies broadly on how good the programmers are. Our first Apple iPhones were vastly more limited than the models we have today. The quality of the technology – the speed, the performance, the usability – has improved exponentially, but the drive behind the improvements hasn’t changed. The goal is always to make our phones faster, smarter, cheaper to produce, easier to navigate and so on. The goal is never to make them kinder, more compassionate, more respectful.

That idea is almost laughable, until we switch out the term ‘iPhone’ for AGI-powered robot police officers, or immigration officials. Then the human values or level of consciousness of the programmer becomes incredibly important. If a computer programme is going to decide whether someone goes to jail or not, or is granted asylum or not, don’t we want it to be imbued with the highest level of human values? Don’t we want it to be the wisest, most caring model ever created? Which raises the question: are the people creating this technology the wisest, most caring, most altruistic beings ever to walk the Earth?

It’s too late to suggest that our race for better and more powerful AI be halted. What’s most urgent now is to ensure that those who are creating these technologies are aligning with a higher level of conscious awareness than the base level of human thinking that gives rise to ignorance, greed, power games and war. As for the rest of us, if we are to live side-by-side with AI, our humanness is going to be of greater importance than ever before, but we’ll need an inner makeover if we’re going to compete.

Rather than schools training our children to perform rote memorisation the likes of which the most basic of computers could undertake in milliseconds, we should be aspiring to foster the next level of human qualities. Rather than arguing with one another about politics on social media, perhaps we should be considering how to foster unprecedented levels of peace across the planet, or cultivating levels of creativity and curiosity the likes of which we’ve never seen before.

Even as I write this blog, the gap between where we are and where we need to be is literally playing out in front of me; a scene which would have been highly comical if it weren’t quite so ironic. In a busy café, the front door is sticking. It becomes apparent that the overhead hinge has come loose, slipping down from its position and causing the door to jam: stuck open, but not wide enough for anyone to pass. With a bit of tugging, the hinge loosens and the door is yanked open. The scene repeats itself over and over. A customer goes to exit the café, the door jams, they can’t get out. What do these highly intelligent human beings do? Absolutely nothing. Each person (including the staff who bustle in and out, serving diners outside) tugs on the door until they are able to release it (possibly further damaging the hinge and door in the process), and so the immediate problem for that individual is addressed, but overall, it is not solved. This continues until the door jams abruptly in the face of a waitress, causing plates and cups to sail into the air and smash all over the floor. Even after this kerfuffle not one member of staff attempts to solve the problem, despite a customer reliably informing them of what’s causing it. Their solution? After forty-five minutes of this ongoing jam-yank-jam-yank, a waitress decides to prop open the door and we all promptly freeze as the cold October air comes flooding in.

What does this microcosm of human behaviour tell us? For all our lofty ideals about human potential, most people focus solely on meeting their own needs, while simultaneously avoiding using their incredible capacity for problem-solving, ingenuity, and creativity. Why? Possibly because it takes effort, possibly because we’re too self-focused, possibly because we’ve trained ourselves out of thinking in this way.

What this story also sums up nicely is that a technology that’s been programmed to solve problems will do a far, far better job than us humans: much quicker, much more efficiently, and without the levels of ineptitude witnessed in this café. But what it won’t necessarily do is bring human qualities to the problem-solving. Kindness, compassion, altruism, respect, generosity, creativity, intuition and imagination, to name but a few. It is these qualities that we urgently need to foster – they should be front-and-centre of any educational curriculum, woven into every workplace leadership programme, modelled and taught in every community. Human beings and Artificial Intelligence might be able to co-exist in the future, but not if we try to emulate and model ourselves on computers, striving to improve our capacity to think faster or memorise more – like a 1980s Casio calculator trying to interface with the Terminator – and not if we become fearful and shrink back into ignorance, handing our power away and thus our ability to choose a different outcome.

We urgently need to become Humans 2.0: an upgraded, more advanced version of the beings we currently are. While our current human models have capacity for ‘higher’ motives and values, all too often we default back to the base behaviours of our animal origins. And if we fail to evolve at this most critical juncture? Well, we might just find that the job of Humans 2.0 is resolutely outsourced to AI.


the author

Hi, I'm Nikki

I created The Spark when I realised I’d lost touch with my own inner light, buried under years of over-work and overwhelm. After witnessing far too many children becoming smaller versions of themselves, shrinking back, disconnecting and becoming disillusioned, I’m on a mission to ignite my Spark to help children to find theirs, changing the way we nurture small humans into being.

What started as a journey of self-discovery is growing into a global movement to create a better childhood for all children and young people. Will you join me?

