With each passing day, advances in artificial intelligence (AI) bring us closer to a world of general automation.

In many cases, this will be the realisation of utopian dreams that stretch back millennia – imagined worlds, like the Garden of Eden, in which all of humanity’s needs are provided for without reliance on the ‘sweat of our brows’. Indeed, it was with the explicit hope that humanity would recover its dominion over nature that, in 1620, Sir Francis Bacon published his Novum Organum. It was there that Bacon laid the foundations for modern science – the fountainhead of AI, robotics and a host of related technologies that are set to revolutionise the way we live.

It is easy to underestimate the impact that AI will have on the way people work and live in societies able to afford its services. Since the Industrial Revolution, there has been a tendency to make humans accommodate the demands of industry. In many cases, this has led to people being treated as just another ‘resource’ to be deployed in service of profitable enterprise – often regarded as little more than ‘cogs in the machine’. In turn, this has prompted an affirmation of the ‘dignity of labour’, the rise of labour unions and, with the extension of the voting franchise in liberal democracies, legislation regulating working hours, standards of safety and so on. Even so, in an economy that relies on humans to provide the majority of its labour, too much work still exposes people to dirt, danger and mind-numbing drudgery.

We should celebrate the reassignment of such work to machines that cannot ‘suffer’ as we do. However, the economic drivers behind the widescale adoption of AI will not stop at alleviating human suffering arising out of burdensome employment. The pressing need for greater efficiency and effectiveness will also lead to the wholesale displacement of people from any task that can be done better by an expert system. Many of those tasks have been well-remunerated, ‘white collar’ jobs in professions and industries like banking, insurance and so on. So, the change to come will probably have an even larger effect on middle-class people than on the working class. And that will be a very significant challenge to liberal democracies around the world.

Change of the extent I foresee need not be a source of disquiet. With effective planning and broad community engagement, it should be possible to harness increasingly powerful technologies for the common good. However, to achieve this, I think we will need to rediscover what is unique about the human condition. That is, what is it that cannot be done by a machine – no matter how sophisticated? It is beyond the scope of this article to offer a comprehensive answer to this question. However, I can offer a starting point by way of an example.

As things stand today, AI can diagnose the presence of some cancers with a speed and effectiveness that exceed anything a human doctor can achieve. In fact, radiologists, pathologists and other diagnostic specialists are amongst the earliest of those likely to be made redundant by the application of expert systems. However, what AI cannot do is replace a human when it comes to conveying news of an illness to a patient. This is because the consoling touch of a doctor carries a special meaning: the doctor knows what it means to be mortal. A machine might offer a convincing simulation of such understanding – but it cannot really know. That is because the machine inhabits a digital world, whereas we humans are wholly analogue. No matter how close a digital approximation of the analogue might be, it is never complete. So, one obvious place where humans might retain their edge is in the area of personal care – where the performance of even an apparently routine function might take on special meaning precisely because another human has chosen to care. Something as simple as a touch, a smile, or the willingness to listen could be transformative.

Moving from the profound to the apparently trivial, one can more generally imagine a growing preference for things that bear the mark of their human maker. Such preferences are already revealed in purchases of goods made by artisanal brewers, bakers and the like. Even the humble potato has been affected by this trend – as evidenced by the rise of the ‘hand-cut chip’.

In order to ‘unlock’ latent human potential, we may need to make a much sharper distinction between ‘work’ and ‘jobs’.

That is, there may be a considerable amount of work that people can do – even if there are very few opportunities to be employed in a job for that purpose. This is not an unfamiliar state of affairs. For many centuries, people (usually women) have performed the work of child-rearing without being employed to do so. Elders and artists, in diverse communities, have done the work of sustaining culture – without their doing so being part of a ‘job’ in any traditional sense. We do not need jobs in order to engage in meaningful work. Rather, we need jobs primarily to earn the income required to go about our lives.

And this gives rise to what may turn out to be the greatest challenge posed by the widescale adoption of AI. How, as a society, will we fund the work that only humans can do once the vast majority of jobs are being done by machines?  
