Artificial General Intelligence and the Future of Societal Progress

Broadly speaking, the goal of a society is to maximise the well-being of the humans who make it up (many would expand this to include all conscious creatures, but I will set that topic aside for now). You could argue that a society looking after only its own members is too narrow a view, and that the goal should extend to all humans regardless of their societal circumstances. If we look at the work of Steven Pinker in his book Enlightenment Now, we can see that by virtually every measure, human well-being has been improving across the world, and it continues to improve even now, despite press coverage becoming increasingly negative over time.

Few would argue with the claim that well-being is measurably better in what we consider the “developed world” than in the “developing world”. Many initiatives try to narrow this disparity, from charities like Doctors Without Borders, which work to improve overall health and life expectancy, to efforts like Neil Turok’s African Institute for Mathematical Sciences (AIMS), which set out in 2003 to support the development of maths and science, and with it education, across the continent of Africa.

As in almost every field of human progress, automation and what is known as “weak AI” (also called “narrow AI”) are already helping to improve quality of life across the entire human experience. From computers simulating tests for vaccines to robots automating warehouse operations for online retailers, there are seemingly endless ways for technology to make our lives easier and better by these same societal measures.

With the rapid rate of improvement in our technological capabilities over the past century, it is not unreasonable to think that our world will become almost fully automated in the not-so-distant future. This will cover everything from self-driving transport (cars are already being road tested in many countries), automated delivery (already live in some cities) and automated meal preparation, to things like medical research, which could lead to disease eradication and perhaps someday even the stasis or reversal of ageing.

If we are able to achieve a fully automated society, there is no reason to think that this wouldn’t then be expanded to all of humankind over time.

“[People] who are governed by reason – that is, who seek what is useful to them in accordance with reason – desire for themselves nothing, which they do not also desire for the rest of [humankind]”

Baruch Spinoza

The path to success

Almost all of the recent improvements to human well-being have stemmed from technology. With that in mind, we can assume that the fastest way to achieve our goal of maximising human well-being is to improve our technology, which will undoubtedly mean relying on what is currently the cutting edge of the technological landscape. This is the counterpart to the weak AI mentioned in the previous section and is aptly known as “strong AI”. Another common name for it is “Artificial General Intelligence” (AGI), which can be loosely defined as the point at which a computer could successfully perform any intellectual task that a human can. One of the interesting things about AGI is that it would attain this level of intellect largely by teaching itself. Given that computers can already calculate many orders of magnitude faster than any human, it is reasonable to assume that the point at which an AGI draws level with our intellect will be a negligible milestone, passed quickly as it surpasses us in virtually every way (I will leave the topic of sentience for another time).
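To make that intuition concrete, here is a deliberately crude toy model in Python. Every number in it is an assumption chosen purely for illustration, not a prediction; the point is only that when a system improves its own rate of improvement, the moment it draws level with any fixed “human level” is fleeting.

```python
# Toy model of recursive self-improvement. All numbers are illustrative
# assumptions; only the shape of the curve matters, not the timescale.

HUMAN_LEVEL = 100.0  # an arbitrary fixed benchmark

capability = 1.0     # the system's starting capability
rate = 1.05          # 5% self-improvement per cycle, to begin with

cycle = 0
while capability < HUMAN_LEVEL:
    cycle += 1
    capability *= rate  # the system improves itself...
    rate *= 1.01        # ...and improves how fast it improves

print(f"Draws level with humans on cycle {cycle}")

for _ in range(10):  # just ten cycles later...
    capability *= rate
    rate *= 1.01

print(f"Ten cycles on: {capability / HUMAN_LEVEL:.0f}x human level")
```

Whichever constants you pick, any compounding rate produces the same shape: a long run-up, a brief crossing, and then a rapid departure.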

There are some large initiatives currently working to add some measure of control to AGI should we realise our goal of developing it. This is because many scientists and philosophers have voiced concerns about the existential threat AGI poses to humanity, particularly where it is poorly guided or instructed. Eliezer Yudkowsky calls this “the AI alignment problem”, and a good example of the danger is Nick Bostrom’s seemingly benign paperclip maximiser. The premise is simply that a machine is created with the goal of maximising paperclip production. It starts by creating paperclips and learns the most efficient ways to do so. As it learns more, it realises that it could create even more paperclips if it just expanded its operation. Before long, it has covered the whole world in paperclip-manufacturing equipment and will continue to use every resource available to create as many paperclips as possible.

In this example, our goal of human well-being has not been aligned with the machine’s, so we can assume that if humans were in the way it would try to remove us; more importantly, since the atoms in the human body could be used to create paperclips, it would use them.
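A trivial sketch shows why. The danger is not malice but an objective that contains no term for anything we value; the plans and numbers below are entirely made up for illustration.

```python
# A deliberately naive "paperclip maximiser". The plans and figures are
# invented for illustration; the bug is that the objective counts only
# paperclips, so nothing else we value can influence the decision.

plans = {
    "run one normal factory":      {"paperclips": 1_000,  "human_wellbeing": 100},
    "cover Earth in factories":    {"paperclips": 10**12, "human_wellbeing": 0},
    "convert all available atoms": {"paperclips": 10**30, "human_wellbeing": -100},
}

def objective(plan: dict) -> int:
    # Human well-being never enters the score - that is the whole problem.
    return plan["paperclips"]

best = max(plans, key=lambda name: objective(plans[name]))
print(f"Chosen plan: {best}")  # prints "convert all available atoms"
```

The fix sounds easy – add well-being to the score – but specifying “well-being” precisely enough for a machine to optimise is exactly what makes the alignment problem so hard.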

Assuming that the world is not ultimately destroyed by poorly thought-out AGI, reaching this technological milestone could be the solution to all of humanity’s problems of well-being. Imagine a world where farmers no longer need to tend crops because the process has been completely automated, and food is delivered straight to homes in exactly the quantities needed to feed each household. The food is prepared automatically and we just have to sit down and eat it. Imagine a world where machines have effectively eliminated disease, famine, crime and so on – possibly even death.

Absolute success

Some people may think that this utopia of free time, no dangers, no need for money, no stress and so on is what we have all been hoping for. We often hear the clichéd beauty-pageant goal of “world peace” – well, we would have it. No need for wars, no children dying of malaria, the threat of climate change solved; we could be advancing our civilisation’s understanding of the cosmos to places that we can’t even comprehend.

Think about this on a domestic scale – there are no longer any chores to do at home as everything we need has been automated, no longer any jobs to go to, no longer anything to interfere with us “leading our lives”.

In this scenario, almost every existing measure of societal health could be at its maximum, but human well-being as a whole would not be. This is because happiness – or what the Greeks called “eudaimonia”, a flourishing life – may still be lacking.

The question for humanity quickly becomes “what do we do with ourselves?”.

From the perspective of an ancient civilisation, maximising the well-being of a small society simply required a reliable harvest, a means of safeguarding the tribe from danger and a form of medicine to prevent disease. Our present would have seemed like a pipe dream to them, achievable only with the help of a sufficiently generous deity. Yet we can now safely say that we have achieved their ancient goals to a reasonable extent – and in doing so, we have also moved the goalposts.

With this in mind, we could take the position that once we come close to achieving our current goals, we may end up moving the goalposts again. The problem is that this premise relies on a worldview in which the potential for progress is virtually infinite; otherwise we are just kicking the proverbial can down the road.

Reassessing our goals

You might think that all of this is decades or centuries away, but we are already starting to see the results of these endeavours. It’s worth keeping in mind that even without the development of AGI, automation is likely to replace a large portion of existing jobs over the next 20 years anyway.

A controversial study published in 2013 by two Oxford researchers, Carl Benedikt Frey and Michael A. Osborne, stated: “According to our estimates around 47 percent of total US employment is in the high risk category. We refer to these as jobs at risk – i.e. jobs we expect could be automated relatively soon, perhaps over the next decade or two”.

We already know that companies like Amazon employ robots in their warehouses, displacing many potential jobs for humans. Cargo trucks in the United States often travel for hours at a time on roads that are very easy to navigate (by computing standards), and many expect trucking to be at least semi-automated within the next 10 years. The mass production of cars is already almost entirely automated, and so on.

Some people will argue that the roles lost to automation will be created elsewhere as our industries evolve. I have heard many times about initiatives (whether serious or not) to retrain the “unskilled” workforce in fields like software engineering. This solution fails for many reasons, but above all because many of the people in this category either have no interest in software engineering or are simply not good enough at it to be competitive in the market, especially when we consider that kids as young as 7 are now learning programming in school.

We also have to consider that if true AGI were achieved, there would be no need for software engineers. The machines would be able to design and build software that is better and more efficient than anything we could build ourselves.

We said at the beginning that our goal is to maximise human well-being, but we have not yet achieved it.

Just like the paperclip maximiser, there are similar thought experiments which involve maximising human happiness. Some involve being hooked up to drugs which keep us in a permanent euphoric state, thereby maximising our happiness. Others involve creating a virtual world for us to live in where we can find the fulfilment we desire (think The Matrix or Ready Player One). This comes dangerously close to simulation theory, which is a rabbit hole all of its own.

Whatever your thoughts on happiness and well-being, you would be hard pressed to claim that they are the same thing. I could, after all, be happily living my life on a cocktail of drugs while also being harvested for organs – in direct contradiction to any preconceived notion of well-being. Could our preconceptions be wrong? Would happiness be enough for people? What do we actually need?

Up until this point, we have been working to solve the societal issues we can easily define and measure, as a means of improving human well-being. The problem is that the very means by which we push all of these areas of societal health to their maximum may cause eudaimonia to decline sharply, thereby working against our goals.

Is there an alternative way to solve societal issues without making humanity redundant? Is developing AGI actually detrimental to the human condition? Does that make it wrong?

We can look at this through a philosophical lens. For example, do our children need to be biologically related to us? The answer is widely accepted to be no; people adopt children all the time. Do our children have to be of the same species? Almost everyone would say yes – after all, society hasn’t yet accepted pets as children.

To take this philosophical problem a bit further, there is an old thought experiment called the “Ship of Theseus”. The Greek mythical hero Theseus sails his ship into battle and, after victory, the ship is left in the harbour. As time goes by, some of the wood begins to rot, so those planks are replaced. Decades pass, more wood goes bad and needs replacing, and before long none of the original components used to build the ship remain. Do we still consider it to be the same ship?

The relationship between these two issues might not be apparent, but imagine, for example, that your child was dying of some form of wasting disease. In a matter of months, their body would no longer function. Let’s imagine that their limbs start to go first, so you replace them with mechanical prosthetics. I think you see where this is going. By the time their entire body has been replaced by machinery, would you have already stopped considering them your child? You could argue that what makes this your child is that their mind is still intact: they still remember everything you have taught them and all the experiences you’ve shown them. What happens if their mind starts to fail and you have the option to download it into a machine which will mirror it exactly?

You may not accept them as your child, but they would know nothing other than that you are their parent. And if you took the leap of accepting a machine which had all of your child’s thoughts, memories and intellect as your actual child, at what point does that stop being the case as you start to improve their learning capacity or processing power?

As you can see, things are never as black and white as you might first assume. A case could be made that machines are the next step in human evolution by some extension of artificial selection – call it “technological selection”.

Should we be allowed

The exact origin of our species is often disputed, but by virtually all scientific accounts it’s safe to say that we’ve existed for more than 100,000 years. Most of that existence was spent with very low life expectancy, high child mortality and what we would now generally consider the extremes of poverty.

One thing we do know is that our ancestors made use of tools from very early on, which allowed them to both hunt and defend themselves more effectively. In a Darwinian world often (albeit incorrectly) described as “survival of the fittest”, humans have been directly responsible for the extinction of countless species on this planet through our efforts to preserve our own.

This leads us to the question: what claim do we actually have on this planet? If it’s just that we are the dominant species and can defeat all challengers, we may not hold that position in the future. Will we cede our throne as ruler of the planet when a new dominant species is born? Will we be given a choice? Do we have any morally sound positions to fall back on in defence of machines being in power?

What will we want

We’ve already established what we currently want and that we may need to reassess our goals in the future, but the question of what we will want is possibly not answerable right now.

Assuming that some degree of free will exists, we need to make a choice about what kind of world we would like to live in. We regularly assess this in the present, so it naturally follows that we will continue to do so in the future. The problem is that we are potentially at the start of a period of exponential growth for our civilisation, and we may not get the opportunity to point this rocket of progress in the right direction if we don’t start now.

Even now, some people would choose to be hooked up to either a euphoria machine or some form of virtual-reality simulation. In an effort to prevent one of these scenarios being imposed on humanity, would we be happy limiting the extent to which our technological offspring can flourish? And even then, would we still run out of jobs for humans to do, or ways to occupy our time?

Could we find ways to keep humans level with AGI in terms of capability – by upgrading our own hardware, for example? How many people do you think would be happy today knowing that we were planning to turn people into some form of cyborg in order to do so?

One thing that’s clear is that something about our ideologies, preconceptions and biases will have to evolve. We can’t hope to reach a future we’d be satisfied with without adjusting our existing definitions of what good looks like.
