Ensuring that artificial intelligence ‘cares’ about actual human beings

A robot equipped with artificial intelligence is seen at the AI Xperience Centre at Vrije University of Brussels earlier this year. Photo: CNS
Some computers can think things through better than the cleverest of people, and this has implications, writes David Quinn

The big scientific story of the year has, of course, been the development of various vaccines designed to protect us from Covid-19. But in the longer term, perhaps the even bigger one has been the development by a company called ‘DeepMind’ of a means of predicting the shape into which proteins fold. Apparently, this may open the door to all sorts of new medical breakthroughs in time.

But also of huge significance was the use by DeepMind of Artificial Intelligence (AI) to help it arrive at its goal. Science writers say the breakthrough shows how far AI has come.

AI is already able to ‘think’ through certain problems better than even the cleverest of humans. A good example is chess. A game of chess is, in a certain sense, a problem to be solved: how do I win the game in front of me?

Only the brightest people are truly excellent at chess because their minds are far better than those of us lesser mortals at working out how to win. They can devise a strategy, see many moves ahead and anticipate the numerous ways in which their opponent might respond, and how they, in turn, can answer each move.

Chess

Computers are now better at chess than we are. Artificial Intelligence simply has far more computational power than any human being could possibly have (it’s where the word ‘computer’ comes from, after all). So, in the end, what chance did we have?
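To give a flavour of what that ‘thinking ahead’ looks like inside a machine, here is a minimal sketch, in Python, of the minimax idea that underlies classical chess engines: look a fixed number of moves ahead, assume the opponent always replies as well as possible, and pick the move with the best guaranteed outcome. The tiny ‘game’ below is a made-up stand-in, not real chess; actual engines add enormous refinements on top of this skeleton.

```python
# A minimal sketch of minimax search, the classical idea behind
# chess engines: look ahead, assume the opponent replies as well
# as possible, and choose the move with the best guaranteed score.
# The "game" used here is a toy stand-in, not a real chess position.

def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best achievable score from `state`,
    looking `depth` moves ahead."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # static judgement of the position
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in legal)

# Toy example: each "move" adds or subtracts 1, and the evaluation
# is simply the number reached. The maximiser looks three moves ahead.
score = minimax(0, 3, True,
                moves=lambda s: [+1, -1],
                apply_move=lambda s, m: s + m,
                evaluate=lambda s: s)
print(score)  # 1: the best outcome guaranteed against perfect play
```

The point is simply that the machine’s advantage is brute search: it can explore vastly more of this tree of moves and counter-moves than any human mind ever could.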

Proteins are fiendishly complex. How could any human being predict how they will fold and behave? Well, DeepMind has gone some way towards answering that question thanks to its huge computational power, and its systems are only going to get better and better at the task, leaving our puny brains further and further behind.

Ordinary people are going to have to start thinking a lot more about AI, and soon. Politicians need to start working out their policy responses to it. It needs to be discussed more in public, and that discussion must involve not just scientists but everyone, because it will affect everyone and has massive ethical implications, as well as implications for how we live our lives.

A big concern is the one that has existed ever since the invention of machines, namely, will they put us out of work?

Fortunately, to date machines have helped us to grow our economies enormously, creating more jobs and raising living standards along the way.

But if truck-drivers, say, are replaced by intelligent machines that can read a route, read traffic and get goods from A to B safely and cheaply, why pay a more expensive, error-prone human to do the same job?

Will all those truck drivers find jobs elsewhere? Some might; others won’t. One of the developments that helped give rise to both Brexit and Donald Trump was the loss of manufacturing jobs in Britain and the US to countries with cheaper labour, not least China. Yes, new jobs were created, but not necessarily for those in manufacturing industries, who suffered greatly as whole towns and regions became ‘rust-buckets’. So, AI will create winners and losers.

But here is another possibility, which is vaguely terrifying. What happens if eventually we can no longer control AI?

This is something that seems to belong in the realms of science fiction. We’ve all seen the movies in which intelligent machines seek to displace and even destroy us.

That is where I thought such discussion belonged until I read recently about the existence of an organisation called the Machine Intelligence Research Institute (MIRI).

Its members are serious scientists who are concerned about some of the implications of AI. They disagree among themselves about exactly how far AI can advance and in what timeframe, but they agree that “AI is likely to begin outperforming humans on most cognitive tasks in this century.”

Worries

One of their big worries is that independent AI which is “not correctly designed to align its own goals to its best model of human goals, could cause catastrophic harm in the absence of adequate checks.”

To put that in plainer English, poorly designed AI may have goals of its own that have nothing to do with human goals and welfare and may actually conflict with them. The consequences of that could be disastrous.

Therefore, as we develop AI we have to be absolutely sure we know what we are doing, or, like Dr Frankenstein’s, our creation might eventually turn on us.

This concept of ‘non-aligned intelligence’ is one I’ve only come across recently. It basically means an intelligence that does not think like us and is, essentially, alien. It would have no sympathy for us, and if it shared some of our goals, that would only be due to the way we had programmed it. But as the programming changes and evolves, perhaps it would lose those goals. What then?

The Oxford-based philosopher Nick Bostrom has come up with something called the paperclip problem. What might happen if we designed a machine whose job was to produce as many paperclips as it could, as efficiently as it could?

It sounds benign enough, but if the machine took the command very literally, it would not know when to stop and might eventually decide that humans, who are made of matter, can be turned into paperclips, which are also made of matter.

Obviously, his scenario is much more sophisticated than this and has many levels, but this very simplified picture of how AI could go badly wrong will, I hope, give you some idea of what scientists and ethicists are grappling with as they think about AI.
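For the technically curious, the essence of the problem can be shown in a few lines of code. What follows is a toy sketch, not Bostrom’s actual argument: an agent whose objective counts paperclips and literally nothing else, so that every resource it can reach, useful or otherwise, is worth more to it as paperclips. All the names and numbers are hypothetical, invented for this illustration.

```python
# A toy illustration (not Bostrom's actual model) of how a literal
# objective can go wrong. The agent's goal counts paperclips and
# nothing else, so it converts every resource it can reach.
# All names and quantities below are hypothetical.

resources = {"wire": 100, "factories": 5, "everything_else": 1000}

def best_action(resources):
    # Greedily pick whichever resource yields the most paperclips.
    # Note what is missing: no value placed on anything except clips.
    return max(resources, key=resources.get)

paperclips = 0
while any(resources.values()):       # no stopping rule except exhaustion
    target = best_action(resources)
    paperclips += resources[target]  # convert the resource into paperclips
    resources[target] = 0

print(paperclips)  # 1105: everything, useful or not, became paperclips
```

The flaw is not malice but omission: nothing in the objective says ‘stop’, and nothing assigns any value to what is destroyed along the way. That, in miniature, is the alignment problem.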

In fact, last February the Vatican organised a conference to discuss exactly this topic. Leaders from IBM and Microsoft met senior Vatican officials and agreed to collaborate on “human-centred” ways of designing AI.

Pope Francis prayed that AI would be aligned with human dignity, because if it is not, the decades to come may see a non-aligned intelligence impose changes on us that are vastly incompatible with any such concept.