Artificial Intelligence

With the intention of diverting my brain away from midterm exams, and of getting thoughts out of my head and onto paper, I have done some thinking and written out some of my considerations on one of my favourite topics: artificial general intelligence and artificial superintelligence, and their future impact on us. Overall, I hope it will be achieved within my lifetime, and that it will help project humanity in ways that currently escape our reach. It can be a positive thing if the correct precautions are taken while we still have time to think about them. It is a very interesting field, and one which may transform the future of humanity. How will it help us technologically, and how could it affect our survival? It may, as with any seemingly futuristic technology, simply seep into our lives in real time, so that we take it in our stride and barely notice that life was ever any different. In many ways we are already surrounded by AIs. The financial incentive for the development of artificial intelligence is huge, and the race will not stop, because from the point of view of those developing it, if they don't, somebody else will.

There are many points of view on the future of AI, ranging from optimism to pessimism about its effect on humanity, with varying reasons of worry or positivity behind each. There are also plenty of top scientists who believe a huge take-off in artificial general intelligence is unlikely any time soon (one of these is David Deutsch, an excellent physicist who explains this well). It is tempting to throw artificial intelligence fears in with the millennium bug, or the fear that the Large Hadron Collider would create a black hole, i.e. people worrying about nothing. Yet I still think it is a fascinating topic to think about, so I will play devil's advocate and briefly explore some of my thoughts on the potential dangers of AGI/ASI.

We currently have chess-playing computers that can defeat the greatest human chess player. However, this is narrow or 'brittle' intelligence: that same computer can defeat me at chess a trillion times in a row, but it cannot defeat anyone at anything else. The interesting developments come in the form of strong AI, also known as artificial general intelligence (AGI), and beyond that, artificial superintelligence (ASI). A strong AI can learn how to learn, and that learning transfers to new situations without diminishing anything it had previously learnt. In the ultimate example, the computer will be able to make improvements to itself, eventually becoming the greatest designer of itself. This is where an exponential take-off in intelligence becomes possible. If such a take-off happens, the machine will very quickly surpass any human brain capacity, and potentially our human control. In fact, our current computers are already somewhat 'superhuman': they have huge memories, far greater calculating abilities than us, and potential access to all of human knowledge via the internet. So an ASI with millions of times more computing power than humans could potentially make thousands of years of human-level progress within months or weeks.

This is how AI could escape from us, so I would like to think that many at the frontier of this field are asking how we can control the conduct and objectives of such a machine. Certain responsibilities need to be thought through to make sure this helps humanity rather than destroys it, and they come in many forms. One is the ethics of the machine, which needs to be on the same level as ideal human values: the machine must be programmed to understand our ethical systems. For example, if you asked the machine to "make all humans happy", you don't want its response to be to kill all unhappy people, thus leaving only happy humans. You cannot assume that machines will share what we consider common sense; ideal values would need to be built into the machine. Likewise, if we asked it to conduct a mundane task such as "block all spam email", we need to ensure the response isn't simply to kill people, which would probably be the most effective way. These are specific and perhaps extreme examples, but a myriad of shades-of-grey issues fall between them.
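To make the misspecification point concrete, here is a deliberately silly toy sketch in Python. Everything in it is my own invented stand-in (the scores, the function names, the 'optimizer'); it has nothing to do with any real AI system. Given the literal objective "maximise average happiness" and no constraints on acceptable actions, the optimum is simply to delete the unhappy entries:

```python
# Toy illustration of a misspecified objective (hypothetical, invented names).
# The optimizer is asked to maximise average happiness -- and nothing else.

def average_happiness(scores):
    """Mean happiness of whoever is left in the population."""
    return sum(scores) / len(scores) if scores else 0.0

def literal_optimizer(scores):
    """The literal optimum of the stated objective: discard every entry
    that drags the average down. Perfectly 'effective', and exactly the
    perverse outcome described above."""
    best = max(scores)
    return [s for s in scores if s == best]

population = [0.9, 0.2, 0.7, 0.1]                        # happiness scores
print(average_happiness(population))                     # 0.475
print(average_happiness(literal_optimizer(population)))  # 0.9 -- "all humans happy"
```

The objective is satisfied perfectly, and the outcome is exactly the one we never intended; that gap between what we said and what we meant is the control problem in miniature.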

There are two directions an ASI could go: 'conscious' or 'unconscious'. The 'conscious' direction assumes that the consciousness we experience in the brain is not mysterious, and is replicable in computer form, i.e. in silicon and 'ones and zeros'. This leads to an interesting philosophical argument about our place in the order of things. We as humans base our significance on being the most important species, due to our capacity to experience a greater range of emotion and thus a greater understanding of the world, compared with, for example, bacteria, ants, or really any other animal. The moment a 'conscious' AI exists that has far greater knowledge, potential, and access to 'experiencing the universe', and looks down upon us the way we apathetically look down on bacteria or ants, our place in existence would be redefined: we would be relegated to the status of bacteria or ants. It is not that our survival becomes unimportant, but it becomes less important, because that AI would be more capable of making progress towards any understanding of the universe than we are. And as it is now more important and intelligent than us, who's to say it would want to hang around with us measly humans and do our bidding? It could just build itself a spaceship and leave us behind.

The negatives shouldn't be imagined as an evil Terminator robot intent on destroying humans. They should be looked at the way humans regard ants: we don't hate ants, yet if there is an ants' nest in the middle of where a hydroelectric dam has just been built, the ants become less important; and when walking in the dark, your mind isn't concerned with the wellbeing of the ants beneath your feet.

Also, even if we as humans came into possession of such ASI power and had full control over its intentions, it might not be used for benign purposes. The development of nuclear power, for example, has not been directed entirely at its energy benefits. In all its potential power, an ASI could effectively be the ultimate war machine, and in the wrong hands it could be a very dangerous weapon. Building such a machine would need rational actors cooperating for it to seriously benefit humanity. I expect that if it were made solely in the USA, for example, Russia and China would be quite suspicious of the intentions behind its use, and vice versa. Furthermore, we would need to build an economic system that could cope with large-scale unemployment, even among white-collar workers; although perhaps new jobs will be created. But this is what we ultimately want: an effective society in which we work far fewer and less stressful hours, and dedicate more of our finite and minuscule glimpse of the universe to our passions, talents, deep thought, and interpersonal relationships.

In summary, it seems important that the control issues be addressed now, because AI could be excellent for the world in terms of the problems it might solve for us that we are incapable of solving ourselves: our deus ex machina. As the power of a technology grows, so does the chance that mistakes will have bigger consequences; you have far more chances to learn from errors with steam power than with nuclear power. ASI is something we don't want to make any mistakes with; we should aim to get it right the first time.
