Superintelligence: Paths, Dangers, Strategies – Book

The past few months have been somewhat hectic for me. I used to write and post on my blog more often. Lately I have set aside only weekend mornings to learn new things, refresh my knowledge with courses, experiment, and write posts.

A few weeks ago I finished reading Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. If I am not mistaken, the book made it to the NYT bestseller list in 2014. A teammate at work mentioned the book, so I decided to get a copy and read it.

When you read a book you form your own opinion. In some cases it aligns with the thoughts of others; in some it does not. My opinion follows.

I am glad that I read the book. I would recommend it even though it has been a few years since it was published. Things in what we call Artificial Intelligence (AI for short) have changed to some extent in those eight years.

As the title suggests, the book is about superintelligence and, of course, AI.

Let’s take a look at what Wikipedia has to say about AI.

Artificial intelligence (AI) is intelligence demonstrated by computers, as opposed to the natural intelligence displayed by animals and humans. The term “artificial intelligence” had previously been used to describe machines that mimic and display “human” cognitive skills that are associated with the human mind, such as “learning” and “problem-solving”. This definition has since been rejected by major AI researchers who now describe AI in terms of rationality and acting rationally, which does not limit how intelligence can be articulated.

Once again let’s take a look at what Wikipedia has to say about superintelligence.

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. University of Oxford philosopher Nick Bostrom (quoted in this article by Wikipedia) defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

I just took a few sentences from the two Wikipedia articles. If you are interested in the subject, I would recommend reading both articles and perhaps following some of the links, in order to get a better understanding of what people think about both topics.

The author uses many English words whose meanings I had to look up in dictionaries several times while reading the book. I did learn several new words and refreshed my understanding of others.

The book covers the main paths by which we could reach superintelligence. At some point an analogy is made, which I interpret as follows: a chimp is at level 1, a human would be at level 1,000 = 1 × 1,000, and a superintelligence could be at level 1,000,000 = 1 × 1,000 × 1,000.
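The multiplicative scale in that analogy can be made concrete with a small sketch. To be clear, these level numbers are just the book's illustrative analogy as I interpret it, not a real metric of intelligence:

```python
# Toy "intelligence levels" from the chimp analogy.
# The numbers are purely illustrative, not measured quantities.
CHIMP = 1
HUMAN = CHIMP * 1000              # a human as 1,000x a chimp
SUPERINTELLIGENCE = HUMAN * 1000  # 1,000x a human, 1,000,000x a chimp

print(HUMAN)              # 1000
print(SUPERINTELLIGENCE)  # 1000000
```

The point of the multiplication is that the gap between us and a superintelligence would be of the same order as the gap between a chimp and us.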

Superintelligence could be achieved by enhancing what we call AI today to the point that it matches the definitions above. We could also get there by emulating how human brains work, or by enhancing our own brains with special drugs that do not yet exist.

Besides the paths on how to achieve superintelligence, there is the subject of control and who reaps the benefits provided by superintelligence.

Control means keeping the good of humanity at the forefront of how superintelligence is developed and managed once it becomes a reality.

Today we start going to school around the age of five. We then continue until we graduate from high school (or the equivalent). Then we go to college for a few years and end up with a bachelor's, a master's, or a PhD degree. The level achieved depends on many factors.

One way or the other, at some point we enter the workforce and start applying what we have learned. For some people learning ends after they start working. For others learning continues through life.

So who would be able to develop superintelligence? Based on what I know from reading articles, papers, and books, we appear to be quite far from understanding how the human brain works. We are not sure how we started and came to be human; there are many theories and beliefs. No matter what you believe, we must agree that the current human is quite incredible for what we can do. Earlier this year we placed the Webb telescope about 1,000,000 miles away in space, in an orbit around Lagrange point 2 that keeps it under the conditions it needs to peer deep into the time when our universe is believed to have started (let's call it the Big Bang).

From a technological standpoint, many of our achievements have produced positive results. Others have brought pain and suffering to some. Let's not get into details.

Developing superintelligence would require huge investments and a lot of time. It seems that some wealthy countries would be interested in giving it a try. That could lead to a superintelligence race (not to call it an arms race). Depending on who gets there first, the world could change for the better or the worse.

Today, in September 2022, the world is coming out of the COVID-19 pandemic, which claimed millions of lives worldwide, yet there is still no consensus on how it came about. The disparity between the wealthy and the poor is growing every day. One country decided to take the land of another, displacing millions from their homes and causing tens of thousands of deaths. Others claim they own other countries and want to take them by force. I could go on for pages describing how we humans behave.

If you think the problems exist only at a political level, take a look at families. There are more problems with relationships and health than ever before.

Based on the current state of affairs, do you think it is a good idea to develop superintelligence? And if, independent of our beliefs, it is created, who will benefit from it? Would it pose an existential problem for humans? These and many other issues are covered in the book. As I said, it is a good read, and now is a good time to start (or continue) thinking about superintelligence.

Before I finish this post, I want to get back to the chimp analogy. Picture yourself in front of a chimp. You both need to get some food. Given the different physical attributes, the chimp might succeed a few times early in the struggle. But in a very short time, you would get or produce all the food you and your peers need, leaving the chimp to starve. Not pleasant, but think about what has happened in our history and you will see we have done this many times. The sad part is that we continue to do it today.

Now replace the chimp with yourself, and yourself with a superintelligence. Who would starve? Think about how humans over the last few thousand years have exponentially increased our intelligence. We have done it while sleeping an average of eight hours a day, feeding, socializing, and engaging in many other activities. A superintelligence would appear in a lab and within days would surpass the 1,000x gap we mentioned between humans and its initial version. Based on our history, it may run out of control, and I do not believe the failure would be as benign as a machine making paperclips out of control (this is an example in the book).
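To get a feel for how quickly a self-improving system could cross that 1,000x gap, here is a toy calculation. The assumption that capability doubles once per day is entirely hypothetical, chosen only to illustrate how little time exponential growth needs:

```python
import math

# Toy model: a system whose capability doubles each day.
# The one-doubling-per-day rate is a hypothetical assumption for illustration.
target_ratio = 1000  # the 1,000x human-to-superintelligence gap from the analogy

# How many doublings are needed to multiply capability by 1,000?
doublings_needed = math.ceil(math.log2(target_ratio))  # log2(1000) ≈ 9.97

print(doublings_needed)  # 10 doublings, i.e. about 10 days at one per day
```

Even under much slower (but still exponential) growth, the crossover happens in weeks or months rather than the millennia it took us.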

On a separate note, I am about 30% into a different book that deals with AI problems we have been encountering for a few decades. Thinking about superintelligence and comparing it against current AI makes me hope that we might be better off postponing it for a very long time.
