Pause Giant AI Experiments – An Open Letter

This morning I watched a 30-minute presentation titled “Why the 6-month AI Pause is a Bad Idea.” If you are interested, watch the YouTube video to get an idea of the response by Andrew Ng and Yann LeCun. I have read a few articles on the contributions both of them have made to AI. As a matter of fact, a few years back I took one or more courses taught by Andrew Ng.

This response is based on a petition titled “Pause Giant AI Experiments: An Open Letter,” signed by thousands of people. Among the petitioners you will find Elon Musk, Steve Wozniak, and Yoshua Bengio, to mention just a few. As I write this post, the count of signatures has grown from 17,821 to 17,931 in less than an hour.

The Alignment Problem – Book

In this post I will give a short review of the book “The Alignment Problem” by Brian Christian.

Overall I liked the contents of the book and its organization. I pay a lot of attention to how material is presented. One effective technique is to repeat the important messages, giving the reader a second opportunity to think about them. In this book the author included a multi-page conclusion that touches on the subject of each of the nine chapters.

Superintelligence: Paths, Dangers, Strategies – Book

The past few months have been somewhat hectic for me. I used to write and post on my blog more often. Lately I have set aside only weekend mornings to learn new things, refresh my knowledge with courses, experiment, and write posts.

A few weeks ago I finished reading Superintelligence Paths, Dangers and Strategies by Nick Bostrom. If I am not mistaken the book made it to the NYT bestseller list during 2014. A teammate at work mentioned the book so I decided to get a copy and read it. Continue reading “Superintelligence Paths, Dangers and Strategies – Book”