
In his latest book, Homo Deus, Yuval Noah Harari makes three statements on the last page:
1 – Science is converging on an all-encompassing dogma, which says that organisms are algorithms, and life is data processing.
2 – Intelligence is decoupling from consciousness.
3 – Non-conscious but highly intelligent algorithms may soon know us better than we know ourselves.
He then goes on to ask three key questions:
1 – Are organisms really just algorithms, and is life really just data processing?
2 – What’s more valuable – intelligence or consciousness?
3 – What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?
One answer to question one seems pretty simple – ‘no’ – because it misses out empathy, arguably a cornerstone of ethics. I think question two sets up a false dichotomy, because it sidesteps the obvious possibility that a combination of the two may be more valuable than, or at least as valuable as, either in isolation. And I have no idea how to answer the third question. Anyone got any ideas?
Dickie Bellringer
Not having read the book, I am not sure what he means by highly intelligent algorithms, but if the answer to question one is “no” because it excludes empathy, then I assume these algorithms cannot intelligently predict our emotional responses to other people. The algorithms may be able to tell us when we should eat, drink, sleep etc. – that is, intelligently predict our physical needs – but they cannot predict our emotional responses. So they have their uses, but also limitations, in terms of our daily lives. In terms of society and politics, they may enable us to build better communities based on rational decision making and problem solving, but they will not remove the need for empathetic responses to human predicaments. What do others think?
Hi Mark, what he means by ‘highly intelligent algorithms’ is that all beings, including us, are essentially problem-solving entities. The problem is that the machines we are creating are often better at solving problems than we are, and will become better and better. The broad sweep of the book takes in the tectonic shifts from deism through humanism to what he calls dataism. What this view of humanity leaves out, it seems to me, is our emotions – empathy and sympathy (as well as hate and anger, of course). Indeed, looking at the index, the word ‘emotion’ does not appear. This does not necessarily mean that beings with these emotions are better at running the show than non-conscious, non-emotional beings, but I don’t think sentient beings themselves can be reduced to ‘highly intelligent algorithms’. A related question is how machines will choose to develop once they are able to improve themselves.