Quite late, but I am making my way through the well-known book Superintelligence by Nick Bostrom. It contains some quite fascinating statistics of the futurist kind that are worth looking at. Will the AI development being seen today, which uses methods like neural nets and Bayesian methods, be able to leverage itself into self-replication, becoming in a sense 'super' and going beyond human intelligence? What are the dangers involved? As Bostrom asks, what are the paths, dangers, and strategies to be considered?
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. See also his site, http://www.nickbostrom.com/, which has lots of new resources.
They write:
" ... Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence. ... "
Thursday, February 23, 2017