What are the implications for improving AI? In an internet of things and people, with things getting more intelligent and able to take the initiative more often, what will the rules of interaction be? I examined this once before, in the 90s, and it is worth looking at again. I believe the radical changes suggested are still many years away, but it is useful to think now about how such systems should be architected to maximize our value and safety. How long can we depend on regulatory and legal protection?
Patrick Tucker examines the issue, pointing to: " ... computer scientist and entrepreneur Steven Omohundro says that “anti-social” artificial intelligence in the future is not only possible, but probable, unless we start designing AI systems very differently today. Omohundro’s most recent paper, published in the Journal of Experimental & Theoretical Artificial Intelligence, lays out the case. ... ". A simplistic initial look, but a useful starting point to consider.
The technical paper pointed to is: Autonomous technology and the greater human good, by Steve Omohundro. Abstract:
" ... Military and economic pressures are driving the rapid development of autonomous systems. We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives. We describe the use of formal methods to create provably safe but limited autonomous systems. We then discuss harmful systems and how to stop them. We conclude with a description of the ‘Safe-AI Scaffolding Strategy’ for creating powerful safe systems with a high confidence of safety at each stage of development. ... "
Saturday, April 19, 2014