Back to how we interact with machines and the implications of different aspects of dialogue. Do we really expect a 'No' from a machine? That is different from a machine 'disobeying our commands', I would note; each has its own implications.
Why we should build AI that sometimes disobeys our commands
In our desire to make ethical artificial intelligence, we better be ready for machines that can choose to say no, says Jamais Cascio
The future of human-AI interactions is set to get fraught. With the push to incorporate ethics into artificial intelligence systems, one basic idea must be recognised: we need to make machines that can say “no” to us.
Not just in the sense of not responding to an unrecognised command, but also as the ability to recognise, in context, that an otherwise proper and usable directive from a human must be refused. That won’t be easy to achieve and may be hard for some to swallow. ..."
Monday, November 27, 2017