
Saturday, December 15, 2018

Nature of Human Trust in Machines

This topic came up in a recent discussion of AI. Past evidence has suggested that in certain contexts people trust AI more than they trust other humans, simplistically because machines have no ulterior human motives. But it was pointed out that human goals can also be built into machines by their designers. I like the idea of classifying trust; I had not seen that before. Note also the inclusion of sensors: how, why, and when do we trust sensors? And how does the inclusion of collaboration change the dynamics of trust?

New Models Sense Human Trust in Smart Machines 
Purdue University News

Purdue University researchers are using new "classification models" to assess the extent of humans' trust in intelligent collaborative machines. Purdue's Neera Jain and Tahira Reid created two types of "classifier-based empirical trust sensor models," which use electroencephalography (EEG) and galvanic skin response to gauge levels of trust. Forty-five research subjects wore wireless EEG headsets and a device on one hand to measure these factors. A "general trust sensor model" used the same set of psychophysiological features for all subjects, while the other model was tailored for each participant; the models had respective mean accuracies of 71.22% and 78.55%. Said Jain, "A first step toward designing intelligent machines that are capable of building and maintaining trust with humans is the design of a sensor that will enable machines to estimate human trust level in real time." ... "
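The two model types described above lend themselves to a small illustration. Below is a minimal sketch, in Python with scikit-learn, of the contrast the article draws: a "general" classifier trained on pooled data from all subjects versus a "customized" classifier trained per subject. Everything here is my own placeholder, not the Purdue group's actual method: the synthetic EEG/GSR-style features, the trust labels, and the choice of logistic regression as the classifier are all assumptions for demonstration only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy setup: 45 subjects (matching the study's count), each with samples of
# six synthetic psychophysiological features (stand-ins for EEG band powers
# plus a galvanic skin response level) and a binary trust/distrust label.
n_subjects, samples_per_subject, n_features = 45, 200, 6

def make_subject_data():
    """Generate toy (features, trust_label) data for one subject."""
    bias = rng.normal(0, 0.5, n_features)  # subject-specific baseline shift
    X = rng.normal(0, 1, (samples_per_subject, n_features)) + bias
    w = np.array([1.0, -0.8, 0.5, 0.3, -0.4, 0.6])  # shared "trust" signal
    y = (X @ w + rng.normal(0, 1, samples_per_subject) > 0).astype(int)
    return X, y

data = [make_subject_data() for _ in range(n_subjects)]

# General trust sensor model: pool all subjects, fit one classifier.
X_all = np.vstack([X for X, _ in data])
y_all = np.concatenate([y for _, y in data])
Xtr, Xte, ytr, yte = train_test_split(X_all, y_all, random_state=0)
general = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print("general model accuracy:", accuracy_score(yte, general.predict(Xte)))

# Customized models: one classifier per subject, trained on that subject only.
per_subject_acc = []
for X, y in data:
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    per_subject_acc.append(accuracy_score(yte, clf.predict(Xte)))
print("mean per-subject accuracy:", np.mean(per_subject_acc))

In this toy setup the per-subject models typically edge out the pooled model, because each classifier can absorb that subject's idiosyncratic baseline; that is at least consistent in spirit with the accuracy gap (71.22% vs. 78.55%) the article reports.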
