Articulation of Decision Responsibility
By Robin Hill in ACM
Wednesday, October 10, 2018
Remember the days when record-keeping trouble, such as an enormous and clearly erroneous bill for property taxes, was attributed to "computer error"? Our technological society fumbles the assignment of responsibility for program output. The fumble is easy to see in exaggerations like this one, from a tech news digest: "Google's Artificial Intelligence (AI) has learned how to navigate like a human being." Oh, my. See the Nature article by the Google researchers [Google] for the accurate, cautious description and assessment. The quote above comes from an article in Fast Company, which states that "AI has spontaneously learned how to navigate to different places..." [Fast Company] Oh, dear.
But this is not the root of the problem. In the mass media, even on National Public Radio, I hear leads for stories about "machines that make biased decisions." Exaggeration has been overtaken by simple inaccuracy. We professionals in Tech often let this pass, apparently on the belief that the public really understands that machines and algorithms have no such capacity as is normally connoted by the term "decision"; we think that the speakers are uttering our own trade shorthand. When we say that "the COMPAS system decides that offender B is more likely to commit another crime than is offender D" [ProPublica; paraphrase mine], it's short for "the factors selected, quantified, and prioritized in advance by the staff of the software company Northpointe assign a higher numeric risk to offender B than to offender D." When the Motley Fool website says "computers have been responsible for a handful of 'flash crashes' in the stock market since 2010," it means that "reliance on programs that instantaneously implement someone's pre-determined thresholds for stock sale and purchase has been responsible... etc." [Motley Fool]
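
To see how little is left of "decision" once the rephrasing is done, consider a minimal sketch in Python. The factor names, weights, and price threshold below are hypothetical stand-ins of my own, not Northpointe's or any broker's actual values; the point is only that in both cases the program evaluates arithmetic and comparisons that people fixed in advance.

    # Hypothetical factor names, weights, and threshold, chosen here
    # purely for illustration; not any vendor's actual values.
    FACTOR_WEIGHTS = {
        "prior_offenses": 3.0,        # selected and prioritized by staff
        "age_at_first_arrest": -0.5,  # quantified in advance
        "employed": -1.0,
    }

    def risk_score(offender):
        """A weighted sum of pre-selected factors; no judgment involved."""
        return sum(w * offender.get(f, 0.0) for f, w in FACTOR_WEIGHTS.items())

    offender_b = {"prior_offenses": 4, "age_at_first_arrest": 17, "employed": 0}
    offender_d = {"prior_offenses": 1, "age_at_first_arrest": 25, "employed": 1}

    # "The system decides that B is riskier than D" abbreviates this comparison:
    print(risk_score(offender_b) > risk_score(offender_d))  # True

    # Likewise for the trading programs: "the computer sold" abbreviates the
    # evaluation of a threshold someone typed in beforehand.
    SELL_BELOW = 42.00  # someone's pre-determined price threshold

    def should_sell(price):
        return price < SELL_BELOW

    print(should_sell(41.50))  # True: the person who set 42.00 "decided"

Everything with normative content in that sketch, the factors, the weights, the threshold, was entered by a person; the program contributes only evaluation.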