Some of our earliest work delivering expertise to the enterprise via AI failed because the system could not explain its reasoning to its users. The system could not sell itself. But does a system need to do that? Or does it just have to be right most of the time, or right enough for the business purpose? Search has trained us to accept a list of results, some irrelevant, and to choose the best among them, and we usually go away satisfied. So is explanation even necessary? And if it is not strictly necessary, is it still very useful?
In QZ: Algorithms explaining themselves.