An ultimate challenge, especially as systems start to morph into assistants with their own data, knowledge, and, yes, 'intelligence'. What are the risks involved in an era of increased external vulnerabilities? Where is the trust applied?
How Might We Increase System Trustworthiness?
By Peter G. Neumann
Communications of the ACM, October 2019, Vol. 62 No. 10, Pages 23-25. DOI: 10.1145/3357225
The ACM Risks Forum (risks.org) is now in its 35th year, the Communications Inside Risks series is in its 30th year, and the book they spawned, Computer-Related Risks [7], went to press 25 years ago. Unfortunately, the types of problems discussed in these sources are still recurring in one form or another today, in many different application areas, with new ones continually cropping up.
This seems to be an appropriate time to revisit some of the relevant underlying history, and to reflect on how we might reduce the risks for everyone involved, in part by significantly increasing the trustworthiness of our systems and networks, and also by better understanding the causes of the problems. In this context, 'trustworthy' means having some reasonably well thought-out assurance that something is worthy of being trusted to satisfy certain well-specified system requirements (such as human safety, security, reliability, robustness and resilience, ease of use and ease of system administration, and predictable behavior in the face of adversities, such as high-probability real-time performance). ...
(Full article requires registration)