In the earlier days of AI we always included uncertainty and risk models in parallel, even when risk appeared minimal. That practice seems much less common today, except where Bayesian methods directly incorporate uncertainty. Are we simply unwilling to accept risk in stronger contexts? The article below claims to be doing this; I will take a further look. 'Smart' should always mean understanding uncertainty and risk.
Smarter Models, Smarter Choices By University of Delaware
Researchers at the University of Delaware (UD) and the University of Massachusetts-Amherst have published details of a new approach to artificial intelligence that builds uncertainty, error, physical laws, expert knowledge, and missing data into its calculations, ultimately yielding much more trustworthy models.
The model itself identifies the data required to reduce errors, enabling a higher level of theory to be used to generate more accurate data, which further shrinks the error bounds on predictions and narrows the space to explore.
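The article does not describe the authors' algorithm, but the idea of a model identifying which data would most reduce its errors is the core of active learning. A minimal sketch, under the assumption of a bootstrap-ensemble uncertainty estimate (all data and function names here are hypothetical, for illustration only): acquire the next data point where refit models disagree most.

```python
import random
import statistics

# Illustrative sketch only, NOT the UD/UMass method: choose the next
# data point to acquire where an ensemble of simple models disagrees
# most, i.e. where predictive uncertainty is largest.

random.seed(1)

# Hypothetical observed data, clustered at small x.
observed = [(x, 2.0 * x + random.gauss(0, 0.3)) for x in [0.0, 0.5, 1.0, 1.5, 2.0]]

def fit_line(pairs):
    """Ordinary least squares for y = a*x + b."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    a = sum((x - mx) * (y - my) for x, y in pairs) / sum((x - mx) ** 2 for x, _ in pairs)
    return a, my - a * mx

def predictive_std(x_new, n_models=100):
    """Spread of predictions at x_new across models refit on bootstrap resamples."""
    preds = []
    for _ in range(n_models):
        sample = [random.choice(observed) for _ in observed]
        if len({x for x, _ in sample}) < 2:
            continue  # skip degenerate resamples with no x-variance
        a, b = fit_line(sample)
        preds.append(a * x_new + b)
    return statistics.stdev(preds)

# Acquire data where uncertainty is highest; far extrapolation wins here.
candidates = [1.0, 5.0, 10.0]
next_x = max(candidates, key=predictive_std)
print("acquire data at x =", next_x)
```

In this toy loop, the newly acquired point would be added to `observed` and the models refit, shrinking the error bounds exactly where they were widest.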
UD's Joshua Lansford said, "Uncertainty is accounted for in the design of our model. Now it is no longer a deterministic model. It is a probabilistic one."
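The deterministic-versus-probabilistic distinction Lansford draws can be made concrete with a small sketch. This is a generic illustration, not the authors' model: a bootstrap ensemble turns a single point prediction into a prediction with an uncertainty estimate, by reporting the spread across models fit to resampled data.

```python
import random
import statistics

# Illustrative sketch only: a deterministic model returns one number;
# a probabilistic one also reports how uncertain that number is.

random.seed(0)

# Hypothetical training data: y = 2x plus noise.
xs = [float(i) for i in range(10)]
ys = [2.0 * x + random.gauss(0.0, 0.5) for x in xs]

def fit_slope(pairs):
    """Least-squares slope through the origin for (x, y) pairs."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, _ in pairs)
    return num / den

def ensemble_predict(x_new, n_models=200):
    """Fit each model on a bootstrap resample; return (mean, std) of predictions."""
    data = list(zip(xs, ys))
    preds = []
    for _ in range(n_models):
        sample = [random.choice(data) for _ in data]
        preds.append(fit_slope(sample) * x_new)
    return statistics.mean(preds), statistics.stdev(preds)

mean, std = ensemble_predict(5.0)
print(f"prediction: {mean:.2f} +/- {std:.2f}")
```

The point is the interface change: downstream decisions can now weigh the `+/-` term, which is what "accounting for uncertainty in the design of the model" buys you.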