
Thursday, October 03, 2019

Adversarial Robustness Toolbox

The talk below has already occurred; the recording and slides will be posted here: http://cognitive-science.info/community/weekly-update/ (the slides are already there). Some very fundamental material here.

ISSIP CSIG Speaker Series

Invitation to the ISSIP Cognitive Systems Institute Group Webinar
Please join us for the next ISSIP CSIG Speaker Series (see details below).
Beat Buesser, IBM, "The First Major Release of Adversarial Robustness 360 Toolbox (ART) v1.0 - A Milestone in AI Security"

Background:
Beat Buesser is a Research Staff Member at IBM Research in the AI & Machine Learning Group at the Dublin Research Laboratory. He leads the development of the Adversarial Robustness 360 Toolbox (ART), and his research focuses on the security of machine learning and artificial intelligence. Before joining IBM, he worked as a postdoctoral associate at the Massachusetts Institute of Technology and obtained his doctorate from ETH Zurich.

Task Description:  Adversarial Robustness 360 Toolbox (ART) is a Python library that supports developers and researchers in defending, certifying, and verifying machine learning models against adversarial threats, helping to make AI systems more secure and trustworthy. ART addresses growing concerns about people's trust in AI, specifically the security of AI in mission-critical applications. In this talk we present ART v1.0, which extends ART to non-neural-network models including gradient boosted decision trees, support vector machines (SVM), random forests, logistic regression, Gaussian processes, decision trees, Scikit-learn pipelines, and black-box classifiers, and extends the supported input data types beyond images to include tabular data and text.
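To give a feel for what "extending ART to non-neural-network models" means in practice, here is a minimal sketch of wrapping an ordinary scikit-learn SVM with ART and crafting adversarial examples against it with the Fast Gradient Method. This is an illustrative example, not material from the talk; the module paths and parameter names follow recent ART releases and may differ slightly in the v1.0 release discussed above.

# Minimal sketch (assumptions noted above): attack a scikit-learn SVM via ART.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary scikit-learn model.
x, y = load_iris(return_X_y=True)
model = SVC(kernel="linear", probability=True)
model.fit(x, y)

# Wrap the fitted model so ART can query its predictions and gradients.
classifier = SklearnClassifier(model=model)

# Craft adversarial examples with a small perturbation budget eps.
attack = FastGradientMethod(classifier, eps=0.2)
x_adv = attack.generate(x=x)

# Compare accuracy on clean versus adversarial inputs.
clean_acc = np.mean(np.argmax(classifier.predict(x), axis=1) == y)
adv_acc = np.mean(np.argmax(classifier.predict(x_adv), axis=1) == y)
print(f"clean accuracy: {clean_acc:.2f}, adversarial accuracy: {adv_acc:.2f}")

The same wrapper-plus-attack pattern applies to the other model families listed above; ART's defenses and robustness metrics are then applied to the wrapped classifier in the same way.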

Join LinkedIn Group https://www.linkedin.com/groups/6729452
