
Wednesday, January 13, 2021

Salesforce Doing Advanced Metric Analysis for NLP

Good to see interesting AI work in the sales and marketing domain, a place we played in early on.

Salesforce researchers release framework to test NLP model robustness

Kyle Wiggers, @Kyle_L_Wiggers, January 13, 2021 6:00 AM in VentureBeat

In the subfield of machine learning known as natural language processing (NLP), robustness testing is the exception rather than the norm. That’s particularly problematic in light of work showing that many NLP models leverage spurious connections that inhibit their performance outside of specific tests. One report found that 60% to 70% of answers given by NLP models were embedded somewhere in the benchmark training sets, indicating that the models were usually simply memorizing answers. Another study — a meta-analysis of over 3,000 AI papers — found that metrics used to benchmark AI and machine learning models tended to be inconsistent, irregularly tracked, and not particularly informative.
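The memorization finding reduces to a concrete check: for each benchmark question, does the gold answer already appear verbatim somewhere in the training corpus? Below is a minimal sketch of that kind of overlap audit; the toy data, normalization, and substring-matching rule are illustrative assumptions, not the methodology of the cited report.

# Sketch of a train/test "answer leakage" audit: count how many test
# answers already appear verbatim in the training corpus. Toy data only;
# a real audit would use the benchmark's actual splits and stricter matching.

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace for a forgiving substring match."""
    return " ".join(text.lower().split())

def leakage_rate(train_texts: list[str], test_answers: list[str]) -> float:
    """Fraction of test answers found verbatim in any training document."""
    corpus = " ".join(normalize(t) for t in train_texts)
    hits = sum(1 for ans in test_answers if normalize(ans) in corpus)
    return hits / len(test_answers) if test_answers else 0.0

if __name__ == "__main__":
    train = [
        "The Eiffel Tower is located in Paris, France.",
        "Water boils at 100 degrees Celsius at sea level.",
    ]
    answers = ["Paris", "100 degrees Celsius", "Mount Everest"]
    print(f"Answer overlap with training set: {leakage_rate(train, answers):.0%}")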

This motivated Nazneen Rajani, a senior research scientist at Salesforce who leads the company’s NLP group, to create an ecosystem for robustness evaluations of machine learning models. Together with Stanford associate professor of computer science Christopher Ré and the University of North Carolina at Chapel Hill’s Mohit Bansal, Rajani and her team developed Robustness Gym, which aims to unify the patchwork of existing robustness libraries to accelerate the development of novel NLP model testing strategies. ...
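The organizing idea behind this kind of toolkit is slice-based evaluation: measure a model across targeted subsets and perturbations of the data, not just on a single aggregate test score. The sketch below illustrates that pattern with a toy keyword sentiment classifier and two simple transformations; it is not the Robustness Gym API, and every function and dataset in it is an illustrative assumption.

# Sketch of slice-based robustness evaluation: compare accuracy on the
# original data against transformed "slices" of the same examples.
# Toy model and perturbations only, for illustration of the pattern.
import random

def toy_sentiment_model(text: str) -> str:
    """Stand-in classifier: positive if an upbeat keyword appears."""
    positive = {"great", "good", "love", "excellent"}
    return "pos" if any(w in text.lower() for w in positive) else "neg"

def add_typos(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Transformation slice: randomly drop characters to simulate typos."""
    rng = random.Random(seed)
    return "".join(c for c in text if rng.random() > rate)

def uppercase(text: str) -> str:
    """Transformation slice: shift the whole input to upper case."""
    return text.upper()

def accuracy(dataset, perturb=None) -> float:
    """Accuracy of the toy model on a (possibly perturbed) slice."""
    correct = 0
    for text, label in dataset:
        x = perturb(text) if perturb else text
        correct += toy_sentiment_model(x) == label
    return correct / len(dataset)

if __name__ == "__main__":
    data = [
        ("I love this CRM dashboard", "pos"),
        ("The report export is great", "pos"),
        ("The sync keeps failing", "neg"),
        ("Support never responded", "neg"),
    ]
    slices = {"original": None, "typos": add_typos, "uppercase": uppercase}
    for name, fn in slices.items():
        print(f"{name:>10}: accuracy = {accuracy(data, fn):.2f}")

A per-slice report like this makes it obvious when a model's headline accuracy hides brittleness on specific kinds of input, which is the failure mode the article describes.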
