
Sunday, January 02, 2022

Towards AI Explainability and Audits

AI explainability and audits will continue to increase in application.

Explainable AI is about to become mainstream: The AI audits are here - Impact of the AI recruitment bias audit in New York City

Posted by ajit jaokar on November 28, 2021 at 10:30am

A few weeks ago, I said that we will increasingly face AI audits and that I hoped such regulation would be pragmatic. (Could AI audits end up like GDPR?)

That post proved prophetic: the New York City Council has passed a new bill which requires mandatory yearly audits for bias on race or gender for users of...

- Candidates can ask for an explanation or a human review.
- 'AI' includes all technologies, from decision trees to neural networks.

The regulation is needed, and there is already discussion about adding ageism and disability to this audit. I am almost sure that the EU will also follow in this direction.
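As a hypothetical sketch of what such a bias audit might compute, the "four-fifths rule" is a common test for adverse impact in hiring: each group's selection rate is compared with the most-favoured group's, and a ratio below 0.8 is usually flagged. The group names and counts below are invented for illustration.

```python
def selection_rate(selected, applicants):
    """Fraction of applicants from a group who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups):
    """groups: dict of name -> (selected, applicants).

    Returns each group's selection rate divided by the highest
    selection rate across all groups (the four-fifths-rule ratio).
    """
    rates = {name: selection_rate(s, a) for name, (s, a) in groups.items()}
    best = max(rates.values())
    return {name: rate / best for name, rate in rates.items()}

# Invented example counts: (selected, total applicants) per group.
groups = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = adverse_impact_ratios(groups)
flagged = [name for name, r in ratios.items() if r < 0.8]
print(ratios)   # group_b: 0.30 / 0.48 = 0.625, below the 0.8 threshold
print(flagged)  # ['group_b']
```

This only checks selection rates; a real audit under the New York rules would involve far more, but the ratio above is the kind of headline number such audits report.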

Here are my takes on this for data scientists. The first implication is that pure deep learning as it stands is impacted, since it is not explainable without additional strategies and techniques.
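One such additional technique (my own illustration, not from the post) is permutation importance: a model-agnostic way to explain an opaque model by shuffling one feature at a time and measuring the drop in accuracy. The tiny stand-in "model" and synthetic data below are invented for the sketch.

```python
import random

random.seed(0)

# Synthetic data: the label depends on feature 0 only; feature 1 is noise.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(500)]
y = [1 if row[0] > 0 else 0 for row in X]

def black_box_predict(X):
    # Stand-in for any trained model whose internals we cannot inspect.
    return [1 if row[0] > 0 else 0 for row in X]

def accuracy(model, X, y):
    preds = model(X)
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def permutation_importance(model, X, y):
    """For each feature, shuffle its column and record the accuracy drop."""
    base = accuracy(model, X, y)
    drops = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        random.shuffle(col)  # break the feature's link to the label
        Xp = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
        drops.append(base - accuracy(model, Xp, y))
    return drops

drops = permutation_importance(black_box_predict, X, y)
print(drops)  # large drop for feature 0, zero for the noise feature
```

The point is that this kind of post-hoc explanation works on any model, including deep networks, which is why such techniques matter once audits demand explanations.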

The disclosure requirements will make the whole process transparent and could have a greater impact than regulating the algorithms themselves. In other words, I have always thought that it's easy to 'regulate' AI, when the AI is actually a reflection of human values and biases at a point in time.

Major companies like Amazon, which recognised the limitations of automated hiring tools, had already abandoned such tools because they were trained on data that reflected the current employee pool (automatically introducing bias).

I expect that this will become mainstream, not just for recruitment.

We will see an increase in certification, especially from cloud vendors, for people who develop AI on data about people.

On a personal note, being on the autism spectrum, I find the legislation well meaning and helpful towards people with limitations and disabilities. But I still believe that data-driven algorithms reflect biases in society, and it's easier to regulate AI than to look at our own biases.

That’s one of the reasons I think the current data-driven strategy is not the future.

In my research and teaching, I have moved a lot towards Bayesian strategies and techniques to complement deep learning (because they are more explainable)  ... 
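As a minimal illustration of why Bayesian techniques are more explainable (my own sketch, not from the post): in a conjugate Beta-Binomial model, every quantity has a direct interpretation and the update can be read off in closed form. The hiring counts below are hypothetical.

```python
def beta_binomial_update(alpha, beta, successes, failures):
    """Conjugate update: Beta(alpha, beta) prior plus binomial data
    gives a Beta(alpha + successes, beta + failures) posterior."""
    return alpha + successes, beta + failures

# Uniform Beta(1, 1) prior over a selection rate; observe 30 selections
# out of 100 hypothetical candidates.
a, b = beta_binomial_update(1, 1, 30, 70)
posterior_mean = a / (a + b)            # (1 + 30) / (2 + 100)
print(a, b, round(posterior_mean, 4))   # 31 71 0.3039
```

Unlike a deep network's weights, the posterior parameters here can be explained to an auditor in one sentence: prior belief plus observed counts, with uncertainty that shrinks as data accumulates.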
