Friday 12 June 2020

Work Underway to Assess and Rate AI Model Transparency

Consumers of AI models need more visibility and more transparency to be able to trust AI models that others are building.

By AI Trends Staff

We hold these truths to be self-evident: a machine learning model is only as good as the data it learns from. Bad data results in bad models. A bad model that identifies butterflies when it should be recognizing cats is easy to spot.

Sometimes a bad model is more difficult to spot. If the data scientists and ML engineers who trained the model selected a subset of the available data with an inherent bias, the results could be skewed. Or the model might not have been trained on enough data, or it could suffer from overfitting or underfitting.
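
Problems like overfitting and underfitting typically surface only when a model is checked against data it never saw during training. As a minimal illustration, not drawn from the Forbes piece, the Python sketch below trains a deliberately unconstrained classifier and compares its accuracy on the training data with its accuracy on a held-out validation split; a large gap suggests overfitting, while two low scores suggest underfitting.

    # Minimal sketch: compare accuracy on seen data vs. held-out data
    # to surface overfitting or underfitting.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.25, random_state=0)

    # An unconstrained decision tree will typically memorize the training set.
    model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

    train_acc = accuracy_score(y_train, model.predict(X_train))
    val_acc = accuracy_score(y_val, model.predict(X_val))

    # A large gap between the two scores is a classic overfitting signal;
    # two low scores suggest underfitting instead.
    print(f"train accuracy: {train_acc:.3f}, validation accuracy: {val_acc:.3f}")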

If there are so many ways a model can fail, how are we to trust it? asks a recent account in Forbes on AI transparency and explainability by Ronald Schmelzer.

Application development best practices call for a quality assurance (QA) and testing process supported by tools that can spot bugs or deviations from programming norms. Patched applications are run through regression tests to make sure new issues are not introduced, and testing continues as the application's functionality grows more complex.
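
Traditional regression testing is mechanical enough to capture directly in code. The pytest sketch below is purely illustrative and not from the article: a fixed bug is pinned down as a test case (here a hypothetical discount function) so the fix cannot silently regress.

    # Hypothetical pytest regression-test sketch; apply_discount and the
    # 10% rule are illustrative, not from the article.

    def apply_discount(price: float, customer_is_member: bool) -> float:
        """Members get 10% off; prices never go below zero."""
        discounted = price * 0.9 if customer_is_member else price
        return max(discounted, 0.0)

    def test_member_discount_applied():
        assert apply_discount(100.0, customer_is_member=True) == 90.0

    def test_non_member_pays_full_price():
        assert apply_discount(100.0, customer_is_member=False) == 100.0

    def test_price_never_negative():
        # Regression test pinning down a (hypothetical) earlier bug
        # where negative prices slipped through.
        assert apply_discount(-5.0, customer_is_member=True) == 0.0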

While AI is different, it’s not like principles of sound software engineering suddenly do not apply.

Still, machine learning models derive their functionality from data and from algorithms that attempt to build the most accurate model possible from that data. The resulting model is an approximation.

“We can’t just bug fix our way to the right model. We can only iterate. Iterate with better data, better algorithms, better configuration hyperparameters, more horsepower. These are the tools we have,” states author Schmelzer, managing partner and principal analyst at Cognilytica, an AI-focused research and advisory firm. He holds a BS in computer science and engineering from MIT, and an MBA from Johns Hopkins University.

Consumers of models do not have access to tools that tell them whether a model is a good one. They have a choice: use the AI or not.

“There is no transparency. As the market shifts from model builders to model consumers, this is increasingly an unacceptable answer. The market needs more visibility and more transparency to be able to trust models that others are building,” Schmelzer states.

So how do we get there?

Efforts to assess AI model transparency are underway. In May 2019, the National Institute of Standards and Technology (NIST) convened a meeting on advancing AI standards as part of the White House's AI strategy plans. The group discussed the idea of a method by which models could be assessed for transparency. Analysts from Cognilytica subsequently developed a multi-factor transparency assessment and contributed it to the Advanced Technology Academic Research Center (ATARC), a non-profit that fosters collaboration among government, academia and industry to address technology issues. ATARC runs a number of working groups; Schmelzer chairs its working group on AI Ethics and Responsible AI.

The proposed assessment asks model developers to rate their models on five transparency factors (a sketch of how such a self-assessment might be recorded follows the list):

  • How explainable is the algorithm used to build the model?
  • Can we get visibility into the data set used for training?
  • Can we get visibility into the methods of data selection?
  • Can we identify the inherent bias in the data set?
  • Can we get full visibility into model versioning?
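
Neither Cognilytica nor ATARC has published a reference implementation here; the sketch below is only a hypothetical illustration of how a developer might record such a five-factor self-assessment as structured metadata that travels with a model. The field names and the 0-to-5 scale are assumptions, not part of the proposal.

    # Hypothetical sketch of recording the five-factor self-assessment as
    # structured metadata shipped with a model; field names and the 0-5
    # scale are illustrative, not part of the ATARC/Cognilytica proposal.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class TransparencyAssessment:
        algorithm_explainability: int    # how explainable the algorithm is (0 = opaque, 5 = fully explainable)
        training_data_visibility: int    # visibility into the training data set
        data_selection_visibility: int   # visibility into how the data was selected
        bias_identification: int         # degree to which inherent bias has been identified
        model_versioning_visibility: int # visibility into model versioning
        notes: str = ""

    assessment = TransparencyAssessment(
        algorithm_explainability=4,
        training_data_visibility=3,
        data_selection_visibility=2,
        bias_identification=2,
        model_versioning_visibility=5,
        notes="Training set documented; selection criteria only partially recorded.",
    )

    # Serialize so the self-assessment can travel with the model artifact.
    print(json.dumps(asdict(assessment), indent=2))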

Even though the models would be self-assessed by their developers, it would be progress.

“A transparency assessment is sorely needed by the industry,” Schmelzer stated. “In order to trust AI, organizations need models they can trust. It’s very hard to trust any model or any third party source without having transparency into how those models operate.”

Suggestions for Three Practical Steps

Writing in Harvard Business Review in October 2019, Josh Sutton and Greg Satell suggested three practical steps leaders can take to mitigate the effects of bias. Sutton is CEO of Agorai, founded in 2018 and focused on offering reusable AI models for specific industries; Satell is a speaker and the author of books including “Cascades: How to Create a Movement that Drives Transformational Change,” released in 2019.

First, subject the AI system to rigorous human review; second, insist engineers understand the algorithms incorporated in the AI system; and third, make the data sources used to train the AI available for audit.
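
The authors do not prescribe tooling for these steps. As one hypothetical way to support the third step, the sketch below records each training data source with a content hash and timestamp so that an external reviewer can later verify exactly which data a model was trained on; the function name, file paths and log format are illustrative, not drawn from the article.

    # Hypothetical training-data audit log: each source file is recorded with
    # a content hash and timestamp so an external reviewer can verify what
    # the model was actually trained on. Paths and fields are illustrative.
    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def record_data_sources(paths, log_path="training_data_audit.json"):
        entries = []
        for p in map(Path, paths):
            digest = hashlib.sha256(p.read_bytes()).hexdigest()
            entries.append({
                "file": str(p),
                "sha256": digest,
                "bytes": p.stat().st_size,
                "recorded_at": datetime.now(timezone.utc).isoformat(),
            })
        Path(log_path).write_text(json.dumps(entries, indent=2))
        return entries

    # Example usage (assumes these CSV files exist in the working directory):
    # record_data_sources(["customers_2019.csv", "transactions_2019.csv"])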

“We wouldn’t find it acceptable for humans to be making decisions without any oversight, so there’s no reason why we should accept it when machines make decisions,” the authors state.

In an interview with the authors, Eric Haller, head of DataLabs at Experian, the credit reporting company, said early AI models were fairly simple, but that today data scientists need to be more careful when selecting models.

“In the past, we just needed to keep accurate records so that, if a mistake was made, we could go back, find the problem and fix it,” Haller stated. “Now, when so many of our models are powered by artificial intelligence, it’s not so easy. We can’t just download open-source code and run it. We need to understand, on a very deep level, every line of code that goes into our algorithms and be able to explain it to external stakeholders.”

Read the source articles in Forbes and in Harvard Business Review.




source https://abangtech.com/work-underway-to-assess-and-rate-ai-model-transparency/
