In June 2022, Amazon re:MARS, the company’s in-person event that explores advancements and practical applications within machine learning, automation, robotics, and space (MARS), took place in Las Vegas. The event brought together thought leaders and technical experts building the future of artificial intelligence and machine learning, and included keynote talks, innovation spotlights, and a series of breakout-session talks.
Now, in our re:MARS revisited series, Amazon Science is taking a look back at some of the keynotes and breakout-session talks from the conference. We’ve asked presenters three questions about their talks, and we provide the full video of each presentation.
On June 23, Suraj Muraleedharan, a principal consultant with Amazon Web Services Global Financial Services, presented the talk, “Improve explainability of ML models to meet regulatory requirements”. His session focused on the development, operational, and process improvements that can be incorporated by organizations to improve the explainability of models while adhering to regulatory requirements.
What was the central theme of your presentation?
As machine learning models are widely adopted, there is an increased focus on understanding why a model produced a specific outcome. This is important for identifying bias, which can arise at any stage of a machine learning workflow. Such biases can impact the well-being of a customer, especially when the models are used to make critical decisions like mortgage approval. Model explainability should be a mandatory metric for reviewing the quality of a model, not an afterthought.
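To make the idea of label bias concrete, here is a minimal sketch (not from the talk) of one simple pre-training bias metric, the difference in positive proportions of labels (DPL), applied to hypothetical mortgage-approval data. The group names and numbers are invented for illustration.

```python
def positive_proportion(labels):
    """Fraction of approved (1) outcomes in a list of 0/1 labels."""
    return sum(labels) / len(labels)

def dpl(labels_a, labels_d):
    """DPL = approval rate of group A minus approval rate of group D.
    Values far from 0 suggest the training labels themselves are
    imbalanced across the two groups."""
    return positive_proportion(labels_a) - positive_proportion(labels_d)

# Toy data: 1 = mortgage approved, 0 = denied
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 6 of 8 approved
group_d = [1, 0, 0, 1, 0, 0, 0, 0]  # 2 of 8 approved
print(dpl(group_a, group_d))  # 0.5
```

A metric like this can be computed before training even begins, which is one way bias checks become part of the workflow rather than an afterthought.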
In what applications do you expect this work to have the biggest impact?
Every industry where machine learning models are being used to make business decisions can benefit from this approach. Amazon SageMaker Clarify can be used across the ML lifecycle to measure and monitor explainability and bias metrics. The focus of this presentation was on the financial industry — mortgages and credit risk ratings — but the application can be extended to natural language processing for document processing and other customer-facing products.
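As an illustration of the kind of per-feature explanation such tooling produces, here is a minimal sketch in plain Python (not the SageMaker Clarify API) of feature attributions for a linear model. For a linear model with independent features, the Shapley value of feature i is exactly coefficient × (value − dataset mean). The credit-risk features and coefficients below are hypothetical.

```python
def linear_shap(coefs, baseline_means, x):
    """Return each feature's contribution to the prediction's
    deviation from the average prediction. For linear models this
    is the exact Shapley value under feature independence."""
    return [c * (xi - m) for c, m, xi in zip(coefs, baseline_means, x)]

# Toy credit-risk model: features = [income_k, debt_ratio]
coefs = [0.8, -1.5]
means = [50.0, 0.4]       # dataset averages
applicant = [60.0, 0.6]   # one applicant's feature values
contribs = linear_shap(coefs, means, applicant)
# Income raises this applicant's score by 8.0;
# the debt ratio lowers it by roughly 0.3.
```

Attributions like these let a reviewer see which inputs drove a specific decision — the kind of per-outcome explanation regulators increasingly expect.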
What are the key points you hope audiences take away from your talk?
Explainability and bias monitoring must be integral to your ML models — measured before they are deployed to production and monitored continuously afterward. In our society, ML models are influencing outcomes in multiple industries; it is important that development and business stakeholders ensure the outcomes for their customers are unbiased.
Amazon re:MARS 2022 — Improve explainability of ML models to meet regulatory requirements