
3 questions about the Amazon–National Science Foundation collaboration on fairness in AI


A year ago, Amazon and the National Science Foundation announced a $20 million collaboration to fund academic research on fairness in AI over a three-year period. A month ago, NSF announced the first ten recipients of the program’s grants. Erwin Gianchandani, deputy assistant director for Computer and Information Science and Engineering at NSF, took some time to answer three questions about the program for amazon.science.

1. What is the challenge of fairness in AI?

Four things come to mind.

The first is trying to get to an understanding of what fairness really means. If you think about a mathematical definition of fairness, you could look at two different population types, and you could look at some statistical metric, such as success rate, when you run an algorithm or a classifier on each population. One notion of fairness is that you are trying to ensure that the metric is consistent across both of those population types.
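To make that notion concrete, here is a minimal sketch of the parity check described above: compute the same metric (positive-prediction rate) for each population and compare. The predictions, group labels, and function name are invented for illustration and are not drawn from the funded projects.

```python
import numpy as np

def selection_rate(predictions, groups, group):
    """Positive-prediction ("success") rate for the members of one group."""
    return float(predictions[groups == group].mean())

# Hypothetical binary classifier outputs, plus a population label ("A" or "B")
# for each individual; both arrays are made up for this sketch.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = selection_rate(preds, group, "A")   # 0.6
rate_b = selection_rate(preds, group, "B")   # 0.6
print(f"gap = {abs(rate_a - rate_b):.2f}")   # 0.00: the metric is consistent across groups here
```

Under this particular definition, a gap near zero means the classifier treats the two populations alike on that one metric; other definitions of fairness would compare different quantities.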

There are other definitions of fairness, though. Philosophers have debated the different notions of fairness for ages. So at the heart of what we’re trying to do with this effort is to better understand what fairness means in the abstract sense so that we can understand how we can design our systems to build fairness into them.


A second challenge that we’ve identified is who is responsible if you have an AI system that makes unfair decisions. This is where it’s important to think about accountability and how we empower the user of an AI system to have confidence in their ability to take what’s coming out of the AI system and make an informed decision.

You’re trying to provide the user with as much information as possible to minimize the likelihood of unfairness in the outcome — or at least provide an understanding of the types and levels of unfairness that may be inherent to the prediction from the AI system. In other words, this is about presenting to the end user all of the data that the system used to derive a recommendation, so that the user can place an appropriate degree of confidence in that recommendation.

A third challenge area that we like to think about is taking this issue of fairness and turning it on its head: how can I harness AI to improve fairness and equity in society? You can think about, for example, equitable distribution of scarce resources like food, of access to health care, of interventions that might be able to prevent homelessness, and so on. How do we take the vast array of data that are out there and apply AI systems to those data to extract meaningful insights that can yield improvements in equity across society?

A fourth and final challenge is, how do we construct AI systems so that their benefits are available to everyone? For example, facial-recognition systems should work equally well for people of all races; currently, they do not. Similarly, speech and natural-language systems should work for users from different socioeconomic, ethnic, age, cultural, and geographic groups; that poses significant challenges for current techniques.

2. How do the funded projects address these challenges?

Let me walk through a few examples. Before I do, I want to emphasize that these are just that — examples — and I don’t mean to imply any kind of preference, either toward these funded projects or toward the topics that they are pursuing.

The first challenge is to develop a definition of fairness. One project that we’ve funded in this space is looking at developing a robust theory and methodology for trying to assess and ensure fairness in settings where fairness metrics are currently hard to pin down. You could either specify a particular metric for fairness for a task or domain, or you could look at a particular set of input-output combinations and try to associate fairness characteristics to those.

Take a particular use case, like whether someone has the finances to open a bank account. There might be a set of inputs into the algorithm — one’s monthly or weekly income, current level of debt, and so forth. For every input characteristic or output characteristic, can we define a range within which we feel confident in the accuracy, so that we can essentially try to bound the degree of fairness or unfairness that might exist in that algorithm?
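Purely as an illustration of what bounding such a quantity might look like, the sketch below bootstraps a confidence interval around the gap in approval rates between two hypothetical applicant groups. The data, group sizes, and 95% level are assumptions made for this example, not details of the funded project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical approval decisions (1 = approved) for two applicant groups.
approved_a = rng.binomial(1, 0.70, size=400)
approved_b = rng.binomial(1, 0.62, size=400)

def bootstrap_gap_interval(a, b, n_boot=2000):
    """Resample each group and return a 95% interval for the approval-rate gap."""
    gaps = []
    for _ in range(n_boot):
        rate_a = rng.choice(a, size=a.size, replace=True).mean()
        rate_b = rng.choice(b, size=b.size, replace=True).mean()
        gaps.append(rate_a - rate_b)
    return np.percentile(gaps, [2.5, 97.5])

low, high = bootstrap_gap_interval(approved_a, approved_b)
print(f"95% interval for the approval-rate gap: [{low:.3f}, {high:.3f}]")
# If the whole interval sits away from zero, the observed disparity is unlikely
# to be sampling noise alone; the interval's width bounds how large it could be.
```

This is only one crude way to put a range around a disparity; the funded work aims at a more general theory for settings where the right metric itself is hard to pin down.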

The team of researchers in this case is looking at a particular use case — recidivism in the criminal justice system.

The second challenge is to understand how an AI system produces a given result. We’ve funded a project that is seeking to develop techniques to facilitate better understanding of the entire life cycle of deep neural networks — the preparation of the data, the identification of features, the objectives when it comes to optimization of the system — so that the steps that led to a given output, along with that output, are presented to the user to inform their decision making.

So it’s about really being able to engineer into the outputs a sense of what the system is doing each step of the way so that the human user can see the various decision points. In other words, this is about making it easier to decipher the inner workings of the AI system and, in the process, allowing the user to appreciate any biases.
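The funded project targets the full life cycle of deep neural networks, but the underlying idea of showing the user what drove a given output can be illustrated with something far simpler. The sketch below is not the project's method: it uses a linear model with made-up feature names and weights, where each input's contribution is just its weight times its value, and prints that breakdown next to the score.

```python
import numpy as np

feature_names = ["income", "current_debt", "years_at_address"]  # assumed inputs
coefficients = np.array([0.8, -1.2, 0.3])   # learned weights (invented here)
intercept = -0.5

applicant = np.array([1.2, 0.9, 0.4])       # one (standardized) input vector

contributions = coefficients * applicant    # per-feature contribution to the score
score = intercept + contributions.sum()

print(f"score = {score:.2f}")
for name, c in zip(feature_names, contributions):
    print(f"  {name:>18}: {c:+.2f}")
# Presenting the per-feature breakdown alongside the score lets the user see
# which inputs drove the recommendation and question any that look problematic.
```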

The third and fourth challenges are somewhat related — harnessing AI to improve equity in society and designing AI systems such that their benefits are equitably available to everyone. One of the projects we’ve funded in this space is looking at racial disparities following cardiac surgery.

We’ve known for quite some time, for example, that certain ethnic groups have higher rates of heart disease than others and also suffer higher rates of postoperative issues — issues that occur after surgical interventions for heart disease. But what we don’t have a sense of is how much of that disparity is due to biological factors, how much is due to socioeconomic factors, how much is due to differences in care depending on where people go for treatment, and so on.

We’ve funded a project that is trying to bring AI tools to a rich electronic-health-record data set to understand, conceptually and practically, the source points for the disparities that we see.

Again, these are just a few examples illustrating the broad research areas, and I expect that future awards through this collaboration may fall outside these specific topics.

3. What are the advantages of a public-private partnership in addressing these challenges?

We see a significant value proposition in bringing the public and private sectors together.

First, it’s valuable for our academic community to understand the kinds of challenges that industry is seeing. We often call such research “use-inspired”: we have an ability to look at concrete problems and use those to motivate the research questions themselves.

Beyond that, we all know that today’s AI revolution is grounded in large quantities of data that are readily available, along with compute resources to leverage those data sets. In general, access to both of these — for example, access to cloud computing resources — can be really valuable to our academic researchers.

Third, academic researchers benefit from companies’ experience with accelerating the transition of research results out of the laboratory environment and into practice.

Finally, another dimension that’s really important to us is training the next generation of researchers and practitioners. I think we all agree that we’re going to see a real need for competencies in data science, machine learning, and AI across all sectors of our economy. Providing our students who are studying fairness in AI with exposure to industry — to the problems that industry is facing — is a means to nurture the talent that our research ecosystem is going to need going forward. It would be great if some of the students funded on these joint projects benefit from this exposure when they graduate and go on to start their careers.

See a complete list of the projects funded through the new NSF-Amazon collaboration.


