
Reid Blackman: The ethics of AI


When Diya Wynn, a senior practice manager in Amazon’s Emerging Technologies and Intelligent Platforms group, created Amazon Web Services’ (AWS) responsible AI team in late 2020, she began looking for thought leaders with whom she could collaborate.

AI and machine learning technologies were game changers and had demonstrable upside, but there were potential downsides as well. A raft of potential issues for AWS’s customers — from legal concerns to ethical dilemmas — accompanied the deployment of these potent technologies.

Senior business leaders were slowly realizing that their organizations didn’t have the people or institutional knowledge and practices to address those risks. AWS customers were looking for guidance on how to use AI-based tools responsibly.

As Amazon made a strong commitment to responsible AI, Wynn found her first partner: Reid Blackman, a former philosophy professor turned entrepreneur who had leaped into the fray in 2018 by founding Virtue, a consulting firm that helps companies institute responsible AI and ML practices.

Blackman’s new book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent, and Respectful AI (Harvard Business Review Press), is a how-to guide for navigating some daunting and potentially perilous waters.

Potent tech, unintended consequences

For Blackman, AI in business is really about intelligent systems designed to handle specific, narrow tasks far more efficiently than humans can. Machine learning, fueled by a seemingly bottomless trove of data, has enabled automated decision-making that produces faster, more accurate outcomes than ever before possible.


Unfortunately, such systems have also proven to be capable of flawed or unintended results that can do significant harm to a company’s reputation and bottom line. The more AI-based products and services have become integral to the global economy, the clearer it has become that oversight is essential.

Having been introduced by a mutual colleague participating in an AI working group, Wynn and Blackman quickly realized they had similar goals, and a partnership emerged. While many executives consider “responsible AI” a bit esoteric, Blackman has embraced the opposite perspective.

“To him, AI ethics are not fuzzy,” Wynn said. “He has a way of getting around that conversation and helping people land on some concrete ideas about what is good and right.”

Together, they are delivering workshops for Amazon customers and providing the book as a primer on how to think about and implement effective internal processes and programs.

Responding to ‘alarm bells’

Blackman’s philosophy background proved to be a useful foundation for considering the implications of AI. Ethically responsible behavior, after all, is at the heart of most conversations about human nature. But after a decade of teaching, he had grown weary of the academic environment.

Blackman had an entrepreneurial bent: In grad school, he had started a successful fireworks wholesaling business. He noticed that the adoption of AI in business, though still in its infancy, had exploded. The decision to leave academia, where he taught philosophy at Colgate University and the University of North Carolina, was based on that entrepreneurial itch and a realization that a new market was materializing.

“It was 2018, and I became aware of engineers ringing alarm bells around the coming impact of AI on society,” Blackman said, citing the fallout from the Cambridge Analytica scandal.


By 2020 and 2021, the conversation around AI ethics and responsibility had reached a fever pitch. Blackman wrote articles on the subject for the likes of the Harvard Business Review and TechCrunch. He came at the problem with an unusual take, distinguishing between two groups: AI for good, and AI for not bad.

“Those in the ‘AI for good’ group ask themselves the question: ‘How can we create a positive social impact with the powerful tool that is AI?’” Blackman explained. “That is usually the province of corporate social responsibility. There’s rarely a business model behind their goals.

“AI for not bad is about risk mitigation. These people have a goal, which may or may not be ethical in character, like making loans to people, interviewing people for jobs, diagnosing people with various diseases and recommending treatments, and they ask themselves: ‘How can we use AI to help us with those things in a way that doesn’t ethically screw things up?’”

That question is of paramount concern to business leaders: as these products and services tread close to ethical boundaries, their implementation has serious implications for business reputations.

The path to AI for good and ‘not bad’

Companies have long struggled to find ways to create and maintain an ethically sound organization, Blackman said, and AI adds a new layer of complexity. Making sure the tools they use are accurate, reliable, and based on sound science is a challenging task.

Compounding the problem, he pointed out, is that senior leaders, specifically CEOs and COOs, believe this is a technical problem to be solved by engineers and data scientists. That, Blackman insisted, is wrong.

“Ultimately, this resides with senior leadership,” Blackman said. “Junior engineers and data scientists want to do the right thing, but the truth is that you will only get the systematic design, development, procurement and deployment of ethically responsible tools on the condition that you have a top-down strategy for doing it right. What’s more, you can’t math your way out of these problems. Data scientists need support from relevant experts who can help make the qualitative judgments that are a necessary feature of any robust ethical risk assessment.”

Blackman insists that ethics is not “squishy,” and that ethical risks are not difficult to understand and mitigate if leaders are willing to learn enough about how AI and ML intersect with ethics.

The reason most organizations are slow in responding to this new challenge, he said, is that they may either not know they have a problem, or “They might know they have a problem, but nobody owns the problem. If nobody owns the problem, there will be no budget allotted to solve it. Corporate codes of conduct are usually too general and vague to effectively address the issue.

“Senior leaders are intellectually intimidated by the topic,” Blackman said. “They say, ‘This is something for the data scientists to figure out. Not me.’”

In his book, Blackman lays out what he considers the crucial distinction organizations must grasp to get AI ethics right: structure versus content. The content side of the equation focuses on the ethical issues to avoid; the structure side is aimed at how to mitigate those risks.


Blackman believes most leaders get a superficial view of the content side: Bias is bad, fairness is good, black box algorithms are scary. What, they then ask, should our structure look like? “They answer that too quickly,” Blackman said, “and they run into problems. If you go deeper on the content side, then the structure side will reveal itself.”

Going deeper, he suggested, means exploring what bias looks like, understanding the concept of discriminatory impacts, and developing a strong AI ethics statement. With a comprehensive understanding of the content side, businesses can create the appropriate procedures, processes, policies, and infrastructure to identify and mitigate risks.
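
To make “discriminatory impacts” concrete: one common first-pass check is the disparate-impact ratio behind the “four-fifths rule,” which compares a model’s favorable-outcome rate across demographic groups. Below is a minimal, hypothetical sketch in Python; the data, group labels, and function are invented for illustration and are not from Blackman’s book or any AWS tool.

```python
# Hypothetical illustration: the disparate-impact ratio ("four-fifths rule"),
# a common first-pass screen for discriminatory impact in model decisions.
# All names and data below are made up for the example.

def disparate_impact_ratio(decisions, groups, favorable="approved",
                           protected="group_a", reference="group_b"):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    A ratio below ~0.8 is a conventional red flag under the four-fifths
    rule -- a prompt for deeper review, not a verdict on its own.
    """
    def rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(d == favorable for d in outcomes) / len(outcomes)

    return rate(protected) / rate(reference)

# Toy loan decisions from a hypothetical model.
decisions = ["approved", "denied", "approved", "denied", "denied",
             "approved", "approved", "approved", "denied", "approved"]
groups    = ["group_a", "group_a", "group_a", "group_a", "group_a",
             "group_b", "group_b", "group_b", "group_b", "group_b"]

ratio = disparate_impact_ratio(decisions, groups)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50 -> flag for review
```

The sketch also illustrates Blackman’s point that you “can’t math your way out” of these problems: which groups to compare, which metric to use, and what threshold counts as acceptable are qualitative judgments that the arithmetic cannot make for you.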

Developing a partner ecosystem

For Wynn, building a “partner ecosystem” is a key method for scaling AWS’s work. Customers have started asking more and more questions about AI and ML issues.

“Our consistent way to respond comes through our responsible AI framework for instantiating principles in an organization, the strategic guidance we offer in engagements, and leveraging services and tools that complement responsible AI principles,” Wynn explained.

“We’ve opened Pandora’s Box, and there is no closing it,” Wynn said. “How do we wield this tremendous power we have with AI and ensure it isn’t harmful?”

Working with AWS makes for a potent partnership, Blackman added. “There is great value in bringing our collective experience in qualitative assessment, ethical technology, and software engineering together to serve customers,” Wynn agreed.


