Over the past couple of years, the field of AI has been awash in concerns over ethics and fairness in AI. At the same time, the world has awakened to the deep-seated, structural problems of racial injustice.
The two are inextricably linked. AI is one of the most powerful technological transformations we’ve seen — part of a thread that begins with the rise of the personal computer and runs through the explosion of the internet and through the mobile revolution. It has the power to do great things but is commensurately dangerous.
One of the most important ways the industry can abate the potential harms of AI is to ensure diversity, equity, and inclusion (DEI) at every step in the process of making and deploying it. By now, the vast majority of those creating AI within the enterprise, in tech startups, and in small- to medium-sized businesses understand why DEI is important — not only for moral reasons, but for practical ones.
But actually operationalizing DEI is a different challenge, and that was the focus of VentureBeat’s recent event, “Evolve: Ensuring Diversity, Equity, and Inclusion in AI.” We sought the wisdom of a panel of industry experts: Huma Abidi, senior director of AI software products at Intel; Rashida Hodge, VP of North America go-to-market, global markets, at IBM; and Tiffany Deng, program management lead for ML fairness and responsible AI, at Google.
Changing the mindset: A better mirror
The old mantra of “move fast and break things” has expired. “I think there should be a new mantra: Move fast and do it right,” said Abidi. She pointed out that the very notion of “breaking things” is dangerous because the stakes in AI are so high. She added, “AI for all is only possible when technologists and business leaders consciously work together to create a DEI workforce.”
“As a Black woman in tech, I personally understand the harsh realities of what happens when we neglect to do the real work, and the real work is ensuring that the conversation is not just about the algorithm,” said Hodge. “Technology serves as a mirror for our society. It reveals our bias, it reveals our discrimination, [and] it reveals our racism.” She said that we have to understand that technologies are shaped by the people who make them, and that those people are not impervious to the systemic effects of working within an environment that isn’t diverse or inclusive.
Hodge also said that there needs to be a shift in focus from fixing things only by addressing the underlying algorithm to recruiting and retaining diverse talent. “More and more, technologies are about the nuance of people and processes, [and] the augmentation of people and processes, so these AI systems are a direct reflection of who we are, because they’re trained by us as individuals,” she said.
Deng said that people bring their whole selves to the table when it comes to AI, and that can serve as a guide for how to think about it as creators. Developing AI can’t be a siloed process. “Going into those communities, understanding how they’re using technology, understanding how they can be harmed, understanding what they need for it to be better, for it to be really more impactful for their lives” is key to creating AI, she said. “And it’s a perspective you’re missing if you don’t have a diverse workforce.”
Key takeaways:
- Change the old mindset and approach to development.
- Business leaders and technologists have to consciously work together to ensure a diverse workforce.
- Technology serves as a mirror for our society; we need a better mirror.
- People and their work are affected by being within diverse and non-diverse environments.
- It’s not always about the underlying algorithm; focus on recruiting and retaining diverse talent.
- Get out of the tech silo and reach out to the communities that will be affected by your AI to understand the potential harms and real needs that exist.
Building the right staff
“Your workforce should look like the people you’re trying to serve,” said Deng. She brought up a notion that’s been espoused elsewhere: a perspective is missing because that particular seat at the table is empty. That’s how you get blind spots, she said. That table should be reflective of society in general, but also “of the goals that we have for the future.”
Much has been made of the need for domain experts in AI projects. That is, if you’re building something for the education sector, you should bring in educators and rely on their expertise. If you’re trying to solve a problem in elder care, you need healthcare providers and specialists to get involved.
Although tapping domain experts is important, that’s just one part of a greater whole. “It’s not just about the domain expertise. It’s also about a very end-to-end business process transformation that includes domain experts,” said Hodge.
Abidi echoed this idea. “Addressing bias in AI is not solely a technical challenge,” she said. “The algorithms are created by people, so the biases in the real world are not just mimicked, but they can be amplified.” So, although domain experts are important for building AI systems, you need a greater swath of people from multiple areas. “You also need consumer advocates, public health professionals, industrial designers, policy makers — all of them basically tying into the diverse workforce, which is … representative of the population that solution will be serving,” she added.
Key takeaways:
- Your workforce should look like the people you’re trying to serve, lest you get blind spots.
- It’s not just about acquiring domain expertise; it’s about an end-to-end business transformation.
- A “diverse workforce” includes people from multiple areas of expertise.
Ensuring the right workflows
With the right workforce in place, you need to ensure that you have the right workflows, too. Hodge emphasized that, conceptually, the first thing you should think about is the “why.”
“It’s really critical to understand what problem you are solving with AI,” she said. That clarity around your initial approach, she said, is important.
Deng echoed Hodge by calling up one of Dr. Timnit Gebru’s big pieces of advice: asking ourselves “should we be doing this?”
“I think that’s a really important first step in thinking about and changing workflows,” said Deng. Though AI can help transform virtually any industry or company, that’s a fundamental first question. What follows from it is asking if a given project or idea makes sense for the problem at hand, and how it could cause harm.
If you ask those crucial and hard questions from the outset of a project, the answers may lead you to shut down an entire workflow that would have had a poor outcome. That might require some courage, given internal or external pressures. Ultimately, though, making the sound choice is not just the right thing to do but also the best business decision, because it avoids projects that are doomed to fail.
Hodge asserted that from a practical perspective, there’s not necessarily a singular starting point for a given project; where you should begin depends on a company’s structure, needs, business problems it needs to solve, what in-house experts are available, and so on.
Abidi advocates for defining and building clear standards and processes that are quantifiable and have measurements of quality and robustness. “That, again, to me is leading to ethical solutions that are fair, transparent, [and] explainable,” she said.
One example she gave is Datasheets for Datasets, a paper led by Gebru that espouses the need for better documentation in AI. The paper’s abstract says that “every dataset [should] be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on.”
She also suggested another documentation project Gebru coauthored, Model Cards for Model Reporting. Per the paper: “Model cards also disclose the context in which models are intended to be used, details of the performance evaluation procedures, and other relevant information.”
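The kind of documentation those papers call for can live alongside ordinary project code. Below is a minimal, hypothetical sketch of a model card as a Python dataclass; the field names paraphrase the paper’s categories, and every value shown (model name, metrics, and so on) is invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Minimal sketch of fields in the spirit of 'Model Cards for Model Reporting'."""
    model_name: str
    intended_use: str            # the context the model is meant for
    out_of_scope_uses: list      # uses the authors advise against
    evaluation_data: str         # what the reported numbers were measured on
    metrics: dict                # disaggregated results, e.g. per demographic group
    ethical_considerations: str  # known gaps, risks, and caveats

# Hypothetical card for an imaginary model.
card = ModelCard(
    model_name="toy-sentiment-v1",
    intended_use="Ranking customer-feedback tickets by urgency",
    out_of_scope_uses=["Hiring decisions", "Credit scoring"],
    evaluation_data="Held-out 2023 support tickets, English only",
    metrics={
        "accuracy_overall": 0.91,
        "accuracy_group_a": 0.94,
        "accuracy_group_b": 0.78,
    },
    ethical_considerations="Performance gap across groups; review before wider rollout",
)
```

Keeping the card as structured data rather than a free-form document makes it easy to check in with the model, render into reports, and flag in review when the disaggregated metrics diverge.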
“You need to basically build in these basic principles into your workflow,” she said. “My point is that like any other software product, you want to make sure it’s robust and all that, but for AI, you especially — besides having standards and processes — you need to add these additional things.”
There’s also the question of whether AI is overkill for the task at hand. “Not every problem needs to be solved by AI,” noted Hodge.
She also advocated for a careful, iterative approach to developing AI — an ongoing business process that has a lifecycle and requires you to keep returning to it as data changes or you need to adjust the model based on real-world results.
“With AI, change doesn’t have to happen in one swoop,” she said. “Some of the best AI projects that I’ve been involved in … MVP their way to scale.” They use incremental sprints, which is important because there’s nuance in this work, and that requires feedback, and more feedback, and more data, and so on. “Just like how we as humans process information and process nuance, as we read more information, as we go visit a different place, we have different perspectives. And we bring nuance to how we make decisions; we should look at AI applications in the exact same way,” she said.
Key takeaways:
- Don’t forget about the “why” and what problem(s) you’re trying to solve — and ask “Should we?”
- There’s no singular starting point for a project — it depends on a given company’s needs.
- Define and build clear standards and processes that are quantifiable and have measurements of quality and robustness.
- Not every problem needs to be solved by AI.
- “MVP” your way to scale — shortcuts in the work are shortcuts to failure.
- Think of AI development as an ongoing business process with a lifecycle — continue to revisit it.
General advice
Throughout the conversation, the panelists offered a great deal of general advice for companies looking to create AI projects and operationalize diversity, equity, and inclusion. Here is a summarized list:
- You don’t have to start from scratch — there are many great tools available already.
- AI is not magic! It requires training, expertise, appropriate design, and diverse data.
- Organizational readiness: Make sure your company is ready for the solutions you’re making.
- Data readiness: The “garbage in, garbage out” adage holds true. Data feeds every AI solution, and you need to keep revisiting it over time.
- Never lose sight of the value you’re hoping to bring: Is this AI project just something that’s interesting, or does it actually have an impact?
- There’s no AI without IA (information architecture), so look carefully at the structure of your data feeds, data lake, and so on.
- When you’re measuring results, don’t get too caught up in “accuracy” per se; understand what you’re solving for, examine how what you made is useful and relevant, and weigh the inherent tradeoffs on a case-by-case basis.
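That last point — that a single “accuracy” number can hide what you are actually solving for — is easy to see in a tiny, invented example: a classifier that looks passable in aggregate while failing entirely for one subgroup. The labels, predictions, and group assignments below are made up purely for illustration.

```python
# Hypothetical ground-truth labels, model predictions, and demographic groups.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 0, 1, 0, 1]
group  = ["a"] * 6 + ["b"] * 4

def accuracy(truth, preds):
    """Fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(truth, preds)) / len(truth)

overall = accuracy(y_true, y_pred)

# Disaggregate the same metric by group to surface hidden disparities.
by_group = {
    g: accuracy(
        [t for t, gi in zip(y_true, group) if gi == g],
        [p for p, gi in zip(y_pred, group) if gi == g],
    )
    for g in sorted(set(group))
}

print(overall)   # 0.6 overall...
print(by_group)  # ...but 1.0 for group "a" and 0.0 for group "b"
```

Here the aggregate score averages away a total failure for the smaller group, which is exactly why the panelists urge examining results against the problem you set out to solve rather than chasing one headline number.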
The post Evolve: Operationalizing diversity, equity, and inclusion in your AI projects appeared first on abangtech.
source https://abangtech.com/evolve-operationalizing-diversity-equity-and-inclusion-in-your-ai-projects/