Monday, 1 February 2021

Why companies are thinking twice about using artificial intelligence – Yahoo Finance


Alex Spinelli, chief technologist for business software maker LivePerson, says the recent U.S. Capitol riot shows the potential dangers of a technology not usually associated with pro-Trump mobs: artificial intelligence.

The same machine-learning tech that helps companies target people with online ads on Facebook and Twitter also helps bad actors distribute propaganda and misinformation.

In 2016, for instance, people shared fake news articles on Facebook, whose A.I. systems then funneled them to users. More recently, Facebook’s A.I. technology recommended that users join groups focused on the QAnon conspiracy theory, which Facebook eventually banned from its platform.

“The world they live in day in and day out is filled with disinformation and lies,” says Spinelli about the pro-Trump rioters.

A.I.’s role in disinformation, along with problems in other areas including privacy and facial recognition, is causing companies to think twice about using the technology. In some cases, businesses are so concerned about A.I. ethics that they are killing projects or never starting them at all.

Spinelli says that concerns about A.I. have led him to cancel some projects at LivePerson and at previous employers, though he declined to identify the projects. He previously worked at Amazon, advertising giant McCann Worldgroup, and Thomson Reuters.

The projects, Spinelli says, involved machine learning that analyzed customer data to predict user behavior. Privacy advocates often raise concerns about such projects, which rely on huge amounts of personal information.

“Philosophically, I’m a big believer in the use of your data being approved by you,” Spinelli says.
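To make that concrete, here is a minimal sketch, in Python with scikit-learn, of the kind of project Spinelli describes: a model trained on customer data to predict a user behavior, in this case whether a customer will make a purchase. The features, data, and labels are hypothetical stand-ins invented for illustration, not LivePerson’s actual system.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical per-customer features: visits last month, minutes on site,
# support tickets filed. A real project would pull these from behavioral
# logs, which is exactly the personal data privacy advocates worry about.
X = np.column_stack([
    rng.poisson(5, n),       # visits_last_month
    rng.exponential(30, n),  # minutes_on_site
    rng.poisson(1, n),       # support_tickets
])

# Synthetic "will purchase" label, loosely tied to engagement.
logits = 0.4 * X[:, 0] + 0.02 * X[:, 1] - 0.5 * X[:, 2] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

A model like this works only because the company has logged what its users do, which is why the kind of consent Spinelli describes matters.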

Ethical problems in corporate A.I.

Over the past few years, artificial intelligence has been championed by companies for its ability to predict sales, interpret legal documents, and power more realistic customer chatbots. But it’s also provided a steady drip of unflattering headlines.

Last year, IBM, Microsoft, and Amazon barred police use of their facial recognition software because it misidentifies women and people of color more frequently than it does white men. Microsoft and Amazon both want to continue selling the software to police, but they have called for federal rules on how law enforcement can use the technology.

IBM CEO Arvind Krishna went a step further, announcing that his company would exit the facial recognition business entirely because it opposes any technology used “for mass surveillance, racial profiling, violations of basic human rights and freedoms.”

In 2018, high-profile A.I. researchers Timnit Gebru and Joy Buolamwini published a research paper highlighting bias problems in facial recognition software. In response, some cosmetics companies paused A.I. projects that would show how makeup products look on different people’s skin, for fear the technology could discriminate against Black women, says Rumman Chowdhury, the former head of Accenture’s responsible A.I. team and now CEO of the startup Parity AI.

“That was when a lot of companies cooled down with how much they wanted to use facial recognition,” Chowdhury says. “I had meetings with clients in makeup, and all of it stopped.”
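The 2018 paper’s key technique was disaggregated evaluation: reporting a system’s error rate separately for each demographic subgroup rather than as one aggregate number. Here is a minimal sketch of that kind of audit; the predictions and subgroup labels are entirely hypothetical.

from collections import defaultdict

# (true_label, predicted_label, subgroup) triples from a hypothetical
# face-analysis test set.
results = [
    ("female", "female", "darker_female"),
    ("female", "male",   "darker_female"),
    ("female", "male",   "darker_female"),
    ("male",   "male",   "darker_male"),
    ("female", "female", "lighter_female"),
    ("male",   "male",   "lighter_male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for truth, pred, group in results:
    totals[group] += 1
    errors[group] += truth != pred  # bool counts as 0 or 1

for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")

An overall accuracy figure can look acceptable while one subgroup’s error rate is several times higher, which is the pattern the paper documented in commercial facial analysis systems.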

Problems at Google have also caused companies to rethink A.I. In December, Gebru left Google and then claimed that the company had censored some of her research. That research focused on bias problems in Google’s A.I. software for understanding human language, as well as the huge amounts of electricity the software consumes during training, which could harm the environment.

This reflected poorly on Google because the company has had bias problems before, as when its Google Photos product misidentified Black people as gorillas, and because it champions itself as an environmental steward.

Shortly after Gebru’s departure, Google suspended the computer access of another of its A.I. ethics researchers who had been critical of the search giant. A Google spokesperson declined to comment on the researchers or the company’s ethical blunders, pointing instead to previous statements in which Google CEO Sundar Pichai and Google executive Jeff Dean said the company is reviewing the circumstances of Gebru’s departure and remains committed to its A.I. ethics research.

Miriam Vogel, a former Justice Department lawyer who now heads EqualAI, a nonprofit that helps companies address A.I. bias, says many companies and A.I. researchers are paying close attention to Google’s A.I. problems. Some fear the episode could have a chilling effect on future research into topics that don’t align with employers’ business interests.

“This issue has captured everyone’s attention,” Vogel says about Gebru leaving Google. “It took their breath away that someone who was so widely admired and respected as a leader in this field could have their job at risk.” 

Although Google has positioned itself as a leader in A.I. ethics, the company’s missteps undercut that reputation. Vogel hopes that companies don’t overreact by firing or silencing their own employees who question the ethics of certain A.I. projects.

“I would hope companies do not take fear that by having an ethical arm of their organization they would create tensions that would lead to an escalation at this level,” Vogel says.

A.I. ethics going forward

Still, the fact that companies are thinking about A.I. ethics at all is an improvement over a few years ago, when they gave the issue relatively little thought, says Abhishek Gupta, who works on machine learning at Microsoft and is founder and principal researcher of the Montreal AI Ethics Institute.

And no one thinks companies will stop using A.I. altogether. Brian Green, director of technology ethics at the Markkula Center for Applied Ethics at Santa Clara University in Silicon Valley, says the technology has become too important a tool to drop.

“The fear of going out of business trumps the fear of discrimination,” Green says.

And while LivePerson’s Spinelli worries about some uses of A.I., his company is still investing heavily in subfields like natural language processing, in which computers learn to understand human language. He hopes that being public about the company’s stance on A.I. ethics will convince customers that LivePerson is trying to minimize any harms.
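As a rough illustration of that kind of natural language processing, here is a minimal intent-classification sketch: routing a customer message to a topic so a chatbot can respond appropriately. The phrases and intent labels are invented for illustration; a production conversational A.I. system would be far more sophisticated.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical training set of customer messages and their intents.
phrases = [
    "where is my order", "my package never arrived",
    "I want a refund", "please return my money",
    "how do I reset my password", "I cannot log in",
]
intents = ["shipping", "shipping", "refund", "refund", "account", "account"]

# Tf-idf features plus a linear classifier: a common, simple baseline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(phrases, intents)
print(clf.predict(["give me my money back", "password reset please"]))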

LivePerson, along with professional services giant Cognizant and insurance firm Humana, is a member of EqualAI, and all three companies have publicly pledged to test and monitor their A.I. systems for problems involving bias.
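The article doesn’t spell out what that testing involves, but one simple check such monitoring could include is comparing the rate of favorable model outcomes across groups and flagging the system when any group falls below 80% of the best-off group’s rate, the “four-fifths” rule of thumb borrowed from U.S. employment law. The data and threshold below are illustrative assumptions, not EqualAI’s actual methodology.

# Hypothetical model outcomes keyed by group: 1 = favorable decision.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

rates = {g: sum(o) / len(o) for g, o in outcomes.items()}
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "FLAG for review"
    print(f"{group}: favorable rate {rate:.0%}, ratio {ratio:.2f} -> {status}")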

Says Spinelli, “Call us out if we fail.” 


This story was originally featured on Fortune.com
