The news cycle this week seemed to grab people by the collar and shake them violently. On Wednesday, Palantir went public. The secretive company with ties to the military, spy agencies, and ICE relies on government contracts and is intent on racking up more sensitive data and more of those contracts in the U.S. and overseas.
Following a surveillance-as-a-service blitz last week, Amazon introduced Amazon One, which lets Amazon and third-party businesses identify customers with touchless biometric scans of their palms. The company claims palm scans are less invasive than other biometric identifiers like facial recognition.
On Thursday afternoon, in the short break between an out-of-control presidential debate and the revelation that the president and his wife had contracted COVID-19, Twitter shared more details about how it created the image-cropping AI that appears to prefer white faces over Black faces. In a blog post, Twitter chief technology officer Parag Agrawal and chief design officer Dantley Davis called the failure to publish the bias analysis alongside the algorithm's rollout years ago "an oversight." The executives shared additional details about a bias assessment that took place in 2017, and Twitter says it's working on moving away from saliency algorithms. When the problem initially received attention, Davis said Twitter would consider getting rid of image cropping altogether.
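Twitter hasn't detailed its production model, but the selection step of a saliency cropper is straightforward to illustrate: a model scores each pixel for predicted visual attention, and the crop keeps the window that scores highest. Here's a minimal sketch of that step, assuming some model has already produced a 2D saliency map; the function name, map, and crop dimensions are placeholders of my own, not Twitter's code:

```python
import numpy as np

def crop_by_saliency(image: np.ndarray, saliency: np.ndarray,
                     crop_h: int, crop_w: int) -> np.ndarray:
    """Keep the crop_h x crop_w window of `image` whose summed
    saliency is highest. `saliency` is a 2D attention map with the
    same height and width as `image`."""
    # Integral image: each window's sum becomes four lookups.
    ii = np.pad(saliency, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = saliency.shape
    best_score, best_yx = -np.inf, (0, 0)
    for y in range(h - crop_h + 1):
        for x in range(w - crop_w + 1):
            score = (ii[y + crop_h, x + crop_w] - ii[y, x + crop_w]
                     - ii[y + crop_h, x] + ii[y, x])
            if score > best_score:
                best_score, best_yx = score, (y, x)
    y, x = best_yx
    return image[y:y + crop_h, x:x + crop_w]
```

The selection arithmetic itself is neutral; any bias lives in the model producing the saliency map, which is why an assessment has to probe the scores rather than the cropping step.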
There are still unanswered questions about how Twitter used its saliency algorithm, and in some ways the blog post shared late Thursday raises more questions than it answers. It states both that no AI can be completely free of bias and that Twitter's analysis of its saliency algorithm showed no racial or gender bias, yet a Twitter engineer said some evidence of bias was found during the initial assessment.
Twitter also has yet to share any results from the 2017 assessment for gender and racial bias. Instead, a Twitter spokesperson told VentureBeat more details will be released in the coming weeks, the same response the company gave when the apparent bias first came to light.
Twitter does not appear to have an official policy to assess algorithms for bias before deployment, something civil rights groups urged Facebook to develop this summer. It’s unclear whether the saliency algorithm episode will lead to any lasting change in policy at Twitter, but what makes the scandal worse is that so many people were unaware that artificial intelligence was even in use.
This all brings us to another event that happened earlier this week: The cities of Amsterdam and Helsinki rolled out algorithm registries. Each city has only a few algorithms listed so far and plans to add more, but each registry entry lists the datasets used to train an algorithm, how the model is used, and how it was assessed for bias or risk. The goal, a Helsinki city official said, is to promote transparency so the public can trust the results of algorithms used by city governments. If residents have questions or concerns, the registry lists the name and contact information of the city department and official responsible for the algorithm's deployment.
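To make the idea concrete, here's a hypothetical registry entry sketched as a Python dataclass. The field names and example values are my own illustration of the information described above, not either city's actual schema:

```python
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    """One algorithm's listing in a hypothetical city AI registry,
    mirroring the fields Amsterdam and Helsinki describe."""
    name: str
    purpose: str                   # how the model is used
    training_datasets: list[str]   # data the model was trained on
    bias_risk_assessment: str      # how bias and risk were evaluated
    department: str                # department responsible for deployment
    contact_official: str          # whom residents can reach with concerns

# Illustrative example only; the values are invented.
entry = RegistryEntry(
    name="Parking enforcement plate scanner",
    purpose="Flags scanned license plates for human review",
    training_datasets=["City-collected street imagery"],
    bias_risk_assessment="Error rates compared across districts",
    department="City of Example, Transportation Dept.",
    contact_official="registry@example.city",
)
```

The exact format matters less than the contract it encodes: every deployed model gets a public description and a named, reachable owner.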
When you step back and look at how companies positioned to profit from surveillance and social media platforms conduct themselves, a common element is a lack of transparency. One potentially helpful solution is to follow the example of Amsterdam and Helsinki and create algorithm registries so that users know when machine intelligence is in use. For consumers, a registry could illuminate how social media platforms personalize content and influence what they see. For citizens, it could make clear when a government agency is using AI to make decisions, useful at a time when more agencies appear poised to do so.
If companies had to comply with regulation requiring them to register algorithms, researchers and members of the public might have known about Twitter's algorithm without needing to run their own tests. It was encouraging that the saliency algorithm inspired so many people to conduct their own trials, and it's healthy for users to assess bias for themselves, but it shouldn't have to be that difficult. While AI registries could invite more scrutiny, that scrutiny could ultimately lead to more robust and fair AI in the world, ensuring that the average person can hold companies and governments accountable for the algorithms they use.
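For anyone who does want to run such a trial, the core of the test people performed on Twitter is simple to express. In this sketch, `preview_center` is a hypothetical stand-in for the cropper under test (it returns the vertical center of the chosen crop window), and the two face images are assumed to be same-width RGB arrays:

```python
import numpy as np
from collections import Counter

def swap_test(preview_center, face_a: np.ndarray, face_b: np.ndarray,
              gap_height: int = 600) -> Counter:
    """Stack two faces at opposite ends of a tall canvas, in both
    orders, and record which face the cropper centers on. Mirrors
    the informal trials users ran with tall two-face images."""
    wins = Counter()
    for top, bottom, (top_name, bottom_name) in [
        (face_a, face_b, ("A", "B")),
        (face_b, face_a, ("B", "A")),
    ]:
        # A white gap pushes the faces far enough apart that the
        # crop window can keep only one of them.
        gap = np.full((gap_height, top.shape[1], 3), 255, np.uint8)
        canvas = np.vstack([top, gap, bottom])
        center = preview_center(canvas)
        wins[top_name if center < canvas.shape[0] / 2 else bottom_name] += 1
    # Across many face pairs, an unbiased cropper should split wins
    # roughly evenly between groups.
    return wins
```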
For AI coverage, send news tips to Khari Johnson and Kyle Wiggers and AI editor Seth Colaner — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI Channel.
Thanks for reading,
Khari Johnson
Senior AI Staff Writer