This week, as the heads of four of the largest and most powerful tech companies in the world testified at a Congressional antitrust hearing and had to answer for the ways they built and run their respective behemoths, you could see how much the bloom has come off the rose of big tech. It should also be a moment of circumspection for those working in AI.
Facebook’s Mark Zuckerberg, once the rascally college-dropout boy genius you loved to hate, still doesn’t seem to grasp the magnitude of the problem of globally destructive misinformation and hate speech on his platform. Tim Cook struggles to defend Apple’s 30% cut of some App Store developers’ revenue, a policy he didn’t even establish and a vestige of Apple’s late-2000s vise grip on the mobile app market. The plucky young upstarts who founded Google are now both middle-aged and have stepped down from their executive roles, quietly fading away while Alphabet and Google CEO Sundar Pichai runs the show. And Jeff Bezos wears the untroubled visage of the world’s richest man.
Amazon, Apple, Facebook, and Google all created new tech products and services that have undeniably changed the world, and in some ways that are undeniably good. But as they all moved fast and broke things, they also largely excused themselves from the burden of asking difficult ethical questions, about everything from how they built their business empires to the impacts of their products and services on the people who use them.
As AI continues to be the focus of the next wave of transformative technology, skating over those difficult questions is not an option. It’s a mistake the world can’t afford to repeat. What’s more, AI doesn’t actually work properly unless those questions are answered.
Smart and ruthless was the way of old big tech, but AI requires people to be smart and wise. Those working in AI have to not only ensure the efficacy of what they make but also holistically understand the potential harms to the people on whom it is applied. That’s a more mature and just way of building world-changing technologies, products, and services. Fortunately, many prominent voices in AI are leading the field down that path.
This week’s best example was the widespread reaction to a service called Genderify, which promised to use natural language processing (NLP) to help companies identify the gender of their customers using only a name, username, or email address. The entire premise is absurd and problematic, and when AI folks got ahold of it and put it through its paces, they predictably found it to be terribly biased (which is to say, broken).
Genderify was such a bad joke that it almost seemed like some kind of performance art. In any case, it was laughed off the internet. Just a day or so after it launched, the Genderify site, Twitter account, and LinkedIn page were gone.
It’s frustrating to many in AI that such ill-conceived and poorly executed offerings keep popping up. But the swift and wholesale deletion of Genderify illustrates the strength of this new generation of principled AI researchers and practitioners.
Now, in its most recent and most successful summer, AI is already getting the reckoning that took decades to reach big tech. Other recent examples include a paper that promised to use AI to identify criminality from people’s faces (which is really just AI phrenology) and was withdrawn from publication after an outcry. Landmark studies on bias in facial recognition have led to bans and moratoriums on its use in several U.S. cities, as well as a raft of legislation to eliminate or combat its potential abuses. Fresh research is finding intractable problems with bias in well-established data sets like 80 Million Tiny Images and the legendary ImageNet, and that research is leading to immediate change. And more.
Although advocacy groups are certainly playing a role in pushing for these changes and demanding answers to hard questions, the authority and the research-based proof behind them are coming from inside the field of AI: ethicists, researchers looking for ways to improve AI techniques, and actual practitioners.
There is, of course, an immense amount of work still to be done, and many more battles to fight as AI becomes the next dominant set of technologies. Look no further than problematic AI in surveillance, the military, the courts, employment, policing, and more.
But when you see tech giants like IBM, Microsoft, and Amazon pull back on massive investments in facial recognition, it’s a sign of progress. It doesn’t actually matter what their true motivation is, whether it’s narrative cover for capitulating to other companies’ market dominance, a calculated move to avoid potential legislative punishment, or just a PR stunt. The fact is that, for whatever reason, those companies see it as more advantageous to slow down and make sure they aren’t causing damage than to keep moving fast and breaking things.