
Tuesday, June 30, 2020

Yann LeCun Quits Twitter Amid Acrimonious Exchanges on AI Bias - Synced


Turing Award Winner and Facebook Chief AI Scientist Yann LeCun has announced his exit from popular social networking platform Twitter after getting involved in a long and often acrimonious dispute regarding racial biases in AI.

Unlike most other artificial intelligence researchers, LeCun has often aired his political views on social media platforms, and has previously engaged in public feuds with colleagues such as Gary Marcus. This time, however, LeCun’s penchant for debate saw him run afoul of what he termed “the linguistic codes of modern social justice.”

It all started on June 20 with a tweet regarding the new Duke University PULSE AI photo recreation model that had depixelated a low-resolution input image of Barack Obama into a photo of a white male. Penn State University Associate Professor Brad Wyble tweeted, “This image speaks volumes about the dangers of bias in AI.” LeCun responded, “ML systems are biased when data is biased. This face upsampling system makes everyone look white because the network was pretrained on FlickFaceHQ, which mainly contains white people pics. Train the *exact* same system on a dataset from Senegal, and everyone will look African.”

Research scientist, co-founder of the “Black in AI” group, and technical co-lead of the Ethical Artificial Intelligence Team at Google Timnit Gebru tweeted in response, “I’m sick of this framing. Tired of it. Many people have tried to explain, many scholars. Listen to us. You can’t just reduce harms caused by ML to dataset bias.” She added, “Even amidst of world wide protests people don’t hear our voices and try to learn from us, they assume they’re experts in everything. Let us lead her and you follow. Just listen. And learn from scholars like @ruha9 [Ruha Benjamin, Associate Professor of African American Studies at Princeton University]. We even brought her to your house, your conference.” (This was a reference to ICLR 2020, where LeCun served as president and Benjamin presented the talk 2020 Vision: Reimagining the Default Settings of Technology & Society.)

Known for her work on racial and gender bias in facial recognition systems and other AI algorithms, Gebru has been advocating for fairness and ethics in AI for years. The Gender Shades project that she leads with MIT Media Lab Computer Scientist Joy Buolamwini revealed that commercial facial recognition software was more likely to misclassify darker-skinned females and was less accurate on them than on lighter-skinned males.
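The core methodological point of Gender Shades is that a single aggregate accuracy number can hide large gaps between subgroups, so performance should be reported disaggregated by group. A minimal sketch of that kind of evaluation follows; the field names and toy records are hypothetical illustrations, not the project’s actual data or code.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the misclassification rate separately for each demographic subgroup.

    records: an iterable of dicts with hypothetical keys 'group', 'label', 'prediction'.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["label"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy illustration: overall accuracy is 75%, but the errors fall entirely on one group.
records = [
    {"group": "darker_female", "label": "female", "prediction": "male"},
    {"group": "darker_female", "label": "female", "prediction": "female"},
    {"group": "lighter_male", "label": "male", "prediction": "male"},
    {"group": "lighter_male", "label": "male", "prediction": "male"},
]
print(error_rate_by_group(records))  # {'darker_female': 0.5, 'lighter_male': 0.0}
```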

Gebru’s CVPR 2020 talk Computer vision in practice: who is benefiting and who is being harmed? again addressed the role of bias in AI: “I think that now a lot of people have understood that we need to have more diverse datasets, but unfortunately I felt like that’s kind of where the understanding has stopped. It’s like ‘let’s diversify our datasets. And that’s kind of ethics and fairness, right?’ But you can’t ignore social and structural problems.”

LeCun replied that his comment was targeting the particular case of the Duke model and dataset. “The consequences of bias are considerably more dire in a deployed product than in an academic paper,” he continued in a lengthy thread of tweets, suggesting it is engineers building deployed products, rather than ML researchers, who need to be more careful about selecting data.

“Again. UNBELIEVABLE. What does it take? If tutorials at your own conference, books and books and talks and talks from experts coming to YOU, to your own house, feeding it to you, Emily and I even cover issues with how the research community approaches data. Nope. Doesn’t matter.” Gebru replied. “This is not even people outside the community, which we say people like him should follow, read, learn from. This is us trying to educate people in our own community. Its a depressing time to be sure. Depressing.”

Others from the AI and activist communities joined the fray, with far too many simply attacking either LeCun or Gebru. On June 25 LeCun offered an olive branch: “I very much admire your work on AI ethics and fairness. I care deeply about working to make sure biases don’t get amplified by AI and I’m sorry that the way I communicated here became the story.” Gebru replied, “We’re often told things like ‘I’m sorry that’s how it made you feel.’ That doesn’t really own up to the actual thing. I hope you understand *why* *how* you communicated became the story. It became the story because it’s a pattern of marginalization.”

The week-long back-and-forth between LeCun and Gebru attracted thousands of likes, comments, and retweets, with a number of high-profile AI researchers expressing dissatisfaction with LeCun’s explanations. Google Research scientist David Ha commented, “I respectfully disagree w/ Yann here. As long as progress is benchmarked on biased data, such biases will also be reflected in the inductive biases of ML systems. Advancing ML with biased benchmarks and asking engineers to simply ‘retrain models with unbiased data’ is not helpful.” Canada CIFAR AI chair Nicolas Le Roux tweeted, “Yann, I know you mean well. I saw many people act like you just did in good faith, and get defensive when people pointed that this was not the proper response, until one day they stopped to listen and reflect and ultimately change their behaviour.”

Amid the heated debate, the Duke PULSE research team updated their paper, adding: “Overall, it seems that sampling from StyleGAN yields white faces much more frequently than faces of people of color.” The researchers referenced an April 2020 paper on demographic bias in artificially generated facial pictures by Salminen et al.: “Results indicate a racial bias among the generated pictures, with close to three-[fourths] (72.6%) of the pictures representing White people. Asian (13.8%) and Black (10.1%) are considerably less frequent, while Indians represent only a minor fraction of the pictures (3.4%).”
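The audit the PULSE authors cite amounts to sampling from the generator and tallying the demographic labels of the outputs. A rough sketch of that procedure is below; sample_face and predict_group are hypothetical stand-ins for a StyleGAN sampler and a demographic classifier, not real APIs, and a real audit would also have to account for bias in the classifier itself.

```python
from collections import Counter

def estimate_group_distribution(sample_face, predict_group, n=1000):
    """Estimate how often each demographic group appears among n generated faces.

    sample_face and predict_group are hypothetical callables supplied by the auditor.
    """
    counts = Counter(predict_group(sample_face()) for _ in range(n))
    return {group: count / n for group, count in counts.items()}

# A result like {'white': 0.73, 'asian': 0.14, 'black': 0.10, 'indian': 0.03}
# would mirror the skew Salminen et al. report for StyleGAN samples.
```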

The team also added a “model card” to their study. Gebru was part of a team that introduced the model card framework in 2019 to “provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains.”
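As a rough illustration, the information a model card collects can be expressed as structured metadata along the lines below. The section names loosely follow the framework’s categories, and the values are placeholders for illustration, not the PULSE team’s actual card.

```python
# Illustrative placeholder content only; not the PULSE team's actual model card.
model_card = {
    "model_details": {"name": "face-upsampler-demo", "version": "0.1"},
    "intended_use": "Research demonstration of photo upsampling; not for identification.",
    "factors": ["Fitzpatrick skin type", "sex", "age"],  # groups results are reported over
    "evaluation_data": "Held-out faces balanced across the factors above.",
    "metrics": "Reconstruction quality reported separately for each subgroup.",
    "disaggregated_results": {
        "lighter_male": None,   # to be filled with measured values per group
        "darker_female": None,
    },
    "caveats": "Outputs reflect biases in the pretraining data distribution.",
}
```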

Slides from Gebru’s CVPR 2020 tutorial Computer vision in practice: who is benefiting and who is being harmed?

The artificial intelligence community has made a number of moves in recent years to encourage diversity and inclusivity, such as the “AI for All” initiative launched by Gebru’s Stanford supervisor Fei-Fei Li and the scheduling of the major AI conference ICLR 2020 in Ethiopia (the conference went virtual due to COVID-19). This year, NeurIPS, the world’s most prestigious AI conference, required authors to include a statement on the potential broader impact of their submitted papers, “including its ethical aspects and future societal consequences. Authors should take care to discuss both positive and negative outcomes.”


Journalist: Fangyu Cai | Editor: Michael Sarazen





