Twitter's new plan: Examining the "unintentional mistakes" in its own algorithms
Author | Zhou Lei
Twitter announced a new plan.
The initiative, called Responsible Machine Learning, examines the fairness of the algorithms Twitter uses.
Under the plan, data scientists and engineers inside the company will study how Twitter's use of machine learning can lead to algorithmic bias, evaluate the "unintentional harms" its algorithms may cause, and then make the findings public.
"We are conducting in-depth analysis and research to assess whether the algorithms we use are potentially harmful," Twitter wrote in an official document.
One of the first tasks is to evaluate racial and gender bias in Twitter's image-cropping algorithm. As more news sites and social media platforms use AI to identify and extract images, people have begun to notice racial bias in many algorithms, especially in facial recognition.
Previously, some Twitter users pointed out that in photos containing people of different races, Twitter's automatic image-cropping algorithm tends to highlight the lighter-skinned face when choosing the thumbnail region shown in the tweet preview.
Last September, researchers found that clicking through to the original images revealed they also contained people with darker skin, and that even swapping the positions of the darker- and lighter-skinned people in the original image did not change which face the preview highlighted.
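For illustration, the kind of probe those users ran can be written as a small test harness. This is a hypothetical sketch, not Twitter's tooling: `crop_fn` stands in for whatever cropping model is being tested, and all names here are invented.

```python
# Hypothetical probe of a cropping model: paste two portraits side by
# side, ask the model for a crop, then swap the portraits and ask again.
# A preference for one face that survives the position swap suggests the
# model keys on the face itself (e.g., skin tone) rather than on layout.
from typing import Callable, Tuple
from PIL import Image

# A crop function takes an image and returns a crop box (x0, y0, x1, y1).
CropFn = Callable[[Image.Image], Tuple[int, int, int, int]]

def swap_test(face_a: Image.Image, face_b: Image.Image, crop_fn: CropFn) -> None:
    layouts = [(face_a, face_b, "A left / B right"),
               (face_b, face_a, "B left / A right")]
    for left, right, label in layouts:
        canvas = Image.new("RGB",
                           (left.width + right.width,
                            max(left.height, right.height)), "white")
        canvas.paste(left, (0, 0))
        canvas.paste(right, (left.width, 0))
        x0, _, x1, _ = crop_fn(canvas)
        side = "left" if (x0 + x1) / 2 < left.width else "right"
        print(f"{label}: crop centers on the {side} face")
```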
Some users argued that this happens not out of "discrimination" but because the algorithm tends to select regions of the image with high brightness and color density.
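That explanation is easy to make concrete. The sketch below is a deliberately crude stand-in, assuming nothing about Twitter's real model (a learned neural saliency predictor): it scores every candidate window purely by total brightness, which is exactly the kind of objective that would systematically favor lighter regions, faces included.

```python
# Toy brightness-driven crop, NOT Twitter's algorithm: score each
# candidate window by its summed pixel brightness and keep the best one.
import numpy as np
from PIL import Image

def brightness_crop(img: Image.Image, crop_w: int, crop_h: int) -> Image.Image:
    gray = np.asarray(img.convert("L"), dtype=np.float64)
    h, w = gray.shape  # assumes the image is at least crop_h x crop_w

    # Integral image: window sums in O(1) per candidate window.
    integral = np.pad(gray.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

    best_score, best_xy = -1.0, (0, 0)
    step = 16  # coarse stride keeps the search cheap
    for y in range(0, h - crop_h + 1, step):
        for x in range(0, w - crop_w + 1, step):
            total = (integral[y + crop_h, x + crop_w] - integral[y, x + crop_w]
                     - integral[y + crop_h, x] + integral[y, x])
            if total > best_score:
                best_score, best_xy = total, (x, y)

    x, y = best_xy
    return img.crop((x, y, x + crop_w, y + crop_h))
```

Run against the side-by-side canvases from the swap test above, a scorer like this would keep favoring the brighter face no matter which side it is on.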
In response to the accusations of racial discrimination, Twitter said at the time that it would investigate further, and promised to open-source its image-cropping machine learning algorithm so that more users could review it and offer suggestions.
Chief Technology Officer Parag Agrawal said the algorithm needs continual improvement and that the team is eager to learn from the experience.
Last month, Twitter began testing showing full images instead of cropped previews.
But even if Twitter's algorithm is not "intentionally" racist, flaws in the development process can still produce racially discriminatory results.
Anima Anandkumar, director of AI research at NVIDIA, has pointed out that the saliency algorithm was trained on eye-tracking data collected from heterosexual male subjects, which transfers those subjects' racial biases into the algorithm.
Twitter will also study its content recommendations, including how timeline feeds differ across racial subgroups.
Twitter said it will "work closely" with third-party academic researchers, share the results of its analysis, and seek public feedback.
It’s not clear how big an impact the initiative will have. Twitter said the findings won’t necessarily translate into visible product changes, but will instead inform important discussions about how the company builds and applies machine learning.
Twitter CEO Jack Dorsey has also said he wants to create an algorithm marketplace, similar to an app store, to give users control over the algorithms they use. The company said in its latest blog post that they are in the early stages of exploring "algorithmic choice."
This is an urgent problem for not just Twitter, but for all major social media platforms.
Prompted by recent social events in the United States, lawmakers have pressured Twitter, YouTube, and Facebook to make their algorithms more transparent, and some have proposed legislation that would require the giants to assess whether their algorithms are biased.
Twitter's decision to analyze its own algorithmic bias follows similar moves at other social networks; Facebook set up a comparable team in 2020.
Microsoft has faced similar scrutiny before. A 2018 audit found that Microsoft's facial analysis identified lighter-skinned men with 100% accuracy, but identified darker-skinned women with an accuracy of only 79.2%.
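Gaps like that are surfaced by the simplest possible audit: compute accuracy separately for each demographic subgroup rather than one aggregate number. A minimal sketch, using made-up records rather than any vendor's data:

```python
# Minimal per-subgroup accuracy audit. The records below are invented
# for illustration; a real audit would use a labeled benchmark dataset.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (subgroup, predicted_label, true_label)."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, pred, true in records:
        total[group] += 1
        correct[group] += int(pred == true)
    return {g: correct[g] / total[g] for g in total}

records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),  # misclassified
    ("darker-skinned female", "female", "female"),
]
print(accuracy_by_group(records))
# {'lighter-skinned male': 1.0, 'darker-skinned female': 0.5}
```

Aggregate accuracy over these four records is 75%, which is exactly how a single headline accuracy figure can hide a much worse error rate for one subgroup.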
In early June last year, Microsoft also caused a public outcry over racial discrimination involving facial recognition.
Jade Thirlwall, a member of the British girl group Little Mix, publicly criticized Microsoft's news site MSN for using a photo of another member of the group in a report about her.
The report's image was confirmed to have been selected and assembled by AI, which confused the darker-skinned Leigh-Anne with Jade, who is of Arab descent, when searching for a matching picture.
Amid mounting user complaints and a growing wave of anti-racism sentiment, many technology companies, including IBM and Amazon, have been forced to reexamine the biases in their systems, especially in facial recognition technology.
Compiled by Leifeng.com, Source: The Verge