Over the weekend, Twitter and Zoom were spotted exhibiting racial bias in their visual algorithms. It all started when someone noted that Zoom appears to remove the heads of people with darker skin when they use a virtual background, while it does not do this to people with lighter skin.
In the tweet reporting the Zoom issue, it was also, ironically, spotted that Twitter appears to have a racial bias of its own: it cropped thumbnails to favor the face of a white person over a black one. Twitter has since responded to the ensuing outrage, saying it is clear it has more work to do.
Zoom appears to have a problem with its virtual-background algorithm that manifests as racial bias. A researcher, Colin Madland, posted a thread on Twitter last Sunday highlighting an issue with the face-detection algorithm that allegedly erases black faces when a virtual background is applied in the video-conferencing app. In the same thread, when Madland attached photos of each user in the chat, Twitter’s image-thumbnail cropping algorithm seemed to favor Madland over his black colleague.
In response to Madland’s observations, Twitter Chief Design Officer Dantley Davis said, “It’s 100 percent our fault. No one should say otherwise. Now the next step is fixing it.”
Afterwards, several Twitter users posted photos on the microblogging platform highlighting the apparent bias. Parag Agrawal, Twitter’s CTO, also reacted to the trend.
Another example came from cryptographic engineer Tony Arcieri, who tweeted on Sunday headshots of former US President Barack Obama and Senate Majority Leader Mitch McConnell to see which of the two the platform’s algorithm would highlight. Arcieri tried different arrangements of the headshots within the images; McConnell was shown far more often than Obama.
But once the engineer inverted the colors of the headshots, Obama’s image showed up in the cropped view. Kim Sherrell, the Intertheory producer, also found that the algorithm switches its preference once Obama’s photo is edited to have a higher-contrast smile.
Other users also found that the algorithm appeared to favor brighter complexions even in the case of cartoons and animals. As some users noted, Twitter clients such as TweetDeck and Twitterrific, as well as the mobile app and desktop views, showed different image-cropping priorities.
Liz Kelley, a Twitter spokesperson, responded to the tweets raising racial-bias allegations against the platform and stated:
“We tested for bias before shipping the model and didn’t find evidence of racial or gender bias in our test, but it’s clear that we’ve got more analysis to do.” She further elaborated, “We’ll open source our work so others can review and replicate.”
In 2017, Twitter discontinued face detection for automatically cropping images in users’ timelines and deployed a saliency-detection algorithm designed to focus on “salient” image regions. Twitter engineer Zehan Wang tweeted that the team had conducted bias studies before releasing the algorithm and at the time found “no significant bias between ethnicities (or genders).” He added, however, that the company would review the examples provided by Twitter users.
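To make the idea of saliency-based cropping concrete, here is a minimal sketch of how such a crop can work in principle. Twitter’s actual model is a trained neural network whose internals are not described here; this example merely approximates “saliency” as deviation from the image’s mean brightness and crops around the most salient point. The function name and the contrast-based heuristic are illustrative assumptions, not Twitter’s implementation.

```python
import numpy as np

def saliency_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Crop `image` around its most 'salient' pixel.

    Saliency is approximated here as absolute deviation from the global
    mean intensity (a crude stand-in for a learned saliency model).
    """
    saliency = np.abs(image.astype(float) - image.mean())
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    # Center the crop window on the salient point, clamped to image bounds.
    top = min(max(y - crop_h // 2, 0), image.shape[0] - crop_h)
    left = min(max(x - crop_w // 2, 0), image.shape[1] - crop_w)
    return image[top:top + crop_h, left:left + crop_w]

# Example: a dark image with one small bright patch; the crop centers on it.
img = np.zeros((100, 100))
img[70:75, 20:25] = 255  # bright, high-contrast region
crop = saliency_crop(img, 40, 40)
```

A heuristic like this makes the reported behavior plausible: if "salient" correlates with local contrast or brightness, brighter faces can systematically win the crop, which is why bias testing on such models matters.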