The AI arms race is affecting minorities, and not in a good way

Unintentional bias in AI has drawn a great deal of public criticism recently. We talk far less, however, about the intentional bias that shows up in an alarming number of fourth-wave innovations. One shocking example came to light in late 2019, when leaked documents revealed that the popular social media platform TikTok was doling out “algorithmic punishments for unattractive and impoverished users.” Human moderators were instructed to flag users who were “unattractive, poor, or otherwise undesirable,” at which point TikTok’s algorithms would prevent most users from viewing their content. These standards were put in place by the Chinese parent company ByteDance, a corporation that U.S. officials say has significant “ties to the Chinese Communist Party.” Punishing the underprivileged on social media is worrisome in itself; numerous studies have outlined the damage that unpopularity on social media can do to young people. This censorship of the “poor” and “ugly” is offensive and harmful enough, but the discrimination does not stop there.

More recently, TikTok began blocking statements like “Black Lives Matter” and “Black support” from user bios. TikTok claims the censorship was a fluke in algorithms designed to block expressions of Black hate. Critics find this excuse flimsy, noting that phrases like “neo-Nazi” and “white supremacy” did not trigger the “protective” algorithms. TikTok says its algorithms will be updated to address the issue, but many users remain doubtful. One comedian and Black activist reported that he lost the ability to share certain types of content after calling attention to the platform’s discriminatory algorithms. The obvious intention here is to keep TikTok users from seeing complaints about the company’s damaging AI screening tools. TikTok has harmed individual users by blocking their content, but it has also hampered the worldwide movement toward racial justice.

TikTok isn’t the only Chinese tech giant that silences minority voices. WeChat, the popular Chinese messaging service, recently began removing LGBT group chats in an effort to suppress a population the Chinese government sees as a threat to its authority. While most WeChat users reside in China, around 200 million live outside the country, including 19 million in the United States alone. In other words, millions of people throughout the West have been prevented from discussing LGBT issues and topics because Chinese leaders feel threatened by them.

Photo Courtesy of The Reel Network

It is clear that China will use seemingly benign technology to silence progressive political movements that threaten its hold on power. This should serve as a warning for liberal democracies that value individual rights and protections: outsider technology is actively stifling the very groups that promote them. Of course, any corporation in any part of the world can enforce discriminatory policies. The difference between Western companies and those from places like Russia and China is that Western citizens can punish bad behavior in their own countries. Liberal democracies provide opportunities to sue, cite, and regulate tech companies through their political institutions. Unfortunately, these tools for change are not available everywhere. Albeit with difficulty, Western nations can regulate domestic company activity, but there is no legitimate procedure that would allow the West to regulate companies from Eastern countries with different power structures. We can’t effectively sue foreign tech companies or target them with legislation. If change is to come, it has to come about through some other means.

When presented with these instances of discrimination in Chinese AI, we might think the simple solution is to ban such technology from reaching Western shores. Unfortunately, the situation is more complicated than it appears. China and Russia have both made it clear that attempts to regulate or ban their largest tech companies will be met with economic sanctions at the very least. Powerful American companies like Apple, afraid of losing foreign markets, lobby hard to prevent regulation of foreign tech giants.

Zhang Yiming, CEO and founder of ByteDance

Another problem with technology bans is the rising quality of foreign products. TikTok is a good example. Consumers have known for around two years now that TikTok engages in discriminatory censorship and that it harvests location, voice, and fingerprint data from its users, many of whom are minors. Even so, millions of Westerners use the app because of its entertaining content and attractive user interface. In short, foreign products are of far higher quality than in previous technology arms races. The U.S. has traditionally been an exporter of cutting-edge tech products and isn’t used to receiving competitive goods from non-allied states. Western consumers don’t usually think about international politics when deciding to download an app or post content; they just want high-quality products. We simply aren’t used to the idea that foreign goods and services could pose a threat to our well-being.

Ziggi Tyler, pictured above, says TikTok blocked activist statements in his bio and then punished his account when he spoke out.

One solution to international AI discrimination is to keep markets open but prohibit companies in non-allied countries from collecting user data in the U.S. and across the West. As noted, this will be tricky given American lobbying efforts and the threat of repercussions. Another possible solution is to boycott applications that use data to discriminate. If we commit to educating ourselves about the products we use and avoiding those that stifle democratic values, we can effectively prevent foreign influences from discriminating against minority groups. One of the most powerful tools at our disposal is our ability to choose what we download and where we interact online. Let corporate bigots feel the sting of lost profits. Together we can protect the rights of minorities and maybe even change the course of AI development forever. If you are passionate about activism, one of the best things you can do is, ironically, very passive: avoid services that use their data irresponsibly until they commit to making products with liberty and justice for all.
