The Growing Problem of Covert Racism in Advanced AI Tools


A recent study has found that covert racial bias in popular artificial intelligence tools is growing as the technology advances. The report, authored by a team of technology and linguistics researchers, found that large language models such as OpenAI’s ChatGPT and Google’s Gemini hold racist stereotypes about speakers of African American Vernacular English (AAVE).

Valentin Hoffman, a researcher at the Allen Institute for Artificial Intelligence and co-author of the paper, voiced concern about how widely companies already use these technologies, for example to screen job applicants. Because of that reach, people who speak AAVE risk facing racial discrimination in education, employment, housing, and legal proceedings.

The study finds that AI models are more likely to describe AAVE speakers as “intellectually deficient” and “lacking in motivation”, and to match them with less desirable, lower-paying jobs. The report also notes that using AAVE in social media posts could hurt a job candidate’s prospects, since a language model screening applicants may dismiss them because of their dialect.

The study also found that the AI models were more likely to recommend harsher penalties for criminal defendants who use AAVE in their court statements, raising concern about their potential influence on criminal convictions.

As AI models take on a growing share of administrative tasks within the US legal system, prominent AI experts have called for government regulation of large language models, warning that their largely unregulated use risks real harm.

Despite efforts to set ethical guardrails for language models, the report finds that covert racism grows as the models scale. Guardrails may teach a model to be more discreet about its racial biases, but they do not eliminate the underlying problem.

Adoption of language models in the private sector is expected to keep accelerating, with the generative AI market projected to reach $1.3tn by 2032. That growth has heightened concerns about the harm these technologies could cause if federal regulation fails to keep pace.

As a result, demand is mounting for regulation of these technologies, particularly in sensitive areas such as hiring and recruiting, so that racial bias does not shape crucial decisions. AI tools offer real advantages, but covert racism must be addressed urgently if they are to be used responsibly and ethically.
