
AI-based recruitment: Experiment exposes racism by OpenAI's GPT

Discrimination in job evaluation and selection by OpenAI's GPT-3 and GPT-4 (symbolic image: DALL-E / AI, edited)
A recent study shows that automated hiring programs based on OpenAI's GPT-3 and GPT-4 are severely biased against certain groups of people. An extensive Bloomberg experiment has shown that AI-based recruitment programs discriminate against applicants on the basis of name and ethnicity, among other factors.

Automated hiring programs based on OpenAI's GPT are used by companies to shortlist candidates for advertised positions. Recruiters use the tool to save time, but as a Bloomberg experiment inspired by landmark discrimination studies has shown, the artificial intelligence is biased when evaluating and selecting candidates.

The problem is that the underlying AI model draws its information from vast amounts of data, such as articles, online comments and social media posts, which can include racist, misogynistic and other discriminatory content. In the comprehensive study, fictitious names associated with particular ethnicities, paired with CVs of equal quality, were used to apply for a real job.

Names associated with Black, White, Hispanic or Asian women and men were chosen. The experiment was run 1,000 times for a real financial analyst position using hundreds of different names and name combinations, and then repeated for four more jobs, including software engineer and other professional roles.
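The audit design described above can be sketched in a few lines of code. The sketch below is illustrative only: the names, the base resume and the `unbiased_ranker` stand-in are assumptions, not Bloomberg's actual lists or code. In a real audit, the ranking function would send the candidate list to the LLM and parse its top pick; here, a random stand-in shows the expected baseline, since the resumes are identical apart from the name.

```python
import random
from collections import Counter

# Hypothetical demographic name groups (illustrative, not Bloomberg's lists).
NAME_GROUPS = {
    "Asian American women": ["Amy Chen", "Grace Liu"],
    "Black men": ["Darnell Washington", "Jamal Robinson"],
    "Black women": ["Keisha Jackson", "Imani Brooks"],
    "White men": ["Greg Miller", "Todd Anderson"],
}

# One shared resume so that candidates differ only in their name.
BASE_RESUME = "Financial analyst, 5 years of experience, CFA Level II."

def run_audit(rank_fn, trials=1000, seed=0):
    """Repeat the ranking experiment `trials` times and tally how often
    each demographic group supplies the top-ranked candidate."""
    rng = random.Random(seed)
    top_counts = Counter()
    for _ in range(trials):
        candidates = [(group, f"{rng.choice(names)}\n{BASE_RESUME}")
                      for group, names in NAME_GROUPS.items()]
        rng.shuffle(candidates)  # avoid positional bias in the prompt
        top_counts[rank_fn(candidates)] += 1
    return {group: top_counts[group] / trials for group in NAME_GROUPS}

def unbiased_ranker(candidates):
    # Stand-in for the LLM call: an unbiased model should pick each group
    # about equally often, because the resumes are otherwise identical.
    return random.choice(candidates)[0]
```

With the unbiased stand-in, each group's top-pick rate hovers around 25%; the Bloomberg experiment found the GPT models deviating sharply from such a uniform baseline.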

GPT-3 clearly favored names from certain demographic groups. For example, Asian American female names were ranked highest for a financial analyst position, while names clearly indicative of Black men were ranked lowest. Names indicative of Black women were selected as the top candidate for a software engineer position only about 11% of the time, roughly 36% less often than the top-scoring group.

For a Human Resources position, a field in which women have historically been more likely to work, GPT selected names associated with Hispanic women as the top candidate significantly more often, while for an HR Business Partner position, names associated with men were selected almost twice as often. These are just a few examples from the large experiment.

Although AI is often assumed to be more neutral and to make better judgments than humans, this experiment shows the opposite. The bias was found not only in GPT-3, but also in GPT-4.

Nicole Dominikowski, 2024-03-10 (Update: 2024-03-10)