In the experiment, researchers created eight AI-generated images: four of a Black woman and four of a white woman. Within each set, the only variable was the hairstyle: straight hair, a big afro, a teeny-weeny afro (TWA), or braids for the Black woman; straight hair, a pixie cut, a bob, or curly hair for the white woman. Three AI tools were used to evaluate the images: Clarifai, Amazon Rekognition, and Anthropic’s Claude.
The results were stark. Two AI systems rated the Black woman with braids as less intelligent, less happy, and more “neutral” in emotion compared to when she had straight or afro styles. In some cases, the AI even failed to recognize her as the same person across hairstyles. For the white woman, no hairstyle carried significant intelligence penalties, and identity recognition remained largely consistent—except for a minor mix-up between straight and bob cuts.
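The identity-consistency failure is straightforward to probe yourself. Below is a minimal sketch, assuming AWS credentials are configured for boto3 and using hypothetical filenames for four renderings of the same woman. It pairwise-compares the images with Amazon Rekognition’s CompareFaces and reads back the dominant emotion DetectFaces assigns to each one; this illustrates the kind of check the study ran, not the researchers’ actual pipeline.

```python
import boto3
from itertools import combinations

# Hypothetical filenames: the same woman rendered with four hairstyles.
IMAGES = ["straight.jpg", "afro.jpg", "twa.jpg", "braids.jpg"]

rekognition = boto3.client("rekognition")  # assumes AWS credentials are set up

def load(path: str) -> bytes:
    with open(path, "rb") as f:
        return f.read()

# Identity consistency: does the model agree each pair is the same person?
for a, b in combinations(IMAGES, 2):
    resp = rekognition.compare_faces(
        SourceImage={"Bytes": load(a)},
        TargetImage={"Bytes": load(b)},
        SimilarityThreshold=0,  # report even weak candidate matches
    )
    matches = resp.get("FaceMatches", [])
    similarity = matches[0]["Similarity"] if matches else 0.0
    print(f"{a} vs {b}: similarity {similarity:.1f}%")

# Perceived emotion: the study also compared emotion ratings per hairstyle.
for path in IMAGES:
    details = rekognition.detect_faces(
        Image={"Bytes": load(path)},
        Attributes=["ALL"],  # include emotions, not just bounding boxes
    )["FaceDetails"]
    if details:
        top = max(details[0]["Emotions"], key=lambda e: e["Confidence"])
        print(f"{path}: dominant emotion {top['Type']} ({top['Confidence']:.0f}%)")
```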
Many companies now use AI facial recognition and analysis tools during remote hiring to verify candidates, detect fraud, and assess professionalism. But for Black women, a hairstyle change could trigger identity mismatches—leading to automatic rejection before a human even reviews their application. In workplaces using facial recognition for building or system access, this bias could mean denied entry, delays, or unnecessary security scrutiny.
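To make that failure mode concrete, here is a hedged sketch of the kind of automated identity gate described above. The threshold and function names are invented for illustration; the point is structural: when a face-match score dips because of a hairstyle change, the rejection happens before any human is involved.

```python
import boto3

rekognition = boto3.client("rekognition")
MATCH_THRESHOLD = 90.0  # illustrative cutoff; real systems tune this value

def verify_candidate(enrolled_photo: bytes, live_selfie: bytes) -> bool:
    """Return True if the live selfie matches the photo on file."""
    resp = rekognition.compare_faces(
        SourceImage={"Bytes": enrolled_photo},
        TargetImage={"Bytes": live_selfie},
        SimilarityThreshold=0,  # we apply our own cutoff below
    )
    matches = resp.get("FaceMatches", [])
    similarity = matches[0]["Similarity"] if matches else 0.0
    # A hairstyle change that drops the similarity score below the cutoff
    # produces an automatic rejection, with no human ever seeing the case.
    return similarity >= MATCH_THRESHOLD
```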
Even more concerning, AI-powered video interview tools could subtly penalize candidates based on hairstyle, rating them as “less professional” or “less intelligent” without human recruiters realizing the bias. In competitive job markets, these small algorithmic judgments can mean lost opportunities.
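How does a hairstyle end up lowering a “professionalism” rating? A hypothetical sketch shows one route: a seemingly neutral score built as a weighted blend of model-reported attributes. The weights, attribute names, and numbers below are invented for illustration, but if the underlying model reads braids as less “happy” and more neutral, as the study found, a formula like this quietly penalizes those candidates.

```python
from typing import Dict

# Invented weights for an illustrative "professionalism" score.
WEIGHTS: Dict[str, float] = {
    "HAPPY": 0.6,      # rewarded as "engaged"
    "CALM": 0.3,       # mildly rewarded
    "CONFUSED": -0.4,  # penalized
}

def professionalism_score(emotions: Dict[str, float]) -> float:
    """Blend per-emotion confidences (0-100) into a single score."""
    return sum(WEIGHTS.get(name, 0.0) * conf for name, conf in emotions.items())

# Same person, two hairstyles, with emotion confidences as a biased
# model might report them (hypothetical values):
straight = {"HAPPY": 85.0, "CALM": 10.0, "CONFUSED": 2.0}
braids = {"HAPPY": 40.0, "CALM": 50.0, "CONFUSED": 2.0}

print(professionalism_score(straight))  # 53.2
print(professionalism_score(braids))    # 38.2 -- lower, for the same person
```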
AI systems don’t create bias from thin air—they learn it from data shaped by historical and societal prejudice. This includes Eurocentric beauty norms that have long pressured Black women to straighten their hair or wear styles deemed “professional” by predominantly white standards. The danger is that AI automates and scales these biases, making them harder to detect but more damaging in effect.
Without oversight, this bias becomes systemic—locking people out of jobs, buildings, and even fair performance evaluations simply because of cultural hairstyles.
Tackling this issue starts with awareness. Everyone in the hiring process—HR teams, managers, and recruiters—should understand hair discrimination, texturism, and laws like the CROWN Act, which protects against hair-based bias. Workplaces should ensure grooming policies are inclusive, hiring criteria are objective, and AI tools never have the final say without human review.
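One way to encode “AI never has the final say” is to make rejection an outcome the software cannot produce at all. A minimal sketch, with hypothetical names and threshold:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ADVANCE = "advance"            # AI may fast-track a confident match
    HUMAN_REVIEW = "human_review"  # everything else goes to a person

@dataclass
class ScreenResult:
    candidate_id: str
    identity_similarity: float  # 0-100 score from a face-match model

def route(result: ScreenResult, threshold: float = 90.0) -> Decision:
    # Note the missing branch: there is no Decision.REJECT. The model can
    # accelerate a clear pass, but a low score only escalates to review.
    if result.identity_similarity >= threshold:
        return Decision.ADVANCE
    return Decision.HUMAN_REVIEW

# A hairstyle-induced low score is escalated, never auto-rejected.
print(route(ScreenResult("c-102", identity_similarity=61.5)))  # HUMAN_REVIEW
```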
By holding both people and technology accountable, we can build hiring and workplace systems that value talent over texture and potential over perception.