Algorithmic bias

Algorithmic bias (also called systematic bias) occurs when an AI system consistently produces skewed results, usually because the data used to train it was unbalanced, discriminatory, or incomplete. It is not a mistake made by any single person, but a structural feature of the data that the model inherits.

A classic example is an AI hiring system used to filter resumes. If the system is trained on historical data from a period when there were fewer women in tech, the model will learn that “male” equals “better candidate for a tech position.” The result: the system automatically rejects resumes from qualified women. That is algorithmic bias.

Another example is an AI lending system. If it is trained on data from a period when fewer loans were granted to African-American applicants (because they were discriminated against), the model learns that historical injustice and reproduces it. The system could reject a well-qualified African-American applicant simply because their cohort was underrepresented in the historical data.

Bias enters a system from several sources:

(1) Data biases – the dataset itself is unbalanced or incomplete.

(2) Algorithmic biases – even with good data, the choice of algorithm can favor one group.

(3) Interpretive biases – how the model’s results are interpreted can introduce bias.
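The first source, data bias, can often be spotted before training. As a minimal sketch (the dataset, attribute names, and the 30% threshold are all hypothetical assumptions), one can measure how balanced a training set is across a sensitive attribute:

```python
# Sketch: check dataset balance across a sensitive attribute before training.
# All data and the imbalance threshold below are hypothetical examples.
from collections import Counter

def group_shares(records, attribute):
    """Return each group's share of the dataset for the given attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Hypothetical training set for a hiring model.
resumes = [
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": True},
    {"gender": "male", "hired": False},
    {"gender": "female", "hired": False},
]

shares = group_shares(resumes, "gender")
print(shares)  # {'male': 0.75, 'female': 0.25}

# Flag a possible data bias if any group falls below a chosen threshold.
if min(shares.values()) < 0.3:
    print("Warning: dataset is unbalanced; the model may inherit this bias.")
```

A check like this does not prove the model will be biased, but a heavily skewed dataset is exactly the precondition the hiring and lending examples above describe.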

For startups: regularly test the model on different demographic groups. If the model performs worse for one group than for another, that is a red flag. Document it, address it, and communicate the limitations. Startups that act proactively now will build a healthier reputation later.
