In the era of big data, more and more decisions are made by predictive models built on historical data: for example, automated CV screening of job applicants, credit scoring for loans, or profiling of potential suspects by the police.

Predictive models may discriminate against people even if the computing process is fair and well-intentioned. This is because most data mining methods assume that the historical data is correct and represents the population well, which is often not true in reality.
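
As a rough illustration of this point (not taken from the chapter), the following sketch trains an off-the-shelf classifier on synthetic, hypothetical hiring data in which equally qualified people from one group were historically accepted less often; the learned model reproduces that gap through a proxy feature, even though the protected attribute itself is never given to it. All names and numbers here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)   # protected attribute: group 0 or 1
skill = rng.normal(size=n)           # true qualification, same distribution in both groups

# Historical decisions: equally qualified people in group 0 were accepted less often.
penalty = np.where(group == 0, -1.0, 0.0)
accepted = (skill + penalty + rng.normal(scale=0.5, size=n)) > 0

# Train on the biased history, using the qualification plus a feature that
# merely proxies group membership (think of a postal code).
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, accepted)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: historical acceptance rate {accepted[group == g].mean():.2f}, "
          f"predicted acceptance rate {pred[group == g].mean():.2f}")
```

The computation itself is perfectly "fair": the model only minimizes prediction error on the data it is given. The discrimination comes from the data.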

Our book chapter describes several scenarios in which this may happen and discusses computational techniques for coping with it. The full book is available here.

The emerging field of discrimination-aware data mining studies how to make predictive models free from discrimination when the historical data on which they are built may be biased, incomplete, or contain past discriminatory decisions.
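
For a flavour of the kind of technique studied in this field, here is a minimal sketch of one pre-processing idea, reweighing: training instances are weighted so that the protected attribute and the historical label become statistically independent, and the classifier is then fitted on the weighted data. The data layout, variable names, and classifier choice below are assumptions for illustration, not the chapter's own method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(group, label):
    """Per-instance weight w(g, y) = P(group=g) * P(label=y) / P(group=g, label=y),
    so that group and label are statistically independent in the weighted data."""
    group, label = np.asarray(group), np.asarray(label)
    weights = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            mask = (group == g) & (label == y)
            if mask.any():
                weights[mask] = ((group == g).mean() * (label == y).mean()) / mask.mean()
    return weights

# Usage on (possibly biased) historical decisions: groups that were
# under-accepted in the past get up-weighted among the positives, so the
# fitted model no longer inherits the association between group and label.
# w = reweighing_weights(group, accepted)
# fair_model = LogisticRegression().fit(X, accepted, sample_weight=w)
```

Pre-processing like this is only one family of approaches; other techniques intervene in the learning algorithm itself or post-process the model's predictions.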