The propensity to take "mental shortcuts" (also known as heuristics) in judgment and decision-making is an inherent feature of human cognition and serves important adaptive purposes in everyday life and human-human interaction. When these heuristics produce systematic errors in our decision-making, however, they are called biases, which, if they accumulate over time, can produce substantial distortions of knowledge and behavior. Most artificial intelligence (AI) systems today are based on human-derived knowledge structures (ontologies) and/or annotated (big) data used for deep learning with artificial neural networks. Human cognitive biases may therefore be reproduced, inflated, and disseminated by AI systems, which could lead to a perpetuation of social injustices and discrimination rooted in human biases, e.g., with respect to ethnicity, gender, and other social markers. In this talk, we will discuss this overlapping realm of human and artificial biases and ways of mitigating their negative social effects.