At the BIAS @ ECIR 2021 Workshop, our lab members continue to investigate fairness in search and recommendation, a topic that has drawn increasing attention in recent years.
The paper explores how to define latent groups for fairness-aware ranking — groups that cannot be determined from the dataset's own features but must instead be inferred from external data sources. In particular, taking the Semantic Scholar dataset released in the TREC 2020 Fairness Ranking Track as a case study, we infer and extract multiple fairness-related dimensions of author identity, including gender and location, to construct groups.
We propose a fairness-aware re-ranking algorithm that incorporates both the weighted relevance and the group diversity of the items returned for a given query. Our experimental results demonstrate that different combinations of the relative weights assigned to relevance, gender, and location groups produce the expected trade-offs between ranking quality and group fairness.
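To give a flavor of how such a re-ranking might work, here is a minimal greedy sketch: at each step it selects the candidate maximizing a weighted sum of relevance and the marginal diversity gained by covering a not-yet-seen gender or location group. The weights `w_rel`, `w_gender`, and `w_loc` are hypothetical illustration knobs, and the whole function is our own simplified sketch, not the algorithm or weight settings from the paper.

```python
def rerank(candidates, w_rel=1.0, w_gender=0.5, w_loc=0.5):
    """Greedily re-rank candidates, trading off relevance against
    coverage of gender and location groups.

    candidates: list of dicts with 'score', 'gender', 'location' keys.
    Returns a new list in re-ranked order.
    """
    remaining = list(candidates)
    seen_gender, seen_loc = set(), set()
    ranked = []
    while remaining:
        def utility(c):
            # Diversity bonus only for groups not yet represented.
            gain_g = w_gender if c["gender"] not in seen_gender else 0.0
            gain_l = w_loc if c["location"] not in seen_loc else 0.0
            return w_rel * c["score"] + gain_g + gain_l
        best = max(remaining, key=utility)
        remaining.remove(best)
        seen_gender.add(best["gender"])
        seen_loc.add(best["location"])
        ranked.append(best)
    return ranked

# Toy example (made-up scores and groups): with nonzero diversity
# weights, "c" is promoted above the slightly more relevant "b"
# because it adds a new gender and a new location group.
docs = [
    {"id": "a", "score": 0.9, "gender": "F", "location": "EU"},
    {"id": "b", "score": 0.8, "gender": "F", "location": "EU"},
    {"id": "c", "score": 0.7, "gender": "M", "location": "NA"},
]
```

Raising `w_gender` or `w_loc` relative to `w_rel` pushes underrepresented groups higher in the ranking, which is the kind of trade-off the paper's experiments explore.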
Because the inferred group classifications can be inaccurate, in future work we plan to explore publicly available personal location data, such as Twitter profile locations.
Interested in learning more?