Taking a step towards fairness-aware ranking by defining latent groups using inferred features.

At the BIAS @ ECIR 2021 Workshop, our lab members continue to investigate fairness in search and recommendation, a topic that has drawn increasing attention in recent years.

The paper explores how to define latent groups for fairness-aware ranking: groups that cannot be determined from features contained in the dataset itself, but must instead be inferred from external data sources. Taking the Semantic Scholar dataset released for the TREC 2020 Fair Ranking Track as a case study, we infer multiple fairness-related dimensions of author identity, including gender and location, and use them to construct these groups.
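To make this concrete, here is a minimal sketch in Python of how such latent group labels might be constructed: the gender and location attributes are not present in the corpus itself, so they are filled in by lookups against external resources. The helper functions `infer_gender` and `infer_location` are hypothetical placeholders, not the paper's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Author:
    name: str
    affiliation: str

def infer_gender(name: str) -> str:
    """Hypothetical lookup against an external name-statistics resource."""
    # e.g., query a name database; return "female", "male", or "unknown"
    return "unknown"

def infer_location(affiliation: str) -> str:
    """Hypothetical mapping of an affiliation string to a region."""
    # e.g., geocode "University of Amsterdam" -> "Europe"
    return "unknown"

def latent_groups(author: Author) -> dict[str, str]:
    # The (gender, location) pair defines the author's latent group;
    # neither attribute is available in the corpus itself.
    return {
        "gender": infer_gender(author.name),
        "location": infer_location(author.affiliation),
    }
```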

Results

We propose a fairness-aware re-ranking algorithm that combines weighted relevance with diversity over the items returned for a given query. Our experiments show that varying the relative weights assigned to relevance and to the gender and location groups shifts the rankings' balance between relevance and group representation in the expected directions.
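As an illustration of the idea, the sketch below greedily re-ranks candidates by a weighted sum of relevance and a diversity bonus that decays as a gender or location group accumulates exposure in the ranking so far. The scoring function and the weights `w_rel`, `w_gender`, and `w_loc` are assumptions for illustration, not the paper's exact formulation.

```python
from collections import Counter

def rerank(candidates, w_rel=1.0, w_gender=0.5, w_loc=0.5, k=10):
    """Greedy fairness-aware re-ranking (illustrative sketch).

    `candidates` is a list of dicts with keys "id", "relevance",
    "gender", and "location".
    """
    ranked = []
    gender_seen = Counter()
    loc_seen = Counter()
    pool = list(candidates)
    while pool and len(ranked) < k:
        def score(c):
            # Diversity bonus shrinks as a group accumulates exposure,
            # so under-represented groups are boosted at each step.
            g_bonus = 1.0 / (1 + gender_seen[c["gender"]])
            l_bonus = 1.0 / (1 + loc_seen[c["location"]])
            return (w_rel * c["relevance"]
                    + w_gender * g_bonus
                    + w_loc * l_bonus)
        best = max(pool, key=score)
        pool.remove(best)
        ranked.append(best)
        gender_seen[best["gender"]] += 1
        loc_seen[best["location"]] += 1
    return ranked
```

Setting `w_gender` and `w_loc` to zero recovers a pure relevance ranking, while larger values trade relevance for broader group representation.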

Future work

Because the inferred group classifications can be inaccurate, in future work we plan to explore publicly available personal location data, such as locations listed in Twitter profiles.

Interested to learn more?

Read the full research paper here or watch the full presentation.
