
ExpScore: Learning Metrics for Recommendation Explanation

Another published paper from our lab members!

ABSTRACT
Many information access and machine learning systems, including recommender systems, lack transparency and accountability. High-quality recommendation explanations are of great significance to enhance the transparency and interpretability of such systems. However, evaluating the quality of recommendation explanations is still challenging due to the lack of human-annotated data and benchmarks. In this paper, we present a large explanation dataset named RecoExp, which contains thousands of crowdsourced ratings of perceived quality in explaining recommendations. To measure explainability in a comprehensive and interpretable manner, we propose ExpScore, a novel machine learning-based metric that incorporates the definition of explainability from various perspectives (e.g., relevance, readability, subjectivity, and sentiment polarity). Experiments demonstrate that ExpScore not only vastly outperforms existing metrics but also remains explainable itself. Both the RecoExp dataset and an open-source implementation of ExpScore will be released for the whole community. These resources and our findings can serve as forces of public good for scholars as well as recommender systems users.
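To give a flavor of the idea, here is a minimal, purely illustrative sketch of a feature-based learned metric for explanation quality, combining interpretable signals such as relevance and readability the way the abstract describes. The feature functions, training pairs, and model choice below are hypothetical stand-ins for illustration only and are not the ExpScore implementation from the paper (see the full article for that).

```python
# Illustrative sketch: learn an explanation-quality score from interpretable features.
# All feature definitions and data here are hypothetical examples, not the authors' ExpScore.
import numpy as np
from sklearn.linear_model import Ridge

def relevance(explanation: str, item_description: str) -> float:
    # Word-overlap proxy for how well the explanation relates to the recommended item.
    exp_words = set(explanation.lower().split())
    item_words = set(item_description.lower().split())
    return len(exp_words & item_words) / max(len(exp_words), 1)

def readability(explanation: str) -> float:
    # Crude readability proxy: shorter average word length scores higher.
    words = explanation.split()
    return 1.0 / (1.0 + float(np.mean([len(w) for w in words]))) if words else 0.0

def featurize(explanation: str, item_description: str) -> np.ndarray:
    # Additional perspectives (subjectivity, sentiment polarity) would be appended similarly.
    return np.array([relevance(explanation, item_description), readability(explanation)])

# Hypothetical training data: (explanation, item description, crowdsourced rating) triples.
pairs = [
    ("light and comfortable shoes for daily running", "running shoes with breathable mesh", 0.9),
    ("you might like this", "running shoes with breathable mesh", 0.2),
]
X = np.stack([featurize(e, d) for e, d, _ in pairs])
y = np.array([r for _, _, r in pairs])

model = Ridge(alpha=1.0).fit(X, y)  # regress human ratings on the explanation features
print(model.predict(X))             # predicted explanation-quality scores
```

Because the score is a learned combination of named, human-interpretable features, the metric itself stays explainable: one can inspect which signals drive a high or low score.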

Bingbing Wen | University of Washington, Seattle, WA, US | bingbw@uw.edu

Yunhe Feng | University of Washington, Seattle, WA, US | yunhe@uw.edu

Yongfeng Zhang | Rutgers University, New Brunswick, NJ, US | yongfeng.zhang@rutgers.edu

Chirag Shah | University of Washington, Seattle, WA, US | chirags@uw.edu

Full article: https://dl.acm.org/doi/pdf/10.1145/3485447.3512269