Month: November 2019

A new NSF grant for explainable recommendations

Dr. Yongfeng Zhang from Rutgers University and Dr. Chirag Shah from the University of Washington are recipients of a new NSF grant (3 years, $500k) to work on explainable recommendations. It’s a step toward curing the “runaway AI” problem!

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1910154

Recommendation systems are essential components of our daily life. Today, intelligent recommendation systems are used in many Web-based systems. These systems provide personalized information to support human decisions. Leading examples include e-commerce recommendations for everyday shopping, job recommendations for employment markets, and social recommendations to make people better connected. However, most recommendation systems merely suggest recommendations to users; they rarely tell users why such recommendations are provided. This is primarily due to the closed nature of the algorithms behind the systems, which are difficult to explain. The lack of good explainability sacrifices transparency, effectiveness, persuasiveness, and trustworthiness of recommendation systems. This research will allow personalized recommendations to be provided in more explainable manners, improving search performance and transparency. The research will benefit users in real systems through the researchers’ industry collaboration with e-commerce and social networks. New algorithms and datasets developed in the project will supplement courses in computer science and iSchool programs. Presentation of the work and demos will help to engage wider audiences that are interested in computational research. Ultimately, the project will make it easier for humans to understand and trust machine decisions.

This project will explore a new framework for explainable recommendation that involves both system designers and end users. The system designers will benefit from structured explanations that are generated for model diagnostics. The end users will benefit from receiving natural language explanations for various algorithmic decisions. This project will address three fundamental research challenges. First, it will create new machine learning methods for explainable decision making. Second, it will develop new models to generate free-text natural language explanations. Third, it will identify key factors to evaluate the quality of explanations. In the process, the project will also develop aggregated explainability measures and release evaluation benchmarks to support reproducible explainable recommendation research. The project will result in the dissemination of shared data and benchmarks to the Information Retrieval, Data Mining, Recommender System, and broader AI communities.

It’s a new chapter for us – at UW in Seattle

It’s been a bit quiet on iBlog lately, and there is a good reason. The lab, along with me, has moved from Rutgers University in NJ to the University of Washington (UW) in Seattle. This happened over the end of the summer and the beginning of the fall. Things were so chaotic at the time that we even missed noticing, let alone celebrating, nine years of the lab!

This transition is still in progress. Most of the PhD students are still in NJ, but new students and projects are starting up with the lab in Seattle. Over the course of the next few weeks and months, we will be bringing more updates to our websites and social media channels.

It is a new chapter for us, indeed, but the journey goes on. We are still seekers!