2008 | OriginalPaper | Chapter
Metrics for Evaluating the Serendipity of Recommendation Lists
Authors: Tomoko Murakami, Koichiro Mori, Ryohei Orihara
Published in: New Frontiers in Artificial Intelligence
Publisher: Springer Berlin Heidelberg
In this paper we propose the metrics unexpectedness and unexpectedness_r for measuring the serendipity of recommendation lists produced by recommender systems. Recommender systems have been evaluated in many ways. Although prediction quality is frequently measured by various accuracy metrics, recommender systems must be not only accurate but also useful. A few researchers have argued that the bottom-line measure of the success of a recommender system should be user satisfaction. The basic idea of our metrics is that unexpectedness is the distance between the results produced by the method to be evaluated and those produced by a primitive prediction method. Here, unexpectedness is a metric for a whole recommendation list, while unexpectedness_r is a variant that takes the ranking within the list into account. From the viewpoints of both accuracy and serendipity, we evaluated the results obtained by three prediction methods in experimental studies on television program recommendations.