We consider Information Retrieval evaluation, especially at TREC with the trec_eval program. It appears that systems obtain scores according not only to the relevance of the retrieved documents, but also to the documents' names in case of ties (i.e., when documents are retrieved with the same score). We consider this tie-breaking strategy an uncontrolled parameter influencing measure scores, and argue the case for fairer tie-breaking strategies. A study of 22
TREC editions reveals significant differences between TREC's conventional, unfair strategy and the fairer strategies we propose. This experimental result advocates using these fairer strategies when conducting evaluations.
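As a minimal sketch (not part of the paper), the following Python snippet illustrates the bias at issue: two documents retrieved with the same score swap ranks depending on how their names are ordered, which changes a rank-based measure such as Average Precision. The run, document names, and relevance judgments are hypothetical, and the descending-name convention is only assumed here for illustration.

```python
def average_precision(ranking, relevant):
    """Average Precision of a ranked list against a set of relevant docids."""
    hits, ap = 0, 0.0
    for rank, docid in enumerate(ranking, start=1):
        if docid in relevant:
            hits += 1
            ap += hits / rank
    return ap / len(relevant)

# Hypothetical run: doc1 and doc3 tie at score 0.8; doc3 is relevant, doc1 is not.
run = [("doc2", 0.9), ("doc1", 0.8), ("doc3", 0.8)]
relevant = {"doc2", "doc3"}

# Tie broken by ascending document name: doc1 is ranked before doc3.
asc = [d for d, _ in sorted(run, key=lambda t: (-t[1], t[0]))]

# Tie broken by descending document name (an assumed convention for this
# sketch): doc3 is ranked before doc1. Two stable sorts emulate the compound key.
by_name_desc = sorted(run, key=lambda t: t[0], reverse=True)
desc = [d for d, _ in sorted(by_name_desc, key=lambda t: -t[1])]

print(asc, average_precision(asc, relevant))    # ['doc2', 'doc1', 'doc3'] 0.833...
print(desc, average_precision(desc, relevant))  # ['doc2', 'doc3', 'doc1'] 1.0
```

Same run, same relevance judgments: Average Precision moves from 0.83 to 1.0 purely on the strength of the document names, which is precisely the uncontrolled parameter the abstract describes.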