Published in: Discover Computing 3/2020

21.09.2019 | Axiomatic Thinking for Information Retrieval

Evaluation measures for quantification: an axiomatic approach

By: Fabrizio Sebastiani

Abstract

Quantification is the task of estimating, given a set \(\sigma \) of unlabelled items and a set of classes \({\mathcal {C}}=\{c_{1}, \ldots , c_{|{\mathcal {C}}|}\}\), the prevalence (or “relative frequency”) in \(\sigma \) of each class \(c_{i}\in {\mathcal {C}}\). While quantification may in principle be solved by classifying each item in \(\sigma \) and counting how many such items have been labelled with \(c_{i}\), it has long been shown that this “classify and count” method yields suboptimal quantification accuracy. As a result, quantification is no longer considered a mere byproduct of classification, and has evolved as a task of its own. While the scientific community has devoted a lot of attention to devising more accurate quantification methods, it has not devoted much to discussing what properties an evaluation measure for quantification (EMQ) should enjoy, and which EMQs should be adopted as a result. This paper lays down a number of interesting properties that an EMQ may or may not enjoy, discusses if (and when) each of these properties is desirable, surveys the EMQs that have been used so far, and discusses whether they enjoy or not the above properties. As a result of this investigation, some of the EMQs that have been used in the literature turn out to be severely unfit, while others emerge as closer to what the quantification community actually needs. However, a significant result is that no existing EMQ satisfies all the properties identified as desirable, thus indicating that more research is needed in order to identify (or synthesize) a truly adequate EMQ.
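The "classify and count" baseline the abstract mentions can be sketched in a few lines. This is not code from the paper; the function and the toy classifier below are purely illustrative.

```python
# Minimal sketch (not from the paper) of "classify and count" (CC):
# estimate class prevalences in an unlabelled sample by classifying
# each item and normalizing the label counts.
from collections import Counter

def classify_and_count(items, classifier, classes):
    """Return the estimated prevalence p_hat(c) for each class c."""
    counts = Counter(classifier(x) for x in items)
    n = len(items)
    return {c: counts[c] / n for c in classes}

# Toy usage with a trivial stand-in "classifier":
items = ["spam!!", "hello", "buy now!!", "meeting at 3"]
clf = lambda x: "c1" if "!" in x else "c2"
prevalences = classify_and_count(items, clf, ["c1", "c2"])
print(prevalences)  # {'c1': 0.5, 'c2': 0.5}
```

As the abstract notes, this estimator inherits the classifier's bias (e.g., a classifier that systematically under-predicts a rare class will under-estimate its prevalence), which is why dedicated quantification methods outperform it.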


Footnotes
1
Consistent with most mathematical literature, we use the caret symbol (\(^{\hat{\,}}\)) to indicate estimation.
 
2
In order to keep things simple we avoid overspecifying the notation, thus leaving some aspects of it implicit; e.g., in order to indicate a true distribution p of the unlabelled items in a sample \(\sigma \) across a codeframe \({\mathcal {C}}\) we will simply write p instead of the more cumbersome \(p_{\sigma }^{{\mathcal {C}}}\), thus letting \(\sigma \) and \({\mathcal {C}}\) be inferred from context.
 
3
In this paper we do not discuss the evaluation of ordinal quantification (OQ), defined as SLQ with a codeframe \({\mathcal {C}}=\{c_{1}, \ldots , c_{|{\mathcal {C}}|}\}\) on which a total order \(c_{1} \prec \cdots \prec c_{|{\mathcal {C}}|}\) is defined. Aside from reasons of space, the reason for disregarding OQ is that there has been very little work on it [the only papers we know of being (Da San Martino et al. 2016a, b; Esuli 2016)], and that only one measure for OQ (the Earth Mover’s Distance—see Esuli and Sebastiani 2010) has been proposed and used so far. For the same reasons we do not discuss regression quantification (RQ), the task that stands to metric regression as single-label quantification stands to single-label classification. RQ has been studied even less than OQ, the only work to have appeared on this theme so far being, to the best of our knowledge, Bella et al. (2014), which proposed the Cramér-von-Mises u-statistic as an evaluation measure (see Bella et al. 2014 for details).
 
4
Note that two distributions p(c) and \({\hat{p}}(c)\) over \({\mathcal {C}}\) are essentially two nonnegative-valued, length-normalized vectors of dimensionality \(|{\mathcal {C}}|\). The literature on EMQs thus obviously intersects the literature on functions for computing the similarity of two vectors.
 
5
A divergence is often indicated by the notation \(D(p||{\hat{p}})\); we will prefer the more neutral notation \(D(p,{\hat{p}})\). Note also that a divergence can take as arguments any two distributions p and q defined on the same space of events, i.e., p and q need not be a true distribution and a predicted distribution. However, since we will consider divergences only as measures of fit between a true distribution and a predicted distribution, we will use the more specific notation \(D(p,{\hat{p}})\) rather than the more general \(D(p,q)\).
 
6
By the “range” of an EMQ here we actually mean its image (i.e., the set of values that the EMQ actually takes for its admissible input values), and not just its codomain.
 
7
One might argue that underestimating the prevalence of a class \(c_{1}\) always implies overestimating the prevalence of another class \(c_{2}\). However, there are cases in which \(c_{1}\) and \(c_{2}\) are not equally important. For instance, if \({\mathcal {C}}=\{c_{1},c_{2}\}\), with \(c_{1}\) the class of patients that suffer from a certain rare disease (say, one such that \(p(c_{1})=.0001\)) and \(c_{2}\) the class of patients who do not, the class whose prevalence we really want to quantify is \(c_{1}\), the prevalence of \(c_{2}\) being derivative. So, what we really care about is that underestimating \(p(c_{1})\) and overestimating \(p(c_{1})\) are equally penalized. The formulation of IMP, which involves underestimation and overestimation in a perfectly symmetric way, is strong enough that IMP is not satisfied (as we will see in Sect. 4) by a number of important EMQs.
 
8
In this case we enter the realm of cost-sensitive quantification, which is outside the scope of this paper; see (Forman 2008, §4 and §5) and (González et al. 2017, §10) for more on the relationships between quantification and cost.
 
9
The symbol \({\pm }\) stands for “plus or minus” while \({\mp }\) stands for “minus or plus”; when symbol \({\pm }\) evaluates to \(+\), symbol \({\mp }\) evaluates to −, and vice versa.
 
10
This is the basis of the “Strict Monotonicity” property discussed in Sebastiani (2015) for the evaluation of classification systems.
 
11
In Eq. 14 and in the rest of this paper the \(\log \) operator denotes the natural logarithm.
 
12
Since the standard logistic function \(\frac{e^{x}}{e^{x}+1}\) ranges (for the domain \([0,+\,\infty )\) we are interested in) on [\(\frac{1}{2}\),1], a multiplication by 2 is applied in order for it to range on [1,2], and 1 is subtracted in order for it to range on [0,1], as desired.
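The rescaling described in this footnote can be checked numerically. The snippet below (illustrative, not from the paper) implements \(2 \cdot \frac{e^{x}}{e^{x}+1} - 1\), which algebraically equals \(\frac{e^{x}-1}{e^{x}+1}\), and confirms it maps \([0,+\infty)\) onto [0,1):

```python
import math

def rescale(x):
    """Map a nonnegative value x onto [0, 1) by shifting and scaling
    the standard logistic function: 2 * e^x / (e^x + 1) - 1."""
    return 2.0 * math.exp(x) / (math.exp(x) + 1.0) - 1.0

# Algebraically equivalent closed form: (e^x - 1) / (e^x + 1)
assert abs(rescale(3.0) - (math.exp(3.0) - 1) / (math.exp(3.0) + 1)) < 1e-12

print(rescale(0.0))   # 0.0: a zero divergence maps to 0
print(rescale(10.0))  # close to 1: large divergences approach 1
```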
 
13
Esuli and Sebastiani (2014) mistakenly defined \({{\,\mathrm{NKLD}\,}}(p,{\hat{p}})\) as \(\frac{e^{{{\,\mathrm{KLD}\,}}(p,{\hat{p}})}-1}{e^{{{\,\mathrm{KLD}\,}}(p,{\hat{p}})}}\); this was later corrected into the formulation of Eq. 16 [which is equivalent to \(\frac{e^{{{\,\mathrm{KLD}\,}}(p,{\hat{p}})}-1}{e^{{{\,\mathrm{KLD}\,}}(p,{\hat{p}})}+1}\)] by Gao and Sebastiani (2016).
 
14
This is true only at a first approximation, though. In more precise terms, the maximum value that \({{\,\mathrm{NKLD}\,}}\) can have is strictly smaller than 1 because the maximum value that \({{\,\mathrm{KLD}\,}}\) can have is finite (see Eq. 15) and, as discussed at the end of Sect. 4.7, dependent on p, on the cardinality of \({\mathcal {C}}\), and even on the value of \(\epsilon \); as a result, the maximum value that \({{\,\mathrm{NKLD}\,}}\) can have is also dependent on these three variables (although it is always very close to 1—see the example in “Appendix 2.1” section).
 
15
It has to be remarked that, in some cases, differences of the latter type may be moderate, especially when |C| is high. For instance, when \(|C|=2\) the value of \(z_{AE}\) ranges on [0.5,1.0], but when \(|C|=10\) it ranges on [0.18, 0.20].
 
16
A similar situation occurs when evaluating multi-label classification via “microaveraged \(F_{1}\)”, a measure in which the classes with higher prevalence weigh more on the final result.
 
17
It is this author’s experience that even measures such as \(F_{1}\) can be considered by customers “esoteric”.
 
18
As an example, assume a (very realistic) scenario in which \(|\sigma |=1000\), \({\mathcal {C}}=\{c_{1},c_{2}\}\), \(p(c_{1})=0.01\), and in which three different quantifiers \({\hat{p}}'\), \({\hat{p}}''\), \({\hat{p}}'''\) are such that \({\hat{p}}'(c_{1})=0.0101\), \({\hat{p}}''(c_{1})=0.0110\), \({\hat{p}}'''(c_{1})=0.0200\). In this scenario \({{\,\mathrm{KLD}\,}}\) ranges on [0, 7.46], \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}')=4.78\)e–07, \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}'')=4.53\)e–05, \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}''')=3.02\)e–03, i.e., the difference between \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}')\) and \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}'')\) (and the one between \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}'')\) and \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}''')\)) is 2 orders of magnitude, while the difference between \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}')\) and \({{\,\mathrm{KLD}\,}}(p,{\hat{p}}''')\) is no less than 4 orders of magnitude. The increase in error (as computed by \({{\,\mathrm{KLD}\,}}\)) deriving from using \({\hat{p}}'''\) instead of \({\hat{p}}'\) is + 632,599%.
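The scenario in this footnote can be reproduced approximately with a direct implementation of \({{\,\mathrm{KLD}\,}}\). The sketch below (illustrative, not the paper's code) omits the probability smoothing the paper applies, so its values only approximate those reported above, but the orders-of-magnitude gaps between the three quantifiers are preserved:

```python
import math

def kld(p, p_hat):
    """Kullback-Leibler divergence between two discrete distributions,
    given as dicts mapping class -> probability (all values > 0)."""
    return sum(p[c] * math.log(p[c] / p_hat[c]) for c in p)

# Footnote scenario: |C| = 2, p(c1) = 0.01, three quantifiers.
# (No smoothing applied here, so values differ slightly from the text.)
p = {"c1": 0.01, "c2": 0.99}
for name, est in [("p'", 0.0101), ("p''", 0.0110), ("p'''", 0.0200)]:
    p_hat = {"c1": est, "c2": 1.0 - est}
    print(name, kld(p, p_hat))
# Each step up in misestimation costs roughly two orders of magnitude,
# illustrating how steeply KLD penalizes growing errors on rare classes.
```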
 
19
We assume \(|D|=1{,}000{,}000\). This assumption has no bearing on the qualitative conclusions we draw here, and only affects the magnitude of the values in the table, since the value of |D| affects the value of \(\epsilon \), and thus the values of \({{\,\mathrm{RAE}\,}}\), \({{\,\mathrm{NRAE}\,}}\), \({{\,\mathrm{DR}\,}}\), \({{\,\mathrm{KLD}\,}}\), \({{\,\mathrm{NKLD}\,}}\), and \({{\,\mathrm{PD}\,}}\) (see Sect. 4.3 and following).
 
20
For the EMQs that require smoothed probabilities to be used, these definitions obviously need to be replaced by \(a\equiv p_{s}(c_{1})\) and \(x\equiv {\hat{p}}_{s}(c_{1})\).
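The smoothed probabilities \(p_{s}\) mentioned here can be sketched with additive smoothing. Note the assumptions: the paper defines its smoothing in Sect. 4.3 (not reproduced in these footnotes), and the choice \(\epsilon = 1/(2|\sigma |)\) below is a common convention in the quantification literature, used here only for illustration:

```python
def smooth(p, n):
    """Additive smoothing of a distribution p (dict class -> prob)
    over a sample of n items, using eps = 1/(2n). This eps is an
    assumption (a common choice in the quantification literature),
    not necessarily the exact definition used in the paper."""
    eps = 1.0 / (2.0 * n)
    denom = sum(p.values()) + eps * len(p)
    return {c: (v + eps) / denom for c, v in p.items()}

p = {"c1": 0.01, "c2": 0.99}
ps = smooth(p, 1000)
print(ps)                 # both probabilities pulled slightly toward uniform
print(sum(ps.values()))   # still sums to 1: the result is a distribution
```

Smoothing keeps all probabilities strictly positive, which is what makes ratio-based EMQs such as \({{\,\mathrm{RAE}\,}}\) and \({{\,\mathrm{KLD}\,}}\) well defined when a class has zero (true or predicted) prevalence.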
 
References
Beijbom, O., Hoffman, J., Yao, E., Darrell, T., Rodriguez-Ramirez, A., Gonzalez-Rivero, M., et al. (2015). Quantification in-the-wild: Data-sets and baselines. Presented at the NIPS 2015 workshop on transfer and multi-task learning, Montreal, CA. CoRR. arXiv:1510.04811.
Bella, A., Ferri, C., Hernández-Orallo, J., & Ramírez-Quintana, M. J. (2010). Quantification via probability estimators. In Proceedings of the 11th IEEE international conference on data mining (ICDM 2010). Sydney, AU (pp. 737–742). https://doi.org/10.1109/icdm.2010.75.
Busin, L., & Mizzaro, S. (2013). Axiometrics: An axiomatic approach to information retrieval effectiveness metrics. In Proceedings of the 4th international conference on the theory of information retrieval (ICTIR 2013). Copenhagen, DK (p. 8). https://doi.org/10.1145/2499178.2499182.
Card, D., & Smith, N. A. (2018). The importance of calibration for estimating proportions from annotations. In Proceedings of the 2018 conference of the North American chapter of the association for computational linguistics (HLT-NAACL 2018). New Orleans, US (pp. 1636–1646). https://doi.org/10.18653/v1/n18-1148.
Da San Martino, G., Gao, W., & Sebastiani, F. (2016b). QCRI at SemEval-2016 task 4: Probabilistic methods for binary and ordinal quantification. In Proceedings of the 10th international workshop on semantic evaluation (SemEval 2016). San Diego, US (pp. 58–63). https://doi.org/10.18653/v1/s16-1006.
dos Reis, D. M., Maletzke, A., Cherman, E., & Batista, G. E. (2018a). One-class quantification. In Proceedings of the European conference on machine learning and principles and practice of knowledge discovery in databases (ECML-PKDD 2018). Dublin, IE.
dos Reis, D. M., Maletzke, A. G., Silva, D. F., & Batista, G. E. (2018b). Classifying and counting with recurrent contexts. In Proceedings of the 24th ACM international conference on knowledge discovery and data mining (KDD 2018). London, UK (pp. 1983–1992). https://doi.org/10.1145/3219819.3220059.
du Plessis, M. C., & Sugiyama, M. (2012). Semi-supervised learning of class balance under class-prior change by distribution matching. In Proceedings of the 29th international conference on machine learning (ICML 2012). Edinburgh, UK.
Esuli, A., Moreo, A., & Sebastiani, F. (2018). A recurrent neural network for sentiment quantification. In Proceedings of the 27th ACM international conference on information and knowledge management (CIKM 2018). Torino, IT (pp. 1775–1778). https://doi.org/10.1145/3269206.3269287.
Esuli, A., & Sebastiani, F. (2010). Sentiment quantification. IEEE Intelligent Systems, 25(4), 72–75.
Esuli, A., & Sebastiani, F. (2014). Explicit loss minimization in quantification applications (preliminary draft). In Proceedings of the 8th international workshop on information filtering and retrieval (DART 2014). Pisa, IT (pp. 1–11).
Ferrante, M., Ferro, N., & Maistro, M. (2015). Towards a formal framework for utility-oriented measurements of retrieval effectiveness. In Proceedings of the 5th ACM international conference on the theory of information retrieval (ICTIR 2015). Northampton, US (pp. 21–30). https://doi.org/10.1145/2808194.2809452.
Forman, G. (2006). Quantifying trends accurately despite classifier error and class imbalance. In Proceedings of the 12th ACM SIGKDD international conference on knowledge discovery and data mining (KDD 2006). Philadelphia, US (pp. 157–166). https://doi.org/10.1145/1150402.1150423.
Gao, W., & Sebastiani, F. (2015). Tweet sentiment: From classification to quantification. In Proceedings of the 7th international conference on advances in social network analysis and mining (ASONAM 2015). Paris, FR (pp. 97–104). https://doi.org/10.1145/2808797.2809327.
González-Castro, V., Alaiz-Rodríguez, R., Fernández-Robles, L., Guzmán-Martínez, R., & Alegre, E. (2010). Estimating class proportions in boar semen analysis using the Hellinger distance. In Proceedings of the 23rd international conference on industrial engineering and other applications of applied intelligent systems (IEA/AIE 2010). Cordoba, ES (pp. 284–293). https://doi.org/10.1007/978-3-642-13022-9_29.
Kar, P., Li, S., Narasimhan, H., Chawla, S., & Sebastiani, F. (2016). Online optimization methods for the quantification problem. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (KDD 2016). San Francisco, US (pp. 1625–1634). https://doi.org/10.1145/2939672.2939832.
Keith, K. A., & O’Connor, B. (2018). Uncertainty-aware generative models for inferring document class prevalence. In Proceedings of the conference on empirical methods in natural language processing (EMNLP 2018). Brussels, BE.
Levin, R., & Roitman, H. (2017). Enhanced probabilistic classify and count methods for multi-label text quantification. In Proceedings of the 7th ACM international conference on the theory of information retrieval (ICTIR 2017). Amsterdam, NL (pp. 229–232). https://doi.org/10.1145/3121050.3121083.
MacKay, D. J. (2003). Information theory, inference and learning algorithms. Cambridge: Cambridge University Press.
Maletzke, A. G., dos Reis, D. M., & Batista, G. E. (2017). Quantification in data streams: Initial results. In Proceedings of the 2017 Brazilian conference on intelligent systems (BRACIS 2017). Uberlândia, BZ (pp. 43–48). https://doi.org/10.1109/BRACIS.2017.74.
Milli, L., Monreale, A., Rossetti, G., Giannotti, F., Pedreschi, D., & Sebastiani, F. (2013). Quantification trees. In Proceedings of the 13th IEEE international conference on data mining (ICDM 2013). Dallas, US (pp. 528–536). https://doi.org/10.1109/icdm.2013.122.
Milli, L., Monreale, A., Rossetti, G., Pedreschi, D., Giannotti, F., & Sebastiani, F. (2015). Quantification in social networks. In Proceedings of the 2nd IEEE international conference on data science and advanced analytics (DSAA 2015). Paris, FR. https://doi.org/10.1109/dsaa.2015.7344845.
Nakov, P., Ritter, A., Rosenthal, S., Sebastiani, F., & Stoyanov, V. (2016). SemEval-2016 task 4: Sentiment analysis in Twitter. In Proceedings of the 10th international workshop on semantic evaluation (SemEval 2016). San Diego, US (pp. 1–18). https://doi.org/10.18653/v1/s16-1001.
Sebastiani, F. (2015). An axiomatically derived measure for the evaluation of classification algorithms. In Proceedings of the 5th ACM international conference on the theory of information retrieval (ICTIR 2015). Northampton, US (pp. 11–20). https://doi.org/10.1145/2808194.2809449.
Tasche, D. (2017). Fisher consistency for prior probability shift. Journal of Machine Learning Research, 18, 95:1–95:32.
Vaz, A. F., Izbicki, R., & Stern, R. B. (2018). Quantification under prior probability shift: The ratio estimator and its extensions. arXiv preprint arXiv:1807.03929.
Metadata
Title
Evaluation measures for quantification: an axiomatic approach
Author
Fabrizio Sebastiani
Publication date
21.09.2019
Publisher
Springer Netherlands
Published in
Discover Computing / Issue 3/2020
Print ISSN: 2948-2984
Electronic ISSN: 2948-2992
DOI
https://doi.org/10.1007/s10791-019-09363-y
