Published in: International Journal of Data Science and Analytics 3/2022

01-03-2022 | Review

A survey on machine learning methods for churn prediction

Authors: Louis Geiler, Séverine Affeldt, Mohamed Nadif

Abstract

The diversity and specificities of today's businesses have given rise to a wide range of prediction techniques. In particular, churn prediction is a major economic concern for many companies. The purpose of this study is to draw general guidelines from a benchmark of supervised machine learning techniques, combined with widely used data sampling approaches, on publicly available datasets in the context of churn prediction. Choosing a priori the most appropriate sampling method and the most suitable classification model is not trivial, as the choice strongly depends on the intrinsic characteristics of the data. In this paper, we study the behavior of eleven supervised and semi-supervised learning methods and seven sampling approaches on sixteen diverse, publicly available churn-like datasets. Our evaluations, reported in terms of the Area Under the Curve (AUC) metric, explore the influence of sampling approaches and data characteristics on the performance of the studied learning methods. In addition, we propose the Nemenyi test and Correspondence Analysis as means of comparing and visualizing the association between classification algorithms, sampling methods and datasets. Most importantly, our experiments lead to a practical recommendation for a prediction pipeline based on an ensemble approach, which can be successfully applied to a wide range of churn-like datasets.
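The benchmark design described above — pair a sampling approach with a classifier and score it by AUC — can be illustrated with a minimal sketch. This is not the authors' exact pipeline; the dataset is synthetic, the sampler is a hand-rolled random under-sampler, and logistic regression stands in for the eleven studied methods.

```python
# Sketch: random under-sampling + classifier, evaluated with AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Imbalanced churn-like data: roughly 10% positive (churn) class.
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Random under-sampling: keep all churners, subsample non-churners to match.
pos = np.flatnonzero(y_tr == 1)
neg = rng.choice(np.flatnonzero(y_tr == 0), size=len(pos), replace=False)
idx = np.concatenate([pos, neg])

clf = LogisticRegression(max_iter=1000).fit(X_tr[idx], y_tr[idx])
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.3f}")
```

Swapping the sampler (SMOTE, ADASYN, Tomek links, ...) or the classifier into this skeleton reproduces one cell of the kind of benchmark grid the survey evaluates.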
Footnotes
1
In a binary or churn prediction context, \(G=2\) and we consider the two classes \(+\) and \(-\), which correspond to the churn and non-churn classes, respectively.
 
2
Before fitting a model, categorical variables are converted to their numerical representation through a dummification process where each category becomes a binary variable.
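A minimal sketch of this dummification step, here using `pandas.get_dummies` on a hypothetical churn feature table (the column names are illustrative, not from the paper):

```python
# One-hot encode a categorical column: each category becomes a binary variable.
import pandas as pd

df = pd.DataFrame({"plan": ["basic", "premium", "basic"],
                   "tenure": [3, 12, 7]})
encoded = pd.get_dummies(df, columns=["plan"])
print(sorted(encoded.columns))  # ['plan_basic', 'plan_premium', 'tenure']
```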
 
3
In our experiments, we consider both the linear SVM and the SVM-rbf, a kernel SVM using the Radial Basis Function, following the results of Amnueypornsakul et al. [6].
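As a hedged sketch, the two SVM variants in this footnote correspond to the `linear` and `rbf` kernels of scikit-learn's `SVC` (synthetic data; the survey's actual datasets and tuning are not reproduced here):

```python
# Compare a linear SVM and an RBF-kernel SVM by cross-validated AUC.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

scores = {}
for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel, random_state=0)
    # roc_auc scoring uses SVC.decision_function, so probability=True
    # is not required.
    scores[kernel] = cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

print(scores)
```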
 
4
Because GEV-NN, iForest and DevNet are specifically designed for imbalanced binary classification or anomaly detection, these approaches are evaluated only without sampling.
 
Literature
1.
go back to reference Abdillah, M.F., Nasri, J., Aditsania, A.: Using deep learning to predict customer churn in a mobile telecomunication network. eProc. Eng. 3(2) (2016) Abdillah, M.F., Nasri, J., Aditsania, A.: Using deep learning to predict customer churn in a mobile telecomunication network. eProc. Eng. 3(2) (2016)
4.
go back to reference Akbani, R., Kwek, S., Japkowicz, N.: Applying support vector machines to imbalanced datasets. In: European Conference on Machine Learning, pp. 39–50. Springer (2004) Akbani, R., Kwek, S., Japkowicz, N.: Applying support vector machines to imbalanced datasets. In: European Conference on Machine Learning, pp. 39–50. Springer (2004)
5.
go back to reference Alam, S., Sonbhadra, S.K., Agarwal, S., et al.: One-class support vector classifiers: a survey. Knowl. Based Syst. 196(105), 754 (2020) Alam, S., Sonbhadra, S.K., Agarwal, S., et al.: One-class support vector classifiers: a survey. Knowl. Based Syst. 196(105), 754 (2020)
7.
go back to reference Anderson, E.W., Sullivan, M.W.: The antecedents and consequences of customer satisfaction for firms. Mark. Sci. 12(2), 125–143 (1993) CrossRef Anderson, E.W., Sullivan, M.W.: The antecedents and consequences of customer satisfaction for firms. Mark. Sci. 12(2), 125–143 (1993) CrossRef
8.
go back to reference Batista, G.E., Bazzan, A.L., Monard, M.C., et al.: Balancing training data for automated annotation of keywords: a case study. In: WOB, pp. 10–18 (2003) Batista, G.E., Bazzan, A.L., Monard, M.C., et al.: Balancing training data for automated annotation of keywords: a case study. In: WOB, pp. 10–18 (2003)
9.
go back to reference Batista, G.E., Prati, R.C., Monard, M.C.: A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explor. Newsl 6(1), 20–29 (2004) CrossRef Batista, G.E., Prati, R.C., Monard, M.C.: A study of the behavior of several methods for balancing machine learning training data. ACM SIGKDD Explor. Newsl 6(1), 20–29 (2004) CrossRef
10.
go back to reference Batuwita, R., Palade, V.: Efficient resampling methods for training support vector machines with imbalanced datasets. In: The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2010) Batuwita, R., Palade, V.: Efficient resampling methods for training support vector machines with imbalanced datasets. In: The 2010 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE (2010)
11.
go back to reference Benczúr, A.A., Csalogány, K., Lukács, L., et al.: Semi-supervised learning: a comparative study for web spam and telephone user churn. In: In Graph Labeling Workshop in conjunction with ECML/PKDD, Citeseer (2007) Benczúr, A.A., Csalogány, K., Lukács, L., et al.: Semi-supervised learning: a comparative study for web spam and telephone user churn. In: In Graph Labeling Workshop in conjunction with ECML/PKDD, Citeseer (2007)
12.
go back to reference Benoit, D.F., Van den Poel, D.: Improving customer retention in financial services using kinship network information. Expert Syst. Appl. 39(13), 11,435-11,442 (2012) CrossRef Benoit, D.F., Van den Poel, D.: Improving customer retention in financial services using kinship network information. Expert Syst. Appl. 39(13), 11,435-11,442 (2012) CrossRef
13.
go back to reference Bermejo, P., Gámez, J.A., Puerta, J.M.: Improving the performance of naive bayes multinomial in e-mail foldering by introducing distribution-based balance of datasets. Expert Syst. Appl. 38(3), 2072–2080 (2011) CrossRef Bermejo, P., Gámez, J.A., Puerta, J.M.: Improving the performance of naive bayes multinomial in e-mail foldering by introducing distribution-based balance of datasets. Expert Syst. Appl. 38(3), 2072–2080 (2011) CrossRef
14.
go back to reference Bhattacharya, C.: When customers are members: customer retention in paid membership contexts. J. Acad. Mark. Sci. 26(1), 31–44 (1998) CrossRef Bhattacharya, C.: When customers are members: customer retention in paid membership contexts. J. Acad. Mark. Sci. 26(1), 31–44 (1998) CrossRef
15.
go back to reference Błaszczyński, J., Stefanowski, J.: Local data characteristics in learning classifiers from imbalanced data. In: Advances in Data Analysis with Computational Intelligence Methods, pp. 51–85. Springer (2018) Błaszczyński, J., Stefanowski, J.: Local data characteristics in learning classifiers from imbalanced data. In: Advances in Data Analysis with Computational Intelligence Methods, pp. 51–85. Springer (2018)
16.
go back to reference Bolton, R.N.: A dynamic model of the duration of the customer’s relationship with a continuous service provider: the role of satisfaction. Market. Sci. 17(1), 45–65 (1998) CrossRef Bolton, R.N.: A dynamic model of the duration of the customer’s relationship with a continuous service provider: the role of satisfaction. Market. Sci. 17(1), 45–65 (1998) CrossRef
17.
go back to reference Bolton, R.N., Bronkhorst, T.M.: The relationship between customer complaints to the firm and subsequent exit behavior. ACR North Am. Adv. 22, 94–100 (1995) Bolton, R.N., Bronkhorst, T.M.: The relationship between customer complaints to the firm and subsequent exit behavior. ACR North Am. Adv. 22, 94–100 (1995)
19.
go back to reference Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996) MATH Breiman, L.: Bagging predictors. Mach. Learn. 24(2), 123–140 (1996) MATH
21.
go back to reference Breiman, L., Spector, P.: Submodel selection and evaluation in regression: the x-random case. Int. Stat. Rev. 60(3), 291–319 (1992) CrossRef Breiman, L., Spector, P.: Submodel selection and evaluation in regression: the x-random case. Int. Stat. Rev. 60(3), 291–319 (1992) CrossRef
22.
go back to reference Breiman, L., Friedman, J.H., Olshen, R.A., et al.: Classification and Regression Trees. Wadsworth, Belmont (1984) MATH Breiman, L., Friedman, J.H., Olshen, R.A., et al.: Classification and Regression Trees. Wadsworth, Belmont (1984) MATH
24.
go back to reference Burez, J., Van den Poel, D.: Handling class imbalance in customer churn prediction. Expert Syst. Appl. 36(3), 4626–4636 (2009) CrossRef Burez, J., Van den Poel, D.: Handling class imbalance in customer churn prediction. Expert Syst. Appl. 36(3), 4626–4636 (2009) CrossRef
25.
go back to reference Burman, P.: A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods. Biometrika 76(3), 503–514 (1989) MathSciNetMATHCrossRef Burman, P.: A comparative study of ordinary cross-validation, v-fold cross-validation and the repeated learning-testing methods. Biometrika 76(3), 503–514 (1989) MathSciNetMATHCrossRef
26.
go back to reference Burrus, C.S., Barreto, J., Selesnick, I.W.: Iterative reweighted least-squares design of fir filters. IEEE Trans. Signal Process. 42(11), 2926–2936 (1994) CrossRef Burrus, C.S., Barreto, J., Selesnick, I.W.: Iterative reweighted least-squares design of fir filters. IEEE Trans. Signal Process. 42(11), 2926–2936 (1994) CrossRef
27.
go back to reference Cabral, G.G., Oliveira, A.: One-class classification for heart disease diagnosis. In: IEEE International Conference on Systems, Man, and Cybernetics (SMC) pp. 2551–2556 (2014) Cabral, G.G., Oliveira, A.: One-class classification for heart disease diagnosis. In: IEEE International Conference on Systems, Man, and Cybernetics (SMC) pp. 2551–2556 (2014)
28.
go back to reference Castanedo, F., Valverde, G., Zaratiegui, J., et al.: Using deep learning to predict customer churn in a mobile telecommunication network (2014) Castanedo, F., Valverde, G., Zaratiegui, J., et al.: Using deep learning to predict customer churn in a mobile telecommunication network (2014)
29.
go back to reference Cervantes, J., Garcia-Lamont, F., Rodríguez-Mazahua, L., et al.: A comprehensive survey on support vector machine classification: applications, challenges and trends. Neurocomputing 408, 189–215 (2020) CrossRef Cervantes, J., Garcia-Lamont, F., Rodríguez-Mazahua, L., et al.: A comprehensive survey on support vector machine classification: applications, challenges and trends. Neurocomputing 408, 189–215 (2020) CrossRef
31.
go back to reference Chawla, N.V., Bowyer, K.W., Hall, L.O., et al.: Smote: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002) MATHCrossRef Chawla, N.V., Bowyer, K.W., Hall, L.O., et al.: Smote: synthetic minority over-sampling technique. J. Artif. Intell. Res. 16, 321–357 (2002) MATHCrossRef
32.
go back to reference Chen, C., Liaw, A., Breiman, L., et al.: Using random forest to learn imbalanced data. Univ. Calif. Berkeley 110(1–12), 24 (2004) Chen, C., Liaw, A., Breiman, L., et al.: Using random forest to learn imbalanced data. Univ. Calif. Berkeley 110(1–12), 24 (2004)
33.
go back to reference Chen, T., Guestrin, C.: Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM (2016) Chen, T., Guestrin, C.: Xgboost: A scalable tree boosting system. In: Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM (2016)
34.
go back to reference Chen, Y., Xie, X., Lin, S.D., et al.: Wsdm cup 2018: music recommendation and churn prediction. In: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 8–9. ACM (2018) Chen, Y., Xie, X., Lin, S.D., et al.: Wsdm cup 2018: music recommendation and churn prediction. In: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 8–9. ACM (2018)
35.
go back to reference Chowdhury, A., Alspector, J.: Data duplication: an imbalance problem? In: ICML’2003 Workshop on Learning from Imbalanced Data Sets (II), Washington, DC (2003) Chowdhury, A., Alspector, J.: Data duplication: an imbalance problem? In: ICML’2003 Workshop on Learning from Imbalanced Data Sets (II), Washington, DC (2003)
36.
go back to reference Clemente, M., Giner-Bosch, V., San Matías, S.: Assessing classification methods for churn prediction by composite indicators. Manuscript, Dept of Applied Statistics, OR & Quality, UniversitatPolitècnica de València, Camino de Vera s/n 46022 (2010) Clemente, M., Giner-Bosch, V., San Matías, S.: Assessing classification methods for churn prediction by composite indicators. Manuscript, Dept of Applied Statistics, OR & Quality, UniversitatPolitècnica de València, Camino de Vera s/n 46022 (2010)
38.
go back to reference Coussement, K., De Bock, K.W.: Customer churn prediction in the online gambling industry: the beneficial effect of ensemble learning. J. Bus. Res. 66(9), 1629–1636 (2013) CrossRef Coussement, K., De Bock, K.W.: Customer churn prediction in the online gambling industry: the beneficial effect of ensemble learning. J. Bus. Res. 66(9), 1629–1636 (2013) CrossRef
39.
go back to reference Coussement, K., Van den Poel, D.: Churn prediction in subscription services: an application of support vector machines while comparing two parameter-selection techniques. Expert Syst. Appl. 34(1), 313–327 (2008) CrossRef Coussement, K., Van den Poel, D.: Churn prediction in subscription services: an application of support vector machines while comparing two parameter-selection techniques. Expert Syst. Appl. 34(1), 313–327 (2008) CrossRef
40.
go back to reference Coussement, K., Benoit, D.F., Van den Poel, D.: Improved marketing decision making in a customer churn prediction context using generalized additive models. Expert Syst. Appl. 37(3), 2132–2143 (2010) CrossRef Coussement, K., Benoit, D.F., Van den Poel, D.: Improved marketing decision making in a customer churn prediction context using generalized additive models. Expert Syst. Appl. 37(3), 2132–2143 (2010) CrossRef
43.
go back to reference Demšar, J.: Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 7, 1–30 (2006) MathSciNetMATH Demšar, J.: Statistical comparisons of classifiers over multiple data sets. J. Mach. Learn. Res. 7, 1–30 (2006) MathSciNetMATH
46.
go back to reference Dingli, A., Marmara, V., Fournier, N.S.: Comparison of deep learning algorithms to predict customer churn within a local retail industry. Int. J. Mach. Learn. Comput. 7(5), 128–132 (2017) CrossRef Dingli, A., Marmara, V., Fournier, N.S.: Comparison of deep learning algorithms to predict customer churn within a local retail industry. Int. J. Mach. Learn. Comput. 7(5), 128–132 (2017) CrossRef
47.
go back to reference Domingos, P. Metacost: A general method for making classifiers cost-sensitive. In: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 155–164 (1999) Domingos, P. Metacost: A general method for making classifiers cost-sensitive. In: Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 155–164 (1999)
48.
go back to reference Drummond, C., Holte, R.C., et al.: C4.5, class imbalance, and cost sensitivity: why under-sampling beats over-sampling. In: Workshop on Learning from Imbalanced Datasets II, Citeseer, pp. 1–8 (2003) Drummond, C., Holte, R.C., et al.: C4.5, class imbalance, and cost sensitivity: why under-sampling beats over-sampling. In: Workshop on Learning from Imbalanced Datasets II, Citeseer, pp. 1–8 (2003)
50.
go back to reference Effendy, V., Baizal, Z.A., et al.: Handling imbalanced data in customer churn prediction using combined sampling and weighted random forest. In: 2014 2nd International Conference on Information and Communication Technology (ICoICT), pp. 325–330. IEEE (2014) Effendy, V., Baizal, Z.A., et al.: Handling imbalanced data in customer churn prediction using combined sampling and weighted random forest. In: 2014 2nd International Conference on Information and Communication Technology (ICoICT), pp. 325–330. IEEE (2014)
52.
go back to reference Friedman, J., Hastie, T., Tibshirani, R.: The elements of statistical learning, vol 1. Springer Series in Statistics, New York (2001) Friedman, J., Hastie, T., Tibshirani, R.: The elements of statistical learning, vol 1. Springer Series in Statistics, New York (2001)
53.
go back to reference Gandomi, A., Haider, M.: Beyond the hype: big data concepts, methods, and analytics. Int. J. Inf. Manag. 35(2), 137–144 (2015) CrossRef Gandomi, A., Haider, M.: Beyond the hype: big data concepts, methods, and analytics. Int. J. Inf. Manag. 35(2), 137–144 (2015) CrossRef
54.
go back to reference Ganesan, S.: Determinants of long-term orientation in buyer–seller relationships. J. Mark. 58(2), 1–19 (1994) CrossRef Ganesan, S.: Determinants of long-term orientation in buyer–seller relationships. J. Mark. 58(2), 1–19 (1994) CrossRef
55.
go back to reference García, D.L., Nebot, À., Vellido, A.: Intelligent data analysis approaches to churn as a business problem: a survey. Knowl. Inf. Syst. 51(3), 719–774 (2017) CrossRef García, D.L., Nebot, À., Vellido, A.: Intelligent data analysis approaches to churn as a business problem: a survey. Knowl. Inf. Syst. 51(3), 719–774 (2017) CrossRef
56.
go back to reference García, V., Mollineda, R.A., Sánchez, J.S.: On the k-nn performance in a challenging scenario of imbalance and overlapping. Pattern Anal. Appl. 11(3), 269–280 (2008) MathSciNetCrossRef García, V., Mollineda, R.A., Sánchez, J.S.: On the k-nn performance in a challenging scenario of imbalance and overlapping. Pattern Anal. Appl. 11(3), 269–280 (2008) MathSciNetCrossRef
57.
go back to reference García, V., Sánchez, J.S., Mollineda, R.A.: On the effectiveness of preprocessing methods when dealing with different levels of class imbalance. Knowl. Based Syst. 25(1), 13–21 (2012) CrossRef García, V., Sánchez, J.S., Mollineda, R.A.: On the effectiveness of preprocessing methods when dealing with different levels of class imbalance. Knowl. Based Syst. 25(1), 13–21 (2012) CrossRef
59.
go back to reference Günther, C.C., Tvete, I.F., Aas, K., et al.: Modelling and predicting customer churn from an insurance company. Scand. Actuar. J. 1, 58–71 (2014) MathSciNetMATHCrossRef Günther, C.C., Tvete, I.F., Aas, K., et al.: Modelling and predicting customer churn from an insurance company. Scand. Actuar. J. 1, 58–71 (2014) MathSciNetMATHCrossRef
60.
go back to reference Gupta, S., Lehmann, D.R., Stuart, J.A.: Valuing customers. J. Mark. Res. 41(1), 7–18 (2004) CrossRef Gupta, S., Lehmann, D.R., Stuart, J.A.: Valuing customers. J. Mark. Res. 41(1), 7–18 (2004) CrossRef
61.
go back to reference Guyon, I., Gunn, S., Nikravesh, M., et al.: Feature Extraction: Foundations and Applications, vol. 207. Springer, Berlin (2008) MATH Guyon, I., Gunn, S., Nikravesh, M., et al.: Feature Extraction: Foundations and Applications, vol. 207. Springer, Berlin (2008) MATH
62.
go back to reference Guyon, I., Lemaire, V., Boullé, M., et al.: Analysis of the kdd cup 2009: fast scoring on a large orange customer database. In: Proceedings of the 2009 International Conference on KDD-Cup 2009, vol. 7, pp. 1–22. JMLR. org (2009) Guyon, I., Lemaire, V., Boullé, M., et al.: Analysis of the kdd cup 2009: fast scoring on a large orange customer database. In: Proceedings of the 2009 International Conference on KDD-Cup 2009, vol. 7, pp. 1–22. JMLR. org (2009)
63.
go back to reference Hadden, J., Tiwari, A., Roy, R., et al.: Churn prediction: does technology matter. Int. J. Intell. Technol. 1(2), 104–110 (2006) Hadden, J., Tiwari, A., Roy, R., et al.: Churn prediction: does technology matter. Int. J. Intell. Technol. 1(2), 104–110 (2006)
65.
go back to reference Han, H., Wang, W.Y., Mao, B.H.: Borderline-smote: a new over-sampling method in imbalanced data sets learning. In: International Conference on Intelligent Computing, pp. 878–887. Springer (2005) Han, H., Wang, W.Y., Mao, B.H.: Borderline-smote: a new over-sampling method in imbalanced data sets learning. In: International Conference on Intelligent Computing, pp. 878–887. Springer (2005)
67.
go back to reference Hart, P.: The condensed nearest neighbor rule (corresp.). IEEE Trans. Inf. Theory 14(3), 515–516 (1968) CrossRef Hart, P.: The condensed nearest neighbor rule (corresp.). IEEE Trans. Inf. Theory 14(3), 515–516 (1968) CrossRef
68.
go back to reference He, H., Ma, Y.: Imbalanced Learning: Foundations, Algorithms, and Applications. Wiley, New York (2013) MATHCrossRef He, H., Ma, Y.: Imbalanced Learning: Foundations, Algorithms, and Applications. Wiley, New York (2013) MATHCrossRef
69.
go back to reference He, H., Bai, Y., Garcia, E., Li, S.: ADASYN: adaptive synthetic sampling approach for imbalanced learning. In IEEE International Joint Conference on Neural Networks, 2008. IJCNN 2008 (IEEE World Congress on Computational Intelligence), vol. 3, pp. 1322– 1328 (2008) He, H., Bai, Y., Garcia, E., Li, S.: ADASYN: adaptive synthetic sampling approach for imbalanced learning. In IEEE International Joint Conference on Neural Networks, 2008. IJCNN 2008 (IEEE World Congress on Computational Intelligence), vol. 3, pp. 1322– 1328 (2008)
70.
go back to reference Hitt, L.M., Frei, F.X.: Do better customers utilize electronic distribution channels? The case of pc banking. Manag. Sci. 48(6), 732–748 (2002) CrossRef Hitt, L.M., Frei, F.X.: Do better customers utilize electronic distribution channels? The case of pc banking. Manag. Sci. 48(6), 732–748 (2002) CrossRef
71.
go back to reference Holte, R.C., Acker, L., Porter, B.W., et al.: Concept learning and the problem of small disjuncts. In: IJCAI, Citeseer, pp. 813–818 (1989) Holte, R.C., Acker, L., Porter, B.W., et al.: Concept learning and the problem of small disjuncts. In: IJCAI, Citeseer, pp. 813–818 (1989)
72.
go back to reference Hosein, P., Sewdhan, G., Jailal, A.: Soft-churn: optimal switching between prepaid data subscriptions on e-sim support smartphones. In: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1–6. IEEE (2021) Hosein, P., Sewdhan, G., Jailal, A.: Soft-churn: optimal switching between prepaid data subscriptions on e-sim support smartphones. In: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1–6. IEEE (2021)
74.
go back to reference Hudaib, A., Dannoun, R., Harfoushi, O., et al.: Hybrid data mining models for predicting customer churn. Int. J. Commun. Netw. Syst. Sci. 8(05), 91 (2015) Hudaib, A., Dannoun, R., Harfoushi, O., et al.: Hybrid data mining models for predicting customer churn. Int. J. Commun. Netw. Syst. Sci. 8(05), 91 (2015)
75.
go back to reference John, G.H., Langley, P.: Estimating continuous distributions in bayesian classifiers. In: Proceedings of the Eleventh conference on Uncertainty in Artificial Intelligence, pp. 338–345. Morgan Kaufmann Publishers Inc. (1995) John, G.H., Langley, P.: Estimating continuous distributions in bayesian classifiers. In: Proceedings of the Eleventh conference on Uncertainty in Artificial Intelligence, pp. 338–345. Morgan Kaufmann Publishers Inc. (1995)
76.
77.
go back to reference Kawale, J., Pal, A., Srivastava, J.: Churn prediction in MMORPGs: a social influence based approach. In: 2009 International Conference on Computational Science and Engineering, pp. 423–428. IEEE (2009) Kawale, J., Pal, A., Srivastava, J.: Churn prediction in MMORPGs: a social influence based approach. In: 2009 International Conference on Computational Science and Engineering, pp. 423–428. IEEE (2009)
78.
go back to reference Kim, Y.: Toward a successful CRM: variable selection, sampling, and ensemble. Decis. Support Syst. 41(2), 542–553 (2006) CrossRef Kim, Y.: Toward a successful CRM: variable selection, sampling, and ensemble. Decis. Support Syst. 41(2), 542–553 (2006) CrossRef
79.
go back to reference King, G., Zeng, L.: Logistic regression in rare events data. Polit. Anal. 9(2), 137–163 (2001) CrossRef King, G., Zeng, L.: Logistic regression in rare events data. Polit. Anal. 9(2), 137–163 (2001) CrossRef
80.
go back to reference Kohavi, R., et al.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Ijcai, Montreal, Canada, pp. 1137–1145 (1995) Kohavi, R., et al.: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Ijcai, Montreal, Canada, pp. 1137–1145 (1995)
81.
go back to reference Kong, J., Kowalczyk, W., Menzel, S., et al.: Improving imbalanced classification by anomaly detection. In: Bäck, T., Preuss, M., Deutz, A., et al. (eds.) Parallel Problem Solving from Nature, vol. XVI, pp. 512–523. Springer, Cham (2020) CrossRef Kong, J., Kowalczyk, W., Menzel, S., et al.: Improving imbalanced classification by anomaly detection. In: Bäck, T., Preuss, M., Deutz, A., et al. (eds.) Parallel Problem Solving from Nature, vol. XVI, pp. 512–523. Springer, Cham (2020) CrossRef
82.
go back to reference Kumar, D.A., Ravi, V., et al.: Predicting credit card customer churn in banks using data mining. Int. J. Data Anal. Tech. Strateg. 1(1), 4–28 (2008) CrossRef Kumar, D.A., Ravi, V., et al.: Predicting credit card customer churn in banks using data mining. Int. J. Data Anal. Tech. Strateg. 1(1), 4–28 (2008) CrossRef
83.
go back to reference Laurikkala, J.: Improving identification of difficult small classes by balancing class distribution. In: Conference on Artificial Intelligence in Medicine in Europe, pp. 63–66. Springer (2001) Laurikkala, J.: Improving identification of difficult small classes by balancing class distribution. In: Conference on Artificial Intelligence in Medicine in Europe, pp. 63–66. Springer (2001)
84.
go back to reference Lemmens, A., Croux, C.: Bagging and boosting classification trees to predict churn. J. Mark. Res. 43(2), 276–286 (2006) CrossRef Lemmens, A., Croux, C.: Bagging and boosting classification trees to predict churn. J. Mark. Res. 43(2), 276–286 (2006) CrossRef
85.
go back to reference Leung, C.K., Pazdor, A.G., Souza, J.: Explainable artificial intelligence for data science on customer churn. In: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1–10. IEEE (2021) Leung, C.K., Pazdor, A.G., Souza, J.: Explainable artificial intelligence for data science on customer churn. In: 2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA), pp. 1–10. IEEE (2021)
87.
go back to reference Ling, C.X., Li, C.: Data mining for direct marketing: problems and solutions. In: Kdd, pp. 73–79 (1998) Ling, C.X., Li, C.: Data mining for direct marketing: problems and solutions. In: Kdd, pp. 73–79 (1998)
89.
go back to reference López, V., Fernández, A., Moreno-Torres, J.G., et al.: Analysis of preprocessing vs. cost-sensitive learning for imbalanced classification. Open problems on intrinsic data characteristics. Expert Syst. Appl. 39(7), 6585–6608 (2012) CrossRef López, V., Fernández, A., Moreno-Torres, J.G., et al.: Analysis of preprocessing vs. cost-sensitive learning for imbalanced classification. Open problems on intrinsic data characteristics. Expert Syst. Appl. 39(7), 6585–6608 (2012) CrossRef
90.
go back to reference López, V., Fernández, A., García, S., et al.: An insight into classification with imbalanced data: empirical results and current trends on using data intrinsic characteristics. Inf. Sci. 250, 113–141 (2013) CrossRef López, V., Fernández, A., García, S., et al.: An insight into classification with imbalanced data: empirical results and current trends on using data intrinsic characteristics. Inf. Sci. 250, 113–141 (2013) CrossRef
91.
go back to reference Maxham, J.G.: Service recovery’s influence on consumer satisfaction, positive word-of-mouth, and purchase intentions. J. Bus. Res. 54(1), 11–24 (2001) CrossRef Maxham, J.G.: Service recovery’s influence on consumer satisfaction, positive word-of-mouth, and purchase intentions. J. Bus. Res. 54(1), 11–24 (2001) CrossRef
92.
go back to reference McKinley Stacker, I.: Ibm waston analytics. Sample data: Hr employee attrition and performance [data file] (2015) McKinley Stacker, I.: Ibm waston analytics. Sample data: Hr employee attrition and performance [data file] (2015)
93.
go back to reference Mittal, B., Lassar, W.M.: Why do customers switch? the dynamics of satisfaction versus loyalty. J. Serv. Mark. 12(3), 177–194 (1998) CrossRef Mittal, B., Lassar, W.M.: Why do customers switch? the dynamics of satisfaction versus loyalty. J. Serv. Mark. 12(3), 177–194 (1998) CrossRef
94.
go back to reference Mittal, V., Kamakura, W.A.: Satisfaction, repurchase intent, and repurchase behavior: investigating the moderating effect of customer characteristics. J. Mark. Res. 38(1), 131–142 (2001) CrossRef Mittal, V., Kamakura, W.A.: Satisfaction, repurchase intent, and repurchase behavior: investigating the moderating effect of customer characteristics. J. Mark. Res. 38(1), 131–142 (2001) CrossRef
95.
go back to reference Mozer, M.C., Wolniewicz, R., Grimes, D.B., et al.: Predicting subscriber dissatisfaction and improving retention in the wireless telecommunications industry. IEEE Trans. Neural Netw. 11(3), 690–696 (2000) CrossRef Mozer, M.C., Wolniewicz, R., Grimes, D.B., et al.: Predicting subscriber dissatisfaction and improving retention in the wireless telecommunications industry. IEEE Trans. Neural Netw. 11(3), 690–696 (2000) CrossRef
96.
go back to reference Munkhdalai, L., Munkhdalai, T., Park, K.H., et al.: An end-to-end adaptive input selection with dynamic weights for forecasting multivariate time series. IEEE Access 7, 99,099-99,114 (2019) CrossRef Munkhdalai, L., Munkhdalai, T., Park, K.H., et al.: An end-to-end adaptive input selection with dynamic weights for forecasting multivariate time series. IEEE Access 7, 99,099-99,114 (2019) CrossRef
97.
go back to reference Munkhdalai, L., Munkhdalai, T., Ryu, K.H.: Gev-nn: a deep neural network architecture for class imbalance problem in binary classification. Knowl. Based Syst. 194(105), 534 (2020) Munkhdalai, L., Munkhdalai, T., Ryu, K.H.: Gev-nn: a deep neural network architecture for class imbalance problem in binary classification. Knowl. Based Syst. 194(105), 534 (2020)
98.
go back to reference Napierała, K., Stefanowski, J., Wilk, S.: Learning from imbalanced data in presence of noisy and borderline examples. In: International Conference on Rough Sets and Current Trends in Computing, pp. 158–167. Springer (2010) Napierała, K., Stefanowski, J., Wilk, S.: Learning from imbalanced data in presence of noisy and borderline examples. In: International Conference on Rough Sets and Current Trends in Computing, pp. 158–167. Springer (2010)
99.
go back to reference Neslin, S.A., Gupta, S., Kamakura, W., et al.: Defection detection: measuring and understanding the predictive accuracy of customer churn models. J. Mark. Res. 43(2), 204–211 (2006) CrossRef Neslin, S.A., Gupta, S., Kamakura, W., et al.: Defection detection: measuring and understanding the predictive accuracy of customer churn models. J. Mark. Res. 43(2), 204–211 (2006) CrossRef
100.
go back to reference Nguyen, H.M., Cooper, E.W., Kamei, K.: Borderline over-sampling for imbalanced data classification. Int. J. Knowl. Eng. Soft Data Paradig. 3(1), 4–21 (2011) CrossRef Nguyen, H.M., Cooper, E.W., Kamei, K.: Borderline over-sampling for imbalanced data classification. Int. J. Knowl. Eng. Soft Data Paradig. 3(1), 4–21 (2011) CrossRef
101.
go back to reference Nguyen, N., LeBlanc, G.: The mediating role of corporate image on customers’ retention decisions: an investigation in financial services. Int. J. Bank Market. 16(2), 52–65 (1998) CrossRef Nguyen, N., LeBlanc, G.: The mediating role of corporate image on customers’ retention decisions: an investigation in financial services. Int. J. Bank Market. 16(2), 52–65 (1998) CrossRef
102.
Owen, A.B.: Infinitely imbalanced logistic regression. J. Mach. Learn. Res. 8(Apr), 761–773 (2007)
103.
Pang, G., Xu, H., Cao, L., et al.: Selective value coupling learning for detecting outliers in high-dimensional categorical data. In: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 807–816 (2017)
104.
Pang, G., Shen, C., van den Hengel, A.: Deep anomaly detection with deviation networks. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 353–362 (2019)
106.
Paulin, M., Perrien, J., Ferguson, R.J., et al.: Relational norms and client retention: external effectiveness of commercial banking in Canada and Mexico. Int. J. Bank Market. 16(1), 24–31 (1998)
107.
Reichheld, F.F., Sasser, W.E.: Zero defections: quality comes to services. Harv. Bus. Rev. 68(5), 105–111 (1990)
108.
Reinartz, W.J., Kumar, V.: The impact of customer relationship characteristics on profitable lifetime duration. J. Mark. 67(1), 77–99 (2003)
109.
Rennie, J.D.: Improving multi-class text classification with Naive Bayes. Technical Report AITR, vol. 4 (2001)
110.
Risselada, H., Verhoef, P.C., Bijmolt, T.H.: Staying power of churn prediction models. J. Interact. Mark. 24(3), 198–208 (2010)
111.
Ruff, L., Kauffmann, J.R., Vandermeulen, R.A., et al.: A unifying review of deep and shallow anomaly detection. Proc. IEEE 109(5), 756–795 (2021)
112.
Ruisen, L., Songyi, D., Chen, W., et al.: Bagging of XGBoost classifiers with random under-sampling and Tomek link for noisy label-imbalanced data. In: IOP Conference Series: Materials Science and Engineering, p. 012004. IOP Publishing (2018)
113.
Salas-Eljatib, C., Fuentes-Ramirez, A., Gregoire, T.G., et al.: A study on the effects of unbalanced data when fitting logistic regression models in ecology. Ecol. Ind. 85, 502–508 (2018)
114.
Saradhi, V.V., Palshikar, G.K.: Employee churn prediction. Expert Syst. Appl. 38(3), 1999–2006 (2011)
115.
Schölkopf, B., Williamson, R., Smola, A., et al.: Support Vector Method for Novelty Detection, pp. 582–588. MIT Press, Cambridge (1999)
116.
Seiffert, C., Khoshgoftaar, T.M., Van Hulse, J., et al.: An empirical study of the classification performance of learners on imbalanced and noisy software quality data. Inf. Sci. 259, 571–595 (2014)
117.
Seymen, O.F., Dogan, O., Hiziroglu, A.: Customer churn prediction using deep learning. In: International Conference on Soft Computing and Pattern Recognition, pp. 520–529. Springer (2020)
118.
Siber, R.: Combating the churn phenomenon—as the problem of customer defection increases, carriers are having to find new strategies for keeping subscribers happy. Telecommun. Int. Edn. 31(10), 77–81 (1997)
119.
Śniegula, A., Poniszewska-Marańda, A., Popović, M.: Study of machine learning methods for customer churn prediction in telecommunication company. In: Proceedings of the 21st International Conference on Information Integration and Web-based Applications & Services, pp. 640–644 (2019)
120.
Stefanowski, J.: Dealing with data difficulty factors while learning from imbalanced data. In: Challenges in Computational Statistics and Data Mining, pp. 333–363. Springer (2016)
123.
Tan, S.: Neighbor-weighted k-nearest neighbor for unbalanced text corpus. Expert Syst. Appl. 28(4), 667–671 (2005)
124.
Tang, L., Thomas, L., Fletcher, M., et al.: Assessing the impact of derived behavior information on customer attrition in the financial service industry. Eur. J. Oper. Res. 236(2), 624–633 (2014)
126.
Tian, J., Gu, H., Liu, W.: Imbalanced classification using support vector machine ensemble. Neural Comput. Appl. 20(2), 203–209 (2011)
127.
Tomek, I.: Two modifications of CNN. IEEE Trans. Syst. Man Cybern. 6, 769–772 (1976)
128.
Umayaparvathi, V., Iyakutti, K.: A survey on customer churn prediction in telecom industry: datasets, methods and metrics. Int. Res. J. Eng. Technol. 3, 2395 (2016)
129.
Umayaparvathi, V., Iyakutti, K.: Automated feature selection and churn prediction using deep learning models. Int. Res. J. Eng. Technol. 4(3), 1846–1854 (2017)
130.
Vafeiadis, T., Diamantaras, K.I., Sarigiannidis, G., et al.: A comparison of machine learning techniques for customer churn prediction. Simul. Model. Pract. Theory 55, 1–9 (2015)
131.
Van Hulse, J., Khoshgoftaar, T.M., Napolitano, A., et al.: Feature selection with high-dimensional imbalanced data. In: 2009 IEEE International Conference on Data Mining Workshops, pp. 507–514. IEEE (2009)
132.
Vapnik, V.: Statistical Learning Theory. Wiley, New York (1998)
133.
Varki, S., Colgate, M.: The role of price perceptions in an integrated model of behavioral intentions. J. Serv. Res. 3(3), 232–240 (2001)
135.
Wang, S., Li, D., Song, X., et al.: A feature selection method based on improved Fisher’s discriminant ratio for text sentiment classification. Expert Syst. Appl. 38(7), 8696–8702 (2011)
136.
Van den Poel, D., Larivière, B.: Customer attrition analysis for financial services using proportional hazard models. Eur. J. Oper. Res. 157(1), 196–217 (2004)
137.
Wang, S., Liu, W., Wu, J., et al.: Training deep neural networks on imbalanced data sets. In: 2016 International Joint Conference on Neural Networks (IJCNN), pp. 4368–4374. IEEE (2016)
139.
Weiss, G.M.: Mining with rarity: a unifying framework. ACM SIGKDD Explor. Newsl. 6(1), 7–19 (2004)
140.
Weiss, G.M.: The impact of small disjuncts on classifier learning. In: Data Mining, pp. 193–226. Springer (2010)
141.
Weiss, G.M., Hirsh, H.: A quantitative study of small disjuncts. In: AAAI/IAAI, pp. 665–670 (2000)
142.
Weiss, G.M., Provost, F.: Learning when training data are costly: the effect of class distribution on tree induction. J. Artif. Intell. Res. 19, 315–354 (2003)
143.
Wilson, D.L.: Asymptotic properties of nearest neighbor rules using edited data. IEEE Trans. Syst. Man Cybern. SMC–2(3), 408–421 (1972)
144.
Xiao, J., Huang, L., Xie, L.: Cost-sensitive semi-supervised ensemble model for customer churn prediction. In: 2018 15th International Conference on Service Systems and Service Management (ICSSSM), pp. 1–6. IEEE (2018)
146.
Xie, Y., Li, X.: Churn prediction with linear discriminant boosting algorithm. In: International Conference on Machine Learning and Cybernetics, pp. 228–233. IEEE (2008)
147.
Yang, C., Shi, X., Jie, L., et al.: I know you’ll be back: interpretable new user clustering and churn prediction on a mobile social application. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 914–922 (2018)
148.
Yang, Z., Peterson, R.T.: Customer perceived value, satisfaction, and loyalty: the role of switching costs. Psychol. Market. 21(10), 799–822 (2004)
150.
Zadrozny, B., Elkan, C.: Learning and making decisions when costs and probabilities are both unknown. In: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 204–213 (2001)
151.
Zadrozny, B., Langford, J., Abe, N.: Cost-sensitive learning by cost-proportionate example weighting. In: Third IEEE International Conference on Data Mining, pp. 435–442. IEEE (2003)
152.
Zeithaml, V.A., Berry, L.L., Parasuraman, A.: The behavioral consequences of service quality. J. Mark. 60(2), 31–46 (1996)
153.
Zhao, Z., Peng, H., Lan, C., et al.: Imbalance learning for the prediction of N6-methylation sites in mRNAs. BMC Genom. 19(1), 574 (2018)
154.
Zhou, F., Yang, S., Fujita, H., et al.: Deep learning fault diagnosis method based on global optimization GAN for unbalanced data. Knowl. Based Syst. 187, 104837 (2020)
155.
Zhou, Z.H., Liu, X.Y.: Training cost-sensitive neural networks with methods addressing the class imbalance problem. IEEE Trans. Knowl. Data Eng. 18(1), 63–77 (2005)
157.
Zong, B., Song, Q., Min, M.R., et al.: Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. In: International Conference on Learning Representations (2018)
Metadata
Title: A survey on machine learning methods for churn prediction
Authors: Louis Geiler, Séverine Affeldt, Mohamed Nadif
Publication date: 01-03-2022
Publisher: Springer International Publishing
Published in: International Journal of Data Science and Analytics, Issue 3/2022
Print ISSN: 2364-415X
Electronic ISSN: 2364-4168
DOI: https://doi.org/10.1007/s41060-022-00312-5
