2022 | Book

Advances in Econometrics, Operational Research, Data Science and Actuarial Studies

Techniques and Theories

About this book

This volume presents techniques and theories drawn from mathematics, statistics, computer science, and information science to analyze problems in business, economics, finance, insurance, and related fields.

The authors propose solutions to common problems in these fields. To this end, they show how mathematical, statistical, and actuarial modeling and concepts from data science can be used to construct and apply appropriate models with real-life data, and they employ the design and implementation of computer algorithms to evaluate decision-making processes.

This book is unique in that it connects data science, as practiced by data scientists from different backgrounds, with basic and advanced concepts and tools used in econometrics, operational research, and the actuarial sciences. It is therefore a must-read for scholars, students, and practitioners interested in a better understanding of the techniques and theories of these fields.

Table of Contents

Frontmatter
The Cobb-Douglas Production Function for an Exponential Model

We investigate China’s post-1978 economic data in terms of compatible Cobb-Douglas production functions exhibiting different properties for different periods of time. Our methodology is grounded in the fact that the Cobb-Douglas function can be derived under the assumption of exponential growth in production, labor, and capital. We show that this assumption is consistent with the data by employing R programming and the method of least squares. Each Cobb-Douglas function used to characterize the economic growth within the corresponding period of time is determined by specifying the values of the labor share from the available empirical data for the period in question. We conclude, therefore, that the Cobb-Douglas function can be employed to describe the growth in production for the periods 1978–1984, 1985–1991, 1992–2002, 2003–2009, and 2010–2017, each marked by specific events that impacted the Chinese economy.

Roman G. Smirnov, Kunpeng Wang, Ziwei Wang
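
A minimal Python sketch of the least-squares idea described in the abstract above (the chapter itself works in R). All series, rates, and the labor share below are synthetic illustrations, not the chapter's Chinese data:

```python
# Fit exponential growth Y = Y0 * exp(g t) by OLS on logs; with a labor share
# beta taken from the data, the implied Cobb-Douglas is Y = A K^(1-beta) L^beta
# and TFP growth follows as gA = gY - (1-beta) gK - beta gL.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1978, 1985, dtype=float)               # one sub-period, e.g. 1978-1984
Y = 100 * np.exp(0.09 * (t - t[0])) * rng.lognormal(0, 0.01, t.size)
K = 300 * np.exp(0.08 * (t - t[0])) * rng.lognormal(0, 0.01, t.size)
L = 500 * np.exp(0.02 * (t - t[0])) * rng.lognormal(0, 0.01, t.size)

def growth_rate(series):
    """OLS fit of log(series) = a + g * t; returns the exponential rate g."""
    X = np.column_stack([np.ones_like(t), t])
    return np.linalg.lstsq(X, np.log(series), rcond=None)[0][1]

gY, gK, gL = growth_rate(Y), growth_rate(K), growth_rate(L)
beta = 0.5                                            # labor share (illustrative)
gA = gY - (1 - beta) * gK - beta * gL                 # implied TFP growth
print(f"gY={gY:.3f} gK={gK:.3f} gL={gL:.3f} implied TFP growth={gA:.3f}")
```
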
Threshold Unit Root Tests with Smooth Transitions

Since threshold autoregressive models were introduced, many unit root tests have been developed to test the unit root null hypothesis while allowing for regime change. Sollis (J Time Ser Anal 25:409–417, 2004) indicates that a threshold unit root test can be combined with the smooth transition logistic functions introduced by Leybourne et al. (J Time Ser Anal 19:83–97, 1998). This paper investigates whether the Caner and Hansen (Econometrica 69:1555–1596, 2001) unit root test can be extended with smooth transition functions and demonstrates the performance of this new unit root testing procedure with Monte Carlo simulations. Simulation results for finite sample properties show reasonable empirical size and power values. The proposed unit root testing procedure is also used to test the unit root null hypothesis for the industrial production indices of the United States of America and Turkey.

Mehmet Özcan, Funda Yurdakul
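
A minimal sketch (not the Caner-Hansen procedure itself) of the two building blocks the abstract combines: the logistic smooth transition function of Leybourne et al. (1998) and a Monte Carlo loop of the kind used to compute empirical size. The shift magnitude, gamma, tau, and the critical value are illustrative; the example shows how a smooth transition distorts a naive Dickey-Fuller test, which motivates tests that model the transition:

```python
import numpy as np

def logistic_transition(T, gamma, tau):
    """S_t = 1 / (1 + exp(-gamma * (t - tau*T))), t = 1..T."""
    t = np.arange(1, T + 1)
    return 1.0 / (1.0 + np.exp(-gamma * (t - tau * T)))

def df_tstat(y):
    """t-statistic on rho in Delta y_t = rho * y_{t-1} + e_t (no constant)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(1)
T, reps, crit = 200, 2000, -1.95          # approx. 5% critical value, no-constant case
S = logistic_transition(T, gamma=0.1, tau=0.5)
rejections = 0
for _ in range(reps):
    y = np.cumsum(rng.standard_normal(T)) + 2.0 * S   # unit root null + smooth shift
    rejections += df_tstat(y) < crit
print(f"rejection frequency of the naive test: {rejections / reps:.3f}")
```
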
Jump Connectedness in the European Foreign Exchange Market

We assess the jump connectedness (spillover) among five Group-of-Ten European currencies, namely the Swiss Franc, the Euro, the British Pound, the Norwegian Krone, and the Swedish Krona. Our analysis covers the period from January 1999 to January 2018. Overall, we find evidence of jump connectedness among the Group-of-Ten European currencies, in which the Euro is the largest net transmitter and the British Pound is the largest receiver of jump connectedness. Jump connectedness between the Euro and the Swiss Franc is the strongest, followed by the Euro-Norwegian Krone and Swiss Franc-Swedish Krona pairs. Total jump connectedness among the five Group-of-Ten European currencies is time-varying and sensitive to extreme events such as the Eurozone Debt Crisis. However, the good news is that their jump connectedness is on a downward trend, having declined to about half of its peak observed in early 2007.

Emawtee Bissoondoyal-Bheenick, Robert Brooks, Hung Xuan Do
Modeling Currency Exchange Data with Asymmetric Copula Functions

In the fields of economics and finance, there are data sets with dependence structures that can be modeled symmetrically or asymmetrically. Analyzing asymmetrically dependent data with a symmetric model can result in inaccurate financial decisions. Moreover, the effect of an event such as a financial crisis on international financial returns can be captured more accurately with asymmetric models. Recent studies have revealed that asymmetric dependence structures can be observed in exchange rates. While dependence structures in financial data can be modeled efficiently with copula functions, asymmetric dependencies can be modeled with directional copula functions. In the literature, there are several asymmetric copula models constructed in different ways to model directional dependence. The aim of this study is to model asymmetric exchange rate data with directional dependency measures. To this end, the dependence among four currencies traded against the US Dollar is investigated using Khoudraji-type copula functions. Additionally, the proportions of the total variability between foreign exchange returns are examined in detail.

Emel Kızılok Kara, Sibel Açık Kemaloğlu, Ömer Ozan Evkaya
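
A minimal sketch of Khoudraji's device, the construction behind the Khoudraji-type copulas named in the abstract: a symmetric base copula (here Gumbel, chosen for illustration) is made asymmetric by raising the arguments to different powers. The parameter values are arbitrary:

```python
import numpy as np

def gumbel_cdf(u, v, theta):
    """Gumbel copula C(u,v) = exp(-((-ln u)^theta + (-ln v)^theta)^(1/theta))."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1 / theta)))

def khoudraji_cdf(u, v, theta, a, b):
    """Khoudraji's device: C*(u,v) = u^(1-a) * v^(1-b) * C(u^a, v^b).
    Choosing a != b makes the symmetric base copula asymmetric."""
    return u ** (1 - a) * v ** (1 - b) * gumbel_cdf(u ** a, v ** b, theta)

# Asymmetry check: C*(u,v) differs from C*(v,u) once a != b.
print(khoudraji_cdf(0.3, 0.7, theta=2.0, a=0.9, b=0.4))
print(khoudraji_cdf(0.7, 0.3, theta=2.0, a=0.9, b=0.4))
```
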
The Joint Tests of the Parity Conditions: Evidence from a Small Open Economy

This chapter presents the set of international parity conditions, core financial theories of exchange rate determination, and joint tests of the validity of Uncovered Interest Rate Parity (UIP) and Purchasing Power Parity (PPP), two important international parity conditions, for Turkey-US and Turkey-Euro Area within the multivariate cointegration framework of Johansen et al. (Econometrics Journal 3:216–249, 2000), allowing for structural breaks such as the global financial crisis of 2007–2009 and the implementation of macroprudential policies after the crisis. The cointegration test statistics reveal two cointegrating vectors in both systems containing prices, exchange rates, and interest rates for Turkey-US and Turkey-Euro Area, for the 2005:1–2009:4 and 2005:2–2010:1 pairs of breaks, respectively. Additionally, for both systems, each parity condition is rejected when formulated jointly, which implies that in a financially open economy, asset and commodity market adjustments might be interrelated. Conversely, when each parity condition is formulated less restrictively, it is not rejected in either system. This suggests PPP and UIP with proportionality and symmetry conditions in the first and second vectors, respectively.

Kadir Y. Eryiğit, Veli Durmuşoğlu
Stochastic Volatility Models with Endogenous Breaks in Volatility Forecasting

The need for research on modelling and forecasting financial volatility has increased noticeably due to its essential role in portfolio and risk management, option pricing, and dynamic hedging. This paper contributes to the ongoing discussion of how researchers use regime shift or structural break information to improve forecast accuracy. To accomplish this, we use data on renewable energy markets. This study thus examines several models that accommodate regime shifts and investigates their forecasting performance. First, a subset of competing models (GARCH-class and stochastic volatility) employs the modified iterative cumulative sum of squares method to determine the estimation windows. This paper's novel aspect is that it studies the forecasting performance of various specifications of stochastic volatility models under this procedure. Second, we employ Markov switching GARCH models under alternative distribution assumptions. The rolling window-based forecast analysis reveals that Markov switching models offer more accurate volatility forecasts in most cases. Regarding the relevance of the distribution functions, the normal distribution, followed by the Student's t, skew Student's t, and generalized hyperbolic distributions, commonly dominates the series under investigation in the superior sets under all considered loss metrics.

Akram S. Hasanov, Salokhiddin S. Avazkhodjaev
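
A minimal rolling one-step-ahead GARCH(1,1) forecast sketch using the `arch` package with a Student-t error distribution. Synthetic returns stand in for the renewable energy series, and the chapter's Markov switching and ICSS-break machinery is not reproduced:

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(2)
returns = rng.standard_t(df=6, size=1000)            # toy daily returns, in percent

window, forecasts = 900, []
for end in range(window, len(returns)):
    am = arch_model(returns[end - window:end], vol="Garch", p=1, q=1, dist="t")
    res = am.fit(disp="off")                         # refit on each rolling window
    fc = res.forecast(horizon=1)
    forecasts.append(fc.variance.values[-1, 0])      # one-step-ahead variance
print(f"mean forecast variance: {np.mean(forecasts):.4f}")
```
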
Effect in Quality Control Based of Hotelling T2 and CUSUM Control Chart

Today, quality control methods are used quite extensively. In statistical surveys, measuring the sampling units on the variable under consideration can be expensive in every sense, whereas ranking units on the same variable by an inexpensive auxiliary method is comparatively cheap; this is the idea underlying ranked set sampling. In this study, the researcher created Hotelling's T2 control charts, a multivariate statistical process control method. The performances of the simple random sampling and ranked set sampling (RSS) methods were compared to one another using these control charts. A statistics program was used to compute average run length values for the comparison of the sampling performances. The study determines that when a process covers more than one quality variable and there is a relationship among the variables, it should be examined with multivariate statistical quality control methods rather than separate univariate ones. Further, the RSS method proved to be more efficient when units are difficult and costly to measure.

Hakan Eygü
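
A minimal Hotelling T2 chart sketch for individual multivariate observations, with the usual beta-distribution-based Phase I control limit. The data are simulated and the mean and covariance are estimated from the sample itself, purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=50)  # p=2, m=50

m, p = X.shape
xbar = X.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X, rowvar=False))
t2 = np.einsum("ij,jk,ik->i", X - xbar, S_inv, X - xbar)   # T^2 for each point

# Phase I UCL for individuals: ((m-1)^2 / m) * Beta_{1-alpha}(p/2, (m-p-1)/2)
alpha = 0.0027
ucl = (m - 1) ** 2 / m * stats.beta.ppf(1 - alpha, p / 2, (m - p - 1) / 2)
print("out-of-control points:", np.where(t2 > ucl)[0])
```
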
A Robust Regression Method Based on Pearson Type VI Distribution

In classical regression analysis, the distribution of the errors is assumed to be Gaussian, and the Least Squares (LS) estimation method is used for parameter estimation. In practice, even if the distribution of the errors is assumed to be Gaussian, residuals are generally not Gaussian. If the data set contains outliers, or observations suspected to be outliers, the normality assumption is violated and parameter estimates will be biased. Many statisticians use robust methods, such as the M-Estimation method, a generalized version of the Maximum Likelihood (ML) estimation method, when such problems occur. However, if the data set has skewness and excess kurtosis, traditional M-Estimators cannot achieve a good solution. In this study, using the relationship between the Pearson Differential Equation (PDE) and the Influence Function (IF), an M-Estimation method is proposed for data sets that follow the Pearson Type VI (PVI) distribution. The advantage of this method is that it takes into account the skewness and kurtosis of the data set and generates dynamic solutions. The objective, influence, and weight functions and the tail properties of the PVI distribution are obtained using its Probability Density Function (pdf). For the regression parameter estimates, the Iteratively Re-Weighted Least Squares (IRWLS) estimation method is used. In many simulation studies with different scenarios, and in applications with real data, the proposed method achieves better results than other M-Estimation methods in terms of Total Absolute Deviation (TAB) and Mean Square Error (MSE) when the data have skewness and excess kurtosis.

Yasin Büyükkör, A. Kemal Şehirlioğlu
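
A generic IRWLS skeleton of the kind the abstract refers to, with Huber weights standing in for the chapter's PVI-based weight function (which depends on the sample's skewness and kurtosis); the structure of the iteration is the same:

```python
import numpy as np

def irwls(X, y, c=1.345, tol=1e-8, max_iter=100):
    """Iteratively re-weighted least squares for robust regression."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]           # LS starting values
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # robust scale (MAD)
        u = r / (s + 1e-12)
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))  # Huber weights
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            break
        beta = beta_new
    return beta

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=3, size=100)  # heavy-tailed errors
print(irwls(X, y))
```
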
Discrete Volatilities of Listed Real Estate Funds

The purpose of this article is to examine hedging strategies of South African real estate investment trusts using discrete volatility models. Prior studies have illustrated that volatility hedging in bonds, commodities, and equities is appropriately captured by discrete volatility models, but not much has been done on real estate investment trusts, especially South African ones. This article uses both Autoregressive Conditional Heteroskedasticity and Generalized Autoregressive Conditional Heteroskedasticity family models to price discrete volatilities. The results show that information asymmetry, heterogeneity, and lagging effects are inherent in real estate investment trusts; therefore, volatility modelling should be done cautiously. Incorporating these factors in real estate investment trust hedging strategies should thus have remarkable significance both in academia and in practice. The same findings hold for both in-sample and out-of-sample data.

Annah Gosebo, Donald Makhalemele, Zinhle Simelane
Have Commodity Markets Political Nature?

This chapter examines whether goods markets have political aspects, based on unique data for daily prices in İstanbul between 1918 and 1924. To obtain a convincing causal estimate of the impact of political uncertainty on goods prices, we focus on political risk changes during this historical episode that were not related to confounding factors, such as economic depression. Our findings shed light on the presence of higher political risk due to the resignations of governments, leading to goods price fluctuations through sudden changes in supply and trade disruptions. Based on a natural experiment relating to the end of the Ottoman Empire, our results fill a gap in the literature, which contains limited research on the positive link between political events and fluctuations in commodity market prices.

Avni Önder Hanedar
A Nonlinear Panel ARDL Analysis of Pollution Haven/Halo Hypothesis

Nonlinear econometric approaches are growing in popularity, since linkages among variables are not always linear. Nonlinear approaches provide a broader range of knowledge than linear models. This research aims to assess the impact of foreign direct investment on pollution. To capture the potential asymmetries resulting from rises and falls in foreign direct investment, the nonlinear panel autoregressive distributed lag approach is employed. In the empirical analysis, annual data for 22 selected transition economies from 1995 to 2016 are utilized. The findings highlight the existence of asymmetric linkages among the variables. In other words, the evidence reveals that a positive shock in foreign direct investment improves environmental quality, while a negative shock is detrimental to the environment.

Ebru Çağlayan-Akay, Zamira Oskonbaeva
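
A minimal sketch of the positive/negative partial-sum decomposition at the heart of the nonlinear (N)ARDL approach named in the abstract: changes in FDI are split into cumulative increases and decreases, which then enter the model as separate regressors. The FDI series here is simulated:

```python
import numpy as np

rng = np.random.default_rng(5)
fdi = np.cumsum(rng.normal(size=30))        # toy FDI series

d = np.diff(fdi, prepend=fdi[0])            # period-to-period changes
fdi_pos = np.cumsum(np.maximum(d, 0.0))     # cumulative positive shocks
fdi_neg = np.cumsum(np.minimum(d, 0.0))     # cumulative negative shocks
# fdi_pos and fdi_neg would enter the panel ARDL as separate regressors,
# allowing asymmetric long-run coefficients for rises and falls in FDI.
print(fdi_pos[-1], fdi_neg[-1])
```
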
An Investigation of Asymmetries in Exchange Rate Pass-Through to Domestic Prices

After the 2000–2001 financial crisis in Turkey, a strong reform program was initiated involving the enactment of the floating exchange rate regime in February 2001 and the adoption of inflation targeting as monetary policy in January 2002. This study aims to analyze the dynamics of exchange rate pass-through (ERPT) for the inflation-targeting period in Turkey by estimating the magnitudes of the short-run and long-run pass-through and by testing whether these magnitudes differ in contexts of depreciations and appreciations. To this end, the nonlinear autoregressive distributed lag (NARDL) approach is used. Since the bounds-testing procedure does not allow for stochastic seasonality or nonseasonal integration orders higher than one, to check the suitability of the series for this methodology, both seasonal and nonseasonal unit root tests are performed. The empirical results reveal asymmetry in the ERPT in both the short run and long run. In the long run, whereas appreciations of the domestic currency are not transmitted to domestic prices, the pass-through of depreciation is 43%. In the short run, the pass-through of appreciations is realized only in the current month and is 10.5%. The short-run pass-through of depreciations fluctuates over seven periods, and the total pass-through is approximately 3.5%.

Fela Özbey
Investigation of the Country-Specific Factors for URAP

The international rankings of universities have a significant impact on how academics, students, governments, and businesses perceive universities. The aim of the study is to identify country-specific factors that are thought to influence academic performance and to reveal how these factors should be developed for universities to increase their success. For this purpose, the University Ranking by Academic Performance index for universities in 103 countries, covering the period 2013–2019, was taken as the dependent variable representing academic performance. The index value used as the dependent variable was constructed for each country by averaging the scores of the universities from that country that entered the ranking. The country-specific factors were political stability and absence of violence/terrorism, the rule of law, freedom of the press, the economic freedom index, the university-industry collaboration in research & development index, and gross domestic product per capita. The factors affecting the academic performance of universities were analyzed with spatial panel data methods, and the findings revealed that the rule of law, university-industry collaboration, and GDP per capita increase academic performance.

Eda Yalçın Kayacan, Aygül Anavatan
The Impact of Outsourcing and Innovation on Industry 4.0

In this study, we examined the impact of companies' outsourcing and innovation activities on Industry 4.0, focusing on the Turkish Fortune 500 list of companies. The list contains the largest companies in Turkey within the manufacturing, trade, and service industries. We obtained the data from the innovation-outsourcing scale adapted by Yanmaz Arpacı and Gülel (Yanmaz Arpacı Ö, Gülel FE (2019) Development of innovation-outsourcing scale in enterprises. In: Paper presented at 3rd international EUREFE congress. Adnan Menderes University, Aydın). This scale consists of four dimensions: the importance of outsourcing, the results of outsourcing, satisfaction with outsourcing, and innovation. We analyzed the data using binary logistic and multinomial logistic regression methods. In the first model, the dependent variable is whether the company has a research and development department. In the second model, the dependent variable is the company's status of switching to Industry 4.0. We found that the importance of outsourcing and innovation is statistically significant for having a research and development department, and that the results of outsourcing, satisfaction with outsourcing, and innovation are statistically significant for switching to Industry 4.0.

Ferda Esin Gülel, Öncü Yanmaz Arpacı
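
A minimal binary logistic regression sketch in the spirit of the first model above (dependent variable: whether the firm has an R&D department). The predictors and data are simulated stand-ins for the survey scale scores:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
X = rng.normal(size=(n, 2))                    # e.g. outsourcing importance, innovation
eta = 0.8 * X[:, 0] + 1.2 * X[:, 1] - 0.2
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-eta))).astype(float)  # R&D indicator

res = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print(res.summary())                           # coefficients, z-stats, p-values
```
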
Subjective Well-Being of Poor Households

In the neoliberalizing world, social policy practices are declining. However, social assistance, one of the important tools of social policy, is crucial for reducing poverty while also ensuring the reproduction of labor. There are a limited number of studies investigating the influence on subjective well-being of social assistance, which helps poor people meet their needs in kind or in cash. Using the 2013 Income and Living Conditions Survey from TURKSTAT, this study contributes empirically to this inquiry by looking at the effects of social assistance on subjective well-being. For this purpose, a partial proportional odds model was used. According to the results, being a recipient of social assistance is a statistically significant predictor of subjective well-being, and the effect is negative. This outcome suggests that people who receive social assistance feel poorer and are therefore less likely to report being happy.

Süreyya Temelli, Mustafa Sevüktekin
Formation of a Fishing and Aquaculture Cluster as a Tool for Regional Competitiveness

The purpose of this research is to analyze the viability of the emergence of a fishing and aquaculture cluster in the state of Michoacán, with the aim of its becoming a tool that promotes the regional competitiveness of the territory. To determine the feasibility of the formation of an agglomeration of companies, this work uses the methodology proposed by Fregoso (Factores determinantes en las asociaciones para formar clústers industriales como estrategia de desarrollo regional. Tesis Doctoral, Instituto Politécnico Nacional, México, 2012), in which coefficients are used to determine mathematically whether a cluster can emerge in the region. To apply the coefficients, we start from proximity theory and use as essential factors for the calculation the number of companies in the sector and in the industry, the number of workers in the sector and in the industry, and the employed population in the industry. The information is collected from the INEGI Economic Census (INEGI 2019, retrieved 10 September 2020) and, by substituting the data into the coefficient formulas, it is concluded that, given the state's agricultural and specifically fishing vocation, the emergence of a fishing cluster that promotes regional competitiveness is feasible in the Infiernillo, Costa, Tierra Caliente, Pátzcuaro, Cuitzeo, and Lerma-Chapala regions of Michoacán.

María Francisca Peñaloza-Talavera, Jaime Apolinar Martínez-Arroyo, Marco Alberto Valenzo-Jiménez
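
A hedged sketch of a location-quotient-style coefficient commonly used to gauge cluster feasibility from exactly the kinds of counts the abstract lists (companies and workers by sector, region, and nation); Fregoso's exact coefficients may differ, and the numbers below are illustrative, not Michoacán data:

```python
def location_quotient(emp_sector_region, emp_region, emp_sector_nation, emp_nation):
    """LQ = (regional sector employment share) / (national sector employment share).
    LQ > 1 signals regional specialization in the sector."""
    return (emp_sector_region / emp_region) / (emp_sector_nation / emp_nation)

lq = location_quotient(emp_sector_region=4_500, emp_region=180_000,
                       emp_sector_nation=90_000, emp_nation=12_000_000)
print(f"LQ = {lq:.2f} -> {'specialized' if lq > 1 else 'not specialized'}")
```
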
A Path Analysis of Learning Approaches, Personality Types and Self-Efficacy

The aim of this study is to explore how the self-efficacy and personality types of undergraduates affect their learning approach and to analyze the relationships between the variables involved. A model was developed using self-efficacy, personality types, and learning approach, and this model was tested using path analysis. The path analysis showed that extraversion, neuroticism, conscientiousness, and openness had a significant effect on self-efficacy, while extraversion and openness had a significant effect on both deep and surface learning. It was further found that self-efficacy had a significant effect on deep and surface learning. According to the results, personality types directly and/or indirectly affect the learning approaches. In light of these findings, when the deep learning approach is considered the desired learning approach, the effects of self-efficacy and personality types on deep learning are remarkable.

Mine Aydemir, Nuran Bayram Arlı
Emotions Mining Research Framework: Higher Education in the Pandemic Context

The pandemic situation in 2020 was a challenge for the organization of the educational process in higher education. The crisis exacerbated inequalities between universities whose funding, digital sustainability, and emergency-training capacity are weaker than those of their national and international competitors. The poor provision of the learning process in an electronic environment led to a number of problems, some related to the acquisition of learning material and practical skills, others to the lack of communication between students and academic staff. The increased use of the Internet during periods of social distancing also led to greater participation in social media activities, which have become forums for sharing opinions and expressing emotions through text and multimedia content. In this regard, the aim of the article is to propose a research framework for the evaluation of emotional attitudes in social media. The author tested the practical applicability of the proposed framework by retrieving data from the social network Twitter and applying data mining techniques to analyze large volumes of textual content.

Radka Nacheva
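
A minimal sketch of the kind of sentiment scoring such a framework's mining step could apply to retrieved tweets, here using NLTK's VADER analyzer as one possible tool (the Twitter retrieval itself is omitted, and the example texts are made up):

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

tweets = [
    "Online lectures are a disaster, I can't reach any of my professors.",
    "Loving the flexibility of studying from home this semester!",
]
for text in tweets:
    # polarity_scores returns neg/neu/pos components and a compound score
    print(sia.polarity_scores(text), "-", text)
```
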
Uncertain Super-Efficiency Data Envelopment Analysis

The main goal of the current study is to propose a new method for ranking homogeneous decision-making units in the presence of uncertain inputs and/or outputs. To reach this goal, the data envelopment analysis approach, the super-efficiency technique, and uncertainty theory are applied. Accordingly, this study presents a novel uncertain super-efficiency data envelopment analysis approach that can be used under data uncertainty. Notably, the super-efficiency data envelopment analysis approach is proposed under the constant returns to scale assumption and in multiplier form. Additionally, to show the efficacy and applicability of the proposed method, a numerical example involving five decision-making units with two uncertain inputs and two uncertain outputs is utilized. The results indicate that the proposed approach is an effective and applicable method for the performance evaluation and ranking of decision-making units in an uncertain environment.

Pejman Peykani, Jafar Gheidar-Kheljani, Donya Rahmani, Mohammad Hossein Karimi Gavareshki, Armin Jabbarzadeh
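
A sketch of the crisp (non-uncertain) Andersen-Petersen super-efficiency model under constant returns to scale, solved with scipy's linprog. Note two simplifications relative to the chapter: the data are illustrative, and the envelopment (dual) form is used rather than the multiplier form the chapter adopts:

```python
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0], [4.0, 1.0], [3.0, 2.0], [5.0, 4.0], [4.0, 3.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0], [1.0]])                           # outputs

def super_efficiency(o):
    """min theta s.t. sum_{j!=o} lam_j x_j <= theta x_o, sum lam_j y_j >= y_o."""
    others = [j for j in range(len(X)) if j != o]    # DMU o is excluded from the frontier
    n = len(others)
    c = np.r_[1.0, np.zeros(n)]                      # decision variables: theta, lambdas
    A_in = np.c_[-X[o][:, None], X[others].T]        # inputs:  X'lam - theta*x_o <= 0
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y[others].T]  # outputs: -Y'lam <= -y_o
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun if res.success else np.inf        # score > 1 means super-efficient

for o in range(len(X)):
    print(f"DMU {o}: {super_efficiency(o):.3f}")
```
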
Using Data Mining Techniques for Designing Patient-Friendly Hospitals

Spending a long time in the hospital and the intense circulation between clinics involve various risks for both patients and medical staff. Especially in the COVID-19 outbreak, minimizing this period is vital for reducing the risk of transmission. Symptomatic patients can be taken immediately into the isolation and treatment process, while asymptomatic patients continue to spread the disease. Because patients suffering from other conditions like diabetes, cancer, or other chronic illnesses also have immune system problems, epidemics can be fatal for them. Therefore, especially during the pandemic, patients tend to delay their hospital visits due to the risk of contamination; yet unless they obtain diagnosis and treatment, they will face more serious health problems. If we provide patients with uncrowded hospitals and shorter hospital visits, we will reduce the risk of transmission of COVID-19 and other seasonal epidemics. It is therefore necessary to determine which clinics are visited frequently by patients and which clinics and medical units work together in the diagnosis and treatment process. In this chapter, data mining techniques used in healthcare system design are explained and illustrated by a real-life case study. Analyzing patient data with data mining techniques allows us to reach the aim of this chapter. Association rules between clinics and other related medical units, such as blood collection and nuclear medicine services, are determined; they also reveal the circulation of patients in the hospital. Frequency analysis shows the crowded clinics and other medical units. Minimizing this circulation and the crowding of patients in the hospital also minimizes the risk of transmission of COVID-19. In this chapter, six months of data on patients treated in a hospital in Turkey are used. These data include demographic information on the patients, as well as which clinics they visited and how many days they were treated in the hospital. As a result of the data mining analysis, the clinics and medical units working together in the diagnosis and treatment process and the most crowded clinics are determined. Recommendations are made to reduce the distance between associated clinics and units and to increase the service capacities of the most crowded clinics. Thus, the application of data mining techniques for designing patient-friendly healthcare services is presented.

İpek Deveci Kocakoç, Gökçe Baysal Türkölmez
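
A minimal association-rule sketch with mlxtend's apriori, mirroring the chapter's idea of mining which clinics and medical units co-occur in patient visits; the visit records below are made-up placeholders, not the hospital data:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

visits = [
    ["internal_medicine", "blood_collection", "radiology"],
    ["cardiology", "blood_collection"],
    ["internal_medicine", "blood_collection"],
    ["cardiology", "radiology", "blood_collection"],
]
te = TransactionEncoder()                     # one-hot encode visit "baskets"
df = pd.DataFrame(te.fit(visits).transform(visits), columns=te.columns_)

frequent = apriori(df, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```
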
Sustainability Transition Through Awareness to Promote Environmental Efficiency

The 17 Sustainable Development Goals, the United Nations' Agenda 2030, the Paris Agreement, the European Green Deal, and the current global policy momentum towards green efficiency motivate the need for a better understanding of the determinants of environmental efficiency to tackle climate change. Adopting a non-parametric metafrontier framework, productive performance and environmental efficiency are calculated through Data Envelopment Analysis and the Directional Distance Function for each of 104 countries from 2006 through 2014. We contribute to the understanding of environmental efficiency patterns by partitioning the metafrontier via a factor encapsulating 56 environmental indicators to give rise to heterogeneous environmental awareness regimes. Adopting fractional probit models, we show econometrically that productive performance appears to be a major driver of environmental efficiency only for the environmentally aware country economies, whereas a direct rebound effect is also documented. This is a result with major "policy sequencing" implications. Absorptive capacity, reflecting the ability and potential of a country to benefit from technological developments, seems to play a crucial role as well. The less environmentally aware cluster does not seem to respond the same way to the set of factors considered, indicating that complexity and latent mechanisms affect green efficiency.

Nikos Chatzistamoulou, Phoebe Koundouri
Data Mining Approach in Personnel Selection: The Case of the IT Department

Data mining studies have recently become frequent in the literature. Data mining can be applied in every field, especially in banking, marketing, customer relationship management, and investment and portfolio management. In the literature, the personnel selection problem has mostly been examined with the help of multi-criteria decision-making techniques. This study aims to apply data mining techniques in the field of human resources, where they have been used relatively little. The desired candidate attributes of a large-scale construction company were determined according to the competencies specified in its information technologies department job announcement, and the candidates were ranked according to these attributes. In the ranking, accuracy values were compared across basic data mining algorithms, and the necessary data pre-processing techniques were applied to candidates who entered incomplete or incorrect information during the application process. The decision tree algorithm gave the highest accuracy; random forest, AdaBoost, gradient boosting, and XGBoost algorithms were also tried. In addition, the analysis identified the attributes that should be examined first among the application features. The large amount of data enabled the machine learning algorithms to learn the information more easily and to weight the existing criteria. This study thereby aims to obtain a more objective result by weighting the personnel selection problem with machine learning algorithms instead of multi-criteria decision-making methodology. Moreover, interviewing candidates for recruitment is an extremely difficult process under the current Covid-19 pandemic conditions, and online interviews take a lot of time; the study therefore also aims to optimize the process through automation, by weighting the features in the existing application data. The study was carried out using WEKA and Python.

Ezgi Demir, Rana Elif Dinçer, Batuhan Atasoy, Sait Erdal Dinçer
A Possibilistic Programming Approach to Portfolio Optimization Problem Under Fuzzy Data

The investment portfolio optimization problem is an important issue and challenge in the investment field. The goal of the portfolio optimization problem is to create an efficient portfolio that incurs the minimum risk to the investor across different return levels. It should be noted that in many real cases, financial data are tainted by uncertainty and ambiguity. Accordingly, this study presents a fuzzy portfolio optimization model using possibilistic programming that can be used in the presence of fuzzy data and linguistic variables. Three objectives, the return, the systematic risk, and the non-systematic risk, are considered in the proposed fuzzy portfolio optimization model. Finally, the possibilistic portfolio optimization model is implemented in a real case study from the Tehran stock exchange to show the efficacy and applicability of the proposed approach.

Pejman Peykani, Mohammad Namakshenas, Mojtaba Nouri, Neda Kavand, Mohsen Rostamy-Malkhalifeh
A Hybrid Fuzzy MCDM Approach for ESCO Selection

In recent years, energy efficiency has become an important issue due to the growing energy demands of countries and firms. High costs of energy supplies and environmental issues are major problems around the world and have led to a global effort to save energy. Besides, there is a growing interest in providing energy services to achieve energy and environmental goals. Therefore, new companies called Energy Service Companies (ESCOs), providing energy services to energy users, have started to operate in the world market. In this context, it is important for firms to choose the right Energy Service Company to enable them to save energy and to assist in their energy efficiency projects. In this paper, a hybrid fuzzy Multi-Criteria Decision-Making (MCDM) approach is proposed for the Energy Service Company selection process of a textile firm. This approach is based on the fuzzy Stepwise Weight Assessment Ratio Analysis (SWARA) and fuzzy Measurement of Alternatives and Ranking according to the Compromise Solution (MARCOS) methods. First, the weights of the decision criteria are determined using the fuzzy SWARA method. Then, the Energy Service Company alternatives are evaluated with the fuzzy MARCOS method, and the best alternative for the textile firm is determined.

Nilsen Kundakcı
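
A hedged sketch of the crisp SWARA weighting step (the chapter uses the fuzzy variant, which works analogously on fuzzy numbers): criteria arrive pre-sorted by importance, and s_j holds the expert's comparative importance of criterion j relative to criterion j-1. The s values below are illustrative:

```python
def swara_weights(s):
    """Crisp SWARA: s[0] is unused by convention; for j > 0,
    k_j = s_j + 1 and q_j = q_{j-1} / k_j; weights are normalized q values."""
    q = [1.0]
    for sj in s[1:]:
        q.append(q[-1] / (sj + 1.0))
    total = sum(q)
    return [qj / total for qj in q]

# four criteria, ordered from most to least important
print(swara_weights([0.0, 0.30, 0.15, 0.20]))
```
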
Are NBA Players’ Salaries in Accordance with Their Performance on Court?

Researchers and practitioners ordinarily fit linear models to estimate NBA players' salaries based on the players' performance on court. In contrast, we first select the most important determinants or statistics (years of experience in the league, games played, etc.) and utilize them to predict the players' salary shares (salaries relative to the team's payroll) by employing the non-linear Random Forest machine learning algorithm. We are further able to accurately classify whether a player is lowly or highly paid. Additionally, we avoid the over-fitting observed in most papers through external evaluation of the salary predictions. Based on information collected from three distinct periods, 2017–2019, we identify the important factors that achieve very satisfactory salary predictions and draw useful conclusions. We conclude that player salary shares exhibit a relatively high (non-linear) accordance with their performance on court.

Ioanna Papadaki, Michail Tsagris
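
A minimal sketch of the modelling idea above: predict salary shares with a random forest and evaluate out of sample on a held-out set, which is one way to guard against the over-fitting the abstract mentions. The features and data are synthetic stand-ins for real performance statistics:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 400
X = rng.normal(size=(n, 3))                   # e.g. experience, games played, points
share = 1 / (1 + np.exp(-(0.9 * X[:, 0] + 0.5 * X[:, 2]
                          + rng.normal(scale=0.5, size=n))))  # salary share in (0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, share, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(f"test R^2: {r2_score(y_te, rf.predict(X_te)):.3f}")
```
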
Financial Distress Prediction Using Support Vector Machines and Logistic Regression

Financial distress and bankruptcies are highly costly and devastating processes for all parts of the economy. Prediction of distress is important both for the functioning of the general economy and, at the micro level, for the firm's partners, investors, and lenders. This study aims to develop an effective prediction model with Support Vector Machines (SVM) and Logistic Regression Analysis (LRA). As the field of study, 172 firms traded on Borsa İstanbul were chosen. Besides serving as one of the two basic prediction methods, LRA was also used as a feature selection method, and the results of this model were compared. The empirical results show that both methods achieve a good prediction model; however, the SVM model in which the feature selection phase is applied shows the best performance.

Seyyide Doğan, Deniz Koçak, Murat Atan
Predicting Stock Returns: ARMAX versus Machine Learning

In the modern world, online social and news media significantly impact society, the economy, and financial markets. In this chapter, we compare the predictive performance of financial econometric methods with machine learning and deep learning methods for the returns of the stocks of the S&P 100 index. The analysis is enriched by using COVID-19-related news sentiment data collected over a period of 10 months. We analyse the performance of each model and identify the best algorithm for such predictions. For the sample we analysed, our results indicate that the autoregressive moving-average model with exogenous variables (ARMAX) has a predictive performance comparable to the machine and deep learning models, outperformed only by the extreme gradient boosted trees (XGBoost) approach. This result holds in both the training and testing datasets.

Darya Lapitskaya, Hakan Eratalay, Rajesh Sharma
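
A minimal ARMAX sketch with statsmodels (SARIMAX with an exogenous regressor), matching the benchmark model type the chapter compares against ML methods; the "sentiment" regressor and its assumed next-period value are simulated, not the COVID-19 news data:

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
n = 300
sentiment = rng.normal(size=(n, 1))                 # toy exogenous sentiment series
returns = 0.4 * sentiment[:, 0] + rng.normal(scale=0.5, size=n)

model = SARIMAX(returns, exog=sentiment, order=(1, 0, 1))   # ARMA(1,1) + exog
res = model.fit(disp=False)
future_sent = np.array([[0.1]])                     # assumed next-period sentiment
pred = res.forecast(steps=1, exog=future_sent)      # one-step-ahead return forecast
print(res.params, pred)
```
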
Analysing the Residential Market Using Self-Organizing Map

Although the residential property market has strong connections with various sectors, such as construction, logistics, and investment, it works through different dynamics than other markets; thus, it can be analysed from various perspectives. Researchers and investors are mostly interested in price trends, the impact of external factors on residential property prices, and price prediction. When analysing price trends, it is beneficial to consider multidimensional data that contain attributes of residential properties, such as number of rooms, number of bathrooms, floor number, total floors, and size, as well as proximity to public transport, shops, and banks. Knowing a neighbourhood’s key aspects and properties could help investors, real estate development companies, and people looking to buy or rent properties to investigate similar neighbourhoods that may have unusual price trends. In this study, the self-organizing map method was applied to residential property listings in the Trójmiasto Area of Poland, where the residential market has recently been quite active. The study aims to group together neighbourhoods and subregions to find similarities between them in terms of price trends and stock. Moreover, this study presents relationships between attributes of residential properties.

Olgun Aydin, Krystian Zieliński
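
A minimal self-organizing map sketch using the `minisom` package on made-up, standardized listing attributes (e.g. rooms, size, price per square metre); real Trójmiasto listings would replace the random matrix:

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(9)
listings = rng.normal(size=(500, 3))               # standardized listing attributes

som = MiniSom(6, 6, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(listings)
som.train_random(listings, num_iteration=5000)

# Each listing maps to its best-matching unit (BMU); listings landing on nearby
# units form similar segments/neighbourhood profiles.
print("BMU of the first listing:", som.winner(listings[0]))
```
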
Advanced Car Price Modelling and Prediction

The scope of the paper is the modelling and prediction of brand-new car prices in the Greek market. First, the most important car characteristics are detected via a state-of-the-art machine learning variable selection algorithm. Statistical (log-normal regression) and machine learning (random forest and support vector regression) algorithms operating on the selected characteristics are then evaluated for predictive performance in multiple aspects. The overall analysis is mainly beneficial to consumers, as it reveals the important car characteristics associated with car prices. Further, the optimal predictive model achieves high predictability levels and provides evidence for a car being over- or under-priced.

Michail Tsagris, Stefanos Fafalios
Impact of Outlier-Adjusted Lee–Carter Model on the Valuation of Life Annuities

Annuity pricing is critical to insurance companies for their financial liabilities. Companies aim to adjust prices using the forecasting model that best fits their historical data, which may contain outliers influencing the model. Environmental conditions and extraordinary events, such as a weak health system, an outbreak of war, or pandemics like the Spanish flu or Covid-19, may cause outliers resulting in the misevaluation of mortality rates. These outliers should be taken into account to preserve the financial strength and stability of the life insurance industry. In this study, we aim to determine whether mortality jumps have an impact on annuity pricing. We examine annuity price fluctuations among different countries and compare two models with respect to country characteristics. Moreover, we illustrate annuity pricing on a portfolio for a more comprehensive assessment; a simulated diverse portfolio is created for the prices of four types of life annuities. Canada, Japan, and the United Kingdom, as developed countries with high longevity risk, and Russia and Bulgaria, as emerging countries, are considered. The results of this study support the use of outlier-adjusted models for specific countries.

Cem Yavrum, A. Sevtap Selcuk-Kestel
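
A minimal sketch of the classical Lee-Carter fit, log m_{x,t} = a_x + b_x k_t, via SVD of the centered log-mortality matrix; the chapter's outlier adjustment of the k_t series is not reproduced, and the mortality rates are synthetic:

```python
import numpy as np

rng = np.random.default_rng(10)
ages, years = 10, 40
log_m = (np.linspace(-6, -2, ages)[:, None]          # age pattern
         - 0.02 * np.arange(years)[None, :]          # mortality improvement over time
         + rng.normal(scale=0.02, size=(ages, years)))

a_x = log_m.mean(axis=1)                             # a_x: average log rate per age
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()                        # normalize so that sum b_x = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()                   # rescale k_t accordingly
print("k_t trend, first vs last year:", k_t[0], k_t[-1])
```
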
Optimal Life Insurance and Annuity Demand with Jump Diffusion and Regime Switching

Classic Merton optimal life-cycle portfolio and consumption models are based on diffusion models for risky assets. In this paper, we extend the Richard’s (1975) optimal life-cycle model by allowing jumps and regime switching in the diffusion of risky assets. We develop a system of paired Hamilton–Jacobi–Bellman (HJB) equations. Using numerical methods, we obtain the results of agents’ behaviour. Our findings are that agents would be more conservative in consumption and annuitisation when the economic environment is more volatile and the bequest motive is stronger. However, under certain conditions, agents might increase their exposure to risky assets.

Jinhui Zhang, Sachi Purcal, Jiaqin Wei
Prediction of Claim Probability with Excess Zeros

Non-life insurance pricing is based on two components: claim severity and claim frequency. These components are used to estimate the expected pure premium for the next policy period. Generalized linear models (GLMs) are widely preferred for the estimation of claim frequency and claim severity due to their ease of interpretation and implementation. Since GLMs have some restrictions, such as the exponential family distribution assumption, more flexible Machine Learning (ML) methods have been applied to insurance data in recent years. ML methods, at the intersection of computer science and statistics, use learning algorithms to establish the relationship between the response and the predictor variables. Because of some insurance policy modifications, such as deductibles and no-claim discount systems, excess zeros are usually observed in claim frequency data. In the presence of excess zeros, predicting the claim probability can be a good alternative to predicting claim numbers, since positive counts are rarely observed in the portfolio. Excess zeros create an imbalance problem in the data: when the data are highly imbalanced, predictions will be biased toward the majority class due to the priors, and predicted probabilities may be uncalibrated. In this study, we are interested in the claim occurrence probability in the presence of excess zeros. A highly imbalanced Turkish motor insurance dataset is used for the case study. Ensemble methods, popular ML approaches, are used for probability prediction as an alternative to logistic regression. Calibration methods are applied to the predicted probabilities, and the results are compared.

Aslıhan Şentürk Acar
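
A minimal sketch of probability calibration for imbalanced claim-occurrence data: a gradient boosting classifier wrapped in isotonic calibration and scored with the Brier score. The data are simulated with roughly 5% positives, standing in for the Turkish motor portfolio, and the specific ensemble and calibration method are illustrative choices:

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n = 5000
X = rng.normal(size=(n, 4))
p = 1 / (1 + np.exp(-(X[:, 0] - 3.2)))               # rare-event probabilities
y = (rng.uniform(size=n) < p).astype(int)            # ~5% claim occurrences

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = CalibratedClassifierCV(GradientBoostingClassifier(), method="isotonic", cv=3)
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]                # calibrated claim probabilities
print(f"Brier score: {brier_score_loss(y_te, proba):.4f}")
```
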
Risk Classification in Nonlife Insurance Premium Ratemaking

The aim of this research is to explore and analyze the benefits of risk classification methods using data mining techniques for premium ratemaking in nonlife insurance. We rely on the generalized linear models (GLM) framework for nonlife premiums and examine the impact of specific data mining techniques on classifications of risk in motor hull insurance in Bosnia and Herzegovina. We study this relationship in an integrated framework, considering a standard risk model based on the application of a Poisson GLM for the claims frequency estimate. Although the GLM is a widely used method to determine insurance premiums, the improvements to the GLM from the data mining methods identified in this paper may solve practical challenges for risk models. The application of data mining in this paper aims to improve the results of the nonlife insurance premium ratemaking process; the improvement is reflected in the choice of predictors, or risk factors, that have an impact on insurance premium rates. The following data mining methods for the selection of prediction variables were investigated: forward stepwise selection and neural networks. We provide strong and robust evidence that the use of data mining techniques influences premium ratemaking in nonlife insurance.

Amela Omerašević, Jasmina Selimović
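
A minimal claim-frequency sketch of the standard risk model the chapter builds on: a Poisson GLM with policy exposure entering as an offset. The rating factors and data are invented placeholders:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 1000
X = sm.add_constant(rng.normal(size=(n, 2)))        # e.g. driver age, vehicle power
exposure = rng.uniform(0.2, 1.0, size=n)            # policy years in force
mu = exposure * np.exp(-2.0 + 0.3 * X[:, 1])        # expected claim counts
claims = rng.poisson(mu)

# exposure= adds log(exposure) as an offset under the canonical log link
glm = sm.GLM(claims, X, family=sm.families.Poisson(), exposure=exposure)
print(glm.fit().summary())
```
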
Insurance Investments Management—An Example of a Small Transition Country

The COVID-19 pandemic in 2020 caused a global health, economic, financial, and social crisis on a scale, and with potentially harmful consequences, not recorded since the global financial crisis of 2007–2008. For the BiH economy, risk exposure is particularly pronounced in terms of the increased market risk of illiquidity of the economy and of public and private companies, more precisely, the illiquidity risk of a (re)insurance company. The paper takes up the application of modern methods in the management of the investment portfolio of a (re)insurance company, primarily methods for managing the risks of the securities portfolio and methods for managing credit risk. The starting point is the current domestic regulations on insurance, as well as the prescribed methodology for assessing the main risks and for the quantitative analysis of the impact of the investment portfolio on the solvency of (re)insurance companies in BiH. To assess the risk of the securities portfolio, a model of variance and standard deviation was used, which shows the degree of deviation of a security's return from its expected average return; the return (positive, negative, or zero) is measured by the rise, fall, or constancy of the security's price between two measurement periods. The relationship between the yields of two securities is measured by the correlation coefficient and covariance.

Željko Šain, Edin Taso, Jasmina Selimović
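
A minimal sketch of the risk measures described above: per-security return variance and standard deviation, the covariance and correlation between two securities, and the resulting portfolio variance w' Sigma w. The price series and weights are invented:

```python
import numpy as np

p1 = np.array([100.0, 102.0, 101.0, 105.0, 104.0])
p2 = np.array([50.0, 50.5, 51.5, 51.0, 52.0])
r1, r2 = np.diff(p1) / p1[:-1], np.diff(p2) / p2[:-1]   # period returns per security

cov = np.cov(r1, r2)                                     # 2x2 covariance matrix
corr = np.corrcoef(r1, r2)[0, 1]                         # correlation of the yields
w = np.array([0.6, 0.4])                                 # portfolio weights
port_var = w @ cov @ w                                   # portfolio variance w'Sigma w
print(f"std1={r1.std(ddof=1):.4f} std2={r2.std(ddof=1):.4f} "
      f"corr={corr:.3f} portfolio std={np.sqrt(port_var):.4f}")
```
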
Metadata
Title
Advances in Econometrics, Operational Research, Data Science and Actuarial Studies
Editor
Prof. M. Kenan Terzioğlu
Copyright Year
2022
Electronic ISBN
978-3-030-85254-2
Print ISBN
978-3-030-85253-5
DOI
https://doi.org/10.1007/978-3-030-85254-2