
About this Book

This book constitutes the refereed proceedings of the 4th International Symposium on Integrated Uncertainty in Knowledge Modeling and Decision Making, IUKM 2015, held in Nha Trang, Vietnam, in October 2015.

The 40 revised full papers were carefully reviewed and selected from 58 submissions and are presented together with three keynote and invited talks. The papers provide a wealth of new ideas and report both theoretical and applied research on integrated uncertainty modeling and management.

Table of Contents

Frontmatter

Epistemic Uncertainty Modeling: The State of the Art

This paper surveys the state of the art of epistemic uncertainty modeling, from subjective probability (Thomas Bayes) to fuzzy measures (Michio Sugeno).

Hung T. Nguyen

Fuzzy Sets, Multisets, and Rough Approximations

Multisets, also known as bags, are similar to fuzzy sets but essentially different in their basic concepts and operations. We overview multisets together with the basics of fuzzy sets in order to observe the differences between the two. We then introduce fuzzy multisets as the combination of both concepts. There is also the concept of real-valued multisets as a generalization of multisets. Rough approximations of multisets and fuzzy multisets are discussed, using a natural projection of the universal set onto the set of equivalence classes.

Sadaaki Miyamoto

What Is Fuzzy Natural Logic

In 1970, G. Lakoff published a paper [8] in which he introduced the concept of natural logic with the following goals:

to express all concepts capable of being expressed in natural language,

to characterize all the valid inferences that can be made in natural language,

to mesh with adequate linguistic descriptions of all natural languages.

Natural logic is thus a collection of terms and rules that come with natural language and that allow us to reason and argue in it. According to G. Lakoff’s hypothesis, natural language employs a relatively small finite number of atomic predicates that take sentential complements (sentential operators) and are related to each other by meaning-postulates that do not vary from language to language. The concept of natural logic has been further developed by several authors (see, e.g., [2,9] and elsewhere).

Vilém Novák

Combining Fuzziness and Context Sensitivity in Game Based Models of Vague Quantification

We introduce a game semantic approach to fuzzy models of vague quantification that addresses a number of problems with previous frameworks. The main tool is the introduction of a new logical operator that supports context based evaluations of suitably quantified formulas.

Christian G. Fermüller

New Model of a Fuzzy Associative Memory

We propose a new theory of implicative fuzzy associative memory. This memory is modeled by a fuzzy preorder relation. We give a necessary and sufficient condition on input data that guarantees an effective composition of a fuzzy associative memory which is, moreover, insensitive to a certain type of noise.
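For intuition, an implicative fuzzy associative memory is often composed as in the following sketch, assuming the Gödel implication and sup-min composition (illustrative choices; the paper's exact model and noise-insensitivity condition are not reproduced here):

```python
import numpy as np

def godel_implication(a, b):
    """Godel implication: 1 where a <= b, otherwise b."""
    return np.where(a <= b, 1.0, b)

def store(patterns):
    """Build the implicative memory W(x, y) = min_k (A_k(x) -> B_k(y))."""
    W = None
    for A, B in patterns:
        imp = godel_implication(A[:, None], B[None, :])
        W = imp if W is None else np.minimum(W, imp)
    return W

def recall(W, A):
    """Sup-min composition: B(y) = max_x min(A(x), W(x, y))."""
    return np.max(np.minimum(A[:, None], W), axis=0)

# Toy example: two fuzzy input/output pattern pairs on small universes
A1, B1 = np.array([1.0, 0.6, 0.2]), np.array([0.9, 0.3])
A2, B2 = np.array([0.2, 0.7, 1.0]), np.array([0.4, 1.0])
W = store([(A1, B1), (A2, B2)])
print(recall(W, A1))  # recovers B1 for this toy data
```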

Irina Perfilieva

Construction of Associative Functions for Several Fuzzy Logics via the Ordinal Sum Theorem

In this report, the ordinal sum theorem of semigroups is applied to construct logical operations for several fuzzy logics. A generalized form of ordinal sum for fuzzy logics on [0, 1] is defined in order to uniformly express several families of logical operations. Then, the conditions in ordinal sums for various properties of logical operations are presented: for example, the monotonicity, the location of the unit element, the left/right-continuity, and the and/or-likeness. Finally, some examples of constructing pseudo-uninorms by the proposed method are illustrated.
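As a concrete special case of the construction, the classical ordinal sum theorem for t-norms can be sketched as follows (the paper works in the more general semigroup setting with pseudo-uninorms; this illustration assumes the simplest t-norm case):

```python
def ordinal_sum(summands):
    """Build a t-norm as an ordinal sum of t-norms T_i on intervals [a_i, b_i].

    Inside a summand interval, rescale to [0, 1], apply T_i, rescale back;
    outside every summand interval the ordinal sum falls back to min."""
    def T(x, y):
        for a, b, t in summands:
            if a <= x <= b and a <= y <= b:
                return a + (b - a) * t((x - a) / (b - a), (y - a) / (b - a))
        return min(x, y)
    return T

product = lambda x, y: x * y                      # product t-norm
lukasiewicz = lambda x, y: max(0.0, x + y - 1.0)  # Lukasiewicz t-norm

# Product on [0, 0.5], Lukasiewicz on [0.5, 1], min elsewhere
T = ordinal_sum([(0.0, 0.5, product), (0.5, 1.0, lukasiewicz)])
print(T(0.2, 0.4), T(0.6, 0.7), T(0.3, 0.8))  # 0.16 0.5 0.3
```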

Mayuka F. Kawaguchi, Michiro Kondo

Cognitively Stable Generalized Nash Equilibrium in Static Games with Unawareness

In game theory, models and solution concepts for games with unawareness have recently been developed. This paper focuses on static games with unawareness and points out a conceptual problem of an existing equilibrium concept called generalized Nash equilibrium. Some generalized Nash equilibria can be cognitively unstable in the sense that, once such an equilibrium is played, some agent may feel that the outcome is unexpected at some level of someone’s perception hierarchy. This may lead to a change in the agent’s perception and thus her behavior. Based on this observation, we characterize a class of generalized Nash equilibria that satisfy cognitive stability and thus avoid this problem. Then we discuss relationships between cognitively stable generalized Nash equilibrium and Nash equilibrium of the objective game, that is, how unawareness can or cannot change the equilibrium convention.

Yasuo Sasaki

Maximum Lower Bound Estimation of Fuzzy Priority Weights from a Crisp Comparison Matrix

In Interval AHP, our uncertain judgments are denoted as interval weights, by regarding a comparison as a ratio of real values in the corresponding interval weights. Based on the same concept as Interval AHP, this study denotes uncertain judgments as fuzzy weights, which are extensions of the interval weights. In order to obtain the interval weight for estimating a fuzzy weight, Interval AHP is modified by focusing on the lower bounds of the interval weights, similarly to the viewpoint of the belief function in evidence theory. It is reasonable to maximize the lower bound since it represents the weight surely assigned to one of the alternatives. The sum of the lower bounds over all alternatives is considered as a membership value, and then the fuzzy weight is estimated. The higher-level sets of the fuzzy weights correspond to the more consistent comparisons in a decision maker’s mind.

Tomoe Entani, Masahiro Inuiguchi

Logarithmic Conversion Approach to the Estimation of Interval Priority Weights from a Pairwise Comparison Matrix

An alternative method for the estimation of interval priority weights in Interval AHP is proposed. The proposed method applies a logarithmic conversion to the components of the pairwise comparison matrix and obtains logarithmically-converted interval priority weights so as to minimize the sum of their widths. The logarithmically-converted interval priority weights are estimated as an optimal solution of a linear programming problem. The interval priority weights are then obtained by the antilogarithmic conversion of the optimal solution, followed by normalization. Numerical experiments are conducted to demonstrate the advantages of the proposed estimation over the conventional one.
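A minimal sketch of how such a log-space linear program can be set up (the constraint set and the gauge-fixing equality below are illustrative assumptions; the paper's exact normalization may differ):

```python
import numpy as np
from scipy.optimize import linprog

def interval_weights_log(A):
    """Estimate interval priority weights [l_i, u_i] from a pairwise
    comparison matrix A via logarithmic conversion (a sketch).

    Variables are log-lower and log-upper bounds (L_i, U_i); every
    comparison must satisfy L_i - U_j <= log a_ij <= U_i - L_j, and
    the total width sum(U_i - L_i) is minimized."""
    n = A.shape[0]
    c = np.concatenate([-np.ones(n), np.ones(n)])  # minimize sum(U) - sum(L)
    A_ub, b_ub = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            log_a = np.log(A[i, j])
            row = np.zeros(2 * n); row[i] = 1; row[n + j] = -1
            A_ub.append(row); b_ub.append(log_a)      # L_i - U_j <= log a_ij
            row = np.zeros(2 * n); row[n + i] = -1; row[j] = 1
            A_ub.append(row); b_ub.append(-log_a)     # log a_ij <= U_i - L_j
        row = np.zeros(2 * n); row[i] = 1; row[n + i] = -1
        A_ub.append(row); b_ub.append(0.0)            # L_i <= U_i
    # Pin the additive gauge of the log scale: sum(L_i + U_i) = 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=np.ones((1, 2 * n)), b_eq=[0.0],
                  bounds=[(None, None)] * (2 * n))
    L, U = res.x[:n], res.x[n:]
    return np.exp(L), np.exp(U)  # antilog; normalize as the paper prescribes

A = np.array([[1, 2, 4], [1/2, 1, 2], [1/4, 1/2, 1]])
print(interval_weights_log(A))  # crisp weights for a consistent matrix
```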

Masahiro Inuiguchi, Shigeaki Innan

An Effective Method for Optimality Test Over Possible Reaction Set for Maximin Solution of Bilevel Linear Programming with Ambiguous Lower-Level Objective Function

A bilevel linear optimization problem with an ambiguous lower-level objective requires decision making under uncertainty of the rational reaction. With the assumption that the ambiguous coefficient vector of the follower lies in a convex polytope, we apply the maximin solution approach and formulate the problem as a special kind of three-level programming problem. Based on the property that the optimal solution is located at an extreme point, we adopt the k-th best method to search for the optimal solution, equipped with tests for possible optimality, local optimality, and global optimality of a solution. In this study, we propose an effective method to verify the rational reaction of the follower, which is essential to all steps of the optimality test. Our approach uses relatively little memory to avoid repetition of possible optimality tests. Numerical experiments demonstrate that our proposed method significantly accelerates the optimality verification process and eventually computes an optimal solution more efficiently.

Puchit Sariddichainunta, Masahiro Inuiguchi

Proposal of Grid Area Search with UCB for Discrete Optimization Problem

In this paper, a novel method for discrete optimization problems is proposed based on the UCB algorithm. The definition of the neighborhood in the search space easily affects the performance of existing algorithms because they do not take the dilemma of exploitation and exploration well into account. To optimize the balance of exploitation and exploration, we divide the search space into several grids to recast the discrete optimization problem as a multi-armed bandit problem, so that the UCB algorithm can be directly introduced for the balancing. We propose a UCB-grid area search and conduct numerical experiments on the 0-1 Knapsack Problem. Our method showed stable results in different environments.
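A minimal sketch of the idea, treating each grid area as a bandit arm scored by the standard UCB1 index (the grid construction and reward handling here are illustrative assumptions, not the authors' implementation):

```python
import math, random

def ucb_grid_search(f, domain, n_grids=10, budget=1000, seed=0):
    """UCB-grid area search (sketch): split a discrete domain into grid
    areas, treat each area as a bandit arm, sample uniformly inside the
    chosen area, and let UCB1 balance exploitation and exploration."""
    rng = random.Random(seed)
    size = math.ceil(len(domain) / n_grids)
    grids = [domain[i * size:(i + 1) * size] for i in range(n_grids)]
    grids = [g for g in grids if g]              # drop empty tail grids
    counts = [0] * len(grids)
    sums = [0.0] * len(grids)
    best_x, best_val = None, float("-inf")
    for t in range(1, budget + 1):
        if t <= len(grids):                      # play every arm once first
            g = t - 1
        else:                                    # then pick by the UCB1 index
            g = max(range(len(grids)),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        x = rng.choice(grids[g])
        val = f(x)                               # reward; scale to [0,1] in practice
        counts[g] += 1
        sums[g] += val
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy objective on {0, ..., 999} with a single peak at 700
print(ucb_grid_search(lambda x: -abs(x - 700), list(range(1000))))
```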

Akira Notsu, Koki Saito, Yuhumi Nohara, Seiki Ubukata, Katsuhiro Honda

Why Copulas Have Been Successful in Many Practical Applications: A Theoretical Explanation Based on Computational Efficiency

A natural way to represent a 1-D probability distribution is to store its cumulative distribution function (cdf) $F(x) = \mathrm{Prob}(X \le x)$. When several random variables $X_1, \ldots, X_n$ are independent, the corresponding cdfs $F_1(x_1), \ldots, F_n(x_n)$ provide a complete description of their joint distribution. In practice, there is usually some dependence between the variables, so, in addition to the marginals $F_i(x_i)$, we also need to provide additional information about the joint distribution of the given variables. It is possible to represent this joint distribution by a multi-D cdf $F(x_1, \ldots, x_n) = \mathrm{Prob}(X_1 \le x_1 \,\&\, \ldots \,\&\, X_n \le x_n)$, but this will lead to duplication – since the marginals can be reconstructed from the joint cdf – and duplication is a waste of computer space. It is therefore desirable to come up with a duplication-free representation which would still allow us to easily reconstruct $F(x_1, \ldots, x_n)$. In this paper, we prove that among all duplication-free representations, the most computationally efficient one is a representation in which the marginals are supplemented by a copula.

This result explains why copulas have been successfully used in many applications of statistics: the copula representation is, in some reasonable sense, the most computationally efficient way of representing multi-D probability distributions.
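By Sklar's theorem, the copula representation recovers the joint cdf as $F(x_1, \ldots, x_n) = C(F_1(x_1), \ldots, F_n(x_n))$. A minimal sketch of that reconstruction, assuming a Gaussian copula and illustrative marginals (the paper proves an efficiency result; it does not prescribe these particular choices):

```python
import numpy as np
from scipy.stats import norm, expon, multivariate_normal

def joint_cdf_gaussian_copula(x1, x2, rho=0.5):
    """Sklar-style reconstruction: F(x1, x2) = C(F1(x1), F2(x2)),
    with C a Gaussian copula and illustrative marginals
    (standard normal and exponential)."""
    u1 = norm.cdf(x1)        # marginal F1
    u2 = expon.cdf(x2)       # marginal F2
    # Gaussian copula: C(u1, u2) = Phi_rho(Phi^-1(u1), Phi^-1(u2))
    z = [norm.ppf(u1), norm.ppf(u2)]
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(z)

print(joint_cdf_gaussian_copula(0.3, 1.2))
```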

Vladik Kreinovich, Hung T. Nguyen, Songsak Sriboonchitta, Olga Kosheleva

A New Measure of Monotone Dependence by Using Sobolev Norms for Copula

Dependence structure, e.g. measures of dependence, is one of the main topics in correlation analysis. In [10], B. Schweizer and E. F. Wolff used the $L^p$-metric $d_{L^p}(C, P)$ to obtain a measure of monotone dependence, where $P$ is the product copula or independence copula, and in [11] P. A. Stoimenov defined the Sobolev metric $d_S(C, P)$ to construct the measure $\omega(C)$ for the class of Mutual Complete Dependences (MCDs). Since the class of monotone dependences is contained in the class of MCDs, we construct a new measure of monotone dependence, $\lambda(C)$, based on the Sobolev metric, which can be used to characterize comonotonicity, countermonotonicity, and independence.
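For reference, the Sobolev metric referred to above has the following shape (reconstructed from the standard definitions, not quoted from [11]):

```latex
% Sobolev metric between a copula C and the independence copula P(u,v) = uv:
d_S(C, P) = \left( \int_0^1 \!\! \int_0^1
  \left| \nabla C(u,v) - \nabla P(u,v) \right|^2 \, du \, dv \right)^{1/2}
```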

Hien D. Tran, Uyen H. Pham, Sel Ly, T. Vo-Duy

Why ARMAX-GARCH Linear Models Successfully Describe Complex Nonlinear Phenomena: A Possible Explanation

Economic and financial processes are complex and highly nonlinear. However, somewhat surprisingly, linear models like ARMAX-GARCH often describe these processes reasonably well. In this paper, we provide a possible explanation for the empirical success of these models.

Hung T. Nguyen, Vladik Kreinovich, Olga Kosheleva, Songsak Sriboonchitta

A Copula-Based Stochastic Frontier Model for Financial Pricing

We use the concept of a stochastic frontier in production to analyze the problem of pricing in stock markets. By modifying the classical stochastic frontier model to accommodate error dependency, using copulas, we show that our extended stochastic frontier model is more suitable for financial analysis. The validation is achieved by using AIC in our model selection problem.

Phachongchit Tibprasorn, Kittawit Autchariyapanitkul, Somsak Chaniam, Songsak Sriboonchitta

Capital Asset Pricing Model with Interval Data

We use interval-valued data to predict stock returns rather than just point-valued data. Specifically, we use these interval values in the classical capital asset pricing model to estimate the beta coefficient that represents the risk in portfolio management analysis. We also use the method to obtain a point value of asset returns from the interval-valued data to measure the sensitivity of the asset return to the market return. Finally, the AIC criterion indicates that this approach provides better results than using the closing price for prediction.

Sutthiporn Piamsuwannakit, Kittawit Autchariyapanitkul, Songsak Sriboonchitta, Rujira Ouncharoen

Confidence Intervals for the Difference Between Normal Means with Known Coefficients of Variation

Statistical estimation of the difference between normal means with known coefficients of variation is investigated here for the first time. This situation occurs in environmental and agricultural experiments where the scientist knows the coefficients of variation of the experiments. In this paper, we construct new confidence intervals for the difference between normal means with known coefficients of variation. We also derive analytic expressions for the coverage probability and the expected length of each confidence interval. To confirm our theoretical results, Monte Carlo simulation is used to assess the performance of these intervals based on their coverage probabilities and expected lengths.

Suparat Niwitpong, Sa-Aat Niwitpong

Approximate Confidence Interval for the Ratio of Normal Means with a Known Coefficient of Variation

An approximate confidence interval for the ratio of normal population means with a known coefficient of variation is proposed. This has applications in the area of bioassay and bioequivalence when the scientist knows the coefficient of variation of the control group. The proposed confidence interval is based on the approximate expectation and variance of the estimator by Taylor series expansion. A Monte Carlo simulation study was conducted to compare the performance of the proposed confidence interval with the existing confidence interval. Simulation results show that the proposed confidence interval performs as well as the existing one in terms of coverage probability and expected length. However, the approximate confidence interval is very easy to calculate compared with the exact confidence interval.
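Where coverage probabilities and expected lengths are assessed by Monte Carlo simulation, the loop typically looks like the following sketch (the interval constructor `naive_ci` is a hypothetical delta-method placeholder, not the paper's proposed interval):

```python
import numpy as np

def coverage_probability(ci_fn, mu_x, mu_y, cv, n=30, n_sims=5000, seed=1):
    """Monte Carlo check of a confidence interval for the ratio of normal
    means with a known coefficient of variation (generic sketch)."""
    rng = np.random.default_rng(seed)
    true_val = mu_x / mu_y
    hits, lengths = 0, 0.0
    for _ in range(n_sims):
        x = rng.normal(mu_x, cv * mu_x, n)   # known CV ties sd to mean
        y = rng.normal(mu_y, cv * mu_y, n)
        lo, hi = ci_fn(x, y, cv)
        hits += (lo <= true_val <= hi)
        lengths += hi - lo
    return hits / n_sims, lengths / n_sims   # coverage, expected length

def naive_ci(x, y, cv, z=1.96):
    """Hypothetical delta-method interval that ignores the known CV."""
    r = x.mean() / y.mean()
    se = r * np.sqrt(x.var(ddof=1) / (len(x) * x.mean() ** 2)
                     + y.var(ddof=1) / (len(y) * y.mean() ** 2))
    return r - z * se, r + z * se

print(coverage_probability(naive_ci, mu_x=10.0, mu_y=5.0, cv=0.1))
```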

Wararit Panichkitkosolkul

Confidence Intervals for the Ratio of Coefficients of Variation of the Gamma Distributions

The coefficient of variation is one of the most useful statistical measures and is widely used in many fields of application. It is applied not only to a single population but also to comparisons of two populations. In this paper, we propose two new confidence intervals for the ratio of the coefficients of variation of gamma distributions, based on the method of variance of estimates recovery with the Score and Wald intervals. Moreover, the coverage probability and expected length of the proposed confidence intervals are evaluated via a Monte Carlo simulation.

Patarawan Sangnawakij, Sa-Aat Niwitpong, Suparat Niwitpong

A Deterministic Clustering Framework in MMMs-Induced Fuzzy Co-clustering

Although various FCM-type clustering models are utilized in many unsupervised classification tasks, they often suffer from bad initialization. The deterministic clustering approach is a practical procedure that exploits the robustness of very fuzzy partitions and tries to converge the iterative FCM process to a plausible solution by gradually decreasing the fuzziness degree. In this paper, a novel framework for implementing the deterministic annealing mechanism in fuzzy co-clustering is proposed. The advantages of the proposed framework over the conventional statistical co-clustering model are demonstrated through numerical experiments.

Shunnya Oshio, Katsuhiro Honda, Seiki Ubukata, Akira Notsu

FCM-Type Co-clustering Transfer Reinforcement Learning for Non-Markov Processes

In applying reinforcement learning to continuous-space problems, discretization or redefinition of the learning space can be a promising approach. Several methods and algorithms have been introduced for learning agents to deal with this problem. In our previous study, we introduced an FCCM clustering technique into Q-learning (called QL-FCCM) and its transfer learning in Markov processes. Since that approach could not handle complicated environments like non-Markov processes, in this study we propose a method in which an agent updates its Q-table by changing the trade-off ratio between Q-learning and QL-FCCM based on the damping ratio. We conducted numerical experiments on the single pendulum standing problem, and our model resulted in a smooth learning process.

Akira Notsu, Takanori Ueno, Yuichi Hattori, Seiki Ubukata, Katsuhiro Honda

MMMs-Induced Fuzzy Co-clustering with Exclusive Partition Penalty on Selected Items

Fuzzy co-clustering is a powerful tool for summarizing co-occurrence information, while some intrinsic knowledge on meaningful items may be concealed by the dominant items shared by multiple clusters. In this paper, the conventional fully exclusive item partition model is modified such that exclusive penalties are imposed only on some selected items. Its advantages are demonstrated through two numerical experiments. In a document clustering task, the proposed model is utilized for emphasizing cluster-wise meaningful keywords, which are useful for effectively summarizing document clusters. In an unsupervised classification task, the classification quality is improved by efficiently selecting promising items based on the item-wise single penalization test.

Takaya Nakano, Katsuhiro Honda, Seiki Ubukata, Akira Notsu

Clustering Data and Vague Concepts Using Prototype Theory Interpreted Label Semantics

Clustering analysis is widely used in data mining to group a set of observations into clusters according to their similarity; thus, the (dis)similarity measure between observations becomes a key feature of clustering analysis. However, classical clustering algorithms cannot deal with observations that contain both data and vague concepts when using traditional distance measures. In this paper, we propose a novel (dis)similarity measure based on a prototype-theory-interpreted knowledge representation framework named label semantics. The proposed measure is used to extend the classical K-means algorithm for clustering data instances and vague concepts represented by logical expressions of linguistic labels. The effectiveness of the proposed measure is verified by experimental results on an image clustering problem; the measure can also be extended to cluster data and vague concepts represented by other granularities.

Hanqing Zhao, Zengchang Qin

An Ensemble Learning Approach Based on Rough Set Preserving the Qualities of Approximations

In this paper, we confirm the effects of ensemble learning approaches in classification problems based on rough sets. Furthermore, we propose an ensemble learning approach based on rough sets that preserves the qualities of approximations. The proposed method follows a policy that subsets of attributes whose quality of lower approximation is less than a threshold value are not tolerated. We carried out numerical experiments to evaluate the classification performance of the proposed method and confirmed its effectiveness.
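For concreteness, the quality of approximation that the thresholding policy refers to is the standard rough-set quantity; a textbook sketch (not the authors' implementation):

```python
from collections import defaultdict

def quality_of_approximation(objects, attrs, decision):
    """Quality of (lower) approximation gamma_B(d): the fraction of
    objects whose B-indiscernibility class lies entirely inside one
    decision class.

    objects:  list of dicts mapping attribute name -> value
    attrs:    the attribute subset B
    decision: name of the decision attribute"""
    blocks = defaultdict(list)  # B-indiscernibility classes
    for obj in objects:
        blocks[tuple(obj[a] for a in attrs)].append(obj)
    consistent = sum(len(block) for block in blocks.values()
                     if len({o[decision] for o in block}) == 1)
    return consistent / len(objects)

data = [
    {"a": 1, "b": 0, "d": "yes"},
    {"a": 1, "b": 0, "d": "no"},   # clashes with the first object
    {"a": 0, "b": 1, "d": "no"},
]
print(quality_of_approximation(data, ["a", "b"], "d"))  # 1/3
```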

Seiki Ubukata, Taro Miyazaki, Akira Notsu, Katsuhiro Honda, Masahiro Inuiguchi

Minimum Description Length Principle for Compositional Model Learning

The information-theoretic viewpoint on data-based model construction is anchored on the assumption that both the source data and a constructed model comprise certain information. Having no source of information other than the source data, the process of model construction can be viewed as a transformation of information representation. The combination of this basic idea with the Minimum Description Length principle brings a new restriction on the process of model learning: avoid models containing more information than the source data, because such models must comprise additional, undesirable information. In this paper, the idea is explained and illustrated on the data-based construction of multidimensional probabilistic compositional models.

Radim Jiroušek, Iva Krejčová

On the Property of SIC Fuzzy Inference Model with Compatibility Functions

The single input connected fuzzy inference model (SIC model) can decrease the number of fuzzy rules drastically in comparison with conventional fuzzy inference models. However, the inference results obtained by the SIC model are generally simpler than those of conventional fuzzy inference models. In this paper, we propose a SIC model with compatibility functions, which weights the rules of the SIC model. Moreover, this paper shows that the inference results of the proposed model can be easily obtained even when the proposed model uses involved compatibility functions.

Hirosato Seki

Applying Covering-Based Rough Set Theory to User-Based Collaborative Filtering to Enhance the Quality of Recommendations

Recommender systems provide personalized information by learning user preferences. Collaborative filtering (CF) is a common technique widely used in recommender systems. User-based CF utilizes neighbors of an active user to make recommendations; however, such techniques cannot simultaneously achieve good values for accuracy and coverage. In this study, we present a new model using covering-based rough set theory to improve CF. In this model, the relevant items of every neighbor are regarded as comprising a common covering. All common coverings comprise a covering for an active user in a domain, and covering reduction is used to remove redundant common coverings. Our experimental results suggest that this new model can simultaneously improve accuracy and coverage. Furthermore, compared with the unreduced model using all neighbors, our model utilizes fewer neighbors to generate almost the same results.

Zhipeng Zhang, Yasuo Kudo, Tetsuya Murai

Evidence Combination Focusing on Significant Focal Elements for Recommender Systems

In this paper, we develop a solution for evidence combination, called 2-probabilities focused combination, that concentrates on significant focal elements only. First, in the focal set of each mass function, the elements carrying the two highest probabilities are retained; the others are considered noise, generated when assigning probabilities to the mass function and/or by evidence combination tasks performed earlier, and are eliminated. The probabilities of the eliminated elements are added to the probability of the whole-set element. The resulting mass functions are called 2-probabilities focused mass functions. Second, Dempster’s rule of combination is used to combine pieces of evidence represented as 2-probabilities focused mass functions. Finally, the combination result is transformed into the corresponding 2-probabilities focused mass function. The proposed solution can be employed as a useful tool for fusing pieces of evidence in recommender systems using soft ratings based on Dempster-Shafer theory; thus, we also present a way to integrate the proposed solution into these systems. Furthermore, the experimental results show that the proposed solution is more effective than a typical alternative called the 2-points focused combination solution.
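A minimal sketch of the described procedure, truncating each mass function to its two highest-probability focal elements and then applying Dempster's rule (the set representation and tie handling below are illustrative assumptions):

```python
def two_prob_focus(m, frame):
    """Keep only focal elements carrying the two highest masses; the
    discarded mass is moved to the whole-frame element (a sketch of the
    2-probabilities focused idea described above)."""
    top2 = sorted(set(m.values()), reverse=True)[:2]
    kept = {A: v for A, v in m.items() if v in top2}
    dropped = sum(v for A, v in m.items() if v not in top2)
    if dropped:
        kept[frame] = kept.get(frame, 0.0) + dropped
    return kept

def dempster(m1, m2):
    """Dempster's rule of combination with conflict normalization."""
    combined, conflict = {}, 0.0
    for A, v1 in m1.items():
        for B, v2 in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

frame = frozenset({"like", "neutral", "dislike"})
m1 = {frozenset({"like"}): 0.6, frozenset({"neutral"}): 0.25,
      frozenset({"dislike"}): 0.1, frame: 0.05}
m2 = {frozenset({"like"}): 0.5, frozenset({"dislike"}): 0.4, frame: 0.1}
print(dempster(two_prob_focus(m1, frame), two_prob_focus(m2, frame)))
```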

Van-Doan Nguyen, Van-Nam Huynh

A Multifaceted Approach to Sentence Similarity

We propose a novel method for measuring semantic similarity between two sentences. The method exploits both syntactic and semantic features to assess the similarity. In our method, words in a sentence are weighted using their information content. The weights of words help differentiate their contribution towards the meaning of the sentence. The originality of this research is that we explore named entities and their coreference relations as important indicators for measuring the similarity. We conduct experiments and evaluate our proposed method on the Microsoft Research Paraphrase Corpus. The experimental results show that named entities and their coreference relations significantly improve the performance of paraphrase identification and that the proposed method is comparable with state-of-the-art methods for paraphrase identification.

Hien T. Nguyen, Phuc H. Duong, Tuan Q. Le

Improving Word Alignment Through Morphological Analysis

Word alignment plays a critical role in statistical machine translation systems. The well-known IBM model series currently operates only on surface forms of words, regardless of their linguistic features. This deficiency usually leads to data sparseness problems. Therefore, we present an extension that enables the integration of morphological analysis into the traditional IBM models. Experiments on English-Vietnamese tasks show that the new model produces better results not only in word alignment but also in final translation performance.

Vuong Van Bui, Thanh Trung Tran, Nhat Bich Thi Nguyen, Tai Dinh Pham, Anh Ngoc Le, Cuong Anh Le

Learning Word Alignment Models for Kazakh-English Machine Translation

In this paper, we address the most essential challenges in word alignment quality. Word alignment is widely used in the field of machine translation; however, little research has been dedicated to revealing its discrete properties. This paper presents word segmentation, the probability distributions, and the statistical properties of word alignment on transparent and real-life datasets. The results suggest that there is no single best method for alignment evaluation. For the Kazakh-English pair, we attempted to improve the phrase tables through the choice of alignment method, which needs to be adapted to the requirements of the specific project. Experimental results show that the processed parallel data reduced the word alignment error rate and achieved the highest BLEU improvement on the random parallel corpora.

Amandyk Kartbayev

Application of Uncertainty Modeling Frameworks to Uncertain Isosurface Extraction

Proper characterization of uncertainty is a challenging task. Depending on the sources of uncertainty, various uncertainty modeling frameworks have been proposed and studied in the uncertainty quantification literature. This paper applies various uncertainty modeling frameworks, namely possibility theory, Dempster-Shafer theory and probability theory to isosurface extraction from uncertain scalar fields. It proposes an uncertainty-based marching cubes template as an abstraction of the conventional marching cubes algorithm with a flexible uncertainty measure. The applicability of the template is demonstrated using 2D simulation data in weather forecasting and computational fluid dynamics and a synthetic 3D dataset.

Mahsa Mirzargar, Yanyan He, Robert M. Kirby

On Customer Satisfaction of Battery Electric Vehicles Based on Kano Model: A Case Study in Shanghai

Due to the greenhouse effect and limited energy resources, more and more countries and firms are paying attention to clean energy so as to reduce pollution emissions. The development of the battery electric vehicle (BEV) has become crucial to meet government and societal demands. As a new product with immature technology, many factors affect the wide utilization of BEV. It is necessary to study customer satisfaction with BEV so as to distinguish customer needs, help find ways to improve customer satisfaction, and identify critical factors. Considering the non-linear relationship between product performance and customer satisfaction, the Kano model is used to analyze customer needs for the BEV so as to promote its adoption in Shanghai. Four approaches to the Kano model are used to categorize the BEV attributes as must-be quality, one-dimensional quality, attractive quality, and indifferent quality. According to the strategic rule $M > O > A > I$, the priorities of efforts towards promoting the adoption of BEV are identified; i.e., the government and vehicle firms have to fulfill all the must-be requirements first. They should then greatly improve the one-dimensional qualities to make the BEV competitive with traditional motor vehicles. Finally, customers will be very satisfied if the attractive requirements are fulfilled.

Yanping Yang, Hong-Bin Yan, Tieju Ma

Co-Movement and Dependency Between New York Stock Exchange, London Stock Exchange, Tokyo Stock Exchange, Oil Price, and Gold Price

This paper aims to analyze the co-movement and dependence of three stock markets, the oil market, and the gold market. These are gold prices as measured by gold futures, crude oil prices as measured by Brent, and stock prices as measured by three developed stock markets comprising the U.S. Dow Jones Industrial Average, the London Stock Exchange, and the Japanese Nikkei 225 index. To capture the correlation and dependence, we employed C-vine and D-vine copulas. The results demonstrate that the C-vine copula is a more appropriate structure than the D-vine copula. In addition, we found positive dependence between the London Stock Exchange and the other markets; however, we also obtained complicated results when the London Stock Exchange, the Dow Jones Industrial Average, and Brent were given as the conditions. Finally, we found that gold might be a safe haven in these portfolios.

Pathairat Pastpipatkul, Woraphon Yamaka, Songsak Sriboonchitta

Spillovers of Quantitative Easing on Financial Markets of Thailand, Indonesia, and the Philippines

This paper presents results on the effectiveness of the quantitative easing (QE) policy, including purchases of mortgage-backed securities, treasury securities, and other assets in the United States, on the financial markets of Thailand, Indonesia, and the Philippines (TIP) in the post-QE introduction period. In this study, we focused on three different financial markets: the exchange rate market, the stock market, and the bond market. We employed a Bayesian Markov-switching VAR model to study the transmission mechanisms of QE shocks between periods of expansion in the QE policy and periods of turmoil with extraordinarily negative events in the financial markets and the global economy. We found that QE may have a direct, substantial effect on the TIP financial markets. Therefore, if the Federal Reserve withdraws the QE policy, the move might also affect the TIP financial markets. In particular, the mortgage-backed securities (MBS) purchase program is more likely to affect the TIP financial markets than the other programs.

Pathairat Pastpipatkul, Woraphon Yamaka, Aree Wiboonpongse, Songsak Sriboonchitta

Impacts of Quantitative Easing Policy of United States of America on Thai Economy by MS-SFABVAR

This paper provides new empirical evidence by combining the advantages of principal component analysis (PCA) with a Markov-switching Bayesian VAR (MS-BVAR) to examine the duration and impact of the quantitative easing (QE) policy on the Thai economy. The results indicate that the QE policy created a monetary shock to the Thai economy of around 4–5 months in each cycle before returning to equilibrium. The result for the foreign direct investment (FDI) channel was similar to that for the foreign portfolio investment (FPI) channel: when QE was announced, excess capital was injected into the emerging economies, including the Thai stock market. Excess liquidity resulting from the QE policy pushed the SET index of Thailand up to its highest point in 2012. The booming stock market generated more real output (GDP) and greater levels of employment, private consumption, and policy interest rates, and it likewise produced shocks of just 4–5 months for one cycle of QE. On the other hand, excess liquidity from QE caused the Thai Baht to appreciate significantly, which affected Thailand’s trade balance negatively. The impulse response and filtered probability yielded similar results: QE affected the Thai economy seasonally, with an impact of around 4–5 months in each cycle.

Pathairat Pastpipatkul, Warawut Ruankham, Aree Wiboonpongse, Songsak Sriboonchitta

Volatility and Dependence for Systemic Risk Measurement of the International Financial System

In the context of existing downside correlations, we propose multi-dimensional elliptical and asymmetric copulas with CES models to measure the dependence of G7 stock market returns and forecast their systemic risk. Our analysis first uses several GARCH families with asymmetric distributions to fit G7 stock returns and selects the best fit for our marginal distributions in terms of AIC and BIC. Second, multivariate copulas are used to measure the dependence structures of G7 stock returns. Last, the best-fitting copula with CES is used to examine the systemic risk of G7 stock markets. By comparison, we find that the mixed C-vine copula has the best performance among all multivariate copulas. Moreover, the pre-crisis period features lower levels of risk contribution, while the risk contribution increases gradually as the crisis unfolds, and the contribution of each stock market to the aggregate financial risk is not invariant.

Jianxu Liu, Songsak Sriboonchitta, Panisara Phochanachan, Jiechen Tang

Business Cycle of International Tourism Demand in Thailand: A Markov-Switching Bayesian Vector Error Correction Model

This paper uses the Markov-switching Bayesian Vector Error Correction (MS-BVECM) model to estimate the long-run and short-run relations for Thailand’s tourism demand from five major countries, namely Japan, Korea, China, Russia, and Malaysia. The empirical findings indicate that there exist a long-run relationship and some short-run relationships among these five countries. Additionally, we analyze the business cycle in this set of five major tourism sources of Thailand and find two different regimes, namely a high tourist arrival regime and a low tourist arrival regime. Secondary monthly data were used to forecast the tourism demand from the five major countries from November 2014 to August 2015, based on the period from January 1997 to October 2014. The results show that Chinese, Russian, and Malaysian tourists played an important role in Thailand’s tourism industry during the period from May 2010 to October 2014.

Woraphon Yamaka, Pathairat Pastpipatkul, Songsak Sriboonchitta

Volatility Linkages Between Price Returns of Crude Oil and Crude Palm Oil in the ASEAN Region: A Copula Based GARCH Approach

This paper uses a copula-based ARMA-GARCH model to examine the dependence structure between the weekly prices of two commodities, namely crude oil and crude palm oil. We found evidence of a weak positive dependence between the two commodity prices. These findings suggest that the crude oil market of the Middle East and the crude palm oil market of Malaysia are linked. This information is useful for decision making in various areas, such as risk management in finance and international trade in agricultural commodities.

Teera Kiatmanaroch, Ornanong Puarattanaarunkorn, Kittawit Autchariyapanitkul, Songsak Sriboonchitta

The Economic Evaluation of Volatility Timing on Commodity Futures Using Periodic GARCH-Copula Model

Corn is rapidly emerging as an energy crop, which strengthens the corn-ethanol-crude oil price relationship. In addition, both corn and crude oil prices have been shown to exhibit seasonal changes as well as an asymmetric or tail dependence structure. Hence, this paper uses a periodic GARCH-copula model to explore the volatility and dependence structure between corn and oil prices. More importantly, an asset-allocation strategy is adopted to measure the economic value of the periodic GARCH-copula models. The out-of-sample forecasts show that the periodic GARCH-copula model performs better than other parametric models as well as a non-parametric model. This result is important since the copula-based GARCH not only statistically improves on the traditional method but also has economic benefits in application. The in-sample and out-of-sample results both show that a risk-averse investor should be willing to switch from the non-parametric method and the DCC model to the copula-based model.

Xue Gong, Songsak Sriboonchitta, Jianxu Liu

On the Estimation of Western Countries’ Tourism Demand for Thailand Taking into Account Possible Structural Changes Leading to a Better Prediction

Forecasting tourist arrivals is an essential feature of tourism demand prediction. This paper applies Self-Exciting Threshold Autoregressive (SETAR) models, which take into account possible structural changes, leading to a better prediction of western tourist arrivals to Thailand. The findings reveal that although a forecasting method such as SARIMA-GARCH is a state-of-the-art model in econometrics, forecasting tourism demand for some specific destinations without considering potential structural changes ignores the long persistence of some shocks to volatility and the conditional mean, leading to less efficient forecasts than the SETAR model. The findings show that the SETAR model outperforms the SARIMA-GARCH model. Based on the SETAR model, this study then uses the Bayesian analysis of Threshold Autoregressive (BAYSTAR) method to make one-step-ahead forecasts. This study contributes by showing that SETAR overtakes SARIMA-GARCH because it takes into account the nonlinear features of the data via structural changes, resulting in better forecasting of Western countries’ tourism demand for Thailand.

Nyo Min, Songsak Sriboonchitta

Welfare Measurement on Thai Rice Market: A Markov Switching Bayesian Seemingly Unrelated Regression

This paper aims to measure the welfare of the Thai rice market and provides a new estimation of welfare measurement. We applied the Markov-switching approach to the Seemingly Unrelated Regression model and adopted the Bayesian approach as the estimator, yielding the MS-BSUR model as an innovative tool to measure welfare. The results showed that the model performed very well in estimating the demand and supply equations of two different regimes, namely high growth and low growth. The equations were extended to compute the total welfare, and the expected welfare during the study period was determined. We found that a mortgage scheme may lead the market to a high level of welfare. Finally, demand and supply were forecast for 10 months; we found that demand and supply would tend to increase in the next few months before dropping around March 2015.

Pathairat Pastpipatkul, Paravee Maneejuk, Songsak Sriboonchitta

Modeling Daily Peak Electricity Demand in Thailand

Modeling daily peak electricity demand is crucial for reliability and security assessments of electricity suppliers as well as electricity regulators. The aim of this paper is to model peak electricity demand using the dynamic Peak-Over-Threshold (POT) approach. This approach uses a vector of covariates, including a time variable, for modeling extremes. The effects of temperature and time dependence on the shape and scale parameters of the Generalized Pareto distribution for peak electricity demand are investigated and discussed. Finally, the conditional return levels are computed for risk management.
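For reference, the core ingredients of the dynamic POT approach, reconstructed from the standard extreme-value definitions (the paper's exact covariate parameterization is not quoted):

```latex
% Exceedances over a threshold u follow a Generalized Pareto distribution,
% with shape and scale allowed to depend on covariates/time:
P(X - u \le y \mid X > u) = 1 - \left( 1 + \frac{\xi(t)\, y}{\sigma(t)} \right)^{-1/\xi(t)},
\qquad y > 0.

% The m-observation return level, with \zeta_u = P(X > u):
x_m = u + \frac{\sigma}{\xi} \left[ (m\, \zeta_u)^{\xi} - 1 \right].
```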

Jirakom Sirisrisakulchai, Songsak Sriboonchitta

Backmatter
