

Mathematical Statistics for Economics and Business


About this book

Mathematical Statistics for Economics and Business, Second Edition, provides a comprehensive introduction to the principles of mathematical statistics which underpin statistical analyses in the fields of economics, business, and econometrics. The selection of topics in this textbook is designed to provide students with a conceptual foundation that will facilitate a substantial understanding of statistical applications in these subjects. This new edition has been updated throughout and now also includes a downloadable Student Answer Manual containing detailed solutions to half of the over 300 end-of-chapter problems.

After introducing the concepts of probability, random variables, and probability density functions, the author develops the key concepts of mathematical statistics, most notably: expectation, sampling, asymptotics, and the main families of distributions. The latter half of the book is then devoted to the theories of estimation and hypothesis testing with associated examples and problems that indicate their wide applicability in economics and business. Features of the new edition include: a reorganization of topic flow and presentation to facilitate reading and understanding; inclusion of additional topics of relevance to statistics and econometric applications; a more streamlined and simple-to-understand notation for multiple integration and multiple summation over general sets or vector arguments; updated examples; new end-of-chapter problems; a solution manual for students; a comprehensive answer manual for instructors; and a theorem and definition map.

This book has evolved from numerous graduate courses in mathematical statistics and econometrics taught by the author, and will be ideal for students beginning graduate study as well as for advanced undergraduates.

Table of Contents

Frontmatter
1. Elements of Probability Theory
Abstract
The objective of this chapter is to define a quantitative measure of the propensity for an uncertain outcome to occur, or of the degree of belief that a proposition or conjecture is true. This quantitative measure will be called probability and is relevant for quantifying such things as how likely it is that a shipment of smart phones contains less than 5 percent defectives, that a gambler will win a game of craps, that next year’s corn yields will exceed 80 bushels per acre, or that electricity demand in Los Angeles will exceed generating capacity on a given day. This probability concept will also be relevant for quantifying the degree of belief in such propositions as that it will rain in Seattle tomorrow, that Congress will raise taxes next year, and that the United States will suffer another recession in the coming year.
Ron C. Mittelhammer
2. Random Variables, Densities, and Cumulative Distribution Functions
Abstract
It is natural for the outcomes of many experiments in the real world to be measured in terms of real numbers. For example, measuring the height and weight of individuals, observing the market price and quantity demanded of a commodity, measuring the yield of a new variety of wheat, or measuring the miles per gallon achievable by a new hybrid automobile all result in real-valued outcomes. The sample spaces associated with these types of experiments are subsets of the real line or, if multiple values are needed to characterize the outcome of the experiment, subsets of n-dimensional real space, \( {\mathbb{R}^n} \).
Ron C. Mittelhammer
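To make the idea of a real-valued outcome concrete, here is a minimal sketch (the example is illustrative, not taken from the chapter): for the experiment of tossing two coins, the sample space is \( S = \{HH, HT, TH, TT\} \), and the function \( X(\omega) = \) number of heads in the outcome \( \omega \) is a random variable with range \( \{0, 1, 2\} \subset \mathbb{R} \).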
3. Expectations and Moments of Random Variables
Abstract
The definition of the expectation of a random variable can be motivated both by the concept of a weighted average and through the use of the physics concept of center of gravity, or the balancing point of a distribution of weights. We first examine the case of a discrete random variable and look at a problem involving the balancing-point concept.
Ron C. Mittelhammer
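As a minimal numerical sketch of the balancing-point idea (the numbers are illustrative, not from the chapter), let \( X \) be a discrete random variable taking the values 1, 2, and 3 with probabilities 0.2, 0.5, and 0.3. Its expectation is the probability-weighted average
\( E[X] = \sum\nolimits_x x f(x) = (1)(0.2) + (2)(0.5) + (3)(0.3) = 2.1, \)
which is exactly the point at which weights of 0.2, 0.5, and 0.3 placed at positions 1, 2, and 3 on a number line would balance.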
4. Parametric Families of Density Functions
Abstract
A collection of specific probability density functional forms that have found substantial use in statistical applications are examined in this chapter. The selection includes a number of the more commonly used densities, but the collection is by no means an exhaustive account of the vast array of probability densities that are available and that have been applied in the literature.
Ron C. Mittelhammer
5. Basic Asymptotics
Abstract
In this chapter we establish some results relating to the probability characteristics of functions of n-variate random variables \( X^{(n)} = (X_1, \ldots, X_n) \) when n is large. In particular, certain types of functions \( Y_n = g(X_1, \ldots, X_n) \) of an n-variate random variable may converge in various ways to a constant, their probability distributions may be well approximated by a so-called asymptotic distribution as n increases, or the probability distribution of \( g(X^{(n)}) \) may converge to a limiting distribution as \( n \to \infty \).
Ron C. Mittelhammer
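For orientation, the modes of convergence referred to here can be sketched informally (standard definitions, not the chapter's formal statements): \( Y_n \) converges in probability to a constant c if
\( \lim_{n \to \infty} P(|Y_n - c| > \varepsilon) = 0 \) for every \( \varepsilon > 0, \)
and \( Y_n \) converges in distribution to a limiting distribution F if \( F_{Y_n}(y) \to F(y) \) at every continuity point y of F. The sample mean of iid random variables with finite variance illustrates both: it converges in probability to the population mean, while its standardized version converges in distribution to the standard normal.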
6. Sampling, Sample Moments and Sampling Distributions
Abstract
Prior to this point, our study of probability theory and its implications has essentially addressed questions of deduction, being of the type: “Given a probability space, what can we deduce about the characteristics of outcomes of an experiment?” Beginning with this chapter, we turn this question around, and focus our attention on statistical inference and questions of the form: “Given characteristics associated with the outcomes of an experiment, what can we infer about the probability space?”
Ron C. Mittelhammer
7. Point Estimation Theory
Abstract
The problem of point estimation examined in this chapter is concerned with the estimation of the values of unknown parameters, or functions of parameters, that represent characteristics of interest relating to a probability model of some collection of economic, sociological, biological, or physical experiments. The outcomes generated by the collection of experiments are assumed to be outcomes of a random sample with some joint probability density function \( f(x_1, \ldots, x_n; \Theta) \). The random sample need not be from a population distribution, so that it is not necessary that \( X_1, \ldots, X_n \) be iid. The estimation concepts we will examine in this chapter can be applied to the case of general random sampling, as well as simple random sampling and random sampling with replacement, i.e., all of the random sampling types discussed in Chapter 6. The objective of point estimation will be to utilize functions of the random sample outcome to generate good (in some sense) estimates of the unknown characteristics of interest.
Ron C. Mittelhammer
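As a brief sketch of the setup (notation illustrative): a point estimator of q(Θ) is a statistic \( T = t(X_1, \ldots, X_n) \), a function of the random sample, and its realized value \( t(x_1, \ldots, x_n) \) is the point estimate. For example, under simple random sampling the sample mean \( \bar{X}_n = n^{-1} \sum_{i=1}^{n} X_i \) is a natural estimator of a population mean.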
8. Point Estimation Methods
Abstract
In this chapter we examine point estimation methods that lead to specific functional forms for estimators of q(Θ) that can often be relied upon to have good estimator properties. Thus far, the only result in Chapter 7 that could be used directly to define the functional form of an estimator is the theorem on the attainment of the CRLB, which is useful only if the probability model {f(x;Θ), Θ∈Ω} and the estimand q(Θ) are such that the CRLB is actually attainable. We did examine a number of important results that could be used to narrow the search for a good estimator of q(Θ), to potentially improve upon an unbiased estimator that was already available, or to verify when an unbiased estimator was actually the best in the sense of minimizing variance or having the smallest covariance matrix. However, since the functional form of an estimator of q(Θ) having good estimator properties is often not apparent even with the aid of the results assembled in Chapter 7, we now examine procedures that suggest functional forms of estimators.
Ron C. Mittelhammer
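For reference, in the scalar case the CRLB mentioned above can be sketched, under the usual regularity conditions, as
\( \mathrm{var}(T) \geq [dq(\Theta)/d\Theta]^2 / I_n(\Theta) \)
for an unbiased estimator T of q(Θ), where \( I_n(\Theta) = E[(\partial \ln f(X_1, \ldots, X_n; \Theta)/\partial \Theta)^2] \) is the sample Fisher information; the precise statement and regularity conditions are those developed in Chapter 7.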
9. Hypothesis Testing Theory
Abstract
A primary goal of scientific research often concerns the verification or refutation of assertions, conjectures, currently accepted laws, or descriptions relating to a given economic, sociological, psychological, physical, or biological process or population. Statistical hypothesis testing concerns the use of probability samples of observations from processes or populations of interest, together with probability and mathematical statistics principles, to judge the validity of stated assertions, conjectures, laws, or descriptions in such a way that the probability of falsely rejecting a correct hypothesis can be controlled, while the probability of rejecting false hypotheses is made as large as possible. The precise nature of the types of errors that can be made, how the probabilities of such errors can be controlled, and how one designs a test so that the probability of rejecting false hypotheses is as large as possible is the subject of this chapter.
Ron C. Mittelhammer
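To fix ideas on the error structure described here (a standard sketch, not the chapter's formal development): rejecting a true null hypothesis is a Type I error, whose probability is controlled through the size
\( \alpha = \sup_{\Theta \in \Omega_0} P(\mathbf{x} \in C_r; \Theta), \)
where \( C_r \) is the rejection region and \( \Omega_0 \) the set of parameter values satisfying the null hypothesis; failing to reject a false null hypothesis is a Type II error, and designing a test so that false hypotheses are rejected with probability as large as possible amounts to maximizing the power \( \pi(\Theta) = P(\mathbf{x} \in C_r; \Theta) \) for \( \Theta \notin \Omega_0 \).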
10. Hypothesis Testing Methods and Confidence Regions
Abstract
In this chapter we examine general methods for defining tests of statistical hypotheses and associated confidence intervals and regions for parameters or functions of parameters. In particular, the likelihood ratio, Wald, and Lagrange multiplier methods for constructing statistical tests are widely used in empirical work and provide well-defined procedures for defining test statistics, as well as for generating rejection regions via a duality principle that we will examine.
Ron C. Mittelhammer
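As a rough reference point (notation illustrative, regularity conditions omitted): the likelihood ratio approach compares restricted and unrestricted maximized likelihoods,
\( \lambda = L(\hat{\Theta}_r; \mathbf{x}) / L(\hat{\Theta}; \mathbf{x}), \)
and rejects the null hypothesis when \( \lambda \) is small, with \( -2 \ln \lambda \) asymptotically chi-square distributed under the null with degrees of freedom equal to the number of restrictions; the Wald approach tests whether the unrestricted estimate approximately satisfies the restrictions, and the Lagrange multiplier approach tests whether the restricted estimate's score is approximately zero.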
Backmatter
Metadata
Title: Mathematical Statistics for Economics and Business
Author: Ron C. Mittelhammer
Copyright Year: 2013
Publisher: Springer New York
Electronic ISBN: 978-1-4614-5022-1
Print ISBN: 978-1-4614-5021-4
DOI: https://doi.org/10.1007/978-1-4614-5022-1