
2020 | Book

Probability and Simulation


About this book

This undergraduate textbook presents an inquiry-based learning course in stochastic models and computing designed to serve as a first course in probability. Its modular structure complements a traditional lecture format, introducing new topics chapter by chapter with accompanying projects for group collaboration. The text addresses probability axioms leading to Bayes’ theorem, discrete and continuous random variables, Markov chains, and Brownian motion, as well as applications including randomized algorithms, randomized surveys, Benford’s law, and Monte Carlo methods.

Adopting a unique application-driven approach to study probability in action, the book emphasizes data, simulation, and games to strengthen the reader's insight and intuition alongside the proofs of theorems. Additionally, the text incorporates code and exercises in the Julia programming language to further promote a hands-on focus in modeling. Students should have prior knowledge of single-variable calculus.

Giray Ökten received his PhD from Claremont Graduate University. He has held academic positions at University of Alaska Fairbanks, Ball State University, and Florida State University. He received a Fulbright U.S. Scholar award in 2015. He is the author of an open access textbook in numerical analysis, First Semester in Numerical Analysis with Julia, published by Florida State University Libraries, and a co-author of a children’s math book, The Mathematical Investigations of Dr. O and Arya, published by Tumblehome. His research interests include Monte Carlo methods and computational finance.

Table of Contents

Frontmatter
Chapter 1. Probability
Abstract
If shapes such as lines and triangles are emblematic of geometry, and symbols such as x and y are emblematic of elementary algebra, then the fair die or coin is the symbol of probability. When we flip an unbiased coin, we know the outcome will be either heads or tails, with equal likelihood. The concept of a random experiment is an abstraction of this phenomenon: it is an experiment which has several possible outcomes, and we cannot tell what the outcome will be a priori.
Giray Ökten
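
As an illustration of the simulation-driven approach the book advertises, here is a minimal Julia sketch (not taken from the text) that estimates the probability of heads for a fair coin by its relative frequency over many simulated flips; the seed and sample size are arbitrary choices.

using Random
Random.seed!(2020)                 # arbitrary seed, only for reproducibility
n = 10^6                           # number of simulated flips
flips = rand(Bool, n)              # true = heads, false = tails, each with probability 1/2
println("estimated P(heads) ≈ ", count(flips) / n)   # should be close to 0.5
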
Chapter 2. Discrete Random Variables
Abstract
In many problems involving probability, we are not so much interested in the actual random outcome as in a function of this outcome. For example, in Section 1.4, where we discussed Freivalds’ algorithm, we considered a sample space of vectors \(r=[r_1,\dotsc ,r_n]^T\) where each \(r_i\) was either 0 or 1.
Giray Ökten
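
Since the abstract refers to Freivalds’ algorithm, the following is a minimal Julia sketch of the standard form of that algorithm, not the book’s own code from Section 1.4: to test whether A*B equals C, it checks A*(B*r) == C*r for random 0/1 vectors r, so each trial costs only two matrix-vector products rather than a full matrix product.

function freivalds(A, B, C; trials = 10)
    n = size(C, 2)
    for _ in 1:trials
        r = rand(0:1, n)            # random vector with entries 0 or 1
        if A * (B * r) != C * r     # two matrix-vector products instead of forming A*B
            return false            # a witness was found: A*B is certainly not C
        end
    end
    return true                     # probably A*B == C; failure probability at most 2^(-trials)
end

A = [2 3; 3 4]; B = [1 0; 1 2]; C = A * B
println(freivalds(A, B, C))         # true
println(freivalds(A, B, C .+ 1))    # false with high probability
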
Chapter 3. Continuous Random Variables
Abstract
In Chapter 2, we discussed discrete random variables. These random variables took on finitely or countably many values. Here we will study random variables whose set of possible values is uncountable. For example, the price of a stock with its erratic ups and downs can be modeled as a random variable, and since in principle the price can be any positive number, modeling it using a continuous random variable is a sensible strategy. Another example is the waiting time for the first customer to enter a store, say, your neighborhood Starbucks. The waiting time can be any positive number (if we could measure it with sufficient precision), so a continuous random variable could be a good model.
Giray Ökten
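
A hedged Julia sketch of the kind of waiting-time simulation the abstract alludes to; the exponential model with rate λ and the inverse-transform sampler are assumptions for illustration, not details taken from the chapter.

λ = 2.0                                   # hypothetical rate: on average 2 arrivals per hour
n = 10^5
waits = [-log(rand()) / λ for _ in 1:n]   # inverse transform: -log(U)/λ is Exponential(λ) when U ~ Uniform(0,1)
println("average simulated wait ≈ ", sum(waits) / n, " hours (theoretical mean: ", 1 / λ, ")")
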
Chapter 4. Markov Chains
Abstract
Until now, we mostly dealt with a single random variable at a time and used it to model a random quantity, such as the waiting time for a customer to enter a store or the number of deaths due to horse kicks. Occasionally the problem gave rise to a sequence of independent random variables, as in the coupon collector’s problem. We studied tools that help us with the analysis of single random variables and sequences of independent random variables. However, in applications we frequently encounter sequences of random variables that are not independent. Andrey Markov was one of the first to give a systematic analysis of such sequences, in the early 1900s. Today we call these sequences Markov chains.
Giray Ökten
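
A minimal Julia sketch of simulating a Markov chain; the two-state chain and its transition matrix P are hypothetical, chosen only to illustrate stepping a chain forward according to its transition probabilities.

# Hypothetical transition matrix: P[i, j] = probability of moving from state i to state j.
P = [0.9 0.1;
     0.5 0.5]

function simulate_chain(P, x0, nsteps)
    x, path = x0, [x0]
    for _ in 1:nsteps
        x = rand() < P[x, 1] ? 1 : 2    # draw the next state from row x of P
        push!(path, x)
    end
    return path
end

path = simulate_chain(P, 1, 10^5)
println("fraction of time in state 1 ≈ ", count(==(1), path) / length(path))
# For this P the long-run fraction approaches the stationary value 5/6 ≈ 0.833.
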
Chapter 5. Brownian Motion
Abstract
In the previous chapter, we learned about Markov chains: a sequence of random variables \(X_0,X_1,\ldots \) with a specific property (usually called the Markovian property) described in Definition 4.1. We often think about the subscript of the random variables X as time: \(X_0\) is the current time, and time increments in steps of one unit which could be one second, hour, year, etc. This is a “discrete” representation of time, and for that reason the Markov chains we studied in the previous chapter are called discrete Markov chains.
Giray Ökten
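
A minimal Julia sketch of a discretized standard Brownian motion path on [0, 1]; the grid size and the construction from independent normal increments with variance Δt are standard choices, but the code is illustrative rather than the book’s.

n  = 1000                          # number of time steps on [0, 1] (assumed grid size)
Δt = 1 / n
increments = sqrt(Δt) .* randn(n)  # independent Normal(0, Δt) increments
W = [0.0; cumsum(increments)]      # W(0) = 0; partial sums give the path at t = k*Δt
println("W(1) = ", W[end])         # one sample of W(1), which is Normal(0, 1)
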
Backmatter
Metadata
Title
Probability and Simulation
Author
Dr. Giray Ökten
Copyright Year
2020
Electronic ISBN
978-3-030-56070-6
Print ISBN
978-3-030-56069-0
DOI
https://doi.org/10.1007/978-3-030-56070-6