Preface

In these notes we explain the measure theoretic foundations of modern probability. The notes are used during a course that has as one of its principal aims a swift introduction to measure theory, as far as it is needed in modern probability, e.g. to define concepts such as conditional expectation and to prove limit theorems for martingales.

Everyone with a basic notion of mathematics and probability understands what is meant by f(x) and P(A). In the former case we have the value of some function f evaluated at its argument. In the second case, one recognizes the probability of an event A. The notations are quite similar, and this suggests that P is also a function, defined on some domain to which A belongs. This is indeed the point of view that we follow. We will see that P is a function (a special case of a measure) defined on a collection of sets that satisfies certain properties, a σ-algebra. In general, a σ-algebra Σ will be defined as a suitable collection of subsets of a given set S. A measure µ will then be a map on Σ, satisfying some defining properties. This gives rise to a triple (S, Σ, µ), called a measure space. We will develop probability theory in the context of measure spaces, and because of tradition and some distinguished features we will write (Ω, F, P) for a probability space instead of (S, Σ, µ).

Given a measure space, we will develop in a rather abstract sense integrals of functions defined on S. In a probabilistic context, these integrals have the meaning of expectations. The general setup provides us with two big advantages. First, in the definition of expectations we no longer have to distinguish between random variables having a discrete distribution and those that have what is called a density. In the first case, expectations are usually computed as sums, whereas in the latter case Riemann integrals are the tools. We will see that both are special cases of the more general notion of Lebesgue integral.
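This first advantage can be summarized in a single display. The following is a sketch, with E denoting expectation and the notation anticipating definitions given later in the notes:

```latex
% The expectation of a random variable X on (\Omega, \mathcal{F}, \mathbb{P})
% is defined once and for all as a Lebesgue integral:
\[
  \mathbb{E}\, X = \int_\Omega X \,\mathrm{d}\mathbb{P},
\]
% which reduces to a sum when X has a discrete distribution,
\[
  \mathbb{E}\, X = \sum_k x_k \,\mathbb{P}(X = x_k),
\]
% and to a Riemann integral when X has a density f:
\[
  \mathbb{E}\, X = \int_{-\infty}^{\infty} x f(x)\,\mathrm{d}x.
\]
```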
Another advantage is the availability of convergence theorems. In analytic terms, we will see that integrals of functions converge to the integral of a limit function, given appropriate conditions and an appropriate concept of convergence. In a probabilistic context, this translates to convergence of expectations of random variables. We will see many instances where the foundations of the theory can be fruitfully applied to fundamental issues in probability theory.

These lecture notes are the result of teaching the course Measure Theoretic Probability for a number of years. To a large extent this course was initially based on the book Probability with Martingales by D. Williams, but other texts have been used as well. In particular we consulted An Introduction to Probability Theory and Its Applications, Vol. 2 by W. Feller, Convergence of Stochastic Processes by D. Pollard, Real and Complex Analysis by W. Rudin, Real Analysis and Probability by R.M. Dudley, Foundations of Modern Probability by O. Kallenberg and Essentials of Stochastic Finance by A.N. Shiryaev.

These lecture notes were first used in Fall 2008. Among the students who then took the course was Ferdinand Rolwes, who corrected (too) many typos and other annoying errors. Later, Delyan Kalchev, Jan Rozendaal, Arjun Sudan, Willem van Zuijlen, Hailong Bao and Johan du Plessis corrected quite a few remaining errors. I am grateful to them all.
