{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# The Markov Property" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "A continuous time stochastic process is said to have the Markov property if\n", "its past and future are independent given the current state.\n", "\n", "(A more formal definition is provided below.)\n", "\n", "As we will see, the Markov property imposes a large amount of structure on\n", "continuous time processes.\n", "\n", "This structure leads to elegant and powerful results on\n", "evolution and dynamics.\n", "\n", "At the same time, the Markov property is general enough to cover many applied\n", "problems, as described in [the introduction](https://jstac.github.io/continuous_time_mcs/intro.html)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Setting\n", "\n", "In this lecture, the state space where dynamics\n", "evolve will be a [countable set](https://en.wikipedia.org/wiki/Countable_set),\n", "denoted henceforth by $S$, with typical elements $x, y$.\n", "\n", "(Note that “countable” is understood to include finite.)\n", "\n", "Regarding notation, in what follows, $\\sum_{x \\in S}$ is abbreviated to\n", "$\\sum_x$, the supremum $\\sup_{x \\in S}$ is abbreviated to $\\sup_x$ and so on.\n", "\n", "A **distribution** on $S$ is a function $\\phi$ from $S$ to $\\RR_+$ with\n", "$\\sum_x \\phi(x) = 1$.\n", "\n", "Let $\\dD$ denote the set of all distributions on $S$.\n", "\n", "To economize on terminology, we define a **matrix** $A$ on $S$ to be a map\n", "from $S \\times S$ to $\\RR$.\n", "\n", "When $S$ is finite, this reduces to the usual notion of a matrix, and,\n", "whenever you see expressions such as $A(x,y)$ below, you can\n", "mentally identify them with more familiar matrix\n", "notation, such as $A_{ij}$, if you wish.\n", "\n", "The product of two matrices $A$ and $B$ is defined by\n", "\n", "\n", "\n", "$$\n", "(A B)(x, y) = \\sum_z A(x, z) B(z, y)\n", " 
\\qquad ((x, y) \\in S \\times S) \\tag{3.1}\n", "$$\n", "\n", "If $S$ is finite, then this is just ordinary matrix multiplication.\n", "\n", "In statements involving matrix algebra, we *always treat distributions as row\n", "vectors*, so that, for $\\phi \\in \\dD$ and given matrix $A$,\n", "\n", "$$\n", "(\\phi A)(y) = \\sum_x \\phi(x) A(x, y)\n", "$$\n", "\n", "We will use the following imports" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "import numpy as np\n", "import scipy as sp\n", "import matplotlib.pyplot as plt\n", "\n", "import quantecon as qe\n", "from numba import njit\n", "\n", "from scipy.linalg import expm\n", "from scipy.stats import binom\n", "\n", "from matplotlib import cm\n", "from mpl_toolkits.mplot3d import Axes3D" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Markov Processes\n", "\n", "We now introduce the definition of Markov processes, first reviewing the\n", "discrete case and then shifting to continuous time.\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Discrete Time, Finite State\n", "\n", "The simplest Markov processes are those with a discrete time parameter and finite state space.\n", "\n", "Assume for now that $S$ has $n$ elements and let $P$ be a **Markov matrix**,\n", "which means that $P(x,y) \\geq 0$ and $\\sum_y P(x,y) = 1$ for all $x$.\n", "\n", "In applications, $P(x, y)$ represents the probability of transitioning from $x$ to\n", "$y$ in one step.\n", "\n", "A **Markov chain** $(X_t)_{t \\in \\ZZ_+}$ on $S$ with Markov\n", "matrix $P$ is a sequence of random variables satisfying\n", "\n", "\n", "\n", "$$\n", "\\PP\\{X_{t+1} = y \\,|\\, X_0, X_1, \\ldots, X_t \\} = P (X_t, y) \\tag{3.2}\n", "$$\n", "\n", "with probability one for all $y \\in S$ and any $t \\in \\ZZ_+$.\n", "\n", "In addition to connecting probabilities to the Markov matrix,\n", "[(3.2)](#equation-markovpropd) says that the 
process depends on its history only through\n", "the current state.\n", "\n", "We [recall that](https://python.quantecon.org/finite_markov.html#Marginal-Distributions), if $X_t$\n", "has distribution $\\phi$, then $X_{t+1}$ has distribution $\\phi P$.\n", "\n", "Since $\\phi$ is understood as a row vector, the meaning is\n", "\n", "\n", "\n", "$$\n", "(\\phi P)(y) = \\sum_x \\phi(x) P(x, y) \n", " \\qquad (y \\in S) \\tag{3.3}\n", "$$\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### The Joint Distribution\n", "\n", "In general, for given Markov matrix $P$, there can be many Markov chains\n", "$(X_t)$ that satisfy [(3.2)](#equation-markovpropd).\n", "\n", "This is due to the more general observation that, for a given distribution\n", "$\\phi$, we can construct many random variables having distribution $\\phi$.\n", "\n", "(The exercises below ask for one example.)\n", "\n", "Hence $P$ is, in a sense, a more primitive object than $(X_t)$.\n", "\n", "There is another way to see the fundamental importance of $P$, which is by\n", "constructing the joint distribution of $(X_t)$ from $P$.\n", "\n", "Let $S^\\infty$ represent the space of $S$-valued sequences $(x_0, x_1, x_2, \\ldots)$.\n", "\n", "Fix an initial condition $\\psi \\in \\dD$ and a Markov matrix $P$ on $S$.\n", "\n", "The **joint distribution** of a Markov chain $(X_t)$ satisfying\n", "[(3.2)](#equation-markovpropd) and $X_0 \\sim \\psi$ is the distribution $\\mathbf P_\\psi$ over\n", "$S^\\infty$ such that\n", "\n", "\n", "\n", "$$\n", "\\PP\\{ X_{t_1} = y_1, \\ldots, X_{t_m} = y_m \\}\n", " =\n", " \\mathbf P_\\psi\\{ (x_t) \\in S^\\infty \\,:\\, \n", " x_{t_i} = y_i \\text{ for } i = 1, \\ldots m\\} \\tag{3.4}\n", "$$\n", "\n", "for any $m$ positive integers $t_i$ and $m$ elements $y_i$ of the state space $S$.\n", "\n", "(Joint distributions of discrete time processes are uniquely defined by their\n", "values at finite collections of times — see, for example, Theorem 7.2 
of [[Walsh, 2012](https://jstac.github.io/continuous_time_mcs/zreferences.html#id10)].)\n", "\n", "We can construct $\\mathbf P_\\psi$ by first defining $P_\\psi^n$ over\n", "the finite Cartesian product $S^{n+1}$ via\n", "\n", "\n", "\n", "$$\n", "\\mathbf P_\\psi^n(x_0, x_1, \\ldots, x_n)\n", " = \\psi(x_0)\n", " P(x_0, x_1)\n", " \\times \\cdots \\times\n", " P(x_{n-1}, x_n) \\tag{3.5}\n", "$$\n", "\n", "For any Markov chain $(X_t)$ satisfying [(3.2)](#equation-markovpropd) and $X_0 \\sim \\psi$,\n", "the restriction $(X_0, \\ldots, X_n)$ has joint distribution $\\mathbf\n", "P_\\psi^n$.\n", "\n", "This is a solved exercise below.\n", "\n", "The last step is to show that the family $(\\mathbf P_\\psi^n)$ defined at each\n", "$n \\in \\NN$ extends uniquely to a distribution $\\mathbf P_\\psi$ over the\n", "infinite sequences in $S^\\infty$.\n", "\n", "That this is true follows from a well known [theorem of Kolmogorov](https://en.wikipedia.org/wiki/Kolmogorov_extension_theorem).\n", "\n", "Hence $P$ defines the joint distribution $\\mathbf P_\\psi$ when paired with any initial condition $\\psi$." 
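] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To make [(3.5)](#equation-mathjointd) concrete, here is a small numerical\n", "sketch, in which the Markov matrix $P$, the distribution $\\psi$ and the\n", "horizon $n$ are all made-up examples: it builds $\\mathbf P_\\psi^n$ over\n", "every path in $S^{n+1}$ and checks that it sums to one and that its\n", "time-$0$ marginal recovers $\\psi$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "from itertools import product\n", "\n", "# A made-up Markov matrix and initial distribution on S = {0, 1}\n", "P = np.array([[0.9, 0.1],\n", "              [0.4, 0.6]])\n", "ψ = np.array([0.3, 0.7])\n", "\n", "n = 3  # horizon for the joint distribution of (X_0, ..., X_n)\n", "\n", "def joint_prob(path):\n", "    # ψ(x_0) P(x_0, x_1) ... P(x_{n-1}, x_n), as in (3.5)\n", "    p = ψ[path[0]]\n", "    for x, y in zip(path, path[1:]):\n", "        p *= P[x, y]\n", "    return p\n", "\n", "paths = list(product(range(2), repeat=n+1))\n", "probs = np.array([joint_prob(path) for path in paths])\n", "\n", "print(probs.sum())   # sums to one, as a distribution must\n", "print(sum(p for path, p in zip(paths, probs) if path[0] == 0))  # ψ(0) = 0.3"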
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Extending to Infinite (Countable) State Spaces\n", "\n", "When $S$ is infinite, the same idea carries through.\n", "\n", "Consistent with the finite case, a **Markov matrix** is a map\n", "$P$ from $S \\times S$ to $\\RR_+$ satisfying\n", "\n", "$$\n", "\\sum_y P(x, y) = 1 \n", " \\text{ for all } x \\in S\n", "$$\n", "\n", "The definition of a Markov chain $(X_t)_{t \\in \\ZZ_+}$ on $S$ with Markov matrix $P$ is exactly as in [(3.2)](#equation-markovpropd).\n", "\n", "Given Markov matrix $P$ and $\\phi \\in \\dD$, we define $\\phi P$ by\n", "[(3.3)](#equation-update-rule).\n", "\n", "Then, as before, $\\phi P$ can be understood as the distribution of\n", "$X_{t+1}$ when $X_t$ has distribution $\\phi$.\n", "\n", "The function $\\phi P$ is in $\\dD$, since, by [(3.3)](#equation-update-rule), it is\n", "nonnegative and\n", "\n", "$$\n", "\\sum_y (\\phi P)(y) \n", " = \\sum_y \\sum_x P(x, y) \\phi(x)\n", " = \\sum_x \\sum_y P(x, y) \\phi(x)\n", " = \\sum_x \\phi(x)\n", " = 1\n", "$$\n", "\n", "(Swapping the order of infinite sums is justified here by the fact that all\n", "elements are nonnegative — a version of Tonelli’s theorem).\n", "\n", "If $P$ and $Q$ are Markov matrices on $S$, then, using the definition in\n", "[(3.1)](#equation-kernprod),\n", "\n", "$$\n", "(P Q)(x, y) := \\sum_z P(x, z) Q(z, y)\n", "$$\n", "\n", "It is not difficult to check that $P Q$ is again a Markov matrix on $S$.\n", "\n", "The elements of $P^k$, the $k$-th product of $P$ with itself, give $k$ step transition probabilities.\n", "\n", "For example, we have\n", "\n", "\n", "\n", "$$\n", "P^k(x, y) \n", " = (P^{k-j} P^j)(x, y) = \\sum_z P^{k-j}(x, z) P^j(z, y) \\tag{3.6}\n", "$$\n", "\n", "which is a version of the (discrete time) Chapman-Kolmogorov equation.\n", "\n", "Equation [(3.6)](#equation-kernprodk) can be obtained from the law of total probability: if\n", "$(X_t)$ is a Markov chain with Markov matrix $P$ and initial 
condition $X_0 =\n", "x$, then\n", "\n", "$$\n", "\\PP\\{X_k = y\\}\n", " = \\sum_z \\PP\\{X_k = y \\,|\\, X_j=z\\} \\PP\\{X_j=z\\}\n", "$$\n", "\n", "All of the [preceding discussion](#jdfin) on the connection between $P$\n", "and the joint distribution of $(X_t)$ when $S$ is finite carries over\n", "to the current setting." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Continuous Time Case\n", "\n", "A **continuous time stochastic process** on $S$ is a collection $(X_t)$ of $S$-valued\n", "random variables $X_t$ defined on a common probability space and indexed by $t\n", "\\in \\RR_+$.\n", "\n", "Let $I$ be the Markov matrix on $S$ defined by $I(x,y) = \\mathbb 1\\{x = y\\}$.\n", "\n", "A **Markov semigroup** is a family $(P_t)$ of Markov matrices\n", "on $S$ satisfying\n", "\n", "1. $P_0 = I$, \n", "1. $\\lim_{t \\to 0} P_t(x, y) = I(x,y)$ for all $x,y$ in $S$, and \n", "1. the semigroup property $P_{s + t} = P_s P_t$ for all $s, t \\geq 0$. \n", "\n", "\n", "The interpretation of $P_t(x, y)$ is the probability of moving from state $x$\n", "to state $y$ in $t$ units of time.\n", "\n", "As such it is natural that $P_0(x,y) = 1$ if $x=y$ and zero otherwise, which\n", "is condition 1.\n", "\n", "Condition 2 is continuity with respect to $t$, which might seem restrictive\n", "but it is in fact very mild.\n", "\n", "For all practical applications, probabilities do not jump — although the\n", "chain $(X_t)$ itself can of course jump from state to state as time\n", "goes by.\n", "\n", "The semigroup property in condition 3 is nothing more than a continuous\n", "time version of the Chapman-Kolmogorov equation.\n", "\n", "This becomes clearer if we write it more explicitly as\n", "\n", "\n", "\n", "$$\n", "P_{s+t}(x, y) \n", " = \\sum_z P_s(x, z) P_t(z, y) \\tag{3.7}\n", "$$\n", "\n", "A stochastic process $(X_t)$ is called a (time homogeneous) **continuous time\n", "Markov chain** on $S$ with Markov semigroup $(P_t)$ if\n", "\n", "\n", "\n", 
"$$\n", "\\PP\\{X_{s + t} = y \\,|\\, \\fF_s \\}\n", " = P_t (X_s, y) \\tag{3.8}\n", "$$\n", "\n", "with probability one for all $y \\in S$ and $s, t \\geq 0$.\n", "\n", "Here $\\fF_s$ is the history $(X_r)_{r \\leq s}$ of the process up until\n", "time $s$.\n", "\n", "If you are an economist you might call $\\fF_s$ the “information set” at time\n", "$s$.\n", "\n", "If you are familiar with measure theory, you can understand $\\fF_s$ as\n", "the $\\sigma$-algebra generated by $(X_r)_{r \\leq s}$.\n", "\n", "Analogous to the discrete time case, the joint\n", "distribution of $(X_t)$ is determined by its Markov semigroup plus an\n", "initial condition.\n", "\n", "This distribution is defined over the set of all right continuous functions\n", "$\\RR_+ \\ni t \\mapsto x_t \\in S$, which we call $rcS$.\n", "\n", "Next one builds [finite dimensional distributions](https://en.wikipedia.org/wiki/Finite-dimensional_distribution) over $rcS$ using\n", "expressions similar to [(3.5)](#equation-mathjointd).\n", "\n", "Finally, the Kolmogorov extension theorem is applied, similar to the discrete\n", "time case.\n", "\n", "Corollary 6.4 of [[Le Gall, 2016](https://jstac.github.io/continuous_time_mcs/zreferences.html#id9)] provides full details." 
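] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When $S$ is finite, the three conditions above are easy to check\n", "numerically. In the sketch below, the matrix $Q$ (with rows summing to\n", "zero) is a made-up example and $P_t := e^{tQ}$ is computed with SciPy’s\n", "`expm`; that matrices of this form always generate Markov semigroups is\n", "developed later in the lecture series, so here we simply verify\n", "conditions 1–3 for this particular example." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# An assumed matrix Q with rows summing to zero; P_t = expm(t Q)\n", "# is then a Markov semigroup, which we verify numerically below\n", "Q = np.array([[-2.0,  1.0,  1.0],\n", "              [ 0.5, -1.0,  0.5],\n", "              [ 1.0,  1.0, -2.0]])\n", "\n", "P = lambda t: expm(t * Q)\n", "\n", "s, t = 0.4, 1.1\n", "print(np.allclose(P(0.0), np.identity(3)))      # condition 1: P_0 = I\n", "print(np.allclose(P(1e-9), np.identity(3)))     # condition 2: continuity at 0\n", "print(np.allclose(P(s + t), P(s) @ P(t)))       # condition 3: semigroup property\n", "print(np.allclose(P(t).sum(axis=1), 1.0))       # each P_t is a Markov matrix"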
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Canonical Chain\n", "\n", "Given a Markov semigroup $(P_t)$ on $S$, does there always exist a continuous\n", "time Markov chain $(X_t)$ such that [(3.8)](#equation-markovprop) holds?\n", "\n", "The answer is affirmative.\n", "\n", "To illustrate, pick any Markov semigroup $(P_t)$ on $S$ and fix initial\n", "condition $\\psi$.\n", "\n", "Next, create the corresponding joint distribution $\\mathbf P_\\psi$ over\n", "$rcS$, as described above.\n", "\n", "Now, for each $t \\geq 0$, let $\\pi_t$ be the time $t$ projection on\n", "$rcS$, which maps any right continuous function $(x_\\tau)$ into its time $t$ value\n", "$x_t$.\n", "\n", "Finally, let $X_t$ be an $S$-valued function on $rcS$ defined at $(x_\\tau) \\in rcS$ by $\\pi_t ( (x_\\tau))$.\n", "\n", "In other words, after $\\mathbf P_\\psi$ picks out some time path $(x_\\tau) \\in\n", "rcS$, the Markov chain $(X_t)$ simply reports this time path.\n", "\n", "Hence $(X_t)$ automatically has the correct distribution.\n", "\n", "The chain $(X_t)$ constructed in this way is called the **canonical chain**\n", "for the semigroup $(P_t)$ and initial condition $\\psi$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulation and Probabilistic Constructions\n", "\n", "While we have answered the existence question in the affirmative,\n", "the canonical construction is quite abstract.\n", "\n", "Moreover, there is little information about how we might simulate such a chain.\n", "\n", "Fortunately, it turns out that there are more concrete ways to build\n", "continuous time Markov chains from the objects that describe their\n", "distributions.\n", "\n", "We will learn about these in a [later lecture](https://jstac.github.io/continuous_time_mcs/uc_mc_semigroups.html)." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Implications of the Markov Property\n", "\n", "The Markov property carries some strong implications that are not immediately\n", "obvious.\n", "\n", "Let’s take some time to explore them." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example: Failure of the Markov Property\n", "\n", "Let’s look at how the Markov property can fail, via an intuitive rather than\n", "formal discussion.\n", "\n", "Let $(X_t)$ be a continuous time stochastic process with state space $S = \\{0, 1\\}$.\n", "\n", "The process starts at $0$ and updates as follows:\n", "\n", "1. Draw $W$ independently from a fixed Pareto distribution. \n", "1. Hold $(X_t)$ in its current state for $W$ units of time and then switch\n", "   to the other state. \n", "1. Go to step 1. \n", "\n", "\n", "What is the probability that $X_{s+h} = i$ given both the history $(X_r)_{r \\leq s}$ and current information $X_s = i$?\n", "\n", "If $h$ is small, then this is close to the\n", "probability that there are zero switches over the time interval $(s, s+h]$.\n", "\n", "To calculate this probability, it would be helpful to know how long the\n", "process has been at the current state $i$.\n", "\n", "This is because the Pareto distribution [is not memoryless](https://jstac.github.io/continuous_time_mcs/memoryless.html#fail-mem).\n", "\n", "(With a Pareto distribution, if we know that $X_t$ has been at $i$ for a long\n", "time, then a switch in the near future becomes less likely, since the\n", "Pareto hazard rate is decreasing.)\n", "\n", "As a result, the history prior to $X_s$ is useful for predicting $X_{s+h}$,\n", "even when we know $X_s$.\n", "\n", "Thus, the Markov property fails."
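] }, { "cell_type": "markdown", "metadata": {}, "source": [ "We can check this failure numerically. The Monte Carlo sketch below (the\n", "Pareto tail parameter and the times $s, t$ are arbitrary choices) estimates\n", "$\\PP\\{W > s + t \\mid W > s\\}$ and $\\PP\\{W > t\\}$; for a memoryless\n", "distribution these would coincide, but for Pareto draws they clearly\n", "differ." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Monte Carlo check that a Pareto holding time is not memoryless\n", "rng = np.random.default_rng(1234)\n", "W = rng.pareto(2.5, size=1_000_000) + 1   # Pareto draws on [1, ∞)\n", "\n", "s, t = 2.0, 3.0\n", "cond = np.mean(W[W > s] > s + t)   # P{W > s + t | W > s}\n", "uncond = np.mean(W > t)            # P{W > t}\n", "print(cond, uncond)   # ≈ 0.10 vs ≈ 0.06 -- the past matters"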
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Restrictions Imposed by the Markov Property\n", "\n", "From the discussion above, we see that, for continuous time Markov chains,\n", "the length of time between jumps must be memoryless.\n", "\n", "Recall that, by [Theorem 1.1](https://jstac.github.io/continuous_time_mcs/memoryless.html#exp_unique), the only memoryless\n", "distribution supported on $\\RR_+$ is the exponential distribution.\n", "\n", "Hence, a continuous time Markov chain waits at states for an\n", "exponential amount of time and then jumps.\n", "\n", "The way that the new state is chosen must also satisfy the Markov property,\n", "which adds another restriction.\n", "\n", "In summary, we already understand the following about continuous time Markov chains:\n", "\n", "1. Holding times are independent exponential draws. \n", "1. New states are chosen in a “Markovian” way, independent of the past given the current state. \n", "\n", "\n", "We just need to clarify the details in these steps to have a complete description." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Examples of Markov Processes\n", "\n", "Let’s look at some examples of processes that possess the Markov property."
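] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For contrast with the Pareto case discussed above, the sketch below (rate\n", "and times again arbitrary) runs the same conditional-probability check on\n", "exponential draws; now $\\PP\\{W > s + t \\mid W > s\\}$ and $\\PP\\{W > t\\}$\n", "agree up to simulation error, which is exactly the memorylessness that the\n", "Markov property requires." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# The same Monte Carlo check for exponential holding times\n", "rng = np.random.default_rng(1234)\n", "W = rng.exponential(scale=2.0, size=1_000_000)\n", "\n", "s, t = 2.0, 3.0\n", "cond = np.mean(W[W > s] > s + t)   # P{W > s + t | W > s}\n", "uncond = np.mean(W > t)            # P{W > t}\n", "print(cond, uncond)   # both ≈ exp(-3/2) ≈ 0.223"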
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Example: Poisson Processes\n", "\n", "The Poisson process discussed in our [previous lecture](https://jstac.github.io/continuous_time_mcs/poisson.html) is a\n", "Markov process on state space $\\ZZ_+$.\n", "\n", "To obtain the Markov semigroup, we observe that, for $k \\geq j$,\n", "\n", "$$\n", "\\PP\\{N_{s + t} = k \\,|\\, N_s = j\\}\n", " = \\PP\\{N_{s + t} - N_s = k - j \\,|\\, N_s = j\\}\n", " = \\PP\\{N_{s + t} - N_s = k - j\\}\n", "$$\n", "\n", "where the last step is due to independence of increments.\n", "\n", "From stationarity of increments we have\n", "\n", "$$\n", "\\PP\\{N_{s + t} - N_s = k - j\\}\n", " = \\PP\\{N_t = k - j\\}\n", " = e^{-\\lambda t} \\frac{ (\\lambda t)^{k-j} }{(k-j)!}\n", "$$\n", "\n", "In summary, the Markov semigroup is\n", "\n", "\n", "\n", "$$\n", "P_t(j, k) \n", " = e^{-\\lambda t} \\frac{ (\\lambda t)^{k-j} }{(k-j)!} \\tag{3.9}\n", "$$\n", "\n", "whenever $j \\leq k$ and $P_t(j, k) = 0$ otherwise.\n", "\n", "This chain of equalities was obtained with $N_s = j$ for arbitrary $j$, so we\n", "can replace $j$ with $N_s$ in [(3.9)](#equation-poissemi) to verify the Markov property [(3.8)](#equation-markovprop) for the Poisson process.\n", "\n", "Under [(3.9)](#equation-poissemi), each $P_t$ is a Markov matrix and $(P_t)$ is a\n", "Markov semigroup.\n", "\n", "The proof of the semigroup property is a solved exercise below.\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## A Model of Inventory Dynamics\n", "\n", "Let $X_t$ be the inventory of a firm at time $t$, taking values in the\n", "integers $0, 1, \\ldots, b$.\n", "\n", "If $X_t > 0$, then a customer arrives after $W$\n", "units of time, where $W \\sim \\Exp (\\lambda)$ for some fixed $\\lambda > 0$.\n", "\n", "Upon arrival, each customer purchases $\\min\\{U, X_t\\}$ units, where $U$ is an\n", "IID draw from the geometric distribution started at 1 rather than 0:\n", "\n", "$$\n", 
"\\PP\\{U = k\\} = (1-\\alpha)^{k-1} \\alpha\n", "    \\qquad (k = 1, 2, \\ldots, \\; \\alpha \\in (0, 1))\n", "$$\n", "\n", "If $X_t = 0$, then no customers arrive and the firm places an order for $b$ units.\n", "\n", "The order arrives after a delay of $D$ units of time, where $D \\sim \\Exp (\\lambda)$.\n", "\n", "(We use the same $\\lambda$ here just for convenience, to simplify the exposition.)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Representation\n", "\n", "The inventory process jumps to a new value either when a new customer arrives\n", "or when new stock arrives.\n", "\n", "Between these arrival times it is constant.\n", "\n", "Hence, to track $X_t$, it is enough to track the jump times and the new values\n", "taken at the jumps.\n", "\n", "In what follows, we denote the jump times by $\\{J_k\\}$ and the values at jumps\n", "by $\\{Y_k\\}$.\n", "\n", "Then we construct the state process via\n", "\n", "\n", "\n", "$$\n", "X_t = \\sum_{k \\geq 0} Y_k \\mathbb 1\\{J_k \\leq t < J_{k+1}\\}\n", "    \\qquad (t \\geq 0) \\tag{3.10}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulation\n", "\n", "Let’s simulate this process, starting at $X_0 = b$.\n", "\n", "As above,\n", "\n", "- $J_k$ is the time of the $k$-th jump (up or down) in inventory. \n", "- $Y_k$ is the size of the inventory after the $k$-th jump. \n", "- $(X_t)$ is defined from these objects via [(3.10)](#equation-xfromy). 
\n", "\n", "\n", "Here’s a function that generates and returns one path $t \\mapsto X_t$.\n", "\n", "(We are not aiming for computational efficiency at this stage.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "def sim_path(T=10, seed=123, λ=0.5, α=0.7, b=10):\n", "    \"\"\"\n", "    Generate a path for inventory starting at b, up to time T.\n", "\n", "    Return the path as a function X(t) constructed from (J_k) and (Y_k).\n", "    \"\"\"\n", "\n", "    J, Y = 0, b\n", "    J_vals, Y_vals = [J], [Y]\n", "    np.random.seed(seed)\n", "\n", "    while True:\n", "        W = np.random.exponential(scale=1/λ)  # W ~ Exp(λ)\n", "        J += W\n", "        J_vals.append(J)\n", "        if J >= T:\n", "            break\n", "        # Update Y\n", "        if Y == 0:\n", "            Y = b\n", "        else:\n", "            U = np.random.geometric(α)\n", "            Y = Y - min(Y, U)\n", "        Y_vals.append(Y)\n", "\n", "    Y_vals = np.array(Y_vals)\n", "    J_vals = np.array(J_vals)\n", "\n", "    def X(t):\n", "        if t == 0.0:\n", "            return Y_vals[0]\n", "        else:\n", "            k = np.searchsorted(J_vals, t)\n", "            return Y_vals[k-1]\n", "\n", "    return X" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let’s plot the process $(X_t)$." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "T = 20\n", "X = sim_path(T=T)\n", "\n", "grid = np.linspace(0, T, 100)\n", "\n", "fig, ax = plt.subplots()\n", "ax.step(grid, [X(t) for t in grid], label=\"$X_t$\")\n", "\n", "ax.set(xlabel=\"time\", ylabel=\"inventory\")\n", "\n", "ax.legend()\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As expected, inventory falls and then jumps back up to $b$."
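] }, { "cell_type": "markdown", "metadata": {}, "source": [ "The lookup inside `X(t)` is just evaluation of a right continuous step\n", "function at $t$. Here is a toy standalone version with made-up jump data;\n", "it uses `side='right'` in `np.searchsorted`, a small variant that handles\n", "both $t = 0$ and times that coincide exactly with a jump." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Toy version of the path lookup: recover the value of a right\n", "# continuous step function from jump times and post-jump values\n", "J_demo = np.array([0.0, 1.5, 4.0])   # jump times J_0, J_1, J_2\n", "Y_demo = np.array([10, 7, 10])       # values after each jump\n", "\n", "def X_demo(t):\n", "    k = np.searchsorted(J_demo, t, side='right')  # jumps in [0, t]\n", "    return Y_demo[k-1]\n", "\n", "print([int(X_demo(t)) for t in (0.0, 1.0, 1.5, 3.0, 5.0)])  # [10, 10, 7, 7, 10]"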
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Embedded Jump Chain\n", "\n", "In models such as the one described above, the embedded discrete time\n", "process $(Y_k)$ is called the “embedded jump chain”.\n", "\n", "It is easy to see that $(Y_k)$ is a discrete time, finite state Markov chain.\n", "\n", "Its Markov matrix $K$ is\n", "given by $K(x, y) = \\mathbb 1\\{y=b\\}$ when $x=0$ and, when $0 < x \\leq b$,\n", "\n", "\n", "\n", "$$\n", "K(x, y)\n", "    =\n", "    \\begin{cases}\n", "    0 & \\text{ if } y \\geq x\n", "    \\\\\n", "    \\PP\\{x - U = y\\} = (1-\\alpha)^{x-y-1} \\alpha \n", "    & \\text{ if } 0 < y < x\n", "    \\\\\n", "    \\PP\\{U \\geq x\\} = (1-\\alpha)^{x-1}\n", "    & \\text{ if } y = 0\n", "    \\end{cases} \\tag{3.11}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Markov Property\n", "\n", "The inventory model just described has the Markov property precisely because\n", "\n", "1. the jump chain $(Y_k)$ is Markov in discrete time and \n", "1. the holding times are independent exponential draws. \n", "\n", "\n", "Rather than providing more details on these points here, let us first describe\n", "a more general setting where the arguments will be clearer and more useful." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Jump Processes with Constant Rates\n", "\n", "The examples we have focused on so far are special cases of Markov processes\n", "with constant jump intensities.\n", "\n", "These processes turn out to be very representative (although the constant jump intensity will later be relaxed).\n", "\n", "Let’s now summarize the model and its properties." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Construction\n", "\n", "The data for a Markov process on $S$ with constant jump rates are\n", "\n", "- a parameter $\\lambda > 0$ called the **jump rate**, which governs the jump\n", "  intensities and \n", "- a Markov matrix $K$ on $S$, called the **jump matrix**. 
\n", "\n", "\n", "To run the process we also need an initial condition $\\psi \\in \\dD$.\n", "\n", "The process $(X_t)$ is constructed by holding at each state for an\n", "exponential amount of time, with rate $\\lambda$, and then updating to a\n", "new state via $K$.\n", "\n", "In more detail, the construction is as follows." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Algorithm (Constant Rate Jump Chain)\n", "\n", "**Inputs** $\\psi \\in \\dD$, positive constant $\\lambda$, Markov matrix $K$\n", "\n", "**Outputs** Markov chain $(X_t)$\n", "\n", "1. draw $Y_0$ from $\\psi$ \n", "1. set $k = 1$ and $J_0 = 0$ \n", "1. draw $W_k$ from Exp$(\\lambda)$ and set $J_k = J_{k-1} + W_k$ \n", "1. set $X_t = Y_{k-1}$ for all $t$ such that $J_{k-1} \\leq t < J_k$ \n", "1. draw $Y_k$ from $K(Y_{k-1}, \\cdot)$ \n", "1. set $k = k+1$ and go to step 3. \n", "\n", "\n", "An alternative, more parsimonious way to express the same process is to take\n", "\n", "- $(N_t)$ to be a Poisson process with rate $\\lambda$ and \n", "- $(Y_k)$ to be a discrete time Markov chain with Markov matrix $K$ \n", "\n", "\n", "and then set\n", "\n", "$$\n", "X_t := Y_{N_t} \\text{ for all } t \\geq 0\n", "$$\n", "\n", "As before, the discrete time process $(Y_k)$ is called the **embedded jump chain**.\n", "\n", "(Not to be confused with $(X_t)$, which is often called a “jump process” or\n", "“jump chain” due to the fact that it changes states with jumps.)\n", "\n", "The draws $(W_k)$ are called the **wait times** or **holding times**."
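] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here is a minimal simulation of the construction $X_t = Y_{N_t}$; the rate\n", "and the $2 \\times 2$ jump matrix below are made-up examples. For comparison,\n", "the cell also evaluates $P_t(0, 1)$ via the exponential formula\n", "$P_t = e^{t \\lambda (K - I)}$ derived later in this lecture; the Monte\n", "Carlo frequency and the formula agree up to simulation error." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "# Simulate X_t = Y_{N_t} for an assumed rate λ_demo and jump matrix K_demo\n", "rng = np.random.default_rng(42)\n", "λ_demo = 0.5\n", "K_demo = np.array([[0.0, 1.0],\n", "                   [0.8, 0.2]])\n", "\n", "def draw_X(t, y0=0):\n", "    N_t = rng.poisson(t * λ_demo)      # number of jumps up to time t\n", "    y = y0\n", "    for _ in range(N_t):               # run the jump chain N_t steps\n", "        y = rng.choice(2, p=K_demo[y])\n", "    return y\n", "\n", "t = 10.0\n", "freq = np.mean([draw_X(t) for _ in range(10_000)])\n", "P_t = expm(t * λ_demo * (K_demo - np.identity(2)))\n", "print(freq, P_t[0, 1])   # approximately equal"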
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Examples\n", "\n", "The Poisson process with rate $\\lambda$ is a jump process on $S = \\ZZ_+$.\n", "\n", "The holding times are obviously exponential with constant rate $\\lambda$.\n", "\n", "The jump matrix is just $K(i, j) = \\mathbb 1\\{j = i+1\\}$, so that the state\n", "jumps up by one at every $J_k$.\n", "\n", "The inventory model is also a jump process with constant rate $\\lambda$, this\n", "time on $S = \\{0, 1, \\ldots, b\\}$.\n", "\n", "The jump matrix was given in [(3.11)](#equation-ijumpkern)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Markov Property\n", "\n", "Let’s show that the jump process $(X_t)$ constructed above satisfies the\n", "Markov property, and obtain the Markov semigroup at the same time.\n", "\n", "We will use two facts:\n", "\n", "- the jump chain $(Y_k)$ has the Markov property in discrete\n", " time and \n", "- the Poisson process has stationary independent increments. \n", "\n", "\n", "From these facts it is intuitive that the distribution of $X_{t+s}$ given\n", "the whole history $\\fF_s = \\{ (N_r)_{r \\leq s}, (Y_k)_{k \\leq N_s} \\}$\n", "depends only on $X_s$.\n", "\n", "Indeed, if we know $X_s$, then we can simply\n", "\n", "- [restart](https://jstac.github.io/continuous_time_mcs/poisson.html#restart-prop) the Poisson process from $N_s$ and then \n", "- starting from $X_s = Y_{N_s}$, update the embedded jump chain $(Y_k)$ using $K$ each time a new jump occurs. 
\n", "\n", "\n", "Let’s write this more mathematically.\n", "\n", "Fixing $y \\in S$ and $s, t \\geq 0$, we have\n", "\n", "$$\n", "\\PP\\{X_{s + t} = y \\,|\\, \\fF_s \\}\n", " = \\PP\\{Y_{N_{s + t}} = y \\,|\\, \\fF_s \\}\n", " = \\PP\\{Y_{N_s + N_{s + t} - N_s} = y \\,|\\, \\fF_s \\}\n", "$$\n", "\n", "[Recalling](https://jstac.github.io/continuous_time_mcs/poisson.html#restart-prop) that $N_{s + t} - N_s$ is Poisson distributed with rate $t \\lambda$, independent of the history $\\fF_s$, we can write the display above as\n", "\n", "$$\n", "\\PP\\{X_{s + t} = y \\,|\\, \\fF_s \\}\n", " =\n", " \\sum_{k \\geq 0}\n", " \\PP\\{Y_{N_s + k} = y \\,|\\, \\fF_s \\}\n", " \\frac{(t \\lambda )^k}{k!} e^{-t \\lambda}\n", "$$\n", "\n", "Because the embedded jump chain is Markov with Markov matrix $K$, we can simplify further to\n", "\n", "$$\n", "\\PP\\{X_{s + t} = y \\,|\\, \\fF_s \\}\n", " = \\sum_{k \\geq 0}\n", " K^k(Y_{N_s}, y) \\frac{(t \\lambda )^k}{k!} e^{-t \\lambda}\n", " = \\sum_{k \\geq 0} K^k(X_s, y) \\frac{(t \\lambda )^k}{k!} e^{-t \\lambda}\n", "$$\n", "\n", "Since the expression above depends only on $X_s$,\n", "we have proved that $(X_t)$ has the Markov property.\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Transition Semigroup\n", "\n", "The Markov semigroup can be obtained from our final result, conditioning\n", "on $X_s = x$ to get\n", "\n", "$$\n", "P_t(x, y) = \\PP\\{X_{s + t} = y \\,|\\, X_s = x \\}\n", " = e^{-t \\lambda} \\sum_{k \\geq 0}\n", " K^k(x, y) \\frac{(t \\lambda )^k}{k!}\n", "$$\n", "\n", "If $S$ is finite, we can write this in matrix form and use the definition of\n", "the [matrix exponential](https://en.wikipedia.org/wiki/Matrix_exponential) to\n", "get\n", "\n", "$$\n", "P_t \n", " = e^{-t \\lambda}\n", " \\sum_{k \\geq 0}\n", " \\frac{(t \\lambda K)^k}{k!} \n", " = e^{-t \\lambda} e^{t \\lambda K}\n", " = e^{t \\lambda (K - I)}\n", "$$\n", "\n", "This is a simple and elegant representation of 
the Markov semigroup that\n", "makes it easy to understand and analyze distribution dynamics.\n", "\n", "For example, if $X_0$ has distribution $\\psi$, then $X_t$ has distribution\n", "\n", "\n", "\n", "$$\n", "\\psi P_t = \\psi e^{t \\lambda (K - I)} \\tag{3.12}\n", "$$\n", "\n", "We just need to plug in $\\lambda$ and $K$ to obtain the entire flow $t \\mapsto \\psi P_t$.\n", "\n", "We will soon extend this representation to the case where $S$ is infinite.\n", "\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Distribution Flows for the Inventory Model\n", "\n", "Let’s apply these ideas to the inventory model described above.\n", "\n", "We fix\n", "\n", "- the parameters $\\alpha$, $b$ and $\\lambda$ in the inventory model and \n", "- an initial condition $X_0 \\sim \\psi_0$, where $\\psi_0$ is an arbitrary\n", "  distribution on $S$. \n", "\n", "\n", "The state space $S$ is set to $\\{0, \\ldots, b\\}$ and the matrix $K$ is defined by\n", "[(3.11)](#equation-ijumpkern).\n", "\n", "Now we run time forward.\n", "\n", "We are interested in computing the flow of distributions $t \\mapsto \\psi_t$,\n", "where $\\psi_t$ is the distribution of $X_t$.\n", "\n", "According to the theory developed above, we have two options:\n", "\n", "Option 1 is to use simulation.\n", "\n", "The first step is to simulate many independent observations of the process $(X_t^m)_{m=1}^M$.\n", "\n", "(Here $m$ indicates simulation number $m$, which you might think of as the outcome\n", "for firm $m$.)\n", "\n", "Next, for any given $t$, we define $\\hat \\psi_t \\in \\dD$ as the\n", "histogram of observations at time $t$, or, equivalently, the cross-sectional\n", "distribution at $t$:\n", "\n", "$$\n", "\\hat \\psi_t(x) := \\frac{1}{M} \\sum_{m=1}^M \\mathbb 1\\{X_t^m = x\\}\n", "    \\qquad (x \\in S)\n", "$$\n", "\n", "When $M$ is large, $\\hat \\psi_t(x)$ will be close to $\\PP\\{X_t = x\\}$ by the law of\n", "large numbers.\n", "\n", "In other words, in the limit we recover 
$\psi_t$.\n", "\n", "Option 2 is to insert the parameters into the right hand side of [(3.12)](#equation-distflowconst)\n", "and compute $\\psi_t$ as $\\psi_0 P_t$.\n", "\n", "The figure below is created using option 2, with $\\alpha = 0.6$, $\\lambda = 0.5$ and $b=10$.\n", "\n", "For the initial distribution we pick a binomial distribution.\n", "\n", "Since we cannot compute the entire uncountable flow $t \\mapsto \\psi_t$, we\n", "iterate forward 200 steps at time increments $h=0.1$.\n", "\n", "In the figure, hot colors indicate initial conditions and early dates (so that the\n", "distribution “cools” over time).\n", "\n", "*Probability flows for the inventory model.*\n", "\n", "In the (solved) exercises you will be asked to try to reproduce this figure." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercises" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 3.1\n", "\n", "Consider the binary (Bernoulli) distribution where outcomes $0$ and $1$ each have\n", "probability $0.5$.\n", "\n", "Construct two different random variables with this distribution." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise 3.2\n", "\n", "Show by direct calculation that the Poisson matrices $(P_t)$ defined in\n", "[(3.9)](#equation-poissemi) satisfy the semigroup property [(3.7)](#equation-chapkol-ct2).\n", "\n", "Hints:\n", "\n", "- Recall that $P_t(j, k) = 0$ whenever $j > k$. \n", "- Consider using the [binomial formula](https://en.wikipedia.org/wiki/Binomial_theorem). 
" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise \n", "\n", "Consider the distribution over $S^{n+1}$ previously shown in [(3.5)](#equation-mathjointd), which is\n", "\n", "$$\n", "\\mathbf P_\\psi^n(x_0, x_1, \\ldots, x_n)\n", " = \\psi(x_0)\n", " P(x_0, x_1)\n", " \\times \\cdots \\times\n", " P(x_{n-1}, x_n)\n", "$$\n", "\n", "Show that, for any Markov chain $(X_t)$ satisfying [(3.2)](#equation-markovpropd)\n", "and $X_0 \\sim \\psi$, the restriction $(X_0, \\ldots, X_n)$ has joint\n", "distribution $\\mathbf P_\\psi^n$." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Exercise \n", "\n", "Try to produce your own version of the figure [Probability flows for the inventory model.](#flow-fig)\n", "\n", "The initial condition is ψ_0 = binom.pmf(states, n, 0.25) where n = b + 1." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Solutions\n", "\n", ">**Note**\n", ">\n", ">code is currently not supported in sphinx-exercise\n", "so code-cell solutions are immediately after this\n", "solution block." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## [Solution to Exercise 3.1](https://jstac.github.io/continuous_time_mcs/#markov-prop-1)\n", "\n", "This is easy.\n", "\n", "One example is to take $U$ to be uniform on $(0, 1)$ and set $X=0$ if $U <\n", "0.5$ and $1$ otherwise.\n", "\n", "Then $X$ has the desired distribution.\n", "\n", "Alternatively, we could take $Z$ to be standard normal and set $X=0$ if $Z <\n", "0$ and $1$ otherwise." 
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## [Solution to Exercise 3.2](https://jstac.github.io/continuous_time_mcs/#markov-prop-2)\n", "\n", "Fixing $s, t \in \RR_+$ and $j \leq k$, we have\n", "\n", "$$\n", "\\begin{aligned}\n", " \sum_{i \geq 0} P_s(j, i) P_t(i, k)\n", " & = \n", " e^{-\lambda (s+t)} \n", " \sum_{j \leq i \leq k}\n", " \frac{ (\lambda s)^{i-j} }{(i-j)!} \n", " \frac{ (\lambda t)^{k-i} }{(k-i)!} \n", " \\\n", " & = \n", " e^{-\lambda (s+t)} \lambda^{k-j}\n", " \sum_{0 \leq \ell \leq k-j}\n", " \frac{ s^\ell }{\ell!} \n", " \frac{ t^{k-j - \ell} }{(k-j - \ell)!} \n", " \\\n", " & = \n", " e^{-\lambda (s+t)} \lambda^{k-j}\n", " \sum_{0 \leq \ell \leq k-j}\n", " \binom{k-j}{\ell}\n", " \frac{s^\ell t^{k-j - \ell}}{(k-j)!} \n", "\\end{aligned}\n", "$$\n", "\n", "Applying the binomial formula, we can write this as\n", "\n", "$$\n", "\sum_{i \geq 0} P_s(j, i) P_t(i, k)\n", " =\n", " e^{-\lambda (s+t)} \n", " \frac{(\lambda (s + t))^{k-j}}{(k-j)!}\n", " = P_{s+t}(j, k)\n", "$$\n", "\n", "(When $j > k$, both sides are zero, since $P_t(j, k) = 0$ whenever $j > k$, so\n", "restricting attention to $j \leq k$ is without loss of generality.)\n", "\n", "Hence [(3.7)](#equation-chapkol-ct2) holds, and the semigroup property is satisfied."
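As a numerical sanity check on this calculation, we can build truncated versions of the Poisson matrices and verify the semigroup property directly. Because $P_t$ is upper triangular, the sum defining $(P_s P_t)(j, k)$ runs only over $j \leq i \leq k$, so truncating the state space to $\{0, \ldots, N-1\}$ introduces no error. The values of $N$, $\lambda$, $s$ and $t$ below are arbitrary.

```python
import numpy as np
from math import exp, factorial

def poisson_P(t, lam, N):
    "Truncated Poisson matrix: P_t(j, k) = e^{-λt} (λt)^{k-j} / (k-j)! for k >= j."
    P = np.zeros((N, N))
    for j in range(N):
        for k in range(j, N):
            P[j, k] = exp(-lam * t) * (lam * t)**(k - j) / factorial(k - j)
    return P

lam, s, t, N = 0.5, 1.0, 2.0, 30
lhs = poisson_P(s, lam, N) @ poisson_P(t, lam, N)   # P_s P_t
rhs = poisson_P(s + t, lam, N)                      # P_{s+t}

print(np.max(np.abs(lhs - rhs)))  # zero up to floating point error
```

The maximum discrepancy is on the order of machine precision, in agreement with the algebra above.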
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## [Solution to Exercise 3.3](https://jstac.github.io/continuous_time_mcs/#markov-prop-3)\n", "\n", "Let $(X_t)$ be a Markov chain satisfying [(3.2)](#equation-markovpropd) and $X_0 \sim \psi$.\n", "\n", "When $n=0$, we have $\mathbf P_\psi^n = \mathbf P_\psi^0 = \psi$, and this\n", "agrees with the distribution of the restriction $(X_0, \ldots, X_n) = (X_0)$.\n", "\n", "Now suppose the same is true at arbitrary $n-1$, in the sense that\n", "the distribution of $(X_0, \ldots, X_{n-1})$ is equal to $\mathbf P_\psi^{n-1}$ as\n", "defined above.\n", "\n", "Then\n", "\n", "$$\n", "\\begin{aligned}\n", "\PP \{X_0 = x_0, \ldots, X_n = x_n\}\n", " & = \PP \{X_n = x_n \,|\, X_0 = x_0, \ldots, X_{n-1} = x_{n-1} \}\n", " \\\n", " & \quad \times \PP \{X_0 = x_0, \ldots, X_{n-1} = x_{n-1}\}\n", "\\end{aligned}\n", "$$\n", "\n", "From the Markov property and the induction hypothesis, the right hand side is\n", "\n", "$$\n", "P (x_{n-1}, x_n )\n", " \mathbf P_\psi^{n-1}(x_0, x_1, \ldots, x_{n-1})\n", " =\n", " P (x_{n-1}, x_n )\n", " \psi(x_0)\n", " P(x_0, x_1)\n", " \times \cdots \times\n", " P(x_{n-2}, x_{n-1})\n", "$$\n", "\n", "The last expression equals $\mathbf P_\psi^n(x_0, x_1, \ldots, x_n)$, which concludes the proof." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## [Solution to Exercise 3.4](https://jstac.github.io/continuous_time_mcs/#markov-prop-4)\n", "\n", "Here is one approach.\n", "\n", "(The statements involving glue are specific to this book and can be deleted\n", "by most readers. 
They store the output so it can be displayed elsewhere.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "hide-output": false }, "outputs": [], "source": [ "α = 0.6\n", "λ = 0.5\n", "b = 10\n", "n = b + 1\n", "states = np.arange(n)\n", "I = np.identity(n)\n", "\n", "# Jump matrix K from (3.11): from state 0 the next jump restocks to b,\n", "# while from state i >= 1 demand sends inventory down to some j < i\n", "K = np.zeros((n, n))\n", "K[0, -1] = 1\n", "for i in range(1, n):\n", "    for j in range(0, i):\n", "        if j == 0:\n", "            K[i, j] = (1 - α)**(i-1)\n", "        else:\n", "            K[i, j] = α * (1 - α)**(i-j-1)\n", "\n", "\n", "def P_t(ψ, t):\n", "    \"Push the distribution ψ forward over an interval of length t, as in (3.12).\"\n", "    return ψ @ expm(t * λ * (K - I))\n", "\n", "def plot_distribution_dynamics(ax, ψ_0, steps=200, step_size=0.1):\n", "    \"Plot the flow t -> ψ_t as bar charts stacked along the time axis.\"\n", "    ψ = ψ_0\n", "    t = 0.0\n", "    colors = cm.jet_r(np.linspace(0.0, 1, steps))\n", "\n", "    for i in range(steps):\n", "        ax.bar(states, ψ, zs=t, zdir='y', \n", "               color=colors[i], alpha=0.8, width=0.4)\n", "        ψ = P_t(ψ, t=step_size)\n", "        t += step_size\n", "\n", "    ax.set_xlabel('inventory')\n", "    ax.set_ylabel('$t$')\n", "\n", "\n", "ψ_0 = binom.pmf(states, n, 0.25)\n", "fig = plt.figure(figsize=(8, 6))\n", "ax = fig.add_subplot(111, projection='3d')\n", "plot_distribution_dynamics(ax, ψ_0)\n", "\n", "from myst_nb import glue\n", "glue(\"flow_fig\", fig, display=False)\n", "\n", "plt.show()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
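Option 1 (simulation) can also serve as a cross-check on the matrix exponential computation. The sketch below simulates the jump process directly and compares the cross-sectional histogram at time `T` with the exact flow from [(3.12)](#equation-distflowconst). For simplicity every firm starts at full inventory $b$ (a point mass rather than the binomial initial condition used above); the seed, horizon `T` and sample size `M` are arbitrary illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

α, λ, b = 0.6, 0.5, 10
n = b + 1
I = np.identity(n)

# Jump matrix K, constructed exactly as in the code above
K = np.zeros((n, n))
K[0, -1] = 1
for i in range(1, n):
    for j in range(i):
        K[i, j] = (1 - α)**(i - 1) if j == 0 else α * (1 - α)**(i - j - 1)

rng = np.random.default_rng(42)
T, M = 2.0, 50_000

def draw_X_T(x0):
    "Simulate the jump process from x0 and return the state at time T."
    x, t = x0, 0.0
    while True:
        t += rng.exponential(1 / λ)    # exponential holding time with rate λ
        if t > T:
            return x
        x = rng.choice(n, p=K[x, :])   # jump according to K(x, ·)

# Cross-sectional distribution of M independent copies at time T
draws = np.array([draw_X_T(b) for _ in range(M)])
ψ_hat = np.bincount(draws, minlength=n) / M

# Exact flow from (3.12), starting from a point mass at b
ψ_0 = np.zeros(n)
ψ_0[b] = 1.0
ψ_T = ψ_0 @ expm(T * λ * (K - I))

print(np.max(np.abs(ψ_hat - ψ_T)))  # small when M is large
```

By the law of large numbers, the histogram `ψ_hat` converges to the exact distribution `ψ_T` as `M` grows, in line with the discussion of Option 1 above.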

 On a technical level, right continuity of paths for $(X_t)$ implies condition 2, as proved in Theorem 2.12 of [[Liggett, 2010](https://jstac.github.io/continuous_time_mcs/zreferences.html#id8)]. Right continuity of paths allows for jumps, but insists on only finitely many jumps in any bounded interval.\n", "\n", "

 In the definition of $P_t$ in [(3.9)](#equation-poissemi), we use the convention that $0^0 = 1$, which leads to $P_0 = I$ and $\\lim_{t \\to 0} P_t(j, k) = I(j,k)$ for all $j,k$. These facts, along with the semigroup property, imply that $(P_t)$ is a valid Markov semigroup." ] } ], "metadata": { "date": 1634710782.653135, "filename": "markov_prop.md", "kernelspec": null, "title": "The Markov Property" }, "nbformat": 4, "nbformat_minor": 4 }