That means, for all h greater than or equal to 0 and t greater than or equal to 0, the distribution of X sub t plus h minus X sub t is the same as the distribution of X sub h. And again, this easily follows from the definition.

So this one -- it's the more intuitive definition, the first one -- that a stochastic process is a collection of random variables indexed by time. So it should describe everything. That's what you need in order to calculate everything else.

Now, Markov chains. These are a collection of stochastic processes having the property that the effect of the past on the future is summarized only by the current state. In most other stochastic processes, the future will depend on the whole history. Do you remember the Perron-Frobenius theorem? That's called a stationary distribution. There are Markov chains which are not martingales, and there are some processes which are neither of them. But these ones are more manageable.

And then the peculiar property a martingale has is that the expected value of the nth term, conditional on knowing all the previous values -- the expected value of Z sub n given the values of Z sub n minus 1, Z sub n minus 2, all the way down to Z sub 1 -- is equal to Z sub n minus 1. Note that Z sub n minus 1 back to Z sub 1 is a function of X sub n minus 1 down to X sub 1. If you're not given anything for 100 years back, and all you know is what your capital was 100 years ago, and if we think we're playing a fair game all of this time -- which is of course always a question -- then the expected value of what we have now, conditional on everything from 100 years back through recorded history, is just that last term.

Let me put it that way, because you rarely find an opportunity where you can play a fair game. Even if you try very hard to win money -- try to invent something really, really cool and ingenious -- you should not be able to win money. And that's what this is saying. But in expected value, you're bound to go down.

You can interchange expected value and function, with an inequality like this, if the function is convex -- that is, if all the tangents to the curve lie, not strictly below, but beneath the curve, so that wherever you draw a tangent, you get something which doesn't cross the curve. And this is what this curve says.

It's equal to 1, whatever value of r we choose. If X bar is less than 0, and if gamma of r star equals 0 -- r star is the r at which gamma of r equals 0 -- then r star is equal to 1 in this case. And that turns out to be 1. Now there's a lemma. Just look at 0 comma 1 here. Now you have 0 here instead of 1. This part is really irrelevant.

The probability of error given H equals 1 is the probability that the data looks like H equals 0 was the right hypothesis.

At time t plus 1, lots of things can happen. So, example three: if tau is tau 0 plus 1, where tau 0 is the first peak, then it is a stopping time. And in that case, it's more difficult to analyze. Think about what happens. It's the major problem there.

So the probability that Z sub n is equal to 0 is 1 minus 2 to the minus n. So for each n, if you calculate the expected value of Z sub n, it's equal to 1.
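To see those two numbers concretely, here is a minimal sketch in Python (NumPy assumed; the construction and names are my illustration, not from the lecture) of that double-or-nothing martingale: start at Z sub 0 equals 1 and, at each step, either double or drop to 0 with probability 1/2 each, so the expected value of Z sub n given the past is Z sub n minus 1.

```python
# Double-or-nothing martingale: Z_0 = 1; at each step Z_n is either 2*Z_{n-1}
# or 0, each with probability 1/2.  Then P(Z_n = 0) = 1 - 2^(-n) while
# E[Z_n] = 1 for every n, exactly as claimed above.
import numpy as np

rng = np.random.default_rng(0)

def simulate_doubling(n_steps, n_paths):
    z = np.ones(n_paths)
    for _ in range(n_steps):
        survive = rng.random(n_paths) < 0.5   # fair coin at each step
        z = np.where(survive, 2.0 * z, 0.0)   # double, or go to zero
    return z

n = 5
z = simulate_doubling(n, n_paths=1_000_000)
print("P(Z_n = 0) ~", np.mean(z == 0.0), "   exact:", 1 - 2.0**-n)
print("E[Z_n]     ~", z.mean(), "   exact: 1")
```

Notice the tension: almost every path eventually hits 0 and stays there, yet the expected value stays at 1 -- which is exactly what the gambling discussion keeps coming back to.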
Let me conclude with one interesting theorem about martingales. It's called the optional stopping theorem. We might as well define submartingales and supermartingales while we're at it -- I don't know whether I've ever seen stupider terminology than this. And since there are many very general theorems that hold for all martingales, you can then apply them to all of these special cases, which is kind of neat. But that one is slightly different. So before stating the theorem, I have to define what a stopping time means. And it turns out this is a very major problem as far as stochastic processes are concerned, because it comes up almost everywhere.

So the trajectory is like a walk you take on this line, but it's random. Let's say we went up. Just looking at values up to time t, you don't know if it's going to be a peak or if it's going to continue. What we're doing is following this preset procedure we've set up.

I mean, this is a game you often play -- double or nothing. When you win, you win $1. But when you lose, you lose $10. Even in this picture, you might think, OK, in some cases it might be that you always play in the negative region. You're going to play within this area, mostly. What you're doing is taking a chance -- where you're probably going to win -- of winning $1, and you're risking your life's assets for it, which doesn't make too much sense anymore.

We have two states, working and broken. So in general, if the transition matrix of a Markov chain has positive entries, then there exists a vector pi, equal to pi 1 up to pi m -- I'll just call it v -- such that Av is equal to v. So there is a largest eigenvalue, which is positive and real. So this is called the stationary distribution. And that will be the long-term behavior, as explained.

There's one other huge thing that we want to talk about. Now, somebody at the end of the lecture last time pointed out something, which says that when you do experiments and you keep on making observations until you get the data that you want, there's something very unethical about that. I'm not sure if there is a way to make an argument out of it.

For each observation, what you really want to know is f of Y given H, of y given 0, divided by f of Y given H, of y given 1. It's a sum of IID random variables, conditional on H equals 1 -- and H equals 1 sometimes. And therefore, the sums of these turn out to be this simple kind of martingale. If you take more observations, S sub n just changes. And what it turns out to be is a threshold rule. I mean, it's done in the text. So make sure you understand it. And then what it says is that the probability that S sub J is greater than or equal to alpha is less than or equal to e to the minus r star alpha.

So what happens then is that the expected number of tests you make under the hypothesis that H is equal to 0 -- now we're using Wald's equality rather than Wald's identity -- is equal to the expected value of S sub J given H equals 0, divided by the expected value of Z given H equals 0. That denominator -- it's the expected value of X, yes. They should be about the same. It just gave us a relationship between the expected value of S sub J and the expected value of J.
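Since the lecture doesn't fix particular densities, here is a minimal sketch of that threshold rule under an assumed model: f of y given H equals 0 is Gaussian with mean 0, f of y given H equals 1 is Gaussian with mean 1, both unit variance, and a threshold of 0 -- all of these are my choices for illustration.

```python
# Threshold rule for binary hypothesis testing, under an ASSUMED Gaussian
# model: f(y|H=0) = N(0,1) and f(y|H=1) = N(1,1).  Conditional on either
# hypothesis the log likelihood ratios are IID, so their running sum S_n is
# a random walk, and the decision compares S_n to a threshold.
import numpy as np

rng = np.random.default_rng(1)

def llr(y):
    # log f(y|H=0) - log f(y|H=1); for unit-variance Gaussians with means
    # 0 and 1, the quadratic terms cancel and this simplifies to 1/2 - y.
    return 0.5 - y

# Generate n observations under H = 1, then run the test.
y = rng.normal(loc=1.0, scale=1.0, size=100)
s_n = np.cumsum(llr(y))        # S_n: a sum of IID random variables

threshold = 0.0                # arbitrary illustrative threshold
decision = 0 if s_n[-1] > threshold else 1
print("decided H =", decision) # usually 1 here, but errors do occur
```

Under H equals 1 each term of S sub n has negative mean, so the walk drifts down and usually ends below the threshold; the occasional run that ends above it is exactly the error event whose probability the e to the minus r star alpha bound controls.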
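And going back to the two-state working/broken chain from a moment ago, here is a sketch of finding its stationary distribution the Perron-Frobenius way. The transition probabilities below are made-up numbers for illustration; the lecture doesn't give them.

```python
# Stationary distribution of a two-state Markov chain (states: 0 = working,
# 1 = broken), with invented transition probabilities.
# P[i, j] = probability of jumping from state i to state j.
import numpy as np

P = np.array([[0.9, 0.1],     # working -> working / broken
              [0.5, 0.5]])    # broken  -> working / broken

# Perron-Frobenius: the largest eigenvalue is 1 (positive and real), and the
# stationary distribution pi is the corresponding left eigenvector of P,
# i.e. an ordinary eigenvector of P transpose, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))       # pick the eigenvalue 1
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()
print("stationary distribution:", pi)      # ~ [0.8333, 0.1667]

# Long-term behavior: every row of P^n converges to pi.
print("P^50:\n", np.linalg.matrix_power(P, 50))
```

Writing v for that eigenvector of P transpose, this is exactly the Av equals v statement above.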
Now, let's talk about more stochastic processes. And a slightly different point of view, which is slightly preferred when you want to do some math with it, is this alternative definition: a stochastic process is a probability distribution over paths, over a space of paths. But if you think about it, you just wrote down all the probabilities.

And that's what happens in gambling. It's like a coin toss game. So at time t, at about the square-root-of-t scale, you will be inside this interval like 90% of the time.

The point is that this value can affect the future, because that's where you're going to start your process from. And X sub n is independent of all the other X's, therefore it's independent of all the earlier Z's.

This curve starts out here with a negative slope. I don't know how that "greater than 0" got in. So this whole thing in here is 0. And then what happens is that, for this quantity in here, I get an extra term of that sitting over there. So we get the same kind of quantity. But if you translate it by 1, over in this direction, what happens is that the probability of error is determined by this.

Even without all of that, so long as you have a model which tells you what the densities of these observations are, conditional on each hypothesis, you can define this. This quantity becomes the density of Y conditional on H equals 0. And you get a different probability density for H equals 1 than you get on the other hypothesis. You've averaged that over X, but you're left with Y because of the conditioning here. But you might make an error when H is equal to 0.

Then I want to make clear you understood it, because to really understand it, you have to go through the arithmetic yourselves at least once. And you want to make sure that you have that.

The word martingale comes from gambling, where gamblers used to spend a great deal of time trying to find strategies -- when to stop, when to start betting bigger, when to start betting smaller, when to do all sorts of things -- all sorts of strategies for how to lose less money.

So it's not a stopping time under this definition. But if you stop at either $100 or negative $50, that's still a stopping time.
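Here is a sketch of that last stopping rule, assuming a fair plus-or-minus $1 coin-toss game starting from $0 (the stakes are my assumption). Stopping the first time the capital hits +$100 or -$50 is a stopping time, and the optional stopping theorem then forces the expected capital at the stopping time to equal the starting capital, 0 -- which pins down the probability of reaching +$100 at 50/150 = 1/3.

```python
# Stopping rule: play a fair +/- $1 coin-toss game from $0 and stop the first
# time the capital hits +$100 or -$50.  Optional stopping gives E[Z_tau] = 0,
# hence P(hit +100) * 100 + (1 - P(hit +100)) * (-50) = 0, so P = 1/3.
import numpy as np

rng = np.random.default_rng(2)

def run_to_stop(n_paths=5_000, upper=100, lower=-50):
    capital = np.zeros(n_paths, dtype=np.int64)
    done = np.zeros(n_paths, dtype=bool)
    while not done.all():
        steps = rng.integers(0, 2, size=n_paths) * 2 - 1    # +1 or -1
        capital = np.where(done, capital, capital + steps)  # frozen once done
        done |= (capital >= upper) | (capital <= lower)
    return capital

z_tau = run_to_stop()
print("E[Z_tau]    ~", z_tau.mean(), "   theorem says: 0")
print("P(hit +100) ~", np.mean(z_tau == 100), "   exact: 1/3")
```

The rule qualifies as a stopping time because whether you have stopped by time t depends only on the capital up through time t -- unlike the first-peak rule, which needs a look at time t plus 1.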