MITOCW watch?v=o9nw0ubqveo

The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit

PROFESSOR: OK, folks. Welcome back. Hope you had a nice long weekend with no classes. You got caught up on all those problem sets that have been sneaking up on you. You enjoyed watching the Patriots and Tom Brady come back. Oh, sorry, I'm showing my local bias. Before we talk about today's topic, I want to take a second to set the stage. And I want you to stop and think about what you've seen so far in this course. We're coming up on the end of the first section of the course. And you've already seen a lot. You've certainly learned about fundamentals of computation. You've seen different kinds of data structures, both mutable and immutable, so tuples and lists, dictionaries, different ways of pulling things together. You've seen a range of algorithms from simple linear code to loops, fors and whiles. You've seen iterative algorithms. You've seen recursive algorithms. You've seen classes of algorithms. Divide and conquer. Greedy algorithms. Bisection search. A range of things. And then most recently, you start pulling things together with classes-- a way to group together data that belongs together along with methods or procedures that are designed to manipulate that data. So you've had actually a fairly good coverage already of a lot of the fundamentals of computation. And you're starting to get geared up to be able to tackle a pretty interesting range of problems. Today and Monday, we're going to take a little bit of a different look at computation. Because now that you've got the tools to start building up your own personal armamentarium of tools, we'd like to ask a couple of important questions. The primary one of which is how efficient are my algorithms? 
And by efficiency, we'll see it refers both to space and time, but primarily to time. And we'd like to know both how fast are my algorithms going to run and how could I reason about past performance. And that's what we're going to do with today's topics. We're going to talk about orders of growth. We'll define what that means in a few minutes. We're going to talk about what's called the big O notation. And we're going to begin to explore different classes of algorithms. Before we do that though, let's talk about why. And I want to suggest to you there are two reasons this is important to be considering. First question is how could we reason about an algorithm something you write in order to predict how much time is it going to need to solve a problem of a particular size? I might be testing my code on small scale examples. And I want to know if I'd run it on a really big one, how long is it going to take? Can I predict that? Can I make guesses about how much time I'm going to need to solve this problem? Especially if it's in a real world circumstance where time is going to be crucial. Equally important is going the other direction. We want you to begin to reason about the algorithms you write to be able to say how do certain choices in a design of an algorithm influence how much time it's going to take. If I choose to do this recursively, is that going to be different than iteratively? If I choose to do this with a particular kind of structure in my algorithm, what does that say about the amount of time I'm going to need? And you're going to see there's a nice association between classes of algorithms and the interior structure of them. And in particular, we want to ask some fundamental questions. Are there fundamental limits to how much time it's going to take to solve a particular problem, no matter what kind of algorithm I design around this? And we'll see that there are some nice challenges about that. So that's what we're going to do over the next two days. 
Before we do though, let's maybe ask the obvious question-- why should we care? Could be on a quiz, might matter to you. Better choice is because it actually makes a difference. And I say that because it may not be as obvious to you as it was in an earlier generation. So people with my gray hair or what's left of my gray hair like to tell stories. I'll make it short. But I started programming 41 years ago-- no, sorry, 45 years ago-- on punch cards. You don't know what those are unless you've been to a museum on a machine that filled half a room and that took about five minutes to execute what you can do in a fraction of a second on your phone. Right. This is to tell you you're living in a great time, not independent of what's going to happen on November 8th. All right. We'll stay away from those topics as well, won't we? My point is yeah, I tell old stories. I'm an old guy. But you might argue look, computers are getting so much faster. Does it really matter? And I want to say to you-- maybe it's obvious to you-- yes, absolutely it does. Because in conjunction with us getting faster computers, we're increasing the sizes of the problems. The data sets we want to analyze are getting massive. And I'll give you an example. I just pulled this off of Google, of course. In-- I don't have more recent numbers-- Google served-- I think I have that number right-- 30 trillion pages on the web. It's either 30 trillion or 30 quadrillion. I can't count that many zeros there. It covered 100 million gigabytes of data. And I suggest to you if you want to find a piece of information on the web, can you write a simple little search algorithm that's going to sequentially go through all the pages and find anything in any reasonable amount of time? Probably not. Right? It's just growing way too fast. 
This, by the way, is of course, why Google makes a lot of money off of their MapReduce algorithm for searching the web, written by the way, or co-written by an MIT grad and the parent of a current MIT student. So there's a nice hook in there, not that Google pays MIT royalties for that wonderful thing, by the way. All right. Bad jokes aside, searching Google-- ton of time. Searching a genomics data set-- ton of time. The data sets are growing so fast. You're working for the US government. You want to track terrorists using image surveillance from around the world, growing incredibly rapidly. Pick a problem. The data sets grow so quickly that even if the computers speed up, you still need to think about how to come up with efficient ways to solve those problems. So I want to suggest to you while sometimes simple solutions are great, they are the easy ones to write. At times, you need to be more sophisticated. Therefore, we want to reason about how do we measure efficiency and how do we relate algorithm design choices to the cost that's going to be associated with it? OK. Even when we do that, we've got a choice to make. Because we could talk about both efficiency in terms of time or in terms of space, meaning how much storage do I have inside the computer? And the reason that's relevant is there's actually in many cases a trade-off between those two. And you've actually seen an example, which you may or may not remember. You may recall when we introduced dictionaries, I showed you a variation where you could compute Fibonacci using the dictionary to keep track of intermediate values. And we'll see next week that it actually tremendously reduces the time complexity. That's called a trade-off, in the sense that sometimes I can pre-compute portions of the answer, store them away, so that when I try to do a bigger version of the answer I can just look up those portions. So there's going to be a trade-off here. 
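The dictionary-based Fibonacci the professor refers to can be sketched like this (a minimal memoized version; the code shown in the earlier lecture may differ in its details):

```python
def fib_memo(n, memo=None):
    """Fibonacci with a dictionary caching intermediate values.

    Trades space (the memo dictionary) for time: each fib(k) is
    computed only once, then looked up on later calls.
    """
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]          # already computed: just look it up
    if n < 2:
        return n                # base cases: fib(0) = 0, fib(1) = 1
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(50))  # fast; the naive recursive version would take ages
```

The pre-computed values stored in `memo` are exactly the "portions of the answer" mentioned above: a bigger Fibonacci number is assembled from stored smaller ones instead of being recomputed.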
We're going to focus, for purposes of this lecture and the next one, on time efficiency. How much time is it going to take our algorithms to solve a problem? OK. What are the challenges in doing that before we look at the actual tools? And in fact, this is going to lead into the tools. The first one is even if I've decided on an algorithm, there are lots of ways to implement that. A while loop and a for loop might have slightly different behavior. I could choose to do it with temporary variables or using direct substitution. There's lots of little choices. So an algorithm could be implemented many different ways. How do I measure the actual efficiency of the algorithm? Second one is I might, for a given problem, have different choices of algorithm. A recursive solution versus an iterative one. Using divide and conquer versus straightforward search. We're going to see some examples of that. So I've got to somehow separate those pieces out. And in particular, I'd like to separate out the choice of implementation from the choice of algorithm. I want to measure how hard is the algorithm, not can I come up with a slightly more efficient way of coming up with an implementation. So here are three ways I might do it. And we're going to look at each one of them very briefly. The obvious one is we could be scientists-- time it. Write the code, run a bunch of test cases, run a timer, use that to try and come up with a way of estimating efficiency. We'll see some challenges with that. Slightly more abstractly, we could count operations. We could say here are the set of fundamental operations-- mathematical operations, comparisons, setting values, retrieving values. And simply say how many of those operations do I use in my algorithm as a function of the size of the input? And that could be used to give us a sense of efficiency. We're going to see both of those are flawed somewhat more in the first case than the second one. 
And so we're going to abstract that second one to a more abstract notion of something we call an order of growth. And I'll come back to that in a couple of minutes. This is the one we're going to focus on. It's one that computer scientists use. It leads to what we call complexity classes. So order of growth or big O notation is a way of abstractly describing the behavior of an algorithm, and especially the equivalences of different algorithms. But let's look at those. Timing. Python provides a timer for you. You could import the time module. And then you can call, as you can see right down here. I might have defined a really simple little function-- convert Celsius to Fahrenheit. And in particular, I could invoke the clock method from the time module. And what that does is it gives me a number as the number of some fractions of a second currently there. Having done that I could call the function. And then I could call the clock again, and take the difference to tell me how much time it took to execute this. It's going to be a tiny amount of time. And then I could certainly print out some statistics. I could do that over a large number of runs-- different sizes of the input-- and come up with a sense of how much time does it take. Here's the problem with that. Not a bad idea. But again, my goal is to evaluate algorithms. Do different algorithms have different amounts of time associated with them? The good news is that if I measure running time, it will certainly vary as the algorithm changes. Just what I want to measure. Sorry. But one of the problems is that it will also vary as a function of the implementation. Right? If I use a loop that's got a couple of more steps inside of it in one algorithm than another, it's going to change the time. And I don't really care about that difference. 
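The timing recipe described here can be sketched as follows. One caveat: the `time.clock` method mentioned in the lecture was removed from Python in version 3.8, so this sketch uses `time.perf_counter`, its modern replacement:

```python
import time

def c_to_f(c):
    """Convert Celsius to Fahrenheit."""
    return c * 9.0 / 5 + 32

t0 = time.perf_counter()     # read the clock
result = c_to_f(100000)      # call the function
t1 = time.perf_counter()     # read the clock again; the difference is elapsed time
print("c_to_f(100000) =", result, "took", t1 - t0, "seconds")
```

Repeating this over many runs and input sizes gives the kind of timing statistics the lecture describes, along with all the problems it goes on to list.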
So I'm confounding or conflating implementation influence on time with algorithm influence on time. Not so good. Worse, timing will depend on the computer. My Mac here is pretty old. Well, at least for computer versions. It's about five years old. I'm sure some of you have much more recent Macs or other kinds of machines. Your speeds may be different from mine. That's not going to help me in trying to measure this. And even if I could measure it on small sized problems, it doesn't necessarily predict what happens when I go to really large sized problems, because of issues like the time it takes to get things out of memory and bring them back into the computer. So what it says is that timing does vary based on what I'd like to measure, but it varies on a lot of other factors. And it's really not all that valuable. OK. Got rid of the first one. Let's abstract that. By abstract, I'm going to make the following assumption. I'm going to identify a set of primitive operations. Kind of get to say what they are, but the obvious one is to say what does the machine do for me automatically. That would be things like arithmetic or mathematical operations, multiplication, division, subtraction, comparisons, something equal to another thing, something greater than, something less than, assignments, set a name to a value, and retrieval from memory. I'm going to assume that all of these operations take about the same amount of time inside my machine. Nice thing here is then it doesn't matter which machine I'm using. I'm measuring how long does the algorithm take by counting how many operations of this type are done inside of the algorithm. And I'm going to use that count to come up with a number of operations executed as a function of the size of the input. And if I'm lucky, that'll give me a sense of what's the efficiency of the algorithm. So this one's pretty boring. It's got three steps. Right? A multiplication, a division, and an addition-- four, if you count the return. 
But if I had a little thing here that added up the integers from 0 up to x, I've got a little loop inside here. And I could count operations. So in this case, it's just, as I said, three operations. Here, I've got one operation. I'm doing an assignment. And then inside here, in essence, there's one operation to set i to a value from that iterator. Initially, it's going to be 0. And then it's going to be 1. And you get the idea. And here, that's actually two operations. It's nice Python shorthand. But what is that operation? It says take the value of total and the value of i, add them together-- it's one operation-- and then set that value, or rather, set the name total to that new value. So a second operation. So you can see in here, I've got three operations. And what else do I have? Well, I'm going to go through this loop x times. Right? I do it for i equals 0. And therefore, i equal 1, and so on. So I'm going to run through that loop x times. And if I put that together, I get a nice little expression. 1 plus 3x. Actually, I probably cheated here. I shouldn't say cheated. I probably should have counted the return as one more operation, so that would be 1 plus 3x plus 1, or 3x plus 2 operations. Why should you care? It's a little closer to what I'd like. Because now I've got an expression that tells me something about how much time is this going to take as I change the size of the problem. If x is equal to 10, it's going to take me 32 operations. If x is equal to 100, 302 operations. If x is equal to 1,000, 3,002 operations. And if I wanted the actual time, I'd just multiply that by whatever that constant amount of time is for each operation. I've got a good estimate of that. Sounds pretty good. Not quite what we want, but it's close. So if I was counting operations, what could I say about it? First of all, it certainly depends on the algorithm. That's great. 
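The loop being counted can be reconstructed along these lines (a sketch; the function and variable names are guesses, and the tally in the comments follows the lecture's counting convention of roughly three operations per pass plus the setup and the return, giving about 3x + 2):

```python
def my_sum(x):
    """Add up the integers from 0 up to and including x."""
    total = 0                  # 1 op: an assignment
    for i in range(x + 1):     # each pass: 1 op to set i from the iterator
        total += i             # each pass: 2 ops, an addition then an assignment
    return total               # 1 op: the return

print(my_sum(10))  # → 55
```

So for x = 10 the count is about 32 operations, for x = 100 about 302, and for x = 1,000 about 3,002, exactly as the lecture tallies it.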
Number of operations is going to directly relate to the algorithm I'm trying to measure, which is what I'm after. Unfortunately, it still depends a little bit on the implementation. Let me show you what I mean by that by backing up for a second. Suppose I were to change this for loop to a while loop. I'll set i equal to 0 outside of the loop. And then while i is less than x plus 1, I'll do the things inside of that. That would actually add one more operation inside the loop, because I both have to set the value of i and I have to test the value of i, as well as doing the other operations down here. And so rather than getting 3x plus 1, I would get 4x plus 1. Eh. As the government says, what's the difference between three and four when you're talking about really big numbers? Problem is in terms of counting, it does depend. And I want to get rid of that in a second, so it still depends a little bit on the implementation. I remind you, I wanted to measure impact of the algorithm. But the other good news is the count is independent of which computer I run on. As long as all my computers come with the same set of basic operations, I don't care what the time of my computer is versus yours to do those operations on measuring how much time it would take. And I should say, by the way, one of the reasons I want to do it is not so much to know is it going to take femtoseconds or not, but rather to say if this algorithm has a particular behavior, if I double the size of the input, does that double the amount of time I need? Does that quadruple the amount of time I need? Does it increase it by a factor of 10? And here, what matters isn't the speed of the computer. It's the number of operations. The last one I'm not going to really worry about. But we'd have to really think about what are the operations we want to count. I made an assumption that the amount of time it takes to retrieve something from memory is the same as the amount of time it takes to do a numerical computation. 
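The while-loop variant he describes might look like this (a sketch, with names chosen to match the earlier for-loop example):

```python
def my_sum_while(x):
    """Same sum of 0..x, written with a while loop: ~4 ops per pass
    instead of ~3, since we now test i explicitly each time around."""
    total = 0
    i = 0                      # assignment done once, outside the loop
    while i < x + 1:           # extra op each pass: the comparison on i
        total += i             # 2 ops: addition, then assignment
        i += 1                 # 2 ops: we must advance i ourselves
    return total

print(my_sum_while(10))  # → 55
```

The answer is identical; only the per-pass bookkeeping changes, which is exactly why a raw operation count still leaks implementation detail.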
That may not be accurate. But this one could probably be dealt with by just agreeing on what are the common operations and then doing the measurement. So this is closer. Excuse me. And certainly, that count varies for different inputs. And we can use it to come up with a relationship between the inputs and the count. And for the most part, it reflects the algorithm, not the implementation. But it's still got that last piece left there, so I need to get rid of the last piece. So what can we say here? Timing and counting do evaluate or reflect implementations. I don't want that. Timing also evaluates the machines. What I want to do is just evaluate the algorithm. And especially, I want to understand how does it scale? I'm going to say what I said a few minutes ago again. If I were to take an algorithm, and I say I know what its complexity is, my question is if I double the size of the input, what does that say to the speed? Because that's going to tell me something about the algorithm. I want to say what happens when I scale it? And in particular, I want to relate that to the size of the input. So here's what we're going to do. We're going to introduce orders of growth. It's a wonderful tool in computer science. And what we're going to focus on is that idea of counting operations. But we're not going to worry about small variations, whether it's three or four steps inside of the loop. We're going to show that that doesn't matter. And if you think about my statement of does it double in terms of size or speed or not-- or I'm sorry-- time or not, whether it goes from three to six or four to eight, it's still a doubling. So I don't care about those pieces inside. I'm going to focus on what happens when the size of the problem gets arbitrarily large. I don't care about counting things from 0 up to x when x is 10 or 20. What happens when it's a million? 100 million? What's the asymptotic behavior of this? 
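The doubling question can be made concrete with the operation count derived for the summing loop. For a count like 3x + 2, doubling x very nearly doubles the count, and the approximation gets better as x grows, which is the asymptotic behavior being described (a small sketch; the 3x + 2 formula is the one tallied earlier):

```python
def ops_count(x):
    """Operation count 3x + 2 for the summing loop discussed earlier."""
    return 3 * x + 2

# Ratio of the count at 2x to the count at x: approaches 2 as x grows,
# so the constant +2 (and even the factor 3 vs 4) stops mattering.
for x in (10, 1000, 100000):
    print(x, ops_count(2 * x) / ops_count(x))
```

This is why orders of growth discard the small constants: whether a pass costs three operations or four, doubling the input still doubles the work for a linear algorithm.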
And I want to relate that time needed against the size of the input, so I can make that comparison I suggested. OK. So to do that, we've got to do a couple of things. We have to decide what are we going to measure? And then we have to think about how do we count without worrying about implementation details. So we're going to express efficiency in terms of size of input. And usually, this is going to be obvious. If I've got a procedure that takes one argument that's an integer, the size of the integer is the thing I'm going to measure things in. If I double the size of that inte