Scott Page: The experience of innumerable minds: Diversity in policymaking | Gerald R. Ford School of Public Policy
 
 

Scott Page: The experience of innumerable minds: Diversity in policymaking

January 27, 2010 1:21:26

Scott Page talks about diversity in public policy decisionmaking and shows that as policy problems grow more difficult the benefits of diversity become even more pronounced, provided that we agree on fundamental ends. January, 2010.

Transcript:

^M00:00:16
>> We're going to go ahead and get started. I'd like to welcome you. I'm Susan Collins, the Joan and Sanford Weill Dean here at the Gerald R. Ford School of Public Policy. And it's a great pleasure to have you here on behalf of the Ford School and our newly launched Center for Public Policy in Diverse Societies. I'd like to extend an especially warm welcome to Scott Page, who is our speaker for this afternoon. He is the Leonid Hurwicz Collegiate Professor of Complex Systems, Political Science and Economics here at the University of Michigan. He's also an external faculty member at the Santa Fe Institute. His research integrates a wide variety of disciplines, as you've just heard, and he is the author of The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies, which was published in 2007 by Princeton University Press. Scott has produced a very influential, policy-relevant body of research on the effects of diversity and variation in a number of complex systems, including economies, ecosystems and political institutions. He teaches and consults around the world for academic, non-profit and corporate audiences, and we're really delighted to have him here with us to deliver a public lecture on his own campus. Today's lecture represents the Ford School's contribution to the University of Michigan's 2010 Symposium in honor of the Reverend Martin Luther King, Jr. As you may know, the theme for this symposium this year is "I am, was and always will be a catalyst for change." Our topic seems particularly well suited to that theme, and we're here to explore the change potential of diversity: whether and how increasing the diversity of perspectives at the policy-making table can be a powerful catalyst for change. Today's event is just the second public event hosted by the diversity center, and I'd like to take just a moment to talk about the center.
We opened our doors this fall as the first-of-its-kind initiative designed to shed light on how public policy can most effectively navigate the opportunities and the challenges that arise as societies become increasingly diverse locally, nationally and internationally. A number of academic institutions have explored issues related to diversity through lenses such as social science, education, business and law, but the opening of our center here, the Center for Public Policy in Diverse Societies, enables the Ford School to be the first home to a university-based effort that's focused on the policy problems and issues associated with diversity. And we're very proud of that distinction. During this first year, we will continue to host distinguished speakers, and so we encourage you to visit our web page, and we will be circulating information about other events as they are planned. And with that, it is my great pleasure to welcome our speaker, Professor Scott Page.
^M00:03:12
[ Applause ]
^M00:03:17
>> Professor Scott Page: It's great to be here. I'm glad people came out on such a cold day. What I want to do is talk a little bit about some recent work I've been doing on diversity and complexity. So as Susan mentioned, three years ago I wrote a book called The Difference, which is about diversity. And at the same time, my co-author finished a book that he had been writing for a decade called Complexity. So we're really branching out and finishing a project on diversity and complexity. So what I'm going to do is talk about some of those ideas and how they relate to public policy. So the title of this talk borrows a quote from Emerson, which is, "knowledge is the amassed thought and experience of innumerable minds." And what I want to convince you of, hopefully, by the end of the day is how it is that diverse groups of people are so smart and why this is relevant to policy. But to start out, what I want to do is give a poly-sci 101 view of things, a backdrop of how we typically think about diversity in the realm of politics, right. So this is going to be sort of political science 101, diverse preferences. Then from there, I'm going to get into the stuff that sort of animates my research. I'm going to talk about simple things and difficult things and then eventually, complex things. And I'll talk about the roles diversity plays in trying to cope with difficult problems. And then I'll ponder a little bit about how diversity and complexity interplay with one another, and how it is that diversity, at least diversity in how we think about the world, impacts our ability to make sense of complex environments. And then I'll close with some thoughts about the new iPad and Detroit. [laughter] Okay, so politics as usual. Here's how we -- I'm not joking, by the way. So politics as usual. Here's how we think about diversity in the political realm: we think about diversity in terms of identity. We break people into groups.
Like whites, African-Americans, Latinos or the young people of America, and we look at how they differ in their political views, right. Maybe this gives you charts of how Democratic or liberal they are. What percentage voted for Obama? And what we see is differences across these groups. Other things we might do is look across these groups and ask, are there differences in levels of political involvement? Like how many people vote, that sort of thing. And again, if you look at white, black, Asian, Hispanic [inaudible] voter turnout across those groups, you see huge differences. So these are the sorts of things that traditional political science would study. So one of the things we do in traditional political science is decide that a way to think about diversity is to think about ideological diversity. So we sort of put people in this one-dimensional array between sort of left wing and right wing. So here are our current Supreme Court Justices. This is easier to do than all of Congress; I would have had to paste 535 pictures, and there aren't so many on the Supreme Court. So these are our Supreme Court justices arranged in terms of most conservative to most liberal, right, according to recent work by Andrew Martin at Washington University. So what you can do is you can basically figure out that if there are nine people, the person who's going to decide everything is the one in the middle. In this case that happens to be Kennedy, right. So what you can basically say is the law of the land is set by Justice Kennedy. And you can do this by looking across at their votes and seeing that in fact, most of the time, he's what we call the pivotal voter. You can do this historically as well, so here's the Supreme Court from 1959 on. Each line is a different judge, and the black line here is sort of the median judge, and this is how conservative they are. You can see at the end, this black line is Justice Kennedy filling in that median role.
So what we think of when we think of diversity is that people have different ideological views. Some are left wing, some are right wing. And what we get through politics most of the time is a somewhat moderate view, okay. Now, that's sort of the political science 101 view of things. When we get into graduate courses, it gets a little more complicated when we talk about cycles of preferences and that sort of stuff. What I want to do is take a completely different view of how to think about diversity in the political realm. And I'm thinking in the context of policy problems. So when we've got to do something like coming up with a healthcare bill, or figuring out whether we should bail out the banks, or figuring out what we should do to help people, we're confronting a problem. And what I think about now is people taking what, you know, Emerson called our innumerable minds, our diverse minds, and how we apply those diverse minds to try and find solutions to those problems. Now, when we think of a problem, we can categorize how hard it is. It can be simple, it can be difficult or it can be complex. But the point I'm really centering focus on here is that governments make policies. And if you're making a policy, one of the things you're really doing is trying to find a solution to a problem. So to start out, I'm going to give you a little bit of history in terms of how we thought scientifically about solving problems. And if you want to think about that, you start with this guy, Frederick Winslow Taylor. Now, there are sort of two versions of Taylor. One is of this great scientist; the other is that he's a charlatan, right. I'm not going to take an opinion on this, I'm just going to give you sort of a middle-ground view. But Taylor got this idea of what's called scientific management.
And that is, you could take a problem and quantify it in some way, and this led to sort of an adage that if you can measure it, you can manage it. And so one of the first problems that Taylor went after was to optimize the shovel, okay. So if you [inaudible] people shoveling coal, it's a question of how big your shovel should be. So what Taylor did is he created what biologists would call a shovel landscape. So on this axis, you've got the size of the shovel. Here the shovel head is incredibly small; it's otherwise known as a stick, okay. And at this end, you've got a huge shovel. And on this axis, what you've got is the efficiency of the shovel, okay. So with the stick you can't shovel anything at all, and if I've got a shovel the size of Texas, I can't shovel anything at all either, but as the shovel gets bigger and bigger and bigger, I get more efficient, until at some point it gets too heavy and I start hurting my back and I get less efficient. So if you plot this shovel landscape, what you get is a single-peaked function. And people who study difficult problems call this a Mount Fuji problem, because it looks like Mount Fuji. Incredibly easy to solve, and you get efficient answers. So if you look at what they call time-study people in organizations, people who do scientific management, generally you look for problems like this, you quantify them and you solve them, and then you have everybody do the optimal thing. Well, the problem is what happens when things become a little bit harder. These are what we call difficult problems. With difficult problems, the landscape no longer looks like Mount Fuji, right. Now, it's what we call a rugged landscape. And by a rugged landscape we mean one with lots of little peaks. So you can't just sort of move along until you get to a peak and stop, because if you do that you'd end up getting stuck at something that might not be [inaudible] so you've got to be more sophisticated.
So what I'm going to talk about today is how diversity allows us to be more sophisticated. But things can be even worse. In addition to being difficult, things can be complex. So when you think about something being complex, it means a landscape that dances, right. So you want to think of the landscape as not being fixed, but moving over time. So think of something like the stock market -- you can see that this is constantly changing. So it's a moving target. Now, I don't use complex just as a metaphor; it's actually mathematically defined. There's a wonderful book by Stephen Wolfram called A New Kind of Science that basically says, look, there are only four things a system can do. One thing it can do is it can be stable, right. It can just go to some stable thing, like an equilibrium. The second thing it can do, which is the [inaudible] in the upper right, is it can just be periodic. So it can go from black to white to black to white, like on and off -- day, night, that sort of thing. The third thing that a system can do is it can be just completely chaotic. If you change one thing, you can just get this sort of random spreading chaos. This is the butterfly flapping its wings in Asia, right, causing a tornado or hurricane in the Americas. And then the fourth thing it can do, which lies between two and three -- so this is not a new kind of science, it's a new kind of math, where four comes between two and three -- is that it can be complex. So it can be not quite periodic and not quite chaotic. So what is complexity then? When we talk about complexity, what we mean is one of two things -- there are sort of two classes of [inaudible] definition. One is what we call BOR -- Between Order and Randomness. So it's not ordered and it's not random; it sort of lies between those two things. The second way to think of it is that it's deep. That it's difficult to describe, it's difficult to evolve, it's difficult to engineer, or it's difficult to predict.
Okay, so something that's complex gets sort of hard to deal with. So let me explain these two things in a little more detail. BOR -- if I have a sequence that goes 0101010101, that's completely ordered; there's nothing complex. If I have a random mass of zeros and ones, as if I just flipped a coin, that's not complex either. It's just random. But if I have something that goes 01001100011100110101, there's a pattern there, but it's not a simple pattern, so this would be complex. Let's go back to the other definition, which is deep. This one was harder to explain, right. Since it's harder to explain, it counts as complex. Now, there's another way to see complex. Let me give you an example. Complex is probably best seen, not heard -- let's see if I can get this thing up. Oh, I've got to log in here.
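The ordered-versus-random contrast above can be made concrete with a small sketch (my addition, not part of the talk): compressed size is a crude, computable stand-in for minimal description length.

```python
import random
import zlib

def description_length(s: str) -> int:
    """Compressed size in bytes: a rough, computable proxy for minimal
    description length (true Kolmogorov complexity is uncomputable)."""
    return len(zlib.compress(s.encode()))

random.seed(0)
ordered = "01" * 500                                             # 0101... periodic
random_bits = "".join(random.choice("01") for _ in range(1000))  # coin flips

# The periodic string has a short description ("repeat '01' 500 times"),
# so it compresses far better than the patternless one.
print(description_length(ordered), "<", description_length(random_bits))
```

Note the limitation: a compressor separates ordered from random, but by itself it cannot certify something as "complex" in the BOR sense, which requires being far from both extremes.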
>> [Inaudible]
>> Carl don't look. [Laughter] Uh-oh. I did that for Carl. [Laughs] Everybody's going to get their grade changed in about five minutes here. [Laughs] I didn't -- okay.
^M00:12:14
[ Silence ]
^M00:12:20
>> So this is a street scene from India. So I want you to think of this -- just keep these two definitions in mind: one is between order and randomness, right; and the other is that it's difficult to describe or predict.
>> [In unison] [Laughs]
>> Okay, so as you watch this, there's actually a dramatic conclusion, so we'll work our way through the whole thing. So first you're going to see these structures that start to emerge. There's this structure sort of forming on the road and they're stuck, and now they're going to move en masse -- they're going to move across, okay.
^M00:12:53
[ Silence ]
^M00:13:05
>> So you can almost think of this as one giant particle now. It's moving across this way and then blocking the path.
^M00:13:10
[ Silence ]
^M00:13:17
>> And what you want to keep in mind is that this clearly is not ordered, right, but it's also not random, right. There are structures, and in the short range, you can sort of predict the sorts of things that are going to happen. Like, here, you see this beautiful move by the scooters, right, coming across [laughs] coming across [inaudible] Now, here's the [inaudible] watch the white car in the upper right. It's going the wrong way down a one-way street -- I think it might be Carl. I'm not sure. [Laughs] But watch him, he's [inaudible] saying, look, I know it's a one-way street [laughs] but it's complex enough that maybe nobody'll notice if I just sort of drive down here [laughs] the wrong way, and I'll sneak in with this other group, okay. But it doesn't just join the other group. It takes a while, but it's sort of fun to at least watch this car.
^M00:14:13
[ Silence - laughs ]
^M00:14:20
>> Now, it's like, there we go. That's a good shortcut. So [laughs] that's the movie being worth a thousand words. So that's what we think of as being complex. Now, the other way to think of it is this deep notion, right. So on difficult to describe, there's a notion called minimal description length, which is just how many words it takes to describe something. The more words it takes, the more complex it is. There's also a notion called thermodynamic depth, which asks, if you're going to evolve this thing, how long would it take to evolve? There's something called logical depth, which says, if you're going to engineer it, if you're going to build it, how long would it take? And then the last one -- this is the one that interests me -- is Jim Crutchfield's statistical complexity; he used to be a post-doc at Michigan. That's how big a machine you would need to reproduce the same pattern -- so how hard is this thing to predict? Okay, the reason I want these formal definitions is that if we look at the challenges the world faces from a public policy standpoint, almost all of them are complex, not just in some sort of metaphorical sense, but in an actual mathematical sense. So here's a list -- I went to like five different websites that list, you know, huge problems facing the world, so this is like an intersection of those lists. If you go down and look at these things -- transportation, water, species extinction, poverty, education -- the only ones that aren't complex are things like peak oil. Peak oil gets the resource extraction [inaudible] so even if it's just oil in the ground, you've got to decide at what rate we're going to take it out. So that's an engineering problem, right. But every one of these other things -- climate change, epidemics, terrorism, right, economic fragility -- all of these problems are complex in a classic sense, right.
And so what we want to do is understand how we deal with these complex problems. All right, but before we can get to complexity, first we've got to get through difficult. So the first thing I want to talk about is how we use diversity to get around difficult problems. A classic difficult problem, one that we talk about a lot within the academy, is putting people -- so far, only men -- on the moon, right. This involves all sorts of different problems, and we used all sorts of different people to solve those problems. This is a nice example, a good exemplar of how we can have diverse groups of people solve something, but it's sort of hard to quantify. So it's better to actually look at newer things that have been [inaudible] where we can actually measure how diversity has been used to solve problems. So this is a guy named Timothy Gowers; he's a mathematician. He's a winner of the Fields Medal, and among mathematicians, you know, he's one of the 10 or 20 best mathematicians of the century. A brilliant person. A couple of years ago, Gowers got a whole bunch of publicity because he said, look, I'm a pretty smart guy as mathematicians go. I've got a lot of fame, I've got the Fields Medal and all that sort of stuff, but I think we can actually do better mathematics if we did it in diverse teams. And so he said, I'm starting this project called the Polymath Project, where I'm just going to pose -- so let's do one to start -- I'm just going to pose this really hard problem, and I'm going to see if a group of us can solve it. So he said, here's this thing called the Hales-Jewett theorem. Now, the Hales-Jewett theorem is interesting, because they actually had a proof of it, but nobody could understand the proof. What do I mean by that?
The proof was so long, right -- it required all these parts -- that no one individual actually could have, over a lifetime, read the whole proof and understood it. So the question is, how do we come up with a shorter proof? The interesting thing about this theorem, and the reason why it's sort of a fun one, is that it basically boils down to this: how many boxes do you have to remove to make tic-tac-toe impossible? Now if there are only 3 rows and 3 columns, right, that's pretty easy to figure out. But if you've got N rows and N columns and you've got C different players, suddenly it becomes a much harder problem. All right, so the formal statement of the theorem goes something like: if you've got an N by N by N cube and you've got C colors, so C players, how many cells do you have to remove so that no one can possibly win? So in this case, it's easy, right? You just remove 3 of them and no one can win, but the bigger problem is really hard. So what happens is he posts this thing and a whole bunch of mathematicians sort of look at it; eventually 27 different mathematicians started playing with it, and it only took 37 days for them to find a short, workable proof of this theorem. Right, and what's nice about this is that you can go back and look at it and see how different people had different representations of the problem, different people had different tricks, and collectively, they were able to come up with something that even someone as brilliant as Gowers couldn't solve. So if you look at that particular example in detail, what you find is sort of 2 things that drive this. There were differences in how people represented the problem, which we're going to call perspectives, and there were differences in the little tricks people used to try and solve the problem, which we're formally going to call heuristics. So let's look at the first of these first.
Remember we had this landscape idea. So suppose I have the landscape of a problem, and I say, here's how I encode things and here's their value; there are going to be what I call local peaks. So I'm going to get stuck at, say, possibly A, B or C, right? Well, somebody else may encode this thing differently. So they might get stuck at A, B, D, E or F, right, because they've got a different representation. So if the two of us work together, well then the only places that the two of us can get stuck are A and B. Now it doesn't mean that we're necessarily going to get it right; we can both get stuck at B. But the important thing here is, right, on my own I could have gotten stuck at C, right? And if we bring in somebody else who sees the problem differently, C isn't a place where they would get stuck. So there's this big advantage of having these different representations, because we have different peaks. So one of the things when you look at the Polymath Project and you read the thread, right, of people and the sorts of ideas they had, what you see constantly is people saying, here's a different way to frame the problem. And by framing the problem a different way, they sort of created different sets of peaks. Now the second way we solve difficult problems is by using heuristics. So heuristics are little rules of thumb we have for solving problems. So when you take an IQ test, they'll ask you questions like this: fill in the blank, 1, 2, 3, 5, blank, 13, and the answer here is 8, right? And you can do this: 1 plus 2 is 3; 2 plus 3 is 5; or you can subtract: 13 minus 8 is 5; 8 minus 5 is 3; 5 minus 3 is 2, that sort of thing. When they give these IQ tests, they'll also ask you this one typically, right, and this is just to sort of show, you know, can you see how the rule works, right, and this one is just squares. Right? Then they give you this one, which is a hard one.
And I've got 2 good jokes for this. One of my colleagues saw this and said, this is easy -- these are the years that Boston won the pennant. [laughter] The other nice joke on this one is that I presented this at the World Bank and nobody got it, which is a bad sign, but then I was at a middle school in Detroit and 2 kids got it right away, which is very nice. So in the long run, my money's safe. That's my view. The answer to the ultimate universal question, if you read The Hitchhiker's Guide To The Galaxy, is 42, and the answer to this one is 42. And I put this up not because I want to say here's a hard question, but because I want to show that this is something that's maybe the key to economic growth, if you believe Weitzman. So how do we solve this one? 2 minus 1 is 1, which is 1 squared; 6 minus 2 is 4, which is 2 squared; 42 minus 6 is 36, which is 6 squared; and 1806 minus 42 is 1764, which is 42 squared. Why does that matter? What was the first one, which was easy? Subtract. What was the second one, which was easy? Square. What was the third one, which was incredibly hard? Subtract and square. Right? So to solve this third problem, which is really hard, you just have to combine the first two heuristics. So people who study heuristic development talk about superadditivity in heuristic space. That's one of these fancy academic words, but if you think about it, what it means is if I've got one heuristic, subtract, and another heuristic, square, I've also got for free a third heuristic, subtract and square. Now Weitzman, who is an economist at MIT, basically says, look, this is where growth comes from. Okay? So one of the leading theories of economic growth is what he calls a recombinant theory of growth. Okay, suppose you have 20 tricks. Well if I have 20 tricks, that means 190 pairs of tricks, 1140 triples of tricks and 4845 quadruplets of tricks.
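The arithmetic in this passage can be checked in a few lines (my sketch, not from the talk): the hard sequence is the combined "subtract and square" heuristic, so each term is the previous term plus its square, and the trick counts are binomial coefficients.

```python
from math import comb

def next_term(a: int) -> int:
    """Combined heuristic: the next term equals the previous term plus its
    square, so consecutive differences are perfect squares (1, 4, 36, 1764)."""
    return a + a * a

seq = [1]
for _ in range(4):
    seq.append(next_term(seq[-1]))
print(seq)  # [1, 2, 6, 42, 1806]

# Weitzman-style recombination: 20 tricks yield many multi-trick combinations.
print(comb(20, 2), comb(20, 3), comb(20, 4))  # 190 1140 4845
```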
So there are all these combinations of things, right, once you've got a small handful. So I should turn the microphone over to my colleague John Holland at this point, right, because he's spoken in this very room. John had maybe not this exact same slide, but if you look at the combustion engine, it's just a combination of tricks. And it happens to be a combination of tricks where we had all the parts long before we had the engine, right? It just took someone figuring out how to combine these different things, right -- the piston, you know, the little firing mechanism, right, and the jet to pump the gas in -- to create the combustion engine. If you look at the bike, right, we had the bike for 16 years before somebody thought, let's put a chain on the bike. We had gears and we had the bike, and there's a 16-year gap. And what is probably the worst example of this, or maybe the best example, is we went, you know, by one estimate, 2000 years before somebody figured out to combine the poached egg and the toaster [laughter] right, and this, by the way, sold over a million units in its first year at Target. Right? There it is, right -- this sort of combination of heuristics. Now, the examples are nice, but you can quantify this stuff. So there's a company called MathWorks, which makes MATLAB, and every month they run a contest where they basically say, here's a programming contest, and what you have to do is write code to solve a problem. So over time -- this is the date on this axis -- you follow this thing, and this is the speed of the program, so faster is better. So if you look at all this code, each one of these little blue dots is someone submitting some code, right, and that's the time it takes for the code to run.
If you look at this code, what you see is these little tiny improvements, and you see occasional big jumps. And what you find is, the little tiny improvements are people who know, like, one little trick, one little heuristic -- like how to write for loops or if statements, or how to invert a matrix to do a calculation faster, something like that. It's all these little heuristics combining so that we get this sort of massive improvement. When you see the big jumps, those are people who had different perspectives. Those are people who had a different way of representing the problem, right, and so it leads to a larger jump. Now from this sort of idea you can actually construct a business model, right. And the Obama Administration has tried to do some of this stuff within the government. So there's a company called InnoCentive which basically does the following. If you have a problem in the pharmaceutical industry or in the chemical industry that you can't solve, you can just post it. You can just say, okay, here's our problem, we can't solve it, and we'll give you $20,000, $50,000, $100,000, $3,000, you know, whatever, to solve this problem. Right now there are 150,000 people signed up to be solvers on this thing. They've been given over $3 million in awards, and the success rate is hovering around 40%. Now Karim Lakhani at Harvard Business School studied in depth which problems get solved and which ones don't, and what's intriguing is that the ones that get solved are the ones that are framed in such a way that it's not clear what discipline you should use to solve them. So if they say, this is a chemical engineering problem, it typically doesn't get solved, because that means the chemical engineers already couldn't solve it. But if you don't say what it is, then maybe some crazy x-ray crystallographer looks at it, right, and they solve the problem.
Now, an example of that: one of the problems Procter & Gamble had was that they were having trouble getting fluoride powder into tubes without making a mess, so they posted, we can't put fluoride powder in tubes without making a mess. Some guy who is an electric guitarist on the side, who knows a lot about electricity, went on and said, oh, that's easy: charge the fluoride, give the opposite charge to the tube, and it will run inside. So he just emailed that back and made $25,000 in 14 seconds. [laughter] Right? And the people at the company, I think, looked for the nearest wall where they could bang their heads. Right? What's interesting about this, right, is that by exposing the problem to lots of diverse minds, you get different ways of representing the problem. They hadn't thought of this as an electrical problem at all. Right? And you can solve it. So where I come into this story is, I started thinking about taking these ideas of diverse perspectives and heuristics from computer science and psychology and trying to think about how this works in a problem-solving economy. So with Lu Hong, who is a colleague of mine at Loyola University, we did the following experiment. We did this sort of mathematically and then computationally. We created all these little agents on the computer and had them solve problems, okay, on the little landscapes, and they got stuck on the little peaks, and we ranked them by how smart they were. Now all these agents had to be fairly smart, and then we created 2 groups: one group with the best 20 agents and another group of 20 random agents. And we had these groups work collectively, right, like teams, until they got stuck. Now the IQ view of this is that we had some sort of alpha group, right, and then we had a diverse group.
And so the alpha group were all people who did really well on their own; the diverse group included some people who maybe didn't do as well on their own. But what we found is that the diverse group almost always outperformed the alpha group [inaudible] by a substantial margin. And when we say "almost always" here, we mean it in the mathematical sense, so that means with a probability of one, right. So it's possible that they don't, but given the limits we imposed, it happens with probability 1. Okay? And this was eventually published a few years ago. Now the reason why this is true, if you think about people having perspectives and heuristics as opposed to IQs, is that the group of really smart people all had very similar tools, and the diverse group -- and here, because I'm from Michigan, I'll trash the state of Illinois, like I always do -- this group had people, some of whom had the right tools, and others who had tools that maybe on their own weren't that worthwhile. So what is an example here? Let me clarify this thing. An example here is, suppose you have people running the world's money supply. Right. And you're trying to figure out what they should do. Well, what tools would you want them to have? You'd probably want them to have Ph.D.s in economics; you'd probably want them to have knowledge of statistics; you'd probably want them to have knowledge of the world banking system, right? So those would be tools A, B, C and D. What would be tools I and L and E and Z? These might be people who understand network robustness. So if I took one of my friends in complex systems who studies, you know, network theory, mathematical network theory, would I want them running the world's money supply? No. Would I want them running my checkbook? No. All right.
However, if I've got 60 people running the money supply, would I want to take the 60th economist out of the pool and add 1 person who knows something about networks? Probably, right. So that's why the diverse group is doing better. Now, since this is like a mathematical theorem, there's going to be conditions that have to hold. So the first condition, which we'll call the calculus condition, is that the problem solvers have to be smart. We call this the calculus condition because when they write their landscapes, it's got to be the case that they can find the local optima. So, in a sense, we're assuming that they can take derivatives. The second condition is that we've got to be drawing from a diverse set of people. If we draw from a very small set, then obviously you take the best ones. And the third one is an interesting one. The problem's got to be hard. If it's finding the optimal shovel, anybody can solve it; you don't need a diverse team. Okay. Well, here's where things get more difficult. So if you have a difficult problem (ironically, it's a bad choice of words), it's fairly straightforward. You just want to get diverse minds on it and have them solve it. And this is sort of how, you know, most think tanks, most consulting companies and most universities work, right; we try and get diverse heads to try and solve problems. When you move to complex problems, though, they become more difficult, and the reason they become more difficult is because of the fact that you don't know the answer. So you think about health care policy, right. People are saying, why don't we do X, why don't we do Y, why don't we do Z. We have no idea what X or Y or Z is going to turn out to do. Right? Now in the old days, they had it easy. Right. 
You could just go to Delphi, and there would be this woman sitting on a 3-legged stool, right, and at Delphi there is this sort of, some sort of like thing coming out of the ground that makes you a little bit crazy, so typically she would be sort of in an affected state, right, and you would bring a goat and sacrifice it first, by the way, right. And then she would say some crazy thing and then some priest would interpret that for you and then you'd know what was going to happen. Right? Well, we can't do that with health care policy, and so we're sort of stuck. But there's been lots of ways over time where you can confront complex things. This is, by the way, a brilliant book, "The Future of Everything" by David Orrell; these are the ways over time that we have tried to predict the future, right. So stars and planets, rolling dice, smoke and fire, flights of birds, magnum forces, guessing. And now we're in the new realm of things we're using called "models." [laughter] Right? But...and one wonders at some point if models won't be as funny as tea leaves and coffee grounds. Right? But this is sort of where we are now. Okay? But here's the rub. Individuals and their individual models aren't very good. Right? As Bernanke found out. So there's this wonderful book by Phil Tetlock called "Expert Political Judgment" where he basically, over a 20 year period, coded thousands of predictions by experts, and what he found is that experts that take extreme ideological positions are just a little bit worse than random darts thrown at a dartboard. So if someone's got a strong ideological opinion, you're better off not listening to them. Right. And just rolling some dice. Those are the people he calls "hedgehogs." 
People he calls "foxes" are people who are sort of much more diverse in their thinking, and individually, they're only a little bit better, right, than throwing darts at dartboards, when you get to complex problems. Okay. But here's the important thing now. If he looks at the individuals, they're not that good, but if he actually takes the collection of them, they actually do reasonably well. They don't do incredibly well, but they do reasonably well. So when we have a problem that's complex, we don't know how things are going to unfold, we're going to have to have some way of making some sort of prediction or forecast of what the complex outcome is going to be. So one thing people have advocated doing, and I'll talk a little bit about this, is turning to crowds, right, and asking, is there somewhere that collections of people, like collections of experts, teams of diverse people, can predict the future. So there's a book written a few years ago by a friend of mine named Jim Surowiecki called "The Wisdom of Crowds," okay, and he begins this book with a story about the 1906 West of England Fat Stock and Poultry Exhibition: 787 people guessed the weight of a steer; average guess, 1187 pounds; actual weight of the steer, 1186 pounds. They're off by a pound. Okay, several [inaudible] totally amazed by this. But I tease him, that's because he went to Yale and lived in New York, he's never seen a cow. [laughter] Okay, I used to own 9 cattle; it's not that hard to be within like 50 pounds. Cows are just like really big people, you know, 5 times the size of people; if you can be within 10 pounds on a person, you'd be within 50 pounds on a cow. Okay. So 50 pounds isn't that amazing; a pound is amazing. But there's other examples of sort of wise crowds, right. 
And so if you look at the Iowa Electronic Markets, which are used to sort of predict political outcomes, this is the last presidential election, the Iowa Electronic Market said Obama should get 53.5% of the vote; he actually got 52.1% of the vote, and the final polls ran around 55%. All right, so the Iowa Electronic Markets are sort of freakishly accurate. So one of the things we can do is we can sort of say, well, crowds are wise; let's just give things over to crowds, but that doesn't make much sense. If you're a scientist, what you want to do is you want to try and understand, what is it that makes a crowd wise. Well, if you think about how people make predictions, what they do is, what psychologists will tell us and also what we do when we run regressions is we create variables or categories, right. So if I mention something about the United States, you might say, well, I can think of the United States in terms of different time zones; you might also think of the United States in terms of sort of the traditional regions, right. How you frame a problem has huge implications for how you think about things and how you work within that realm. So let me describe the case of sort of 2 different companies, and this will get into sort of, just how important these interpretations are. Both of these companies serve liquid refreshments. One of them is coffee; the other one is beer. One of them divides their world like this; the other one divides their world like that. One of them is really good at what they call blocking and tackling or as Paul Callant [phonetic] and Mussolini both said, "making the trains run on time." Right. [laughter] The other one, oops, let's get this back. The other one is really good at meeting the preferences of their consumers. Right. Well, it turns out this one is really good at making the trains run on time, right. Because everything is in the same time zone and everything is fine. 
But they're terrible at meeting the preferences of their consumers, why? Because people in Texas drink very different stuff, right, than people in Minnesota, especially this week. Right? This other company is really good at meeting the preferences of their consumers, right, because they've broken it down into these sort of taste regions, but they're terrible at making the trains run on time because they've got a couple of different districts that sort of cross time zones and all sorts of other stuff. Right, so how we frame things has huge implications for sort of how things play out. Now what's interesting is we tend to think of these things as sort of being rational and model-based, but if you have a bunch of people, here's the thing that's interesting about policy. When you talk about a policy, we typically can't use data; there's no past data. Like in health care, we can sort of say, well, here's what's happened when other countries have gone on the single payer system, but we don't have lots and lots of past experiences with 50 different health care policies to say what's going to happen, so we sort of have to use our own intuition. So when people think about the world, what they do is they categorize things based on their own experiences. So if you ask a political scientist, when I asked one of my poli sci colleagues, tell me about the state of West Virginia, this is how they would parse it, right; they would say there's 3 congressional districts and they would probably tell me things that are really interesting to them [laughter] right, about those congressional districts, right? If I asked someone in food service, which I did, this is how I got this graph, about West Virginia, they would say, oh, here's a really interesting thing about West Virginia. You get a hot dog, there's a giant slaw region down here [laughter] and there's a no-slaw finger, right, that sticks up at the top, right. 
So depending on what you do, right, you're going to parse the world in different ways. And as Borges said, you know, sort of beautifully in this wonderful essay on "The Analytical Language of John Wilkins," here are his categories of animals: those that belong to the emperor, embalmed ones, those that are trained, mermaids, fabulous ones, others, right, those that tremble as if they were mad, right, and those that have just broken a flower vase. So I think there's lots of ways we can categorize the world, just like there's lots of ways you can frame something, like health care policy, and these different ways we frame things, these different sets of life experiences, these different identities give us these different categories. Why does that matter? How does this relate to this whole notion of sort of diverse complex policies? Well, there's a book written in engineering called "The Spherical Cow" and it says, suppose you're trying to predict not the weight of a cow, but how much leather you can get from a cow. Well, that's a difficult thing. If you've ever tried to take the integral of a cow [laughter] right, it's not easy. That's like Carl's [phonetic] take home final for his math class, but it's a very hard thing to do. So he says, instead you should just imagine a spherical cow; we all know like the [inaudible] of a sphere and then you can make a reasonable prediction. And if you do that, right, you do get a reasonable answer. What I'm here to say is this: you do better if you also construct the gateway cow. Right. So if you do a spherical cow and a gateway cow, you get 2 different predictions with 2 different ways of representing the problem, and collectively, you get a better prediction. Right. So this plays out in policy. 
So after we had this sort of slight problem with financial markets in the fall, [laughter] the IMF said, wow, you know, we should make sense of this, so they issued something called the Global Financial Stability Report, and in the Global Financial Stability Report, they have this wonderful language where they basically say, you can't understand this with just a spherical cow; you also need the gateway cow. They didn't quite say that, but they actually said, you need 4 models; we can't explain this with 1 model, but we'll show it to you in 4 models. So one of their models is what they call quantile correlations, where they said, you might think the way to look at how fragile the system would be is to say, how correlated are this firm's profits with another firm's profits. But that's meaningless, in a way. What really matters is how correlated this firm's returns are with this other firm's returns when this firm was doing really, really badly. Right. Because you don't care in general if they're correlated; you care if they're correlated in their bottom fifth. So you basically look at their worst fifth of days and ask how they're correlated, and when you do this, you see things, you see AIG and these numbers that talk about how correlated they are. You see huge numbers between AIG and everybody else, right. And you see tiny numbers between Lehman Brothers and everybody else. Which basically suggests Lehman had to die; AIG had to be saved, right, at least according to this model. Lehman Brothers probably doesn't like this model much, but right, that's [inaudible] Another model they used is they said, look, we can construct a balance sheet domino model where, for each country, you sort of look at how much money they've got in other countries, and then you say, let's suppose we knock out country A, and then let's ask how many other countries fail. Right. So instead of a firm level model, this is a country level model. 
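The "correlated on the worst fifth of days" idea can be illustrated with a small sketch. The return series below are fabricated for illustration (one firm only moves with the other on the other's crash days), and the IMF's actual methodology is more involved than this; the point is just that the tail correlation can be much larger than the everyday correlation.

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def bottom_fifth_corr(a, b):
    """Correlation of b's returns with a's, restricted to a's worst fifth of days."""
    cutoff = sorted(a)[len(a) // 5]
    worst = [(x, y) for x, y in zip(a, b) if x <= cutoff]
    xs, ys = zip(*worst)
    return pearson(list(xs), list(ys))

# Fabricated daily returns: firm_b only moves with firm_a on firm_a's crash days
random.seed(1)
firm_a = [random.gauss(0, 1) for _ in range(1000)]
firm_b = [x + random.gauss(0, 0.3) if x < -1 else random.gauss(0, 1) for x in firm_a]

print("overall correlation:", round(pearson(firm_a, firm_b), 2))
print("worst-fifth correlation:", round(bottom_fifth_corr(firm_a, firm_b), 2))
```

On data built this way, the worst-fifth correlation comes out well above the overall one, which is the AIG-versus-Lehman distinction the speaker describes.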
And I'm sure, I was joking about this with a teacher in my undergraduate class, I'm sure they pressed the Dubai button sometime, right [laughter] and saw nothing happened and they said, well, too bad, right. If it looked like this after they pressed the button, there probably would have been all sorts of interventions, right? So the point is, even if you're trying to backcast and figure out what's going on, right, what you want to do is not just have one model. Right? You want to have lots and lots of different models. So the work I've done on this, some work with Lu Hong, again, what we found is that if you construct the best model you can, given how you've parsed West Virginia, so if I look at it like a food service person and Ken looks at it like a political scientist, if we take the best models that we can, then the more diverse our interpretations, the more diversely we've parsed the state, the more negatively correlated our predictions are going to be. Right. So if we want our predictions to be different, one way to get our predictions to be different is for us to sort of parse the world differently. So why is diversity so important in trying to make sense of a complex world? Well, if we parse the world differently, if we're different in the categories we use, then our models are going to be different, our predictions are going to be different. Why does it matter that our predictions are going to be different? Because the following theorem holds, and this is a great theorem because 3 people got tenure for this theorem: an economist, a computer scientist and a statistician. This was before the Internet, so no one knew anybody else had proven it, so it's all within like 2 years of one another. 
But this theorem basically shows the following: the crowd error equals the average error minus the diversity of the predictions. So here's how far the crowd is off, squared; here's how far the individuals are off, just averaged across individuals; and here's how diverse the individuals' predictions are. Now you can write more elaborate versions of this, in what's called the bias-variance decomposition formula, but this is just sort of the simple version. So I've got something that's really complex and I don't know how it's going to play out, and I've got people making predictions; how far off the crowd is is going to depend on sort of how smart the people are, that makes sense, but also on how different they are. And what we just saw, right, what Lu and I showed, is that a way to get difference is to have different categories, different interpretations, different ways of seeing the world. So we can play this out back with Surowiecki's cattle example: the crowd is only off by a pound; remember, these are squared errors. The average person was off by 56-57 pounds, right, and the reason they happened to be right is because they were diverse. And Surowiecki has a bunch of examples in his book, there's a whole bunch of examples, you know, where you can find the crowds being accurate; in every single case it looks like this. It's not that everybody in the crowd just nailed it; it's the case that they happened, for whatever reason, to have diverse ways of seeing it. So what has this meant in practice? What you see is you see some governments, the U.S. Army is in here, and you see a whole bunch of companies here. A whole bunch of companies have started internal prediction markets, where they have people within the company make predictions, right, rather than having just some sort of economist or market researcher decide what's going to happen. 
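The theorem the speaker states, crowd squared error equals average individual squared error minus the diversity (the variance) of the predictions, is an algebraic identity, so it can be checked directly in code. The guesses below are made-up numbers around a 1186-pound steer, not Galton's actual data.

```python
def crowd_stats(predictions, truth):
    """Return (crowd squared error, average individual squared error,
    diversity = variance of the predictions around the crowd's average)."""
    n = len(predictions)
    crowd = sum(predictions) / n
    crowd_error = (crowd - truth) ** 2
    avg_error = sum((p - truth) ** 2 for p in predictions) / n
    diversity = sum((p - crowd) ** 2 for p in predictions) / n
    return crowd_error, avg_error, diversity

# Hypothetical guesses of a 1186-pound steer's weight
guesses = [1130.0, 1160.0, 1175.0, 1190.0, 1210.0, 1250.0]
ce, ae, div = crowd_stats(guesses, 1186.0)

assert abs(ce - (ae - div)) < 1e-9   # crowd error = average error - diversity
print(f"crowd error {ce:.1f} = avg error {ae:.1f} - diversity {div:.1f}")
```

Because diversity is never negative, the crowd is always at least as accurate as its average member; a diverse crowd whose errors point in different directions can be far more accurate, which is exactly the fat-stock-show pattern.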
So Best Buy, for example, [inaudible] business school I'm doing some work with Best Buy. Best Buy has, where they're buying a bunch of plasma TVs, they're trying to figure out, how many of these are we going to sell in the next month. Right. So what they typically would do is they would have a market researcher come along and do some fancy analysis of, like, at 46 inches, here's what it costs, this is the price point, here's how many we'll sell. They also have other store managers, you know, making bets on how many they think they're going to sell, and they use that information as well. Now it turns out the store managers so far are a little bit better than the expert, okay? But it sort of varies across place, and because there are different ways of thinking about it, the experts using some formal model and the managers using intuition, it turns out that combining the two does better. Google is interesting. They call it Prophet, P-R-O-P-H-E-T. They had me in to talk to them, and it was the first time all the people from Google could participate in this and got to meet each other; before, it had all been virtual, and they gave the top 100 performers copies of my book and they gave the next 200 performers t-shirts, and I just had to sit there and watch them trade my book for t-shirts. [laughter] Not a happy thing.
>> What was the price?
>> They were free. I just gave it to them.
>> How many books for how many t-shirts.
>> Oh they didn't, they were just trading, here, take this, I don't read books sort of the [laughter]...
>> Can I ask a question about the Best Buy thing?
>> Yeah.
>> Was there money on the bets?
>> So most of these things, here's what's interesting, so most of these companies, and it varies across company, so places like Google, it's just a pride thing, like there's a public ranking of where you are and there's a lot of cachet in being the high scorer. Other places give, like, trips to Mandalay Bay and things like that for the top 3 scorers, and for some reason, that seems to be a common place to send people, I'm not sure why. And then other places actually do have cash payments, but the cash payments tend to be very small, so the cost of running these things is really low, which is remarkable. Here is a...one of my undergraduates did this project for me last summer, which is fascinating. We wanted to ask, how does this play out over time, and one of the few places where you can actually find people making predictions over time are the NBA draft and the NFL draft. I don't necessarily have a deep interest in the NBA draft, but it's one of these few cases where you have real experts who dig deep for information, who tell their stories, right, and who make numerical predictions. So here's a series of 7 experts over 7 weeks, or 6 weeks, making predictions on the NBA draft. And what you're seeing here in this first column is their average error among these 7 people. So they start off, on average, being off by 213, that's only matched up to 86. And then it sort of somewhat smoothly goes down to the end, on average only off by about 70. Right? But here's what's interesting. If you compute the diversity of their picks and the collective error, right, you see this really intriguing phenomenon. All of a sudden in the last week, there's this total diversity collapse. Everybody starts looking at what everybody else is doing and just starts copying what seems to make the most sense. But the diversity collapse is so extreme that if you look at the crowd's performance, it's actually worse than it was in the previous 2 weeks. Right? 
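The diversity collapse described above falls right out of the crowd-error accounting. The two weeks of predictions below are invented numbers chosen to mimic the draft pattern: in the final week everyone herds toward a consensus, so individual errors shrink, but the crowd's error gets worse because diversity evaporates faster.

```python
truth = 70.0   # the hypothetical quantity the experts are predicting

weeks = {
    # Earlier week: individually far off, but the errors point in different directions
    "diverse week": [40.0, 55.0, 80.0, 95.0, 85.0],
    # Final week: everyone copies the consensus; closer individually, nearly identical
    "herded week": [78.0, 79.0, 78.0, 79.0, 78.0],
}

for label, preds in weeks.items():
    n = len(preds)
    crowd = sum(preds) / n
    crowd_err = (crowd - truth) ** 2
    avg_err = sum((p - truth) ** 2 for p in preds) / n
    diversity = sum((p - crowd) ** 2 for p in preds) / n
    print(f"{label}: crowd error {crowd_err:.1f}, "
          f"avg individual error {avg_err:.1f}, diversity {diversity:.1f}")
```

With these numbers, the herded week has a much smaller average individual error but a much larger crowd error, which is the "stick to your guns" point that follows.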
So one of the things that's really intriguing about this stuff, when you think about sort of the value of diversity, is you really need to sort of stick to your guns, right. Because even if somebody else has a better opinion, you know, a model that's better than your model, collectively we're probably better off if you stick to your model. And so, when I saw this, right, I immediately showed it to a whole bunch of different people and asked them, you know, what they thought of it, and most of them came back, you know, these government agencies and a bunch of corporations, and they immediately felt like, we're going to play with this, right, this is something that [inaudible] they're going to play with in terms of looking at how often they should let people speak to one another, or also trying to keep sequential data, because what you get here is, you don't want people talking to each other too much, but clearly you want people talking to each other some, because the collective error is getting better and better and better, right, but at some point, it fell apart. Okay. So let me close up with just some parting thoughts then. So the first one is, we tend to think of diversity in terms of this sort of red/blue thing, right. We think in terms of preference diversity. And even that, I think, is overblown, right, because political science makes this distinction, I'll use some language which I borrowed from my wife on this, between instrumental preferences, which are preferences over policies, and fundamental preferences, which are preferences over outcomes. So there's a lot of disagreement in preferences over policies; there's not that much disagreement in preferences over outcomes. And let me give you an example. I've got a photo of all of the pro-crime, pro-terror, anti-growth, pro-inequality, anti-security, pro-pollution, anti-child elected representatives. [laughter] There they are, right. So no one takes these positions. 
So there's almost complete agreement on crime, terror, growth, inequality, security, pollution and the lives of children, right? There's just tremendous differences in what we think the policies are, right, that will lead us to better outcomes, and there are times, right, and this is, I think, one of the high points in the last, you know, 30 years in American government, which is the '86 tax bill. Unfortunately, even though Bradley and Reagan worked closely on this, there are no pictures of Bradley and Reagan, and I met Bill Bradley a couple of months ago, and I said, you know, I was trying to find a picture of you and Reagan for the '86 tax bill, and he just laughed at me and he's like, you're never going to find a picture of me. But anyway, this is a time when you basically have people on both sides of the aisle realizing the tax code is a complete and utter mess, right, and that we should have fair taxes, and they worked together. But the way we want to think about this, right, is we want to think in terms of toolboxes. When you look at diverse groups of people, different identity groups, different interest groups, we want to think of them not in terms of their different preferences; we want to think of them in terms of the different sorts of cognitive tools they have to bring to bear on policy problems. And here's sort of a funny thing. There seems to be a huge non-political advantage. So I've spent a lot of time over the last 5 years going and visiting government agencies and visiting corporations, visiting non-profits, visiting universities, and the non-political organizations have a huge advantage in leveraging diversity, because of the fact that they have a common goal. Right? So if I go to Microsoft, if I go to Boeing, if I go to Google, if I go to Yahoo, if I go to Novartis, if I go to Genentech, they all stand there and say, this is what we want to do. 
They have a common mission; there's complete agreement, right, on what they want. And because there's complete agreement on what they want, they're really good at leveraging diversity. Right? In the political realm, we don't see that. So one of the metaphors I've been playing with in the last couple of days is, so the iPad came out today. And the great thing about the iPad is not the iPad itself, right; it's that Apple's got this new iPad, which is just like a big iPhone, it's like they pumped it with air and made it bigger. [laughter] I don't know what the big news is. It's just bigger. But what's interesting about this and about the iPhone is basically, it's just a landscape on which all sorts of people can create applications. Right. So, approximately, 3 people working for 3 months can make an app, so that's probably like, you know, $50,000 of labor, right, plus some computers, and you could make some app, and maybe that does something amazing that makes the world a better place, maybe it doesn't, but even if it doesn't, you learned a lot in the process. Right? But Apple's created a platform on which all these diverse talents can come in to try and make the world a better place. Right? Well, what's the equivalent platform for society? I think it's right down the road. Right. Detroit. There's all this infrastructure there, right? There's power, there's water, there's streets, right, like there's a huge airport nearby. And buildings are cheap, right. Land is incredibly cheap, right? We need to start thinking about, how does government make Detroit, right, the equivalent of the iPad, so it can leverage all these things. 
So one thing about Michigan that intrigues me: I remember as an undergrad, when you're an undergrad, you sort of worship Bo Schembechler, right, and right before Bo died, Bo was giving a talk, my wife and I went, and Bo got up and he said the six words that Bo always says, which are "the team, the team, the team." Right. And this is sort of my "the team, the team, the team." It's one of those things, like, John Holland is sitting here; he's like one of the founders of complex systems. One of the things that's so amazing about complex systems is, if you take something like the brain, right, just think of a neuron: any individual neuron is a really simple thing. Right. It can't do very much, right. It's got dendrites coming in and an axon going out, and it's just sort of a sigmoidal response function. It's a very simple thing. But if you have differentiated neurons, right, connected in the right way, that can create consciousness, cognition, personality, emotion, all these amazing things. The ramp up, if you had some sort of number for how much more impressive the brain is than the neuron, it would be huge; it would be bigger than 7, which my son Cooper likes to say [laughter] right. We're not getting a similar ramp up in our government and our organizations. If you ask how much better organizations are than people, that number's probably less than 7, right. So I think the challenge and the opportunity is how do we do that. And I think we're trying to do some creative stuff, so this is my last couple of slides here. [inaudible] did this fun thing a couple of weeks ago where they hid 10 big red balloons across the United States, and they said, see who can find them fastest, and whoever finds them fastest gets this prize. Right. MIT won. Here's where the balloons were. MIT won in less than 9 hours. 
Now it might be [inaudible] win, because what's funny about this is that people said, oh, MIT won because those MIT people are so smart. What MIT did is they basically, they didn't do anything. They said, here's the deal, I can't remember how much money you got, it's like $10,000 or something. MIT said, if you find one of the balloons, you get 1/2 of 1/10th of the prize money. If you're the person who finds the person who finds the balloon, you get 1/4 of 1/10th; if you're the person who finds the person who finds the person who finds the balloon, you get 1/8 of 1/10th. So basically, the idea was, if you saw the balloon, say you found it; if not, email some others and say, hey, did you see a balloon, right, because that was the way to get money. So what they did is they sort of just created this giant web, right, that could solve problems. So I think that, you know, my final point here is, one of the things we learn through complex systems and thinking about diversity in complex systems is that this amazing sense of wonder, right, and this amazing sense of possibility comes with the ramp ups we can get from diverse people, right, trying to solve problems. And this is why I think, excuse me, universities are our great hope, right, because of the fact that they sort of bring together all these diverse people. But I think we have to keep in mind, this is my last political statement, that, you know, maybe we don't always want to go where everybody else goes, right. [laughter] Like, and I tried to write John Holland on this, look at this [laughter] Thank you very, very much.
^M00:52:44
[ Applause ]
^M00:52:50
>> And I'm happy to answer questions that people have, yeah. 
>>[inaudible]
>> Yeah. Yeah, no that's a great example, right, so...
>> [inaudible]
>> So the question was he couldn't help thinking of [inaudible] 538.com and this is a website where he has all sort of people make predictions...
>> Terrifically accurate.
>> Right, of, you know, who is going to win which state, you know, who is going to win which congressional races and that sort of stuff, and he figures out ways to average those, right, to make collective predictions. So yeah, that's just a classic example of sort of combining models, right. There's also a ton of research in computer science in what they call, sort of, ensemble learning theory, which is about, sort of, how do you combine different models. So in the forecasting literature, you read statistical papers about, which is what he's leveraging, right: if I've got 20 predictive models and I know their accuracy and I know their correlation, how should I combine them, how should I weight them.
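The combination question mentioned above can be sketched in its simplest form: weight each model's forecast by its past accuracy. This is a toy version; the forecast-combination and ensemble literatures also correct for the correlations between models, which this sketch ignores. The error figures and vote-share forecasts are hypothetical.

```python
def combine(forecasts, weights):
    """Weighted average of the models' forecasts (the simplest combination rule)."""
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, forecasts)) / total

# Hypothetical mean squared errors of three models on past races
past_mse = [4.0, 9.0, 25.0]
weights = [1.0 / m for m in past_mse]      # more accurate models get more weight

forecasts = [52.0, 54.0, 49.0]             # each model's vote-share call for one race
print("combined forecast:", round(combine(forecasts, weights), 2))
```

With equal weights this reduces to the plain crowd average from earlier in the talk; inverse-error weighting just tilts the crowd toward its better members without silencing the rest.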
>> [inaudible] he's got polls from different ideological [inaudible] bases?
>> Right, and also questions that could have been framed in slightly different ways and that sort of stuff. Right. Absolutely. Other question? Yeah?
>> So in biology, I'm sure you're aware that there is a problem of how much diversity in the population leads to fast adaptation or finding a solution, and often there's an intermediate diversity that is often [inaudible] view; too much diversity, you tend to lose the sense of an optimal situation, and what determines that is the structure of the landscape underlying the problem. So I'm curious, like in public policy, how do you know, do you think there's enough hold on diversity, how do you know what the landscape looks like underneath different types of problems, you know, is there some problem where really just having a bunch of economists [inaudible] going to the same school is actually the best way to go.
>> All right. So the question is, how do we know how much diversity we should have as a function of sort of like, how rugged the landscape is, right, and, and, so that's a great question and the answer is, like clearly, you want the amount of diversity to fit the, to fit the problem, and just like that, in some sense, if I could draw the equivalent of the shovel landscape, where I had the amount of diversity on this axis and how good your solution is going to be, but the thing is, where that peak would be, would differ depending on the problem. This gets the, I mean, so this is something that's puzzled me for a long time, that we spend a lot of time when we talk about policy problems, we're talking about who is affected by them and the like, but we don't actually go through and do measures of how complex they are or how difficult they are, so if I said, which is more complex or which is more difficult -- health care, welfare care, you know, welfare policy or tax policy. I mean, we don't have any metrics with which to think about those. And yet it would seem to me, right, now this may be a naive view, but if you thought about, from an organizational standpoint, what should the organizational design look like to solve those problems, that should be related in some way to the difficulty of the problem, right? But yet, the way we analyze it, the way, and I was kind of trained in mechanism design as an economist, we focus instead entirely on the incentive problems and the information problems. We don't focus at all, because we sort of assume people can optimize, which is, sort of, I think, not necessarily a great assumption in those settings. So, I'm with you, but I don't know how, it would be a large project in how one would get it under way, but it strikes me to be a very meaningful thing to figure out, exactly how complicated is health care. 
So you've been hearing for months that health care's complex, health care's complicated, health care's difficult, but how does that compare to tax policy? How does that compare to [inaudible]? We don't know, and if we had a better understanding, it seems like we could better understand how we'd go about solving it. In engineering or computer science, when we ask how to solve a problem, the algorithm we choose depends on the complexity of the problem. And in ecology, we know the diversity of species we need depends on how rugged the landscape is and how fast the landscape is changing; if the landscape is moving fast, we need a lot more diversity. So yeah, maybe we should get some [inaudible] to move into public policy, right. [laughter] Yeah, question?
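The rugged-landscape point can be sketched in code. This is a minimal, hypothetical illustration (the landscape function, starting points, and step sizes are all invented): a team whose members search with diverse heuristics reaches a peak at least as high as a homogeneous team, because the diverse team includes the homogeneous heuristic plus others.

```python
import math

def landscape(x):
    # a hypothetical rugged "policy landscape" with several local peaks
    return math.sin(x) + 0.6 * math.sin(3.1 * x) + 0.3 * math.sin(7.3 * x)

def hill_climb(start, step, iters=200):
    # simple local search: move left or right whenever that improves the value
    best = start
    for _ in range(iters):
        for cand in (best - step, best + step):
            if landscape(cand) > landscape(best):
                best = cand
    return landscape(best)

# homogeneous team: everyone uses the same start and the same heuristic
homogeneous = hill_climb(0.0, 0.1)

# diverse team: different starts and step sizes; its first member is identical
# to the homogeneous searchers, so the comparison is apples to apples
diverse = max(hill_climb(s, d) for s, d in
              [(0.0, 0.1), (2.0, 0.3), (4.0, 0.7), (6.0, 1.1)])

assert diverse >= homogeneous  # the diverse team never does worse here
```

On a smooth single-peaked landscape the two teams tie, which is the talk's point that how much diversity you need depends on how rugged the problem is.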
>> I was wondering if you could comment on the differences between, or the roles of, cognitive diversity and cultural diversity: whether you're talking about the same thing or about different constructs.
>> The question is how cognitive diversity and cultural diversity relate: are they the same thing, how do they differ. In some sense this is an empirical question, and I think it depends on the particular problem domain. In most problem domains, cultural diversity is just going to play in; if you ask people from different cultures how they make sense of things, have them put things in categories, you see big cultural differences, particularly when it comes to anything involving the natural world. So let me give you a specific example. People from Western countries, if you have them look at, say, a rain forest or a group of animals, will use a sort of Linnaean system: here's the animals, here's the plants, here's the trees, here's the fishes living in the water, that sort of thing. But if you take someone who lives in those cultures, in those ecosystems, they will actually say, here's this bird that eats the nuts off this tree, and here's the flowers that grow along the base of that tree. And if you say, well, why don't you classify them by animals, trees and flowers, they will say, well, that's the crazy person's way [laughter] of categorizing these things. So for things in the natural world, there are huge differences across cultural lines depending on just our familiarity with them. But in some respects, I think it's just a really intriguing open question what causes differences in how we see how the world works.
You know, my sort of, I would say, superficial reading of the psychology literature says that it's a mixture of culture, your own sense of identity, the stories you've read, the experiences you've had, the training you have, all the different models and ideas you carry around in your head; you use some sort of case-based logic to say, this is the way I'll frame this, and you just get massive differences in how people will frame things. For instance, in my undergraduate class, I had them predict a bunch of things, from how many chairs are in the Starbucks on Washtenaw to what's the tallest building in Brazil. It was interesting: for the tallest building in Brazil, the average guess was really close to correct, but some of the predictions were really low and some were really high. The people who guessed really low tended to have visited some other country in South America and just said, look, they don't build big buildings down there; and the people who guessed really high said, you've got the Olympics, and the Olympics only goes to big cities. They had the right explanations for why they picked what they did, so it was very clear that differences in life experience translated into different categories in which they thought of the country of Brazil and the city of Rio, and that led to these different predictions. So I think it's a really intriguing question. Yeah.
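The Brazil guessing example is an instance of the diversity prediction theorem from The Difference: the crowd's squared error equals the average individual squared error minus the variance, or diversity, of the predictions, so a diverse crowd is always at least as accurate as its average member. A minimal sketch, with invented guesses:

```python
# hypothetical guesses for the height of Brazil's tallest building (meters)
guesses = [80.0, 120.0, 260.0, 150.0, 210.0]
truth = 170.0

crowd = sum(guesses) / len(guesses)  # the crowd's prediction: the average
collective_error = (crowd - truth) ** 2
avg_individual_error = sum((g - truth) ** 2 for g in guesses) / len(guesses)
diversity = sum((g - crowd) ** 2 for g in guesses) / len(guesses)

# the identity holds for any guesses and any truth, not just these numbers
assert abs(collective_error - (avg_individual_error - diversity)) < 1e-9
assert collective_error <= avg_individual_error
```

Because diversity is never negative, the crowd can only beat or tie the average guesser, which is why the low and high Brazil guesses canceled into a nearly correct average.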
>> Yeah. It's a very good talk, but I think there are 2 pieces missing from it. One is the [inaudible] of reward. Some rewards are collective, averaged out; some can spread to the whole group at no cost. Technologies [inaudible]...
>> Right.
>> For income it would be the [inaudible], because you have to share. So, for example, in basketball you get the average, but if you play something like golf, where you pick the winner, then diversity is good for you.
>> Right. Right.
>> Because you have more chances to win. So there are 2 types of reward systems: one is averaged out and the other is you only pick the winner.
>> Right.
>> And if it's the latter case, then size matters a lot, and that's basically Jared Diamond's argument.
>> Right.
>> The larger the population, the more likely there's invention, because inventions are replicated at low cost across large populations, so you have more innovation, more technology, more economic growth. So there is a relationship between diversity and size; density matters. The other point is that different types of rewards lead to different outcomes: in one you average out, and in the other you just pick a winner from the group. When you average out, it's zero sum; when you pick the winner, everybody wins, and all the technology...
>> Right. So one of my former colleagues, Deirdre McCloskey, who is an economic historian, has thought a lot about this first question, and she argues, less empirically and more from a historical perspective, that if you think about where innovations have come from, it's from places that have been both dense and centers of trade. So if you tell the stories of Athens, Rome, Amsterdam, London, the United States, it's not only that you've got a lot of density; you've also got all these different cultures coming in with all these different ideas, sometimes through people and sometimes actually through artifacts. As things came back from China into Rome, into Italy, those artifacts had embedded knowledge that could be unpacked. So I think it's not only density, but also exposure to lots of different sets of ideas. You're absolutely right in terms of the payoff stuff. One of the things that I think makes public policy so much more difficult than other domains is that we don't get to experiment with lots of different health care plans. And there's also this difference: people who study collective problem solving have this notion of an oracle. Let me take the example of automobile design, because Jeremy's [phonetic] here. If I'm thinking about the aerodynamics of a car, you can go into Ford and draw on a computer, just change the shape of the roof a little bit, and they've got a computer program that will tell you exactly what the effects on aerodynamics are. So you've got an oracle. A perfect oracle.
What that means is that each one of us in this room could go and start designing cars to try to come up with one that's aerodynamic, and we could pick the winner, so we could totally leverage the diversity. But now take the question of designing the dashboard. There's no oracle there, right? If you design a dashboard, we can't press a button and have it come back and say, that dashboard's really cool. Instead, we've got to have a whole bunch of people who dress better than I do look at it, get a sense of it, run a focus group, that sort of stuff. It's incredibly expensive to evaluate, and so you can't have 10,000 people and just pick the winner. So it's not just a matter of density; it's also a matter of having some sort of oracle with which you can evaluate things. That's why the computer programming case works so well: you can just press a button and it tells you exactly how fast the program runs. One of the big constraints in open source programming is that the real geniuses are the people who can take a giant open source problem, break it into components, and give those components oracle-like properties, so that you know when you've solved a component correctly. Brian Arthur and Paul David have spent a lot of time looking at this. When you look at the open source projects that have been successful, they've been ones in which the sub-components have had well-defined oracles, so you could choose the best and very quickly see when there are improvements. On your second point, what the payoffs are, some of the stuff I'm working on now gets to that.
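A toy version of the aerodynamics oracle (the scoring function and its optimum are invented for illustration): because evaluation is instant and exact, you can screen thousands of diverse candidate designs and simply keep the winner, which is exactly what the dashboard case does not allow.

```python
import random

def drag_oracle(roofline):
    # stand-in for the aerodynamics program: instant, exact feedback
    # (hypothetical best roofline at 0.37; a real oracle runs a simulation)
    return -(roofline - 0.37) ** 2

random.seed(0)
candidates = [random.random() for _ in range(10_000)]  # 10,000 diverse designs
winner = max(candidates, key=drag_oracle)

assert abs(winner - 0.37) < 0.01  # cheap evaluation makes mass screening work
```

Replace `drag_oracle` with a focus group costing thousands of dollars per call and the same "generate many, pick the best" strategy collapses, which is the dashboard problem.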
One of the really interesting things is that different payoff structures, different incentive structures, can either encourage diversity or drive it out. If you look at how much the different investment houses were leveraging their assets (I have a graph on this, though not in these slides), Morgan Stanley was way below the other companies, but they weren't making as much money. So right before the crash, they started leveraging at the same level as everybody else, and they imploded, and the reason is that they started copying other people. If you let people copy other people, the individuals become better, but collectively you typically become worse, because you lose all that diversity. Now, one thing about markets that's really intriguing is that in a lot of markets, if you're right and everybody else is wrong, you get a huge payoff. So markets create this sort of weird incentive, and this goes back to Hayek, for a lot of cognitive diversity, because you can be contrarian and be right. There's a new book that just came out called "The Greatest Trade Ever": this guy made 15 billion dollars betting against the housing market. So if you are right and everybody else is wrong, you can make 15 billion dollars. And one can even tell a story that one reason democracies work well in market economies is that democracies are actually free riding off all the cognitive diversity that the markets are creating, whereas in a more totalitarian system, or any system that has a common religion and not a very diverse economy, there's not as much cognitive diversity in the pool, and so when you are asked to evaluate policy prescriptions or think about policy, you don't have this population of diverse thinkers that you can leverage.
So I think it's a really interesting question how the incentive structures, in terms of pay, affect how diverse the thinkers are going to be, because you could have too much diversity or too little, and then you're back to the earlier point: it depends on the problem, and that's why it's complex. It's not easy. Yeah.
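The Morgan Stanley story reduces to a short dynamic (the strategy names and payoffs here are invented): letting everyone imitate the current best performer raises individual payoffs in the short run but erases diversity, so a shock to the copied strategy hits the whole group at once.

```python
# hypothetical per-strategy returns before the crash
payoffs = {"low_leverage": 3, "high_leverage": 5}
firms = ["low_leverage", "high_leverage", "low_leverage"]

# each firm imitates whichever strategy is currently paying best
leader = max(set(firms), key=lambda s: payoffs[s])
firms = [leader] * len(firms)
assert set(firms) == {"high_leverage"}  # every firm is better off, but
                                        # strategy diversity is gone

# the crash: the strategy everyone copied implodes
payoffs["high_leverage"] = -10
assert sum(payoffs[f] for f in firms) == -30  # everyone fails together
```

Had the low-leverage firms kept their own strategy, the group total after the shock would have been positive; the imitation step is what turns an individual improvement into a collective fragility.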
>> You started out talking about sort of the old politics...
>> Yeah, right.
>> ...old [inaudible] is about aggregating preferences and the median voter theorem and left to right...
>> Right. 
>> ...and then you went into the new, which is more about, assuming we agree on a goal, how do we find the best solution to it. It sounds like the second one is really behind this paradigm: let's agree on some measure of output and then find the production possibility [inaudible]...
>> Right, right.
>> ...and the best way to get to it. But that seems to sidestep the question of how we agree on what the outcome is. So I was thinking the two parts were about different problems: the first is, how do we agree on what we want, and the second is, given that we agree on what we want, how do we best get to it?
>> Right, so your point is absolutely well taken. One could say the first thing I was talking about was standard political science, which is about differences in preferences, and you're saying the second part is sort of how economics would solve the problem. But the difference is that typically, when an economist talks about solving a problem, they haven't talked much about diversity. If you look at most of the economics literature, it's about how we get people to put forth enough effort, hidden action, hidden information; we talk about aggregation issues and that sort of stuff, but economists haven't typically looked at how diverse groups of people solve problems. That's been more within the realm of industrial ecology, engineering, psychology, group behavior, that sort of stuff. So one of the things I'm trying to do when I think about public policy questions is to say, a lot of these problems are really hard, and when we think about how you solve hard problems, you've only got 2 approaches: hire super sophisticated people who can somehow solve them, or have diverse groups of people who can somehow collectively get to a solution. The point I was trying to make is that if you say "diversity" in politics almost anywhere, people are going to think of it in terms of diverse preferences and diverse wants, people wanting different slices of the pie.
And what I was trying to say is that I wanted to focus on something different: all those different sets of experiences, those different ways of thinking, even different goals, because goals shape how we frame things, are a lever on which we can stand to possibly find better solutions to some of these problems, provided we can overcome all these other issues. But you're absolutely right about the dichotomy. Yes?
>> I have a question. So with your Detroit slide...
>> Yeah.
>> So I buy the view that we get all this collective, diverse problem solving for Detroit, okay...
>> Hmm.
>> So how you've got this great range of solutions...
>> Right.
>> So then the political science question is...
>> How do you decide among them?
>> No, no...assume you even have some [inaudible] by which to know. How do you actually put them into place, given that there are winners and losers? Even though this is the optimal, I mean, Google can do it because the 2 guys who run it can say, great, we're going to go forward. So you get great ideas about Detroit, some of which involve shrinking the footprint of the city...
>> Right.
>> Others involve changing the racial diversity of the population...
>> Right.
>> ...it's sort of like in the, using, assuming you do all the diversity in the problem solving...
>> Right.
>> Then how do you get the policies implemented?
>> So I think your question was, once you've quote-unquote solved the problem, how do you get movement. There are actually sort of 3 stages. One is the stage where you have people who are, in some sense, formulating policies and ideas. Then there's a second stage in which you somehow need, I think, again a diverse group of people to figure out which of these actually make sense, because any one person is going to have a view: let's shrink the size of the city, or let's create enterprise zones, or let's create new transportation sectors. Because the people formulating them tend to be advocates, you're not necessarily going to get an objective view of how it's going to play out. So suppose you get diverse people to say, okay, this is the one we think is going to work best. The implementation stage is something outside my area of expertise; I think it's probably outside everyone's. But I think there's an argument, and this gets back to the red state/blue state slide, that we need a change in our political culture, where we understand that if we had a whole bunch of people look at this, make a reasonable appraisal, and say, we've decided this is the right one, then even if mine didn't win, we should go forward with it. One of the things I talk about a lot in the context of my book, and you can see it in those prediction slides, is that if you go to a meeting and everybody agrees, there are only 2 possibilities: either there was no reason to have the meeting, or you probably just made a bad decision.
But if you go to a meeting and everybody disagrees, then provided you agree on the ends, which here means making Detroit a better place, odds are the decision was a good one, even if it's not your decision. So I think the implementation, especially in a place like Detroit, is a matter of political will, a matter of people being willing to give those resources. And how you muster that political will, I think, is a really hard question. It's a challenge that Dave Bing gets up every morning and asks himself. But I don't know. If I knew, I'd call. [laughs] Yeah.
>> Just follow up on...
>> Yeah.
>> ...the same discussions, but take it up in another direction...
>> Yeah. 
>> ...so I guess what I was thinking about was that an earlier theme was how the problem gets defined. With these very complex problems, poverty, terrorism, unlike the examples you were giving us, where it's well defined what people are trying to accomplish...
>> Right. 
>> ...or you can worry about whether you have this oracle or not to...
>> Right, right, right.
>> ...see how well they're doing at solving it. But if you have a complex issue where people don't necessarily agree on what they're trying to solve, how do you think through whether, or how much, diversity is helpful to define the problem before you solve it?
>> Okay, so that's a hard one...again, I wish I had easier questions. If I think of this from the perspective of my training as a mathematical economist, it's hard to think about how you solve the problem if the problem isn't defined. But there's this problem of problem definition: to what extent is it useful to have a diverse group of people even define the problem? You can think of categorizing that as what you might call the problem of problems, which is itself a problem: how do we define what the problem is. And so you can say, okay, the problem of problems is just a problem, so we can solve it. But that turns out to be a cheat, because if you do the math on it a little bit, what you realize is that the dimensionality of the problem of problems is much larger than that of any particular problem.
And once the dimensionality gets that large, given communication problems, you're at best going to get a sort of random strike through the space. I'm not trying to punt on this, but this is why, when you look at the literature on brainstorming and problem creation, there really isn't much. If there are business school professors here, you may want to correct me, but my reading of this stuff, and I did a review of it maybe 3 months ago, is that there really isn't anything with strong empirical support saying this always works or here's a good way to do it. Instead there are something like 40 theories of brainstorming and 30 theories of product development. I think the reason there isn't a good science behind it is that it's an econometric problem: the set of possible problems is so large, and the set of possible ways to frame them is so large, that there's no way to have a systematic approach to search through them, so it ends up being very path dependent and arbitrary. But that doesn't mean we should throw up our hands. It does mean, going back to that open source point, that it makes sense to ask how we decompose these things into problems that are possibly doable, or how we find things that are possibly measurable. At the same time, I have concerns about that, because sometimes the things that are most important, like people's life satisfaction, are very hard to measure, and so we end up focusing on the economics. So I think it's just hard. Yes?
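The decomposition idea maps onto the unit-test idiom from software (the task and the split here are invented): even when the overall pipeline has no oracle, each sub-component can be given an oracle-like check, so you know when that piece is solved correctly.

```python
# an invented overall task: turn raw survey responses into a satisfaction
# score. We can't "oracle" the whole pipeline, but each component can be
# checked on its own.

def parse_response(raw):
    # component 1: extract a 1-5 rating from text like "rating: 4"
    return int(raw.split(":")[1])

def normalize(rating):
    # component 2: map a 1-5 rating onto a 0.0-1.0 scale
    return (rating - 1) / 4

# oracle-like checks for each sub-component
assert parse_response("rating: 4") == 4
assert normalize(1) == 0.0 and normalize(5) == 1.0
assert normalize(parse_response("rating: 3")) == 0.5
```

Whether the score means anything for "life satisfaction" is exactly the part no assertion can settle, which is the concern raised above about focusing on what's measurable.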
>> [inaudible] the last 2 questions.
>> Yeah.
>> How do you see people's expectations and that whole rate of change affecting some of this? I'm thinking of things like just-in-time engineering or just-in-time delivery of goods...
>> Right.
>> And say you get 3 people. One says, today I need my heart transplant paid for; one says, I need my flight to Amsterdam to be safe...
>> Right.
>> ...and the third one says, I need my 4th grade kid to have a high quality classroom.
>> Right.
>> They're not antagonistic to each other.
>> Right.
>> They all agree on the outcome, like in your picture.
>> Right.
>> ...that said all those [inaudible] persons. It seems like they each needed their thing today, but we don't look at policy as, how do we get all those into one policy. We keep saying, oh, there's the terror policy, there's the education policy, there's the health policy. But people have expectations about it being today, because they see all these other things get delivered just in time. So how do your models address that, or look at how that change over time makes a difference?
>> You know, once [inaudible] right, I think that's a separate question. There are fundamental differences between the nature of these political problems and some of the standard business cases, where if somebody wants a cup of coffee, they can get a cup of coffee. A lot of the things you're describing are what we call, as economists, non-rival goods, in the sense that it's something I've got to create for everyone, and because I've got to create it for everyone, it's a huge undertaking. The only rejoinder to that is an anecdote: I was talking to someone from Toyota, and they said they showed a concept car at the International Auto Show, and 3 weeks later they saw that China was making toy versions of the car. So you can make the toy version of something, an individual product for someone, really, really quick, but an actual car from soup to nuts takes several years. The policies you're talking about are things that are going to take decades to get done, so even if you get the right policy, that doesn't mean you're going to do it fast. If anything, having more diverse groups of people chime in may slow things down. But you're less likely to make big mistakes. Yeah, last question.
>> You mentioned [inaudible] and some of the [inaudible] come from...
>> Yeah.
>> So I'm wondering what you see as some of the differences between what you're putting forth, using [inaudible] diversity to solve problems, and how that's different from just [inaudible] a market approach, because it seems like [inaudible] coordinating knowledge was [inaudible] to this, but I mean, with the stock market crashing, I see that it's like controlled, it's a controlled market [inaudible]
>> Yeah, so there are a lot of similarities between people who study complex systems and some of the old Austrian economists, in terms of how they think about diverse things aggregating into something that's better. But that doesn't mean we don't want to keep thinking about it. I would probably be a little less laissez faire, in the sense that when I look at what happened in the most recent stock market crash, it seems to me that that was, to some extent, a breakdown in diversity: all these people had the same model in their heads. There were a few people who didn't, but the few people who didn't were basically buying puts; they were going to make a ton of money on [inaudible], but they weren't stabilizing the system. So what happened is you had what we call positive feedbacks, where, as with Morgan Stanley, everybody started copying other people by leveraging more, because everybody else was leveraging, and you ended up with a common model. Basically, everybody believed that you couldn't have the entire real estate market collapse, because it had never collapsed in the past. In the '87 collapse, the New York market collapsed, but the West Coast market and the Chicago market were pretty much okay; in 2000, you had a partial collapse of the San Francisco real estate market, the West Coast collapsed somewhat, and the New York market was fine. So you got the sense that real estate markets were regional. But what they failed to recognize is that the Federal Reserve system, which is a series of regional banks, used to be way more regional and is now basically just one big bank, and they should have realized that we're probably all one big economy now, and the whole thing could go.
So there really was a common model in their heads, a breakdown in diversity, that caused this thing to collapse. One of the things I'm working on now, which I think is really interesting, is how you construct economic, political and social institutions in such a way that you encourage the right levels of dissent and foster diversity: not so much of it that we can't be productive, but enough that you create a reasonably stable system. And it's not enough to say, well, let's just look to ecology, because in ecology we've had mass extinctions. If you read Douglas Erwin's book, we've had mass extinctions, and a lot of people would argue that it's not the case that every one of those happened because a huge meteor hit; sometimes the internal dynamics of the system can be such that, whammo, you just lose a whole bunch of different species. So there's a question of whether we just sit back and let stuff happen, or whether we actually try to think in careful ways about how we can construct these institutions so that we maintain enough diversity that we don't have these big events. All right. Thank you very, very much.
^M01:21:04
[ Applause ]
^M01:21:10
>> I would like to invite you to continue the conversation informally. There are some refreshments right outside of the auditorium. Thank you very much.
^M01:21:17
[ Background noise ]
^M01:21:24