An assignment (WWI in Real Time)

I’m conferencing today, which means no lecture—but also means that I gave the class a writing assignment. Here it is:

Looking back on the course, particularly the readings, write a couple of paragraphs on one event, outcome, or choice that we’ve not yet covered that nonetheless puzzles you. (In other words, identify something that surprised you, something for which you’d clearly have had reasons to expect the opposite of what happened.) Why is it puzzling?

The goal, of course, is to get them thinking like political scientists: being puzzled, identifying questions worth asking and answering, and then beginning the process of building explanations. Effectively, they’re doing what I do with the readings each day: “Wait, why would this happen? Let’s start writing down a game…”

Back on the horse next Tuesday…

On things said for the sake of argument, or why “assumption” isn’t a four-letter word

I’m not going to rehash Phil Arena’s (excellent) post on the role—and ubiquity—of assumptions, but I do want to take the opportunity to talk about how I view the assumptions I make in my own work. Specifically, I want to make a case for why “assumptions” aren’t at all a necessary evil—rather, they’re a necessary and powerful good for doing the stuff of social science. I’ll make two points.

First, assumptions help us isolate causal mechanisms when we build theories, enabling us to develop expectations over when and why some set of factors can have an independent effect on an outcome of interest in the absence of some other factors—which helps when we move to empirical models. Second, and I’m repeating myself here (I think), they’re really the only things that we, as social scientists trying to explain the things we observe, bring to the table when it comes to building theories. So, yes, all assumptions are “false” in the sense that they strip away things we would think important if we were to create a complete rendition of something, but they’re also essential—and unavoidable—when it comes to the development of theories (whether formal or informal). Those things we assume away should always come back in our empirical models, to be sure, but I’ll also argue that we have a better sense of what those controls should be when we’re mindful of the assumptions we put into our theories.

First, on the issue of isolation, let’s say that I want to develop a theory of how some factor—say, leadership change—affects temporal patterns of international conflict. If I’m interested in whether there can be a valid link between leadership tenure and war (that is, a valid argument from premises to conclusion), what do I need to do? Let’s say, for example, that my hunch is that new leaders know more about their own willingness to use force than their opponents do, such that they take office with private information over their resolve. How should I model this? Well, two things I’d want to do immediately: first, assume that, while consecutive leaders of the same state can differ in their resolve, there is no other source of variation in preferences that occurs with leader change; and second, assume that, without leadership change, war would not occur in the theory. Do I think either of these is true? Well, of course not. First, partisan change, state-society dynamics, and time until the next election (in democracies) can also produce changes in state preferences across leadership transitions. Second, wars can of course happen for other reasons (if they didn’t, I’d be the first person with a valid argument about the causes of war, and while I’m a little arrogant, I ain’t that bad). But if I want to see what the independent effect of leader change is, I can (and should, at this stage of model-building) strip these other things away—so that if war does happen in my model, I’ll know the mechanism driving it. (Put more pithily, if outcomes are overdetermined in your theory, you really can’t say much about the things you’re presumably interested in. And whether they are overdetermined in your theory is totally up to you.)
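To fix ideas, here’s a minimal sketch of what that stripped-down setup might look like on paper. The notation is entirely mine and purely illustrative: one standard way of writing a crisis-bargaining game with one-sided private information, not the model of any particular paper.

```latex
% Illustrative notation only (mine, not from any particular paper).
% Each new leader of state B draws resolve r privately at the moment of turnover:
r \in \{\underline{r}, \overline{r}\}, \qquad \Pr(r = \overline{r}) = \pi .
% State A, which does not observe r, demands a share x \in [0,1] of the good;
% B then fights (winning with probability p) or concedes:
u_B(\text{fight}) = p - \frac{c_B}{r}, \qquad u_B(\text{concede}) = 1 - x .
% The two isolating assumptions, stated formally:
% (1) turnover changes r and nothing else about B's preferences;
% (2) absent turnover, r is common knowledge, so A demands exactly B's
%     reservation share x = 1 - p + c_B/r and war never occurs in equilibrium.
```

Nothing in the sketch is special; the point is that, once written down, the two isolating assumptions sit there in black and white, ready to be defended or relaxed.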

My next step, of course, is to analyze the model. This amounts to seeing what valid conclusions follow from my premises (assumptions)—no more, no less. Let’s say that I analyze the model and find that, indeed, when new leaders’ personal resolve is private information, we see turnover-driven cycles of reputation-building and conflict. But what do I really have here, if I’ve assumed away all these other sources of potential changes in state preferences? Well, I’ve got a somewhat parsimonious theory of leader change, tenure, and conflict behavior driven by a particular mechanism—reputation dynamics. I don’t have a theory of every possible cause of war, but what I do have is a sense of exactly what patterns my independent variable of interest (time in office) should produce in some outcome variables of interest. I have this, notably, because nothing else apart from the proposed mechanism could have caused war in my theoretical model.

My model isn’t the world, nor is it the historical record, and when it comes time to take my predictions to the data—to test them against the historical record—I’ll know some important things to control for on the right-hand side of my regression: all the things I assumed away. In particular, those things I believe will affect both temporal changes in state preferences and war should go into the empirical model as controls. That’s pretty useful, as far as I’m concerned. So by being intimately aware of what my theory assumes and what it doesn’t, I have strong expectations about the independent effects of my independent variables, controlling for other relevant factors, and I have an equally strong sense of what I need to control for. And by isolating the factors around my particular proposed causal mechanism/independent variable, I can also be sure that my proposed mechanism can do independent work on its own, and I can specify the precise conditions under which I expect it to play out. With less precise (or, worse, hidden or implicit) assumptions—that is, with multiple things that could cause war under the same conditions—all of that would be much more (and unnecessarily) difficult.
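In regression terms, the mapping from theory to test might look something like the following. The variable names are hypothetical, chosen only to illustrate the division of labor between the theory’s variables and the controls it implies.

```latex
% Hypothetical specification; variable names are mine, for illustration only.
\Pr(\text{war}_{it} = 1) =
  \Lambda\big(\beta_1\,\text{NewLeader}_{it} + \beta_2\,\text{Tenure}_{it}
  + \gamma_1\,\text{PartisanChange}_{it}
  + \gamma_2\,\text{ElectionProximity}_{it} + \alpha_i\big)
% \Lambda is the logistic CDF. \beta_1 and \beta_2 carry the theory's predictions
% about turnover and time in office; the \gamma terms are exactly what the theory
% assumed away: factors plausibly related to both leader-driven preference
% change and war, returning here as controls.
```

The particular functional form isn’t the point; the point is that the list of gamma terms writes itself once you know exactly what the theory deliberately excluded.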

Second—and I saved this one because it’s shorter—assumptions really are all we bring to the table when we build theories and try to explain things. If a model is just an argument, then assumptions are just premises—i.e., things said for the sake of argument. Now, it’s true that if our assumptions can never hold (in my running example, if leaders are all the same in their resolve and it’s always well and publicly known), then my proposed mechanism won’t explain observed phenomena. Sure. That’s trivially true. But let’s think about the elements of our theory/argument; what’s it made up of? Premises, some logical connections drawn between them, and conclusions; in other words, assumptions, some logical connections drawn between them, and implications/hypotheses. The implications depend on the premises and the logic, so I’m clearly not adding hypotheses directly, and logic is, well, pretty much given; so my only contribution—the source of our creativity and power and, in a very real sense, our ability to explain—is the premises I use as inputs into my theoretical construct.
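Laid out bare, the running example is just such an argument. Here it is in my paraphrase, compressed to the point of caricature, to show where the creative work sits:

```latex
% My paraphrase of the running example as a bare argument (illustration only).
\text{P1 (premise):} \quad \text{new leaders privately know their own resolve.} \\
\text{P2 (premise):} \quad \text{opponents facing private information screen with risky demands.} \\
\text{Logic (given):} \quad \text{equilibrium analysis of the screening game.} \\
\therefore\ \text{C (implication): conflict risk peaks early in tenure and falls as resolve is revealed.}
```

Swap out P1 or P2 and the conclusion changes; tinker with the logic itself and you’ve simply made an error. The premises are the only dial we actually get to turn.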

That means I value my assumptions pretty highly—again, since I’m not trying to rewrite the rules of logic, they’re what I’m really contributing here, and that’s as it should be. My goal in the not-so-hypothetical model above was to see how a particular factor influenced a particular outcome, independently of other factors, if at all; I wanted to know what would have to be true for the proposed relationship to exist. If I didn’t make a ton of false assumptions along the way, I’d get nowhere. But here’s the thing: everything I assumed away that could be related to both IV and DV must come back if I’m going to build an empirical model that controls for potential confounds or sources of spuriousness—but it’s just not necessary (or prudent) to include it in the theoretical model I designed for my particular research question.

Predicting social revolutions?

Should social scientists have been able to predict the chaos in Tunis and Egypt? After someone at AEI decided that someone should’ve, then went on to slam social scientists for not doing so, Dan Drezner and Phil Arena had some interesting thoughts in rebuttal. I’ve got nothing to add, strictly speaking, to what they said, because the AEI post demonstrates a clear misunderstanding of science. But I’ve got another answer to the question—well, at least another reason to say “no”—that has to do precisely with insights gained from political science about things like social revolutions. In short, things like this can’t happen unless they are unpredictable.

The events in Tunis and Egypt had to be fairly unpredictable if they were to happen…otherwise, the repressive organs of the respective states would’ve tried to head them off, or the governments would’ve done so through preemptive concessions (which, anticipating some unrest, Jordan and Yemen are trying to do right now). This is probably true of anything like a social revolution, a coup, a putsch, etc., because they begin with one side—the people or a batch of upset colonels—at a serious bargaining disadvantage with respect to the state, a disadvantage that can only be overturned, even temporarily, through some sort of surprise.
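Here’s that selection logic made explicit, in throwaway notation of my own (none of this comes from the posts linked above):

```latex
% Throwaway notation, mine, to make the selection logic explicit.
% The state believes unrest would succeed with probability \hat{q};
% success would cost it C; preemption (concessions or repression) costs k.
% The state preempts whenever the expected cost of unrest exceeds that of acting early:
\hat{q}\,C > k \quad\Longleftrightarrow\quad \hat{q} > k/C .
% Anticipated unrest (\hat{q} high) gets headed off in advance, so the revolts
% we actually observe are those where the state judged \hat{q} < k/C even though
% the true q was high: by construction, the ones nobody saw coming.
```

That’s just the argument in symbols: the uprisings that actually happen are, by construction, the ones that beat everyone’s forecast, ours included.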

So in the end, we’ve seen how these things can spring up without warning: governments in Tunis and Cairo caught off guard, the first unsuspecting of unrest and the second skeptical that it would spread across borders into a traditionally stable Egypt. Then, sensing that things might get rough in their own countries, Jordan’s and Yemen’s governments start promising reform, hoping to head off popular pressure once it becomes predictable. I don’t know about you, but that’s some fairly useful insight gained from social science theories—especially rigorous logical models of the kind our friend at AEI disparages—wouldn’t you think?

Grad students…read this

For those of you CU students in Intro Game Theory this coming semester (all 27 of you), I’d suggest reading this piece ahead of time. We’ll spend a little time motivating the method early on, but for a good, thoughtful exposition of the role that formalizing our theories can play in conducting rigorous inquiry, there are few better than Harrison Wagner. Read it, then read it again.

When theories meet critiques…and how to handle them

Note. This is aimed, for the most part, at game theory students, but it’s worth noting that it matters for theories of all stripes, whether formal or verbal. So, whatever your inclinations for developing explanations, read on.

Theories, in their basic form, consist of assumptions (premises), some logic, and implications (hypotheses, conclusions, etc.), and there are any number of ways to critique them. Today, though, we’re going to set aside the question of the logic of theories and assume that you’ve got a logically consistent, valid argument. (How you get this is another story for another time.) But, assuming that the logic is right, one thing that any scholar will run into when others see their theory, whether in a paper or at a conference, is the question of what gets left out of the model. Granted, given the infinitude of things that could be in any model, the correct answer to “what have you left out?” is, strictly speaking, “nearly everything.” But very often, folks will ask, “But what about factor x? Shouldn’t that also affect the outcome variable? And, if so, why is it not in your model?” Sometimes that’s a useful critique of your theory, and sometimes it’s not, and the key is identifying when it is and when it isn’t. Of course, as we’ll see, even when such a critique isn’t useful for the theory, it often turns out to be good for thinking about controls for testing the theory’s implications…but we’ll get there after the jump.
