On things said for the sake of argument, or why “assumption” isn’t a four-letter word

I’m not going to rehash Phil Arena’s (excellent) post on the role—and ubiquity—of assumptions, but I do want to take the opportunity to talk about how I view the assumptions I make in my own work. Specifically, I want to make a case for why “assumptions” aren’t at all a necessary evil—rather, they’re a necessary and powerful good for doing the stuff of social science. I’ll make two points. First, they help us isolate causal mechanisms when we build theories, enabling us to develop expectations over when and why some set of factors can have an independent effect on an outcome of interest in the absence of some other factors—which helps when we move to empirical models. Second, and I’m repeating myself here (I think), they’re really the only things that we, as social scientists trying to explain the things we observe, bring to the table when it comes to building theories. So, yes, all assumptions are “false” in the sense that they strip away things we would think important if we were to create a complete rendition of something, but they’re also essential—and unavoidable—when it comes to the development of theories (whether formal or informal). Those things we assume away should always come back in our empirical models, to be sure, but I’ll also argue that we have a better sense of what those controls should be when we’re mindful of the assumptions we put into our theories.

First, on the issue of isolation, let’s say that I want to develop a theory of how some factor—say, leadership change—affects temporal patterns of international conflict. If I’m interested in whether there can be a valid link between leadership tenure and war (that is, a valid argument from premises to conclusion), what do I need to do? Let’s say, for example, that my hunch is that new leaders know more about their own willingness to use force than their opponents do, such that they take office with private information over their resolve. How should I model this? Well, two things I’d want to do immediately are to assume, first, that while consecutive leaders of the same state can differ in their resolve, there is no other source of variation in preferences that occurs with leader change and, second, that without leadership change, war would not occur in the theory. Do I think either of these is true? Well, of course not. First, partisan change, state-society dynamics, and time until the next election (in democracies) can also produce changes in state preferences across leadership transitions. Second, wars can of course happen for other reasons (if they didn’t, I’d be the first person with a valid argument about the causes of war, and while I’m a little arrogant, I ain’t that bad). But if I want to see what the independent effect of leader change is, I can (and should, at this stage of model-building) strip these other things away—so that if war does happen in my model, I’ll know the mechanism driving it. (Put more pithily, if outcomes are overdetermined in your theory, you really can’t say much about the things you’re presumably interested in. And whether they are overdetermined in your theory is totally up to you.)

My next step, of course, is to analyze the model. This amounts to seeing what valid conclusions follow from my premises (assumptions)—no more, no less. Let’s say that I analyze the model and find that, indeed, when new leaders’ personal resolve is private information, we see turnover-driven cycles of reputation-building and conflict. But what do I really have here, if I’ve assumed away all these other sources of potential changes in state preferences? Well, I’ve got a somewhat parsimonious theory of leader change, tenure, and conflict behavior driven by a particular mechanism—reputation dynamics. I don’t have a theory of every possible cause of war, but what I do have is a sense of exactly what patterns my independent variable of interest (time in office) should produce in some outcome variables of interest. I have this, notably, because nothing else apart from the proposed mechanism could have caused war in my theoretical model. My model isn’t the world, nor is it the historical record, and when it comes time to take my predictions to the data—to test them against the historical record—I’ll know some important things to control for on the right-hand side of my regression: all the things I assumed away. Particularly, those things I believe will affect both temporal changes in state preferences and war should go into the empirical model as controls. That’s pretty useful, as far as I’m concerned. So by being intimately aware of what my theory assumes and what it doesn’t, I have strong expectations about the independent effects of my independent variables, controlling for other relevant factors, and I have an equally strong sense of what I need to control for. And by isolating the factors around my particular proposed causal mechanism/independent variable, I can also be sure that my proposed mechanism can do independent work on its own, and I can identify the precise conditions under which I expect it to play out.
With less precise (or, worse, hidden or implicit) assumptions—that is, with multiple things that could cause war under the same conditions—that would be much more (and unnecessarily) difficult.
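That division of labor between theory and empirics can be sketched with a toy simulation. This is a hypothetical example of my own, with made-up variable names and coefficients, not anyone’s actual data: let partisan change confound the relationship between leader change and conflict, then compare a regression that omits the confound to one that controls for it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical numbers throughout: 'partisan_shift' stands in for a confound
# the theory assumed away; here it raises both the chance of leader change
# and the chance of conflict.
partisan_shift = rng.normal(size=n)
leader_change = 0.8 * partisan_shift + rng.normal(size=n)
conflict = 0.5 * leader_change + 0.7 * partisan_shift + rng.normal(size=n)

def ols(y, *regressors):
    """Least-squares coefficients (intercept first)."""
    X = np.column_stack((np.ones_like(y), *regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

naive = ols(conflict, leader_change)                       # confound omitted
controlled = ols(conflict, leader_change, partisan_shift)  # confound controlled

print(f"naive slope on leader change:      {naive[1]:.2f}")       # ~0.84, biased up
print(f"controlled slope on leader change: {controlled[1]:.2f}")  # ~0.50, the true effect
```

The naive slope absorbs the confound’s effect (here roughly 0.84 against a true effect of 0.5); bringing the assumed-away variable back as a control recovers the effect the mechanism actually generates.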

Second—and I saved this one because it’s shorter—assumptions really are all we bring to the table when we build theories and try to explain things. If a model is just an argument, then assumptions are just premises—i.e., things said for the sake of argument. Now, it’s true that if our assumptions can never hold (in my running example, if leaders are all the same in their resolve and it’s always well and publicly known), then my proposed mechanism won’t explain observed phenomena. Sure. That’s trivially true. But let’s think about the elements of our theory/argument; what’s it made up of? Premises, some logical connections drawn between them, and conclusions; in other words, assumptions, some logical connections drawn between them, and implications/hypotheses. The implications depend on the premises and the logic, so I’m clearly not adding hypotheses directly, and logic is, well, pretty much given; so my only contribution—the source of our creativity and power and, in a very real sense, our ability to explain—is the premises I use as inputs into my theoretical construct.

That means I value my assumptions pretty highly—again, since I’m not trying to rewrite the rules of logic, that’s what I’m really contributing here, and that’s as it should be. My goal in the not-so-hypothetical model above was to see how a particular factor influenced a particular outcome, independently of other factors, if at all; I wanted to know what would have to be true for the proposed relationship to exist. If I didn’t make a ton of false assumptions along the way, I’d get nowhere. But here’s the thing—everything I assumed away that could be related to both IV and DV must come back if I’m going to build an empirical model that controls for potential confounds or sources of spuriousness—but it’s just not necessary (or prudent) to include it in the theoretical model I designed for my particular research question.

What do people mean by “small government”?

There’s a big difference between reducing the size of government and reducing the authority of the government, and they’re all too often conflated. In fact, plenty of people saying they want “smaller government” only want an inexpensive government, one that doesn’t tax them too much or redistribute in ways they dislike. Yet some of these same proponents of cheap government are also quite happy to expand the authority or power of the government, from enhanced police and surveillance powers (like warrantless wiretaps) to allowing the use of torture to restrictions on abortion or marriage or free speech (say, flag-burning). This could easily be a longer list.

But whether you support these things or not is irrelevant for the point I want to make; “small” governments can outlaw all kinds of things, can restrict a wide range of civil liberties and human rights—and just because a government isn’t “big” in terms of how much it costs doesn’t mean that it’s not “big” in terms of its authority and ability to interfere in the lives of its citizens. Yes, one could say that it might be harder to interfere with liberties when the government is smaller, but if the money it still does have goes to police power, then that’s not a (terribly) compelling argument. The point here is that people often aren’t clear about what they mean by “small” or even “limited” government.

International security, week 3

Following up on last week’s treatment of the bargaining approach to war, we continued the discussion this week about the (unfortunately?) time-honored dispute over the link between the distribution of power and the probability of war. I won’t belabor the substance of the discussions too much, but two things stood out to me that I thought worth noting today. [Arm raised over dying horse…]


More on game theory and replicable science

Andrew Gelman’s post over at The Monkey Cage, in which he treats an argument about how the threat of meta-analysis should induce more disciplined empirical work in forward-thinking scholars, got me thinking more about the importance of replicability…but in the context of theory rather than empirical work.

Specifically, on that first day of an intro game theory class for grad students, you find yourself explaining to what might be a skeptical crowd the value of both modeling strategic interaction (a rather easy sell) and doing it formally (a tougher sell). The two are, of course, very different ideas—and they’re all too often conflated—but on the latter point, there’s a good argument to be made here that has nothing to do with game theory or assumptions of rationality but everything to do with replicability.

In short, formalizing one’s argument—apart from making it easier to get the logic right—is also a good way (and, in fairness, not the only way) to make sure that said argument is replicable. Sentences and words can be sloppy; equations and operators are precise. And by virtue of that precision, tracing an author’s logic becomes easy when s/he provides proofs of how the conclusion was reached. These proofs can be mathematical (that’s the way I happen to do it), but if we’re just trying to prove the validity of an argument, then it can be done syllogistically or in whatever mode one likes. When one formalizes an argument, though, we can flip to the back of the article (or, sometimes, read just beyond the proposition) and trace exactly the logical path to verify that, yes, the hypotheses really do follow from the premises. That’s powerful stuff. Yes, one might contend that there are some nontrivial startup costs to being able to reproduce and verify the logic inside someone else’s formal proofs, but that’s also the case when it comes to advanced statistical analyses that we’d like to replicate as well. So I’m not terribly sympathetic to that objection.
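To make the replication analogy concrete, here is a minimal sketch (my own illustration, not a method from any particular article) of what “checking the proof” means in the simplest propositional case: an argument is valid exactly when no assignment of truth values makes every premise true and the conclusion false, and that check can be fully mechanized.

```python
from itertools import product

def is_valid(premises, conclusion, n_atoms):
    """Brute-force validity check: an argument is valid iff no assignment
    of the atomic propositions makes every premise true while the
    conclusion is false."""
    for vals in product([True, False], repeat=n_atoms):
        if all(p(*vals) for p in premises) and not conclusion(*vals):
            return False  # found a counterexample row
    return True

# Modus ponens over atoms (p, q): from (p -> q) and p, infer q.
valid = is_valid(
    premises=[lambda p, q: (not p) or q, lambda p, q: p],
    conclusion=lambda p, q: q,
    n_atoms=2,
)
print(valid)  # True: the conclusion really does follow from the premises
```

Anyone can rerun the check, which is the point: the “replication file” for the argument is the argument itself, stated precisely enough to be verified.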

Ultimately, formalizing our arguments allows us to create very clear replication files for our theories, rendering them transparent, reproducible, extendable, and—this is key—open to a greater degree of scrutiny. When our logic (whether good, bad, or absent) can’t hide behind verbiage, we’re better off as a discipline. We can scrutinize, correct, refine, refute, and improve in a way that we can’t when readers have to work too hard to back out the logic of our argumentation. Again, this doesn’t have to be formal, but formality does make complex logical structures with lots of moving parts easier to handle. (Hell, I need that mathematical crutch when the moving parts become too many, and I’m happy to admit it.)

Think of it this way: we’d be justifiably skeptical of empirical work that didn’t provide replication materials, and I’d argue that we should be equally skeptical of work that obfuscates its logic—intentionally or not—by not providing the reader some kind of transparent recipe for tracing the path from premises to conclusion. Yes, if you provide the details of your logic, you’re perhaps more likely to be firmly refuted, but—like the scholars addressed in Gelman’s post—that’s all the more reason to make sure you get the logic right the first time around.

On the (non) obsolescence of industrial war

Some years ago, I read Gen. Rupert Smith’s semi-memoir/discourse on “war amongst the people,” The Utility of Force. Most notable among some rather sweeping claims was that what he called “industrial war”—or war between states with standing armies, using mechanized forces that engage one another on the battlefield—is obsolete, that war has fundamentally changed, now pitting military forces against non-state groups organized to use violence. Setting aside the sweep of the claim itself, the inferential logic used to justify it—i.e., that just because we’ve not seen something in a while, it won’t come back—really bothers me. In fact, we can see that it commits the logical fallacy of affirming the consequent:

  1. If industrial war were obsolete, we wouldn’t see it occur.
  2. We haven’t seen industrial war in a while.
  3. Therefore, industrial war is obsolete.

Straightforwardly, we can see that the conclusion, (3), doesn’t follow from the premises. Why? Because both premises can be true while (3) is false: industrial war might be absent for any number of reasons other than obsolescence: unipolarity, economic strain, a working great power concert, military technology, etc.
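The invalidity is easy to verify mechanically. A quick, purely illustrative sketch that enumerates the truth assignments and finds the counterexample row:

```python
from itertools import product

# P: "industrial war is obsolete"; Q: "we do not observe industrial war".
# The argument runs: (P -> Q), Q, therefore P. It is valid only if no
# assignment makes both premises true while the conclusion is false.
counterexamples = [
    (P, Q)
    for P, Q in product([True, False], repeat=2)
    if ((not P) or Q) and Q and not P  # premises hold, conclusion fails
]

print(counterexamples)  # [(False, True)]
```

The single counterexample, P false and Q true, is exactly the case at issue: no industrial war observed, yet industrial war not obsolete.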

In fact, periods of peace between the great powers have certainly existed in the past, and just because we’re not seeing war between them now, it hardly follows that states with modern militaries won’t have future disputes that they might settle by force—force involving the instruments of industrial warfare. More after the fold, including my thoughts on why tanks are just as important when they’re not in use as when they are.