International security, week one

This is the first in what I plan to be a semester-long series of posts about each class session I teach (International Security for grad students, War and Peace in East Asia for the undergrads)—either summing up, adding something extra, or highlighting a useful point that I don’t want to forget.

I’ve never felt satisfied with my choices of first-week readings in graduate seminars, but today might be as close as I’ve come. I assigned Europe’s Last Summer, giving the students an opportunity to develop some free case knowledge about the object we’ll be studying all semester (war) and using the historiography of the war to make a few big points: challenges to inference, multiple and conjunctural causality, the differences between structural/underlying and proximate causes, and how the sheer costs of war—especially in a war that began as a deliberate act of policy—pose such an obstacle to explaining it. (Yes, this was intentional and probably ham-handed foreshadowing for week 2, but I just ran with it.)

Another important point, though, is a tradeoff we hit upon when talking about underlying and proximate causes of war. First, we did a thought experiment: removing, alternately, Germany’s incentives for preventive war and the Serbian crisis, then thinking about how much longer or shorter the book would’ve been in the absence of either (taking those two factors, for the sake of argument, as the sole underlying and proximate causes). From there, the discussion turned to another tradeoff between underlying and proximate causes, particularly when they exist at different levels of aggregation: measurement error. Figuring out just what happened inside Austria and Germany in 1914 has taken a long time, and we may not be all the way there yet; contrast that with measures of military power, great power status, etc. At the lower level of aggregation, we have finer-grained information, but those lower levels can be more prone to measurement error (via observability, in this case) than coarser concepts like military power, measured at the level of the state. (Incidentally, I also think I remember Will Moore talking about something similar with respect to temporal aggregation during a presentation at a past Peace Science meeting…I think.)

Either way, it’s a tradeoff that doesn’t necessarily make one feel better about getting a handle on the interplay of underlying and proximate causes, but it’s surely one that’s worth bearing in mind—especially as we get into the messiness of weighing them against one another in the mind-numbingly overdetermined instance of any one war…

My three favorite books on the First World War

Reading a lot about World War I almost seems like a rite of passage for us international conflict-types, and while my fascination with the war came late, it did dominate my reading for a good chunk of the last year. Perhaps not surprisingly, I’ve been asked what my favorite book on the war actually is, and after giving a typically academic response (you know the type: endless qualifications and definitional hedging), I gave three.

Now, I’m not going to pretend to have some vast knowledge of the scholarship on the war, which is why I’m saying “favorite” instead of “best,” but my answer is all about what you want out of the book: a discussion of its origins, an in-the-weeds look at military strategy, or a sweeping look at the whole of the conflict. For each of those, here’s what I came up with:

I’m actually having my graduate international security class read Europe’s Last Summer as a kind of warm-up, to get them thinking about the politics of war and decisions over it, and the reason is, frankly, that it’s a well-written synthesis of fairly recent scholarship that draws some strong conclusions worthy of discussion. That said, it’s mostly about the July Crisis, ending before the action really gets going.

Three Armies on the Somme is similarly narrow in focus, taking a close and, I’ve got to admit, utterly spellbinding look at the creation of military strategy on the Western Front, focusing on the titanic Battle of the Somme that began in the summer of 1916. It’s got a fascinating take on attrition and trench warfare: given the other side’s strategy, responding in kind was a best response under the technological constraints of the time—a deeply tragic Nash equilibrium to a vexing, and profoundly high-stakes, problem.

Finally, Stevenson’s Cataclysm is necessarily broader in scope, covering the politics and sociology of the home fronts as well as the fighting and diplomacy itself, but it’s no less interesting, even for a reader like me focused on bargaining and diplomacy, both before and during the war, and on the problems of coalitional warfare and consensus-building. It’s a big book; don’t be fooled by the page count, because the print is tiny—it makes whatever you read next feel like a large-print edition.

Still, if you’re looking around for some accessible book-length treatments of the war, I think it’s hard to go wrong with any of these.

This semester’s graduate conflict syllabus

After spending a lot more time on it than I anticipated (go figure), I finally put the finishing touches on my graduate conflict syllabus this morning, which you can see here. There are some subtle changes from the last time I taught the course (which I discussed here), but the main difference—I think—is a slight change in my own approach to teaching graduate courses.

I hit on this unintentionally the last time around, when I said in class something to the effect of “This isn’t a class about war and peace. It’s about how to study war and peace.” I hadn’t used precisely those terms before, but I think I summed up my approach pretty well in that statement; even a substantive seminar has to be, on some level, a kind of “methods” course. On the surface, it’s about evaluating arguments, theories, and research designs, but it’s also about students learning what good (and bad) research “looks like” and how to apply the same (hopefully high) standards they use when judging others’ work to their own. In other words, it’s a course on “research and how to do it.”

Ultimately, that means I worry less and less about achieving the proper breadth of coverage—for me, depth is the key. I assign topics based on (a) the connections between important pieces on the topic, (b) bodies of work that help me make points about theory development, explanation, and the logic of inference, (c) how well I know the topic, and, lastly, (d) the visibility or trendiness of the topic. The first two, though, are paramount, and I’m increasingly okay with that. In the end, students can learn a substantive literature that I leave out (from rivalry to the steps to war to international institutions to nuclear proliferation) on their own, but the real stuff of their training is in teaching them how to evaluate work and produce their own…and that means a rather different set of selection criteria than I would choose if my only goal were to survey the state of the art.

In service of this, I’m doing something a bit different (at least for me) during week 13 on coalitions: I’m turning the class into a kind of mini book workshop (I might even spring for lunch, but we’ll have to see if they earn it first). The class will read the core chapters of the book manuscript I’m working on (two raw chapters and two component articles), then ask questions and give feedback. Quite apart from my own perfectly reasonable desire to get said feedback, I’m hoping this proves a good way for them to (a) see how the sausage is made, so to speak, and (b) learn how to critique someone’s work with that person sitting in the same room. I’m pretty excited about it.

I’m also going to try to use this class as a jumping off point to get back into blogging again. We’ll see how it goes.

On things said for the sake of argument, or why “assumption” isn’t a four-letter word

I’m not going to rehash Phil Arena’s (excellent) post on the role—and ubiquity—of assumptions, but I do want to take the opportunity to talk about how I view the assumptions I make in my own work. Specifically, I want to make a case for why “assumptions” aren’t at all a necessary evil—rather, they’re a necessary and powerful good for doing the stuff of social science. I’ll make two points. First, they help us isolate causal mechanisms when we build theories, enabling us to develop expectations over when and why some set of factors can have an independent effect on an outcome of interest in the absence of some other factors—which helps when we move to empirical models. Second, and I’m repeating myself here (I think), they’re really the only things that we, as social scientists trying to explain the things we observe, bring to the table when it comes to building theories. So, yes, all assumptions are “false” in the sense that they strip away things we would think important if we were to create a complete rendition of something, but they’re also essential—and unavoidable—when it comes to the development of theories (whether formal or informal). Those things we assume away should always come back in our empirical models, to be sure, but I’ll also argue that we have a better sense of what those controls should be when we’re mindful of the assumptions we put into our theories.

First, on the issue of isolation, let’s say that I want to develop a theory of how some factor—say, leadership change—affects temporal patterns of international conflict. If I’m interested in whether there can be a valid link between leadership tenure and war (that is, a valid argument from premises to conclusion), what do I need to do? Let’s say, for example, that my hunch is that new leaders know more about their own willingness to use force than their opponents do, such that they take office with private information over their resolve. How should I model this? Well, I’d immediately want to assume two things: first, that while consecutive leaders of the same state can differ in their resolve, there is no other source of variation in preferences that occurs with leader change; and second, that without leadership change, war would not occur in the theory. Do I think either of these is true? Well, of course not. First, partisan change, state-society dynamics, and time until the next election (in democracies) can also produce changes in state preferences across leadership transitions. Second, wars can of course happen for other reasons (if they didn’t, I’d be the first person with a valid argument about the causes of war, and while I’m a little arrogant, I ain’t that bad). But if I want to see what the independent effect of leader change is, I can (and should, at this stage of model-building) strip these other things away—so that if war does happen in my model, I’ll know the mechanism driving it. (Put more pithily, if outcomes are overdetermined in your theory, you really can’t say much about the things you’re presumably interested in. And whether they are overdetermined in your theory is totally up to you.)
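
To make that concrete, here’s a minimal sketch of the kind of stripped-down model I have in mind. It isn’t from any published piece, and the setup, function name, and numbers are all illustrative: a one-shot crisis bargaining game in which a new leader’s cost of war is private information, and the opponent chooses between a generous offer that every type accepts and a stingier offer that risks war with a resolute type.

```python
# A minimal, illustrative screening model (hypothetical parameters throughout):
# an opponent faces a new leader whose cost of fighting is private information.
# It can make a generous "safe" offer that both leader types accept, or a
# stingier "risky" offer that only the irresolute (high-cost) type accepts.

def opponent_best_offer(p, c_low, c_high, c_opp, prob_resolute):
    """Return the opponent's optimal offer and the implied probability of war.

    p             -- probability the new leader's state wins a war
    c_low, c_high -- war costs of the resolute / irresolute leader types
    c_opp         -- the opponent's own cost of war
    prob_resolute -- prior probability the new leader is the low-cost type
    """
    # A leader with war cost c accepts any offer x >= p - c (their war value).
    safe_offer = p - c_low      # accepted by both types: peace for sure
    risky_offer = p - c_high    # accepted only by the high-cost type

    # The opponent keeps 1 - x under peace and gets 1 - p - c_opp from war.
    eu_safe = 1 - safe_offer
    eu_risky = (prob_resolute * (1 - p - c_opp)             # resolute type fights
                + (1 - prob_resolute) * (1 - risky_offer))  # irresolute accepts

    if eu_risky > eu_safe:
        return risky_offer, prob_resolute  # war occurs iff the leader is resolute
    return safe_offer, 0.0                 # safe offer: war never occurs


if __name__ == "__main__":
    # Hypothetical numbers: evenly matched states, modest costs of fighting.
    offer, pr_war = opponent_best_offer(p=0.5, c_low=0.05, c_high=0.25,
                                        c_opp=0.1, prob_resolute=0.3)
    print(f"equilibrium offer: {offer:.2f}, probability of war: {pr_war:.2f}")
```

The particular numbers don’t matter; what matters is that, because nothing else in this little world can cause war, any war that does occur is attributable to the private-information mechanism and nothing else.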

My next step, of course, is to analyze the model. This amounts to seeing what valid conclusions follow from my premises (assumptions)—no more, no less. Let’s say that I analyze the model and find that, indeed, when new leaders’ personal resolve is private information, we see turnover-driven cycles of reputation-building and conflict. But what do I really have here, if I’ve assumed away all these other sources of potential changes in state preferences? Well, I’ve got a somewhat parsimonious theory of leader change, tenure, and conflict behavior driven by a particular mechanism—reputation dynamics. I don’t have a theory of every possible cause of war, but what I do have is a sense of exactly what effects my independent variable of interest (time in office) should have on some outcome variables of interest. I have this, notably, because nothing apart from the proposed mechanism could have caused war in my theoretical model. My model isn’t the world, nor is it the historical record, and when it comes time to take my predictions to the data—to test them against the historical record—I’ll know some important things to control for on the right-hand side of my regression: all the things I assumed away. In particular, those things I believe will affect both temporal changes in state preferences and war should go into the empirical model as controls. That’s pretty useful, as far as I’m concerned. So by being intimately aware of what my theory assumes and what it doesn’t, I have strong expectations about the independent effects of my independent variables, controlling for other relevant factors, and I have an equally strong sense of what I need to control for. And by isolating the factors around my particular proposed causal mechanism/independent variable, I can be sure that my proposed mechanism can do independent work on its own, and I can pin down the precise conditions under which I expect it to play out. With less precise (or, worse, hidden or implicit) assumptions—that is, with multiple things that could cause war under the same conditions—that would be much more (and unnecessarily) difficult.
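
Here’s a sketch of what that empirical step might look like. The data are simulated and the variable names (new_leader, partisan_change, election_near, power_shift) are hypothetical; the point is only that the factors the theory assumed away come back on the right-hand side as controls.

```python
# Illustrative only: simulated data and made-up variable names. The theory
# isolates leadership change; the things it assumed away (partisan change,
# election timing, shifting power) reappear as controls in the empirical model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 5000
df = pd.DataFrame({
    "new_leader":      rng.integers(0, 2, n),   # leadership-change indicator
    "partisan_change": rng.integers(0, 2, n),   # assumed away in the theory
    "election_near":   rng.integers(0, 2, n),   # assumed away in the theory
    "power_shift":     rng.normal(0, 1, n),     # assumed away in the theory
})
# A fake data-generating process, purely for the sake of the example.
logit_p = -3 + 0.8 * df.new_leader + 0.5 * df.partisan_change + 0.3 * df.power_shift
df["war"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# The theory tells us which coefficient we care about (new_leader) and which
# potential confounders belong on the right-hand side.
model = smf.logit("war ~ new_leader + partisan_change + election_near + power_shift",
                  data=df).fit(disp=False)
print(model.params)
```

None of this is a serious test, obviously; it’s just the logic of the paragraph above written down in code.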

Second—and I saved this one because it’s shorter—assumptions really are all we bring to the table when we build theories and try to explain things. If a model is just an argument, then assumptions are just premises—i.e., things said for the sake of argument. Now, it’s true that if our assumptions can never hold (in my running example, if leaders are all the same in their resolve and it’s always well and publicly known), then my proposed mechanism won’t explain observed phenomena. Sure. That’s trivially true. But let’s think about the elements of our theory/argument; what’s it made up of? Premises, some logical connections drawn between them, and conclusions; in other words, assumptions, some logical connections drawn between them, and implications/hypotheses. The implications depend on the premises and the logic, so I’m clearly not adding hypotheses directly, and logic is, well, pretty much given; so my only contribution—the source of our creativity and power and, in a very real sense, our ability to explain—is the set of premises I use as inputs into my theoretical construct.

That means I value my assumptions pretty highly—again, since I’m not trying to re-write the rules of logic, they’re what I’m really contributing here, and that’s as it should be. My goal in the not-so-hypothetical model above was to see how a particular factor influenced a particular outcome, independently of other factors, if at all; I wanted to know what would have to be true for the proposed relationship to exist. If I didn’t make a ton of false assumptions along the way, I’d get nowhere. But here’s the thing: everything I assumed away that could be related to both IV and DV must come back when I build an empirical model that controls for potential confounds or sources of spuriousness—it’s just not necessary (or prudent) to include it in the theoretical model I designed for my particular research question.

More on game theory and replicable science

Andrew Gelman’s post over at The Monkey Cage, in which he treats an argument about how the threat of meta-analysis should induce more disciplined empirical work in forward-thinking scholars, got me thinking more about the importance of replicability…but in the context of theory rather than empirical work.

Specifically, on that first day of an intro game theory class for grad students, you find yourself explaining to what might be a skeptical crowd the value of both modeling strategic interaction (a rather easy sell) and doing it formally (a tougher sell). The two are, of course, very different ideas—and they’re all too often conflated—but on the latter point, there’s a good argument to be made here that has nothing to do with game theory or assumptions of rationality but everything to do with replicability.

In short, formalizing one’s argument—apart from making it easier to get the logic right—is also a good way (and, in fairness, not the only way) to make sure that said argument is replicable. Sentences and words can be sloppy; equations and operators are precise. And by virtue of that precision, tracing an author’s logic becomes easy when s/he provides proofs of how the conclusion was reached. These proofs can be mathematical (that’s the way I happen to do it), but if we’re just trying to prove the validity of an argument, then it can be done syllogistically or in whatever mode one likes. When an argument is formalized, though, we can flip to the back of the article (or, sometimes, read just beyond the proposition) and trace exactly the logical path to verify that, yes, the hypotheses really do follow from the premises. That’s powerful stuff. Yes, one might contend that there are some nontrivial startup costs to being able to reproduce and verify the logic inside someone else’s formal proofs, but that’s also the case for the advanced statistical analyses we’d like to replicate. So I’m not terribly sympathetic to that objection.
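
To illustrate what I mean by a traceable proposition-and-proof pairing, here’s a toy example of my own (it isn’t drawn from any particular article); the premises are just the standard costly-lottery ones.

```latex
% A toy proposition and proof, purely illustrative, showing how a conclusion
% can be traced directly back to its premises. Notation: state A prevails with
% probability p over a good worth 1, and war costs c_A, c_B are positive.

\textbf{Premises.} War gives $A$ an expected payoff of $p - c_A$ and $B$ an
expected payoff of $1 - p - c_B$, with $c_A, c_B > 0$; a peaceful division
$x \in [0,1]$ gives $A$ the payoff $x$ and $B$ the payoff $1 - x$.

\textbf{Proposition.} There exists a division $x \in [0,1]$ that both sides
strictly prefer to war.

\textbf{Proof.} $A$ strictly prefers $x$ to war iff $x > p - c_A$; $B$ strictly
prefers $x$ to war iff $1 - x > 1 - p - c_B$, i.e., iff $x < p + c_B$. Because
$c_A, c_B > 0$, the open interval $(p - c_A,\, p + c_B)$ is nonempty and
contains $p \in [0,1]$; any such $x$ satisfies both inequalities. $\blacksquare$
```

Agree or disagree with the premises, a reader can check in a minute or two that the conclusion follows from them, and that’s exactly the kind of scrutiny I have in mind.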

Ultimately, formalizing our arguments allows us to create very clear replication files for our theories, rendering them transparent, reproducible, extendable, and—this is key—open to a greater degree of scrutiny. When our logic (whether good, bad, or absent) can’t hide behind verbiage, we’re better off as a discipline. We can scrutinize, correct, refine, refute, and improve in a way that we can’t when readers have to work too hard to back out the logic of our argumentation. Again, this doesn’t have to be formal, but formality does make complex logical structures with lots of moving parts easier to handle. (Hell, I need that mathematical crutch when the moving parts become too many, and I’m happy to admit it.)

Think of it this way: we’d be justifiably skeptical of empirical work that didn’t provide replication materials, and I’d argue that we should be equally skeptical of work that obfuscates its logic—intentionally or not—by not providing the reader some kind of transparent recipe for tracing its path from premises to conclusion. Yes, if you provide the details of your logic, you’re perhaps more likely to be firmly refuted, but—like the scholars addressed in Gelman’s post—that’s all the more reason to make sure you get the logic right the first time around.

IS week 2 follow-up: how we model war

With some time on my hands before watching Kentucky brutalize LSU down in Baton Rouge, I want to return to another topic we covered in international security this week: specifically, the choices we make when we model war. This will be a long-running discussion, I think, but a question came up about how relevant the group of models we read—which treat war as a costly lottery—is to anything other than interstate war.

In line with Thursday’s post, I’d consider that a question worth thinking about. I asked the class what every model we read assumed about a war, and we came down on three things:

  1. war is costly
  2. the outcome is probabilistic
  3. bargaining stops once war begins

When you’re talking about costly lottery models, that’s pretty much it, no? We’ll spend more time during the semester on what it means to relax assumptions 1 (in the leader-centric weeks) and 3 (in the war-as-costly-process week), but we can still say a lot about what the costly lottery assumptions get us in terms of, say, civil/intrastate or extra-state war.

The answer, I think, is quite a lot, though as always it depends on what we’re asking the model to tell us. To the extent that intrastate wars have these features—especially their costliness and the probabilistic nature of their outcomes—we’d expect to see similar dynamics in their causes. That is, discrepancies between the distribution of benefits and the distribution of power, shifting power, private information with incentives to lie—all these things—can lead players to fight a war that has the features of a costly lottery.
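
A quick sketch of that point (the labels and numbers here are mine, purely for illustration): nothing in the costly-lottery setup says “interstate,” so the same three assumptions deliver the same basic conclusion whether the players are two states or a government and a rebel group.

```python
# Illustrative only: the three assumptions above are (1) positive costs,
# (2) a probabilistic outcome, and (3) no bargaining once fighting starts.
# Relabel the players however you like; the names and numbers are hypothetical.

def war_values(p, c_a, c_b):
    """Expected war payoffs for sides A and B over a good worth 1."""
    return p - c_a, (1 - p) - c_b   # assumptions 1 and 2

def bargaining_range(p, c_a, c_b):
    """Divisions of the good that both sides strictly prefer to fighting."""
    return p - c_a, p + c_b         # nonempty whenever c_a + c_b > 0

if __name__ == "__main__":
    # Say, a government facing an insurgency: strong incumbent, costly conflict.
    p, c_gov, c_reb = 0.8, 0.15, 0.10
    print("war payoffs:", war_values(p, c_gov, c_reb))
    print("settlements both prefer to war:", bargaining_range(p, c_gov, c_reb))
```

What the sketch leaves out, of course, is everything that makes intrastate conflict distinctive, and that’s exactly where the next point comes in.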

Now, there are many differences between the belligerents that fight interstate and intrastate wars, sure—agency problems, enforcement problems, etc. may differ across them—and to the extent that we’re interested in those features of intrastate wars, we’d want to model them explicitly. But unless they’d give us profoundly different answers about the effects of, say, shifting power and incentives to misrepresent—and in some cases they very well might—we’ve no need to complicate our models with them. Insurgencies, for example, may have the flavor of players bargaining before a decisive third audience—the public—and wars may differ in their risk of pure stalemate, etc.; if we want to know about the effects of those features, we build them in…

…but until we get to that point, there’s little wrong with seeing just how much we can translate from one context to another based on the extent to which one simple set of assumptions characterizes both.

International security, week 1

I taught the first session of International Security yesterday (see this post for the syllabus and the rationale behind it), and we spent a lot of time, as promised, on the role and promise of assumptions in theory-building and testing. I can go on at (too great a) length about these things, as I’m sure my students discovered, but it allowed for some good, in-depth discussions of a few critical points that I think are worth repeating here. Note that this isn’t an exhaustive outline of what we covered, just some points I want to revisit.


This semester’s graduate syllabus: international security

As excited as I am about teaching International Security this semester, it’s never easy putting together a graduate syllabus. My own fetish for brevity comes into tension with my enthusiasm for the topic and the ever-present temptation to cover everything. In case you’re interested, here’s my latest attempt at striking that balance.

Inevitably, syllabi are statements about what we view as important, whether or not we intend for them to send such a signal. We may assign some things to make it more difficult to weasel out of reading them, but I don’t get the sense that students put a lot of effort into figuring out which is which. So in putting this course together, I tried to think hard about what’s “important” in the study of war and peace, not in terms of big outstanding questions or trendy topics (though they’re covered) or what I consider “good” or exemplary work (that’s also represented in spots), but in terms of what someone who wants to start a research agenda in this subfield really needs to know. And I’ve come down on something that will, perhaps, be totally unsurprising: theory, both its development and its use.

First, the development of theories. We’re getting better as a subfield about trying hard to produce logically valid arguments, the kind that imply their own evidence (and can thus be falsified), but we’ve got a long way to go (which is good news for anyone getting started in IR). A senior colleague of mine has said (though I’m paraphrasing) that IR is characterized by a lot of sloppy answers to a lot of important questions, and I’ve decided that I want to push my grad students in the direction of developing good answers to those big, pressing questions about why large groups of people get together to kill each other, and destroy the things they value, in large numbers. I don’t want to set them on a particular topic, nor do I want them to adopt a specific tool, but I want them to be able to evaluate and develop logically valid arguments about, i.e. useful models of, the political world. As my students will see throughout the semester, it’s hard coming up with valid arguments that can then be used to add empirical content to the subfield. It’s hard, but it’s eminently worth it.

Second, the use of theories. Too often, some of our most useful and insightful theories, especially formal ones, elude empirical testing, and while that’s understandable—because, yes, it’s difficult—I want my students to get to the point of engaging the best arguments we have at the level of research design (choosing an appropriate design, using the right sample, etc.) in light of what the underlying assumptions of the model tell them to do. When we engage theories only on the level of their hypotheses, it’s too easy to miss what the structure of the argument itself is telling us about the proper domain in which the argument applies, the error structure we should expect, and the functional forms of our variables. In short, using theories well (and responsibly) requires being able to identify and understand the critical nuts and bolts of the logical structure that produces their implications, and that’s what this course is aimed at: understanding what the arguments out there really say, what they imply, and what that means for testing them.

So what’s “important” for an IR course? It’s not just moving from one “image” to another (or reversing them), changing units of analysis, or blending the study of interstate and civil war—it’s learning how to do those things effectively and responsibly. And as my poor students are about to find out, that ain’t easy.

But it sure is rewarding. I can’t wait to get into that classroom.