Some thoughts on journal editing, one year in

I’ve been the Editor of Conflict Management and Peace Science (@cmpseditors if you just have to see our tweets) for just over a year now. And even though that specific year was 2020, I think it’s still worthwhile to collect some observations about the experience. You won’t find much deep insight here, but I do hope that sharing a few things I’ve learned, as well as a few things I like and dislike about editing, might be useful for authors, reviewers, readers, and scholars who might want to edit a journal themselves.

Before we get going, though, two caveats:

First, I’m not speaking for CMPS’s entire editorial team, which consists of me, three excellent associate editors (Amy Yuen, Megan Shannon, and Nikolay Marinov), and our intrepid editorial assistant (Kevin Galambos).1 Our talents, skills, and insights are all different, so I’d be shocked if the things we’d each write about weren’t different to at least some degree. They’re smart, talented, and committed scholars who don’t need me to speak for them.

Second, you won’t find any broad statements here about the peer-review system, unless you divine them from my reflections on the day-to-day of helping authors get their research reviewed by their peers and published in our journal.2 Everybody’s got ideas, most of them incompatible, about how to “fix” the system, but I’m focusing today on what it’s like to move from one side of the editorial desk to the other, from author/reviewer to editor.

Those caveats in place, what have I learned?

  • Reviewer 2 really isn’t all that bad. (Thanks, @daveamp.) Reviewers typically operate in good faith. It’s easy to complain about reviewers, especially when they recommend rejection of our work or give us an insufficiently enthusiastic R&R. Hell, it’s even cathartic to rage about them a little bit. But having seen more than a few reviews by this point, I can say that the vast majority of the time, CMPS reviewers take the task seriously and genuinely engage with the work. It’s the rare case that a reviewer just totally mails in the review or wildly misconstrues a manuscript. And, as one member of this four-person editorial collective that has to hand down rejections to about 87% of new submissions, I appreciate how seriously reviewers take the effort. I don’t know other journals’ reviewer pools, but after 13 months on the job, I’m certainly fond of ours.
  • If I’ve got a complaint about reviews, though, it’s that they aren’t always detailed enough. I like it when reviewers make a case for their recommendation; it makes the job of assessing what I should require of the authors that much easier. A one-sentence “accept this” (unless it’s after some specific revisions), “reject this, because it sucks,” or “reject this, because it’s not the paper I would write” just isn’t all that useful. It’s also better, if not absolutely essential, for reviewers to offer solutions to the problems they identify; it certainly helps me, as an editor, to distinguish good-faith engagement from the occasional, shall we say, less-than-good-faith engagement with the work.
  • In a related vein, editing a journal isn’t just aggregating reviewer recommendations like votes, nor should it be. Plenty of manuscripts come back to us with, say, all three reviewers recommending R&R, but we still reject them. I know that’s frustrating, but there are good reasons for it. Some reviewers’ rejections are others’ R&Rs, and vice versa; it’s rare that any two reviewers will use the same standard for the overall recommendation.3 Sometimes, the sum of requested changes is just too much for an R&R, requiring a completely different paper…and, yes, what counts as “too much” is up to the editor. Finally, sometimes the reviews taken together reveal a flaw that no reviewer saw individually; it’s of course the editor’s responsibility to make it clear when this is the case. All of this means that, as an editor, I pay far more attention to the content of the reviews, and what they mean for required changes to the manuscript, than to the tally of accept, reject, and R&R recommendations.
  • I also very much appreciate it when reviewers and authors ask for extensions. If we ask for your review, it’s because we want to know what you think, and if we invite you to revise and resubmit, it’s because we want to see those revisions! So ask for extensions if you need them. Patience is good all around, and though we made that our watchword once the COVID-19 pandemic took hold in March, we don’t plan on abandoning it. I mean, sure, I can imagine situations in which I wouldn’t grant someone extra time to get in a review (say, extreme delinquency, in which case they should be free of the burden anyway, or saying something untoward about Pearl Jam or That Mitchell and Webb Look), but they’re very, very few.
  • And speaking of being flexible with reviewers, I’ve gotten some of the best, most insightful reviews from scholars recommended by folks who had to decline review invitations. So, if you need to decline a review request, I understand. But, if you can dash off some recommendations for other potential reviewers, which you can do if you follow the “decline” link in the email, so much the better; we’re asking you because of your expertise, and you’re far more likely than we are to know other experts on the paper’s topic (unless, of course, it’s about Pearl Jam or That Mitchell and Webb Look). A larger reviewer pool helps us build a larger and more diverse readership, and that’s a good thing.
  • On the downside, editing does mean that, more often than not, you’re the bearer of bad news. Like I said, our acceptance rate hovers around 13%. But the quality of the reviews, and the chance to frame those reviews—especially the negative ones—for authors in helpful ways makes up for it a bit. When I write rejections, I try to focus on what steps the authors can take to improve their chances at other journals, and the good-faith efforts of CMPS reviewers make that possible.4
  • Finally, I’m glad (though “relieved” might be an even better word) that I proposed an editorial team model for CMPS.5 Submissions to CMPS rose considerably in 2020 compared with the year before we took over, and sharing the burden—all the judgment calls, all the moments when you need to consult someone else on a decision, all the excitement of seeing good work get published in the journal you work for—makes this whole enterprise much easier and, frankly, much more rewarding than it would be otherwise.

All told, editing really is rewarding. An editor’s choices help set the terms of debate in the field—and we take that responsibility seriously at CMPS—but editors also get to learn more about what’s happening in the field and see that knowledge grow in something like real time. And, sure, it can be tedious, and it definitely entails a big time commitment, but a good reviewer pool—which we’re fortunate to have at CMPS—makes it worthwhile.


  1. That guy needs a website, doesn’t he? (Hint, Kevin, hint.) He’s putting together some pretty remarkable data on multilateral military exercises with Vito D’Orazio at UT Dallas. ↩︎
  2. Where by “our” I mean the Peace Science Society (International), not just those of us tasked with stewarding it. Really. ↩︎
  3. And trying to impose a standard would be a fool’s errand. ↩︎
  4. Did I say how much I like our reviewers? Really. They keep this operation going. ↩︎
  5. A model I learned from Dan Nexon, when I got to serve as an Associate Editor at ISQ in 2017–2018. ↩︎
